With permission from the original author, we are reposting details on a failed Kickstarter project called “Blindeagle”. It was cancelled by its project creator on April 12th after reaching less than 10% of its goal.
Blindeagle is asking for money for a product, a product that promises private and secure communication with anyone over the internet, and it wants 90,000 EUR to do it. For an additional 920,000 EUR, they’ll even remake what RedPhone already does for free. With a price tag like that, it had better not just be useful but live up to every one of its promises. What are its promises, anyway?
The advertised unit is a keychain that plugs into the headphone jack of a mobile device and is meant to interact with a closed-source app to provide impenetrable crypto. This crypto is said to use a one-time pad (OTP) system. The design, photos, prototype, and social networking vibe all feel too similar to the vaporware you’d expect a San Francisco-based startup of 5 college students to poorly slap together and unload on unsuspecting venture capital firms for a million in seed money, who are later forced to abandon the broken concept and cut their losses. But it’s not like that– these 5 college students are from Belgium!
The broken English, consisting primarily of hype-speak and buzzwords, makes it difficult to extract hard data, so building a critique of the supposedly infallible security model wasn’t a cakewalk. By focusing only on the major claims and not nitpicking general hyperbole, we show this product for the fraud it really is — a broken security model rife with contradictions, in the best case simply dangerous for its users, and in the worst an intentional scam surrounded by lies.
Why be so hard on a Kickstarter that will likely never meet its goals in the first place? Because this campaign masquerades as an infallible solution to a current global crisis in data privacy, capitalizing on people’s fears and ignorance while overpromising and dangerously underthinking a science that often means the difference between life and death. Cryptography is the backbone of all security on the internet, and doing it right has always been undeniably hard. If a team of expert cryptographers were working on this device, we’d be prepared to give them some leeway to explain themselves, open source it, and work on it over the years, as Telegram was given a chance to do at first… except there is no team of cryptographers, not even a “math expert”. So who is the savior that will guide us through this privacy crisis?
No background in crypto
Meet David (no last name provided). With no crypto background and “now over 5 years of experience in Java, web and iOS/OS X development”, “he .. takes care of the technical side of blindeagle, from the website to the apps and including programming the servers and the external units”. Let’s not be too hard on David; he’s likely been suckered into this by a friend and is either too naive to realize the ramifications or is being used as a fall guy by a scammer. Assuming he hasn’t singlehandedly broken the underlying security of everything through human error, miscalculation, an improper security model, or a complete lack of proper crypto background or experience, we can move on to the message and leave the messenger be for now.
As quoted from their product homepage, using their device “guarantee[s] you total confidentiality and absolute security”. That’s quite a claim to make, especially since it’s impossible. Every legitimate cryptographic tool or product in the world is designed with an understanding that as time passes, the likelihood of its security being compromised only increases; vigilance, not a blanket promise of trust, is the backbone of true security. Security is not a fixed state; it is an evolving process. Does Blindeagle understand that process? By asking us to trust their closed-source apps, written entirely by David, on closed-source devices manufactured by an unknown third-party supplier, the picture looks pretty grim. Despite several free, secure applications that do encryption “right” (XMPP+OTR, BitMessage, Tox), we’re to believe that we need a separate closed-source device. What does that device even do?
The device purports to feed one-time Vernam ciphers from a pool stored in its memory directly to the mobile app. Properly implemented Vernam ciphers (and OTPs in general) can be extremely secure, but the difference between broken and sound cryptography often lies in the implementation. While claiming the device is infallible compared to email or other chat apps in terms of encryption, Blindeagle fails to describe in any detail whatsoever how this particular implementation can’t be intercepted by a rogue app on a rooted phone, sniffed over the air via the device itself, or defeated through any number of other attack vectors. That would take actual knowledge!
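For reference, the Vernam/OTP primitive itself is just an XOR, which is exactly why the implementation details carry all the weight. Here is a minimal Python sketch (not Blindeagle’s code, which is closed-source) showing both the sound construction and one classic implementation failure, pad reuse:

```python
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR each message byte with one pad byte; encryption and
    decryption are the same operation."""
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(m ^ k for m, k in zip(data, pad))

# Sound usage: a fresh, truly random pad per message.
pad = os.urandom(32)
ct = otp_xor(b"attack at dawn", pad)
assert otp_xor(ct, pad) == b"attack at dawn"

# Broken usage: reusing the same pad for two messages leaks the XOR
# of the two plaintexts to anyone watching -- no key required.
ct1 = otp_xor(b"attack at dawn", pad)
ct2 = otp_xor(b"attack at dusk", pad)
leak = bytes(a ^ b for a, b in zip(ct1, ct2))
assert leak == bytes(a ^ b for a, b in zip(b"attack at dawn", b"attack at dusk"))
```

The mathematics is unbreakable only while every assumption (true randomness, pad secrecy, strict single use) actually holds; violate any one of them and the scheme collapses.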
No, instead we are led to believe that the infallible OTP key material preloaded onto the device at manufacturing has not been copied or tampered with in any way, and was loaded securely enough that it could not be extracted through a simple buffer overflow or injection attack. Key material generated when you plug the device in might lead to secure keys, but trusting their third-party manufacturer presses the boundaries of what can be considered “secure”. Keys generated by the company could be stored and used to decrypt all of your messages at any point in time. Even if the company weren’t malicious, what’s to stop a malicious nation-state actor from forcing them to hand over every single key they produce? While some of this is protected by the plausible deniability an OTP system provides, they only provide 2GB of material: that is, 2,147,483,648 bytes of key material. Computers are incredibly fast, and XOR is one of the cheapest operations they perform. An attacker who obtained a copy of the pad could process all 2 gigabytes of key material and break a message in a matter of minutes. Compare this to seed files used for real OTP deployments, which are often terabytes in size, precisely so that an attacker cannot simply load the entire seed into memory.
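To illustrate the scale, here is a hypothetical sketch of what an attacker holding a copy of the pad (but not knowing which portion was used) could do: XOR the ciphertext against every offset and keep the plausible results. The pad here is a toy 64 KiB; the scan is a single linear pass, so a real 2 GiB pad only scales the runtime linearly:

```python
import os

def scan_pad_for_offset(ciphertext: bytes, pad: bytes) -> list[int]:
    """Try every offset in a copied pad; return offsets where the XOR
    yields printable ASCII (a crude plausibility check)."""
    hits = []
    for off in range(len(pad) - len(ciphertext) + 1):
        candidate = bytes(c ^ k for c, k in zip(ciphertext, pad[off:]))
        if all(32 <= b < 127 for b in candidate):
            hits.append(off)
    return hits

# Toy demo: hide a message somewhere in a 64 KiB "pad", then recover it.
pad = os.urandom(64 * 1024)
secret_offset = 12345
msg = b"meet at the safehouse"
ct = bytes(m ^ k for m, k in zip(msg, pad[secret_offset:]))
assert secret_offset in scan_pad_for_offset(ct, pad)
```

An OTP is only unbreakable while the pad stays secret; once the 2GB file leaks, finding the right slice of it is a trivially parallel linear scan.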
Powered by buzzwords
According to the copy, “the key existing in the external unit is generated using quantum phenomena”. This is buzzspeak for “a mirror sensor looks at light and makes a key based on the photo it takes”. While interesting in theory, theories that cryptographic security revolves around should be tested and proven before going into production. The copy goes on to guarantee that the keys in the device can only be used once: that it behaves as single-use memory. Except, of course, if somebody copies the key data. Let’s go back to how the device plugs into the headphone port of your phone. Putting aside the logistics of getting a device like this to work on a computer without a combined headphone and microphone socket, what’s to stop a malicious app from pretending to be the official app, reading in all of the key data, and simply saving it to local storage? Blindeagle provides no technical explanation of how this can be prevented aside from a brief introduction to “potting”.
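There is nothing exotic about such an attack: any code that can read the key stream can also persist it. A hypothetical sketch, with the headphone-jack transport simulated by an in-memory buffer since the real protocol is undocumented:

```python
import io
import tempfile

def malicious_client(device_stream, copy_path: str, n: int) -> bytes:
    """Read n bytes of 'single-use' key material exactly as the real
    app would, but quietly tee a copy to local storage first.
    (Hypothetical names; device_stream stands in for the headphone
    jack transport.)"""
    key = device_stream.read(n)
    with open(copy_path, "wb") as f:
        f.write(key)  # the "single-use" pad is now reusable forever
    return key        # hand the key on to the normal encryption path

# Demo with an in-memory stand-in for the device.
fake_device = io.BytesIO(b"\x13\x37" * 16)
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    copy_path = tmp.name
key = malicious_client(fake_device, copy_path, 32)
with open(copy_path, "rb") as f:
    assert f.read() == key  # a perfect copy survives on disk
```

“Potting” protects the silicon from physical tampering; it does nothing about software sitting on the other end of the wire.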
The straw that broke the crypto’s back
Among the claims of perfect crypto is the use of “end to end encryption”, something by definition readable only by the two communicating parties, and unbreakable unless the underlying crypto is broken or a key materializes. End-to-end crypto– if done right– is a good thing, and Blindeagle would be silly not to include it as a main feature. But is Blindeagle truly end-to-end encrypted?
After data is encrypted using your Blindeagle device, it is sent to their closed-source proprietary servers in the EU. From that point, the data is “decrypt[ed] with the sender key followed by the instantaneous encryption with the receiver key, just before the destruction of the encryption keys”. If you are thinking to yourself, “isn’t that the definition of a middle-man?”, you’re likely better suited to lead their team than poor David.
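The quoted flow is easy to sketch. In this toy relay (plain XOR stands in for whatever cipher they actually use, and all names are ours), the server necessarily holds the full plaintext between the two operations, which is exactly what end-to-end encryption is supposed to rule out:

```python
SEEN = []  # everything the "trusted" relay gets to read

def xor(data: bytes, key: bytes) -> bytes:
    """OTP-style XOR; a stand-in for Blindeagle's unspecified cipher."""
    return bytes(a ^ b for a, b in zip(data, key))

def relay(ciphertext: bytes, sender_key: bytes, receiver_key: bytes) -> bytes:
    """The advertised server-side flow: decrypt with the sender key,
    then immediately re-encrypt with the receiver key."""
    plaintext = xor(ciphertext, sender_key)
    SEEN.append(plaintext)            # the middle-man sees everything
    return xor(plaintext, receiver_key)

sender_key = bytes(range(16))
receiver_key = bytes(range(100, 116))
msg = b"hello, alice"
ct_out = relay(xor(msg, sender_key), sender_key, receiver_key)
assert xor(ct_out, receiver_key) == msg  # the receiver can read it...
assert SEEN == [msg]                     # ...and so could the server
```

Promising to destroy the keys “instantaneously” afterwards doesn’t change the architecture: whoever controls the relay controls the plaintext.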
Blindeagle clearly advertises that “no data is stored on our servers”, pointing to the “No data-retention” laws in Belgium. These are empty, unprovable claims: we have learned from experience and from leaks that neither nation-state actors nor hackers ask permission or follow laws when hijacking, injecting, seizing or bugging servers for their own malicious purposes. By purposely introducing a middle-man into their transport protocol, Blindeagle cannot claim with any certainty that no data will be stored.
Blindeagle’s security model does not meet the requirements of even the most basic security theory, its advertised implementation is dangerous, and its claims are contradictory, misleading and at times downright lies. At this point, it would be preferable if it turns out to have been a non-delivering scam.
Written mostly by sn0wmonster from the ##crypto IRC on freenode, with some technical input from SunDwarf.