Exclusive: Privacy campaigners criticise WhatsApp vulnerability as a ‘huge threat to freedom of speech’ and warn it could be exploited by government agencies
A security backdoor that could be used to allow Facebook and others to intercept and read encrypted messages has been found in the company's WhatsApp messaging service.
Facebook claims that no one can intercept WhatsApp messages, not even the company and its staff, ensuring privacy for its billion-plus users. But new research shows that the company could in fact read messages due to the way WhatsApp has implemented its end-to-end encryption protocol.
2016 has been full of developments in messaging platforms that employ encryption to protect users. Several major messaging services raised their level of security this year, bringing end-to-end encryption to over a billion people. Unfortunately, we've also seen major platforms make poor decisions for users, potentially undermining the strong cryptography built into their apps.
WhatsApp makes big improvements, but concerning privacy changes

In late March, the Facebook-owned messaging service WhatsApp introduced end-to-end encryption for its over 1 billion monthly active users. The enormous significance of rolling out strong encryption to such a large user base was compounded by the fact that WhatsApp's new feature was built on the Signal Protocol, a well-regarded and independently reviewed encryption protocol. WhatsApp was not only protecting users' chats, but also doing so with one of the best end-to-end encrypted messaging protocols available. At the time, we praised WhatsApp and created guides for both iOS and Android on how to protect your communications using it.
Signal takes steps forward

Meanwhile, the well-regarded end-to-end encryption app Signal, for which the Signal Protocol was created, has grown its user base and introduced new features. Available for iOS and Android (as well as desktop if you have either of the former), Signal recently introduced disappearing messages to its platform. With this feature, users can be assured that after a chosen amount of time, messages will be deleted from both their own and their contact's devices.
Signal also recently changed the way users verify their communications, introducing the concept of “safety numbers” to authenticate conversations and verify the long-lived keys of contacts in a more streamlined way.
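The idea behind safety numbers is that both parties derive the same short, human-comparable number from the long-lived identity keys in the conversation, so a mismatch reveals a key substitution. The sketch below illustrates that general idea only; it is not Signal's actual algorithm (which iterates SHA-512 over each party's identity key and formats the result as two 30-digit halves). The key values and the `safety_number` helper are invented for illustration.

```python
import hashlib

def safety_number(key_a: bytes, key_b: bytes, digits: int = 60) -> str:
    """Derive a decimal 'safety number' both parties can compare.

    Illustrative only -- not Signal's real derivation. Sorting the two
    identity keys first makes the result identical no matter which side
    computes it, so Alice and Bob can read the numbers aloud and check
    that they match.
    """
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(material).digest()
    number = int.from_bytes(digest, "big") % (10 ** digits)
    return str(number).zfill(digits)

# Placeholder 32-byte identity keys, purely for demonstration.
alice_key = bytes(32)
bob_key = bytes(range(32))

# Both sides compute the same number regardless of argument order.
assert safety_number(alice_key, bob_key) == safety_number(bob_key, alice_key)
```

If an attacker substitutes a key in transit, the number derived on each device diverges, which is exactly the condition the in-app verification screen surfaces to users.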
Mixed-mode messaging

2016 reminded us that the landscape is not as black-and-white as secure messaging apps vs. not-secure ones. This year several existing players in the messaging space added end-to-end encrypted options to their platforms: Facebook Messenger added "secret" conversations, and Google released Allo with an "incognito" mode. These end-to-end encrypted options co-exist in the apps with a default option that is only encrypted in transit.
Unfortunately, this “mixed mode” design may do more harm than good by teaching users the wrong lessons about encryption. Branding end-to-end encryption as “secret,” “incognito,” or “private” may encourage users to use end-to-end encryption only when they are doing something shady or embarrassing. And if end-to-end encryption is a feature that you only use when you want to hide or protect something, then the simple act of using it functions as a red flag for valuable, sensitive information. Instead, encryption should be an automatic, straightforward, easy-to-use status quo to protect all communications.
In contrast, the end-to-end encrypted “letter sealing” that LINE expanded this year is enabled by default. Since first introducing it for 1-on-1 chats in 2015, LINE has made end-to-end encryption the default and progressively expanded the feature to group chats and 1-on-1 calls. Users can still send messages on LINE without end-to-end encryption by changing security settings, but the company recommends leaving the default “letter sealing” enabled at all times. This kind of default design makes it easier for users to communicate with encryption from the get-go, and much more difficult for them to make dangerous mistakes.
The dangers of insecure messaging

In stark contrast to the above-mentioned secure messaging apps, a November report from Citizen Lab exposed the Chinese messenger WeChat's practice of performing selective censorship on its 806 million monthly active users. When a user registers with a Chinese phone number, WeChat will censor content critical of the regime no matter where that user is. The censorship effectively "follows them around": even if the user switches to an international phone number or leaves China to travel abroad, they may remain subject to China's censorship regime wherever they go.
Compared to the secure messaging practices EFF advocates for, WeChat represents the other end of the messaging spectrum, employing algorithms to control and limit access rather than using privacy-enhancing technologies to allow communication. This is an urgent reminder of how users can be put in danger when their communications are available to platform providers and governments, and why it is so important to continue promoting privacy-enhancing technologies and secure messaging.
Neal H. Walfield is a hacker at g10code working on GnuPG. This op-ed was written for Ars Technica by Walfield, in response to Filippo Valsorda's "I'm giving up on PGP" story that was published on Ars last week.
Every once in a while, a prominent member of the security community publishes an article about how horrible OpenPGP is. Matthew Green wrote one in 2014 and Moxie Marlinspike wrote one in 2015. The most recent was written by Filippo Valsorda, here on the pages of Ars Technica, which Matthew Green says "sums up the main reason I think PGP is so bad and dangerous."
In this article I want to respond to the points that Filippo raises. In short, Filippo is right about some of the details, but wrong about the big picture. For the record, I work on GnuPG, the most popular OpenPGP implementation.