Signal is finally tightening its desktop client's security by changing how it stores plain text encryption keys for the data store after downplaying the issue since 2018.
The whole drama seems to be pushing them toward Electron's safeStorage API, which uses the device's secrets manager. But aren't secrets stored there still accessible when the machine is unlocked anyway? I'm not sure what this change accomplishes other than encryption at rest with the device powered off - which is redundant if you're using full disk encryption.
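For context, this is roughly what using safeStorage looks like in an Electron main process - a minimal sketch, not Signal's actual code, and the file name is made up:

```typescript
// Minimal sketch of Electron's safeStorage API (call after the app is ready).
import { app, safeStorage } from 'electron';
import { promises as fs } from 'fs';
import * as path from 'path';

// Illustrative location for the wrapped key; not Signal's real layout.
const keyFile = path.join(app.getPath('userData'), 'encrypted-db-key.bin');

async function storeDbKey(plainKey: string): Promise<void> {
  if (!safeStorage.isEncryptionAvailable()) {
    throw new Error('No usable OS secret store (e.g. no keyring unlocked)');
  }
  // The encryption key lives in the OS secret store (Keychain, DPAPI,
  // gnome-keyring/KWallet), so only ciphertext ends up on disk.
  const wrapped: Buffer = safeStorage.encryptString(plainKey);
  await fs.writeFile(keyFile, wrapped);
}

async function loadDbKey(): Promise<string> {
  const wrapped = await fs.readFile(keyFile);
  // The plain text key only exists in the running process's memory.
  return safeStorage.decryptString(wrapped);
}
```

As I understand it, on Linux the underlying Chromium os_crypt layer can fall back to a hardcoded key when no keyring is available, which is barely better than plain text - so what this buys you really does depend on the platform.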
I don't think they're downplaying it, it just doesn't seem to be the large security concern some people are making it out to be.
This is like the third time in the past two months I've seen someone trying to spread FUD around Signal.
Yeah they are; this problem is super overblown. Weirdly, I've seen articles about this coming up for other apps too, like the ChatGPT app for macOS storing conversation history in plain text on the device. Weird that this is suddenly a problem.
If someone wants better security, they can use full disk encryption, encrypt their home directory, and unlock it on login.
But aren't secrets stored there still accessible when the machine is unlocked anyway?
I think the OS prevents apps from accessing each other's data in those keychains, right? So there wouldn't be an automated/scriptable way to extract the key as easily.
But that's the thing: I haven't found anything that indicates it can differentiate a legitimate access from a dubious one; at least not without asking the user to authorize it by providing a password, which adds extra inconvenience.
If the wallet asked the program itself for a secret - to verify the program was legit and not a malicious script - the program would still have the same problem of storing and retrieving that secret securely, which defeats the purpose of using a secret manager in the first place.
I haven't found anything that indicates it can differentiate a legitimate access from a dubious one
It's not about legitimate access versus illegitimate access. As I understand it, these keychains/wallets can control which specific app can access any given secret.
It's a way of sandboxing apps so that one app can't access another app's secrets.
In practice, browser access to an item can be problematic because a browser shares data with the sites it visits, but that's different from a specific app, hardcoded to a specific service, being given exclusive access to a key.
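To make that concrete, here's roughly what delegating a secret to the OS keychain looks like from Node using the (now archived) keytar module - just a sketch, and the service/account names are invented, not what Signal uses:

```typescript
// Sketch of handing a secret to the OS keychain via the keytar module.
import * as keytar from 'keytar';

const SERVICE = 'my-chat-app';        // hypothetical service name
const ACCOUNT = 'db-encryption-key';  // hypothetical item name

async function saveKey(key: string): Promise<void> {
  // macOS: stored in the login Keychain; by default the item's ACL only
  // trusts the (code-signed) app that created it - the per-app separation
  // described above.
  // Linux: stored via the Secret Service API (gnome-keyring/KWallet),
  // where any app in the same unlocked session can read it back.
  await keytar.setPassword(SERVICE, ACCOUNT, key);
}

async function readKey(): Promise<string | null> {
  return keytar.getPassword(SERVICE, ACCOUNT);
}
```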
Upon reading a bit about how different wallets work, it seems macOS is able to identify the program requesting keychain access when it's signed with a certificate - I don't know if that's the case for Signal Desktop on Mac, nor what happens if the program is not signed.
As for gnome-keyring, they acknowledge that doing this on Linux distros is a much larger endeavor due to the attack surface:
An active attack is where the attacker can change something in your security context. In the context of gnome-keyring an active attacker would have access to your user session in some way. An active attacker might install an application on your computer, display a window, listen into the X events going to another window, read through your memory, snoop on you from a root account etc.
While it'd be nice for gnome-keyring to someday be hardened against active attacks originating from the user's session, the reality is that the free software "desktop" today just isn't architected with those things in mind. We need completion and integration of things like the following. Kudos to the great folks working on parts of this stuff:
- Trusted X (for prompting)
- Pervasive use of security contexts for different apps (SELinux, AppArmor)
- Application signing (for ACLs)
We're not against the goal of protecting against active attacks, but without hardening of the desktop in general, such efforts amount to security theater.
Also
An example of security theater is giving the illusion that somehow one application running in a security context (such as your user session) can keep information from another application running in the same security context.
In other words, the problem is beyond the scope of gnome-keyring. Maybe now, with the spread of Wayland and more sandboxing options, shrinking that shared security context becomes viable.
Yes, but it pushes the problem to the operating system level, and that means everyone wins as the operating system's solutions improve when vulnerabilities are found and resolved.
You also don't need RCE to exfiltrate data. If the decrypted key is only ever held in memory, that mitigates an entire class of vulnerabilities where a bug in another application could cause your private chats to leak.
Full disk encryption is not a solution here. Any application that's already running and can give an attacker read-only file system access is not going to be stopped by your full disk encryption.
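To spell out why FDE doesn't enter into it, here's a sketch of what a read-only file disclosure bug in any app in your session can reach. The paths and field names are assumptions about Signal Desktop's profile layout, not verified details:

```typescript
// Sketch: what an arbitrary-file-read bug in any app in the session can reach.
// Paths and field names are illustrative assumptions, not verified.
import { promises as fs } from 'fs';
import * as path from 'path';

async function whatAFileLeakGets(profileDir: string): Promise<string | undefined> {
  const config = JSON.parse(
    await fs.readFile(path.join(profileDir, 'config.json'), 'utf8'),
  );
  // Old layout: a plain text database key sits right next to the database,
  // and full disk encryption is irrelevant because the volume is already
  // mounted and unlocked while you're logged in.
  // New layout: only a safeStorage-wrapped ciphertext is on disk; the plain
  // text key exists solely in the legitimate app's memory.
  return config.key ?? config.encryptedKey;
}
```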