I’ve used a duress password with crypto containers ever since the old TrueCrypt introduced me to the idea. Sure, you can have the password and unlock the vault, but all that’s in there are text-file notes that aren’t at all important. In reality, though, no one would ever give a shit about my data enough to even ask me for my password.
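For anyone who hasn't seen the trick: here's a stdlib-only toy of how a duress password can select a decoy volume, loosely in the spirit of TrueCrypt's hidden volumes. The container layout, function names, and MAC-instead-of-encryption shortcut are all mine, not TrueCrypt's actual on-disk format.

```python
# Stdlib-only toy (NOT real crypto): the entered password alone decides
# which "volume" opens, so the duress password yields only the decoy.
import hashlib
import hmac
import os

def derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def make_volume(password: str, payload: bytes) -> dict:
    salt = os.urandom(16)
    # A real container would encrypt the payload; a MAC is enough here to
    # show how the entered password selects a volume.
    tag = hmac.new(derive_key(password, salt), payload, "sha256").digest()
    return {"salt": salt, "tag": tag, "payload": payload}

def open_container(password: str, volumes: list[dict]) -> bytes | None:
    # Try every volume with the same code path, so nothing about the
    # attempt reveals whether a hidden volume even exists.
    for vol in volumes:
        key = derive_key(password, vol["salt"])
        tag = hmac.new(key, vol["payload"], "sha256").digest()
        if hmac.compare_digest(tag, vol["tag"]):
            return vol["payload"]
    return None

hidden = make_volume("real-password", b"the stuff that matters")
decoy = make_volume("duress-password", b"unimportant text file notes")
assert open_container("duress-password", [hidden, decoy]) == b"unimportant text file notes"
```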
Essentially you turn your user into an LLM for a nonsense language. You train them by having them read nonsense text, then test them by giving them a sequence of text to complete, recording how quickly and accurately they respond. Repeat until the accuracy reaches an acceptable level.
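A rough sketch of that train-then-test loop, with a simulated user standing in for the human; the alphabet, sequence length, round counts, and thresholds are placeholders, not the paper's actual parameters:

```python
# Train-then-test loop: run timed completion trials until accuracy clears
# an acceptance threshold. The "user" is a callback so it can be simulated.
import random
import time

ALPHABET = "sdfjkl"                                    # keys, typing-game style
SECRET = [random.choice(ALPHABET) for _ in range(30)]  # the implicit "password"

def run_trial(user_respond, prefix_len: int = 5) -> tuple[bool, float]:
    """Show a random window of the secret, ask for the next item, time it."""
    start = random.randrange(len(SECRET) - prefix_len - 1)
    prompt = SECRET[start : start + prefix_len]
    expected = SECRET[start + prefix_len]
    t0 = time.monotonic()
    answer = user_respond(prompt)
    return answer == expected, time.monotonic() - t0

def certify(user_respond, trials: int = 50, threshold: float = 0.9) -> bool:
    """Repeat test rounds until accuracy clears the bar (or give up)."""
    for _ in range(10):                    # stand-in for repeated training
        results = [run_trial(user_respond) for _ in range(trials)]
        accuracy = sum(ok for ok, _ in results) / trials
        if accuracy >= threshold:
            return True
    return False

def trained_user(prompt):
    """A perfectly conditioned user: completes the sequence on reflex."""
    s, p = "".join(SECRET), "".join(prompt)
    return s[s.find(p) + len(p)]

assert certify(trained_user)
```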
Even if an attacker kidnaps the user and sends in a body double with your user's ID, security key, and means of biometric identification, they will still not succeed. Your user cannot teach their doppelganger the pattern, and if the attacker puts the user on a video call, the extra round trip of the user reading the prompt and dictating the response should introduce a detectable amount of lag.
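That lag argument turns into a toy liveness check easily enough: record the user's practised response times at enrollment, then flag answers that take far longer. The numbers and the 4-sigma cutoff below are invented for illustration.

```python
# Toy relay detection: a dictated-over-video-call answer should take much
# longer than the user's own well-practised reflex. Figures are made up.
import statistics

def build_baseline(latencies: list[float]) -> tuple[float, float]:
    """Response-time mean/stdev from enrollment, when identity is trusted."""
    return statistics.mean(latencies), statistics.stdev(latencies)

def looks_relayed(latency: float, baseline: tuple[float, float], k: float = 4.0) -> bool:
    mean, stdev = baseline
    return latency > mean + k * stdev

enrollment = [0.41, 0.38, 0.45, 0.40, 0.43]   # seconds, practised responses
baseline = build_baseline(enrollment)
print(looks_relayed(0.42, baseline))  # False: normal reflex speed
print(looks_relayed(1.90, baseline))  # True: read-aloud-and-dictate round trip
```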
The only remaining avenue the attacker has is, after dumping the body of the original user, to kidnap the family of another user and force that user to carry out the attack. The paper does not bother to cover this scenario, since the mitigation is obvious: your user-conditioning program should include a second module teaching users to value the security of your corporate assets above the lives of their loved ones.
I am well aware of how learning works, but people tend to learn through comprehension and understanding. Completing phrases without understanding the language (or even the concept of language) is the realm of LLMs and competitive Scrabble players.
Smart. I like the idea of replacing biometrics with something that can't easily be cloned - learned behaviour. Perhaps with a robust ML approach you could use analysis of gait, expressions, and other subtle behavioural tics rather than or in addition to facial/fingerprint/iris recognition. I suspect that would be very hard to fake - although perhaps vulnerable to, idk, having a bad day and acting "off".
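Very hand-wavy sketch of what that could look like: score how far a session's behavioural features sit from the user's enrolled distribution. The feature names and thresholding are invented; a real system would run a trained model over actual gait/expression data rather than summary statistics.

```python
# Toy behavioural-biometrics score: mean absolute z-score of a session's
# features against the user's enrolled profile. Higher = less like the user.
import statistics

FEATURES = ["gait_cadence_hz", "blink_rate_hz", "typing_dwell_ms"]

def enroll(samples: list[dict]) -> dict:
    """Per-feature (mean, stdev) from many trusted sessions."""
    profile = {}
    for f in FEATURES:
        values = [s[f] for s in samples]
        profile[f] = (statistics.mean(values), statistics.stdev(values))
    return profile

def anomaly_score(profile: dict, session: dict) -> float:
    """Average how many standard deviations each feature is off by."""
    return sum(
        abs(session[f] - mean) / stdev for f, (mean, stdev) in profile.items()
    ) / len(profile)

# A "bad day" shows up as a moderately high score; a body double as a very
# high one. Where you put the cutoff is exactly the false-reject vs
# false-accept tradeoff mentioned above.
```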
Having read the paper, I see a glaring problem: even though the user can't tell an attacker the password, nothing is stopping them from demonstrating it. It doesn't matter that it's an interactive sequence -- the user is going to remember enough detail to describe the "prompts".
A rubber hose and a little time will extract enough detail to build a "close enough" mock-up of the password-entry interface, which the trusted user can then be made to use, revealing the password.
There are some cases involving plausible deniability where game theory tells you to keep beating the person until they're dead even if they give up their keys, since there might be more keys.
I mean, I'd definitely do it to SBF if his crap hadn't been cleaned out already. Though admittedly I'd largely keep going just because this world DESPERATELY needs fewer SBF types in it...
Game theory would lead you, as the tortured, to realize that they're just going to beat you to death to extract any keys you may or may not have, so the proper answer is to give them one key and no more. You're dead anyway; you may as well actually protect what you thought was worth protecting. Giving up one key that opens a dummy vault may get the torturers to stop with you, thinking this lead is a dead end.
Revealing the private key, or a symmetric key, would break the scheme; it's kind of the point that a person holding those can read everything. The public key is the one you can show people.
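For the avoidance of doubt, here's that asymmetry in a few lines, using the third-party `cryptography` package (pip install cryptography):

```python
# The public key is safe to publish; only the private key can decrypt.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Anyone may hold this PEM and use it to encrypt *to* you; it reveals
# nothing useful about the private key.
pem = public_key.public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo
)

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
ciphertext = public_key.encrypt(b"secret note", oaep)

# Handing over the private key (or a shared symmetric key) gives the game
# away, because it reverses the encryption; showing the public key doesn't.
assert private_key.decrypt(ciphertext, oaep) == b"secret note"
```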