You don't want a randomised fingerprint, because a randomised one tends to be relatively unique in a sea of fingerprints [1]. What you want is a fingerprint that's as similar to everyone else's (as generic) as possible; that's what Firefox's resist fingerprinting setting aims to do, and what the Tor Browser does.
[1] There are many values you can't change, so randomising the ones you can change could end up making you more unique. Think of it like having your language set to French while being based in the USA: that language setting can't uniquely identify anyone in France, but it will stick out like a sore thumb in shitsville, Idaho. It's likely the same if you use Firefox but set your user agent to Chrome; that combination is rarer and more unique than not touching the user agent at all.
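To make that concrete, here's a rough sketch of what a fingerprinting script boils down to (the attribute list and the hashing are my own illustration, not any particular tracker's actual code):

    // A fingerprint is basically a hash over whatever attributes the page can read.
    async function fingerprint(): Promise<string> {
      const parts = [
        navigator.userAgent,                              // spoofable
        navigator.language,                               // spoofable
        Intl.DateTimeFormat().resolvedOptions().timeZone, // hard to hide
        `${screen.width}x${screen.height}`,               // hard to hide
        String(navigator.hardwareConcurrency),
      ].join("|");
      const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(parts));
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }
    // Spoofing only navigator.language to "fr-FR" while the timezone still reads
    // "America/Boise" doesn't hide you; it just makes the whole combination rarer.

The hash is only as common as the rarest combination of its inputs, which is why half-spoofed values can backfire.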
But isn't randomization supposed to give you a different unique fingerprint each time? So yes, you would be unique and easily tracked, but only until your fingerprint changes.
The benefit is that it's much easier to maintain, and it also increases privacy over the "blending in" approach. When you try to make your fingerprint look like everyone else's, there will always be things you miss that do make you uniquely identifiable, and some things simply aren't practical to "blend".
Think of a real-life analogy: if you try to blend in with a crowd, even if you do it really well, a sufficiently sophisticated observer will still be able to spot you.
With a randomisation strategy you acknowledge that you will never be able to blend in perfectly, and you allow yourself to stand out instead. Trackers can build up a profile on that fingerprint, but as soon as your fingerprint changes you look like a brand-new visitor and the old profile no longer links back to you.
In a crowd an observer can track you, but the next time you show up you appear as someone completely different, and the tail is lost.
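Mechanically, the idea being described here (as I understand the per-session approach; the snippet is an assumption about how such noise could work, not Brave's actual implementation) looks roughly like this:

    // Derive one random seed per browsing session and use it to perturb what
    // scripts can read: stable within a session, different the next time you show up.
    const sessionSeed = crypto.getRandomValues(new Uint32Array(1))[0];

    function perturb(value: number, slot: number): number {
      // Deterministic for this session, different across sessions.
      return value + ((sessionSeed + slot) % 3) - 1; // jitter of -1, 0 or +1
    }

    const reportedWidth = perturb(screen.width, 1);
    const reportedHeight = perturb(screen.height, 2);
    // A tracker hashing these values builds a profile that only lives as long as
    // the session; after a restart the hash stops matching and the tail is lost.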
I don't think there are any proven results, but I think the reason the EFF prefers Brave's decision is the philosophy that there are so many data points that you could still be linked through the ones anti-fingerprinting doesn't standardise.
It's like two ways of describing someone: one describes the guy correctly but generically, the other describes him in a lot of detail but with the wrong race and two feet too short.
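The "so many data points" argument is really just entropy arithmetic. A quick sketch (the percentages are made-up placeholders, not real measurements):

    // Each attribute contributes -log2(share of users with that value) bits of
    // identifying information; if the attributes are independent, the bits add up.
    const attributeShare: Record<string, number> = {
      "UA: Firefox on Windows": 0.08,   // fairly common
      "Language: fr-FR": 0.03,          // less common
      "Timezone: America/Boise": 0.002, // rare
    };

    let totalBits = 0;
    for (const [name, share] of Object.entries(attributeShare)) {
      const bits = -Math.log2(share);
      totalBits += bits;
      console.log(`${name}: ${bits.toFixed(1)} bits`);
    }
    console.log(`combined: ~${totalBits.toFixed(1)} bits, roughly 1 in ${Math.round(2 ** totalBits)}`);

Around 33 bits is enough to single out one person on the planet, so a handful of un-blended attributes can undo the blending on everything else.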
This can be an effective method for breaking persistence, but it is important to note that a tracker may be able to determine that a randomization tool is being used, which can itself be a fingerprinting characteristic. Careful thought has to go into how randomizing fingerprinting characteristics will or will not be effective in combating trackers.
In practice, the most realistic protection currently available is the Tor Browser, which has put a lot of effort into reducing browser fingerprintability. For day-to-day use, the best options are to run tools like Privacy Badger or Disconnect that will block some (but unfortunately not all) of the domains that try to perform fingerprinting, and/or to use a tool like NoScript (for Firefox), which greatly reduces the amount of data available to fingerprinters.
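The caveat in that quote (that a randomizer can itself be detected) is easy to picture; here's a sketch of the general idea, not any specific tracker's code:

    // Draw the exact same thing twice; if the results differ, something is
    // injecting noise into canvas reads.
    function canvasDataUrl(): string {
      const c = document.createElement("canvas");
      c.width = 200;
      c.height = 50;
      const ctx = c.getContext("2d")!;
      ctx.font = "18px Arial";
      ctx.fillText("fingerprint probe", 4, 30);
      return c.toDataURL();
    }

    const randomizerDetected = canvasDataUrl() !== canvasDataUrl();
    // Tools that seed their noise once per session (rather than per call) pass this
    // particular check, but "appears to use a known randomizer" is still one more
    // attribute a tracker can try to infer.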
So the EFF seem to recommend generic over randomisation...
Maybe ask yourself why the Tor project decided against randomisation?
No, that's absolutely incorrect. You want a new fake fingerprint every single time someone asks your browser for your information. You want it to lie about your plugins, user agent, fonts and screen size. Bonus points if you use common values, but it's not necessary.
The randomized data they're providing isn't static and it isn't the same from session to session.
100% white noise is far better obfuscation than a 40% non-unique tracking ID. Yes, your data is lumped in with 47 million other users, but combined with the static pieces of your data it still makes you uncomfortably identifiable.
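For what it's worth, the "lie every time you're asked" approach is easy to prototype in page script. This is a toy illustration of the idea, not Brave's or any particular extension's actual code:

    // Patch a navigator property so every read returns a fresh random value.
    const fakeConcurrency = (): number => 2 + Math.floor(Math.random() * 7); // 2..8 "cores"

    Object.defineProperty(Navigator.prototype, "hardwareConcurrency", {
      get: fakeConcurrency,
      configurable: true,
    });

    // Two reads in the same session can now disagree, so any hash built over this
    // value is no longer stable enough to follow you across visits.
    console.log(navigator.hardwareConcurrency, navigator.hardwareConcurrency);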
Yeah... I don't know why a bunch of privacy bros think they know better than the CS and cryptography PhDs of the Tor project, the most advanced and complex privacy- and anonymity-preserving project in computing history.