GoogleyWoog @lemmy.ml
Posts 0
Comments 11
Big Tech passkey implementations are a trap | Proton
  • If I use a password manager with long random passwords, and use 2FAS to generate those 6-digit two-factor authentication codes whenever possible (as opposed to SMS/email 2FA), is there any advantage to passkeys?

    Is it just that you don't actually have to type anything, just press "I approve" on your phone after entering your username?

    Or is it more just designed to improve security for people like my family members, who reuse the same ~10-character passwords for everything?
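
For reference, the 6-digit codes an app like 2FAS produces are TOTP (RFC 6238): an HMAC over the current 30-second time step, keyed by a secret shared when you scan the enrollment QR code. A minimal stdlib-only sketch; the base32 secret below is a placeholder for illustration, not a real one:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 of the current time step, truncated to N digits."""
    # Re-pad the base32 secret (authenticator apps often omit the '=' padding).
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret; real ones come from the QR code at setup.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because that shared secret exists on both sides and the code is something you retype, a phishing page can relay a code within its 30-second window. A passkey instead signs a challenge with a private key bound to the genuine site's origin, so there is nothing to retype, relay, or guess; that phishing resistance, not just the convenience, is the main advantage over password + TOTP.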

  • AI ‘dream girls’ are coming for porn stars’ jobs
  • Yes. The Llama 2 70B-derived models, as well as Mixtral 8x7B and the new Mistral Medium (reportedly ~70B), are competitive with ChatGPT 3.5. Most of them can handle a 16,000-token context, similar to ChatGPT, as well.

    You only NEED about 40GB of free RAM to run them (4-bit quantized) at decent quality, but on CPU alone it's slow; rough math is sketched after this comment.

    With a 24GB GPU like a 3090 or 4090 you can run them at a reasonable speed with partial GPU offload, about 1-2 words per second (see the setup sketch after this comment). I run 70Bs in this manner on my computer.

    With two 24GB GPUs you can run them very fast, like ChatGPT.

    There's of course a whole world in between as well, but those are the rough hardware requirements to match ChatGPT in a self-hosted way.

    There's also a new technique where people stack layers from one model onto another, like a merge but keeping >50% of the original layers from each model. "Goliath 120B" and the like, built from two different 70Bs, are even better, but a bit beyond reasonable consumer hardware for now.
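
On the RAM figures above, a back-of-envelope estimate, assuming roughly 4.5 bits per weight for a q4-class quant and ~10% overhead for KV cache and buffers; both numbers are assumptions, not measurements:

```python
def approx_quantized_ram_gb(params_billions: float,
                            bits_per_weight: float = 4.5,
                            overhead: float = 1.10) -> float:
    """Very rough memory estimate for a quantized model:
    weights (params * bits / 8) plus ~10% for KV cache and buffers."""
    return params_billions * bits_per_weight / 8 * overhead

print(f"70B q4:          ~{approx_quantized_ram_gb(70):.0f} GB")    # ~43 GB, hence the '40GB free RAM' figure
print(f"Mixtral 8x7B q4: ~{approx_quantized_ram_gb(46.7):.0f} GB")  # 46.7B total parameters
```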
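And a minimal sketch of the partial-offload setup described above, assuming a GGUF-quantized 70B and the llama-cpp-python bindings compiled with GPU support; the model path and the n_gpu_layers count are placeholders to tune until the offloaded layers fit in 24GB of VRAM:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="models/llama2-70b.Q4_K_M.gguf",  # placeholder path to a quantized 70B
    n_ctx=16384,       # the ~16k context mentioned above
    n_gpu_layers=45,   # partial offload: as many layers as fit in 24GB VRAM
)

out = llm("Q: Explain partial GPU offload in one sentence. A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Layers that don't fit on the card stay in system RAM, which is why a single 24GB GPU lands at a word or two per second while two cards can hold the whole model and run much faster.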