Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this.)
“Oh well, time to learn absolutely nothing from this and continue to be terrible people,” said Grimes and Aella mentally, and unbeknownst to them, because they each have one brain cell quantum-entangled* with the other’s, simultaneously
*I finished the three body problem trilogy recently! Where do I sign up for the ETO
I'd swear I've seen that exact same "realization" from Aella before, when she posted something like "I got really into tradcath practices for [the writer's barely disguised fetish] reasons, and now I'm shocked that they really actually do hate sex work."
Edit to add:
ok what the FUCK is goin on with the neo-trads? I was just over here enjoying this free life, individualism, subversive, unwoke cultural movement and I thought everybody was on board but suddenly BAM we've got a bunch of them spawning into sex-negative tradcaths or whatever [...] i'm just sad cause i thought this section of culture were my allies. we both were like 'leave me alone, authoritarian government/culture', and were appropriately skeptical of novel identity movements, willing to say the weirdo things.
(Two comment threads about the CDC purging "woke" research, the comments are bad even by HN standards)
Gee given a forum full of hackers you'd expect them to be against arbitrary removal of scientific studies. What happened to "information wants to be free"?
Also I know I know, more US politics. It turns out silicon valley fascists have gained power so expect this to keep happening for the foreseeable future 🙃.
These past two weeks have made me very uncomfortable working in Silicon Valley. I know last time I said I was planning to get out; but now it feels urgent, both for my own well-being and to stop contributing to this industry. In trans communities we immediately saw coupy stuff** for the attempted transgender genocide that it is; the wider public and media are waking up to this very slowly.
* An account-only platform that sometimes bans US citizens for being cool.
** If there's interest I could try turning all of this into a top level post on morewrite or techtakes. I've been trying to avoid inundating people with US politics, but it's extremely bad. Like constitutional crisis, rise of techno-fascism, dismantling of the administrative state, transgender extermination, put career roadblocks in front of minorities bad.
I’ve been trying to avoid inundating people with US politics, but it’s extremely bad. Like constitutional crisis, rise of techno-fascism, dismantling of the administrative state, transgender extermination, put career roadblocks in front of minorities bad.
and honestly reading what you wrote here, I'm tearing up. the constant downplaying for almost a decade has really worn me down, and it's getting worse and worse. that I see more and more people plainly describing the situation is so cathartic, such a relief
Not clicking those HN links, decided years ago already that site should not be part of my life anymore at all. The few times I have deviated from that rule since, I regretted it.
As for the more general topic, I feel so bad for all trans people with everything that is unfolding. It's horrible. But be assured that there are many people in this world who are on your side on this. Wish I could say something more useful, but I'm at a loss for words.
An account-only platform that sometimes bans US citizens for being cool
Not just that, but a site that, if you are not logged in, shows an account's posts not chronologically but most popular first, making it totally unsuitable as a gov communication platform. Imagine looking at a gov storm warning system for your area and seeing the most virally named storms first, and nothing about how you should evacuate right now.
And yes, it is quite horrible that the usa is in the book burning and building (more) concentration camps stages of gearing up for genocide. Up next: taking away passports (which they are already not issuing anymore) and any guns (not that those help, historically speaking; iirc the Jewish people in Germany had guns at first).
I've been trying to avoid inundating people with US politics, but it's extremely bad. Like constitutional crisis, rise of techno-fascism, dismantling of the administrative state, transgender extermination, put career roadblocks in front of minorities bad.
yep. haven’t been posting about it here because not sure where here we’d put it (while a lot of it is well within the orbit of regular content and posters) and it’s not quite entirely anything I can do anything about but offer words of comfort and keeping watch on the nasty shit, but been speaking a lot with friends in places (signal generally, or some other spaces we actually control (i.e. not discord, etc))
I feel moderately confident that at least for a bit of the foreseeable future we’ll be okay this side of the world, but I also know enough history and context to know how vacuous that is by itself. these fuckers won’t stop.
I also wish I could just make people understand that none of this is by mistake, none of this is these fuckers just finding some shit they disagree with under the seat cushions. I wish I could make them understand the depth and extent of planning and preparation that went into this, the sheer commitment behind it all. but too often such concerns would all be received as this toot put it
there’s so much more I could say but I guess I’ll leave it there for now
I also wish I could just make people understand that none of this is by mistake
I keep seeing US people go "a lot of Trump's plans were stopped at the courts last time", and I keep worrying this is people fighting the last war again, not realizing that the opposition are active, intelligent (as in baseline human beings, not as in high IQ) people who prob are not going to be stopped by the same things as last time. "Don't worry, the Maginot line will stop them from invading too fast, they can't get through the Ardennes" sort of thinking, if that makes sense.
Meanwhile companies seem to be betting on anti-discrimination laws being gone (why else take the legal risk of shuttering dei programs?). (I have written reactions like this before and often deleted them because it feels just too doomer, but it keeps coming back to me.)
A random walk, in retrospect, looks like like directional movement at a speed of √n.
I aint clicking on LW links on my day off (ty for your service though). I am trying to reverse engineer wtf this poster is possibly saying though. My best guess: if we have a simple random walk on Z, with X_i being a random var with 2 outcomes, -1 or +1 (50% chance of a step left, 50% chance of a step right), then the expected squared distance from the origin after n steps is E[(Σ_{i=1}^n X_i)^2] = E[Σ_{i=1}^n X_i^2] + E[Σ_{i ≠ j, i,j both in {1,2,...,n}} X_i X_j]. The expectation of any product E[X_i X_j] with i ≠ j is 0 (again 50% -1, 50% +1, and the steps are independent), so the second sum has expectation 0, and (X_i)^2 is always 1, hence the expected squared distance equals n => the expected (non-squared) distance should be on the order of root(n). (I confess this rather straightforward argument comes from the wikipedia page on simple random walks, though I swear I must have seen it before decades ago.)
But back to the original poster's point... the whole point of this calculation is that it is directionLESS; we are looking at expected distance from the origin without a preference for left or right. Like, I kind of see what they are trying to say? If afterwards I ignored any intermediate steps of the walker and just looked at the final location (but why tho), I could say "the walker started at the origin and is now approx root(2n/pi) away in the minus direction, so looking only at the start and end of the walk the average velocity is d(position)/d(time) = (−root(2n/pi) − 0)/n → the walker had directional movement in the minus direction at a speed of root(2/(pi*n))".
wait, so the "speed" would be O(1/root(n)), not root(n)... am I fucking crazy?
I think they took the rather elementary fact about random walks that the variance grows linearly with time and, in trying to make a profundity, got the math wrong and invented a silly meaning for "in retrospect".
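The argument above is easy to check numerically. A quick sketch in Python (`mean_abs_distance` is just a throwaway helper name): simulating ±1 walks, the mean final distance tracks root(2n/pi), and dividing by the n steps taken gives an "average speed" that shrinks like 1/root(n), not root(n).

```python
import math
import random

def mean_abs_distance(n_steps, n_walks, seed=0):
    """Average |final position| over many simple +/-1 random walks."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walks):
        pos = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total += abs(pos)
    return total / n_walks

n = 400
observed = mean_abs_distance(n, 5000)
predicted = math.sqrt(2 * n / math.pi)  # asymptotic E[|S_n|], ~15.96 for n=400
avg_speed = observed / n                # distance/time: O(1/sqrt(n)), not sqrt(n)
```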
I mean, we tried the whole "fuck yeah grids fuck local geography" thing. That was fucking Le Corbusier and friends' whole deal. And it created dead cities and/or places in cities that people hated to live in.
My understanding was that 21st-century psychiatrists didn't speak Baseline or think that precisely
Ah, a dystopian story. (Not sure if it is intended as such btw, but for me 'ha, the past was foolish, as we now have a perfect way of talking/thinking that uses math! (No, you are not allowed to see it, dear reader)' is quite dystopian coded. Prob why I read Starship Troopers not as it was intended.) (E: and yes, prob the intention here, in this science fiction story which hypes up the writings of Scott)
Eliezer Yudkowsky says he would like to be a post-human some day, but the way to get there is by experimenting on augmenting biological intelligence through adult gene therapy targeting the human brain with suicide volunteers who may end up schizophrenic rather than taking a "leap of death" into unconstrained AI development
(found via flipping through LW for sneerable posts/comments)
L. Ron Hubbard says new "high-voltage" e-meters set to enter testing with Sea Org volunteers, possibly capable of purging body thetans at an unlimited rate
Project Gutenberg has AI generated summaries?? How the mighty have fallen.
I was researching a bizarre old sci-fi book I once read (don't judge; bad old sci-fi is a trip), and Gutenberg's summary claims it was written in the 21st century. There's actually no accurate information about this book online, as far as I can tell the earliest reference is Project Gutenberg typing it up into a text file in 2003.
Given that it's in the public domain, no one has any idea where it came from, and it has old sci-fi vibes, I strongly suspect it was written in the 20th century*, making that misinformation. It's also just a bad summary that, while not wrong, doesn't really reflect the (amusingly weird) themes of the book.
Anyway someone needs to tell them that no information is leagues better than misinformation.
* maybe the '70s give or take but I'm not a professional date guesser
We then worked with the same programmers [as AI generated categories] to provide automated summaries of nearly every book in the collection. You can find those summaries on book landing pages. These summaries are intended to be helpful for people trying to decide what book to read, or to get an idea of what a book is about.
If you spot errors in summaries, let us know. Summaries of most books are based only on the first 12,000 characters, because the costs would have been too high if we included all of every book.
We have also been corresponding with another programmer seeking to instruct AI technologies to "read" books from Project Gutenberg, summarize them, and answer questions about them. We hope this might be described further in a future newsletter.
Based off Wayback Machine poking around, it looks like they were added sometime between September 20th and October 1st.
That is indeed troubling, casts a shadow on Project Gutenberg's judgement. Now I wonder how long until Wikipedia falls too :( Gosh, I miss being excited about new tech. Now new tech is just making things worse.
About that book, so it is more "good bad" instead of "bad bad"? Maybe I'll take a look, some light/weird reading might be better than doomscrolling (and these days there's so much doom to scroll).
I don't remember (reading it was a bit like a fever dream) but there's a non-zero chance it has racist vibes in parts, you have been warned.
But oh so quotable:
We have been treating the trees on a ten mile radius with an anti-flammatory solution for several years as well, and it is quite impossible to set them on fire.
The file metadata of the oldest copy on the gutenberg webserver says 2003 -- and the document itself says Gutenberg created it in 2003 and published it in 2005 (whatever that means, maybe they were delaying ebook releases to ensure a steady stream)
Anyway this 2003 copy had their public domain boilerplate; it was described as a book in the public domain.
There are indeed a lot of websites about this, but none with any more information than Project Gutenberg, so I'm guessing they all trace back to the Gutenberg release. Probably you'd have to find some physical information about it in an actual library to trace it further.
But I'm not like a professional book researcher or anything, that's just my opinion!
No EA stuff! $1M each going to eight great charities and non-profits as far as I can tell: Children’s Hunger Fund, First Generation Investors, Global Refuge, NAACP Legal Defense and Educational Fund, PEN America, The Trevor Project, Planned Parenthood, and Team Rubicon. (from The Trevor Project's blog post)
on a side note I found out what "agile" is a few months ago and I think I'd rather go back to working retail than do the little morning circlejerk thing. dehumanizing
Rituals can be good, but yeah, agile standup meetings are not the good kind. Luckily I don't have them daily... several times a week is already draining enough. If they were daily, I would just burn out. And the standups are IMO not even the worst part of agile...
So as part of the ongoing administrative coup; federal employees have been receiving stupid emails from what everyone assumes is Elon Musk (since it's the exact same playbook as the twitter firings). But they apparently royally flubbed up NOAA's email security in the process so the employees are getting constant spam through an unsecured broadcast address.
You have my sympathy! Is the worst part that you have to review the slop or its general presence at all?
Asking because at my workplace it will be allowed soon, and some coworkers are unfortunately looking forward to it, and I'm horrified, especially by the thought of having to do code review then...
It’s more the latter. I can’t really stop anyone from using it, and playing the game of “can you tell if this snippet of code was LLMed” is a fools errand, so I have to choose to ignore that part of it. Testing de-risks bad code, but there is never enough testing… well, there’s only so much I can do within my pay grade.
I’ve since asked this person to stop talking about their copilot usage, so this issue has been resolved, for now.
that word temporarily broke me (its constituent phonemes are all valid/typical afrikaans (and dutch) but the word itself made no sense), then I looked it up
that sounds exhausting. I wish to flippantly suggest launching your colleague to another planet, but cruel fate might reveal them to be a muskovian martian so perhaps best not to broach that subject..
Part of me suspects DeepSeek is gonna quickly carve out a good chunk of the market for itself - for SaaS services looking for spicy autocomplete or a slop generator to bolt on to their products, DeepSeek's high efficiency gives them a way to do it that doesn't immediately blow a massive hole in their finances.
I am a journalist who specializes in features and profiles. I write about the American right, ideologues, intellectuals, extremist movements, the culture wars, true crime, and strange events and strange places.
by "about", he means "for"
I'm a journalist at the Guardian working on a piece about the Zizians. If you have encountered members of the group or had interactions with them, or know people who have, please contact me: [email protected].
I'm also interested in chatting with people who can talk about the Zizians' beliefs and where they fit (or did not fit) in the rationalist/EA/risk community.
I prefer to talk to people on the record but if you prefer to be anonymous/speak on background/etc. that can possibly be arranged.
From what I've been able to piece together from the various theological disputes people have had with the murder cult it seems like the only two differences are that Ziz and friends are much more committed to nonhuman animal welfare than the average rat and that they have decided that the correct approach to conflict is always to escalate. This makes them more aggressive about basically everything which looks like a much deeper ideological gap than there actually is. I'm not going to evaluate whether these are reasonable conclusions to take from the same bizarre set of premises that lead to Roko's Basilisk being a concern.
This tied into a hypothesis I had about emergent intelligence and awareness, so I probed further, and realized the model was completely unable to ascertain its current temporal context, aside from running a code-based query to see what time it is. Its awareness - entirely prompt-based - was extremely limited and, therefore, would have little to no ability to defend against an attack on that fundamental awareness.
How many times are AI people going to re-learn that LLMs don't have "awareness" or "reasoning" in a sense humans would find meaningful?
i don't understand the "safety" angle here. if chatgpt can output authoritatively-looking sentence-shaped string about pipebombs, then it's only because similar content about pipebombs is already available on wide open internet. if model is closed, then at worst they would have to monitor its use (not like google blocks any similar information from showing up). if model is open, then no safeguards make sense in the first place. i guess it's more about legal liability for openai? now they can ignore it with all these bills about "ai safety" gone (for now)
frankly it's probably harm prevention if people turn to an LLM instead of an actual source for pipe bomb instructions. "5) Put the warm pizza in the center of the pipe bomb. To maximize the radius of the detonation, you should roll the pizza and make sure that it fits securely into the pipe."
"The world is finite and kids are infinite, especially African kids." Jfc. Anyway goes to show just how white supremacist the whole "save the children" idea is.
I know the only intended message there is "I am a big racist", but what kind of dumb fuck adage is "the world is finite, kids are infinite"? You're not even trying, mother fucker
found while giving my feed a moment of scroll while making coffee after too many 3am worknights, I saw this response to the substack guy giving themselves a pat on the back again for helping the nazis
I've previously discussed the concept of model collapse, and how feeding synthetic data (training data created by an AI, rather than a human) to an AI model can end up teaching it bad habits, but it seems that DeepSeek succeeded in training its models using generative data, but specifically for subjects (to quote GeekWire's Jon Turow) "...like mathematics where correctness is unambiguous,"
That sound you hear is me pressing F to doubt. Checking the correctness of mathematics written as prose interspersed with equations is, shall we say, not easy to automate.
OpenAI can't simply "add on" DeepSeek to its models, if not just for the optics. It would be a concession. An admittal that it slipped and needs to catch up, and not to its main rival...
I actually disagree here. I think Ed underestimates how craven and dishonest these people are. I expect they'll try to quietly integrate any efficiency improvements they can get from it and bluster through any investor questions about it. Their hope at this point has to be that more hardware is still better and that scaling is still gonna be the thing to make fetch happen. This again isn't a revolutionary new structure, even if it is a significant improvement over anything Saltman and co have been doing.
If they can convince the money hose that they just need one more OOM of compute bro, they can keep vacuuming infinity dollars. The incentive is obviously there for any amount of lying, but at some point, I assume even the most braindead investors will start asking around if this really is the only game in town.
Or maybe they won't, which would be an admission against the core tenet of capitalism, but this has been a crazy year, and it's only january. :/
What I didn't wager was that, potentially, nobody was trying. My mistake was — if you can believe this — being too generous to the AI companies, assuming that they didn’t pursue efficiency because they couldn’t, and not because they couldn’t be bothered.
This isn't about China — it's so much fucking easier if we let it be about China — it's about how the American tech industry is incurious, lazy, entitled, directionless and irresponsible. OpenAI and Anthropic are the antithesis of Silicon Valley. They are incumbents, public companies wearing startup suits, unwilling to take on real challenges, more focused on optics and marketing than they are on solving problems, even the problems that they themselves created with their large language models.
It's so great that this isn't falsifiable in the sense that doomers can keep saying, well, "once the model is epsilon smarter, then you'll be sorry!", but back in the real world: the model has been downloaded 10 million times at this point. Somehow, the diamondoid bacteria have not killed us all yet. So yes, we have found out that Yud was wrong. The basilisk is haunting my enemies, and she never misses.
Bonus sneer: "we are going to find out if Yud was right"
Hey fuckhead, he suggested nuking data centers to prevent models better than GPT4 from spreading. R1 is better than GPT4, and it doesn't require a data center to run, so if we had acted on Yud's geopolitical plans for nuclear holocaust, billions would have been incinerated for absolutely NO REASON. How do you not look at this shit and go, yeah maybe don't listen to this bozo? I've been wrong before, but god damn, dawg, I've never been starvingInRadioactiveCratersWrong.
The advanced sinophobia where the Chinese are so much better at everything than the west that even when they make better and cheaper bullshit machines than the Americans do and hand them out for free, it has apocalyptic consequences.
i wonder which endocrine systems are disrupted by not having your head sufficiently stuffed into a toilet before being old enough to type words into nazitter dot com
My investigation tracked to you [Outlier.ai] as the source of problems - where your instructional videos are tricking people into creating those issues to - apparently train your AI.
I couldn’t locate these particular instructional videos, but from what I can gather outlier.ai farms out various “tasks” to internet gig workers as part of some sort of AI training scheme.
Bonus terribleness: one of the tasks a few months back was apparently to wear a head mounted camera “device” to record one's every waking moment
P.S. sorry for the linkedin link behind the mastodon link, but shared suffering and all that. I had to read "Uber for AI code data" so now you do too.
I had to read “Uber for AI code data” so now you do too.
Wow, what a fractal of cursed meaning. I don't even understand what it really means, but it feels like understanding it any further would cause considerable psychic damage.
Well I can't translate it but if you search for it... holy smokes are search engines amusingly bad at indexing federated content.
That random other Lemmy instance is actually doing the right thing here since it includes the right link rel canonical, so I guess Google just hasn't caught up yet or something. I have no idea why Google chopped off the last byte of the IP address.
In the process of looking for ways to link up with homeschool parents that aren't doing it for culty reasons, I accidentally discovered the existence of a small but active subreddit for "progressive monarchists". It's titled r/progressivemonarchists, because their imagination in naming conventions only slightly outstrips their imagination for forms of government. Given how our usual sneer fodder overlaps with nrx, I figured there are others here who I can inflict this headache on.
Quality shitpost, I could imagine some thirteen year old actually believing this.
Yea, no fascism whatsoever took place in the Netherlands, Denmark, Norway, Luxembourg, or Belgium during WW2, in which they were all very successfully avoiding being occupied by fascists.
Extra points to Spain who already avoided succumbing to their own homegrown brand of fascism before the Nazi German invasion of Poland, and where they avoided having fascists in power all the way until the 1970s. There's a book I quite like about the war where that happened called Homage to Catalonia. I wonder if Orwell ever read it.
Missing from the list is Italy, which is no longer a constitutional monarchy, but used to be until 1946, which is why they were so good at avoiding fascism they even named it.
This might take the cake for the dumbest take I've seen from George Orwell and not for a lack of competition.
A really hot take for sure. Apparently Orwell did at least say this, at least according to Wikipedia (I was doubtful initially), which is weird, but I also feel that Orwell is often misrepresented and we are probably missing context. For one, Orwell was a socialist and yet is somehow presented as a hero by Reaganists and the like. I suspect there might be context missing.
Oh yeah Britain didn't become fascist, they just imposed brutal imperialist exploitation on colonies in Asia and Africa lol. It's not fascism if you export it!
let's also not forget these very liberal and not at all nazi-collaborating kingdoms of Romania, Bulgaria, Hungary, Thailand and Japan. honorable mentions to Cambodia that very successfully avoided Pol Pot and Iran that very successfully avoided islamic revolution
Folks around here told me AI wasn't dangerous 😰 ; fellas I just witnessed a rogue Chinese AI do 1 trillion dollars of damage to the US stock market 😭 /s
I do actually have a mechanism for using the sharp edges of NVidia cards for dick mouse trapping purposes. And we could - hypothetically - use the extraneous power inputs to mine Bitcoin or something, maximizing efficiency!
I’m not going to link Andy Ngo but random rationalist transwomen are being accused of terror sympathy…and Aella is doing this ‘leopards ate my face’ dance.
edit: it was @jessi_cata who tipped Ngo off of all people.
Goddammit why can't the murder cult story just stay morbidly fascinating? Now I've got to worry about implications and how the worst people are gonna use this as ammo.
i don't think it's the first time i see jessicata acting like a total piece of shit in her completely emotionless way and it's incredibly creepy. she doesn't even seem to be aware of the harm she can cause.
that's just me trying to use an unfamiliar meme (and just trying to narrate what I'm seeing on twitter that maybe isn't worth a link). she was actually complaining that people had gone to Ngo.
Hey, did you know if you own an old forum full of interesting posts from back in the day when humans wrote stuff, you can just attach ai bots to dead accounts and have them post backdated slop for, uh, reasons?
what I don't get is why the admins chose to both backdate the entries and reuse posters' handles. If they'd just tried to "close" open questions using GenAI with the current date and a robot user it would still be shit, but not quite as deceptive
The whole thing is just weirdly incompetent. Maybe they just had everything configured wrong and accidentally deployed some throwaway tests to production? I could almost see it as a way to poison scrapers, given that there are some odd visibility settings on the slop posts, though the owner’s shiftiness and dubious explanations suggest it wasn’t anything so worthy.
Me: Oh boy, I can't wait to see what my favorite thinkers of the EA movement will come up with this week :)
Text from Geoff: "Morally stigmatize AI developers so they are considered as socially repulsive as Nazi pedophiles. A mass campaign of moral stigmatization would be more effective than any amount of regulation."
Another rationalist W: don't gather empirical evidence that AI will soon usurp / exterminate humanity. Instead, as the chief authorities of morality, engage in societal blackmail against anyone who's ever heard the words TensorFlow.
Next Sunday when I go to my EA priest's group home, I will admit to having invoked the chain rule to compute a gradient 1 trillion times since my last confessional. For this I will do penance for the 8 trillion future lives I have snuffed out and whose utility has been consumed by the basilisk.
On slightly more relevant news the main post is scoot asking if anyone can put him in contact with someone from a major news publication so he can pitch an op-ed by a notable ex-OpenAI researcher that will be ghost-written by him (meaning siskind) on the subject of how they (the ex researcher) opened a forecast market that predicts ASI by the end of Trump's term, so be on the lookout for that when it materializes I guess.
They're probably talking about Ziz's group. The double homicide in Pennsylvania is likely the murder of Jamie Zajko's parents referenced in this LW post, and the Vallejo county homicide is the landlord they had a fatal altercation with and who was killed recently.
landlord was stabbed in 2022, but recently they killed someone who shot one of the zizians back then
wait nah i fucked up
In November of 2022, three associates of Ziz (Somnulence “Somni” Logencia, Emma Borhanian, and someone going by the alias “Suri Dao”) got into a violent conflict with their landlord in Vallejo, California, according to court records and news reports. Somni stabbed the landlord in the back with a sword, and the landlord shot Somni and Emma. Emma died, and Somni and Suri were arrested. Ziz and Gwen were seen by police at the scene, alive.
landlord's name is curtis lind. this is about 2022 incident:
Early that morning, several of the tenants asked Lind to come out of his trailer home to help them with an issue, but instead “jumped him with a bunch of knives and swords, apparently with the intent of chopping him up and dissolving him in a bath of chemicals, which they had prepared,” Young said.
I get being privacy conscious and that sharing crash dumps and logs you don't really understand yourself can be scary. Making demands of urgent free tech support from strangers is just rude, though.
my least favorite thing about old forums, which carried over to a lot of open source spaces, is how little moderation there is. coming into the help forum with a “no fuck you help me the way I want” attitude should probably be an instant ban and “what the fuck is wrong with you” mod note, cause that’s the exact type of shit that causes the community to burn out quick, and it decreases the usefulness of the space by a lot. but somehow almost every old forum was moderated by the type of cyberlibertarian who treated every ban like an attack on free speech? so you’d constantly see shit like the mod popping in to weakly waggle their finger at the crackpot who’s posting weird conspiracy shit to every thread (which generally caused the crackpot to play the victim and/or tell the mod to go fuck themselves) instead of taking a stand and banning the fucker
and now those crackpots have metamorphosed into full fascists and act like banning them from your GitHub is an international incident, cause they almost never receive any pushback at all
Spent the last week playing with some security shit (thinking about a career change, since it looks like I will be mastering out of my PhD program) and fuck me everything about hardening your personal devices is exhausting. We are nowhere close to accessible privacy and security in our computers. The best solution right now may be "buy a Macbook and learn MacOS", which is so depressing.
Still deciding on a web browser. Used to be I could recommend Firefox because Righteous-Opposition-to-Google, but that doesn't really track anymore with Mozilla's behavior. Now I guess I would recommend Chrome, but it feels so gross (and I am unsure about things like Ungoogled-Chromium, for security reasons).
I personally couldn't figure out how to set the GRUB password. I will probably get around to it eventually.
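For the record, on most distros this is roughly two steps: hash a password with `grub-mkpasswd-pbkdf2` and declare a superuser in a custom config fragment. A rough sketch, assuming a Debian/Ubuntu-style layout (Fedora-likes use `grub2-mkconfig -o /boot/grub2/grub.cfg` instead of `update-grub`; the username "admin" is just a placeholder):

```shell
# Generate a PBKDF2 hash of your chosen password (interactive prompt);
# copy the output line that starts with "grub.pbkdf2.sha512.10000."
grub-mkpasswd-pbkdf2

# Declare a superuser and their password hash in a custom fragment
cat <<'EOF' | sudo tee -a /etc/grub.d/40_custom
set superusers="admin"
password_pbkdf2 admin grub.pbkdf2.sha512.10000.<HASH-FROM-ABOVE>
EOF

# Regenerate grub.cfg so the change takes effect
sudo update-grub
```

One gotcha: once `superusers` is set, GRUB demands the password even for ordinary boots unless menu entries are marked `--unrestricted`, and how to do that cleanly varies by distro.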
As for passwords, the only one I have to memorize is the one to my Bitwarden vault; everything else is stored in Bitwarden. Aside from my phone PIN, passwords I might need to type in manually (e.g. the LUKS password) are 16 characters, whereas passwords that will always be copy-pasted are 128 characters. I am looking into integrating a yubikey, but am leaning towards "fuck that shit, why would anyone actually want to use this?" If anyone here has comments on this (am I missing an obvious pitfall? do yubikeys suck as much as it looks like they suck?) I would be happy to hear them.
Anyway tl;dr is I spent the last week hardening all my devices and it sucks. In some cases it was a complete waste of time (my Steam Deck does not appear to have a way to set a password in the BIOS). In other cases (e.g. my Framework), it was probably worth it but a deeply terrible experience.
The best solution right now may be “buy a Macbook and learn MacOS”, which is so depressing.
Depends on whether you include "my personal data is sent to the manufacturer of the computer against my wishes" in your threat model... Apple does many good things for security, and I wish PC hardware makers would take security even half as seriously as Apple does. But I can't trust Apple anymore either.
(Explanation: the whole iCloud syncing stuff is such a buggy mess. I don't want it, I don't need it, so I want it off. But I guess Apple just doesn't test enough how well it works when you turn it off, maybe they can't imagine someone not wanting it. The problem is, iCloud sync settings don't stay off. Settings randomly turn themselves back on, e.g. during OS updates, and upload data before you even notice it. I'm not claiming that's intentional, I assume it's just bugs. But I've observed such bugs again and again in the past 9 years, and I've had enough. Still have a Macbook around, but I use it very rarely these days, only when I need some piece of software on MacOS that has no suitable Linux equivalent.)
While a PC+Linux setup can avoid the specific issue of "don't randomly upload my data somewhere", the setup of it all can be a mess, as you say. And then security is still limited by buggy hardware and BIOS/firmware that is frequently full of security holes. The state of computers is depressing indeed (in so many ways, security just being one of them)...
You have basically no control over how Apple handles your data. When iOS users opted out of data collection, Apple still collected the data; they just didn't allow third parties access to it.
I don't think I could ever recommend chromium-based browsers due to the MV3 switch. Does ungoogled-chromium do any patching to get around this? If not I think FF is the only sane option still.
I believe ungoogled-chromium does have MV2 support. Unfortunately, there are still real security concerns with Firefox. The good news is that Trivalent (a hardened version of Chromium developed by the Secureblue folks) has ad/content blocking built in. I am still mostly using Firefox, but the small amount that I have used Trivalent has been good.
do yubikeys suck as much as it looks like they suck?
Without knowing why you think they suck, it’s hard to say. I like having unphishable uncopyable credentials, and it irritates me that they aren’t more widely supported. On my desktop or laptop, they’re less irritating than TOTP, for example, which is neither unphishable nor uncopyable but much more widely used.
whereas passwords that will always be copy-pasted are 128 characters
Whilst there isn’t really such a thing as “too secure”, it is the case that things like passwords are not infinitely scalable. Something like yescrypt produces 256-bit hashes (iirc) so there’s simply no space to squish all that extra entropy you’re providing into the output… it might not be any more secure than a password a quarter of its length (or less!).
128 bits of entropy is already impractical to brute force, even if you ignore the fact that modern password hashes like yescrypt and argon2 are particularly challenging to attack even if your password has low entropy.
Without knowing why you think they suck, it’s hard to say. I like having unphishable uncopyable credentials, and it irritates me that they aren’t more widely supported. On my desktop or laptop, they’re less irritating than TOTP, for example, which is neither unphishable nor uncopyable but much more widely used.
I've come around a bit since posting yesterday (after looking into the various hardware key options, like OnlyKey). The biggest issue I have is that the firmware cannot be updated (which I realize is somewhat a matter of taste regarding your threat model). Other than that, it's the added complexity of "use this physical device" and the concern I had about recovering accounts if I lost the Yubikey. Their page on spare devices does not inspire confidence.
Whilst there isn’t really such a thing as “too secure”, it is the case that things like passwords are not infinitely scalable. Something like yescrypt produces 256-bit hashes (iirc) so there’s simply no space to squish all that extra entropy you’re providing into the output… it might not be any more secure than a password a quarter of its length (or less!).
128 bits of entropy is already impractical to brute force, even if you ignore the fact that modern password hashes like yescrypt and argon2 are particularly challenging to attack even if your password has low entropy.
Fair point! I chose 128 because it's the maximum allowed in Bitwarden (if it's going to be copy-pasted anyway, who cares). Assuming I didn't fuck up basic math, the entropy of a passphrase of length n, with each character selected uniformly at random from an alphabet A, is n·log₂|A|, so reaching 128 bits of entropy with 70 chars (lower + upper + digits + special) requires a passphrase of length 21.
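For anyone who wants to check that arithmetic, a quick sketch (the 70-symbol alphabet and 128-bit target are taken from the comment above):

```python
import math

def passphrase_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy in bits of a passphrase of `length` characters, each drawn
    uniformly and independently from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

# 70 symbols ~ lower + upper + digits + a handful of specials
assert passphrase_entropy_bits(20, 70) < 128   # 20 chars falls just short
assert passphrase_entropy_bits(21, 70) >= 128  # 21 chars clears 128 bits

# minimum length needed to hit the 128-bit target
min_len = math.ceil(128 / math.log2(70))
print(min_len)  # 21
```

So 21 characters is indeed the minimum, and the 128-character copy-paste passwords are overkill many times over.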