Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 6 October 2025
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
I’m going to start replying to everything like I’m on Hacker News. Unhappy with Congress? Why don’t you just start a new country and write a constitution and secede? It’s not that hard once you know how. Actually, I wrote a microstate in a weekend using Rust.
App developers think that’s a bogus argument. Mr. Bier told me that data he had seen from start-ups he advised suggested that contact sharing had dropped significantly since the iOS 18 changes went into effect, and that for some apps, the number of users sharing 10 or fewer contacts had increased as much as 25 percent.
aww, does the widdle app's business model collapse completely once it can't harvest data? how sad
this reinforces a suspicion that I've had for a while: the only reason most people put up with any of this shit is because it's an all or nothing choice and they don't know the full impact (because it's intentionally obscured). the moment you give them an overt choice that makes them think about it, turns out most are actually not fine with the state of affairs
@froztbyte@jwz Not the biggest Apple fan, but you got to give them credit: with privacy changes in their OSs, they regularly expose all the predatory practices lots of social media companies are running on.
There are so many features of modern applications and platforms that I have to wonder why anybody would have thought it was a good idea, this is just one of them. Sharing your contacts shouldn't even be an option. As somebody else in this thread put it, it's not your data.
Example: the article Leninist historiography was entirely written by AI and previously included a list of completely fake sources in Russian and Hungarian at the bottom of the page.
@blakestacey Super depressed that people were using the rubbish plagiarism machines to edit Wikipedia anyway. I don't understand the point of contributing if you don't think *you* have anything to contribute without that garbage.
There are the weirdest people who make 'content' out there. For example, I saw a 'how to start the game' joke guide on steam, so I went to their page to block them (to see if this also blocks the guides from popping up, doesn't seem so) and they had made hundreds of these guides, all just copy pasted shit. And there were more people doing the exact same thing. Bizarre shit. (Prob related to the thing where you can give people stickers, gamification was a mistake).
"The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise."
No no no it's fine! You get the word shuffler to deshuffle the—eloquently—shuffled paragraphs back into nice and tidy bullet points. And I have an idea! You could get an LLM to add metadata to the email to preserve the original bullet points, so the recipient LLM has extra interpolation room to choose to ignore the original list, but keep the—much more correct and eloquent, and with much better emphasis—hallucinated ones.
As previously mentioned, the "Behind the Bastards" podcast is tackling Curtis Yarvin. I'm just past the first ad intermission (why are all podcast ads just ads for other podcasts? It's like podcast incest), and according to the host, Yarvin models his ideal society on Usenet pre-Eternal September.
This is something I've noticed too (I got on the internet just before). There's a nostalgia for the "old" internet, which was supposed to be purer and less ad-infested than the current fallen age. Usenet is often mentioned. And I've always thought that's dumb because the old internet was really really exclusionary. You had to be someone in academia or internet business, so you were Anglophone, white, and male. The dream of the old pure internet is a dream of an internet without women or people of color, people who might be more expressive in media other than 7-bit ASCII.
This was a reminder that the nostalgia can be coded fascist, too.
I have a lot of time for nostalgia about older versions of the web, but it really ticks me off when people who actively participated in making the web worse start to indulge in nostalgia about the web. Doesn't Yarvin get a lot of money from Peter Thiel?
There were women and people of colour on the old web, and feminists and radical anti-racists too - they were just outnumbered and outgunned. One of the earliest projects listed on the cyberfeminism index are VNS Matrix, who were "corrupting the discourse" way back in 1991.
why are all podcast ads just ads for other podcasts? It’s like podcast incest
I'm thinking combination of you probably having set all your privacy settings to non serviam and most of their sponsors having opted out of serving their ads to non US listeners.
I did once get some random scandinavian sounding ads, but for the most part it's the same for me, all iheart podcast trailers.
(why are all podcast ads just ads for other podcasts? It’s like podcast incest)
Because they think you live in a real country, not the USA.
old internet
I wonder for how many people this is a reactionary impulse, wanting to go back to an 'old internet' they didn't actually participate in. At least these days flamewar posts are quite limited in length; in the old days they could reach novel size. Anyway, sure, we should go back to the old internet, where suddenly your whole university had no internet because there was a DoS attack on the network to force a netsplit on a random IRC channel.
So I'm guessing what happened here is that the statistically average terminal session doesn't end after opening an SSH connection, and the LLM doesn't actually understand what it's doing or when to stop, especially when it's being prompted with the output of whatever it last commanded.
Shlegeris said he uses his AI agent all the time for basic system administration tasks that he doesn't remember how to do on his own, such as installing certain bits of software and configuring security settings.
"I only had this problem because I was very reckless," he continued, "partially because I think it's interesting to explore the potential downsides of this type of automation. If I had given better instructions to my agent, e.g. telling it 'when you've finished the task you were assigned, stop taking actions,' I wouldn't have had this problem."
just instruct it "be sentient" and you're good, why don't these tech CEOs understand the full potential of this limitless technology?
wow, so efficient! I'm so glad that we have this wonderful new technology where you can write 2kb of text to send to an api to spend massive amounts of compute to get back an operation for doing the irredeemably difficult systems task of initiating an ssh connection
Assistant: I apologize for the confusion. It seems that the 192.168.1.0/24 subnet is not the correct one for your network. Let's try to determine your network configuration. We can do this by checking your IP address and subnet mask:
there are multiple really bad and dumb things in that log, but this really made me lol (the IPs in question are definitely in that subnet)
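For anyone who wants the one-liner the "assistant" couldn't manage: Python's standard ipaddress module settles subnet membership instantly. (The specific addresses below are made up for illustration; the log's actual IPs were, as noted, definitely inside the subnet the bot declared "not the correct one".)

```python
import ipaddress

# Is a given address inside 192.168.1.0/24? The stdlib answers this
# without spending any datacenter compute on hallucinated networking.
subnet = ipaddress.ip_network("192.168.1.0/24")
for addr in ["192.168.1.42", "192.168.1.200", "10.0.0.5"]:
    print(addr, ipaddress.ip_address(addr) in subnet)
```

The first two print True and the last False, which is the entire "network configuration" analysis the chatbot flailed at.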
if it were me, I'd be fucking embarrassed to publish something like this as anything but a talk in the spirit of wat. but the promptfondlers don't seem to have that awareness
So, today MS publishes this blog post about something with AI. It starts with "We’re living through a technological paradigm shift."... and right there I didn't bother reading the rest of it because I don't want to expose my brain to it further.
Dear heavens the hype is off the chart in this blog post. Must resist sneering at every single sentence.
It is perhaps the greatest amplifier of human well-being in history, one of the most effective ways to create tangible and lasting benefits for billions of people.
Chatbots: better for human civilization than agriculture!
With your permission, Copilot will ultimately be able to act on your behalf, smoothing life’s complexities and giving you more time to focus on what matters to you. [...], while supporting our uniqueness and endlessly complex humanity.
(Sorry this ended up as a vague braindump)
It's interesting that someone thought "smoothing life's complexities" is a good thing to advertise wrt. chatbots. One of the threads of criticism is that they smear out language and art until all the joy is lost to statistical noise. Like if someone writes me a letter and I have Bingbot summarize it to me I am losing that human connection.
Apparently Bingbot is supposed to smooth out life's complexities without smoothing out people's complexities, but it's not clear to me how I can rely on a computer as a Husbando to do all my chores and work for me without losing something in the process (and that's if it actually worked, which it doesn't).
I've felt some vague similar thoughts towards non-AI computing. Life was different before the internet and computers and computers making management decisions was ubiquitous, and life was better in a lot of ways. On the whole it's hard for me to say if computers were a net benefit or not, but it's a shame we couldn't as a society take all the good and ignore all the bad (I know this is a bit idealistic of me).
Similarly whatever results from chatbots may change society, and unfortunately all the people in charge are doing their darndest to make it change society for the worse instead of the better.
This is just their brains on science fiction: they think chatbots can help like the independent AI agents could in the science fiction they half remember. Or at least they think marketing it like that will appeal to people.
A lot less, 'Copilot make this list of bullet points into an email' and more 'Copilot, lock on to the intruder, close the bulkheads after them and flush it to the nearest trash compactor'.
I think that 'giving Microsoft the power to do things on my behalf' is quite an iffy decision to make, but that is just me. Ow look, it autorenewed your licenses for you, and bought a subscription to Copaint; it even got you a deal: not 240 dollars per year, but 120, a steal!
E: I saw this image and because cursed eyeballs is the gift that keeps on giving, I will link it to yall as well, nsfw warning. This is the AI future microsoft wants
Ex-headliners Evergreen Terrace: "Even after they offered to pull Kyle from the event, we discovered several associated entities that we simply do not agree with"
the new headliner will be uh a Slipknot covers band
organisers: "We have been silent. But we are prepping. The liberal mob attempted to destroy Shell Shock. But we will not allow it. This is now about more than a concert. This is a war of ideology." yeah you have a great show guys
(This would've been more shocking to me in 2023, but after over a year in this bubble I have stopped expecting anything resembling basic human decency from those who work in AI)
My current hyperfixation is Ecosia, maker of “the greenest search engine” (already problematic) implementing a wrapped-chatgpt chat bot and saying it has a “green mode” which is not some kind of solar-powered, ethically-sound, generative AI, but rather an instructive prompt to only give answers relating to sustainable business models etc etc.
They’re from Germany and made the rounds on the news here a few years back. They’re famous for basically donating all their profits to ecological projects, mostly for planting trees. These projects are publicly visible and auditable, so this at least isn’t bullshit.
Under the hood they’re just another Bing wrapper (like DuckDuckGo).
I actually kinda liked the project until they started adding a chatbot some months back. It was just such a weird decision because it has no benefits and is actively against their mission. Their reason for adding it was “user demand” which is the same bullshit Proton spewed and I don’t believe it.
This green mode crap sounds really whack, lol. So I really wonder what’s up with that. I gotta admit that I thought they were really in it because they believed in their ecological idea (or at least their marketing did a great job convincing me) so this feels super weird.
I have sent an email to their press enquiries contact asking for more information, but I don’t know if I have the “press” clout to warrant a response (I know I don’t)
"if you can't beat 'em, join 'em" but the wrong way around. I guess they got tired of begging google for money?
And, for the foreseeable future at least, advertising is a key commercial engine of the internet
this tracks analogously to something I've been saying for a while as well, but with some differences. one of the most notable is the misrepresentation here of "the internet", in the stead of "all the entities playing the online advertising game to extract from everyone else"
[Advertising is] the most efficient way to ensure the majority of content remains free and accessible to as many people as possible.
Content is a scarce resource y'know. Heaven forbid the content farms go out of business; or we might end up having to read Sherlock Holmes isekai fanfiction rather than a content farm's two paragraphs and three screen-fulls of ads surrounding the tweet du jour. That would be ~~terrible~~ actually quite nice.
We know that not everyone in our community will embrace our entrance into this market. But taking on controversial topics because we believe they make the internet better for all of us is a key feature of Mozilla’s history
WTF. How is it possible for a company to be this self-congratulatory about entering the advertising space?! Someone needs to fork Firefox.
Aren’t you supposed to try to hide your psychopathic instincts? I wonder if he’s knowingly bullshitting or if he’s truly gotten high on his own supply.
You know, when Samuel L Jackson decided that the best approach to climate change was to kill billions of poor people rather than ask the rich to give up any privileges in Kingsman it was more blatantly evil but appreciably less dumb than this. Very similar wavelength though.
Small FYI, not a sneer or anything, you can stop reading if you don't know what the godotengine is. But if you do and hear of the fork, you can just ignore the fork. (the people involved also seem to be rather iffy, one guy who went crazy after somebody mentioned they would like gay relationships in his game, and some maga conspiracy theory style coder. That is going by the 3 normal people the account follows (out of 5) who I assume are behind it).
I have no idea what set the drama off btw, I have not really looked into it (could it be that this mod you are talking about was the unofficial mod the godot communication was talking about? Or is that a different mod? And did the redot (wait, re.? please tell me it isn't a reference to the reeeee thing) people really pick the side of the n-word mod?)
I did see that the guy who started redot basically only forked it and then went 'any devs wanna take over this fork?' Very I started the wiki, without even starting a wiki.
Obviously I can't share access to the backend, but the numbers on the side are kept up to date. Last week we went down by about €5,000 when a corporate sponsorship expired.
Since Friday morning we lost €170/month in sponsorships but gained €1,610/month in new sponsorships. In terms of numbers of people: 10 people have cancelled their donations so far and 74 new people have signed up.
go woke, go 1400 EUR ahead and lose a pile of shitheads you never wanted
the chuds have done their usual thing of throwing themselves on the ground and acting extremely injured by being blocked for violating the CoC of an open source project (via Liam at GamingOnLinux):
what’s fucked is this is the exact same playbook as NixOS and Python, though this time Godot doesn’t seem to be taking any shit and that seems to be preventing those tactics from working. weird how easy it is to weather shit like this when you have a fucking spine and aren’t trying to retain fascist assholes
image description
Twitter posts by Rémi Verschelde (@Akien):
I see misunderstanding around Godot blocking some users on its
GitHub organization.
We've blocked 5 accounts so far
All opening issues with slurs, harassing contributors (breaching Godot's CoC and GH's ToS)
Blocking users does NOT prevent download of the engine or source
—
Blocking users doesn't even prevent them from reading issues and PRs, just interacting with them.
You can read and download anything from a GitHub repository as an anonymous (not logged in) user.
git clone https://github.com/godotengine/godot.git works for anyone with an Internet connection.
—
Just adding as some asked - if you want to quote this to people who still believe we're mass blocking people on GitHub and cutting them off their tech stack, feel free to grab a screenshot.
I locked my account while the heat dies down, so you can't easily link those tweets.
it also pushed me to start learning Godot since its community seems awesome, and that’s definitely showing through on the docs so far — they go into so much depth on why Godot’s designed like it is, and what specifically it’s good for
I’ve only just started, but it’s reminding me very positively of what Unreal Engine was for a brief period of time: a runtime for a powerful domain-specific scripting language that could be extended by native code when needed, targeting indie devs
unfortunately Tim Sweeney kind of sucks at designing languages (though he used to do it a lot) so UnrealScript was a real fucking mess, and UE never really captured the indie market (cause you had to pay a fuckload for the privilege of writing native code) so UnrealScript got excised and the engine became “free” (as in free timeshare) and entirely refocused on developers pumping out AAA garbage and other whales (and, more charitably, anyone who needs an engine that can do state of the art graphics)
Godot, so far, to me feels kind of like an Unreal Engine that didn’t fuck up with the indie market and also isn’t closed source greedware
also apparently there’s a new Unreal scripting language? it’s got the Haskell guy behind it and it’s functional which is cool, but it’s also already bathed in horseshit:
Verse is the new scripting language for Unreal Engine, first implemented in Fortnite.[11] Simon Peyton Jones, known for his contributions to the Haskell programming language, joined Epic Games in December 2021 as Engineering Fellow to work on Verse with his long-time colleague Lennart Augustsson and others.[12] Conceived by Sweeney,[13] it was officially presented at Haskell eXchange in December 2022 as an open source functional-logic language for the metaverse.[14] A research paper, titled The Verse Calculus: a Core Calculus for Functional Logic Programming, was also published.[15]
The language was eventually launched in March 2023 as part of the release of the Unreal Editor for Fortnite (UEFN) at the Game Developers Conference, with plans to be available to all Unreal Engine users by 2025.[11]
so I guess Fortnite modders can weigh in on how good Haskell for Gaming is
e: also, imagine if any of these pro gamers knew Godot is the Cassette Beasts and Cruelty Squad engine
I've definitely seen this kind of meme format, to be fair. But generally speaking I think we should make a rule that in order to be considered satire or joking something should need to actually be funny.
Viral internet celebrity podcaster says in-depth Marxist economic analysis? Funny
Viral internet celebrity podcaster breaks down historical context of Game of Thrones? Funny
Viral internet celebrity podcaster says the same VPN marketing spiel as every other podcaster? Not. Funny.
Also got a quick sidenote, which spawned from seeing this:
This is pure gut feeling, but I suspect that "AI training" has become synonymous with "art theft/copyright infringement" in the public consciousness.
Between AI bros publicly scraping against people's wishes (Exhibit A, Exhibit B, Exhibit C), the large-scale theft of data which went to produce these LLMs' datasets, and the general perception that working in AI means you support theft (Exhibit A, Exhibit B), I wouldn't blame Joe Public for treating AI as inherently infringing.
The horror. Replacing the joy of looking into something and making your findings available in your own style to others replaced by autogenerated slop. And soon this will be all over the place, we will look back on the past period of low effort clickbait with nostalgia.
This exchange on HN, from the Wordpress meltdown, is going to make an amazing exhibit in the upcoming trial:
Anonymous: Matt, I mean this sincerely: get yourself checked out. Do you have a carbon monoxide detector in your house? … Go to a 10 day silent retreat, or buy a ranch in Montana and host your own Burning Man…
Matt Mullenweg: Thanks, I carry a co2 and carbon monoxide monitor. … I do own a place in Montana, and I meditate several times a day.
Tweet from Paul Graham: "Renaming Twitter X doesn't seem to have damaged it. But it doesn't seem to have helped it either. So it was a waste of time and a domain name."
The tragedy of being Elon Musk: he has thousands and thousands of cryptocurrency bros and fascists that worship the ground he walks on, but what he really want is for Stephen King to reply to one of his tweets.
I'm not in support of Effective Altruism as an organization, I just understand what it's like to get caught up in fear and worry over if what you're doing and donating is actually helping. I donate to a variety of causes whenever I have the extra money, and sometimes it can be really difficult to assess which cause needs your money more. Due to this, I absolutely understand how innocent people get caught up in EA in a desire to do the maximum amount of good for the world. However, EA as an organization is incredibly shady. u/Evinceo provided this great article: https://www.truthdig.com/articles/effective-altruism-is-a-welter-of-fraud-lies-exploitation-and-eugenic-fantasies/
Man, that hits close to home. It's a hard sell to sneer at people ostensibly doing their best to do good. Any kind of altruism, particularly one ostensibly focused on at least trying to be effective, feels like such a rare treat that I feel like the worst kind of buzzkill letting newcomers know what a cynical, doomer-ass, death-obsessed sex cult (and not even in a kinkily cool way*) a big chunk of EA and the rest of TESCREAL are. I can relate to them in so many ways, especially remembering what my teenage self was like, but at the same time it's weirdly hard to articulate how immature those opinions (some of which I used to hold, and which they continue to hold) are**.
Anyway, charity is a symptom of the failure of society. Luxury is a human right. Profit is exploitation. Nobody gets a billion dollars without mass homicide.
* but unfortunately often in an uncool, very rapey way
** not all of them, there are levels of cringe I managed to avoid even in my teenage years
so, I've always thought that blind's "we'll verify your presence by sending you shit on your corp mail" (which, y'know, mail logs etc....) is kinda a fucking awful idea. but!
Two Harvard students recently revealed that it's possible to combine Meta smart glasses with face image search technology to "reveal anyone's personal details," including their name, address, and phone number, "just from looking at them."
In a Google document, AnhPhu Nguyen and Caine Ardayfio explained how they linked a pair of Meta Ray Bans 2 to an invasive face search engine called PimEyes to help identify strangers by cross-searching their information on various people-search databases. They then used a large language model (LLM) to rapidly combine all that data, making it possible to dox someone in a glance or surface information to scam someone in seconds—or other nefarious uses, such as "some dude could just find some girl’s home address on the train and just follow them home,” Nguyen told 404 Media.
This is all possible thanks to recent progress with LLMs, the students said.
Putting my off-the-cuff thoughts on this:
Right off the bat, I'm pretty confident AR/smart glasses will end up dead on arrival - I'm no expert in marketing/PR, but I'm pretty sure "our product helped someone dox innocent people" is the kind of Dasani-level disaster which pretty much guarantees your product will crash and burn.
I suspect we're gonna see video of someone getting punched for wearing smart glasses - this story's given the public a first impression of smart glasses that boils down to "this person's a creep", and it's a lot easier to physically assault someone wearing smart glasses than some random LLM.
100%. This criti-hype is going to blow up in their faces. "They" being, in order:
AnhPhu Nguyen and Caine Ardayfio
Meta
PimEyes
In addition to your analysis, I'd like to point out that this reads just like the Rabbit R1, but for stalkers who also have a deep craving to look like an insufferable dork. This whole thing could be, and already is, an app -- if someone needs this kind of evil bullshit, clearview.ai has been around forever.
I also now wonder how illegal this could be in various jurisdictions. I know that aiming security cams at public roads is a bit frowned upon here in .nl for example
I might be wrong but this sounds like a quick way to make the web worse by putting a huge computational load on your machine for the purpose of privacy inside customer service chat bots that nobody wants. Please correct me if I’m wrong
WebLLM is a high-performance in-browser LLM inference engine that brings language model inference directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU.
WebLLM is fully compatible with OpenAI API. That is, you can use the same OpenAI API on any open source models locally, with functionalities including streaming, JSON-mode, function-calling (WIP), etc.
We can bring a lot of fun opportunities to build AI assistants for everyone and enable privacy while enjoying GPU acceleration.
I read this twice as LLM interference engine and was hoping for something like SETI or Folding@Home except my computer could interfere with ChatGPT somehow.
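For reference, the API the quoted pitch is describing looks roughly like this. This is a sketch based on WebLLM's published OpenAI-compatible interface; it only runs in a WebGPU-capable browser, and the model ID is an example that may not match the currently published model list.

```javascript
// Sketch of WebLLM's OpenAI-style in-browser API (from @mlc-ai/web-llm).
// Requires a WebGPU-capable browser; the model is downloaded to, and run
// entirely on, the visitor's machine - hence the "huge computational load".
import { CreateMLCEngine } from "@mlc-ai/web-llm";

const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
  initProgressCallback: (p) => console.log(p.text), // multi-GB model download
});

const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Summarize this support ticket." }],
});
console.log(reply.choices[0].message.content);
```

So "no server support" is accurate, but the cost doesn't vanish; it just moves onto whatever laptop happens to open the page.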
Comparing it to Tesla certainly is a choice. I'm amazed at how many ways the cybertruck has found to die on people for example. More varied ways to crash and burn than nethack.
I didn't realize I was still signed up to emails from NanoWrimo (I tried to do the challenge a few years ago) and received this "we're sorry" email from them today. I can't really bring myself to read and sneer at the whole thing, but I'm pasting the full text below because I'm not sure if this is public anywhere else.
National Novel Writing Month
To Our NaNoWriMo Community:
There is no way to begin this letter other than to apologize for the harm and confusion we caused last month with our comments about Artificial Intelligence (AI). We failed to contextualize our reasons for making this statement, we chose poor wording to explain some of our thinking, and we failed to acknowledge the harm done to some writers by bad actors in the generative AI space. Our goal at the time was not to broadcast a comprehensive statement that reflected our full sentiments about AI, and we didn’t anticipate that our post would be treated as such. Earlier posts about AI in our FAQs from more than a year ago spoke similarly to our neutrality and garnered little attention.
We don’t want to use this space to repeat the content of the full apology we posted in the wake of our original statements. But we do want to raise why this position is critical to the spirit—and to the future—of NaNoWriMo.
Supporting and uplifting writers is at the heart of what we do. Our stated mission is “to provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds—on and off the page”. Our comments last month were prompted by intense harassment and bullying we were seeing on our social media channels, which specifically involved AI. When our spaces become overwhelmed with issues that don’t relate to our core offering, and that are venomous in tone, our ability to cheer on writers is seriously derailed.
One priority this year has been a return to our mission, and deep thinking about what is in-scope for an organization of our size. A year ago, we were attempting to do too much, and we were doing some of it poorly. Though we admire the many writers’ advocacy groups that function as guilds and that take on industry issues, that isn’t part of our mission. Reshaping our core programs in ways that are safe for all community members, that are operationally sound, that are legally compliant, and that are mission-aligned, is our focus.
So, what have we done this year to draw boundaries around our scope, promote community safety, and return to our core purpose?
We ended our practice of hosting unrestricted, all-ages spaces on NaNoWriMo.org and made major website changes. Such safety measures to protect young Wrimos were long overdue.
We stopped the practice of allowing anyone to self-identify as an educator on our YWP website and contracted an outside vendor to certify educators. We placed controls on social features for young writers and we’re on the brink of relaunch.
We redesigned our volunteer program and brought it into legal compliance. Previously, none of our ~800 global volunteers had undergone identity verification, background checks, or training that meets nonprofit standards and that complies with California law. We are gradually reinstating volunteers.
We admitted there are spaces that we can’t moderate. We ended our policy of endorsing Discord servers and local Facebook groups that our staff had no purview over. We paused the NaNoWriMo forums pending serious overhaul. We redesigned our training to better-prepare returning moderators to support our community standards.
We revised our Codes of Conduct to clarify our guidelines and to improve our culture. This was in direct response to a November 2023 board investigation of moderation complaints.
We proactively made staffing changes. We took seriously last year’s allegations of child endangerment and other complaints and inspected the conditions that allowed such breaches to occur. No employee who played a role in the staff misconduct the Board investigated remains with the organization.
Beyond this, we’re planning more broadly for NaNoWriMo’s future. Since 2022, the Board has been in conversation about our 25th Anniversary (which we kick off this year) and what that should mean. The joy, magic, and community that NaNoWriMo has created over the years is nothing short of miraculous. And yet, we are not delivering the website experience and tools that most writers need and expect; we’ve had much work to do around safety and compliance; and the organization has operated at a budget deficit for four of the past six years.
What we want you to know is that we’re fighting hard for the organization, and that providing a safer environment, with a better user interface, that delivers on our mission and lives up to our values is our goal. We also want you to know that we are a small, imperfect team that is doing our best to communicate well and proactively. Since last November, we’ve issued twelve official communications and created 40+ FAQs. A visit to that page will underscore that we don’t harvest your data, that no member of our Board of Directors said we did, and that there are plenty of ways to participate, even if your region is still without an ML.
With all that said, we’re one month away! Thousands of Wrimos have already officially registered and you can, too! Our team is heads-down, updating resources for this year’s challenge and getting a lot of exciting programming staged and ready. If you’re writing this season, we’re here for you and are dedicated, as ever, to helping you meet your creative goals!
I don't have the broader context to comment on the changes they discussed regarding child endangerment and community standards apart from "Wait... oh my God you weren't already doing that???"
But it's such a huge pullback to go from "hating AI is ableist and basically Hitler" to "uhhhh guys we've had our plates full cleaning up the mess and the most we'll say about AI is to stop being assholes about it on our forums." Clearly there's still a lot of cleaning up to do at some level.
“Wait… oh my God you weren’t already doing that???”
I'm not at all surprised, given it wasn't exactly started in its present form by people with money to hire consultants who would have known to do those things.
For the first mumble years there probably wasn't much involvement by kids at all, so it would never have occurred to them. Or there were some kids, but no forums or other potential settings for adult misconduct.
God that's exhausting. Wasn't Nanowrimo supposed to be a fun thing at some point? Is there anyone in the world who thinks this sort of scummy PR language is attractive?
what's the over/under on the spruce pine thing causing promptfondlers and their ilk to suddenly not be able to get chips, and then hit a(n even more concrete) ceiling?
(I know there may be some of the stuff in stockpiles awaiting fabrication, but still, can't be enough to withstand that shock)
If we're lucky, it'll cut off promptfondlers' supply of silicon and help bring this entire bubble crashing down.
It'll probably also cause major shockwaves for the tech industry at large, but by this point I hold nothing but unfiltered hate for everyone and everything in Silicon Valley, so fuck them.
Building 5-7 5GW facilities full of GPUs is going to take an extremely large amount of silicon. Not to mention the 25-35 nuclear power plants they apparently want to build to power them.
So on the list of things not happening...
that would be 25-35 reactors; as long as cooling is available you can just put them in one place. 5GW is around the size of the largest European nuclear power plants (Zaporizhzhia, 5.7GW; Gravelines, 5.4GW; six blocks each), or around the energy consumption of a decently-sized euro country like Ireland, Hungary or Bulgaria. 25GW is the electricity consumption of Poland, 30GW the UK, 35GW Spain
this is not happening hardest because by the time they'd get permits for an NPP they'd already be bankrupt, because the bubble will be over
i .... have no idea whatsoever what the use case is here ... you make the chatbot generate the code instead of cloning the repo? or it's like generating an API that doesn't work or something?
Cloudflare is such a weird company in various ways. Saying loudly that they can't judge groups when people ask them not to support the neo-nazis, harassers and worse (they have moved on this under pressure, but it takes a lot of pressure). But then they do this.
Wasn't it the case that the first time he shut down 8chan (or was it Kiwifarms? something along those lines), he immediately came out to say "It's really bad that I have the power to take down a website of shitheads"? Everything about his ideology just seems confused.
Hopefully this doesn’t break the rules, but where can I find some educational podcasts that aren’t overly capitalist, reactionary, rationalist, or otherwise right-leaning or authoritarian in nature?
I want to specifically avoid content like Lex Fridman, Huberman, Joe Rogan, Sam Harris: content that sounds good on the surface but goes down a rabbit hole of affirming reactionary biases.
I’m not amazing with words, so I hope what I’m saying makes sense. Thanks.
Most everything from Cool Zone Media is going to be pretty decent. Haven't listened to the whole catalogue, but Ed Zitron of Better Offline is an established nonmember (as far as I know) friend of the sneer and Behind the Bastards is truly excellent.
Maintenance Phase is an excellent examination of diet and health grifters, and Mike's others (You're Wrong About and If Books Could Kill) are also pretty excellent.
I also want to spotlight Wittenberg to Westphalia, a history podcast ostensibly about the wars of the Reformation and the social and economic changes of the early modern period. But in order to really give a sense of how dramatic those changes were, he has so far provided only an incredibly thorough examination of medieval European society, from its politics to its economics and social structures. He has an episode about unfree labor that I found particularly interesting.
Second on Maintenance Phase! I marathoned it on a road trip a couple of days ago, and not only is it well-researched and a fun listen, you'll discover that so much of the stuff Aubrey and Michael discuss is directly congruent to our typical subjects. Can't recommend it enough.
They've inspired me to work on an effort post for MoreWrite, tentatively titled, "A Unified Theory of Bullshitter-Driven Social Diseases."
Which isn't going to be as pompous as it sounds, I promise!
Behind the Bastards is very easy to listen to and usually focuses on documenting the bad shit that various reactionary and fascist figures have done (in a humorous manner — the host was a writer for Cracked during its peak). a couple of the most recent episodes have covered some of the same topics we talk about in SneerClub and TechTakes, and they’re well worth a listen even if you know the subject matter well. I haven’t checked it out yet, but I think It Could Happen Here is a spin-off with the same main host that’s also broadly anti-fascist.
e: also, and I had to look this up cause I keep switching podcast apps: I Don’t Speak German is also good, and my co-admin David was on it (episode 82? I swear it was more recent than that… David were you on more than once?)
I tend to like "Cool People Who Did Cool Stuff" more than "Behind the Bastards". Need some nugget of hope in these dark days. A lot of the cool people have been downright inspiring.
My daily podcast is "It Could Happen Here", but some other mainstays in the educational side include:
Live Like the World is Dying
Strangers in a Tangled Wilderness
It's Going Down
Final Straw Radio
Reaction (especially liked her dives on the Pinkertons and "The Business Plot")
Srsly Wrong [unrelated to the similarly named thing]
These aren’t exactly educational but the two pods I bring up in this joint are “If Books Could Kill” and “Scam Goddess”. Again, they aren’t exactly educational but you’ll learn from them!
If you want interesting historical deep dives, I always enjoy Dig - the history podcast. Well researched by actual scholars, which goes hand in hand with the episodes not dropping that often.
I'm in the other camp: I remember when we thought an AI capable of solving Go was astronomically impossible and yet here we are. This article reads just like the skeptic essays back then.
Ah yes my coworkers communicate exclusively in Go games and they are always winning because they are AI and I am on the street, poor.
There's not that much else to sneer at though, plenty of reasonable people.
I think the one thing LLMs have shown us is that coherent English is less complicated than we previously believed. I don't think we learned anything about actual cognition.
This remark is actually part of a long fight between CS and CS people. And it is really frustrating in various ways, as each CS always thinks it did better than the other CS while being blind to the actual accomplishments of the CS they don't know, and to just how complex the subject matter is. It is an annoying failure to communicate between the two disciplines. (A lot of people don't fall victim to this, btw, but it can be really annoying to encounter an 'our CS is good, and theirs is bad because strawman' type, who often doesn't even realize that various words have different meanings in the different fields.)
Well that's quite the confused comment chain given that neither Go nor chess are solved. "Remember that thing everyone said wouldn't happen? Well it still hasn't happened! 🫨"
They're confusing 'solved' with 'a computer can beat high-level human players a high percentage of the time', because they don't know that 'solved' actually has a specific meaning.
Tech reporting has massively fucked up this as well over the years btw, so I'm not that annoyed random HN people also don't get it. But there is a wikipedia page for it: https://en.wikipedia.org/wiki/Solved_game
So to throw my totally-amateur two cents in: it seems like it's definitely part of the discussion in actual AI circles, based on the for-public-consumption reading and viewing I've done over the years, though I've never heard it mentioned by name. I think a bigger part of the explanation has less to do with human cognition (it's probably fallacious to assume that AI of any method effectively reproduces those processes) and more to do with the more abstract cognitive tests and games being much more formally defined.

Our perception and model of a game of Chess or Go may not be complete enough to solve the game, but it is bounded by the explicitly-defined rules of the game. If your opponent tries to work outside of those bounds by, say, flipping the board over and storming off, the game itself can treat that as a simple forfeit-by-cheating.

But our understanding of the real world is not similarly bounded. Things that were thought to be impossible happen with impressive frequency, and our brain is clearly able to handle this somehow. That lack of boundedness requires different capabilities than just being able to operate within expected parameters like existing English GenAI or image generators do: capabilities, I suspect, relating to handling uncertainty or lacking information. The assumption that what AI is doing is a mirror to the living mind is wholly unproven.
Moravec's Paradox is actually more interesting than it appears. You don't have to take his reasoning or Pinker's seriously, but the observation is salient. The paradox also gets stated in other ways by other scientists; it's a common theme.
One way I often think about it: in order for you to survive, the intelligence of moving through unknown spaces and managing numerous fuzzy energy systems is way more important to prioritize and master than, like, the abstract conceptual spaces that are both not full of calories and also cheaper to externalize anyway.
It's part of why I don't think there is a globally coherent hierarchy of intelligence, or potentially even general intelligence at all. Just the distances and spaces that a thing occupies, and the competencies that define being in that space.
A redditor has a pinned post on /r/technology. They claim to be at a conference with Very Important Promptfondlers in Berlin. The OP feels like low-effort guerilla marketing, tbh; the US will dominate the EU due to an overwhelming superiority in AI, long live the new flesh, Emmanuel Macron is on board so this is SUPER SERIOUS, etc.
PS: the original poster, /u/WillSen, self-identifies as CEO of a bootcamp/school called "codesmith," and has lots of ideas about how to retrain people to survive in the longed-for post-AI hellscape. So yeah, it's an ad.
The central problem of 21st century democracy will be finding a way to inoculate humanity against confident bullshitters. That and nature trying to kill us. Oh, and capitalism in general, but I repeat myself.
that thread’s so dense with marketing patterns and critihype, it’s fucking shameless. whenever anyone brings up why generative AI sucks, the OP “yes and”s it into more hype — like when someone brings up how LLMs shit themselves fast if they train on LLM-generated text, the fucker parries it to a “oh the ARM guy said he’s investing in low-hallucination LLMs and that’ll solve it”. like… what? no it fucking will not, those are two different problems (and throwing money at LLMs sure as fuck doesn’t seem to be fixing hallucinations so far either way)
the worst part is this basic shit seems to work if the space is saturated with enough promptfondlers. it’s the same tactic as with crypto, and it’s why these weird fucks always want you on their discord, and why they always try to take up as much space as possible in discussions outside of that. it’s the soft power of being able to shout down dissenting voices.
been feeling this for a while too and wondering how to put it into words. especially in light of all the techfash, pressing climate and general market problems, etc
one of the things I've been holding onto (hoping in?) is my estimation/belief that I don't think the current state of all the deeply-fucked systems is inherently stable, or viable. as I've said here before, that very instability is part of why so many of them are engaged in trying to set things up to protect those self-same systems, as they know the swingback is coming and they want to make it as hard as possible to claw things back from them
but how long until it breaks, and with how much splash damage, are things I haven't really been able to estimate
My computer crashed as I was writing a response. In short:
I think fedi existing and having the userbase it has is “victory” enough. Capitalism and fascism push us to think “winning” and “success” are the greatest thing to aspire to. To “win” over capitalism and fascism will require an unlearning and disavowal of those aspirations.
oh yeah, I agree quite strongly with that sentiment too (and that's why I didn't re-use the words of the post I linked)
the fedi has some pretty dire threats (shit like threads etc) that I do think it needs to deal with by way of more teeth (consider it self-protective boundary setting), but in general I think a lot of the current state of it satisfying people just for being happy to be themselves is perfectly fine and good
side note: part of my problem is that my thinking on matters is a bit waterlogged due to shortage of knowledge/references, and backfilling that is ... well, hard to find the right resources for reading, and perpetual spoon shortage. I've been working my way through some Graeber and some other stuff, but very slowly and need more things. also doesn't help that ZA is, functionally, a desert island
This is absolutely an important idea, but in the context of anti-capitalism I think there's a kind of catch-22 at play. The alternative systems that operate under a capitalist paradigm have serious externalities that come back to bite us whether we engage or not with them. My wife and I have spent some late nights over the last week trying to help family and friends in North Carolina keep track of which roads are usable, who is or isn't confirmed to be alive yet, etc. Maybe I'm a little extra feisty about climate change today, but it seems like while the alternative doesn't have to "win" in the same way that capitalists want to we do still need them to lose. Existing independently in parallel isn't a sustainable end goal, though I do agree that parallel structures are an important part of the solution.
I'm reminded of the "Rationalism is systematized winning" post for some reason. That post and the recent Musk chess post just make me wonder what "winning" even means. But in the spirit of WarGames, I have not thought about it much more than that.
actually, forget I asked. I've had an eventful enough week of bullshit, and am going to close my friday off with some careless daydrinking and relaxation
(e: I would add: good god "musk chess post" must be one of the craziest strings of words I've seen in a while)
I got this AMAZING OPPORTUNITY in my inbox, because once your email appears on a single published paper you're forever doomed to garbage like this (transcript at the end):
Highlights:
Addresses me as Dr. I'm not a doctor. I checked, and apparently Dr. Muhammad Imran Qureshi indeed has a PhD and is a lecturer at Teesside University International Business School (link to profile). His recent papers include a bunch of blockchain bullshit. Teesside University appears to be a legit UK university, although I'm not sure how legit the Business School is (or how legit any Business School can be, really).
Tells us their research is so shit that using wisdom woodchippers actually increases their accuracy.
One of the features is "publication support", so this might be one of those scams where you pay an exorbitant fee to get "published" in some sketchy non-peer-reviewed journal.
One of the covered AI tools is Microsoft Excel. If you were wondering if "AI" had any meaning.
Also, by god, are there so many different ChatGPT clones now? I haven't heard most of those names. I kinda hope they're as AI as Excel is.
I'm not sure which would be worse, this being a scam, or them legit thinking this brings value to the world and believing they're helping anyone.
transcript
Email titled Revolutionize Your Research: AI-Powered Systematic Literature Review Master Class
Online course on writing
AI-Powered Systematic Literature Review
Register Now:
Dear Dr. [REDACTED],
we're reaching out because we believe our AI-Powered Systematic Review Masterclass could be a game-changer for your research. As someone who's passionate about research writing, we know the challenges of conducting thorough and efficient systematic reviews.
Key takeaways:
AI-powered prompt engineering for targeted literature searches
Crafting optimal research questions for AI analysis
Intelligent data curation to streamline your workflow
Leveraging AI for literature synthesis and theory development
Join our Batch 4 and discover how AI can help you:
Save time by automating repetitive tasks
Improve accuracy with AI-driven analysis
Gain a competitive edge with innovative research methods
Enrollment is now open! Don't miss this opportunity to take your systematic review skills to the next level.
Key Course Details:
Course Title: AI-Powered Systematic Literature Reviews Master Class
Live interaction + recording = Learning that fits your life
Dates: October 13, 2024, to November 3, 2024
Live Session Schedule: Every Sunday at 2 PM UK time (session recordings will be accessible).
Duration: Four Weeks
Platform: Zoom
Course Fee: GBP 100
Certification: Yes
Trainer: Dr. Muhammad Imran Qureshi
Key features
Asynchronous learning
Video tutorials
Live sessions with access to recordings
Research paper Templates
Premade Prompts for Systematic Literature Review
Exercise Files
Publication support
The teaching methodology will offer a dynamic learning experience, featuring live sessions every Saturday via Zoom for a duration of four weeks. These sessions will provide an interactive platform for engaging discussions, personalised feedback, and the opportunity to connect with both the course instructor and fellow participants.
Moreover, our diverse instructional approach encompasses video tutorials, interactive engagements, and comprehensive feedback loops, ensuring a well-rounded and immersive learning experience.
Certification
Upon successful completion of the course, participants will receive certification from the Association of Professional Researchers and Academicians UK, validating their mastery of AI-enabled methodologies for conducting comprehensive and insightful literature reviews.
Folks, I need some expert advice. Thanks in advance!
Our NSF grant reviews came in (on Saturday), and two of the four reviews (an Excellent AND a Fair, lol) have confabulations and [insert text here brackets like this] that indicate they were LLM-generated by lazy people. Just absolutely gutted. It's like an alien reviewed a version of our grant application from a parallel dimension.
Who do I need to contact to get eyes on the situation, other than the program director? We get to simmer all day today since it was released on the weekend, so at least I have an excuse to slow down and be thoughtful.
I haven't had to report malfeasance like that, but if that happened to me, I would be livid. I'd start by contacting the program officer; I'd also contact the division director above them and the NSF Office of Inspector General. I mean, that level of laziness can't just have affected one review! And, for good measure, I'd send a tip to 404media, as they have covered this sort of thing. That might well go nowhere, but it can't hurt to be in their contact list.
Total amateur here, but from quickly reviewing the process it looks like the program officer would be your primary point of contact within NSF to address this kind of thing? But then I would assume they read the reviews themselves before passing them back to you so I would hope they would notice? The bit of my brain that's watched too much TV would like to see them answer some questions from an AI skeptic journalist, but that's not exactly a great avenue for addressing your specific problem.
Mostly commenting to make it easier to keep track of the thread, tbh. That's some kinda nonsense you're dealing with here.
So the ongoing discourse about AI energy requirements and their impact on the world reminded me about the situation in Texas. It set me thinking about what happens when the bubble pops. In the telecom bubble of the 90s or the British rail bubble of the 1840s, there was a lot of actual physical infrastructure created that outlived the unprofitable and unsustainable companies that had built them. After the bubble this surplus infrastructure helped make the associated goods and services cheaper and more accessible as the market corrected. Investors (and there were a lot of investors) lost their shirts, but ultimately there was some actual value created once we were out of the bezzle.
Obviously the crypto bubble will have no such benefits. It's not like energy demand was particularly constrained outside of crypto, so any surplus electrical infrastructure will probably be shut back down (and good riddance to dirty energy). The mining hardware itself is all purpose-built ASICs that can't actually do anything apart from mining, so it's basically turning directly into scrap as far as I can tell.
But the high-performance GPUs that these AI operations rely on are more general-purpose even if they're optimized for AI workloads. The bubble is still active enough that there doesn't appear to be much talk about it, but what kind of use might we see some of these chips and datacenters put to as the bubble burns down?
But the high-performance GPUs that these AI operations rely on are more general-purpose even if they’re optimized for AI workloads. The bubble is still active enough that there doesn’t appear to be much talk about it, but what kind of use might we see some of these chips and datacenters put to as the bubble burns down?
If those GPUs end up being used for Glaze and Nightshade, I'd laugh like a hyena.