Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 18 August 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
Considering they were saying this while having trouble doing internet radio at scale, a problem basically solved 20 years ago, I'm not sure we should listen to them.
Musk: Happy to host Kamala on an 𝕏 Spaces too
PrimalPoly:
Suggested questions for Kamala:
How do crypto blockchains work, & why are so many Americans skeptical of Central Bank Digital Currencies?
How would you stop the US gov't from colluding with Big Tech social media companies to censor Americans?
What is the main cause of inflation?
What is a woman?
Description ends. Question I have for anybody with a screen reader: does this spoiler method work? And also, does the screen reader properly handle the letter X as used on twitter, namely 𝕏?
Spoilers are an HTML element, so they should work everywhere. Mastodon just shows the text without the spoiler or CW. Letters in a different typeface specified by Unicode are announced the same as regular letters, to the dismay of mathematicians who would want “double-struck X” to be announced as such.
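For the curious, you can see why screen readers flatten 𝕏 with nothing but Python's standard library: the character is officially just a math-styling variant of X, and compatibility normalization folds it back to the plain letter (a minimal sketch, nothing Mastodon- or screen-reader-specific):

```python
import unicodedata

# U+1D54F, the "X" Twitter/X uses in its branding
fancy_x = "\U0001D54F"

# Its Unicode name marks it as a math styling variant of a plain letter...
print(unicodedata.name(fancy_x))  # MATHEMATICAL DOUBLE-STRUCK CAPITAL X

# ...and compatibility (NFKC) normalization folds it back to a regular X,
# which is roughly the equivalence screen readers apply when announcing it.
print(unicodedata.normalize("NFKC", fancy_x))  # X
```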
Watching this election has been amazing! LIKE WOAH what a fucking obviously self destructive end to delusion.
Can I be optimistic and hope that with EA leaning explicitly heavier into the hard-right Trump position, when it collapses and Harris takes it, maybe some of them will self-reflect on what the hell they think "Effective" means anyway.
I cannot get over the fact that this man child who is so concerned with "the future of humanity" is both outright trying to buy the presidency and downplaying the very real weapons that can easily wipe out 70% of the Earth's population in 2 hours. Remember y'all, the cost of microwaving the world is negligible compared to the power of spicy autocomplete.
They are both stupid men who repeat stuff they hear to make themselves look good. So the question is: who are the "very smart people" this time telling numbnuts like these two that nuclear war is survivable - and by extension winnable? Because if that's the US defense establishment, then yeah, we might be cooked.
The Bismarck Analysis crew were sneering at Sagan for being a filthy peace activist, so I would hazard that the era of ‘survivable nuclear war’ rides again.
Oh this sounds like a dog I used to have as a kid! They needed more enrichment during the day or else she'd bark into the void all night and get super excited when another dog barked back.
Have they tried taking the waymos out for walkies?
saw a video of this yesterday, that “honk” title extremely understates how fucking dumb the problem is
in the video I saw, those dumb-ass things are literally crawling forward and back in the parking lot, because the one in front of it is also doing it, because…
yes, a multi-car movement deadlock, with a visually clear solution (which any human driver would be able to implement in seconds) that nonetheless still doesn’t happen because….? I guess waymo didn’t code in inter-car communication or something
seriously, find a copy and watch. it’ll give a lovely kicker to your day :>
omg, next time my wife asks me how she looks, I'm definitely dropping that "legible magyar admixture"
Edit: Didn't work. She started talking about how in the old country, the Hungarians chased her family out of the village for being religious minorities. I give this approach 0 bags of popcorn and a magen david.
That's certainly one approach to commenting on someone's picture. Pretty sure it's better to stick with the standard "Wow! 😍😍😍" but this certainly sticks out from the crowd?
It’s easy to forget that Scottstar Codex just makes shit up, but what the fuck “dynamic” is he talking about? He’s describing this like a recurring pattern and not an addled fever dream
There’s a dynamic in gun control debates, where the anti-gun side says “YOU NEED TO BAN THE BAD ASSAULT GUNS, YOU KNOW, THE ONES THAT COMMIT ALL THE SCHOOL SHOOTINGS”. Then Congress wants to look tough, so they ban some poorly-defined set of guns. Then the Supreme Court strikes it down, which Congress could easily have predicted but they were so fixated on looking tough that they didn’t bother double-checking it was constitutional. Then they pass some much weaker bill, and a hobbyist discovers that if you add such-and-such a 3D printed part to a legal gun, it becomes exactly like whatever category of guns they banned. Then someone commits another school shooting, and the anti-gun people come back with “WHY DIDN’T YOU BAN THE BAD ASSAULT GUNS? I THOUGHT WE TOLD YOU TO BE TOUGH! WHY CAN’T ANYONE EVER BE TOUGH ON GUNS?”
Embarrassing to be this uninformed about such a high-profile issue, no less one that you're choosing to write about derisively.
OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation.
While I'm not prepared to defend OpenAI here I suspect this is just to shut up the most hysterical employees who still actually believe they're building the P(doom) machine.
Longer story: This is now how software releases work, I guess. A lot is riding on OpenAI's anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there's no more training data. So the next trick is that for their next batch of models they have "solved" various problems that people say you can't solve with LLMs, and they are going to be massively better without needing more data.
But, as someone with insider info, it's all smoke and mirrors.
The model that "solved" structured data is empirically worse at other tasks as a result, and I imagine the solution basically just looks like polling multiple responses until the parser validates on the other end (so basically it's a price optimization, afaik).
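If that guess about "polling multiple responses until the parser validates" is right, the whole trick is about as deep as this sketch (all names hypothetical; `generate()` stands in for an LLM API call, and every retry is another paid request):

```python
import json
from typing import Any, Callable

def poll_until_valid(generate: Callable[[], str], max_tries: int = 5) -> Any:
    """Sample repeatedly until the output parses as JSON.

    Each failed attempt just burns another API call, which is why this is
    a pricing optimization rather than the model "solving" structured data.
    """
    for _ in range(max_tries):
        candidate = generate()
        try:
            return json.loads(candidate)  # "the parser validates on the other end"
        except json.JSONDecodeError:
            continue  # malformed output: pay for another roll of the dice
    raise ValueError("no parseable output after max_tries attempts")
```

Point a flaky generator at it and it quietly eats the garbage outputs until one happens to parse.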
The next large model launching with the new Q* change tomorrow is "approaching agi because it can now reliably count letters", but actually it's still just agents (Q* looks to be just a cost optimization of agents on the backend, that's basically it), because the only way it can count letters is that it invokes agents and tool use to write a python program and feed the text into that. Basically, it is all the things that already exist independently, but wrapped up together. Interestingly, they're so confident in this model that they don't run the resulting python themselves. It's still up to you or one of those LLM wrapper companies to execute the occasionally broken code to, um... checks notes... count the number of letters in a sentence.
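The "reliably counts letters" party trick presumably bottoms out in a one-liner like this, which the model writes and you get the privilege of running yourself (a hypothetical reconstruction, not anything OpenAI has actually shipped):

```python
# The sort of trivial program a tool-using agent emits when asked to count
# letters, because next-token prediction can't do it reliably on its own.
def count_letter(text: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in text."""
    return text.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```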
But, by rearranging what already exists and claiming it solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their mind.
Expect more of this around GPT-5 which they promise "Is so scary they can't release it until after the elections". My guess? It's nothing different, but they have to create a story so that true believers will see it as something different.
Yeah, I'm not in any doubt that the C-level and marketing team are goosing the numbers like crazy to keep the bubble from bursting, but I also think they're the ones that are most cognizant of the fact that ChatGPT is definitely not the Doom Machine. But I also believe they have employees who they cannot fire because they would spread a hella lot of doomspeak if they did, who are True Believers.
'TESCREAL' refers to a nonsense conspiracy theory that disparages people such as Nick Bostrom without citing any sources that are credible on the question of whether Nick Bostrom is an 'evil eugenicist' or whatever.
I'm ok with this because every time Nick Bostrom's name is used publicly to defend anything, and then I show people what Nick Bostrom believes and writes, I robustly get a, "What the fuck is this shit? And these people are associated with him? Fuck that."
Can AI companies legally ingest copyrighted materials found on the internet to train their models, and use them to pump out commercial products that they then profit from? Or, as the tech companies claim, does generative AI output constitute fair use?
This is kind of the central issue to me honestly. I'm not a lawyer, just a (non-professional) artist, but it seems to me like "using artistic works without permission of the original creators in order to create commercial content that directly competes with and destroys the market for the original work" is extremely not fair use. In fact it's kind of a prototypically unfair use.
Meanwhile Midjourney and OpenAI are over here like "uhh, no copyright infringement intended!!!" as though "fair use" is a magic word you say that makes the thing you're doing suddenly okay. They don't seem to have very solid arguments justifying them other than "AI learns like a person!" (false) and "well google books did something that's not really the same at all that one time".
I dunno, I know that legally we don't know which way this is going to go, because the ai people presumably have very good lawyers, but something about the way everyone seems to frame this as "oh, both sides have good points! who will turn out to be right in the end!" really bugs me for some reason. Like, it seems to me that there's a notable asymmetry here!
I dunno, I know that legally we don’t know which way this is going to go, because the ai people presumably have very good lawyers
You're not wrong on the AI corps having good lawyers, but I suspect those lawyers don't have much to work with:
- Pretty much every AI corp has been caught stealing from basically everyone (with basically everyone caught scraping without people's knowledge or consent, and OpenAI, Perplexity, and Anthropic all caught scraping against people's explicit wishes)
- Said data was used to create products which, either implicitly or [explicitly](https://archive.is/jNhpN), produce counterfeits of the stolen artists' work
- Said counterfeits are, in turn, destroying the artists' ability to profit from their original work and discouraging them from sharing it freely
If I were a betting man, I'd put my money on the trial being a bloodbath in the artists' favour, and the resulting legal precedent being one which will likely kill generative AI as we know it.
Who had Trump accusing the Harris campaign of using AI to inflate crowd size photos on their Election ‘24 bingo card? Anyway, I’m sure that being associated with fraud and fakes is Good For AI.
jesus, that's telling. and I can 100% see that sentence forming in the heads of the types of people who fall over themselves to create something like these tools. so caught up in the math and the technical cool, they can't appreciate other beauty
STORM: AI agents role-play as "Wikipedia editors" and "experts" to create Wikipedia-like articles, a more sophisticated effort than previous auto-generation systems
ai slop in extruded text form, now longer and worse! and burns extra square kilometers of rainforest
we propose the STORM paradigm for the Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking
oh come the fuck on
The authors hail from Monica S. Lam's group at Stanford, which has also published several other papers involving LLMs and Wikimedia projects since 2023 (see our previous coverage: WikiChat, "the first few-shot LLM-based chatbot that almost never hallucinates" – a paper that received the Wikimedia Foundation's "Research Award of the Year" some weeks ago).
from the same minds as STOTRMPQA comes: we constructed this LLM so it won’t generate a response unless similar text appears in the Wikipedia corpus and now it almost never entirely fucks up. award-winning!
[1] doom scenario is my interpretation, not actually included in ZDnet article.
Sadly, Langford hacks seem to have never achieved memetic takeoff. Having an internet legally enforced on pain of death to be text-only would probably be a good thing.
This community pops up on /r/all every so often and each time it scares me.
Sometimes I see kids games (and all games really) have ultra-niche, super-online protests that are like "STOP Zooshacorp from DESTROYING K-Smog vs. Batboy Online", and when I look closer it's either even more confusing or it's about something people didn't like in the latest update. This is like that, but with an awful twist where it's about people getting really attached to these AI girlfriend/sex roleplay apps. The spelling and sentences make it seem like it's mostly kids, too.
Yesterday I saw a link to some podcast/post float by, of an interview with some genml company “discussing people falling in love with, having relations with, and even wanting to marry”, where the ceo is “okay with it”. didn’t click because ugh, but will see if I can find it
and ofc all these weird fucking things will pop the moment their vc runs out or openai raises prices or whatever. bet you they don’t have any therapy contingency for helping people with their ai partners suddenly getting vc-raptured
I remember 15 years ago when I read about a Japanese man marrying a character from a dating sim game (source, archive link).
The internet clowned on him, but he was very serious, and it was the first time when I realized that these “anime waifu” people probably aren’t all just taking the piss.
There’s a whole socio-economic angle there, of course, which I don’t think I wanna get into here, but to me this whole “AI girlfriend” market really seems like a low-effort take on “dating sim as a service” with as much game removed as possible but the exploitative nature turned up to fucking eleven.
It's really funny that this was probably the closest thing to a killer app powered by genAI to exist.
Wonder if they're getting rid of this stuff because they realized it's actually a liability to mine these ERP convos for data and they're burning money on every conversation as it is.
Have an AI regulation committee, and also give the committee their own hardware so that they can use that hardware to regulate the other hardware. Maybe.
the open source apps for the learning system I want to use do exist! that system is essentially an automation around reading an interesting text in Spanish (or any other language), marking and translating terms and phrases with a translation dictionary, and generating flash cards/training materials for those marked terms and phrases. there’s no good name for the apps that implement this idea as a whole so I’m gonna call them the LWT family for reasons that will become clear.
briefly, the LWT family apps I’ve discovered so far are:
- LWT (Learning With Texts) is the original open source system that implemented the learning system I described above (though LWT itself originated as an open source clone of LingQ with some ideas from other learning systems). the Hugo Fara fork is the most recently-maintained version of LWT, but it’s generally considered finished (and extraordinarily difficult to modify) software. I need to look into LWT more since it’s still in active use; I believe it uses an Anki exporter for spaced repetition training. it doesn’t seem to have a mobile UI, which might be a dealbreaker since I’ll probably be doing a lot of learning from my phone
- Lute (Learning Using Texts) is a modernized LWT remake. this one is being developed for stability, so it’s missing features but the ones that exist are reputedly pretty solid. it does have a workable mobile UI, but it lacks any training framework at all (it may have an extremely early Anki plugin to generate flash cards)
- LinguaCafe is a completely reworked LWT with a modern UI. it’s got a bunch of features, but it’s a bit janky overall. this is the one I’m using and liking so far! installing it is a fucking nightmare (you have to use their docker-compose file only, with docker not podman, and absolutely slaughter the permissions on your bind mounts, and no you can’t fire it up native) but the UI’s very modern, it works well on mobile (other than jank), and it has its own spaced repetition training framework as well as (currently essentially useless) Anki export. it supports a variety of freely available translation dictionaries (which it keeps in its own storage so they’re local and very fast) and utterly optional DeepL support I haven’t felt the need to enable. in spite of my nitpicks, I really am enjoying this one so far (but I’m only a couple days in)
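the core loop all of these apps share (mark terms in a text, attach dictionary translations, emit flash cards with the sentence as context) is simple enough to sketch; everything below is a toy illustration of the workflow, not any app's real data model:

```python
from dataclasses import dataclass

@dataclass
class Card:
    term: str
    translation: str
    context: str  # the sentence the term was marked in

def make_cards(text: str, marked: dict[str, str]) -> list[Card]:
    """Turn marked terms into flash cards, keeping each sentence as context."""
    cards = []
    for sentence in (s.strip() for s in text.split(".") if s.strip()):
        for term, translation in marked.items():
            if term in sentence:
                cards.append(Card(term, translation, sentence))
    return cards
```

so `make_cards("El gato duerme. El perro corre.", {"gato": "the cat"})` yields a single card for "gato" with "El gato duerme" as its context, ready for spaced-repetition review or Anki export.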
you have to use their docker-compose file only, with docker not podman, and absolutely slaughter the permissions on your bind mounts, and no you can’t fire it up native
I always want to point out how there are never specific metrics attached to these criticisms. Whenever I've seen actual numbers checked, there doesn't appear to be a significant difference between before and after companies started WFH during the pandemic.
Of course I also haven't looked too closely, because I've been too busy enjoying my life rather than pretending my boss is funny at a water cooler.
I personally take my job as Photo Detective very seriously, a trait I only acquired from too many dates with people who did not look as good as their photos in real life.
Double standards!!
I'm pretty sure if men would use ChatGPT to catch women lying about their body this conversation would be in a completely different tone.
For some reason The Internet decided that 6 feet was an arbitrary limit, under which men could be just ignored.
Imagine men deciding that any woman with smaller than (insert random body measurement we can't affect) could just be filtered out?
Both sides!
Women will hate finding out that it can guess their age and weight! It can even guess their socioeconomic group, and if they dye their hair.
Great, guys can use it too. We'll see how these chicks react...
She knows you swiped left. She slams the table with both her hands. The formica cracks beneath her mighty fists as she shouts oaths in the name of Crom.
chechen warlord ramzan kadyrov brags about cybertruck technical and praises musk for allegedly giving him it in grozny (take it with a heap of salt, like everything that clown says). they want to send it to the frontline armed with nothing more than NSV (think soviet M2 Browning) https://xcancel.com/NOELreports/status/1824820843463000392#m
Kadyrov claimed that Elon Musk supposedly gave him a Cybertruck in Grozny. According to Kadyrov, he plans to send this armed vehicle to the "SVO" zone.
“I express my sincere gratitude to Elon Musk! This is, of course, the strongest genius of our time and a specialist. Great man! Well, the cybertruck turned out to be a powerful project. Undoubtedly one of the best cars in the world! I literally fell in love with this car.”
would be a nice future addition to oryx's list, if they actually put it outside of Chechnya
I did not have Musk x Kadyrov on my 2024 bingo card.
Not really sure if this is actionable sanctions-busting. It would be hilarious if so. Even more hilarious if Ukraine captures it and sticks it into the victory museum.
Given how it was marketed I'm surprised we didn't see cybertechnicals out of some stateside militia group somewhere, though I guess not currently having dictatorial power forces them to have better opsec.
Either way this thing's gonna light up FIRMS like it's a one-vehicle ammo depot if/when it goes up.
cybertruck just plainly sucks as a base for a technical. i'm almost certain that it can't stop 5.56 (or 5.45) or anything bigger, or bigger fragmentation, windows are not bulletproof, it will get mauled by AT mines and maybe even some AP ones, lithium battery can catch fire, it has limited range, can't be refueled quickly, probably can't be repaired in the field, can't haul anything, its frame is brittle, it rusts, it's shiny, it's probably quite loud in the radio spectrum. kadyrov mounted a 12.7mm HMG, which itself is fine as a weapon but woefully underpowered alone for frontline service, it at least needs some additional antitank weapon - maybe it won't ever see tanks, but it's still useful against fortifications, BMPs, MTLBs and the like
but this won't ever happen, because this thing is for celebratory joyrides in grozny and firing that 12.7mm hmg up in the air, as a local custom
He doesn't mention genAI, but his video essays attack many of the common arguments, even the more awful anti-AI ones. I particularly like his demonstration in which the listener predicts what follows a chord, or melodic, sequence and subverting the expectation leads to more interesting music; Next-token predictors are completely orthogonal to this characteristic.
I just discovered his channel last week and have consumed far too much of his content already, haha. I've been using Musescore forever, so those videos were extremely interesting as well.
It's now been nominated for deletion (again), the discussion has blown up because it was noted on social media, and the nominator seems to be an... interesting... character
Edit: I now see they have admitted a conflict of interest and visually downgraded their "Delete" nomination (I have no fucking clue if that actually affects anything; it would not surprise me that Wikipedia considers strong formatting a valid signal for determining weight of opinion)
Tangentially related. In 1878, some asshole exhumed 25 human skulls from an abandoned cemetery in far Northern Sweden and took them to Helsinki, where he and his buddies measured them to "prove" that people from the region were less advanced.
Now the skulls are finally being re-interred thanks to pressure from the local community.
Here's a background on the cemetery and the skull-snatching (in Swedish):
It's the weekend B) time to check in on the doomers w/ Dr. Torres...
Oh damn! I missed the irrefutable evidence of LLM reasoning. They must have done a series of replicable experiments that contradicts the overwhelming evidence that LLM reasoning is more or less a series of pattern matching heuristics. Let's take a look together at their data lads, let the scales fall from our eyes.
... this is their confirmation of reasoning? And they say we are the ones who are fucking coping, lmaou.
This really opened my eyes to some historical context I never thought of before.
My initial gut reaction was judgmental about the way billionaires spend their money; thinking it might involve some amount of hubris.
Then I realized I have no idea how sculptures that are now shown in museums as treasured historical art pieces were judged in the time they were created. Today we treasure them. But what did the general population think of them? I have no idea.
I imagine that at the time of their commissioning they were also paid for by affluent people who could afford such luxuries. People who probably mirror today’s billionaires in influence and access. So what’s different about these?
brain throbbing furiously hang on... if artists produced what we say is moving and novel work WHILE wealthy people threw money at whatever bullshit they wanted, that must mean wealthy people throwing money at whatever bullshit they want is good