blakestacey @awful.systems
Posts 44
Comments 771
Stubsack: weekly thread for sneers not worth an entire post, week ending 19th January 2025 - awful.systems
  • shot:

    Von Neumann arguably had the highest processor-type "horsepower" we know of plus his breadth of intellectual achievements is unparalleled.

    chaser:

    But imo Grothendieck is a better comparison point for ASI as his intelligence, while being strangely similar to LLMs in some dimensions

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 19th January 2025 - awful.systems
  • "Raw, intellectual horsepower" means fucking an intellectual horse without a condom.

    Oh, wait, that's rawdogging intellectual horsepower, my mistake.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 19th January 2025 - awful.systems
  • So, the Wikipedia article about "prompt engineering" is pretty terrible. First source: OpenAI. Second: a blog. Third: OpenAI. Fourth: OpenAI's blog. ArXiv, arXiv, arXiv... 43 times. Hop on over to the Talk page, and we find this gem:

    It is sometimes necessary to make assumptions to write an article (see WP:MNA).

    Spoiler alert: that link doesn't justify anything. It basically advises against going off on tangents: there's no need to rehash the fact that evolution is a fact on every damn biology page. It does not say that Wikipedia should have an article on some creationist fantasy, like baraminology or flood geology, based entirely on creationist screeds that all cite each other.

  • bless this jank @awful.systems

    Images aren't loading

    I'm seeing empty square outlines next to "awful.systems" and my username in the top bar, and next to many (but not all) usernames in comment bylines.

    Stubsack: weekly thread for sneers not worth an entire post, week ending 19th January 2025 - awful.systems
  • Underlying original post: a Twitter bluecheck says,

    Sometimes in the process of writing a good enough prompt for ChatGPT, I end up solving my own problem, without even needing to submit it.

    Matt Novak on Bluesky screenshots this and comments,

    AI folks have now discovered "thinking"

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 19th January 2025 - awful.systems
  • If you can't get through two short paragraphs without equating Stalinism and "social justice", you may be a cockwomble.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 19th January 2025 - awful.systems
  • Welp, time to start the thread with fresh Awful for everyone to regret:

    r/phenotypes

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 19th January 2025 - awful.systems

    Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

    > The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
    >
    > Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

    (Semi-obligatory thanks to @dgerard for starting this.)

    Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • Here's a start:

    Given their enormous environmental cost and their foundation upon exploited labor, justifying the use of Large Generative AI Models in telecommunications is an uphill task. Since their output is, in the technical sense of the term, bullshit, climbing that hill has no merit.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • I think it could be very valuable to alignment-pill these people.

    Zoom and enhance!

    alignment-pill

    The inability to hear what their own words sound like is terminal. At this stage, we can only provide palliative care, i.e., shoving into lockers.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • [Fiction] [Comic] Effective Altruism and Rationality meet at a Secular Solstice afterparty

    When the very first thing you say about a character is that they "have money in crypto", you may already be doing it wrong

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • "The Publisher of the Journal "Nature" Is Emailing Authors of Scientific Papers, Offering to Sell Them AI Summaries of Their Own Work", by Maggie Harrison Dupré at Futurism:

    Springer Nature, the stalwart publisher of scientific journals including the prestigious Nature as well as the nearly 200-year-old magazine Scientific American, is approaching the authors of papers in its journals with AI-generated "Media Kits" to summarize and promote their research.

    In an email to journal authors obtained by Futurism, Springer told the scientists that its AI tool will "maximize the impact" of their research, saying the $49 package will return "high-quality" outputs for marketing and communication purposes. The publisher's sell for the package hinges on the argument that boiling down complex, jargon-laden research into digestible soundbites for press releases and social media copy can be difficult and time-consuming — making it, Springer asserts, a task worth automating.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • Today's news....

    internally at Meta:

    -trans and nonbinary themes stripped from Messenger

    -enforcement policy now allows for the denial of trans people's existence

    -tampons removed from men's restrooms

    -DEI programs shuttered

    -Kaplan briefed top conservative influencers the night before policy changes were announced

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • My favorite quote from flipping through LessWrong to find something passingly entertaining:

    You only multiply the SAT z-score by 0.8 if you're selecting people on high SAT score and estimating the IQ of that subpopulation, making a correction for regressional Goodhart. Rationalists are more likely selected for high g which causes both SAT and IQ

    (From the comments for "The average rationalist IQ is about 122".)
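
    For anyone puzzled by the 0.8 factor being argued over: it's the standard regression-to-the-mean shrinkage. A minimal worked sketch, assuming (as the commenter appears to) that SAT and IQ are standardized scores correlated at about r = 0.8, with the conventional IQ scale; the numbers here are illustrative, not from the thread:

```python
# Sketch of the "multiply the z-score by 0.8" correction under debate.
# Assumption (not from the thread): SAT and IQ correlate at r = 0.8, so
# the best linear estimate of one standardized score from the other is
# E[IQ_z | SAT_z] = r * SAT_z.

r = 0.8          # assumed SAT-IQ correlation
sat_z = 2.0      # a group selected for SATs two standard deviations up

iq_z = r * sat_z         # expected IQ z-score, shrunk toward the mean
iq = 100 + 15 * iq_z     # conventional IQ scale: mean 100, sd 15

print(f"expected IQ of the SAT-selected group: {iq:.0f}")  # -> 124
```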

  • Facebook "Secretly Trained Its AI on a Notorious Piracy Database, Newly Unredacted Court Docs Reveal"

    Kate Knibbs reports in Wired magazine:

    > Against the company’s wishes, a court unredacted information alleging that Meta used Library Genesis (LibGen), a notorious so-called shadow library of pirated books that originated in Russia, to help train its generative AI language models. [...]
    >
    > In his order, Chhabria referenced an internal quote from a Meta employee, included in the documents, in which they speculated, “If there is media coverage suggesting we have used a dataset we know to be pirated, such as LibGen, this may undermine our negotiating position with regulators on these issues.” [...]
    >
    > These newly unredacted documents reveal exchanges between Meta employees unearthed in the discovery process, like a Meta engineer telling a colleague that they hesitated to access LibGen data because “torrenting from a [Meta-owned] corporate laptop doesn’t feel right 😃”. They also allege that internal discussions about using LibGen data were escalated to Meta CEO Mark Zuckerberg (referred to as "MZ" in the memo handed over during discovery) and that Meta's AI team was "approved to use" the pirated material.

    Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • Saying that Excel is not and never was a good solution for any problem feels like a rather blinkered, programmer-brained take.

  • Yud goes full seed oil-ist
  • xcancel link, since nitter.net is kaput.

    New diet villain just dropped. Believe or disbelieve this specific one, "fat" or even "polyunsaturated fat" increasingly looks like a failure as a natural category. Only finer-grained concepts like "linoleic acid" are useful for carving reality at the joints.

    Reply:

    This systematic review and meta-analysis doesn't seem to indicate that linoleic acid is unusually bad for all-cause mortality or cardiovascular disease events.

    https://doi.org/10.1002/14651858.CD011094.pub4

    Yud writes back:

    And is there another meta-analysis showing the opposite? I kinda just don't trust those anymore, unless somebody I trust vouches for the meta-analysis.

    Ah, yes, the argumentum ad other-sources-must-exist-somewhere-um.

  • Elsevier: Proudly charging you money so its AI can make your articles worse

    Retraction Watch reports:

    > All but one member of the editorial board of the Journal of Human Evolution (JHE), an Elsevier title, have resigned, saying the “sustained actions of Elsevier are fundamentally incompatible with the ethos of the journal and preclude maintaining the quality and integrity fundamental to JHE’s success.”

    The resignation statement reads in part,

    > In fall of 2023, for example, without consulting or informing the editors, Elsevier initiated the use of AI during production, creating article proofs devoid of capitalization of all proper nouns (e.g., formally recognized epochs, site names, countries, cities, genera, etc.) as well as italics for genera and species. These AI changes reversed the accepted versions of papers that had already been properly formatted by the handling editors.

    (Via Pharyngula.)


    The Professor Assigns Their Own Book — But Now With a Tech Bubble in the Middle Step

    The UCLA news office boasts, "Comparative lit class will be first in Humanities Division to use UCLA-developed AI system".

    The logic the professor gives completely baffles me:

    > "Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically."

    I'm trying to parse that. Really and truly I am. But it just sounds like this: "Normally, I would [do work]. But now, I can actually [do the same work]."

    I mean, was this person somehow teaching comparative literature in a way that didn't involve reading the primary sources and, I'unno, comparing them?

    The sales talk in the news release is really going all in on selling that undercoat.

    > Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching — and offer students a very similar experience. And with AI-generated lesson plans and writing exercises for TAs, students in each discussion section can be assured they’re receiving comparable instruction to those in other sections.

    Back in my day, we called that "having a book" and "writing a lesson plan".

    Yeah, going from lecture notes and slides to something shaped like a book is hard. I know because I've fuckin' done it. And because I put in the work, I got the benefit of improving my own understanding by refining my presentation. As the old saying goes, "Want to learn a subject? Teach it." Moreover, doing the work means that I can take a little pride in the result. Serving slop is the cafeteria's job.

    (Hat tip.)


    Harmonice Mundi Books: An idea for an ethical academic publisher

    So, after the Routledge thing, I got to wondering. I've had experience with a few noble projects that fizzled for lacking a clear goal, or at least a clear breathing point where we could say, "Having done this, we're in a good place. Stage One complete." And a project driven by volunteer idealism — the usual mix of spite and whimsy — can splutter out if it requires more than one person to be making it a high/top priority. If half a dozen people all like the idea but each of them ranks it 5th or 6th among things to do, academic life will ensure that it never gets done.

    With all that in mind, here is where my thinking went. I provisionally tagged the idea "Harmonice Mundi Books", because Kepler writing about the harmony of the world at the outbreak of the Thirty Years' War is particularly resonant to me. It would be a micro-publisher with the tagline "By scholars, for scholars; by humans, for humans."

    The Stage One goal would be six books. At least one would be by a "big name" (e.g., someone with a Wikipedia article that they didn't write themselves). At least one would be suitable for undergraduates: a supplemental text for a standard course, or even a drop-in replacement for one of those books that's so famous it's known by the author's last name. The idea is to be both reputable and useful in a readily apparent way.

    Why six books? I want the authors to get paid, and I looked at the standard flat fee that a major publisher paid me for a monograph. Multiplying a figure in that range by 6 is a budget that I can imagine cobbling together. Not to make any binding promises here, but I think that authors should also get a chunk of the proceeds (printing will likely be on demand), which would be a deal that I didn't get for my monograph.

    Possible entries in the Harmonice Mundi series:

    • anything you were going to send to a publisher that has since made a deal with the LLM devil

    • doctoral theses

    • lecture notes (I find these often fall short of being full-fledged textbooks, chiefly by lacking exercises, but perhaps a stipend is motivation to go the extra km)

    • collections of existing long-form online writing, like the science blogs of yore

    • text versions of video essays — zany, perhaps, but the intense essayists already have manual subtitles, so maybe one would be willing to take the next, highly experimental step

    Skills necessary for this project to take off:

    • subject-matter editor(s) — making the call about which books to accept, in case we end up with the problem we'd like to have, i.e., too many books; and supervising the revision of drafts

    • production editing — everything from the final spellcheck to a print-ready PDF

    • website person — the site could practically be static, but some kind of storefront integration would be necessary (and, e.g., rigging the server to provide LLM scrapers with garbled material would be pleasingly Puckish; a sketch of that idea follows this list)

    • visuals — logo, website design, book covers, etc. We could have all the cover art be pictures of flowers that I have taken around town, but we probably shouldn't.

    • publicity — getting authors to hear about us, and getting our books into libraries and in front of reviewers
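
    On the scraper-garbling aside in the website item above, a minimal Python sketch of the idea, assuming LLM crawlers can be picked out by User-Agent (the bot names are illustrative, not a complete list, and real deployments would likely do this at the web-server or CDN layer):

```python
# Hypothetical sketch: serve word-scrambled text to known LLM crawlers
# so the pages are useless as training data, while humans get the real
# content. The User-Agent list is illustrative only.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

SCRAPER_AGENTS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")
PAGE = "Harmonice Mundi Books: by scholars, for scholars; by humans, for humans."

def garble(text: str) -> str:
    # Shuffle the words so the scraped copy carries no usable prose.
    words = text.split()
    random.shuffle(words)
    return " ".join(words)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        is_bot = any(bot in agent for bot in SCRAPER_AGENTS)
        body = garble(PAGE) if is_bot else PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()
```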

    Anyway, I have just barely started looking into all the various pieces here. An unknown but probably large amount of volunteer enthusiasm will be needed to get the ball rolling. And cultures will have to be juggled. I know that there are some tasks I am willing to do pro bono because they are part of advancing the scientific community: I am already getting a salary, and nobody else is profiting. I suspect that other academics have made similar mental calculations (e.g., about which journals to peer review for). But I am not going to go around asking creative folks to work "for exposure".


    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 6 October 2024

    Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

    > The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
    >
    > Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

    Last week's thread

    (Semi-obligatory thanks to @dgerard for starting this)


    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 29 September 2024

    Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

    > The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
    >
    > Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

    Last week's thread

    (Semi-obligatory thanks to @dgerard for starting this)


    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 22 September 2024

    Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

    > The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
    >
    > Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

    Last week's thread

    (Semi-obligatory thanks to @dgerard for starting this)


    Random Positivity Thread: Happy Computer Memories

    Time for some warm-and-fuzzies! What happy memories do you have from your early days of getting into computers/programming, whenever those early days happened to be?

    When I was in middle school, I read an article in Discover Magazine about "artificial life" — computer simulations of biological systems. This sent me off on the path of trying to make a simulation of bugs that ran around and ate each other. My tool of choice was PowerBASIC, which was like QBasic except that it could compile to .EXE files. I decided there would be animals that could move, and plants that could also move. To implement a rule like "when the animal is near the plant, it will chase the plant," I needed to compute distances between points given their x- and y-coordinates. I knew the Pythagorean theorem, and I realized that the line between the plant and the animal is the hypotenuse of a right triangle. Tada: I had invented the distance formula!
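
    For the record, the rediscovered formula, in a minimal Python sketch (the original was PowerBASIC; the coordinates and chase radius here are made up for illustration):

```python
import math

def distance(x1, y1, x2, y2):
    # Pythagorean theorem: the separation between two points is the
    # hypotenuse of the right triangle formed by their x and y offsets.
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

# Illustrative use: an animal chases a plant once it wanders close enough.
CHASE_RADIUS = 10.0
if distance(0.0, 0.0, 3.0, 4.0) < CHASE_RADIUS:
    print("chase the plant")  # distance is 5.0, so the chase is on
```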


    Off-Topic: Music Recommendation Thread

    So, here I am, listening to the Cosmos soundtrack and strangely not stoned. And I realize that it's been a while since we've had a random music recommendation thread. What's the musical haps in your worlds, friends?


    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 7 July 2024

    Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh facts of Awful you’ll near-instantly regret.

    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

    > The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
    >
    > Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.


    Honest Government Ad | AI

    Bumping this up from the comments.

bless this jank @awful.systems

    503?

    Was anyone else getting a 503 error for a little while today?


    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 16 June 2024

    Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

    > The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
    >
    > Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.


    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 9 June 2024

    Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

    > The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
    >
    > Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.


    Neil Gaiman on spicy autocomplete

    www.tumblr.com: Neil Gaiman

    I apologize if you’ve been asked this question before I’m sure you have, but how do you feel about AI in writing? One of my teachers was “writing” stories using ChatGPT then was bragging about how go…

    > Many magazines have closed their submission portals because people thought they could send in AI-written stories.
    >
    > For years I would tell people who wanted to be writers that the only way to be a writer was to write your own stories because elves would not come in the night and do it for you.
    >
    > With AI, drunk plagiaristic elves who cannot actually write and would not know an idea or a sentence if it bit their little elvish arses will actually turn up and write something unpublishable for you. This is not a good thing.

    Cybertruck owners allege pedal problem as Tesla suspends deliveries

    arstechnica.com: Owners will have to wait until April 20 for deliveries to resume.

    > Tesla's troubled Cybertruck appears to have hit yet another speed bump. Over the weekend, dozens of waiting customers reported that their impending deliveries had been canceled due to "an unexpected delay regarding the preparation of your vehicle."
    >
    > Tesla has not announced an official stop sale or recall, and as of now, the reason for the suspended deliveries is unknown. But it's possible the electric pickup truck has a problem with its accelerator. [...] Yesterday, a Cybertruck owner on TikTok posted a video showing how the metal cover of his accelerator pedal allegedly worked itself partially loose and became jammed underneath part of the dash. The driver was able to stop the car with the brakes and put it in park. At the beginning of the month, another Cybertruck owner claimed to have crashed into a light pole due to an unintended acceleration problem.

    Meanwhile, layoffs!

    Google Books Is Indexing AI-Generated Garbage

    www.404media.co: Google said it will continue to evaluate its approach “as the world of book publishing evolves.”

    > Google Books is indexing low quality, AI-generated books that will turn up in search results, and could possibly impact Google Ngram viewer, an important tool used by researchers to track language use throughout history.

    Elon Musk’s Tunnel Reportedly Oozing With Skin-Burning Chemical Sludge

    futurism.com: Elon Musk's Boring Company has only built a few miles of tunnel underneath Vegas — but those tunnels have taken a toxic toll.

    [Eupalinos of Megara appears out of a time portal from ancient Ionia] Wow, you guys must be really good at digging tunnels by now, right?
