scruiser @awful.systems
Posts 2
Comments 41
"Hours and hours of content have been minted by highly-educated, prestigiously-credentialed people, consternating about the policy implications of Sam Altman’s speculative fan fiction"
  • There’s also a whole subreddit from hell about this subgenre of fiction: https://www.reddit.com/r/rational/

    /r/rational isn't just for AI fiction; it also claims to include anything with decent verisimilitude, so stuff like Hatchet and The Martian show up in its recommendation lists too, letting it claim credit for better fiction than the AI stuff.

  • "Hours and hours of content have been minted by highly-educated, prestigiously-credentialed people, consternating about the policy implications of Sam Altman’s speculative fan fiction"
  • Oh no, it's much more than a single piece of fiction, it's like an entire mini genre. If you're curious...

    A short story... where the humans are the AI! https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message It's meant to suggest what could be done with arbitrary computational power and time, which is Eliezer's only way of evaluating AI: comparing it to the fictional version with infinite compute inside his head. Expanded into a longer story here: https://alicorn.elcenia.com/stories/starwink.shtml

    Another parable by Eliezer (the genie is blatantly an AI): https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2 Fitting that his analogy for AI is a literal genie. This story also has some weird gender stuff, because why not!

    One of the longer ones: https://www.fimfiction.net/story/62074/friendship-is-optimal An MLP MMORPG AI is engineered to be able to bootstrap to singularity. It manipulates everyone into uploading into its take on My Little Pony! The author intended it as a singularity gone subtly wrong, but because they posted it to an MLP fan-fiction site in addition to linking it on lesswrong, it got an audience that unironically liked the manipulative uploading scenario and preferred it to real life.

    Gwern has taken a stab at it: https://gwern.net/fiction/clippy We made fun of Eliezer for warning about watching the training loss function; in this story the AI literally hacks its way out in the middle of training!

    And another short story: https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story

    So yeah, it's an entire genre at this point!

  • "Hours and hours of content have been minted by highly-educated, prestigiously-credentialed people, consternating about the policy implications of Sam Altman’s speculative fan fiction"
  • Some nitpicks, some of which are serious and some of which are sneers...

    consternating about the policy implications of Sam Altman’s speculative fan fiction

    Hey, the fanfiction is actually Eliezer's (who in turn copied it from older scifi); Sam Altman just popularized it as a way of milking the doom for hype!

    So, for starters, in order to fit something as powerful as ChatGPT onto ordinary hardware you could buy in a store, you would need to see at least three more orders of magnitude in the density of RAM chips—leaving completely aside for now the necessary vector compute.

    Well actually, you can get something close to as powerful on a personal computer... because the massive size of ChatGPT and the like doesn't actually improve performance that much (the most useful thing, I think, is the longer context window?).
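
    To put rough numbers on that (my own back-of-envelope, not from the article): a GPT-3-class model is ~175 billion parameters, which at 16 bits per parameter is ~350 GB of weights, but a quantized 7B model at ~4 bits per parameter is about 4 GB and runs fine on an ordinary laptop. The "three orders of magnitude" only holds if you insist on running the full-size model unquantized.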

    I actually liked one of the lawfare AI articles recently (even though it did lean into a light fantasy scenario)... https://www.lawfaremedia.org/article/tort-law-should-be-the-centerpiece-of-ai-governance . Their main idea is that corporations should be liable for near-misses: if it can be shown that the corporation nearly caused a much bigger disaster, it gets fined in accordance with the bigger disaster. Of course, US courts routinely fail to properly penalize (either in terms of incentives or in terms of compensation) corporations for harms they actually cause, so this seems like a distant fantasy to me.

    AI has no initiative. It doesn’t want anything

    That’s next on the roadmap though, right? AI agents?

    Well... if the way corporations have tried to use ChatGPT has taught me anything, it's that they'll misapply AI in any and every way that looks like it might save or make a buck. So they'll slap an AI API into a script to turn it into an "agent", despite that being entirely outside the use case of spewing words. It won't actually be agentic, but I bet it could cause a disaster all the same!

  • OpenAI’s Strawberry will turn you into paperclips any day now
  • First of all. You could make facts a token value in an LLM if you had some pre-calculated truth value for your data set.

    An extra bit of labeling on your training data set really doesn't help you that much. LLMs already make up plausible-looking citations and website links (and other data types) that are actually complete garbage even though their training data has valid citations and website links (and other data types). Labeling things as "fact" and forcing the LLM to output stuff with that "fact" label will get you output that looks (in terms of statistical structure) like valid labeled "facts" but has absolutely no guarantee of being true.

  • That tracing woodgrains piece on David Gerard is out
  • Reddit can be really hit or miss, but I'm glad subredditdrama and /r/wikipedia aren't buying TWG's bullshit. Well, some of the /r/wikipedia commenters assume TWG is merely butthurt over losing edit wars, as opposed to having a more advanced agenda, but that is fair of them.

  • That tracing woodgrains piece on David Gerard is out
  • I chose to have children, be a father and a husband, live an honest industrious life as an example to my offspring, and attempt to preserve my way of life through them.

    Wow, just a few words off the 14 words.

    I find it kind of irritating how someone who hasn't familiarized themselves with white supremacist rhetoric and methods might manage to view that phrase innocuously. But it really isn't that hard to see through the bullshit once you've familiarized yourself with the most basic dog whistles and slogans.

  • That tracing woodgrains piece on David Gerard is out
  • Wow... I took a look at that link before reading the comments/explanations here, and I was briefly confused why they were hating on him so much, before I realized he isn't radical right wing enough for them.

    Eh, you're a gay furry ex-Mormon (which is like a triple strike against you in my book) but I still like you well enough.

    It is almost sad seeing TWG trying to appeal to these people that fundamentally hate him... except he could just admit themotte is a cesspit and abandon it. But that would involve admitting that sneerclub (and David Gerard specifically) was right about the sort of people that lurked around SSC and later concentrated within themotte, so I think he's going to keep making himself suffer.

    TW knows about the propaganda war, but has very different objectives to you. Much harder to balance ones too: he needs enough Progress for surrogate gaybies, but not too much that white gay guys can't get the good lawyer jobs.

    Wow, I feel really gross agreeing with a motte poster, but they've called out TWG pretty effectively. TWG at least knows he needs things progressive enough he doesn't end up against the wall for being gay, ex-Mormon and furry (as he describes himself), yet he wants to flirt with the alt-right!

    and in case I was in danger of forgetting what the motte really is...

    Yes, we've all thrown our hat in the ring in different ways. I chose to have children, be a father and a husband, live an honest industrious life as an example to my offspring, and attempt to preserve my way of life through them.

    sure buddy, you just need to "secure the future for your people and your children"... Yeah I know the rest of the words that go in that slogan.

  • OAI employees channel the spirit of Marvin Minsky
  • I am probably giving most of them too much credit, but I think some of them took the Bitter Lesson and learned the wrong things from it. LLMs performed better than originally expected just off of context, and (apparently) scaled better with bigger models and more training than expected, so now they think they just need to crank up the size and tweak things slightly (i.e. "prompt engineering" and RLHF) and don't appreciate the limits built into the entire approach.

    The annoying thing about another winter is that it would probably result in funding being cut for other research. And laymen don't appreciate all the academic funding that goes into research for decades before an approach becomes interesting and viable enough to scale up and commercialize (and then gets overhyped and oversold before some more modest practical usages become common, and relabeled as something other than AI).

    Edit: or, more cynically, the leaders and hype-men know that algorithmic advances aren't an automatic "dump money in, get disruptive product out" process, so they don't bother putting as much monetary investment or hype into algorithmic advances. Like, compare the attention paid to Yann LeCun talking about algorithmic developments vs. Sam Altman promising grad-student-level LLMs (as measured by a spurious benchmark) in two years.

  • AI doomers are all trying to find the guy building the AI doom machines
  • Broadly? There was a gradual transition where Eliezer started paying attention to deep neural network approaches and commenting on them, as opposed to dismissing the entire DNN paradigm? The "watch the loss function" and similar gaffes were towards the middle of this period. The AI Dungeon panic/hype marks the beginning, iirc?

  • AI doomers are all trying to find the guy building the AI doom machines
  • iirc the LW people had betted against LLMs creating the paperclypse, but they now did a 180 on this and they now really fear it going rogue

    Eliezer was actually ahead of the curve on overhyping LLMs! Even as far back as AI Dungeon he was claiming they had an intuitive understanding of physics (which even current LLMs fail at if you get clever with questions to stop them from pattern matching). You are correct that, going back far enough, Eliezer really underestimated neural networks. Mid-2000s and late-2000s Sequences posts and comments treat neural network approaches to AI as cargo-cult, voodoo computer science: blindly imitating the brain, sympathetic-magic style, in hopes of capturing intelligence (well, this is actually a decent criticism of some of the current hype, so partial credit again!). And in the mid 2010s, Eliezer was focusing MIRI's efforts on abstractions like AIXI instead of more practical things like neural network interpretability.

  • AI doomers are all trying to find the guy building the AI doom machines
  • I unironically kinda want to read that.

    Luckily LLMs are getting better at churning out bullshit, so pretty soon I can read wacky premises like that without a human having to degrade themselves to write it! I found a new use case for LLMs!

  • AI doomers are all trying to find the guy building the AI doom machines
  • Sneerclub tried to warn them (well, not really, but some of our mockery could be interpreted as warning) that the tech bros were just using their fear mongering as a vector for hype. Even as far back as the OG mid-2000s lesswrong, a savvy observer could note that much of the funding they received was a way of accumulating influence for people like Peter Thiel.

  • [long] Some tests of how much AI "understands" what it says (spoiler: very little)
  • Careful, if you present the problem and solution that way, AI tech bros will try pasting an LLM and a Computer Algebra System (which already exist) together, invent a fancy buzzword for it, act like they invented something fundamentally new, devise some benchmarks that break typical LLMs but that their Frankenstein kludge can ace, and then sell the hype (actual consumer applications are luckily not required in this cycle, but they might try some anyway).

    I think there is some promise to the idea of an architecture similar to an LLM with components able to handle math like a CAS. It won't fix a lot of LLM issues, but maybe some fundamental ones (like the ability to count or to hold an internal state) will improve. And (as opposed to an actually innovative architecture) simply pasting LLM output into CAS input and then the CAS output back into LLM input (which, let's be honest, is the first thing tech bros will try, as it doesn't require much basic research; see the sketch below) will not help that much and will likely generate an entirely new breed of hilarious errors and bullshit (I like the term bullshit instead of hallucination; it captures the connotation that the errors are of a kind with the normal output).
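
    For concreteness, here's a minimal sketch of that naive pasting loop, with `call_llm` as a stand-in for whatever chat-completion API gets used and SymPy playing the CAS (the function names and prompts are mine, purely illustrative):

    ```python
    # Minimal sketch of the naive LLM -> CAS -> LLM loop described above.
    # `call_llm` is a placeholder for a real LLM API call; SymPy plays the CAS.
    import sympy

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for an actual chat-completion API")

    def answer_with_cas(question: str) -> str:
        # 1. Ask the LLM to translate the question into a bare math expression.
        expr_text = call_llm(
            "Rewrite the following as a single SymPy expression, nothing else:\n"
            + question
        )
        # 2. Hand the expression to the CAS. If the LLM emitted plausible-looking
        #    garbage, sympify() either raises or faithfully evaluates the wrong
        #    expression -- the new breed of errors predicted above.
        result = sympy.simplify(sympy.sympify(expr_text))
        # 3. Feed the CAS result back to the LLM to phrase the final answer.
        return call_llm(
            f"Question: {question}\n"
            f"A computer algebra system evaluated it to: {result}\n"
            "State the final answer in plain language."
        )
    ```

    Note that nothing in this loop checks whether the translation in step 1 actually matched the question, which is exactly where the bullshit leaks back in.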

  • In Case You Had Any Doubts About Manifest Being Full Of Racists

    Link: My experience at the controversial Manifest 2024 — EA Forum (forum.effectivealtruism.org)

    So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested by racists. The post doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining the post uses the word controversial in the title, complaining about the usage of the term racist, complaining about the threat to their freeze peach and open discourse of ideas posed by banning racists, etc.).


    Sneerquence Classic: "Shut up and do the impossible!" (ironic in hindsight given the doomerism)

    Link: Shut up and do the impossible! — LessWrong (www.lesswrong.com)

    This is a classic sequence post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other sequence posts. It is also especially ironic given Eliezer's recent switch to doomerism with his new phrases of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

    Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.
