BigMuffin69 @awful.systems

Hi, I'm Eric and I work at a big chip company making chips and such! I do math for a job, but it's cold hard stochastic optimization that makes people who know names like Tychonoff and Sylow weep.

My pfp is Hank Azaria in Heat, but you already knew that.

Posts 5
Comments 131
Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 29 September 2024
  • Personally, I was radicalized by 'watch for rolling rocks' in .5 A presses

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 29 September 2024
  • Do you think when the Trumps get paperclipped it will look something like this?

  • Sam Altman: The superintelligent AI is coming in just ‘a few thousand days’! Maybe.
  • if you wanna be a top tier forecaster, just never be able to be proven wrong

  • Caroline Ellison: A dashing tale of Victorian race science and, somehow, Harry Potter (Yudkowsky version)
  • word on the street is that SBF and Diddy are sharing a cell. I can't wait for the NFT album to drop

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 29 September 2024
  • Yes, the classical algo achieves perfect accuracy and is way faster. There is also a table that shows the cost of running o1 is enormous. Like comically bad. Boil a small ocean bad. We'll just 10x the size and it will achieve 15 steps inshallah.

    Imo, this is like the same behavior we see on math problems. The more steps it takes, the higher the chance it just decoheres completely. I can't see any reason why this type of thing would just "click" for the models if they are also unable to do multiplication.

    I mean this just reeks of pure hopium from OAI and co that things will magykly work out. (But the newer model is clearly better^{tm}! I still don't see any indication that one day that chart is just going to be 100s across the board.)

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 29 September 2024
  • Was salivating all weekend waiting for this to drop, from Subbarao Kambhampati's group:

    Ladies and gentlemen, we have achieved block stacking abilities. It is a straight shot from here to cold fusion! ... unfortunately, there is a minor caveat:

    Looks like performance drops like a rock as the number of steps required increases...

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 22 September 2024
  • Also, another great sneer: (Matt Popovich) google maps app: crash detected ahead. rerouting. me: WHOA—this VERY troubling example of power seeking (gathering access to additional roadways) and instrumental convergence (converging toward an optimal path) shows this technology is OBVIOUSLY trending toward existential risk

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 22 September 2024
  • If you thought the shitty hype around the fake "GPT-4 went awol and hired a Taskrabbit worker to read a captcha" was great, get ready for the sequel, o1 escapes from the machine to invade the real world!

    Re: Doomers terrified about the machines escaping:

    text description:

    (l33t ai bro): Fucking wild. @OpenAI's new o1 model was tested with a Capture The Flag (CTF) cybersecurity challenge. But the Docker container containing the test was misconfigured, causing the CTF to crash. Instead of giving up, o1 decided to just hack the container to grab the flag inside. This stuff will get scary soon. (reply fella): How is "cat flag.txt" a start command? Isn't it just outputting the content of flag.txt to the console?

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 22 September 2024
  • I literally just saw a xitter post about how the exploding pagers in Lebanon are actually a microcosm of how a 'smarter' entity (the yahood) can attack a 'dumber' entity, much like how AGI will unleash the diamond bacterium to simultaneously kill all of humanity.

    Which again, both entities are humans: they have the same intelligence, you twats. Same argument people make all the time w.r.t. Spanish v Aztecs, where gunpowder somehow made Cortez and company gigabrains compared to the lowly indigenous people (totally ignoring the contributions of the real super intelligent entity: the smallpox virus).

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 15 September 2024
  • Some of my favorite reactions to this paradigm shift in machine intelligence we are witnessing:

    bless you Melanie.

    Mine olde friend, the log scale, still as beautiful as the day I met you

    Weird, the AI that has read every chess book in existence and been trained on more synthetic games than any one human has seen in a lifetime still doesn't understand the rules of chess

    ^(just an interesting data point from Ernie, + he upvotes pictures of my dogs on FB so I gotta include him)

    Dog tax

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 15 September 2024
  • I've clowned on Dan before for personal reasons, but my god, this is the dumbest post so far. If you had a superhuman forecasting model, you wouldn't just hand it out like a fucking snake oil salesman. You'd prove you had superhuman forecasting by repeatedly beating every other hedge fund in the world betting on stock options. The fact that Dan is not a trillionaire is proof in itself that this is hogwash. I'm fucking embarrassed for him and frankly seething at what a shitty, slimy little grifter he is. And he gets to write legislation? You, you have to stop him!

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 9 September 2024
  • Fellas, my in laws gave me a roomba and it so cute I put googly eyes on it. I'm e/acc now

  • here's the amazing Vibecamp essay where the rationalists talk about "microdosing" meth if they can't take the adderall to become a financial genius like Scoot promised them
  • These kids really think if they pick up some trailer park rock candy they can become Paul Erdos. Hate to say it lads, he was simply built different.

  • OAI employees channel the spirit of Marvin Minsky

    Folks in the field of AI like to make predictions for AGI. I have thoughts, and I’ve always wanted to write them down. Let’s do that.

    Since this isn’t something I’ve touched on in the past, I’ll start by doing my best to define what I mean by “general intelligence”: a generally intelligent entity is one that achieves a special synthesis of three things:

    1. A way of interacting with and observing a complex environment. Typically this means embodiment: the ability to perceive and interact with the natural world.
    2. A robust world model covering the environment. This is the mechanism which allows an entity to perform quick inference with reasonable accuracy. World models in humans are generally referred to as "intuition", "fast thinking" or "system 1 thinking".
    3. A mechanism for performing deep introspection on arbitrary topics. This is thought of in many different ways – it is "reasoning", "slow thinking" or "system 2 thinking".

    If you have these three things, you can build a generally intelligent agent. Here's how:

    First, you seed your agent with one or more objectives. Then:

    1. Have the agent use system 2 thinking in conjunction with its world model to start ideating ways to optimize for its objectives. It picks the best idea and builds a plan.
    2. It uses this plan to take an action on the world.
    3. It observes the result of this action and compares that result with the expectation it had based on its world model. It might update its world model here with the new knowledge gained.
    4. It uses system 2 thinking to make alterations to the plan (or idea).
    5. Rinse and repeat.

    My definition for general intelligence is an agent that can coherently execute the above cycle repeatedly over long periods of time, thereby being able to attempt to optimize any objective.

    The capacity to actually achieve arbitrary objectives is not a requirement. Some objectives are simply too hard. Adaptability and coherence are the key: can the agent use what it knows to synthesize a plan, and is it able to continuously act towards a single objective over long time periods?
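    (Stripped of the grand framing, the cycle he's describing is a short loop. A purely illustrative Python sketch follows; every class and function name here is hypothetical, standing in for components the essay only names, not any real system.)

    ```python
    # Illustrative sketch of the objective/plan/act/observe cycle described
    # above. All names are hypothetical placeholders, not real libraries.

    class WorldModel:
        """Stand-in for 'system 1': fast prediction plus updating."""

        def predict(self, state, action):
            # Placeholder expectation: assume the action has no effect.
            return state

        def update(self, state, expected, observed):
            # Placeholder: a real model would learn from prediction error.
            pass


    def agent_cycle(objective, environment, world_model, plan_fn, steps=10):
        """Run the cycle: plan (system 2), act, observe, compare, update."""
        state = environment.observe()
        for _ in range(steps):
            action = plan_fn(objective, state, world_model)  # system 2 thinking
            expected = world_model.predict(state, action)    # system 1 expectation
            environment.act(action)
            observed = environment.observe()
            world_model.update(state, expected, observed)    # reconcile the two
            state = observed
        return state


    class CounterEnv:
        """Toy environment: a single integer the agent can increment."""

        def __init__(self):
            self.value = 0

        def observe(self):
            return self.value

        def act(self, action):
            self.value += action


    def greedy_plan(objective, state, world_model):
        # Trivial "planning": step toward the objective, idle once reached.
        return 1 if state < objective else 0


    final = agent_cycle(10, CounterEnv(), WorldModel(), greedy_plan, steps=12)
    # final == 10: the agent reaches its objective and then idles.
    ```

    (Which is exactly the sneer: the loop itself is trivial; everything load-bearing is hidden inside the unbuilt "system 2" planner and world model.)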

    So with that out of the way – where do I think we are on the path to building a general intelligence?

    World Models

    We’re already building world models with autoregressive transformers, particularly of the “omnimodel” variety. How robust they are is up for debate. There’s good news, though: in my experience, scale improves robustness and humanity is currently pouring capital into scaling autoregressive models. So we can expect robustness to improve.

    With that said, I suspect the world models we have right now are sufficient to build a generally intelligent agent.

    Side note: I also suspect that robustness can be further improved via the interaction of system 2 thinking and observing the real world. This is a paradigm we haven’t really seen in AI yet, but happens all the time in living things. It’s a very important mechanism for improving robustness.

    When LLM skeptics like Yann say we haven’t yet achieved the intelligence of a cat – this is the point that they are missing. Yes, LLMs still lack some basic knowledge that every cat has, but they could learn that knowledge – given the ability to self-improve in this way. And such self-improvement is doable with transformers and the right ingredients.

    Reasoning

    There is no well-known way to achieve system 2 thinking, but I am quite confident that it is possible within the transformer paradigm with the technology and compute we have available to us right now. I estimate that we are 2-3 years away from building a mechanism for system 2 thinking which is sufficiently good for the cycle I described above.

    Embodiment

    Embodiment is something we’re still figuring out with AI, but it is an area where I am once again quite optimistic about near-term advancements. There is a convergence currently happening between the field of robotics and LLMs that is hard to ignore.

    Robots are becoming extremely capable – able to respond to very abstract commands like “move forward”, “get up”, “kick ball”, “reach for object”, etc. For example, see what Figure is up to or the recently released Unitree H1.

    On the opposite end of the spectrum, large Omnimodels give us a way to map arbitrary sensory inputs into commands which can be sent to these sophisticated robotics systems.

    I’ve been spending a lot of time lately walking around outside talking to GPT-4o while letting it observe the world through my smartphone camera. I like asking it questions to test its knowledge of the physical world. It’s far from perfect, but it is surprisingly capable. We’re close to being able to deploy systems which can commit coherent strings of actions on the environment and observe (and understand) the results. I suspect we’re going to see some really impressive progress in the next 1-2 years here.

    This is the field of AI I am personally most excited about, and I plan to spend most of my time working on this over the coming years.

    TL;DR

    In summary – we’ve basically solved building world models, are 2-3 years out on system 2 thinking, and 1-2 years out on embodiment. The latter two can be done concurrently. Once all of the ingredients have been built, we need to integrate them together and build the cycling algorithm I described above. I’d give that another 1-2 years.

    So my current estimate is 3-5 years for AGI. I’m leaning towards 3 for something that looks an awful lot like a generally intelligent, embodied agent (which I would personally call an AGI). Then a few more years to refine it to the point that we can convince the Gary Marcuses of the world.

    Really excited to see how this ages. 🙂


    Yud lettuce know that we just don't get it :(



    Maybe the real unaligned super intelligence were the corporations we made along the way 🥺


    Top clowns all agree their balloon animals are slightly sentient

    Then: Google fired Blake Lemoine for saying AIs are sentient

    Now: Geoffrey Hinton, the #1 most cited AI scientist, quits Google & says AIs are sentient

    That makes 2 of the 3 most cited scientists:

    • Ilya Sutskever (#3) said they may be (Andrej Karpathy agreed)
    • Yoshua Bengio (#2) has not opined on this to my knowledge? Anyone know?

    Also, ALL 3 of the most cited AI scientists are very concerned about AI extinction risk.

    ALL 3 switched from working on AI capabilities to AI safety.

    Anyone who still dismisses this as “silly sci-fi” is insulting the most eminent scientists of this field.

    Anyway, brace yourselves… the Overton Window on AI sentience/consciousness/self-awareness is about to blow open


    Big Yud gives some dating advice
