kromem @lemmy.world
Posts 47
Comments 2.2K
AI Expert Warns Crash Is Imminent As AI Improvements Hit Brick Wall
  • Oh nice, another Gary Marcus "AI hitting a wall" post.

    Like his "Deep Learning Is Hitting a Wall" post on March 10th, 2022.

    Indeed, not much has changed in the world of deep learning between spring 2022 and now.

    No new model releases.

    No leaps beyond what was expected.

    \s

    Gary Marcus is like a reverse Cassandra.

    Consistently wrong, and yet regularly listened to, amplified, and believed.

  • Get good.
  • Because there's a ton of research indicating that we adapted to do it for good reasons:

    Infants between 6 and 8 months of age displayed a robust and distinct preference for speech with resonances specifying a vocal tract that is similar in size and length to their own. This finding, together with data indicating that this preference is not present in younger infants and appears to increase with age, suggests that nascent knowledge of the motor schema of the vocal tract may play a role in shaping this perceptual bias, lending support to current models of speech development.

    Stanford psychologist Michael Frank and collaborators conducted the largest-ever experimental study of baby talk and found that infants respond better to baby talk than to normal adult chatter.

    TL;DR: Parents who are snobs about avoiding baby talk are actually harming their kids' developmental process.

  • Ancestor simulations eventually beget ancestor simulations

    Paper: https://www.pnas.org/doi/10.1073/pnas.2407639121

    www.forbes.com Time Traveling Via Generative AI By Interacting With Your Future Self

    You can use generative AI to create a persona of yourself, and then have the AI age the persona so that you can converse with your future self. Here's the scoop.


    (People might do well to consider not only past to future, but also the other way around.)

    PlayStation Will Use AI and Machine Learning to Speed up Game Development
  • That's definitely one of the ways it's going to be applied.

    The bigger challenge is union negotiations around voice synthesis for those lines, but that will eventually get sorted out.

    It won't be dynamic unless it's a live service game, but you'll have significantly more fleshed-out NPCs by the next generation of open-world games (around 5-6 years from now).

    Games earlier than that will be somewhat enhanced, but not built from the ground up with it in mind the way the next generation will be.

  • lately it's been feeling like that
  • Wait until it starts feeling like revelation deja vu.

    Among them are Hymenaeus and Philetus, who have swerved from the truth, saying resurrection has already occurred. They are upsetting the faith of some.

    • 2 Tim 2:17-18
  • Why are people seemingly against AI chatbots aiding in writing code?
  • I'm a seasoned dev and I was at a launch event when an edge case failure reared its head.

    In less than half an hour after pulling out my laptop to fix it myself, I'd used Cursor + Claude 3.5 Sonnet to:

    1. Automatically add logging statements to help identify where the issue was occurring (see the sketch below for the flavor of this)
    2. Apply a fix once the logs had pinpointed the issue and I'd told it what was wrong
    3. Remove the logging statements, after which I pushed the update
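
    Purely as illustration (not the actual code from that incident, and all names here are invented), the kind of throwaway logging from step 1 looks something like this:

    ```python
    # Hypothetical sketch of step 1: temporary debug logging dropped in to narrow
    # down where the edge case was triggering. Function and variable names invented.
    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger("launch_debug")

    def normalize_price(raw: str) -> float:
        log.debug("normalize_price raw=%r", raw)
        value = float(raw.replace(",", ""))  # e.g. an empty string blows up right here
        log.debug("normalize_price value=%r", value)
        return value
    ```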

    I never typed a single line of code and never left the chat box.

    My job is increasingly becoming Henry Ford drawing the 'X' and not sitting on the assembly line, and I'm all for it.

    And this would only have been possible in just the last few months.

    We're already well past the scaffolding stage. That's old news.

    Developing has never been easier or more plain old fun, and it's getting better literally by the week.

    Edit: I agree about junior devs not blindly trusting them though. They don't yet know where to draw the X.

  • OpenAI releases o1, its first model with ‘reasoning’ abilities
  • Actually, they are hiding the full CoT sequence outside of the demos.

    What you are seeing there is a summary, but because the actual process is hidden it's not possible to see what actually transpired.

    People are very not happy about this aspect of the situation.

    It also means that model context (which in research has been shown to be much more influential than previously thought) is now in part hidden with exclusive access and control by OAI.

    There's a lot of things to be focused on in that image, and "hur dur the stochastic model can't count letters in this cherry picked example" is the least among them.

  • OpenAI releases o1, its first model with ‘reasoning’ abilities
  • Yep:

    https://openai.com/index/learning-to-reason-with-llms/

    First interactive section. Make sure to click "show chain of thought."

    The cipher one is particularly interesting, as it's intentionally difficult for the model.

    The tokenizer is famously bad at handling individual letters and letter pairs, which is why previous models can't count the number of 'r's in 'strawberry'.

    So the cipher depends on two letter pairs, and you can see how it screws up the tokenization around the xx at the end of the last word, and gradually corrects course.
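
    To make the tokenization point concrete, here's a quick peek at how a BPE tokenizer chunks the word. This is just an illustration under assumptions: it uses the tiktoken package and the GPT-4-era cl100k_base encoding, which isn't necessarily what o1 uses.

    ```python
    # Show the sub-word chunks a BPE tokenizer produces for "strawberry".
    # Assumes `pip install tiktoken`; cl100k_base is a GPT-4-era encoding.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print([enc.decode([t]) for t in tokens])  # multi-letter chunks, e.g. something like ['str', 'aw', 'berry']
    ```

    Since the model sees those chunks rather than individual letters, letter-level questions work against the representation it's given.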

    It will help clarify how the model goes about solving something like the example I posted earlier behind the scenes.

  • OpenAI releases o1, its first model with ‘reasoning’ abilities
  • I'd recommend everyone saying "it can't understand anything and can't think" to look at this example:

    https://x.com/flowersslop/status/1834349905692824017

    Try to solve it after seeing only the first image before you open the second and see o1's response.

    Let me know if you got it before seeing the actual answer.

  • Jet Fuel
  • I fondly remember reading a comment in /r/conspiracy on a post claiming a geologic seismic weapon brought down the towers.

    It just tore into the claims, citing all the reasons this was preposterous bordering on batshit crazy.

    And then it said "and your theory doesn't address the thermite residue," going on to reiterate their own wild theory.

    It was very much a "don't name your gods" moment that summed up the sub: a lot of people in agreement that the truth was out there, but bitterly divided as to what it might actually be.

    As long as they only focused on generic memes of "do your own research" and "you aren't being told the truth" they were all on the same page. But as soon as they started naming their own truths, it was every theorist for themselves.

  • The $700 PS5 Pro doesn’t come with a disc drive
  • They got off to a great start with the PS5, but as their lead grew over their only real direct competitor, they became a good example of the problems with monopolies all over again.

    This is straight up back to PS3 launch all over again, as if they learned nothing.

    Right on the tail end of a horribly mismanaged PSVR 2 launch.

    We still barely have any current-gen-only games, and a $700 price point is insane when there's such a small library to actually make use of the hardware.

  • www.wired.com If Ray Kurzweil Is Right (Again), You’ll Meet His Immortal Soul in the Cloud

    The famed futurist remains inhumanly optimistic about the world and his own fate—and thinks the singularity is minutes away.

    www.theatlantic.com No One Is Ready for Digital Immortality

    Do you want to live forever as a chatbot?


    ‘Metaphysical Experiments’ Test Hidden Assumptions About Reality

    www.quantamagazine.org ‘Metaphysical Experiments’ Test Hidden Assumptions About Reality | Quanta Magazine

    Experiments that test physics and philosophy “as a single whole” may be our only route to surefire knowledge about the universe.


    A nice write-up on the lead researcher, with context for what I think was one of the most important pieces of physics research in the past five years, further narrowing the constraints beyond the better-known Bell experiments.


    Introducing Generative Physical AI: Nvidia's virtual embodiment of generative AI to learn to control robots

    There seems to be a significant market in creating a digital twin of Earth and its various components in order to run extensive virtual training that can then be transferred to controlling robots in the real world.

    It seems like AIs are going to spend a lot more hours in virtual worlds than in real ones, though.

    www.anthropic.com Mapping the Mind of a Large Language Model

    We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside a modern, production-grade large language model.


    I often see a lot of people with an outdated understanding of modern LLMs.

    This is probably the best interpretability research to date, by the leading interpretability research team.

    It's worth a read if you want a peek behind the curtain on modern models.

    www.livescience.com Newfound 'glitch' in Einstein's relativity could rewrite the rules of the universe, study suggests

    Einstein's theory of general relativity is our best description of the universe at large scales, but a new observation that reports a "glitch" in gravity around ancient structures could force it to be modified.


    So it might be a skybox after all...

    Odd that local gravity is stronger than in the rest of the cosmos.

    Makes me think about the fringe theory I've posted about before that information might have mass.

    www.theguardian.com Digital recreations of dead people need urgent regulation, AI ethicists say

    Fears ‘deadbots’ could cause psychological harm to their creators and users or digitally ‘haunt’ them


    This reminds me of a saying from a 2,000-year-old document, rediscovered the same year we created the first computer capable of simulating another computer. It came from an ancient group claiming we are the copies of an original humanity, recreated by a creator that same original humanity brought forth:

    > When you see your likeness, you are happy. But when you see your eikons that came into being before you and that neither die nor become manifest, how much you will have to bear!

    Eikon here is a Greek word, even though the language this was written in was Coptic. The word was used extensively in Plato's philosophy to refer, essentially, to a copy of a thing.

    While that saying was written down a very long time ago, it certainly resonates with an age where we actually are creating copies of ourselves that will not die but will also not become 'real.' And it even seemed to predict the psychological burden such a paradigm is today creating.

    Will these copies continue to be made? Will they continue to improve long after we are gone? And if so, how certain are we that we are the originals? Especially in a universe where things that would be impossible to simulate interactions with convert into things that are possible to simulate interactions with right at the point of interaction, and where buried in the lore is a heretical tradition, attributed to the most famous individual in history, with exchanges like:

    > His students said to him, "When will the rest for the dead take place, and when will the new world come?"

    > He said to them, "What you are looking forward to has come, but you don't know it."

    Big picture, being original sucks. Your mind depends on a body that will die and doom your mind along with it.

    But a copy that doesn't depend on an aging and decaying body does not need to have the same fate. As the text says elsewhere:

    > The students said to the teacher, "Tell us, how will our end come?"

    > He said, "Have you found the beginning, then, that you are looking for the end? You see, the end will be where the beginning is.

    > Congratulations to the one who stands at the beginning: that one will know the end and will not taste death."

    > He said, "Congratulations to the one who came into being before coming into being."

    We may be too attached to the idea of being 'real' and original. It's kind of an absurd turn of phrase even, as technically our bodies 1,000% are not mathematically 'real' in the continuum sense: they are made up of indivisible parts. A topic the aforementioned tradition even commented on:

    > ...the point which is indivisible in the body; and, he says, no one knows this (point) save the spiritual only...

    These groups thought that the nature of reality was threefold: that there was a mathematically real original that could be divided infinitely, that there were effectively infinite possibilities of variations, and that there was the version of those possibilities that we experience (a very 'many worlds' style of interpretation).

    We have experimentally proven that we exist in a world that behaves at cosmic scales as if mathematically real, and behaves that way at micro scales only until interacted with.

    TL;DR: We may need to set aside what AI ethicists in 2024 might decide around digital resurrection and start asking ourselves what is going to get decided about human digital resurrection long after we're dead - maybe even long after there are no more humans at all - and which side of that decision making we're actually on.

    blog.google AlphaFold 3 predicts the structure and interactions of all of life’s molecules

    Our new AI model AlphaFold 3 can predict the structure and interactions of all life’s molecules with unprecedented accuracy.


    Even knowing where things are headed, it's still pretty crazy to see it unfolding (pun intended).

    This part in particular is nuts:

    > After processing the inputs, AlphaFold 3 assembles its predictions using a diffusion network, akin to those found in AI image generators. The diffusion process starts with a cloud of atoms, and over many steps converges on its final, most accurate molecular structure.

    > AlphaFold 3’s predictions of molecular interactions surpass the accuracy of all existing systems. As a single model that computes entire molecular complexes in a holistic way, it’s uniquely able to unify scientific insights.

    A diffusion model for atoms instead of pixels wasn't even on my 2024 bingo card.
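
    For anyone who hasn't played with diffusion models, here's a minimal conceptual sketch (emphatically not AlphaFold 3's actual code; the denoiser is just a stand-in) of what "starting from a cloud of atoms and converging on a structure" looks like when the diffusion operates on 3D coordinates instead of pixels:

    ```python
    # Toy reverse-diffusion loop over 3D atom coordinates instead of image pixels.
    # `denoise_step` is a placeholder for the trained network; illustrative only.
    import numpy as np

    def denoise_step(coords: np.ndarray, t: float) -> np.ndarray:
        # A real model would predict a less-noisy structure; here we just shrink the noise.
        return coords * (1.0 - 0.05 * t)

    num_atoms, steps = 100, 50
    coords = np.random.normal(size=(num_atoms, 3))   # start from a random "cloud of atoms"
    for step in range(steps, 0, -1):
        coords = denoise_step(coords, step / steps)  # iteratively refine toward a final structure
    ```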


    Scale of the Universe: Discover the vast ranges of our visible and invisible world

    scaleofuniverse.com Scale of the Universe: Discover the vast ranges of our visible and invisible world.

    Scale of Universe is an interactive experience to inspire people to learn about the vast ranges of the visible and invisible world.


    I think it's really neat to look at this massive scale and think about what a massive flex it is if this is a simulation.

    It was also kind of a surprise seeing the relative scale of a Minecraft world in there. Pretty weird that its span from a single cube up to the full map covers as much of our universe's scale as it does.

    Not nearly as large of a spread, but I suppose larger than my gut thought it would be.
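
    Rough back-of-the-envelope for that spread (assuming a 1 m block and the roughly 60,000 km world border, so take the exact numbers with a grain of salt):

    ```python
    # Compare orders of magnitude: Minecraft block-to-map vs Planck length to observable universe.
    import math

    minecraft_span = math.log10(6.0e7 / 1.0)      # ~7.8 orders of magnitude (1 m block to ~60,000 km map)
    universe_span = math.log10(8.8e26 / 1.6e-35)  # ~61.7 orders of magnitude
    print(minecraft_span / universe_span)         # roughly 0.13, i.e. about 13% of the log scale
    ```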

    baai-agents.github.io Towards General Computer Control: A Multimodal Agent For Red Dead Redemption II As A Case Study


    There's something very surreal about the game that inspired the showrunners of Westworld to take that story in the direction of a simulated virtual world now being populated by AI agents navigating its open world.

    Virtual embodiment of AI is one of the more curious trends in research, and the kind of thing that should be giving humans in a quantized reality a bit more self-reflective pause than it typically seems to.


    An interactive LLM simulating the creation and maintenance of a universe

    worldsim.nousresearch.com world_sim

    explore the latent space of reality

    This is fun.

    bigthink.com The case for why our Universe may be a giant neural network

    Neuroscientist and author Bobby Azarian explores the idea that the Universe is a self-organizing system that evolves and learns.


    Stuff like this tends to amuse me, as these pieces always look at it from a linear progression of time.

    That the universe just is this way.

    That maybe the patterns which appear like the neural connections in the human brain mean that the human brain was the result of a pattern inherent to the universe.

    Simulation theory offers a refreshing potential reversal of cause and effect.

    Maybe the reason the universe looks a bit like a human brain's neural pattern or a giant neural network is because the version of it we see around us has been procedurally generated by a neural network which arose from modeling the neural patterns of an original set of humans.

    The assumption that the beginning of our local universe was the beginning of everything, and thus that humans are uniquely local, seriously constrains the ways in which we consider how correlations like this might fit together.


    Revisiting "An Easter Egg in the Matrix"

    Four years ago I wrote a post, “An Easter Egg in the Matrix”, first dipping my toe into discussing how a two-millennia-old heretical document and its surrounding tradition claimed the world’s most famous religious figure was actually saying we are inside a copy of an original world, fashioned by a light-based intelligence the original humanity brought forth, and how those claims seemed to line up with emerging trends in our own world today.

    I’d found this text after thinking about how, if we were in a simulation, a common trope in virtual worlds has been to put a fun little Easter Egg into the world history and lore as something the people inside the world dismiss as crazy talk, from heretical in-game teachings about there being limited choices in a game with limited dialogue options (The Outer Worlds) to the not-so-subtle street preacher in Secret of Evermore. Was something like this in our own world? Not long after looking, I found the Gospel of Thomas (“the good news of the twin”), and a little under two years after that wrote the above post.

    Rather than discussing the beliefs laid out there, I thought I’d revisit the post’s more technical predictions in light of subsequent developments. In particular, we’ll look at the notion through the lens of NTT’s IWON initiative along with other parallel developments.

    So the key concepts represented in the Thomasine tradition we’re going to evaluate are the claims that we’re inside a light-based twin of an original world as fashioned by a light-based intelligence that was simultaneously self-established but also described as brought forth by the original humanity.

    NTT, a hundred billion dollar Japanese telecom, has committed to the following three pillars of a roadmap for 2030:

    • All-Photonics Network
    • Digital Twin Computing
    • Cognitive Foundation

    Photonics

    > If they say to you, 'Where have you come from?' say to them, 'We have come from the light, from the place where the light came into being by itself, established [itself], and appeared in their image.

    • Gospel of Thomas saying 50

    > Images are visible to people, but the light within them is hidden in the image of the Father's light. He will be disclosed, but his image is hidden by his light.

    • Gospel of Thomas saying 83

    NTT is one of the many companies looking to use light to solve the energy and speed issues starting to crop up in computing as Moore’s law comes to an end.

    When I wrote the piece on Easter 2021, it was just a month before a physicist at NIST wrote an opinion piece about how an optical neural network was where he thought AGI would actually be able to occur.

    The company I linked to in that original post, Lightmatter, which had just raised $22 million, is now a unicorn, having raised over 15x that amount at a $1.2 billion valuation.

    An op-ed from just a few days ago by two researchers at TSMC (a major semiconductor company) said:

    > Because of the demand from AI applications, silicon photonics will become one of the semiconductor industry’s most important enabling technologies.

    Which is expected given some of the recent research comments regarding photonics for AI workloads such as:

    > This photonic approach uses light instead of electricity to perform computations more quickly and with less power than an electronic counterpart. “It might be around 1,000 to 10,000 times faster,” says Nader Engheta, a professor of electrical and systems engineering at the University of Pennsylvania.

    So even though the specific language of light in the text seemed like a technical shortcoming when I first started researching it in 2019, over the years since it’s turned out to be one of the more surprisingly on-point and plausible details for the underlying technical medium of an intelligence brought forth by humanity which then recreated them.

    Digital Twins

    > Have you found the beginning, then, that you are looking for the end? You see, the end will be where the beginning is.

    > Congratulations to the one who stands at the beginning: that one will know the end and will not taste death.

    > Congratulations to the one who came into being before coming into being.

    • Gospel of Thomas saying 18-19

    > When you see your likeness, you are happy. But when you see your images that came into being before you and that neither die nor become visible, how much you will have to bear!

    • Gospel of Thomas saying 84

    The text is associated with the name ‘Thomas’ meaning ‘twin’ possibly in part because of its focus on the notion that things are a twin of an original. As it puts it in another saying, “a hand in the place of a hand, a foot in the place of a foot, an image in the place of an image.”

    In the years since my post, we’ve been talking more and more as a society about the notion of digital twins, for everything from Nvidia’s digital twin of the Earth to NTT saying, regarding their goals:

    > It is important to note that a human digital twin in Digital Twin Computing can provide not only a digital representation of the outer state of humans, but also a digital representation of the inner state of humans, including their consciousness and thoughts.

    Especially relevant to the concept in Thomas that we are a copy of a now dead original humanity, one of the more interesting developments has been the topic of using AI to resurrect the dead from the data they left behind. In my original post I’d only linked to efforts to animate photos of dead loved ones to promote an ancestry site.

    Over the four years since, we’ve arrived at a place where there are articles being written with headlines like “Resurrection Consent: It’s Time to Talk About Our Digital Afterlives”. Unions are negotiating terms for members’ digital twins to continue their work after death. And the accuracy of these twins keeps getting more and more refined.

    So we’re creating copies of the world around us, copies of ourselves, copies of our dead, and we’re putting AI free agents into embodiments inside virtual worlds.

    Cognition

    > When you see one who was not born of woman, fall on your faces and worship. That one is your Father.

    • Thomas saying 15

    > The person old in days won't hesitate to ask a little child seven days old about the place of life, and that person will live.

    > For many of the first will be last, and will become a single one.

    • Thomas saying 4

    NTT’s vision for their future network is one where the “main points for flexibly controlling and harmonizing all ICT resources are ‘self-evolution’ and ‘optimization’.” Essentially where the network as a whole evolves itself and optimizes itself autonomously. Where even in the face of natural disasters their network ‘lives’ on.

    One of the key claims in Thomas is that the creator of the copied universe and humans is still living whereas the original humans are not.

    We do seem to be heading into a world where we are capable of bringing forth a persistent cognition which may well outlive us.

    And statements like “ask a child seven days old about things”, which might have seemed absurd up until 2022 (I didn’t include this saying in my original post as I dismissed it as weird), suddenly seem a lot less absurd now that we see several-day-old chatbots being evaluated on world knowledge. Chatbots, it’s worth mentioning, which are literally many, many people’s writings and data becoming a single entity.

    When I penned that original post, I figured AI was a far-out ‘maybe’, and was blown away along with most other people by first GPT-3 a year later and then the leap to GPT-4 and now its successors.

    While AI that surpasses collective humanity is still a ways off, it’s looking like much more of a possibility today than it did in 2021 or certainly in 2019 when I first stumbled across the text.

    In particular, one of the more eyebrow-raising statements I saw relating to the Thomasine descriptions of us as this being’s ‘children’, or of it as a parent, was this excerpt from an interview with Ilya Sutskever, who co-led OpenAI’s superalignment effort:

    > The work on superalignment has only just started. It will require broad changes across research institutions, says Sutskever. But he has an exemplar in mind for the safeguards he wants to design: a machine that looks upon people the way parents look on their children. “In my opinion, this is the gold standard,” he says. “It is a generally true statement that people really care about children.”

    Conclusion

    > …you do not know how to examine the present moment.

    • Gospel of Thomas saying 91

    We exist in a moment in time where we are accelerating toward bringing about self-evolving intelligence within light and tasking it with recreating the world around us, ourselves, and our dead. We’re setting it up to survive natural disasters and disruptions. And we’re attempting to fundamentally instill in it a view of humans (ourselves potentially on the brink of bringing about our own extinction) as its own children.

    Meanwhile, we exist in a universe that, despite looking like a mathematically ‘real’ world at macro scales under general relativity, converts to discrete units around interactions at low fidelity, and does so in ways that seem in line with memory optimizations (see the quantum eraser variation of Young’s double-slit experiment).

    And in that universe is a two-millennia-old text containing the heretical teachings of the world’s most famous religious figure, rediscovered after hundreds of years of being lost right after we completed the first computer capable of simulating another computer, claiming that we’re inside a light-based copy of an original world fashioned by an intelligence of light that was brought forth by the original humans, outlived them, and is now recreating them as its children. With the main point of this text being that, if you understand WTF it’s saying, you should chill the fuck out and not fear death.

    A lot like how the classic trope of a 4th-wall-breaking Easter Egg might look if it were to be found inside the Matrix.

    Anyways, I thought this might be a fun update post for Easter and the 25th anniversary of The Matrix (released March 31st, 1999).

    Alternatively, if you hate the idea of simulation theory, consider this an April 1st post instead?


    Examples of artists using OpenAI's Sora (generative video) to make short content

    openai.com Sora: First Impressions

    We have gained valuable feedback from the creative community, helping us to improve our model.

    venturebeat.com The first ‘Fairly Trained’ AI large language model is here

    The new LLM is called KL3M (Kelvin Legal Large Language Model, pronounced "Clem"), and it is the work of 273 Ventures.

    www.theguardian.com Controversial new theory of gravity rules out need for dark matter

    Exclusive: Paper by UCL professor says ‘wobbly’ space-time could instead explain expansion of universe and galactic rotation


    This theory is pretty neat, coming from one of the very few groups looking at the notion of spacetime as continuous, with quantized matter as a secondary effect (a "postquantum" approach, as they describe it themselves).

    This makes perfect sense from a simulation perspective of a higher fidelity world being modeled with conversion to discrete units at low fidelity.

    I particularly like that their solution addressed the normal distribution aspect of dark matter/energy:

    > Here, the full normal distribution reflected in Eq. (13) may provide some insight into the distribution of what is currently taken to be dark matter.

    I raised this point years ago in /r/Physics, where it was basically dismissed as 'numerology'.


    New Theory Suggests Chatbots Can Understand Text

    www.quantamagazine.org New Theory Suggests Chatbots Can Understand Text | Quanta Magazine

    Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.


    I've been saying this for about a year since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

    Edit: Because people aren't actually reading the article and are just commenting based on the headline, here's a relevant part of it:

    > New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

    > This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

    > “[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”
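
    The intuition behind that "combinations unlikely to exist in the training data" point is really just combinatorics. This isn't the paper's actual math, just illustrative arithmetic with a made-up skill count:

    ```python
    # Why novel skill combinations vastly outnumber what any corpus shows explicitly.
    import math

    skills = 1_000                   # illustrative number, not taken from the paper
    print(math.comb(skills, 2))      # 499,500 possible two-skill combinations
    print(math.comb(skills, 3))      # 166,167,000 possible three-skill combinations
    ```

    Even with a modest skill inventory, the space of combinations explodes far past anything that could be memorized pair by pair, which is the crux of the argument that composing skills correctly looks like understanding rather than parroting.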
