masonlee @lemmy.world
Posts 4
Comments 14
AI companies are violating a basic social contract of the web and ignoring robots.txt
  • Hi! Thanks for the conversation. I’m aware of the 2022 survey referenced in the article. Notably, in only one year’s time, expected timelines have advanced significantly. Here is that survey author’s latest update: https://arxiv.org/abs/2401.02843 (click on PDF in the sidebar)

    I consider Deep Learning to be new and a paradigm shift because only recently have we had the compute to prove its effectiveness. And the Transformer paradigm enabling LLMs is from 2017. I don’t know what counts as new for you. (Also I wouldn’t myself call it “programming” in the traditional sense— with neural nets we’re more “growing” AI, but you probably know this.)

    If you are reading me as saying that generative AI alone scales to AGI, we are talking past each other. But I do disagree with you and think Hinton and others are correct where they show there is already some form of reasoning and understanding in these models. (See https://youtu.be/iHCeAotHZa4 for a recent Hinton talk.) I don’t doubt that additional systems will be developed to add or improve reasoning and planning in AI processes, and I have no reason to doubt your earlier assertion that it will be a different additional system or paradigm. We don’t know when the breakthroughs will come. Maybe it’s “Tree of Thoughts”, maybe it’s something else. Things are moving fast. (And we’re already at the point where AI is used to improve next-gen AI.)

    At any rate, I believe my initial point remains regardless of one’s timelines: it is the goal of the top AI labs to create AGI. To me, this is fundamentally a dangerous mission because of concerns raised in papers such as “Natural Selection Favors AIs over Humans”. (Not to mention the concerns raised in “An Overview of Catastrophic AI Risks”, many of which apply to even today’s systems.)

    Cheers and wish us luck!

  • AI companies are violating a basic social contract of the web and ignoring robots.txt
  • Before Deep Learning recently shifted the AI computing paradigm, I would have written exactly what you wrote. But as of late, the opinion that we need yet another type of hardware to surpass human intelligence seems increasingly rare. Multimodal generative AI is already pretty general. For it to count as AGI for you, would it need the addition of continuous learning and agentification? (Or are you looking for “consciousness”?)

    That said, I’m all for a new paradigm, and favor Russell’s “provably beneficial AI” approach!

  • AI companies are violating a basic social contract of the web and ignoring robots.txt
  • Seven years ago I would have told you that GPT-4 was sci-fi, and I expect you would have said the same, as would most every AI researcher. The deep learning revolution came as a shock to most. We don’t know when the next breakthrough toward agentification will come, but given the funding now, we should expect it soon. Anyways, if you’re ever interested to learn more about unsolved fundamental AI safety problems, the book “Human Compatible” by Stuart Russell is excellent. Also “Uncontrollable” by Darren McKee just came out (I haven’t read it yet) and is said to be a great introduction to the bigger fundamental risks. A lot to think about; just saying I wouldn’t be quick to dismiss it. Cheers.

  • AI companies are violating a basic social contract of the web and ignoring robots.txt
  • Ah, I understand you now. You don’t believe we’re close to AGI. I don’t know what to tell you. We’re moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You’ve seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.

  • Sam Altman Says AI Using Too Much Energy, Will Require Breakthrough Energy Source
  • It’s the further research being done on top of the breakthrough tech enabling the chatbot applications that people are worried about. It’s basically big tech’s mission now to build Ultron, and they aren’t slowing down.

  • OpenAI, Google will watermark AI-generated content to hinder deepfakes, misinfo
  • Here, some big names are working on a standard for chaining digital signatures on media files: https://c2pa.org.

    Their idea is that the first signature would come from the camera sensor itself, and every further modification adds to the signature chain.
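
    To make the chaining idea concrete, here is a minimal sketch in Python. This is not the real C2PA format or API (actual C2PA manifests are CBOR/JUMBF structures signed with X.509 certificates); the SignedManifest structure and the HMAC-based “signature” are illustrative stand-ins for the chained-signature concept:

    ```python
    import hashlib
    import hmac
    from dataclasses import dataclass
    from typing import Optional

    # Illustrative only: real C2PA manifests are certificate-signed binary
    # structures. HMAC here is a stand-in to show the chaining idea.

    @dataclass
    class SignedManifest:
        actor: str           # who produced this step (camera, editor, ...)
        action: str          # e.g. "captured", "cropped"
        asset_hash: str      # hash of the media bytes after this step
        prev_signature: str  # signature of the previous manifest ("" for the first)
        signature: str       # covers all fields above

    def sign_step(key: bytes, actor: str, action: str,
                  asset_bytes: bytes, prev: Optional[SignedManifest]) -> SignedManifest:
        asset_hash = hashlib.sha256(asset_bytes).hexdigest()
        prev_sig = prev.signature if prev else ""
        payload = f"{actor}|{action}|{asset_hash}|{prev_sig}".encode()
        sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return SignedManifest(actor, action, asset_hash, prev_sig, sig)

    def verify_chain(key: bytes, chain: list) -> bool:
        prev_sig = ""
        for m in chain:
            payload = f"{m.actor}|{m.action}|{m.asset_hash}|{prev_sig}".encode()
            expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, m.signature):
                return False  # a tampered step breaks every later link
            prev_sig = m.signature
        return True

    # The first signature comes from the capture device; each edit appends a link.
    key = b"demo-key"  # stand-in for per-device / per-tool signing keys
    chain = [sign_step(key, "camera-sensor", "captured", b"raw sensor data", None)]
    chain.append(sign_step(key, "photo-editor", "cropped", b"cropped image data", chain[-1]))
    print(verify_chain(key, chain))  # True; altering any field flips this to False
    ```

    In the real standard, each signing step also embeds certificate info, so a verifier can check who made each edit rather than just that the chain is intact.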

  • Research Paper: Hendrycks et al., “An Overview of Catastrophic AI Risks” (June 2023)

    This paper provides an overview of the main sources of catastrophic AI risks, organized into four categories: Malicious Use; AI Race; Organizational Risks; and Rogue AIs. (PDF can be downloaded from the linked arxiv.org page.)


    The Godfather in Conversation: Why Geoffrey Hinton is worried about the future of AI


    Munk Debate on Artificial Intelligence | Bengio & Tegmark vs. Mitchell & LeCun

    Debating the proposition "AI research and development poses an existential threat"! Witness incredible feats of mental gymnastics and denialism! Gaze in dumbstruck awe as Yann LeCun suggests there is no need to worry because if and when AI starts to look dangerous we simply won't build it! Feel your jaw hit the floor as Melanie Mitchell argues that of course ASI is not an X-risk, because if such a thing could exist, it would certainly be smart enough to know not to do something we don't want it to do! A splendid time is guaranteed for all.


    Bankless Podcast - "AI is a Ticking Time Bomb with Connor Leahy"

    podcast.banklesshq.com — Bankless 177: AI is a Ticking Time Bomb with Connor Leahy

    AI is here to stay, but at what cost? Connor Leahy is the CEO of Conjecture, a mission-driven organization that’s trying to make the future of AI go as well as it possibly can. He is also a Co-Founder of EleutherAI, an open-source AI research non-profit lab. In today’s episode, Connor and David cove...

    Pretty solid interview with Connor Leahy about why AI Safety/Alignment is important, how transhumanists are caught in an ideological race, and some ideas for possible societal solutions/regulations.

    I think this might be Leahy's best yet.
