AI Safety
-
‘Artificial Escalation’: Imagining the future of nuclear risk
thebulletin.org 'Artificial Escalation': Imagining the future of nuclear risk
The reasons not to integrate AI into comprehensive nuclear command, control, and communications systems are manifold. They involve increased speed of warfare, accidental escalation, misperception of intentions and capabilities, an erosion of human control, first-strike instability, and the unpredict...
- www.lesswrong.com Lessons On How To Get Things Right On The First Try — LessWrong
This post is based on several true stories from a workshop which John has run a few times over the past year. …
- apnews.com UN council to hold first meeting on potential threats of artificial intelligence to global peace
The U.N. Security Council will hold a first-ever meeting on the potential threats of artificial intelligence to international peace and security, organized by the United Kingdom.
- www.bostonglobe.com Dan Hendrycks wants to save us from an AI catastrophe. He’s not sure he’ll succeed. - The Boston Globe
An evangelical turned computer scientist has articulated how technology could all go wrong. Now he needs to figure out how to make it right.
- openai.com Introducing Superalignment
We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for e...
- www.who.int WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use
Artificial Intelligence (AI) holds great promise for improving the delivery of healthcare and medicine worldwide, but only if ethics and human rights are put at the heart of its design, deployment, and use, according to new WHO guidance published today. The report, Ethics and governance of artificia...
-
42% of CEOs say AI could destroy humanity in five to ten years
www.cnn.com Exclusive: 42% of CEOs say AI could destroy humanity in five to ten years | CNN Business
Many top business leaders are seriously worried that artificial intelligence could pose an existential threat to humanity in the not-too-distant future.
- www.foxbusiness.com News publishers explore coalition to address AI impact: report
Some of the top publishers in the nation are in discussions to form a coalition aimed at examining the impact of artificial intelligence on the news industry.
-
A City Councilmember succinctly reports on Artificial General Intelligence and previews a resolution against it
YouTube Video
- www.firstpost.com ChatGPT in trouble: OpenAI sued for stealing everything anyone’s ever written on the Internet
OpenAI's ChatGPT and Sam Altman are in massive trouble. OpenAI is getting sued in the US for illegally using content from the internet to train their LLM or large language models
-
Research Paper: Hendrycks et al., "An Overview of Catastrophic AI Risks" (June 2023)
This paper provides an overview of the main sources of catastrophic AI risks, organized into four categories: Malicious Use; AI Race; Organizational Risks; and Rogue AIs. (PDF can be downloaded from the linked arxiv.org page.)
-
The Godfather in Conversation: Why Geoffrey Hinton is worried about the future of AI
YouTube Video
-
A tour of ChatGPT getting things wrong — ChatGPT's Achilles' Heel
YouTube Video
-
Breakdown of confusions from the recent Munk debate on AI (LessWrong)
www.lesswrong.com Munk AI debate: confusions and possible cruxes — LessWrong
There was a debate on the statement "AI research and development poses an existential threat" ("x-risk" for short), with Max Tegmark and Yoshua Bengio arguing in favor, and Yann LeCun and Melanie Mit…
-
Munk Debate on Artificial Intelligence | Bengio & Tegmark vs. Mitchell & LeCun
YouTube Video
Debating the proposition "AI research and development poses an existential threat"! Witness incredible feats of mental gymnastics and denialism! Gaze in dumbstruck awe as Yann LeCun suggests there is no need to worry because if and when AI starts to look dangerous we simply won't build it! Feel your jaw hit the floor as Melanie Mitchell argues that of course ASI is not an X-risk, because if such a thing could exist, it would certainly be smart enough to know not to do something we don't want it to do! A splendid time is guaranteed for all.
-
Bankless Podcast - "AI is a Ticking Time Bomb with Connor Leahy"
podcast.banklesshq.com Bankless: 177 - AI is a Ticking Time Bomb with Connor Leahy
AI is here to stay, but at what cost? Connor Leahy is the CEO of Conjecture, a mission-driven organization that's trying to make the future of AI go as well as it possibly can. He is also a Co-Founder of EleutherAI, an open-source AI research non-profit lab. In today's episode, Connor and David cove...
Pretty solid interview with Connor Leahy about why AI Safety/Alignment is important, how transhumanists are caught in an ideological race, and some ideas for possible societal solutions and regulations.
I think this might be Leahy's best yet.
-
There's No Rule We'll Make It
YouTube Video
This video by Robert Miles makes a levelheaded argument for taking existential AI Safety seriously, and for doing so as soon as possible.