How worried are top AI experts about the threat posed by large language models like GPT-4?
Generally, AI experts seem divided about how close we are to developing an AGI, and whether any of this could take us to an extinction-level event. On the whole, though, most seem to think AI is unlikely to kill us all. Maybe.
As a kid, I often dreamed of having a robot friend among the people I already knew. And not just one to talk to, but one fully human in its capabilities.
I dreamed of robots simply walking around, acting and talking with everyone, both other robots and humans, as if there were no difference at all.
And now, despite being a grown adult who knows better, I can't help but feel extremely positive about the steps AI is taking.
From your exact article:
“Variations of these AIs may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans.” -Nick Bostrom
I simply can't see how that's a bad thing. My inner child would be so happy! But now, let's set the nostalgia aside.
I believe we are closer to an AGI than ever before, but it will in no way bring disaster. In fact, it will drastically improve our lives. What would anyone gain by building something that willingly harms itself and others? Besides, regulations will inevitably be made, which will further reduce the chances of a major fuck-up.