This seems like abnormal paranoia even for LW; maybe they're getting worse. I really do hope they don't drive too many people nuts with anxiety :<. Fun as it is to poke at them, cults are depressing things.
All media criticism must be done with rigorously updated priors. Bayesian WatchMojo will bring the world's most rational top 10 lists.
Classic Californian ideology. Get frustrated that an existing system isn't perfect, and decide that the only solution is to build an entirely new system separated from the old one. Promise lots of nice stuff (walkable cities yay!) but you can be sure it's only going to be available to the wealthy.
How are you going to make low-density med-style homes in CALIFORNIA cheap? 0% chance that the people doing this are in favour of rent controls. Once all the houses in the walkable areas get bought by the super rich, who's going to work in the shops? Workers will get bussed in and you'll be left with another rich person enclave that happens to have a street mall that you can walk to.
Also all of their promo-images were AI generated which bodes really well.
No it's not. GPT-4 is nowhere near suitable for general interaction. It just isn't.
"Just do machine learning to figure out what a human would do and you'll be able to do what a human does!!1!". "Just fix it when it goes wrong using reinforcement learning!!11!".
GPT-4 has no structured concept of understanding. It cannot learn on the fly like a human can. It is a stochastic parrot that badly mimics the way that people on the internet talk, and it took an absurd amount of resources to get it to do even that. RL is not some magic process that makes a thing do the right thing just because it gets penalised for doing the wrong thing enough times, and it will not make GPT-4 a general agent.
And just briefly, because the default answer to this point is "yes but we'll eventually do it": once we do come up with a complex problem solver, why would we actually get it to start up the singularity? Nobody needs infinite computing power forever, except for Nick Bostrom's ridiculous future humans and they aren't alive to be sad about it so I'm not giving them anything. A robot strip mining the moon to build a big computer doesn't really do that much for us here on Earth.
Needling in on point 1 - no I don't, largely because AI techniques haven't surpassed humans in any given job ever :P. Yes, I am being somewhat provocative, but no AI has ever been able to 1:1 take over a job that any human has done. An AI can do a manual repetitive task like reading addresses on mail, but it cannot do all of the 'side' work that bottlenecks the response time of the system: it can't handle picking up a telephone and talking to people when things go wrong, it can't say "oh hey the kids are getting more into physical letters, we better order another machine", it can't read a sticker that somebody's attached somewhere else on the letter giving different instructions, it definitely can't go into a mail center that's been hit by a tornado and plan what the hell it's going to do next.
The real world is complex. It cannot be flattened out into a series of APIs. You can probably imagine building weird little gizmos to handle all of those funny side problems I laid out, but I guarantee you that all of them will then have their own little problems that you'd have to solve for. A truly general AI is necessary, and we are no closer to one of those than we were 20 years ago.
The problem with the idea of the singularity, and the current hype around AI in general, is a sort of proxy Dunning-Kruger. We can look at any given AI advance and be impressed, but it distracts us from how complex the real world is and how flexible you need to be to be a general agent that actually exists and can interact and be interacted upon outside the context of a defined API. I have seen no signs that we are anywhere near anything like this yet.