I'm reminded of the time researchers used an evolutionary algorithm to devise a circuit that would emit a tone on certain audio inputs and not on others. They examined the resulting circuit and found an extra vestigial bit, but when they cut it off, the chip stopped working. So they re-enabled it. Then they wanted to show off their research at a panel, and at the panel it completely failed. Dismayed, they brought it back to their lab to figure out why it had stopped working, and it suddenly started working fine.
After a LOT of troubleshooting, they eventually discovered that the circuit was generating the tone by using the extra vestigial bit as an antenna that picked up emissions from a CRT in the lab and downconverted them to the desired tone frequency. Turn off the antenna, no signal. Take the chip away from that CRT, no signal.
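For anyone who hasn't seen one, the evolutionary loop behind that kind of experiment is simple: mutate, score, keep the fittest, repeat. Here's a minimal runnable sketch; the genome length, population size, and fitness function are placeholders (the real experiment evolved FPGA configuration bitstreams and scored them on actual tone output).

```python
import random

GENOME_LEN = 32    # stand-in for a circuit configuration bitstring
POP_SIZE = 20
GENERATIONS = 100

def fitness(genome):
    # Placeholder: the real fitness measured tone output on target inputs.
    # Here we just reward set bits so the loop is runnable.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - b if random.random() < rate else b for b in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]          # keep the fittest half
    children = [mutate(random.choice(survivors)) for _ in survivors]
    population = survivors + children

best = max(population, key=fitness)
```

The point of the anecdote is that nothing in this loop cares *how* fitness is achieved, which is exactly why evolution happily exploited a CRT in the room instead of building a proper oscillator.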
That's what I expect LLMs will make. Complex, arcane spaghetti stuff that works but if you look at it funny it won't work anymore, and nobody knows how it works at all.
That's a disturbing handwave. "We don't really know what intelligence is, so therefore, anything we call intelligence is fair game"
A thermometer tells me what temperature it is. It senses the ambient heat energy and responds with a numeric indicator. Is that intelligence?
My microwave stops when it notices steam from my popcorn bag. Is that intelligence?
If I open an encyclopedia book to a page about computers, it tells me a bunch of information about computers. Is that intelligence?
I mean, I think intelligence requires the ability to integrate new information into one's knowledge base. LLMs can't do that; they have to be trained on a fixed corpus.
Also, LLMs have a pretty shit-tastic track record of being able to differentiate correct data from bullshit, which is a pretty essential facet of intelligence IMO
There's a good point here that about 80% of what we're calling AI right now... isn't even AI, or even an LLM. It's just.... algorithm, code, plain old math. I'm pretty sure someone is going to refer to a calculator as AI soon. "Wow, it knows math! Just like a person! Amazing technology!"
(That's putting aside the very question of whether LLMs should even qualify as AIs at all.)
This is easy to say about the output of AIs.... if you don't check their work.
Alas, checking for accuracy these days seems to be considered old fogey stuff.
Yeah, and Wikipedia is one of the most useful sites on the net, but it didn't exactly result in the entire web becoming crowdsourced.
So you're saying we won't have any crowdsourced blockchain Web 2.0 AIs?
I have whole cabinets like this