Researchers at Apple have come out with a new paper showing that large language models can’t reason — they’re just pattern-matching machines. [arXiv, PDF] This shouldn’t be news to anyone here. We …
This has been said multiple times, but I don't think it's possible to internalize because of how fucking bleak it is.
The VC/MBA class thinks all communication can be distilled into saying the precise string of words that triggers the stochastically desired response in the consumer. Conveying ideas or information is not the point. This is why ChatGPT seems like the holy grail to them: it effortlessly generates mountains of corporate slop that carry no actual meaning. It's all form and no substance, because those people -- their entire existence, the essence of their cursed dark souls -- have no substance.
I think you're right about them. But they're wrong. And only the chowderheads who don't interact with customers or service personnel would believe that crap. Now, that's not to say they can't raise a generation that does believe that crap.
I am so cynical at this point that I am fully bought into the idea that these chowderheads don't even interact with reality, just with the PowerPoint- and Jira-driven shadows on the wall.