I've been hearing about the imminent crash for the last two years. New money keeps getting injected into the system. The bubble can't deflate while both the public and private sector have an unlimited lung capacity to keep puffing into it. FFS, bitcoin is on a tear right now, just because Trump won the election.
This bullshit isn't going away. It's only going to get forced down our throats harder and harder, until we swallow or choke on it.
Marcus is right: incremental improvements in AIs like ChatGPT will not lead to AGI and were never on that course to begin with. What LLMs do is fundamentally not "intelligence"; they just imitate human responses based on existing human-generated content. This can produce usable results, but not because the LLM has any understanding of the question. Since the current AI surge is based almost entirely on LLMs, the delusion that the industry will soon achieve AGI is doomed to fall apart - but not until a lot of smart speculators have gotten in and out and made a pile of money.
The hype should go the other way. Instead of bigger and bigger models that do more and more - have smaller models that are just as effective. Get them onto personal computers; get them onto phones; get them onto Arduino minis that cost $20 - and then have those models be as good as the big LLMs and Image gen programs.
Outside of language models, this has already happened: take a look at apps such as Merlin Bird ID (identifies birds fairly well by sound and somewhat okay visually), WhoBird (identifies birds by sound), and Seek (visually identifies plants, fungi, insects, and animals). All of them work offline. IMO these are much better uses of ML than spammer-friendly text generation.
PlantNet and iNaturalist are pretty good for plant identification as well; I use them all the time to find out what's volunteering in my garden. Just looked them up, and it turns out Seek is by iNaturalist.
This has already started to happen. The new llama3.2 model is only 3.7GB and it's WAAAAY faster than anything else. It can throw a wall of text at you in just a couple of seconds. You're still not running it on $20 hardware, but you no longer need a 3090 to have something useful.
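For anyone wondering why a multi-billion-parameter model fits in a few GB: download size is roughly parameter count times bytes per weight, which is why quantization is what makes local models practical. A quick back-of-envelope sketch (the parameter count and bit widths here are illustrative, not official llama3.2 specs):

```python
# Approximate on-disk size of a model's weights: params * bytes per weight.
# Ignores small overheads like embeddings stored at higher precision.

def model_size_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Rough weight-file size in GB for a given parameter count and precision."""
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

fp16 = model_size_gb(3.2, 16)  # unquantized 16-bit weights
q8 = model_size_gb(3.2, 8)     # 8-bit quantization
q4 = model_size_gb(3.2, 4)     # 4-bit quantization

print(f"fp16: {fp16:.1f} GB, q8: {q8:.1f} GB, q4: {q4:.1f} GB")
```

So a ~3B-parameter model that would need a 3090-class card at full precision can drop under 2 GB at 4-bit, small enough for a laptop.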
Well, you see, that's the really hard part of LLMs. Getting good results is a direct function of the size of the model. The bigger the model, the more effective it can be at its task. However, there's something called the compute-efficient frontier (there's a technical but neatly explained video about it). Basically, you can't make a model more effective at its computations beyond that boundary for any given size. The only way to make a model better is to make it larger (which is what most megacorps have been doing) or to radically change the algorithms and methods underlying the model. The latter has been proving extraordinarily hard, mostly because understanding what is going on inside the model requires thinking in rather abstract and esoteric mathematical principles that bend your mind backwards.

You can compress an already-trained model to run on smaller hardware, but to train one you still need humongously large datasets and power-hungry processing. This is compounded by the fact that larger and larger models are ever more expensive while providing rapidly diminishing returns. Oh, and we are quickly running out of quality usable data, so shoveling in more data past a certain point actually starts to produce worse results, unless you dedicate thousands of hours of human labor to producing, collecting, and cleaning new data. That's all before you even address data poisoning, where previously LLM-generated data is fed back in to train a model; it's very hard to prevent that from devolving into incoherence after a couple of generations.
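The diminishing-returns point can be seen in the published scaling-law fits themselves: loss falls as a power law in parameters and training tokens, so every doubling buys less than the last. A toy sketch (constants are illustrative, in the ballpark of Chinchilla-style fits, not authoritative):

```python
# Toy scaling-law curve: loss = irreducible term + power-law terms in
# parameter count N and training tokens D. Constants are illustrative only.

def loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss + fit coefficients
    alpha, beta = 0.34, 0.28       # power-law exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Each 10x jump in parameters (at fixed data) shaves off less loss than the last.
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params: loss ~ {loss(n, 1e12):.3f}")
```

That shrinking gap per 10x of parameters is the "rapidly diminishing returns" above: each step on the frontier costs an order of magnitude more compute for a smaller improvement.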
That would be innovation, which I'm convinced no company can do anymore.
It feels like I keep learning that one of our modern innovations was already thought up and written down in a book in the 1950s, and just wasn't possible at the time due to some limitation in memory, precision, or some other metric. All we did was five decades of marginal improvement to get there, while not innovating much at all.
This is why you're seeing news articles from Sam Altman saying that AGI will blow past us without any societal impact. He's trying to lessen the blow of the bubble bursting for AI/ML.
Smartphone improvements hit a rubber wall a few years ago (folding screens aside, which are a small market share, the rate of improvement slowed down drastically), and the industry is doing fine. It's not growing like it used to, but that just means people are keeping their smartphones for longer, not that people stopped using them.
Even if AI were to completely freeze right now, people will continue using it.
Why are people reacting like AI is going to get dropped?
People are dumping billions of dollars into it (mostly into power), but it cannot turn a profit.
So the companies who, for example, revived a nuclear power facility in order to feed their machine with ever diminishing returns of quality output are going to shut everything down at massive losses and countless hours of human work and lifespan thrown down the drain.
This will have quite a large economic impact as many newly created jobs go up in smoke, and businesses structured around the assumption of continued availability of high-end AI have to reorganize or go out of business.
Because novelty is all it has. As soon as it stops improving in a way that makes people say "oh that's neat", it has to stand on the practical merits of its capabilities, which is, well, not much.
I’m so baffled by this take. “Create a terraform module that implements two S3 buckets with cross-region bidirectional replication. Include standard module files like linting rules and enable precommit.” Could I write that? Yes. But does this provide an outstanding stub to start from? Also yes.
And beyond programming, it is otherwise having positive impact on science and medicine too. I mean, anybody who doesn’t see any merit has their head in the sand. That of course must be balanced with not falling for the hype, but the merits are very real.
As I use copilot to write software, I have a hard time seeing how it'll get better than it already is. The fundamental problem of all machine learning is that the training data has to be good enough to solve the problem. So the problems I run into make sense, like:
1. Copilot can't read my mind and figure out what I'm trying to do.
2. I'm working on an uncommon problem where the typical solutions don't work.
3. Copilot is unable to tell when it doesn't "know" the answer, because of course it's just simulating communication and doesn't really know anything.
Problems 2 and 3 could be alleviated, though probably not solved completely, with more and better data or engineering changes; but obviously AI developers started by training the models on the most useful data and the strategies they thought would work best. Problem 1 seems fundamentally unsolvable.
I think there could be some more advances in finding more and better use cases, but I'm a pessimist when it comes to any serious advances in the underlying technology.
Ahh right, so when I use copilot to autocomplete the creation of more tests in exactly the same style of the tests I manually created with my own conscious thought, you're saying that it's really just copying what someone else wrote? If you really believe that, then you clearly don't understand how LLMs work.
Not copilot, but I run into a fourth problem:
4. The LLM gets hung up on insisting that a newer feature of the language I'm using is wrong and keeps focusing on "fixing" it, even though it has access to the newest correct specifications where the feature is explicitly defined and explained.
Oh god yes, ran into this asking for a shell.nix file with a handful of tricky dependencies. It kept trying to do this insanely complicated temporary pull and build from git instead of just a 6 line file asking for the right packages.
Quantum computers are only good at a very narrow subset of tasks. None of those tasks are related to Neural Networks, AGI, or the emulation of neurons.
It's still quite obscure to actually mess with the internals of AI art instead of just throwing prompts at it and getting slop of varying quality. And I don't mean controlnet; I mean github repos of comfyui plugins with little explanation beyond a link to a paper, or a note that "this is absolutely mathematically unsound but fun to mess with". Messing with stuff other than conditioning or mere model selection.
Good. I look forward to all these idiots finally accepting that they drastically misunderstood what LLMs actually are and are not. I know their idiotic brains are only able to understand simple concepts like "line must go up" and follow them like religious tenets though, so I'm sure they'll waste everyone's time and increase enshittification with some other new bullshit once they quietly remove their broken (and unprofitable) AI from stuff.
Sigh. I hope LLMs get dropped from the AI bandwagon, because I do think they have some really cool use cases and I love just running my little local models. Cutting government spending like a madman, writing the next great American novel, or eliminating actual jobs are not those use cases.
"LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive," Marcus predicts. "When everyone realizes this, the financial bubble may burst quickly."
I wish just once we could have some kind of tech innovation without a bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects while they get super rich off it.
Oh they definitely exist. At a high level the bullshit is driven by malicious greed, but there are also people who are naive and ignorant and hopeful enough to hear that drivel and truly believe in it.
Like when Microsoft shoves GPT4 into notepad.exe. Obviously a terrible terrible product from a UX/CX perspective. But also, extremely expensive for Microsoft right? They don't gain anything by stuffing their products with useless annoying features that eat expensive cloud compute like a kid eats candy. That only happens because their management people truly believe, honest to god, that this is a sound business strategy, which would only be the case if they are completely misunderstanding what GPT4 is and could be and actually think that future improvements would be so great that there is a path to mass monetization somehow.
Some are just opportunists, but there are certainly true believers — either in specific technologies, or pedal-to-the-metal growth as the only rational solution to the world’s problems.
AI vagina Fleshlight beds. You just find your sleep inside one and it will do you all night long! Telling you stories of any topic. Massaging you in every possible way. Playing your favorite music. It's like a living room! Oh I'm sleeping in the living room again. Yeah I'm in the dog house. But that's why you need an AI vagina Fleshlight bed!
largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence
Who said that LLMs were going to become AGI? LLMs as part of an AGI system makes sense but not LLMs alone becoming AGI. Only articles and blog posts from people who didn't understand the technology were making those claims. Which helped feed the hype.
I 100% agree that we're going to see an AI market correction. It's going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.
I read a lot, I guess, and I don't understand why they think like this. From what I see, there are constant improvements in MANY areas! Language models are getting faster and more efficient. Code is getting better across the board as people use it to improve their own, contributing to the whole of code improvement, project participation, and development. I feel like we really are at the beginning of a lot of better things, and it's iterative as it progresses. I feel hopeful.
Journalists have no clue what AI even is. Nearly every article about AI is written by somebody who couldn't tell you the difference between an LLM and an AGI, and should be dismissed as spam.
No shit. This was obvious from day one. This was never AGI, and was never going to be AGI.
Institutional investors saw an opportunity to make a shit ton of money and pumped it up as if it was world changing. They'll dump it like they always do, it will crash, and they'll make billions in the process with absolutely no negative repercussions.
I'm not an expert, but the whole basis of LLMs (not actually understanding words, just predicting the likelihood of what word comes next) seems like it's not going to help progress them to the next level... Like, to be an artificial general intelligence, shouldn't it know what words are?
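A toy sketch of the point being made here (a real LLM uses a huge neural network over subword tokens, but the training objective really is "predict the next token"): even a crude word-frequency model can produce plausible continuations while representing no meaning at all. The corpus and words below are made up for illustration:

```python
from collections import Counter, defaultdict

# Toy bigram model: for each word, count which word follows it in the corpus,
# then "generate" by picking the most frequent successor. Nothing here
# represents meaning - only co-occurrence counts.
corpus = "the cat sat on the mat and the cat ran".split()

successors: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def next_word(word: str) -> str:
    """Return the successor most often seen after `word` in training."""
    return successors[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" followed "the" twice, "mat" only once
```

Scale the counts up to billions of parameters over trillions of tokens and the continuations get eerily good, but the objective never changes: likelihood of the next token, not knowledge of what the words refer to.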
I feel like this path is taking a brick and trying to fit it into a keyhole...
Someone in here once linked me a scientific article about how today's "AI" is basically one level below what it would need to be to count as anything like real AI. A bit like the difference between exponentiation and the Ackermann function, but I really forgot what it was all about.
LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.
However, AI itself doesn't imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.
Absolutely not true. Disclaimer, I do work for NVIDIA as a forward deployed AI Engineer/Solutions Architect—meaning I don’t build AI software internally for NVIDIA but I embed with their customers’ engineering teams to help them build their AI software and deploy and run their models on NVIDIA hardware and software. edit: any opinions stated are solely my own, N has a PR office to state any official company opinions.
To state this as simply as possible: I wouldn’t have a job if our customers weren’t seeing tremendous benefit from AI technology. The companies I work with typically are very sensitive to the CapEx and OpEx costs of AI—they self-serve in private clouds. If it doesn’t help them make money (revenue growth) or save money (efficiency), then it’s gone—and so am I. I’ve seen it happen; entire engineering teams laid off because a technology just couldn’t be implemented in a cost-effective way.
LLMs are a small subset of AI and Accelerated-Compute workflows in general.
To state this as simply as possible: I wouldn’t have a job if our customers weren’t seeing tremendous benefit from AI technology.
Right because corporate management doesn't ever blindly and stupidly overinvest in fads that blow up in their faces...
The companies I work with typically are very sensitive to the CapEx and OpEx costs of AI—they self-serve in private clouds. If it doesn’t help them make money (revenue growth) or save money (efficiency), then it’s gone—and so am I.
You clearly have no clue what you're on about. As someone with degrees and experience in both CS and finance, all I have to say is that's not at all how these things work. Plenty of companies lose money on these things in the hopes that their FP&A projection fever dreams will come true. And they're wrong much more often than you seem to think. FP&A is more art than science, and you can get financial models to support any argument you want to make to convince management to keep investing in what you think they should. And plenty of CEOs and boards are stupid enough to buy it. A lot of the AI hype has been bought and sold that way, in the hopes that it would be worthwhile eventually or that other alternatives couldn't be just as good or better.
I’ve seen it happen; entire engineering teams laid off because a technology just couldn’t be implemented in a cost-effective way.
This is usually what happens once they finally realize spending money on hype doesn't pay off and go back to more established business analytics, operations research, and conventional software which never makes mistakes if it's programmed correctly.
LLMs are a small subset of AI and Accelerated-Compute workflows in general.
No one ever said otherwise. And we're talking about AI only, no moving the goalposts to accelerated computing, which is a mechanism through which to implement a wide range of solutions and not a specific one in and of itself.
ChatGPT is basically the best LLM of its kind. As for Nvidia I'm not talking about hardware I'm talking about all of the models it's trained to do everything from DLSS and ACE to creating virtual characters that can converse and respond naturally to a human being.
"The economics are likely to be grim," Marcus wrote on his Substack. "Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence."
"As I have always warned," he added, "that's just a fantasy."
Even Zuckerberg admits that trying to scale LLMs larger doesn’t work because the energy and compute requirements go up exponentially. There must exist a different architecture that is more efficient, since the meat computers in our skulls are hella efficient in comparison.
Once we figure that architecture out though, it’s very likely we will be able to surpass biological efficiency like we have in many industries.
That's a bad analogy. We didn't surpass biological efficiency in industry because we figured out human anatomy and how to improve it. We simply found alternative ways to produce force, like electricity and motors, which had absolutely no relation to how muscles work.
I imagine it would be the same for computers: simply another, better method of achieving something. But it's so uncertain that it's barely worth discussing.
I have to do similar things when it comes to "raytracing". It meant one thing, and then a company came along and called something sorta similar by the same name, so everyone got these ideas of what it should be vs. what it actually does. Then later, a better version comes out that nearly matches the original term, but there's already negative hype because it launched half-baked and misnamed. Now they have to call the original thing something new to market it, because they destroyed the original name with a bad label and a half-baked product.
He is writing about LLM mainly, and that is absolutely AI, it's just not strong AI or general AI (AGI).
You can't invent your own meaning for existing established terms.
LLMs are AI in the same way that the lane assist on my car is AI. Tech companies, however, very carefully and deliberately play up LLMs as being AGI or close to it. See for example the convenient fear-mongering over the "risks" of AI, as though ChatGPT will become Skynet.
It'll implode but there are much larger elephants in the room - geopolitical dumbassery and the suddenly transient nature of the CHIPS Act are two biggies.
Third, high flying growth, blue sky darlings, they're flaky. In a downturn growth is worth 0 fucking dollars, throw that shit in a dumpster and rotate into staples. People can push off a phone upgrade or new TV and cut down on subscriptions, but they'll always need Pampers.
The thing propping up AI and semis is an arms race between those high flying tech companies, so this whole thing is even more prone to imploding than tech itself, since a ton of revenue comes from tech. Sensitive sector supported by an already sensitive sector. House of cards with NVDA sitting right at the tippy top. Apple, Facebook, those kinds of companies, when they start trimming back it's over.
But, it's one of those things that is anyone's guess. When you think it's not even possible for everything to still have steam one of the big guys like TSMC posts some really delightful earnings and it gets another second wind, for the 29th time.
Definitely a house of cards tho, and suddenly a lot more precarious because suddenly nobody knows how policy will affect the industry or the market as a whole
They say shipping is the bellwether of the economy, and there's a lot of truth to that. I think semis are now the bellwether of growth. Sit back and watch the change in the wind.
Seems to me the rationale is flawed. Even if it isn't strong or general AI, LLM based AI has found a lot of uses. I also don't recognize the claimed ignorance among people working with it, about the limitations of current AI models.
Can you name some of those uses that you see lasting in the long term or even the medium term? Because while it has been used for a lot of things it seems to be pretty bad at the overwhelming majority of them.
AI is already VERY successful in some areas. When you take a photo, it's processed with AI features to improve the image, and when you edit photos on your phone, the more sophisticated options are powered by AI. Almost all new cars have AI features.
These are practical everyday uses, you don't even have to think about when using them.
But it's completely irrelevant if I can see use cases that are sustainable or not. The fact is that major tech companies are investing billions in this.
Of course all the biggest tech companies could all be wrong, but I bet they researched the issue more than me before investing.
Show me by what logic you believe to know better.
The claim that it needs to be strong AI to be useful is ridiculous.
While you may be right, one would think the problem lies in the overestimated perception of the abilities of LLMs, leading to misplaced investor confidence -- which in turn leads to a bubble ready to burst.
Yup. Investors have convinced themselves that this time AI development is going to grow exponentially. The breathless fantasies they’ve concocted for themselves require it. They’re going to be disappointed.
Luddites weren't against new technology, they were against the aristocrats using new technology as a tool or excuse to oppress and kill the labor class. The problem is not the new technology, the problem is that people were dying of hunger and being laid off in droves. Destroying the machinery, which almost always they were the operators of when working on said aristocrat's factories, was an act of protest, just like a riot, or a strike. It was a form of collective bargaining.
Ya AI was never going to be it. But I wouldn’t understate its impact even in its current stage. I think it’ll be a tool that will be incredibly useful for just about every industry
There aren't many industries where it's useful to get results that are correct in the very common case everybody already knows, a bit wrong in the less common cases, and totally hallucinated in the actually original cases. Especially if you can't distinguish between those automatically.