Nvidia has to be the most obvious thing to short in this whole mess, except for that old line about the market staying irrational longer than you can stay solvent. If the AI bubble popped tomorrow, you'd make a lot of money. If it pops in a year, you may lose it all before then.
And this absolutely will not change the course of AI investment whatsoever, because it's still driving a huge amount of profit.
The only thing that will finally change the course of AI investment is when the bubble finally bursts, which will cause the collapse of our economy because, by that point, so much money will have been invested in it. There will be no other possible result.
And why? Because these assholes only care about one thing: short term results at any cost.
Firing the staff & reporting “earnings”?
Goosed stock price on the above + hypey garbage?
Enforced “features” no one wants?
AI hardware makers? Okay, that one’s legit, but ironically not AI.
I wish I knew of a good way to profit off of this bubble. I could work for a company in the AI space, but I think it would be well above my "executives hyping the smell of their own farts" threshold. And shorting Google and Microsoft is a dangerous game.
You got me thinking a bit on this one. One possibility is if you want to make a bet on it failing to deliver value in the near future, look at the companies whose stock prices have fallen on the fear of AI putting them out of business. For example, Concentrix does call center outsourcing and their stock is down significantly from their 2022 peak, partially on the expectation that AI is going to take business from them. Now, their profit margin is tiny and they don't seem to be growing much, so I don't know that they are a great investment, but there could be upside if the negative cloud of AI is removed. There are probably better examples out there, this one just came to mind.
Note: I have not done any research on this idea or on Concentrix and don't know if it's a good one, but it's at least less risky than shorting the AI hype.
The only thing that will finally change the course of AI investment is when the bubble finally bursts, which will cause the collapse of our economy because, by that point, so much money will have been invested in it.
We're not there yet. Remember, tech is not the whole of the economy. The recent tech layoffs have had Silicon Valley screaming, "The sky is falling!!" and the rest of the planet going "huh? You guys hear something? Must've been a fly"
Number 3 drives me hair-tearing insane. I have straight up seen AI cultists say AI will fix the power grid, but only if we keep pouring resources into it so that it can fix all our problems. ಠ_ಠ
"we should all do heroin to support the habit and mainstream it." - every fucking company pushing AI onto society when it's dangerous, janky af and ridiculously expensive.
I'm very confident that with carte blanche the electrical engineers already overseeing the grid could solve the problems it faces. We don't need an ai miracle, we need to remove bureaucratic and funding obstacles for critical infrastructure.
And this is it: Many of those "AI will be so smart that it can solve these problems for us!" arguments refer to problems where having a "smart" enough solution isn't the problem... Getting people to care/notice/participate/get out of the way is.
e/acc. The dumb MFs believe burning fossil fuels as fast as possible will lead to technological advancements that mitigate the problems. It's all wishful thinking and convenient blind faith.
Maybe he should buy Red Lobster, force them into unfavorable contracts for supplies, sell their land out from under them, and lease the land back to them.
If AI is a trillion dollar investment, what trillion dollar problem is it solving?
Why, the trillion dollars not yet in the pockets of the companies that think they can take advantage of AI of course.
The naked truth is that #4 answers #1. The biggest utility AI might provide would be replacing paid workers. That's a trillion dollar problem if your ultimate goal is to hoard wealth and sit atop the highest pile of gold like a dragon.
So again, we have a solution to a problem only the wealthy elite have, being marketed as an advancement for the greater good of society, to justify stealing the massive resources it consumes rather than paying that money directly to their workers.
That's probably why Goldman Sachs is against AI all of a sudden: they didn't invest much in it, and now everyone else is reaping gains in the stock market that they failed to take advantage of.
Goldman was strongly bullish on Microsoft in mid-2023, right before it went on a historic run, precisely because they had enormous faith in the OpenAI project. This is a huge heel turn relative to the multi-billion-dollar investment in the company from last year.
Yeah this is basically Metaverse or NFTs but with a slightly more plausible use case so that it will drag out far longer before corporations quietly pretend it never happened.
It will not be wasted or forgotten. The cat's out of the bag. You can run it locally on your machine. It can summarize text for you, it can help you write boilerplate code, it can help you find that file with that thing that you don't quite remember, it can create a poem about your left nut. The tech already has proven useful, it's about where you use it.
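For the curious, "run it locally" can be as simple as the sketch below, assuming you've pip-installed llama-cpp-python and downloaded some GGUF model file (the path, model name, and prompt are placeholders I made up):

```python
# Local-LLM summarization sketch using llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a GGUF model downloaded to
# the placeholder path below -- swap in whatever model you actually use.
from llama_cpp import Llama

llm = Llama(model_path="./models/some-7b-model.gguf", n_ctx=2048)

text = "Goldman Sachs published a paper questioning whether AI capex will pay off..."
out = llm(
    f"Summarize the following in one sentence:\n{text}\nSummary:",
    max_tokens=64,
    stop=["\n"],
)
print(out["choices"][0]["text"].strip())
```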
Aren't LLMs already pretty much out of (past) training data? Like, they've already chewed through Reddit/Facebook etc. and are now caught up to current posts. Of course people will continue talking online, and it will continue to be used to train AI. But if devouring decades of human data, basically everything online, resulted in models that hallucinate, lie to us, and regurgitate troll posts, how can it reach the exponential improvement they promise us!? It already has all the data, has been trained on it, and the average person still sees no value in it...
Your mistake is in thinking AI is giving incorrect responses. What if we simply change our definition of correctness and apply the rubric that whatever AI creates must be superior to human work product? What if we just assume AI is right and work backwards from there?
Then AI is actually perfect and the best thing to feed AI as training data is more AI output. That's the next logical step forward. Eventually, we won't even need humans. We'll just have perfect machines perfectly executing perfection.
My wife works for a hospital system and they now interact with a chat bot. Somehow it's HIPAA compliant, I dunno. But I said to her, all it's doing is learning the functions of you and your coworkers, and it will eventually figure out how to streamline your position. So there's more for it to learn, but it's moved into private sectors too.
It already has all the data, has been trained on it, and the average person still sees no value in it…
And that data that it has been trained on is mostly "pre-GPT". They're going to have to spend another untold fortune in tagging and labeling data before training the newer ones because if they train on AI-generated content they will be prone to rot.
The last part is wrong. They aren’t imagining improvement. They know this is it for now and they’re lying their asses off to pretend that they’ll be able to keep improving it when there’s no training data left. The grift is all that’s left.
I think that's the obvious implication of this question, but you're missing the implied question arising from that answer: is this a problem we want to solve in this way?
The AI makes the art. The art is made into an NFT. The NFT goes on the blockchain. We all get rich. Climate is saved. End of story. How are people not getting this??? 😂😭
This is all true if you take a tiny portion of what AI is and does (like generative AI) and try to extrapolate that to all of AI.
AI is a vast field. There are a huge number of NP-hard problems that AI techniques are really, really good at finding usable approximate solutions to.
If you can reasonably define your problem in terms of some metric and your problem space has a lot of interdependencies, there's a good chance AI is the best and possibly only (realistic) way to address it.
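To make the "define a metric and let the machine search" idea concrete, here's a toy sketch using simulated annealing on a tiny travelling-salesman instance (the cities and cooling schedule are invented for illustration; real systems are fancier, but the shape is the same):

```python
# Toy "define a metric, let the machine search" example: simulated
# annealing on a tiny travelling-salesman instance.
import math
import random

cities = [(0, 0), (3, 1), (6, 0), (5, 4), (1, 5), (2, 2)]

def tour_length(order):
    # The metric: total round-trip distance visiting cities in this order.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(cities)))
current = tour_length(order)
temp = 10.0
for _ in range(20000):
    i, j = random.sample(range(len(cities)), 2)
    order[i], order[j] = order[j], order[i]        # propose a swap
    candidate = tour_length(order)
    if candidate < current or random.random() < math.exp((current - candidate) / temp):
        current = candidate                        # accept the move
    else:
        order[i], order[j] = order[j], order[i]    # undo the swap
    temp *= 0.9995                                 # cool down slowly

print(f"tour {order} with length {current:.2f}")
```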
Generative AI has gotten all the hype because it looks cool. It's seen as a big investment because it's really expensive. A lot of the practical AI is for things like automated calibration. It's objectively useful and not that expensive to train.
In my career I deal a lot with random, weird problems that servers have when doing work. Having an AI that's just able to monitor logs and stats and then help with diagnosing issues or even suggesting solutions would be terribly useful.
That's a great use case. Splunk does something along those lines.
Logs are particularly nice because they tend to be so large that individual companies can often have the AI trained to the specifics of their environment.
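At its dumbest, that idea can even be sketched without any ML at all: collapse each log line to a template and flag the rare ones. (The log lines below are made up, and real tools are far more sophisticated, but it shows why logs are such a natural fit.)

```python
# Naive log-anomaly sketch: collapse each log line to a "template" by
# masking out numbers, then flag lines whose template is rare in the
# corpus. Illustrative only -- real tools do far more than this.
import re
from collections import Counter

logs = [
    "GET /api/user/184 200 12ms",
    "GET /api/user/9215 200 9ms",
    "GET /api/user/77 200 15ms",
    "OOM killer invoked for pid 4242",
    "GET /api/user/31 200 11ms",
]

def template(line):
    return re.sub(r"\d+", "<N>", line)

counts = Counter(template(line) for line in logs)

for line in logs:
    if counts[template(line)] == 1:   # seen only once -> suspicious
        print("anomaly?", line)
```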
Ok but point 4 is a bit too based for GS.
Though I have been arguing that at some point, ("voluntary") consumption just collapses under average sentiment, e.g. over bad living and working conditions, or just a hopelessly depressive environment (like, I don't wanna buy slave chocolate).
Nice, now I've read a post about an article about a paper by Goldman Sachs. See you later if I find the original paper; otherwise there's nothing really to discuss.
The answer to question 1, to me, seems to be that it is promising to replace the workforce across many fields. This idea makes investors and Capitalists (capital C, the people who have and want to keep the money) drool. AI promises to do jobs without needing a paycheck.
I'm not saying I believe it will deliver. I'm saying it is being promised, or at least implied. Therefore, I agree, there's a lot of grift happening. Just like crypto and NFTs, grift grift grift.
Yeah, that seems to be the end goal, but Goldman Sachs themselves tried using AI for a task and found it cost six times as much as paying a human.
That cost isn't going down; by every measure, AI is just getting more expensive, since our only strategy for improving it seems to be "throw more compute, data, and electricity at the problem." Even new AI chips promise increased performance but with more power draw, and everybody developing AI models seems to be maximizing performance and damning everything else. So even if AI somehow delivers on its promises and replaces every white-collar job, it's not going to save corporations any actual money.
Granted, companies may be willing to eat that cost at least temporarily in order to suppress labor costs (read: pay us less), but it's not a long-term solution.
Ed Zitron, a tech beat reporter, criticizes a recent paper from Goldman Sachs, calling AI a "grift." The article raises questions about the investment, the problem it solves, and the promise of AI getting better. It debunks misconceptions about AI, pointing out that AI has not been developed over decades and the American power grid cannot handle the load required for AI. The article also highlights that AI is not capable of replacing humans and that AI models use the same training data, making them a standstill for innovation.
Ed Zitron, a tech beat reporter, criticizes a recent paper from Goldman Sachs, calling AI a "grift."
Fittingly, this paragraph is incomprehensible to anyone who hasn't already read the blog post; who is calling AI a grift, Zitron or GS? And is Zitron critical of the GS article (no, he's not)?
Now, if it was your job to actually absorb the information in this blog post, there's really no way around actually reading the thing - at least if you wanna do a good job. Any "productivity boost" would sacrifice quality of output.
First off, this IS the tl;dr of the article, and second, just read the top "paragraph" (in quotes because it's only like 2 sentences). It's basically the tl;dr of the tl;dr.
Points 2 and 3 are legit, especially the part about not having a roadmap; a lot of what's going on is pure improvisation at this point, trying different things to see what sticks. The grid is a problem, but fixing it is long overdue. In any case, these companies will just build their own power if the government can't get its head out of its ass and start fixing the problem (Microsoft is already doing this).
The last two points specifically suggest this person doesn't know the technology, which is exactly what they're accusing others of.
It's already replacing people. You don't need it to do all the work; it will still bring about layoffs if it lets one person do the job of five. It's already affecting jobs like concept artists, and every website that used to have a person at the end of its chat app now has an LLM. This is also only the start; it's the equivalent of people thinking computers wouldn't affect the workforce in the early '90s. It won't hold up for long.
The data point is also quite a bold statement. Anyone keeping abreast of the technology knows it's now about curating the datasets, not augmenting them. There's also a paper coming out every day about new training strategies, which is helping a lot more than a few extra shitposts from Reddit would.
Feels like you're missing the point of the fourth bullet point. What they're saying is not that AI isn't taking people's jobs, only that true potential comes from real humans who provide some quality AI is not capable of truly replacing. It is being used to replace people with its inferior imitations.
Not that your point is invalid; it absolutely is a valid and valuable criticism in itself.
I can answer one of these criticisms regarding innovation: AI is incredibly inefficient at what it does. From training to execution, it's but a fraction as efficient as it could be. For this reason most of the innovation going on in AI right now is related to improving efficiency.
We've already had massive improvements to things like AI image generation (e.g. SDXL Turbo, which can generate an image in 1 second instead of 10), and there are new LLMs coming out all the time that are a fraction of the size of their predecessors, use a fraction of the computing power, and yet perform better for most use cases.
There are other innovations with the potential to reduce power requirements by factors of a thousand to millions, such as ternary training and execution. If ternary AI models turn out to be workable in the real world (I see no reason why they couldn't be), we'll be able to get the equivalent of GPT-4 running locally on our phones, and it won't even be a blip on the radar from a battery-life perspective, nor will it require more powerful CPUs/GPUs.
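For anyone wondering what "ternary" actually means here, a rough sketch of the weight-quantization step (in the spirit of the recent 1.58-bit papers; the matrix and threshold rule below are illustrative, not any paper's exact recipe):

```python
# Sketch of ternary weight quantization: map each float weight to
# {-1, 0, +1} plus one per-tensor scale, so matrix multiplies reduce
# to additions/subtractions. The threshold rule here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)    # pretend model weights

scale = np.abs(w).mean()                          # one scale per tensor
ternary = np.where(np.abs(w) > 0.5 * scale,
                   np.sign(w), 0).astype(np.int8)

w_hat = ternary * scale                           # dequantized approximation
print("mean abs error:", np.abs(w - w_hat).mean())
```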
As usual, a critic of novel tech gets some things right and some things wrong, but overall it's not bad. Building a critique of LLMs from a cartoon understanding that skips the technical details of what's novel about the approach, judging only by how commercial products are using it, gives an overly narrow lens on what the tech could be, but isn't too far off from what it currently is.
I suspect LLMs, or something like them, will be part of something approaching AGI, and the good part is that once the tech exists you don't have to reinvent it; you can test its boundaries and how it integrates with other systems. But whether that is 1%, 5%, or 80% of an overall solution is unknown.
There are a lot of improvements in the making. Agents, memory, self-improvement. It's a young technology.
Currently AI is not good enough to replace people. It's good enough to improve productivity, and it will probably get better at that. This will be the reason many people lose their jobs. There might be human-level AI in the future, but that's hard to predict.
Build more power plants. This is already happening. A problem, but not an impossible one.
That is just plain wrong. Try different models and get different results. What the future will bring is hard to predict but artificial data or self improving models might be the solution to the data problem. Time will tell.
If you could increase the productivity of knowledge-workers 5%, that's worth a trillion
A big if (and where do these numbers come from?), but more importantly, a "more productive" knowledge worker isn't necessarily a good thing if the output is less reliable, interesting, or innovative, for example. Ten shitty articles instead of one quality article are useless if the knowledge is actually worth anything to the end user.
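For what it's worth, numbers like that usually come from a back-of-envelope along these lines (all three inputs below are round guesses I made up to show the shape of the argument, not figures from the GS paper):

```python
# Back-of-envelope for "5% productivity gain = a trillion dollars".
# All inputs are invented round numbers, not figures from any report.
knowledge_workers = 1_000_000_000   # rough global guess
avg_annual_output = 25_000          # dollars of value per worker per year
productivity_gain = 0.05            # the hypothetical 5%

print(knowledge_workers * avg_annual_output * productivity_gain)
# ~1.25e12, i.e. on the order of a trillion dollars -- IF you buy the inputs
```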
Qualitatively, there were huge leaps made between 2018 and 2020. Since then, it's been maybe a shade better, but not really that much better than three years ago. We're certainly finding ways to apply it more broadly, and a broader ecosystem of providers is catching up to each other, but the end game is mostly more ways to get roughly the same experience you could get in 2021.
Meanwhile, people deep in the field go, "but look at this obscure quantitative measure of AI capability that's been going up this whole time, which shows a continuation of the improvement we saw from 2018." Those measures generally correlated with the qualitative experience during those early years, but since then the qualitative side has stalled while the measures keep going up. The problem is that the utility lies in the qualitative experience.
The first two computers were connected in 1969, leading to ARPANET. I would say the qualitative experience took quite some time to improve. The kind of algorithms AI has evolved from came out of the 2000s, maybe the late '90s, taking Google as a rough baseline. I'd say we're now at about the mid-'90s, internet-wise, so it will be interesting, to say the least, to see where this goes. They do use too much energy though, and I hope they can bring that down, maybe with hardware acceleration.
As though the implication were that these are unanswerable questions, when they're actually easily answerable:
2: it can be applied to logistics, control of fusion energy, drug-discovery pipelines, lots of things that could soon amount to a trillion dollars
3: it can be improved by combining LLMs with neural-symbolic logic and lots of other things extensively written about
I assume the Goldman Sachs report is more intelligent than this summary makes out, because the summary is just saying we should throw our hands up in despair at well-studied questions that a lot of work has gone into answering.
I question the claim that smartphones, and especially the internet, had roadmaps. Was the roadmap for the military to pass it to education, for it to be taken up by companies, transformed from text to GUI, and then for algorithms to exploit human psychology???
I feel like #3 shows up for every tech innovation. I remember people bitching about the Internet not being viable because phone lines were too slow. The demand needs to be there before the infrastructure will get built.
Some difference in that fixing the power capacity problem will absolutely mean combusting more hydrocarbons in practical terms, something we can ill afford to do right now. Until we get our legs under us with non carbon based energy generation, we should at least not take on huge power burdens.
Now, there was an alleged breakthrough that might make LLMs less of energy hogs, but I haven't seen it discussed enough to know whether it's promising or a bust. Either way, power efficiency might come as a way to save #3.
The return on the investment is faster technological progress. Technology will develop faster and more efficiently, in all sectors.
AI will improve because of its capability to help the programmers who develop it.
AI is being misused for all kinds of vain projects at the moment, but once the shine wears off, people will jump on the next bandwagon. By limiting use to the things that actually improve through AI, we can stabilise the energy requirement. But I agree we are not there yet.
AI is not going to replace jobs. Why would a company fire people when it could just use AI to multiply that team's productivity and make more money? Some C-suites do not understand this yet, but they will eventually.
Because of copyright, they all need to use non-copyrighted material, which in this timeline is few and far between. They will find other ways to improve AI; data feeding is the easiest way, but not the only one. I suspect the next innovation will be data refining. Imagine an LLM trained only on scientific papers.
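A crude sketch of what "data refining" could mean at its simplest: filter the corpus down to the target domain before it ever reaches training. (The corpus and keyword rule below are invented; real curation pipelines use trained quality classifiers and deduplication, not keyword matching.)

```python
# Crude "data refining" sketch: keep only documents that look like they
# belong to the target domain before they ever reach training.
corpus = [
    "We present a randomized controlled trial of ...",
    "omg you guys will NOT believe what happened at brunch",
    "Abstract: We propose a novel method for protein folding ...",
    "10 celebrity diets ranked by vibes",
]

SCIENCE_MARKERS = ("abstract:", "we present", "we propose", "trial")

refined = [
    doc for doc in corpus
    if any(marker in doc.lower() for marker in SCIENCE_MARKERS)
]

print(len(refined), "of", len(corpus), "documents kept for training")
```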
LLMs are only the first step into creating AGI. We are teaching the computer to be able to communicate with us in a meaningful manner. The intelligence will come after.
It is like teaching a baby to talk. Yes it will say dumb shit, but it will improve over time.