OpenAI spends about $700,000 a day just to keep ChatGPT going. The cost does not include other AI products like GPT-4 and DALL-E 2. Right now, it is pulling through only because of Microsoft's $10 billion funding.
What a silly article. $700,000 per day is ~$256 million a year. That's peanuts compared to the $10 billion they got from MS. With no new funding they could run for decades, and this is one of the most promising new technologies in years. MS would never let the company fail due to lack of funding; it's basically MS's LLM play at this point.
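Back-of-the-envelope, using just the article's two figures (a quick Python check, purely illustrative):

```python
daily_cost = 700_000                         # reported daily cost of running ChatGPT (USD)
annual_cost = daily_cost * 365               # ~$255.5M per year
runway_years = 10_000_000_000 / annual_cost  # measured against Microsoft's $10B
print(f"${annual_cost:,}/year -> {runway_years:.1f} years of runway")
# -> $255,500,000/year -> 39.1 years of runway
```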
OpenAI's biggest expense is infrastructure, which is rented from... Microsoft. Even if the company folds, it will have given most of the invested money back to Microsoft.
While the title is clickbait, they do say right at the beginning:
*Right now, it is pulling through only because of Microsoft's $10 billion funding*
Pretty hard to miss, and then they go on to explain their point, which might be wrong, but still stands. The $700k is only one model; there are others, plus making new ones and running the company. It's easily over $1B a year without making a profit. Still not significant, since people will pour money into it even after those $10B.
I mean, you're correct in the sense that Microsoft basically owns their ass at this point, and that Microsoft doesn't care if they make a loss because it's sitting on a mountain of cash. So one way or another Microsoft is getting something cool out of it. But at the same time it's still true that OpenAI's business plan was unsustainable hyped hogwash.
Also, their biggest expense is cloud costs, and they use the MS cloud, so that basically means Microsoft is getting a ton of equity in a hot startup in exchange for cloud credits, which is a ridiculously good deal for MS. Zero chance MS would let them fail.
Almost every company uses either Google or Microsoft Office products, and we already know Microsoft is working on an AI offering/solution for O365 integration. They can see the writing on the wall here and are going to profit massively as they include it in their E5 license structure or invent a new one that includes AI. Then they'll recoup that investment in months.
Microsoft reported profitability in their AI products last quarter, with a substantial gain in revenue from it.
It won't take long for them to recoup their investment in OpenAI.
If OpenAI had been more responsible in how they released ChatGPT, they wouldn't be facing this problem. Just completely opening Pandora's box because they were racing to beat everyone else out was extremely irresponsible, and if they go bankrupt because of it, then whatever.
There's plenty of money to be made in AI without everyone just fighting over how to do it in the most dangerous way possible.
I'm also not sure nVidia is making the right decision tying their company to AI hardware. Sure, they're making mad money right now, but just like the crypto space that can dry up instantly.
That would explain why ChatGPT started regurgitating cookie-cutter garbage responses more often than usual a few months after launch. It has really started feeling more like a chatbot lately; it almost felt like talking to a human 6 months ago.
I don't think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.
What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also easily tell you how to murder someone and get away with it, create a plausible sounding weapon of mass destruction, coerce you into weird relationships, and basically anything else it wasn't supposed to do.
I've noticed it has become worse at rubber-ducking non-trivial coding prompts. I've noticed that my juniors have a hell of a time functioning without access to it, and they'd rather ask questions of seniors than try to find information or solutions themselves, essentially replacing chatbots with Sr devs.
A good tool for getting people on-ramped if they've never coded before, and maybe for rubber ducking, in my experience. But far too volatile for consistent work, especially with a black box of a company constantly hampering its outputs.
As a Sr. Dev, I'm always floored by stories of people trying to integrate chatGPT into their development workflow.
It's not a truth machine. It has no conception of correctness. It's designed to make responses that look correct.
Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?
ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.
Copilot is pretty amazing for day to day coding, although I wonder if a junior dev might get led astray with some of its bad ideas, or too dependent on it in general.
But what did they expect would happen, that more people would subscribe to pro? In the beginning I thought they just wanted to survey-farm usage to figure out what the most popular use cases were and then sell that information or repackage use-cases as an individual added-value service.
I am unsure about the free version, but I really am very surprised by how good the paid version with the code interpreter has gotten in the last 4-6 weeks. Feels like I have a C# syntax guru on 24/7 access. It used to make lots of mistakes a couple months ago, but rarely does now, and if it does, it almost always fixes it in the next code edit. It has saved me untold hours.
I mean apart from the fact it's not sourced or whatever, it's standard practice for these tech companies to run a massive loss for years while basically giving their product away for free (which is why you can use openAI with minimal if any costs, even at scale).
Once everyone's using your product over competitors who couldn't afford to outlast your own venture capitalists, you can turn the price up and rake in cash since you're the biggest player in the market.
The difference is that the VC bubble has mostly ended. There isn't "free money" to keep throwing at a problem post-pandemic. That's why there's an increased focus on Uber (and others) making a profit.
In this case, Microsoft owns 49% of OpenAI, so they're the ones subsidizing it. They can also offer at-cost hosting and in-roads into enterprise sales. Probably a better deal at this point than VC cash.
This is what caused spez at Reddit and Musk at Twitter to go into desperation mode and start flipping tables over. Their investors are starting to want results now, not sometime in the distant future.
I don't know anything about anything, but part of me suspects that lots of good funding is still out there, it's just being used more quietly and more scrupulously, & not being thrown at the first microdosing tech wanker with a great elevator pitch on how they're going to make "the Tesla of dental floss".
Speaking of Uber, I believe it turned a profit for the first time this year. That is, it had never made any profit since its creation, whenever that was.
Wait, has anybody bothered to ask AI how to fix itself? How much Avocado testing does it do? Can AI pull itself up by its own boot partition, or does it expect the administrator to just give it everything?
If we know, but there's no public text about it, it doesn't know either.
It is trained on stuff that has already been written, and trained to emulate the statistical properties of those words. It cannot and will not tell us anything new.
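To make "emulating the statistical properties" concrete, here's a toy bigram generator. A real LLM is incomparably more sophisticated, but the core idea of sampling continuations from patterns in existing text is the same:

```python
import random
from collections import defaultdict

# Learn which word tends to follow which, then generate by sampling
# from those observed frequencies. A drastic simplification of an LLM.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

word, output = "the", ["the"]
for _ in range(8):
    if not follows[word]:
        break  # dead end: no observed continuation
    word = random.choice(follows[word])  # frequent followers get picked more often
    output.append(word)
print(" ".join(output))  # e.g. "the cat ate the mat the cat sat on"
```

Nothing in that loop can produce a word that wasn't already in the corpus, which is the commenter's point.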
I asked Bard how ChatGPT can fix itself, here's what it said:
ChatGPT can fix itself in a number of ways, including:
- **Retraining on a larger dataset.** ChatGPT is trained on a massive dataset of text and code, but it can always be retrained on a larger dataset. This would help ChatGPT to learn new things and improve its performance.
- **Using a different algorithm.** ChatGPT is currently trained using the GPT-3 algorithm, but there are other algorithms that could be used. For example, ChatGPT could be trained using the Megatron-Turing NLG algorithm, which is one of the most powerful language models in the world.
- **Using a combination of algorithms.** ChatGPT could also be trained using a combination of algorithms. This would allow ChatGPT to benefit from the strengths of different algorithms.
- **Using a feedback loop.** ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.
- **Using human feedback.** ChatGPT could also be used to generate text, and then this text could be reviewed by humans. The human feedback could then be used to improve ChatGPT's performance.
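(That last item is basically RLHF in miniature. A runnable toy of the idea, with the "model" reduced to a weighted choice between two canned replies and every detail invented for illustration:)

```python
import random

# Toy human-feedback loop, nothing like OpenAI's actual pipeline.
replies = ["a helpful answer", "dangerous nonsense"]
weights = [1.0, 1.0]

def generate():
    # The "model": sample a reply according to its current weight.
    return random.choices(range(len(replies)), weights=weights)[0]

def human_rating(idx):
    # Stand-in for a human rater: approves the helpful reply, rejects the other.
    return 1.0 if idx == 0 else -1.0

for _ in range(200):
    idx = generate()
    # Feedback nudges the weights; the floor keeps them valid for random.choices.
    weights[idx] = max(0.01, weights[idx] + 0.1 * human_rating(idx))

print(dict(zip(replies, weights)))  # the helpful answer ends up far more likely
```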
ChatGPT has the potential to make Bing relevant and unseat Google. No way Microsoft pulls funding. Sure, they might screw it up, but they'll absolutely keep throwing cash at it.
It clearly makes no sense. But it satisfies the masses' irrational need to hate on AI.
Tbf I have no idea why. Why do people hate an extremely clever family of mathematical methods, which highlights the brilliance of human minds? But here we are, casually shitting on one of the highest peaks humanity has ever reached.
It seems to be a common thing. I gave up on /r/futurology and /r/technology over on Reddit long ago because it was filled with an endless stream of links to cool new things with comment sections filled with nothing but negativity about those cool new things. Even /r/singularity is drifting that way. And so it is here on the Fediverse too, the various "technology" communities are attracting a similar userbase.
Sure, not everything pans out. But that's no excuse for making all of these communities into reflections of /r/nothingeverhappens. Technology does change, sometimes in revolutionary ways. It'd be nice if there was a community that was more upbeat about that.
People are scared because it will make consolidation of power much easier, and make many of the comfier jobs irrelevant. You can't strike for better wages when your employer is already trying to get rid of you.
The idealist solution is UBI but that will never work in a country where corporations have a stranglehold on the means of production.
Hunger shouldn't be a problem in a world where we produce more food with less labor than anytime in history, but it still is, because everything must have a monetary value, and not everyone can pay enough to be worth feeding.
I probably sound like I hate it, but I'm just giving my annual "this new tech isn't the miracle it's being sold as" warning, before I go back to charging folks good money to clean up the mess they made going "all in" on the last one.
No sources, and even given their own numbers they could continue running ChatGPT for nearly another 40 years. I doubt they're anywhere near a net profit, but they're far from bankruptcy.
It works if you ask it for small specific components, the bigger the scope of the request, the less likely it will give you anything worthwhile.
So basically you still need to know what you're doing and how to design a script/program anyway, and you're just using ChatGPT to figure out the syntax.
It's a bit of a time-saver at times, but it's not replacing anyone in the immediate future.
I've tried using it myself and the responses I get, no matter how I phrase them, are too vague in most places to be useful. I have yet to get anything better than what I've found in documentation.
Yeah, it’s probably not going to take over like companies/investors want, but you’d think it’s absolutely useless based on the comments on any AI post.
Meanwhile, people are actively making use of ChatGPT and finding it to be a very useful tool. But because sometimes it gives an incorrect response that people screenshot and post to Twitter, it’s apparently absolute trash…
AI is literally one of the most incredible creations of humanity, and people shit on it as if they know better. It's genuinely an astonishing historical and cultural achievement, a peak of human ingenuity.
No idea why such hate...
One can hate the Disney CEO for misusing AI, but why shit on AI?
It's shit on because it is not actually AI as the general public tends to use the term. This isn't Data from Star Trek, or anything even approaching Asimov's three laws.
The immediate defense against this statement is people going into mental gymnastics and hand waving about "well we don't have a formal definition for intelligence so you can't say they aren't" which is just... nonsense rhetorically because the inverse would be true as well. Can't label something as intelligent if we have no formal definition either. Or they point at various arbitrary tests that ChatGPT has passed and claim that clearly something without intelligence could never have passed the bar exam, in complete and utter ignorance of how LLMs are suited to those types of problem domains.
Also, I find that anyone bringing up the limitations and dangers is immediately lumped into this "AI haters" group, like belief in AI is some sort of black-and-white religion or requires some sort of ideological purity. Like having honest conversations about these systems' problems intrinsically means you want them to fail. That's BS.
Machine Learning and Large Language Models are amazing, they're game changing, but they aren't magical panaceas and they aren't even an approximation of intelligence despite appearances. LLMs are especially dangerous because of how intelligent they appear to a layperson, which is why we see everyone rushing to apply them to entirely non-fitting use cases as a race to be the first to make the appearance of success and suck down those juicy VC bux.
Anyone trying to say different isn't familiar with the field or is trying to sell you something. It's the classic case of the difference between tech developers/workers and tech news outlets/enthusiasts.
The frustrating part is that people caught up in the hype train of AI will say the same thing: "You just don't understand!" But then they'll start citing the unproven potential future that is being bandied around by people who want to keep you reading their publication or who want to sell you something, not any technical details of how these (amazing) tools function.
At least in my opinion that's where the negativity comes from.
Remind me again how that "revolution of human mobility", the Segway, is doing now...
Or how wonderful every single one of the announcements of breakthroughs in fusion generation has turned out to be...
Or how the safest operating system ever, Windows 7, turned out in terms of security...
Or how Bitcoin has revolutionized how people pay each other for stuff...
Some of us have seen lots of hype trains go by over the years, always with the same format and almost all of them originating from exactly the same subset of people as the AI one, and recognize the sales-speak from greedy fuckers designed to excite ignorant, naive fanboys of such bullshit choo-choo trains when they come to the station.
Rational people who are not driven by "personal profit maximization on the backs of suckers" will not use sales-speak and refer to anything brand new as "the most incredible creation of humanity" (it's way too early to tell) or deem any and all criticism of it as "shitting on it".
It's just projection of the hate for techbros (especially celebrities like Musk). Everything that techbros love (crypto, ai, space, etc) is hated automatically.
I.e. they don't really hate AI. You can't hate something if you have zero understanding what that something is. It's just an expression of hate for someone who promotes that something.
I'll clarify, it's basically full of nonsense. Half of the shit it spits out is nonsense, and the rest is questionable. Even with that, it's already being used to put people out of their jobs.
Techbros think AI will run rampant and kill all humans, when they're the ones killing people by replacing them with shitty AI. And the worst part is that it isn't even good at the jobs it's being used for. It makes shit up, it plagiarizes, it spits out nonsense. And a disturbing amount of the internet is starting to become AI generated. Which is also a problem. See, AI is trained on the wider internet, and now AI is being trained on the shitty output of AI. Which will lead to fun problems and the collapse of the AI. Sadly, the jobs taken by AI will not come back.
Does it feel like the lifespans of these "game changing" techs keep shrinking? The dot-com bubble lasted a decade or so, the NFT craze a few years, and now AI hasn't even made it a year.
The Internet is concentrating and getting worse because of it, inundated with ads and bots and bots who make ads and ads for bots, and being existentially threatened by Google’s DRM scheme. NFTs have become a joke, and the vast majority of crypto is not far behind. How long can we play with this new toy? Its lead paint is already peeling.
I read an article about the bot collapse. Basically, companies use bots to buy ad space on websites. Google uses a bot to match ads to websites. Now we have a massive influx of AI-made pages, literally pages of BS just to make more ad space that a bot will sell to another bot. It's bots all the way down.
I wouldn't put NFTs in the same boat as the dot-com bust. The dot-com thing was way bigger. Most people didn't do anything with NFTs. Crypto seems in between. The AI thing seems similar, though.
As for the pace, I think the US financial services industry has been on a growth spree for decades and is desperate to find the next thing that will make it money. It's like Ed, Edd n Eddy, but with the PC, the internet, dotcom, internet services, social media, and now crypto.
True. They could close it off to the public at any time and only offer a subscription service.
However, they are probably afraid to do that for fear that they will lose out to competitors. Offering the service for free was the key to their popularity and to bringing AI technology into the hands of average users. If they cut that off, someone else will quickly take their place.
I actually started my journey into lewd AI stuff with NovelAI. I stopped using it after a while because I like chatbot RP specifically, not just something that will finish a story for me. Using Silly Tavern to try to get the NovelAI models to act like chat bots just shows how not-good they are at that.
If ChatGPT only costs $700k to run per day and they have a $10b war-chest, assuming there were no other overhead/development costs, OpenAI could run ChatGPT for 39 years. I'm not saying the premise of the article is flawed, but seeing as those are the only 2 relevant data points that they presented in this (honestly poorly written) article, I'm more than a little dubious.
But, as a thought experiment, let's say there's some truth to the claim that they're burning through their stack of money in just one year. If things get too dire, Microsoft will just buy 51% or more of OpenAI (they're going to be at 49% anyway after the $10b deal), take controlling interest, and figure out a way to make it profitable.
What's most likely going to happen is OpenAI is going to continue finding ways to cut costs like caching common query responses for free users (and possibly even entire conversations, assuming they get some common follow-up responses). They'll likely iterate on their infrastructure and cut costs for running new queries. Then they'll charge enough for their APIs to start making a lot of money. Needless to say, I do not see OpenAI going bankrupt next year. I think they're going to be profitable within 5-10 years. Microsoft is not dumb and they will not let OpenAI fail.
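A cache like that is conceptually simple. A minimal sketch of the idea (purely illustrative, with a hypothetical run_model standing in for the expensive inference call; this says nothing about OpenAI's actual stack):

```python
import hashlib

cache = {}

def normalize(prompt: str) -> str:
    # Collapse case and whitespace so trivially different prompts share an entry.
    return " ".join(prompt.lower().split())

def run_model(prompt: str) -> str:
    return f"model output for: {prompt}"  # hypothetical stand-in for GPU inference

def answer(prompt: str) -> str:
    key = hashlib.sha256(normalize(prompt).encode()).hexdigest()
    if key not in cache:
        cache[key] = run_model(prompt)  # only pay for inference on a cache miss
    return cache[key]

print(answer("What is the capital of France?"))
print(answer("what is  the capital of FRANCE?"))  # served from cache, no model call
```

Even a modest hit rate on popular free-tier prompts would take a real bite out of that $700k/day.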
Details of the 10b aren't public, but we know it's a multi year deal, so it's possible that OpenAI doesn't actually have the full amount in cash now, and they could go bankrupt before they unlock the full amount. In the event of a bankruptcy, Microsoft could be in a position to acquire their assets for themselves on the cheap.
Because I distrust this kind of technology in general, and it would surely add to the dystopian, anti-consumer, anti-workforce agenda big tech is currently enforcing. I work in desktop publishing, and about 3/4 of the jobs in that field would be cut the moment AI could replace them for a fraction of the cost.
The thing about all GPT models is that they predict words from the statistical frequency of their usage. Which means the only way to get good results is to run them on cutting-edge equipment designed specifically for that job, with models approaching a TB in size. Meanwhile, diffusion models are only GBs in size and run on a consumer GPU, yet still produce masterpieces, because they already know what each word is associated with.
It's definitely become a part of a lot of people's workflows. I don't think OpenAI can die. But the need of the hour is to find a way to improve efficiency many times over. That would make it cheaper, more powerful, and more accessible.
I think they're just trying to get people hooked, and then they'll start charging for it. It even says at the bottom of the page when you're in a chat:
Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT August 3 Version
I don't think it's at all clear that that's a viable business strategy in a market where that kind of sleight of hand is as well known as it is right now.
LLMs are pricey to train and evaluate, much more so than compositional models.
But no, OpenAI aren't going bust due to this. Given that they have the most successful LLM on the market, it's safe to say that they probably know how much they cost, and can calculate roughly how much their yearly spend will be.
They're gonna be in even bigger trouble when it's determined that AI training, especially for content generation, is not fair use and they have to pay each and every person whose data they've used.
A) Not true. Many have been training models on various online data that doesn't belong to them, has not been licensed, and has been used without the informed consent of the rights holders.
B) Terrible comparison. Music sampling is a grey area that is much more complex and dubious than you're suggesting. There are instances in which sampling has been considered fair use, but outside of that there are strict laws around sampling. Finally, human music creation and sampling have very little in common with generative AI.
AI is here to stay. But the free ride of scraping every piece of information in human history without even a basic regard towards intellectual property or personality rights is unsustainable, unethical, and nowhere near the threshold for what can be considered fair use.
Once people start needing to own or license their training data sets the technology will be just fine, but costs will rise dramatically and the VC investment bubble is going to pop bigtime.
What's legal changes. There will absolutely be new AI-focused laws enacted, just like there were internet-focused laws once the internet became very impactful. We simply have no idea how this will play out. Whatever new laws are passed will definitely not kill AI, though, since it's a big business and US lawmakers will want AI companies to thrive so those services can be exported. People acting like AI will die for legal reasons are completely off base.
Ignoring the fact that training an AI is insanely transformative and definitely fair use, people would not get any kind of pay. The data is owned by websites and corporations.
If AI training was to be highly restricted, Microsoft and google would just pay each other for the data and pay the few websites they don't own (stack, GitHub, Reddit, Shutterstock, etc), a bit of money would go to publishing houses and record companies, not enough for the actual artist to get anything over a few dollars.
And they would happily do it, since they would be the only players in the game and could easily overcharge for a product that is eventually going to replace 30% of our workforce.
Your emotional short sighted response kills all open source and literally gives our economy to Google and Microsoft. They become the sole owners of AI tech. Don't be stupid, please. They want you to be mad, it literally only helps them.
And payment sharing would most likely be a percentage of revenue. Right now their biggest hurdle is just scaling, and it's incredibly rare that a startup with huge demand completely fails because of scaling challenges. Once they scale, their profit margin will be huge; they'd be able to do payouts and still profit. But don't get excited about payouts: they'll probably amount to pennies, like they do on Spotify.
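To see why "pennies" is plausible, here's the pro-rata arithmetic with invented round numbers (every figure below is made up for illustration):

```python
# All figures invented purely for illustration; the point is the pro-rata math.
shared_pool = 0.10 * 1_000_000_000   # say 10% of $1B in annual revenue gets shared
rights_holders = 200_000_000         # rough stand-in for everyone in the training data

payout_each = shared_pool / rights_holders
print(f"${payout_each:.2f} per person per year")  # -> $0.50
```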
AI is too useful and too powerful; none of the major players in world politics are going to put serious restrictions on it. Do you really think they're going to risk Chinese and Russian AI giving those countries the economic and scientific edge?
Yes, selfish people want to stop progress that could help everyone in the world get access to education, medical care, legal advice, social care, etc., because they think they're owed twenty cents for the text they wrote. But thankfully society isn't going to take them seriously; there are money-grubbers and antisocial people everywhere looking for any chance to ruin things that could help others, and we ignore those people.
Well, I mean, ChatGPT actually does have some real-world use. Personally, I find ChatGPT more helpful than Stack Overflow when it comes to finding problems with my code.