Why is there so much hype around artificial intelligence?
I've tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn't work, why are they trying to make us all use it?
They were pretty cool when they first blew up. Getting them to generate semi-useful information wasn't hard, and they would usually avoid answering or defer on anything hard and factual.
They've legitimately gotten worse over time. As user volume has gone up, responses have had to get faster and shallower, and further training on internet content, which increasingly includes the models' own output, has caused the models to gradually degrade. They've also been pushed harder than they were meant to be, to show "improvement" to investors demanding more accurate, human-like factual responses.
At this point it's a race to the bottom on a poorly understood technology. Every money-sucking corporation latched on to LLMs like a piglet finding a teat, thinking they would be the golden goose that finally eliminates those stupid, whiny, expensive workers who keep asking for annoying, unprofitable things like "paid time off" and "healthcare". In reality they've been sold a bill of goods by Sam Altman and the rest of the tech bros currently raking in a few extra hundred billion dollars.
I found a non-paywalled article where scientists from Oxford University state that feeding AI synthetic data generated by other AI models could lead to "model collapse".
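If you want to see the basic idea without reading the paper, here's a toy sketch in Python. To be clear, this is nothing like how LLMs are actually trained; the "model" here is just a Gaussian fit, and all the numbers are made up for illustration. But it shows the statistical flavor of the problem: fit a model to data, sample from it, refit on the samples, repeat, and the model only ever sees its own output.

```python
import numpy as np

# Toy illustration of "model collapse": each generation of a "model"
# is trained only on samples produced by the previous generation.
# The "model" is just a Gaussian fit (mean + std), not an LLM, but it
# shows the failure mode: the spread typically shrinks generation
# over generation, and the tails of the original data get lost.
rng = np.random.default_rng(42)

data = rng.normal(loc=0.0, scale=1.0, size=50)     # "human-made" data
mu, sigma = data.mean(), data.std()                # generation 0 fit

for generation in range(1, 201):
    synthetic = rng.normal(mu, sigma, size=50)     # model's own output
    mu, sigma = synthetic.mean(), synthetic.std()  # retrain on it alone
    if generation % 25 == 0:
        print(f"generation {generation:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
```

Real training pipelines mix in fresh human data and filter what goes in, so it's nowhere near this stark in practice, but the direction of the problem is the same.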
I find that a lot of discourse around AI is... "off". Sensationalized, or simplified, or emotionally charged, or illogical, or simply based on a misunderstanding of how it actually works. I wish I had a rule of thumb to give you about what you can and can't trust, but honestly I don't have a good one; the best thing you can do is learn about how the technology actually works, and what it can and can't do.
For a while, Google said they would revolutionize search with artificial intelligence. That hasn't been my experience. Someone here mentioned working on the creative side instead, and that seems to be working out better for me.
Yeah, it's much better at "creative" tasks (generation) than it is at providing accurate data. It will always be better at tasks that are "fuzzy", that is, tasks that don't have a strict scale of success/failure but are up to interpretation. It will also be better at tasks where the overall output matters more than the precise details. Generating images, text, etc. is a good fit.
That sounds about right. I heard that the AI recommendation to put glue on your pizza came from a joke on Reddit about how to keep cheese from falling off the pizza. So obviously the AI doesn't know a good source of information from a bad one. But as you say, something fuzzy that doesn't need to be 100% accurate apparently works pretty well. Also, my logic is a little fuzzy once in a while myself.
Look it up. Also, they were pushing AI for web searches and I have not had good luck with that. However, I created a document with it yesterday and it came out really well. Someone said to try the creative side and so far, so good.
I know what model collapse is; it's a fairly well-documented problem that we're starting to run into. You're not wrong, it's just that the person you replied to was agreeing with you about this.
> Someone said to try the creative side and so far, so good.
Nice! I'm glad you were able to find something useful to use it for.