WTF they MUST KNOW which ones have shitty microphones F*** they have never asked, "Was it painful to shout your order at someone who is either trying or not" and the screen that shows you what the human they paid as little as allowed by law has transcribed, is broken half the time
They just want to make an economy they don't have to pay anyone to profit from. That's why slavery became Jim Crow became migrant labor and with modernity came work visa servitude to exploit high skilled laborers.
The owners will make sure they always have concierge service with human beings as part of upgraded service, like they do now with concierge medicine. They don't personally suffer approvals for care. They profit from denying their livestock's care.
Meanwhile we, their capital battery livestock property, will be yelling at robots about refilling our prescription as they hallucinate and start singing happy birthday to us.
We could fight back, but that would require fighting the right war against the right people and not letting them distract us with subordinate culture battles against one another. Those are booby traps laid between us and them by them.
Only one man, a traitor to his own class no less, has dealt them so much as a glancing blow, while we battle one another about one of the dozens of social wedges the owners stoke through their for profit megaphones. "Women hate men! Christians hate atheists! Poor hate more poor! Terfs hate trans! Color hate color! 2nd Gen immigrants hate 1st Gen immigrants!" On and on and on and on as we ALL suffer less housing, less food, less basic needs being met. Stop it. Common enemy. Meaningful Shareholders.
And if you think your little 401k makes you a meaningful shareholder, please just go sit down and have a juice box, the situation is beyond you and you either can't or refuse to understand it.
I mean I don't know how it is where you live but here taking the orders has been 99% supplanted by touch screens (without AI) So yeah, a machine can do that job.
Current AI is just going to be used to further disenfranchise citizens from reality. It's going to be used to spread propaganda and create noise so that you can't tell what is true and what is not anymore.
We already see people like Elon using it in this way.
McDonalds removes AI drive-throughs after order errorsbecause they aren't generating increased profits
Schools, doctor's offices, and customer support services will continue to use them because reducing quality of service appears to have no impact on the influx in private profit-margin on revenue.
Machine Learning is awesome for medicine, when they run your genetic sequence and then say "we should check for this weird genetic illness that very few people have because it's likely you'll have it" that comes from Machine Learning algorithms finding patterns in the old patient data we feed it.
Machine Learning is great for finding discrepancies in big data sets, like statistics of illnesses.
Machine Learning (AI) is incapable of making good decisions based on that statistical analysis though, which is why it's still a horrible idea to totally automate medicine.
It also makes tons of mistakes and false-positives.
There's a right way to use it, and the wrong way is by using proprietary algorithms that haven't published openly and reviewed by the government and experts. And with failsafes to override the decisions made by the algorithms, in recognition that they often make terrible mistakes that disproportionately harm minorities.
Not a good argument. Applying a specific technology in a specific setting does not invalidate its use or power in other application settings. It also doesn't tell how good or bad an entire branch of technology is.
It's like saying "fuck tools", because someone tried to loosen a screw with a hammer.
Tbh if I told half the doctors and top scientists in the world to take my burger order, or flip the patty, they'd fall apart and fuck it up. It's apples and oranges
Assuming you taught them how to enter orders into the till (the AI was "trained" on how to input orders, let's compare apples to apples here) no, they wouldn't fuck it up. They would be slower than a regular employee but they wouldn't fuck up what people wanted.
Oh, and if they weren't sure for some reason they would ask somebody for help instead of making shit up.
I mean they likely would because employees regularly fuck up my order. I don't really go to fast food anymore but when I do it's almost inevitable that there's at least one minor fuck up on my order even when I try to be very very clear on my order.
I do my best to be one of those people that is clear concise and says the items exactly as they are listed on the menu but somehow I still end up with mistakes in my order pretty regularly when I do go
In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.
But that’s not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is “companies will buy our products so they can do more with less.” It’s not “business customers will buy our products so their products will cost more to make, but will be of higher quality.”
Very much so. As a nurse the AI components I like are things that bring my attention to critical results (and combinations of results) faster. So if my tech gets vitals and the blood pressure is low and the heart rate is high and they're running a temperature, I want it to call both me and the rapid response nurse right away and we can all sort out whether it's sepsis or not when we get to the room together. I DON'T want it to be making decisions for me. I just want some extra heads up here and there.
Ideally, yeah - people would review and decide first, then check if the AI opinion confers.
We all know that's just not how things go in a professional setting.
Anyone, including me, is just going to skip to the end and see what the AI says, and consider whether it's reasonable. Then spend the alotted time goofing off.
Obviously this is not how things ought to be, but it's how things have been every time some new tech improves productivity.
Not even that. LLMs have no concept of meaning or understanding. What they do in essence is space filling based on previously trained patterns.
Like showing a bunch of shapes to someone, then drawing a few lines and asking them to complete the shape. And all the shapes are lamp posts but you haven't told them that and they have no idea what a lamp post is. They will just produce results like the shapes you've shown them, which generally end up looking like lamp posts.
Except the "shape" in this case is a sentence or poem or self insert erotic fan fiction, none of which an LLM "understands", it just matches the shape of what's been written so far with previous patterns and extrapolates.
My same reaction, but scientific, peer-reviewed and published studies are very important if e.g. we want to stop our judicial systems from implementing LLM AI
That’s just false. People are all capable of reasoning, it’s just that plenty of them get terribly wrong conclusions from doing that, often because they’re not “good” at reasoning. But they’re still able to do that, unlike AI (at least for now).
DAE people are really stupid? 50% of all people are dumber than average, you know. Heh. NoW jUsT tHinK abOuT hOw dUmb tHe AverAgE PeRsoN iS. Maybe that's why they can't get my 5-shot venti caramel latte made with steamed whipped cream right. *cough* Where is my adderall.
I severely hope that people aren't using LLM-AI to do reasoning tasks. I appreciate that I am likely wrong, but LLMs are neither the totality or the pinnacle of AI tech. I don't think we are meaningfully closer to AGI than we were before LLMs blew up.
You know, OpenAI published a paper in 2020 modelling how far they were from human language error rate and it correctly predicted the accuracy of GPT 4. Deepmind also published a study in 2023 with the same metrics and discovered that even with infinite training and power it would still never break 1.69% error rate.
These companies knew that their basic model was failing and that overfitying trashed their models.
Sam Altman and all these other fuckers knew, they've always known, that their LLMs would never function perfectly. They're convincing all the idiots on earth that they're selling an AGI prototype while they already know that it's a dead-end.
As far as I know, the Deepmind paper was actually a challenge of the OpenAI paper, suggesting that
models are undertrained and underperform while using too much compute due to this. They tested
a model with 70B params and were able to outperform much larger models while using less compute by
introducing more training. I don't think there can be any general conclusion about some hard
ceiling for LLM performance drawn from this.
However, this does not change the fact that there are areas (ones that rely on correctness)
that simply cannot be replaced by this kind of model, and it is a foolish pursuit.
If I've said it once, I've said it a thousand times. LLMs are not AI. It is a natural language tool that would allow an AI to communicate with us using natural language...
What it is being used for now is just completely inappropriate. At best this makes a neat (if sometimes inaccurate) home assistant.
To be clear: LLMs are incredibly cool, powerful and useful. But they are not intelligent, which is a pretty fundamental requirement of artificial intelligence.
I think we are pretty close to AI (in a very simple sense), but marketing has just seen the fun part (natural communication with a computer) and gone "oh yeah, that's good enough. People will buy that because it looks cool". Nevermind that it's not even close to what the term "AI" implies to the average person and it's not even technically AI either so...
I don't remember where I was going with this, but capitalism has once again fucked a massive technical breakthrough by marketing it as something that it's not.
They're sentence-constructing machines. Very advanced ones. There was one in the 80s called Racter that spat out a lot of legible text that was basically babble. Now it looks like it isn't babble and that's sometimes the case.
Well it seems like a pretty natural fallacy to think that if something talks to us, in a language that we understand, that it must be intelligent. But it also doesn't help that LLMs, aka. fancy text generators built with machine learning algorithms, are marketed as artificial intelligence.
The LLMs can also be EXTREMELY useful, if used correctly.
Instead of replacing customer service workers, use the speech processing to highlight keywords on the service workers PC, so they can quickly find the right internal wiki page? Atlassian Intelligence works pretty neat in that way, a Help desk ticket already has some keywords highlighted and when you click on it, it shows an AI summary of what this means from resources in the Atlassian account. Helps inexperienced people to quickly get up to speed and it's only helping, not replacing.
What blows my mind about all this AI shit is that these bots are “programmed” by just telling them what to do. “You are an employee working at McDonald’s” and they take it from there.
Yeah, all the control systems are in-band, making them impossible to control. Users can just modify them as part of the normal conversation. It's like they didn't learn anything from phone phreaking.
Yeah fuck AI but can we stop shitting on fast food jobs like they are embarassing jobs to have that are somehow super easy.
What you should hate about AI is the way it is used as a concept to dehumanize people and the labor they do and this kind of meme/statement works against solidarity in our resistance by backhandedly insulting people working in fastfood.
Is it the most complicated job in the world? Probably not, but that doesn't mean these jobs aren't exhausting and worthy of respect.
The whole point of AI is to provide a narrative framework that allows the ruling class to further dehumanize labor and treat workers worse (because replacement with automation is just around the corner).
Realize that agreeing to this framework of low paid jobs as easy and worthless plays right into the actual reasons the ruling class are pushing AI so hard. The true power is in the story not the tech.
I have to had so many conversations with people still thinking fast food is only for high school kids. It's odd. If I say how will they be open during school hours, they make up some bullshit 'get a better job.' It doesn't make snese. Most of these people don't have good jobs and are lucky to be supported in their current lifestyle. They don't see that though.
I try to push the point of 'they are paying for your time and for you to be on standby.' you don't need to be actively moving all 8 hours. Your bosses don't. I've seen so many waste of time meetings to justify their welfare jobs. It's comical. They don't produce value. They are leeches. Not all, but too many.
I hate that talking point so much (and hear it all the time from people complaining about immigrants turkin ur jerbs). The Fast-Food-Jobs-Are-Brutal-And-Pay-Shit-Wages-Because-They’re-Building-Teen-Character narrative is anti-worker bullshit that denies folk job security and a living wage.
Someone's widowed nan needs this job. The single dad living next door needs this job. A diverse workforce - that includes young people looking for a summer gig - need this job.
Can we also talk about how much everyone, everywhere relies on service industry workers and how much everyone would absolutely lose their goddamn minds if they had to make their own burgers and fries twice a week, AND how these staple institutions, jobs we deemed so important that we made people work at them during a pandemic, how much the prices of these sandwiches and snacks has gone up in the last few years, how even bringing up the possibility of increasing minimum wage for these difficult and demanding jobs leads to an entire social "discourse" and fierce debates about if people should be able to afford things.
Also centrists who think of themselves as tech savy will smugly tell you the only way technology can improve fastfood workers lives is by eliminating jobs and thus all the ruling class has to do is push inflation up and these types of people will shout down anyone who argues we need to pay fastfood workers more to compensate because that must be pushing against the "natural" path of technological progress.
I don’t think it’s shitting on fast food jobs at all. The point of this is that taking orders at a fast food is, in the micro, an extremely easy task. What makes the job as a whole exhausting is the fact that you have to do that for a full shift and the human brain gets stressed from doing that. But AI doesn’t, and yet it’s messing up the simplest part of the job.
I don't agree we can just authoritatively state in broad terms that working fastfood is extremely easy in any framing, especially for shit pay and lack of quality recuperation time associated with getting treated like you aren't really a human being (more like an approximation of a robot).
McDonald's did not factor in the same thing you are apparently not factoring in- when humans at McDonald's fuck up your order, you can tell them about it.
When chat gpt first was released to the public I thought I’d test it out by asking it questions about something I’m an expert in. The results I got back were a Frankenstein of the worst possible answers from the internet. What I asked wasn’t very technical or obscure, and what I received was useless garbage. Haven’t used it since, I think it’s fraud like NFTs were fraud, only worse because these fraudsters convinced the business class that they have a tech solution to the problem of labor lowering their already obscene profits.
If it got my thing wrong I can only imagine what else it gets wrong. And our elites want to replace us with this? Ok lol good luck with that
You asked a search engine for information from the internet in its early stages and got Frankensteined results from the Internet. That was its purpose was it not? Obviously the more info it scrapes the more info it will go off of.. but yeah it is still just scraped data, not even particularly from reliable sources. The language models job is to make sentences out of information it has. It doesn't do anything intelligent to disect the information.
A good example is many AI (programs) can draw you a leopard. If you ask the program to then draw an arrow to its tail... It doesn't know what a tail is so it will draw a random arrow that would point to where a tail would be on a stock image, not even the one it "drew"
What it knows "this entity is a leopard".
We want it to know much more than that. We want the program to see "draw a leopard" and then it draw: 2 eyes: run the eye operation - which then says draw shapes, for items like retinas, colors, blood vessels etc and document all of that data while creating each hair and skin blotch. Then be able to comprehend what all of the items are and do, so when asked a question or given a task it can perform such actions..
But we haven't programmed it to do so, and thus it can't, because it doesn't think to learn, it just aggregates data and searches it. It can't compile the data and recycle itself back to the original operation, as that would be pretty intelligent.
You asked a search engine for information from the internet in its early stages and got Frankensteined results from the Internet. That was its purpose was it not? Obviously the more info it scrapes the more info it will go off of.. but yeah it is still just scraped data, not even particularly from reliable sources. The language models job is to make sentences out of information it has. It doesn't do anything intelligent to disect the information.
Yeah all that is true, and smart people understand it’s limitations, especially the nerds (no offense) that closely follow tech. But for the general public that’s being fed all this hype about AI? Especially school kids? Oh god. This is going to lead to some bad outcomes where the entire population is going to further dumb itself down and potentially end in catastrophe and a collapse of knowledge.
Well, the LLMs got a lot better since the first release. I would guess that the main problem with this AI (probably not one of the bleeding edge LLMs, guessing from the timeline) is that they have piss poor micros - even humans have problems getting your order right.
They have become a lot more convincing, not a lot better.
They're still misinformation amplifiers with a feedback loop. There's more misinformation on most topics out there (whether intentional, via simplification, or accidental) than there is information. LLMs, which have no model of reality and thus cannot really assess the credibility of sources, just hoover it all up and mix it all together to return it to you.
You (the generic you, not you in specific ... necessarily) then take the LLM's hallucinated garbage (which is increasingly subtle in its hallucinations) and post it. Which the LLMs hoover up in the next round of model updates and ...
People don’t get my order right either, for what it’s worth. But at least they have the excuse of being over-worked/under-paid, under pressure of being fast to hit metrics, and are usually a teenager or low skill worker.
And usually they take the order right, it just gets messed up on the line. So the AI is worse
It was an umbrella term for stuff we didn't have yet..then marketing teams remembered it existed. So now it's either a nonsense term (like a PID is AI now) or it means large language models. Since that is clearly both what mcd's used and what the follow up message is referring to, I don't think you need to gatekeep this message. Take that cape off, hero of AI-term-correctness. Flip the dictionary-signal off and turn in to bed.
There is no the AI. There are many, many different algorithms and datasets that we call AI. Saying that this particular issue proves that no AI can be used for education or other tasks is just stupid. These "fuck %thing%" communities attract all the stupidest people that just want to be angry at things they don't understand.
Yeah I mean what really is the "program" they are calling AI.
I assume a database where they log terms like double cheeseburger, and try to add as many various ways you can say that without overlapping another item then when that is recognized it runs and if statement that points to term names that are recognized as part of the double cheeseburger. 2 pickles, onions, ketchup, mustard, 2 patties, 2 slices of cheese, and 2 pieces of bread. If the user mentioned a term such as "take off" "hold" "remove" a function is used to change 2 pickles to 0 or 4.
Then that data set is aggregated into a program that tallies that -2 or +2 pickles changes the cost up/down by said amount. (McDonald's actually does lower cost if you remove items, I found that interesting..)
Order paid for... 2 patties automatically dropped into the burger making machine (I think that's still a carbon based meat sack), subtracted from inventory. 2 buns added.. dropped.
I assume the hard part is when someone says ,"can you remove the lettuce from the first one" after you added a chicken sandwich and add referring to the burger you ordered earlier... Idk
I work adjacent to a group that does speech recognition. There's a massive amount of variation in regional dialects and that's before you get to non-native speakers. The you have people like my mother in law who doesn't have an accent, but her diction and grammar are... unique.
If someone is speaking in sentences you can use context clues to infer intent, but it's a lot more challenging when you're just getting spoken commands.
I suspect it's a training/sample gap, but it's likely going to be really hard to get to 100%.
I was going to say, I went through a drive through, had the clearest exchange ever when placing my order, and it was only after pulling away and hearing the same voice at the same time that it was an LLM.
I'd personally take that over trying playing telephone on shitty mics and speakers any day. "Sorry, could you repeat that?" Etc etc.
Pull up to McDonald's, order a Big Mac, fries and orange juice. Pay and take the bag at the window. Open the bag at the park, it's cancer medicine! Some little kid in the hospital is eating your fries! Stupid AI, second time this week!
Edit: I'm willing to take my downvotes, but I need to know, is it because I made a joke about little kids being denied cancer medicine by stupid AI? Or is it because I like orange juice?
Yes, clearly it makes sense to think the AI software that takes McDonalds orders is the same as the AI software that would be used in a classroom. So if McDonalds AI can't handle its job, neither can the other.
They'd likely use different models, but they'd be based on the same fundamental technology of LLMs. The training data would be different, but they'd have similar issues, but the one that's supposed to be in a classroom would have significantly more signals to interpret and significantly higher consequences for being wrong.
They'd be different, but if it doesn't work for McDonald's, it can't work for the much more complex task.
If anybody can build a teaching AI worth its salt then every other user facing AI service company would want to copy their architecture, and would be willing to pay for it
So if a megacorporation can't get the good stuff, and nobody's even seen the good stuff, it probably doesn't even exist
I was going to post a reply supportive of you but realized this is an anti-ai grandpa group lol. I can't stand hearing people talk about topics they don't know anything about. Yall need to give your order to ChatGPT, and then ask it to repeat your order for you. If you don't know why McDonalds fails but ChatGPT succeeds you need to shut the fuck up.
Thank you! Clearly we shouldn't be making a 1 to 1 comparison between the McD's AI and one used in education. It's like saying, "If Notepad can't correct spelling errors or grammar mistakes, then Word shouldn't be used to rely on such either!" Different programs, both text editors.
inb4 "Notepad can do so now" or "you can if you get this plugin" - that's not the point.