I can run a pretty alright text generation model and the Stable Diffusion models on my 2016 laptop with two GTX 1080M cards. You can try with these tools:
oobabooga's text-generation-webui
Yeah, if you're willing to carry a brick, or at least a power bank (brick), if you don't want it to constantly overheat or deal with 2-3 hours of battery life. There's only so much copper can take and there are limits to miniaturization.
And I do have a couple different LLMs installed on my rig. But having that resource running locally is years and years away from being remotely performant.
On the bright side there are many open-source LLMs, and it seems like there are more every day.
We really need to stop calling things "AI" as if it were one algorithm. There's image recognition, collective intelligence, neural networks, path finding, and pattern recognition, sure, and they've all been called AI, but functionally they have almost nothing to do with each other.
For computer scientists this year has been a sonofabitch to communicate through.
But "AI" is the umbrella term for all of them. What you said is the equivalent of saying:
we really need to stop calling things "vehicles". There's cars, trucks, airplanes, submarines, and space shuttles and they've all been called vehicles, but functionally they have almost nothing to do with each other
All of the things you've mentioned are correctly referred to as AI, and since most people do not understand the nuances of neural networks vs. hard-coded algorithms (and anything in between), AI is an acceptable term for something that demonstrates results that come about from a computer "thinking" and making seemingly intelligent decisions.
Btw, just about every image recognition system out there is a neural network itself or has a neural network in the processing chain.
No. No, AI is NOT the umbrella term for all of them.
No computer scientist will ever genuinely call basic algorithmic tasks "AI". Stop saying things you literally do not know.
We are not talking about what the word means to normies colloquially. We're talking about what it actually means. The entire point is that it is a separate term from those other things.
Engineers would REALLY appreciate it if marketing morons would stop misapplying terminology just to make something sound cooler... NONE of those things are "AI". That's the fucking point. Marketing gimmicks should not get to choose our terms. (as much as they still do)
If I pull up to your house on a bicycle and tell you, "quickly, get in my vehicle so I can drive us to the store," you SHOULD look at me weirdly: I'm treating a bicycle like it's a car capable of getting on the freeway with passengers.
While this is true, I think of AI in the sci fi sense of a programmed machine intelligence rivaling human problem solving, communication, and opinion forming. Everything else to me is ML.
But, like Turing thought, how can we really tell the difference?
You're right, but so is the previous poster. Actual AI doesn't exist yet, and when/if it does it's going to confuse the hell out of people who don't get the hype over something we've had for years.
But calling things like machine learning algorithms "AI" definitely isn't going away... we'll probably just end up making a new term for it when it actually becomes a thing... "Digital Intelligence" or something. /shrug.
Computer vision is AI. If they literally want a robot eye to scan their cluttered pantry and figure out what is there, that'll require some hefty neural net.
Edit: seeing these downvotes and surprised at the tech illiteracy on lemmy. I thought this was a better informed community. Look for computer vision papers in CVPR, IJCNN, and AAAI and try to tell me that being able to understand the 3D world isn't AI.
Computer vision is scanning the differentials of an image and determining the statistical likelihood of two three-dimensional objects being the same base mesh from a different angle, then making a boolean decision on it. It requires a database, not a neural net, though sometimes they are used.
A neural net is a tool used to compare an input sequence to previously reinforced sequences and determine a likely ideal output sequence based on its training. It can be applied, carefully, to computer vision. It usually actually isn't to any significant extent; we were identifying faces from camera footage back in the 90s with no such element in sight. Computer vision is about differential geometry.
Those are all very specific intelligences. The goal is to unite them all under a so-called general intelligence. You're right, that's the dream, but there are many steps along the way that are fairly called intelligence.
I imagine it's because all of these technologies combine to make a sci-fi-esque computer assistant that talks to you, and most pop culture depictions of AI are just computer assistants that talk to you. The language already existed before the technology, it already took root before we got the chance to call it anything else.
Language is fluid, and there is plenty of terminology that is dumb or imprecise to someone in the field, but A-ok to the wider populace. "Cloud" is also not actually a formation of water droplets but someone else's datacenter, and to some people the cloud is everything from Gmail to AWS.
If I say AI today and most people associate the same thing with it (these days that usually means generative AI, i.e. mostly diffusion or transformer models), then that's fine for me. Call it Plumbus for all I care.
The bad news is the AI they'll pay for will instead estimate your net worth and the highest price you're likely to pay. They'll then dynamically change the price of things like groceries to make sure the price they're charging maximizes their profits on any given day. That's the AI you're going to get.
The prices will automatically adjust like an Uber algorithm for supply and demand, so as people buy the item it keeps going up, and when people stop buying it, it decreases slowly.
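Something like this toy loop, presumably (all numbers here are invented for illustration):

```python
# Toy demand-based repricing: price creeps up while an item keeps selling
# and decays slowly once purchases stop. Purely illustrative numbers.

def update_price(price: float, units_sold_today: int,
                 bump: float = 0.02, decay: float = 0.01,
                 floor: float = 1.00) -> float:
    if units_sold_today > 0:
        price *= 1 + bump * units_sold_today   # demand pushes the price up
    else:
        price *= 1 - decay                     # no demand, drift back down
    return max(price, floor)                   # never drop below a cost floor

price = 3.50
for sold in [4, 3, 2, 0, 0, 0, 1]:             # a week of simulated sales
    price = update_price(price, sold)
    print(f"sold {sold}, new price {price:.2f}")
```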
Next, she's going to want a Libre AI that does not share her information with third parties or suggest unnecessary changes to make her spend more at sponsored businesses.
Cameras in your fridge and pantry to keep tabs on what you have, computer vision to take inventory, clustering to figure out which goods can be interchanged with which, language modeling applied to a web crawler to identify the best deals, and then some conventional code to aggregate the results into a shopping list
Unless you're assuming that you're gonna be supplied APIs to all the grocery stores which have an incentive to prevent this sort of thing from happening, and also assuming that the end user is willing, able, and reliable enough to scan every barcode of everything they buy
This app basically depends on all the best ai we already have except for image generation
Rolling this out for tools and parts at my work. Tool boxes with cameras in the drawers to make sure you put it back. Vending machines for parts with auto order.
Cameras and computer vision aren't necessary. Food products already come with upcs. All you need is a barcode reader to input stuff and to track what you use in meals. Tracking what you use could also be used for meal planning.
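A rough sketch of what that barcode-driven tracking could look like (the UPCs and item names are invented for the example):

```python
# Minimal UPC-keyed pantry inventory: scan to add, log meals to subtract,
# anything that hits zero lands on the shopping list.

pantry = {}                      # upc -> {"name": str, "qty": int}

def scan_in(upc: str, name: str, qty: int = 1):
    item = pantry.setdefault(upc, {"name": name, "qty": 0})
    item["qty"] += qty

def use_in_meal(upc: str, qty: int = 1):
    if upc in pantry:
        pantry[upc]["qty"] = max(0, pantry[upc]["qty"] - qty)

def shopping_list():
    return [item["name"] for item in pantry.values() if item["qty"] == 0]

scan_in("049000028911", "ketchup")
scan_in("016000275270", "cereal", 2)
use_in_meal("016000275270", 2)
print(shopping_list())           # ['cereal']
```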
I think you can achieve a similar result by having one giant DB so we can average out general consumption, and then a personal/family profile where we initially feed the AI data manually: what we bought, the expiry date, and when we partly or fully consumed it. Although it's input-intensive at first, I think the AI will become increasingly accurate, so you'll need to enter less and less data as it comes from both you and the rest of the users. The only thing that still needs to be input is "did you replace it?"
I'm sure there are companies who'd love to develop something like this. And collect that information about exactly what groceries you currently have and statistics of how you consume them, so they can sell it to advertisers. Not advertisers that sell these groceries, of course - for these the AI company could just make the AI buy them from suppliers that pay them.
They already exist and have been doing this for a long time, they are just using dumber versions of deep learning than what we have right now.
Less about giving your personal information to an advertiser though and more about using aggregate data trends to guide marketing efforts.
Like, if you know buns and hot dogs sell like crazy the week before July 4th, you merchandise bundles of both that override brand purchase intent in favor of convenience and discount.
An example of this kind of market research in action would be a clothes store that knows 20% of its sales were to people who had browsed the day before coming back to buy, so it offers 48-hour exit coupons valid the next day for a limited time.
The personalized data is used in-house at these aggregators to market to you directly, such as the War and Peace-length personalized coupons on receipts where they've been contracted by the retailers.
Not just advertisers; it would also get sold to food manufacturers and product developers. This is not so bad though, because it helps new products come out that might be in line with what you want.
I want AI to control traffic lights so that I don't sit stopped through an entire cycle as the only car in a 1 mile radius. Also, if there is just one more car in line, let the light stay green just a couple seconds longer. Imagine the gas and time that could be saved... and frustration.
That's already a thing, though it isn't AI driven. Many intersections have sensors that detect traffic and can change the lights quickly or let them stay green longer if you're approaching it. It's only getting more advanced as time goes on.
Thank goodness. Until every intersection becomes this intuitive, I will only continually notice the ones that hold me hostage through several cycles and/or don't even notice I'm there waiting at a red light for 5 minutes at 3am when I'm the only car there.
Doesn't need AI, and there are countries that already have a system in place with the same result. Unsurprisingly the places with more focus on pedestrian, cyclist, and public transit infrastructure have the most enjoyable driving experience. All the people that don't want to drive will stop as soon as it is safe and convenient, and all those cars off the road also help with this because the lights will be queued up with fewer cars.
To be fair, there are already more intelligent traffic light systems that use sensors in the road to see if there is traffic sitting at the lights, combined with push buttons for pedestrians and cyclists. These can be combined with sensors further up the road to see if more traffic is coming and extend the periods of green light for certain sides. It may not be perfect and it may require touching up later after seeing which times could be extended or shortened.
It's not AI, but it works a lot better than the old hard-coded traffic lights. Here in the Netherlands there are only a handful of intersections left that still have the hard-coded lights.
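For anyone curious, the green-extension logic those sensor-actuated controllers use is roughly this (a simplified toy, not any real controller's firmware; all timings invented):

```python
# Simplified actuated signal: keep extending the green while cars keep
# arriving, up to a maximum, then serve the cross street.

MIN_GREEN, EXTENSION, MAX_GREEN = 10, 3, 60   # seconds, illustrative only

def green_time(arrival_gaps):
    """arrival_gaps: seconds between successive vehicle detections."""
    elapsed = MIN_GREEN
    for gap in arrival_gaps:
        if gap > EXTENSION or elapsed >= MAX_GREEN:
            break                              # gap in traffic (or max reached): end green
        elapsed = min(elapsed + EXTENSION, MAX_GREEN)
    return elapsed

print(green_time([1.0, 2.0, 1.5, 4.0]))        # extends until the 4-second gap
print(green_time([]))                          # lone car: just the minimum green
```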
Not near me. I can't speak to the entire US, but everywhere I've been, it's horrible. In Germany they have a green wave, where all of the lights are green if you go the speed limit. I have only encountered this twice within 200 miles of where I live.
You and Sarah Radz and everyone else here with brilliant practical ideas need to submit your resumes to the Silicon-Valley-esque corporations that commandeer such industries, and be hired on as the brains.
That would be great, but it's just not practical in many places.
I looked up how to get to work using public transportation once. It was 3 hours using 3 buses and a half-hour walk. LOL. I could literally do it in two hours using a bike. But I'm just not willing to spend 4 hours a day getting to work and back, and I don't know many who would if they had a choice. It's a half-hour drive for me, but 22 miles, mostly interstate.
I mean, when that xkcd was made, that was a hard task. Now identifying a bird in a picture can be done in realtime on a raspberry pi in a weekend project.
The problem in the OP isn't really a limitation of AI; it's coming up with an inventory management system that can detect low inventory without being obtrusive in a user's life. The rest is just scraping local stores' prices and compiling a list with some annealing algo that optimizes the price-to-stops ratio.
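The annealing part could be as small as this toy sketch (the prices and the per-stop penalty are invented; a real version would pull scraped data):

```python
# Toy simulated annealing over "which store do I buy each item at",
# trading total price against the number of distinct stops.
import math, random

prices = {                      # item -> {store: price}, all made up
    "milk":  {"A": 3.2, "B": 2.9, "C": 3.5},
    "eggs":  {"A": 2.5, "B": 2.8, "C": 2.2},
    "bread": {"A": 2.0, "B": 2.4, "C": 1.9},
}
STOP_PENALTY = 1.5              # "dollars" of hassle per extra store visited

def cost(assign):               # assign: item -> store
    total = sum(prices[i][s] for i, s in assign.items())
    return total + STOP_PENALTY * len(set(assign.values()))

def anneal(steps=5000, temp=1.0, cooling=0.999):
    state = {i: random.choice(list(p)) for i, p in prices.items()}
    best = dict(state)
    for _ in range(steps):
        item = random.choice(list(prices))
        cand = dict(state)
        cand[item] = random.choice(list(prices[item]))
        delta = cost(cand) - cost(state)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            state = cand
            if cost(state) < cost(best):
                best = dict(state)
        temp *= cooling
    return best, cost(best)

print(anneal())
```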
AI image manipulation is entirely based in a computer where an image is processed by an algorithm. Grocerybot involves many different systems and crosses the boundary between digital and physical. The intertwined nature of the complexity is what makes it (relatively) difficult to explain.
Aye, this be the problem. As long as there is a profit motive the AI is going to steer you to whatever makes them money, be it whoever works the SEO game or pays for API access.
Local models are a thing, and GPT is extremely useful in some cases, even with the corporate handholding. I find the whole space super exciting, personally.
The accessibility of local models is nowhere near what the early web was. We could ALL have a geocities website and our own goofy "corner of the internet" without the extra bullshit.
No. That's just what they wanted you to believe. All they really did was find a way to separate people from more money.
I found out two people in my family bought smart fridges and both listed watching tv and listening to music as reasons for purchase. Not the only ones mind you, but some of the first ones mentioned. I don't get it.
This is a surprisingly difficult problem because different people are okay with different brand substitutions. Some people may want the cheapest butter regardless of brand, while others may only buy brand name.
For example, my wife is okay with generic Chex from some grocery stores but not others, but only likes brand-name Cheerios. Walmart, Aldi, and Meijer generic cheese is interchangeable, but brand-name and Kroger-brand cheese isn't acceptable.
Making a software system that can deal with all this is really hard. AI is probably the best bet, but it needs to be able to handle all this complexity to be usable, which is a lot of up-front work.
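Even just writing those preferences down by hand gets messy fast; a rough sketch of what the rules might look like (brands and stores taken from the comment above, structure invented):

```python
# Per-household substitution rules: which items can be generic, and only
# from which stores. Illustrative only.

rules = {
    "chex":     {"brand_ok": True, "generic_ok_at": {"Aldi", "Meijer"}},
    "cheerios": {"brand_ok": True, "generic_ok_at": set()},   # brand name only
    "cheese":   {"brand_ok": False, "generic_ok_at": {"Walmart", "Aldi", "Meijer"}},
}

def acceptable(item: str, store: str, is_generic: bool) -> bool:
    rule = rules.get(item, {"brand_ok": True, "generic_ok_at": set()})
    if is_generic:
        return store in rule["generic_ok_at"]
    return rule["brand_ok"]

print(acceptable("cheese", "Kroger", is_generic=True))   # False
print(acceptable("cheese", "Aldi", is_generic=True))     # True
```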
As long as the AI has access to their ongoing purchase histories it's actually quite easy to have this for day to day situations.
Where it would have difficulty is unexpected spikes in grocery usage, such as hosting a non-annual party.
In theory, as long as it was fine tuned on aggregate histories it should be decent at identifying spikes (i.e. this person purchased 10x the normal amount of perishables this week, that typically is an outlier and they'll be back to 1x next week), but anticipating the spikes ahead of time is pretty much impossible.
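Flagging the spike after the fact is the easy half; a toy sketch using nothing fancier than a multiple-of-median rule on weekly totals (the dollar figures are made up):

```python
# Flag a week as an outlier if spending on perishables is far above the
# household's typical week, so it doesn't skew next week's prediction.
from statistics import median

weekly_perishables = [42, 38, 45, 40, 410, 39]   # dollars; week 5 was a party

def is_spike(history, this_week, factor=3.0):
    return this_week > factor * median(history)

for week, spend in enumerate(weekly_perishables, start=1):
    baseline = weekly_perishables[:week - 1]
    if len(baseline) >= 3 and is_spike(baseline, spend):
        print(f"week {week}: ${spend} looks like a one-off spike, exclude it")
```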
Both of these problems could feasibly be solved by user input. If you had the ability to set rules for your personal experience, problems like that would only last as long as it takes the user to manually correct.
Like, "Ai, I bought groceries for a party on March 5th. Don't use that bill to predict what I need" or "stop recommending butter that isn't this specific brand"
Also quite difficult from a vision perspective. Tons of potential object classes, objects with no class (e.g., leftovers, homemade things), potential occlusion if you are monitoring the refrigerator/cabinets. If the object is in a container, how do you measure the volume remaining of that substance? This is just scratching the surface, I imagine. These problems individually are maybe not crazy challenging, but they are quite hard all together.
Honestly I would be perfectly happy with the service like this, even if I had to manually input what groceries I need. It's still an incredibly complex problem though. AI is probably better suited for it than anything else since you can have iterative conversations with latest generation AIs. That is, if I tell it I need cereal, it looks at my purchase history and guesses what type of cereal I want this week, and adds it to my list, I can then tell it no, actually I want shredded mini wheats.
So it would probably have to be a combination of a very large database and information gathering system with a predictive engine and a large language model as the user interface.
Exactly. But also I'm blown away that most grocery stores don't list inventory and prices on the website. I can only think this is because they don't want to show prices in an attempt to get you to go to the store.
They absolutely don't want to make automatic comparison shopping that easy. The goal of every grocery store is to get you there with one or two specific good deals they advertise and then have you do the rest of your shopping there because nobody wants to go to a second store and MAYBE get a slightly better deal but also maybe get a worse deal.
I'm sure Sara is not ready to be served the optimal outcome from a competitive multi-agent simulation. Because when everyone gets that AI, oh boy the local deals on groceries will be fun.
The equilibrated state you're imagining never happens. This is like talking about when the ocean finally levels out. The ocean's never going to level out. There will always be waves to surf.
And if it requires an API code to run on someone else's shit then it can go fuck itself. I want to self-host my AI and run it from my own domain I bought cheap during a holiday sale.
Can't I get both? Here are your weekly projections, sir. You will need to get this list of items at these locations, and here is what you would look like as a Latin American dictator. Enjoy.
It's not exactly an AI task, I guess? Like, pretty much the only AI-related thing there is classifying stuff in OCR'd receipts (technically, one can OpenCV whatever is in the fridge, but I suspect it won't be reliable enough).
Bruh. If AI is being taught to drive cars on the open road then I feel like cameras to detect what's in your fridge is pathetically easy in comparison and very much an AI task
That's how you get weird things like the AI determining that your favorite items are jam, baking soda and whatever you left at the back of your fridge to rot for six months.
It is easy to detect what's in your fridge. We have that today on some smart fridges.
The problem to be solved though is:
- what's in your fridge
- what's not in your fridge
- what do you consume vs throw away
- what do you buy
- where do you shop
- what prices are available
- what's the best way to minimize cost and store trips
- what's your metric for how to balance that
Of those things, AI is really only helpful for determining the metric for how much money you need to save to add another grocery stop, and knowing that the orange blob is probably baking soda.
Most of the rest of that is manual inputs or relatively basic but tedious programming, and those are the parts that would be the most annoying.
I say this as a person who has repeatedly utterly failed to use https://grocy.info/ because actually recording what you eat vs throw away is painful.
This isn't a great AI problem not because AI can't help, but because the tedious part isn't the part it can help with right now.
Yeah, kinda. Except you'll likely need a camera or two for each shelf of the fridge (given the layout remains unchanged), and also you have to make sure they don't get covered with ice/spilled milk/whatever or blocked by a box of some stuff. Aaaalternatively, you install a receipt scanner and a touch screen that asks you what you took and updates an internal DB accordingly.
then I feel like cameras to detect what's in your fridge is pathetically easy in comparison
But you're skipping over a huge amount of context that's missing. It's context we (as humans) take for granted. What's the difference between a jar and a bottle? Is the cream cheese in a tub or in a little cardboard container? Then it would need to be able to see all items in a fridge, know the expiration dates for each thing, know what you want to get, how quickly something gets used, etc.
Some of those things are more straightforward, and some of them need data well beyond "this container has milk". The issue isn't processing all the data, but acquiring it consistently and reliably. We humans are very chaotic with how we do stuff in the physical world. Even the most organized person would throw off an AI system every so often. It's the reason self driving cars are not a reality yet and won't be for a while.
The problem is that "AI" is a completely ill-defined term. The commenter above used the definition of it just being a more complex program and then they argued that you don't need a more complex program. That's as good of a definition as any other.
By "ai tasks" I mean smth where ai is actually useful, such as object/pattern recognition, object classification, making predictions based on past data, etc. Can one train an ai to predict they need to buy onions when they have less than X in their fridge? Yap. Can one do the same with an if statement and prevent themselves from running into issues when ambient temperature on Mars rises? Also, yes.
I think it would be a perfect function for AI. It's more than just determining what is/is not in the fridge. Compiling the grocery list and determining which store has the best price for the goods would be great, but the AI knowing the mode of transportation, as well as the weather and time of day, would also be critical for determining whether it is worth going to multiple stores or not.
Why are so many of you trivializing the fact that providing perfectly formatted input data and having set logic figure something out is a VERY different thing than providing a firehose of data and then asking the software to make sense of it? Like, have you been paying attention here at all?
In this case I would suppose that there's no need for a firehose of data, especially if run locally. The user only has so many shops around, and the fridge is not factory-scale big.
In my experience, most things touted as AI are mostly rule-based or graph-based, with a sprinkling of some classification somewhere for a manager to get that sweet VC money.
That's not to say that this couldn't be done with AI, particularly one that is trained on top of a rule-based system to find the best options for given circumstances.
Why would AI made by some company bother searching website or fliers when they could instead show you products from businesses who pay them to show you products?
When the AI made by a company, running on their company's servers, with no way for you to know what it is basing its decisions on... you'd probably best assume that it is acting for the company's benefit rather than yours.
This is totally doable. Someone could make that right now. Just have an AI agent like AutoGPT or an open-source alternative and a database that can be accessed and altered.
I want AI to anticipate what groceries I'm running low on, search every flier and website in my city to find the best price, and compile me a weekly list based on best deals per fewest stops. I do not want AI to make a picture of me if I were an astronaut.
Idk man, AI art generation is pretty rad, it opens a whole new world of artistic endeavors up for people who never had the access, ability, time, or energy to do so otherwise. Also, por que no los dos?
Creating art is creating art. Back in the day you had to mix your own pigments and put those on a canvas. Now you can buy pigments and canvas, or skip that altogether and go digital. That doesn't make digital art any less art. It's the same thing with AI art. It's another tool for us to use.
New tools come up for professions ALL THE TIME, and it's up to people in those fields to figure out how to roll with it. This tool is out of the box and it's not going back in.
What we should be asking now is how do we ETHICALLY use this tool? Well, probably by crediting people. Licensing any copyrighted material that needs it. Don't use it to make gross shit like deepfakes or direct rip-offs. Which, just because it's easier to do these things with an AI, doesn't mean they didn't happen with Photoshop etc.
There's also more nuance to the process than "type a prompt get image." That works, but it'll get you shit, inconsistent results. You still have to play around with the image, adjusting parameters and sometimes even loading it into a "real" image manipulation software.
To give you an idea of how I personally am using Stable Diffusion, I've been using it to generate a few dozen images that look like a character I'm going for. I'll grab those images, edit them, and then use them to train my own LoRA (a kind of mini, specific model) to use for future generation of that character. It's actually work, just work I'm better at than manipulating images manually.
You can already do something similar manually with an app called Grocy.
I tried it and didn't last two weeks; too much time spent scanning barcodes and dealing with inventory. I was hoping to save time by generating the grocery list faster, not spend my life on it.
Damn, no, let me make my mistakes. There are way more than enough people telling me what to do; I need the AI to create pictures of my cat conquering Europe, and that's it.
First, anticipating what groceries are low: Simplest implementation would be a list where the user manually enters additions and removals.
Secondly, searching every flier and website in the city. Okay, that one will be a bit trickier. First you'd need to gather a list of all stores in the city, and then look for any deals. The big challenge here is that they don't have a common API where you can check. You either need to program a bot to scrape all their web presences, or convince them to provide the information in a common format.
The last step seems like it could be related to the traveling salesman problem.
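It's a small instance, though, so even a greedy nearest-neighbour pass gets you a usable store order (the coordinates below are invented):

```python
# Nearest-neighbour heuristic for ordering the store stops: not optimal in
# general, but fine for the handful of stores on a weekly list. Toy coords.
import math

stores = {"home": (0, 0), "Aldi": (2, 1), "Kroger": (5, 4), "Costco": (1, 6)}

def dist(a, b):
    return math.dist(stores[a], stores[b])

def route(start="home"):
    unvisited = set(stores) - {start}
    order, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda s: dist(current, s))
        order.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return order

print(route())   # ['home', 'Aldi', 'Kroger', 'Costco']
```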
This is ridiculous; just go to prison if you want someone making all of your decisions for you. The intelligence within those walls, let's just say, is very artificial.
Why are people like this? It's like somehow they've not actually lived in the real world for any of their lives.
If a product like that existed, and bloody hell would it be complicated, then there wouldn't be a best price for groceries. The fact that a product like this doesn't exist is why there can be variation in price. Otherwise there would be no point.
There are a lot of considerations when buying stuff other than price. Some people buy only local and are willing to pay a premium. Others only buy organic. Others boycott Nestlé. Others just buy what "feels" right. Some just have money to burn and literally can't be bothered by money.
Sure, having something like that could upset the balance, but not everyone will use it. And even if a lot of people would, you'd still have to take in things like distance or Costco memberships, etc.
Also forgot to add - imagine a store having two products, one right next to the other. They're exactly the same other than one being store-brand and 50% cheaper than the other. Surely no-one would buy the other one, yet it still sells. You don't need AI to tell you store-brand is the same thing but cheaper.
Interesting that these uninformed takes don't consider that the underlying algorithms are similar, and that advancements in one area often transfer to others.
Piece by piece an artificial mind is being constructed, yet people who are not a part of the research and development process are complaining about which breakthroughs they believe should come first.
*E: Do you think downvoting this fact will increase the speed at which ChatGPT or Midjourney are un-released? This is just how progress is occurring.