I've found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?
I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight that I have seen is the guy that made the tool to spam job applications to combat worthless AI job recruiters and HR tools.
ChatGPT is incredibly good at helping you with random programming questions; you can dump a full-ass error text on it and it'll tell you exactly what's wrong.
This afternoon I used ChatGPT to figure out the error that was preventing me from updating my ESXi server. I just copy-pasted the entire error text, which was one entire terminal window's worth of shit, and it knew there was an issue accessing the zip. It wasn't smart enough to figure out "hey dumbass, give it a full file path, not a relative one", but eventually I got there. Earlier this morning I used it to write a CROSS APPLY instead of using multiple sub-select statements. It forgot to update the ORDER BY, but that was a simple fix. I use it for all sorts of other things we do at work too. ChatGPT won't replace any programmers, but it will help them be more productive.
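For anyone curious what that kind of refactor buys you: CROSS APPLY is T-SQL-specific, so here's a rough sqlite3 sketch of the analogous cleanup, collapsing per-column correlated subselects into one grouped pass. The table and columns are made up for illustration, not from the commenter's actual work.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders(id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES (1,'a',10),(2,'a',20),(3,'b',5);
""")

# Before: one correlated scalar subselect per derived column,
# so the orders table gets re-scanned for every column of every row.
subselects = con.execute("""
    SELECT c.customer,
           (SELECT COUNT(*)    FROM orders o WHERE o.customer = c.customer) AS n,
           (SELECT SUM(amount) FROM orders o WHERE o.customer = c.customer) AS total
    FROM (SELECT DISTINCT customer FROM orders) c
    ORDER BY c.customer
""").fetchall()

# After: a single grouped query computes every aggregate in one pass.
grouped = con.execute("""
    SELECT customer, COUNT(*) AS n, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY customer
""").fetchall()

print(subselects == grouped)  # same rows, far fewer scans
```

Same result set either way; the win is readability and not repeating the correlated lookup per column.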
A lot of papers are showing that code written by people using ChatGPT has more vulnerabilities and uses more obsolete libraries. By that logic, using ChatGPT actively makes you a worse programmer.
Agree to disagree. If you trust this, you're a fool. Trust me, I've tried for hours asking it about a myriad of tech issues, and it just constantly fucking lies.
It can help you, but NEVER trust it. Never. Google everything it tells you if it's important.
If you blindly trust it then yeah it will cause problems. But if you know what you're doing, but forget X or Y minor thing here and there, or just need some direction it's amazing.
Same. When I've got a session coming up with less than ideal prep time, I've used ChatGPT to help figure out some story beats, or reframe a movie plot into DnD terms. But more often than not I use the Story Engine Deck to help with writer's block. I'd rather support a small company with a useful product than help Sam Altman boil the oceans.
I thought it was pretty fun to play around with making limericks and rap battles with friends, but I haven't found a particularly useful use case for LLMs.
I like asking ChatGPT for movie recommendations. Sometimes it makes some shit up but it usually comes through, I've already watched a few flicks I really like that I never would've heard of otherwise
I tried to give it a fair shake at this, but it didn't quite cut it for my purposes. I might be pushing it out of its wheelhouse though. My problem is that, while it can rhyme more or less adequately, it seems to have trouble with meter, and when I do this kind of thing, it revolves around rhyme/meter perfectionism. Of course, if I were trying to actually get something done with it instead of just seeing if it'll come up with something accidentally cool, it would be reasonable to take what it manages to do and refine it. I do understand to some extent how LLMs work, in terms of what tokens are and why this means it can't play Wordle, etc., and I can imagine this also has something to do with why it's bad at tightly lining up syllable counts and stress patterns.
Most of it is either the LLMs shitting themselves or GPT doing that masturbatory optimism thing. Da Vinci's "Suspicious mind..." in the second image is a little bit heavyish though. And those last two ("Gangsterland" and "My name is B-Rabbit, I'm down with M.C.s, and I'm on the microphone spittin' hot shit") are god damn funny.
Personally I use it when I can't easily find an answer online. I still keep some skepticism about the answers given until I find other sources to corroborate, but in a pinch it works well.
because of the way it's trained on internet data, large models like ChatGPT can actually work pretty well as a sort of first-line search engine. My girlfriend uses it like that all the time especially for obscure stuff in one of her legal classes, it can bring up the right details to point you towards googling the correct document rather than muddling through really shitty library case page searches.
Demystifying obscure or non-existent documentation
Basic error checking of my configs/code: input the error, ask what the cause is, double-check its work. In hour 6 of late-night homelab fixing this can save my life
I use it to create concepts of art I later commission. Most recently I used it to concept an entirely new avatar and I'm having a pro make it in their style for pay
DnD/Cyberpunk character art generation, this person does not exist website basically
duplicate checking / spot-the-differences, like pastebin's "differences" feature, because the MMO I play releases preliminary as well as full patch notes and I like to read the differences
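That spot-the-differences use case doesn't even need an AI; Python's stdlib `difflib` does the pastebin-style diff directly. The patch-note text below is invented as an example:

```python
import difflib

# Hypothetical preliminary vs. full patch notes.
prelim = """Fixed mount speed bug
Nerfed fire staff
New dungeon: Sunken Crypt
""".splitlines(keepends=True)

final = """Fixed mount speed bug
Nerfed fire staff damage by 10%
New dungeon: Sunken Crypt
Fixed a crash on login
""".splitlines(keepends=True)

# unified_diff marks removed lines with '-' and added lines with '+',
# just like a pastebin "differences" view.
diff = list(difflib.unified_diff(prelim, final,
                                 fromfile="prelim", tofile="full"))
print("".join(diff))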
I got high and put in prompts to see what insane videos it would make. That was fun. I even made some YouTube videos from it. I also saw some cool & spooky short videos that are basically "liminal" since it's such an inhuman construction.
But generally, no. It's making the internet worse. And as a customer I definitely never want to deal with an AI instead of a human.
100%. I don't need help finding what's on your website. I can find that myself. If I'm contacting customer support it's because my problem needs another brain on it, from the inside. Someone who can think and take action to help me. Might require creativity or flexibility.
AI has never helped me solve anything.
I mean, yeah, but that difference is quite crucial.
People have always wanted to be the top search result without putting effort in, because that brings in ad money.
But without putting effort in, their articles were generally short, had typos, and there were relatively few such articles.
Now, LLMs allow these same people to pump out a hundred times as much garbage, consisting of lengthy articles in many languages. And because LLMs are specifically trained to produce human-like text, it's difficult for search engines to filter out these bad-quality results.
I've found it's made doing end-runs around enshittification easier.
For example, trying to find a front suspension top for a Peugeot 206 GTi with Google means being recommended everything front-suspension for the Peugeot 207, 208, VW GTI, Swift GTi... not to mention the "best price on [whatever you searched for]" websites, only they sell nothing.
So I ask ChatGPT for the part number and search that.
This is exactly the kind of thing that LLMs are good for. I also use them to get quick and concise answers about programming frameworks, instead of trying to triangulate the answer from various anecdotes on stackoverflow, or reading two hours of documentation.
But I figured this kind of thing doesn't count as "slop." OP was talking about the incoherent trash hallucinations, so I left that one out.
AI is used extensively in science to sift through gigantic data sets. Mechanical-Turk-style programs like Galaxy Zoo are used to train the algorithm, and scientists can use it to look at everything in more detail.
Apart from that AI is just plain fun to play around with. And with the rapid advancements it will probably keep getting more fun.
Personally I hope to one day have an easy and quick way to sort all the images I have taken over the years. I probably only need a GPU in my server for that one.
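The content-aware "sort my photos" part really does want a GPU and an embedding model, but the exact-duplicate half of the chore is plain stdlib. A hedged sketch (the directory path is whatever your photo library actually is):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_exact_duplicates(photo_dir):
    """Group files whose bytes are identical, regardless of filename."""
    groups = defaultdict(list)
    for path in Path(photo_dir).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    # Keep only digests that more than one file hashed to.
    return [paths for paths in groups.values() if len(paths) > 1]
```

Near-duplicates (re-encodes, crops) and semantic sorting need perceptual hashing or ML on top; this only catches byte-identical copies, which in practice is a surprising fraction of an old photo dump.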
I use perplexity.ai more than google now. I still don’t love it and it’s more of a testament to how far google has fallen than the usefulness of AI, but I do find myself using it to get a start on basic searches. It is, dare I say, good at calorie counting and language learning things. Helps calculate calorie to gram ratios and the math is usually correct. It also helps me with German, since it’s good at finding patterns and how German people typically say what I am trying to say, instead of just running it through a translator which may or not have the correct context.
I do miss the days where I could ask AI to talk like Obama while he’s taking a shit during an earthquake. ChatGPT would let you go off the rails when it first came out. That was a lot of fun and I laughed pretty hard at the stupid scenarios I could come up with. I’m probably the reason the guardrails got added.
I switched to Kagi a year ago, as I usually need to go through search results. I was astonished at just how dog-poop Google search is compared to it.
YouTube was even worse; I had to go through 10 unrelated videos to find one slightly relevant one.
Kagi usually doesn't have the latest results, but it's on point for relevancy.
YouTube was even worse; I had to go through 10 unrelated videos to find one slightly relevant one.
Last month I typed, letter for letter, the title of a video I saw on there into YouTube search, and it tried so hard to push some other barely related videos, I couldn't believe it. I ended up typing the URL manually like some internet caveman
It also helps me with German, since it’s good at finding patterns and how German people typically say
Depending on your first language I can offer you my assistance as a native german :)
If you want to, pm me or send a message to my email: [email protected]
I am moving to Germany next year. Even though I was born there and my mother taught me some, and I learned it in high school, and I also studied in college in the USA, I cannot speak it worth shit. I’m hoping I pick up some more when I move, but if not maybe my kids can teach me.
I find that I’m just trying to pick up things through osmosis. I watch German youtubers and try to watch a German movie or two every now and then. And then sometimes when I’m talking I try to directly translate what I’m saying in my head, and assuming I know the words, I usually fuck up the order, article, or tense.
I say all that to say that my current workflow is already overwhelming and I’m on a bit of a time crunch. I do really need to surround myself with native speakers and listen to them more. I will reach out. Thanks!
I use SillyTavern for character conversations, pretty fun. I have SD Forge for Pony Diffusion, and use Suno and Udio. Almost all of that goes to DnD, the rest to personal recreation.
Google and OpenAI both fail to meet my use cases, and if I cuss they get mad, so fuck 'em.
I never use those for making money or any other personal progression, that would be wrong.
Garbage in; garbage out. Using AI tools is a skillset. I've had great use with LLMs and generative AI both, you just have to use the tools to their strengths.
LLMs are language models. People run into issues when they try to use them for things not language related. Conversely, it's wonderful for other tasks. I use it to tone check things I'm unsure about. Or feed it ideas and let it run with them in ways I don't think to. It doesn't come up with too much groundbreaking or new on its own, but I think of it as kinda a "shuffle" button, taking what I have already largely put together, and messing around with it til it becomes something new.
Generative AI isn't going to make you the next Mona Lisa, but it can make some pretty good art. It, once again, requires a human to work with it, though. You can't just tell it to spit out an image and expect 100% quality, 100% of the time. Instead, it's useful to get a basic idea of what you want in place, then take it to a proper photo editor, or inpainting, or some other kind of post-processing to refine it. I have some degree of aphantasia: I have a hard time forming and holding detailed mental images. This kind of AI approaches art in a way that finally kinda makes sense for my brain, so it's frustrating seeing it shot down by people who don't actually understand it.
I think no one likes any new fad that's shoved down their throats. AI doesn't belong in everything. We already have a million chocolate chip cookie recipes, and chatgpt doesn't have taste buds. Stop using this stuff for tasks it wasn't meant for (unless it's a novelty "because we could" kind of way) and it becomes a lot more palatable.
This kind of AI approaches art in a way that finally kinda makes sense for my brain, so it’s frustrating seeing it shot down by people who don’t actually understand it.
Stop using this stuff for tasks it wasn’t meant for (unless it’s a novelty “because we could” kind of way) and it becomes a lot more palatable.
Preach! I'm surprised to hear it works for people with aphantasia too, and that's awesome. I personally have a very vivid mind's eye and I can often already imagine what I want something to look like, but could never put it to paper in a satisfying way that didn't cost excruciating amount of time. GenAI allows me to do that with still a decent amount of touch up work, but in a much more reasonable timeframe. I'm making more creative work than I've ever been because of it.
It's crazy to me that some people flatly refuse to even acknowledge such positives about the technology. They refuse to interact with it in a way that would reveal those positives, refuse to look at the more nuanced opinions of people who did interact with it, refuse to accept simple facts about how we learn from and interact with other art and material, and refuse legal realities like the freedom to analyze that allows this technology to exist (sometimes even actively fighting to restrict those legal freedoms, which would hurt more artists and creatives than it would help, and give even more power to corporations and those with enough capital to self-sustain AI model creation).
It's tiring, but luckily it seems to be mostly an issue on the internet. Talking to people (including artists) in real life about it shows that it's a very tiny fraction that holds that opinion. Keep creating 👍
There's a handful of actual good use-cases. For example, Spotify has a new playlist generator that's actually pretty good. You give it a bunch of terms and it creates a playlist of songs from those terms. It's just crunching a bunch of data to analyze similarities with words. That's what it's made for.
It's not intelligence. It's a data crunching tool to find correlations. Anyone treating it like intelligence will create nothing more than garbage.
Some of my friends enjoy fucking around with those character AIs. I never got the appeal; even as an RP nerd, RPing is a social activity to me, and computers aren't people
I have seen funny memes be made with Image Generators -- And tbqh as long as you're not pretending that being an AI prompter makes you an "artist", by all means go crazy with generating AI images for your furry porn/DnD campaign/whatever
https://goblin.tools/ is a cool little thing for people as intensely autistic as I am, and it runs off AI stuff.
Voice Recognition/Dictation technology powered by AI is a lot better than its pre-AI sibling. I've been giving it a shot lately. It helps my arthritis-ridden hands.
If you mean anything that utilizes machine learning ("AI" is a buzzword), then "AI" technology has been used to help scientists and doctors do their jobs better since the mid 90s
I have a local instance of Stable Diffusion that I use to make art for MtG proxies. Prior to AI my art was limited to geometric designs and edits of existing pieces. Integrating AI into my work flow has expanded my abilities greatly, and my art experience means that I can do more with it than just prompt engineering.
Generative AI has been an absolute game changer in my retouching work. Slightly worrying that it'll put me out of work sometime in the future, but for now it's saving me loads of time, handling the boring stuff so I can concentrate on the stuff it can't do.
When it just came out I had AI write fanfiction that no sane person would write, and other silly things. I liked that. That and trail cam photos of the Duolingo mascot.
I think my complaints are more with how capitalism treats new technology, though, and not just the lost jobs and the toll on the climate. Greed and competition have made it worse and worse; within a year's span, AI itself has been enshittified as a technology. There are use cases where it can do a world of good, just like everything else bad people ruin.
My primary use of AI is for programming and debugging. It's a great way to get boilerplate code blocks, bootstrap scripts, one-liner shell commands, creating regular expressions etc. More often than not, I've also learned new things because it ends up using something new that I didn't know about, or approaches I didn't know were possible.
I also find it's a good tool to learn about new things or topics. It's very flexible in giving you a high level summary, and then digging deeper into the specifics of something that might interest you. Summarizing articles, and long posts is also helpful.
Of course, it's not always accurate, and it doesn't always work. But for me, it works more often than not and I find that valuable.
Like every technology, it will follow the Gartner Hype Cycle. We are definitely in the times of "everything-AI" or AI for everything - but I'm sure things will calm down and people will find it valuable for a number of specific things.
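The "creating regular expressions" use a few comments up is a good concrete example of the one-off grunt work people hand to an LLM. A sketch of the kind of pattern involved; the log format here is invented, not from any real tool:

```python
import re

# Hypothetical "timestamp [LEVEL] message" log format.
# Named groups pull out the three fields in one match.
LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<level>\w+)\] (?P<msg>.*)$"
)

line = "2024-05-01 13:37:00 [ERROR] disk almost full"
m = LOG_LINE.match(line)
print(m.group("level"), "-", m.group("msg"))  # ERROR - disk almost full
```

Trivial once written, but exactly the sort of thing that's faster to ask for than to fiddle together from the `re` docs at the end of a long day.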
It helps make simple code when I'm feeling lazy at work and need to get something out the door.
In personal life, I run a local llm server with SillyTavern, and get into some kinky shit that often makes for an intense masturbation session. Sorry not sorry.
It's nice to generate images of settings for my d&d campaign.
It's nice that I can replace Google/Siri with something I run and control locally, for controlling my home.
Even before AI, the corps have been following a strategy of understaffing with the idea that software will make up for it, and it hasn't. It's beyond the pale, the work I have to do now for almost anything related to the private sector (work as their customer, not as an employee).
Tbh it’s made a pretty significant improvement in my life as a software developer. Yeah, it makes shit up/generates garbage code sometimes, but if you know how to read code, debug, and program in general, it really saves a lot of grunt work and tedious language barriers. It can also be a solid rubber duck for debugging.
Basically any time I just need a little script to take x input and give me y output, or a regex, I’ll have ChatGPT write it for me.
A lot of the time I get 3/4 of the way through writing a prompt and don't bother hitting enter because I already figured it out. Great way to get your thoughts organized to have an incentive to put them down in writing.
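For a sense of scale, the "little script to take x input and give me y output" mentioned above is usually something this small. A hedged sketch; the CSV data and column name are made up:

```python
import csv
import io

def total_column(csv_text, column):
    """Sum one numeric column of a CSV given as text."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row[column]) for row in reader)

data = "item,price\nwidget,2.50\ngadget,4.00\n"
print(total_column(data, "price"))  # 6.5
```

Ten lines, but it's still ten lines you'd otherwise context-switch to write, which is why delegating it (or half-writing the prompt and answering your own question) works so well.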
I like it for more obscure things where the context is needed to filter out results because the words themselves get too many hits.
But I've also had issues with accuracy, like asking for help with syntax for an obscure scripting-language application (think Lua, where a specific context adds an API, and I wanted information about that API).
It seemed like it knew what it was talking about, but it turns out none of the syntax it gave used real argument names, the calls couldn't be split up into separate lines like it claimed, and the way scope worked was off. Though it was enough to get me to a decent place where correcting everything didn't take very long.
Edit: I also like to use it to fact check comments before I post them. You can just copy paste the comment and ask it to comment on the accuracy to add a quick but basic peer review.
Yeah, absolutely don't just paste it into the IDE and hit compile, but it usually gets you going in the right direction even if it gets some of the specifics wrong. Sometimes I don't even know what to call the concept I'm looking for and describe my understanding of what I want in a fair amount of detail. Google can do fuck-all with that, whereas GPT will say "It sounds like you probably mean..." and at least give me a starting point or a phrase to search in the docs.
An LLM (large language model, a.k.a. an AI whose output is natural language text based on a natural language text prompt) is useful for the tasks when you're okay with 90% accuracy generated at 10% of the cost and 1,000% faster. And where the output will solely be used in-house by yourself and not served to other people. For example, if your goal is to generate an abstract for a paper you've written, AI might be the way to go since it turns a writing problem into a proofreading problem.
The Google Search LLM which summarises search results is good enough for most purposes. I wouldn't rely on it for in-depth research but like I said, it's 90% accurate and 1,000% faster. You just have to be mindful of this limitation.
I don't personally like interacting with customer service LLMs because they can only serve up help articles from the company's help pages, but they are still remarkably good at that task. I don't need help pages because the reason I'm contacting customer service to begin with is because I couldn't find the solution using the help pages. It doesn't help me, but it will no doubt help plenty of other people whose first instinct is not to read the f***ing manual. Of course, I'm not going to pretend customer service LLMs are perfect. In fact, the most common problem with them seems to be that they go "off the script" and hallucinate solutions that obviously don't work, or pretend that they've scheduled a callback with a human when you request it, but they actually haven't. This is a really common problem with any sort of LLM.
At the same time, if you try to serve content generated by an LLM and then present it as anything of higher quality than it actually is, customers immediately detest it. Most LLM writing is of pretty low quality anyway and sounds formulaic, because to an extent, it was generated by a formula.
Consumers don't like being tricked, and especially when it comes to creative content, I think that most people appreciate the human effort that goes into creating it. In that sense, serving AI content is synonymous with a lack of effort and laziness on the part of whoever decided to put that AI there.
But yeah, for a specific subset of limited use cases, LLMs can indeed be a good tool. They aren't good enough to replace humans, but they can certainly help humans and reduce the amount of human workload needed.
It might not work so flawlessly the 2nd, 3rd, or 100th time though. I use ChatGPT semi-frequently for coding; while it generally does a surprisingly good job, I often find things it overlooks and need to keep prompting it for further refinements, or just fix it myself.
Boilerplate code (the stuff you usually have to copy anyway from GitHub) and summarising long boring articles. That's the use case for me. Other than that I agree - and having done AI service agent coding myself for fun I can seriously say that I would not trust it to run a business service without a human in the loop
I think it’s a fun toy that is being misused and forced into a lot of things it isn’t ready for.
I’m doing a lot with AI but it’s pretty much slop. I use self hosted stable diffusion, Ollama, and whisper for a discord bot, code help, writing assistance, and I pay elevenlabs for TTS so I can talk to it. It’s been pretty useful. It’s all running on an old computer with a 3060. Voice chat is a little slow and has its own problems but it’s all been fun to learn.
It helps when writing a lot of boilerplate or if I’m being lazy and want to solve something. However I do not need AI in everything I use. It seems everyone wants AI in their product whilst it’s doing the same thing everyone else is doing.
It can be such a different experience editing/touching something up rather than having to create it wholesale where it can often take on a life of its own and takes so much more time
There's someone I sometimes encounter in a Discord I'm in who makes a hobby of doing stuff with them. From what I gather, they do more with it than just giving it a prompt and leaving it at that, at least partly because it doesn't generally give them something they're happy with initially, and they end up having to ask the thing to edit specific bits of it in different ways over and over until it does. I don't really understand what exactly this entails, as what they seem to most like making it do is code "shaders" for them that create unrecognizable abstract patterns, but they spend a lot of time talking at length about the technical parameters of various models and what they like and don't like about them, so I assume the guy must find something enjoyable in it all. That being said, using it as a sort of strange toy isn't really the most useful use case.
But for very specific purposes it's worth considering as an option.
Text-to-image generation has been worth it to get a jumping-off point for a sketch, or to get a rough portrait for a D&D character.
Regular old ChatGPT has been good on a couple of occasions for humor (again D&D related; I asked it for a "help wanted" ad in the style of newspaper personals and the result was hilariously campy).
In terms of actual problem solving... There have been a couple of instances where, when Google or Stack Overflow haven't helped, I've asked it for troubleshooting ideas as a last resort. It did manage to pinpoint the issue once, but usually it just ends up that one of the topics or strategies it floats proves to be useful after further investigation. I would never trust anything factual without verifying, or copy/paste code from it directly though.
I love chatgpt, and am dumbfounded at all the AI hate on lemmy. I use it for work. It's not perfect, but helps immensely with snippets of code, as well as learning STEM concepts. Sometimes I've already written some code that I remember vaguely, but it was a long time ago and I need to do it again. The time it would take to either go find my old code, or just research it completely again, is WAY longer than just asking chatgpt. It's extremely helpful, and definitely faster for what I'd already have to do.
I guess it depends on what you use it for ¯\_(ツ)_/¯.
I hope it continues to improve. I hope we get full open source. If I could "teach" it to do certain tasks someday, that would be friggin awesome.
It's done a lot of bad/annoying things but I'd be lying if I said it hasn't enabled me to completely sidestep the enshittification of Google. You have to be smart about how you use it but at least you don't have to wade through all the SEO slop to find what you want.
And it's good for weird/niche questions. I used it the other day to find a list of meme songs that have very few/simple instruments so that I could find midi files for them that would translate well when going through Rust's in-game instruments. I seriously doubt I'd find a list like that on Google, even without the enshittification.
I've enjoyed some of the absurd things it can come up with: surreal videos and memes (every president as a bodybuilder wrestler). However, it's never been useful, and the cost isn't worth the benefit to me.
It's fine if used in the specific niche use cases it's trained for, as long as it's used as a tool and not a final product. For example, using AI to generate background elements of a complete image. The AI elements aren't the focus, and should be things that shouldn't matter, but it might be better to use an AI element than to do a bare-minimum element by hand. This might be something like a blurred-out environment background behind a piece of hand-drawn character art; otherwise it might just be a gradient or solid colour because it isn't important, but having something low-quality is better than having effectively nothing.
In a similar case, for multidisciplinary projects where the artists can't realistically work proficiently in every field required, AI assets may be good enough to meet the minimum requirements to at least complete the project. For example, I do a lot of game modding. I'm proficient with programming, game/level design, and 3D modeling, but not good enough to make dozens of textures and sounds that are up to snuff. I might be able to dedicate time to make a couple of the most key resources myself or hire someone, but seeing as this is a non-commercial, non-monetized project, I can't buy resources regularly. AI can be a good enough solution to get the project out the door.
In the same way, LLM tools can be good if used as a way to "extend" existing works. Its a generally bad idea to rely entirely on them, but if you use it to polish a sentence you wrote, come up with phrasing ideas, or write your long if-chain for you, then it's a way of improving or speeding up your work.
Basically, AI tools as they are, should be seen as another tool by those in or adjacent to the related profession - another tool in the toolbox rather than a way to replace the human.
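The "write your long if-chain for you" bit above is a good example of the mechanical rewrites worth delegating. A sketch of that kind of transformation (the status codes and names here are just illustrative):

```python
# Before: the long if/elif chain you'd rather not type out by hand.
def describe_status_if(code):
    if code == 200:
        return "ok"
    elif code == 404:
        return "not found"
    elif code == 500:
        return "server error"
    else:
        return "unknown"

# After: a dict dispatch says the same thing in fewer lines
# and is easier to extend with new cases.
STATUS = {200: "ok", 404: "not found", 500: "server error"}

def describe_status(code):
    return STATUS.get(code, "unknown")

print(describe_status(404))  # not found
```

Either form works; the point is that this kind of rote restructuring is exactly the "extend and polish" territory where an LLM assistant earns its keep, with a human checking the result.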
To me AI is useless. Its not intelligent, its just a blender that blends up tons of results into one hot steaming mug of "knowledge". If you toss a nugget of shit into a smoothie while it's being blended, it's gonna taste like shit. Considering the amount of misinformation on the internet, everything AI spits out is shit.
It is purely derivative, devoid of any true originality with vague facade of intelligence in an attempt to bypass existing copyright law.
That thought process would say patent law was incorrect though right? If you break something down to parts and say, well all those parts exist on their own, you just reordered them so you never created anything new. A fun case people refer to was against Ford I believe, when they tried to steal the intermittent windshield wiper idea from someone by claiming that resistors already existed, it was just placed elsewhere, so he couldn't claim it as a new invention. Ford lost and had to pay to use the idea.
I see it as the same premise. All programming and language breaks down to words that already exist, so either rearranging them and using them in a new manner is a new work, or none of it is. Thereby saying all books, music, and code wouldn't be able to have copyrights or patents. Which I believe that would cause a bit of chaos.
Intelligence is defined as the ability to acquire, understand and use knowledge. Self-driving cars, for example, are intelligent and they run by AI too.
To me it's glorified autocomplete. I see LLMs as a potential way of drastically lowering the barrier of entry to coding. But I'm at a skill level where coercing a chatbot into writing code is a hindrance. What I need is good documentation and good IDE static analysis.
I'm still waiting on a good, IDE-integrated, local model that is capable of more than autocompleting a line of code. I want it to generate the boilerplate parts of the code and get out of my way of solving problems.
I have horrible spelling and sometimes write in an archaic register. I also often write in a way that sounds rather aggressive, which is not my intention most of the time. AI helps me rewrite that shit and makes me more sensitive to tone in written text.
Of course, just like with normal spell check and autocomplete features, one still needs to read it a final time.
It's useful for programming from time to time, but not for asking open questions.
I’ve found having to double check is too unnerving and letting it just provide the links instantly is more my way of working.
Other than that it sometimes sketches things out when I have no idea what to do, so all in all it’s a glorified search engine for me.
Other than that, at work I despise writing emails and reports, and it fluffs them up.
I usually have to edit them afterwards to not make 'em look AI-made, but it adds some "substance".
I went for a routine dental cleaning today and my dentist integrated a specialized AI tool to help identify cavities and estimate the progress of decay. Comparing my x-rays between the raw image and the overlay from the AI, we saw a total of 5 cavities. Without the AI, my dentist would have wanted to fill all of them. With the AI, it was narrowed down to 2 that need attention, and the others are early enough that they can be maintained.
I'm all for these types of specialized AIs, and hope to see even further advances in the future.
I work on a 20+ year knowledge base for a big company that has had no real content management governance for pretty much that whole time.
We knew there was duplicate content in that database, but were talking about thousands of articles, with several more added daily.
With such a small team, identifying duplicate/redundant content was just an ad-hoc thing that could never be tackled as a whole without a huge amount of resources.
AI was able to comb through everything and find hundreds of articles with duplicate/redundant content within a few hours. Now we have a list of articles we can work through and clean up.
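The kind of duplicate sweep described above can be approximated even without an LLM. A minimal sketch, assuming a simple bag-of-words cosine similarity as a stand-in for a real embedding model:

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over the words the two articles share
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def find_duplicates(articles: dict[str, str], threshold: float = 0.8) -> list[tuple[str, str, float]]:
    # Bag-of-words vectors; a production system would use an embedding model instead
    vectors = {title: Counter(text.lower().split()) for title, text in articles.items()}
    titles = sorted(vectors)
    pairs = []
    for i, t1 in enumerate(titles):
        for t2 in titles[i + 1:]:
            score = cosine_similarity(vectors[t1], vectors[t2])
            if score >= threshold:
                pairs.append((t1, t2, round(score, 2)))
    return pairs

# Toy knowledge base; article titles and text are invented for illustration
kb = {
    "Reset your password": "go to settings and click reset password to reset your password",
    "Password reset steps": "go to settings and click reset password to reset your password now",
    "Install the VPN client": "download the vpn client installer and run it",
}
print(find_duplicates(kb))
```

On a real multi-thousand-article base you'd want embeddings and an approximate-nearest-neighbor index rather than this O(n²) loop, but the "score every pair, flag the ones above a threshold, hand the list to a human" shape is the same.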
Not that I can’t just, you know, FIND porn, but there’s something really fun about trying to generate an image just right, tweaking settings and models until you get the result you’re after.
So I'm really bad about remembering to add comments to my code, but since I started using githubs ai code assistant thing in vs code, it will make contextual suggestions when you comment out a line. I've even gone back to stuff I made ages ago, and used it to figure out what the hell I was thinking when I wrote it back then 😆
It's actually really helpful.
I feel like once the tech adoption curve settles down, it will be most useful in cases like that: contextual analysis
I use ChatGpt to ask programming questions, it’s not always correct but neither is Stack Overflow nowadays. At least it will point me in the right direction.
ChatGPT actually explains the code and can answer questions about it and doesn't make snarky comments about how your question is a duplicate of sixteen other posts which kind of intersect to do what you want but not in a clean way.
to copy my own comment from another similar thread:
I’m an idiot with no marketable skills. I put boxes on shelves for a living. I want to be an artist, a musician, a programmer, an author. I am so bad at all of these, and between having a full time job, a significant other, and several neglected hobbies, I don’t have time to learn to get better at something I suck at. So I cheat. If I want art done, I could commission a real artist, or for the cost of one image I could pay for dalle and have as many images as I want (sure, none of them will be quite what I want but they’ll all be at least good). I could hire a programmer, or I could have chatgpt whip up a script for me since I’m already paying for it anyway since I want access to dalle for my art stuff. Since I have chatgpt anyway, I might as well use it to help flesh out my lore for the book I’ll never write. I haven’t found a good solution for music.
I have in my brain a vision for a thing that is so fucking cool (to me), and nobody else can see it. I need to get it out of my brain, and the only way to do that is to actualize it into reality. I don’t have the skills necessary to do it myself, and I don’t have the money to convince anyone else to help me do it. generative AI is the only way I’m going to be able to make this work. Sure, I wish that the creators of the content that were stolen from to train the ai’s were fairly compensated. I’d be ok with my chatgpt subscription cost going up a few dollars if that meant real living artists got paid, I’m poor but I’m not broke.
These are the opinions of an idiot with no marketable skills.
That's a bit of a loaded question. By AI I assume you're referring to GenAI/LLMs rather than AI broadly.
I use it to correct my spelling on longer posts and I find that it improves the clarity and helps my point come across better.
I use Dall-E to create pictures I never could have before, because despite my interest in drawing, I just never bothered to learn it myself. GenAI enables me to skip the learning and go straight to creating.
I like that it can simulate famous people and allows me to ask 'them' questions that I never could in real life. For example, yesterday I spent a good while chatting with 'Sam Harris' about the morality of lying and the edge cases where it might be justified. I find discussions like this genuinely enjoyable and insightful.
I also like using the voice mode where I can just talk with it. As a non-native English speaker, I find it to be good practice to help me improve my pronunciation.
It's great for parsing through the enshittified journalism. You know the classic recipe blog trope? If you ask chatgpt for a recipe, it just gives you one. Whether it's good or not is a different story, but chatgpt is leagues better at getting to the info you want than search has been for the last decade.
I wouldn't say GenAI caused that problem, I'd say it was advertising practices and the structure of key words prioritizing responses in search engines.
I use LLMs for multiple things, and they're useful for things that are easy to validate. E.g. when you're trying to find or learn about something, but don't know the right terminology or keywords to put into a search engine. I also use them for some coding tasks. They work OK for getting customized usage examples for libraries, languages, and frameworks you may not be familiar with (but will sometimes use old APIs or just hallucinate APIs that don't exist). They work OK for "translation" tasks, such as converting a MySQL query to a Postgres query. I tried out GitHub Copilot for a while, but found that it would sometimes introduce subtle bugs that I would initially overlook, so I don't use it anymore. I've had to create some graphics, and am not at all an artist, but was able to use AUTOMATIC1111, ControlNet, Stable Diffusion, and GIMP to get usable results (an artist would obviously be much better though). RemBG works pretty well for isolating the subject of an image and removing the background too. Image upsampling, DLSS, DTS Neural:X, plant identification apps, the blind-spot warnings in my car, image stabilization, and stuff like that are pretty useful too.
I usually keep abreast of the scene so I'll give a lot of stuff a try. Entertainment wise, making music and images or playing dnd with it is fun but the novelty tends to wear off. Image gen can be useful for personal projects.
Work wise, I mostly use it to do deep dives into things like datasheets and libraries, or doing the boring coding bits. I verify the info and use it in conjunction with regular research but it makes things a lot easier.
Oh, also tts is fun. The actor who played Dumbledore reads me the news and Emma Watson tells me what exercise is next during my workout, although some might frown on using their voices without consent.
I use ChatGPT and Copilot as search engines, particularly for programming concepts or technical documentation. The way I figure, since these AI companies are scraping the internet to train these models, it’s incredibly likely that they’ve picked up some bit of information that Google and DDG won’t surface because SEO.
Thank you for pointing that out. I don't use it for anything critical, and it's been very useful because Kagi's summarizer works on things like YouTube videos friends link which I don't care enough to watch. I speak the language pair I use DeepL on, but DeepL often writes more natively than I can. In my anecdotal experience, LLMs have greatly improved the quality of machine translation.
The AI summaries were judged significantly weaker across all five metrics used by the evaluators, including coherency/consistency, length, and focus on ASIC references. Across the five documents, the AI summaries scored an average total of seven points (on ASIC's five-category, 15-point scale), compared to 12.2 points for the human summaries.
The focus on the (now-outdated) Llama2-70B also means that "the results do not necessarily reflect how other models may perform" the authors warn.
to assess the capability of Generative AI (Gen AI) to summarise a sample of public submissions made to an external Parliamentary Joint Committee inquiry, looking into audit and consultancy firms
In the final assessment ASIC assessors generally agreed that AI outputs could potentially create more work if used (in current state), due to the need to fact check outputs, or because the original source material actually presented information better. The assessments showed that one of the most significant issues with the model was its limited ability to pick up the nuance or context required to analyse submissions.
The duration of the PoC was relatively short and allowed limited time for optimisation of the LLM.
So basically this study concludes that Llama2-70B with basic prompting is not as good as humans at summarizing documents submitted to the Australian government by businesses, and its summaries are not good enough to be useful for that purpose. But there are some pretty significant caveats here, most notably the relative weakness of the model they used (I like Llama2-70B because I can run it locally on my computer but it's definitely a lot dumber than ChatGPT), and how summarization of government/business documents is likely a harder and less forgiving task than some other things you might want a generated summary of.
The services I use, Kagi's autosummarizer and DeepL, haven't done that when I've checked. The downside of the summarizer is that it might remove some subtle things sometimes that I'd have liked it to keep. I imagine that would occur if I had a human summarize too, though. DeepL has been very accurate.
Downvoters need to read some peer reviewed studies and not lap up whatever BS comes from OpenAI who are selling you a bogus product lmao. I too was excited for summarization use-case of AI when LLMs were the new shiny toy, until people actually started testing it and got a big reality check
The only things I use and I know they have AI are Spotify recommendations, live captions on videos and DLSS. I don't find generative AI to be interesting, but there's nothing wrong with machine learning itself imo if it's used for things that have purpose.
Regardless of how useful some might find it, there isn’t a single use case that justifies the environmental cost (not to mention the societal cost). None. Stop using it. You were able to survive and function without it 2 years ago, and you still can.
This is like saying you can't play video games because it costs electricity and you can go without. You can say it about literally everything that isn't strictly necessary to live. AI isn't just LLMs and only LLMs have a high environmental cost, and unless you are literally wasting the output like the big tech companies are, even that can be justified for the right reasons.
Hey man, why are we using the internet? Don't you see this is bad for the environment? While you're at it, stop wearing clothes! Our ancestors were able to get by with just their body hair; we're ruining nature.
I was talking about LLMs. If you can find a search engine that still works, and look at where we’re at in the destruction of our planetary life support systems, and the colossal amounts of energy and water required for LLMs, then you might revisit your opinion.
Then the use of AI advancements in medicine is right out too? I'm pretty sure the radiologist that looked at my MRI this past week for lung damage (thanks, long covid!) used it in some form. And my wife's upcoming mammogram will also use some form of AI to assist in diagnosis. Or the scheduling department for these appointments that used their own type of AI to manage thousands of appointments per month and year. And this is just one example where AI is quickly becoming indispensable.
AI can be tremendously useful for some things and useless for other things. Painting with such a broad brush like you do makes you no better than those tech bros who push AI for everything to make it all sound more impressive.
There are plenty of specialized “AIs” that are useful and come at a reasonable environmental and societal cost. LLMs are simply an ecological nightmare that arrive at a time where we’re already on the brink of a total breakdown of the biophysical systems that keep us alive. It’s sheer madness.
I've been finding it useful for altering recipes to take my wife's allergies into account. I don't use it for much else. And certainly not for anything important.
There are plenty of uses for it. There are also plenty of bad implementations that don't use it in a way that helps anyone.
We're going through an overhyped period currently but we'll see actual uses in a few years once the dust settles. About 10 years ago, a similar thing happened with AI vision and now everyone has filters they can use on cameras and face detection. We'll reach another plateau until the next tech hype comes about.
For the most part it's not useful, at least not the way people use it most of the time.
It's an engine for producing text that's most like the text it's seen before, or for telling you what text it's seen before is most like the text you just gave it.
When it comes to having a conversation, it can passably engage in small talk, or present itself as having just skimmed the Wikipedia article on some topic.
This is kinda nifty and I've actually recently found it useful for giving me literally any insignificant mental stimulation to keep me awake while feeding a baby in the middle of the night.
Using it to replace thinking or interaction gives you a substandard result.
Using it as a language interface to something else can give better results.
I've seen it used as an interface to a set of data collection interfaces, where all it needed to know how to do was tell the user what things they could ask about, and then convert their responses into inputs for the API, and show them the resulting chart. Since it wasn't doing anything to actually interpret the data, it never came across as "wrong".
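That pattern, where the LLM only translates language and the API does the actual work, can be sketched roughly like this. The `parse_request` keyword matcher is a hypothetical stand-in for the model's structured-output step, and the metric/endpoint names are invented:

```python
import json

# The data-collection API the assistant fronts; metric names are illustrative
METRICS = {"sales", "signups", "churn"}

def parse_request(user_text: str) -> dict:
    """Stand-in for the LLM step: turn freeform text into a structured query.

    A real deployment would prompt the model to emit this JSON; here a
    keyword match plays that role so the sketch stays self-contained."""
    text = user_text.lower()
    metric = next((m for m in METRICS if m in text), None)
    if metric is None:
        # Mirrors "tell the user what things they could ask about"
        raise ValueError(f"I can chart: {', '.join(sorted(METRICS))}")
    period = "last_30_days" if "month" in text else "last_7_days"
    return {"metric": metric, "period": period}

def fetch_chart_data(query: dict) -> str:
    # Stub for the real backend; the LLM never touches the numbers,
    # so there's nothing for it to be "wrong" about
    return json.dumps({"query": query, "points": [1, 2, 3]})

print(fetch_chart_data(parse_request("show me signups over the last month")))
```

The design point the comment makes is the key one: because the model only maps requests onto a fixed API surface and echoes back what it can do, its usual failure mode (confidently wrong interpretation of data) never gets a chance to appear.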
It's funny you mention this, but the erotic roleplay aspect of llms is a thriving business generating millions of dollars every month now in subscription costs.
We've barely even scratched the surface of what these models can do, and they're increasing in usage at an exponential rate.
I have found ChatGPT to be better than Google for random questions I have, and for asking general advice on a whole bunch of things, though I still know when to go to other sources. I also use it to extrapolate data, come up with scheduling for work (I organise some volunteer shifts), and lots of Excel formulae.
Sometimes it’s easier to check ChatGPT’s answers, ask follow up questions, look at the sources it provides and live with the occasional hallucinations than to sift through the garbage pile that google search has become.
I needed instructions on how to downgrade the firmware of my Unifi UDR because they pushed a botched update. I searched for a while and could only find vague references to SSH and upgrading.
They had a “Unifi GPT” bot so I figured what the hell. I asked “how to downgrade udr firmware to stable”. It gave me effective step by step instructions on how to enable SSH, SSH in and what commands to run to do so. Worked like a charm.
So yeah, I think the problem is we’re in the hype era of LLMs. They’re being over applied at lots of things they aren’t good at. But it’s extremism in the other direction to say there aren’t functions they can do well.
They are at least better than your average canned chat/search bot or ill informed CSR at finding an answer to your question. I think they can help with lots of frustrating or opaque computer related tasks, or at least point you in the right direction or surface something you might not be able to find easily otherwise.
They just aren’t going to write programs for you or do your office job for you like execs think they will.
My corp has been very skeptical and suspicious. So far the only allowed ai is to summarize slack. For channels that I want to keep in the loop but not waste time monitoring, it creates a nice summary of recent traffic.
I was trying to help one guy who used an online ai despite it being against policy. However he was just using it as a search engine to find a code solution and it took way too long to give him the wrong answer. A search engine would have been faster but he’d have to use his own judgement to identify the wrong answer. Pretty arrogant guy despite not knowing what he was doing, so I didn’t fight it when he insisted he was going to follow what it told him
I used to spend 1 month a year where all I did was write performance reports on people I supervise. Now I put the facts in let AI write the first draft, do some editing and I'm done in a week.
I use it for coding (rarely pure copy paste), explaining code, use/examples, finding tools to use.
Better translation than Google translate for Japanese.
Asking for things that search engines only gives generic results for.
Playing with it on my own computer, locally hosting it and running it offline, has been pretty cool. I find it really impressive when it's something open source and community driven. I also think there are a lot of useful applications for problems that aren't solvable with traditional programming.
However a lot of the pushed corporate AI feels not that useful, and there's something about it that really rubs me the wrong way.
I made an AI song for my mom's birthday on Suno and she loved it so much she cried. So that was nice.
I don't like how people are using it to just replace artists. It would be fine if it were just to automate some things, like "AI can tell you when ___ needs to be replaced," but it feels more like it's being used as a stick against workers. Like, "Keep acting up and I'll replace you with dun dun dun AI!"
I've never had AI code run straight off the bat - generally because if I've resorted to asking an AI, I've already spent an hour googling - but it often gives me a starting point to narrow my search.
There's been a couple of times it's been useful outside of coding/config - for example, finding the name of some legal concepts can be fairly hard with traditional search, if you don't know the surrounding terminology.
I've used it to fill in the gaps for a DND storyline. I'll give it a prompt and a couple of story arcs, then I'll tell it to write in a certain style, say a cowardly king or dogmatic paladin. From there it will spit out a story. If I don't like certain aspects, I'll tell it to rewrite a section with some other detail in mind. It does a fantastic job and saves me some of the guesswork.
For those interested, I just asked it to generate a campaign with a quick prompt and this is what it spit out. Not perfect, but a good basis to build from:
Campaign Framework: The Relic of Shadows
Introduction: The Call to Adventure
Setting: The campaign begins in the small, picturesque fiefdom of Ravenwood, ruled by the benevolent Lord Alaric. Known for his wisdom and kindness, Lord Alaric's peace is shattered when a relic of immense power, the Amulet of Shadows, is stolen by a band of notorious highwaymen.
Hook: Lord Alaric seeks the help of the adventurers, promising wealth and favor in return for the retrieval of the Amulet of Shadows. The relic is said to possess the ability to manipulate shadows, providing its bearer with unparalleled stealth and the power to traverse through the Shadow Realm.
Act 1: The Journey Begins
Initial Quest: The adventurers set off to track down the highwaymen, following clues and engaging in minor skirmishes along the way. They learn that the highwaymen are more than simple thieves—they are a fragmented faction of former soldiers who turned to banditry after being betrayed by a corrupt noble.
Twist: Upon confronting the highwaymen, the adventurers discover that Lord Alaric had once wronged them in his past. The highwaymen had stolen the Amulet of Shadows not out of greed, but to prevent Alaric from using its dark powers. They reveal that Alaric's benevolent facade hides his darker ambitions.
Act 2: The Corrupted Lord
Return and Betrayal: Despite the highwaymen's revelations, the adventurers decide to return the Amulet of Shadows to Lord Alaric, hoping to resolve the situation peacefully. However, upon returning the relic, Lord Alaric's true nature is revealed. Corrupted by the power of the amulet, he becomes malevolent and orders the adventurers imprisoned in a distant, forgotten dungeon.
Dungeon Escape: The adventurers must navigate the treacherous dungeon, facing traps, monsters, and puzzles. During their imprisonment, they encounter an exiled hermit named Elias, a former friend of Lord Alaric who was cast out due to discovering the lord's dark ambitions.
Act 3: The Path of Vengeance
The Hermit's Guidance: Elias offers to guide the adventurers back to Ravenwood and reveals crucial insights into Lord Alaric's plans. He explains that Alaric seeks to use the Amulet of Shadows to control the Shadow Realm and extend his dominion. Elias knows secret paths and hidden truths that can aid the adventurers.
Critical Choice: The adventurers must protect Elias throughout their journey. If Elias perishes, they lose his valuable knowledge, making their mission significantly harder. Without Elias, they will face additional challenges and must uncover Alaric's weaknesses through other means.
Act 4: The Final Confrontation
Return to Ravenwood: The adventurers, now armed with the knowledge provided by Elias (or through their own investigations if Elias is lost), must confront Lord Alaric. They rally allies, face Alaric's corrupted forces, and uncover the hidden vulnerabilities in Alaric's plans.
Climactic Battle: The campaign culminates in an epic showdown with Lord Alaric in his fortified castle. The adventurers must disable the Amulet of Shadows, overcome Alaric's dark magic, and ultimately restore peace to Ravenwood.
Resolution
Victory or Defeat: Depending on the adventurers' actions, they may emerge victorious, freeing Ravenwood from Alaric's tyrannical grasp, or they might succumb to the shadows, leaving the fiefdom to its dark fate. Their choices throughout the campaign will determine the final outcome and the legacy they leave behind.
Feel free to add or adjust details to fit your players' preferences and playstyle! This framework should give you a solid foundation for a thrilling and complex campaign. Happy adventuring! 🐉⚔️
And below was my prompt; took me appx 5 minutes to write from my phone. I like that the AI thought that Elias would recall secret passages from his youth, which I'm thinking could help the adventurers bypass some of the guards. I definitely would want to workshop that highwaymen twist, I mean what kind of party would be willing to return a relic of shadows when they perceive a ruler as being corrupt? It needs something a bit more convincing.
——
Provide me a framework for a DND campaign that will contain the following story arcs. A lord of a small fiefdom seeks a group of traveling adventurers to return a relic (you choose the relic, it must have magical powers) that was stolen from him by a group of highwaymen. The story must include a twist about the highwaymen. When it is returned, the lord becomes corrupted and throws the party in a far off dungeon. The adventurers must work their way back to the lord and seek their revenge, with the assistance of a self-exiled hermit who formerly knew the lord in his youth, whom they encounter along the way. If the hermit dies, the party loses insight into the lord's intentions and it makes it much more challenging to win the campaign.
I have a custom agent that I ask questions to, which then goes and finds sources and answers my question. It can do math by writing Python code and using the result. I use it almost exclusively instead of regular search. AI makes coding far quicker: giving examples, remembering shit I can't remember how to use, writing basic functions, etc.
Writing emails.
Making profile pictures.
I used to enjoy the tldr bot on lemmy till some fascist decided to kill it instead of just letting people block it.
It looks impressive on the surface but if you approach it with any genuine scrutiny it falls apart and you can see that it doesn't know how to draw for shit.
I find it helpful to chat about a topic sometimes as long as it's not based on pure facts, You can talk about your feelings with it.
I have had fun with ChatGPT, but in terms of integrating it into my workflow: no. It just gives me too much garbage on a regular basis for me not to have to check and recheck anything it produces, so it's more efficient to do it myself.
And as entertainment, it's more expensive than e.g. a game, over time.
There are a few uses where it genuinely speeds up editing/insertion into contracts and warns of you of red flags/riders that might open you up to unintended liability. BUT the software is $$$$ and you generally need a law degree before you even need a tool like that. For those that are constantly up to their chins in legal shit, it can be helpful. I'm not, thankfully.
I ask it a lot of technical questions that are broad and non-specific. It helps to quickly get a gauge on what is the correct way to implement something.
A friend's wife "makes" and sells AI slop prints. He had to make a twitter account so he could help her deal with the "harassment". Not sure exactly what she's dealing with, but my friend and I have slightly different ideas of what harassment is and I'm not interested in hearing more about the situation. The prints I've seen look like generic fantasy novel art that you'd see at the checkout line of a grocery store.
I use chatgpt to make questions for me when my teachers refuse to give me anything to practice on before final exams. Even then, I'd take literally anything they'd give over whatever AI can generate
I built a spreadsheet for a client that sorts their email into threads and then segments various conversations into a different view based on shipment numbers mentioned in the conversations. But it's a lot of work to get something like this set up. Am thinking of going into consulting/implementation.
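A rough sketch of the shipment-number segmentation step described above. The `SHP-12345` number format and the message fields are assumptions for illustration, not the client's actual scheme:

```python
import re
from collections import defaultdict

# Hypothetical shipment-number format; the real pattern depends on
# the client's numbering scheme
SHIPMENT_RE = re.compile(r"\bSHP-\d{5}\b")

def segment_by_shipment(messages: list[dict]) -> dict[str, list[str]]:
    """Group message subjects under every shipment number they mention."""
    views = defaultdict(list)
    for msg in messages:
        # set() so a message counted once per shipment, even if the
        # number appears in both subject and body
        for number in set(SHIPMENT_RE.findall(msg["subject"] + " " + msg["body"])):
            views[number].append(msg["subject"])
    return dict(views)

# Toy inbox; a message mentioning two shipments lands in both views
inbox = [
    {"subject": "Re: SHP-10001 delayed", "body": "Customs is holding SHP-10001."},
    {"subject": "New order", "body": "Please book SHP-10002 and SHP-10001."},
]
print(segment_by_shipment(inbox))
```

Most of the setup work in a real implementation isn't this grouping logic; it's wiring up mail access, handling threads, and coping with the numbers people mistype, which is presumably where the consulting value lies.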
I'll use it to write scripts for repetitive tasks at my job. I never learned to code, so it's actually super helpful in that sense, but that isn't really what OP is asking, I don't think. I use AI by going on to their platform and initiating the interaction. I disable every form of AI I am capable of disabling/uninstalling. Every integrated form of AI has been obnoxious.
I like messing with the locally hosted AI available. We have a locally hosted LLM trained on our command media at work that is occasionally useful. I avoid it otherwise if I didn't set it up myself or know who did.
I’m not impressed with the LLMs. They do make great synonym generators.
Stable diffusion and other image diffusers are genuinely amazing. And I’m not talking about asking copilot to make Fortnite shrek. There are incredibly complex ways in which you can fine tune to tell it how to shape and refine the image. It has and is going to continue to revolutionize graphical art. And once the math shrinks down it’s going to be everywhere.
I use AI every day. I think it's an amazing tool. It helps me with work, with video games, with general information, with my dog, and with a whole lot of other things. Obviously verify the claims if it's an important matter, but it'll still save you a lot of time. Prompting AI with useful queries is a skill set that everyone should be developing right now. Like it or not, AI is here and it's going to impact everyone.
Internet search, e.g. Google, is now functionally almost completely useless. I use ChatGPT basically as a Google replacement.
I will still search for stuff - I use Kagi - but give up after half a dozen results if none of them are relevant and go to ChatGPT instead. Often, ChatGPT is more helpful. But sometimes it just makes a bunch of nonsense up.
ChatGPT is great for when you need to find something where you kind of know at least the vague shape of what you’re expecting and you have enough expertise to filter out any of the lies it makes up.
It stimulates my brain, and I enjoy the randomness of it all. It's like how in nature things can be perfectly imperfect - random and still beautiful - unintentional and still emotion-inducing. Sure, I see the ethical issues with how an AI is trained and how capitalism cares more about profit than people leading to job loss or exploitation; however, those are separate issues in my mind, and I can still find joy in the random output of an AI. I could easily tunnel on the bad parts of AI and what's happening as the world devours a new technology, but I still see benefits it can bring in the medical research and engineering fields.
I find ChatGPT useful in getting my server to work (since I'm pretty new with Linux)
Other than that, I check in on how local image models are doing around once every couple of months. I would say you can achieve some cool stuff with it, but not really any unusual stuff.
It's an overly broad term, and the "hype" use-cases dominate the discussion in a way that lacks vision. I'm using machine learning to optimize hardware accelerated processing for particle physics. So, ya, it's not all slop. And what is, may very well evolve.
Porn has been ruined by AI too. Jokes aside it's really a boner killer.
Idk who faps to that whack shit but it's trying so hard to make everything look baby silk smooth with unrealistic bodies most likely stolen from hentai.
For me, throwing a graph in and telling it to create a table from it and stuff like that is really super helpful, since I often have to do this, and by hand it's a very tedious job. Sorting and cleaning tables and translating stuff is super handy and I use it quite often. But other than that I don't care.
I'm surprisingly on board with AI art. It allows you to create whatever you want without having the technical ability to do so.
(For example, if you want a sick wallpaper.)
It significantly lowers the floor as far as creating anything art-related goes.
I was really psyched about AI when it first hit my news feed. Now I'm less than impressed. Most generalist AI platforms get things wrong constantly. Having an LLM trained on specific things, like math or science or maybe law, I could see being useful.
We're at the "AI everything" phase instead of the "AI what makes sense" phase.
I used it a decent amount at my last job to write test reports that had a lot of similar text with minor changes.
I also use it for dnd to help me quickly make the outlines of side characters & flesh out my world.
Only one I ever use is the meta AI built into messenger because my friends and I can have it make silly and often extremely cursed pictures that make us laugh
As a college student, the best experience I've had is just generating stories that you can easily tell are AI-written by their use of specific language.
Second best was when I tried taking pokemon from older generations, taking their BST, telling an AI (perplexity) that I wanna give them gen 5 BST, providing a spreadsheet with all gen 5 pokemon w/BST and each individual stat, and using whatever it gives me as a baseline for making BST edits.
Otherwise, I wouldn't say I'm a big fan of AI since I don't have many uses for it myself.
Base stat total. I really don't care too much for all these different acronyms, but I watch a fair bit of pokemon challenge content so I hear it more often than I care for.
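The stat-rescaling step described above could look something like this, assuming the goal is just to scale each stat proportionally toward a new base stat total; the example spread is made up, not real game data:

```python
def rescale_stats(stats: dict[str, int], target_bst: int) -> dict[str, int]:
    """Scale each base stat proportionally toward a new base stat total (BST).

    Rounding means the result can land a point or two off the target,
    which is why the output works as a baseline for hand edits rather
    than a final answer."""
    current_bst = sum(stats.values())
    return {name: round(value * target_bst / current_bst)
            for name, value in stats.items()}

# Illustrative stat spread (BST 385) scaled toward a gen-5-style 425 BST
mon = {"hp": 60, "atk": 45, "def": 50, "spa": 80, "spd": 80, "spe": 70}
scaled = rescale_stats(mon, 425)
print(scaled, sum(scaled.values()))
```

This is roughly the deterministic version of what the commenter asked Perplexity to do; doing it in a spreadsheet or a script avoids the model silently fudging individual numbers.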
Going through data and writing letters are the only tasks I've seen AI be useful for. I still wouldn't trust it as far as I could kick its ass, and I'd check it well before submitting for work.
The applications of what you call AI are absolutely limitless. But to be clear, what you're calling "AI" isn't AI in terms of what you might want it to be; what you're referring to are large language models, or LLMs. Which aren't AI, not yet.
It's short sighted statements like this that really get my blood boiling.
If humanity actually achieves artificial intelligence it'll be the equivalent of the printing press or agriculture. It'll be like inventing the superconductor or micro transistors all over again. Our world will completely change for the better.
If your interactions with these LLMs have been negative, I can only assume that you have a strong bias against this type of technology and have simply not used it in a way that's applicable to you.
I personally use LLMs pretty much daily in my life and they have been nothing but an excellent tool.
How could you possibly know that achieving AI will change the world for the better? Change, I believe, but even people running the AI companies talk about how hard alignment is. There’s a chance it has a net positive effect on the world, but I guarantee you there is also a non-zero chance it has a net negative effect. If you have some way of predicting the future, maybe you should get into investing.
There's no reason to assume that AI will be malevolent.
I also said it would be equivalent to other important events throughout human history.
For example, I believe the discovery of agriculture is one of the most detrimental things that ever happened to humanity. Doesn't make it any less riveting.
If you do not understand, or don't want to understand, the implications of a fully realized artificial intelligence, then you are simply willfully ignorant or want to be intentionally contrary.
Either way, when our AI overlords take over the earth, your name won't be in the protected scrolls. May God have mercy on your soul.
Kitboga has used AI (STT, LLMs, and TTS) to waste the time of Scammers.
There are AI tools being used to develop new cures which will benefit everyone.
There are AI tools being used to help discover new planets.
I use DLSS for gaming.
I run a lot of my own local AI models for various reasons.
Whisper - for Audio Transcriptions/Translations.
Different Diffusion Models (SD or Flux) - for some quick visuals to recap a D&D session.
Tesseract OCR - to scan an image and extract any text that it can find (makes it easy to pull out text from any image and make it searchable).
Local LLMs (Llama, Mixtral) for brainstorming ideas, reformatting text, etc. It's great for getting started with certain subjects/topics, as long as I verify everything that it says.
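The Tesseract workflow above (pull text out of images, then make it searchable) can be sketched like this. The OCR step itself needs the tesseract binary (e.g. `tesseract image.png stdout` on the CLI, or the pytesseract wrapper), so here a hardcoded dict stands in for its output and the file names and snippets are invented; only the indexing half is shown for real.

```python
# Sketch of the "extract text from images and make it searchable" idea.
# The strings below stand in for Tesseract OCR output; real usage would
# run `tesseract <image> stdout` (or pytesseract) to produce them.
from collections import defaultdict

def build_index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercase word to the set of image names containing it."""
    index = defaultdict(set)
    for name, text in docs.items():
        for word in text.lower().split():
            index[word.strip(".,!?")].add(name)
    return index

# Pretend these strings came out of the OCR step (made-up examples).
ocr_output = {
    "receipt.png": "Total due: 42.00",
    "whiteboard.jpg": "Sprint planning notes, total story points 18",
}
index = build_index(ocr_output)
print(sorted(index["total"]))  # both made-up images mention "total"
```

A real setup would persist the index (or just grep the dumped text files), but the split is the same: OCR once, search cheaply afterwards.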
Like any new tool, it is being abused by the wealthy to hurt the working class. It does have useful aspects if used properly, but they're pretty overshadowed by all the awful uses imo
You should try Kagi. They have had it for more than a year now. It also generates a summary for any webpage in the results, so you can avoid all the ads and prompts.
That said, I did find some use for ChatGPT last year. I had it explain some parts of Hawking's paper on black hole particle creation to me. This was only useful in this one case because Hawking had a habit of stating that something is true without explaining it, and often without providing useful references. For the record, ChatGPT was not good at this task, but with enough prodding and steering I was eventually able to get it to explain some concepts well enough for my usage. I just needed to understand a topic; I definitely wasn't asking ChatGPT to do any writing for me, since most of what it spits out is flat out wrong.
I once spent a day trying to get it to solve a really basic QM problem, and it couldn't even keep the maths consistent from one line to another.
I abhor it and I think anybody who does actually like it is using it unethically: for art (which they intend to profit off of), for writing papers or articles, and for writing bad code.
I think that you’re right, with the way that our society is structured, it is unethical. It’s essentially the world’s most advanced plagiarism tool.
However, being realistic, even if no private individual ever used it, it would still exist and would be used by corporations for profit maximising.
In my opinion, telling people that they’re bad people for using something which is made unethically isn’t really helpful. For example, smartphones aren’t made ethically, but the way to get that to change isn’t to change consumer habits - because we know that just doesn’t work - it’s to get organised, as a collective working class, and take action into our own hands.
Corpos are currently shooting themselves in the foot by trying to sell an essentially useless product which only lowers the quality of everything it touches.
I'm sure someday it will replace the press number phone machines, at the cost of accessibility, but otherwise I cannot imagine it "maximising profits".
Totally second the latter part - it's the self destructive nature of being blindly anti-AI. Pretty much everyone would support giving more rights and benefits to people displaced by AI, but only a fraction of that group would support an anti-AI mentality. If you want to work against the negative effects of AI in a way that can actually change things, the solution is not to push against the wall closing in on you, but to find the escape.
Yes. AI art is great. It's a new medium, and pretty much every argument against it was made against photography a century ago, and most of them against pre-mixed paints before that. Stop believing the haters who don't know what it actually is.
My main argument against it is that I could not care less about something generated by a machine. What I like about art is seeing the world from the perspective of another human. Machines could make music albums or movies in seconds, but to me it's just a bland mashup of previous works created by humans and I have no interest in that. AI is only capable of creating variations of human art, not innovating like real artists can. We are on the edge of infinite content; I choose to give my time to human creation, not generic spin-offs of it. My two cents.
Most things produced by AI or assisted by AI are still human creations, as it requires a human to guide it toward what it's making. Human innovation is also very much based on mixing material the innovator has seen before in new creative ways. Almost no material is truly innovative. Ask any honest artist about their inspirations and they can tell you what parts of their creations were inspired by what. Our world has explored the depths of most art forms, so there is more than a lifetime's worth of art to mix and match. Often the real reason things feel fresh and new is that they are fresh and new to us, but they already existed in some form out there before they came to our attention.
That AI can match this is easily proven by the fact that AI can create material no human would realistically make (like AI-generated QR codes, or 'cursed' AI), pull off very proficient style mixing that would take a human extensive study of both styles (e.g. Pokemon and real life), or produce real-looking images that could not realistically, financially, or conscionably be made using normal methods (e.g. a bus full of Greek marble statues).
Nobody is saying you have to like AI art, and depending on your perspective, some or most of it will still be really low effort and not worth paying attention to, but that was already the state of art before AI. Lifetimes of art are being uploaded every day, but nobody has the time to view it all.
So I would really keep an open mind that good AI art and AI-assisted art exists out there, and you might one day come to like it without realizing you're seeing it, because good AI usage is indistinguishable from normal art.