Why are people seemingly against AI chatbots aiding in writing code?
Please remove this if it's not allowed.
I see a lot of people in here who get mad at AI-generated code and I am wondering why. I wrote a couple of bash scripts with the help of ChatGPT and, if anything, I think it's great.
Now, I obviously didn't tell it to write all of the code by itself. That would be a horrible idea; instead, I asked it questions along the way and tested its output before putting it in my scripts.
I am fairly competent in writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know anything about Bash's syntax. Now, I could have used any other language I knew, but I chose Bash because it made the most sense: Bash is shipped with most Linux distros out of the box, and one does not have to install another interpreter/compiler for another language. I don't like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. Also, I have not written anything of this complexity before in Bash, just a bunch of commands on multiple separate lines so that I don't have to type them one after another. But this one required many rather advanced features. I was not motivated to learn Bash; I just wanted to put my idea into action.
I did start with an internet search, but the guides I found were lacking. I could not find how to easily pass values into a function and return from one, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors that occurred in a previous command, how to separate the letters and numbers in a string, etc.
That is where ChatGPT helped greatly. I would ask it to write these pieces of code whenever I encountered them, then test its code with various inputs to see if it worked as expected. If not, I would tell it which case failed and it would revise the code before I put it into my scripts.
Thanks to ChatGPT, someone with zero knowledge of Bash can quickly and easily write fairly advanced Bash. I don't think I could have written what I wrote this quickly the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. Thanks to ChatGPT I can just write all this quickly and forget about it. If I want to learn Bash and am motivated, I will certainly take the time to learn it properly.
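To give an idea of the kinds of pieces I mean, here is a simplified sketch in the spirit of what I ended up with (names and paths are made up for illustration, not my actual scripts):

```bash
#!/usr/bin/env bash
set -euo pipefail   # stop on errors instead of silently carrying on

# Pass values into a function via $1, $2, ...; "return" a string by echoing it.
strip_trailing_slash() {
    local path="$1"
    echo "${path%/}"             # drop one trailing slash, if present
}

dir=$(strip_trailing_slash "/tmp/somedir/")

# Loop over an array.
files=("a.txt" "b.txt" "c.txt")
for f in "${files[@]}"; do
    echo "processing $dir/$f"
done

# Catch an error from the previous command via its exit status.
if ! cp "$dir/a.txt" /tmp/backup/ 2>/dev/null; then
    echo "copy failed" >&2
fi

# Separate the letters and the numbers in a string like "abc123".
s="abc123"
letters="${s//[^[:alpha:]]/}"    # -> abc
numbers="${s//[^[:digit:]]/}"    # -> 123
echo "$letters $numbers"
```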
What do you think? What negative experience do you have with AI chatbots that made you hate them?
A lot of the criticism comes from AI results being wrong a lot of the time while sounding convincingly correct. In software, things that appear to be correct but are subtly wrong lead to errors that can be difficult to decipher.
Imagine that your AI was trained on StackOverflow results. It learns from the questions as well as the answers, but the questions will often include snippets of code that just don't work.
The workflow of using AI resembles something like the relationship between a junior and senior developer. The junior/AI generates code from a spec/prompt, and then the senior/prompter inspects the code for errors. If we remove the junior from the equation and replace them with AI, then entry-level developer jobs are slashed, and at the same time people aren't getting the experience required to get to the senior level.
Generally speaking, programmers like to program (many do it just for fun), and many dislike review. AI removes the programming from the equation in favour of review.
Another argument would be that if I generate code that I then have to take time to review and figure out what might be wrong with, it might just be quicker and easier to write it correctly the first time.
Business often doesn't understand these subtleties. There's a ton of money being shovelled into AI right now. Not only for developing new models, but for marketing AI as a solution to business problems. A greedy executive that's only looking at the bottom line and doesn't understand the solution might be eager to implement AI in order to cut jobs. Everyone suffers when jobs are eliminated this way, and the product rarely improves.
Generally speaking, programmers like to program (many do it just for fun), and many dislike review. AI removes the programming from the equation in favour of review.
This really resonated with me and is an excellent point. I'm going to have to remember that one.
A developer who is afraid of peer review is not a developer at all imo, but more or less an artist who fears exposing how the sausage was made.
I’m not saying a junior who is nervous is not a dev, I’m talking about someone who has been at this for some time, and still can’t handle feedback productively.
As a cybersecurity guy, for me it's things like this study, which said:
Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.
FWIW, at this point that study would be horribly outdated. It was done in 2022, which means it probably took place in early 2022 or 2021. The models used for coding have come a long way since then; the study would essentially have to be redone on current models to see if that's still the case.
People's perceptions have probably not changed, but whether the code is actually insecure would need to be reassessed.
Sure, but to me that means the latest information is that AI assistants help produce insecure code. If someone wants to perform a study with more recent models to show that's no longer the case, I'll revisit my opinion. Until then, I'm assuming that the study holds true. We can't do security based on "it's probably fine now."
I think it's more appalling because they should have assumed this was early tech and therefore less trustworthy. If anything, I'd expect more people to believe their code is secure today using AI than back in 2021/2022 because the tech is that much more mature.
I'm guessing an LLM will make a lot of noob mistakes, especially in languages like C(++) where a lot of care needs to be taken for memory safety. LLMs don't understand code, they just look at a lot of samples of existing code, and a lot of code available on the internet is terrible from a security and performance perspective. If you're writing it yourself, hopefully you've been through enough code reviews to catch the more common mistakes.
If you’re a seasoned developer who’s using it to boilerplate / template something and you’re confident you can go in after it and fix anything wrong with it, it’s fine.
The problem is it’s used often by beginners or people who aren’t experienced in whatever language they’re writing, to the point that they won’t even understand what’s wrong with it.
If you’re trying to learn to code, or to code in a new language, would you try to learn from somebody who has only half a clue what he’s doing and will confidently tell you things that are objectively wrong? That’s much worse than just learning to do it properly yourself.
Edit: I agree about junior devs not blindly trusting them though. They don't yet know where to draw the X.
The problem (one of the problems) is that people do lean too heavily on the AI tools when they're inexperienced and never learn for themselves "where to draw the X".
If I'm hiring a dev for my team, I want them to be able to think for themselves, and not be completely reliant on some LLM or other crutch.
The other day we were going over some SQL query with a younger colleague and I went “wait, what was the function for the length of a string in SQL Server?”, so he typed the whole question into chatgpt, which replied (extremely slowly) with some unrelated garbage.
I asked him to let me take the keyboard, typed “sql server string length” into google, saw LEN in the excerpt from the first result, and went on to do what I'd wanted to do, while in another tab chatgpt was still spewing nonsense.
That causes the people using them to blindly copy their useless buggy code (that even if it worked and wasn't incomplete and full of bugs would be intended to solve a completely different problem, since users are incapable of properly asking what they want and LLMs would produce the wrong code most of the time even if asked properly), wasting everyone's time and learning nothing.
Not that blindly copying from Stack Overflow is any better, of course, but Stack Overflow or Reddit answers come with comments and alternative answers that, if you read them, go a long way toward telling you whether the code you're copying will work for your particular situation or not.
LLMs give you none of that context, and are fundamentally incapable of doing the reasoning (and learning) that you'd do given different commented answers.
They'll just very convincingly tell you that their code is right, correct, and adequate to your requirements, and leave it to you (or whoever has to deal with your pull requests) to find out without any hints why it's not.
This is my big concern...not that people will use LLMs as a useful tool. That's inevitable. I fear that people will forget how to ask questions and learn for themselves.
Exactly. Maybe you want the number of unicode code points in the string, or perhaps the byte length of the string. It's unclear what an LLM would give you, but the docs would clearly state what that length is measuring.
Use LLMs to come up with things to look up in the official docs, don't use it to replace reading docs. As the famous Russian proverb goes: trust, but verify. It's fine to trust what an LLM says, provided you also go double check what it says in more official docs.
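For what it's worth, the same ambiguity is easy to demonstrate in Bash; a minimal sketch, assuming a UTF-8 locale:

```bash
# What a "length" measures depends on what you count.
s="héllo"
echo "${#s}"               # 5 - characters, per the current locale
printf '%s' "$s" | wc -c   # 6 - bytes ("é" is two bytes in UTF-8)
printf '%s' "$s" | wc -m   # 5 - characters again, as counted by wc
```

The Bash reference manual and the wc man page each spell out exactly which of those they count, which is the kind of detail an LLM summary tends to gloss over.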
I've been finding it a lot harder recently to find what I'm looking for when it comes to coding knowledge on search engines. I feel that with an LLM I can give it the wider context and it figures out exactly the sort of thing I'm trying to find. It's even more useful for trying to understand a complex error message you haven't seen before.
That being said, LLMs are not where my searching ends. I check where it got the information from so I can read the actual source and not what it has conjured up.
I've been finding it a lot harder recently to find what I'm looking for when it comes to coding knowledge on search engines
Yeah, the enshittification has been getting worse and worse, probably because the same companies making the search engines are the ones trying to sell you the LLMs, and the only way to sell them is to make the alternatives worse.
That said, I still manage to find anything I need much faster and with less effort than dealing with an LLM would take. An LLM will simply give me a single answer (which I then have to test and fix), while a search engine gives me multiple commented answers which I can compare and learn from.
I remembered another example: I was checking a pull request and it wouldn't compile; the programmer had apparently used an obscure internal function to check if a string was empty instead of string.IsNullOrWhitespace() (in C# internal means “I designed my classes wrong and I don't have time to redesign them from scratch; this member should be private or protected, but I need to access it from outside the class hierarchy, so I'll allow other classes in the same assembly to access it, but not ones outside of the assembly”; similar use case to friend in C++; it's used a lot in the standard .NET libraries).
Now, that particular internal function isn't documented practically anywhere, and being internal can't be used outside its particular library, so it wouldn't pop up in any example the coder might have seen... but .NET is open source, and the library's source code is on GitHub, so chatgpt/copilot has been trained on it, so that's where the coder must have gotten it from.
The thing, though, is that LLMs are essentially statistical engines that just pop out the most statistically likely token after a given sequence of tokens, so they have no way whatsoever to “know” that a function is internal. Or private, or protected, for that matter.
That function is used in the code they've been trained on to figure out if a string is empty, so they're just as likely to output it as string.IsNullOrWhitespace() or string.IsNullOrEmpty().
Hell, if(condition) and if(!condition) are probably also equally likely in most places... and I for one don't want to have to debug code generated by something that can't tell those apart.
We literally had an applicant use AI in an interview, failed the same step twice, and at the end we asked how confident they were in their code and they said "100%" (we were hoping they'd say they want time to write tests). Oh, and my coworker and I each found two different bugs just by reading the code. That candidate didn't move on to the next round. We've had applicants write buggy code, but they at least said they'd want to write some test before they were confident, and they didn't use AI at all.
I thought that was just a one-off, it's sad if it's actually more common.
OP was able to write a bash script that works... on his machine 🤷 That's far from having to review and send code to production, either in FOSS or private development.
I also noticed that they were talking about sending arguments to a custom function? That's like a day-one lesson if you already program. But this was something they couldn't find in regular search?
We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.
AI code is designed to look like it fits, not to be correct. Sometimes it is correct. Sometimes it’s close but has small errors. Sometimes it looks right but is significantly wrong. Personally, I’ve never gotten ChatGPT to write code without significant errors for more than trivially small test cases.
You aren’t learning as much when you have ChatGPT do it for you, and what you do learn is “this is what ChatGPT did and it worked last time” rather than “this is what the problem is, this is the solution I came up with last time, and this is why it worked”. In the second case you are far better equipped to tackle future problems, which won’t be exactly the same.
All that being said, I do think there is a place for ChatGPT in simple queries, like asking about syntax for a language you don’t know. But take every answer it gives you with a grain of salt. And if you can find documentation, I’d trust that a lot more.
All that being said, I do think there is a place for ChatGPT in simple queries, like asking about syntax for a language you don’t know.
I am also wary regarding AI and coding, but this was actually the first time I used ChatGPT to program something for a small home project in Python, since I had never used it before. I was positively surprised by how much it could help me get started. I also learned quite a bit, since I always asked for comparisons with Java, which I know, and for the reasoning behind why it is that way; I simply wanted to understand what it puts out. I also only asked for single lines of code rather than having it generate a whole method, e.g. “I want to move a file from X to Y”.
The thought of people blindly copying the produced code scares me.
It gives a false sense of security to beginner programmers and doesn't offer the more tailored solution that a more practiced programmer might create. This can lead to a reduction in code quality and can introduce bugs and security holes over time. If you don't know the syntax of a language, how do you know it didn't offer you something dangerous? I have Copilot at work, and the only things I actually accept its suggestions for right now are writing log statements and populating argument lists. While those both still require review, they are generally faster than me typing them out. Most of the rest of what it gives me is undesired: it's either too verbose, too hard to read, or just does something else entirely.
If the AI was trained on code that people permitted to be freely shared, then go ahead. Taking code and ignoring the software license is largely considered a dick move, even by people who use AI.
Some people choose a copyleft software license to ensure users have software freedom, and this AI (a math process) circumvents that. [A copyleft license makes it so that you can use the code if you agree to use the same license for the rest of the program - therefore users get the same rights you did]
I hate big tech too, but I'm not really sure how the GPL or MIT licenses (for example) would apply. LLMs don't really memorize stuff like a database would and there are certain (academic/research) domains that would almost certainly fall under fair use. LLMs aren't really capable of storing the entire training set, though I admit there are almost certainly edge cases where stuff is taken verbatim.
I'm not advocating for OpenAI by any means, but I'm genuinely skeptical that most copyleft licenses have any stake in this. There's no static linking or source code distribution happening. Many basic algorithms don't fall under copyright, and, in practice, Stack Overflow code is copy/pasted all the time without that being released under any special license.
If your code is on GitHub, it really doesn't matter what license you provide in the repository -- you've already agreed to allowing any user to "fork" it for any reason whatsoever.
This is a good quote, but it lives within a context of professional code development.
Everyone in the modern era starts coding by copying functions without understanding what they do, and people go entire careers in all sorts of jobs and industries by copying what came before that 'worked', without really understanding the underlying mechanisms.
What's important is having a willingness to learn and putting in the effort to learn. AI code snippets are super useful for learning, even when the model hallucinates, provided you test the code and make backups first. This all requires responsible IT practices to do safely in a production environment, and that's where corporate management eyeing labor cost reduction loses the plot, thinking AI is a wholesale replacement for a competent human as the tech currently stands.
When it comes to writing code, there is a huge difference between code that works and code that works *well*. Let's say you're tasked with writing a function that takes an array of RGB values and converts them to grayscale. ChatGPT is probably going to give you two nested loops that iterate over the X and Y values, applying a grayscale transformation to each pixel. This will get the job done, but it's slow, inefficient, and generally not well-suited for production code. An experienced programmer is going to take into account possible edge cases (what if a color is out of the 0-255 bounds?), apply SIMD functions and parallel algorithms, factor in memory management (do we need a new array or can we write back to the input array?), etc.
ChatGPT is great for experienced programmers to get new ideas; I use it as a modern version of "rubber ducky" debugging. The problem is that corporations think that LLMs can replace experienced programmers, and that's just not true. Sure, ChatGPT can produce code that "works," but it will fail at edge cases and will generally be inefficient and slow.
Exactly. LLMs may replace interns and junior devs, but they won't replace senior devs. And if we replace all of the interns and junior devs, who is going to become the next generation of senior devs?
As a senior dev, a lot of my time is spent reviewing others' code, doing pair-programming, etc. Maybe in 5-10 years, I could replace a lot of what they do with an LLM, but then where would my replacement come from? That's not a great long-term direction, and it's part of how we ended up with COBOL devs making tons of money because financial institutions are too scared to port it to something more marketable.
When I use LLMs, it's like you said, to get hints as to what options I have. I know it's sampling from a bunch of existing codebases, so having the LLM go figure out what's similar can help. But if I ask the LLM to actually generate code, it's almost always complete garbage unless it's really basic structure or something (i.e. generate a basic web server using <framework>), but even in those cases, I'd probably just copy/paste from the relevant project's examples in the docs.
That said, if I had to use an LLM to generate code for me, I'd draw the line at tests. I think unit tests should be hand-written so we at least know the behavior is correct given certain inputs. I see people talking about automating unit tests, and I think that's extremely dangerous and akin to "snapshot" tests, which I find almost entirely useless, outside of ensuring schemas for externally-facing APIs are consistent.
I agree AI is a godsend for non coders and amateur programmers who need a quick and dirty script. As a professional, the quality of code is oftentimes 💩 and I can write it myself in less time than it takes to describe it to an AI.
I think the process of explaining what you want to an AI can often be helpful. Especially given the number of times I've explained things to junior developers and they've said they understood completely, but then when I see what they wrote they clearly didn't.
Explaining to an AI is a pretty good test of how well the stories and comments are written.
I think you’ve hit the nail on the head. I am not a coder, but using ChatGPT I was able to take someone else’s simple program and modify it for my own needs within just a few hours of work. It’s definitely not perfect, and you still need to put in some work to get your program to run exactly the way you want it to, but ChatGPT is a good place to start for beginners, as long as they understand that it’s not a magic tool.
For me it's because if the AI does all the work the person "coding" won't learn anything. Thus when a problem does arise (i.e. the AI not being able to fix a simple mistake it made) no one involved has the means of fixing it.
But I don't want to learn. I want the machine to free me from tedious tasks I already know how to do. There's no learning experience in creating a Wordpress plugin or a shell script.
businesses sending their whole codebase to a third party (Copilot etc.) instead of using local models
the time gained is not that substantial in most cases, as the actual "writing code" part is not the part that takes the most time; thinking and checking it is
"chatting" in natural language to describe something that has a precise spec is less efficient than just writing the code for most tasks, as long as you're half-competent. We've known that since customer/developer meetings have existed.
the dev still has to be competent enough to review the changes/output. In a way, "peer reviewing" becomes mandatory; it's long, can be tedious, and generated code really needs to be double-checked at every corner (talking from experience here; even a generated one-liner can have issues)
some businesses think that LLM outputs are "good enough" and fire/move away the people who can actually do said review, leading to more issues down the line
actual debugging of non-trivial problems ends up sending me in a lot of directions; getting a useful output is unreliable at best
building new things will sometimes confuse LLMs, making them a waste of time at best and producing even worse code sometimes
using a code chatbot to help with common, menial tasks is mostly pointless, as these tasks have already been done and sort of "optimized out" into libraries and reusable code. At best you could pull some of this into your own codebase, making it worse to maintain in the long term
Those are the downsides I can think of off the top of my head, from having used AI coding assistance (mostly local solutions, for privacy reasons). There are upsides too:
sometimes it does produce useful output in which I only have to edit a few parts to make it work
local autocomplete is sometimes almost as useful as the regular contextual autocomplete
the chatbot turning short code into longer "natural language" explanations can sometimes act as a rubber duck and help with debugging
Note the "sometimes". I don't have actual numbers because tracking that would be like, hell, but the times it does something actually impressive are rare enough that I still bother my coworker with it when it happens.
For most of the downsides, it's not even a matter of the tool becoming better; it's the usefulness to begin with that's uncertain. It does, however, come at a large cost (money, privacy in some cases, time, and apparently ecological too) that is not at all outweighed by the rare "gains".
A lot of your issues are efficiency related, which I think can realistically be solved given some time for development cycles to take hold on AI.
If they were better all around, to whatever standard you think is sufficiently useful, would you then think they would be useful?
The other related thing is that if it can reach that level of competence in coding, then it can most likely become just as competent in a variety of other domains too.
The point is, they don't get "competent". They get better at assembling pieces they were given. And a proper stack with competent developers will already have moved that redundancy out of the codebase. For whatever remains, thinking is the longest part, and LLMs can't improve that once the problem gets a tiny bit complex. Of course, I could end up having a good rough idea of what the code should look like, describe that to an LLM, and have it write the actual code with proper variable names and all, but once I reach the point where I can describe accurately the thing I want, it's usually just as fast to type it, with the added value that it's easier to double-check.
What remains is providing good insight on new things, and understanding complex requirements. While there is room for improvement, it seems more and more obvious that LLMs are not the answer: theoretically, they are not the right tool, and given the various levels of improvement we're seeing, they definitely have not proved us wrong. The technology is good at some things, but not at getting "competent".
Also, you sweep aside the privacy and licensing issues, which are big no-nos too.
LLMs have their uses; I outlined some. And in those uses, there is clear room for improvement. For reference, the solution I currently use has me accepting around 10% of the automatic suggestions, and of those, I'd say a third need reworking. Obviously if that moved up to something like 90% of suggestions being decent, with less need to fix them afterward, it'd be great. Unfortunately, since you can't trust these tools, you would still have to review the output carefully, making the whole operation probably not that big of a time saver anyway.
Coding doesn't allow much leeway. Other activities which allow more leeway for mistakes can probably benefit a lot more. Translation, for example, can be acceptable, in particular because some mishaps may automatically be corrected by readers/listeners. But with code, any single mistake will lead to issues down the way.
It doesn't adequately indicate "confidence". It could return "foo" or "!foo" just as easily, and if that's one term in a nested structure, you could spend hours chasing it.
So many hallucinations-- inventing methods and fields from nowhere, even in an IDE where they're tagged and searchable.
Instead of writing the code now, you end up having to review and debug it, which is more work IMO.
One point that stands out to me is that when you ask it for code it will give you an isolated block of code to do what you want.
In most real world use cases though you are plugging code into larger code bases with design patterns and paradigms throughout that need to be followed.
An experienced dev can take an isolated code block that does X and refactor it into something that fits in with the current code base, etc.; we already do this daily with Stack Overflow.
An inexperienced dev will just take the code block and try to ram it into the existing code in the easiest way possible without thinking about if the code could use existing dependencies, if its testable etc.
So anyway I don't see a problem with the tool, it's just like using Stackoverflow, but as we have seen businesses and inexperienced devs seem to think it's more than this and can do their job for them.
I've found it to be extremely helpful in coding. Instead of trying to read huge documentation pages, I can just have a chatbot read it and tell me the answer.
My coworker has been wanting to learn Powershell. Using a chatbot, his understanding of the language has greatly improved. A chatbot can not only give you the answer, but it can break down how it reached that conclusion. It can be a very useful learning tool.
It's great for regurgitating pre-written text. For generating new or usable code it's largely useless. It doesn't have an actual understanding of what it says. It can recombine information and elements it's seen before, but not generate anything truly unique.
I've been using it for CLI syntax and code for a while now. It's not always right but it definitely helps in getting you almost all the way there when it doesn't. I will continue to use it 😁
We built a Durable task workflow engine to manage infrastructure and we asked a new hire to add a small feature to it.
I checked on them later and they expressed they were stuck on an aspect of the change.
I could tell the code was ChatGPT. I asked "you wrote this with ChatGPT didn't you?" And they asked how I could tell.
I explained that ChatGPT doesn't have the full context and will send you on tangents like it has here.
I gave them the docs to the engine and to the integration point and said "try using only these, and ask me questions if you're stuck for more than 40 minutes."
They went on to become a very strong contributor and no longer uses ChatGPT or copilot.
I've tried it myself and it gives me the wrong answer 90% of the time. It could be useful, though: if ChatGPT found and linked the docs it thinks are relevant, I would love it, but it never does, even when asked.
Phind is better about linking sources. I've found that generated code sometimes points me in the right direction, but other times it leads me down a rabbit hole of obsolete syntax or other problems.
Ironically, if you are already familiar with the code, then you can easily tell where the LLM went wrong and adapt its generated code.
But I don't use it much because it's almost more trouble than it's worth.
my company doesn't allow it - my boss is worried about our IP getting leaked
I find them more work than they're worth - I'm a senior dev, and it would take longer for me to write the prompt than just write the code
I just don't know anything about Bash's syntax
That probably won't be the last time you write Bash, so do you really want to go through AI every time you need to write a Bash script? Bash syntax is pretty simple, especially if you understand the basic concept that everything is a command (i.e. the syntax is <command> [arguments...]; like if <condition>, where <condition> can be [ <test command syntax> ] or [[ <extended test syntax> ]]), which explains some of the weird corners of the syntax.
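A minimal sketch of that "everything is a command" idea:

```bash
# `if` just runs a command and branches on its exit status; `[` is itself a
# command (and `[[` is a shell keyword), which is why the surrounding spaces
# are mandatory.
if grep -q root /etc/passwd; then
    echo "found root"
fi

if [ -d /tmp ]; then echo "/tmp is a directory"; fi

if [[ "abc123" == *[0-9]* ]]; then echo "contains a digit"; fi
```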
AI sucks for anything that needs to be maintained. If it's a one-off, sure, use AI. But if you're writing a script others on your team will use, it's worth taking the time to actually understand what it's doing (instead of just briefly reading through the output). You never know if it'll fail on another machine if it has a different set of dependencies or something.
What negative experience do you have with AI chatbots that made you hate them?
I just find dealing with them to take more time than just doing the work myself. I've done a lot of Bash in my career (>10 years), so I can generally get 90% of the way there by just brain-dumping what I want to do and maybe looking up 1-2 commands. As such, I think it's worth it for any dev to take the time to learn their tools properly so the next time will be that much faster. If you rely on AI too much, it'll become a crutch and you'll be functionally useless w/o it.
I did an interview with a candidate who asked if they could use AI, and we allowed it. They ended up making (and missing) the same mistake twice in the same interview because they didn't seem to actually understand what the AI output. I've messed around with code chatbots, and my experience is that I generally have to spend quite a bit of time to get what I want, and then I still need to modify and debug it. Why would I do that when I can spend the same amount of time and just write the code myself? I'd understand the code better if I did it myself, which would make debugging way easier.
Anyway, I just don't find it actually helpful. It can feel helpful because it gets you from 0 to a bunch of code really quickly, but that code will probably need quite a bit of modification anyway. I'd rather just DIY and not faff about with AI.
Your boss should be more worried about license poisoning when you incorporate code that's been copied from copyleft projects and presented as "generated".
Perhaps, but our userbase is so small that it's very unlikely someone would notice. We are essentially B2B with something like a few hundred active users. We do vet our dependencies religiously, but in all actuality, we could probably get away with pulling in some copyleft code.
It doesn't pass judgment. It just knows what "looks" correct. You need a trained person to discern that. It's like describing symptoms to WebMD. If you had a junior doctor using WebMD, how comfortable would you be with their assessment?
Lots of good comments here. I think there's many reasons, but AI in general is being quite hated on. It's sad to me - pre-GPT I literally researched how AI can be used to help people be more creative and support human workflows, but our pipelines around the AI are lacking right now. As for the hate, here's a few perspectives:
Training data is questionable/debatable ethics,
Amateur programmers don't build up the same "code muscle memory",
It's being treated as a sole author (generate all of this code for me) instead of like a ping-pong pair programmer,
The time saved writing code isn't being used to review and test the code more carefully than it was before,
The AI is being used for problem solving, where it's not ideal, as opposed to code-from-spec where it's much better,
Non-Local AI is scraping your (often confidential) data,
Environmental impact of the use of massive remote LLMs,
Can be used (according to execs, anyways) to replace entry level developers,
Devs can have too much faith in the output because they have weak code review skills compared to their code writing skills,
New programmers can bypass their learning and get an unrealistic perspective of their understanding; this one is most egregious to me as a CS professor, where students and new programmers often think the final answer is what's important and don't see the skills they strengthen along the way to the answer.
I like coding with local LLMs and asking occasional questions to larger ones, but the code on larger code bases (with these small, local models) is often pretty nonsensical, though it improves with the right approach. Provide it documented functions and examples of a strong, consistent code style, write your test cases in advance so you can verify the outputs (see the sketch below), and use it as an extension of IDE capabilities (like generating repetitive lines) rather than a replacement for your problem solving.
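For example, a tiny sketch of the "tests first" part, with a hypothetical slugify function the model is asked to produce:

```bash
#!/usr/bin/env bash
# Hypothetical setup: slugify.sh is the file I'd ask the model to write; the
# checks below are written first, so its output is verified against something
# I control rather than trusted on faith.
source ./slugify.sh   # expected to define slugify()

fail=0
check() {
    local input="$1" expected="$2" actual
    actual=$(slugify "$input")
    if [[ "$actual" != "$expected" ]]; then
        echo "FAIL: slugify('$input') -> '$actual', expected '$expected'" >&2
        fail=1
    fi
}

check "Hello, World!" "hello-world"
check "  extra  spaces  " "extra-spaces"
check "Already-fine" "already-fine"
exit "$fail"
```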
I think there are a lot of reasons to hate on it, but I think that's because the ways to use it effectively are still being figured out.
Some of my academic colleagues still hate IDEs because tab completion, fast compilers, in-line documentation, and automated code linting (to them) means you don't really need to know anything or follow any good practices, your editor will do it all for you, so you should just use vim or notepad. It'll take time to adopt and adapt.
I spend a lot of time training people how to properly review code, and the only real way to get good at it is by writing and reviewing a lot of code.
With an LLM, it trains on a lot of code, but it does no review per se… unlike other ML systems, there are no negative and positive feedback systems in place to improve quality.
Unfortunately, AI is now equated with LLM and diffusion models instead of machine learning in general.
Many lazy programmers may just copy-paste without thinking too much about the quality of the generated code. The other group of people who oppose it are those who think it will kill programmer jobs.
Sure, but if you're copying from stack overflow or reddit and ignore the dozens of comments telling you why the code you're copying is wrong for your use case, that's on you.
An LLM on the other hand will confidently tell you that its garbage is perfect and will do exactly what you asked for, and leave you to figure out why it doesn't by yourself, without any context.
An inexperienced programmer who's willing to learn won't fall for the first case and will actually learn from the comments and alternative answers, but will be completely lost if the hallucinating LLM is all they've got.
Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done-for.
Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…
In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn't - and that the consequences of asking the wrong question are disastrous.
I recommend listening to the episode. The crash is the overarching story, but there are smaller stories woven in which are specifically about AI, and it covers multiple areas of concern.
The theme that I would highlight here though:
More automation means fewer opportunities to practice the basics. When automation fails, humans may be unprepared to take over even the basic tasks.
But it compounds. Because the better the automation gets, the rarer manual intervention becomes. At some point, a human only needs to handle the absolute most unusual and difficult scenarios.
How will you be ready for that if you don’t get practice along the way?
Personally, I've found AI is wrong about 80% of the time for questions I ask it.
It's essentially just a search engine with cleverbot. If the problem you're dealing with is esoteric and therefore not easily searchable, AI won't fare any better.
I think AI would be a lot more useful if it gave a percentage indicating how confident it is in its answers, too. It's very useless to have it constantly give wrong information as though it is correct.
I use AI, but whenever I do, I have to modify its output, whether it's because it gives me errors, is slow, doesn't fit my current implementation, or starts off on the wrong foot.
I have a coworker who is essentially building a custom program in Sheets using AppScript, and has been using CGPT/Gemini the whole way.
While this person has a basic grasp of the fundamentals, there's a lot of missing information that gets filled in by the bots. Ultimately after enough fiddling, it will spit out usable code that works how it's supposed to, but honestly it ends up taking significantly longer to guide the bot into making just the right solution for a given problem. Not to mention the code is just a mess - even though it works there's no real consistency since it's built across prompts.
I'm confident that in this case and likely in plenty of other cases like it, the amount of time it takes to learn how to ask the bot the right questions in totality would be better spent just reading the documentation for whatever language is being used. At that point it might be worth it to spit out simple code that can be easily debugged.
Ultimately, it just feels like you're offloading complexity from one layer to the next, and in so doing quickly acquiring tech debt.
Exactly my experience as well. Using AI will take about the same amount of time as just doing it myself, but at least I'll understand the code at the end if I do it myself. Even if AI was a little faster to get working code, writing it yourself will pay off in debugging later.
And honestly, I enjoy writing code more than chatting with a bot. So if the time spent is going to be similar, I'm going to lean toward DIY every time.
A lot of people are very reactionary when it comes to LLMs and any of the other "AI" technologies.
For myself, I definitely roll my eyes at some of the "let's shoehorn 'AI' into this!" marketing, and I definitely have reservations about some datasets stealing/profiting from user data, and part of me worries about the other knock-on effects of AI (e.g. recently it was found that some foraging books on Amazon were AI generated and, if followed, would've led to people being poisoned. That's pretty fucking bad).
...but it can also be a great tool, too. My sister is blind, and honestly, AI-assisted screen readers will be a game changer. AI describing images online that haven't been properly tagged for blind people (most of them, btw!) is huge too. This is a thing that is making my little sister's life better in a massive way.
It's been useful for me in terms of translation (Google translate is bad), in terms of making templates that take a lot of the tedious legwork out of programming, effortlessly clearing up some audio clarity issues for some voluntary voice acting "work" I've done for a huge game mod, and for quickly spotting programming or grammar mistakes that a human could easily miss.
I wish people could just have rational, adult discussions about AI tech without it just descending into some kind of almost religious shouting match.
Sounds like it's just another tool in a coding arsenal! As long as you take care to verify things like you did, I can't see why it'd be a bad idea.
It's when you blindly trust that things go wrong.
I use it as a time-saving device. The hardest part is spotting when it's not actually saving you time, but costing you time in back-and-forth over some little bug. I'm often better off fixing it myself when it gets stuck.
I find it's just like having another developer to bounce ideas off. I don't want it to produce 10k lines of code at a time, I want it to be digestible so I can tell if it's correct.
but I chose Bash because it made the most sense: Bash is shipped with most Linux distros out of the box, and one does not have to install another interpreter/compiler for another language.
Last time I checked (because I was writing Bash scripts based on the same assumption), Python was actually present on more Linux systems out of the box than Bash.
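Either way, it's quick to check what a given machine actually ships with; a small sketch:

```bash
# List which common interpreters are actually installed on this machine.
for interp in bash dash sh python3 python perl; do
    if command -v "$interp" >/dev/null 2>&1; then
        printf '%-8s %s\n' "$interp" "$(command -v "$interp")"
    else
        printf '%-8s (not found)\n' "$interp"
    fi
done
```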
I have worked with somewhat large codebases before using LLMs. You can ask the LLM to point out a specific problem and give it the context. I honestly don't see myself as capable without an LLM. And it is a good teacher; I learn a lot from using LLMs. No free advertisement for any of the suppliers here, but they are just useful.
You get access to information you can't find anywhere else on the Web. There is a widespread negative reaction to it, but it is useful.
(Edit) Also, I would like to add that the people who say questions won't be asked anymore have seemingly never tried getting answers online in a discussion forum - people are viciously ill-tempered when answering.
With an LLM, you can just bother it endlessly and learn more about the world while you do it.
Keep in mind that at its core an LLM is a probability-driven autocompletion mechanism built on the vast training data it was fed. A fine-tuned coding LLM has data more in line with producing coding solutions. So when you ask it to generate code for a very specific purpose, it's much more likely to find a mesh of matches that will work well most of the time. Be more generic in your request, and you could get all sorts of things, some that even look good at first glance but have flaws that will break them. The LLM doesn't understand the code it gives you, nor can it reason about whether it will function.
Think of an analogy where you Googled a coding question, took the first twenty hits, and merged all the results together to give an answer. An LLM does a better job than this, but the idea is similar. If the data it was trained on was flawed from the beginning, such as some of the hits you might find on Reddit or Stack Overflow, how can it possibly give you perfect results every time? The analogy is also why a much narrower query for coding may work more often: if you Google a niche question you will find more accurate, or at least more relevant, results than if you just try a general search and paste together anything that looks close.
Basically, if you can help the LLM hone in its probabilities on the better data from the start, you're more likely to get what may be good code.
My workplace of 5 employees and 2 owners has embraced it as an additional tool.
We have Copilot inside Visual Studio Professional and it’s a great time saver. We have a lot of boilerplate code that it can learn from, and why would I want to waste valuable time writing the same things over and over? If every list page follows the same pattern then it’s boring; we are paid to solve problems, not just write the same things.
We even have a tool powered by AI, made by the owner, which we can type commands into and it will scaffold all our boilerplate. Or it can watch the project, and if I update a model it will generate the mutations and queries in C#, set up the GraphQL layer, and then implement some views in React/TypeScript.
[NB: I'm no programmer. I can write some few lines of bash because Linux, I'm just relaying what I've read. I do use those bots but for something else - translation aid.]
The reasons that I've seen programmers complaining about LLM chatbots are:
concerns that AI will make human programmers obsolete
concerns that AI will reduce the market for human programmers
concerns about the copyright of the AI output
concerns about code quality (e.g. it assumes libraries and functions out of thin air)
concerns about the environmental impact of AI
In my opinion the first one is babble, the third one is complicated, but the other three are sensible.
I don't think that the current approaches being used by generative AIs are sufficient to reliably produce correct code; I think that they're more amenable to human-consumable output (and even there, I'm much more enthusiastic about their use for images than text, as things stand). A human needs approximately-correct material to cue their brain; CPUs are more particular.
We'll probably get there, in the same sense that we can ultimately produce human-level AI for anything, but I think that it'll entail higher-level reasoning about a problem, which present generative text approaches don't do.
I did start with an internet search.... I could not find how to easily pass values into a function and return from one,
So, now, this I have a hard time with.
When I search for "pass value function bash", this is the first page I get, which clearly shows an example:
This isn't where I'd consider generative AI to be a useful example; it's something that there will be existing material already readily-available via a search.
The other issue with using generative AI for coding is that for taking pre-existing code for common tasks and using it in multiple programs, we already have an approach: use libraries. That way code gets maintained and such, but doesn't need to be reimplemented by humans over-and-over.
Say someone says "I need linked-list code". Okay, I mean, that's a pretty common, plain Jane thing to need.
But if you use a library, and there's a bug in that code, and it gets fixed, then the bugfix propagates when you update to a newer library. If you generate a linked-list implementation, even if you wind up with working linked-list code at the end, then that isn't gonna happen.
As someone who just delved into a related but unfamiliar language for a small project, I found it relatively correct and easy to use.
There were a few times it got itself into a weird “loop” where it insisted on doing things in a ridiculous way, but prior knowledge of programming was enough for me to reword and “suggest” different, simpler, solutions.
Would I have ever got to the end of that project without my knowledge of programming and my suggestions? Likely, but it would have taken a long time and the code would have been worse off.
The irony is, without help from copilot, I’d have taken at least three times as long.
It boils down to Lemmy having a disproportionate number of leftist liberal arts college student types. That's just the reality of this platform.
Those types tend to see AI as a threat to their creative independent business. As well as feeling slighted that their data may have been used to train a model.
It's understandable why lots of people denounce AI out of fear, spite, or ignorance. It's hard to remain fair and open to new technology when it's threatening your livelihood and its early foundations may have scraped your data non-consensually for training.
So you'll see an AI-hate circlejerk post every couple of days from angry people who want to poison models and cheer for the idea that it's just trendy nonsense. Don't debate them. Don't argue. Just let them vent and move on with your day.
Lemmy is an outlier where anything "AI" immediately triggers the luddites to scream and rant (and occasionally send threats over PMs...) that it is bad because it is "AI" and so forth. So... massive grain of salt.
Speaking as (for simplicity's sake) a software engineer who wears both a coder and a manager hat?
"AI" is incredibly useful for charlie work. Back in the day you would hire an intern or entry level staff to write your unit tests and documentation and utility functions. But, for well over a decade now, documentation and even many unit tests can be auto-generated by scripts for vim or plugins for an IDE. They aren't necessarily great but... the stuff that Fred in Accounting's son wrote was pretty dogshit too.
What LLMs+RAG do is step that up a few notches. You still aren't going to have them write the critical path code. But you can farm off a LOT more charlie work, to the point where you just need to do the equivalent of reviewing an MR that came from a plugin rather than from a kid who thinks we don't know he reeks of weed.
And... that is good and bad. Good in that it means smaller companies/teams are capable of much bigger projects. And bad because it means a lot fewer entry level jobs to teach people how to code.
So that is the manager/mentor perspective. Let's dig a bit deeper on your example:
I don't like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. Also, I have not written anything of this complexity before in Bash, just a bunch of commands on multiple separate lines so that I don't have to type them one after another. But this one required many rather advanced features. I was not motivated to learn Bash; I just wanted to put my idea into action.
I did start with an internet search, but the guides I found were lacking. I could not find how to easily pass values into a function and return from one, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors that occurred in a previous command, how to separate the letters and numbers in a string, etc.
Honestly? That sounds to me like foundational issues. You already articulated what you need, but you wanted to find an all-in-one guide rather than googling "bash function input example" or "bash function return example" or "strip trailing slash from directory path linux" and so forth. Also, I am pretty sure I very regularly find a guide that covers every one of those questions except for string processing every time I forget the syntax of a for loop in bash and need to google it.
And THAT is the problem with relying on these tools. I know plenty of people who fundamentally can't write documentation because their IDE has always generated (completely worthless) doxygen for them. And it sounds like you don't know how to self-educate on how to solve a problem.
Which is why, generally speaking:
I still prefer to offload the charlie work to newbies because it helps them learn (and it lets me justify their paycheck). And usually what I do is tell them I want to "walk you through our SDLC. it is kind of annoying" to watch over their shoulder and make sure they CAN do this by hand. Then... whatever. I don't care if they pass everything through whatever our IT/Cybersecurity departments deem legit.
Which... personally? I generally still prefer "dumb" scripts to generate the boilerplate for myself. And when I do ask chatgpt or a "local" setup: I ask general questions. I don't paste our codebase in. I say "Hey chatgpt, give me an example of setting the number of replicas of a pod based upon specific metrics collected with prometheus". And I adapt that. Partially to make sure I understand what we are adding to our codebase and mostly because I still don't trust those companies with my codebase and prompts. Which... is probably going to mean moving away from VSCode within the next year (yay Copilot) but... yeah.
A lot of people spent many, many nights wasting away at learning some niche arcane knowledge, and now they are freaking out that a kid out of college can do what they can with a cool new machine. Maybe not fully what they do, but 70% of the way there, and that makes them so hateful. They'll pull out all these articles and studies, but they're just afraid to face the reality that their time and life was wasted and how unfair life can be.
Coders are gonna get especially screwed by AI, compared to other industries that were disrupted by leaps in technology.
Look at auto assembly. Look at how many humans used to be involved in that process. Now a lot of the assembly is performed by robotics.
The real sad part is that there's tons of investment (in terms of time and in terms of money) to become a skilled programmer. Any idiot can read a guide on Python and throw together some functional scripts, but programming isn't just writing lines of code. That code comes from tons of experience, experiments, and trial and error.
At least auto workers had unions though. Coders don't have that luxury. As a profession it really had its big boom at a time when people had long since been trained to be skeptical of them.
I don't think it's the same at all. Building code isn't the same as building physical vehicle parts. All it'll mean is that any company that uses strictly AI will be beaten by a company using AI plus developers, because the developers will just add AI as another tool in their toolbox to develop code.
People are in denial. AI is going to take programmer's jobs away, and programmers perceive AI as a natural enemy and a threat. That is why they want to discredit it in any way possible.
Honestly, I've used chatGPT for a hundred tasks, and it has always resulted in acceptable, good-quality work. I've never (!) encountered chatGPT making a grave or major error in any of the questions that I asked it (physics and material sciences).