Wow, the text generator that doesn't actually understand what it's "writing" is making mistakes? Who could have seen that coming?
I once asked one to write a basic 50-line Python program (just to flesh things out), and it made so many basic errors, the kind any first-year CS student could catch. Nobody should trust LLMs with anything related to security, FFS.
I have one right now that looks at data and says "Hey, this is weird, here are related things that are different when this weird thing happened. Seems like that may be the cause."
Which is pretty well within what they are good at, especially if you are doing the training yourself.
I wish we could say the students will figure it out, but I've had interns ask for help and then I've watched them try to solve problems by repeatedly asking ChatGPT. It's the scariest thing - "Ok, let's try to think about this problem for a moment before we - ok, you're asking ChatGPT to think for a moment. FFS."
I had a chat w/ my sibling about the future of various careers, and my argument was basically that I wouldn't recommend CS to new students. There was a huge need for SW engineers a few years ago, so everyone and their dog seems to be jumping on the bandwagon, and the quality of the applicants I've had has been absolutely terrible. It used to be that you could land a decent SW job without having much skill (basically a pulse and a basic understanding of scripting), but I think that time has passed.
I absolutely think SW engineering is going to be a great career long-term, I just can't encourage everyone to do it because the expectations for ability are going to go up as AI gets better. If you're passionate about it, you're going to ignore whatever I say anyway, and you'll succeed. But if my recommendation changes your mind, then you probably aren't passionate enough about it to succeed in a world where AI can write somewhat passable code and will keep getting (slowly) better.
I'm not worried at all about my job or anyone on my team, I'm worried for the next batch of CS grads who ChatGPT'd their way through their degree. "Cs get degrees" isn't going to land you a job anymore; passion for the subject matter will.
Altering the prompt will certainly give a different output, though. Ok, maybe "think about this problem for a moment" is a weird prompt; I see how it actually doesn't make much sense.
However, including something along the lines of "think through the problem step-by-step" in the prompt really makes a difference, in my experience. The LLM will then, to a higher degree, include sections of "reasoning", thereby arriving at an output that's more correct or of higher quality.
This, to me, seems like a simple precursor to the way a model like the new o1 from OpenAI (partly) works: it "thinks" about the prompt behind the scenes, presenting the user with only the resulting output and a hidden-by-default summary of the raw "thinking".
Of course, it's unnecessary - maybe even stupid - to include nonsense or small talk in LLM prompts (unless it has proven to actually enhance the output you want), but since (some) LLMs happen to be lazy by design, telling them what to do (like reasoning) can definitely make a great difference.
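For anyone curious what that looks like in practice, here's a minimal sketch of the "step-by-step" prompt using the OpenAI Python client (openai>=1.0); the model name is just a placeholder, and any chat-style API should work the same way:

```python
# Minimal sketch: nudging a chat model to show its reasoning before answering.
# Assumptions: the openai package (>=1.0) is installed and OPENAI_API_KEY is set;
# "gpt-4o-mini" is a placeholder model name.
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 3:40 pm and the trip takes 2 h 35 min. When does it arrive?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # Explicitly asking for step-by-step reasoning tends to make the model
        # emit intermediate steps, which often improves the final answer.
        {
            "role": "system",
            "content": "Think through the problem step-by-step, "
                       "then give the final answer on the last line.",
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```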
All the while it gets further and further from the requirements. So you open five more conversations, give them the same prompt, and try to pick which one is least wrong.
All the while realising you did this to save time but at this point coding from scratch would have been faster.
I interviewed someone who used AI (CoPilot, I think), and while it somewhat worked, it gave the wrong implementation of a basic algorithm. We pointed out the mistake, the developer fixed it (we had to provide the basic algorithm, which was fine), and then they refactored and AI spat out the same mistake, which the developer again didn't notice.
AI is fine if you know what you're doing and can correct the mistakes it makes (i.e. use it as fancy code completion), but you really do need to know what you're doing. I recommend new developers avoid AI like the plague until they can use it to cut out the mundane stuff instead of filling in their knowledge gaps. It'll do a decent job at certain prompts (i.e. generate me a function/class that...), but you're going to need to go through line-by-line and make sure it's actually doing the right thing. I find writing code to be much faster than reading and correcting code so I don't bother w/ AI, but YMMV.
An area where it's probably ideal is finding stuff in documentation. Some projects are huge and their search sucks, so being able to say, "find the docs for a function in library X that does..." is genuinely useful. I know what I want, I just may not remember the name or the module, and I certainly don't remember the argument order.
That sums up my experience too, but I have found it good for discussing functions for SQL and Powershell. Sometimes, it’ll throw something into its garbage code and I’ll be like “what does this do?” It’ll explain how it’s supposed to work, I’ll then work out its correct usage and solve my problem. Weirdly, it’s almost MORE helpful than if it just gave me functional code, because I have to learn how to properly use it rather than just copy/paste what it gives me.
It was ChatGPT from earlier this year. It wasn't a huge deal for me that it made mistakes, because I had a very specific use case and just wanted to save some time; I knew I'd have to troubleshoot grafting it into my function, but even after I pointed out that it was using deprecated syntax (and how to correct it), it just spat out the code again with even more errors and still using deprecated syntax.
All LLMs will fail like this in some way, because they don't actually understand what they're generating (i.e. they have no mechanism for self-evaluating the veracity of their statements).
I've been laughing at this quote for 5 minutes straight
It's so good
He knows he's right
Also: I code sometimes, and all of my code is of masterpiece quality. I cannot debug my own code, I ask for outside help and we have to dismantle the NT kernel to find out what's gone wrong
Good. This is digital Darwinism at its finest. Weeds out the companies who thought they could save money by relying on a digital monkey instead of actual professionals.
Lmao my job announced layoffs a few months back. They continue to parade their corporate restructuring plan in front of us like we give a fuck if shareholders make money. My output has dropped significantly as I search for another role. Whatever code I do write now is always just copy pasted from AI (which is getting harder to use...fuck you Copilot). I give zero fucks about this place anymore. Maybe if people had some small semblance of investment in their company's success (i.e.: not milked by shareholders and beaten to dust by shitty profit driven metrics that take away from the core business), the employees might give enough fucks to not copy paste shitty third party code.
Additionally, this is a training issue. Don't offload the training of your people onto the universities (which then trap the students into an insurmountable debt load leading them to take jobs they otherwise wouldn't want to take just to eat and have a roof over their heads). The modern corporate landscape has created a perfect shitstorm of disincentives for genuine effort and diligence. Then you expect us to give a shit about your company even though the days of 40 years and a pension are now gone. We're stuck with 401k plans and social security and the luck of the draw as to whether we can retire or not. Work your whole life for what? Fuck you. I'm gonna generate that AI code and enjoy my 30s and 40s.
A workforce trapped by debt, forced to prioritize job security and paycheck size over passion or purpose. People end up in roles they don't care about, working for companies they have no investment in, simply to keep up with loan payments and the ever increasing cost of living.
"Why is my organization falling apart!?" Fucking look up from the stupid fucking metrics that don't actually tell you anything you dumb fucks. Make an actual human decision and fix the wealth inequality. It's literally always wealth inequality.
"People work in roles they don't care about, for companies they have no investment in, to pay loans they shouldn't have."
That sounds like a fight club quote lol. I know you didn't say "loans they shouldn't have" but the cost of college is just stupidly high. It doesn't have to be free but come on.
I beg to differ! My degree was free for all intents and purposes, and no, it didn't take away from the challenge or the quality of education. I cried blood tears in order to graduate but it was worth it.
15 years ago I got a job where I wasn't allowed to do anything. I hated it. I wanted to learn and be valuable and be valued. I left that job.
I worked for a bank and then Red Hat and I loved what I did and burned myself out trying to make them happy. Only to find out they still didn't value me.
I switched jobs two years ago and increased my pay 30% overnight and back to a job doing nothing. And I'm totally fine with it now. I have a family and I focus on them and during work, if they don't have anything for me to do I make my own happiness.
Fuck corporations. I'll take your money, I'll never again kill myself as I'll never be valued anyway. Jobs aren't worth it. People are.
I told my manager that I've been burned and can't make myself work hard for another company again. She's leaving so there's no vested interest in the company for her. But yeah, fuck these cunts.
Similar trajectory for me, but I'm now being micromanaged on the daily. We got a new CIO recently who is micromanaging his direct reports and our culture has evaporated overnight. The shit is indeed rolling down hill and the writing is on the wall to leave. I know it's not just me either. There will be an exodus when rates get cut and hiring picks up again. This place is fucked.
But that's the key. If you can find something and lay low with minimal annoyance, hang onto that for as long as you can.
For me it's the "Stop responding" button. Sometimes I'll neglect something in my prompt, such as the fact that I'm stuck on ES5 JavaScript in my job (ServiceNow). It'll spit out ES6+ with let declarations or something like that, and I have to go back and qualify my limitations. So I click stop responding. What used to happen was that it would stop and allow for additional prompting. Now it's just a client-side trick: it hides the output, but the server is still returning shit in the background, so if I try to re-prompt or add context it finishes what it was originally saying first, then tacks the new answer onto the old one without pause, separation, or human-readable formatting that would indicate that there is a new output. It's an awful experience.
I've been using perplexity.ai but my company thinks its agreements will stop Microsoft from training their AIs on our proprietary data, so I have to be more careful with perplexity than Copilot.
Reminds me of the time that I took down the corporate website by translating the entire website into German. I'd been asked to do this, but I hadn't realized that the auto-translation plug-in actually rewrote code into German; I thought it was just going to alter the HTML with JavaScript at runtime, but nope. It actually edited the files.
It also translated the password into German which was fun because it was just random characters so I have no idea what it translated into.
Can we take a moment to ask ourselves - how the hell did piping to shell become OK? We have all kinds of methods for deploying stuff - from the age-old tarball to the new shiny Flatpak. But somehow we also became OK with piping a script from curl straight into sh.
I've had this argument with them a few times at work. They are definitely going to replace this all with AI, probably within the next year, and no amount of us pointing out that it won't work and that they'll end up having to bring us back at 3x the rate seems to have any effect on them.
I'm probably going to have to listen to a lot of arguments about this strawberry thing tomorrow.
I was once in a similar position: company merger and they decided to move support offshore. We got 6 months lead notice and generous severance paid out as long as we stayed to the end. Fast forward a year and they took 85% customer approval to 13%. We got hired back at 1.5x our old pay rate, so not quite the 3x you mentioned. Hoping this works out similar for you in the end.
As stated in the article, this has less to do with using AI and more to do with sloppy code reviews and code quality enforcement. Bad code from AI is just the latest version of mindlessly pasting from Stack Overflow.
I encourage jrs to use tools such as Phind for solving problems but I also expect them to understand what they’re submitting and be ready to defend it no differently to any other PR. If they’re submitting code they don’t understand that’s incredibly unprofessional and I would come down very hard on them. They don’t do this though because we don’t hire dickheads.
Now we have AI generated shit code, with devs that don't understand the low level details of both the language, and the specifics of the generated code.
So we basically have content entry (ai inputs) and extremely shitty QA bundled into the "developer" role.
As a 20 year veteran of the industry, people keep asking me if I think AI will make developers obsolete. I keep telling them "maybe some day, but today's LLMs are not it. The AI bubble is going to burst, and a few legit use cases will make it through"
Yeah but... I asked ChatGPT once how to style something in Asciidoctor's style.yml. It proposed HTML syntax (some inline stuff can be done with HTML tags in Asciidoctor, if the output is HTML). After the usual apology, it suggested some wrong YAML. On the third try, because the formatting was wrong, it mixed them both.
I mean, sure, it's a niche use case in a somewhat obscure (lots of moving parts) lightweight markup. But still, this was a lesson.
We used to have these shit developers and I accepted a lot of bad code back then -- if it actually worked -- because otherwise "code review" is full-on training, which is an entire other job from the one I was hired to do.
The client ditched that contracting firm, and the devs I work with now are worth putting in time on code review with -- but damn, we got hella shit code in our codebase to deal with now. Some of it got tossed, some of it ... we live with.
If I was still in a senior dev position, I’d ban AI code assistants for anyone with less than around 10 years experience. It’s a time saver if you can read code almost as fluently as you can read your own native language but even besides the A.I. code introducing bugs, it’s often not the most efficient way. It’s only useful if you can tell that at a glance and reject its suggestions as much as you accept them.
Which, honestly, is how I was when I was first starting out as a developer. I thought I was hot shit and contributing and I was taking half a day to do tasks an experienced developer could do in minutes. Generative AI is a new developer: irrationally confident, not actually saving time, and rarely doing things the best way.
You make a good point about using it for documentation and learning. That’s a pretty good use case. I just wouldn’t want young developers to use it for code completion any more than I’d want college sophomores to use it for writing essays. Professors don’t have you write essays because they like reading essays. Sometimes, doing a task manually is the point of the assignment.
Eh, I'm a senior dev, and I don't ban it (my boss, the director, does that for me lol; he's worried about company secrets leaking).
In fact, we had an interview for a senior dev position, and the applicant asked if they could use AI, and I told them to use whatever tools they normally would for development. It shouldn't come as a surprise that they totally botched the programming challenge because of it (introduced the same bug twice, then said they were very confident in the correctness of the code...), and that made it so much easier to filter them out from our hiring pool. If you're going to use a tool in an interview, you'd better feel confident with it. If that dev had solved the problem significantly faster than our other applicants, I would've taken that to my boss to have the team experiment with it. We budget 30 minutes for our challenges, and our seniors generally finish in under 20, and it took this applicant more than our allotted time to get the code to actually run properly (and that's with us pointing out certain mistakes the AI generated).
But no, I haven't seen an actually productive use of AI for software development, beyond searching for docs online (which you can totally do w/ Bing or Google w/o involving our codebase). You may feel more productive because more code is appearing on the screen, but the increase in bugs likely reduces overall productivity. We're always looking for ways to improve, but when I can solve the same problem in my bare-bones editor (vim) faster than my more junior colleagues can with their fancy IDEs, I really don't think AI is going to be the thing that improves our productivity, actually understanding logic will. If someone demonstrates that AI does save time, I'll try it out and campaign for it.
Anyway, that's my take as someone who has been in the industry for something like 15 years. Knowing your tools is more important, IMO, than having more tools.
I had my suspicions before but the moment I realized for certain Elon Musk couldn’t run a software company was when he judged people by lines of code written.
I've worked as a freelancer (specifically as a Contractor) in Software Development for over a decade and more often than not I ended up having to work with some existing code base, having to deal with the design choices, coding style and bugs of somebody else, often multiple somebody elses.
There's nothing quite as "entertaining" as having to deal with 3+ different code and design styles in the same code base, because every previous developer thought their own way of doing things was the superior way, so they just added one more layer of their style (not just coding but, worse, software design) on top of what was already there, increasing the mess, rather than working within the existing structure and style and doing some refactoring.
Anyway, in my experience, having to read, understand, and work with existing code that you yourself did not write is far more time-costly and less pleasant than actually doing your stuff from scratch.
See? AI creates jobs! Granted, it's specialized mop up situations, but jobs!
It'll be even more interesting in the future! Every now and then a T1000 will lose all its hydraulic fluids right out its prosthetic anus and they'll need someone there with a mop and bucket! Our economy lives on...
If by economy you mean some of us are needed to mop up hydraulic ass-juices at gunpoint I suppose you're technically correct. At least they have to feed us, right?
Having spent most of my career working as a senior contractor, which often meant landing on code bases with 3+ layers of fuckups, I can only imagine how painful it will be to end up having to clean and fix AI generated code, since that doesn't even have a consistent coding style or pattern of design errors and bugs.
I’m not sure how AI is supposed to understand code. Most of the code out there is garbage. Even most of the working code out there in the world today is garbage.
Heck, I sometimes can’t understand my own code. And this AI thing tries to tell me I should move this code over there and do this and that and then poof it doesn’t compile anymore. The thing is even more clueless than me.
Can confirm. At our company, we have a tech debt budget, which is really awesome since we can fix the worst of the problems. However, we generate tech debt faster than we can fix it. Adding AI to the mix would just generate tech debt even faster, because instead of senior devs reviewing junior dev code, we'd have junior devs reviewing AI code...
The point of the article isn't that AI is outright useless as a coding tool but that it lulls programmers into a false sense of security regarding the quality and security of their code. They aren't reviewing their work as frequently because of this new reliance on AI as a time saver, and as such are more likely to miss any mistakes that they or the AI made.
The point of the article isn’t that AI is outright useless as a coding tool but that it lulls programmers into a false sense of security regarding the quality and security of their code.
Lulling them into a false sense of security is half of what makes it useless. The fact that it makes shitty code is the other half.
But the job of a software developer is not to write good code, it is to deliver features. People have been writing bad code without any AI for decades. Businesses often prioritize speed over quality, rewarding teams that deliver features quicker.
AI can be a useful tool, but it’s not a substitute for actual expertise. More reviews might patch over the problem, but at the end of the day, you need a competent software developer who understands the business case, risk profile, and concrete needs to take responsibility for the code if that code is actually important.
AI is not particularly good at coding, and it’s not particularly good at the human side of engineering either. AI is cheap. It’s the outsourcing problem all over again and with extra steps of having an algorithm hide the indirection between the expertise you need and the product you’re selling.
Debugging and maintenance have always been the hardest part of large code bases... writing the code is the easy part. Offloading that part to AI only makes the hard stuff harder.
I have a lot of empathy for a lot of people, even ones who really don't deserve it. But when it comes to people like these, I have absolutely none. If you make a chatbot do your corporate security, it deserves to burn to the ground.
Also, it is pure junk. ChatGPT code may come out fast on the screen, but it's garbage. I tried Python and C++, both just pure garbage. Sure, I got it to do what I wanted, but only after a day of hair-pulling repetitive madness. Simple task: open an image and invert it. Then, well, it opened the image but didn't invert it. Or maybe it's upside down. Can you open the image right side up and invert it... fuck, fuck, why is the window full screen? Did I ask for full screen? Shit heavens, no! Anyway, it's a fuckin idiot just rambling code at me. (What I actually wanted amounts to the few lines sketched below.)
It's just an example. I did get useful code from all this effort but usually the first prompt gets the closest. Everything else is like a bad genie story. Exactly like this: https://youtu.be/lM0teS7PFMo?si=yMtEaVkpSrn9q5Ap
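For reference, the task being described is only a few lines with Pillow; here's a minimal sketch (assuming Pillow is installed, and the filename is just a placeholder):

```python
# Minimal sketch of the task above: open an image and invert its colors.
# Assumption: Pillow is installed; "photo.jpg" is a placeholder path.
from PIL import Image, ImageOps

img = Image.open("photo.jpg").convert("RGB")  # RGB avoids mode issues with invert
inverted = ImageOps.invert(img)

inverted.show()                      # opens in the default viewer (not full screen)
inverted.save("photo_inverted.jpg")
```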
Open it how, using what, at what size, what codec, where, for how long, for what purpose, using what data structures, what libraries, what versions? You sound like my PO trying to request an update to software they have no comprehension of.
Wait. AI doesn't have logic built in beyond the untested data that's thrown at it? Who could have told someone this would happen ahead of time? Conspiracy theorists.
Good question. It's to narrow it down to the point, and it's more likely to be fair use if I add commentary and include less of the content. For the full content you should view the actual article.
Thank you! That is indeed a valid point. I was hoping more people would come up with this valid remark. Do you have any other questions or predictions you would like to hear? So that we don't get "surprises" in the field of technology again?
I predicted that introducing AI to software engineers (especially juniors) would result in overall worse code, since apparently people don't feel responsible for the genAI code, while I believe the responsibility still lies fully with the humans who deliver it. On top of that, most devs don't do good code reviews in general (often due to lack of time or... skill issues). And now we have AI generating code that gets accepted too easily, on top of reviewers who blindly approve it. And no unit tests or integration tests. And then we have this current situation. No wonder this would happen. If you are in software engineering, you know exactly what I'm talking about, especially if you work at a larger company.
The thing I dislike most about code assisting tools is that they're geared to answering your questions instead of giving advice. I'm sure they also give bad recommendations but I've seen LLMs basically double down on bad code.
No, they’re giving you exactly what you’re asking for. The problem is you’re not asking for advice. You’re asking them to “build a thing” and expecting them to read your mind.
Where are the articles about humans doing the exact same shit for the last 40-50 fucking years? No one bats an eye. Look at the prompts from the people complaining about AI responses and you’ll see they don’t know how to use this shit any better than my grandparents can use a touch-tone phone.
“Build an app”
Fails
“This ai is shit”.
Just like every other piece of technology: garbage in, garbage out. If you can’t reliably describe what you want, then no one is going to be able to do it. AI just blatantly points out your descriptive failures.
I've yet to see generative AI make an error that a human couldn't make. Maybe that's why people seem so hateful of it; they were expecting it to be superhuman but instead it's too much like us.
That's on them, though. They're the ones making the claim that it's supposed to be The Culture; I don't think anyone at the companies is saying that it is.