i'm studying mechanical engineering and there's a guy in our class who's obsessed with chatgpt. he's always trying to solve all of the tasks using chatgpt and he's always the first to share the solution in zoom. so far it's never been correct but he just sticks with it...
I am a mechanical engineer. I was able to get special permission from my IT department to use LLMs as part of my workflow as a guinea pig for the department. It is completely useless.
One of the most valuable skills an engineer can have is being able to communicate technical information effectively to different audiences. GPT is an overly polite meat grinder, spitting out half-chewed technical slop.
Yeah, I use it too, and it is indeed frequently incorrect. It's good when you have basically no idea what you're doing. It can help you get on track, and then you can research on your own.
I am a hobbyist (and not very good) programmer. ChatGPT (free version) often gives me wrong answers, but it still gives me some insight into how stuff could be done (intentionally or not) or how something works, and it's actually somewhat helpful for learning. I guess that could be a double-edged sword, though, even in that regard.
It is also pretty good at detecting simple code errors, from what I have seen.
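For instance, here's a toy snippet (my own made-up example, not from any real session) showing the kind of simple slip an LLM reviewer tends to catch — an unguarded empty-list case:

```python
def average(values):
    """Mean of a list of numbers.

    The original version just did sum(values) / len(values); an LLM
    reviewer will usually flag that calling it with an empty list
    raises ZeroDivisionError and suggest a guard like this.
    """
    if not values:  # without this check, average([]) crashes
        return 0.0
    return sum(values) / len(values)
```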
Overall more positive than negative, but I wouldn't recommend using it blindly.
Don’t get it to write your code for you — it’s not gonna work 3 times out of 10. Instead, use it to review your code and help remove code smells when refactoring.
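To illustrate what I mean by a "code smell" review (hypothetical example, function names and numbers are mine): you paste in something repetitive, and it suggests the obvious cleanup rather than writing new logic from scratch.

```python
# Before: a long if/elif chain duplicating the same pattern — a classic smell.
def shipping_cost_smelly(region):
    if region == "us":
        return 5.0
    elif region == "eu":
        return 8.0
    elif region == "asia":
        return 12.0
    else:
        return 20.0

# After: the refactor an LLM reviewer will usually suggest — a lookup table
# with a default, which behaves identically but is easier to extend.
def shipping_cost(region):
    rates = {"us": 5.0, "eu": 8.0, "asia": 12.0}
    return rates.get(region, 20.0)
```

This kind of behavior-preserving rewrite is much safer to accept than generated-from-scratch code, because you can check it against the original line by line.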
I use ChatGPT for any topic I'm curious about, and like half the time when I double-check the answers, it turns out they're wrong.
For example, I asked for a list of phones with screens that don't use PWM. When I looked up the specs of the phones it recommended, it turned out they all had PWM, even though the ChatGPT answer explicitly stated that each of these phones doesn't use PWM. Why does it straight up lie?!
People on ELI5 ask questions that can be answered with a single Google search. Yet they do not do the Google search. What makes you think they will ask Bard or ChatGPT?
Because if the second-worst option is asking ELI5 something basic, then the worst option is asking AI the same question and getting a wrong answer. So they choose AI.
I don't get why people completely disregard their usefulness because of that. Just don't trust anything they say until you verify it. It's still useful for exploration or to get enough of a grasp of something that you can figure it out on your own.
Interestingly, if ChatGPT is trained on these ELI5 questions and they end up being asked less often as a result, it might get worse or go out of date on these types of questions over time, by its own doing. I especially wonder how bad this effect will get on subjects you'd normally search Stack Overflow for.
Yeah, I read a 30-year-old newspaper a while back, and it was like super-high-latency internet. Message boards, posts, replies to posts, personals, etc. None of that stuff makes it into newspapers anymore...
It is. I've seen "Write to the Editor" sections often in the magazines I check out from the library from time to time. IIRC, Popular Mechanics, Popular Science, and The New Yorker each have one.
Jokes aside, I am not talking about the "write to the editor" sections we see now. I am saying they'd use it like GOOGLE. You're not going to see someone ask "what's 32,344 divided by 7?" or "who is the senator of Idaho?" in The New Yorker.