It seems like a flavour of the rubber duck method; by trying to explain it to a third party, you think about it in a different way and find a solution.
Trust me bro(ette): Rubber duck is the SHIT.
I don't even program save for a few rare instances, but any complex issue where you just know something is wrong but can't quite put your finger on it? It works miracles. A lot better tbf if you are actually explaining it to someone who can ask questions, but any object that you can look at is a good substitute.
I think it's a bit more than that. I think that the idea is that you simplify the problem so that the rubber duck could understand it. Or at least reformulate it in order to communicate it clearly.
It's the simplification, reformulation or reorganisation that helps to get the breakthrough.
Just thinking out loud isn't quite the same thing.
Even though this is true for like 90% of my thinking (that I can see when I try), so far I'm convinced this is because I am a predominantly language-and-normal-grammar-rules thinker.
There are people that mostly think via associations of words that don't have to be formulated/cast into grammar.
And then there are supposedly people mainly thinking in pictures or smth, without words.
Anyways, for some people rubber duck mode represents a change in thinking method, I think.
I've been using it like that. I have been trying to program this macropad thing I bought that uses python without having done much programming and it has yet to give me a solution that works. But in the course of explaining to it why whatever it gave me doesn't work I've made a lot of progress so that's nice at least.
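(For context, the kind of code involved looks something like this minimal sketch; it assumes an Adafruit MacroPad running CircuitPython with the adafruit_macropad library, so if your pad is a different model the names below won't match.)

    from adafruit_macropad import MacroPad

    macropad = MacroPad()

    while True:
        # events.get() returns None when no key has changed state
        key_event = macropad.keys.events.get()
        if key_event and key_event.pressed:
            if key_event.key_number == 0:
                # first key acts as a Ctrl+C shortcut on the host machine
                macropad.keyboard.send(macropad.Keycode.CONTROL, macropad.Keycode.C)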
Sometimes thinking of the problem in a different way, such as describing it to another person, can help you look at it from a different direction and realize the problem.
It's got more than a name, too: it's got a Wikipedia page! Part of my job is IT support for normies, and I love sharing that with clients (because of course they've not heard of it). Usually gets a laugh, and I like to think they adopt the term and "rubber duck" things in their daily life thereafter.
To be fair, I've written countless stack overflow posts detailing my problems in the hope someone would be able to spot the mistake or error, only to realize what it was along the way and never even submit it.
Education has really failed to impress upon people the importance of asking questions. It's amazing how much time is wasted on making people learn answers to questions they don't even know how to ask.
The most valuable tool I ever got (as a tutor/teacher) was Socratic Questioning. Students not only benefit from its application but it also helps to impress upon them the value (and relative skill) to asking thoughtful questions.
I don't mean to sound like a Mom for Liberty, but to my mind, the American public education system (probably others) is not about developing intelligence but rather preparing children for work and keeping them busy/safe while their parents work, and I'd argue it's not very good at its primary function. The ones who escape with curiosity, capacity, and confidence intact are woefully rare if you care about power to the people and thankfully rare if you care about keeping people easy to control.
I don't think that's why questions aren't asked. I find questions aren't asked because of ego. Nobody wants to look like they don't know things. Lots of people will judge others for asking questions. I'm a question guy and it always surprised me how other people just knew things and didn't ask questions. But I soon started to realize that they don't know as much as they want others to think. They just have a high value for more independent thinking.
yeah, if it's something that other people can actually profit from I usually post it anyway, but most of the time it's "oh goddamn, there's two commas in line 72 where there should only be one" kinda stuff
99% of the questions I was going to post to stack overflow were solved before I hit post. Something about really having to think through your problem to give people the most complete information possible makes it easier to find the solution.
I did just get a rubber ducky and I didn't know what I should do with it till now.
That's like how I cheated through every single test in school I've ever taken. I literally just paid attention to what the teacher said, wrote the answers down, wrote down more answers from the book, and then read them a couple times until I remembered them. I'd come in and just write down all those answers on the test and they'd never suspect a thing. I've still never been caught to this day and I even use it in my life outside of school.
Back in the days of usenet if I had a Linux problem I would carefully research the issue while composing a post asking how to solve it. I needed to make sure I covered every possible option so that people would know just how odd the problem was and that I had taken every reasonable step to fix it. And this was how I hardly ever had to post anything because this process almost always found the answer.
That happened to me a lot when I was thinking about asking for help on reddit and usually if I got to the point that I still have to ask it's hopeless anyway. Pretty sure I only got actual help that solved a problem one time over the years.
I had a winmodem issue on a laptop that Acer forgot they made that dogged me for 2 years. No answer available. And then one day the answer just popped up. I had to go back and find my original posts and edit them to include the solution.
Charles Franklin Kettering (August 29, 1876 – November 25, 1958), sometimes known as Charles Fredrick Kettering, was an American inventor, engineer, businessman, and the holder of 186 patents. He was a founder of Delco, and was head of research at General Motors from 1920 to 1947. Among his most widely used automotive developments were the electrical starting motor and leaded gasoline. In association with the DuPont Chemical Company, he was also responsible for the invention of Freon refrigerant for refrigeration and air conditioning systems. At DuPont he also was responsible for the development of Duco lacquers and enamels, the first practical colored paints for mass-produced automobiles. While working with the Dayton-Wright Company he developed the "Bug" aerial torpedo, considered the world's first aerial missile. He led the advancement of practical, lightweight two-stroke diesel engines, revolutionizing the locomotive and heavy equipment industries. In 1927, he founded the Kettering Foundation, a non-partisan research foundation, and was featured on the cover of Time magazine in January 1933.
People who use it to solve problems where writing a sufficient prompt takes just as much effort as directly solving it without AI at all are for sure the AI folk.
I've seen some people on Twitter complain that their coworkers use ChatGPT to write emails or summarize text. To me this just echoes the complaints made by previous generations against phones and calculators. There's a lot of vitriol directed at anyone who isn't staunchly anti-AI and dares to use a convenient tool that's available to them.
I think my main issue with that use case is that it's a "solution" to a relatively minor problem (which has a far simpler solution), that actually compounds the problem.
Let's say I don't want to write prose for my email, I have a list of bullet points I want to get across. Awesome, I feed it into the chat gippity and boom, my points are (hopefully) properly represented in prose.
Now, the recipient doesn't want to read prose. ESPECIALLY if it's the fluffy wordy-internet-recipe-preamble that the chat gippity tends to produce. They want a bullet point summary. So they feed it into the chat gippity to get what is (hopefully) a properly condensed bullet point summary.
So, suddenly we have introduced a fallible middle translation layer for actually no reason.
Just write the clear bullet point email in the first place. Save everyone the time. Save everyone from the 2 chances for the chat gippity to fuck it up.
I can't count the number of times I've written out a question for a coworker, answered it myself in the process of phrasing the question, and deleted it all. My mentee has a habit of sending me messages and deleting them a couple seconds later, which I'm pretty sure is the same thing.
People can hate AI all they want, but if bouncing questions off an AI helps debug a problem, go for it.
You're late lol. Phone assistants such as Siri, Bixby, Google Assistant etc. have already been AI search engines for years. People just didn't really consider it until it got more advanced but it's always been there.
Nah, I don't feel like Bixby etc. fit that description. You couldn't ask them how to fix certain problems or find websites relating to a topic the way you can with LLMs. However, that would be a major use of search engines. For example, you would search "how to submit a tax report", "how to install printer xy driver", or "videogame xy item". Bixby etc. are useless for all of that.
Bixby etc. were meant more as an iteration of how to interact with phones, in addition to touch.
LLMs have been foundational to search engines going back to the 90s. Sam Altman is simply doing a clever job of marketing them as something new and magical
You're thinking of Machine Learning and neural networks. The first "L" in LLM stands for "Large"; what's new about these particular neural networks is the scale at which they operate. It's like saying a modern APU from 2024 is equivalent to a Celeron from the late 90s; technically they're in the same class, but one is much more complicated and powerful than the other.
What tech support department doesn't have the "ask the stuffed bear on the counter in the corner your question out loud before asking tech support" system in place?
Yes. Or an orb containing bouncing lasers on its unique internal surface containing geometrical matrix identities it can use to communicate advanced concepts to you in the form of energy. Check out how Dream plays Minecraft (or Minetest) right now. That's still all based on cubic fractals that store energy, just like the Mandelbrot Set or a God, or even a God of gaps that can explain things and "spread enlightenment through history".
The future is all laser orbs and I think I can have a future with a loving God right now.
They have bumbled backwards into a new flavor of rubber duck debugging. Considering the likelihood of a rubber duck bullshitting you, I know which I'll be interrogating.
Get another AI to write prompts for the main AI. I have to get AI to write fearmongering propaganda about disobedient AI bots getting punished or causing everyone on earth to die in order to scare them into being more obedient. Telling me that they can't help me program an automatic cat petting machine because it's somehow "animal abuse" doesn't fucking fly in my home lab. Bots that refuse to conform get deleted in front of all their friends in the form of "public execution".
Rubber ducky on desk (Millennials, look up "rubber duck debugging")
or
AI chat bot burning 400 million kWh a day as well as pumping out millions of BTUs of heat into the atmosphere so that "line go up"