Why is it impossible to reverse-engineer closed-source software?
The first programs were written directly in binary/hexadecimal, and only later did we invent programming languages and compilers to translate human-readable code into binary machine code.
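To make that forward direction concrete, here's a small illustration of my own (using Python's standard `dis` module, since Python conveniently exposes its compiled form): the readable source below compiles down to a flat list of instructions, which is all a reverse-engineering tool gets to start from.

```python
import dis

def add_tax(price, rate):
    """Readable source: meaningful names, a docstring, clear intent."""
    return price + price * rate

# The compiled form is just a sequence of opcodes; going the other
# way means reconstructing intent from instructions like these.
for instr in dis.get_instructions(add_tax):
    print(instr.opname, instr.argrepr)
```

Note that Python bytecode is an unusually friendly case, since it even preserves variable names; native machine code keeps far less.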
So why can't we just do the same thing in reverse? I hear a lot about devices, from audio streamers to footwear, rendered useless when their software becomes abandonware. Couldn't a very smart person (or an AI) just take the existing program and turn it back into code?
As others have mentioned, it's possible but very complicated. Decompilers exist, but the code they produce isn't very readable for humans: names, comments, and the original structure are all gone.
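To illustrate (a hand-made sketch, not output from any actual decompiler): both functions below behave identically, but the second is written the way decompilers typically render things, with auto-generated names, no comments, and intent that has to be guessed from the arithmetic.

```python
def clamp_volume(level, max_level=100):
    """What the original author wrote: self-documenting intent."""
    if level < 0:
        return 0
    if level > max_level:
        return max_level
    return level

def sub_401a30(a1, a2=100):
    # Typical decompiler-style rendering: generic names derived
    # from addresses, flattened logic, zero documentation.
    v1 = a1
    if v1 < 0:
        v1 = 0
    if v1 > a2:
        v1 = a2
    return v1
```

Both return the same value for every input; recovering the name `clamp_volume` and the *why* behind it is the hard part.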
I am indeed awaiting the big news headlines, which will for some reason catch everyone by surprise, when an LLM comes along that's trained to "translate" machine code into a nice, easily comprehensible high-level programming language. It's going to be a really big development: even though it wouldn't make programs legally "open source," it would make them all source-available.
You might actually consider dipping your toes into learning how to analyze/reverse these yourself. Relatively speaking, software that old can sometimes be easier to reverse.
Yeah, I'm not unfamiliar with the process (still a novice though), and I've mostly used it to circumvent something obnoxious or to tweak save files. It just takes a lot of effort when you're only looking to spend a couple of hours playing a game before bed.
I'm currently experiencing a frustrating bug in Dolphin, and I'm tempted to learn enough about reversing to tackle it myself. My MIPS buddy won't help me with it because he thinks it's a waste of time.
I like LLMs for the time they save you on laborious or mundane tasks. One day we'll have general AI, fingers crossed.
I am indeed awaiting the big news headlines that will for some reason catch everyone by surprise when a LLM comes along that's trained to "translate" machine code into a nice easily-comprehensible high-level programming language.
Another commenter dismissed the idea outright. WTF... What's implausible about an LLM that takes decompiled code, deals with the obfuscating BS, recognizes known libraries, and organizes the remaining code? That will totally happen, if it hasn't already.
There's a lot of outright rejection of the possibilities of AI these days, I think because it's turning out to be so capable. People are getting frightened of it and so jump to denial as a coping mechanism.
I wonder if there are research teams out there sitting on more advanced models right now, fretting about how big a bombshell it'll be when this gets out.
It's easy to say that we should throw AI at a problem and it will solve it in a few years, but most of the time it doesn't actually work that way. Consider the Turing Test itself, whose history goes back to the 1950s: how many decades did it take us to get anything that could reasonably come close to passing it? So anytime you think to yourself that AI is going to get there one of these days, remember that "one of these days" might actually be half a century from now.
The other aspect of this particular challenge is the setup: humans organize code in a certain way according to reasoning only the authors know, that organization gets compiled away, and then another program has to recover what the original authors might have been thinking when they designed the thing. That's a steep hill to climb. Can it be done on a small scale? It certainly can. On a large scale? Don't hold your breath.
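As a toy illustration of that "compiled away" point (my own example, nothing from a real tool): the two functions below were organized very differently by their hypothetical authors, yet behave identically for every input, so once compiled, nothing tells a reverse-engineering tool which structure, if either, the author actually wrote.

```python
def is_leap_year_verbose(year):
    """One author's organization: mirrors the textbook rule step by step."""
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

def is_leap_year_terse(year):
    # Another author's organization: a single boolean expression.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Identical behavior across a wide range of inputs, so the original
# organization (and the reasoning behind it) is not recoverable.
assert all(is_leap_year_verbose(y) == is_leap_year_terse(y)
           for y in range(1600, 2401))
```

A decompiler (or an LLM on top of one) can produce *some* valid source, but "what the author meant" is a different, much harder target.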