OpenAI CPO Kevin Weil says their o1 model can now write legal briefs that previously were the domain of $1000/hour associates: "what does it mean when you can suddenly do $8000 of work in 5 minutes for $3 of API credits?" — Tsarathustra (@tsarnick) October 19, 2024
We'll see about that. AI is currently approaching the trough of disillusionment on the Gartner hype cycle. That's certainly not something one of the largest AI companies will admit, but it's probably still true.
And by the way, the article doesn't load for me. Not sure if it's my browser or if I'm being geo-blocked, but the page is just white. No text.
This headline certainly seems sensational, but I've also started seeing some really nice uses of LLMs cropping up. Some of the newer API features make them a lot more practical for building things other than simple chat bots. It remains to be seen whether the value delivered is worth the energy/data costs long term, but LLMs in general seem to be finding their feet in some ways.
Sure. I'm mainly basing my opinion on some recent research (which I can't find right now) that had some disheartening numbers on AI use in programming. As far as I remember, it found that AI assistance saves some time at the end of the day, but not a lot, while the code produced by programmers with AI help has significantly more bugs in it. That makes me doubt it's a good fit to replace professionals (at this time).
And secondly, the stock prices of companies like Nvidia tell us that some of the hot air in the AI bubble is escaping. I'd say things are calming down a bit, not accelerating.
Exactly the same, prolly... LLMs are useful for any "learned" profession, but so far I have not seen one perform beyond the college level.
I guess they can be developed further, but there isn't enough training data to make a model as good as a proper professional.
Once that dataset is available, I can see LLMs starting to take some real jobs: legal, or anyone else whose job is jockeying paper, spreadsheets, or code on a computer.
An LLM is a sorting tool. It's not capable of novel ideation, only derivative work. The only thing this might help with is research. Not to mention that federal and state regulations require human representation to file anyway.
People have often tended to think about AI and robots replacing jobs in terms of working-class jobs like driving, factories, warehouses, etc.
When it starts coming for the professional classes, as it is now starting to, I think things will be different. It's a long-observed phenomenon that many well-off sections of the population hate socialism, except when they need it; then suddenly they are all for it.
I wonder what a small army of lawyers in support of UBI could achieve?
The legal profession won't touch it until it's been 100% proven that hallucinations have been completely eliminated. And as for anything Sam Altman says: people rarely regret doubting him.
Not all lawyers are that smart or careful. And these are just the ones who let the AI do the work without checking or validating anything! For every lawyer THIS dumb, there are hundreds who let AI do the grunt work but actually validate the output for hallucinations. The main problem is that AI makes them worse lawyers, because if the AI misses something in the research, so will the lawyer.
The legal profession revolves around logic. Legal arguments are formed according to the rules of logic. A fine-tuned model would absolutely demolish any human opponent here.
Take chess for example. There's a predefined set of rules. Chess was one of the first games where ML models beat humans.
Sure, the rules of law are orders of magnitude more complex than those of chess, but the underlying principle is the same.
Law = rules. Action A is legal. Action B isn't. Doing X + Y + Z constitutes action A, and so on. Legal arguments have to abide by all rules of logic. Law is one of the most logic-heavy fields out there.
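The "law = rules" framing above can be sketched as code. This is a toy illustration only; the predicates and the "X + Y + Z constitutes action A" rule are invented for the example, and real statutes are nowhere near this clean:

```python
# Toy sketch of the rules-as-logic view of law.
# All predicates here are hypothetical, made up for illustration.

def did_x(facts): return facts.get("x", False)
def did_y(facts): return facts.get("y", False)
def did_z(facts): return facts.get("z", False)

def constitutes_action_a(facts):
    # "Doing X + Y + Z constitutes action A"
    return did_x(facts) and did_y(facts) and did_z(facts)

# "Action A is legal. Action B isn't."
LEGAL_ACTIONS = {"A"}

def is_legal(action):
    return action in LEGAL_ACTIONS

facts = {"x": True, "y": True, "z": True}
if constitutes_action_a(facts):
    print("Conduct constitutes action A; legal:", is_legal("A"))
```

The point of the sketch is just that legal conclusions can, in principle, be expressed as deterministic rules over facts; the hard part in practice is establishing the facts and interpreting ambiguous rules, which is where the real lawyering happens.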
As for LLMs not being able to reason, it's very debatable. Whether they reason or not depends upon your definition of "reasoning". Debating definitions here is useless however, as the end result speaks for itself.
If LLMs can pass certification exams for lawyers, then it means either one of two things: