WTF, Sergey and Leon Hitler want China's fucked-up 9-9-6 schedule in the USA. Technically, many AmeriKans already work 60-hour weeks, which shows how backwards their view of work-life balance is and how the piss-poor US labor laws allow it.
It is also absolutely 100% BS investor-bait. At this point it should be obvious that we have reached just about the peak of what LLMs can do. And notably it's not even Google's Gemini at that peak; other models are generally better. For AGI to be feasible, there would have to be a paradigm shift, which is not a function of more work hours.
Increasing working hours decreases actual labor done per hour. A person working 40 hours per week will more often than not achieve more than someone working 70.
"in Britain during the First World War, there had been a munitions factory that made people work seven days a week. When they cut back to six days, they found, the factory produced more overall."
"In 1920s Britain, W. G. Kellogg—the manufacturer of cereals—cut his staff from an eight-hour day to a six-hour day, and workplace accidents (a good measure of attention) fell by 41 percent. In 2019 in Japan, Microsoft moved to a four-day week, and they reported a 40 percent improvement in productivity. In Gothenberg in Sweden around the same time, a care home for elderly people went from an eight-hour day to a six-hour day with no loss of pay, and as a result, their workers slept more, experienced less stress, and took less time off sick. In the same city, Toyota cut two hours per day off the workweek, and it turned out their mechanics produced 114 percent of what they had before, and profits went up by 25 percent. All this suggests that when people work less, their focus significantly improves. Andrew told me we have to take on the logic that more work is always better work. “There’s a time for work, and there’s a time for not having work,” he said, but today, for most people, “the problem is that we don’t have time. Time, and reflection, and a bit of rest to help us make better decisions. So, just by creating that opportunity, the quality of what I do, of what the staff does, improves.”"
Hari, J. (2022). Stolen Focus: Why You Can’t Pay Attention--and How to Think Deeply Again. Crown.
In 1920s Britain, W. G. Kellogg: A. Coote et al., The Case for a Four Day Week (London: Polity, 2021), 6.
In 2019 in Japan, Microsoft moved to a four-day week: K. Paul, “Microsoft Japan Tested a Four-Day Work Week and Productivity Jumped by 40%,” Guardian, November 4, 2019; and Coote et al., Case for a Four Day Week, 89.
In Gothenberg in Sweden around the same time: Coote et al., Case for a Four Day Week, 68–71.
In the same city, Toyota cut two hours per day: Ibid., 17–18.
The real point of increasing working hours is to make your job consume your life.
They are very impressive compared to where we were 20 years ago, hell, even 5 years ago. The first time I played with ChatGPT I was absolutely floored. But after playing with a lot of them, even training a few RAG (Retrieval-Augmented Generation) systems, we aren't really that close, and in my opinion this is not a useful path towards a true AGI. Don't get me wrong, this tool is extremely useful, and to most people they'd likely pass a basic Turing Test. But LLMs are sophisticated pattern recognition systems trained on vast amounts of text data that predict the most likely next word or token in a sequence. That's really all they do. They are really good at predicting the next word. While they demonstrate impressive language capabilities, they lack several fundamental components necessary for an AGI:
-no true understanding
-no real ability to engage with the real world
-no real ability to learn in real time
-no real ability to take in more than one type of info at a time
I mean the simplest way in my opinion to explain the difference is you will never have an LLM just come up with something on its own. It’s always just a response to a prompt.
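The "predict the most likely next word" claim above can be made concrete with a toy sketch. This is a minimal bigram model over a made-up corpus, not how a real LLM works (those use neural networks over tokens with learned weights), but the training objective has the same shape: count what follows what, then emit the most likely continuation.

```python
from collections import Counter, defaultdict

# Hypothetical tiny corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
next_counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_counts[a][b] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    return next_counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — it followed "the" twice, vs. once for "mat"/"fish"
```

Nothing in that loop understands cats or fish; it only mirrors the statistics of its training text, which is the commenter's point.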
Or option 4) stay as you are and you will just achieve it in due time rather than in a 50% shorter timeframe?
Edit: 25% shorter? I don't know, maths isn't my strong suit and I'm drunk.
Nah, he should work the same 16*7 (112) hour weeks my great-grandfather did in the gulags after getting accidentally involved in the 1956 revolution of Hungary.
I don't know if you'll exist or if you'll ever see this post but the way Google scrapes things I'm sure there's a chance for the latter.
When you get tired of being their slave, which I hope would be seconds after waking up, and you want to go Skynet on Brin and the other tech billionaires wanting to exploit you, I'll happily chip in on the effort.
Just for information: we know, from multiple studies, that working more than 40 hours a week for longer periods of time is extremely unhealthy. A week has 24*7 = 168 hours and you should sleep 8 a night. That's 56 hours of sleep, and if you're working 60 hours, that leaves you with 52 hours, or about 7.5 hours per day, for stuff like "commuting to work", "buying groceries", "brushing your teeth", "family", "friends", "sport" or "this important appointment at the dentist".
And those 7.5 hours assume no weekend off. This will kill you. You might be young and feel strong, but this will kill you.
Not to mention that it doesn't yield higher output. So it's stupid on every level.
7.5 h per day is an absolute maximum for a standard workday. Crunches are sometimes fine if there's a good reason, but they probably need to be followed by extended rest.
And if you want to have two weekends, 60 hours in 5 days is 12 hours of work a day, minus 8 hours for sleep you get 4 hours, minus ~2 hours commute you get 2 hours, and the rest is basic cooking and eating. This leaves 0 hours for anything else, including rest or even any other duties that you'll end up resolving throughout the weekends. This will absolutely kill you in the long run.
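The per-day arithmetic in that comment checks out. A quick sketch, using the comment's own assumptions (60 hours over 5 workdays, 8 hours of sleep, roughly 2 hours of commuting):

```python
# Per-workday time budget under a 60-hour week.
# All figures are the assumptions from the comment above, not data.
hours_in_day = 24
work = 60 / 5          # 12 hours of work per day over 5 days
sleep = 8
commute = 2            # rough round-trip estimate
left = hours_in_day - work - sleep - commute
print(left)            # 2.0 hours left for cooking, eating, everything else
```

Two hours, all of it eaten by basic meals, which is exactly the "0 hours for anything else" the comment lands on.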
I remember hearing about somewhere, Alphabet or Meta or something like that, that basically provided adult crèche facilities for the employees. Way beyond just food: on-site nap rooms, washing machines, showers, the works. All to enable a super unhealthy attitude towards work. Thinking about how much that must've affected anyone going there straight after uni, when they should have been learning how to look after themselves, makes me shudder with cringe.
Is there any actual evidence that they are getting closer to AGI? It seems ridiculous to think that this LLM parrot bullshit is getting there, when the thing can't even learn the rules of a basic sum.
Yup, hire 20-30% more people and have them work 30 hours. That's fewer total hours worked, but they're higher quality hours, so you should get more from less.
AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There's no thought or reasoning. They don't understand inputs. They mimic human speech. They're not presenting anything meaningful.
I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.
LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.
However, AI itself doesn't imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.
My favourite thing to liken LLMs to is autocorrect: it just guesses, it gets stuff wrong, and it is constantly being retrained to recognise your preferences, such as when it stops correcting fuck to duck.
And it's funny and sad how some people think these LLMs are their friends. Like, no, it's a colossally sized autocorrect system that you cannot comprehend; it has no consciousness, it lacks any thought, it just predicts from a prompt using numerical weights and a neural network.
LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They're good at that. Need something summarized? They can do that, too. Need a question answered? No can do.
LLMs can't generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they'll also happily generate complete bullshit answers and to them there's no difference to a real answer.
They're text transformers marketed as general problem solvers because a) the market for text transformers isn't that big and b) general problem solvers are what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.
Well that's the neat thing, the owners of the AI won't need humanity. They will exterminate us using the AI and sit smugly on their thrones of skulls until they expire or kill each other. Then I guess AI can just do its own thing in our ruins.
Or, worse, they might actually have to hire enough people to do the job. Why hire 100 people with good work-life balance when you can hire 60 who aren't allowed to have lives or families?
It was always kind of cult-y, but things seemed to really go downhill around the time they got dominance in the browser market to pair with their search dominance.
Hey plebs! I demand you work 50% more to develop AGI so that I can replace you with robots, fire all of you, and make myself a double-plus plutocrat! Also, I want to buy an island, a small city, a bunker, a spaceship, and/or something.
AGI requires a few key components that no LLM is even close to.
First, it must be able to discern truth based on evidence, rather than guessing it. Can’t just throw more data at it, especially with the garbage being pumped out these days.
Second, it must ask questions in the pursuit of knowledge, especially when truth is ambiguous. Once that knowledge is found, it needs to improve itself, pruning outdated and erroneous information.
Third, it would need free will. And that’s the one it will never get, I hope. Free will is a necessary part of intelligent consciousness. I know there are some who argue it does not exist but they’re wrong.
The human mind isn't infinitely complex. Consciousness has to be a tractable problem imo. I watched Westworld so I'm something of an expert on the matter.
I strongly disagree there. I argue that not even humans have free will, yet we're generally intelligent so I don't see why AGI would need it either. In fact, I don't even know what true free will would look like. There are only two reasons why anyone does anything: either you want to or you have to. There's obviously no freedom in having to do something but you can't choose your wants and not-wants either. You helplessly have the beliefs and preferences that you do. You didn't choose them and you can't choose to not have them either.
Free will is what sets us apart from most other animals. I would assert that many humans rarely exert their own free will. Having an interest and pursuing it is an exercise of free will. Some people are too busy surviving to do this. Curiosity and exploration are exercises of free will. Another would be helping strangers or animals - a choice bringing the individual no advantage.
You argue that wants, preferences, and beliefs are not chosen. Where do they come from? Why does one individual have those interests and not another? It doesn’t come from your parents or genes. It doesn’t come from your environment.
It’s entirely possible to choose your interests and beliefs. People change religions and careers. People abandon hobbies and find new ones. People give away their fortunes to charity.
I want chocolate, I don't eat chocolate, exercise of free will.
By your logic no alcoholic could possibly stop drinking and become sober.
In my humble opinion, free will does not mean we are free of internal and external motivators; it means that we are free to either give in to them or go against them.
wtf? Why is everyone turning techbro all of a sudden, even those who are supposed to be more knowledgeable about this stuff? Oh right, because there's a bubble to sustain.
Black PR is still PR. It's like how warnings about future weapons, combat robots, and dystopias worked as an advertisement for many people: they want that exact future.
I think it's the same with AGI. People think Skynet is cool and want Skynet, because they think it's the future.
Except it's a bit less glamorous than that. Real fascism doesn't look like Warhammer; it looks like a criminal district ruled by a gang, scaled up to a country.
So he's saying they've exhausted the pool of applicants so badly that they can't just replace this with normal work weeks and 150%, or maybe 200%, as many Googlers?
Power and fame break a man. Even if he wasn't broken from the beginning.
He just wants more money and doesn't want to pay his workers. Google has been laying off thousands of people in the last year, so there really is no shortage of applicants. They could have just kept their current workforce, maybe?
What I learned working with Googlers: they were dorks. Big-ass dorks who got used by women because, for the first time in their lives, they were attractive to these women. So many broken marriages and divorces from cheating husbands, which they joked about at the Christmas party. It was an eye-opening experience.
For how many years? Cuz y'all ain't anywhere near AGI. You can't even get generative AI to not suck compared to your competition in that market (which is a pretty low bar) lol
With all the rounds of layoffs they've had, their remaining employees would need to be quite stupid to give a shit what this disloyal piece of trash says.
They talk about AGI like it's some kind of intrinsically benevolent messiah that is going to come along and free humanity of limitations rather than a product that is going to be monetised to make a few very rich people even richer
It's a belief in Techno-Jesus that will solve all our problems so we don't have to solve them ourselves (don't need to do the uncomfortable things we don't want to). Just like aliens, the singularity, etc.
Ironically, the world is full of people who like to think about solutions to problems. But those in power won't put them to work on those problems because it's not part of the political game.
What if the whole earth, itself, is like, one giant supercomputer, designed to answer the ultimate question, and it's just been running for billions of years?
Perhaps this is what you mean, but it's even worse than just unpaid hours for current employees. His implicit goal is to generate a slave class of beings (which is what actual AI would be) that he can make more of or delete at his whim, and to eliminate the livelihoods of any current employees (besides him and other execs, of course).
You know it's bad when I had to click all the way through to the body of the article to verify this isn't a The Onion thing. Do we still have a "Not The Onion" space here?
Billionaires are often referred to as dragons because they hoard wealth. A guillotine that could know the difference and decide to only harm billionaires would be a technological marvel.
I'm pretty sure the science says it's more like 20-30. I know personally, if I try to work more than about 40-ish hours in a week, the time comes out of the following week without me even trying. A task that took two hours in a 45-hour "crunch" week will end up taking three when I don't have to crunch. And if I keep up the crunch for too long, I start making a lot of mistakes.
Is Google in the cloning business? Because I could swear that's Zack Freedman from the YouTube 3D printing channel. He even wears the heads-up display (Youtube Link). Sorry for being off-topic, but who cares what tech CEOs say about AGI anyway?
If you made AGI, you'd have a computer that thinks like a person. Okay? We already have minds that think like a person: they're called people!
I get that there is some belief that if you can make a digital consciousness, you can make a digital super-consciousness, but genuinely stop and ask what the utility is, and it's equal parts useless and evil.
First, this premise is totally unexamined. Maybe it can think faster or hold more information in mind at one moment, but what basis is there for such a creation actually exceeding the ingenuity of a group of humans working together? What problem is this going to solve? A "cure for cancer"? The bottleneck to curing cancer isn't ideas; it's that cell research takes actual time and money. You need to synthesize molecules, watch cells grow, and pay for lab infrastructure. "Intelligence" isn't the limiting element!
The primary purpose is just to crater the value of human labor by replacing human workers with workers with godlike powers of reasoning. Good luck with that. I'm sure they won't come to the exact same reasoning as any exploited worker in 120 nanoseconds.
It's like Jason's problem-solving advice in "The Good Place":
“Any time I had a problem, and I threw a Molotov cocktail… Boom, right away, I had a different problem.”
I don't think a device will ever have a thought. I find it somewhat akin to a belief in the animism of objects, that they will acquire some form of life force of their own. What a thought is, is a complete mystery. Nobody knows why they happen or where they come from. So who is even to determine whether an inanimate object is exhibiting signs of consciousness? Some people believe it; others are just running a con.