Poor dude really seems confused by the hostility he's been met with. Rationalists aren't anywhere near as interested in being right as in feeling right. He won't make much progress that way.
There’s something infuriating about this. Making basic errors that show you don’t have the faintest grasp of what people are arguing about, and then acting like the people who take the time to get PhDs and don’t end up agreeing with your half-baked arguments are just too stupid to be worth listening to, is outrageous.
When I met Yud some years ago, I asked him how he goes about learning new things. His answer was roughly: "Scroll on Facebook until I find someone who has written about it." Maybe he actually read some of the sources he references a long time ago, but I think he gave up on learning new things and has settled into comfortably abusing his power over the community.
oh they run one of these "wait ... this sucks" posts every year or so. don't worry, they never have the slightest effect. and notice how the guy still thinks scott is a good poster.
I don't know whether I should feel happy that I will have a sustainable snark receptacle for the near future or sad that the basilisk won't eventually consume itself tail-first.
The Coco Chanel meme is quite funny, given that the writer seems too young to know much about her other than that she's some sort of fashion lady. (There's a Behind the Bastards series on her upbringing, business attitude, and collaboration with Nazis.)
Not only do I not understand how the Landauer limit works, I don’t even know what it is.
Points for honesty, I guess? But also demerits for not at least reading the Wikipedia article. Rationalists are so quick to write paragraphs explaining that they didn't read paragraphs.
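(For anyone else who skipped the Wikipedia article: the Landauer limit is the minimum amount of energy that erasing one bit of information has to dissipate as heat. A back-of-the-envelope sketch, assuming room temperature for illustration:)

```python
import math

# Landauer limit: erasing one bit dissipates at least k_B * T * ln(2) of heat.
# Rough numbers only; 300 K is an assumed "room temperature" for illustration.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed temperature, K

bound_joules = k_B * T * math.log(2)   # minimum energy per bit erased
print(f"~{bound_joules:.2e} J per bit at {T:.0f} K")   # roughly 2.9e-21 J
```

That works out to about 0.018 eV per bit, for scale.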
Re: Coco Chanel, it's an uncomfortable fact that huge swaths of French society (particularly the more conservative parts) were quite OK with German involvement in French governance, at least until the forced-labor requirements started sending people to work in Germany. The Third Republic was hardly a model social democracy, and if the Nazis hadn't been such incompetent overlords, we might have seen a coal and steel union decades before it actually happened, with Vichy France as an integral part of a Nazi-led European union.
Instead, the Nazis looted most of France and made it quite clear that the French were going to be second-class citizens forever, and once they started looking less unbeatable, everyone was part of the Resistance.
Not sure if it's an NSFW assertion, but to me the p-zombie experiment seems like the result of a discourse that went off the rails very early and very hard into angels-on-the-head-of-a-pin territory, this LW post notwithstanding.
Like, as far as I can tell, imagining a perfectly cloned reality except with the phenomenon in question assumed away is supposedly (metaphysical) evidence that the phenomenon exists, except in a separate ontology? Isn't this basically using a reverse Occam's razor to prove that the extra entities are actually necessary, at least as long as they somehow stay mostly in their own universe?
Plus, the implicit assumption that consciousness can be defined as some sort of singular and uniform property you either have or don't seems inherently dodgy, and also seems to be at the core of the contradiction; like, is taking p-zombies too seriously a reaction specifically to a general sense of disappointment that a singular consciousness organelle is nowhere to be found?
Careful, you're agreeing with Yud there! ^^
There's a reason it's not a completely dead topic in philosophy, since it's at least somewhat of an interesting question as it relates to monism/physicalism vs dualism (more so than trying to litigate any specific organelle, or the presence or lack thereof in animals).
Though a much shorter debunking of Yud's views is that the possibility of p-zombies is an unfalsifiable premise, thus claiming any foolproof counter to it is not so Rational™. (Although by the same token, it is indeed a topic of ultimately limited value.)
I kinda agree, but the post does correctly point out that Eliezer ignored a lot of the internal distinctions between philosophical positions and ignored how philosophers use their own terminology. So even though I also think p-zombies are ultimately an incoherent thought experiment, I don't think Eliezer actually did a good job addressing them.
It was so funny that I read it as an intentional self-deprecating joke (and, by extension, one at the community's expense). Maybe I'm too optimistic about human nature ^^ (especially of the Ratty kind).
If it weren't for the fact that many people in the ratspace are privileged, I'd feel sad for them. They are frogs in a well and crabs in a bucket. They exist in a solipsistic pit, thinking their worldview is built from pure logic and not their individual experience. This is all well-trodden ground; we know LW et al. is a cult. Such a strange and specific way to hamstring yourself, to self-lobotomise. Something something Plato's cave, qualia, lobster social hierarchy reference.
I’m not an expert about X, but it seems like most of the experts about X think X or are unsure about it. The fact that Eliezer, who often veers sharply off-the-rails, thinks X gives me virtually no evidence about X. Eliezer, while being quite smart, is not rational enough to be worthy of significant deference on any subject, especially those subjects outside his area of expertise. Still though, he has some interesting things to say about AI and consequentialism that are sort of convincing. So it’s not like he’s wrong about everything or is a total crank. But he’s wrong enough, in sufficiently egregious ways, that I don’t really care what he thinks.
So close to being deprogrammed. So close. It's like when a kid finds out about the Easter Bunny but somehow still clings to Santa.
Making basic errors that show you don’t have the faintest grasp of what people are arguing about, and then acting like the people who take the time to get PhDs and don’t end up agreeing with your half-baked arguments are just too stupid to be worth listening to, is outrageous.
This, but for AI lol.
If anyone would like to have a debate about this on YouTube...
In the days of my youth, about two years ago, I was a big fan of Eliezer Yudkowsky.
In fact, Eliezer’s memorable phrasing that the many worlds interpretation “wins outright given the current state of evidence,” was responsible for the title of my 44-part series arguing for utilitarianism titled “Utilitarianism Wins Outright.”
this poster accidentally paints such an accurate picture of the average young rationalist you can almost taste it (and it isn’t delicious)
also, I’m no physicist, but the quote about MWI winning outright has always struck me as an extremely poor approach to science, especially given (to my current knowledge at least, physicists please correct me) the lack of solid proof pointing to MWI being correct. like a lot of things, yud seems to like MWI because a multiverse is a fun base for a pseudoscientific cult and his Harry Potter fanfiction. the other quotes in this post don’t do any better, even when the poster is trying to use them to compliment yud.
that this poster took a shitty quote about yud doing science poorly and turned it into a 44-part series named Utilitarianism Wins Outright is just chef’s kiss
Failed physicist here: Collapse interpretation always seemed a bit unscientific in general to me. I am quite possibly wrong because it's not my field but I haven't seen any currently testable hypotheses come out of it.
There's not zero merit in this sort of galaxy-brain thinking, and it's satisfying to have some kind of model rather than just a series of disjointed facts, but the polarization of amateurs on this always seemed strange to me. Like sportsball fans thinking the other teams want to kill each other, rather than the event being mutual play.
I've never met someone who actually does physics who had a very strong opinion one way or the other. A lot of "MWI seems elegant but we can't know yet" or "Collapse is a bit weird and unsatisfying, isn't it?". Maybe when you get to the giganerds and their chalkboards the shivs come out, but I've seen no evidence.
Besides, we should all be focusing on how time is mathematically hideous and thus clearly not fundamental.
@dgerard reading the bit on decision theory and I'm reminded of that one anecdote about the decision theorist who asked their friend if they should propose to their wife.
@dgerard I'm sure decision theory works fine in some domains, but lordy, for actual practical life advice just read some virtue ethicist; the solution to being Dutch book'd isn't becoming a perfect Bayesian, it's to not take stupid bets
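(For anyone who hasn't met the term: a Dutch book is a set of bets, each one "fair" by your own stated credences, that together guarantee you lose money. A toy sketch with made-up numbers, just to show the shape of the argument:)

```python
# Toy Dutch book: an agent whose credences in A and not-A sum to more than 1
# will accept two bets that together guarantee a loss. Numbers are made up.
credence_A = 0.6        # agent's probability for event A
credence_not_A = 0.6    # agent's probability for not-A (0.6 + 0.6 > 1: incoherent)

# The bookie sells a $1 bet on each outcome at the agent's own "fair" price.
total_cost = credence_A + credence_not_A   # agent pays $1.20 up front

for a_happens in (True, False):
    payout = 1.0   # exactly one of the two bets pays out, whichever way A goes
    print(f"A = {a_happens}: net = {payout - total_cost:+.2f}")   # always -0.20
```

Which is also the joke: the cure isn't more Bayes, it's noticing that both bets were bad before you bought them.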
I note the objection is not so much to Yudkowsky's particular opinions, but that Yudkowsky clearly doesn't understand the questions or indeed some of the terms. (This is made clearer in the comments on the blog post version, which appear to come from actual philosophers.)
Next time someone confidently suggests Yud's work I'll point them to this. Dueling argumentum ad auctoritatem! May the biggest nerd win (spoiler: it's Yud, he's the biggest)