I would love to get your take on the paper given that you seem to disagree with the general effective altruist grift.
Holy shit, it could not be getting any dumber: effective altruist philosopher defends longtermism from critics who deny utilitarian principles... with utilitarian principles.
WOOOOOOO MORE AXE GRINDING LETS GO!
Okay enough of that, so I was doing a little bit of a foray into the GPI cesspit to look at the latest decision theoretic drivel they've been putting out recently. And boy oh boy did I come across something juicy.
Basically this 36-page paper is one big 'nuh uh' to all the critics of longtermism, think Crary and the like; it explicitly states that critics dismiss longtermism out of hand by denying broadly utilitarian principles. That's all fair enough, but then the philosopher tries to defend longtermism by saying that denying it on broadly normative grounds incurs 'significant theoretical costs'. I've checked what these 'costs' would be, and to my admittedly quite dumb eyes they only look like 'costs' if you are a utilitarian in the first place! The entire discussion is predicated on utilitarian principles: the weighing of theoretical costs and benefits, the constant stream of bullshit new principles, and what I've always thought were completely ad hoc new rules they make up so that anything fits the criteria and longtermism comes out the ass end, all while making the discussion impervious to criticism, cos insert brand new shiny principle here. It's fucken dumb.
Not to overstate my case, I'm kinda dumb, which means I could be very wrong here, but even with that in mind I woulda expected better from a PhD.
Anyways, to end off: are there any resources that actually go through their math and fact-check that shit? I genuinely wanna see if the math they use checks out or if it's kinda cobbled together.
I genuinely hate decision theory.
Bit of a rant, but I genuinely hate decision theory. At first it seemed like a useful tool for making good long-term decisions in economics and such; then LessWrong, EA, GPI, FHI, MIRI and co took what was essentially a tool and turned it into the biggest philosophical disaster since Rand. I'm thinking of moral uncertainty, wagers, hedging, AGI, priors, Bayesianism and all the shit that's grown out of this cesspit of rationalism.
What's funny about all this is that there's no actual way to argue against these people unless you have already been indoctrinated into the cult of Bayes, and even if you manage to get through one of their arguments they'll just pull out some other bullshit principle that they either made up or saw somewhere in a massively obscure book to essentially say 'nuh uh'.
What's more frustrating is that there's now evidence that people make moral judgements using a broadly Bayesian approach, which I hope just stays in the descriptive realm.
But yeah, I hate decision theory, that is all.
How could we not tremble before thee oh almighty one
Anybody else genuinely hate the 'wagers' these guys gush about ALL THE TIME
Been waiting to come back to the steeple of the sneer for a while. It's good to be back. I just really need to sneer; this one's been building for a long time.
Now I want to gush to you guys about something that's been really bothering me for a good long while now. WHY DO RATIONALISTS LOVE WAGERS SO FUCKING MUCH!?
I mean holy shit, there's a wager for everything now. I read a wager that said we can just ignore moral anti-realism cos 'muh decision theory', that we must always hedge our bets on evidential decision theory, new Pascal's wagers, entirely new decision theories, the whole body of literature on moral uncertainty, Schwitzgebel's 1% skepticism and so. much. more.
I'm beginning to think it's the only type of argument they can make, because it allows them to believe obviously problematic things on the basis that they 'might' be true. I don't know how decision theory went from a useful heuristic in certain situations and in economics to arguing that no matter how unlikely it is that utilitarianism is true you have to follow it cos math, acausal robot gods, fuckin infinite ethics, basically providing the most egregiously smug escape hatch to ignore entire swathes of philosophy.
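To be concrete about the move being sneered at: structurally, these wagers are just expected-value arithmetic where an enormous stipulated payoff swamps an arbitrarily small probability, so the conclusion goes through at almost any credence. A toy sketch (every number here is made up for illustration, not taken from any actual paper):

```python
# Toy sketch of the "wager" structure: attach a tiny probability to an
# astronomically large payoff and expected value dominates everything,
# "no matter how unlikely" the underlying theory is.
# All numbers are invented for illustration.

def expected_value(prob_true: float, payoff_if_true: float,
                   payoff_if_false: float = 0.0) -> float:
    """Plain expected value of acting as if the theory were true."""
    return prob_true * payoff_if_true + (1 - prob_true) * payoff_if_false

# Give the theory a mere 0.1% credence, but stipulate a huge payoff
# (e.g. "10^15 future lives"):
wager = expected_value(prob_true=0.001, payoff_if_true=1e15)

# A mundane alternative: near-certain, modest payoff.
mundane = expected_value(prob_true=0.99, payoff_if_true=1_000)

print(wager > mundane)  # prints True: the payoff swamps the probability
```

The point of the sketch is that the conclusion is insensitive to the probability: drop the credence a few more orders of magnitude and the wager still "wins", which is exactly why the argument form feels unanswerable.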
It genuinely pisses me off, because they can drown their opponents in mathematical formalisms: 50-page essays all amounting to impenetrable 'wagers' that they can defend no matter how stupid they are, because this thing 'might' be true. And they can go off and create another rule (something along the lines of 'the antecedent promulgation ex ante expected pareto ex post cornucopian malthusian utility principle') that they need for the argument to go through, do some calculus, declare it 'plausible', and call it a day. Like I said, all of this is so intentionally opaque that nobody outside their small clique can understand what the fuck they are going on about, and even then there is little to no disagreement within said clique!
Anyway, this one has been coming for a while, but I hope I've found some common ground with some other people here.