Collectivist @awful.systems

Scared of Roko's spoopy snek

Posts 1
Comments 37
  • Tech Bros Invented Trains And It Broke Me
  • Yeah, I didn't say he only makes those videos, just that he makes a lot of them

  • Tech Bros Invented Trains And It Broke Me
  • This guy has like a billion videos that are just some variation of "Here's a tech bro startup making a gadgetbahn and here's why it wouldn't work and trains are a thousand times better". Great that it exists, but since these startups never learn from others' mistakes and thus keep making the same missteps over and over and over again, it makes the videos very samey after a while. Not sure what I would do in his position.

  • Yud lettuce know that we just don't get it :(
  • He wanted to be the foundation, but he was scaffolding

    That's a good quote, did you come up with that? I for one would be ecstatic to be the scaffolding of a research field.

  • Scott Alexander shares conspiracy theory that COVID didn't happen
  • I left a comment that made a similar point with some data:

    4: Please stop sharing conspiracy theories

    5: Higher wages are useless if your country's infrastructure and tax system are so piss-poor that you need to spend more on basic necessities. We have economic metrics that account for some of this, such as the difference between disposable income and discretionary income. Free-market propagandists always point to the US having a high income, but the same cannot be said for discretionary income. For example, if we compare the US to the Netherlands, the US median disposable income is 41k while the Netherlands' is 36k. But let's compare how much you have to spend in your day-to-day life and calculate discretionary income based on that:

                            US       Netherlands
    income                  41k      36k
    food                    5.1k     3.7k
    shelter                 13.2k    13k
    clothing                1.2k     1.5k
    transport               6.3k     3.4k
    health                  3.2k     1.8k
    student debt            2.1k     0.8k
    discretionary income    9.9k     11.8k

    As we see, the case the free-market capitalist makes falls apart once we look at discretionary income, which collectivist and social policies ensure is higher in the Netherlands.
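
    If you want to check the sums, here's a minimal sketch in Python; the figures and categories are just the ones from the table above, nothing beyond that:

    ```python
    # Minimal sketch of the arithmetic behind the table above.
    # Figures are the medians listed in the comment, in thousands per year.
    EXPENSES = ["food", "shelter", "clothing", "transport", "health", "student debt"]

    budgets = {
        "US":          {"income": 41.0, "food": 5.1, "shelter": 13.2, "clothing": 1.2,
                        "transport": 6.3, "health": 3.2, "student debt": 2.1},
        "Netherlands": {"income": 36.0, "food": 3.7, "shelter": 13.0, "clothing": 1.5,
                        "transport": 3.4, "health": 1.8, "student debt": 0.8},
    }

    for country, b in budgets.items():
        discretionary = b["income"] - sum(b[e] for e in EXPENSES)
        print(f"{country}: discretionary income = {discretionary:.1f}k")
    # US: discretionary income = 9.9k
    # Netherlands: discretionary income = 11.8k
    ```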

    EDIT: Scott has edited the post to make 4 seem less like an endorsement and more like an ironic share. This is better, but I'd still prefer it if these things weren't spread at all.

    EDIT 2: Source for the 2021 US-Dutch disposable income vs discretionary income (as well as a lot of other comparisons between median US and Dutch expenditure): https://www.moneymacro.rocks/2021-07-02-dutch-vs-america-middle-class/

  • Scott Alexander shares conspiracy theory that COVID didn't happen

    In his original post he said:

    >4: Related, breaking news: A popular Substack claims that COVID didn’t happen at all, and that both “lab leak” and “natural origins” are part of the higher-level conspiracy to distract people from the fact that there was never a virus in the first place.

    He later edited the post to add:

    >I wonder if I could [get] even more Substack likes if I one-upped them with a theory that lockdowns never even happened, and it was just one of those Berenstein Bear or Mandela Effect things where everyone has a false memory.

    So now it's ironic, and therefore not harmful to spread the conspiracy theory to his large audience.

  • NSFW
    here's a great and useful rundown of Scott Alexander Siskind's actual awful beliefs, from a centrist who found it difficult to accept until that email leaked
  • Wasn't phrenology about skull shape and its influence on mental traits in general? Otherwise it's not really a field of study; it's just the single claim that larger skull = more intelligence (which is just a less precise version of more childhood nutrition = taller = larger skull = more intelligence). But phrenologists also claimed they could explain all sorts of traits, like criminality and personality, with things like bumps in the skull.

  • Inside Wytham Abbey, the £15 Million Castle Effective Altruism Must Sell - Effective Ventures is giving $26.8m in stolen money back to FTX creditors, so they gotta sell their not-a-castle
  • Wytham Abbey is being listed on the open market for £15 million [...] Adjusted for inflation, the purchase price of the house two years ago now equals £16.2 million.

    Remember when one of their justifications was that it's also an investment?

    Reaction on the EA forum:

    It's not necessarily a loss of a million pounds if many of the events that happened there would have spent money to organise events elsewhere (renting event spaces and accommodation for event attendees can get quite pricey) and would have spent additional time on organising the events, finding venues, setting them up etc (compared to having them at Wytham). For comparison, EA Global events cost in the ballpark of a million pounds per event.

  • Why are our enemies so pathetic and stupid when we're so handsome and smart?
  • I actually don't find this a bad post, but I do want to point out that it got way more karma than any of titotal's more critical posts, even though I find many of those better. This once again points to how the EA Forum's voting-power-by-popularity karma system creates groupthink; being critical nets you less voting power than being laudatory, and it disincentivizes calling out bullshit in general.

    When Ives Parr, of "Effective Altruism is when you want to spend money on genetic engineering for race-and-IQ theories" fame, made a separate post complaining that that post got downvoted despite nobody giving a good counterargument, I wanted to comment and call him out on his bullshit, but why bother with a karma system that allows him and his buddies to downvote it off the frontpage while leaving you with less voting power? A lot of EA's missteps are one-off blunders, but what makes the EA forum's """epistocratic""" voting system so much worse is that it's systematic: every post and comment is affected by this calculus of how much you can criticize the people with a lot of power on the forum without losing power of your own, which makes groupthink almost inevitable. Given that people who have been on the forum longer have, on average, more voting power than newer voices, I can't help but wonder if this is by design.

  • Why are our enemies so pathetic and stupid when we're so handsome and smart?
  • It was a chateau in the Czech Republic

    It wasn't CEA/EV like with the other 'castle', but it was an organization that had its own tag on the EA forum, so at the very least EA-aligned.

  • testosterone and IQ: from the LessWrong School of Taking 4chan Memes and Just Running With Them
  • I have a tremendously large skull (like XXL hats) - maybe that's why I can still do some basic math after the testosterone brain poison during puberty? [...] Now I'm looking at tech billionaires. Mostly lo-T looking men. Elon Musk & Jeff Bezos were big & bald but seem to have pretty big skulls to compensate

    Mark phrenology off your bingo cards, Foppington's law strikes again:

    Once bigotry or self-loathing permeate a given community, it is only a matter of time before deep metaphysical significance is assigned to the shape of human skulls.

  • oh no, the organisation founded by a Swedish EA with a racism scandal is defunct! No, not that one, the other one
  • I would've suggested that we call ourselves the megaforecasters to one-up them, but then they might start calling themselves the überforecasters.

  • GabAI system prompt leaked
  • They have to call it Arya, because No-one takes them seriously

  • SBF's effective altruism and rationalism considered an aggravating circumstance in sentencing
  • embrace the narrative that “SBF died for our sins”

    Huh? This is so absurdly self-aggrandizing that I struggle to comprehend what he's even saying. What did he imagine "our sins" were, and how did getting imprisoned absolve them?

  • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
  • No no, not the term (my comment is about how he got his own term wrong), just his reasoning. If you make a lot of reasoning errors, but two faulty premises cancel each other out, and you write, say, 17,000 words or sequences of hundreds of blog posts, then you're going to stumble into the right conclusion from time to time. (It might be fun to model this mathematically (can you err your way into being unerring?), but unfortunately, in reality-land, the number of premises an argument needs varies wildly.)
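
    Since the parenthetical invites modelling it, here's a toy sketch. The setup is my own simplifying assumption, not anything from the post: each premise is independently wrong with probability p, and each error flips the conclusion, so an even number of errors "cancels out".

    ```python
    import random

    # Toy model (my assumption): n premises, each wrong with probability p,
    # and the conclusion comes out right only when the number of errors is even.
    def p_right_by_simulation(n: int, p: float, trials: int = 100_000) -> float:
        right = 0
        for _ in range(trials):
            errors = sum(random.random() < p for _ in range(n))
            right += (errors % 2 == 0)
        return right / trials

    # Closed form for the same model: P(even number of errors) = (1 + (1 - 2p)^n) / 2.
    def p_right_closed_form(n: int, p: float) -> float:
        return (1 + (1 - 2 * p) ** n) / 2

    print(p_right_by_simulation(5, 0.3), p_right_closed_form(5, 0.3))
    # As n grows this tends toward 0.5: erring a lot makes you a coin flip, not unerring.
    ```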

  • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
  • Zack thought the Times had all the justification they needed (for a Gettier case) since he thought they 1) didn't have a good justification but 2) also didn't need a good justification. He was wrong about his second assumption (they did need a good justification), but also wrong about the first assumption (they did have a good justification), so they cancelled each other out, and his conclusion 'they have all the justification they need' is correct through epistemic luck.

    The strongest possible argument supports the right conclusion. Yud thought he could just dream up the strongest arguments and didn't need to consult the literature to reach the right conclusion. Dreaming up arguments is not going to give you the strongest arguments, while consulting the literature will. However, one of the weaker arguments he dreamt up just so happened to also support the right conclusion, so he got the right answer through epistemic luck.

  • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
  • It made me think of epistemic luck in the rat-sphere in general; him inventing and then immediately fumbling 'Gettier attack' is just such a perfect example, but there are other examples in there, such as Yud saying:

    Personally, I’m used to operating without the cognitive support of a civilization in controversial domains, and have some confidence in my own ability to independently invent everything important that would be on the other side of the filter and check it myself before speaking. So you know, from having read this, that I checked all the speakable and unspeakable arguments I had thought of, and concluded that this speakable argument would be good on net to publish[…]

    Which @200fifty points out:

    Zack is actually correct that this is a pretty wild thing to say… “Rest assured that I considered all possible counterarguments against my position which I was able to generate with my mega super brain. No, I haven’t actually looked at the arguments against my position, but I’m confident in my ability to think of everything that people who disagree with me would say.” It so happens that Yudkowsky is on the ‘right side’ politically in this particular case, but man, this is real sloppy for someone who claims to be on the side of capital-T truth.

  • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
  • The sense of counter-intuitivity here seems mostly to be generated by the convoluted grammar of your summarising assessment, but this is just an example of bare recursivity, since you’re applying the language of the post to the post itself.

    I don't think it's counter-intuitive and the post itself never mentioned 'epistemic luck'.

    Perhaps it would be interesting if we were to pick out authentic Gettier cases which are also accusations of some kind

    This seems easy enough to construct: just base an accusation on a Gettier case. So, in the case of the stopped clock, say we had an appointment at 6:00 and, due to my broken watch, I think it's 7:00; as it so happens, it actually is 7:00. When I accuse you of being an hour late, that's a "Gettier attack": a true accusation, but one that isn't based on knowledge because it rests on a Gettier case.

  • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
  • While the writer is wrong, the post itself is actually quite interesting and made me think more about epistemic luck. I think Zack does correctly point out cases where I would say rationalists got epistemically lucky, although his views on the matter seem entirely different. I think this quote is a good microcosm of this post:

    The Times's insinuation that Scott Alexander is a racist like Charles Murray seems like a "Gettier attack": the charge is essentially correct, even though the evidence used to prosecute the charge before a jury of distracted New York Times readers is completely bogus.

    A "Gettier attack" is a very interesting concept I will keep in my back pocket, but he clearly doesn't know what a Gettier problem is. With a Gettier case a belief is both true and justified, but still not knowledge because the usually solid justification fails unexpectedly. The classic example is looking at your watch and seeing it's 7:00, believing it's 7:00, and it actually is 7:00, but it isn't knowledge because the usually solid justification of "my watch tells the time" failed unexpectedly when your watch broke when it reached 7:00 the last time and has been stuck on 7:00 ever since. You got epistemically lucky.

    So while this isn't a "Gettier attack", Zack did get at least a partial dose of epistemic luck. He believes the charge isn't justified and is therefore a Gettier attack, but in fact a Gettier attack requires justification, and the charge is justified, so he got some epistemic luck while writing about epistemic luck. This is what a good chunk of this post feels like.

  • NSFW
    the [simulated] are a convenient group of people to advocate for
  • I don't know; when I googled it, this 80,000 Hours article was one of the first results. It seems reasonable at first glance, but I haven't looked into it.

  • Rationalist org bets random substack poster $100K that he can't disprove their covid lab leak hypothesis, you'll never guess what happens next
  • Wait, they had Peter's arguments and sources before the debate? And they're blaming the format? Having your challenger's material before the debate, while they don't have yours, is basically a guaranteed win. You have his material: take it to the debate and prepare answers in advance so you don't lose $100K! Who gave these idiots $100K?

  • NSFW
    the [simulated] are a convenient group of people to advocate for
  • The way this is categorized, this 18.2% is also about things like climate change and pandemics.