The French urologist by day, eugenicist transhumanist crank by night, Laurent Alexandre had an interesting perspective on the subject fifteen years ago: we should do eugenics and edit our brains to be better, so as to counter the emergence of Artificial Intelligence. The robots will replace us if we don't do rigorous embryo selection.
Incidentally, he also warns that a modern-day Stalin in possession of nanomachines and gene-editing technologies would create what he terms "neuro-gulags".
I don't remember seeing Lesswrongers play with these sorts of ideas, which is a little surprising. But they're reluctant to diminish the AI's omnipotence, I imagine.
I don’t remember seeing Lesswrongers play with these sorts of ideas
the all-defector in rajaniemi's books probably sits pretty close?
(I still can't tell if rajaniemi's actually really into all this shit, or just found it all to be a highly convenient backstory for some entertainingly detailed scifi)
I could trivially create an entire libguide dedicated to peer-reviewed publications about how washing machines completely, fundamentally reshaped society. They (when combined with other electrified, labor-saving household devices) possibly caused more social change than computers. Non-automated laundry is so labor-intensive that in pre-industrial societies even the very poor often paid to send their laundry out, like baking bread only more so.
We do not understand genetic code as code. We have merely developed some statistical relations between parts of the genetic code and some outcomes, but nobody understands the genetic code well enough to write even the equivalent of "Hello World!".
Gene modification consists of grabbing a slice of genetic code and splicing it into another. Impressive! It means we can edit the code. It doesn't mean we understand the code. If you grab the code for Donkey Kong and put it into the code of Microsoft Excel, does that mean you can throw barrels at your numbers? Or will you simply break the whole thing? Genetic code is very robust and has a lot of redundancies (that we don't understand), so it won't crash like Excel. Something will likely grow. But tumors are also growth.
Remember Thalidomide? At the time, they had better reason to think it was safe than we have today for thinking gene-editing babies is safe.
The tech bros who are gene editing babies (assuming that they are, because they are stupid, egotistical and wealthy enough to bend most laws) are not creating super babies, they are creating new and exciting genetic disorders. Poor babies.
My understanding is that it is possible to reliably (given the reliability required for lab animals) insert genes for individual proteins. E.g., if you want a transgenic mouse line whose neurons will fluoresce under laser light when they are firing, you can insert a gene sequence for GCaMP without too much hassle. You can even put the inserted gene under the control of certain promoters so that it will only activate in certain types of neurons and not others. Some really ambitious work has inserted multiple sequences for different colors of optogenetic indicators into a single mouse line.
If you want something more complicated that isn't just a sequence for a single protein, or at most a few proteins, never mind something as conceptually nebulous as "intelligence", then yeah, the technology, or even the basic scientific understanding, is lacking.
Also, the gene insertion techniques that are reliable enough for experimenting on mice and rats aren't nearly reliable enough to use on humans (not that they even know what genes to insert in the first place for anything but the most straightforward of genetic disorders).
there's been some (what appears to me to be) remarkable progress in the field, in that I know it's now possible to create structures intentionally. it's very much not my field so I can't speak to it in detail; the best way I can describe where I understand it to be is that it's like people building with lego, if that makes sense?
but yeah it's still a damn far way off from what we'd call "gene programming" as we have "computer programming"
I wouldn’t say that modern computer programming is that hot either. On the other hand, I can absolutely see “no guarantee of merchantability or fitness for any particular purpose” being enthusiastically applied to genetic engineering products. Silicon Valley brought us “move fast and break things”, and now you can apply it to your children, too!
I am not a geneticist, but I have had reasons to talk to geneticists. And they do a lot of cool stuff. For example, I talked with geneticists who researched the genome of a hard-to-treat patient group to find genetic clusters that could yield clues to potential treatments.
You have patient group A that has a cluster of genes B, which we know codes for function C, which can go haywire in way D, which already has a treatment E. Then E becomes a potential treatment for A. You still have to run trials to see if it actually has an effect, but it opens up new avenues with existing treatments. This in particular has potential for small patient groups that are unlikely to receive much funding and research on their own.
But this also highlights how very far we are from understanding the genetic code as code that can be reprogrammed for intelligence or longevity. And how much more likely experiments are to mess things up in ways we cannot predict beforehand, and which don't have a treatment.
It's not that eugenics is a magnet for white supremacists, or that rich people might give their children an even more artificially inflated sense of self-worth. No, the risk is that the superbabies might be Khan and kick-start the eugenics wars. Of course, this isn't a reason not to make superbabies; it just means the idea needs some more workshopping via Red Teaming (hacker lingo is applicable to everything).
The commenter makes an extended, tortured analogy to machine learning... in order to say that maybe genes with correlations to IQ won't add to IQ linearly. It's an encapsulation of many lesswrong issues: veneration of machine learning, overgeneralization of comp sci into unrelated fields, a need to use paragraphs to say what a single sentence could, and a failure to actually state firm, direct objections to blatantly stupid ideas.
Working in the field of genetics is a bizarre experience. No one seems to be interested in the most interesting applications of their research. [...] The scientific establishment, however, seems to not have gotten the memo. [...] I remember sitting through three days of talks at a hotel in Boston, watching prominent tenured professors in the field of genetics take turns misrepresenting their own data [...] It is difficult to convey the actual level of insanity if you haven’t seen it yourself.
Like Yudkowsky writing about quantum mechanics, this is cult shit. "The scientists refuse to see the conclusion in front of their faces! We and we alone are sufficiently Rational to embrace the truth! Listen to us, not to scientists!"
Gene editing scales much, much better than embryo selection.
"... Mister Bond."
The graphs look like they were made in Matplotlib, but on another level, they're giving big crayon energy.
Working in the field of genetics is a bizarre experience
How the fuck would you know that, mate? You don't even have a degree in your field, which, let me remind you, is (allegedly) computer science. Has Yud ever been near an actual genetics professor?
these posers would repost "i fucking love science" on facebook but clearly never came back home in the dead of night after a 13-hour shift in the lab, smelling of cum and mothballs because of a minor accident that nevertheless allowed them to push the envelope of the known world just that little bit farther
Working in the [field] is a bizarre experience. No one seems to be interested in the most interesting applications of their research
depending on the field, it might be crackpottery or straight-up criminal. but if you post shit like this on linkedin, then it's suddenly "inspiring" and "thought-provoking"
Our knowledge has advanced to the point where, if we had a safe and reliable means of modifying genes in embryos, we could literally create superbabies
Am I misunderstanding the data? No, it is all the scientists who are wrong. (He is also ignoring the "scientists" who do agree with him, who all seem to have a special room for WW2 paraphernalia.)
So AGI is 0.5-2 years away, after which the singularity happens and, depending on how AI alignment goes, we are either immortal forever or everybody is diamondoid paperclips.
A normal human takes 18 years to grow to maturity. So, for the sake of argument (yes yes, don't hand it to ISIS), say a supergene baby can do that in 9 years (poor kid). Those timelines seem at odds with each other (and that is assuming the research was even possible now).
I know timelines and science fiction stories are a bit fluid, but come on, at least pretend you believe in it. I'm not saying he is full of shit but... no wait, I am saying that.
As we know, the critical age for a boy genius is somewhere from 11 (Harry Potter) to 15 (Paul Atreides), so the gene-enhanced baby ought to have a fair shot after a few months or so.
Superbabies is a backup plan; focus the energy of humanity’s collective genetic endowment into a single generation, and have THAT generation to solve problems like “figure out how to control digital superintelligence”.
The academic institutions in charge of exploring these ideas are deeply compromised by insane ideologies. And the big commercial entities are too timid to do anything truly novel; once they discovered they had a technology that could potentially make a few tens of billions treating single gene genetic disorders, no one wanted to take any risks; better to take the easy, guaranteed money and spend your life on a lucrative endeavor improving the lives of 0.5% of the population than go for a hail mary project that will result in journalists writing lots of articles calling you a eugenicist.
Superbabies is a backup plan; focus the energy of humanity’s collective genetic endowment into a single generation, and have THAT generation to solve problems like “figure out how to control digital superintelligence”.
Science-fiction solutions for science-fiction problems!
Let's see what the comments say!
Considering current human distributions and a lack of 160+ IQ people having written off sub-100 IQ populations as morally useless [...]
Dude, are you aware of where you are posting?
Just hope it never happens, like nuke wars?
Yeah, that's what ran the Cold War: hopes and dreams. JFC, I keep forgetting these are kids born long after 1989.
Could you do all the research on a boat in the ocean? Excuse the naive question.
No, please keep asking the naive questions, it's what provides fodder for comments like this.
(regarding humans having a "[F]ixed skull size" and therefore being a priori unable to compete with AI):
Artificial wombs may remove this bottleneck.
This points to another implied SF solution. It's already postulated by these people that humans are not having enough babies, or rather that the right kind of humans aren't (wink wink). If we assume they don't adhere to the Platonic ideal that women are simply wombs and all traits are inherited from males, then to breed superbabies you need buy-in from the moms. Considering how hard it is for these people to have a normal conversation with the fairer sex, getting them both to convince a partner to have a baby and to let some quack from El Salvador mess with its genes seems an insurmountable hurdle. Artificial wombs will resolve this nicely. Just do a quick test at around puberty to determine the God-given IQ level of a female, then harvest her eggs and implant them into artificial wombs. The less intelligent ones can provide eggs for the "Beta" and "Gamma" models...
But you don't go from a 160 IQ person with a lot of disagreeability and ambition, who ends up being a big commercial player or whatnot, to 195 IQ and suddenly get someone who just sits in their room for a decade and then speaks gibberish into a youtube livestream and everyone dies, or whatever.
Okay, but this is an amazing out-of-context sentence. I will crowdfund a $1000 award for anyone who is able to put that sentence into a paper and get it published in Nature without anyone noticing.
195 IQ and suddenly get someone who just sits in their room for a decade and then speaks gibberish into a youtube livestream and everyone dies, or whatever.
I can't even decipher what this is about. Like, if you're 195 IQ you can invent Avada Kedavra in a decade?
this whole "superbabies will save us from AI" presupposes that the superbabies are immune to the pull of LW ideas. Just as LW are discounting global warming, fascism etc to focus on runaway AI, who says superbabies won't have a similar problem? It's just one step up the metaphorical ladder:
LW: "ugh normies don't understand the x-risk of AI!"
Superbabies: "ugh our LW parents don't understand the x-risk of Evangelion being actually, like, real!"