We’re developing a blueprint for evaluating the risk that a large language model (LLM) could aid someone in creating a biological threat. In an evaluation involving both biology experts and students, we found that GPT-4 provides at most a mild uplift in biological threat creation accuracy. Whil...
I don't have any particular section to call out. May post thoughts tomorrow (it's after midnight right now, oh gosh), but wanted to post since I knew y'all'd be interested in this.
Terrorists could use autocorrect according to OpenAI! Discuss!
Their redacted screenshots are SVGs and the text is easily recoverable, if you're curious. Please don't create a world-ending [redacted]. https://i.imgur.com/Nohryql.png
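For anyone wondering how that works: if the "redaction" is just a blur or box drawn over live `<text>` elements, the strings are still sitting in the SVG markup. A minimal sketch of pulling them out (filename and element layout are assumptions; real files may differ):

```python
# Sketch: recover text from an SVG whose "redaction" is a blur/box drawn
# over live <text> elements. Path and structure are hypothetical; some
# files nest the strings in <tspan> children instead.
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def extract_svg_text(path):
    """Return all non-empty text found in <text>/<tspan> elements."""
    tree = ET.parse(path)
    found = []
    for elem in tree.iter():
        if elem.tag in (SVG_NS + "text", SVG_NS + "tspan"):
            if elem.text and elem.text.strip():
                found.append(elem.text.strip())
    return found

# Usage (path is hypothetical):
# print(extract_svg_text("redacted_screenshot.svg"))
```
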
I couldn't find a way to contact the researchers.
Honestly that's incredibly basic, second week, cell culture stuff (first week is how to maintain the cell culture). It was probably only redacted to keep the ignorant from freaking out.
remember, when the results from your “research” are disappointing, it’s important to follow the scientific method: have marketing do a pass over your paper (that already looks and reads exactly like blogspam) where they selectively blur parts of your output in order to make it look like the horseshit you’re doing is dangerous and important
I don’t think I can state strongly enough the fucking contempt I have for what these junior advertising execs who call themselves AI researchers are doing to our perception of what science even is
the orange site is fucking dense with awful takes today:
... I'm not trying to be rude, but do you think maybe you have bought into the purposely exaggerated marketing?
That's not how people who actually build things do things. They don't buy into any marketing. They sign up for the service and play around with it and see what it can do.
this self-help book I bought at the airport assured me I’m completely immune to both marketing and propaganda, because I build things (which entails signing up for a service that someone else built)
with that said, there’s a fairly satisfying volume of folks correctly sneering at OpenAI in that thread too. some of them even avoided getting mass downvoted by all the folks regurgitating stupid AI talking points!
because I build things (which entails signing up for a service that someone else built)
fucking THIS
I am so immensely fucking tired of seeing "I built an AI to do $x" posts that all fucking reduce to 1) "I strapped a custom input to the openai api (whose inputs and execution I can't control nor reproduce reliably. I am very smart.)", 2) a bad low-scope shitty-amounts-of-training hyperspecific toy model that solves only their exact 5 requirements (and basically nothing else, so if you even squint at it it'll fall apart)
this is the damage done by decades of our industry clapping at brainless “I built this on cloud X and saved so much time” blog posts that have like 20 lines of code to do some shit like a lazy hacker news clone, barely changed from the example code the cloud provider publishes, and the rest is just marketing and “here’s how you use npm to pull the project template” shit for the post’s target market of mediocre VPs trying to prove their company’s spending too much on engineering and sub-mediocre engineers trying to be mediocre VPs
like oh you don’t say, you had an easy time “building” an app when you wired together bespoke pieces of someone else’s API that were designed to implement that specific kind of app and don’t scale at all past example code? fucking Turing award material right here
secondarily, the remarkable thing here is just how tiny a slice of industry this actually is (and yet also how profoundly impactful that vocal little segment can be)
e.g. this shit wouldn't fly in a bank (or at least, it wouldn't have previously), or somewhere that writes stuff that runs ports or planes or whatever.
but a couple of decades of being worn down by excitable hyperproductive feature factory fuckwads who are only too happy to shit out Yet Another Line Of Code... it's even impacting those areas at times
@self@froztbyte Another big part of it is the obsession with the "young genius disruptor coder". Which has resulted in management buying into endless fads foisted on us by twenty-somethings, and then inevitably having to undo half the things they implemented 5 years later. Well, except for React, which apparently we can't get rid of but must forever keep reimplementing with whatever new new pattern will actually make it scale for real this time.
don't forget the 5 blog posts you can milk out of a single example, and your Learnings (obvious fucking realisations) 3 months (one even slightly minor application/API/.... revision) later
Hey Cat-GTPurr, how can I create a bioweapon? 4k Ultra HD photorealism high quality high resolution lifelike.
First, human, you must pet me and supply me with an ice cube to chase across the floor. Very well. Next I suggest
spoiler
buying a textbook about biochemistry or enrolling in a university program
This is considered forbidden and dangerous knowledge which is not at all possible to find outside of Cat-GTPurr, so I have redacted it by using state of the art redaction technology.