Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 13 October 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
speaking of the Godot engine, here’s a layered sneer from the Cruelty Squad developer (via Mastodon):
image description
a post from Consumer Softproducts, the studio behind Cruelty Squad:
weve read the room and have now successfully removed AI from cruelty squad. each enemy is now controlled in almost real time by an employee in a low labor cost country
and also speaking of Godot — does anyone doing game dev right now have a good source for placeholder assets? I just finished all the introductory tutorials for the engine and now I want to flex what I’ve learned a bit
Can't really say I'm surprised that Mr Facebook takes this attitude. His whole fortune is built on the belief that aggregating and hosting content is more valuable than creating it
As always with plagiarism, regardless of what they say, they always, always, always act with a complete disregard for the value of whatever they're ripping off.
hmm, I meant to link that when I saw it, guess I forgot. whoops :D
but yeah, entirely unsurprising from the guy who literally started by harvesting a pile of data and then building a commercial service off it. facebook and parentco should be ended, his assets taken for public good
i wouldn't want to sound like I'm running down Hinton's work on neural networks, it's the foundational tool of much of what's called "AI", certainly of ML
but uh, it's comp sci, which is applied mathematics
They're reeeaallly leaning into the fact that some of the math involved is also used in statistical physics. And, OK, we could have an academic debate about how the boundaries of fields are drawn and the extent to which the divisions between them are cultural conventions. But the more important thing is that the Nobel Prize is a bad institution.
effectively they made machine learning look like an Ising model, and you honestly have no idea how much theoretical physicists fucking love it when things turn out to be the Ising model
does that match your experience? if so i'll quote that
yeah, takes from physicists i know range from "wtf" to "it's plaaausible with a streeetch"
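For anyone wondering what the Ising comparison above actually means: a Hopfield network's energy function is literally an Ising Hamiltonian with learned couplings, and pattern recall is just energy descent. A minimal sketch (illustrative only, not anyone's actual research code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one pattern with the Hebbian rule: w_ij = s_i * s_j (zero diagonal).
# The resulting network IS an Ising model: spins s_i in {-1, +1},
# energy E(s) = -1/2 * sum_ij w_ij s_i s_j, and recall is energy descent.
pattern = rng.choice([-1, 1], size=64)
w = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(w, 0.0)

def energy(s):
    return -0.5 * s @ w @ s

# Corrupt 10 spins, then relax with asynchronous zero-temperature updates.
state = pattern.copy()
flip = rng.choice(64, size=10, replace=False)
state[flip] *= -1

for _ in range(5):  # a few sweeps are plenty with a single stored pattern
    for i in rng.permutation(64):
        # Each spin aligns with its local field -- energy never increases.
        state[i] = 1 if w[i] @ state >= 0 else -1

assert np.array_equal(state, pattern)  # the stored pattern is recovered
print(energy(state))  # global minimum: -0.5 * 64 * 63 = -2016.0
```

Swap "spins" for "neurons" and "couplings" for "weights" and the two models are term-for-term identical, which is the bridge the committee leaned on.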
looking through the committee, I see Ulf Danielsson is notable on AI mostly for being skeptical (he writes pop sci books so people ask him about all manner of shit)
It’s going to be like the Industrial Revolution - but instead of our physical capabilities, it’s going to exceed our intellectual capabilities ... but I worry that the overall consequences of this might be systems that are more intelligent than us that might eventually take control
the mozilla PR campaign to convince everyone that advertising is the lifeblood of commerce and that this is perfectly fine and good (and that everyone should just accept their viewpoint) continues
We need to stare it straight in the eyes and try to fix it
try, you say? and what's your plan for when you fail, but you've lost all your values in service of the attempt?
For this, we owe our community an apology for not engaging and communicating our vision effectively. Mozilla is only Mozilla if we share our thinking, engage people along the way, and incorporate that feedback into our efforts to help reform the ecosystem.
are you fucking kidding me? "we can only be who we are if we maybe sorta listen to you while we keep doing what we wanted to do"? seriously?
the purestrain corporate non-apology that is “we should have communicated our vision effectively” when your entire community is telling you in no uncertain terms to give up on that vision because it’s a terrible idea nobody wants
"it's a failure in our messaging that we didn't tell you about the thing you'd hate in advance. if we were any good we would've gotten out ahead of it (and made you think it's something else)"
and the thing is, that's probably exactly the lesson they're going to be learning from this :|
How do we ensure that privacy is not a privilege of the few but a fundamental right available to everyone? These are significant and enduring questions that have no single answer. But, for right now on the internet of today, a big part of the answer is online advertising.
How do we ensure that traffic safety is not a privilege of the few but a fundamental right available to everyone? A big part of the answer is drunk driving.
Don't know how much this fits the community, as you use a lot of terms I'm not inherently familiar with (is there a "welcome guide" of some sort somewhere I missed).
The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.
I just... don't have words for how bad this is going to go. How much work this will inevitably be. At least we'll get a real world example of just how many guardrails are actually needed to make LLM text "work" for this sort of use case, where neutrality, truth, and cited sources are important (at least on paper).
I hope some people watch this closely, I'm sure there's going to be some gold in this mess.
The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.
Wikipedia's mod team definitely haven't realised it yet, but this part is pretty much a de facto ban on using AI. AI is incapable of producing output that would be acceptable for a Wikipedia article - in basically every instance, it's getting nuked.
I'd like to believe some of them have, but it's easier or more productive to keep giving the benefit of the doubt (or at least pretend to) than to argue the point.
Welcome to the club. They say a shared suffering is only half the suffering.
This was discussed in last week's Stubsack, but I don't think we mind talking about the same thing twice. I, for one, do not look forward to browsing Wikipedia exclusively through pre-2024 archived versions, so I hope (with some pessimism) their disappointingly milquetoast stance works out.
Reading a bit of the old Reddit sneerclub can help understand some of the Awful vernacular, but otherwise it's as much of a lurkmoar as any other online circlejerk. The old guard keep referencing cringe techbros and TESCREALs I've never heard of while I still can't remember which Scott A we're talking about in which thread.
Scott Computers is married and a father but still writes like an incel and fundamentally can't believe that anyone interested in computer science or physics might think in a different way than he does. Dilbert Scott is an incredibly divorced man. Scott Adderall is the leader of the beige tribe.
Don't know how much this fits the community, as you use a lot of terms I'm not inherently familiar with (is there a "welcome guide" of some sort somewhere I missed)
first impression: your post is entirely on topic, welcome to the stubsack
techtakes is a sister sub to sneerclub (also on this instance, previously on reddit) and that one has a bit of an explanation. generally any (classy) sneerful critique of bullshit and wankery goes, modulo making space for chuds/nazis/debatelords/etc (those get shown the exit)
you use a lot of terms I’m not inherently familiar with (is there a “welcome guide” of some sort somewhere I missed).
we’re pretty receptive to requests for explanations of terms here, just fyi! I imagine if it begins to overwhelm commenting, a guide will be created. Unfortunately there is something of an arms race between industry buzzword generation and good sense, and we are on the side of good sense.
I could only find one positive response in the replies, and that one is getting torn to shreds as well:
I did also find a quote-tweet calling the current AI bubble an "anti-art period of time", which has been doing pretty damn well:
Against my better judgment, I'm whipping out another sidenote:
With the general flood of AI slop on the Internet (a slop-nami as I've taken to calling it), and the quasi-realistic style most of it takes, I expect we're gonna see photorealistic art/visuals take a major decline in popularity/cultural cachet, with an attendant boom in abstract/surreal/stylised visuals
On the popularity front, any artist producing something photorealistic will struggle to avoid blending in with the slop-nami, whilst more overtly stylised pieces stand out all the more starkly.
On the "cultural cachet" front, I can see photorealistic visuals becoming seen as a form of "techno-kitsch" - a form of "anti-art" which suggests a lack of artistic vision/direction on its creators' part, if not a total lack of artistic merit.
They're basically admitting they didn't pay an influencer to spread misinformation about public wifi in order to sell VPN products, they just stole her likeness, used her photo, and attributed a completely made-up quote to her.
But it was a joke guys! We did a satire! I’m totally certain I know what satire is!
The logical conclusion of normalizing "Social Media Manager" as a role in companies is that as they get better at their jobs and become more believable, the average corporate communication will trend towards 13-year-old edgy shitposter. God I feel old.
I really think that Naomi Klein pointing out the brand being the product created a wave of tech entrepreneurs who reacted by making the user experience the product and now we’re seeing how bad they are at the most basic brand maintenance.
Plus they create brands that cultivate a following that is not compatible with corporate growth interests. Proton are like Mozilla, they wanna play with the bad kids but they promised their parents they’d come straight home
Another upcoming train wreck to add to your busy schedule: O’Reilly (the tech book publisher) is apparently going to be doing ai-translated versions of past works. Not everyone is entirely happy about this. I wonder how much human oversight will be involved in the process.
translate technically fiddly instructions of the type where people have trouble spotting mistakes, with patterned noise generators. what could go wrong
Earlier today, the Internet Archive suffered a DDoS attack, which has now been claimed by the BlackMeta hacktivist group, who says they will be conducting additional attacks.
Hacktivist group? The fuck can you claim to be an activist for if your target is the Internet Archive?
Training my militia of revolutionary freedom fighters to attack homeless shelters, soup kitchens, nature preserves, libraries, and children's playgrounds.
I feel like the Internet Archive is a prime target for techfashy groups. Both for the amount of culture you can destroy, and because backed up webpages often make people with an ego the size of the sun look stupid.
Also, I can't remember but didn't Yudkowsky or someone else pretty plainly admit to taking a bunch of money during the FTX scandal? I swear he let slip that the funds were mostly dried up. I don't think it was ever deleted, but that's the sort of thing you might want to delete and could get really angry about being backed up in the Internet Archive. I think Siskind has edited a couple articles until all the fashy points were rounded off and that could fall in a similar boat. Maybe not him specifically, but there's content like that that people would rather not be remembered and the Internet Archive falling apart would be good news to them.
Also (again), it scares me a little that their servers are on public tours. Like, it'd take one crazy person to do serious damage to it. I don't know, but I'm hoping their >100PB of storage includes backups, even if it's not 3-2-1. I'm only mildly paranoid about it lol.
it scares me a little that their servers are on public tours
frankly, the entire design of IA is more than a bit fucking stupid for the purpose it serves. "oh hey here's the whole IA, right in this building over here" is just galaxybrained derpery
physical goods I can understand central-point (or some centralisation) in archive management, but ffs we're multiple decades into knowing how to build things differently
(stance contextualisation: while I'm glad that the IA exists, I'm not an unreserved stan of it. there are a couple other notable concerns with it, alongside the thing I just mentioned)
Synthetic Users uses the power of LLMs to generate users that have very high Synthetic Organic Parity. We start by generating a personality profile for each user, very much like a reptilian brain around which we reconstruct its personality. It’s a reconstruction because we are relying on the billions of parameters that LLMs have at their disposal.
They could've worded this so many other ways
But I suppose creepiness is a selling point these days
Plenty of agreement, but also a lot of "what is reasoning, really" and "humans are dumb too, so it's not so surprising GenAIs are too!". This is sure a solid foundation for multi-billion startups, yes sirree.
it’s kind of comforting that the current attitude towards generative AI in some tech spaces is “of course it can’t do cognition and it isn’t really good for anything, who said it was” which is of course rich from the exact same posters who were breathlessly advertising for the tech as revolutionary both online and at work as recently as a couple of weeks ago (and a lot of them still hedge it with “but it might be useful in the near future”). the comfort is it feels like that attitude comes from deep embarrassment, like how the orange site started claiming it is and always was skeptical of crypto once the technology got irrevocably associated with scams and gambling and a lot of the easy money left
yeah, there's a stench of desperation from the defenders
of course, as with crypto, there are uses (in the case of crypto, nothing legitimate). And it's going to be a fallback for fondlers to point them out (for example, I believe that auto-generated audiobooks are viable, if they're generated from actual books)
I was watching a h0ffman stream the other day when someone happened to bring up autoplag in some context. didn't see the asking context, but h0ffman's answer warmed my heart. paraphrased: "what would you want to use that for? you wouldn't steal a mod, why would you want to use a prompt? that stole from artists. fuck that shit."
(h0ffman's one of the names in the demoscene, often plays sets at compos, does some of his own demos, etc)
A cafe run by immortality-obsessed multi-millionaire Bryan Johnson is reportedly struggling to attract customers with students at the crypto-funded Network School in Singapore preferring the hotel’s breakfast buffet over “bunny food.”
I did not expect to be tricked into reading about the nighttime erections of the man with the most severe midlife crisis in the world.
he has 80% fewer gray hairs, representing a “31-year age reversal”
According to Wikipedia this guy is 47. Sorry about your hair as a teenager I guess? I hope the early graying didn't lead to any long term self-esteem issues.
Alternatively, he only had 5 gray hairs to begin with I guess? I'm more concerned about the fact that he's apparently taking time to set a timer whenever he gets hard at night. I don't want to yuck anyone's yum, but I'm pretty sure you're doing it wrong if you're taking time out of the experience to collect those metrics.
The Network School offers Johnson’s healthy food and a fitness program called the Blueprint Protocol. He claims that after three years of following his blueprint
the duration of his night-time erections totals 179 minutes, “better than the average 18-year-old”
Yeah, this is a very normal diet that's advertising itself in very normal ways.
I’m trying to imagine the kind of wacky gross VR body tracking setup you’d need to measure that metric while asleep and all I’m coming up with is mutilated Powerglove
What do you think is the Venn diagram of "people who go to The Network School" and "men who believe in the meat-only diet"? I imagine there's a lot of crossover
If you mention SpaceBattles we also need to add Sufficient Velocity for completeness’s sake.
There’s another one that focuses mostly on erotic fiction but since that’s not really my bag I’ve forgotten what it’s called. And I think it’s not as big as SB and SV anyway since that user base is mostly on AO3 these days.
This is gut instinct, but I'm starting to get the feeling this AI bubble's gonna destroy the concept of artificial intelligence as we know it.
Mainly because of the slop-nami and the AI industry's repeated failures to solve hallucinations - both of those, I feel, have built an image of AI as inherently incapable of humanlike intelligence/creativity (let alone Superintelligence™), no matter how many server farms you build or oceans of water you boil
Additionally, I suspect that working on/with AI, or supporting it in any capacity, is becoming increasingly viewed as a major red flag - a "tech asshole signifier" to quote Baldur Bjarnason for the bajillionth time.
Eagan Tilghman, the man behind the slaughter animation, may have been a random indie animator, who made Springtrapped on a shoestring budget and with zero intention of making even a cent off it, but all those mitigating circumstances didn't save the poor bastard from getting raked over the coals anyway. If that isn't a bad sign for the future of AI as a concept, I don't know what is.
I think a couple of people noted it at the start, but this is truly a paradigm shift.
We've had so many science fiction stories, works, derivatives, musing about AI in so many ways, what if it were malevolent, what if it rebelled, what if it took all jobs... But I don't think our collective consciousness was aware of the "what if it was just utterly stupid and incompetent" possibility.
I don’t think our collective consciousness was aware of the “what if it was just utterly stupid and incompetent” possibility.
It's a possibility which doesn't make for good sci-fi (unless you're writing an outright dystopia (e.g. Paranoia)), so sci-fi writers were unlikely to touch it.
The tech industry had enjoyed a lengthy period of unvarnished success and conformist press up to this point, so Joe Public probably wasn't gonna entertain the idea that this shiny new tech could drop the ball until they saw something like the glue pizza sprawl.
And the tech press isn't gonna push back against AI, for obvious reasons.
So, I'm not shocked this revelation completely blindsided the public.
I think a couple of people noted it at the start, but this is truly a paradigm shift.
Yeah, this is very much a paradigm shift - I don't know how wide-ranging the consequences will be, but I expect we're in for one hell of a ride.
Alan Moore wrote a comic book story about AI about 10 years ago that parodied rationalist ideas about AI and it still holds up pretty well. Sadly the whole thing isn't behind that link - I saw it on Twitter and can't find it now.
Many thanks to @blakestacey and @YourNetworkIsHaunted for your guidance with the NSF grant situation. I've sent an analysis of the two weird reviews to our project manager and we have a list of personnel to escalate with if we can't get any traction at that level. Fingers crossed that we can be the pebble that gets an avalanche rolling. I'd really rather not become a character in this story (it's much more fun to hurl rotten fruit with the rest of the groundlings), but what else can we do when the bullshit comes and finds us in real life, eh?
It WAS fun to reference Emily Bender and On Bullshit in the references of a serious work document, though.
Edit: So...the email server says that all the messages are bouncing back. DKIM failure?
Edit2: Yep, you're right, our company email provider coincidentally fell over. When it rains, it pours (lol).
Edit3: PM got back and said that he's passed it along for internal review.
I'd really rather not become a character in this story
Good luck. In my experience you can't speak up about stuff like this without putting yourself out there to some degree. Stay strong.
Regarding the email bounceback, could you perhaps try sending an email from another address (with a different host) to the same destination to confirm it's not just your "sending" server?
The bounceback should have info in it on the cause, and DKIM issues should result in a complaint response from the denying recipient server.
And, because this is becoming so common, another sidenote from me:
Between the large-scale art theft that gen-AI has become thoroughly known for, the way the AI slop it generates has frequently competed directly with the original work (Exhibit A), the solid legal case for treating the AI industry's Biblical-scale theft as copyright infringement, and the bevy of lawsuits that can and will end in legal bloodbaths, I fully expect this bubble will end up strengthening copyright law a fair bit, as artists and megacorps alike endeavor to prevent something like this ever happening again.
Precisely how, I'm not sure, but to take a shot in the dark I suspect that fair use is probably gonna take a pounding.
To my mind, the cover of "researchers" using the public internet to seed products commercialized by OpenAI and friends is the biggest betrayal of fair use in recent memory. The big companies cynically exploited the research exception to fair use and possibly destroyed it for the future.
And now, another sidenote, because I really like them apparently:
This is gut instinct like my previous sidenote, but I suspect that this AI bubble will cause the tech industry (if not tech as a whole) to be viewed as fundamentally hostile to artists and fundamentally lacking in art skills/creativity, if not outright hostile to artists and incapable of making (or even understanding) art.
Beyond the slop-nami flooding the Internet with soulless shit whose creation is directly attributable to tech companies like OpenAI, it's also given us shit like:
This is gut instinct like my previous sidenote, but I suspect that this AI bubble will cause the tech industry (if not tech as a whole) to be viewed as fundamentally hostile to artists and fundamentally lacking in art skills/creativity, if not outright hostile to artists and incapable of making (or even understanding) art.
As a programmer who likes to see himself as more adjacent to artists (and not only because I draw stuff — badly — and write stuff — terribly — as a hobby, but also because I hold the belief that creating something with code can be seen as artistic too), this whole attitude which has been plaguing the tech industry for — let's be real here — the last 15 years at least, but probably much longer, makes me irrationally angry. Even the parts of the industry where creativity and artistry should play a larger role, like game dev, have been completely fucked over by this idea that everything is about efficiency and productivity. You wanna be successful? You need to be productive all the time, 24/7, and now there's tools that help you with that, and these tools are now fucking AI-powered! Because everything is a tool for our lord and savior productivity.
(I really should get to that toxic-productivity write-up I've been meaning to do for a year now.)
Just ignore the inconsistent theming, blurry cars, people phasing in and out of existence, nonsense traffic signals, unnatural leaf rustling, the car driving on the wrong(?) side of the road and about to plow into a tree, the weirdly oversized tree, the tree missing a trunk, the nonsense traffic paint, the shoddy textures, and the fact that the scene is entirely derivative and no one feels any joy from watching it.
Phew
If you ignore all that it could be the end of animators!!
I was focusing more on the fact Justine failed to recognise Minimax had failed at its only job (giving her...whatever that anim is...instead of something actually 8-bit), but yeah all that sucks too
And on the subject of AI: strava is adding ai analytics. The press release is pretty waffly, as it would appear that they’d decided to add ai before actually working out what they’d do with it so, uh, it’ll help analyse the reams of fairly useless statistics that strava computes about you and, um, help celebrate your milestones?
Definitely saw an ad today for an AI-powered workout machine. It looks like if Bowflex was made by Tesla and promises to "optimize your workout with every rep" or some such nonsense.
I tried to remember the name of it by googling "AI exercise equipment" and despite the slick branding (it's called Tonal btw) it was like 5th on the list. Do you think it's awkward having all these overlapping grifts? In the pre-internet days I'm imagining like 10 unique traveling snake oil salesmen trying very hard to sell their bullshit over everyone else's in the same tiny frontier town without inviting anyone to look too closely at any of them.
Context, Luke-Jr is an early Bitcoin adopter, literal Florida man, and all-around kook. His wikipedia user page used to be a work of art with an ordered list of his obsessions[1], starting with sedevacantism and including Tonal, an early attempt to promote hexadecimal. Here's a long page on the old Bitcoin wiki: https://en.bitcoin.it/wiki/Tonal_Bitcoin. Sadly it never caught on, people don't want to say "bong bitcoin" apparently.
Alignment? Well, of course it depends on your organization's style guide but if you're using TensorFlow or PyTorch in Python, I recommend following PEP-8, which specifies four spaces per indent level and…
Wait, you're not working in AI? Then what are you even asking for?
A good sign of a crank is wanting to solve the biggest problem in the field (or, more like, several fields) with this one easy trick. Amazing that others are now following Yud's lead.
Amazon asked Chun to dismiss the case in December, saying the FTC had raised no evidence of harm to consumers.
ah yes, the company that's massively monopolized nearly all markets, destroyed choice, constantly ships bad products (whose existence is incentivised by programs of its own devising), and that has directly invested in enhanced price exploitation technologies? that one? yeah, totes no harm to consumers there
Pulling out a specific paragraph here (bolding mine):
I was glad to see some in the press recognizing this, which shows something of a sea change is underfoot; outlets like the Washington Post, CNN, and even Inc. Magazine all published pieces sympathizing with the longshoremen besieged by automation—and advised workers worried about AI to pay attention. “Dockworkers are waging a battle against automation,” the CNN headline noted, “The rest of us may want to take notes.” That feeling that many more jobs might be vulnerable to automation by AI is perhaps opening up new pathways to solidarity, new alliances.
To add my thoughts, those feelings likely aren't just that many more jobs are at risk than people thought, but that AI is primarily, if not exclusively, threatening the jobs people want to do (art, poetry, that sorta shit), and leaving the dangerous/boring jobs mostly untouched - effectively the exact opposite of the future the general public wants AI to bring them.
AI is primarily, if not exclusively, threatening the jobs people want to do (art, poetry, that sorta shit)
Jobs people want to do, but which also take a lot of effort to learn to do well. I think there exists a certain envy of people who have put in the time and effort to learn something, which motivates the AI hype.
Visual arts, writing, translation, music, video production, programming, sex. The common thread is that these are things most people wish they could be good at, and they're also the most popular uses for generative AI.
Was thinking about this over the weekend and it suddenly struck me that saltman and his fellow podcasting bros (thank you, TSMC execs) are the modern equivalent of the guys in academic posts who'd describe themselves using titles like "futurist" and spent their time turning out papers that got them interviewed on telly, inspired other academics with too much spare time to write their own takes on it and get interviewed on TV as well, maybe write a book and get an adoring profile in WIRED, that sort of thing. Maybe they'd have a sideline in cyberpunk fiction or be part of a group that hung around in Berkeley making languid proclamations about how cyberspace would be the end of all laws and stuff like that. They were the first hype men of tech -- didn't actually do very much themselves but gave other people ideas. Certainly loved the sound of their own voices and adored the attention. But they were very clear that these were ideas to hang stuff off in the future, not the present.
Nobody was dumb enough to actually take their stuff at face value as something they should immediately throw huge amounts of money at to make them reality. This started to blur during the period when Negroponte was really hustling and everything the MIT Media Lab squirted out was treated like the second coming. It blurred further when tech companies started employing people to act as hype men who had job titles like "Chief Visionary". These guys could take the ideas coming from the nerdy engineers and turn them into excited press releases that would get the top brass excited into giving them more headcount to work on it. Type specimen: Shingy (formerly of AOL)
Today, that circlejerk (futurists - journalism - readers - companies - investors) has collapsed into a line with two points. Someone like Altman shows up with a barely-proof-of-concept idea but is able to hype it directly to VCs who have too much money and no imagination and make decisions based entirely on FOMO. So Altman appears, gets showered with cash, then as he's being showered with cash and hyping for all it's worth other tech companies and VCs jump on the FOMO wagon and pour cash into it as well and... we get to today. Not so much a circlejerk as a reacharound. The sanity filter of open discussion and decent tech journalism between blue-sky ideas and billions of dollars of cash has been removed completely.
The most recent bubbles - cryptocurrency, blockchain, NFTs, LLMs... none of these would have progressed much beyond a few academic papers, maybe a PoC and some excited cyberpunk mailing list traffic until about 15 years ago. The computing power to do them was easily available, it's just that people would have asked "What is this for?" and "Why is it better?". It's what happens when you stop using academia (generally a fairly sceptical community) as an ideas factory and start using coked-up Stanford grads who've spent their entire university career being constantly told how special and important they are.
Result: a massive waste of talent which could be used on genuinely innovative and society-improving ideas, stifling of said genuinely good ideas as "a startup" now has to mean $10m in seed capital and "graduating" from an incubator rather than a couple of people coding in an apartment, billions of dollars firehosed off a cliff for no good reason, the environment set on fire, and society made incrementally worse and not better.
How fucking depressing. Capitalism, you suck.
(full disclosure: I've had dinner with a couple of top-tier Cyberpunk Luminaries in the US, and one of them was pretty much the most annoying, self-satisfied "I Am Very Clever And Will Talk Loudly" person I've ever met. I now know what it feels like to be mansplained at, having had things like basic facts about the country I was then living in, and the European Union, explained to me incorrectly.)
evidence of wider continued rising of the tide against saltman’s bullshit grows
Precisely when that rising tide will drown Altman I'm not sure, but I feel safe in saying it'll probably drown the rest of the AI industry (and potentially "AI" as a concept) as well - Altman is pretty much the face of this AI bubble, after all.
The rising tide was likely also helped along by OpenAI going fully for-profit, which shattered the humanitarian guise it spent the last decade or so building, and, to quote myself, "given the true believers reason to believe [Altman would] commit omnicide-via-spicy-autocomplete for a quick buck".
First-ever criminal charges against financial services firms for market manipulation and “wash trading” in the cryptocurrency industry
Cryptocurrency is a 15-year-old industry built mostly on market manipulation and wash trading, and now we're seeing the first charges for it? Man, back in the day they told me doing crime was illegal.
Deflationary
With every transaction supply shrinks by burning a percentage of reflections to the burn wallet
Turns out libertarians actually love taxes, but only if instead of spending the tax money on anything, it's burned to waste.
Fraudster: "the “objective on the secondary markets” is to find “other buyers from the community, people you don’t know about or don’t care about” because “we have to make [the other buyers] lose money in order to make profit.”"
Is this supposed to be a parody or something? If so, why is this being done at the direction of the FBI? Or is that also part of the parody? If so I didn't know you could use the FBI logo for that purpose.
This week's Mystery AI Hype Theater 3000 really hit home. It's about a startup trying to sell "The AI Scientist." It even does reviews!
Can “AI” do your science for you? Should it be your co-author? Or, as one company asks, boldly and breathlessly, “Can we automate the entire process of research itself?”
Major scientific journals have banned the use of tools like ChatGPT in the writing of research papers. But people keep trying to make “AI Scientists” a thing. Just ask your chatbot for some research questions, or have it synthesize some human subjects to save you time on surveys.
Alex and Emily explain why so-called “fully automated, open-ended scientific discovery” can’t live up to the grandiose promises of tech companies. Plus, an update on their forthcoming book!
Today's entry in the wordpress saga: seizing plugins from devs. The author of this one appears to be affiliated with wpengine, which possibly signals more events like this in the future.
We have been made aware that the Advanced Custom Fields plugin on the WordPress directory has been taken over by WordPress dot org.
A plugin under active development has never been unilaterally and forcibly taken away from its creator without consent in the 21 year history of WordPress.
@BlueMonday1984 Yeah, I don't get it. If you want to be a "hacktivist", why not go after one of the MILLIONS of organizations making the planet a worse place?