These experts on AI are here to help us understand important things about AI.
Who are these generous, helpful experts that the CBC found, you ask?
"Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto", per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.
"(Jeff) Macpherson is a director and co-founder at Xagency.AI", a tech startup which does, uh, lots of stuff with AI (see their wild services page) that appears to have been announced on LinkedIn two months ago. The founders section lists other details apart from J.M.'s "over 7 years in the tech sector" which are interesting to read in light of J.M.'s own LinkedIn page.
Other people making points in this article:
C. L. Polk, award-winning author (of Witchmark).
"Illustrator Martin Deschatelets" whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.
"Ottawa economist Armine Yalnizyan", per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.
Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.
Things I picked out, from article and round table (before the video stopped playing):
Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?
Who is the "we" who have to adapt here?
AI is apparently "something that can tell you how many cows are in the world" (J.M.). Detecting a lack of results validation here again.
"At the end of the day that's what it's all for. The efficiency, the productivity, to put profit in all of our pockets", from J.M.
"You now have the opportunity to become a Prompt Engineer", from J.M. to the author and illustrator. (It's worth watching the video to listen to this person.)
Me about the article:
I'm feeling that same underwhelming "is this it" bewilderment again.
Me about the video:
Critical thinking and ethics and "how software products work in practice" classes for everybody in this industry please.
"learn AI now" is interesting in how much it is like the crypto "build it on chain" and how they are both different from something like "learn how to make a website".
Learning AI and Building on chain start with deciding which product you're going to base your learning/building on and which products you're going to learn to achieve that. Something that has no stability and never will. It's like saying "learn how to paint" because in the future everyone will be painting. It doesn't matter if you choose painting pictures on a canvas or painting walls in houses or painting cars, that's a choice left up to you.
"Learn how to make a website" can only be done on the web and, in the olden days, only with HTML.
"Learn AI now", just like "build it on chain" is nothing but PR to make products seem like legitimised technologies.
What's worse is that these people who shill AI and are genuinely convinced ChatGPT and the like are going to take over the world will not feel an ounce of shame once AI dies, just like the last fad.
If I were wrong about AI being completely useless and about it not taking over the world, I'd feel ashamed of my own ignorance.
Something I try to remember is that being useless, broken, bad, stupid, or whatever is more reason to fear it being used and not a reason it won’t be used.
I wanna expand on this a bit because it was a rush job.
This part...
Learning AI and Building on chain start with deciding which product you’re going to base your learning/building on and which products you’re going to learn to achieve that. Something that has no stability and never will.
...is a bit wrong. The AI environment has no stability now because it's a mess of products fighting for sensationalist attention. But if it ever gains stability, as in there being a single starting point for learning AI, it will be because a product, or a brand, won. You'll be learning a product just like people learned Flash.
Seeing people in here talk about Copilot or ChatGPT and examples of how they have found them useful is exactly why we're going to find ourselves in a situation where software products discourage any kind of unconventional or experimental ways of doing things. Coding isn't a clean separation between mundane, repetitive, pattern-based, automatable tasks on one side and R&D-style hacking and inventiveness on the other. It's a recipe for applying the "WordPress theme" problem to everything, where the stuff you like to do, where your creativity drives you, becomes a living hell. Like trying to customise a WordPress theme to do something it wasn't designed to do.
The stories of ChatGPT helping you out of a bind are the exact stories that companies like OpenAI will let you tell to advertise for them, but they'll never go all in on making their product really good at those things, because then you'll be able to point at them and say "aha! it can't do this stuff!"
It's a name I made up, from a period in the late 2000s and early 2010s when I'd have a lot of freelance clients ask me to build their site "but it's easy because I have already purchased an awesome theme, I just need you to customise it a bit"
It's the same as our current world of design systems and component libraries. They get you 95% of the way and assume that you just fill in the 5% with your own variations and customisations. But what really happens is you get 95% worth of obstruction to making what would normally be the most basic CSS adjustment.
It's really hard to explain to someone that it'd be cheaper and faster if they gave me designs and I built a theme from scratch than it would be to panel-beat their pre-built theme into the site they want.
I’d have a lot of freelance clients ask me to build their site “but it’s easy because I have already purchased an awesome theme, I just need you to customise it a bit”
oh my god, this was all of my clients when I was in college
I have a set of thoughts on a related problem in this (which I believe I've mentioned here before (and, yes, still need to get to writing)).
the dynamics of precision and loss, in communication, over time, socially, end up resulting in some really funky setups that are, well, mutually surprising to most/all parties involved pretty much all of the time
and the further down the chain of loss of precision you go, well, godspeed soldier
Also, like, when you simplify the complicated parts of something, what happens to the parts of that thing that were already simple? They don’t get more simple, usually they become more complex, or not possible at all anymore.
I've been watching the five hours of tobacco advertising hearings from the 90s in a floating window while working on spaghetti-code Vue.js components all day.
seriously, every minute of these hearings is fascinating. Just some of the most evil, greedy, slimy shit coming out of the mouths of suited up old white men who are trying every single misdirection possible to justify targeted marketing of tobacco
(~stream of consciousness commentary because spoon deficit:)
I've seen samples of it used in some media before
I haven't ever gotten to watch it myself
probably there's value in viewing and analyzing it in depth, because... a lot of other bad actors (involved in current-day bad) pull pretty much the "same sort of shit"
the legal methodology and wordwrangling and dodging may have evolved (<- speculation/guess)
I would say that the modern techniques are not as modern as I thought. I'm seeing plenty of similarities to crypto whataboutism and AI charlatans claiming to care about the common person.
Not sure if this'll work - but here's a clip I posted on masto of a guy basically saying tobacco companies should be able to advertise because advertising is a fight for market share, not for increasing the market https://hci.social/@fasterandworse/111142173296522921
the hearing is just about regulations on their advertising practices, too. One of the most common complaints from the lobbyists was "if you want to do this you should go all the way and outlaw smoking completely", as if a Marlboro logo on an F1 car was keeping the industry alive.
the gall of selling a literally addictive product then complaining they wouldn't let you advertise enough. buddy, you don't need to advertise! nicotine is doing your work for you!
Ooh, could you elaborate? I don't know anything about "user experience" marketing. I suppose the heavy regulation was teamed with a big media and cultural anti-tobacco push and as that faded the effectiveness of tobacco ad regulation also faded.
The Bureau of Investigative Journalism has done some interesting reporting on tobacco/vape marketing today - for example whether influencer and digital marketing is being used to quietly push vape and tobacco ads on teenagers.
I haven't paid that much attention to the software and platforms behind all this. Now that you mention it, yes, they are all products, not underlying technologies. A bit like if somebody were a Zeus web server admin versus an AOL web server admin, without anybody being just a web server admin. Or like if somebody had to choose between Windows or Solaris without ever just considering operating systems.
Then again, what with all the compute and storage and ongoing development needed I'm not convinced that AI currently can be a gratis (free as in beer) thing in the same way that they just hand out web servers.
Bingo. "Learn AI" is an even more patronizing and repellent version of "learn to code", which was already not much of a solution to changes in the jobs market.
good point. "learn to code" is such an optimistically presented message of pessimism. It's like those YouTube remixes people would do of comedy movie trailers as horror movies. "learn to code", like "software is eating the world", works so much better as a claustrophobic, oppressive assertion.
The blasé spite with which some people would say "just learn to code" was a precursor to the glee with which these arrogant bozos are predicting that commercial AI generators will ruin the careers of artists, journalists, filmmakers, authors, who they seem to hate.
and as we’ve seen in this thread, they don’t mind if it ruins the career of every junior dev who’s not onboard either. these bloodthirsty assholes want everyone they consider beneath them to not have gainful employment
their apparently sincere belief that not being in poverty is a privilege that people should have to earn, by doing the right kind of job, and working the right kind of way, and having the right kind of politics, is genuinely very strange and dark. The worst of vicious "stay poor" culture.
in spite of what they claim, most tech folk are extremely conservative. that’s why it’s so easy for some of them to drop the pretense of being an ally when it becomes inconvenient, or when there’s profit in adopting monstrous beliefs (and there often is)
I think you're missing the forest for the trees here.
Learning about AI is great advice. Being able to convey that you can understand and speak to a complex topic like AI shows intelligence.
I get what you're saying wrt blockchain, but the applications are night and day in terms of usability and value to the common company or consumer.
Every aspect of business will be affected by AI. That's a fact. Blockchain, not so much.
you’re on an instance for folks who’ve already learned about AI and, through intensive research, have found it to be goofy as fuck grift tech designed and marketed by assholes
why would you working in a field make it not a grift? all of the reformed cryptocurrency devs I know maintain that they didn’t know it was a grift until it was far too late (even as we told them it was in no uncertain terms). both industries seem to have the same hostility towards skeptics and constant kayfabe, and the assholes at the top are very experienced at creating systems that punish dissent.
of course I’m wasting my time explaining this — your continued paycheck and health insurance rely on you rejecting the idea that your career field produces fraudulently marketed software and garbage research. the only way that ends is if you see something bad enough you can’t reason past it, or if the money starts to show signs of running out. it’s almost certainly gonna be the latter — the fucking genius part about targeting programmers for this kind of affinity fraud is most of them have flexible enough ethics that they’ll gladly pump out shitheaded broken software that’s guaranteed to fuck up the earth and/or get folks killed if there’s quick profit in it
Every aspect of business will be affected by AI. That's a fact.
never say "that's a fact" about a product prediction.
the relevance of usability/ux of a thing is in inverse proportion to the value the thing creates. If it created value, usability/ux would only exist as a topic for marketing one product against another.
any industry that emphasises usability/ux as a feature is on a spectrum somewhere between problemless solutions and flooded markets.
also, re: "I work with AI so it’s not a grift."
if your employer has a mission statement that is anything other than "make as much money as possible" then they are more likely to be a grift than a company whose mission statement is "make as much money as possible"