Let’s encourage the Canadian Government to join the Fediverse
Anyone know some #PEI Fediversians? Maybe we could get our last (but first!) Province represented for signature #300! (Currently 297) Canada #Mastodon #Fediverse is killin' it! Please share and spread the word on other Fediverse platforms too! #pixelfed #Lemmy #peertube #Friendica #Bookwyrm #funkwh...
A parliamentary petition has been started to try to encourage the Canadian Government to join the Fediverse. If you want to support this petition, follow the link to Chris Alameny’s post where you’ll find links to both French and English versions of the petition. You have to be a Canadian citizen to sign the petition. We only need a few more signatures to get over the 500 needed. We’d love to see every Province and Territory represented.
Nothing. Waterloo is a fine place. I lived there for quite a while. But, I’ve been in Kitchener for 30 years, so when I found the “official” Kitchener page, I followed it. Just thought I’d wave to see if anyone else was here.
My deepest apologies for the typo. Your policing is deeply appreciated.
Ontario’s Liberal Party is still in trouble, and it seems like they don’t know it.
Great essay by Scrimshaw on the state of play in the OLP. It’s likely they won’t survive another election if they continue with their internal delusions.
Great essay from Brittlestar this week on just not being a dick.
or How Defending What's Right Doesn't Require A Membership
I always enjoy Brittlestar’s stuff, but this week’s essay seems particularly apropos.
Ya just gotta wonder how the cetacean one got on the books in the first place. I mean, was somebody running around randomly impregnating whales with stolen sperm at some point?
Interesting paper that points out the likelihood that extensive use of generative AI on the web will eventually cause these models to collapse. The likely outcome is even wilder hallucinations and, eventually, gibberish.
On Catastrophic Forgetting and Mode Collapse in Generative Adversarial Networks
In this paper, we show that Generative Adversarial Networks (GANs) suffer from catastrophic forgetting even when they are trained to approximate a single target distribution. We show that GAN training is a continual learning problem in which the sequence of changing model distributions is the sequence of tasks to the discriminator. The level of mismatch between tasks in the sequence determines the level of forgetting. Catastrophic forgetting is interrelated to mode collapse and can make the training of GANs non-convergent. We investigate the landscape of the discriminator's output in different variants of GANs and find that when a GAN converges to a good equilibrium, real training datapoints are wide local maxima of the discriminator. We empirically show the relationship between the sharpness of local maxima and mode collapse and generalization in GANs. We show how catastrophic forgetting prevents the discriminator from making real datapoints local maxima, and thus causes non-convergence. Finally, we study methods for preventing catastrophic forgetting in GANs.
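For anyone wondering what “the sequence of changing model distributions is the sequence of tasks to the discriminator” looks like in practice, here’s a minimal toy sketch. It assumes PyTorch, a 1D Gaussian target, and made-up network sizes, none of which come from the paper; it just illustrates the continual-learning framing the authors describe, where every generator update shifts the fake-sample distribution and so quietly changes the discriminator’s classification task.

```python
# Toy GAN sketch (assumed PyTorch; 1D example, not from the paper) showing how
# the discriminator's "task" (real vs. current fakes) drifts as the generator trains.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # Target distribution the generator should learn: N(3, 0.5)
    return 3.0 + 0.5 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator (logits)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: its task is "real vs. fakes from the *current* generator".
    fake = G(torch.randn(64, 8)).detach()
    real = real_batch()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: this shifts the fake distribution, i.e. changes D's task,
    # which is what makes GAN training a continual-learning problem for D.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    if step % 500 == 0:
        with torch.no_grad():
            sample = G(torch.randn(1000, 8))
        print(f"step {step}: fake mean={sample.mean().item():.2f}, "
              f"std={sample.std().item():.2f}")
```

If the discriminator forgets what earlier fakes looked like (the catastrophic forgetting the paper studies), the generator can cycle back to old modes instead of converging, which is where the mode-collapse connection comes in.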
10 AI Graphs to rule them all
The AI Index tracks breakthroughs, GPT training costs, misuse, funding, and more
Here’s the state of the art in AI, according to Stanford.
An AI Scorecard - experts’ views on AGI and the possibility of apocalypse
How worried are top AI experts about the threat posed by large language models like GPT-4?
Generally, it seems AI experts are divided about how close we are to developing an AGI, and how close any of this might take us to an extinction-level event. On the whole, though, they seem to think it’s unlikely that AI will kill us all. Maybe.
What about the Regions, like York Region, Waterloo Region, etc? Would it make sense to have lemmy communities for them, too?
I'm reading about applications for AI in safety-related control systems for machinery. Finding clear guidance on risk assessment for AI systems has been quite difficult. I’d like to talk to anyone who has experience in this area.
Hey fellow Kitchener folk
Hi all, new to Lemmy but a KW resident for more than 30 years. Looking forward to seeing some stuff going on here.