Artists have finally had enough of Meta’s predatory AI policies, but Meta’s loss is Cara’s gain. An artist-run, anti-AI social platform, Cara has grown from 40,000 to 650,000 users within the last week, catapulting it to the top of the App Store charts.
Instagram is a necessity for many artists, who use the platform to promote their work and solicit paying clients. But Meta is using public posts to train its generative AI systems, and only European users can opt out, since they’re protected by GDPR laws. Generative AI has become so front-and-center on Meta’s apps that artists reached their breaking point.
People talking about pixelfed are missing a key point: Cara is super easy to find and join. You go, type your email or login with your google account and that's it. You don't even have to remember a password. Nobody wants to find a server, apply to join, hope to get accepted, then somehow find all other artists like you.
Also, it looks good. Like, really good. That's the kind of thing that grabs artists' attention.
Open the app and get asked which instance I want to join. There are no suggestions.
Do a search for instances and pick one, go to the website and register with email and password. Requires email confirmation. Still waiting on the email confirmation link, 4 hrs later and 2 resends.
Literally haven't been able to sign up yet.
Even if it had worked, the workflow would have been to change back to the app, type out the instance then re-login.
I'm not sure how anyone expects anyone other than the most hardcore to sign up for these services. Maybe that's the point, but if the point is to grow, the sign-up process needs to be significantly simpler overall.
Biggest problems I have had with Mastodon are the fact that:
The app I wanted to use didn't even recognize the instance I signed up for and...
I had to wait nearly a month and a half before being able to actually use my account and access Mastodon, because I joined an instance where they review people signing up or something similar.
I definitely see the appeal of a "find the site, sign up, and you're done" service over the fediverse's "join an instance and pray" approach.
Aside from that, people spamming about Pixelfed are missing the point that this is also a deviantart alternative. The landing page showing tons of art you might be interested in is great.
Also Pixelfed would straight up share their images to other servers that might allow web scraping bots which is part of the reason they made this website.
A decentralized system with cryptographic identities wouldn't even require that. All these rituals about "dragging your mouse around for 2 minutes" and overloaded UIs, like in Retroshare and Freenet, were simply aimed at people who felt more comfortable, not less, seeing them.
I think it would be great for new social things like this to just speak ActivityPub. They can build up their own user experience and culture while joining a larger network. I don't have a problem with the software itself being non-free if the protocols are and they commit to supporting account migration.
This would be a good approach to improve growth of the community.
Does the ActivityPub protocol support copyright for user content?
E.g. an artist releases some picture and they explicitly prompt a license. Each client should accept that they are obligated to prompt this license when using the content... Something like this
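To make that concrete: ActivityPub itself defines no license field, but objects are JSON-LD, so a server can attach one through an extension context. Below is a minimal sketch, assuming a schema.org mapping; the URLs and object values are illustrative, and nothing in the protocol forces a receiving client to honor the field:

```python
import json

# Hedged sketch: ActivityStreams 2.0 has no built-in "license" property,
# but because objects are JSON-LD, a server can declare one via an
# extension context (here mapping "license" to schema.org's term).
# The protocol can carry the metadata; it cannot enforce it.
image_object = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"schema": "http://schema.org/", "license": "schema:license"},
    ],
    "type": "Image",
    "name": "Example artwork",
    "url": "https://example.social/media/artwork.png",
    "attributedTo": "https://example.social/users/artist",
    "license": "https://creativecommons.org/licenses/by-nc/4.0/",
}

print(json.dumps(image_object, indent=2))
```

Fediverse servers already use JSON-LD extensions like this for other custom fields, so carrying a license is technically easy; the hard part is exactly what the comment identifies, which is getting every client to treat the field as an obligation rather than a suggestion.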
People chose Cara because they identify with the art aspect of this social network. They don't care if it's anti-libre. They probably don't even know what it means.
The purpose of a federated instance like Pixelfed is to be a blank slate. You can do anything with it. Any niche. Art in this case.
The issue here is to bring these people to Pixelfed and make them feel at home within their niche.
And then that growth promptly blew its budget because it's using expensive cloud AI services from Vercel and it has no means of monetization whatsoever to bring money in.
People can do whatever they want, of course. But they have to pay for the resources they consume while doing that, and it seems Cara didn't really consider that aspect of this.
I get the sense that a federated image hosting/sharing system would be counter to their goals, that being to lock away their art from AI trainers. An AI trainer could just federate with them and they'd be sending their images over on a silver platter.
Of course, any site that's visible to humans is also visible to AIs in training, so it's not really any worse than their current arrangement. But I don't think they want to hear that either.
Oh no... Are they running it entirely on serverless functions? What a disaster. I'm surprised the website is still up, is the owner not worried about going bankrupt?
There's a big inky black spot on my screen where I dropped it, and I can't afford a new one, and mobile keyboards are ass, so typos in the first idk 50 chars are common. 🪦
I really appreciate your super stark pro-libre-software attitude. I want to support you here. But you should know that the approach you are taking is ultra abrasive and would probably cause more harm than good.
People would just associate libre software with militant weirdos, if all they saw were your posts.
If you want to make meaningful change, I strongly recommend taking a softer, less abrasive approach.
We want libre software to be connected with safety, friendliness, and personal autonomy, not militancy, chanted phrases, and dogma.
Even on Lemmy, the ultra pro-libre-software social network (relative to non-federated networks), your current approach is off-putting. I want you to succeed, and I think a different approach may be better.
I hate that Pixelfed isn't good enough to capture these users and I say this as someone who uses it over Instagram.
From what I've seen (and I have been watching fairly closely), I think Pixelfed and the stretched-too-thin-can't-prioritize-and-somewhat-monarchical dev himself might just need more time to cook. I still have hope in him and his projects, but I won't be holding my breath again. If good shit happens, it happens. And I do hope it happens, because it should've been Pixelfed in this article, like Mastodon was with Twitter or Lemmy with Reddit. Not whatever this new corp that came out of nowhere is.
I personally think we have reached a point where we actually can do this. User-controlled projects can now keep up with, and sometimes beat, big business. Look at Asahi Linux: a small group of nerds are reverse-engineering Apple's latest tech and allowing us to do all kinds of things Apple never wanted us to do with these machines. Mastodon, Lemmy, and Nextcloud are all open-source projects keeping up with huge companies.
Well, it's certainly better than Instagram... Who knows, maybe Cara could federate with ActivityPub in the future... Not that I'll keep my hopes up for that.
Train the AI on what glaze does and it'll eventually be able to deglaze. So glaze gets better and stops it for a bit and then the deglazer gets better and wins again. Repeat forever.
We do not agree with generative AI tools in their current unethical form, and we won’t host AI-generated portfolios unless the rampant ethical and data privacy issues around datasets are resolved via regulation
Okay, I wanted to talk real quick about this aspect. Lots of folks want AI training to be restricted to material the trainer holds the copyright to. And fine, let's just run with that for the sake of brevity. Disney owns everything. If you restrict AI to models trained only on material the developer holds the copyright to, only Disney will be generating AI for the near future.
I'm just going to tell you: the biggest players out there are the ones who stand to profit the most from regulation of AI. And likely, they'll be the ones tasked by Congress to write drafts of the regulation.
In the event that legislation is passed to clearly protect artists, we believe that AI-generated content should always be clearly labeled, because the public should always be able to search for human-made art and media easily
And the thing is, is Photoshop even "human-made art"? I mean, that was the debate back in the 90s, when a ton of airbrush artists lost their jobs. And a large amount of the Photoshop work done back then was so bad that we had the whole Ralph Lauren / Filippa Hamilton thing go down.
So I don't disagree with safe-from-AI places. But the justification for Cara's existence is literally every argument that was leveled at Photoshop back in the 90s by airbrush artists who were looking to protect their jobs, and who failed because they focused so heavily on being anti-Photoshop that the times changed without them, when they could have started learning Photoshop and kept having a job.
I think AI presents a unique tool for artists to use to become more creative than they have ever been. But I think that some of them are too caught up in how CEOs will eventually use that tool as justification to fire them. And there's a lot of propensity to blame AI when it's the CEOs writing the pink slips, just like the airbrush artists blamed Photoshop when it was the newspapers, the magazines, and so on that were writing the pink slips.
I just feel like a lot of people are about to yet again get caught with their pants down on this. And it's easy to diss on AI right now, because it's so early. Just like bad Photoshop back in the 90s led to the funny Snickers ad.
Like, I get that people building models from other people's stuff is bad. No argument there. But open models, built from a community's own images, are a thing too, and that's all based on the community and the people who decide to join a collaborative effort to provide a community model. And I think folks are getting so hung up on being anti-AI that it's going to hurt their long-term prospects, just like the airbrush folks who started picking up Photoshop way too late.
There's no stopping Disney and the media companies from using AI; they're going to, and if you enjoy getting a paycheck, having some skill in the thing they use is going to be required. But for regular people to provide a competitor, to fight on equal footing, the everyday person needs access to free tools. Imagine if we had no GIMP, no Krita, no Inkscape. Imagine if it was just Adobe and nothing else, and that was enforced by regulation because only Adobe could be "trusted".
I’ve heard the “big guys are the only ones that will profit from AI regulation” and I haven’t ever heard an actual argument as to why.
And in my mind the biggest issues with AI image generation have nothing to do with using it as a tool for artists. That’s perfectly fine. But what it is doing is making it infinitely easier to spread enormous amounts of completely unidentifiable misinformation, due to being added with indistinguishable text to speech and video generation.
The barrier is no longer “you need to be an artist”. It’s “you need to have an internet connection”.
Ah. No problem. So the notion behind "the big guys are the ones that stand to profit from AI regulation" is that regulation curtails activity in a general sense. However, many of the offices that create regulation defer to industry experts for guidance on regulatory processes, or have former industry experts appointed onto regulatory committees (a good example of the latter is Ajit Pai and his removal of net neutrality).
AI regulation at the Federal level has mostly circled "trusted" AI generation, as you mentioned:
But what it is doing is making it infinitely easier to spread enormous amounts of completely unidentifiable misinformation, due to being added with indistinguishable text to speech and video generation
And the talk has been to add checks along the way, enforced by the industry itself (much like how the music industry polices itself, or how the airline industry has mostly policed itself). So this would leave players like Adobe and Disney to largely dictate what the "trusted" platforms for AI generation are: platforms where they will ensure, via content moderation and software control, that only "trusted" AI makes it out into the wild.
Regulation can then take the shape of social media being required to enforce regulation on AI posts, source distributors like github being required to enforce distribution prohibitions, and so on.
This removes the tools for any AI out of the hands of the public and places them all in the hands of Adobe, Disney, Universal, and so on. And thus, if you wanted to use AI you must use one of their tools, which may in turn have within the TOS that you can not use their product to compete with their product. Basically establishing a monopoly.
This happens a lot in regulatory processes which is why things like the RIAA, the MPAA, Boeing, and so on are so massive and seemingly unbreakable. They aren't enshrined in law, but regulatory processes create a de facto monopoly that becomes difficult to enter because of fear of competition.
The big guys, being the industry leaders, in a regulatory hearing would be the first to get a crack at writing the rules that the regulatory body would debate on. In addition to the expert phase, regulatory process also includes a public comment, this would allow the public to address concerns about the expert submitted recommendation. But as demonstrated back in the public comment of the debate to remove rules regulating ISPs for net neutrality, the FCC decided that the comments were "fake" and only heard a small "selected" percentage of them.
Side note: in a regulatory hearing, every public comment accepted must be debated, and the rationale for the conclusion of the argument must be submitted to the record. This is why Ajit Pai suspended comments on net neutrality: they didn't want to enter into the record justification that could be brought up in a court case.
The barrier is no longer “you need to be an artist”. It’s “you need to have an internet connection”
And yeah, that might be worth locking AI out of the hands of the public forever. But it doesn't stop the argument of "AI taking jobs". It just means that small startups will never be able to create jobs with AI. So if the debate is "AI shouldn't take our jobs, let's regulate it", that will only make it worse in the end (sort of how AWS has mostly dominated the Internet services and how everyone started noticing that as not being incredibly ideal around 2019-2021 when Twitter started kicking people off their service and people wanting to build the next Twitter were limited to what Amazon would and would not accept).
So that's the argument. And there's pros and cons to each. But we have to be pretty careful about which way to go, because once we go a direction, it's pretty difficult to change directions because corporations are incredibly good at adapting. I distinctly remember streaming services being the "breath of fresh air from cable" all the way up till it wasn't. And now with hard media becoming harder to purchase (it's not impossible mind you) we've sort of entrenched streaming. Case in point, I love Pokémon Concierge, it is not available for purchase as a DVD or whatever (at least not a non-bootleg version), so if I ever want to watch it again I need Netflix.
And do note, I'm not saying we shouldn't have regulation on AI, what I am saying is that there's a lot for consideration with AI regulation. And the public needs to have some unified ideas about it for the regulatory body's public comment section to ensure small businesses that want to use AI can still be allowed. Otherwise the expert phase will dominate and AI will be gone from the public's hands for quite some time. We're just now getting around to reversing the removal of net neutrality that started back in 2017. But companies have used that 2017 to today to form business alliances (Disney + Hulu Verizon deal as an example) that'll be hard to compete with for some time.
Neat. I like the concept. From a viewing perspective do wish it had some filters and better browsing capacity for finding art, but definitely bookmarkable - glad it's growing.
I mean, avoiding AI images is baked into their mission statement. I guess they could go full asshole and renege on this, but unlike Meta, who can piss off a lot of people without affecting their bottom line, if Cara reneges on their whole reason for being, a huge chunk of their user base is going to run off. It would likely be suicidal and only good for a quick cash-grab exit strategy. I mean, I fully believe almost anything tech should sadly be expected to crumble into enshittification on increasingly shorter arcs. If you are looking for long-term quality online services that don't decay, you are in for lots of disappointment.
You have to opt out. I got an email from Meta with a link to the form. It doesn't really seem to matter what you write; it got approved in less than a minute for me. I think they purposely made it look like it's more work than it's worth.
Because Vero has nobody on it outside of creators. I looked into it as an alternative for my photography and it’s a ghost town outside of creators posting. Nobody is liking/commenting on anything
I mean, but that’s the point? For creators? Not as a replacement for Facebook. I don’t want to share my stuff with people who are just gonna steal it and share it as their own in some Facebook group or on Reddit. Literally sharing with creators is the purpose of these apps.
What are the ways that US domains can block AI? I figure pay walls, and captchas, but is there something we can add to robots.txt that has any teeth against AI scraping? I mean would we even know if they obeyed it anyway? How do we set traps and keep this shit out?
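For the robots.txt part: it has no teeth on its own, but several large AI crawlers publish user-agent tokens they claim to honor. A sketch of such a block list (these tokens were accurate as of mid-2024; check each vendor's current crawler documentation, since new tokens appear regularly):

```text
# Voluntary opt-out for some known AI-training crawlers.
# Only effective if the crawler chooses to respect robots.txt.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Google's AI-training token (separate from Googlebot search indexing)
User-agent: Google-Extended
Disallow: /

# Common Crawl, whose dumps feed many training datasets
User-agent: CCBot
Disallow: /
```

Beyond that, it really does come down to paywalls, rate limiting, and trust, since robots.txt is a request, not an enforcement mechanism.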
Captchas haven't worked against serious actors for years, and companies could easily pay for a user account. Anything a normal tech-illiterate person can do, companies can automate. You sort of have to trust their pinky promise not to scrape content.