important instance shit
-
update: email, backups, and writefreely
this is somewhat of a bigger update, and it's the product of a few things that have been in progress for a while:
email
email should be working again as of a couple months ago. good news: our old provider was, ahem, mildly inflating our usage to get us off their free plan, so this part of our infrastructure is going to cost a lot less than anticipated.
backups
we now have a restic-based system for distributed backups, thanks to a solid recommendation from @[email protected]. this will make us a lot more resilient to the possibility of having our host evaporate out from under us, and make other disaster scenarios much less lethal.
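for the curious, NixOS's stock restic module makes this kind of setup pretty compact. a minimal sketch with made-up repo and secret paths (not our actual config) -- the distributed part is just more entries under `services.restic.backups` pointing at different hosts:

```nix
{ ... }:

{
  services.restic.backups.offsite = {
    # repository location and secret path are illustrative placeholders
    repository = "sftp:backup@offsite.example.org:/srv/restic/awful-systems";
    passwordFile = "/run/secrets/restic-password";
    paths = [
      "/var/lib/lemmy"
      "/var/backup/postgresql"
    ];
    initialize = true;                  # create the repository on first run
    timerConfig.OnCalendar = "daily";   # systemd timer schedule
    pruneOpts = [ "--keep-daily 7" "--keep-weekly 4" "--keep-monthly 6" ];
  };
}
```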
writefreely
I used some of the spare capacity on our staging instance to spin up a new WriteFreely instance where we can post long-form articles and other stuff that's more suitable for a blog. post your gibberish at gibberish.awful.systems! contact me if you'd like an invite link; WriteFreely instances are particularly vulnerable to being turned into platforms for spam and nothing else, so we're keeping this small-scale for instance regulars for now.
alongside all the ordinary WriteFreely stuff (partial federation, a ton of jank), our instance has a special feature: if you have an account, you can make a PR on this repository, and once it's merged, gibberish will automatically pull its frontend files from that repo and redeploy WriteFreely. currently this is only for the frontend, but there's a lot you can do with that -- check out the `templates`, `pages`, `less`, and `static` directories on the repo to see what gets pulled. check it out if you see some jank you want to fix! (also it's the only way to get WriteFreely to host images as part of a post, no I'm not kidding)
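for a rough idea of the shape of that pull step, here's a sketch in nix -- the repo URL and hash are placeholders, not our actual deployment code:

```nix
{ pkgs, ... }:

let
  # placeholder pin of the frontend repo; the real deployment tracks the merged branch
  frontendRepo = pkgs.fetchgit {
    url = "https://git.example.com/awful-systems/gibberish-frontend";
    rev = "refs/heads/main";
    sha256 = pkgs.lib.fakeSha256;  # placeholder hash
  };
in
# assemble just the directories WriteFreely reads for its frontend
pkgs.runCommand "gibberish-frontend" { } ''
  mkdir -p $out
  for dir in templates pages less static; do
    cp -r ${frontendRepo}/$dir $out/
  done
''
```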
what's next?
next up, I plan to turn off Hetzner's backups for awful.systems and use that budget to expand the node's storage by 100GB, which should increase the monthly bill by around 2.50 euros. I want to expand our storage this way instead of using an object store like S3 or B2 because block storage makes us more resilient to Hetzner or Backblaze evaporating or ending our service, and because this decision is relatively easy to undo if it proves not to scale, while going from object storage back to generic block storage would be very hard.
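assuming the extra 100GB shows up as a Hetzner volume, the NixOS side of mounting it is about this much config (the volume id and mount point are placeholders):

```nix
{
  # Hetzner volumes show up under /dev/disk/by-id; this id is a placeholder
  fileSystems."/var/lib/lemmy" = {
    device = "/dev/disk/by-id/scsi-0HC_Volume_12345678";
    fsType = "ext4";
    options = [ "nofail" ];  # don't hang boot if the volume is detached
  };
}
```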
after that, it'll be about time to carefully upgrade to the current version of Lemmy, and to get our fork (Philthy) in a better state for contributions.
as always, see our infrastructure deployment flake for more documentation and details on how all of the above works.
-
infra: email notifications might be a bit spotty
we’ve exceeded the usage tier for our email sending API today (and they kindly didn’t email me to tell me that was the case until we were 300% over), so email notifications might be a bit spotty/non-working for a little bit. I’m working on figuring out what we should migrate to — I’m leaning towards AWS SES as by far the cheapest option, though I’m no Amazon fan and I’m open to other options as long as they’ve got an option to send with SMTP
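for context, lemmy itself only needs SMTP settings, so switching providers is mostly a config swap. an illustrative sketch of the nix side -- the endpoint, credentials, and addresses here are placeholders, not a commitment to SES:

```nix
{
  services.lemmy.settings.email = {
    smtp_server = "email-smtp.us-east-1.amazonaws.com:587";  # SES's SMTP endpoint, as one example
    smtp_login = "SMTP-USERNAME";      # placeholder credentials
    smtp_password = "SMTP-PASSWORD";   # should really come from a secrets file, not the nix store
    smtp_from_address = "noreply@awful.systems";
    tls_type = "starttls";
  };
}
```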
-
flag any spam you see from lemmy.world (or elsewhere)
we’re seeing a bit of spam come in from lemmy.world. if you happen to see any (and a lot of it seems to be in DMs), make sure to flag it. that’ll let both us and the originating instance’s mods know. if we get a bunch of reports and it seems like lemmy.world isn’t cleaning things up properly, we’ll take further steps to limit the amount of spam we get
-
upgrade: lemmy 0.19.3
this one should hopefully fix the remaining token issues folks have been having, though I'm not seeing anything in the commit log about fixes for the other session and pagination issues we've noticed. as always, let me know if anything looks broken. I'm still working on getting Photon deployed, which might be a good workaround for the frontend breakages we've been seeing.
-
upgrade: lemmy 0.19.2 + infrastructure
today's (later than planned) upgrade to lemmy 0.19.2 provisionally appears to have gone alright. if you see excessive amounts of jank (and your page footer can't decide what version of lemmy it's running on, i.e. it shows separate FE and BE versions), clear your browser cache and cookies, since lemmy doesn't seem to do that cleanly on its own
next up I'm planning to deploy the Photon frontend as an alternative to the default, and I'm also going to start pushing code to codeberg (most likely), so stay tuned for that
-
awful.systems is updating its priors
I’m taking awful.systems down for a bit tomorrow (January 13) around 11 PM GMT because after 16 release candidates and 2 hotfixes, lemmy 0.19.x finally seems like a safe enough upgrade. this is going to be a major one, so I’ll be taking our instance down temporarily to get a database backup before I apply the upgrade. expect exceptional levels of jank!
-
defed: threads (and commemorative cocktail recipe!)
now that threads is starting to federate, they sure as fuck aren’t with us
threads.net commemorative cocktail:
- glass: old fashioned (lowball)
- pour hard cider from red apples until glass is 3/4 full
- top with 1 shot of bourbon
- smoke glass with cherry wood
- garnish with sliced lime, or add lime juice to taste
- drink and meditate on what AOL and then Google did to usenet
-
interest check: should we start a buttcoin (and meme stock) sub?
a couple of our regulars have expressed interest in having an anti-cryptocurrency sub here. so interest check: reply to this thread if you want us to have buttcoin
edit: also, meme stock bullshit is on topic for our buttcoin (unless the threads get overwhelming, then we’ll split off another sub)
-
federation outage (update: fixed? post here if it's not fixed)
update: the fix for this was stupid, please let me know if anything still looks broken
it's looking like our federation with other servers may have fallen over sometime during the week. we're currently debugging; right now we're seeing that threads seem to federate between lemmy instances (and federate into mastodon when requested specifically), but comments aren't federating in either direction
-
should we start a community to announce and seek contributors for open source projects?
given the absolute fucking state of the open source community in general, and the fact that hacker news of all places is where the majority of new open source projects get discovered, is there any interest in starting a community here where folks can announce and solicit help with their open source projects?
we could possibly use NotAwfulTech, but:
- I kind of want to keep self-promotion out of that community
- my code is probably awful for everyone else, that's why I'm seeking contributors
let me know if anyone's down for the new community or wants to expand the scope of NotAwfulTech to include stuff like this. if you're on team new community also feel free to suggest a name
-
does anyone use or want the Photon alternative lemmy UI?
in a thread complaining about the general state of lemmy, I read a comment where someone linked the alternative lemmy UI Photon. some general thoughts:
- this shit looks like new.reddit, which I hate
- however, it is extremely fast
- it looks like someone with UX experience was at least in proximity to this at the time it was designed?
- I don’t think there’s an easy CSS way to make this look less like new.reddit
- having tried it on a test instance, the promise of better mod/admin tools seems ambitious currently, though maybe they’ll get there faster than lemmy-ui
- overall, it feels a lot nicer to use than either lemmy-ui or new.reddit
you can hook Photon up to awful.systems using the Accounts option in the menu on the top right, though for opsec reasons I can’t encourage anyone to log in to this weird external site with their awful.systems credentials. check it out with the guest instance option (which doesn’t need a login) or use a disposable lemmy.ml account or something
what I want to know is: does anyone use this thing, and does anyone want it here? if there’s demand for it, I can spin up a secure copy of it for our instance under an alternate path. for me it’s a bit of a hard sell due to its resemblance to the reddit redesign, but lemmy’s UI is decoupled enough from its backend that running this thing shouldn’t impact much
-
defed: hexbear
whoa, lemmygrad got a vaporwave logo and a much stupider name! too bad their posts are still fucking terrible
-
update: lemmy -> 0.18.4, @dgerard added as an infrastructure admin
some quick awful.systems infrastructure updates:
- @[email protected] is now an infrastructure admin!
- updated lemmy to 0.18.4
- broke lemmy and lemmy-ui into their own flakes, which the deployment repo will grab and build as needed (see the sketch after this list)
- added the sneer-archive flake to the deployment
- finally wrote some docs on how to deploy from the flake
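schematically, the flake split mentioned above looks like this -- input names and URLs are illustrative, see the actual repo for the real pins:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    # illustrative URLs; the real flake pins our own forks
    lemmy.url = "git+https://git.example.com/awful-systems/lemmy";
    lemmy-ui.url = "git+https://git.example.com/awful-systems/lemmy-ui";
  };

  outputs = { self, nixpkgs, lemmy, lemmy-ui }: {
    # the deployment configs pull packages out of each input and build as needed
    nixosConfigurations = { /* ... */ };
  };
}
```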
-
defed: rammy
I added `rammy` to the instance blocklist because it's apparently unmoderated and has been invaded by anime nazis
-
defed: added exploding-heads and basedcount to the instance block list
I defederated us from two lemmy instances:
- `exploding-heads`: transphobia
- `basedcount`: finally I get to ban most of r/PoliticalCompassMemes in one go
-
postmort: july 31, 2023 outage
we suffered some extremely unexpected downtime while I deployed a trivial change (a reverse proxy from `http://awful.systems/archives` to `http://these.awful.systems/archives`) to prod

the downtime was unrelated to the deployment change; instead, it seems like `lemmy-ui` started crashing because it couldn't render the app icons it uses when saved as a home screen app on mobile. it uses a fairly heavy dependency to do this, and has no error handling in case the source icon data is corrupt, which causes it to crash on every request (resulting in a `503 Service Unavailable` error for everyone who tried to access awful.systems during this outage)

I don't know how that corruption occurred or why it was persistent (the app icon data should be fully static as part of the Nix store as far as I know), so until I can dig in I've disabled generating app icons for our instance. since it seems like we're the first ones to hit this bug, I'll do my best to keep the patch upstreamable so other lemmy instances can benefit from the fix
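to keep it upstreamable, that kind of fix can live as a plain patch against lemmy-ui, applied in an overlay however the package is sourced; a sketch with a hypothetical patch path:

```nix
# hypothetical overlay; the patch filename is made up
final: prev: {
  lemmy-ui = prev.lemmy-ui.overrideAttrs (old: {
    # patch that disables app icon generation until the crash is understood
    patches = (old.patches or [ ]) ++ [ ./patches/disable-app-icon-generation.patch ];
  });
}
```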
-
federation with Mastodon (and possibly other fediverse services) is now working!
see this lemmy-ansible github issue for the fix; basically, our web server now knows how to handle activitypub traffic in a more conforming way
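the usual shape of that fix is content negotiation in the web server: anything asking for ActivityPub JSON gets routed to the lemmy backend instead of lemmy-ui. roughly, in NixOS nginx terms (upstream names are placeholders):

```nix
{
  services.nginx.virtualHosts."awful.systems".locations."/".extraConfig = ''
    # "lemmy" and "lemmy-ui" are placeholder upstream names
    set $proxpass "http://lemmy-ui";
    # ActivityPub clients ask for activity+json / ld+json; send them to the backend
    if ($http_accept ~ "^application/(activity|ld)\+json") {
      set $proxpass "http://lemmy";
    }
    # federation POSTs (inbox deliveries) also go to the backend
    if ($request_method = POST) {
      set $proxpass "http://lemmy";
    }
    proxy_pass $proxpass;
  '';
}
```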
to interact with us from mastodon:
- find the community you want to subscribe to here. note its real name -- that's the name in the sidebar after the !
- search for @[email protected] in mastodon
- follow that user and enjoy our posts over there! replying and boosting should work ok, no guarantees for anything else
as for interacting with mastodon from here, I think you can paste mastodon URLs into our search and it'll maybe work? someone try that
-
we're federated y'all! (and other instance updates)
big update, awful.systems is now a federated lemmy instance. let me know if anything looks broken! here's what to expect:
- to pull up an awful.systems community on another instance, just paste that community's URL into the other instance's search bar
- federation with other lemmy instances should work, and probably kbin too? there's no way I can find to pull in pre-federation posts on remote instances though, so send your friends here to read the backlogs
- we can't federate with most of mastodon right now because lemmy doesn't implement `authorized_fetch`, which is a best practice setting for mastodon instances. if your instance doesn't use it, try entering something like `@[email protected]` into your mastodon search; lemmy communities are represented to mastodon as users
- this is pretty much an experimental thing so if we have to turn it off, I'll send out another post
- reply to this post with ideas for moderation tools and instances you'd like to see blocked (and a reason why) and we'll take action on anything that sounds like a good idea
federation was made possible by
- lemmy's devs skipping their release process and not telling anyone 0.18.2 was released on friday? so we're on 0.18.2 now
- updating all of the deployment cluster's flake inputs just in case
- @[email protected] shouting yolo
-
updated lemmy to 0.18.1
I was gonna do this quietly since I was doing it mostly for security fixes, but now I guess I gotta announce that I deployed lemmy 0.18.1 to the awful.systems cluster. changes include
- sweet christ did this UI get smaller and uglier? whose idea was this.
- we have more theme options! most of them are terrible. there is a vaporwave theme I kinda like in a geocities way. if you come here and it looks like geocities I switched to that one
- they fixed like 3 out of the 4 webdev 101 security holes they left in the code
- there's some small new UI features?
- sometimes they just make changes for no reason
- let me know if anything looks broken
-
email notifications and mobile apps are now working, and other minor updates
I rolled out some minor but important updates to the deployment cluster just now:
- email notifications are now enabled in production. let me know if this tanks performance. also we’re on a fairly limited email plan so I’ll post an update if we exceed its limits (which’ll break notifications again)
- we now have a staging deployment and a development deployment, which let me make these changes without taking production down
- `lemmy-ui` in production is now running in production mode, which should improve performance slightly
- late update: awful.systems now reports a correct lemmy backend version, so lemmy mobile apps should work. I confirmed that mlem on ios works, but let me know if jerboa or anything else is broken
-
awful.systems is running on a new host, tell me if anything looks broken
I just finished a migration that doubled the resources awful.systems has available. let me know if I fucked anything up and didn't notice
changelog for this deployment:
- `more.awful.systems`, a Hetzner CPX31, was added to the cluster
- all dynamic data was migrated from `these.awful.systems` to `more.awful.systems`
- the load balancer target was swapped from `these` to `more`
- now I can throw up a maintenance page for next time I need to do this
-
wanna see some code? c'mere
this instance runs on open infrastructure. the code that deploys awful.systems is available here.
right now I've got the following planned for the awful.systems cluster:
- in addition to the current `prod` lemmy deployment, split off `staging` and `dev`. `staging` will be used to function check infrastructure updates before they hit `prod`. `dev` will be used for feature development.
- add a maintenance mode to `prod` that shuts off the lemmy services and replaces every route with a maintenance page. this will be necessary for big moves like host migrations or storage expansion that'll take the database offline
- make the backend return a damn version so the lemmy apps don't break? I'm guessing this broke because nix deletes `.git` when pulling sources. this can probably be fixed lazily using `keepGit` or properly with a patch to lemmy's version detection (see the sketch after this list)
- start work on a less janky alternative to `lemmy-ui`, which will be deployed to `dev` until it's worth using and hopefully mostly not broken (ima call it `lessjank`)
- also start work on better moderation tools, implemented in both `lemmy-server` and `lemmy-ui`
- probably migrate `prod` to a bigger hetzner host -- this'll take awful.systems offline for a little bit as I restore the database into the new system
- ~~eventually set up sendgrid? email notifications actually working will probably be beneficial~~
if you'd like to contribute, contact me. the deployment parts of awful.systems are written in nix, and everything else will be rust