farcaller @fstab.sh

**beep** bop.

Posts 7
Comments 133
Network Switch
  • I had exactly the same use case and ended up with a 40G DAC link for that. It was cheaper than converting the whole LAN to 10G.

    That said, it feels like used 10G equipment is easier to come by than 2.5G for now, and if you have a 2G fiber uplink and only 1G past the router, then it's a waste.

  • Should I keep shared or separate k8s clusters?
  • No. It's my in-cluster storage that I only use for things that are easier to work with via the S3 API, and I do backups outside of the k8s scope (a bunch of various solutions that basically boil down to offsite ZFS replication). I'd suggest you take a look at garage's replication features if you want it to be durable.

  • Should I keep shared or separate k8s clusters?
  • Actual public services run there, yeah. If any of them is compromised, it can only access limited internal resources, and an attacker would have to fully compromise the cluster to get the secrets needed to access those in the first place.

    I really like garage. I remember when minio was straightforward and easy to work with; Garage is that thing now. I use it because it's just so much easier to handle file serving when you have S3-compatible uploads, even when you don't do any real clustering.

  • My impression of github since switching to Linux
  • Between Homebrew and Nix, the amount of FOSS Macs can run out of the box is pretty close to a generic Ubuntu (nixpkgs is technically the largest package repo out there, but not all of it is available on macOS).

  • Should I keep shared or separate k8s clusters?
  • I’ve dealt with exactly the same dilemma in my homelab. I used to have 3 clusters, because you'd always want an "infra" cluster that the others can talk to (for monitoring, logs, docker registry, and similar workloads). In the end, I decided it wasn't worth it.

    I separated on the public/private boundary and moved everything public-facing to a separate cluster. It can only talk to my primary cluster through specific endpoints (over a tailscale ingress), and I no longer run a multi-cluster mesh (I used to have istio for that, then cilium). This way, the public cluster doesn't have to be large capacity-wise: e.g. all the S3 API needs are served by garage from the private cluster, and the public cluster reverse-proxies into it for specific needs.
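
    For illustration, here is a minimal sketch of that reverse-proxy hop in Go, using only the standard library. The hostnames, port, and path prefix are made up, and in a real cluster this would more likely be an ingress rule than a hand-rolled proxy.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical Garage S3 endpoint in the private cluster, reachable
	// from the public cluster only over the tailscale ingress.
	target, err := url.Parse("http://garage.private.ts.net:3900")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	mux := http.NewServeMux()
	// Expose only one path prefix publicly; everything else never leaves
	// the public cluster, which keeps its blast radius small.
	mux.Handle("/public-assets/", proxy)

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```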

  • Email server thru cloudflare
  • > The biggest certainty is that just having an open port for an SMTP server dangling out there means you will 100% be attacked.

    True.

    > Not just sometimes, non-stop.

    True.

    > So you don't want to host on a machine with anything else on it, cuz security.

    I don't think "cuz security" is a proper argument, or nothing would ever be listening on the public internet. Are there risks? Yes.

    > So you need a dedicated host for that portion

    Bullshit. You do not need a dedicated host for smtp ingress. It won’t be attacked that much.

    > and a very capable and restrictive intrusion detection system (let's say crowdsec), which is going to take some amount of resources to run, and stop your machine from toppling over.

    That's not part of the mail pipeline the OP asked for.

    Here, I brought receipts. There are two spikes of attempted connections in the last month, but it's all negligible traffic.

    Self-hosting mail servers is tricky, same as self-hosting ssh, http, or whatever else, but it's totally doable even on an aging RPi. No, you don't need to train expensive spam detection: it's enough to have very strict rules on where you accept mail from and drop the 99% of traffic that won't be compliant (there's a sketch of that kind of filtering after this comment). No, you don't need to run crowdsec for a server that accepts bytes and stores them for another server (IMAP) to offer them to you. You don't even need an antivirus; it's not part of mail hosting, really.

    Instead of bickering and posturing, you could have spent your time better educating the OP on best practices, e.g. like this.
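
    To make the "strict rules" part concrete, here's a minimal Go sketch of the kind of check an SMTP ingress can run at MAIL FROM time. The allowlist, helper name, and reply codes are hypothetical; the exact hook point depends on the SMTP library you use.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// senderAllowed is the "strict rules on where you get mail from" idea in
// code: only accept MAIL FROM addresses whose domain is on an explicit
// allowlist and which actually publishes MX records. Everything else gets
// rejected before DATA, so drive-by traffic costs little more than a TCP
// handshake and one SMTP command.
func senderAllowed(from string, allowedDomains map[string]bool) error {
	at := strings.LastIndex(from, "@")
	if at < 0 {
		return fmt.Errorf("550 malformed sender %q", from)
	}
	domain := strings.ToLower(from[at+1:])
	if !allowedDomains[domain] {
		return fmt.Errorf("550 sender domain %q not accepted here", domain)
	}
	if mx, err := net.LookupMX(domain); err != nil || len(mx) == 0 {
		return fmt.Errorf("450 sender domain %q has no MX", domain)
	}
	return nil
}

func main() {
	// Hypothetical forwarder domain used for illustration only.
	allowed := map[string]bool{"example.org": true}
	fmt.Println(senderAllowed("alerts@example.org", allowed))
	fmt.Println(senderAllowed("spam@bad.example", allowed))
}
```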

  • Email server thru cloudflare
  • I won't quote that bit of your post again, but no, having an open smtp port does not mean you'll get constantly attacked. Again, I have a fully qualified smtp server and it receives about 40 connections per hour (mostly spam). That's trivial to process.

    It doesn’t matter that I forward emails from another server, because, in the end, mine is still public on the internet.

    If you're trying to make the point that it's tricky to run corporate-scale smtp and keep end users protected, then that's clearly not what the OP was looking for.

  • Email server thru cloudflare
  • > The biggest certainty is that just having an open port for an SMTP server dangling out there means you will 100% be attacked. Not just sometimes, non-stop. So you don't want to host on a machine with anything else on it, cuz security. So you need a dedicated host for that portion, and a very capable and restrictive intrusion detection system (let's say crowdsec), which is going to take some amount of resources to run, and stop your machine from toppling over.

    I need to call BS on this. No one cares. I've been running a small go-smtp based server that does some processing on forwarded mail for two years now, and I don't see much in the way of "attacks". Yeah, sometimes I get passersby trying to figure out whether it's an open mail relay, which it's not.

    You absolutely don't need a dedicated machine and an IDS. And you definitely don't need crowdsec.

    Yeah, sending mail is somewhat hard lately, but DKIM and DMARC can be figured out. Receiving mail is straightforward.

  • Router died - Replacement/solution recommendations
  • I would not recommend unifi for a mature solution. It works nicely as a single pane of glass, but it gets limiting once you want to hack around your network. Their APs are solid, though; it's just the USG/Dream Machine that I wouldn't recommend.

    Mikrotik's software is very capable and hackable, and you can run it in a VM if you feel like bringing your own hardware.

  • Garage is an open source S3-compatible object storage cluster designed for self hosting
  • Clearly I mean Garage here when I write "S3". It is significantly easier and faster to run hugo deploy and let it talk to Garage than to figure out where on a remote node the nginx k8s pod has its data PV mounted and scp files into it. Yes, I could automate that. Yes, I could pin the blog's pod to a single node. Yes, I could use a stable host path and rsync, and I could skip the whole kubernetes insanity for a static HTML blog.

    But I somewhat enjoy poking at the tech, and yes, using Garage makes deploys faster and gives me a stable, well-known API endpoint for both data transfers and serving the content, with very little maintenance required to keep it working.

  • Garage is an open source S3-compatible object storage cluster designed for self hosting
  • S3 storage is simpler than running scp -r to a remote node, because you can copy files to S3 in a massively parallel way, while scp is generally sequential. It's very easy to protect the API too, as it's just HTTP (and, while we're at it, it's also significantly faster than WebDAV).
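
    As a rough sketch of that parallelism point: the uploads below run through a bounded worker pool, since each S3 PUT is an independent HTTP request, while a single scp -r walks the tree sequentially over one connection. uploadObject is a hypothetical stand-in for a real S3 PUT against Garage; any S3 client would do.

```go
package main

import (
	"fmt"
	"sync"
)

// uploadObject is a placeholder for an actual S3 PUT; it only prints here.
func uploadObject(key string) error {
	fmt.Println("PUT", key)
	return nil
}

// uploadAll pushes objects concurrently with a fixed number of workers.
func uploadAll(keys []string, workers int) {
	jobs := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for k := range jobs {
				if err := uploadObject(k); err != nil {
					fmt.Println("failed:", k, err)
				}
			}
		}()
	}
	for _, k := range keys {
		jobs <- k
	}
	close(jobs)
	wg.Wait()
}

func main() {
	uploadAll([]string{"index.html", "css/site.css", "img/logo.png"}, 8)
}
```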

  • How to split short term and long term VictoriaMetrics storage (farcaller.net)

    VictoriaMetrics is a fantastic alternative to prometheus, especially in a home lab where resources are constrained. It's several times more efficient with its RAM usage while being pretty much fully PromQL-compatible (with a few nice extras, too). One of the nice features of VM is retention filters...

    I finally got around to cleaning up the metrics in my homelab and researched ways to separate my long-term and short-term data. This way you can scrape all kinds of noisy sources (e.g. kubernetes) while keeping a separate store for things you want to observe over longer time windows (months and years). The best thing? It's transparent to grafana and the like, so you can keep all your dashboards intact.


    Self-hosted alternative to synology drive?

    I moved off a Synology NAS to a self-managed machine, and one thing I still struggle to replace is something like Synology Drive. Here are my requirements:

    • server side: store data in a plain FS (I want transparency)
    • client side (Windows): must support VFS (download files on demand, offload large files)
    • having snapshots of data is a must

    I have a 40Gbit uplink to my desktop, so if everything else fails I'll just use samba with zfs snapshots exposed to VSS, but we're still talking some large files (think several hundred MB) and I'm not sure Blender will be happy working off a network disk.

    I've been pointed to next/own-cloud previously, but they don't seem to cover my use case, I think. Should I actually try one of them? I browsed around ownCloud's storage bit (which is written in Go), and it seems mostly fitting, but I've been told to steer away from ownCloud towards Nextcloud.


    Why do fediverse clients reinvent C2S APIs instead of using ActivityPub?

    I'm reading the ActivityPub spec here and it seems pretty fit for client-to-server communication. Yeah, it might be somewhat bulkier than your typical REST API, but it's more universal, which raises the question: why did mastodon and lemmy both decide to implement custom (and incompatible) APIs for their clients to talk to the servers? Wouldn't it be more straightforward if, e.g., my voyager app talked ActivityPub to lemmy.world, which then talked ActivityPub to lemmy.ml or something?

    What am I missing?

    ap playground


    I made a lemmy community directory

    I wasn't sure how to find the communities I'm interested in, so I quickly hacked together a scraper that builds a list of all the communities(1) on all the servers mine is federating with(2). There's a rough sketch of the fetch loop below.

    You can find it (with a very trivial UI) at directory.fstab.sh. Hover over the link to see the description. Use the search bar to search by text.

    Is this something useful, or was there a better way to do the same?

    • (1) it does its best to scrape them all but incidents might happen
    • (2) updated nightly
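
    Roughly, the per-instance fetch loop looks like the sketch below. The endpoint and parameters are from memory of the v3 Lemmy API and may differ on your version; the instance name is hardcoded for illustration, whereas the real scraper takes the instance list from my own server's federation data.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// communityList is the minimal slice of the community/list response we need.
type communityList struct {
	Communities []struct {
		Community struct {
			Name    string `json:"name"`
			ActorID string `json:"actor_id"`
		} `json:"community"`
	} `json:"communities"`
}

// listCommunities pages through one instance's community list until an
// empty page comes back. Endpoint and parameters are assumptions; verify
// against the Lemmy version you scrape.
func listCommunities(instance string) ([]string, error) {
	var all []string
	for page := 1; ; page++ {
		url := fmt.Sprintf("https://%s/api/v3/community/list?type_=All&limit=50&page=%d", instance, page)
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		var cl communityList
		err = json.NewDecoder(resp.Body).Decode(&cl)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}
		if len(cl.Communities) == 0 {
			return all, nil
		}
		for _, c := range cl.Communities {
			all = append(all, c.Community.ActorID)
		}
	}
}

func main() {
	communities, err := listCommunities("lemmy.ml")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, c := range communities {
		fmt.Println(c)
	}
}
```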

    /r/nixos


    Reading /all from another instance?

    Is there any way to see the “all” stream from another instance (pretty much what wefwef does for lemmy.world when there's no account) when I’m logged into my instance?
