Let us know what you set up lately, what kind of problems you currently think about or are running into, what new device you added to your homelab or what interesting service or article you found.
I'll post my ongoing things later/tomorrow but I didn't want to forget the post again.
I need to switch to OPNsense. I was on pfSense Plus until they cancelled the free licenses, so I finally "downgraded" to pfSense CE, and now I'm finding it hasn't been updated in 2+ years. I really miss having DHCP hostnames added to local DNS automatically.
I know this isn't sexy but I've been working on my documentation. Getting configs etc properly versioned in my gitea instance, readmes updated etc. My memory is not what it once was and I need the hints when things break.
Same here. I got Gemini to write a shell script for me that I can run on my Proxmox host which will output all of my configs to a .txt file. I asked it to format the output in a way an LLM can understand, so I can just copy/paste it next time I need to consult AI.
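The sketch below is roughly the shape of that kind of script; the paths and names here are made up, so point CONF_DIR at whatever actually holds your configs (e.g. /etc/pve and /etc/network on a Proxmox host):

```shell
# Rough sketch of a config-dump script. dump_configs and the demo
# directory are illustrative, not my real setup.
dump_configs() {
    dir="$1"
    out="$2"
    : > "$out"
    find "$dir" -type f | sort | while read -r f; do
        # one header line per file so an LLM can tell the configs apart
        printf '===== FILE: %s =====\n' "$f" >> "$out"
        cat "$f" >> "$out"
        printf '\n' >> "$out"
    done
}

# demo against a throwaway directory instead of the real /etc/pve
mkdir -p /tmp/demo_etc
echo "iface vmbr0 inet static" > /tmp/demo_etc/interfaces
dump_configs /tmp/demo_etc /tmp/all-configs.txt
cat /tmp/all-configs.txt
```

On the real host you'd call it as something like `dump_configs /etc/pve /root/all-configs.txt`.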
I've been trying to learn K8s and, more recently, the Gateway API. The struggle is that most Helm charts don't know Gateway (most barely support IngressRoute), and I'm trying to find a solution to one service affecting the other gateways: when a service cannot find a pod, the HTTPRoute fails, and when one route fails, the Gateway fails. It's a weird cascading problem.
Right now, I'm considering adding a secondary service to each gateway that resolves to a static error page. I haven't looked into it yet; it came to me in a brief moment of clarity before I fell asleep last night.
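For reference, my rough idea looks something like this HTTPRoute (names are made up, and whether the controller actually serves this when a matched route's backend is down is implementation-specific, so treat it as a starting point, not a known-good fix):

```yaml
# Hypothetical sketch -- gateway/service names are placeholders.
# A catch-all "/" rule loses precedence to more specific matches,
# so it only picks up traffic that nothing else claims.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: fallback-error-page
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: error-page   # tiny nginx serving a static "something broke" page
          port: 80
```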
Also, I may be doing everything wrong, but I am learning and learning is fun.
My girlfriend's phone was having issues connecting to self-hosted servers, so I set her DNS from private to network default. Hope this helps any Android users that may have issues.
A couple of days ago, after testing it myself for a few months to make sure I understood how everything works, I made the switch to NextCloud Calendar, and will no longer use Google Calendar.
This is the best part though... I somehow convinced my wife to do the same. She let me install the NextCloud app (optional for Calendar stuff, but it makes the setup easier) and DAVx5 on her phone (both from F-Droid, so DAVx5 was free). I exported and imported her calendar, and made sure the notifications were set to her preferred default.
It's multiple days later, and she hasn't complained!
I've also moved all of my contacts over to NextCloud, but have yet to coerce my spouse to do the same.
I've been using Fossify Calendar for a while now and it's been pretty great. I moved to it after the whole Simple apps getting sold drama when it happened.
In a web browser I use the NextCloud one. It's functionally very similar to Google and I like it a lot.
For our Android phones, my wife uses the Google Calendar app, and I like Business Calendar Pro.
For both apps, once DAVx5 has done the initial sync, you just go into the settings, uncheck all of the Google calendars so they aren't shown, and check the boxes next to the new calendars.
Exporting and importing I could only really do easily via the web browser, for both.
Today I'm experimenting with Ansible. Wanna try setting up a Docker hosted RSS reader with it. Hopefully will write up controls for my whole Docker server with Ansible once I'm more familiar.
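The first playbook sketch I'm working from looks roughly like this (FreshRSS and the paths are just placeholder assumptions; it needs the community.docker collection installed):

```yaml
# Untested sketch -- host group, image, and paths are illustrative.
- hosts: docker_host
  become: true
  tasks:
    - name: Run FreshRSS container
      community.docker.docker_container:
        name: freshrss
        image: freshrss/freshrss:latest
        state: started
        restart_policy: unless-stopped
        published_ports:
          - "8080:80"
        volumes:
          - /srv/freshrss/data:/var/www/FreshRSS/data
```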
I'm trying to figure out setting up TrueNAS scale and docker for the first time. Building a NAS and self hosting a few things from an old all in one mini PC.
Moved my fediverse apps (friendica, lemmy, 35c.; only user is me) to one server, since it was overkill having two servers barely using 8%, if that, of their CPU/RAM. Surprisingly easy with YunoHost backups: remade the users and restored a backup of just the apps. Also updated Enhance panel and switched the sites I'm making for family (to use as a portfolio for local webdev) to OLS, which was fairly easy. Turns out I was using WordPress templates wrong, so I fixed that and redid the home pages. Now I feel less confident with WordPress and wonder if I've always made sites wrong; I think I just forgot, since it's been years.
I initially fd it up because I didn't deselect everything but the apps, but I at least thought to back it up and download it locally beforehand, so it was an easy/quick recovery.
Crazy enough, I have everything going that I want to on my server!
*arr suite and jellyfin
traefik reverse proxy with crowdsec + bouncer for some sites (e.g. not documents or media)
paperless-ngx for documents
immich for photos
leantime to manage personal projects
BookStack for a personal wiki
calibre-web for my library
syncthing for file and music syncing so I don't have to stream music
valheim server for me and my friends
boinc for turning my server to a productive heater in the winter
home assistant for my in-renovation smart home
As far as my server goes, I have everything I need. Maybe setting up something for sharing files over the web if needed. I used nextcloud for that before it killed itself completely and I realized I never really needed it.
Next is working on my smart home, because we had to fully strip the house to renovate. KNX first, Z-Wave for things that KNX doesn't have or that are crazy expensive, ESPHome for everything the other two can't accomplish. Minimal 2.4GHz interference, and I rely as little as possible on flaky wireless in a brick house.
Setting up Let's Encrypt auto cert renewal with ACME. Also looking to set up some monitoring service, basic stuff like CPU, memory usage etc.
If anyone has recommendations that have an android app available, that would be awesome.
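For the cert side, the rough flow I'm planning looks like this (assuming certbot with HTTP-01 webroot validation; acme.sh works similarly, and the domain/paths are placeholders):

```shell
# first issuance -- webroot must be served by your web server on port 80
certbot certonly --webroot -w /var/www/html -d example.com

# verify renewal will work without burning a real renewal
certbot renew --dry-run

# most distro packages install a systemd timer for renewal; if not,
# a cron entry covers it:
# 0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```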
I'm personally using Prometheus Stack and like it, but I just check Grafana in my Android browser. I think Zabbix has an Android app but I don't know if it has as many possibilities as Prometheus.
I set up a VPN for my mom's Synology so I can request and download media for her through my local qbit instance, using Radarr/Sonarr to move the files over.
I have a problem where both arrs don't auto-start when I power up the Debian VM in Proxmox, even though the daemon is running and the restart policy is set to always...
She doesn't make a lot of requests so I just go and start them manually but I would eventually like to get it fixed..
If hardware service counts. :) For the last few months my Proxmox server has been telling me a drive went read-only, first an SSD and even an HDD, very odd behavior, and last Thursday it was finally the last straw. I had a 4TB drive acting as my storage/backup drive, which it complained about, so I put in a 1TB drive that's only about 2 years old, so plenty of life left on it.
I went through and tested the SSD with extended tests and it passed with flying colors, so it dawned on me: maybe it's the SATA data cable. And sure enough, it was. When I ran sudo smartctl -x -T permissive /dev/sdb it presented very little information; after swapping the cable, it now presents the full SMART data and stats as it should. Additionally, performance has been more stable so far. So I call that a win.
On the software side, I have been going through my Home Assistant instance and removing dead/old entities I had never gotten around to removing.
I dealt with a lot of time sinks like this running on consumer hardware. I got a Dell R720 and those problems all went away. Now I have a power and cooling problem. :D
I recently set up WeeChat (IRC) and learned about bouncers. From what I understand, a bouncer is similar to a proxy but keeps a backlog of the IRC conversation. I'm still new to it and have a lot of new things to learn.
I'm thinking to self-host my personal bouncer on some cheap VPS.
Other than that, I was busy encoding my Blu-ray library to the AV1 codec with av1an. :)
I also recently self-hosted MeTube (a yt-dlp web frontend) to download some music from RiMusic. Still need to work on a shortcut with HTTP Shortcuts on Android!
Note: that's WeeChat, not WeChat, and the confusion made it really difficult to find the right info on the web... Most search results were linking to the Chinese messaging app, ughh!
I have recently setup my paperless-ngx instance and have uploaded all my scanned documents. Now I have to tag all that stuff which seems like a lot of work. So I'm looking into paperless-ai... 🧞
(pre-AI) I found that adding a few documents, tagging them correctly, and then adding the rest worked pretty well with auto-tagging. I don't know how much of a difference paperless-ai is going to make, but it sounds interesting. I would just make sure to only plug in a self-hosted thing.
I'm running Nextcloud and Paperless-ngx on my servers. Over the last few months I tested out my remote management. Now that I'm back home, I've been making a few adjustments based on my learnings. Firstly, WireGuard is slower than a turtle, while Tailscale has been a little bit faster. I'm guessing this is due to my upload speed, and switching to fiber may fix it.
I'd also like to add TubeArchivist back in, since there are some great videos that I don't trust Google to preserve, given the direction things are going.
The folks on the "privacy" Lemmy gave me some good tips on app replacements and after making a big spreadsheet with all my apps, their licenses, etc., I cut down my remaining proprietary apps by at least 50% and I only have a few proprietary essentials that still depend on Google Play. I've been meaning to do this for a long time and I almost have a path towards completely removing all Google, Amazon, and Microsoft products from my life.
Next, I'd like to set up Wander to eventually get rid of Garmin/Strava but I haven't been able to figure it out and I'm still locked in to some degree because of my hardware (Garmin watch). The Ring doorbell has to be the next thing to go, but I'm exhausted and haven't had the motivation to start a new project until the dust settles from the last one.
That's definitely one of those things I found bizarre and awful yet...entirely unsurprising. I can see how selling that data probably sounds like such a lucrative edge to marketing companies.
how did we as society come to accept this?
By not putting ethical lines (high-voltage containment fences, really) around the advertising industry quickly enough, and letting them convince us "this is just how business works", when their entire existence is about finding the scummiest ways to hack free will for profit.
IMO you should stick with local device storage only. If you're worried about the state getting hold of the data, having any backups is gonna be a liability.
I spun up a new Plex server with a decent GPU - and decided to try offloading Home Assistant's Preview Voice Assistant TTS/STT to it. That's all working as of yesterday, including an Ollama LLM for processing.
Last on my list is figuring out how to get Home Assistant to help me find my phone.
Sure! I mostly followed this random youtuber's video for getting Wyoming protocols offloaded (Whisper/Piper), but he didn't get Ollama to use his GPU: https://youtu.be/XvbVePuP7NY.
I'm eternally sitting here putting off migrating my homelab from Docker to rootless Podman due to some rather janky patterns I use. It might be super smooth or it might not, so instead I just wait in endless decision paralysis.
I ended up just adapting my composes into podman run commands. On my desktop I don't mind having to manually start them at boot; I could easily make a simple thing that runs at boot and just says podman run <container>, as most of my containers depend on others, so I can just start the child-most container and it'll start them all. I just have some shenanigans where I use one container as a VPN for the other ones, which is a bit messy with rootless.
I'll have a look into the links and see if there's anything new in there I haven't seen before, but yeah, nothing unsolvable; I'm just needlessly putting things off lol.
I mean more replacing the runtime etc. I've got some containers running on another machine and had some difficulty wrapping my head around the subuid and subgid stuff, so in theory I should be fine, but it's an irrational worry lol.
I have a family member across the country that wants to break from Google and really isn't the type to self-host themselves, and I connect to my self hosted NextCloud solely through TailScale.
NextCloud permissions seem easy enough, but I'm researching how to add them to my Tailnet safely to avoid potential compromise of my network if something happens to their system.
Presuming this involves ACLs, which look intimidating, but I'm doing some research on that.
Tag owner for internal is admins
Tag owner for nextcloud is admins
Action accept, src admin, dst *:*
Action accept, src nextcloud, dst nextcloud *.
Then tag your nextcloud ts connection as nextcloud in the webadmin
Tag all your other clients admin in the webadmin
Note: you can't just paste what I put here; you need to find a viable template and then follow along. I'm on a mobile device, otherwise I would give you something more finalized.
Edit: tag your fam client as nextcloud
Something like this:
I stripped down one of my configs, I took out SSH, I don't think it requires it
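Roughly, in HuJSON, the idea above looks like this (tag names are examples; start from the default policy in the Tailscale admin console and adapt):

```json
// Sketch only -- tag names and the 443 port restriction are examples.
{
  "tagOwners": {
    "tag:internal":  ["autogroup:admin"],
    "tag:nextcloud": ["autogroup:admin"],
  },
  "acls": [
    // admin devices can reach everything
    {"action": "accept", "src": ["autogroup:admin"], "dst": ["*:*"]},
    // the family member's tagged device can only reach the Nextcloud host
    {"action": "accept", "src": ["tag:nextcloud"], "dst": ["tag:nextcloud:443"]},
  ],
}
```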
It might be possible some way, but not easily. My mega-corpo ISP blocks incoming connections on common hosting ports, because they want to "keep the network safe" (read: sell expensive home-business plans). Lol
I'm also very amateur at this as I go along, and I'm not sure I'm ready to deal with the fallout of missing some security step and getting my server botted or ransomwared lol.
I haven't done the hardware stuff with setting up my own router/firewall box either, for instance.
So Tailscale works really well for me by seemingly magically bypassing a lot of that nonsense and giving me less to worry about. They allow 3 users for free, but have a relatively inexpensive family plan for like 6 users as well, if that becomes necessary.
I mainly just need to tell them not to try and use my server as an exit node if they're across the country 😂.
But yeah definitely, I'm using this as a way to test the waters for running service alternatives as the web we knew collapses around us lol. I'm not ready to be running something people really rely on yet, though. :)
Managed to set up Immich remote machine learning (old 7th-gen Optiplex to gaming PC). If only I'd bought an Nvidia card... I wasn't able to get my AMD 7800 XT to work with Immich ML. Next up is setting up microservices, because Immich is crippling my unraid server 🤦🏼😭
Honestly I'm not sure. I had the ML on my AMD gaming PC but the other (metadata and thumbnail) services were still on the unraid server.. Photoprism doesn't have that issue at all..
I've switched back to PP until I sort out the microservices..
I've recently set up a recipe archival project using Tandoor. I'm working on converting all my grandparents' fading, old-as-dust cooking recipes from their miscellaneous handwritten cursive notecards to digital.
Setup was uneventful, but it took a little research to figure out how to use a remote Postgres server; it turns out the app doesn't give an error when it can't connect to the server, it just fails to run.
I have to say the way the program itself chooses its permissions is absolutely absurd; it breaks all conventions and took quite a bit of getting used to.
Trying to figure out how to get my qBittorrent Docker container to route all traffic through my VPS over WireGuard. The catch is that the WebUI needs to stay accessible from the LAN.
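The pattern I'm looking at: give the container a WireGuard config like this (all values are placeholders), and let Docker's published port keep the WebUI reachable outside the tunnel:

```ini
# wg0.conf sketch -- keys, address, and endpoint are placeholders.
# AllowedIPs = 0.0.0.0/0 sends all of the container's traffic to the VPS.
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```

A common way to wire this up in Docker, from what I've read, is a gluetun container holding the tunnel, with qBittorrent on `network_mode: "service:gluetun"` and the WebUI port published on the gluetun container, so LAN access keeps working.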
Considering moving my stuff into a VirtualBox VM or two rather than running directly on my PC. Then at some point in the future when I have the hardware for it I can fairly easily move it to proxmox. Also means installing a clean OS on my main PC is a quicker task as it would just be install virtual box, load up the VMs and a lot of stuff would already be done.
Consider using containers. I used to think this way, though now my goal is to get down to almost all containers since it's nice to be able to spin up and down just what the one 'thing' needs.
I'm still using Docker Rootless, which I want to swap for Podman since rootless is second-class in Docker, but I haven't read the documentation enough to understand Podman Quadlets to migrate my compose files, and there are some incompatible configurations, so even using podlet I have to edit some things manually.
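For what it's worth, a translated Quadlet unit ends up pretty short; here's a sketch (image, port, and volume are illustrative, not from my real stack):

```ini
# ~/.config/containers/systemd/web.container -- rootless Quadlet sketch
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80
Volume=%h/web/data:/usr/share/nginx/html:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

Drop it in that directory, then `systemctl --user daemon-reload` and `systemctl --user start web.service` (Quadlet generates the service name from the file name).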
I also want to migrate to MicroOS if possible in my server, but I'm still testing things in a VM to understand enough and the cost-benefit u.u
Working on Smart Playlists for Pinepods. I'm the developer of the self-hosted podcast management server, and Sunday is always my new-feature day. I've had a lot of fun adding new features lately, like designing a homepage and adding OIDC login support. Don't let anybody tell you auth isn't fun.
Oh wow, I hadn't known about Pinepods! I've been looking for a self-hosted podcast management thing for literal years, and recently audiobookshelf popped up, but maybe I should check Pinepods instead! I don't have audiobooks anyway. Are there Android apps that can be used as a frontend?
Btw, github links to https://pinepods.online/, but the website seems to be exclusively available on https://www.pinepods.online/.
Edit: Just found you release an Android app as well. I'll have a look!
Yep! The Android app is somewhat in the works. It compiles, works, and has all the features of the web version currently. The things that don't work are the aspects that make it an actual Android app, like local device downloads and integration with Android APIs. That stuff is coming; after this next minor release, getting to those is my priority.
Not that I'm biased or anything, but Pinepods objectively has WAYYY more features than audiobookshelf for podcast management. Because it's a podcast server, of course. Things like Podcasting 2.0 support with chapters, hosts, and transcripts; YouTube channel support; embedded Podcast Index, YouTube, and iTunes search support; external RSS feed support; etc. Audiobookshelf is great, but it's an audiobook app. It'd probably be clutter to add a lot of this stuff.
I built a PC for video editing because it was becoming impossible to do on the laptop. Then I realized I can also use the GPU to run large language models myself.
So this week I've been setting up Ollama and Open WebUI to be able to move some of the queries I'd ask ChatGPT over to my own computer, even when I'm away.
This way I don't need to send sensitive data to the USA or China. It works quite well, but I can only use smaller models, up to 14B, because my graphics card only has 12 GB of VRAM.
My NAS and our desktops are all on WiFi, so I'm planning to run some cable or install MoCA or something. Our uplink is currently only 100Mbit (max for this ISP; I refuse to switch), but our city plans to roll out gigabit everywhere in the next couple of years, so I want something forward-compatible (powerline will probably be too limiting). My SO has been complaining about latency, and I think the WiFi card is to blame, so I'm trying this before upgrading the WiFi card.
Our house has the following:
phone lines everywhere (could maybe use the existing cables to fish through cat6?)
cable jacks everywhere (have an unused satellite dish)
lots of power plugs
two floors (rambler + basement) with pretty much no shared walls (everything will need to jog a bit)
I'm going to try running some cable tomorrow (holiday in the US, just want a test run from bedroom internet source -> basement water heater room), but if that doesn't work, I'll need a backup plan.
Anyone have experience with any of the above? Tips?
This may sound dumb or be helpful so I'll toss it in just in case:
Depending on when they're built, a lot of houses' RJ-11 phone jacks are actually using CAT-5E. If you're lucky, they're individual runs and not daisy-chained!
The way they set up the runs here is weird though: they're Cat-5E, but we have no fancy junction box. It all runs to some hatch on the side of the house, presumably for telecom/satellite TV installers. So you might have secret Ethernet cable behind your landline jacks, even if there's no tidy junction box! :)
It was cool finding out there's already capable infrastructure in the walls, but you've got to replace the wall jacks with RJ-45, using a tone tool to label which run goes where. And then the next trick is figuring out an affordable switch that can handle a garage that can hit 100°F+ in summer...
But anyway, worth checking before you start getting too deeply sunk into other solutions. :)
It was built in the late 80s so I doubt it's cat5. But I also know the basement was finished later, so maybe I'll get lucky at least with those.
I just need to figure out where it's all going to see if I can reuse it.
Another interesting thing: the previous owner ran speaker wire to the master bedroom, living room, and basement room, exactly where I want to go, so maybe I can reuse those runs.
Currently trying to figure out how to create and maintain an internal CA in order to enable pod-to-pod TLS communication, while using Let's Encrypt for my public ingresses.
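For a first pass I've been playing with plain openssl to understand the moving parts (cert-manager with a CA issuer seems to be the usual way to automate this in-cluster; all names below are made up):

```shell
# Toy internal CA with plain openssl.

# 1) self-signed CA key + certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=homelab-internal-ca"

# 2) key + CSR for a service
openssl req -newkey rsa:2048 -nodes \
  -keyout svc.key -out svc.csr -subj "/CN=myservice.default.svc"

# 3) sign the service certificate with the CA
openssl x509 -req -in svc.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 90 -out svc.crt

# 4) verify the chain -- pods would trust ca.crt and present svc.crt/svc.key
openssl verify -CAfile ca.crt svc.crt
```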
I finally set up Joplin Server. It is a revelation after too long using Syncthing to sync the databases. I wasn't able to use Joplin on Android anymore; the sync to the file system had gotten too slow. Now everything syncs pretty much instantly!
Anyone know how to set up NPM on TrueNAS SCALE? I've spent all day trying to get my SSL certs and it fails every damn time. It just says the domain is unknown or that it can't find my NPM install 😮💨
I'm using a freedns domain tho so maybe I'm gonna need to try buying a domain.
I've gotten a CalDAV server, audiobookshelf, and selfhosted obsidian live sync running on my laptop while I wait for movers to bring my shit to my house. Then gotta migrate it all across to my mini PC afterwards. Doing a modular NixOS setup to replace/complement what I used to have running on proxmox.
Once everything is on a dedicated machine I'm going to make a nice little homepage for it, inspired by a previous thread here.
Obviously I can just dump it on my server and people can download it from a browser, but how are they gonna send me anything? I'm not gonna put an upload form on my site; that's a security nightmare waiting to happen. HTTP uploads have always been wonky for me, anyway.
Yeah, copyparty was my attempt at solving this issue: a single Python file for receiving uploads of infinitely large files, usually much faster than the alternatives (ftp, sftp, nextcloud, etc.), especially when the physical distance to the uploader is large (hairy routing).
I’m not gonna put an upload on my site, that’s a security nightmare waiting to happen.
curious to hear your specific concerns on this; maybe it's something that's already handled?
Sending is someone else's problem. They all have different understandings and tools, and I can't deal with them all. So the only alternative is to set them up with an account in (e.g.) Nextcloud, or just accept whatever Google service they use to send you a large file.
Sending other people files is easy in Nextcloud, just create a shared link and unshare when done. Set a password on the file itself.
It becomes my problem when I'm the one who wants the files and no free service is going to accept an 80gb file.
It is exactly my point that I should not have to deal with third parties or something as massive and monolithic as Nextcloud just to do the internet equivalent of smoke signals. It is insane. It's like someone tells you they don't want to bike to the grocer 5 minutes away because it's currently raining and you recommend them a monster truck.
On a related note, it would be nice if there was a shared storage option for self hosting. It wouldn't be the same as self hosting, but more like distributed hosting where everyone pools storage they have available and we could have an encrypted sharing option.
Could you set a 'password' on the uploads? So the server will only accept and start the upload if the password is present. Make the password a passphrase so it's easy to type in.
I've got a project to look forward to. Have my Proxmox server with a UPS, running NUT to watch the battery percentage and power down gracefully if the % gets too low. I have Home Assistant watching that so it's supposed to notify me before that happens. It's not notifying me though, so I gotta look into that. I know it's not working this morning because the power went out, so now I'm just sitting here theorizing instead of actually looking at it. 🙃
Oooo can you tell me more?
I have a UPS and it's connected to and communicates with my Synology, but the NUC could also benefit from a safe shutdown in case of power outages.
I mostly used this, but had to customize it a bit, I think, to get things working right. NUT feels like a super finicky system, but in the end it does work. My biggest issue right now is that it only reports a new status update to Home Assistant every few minutes, so the automations don't really get a chance to trigger before the server shuts down. It also shuts down with way too high a percentage remaining on the UPS, so I need to figure out how to make it wait just a little bit longer before powering down. It wants to power off less than 2 minutes after the power goes out...
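For the threshold part, the thing I'm poking at is the override mechanism in ups.conf (driver, port, and value here are placeholders; whether the driver honors it depends on the hardware, so check with upsc after restarting the driver):

```ini
# /etc/nut/ups.conf sketch -- the override line lowers the
# "low battery" threshold (a percentage) that triggers shutdown.
[myups]
    driver = usbhid-ups
    port = auto
    override.battery.charge.low = 15
```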
Set up paperless-ngx and cannot get my scanner to send a scan to an FTP server. It supposedly supports sending to FTP, but doesn't have much documentation for it. I've tried FTPS, SFTP, and unsecured FTP. Both secure types just cause it to error out, but with unsecured FTP the scanner just freezes and then reboots. It's really annoying me. I'm about to give up and just scan to a USB thumb drive, then copy the scans to the server.
I had to have my scanner scan to a Windows VM that saves it to a network drive for paperless to ingest. It's not my favorite solution, but at least I don't have to manually move the files around.
The scanner also supports sending to email. I will try that before setting up a Windows VM. I was just thinking I would use USB for the initial import of my file cabinet.
I'm starting to see mastodon users on my tiny pixelfed server. It's such a good feeling.
On the sad side, my Lemmy update went south and I had to remove it from my setup. Still looking for a good replacement for max two users, something dirt simple like GoToSocial turned out to be.
I tried getting it set up, but it didn't want to work on my system. The Docker container failed with some errors, and the docs seem like they need a bit of work. I love piefed, but if it takes more than a weekend to set up, then I personally don't have enough time.
I'm currently halfway through building a ZFS array using five 8TB IronWolf Pro drives. I'm modelling and 3D-printing a custom drive cage with brackets to hold them all inside the shitty Dell tower case I have dedicated to it. Hoping I can get it done sometime Sunday; I'm on V2 of the drive cage print and it takes like 8 hours to do lol.
I changed my Proxmox server from a ZFS raid pool to software RAID with mdadm. Saved me a ton of RAM, and cheap SSDs don't really like ZFS, so it's a win-win. While messing around with drive setups, I also changed the system around a bit. Previously it had only a single SSD with LVM and 7x4TB drives with ZFS, but as I don't really need that much storage, it's now running 3x1TB SSD + 4x4TB HDD, both with software RAID5, so 2TB of fast(ish, they're still SATA drives) storage and 12TB (or ~10.9 in the real world, TB vs TiB) of spinning rust storage.
Well enough for my needs, and I finally have enough fast storage for my Immich server to hold all the photos and videos from 20+ years. It took "a while" to copy ~5TB over 1-gig LAN to the other system and back, but it's done now, and the copying didn't need babysitting in the first place, so not too big of a deal. The biggest unexpected issue was that my 3.5" HDD hot-swap cradles didn't have an option to mount 2.5" drives, so I had to shut down the server and open the case to mount the drives.
And while doing that, my Pi-hole was down, so the whole network didn't have a DNS server. I'll need to either set up another Pi-hole server, or set up some scripts on the router to change the DNS servers offered to DHCP clients while the Pi-hole is down and shorten the lease time to a few minutes.
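The fallback-script idea, sketched out below. The health check is passed in as a command just to make the logic easy to test; on the router it'd be something like `dig +time=2 +tries=1 @192.168.1.53 example.com`, and the IPs here are placeholders:

```shell
# Decide which DNS server to advertise to DHCP clients based on a
# health check. $1 = check command, $2 = pi-hole IP, $3 = fallback IP.
pick_dns() {
    check_cmd="$1"; pihole="$2"; fallback="$3"
    if sh -c "$check_cmd" >/dev/null 2>&1; then
        echo "$pihole"     # pi-hole answered: keep advertising it
    else
        echo "$fallback"   # pi-hole down: hand out the upstream resolver
    fi
}

# demo: healthy check vs. failing check
pick_dns "true"  192.168.1.53 1.1.1.1
pick_dns "false" 192.168.1.53 1.1.1.1
```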
Recently set up Nextcloud, but ran into trouble getting it to connect with a domain because of Starlink being the ISP. Found out about tailscale and have been getting things connected and accessible with Tailscale’s magic DNS that it uses.
Currently trying to figure out how to use the iOS Tailscale app to connect to an exit node, which will be my server at home, but it's not easy. Apparently it can be done through Shortcuts and automations on the iPhone, but I can't sort out a way to connect easily that doesn't throw errors, and there's no good documentation to say what I've done wrong.
I have yet again broken a Nextcloud server by trying to upgrade it (from 27 to 30). Even after hours of debugging, I'll have to remake it from scratch... again.
I guess updating it regularly in small steps really is the thing that works best. I switched to nc-aio with auto-updates half a year ago and haven't had any troubles with updates since.
I run it on BSD, just use the pkg, and never have any trouble. The clients are all in the Tumbleweed repos, so they're the latest, which I think helps. Update the package, run occ upgrade, and it always works fine.
So I recently sandboxed a webapp I am getting ready to launch.
Basically Unifi switch > Vlan port > Server > Hosting Webapp instances, worker instance, cloudflared and DBs.
Pretty chuffed with the Docker config, actually. Just configuring my WAF and tunnel settings with Cloudflare to reduce the scanning from VPS providers. Anyone have a solution, or will I need to configure some sort of nginx instance to do it, since Cloudflare only allows a certain length for each WAF rule on the free plan?
Side thought: does anyone know of a tutorial for CI/CD to auto-build my containers and deploy them? I've been reading the GitHub and Codeberg docs and playing around to no avail. I'm tempted to just write a Go script to handle it on my server.
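The closest I've gotten is a workflow file shaped like this (registry, image name, and secrets are placeholders; Forgejo/Codeberg Actions use largely the same syntax):

```yaml
# .github/workflows/build.yml sketch -- builds and pushes on every
# push to main. ghcr.io and the image tag are examples.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/youruser/yourapp:latest
```

For the deploy half, people seem to either run Watchtower on the server to pull new tags, or have the workflow hit a webhook that triggers a pull-and-restart; I haven't settled on one yet.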
I had a bit of a hiccup with my Unraid server. It wouldn’t turn on, and I got so fed up that I decided to get rid of it. But now, I’m back on the hunt for a new home server OS, and I’m hoping to find one that’s easy to use and has a GUI.
If you were in my shoes in 2025, which one would you pick?
I'm still a noob but I have been shocked at how easy Cosmos Cloud has been to set up compared to my old docker/portainer/nginx architecture. Things just work with minimal to no troubleshooting
Thanks for the recommendation. It’s looking pretty interesting. I’m surprised I haven’t heard about it until now. How’s your experience overall? What other OS’s have you tried? I’ve tried Proxmox, then TrueNAS, but I settled on Unraid. (The Unraid server died on me. When I turned it on, I couldn’t access the web UI. Sometimes, when I press the power button, it doesn’t turn on. )
Scripting the enlargement of 2400 10x10 PNG files into 512x512 Stable Diffusion-generated images that look like high-resolution cityscapes in the style of Salvador Dali. I can't get the API to spit out a single image.
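If it's the AUTOMATIC1111 webui (started with --api), my current theory is that the response is JSON with a base64-encoded images array rather than a raw PNG, so it has to be decoded. Something like this is what I'm trying next (host, prompt, and filenames are placeholders; needs jq):

```shell
# Sketch only: img2img one tile through the webui API and decode the result.
curl -s http://127.0.0.1:7860/sdapi/v1/img2img \
  -H 'Content-Type: application/json' \
  -d "{
        \"init_images\": [\"$(base64 -w0 tile_0001.png)\"],
        \"prompt\": \"high resolution cityscape in the style of Salvador Dali\",
        \"width\": 512, \"height\": 512
      }" \
  | jq -r '.images[0]' | base64 -d > out_0001.png
```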
Finally managed to set up Tvheadend with rebroadcast IPTV from a private group, with functioning and automated import into Jellyfin.
Works very well (if the IP stream doesn't crash).
Edit: Spelling mistake.
Additional info: it took me 3 weekends to figure out how it all works together, to find out that Firefox, on neither desktop nor Android, can play live TV on Jellyfin (even with transcoding), and that EPG is a bitch to get right with obscure TV stations.
And WebGrab+Plus has asinine documentation. Meaning: nonexistent. Go figure out yourself what each parameter means, lol.
My 8GB SOQuartz CM4 has a broken memory chip and I can't return it, so I'm contemplating whether I should throw it in the oven and hope for the best, or whether somebody wants to buy a half-broken one unbaked...
Set up Pi-hole on my network, and I'm realizing it clashes with my VPN on my desktop and Private Relay on my Apple devices lol. Progress everywhere else though?
I've been slowly, but steadily, migrating the services I run on my TrueNAS CORE (FreeBSD) from Jails to Debian VMs so I can migrate to TrueNAS 25 (no more SCALE it seems, and Linux) around April without many hurdles, hopefully.
Besides having to learn some systemd, it has been a smooth ride.
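The systemd part has mostly boiled down to one unit file per service. A minimal sketch (the name, user, and paths are placeholders, not one of my real services):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My self-hosted app
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=myapp
ExecStart=/opt/myapp/bin/myapp --config /etc/myapp/config.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload` and `systemctl enable --now myapp` gets you the same start-on-boot behavior the Jails had.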
Now I'm down to the last two services, which I think are my most complicated setups, with no nice deb packages to ease installation: Paperless-ngx and Photoprism.
I'll probably look into playing with containers (LXC/Incus) to get the same lightweight efficiency as Jails once the migration to Linux is done. But honestly, if everything is running nicely, I won't be very motivated to do so; let's see.
The catch is that I can't migrate off CORE and have the services carried over seamlessly unless I use VMs.
And I don't know Docker containers, so that's something else I'd have to learn and understand. If I have to choose, I'd probably learn LXC/Incus instead.
Set up an instance of Supabase for an application I'm building that needs a REST backend. So far, so good; I'm using its Auth functions for OTP login and they work well.
Adding a second vdev today to my primary pool running on SCALE. The new vdev will be 12TB by 4 wide, alongside the existing 10TB by 5 wide. Drives are all 7,200 RPM enterprise-grade CMR drives.
May also add a second pool with the drives from my previous build which would be 10TB by 4 wide. These drives are 5,400 RPM so I would not use them in my primary pool.
Also, Noctua sent me a bracket (at no charge) so I can correct the orientation of the CPU fan to be facing front to back (currently left to right). I also have a couple 80mm fans and a 120mm fan to add to the server case. Once all of that is in place I hope to start running Ldarr against my libraries without CPU temps hitting 95°C.
This year has been my first foray into self hosting in general. I have been doing a lot of learning and have a long way to go but have got to the point where I have proxmox running with a few VMs running an arr stack, a jellyfin server and a Plex server.
I'm just super happy to get everything running and now need to fine tune stuff. Currently trying to figure out why the Plex server is down half the time externally.
If this is stuff that you can't afford to lose like family pictures, music library, or 90's memes or something, I've had decent luck with iDrive for my offsite backups. 4TB relatively cheap, works with Linux (using some Perl scripts they made), and you can define your own encryption keys so not even they can see your stuff.
It reliably backs up my NAS.
They've usually got a crazy cheap first-year deal on their homepage, or somewhere if you look around. So maybe that could be helpful until you get some other storage. :)
We finally got our music server set up after a lot of finagling with wireguard. It's really cool; we have slsk-dl set up to batch download our playlists from Soulseek, which we can then play in Jellyfin. Next I'm gonna set up Nextcloud for backing up photos, projects, the whole shebang.
Attempting to get my Lemmy instance going properly. Got it running on DigitalOcean, but they don't allow outgoing email and recommend a third-party service. I decided to try out Hetzner and am getting errors saying that Docker Compose isn't installed when running the Ansible script.
Had Jellyseerr break on me again on TrueNAS SCALE, something about a Jellyfin API blah blah blah. Decided that Sonarr and Radarr are fine enough to interface with directly that I don't need it, and deleted the image.
I'm iterating again on my lab setup and moving a few apps that I expose externally to their own VM so I can lock that sucker down even further. Right now I have a few different servers with Podman/Docker containers grouped by application type, e.g. critical apps (Forgejo, Nextcloud, Vaultwarden), my arr stack, media consumption, knowledge & tracking apps, and general apps.
I eventually intend to throw the external apps into a DMZ VM, but my network isn't set up to do that right now, so instead I'm getting them set up on their own host and will lock down the firewall to only allow it to communicate with my reverse proxy and nothing else.
It's been fun reworking my Ansible playbooks to do all my server provisioning (still need to figure out Terraform) along with running app installs and updates automatically at the press of a button. Working with firewall rules via Ansible was a bit of a headache at first, but now I'm in a really good spot.
I'm also testing out linkwarden and hoarder to finally replace what I lost with Omnivore a while ago.
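For the firewall piece, the play ends up pretty small with the `community.general.ufw` module. A sketch of what I mean (the host group, port, and IPs are placeholders for your own values):

```yaml
# Lock an exposed-apps host down to reverse-proxy traffic only.
- name: Firewall the exposed apps VM
  hosts: exposed_apps          # placeholder inventory group
  become: true
  tasks:
    - name: Allow SSH from the management box so Ansible can keep connecting
      community.general.ufw:
        rule: allow
        port: "22"
        proto: tcp
        from_ip: 192.0.2.20    # placeholder: Ansible controller

    - name: Permit HTTPS from the reverse proxy only
      community.general.ufw:
        rule: allow
        port: "443"
        proto: tcp
        from_ip: 192.0.2.10    # placeholder: reverse proxy

    - name: Default-deny everything else inbound
      community.general.ufw:
        state: enabled
        policy: deny
        direction: incoming
```

Order matters: put the allow rules in before enabling default-deny, or you can lock yourself (and Ansible) out mid-run.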
Working on testing Stalwart... and I will need to organize and properly document my various nft rules and routing tables, because it's slightly getting out of hand...
My big problem is remote stuff. None of my users have aftermarket routers where I could easily manipulate their DNS. One has an Android modem thing which is hot garbage. I'm using a combination of making their Pi their DHCP server, and one user is running on Avahi.
Chrome, the people's browser of choice, really, really hates HTTP, so I'm putting them on my garbage ######.xyz domain. I had plans to one day deal with HTTPS, just not this day. Locally I just use the domain for Vaultwarden, so the domain didn't matter. But if people are going to be using it, then I'll have to get a more memorable one.
System updates have been a faff. I'm SSHing over Tailscale. When Tailscale updates, it kicks me out, naturally. Which interrupts the session, naturally. Which stops the update, naturally. It also fucks up dpkg beyond what --configure -a can repair. I'll learn to update in the background one day, or include Tailscale in unattended-upgrades. Honestly, I should put everything into unattended-upgrades.
Locally works as intended though, so that's nice. Everything also works for my fiancee and I remotely all as intended, which is also nice. My big project is coalescing what I've got into something rational. I'm on the make it good part of the "make it work > make it good" cycle.
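If I do go the unattended-upgrades route, flipping it on is just two apt settings (Debian/Ubuntu; these are the stock values):

```text
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

One caveat: third-party repos like Tailscale's only get auto-upgraded if their origin is added to `Unattended-Upgrade::Allowed-Origins` in 50unattended-upgrades; `apt-cache policy` shows the exact origin string to use.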
System updates have been a faff. I'm 'ssh'ing over tailscale. When tailscale updates it kicks me out, naturally. Which interrupts the session, naturally. Which stops the update, naturally.
Have a look at screen. You can create a persistent terminal session to start your update in, disconnect (manually or by connection loss), and resume the session when you reconnect; the update will have kept running while you were gone.
I bought a Coral TPU and set up Frigate. I've been tweaking the alerts and motion detection, and moving Home Assistant notifications from Reolink to Frigate. I was thinking of signing up for Frigate+ for additional animal objects. Has anyone signed up for Frigate+? Is it worth it?
Still haven't properly set up my backups ...
I have my Nextcloud on ZFS (a single disk, sadly) and want to send it to a server at my parents' place (also ZFS), but both are behind NAT. I've successfully set up WireGuard between the two, but the connection won't stay up, so there's still a ways to go till I've got a happy off-site backup.
I kinda shied away from Tailscale because "I wanted to do it on my own", but I've just set it up (while on a train, no less) and it was really simple ... guess I'll run with it for now :D Now I'll just have to set up the send/receive scripts, but that's just some BASHing my head against a wall ;)
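The send/receive script should stay pretty short. A sketch of what I have in mind (dataset, remote host, and target pool are placeholders; assumes the initial full send was already done by hand and the remote is reachable over the tunnel):

```shell
#!/bin/sh
# Incremental off-site ZFS replication sketch (names are placeholders).
set -eu

DATASET="tank/nextcloud"
REMOTE="root@parents-nas"        # over the tailscale/wireguard tunnel
NOW="auto-$(date +%Y%m%d-%H%M)"

# Newest local snapshot, i.e. the last one the remote already has
LAST="$(zfs list -t snapshot -o name -s creation -H "$DATASET" | tail -n 1)"

zfs snapshot "${DATASET}@${NOW}"

# Send only the increments between LAST and the new snapshot
zfs send -I "$LAST" "${DATASET}@${NOW}" | ssh "$REMOTE" zfs receive "backup/nextcloud"
```

Cron that nightly, and eventually add pruning of old snapshots on both ends.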
I am setting up a server on a Raspberry Pi 4 with RaspiOS.
I want to download torrents, and I have connected an external USB 3 HDD for it....
I was told that you can change the Docker data directory to the external HDD so the containers live there.
That way the microSD would see less wear, and in case of failure, recovery would just mean reinstalling RaspiOS and pointing the directories back.... All the configuration, Docker containers, etc. are on the HDD.... So far I have not succeeded, although I have followed 2 or 3 tutorials.
You can also mount everything on the Raspberry, leaving the microSD only for booting, but it is more complicated....
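From what I gather, the tutorials all mean Docker's `data-root` setting, which is a one-key config (the mount point here is a placeholder; the HDD needs to be mounted via /etc/fstab before Docker starts):

```json
{
  "data-root": "/mnt/usbhdd/docker"
}
```

That goes in /etc/docker/daemon.json. Stop Docker, copy the old /var/lib/docker to the new location (e.g. `rsync -a`), then `sudo systemctl restart docker` and confirm with `docker info | grep "Docker Root Dir"`.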
Tried to setup custom domains using Nginx Proxy Manager and Let's Encrypt DNS-01 challenges so I wouldn't have to open any ports and it worked!... except not really?
Proxy Manager shows everything was successful but the domains don't go anywhere. It seems to be because the TP-Link router from my ISP does DNS Rebinding protection... with no option to turn it off apparently... why......
So now I don't know where to go. I'm not really fancying hosting DNS myself but if I can't fix this any other way then I guess I'll do it.
Or maybe I should ditch the ISP TP-Link and get something I could flash OpenWRT on?
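One thing I might try before hosting full DNS: the rebind filter only matters if LAN clients ask the router for the answer. If whatever the clients actually use for DNS answers the name locally (Pi-hole's "Local DNS Records" does exactly this), the router's filter never sees it. In raw dnsmasq terms it's a one-liner (domain and IP are placeholders):

```text
# answer the name locally so the router's rebind filter never applies
address=/myapp.example.com/192.168.1.50
```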
If not, IMHO I'd use the ISP equipment as a pass-through modem (if possible on that model?) and have a separate OpenWRT / pfSense firewall do all the heavy lifting for DHCP, DNS, ad blocking, etc