Lem453 @lemmy.ca
Posts 14
Comments 409
Migrating from Nextcloud AIO to Owncloud Infinite Scale: Good Idea?
  • Not sure what you mean by "expects OAuth".

    I've been testing it and it works very well so far with just a normal username and password to log in.

    I've actually been meaning to work on getting OAuth connected to my Authentik instance but haven't gotten around to it yet.

    So far the server seems very solid, and the clients for Android and Windows also seem very good.

  • Recommend me good e-reader
  • If you are into self-hosting already, you can run a Calibre-Web instance and enable full integration with the Kobo, so your own self-hosted ebook repository becomes the 'store' on the Kobo.

    https://brandonjkessler.com/technology/2021/04/26/setup-kobo-sync-in-calibre-web.html

    I use it like this to get access to all my ebooks.

    If you don't already do any self-hosting, it can take a while to get the foundation of your server set up; there's a minimal compose sketch below. I already had that in place, so this took less than an hour for me.

    [email protected]
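    If you're starting from zero on the hosting side, here's a minimal sketch of the Calibre-Web container (paths and port are examples; the Kobo sync toggle itself lives in Calibre-Web's admin settings, per the guide above):

    ```
    services:
      calibre-web:
        image: lscr.io/linuxserver/calibre-web:latest
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
        volumes:
          - ./config:/config          # app settings; enable Kobo sync in the admin UI
          - /path/to/library:/books   # your existing Calibre library
        ports:
          - "8083:8083"
        restart: unless-stopped
    ```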

  • Options on Cozy.io
  • https://owncloud.dev/ocis/

    It's a complete rewrite in Go.

    I started with ownCloud, then the devs left for Nextcloud and I've been on that since version 1.

    Then I looked at Seafile but found it lacking features (search).

    Finally I discovered that ownCloud has been rewritten as oCIS in Go, which greatly simplifies a lot of the issues Nextcloud inherited from the original ownCloud.

    Works very well so far. Just trying to get OAuth to work.

  • Help with docker setup
  • I love the one-click pull-from-git option. I don't like the corporate direction they seem to be taking.

    I haven't seen any alternative Docker GUI managers that have git pull for the compose file.

  • Help with ZFS Array
  • I may have done a BIOS update around the time it went down (I don't remember for sure), but I haven't added or physically changed any hardware in any way. It's working now with the above suggestions, so thanks!

  • Help with ZFS Array
  • Thanks, this worked. I made the ZFS array in the Proxmox GUI and it used the nvmeX names by default. Interestingly, when I ran zpool export, nothing seemed to happen. I then tried zpool import and it said no pools were available to import, but when I ran zpool status it showed the array up and working, with all 4 drives healthy and now using device IDs. Odd, but it seems to be working correctly now.

    ```
    root@pve:~# zpool status
      pool: zfspool1
     state: ONLINE
      scan: resilvered 8.15G in 00:00:21 with 0 errors on Thu Nov  7 12:51:45 2024
    config:

            NAME                                                                                 STATE     READ WRITE CKSUM
            zfspool1                                                                             ONLINE       0     0     0
              raidz1-0                                                                           ONLINE       0     0     0
                nvme-eui.000000000000000100a07519e22028d6-part1                                  ONLINE       0     0     0
                nvme-nvme.c0a9-313932384532313335343130-435431303030503153534438-00000001-part1  ONLINE       0     0     0
                nvme-eui.000000000000000100a07519e21fffff-part1                                  ONLINE       0     0     0
                nvme-eui.000000000000000100a07519e21e4b6a-part1                                  ONLINE       0     0     0

    errors: No known data errors
    ```
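
    For anyone hitting the same thing, the fix was roughly this sequence (a sketch using my pool name, run from the Proxmox host):

    ```
    # export the pool, then re-import it using stable by-id device names
    zpool export zfspool1
    zpool import -d /dev/disk/by-id zfspool1
    zpool status
    ```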
    
  • Where to Mount NFS Shares and Where to Keep Docker-Compose
  • This is better as well because it prevents the container from starting if the mount doesn't work. Some apps will freak out if they lose their data, and apps that index files, like Jellyfin, might start deleting files from their index because the library now appears empty.

    An NFS mount via docker compose is the best way to go.
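
    A minimal sketch of what that looks like (server address, export path, and the Jellyfin service are examples, not a drop-in config):

    ```
    # declare the NFS share as a named volume; if the NFS server is
    # unreachable, the container fails to start instead of coming up
    # with an empty library
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        volumes:
          - media:/media
    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: addr=192.168.1.10,rw
          device: ":/export/media"
    ```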

  • Help with ZFS Array

    I have a ZFS pool that I made on Proxmox. I noticed an error today. I think the issue is that the drives got renamed at some point and now it's confused. I have 5 NVMe drives in total. 4 are supposed to be in the ZFS array (CT1000s) and the 5th, a Samsung drive, is the system/Proxmox install drive, not part of ZFS. It looks like the numbering got changed: the drive that used to be in the array labeled nvme1n1p1 is actually the Samsung drive, while the drive that is supposed to be in the array is now called nvme0n1.

    ```
    root@pve:~# zpool status
      pool: zfspool1
     state: DEGRADED
    status: One or more devices could not be used because the label is missing or
            invalid. Sufficient replicas exist for the pool to continue
            functioning in a degraded state.
    action: Replace the device using 'zpool replace'.
       see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
      scan: scrub repaired 0B in 00:07:38 with 0 errors on Sun Oct 13 00:31:39 2024
    config:

            NAME                     STATE     READ WRITE CKSUM
            zfspool1                 DEGRADED     0     0     0
              raidz1-0               DEGRADED     0     0     0
                7987823070380178441  UNAVAIL      0     0     0  was /dev/nvme1n1p1
                nvme2n1p1            ONLINE       0     0     0
                nvme3n1p1            ONLINE       0     0     0
                nvme4n1p1            ONLINE       0     0     0

    errors: No known data errors
    ```

    Looking at the devices:

    ```
    root@pve:~# nvme list
    Node          Generic     SN        Model                          Namespace  Usage                Format       FW Rev
    /dev/nvme4n1  /dev/ng4n1  193xx6A   CT1000P1SSD8                   1          1.00 TB / 1.00 TB    512 B + 0 B  P3CR013
    /dev/nvme3n1  /dev/ng3n1  1938xxFF  CT1000P1SSD8                   1          1.00 TB / 1.00 TB    512 B + 0 B  P3CR013
    /dev/nvme2n1  /dev/ng2n1  192xx10   CT1000P1SSD8                   1          1.00 TB / 1.00 TB    512 B + 0 B  P3CR010
    /dev/nvme1n1  /dev/ng1n1  S5xx3L    Samsung SSD 970 EVO Plus 1TB   1          289.03 GB / 1.00 TB  512 B + 0 B  2B2QEXM7
    /dev/nvme0n1  /dev/ng0n1  19xxD6    CT1000P1SSD8                   1          1.00 TB / 1.00 TB    512 B + 0 B  P3CR013
    ```

    Trying to use the zpool replace command gives this error:

    ```
    root@pve:~# zpool replace zfspool1 7987823070380178441 nvme0n1p1
    invalid vdev specification
    use '-f' to override the following errors:
    /dev/nvme0n1p1 is part of active pool 'zfspool1'
    ```

    So it thinks nvme0n1 is still part of the array even though the zpool status output shows that it's not.

    Can anyone shed some light on what is going on here? I don't want to mess with it too much since it does work right now, and I'd rather not start again from scratch (restoring from backups).

    I used smartctl -a /dev/nvme0n1 (and the same on the other drives) and there don't appear to be any SMART errors, so all the drives seem to be healthy.

    Any idea on how I can fix the array?


    Am I the only one who missed the Owncloud rewrite in Go?

    owncloud.com Infinite Scale - the new cloud-native data platform

    Infinite Scale is a new data platform with a focus on performance, scalability, security and governance. Microservice-based, database-less and written in Go.


    The topic of self-hosted cloud software comes up often, but I haven't seen anyone mention ownCloud Infinite Scale (the rewrite in Go).

    I started my cloud experience with ownCloud years ago. Then there was a schism and almost all the active devs left for the Nextcloud fork.

    I used Nextcloud from its inception until last year, but like many others I found it brittle (easy to break something) and half-baked (features always seemed to be at 75% of what you want).

    As a result I decided to go with Seafile and stick to the Unix philosophy: use an app that does one thing very well rather than a mega-app that tries to do everything.

    Seafile does this very well: super fast, works with single sign-on, no bloat.

    Then just the other day I discovered that ownCloud has done a full rewrite. No PHP, no Apache. Check the GitHub repo: multiple active devs with lots of activity over the last year. The project seems stronger than ever and aims to fix the primary issues of the Nextcloud/ownCloud PHP codebase. It's also designed for cloud deployment, so it works well with Docker and should be easy to configure via environment variables instead of config files mapped into the container.
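
    For the Docker-curious, a minimal single-node sketch of what I mean (image name and variable names are taken from the oCIS docs as I remember them; verify against the current docs before relying on this):

    ```
    # oCIS via docker compose -- test setup, not production-ready
    services:
      ocis:
        image: owncloud/ocis:latest
        # initialize the config on first run, then start the server
        entrypoint: ["/bin/sh", "-c", "ocis init || true; ocis server"]
        environment:
          - OCIS_URL=https://cloud.example.com   # external URL: replace with yours
          - OCIS_INSECURE=true                   # testing only, e.g. behind a reverse proxy
        volumes:
          - ocis-config:/etc/ocis
          - ocis-data:/var/lib/ocis
        ports:
          - "9200:9200"
    volumes:
      ocis-config:
      ocis-data:
    ```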

    Anyways, the point of this thread is:

    1. If, like me, you had never heard of it, check it out.
    2. If you have used it, please post your experiences compared to Nextcloud, Seafile, etc.

    A Story of Silent Data Corruption with Seafile

    Technically this isn't actually a Seafile issue; however, the upload client really should have the ability to run checksums comparing the original file to the file that has been synced to the server (or other device).

    I run Docker in a VM hosted by Proxmox. Proxmox manages a ZFS array which contains the primary storage the VM uses. Instead of making the VM disk 1 TB+, the VM disk is relatively small since it holds only the OS (64 GB), and the Docker containers mount a folder on the ZFS array itself, which is several TBs.

    This had all been going really well with no issues until yesterday, when I tried to access some old photos and they would only load halfway. The top part would be there, but the bottom half would be grey/missing.

    This appeared randomly across numerous photos: some were normal, others had missing sections. Digging deeper, some files were also corrupt and would not open at all (PDFs, etc.).

    Badness alert....

    All my backups come from the server. If the server data has been corrupt for a long time, then all the backups would be corrupt as well. All the files on the Seafile server were originally synced from my desktop, so when I open a file locally on the desktop it works fine; only when I try to open the file on Seafile does it fail. Also, not all the files were failing, only some. Some old, some new. Even the file sizes didn't seem to consistently predict whether a file would work or not.

    It's now at the point where I can take a photo from my desktop, drag it into a Seafile library via the browser, and it shows a successful upload, but then the preview won't work, and downloading that very same file back shows a file size of about 44 KB regardless of the original size.

    Google/DDG...can't find anyone that has the same issue...very bad

    Finally I noticed an error in MariaDB: "memory pressure, can't write to disk" (paraphrased).

    OK, that's odd. The RAM was fine, which is what I first suspected. Disk space can't be the issue since the ZFS array is only 25% full, and both MariaDB and Seafile only have volumes that are on the ZFS array. There are no other volumes... or are there???

    Finally, in Portainer, I check the volumes that exist: Seafile only has the two expected ones, data and database. Then I see hundreds of unused volumes.

    A quick Google revealed docker volume prune, which deleted many GBs worth of old, unused volumes.
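
    For reference, the relevant commands (prune only removes volumes that no container references):

    ```
    # list dangling (unreferenced) volumes first
    docker volume ls -f dangling=true

    # then delete them
    docker volume prune
    ```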

    By this point I had already created and recreated the Seafile containers a hundred times with test data and simplified the docker compose as much as possible, but after the prune it started working right away. MariaDB starts working, and I can now copy a file from the web interface or the client and it works correctly.

    Now I go through the process of setting up my original docker compose with all the extras I had, remake my user account (luckily it's just me right now), set up the sync client, and then start copying the data from my desktop to my server.

    I've got to say, this was scary as shit. My setup uploads files from desktop, laptop, phone, etc. to the server via Seafile; from there Borg takes incremental backups of the data and sends them offsite. The second I realized that the local data on my computer was fine but the server data was unreliable, I knew that even my backups were now unreliable.

    IMHO this is a massive problem. Seafile will happily 'upload' a file and report success, but then trying to re-download the file results in an error since it doesn't exist.

    Things that really should be present to avoid this:

    1. The client should have the option to run a quick checksum on each file after upload and compare it against the original to ensure consistency. There should also be an option to run this check later on demand, outputting a list of inconsistent files.
    2. The default docker compose should run with health checks on MariaDB, so that when it starts throwing errors while the interface still runs, someone can be alerted (see the sketch after this list).
    3. There needs to be some kind of reminder to check in on unused docker volumes.
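
    On point 2, a sketch of what a MariaDB health check could look like in compose (this is a generic mariadb-admin ping pattern, not Seafile's official compose file):

    ```
    services:
      db:
        image: mariadb:10.11
        environment:
          - MARIADB_ROOT_PASSWORD=changeme   # example only
        healthcheck:
          test: ["CMD-SHELL", "mariadb-admin ping -h localhost --password=$$MARIADB_ROOT_PASSWORD"]
          interval: 30s
          timeout: 5s
          retries: 3
      seafile:
        image: seafileltd/seafile-mc:latest
        depends_on:
          db:
            condition: service_healthy
    ```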

    Self hosted YouTube player with automatic yt-dlp downloader

    Looking for a self-hosted YouTube front end with an automatic downloader. So you would subscribe to a channel, for example, and it would automatically download all of its videos and new uploads.

    Jellyfin might be able to handle the front-end part, but I'm not sure about automatic downloads and proper file naming and metadata.
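
    For the download half, a sketch of a yt-dlp invocation that could run on a cron or systemd timer (channel URL and paths are examples; the output template is just one Jellyfin-friendly naming scheme, not an official one):

    ```
    # fetch new uploads from a channel, skip anything already downloaded,
    # and embed metadata/thumbnails for the media server to pick up
    yt-dlp \
      --download-archive /media/youtube/archive.txt \
      --embed-metadata \
      --embed-thumbnail \
      -o "/media/youtube/%(channel)s/%(title)s [%(id)s].%(ext)s" \
      "https://www.youtube.com/@SomeChannel/videos"
    ```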


    Jellyfin app on Steam Deck allows screen to dim

    The Jellyfin app (self-hosted video streaming) on Steam Deck (installed via desktop mode -> Discover as a Flatpak) doesn't seem to register as 'playing' with the OS. The screen will dim after a few minutes.

    I'm 'playing' the Jellyfin app as a non-Steam game in game mode.

    I know I can disable screen dimming in the settings, but is there a way to have it auto-detect when a video is playing and prevent the screen from dimming?


    Suggestions for good decaf coffee roasters in Canada?

    Any suggestions for roasted decaf beans I can get in Canada?


    Anyone tried this 4x 10GbE + 5x 2.5GbE router?

    forums.servethehome.com ServeTheHome Forums

    A place to discuss servers, storage and networking

    Very solid price, the cheapest I've seen for something like this. Has anyone tried it with OPNsense or other software?

    The linked thread talks about someone getting 60 °C load temps, but the intake air was 37 °C and they were using an RJ45 transceiver, which is known to draw a lot of power.

    Wondering if anyone else has experience with this. Seems like a big advancement in what's possible at home scale with non-second-hand equipment.

    Another article about this: https://liliputing.com/this-small-fanless-pc-is-built-for-networking-with-four-10-gbe-and-five-2-5-gb-ethernet-ports/


    FFmpeg now has multi-threading

    This should eventually make its way into Jellyfin. Eager to see the performance improvements.


    Shout out to JellyStat, awesome stats for Jellyfin!

    github.com GitHub - CyferShepard/Jellystat: Jellystat is a free and open source Statistics App for Jellyfin


    Beautiful stats for Jellyfin. I just set it up in docker compose yesterday. Love it!
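
    For anyone curious, a rough sketch of the compose involved (image name and environment variables are from memory of the Jellystat README; double-check them there):

    ```
    services:
      jellystat-db:
        image: postgres:15
        environment:
          - POSTGRES_USER=postgres
          - POSTGRES_PASSWORD=changeme        # example only
        volumes:
          - jellystat-db:/var/lib/postgresql/data
      jellystat:
        image: cyfershepard/jellystat:latest
        environment:
          - POSTGRES_USER=postgres
          - POSTGRES_PASSWORD=changeme
          - POSTGRES_IP=jellystat-db
          - POSTGRES_PORT=5432
          - JWT_SECRET=some-long-random-string
        ports:
          - "3000:3000"
        depends_on:
          - jellystat-db
    volumes:
      jellystat-db:
    ```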


    Z-Wave to Ethernet bridge

    I'm wondering if I can get a device that enables Z-Wave over Ethernet/WiFi and connect it to my Home Assistant setup.

    Basically, I have a Home Assistant setup in my house. I want to add a few simple things at my parents' place, but I want it all to be on the same HA instance.

    On the router at my parents' place, I can install WireGuard to connect it to my LAN, so my parents' network effectively becomes part of my LAN.

    I'm looking for a device that can connect to Z-Wave and then send that info over the LAN to my Home Assistant. Does such a thing exist? Thanks.
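
    One approach I've seen suggested (an assumption worth verifying, not something I've run myself): put the Z-Wave USB stick in any small Linux box at the remote site, expose its serial port over TCP with ser2net, and point Z-Wave JS at the TCP address. A sketch using ser2net's classic config format:

    ```
    # /etc/ser2net.conf on the remote box: expose the stick on TCP port 3333
    # (the device path is an example; check dmesg for yours)
    3333:raw:0:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT

    # then in Z-Wave JS on the Home Assistant side, use a serial path like:
    #   tcp://<parents-lan-ip>:3333
    ```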


    Do UltraPro Z-Wave switches have local control?

    By local control, I mean: if the Z-Wave hub is down, will the switch still work as a dumb switch and turn the lights on/off?

    This is the product I would like to get, but I can't find whether it allows 'dumb switch' operation. Does anyone have experience with these? https://byjasco.com/ultrapro-z-wave-in-wall-smart-switch-with-quickfit-and-simplewire-white

    Thanks!


    Starship ready for second test flight. Waiting for FAA approval.

    www.space.com SpaceX stacks giant Starship rocket ahead of 2nd test flight (video, photos)

    'Starship is ready to launch, awaiting FAA license approval,' Elon Musk said.


    Starship has been stacked and is apparently ready to launch, per Musk. Waiting on FAA approval for the second test flight.

    homelab @lemmy.ml

    NixOS as a docker host VM on proxmox?

    Hi all. I just learned about NixOS a few weeks ago. I'm in the process of migrating several of my Docker services to a new server that will have Proxmox installed as the host and then a VM for Docker.

    I'm currently using Alpine as the VM OS and it works well, but one of the main goals of the migration is to use infrastructure as code as much as possible. All my Docker services are docker compose files checked into a git repo that gets deployed. When I need to make a change, I update the git repo and pull down the latest docker compose.

    I currently have a bunch of steps I need to do on the Alpine VM to make it ready for Docker (QEMU guest agent, NFS shares, etc.).

    NixOS promises to do all that with a single config file and then create an immutable OS that never changes after that. That seems to fit the infrastructure-as-code philosophy and easy reproducibility well.
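
    A minimal sketch of the configuration.nix I have in mind (these are real NixOS options as far as I know; the NFS address and user are examples):

    ```
    # configuration.nix -- docker host VM on proxmox (sketch)
    { config, pkgs, ... }:
    {
      # QEMU guest agent so proxmox can see and manage the VM
      services.qemuGuest.enable = true;

      # docker daemon
      virtualisation.docker.enable = true;

      # NFS share from the NAS (example address/path)
      fileSystems."/mnt/media" = {
        device = "192.168.1.10:/export/media";
        fsType = "nfs";
        options = [ "x-systemd.automount" "noauto" ];
      };

      users.users.admin = {
        isNormalUser = true;
        extraGroups = [ "docker" "wheel" ];
      };
    }
    ```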

    Has anyone else tried NixOS as a docker host? Any issues you've encountered?

    homelab @lemmy.ml

    People who have infrastructure as code setups: show off your setup!

    I'm just starting to upgrade my basic Unraid Docker setup to an infrastructure-as-code setup.

    I will use Unraid as a NAS only. My media and backups will be on Unraid; everything else goes on a separate Proxmox VM running on an SSD ZFS storage array. Both the Unraid and Proxmox hosts share their storage via NFS, and each Docker container mounts the NFS volumes as needed.

    For the containers I use an Alpine VM with Docker. I use Portainer to connect to a Gitea repo (on Unraid) and pull down the docker compose files.

    So my workflow is: write the compose file in VS Code on my PC, commit to git, then hit the redeploy button in Portainer and it pulls the latest compose file automatically.

    What's your setup?
