(Why) would it be "bad practice" to put CPU and storage on separate devices?
TLDR: I am running some Docker containers on a homelab server, and the containers' volumes are mapped to NFS shares on my NAS. Is that bad for performance?
I have a Linux PC that acts as my homelab server, and a Synology NAS.
The server is fast but only has a 100GB SSD.
The NAS is slow(er) but has oodles of storage.
Both devices are wired to their own little gigabit switch, using priority ports.
Of course it's slower to run off HDDs than an SSD, but I don't have a large SSD. The question is: (why) would it be "bad practice" to separate CPU and storage this way? Isn't that pretty much what a data center does, too?
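To be concrete, by "mapped to NFS shares" I mean something along the lines of an NFS-backed named volume; the address, export path, and volume name below are just placeholders, not my exact setup:

```
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.20,rw,nfsvers=4 \
  --opt device=:/volume1/docker/appdata \
  appdata
```

The containers then mount that volume like any other named volume, so every read and write goes over the gigabit link to the NAS.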
I use this approach myself but am starting to reconsider.
I have an Asus PN51 (NUC-like) minipc as my server / brains, hosting all my dockers etc.
All Docker containers and their volumes are stored locally on the device.
I have a networked QNAP NAS as storage for things like Plex / Jellyfin.
It's mostly OK, but the NAS is noticeably slower to start up than the NUC, which has caused issues after power loss: Plex thinks all the movies are gone, so it empties out the library, and then when the NAS comes back up it reindexes everything and throws off all the dates. It also empties out tags (collections), and things like Radarr and Sonarr start fetching content they think doesn't exist anymore.
I've stopped those problematic services from starting on boot to hopefully fix those issues.
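For containers managed by Docker's restart policy, that just means something like the following (the container names are whatever yours happen to be called):

```
# don't bring these up automatically with the Docker daemon;
# start them manually (or from a script/timer) once the NAS is up
docker update --restart=no plex radarr sonarr

# run later, after the NAS has finished booting
docker start plex radarr sonarr
```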
I've also added a UPS to avoid minor power outs.
You might be able to solve some of these issues by changing the systemd unit files. Change or add an After= directive to make sure the network storage is fully mounted before the actual service tries to start.
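For example, assuming the media share is mounted at /mnt/nas/media and the container is run from a unit called docker-plex.service (both names are placeholders), a drop-in like this should hold the service back until the mount exists:

```
# /etc/systemd/system/docker-plex.service.d/nas.conf
[Unit]
# wait for networking and remote filesystems before starting the container
Wants=network-online.target
After=network-online.target remote-fs.target
# pull in the mount unit for the media share and order this service after it
RequiresMountsFor=/mnt/nas/media
```

Then run systemctl daemon-reload and restart the service to pick up the drop-in.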
It might also be worth having the systemd service that runs the Docker container (if you run it like that) include an ExecStartPre= statement that checks for availability of the NAS.
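A minimal sketch of that check, again assuming the share is mounted at /mnt/nas/media:

```
[Service]
# block startup until the share is actually mounted, polling every 5 seconds;
# TimeoutStartSec caps how long the unit will wait before failing
ExecStartPre=/bin/sh -c 'until mountpoint -q /mnt/nas/media; do sleep 5; done'
TimeoutStartSec=300
```

Combined with a Restart= policy, the service will just keep retrying until the NAS is back, instead of starting against an empty mount point.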