Use Cockpit by Red Hat. It gives you a GUI for making networking changes*, and it checks whether the connection still works after applying a change. If it doesn't (say, the IP address changed), it rolls the change back and warns you. You can then either force the change through or leave things as they were.
Netplan is an abstraction layer: it renders the same config through either systemd-networkd or NetworkManager. I suppose that's its selling point, that one config format can drive multiple backends.
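For what it's worth, the backend is just one line in the YAML. A minimal sketch (the file name and interface are made up):

    # /etc/netplan/01-example.yaml -- hypothetical minimal config;
    # the renderer line is what picks the backend (networkd or NetworkManager)
    network:
      version: 2
      renderer: networkd
      ethernets:
        eth0:
          dhcp4: true

Relevant to this thread: netplan try applies a new config and rolls it back automatically unless you confirm it within a couple of minutes, so a botched change doesn't lock you out.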
I have a failsafe service on one of my servers: it pings the router, and if it hasn't reached it even once for an entire hour, it reboots the server.
That won't save me from every mistake, but it gets me out of firewall, link-state, routing and a few other lockouts when I'm not physically present.
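The logic fits in a small script; a rough sketch (gateway address, state-file path and the threshold are invented for illustration), run from root's crontab or a systemd timer every minute or so:

    #!/bin/sh
    # Hypothetical watchdog sketch: reboot if the router hasn't answered
    # a single ping in the last hour.
    GATEWAY=192.168.1.1
    STATE=/var/run/gw-watchdog.last-ok

    if ping -c 1 -W 5 "$GATEWAY" >/dev/null 2>&1; then
        touch "$STATE"                 # record the last time the router answered
    elif [ -f "$STATE" ]; then
        last_ok=$(stat -c %Y "$STATE")
        now=$(date +%s)
        if [ $((now - last_ok)) -ge 3600 ]; then
            /sbin/reboot               # no contact for an hour: power-cycle and hope
        fi
    fi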
I started to DBAN (wipe) my internal drive once instead of an attached drive. That was the last time I ran DBAN on a machine with any drives of value plugged in.
Lol I've locked myself out of so many random cloud and remote instances like this that now I always make a sleep chain or a kill timer with tmux/screen.
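The pattern, for anyone who hasn't picked up the habit yet (interface and timings are only examples):

    # kill timer in a detached tmux session: the box reboots (wiping the
    # runtime-only change) unless I cancel the session in time
    tmux new-session -d -s failsafe 'sleep 600 && reboot'

    ip link set eth0 down        # the kind of command that locks you out

    # still have a shell? cancel the failsafe once you've confirmed access:
    tmux kill-session -t failsafe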
A decade and change ago, in a past life, I was tasked with switching SELinux to permissive mode on the majority of systems on our network (multiple hundreds, or we might have gotten above one thousand at that point, I don't recall exactly). This was to be done using Puppet. A large number of the systems, including most of our servers, had already been manually switched to permissive but it wasn't being enforced globally.
Unfortunately, at that point I was pretty familiar with Puppet but had only worked with SELinux a very few times. I did not correctly understand the syntax of the config file or setenforce and set the mode to ... something incorrect. SELinux interpreted whatever that was as enforcing mode. I didn't realize what I had done wrong until we started getting alerts from throughout the network. Then I just about had a panic attack when I couldn't log in to the systems and suddenly understood the problem.
Fortunately, it's necessary to reboot a system to switch SELinux from disabled to any other mode, so most customer facing systems were not impacted. Even more fortunately, this was done on a holiday, so very few customers were there to be inconvenienced by the servers becoming inaccessible. Even more fortunately, while I was unable to access the systems that were now in enforcing mode, the Puppet agent was apparently still running ... So I reversed my change in the manifest and, within half an hour, things were back to normal (after some service restarts and such).
When I finally did correctly make the change, I made sure to quintuple check the syntax and not rush through the testing process.
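For reference, the settings involved are tiny: the config file takes a word, and setenforce takes either a word or a number.

    # /etc/selinux/config -- takes effect at boot; valid values are
    # enforcing, permissive, or disabled
    SELINUX=permissive

    # setenforce changes the running mode immediately (it can't switch
    # to or from disabled at runtime):
    setenforce 0      # 0 = permissive, 1 = enforcing
    getenforce        # print the current mode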
edit: While I could have done without the assault on my blood pressure at the time, it was an effective demonstration of our lack of readiness for enforcing mode.
Not a sysadmin, but about a year into my first software engineering job I was working on the live DB in SQL without using BEGIN TRAN / ROLLBACK TRAN.
Suffice to say, I broke the whole system by making an UPDATE without a WHERE clause. Luckily we had regular backups, but it was a lot of debugging with the boss before I realised it was me who caused the issue the client was reporting.
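The pattern I wasn't using looks roughly like this (client, database and table names invented; syntax varies a bit between engines, MySQL-flavoured here):

    mysql production_db <<'SQL'
    START TRANSACTION;
    UPDATE invoices SET status = 'void' WHERE invoice_id = 4821;
    SELECT ROW_COUNT();  -- expect 1; anything else means the WHERE clause is off
    ROLLBACK;            -- switch to COMMIT only once the row count looks right
    SQL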
I was on-call and half awake when I got paged about a cache server's memcached being down for the third time that night. They'd all start to go down like dominoes if you weren't fast enough at restarting the service, which could overwhelm the database and messaging tiers (baaaaad news to say the least). Two more had their daemon shit the bed while I was examining it. Often it was best to just kick it on all of them to rebalance things. It was... not a great design.
So I wrote a quick loop to ssh in and restart the service on each box in the tier to refresh them all, just in case, and hopefully stop the incessant pages. Well. In my bleary-eyed state I set reboot in the variable instead of restart. Took out the whole cache tier (50+) and the web site. First and only time I did that, but it definitely woke me up. Oddly enough the site ran better for months afterwards, as my reboots shook loose a problem nobody had ever noticed.
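For flavour, the loop was more or less this shape (hostnames invented), which is why one sleepy edit to a variable was all it took:

    CMD="sudo service memcached restart"   # swap this for "sudo reboot" and goodbye, cache tier
    for host in cache{01..54}.example.com; do
        ssh -o ConnectTimeout=5 "$host" "$CMD"
    done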
Yes, that's what you'd think. And then you'll be sitting at a blank terminal once again because you made some trivial mistake yet again.
A friend of mine developed a habit (working at a decent-sized ISP 20+ years ago) of setting up a scheduled reboot for everything in 30 minutes, no matter what you're about to do. The hardware back then (I think it was mostly Cisco) had a 'running config' and a 'stored config', which were two separate things. Log in, set up the scheduled reboot, do whatever you're planning to do, and if you mess up and lock yourself out, the system restores the previous config in a little while and you can go back in and avoid the previous mistake. Rinse and repeat.
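The same habit translates to a plain Linux box: schedule the reboot first, keep the change runtime-only, and cancel the reboot only once you've confirmed you can still get in. Roughly (the address is just an example):

    shutdown -r +30 "failsafe reboot in case this change locks me out"
    ip addr add 192.0.2.10/24 dev eth0    # runtime-only change, not written to any config file
    # ...verify from a second session that you can still reach the box...
    shutdown -c                           # cancel the failsafe once everything checks out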
And, personally, I think that's one of the best ways to differentiate actual professionals from the 'move fast and break things' group. Once you've locked yourself out of a system literally halfway across the globe too many times, you eventually learn to think about the next step and the failovers. I'm not that much of a network guy, but I have shot myself in the foot enough that whenever there's dd, mkfs or something similar on a root shell, I automatically pause for a second to confirm the command before hitting enter.
And while gaining experience teaches you how to avoid the pitfalls, the more important part (at least for me) is thinking ahead. The constant mindset of thinking about processes, connectivity, and what you can actually do if you fuck up becomes part of your workflow. Accidents will happen, no matter how much experience you have. The really good admins just know that something will go wrong at some point in the process, and they build things so that when you do fuck up you still have a way in to fix it, instead of calling someone 6 timezones away in the middle of the night to clean up your mess.
I still prefer net-tools and use ifconfig eth0 up
That ip mess I'd rather do without, and I wish those funky UU device/interface names out of my system.
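For anyone stuck translating between the two, the common incantations map roughly like this:

    ifconfig eth0 up                                 # net-tools
    ip link set eth0 up                              # iproute2 equivalent

    ifconfig eth0 192.0.2.10 netmask 255.255.255.0   # net-tools
    ip addr add 192.0.2.10/24 dev eth0               # iproute2 equivalent

    route add default gw 192.0.2.1                   # net-tools
    ip route add default via 192.0.2.1               # iproute2 equivalent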
By the way, what system/init/svc manager are you using?
With 50 years behind you: a cron job to check whether it's up and reset it while you're away. You can always remotely cancel the cron job ... but that will be a new mistake, not the old one :)
I started on IRIX and Ultrix, if you remember those, so what would I know :)
It was a bcachefs array with data replicas set to a mix of 1, 2 & 4 depending on what was most important, but thankfully I had the foresight to set metadata to be mirrored across all 4 drives.
I didn't get the good fortune of only having to do a resilver, but all I really had to do was fsck to remove references to non-existent nodes until the system would mount read-only, then back it up and rebuild it.
NixOS did save my bacon re: being able to get back to work on the same system by morning.
Like 3 weeks ago, on my (testing) server, I accidentally dd'd a Linux ISO onto the first drive in my storage array (I had some kind of jank manual "LVM" bullshit set up with odd mountpoints to act as a NAS, do not recommend), with no Timeshift and no Btrfs snapshot. It gave me the kick in the pants I needed to stop trying to use a MacBook Air with 6 external hard drives as a server, though. It also gave me the kick in the pants I needed to stop using volatile naming conventions in my fstab.
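On the volatile-naming point: the fix is to mount by UUID (or label) instead of by device letter, since /dev/sdX assignments can shuffle between boots. A sketch of the fstab difference (the UUID and mountpoint are made up; blkid prints the real values):

    # fragile: /dev/sdb1 can come up as /dev/sdc1 after the next reboot
    /dev/sdb1                                   /srv/media  ext4  defaults  0  2

    # stable: mount by UUID (or LABEL=), as reported by blkid /dev/sdb1
    UUID=0f1d2c3b-4a5e-6789-abcd-ef0123456789   /srv/media  ext4  defaults  0  2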
If it makes you feel any better, I did something just as infuriating a few years ago.
I had set up my home media server and had finally moved it to the garage with just a power cable and an ethernet cable plugged in. Everything was working perfectly, but I needed to check something in the network settings. Being quite new to Linux, I used a remote desktop tool to log in and do everything through a GUI.
I accidentally clicked the wrong item in the menu and disconnected the network. I only had a spare PS/2 keyboard and mouse, and as the server was an old computer, it would crash if I plugged a PS/2 device in while it was running*.
The remote desktop stayed open but frozen, mocking me for my obvious mistake and lack of planning, with the remote mouse icon stuck in place on the disconnect menu.
*I can't remember if that was a PS/2 thing or something specific to my server, but I didn't want to risk it.
Old hardware used to get really upset before plug and play became common. I remember I was playing some old racing game with a joystick on a win95 box, and accidentally pulled the connector out, lost my entire game because the system flipped out.
Did this once on a router in a datacenter that was a flight away. I've remembered to set the reboot-in-the-future command ever since. As I typed the fatal command, I remember part of my brain screaming not to hit enter as my finger approached the keyboard. 🤦‍♂️
Do it. This saved my life on more than one occasion.
You'll think "nah, it'll be fine", and then at 11pm, when your brain's fried on vending machine coffee, you'll be glad that you did it... 3 times over...
I've been trying to find a network-capable KVM for home use. They're all pretty expensive or lacking functionality. I don't actually need one or I'd pull the trigger, but I sure have been tempted.
At $DAYJOB, we're currently setting up basically a way to bridge an interface over the internet, so it transports everything that enters on one interface across the aether. Well, you already guessed it: I accidentally configured it for eth0 and couldn't SSH in anymore.
Where it gets fun is that I actually was at work. I was setting it up on two raspis, which were connected to a router, everything placed right next to me. So I figured I'd just hook up another Ethernet cable, pick out the IP from the router's management interface and SSH in that way.
Except I couldn't reach the management interface anymore. Nothing in that network would respond.
Eventually, I saw that the router's activity lights were blinking like Christmas decorations. I'm guessing I had built a loop, and something akin to a broadcast storm was overloading the router.
Thankfully, the solution was then relatively straightforward, in that I had to unplug one of the raspis, SSH in via the second port, nuke our configuration and then repeat for the other raspi.
I was scared to move to the cloud for this reason. I was used to running to the server room and the KVM if things went south. If that was frozen, physically unplugging the server from the switch would usually get it to calm down.
Now Amazon supports a direct console interface, like a KVM, and you can virtually unplug virtual servers from their virtual networks too.
I hope you don't admin any mission-critical servers. That's a first-year mistake.
Edit:
Hi, salty kiddos still making year-one mistakes! Downvoting this comment won't improve your skill set. You will get walked out at a serious enough outfit for doing something like that to a prod system.
This is a server I was setting up. It's not doing anything useful at all at the moment, hence the lax work practice. The only reason I drove back to work is that it's needed tomorrow and I wanted to finish setting it up tonight.
Gotcha. I disagree with your methodology of not being careful about non-prod systems. You stated you forgot it was remote. What happens the next time you do that but forget it's prod or mix up terminals?