Early in my career as a cloud sysadmin, I accidentally shut down the production database server of a public website for a couple of minutes. Not that bad, and most users probably just got a little annoyed, but it didn't go unnoticed by management 😬 I had to come up with a BS excuse that it was a false alarm.
Because of the server's legacy OS image, simply changing the disk size in the cloud management portal wasn't enough; I also had to modify the partition table from the command line. I did my research, planned the procedure and the fallback process, then spun up a new VM to test it out before trying it on prod. Everything went smoothly, except that when the moment came to shut down and delete the newly created VM, I shut down the original prod VM instead, because they had similar names.
Put everything back in place, and eventually resized the original prod VM, but not without almost suffering a heart attack. At least I didn't go as far as deleting the actual database server :D
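For anyone curious, on a modern image the resize itself boils down to something like the sketch below. This is a minimal illustration only, assuming an ext4 root filesystem on /dev/sda1 and the cloud-utils growpart tool (none of which the story specifies); the legacy image in question needed more manual partition-table surgery than this.

    # Grow partition 1 to fill the newly enlarged disk (cloud-utils growpart)
    sudo growpart /dev/sda 1

    # Then grow the filesystem to fill the partition (ext4 assumed here;
    # an xfs filesystem would use xfs_growfs instead)
    sudo resize2fs /dev/sda1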
I tried to change ONE record in the production DB, but I forgot the WHERE clause and ended up changing over 2 MILLION records instead. Three-hour production shutdown. Fun times.
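The difference is a single missing line. A minimal sketch with made-up table and column names (the actual statement isn't in the story):

    -- intended: update exactly one row
    UPDATE customers SET status = 'inactive' WHERE id = 12345;

    -- what actually ran: no WHERE clause, so every row in the table gets updated
    UPDATE customers SET status = 'inactive';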
> I did my research, planned the procedure and fallback process, then spun up a new VM to test it out before trying it on prod
Went through a similar process when I was resizing some partitions on my media server. On the test run I forgot to add the G suffix to the new size, so it defaulted to MB, and a 450GB partition got resized down to 400MB. I was real glad I tested that out first.
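To make the failure mode concrete: with LVM, for example (a guess, since the comment doesn't say which tool was in play), the size flag defaults to megabytes when the unit suffix is left off. Volume group and LV names below are invented:

    # intended: resize the logical volume (and its filesystem) to 400 GiB
    sudo lvresize --resizefs -L 400G /dev/vgmedia/media

    # the typo: no unit suffix, so LVM reads this as 400 megabytes
    sudo lvresize --resizefs -L 400 /dev/vgmedia/media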