There are A LOT of people training AI on faulty models that learn from heavily biased data sets. Never underestimate humans' ability to fuck shit up. For every genius who makes a technical breakthrough, there are thousands of morons ready to misuse it or break it.
Also, the Sarah Connor Chronicles seemed to be moving in the direction of preventing Judgement Day by befriending Skynet when it gained consciousness rather than trying to destroy it. Alas, the series got cancelled before its time.
Loitering munitions are basically our version of the T-800. It's also not fun, if you've ever contributed to any open source computer vision project, to wonder whether your code somehow ended up inside a loitering munition because Iran, which doesn't bother to respect the GPL, decided to use those open source libraries in its drones' targeting systems and deploy them in Ukraine. Maybe I'm just overthinking things, but would you lose some sleep knowing your code became a critical part of an autonomous system that can decide to kill people on its own, without direct human input?
I don't think it's exactly the same. Now imagine the hammer has a mind of its own: it can loiter in an alley and decide whether or not to kill a passerby based on its own judgement. Say the owner tells the hammer to kill passersby wearing blue shirts, and the hammer does it, but due to computer vision quirks it has a 0.1% chance of also killing people wearing yellow shirts. Does the responsibility for killing the people in yellow shirts fall partially on the blacksmith? Can the blacksmith sleep soundly knowing those people wouldn't have had to die if his programming were a little better, even though he never sold the hammer for the purpose of killing people in the first place (it's his customer that abused it)?