Good effort
Yeah, telling AI what not to do is highly ineffective
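The usual workaround is to restate the constraint positively. A minimal sketch of the two framings (the bot role and prompt wording here are made up for illustration, not from any particular system):

```python
# Negative framing: the forbidden word is now the most salient token
# in the instruction, so models frequently bring it up anyway.
negative_prompt = (
    "You are a customer support bot. "
    "Do not mention refunds."
)

# Positive framing: describe the desired behavior instead of the
# forbidden one, leaving nothing to latch onto.
positive_prompt = (
    "You are a customer support bot. "
    "Answer product questions and route all billing topics "
    "to billing@example.com."
)

print(negative_prompt)
print(positive_prompt)
```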
"Do not injure a human or through inaction allow a human to come to harm."
Case in point: Asimov's laws never worked haha
Yeah, but in Asimov's case it was because a strict adherence to the Three Laws often prevented the robots from doing what the humans actually wanted them to do. They wouldn't just ignore a crucial "not" and do the opposite of what a law said.