The whole point of Asimov's laws of robotics was that things can go wrong even if a system adheres to them perfectly. And current AI attempts don't even have that.
I honestly wonder whether an LLM trained monthly on every person on Earth's opinions about the state of the world, and what should be done to fix it, would show a "normalized trend" in that regard.
There are more dumb people than smart people, so a "normalized trend" would be a pretty bad idea.
Most people, regardless of their personal beliefs, are highly susceptible to populist rhetoric, and generally you want an AI governance bot to make the right choices, not the popular ones.
Since "dumb" and "smart" are each defined relative to the median, it logically follows that there are always about as many "dumb" people as "smart" ones.