Algorithms are used to dictate speed, behaviour and, ultimately, the wages of gig workers, resulting in different payments for the same work conducted at the same time, with the same skills.
This is exactly the sort of thing I'm worried about with AI.
Let's take a quick step back. AI/machine learning is a program trained to accomplish one specific job, and to do that job very well. For this example, let's say the AI needs to identify any picture with a cat in it. Programmers develop the framework for the code, then feed the AI test cases meant to "teach" it how to do the job with minimal errors. It's fed correct pictures as well as incorrect ones (some with other animals, or paintings rather than photos). With enough test cases, and human confirmation of whether each result was right or wrong, the AI can successfully identify pictures of cats with few to no errors.
But the thing is, and this is important: the developers of an AI generally don't know exactly how it makes these determinations. They just feed it test cases and confirm when it's right. AIs obviously don't have human brains and don't think the way we do, so the connections they make come from patterns that people may not be able to determine. That's fine for identifying cat photos, but apply it back to the Uber and DoorDash payment methods. It means these companies aren't paying their workers based on human standards and expectations of a job well done, but on pattern recognition from an AI that may lower or raise pay based on factors completely unknown to both the worker and the company, factors that may not even be things the company wants to encourage (they just don't know what the AI is rewarding).
I'm not worried about the unrealistic "robots cause the apocalypse" nonsense that Hollywood loves. My concern is people assigning AI jobs that AI shouldn't do, and assuming AI is some master super-intellect instead of the trained program it is.