The rules aim to promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects.
While the motives may be noble (regulating surveillance), models like Stable Diffusion could get caught in the regulatory crossfire, to the point where using the original models becomes illegal and new models get neutered until they are useless.
Further, this could make it impossible for individuals or smaller startups to train open source models (maybe even LoRAs). Adobe and the other large corporations would rule the market.
Honestly, I might've been caught up in some fear mongering, but at this point I don't feel like AI can be safe in the hands of capital owners. The potential for harm to our society is too great.