That's pretty disingenuous, since most files aren't just random data.
Most real files actually have rather low entropy, even if they look like random junk (e.g., executables), chiefly due to repetition of similar data and sparse values.
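For a concrete feel, here's a Shannon-entropy sketch in Python (the payload is a toy stand-in for an executable's repeated opcodes and zero padding, not a real binary):

    import math
    from collections import Counter

    def entropy_bits_per_byte(data: bytes) -> float:
        # Shannon entropy of the byte histogram, in bits per byte.
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    # Repetition + sparse byte values keep entropy well below the
    # 8 bits/byte ceiling of truly random data.
    sample = b"\x55\x48\x89\xe5\x00\x00\x00\x00" * 4096
    print(entropy_bits_per_byte(sample))  # 2.0 bits/byte, nowhere near 8

Anything measurably below 8 bits/byte is, in principle, compressible.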
Is it because it works on patterns, so a random garbled string has too much noise to compress well, while a structured file produced by an actual piece of software has enough repeating patterns that it can actually be shrunk?
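E.g., I'd expect a quick zlib comparison like this (Python; both inputs are made up for the demo) to show exactly that difference:

    import os
    import zlib

    random_junk = os.urandom(64 * 1024)                   # high-entropy noise
    structured = b"GET /index.html HTTP/1.1\r\n" * 2048   # repeating pattern

    for name, data in (("random", random_junk), ("structured", structured)):
        out = zlib.compress(data, level=9)
        print(f"{name}: {len(data)} -> {len(out)} bytes")

    # random:     65536 -> ~65547 bytes (slightly LARGER)
    # structured: 53248 -> a few hundred bytes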
The upvotes and downvotes are battling hard on the parent comment. Gotta admit, I had to think for a few seconds to get the gist of it, but it's actually pretty slick and perfectly snarky.
edit: my only thought would be that an infinite selection of random data sets could at best be split roughly evenly between compressible and incompressible (by counting alone, there are strictly fewer shorter strings than inputs), and once you add the compressor's own headers and structure, it tips the balance firmly into "file size increases" territory.
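A toy check of that, with zlib as the stand-in compressor (Python):

    import os
    import zlib

    shrunk = grown_or_equal = 0
    for _ in range(10_000):
        blob = os.urandom(100)              # a random 100-byte "file"
        out = zlib.compress(blob, level=9)
        if len(out) < len(blob):
            shrunk += 1
        else:
            grown_or_equal += 1

    # The format's own header/checksum overhead means essentially every
    # random input comes out larger, never smaller.
    print(f"shrunk: {shrunk}, grown or equal: {grown_or_equal}")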