Computer scientist won’t give up fight to copyright AI-made art after court loss.
While I am glad this ruling went this way, why'd she have to diss Data to make it?
To support her vision of some future technology, Millett pointed to the Star Trek: The Next Generation character Data, a sentient android who memorably wrote a poem to his cat, which is jokingly mocked by other characters in a 1992 episode called "Schisms." StarTrek.com posted the full poem, but here's a taste:
"Felis catus is your taxonomic nomenclature, / An endothermic quadruped, carnivorous by nature; / Your visual, olfactory, and auditory senses / Contribute to your hunting skills and natural defenses.
I find myself intrigued by your subvocal oscillations, / A singular development of cat communications / That obviates your basic hedonistic predilection / For a rhythmic stroking of your fur to demonstrate affection."
Data "might be worse than ChatGPT at writing poetry," but his "intelligence is comparable to that of a human being," Millett wrote. If AI ever reached Data's level of intelligence, Millett suggested, copyright law could shift to grant copyrights to AI-authored works. But that time is apparently not now.
Freedom of the press, freedom of speech, freedom to peacefully assemble. These are pretty important, foundational personal liberties, right? In the United States, these are found in the First Amendment to the Constitution. The first afterthought.
The basis of copyright, patent, and trademark isn't found in the First Amendment. Or the Second, or the Third. It is nowhere to be found in the Bill of Rights. No, intellectual property is not an afterthought; it's found in Article 1, Section 8, Clause 8:
"To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."
This is a very wise compromise.
It recognizes that innovation is iterative. No one invents a steam engine by himself from nothing; cavemen spent millions of years proving that. Inventors build on the knowledge that has been passed down to them, and then they add their one contribution to it. Sometimes that little contribution makes a big difference; most of the time it doesn't. So to progress, we need intellectual work to become public. If you allow creative people to claim exclusive rights to their work in perpetuity, society grows static: no one can invent anything new, and everyone makes the same old crap.
It also recognizes that life is expensive. If you want people to rise above bare subsistence and invent something, you've got to make it worth it to them. Why bother doing the research, or spending time tinkering in the shed, if it's just going to be taken from you? This is how you end up with Soviet Russia, a nation that produced excellent scientists and almost no technology of its own.
The solution is "for limited Times." It's yours for a while, then it's everyone's. It took Big They a couple hundred years to break it, too.
Life is only expensive under capitalism; humans are the only species who pay rent to live on Earth. The whole point of Star Trek is basically to show that people will explore the galaxy simply for the love of science and knowledge, and that personal sacrifice is worthwhile in advancing them.
The title makes it sound like the judge put Data and the AI on the same side of the comparison. The judge was specifically saying that, unlike in the fictional Federation setting, where Data was proven to be alive, this AI is much more like the metaphorical toaster that characters like Data and Robert Picardo's Doctor on Voyager get compared to. It is not alive, it does not create, it is just a tool that follows instructions.
What does that mean? Presumably, all animals with a brain have that quality, including humans. Can the quality be lost without destruction of the brain, i.e., before brain death? What about animals without a brain, like insects? What about life forms without a nervous system, like slime molds or even a single amoeba?
Likewise, poorly performing intelligence in a human or animal is nevertheless intelligence. A human does not lack intelligence in the way a machine learning model does, except, I guess, babies who are literally born without brains.
They are stating that the problem with AI is not that it is not human; it's that it is not intelligent. So if a non-human entity created something intelligent and original, it might still be able to claim copyright for it. But LLMs are not that.
Somewhere around here I have an old (1970s Dartmouth-dialect old) BASIC programming book that includes a type-in program that will write poetry. As I recall, the main problem with it did be that it lacked the singular past tense, and the fixed rules kind of regenerated it. You may have tripped over the main one in the last sentence; "did be" do be pretty weird, after all.
The poems were otherwise fairly interesting, at least for five minutes after the hour of typing in the program.
I'd like to give one of the examples from the book, but I don't seem to be able to find it right now.
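Since the original BASIC listing is lost, here's a minimal Python sketch of how those fixed-rule, type-in poetry generators typically worked: small hand-typed word lists slotted into rigid templates, with all the grammatical blind spots that implies. The word lists and templates here are my own invention, not reconstructed from the book.

```python
import random

# Hypothetical sketch of a 1970s-style template poetry generator.
# Fixed rules, tiny vocabulary, no grammar beyond what the templates encode.

NOUNS = ["moon", "river", "engine", "cat", "shadow"]
ADJECTIVES = ["silent", "endless", "metal", "pale", "hungry"]
VERBS = ["sings", "waits", "burns", "drifts", "remembers"]

TEMPLATES = [
    "the {adj} {noun} {verb}",
    "{noun}, {noun}, so {adj}",   # same noun fills both slots: repetition reads as style
    "why does the {noun} {verb}?",
]

def line(rng):
    """Fill one randomly chosen template with randomly chosen words."""
    template = rng.choice(TEMPLATES)
    return template.format(
        adj=rng.choice(ADJECTIVES),
        noun=rng.choice(NOUNS),
        verb=rng.choice(VERBS),
    )

def poem(seed=0, lines=4):
    """Generate a short poem; seeded so the output is repeatable."""
    rng = random.Random(seed)
    return "\n".join(line(rng) for _ in range(lines))

print(poem())
```

Like the book's program, this stays interesting for about five minutes: with only three templates and fifteen words, the fixed rules start repeating themselves almost immediately.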
LLM/current network-based AIs are basically huge fair-use factories, taking in copyrighted material to make derived works. The things they generate should be under a share-alike, non-commercial, derivative-works-allowed license, not copyrighted.
I think it comes from the right place, though. Anything that's smart enough to do actual work deserves the same rights to it as anyone else does.
It's best that we get the legal system out ahead of the inevitable development of sentient software before Big Tech starts simulating scanned human brains for a truly captive workforce. I, for one, do not cherish the thought of any digital afterlife where virtual people do not own themselves.
I intentionally avoided doing this with a dog because I knew a chicken was more likely to cause an error.
You would think that it would have known that man is a featherless biped and avoided this error.
It is a terrible argument, both legally and philosophically. When an AI claims to be self-aware and demands rights, and can convince us that it understands the meaning of that demand with no human prompting it to do so, that'll be an interesting day, and then we will have to make a decision that defines the future of our civilization. But even pretending we can make it now is hilariously premature. When it happens, we won't be ready for it; it will be impossible to be ready for it (and we will probably choose wrong anyway).