Anyone spreading this misinformation and trying to gatekeep being an artist after the avant-garde movement doesn’t have an ounce of education in art history. Generative art, warts and all, is a vital new form of art that’s shaking things up, challenging preconceptions, and getting people angry - just like art should.
Oh this is just nonsense. This isn’t “gatekeeping being an artist”. You want to be an artist? Great! Learn some skills and make some art (you know, your own art, which you make yourself). And yes, I know “all art is derivative”. That is entirely beside the point.
Machine learning is a vacuum connected to a blender. It ingests information which it combines with statistical analyses and then predicts an output based on an algorithm generated from the statistical model. There is nothing “avant-garde” here because all it can do is regurgitate existing material which it has ingested. There’s no inspiration, it can’t make anything new, and it can only make any product by ripping off someone else’s work.
Sure, the style isn’t new, but you can make it work in new pieces that didn’t exist before, and you can merge art styles and combine concepts that haven’t been blended before. There have been many innovative kinds of art from generative AI: infinitely zooming pieces, beat-synced deformation of faces, working QR-code art pieces, mixed use of 3D modeling and ControlNet to make custom scenes - many things too detailed to be done by a human in a reasonable time.
We’re not talking about a “style”, we’re talking about producing finished work. The image generation models aren’t style guides, they output final images which are produced from the ingestion of other images as training data. The source material might be actual art (or not) but it is generally the product of a real person (because ML ingesting its own products is very much a garbage-in garbage-out system) who is typically not compensated for their work. So again, these generative ML models are ripoff systems, and nothing more. And no, typing in a prompt doesn’t count as innovation or creativity.
Generative AI is not only prompting, which shows you don’t know how it works. Who are you to decide what counts as creativity and innovation? Are you Mr Art?
Anyway, it is not ingesting images and photobashing them into a final picture; that’s not how it works. It has no memory of the training images. Instead, it learned to generate images by trial: whenever its output was similar to a training image, its parameters were nudged further in that direction. So it can create in the same style, but it doesn’t have the original images.
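The “trying and going more in that direction” idea above can be sketched as a toy loop. This is only an illustration, not a real diffusion model: the “images” here are made-up pairs of numbers, and the update rule is plain nudging toward examples.

```python
# Toy illustration (NOT a real diffusion model): a "model" of two numbers
# learns by repeatedly nudging its output toward training examples,
# without ever storing any example.
import random

random.seed(0)
training_images = [(0.2, 0.8), (0.4, 0.6), (0.3, 0.7)]  # stand-ins for images

params = [0.0, 0.0]  # the model's learned parameters (its "knowledge")
lr = 0.1             # learning rate: how far to nudge on each step

for _ in range(200):
    x, y = random.choice(training_images)
    # "Trying": the model's current guess is just its parameters.
    # "Going more in that direction": nudge the guess toward the example.
    params[0] += lr * (x - params[0])
    params[1] += lr * (y - params[1])

# The parameters end up as a blend of the examples (near their average),
# not an exact copy of any single training image.
print(params)
```

The point of the sketch: after training, `params` holds a statistical blend of what it saw, which is why the model can imitate the style of its training set without containing any of the original examples.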
I see, so your argument is that because the training data is not stored in the model in its original form, it doesn’t count as a copy, and therefore it doesn’t constitute intellectual property theft. I had never really understood what the justification for this point of view was, so thanks for that, it’s a bit clearer now. It’s still wrong, but at least it makes some kind of sense.
If the model “has no memory of training data images”, then what effect do the images have on the model? Why is the training data necessary, and what is its function?
I agree with what @barsoap@lemm.ee said here. My argument is the same as what you’ve already heard: since the model doesn’t take the original images but rather learns from them, it acts like a human who also learns from many different images, and it would make no sense to owe copyright to every artist a human has trained on. It’s true that a human artist also has personal experience that influences the art, while the neural network only has the art; however, the AI artist provides that personal experience. So imo you shouldn’t consider image generations plagiarism.
Though, I do agree that having people scrape your art to train a model on it is frustrating, even though that was already the case with people studying your art as part of their own personal experience. In the case of a model, the results are way more similar to the original art pieces. I haven’t made up my mind on the ethics of model training, but generating is not plagiarism in my opinion.
Anyway, my original stance was on generative AI being used as art, not on whether it is plagiarism. Generative AI brings a way to make full pictures with minimal effort, and some people generate hundreds of unoriginal, similar images. Imo, since it is easy to get a final image, the artistic effort is elsewhere: the composition, the originality of the subjects, the mixing of new techniques (regional prompting, LoRA, ControlNet, etc.), and the mixing with other tools (Photoshop, Blender, animation, etc.). You definitely can make art with generative AI, and it takes more time than it looks. (Look up a video on ComfyUI, SD.Next or InvokeAI to see examples of workflows.)
“the training data is not stored in the model in its original form”
It is not stored in the model, period. Same as you do not store the shape of the letters you’re reading right now, not even the words, but their overall meaning. Remembering the meaning of what I write here, you can then produce words and letters again and you might be close but even with this short paragraph you’ll find it very hard to make an exact replica. That’s because you did not store it in its original form, not even compressed, you re-encoded it using your own understanding of language, of the world, of everything.
Your comment made my day. Thanks.
Here’s a video explaining how diffusion models work, and this article by Kit Walsh, a senior staff attorney at the EFF.