The problem with image AI is that it essentially photobashes one set of people's content and then photomorphs it together. The AI has no actual sense of art fundamentals, nor can it yet tell 2D from 3D art. I have worked in AI for a long time, and it concerns me that people don't understand that AI is in no way like human intelligence; saying "machines learn like humans" as an excuse to steal is like a movie pirate saying hard drives just "remember" like humans. Pictorial AI is not unlike someone running liquify over other people's photos, paintings and so on: it really is just a contextual photobasher that iteratively morphs other people's content together.

That sounds like obfuscation to me. Why choose diffusion at all? GANs are better and faster, but diffusion hides content in a vector layer; it's like breaking pixel data into quanta to pretend it has no form inside the network, when it does. Diffusion is a black box of sorts, like compiling stolen programming code so nobody can tell where the code came from. It is perfectly possible to trace a diffusion model and find out how much of an original image is in the output, and when you do, you can clearly see that although you didn't know such art existed, it most certainly did. Some models are trained to "distort" the source so the output always looks like a certain model, which is why some AIs have a specific "look" to them, but the underlying network can be coaxed, and you can reverse the original content back out of the model even when the fit is quite loose. These AIs are good compressors because they remove redundant features across the training data: a circle, for example, only needs to exist as one vector even when it appears in billions of images, so it's simply false when people assume it's impossible to compress all that data down. In my youth I played games a few kilobytes in size that contained all kinds of interesting geometry, and GLSL (e.g. Shadertoy) exploits the same redundancies: you can program an anime drawing in a few lines of code, and changing a few variables will get you almost any anime drawing, although you'd still have to work hard to achieve that manually.

But an AI needs to copy, and a human needs to either copy or invent, to get the full range of these things. AI is not conscious, so it always relies on human time and therefore human work, which means it really is stealing effort. So rather than seeing this in terms of outdated pictorial copyright, it's better to think of how the quality of the output is the result of thousands of hours of artwork. Plenty of artists do start from their own head; it just takes a lot of effort to extract that, and then someone generates from their hard work and the original author typically gets zero recognition and zero compensation.

I think subcultural and cultural theft needs to be addressed too. White people generating black rap, say, are stealing culturally, not just from individuals; what they're doing to many cultural and subcultural backgrounds is essentially blackface, often for profit. In the image domain, China is also selling Korean-style, Afro-cultural, queer, furry, character and paracosmic expressions en masse. Sometimes the AI emits someone else's design or character 1:1 and the user tries to copyright it, or it effectively ends up in the public domain, because the AI user can post the stolen work online with no proof of authorship and sell it.
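To make the tracing point concrete, here is a minimal, purely illustrative sketch in Python. It ranks a folder of candidate source images by how closely they resemble a generated output, using a simple perceptual average hash rather than anything inside a diffusion model; the file names (`generated.png`, `candidate_sources`) are hypothetical placeholders.

```python
# Minimal sketch: rank candidate source images by perceptual similarity to an
# AI-generated output. This uses a simple average hash, not model internals --
# it only illustrates the idea that close matches to existing work can surface.
from pathlib import Path

import numpy as np
from PIL import Image


def average_hash(path: str, size: int = 16) -> np.ndarray:
    """Downscale to a tiny greyscale grid and threshold at the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()


def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two hashes (0 = near-identical layout)."""
    return int(np.count_nonzero(a != b))


def rank_candidates(generated: str, candidate_dir: str) -> list[tuple[str, int]]:
    """Return candidate images sorted from most to least similar to the output."""
    gen_hash = average_hash(generated)
    scores = []
    for candidate in Path(candidate_dir).glob("*.png"):
        scores.append((candidate.name, hamming(gen_hash, average_hash(str(candidate)))))
    return sorted(scores, key=lambda item: item[1])


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    for name, distance in rank_candidates("generated.png", "candidate_sources")[:5]:
        print(f"{name}: {distance} bits differ")
```

A real attribution study would use learned feature embeddings rather than a 16x16 hash, but even a crude distance like this makes near-duplicates of existing work stand out.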
I think reducing design to mere "style" makes no sense either, because style is generalised, while the AI copies very intentional, functional design as well, all while being incapable of following even basic instructions such as "blue circle to the left of a red triangle". So diffusion or a GAN is in no way a conscious tool but a photobashing tool, and it bears no relation to an LLM even when the two are stacked multimodally. Telling a human plagiarist whom to steal from and in what order is just as bad as telling a plagiaristic AI model the same thing.

When a digital artist draws in a program with a non-destructive history, the list of iterative actions runs into the thousands or even millions, depending on the level of the work. AI tends to focus on, and steal from, exactly those works with millions of iterative actions behind them. So it's unfair that someone can augment an AI with drawings that took only "some" iterations and still harvest those complex works, because AI users typically have history lists of a couple to a hundred actions. An AI user could put in more actions, but then they're fighting the AI while every other AI user is generating hundreds of blatantly stolen images per hour.

I recognise the source art in a lot of AI images, but there are so many artists it's hard to pinpoint them all. Admittedly some artists look alike, but many of those were design plagiarists themselves to begin with; the real victims are the artists who were actually original and put their heart and soul into their work. Human plagiarism was already ruining art quality in the illustrative space, and AI has now multiplied it. So plagiarism is the issue in general: even when human plagiarists produce their own artwork, many are stealing design work, and most human plagiarists are the ones now using AI. I would therefore call it labour theft, or iterative-action theft, in terms of the quality being stolen. If you looked at just the actions an AI user went through, it would look nothing like art, and most AI users don't even prompt or use visual inputs; they just let the AI generate the whole thing and spam stock and art sites. Some feed crappy, child-like drawings into models to "look up" the closest image. One guy drew with a live AI drawing tool and it produced a character from an Indian animation; he had no clue, thought the work was now his, and almost copyrighted it after making money from it. Meanwhile, the South Indian animator died in poverty a few years ago.

So what do you think of drawing-input image augmentation AI? It is pretty much reverse-image search with control-image capabilities (a rough lookup sketch follows below), and modern copyright seems far too permissive of permutations in this regard. Taking artistic images and then classing that stolen content as suddenly public domain sounds like piracy to me. If this becomes normalised, I'll make sure everyone pirates back, because why should we pay corporations for content they're now stealing from the working class, while also destroying the jobs that, for example, disabled people in poverty need to survive? And surely, if it's wrong to use stolen assets in a game engine to make a game, why would it be okay to train a model on those assets to make assets for a game? In both cases you're using the assets unlicensed.
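To illustrate what I mean by "reverse-image search with a control image", here is a rough, assumption-laden sketch: it compares the edge map of an input drawing against the edge maps of images in a hypothetical `artwork_library` folder and returns the closest existing artwork. A real control-image model is far more sophisticated than a simple edge filter, but the lookup flavour is the point.

```python
# Rough sketch: treat "drawing-input image augmentation" as a nearest-image lookup.
# A rough line drawing is compared against edge maps of existing artworks, and the
# closest artwork is returned -- a crude stand-in for control-image retrieval.
from pathlib import Path

import numpy as np
from PIL import Image, ImageFilter


def edge_map(path: str, size: int = 64) -> np.ndarray:
    """Greyscale edge map, downscaled and normalised to [0, 1]."""
    img = Image.open(path).convert("L").resize((size, size))
    edges = img.filter(ImageFilter.FIND_EDGES)
    return np.asarray(edges, dtype=np.float32) / 255.0


def closest_artwork(sketch_path: str, library_dir: str) -> tuple[str, float]:
    """Return the library image whose edges best match the input drawing."""
    sketch = edge_map(sketch_path)
    best_name, best_score = "", float("inf")
    for artwork in Path(library_dir).glob("*.png"):
        diff = float(np.mean(np.abs(sketch - edge_map(str(artwork)))))
        if diff < best_score:
            best_name, best_score = artwork.name, diff
    return best_name, best_score


if __name__ == "__main__":
    # Hypothetical file names, purely for illustration.
    name, score = closest_artwork("rough_drawing.png", "artwork_library")
    print(f"closest existing artwork: {name} (mean edge difference {score:.3f})")
```

The point of the sketch is the shape of the pipeline: a low-effort input is used only to select and reshape pre-existing work, which is why I compare it to reverse-image search rather than to drawing.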
@wondrospodcast · 8 months ago
The concerns you've raised about image AI technologies like GANs and diffusion models are significant and touch on deep ethical, legal, and creative aspects of AI development and use. Firstly, the essence of these technologies involves blending and transforming existing images to create new content. This process, as you've noted, does not engage with art in the traditional sense: it lacks an understanding of fundamental artistic principles and dimensions, such as the distinction between 2D and 3D. While these tools can produce visually striking outputs, they do so by algorithmically reconfiguring data without a genuine 'creative' impulse, which differentiates them fundamentally from human artistic creation.

Moreover, the technical mechanisms underlying these AI models, particularly diffusion techniques, involve encoding and manipulating images at a level that can obscure the origins of the data. This obfuscation raises legitimate concerns about copyright and intellectual property, as the models do not inherently respect or acknowledge the originality of source materials. The possibility of tracing back to the original images in diffusion models does exist but is not commonly implemented, leading to potential misuse and misattribution of artistic content. This not only affects the rights of the original creators but also blurs the lines of artistic authenticity and ownership.

Addressing your point on cultural appropriation and theft, the scenario becomes even more complex. The capability of AI to generate content that crosses cultural and stylistic boundaries without understanding or respecting their origins or significance indeed mirrors issues of cultural appropriation in human creative fields. This misuse can perpetuate stereotypes, misrepresent cultures, and erase the meaningful contexts that define different artistic expressions. The repercussions are not limited to individual artists but can affect entire communities, leading to cultural homogenization and loss of diversity in artistic expression.

Lastly, the ethical implications of AI in artistic creation challenge us to rethink the framework of copyright and intellectual property laws. Current standards may not sufficiently address the nuanced ways AI uses and transforms human-generated content. As AI technologies become more integrated into creative industries, there is a pressing need for legal and ethical frameworks that recognize the contributions of original artists and ensure fair compensation and recognition. This involves not only protecting the rights of artists but also critically assessing the roles and responsibilities of those who develop and deploy AI technologies in creative domains. The dialogue around these issues must evolve to better reflect the complexities introduced by AI and to safeguard the integrity and sustainability of artistic creation in the digital age. Thank you for commenting and please watch some of the other things. -Jesse