Community
General Tabletop Discussion
*Dungeons & Dragons
Glory of the Giants' AI-Enhanced Art
<blockquote data-quote="Golroc" data-source="post: 9086497" data-attributes="member: 7042497"><p>Not entirely correct. The tricky thing about image generation AI is that, contrary to how it is often described in the media, there is no active trawling of external images, nor any internal storage of artistic (or other) imagery. The neural network has been trained to create images by removing noise. It generally hasn't been fed any image data at all. However, an adversarial AI, whose job is to guess whether a particular image is AI-generated or not, has been trained on real data.</p><p></p><p>The full training process of these paired AI systems is quite complicated, and I'm grossly simplifying things already. But essentially the image generator AI starts by randomly creating garbage images from text prompts. It keeps doing this until it can produce images which look like "real" art to the adversary AI. This is really difficult to achieve without ending up with an image generator that just creates the same image every time.</p><p></p><p>In reality, the datasets and AI systems involved are larger and more numerous. This is not to say that artists should just give up their rights and accept that AI can mimic their work. But the tricky part is that an AI may be perfectly capable of mimicking a specific artist without ever having been trained directly on any work by that artist. Therefore, training set inclusion is not a good criterion for whether something is a violation or not.</p><p></p><p>Instead, I would say that the output is what matters - as with humans, really. If an AI creates art that is clearly an imitation of an artist's work, that is a violation of the artist's rights. It doesn't matter which of the artist's works, or how many, were included in one or more parts of the chain - or even whether any were included at all! Eventually we will have AI which can imitate without ever having been trained on the thing it imitates.</p><p></p><p>But to return to your question - in order to enhance this artwork as shown by this artist, there is no collation of, or access to, the work of artists 2 to infinity. Certain parts of a neural network are triggered in order to perform image editing operations. The training of this network is so complex that it is impossible to say which "neurons" resulted from which sources - because the AI was likely trained using other AI systems, some of which are trained on general concepts and some on specific art.</p><p></p><p>There is no algorithm, just a neural network. I am staunchly in favor of protecting artists from the commercial and legal impact of AI systems, but the best way to do this is by focusing on output. There will be so much complexity, obfuscation, and emergent behavior that proving the inclusion of a work is not possible. And an artist shouldn't have to prove anything. If a work is derivative, it should require the consent of the artist (or whoever holds the rights to the art - which in my country is always the artist, but in some countries can be a corporation).</p><p></p><p>I believe AI companies should gain approval from, and compensate, artists whose work is used for training, but I also think artists should inform themselves about the technical aspects. An artist should be able to contribute work to training without accepting that derivative works will be created, because when used by talented and creative individuals, AI can create things that are novel.</p><p></p><p>I think Ilya Shkipin is an example of an artist showing the potential of AI as a tool for productivity and creativity. I am optimistic that AI will in the end help artists work as artists, and not as "human illustration robots" toiling away for very low wages on work that is under the tight creative control of others. It is something to be embraced - although sadly I think some corporations will fight it, as they want to exploit cheap human creative labor for as long as possible. They will not fight it for the sake of artists; they will do so to keep their position as gatekeepers in the creative industries, and to keep wages down. We see this in other industries and professions as well. Some business owners do not want workers to be empowered. It would be sad if image generation AI ended up being used only by AI "spamshops", driving down the wages of real artists. A real artist using AI can easily outcompete such companies - doing more, and doing it better.</p></blockquote><p></p>
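The adversarial setup the post describes - a generator producing samples until an adversary can no longer tell them from real ones - can be sketched in miniature. This is a toy illustration under heavy simplification, not the training code of any production model: the "images" are single numbers, the generator and discriminator are tiny affine/logistic models, and every name here (`sample_real`, `g_w`, `d_w`, and so on) is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_real(n):
    # "Real" data: a toy 1-D distribution standing in for real artwork.
    return rng.normal(4.0, 0.5, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: noise z -> sample (an affine map standing in for a deep net).
g_w, g_b = 1.0, 0.0
# Discriminator ("adversary"): logistic regression guessing real vs. generated.
d_w, d_b = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(size=64)
    fake = g_w * z + g_b          # early on, this is "garbage"
    real = sample_real(64)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w -= lr * (np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake))
    d_b -= lr * (np.mean(p_real - 1.0) + np.mean(p_fake))

    # Generator update: push D(fake) toward 1, i.e. fool the adversary.
    # Note the generator only ever receives gradients through the
    # discriminator; it never touches the real samples directly.
    p_fake = sigmoid(d_w * fake + d_b)
    grad_fake = (p_fake - 1.0) * d_w
    g_w -= lr * np.mean(grad_fake * z)
    g_b -= lr * np.mean(grad_fake)

samples = g_w * rng.normal(size=1000) + g_b
print(f"generated mean: {np.mean(samples):.2f}")  # should drift toward 4.0
```

Even in this toy, the generator's output drifts toward the real distribution purely by trying to fool the discriminator, which loosely illustrates the post's point that what a generator learns is not a stored copy of any particular training sample.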