Commons talk:AI-generated media/Archive 1
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Thoughts
This is decent but “scope” is vague and doesn’t solve the issue of “I like it” being the deciding factor. Dronebogus (talk) 03:35, 30 December 2022 (UTC)
- That sounds more like a problem with the DR procedure in general? Trade (talk) 03:37, 30 December 2022 (UTC)
- It seems like a problem of trying to figure out the scope of artwork in general. AI-generated media is by its nature largely artwork. You don't have AI-generated "photographs" because it's not a photograph. I could dump thousands of pages of an AI-generated book but no one would say the half-gibberish text is in-scope. However, do an art piece and people say that it is in theory useful no matter what it looks like. I'm curious how File:19th Century Riveboat.jpg will turn out. I can't even tell if this is just poor AI-generated work or was just a poor watercolor piece or what, but it has already survived one discussion precisely because it was listed as part of an AI-generated discussion. Ricky81682 (talk) 04:39, 30 December 2022 (UTC)
- I think obviously amateurish art should be deleted whether or not it’s AI— in the latter case it’s going to be mainly garbled images that don’t represent a discernible topic, like your typical artbreeder image, or stuff that represents something but is blatantly erroneous (i.e. those body-horrific pictures you can generate of anime girls with heads growing out of their heads, backwards arms, six boobs, too many/few fingers etc.) Dronebogus (talk) 07:09, 30 December 2022 (UTC)
- Though a few intentionally “bad” images could obviously be educational to illustrate those flaws Dronebogus (talk) 07:11, 30 December 2022 (UTC)
- I’d recommend the main scope-related criteria be: “no images without recognizable people, places, or things in them” and “no poorly made (amateur quality or lower) or obviously malformed images”. Dronebogus (talk) 07:19, 30 December 2022 (UTC)
- That sounds like "I like it" in more words. Ricky81682 (talk) 10:30, 30 December 2022 (UTC)
- I’m sorry but I don’t see how. Dronebogus (talk) 05:02, 31 December 2022 (UTC)
- What is "poorly made" if not "I like it"? Malformed is a bit odd when you have cubism-like pieces like File:Cubism Profile.png. You are basically limiting AI artwork to photorealistic imagery, which is fine, but I don't see it as the actual consensus. The problem comes from the more abstract AI-generated work, which I don't see resolving to anything other than whether or not the crowd likes it. The issue is really for images that aren't being used at this time. Ricky81682 (talk) 00:40, 5 January 2023 (UTC)
Purpose
What is the goal of this policy? We should have that clear before proceeding.
So far we have seen two groups of editors, supporting two different sets of goals:
- AI-generated content is welcome, but must observe the same rules for attribution and licensing, and particularly those around derivative works of copyright sources, as we have previously had.
- AI-generated content is haram, it must all be deleted, those who upload it must be banned.
I see little chance of compromise between these two positions. So we should decide first which we are seeking, because otherwise the detailed rules that emerge will be confused and unworkable. Andy Dingley (talk) 11:14, 30 December 2022 (UTC)
- Neither of these. AI images can be accepted, but seeing that Commons could be easily overflooded with out-of-scope images from AI, we should restrict these to cases when they are really useful. Yann (talk) 15:07, 30 December 2022 (UTC)
- We already have COM:SCOPE. We do not need to change that, or to create a COM:AI SCOPE.
- It has always been the case that "Commons could easily be flooded" with anything, be that phone selfies or a Pokemon namespace. Yet we have survived that, and we've done so by sticking to the principles of SCOPE, not by having to embroider around it. Andy Dingley (talk) 16:25, 30 December 2022 (UTC)
- Well, it was shown recently that some users like to upload out-of-scope AI images. IMO these files may have been in scope if they weren't made by AI. That's the difference. Yann (talk) 17:06, 30 December 2022 (UTC)
- No, it wasn't "shown recently"! There are still ongoing DRs about some AI generated images, and a wide range of random reasons are being given as to why they "must" be deleted (and the uploader indef banned). Despite a serious lack of policy-based reasons as to this, and SCOPE only being one of those claimed.
- Your logic here is circular: SCOPE is to be redefined by what we delete, and the deletions are necessary because they're outside SCOPE. That is not any way in which we can work: we have to go by SCOPE (and hopefully avoid continually using DRs to harass a user).
- You claim that "these files may have been in scope if they weren't made by AI", which is problematic. Firstly the most coherent DRs are those based on them being derivative work of recognisable artists (part of COM:DW, not SCOPE): that applies equally well whether they're AI generated or human. Secondly your claim is based on pre-supposing a simple statement in SCOPE (despite there not being one) "AI-generated files are outside SCOPE", just because they're AI-generated. We have no such policy. This page is discussing what we might do as such a policy, but there's no indication that a simple blanket ban will ever be part of this. Andy Dingley (talk) 18:27, 30 December 2022 (UTC)
- What can I do when you don't even recognize facts, i.e. images already deleted for being out of scope? Yann (talk) 19:17, 30 December 2022 (UTC)
- Can you be more specific as to which have been deleted for scope? I'm sure that something, somewhere has been, but this is far from a consensus that "AI is implicitly outside SCOPE" and given how many are still open it is very far from an established or clearly fixed consensus. At Commons:Deletion requests/Files uploaded by Benlisquare there are 40 of them, nominated as COM:DW. We've also seen a number already deleted as COM:CSD#F10 (a pretty disparaging implication to an uploader who's just been banned for "being disparaging" amongst other things). At Commons:Deletion requests/Files uploaded by David S. Soriano we have a very vague rationale of "scope", for files that weren't AI generated, and they were kept anyway. Andy Dingley (talk) 20:13, 30 December 2022 (UTC)
- @Andy Dingley: A cursory glance at Category:AI-generation related deletion requests/deleted would show you the first discussion of Commons:Deletion requests/Files uploaded by User:David S. Soriano, Commons:Deletion requests/File:Science Fiction Art 2022 Mars Attack Illustration.png which relates to being too close to a derivative work, Commons:Deletion requests/File:An Unusual Sphinx.jpg deleted as amateur artwork, and others. Wholesale "every AI-generated image uploaded by someone" discussions aren't going to work, which isn't that much of a surprise since the copyright isn't clear. I have a bigger issue with Soriano not identifying whether his uploads are AI or not, which is what I really want people to provide. If they are AI generated entirely and copyright law says that AI generated works are actually truly public domain, his GFDL licenses are entirely incorrect. I suspect if attribution wasn't required and someone else could remake the work without giving any credit to the original prompt creator, akin to en:Monkey selfie copyright dispute, there would be a lot less interest in uploading works here, especially if the ability to sell the works plummets to zero. -- Ricky81682 (talk) 08:57, 31 December 2022 (UTC)
- "A cursory glance": To DRs from a year ago, or even 2019, that haven't been mentioned anywhere in these threads?
- More importantly, that doesn't touch on SCOPE. DW is an issue of licensing for the sources, not the scope of the result.
- Commons:Deletion requests/Files uploaded by User:David S. Soriano is hardly a useful contribution to the debate as it's an unopposed (ignored!) nomination on the hopelessly broad rationale "No encyclopedic use." That doesn't further our debate here by setting any useful precedent. Also, it isn't even clear whether Soriano's work is AI or not: he seems to be a human artist who's using AI as a starting point or embedded tool, but then doing much of the work manually. If this has any educational scope (I'm undecided) it's on the basis of the technique being used and whether that makes it educationally interesting. WEBHOST etc. would otherwise apply.
- Copyright-based arguments are just not relevant to the issue of SCOPE. We have to worry about those too, but they're not SCOPE questions.
- As to the minutiae of licensing, you can of course still apply a licence to a PD work. This is Getty and Alamy's business model. Licensees didn't need to license it, they don't need to give you money for it, but it seems that some are happy to do so regardless. The monkey selfie is still a bad day for Wikimedia and should never be used to set new policy, but in that case Wikimedia took the line that "as a PD work" (we can't claim that) Commons was welcome to do what it liked with it! Andy Dingley (talk) 11:12, 31 December 2022 (UTC)
- Commons:Undeletion requests is still available if you want to argue for the restoration of those files. Otherwise, it's easy to find no purpose to having a policy when you ignore prior discussions on the issue. We have no idea how Soriano does his work which is precisely why I support a policy that mandates some details from people. Commons:Project scope/Evidence holds that the uploader provide the evidence rather than leave us guessing what he's doing here. Ricky81682 (talk) 12:04, 31 December 2022 (UTC)
- I have not mentioned undeleting anything. Please try to stay on point (which is SCOPE applied to AI). I don't know what you mean by "it's easy to find no purpose" when the issue is that these are not prior discussions on the issue, they're on a different issue. So far we have discussion on DW (i.e. licensing, not scope), a question on whether Soriano is even relevant to this discussion or not, and a very bland mention of "no encyclopedic use" (which isn't even SCOPE, SCOPE is slightly broader) that had no discussion. Commons:Project scope/Evidence is also about licensing, not COM:SCOPE. Andy Dingley (talk) 12:35, 31 December 2022 (UTC)
- Just because you have strawmanned everyone who disagrees with you into a simple position doesn't mean it's true. There is no requirement to work on this proposal. If you want to propose a separate one, that's also possible. This may even end up irrelevant if there is no support for it as a policy in the end. -- Ricky81682 (talk) 08:57, 31 December 2022 (UTC)
- Unsurprisingly, I remain very interested in any proposal that has the potential to start indef banning uploaders for Islamophobia. Please don't tell me that's a "straw man", when we already have threads like Commons:Administrators' noticeboard/User problems#User:Benlisquare for cross-wiki harassment, Islamophobia, and other policy violations Andy Dingley (talk) 11:12, 31 December 2022 (UTC)
- This proposal won't go far if it doesn't at least reflect current consensus. Can you point me to any language in the proposal related to Islamophobia? Ricky81682 (talk) 11:59, 31 December 2022 (UTC)
- The proposal is so far free of that, but we have AN/U threads active on indef bans, how to extend behavioural blocks from WP to Commons (Commons doesn't do that, so why are we even discussing it), and citing "hot button" topics like Islamophobia and child pornography as justifications for banning Commons users active with AI images. This is a problem. Andy Dingley (talk) 12:35, 31 December 2022 (UTC)
- The entire discussion derailed from one user's absolute insistence on using hot button prompts for their choice of AI images and even then the uploader has been unblocked. Their images are likely to be kept and the ones showing AI techniques on buildings are not offensive and no one cares. If you want to put a warning not to use prompts that are offensive to people because people will look at the basis for your images separate from what the images contain (aka the "this teenage-looking character is actually hundreds of years old in the story so it's not icky" argument), that may be helpful. Ricky81682 (talk) 00:54, 5 January 2023 (UTC)
- The point of this proposal should be outlining the copyright status of AI imagery, and clarifying that AI images have roughly the same criteria as any other image— i.e. passable quality (unless the point is illustrating bad quality unique to AI) and educational usefulness. It seems fairly simple and it should be kept that way. Dronebogus (talk) 11:17, 31 December 2022 (UTC)
- (By “unique bad quality” I’m talking about the aforementioned “double jointed arms, eight fingers, two heads and six breasts” type thing) Dronebogus (talk) 11:19, 31 December 2022 (UTC)
- I think we should have a relatively high bar for quality when it comes to AI-generated images as it's easy (if you bother to spend half an hour learning how to use negative prompts) to generate images without any obvious AI artifacts. This is similar to how we don't accept (most) blurry photos since it's easy to take a non-blurry photo. Nosferattus (talk) 16:12, 31 December 2022 (UTC)
- Maybe we should have a template for AI images with obvious artifacts? The same as we do with blurry photos Trade (talk) 20:12, 31 December 2022 (UTC)
- @Nosferattus Taking a non-blurry photo in an ordinary environment with a modern camera literally requires no skill at all: just point the camera at the subject and press the shutter button.
- Generating images without obvious AI artifacts is, according to you, ‘easy (if you bother to spend half an hour learning how to use negative prompts)’. The phrase ‘bother to’ implies that anyone can do it, if they just put in the effort. I doubt that is true. And even for the people who can learn this skill, it won’t always be that fast or easy. Brianjd (talk) 06:00, 21 January 2023 (UTC)
- @Andy Dingley It’s our job to make sure the first group wins. AI is as notable as anything gets: it’s transforming every part of human existence. To arbitrarily exclude it from an ‘educational’ media repository will destroy Commons’ credibility. Brianjd (talk) 11:34, 20 January 2023 (UTC)
Colorized photographs, former b/w
Hi, would there be a place in this policy for (old) black and white photographs that have been colorized? If I am correct (and please correct me if I am wrong!), this topic is not yet covered in other policies and may fit the topic. Ciell (talk) 11:24, 7 January 2023 (UTC)
- IMHO, no, because the authorship / credit / IP rights situation would be so different. Although that's another issue we ought to be thinking about. Andy Dingley (talk) 12:58, 7 January 2023 (UTC)
- If we do this, we would want to distinguish hand-colorization from the period of the photograph, later hand-colorization, and modern computer-assisted colorization. Possibly even other relevant distinctions. - Jmabel ! talk 16:46, 7 January 2023 (UTC)
Deepfakes
@King of Hearts: Your recent edit claims that COM:CSD#G3 should not be used because the type of media covered here is not clearly defined. If that’s true, then we can’t make bold claims about it violating the WMF Terms of Use either. Further editing (and maybe further discussion) is required. Brianjd (talk) 11:13, 20 January 2023 (UTC)
- This section references Commons:Photographs of identifiable people. There is currently a proposal to rename ‘Photographs of identifiable people’ to ‘Photos and videos of people’. Brianjd (talk) 11:21, 20 January 2023 (UTC)
- @Brianjd: The part about it violating the WMF Terms of Use is straightforward and shouldn't be controversial. The Terms of Use states that users may not engage in "harassment, threats, stalking, spamming, or vandalism" or "intentionally or knowingly posting content that constitutes libel or defamation" or "with the intent to deceive, posting content that is false or inaccurate". Deepfakes would usually violate at least 2 of those rules, and often all 3. Nosferattus (talk) 20:05, 21 January 2023 (UTC)
- @King of Hearts and Nosferattus: Can you give an example of something that violates the relevant part of the Terms of Use, but does not fall under COM:CSD#G3, or vice versa? Brianjd (talk) 01:29, 22 January 2023 (UTC)
- When alleged harassment is concerned, it is often not a clear-cut case, and different people might have different opinions on it. So instead of speedy deletion, such claims should be discussed at DR. -- King of ♥ ♦ ♣ ♠ 04:12, 22 January 2023 (UTC)
- @King of Hearts Then we are back to my original point: this uncertainty rules out COM:CSD#G3 and it also rules out bold claims about such material violating the Terms of Use. Brianjd (talk) 04:14, 22 January 2023 (UTC)
- I just did a quick update to the page to reflect this. Other users are encouraged to aggressively edit it to improve its wording etc. Brianjd (talk) 04:18, 22 January 2023 (UTC)
A solution in search of a problem?
The proposal currently says: AI generated media present unique challenges for licensing, attribution, and scope evaluation.
What ‘unique challenges’? To quote a comment above:
We already have COM:SCOPE. We do not need to change that, or to create a COM:AI SCOPE.
It has always been the case that "Commons could easily be flooded" with anything, be that phone selfies or a a Pokemon namespace. Yet we have survived that, and we've done so by sticking to the principles of SCOPE, not by having to embroider around it.
— User:Andy Dingley 16:25, 31 December 2022 (UTC)
The page should start with an explanation of what these ‘unique challenges’ actually are, then state a solution. Brianjd (talk) 11:48, 20 January 2023 (UTC)
- Agree— though maybe it should start by stating there are no major differences, and then move on to the metaphorical “meat and potatoes”. Dronebogus (talk) 14:09, 20 January 2023 (UTC)
- I disagree. There are no differences. SCOPE stays the same, with "educational, according to its broad meaning of 'providing knowledge; instructional or informative'". We might need to clarify interpretation of that, but we don't change it, even in the slightest. Andy Dingley (talk) 20:11, 20 January 2023 (UTC)
- I don't think it's a change of scope, but I still think a guideline is in order. There seem to be a minority of people who believe that essentially anything they can produce this way should be in scope because it was generated this way. - Jmabel ! talk 20:29, 20 January 2023 (UTC)
- That is pretty much how we tend to treat hand drawn illustrations of various NSFW subjects Trade (talk) 03:28, 21 January 2023 (UTC)
- @Jmabel Guideline or policy? Commons:Nudity is a policy. Brianjd (talk) 05:40, 21 January 2023 (UTC)
- @Brianjd: I don't really care which. - Jmabel ! talk 16:55, 21 January 2023 (UTC)
- While we struggle to work out what is going on here, the user who brought us this page and its claim of ‘unique challenges’ (Nosferattus) seems to have gone missing. They are still active on Commons; hopefully they will come back to discuss here. Brianjd (talk) 05:42, 21 January 2023 (UTC)
- For the record, I also dispute that AI has unique challenges for attribution. Commons already does a poor job of attributing anything that involves multiple authors or works, with every user apparently hacking together their own solution in the information template and/or licensing section (or just ignoring the whole issue, especially when overwriting existing files). (That seems a bit strange for a project that keeps pushing users towards ‘structured data’.) Brianjd (talk) 05:47, 21 January 2023 (UTC)
- @Brianjd: The unique challenge is that AI images should be attributed to non-humans, which is unintuitive for most people. Most people will just attribute to the person who prompted the AI. If you don't like the wording, remove it or rewrite it. Changes are welcome. Nosferattus (talk) 17:02, 21 January 2023 (UTC)
- I'm just discovering this page now, sorry for barging in. I think I have the solution: com:NOTUSED needs a slight clarification about what is "realistically useful". We need to also account for how much time it would take to redo the image once we actually NEED it. A real "long lost black and white photo of a sarcophagus" licensed on Commons is valuable even if not immediately used in any Wikipedia article. But a fictive "cute (red panda:.1) ears waifu in the style of studio ghibli" is 30 seconds away from being generated at any time with SD. And that's what pushes most of the AI-generated stuff out of COM:SCOPE. The technology will only become more ubiquitous and accessible from now on, and I don't understand why we should store in advance images that COULD be useful. We should encourage people to go write that article first and then upload their images. Iluvalar (talk) 19:49, 21 January 2023 (UTC)
- @Iluvalar Cameras are ‘ubiquitous and accessible’ (much more so than AI), yet we store unused photos that could theoretically be re-created. Brianjd (talk) 01:32, 22 January 2023 (UTC)
- Also, creating and curating images and writing articles are different skills possessed by different people; that will remain true as AI becomes more accessible (at least until AI takes over the whole thing). Brianjd (talk) 01:45, 22 January 2023 (UTC)
- Agree, potentially useful images shouldn’t be held to a higher standard because “AI is cheap” or whatever. We have tons and tons of generic, middling-quality photographs that are kept because they are in scope. Dronebogus (talk) 12:34, 22 January 2023 (UTC)
- The analogy doesn't hold here, because photos are (generally) of real things, which de facto gives them much more weight than a fictive image under com:scope. And I'm not a frequent contributor here, but I'm pretty sure that even photos are heavily curated. Your argument about skill also doesn't hold: there are many existing articles which were already written by a skilled writer and require YOUR skill to illustrate. That's what com:scope is. Iluvalar (talk) 17:39, 22 January 2023 (UTC)
- Attributing images to non-humans might be a legitimate issue, though still not specific to AI. Does anyone else have any thoughts? Brianjd (talk) 01:41, 22 January 2023 (UTC)
- Dog selfie image: License tag removed by another user, but Flickr review remains in place. Brianjd (talk) 14:37, 24 January 2023 (UTC)
Fan art
This section says that Images that depict characters from or are based on proprietary works such as movies, TV shows, computer games, comic books or manga/anime may be derivative work, which is copyright infringement.
COM:FANART has the example of "a drawing of a boy with black hair and glasses, with a zig-zag scar on his forehead" being allowed on Commons as an illustration of Harry Potter "provided it does not copy the specific realizations of the book cover illustrations, the movies, or the computer games"; it also allows a drawing of "Daniel Radcliffe in a wizard hat and robes" if it is "a wholly new and original drawing of the actor which is not copied in any way from an existing copyright work".
DALL-E output for an actual prompt of "Daniel Radcliffe in a wizard hat and robes" would presumably be disallowed at Commons because it will almost certainly have used unspecified copyrighted images of the actor, and may be closely derived from a particular photo. DALL-E output for "Harry Potter" likewise. But would a DALL-E image from a prompt of "a drawing of a boy with black hair and glasses, with a zig-zag scar on his forehead" (or a more descriptive version that falls short of ever naming the character or actor) be considered derivative? Is there a concern that the AI may end up drawing on copyrighted Harry Potter imagery if asked to depict a boy wizard with a particular style of scar? Belbury (talk) 10:04, 23 January 2023 (UTC)
- @Belbury Is there any particular concern with DALL-E, or is this question about AI in general? Brianjd (talk) 10:27, 23 January 2023 (UTC)
- AI in general, just DALL-E as an example. I'm wondering if we can pick apart "may be derivative work" a bit more, and whether it would apply to someone asking an AI to draw a character without actually naming that character. I'm here off the back of Commons:Deletion requests/File:A golden skin lady wearing ornamental heads and golden ornaments on her body standing in a lake.jpg, which I took to be inappropriate fanart but which may actually be within policy. Belbury (talk) 10:52, 23 January 2023 (UTC)
- @Belbury There’s a comment at Commons:Deletion requests/Files uploaded by Benlisquare suggesting that it might be OK even if the character is named. But that is for ‘clip art’; I don’t know if that makes a difference. Brianjd (talk) 11:09, 23 January 2023 (UTC)
- Note that all of the examples of proprietary works given are visual, or have a visual component. If you do your own illustration for a text, influenced only by the text, that is usually fine; the same would presumably apply to AI, though so far, to my knowledge, it has not been tested in court. If you are relying on copyrighted illustrations for your imagery, then you probably have at least a potential copyvio on your hands, and the precautionary principle would suggest we should steer clear of it. - Jmabel ! talk 16:14, 23 January 2023 (UTC)
Edit warring over DR examples
@Yann and Trade: Please discuss here instead of edit warring. Brianjd (talk) 10:37, 25 January 2023 (UTC)
- These are typical examples of images with issues, and we don't have so many examples right now, so it is useful to list them. Yann (talk) 12:11, 25 January 2023 (UTC)
- The examples are completely generic and barely show any discussion or insight on the topic Trade (talk) 12:56, 25 January 2023 (UTC)
- Well, they are still useful as examples of files we do not want. If and when we have examples with more discussion, we could replace them (however this may never happen). Yann (talk) 13:24, 25 January 2023 (UTC)
- The disputed examples are:
- I agree with Trade; we don’t need generic ‘personal artwork’ nominations here. They are completely useless as examples for users who cannot see the actual files.
- Commons:Deletion requests/File:Algorithmically-generated portrait art of a young woman with long purple hair.png is not good either. It just talks about ‘bad taste’, without explanation; there is nothing in the filename that suggests bad taste. In fact, the linked Commons:Deletion requests/Algorithmically generated AI artwork in specific styles by User:Benlisquare suggests it might be in scope for its detailed description. It also refers to an unidentified talk page discussion. That would be two comments buried in the middle of en:User talk:Benlisquare#December 2022:
- (responding to ping) This is an AI image you made and uploaded to Commons, in your own words, generated with txt2img using the following prompts: Prompt: attractive young girl, large breasts.... Goodbye. Levivich (talk) 21:47, 23 December 2022 (UTC)
- Yes, and I agree this image was created in bad taste. However, I have not plastered it all over any Wikipedia articles. As a sign of good faith, I would have nommed it for speedy deletion out of compliance with the community, but I won't be able to do so due to the indef. --benlisquareT•C•E 21:56, 24 December 2022 (UTC)
- These comments must be read in the context of enormous controversy, on both projects, regarding that user’s uploads and the way that user was treated in response.
- There are two lessons here:
- Remember that, once a file is deleted, non-admins can no longer see the file or its description and may not be able to see its usage. Ensure that deletion discussions include the required context, especially if they are to be used as precedents.
- Be careful which deletion requests you list as examples.
- Brianjd (talk) 13:44, 25 January 2023 (UTC)
- That leaves Commons:Deletion requests/File:Algorithmically-generated portrait art of a young woman with long blonde hair.png (where I left a long comment that was never addressed) and Commons:Deletion requests/Files uploaded by 冷床系 (with some very questionable non-policy-based comments that don’t address the uploader’s arguments). Brianjd (talk) 13:52, 25 January 2023 (UTC)
- In #Purpose, Yann said: AI images can be accepted, but seeing that Commons could be easily overflooded with out-of-scope images from AI, we should restrict these to cases when they are really useful. Please point to an example of an AI image that can be accepted.
- Wait, I just realised that’s not the point of that section. It’s called Out of scope concerns with AI images (emphasis added). No wonder 100% of the deletion requests listed as examples were closed as ‘delete’.
- Let’s rename it to Scope concerns and find a more balanced list of examples. Brianjd (talk) 13:57, 25 January 2023 (UTC)
- The copyright section is just as bad: every example was closed as ‘delete’. We need more balance there too. Brianjd (talk) 13:57, 25 January 2023 (UTC)
- What would be balance? I added Commons:Deletion requests/File:A curly-haired indian woman with iridescent violet eyes holds a hand to her face.png for scope because it was a keep discussion. The issue seemingly was that we don't have "any other images of Indian women with curly hair in this style", which I find ridiculous, but whatever, that was the consensus. What example would you have of a copyright issue where people kept it anyway? -- Ricky81682 (talk) 06:24, 11 February 2023 (UTC)
- I don't think either of those are good examples. First, in neither case was it clear that the image was AI-generated at all. I added Commons:Deletion requests/Files uploaded by IAskWhatIsTrue as an attribution example. The debate arose because the uploader kept fighting about whether "they" created it or an AI created it. If they did, they needed to follow VRT. In either case, the failure to make clear how the artwork was created was the issue in the debate. -- Ricky81682 (talk) 06:24, 11 February 2023 (UTC)
Metadata & reproducibility
Best practice for an educationally useful AI-generated image would probably involve including information on how exactly the image was generated. Which algorithm was used, on which set of training data, which query, etc. Can/should we standardize that somehow? El Grafo (talk) 10:59, 6 February 2023 (UTC)
- Agreed. Basic information like prompt, etc., should be mandatory. Yann (talk) 12:33, 6 February 2023 (UTC)
- I'm afraid mandatory might be a bit too much to ask, especially for imported files ... El Grafo (talk) 08:11, 7 February 2023 (UTC)
- I agree that "mandatory" goes too far. Ziko van Dijk (talk) 20:16, 8 February 2023 (UTC)
- It should be mandatory at least for images that the uploader claims as their own work, e.g. in order to enable the community to better evaluate that claim. That would address the issue mentioned by El Grafo. (Apropos, Ziko, is there a particular reason why you consistently withheld the prompt for the images at Category:AI images created by Ziko van Dijk?)
- Regards, HaeB (talk) 07:39, 9 February 2023 (UTC)
- Mandatory for own work might work. On the other hand, we don't require source code for things like Category:Gnuplot_diagrams either, although it would make a lot of sense to do so. El Grafo (talk) 08:59, 9 February 2023 (UTC)
"Special case: Deepfakes"
A deepfake is literally just any AI-generated image in which a person has been replaced with someone else's likeness. Does it really need its own policy? Trade (talk) 19:18, 10 February 2023 (UTC)
- I think it does. They are almost always unacceptable. - Jmabel ! talk 23:05, 10 February 2023 (UTC)
- There is nothing in Commons:Photographs of identifiable people to suggest that one type of AI image generation technique is worse than the others.
- The section either needs to be renamed or reworked entirely Trade (talk) 21:40, 11 February 2023 (UTC)
Court cases
Hi, Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content in the UK. So are Kelly McKernan, Karla Ortiz, and Sarah Andersen, three artists from the USA. Yann (talk) 12:56, 23 January 2023 (UTC)
- Artists Protest After ArtStation Features AI-Generated Images Hmm. I wonder if Commons is going to face similar backlash for allowing AI art Trade (talk) 13:54, 23 January 2023 (UTC)
- There are issues with images from the protest being uploaded here. Commons:Deletion requests/File:AI protest image 4.jpg concerned an image based on an image from that protest, which is a step removed. Ricky81682 (talk) 23:44, 22 February 2023 (UTC)
Intentionality
I disagree with the statements about intentionality introduced in these edits by User:Gnom. Yes, historically courts have ruled that copyright infringement requires intent, but we have no idea what they will rule about apparently derivative works by non-human intelligences. I could readily imagine a legal argument that the failure of the creators of the AI to take precautions against it creating infringing works is the sort of negligence that will be viewed just as much of an infringement as if there were intent. We just don't know. I believe that in discussing the legal situation at this time we need to consider this a gray area. - Jmabel ! talk 16:51, 18 February 2023 (UTC)
- Erm, at least under European copyright law, copyright infringement does not require intent. In other words, it doesn't matter whether someone intentionally or unintentionally uploads a derivative work on Commons, we still have to take it down. Gnom (talk) 19:13, 18 February 2023 (UTC)
"This AI-generated image likely violates the copyright in Breakfast at Tiffany's"
The idea of displaying an AI-generated copyvio image to illustrate the concept of copyvio is a bit weird, isn't it? Trade (talk) 23:41, 18 February 2023 (UTC)
- Yes, but so far the image has not been deleted, for reasons unbeknownst to me. Gnom (talk) 00:02, 19 February 2023 (UTC)
- It’s a picture of Audrey Hepburn dressed like Holly Golightly. That’s not copyrightable, so this is a terrible example. I’m removing it and the other image because neither shows how it is supposedly infringing. It’d be better to use a fake or PD source to demonstrate the concept of derivative works. Obviously. Dronebogus (talk) 07:14, 19 February 2023 (UTC)
Problems with rewrite
@Gnom: The recent rewrite introduced some problems, IMO. Primarily, it makes some bold statements implying that most AI generated images are derivative works of their training images. This is an oversimplification of the situation. An AI-generated image can be extremely derivative of one or more training images or it can be relatively original. This depends on a huge range of variables, including what type of training is used, how large the training set is, whether an image prompt is used, whether an artist prompt is used, how much weight those respective prompts are assigned, how much "chaos" the model adds or is instructed to add, etc. I think the wording needs a lot more nuance and should basically say that "it depends" rather than suggesting that most AI-generated images are copyvios (aside from fan art).
Another problem with the rewrite is that it suggests that a person must give their consent in order to be depicted in an image without violating their privacy. In the United States at least, that's a pretty alien concept and I think we would have a hard time gaining consensus for it on Commons. Personally, I agree with the statement, but I think most people on Commons don't agree with it, and if we want this page to have any chance of being enacted as a guideline, its initial wording should be conservative.
Finally, the claim that "the creators of image-producing algorithms may hold (co-)authorship in the output of their software" is simply not true and is contradicted by pretty much every source on the subject. You'll have to present some strong source citations to convince me that it's correct. Nosferattus (talk) 08:26, 19 February 2023 (UTC)
- Although there are now some problems with the current version. It claims, "In the United States and most other jurisdictions, only works by human authors qualify for copyright protection." which is true, but not helpful. The point is that in some jurisdictions there is copyright applicable to computer-generated works. So we can't simply write this off as a blanket statement. There will almost certainly need to be country-specific guidance here, such as we have to do for FoP.
- Also the claim, "In 2022, the US Copyright Office confirmed that this means that AI-created artworks that lack human authorship are ineligible for copyright." keeps circulating. But this doesn't much apply to us either. Even if AI-creation doesn't engender a new copyright, there may still be an existing copyright involved (the DW situation). Most of the sensible discussion here on Commons has been concerned about that: a trained GAN AI system (with a large training dataset) appears likely to be judged for this quite differently to an automated system producing algorithmic or random art.
- See also the Facebook thread, where a WMF staffer confidently asserts "Images made by non-humans are public domain." Andy Dingley (talk) 11:13, 19 February 2023 (UTC)
- I don't see the quoted comment starting with "Images made by non-humans" in the Facebook thread. Maybe the comment is hidden to the public, edited or inaccurately quoted? / Okay, I see it in one of the collapsed comments, but the person who said it doesn't seem to be a WMF staffer but a student. (I prefer not to name the student.) whym (talk) 12:00, 19 February 2023 (UTC)
- @Nosferattus: Can we please continue iterating the page from where I last left off? I really put a lot of thought into this – and maybe you allow me to mention that I have extensive academic and professional knowledge in this specific field. Gnom (talk) 14:43, 19 February 2023 (UTC)
- OK, some specific points:
- I don't like the phrase "a certain likelihood". It's correct, but it's too likely to be read incorrectly as "a certainty", which is not what is meant. We don't need this phrase and can word it better. Even "likely" is clearer.
- "When using prompts intended to imitate the style of a particular artist, there is a high risk that the algorithm uses copyrighted works by that artist to create the desired output. " I don't much like the phrase "high risk". We don't care about that. Our concerns should begin at a lower level, that of a "credible risk".
- "it is likely that the [such] output constitutes a copyright infringement." This is true, but it's also ignoring any aspects of fair use or parody. Given the immature state of the law around AI generation, we don't know how those will be interpreted. Under US law (and in other sections we're taking a deliberately US-only view) these could be reasonable justification for AI use of them for training, without it then being seen as a violation. As none of us can cite an authoritative view on this as yet, we should describe the fuller situation.
- "making them "unintentionally derivative works"" What's an "unintentionally derivative work"? (as distinct from a simple derivative work?) This is quoted as if that's a specific and distinct term, but I can find no source, definition or different legal treatment for it.
- "Uploading AI-generated art on Commons may violate the rights of some algorithm providers" What is the basis for this? Under what jurisdictions would such rights be enforceable? (I know of none). There is a difference between a computer system providing some output, which is then the copyright material of the system's owner / operator (UK law at least recognises this concept) and the provider of an AI algorithm or system then owning rights to the copyright of any material produced by it in the future. That's a step further and I know of no jurisdiction that would recognise such.
- Andy Dingley (talk) 17:05, 19 February 2023 (UTC)
- Per Gnom's request, I restored the rewrite and instead edited it to address some of the concerns raised above. I still have strong concern about the privacy wording, but I'm not sure what it should say instead. I basically agree with Gnom's wording there, but I don't think it reflects consensus on Commons. Nosferattus (talk) 17:53, 19 February 2023 (UTC)
- Thank you! I will look into the various points and probably respond tomorrow. Gnom (talk) 18:04, 19 February 2023 (UTC)
- Regarding "Uploading AI-generated art on Commons may violate the rights of some algorithm providers", note that this unsubstantiated legal claim (which had been added together with the "However, most algorithm providers renounce all rights ..." statement that is quite clearly false, see below) has since been changed back by Nosferattus to refer to terms of use, as before the rewrite (current wording: Uploading AI-generated art on Commons may violate the terms of use of some AI providers). In any case, the following sentences clarify that the Commons community does not consider mere TOU violations (that are not also copyright violations) grounds for deletion, per COM:NCR. Regards, HaeB (talk) 16:12, 20 February 2023 (UTC)
- I agree with many of Nosferattus' concerns. In particular, Gnom has made several bold claims about various frequencies that I find highly dubious (based on what I have read so far in various sources that I consider fairly reliable about this area of machine learning and/or copyright, happy to spend time to go into more detail if Gnom is willing to divulge his evidence first). Besides those about the probabilities of copyright violations in particular situations, this also includes Gnom's claim (since removed by someone else) that "most algorithm providers renounce all rights in the output through their terms of service." Based on a quick check, that's evidently not true for Midjourney and Craiyon. It also wasn't true for DALL-E at the time of this deletion discussion.
- Regarding "I have extensive academic and professional knowledge in this specific field": I am willing to assume that, given his education and current profession, Gnom has good professional knowledge of at least German Urheberrecht black-letter law. But that doesn't mean that the Commons community is required to espouse his personal theories about a new and currently widely debated and contested legal topic, especially if he isn't even willing to share citations or other evidence (despite this being the norm in many academic or professional legal contexts). Also, while I don't doubt Gnom's longtime personal dedication to the Wikimedia movement's mission, I would also encourage some reflectiveness about potential conflicts of interest (or at least conflicts of perspective) in cases where one's job regularly involves taking the side of companies and individual professionals against alleged violators of their intellectual property.
- Regards, HaeB (talk) 14:37, 20 February 2023 (UTC)
- It seems that the likelihood of an AI work being a derivative also depends highly on the AI model being used. According to this paper, DALL-E is poor at mimicking artist styles, while Stable Diffusion is very good at it. Basically, I think there are too many variables to draw any sweeping conclusions. In most cases, the derivative issue will need to be handled on a case-by-case basis (given the absence of any clear legal precedence, which may change after the Warhol Foundation v. Goldsmith decision comes out). Nosferattus (talk) 19:53, 20 February 2023 (UTC)
- A good point in general about differences between AI models; but even so, note that artist styles are not copyright protected, see e.g. [1] or [2]. This fact is well known among many artists - but apparently not to those who are recently engaging in this kind of "AI stole my style" hue and cry. Regards, HaeB (talk) 00:05, 21 February 2023 (UTC)
- That's also a good point that might be worth mentioning in the derivatives section. Nosferattus (talk) 01:27, 21 February 2023 (UTC)
- The problem is that, considering the amount of AI-generated media currently being uploaded on Commons, there is a significant risk of inadvertent (and near-impossible to detect) copyright and privacy violations being among them. And we need to clearly acknowledge this risk on this page. Once we have done that, we need to have a discussion about what to do with this risk. Ignore it? Or ban all AI-generated media altogether? Or some other solution? With the current hype, it's probably difficult to have a proper discussion about the latter for the time being, but let's at least get the first part right. --Gnom (talk) 22:54, 22 February 2023 (UTC)
- Keep in mind that currently this page is not an official guideline or policy, so even the very basic guidance we have outlined so far is not official or actionable. If we hope to have the community enact this as a guideline, its initial version should be something that nearly everyone on Commons can agree with. In other words, its initial wording needs to be rather "soft" (except where there is clear consensus or legal precedent to lay down harder rules). It is extremely difficult to get the community to adopt a new set of official guidelines. Please don't underestimate this. If the wording is even slightly controversial, there is no chance it will achieve consensus. After the initial guidelines are adopted, there will be plenty of opportunity to discuss tightening them and responding to changes in the legal landscape. Nosferattus (talk) 01:08, 24 February 2023 (UTC)
- Last month, the U.S. copyright office again asserted that AI generated works have no copyright protection (as original works): https://www.copyright.gov/docs/zarya-of-the-dawn.pdf. This is the third time that the U.S. Copyright office has made such an assessment. Thus the theory that AI providers have a copyright stake in AI generated works is bogus. Even the UK, which provides a limited copyright protection for the prompter, provides no copyright ownership for the software developers or providers. I'm going to revert the changes that were made regarding this. Nosferattus (talk) 17:43, 18 March 2023 (UTC)