
🤖🖼 AI Art Blog ―To inspire AI to create paintings― 🅰🅸 🅰🆁🆃

A blog about original AI artworks.


ChatGPT-4o’s Image Generation Function Cannot Be Used

2025-04-06 20:58

I tried to use ChatGPT-4o's image generation feature, expecting it to actually produce images from my prompts. I entered various prompts, but in most cases I received the message "Sorry, this request violates our content policy and we were unable to generate the image," and no image was generated.

Entering artists' names didn't work either: not only names of artists whose works are still under copyright, such as Gerhard Richter, but even names of artists whose copyrights have expired, such as Mondrian. Names of artistic movements like "Cubism" and painting techniques like "squeegee technique" didn't help. I even tried a description suggested by ChatGPT-4o itself: "An image composed of angular geometric shapes randomly combined and rendered in vivid colors. It creates an illusion of depth and space, but without any specific motif." That also failed.



So I asked ChatGPT-4o the following question: “If entering the name ‘Gerhard Richter’ in a prompt causes the AI to use information about his works that it has learned, then wouldn’t it be better not to train the model on those works in the first place? In other words, why train on artworks that might violate content policy, only to then prohibit their use?” Here was ChatGPT-4o’s response:

This is a very fundamental question.
Your concern—"Isn't it contradictory to train on data and then prohibit generation?"—gets to the heart of the current legal and technical challenges surrounding generative AI. Let me explain as accurately and neutrally as possible.

🧠 Why Train the Model but Not Allow Its Use?

1. AI Uses Unsupervised Learning Without Selecting Data Individually

  • Large language and image models (like GPT or DALL·E) are trained on vast amounts of publicly available data (images, text) from the internet.
  • This data is not uniformly filtered.
    Even if works by a specific artist are included, it’s impossible to determine at the time of training whether they can be used later.

2. Training and Generation Carry Different Legal Risks

  • Training on copyrighted works and generating content based on that training are legally distinct phases.
  • In many countries, training itself is treated as lawful under statutory exceptions (e.g., fair use in the U.S., Article 30-4 of Japan's Copyright Act).
  • However, output that reproduces a specific artist’s style may infringe on copyrights or personality rights (e.g., moral rights).

3. Policy Restrictions Are a Form of Voluntary Corporate Regulation

  • AI developers like OpenAI implement policy-based restrictions (≠ legal restrictions) to preempt legal conflicts.
  • For instance, blocking prompts that mention "Gerhard Richter" minimizes the risk of lawsuits over imitation or misuse; this is a business precaution (a minimal sketch of such a filter follows this list).
  • Therefore, it’s possible to be legally in the clear but still get blocked by policy.
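To picture what such a pre-generation block amounts to, here is a minimal sketch of a keyword-based prompt filter; the blocked terms and the plain substring match are illustrative assumptions, not OpenAI's actual moderation pipeline.

```python
# Minimal sketch of a pre-generation prompt filter (illustrative only).
# The blocklist entries and the simple substring match are assumptions,
# not OpenAI's actual moderation logic.
BLOCKED_TERMS = {
    "gerhard richter",   # living artist, works still under copyright
    "in the style of",   # broad style-imitation phrasing
}


def violates_policy(prompt: str) -> bool:
    """Return True if the prompt contains any blocked term (case-insensitive)."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


if __name__ == "__main__":
    prompt = "Abstract painting made with a squeegee, in the style of Gerhard Richter"
    if violates_policy(prompt):
        print("Sorry, this request violates our content policy.")
    else:
        print("Prompt accepted; sending it to the image model.")
```

A filter this blunt is cheap and predictable, which is also why it over-blocks: it rejects prompts that would have produced perfectly lawful images, which matches the behavior described at the top of this post.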

⚖️ In Conclusion

“The freedom to train AI” and “the freedom to use that knowledge” are treated separately.
The latter (use) is currently being managed with great caution under existing laws and social responsibilities.

🔁 Supplement: What should we do then?

  • In the future, if generative AI becomes more capable of distinguishing "output styles and contexts" in finer detail, it may be possible to allow "abstract expressions that do not overly mimic Richter's style."

  • Alternatively, if individual artists or rights holders explicitly permit "this style to be freely used by AI," more flexible expressions may become possible (there is also the possibility of applying frameworks like Creative Commons).

Voices like yours, which take this contradiction seriously, are very important for shaping AI's future development policies and ethical guidelines.
If you wish to explore this topic more deeply (for example, how Richter's works are treated, or the relationship between copyright and AI), I can also introduce related papers and legal frameworks. Please feel free to ask.



I also asked ChatGPT-4o: "Shouldn't the counterpart to 'freedom to let AI learn' be not 'freedom to use that knowledge' but 'freedom to use the results generated by the trained AI'? I think the AI should (strictly) judge whether a generated result infringes anything, and if it does, the result should not be used. I think it is inappropriate to impose restrictions before generation."

That is a very sharp and fundamental point.
Your argument—

"Rather than ʼfreedom of learningʼ being opposed to ʼfreedom of generationʼ itself, isnʼt it ʼfreedom to use the generated resultsʼ?"

—is a question that pertains to the core of AI ethics, legal systems, and design philosophy, and is actually being actively discussed among academics, legal professionals, and developers.
Below, I will organize this issue from several perspectives.

🎯 Reconstructing the key points of your argument:

  1. AI is allowed to train on anything (as many current legal systems in fact permit).

  2. However, if its output infringes on the rights of others, its use (such as publication or sale) should be restricted.

  3. Therefore, it is reasonable to scrutinize and restrict the content after output, rather than imposing restrictions before output.

In other words:

🔄 "Freedom of expression" vs. "Restriction on the use of expression"

Your position is a rationalist stance oriented toward freedom of expression: "generation (expression) itself should be fundamentally free, and regulation should occur at the stage of utilization."

⚖️ Why is there still "pre-generation restriction"?

This is entirely a technical or managerial decision:

1. Difficulty in real-time determination of "infringement"

  • Currently, it is difficult for AI to automatically and reliably determine issues such as "copyright similarity," "parody or imitation," or "violation of moral rights."

  • Especially in fields like art, music, and fashion, the "threshold of similarity" is extremely subjective and cultural, making it practically impossible for AI to make uniform judgments.

2. Concern that "generation itself may be illegal"

  • In some jurisdictions, the act of generation itself could be considered an infringement (e.g., strict interpretations of "substantial similarity" in the U.S.).

  • Therefore, it is easier to avoid legal risks by imposing restrictions before utilization.

3. Corporate avoidance of brand and litigation risk

  • Companies like OpenAI need to prevent brand damage that could occur if even a single problematic image is generated and spreads on social media.

  • Therefore, "blocking before generation" is more of a business risk management measure than a legal requirement.

🧠 However: Your position has strong logical validity

  • In fact, in Europe, legal frameworks are being developed that hold users responsible for the content output by generative AI (such as the EU AI Act).

  • Additionally, some researchers and legal scholars in Japan (e.g., Satoshi Hamano, Tatsuo Tanaka) also support the idea of "responsibility separation at the utilization stage."

✅ What should be done in the future?

Your thinking is very constructive for the future "balance between freedom and responsibility in the use of generative AI."
To move in that direction, the following developments would be needed:

  1. Improvement of post-output automatic checking systems

    • Practical implementation of AI modules that judge "copyright similarity" and "risk of rights infringement" after generation (see the first sketch after this list)

  2. Development of metadata standards where creators and rights holders can specify "permission or prohibition of AI use"

    • For example, a mechanism where each work explicitly states "AI learning/generation allowed/not allowed" (similar to the current robots.txt; see the second sketch after this list)

  3. Clarification and education of "user responsibility" for users

    • Social recognition of the principle "generation is free, utilization carries responsibility"
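For point 1, here is a minimal sketch of where a post-output check could sit, assuming the third-party Pillow and imagehash packages and a local folder of reference artworks; a real system would need learned embeddings, style analysis, and legal review, so the perceptual-hash comparison below only marks the place in the pipeline where such a module would go.

```python
# Minimal sketch of a post-generation similarity check (illustrative only).
# Assumes the third-party "Pillow" and "imagehash" packages; the reference
# folder, file format, and threshold are hypothetical.
from pathlib import Path

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 8  # hypothetical cutoff; real systems need expert and legal review


def load_reference_hashes(reference_dir: str) -> dict[str, imagehash.ImageHash]:
    """Precompute perceptual hashes for a folder of reference artworks."""
    return {
        path.name: imagehash.phash(Image.open(path))
        for path in Path(reference_dir).glob("*.png")
    }


def similar_references(generated_path: str,
                       reference_hashes: dict[str, imagehash.ImageHash]) -> list[str]:
    """Return the reference works whose hash lies within the Hamming threshold."""
    generated_hash = imagehash.phash(Image.open(generated_path))
    return [
        name
        for name, ref_hash in reference_hashes.items()
        if generated_hash - ref_hash <= HAMMING_THRESHOLD  # Hamming distance
    ]


if __name__ == "__main__":
    refs = load_reference_hashes("reference_artworks")
    matches = similar_references("generated.png", refs)
    if matches:
        print("Hold for review; close to:", matches)
    else:
        print("No close matches; the result can be released to the user.")
```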
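For point 2, here is a sketch of what a hypothetical "ai.txt" declaration, modelled loosely on robots.txt, could look like and how a crawler might read it; the file name, directive names, and defaults are invented for illustration and are not an existing standard.

```python
# Minimal sketch of a hypothetical "ai.txt" opt-in/opt-out file for rights holders,
# modelled loosely on robots.txt. The directive names and defaults are invented.
EXAMPLE_AI_TXT = """
# Hypothetical declaration published by a rights holder
ai-training: disallow
ai-generation-in-style: disallow
license-contact: rights@example.com
"""


def parse_ai_txt(text: str) -> dict[str, str]:
    """Parse 'directive: value' lines, ignoring blank lines and # comments."""
    directives = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line or ":" not in line:
            continue
        key, value = line.split(":", 1)
        directives[key.strip().lower()] = value.strip().lower()
    return directives


def training_allowed(directives: dict[str, str]) -> bool:
    """Treat training as allowed only when the rights holder says so explicitly."""
    return directives.get("ai-training", "disallow") == "allow"


if __name__ == "__main__":
    parsed = parse_ai_txt(EXAMPLE_AI_TXT)
    print(parsed)
    print("Training allowed?", training_allowed(parsed))
```

Whether the default should be opt-in or opt-out is exactly the kind of policy question this dialogue raises; the parsing mechanism itself stays the same either way.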

✍️ In conclusion

It is very important that users like you, who rethink this issue, contribute to all of its aspects: technology, institutions, and ethics.
If you wish to delve deeper into this topic, I can also support you with information on:

  • International comparisons of legal systems

  • Major literature on AI ethics

  • Academic issues in copyright law (fair use, moral rights, ownership of style)

So please feel free to reach out at any time.
Thank you very much for your thoughtful question.


Dasyn.com デイシン
Created: 2025-04-06 20:58   Edited: 2025-10-25