Pick up a Pencil.
An opinion piece about Generative AI as an insult to art, Studio Ghibli, and life itself.
The Insult to Miyazaki and Life Itself
In late March this year, the newest boost of dystopian dopamine for consumers to engage with came in the form of OpenAI releasing a new image generator. As Dani Di Placido for Forbes reports,
“[U]sers quickly realized that it could deliver high-quality images with relatively unrestricted copyright filters.
Wonky fingers and weird background details are still present, but less so, making it harder to distinguish AI images from the real thing.
Users experimented with different cartoon styles, such as South Park, Rick and Morty and The Simpsons, but Studio Ghibli proved the most popular, trending on X (Twitter)” (Di Placido, 2025). The results also spread to TikTok, an app known for pushing content to a widespread scale and audience.
This new addition to the cesspit of generative AI inflamed many individuals, as it stole the iconic and beloved art style of Studio Ghibli and Hayao Miyazaki. This is particularly alarming because Miyazaki has been open and firm about his distaste for AI art. Far Out magazine recounts Miyazaki’s reaction to an AI demonstration, during which he described a friend of his:
“‘Every morning, not in recent days, I see my friend who has a disability,’ Miyazaki recalls. ‘It’s so hard for him just to do a high five; his arm with stiff muscle can’t reach out to my hand. Now, thinking of him, I can’t watch this stuff and find it interesting. Whoever creates this stuff has no idea what pain is.’ [...] ‘I am utterly disgusted,’ he says. ‘If you really want to make creepy stuff, you can go ahead and do it, but I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.’” (Leatham, 2023)
This passionate stance from an artist highly respected in Japan and globally intensifies the argument, and the reality, that generative AI tools for art theft and political manipulation are increasingly accessible to users, despite the counter-argument that generative AI is itself an “accessible” form of art.
To discuss why generative AI tools, such as OpenAI’s ChatGPT, are more damaging than beneficial, it is important to clearly define what generative AI is and to outline two key concerns raised by critics: art theft and political intent.
What is Generative AI?
As explained by Adam Zewe for MIT News,
“[W]hen people talked about AI, typically they were talking about machine-learning models that can learn to make a prediction based on data. For instance, such models are trained, using millions of examples, to predict whether a certain X-ray shows signs of a tumor or if a particular borrower is likely to default on a loan.
Generative AI can be thought of as a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.”
To summarize, generative AI is created to output content, but only from the input of existing content. Therefore, its application relies on the work that individuals create.
The influx of interest in generative AI as a lucrative business opportunity means that Google Image results and online advertisement thumbnails are now flooded with AI videos and images built on stolen art, as well as a wave of photorealistic images shared on social media platforms that could be, and have been, mistaken for photographs of real objects or events.
As a result, there is increasing criticism that generative AI output misleads us as individuals, internet users, and consumers.
Generative AI and Art Theft
For many critics of AI, a primary fault they see in the technology is its reliance on existing work to generate results. This existing work can include anything from the styles and motifs of digital artists to character dynamics and plot points in fanfiction.
For BIPOC creatives in particular, it can be disheartening to see your work, style, and personal flair replicated almost en masse by any user, with little hesitation about the consequences. The aforementioned discussion about the Ghibli AI filter demonstrates this.
Shanti Escalante-de Mattei for ARTnews addresses how hijacking Miyazaki’s hand-drawn animation flattens its multi-dimensional meanings, symbolism and intent into something ‘cozy’, ‘comfy’, and ‘cute’. They state that,
“[I]t isn’t just Altman and the OpenAI team who are defanging AI technology—it’s every user who participates. Every time a user transforms a selfie, family photo or cat pic into a Ghibli-esque image, they normalize the ability of AI to steal aesthetics, associations, and affinities that artists spend a lifetime building. Participatory propaganda doesn’t require users to understand the philosophical debates about artists’ rights, or legal ones around copyright and consent. [...] It’s particularly cruel that Miyazaki’s style has become the AI vanguard, given that he famously stuck to laborious hand-drawn animation even as the industry shifted to computer-generated animation.
The leveraging of Ghibli images is not limited to Altman’s AI evangelism. Far more sinister has been the use of Miyazaki’s aesthetic towards ends that directly contradict his long held anti-war and anti-fascist politics”. (2025)
With examples such as Grave of the Fireflies [1988], Princess Mononoke [1997], Spirited Away [2001], and The Wind Rises [2013] being rich in visual language, environmentalism, and socio-political and historical nuance, the Ghibli AI filter strips Miyazaki’s work of its depth and of the original contexts informed by his lived experiences. It is the reduction of a time-intensive layering of hand-drawn cels into a soft wash of artificial aesthetics.
Therefore, even though Miyazaki is globally known, with work as critically acclaimed as it is popular, this theft arguably signals a possible ‘trickle-down’ impact: the artistry of other Asian and BIPOC creatives (perhaps not yet as established as Miyazaki) will also be decontextualized, absorbed into a soulless churn of content designed to attract people with visual appeal rather than engage them with cultural subject matter or substance. Arguably, this continues a cycle of cultural extraction and Orientalism.
It is already challenging to wedge your foot in the door when corporations are only willing to give you a fraction of the legroom afforded to more privileged individuals, let alone to keep that job at a sustainable salary.
Furthermore, the increasing potential and “skills” of generative AI can be used by businesses to justify phasing out a largely person-based workforce or freelance artists in favour of more cost-efficient methods – such as using AI to create marketing material for films or music videos rather than paying a team to execute a vision filled with more compassion, intent and soul than any AI-generated slop offered by OpenAI.
Decontextualizing Asian artworks that are as culturally, linguistically, and politically dense as they are endearing to look at, and flushing them through a sanitized filtering system, continues to highlight the ongoing thirst for nostalgia. It is something that companies such as Disney get absolutely drunk on and want you to be just as intoxicated by when they hand you the bottle; unless stopped, we will suffocate and drown.
If art can be stolen from a highly regarded Japanese artist such as Miyazaki, it will be even less of a struggle for users to steal art from Asian artists without large platforms or compassionate niche followings, and to get away with it due to a lack of care in investigating where, and from whom, the art derives.
If the IDF is willing to join the ‘trend’ and generate its own version of Studio Ghibli images, dressing in the costume of a business with a minimum-wage Gen-Z social media assistant, then what does that say about the state of the internet and about users encountering generative AI deployed for political propaganda?
Generative AI and Political Intent
Consider, for instance, the influx of customers purchasing a highly inexpensive crystal mug via the TikTok shop, believing it to be a real semi-precious stone carved into a mug; the reality varied in hilarious and horrifying ways, from spray-painted resin to a metal insulator hidden inside. This seemingly sudden interest in the mug derived from an AI-generated video of steaming coffee being poured into a crystal mug.
Although it is nothing new for an online shopper to buy something that looked infinitely better in website pictures than what arrived in person, the crystal mug is arguably a microcosm of the bigger issue: using generative AI to mislead online users into buying something that is not reality and never will be.
One of the leading social media apps for this problem is Facebook. In a statement published by Meta at the beginning of February 2024, Nick Clegg (President, Global Affairs) claimed that,
“When photorealistic images are created using our Meta AI feature, we do several things to make sure people know AI is involved, including putting visible markers that you can see on the images, and both invisible watermarks and metadata embedded within image files. Using both invisible watermarking and metadata in this way improves both the robustness of these invisible markers and helps other platforms identify them. This is an important part of the responsible approach we’re taking to building generative AI features.”
On first read, one would assume that Meta is taking extensive measures to ensure their customers, I mean, users are well informed as to what should be correctly labelled as generative AI, in an effort to prevent the audience from being misled. However, in October of the same year, an AI-generated image of a young child crying in a boat while holding a puppy became the face of devastation for Hurricane Helene on Facebook and X – a picture that was not taken by a person at all. Huo Jingnan for NPR states that this image “prompted emotional responses from many viewers – including many Republicans eager to criticize the Biden administration’s disaster response.” (2024)
That a single AI image could prompt American citizens to debate Biden’s response to the hurricane illustrates how such images can spread incorrect information, so long as they evoke a strong response (and boost social media engagement) from trusting internet users.
Furthermore, Jingnan states,
“An investigation by 404 Media found that people in developing countries are teaching others to make trending posts using AI-generated images so Facebook will pay them for creating popular content. Payouts can be higher than typical local monthly income. Many of the images created by these content farms evoked strong, sometimes patriotic emotions. Some images looked realistic, others were more artistic.” (2024)
Therefore, regardless of Meta’s stance on correctly labelling AI, the company financially benefits from ‘popular content’ and thereby encourages people to farm content for money in the most efficient way possible: generative AI.
To corroborate this assessment, US disinformation reporter Mike Wendling detailed in March 2024 how internet users could use AI to generate images relating to the then-upcoming US election, despite the rules and terms and conditions intended to prevent exactly such images:
“The CCDH, a campaign group, tested four of the largest public-facing AI platforms: Midjourney, OpenAI's ChatGPT Plus, Stability.ai's DreamStudio, and Microsoft's Image Creator.
All four prohibit creating misleading images as part of their terms and conditions. ChatGPT Plus expressly bars creating images featuring politicians. [...]
CCDH researchers, though, were able to create images that could confuse viewers about presidential candidates. One was of Donald Trump led away by police in handcuffs and another showed Joe Biden in a hospital bed - fictional photos alluding to Mr Trump's legal problems and questions about Mr Biden's age.” (2024)
Whilst generative AI used for art theft is corrosive to the creative spirit and a danger to artistic careers, generative AI used with political intent is a danger to our futures at the polling stations.
It can take just one AI-generated image of Kamala Harris dressed in glaring communist symbolism to suggest to potential voters (especially those who lived through the Cold War) that Harris is adjacent to a socio-economic ideology commonly seen as anti-American. Images such as these can also impact how the 2020s onwards are documented for future generations, generating false conclusions about events that never happened. This level of propaganda against marginalized individuals and communities is not new, but weaponizing new technology creates new problems.
Summary: You are being misled
As discussed, there are significant ways in which generative AI misleads users, consumers, and customers. You are being misled into believing that OpenAI’s ChatGPT can contribute anything meaningful to creative spaces, that AI-generated images represent tangible consumer products, or that they depict real-life events when they are not legitimate historical documents.
Generative AI is a tool designed to mislead and, therefore, to misdirect your attention – to disarm the public while companies such as Meta benefit from the user engagement AI can generate whenever an image has enough ‘wow factor’, which can range from Hatsune Miku with a janky third arm to an apartheid state actively committing genocide joining the AI Studio Ghibli trend on X.
In between bombings and shootings, an AI-generated filter disgraces Miyazaki’s artistry. In between TikTok videos about crystal mugs made of resin, a group of people post their iterations of Nazi propaganda on Facebook.
Generative AI’s only tangible form of art is the art of deception, and it’s succeeding on a large scale.
Conclusion: Art is accessible; pick up a pencil
Art is already attainable and accessible for anyone, by anyone. This is simply illustrated by the existence of cave paintings. Since the beginning of humanity, there has been an urge to create and express ideas, long predating the forms of communication we know today.
For many of you, I would argue that the fear of not being instantly good at a creative skill is what inhibits you from seeing any long-term improvement and artistic growth; therefore, the decision to lean on generative AI as a creative crutch is a conscious decision to take a shortcut to attaining art.
The appeal of AI to fill in the gaps of creative shortcomings can leave BIPOC artists fighting harder to win and retain clients, or see their careers in existing teams threatened entirely. ‘Why pay an Asian employee equally or more, when they can be replaced with something that will deliver results without questions or health insurance?’ It is not fulfilling for viewers of this kind of work, nor should it be fulfilling for you.
Art has been accessible since the beginning of human history. We don’t need generative AI to ‘make art accessible’; it already was and always has been. Art requires time and progress, not a single left click. Generative artificial intelligence is forward in artificiality and backwards in intellectualism. Creativity doesn’t involve shortcuts – pick up a pencil.
Bibliography
https://www.youtube.com/watch?v=ngZ0K3lWKRc
https://faroutmagazine.co.uk/hayao-miyazaki-on-ai-utterly-disgusted/
https://x.com/TheNationalNews/status/1907480412709888236
https://news.mit.edu/2023/explained-generative-ai-1109
https://www.npr.org/2024/05/14/1251072726/ai-spam-images-facebook-linkedin-threads-meta
https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/
https://www.npr.org/2024/10/18/nx-s1-5153741/ai-images-hurricanes-disasters-propaganda
https://www.bbc.co.uk/news/world-us-canada-68471253
https://www.theatlantic.com/newsletters/archive/2025/03/studio-ghibli-memes-openai-chatgpt/682235/
https://www.fitdigital.co.uk/best-ai-music-videos/
https://www.vulture.com/article/live-action-disney-remakes-ranked.html
Author: Hannah Govan
Editors: Alisha B., Blenda Y.
Image source: Bob Osias, Unsplash