When Hayao Miyazaki was shown an artificial intelligence system capable of generating crude animation in 2016, his response was blunt: “I am utterly disgusted.”
The Studio Ghibli co-founder — whose films include the 1997 epic “Princess Mononoke” and the 2001 Oscar-winning “Spirited Away” — also called the demonstration “an insult to life itself.” The remark, delivered with visible contempt, quickly became a rallying cry for skeptics of machine-made art.
At the time, the technology seemed primitive, even nonthreatening. Few predicted how quickly it would mature. Eight years later, OpenAI released a major update to ChatGPT, its AI-powered chatbot, allowing users to generate images from text prompts in seconds.
Within days, the internet was flooded with memes, selfies and politically charged posts rendered in a style long associated with Ghibli’s hand-drawn worlds.
What began as novelty quickly turned contentious. On X, one user wrote, “Do you morons truly value art so little that it’s just a filter for your profile pic?” Another warned that “AI stealing the style of Studio Ghibli will be the last straw for many people who were on the fence about copyright.”
Copyright law does not protect artistic style, however. Under Article 30-4 of Japan’s Copyright Act, works may be used for information analysis purposes, a provision that has been interpreted to allow generative artificial intelligence systems to train on copyrighted material without permission.
It is this provision on training data that worries many of Japan’s illustrators more than the images AI generates. One of the top priorities of the Freelance League of Japan, an advocacy group representing independent creative professionals, is to create safeguards that would prevent creators’ work from being used in training datasets without their consent.
“What I would look forward to is if AI started creating manga in my style, and then I could collect a usage fee or ideation fee from those users,” says Mitsuru Yaku, a manga creator and honorary chairman of the league. “That would make my life a lot easier.”
In some ways, Japan’s creatives, business leaders and government officials are aligned. Many see AI as a necessary tool to address the country’s labor shortage and economic stagnation. Yet experts warn that enthusiasm has moved faster than understanding. As generative AI becomes embedded in daily life, confusion about how it functions — and what protections exist — is widespread.
Japan’s generative AI optimism
Over the past several years, large language models (LLMs) have moved rapidly into public life. They can draft emails and essays, generate design proposals, write code, build applications and flag fraud or regulatory violations. At the same time, researchers have shown they can spread misinformation, amplify bias and infringe on data privacy and copyright.
For now, in Japan at least, public sentiment appears to lean more toward AI’s promise than its perils. Hiromi Yokoyama, a University of Tokyo professor who researches AI ethics, says the technology is generally welcomed as a potential solution to the nation’s severe labor shortage. She points to data suggesting the Japanese public has a relatively high level of scientific literacy.
“When we measured their level of knowledge about AI in a survey, we were surprised to find that they tended to answer more questions correctly than participants from the U.S. and Germany,” she says.
A separate survey by Nomura Research Institute found that respondents with high expectations for generative AI outnumbered those with low expectations two to one.
“People in Japan express higher baseline trust in AI than people in the U.K.,” says Steven Pickering, a professor of political science at the University of Amsterdam. Pickering has worked on studies comparing attitudes on AI in Japan and the U.K. and argues that this trust is linked to broader confidence in institutions.
“Japanese respondents often acknowledge that AI may replace jobs (even their own job) and yet they still trust it,” he says. “AI is more often interpreted as part of a broader, structural adjustment to demographic and labor-market realities, rather than as an immediate personal threat.”
That willingness to trust AI — even while acknowledging job displacement — aligns with Japan’s demographic reality: a labor shortage projected to leave the country 11 million workers short by 2040.
As a result, the Japanese government and major corporations have moved aggressively to promote adoption. New legislation seeks less to restrict artificial intelligence than to accelerate its deployment. Japan’s Digital Agency also partnered with OpenAI to develop “Gennai,” an internal generative AI tool for government employees built on the company’s language models.
The government has earmarked ¥340 billion in digital transformation subsidies to support AI adoption. Major corporations and industry leaders have followed suit. Most recently, SoftBank Group said it would spend $3 billion (¥463 billion) annually to deploy OpenAI technology across its subsidiaries.
“It’s absolutely vital that businesses make use of AI,” says Kensuke Ozawa, an AI specialist who writes and lectures on the technology’s practical applications. He argues that a future in which nearly all routine work is handled by AI systems, with humans providing oversight, is inevitable.
“In the years to come, AI will certainly be able to take over people’s white-collar jobs,” he says.
Underused and misunderstood
While Ozawa is optimistic about AI’s potential in the workforce, his experience as an educator often leaves him frustrated.
“There are a lot of misunderstandings,” he says. “Up until last year, most people fixated on prompt engineering. They thought you had to come up with a prompt every single time to make use of AI.”
He emphasizes that AI’s most transformative capabilities simply aren’t being realized yet in Japan.
“A significant number of people who use AI only use it for searches — to find information,” he adds. “It’s such a waste of potential.”
Findings from Yokoyama’s research suggest this may be partly due to an inherent cautiousness toward risk among Japanese workers, as well as the country’s generally slow pace of digital transformation.
A recent large-scale survey by the Organization for Economic Cooperation and Development sheds light on what is happening in companies across the country. Just 8.4% of employees say they use AI, with 6.4% using generative AI specifically. This figure ranges from 22.9% in the information and communications sector to just 4.1% in the accommodations, eating and drinking services sector.
“In Japanese workplaces, a number of factors are blocking the smooth introduction of AI, including a lack of human resources and a cautious attitude toward the risks posed by AI,” Takahiro Toda, co-author of the survey, says in an email. “The current situation is unable to realize the positive effects of AI.”
The survey identifies skill and understanding as the biggest barriers. More than 60% of Japanese companies intending to adopt AI cited a shortage of AI-related talent as an obstacle, and nearly 50% said AI is not well understood within their organizations.
“I believe that in the workplace, many Japanese are aware of risks of AI like hallucination and bias,” Toda says. “So if the risks associated with AI are mitigated, we can expect the adoption to move forward.”
Japan’s reality is a delicate one: While boardrooms and government offices speak confidently about AI’s promise, only a small minority of workers have experienced its implementation — and, as Ozawa notes, even that implementation may amount to little more than search queries.
At the same time, relatively few prominent voices are openly critical. One of the most visible is the writer and economist Yukio Noguchi, who has warned of AI ushering in an era of extreme socioeconomic inequality. In a recent column for Toyo Keizai, Noguchi takes it as a given that generative AI will displace human labor:
“AI can be used to improve your productivity, output and therefore income. A significant disparity will emerge between the small minority of people who can do that and the vast majority who cannot. … The difference in learning and development conditions under generative AI will far exceed what we’ve seen in terms of parents from wealthy families hiring private tutors for their children.”
In other words, Noguchi argues that the gains from generative AI will not be evenly distributed and may accelerate inequality.
In a recent opinion column for TBS Cross Dig, Hirotaka Isobe raised a more everyday concern. When humans make mistakes at work, they face criticism and, if the errors pile up, consequences. LLMs, however, also make mistakes. One 2024 study found that 52% of ChatGPT’s answers to programming questions were incorrect or contained misinformation. A subsequent 2025 study described LLMs’ performance when fact-checking political information as “modest”; when evaluating true statements, their accuracy was comparable to random guessing.
“Are we more tolerant of generative AI than humans?” Isobe writes. “We’re on our way to a world where ‘I’m sorry, that was generative AI’s doing’ is forgiven with an ‘OK, well, then there’s nothing we can do about it.’”
Who benefits, who loses?
There is one thing most Japanese commentators and members of the general public agree on: AI can and will replace human jobs.
A study by the Daiwa Institute of Research in 2024 examined the risk and likelihood of job replacement by AI across sectors in Japan. The projected rate ranges from just 2.5% of jobs in forestry to 57% in insurance and finance. Other surveys suggest that up to 66% of software engineering roles are at risk of automation. Occupations identified as vulnerable in general include data operations, bank and insurance staff, accountants, translators and factory line workers.
While such surveys are forward-looking, creatives — particularly illustrators — say they are already feeling the effects of generative AI. According to a survey by the Freelance League of Japan, 80% of creators have directly experienced or heard of AI causing professional difficulty or threatening their income. Ten percent reported a direct decline in income due to AI, and another 10% said they were beginning to seek new sources of income outside their creative work.
“The psychological safety of the creator community has been eroded by generative AI,” says Masayuki Takada, director of the Freelance League of Japan. “Creators want to control the rights of their creative work — that’s the most important part.”
Despite these threats, most analysts are optimistic that AI will ultimately ease — not worsen — Japan’s labor shortage.
“There is little evidence to date that AI is replacing jobs,” says Stijn Broecke, a senior economist at the OECD and co-author of the aforementioned survey. “On the contrary, AI could … address skill and labor shortages related to an aging workforce.
“Japan is stuck in a Catch-22 situation where AI adoption could help address skills shortages, but AI is not being adopted because of a lack of skills.”
An International Monetary Fund report similarly argues that limited implementation has prevented Japanese companies from realizing AI’s potential to supplement the country’s shrinking workforce.
This tension leaves Japan in an unusual position. On the one hand, government officials, businesses and much of the public express confidence in generative AI’s economic promise. On the other, few have firsthand experience of what widespread implementation will look like — in terms of either gains or trade-offs.
Meanwhile, AI systems continue to advance, with more autonomous “agentic” tools entering the market and little Japan-specific research yet examining their potential use.
Even AI advocates such as Ozawa draw boundaries.
“While I think AI is fine for work, I do worry about it being put to use in culture. It’s important that we preserve Japanese culture,” he adds, noting that most AI tools are built on English-language data, which could marginalize minority values, languages and perspectives.
By the time Japan fully measures the economic impact of the technology, the cultural effects may already be entrenched.
“According to recent research, generative AI could ultimately rob humans of our ability to think and make decisions,” Yokoyama says. “Human ambition could be putting humanity itself in serious danger.”