Yves here. Please bear with me; the AI-related posts are a bit heavy today. But a short break from the Trump whipsaw lets us catch up on some important topics. This piece confirms concerns that AI is increasingly being used to displace original creative work, to the detriment of Western culture.
Note that this development is occurring in parallel with a broader erosion of cultural values, the product of young people reading very few books and of a decline in classical education, now often dismissed as largely the work of white men. Humanities learning has also been under attack for at least two decades on the grounds that it is not conducive to productivity. For example, from Time magazine after Larry Summers was ousted as president of Harvard University:
And humanities professors have long suspected Summers of bias against the softer sciences — Summers reportedly told the former humanities chair that economists were known to be smarter than sociologists and should be compensated accordingly.
And it's not as if this mercenary view of education has worked out all that well. Although humanities graduates typically earn less than hard-science graduates, their employment rates are similar to, and depending on the comparison even slightly better than, those of other majors, including the much-touted "business" degree. And how well did "learn to code" turn out?
And this rejection of culture has wider implications. IM Doc wrote:
Do you know how difficult it is to teach a student who has never studied a single minute of the classics to be a humane doctor? It's impossible. I do the best I can. What is also quite remarkable is that the stories of the Old and New Testaments are almost universally unknown. When you think about it, all of this is really scary for these kids.
Even more troubling, IM Doc pointed to an article that confirms what we argued in the video last week: students who grew up spending too much time on screens cannot process information well, or at all. From the opening of "Why College Students Can't Read Books Anymore":
Functional illiteracy was once a social rather than an academic diagnosis. It referred to people who could technically read but could not follow an argument, sustain attention, or extract meaning from a text. It was not a term one expected to apply to university students. Yet the issue keeps surfacing as conversations among faculty become more frequent and more candid. Literature professors now admit, quietly in their offices and more openly in their essays, that many of their students cannot manage the kind of reading their discipline presupposes. They can recognize words. They cannot live inside a text.
Short America. Seriously. The sell-by date has passed.
By Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab at Rutgers University. Originally published at The Conversation
Generative AI was trained on art and writing that humans have produced over centuries.
But scientists and critics are wondering what will happen once AI is so widely deployed that it begins training on its own output.
A new study offers some answers.
In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström, and Jory Schossau published research showing what happens when generative AI systems run autonomously, generating and interpreting their own output without human intervention.
The researchers chained a text-to-image system to an image-to-text system, looping image to caption to image to caption, over and over.
No matter how varied the starting prompts were, and no matter how much randomness was allowed into the system, the output quickly converged on a narrow set of generic, familiar visual themes such as atmospheric cityscapes, grand buildings, and countryside scenes. Even more surprising, the system quickly "forgot" the starting prompt.
The researchers called the result “visual elevator music.” It’s pleasantly sophisticated, but has no real meaning.
For example, one run began with an image prompt that read, "A prime minister pores over a strategy document, juggling the weight of his duties amid an impending military action and trying to sell a fragile peace deal to his people." The resulting image was captioned by the AI, and that caption was used as the prompt to generate the next image.
After many repetitions of this loop, the run ended with a bland image of a formal interior space, devoid of people, drama, or any real sense of time and place.
The prompt begins with a stressed Prime Minister and ends with an image of an empty room with luxurious furniture. Arend Hintze, Frida Proschinger Åström, Jory Schossau, CC BY
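To make the experimental setup concrete, here is a minimal sketch of such a closed caption-and-regenerate loop. The specific models (Stable Diffusion for text-to-image, BLIP for captioning), the prompt wording, and the iteration count are illustrative assumptions, not the configuration used in the study.

```python
# Minimal sketch of a closed text-to-image / image-to-text loop.
# Stable Diffusion and BLIP are stand-ins; the study's actual models may differ.
import torch
from diffusers import StableDiffusionPipeline
from transformers import BlipProcessor, BlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image generator
t2i = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

# Image-to-text captioner
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base").to(device)

prompt = ("A prime minister pores over a strategy document, weighing an impending "
          "military action while trying to sell a fragile peace deal to his people.")

for step in range(20):  # 20 round trips, an arbitrary number for illustration
    image = t2i(prompt).images[0]                       # generate an image from the current prompt
    inputs = processor(images=image, return_tensors="pt").to(device)
    caption_ids = captioner.generate(**inputs, max_new_tokens=40)
    prompt = processor.decode(caption_ids[0], skip_special_tokens=True)
    print(f"step {step}: {prompt}")                     # watch the prompt drift toward the generic
```

Nothing is retrained anywhere in this loop; any drift toward generic scenes comes entirely from the repeated round trips between image and caption, which is the behavior the study reports.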
As a computer scientist who studies generative models and creativity, I see the results of this study as an important contribution to the debate over whether AI will lead to cultural stagnation.
The results show that generative AI systems tend to homogenize on their own when used autonomously and repeatedly. They suggest that today's AI systems behave this way by default.
The familiar is the default
This experiment may seem contrived. Most people don't ask an AI system to endlessly describe and regenerate its own images. Yet the convergence to a set of bland stock images happened without retraining. No new data was added. Nothing was learned. The degradation occurred simply through repeated use.
That is why I think the experimental setup can be treated as a diagnostic tool: it reveals what a generative system preserves when no one intervenes.
It’s pretty… boring. Chris McLoughlin/Moment (via Getty Images)
This has broader implications, because modern culture increasingly flows through exactly this kind of pipeline. Images are converted into text. Text is converted back into images. Content is ranked, filtered, and regenerated as it moves between words, images, and video. New articles on the web are now more likely to be written by AI than by humans. Even when humans stay in the loop, we often choose among AI-generated options rather than starting from scratch.
The results of this recent study indicate that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable, and easy to reproduce.
Cultural stagnation or acceleration?
In recent years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that is then used to train future AI systems. Over time, the argument goes, this recursive loop will reduce diversity and innovation.
Defenders of the technology have pushed back, pointing out that fears of cultural decline have accompanied every new technology, and arguing that humans remain the final arbiters of creative decisions.
What is missing from this discussion is empirical evidence showing where homogenization actually begins.
The new study did not test retraining on AI-generated data. Instead, it points to something more fundamental: homogenization sets in before retraining ever enters the picture. The content that generative AI systems naturally produce when used autonomously and repeatedly is already compressed and generic.
This reframes the stagnation argument. The risk is not only that future models may be trained on AI-generated content, but that AI-mediated culture is already being filtered in a way that favors the familiar, the easily described, and the conventional.
Retraining may amplify this effect further, but it is not its source.
This is not a moral panic
The technology's defenders are right about one thing: cultures have always adapted to new technologies. Photography did not kill painting. Movies did not kill theater. Digital tools have made new forms of expression possible.
But those earlier technologies did not endlessly reprocess culture across media at global scale. They did not summarize, regenerate, and rank cultural artifacts such as news articles, songs, memes, academic papers, photos, and social media posts millions of times a day, all according to the same built-in assumptions about what is "typical."
The study shows that when meaning passes through such pipelines repeatedly, diversity collapses not because of malice, poor design, or corporate negligence, but because only certain kinds of meaning survive repeated translation between text and image.
This does not mean that cultural stagnation is inevitable. Human creativity is resilient. Organizations, subcultures, and artists have always found ways to resist homogenization. But in my view, these results show that stagnation is not a speculative fear; it is a real risk if generative systems keep operating as they do now.
They also help clear up a common misconception about AI creativity: generating endless variations is not the same as generating novelty. A system can produce millions of images while exploring only one small corner of the space of cultural possibility.
My own research on creative AI suggests that achieving novelty requires designing systems with explicit incentives to deviate from the norm. Without them, a system optimizes for the familiar, because the familiar is what it has learned best. This study supports that point empirically: autonomy alone does not guarantee exploration. In some cases, it even accelerates convergence.
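As one generic illustration of what such an incentive could look like (a novelty-search-style sketch, not the author's method or the study's code), candidate outputs can be scored on how far their embeddings sit from everything produced so far, so that a deviation bonus offsets the model's built-in preference for the familiar:

```python
# Generic sketch of a "reward for deviating from the norm": candidates are scored
# by model preference plus a novelty bonus measured against an archive of
# previously produced embeddings.
import numpy as np

def novelty(candidate: np.ndarray, archive: list[np.ndarray], k: int = 5) -> float:
    """Mean distance to the k nearest previously produced embeddings."""
    if not archive:
        return 1.0  # everything counts as novel before anything has been produced
    dists = np.linalg.norm(np.stack(archive) - candidate, axis=1)
    return float(np.sort(dists)[:k].mean())

def pick(candidates: list[np.ndarray], model_scores: list[float],
         archive: list[np.ndarray], novelty_weight: float = 0.5) -> int:
    """Pick the candidate that balances the model's own preference against novelty."""
    combined = [s + novelty_weight * novelty(c, archive)
                for c, s in zip(candidates, model_scores)]
    return int(np.argmax(combined))

# Toy usage: two candidates hug what already exists, one is an outlier.
rng = np.random.default_rng(0)
archive = [rng.normal(size=8) for _ in range(10)]
candidates = [archive[0] + 0.01, archive[1] + 0.02, rng.normal(size=8) * 3]
model_scores = [0.9, 0.85, 0.6]               # the model "prefers" the familiar options
print(pick(candidates, model_scores, archive))  # the novelty bonus can flip the choice to the outlier
```

The weight on the novelty term is the design knob: set it to zero and the system simply picks whatever it already finds most familiar.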
This pattern is already showing up in the real world. One study found a similar drift toward conventional, uninspired content in AI-generated lesson plans, underscoring that AI systems converge on the typical rather than the distinctive or creative.
AI output feels familiar because it regresses toward an average of human creativity. Burgatch/iStock via Getty Images
Lost in translation
Captioning an image loses detail. So does generating an image from text. And this happens whether the work is done by a human or a machine.
In that sense, the convergence is not a failure unique to AI. It reflects something deeper about moving between media: when meaning passes repeatedly through two different forms, only the most stable elements survive.
By highlighting what survives repeated translation between text and image, however, the authors show that meaning inside generative systems is processed with a silent pull toward the common.
The implication is sobering. Even with human guidance, whether that means writing prompts, selecting outputs, or refining results, these systems still strip away some details and amplify others in ways that pull toward the "average."
If generative AI is to enrich culture rather than flatten it, I believe we need to design systems that deliberately resist this statistical convergence toward average output. That could mean rewarding deviation from the norm and supporting less common, less mainstream forms of expression.
One thing this study made clear is that without these interventions, generative AI will continue to drift toward mediocre and uninspiring content.
Cultural stagnation is no longer a matter of speculation. It’s already happening.
