Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
It’s an atypical instalment this week. OpenAI just dropped Sora, a new text-to-video model that has set the internet on fire.
And there’s no denying: this model produces amazing outputs.
I want briefly to survey some of the key lines of response, and also offer some reflections of my own.
No audio version for this one. For obvious reasons this is a highly visual instalment, so it’s best to scroll and read rather than listen. But audio will be back next week.
Let’s get into it.
💥 Department of Light and Magic
This week, a further advance in the emergence of AI as something akin to magic.
OpenAI yesterday announced Sora, a new text-to-video model. Sora produces photorealistic video up to one minute long, and is also able to accurately simulate complex physical dynamics, such as liquid swirling in a cup or reflections in a moving window.
Predictably, X went mad. Everyone is sharing their favourite videos and visual effects:
Jim Fan, who leads work on AI agents for Nvidia, noted that Sora has already rendered a near-perfect Minecraft world. There’s something so gloriously weird about this; a machine dreaming of a world that was already a machine-generated dream.
Meanwhile, Ed Newton-Rex, formerly of Stability AI, asked: was Sora trained on copyrighted material? Newton-Rex has been sounding the alarm on this issue for some time now.
It’s hard to argue with his logic. The work of writers, artists, and filmmakers is being scooped up and used to train AIs that write, draw and now even make films. We surely need new intellectual property law around this.
The New York Times has filed a lawsuit against OpenAI for use of its work in training data. And right now OpenAI’s defence is, essentially, to claim that it is impossible to train models without the use of copyrighted work and that therefore they should be allowed to use that work. Which seems a strangely circular argument. The legal challenge is only going to get deeper.
Meanwhile, many voiced concern that the rise of generative AI models such as Sora means we’ll have no call, in future, for human creators:
But the musician Grimes replied with what seems to me a persuasive argument, at least when it comes to filmmaking:
Sure, generative AI tools are going to change the creative landscape and the job market for creators. But I think there is potential here for a new creative golden age, too.
Yes, we’re going to face a tsunami of AI-generated writing, pictures, and films. But amid that, human creativity will remain just as important — probably more important — as a differentiator. The most creative people will do better work with these models than the average person; they’ll use them as amplifiers of their own creativity and find new styles, stories, and modes of expression. Access to Hollywood-level visual effects will be democratised; those kinds of now-amazing visual effects will become a commodity and culturally unremarkable. But the most creative people will find ways to stand apart.
⚡ NWSH Take:
In addition to the above, some final reflections.
It’s not an original observation at this stage, but: the pace of advance here is insane. Look back at Imagen Video and Phenaki, the Google text-to-video models I wrote about in October 2022. Sora is in a different world. It won’t be long until it’s possible to string together text prompts to create feature-length, photorealistic films.
Yes, this will transform animation, advertising, TV, and more. It will also do something perhaps even more consequential: change the way millions use existing social media, and lead to entirely new forms of social media. It will lead us even deeper into the highly visual culture we now inhabit; one in which written arguments, and the Gutenberg minds to which they give rise, have less purchase. It will make it impossible to trust much that we see on film, and ignite a billion-dollar quest to solve that problem.
So much for the practical implications; now a few thoughts on the spiritual implications of this technology.
Sora and models like it are technologies of world representation. And the technologies we use to represent the world around us — language underpins all of them — shape our understanding of the ultimate relationship between us and the world.
Consider: people are already talking about watching films generated by Sora inside their Apple Vision Pro. Soon, we’ll be able to generate photorealistic and physically accurate virtual worlds simply by describing them via natural language.
And remember, Sora is able to create accurate depictions of physical processes; in addition to being a video model, this AI is a kind of simulation engine. Various users have even said that Sora lends further credence to the Simulation Argument: the idea that the universe we live in is itself, in some deep and ultimate sense, a simulation.
World as simulation is emerging as the new dominant metaphor for the ultimate nature of the world we find ourselves in. It is replacing world as machine, which is the metaphor that emerged out of the Newtonian scientific revolution.
The rise of virtual worlds — and our ability to conjure them simply by describing them — has implications for all this. The emergence of these worlds will push us deeper into the new age of world as simulation. And this will have profound implications for the way we understand ourselves and our place in this world; the world, that is, we still for now call reality.
Lots, then, to think about. And I’ll be writing much more about world as machine and world as simulation soon.
The Worlds to Come
Thanks for reading this special instalment.
We’re still so early into the journey with generative AI, text-to-video, and virtual worlds. It reminds me of the excitement of the original 8-bit home computer revolution when I was a child in the 1980s: that sense that something amazing is unfolding and we get to see where it leads.
I’ll be watching every step of the way, and working to make sense of it all. And there’s one thing you can do to help: share!
If you found today’s instalment valuable, why not take a second to forward this email to one person – a friend, relative, or colleague – who’d also enjoy it? Or share New World Same Humans across one of your social networks, and let others know why you think it’s worth their time. Just hit the share button:
I’ll be back next week as usual. Until then, be well,
David.
"There’s something so gloriously weird about this; a machine dreaming of a world that was already a machine-generated dream." Well said.
This world we’re living in is changing in ways we can’t even comprehend.
Not sure I agree (currently) with the theory that we’re living in a simulation, but with all these AI advances and tools like the Vision Pro, we’re surely heading that way.
I love technology, and I think it can help us grow in many ways. It gives us access to knowledge that can help us better understand this world.
But we must make sure not to lose ourselves in it so much that we lose our humanity.