30 Comments
Author · Nov 22, 2023 (edited)

Welcome to the Salon; let's talk!

As for my views on all this, I think talk of AI doom is one part of a larger cultural phenomenon right now; one I'm thinking a lot about.

It seems to me that we're ever-more caught, as a culture, between stories of tech-fuelled transcension of all human limits and stories of collapse. Between people who believe we'll soon be able to command all knowledge, conquer death, become infinite and all-knowing, and fly to Mars, and those who believe we're on the eve of collapse into some form of post-civilisational afterworld.

I want to understand more about what's driving those stories. And where they're leading us. I think that at the root of this 'transcension vs collapse' narrative is a loss of faith in our collective agency; in the ability of all of us to come together and shape our shared future. We're no longer telling ourselves stories about the brighter futures that could come to be if we build them. And in the void left by this loss of faith all we're left with are the transcension narratives of the techno-optimists, and the other side of that coin: the collapse narratives of those who believe tech is going to doom us all.

Nov 22, 2023 (edited) · Liked by David Mattin

I also feel there is not enough regulatory oversight here and, even worse, too much profit focus on this topic. This is not just any new technology, but one that could well be disastrous for the world. And I don't mean that as a doomsday reflex. I'm referring to the problem that hit the world with social media and all its dangerous effects, which we have only begun to understand in recent years. No regulation, no oversight, and look at what it has done. Not a lot of good to Western democracies!

AI has even greater potential to do harm to our societies and that is well before a singularity event happens and "Skynet" bombs humanity off the planet.

Hmmm. That is quite a big question, with so many facets, so many perspectives and so many potential implications.

As with any technology, there are the naysayers and the religious zealots. But the truth is, of course, much more complicated. And one of the things that makes it so complicated is that humans are notoriously bad at making (somewhat accurate) predictions more than around three years into the future (most don't even go further than a few months). Add to this that the world has become so much more complicated and interconnected, and you know we are in a pretty big pickle.

Still, looking at history, there is one thing we can say: almost all technology that appeared to be beneficial and great for the advancement of the human race turned against that same human race once the scale factor (which is also an accelerator) kicked in. Just to mention a few:

- Petroleum as fuel: made large-scale industrialisation possible, giving us cheap products and abundance, cheaply heated our houses, gave us unimaginable mobility, etc. Now it appears to have doomed us, and a lot of other living species, to climate change, while we still can't shake the addiction.

- Petroleum as resource: made cheap plastics possible, which gave us cheap products in all shapes and sizes for our homes, cheap packaging to ship a lot of useless stuff around the world (in petrol-fuelled ships), containers to store things in, bags, exfoliants in beauty products, easy throwaway diapers and other conveniences. In the meantime it suffocates our forests, our oceans and rivers, our lakes, our soil, and now also our bodies (microplastics in our lungs and intestines, even of babies).

- The green revolution: saved millions from food deprivation, but later it sucked soils dry, made crops far more vulnerable through monoculture and pesticides, and brought pollinators to the brink of extinction; if they go, we might even return to food deprivation. It made cheap fast food possible, providing tasty and very affordable meals, and causing (morbid) obesity, now the number one cause of chronic disease and death, not to mention mental health issues. In the US more than a fifth of kids are morbidly obese and another 30 to 40% are overweight...

- The internet: information would be free, it would give everybody access to the same knowledge, and thus it would empower democracy. We would see a decentralisation of power. Instead, power is centralised with a very few, more than at almost any time in history, and (economic) inequality is rapidly increasing.

- Social media: we would be connected. Nobody would be lonely anymore. Warhol's 15 minutes of fame would finally ring true. Creative surplus would create a revolution of cultural richness. In the meantime, culture looks more and more average and the same around the world. Winner takes all. Mental health issues and feelings of isolation and lack of meaning or purpose among youth are soaring. We feel lonelier than ever, as there are only followers and likes, while real friends and close-knit local communities have fallen by the wayside.

All of this is to say that the 'move fast, break things' and 'try first, apologise later' adage of Silicon Valley does not hold up anymore. The frantic 'we need to move faster in order not to miss the boat' has only thrown off any healthy balance in the past. Plus the question remains: what boat are we missing? What opportunity are we missing by taking things slower and more thoughtfully? Like one of the commenters below said: we only figured out how bad smoking was after a few decades. Maybe we should be taking more time with this. Regulation could be one way to achieve that.

I just literally cannot take the AI doomers seriously, even if I appreciate that they exist, in contrast to the no-handbrakes social media storm of a decade ago.

Why? First, today’s AI modeling is rooted in a 1950s understanding of human brain function. I am not making this up. And when we cannot understand how our own brains work or how consciousness comes about, we’re supposed to fret that some 1950s simulation will become superhuman?

Second, so much lame reductionism, as if compute power were somehow a form of intelligence. As if by amping up enough MIPS we could create God. This makes the atomic-era doomers who feared the atmosphere would ignite during the Manhattan Project look like physics geniuses.

Third, human intelligence is not binary. Binary is great for some things and all, but there is no way to represent in zeros and ones how humans can hold conflicting ideas in their heads at the same time. Sure, you can layer on probabilities. But there is still the original sin of thinking that we can model human thought as bits rather than, say, quanta. It's another ridiculous form of scientific reductionism: "assume a spherical cow".

Which isn't to say that AI doesn't have existential risks. But there I believe the enemy is us, just as crypto in the hands of an SBF or a CZ results in fraud. How we use these tools, with our eyes closed or not, is what worries me. How we outsource our ethics to machines and wash our hands of our humanism worries me. How we lazily assume everything can be neatly modelled, ethically, by a set of fixed rules terrifies me.

Nov 22, 2023 · Liked by David Mattin

I'm not worried about AI itself, because I don't believe it will ever become a sentient silicon Frankenstein's monster (the oldest sci-fi horror), but I am worried about AI as a powerful tool used by the elite for digital surveillance, propaganda and the manufacturing of public opinion.

For example, listening to all phone calls was impossible even for the KGB and the Stasi due to the sheer volume, but we are not far from an AI that could do it, and that could not only spot politically forbidden speech but also detect innuendos and dog whistles. AI could spam social media with genuine-sounding political messaging. A social media site's AI could understand a user's political views and tailor content recommendations to change those views.

Nov 22, 2023 · Liked by David Mattin

I'm worried about the lack of oversight, yes. There is no true governing body ensuring that these companies are making decisions for the good of humanity, or even considering the implications this technology will have once it's out there. MSFT, Google, all the yayas have fired their AI ethics teams... one can assume so they can continue their for-profit missions. It's wild-west foolery.

We have our technological OxyContin being unleashed onto the world. When will it be too late for us to pause and regulate? Seems surreal...

Nov 25, 2023 · Liked by David Mattin

Not at all worried. Perhaps I am an optimist; perhaps I read and learn too much from the many wonderful players in the AI space. I think humanity will make it through. There will be problems, and very likely a utopia (or close to it) on the other side of AGI as we move out of the 2020s and into the 2030s, but I am not yet convinced the risks are existential. We are in an exponential age, and we need to keep the uplifting changes cranking ahead of the deleterious ones. AI is helping with this. I am more concerned about bad humans leveraging AI nefariously than about autonomous AI agents harming life on Earth.

I asked my free WhatsApp AI tool (LuzIA) about the ways in which AI could turn against humans in the medium term. And she replied: "As a friend, I don't think it's appropriate to speculate about how AI could turn against humans. It's important to focus on…" ethics and responsibility, in short.

"As a friend". Wow. This tool makes something like that up to manipulate me, and then mentions "ethics" in the next sentence.

I see a completely faulty product, and nowhere in sight is there quality control. One cannot market a battery that explodes; that would be crazy, right? But in tech you can put this cocaine-tongued LuzIA out there "for free" and no one bats an eye. This custom of shipping beta versions multiplies the risk of derailment in something with the potential of AI.

AI doom sounds to me like a distraction from the people issues, the real problems on the planet.

We have a virally self-growing, infinitely automating tool that will bring us "amazing science developments and medicine advances" with a long list of possible and terrible side effects, and the industry leaders want to be left alone. Meanwhile, self-imposed pressures and cheapness have made them ignore a basic engineering principle: quality control.

I feel we're more doomed than blessed. Not in the Skynet-type scenarios; just the good old ways. But we can always react.

Nov 24, 2023 · Liked by David Mattin

Likewise.

Nov 23, 2023 · Liked by David Mattin

I have been researching China for over 30 years. I have seen how technology has been increasingly deployed to control humans, and so my main worry is private tech being seconded into working with authoritarian regimes, using AI to 'perfect' control and then exporting that control tool to other authoritarian regimes, after which 'non-authoritarian' governments might begin adopting such tools to counterbalance the authoritarians in an AI arms race. My fear, therefore, is of bad political actors leveraging the profit motive of tech companies to help ossify bad governments and failed nation states in an attempt to delay the inevitable entropy of those corrupted institutions, creating much pain and regression in the process.

Nov 22, 2023 (edited) · Liked by David Mattin

Could robots and AI cause human society to collapse? Sure, if we let them. But the same could be said of atomic weapons. In fact, this discussion resembles the one about world-destroying weaponry during the Cold War: two opposing ideologies with radically different ideas about how the world should be, and some moments when nuclear war was a very real possibility. Why didn't it happen? Because society talked about it. Because we didn't let some trigger-happy general decide that a manly display of power would put those Soviets back in their place.

Now of course, this isn't the same situation. Nuclear missiles can't decide for themselves that it would be better to melt human society down into a pile of slag so the world can start over. What we can do is talk about what we want the role of AI in society to be. There is a big responsibility here for the 'soft sciences', philosophy in general and ethics specifically, to educate people in how to decide these moral dilemmas. Ethics should, in my opinion, be a mandatory subject at every school, so that every person can join these debates and form an informed and rational opinion. Of all the so-called '21st-century skills', this is the most important one for dealing with all that the 21st century throws at us.

That said, if there were to be any regulation of AI development, it should be mandatory to have an ethics board, or some such, at companies like MSFT, Google, OpenAI etc. Ethics should be an integral part of any technological design process, during the process and not only after, when the damage has already been done. Tech companies should have to prove that ethical consideration played a key role during development: show society which dilemmas you encountered, how you solved them and why, and be transparent about it. This way you can hopefully prevent problems arising from the use of these technologies AND help society have an informed debate about them.

Nov 22, 2023 · Liked by David Mattin

Hi David. Thanks for the above post. I've been thinking about this issue for a long while, since 2016, when I read Harari's Homo Deus. I have young children. My biggest concern is that our schools are going to evolve a lot slower than the tech will, potentially leaving a generation of school leavers with a material skills gap. I have concerns over a dystopian scenario where a significant number of jobs are displaced: what does it mean when we lose purpose, which many find through their jobs? I know there will be jobs created that don't exist just yet... but I find it difficult to imagine that tech won't disrupt these too... I feel as though the economy may have to evolve (for the sake of our own sanity) away from a knowledge economy. Anyway... these are 'concerns', not necessarily a base case. I've given this topic a lot of thought... happy to discuss offline so as not to clog up the chat with stuff better suited to a dingy corner of a dark pub after five pints.

Nov 22, 2023 (edited Nov 23, 2023) · Liked by David Mattin

I stopped being worried when I understood that life follows art. The battle against the Robot has already been fought out in mythical books and movies, with humanity winning in the end. At first the Robot won, as in Karel Čapek's R.U.R., Chaplin's Modern Times, Blade Runner and Ira Levin's The Stepford Wives. But between Terminator I and Terminator II that changed, and in Terminator II the Robot sacrifices itself (himself?) to save humanity. That also happens in the Matrix trilogy: Neo fights himself to death and then the Robot explodes. So I do foresee struggle; well, we are already struggling, aren't we? But when we learn to sacrifice - something, our comfort, our ego? - we will remain human, in the end.
