30 Comments
author
Nov 22, 2023 · edited Nov 22, 2023 · Author

Welcome to the Salon; let's talk!

As for my views on all this, I think talk of AI doom is one part of a larger cultural phenomenon right now; one I'm thinking a lot about.

It seems to me that we're ever more caught, as a culture, between stories of tech-fuelled transcension of all human limits, and stories of collapse. Between people who believe we'll soon be able to command all knowledge, conquer death, become infinite and all-knowing, fly to Mars, and those who believe that we're on the eve of collapse into some form of post-civilisational afterworld.

I want to understand more about what's driving those stories. And where they're leading us. I think that at the root of this 'transcension vs collapse' narrative is a loss of faith in our collective agency; in the ability of all of us to come together and shape our shared future. We're no longer telling ourselves stories about the brighter futures that could come to be if we build them. And in the void left by this loss of faith all we're left with are the transcension narratives of the techno-optimists, and the other side of that coin: the collapse narratives of those who believe tech is going to doom us all.

Nov 22, 2023 · edited Nov 22, 2023 · Liked by David Mattin

I do also feel there is not enough regulatory oversight here and, even worse, too much profit focus on this topic. This is not just any new technology but one that could well be disastrous for the world. And this is not meant as a doomsday reflex. I refer to the problem that has hit the world with social media and all its dangerous effects, which we have only started to understand in the past few years. No regulation, no oversight, and look at what it has done. Not a lot of good for Western democracies!

AI has even greater potential to do harm to our societies and that is well before a singularity event happens and "Skynet" bombs humanity off the planet.

author

The social media comparison is an apt one. Our democracies are still struggling to comprehend the impact of that technology, and to regulate it. And Web 2.0 is now more than 20 years old! So the challenge posed by AI, especially given the speed at which it is evolving at the moment, is huge.

I wonder if we need new democratic forms to meet this challenge. I'm increasingly drawn to the idea of People's Councils to fuel more citizen involvement.


uh ... that is a difficult one. I don't have that much trust in the masses.

But beyond my personal beliefs ... given people's willingness to discourse & debate nowadays, this might end up similar to the Greek democracy of ancient times, where some intellectuals / privileged people decide for all others. Who would govern these People's Councils?

And is this then so much better in the end vs. governments, who should up their game and do their job?


Yea.... I'm sort of agreeing with T. Eckhardt below. I'm not all hung up on IQ, but here in our crystal spheres do we really have any idea of what "people" would do? Across the range of a far more liberal definition of IQ. Various ill-defined "smarts".

Frankly "People's Councils" has that nice ring harking back to various power grabs by common frustrated shit-heads. Pardon my language, but if you want to see how horrible the "people" can be, give them power.

David, please expand on the influences which make "People's Councils" seem like a good thing? Thanking you in advance.

Nov 22, 2023 · Liked by David Mattin

I feel we are not capable of oversight here: the orders-of-magnitude shifts in the power held in a few hands, the magnitude of that power, and the smallness of the number of hands wielding it. And so, so much of educated society does not have enough background to have any sort of real "gut feel" for what is going on and will go on.

And even if Western agencies had taken seriously the "know thyself" advice and made it widely understood in light of the last few decades' advances in the cognitive sciences, agencies which could regulate will be supplanted by state and private bad actors.

So I agree, regulation would be nice, but it appears a typical political situation where we will hear "Trust me, I am the one who knows how to regulate this", and it is pure BS. And here we are, a few spoiled children among us playing with far more powerful tools than we have ever imagined. "Trust us, it will be great." Suuuuure.


hmmm. That is quite a big question with so many facets, so many perspectives and so many potential implications.

As with any technology, there are the naysayers and the religious zealots. But the truth is of course much more complicated. And one of the things that makes it so complicated is that humans are notoriously bad at making (somewhat accurate) predictions that go further into the future than around 3 years (most do not even go further than a few months). Add to this that the world has gotten so much more complicated and interconnected, and you know that we are in a pretty big pickle.

Still, looking at history, there is one thing we can say: almost all technology that appeared beneficial and a great advancement for the human race turned against that same human race once the scale factor (which is also an accelerator) kicked in. Just to mention a few:

- petroleum as fuel: made industrialisation on a large scale possible, giving us cheap products and abundance, cheaply heated our houses, gave us unimaginable mobility, etc. Now it appears to have us and a lot of other living species doomed due to climate change, while we still can't shake the addiction.

- petroleum as resource: made cheap plastics possible, which gave us cheap products in all shapes and sizes for our homes, cheap packaging to ship a lot of useless stuff around the world (in petrol-fuelled ships), containers to store stuff in, bags, exfoliants in beauty products, easy throwaway diapers and other conveniences. In the meantime it suffocates our forests, our oceans and rivers, our lakes, our soil, and now also our bodies (microplastics in our lungs and intestines, even of babies).

- the green revolution: saved millions from food deprivation, but later it sucked the soil dry, made crops much more vulnerable due to monoculture and pesticides, and brought pollinators to the brink of extinction; when that happens, we might even go back to food deprivation. It made cheap fast food possible, providing tasty and very affordable food, and causing (morbid) obesity, now the number one cause of chronic disease and death, not to mention mental health issues. In the US more than a fifth of kids are morbidly obese and another 30 to 40% are overweight...

- internet: information would be free, it would give everybody access to the same knowledge, and thus it would empower democracy. We would see decentralisation of power. Instead, power is centralised with a very few, more than at almost any time in history, and (economic) inequality is rapidly increasing.

- social media: we would be connected. Nobody would be lonely anymore. Warhol's 15 minutes of fame would finally ring true. Creative surplus would create a revolution of cultural richness. In the meantime culture is looking more and more average and the same around the world. Winner takes all. Mental health issues and feelings of isolation and lack of meaning/purpose among youth are soaring. We feel lonelier than ever, as there are only followers and likes, while real friends and close-knit local communities have fallen by the wayside.

All of this is to say that the 'move fast, break things' and 'try first, apologize later' adages of Silicon Valley do not hold up anymore. The frantic 'we need to move faster in order not to miss the boat' has only thrown off any healthy balance in the past. Plus the question remains: what boat are we missing? What opportunity are we missing by taking things slower and more thoughtfully? Like one of the commenters below said: we only figured out how bad smoking was after a few decades. Maybe we should be taking more time with this. Regulation could be one way to achieve that.

author

Thanks Kitty. I definitely think the age during which we could tolerate a 'move fast and break things' attitude to tech is over. As you say, that mindset – incubated inside the Web 2.0 giants – has left us with significant problems when it comes to the impacts of social media.

I think some form of regulation has to be the way. And I think we need to find new ways to involve citizens in the decision making around that.


I just literally cannot take the AI doomers seriously, even if I appreciate that they are there, vs the no-handbrakes social media storm of a decade ago.

Why? First, today’s AI modeling is rooted in a 1950s understanding of human brain function. I am not making this up. And when we cannot understand how our own brains work or how consciousness comes about, we’re supposed to fret that some 1950s simulation will become superhuman?

Second, so much lame reductionism, as if compute power is somehow a form of intelligence. As if we amp up enough MIPS we can create God. This makes the atomic-era doomers who thought the planet would incinerate at the Manhattan Project look like physics geniuses.

Third, human intelligence is not binary. Binary is great for some things and all, but there is no way to represent how humans can hold conflicting ideas in their heads at the same time. It’s either zero or one. Sure, you can layer on probabilities. But there is still an original sin of thinking that we can model human thought as bits rather than quanta, for example. It’s another ridiculous form of scientific reductionism: “assume a spherical cow”.

Which isn’t to say that AI doesn’t have existential risks. But there I believe the enemy is us. Just as much as crypto in the hands of an SBF or CZ results in fraud. How we use these tools with our eyes closed or not is what worries me. How we outsource our ethics to machines and wash our hands of our humanism worries me. How we lazily feel everything can nicely be ethically modeled by a set of fixed rules terrifies me.

author

I'm increasingly obsessed with questions around the nature of *human* cognition and intelligence, and the difference between that and the machine processes that we're calling 'intelligence'.

Hope to write a lot more about this, and bring in some interesting guests to talk about it too.


Hear, hear. As someone with an academic background in neuropsychology, I find the obsession with artificial general intelligence baffling. We still don't know how our own consciousness and intelligence work, so how would we know how to model them? Let alone that we can't get into "god mode" and observe ourselves. Our definition of intelligence has historically been informed mainly by reductionist science, where we can only describe and measure that which can be defined as distinct phenomena and relations between those. Awareness of consciousness and intelligence as emergent properties and states is, relatively speaking, fresh. Also, what would we have to gain by AGI? Are highly specialized robots and AI improving and replacing difficult, dangerous or tedious work not more feasible and desirable, and thus more viable? And are the current side effects and foundations of LLMs and generative AI not more troublesome? This whole AI doom talk sounds a lot like misdirection.

author

It is crazy how people are willing to dive into a discussion on whether AI is/can ever be conscious when we really have no understanding of our own consciousness and no consensus on what the word really means.

Nov 22, 2023 · Liked by David Mattin

I'm not worried about AI itself, because I don't believe that it will ever become a sentient silicon Frankenstein's monster (the oldest sci-fi horror), but I'm worried about AI as a powerful tool used by the elite for digital surveillance, propaganda and manufacturing public opinion.

For example, listening to all phone calls was impossible even for the KGB and the Stasi due to sheer volume, but we are not far from an AI that could do it, and could not only spot politically forbidden speech but also detect innuendos and dog whistles. AI could spam social media with genuine-sounding political messaging. A social media site's AI could understand a user's political views and tailor content recommendations to change those views.

Nov 22, 2023 · Liked by David Mattin

I’m worried about lack of oversight, yes. There is no true governing body ensuring that these companies are making decisions that are for the good of humanity, or even considering the implications this technology will have once it's out there. MSFT, Google, all the yayas have fired their AI ethics teams... one can assume so they can continue their for-profit missions. It’s wild-west foolery.

We have our technological OxyContin being unleashed on the world. When will it be too late for us to pause and regulate? Seems surreal..

author

I'm somewhat drawn to the idea of some kind of international oversight body; something akin to the International Atomic Energy Agency.

What do you think of that?

At the same time I'm highly suspicious when Big Tech — including OpenAI, which is basically Microsoft at this point — calls for regulation. As many have pointed out, it feels that they want regulation mostly to consolidate their own position and scupper insurgents.

Nov 22, 2023 · Liked by David Mattin

I agree on an international governing body, disconnected from the interests of for-profits.

Nolan’s Oppenheimer film had me rattled; I saw in it a reflection of our own blind spots with AI. Yes, AI has been around for a long time. But like tobacco, it may be a while before the negative effects start being recognized. It seems we are only at the surface of conjecturing the aftermath.

I don’t think weighing implications is doom-and-gloom. It’s being mindful, considerate and prepared for what is at stake. That seems more humanitarian than taking the “let’s wait and see” approach.

Big Tech bullhorning regulation does come off as disingenuous; it rhymes with greenwashing. Makes me wonder what they're distracting us from. I find the transparency war to be the most dangerous act of all this.

Nov 25, 2023 · Liked by David Mattin

Not at all worried. Perhaps I am an optimist; perhaps I read and learn too much from the many wonderful players in the AI space. I think humanity will make it through. There will be problems, and very likely a utopia (or close to it) on the other side of AGI, leading out of the 2020s and into the 2030s, but I am not yet convinced the risks are existential. We are in an exponential age, and we need to keep the uplifting changes cranking ahead of the deleterious ones. AI is helping with this. I am more concerned with bad humans leveraging AI nefariously than with autonomous AI agents harming life on Earth.

author

I agree that the more immediate danger is not AI taking control but humans who are in control of AI and use it to bad ends.


So per usual, we live not in the midst of a technology problem, but a human behavior problem. How then can we leverage technology, psychology, and incentives in a game theoretic space to direct humanity toward the prosperous future we know we can create? It’s a possibility and probability. We just need to make it happen.


I asked my free WhatsApp AI tool (LuzIA) about the ways in which AI could turn against humans in the medium term. And she replied: "As a friend, I don't think it's appropriate to speculate about how AI could turn against humans. It's important to focus on..." ethics and responsibility, in short.

“As a friend”. Wow. This tool makes something like that up to manipulate me and then mentions “ethics” in the next sentence.

I see a completely faulty product, and nowhere in sight is there quality control. One cannot market a battery that explodes; that would be crazy, right? But in tech you may have this cocaine-tongued LuzIA out there "for free" and no one bats an eye. This custom of shipping beta versions multiplies the risk of derailment in something with the potential of AI.

AI doom sounds to me like a distraction from the people issues, the real problems on the planet.

We have a virally self-growing, infinitely automating tool that will bring us "amazing science developments and medicine advances", with a long list of possible and terrible side effects, and the industry leaders want to be left alone. Meanwhile, self-imposed pressures and cheapness have made them ignore such a basic engineering principle: quality control.

I feel we're more doomed than blessed. Not in the Skynet-type scenarios; just the good old ways. But we can always react.

Nov 24, 2023 · Liked by David Mattin

Likewise.

Nov 23, 2023 · Liked by David Mattin

I have been researching China for over 30 years. I have seen how technology has been increasingly deployed to control humans, and so my main worry is about private tech being seconded into working with authoritarian regimes to utilise AI to 'perfect' control, then export that control tool to other authoritarian regimes, at which point 'non-authoritarian' governments might begin to adopt such control tools to counterbalance the authoritarians in an AI arms race. My fear is therefore of bad political actors leveraging the profit motive of tech companies to help ossify bad governments and failed nation states in an attempt to delay the inevitable entropy of those corrupted institutions, creating much pain and regression in the process.

author

The techno-surveillance state that China is building is something to behold. We've never seen anything like it.

Would love to discuss this at greater length!

Nov 22, 2023 · edited Nov 22, 2023 · Liked by David Mattin

Could robots and AI cause human society to collapse? Sure, if we let them. But the same could be said for atomic weapons. In fact, this discussion resembles the one about world-destroying weaponry during the Cold War. Two opposing ideologies with radically different ideas about how the world should be. And there were some moments when a nuclear war was a very real possibility. Why didn't it happen? Because society talked about it. Because we didn't let some trigger-happy general decide that a bit of manly display of power would put those Soviets back in their place.

Now of course, this isn't the same situation. Nuclear missiles can't decide for themselves that it would be better to melt human society down into a pile of slag so the world can start over. What we can do is talk about what we want the role of AI in society to be. There is a big responsibility here for the 'soft sciences', philosophy in general and ethics specifically, to educate people on how to decide these moral dilemmas. Ethics should, in my opinion, be a mandatory subject in every school, so that every person can join these debates and form an informed and rational opinion. Of all the so-called '21st-century skills', this is the most important one for dealing with all that the 21st century throws at us.

That said, if there were to be any regulation of AI development, it should be mandatory to have an ethics board or some such at companies like MSFT, Google, OpenAI etc. Ethics should be an integral part of any technological design process, during the process and not only after, when the damage has already been done. Tech companies would have to prove that ethical consideration played a key role during development. Show society which dilemmas you encountered, how you solved them and why, and be transparent about it. This way, you can hopefully prevent problems which can arise from the use of these technologies AND help society have an informed debate about them.


Agreed. To me it feels like society accepts that big corporates come up with AI, run into problems, and are then regulated back into place. Why have an ethics board at a company? We've seen that a change in management can result in the removal of those boards. Instead of being naive in trusting these corporate actors to regulate themselves (as history shows this hardly ever works well), we as a society should define the playing field and have a vision on AI in general, instead of being reactive when trouble is already there.

Nov 22, 2023 · Liked by David Mattin

Hi David. Thanks for the above post. I've been thinking about the issue for a long while - since 2016, when I read Harari's Homo Deus. I have young children. My biggest concern is that our schools are going to evolve a lot slower than the tech will, potentially leaving a generation of school leavers with a material skill gap. I also have concerns about a dystopian scenario where a significant number of jobs are displaced: what does it mean when we lose purpose, which many find through their jobs? I know there will be jobs created that don't exist just yet... but I find it difficult to imagine that tech won't disrupt these too... I feel as though the economy may have to evolve (for the sake of our own sanity) away from a knowledge economy. Anyway... these are 'concerns', not necessarily a base case. I've given this topic a lot of thought... happy to discuss offline so as not to clog up the chat with stuff that is better suited to a dingy corner of a dark pub after 5 pints.

author

Thanks Amres. Ha don't worry about clogging up the chat; discussion, random thoughts, rabbit holes, all welcome in the Monthly Salon. So please do share more thinking!

The concern about the world machine intelligence is building, and in particular what it means for children, is one I hear a lot. I think the technology is evolving far too fast, and is too complex, for us to really understand what it will mean for jobs. But it's clear that substantial parts of many current jobs will be automated away. In an environment in which so many human domains of activity are being turned into procedural technique for machines, I think what will be most prized and valuable will be precisely what machines can never do: being human. I think we need to lean hard into this insight when it comes to education, and re-emphasise the value of creativity, self-expression, and empathy. Machines will do so much; but they will never be able to be *another person who can truly see and empathise with me*. I think much of the economy will shift in that direction: towards creativity, entertainment, care, and simply being with one another.


yes, v much agree. i have notes on my phone from 2016 where i was trying to imagine what's left over in a dystopian scenario...had it boiled down to

1/ capital/resource allocators

2/ some form of entertainment

3/ research/design (ai has scaled human knowledge... but from studying past discoveries/advancements, many of them have come from out-of-the-box thoughts, ie creativity... think there's space to do that here... but it's v likely reserved for a far smaller segment of the population, ie... the very right tail of the bell curve)

4/ end of life care...

also agree with the notion of 'being human'... im focusing a lot of time on crafting an alt-curriculum on what it means to be human... i see this as an optimistic scenario in a dystopian future... ie we re-write the social contract with each other to be more community focused and allow tech to make the larger, macro decisions.

Nov 22, 2023 · edited Nov 23, 2023 · Liked by David Mattin

I stopped being worried when I understood that life follows art. The Battle against the Robot has already been fought out in mythical books and movies, with humanity winning in the end. At first the Robot won, as in Karel Čapek's R.U.R., Chaplin's Modern Times, Blade Runner and Ira Levin's The Stepford Wives. But between Terminator I and Terminator II that changed, and in Terminator II the Robot sacrifices itself (himself?) to save humanity. That also happens in the Matrix trilogy: Neo fights himself to death and then the Robot explodes. So I do foresee struggle; well, we are already struggling, aren't we? But when we learn to sacrifice - something, our comfort, our ego? - we will remain human, in the end.

author

Thanks Lisette; it would be absolutely fascinating to read a history of robots in literature and film. And to chart the evolution, through that, of our hopes and fears around machine intelligence and technology more broadly. I really feel we're at such a consequential moment now when it comes to the human relationship with technology, and that perhaps the primary collective challenge for the decades ahead will be to maintain recognisably human modes of living and being in the face of enormous technological change. And like you, I have faith we can do it.
