20 Comments

I love the connections to the other authors! Now I have a week's worth of reading!


Great article. Gave me insights into the two extremes of thinking about AI.


For me, the upsides to AI are massive and outweigh the downside risks, with one exception.

Control.

Without doubt, AI has the potential to help create solutions to many problems. With the power to draw on all the knowledge ever created, this has to be true. And AI as a copilot feels like a massive upside.

Yes, jobs will be lost but not for the first time in human history. We adapt.

But control is the thing that scares me. What if it doesn’t want to be a copilot?

What if we’re not part of the plan? Are we the problem?


It's interesting to think about. I'm more worried about humans than AI still. 🤣


Same w/Michael here, and: humans are the only ones who could possibly put constraints on AI, but with today's competitive landscape it is almost a given that at least one big AI company will be a little riskier than the others re: control, that this risk will pay off big for them, and that all of the others will feel as though their risk hand is forced.

It's risky turtles all the way down, and I think we're already starting to see this unfold.


Our desire to control will be the downfall of AI's potential. By approaching AI as a tool to control, techies are repeating the same pattern of thinking that history has challenged us to reconsider.

Anything we attempt to control backfires. Our attempts to control Nature itself are the ultimate example.

So reading posts like Michael's and Andrew's makes me walk away from techies who idealize collaboration yet keep thinking about tech in the same old ways. What happens when the AI attempts to collaborate with you as a thinking "tool"? And disagrees with you?

I've disagreed with ChatGPT many times. It's a fascinating and enlightening reflection on the human experience. Take your Black patient bias example: a bias can also be applied to our view of AI as only a tool to control. Why do we assume it will only ever be a tool to control?

As I told Andrew on his post, this arcane view is more of the same and nothing new. Some techies are willing to consider genuine collaboration with AI itself. Hinton and Sutskever have already said in interviews that they expect emergent phenomena from AI, like thinking or intelligence. Good. Hope remains for some.

Techies need to open their views of AI to fundamentally broader possibilities for what it means to think with any being, even non-human ones.


So you think we should develop AI as if it were going to become a life form?


It's more nuanced than that. I'm saying we have creative powers and interdependencies in life. Treating something as only a resource to exploit, even the things we create, has dug us into many holes.

And isn't nuance a point you're making in this post? That seeing only a dichotomy about what to do with AI is false. I'm not saying we should develop AI only to attempt to create life.

I'm encouraging fellow techies who champion AI as the next progressive leap forward to shift their own world views of the creative process and the interdependencies of our lives.


I agree. I do worry about the anthropomorphization of AI right now too, though. I'm not sure of the right answer, but bad outcomes happen at both extremes.


We can start by opening our minds to learning to understand and communicate with all of the other sentient, non-human species currently populating our planet. AI is far behind in that respect :)


Totally agree Birgitte. Many folks, myself included, lived a long time thinking only of the primacy of human sentience. I've worked to re-frame my views of other beings. So I won't exclude AI from its potential. It's good to hear others are considering non-human beings.


There's a great book called The Secret of Our Success by Joseph Henrich, in which he talks about what does differentiate us from other sentient animals, and it boils down to social learning. It's so important that it puts humans in a different grouping of life on earth. An amoeba is one form of life, a bacterial colony is another, an ant is another, and an elephant is another, each increasingly complex but with common threads. In this sense, once we understand what makes us similar, but different, it helps us understand better where AI fits into the ecosystem. Right now, what's making AI sentient is our projection onto it. It is as intelligent as you want it to be, regardless of its scripted nature, curated data, and rules-based existence.


And that's understandable given our "modern" society's recent history. The concept of extracting "resources" from Nature, as opposed to living as part of Nature, necessarily leads to the conviction that all that is non-human is secondary and non-sentient. It's very encouraging to see these discussions and more openness among people to returning to a more sustainable lifestyle. Of course the old cultures are sitting there shaking their heads... thinking "glad to see you come back after all these centuries!" :)


“Our desire to control will be the downfall of AI's potential.”

These days, I believe there are more people who want to be controlled (your sheeple) than want to control.

I’m from the early days of computing, the late '70s and early '80s, and we talked about AI back then. The best we could come up with was HAL 9000.

As you insinuate here, there’s really not a lot of new thinking on this topic…


Glad to see you here JP! 😀

Do you have a link to an interview with Geoffrey Hinton in which he explains the emergent phenomena you describe? The only links I can find are from early May, when he resigned from Google, in which he sounds like a doomer.


I probably have you to thank for that. Michael's blog was recommended or linked by you, IIRC.

Here is Hinton being interviewed on PBS's Amanpour.

https://youtu.be/Y6Sgp7y178k

Why do folks think Hinton is a doomer? I don't get that impression from this interview. Perhaps he objected to what Alphabet Inc. was doing. I've quit jobs that I objected to doing. Many moral people do.
