24 Comments
Jul 7 · Liked by Michael Woudenberg

Andrej Karpathy floated the idea of "Intelligence Amplification" as a mirror term to "Artificial Intelligence" earlier this year. It's the same letters in reverse, and it frames these as artificial tools that help us leverage our own intelligence rather than do our thinking for us. (https://x.com/karpathy/status/1744179910347039080?lang=en)

That resonates with me, so this post does as well.

Also, thanks for the shoutout!

author

Thanks for sharing that. Amplification is good too.

Jul 7 · Liked by Michael Woudenberg

I think you make a great point. AI can be treated just as another tool to help streamline many of the day-to-day tasks we take on.

author

I’ve been finding a lot of really fun use cases that dramatically reduce the time it takes to do things. It’s been fun to experiment.

Jul 8 · Liked by Michael Woudenberg

One great use I've found is having it write short snippets of code for me. I used to spend many hours on Stack Overflow trying to solve specific problems, but AI is terrific for writing short blocks of code. Most cases still involve a little debugging and some familiarity with the language, so it isn't yet a stand-in for programmers, but it's sped up the process a lot for me.

author

Yeah, I've seen that. I like using it for Excel formulas and such as well, though it can get finicky.

Jul 8 · Liked by Michael Woudenberg

Great idea. Excel formulas are evil, which is why I always load data tables into Python and do any data manipulation there…
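As a minimal sketch of the "load it into Python" approach: the equivalent of an Excel SUMIF as a few lines of code. The file contents and column names here are invented for illustration; in practice you'd read a CSV exported from the spreadsheet.

```python
import csv
import io
from collections import defaultdict

# Stand-in for a CSV file exported from a spreadsheet.
data = io.StringIO(
    "region,sales\n"
    "east,100\n"
    "west,50\n"
    "east,25\n"
)

# SUMIF-style aggregation: total sales per region.
totals = defaultdict(float)
for row in csv.DictReader(data):
    totals[row["region"]] += float(row["sales"])

print(dict(totals))  # {'east': 125.0, 'west': 50.0}
```

Once the table is in code, every later transformation is a readable, testable step rather than a formula buried in a cell.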

Jul 7 · Liked by Michael Woudenberg

Do you think the goal of AI will forever be that of augmentation?

At what point do you think we'll bridge the gap between artificial intelligence and true intelligence? Then what?

author

I think there’s a lot of space left to go because biological intelligence is much more complicated. If and when we reach that tipping point, there will be a lot of interesting ethical dilemmas, that’s for sure.

But they’re kind of a brain in a jar right now. It will take a very, very long time before they no longer rely on humans.

Jul 7 · Liked by Michael Woudenberg

Agreed. I think our intelligence is intertwined with our body's DNA in ways we don't fully comprehend, so I feel full-blown machine intelligence is a long way off yet. Unfortunately, even by then I don't know if we'll be fully equipped to deal with the ethical and other dilemmas that will arise. Just look at the struggles we have with LLMs, and they're super basic compared to what we're discussing.

author

Agreed. I wrote a piece a few months ago, "What's in a Brain," trying to explore some of that too:

https://www.polymathicbeing.com/p/whats-in-a-brain


Really enjoyed this. I have been thinking a lot about this topic and recently wrote a post on it. Would love to collaborate. https://www.thoughtsfromthedatafront.com/p/15-the-future-of-work-humans-strategic

author

Oh, I forgot to mention, I did an essay on an actionable framework for critical thinking. Since you mentioned the concept in your essay, I'd love your thoughts:

https://www.polymathicbeing.com/p/do-you-really-think-critically


This is amazing!!

author

Thanks! What you described in your essay is very true. Technology has a very long history of shifting jobs from low skill to high skill.


You are right!! One of my favorite examples of the exception is Ford's assembly line: it broke complex car-making into simple, repetitive tasks that less-skilled workers could perform, reducing the need for skilled craftsmen, since each worker now specialized in a single, easily learned task rather than building entire cars.


This issue brought to mind a whole series of reflections on the various ways we interact with AI. The augmentation vs. automation paradigm is not only developing further; within both modes, there's a broadening opportunity to explore how we 'augment' and how we 'automate' various things. Thanks for sharing.

author

Awesome and agree. There's so much opportunity in this space with a bit more nuance.


I've always felt that AI was augmenting our own intelligence. I believe it's about information being optimized between global and local clusters of data, a synergy between man and machine 😎.

It's the cosmic intelligence at play.

author

The super intelligence is amalgamating human intelligence across domains and disciplines.

Jul 7 · Liked by Michael Woudenberg

Whoa whoa whoa, I came here for simplistic answers that confirm my ultra-inflexible worldview. What even is this crap!

Speaking of Ethan Mollick's work, what do you think of the cyborg/centaur framework? When I heard his description a couple of years ago, I realized that I'd been framing it the same way in my mind, but did not yet have the language to describe those two ways of using AI. I found it helpful when explaining to other folks.

author

Ethan's got a lot of good ways to frame things. I'm more of a centaur myself.

Jul 7 · Liked by Michael Woudenberg

Good middle position. AI is limited by what information it can take in, and so much of what humans absorb from the environment, culture, etc. is difficult to abstract into something AI can receive. Polanyi: "we know more than we can tell" — https://en.wikipedia.org/wiki/Polanyi%27s_paradox

author

Exactly right!
