Nicely put. I’m a trauma survivor too and have been learning slowly to recognize those triggers and my brain-gut connection. Perhaps as a result of this more nuanced appreciation of how our brains work, I’m less worried than my colleagues in tech that AI is on the verge of putting us all out of work.
Have you read “The Body Keeps the Score”? I’ve got tons of personal experience with the way our body holds information our minds aren’t yet ready to process...it’s fascinating and has led me to think our entire body has intelligences of various sorts, right down to our feet.
It’s a fascinating topic. Thanks for sharing.
Thanks, and I'll have to read that book. I've been reading more about how trauma and our response to it can actually affect gene expression in ways that can get passed on to kids.
The body is amazing and we know so little about it!
My physical therapist recommended The Body Keeps the Score. It's really good. I could see so much of what he talked about in my past and current life.
Very refreshing to read a piece like this. In my experience AI enthusiasts rarely seem to focus on the unique complexity of the human mind and body compared to AI programs and instead focus on the increasing 'intelligence' of these programs. It also strikes me as odd that improvement in the intelligence of AI should correspond to a conscious AI. Chickens aren't the brightest among us but we can probably agree that they are conscious.
I've also been interested in learning more about PTSD and your insights are greatly appreciated. I'm looking forward to reading more.
Thanks for the great feedback and I'm glad you enjoyed it. This one was interesting to study.
I like the diagram with the green and red balls and connections. I know that certain thoughts or events can "trigger" the emotional part of myself. I studied cognitive behavioral therapy to help myself (and others). Most of the time I can be mindful and question if something is real or an emotional flashback, which can get me back around towards the "green" in that example. But sometimes it is a much more difficult task battling it logically with just logical mindfulness. I'm going to try EMDR as well.
CBT is super powerful to help rewire.
One of the bigger issues I've seen recently with Veterans and PTSD is that 1. The VA made it a disability and compensates for it and 2. Vets have made it part of their identity in a perverted victim status. They are doing the opposite of CBT because they NEED the hijacking.
I have a Substack called The Drama of It All which is all about the problems of seeing yourself as a victim in the Drama Triangle. I know there are many perverse incentives to staying in that triangle perspective. I see what you mean. But they might need to get their own insurance and get help elsewhere if that is the issue. Does their insurance HAVE to come from the VA?
It’s not that their healthcare comes from the VA but if you claim PTSD and make yourself suffer you can get 100% disability which is worth $4,000 a month + a lot of additional fringe benefits.
Then there’s the added sympathy of the perception.
The irony is, many / most of these Vets never left their forward operating bases and rarely, if ever, saw combat.
Interesting. I've got family members who have PTSD from military stuff, so I was wondering whether, if EMDR works for me, I would recommend it to them. But it may be more complicated than I thought.
If they want to reduce the PTSD and potentially eliminate it, it will work. If there are incentives that make that identity more valuable than the alternative, they won't want to.
Very interesting:
"As I mentioned, as I researched PTSD, I came up with a mental model based on my reading. This was driven by the discovery that we don’t store memories as if they were movies, able to be recalled perfectly. We store them as feelings, impacts, and objects."
I feel like this explains why my memory seems to have gotten worse compared to when I was younger, as well as during more emotionally unstable times due to the situation I was in. Now that my environment is more stable and I haven't been feeling a lot of strong emotions, it seems as though I've kind of lost my memory "anchors". However, instances that evoked some emotion are much easier for me to recall.
There’s a lot to that. Emotion definitely has an anchoring effect.
It was an interesting read with a lot of useful information.
In general, I agree with you, but it is not entirely clear why you set research on artificial general intelligence in opposition to research on biological intelligence.
To whom is this opposition directed?
At those hyping the topic of AI? But they do not care about scientific truth; their goal is usually marketing, and their actions are guided by pragmatism (so it does not matter whether they agree with you or not).
AI researchers? But they are aware of what you write and of how it intersects with their work.
I don't know how immersed you are in the technical details of modern AI architectures, but the 'T' in GPT stands for "transformer": an architecture that learns internal representations of phrases, images, and signals, with an "attention" mechanism that learns the correlations between those representations. In effect, the transformer learns to operate on the meanings of its input in a style similar to the manipulation of "images" and "patterns" you describe.
This in no way means the transformer is comparable to biological thinking. But doesn't it open a new line of research for better understanding how biological intelligence works? A better understanding of the brain can be gained by drawing analogies not only from biological intelligence to artificial intelligence (as has been done since McCulloch, Pitts, and Rosenblatt), but also in the reverse direction.
Airplanes do not fly like birds, but they do, and dreams of the sky stimulated the development of scientific knowledge about the world around us (physics, mechanics, aerodynamics) which eventually gave rise not only to aviation, but also to a better understanding of bird flight as such.
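As an aside, the "attention" step mentioned above can be sketched in a few lines. This is a deliberately minimal numpy sketch of scaled dot-product self-attention, not GPT's actual implementation (which adds learned projections, multiple heads, and masking); the token vectors here are made-up numbers purely for illustration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Score every query against every key, scaled by sqrt(dimension)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns each row of scores into weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Output: each token becomes a weighted blend of the value vectors
    return weights @ V

# Three made-up 4-dimensional token representations
X = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 0.0]])

# Self-attention: queries, keys, and values all come from the same tokens
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4): one context-mixed vector per token
```

The point of the sketch is the last line of the function: each token's output is a mixture of every token's representation, weighted by learned similarity, which is the "correlations between representations" described above.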
I wouldn't say I'm opposed. But I don't think you get AGI as it's defined without understanding human biology, because otherwise you'll get an AI that's just logical and not organic.
The challenge is that the logical alone means you miss a great deal of intelligence.
First, I want to apologize right away for the somewhat aggressive style of my previous comment. But an overly tame comment would not have been interesting, right?
I may have misread some phrases in the first paragraphs, since English is not my native language, and so I hastily accused you of setting up an opposition. On the other hand, this gives you one more opportunity to convey the idea more precisely.
In fact, the information you provided was helpful to me. Not exactly new (some of it I had read or guessed intuitively), but it streamlined a lot of my ideas about natural biological intelligence.
Regarding your last comment: I don't really understand AGI. Moreover, I would say that not a single person in the world understands what AGI is, because no one knows what biological intelligence is. For now, even in the best minds it is something like a philosopher's stone, isn't it? AGI is supposed to be able to do something like "turn anything into anything".
However, alchemy eventually led scholars to chemistry, and today it is almost impossible to find a completely natural material in the environment around us. One can hardly doubt the role of chemistry in modern civilization, even though we still cannot turn sand into gold.
I completely agree that "reasonable, purposeful behavior" rests on the question of what counts as "reasonable" and "purposeful". Here we go beyond logic and run into the fact that a person is a biochemical machine, as you rightly write. Things like "copying consciousness" require an answer to the question of what consciousness is and how it relates to the nervous system and the organism as a whole.
But it seems to me this is not the first time science has encountered such limitations. Newton's classical mechanics turned out to be an incomplete description of the world, but well into the 19th century it was quite sufficient to ensure progress (and to lay the scientific foundations for further generalization).
It seems to me that the scientists who work on modern AI technologies are not inclined to equate their achievements with AGI. But the progress is really impressive (and most importantly, it seems to me that it can bring us closer to understanding natural intelligence. Or do you disagree with me here?)
As for the people chasing hype: for more than half a century we have been hearing from them about "thinking similar to human". But the devil is in the word "similar". How similar? Who is really in a position to quantify that today?
Once again, thank you for your essay; it is really informative. I'm glad to have this discussion, and thank you for your response to my comment.
P.S. I remembered what made me comment so actively on your essay: the much-discussed topic of the dangers of artificial intelligence https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence. Or rather, how it is perceived by laypeople and illustrated in pop culture. Your essay does an excellent job of explaining why this issue is insignificant in that popular sense. Here I would completely agree with you, though your focus is shifted toward skepticism about the concept of AGI. (AI is, of course, dangerous, like any powerful technology. But progress is impossible without curbing dangers; otherwise humanity would have had to abandon even fire, not to mention chemistry, electricity, and atomic energy. And for now, understanding this danger in itself creates demand on many areas of knowledge.)
No issues with the tone. I try not to read too much into that 😄
And you're right, it is a starting point, and so much more is needed to understand better. I actually explore some of what you're talking about in my new novel. It explores the realm of the possible when it comes to creating a sentient AI.
https://www.amazon.com/dp/B0C7NBZX89
an illuminating look at the "human processor"...I do a lot of thinking on how we humans, with all of our collective technological development and complex interpersonal systems, can throw all of it out of the window in an instant based on the material circumstances at hand.
I did take special interest in this part of your post:
"This creates a discordance if we are actually safe as most of us are in our first-world lives. While externally everything ‘feels’ safe, our gut is telling us something is wrong and so the brain freaks out. Because a threat that we can’t see is the worst thing for human survival. Historically, these were the threats that could kill you the fastest. So, the brain starts dumping alarm signals that lead to hyper-awareness and worry. What we diagnose as Anxiety."
how does this reality square with threats that are more impending than imminent, like climate change, resource shortages, and social degradation? it often feels "safe," but the writing is on the wall.
how can we counter what seem to be the inevitable effects of accurately perceiving the world around us with an eye on the future?
Great questions. So, I'm just ruminating here.
In history we've always had an apocalypse looming. Yet lives were also hard and there wasn't a ton of time to do more than survive. Now we have the ability to be safe while at the same time catastrophizing about futures that might occur (but are probably closer to the ancient manifestations of apocalypses).
We are safer, but we never got rid of that nagging pessimism. I'd wager the larger the delta, the greater the anxiety. It's almost like we need adversity in our faces or we'll create it anyway.
With AI, my worry isn't in the technology, it's in how humans will react to it.
sheesh, isn't that always the worry.
in my opinion, we're NOWHERE CLOSE to being ready for the societal upheaval that the technology will bring. our core institutions are arguably closer to medieval than "modern," and as we all recently learned, society is not set up well for sudden changes.
how many accountants haven't gotten the bad news yet?
still, in the end the tech is a reflection of those who contribute to it.
Very impressive thinking and writing you're doing, Michael. Thank you.
Thanks. I appreciate that.