8 Comments
author

Increasingly front and center of every AI conversation. Can you handle the truth?

author

Few can!

Feb 26 · Liked by Michael Woudenberg

Nice. I am going to steal "don't trust, entrust," since it neatly encapsulates my own view. Well done!

author

Steal like an artist.

Feb 25 · Liked by Michael Woudenberg

Interesting concepts being posited! It seems to me that with regard to LLMs, intelligence would correspond to the training knowledge base, and collaboration would correspond to how well the system responds to the values of its creator/programmer before we assign it a level of autonomy. I guess that's the rub: how do we get the system to have a value system installed? These recent iterations of LLMs, both American and Chinese, appear to have hard guardrails, as opposed to softer ones, like a mountain or even a gentle hill bordering a valley road. How to do this? I wonder if it is possible to train on morality documents and then rescore the tokens against weighted morality keys (a rough sketch of that idea follows below).
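
For concreteness, here is a minimal sketch of that rescoring idea, assuming a toy setup: the `morality_keys` weights, the example `logits` dictionary, and the `rescore` helper are all hypothetical illustrations of the concept, not any real model's API or alignment method.

```python
# Hypothetical "weighted morality keys": shift next-token logits by a
# per-token weight before sampling, pushing disfavored tokens down and
# favored tokens up. Weights and tokens here are invented for illustration.

morality_keys = {"harm": -2.0, "deceive": -3.0, "help": +1.0}

def rescore(logits: dict[str, float], keys: dict[str, float]) -> dict[str, float]:
    """Add each token's morality weight (default 0) to its logit."""
    return {tok: score + keys.get(tok, 0.0) for tok, score in logits.items()}

# Example: "deceive" is pushed down, "help" pulled up, "walk" untouched.
print(rescore({"help": 1.2, "deceive": 1.5, "walk": 0.9}, morality_keys))
# {'help': 2.2, 'deceive': -1.5, 'walk': 0.9}
```

In practice this is a form of logit biasing, which acts as a soft guardrail (a nudge on probabilities) rather than the hard refusals the comment contrasts it with.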

author

Yeah, multiple angles on that one. Morality is even more fickle than trust! Clearly, LLMs are intelligent but can't yet be fully entrusted with tasks. People who do entrust them fully, as the lawyers caught filing AI-fabricated citations found out, get in trouble.

The other challenge is that there are still people out there who can't discern ChatGPT's bad writing and images from reality. I wouldn't entrust them with much either! :)


Thanks for a great post! I've been thinking a lot about trust and AI lately, and you've now given me a lot more to think about.

author

Great to hear! It's a messy topic, and a lot more conversations need to happen.
