Bruce Schneier is a respected computer security expert who lectures at Harvard’s Kennedy School. He is one of those rare people who combines deep competence in several areas of mathematics and computer technology with a recognized ability to analyze and recommend policies that make the world a safer and better place to live. I just finished reading his essay AI and Trust in his latest newsletter. It is a sharp analysis of trust relationships and of policies that would reduce the negative impacts AI could otherwise have on society. At nearly 3,500 words, it is substantial but not overly technical.

Here’s an extremely important point from Schneier’s essay:

AIs are not people; they don’t have agency. They are built by, trained by, and controlled by people. Mostly for-profit corporations. Any AI regulations should place restrictions on those people and corporations… At the end of the day, there is always a human responsible for whatever the AI’s behavior is. And it’s the human who needs to be responsible for what they do—and what their companies do… If we want trustworthy AI, we need to require trustworthy AI controllers.

The opening portion of the essay discusses the difference between what Schneier calls interpersonal trust and social trust. I found his treatment of trust illuminating and important. This is a well-reasoned, articulate piece that I will be recommending to thoughtful people, and it is accessible to readers without deep technical knowledge of AI. As far as I can tell, there is no paywall.