Situational Awareness of Artificial Intelligence – a Theological Perspective

I am a theologian and an army officer. Of course, I need to read up on AI. Until recently, I have been a casual user of Microsoft’s version of ChatGPT. It comes in handy when I want to refine my writing or create images for blog posts. But AI isn’t just fun and games:

Leopold Aschenbrenner wrote a series of essays, Situational Awareness: The Decade Ahead. Some of it is hopeful, some of it disturbing. It is a very solid reflection on what is to come in AI. Now, I am not a tech person. I am a pastor and a soldier. As such, I read Situational Awareness from those two perspectives, with an eye on its ethical and military implications. This is my review as a theologian.

Aschenbrenner’s AI Theology
Leopold Aschenbrenner says he and a few hundred San Francisco insiders have Situational Awareness of what is going to come in AI. His essay is an attempt to catch the rest of us up. He starts gently by defining what AI is, is not, and is becoming, much like the book of Revelation describes “who is and who was and who is to come, the Almighty.” (Revelation 1:8) Current AI produces in an instant what Aschenbrenner says would take him a few minutes of work. By 2027, that instant result will be the equivalent of multiple months of human work. (page 35)

The Question of Ethical Power
The trillion-dollar ethical question – and I applaud Aschenbrenner for not making a single Terminator reference – is how badly AI can go wrong. Or, as Aschenbrenner puts it:
“Unless we solve alignment—unless we figure out how to instill those side-constraints—there’s no particular reason to expect this small civilization of superintelligences will continue obeying human commands in the long run. It seems totally within the realm of possibilities that at some point they’ll simply conspire to cut out the humans, whether suddenly or gradually.” (page 111)

A Rabbinic Approach
Aschenbrenner proposes a traditional solution to this futuristic problem. Jewish hermeneutics has long used the principle of qal va-ḥomer, ‘the argument from the minor to the major’. […] The Rabbis use the argument as one of their hermeneutical principles by means of which they expand and elaborate on the Biblical teachings.
For AI, Aschenbrenner proposes the same path: “however, we can study: how will the AI systems generalize from human supervision on easy problems (that we do understand and can supervise) to behave on the hard problems (that we can’t understand and can no longer supervise)? For example, perhaps supervising a model to be honest in simple cases generalizes benignly to the model just being honest in general, even in cases where it’s doing extremely complicated things we don’t understand.” (pages 116-117)

Faithful AI
The situation would be much easier if AI would simply behave according to the theological virtues that the Apostle Paul so famously lists: “And now faith, hope, and love remain, these three, and the greatest of these is love.” (1 Corinthians 13:13) But the world and AI are not so simple, and Aschenbrenner calls on the AI community to work on challenges like chain-of-thought (CoT) faithfulness: “how do we ensure the CoT is faithful, i.e. actually reflects what models are thinking? (E.g, there’s some work that shows in certain situations, models will just make up posthoc reasoning in their CoT that don’t actually reflect their actual internal reasoning for an answer.)” (page 120)

Prayer and Action
Aschenbrenner calls for decisive government involvement in AI development, because action is required and merely “praying for the best” (page 151) is not good enough.
