10 Comments

Well said. The concept of using AI to talk to historical figures seems to be based on the idea that because it's a *computer*, it magically has access to Truth in a way that people don't. Which is just not how AI works at all. ChatGPT is designed to produce something that sounds like something someone would say, that's it, not necessarily something that is meaningful or correct.
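
For anyone curious what "produce something that sounds like something someone would say" means mechanically, here is a minimal sketch in Python: a toy bigram model that only ever asks "what word tends to follow this one?" ChatGPT is orders of magnitude more sophisticated than this, but the basic move is the same, and note that nothing in the loop checks whether the output is true.

```python
# A toy "language model": pick a plausible next word given the previous one.
# This is NOT how ChatGPT is implemented, just an illustration of the idea
# that fluent-sounding text can be generated with no notion of truth.
import random
from collections import defaultdict

corpus = (
    "in the beginning was the word and the word was with god "
    "and the word was god"
).split()

# Build a crude next-word table: which words follow which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=10):
    """Sample a plausible-sounding continuation, one word at a time."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the word was with god" -- fluent, never fact-checked
```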

But Laura, a MAN on TWITTER told you to open your mind because *gestures vaguely in the direction of AI*

My guess is the Jeremiah Krakowskis of the world want to be able to do this because they want Paul (and any other biblical figures) to “definitively” “confirm” their own views on doctrine and dogma. “See, AI Paul told me all women should remain silent in all churches! Take that!” This betrays such laziness, an unwillingness to get messy in grappling with Scripture and with what God is doing in our own day and contexts.

What I've come to realize is that there are just some things that aren't possible. Not to say that some science fiction ideas aren't possible, but there are some things so far out of the realm of possibility that they breach into the supernatural.

I've noticed that trend with AI: because it's growing in complexity, people talk as if there were absolutely no limit to what it can do. There is a limit.

"It’s also interesting that ChatGPT goes for a gender neutral translation of ἀδελφός, which isn’t wrong but definitely more of a modernism."

Came here to say this first thing. Any "what would Paul/Jesus/Socrates have said" is going to be colored by our differing modern perspectives on what this person accomplished. Opinions on many public figures vary wildly if you also include what happened in their private lives, for example.

"We’d still be left with the question that we can’t answer with the tools of history: “would Paul have known how to cook that? How much did Paul know about weaving? Would he have said that?” Or, if the software was completely contemporary in its style and had Paul make a Sopranos reference, “Would Paul have watched The Sopranos? Would he have liked it? How would he have engaged with the materia[l]?”"

I sometimes participate in the comments sections on Focus on the Family's media branch, Plugged In, and they've gotten this question more than once: "Should we even be bothering to review XYZ thing," whether because it's rated R or contains sexual content or what have you. (Their decision to review the first Fifty Shades movie was a very contentious one, one that I myself thanked them for, and I frequently noticed that different groups of friends focused very differently on "does XYZ contain graphic content" versus "looking beyond the surface, does XYZ contain abusive messages.") Having never seen The Sopranos, I can't make any kind of deep comment or inference about its content or the ethics of using it (whereas, from what I saw of "Euphoria," I'm surprised its writers didn't get arrested, despite it being well made in itself).

This dovetails with the conversation about whether we take this content at face value. "Do Paul's words about women and gay people need to be read as is, or can they only be properly understood with external information that isn't always directly mentioned in Scripture?" "Are some parts of Scripture meant to be interpreted literally or as allegories (e.g., Song of Solomon)?" If we ask an "artificial-intelligence" model what Paul believed about marriage (computers can only do what they are told, and the answer will inevitably be colored by how the thing is programmed, whether the question is "how do we portray Paul" or the specifics of autonomous drone warfare), is it going to give more weight to what he wrote to the Corinthians, or to what he wrote to the Ephesians?

Your part about people using an AI model to ask Paul questions about the intricacies of the modern world (e.g., not just the Israel-Hamas protests but where they should take place and how they should be conducted) makes me think a lot of people would want an "AI Paul" not so much to gain insight but more to use as an appeal to authority: "Seeeeeeee? We went and summoned this guy, witch of Endor-style, and HE says, ladies, that you're not supposed to have driver's licenses!"

I don't know that bringing back dead thinkers is going to happen, but your arguments don't hold water.

Your Problem One: LLMs aren't limited to Paul's words. They can use words not from Paul to identify ways Paul wouldn't speak and narrow appropriately. And just because current LLMs don't use Koine Greek doesn't mean they can't.
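
To make the "narrow by contrast" idea concrete, here's a minimal sketch of one way it could look: a tiny classifier trained on Pauline versus non-Pauline text. The English snippets are placeholders, not real training data; a serious attempt would need the Greek corpora.

```python
# Hypothetical sketch of learning what Paul does NOT sound like by training
# against non-Pauline text. The snippets are placeholders for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

paul = [
    "grace to you and peace from god our father",
    "i appeal to you brothers by the mercies of god",
]
not_paul = [
    "in the beginning god created the heavens and the earth",
    "blessed are the poor in spirit for theirs is the kingdom",
]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(paul + not_paul, ["paul"] * len(paul) + ["not_paul"] * len(not_paul))

print(clf.predict(["grace and peace to you from god"]))  # likely ['paul']
```

Of course, with corpora this small the classifier is mostly noise, which is exactly the practical objection raised below.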

Your Problem Two: Identifying limitations in the current generation of LLMs doesn't say anything about future generations.

Problem Three reflects a misunderstanding of how LLMs work. If LLMs operated the same way as humans, they wouldn't detect cancers that humans don't, and they wouldn't be able to come up with and prove new mathematical theorems.

While mimicking Paul seems to me like a parlor trick not worth attempting, I think AI may well be able to give us insights into Paul's thinking that wouldn't otherwise be found.

author

So problems 1 and 2 are issues I can't figure out how to resolve as a practical matter.

How would we know how Paul WOULD NOT speak from such a short corpus, using text that is, crucially, not him speaking?

And I think that the point of Part 2 is that the limitation is one endemic to the number of texts at hand.
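
To put a rough number on "the number of texts at hand," a sketch like this is enough (the filename is a placeholder for a plain-text file of the letters):

```python
# Rough corpus-size check; "pauline_letters.txt" is a hypothetical file.
# The whole Pauline corpus runs to a few tens of thousands of words,
# while modern LLMs are trained on trillions of tokens.
with open("pauline_letters.txt", encoding="utf-8") as f:
    words = f.read().split()

print(f"{len(words):,} total words")
print(f"{len(set(words)):,} distinct word forms")
```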

The thing about 3 is that there's an issue with imitating a human mind and expanding its abilities. I see how you prove and test a mathematical theorem. I also see how you detect a cancer that is literally there. But how do you recreate what a person *would say* if they were here? It's a fundamentally untestable hypothesis because, unlike a cancer, we can't say whether it's accurate or not.

Well, no. Start from a known: the current generation of LLMs’ utter inability to understand either content (the direct meaning of words) or context (meaning relative to conditions or circumstances), because those things are many orders of magnitude different from (not just more complex than) predictive text. And LLMs are just that: predictive text in fancy clothes.

“AI” is better described as ML, which an expert practitioner described to me as “statistical modeling really fast”. Statistical models require large volumes of data. This is understood by serious practitioners.
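
A minimal illustration of why the data volume matters: even estimating a single word's frequency is unstable on small samples, and a statistical model of an author's whole style is that problem multiplied across the vocabulary. The rate below is made up for the sake of the sketch.

```python
# Estimate a word's frequency from samples of different sizes. With little
# data the estimate swings; a stylistic model built from a small corpus
# inherits this instability for every word it tracks.
import random

random.seed(0)
true_rate = 0.05  # pretend 5% of an author's words are "grace"

for n in (100, 1_000, 100_000):
    sample = [random.random() < true_rate for _ in range(n)]
    print(f"n={n:>7,}: estimated rate = {sum(sample) / n:.3f}")
```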

Laura hit the nail on the head here, because she exercises systematic thinking. All I see you doing in your “arguments” is to project wishful thinking as though it were somehow a reasonable projection of a likely future. I understand you want these things to happen, but that doesn’t make them reasonable projections.

LLMs aren't designed to "understand," just as computers that play chess aren't designed to play chess like humans but still beat them.

I think if you look at the progress from 2nd to 3rd and from 3rd to 4th generation AI, my projection into the future is not unreasonable.

Clearly LLMs using Koine Greek aren't on any AI roadmap, but the problems Dr Robinson points out are not insurmountable, even in the short term.

Thank you for writing this and drawing the discussion to my attention! If you haven't read it let me recommend Jack McDevitt's science fiction story "Gus" which is about almost precisely this!

(I will lend it to you if you can't get hold of a copy.)
