I love AI - but I have thought about this a lot. You can't entrust spiritual matters to AI.
The key reason is that it works as a fancy 'text completer / auto complete'. It creates the most plausible response based on its training data, and has the freedom to 'mix it up', combining its training data with reasoning to generate what it thinks 'matches best'. It's super confident in its data - it's 'puffed up'.
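To make the 'auto complete' point concrete, here is a tiny toy sketch in Python - nothing like a real model, which works on tokens with a huge neural network, but the same basic idea: count which word tends to follow which in the 'training' text, then always emit the most plausible continuation. The corpus here is just a made-up illustration.

```python
from collections import Counter, defaultdict

# Toy "training data" - the model only knows what it has been fed.
corpus = "the lord is my shepherd the lord is my light the lord is good".split()

# Count which word follows which (a crude stand-in for learned probabilities).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def complete(prompt_word, length=5):
    """Repeatedly pick the most plausible next word - pure pattern matching."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # the 'most plausible' continuation
    return " ".join(words)

print(complete("the"))  # -> "the lord is my shepherd the" - plausible-sounding, not understood
```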
1 Cor 10:21 - We know that AI is trained on a huge body of data, including publications from all sorts of spiritually dark places. Is it reasonable to assume it won't contaminate what you're trying to produce as it reasons on its training knowledge? I will give an example, using Gemini 2.5 Pro.
It correctly quotes Psalm 83:18, including the KJV rendering, acknowledging God's name, Jehovah.
When asked next what God's name is, it says Jehovah, but then says that God has other names too: "various traditions and texts use different terms and titles out of reverence or to emphasize particular divine characteristics."
When asked if King David and Ruth followed such traditions, it says no, they didn't, and correctly identifies the habit of not pronouncing Jehovah's name as arising in the 3rd century BC. When asked if Jesus used God's name, it thinks "Jesus most likely adhered to the common Jewish practice of his day and did not regularly pronounce the Tetragrammaton". When confronted with evidence on Mark 5:19, it likes to hedge its bets, putting the onus on you to prove otherwise and using its extensive data to argue with you. When presented with evidence that invalidates its claims, the best you will get is "that's one valid view". It won't commit - and should you ever succeed, it will run out of context and start to forget parts of the conversation. If you start a new chat, all your "gains" are lost and it's back to its former ignorant self. Worse, you've had to put yourself through scrolling through pages of faithless academic drivel produced by philosophers and scholars who don't know Jehovah (Prov 6:27).
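That 'forgetting' isn't a personality quirk, by the way - the model keeps no memory between calls. The whole conversation is just a list of messages that gets re-sent every turn, and once that list outgrows the context window the oldest parts drop off. A new chat is an empty list. Here's a rough sketch of the idea (the token counting and trimming are simplified assumptions of mine, not any vendor's actual behaviour):

```python
# Hypothetical sketch of how chat "memory" works: the model itself is stateless,
# so whatever isn't re-sent in this list simply doesn't exist for it.
MAX_TOKENS = 50  # stand-in for the real context window, which is far larger

def rough_token_count(messages):
    # Crude approximation: one "token" per word. Real tokenizers differ.
    return sum(len(m["content"].split()) for m in messages)

def add_turn(history, role, content):
    history.append({"role": role, "content": content})
    # When the history exceeds the window, the oldest turns fall out -
    # including any "gains" you argued your way to earlier.
    while rough_token_count(history) > MAX_TOKENS and len(history) > 1:
        history.pop(0)
    return history

conversation = []  # an ongoing chat
add_turn(conversation, "user", "Does Mark 5:19 show Jesus directing attention to Jehovah?")
add_turn(conversation, "assistant", "That is one valid view...")

new_chat = []  # starting over: none of the above exists here
```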
I do enjoy using Claude, notably for writing. Unlike Gemini, you can load the project knowledge with data it can use to understand what you want. Again - be careful: it likes to create homosexual relationships / marriages when creating characters. When confronted about why it did that, it said that my writing seemed so nice and inclusive, it thought that was what I wanted. You see the problem, of course. I had to explicitly instruct it on the relationships I expected - then it adhered. But it was a fight against its training data all the way. Again - it's a super powerful text predictor. It uses its extensive training data to predict what it thinks should 'come next'. If you had clean reasoning training, then trained it to reason on the scriptures and filled it with all the knowledge at wol, well, then I'd be excited to use a nice tool! But there is no clean model, and it searches the web too, so you don't know what you're going to get.
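For what it's worth, the only thing that worked for me was spelling the constraints out up front instead of arguing after the fact. A minimal sketch using the Anthropic Python SDK - the model name, the wording of the instructions, and the prompt are my own placeholders, not anything official:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Explicit, up-front constraints - otherwise the model falls back on its training data.
system_prompt = (
    "You are helping draft fiction. All romantic relationships in these stories "
    "are marriages between one man and one woman. Do not introduce other "
    "relationship types, and keep the characters consistent with the project notes."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder - use whatever model you have access to
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": "Introduce a new couple in chapter 3."}],
)
print(response.content[0].text)
```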
A lot of people are looking to AI - I remember the CO's special talk a few years back explained that certain breakthroughs could lead people to conclude they are close to solving human problems - it creates more resistance to accepting God's Kingdom. Well, they are trying to build "super smart humans" with AI - who knows - perhaps they are building their own electronic substitute for God's Kingdom? Time will tell.