When the arguments and evidence keep changing but the answer stays invariant, that’s a strong indicator of motivated belief: the evidence is selected to fit the answer rather than the answer following from the evidence.
This is what Catholics in Germany were paying for. Registered church members are required to pay an annual church tax, the “Kirchensteuer”, unless they officially declare that they are leaving their faith. It reportedly brings in around $5 billion a year (I don’t have exact figures to hand), a sizeable percentage of which is sent to the Vatican to keep the orgies going. Friday night is defrocking night!!! (say with German accent).
It is interesting to consider whether machines could be programmed for moral behaviour. Apparently yes, in very specialised and limited circumstances. I think the real limits here are the limits on machine cognition: the skills humans use in relating to and cooperating with each other are cognitively, psychologically and emotionally very complex. A “whole” morality involves factors that machines would find very hard to replicate: empathy (understanding another person and their needs), for example, or reputation (weighing up someone’s history of behaviour). So machines would always need humans to guide them, though potentially, if they were sophisticated enough, they could do it on their own. That would imply they were increasing in sentience, and their learning would itself be a form of increasing sentience. Even emotions could be simulated, if an emotion is understood as a signal of whether an agent is approaching or retreating from its goals. Any of this would probably have to wait for quantum computing to have a chance of coming about.
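The approach/retreat idea above can be sketched in a few lines of code. This is purely illustrative, and the function name and emotion labels are my own assumptions, not anything established: it treats an “emotion” as nothing more than the sign of an agent’s change in distance to a goal.

```python
# Hypothetical sketch: an "emotion" modelled as a valence signal derived
# from whether the agent is approaching or retreating from its goal.

def appraise(prev_distance: float, curr_distance: float) -> str:
    """Label the change in distance to a goal with a crude emotion.

    Distance shrinking (approaching the goal) reads as positive affect;
    distance growing (retreating) as negative; no change as neutral.
    """
    delta = prev_distance - curr_distance  # positive => approaching
    if delta > 0:
        return "positive"   # e.g. satisfaction, hope
    if delta < 0:
        return "negative"   # e.g. frustration, fear
    return "neutral"

print(appraise(10.0, 7.0))  # approaching the goal -> "positive"
print(appraise(7.0, 9.0))   # retreating from it  -> "negative"
```

Of course this captures none of the psychological richness of real emotion; it only shows that the bare approach/retreat framing is trivially mechanisable, which is the point being made above.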