Bayes Theorem & Joseph Smith's Seer Stone

The catch-all forum for general topics and debates. Minimal moderation. Rated PG to PG-13.
User avatar
Gadianton
God
Posts: 3933
Joined: Sun Oct 25, 2020 11:56 pm
Location: Elsewhere

Re: Bayes Theorem & Joseph Smith's Seer Stone

Post by Gadianton »

Huck wrote: I can honestly switch glasses to see the matter differently.
True, but you could use Bayes either way. I realize you're not strictly talking about Bayes at this point, at least I don't think you are. But it felt to me like what Philo was trying to get across -- and he's getting some good chances with AS to clarify his understanding and exposition here ;) -- is that you can use Bayes under any paradigm. How useful is it? I don't know.

I think it could be quite useful for questioning assumptions. I'd love for a TBM stats expert to break down the Mormon model of testimony in Bayesian terms. I don't think it can be done, because utter confirmation bias appears to be built right in. If an investigator starts with 50/50 odds on the Book of Mormon, prays and gets an answer, and if "I know it's true" really means absolute certainty or thereabouts, then good luck fudging the numbers on your excel spreadsheet to get from 50/50 to 99% after getting a distinct and clear burning in the bosom after praying. Now, people don't need Bayes explicitly to see the problem, which is why there are already plenty of explanations for testimony to keep faith going indefinitely. But, if you had to nail it down in Bayesian terms, it would be interesting to see if a model could be made that any real TBM would accept.

On the other end of the spectrum, I thought your example was interesting:
One glass says the following. It is clear by the wonder of the world and our ability to interrelate with that wonder that there is a God and we have the ability to connect with an awareness of God.
It's a "Goldilocks" argument where self-selection could bias us towards a conclusion, because there are limited opportunities for updating our place and time in the universe. It might be much easier to make this argument work in Bayesian terms than the Mormon testimony. An argument for Eternal Progression might work here: we'd expect to find ourselves existing in mortality, and going through all of these trials in a universe where there are worlds without number, and where countless other humans also are going through these same difficult, mortal lives.
huckelberry
God
Posts: 2644
Joined: Wed Oct 28, 2020 3:48 pm

Re: Bayes Theorem & Joseph Smith's Seer Stone

Post by huckelberry »

Gadianton wrote:
Mon May 10, 2021 9:07 pm
Huck wrote: I can honestly switch glasses to see the matter differently.
True, but you could use Bayes either way. I realize you're not strictly talking about Bayes at this point, at least I don't think you are. But it felt to me like what Philo was trying to get across -- and he's getting some good chances with AS to clarify his understanding and exposition here ;) -- is that you can use Bayes under any paradigm. How useful is it? I don't know.

I think it could be quite useful for questioning assumptions. I'd love for a TBM stats expert to break down the Mormon model of testimony in Bayesian terms. I don't think it can be done, because utter confirmation bias appears to be built right in. If an investigator starts with 50/50 odds on the Book of Mormon, prays and gets an answer, and if "I know it's true" really means absolute certainty or thereabouts, then good luck fudging the numbers on your excel spreadsheet to get from 50/50 to 99% after getting a distinct and clear burning in the bosom after praying. Now, people don't need Bayes explicitly to see the problem, which is why there are already plenty of explanations for testimony to keep faith going indefinitely. But, if you had to nail it down in Bayesian terms, it would be interesting to see if a model could be made that any real TBM would accept.

On the other end of the spectrum, I thought your example was interesting:
One glass says the following. It is clear by the wonder of the world and our ability to interrelate with that wonder that there is a God and we have the ability to connect with an awareness of God.
It's a "Goldilocks" argument where self-selection could bias us towards a conclusion, because there are limited opportunities for updating our place and time in the universe. It might be much easier to make this argument work in Bayesian terms than the Mormon testimony. An argument for Eternal Progression might work here: we'd expect to find ourselves existing in mortality, and going through all of these trials in a universe where there are worlds without number, and where countless other humans also are going through these same difficult, mortal lives.
Gadianton, I am a little uncertain about how you are reading my post. I was thinking that with either starting point one could analyze the likelihood that Jesus was raised from death and reach very different conclusions. I had some doubt that Bayes theorem would add any clarity to the problem. I am inclined to agree with Physics Guy's comment. I am sure he has more technical knowledge about the matter than I do, but I think I can see the broad outline of the problem he sees.

Physics Guy wrote:
Fri May 07, 2021 2:19 am
I like Bayes's theorem for real statistics problems, where it may not be clear what the data means but there's no subjectivity about what the data is. I'm skeptical about applying Bayesian inference to more qualitative questions, because I think the quantitative formalism can easily serve as a convenient distraction to divert attention from huge cherry-picking fallacies.

I'm not mainly worried about people being dishonest with that. I'm mainly worried about people for whom anything mathematical is a somewhat shaky foreign language bamboozling themselves, first of all, and then eagerly spreading delusion to others. In the enthusiasm of born-again Bayesians what you can often read between the lines is the feeling that, "Wow, the great thing about Bayes is that it means you don't have to worry about cherry-picking and question-begging and sharpshooting and all that nasty stuff!" But that is not true at all.
......................
Huckelberry continues,
Gad, I think you are observing that the starting points invite an analysis themselves, but you do not seem to think they would create convincing results. I do not think they would. The Mormon proof test of the burning bosom leaves me cold enough not to wish to pursue it. My proposal about believing God exists is clearly not proof, as has been demonstrated frequently. It is a way of seeing that I can share. It does not require any particular ideas about our future. It does not require belief in a life after death. It simply implies gratitude for life.
User avatar
DrStakhanovite
Elder
Posts: 336
Joined: Thu Mar 11, 2021 8:55 pm
Location: Cassius University

Re: Bayes Theorem & Joseph Smith's Seer Stone

Post by DrStakhanovite »

Aristotle Smith wrote:
Sat May 08, 2021 4:08 am
I hear this all the time. I know what people are trying to say, but I don't think it's right.
When it comes to the so-called “Sagan Standard,” which states that extraordinary claims require extraordinary evidence, I completely agree with you that as some kind of general heuristic it is fairly worthless. However, I think that within the context of Philo’s OP the Sagan Standard states something fairly important about probabilistic reasoning that often gets overlooked.

Before saying anything more, I’d like to be clear that I am hardly an ally of the Bayesian community, and I find almost every contemporary use of Bayes in the world of religious apologetics to be inadequate for its purported task. This ranges from the infamous Dale & Dale paper ‘The World’s Greatest Guesser’ to Richard Carrier’s book-length treatments: all substandard and lacking appropriate rigor. In the realm of meta-logic and mathematical logic, I think Bayesian systems of probabilistic reasoning suffer greatly from conceptual problems, and when it comes to Bayes being utilized for confirmation theories within the natural sciences, it absolutely fails in its task.

Having said that I still think there is great value in Bayesian analysis. Philo’s use of Bayesian probability is no different than someone using categorical syllogisms from the realm of Classical Logic or employing a Propositional Calculus to frame an argument against the existence of a Biblical God. Just because there are issues with Kripke semantics doesn’t mean a potential LDS philosopher can’t use a relevant Modal Logic for fleshing out an explicitly Mormon position on the nature of the Heavenly Father’s and Jesus Christ’s metaphysical relationship.

For me, Bayesian probability is simply a tool and every tool has a time and place for its use.

Aristotle Smith wrote:Take a Mormon example, what would be needed for the evidence to show that the Book of Abraham was translated from papyrus? Easy, the original papyrus and the translation. We have both, and the rather ordinary evidence shows that it was not translated from papyrus. Similar proofs would exist for the Book of Mormon. In fact, I think it does harm to the critic's case to assert that extraordinary evidence is needed. This then gives the apologist the ability to say the critic is making unreasonable demands, by demanding the extraordinary. The critic is actually demanding rather ordinary evidence, which the apologist cannot supply.
I’d like to take a crack at demonstrating the relevance of the Sagan Standard to Bayesian analysis, but to do so I’m going to reinvent the wheel here a little bit. I want to customize my example to better make my point, and I don’t want to commandeer Philo’s examples in the OP to do so.
I’d like to begin from an explicit personalist perspective. By “personalist perspective” I mean that numeric values represent degrees of belief, where 1 conveys total confidence and 0 conveys a total lack of confidence. Any real number between 1 and 0 represents a degree of belief, with 1 and 0 acting as limits: the closer to 1, the stronger the belief; the closer to 0, the weaker the belief. Bayesians get their evangelical zeal from the normative principle that if your confidence towards a hypothesis (an explicit belief that could be true) is above .5, then you ought to believe it, and if it is below .5, you ought not to.

With that said let me define some symbols:

⍴ represents a discrete mathematical function of probability (Bayesian personalist)
β represents a set of background assumptions regarding the Book of Abraham
α represents the hypothesis that the Book of Abraham is an ancient work
~α represents all the different ways that the hypothesis α fails to be true
ɛ represents the sum of historical data that has any bearing on the Book of Abraham

Here is the standard short form of Bayes:

⍴β(α|ɛ) = [⍴β(ɛ|α) × ⍴β(α)] / ⍴β(ɛ)

Now the fundamental thing I always want people to be aware of when it comes to Bayes is that there are two different kinds of probabilities in play and if you are not aware of them then you can easily fall for a sleight of hand trick. In the standard short form above “⍴β(α)” represents a numeric value for the prior personalist probability that α is true while taking into account β and “⍴β(α|ɛ)” represents a numeric value for the posterior personalist probability that α is true while taking into account β and given the truth of ɛ.

What a lot of people don’t understand right away when first confronting Bayes is that, as long as “⍴β(α)” has a value above 0, it is fairly easy to get the value of “⍴β(α|ɛ)” above .5. It seems obvious when pointed out, but a lot of people don’t pick up on the shift in sense between the two.
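To make that shift concrete, here is a minimal numeric sketch of the short-form update above. The specific values are invented for illustration only:

```python
# Short-form Bayes: p(a|e) = p(e|a) * p(a) / p(e), all relative to the
# background assumptions b. The numbers below are hypothetical.

def posterior(prior, likelihood, evidence_prob):
    """Posterior personalist probability of the hypothesis given the evidence."""
    return likelihood * prior / evidence_prob

# Even a skeptical 1% prior crosses the .5 belief threshold once the
# likelihood ratio p(e|a)/p(e) is large enough (here 0.9/0.015 = 60).
print(round(posterior(prior=0.01, likelihood=0.9, evidence_prob=0.015), 2))  # 0.6
```

The prior only has to be nonzero; everything else is carried by the likelihood ratio.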

I think what vaulted Bayesian reasoning into popularity in the realm of religious apologetics were the early debates between William Lane Craig and Bart Ehrman. At the time Ehrman had a heuristic he employed in his New Testament Textbook that said the role of an ancient historian is to determine what probably happened in the ancient past and since miracles were by definition the most improbable of all events, an ancient historian could never determine a miracle had taken place.

Now I understand the spirit of Ehrman’s heuristic and I am very sympathetic to it, but he really didn’t have the wherewithal to defend himself from Craig’s attack on it. Craig quite rightly pointed out that such a heuristic conflated the prior probability (miracles are very rare) with the posterior probability that takes evidence into account. Due to the mathematical relationship that exists between “⍴β(α)” and “⍴β(α|ɛ)” in the formula, it is a matter of necessity that if “⍴β(α)” is above 0, then “⍴β(α|ɛ)” can get above the important threshold of .5.

Ehrman didn’t respond well and acted flabbergasted, eventually accusing Craig of trying to prove the existence of God with math. It was one of those rare “gotcha” moments that I think excited apologists across all faith communities and in turn, drove a lot of the counter-apologists to learn more about Bayes. Because Mormon apologetics is slower on the draw than other apologetic communities, it wasn’t until the infamous Dale & Dale paper that you saw some Mormon apologists make a concerted effort to apply Bayes to the standard talking points:
Dale & Dale wrote:This article analyzes that evidence, using Bayesian statistics. We apply a strongly skeptical prior assumption that the Book of Mormon “has little to do with early Indian cultures,” as Dr. Coe claims. We then compare 131 separate positive correspondences or points of evidence between the Book of Mormon and Dr. Coe’s book.
The sleight of hand in action: give a low prior probability as if you are doing the other side a favor, pump up the posterior probability with over 100 different data points, and then take a victory lap around the bloggernacle.

Now here is the proverbial fly in the analytical ointment: most people will just pick up Bayes theorem and start using it on a variety of subjects. The reality of the situation is that if you’re not dealing with stochastic processes or something like infection rates of a population, the theorem expressed as “⍴β(α|ɛ) = [⍴β(ɛ|α) × ⍴β(α)] / ⍴β(ɛ)” isn’t going to be enough. You don’t need a graduate education, but some basic understanding of Set Theory and Boolean Algebra is needed to get the most out of it. I’ll try to demonstrate that without getting into the particulars.

So if “⍴β(α)” is set low, how do we counter that? In this case the important relation is “⍴β(ɛ|α)/⍴β(ɛ)”: getting this value high enough can offset a low prior of any magnitude. Now I’m of the opinion that “⍴β(ɛ|α)” represents the capacity of α to explain all the evidence we have given our background assumptions; let’s call it explanatory power. I’d also assert that “⍴β(ɛ)” represents the dynamic nature of the evidence. If in the final result you want “⍴β(α|ɛ)” to be as high as possible, you need “⍴β(ɛ)” to be as low as possible.

So how do I justify talking about how “⍴β(ɛ)” measures this so-called “dynamic nature”? Well if you do a little logical jiu-jitsu with the very same axioms of probability used to get Bayes theorem, you can get a very interesting derivation:
⍴β(ɛ) = [⍴β(ɛ|α) × ⍴β(α)] + [⍴β(ɛ|~α) × ⍴β(~α)]

Now the relationship between “α” and “~α” is one that is mutually exclusive and jointly exhaustive. If your priors are low then ⍴β(~α) is going to be high, which means that every single hypothesis that is not specifically “α” is wrapped up in “~α” and “~α” is getting a huge pump in value via the prior probability. This means “⍴β(ɛ|~α)” needs to have a really really small value to overcome the large value “⍴β(~α)”.
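That expansion can be sketched numerically. In the toy numbers below (invented for illustration), the same prior and the same explanatory power yield wildly different posteriors depending solely on how well the rival hypotheses lumped into “~α” are said to explain the evidence:

```python
# Total probability: p(e) = p(e|a)*p(a) + p(e|~a)*p(~a).
# With a low prior, p(~a) is large, so p(e) is dominated by the p(e|~a)
# term unless p(e|~a) is driven very low. All numbers are hypothetical.

def total_evidence(prior, lik_a, lik_not_a):
    return lik_a * prior + lik_not_a * (1 - prior)

def posterior(prior, lik_a, lik_not_a):
    return lik_a * prior / total_evidence(prior, lik_a, lik_not_a)

# Same 5% prior and 0.9 explanatory power for a in both cases:
print(posterior(0.05, 0.9, 0.50))  # rivals explain e half the time: posterior stays low
print(posterior(0.05, 0.9, 0.01))  # rivals nearly ruled out: posterior soars
```

Everything turns on the value assigned to p(e|~a), which is exactly where the burden of argument lies.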

So what is the take away of all this? Well let me re-quote the esteemed Professor Smith:
Aristotle Smith wrote:In fact, I think it does harm to the critic's case to assert that extraordinary evidence is needed. This then gives the apologist the ability to say the critic is making unreasonable demands, by demanding the extraordinary.
When it comes to Bayesian arguments, I think the main draw they have for apologists is that they can “apply a strongly skeptical prior assumption” as a way of appearing balanced or even generous, knowing that such prior assumptions can be rotely overcome and that the process can be obscured with algebra.

Yet to overcome those priors, apologists have to explain why “ɛ” is so unique and dynamic in nature that it reduces every other hypothesis’ explanatory power to a pittance and calls those same hypotheses’ legitimacy into question. They need “⍴β(ɛ|~α)” to be low to obtain a low value for “⍴β(ɛ)” to raise the value of “⍴β(α|ɛ)”. It creates an undeniable burden for the apologist to explain why this evidence absolutely overturns everything we think we know (the priors) and makes every other hypothesis completely irrelevant.

To me, the Dale & Dale paper falls under the Sagan Standard and fails to meet it.
User avatar
Physics Guy
God
Posts: 1575
Joined: Tue Oct 27, 2020 7:40 am
Location: on the battlefield of life

Re: Bayes Theorem & Joseph Smith's Seer Stone

Post by Physics Guy »

DrStakhanovite wrote:
Tue May 11, 2021 3:48 am
When it comes to Bayesian arguments, I think the main draw they have for apologists is that they can “apply a strongly skeptical prior assumption” as a way of appearing balanced or even generous, knowing that such prior assumptions can be rotely overcome and that the process can be obscured with algebra.

Yet to overcome those priors, apologists have to explain why “ɛ” is so unique and dynamic in nature that it reduces every other hypothesis’ explanatory power to a pittance and calls those same hypotheses’ legitimacy into question. They need “⍴β(ɛ|~α)” to be low to obtain a low value for “⍴β(ɛ)” to raise the value of “⍴β(α|ɛ)”. It creates an undeniable burden for the apologist to explain why this evidence absolutely overturns everything we think we know (the priors) and makes every other hypothesis completely irrelevant.
People like the Dales love Bayes because besides looking objective with a low prior for their favourite theory, they can also appear to skip all the difficult steps of argument and just assign a seemingly conservative "explanatory value" to every one of an arbitrarily large number of data points, which they have cherry-picked by the bushel. Then it's easy to see why their evidence overturns everything: multiplication.
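The multiplication is easy to sketch. Using the odds form of Bayes and numbers invented purely for illustration, 131 "conservative" factors of just 2 each, treated as independent, annihilate even a one-in-a-million prior:

```python
# Odds form of Bayes: posterior odds = prior odds * product of the
# per-datum Bayes factors. All figures below are hypothetical.

prior_odds = 1 / 1_000_000   # a "strongly skeptical" prior
bayes_factor = 2.0           # a modest-looking factor per data point
n_points = 131               # cherry-picked correspondences

posterior_odds = prior_odds * bayes_factor ** n_points
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)  # 1.0 to float precision: the prior is swamped
```

No single factor looks immodest; the bushel of cherry-picked data points does all the work.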

Textbooks often emphasise that Bayesian inference is a learning algorithm. You update the probabilities of alternative hypotheses in light of observed evidence. The problem with applying Bayesian inference to subjective cases with ambiguous data is that confirmation bias is also a form of learning.

An important mechanism in the abuse of Bayesian inference to support confirmation bias is differential resolution. Sometimes we carefully distinguish multiple separate cases even when the distinctions are fine. Sometimes we lump large numbers of possible scenarios into just a few categories and call it a day. And often we shift resolution, zooming in to tease apart sub-cases or stepping back by folding a range of cases into one generalised case. Zooming in and stepping back can both seem like good and right things to do, intellectually. We're looking more closely—that's good; we're drawing conclusions—that's great. It's a natural part of all human thinking. Our brains are not all that big, so whenever we think about one thing we focus in on it and let the rest of the universe blur.

This basic human tendency to fixate within one conceptual frame has a powerful effect on Bayesian inference, though, because it makes our subjective judgement of probability really bad.

How likely is it that there is life on some body in the solar system other than Earth? Well, it seems unlikely, but I don't want to be dogmatic, so I guesstimate 10%. But then if you ask me separately how likely it is that there is life on each individual other planet or asteroid or moon or comet in the solar system, in each case I'm going to have a similar reluctance to be too dogmatic, and so I'll probably allow at least a couple of percent in each individual case. But that's absurd, because there are a lot of individual heavenly bodies in the solar system, and if I allow at least a few percent chance for each one of them to host life, the total chance of having life somewhere in the solar system is going to be way more than 10%. Conversely, if I want to stand by my original guess of 10% for the whole solar system then I'm going to have to pull the trigger on assigning some very low probabilities of life on asteroids and moons and gas giants.

And then once I've forced myself to pull that trigger, and assign really low probabilities to life on individual moons, suddenly it's not at all clear that 10% for the solar system as a whole was actually reasonable, anyway. If a moon can have a really low chance of life, why can't the whole solar system? Why can't the whole galaxy? Who has any idea how likely life is, actually? Maybe the only living cells in the whole local galaxy group are all right here on Earth. Or maybe every other comet has microbes. Who knows?
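A quick consistency check, with made-up numbers standing in for the guesses above, shows that the fine-grained and coarse-grained estimates cannot both stand:

```python
# If each of 200 bodies gets a "non-dogmatic" 2% chance of hosting
# life, treated as independent, the implied chance of life *somewhere*
# in the solar system dwarfs the 10% coarse-grained guess above.
# Both the 2% and the count of 200 are hypothetical.

n_bodies = 200
p_each = 0.02

p_nowhere = (1 - p_each) ** n_bodies
p_somewhere = 1 - p_nowhere
print(round(p_somewhere, 2))  # about 0.98, nowhere near 0.10
```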

My subjective guess at what seems like a reasonable probability depends strongly on how finely I focus. That has drastic consequences, though, when I shift resolution. Suppose I establish a reasonable-looking set of probabilities at one level of resolution, whether for priors or for figures of explanatory power. Then I zoom in, and apply those same nicely established probabilities at a finer resolution where there are many more issues and cases, without reassessing the consistency of my probabilities, or whether or not any of them may be correlated. Then I step back again and multiply a bunch of Bayesian likelihoods together to get some overall probabilities. I'm very likely to have generated some surprisingly high probabilities, by this procedure.

It's a Carnot cycle of confirmation bias, because how probabilities defined at one level of resolution are supposed to translate to finer levels of detail is really a hypothesis in itself—but it's a fnord-hypothesis to which the human mind is naturally almost blind. So the Bayesian game can be subtly rigged by implicitly defining what gets to count as a distinct case of its own, rather than being split into sub-cases or lumped into larger cases, and what gets to count as an independent piece of evidence, rather than being split up into separate issues or bundled together as part of some larger picture.

And I'm not sure that this is even a fallacy that we can avoid by being careful. We may simply be stuck with it. I'm not even entirely sure that the problem is a human neurological failure to implement proper logic. It might conceivably be a limitation of logic itself. Or there might be a logical solution to this problem that we have not yet identified. I wonder whether the so-called Yale Shooting Problem ought to rank up there with Schrödinger's Cat.

To return to the Sagan slogan, the problem is that "extraordinary" is not easy to translate into Bayesian terms. Probabilities can be high or low—and both are perfectly ordinary. If "extraordinary" is a useful term at all, either for claims or for evidence, I think it must be in pointing to the difficulty in assigning probabilities, not to how high or low they are.
I was a teenager before it was cool.
User avatar
DrStakhanovite
Elder
Posts: 336
Joined: Thu Mar 11, 2021 8:55 pm
Location: Cassius University

Re: Bayes Theorem & Joseph Smith's Seer Stone

Post by DrStakhanovite »

Physics Guy wrote:
Tue May 11, 2021 6:26 am
It's a Carnot cycle of confirmation bias, because how probabilities defined at one level of resolution are supposed to translate to finer levels of detail is really a hypothesis in itself—but it's a fnord-hypothesis to which the human mind is naturally almost blind. So the Bayesian game can be subtly rigged by implicitly defining what gets to count as a distinct case of its own, rather than being split into sub-cases or lumped into larger cases, and what gets to count as an independent piece of evidence, rather than being split up into separate issues or bundled together as part of some larger picture.

And I'm not sure that this is even a fallacy that we can avoid by being careful. We may simply be stuck with it. I'm not even entirely sure that the problem is a human neurological failure to implement proper logic. It might conceivably be a limitation of logic itself. Or there might be a logical solution to this problem that we have not yet identified. I wonder whether the so-called Yale Shooting Problem ought to rank up there with Schrödinger's Cat.
I find myself pretty much in accord with the sentiments you’ve expressed here and I think it illustrates the pitfalls of relying too much on a specific method; given the human condition and our limitations. We are all experts at self-deception.

Physics Guy wrote:
Tue May 11, 2021 6:26 am
To return to the Sagan slogan, the problem is that "extraordinary" is not easy to translate into Bayesian terms. Probabilities can be high or low—and both are perfectly ordinary. If "extraordinary" is a useful term at all, either for claims or for evidence, I think it must be in pointing to the difficulty in assigning probabilities, not to how high or low they are.
Natural language expressions don’t translate well into formal languages, there is simply no equivalent for “extraordinary” and so we have to find moments when the spirit of the term is applicable.
Philo Sofee
God
Posts: 5061
Joined: Thu Oct 29, 2020 1:18 am

Re: Bayes Theorem & Joseph Smith's Seer Stone

Post by Philo Sofee »

Mr Stak
and when it comes to Bayes being utilized for confirmation theories within the natural sciences it absolutely fails in its task.
Archaeologists use it all the time now, though. And it is definitely usable in astronomy. See here - http://slittlefair.staff.shef.ac.uk/tea ... index.html
This is actually a pretty good little article on the subtlety of Bayes, which many people are not perceiving.

And it is certainly used in the natural sciences: https://www.nature.com/articles/494035b

And though Richard Carrier gets dissed all the time, others are saying the exact same things he said in his book "Proving History" about our everyday thinking using Bayes, even if we don't know it. https://theconversation.com/bayes-theor ... s-it-76140

There are several articles listed here where chemists are using Bayes in their chemistry work: https://www.researchgate.net/publicatio ... %2C%202004).

Biological research is most definitely picking up fast in using Bayes and its techniques and methods. https://link.springer.com/article/10.10 ... 20research.

And the reason I looked was my skepticism and a high prior probability that what you have said is not quite correct. I thought, ya know, my prior here is high that he is incorrect; I wondered if the evidence would give me a strong posterior probability that you might very well be wrong in your asserted claim. It actually strengthened my skepticism, with the posterior probability going up thanks to the evidence that, indeed, you may very well be wrong in your claim... and I only just skimmed the entries showing Bayes being used across the natural sciences. There are literally dozens of articles on this topic. That is, in fact, how Bayes worked with me in this very situation: I saw a claim, my prior was quite low that you were correct, I looked for evidence, and found that it raised the posterior probability high enough to agree with my original assessment that your claim is wrong. Or you at least need to modify it...
Richard Carrier says this is exactly how we can use Bayes to make better decisions in accepting claims, and in finding out how justified we are in our own beliefs and doubts. My initial skepticism was over your statement that Bayes "absolutely fails..." It has NEVER "absolutely" failed... because you gave me a number I was skeptical of. "Absolutely fails" means by definition 100%, and nothing is 100% in either direction; the more Bayes is used, across multiple disciplines, the more it succeeds, vastly beyond what people credit it with. If it is so miserable at being useful, then why are more and more scientists using it? Forget about Richard Carrier; E.T. Jaynes is the man for powerful materials on the power of Bayes in myriads of applications...
Philo Sofee
God
Posts: 5061
Joined: Thu Oct 29, 2020 1:18 am

Re: Bayes Theorem & Joseph Smith's Seer Stone

Post by Philo Sofee »

Mr Stak
given the human condition and our limitations. We are all experts at self-deception.
Exactly why we must use Bayes, and use it correctly, in order to get rid of our own self-deception, which we sometimes are not even aware of. Bayes helps us get explicit about why and how we are justified in our beliefs. It is not a proof, and it is not final; it is a conditional probability based on what we know. We change it through time with further evidence, upgrading our beliefs and doubts in what we think we know. We are not after certainty; probability is not about certainty. The correct thing is not to stop using Bayes, but to start using it correctly. And I am not saying I do; I am still learning the ropes. E. T. Jaynes is magnificent for this! I am quite honestly serious here: E. T. Jaynes, "Probability Theory: The Logic of Science," Cambridge Univ. Press, 2003, is simply over-the-top MUST reading for skeptics of Bayes...
Philo Sofee
God
Posts: 5061
Joined: Thu Oct 29, 2020 1:18 am

Re: Bayes Theorem & Joseph Smith's Seer Stone

Post by Philo Sofee »

Physics Guy
Then I zoom in, and apply those same nicely established probabilities at a finer resolution where there are many more issues and cases, without reassessing the consistency of my probabilities, or whether or not any of them may be correlated. Then I step back again and multiply a bunch of Bayesian likelihoods together to get some overall probabilities. I'm very likely to have generated some surprisingly high probabilities, by this procedure.
I'm not trying to argue here; I am trying to understand. The problem is correct as you state it. I will focus on this sentence of yours: "without reassessing the consistency of my probabilities".

If someone is doing this, it is not a proper use of Bayes, and therefore it needs to be addressed. The correct thing to do is not to quit using Bayes, but to use it correctly, and to point out when it is not being used correctly, right? The method works. That has been quite strongly established, and it works in a greater variety of ways as it comes to be used in more applications! It works when it is properly used and doesn't when it isn't. But this is true of any kind of reasoning! The cure is not to ditch reasoning altogether, but to reason more accurately and effectively, yes?
User avatar
Gadianton
God
Posts: 3933
Joined: Sun Oct 25, 2020 11:56 pm
Location: Elsewhere

Re: Bayes Theorem & Joseph Smith's Seer Stone

Post by Gadianton »

But Philo, PG said:
So the Bayesian game can be subtly rigged by implicitly defining what gets to count as a distinct case of its own, rather than being split into sub-cases or lumped into larger cases, and what gets to count as an independent piece of evidence, rather than being split up into separate issues or bundled together as part of some larger picture.

And I'm not sure that this is even a fallacy that we can avoid by being careful. We may simply be stuck with it. I'm not even entirely sure that the problem is a human neurological failure to implement proper logic.
Also:
why can't the whole solar system? Why can't the whole galaxy? Who has any idea how likely life is, actually? Maybe the only living cells in the whole local galaxy group are all right here on Earth. Or maybe every other comet has microbes. Who knows?
To me, the bolded part above is the problem. We don't know the distribution.

Well, in Bayes, you don't care about the distribution, that's the point, right? You just keep updating until it works out. That's great for facial recognition software being fed reams of data and the goal is clear; there is a distribution in there somewhere. But for a thought experiment with no actual data?

Is "differential resolution" a known problem in A.I.? I couldn't find anything on it. I also didn't see it as a common objection to Bayes. It seems to me that I could get a computer to help me come up with consistent numbers for planets and galaxies, but it would end up being consistent BS.
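The contrast between "reams of data" and "a thought experiment with no actual data" can be sketched in a few lines of Python. In this hypothetical example, two agents start with very different Beta priors over a coin's bias; shared data pulls their posterior estimates together, whereas with no data their numbers stay wherever the priors put them.

```python
# Two agents with very different Beta priors over a coin's bias.
# With real data, the posterior means converge ("the priors wash out");
# with no data, there is nothing to converge on.
from fractions import Fraction

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return Fraction(a, a + b)

# Agent 1: Beta(1, 1), a uniform prior; Agent 2: Beta(20, 2), strongly biased.
a1, b1 = 1, 1
a2, b2 = 20, 2

# Before any data: means 0.5 and ~0.91 -- far apart.
# Shared data: 70 heads, 30 tails.
heads, tails = 70, 30
m1 = beta_mean(a1 + heads, b1 + tails)
m2 = beta_mean(a2 + heads, b2 + tails)

print(float(m1), float(m2))  # both near 0.7 after seeing the data
```

With a genuine data-generating distribution in there somewhere, updating finds it; for galaxies and thought experiments there is no stream of flips, so the "consistency" achieved is consistency with the priors alone.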
Last edited by Gadianton on Thu May 13, 2021 3:51 am, edited 1 time in total.
User avatar
DrStakhanovite
Elder
Posts: 336
Joined: Thu Mar 11, 2021 8:55 pm
Location: Cassius University

Re: Bayes Theorem & Joseph Smith's Seer Stone

Post by DrStakhanovite »

Hiya Philo,

I don’t mean to hijack your thread and make it all about the viability of Bayes in a given area. The reason I made those qualifications to Aristotle is because I wanted to convey that despite my misgivings about Bayes, I still saw merit in the Sagan Standard in the context of your OP. My intention wasn’t to call your OP into question at all.

That said, I’m happy to address the concerns you have about my assertions. I’m just concerned that I might be rudely dragging you down some rabbit trails that are tangential at best. If that is the case, please say so and I’ll cease with no ill feelings.
Philo Sofee wrote:
Wed May 12, 2021 12:27 am
Richard Carrier says this is exactly how we can use Bayes to make better decisions in accepting claims, and finding out how justified we are in our own beliefs and doubts. My initial skepticism was about your statement that Bayes "absolutely fails..." It has NEVER "absolutely" failed... because you gave me a number I was skeptical of. "Absolutely fails" means by definition 100%, and nothing is 100% in either direction, and the more Bayes is used, across multiple varied disciplines, the more it succeeds, vastly more than people credit it with. If it is so miserable at being useful, then why are more and more scientists using it? Forget about Richard Carrier; E.T. Jaynes is the man for powerful materials on the power of Bayes in myriad applications...
For reference, this is what Philo is responding to:
DrStakhanovite wrote:
Tue May 11, 2021 3:48 am
I think Bayesian systems of probabilistic reasoning suffer greatly from conceptual problems, and when it comes to Bayes being utilized for confirmation theories within the natural sciences it absolutely fails in its task.
What I failed to make clear is that “confirmation theories” is a reference to philosophical problems related to the logic of induction and more broadly the philosophy of science. To help illustrate my position I’m going to draw on a paper written by J.D. Norton from University of Pittsburgh’s department of History and Philosophy of Science. The article itself is called ‘Probability Disassembled’ and it was published in 2007.

Here is the abstract:
”Norton” wrote:While there is no universal logic of induction, the probability calculus succeeds as a logic of induction in many contexts through its use of several notions concerning inductive inference. They include Addition, through which low probabilities represent disbelief as opposed to ignorance; and Bayes property, which commits the calculus to a ‘refute and rescale’ dynamics for incorporating new evidence. These notions are independent and it is urged that they be employed selectively according to needs of the problem at hand. It is shown that neither is adapted to inductive inference concerning some indeterministic systems.
I’m going to quote this paper quite a bit because it gives a good background on a very specific issue that I think bears directly on the use of Bayes in the area of apologetics and the responses of counter-apologetics. I’ve included the relevant footnotes with my own bolding and I’ve taken the liberty of underlining what I think is directly relevant to what Philo has mentioned thus far.

First order of business is to state clearly that no one reasonably denies the great utility of Bayesian probability; however, enthusiasm for the project often motivates its advocates to try to do too much:
”Norton” wrote:No single idea about induction[1] has been more fertile than the idea that inductive inferences may conform to the probability calculus. For no other proposal has proven anywhere near as effective at synthesizing a huge array of disparate intuitions about induction into a simple and orderly system. No single idea about induction has wrought more mischief than the insistence that all inductive inferences must conform to the probability calculus. For it has obliged probabilists to stretch their calculus to fit it to cases to which it is ill suited, and to devise many ingenious but ill fated proofs of its universal applicability.

[1]:The terms ‘induction’ and ‘inductive inference’ are used here in the broadest sense of any form of ampliative inference. They include more traditional forms of induction, such as enumerative induction and inference to the best explanation, which embody a rule of detachment; as well as confirmation theories, such as in traditional Bayesianism or Hempel’s satisfaction criterion, which lack such a rule and merely display confirmatory relations between sentences...


Now I also want to show that this paper addresses the work of E.T. Jaynes:
”Norton” wrote:There have been numerous attempts to establish that the probability calculus is the universally applicable logic of induction. The best known are the Dutch book arguments, developed most effectively by de Finetti (1937), or those that recover probabilistic beliefs from natural presumptions about our preferences (Savage 1972).[3] Others proceed from natural supposition over how relations of inductive support must be, such as Jaynes (2003, Ch. 2).

[3]:Strictly speaking, these arguments purport to establish only that degrees of belief, as made manifest by a person’s preferences and behaviors, must conform to the probability calculus on pain of inconsistency. They become arguments for universality if we add some version of a view common in subjectivist interpretations that degrees of belief are only meaningful insofar as they can be manifested in preferences and behaviors.
Now let's get into the specifics of these “ill fated proofs”:
”Norton” wrote:These demonstrations are ingenious and generally quite successful, in the sense that accepting their premises leads inexorably to the conclusion that probability theory governs inductive inference. That, of course, is just the problem. The conclusion is established only insofar as we accept the premises. Since the conclusion makes a strong, contingent claim about our world, the demonstrations can only succeed if their premises are at least as strong factually.[4]

[4]:There is no escape in declaring that good inductive inferences are, by definition, those governed by the probability calculus. For any such definition must conform with essentially the same facts in that it must cohere with canonical inductive practice. Otherwise we would be free to stipulate any system we choose as the correct logic of inductive inference.
What Norton is setting up is looking at the logical machinery that makes these calculi work. For example he makes mention of one of the more well known objections to Jaynes out there in the literature:
”Norton” wrote:Finally, Jaynes ([2003], §2.1) proceeds from the assumption that the plausibility of A and B conditioned on C (written ‘(AB|C)’) must be a function of (B|C) and (A|BC) alone, from which he recovers the familiar product rule for probabilities, P(AB|C) = P(A|BC)P(B|C). That this sort of functional relation must exist among plausibilities, let alone this specific one, is likely to be uncontroversial only for someone who already believes that plausibilities are probabilities, and has tacitly in mind that we must eventually recover the product rule. [6]

[6]: A simple illustration of an assignment of plausibilities that violates the functional dependence is ‘Plaus.’ It is generated by a probability measure P over propositions A, B, . . . as a coarsening, with only two intermediate values: Plaus(A|B) = ‘Low’ when 0 < P(A|B) < 1/2; and Plaus(A|B) = ‘High’ when 1/2 < P(A|B) < 1.
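Norton's footnote 6 can be checked mechanically. The Python sketch below coarsens probabilities into 'Low'/'High' (the treatment of exactly 0.5 is my own assumption, since the footnote leaves the midpoint undefined) and exhibits two cases with identical coarsened inputs but different coarsened outputs, so no function of Plaus(B|C) and Plaus(A|BC) alone can recover Plaus(AB|C).

```python
# Coarsen probabilities into two plausibility values, per Norton's footnote 6,
# and show the product rule's functional dependence fails for 'Plaus'.

def plaus(p):
    # Footnote 6 leaves P = 0.5 undefined; assigning it 'High' is an
    # assumption of this sketch and does not affect the counterexample.
    return 'Low' if p < 0.5 else 'High'

# Case 1: P(B|C) = 0.6, P(A|BC) = 0.6  ->  P(AB|C) = 0.36
# Case 2: P(B|C) = 0.9, P(A|BC) = 0.9  ->  P(AB|C) = 0.81
for pb, pa in [(0.6, 0.6), (0.9, 0.9)]:
    pab = pa * pb
    print(plaus(pb), plaus(pa), '->', plaus(pab))

# Both cases present the inputs (High, High), yet the outputs differ
# (Low vs High), so Plaus(AB|C) is not a function of the coarsened values.
```

This is the sense in which Jaynes's functional-dependence assumption is uncontroversial only for someone who already treats plausibilities as probabilities.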
But I don’t want to focus on plausibilities and functional relations, but rather something a lot more troubling (at least in my opinion). Speaking about the success of Bayesian analysis in the 20th century Norton states the following:
”Norton” wrote:In my view, the success is overrated and does not sustain the probability calculus as the unique logic of induction. In many cases, the success is achieved only by presuming enough extra hidden structures—priors, likelihoods, new variables, new spaces—until the desired intuition emerges. That does not mean that the logic on the surface is probabilistic, but only that this surface logic can be simulated with a more complicated, hidden structure that employs probability measures.
So Norton wants to look at those hidden structures:
”Norton” wrote:The system of properties for confirmation relations to be described here draws on the extensive literature in axioms for the probability calculus already developed.
What follows is a brief rundown on the various axioms of probability. I’ll skip that discussion and highlight just one axiom needed to understand my coming concern about the use of Bayes:
”Norton” wrote:F3. Universal comparability. For all admissible propositions A, B, C and D [A|B] ≤ [C|D] or [A|B] ≥ [C|D]
This axiom in particular is necessary because it makes degrees of confirmation comparable across different sets; or if you like, across events from different probability spaces. Here Norton explains the basics of the problem caused by universal comparability:
”Norton” wrote:We cannot presume, as Keynes (1921, Ch.3) correctly urged, that all degrees of confirmation are comparable. A tacit expectation of universal comparability is natural as long as we think of degrees of confirmation as real valued. The expectation rapidly evaporates once we use more complicated structures. Imagine, for example, that the degrees are real intervals in [0,1] with the size of the interval betokening something about the bearing of evidence. Take two intervals [0.01, 0.99] and [0.49, 0.51]. If they must be comparable, the only relation that respects the symmetry of dispositions about the midpoint 0.5 is that they are equal. But that contradicts the presumption that the size of the interval represents some sort of difference in the degrees of confirmation.

However, even if degrees of confirmation are real valued, it does not follow that they are comparable. For two degrees to be comparable in the relevant sense, they must measure essentially the same thing. The mere fact that two scales employ real values is not enough to assure this. One hundred degrees Celsius on the mercury thermometer scale and on the ideal gas thermometer scale are equivalent since they measure the same thing, temperature. They are none of equivalent to, less than or greater than one hundred degrees Baumé of specific gravity.
Now this is the most important part of all of this: a specific example of how Bayes starts to fail us when considering the half-life of Radium 221:
”Norton” wrote:Propositions can bear, evidentially, on one another in many ways, and the range of variation is sufficiently great that we can surely not always presume comparability of the degrees, even if both are measured on the same numerical scale. Consider the hypothesis H that the half-life of radioactive decay of Radium 221 is 30 seconds and the evidence E that some Radium 221 atom did decay in a time period of 30 seconds. The two degrees, [E|H] and [H|E], are very different. In the first, we take certain laws of physics, with their characteristic constants, as fixed and distribute belief over possibilities (decay in 30 seconds, decay in 40 seconds, etc.). Those laws provide physical chances for the possibilities and the bearing of H on E is detailed for us completely as a matter of physical law.[10] In the second, we take an experimental fact as fixed and must now distribute belief over the possibility of different half-lives for Radium 221. No physical law can fix the bearing of E on H, for now the range of possibilities must involve denial of physical laws; there is only one correct value for the half-life. Even exactly how we are to conceive that range is unclear. Will we try to hold all of physics fixed and just imagine different half-lives for Radium 221? Or should we recall that the physical properties of Radium 221 are fixed by quantum physics and chemistry, so that differences in half-lives must be reflected in differences throughout those theories? And how should those differences be effected? As alterations just to fundamental constants like h and c? Or in alterations to Schrödinger’s equation itself? My point is not that we cannot answer these questions, but that answering them engages us in a very different project that is a mixture of science and speculative metaphysics. The way H bears on E in [E|H] is very different from the way E bears on H in [H|E].

So, if we expect the degrees of confirmation simply to measure the bearing of evidence, as an objectivist about probability like Keynes would, then we should not expect the two sets of degrees always to be comparable. A subjectivist about probabilities has no easy escape. Of course, the subjectivist simply supposes comparability and stipulates real valued prior probabilities that lead to real values for both [E|H] and [H|E] upon conditionalization. The hope is that the subjectivist’s assignments will eventually betoken something more than arbitrary numbers as the accumulation of evidence ‘washes out the priors’ and leads to a convergence of values for all subjectivists. If the very idea that the two degrees are comparable entered originally as a supposition without proper grounding, the convergence does not remove its arbitrariness. Oranges are not apples, even if we end up agreeing on how many apples make an orange.
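The asymmetry Norton describes can be made concrete with the standard exponential decay law. In this short Python sketch, the [E|H] direction is fixed completely by physics, while the [H|E] direction would additionally require a prior over alternative half-lives, which neither the code nor any physical law supplies.

```python
# The "easy" direction [E|H]: given half-life H, physics fixes the chance of
# decay within time t by the exponential law P = 1 - exp(-ln(2) * t / H).
import math

def p_decay_within(t, half_life):
    """Probability a single atom decays within t, given its half-life."""
    return 1.0 - math.exp(-math.log(2) * t / half_life)

# Evidence E: one atom decays within 30 s; hypothesis H: half-life is 30 s.
print(round(p_decay_within(30, 30), 12))  # 0.5 -- the law leaves no freedom

# The "hard" direction [H|E] has no such recipe: Bayes' theorem would need a
# prior over alternative half-lives, and no physical law provides one.
```

So the two degrees are computed by entirely different kinds of procedure, which is exactly why presuming they are always comparable is a substantive assumption rather than a formality.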
There is a lot more I could say, especially about Richard Carrier’s understanding of issues, but this is already too long a post.