Dr Moore wrote: Fri Oct 01, 2021 2:17 am
Let me guess.
Another non sequitur statistical algorithm which magically yields astronomical odds against something that he claims is a proxy for chiasmus, but has no demonstrable connection with chiasmus.
And unsupported claims that Joseph couldn’t possibly have known how to construct chiasms in a dictated work. And dismissing, or neglecting to mention, that in-out was a popular storytelling model, even then.
Your first paragraph is exactly right. In this case, the proxy for chiasmus is the “inverted type-token ratio,” which is used by quantitative linguists to assess the working vocabulary of an author. It is simply the total number of words in a text divided by the number of unique words.
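For concreteness, here is what that computation looks like (a minimal sketch in Python; the tokenizer is my own simplification, since the exact preprocessing Kyler uses isn't specified):

```python
import re

def inverted_type_token_ratio(text: str) -> float:
    """Total word count divided by unique word count.

    A larger value means the author cycles through a smaller
    working vocabulary.
    """
    # Crude tokenizer: lowercase words, keeping internal apostrophes.
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(tokens) / len(set(tokens))

# Toy example: 8 tokens but only 6 unique types -> ratio of about 1.33
print(inverted_type_token_ratio("And it came to pass that it came"))
```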
Kyler asserts that in addition to measuring the working vocabulary of an author, the inverted type-token ratio measures how much chiasmus is present in a book.
It turns out the odds of the Book of Mormon having such a small working vocabulary are a hundred trillion to one, which means that the odds of it using chiasmus in a uniquely ancient way are also a hundred trillion to one. This piece of incredible evidence overcame the last doubts we had about the Book of Mormon, and, combined with the prior episodes, we’ve overcome the seemingly insurmountable prior odds and are now 99.999999997% certain the Book of Mormon is authentic.
It turns out that the D&C also has a small working vocabulary. A lesser statistician might take this as a reason to doubt his theory about the relationship between type-token ratios and ancient chiasmus. But not Kyler. Kyler thinks this is evidence that the D&C was originally written in an ancient language and then translated by the Book of Mormon ghost committee into English, which is further evidence that these books are all authentic.
This raises a serious question:
Is Kyler punking us?
Final Technical Criticism
Kyler’s mistakes are getting repetitive, and there is no point in responding anymore. But I do want to make one final remark about a glaring technical mistake that Kyler makes consistently, that somehow slipped through his peer review process, and that I haven’t heard anybody comment on.
In Bayesian statistics, you ask: “What is the probability you’d see this basket of evidence if hypothesis A is true? What is the probability you’d see it if hypothesis B is true?” In a continuous statistical model, “the probability of seeing this basket of evidence” is given by the height of a probability density function (PDF) at the specific point corresponding to the evidence, or by the height of a likelihood function at that point.
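For illustration, here is what that correct comparison looks like in code (a minimal sketch; the normal models and their parameters are invented for the example, not anyone's actual analysis):

```python
from scipy.stats import norm

observed = 5.2  # the measured statistic (e.g., an inverted type-token ratio)

# Hypothetical models of what each hypothesis predicts; the means and
# standard deviations here are made up purely for illustration.
density_A = norm(loc=5.0, scale=0.5).pdf(observed)  # PDF height under A
density_B = norm(loc=3.0, scale=0.5).pdf(observed)  # PDF height under B

# The Bayes factor is a ratio of densities (likelihoods), not of p-values.
bayes_factor = density_A / density_B
print(f"Bayes factor A vs B: {bayes_factor:.3g}")
```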
However, that isn’t what Kyler does. Rather, he consistently uses the p-value from Fisher’s significance testing. A p-value corresponds to the height of the cumulative distribution function (CDF), that is, to a tail area under the PDF, not to the height of the PDF itself. This makes his math an invalid Frankensteinian mishmash of Fisherian statistics and Bayesian statistics. Kyler might argue that the height of the CDF is a good-enough proxy for the height of the PDF, but it isn't: a tail area and a density are different quantities, and they can diverge badly. His models are built on unacknowledged, sometimes contradictory, and always extraordinarily unlikely assumptions. Even if all of those assumptions were exactly true, he would still be plugging the wrong numbers from his statistical analysis into the Bayesian equations.
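The difference is easy to demonstrate numerically. In this sketch (again with invented numbers), the density and the p-value at the same point aren't even close, so one cannot stand in for the other in a Bayes factor:

```python
from scipy.stats import norm

model = norm(loc=0.0, scale=1.0)  # some hypothetical null model
x = 0.0  # an observation dead-center in the distribution

density = model.pdf(x)  # PDF height: about 0.399
p_value = model.sf(x)   # upper-tail area (1 - CDF): exactly 0.5

# A Bayes factor needs the density (0.399). Plugging in the p-value
# (0.5) substitutes a tail probability for a likelihood -- the
# Fisherian/Bayesian mishmash described above.
print(f"density = {density:.3f}, p-value = {p_value:.3f}")
```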
Who peer reviewed this?