
Friday, 11 March 2011

Comments

Mike Will

"Errors using inadequate data are much less than those using no data at all." - Charles Babbage

Svetlana Pertsovich

The method of maximum likelihood operates with only two groups of data: 1) the data contained in your hypothesis, and 2) the data from your experiment (I say this specifically for you, considering the specific features of your work with ion channels). Clearly, you are forced to trust your experimental data in these calculations. The experiment is the only criterion of the correctness of your hypothesis. But what if your experiment is incorrect? For example, it contains a systematic error. Or, much worse, what if the colleague who carried out the experiment made a mistake? Then, after applying the method of maximum likelihood, you will get an incorrect result. And if some additional data allows you to discover this, it means you wasted your time making all your calculations with the method of maximum likelihood. You can't correct your result. All your work collapses.

Unlike the method of maximum likelihood, Bayesian statistics operates with three groups of data: 1) the data contained in your hypothesis; 2) the data from your experiment; and 3) additional data. Of course, you could say that when working with the method of maximum likelihood you take the additional data into account too. Yes, that is so. But, as I mentioned above, you can't include it in the calculations! Bayesian statistics, by contrast, allows you to include the additional data directly in the calculation, improving the quality of the results and, besides, computing new values of the posterior/prior probabilities. Moreover, you may even be able to check the correctness of the experimental data obtained in the previous stage (again, I say this considering your own work with ion channels).
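The contrast can be sketched numerically (a minimal illustration with a made-up binomial experiment, not ion-channel data): with a conjugate Beta prior, each batch of additional data folds directly into the calculation, and one stage's posterior becomes the next stage's prior.

```python
# Beta-binomial conjugate updating: additional data is incorporated by
# reusing the current posterior as the prior for the next batch.

def update(alpha, beta, successes, failures):
    """Update a Beta(alpha, beta) prior with binomial data."""
    return alpha + successes, beta + failures

# Stage 1: flat prior Beta(1, 1), plus the experiment's data.
a, b = update(1, 1, successes=7, failures=3)

# Stage 2: additional data arrives later; no need to redo stage 1.
a, b = update(a, b, successes=2, failures=8)

posterior_mean = a / (a + b)
print(posterior_mean)  # 10/22, about 0.4545
```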

Thus, Bayesian statistics is a more flexible instrument than the method of maximum likelihood and the other methods of conventional statistics. The Bayesian method allows you to include reasoning in the calculations, i.e. in a sense it becomes an instrument for reasoned analysis of reality. So it is not surprising, for example, that Eliezer Yudkowsky, a specialist in the field of artificial intelligence, is so interested in Bayesian statistics, as are his colleagues.

As for your sceptical attitude to this method, David, it arises rather because you have never used Bayesian statistics yourself. Your criticism is explained by your conservatism...

Svetlana Pertsovich

No, you are not right, David. You try to absolutize a single approach. That is wrong. There is a season for every method; everything must be done in good time. Any method has both merits and demerits.

Certainly, the method of maximum likelihood estimation can be useful in the initial stage of the mathematical processing of experimental data, if you have no information about prior probabilities. The method is simple enough and doesn't require complicated calculations. And it is really convenient to use the method of maximum likelihood to obtain results as a first approximation. Moreover, the method of maximum likelihood will in fact give you output information about probabilities.

But after this stage you can employ Bayesian analysis for further mathematical processing of your data (including additional new data!), using the information about probabilities which you obtained from the maximum likelihood estimators. This information plays the role of input information (i.e. information about prior probabilities, in fact) in your Bayesian calculations.
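This two-stage workflow might be sketched as follows (a hypothetical Bernoulli example with made-up data; converting the MLE into Beta pseudo-counts is one possible convention, not a prescription):

```python
# Stage 1: a maximum-likelihood estimate from a first batch of trials.
# Stage 2: that estimate, carried forward as a prior, is updated with
# a second batch by a Bayesian conjugate calculation.

batch1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 7 successes in 10
batch2 = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]   # 3 successes in 10

# Stage 1: the MLE of a Bernoulli success probability is the sample mean.
p_hat = sum(batch1) / len(batch1)          # 0.7

# Stage 2: encode the MLE and its sample size as a Beta prior whose
# pseudo-counts match the observed counts, then update with new data.
n1 = len(batch1)
alpha = p_hat * n1                         # about 7 pseudo-successes
beta = (1 - p_hat) * n1                    # about 3 pseudo-failures

alpha += sum(batch2)
beta += len(batch2) - sum(batch2)

print(alpha / (alpha + beta))  # posterior mean, about 0.5
```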

Of course, you could object that you prefer to use the method of maximum likelihood in this new stage of mathematical processing too. And you would be wrong, because at the new stage the method of maximum likelihood will give you nothing new, while the Bayesian method can give you much new and interesting information. So you are not right in this case when you say "Bayes is not helpful". Bayes will be greatly helpful at the stage I mentioned, because in general Bayesian statistics is more powerful than the method of maximum likelihood, just as the method of maximum likelihood is itself more powerful than, for example, non-parametric statistics. The power of these methods increases in the series:

Bayesian statistics > maximum likelihood > non-parametric statistics

You can't deny this fact, if you are a real statistician.

David Colquhoun

@Pertsovich
If the prior probabilities have no effect (as one would hope in cases where they aren't known) then there is no reason to bother with them at all.

In such cases surely it is better simply to use the right-hand side, i.e. the likelihood (in the technical sense, the probability of the data given your hypothesis), and forget Bayes. This is what R.A. Fisher advocated, and maximum likelihood estimators are what we use for inferences from single ion channel data.
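A minimal sketch of this likelihood-only approach (a toy example with made-up dwell times and a bare exponential model; real single-channel analysis is far more elaborate): the estimate is the value that maximises the probability of the data given the hypothesis, with no prior anywhere.

```python
import math

# Toy Fisher-style estimation: assume (purely for illustration) that
# channel open dwell times are exponential with unknown mean tau, and
# pick the tau that maximises the likelihood of the observed data.

dwell_times = [1.2, 0.4, 2.1, 0.9, 1.6, 0.3, 1.1, 0.8]  # ms, made up

def neg_log_likelihood(tau, data):
    # For an exponential density (1/tau) * exp(-t/tau), the negative
    # log-likelihood is sum(log tau + t/tau) over the observations.
    return sum(math.log(tau) + t / tau for t in data)

# For the exponential the MLE has a closed form: the sample mean.
tau_hat = sum(dwell_times) / len(dwell_times)

# Sanity check: the closed form beats nearby candidate values of tau.
assert neg_log_likelihood(tau_hat, dwell_times) <= min(
    neg_log_likelihood(tau_hat * s, dwell_times) for s in (0.9, 1.1))

print(tau_hat)  # about 1.05 ms
```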

Svetlana Pertsovich

@ David Colquhoun

No, Bayesian analysis doesn't become more problematic in the examples you mention. Bayesian statistics has special procedures for the cases you speak about. For instance, if you have no real numerical knowledge of the prior probabilities, then you must take the prior probabilities to be equal to each other. Generally the choice of the prior probabilities doesn't influence the result of a Bayesian analysis; the result depends on the informativeness of the posterior probabilities.
Besides, Bayesianism has no limits to its application. It can be used in the same cases that are interpreted in the usual frequentist way, with standard conditional probabilities.
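What equal priors amount to in practice can be sketched on a grid (a made-up binomial example, not from the original exchange): with a uniform prior the posterior is proportional to the likelihood, so its mode coincides with the maximum likelihood estimate.

```python
# Grid sketch of a posterior under a flat prior, for a binomial
# experiment with 7 successes in 10 trials.

n, k = 10, 7
grid = [i / 1000 for i in range(1, 1000)]

def likelihood(p):
    return p**k * (1 - p)**(n - k)

prior = 1.0  # uniform: the same constant for every p
posterior = [likelihood(p) * prior for p in grid]

p_map = grid[posterior.index(max(posterior))]
print(p_map)  # 0.7, identical to the MLE k/n
```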

However, these are well-known properties of Bayesian analysis. Any statistician knows them.
But perhaps you meant something else?

David Colquhoun

Yudkowsky's account is indeed lovely, but I don't think it really touches on the contentious part of Bayesian statistics at all. It can all be done with standard conditional probabilities, interpreted in the usual frequentist way. There is barely any need to introduce Bayes at all.

Bayes becomes problematic when you have no real numerical knowledge of the prior probabilities and when you are forced to drop the interpretation of probabilities as long-run frequencies. These problems didn't arise in the examples chosen by Yudkowsky.

Rob Ryan

Yudkowsky is, as my friend Dr. Michael Tobis says, "wicked smart."



Evidence Soup is brought to you by Tracy Allison Altman.

I’m on Twitter: @EvidenceSoup.
My day job: PepperSlice.

Why Evidence Soup?
Site search Web search

Powered by TypePad