tag:blogger.com,1999:blog-715339341803133734.comments2017-03-18T13:40:54.923-05:00Maximum EntropyTom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comBlogger106125tag:blogger.com,1999:blog-715339341803133734.post-5828326447540505032015-09-01T14:08:15.616-05:002015-09-01T14:08:15.616-05:00Just noticed,
in deriving equation 1, I invoked t...Just noticed,<br /><br />in deriving equation 1, I invoked the condition that 'neither P(A) nor P(B) is 0 or 1,' but this isn't sufficient for the proof - it also should be true that A and B don't form an exhaustive set of hypotheses. (If they did, P(A+B) would be 1, and the inequality would become an equality.)Tom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-86794972109192789442015-07-21T14:46:05.840-05:002015-07-21T14:46:05.840-05:00Maybe I didn't explain myself clearly. I don&#...Maybe I didn't explain myself clearly. I don't think we need to marginalize over probability models and confidence procedures.<br /><br />From your paper:<br /><br />"A X% confidence interval for a parameter theta is an interval (L, U) generated by an algorithm that in repeated sampling has an X% probability of containing the true value of theta."<br /><br />Thus, if Professor Bumbledorf conducts 100 experiments, and reports a (valid) 95% CI for each, and I draw one of them at random from a hat, then 95 times out of 100, I expect to get one that contains the true value of theta - there is a 95% probability that it will contain the true value (the Bernoulli urn rule). After a single measurement, but without seeing his data, I'm in the same position of randomly sampling from the hat. All I have to go on is the CI and its associated properties. Regardless of the probability model and the shape of the posterior, the integral from L to U is thus 0.95, by definition.<br />Tom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-68793841217812179412015-07-21T14:08:13.928-05:002015-07-21T14:08:13.928-05:00I don't feel that "probability is somethi...I don't feel that "probability is something out there". 
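The repeated-sampling property invoked in the "hat" argument above can be checked with a quick simulation. This is a minimal sketch, not anything specified in the thread: I assume a normal model with known standard deviation, and the true value of theta, the sample size, and the number of experiments are all arbitrary illustrative choices.

```python
import random
import statistics

random.seed(1)
TRUE_THETA = 0.7          # hypothetical true parameter (my choice, not from the thread)
SIGMA, N, TRIALS = 1.0, 25, 10_000

hits = 0
for _ in range(TRIALS):
    # One experiment: N noisy observations of theta
    sample = [random.gauss(TRUE_THETA, SIGMA) for _ in range(N)]
    m = statistics.fmean(sample)
    half = 1.96 * SIGMA / N ** 0.5          # 95% CI half-width for known sigma
    if m - half <= TRUE_THETA <= m + half:  # does this interval contain theta?
        hits += 1

coverage = hits / TRIALS
print(f"fraction of intervals containing theta: {coverage:.3f}")
```

Drawing one of these intervals at random and asking whether it covers theta is exactly the Bernoulli-urn situation described: roughly 95% of them do, by construction of the procedure.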
What I'm skeptical of is that there is a meaningful way of marginalizing over the possible probability models and confidence procedures, and hence I'm skeptical that anyone is compelled to adopt a probability assignment. There are, after all, an uncountable number of probability models, and for each of these probability models there are an uncountable number of 95% confidence procedures. "Many" of these are trivial, having 0 probability of containing the true value. These can't be described in any sort of "space" that I'm aware of. In typical scenarios where the principle of indifference is applied, there are natural symmetries or invariances in the problem that allow one to apply the principle. I don't see that here, but maybe I'm missing something obvious.Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-80080045803463345442015-07-21T10:39:29.472-05:002015-07-21T10:39:29.472-05:00Hi Richard
Many thanks for your comment. Strictly, you are correct that we cannot say that a probability assignment follows logically, without specifying a probability model. As there was no probability model prescribed in the survey question, however, I was free to supply my own, and as there was no information supplied to adjust my probability estimate either above or below the 95% confidence reported, I shot down the middle, as indifference dictates. The situation is analogous to the submarine example, where instead of being supplied the separation between a pair of bubbles, we are only aware of the calculated confidence interval, and we have to determine the desired probability - all we have to go on is the defined 'long-run' behaviour of confidence intervals.<br /><br />You say that "under some conditions the statements might be true, but inferring this would require information that was not stated in the problem." Maybe I misunderstand your meaning, but this strikes me as strange. It sounds as if you feel that a probability is something out there, waiting to be discovered, as soon as the required evidence comes to light. <br /><br />My view is that probability theory is the machine we use for quantifying how much we know, in the presence of missing information. As long as a question is meaningfully constructed, there can be no situation in which there is not enough information to form a probability estimate. Otherwise, how would we ever accumulate enough knowledge to get started?<br /><br /> Thanks again for your comment.Tom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-64868777246847036042015-07-21T07:34:46.079-05:002015-07-21T07:34:46.079-05:00Hi Tom, I just saw your blog post here. Thanks for...
I would like to object to your characterisation of the Hoekstra et al survey: note that in the survey, the participants were asked to note which of the responses *logically follow* from the information given. In the sense of the paper, "false" is "this statement is false in the sense that I cannot infer the statement from the information about the CI." It is certainly true that none of the statements logically follow. It is also certainly true that under some conditions the statements might be true, but inferring this would require information that was not stated in the problem. Best, Richard MoreyRichard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-10223893453561837752015-06-15T14:41:25.765-05:002015-06-15T14:41:25.765-05:00Edwin Jaynes was the greatest educator in Probabil...Edwin Jaynes was the greatest educator in Probability and Statistics that the twentieth century produced. His writing is of such clarity, his logic is so solid, that his conclusions are unassailable.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-8399866827265472762015-04-28T18:15:51.418-05:002015-04-28T18:15:51.418-05:00Quite right, thanks for pointing it out.
Thinkin...Quite right, thanks for pointing it out. <br /><br />Thinking about it drew a couple of other points to mind, which I've also reflected in a minor edit or two:<br /><br />(1) I tacitly assumed the uniqueness of the median, throughout, in effect assuming a continuous distribution.<br /><br />(2) The relationship between the mean and median is not uniquely determined by the direction of skew. Hence, I added the word 'typically' in the sentence: <br /><br />"If a distribution has an extended tail on one side only, then the mean will typically be positioned further out into the tail than the median. " <br /><br />Cheers.Tom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-13987237485448774182015-04-28T16:40:49.207-05:002015-04-28T16:40:49.207-05:00"(all such distributions are symmetric, and v...<i>"(all such distributions are symmetric, and vice versa)."</i><br /><br />Really? All symmetric distributions have the same mean and median, but the reverse is in general not true. Say income is distributed as a Gaussian and each person earns an integral number of pounds. Let the top earner earn a pound more. This moves the mean to the right but not the median. Move the mean back to its original position by giving 100 people to the left of the mean one penny more. 
The resulting distribution has the same mean and median, but is not symmetric.<br />Phillip Helbighttp://www.blogger.com/profile/12067585245603436809noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-26198832836234941232014-12-19T09:42:00.599-06:002014-12-19T09:42:00.599-06:00"As I see it, conceptual analysis does nothin..."As I see it, conceptual analysis does nothing more than specialize in eliminating ambiguity in the relationship between symbol and signified - something that is inherently part of science, anyway. Your examples of "things" and "reality" are trivial empirical questions, and the concept of truth is already given, once we have the hardware in place to constitute a decision-making entity."<br /><br />Reducing ambiguity is a huge part of conceptual analysis, and it is indeed a part of the scientific method - but it is the <i>non-empirical part</i>, the part that comes before (as well as after) the empirical investigations. And it has a name - "philosophy". :)<br /><br />I don't think our natural conceptions of things like "thing" or "truth" are necessarily clear enough; I do think philosophizing and coming up with non-ambiguous (and useful!) definitions and recognizing our preconceptions can be productive. <br /><br />"Of course, we would like to be able to boast a non-circular foundation for everything, but I'm afraid this is just not a luxury we can aspire to. You say that reasoning under uncertainty is founded on axiomatic principles of reasoning under certainty - well yes, but where do those axioms come from? They are either arbitrary (hardly a satisfactory solution to the circularity problem), or else they are derived from our capacity to reason probabilistically. 
"<br /><br />I don't think there is any way to justify our most basic rational intuitions; they're what we use to judge everything else. For myself, I am convinced by deductive arguments that Bayesianism is the right way (e.g. Cox's Theorem) rather than being convinced by probabilistic arguments that logic is true (I'm not sure if you can even state that consistently). <br /><br />"Do they have any familiarity with the normal distribution? "<br /><br />No, but they can gain familiarity...<br /><br />"When I get some time, I'll develop an example and post it on the blog - hopefully, before too far into the new year."<br /><br />I'm looking forward to it!יאיר רזקhttp://www.blogger.com/profile/15798134654972572485noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-13195708851224044522014-12-16T10:23:06.333-06:002014-12-16T10:23:06.333-06:00Hi Yair
Interesting comments.
As I see it, conc...Hi Yair<br /><br />Interesting comments. <br /><br />As I see it, conceptual analysis does nothing more than specialize in eliminating ambiguity in the relationship between symbol and signified - something that is inherently part of science, anyway. Your examples of "things" and "reality" are trivial empirical questions, and the concept of truth is already given, once we have the hardware in place to constitute a decision-making entity.<br /><br />Of course, we would like to be able to boast a non-circular foundation for everything, but I'm afraid this is just not a luxury we can aspire to. You say that reasoning under uncertainty is founded on axiomatic principles of reasoning under certainty - well yes, but where do those axioms come from? They are either arbitrary (hardly a satisfactory solution to the circularity problem), or else they are derived from our capacity to reason probabilistically. <br /><br />__<br /><br />Regarding a linear regression example, I would start with a case that is assumed to pass through the origin, so that there is only one parameter to estimate - the calculation can then be done numerically using a spreadsheet, or if you would like to be able to extend it more easily to higher dimensionality, using some simple code. <br /><br />Do they have any familiarity with the normal distribution? 
They will need to be able to appreciate that each x-y pair used to fit the line has itself an associated probability distribution.<br /><br />When I get some time, I'll develop an example and post it on the blog - hopefully, before too far into the new year.<br />Tom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-46142462772414107572014-12-16T02:00:35.590-06:002014-12-16T02:00:35.590-06:00I would argue in contrast that there is a prior ba...I would argue in contrast that there is a prior basis of rational reasoning, which is conceptual analysis (which includes logic, mathematics, understanding language, and so on). Only on this basis can the metaphysical and propositional model underlying your domains of knowledge be defined; e.g. only once you understand "things" to exist in "reality" and propositions as being "true" if they correspond to reality can you define (again, using language and logic - conceptual analysis) the question of whether some proposition is true (your (1)). <br /><br />Conceptual analysis is what (good) philosophy is all about, and does NOT fall under the scientific method but rather justifies it - it is because of our understanding of what "truth" or "belief" are, for example, and by the application of logic and mathematics, that we can justify the use of Bayes' Rule as the way to seek out truth. 
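The one-parameter regression exercise sketched above (a line through the origin, fitted numerically on a grid) might look something like the following. This is a sketch only, under assumptions of my own: simulated data, Gaussian noise of known width, and a uniform prior over a grid of candidate slopes; none of these numbers come from the discussion.

```python
import math
import random

random.seed(2)
# Hypothetical data: y = a*x + noise, with the line constrained through the origin,
# so the slope a is the only parameter to estimate.
TRUE_A, NOISE_SD = 2.0, 0.5
xs = [0.5 * i for i in range(1, 11)]
ys = [TRUE_A * x + random.gauss(0, NOISE_SD) for x in xs]

# Uniform prior over a grid of candidate slopes, a in [0, 4]
grid = [i * 0.01 for i in range(401)]

def likelihood(a):
    # Product of independent Gaussian likelihoods, one per (x, y) pair
    return math.exp(-sum((y - a * x) ** 2 for x, y in zip(xs, ys)) / (2 * NOISE_SD ** 2))

post = [likelihood(a) for a in grid]
total = sum(post)
post = [p / total for p in post]           # normalize to a posterior over the grid

mean_a = sum(a * p for a, p in zip(grid, post))
sd_a = math.sqrt(sum((a - mean_a) ** 2 * p for a, p in zip(grid, post)))
print(f"posterior mean slope: {mean_a:.2f} +/- {sd_a:.2f}")
```

The same sums can be carried out in a spreadsheet, one row per grid point, which keeps the whole thing at a level a high-schooler who knows the normal distribution can follow.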
<br /><br />If our methods of thinking were justified scientifically, we would have circularity - our methods justifying our methods. What we have instead, I suggest, is foundationalism - our methods for reasoning under uncertainty are founded on our methods for thinking about certainty (which are themselves axiomatic).<br /><br />The upshot of all of this is that the charge of Scientism CAN be correctly derogative when people maintain that we should apply the Scientific Method to conceptual questions. You don't do mathematics by empirical induction, and you don't do an analysis of what "morally good" means by induction (although what people think about when they use the word is important, at the philosophical level the point is to explicate a clear meaning rather than to describe the confused and varied uses in ordinary use). <br /><br />In practice, however, the charge of Scientism is usually leveled at applications of the Scientific Method where it DOES belong, as in e.g. the science of morality (which IS a science, although like all sciences it is based on arbitrary/philosophical definitions of its subject matter), rather than where it doesn't belong (as in e.g. coming up with said definitions). <br /><br />Yair<br /><br />P.S. On an unrelated question - I'm teaching some Scientific Method to highschoolers. I taught them Bayes Rule, but I can't find a nice and SIMPLE (highschoolers!) Bayesian analog of simple linear regression - coming up with the parameters for a line formula and the uncertainty in them from a Bayesian perspective. Can you perhaps direct me in the right direction?<br /><br />Cheers,<br />Yairיאיר רזקhttp://www.blogger.com/profile/15798134654972572485noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-842369405919173512014-09-29T14:17:15.567-05:002014-09-29T14:17:15.567-05:00Thanks for your thoughts
According to my investig...Thanks for your thoughts<br /><br />According to my investigations, a 'Normal human conscience' is not necessary for a rationally derived morality, though it is not far from the truth (depending on exactly what you meant by that) as we are all physically very similar, and we are a hi-tech species, meaning that the social contract entwines our individual values. Morality <i>is</i> relativistic, as I discuss in 'Practical Morality,' parts <a href="http://maximum-entropy-blog.blogspot.com/2014/01/practical-morality-part-1.html" rel="nofollow">1</a> and <a href="http://maximum-entropy-blog.blogspot.com/2014/02/practical-morality-part-2.html" rel="nofollow">2</a>, but on a practical level, this isn't all that important.<br /><br />To my mind, politics (when done correctly!) is a subset of ethics. Again, because of the overwhelming force of the social contract, what is good for society ought to be good for me. There may be extreme cases where my rationally inferred goals conflict radically with those of society - but then I should stop playing politics (at least, stop trying to serve society). There are many interesting avenues of thought one can pursue here, though. E.g. society can seek to (and in fact does) implement deterrents and rehabilitation to reshape an individual's utility function such that it matches more closely one that society would desire. The difficult (but not impossible) trick is to re-shape the politician's utility function, so that he must better serve the common man!<br /><br />Cheers,<br />Tom<br /><br /> Tom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-69780029528843129202014-09-28T15:26:46.479-05:002014-09-28T15:26:46.479-05:00Very good post! I'm sorry I didn't read it...
<br /><br />I think the existence of a rational theory of morality comes down to whether there exists a Normal human conscience - whether there is something like a fairly-sharp Normal distribution of fundamental human values in the moral sphere, that we can then advance scientifically. If there are no such fixed values, then morality becomes relativistic.<br /><br />An important point I don't recall seeing you make is the difference between Ethics and Politics. Ethics is the art of advancing your own values; or at least, the theory of advancing Normal values. Politics, on the other hand, is the art of advancing Normal values in society. This can be very different as your Normal values (such as seeing your family prosper) can clash with mine, which leads to a game-theoretic game where certain values can cancel out or strengthen or so on. Hume argued that the winner in the Politics game is Empathy, which ultimately drives the development of social morality (Politics) against the other tendencies (which tend to cancel out, on the very long historical timescale), hence leading to social advancement. I'm not sure it's that simple, but I do think that figuring out what your values are and how best to achieve them in a competitive environment is radically different from a society - even of such perfectly rational agents, and surely one incorporating many irrational ones - cooperating to advance certain values.<br /><br />One should also be careful to clarify well what one means by "morality". At the top level there is a theory that advances ALL Normal values, a theory every Normal person should rationally follow. But it might be wise, for example, to limit ourselves to PRESCRIPTIONS only, to things we want others to do or not to do; this is a smaller set of values than what we want in general. In making these sorts of distinctions, philosophy has a role. 
It isn't to prescribe what is moral, or even what the "right" distinctions are - but rather to make us aware of the various meanings and be careful to make sure it is clear what we are talking about when we say "morality". <br /><br />Yairיאיר רזקhttp://www.blogger.com/profile/15798134654972572485noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-30682706965113841902014-08-19T11:12:00.142-05:002014-08-19T11:12:00.142-05:00Yes, it is always possible that nature will go ...Yes, it is always possible that nature will go 'off the rails,' but this does not invalidate any previous probability assignment. A probability assignment is not a statement only about some external state of the world. It is a statement about our relationship with that state of the world. <br /><br />Two rational agents, possessing exactly the same information, can legitimately arrive at two different probability assignments for the same proposition, if they use different probability models. This does not invalidate either probability assignment. The calculus of probability is impossible without such probability models, and there is no prior principle for choosing one over the other (assuming they are both mathematically valid - and even this criterion is problematic, under the closest scrutiny).<br /><br />We <i>can</i> use past success to predict future events. This does not guarantee those predictions will be correct. (This is just as true, concerning inferences about the past.) We <i>do</i> need to assume some kind of uniformity to do this, but there is no way around this, and there is nothing to say against such a procedure. If we begin to suspect the exact form of our symmetry assumptions (probability models), we can test them by applying Bayes' theorem at a higher level. <br /><br />What Hume and Popper both wanted was for our inferences to be absolutely guaranteed (independent of any vulnerable assumptions). 
There is no way to grant this wish, probabilistic or otherwise.Tom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-23462835088802006532014-08-19T10:27:42.554-05:002014-08-19T10:27:42.554-05:00Hume and Goodman's problems do not require us ...Hume and Goodman's problems do not require us to be 100% certain in our inferences. The problems remain even if we consider our predictions to be highly probable, rather than certain (e.g., that we expect the sun to rise tomorrow with very high probability). It is always possible that Nature is going to 'go off the rails' in the next observation interval. In fact, if Hume and Goodman are correct, unless we presuppose the uniformity of Nature (even in the probabilistic sense), there is no way to assign a probability to the next observation. If this is the case, then we cannot use past scientific success to predict future observations, even probabilistically- which means that we cannot claim to know anything about the world beyond what has already been observed. Note that it is partly Hume's problem of induction which led Popper to reject inductivism in favor of deductivism. But as you rightly point out, given that falsificationism is also inductive, Popper's solution ultimately fails. <br /><br />Now, I am not saying that Hume's and Goodman's problems cannot be solved via probabilistic reasoning a la Bayes; I'm just curious about your thoughts on what this solution might look like. How would Bayesian confirmation theory provide grounds for believing that all emeralds are 'green' rather than 'grue'?YFhttp://www.blogger.com/profile/06353112342089566468noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-13185156302062707932014-08-19T04:49:06.864-05:002014-08-19T04:49:06.864-05:00Thanks for your comment.
The term 'knowledge,' often defined as 'justified true belief,' is highly problematic. The concept is hopelessly oversimplistic, to the point of uselessness, and should be removed from all serious philosophical discussion.<br /><br />It is true that inductive inference cannot guarantee the truth of relationships between past, present, or future entities. (For the purposes of inference, there is no important difference between past and future.) But to say that science can't be trusted to make predictions about the future is oversimplistic, and a stronger statement than saying that science can't guarantee our inferences. Lack of complete trust is not the same as complete lack of trust. <br /><br />There is no solution to the problem of induction - no way to be 100% certain (independent of a probability model) of anything interesting. We can, however, apply arbitrarily many layers of sophistication in our pursuit of the truth (via <a href="http://maximum-entropy-blog.blogspot.com/p/glossary.html#model-comparison" rel="nofollow">model comparison</a>), and thus examine our hypotheses with arbitrarily stringent tests.<br /><br />It is thus possible to be highly confident of our inferences, and to say "given the information I have now, there is no good reason for acting as if X is not true."<br /><br />Tom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-25932287182289793522014-08-18T22:50:09.789-05:002014-08-18T22:50:09.789-05:00Excellent post. Given, as you've argued, that ...
If it cannot, then science cannot be trusted to make predictions about future states, and it becomes truly a rear-view mirror endeavor. Indeed, if they cannot be solved, then we cannot claim to have any real knowledge about the world at all...YFhttp://www.blogger.com/profile/06353112342089566468noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-18531177133567750322014-07-08T16:52:44.791-05:002014-07-08T16:52:44.791-05:00Happy to continue the discussion.
A classic thoug...Happy to continue the discussion.<br /><br />A classic thought experiment in moral philosophy is the trolley problem. Several versions exist, but usually, the point is that by pushing a button you can cause 1 innocent person to die, rather than say 5 (also innocent) people, if you don't. To make it easier, we can say that the default case is 5 million people dying, as opposed to 1 person dying if you push the button. <br /><br />I can't think of any principle that would make pushing that button not an act of violence against that poor individual, but, <i>ceteris paribus</i>, I would describe that act as moral.<br /><br />Any act of enforced punishment, be it for purposes of incapacitation, deterrence (of the guilty individual or of others), or rehabilitation, has similar properties - normally experienced freedoms are removed for the purpose of protecting others in society. Under many circumstances, such actions are appropriate. <br /><br />We could choose to define these actions as not violent (e.g. since they represent a form of self defense), but such a definition would not change the physics of what is happening. What matters, ultimately, is not what we call it, but whether such behavior is appropriate. <br /><br />By the way, the guy who desperately wants your $200, and is willing to shoot you for it, is also defending his perceived self interest. There are no black-and-white absolute principles of morality, other than acting rationally on the best available information, and thereby treating each case on its merits.<br /><br />Tom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-10146901792749920822014-07-08T16:11:10.727-05:002014-07-08T16:11:10.727-05:00Sorry for the delayed response.
I think you are ri...Sorry for the delayed response.<br />I think you are right, self-interest plus universality can arrive to "stealing is wrong", but eliminate "getting caught is wrong" as a principle, even though the premise of self-interest alone is insufficient.<br /><br />Regarding your last point, I'm not so sure that it's easy to identify cases where violence (in aggression, not self-defense) would be moral. A criminally insane person that didn't harm anyone (yet) can be ostracized (not allowed on anyone's property), but I'm not sure violence (imprisonment) would be warranted.Julien Couvreurhttp://www.blogger.com/profile/15158751165174523704noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-4372877483382214832014-03-22T10:07:54.455-05:002014-03-22T10:07:54.455-05:00Thanks for elaborating.
Yes, frequentist thinking...Thanks for elaborating.<br /><br />Yes, frequentist thinking is complicated - it needs to be to disguise the fact that it is wrong. The device that the frequentists use to make e.g. item 4 false is completely ad hoc, declared out of thin air, and has no basis. Tom Campbell-Rickettshttp://www.blogger.com/profile/07387943617652130729noreply@blogger.comtag:blogger.com,1999:blog-715339341803133734.post-21045046307686199552014-03-22T05:55:14.721-05:002014-03-22T05:55:14.721-05:00To be really precise you should note that sentence...To be really precise you should note that sentences of the form "[0.1,0.4] is a 95% confidence interval" are never legit. You can only assign the property of being a "95% confidence interval" to <i>procedures</i> that calculate an interval given the data. The best you can say is "[0.1,0.4] was generated by a procedure that (before I knew the data) I assigned a 95% probability of creating an interval that contained the true value".<br /><br />So we have:<br /><br />False: 4a. <i>There is a 95% probability that the true mean lies between 0.1 and 0.4.</i><br /><br />True: 4b. <i>There is a 95% probability that the true mean lies between the lower end of the confidence interval and the higher end (provided we haven't yet seen the data that determines what they are).</i><br /><br />Our problem here is that the frequentists refuse to treat the true value as an r.v. So before we get the data the Bayesian views both the true value and data as random, but the frequentist views only the data as random. So at this point the frequentist can say "performing the following procedure on the data will yield an interval that with 95% probability contains the true value". When they say this it is the <i>interval</i> that they are treating as random, and they are saying that the statement is true <i>for each possible true value</i>. 
The Bayesian is treating both the true value and interval as random, but actually agrees with the frequentist's statement since if the statement is true for each possible true value then it must also be true for the random true value weighted according to the Bayesian's prior.<br /><br />After we find out the data, the frequentist calculates her confidence interval <i>and then has no random variables left at all</i>. Thus the frequentist is now incapable of making any probabilistic statements at all. And so all six of the given statements are false.<br /><br />Now, what does the Bayesian do when we find the data? She updates her probability distribution for the true value. Also, just like for the frequentist, the interval can be calculated and so ceases to be an r.v. But the Bayesian can still make some probabilistic statements since for her the true value is still random. In particular the Bayesian can calculate the probability, based on her posterior, that the true value lies in the interval. But this needn't be 95%, and so even from a Bayesian perspective we must judge all six propositions to be false.<br /><br />An amusing example is to consider an experiment where the true value is known to be positive (perhaps it is a scale parameter) but it is being measured with some Gaussian noise (with say s.d. 1). Then it is clear that a 95% confidence interval will be given by taking the measured value plus or minus 1.96. So suppose by misfortune we get the measurement "-2" then our confidence interval is [-3.96,-0.04]. Certainly no one would claim that our true value was 95% certain to lie in there!<br /><br />Gosh, frequentist thinking is complicated, isn't it?Oscar Cunninghamnoreply@blogger.com
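The "amusing example" above is easy to reproduce numerically. A sketch, with one assumption of my own: I take the known-positive true value to be 0.1 (any small positive number makes the same point). The procedure achieves its advertised 95% coverage in repeated sampling, and yet a noticeable fraction of the intervals it emits lie entirely below zero, even though the prior knowledge rules out every value they contain.

```python
import random

random.seed(3)
TRUE_VALUE = 0.1          # known-positive parameter; 0.1 is an illustrative choice
TRIALS = 20_000

covered = negative_intervals = 0
for _ in range(TRIALS):
    x = random.gauss(TRUE_VALUE, 1.0)      # one noisy measurement, s.d. 1
    lo, hi = x - 1.96, x + 1.96            # the 95% confidence procedure
    covered += lo <= TRUE_VALUE <= hi
    negative_intervals += hi < 0           # interval excludes every positive value

print(f"coverage: {covered / TRIALS:.3f}")
print(f"intervals entirely below zero: {negative_intervals / TRIALS:.4f}")
```

Whenever the measurement falls below -1.96, the interval is wholly negative, like the [-3.96, -0.04] in the comment: valid as a confidence procedure, absurd as a statement of 95% certainty about this particular interval.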