Topic: Proving God Statistically
DNAunion:
quote: Good so far, but

quote: I disagree: the sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 is special. You are correct that that particular sequence is just as unlikely as any other SINGLE outcome, but we have to look at the larger picture. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 is a perfectly sequential outcome, and such an outcome can occur in only an extremely limited number of ways (2: ascending and descending), whereas a non-perfectly-sequential outcome can occur in a multitude of different ways (9,999,999,998). Therefore, the probability of getting a perfectly sequential outcome is only 1 in 5 billion.

But why partition the possible outcomes into two sets according to what might appear to be an ad hoc condition, that of being perfectly sequential? Because it jumps out at us: any school child would recognize that pattern. Nevertheless, we can expand the definition of the "success" set and the same conclusion is reached. The vast majority of possible outcomes do not fit any recognizable pattern (for example, the one you listed appears to fall into this set). The flip side is that the number of outcomes that falls within the set of easily recognizable patterns is extremely small: probably not too much larger than the following:

{1, 2, 3, 4, 5, 6, 7, 8, 9, 10;
 10, 9, 8, 7, 6, 5, 4, 3, 2, 1;
 1, 3, 5, 7, 9, 2, 4, 6, 8, 10;
 2, 4, 6, 8, 10, 1, 3, 5, 7, 9;
 10, 8, 6, 4, 2, 9, 7, 5, 3, 1;
 9, 7, 5, 3, 1, 10, 8, 6, 4, 2}

Since the cardinality of the set containing non-recognizable patterns is immensely larger than that of the set containing recognizable patterns, (1) there is something "special" about sequences corresponding to recognizable patterns, and (2) the probability of hitting upon any of the recognizable patterns is extremely small. However, even within this expanded "success" set the number of perfectly sequential outcomes is small: only 2 out of x. And it is within this subset of all recognizable patterns that the outcome is found.

So we are NOT looking at the probability of getting ANY recognizable pattern but of getting one that is perfectly sequential, since that is a better and more accurate description of the actual outcome. So we are back to 1 chance in 5,000,000,000. Therefore, while we should not be surprised to hear that "Frank" hit upon your sequence by chance in a single shot, we should be surprised to hear that "Frank" hit upon the sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 by chance in a single shot (in fact, we should probably reject "Frank"'s claim).

PS: I agree that the original poster's probability calculation was flawed. However, the counter offered had its own flaw.

[This message has been edited by DNAunion, 11-15-2003]
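The arithmetic behind the 1-in-5,000,000,000 figure can be checked with a short script (a sketch assuming, as the post does, ten independent draws each uniform over 1-10, giving 10^10 equally likely outcomes):

```python
# Ten draws, each uniformly 1..10 with replacement: 10**10 equally likely
# outcomes. Exactly 2 of them are perfectly sequential (ascending, descending).
total_outcomes = 10 ** 10
perfectly_sequential = 2

odds = total_outcomes // perfectly_sequential
print(f"1 in {odds:,}")  # 1 in 5,000,000,000

# Non-perfectly-sequential outcomes, as stated in the post:
print(f"{total_outcomes - perfectly_sequential:,}")  # 9,999,999,998
```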
DNAunion:
Rei, if you are going to respond to my posts the least you could do is respond to what I said. I didn't mention self-replicators or incremental improvements at all in my post to which you replied.
Now, do you find fault in what I actually said in that other post?
DNAunion:
quote: It can be done afterwards, but it is not always as clear cut as it is when a specific prediction is made ahead of time.

Since letter sequences demonstrate this better than numbers, I'll switch to using them. Take 27 identical tiles: paint a unique letter of the English alphabet on 26 of them, and on the remaining one an underscore to represent a space, then place them all into an urn. For this thought experiment the number of letters we will select one at a time from the urn will be represented by the variable n. Draw a linear and contiguous series of n boxes from left to right, each of which will hold a single letter drawn from the urn. The letters will be written sequentially, each in the leftmost empty box at the time it is drawn. Now perform the following steps n times:

(1) thoroughly shake the urn to randomize the tiles
(2) randomly select a single tile from the urn
(3) write down the letter on the tile in the appropriate box
(4) replace the tile

Now any single n-letter outcome is just as unlikely as any other single outcome. I am not disputing that. My argument is based on an aggregate view, which involves partitioning the possible outcomes into two sets: success and failure. With that in mind...

Would you be surprised if someone claimed to have followed the above method and ended up with, in a single trial, the following?

P_AOZIUHV_EU_KS__IPBODQKVO_IYTCR

Probably not: there's nothing readily noticeable in that result that immediately raises suspicion of cheating. But what if the person claimed to have obtained any of the following in a single run through?

FOUR_SCORE_AND_SEVEN_YEARS_AGO__
or
ME_THINKS_IT_IS_LIKE_A_WEASEL___
or
ONTOGENY_RECAPITULATES_PHYLOGENY

You should seriously doubt the claim that chance alone produced those results. But why?

After all, each one of these particular sequences has the same exact probability as any other single sequence... so why should we be surprised to see any of these results, but NOT be surprised to see the first one? Because the latter outcomes match patterns specified independently of the event: a line from Lincoln's Gettysburg Address, a line from one of Shakespeare's works, and a summary of Haeckel's position of recapitulation. Note that no one had to proclaim ahead of time, "Hey, I predict this guy is going to select ME THINKS IT IS LIKE A WEASEL," or either of the other two. The MATCHING of the outcome to an independent pattern that eliminates chance from being the best explanation occurred AFTER the event, not before.

Now we can go the extra step to show that the pattern itself can be specified after the event. Suppose n is 66 and a person claims to have followed the method spelled out above and to have hit upon the following in a single shot:

I_WAS_PULLING_TILES_OUT_OF_AN_URN_AND_THIS_IS_WHAT_I_ENDED_UP_WITH

Clearly such a recognizable English sentence doesn't have to be specified PRIOR to the event in order to be used as a non-ad hoc pattern capable of eliminating chance. We simply don't accept that someone is going to pull tiles from an urn using a truly random process and end up with a long and meaningful English statement: sufficiently complex and specified outcomes do not occur by chance alone.

It is important to keep in mind that there are two requirements for eliminating chance: sufficient complexity (sufficiently low probability) and specification (a non-ad hoc pattern that the outcome matches).

[This message has been edited by DNAunion, 11-16-2003]
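The urn procedure above can be sketched in a few lines of Python (the function name is mine, not from the post; drawing with replacement is exactly what random.choice over a fixed pool models):

```python
import random
import string

# 26 letter tiles plus one underscore tile for the space: 27 in all.
TILES = string.ascii_uppercase + "_"

def draw_sequence(n, rng=random):
    """Perform steps (1)-(4) n times: because the tile is replaced after
    each draw, every draw is independent and uniform over the 27 tiles."""
    return "".join(rng.choice(TILES) for _ in range(n))

print(draw_sequence(32))

# Any single 32-character outcome has probability (1/27)**32 -- about 1.6e-46 --
# whether it is gibberish or a line from the Gettysburg Address.
print(f"{(1 / 27) ** 32:.1e}")
```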
DNAunion:
quote: Yes, you should...

quote: Which was stated in my post: 1 attempt.

quote: Which is why I intentionally refrained from saying what you think I said. I purposely avoided saying that such things CANNOT occur by chance alone; I said they DO NOT. There is a difference. (Well, let me head off attempts to show that there is no difference. To do that, one would have to demonstrate that literally everything that is not impossible actually does occur. Of course there is the highly speculative MWI, but it can hardly be demonstrated. How about sticking to things that are testable? Let's see... I'm sitting here typing on my keyboard, and it is not literally impossible for my hard drive to fail before I finish typing this sentence. Well, it wasn't impossible, and it didn't happen.)

quote: No, because I didn't end up with 0. Here's what I actually said.

quote: I said sufficiently low probability, not 0 probability.
DNAunion:
quote: Yes, we are interested in any language. I believe French uses the same set of symbols as English, so if a meaningful French sentence appeared I would miss it, but a French-speaking person would not. Is that a problem? Not as far as eliminating chance is concerned. Here's why.

ME: I see no recognizable pattern, so I don't eliminate chance.
HENRI: I see a meaningful French statement, so I eliminate chance.

The point is that chance was never eliminated in a case where it shouldn't have been. Thus, the method is still a reliable method of eliminating chance.
quote: Actually, that has basically been done by Dembski (please, let's stick to probability and not try to apply it to evolution!). I'm a bit rusty on this, but... What he was looking for was the maximum number of specifications (non-ad hoc patterns) that could have been produced since the origin of the Universe. To arrive at this number he took the estimated number of elementary particles in the universe (10^80), multiplied it by the number of seconds the Universe has existed (based on 15 billion years, I think), and then multiplied that by the number of reactions that can occur per particle per second, based on Planck time (I think 10^43 or something like that). When all was done, he calculated that since the origin of the Universe a maximum of 10^150 specifications could have been made.

At this point I am losing track of his argument: how exactly one gets from here to the final conclusion (I told you I was rusty). Anyway, his position is that any event that did occur and (1) was sufficiently complex (had a probability smaller than 10^-150), and (2) was specified (matches an independently created, non-ad hoc pattern), should not be attributed to chance. The 10^-150 is his "universal probability bound". For events for which we don't have to look at the whole universe - for example, Mr X winning the state lottery 5 times in a row - we can use a local probability bound.

[This message has been edited by DNAunion, 11-16-2003]
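The multiplication can be checked directly. As a sketch: the figures as recalled in the post (10^80 particles, 15 billion years, 10^43 Planck-scale events per second) give roughly 10^140, while the 10^150 figure follows from the more generous inputs usually quoted for Dembski's bound (10^45 events per second over 10^25 seconds):

```python
# Figures as recalled in the post:
particles = 10 ** 80
seconds_15_byr = 15e9 * 365.25 * 24 * 3600   # about 4.7e17 seconds
planck_rate = 10 ** 43                       # events per particle per second
post_estimate = particles * seconds_15_byr * planck_rate
print(f"{post_estimate:.1e}")                # roughly 5e140

# The more generous inputs usually cited for the published 10^150 bound:
assert 10 ** 80 * 10 ** 45 * 10 ** 25 == 10 ** 150
```

Either way the order of magnitude is astronomically large, which is all the argument needs; the exact exponent depends on which inputs are used.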
DNAunion:
quote: Which Dembski takes into account. He speaks of taking all probabilistic resources into consideration to arrive at a saturated probability. For example, the probability of tossing 20 heads in a row with a fair coin in a single attempt is 1 in 1,048,576. This is indeed small, and we should not expect to do it in a single attempt. But taking into account all of the coin tosses ever made, which would easily number more than a million, we should not be surprised to learn that someone somewhere has achieved 20 consecutive heads by chance alone. But what about 500 heads in a row by chance? We should reject that no matter how many people performed how many tosses... the probability is simply too low for this specified event to occur by chance alone, even taking all humans who ever lived into consideration. This may not be the ultimate quote on this, but it does show the general idea.
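The figures in this example are easy to verify (the ten-million-attempts count below is a hypothetical stand-in for "all the coin tosses ever made"):

```python
# One attempt at 20 straight heads with a fair coin:
p20 = 0.5 ** 20
print(int(1 / p20))               # 1048576, i.e. 1 in 2**20

# With many independent 20-toss attempts, the expected number of all-heads
# runs grows linearly; ten million attempts (a hypothetical figure) already
# make such a run unsurprising:
attempts = 10_000_000
print(f"{attempts * p20:.1f}")    # expected successes, well above 1

# 500 straight heads is another matter entirely:
print(f"{0.5 ** 500:.1e}")        # about 3.1e-151, below the 1e-150 bound
```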
quote:
DNAunion:
quote: To a large extent, once evolution comes into play, I don't see how Dembski's method can be applied. I think Dembski's EF (explanatory filter) works accurately to eliminate chance in general probabilistic events (tossing coins, choosing letters, and many other everyday things) but not for biological processes.

quote: But the purpose of the method is to be able to eliminate chance: that is, when we eliminate chance using the method, chance has been correctly eliminated. This doesn't depend upon our being able to recognize French, for example: in cases where we miss meaningful French statements we would simply fail to eliminate chance, leaving chance as a possible explanation. That is not a problem, because of the purpose of the method (it is not to detect chance, but to eliminate it).
DNAunion:
I have seen the 10^80 value for the number of elementary particles in the known universe elsewhere. I'll try a web search.
By the way, in that link Dembski does mention that others have calculated certain universal probability bounds and come up with numbers lower than his (that is, the positive exponent is lower than 150).
DNAunion:
Here are a couple of independent references to there being about 10^80 elementary particles in the observable universe. Note that exponentiation is oftentimes lost when copying something from one source to a web page: the person making the page must manually go back and add the ^ symbol, which is not always done.
quote:

quote:

edited urls in attempt to fix page width - The Queen

[This message has been edited by AdminAsgara, 11-18-2003]
DNAunion:
quote: What's a gravitron? [This message has been edited by DNAunion, 11-18-2003]
DNAunion:
quote: So is a gravitRon similar to a leptRon, like the familiar electon that orbits the nucleus where the protRons and neutons are found? Or maybe it's more like a muRon, or a hadon, or a mesRon, or a photRon? Or was Rei referring to the amusement ride called the gravitron?

Gravitron - Amusement Ride Extravaganza

[This message has been edited by DNAunion, 11-19-2003]
DNAunion:
quote: Someone is singular; their is plural.
DNAunion:
ANYONE WHO DOES NOT BELIEVE DEMBSKI IS A MORON
DNAunion:
quote: Here's the code that generated that sentence, first run, by randomly generating letters A-Z and the space character.
lnSequenceLength = 46
=RAND(-1)
lcString = ""
FOR lnLcv = 1 TO lnSequenceLength
    lcString = lcString + CHR(RandomNumber())
ENDFOR
? lcString

FUNCTION RandomNumber
    LOCAL lnMin, lnMax
    lnMin = 65
    lnMax = lnMin + 26
    lnRandom = FLOOR((lnMax - lnMin + 1) * RAND() + lnMin)
    IF (lnRandom = lnMax)
        lnRandom = 32
    ENDIF
    RETURN lnRandom
ENDFUNC

[This message has been edited by DNAunion, 11-19-2003]
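The snippet appears to be Visual FoxPro: FLOOR(27 * RAND() + 65) yields one of 27 equally likely codes 65..91, with the extra value 91 remapped to 32 (a space). For readers without FoxPro, an equivalent sketch in Python (the function names are mine, not from the post):

```python
import random

def random_char(rng=random):
    """Mirror the FoxPro RandomNumber(): pick one of 27 equally likely
    codes 65..91, mapping the extra value 91 to a space as the original does."""
    code = rng.randrange(65, 92)          # same range as FLOOR(27*RAND() + 65)
    return " " if code == 91 else chr(code)

def random_sentence(length=46, rng=random):
    """Build a 46-character string of A-Z and spaces, one draw per position."""
    return "".join(random_char(rng) for _ in range(length))

print(random_sentence())
```

The chance of any particular 46-character result is (1/27)^46, on the order of 10^-66, which is the point of the joke: a meaningful sentence "on the first run" is not credibly a chance result.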
DNAunion:
Well what do you know, I just looked in the dictionary and I was wrong for calling that an error. Rats!
[This message has been edited by DNAunion, 11-19-2003]
Copyright 2001-2023 by EvC Forum, All Rights Reserved
Version 4.2
Innovative software from Qwixotic © 2024