Author Topic:   Proving God Statistically
DNAunion
Inactive Member


Message 13 of 96 (66722)
11-15-2003 6:28 PM
Reply to: Message 12 by compmage
11-14-2003 3:04 AM


quote:
DavidPryor writes: so that the chance of ten following nine is 1 in 10,000,000,000, or 10 billion
quote:
The chance of getting 1,2,3,4,5,6,7,8,9 and 10, in that order, is 1 in 10 billion. However, the chance of getting 2,1,7,5,6,3,9,4,8 and 10 is also 1 in 10 billion; the chance of getting ANY predetermined combination in ten draws is 1 in 10 billion.
Good so far, but
quote:
Just because the sequence of 1 to 10 has more significance to you makes no difference to its odds. It isn't more special than any other combination.
I disagree: the sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 is special.
You are correct that that particular sequence is just as unlikely as any other SINGLE outcome, but we have to look at the larger picture. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 is a perfectly sequential outcome, and such an outcome can occur in only an extremely limited number of ways (2: ascending and descending), whereas a non-perfectly-sequential outcome can occur in a multitude of different ways (9,999,999,998). Therefore, the probability of getting a perfectly sequential outcome is only 1 in 5 billion.
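Here is that arithmetic as a minimal code sketch (a FoxPro-style dialect, matching the snippet in Message 63 below; the variable names are just illustrative):
* Ten draws with replacement, ten equally likely values per draw.
lnTotal = 10^10              && all equally likely ordered outcomes
lnSequential = 2             && only two perfectly sequential outcomes
? "1 in", lnTotal / lnSequential     && prints: 1 in 5000000000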
But why partition the possible outcomes into two sets according to what might appear to be an ad hoc condition, that of being perfectly sequential? Because it jumps out at us: any school child would recognize that pattern.
Nevertheless, we can expand the definition of the "success" set and the same conclusion is reached.
The vast majority of possible outcomes do not fit any recognizable pattern (for example, the one you listed appears to fall into this set). The flip side is that the number of outcomes that fall within the set of easily recognizable patterns is extremely small: probably not too much larger than the following:
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10;
10, 9, 8, 7, 6, 5, 4, 3, 2, 1;
1, 3, 5, 7, 9, 2, 4, 6, 8, 10;
2, 4, 6, 8, 10, 1, 3, 5, 7, 9;
10, 8, 6, 4, 2, 9, 7, 5, 3, 1;
9, 7, 5, 3, 1, 10, 8, 6, 4, 2}
Since the cardinality of the set containing non-recognizable patterns is immensely larger than that of the set containing recognizable patterns, (1) there is something "special" about sequences corresponding to recognizable patterns, and (2) the probability of hitting upon any of the recognizable patterns is extremely small.
However, even within this expanded "success" set the number of perfectly sequential outcomes is small: only 2 out of x, where x is the cardinality of the expanded set. And it is within this subset of all recognizable patterns that the outcome is found. So we are NOT looking at the probability of getting ANY recognizable pattern, but of getting one that is perfectly sequential, since that is a better and more accurate description of the actual outcome. Thus we are back to 1 chance in 5,000,000,000.
Therefore, while we should not be surprised to hear that "Frank" hit upon your sequence by chance in a single shot, we should be surprised to hear that "Frank" hit upon the sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 by chance in a single shot (in fact, we should probably reject his claim).
************************
PS: I agree that the original poster's probability calculation was flawed. However, the counter offered had its own flaw.
[This message has been edited by DNAunion, 11-15-2003]

This message is a reply to:
 Message 12 by compmage, posted 11-14-2003 3:04 AM compmage has replied

Replies to this message:
 Message 14 by Rei, posted 11-15-2003 8:11 PM DNAunion has replied
 Message 17 by compmage, posted 11-16-2003 9:53 AM DNAunion has not replied
 Message 52 by Peter, posted 11-19-2003 2:05 AM DNAunion has replied

  
DNAunion
Inactive Member


Message 15 of 96 (66741)
11-15-2003 9:00 PM
Reply to: Message 14 by Rei
11-15-2003 8:11 PM


Rei, if you are going to respond to my posts the least you could do is respond to what I said. I didn't mention self-replicators or incremental improvements at all in my post to which you replied.
Now, do you find fault in what I actually said in that other post?

This message is a reply to:
 Message 14 by Rei, posted 11-15-2003 8:11 PM Rei has replied

Replies to this message:
 Message 16 by Rei, posted 11-16-2003 12:06 AM DNAunion has not replied

  
DNAunion
Inactive Member


Message 19 of 96 (66932)
11-16-2003 7:39 PM
Reply to: Message 18 by NosyNed
11-16-2003 11:06 AM


quote:
I think in this case the coins were deliberately numbered to pick *in advance* the sequence 1,2 ... 10. This makes it "special". As you say, any sequence has the same odds against. Calling it amazing after it has occurred sounds pretty silly. However, calling it in advance and then pulling it is something else altogether.
It can be done afterwards, but it is not always as clear-cut as it is when a specific prediction is made ahead of time. Since letter sequences demonstrate this better than numbers, I'll switch to using them.
Take 27 identical tiles: paint a unique letter of the English alphabet on 26 of them, and on the remaining one paint an underscore to represent a space; then place them all into an urn. For this thought experiment, the number of letters we will select, one at a time, from the urn will be represented by the variable n. Draw a linear, contiguous series of n boxes from left to right, each of which will hold a single letter drawn from the urn. The letters will be written sequentially, each in the leftmost empty box at the time it is drawn.
Now perform the following steps n times (a code sketch of this procedure follows the list):
(1) thoroughly shake the urn to randomize the tiles
(2) randomly select a single tile from the urn
(3) write down the letter on the tile in the appropriate box
(4) replace the tile
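The sketch (same FoxPro-style dialect as the snippet in Message 63; the names and the choice of n = 32 are just illustrative):
=RAND(-1)                        && seed the random-number generator
lnN = 32                         && number of draws for this illustration
lcResult = ""
FOR lnI = 1 TO lnN
	lnPick = FLOOR(RAND() * 27)              && 0..26: 26 letters plus the space tile
	IF lnPick = 26
		lcResult = lcResult + "_"            && the underscore (space) tile
	ELSE
		lcResult = lcResult + CHR(65 + lnPick)   && "A".."Z"
	ENDIF
ENDFOR
? lcResult
Replacing the tile and re-shaking (steps 1 and 4) are what drawing a fresh, independent RAND() value on each pass models.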
Now any single n-letter outcome is just as unlikely as any other single outcome. I am not disputing that. My argument is based on an aggregate view, which involves partitioning the possible outcomes into two sets: success and failure. With that in mind...
Would you be surprised if someone claimed to have followed the above method and ended up with, in a single trial, the following?
P_AOZIUHV_EU_KS__IPBODQKVO_IYTCR
Probably not: there’s nothing readily noticeable in that result that immediately raises suspicion of cheating.
But what if the person claimed to have obtained any of the following in a single run-through:
FOUR_SCORE_AND_SEVEN_YEARS_AGO__
or
METHINKS_IT_IS_LIKE_A_WEASEL____
or
ONTOGENY_RECAPITULATES_PHYLOGENY
You should seriously doubt the claim that chance alone produced those results. But why? After all, each of these particular sequences has exactly the same probability as any other single sequence... so why should we be surprised to see any of these results, but NOT be surprised to see the first one?
Because the latter outcomes match patterns specified independently of the event: a line from Lincoln's Gettysburg Address, a line from one of Shakespeare's works, and a summary of Haeckel's recapitulation thesis.
Note that no one had to proclaim ahead of time, "Hey, I predict this guy is going to select METHINKS IT IS LIKE A WEASEL" or either of the other two. The MATCHING of the outcome to an independent pattern that eliminates chance as the best explanation occurred AFTER the event, not before.
Now we can go the extra step to show that the pattern itself can be specified after the event.
Suppose n is 66 and a person claims to have followed the method spelled out above and to have hit upon the following in a single shot:
I_WAS_PULLING_TILES_OUT_OF_AN_URN_AND_THIS_IS_WHAT_I_ENDED_UP_WITH
Clearly such a recognizable English sentence doesn't have to be specified PRIOR to the event in order to be used as a non-ad hoc pattern capable of eliminating chance.
We simply don't accept that someone is going to pull tiles from an urn using a truly random process and end up with a long, meaningful English statement: sufficiently complex and specified outcomes do not occur by chance alone. It is important to keep in mind that there are two requirements for eliminating chance: sufficient complexity (sufficiently low probability) and specification (a non-ad hoc pattern that the outcome matches).
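For scale: each of the 66 positions is one of 27 equiprobable symbols, so the probability of any one specific 66-character outcome is (1/27)^66. A one-line sketch in the same dialect:
lnN = 66
? -lnN * LOG10(27)      && about -94.5, i.e. P = 27^-66 is roughly 10^-95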
[This message has been edited by DNAunion, 11-16-2003]

This message is a reply to:
 Message 18 by NosyNed, posted 11-16-2003 11:06 AM NosyNed has not replied

Replies to this message:
 Message 20 by crashfrog, posted 11-16-2003 8:38 PM DNAunion has replied
 Message 53 by Peter, posted 11-19-2003 2:17 AM DNAunion has not replied

  
DNAunion
Inactive Member


Message 23 of 96 (66961)
11-16-2003 11:04 PM
Reply to: Message 20 by crashfrog
11-16-2003 8:38 PM


quote:
DNAunion writes: You should seriously doubt the claim that chance alone produced those results.
quote:
crashfrog writes: Should I?
Yes, you should...
quote:
It depends of course on how many trials it took before they got that result.
Which was stated in my post: 1 attempt.
*************************
quote:
See, you had it right at the first part; partitioning all possible strings into two different sets. The set of significant strings is considerably smaller than the set of insignificant strings, yes.
So, the odds of achieving a significant string are the number of such strings divided by the number of all possible strings. If it's your position that this number is astronomically small, you would be correct. But, when you say things like this:
*****************************
sufficiently complex and specified outcomes do not occur by chance alone.
*****************************
You seem to be implying that there's zero chance. There's a considerable difference.
Which is why I intentionally refrained from saying what you think I said.
I purposely avoided saying that such things CANNOT occur by chance alone; I said they DO NOT. There is a difference.
(Well, let me head off attempts to show that there is no difference. To do that, one would have to demonstrate that literally everything that is not impossible actually does occur. Of course, there is the highly speculative many-worlds interpretation, but it can hardly be demonstrated; let's stick to things that are testable. Let's see... I'm sitting here typing on my keyboard, and it is not literally impossible for my hard drive to fail before I finish typing this sentence. Well, it wasn't impossible, and it didn't happen.)
quote:
Could you explain your mathematics here? Especially how you get 0 from p divided by q, where p and q are both non-zero integers.
No, because I didn't end up with 0. Here’s what I actually said.
quote:
... sufficiently complex and specified outcomes do not occur by chance alone. It is important to keep in mind that there are two requirements for eliminating chance: sufficient complexity (sufficiently low probability) and specification (a non-ad hoc pattern that the outcome matches).
I said sufficiently low probability, not 0 probability.

This message is a reply to:
 Message 20 by crashfrog, posted 11-16-2003 8:38 PM crashfrog has replied

Replies to this message:
 Message 24 by crashfrog, posted 11-16-2003 11:18 PM DNAunion has not replied
 Message 54 by Peter, posted 11-19-2003 2:24 AM DNAunion has replied

  
DNAunion
Inactive Member


Message 25 of 96 (66964)
11-16-2003 11:29 PM
Reply to: Message 21 by NosyNed
11-16-2003 10:00 PM


quote:
Even in this case there is some prior selection being done. We speak English; in fact, we are really interested in the odds of something coming out in any language.
Yes, we are interested in any language. I believe French uses the same set of symbols for its language, so if a meaningful French sentence appeared I would miss it, but a French-speaking person would not. Is that a problem? Not as far as eliminating chance is concerned. Here's why.
ME: I see no recognizable pattern so I don't eliminate chance.
HENRI: I see a meaningful French statement so I eliminate chance.
The point is that chance was not eliminated when it shouldn't have been. Thus, the method is still a reliable method of eliminating chance.
quote:
If we were to compare this to the patterns in DNA we would have to accept all possible languages as well. I'm not sure we can even calculate the odds in the case of letters, let alone talk about anything else.
Actually, that has basically been done by Dembski (please, let's stick to probability and not try to apply it to evolution!).
I'm a bit rusty on this, but...
What he was looking for was the maximum number of specifications (non-ad hoc patterns) that could have been produced since the origin of the Universe. To arrive at this number he took the estimated number of elementary particles in the observable universe (10^80), multiplied it by the maximum number of state transitions each particle can undergo per second (10^45, based on the Planck time), and then multiplied that by a generous upper bound on the number of seconds available (10^25, far more than the Universe's roughly 15-billion-year age). When all was done, he calculated that since the origin of the Universe a maximum of 10^150 specifications could have been made.
At this point I am losing track of his argument: how exactly one gets from here to the final conclusion (I told you I was rusty).
Anyway, his position is that any event that did occur and (1) was sufficiently complex (had a probability smaller than 10^-150), and (2) was specified (matched an independently created, non-ad hoc pattern), should not be attributed to chance.
The 10^-150 is his "universal probability bound". For events for which we don't have to consider the whole universe - for example, Mr. X winning the state lottery 5 times in a row - we can use a local probability bound.
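The exponent arithmetic behind the bound, as reconstructed above (the three factors are the hedged figures from my recollection):
* 10^80 particles x 10^45 transitions/second x 10^25 seconds:
* multiplying powers of ten just adds the exponents.
? 80 + 45 + 25       && 150, hence the 10^150 bound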
[This message has been edited by DNAunion, 11-16-2003]

This message is a reply to:
 Message 21 by NosyNed, posted 11-16-2003 10:00 PM NosyNed has replied

Replies to this message:
 Message 27 by NosyNed, posted 11-16-2003 11:38 PM DNAunion has replied
 Message 32 by Rei, posted 11-17-2003 2:19 PM DNAunion has not replied
 Message 55 by Peter, posted 11-19-2003 2:26 AM DNAunion has not replied

  
DNAunion
Inactive Member


Message 28 of 96 (66968)
11-16-2003 11:49 PM
Reply to: Message 26 by NosyNed
11-16-2003 11:31 PM


quote:
Yes, Crash, there is a threshold, but it isn't very specific. It would vary depending on the importance of making a decision one way or the other.
In fact, picking a threshold would have to involve some of what you are suggesting. How many trials can be performed? If a lot, then a very low probability is still over the "threshold".
Which Dembski takes into account. He speaks of taking all probabilistic resources into consideration to arrive at a saturated probability. For example, the probability of tossing 20 heads in a row with a fair coin in a single attempt is 1 in 1,048,576. This is indeed small, and we should not expect to do it in a single attempt. But taking into account all of the coin tosses ever made, which would easily number more than a million, we should not be surprised to learn that someone somewhere has achieved 20 consecutive heads by chance alone.
But what about 500 heads in a row by chance? We should reject that no matter how many people performed how many tosses...the probability is simply too low for this specified event to occur by chance alone, even taking all humans that ever lived into consideration.
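The arithmetic for both coin examples, in the same sketch dialect as the snippet in Message 63:
? 2^20               && 1,048,576: beatable, given the millions of tosses ever made
? 500 * LOG10(2)     && about 150.5: P(500 heads) = 2^-500, below the 10^-150 bound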
This may not be the ultimate quote on this, but it does show the general idea.
quote:
"In sum, probabilistic resources comprise the relevant ways an event can occur (replicational resources) and be specified (specificationsl resources) withing a given context. The important question therefore is not What is the probability of the event in question? but rather What does its probability become after all relevant probabilitist resources have been factored in? Probabilities can never be considered in isolation, but must always be referred to a relevant class of probabilitist resources. A seemingly improbable event can become quite probable when referred to the relevant probabilitist resources." (William Dembski, The Design Inference: Eliminating Chance Through Small Probabilities, Cambridge University Press, 1998, p181-182)

This message is a reply to:
 Message 26 by NosyNed, posted 11-16-2003 11:31 PM NosyNed has replied

Replies to this message:
 Message 30 by NosyNed, posted 11-17-2003 12:04 AM DNAunion has not replied

  
DNAunion
Inactive Member


Message 29 of 96 (66971)
11-17-2003 12:00 AM
Reply to: Message 27 by NosyNed
11-16-2003 11:38 PM


quote:
Well, the problem is that still only eliminates chance for things which are independent random events.
With letters drawn from a hat we can be careful that it is random (but have to work at it a little). With chemistry we know that many reactions are not random so the whole calculation goes out the window.
More to the point, once evolution comes into play, I don't see how Dembski's method can be applied. I think Dembski's EF (explanatory filter) works accurately to eliminate chance in general probabilistic events (tossing coins, choosing letters, and many other everyday things) but not for biological processes.
quote:
Of course, in addition, we have to know how to calculate the "success" probability. In our letters example, as you noted, we have to know for all possible languages. And we can not know all possible languages.
But the purpose of the method is to be able to eliminate chance: when we eliminate chance using the method, chance has been correctly eliminated. This doesn't depend upon our being able to recognize French, for example: in cases where we miss meaningful French statements we would simply fail to eliminate chance, leaving chance as a possible explanation. That is not a problem, given the purpose of the method (which is not to detect chance, but to eliminate it).

This message is a reply to:
 Message 27 by NosyNed, posted 11-16-2003 11:38 PM NosyNed has not replied

Replies to this message:
 Message 36 by Loudmouth, posted 11-17-2003 3:56 PM DNAunion has not replied

  
DNAunion
Inactive Member


Message 40 of 96 (67250)
11-17-2003 10:07 PM
Reply to: Message 39 by PaulK
11-17-2003 5:11 PM


I have seen the 10^80 value for the number of elementary particles in the known universe elsewhere. I'll try a web search.
By the way, in that link Dembski does mention that others have calculated certain universal probability bounds and come up with numbers lower than his (that is, the positive exponent is lower than 150).

This message is a reply to:
 Message 39 by PaulK, posted 11-17-2003 5:11 PM PaulK has not replied

Replies to this message:
 Message 41 by DNAunion, posted 11-17-2003 10:21 PM DNAunion has not replied

  
DNAunion
Inactive Member


Message 41 of 96 (67254)
11-17-2003 10:21 PM
Reply to: Message 40 by DNAunion
11-17-2003 10:07 PM


Here are a couple of independent references to there being about 10^80 elementary particles in the observable universe. Note that exponentiation is oftentimes lost when copying something from one source to a web page: the person making the page must manually go back and add the ^ symbol, which is not always done.
quote:
Density: 1 H-atom/m^3; 10^80 particles in observable universe www.physics.gmu.edu/classinfo
quote:
To simulate the Universe in every detail since time began, the computer would have to have 10^90 bits - binary digits, or devices capable of storing a 1 or a 0 - and it would have to perform 10^120 manipulations of those bits. Unfortunately there are probably only around 10^80 elementary particles in the Universe. www2b.abc.net.au
edited urls in attempt to fix page width - The Queen
[This message has been edited by AdminAsgara, 11-18-2003]

This message is a reply to:
 Message 40 by DNAunion, posted 11-17-2003 10:07 PM DNAunion has not replied

Replies to this message:
 Message 43 by Rei, posted 11-18-2003 1:04 PM DNAunion has replied

  
DNAunion
Inactive Member


Message 44 of 96 (67397)
11-18-2003 1:39 PM
Reply to: Message 43 by Rei
11-18-2003 1:04 PM


quote:
This ignores counting things such as gravitrons...
...ignoring gravitrons ...
What's a gravitron?
[This message has been edited by DNAunion, 11-18-2003]

This message is a reply to:
 Message 43 by Rei, posted 11-18-2003 1:04 PM Rei has not replied

Replies to this message:
 Message 45 by crashfrog, posted 11-18-2003 2:04 PM DNAunion has replied

  
DNAunion
Inactive Member


Message 57 of 96 (67838)
11-19-2003 9:16 PM
Reply to: Message 45 by crashfrog
11-18-2003 2:04 PM


quote:
Rei: This ignores counting things such as gravitrons...
...ignoring gravitrons ...
quote:
DNAunion: What's a gravitron?
quote:
Crashfrog: It's not my job to educate you. If you don't understand something, try reading a book.
So is a gravitRon similar to a leptRon, like the familiar electon that orbits the nucleus where the protRons and neutons are found? Or maybe it’s more like a muRon, or a hadon, or a mesRon, or a photRon?
Or was Rei referring to the amusement ride called the gravitron?
Gravitron – Amusement Ride Extravaganza
[This message has been edited by DNAunion, 11-19-2003]

This message is a reply to:
 Message 45 by crashfrog, posted 11-18-2003 2:04 PM crashfrog has replied

Replies to this message:
 Message 58 by crashfrog, posted 11-19-2003 10:43 PM DNAunion has replied

  
DNAunion
Inactive Member


Message 59 of 96 (67868)
11-19-2003 11:05 PM
Reply to: Message 58 by crashfrog
11-19-2003 10:43 PM


quote:
Did it ever occur to you to address the substance of someone's post, rather than their [sic] spelling shortcomings? Just curious...
Someone is singular; their is plural.

This message is a reply to:
 Message 58 by crashfrog, posted 11-19-2003 10:43 PM crashfrog has replied

Replies to this message:
 Message 61 by crashfrog, posted 11-19-2003 11:10 PM DNAunion has replied
 Message 66 by Zhimbo, posted 11-20-2003 3:30 AM DNAunion has replied

  
DNAunion
Inactive Member


Message 60 of 96 (67869)
11-19-2003 11:06 PM
Reply to: Message 54 by Peter
11-19-2003 2:24 AM


ANYONE WHO DOES NOT BELIEVE DEMBSKI IS A MORON

This message is a reply to:
 Message 54 by Peter, posted 11-19-2003 2:24 AM Peter has seen this message but not replied

Replies to this message:
 Message 62 by Chiroptera, posted 11-19-2003 11:13 PM DNAunion has replied
 Message 85 by nator, posted 11-23-2003 8:47 AM DNAunion has replied

  
DNAunion
Inactive Member


Message 63 of 96 (67873)
11-19-2003 11:23 PM
Reply to: Message 62 by Chiroptera
11-19-2003 11:13 PM


Re: no need to thank me
quote:
ANYONE WHO DOES NOT BELIEVE DEMBSKI IS A MORON
Here's the code that generated that sentence, first run, by randomly generating letters A-Z and the space character.
lnSequenceLength = 46        && length of the target sentence above

=RAND(-1)                    && seed the random-number generator from the clock
lcString = ""
FOR lnLcv = 1 TO lnSequenceLength
	lcString = lcString + CHR(RandomNumber())
ENDFOR
? lcString

FUNCTION RandomNumber
	LOCAL lnMin, lnMax, lnRandom
	lnMin = 65                && ANSI code for "A"
	lnMax = lnMin + 26        && one past "Z"; reserved for the space
	lnRandom = FLOOR((lnMax - lnMin + 1) * RAND() + lnMin)   && 65..91, equiprobable
	IF (lnRandom = lnMax)
		lnRandom = 32         && map the 27th value to the space character
	ENDIF
	RETURN lnRandom
ENDFUNC
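For scale, the same back-of-envelope sketch used earlier in the thread puts the odds of that exact 46-character sentence appearing on the first run at roughly 1 in 10^66:
? 46 * LOG10(27)     && about 65.9: P = 27^-46 is roughly 10^-66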
[This message has been edited by DNAunion, 11-19-2003]

This message is a reply to:
 Message 62 by Chiroptera, posted 11-19-2003 11:13 PM Chiroptera has not replied

Replies to this message:
 Message 92 by Peter, posted 11-25-2003 7:15 AM DNAunion has replied

  
DNAunion
Inactive Member


Message 64 of 96 (67875)
11-19-2003 11:38 PM
Reply to: Message 61 by crashfrog
11-19-2003 11:10 PM


Well, what do you know: I just looked in the dictionary, and I was wrong to call that an error. Rats!
[This message has been edited by DNAunion, 11-19-2003]

This message is a reply to:
 Message 61 by crashfrog, posted 11-19-2003 11:10 PM crashfrog has not replied

  