Topic: Irreducible Complexity and TalkOrigins
Percy | Member | From: New Hampshire
Hi Nosy,
I think WK and DA have provided correct information (except I can't comment on the papers WK referenced because I haven't looked at them), but I'm wondering if they're actually answering your question. What you asked back in Message 20 was this:
NosyNed in Message 20 writes:

So is there or is there not a specification for the Shannon info in AAAAAAAAAAAAAAAA and AAAAAAATAAAAAAAA?

This question can be answered by providing a little more context, and I'll get to that in a minute. Beforehand I should note that I think WK is correct to say that I tend to think of Shannon information in terms of the number of bits required to communicate information. This is the same way you're trying to think about it, and what's more, in a prior message you actually provided example binary encodings for the specific DNA base sequences AAAAAAAAAAAAAAAA and AAAAAAATAAAAAAAA.

Fundamental to the concept of Shannon information is that both sender and receiver must agree upon the set of messages to be sent. WK and DA are defining the message set to be {C,A,G,T}. The size of this message set is 4, and assuming equal probability for each base type, the number of bits needed to communicate a single message from the set of possible messages is log2(4), which is 2 bits. So to communicate 16 messages (i.e., 16 bases) of 2 bits each would require a total of 32 bits. This means that both your example sequences, AAAAAAAAAAAAAAAA and AAAAAAATAAAAAAAA, require 32 bits of information to communicate from sender to receiver.

But what if those sequences are the only messages in your message set? In other words, what if your message set were {AAAAAAAAAAAAAAAA, AAAAAAATAAAAAAAA}? The size of your message set is then 2, and so the number of bits required to send one message from this message set is log2(2), which is 1 bit. Of course, as I think both WK and DA have explained, in the general case there aren't really only two messages in your message set. If any of the 16 bases can be any of {C,A,G,T}, then the size of your message set is actually 4^16 = 2^32, and of course log2(2^32) is 32 bits. But I don't think the general case was what you were asking about.
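The bit counts in the paragraphs above are easy to check directly. A quick sketch in Python (my own illustration, not part of the original exchange; the variable names are assumptions):

```python
import math

# Message set {C, A, G, T}: 4 possible messages per base position.
bits_per_base = math.log2(4)        # 2 bits per base
total_bits = 16 * bits_per_base     # 32 bits for a 16-base sequence

# If the message set is just the two whole sequences:
two_sequence_bits = math.log2(2)    # 1 bit

# General case: any of 4 bases at each of 16 positions.
general_bits = math.log2(4 ** 16)   # 4^16 = 2^32 messages, so 32 bits

print(bits_per_base, total_bits, two_sequence_bits, general_bits)
```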
What I think you were asking is, "What if a gene, say AAAAAAAAAAAAAAAA, were to experience a single point mutation and change to AAAAAAATAAAAAAAA?" In order to answer this question we have to ask whether AAAAAAAAAAAAAAAA was the only allele of this gene. In other words, was the message set of this gene just {AAAAAAAAAAAAAAAA}? If so, then the amount of Shannon information necessary to communicate this information from one generation to the next is log2(1), which is 0 bits. Incredible but true, and it's because with Shannon information it is assumed that sender and receiver know the message set.

But let's assume a larger message set, with a smaller number of bases so I don't have to type as much. Let's say our message set is {AAAA, AAAC, AAAG}. The size of our message set is 3, and so it takes log2(3) bits to communicate one message from a three-message set, which is 1.585 bits. (Shannon information only provides the smallest number of bits necessary; it doesn't tell you what the encoding is, which is a whole other problem.) Now let's say a new allele joins the other three to expand the message set to {AAAA, AAAC, AAAG, AAAT}. The size of our message set is now 4, and so it takes log2(4) = 2 bits to communicate one of these messages to the next generation. The amount of information in this gene for our population's genome has just risen from 1.585 to 2 bits, an increase of 0.415 bits.

--Percy
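The allele arithmetic in the post above can be verified in a few lines of Python (a sketch of my own, not from the original post):

```python
import math

def bits(message_set):
    # Shannon information per message: log2 of the message-set size.
    return math.log2(len(message_set))

alleles = {"AAAA", "AAAC", "AAAG"}
before = bits(alleles)              # log2(3), about 1.585 bits

alleles.add("AAAT")                 # a new allele appears via mutation
after = bits(alleles)               # log2(4) = 2 bits

increase = after - before           # about 0.415 bits
print(round(before, 3), round(after, 3), round(increase, 3))
```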
Percy | Member | From: New Hampshire
TheWay writes:

Are you not re-defining information from the abstract? If I laid out a 1 1 1 1 (four ones) sequence we could say that it is information because we can add these up to get something new like the number four. So abstractly a sequence can manifest a certain idea. Also, we can attach meaning to each (1) number and have it be represented accordingly.

Information theory, as first conceived by Claude Shannon of Bell Labs, addresses the problem of how to communicate information from sender to receiver in the presence of noise, for example, interference on a telephone landline. There is one particular facet of the definition of "information" in information theory that is of extreme importance, and I'll quote Shannon from his landmark paper, A Mathematical Theory of Communication:
Shannon writes:

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.

In other words, information theory is not concerned with meaning, and in this context it is important not to confuse information with meaning.

Notice that Shannon mentions communicating a message. In order for a sender to communicate with a receiver, both must know the message set. For example, let's say the message set consists of the 26 letters plus some punctuation. When the sender sends a message to the receiver, he is sending a letter, say "A" or "G". In order to send that message over a wire the letter must be encoded, and so "A" might be encoded by the bits "01000001" and "G" might be encoded by the bits "01000111". The sender can send the string "AG" by first sending the message "01000001" followed by the message "01000111".

But notice that the numbers "01000001" and "01000111" have no inherent meaning. They have meaning to the sender and receiver in that one means "A" and the other means "G", but the numbers themselves have no meaning. When the sender sends those numbers down the wire, the wire has no knowledge of the meanings of those numbers, and it doesn't matter to the wire or to the larger problem of communication what they actually mean. It could be '"01000001" if by land and "01000111" if by sea' for all the wire cares.

The amount of information being sent down the wire is the log2 of the number of messages in the message set. A message set consisting only of "A" and "G" is of size 2, and log2(2) = 1 bit, so it really takes only one bit to encode each message. The encoding would be obvious in this case: you'd define A=0 and G=1, or A=1 and G=0.
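To make the two-symbol case concrete, here's a minimal Python sketch (my own illustration; the dictionaries are hypothetical):

```python
# One-bit code for the two-message set {A, G}.
encode = {"A": "0", "G": "1"}
decode = {"0": "A", "1": "G"}

wire = "".join(encode[ch] for ch in "AG")    # 2 bits on the wire
received = "".join(decode[b] for b in wire)  # back to "AG"

# Compare with 8-bit ASCII: "AG" would cost 16 bits instead of 2.
print(wire, received)
```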
In other words, if our message set isn't the 26 letters plus punctuation but only two letters, then we don't need the longer encodings I mentioned above, which happen to be the ASCII codes for A and G.

You asked how much information is in "1111". Assuming that "1111" is the sequence of bits you're going to send down the wire, and assuming that your message set size is 16 (four bits can take on 16 different values, from "0000" through "1111"), the amount of information in your message is log2(16), which is 4 bits, which we knew already since we were sending 4 bits.

It would be much more informative to ask something like, "How much information would I have to send in order to communicate the 20 amino acids?" This is straightforward: log2(20) ≈ 4.32 bits. In other words, 4.32 bits is the minimum number of bits necessary to communicate a single message from a message set of size 20. Unfortunately there are no simple equations to tell you the most efficient encoding, but we can easily invent our own that takes sometimes 4 bits and sometimes 5 bits, because we'll assume that a leading 0 means the message consists of 4 bits, while a leading 1 means it consists of 5 bits, e.g.:
0000  Alanine
0001  Arginine
0010  Asparagine
0011  Aspartic acid
0100  Cysteine
0101  Glutamic acid
0110  Glutamine
0111  Glycine
10000 Histidine
10001 Isoleucine
10010 Leucine
10011 Lysine
10100 Methionine
10101 Phenylalanine
10110 Proline
10111 Serine
11000 Threonine
11001 Tryptophan
11010 Tyrosine
11011 Valine

We can tell we haven't achieved the most efficient encoding, because assuming each amino acid is equally likely to be sent, the average number of bits per message is 4.6, which is above the 4.32 bits that the Shannon equation tells us is the minimum. Finding the most efficient coding is often very difficult.

The first couple of pages of the Shannon paper are worth a read; they are very comprehensible. After that point it gets rather complicated mathematically.

--Percy
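The 4.6-bit average claimed in the post above checks out: eight codes of 4 bits plus twelve codes of 5 bits. A verification sketch in Python (my own, not from the original post):

```python
import math

# The variable-length code above: 8 four-bit codes (leading 0)
# and 12 five-bit codes (leading 1), covering all 20 amino acids.
code_lengths = [4] * 8 + [5] * 12
average = sum(code_lengths) / len(code_lengths)  # average bits per amino acid
shannon_minimum = math.log2(20)                  # theoretical minimum

print(average, round(shannon_minimum, 2))
```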
Percy | Member | From: New Hampshire
Looks right to me. The challenge is to find that encoding. When I was still in school it was believed that in many cases there was no mathematical approach for deriving the most efficient encoding, but I don't know if that's still the case. Given sufficient computer resources, though, you can always find the most efficient encoding through brute-force enumeration.
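For what it's worth, when the symbol probabilities are known, Huffman's 1952 algorithm constructs a provably optimal prefix code without any brute force. A sketch (my own, with made-up frequencies, not from the original post):

```python
import heapq

def huffman_code(freqs):
    # Build an optimal prefix code by repeatedly merging the two
    # least-frequent subtrees (Huffman, 1952). Each heap entry is
    # (total frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

# Hypothetical skewed base frequencies.
code = huffman_code({"A": 0.5, "G": 0.25, "C": 0.125, "T": 0.125})
print(code)  # A gets a 1-bit code, G a 2-bit code, C and T 3-bit codes
```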
But finding the most efficient encoding is not very often a goal, while reliability often is. Redundancy used to play a significant role in computer design (e.g., parity on memories and buses), but I go back a ways, and that's probably not so much the case anymore with modern IC technology. Redundancy still plays a significant role in recorded media such as hard disks and optical media like DVDs. My guess is that the digital encoding of HDTV broadcasts involves a great deal of redundancy, given the significant number of potential sources of signal interference.

--Percy
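A single parity bit is the simplest form of the redundancy mentioned above: it can't correct an error, but it detects any single flipped bit. A minimal sketch in Python (my own example):

```python
def add_parity(bits):
    # Append an even-parity bit so the total count of 1s is even.
    return bits + [sum(bits) % 2]

def parity_ok(word):
    # True when no single-bit error is detected.
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
print(parity_ok(word))    # True: nothing corrupted yet

word[2] ^= 1              # flip one bit "in transit"
print(parity_ok(word))    # False: the error is detected
```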
Percy | Member | From: New Hampshire
Suroof writes:

"Of course, the few they have picked on have been shown to be evolvable too."

No they haven't - the blood clotting cascade, the cilium, phototransduction (http://www.arn.org/docs/behe/mb_idfrombiochemistry.htm) and many others.

Actually, yes, they have been shown to be evolvable and therefore not irreducibly complex. Behe has never submitted his ideas on ID and IC to any scientific peer-reviewed technical journal, but his ideas received so much popular attention through his book Darwin's Black Box that a number of biologists have invested the effort to debunk them anyway. Plus he was completely undressed on the stand at Dover. If blood clotting evolution and so forth are sufficiently interesting to you that you'd like to discuss them in detail, then we could do that.

Irreducible complexity does not at present have any status as a valid scientific concept. What the concept of specified complexity (SC) lacks at present, aside from the complete absence of any representation in the technical literature, is an objective definition and an objective quantitative measure. For example, IDists should be able to answer basic questions about how SC is defined and how it is measured.
More fatally, ID suffers from an infinite regression that begins with the question, "Who was the creator of life on earth?" If the answer is, "Life from elsewhere in the universe," then the next question is, "Who was the creator of this other life?" Ultimately the answer ends up at God, revealing the inherently religious nature of ID. Behe admitted right on the stand at Dover that he believed the intelligent designer was God.

--Percy
Percy | Member | From: New Hampshire
Suroof writes:

Yes, if you have any idea as to how the irreducible core of the blood clotting system (as described in message 65) evolved in a Darwinian step-by-step model please show us.

I see that RickJB has picked up the blood clotting issue. As the discussion has already made clear, there is a wealth of possible natural pathways.

No scientific unknown has ever resolved to a supernatural origin. The correct answer to something we do not know is, "We do not know," not, "God did it." The entire history of religion is one of deciding God did it, while the whole history of science is, "Gee, how about that, just matter and energy following natural laws once again! Who woulda thought!" Just like everything else in this world, blood clotting has a natural origin.

You simply ignored the more important points in my Message 93 concerning the complete lack of objective methods for measuring SC, and the infinite regression of the designer to a supernatural origin. ID has no scientific facets, only supernatural ones.

--Percy