Author Topic:   Irreducible Complexity and TalkOrigins
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 3 of 128 (434292)
11-15-2007 10:05 AM
Reply to: Message 1 by TheWay
11-14-2007 8:54 PM


I just ordered Behe's book, I haven't read it yet so please no spoilers!
It turns out that the butler did it.
How did the original IC system evolve?
The idea is that the original larger system was not IC, that there were redundancies in the system which meant that the loss of one part did not impair its function. However, it could not subsequently sustain the further loss of the complementary component. Or alternatively, components which were initially capable of substituting for each other may diverge to such a degree that they lose that ability and the loss of either subsequently compromises the function of the system. There may be other routes to non-IC systems becoming IC, but those are two that spring to mind.
Dr. Spetner suggests that there is a limit to the mutations of an organism based off "how many essential nucleotides it has in its active genome." [Spetner 1998, Not by Chance!, p. 81] So if this is the case
This is a massive assumption given that there isn't a scrap of evidence to support that contention.
How possible is it that the parts will transpose randomly in the genome to result in even one mutation that could result in an IC system?
Unless you are using it in an unusual way, 'transpose' is a strange word to use here. Transposition is only one of many forms of mutation. While there might be one particular mutation which would repredent the last step in a system becoming IC, there would be a long history of mutation and evolution in the prior development of the system. This is only a problem if you expect IC systems to spring into being fully formed from no precursors.
What is the chance of getting a mutation?
There have been lots of studies on mutation rates, and indeed there have been a number of threads on the forum dealing both with mutation rates and with theoretical limits to variation. I won't go into it in detail, but a look at the scientific literature would produce a considerable body of evidence indicating the chances of a mutation occurring.
What fraction of the mutations have a selective advantage?
This is also something which has been extensively studied and can be found in the literature. It isn't something that can be easily stated however. For a start the fitness, i.e. beneficial or detrimental character, of a mutation is highly dependent on the environmental context in which it arises. What may be beneficial in one context may not be in another. It is however widely accepted that beneficial mutations are less frequent than detrimental mutations.
How many replications are there in each step of the chain of cumulative selection?
I'm not sure I understand this question. Does replications mean generations? Does the chain of cumulative selection mean the progress of the various mutations which produce the resulting system?
How many of those steps do their need to be for a new species to form?
This seems to be based on the assumption that the differences leading to speciation need to be based on adaptive traits, which may not be the case. There is no definitive answer for this as there are multiple routes to the genetic establishment of reproductive isolation which are all going to be of varying lengths. There are single mutations which can be shown to be sufficient to produce reproductive isolation, as seen in studies of reproductive isolation in Drosophila (Orr, 2005).
Sounds like a guess, has anyone ever seen this? Is there any evidence that this has occurred?
Well there are lots of examples of genes which have undergone duplication and subsequent divergence. I don't know if any of these are necessarily components of an IC system, but they certainly occur, and there is considerable evidence that such genes may substitute for each other, either spontaneously in the embryo or if they are genetically engineered to be expressed in place of the related gene.
As I understand it, if one amino acid in a chain is altered or mutated we can't really expect the same result in the phenotype as was prior the mutation. Correct me if I am wrong.
You are wrong. This is also highly context dependent. There are situations where the change of a single amino acid (strictly it would not be a mutation, as mutations occur in DNA, not proteins; the mutation in the DNA would produce the change in the protein) could entirely destroy a protein's function, but there are also instances where it would have no effect whatsoever.
When has a protein been "co-opted for a different function?"
In many cases in controlled experiments. One such example is the evolution of antibiotic resistance genes from other metabolic enzymes, of multidrug resistance in cancerous cells and of insecticide resistance in numerous species. If you want to look at any of these in greater detail I can provide references.
I thought enzymes played a part somehow?
Enzymes are proteins. They are a particular class of proteins which act to speed up chemical reactions.
He sure says "may" many times when speaking of evolution as a process. (just a quick jab to the ribs)
It's not a jab to the ribs at all, it is simply that science is tentative and considered and the evidence is primarily inferred from genetic data. There is clear evidence of the processes described but that doesn't stop creationists denying their existence on the grounds that it is merely inferred and we haven't directly observed whole new species arise through genome duplication and divergence.
TTFN,
WK

This message is a reply to:
 Message 1 by TheWay, posted 11-14-2007 8:54 PM TheWay has replied

Replies to this message:
 Message 6 by TheWay, posted 11-15-2007 12:33 PM Wounded King has replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 5 of 128 (434307)
11-15-2007 11:31 AM
Reply to: Message 4 by Rahvin
11-15-2007 11:09 AM


I believe he's referring to a case where a gene is transcribed multiple times in a single copy. So, in the sequence "ABC," the gene "A" is replicated an extra time and the sequence becomes "AABC."
I'm not sure what you are talking about here in terms of transcription. Gene duplication is the result of a copying error, either at the level of individual genes, chromosomes or entire genomes, so it occurs either during DNA synthesis in S phase or, when the cell is dividing, if the genetic material is not divided equally between the daughter cells. The first case seems to be what you are describing and is what would be thought to have produced genetic features such as the initial clusters of Hox genes, for example; these clusters have subsequently been duplicated by whole genome duplication, leading to dozens of different forms of Hox genes.
TTFN,
WK

This message is a reply to:
 Message 4 by Rahvin, posted 11-15-2007 11:09 AM Rahvin has not replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 7 of 128 (434344)
11-15-2007 2:59 PM
Reply to: Message 6 by TheWay
11-15-2007 12:33 PM


Re: A few questions...
Correct me if I am wrong, the idea is that a more complex system or a system with more 'parts' evolved and then lost 'parts' to reduce into IC? Is this a documented event as a whole or is this a speculation on how it could have happened?
Well in the particular context of IC it is hard to say if it has happened since there is no agreement upon exactly what constitutes an IC system or which systems are IC. If the question is if more complex systems can reduce then they certainly can, a prime example would be the numerous instances where independent parasitic organisms have lost their independence and become obligate parasites. If you mean is it documented as a whole as in have we actively observed every step of such a degradation in a laboratory then the answer is no, these things take a long time to occur. But neither is it mere speculation as there is compelling genetic evidence showing the derived nature of several obligate parasites from independent ancestors.
Has an IC system been shown to add information or adapt new parts?
Again without some clearly identified system agreed to be IC this is virtually impossible to answer and similarly without some functional definition of what constitutes genetic information this statement is impossible to address.
I think this idea begs the question of how much information must be added to supply a genome to reduce itself before a cell can be functional?
Once again the nature of the information you mean is unclear. Do you mean perhaps that there must be a minimum amount of genetic information, and specific information that performs a particular purpose, before a genome can reproduce itself? There are a number of pre-cellular and pseudo-cellular genetic systems which are possible, and truly cellular life itself is usually considered a subsequent development. This is more an abiogenesis question than an IC question, unless the contention is that life itself is fundamentally IC. I may just have misunderstood the question.
As I understand it, information can only be added to the genotype one bit at a time and for every random positive mutation there must be consequentially a bit of information added most of the time.
I think that neither of these things is necessarily true, depending on your understanding of information. For the first point, we can see entire genomes duplicate as one 'mutation'. While I agree that this doesn't necessarily produce any novel information, there are any number of informational metrics where such an event would represent a doubling in the informational content, and higher ploidy numbers are also commonly associated with increases in size, which may well represent a beneficial trait in some situations.
The second point, that a random positive mutation requires an increase in information, is definitely not true. In many cases it is beneficial for an organism to lose something it no longer needs, those reduced obligate parasites would be a prime example. In a permissive environment an organism that loses a feature which requires energy to produce or maintain but which is superfluous in its current environment has a beneficial increase in its available energy in that environment. Beneficial mutations need not represent an increase in genetic information.
Is this accurate or are there other ways to add information and please list if they are observed or hypothetical?
I can't tell if it is accurate or not, though I suspect not, or answer the rest of this query without a robust definition of what you mean by genetic information. Spetner doesn't have to be lying, he might just be wrong.
He suggests that experiments have somewhat verified this postulate. I assume he is referring to his experiments in 1964. Is it possible that you are not aware of any such experiments or is he lying?
As far as I can tell the paper Spetner published in 1964 was in the 'Journal of Theoretical Biology', which as its name suggests focusses on theoretical speculations.
Because experiments have shown that most mutations are harmful, genes that already built up to be useful would suffer damage.
This is just plain wrong. Most mutations are neutral. There is certainly a higher frequency of detrimental than beneficial ones though, as far as we have been able to determine. It is also worthwhile recalling that this is not occurring in individuals in isolation but in populations. While a particular gene may be compromised by mutation in an individual, the original form will still be extant in the population. This is one reason why natural selection is considered such an important factor, as it reduces the spread of such compromised alleles through the population; or in many cases they are so deleterious as to preclude their being passed on, such as mutations which are lethal at the early stages of embryonic development.
Not all 'detrimental' mutations need be so lethal however, and organisms could easily carry a substantial burden of such genes in a population provided the environment was not too severe. Without more information on how he gets to his conclusion, whether this in any way precludes evolution from fitting into current timeframes is hard to say.
I used transpose to mean basically change, I now understand I was potentially writing to geneticists. I apologize. I however, do not understand the use of the word "repredent," I could not find a definition anywhere I can only surmise what it means and I think it is an important adjective to unlocking your point.
The answer was right in front of you on your keyboard: the 'd' is right next to the 's' (unless you use Dvorak). It was simply a typo, my bad! The word should have been 'represent'.
However I hardly accept that any function or system in the genotype would "spring" into being fully formed. I accept that a Highly intelligent being created these systems as one would create a computer program for example.
What are you? A Raelian?
I would like your opinion. You are qualified enough to give a reasonable answer correct?
The chances of an individual getting some mutation are very high; almost every generation will introduce several mutations into a relatively large genome, simply because copying during DNA synthesis, while very good, is not perfect. Most humans are thought to have around 20-30 novel mutations, of some kind, compared to their parents. The chances of any particular mutation occurring are very small, although some are more likely than others, such as chromosomal abnormalities, which crop up infrequently but with regularity.
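To show where a figure like that comes from, here is a back-of-envelope sketch; the genome size and error rate below are rough illustrative assumptions, not figures from the post:

```python
# Expected de novo point mutations per generation: genome size times
# the per-base copying error rate that survives proofreading and repair.
# Both values are approximate, illustrative assumptions.
genome_size = 3.2e9   # bases in a haploid human genome (assumption)
error_rate = 1e-8     # mutations per base per generation (assumption)

print(f"~{genome_size * error_rate:.0f} de novo mutations per generation")
# ~32, the same order of magnitude as the 20-30 quoted above.
```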
Also, I would like to ask what is the chance of getting a positive mutation as opposed to a neutral and detrimental mutation?
Very low, but the chances of getting some positive mutation are higher, and the chances of positive mutations persisting tend to be higher than for either neutral or detrimental mutations, although the degree varies dramatically depending on the specific case.
Are you suggesting that environmental context plays the majority role in natural selection? Does natural selection have any other methods of giving a higher selective value to a mutant organism?
Depending on what one considers to constitute the environment, I might be tempted to say it plays the whole role, if one were to allow the genetic environment of the cell and the other organisms in the individual's population to be considered environmental factors.
There are a myriad of factors which contribute to natural selection and as I suggest any one could be considered environmental.
I think this is a good time to bring out the probability of survival in a mutant that has had a positive mutation. Assuming there is such a thing as a positive mutation as described by the Neo-Darwinian Theory.
I think it would be hard to dispute the existence of such mutations given the numerous experimental studies on factors such as antibiotic resistance in bacteria.
A mutant must have an above average offspring survival rate resulting in a higher selective value. The higher the selective value, the higher the chance that the mutant will survive to take over a population. Am I missing something?
Not anything particularly significant and Spetner's precis of Fisher is also correct, but do not forget that it applies equally to detrimental and neutral mutations.
There is more, however it seems that odds such as these are very slim, and very slim to happen around the 500 times that G. Ledyard Stebbins (Processes of Organic Evolution, 1966) predicted it would take to gain a new species.
I'm not familiar with the basis of Stebbins' calculation, but I don't think 500 could be taken as anything but a very approximate average even if it was reliable; as I pointed out before, the genetic bases of reproductive isolation are highly variable. But even taking 500 as a fixed number, why is it of necessity a problem? We would need to have both that figure and the calculated probability for mutations arising and being fixed to determine that. It also seems to assume that the contributing fixed mutations would necessarily be beneficial, which need not be the case.
Spetner doesn't seem to put much stock in any other model of evolution other than copying errors to produce random mutations.
Well I might agree with this depending on what exactly constitutes a copying error. If it just means point mutations then I would be less likely to agree. If it includes large scale gene duplications I would be happier.
He believes the other ways such as transposition fail to conclusively show their true randomness.
Well point mutations don't show 'true randomness' in the sense of all being equiprobable; the probability of a particular mutation is highly affected by the particular base involved, the surrounding genetic sequence and the surrounding higher order structure of the chromosome.
So he rules out these early on, sticking to evolution's main pony - copying errors.
This would seem to limit the applicability of his conclusions right from the get go. It also makes a potentially unwarranted assumption that the most frequent type of mutation, the single base substitution, is also the most important type of mutation.
Let's look at all of the ways evolution can produce random mutations in another thread, that I imagine would be an immense topic.
You got it back to front: it is mutations that are one of the things that produce evolution. That might be an interesting thread; I tried to start a similar one before, Mutation and its role in evolution: A beginners guide, but everyone thought it was too technical.
For now though, How many copying errors would it take to produce an active cumulative evolution?
I don't know that the phrase 'active cumulative evolution' makes any sense. As to how many 'copying errors' might be required, I couldn't say offhand, especially since I'm not sure what you mean. I would tend to think it would be however many would have occurred in a population in the time until two de novo beneficial mutations, i.e. new to the population, were both present in some individual, but it might just be the number until 2 mutations affecting the same trait occurred independently, as no further point mutations would be involved in the mutations being present together in some individual.
Also, how much information would need to be added with each corresponding mutation?
I certainly can't answer this question especially, as I already pointed out, since beneficial mutations can be caused by a loss of information, by most informational metrics.
I was under the impression that speciation as reproductive isolation was reversible and not subject to complete isolation in the genus or family. What are the stipulations for a complete speciation?
Reproductive isolation can be of several kinds, if it is the result of geographic isolation then it can often be reversed if the two populations are allowed to mix again. However given a long enough period of geographic isolation sufficient genetic changes, either influencing behaviour or directly affecting the biology of reproduction, may accrue that when reintroduced the two species cannot reintegrate. Some people use an even more stringent requirement that behavioural isolation is insufficient and that even in-vitro fertilisations should not produce fertile offspring.
I think the most common conception of speciation is that from the biological species concept; this includes geographical isolation as a suitable isolating factor, except in flying species. I personally would tend to be a bit more stringent and want to see evidence of some reproductive isolation if the two potential species are reintegrated.
In this divergence has it been shown to increase the amount of information in the genome?
Again I can't answer this until you tell me how you personally are measuring such information since there are several possible metrics.
Also, is the embryo the only known place these can occur outside of physical tampering?
Well the examples I am thinking of are cases where for one reason or another a particular gene fails to be expressed but a related gene compensates for its loss; this is sometimes even associated with an unusual spread of the expression domain of the second gene. This naturally principally occurs in embryos, although I imagine you might see similar effects in cell cultures, but I don't suppose that is what you meant. They may also persist into adulthood but that is a lot harder to study.
How scientific would it be to assume that this process can create the necessary level of complexity we see in an IC system and even the amount of information prior to its subsequent devolution?
I don't see any particular barrier to it, but it's hard to tell without proper definitions of several of the key terms, i.e. IC and information.
Are there any documented cases where a change causes a positive effect?
Certainly there are, including the very popular sickle cell related mutations. There are sickle cell types which both illustrate the highly context sensitive nature of the concept of beneficial, i.e. HbS providing an advantage to heterozygotes in areas where malaria is endemic, and ones which show that different mutations can perform the same function, the HbC allele which shows a much reduced severity of sickle cell symptoms, although not in conjunction with HbS.
Is the smallest change restricted to a single nucleotide?
That is the smallest possible change.
How probable is this in a natural setting and how likely, by your best estimates, is this to occur in a natural setting? Your examples make me wonder how much "evolution" is to credit and how much adaptation resulting from prior information in the genome is to credit?
Well in the case of cancers many of the studies have been on tissues taken from 'nature', i.e. clinical samples, and we can get a pretty good idea of the pre-cancerous genetic makeup based on the patient's non-cancerous cells.
'Prior information' is obviously important as it is the raw material novel mutations work with. As we have both agreed, new things are rarely likely to appear from nowhere. The closest to such a deus ex machina would be examples where new proteins are thought to have originated from frame shift mutations, such as the nylon degrading protein nylB, see [thread=-9415].
I would very much like more information on this, I have read about resistances credited to evolution only to wonder how this would eventually affect the phenotype as we see in the phylogeny of our organisms.
I don't understand the issue. Are you wondering if the temporary benefits of resistance are outweighed by long term effects on the population? Evolution doesn't operate like that; if the resistant ones live and the non-resistant ones die then that's the important thing.
Are you saying we shouldn't be skeptical of unobserved phenomena that claims itself a scientific fact?
No I'm saying we shouldn't be skeptical of phenomena just because they haven't been directly observed when there is a huge body of directly observed evidence that supports their existence.
If I sequence the genomes of two identical twins then I should be able to infer that they are twins and therefore that they have the same parents; I don't have to have met the parents. Similarly, patterns of similarity between organisms lead us to conclusions about their relatedness which don't rely on us having the genetic information of every preceding generation in between from both lineages.
Evidences of processes that could have well been established by a designer and interpreted poorly by the design only gives me more of a reason to question what exactly is the evidence.
I hesitate to say it but the evidence is considerably greater than the evidence for there being any sort of creator. Why couldn't it be the ID camp that is poorly interpreting the design? Perhaps the designer just set up a minimal genome and let it go upon its own merry way through random mutation and natural selection; perhaps he made everyone and everything in a poof exactly as they are 5 minutes ago. Without knowing what sort of designer we are talking about or having any evidence for design it's pretty much just empty speculation.
I hope you will continue in our discussion, I appreciate all the well mannered answers you have given.
You might be surprised what a lone voice yours is on that matter.
TTFN,
WK

This message is a reply to:
 Message 6 by TheWay, posted 11-15-2007 12:33 PM TheWay has replied

Replies to this message:
 Message 10 by TheWay, posted 11-23-2007 7:15 PM Wounded King has replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 17 of 128 (436061)
11-24-2007 8:47 AM
Reply to: Message 10 by TheWay
11-23-2007 7:15 PM


Re: A few questions...
Hi TheWay,
Sorry to take so long to get back to you.
The others have given some pretty good replies to your questions on information but I'll give you my view as well.
What is it? How can it be measured? Can it be measured?
Bioportal's definition is fine as far as it goes, but it doesn't give us any idea how to measure genetic information. The commonly used metrics are those already mentioned, Shannon's formulation and Kolmogorov complexity. There is an extensive literature showing how these techniques have been applied to genetic sequences, and none of them, other than perhaps some which have shades of ID about them, i.e. Abel and Trevors' papers and Dembski's own work, suggests that there is any genetic barrier to genetic information change in any direction.
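As an aside on the Kolmogorov side of this: since Kolmogorov complexity itself is uncomputable, compressed length is a commonly used practical stand-in. A minimal sketch, purely illustrative and not drawn from any of the papers mentioned:

```python
import random
import zlib

random.seed(1)

# Compressed length as a rough proxy for Kolmogorov complexity:
# an ordered sequence compresses well, a random one does not.
repetitive = "ACGT" * 250
random_seq = "".join(random.choice("ACGT") for _ in range(1000))

for name, seq in (("repetitive", repetitive), ("random", random_seq)):
    print(name, len(seq), "bases ->", len(zlib.compress(seq.encode())), "bytes")
# The random sequence compresses far less, i.e. it has higher
# (approximated) Kolmogorov complexity than the ordered one.
```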
EO Wilson writes:
Artificial selection has always been a tradeoff between the genetic creation of traits desired by human beings and an unintended but inevitable genetic weakness in the face of natural enemies.
Far be it from me to disagree with EO Wilson, but I would say here that until recently it has been outside the ability of artificial breeders to 'create' desired traits; they have only been able to select for them. And since humans can select traits entirely unconcerned, except in extreme cases, with their side effects on the population's fitness, they will be more prone to breeding strains with weaknesses one would not expect to see propagated under natural selection in the wild.
All point mutations that have been studied on the molecular level turn out to reduce the genetic information and not to increase it.
A point mutation could either increase, decrease or maintain the information depending on the specific mutation and the surrounding sequence. There is certainly nothing in what you have related of Spetner on information that would suggest otherwise and certainly nothing in the scientific literature.
I believe he states that there is a switch type mechanism in the genome that activates certain things or deactivates certain things and that this system is what ultimately governs mutations. Is that right? If not I'll go back and read it more carefully.
No, that is not right. There are certain biological systems which repair DNA and correct some mutations and some organisms do show variations in their mutational rate in response to environmental stress but I can't think of anything which would fit the description you have just given.
Anyways he states that "new" information, like what the NDT requires for macroevolution, can only be added by a binary like system. Such as one bit at a time. I could have got that screwy, anyways lets move on to some examples he gives.
I don't think NDT has any such requirement for macroevolution.
Resistance of bacteria to antibiotics and of insects to pesticides. He states, "Some bacteria have built into them at the outset a resistance to some antibiotics." This resistance results from an enzyme that makes the drug inactive and this doesn't build up through mutation. He then cites J. Davies as proposing that the purpose of this particular type of enzyme had a completely unrelated primary function. Basically, a lucky side effect. He also cites a study done on antibiotics and how they are the natural products of "certain fungi and bacteria." Which we should expect to see some natural resistance to. Also non resistant bacteria can become resistant by picking up a resistant virus and the virus may have picked up the gene from a naturally resistant bacteria. Apparently, scientists can genetically modify organisms to become resistant.
OK, either Spetner has no idea what he is talking about or you have misunderstood him. Certainly there have been bacteria which were resistant to a variety of antibiotics well before humans started using antibiotics. Without a reference for Davies' hypothesis I'm not sure what you mean; however, I have seen a paper in which he suggests that horizontal gene transfer of enzymes for the transport and processing of certain antibiotics, from the fungi that produce them to bacteria, could confer antibiotic resistance, although he himself has subsequently suggested that the evidence does not support this hypothesis.
As for 'viruses', I think either you or Spetner are getting them mixed up with plasmids. Plasmids are spread quite promiscuously through bacterial populations and can certainly confer antibiotic resistance, but the genes in the plasmids still have to have originated somewhere.
None of this, however, controverts the fact that we can see antibiotic resistance arise de novo in populations which previously did not show resistance, that this resistance can be traced back to genetic differences between the current population and the original population, and that the acquiring of plasmid based resistance can be ruled out in such cases.
The more specific the system or code, the more information it would contain. When a gene's specificity is reduced by a mutation the information is lost to the subsequent generations. So based on this, how can an organism gain complexity yet lose specificity and ultimately information?
This is where you need a way to measure information exactly as Spetner intends it, because in terms of genetic information as we have discussed it previously it is nonsense. The subsequent function of the sequence is entirely irrelevant to its genetic informational content.
As it happens the whole argument seems specious, since Gartner and Orias actually say that by decreasing the rates of translation of only mutant forms of the amber and ochre codons the resistant strain's ribosome actually shows higher specificity (Gartner and Orias, 1966).
Spetner claims, with help from a citation, that a change in an amino acid often affects the way a protein functions. So with resistances, the loss of specificity would result in a degradation of the organism in other ways.
A change in an amino acid certainly can change a protein's function, but there is no reason why that change shouldn't increase specificity. Nor is there any evidence supporting the idea that a loss of specificity should degrade the organism, or that the degradation, if there was any, would be greater than the advantage the mutation conferred.
What about the bacterial flagellum? Also, wouldn't it be required to have an abundant amount of information to start with, for any system to even reduce? And doesn't this seem unlikely given that the evidence is tentative for macro-evolution?
What about it? Without a clear working definition of what is IC, how can we tell it is IC? For a start a flagellum will work with some components missing; IC proponents have to go to a certain core flagellar system to even demonstrate a functional form of IC, i.e. one where the loss of any protein component would lead to a total loss of function.
There is no reason why there couldn't have been much more information, bacteria can duplicate genes and larger genetic tracts just as easily as other organisms, not to mention the possibilities plasmids confer.
You would have to support your contention that there is only tentative evidence for 'macro-evolution' and you also probably have to tell us what you mean by macro-evolution as it certainly doesn't seem consistent with the way the term is used in evolutionary biology.
If there wasn't an introduction of the antibiotics, would the mutation have been neccessary or relevant, or would it have persisted?
That is the whole point, that the beneficial nature of a mutation is highly dependent on its environmental context. The mutation probably wouldn't have been necessary, relevant or persistent if it hadn't been for the introduction of the antibiotic to the environment, but I don't see how this relates to what you said previously. The antibiotic being introduced doesn't create the mutation, it just allows it to spread and flourish, but we do know that the mutation was not present in the original population.
I believe he leans more towards point mutations, although I am unsure of what you mean by "large scale gene duplications." He talks about gene duplications, but it is rather limited. This book was pressed in 1998, if that matters.
He probably does, as they are the smallest changes and consequently will probably reflect the smallest informational change by most criteria. Being published in 1998 makes no difference; duplications at the gene, chromosome and whole genome level have been documented for decades.
It doesn't seem very likely that two de novo beneficial mutations could possibly occur in a population. Wouldn't natural selection have to select both of these and then would at some point these mutations have to "mate."
Or am I completely confused?
Why ever not? A large population will have large numbers of mutations between generations and a proportion of those will be beneficial. If the mutations are beneficial in their own right then they might well be maintained long enough either for the further beneficial mutation to arise or for mating to bring the two traits together.
But since I still don't know what you mean by 'active cumulative evolution' it's hard to answer.
Also, could you comment on how things like the sonar-like systems in bats and whales are similar
They are functionally similar in that they use echolocation to hunt; I don't think the genetic bases of the systems are similar.
I think the similarity is good evidence of design, now I know you disagree but without similarity we couldn't assimilate our environment which encourages me to think that that was the plan.
I'm not sure what you mean by assimilate. If you mean that without the same sort of amino acids, sugars and fats we wouldn't be able to eat and survive you are right but there is no reason that these things require similar genetic sequences and genes even if they require the same genetic framework, i.e. DNA.
Like UFO's? Some could argue using the same reasoning. It's a bit of a stretch, but hey might as well point it out.
Only if they don't understand the difference between repeated direct observation and anecdotal evidence.
I just don't understand how randomness can create such highly organized complexity.
Because randomness is not the only factor at work.
TTFN,
WK

This message is a reply to:
 Message 10 by TheWay, posted 11-23-2007 7:15 PM TheWay has not replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 27 of 128 (436165)
11-24-2007 3:24 PM


Just a further point on genomic Shannon information. A good paper to read is Chang et al. (2004), where they try to quantify the Shannon information for an entire genome.
They argue, in line with TheWay's thinking, that truly random sequences have less Shannon information than observed genomic sequences so random mutations will tend to decrease the Shannon information by making the sequence more like a random sequence. However they note that large scale sequence duplications contravene this expectation and can increase Shannon information.
TTFN,
WK

Replies to this message:
 Message 29 by NosyNed, posted 11-24-2007 5:56 PM Wounded King has replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 32 of 128 (436350)
11-25-2007 9:09 AM
Reply to: Message 29 by NosyNed
11-24-2007 5:56 PM


More or less SI
The closest I can see to your definition is Hartley, who formulates information as log S^N = N log S, where S is the number of possible symbols and N the number of symbols in the message, which sounds similar to what Dr. A was calculating. Again, as with Spetner's description, this seems to be more to do with the information capacity of the system than the information in a particular message/sequence, so again there would be no change between the two sequences as they are of the same length.
Perhaps what Percy was talking about was Shannon Information entropy, i.e. the degree of uncertainty as to the next conveyed letter in a message, which should certainly be maximal in a truly random sequence?
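A minimal sketch of that entropy measure, estimating symbol probabilities from the sequence itself and using toy sequences like the 16-base examples discussed below:

```python
from collections import Counter
from math import log2

def shannon_entropy(seq: str) -> float:
    """Per-symbol Shannon entropy in bits, from observed symbol frequencies."""
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

print(shannon_entropy("ACGTACGTACGTACGT"))  # 2.0 bits: maximal for 4 symbols
print(shannon_entropy("AAAAAAATAAAAAAAA"))  # ~0.34 bits: the biased 15(A)1(T) case
```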
The paper must be behind a paywall that I have institutional access through, sorry about that. There is a more recent paper which I think is available without a subscription and which describes the same measurements of Shannon Information/Divergence (Chen et al., 2005). The authors link increasing information in the message to decreasing uncertainty. Perhaps the reason this is the converse of Percy's description in the other thread is that Percy was describing the transmission of the message over a channel, as Shannon did, and consequently you will get more information per bit about a random message than about one with built-in redundancy, as the redundancy means you get some of the same information twice.
Dr. A's calculation of 2 bits of information for every letter is true when each letter is equally likely, but when the probabilities are uneven it may take less. Shannon's original paper (p. 18) discusses a case where a sequence consists of 4 different symbols with differing probabilities and calculates that the bits required to encode the message would be (7/4)N, where N is the length of the message. If we were to apply Shannon's probabilities to the sequence length you gave us, positing A as the most frequent symbol, it would suggest that the message could be encoded in only 28 bits on average. In fact, using the sort of frequencies Shannon posits and his subsequent encoding, you would be able to transmit the 16(A) message using only 16 bits, and potentially the 15(A)1(T) message with 17 bits.
Using the observed frequencies of bases we could work out a theoretical average bit requirement for transmitting an arbitrary length of genetic sequence, by we I mean someone better at maths than I am. Similarly this approach means that there are clearly some possible messages which may require considerably more than the average number of bits to convey, i.e. a sequence made up of low frequency bases.
What Chang et al. do, rather than look at base frequencies, is look at the frequencies of groups of bases, or 'words', of varying length, as Shannon described in his section on artificial languages.
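For concreteness, a quick check of the 7/4 figure, using nothing beyond the probabilities already quoted from Shannon's example:

```python
from math import log2

# Shannon's four-symbol example: probabilities 1/2, 1/4, 1/8, 1/8.
probs = [1 / 2, 1 / 4, 1 / 8, 1 / 8]
bits_per_symbol = -sum(p * log2(p) for p in probs)

print(bits_per_symbol)       # 1.75, i.e. 7/4 bits per symbol
print(16 * bits_per_symbol)  # 28.0 bits on average for a 16-symbol message
```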
TTFN,
WK
P.S. Please bear in mind that I am not any sort of mathematician, so my comprehension of these things is not informed by a deep understanding of the maths involved but principally the discursive text of the papers.

This message is a reply to:
 Message 29 by NosyNed, posted 11-24-2007 5:56 PM NosyNed has replied

Replies to this message:
 Message 33 by NosyNed, posted 11-25-2007 9:51 AM Wounded King has replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 34 of 128 (436364)
11-25-2007 10:00 AM
Reply to: Message 33 by NosyNed
11-25-2007 9:51 AM


Re: More or less SI
In the Shannon example the encodings for 4 different symbols with frequencies of (1/2,1/4,1/8,1/8) respectively were 0,10,110,111.
If we let A=0 and T=10 then your initial sequence can be encoded as 0000000000000000 and the second as 00000001000000000; this incorporates the position of the T. Conversely, a sequence of 16(G) would take 48 bits to convey with this encoding.
If the symbols are all equiprobable the relevant encodings would be 00,01,10,11 and it would require 32 bits whatever the sequence was.
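Those bit counts can be checked directly. A small sketch, using the codeword assignments given above (assigning A, T, C, G to Shannon's four codewords in that order):

```python
# Variable-length prefix code for symbol probabilities (1/2, 1/4, 1/8, 1/8)
# versus a fixed 2-bit code for equiprobable symbols.
variable = {"A": "0", "T": "10", "C": "110", "G": "111"}
fixed = {"A": "00", "T": "01", "C": "10", "G": "11"}

def encode(seq: str, code: dict) -> str:
    return "".join(code[base] for base in seq)

for seq in ("A" * 16, "A" * 7 + "T" + "A" * 8, "G" * 16):
    print(f"{seq}: {len(encode(seq, variable))} bits variable, "
          f"{len(encode(seq, fixed))} bits fixed")
# 16(A)     -> 16 bits variable, 32 bits fixed
# 15(A)1(T) -> 17 bits variable, 32 bits fixed
# 16(G)     -> 48 bits variable, 32 bits fixed
```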
TTFN,
WK

This message is a reply to:
 Message 33 by NosyNed, posted 11-25-2007 9:51 AM NosyNed has not replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 42 of 128 (437447)
11-30-2007 5:09 AM
Reply to: Message 41 by Antioch's Fire
11-30-2007 2:55 AM


Re: Just a quick question...
I'm not sure why you think dogs are a particularly relevant counter evidence to the idea that reproductive isolation can arise over the course of 'relatively few' generations.
No one has suggested that simply being bred, or even selectively bred, for several generations should give rise to new species.
Now if there had been a concerted effort to breed dogs which were reproductively isolated then you might have a point. In the same way that all the people who go on about how 100 years of Drosophila mutational experiments have failed to produce a new species would have a point if that was ever what those experiments were intended to do.
I'm not sure what the failure of reproductive isolation to be established as an unintentional side effect of human breeding of domestic dogs from grey wolves is supposed to demonstrate. In terms of genetic incompatibility grey wolves and dogs are still the same species.
TTFN,
WK

This message is a reply to:
 Message 41 by Antioch's Fire, posted 11-30-2007 2:55 AM Antioch's Fire has not replied

Replies to this message:
 Message 43 by bluescat48, posted 11-30-2007 12:13 PM Wounded King has not replied
 Message 44 by Fosdick, posted 11-30-2007 2:44 PM Wounded King has not replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 48 of 128 (437887)
12-01-2007 5:52 PM
Reply to: Message 45 by TheWay
12-01-2007 12:17 PM


Re: Thank you. I'm slow
Complexity requires complex information
Well that is something of an assumption. There are a number of obvious mathematical formulae showing complexity from very simple initial information, for instance many of Wolfram's cellular automata programs are very simple but they produce very complex patterns.
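As a concrete illustration, here is a sketch of one of those elementary cellular automata (Rule 30, a standard Wolfram example; the grid width and step count are arbitrary choices for display):

```python
# Rule 30: the next state of each cell depends only on its three-cell
# neighbourhood, so the entire rule fits in 8 bits, yet the pattern it
# generates from a single live cell is famously complex.
RULE = 30
WIDTH, STEPS = 63, 24

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from one live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [
        # Read the neighbourhood as a 3-bit number, look up that bit of RULE.
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```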
Complexity requires complex information, which has not yet been shown to have been accumulated through natural processes
How can we show it to you unless we agree what it would look like? Without a usable definition for this 'complex information' the IDists can just shift the goalposts any time a counterexample is given.
Do you think Spetner knows how to measure it? Does he give any examples?
TTFN,
WK

This message is a reply to:
 Message 45 by TheWay, posted 12-01-2007 12:17 PM TheWay has replied

Replies to this message:
 Message 50 by TheWay, posted 12-02-2007 6:31 PM Wounded King has not replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 54 of 128 (439734)
12-10-2007 7:09 AM
Reply to: Message 53 by Percy
12-03-2007 8:02 AM


Re: Mutations and information, part 2
So does that mean that for a system with 4 states, i.e. DNA bases, the optimum encoding for 20 amino acids should be log4(20) ≈ 2.16?
So a minimal encoding would only take an average of 2.16 bases to encode a specific amino acid, presumably assuming that the frequencies of the amino acids are equivalent?
Is that right?
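A one-line check of that figure, just the arithmetic in the question:

```python
from math import log

# Symbols needed per item when coding a 20-symbol alphabet (the amino
# acids) in a 4-symbol system (the DNA bases), assuming equal frequencies.
print(log(20, 4))  # ~2.16 bases per amino acid
```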
TTFN,
WK

This message is a reply to:
 Message 53 by Percy, posted 12-03-2007 8:02 AM Percy has replied

Replies to this message:
 Message 55 by Percy, posted 12-10-2007 8:26 AM Wounded King has not replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 61 of 128 (440438)
12-13-2007 6:15 AM
Reply to: Message 56 by Suroof
12-12-2007 6:25 PM


This molecular process is irreducibly complex as all the many parts are required
Really? Where is the evidence for this? How much of a part is required? Does the rest of the protein linked to retinal matter or is it only retinal itself that forms part of the IC system? If we changed some structural features of downstream elements and the system still worked would that show that it wasn't IC? Just what is the irreducible part of the IC system?
If the parts are so well matched how come the cGMP cascade is downstream of several other signalling pathways? And for that matter how come retinal is a component of dozens of different photosensitive proteins in varying organisms?
Behe's argument falls down just as hard on the molecular biochemical level as it does on the gross morphological level. The blood clotting cascade for a start was a poor example given an extensive body of literature on the topic and a number of extant organisms with more rudimentary clotting systems.
I'm not much of a scientist but from the academic journals I have seen Behe does contribute.
I agree, Behe has contributed to the scientific literature, but none of his contributions have offered any support for either ID or IC.
TTFN,
WK

This message is a reply to:
 Message 56 by Suroof, posted 12-12-2007 6:25 PM Suroof has replied

Replies to this message:
 Message 62 by Suroof, posted 12-13-2007 7:18 AM Wounded King has replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 63 of 128 (440452)
12-13-2007 8:48 AM
Reply to: Message 62 by Suroof
12-13-2007 7:18 AM


Retinal is not part of the protein.
I know that, that is why I said 'the protein linked to retinal'.
The components of the systems must be specific, and in this case they are
There is no evidence for this from what you have said. They only need be specific enough for the system to function. Clearly the components don't need to be highly specific since you yourself mention all the variation in the other vertebrate opsins, and that is without going anywhere near all the non-vertebrate opsins out there.
You just keep reiterating various steps in the pathway; you don't demonstrate any particular specificity or any reason why such a system can't evolve.
since even then the system would be IC.
So there are only certain key residues that constitute the IC part of the system? Even if we could change 60% of the amino acids the few key interacting domains conferring specificity would be the IC core?
There are other known opsin molecules in other regions but they are not involved in vision.
So in fact in terms of specificity the only really specific element seems to be the photosensitivity of retinal, just one of a number of signals which can activate different opsins and initiate downstream signalling. I suppose the particular elements involved in returning retinal to its original form might also be specific.
However Behe does say the blood clotting cascade is irreducible
So far Behe saying something is IC seems to be the only criterion out there. Once again 'the clotting cascade' turns out to mean certain elements of the clotting cascade that form an IC core. Every time it seems to be a sweeping claim about a whole system which subsequently comes to rest on a few key proteins/elements, and at a further reduction potentially only on specific structural elements of those. I'm still not sure what implication can be drawn from this about the evolvability of any such IC system.
Whether a modern mammal can survive without key elements of the clotting cascade tells us nothing about whether ancestors of that animal could have survived with a more rudimentary clotting system. I was also thinking of examples from more primitive organisms than dolphins or puffer fish, since research suggests that key elements of the clotting cascade, probably those you identify as the components of the IC core of the system, are common to numerous vertebrates including teleost fish, suggesting the origin of any such system would be before the divergence of tetrapods and teleosts (Davidson et al., 2003). I was thinking more of the fibrinogen-like proteins found in lobsters and sea urchins and the lobster coagulation cascade.
TTFN,
WK

This message is a reply to:
 Message 62 by Suroof, posted 12-13-2007 7:18 AM Suroof has replied

Replies to this message:
 Message 65 by Suroof, posted 12-13-2007 10:51 AM Wounded King has replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 66 of 128 (440477)
12-13-2007 11:11 AM
Reply to: Message 65 by Suroof
12-13-2007 10:51 AM


The reason this can't be done (by Darwinian evolution) is because it works gradually, and therefore all the proteins I mentioned above (which have to appear simultaneously) could not have come about except by several unselected steps.
This doesn't follow. The fact that we can't document functional molecular intermediates doesn't mean that they can't exist, just that we don't know what they are.
Look at the nonsensicalness of the statement
For the blood clotting to evolve, an explanation must give a selective advantage of each step
For the blood clotting cascade to evolve requires no explanation at all; all this says is that we can't fully describe the evolution of the cascade. How is this anything other than a classic argument from ignorance?
TTFN,
WK

This message is a reply to:
 Message 65 by Suroof, posted 12-13-2007 10:51 AM Suroof has replied

Replies to this message:
 Message 67 by Suroof, posted 12-13-2007 12:23 PM Wounded King has not replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 77 of 128 (440562)
12-13-2007 5:11 PM
Reply to: Message 73 by Suroof
12-13-2007 4:49 PM


Dembski fails to engage Dawkins' point
Dembski's argument is bogus since the point of Dawkins' Weasel program is not what he claims it is. The weasel program is not intended to be an example of digital evolution but merely an example of the power of cumulative selection. The fact that Dawkins chose a particular target phrase is one of the key reasons the example is not analogous to evolution.
Dembski sets up the weasel program as a strawman and is astonished when it fails to do something it was never intended or expected to do. It's hardly a novel creationist tactic.
As for CSI, it is an even more tenuous and febrile concept than IC.
TTFN,
WK

This message is a reply to:
 Message 73 by Suroof, posted 12-13-2007 4:49 PM Suroof has replied

Replies to this message:
 Message 81 by Suroof, posted 12-13-2007 9:33 PM Wounded King has replied

  
Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 89 of 128 (440682)
12-14-2007 5:35 AM
Reply to: Message 81 by Suroof
12-13-2007 9:33 PM


Again with the non sequitur.
if its purpose was only to delineate single step and cumulative selection, the program was mostly redundant, and it shows no real evolutionary algorithm exists to explain complexity.
What sort of logic is that? How does the existence of a simple program to illustrate a point (which it does illustrate, as a counter to the strawman 'tornado in a junkyard' type calculations so beloved of creationists), a program which was never intended to be an evolutionary algorithm producing complexity, show that no real evolutionary algorithm exists to explain complexity? It is, once again, a complete non sequitur. It's like saying that the fact that my rabbit isn't a dog shows that no dogs exist.
TTFN,
WK

This message is a reply to:
 Message 81 by Suroof, posted 12-13-2007 9:33 PM Suroof has replied

Replies to this message:
 Message 91 by Suroof, posted 12-14-2007 7:43 AM Wounded King has replied

  