EvC Forum active members: 59 (9164 total)


Thread Details
Author Topic:   What is an ID proponent's basis of comparison? (edited)
Smooth Operator
Member (Idle past 5145 days)
Posts: 630
Joined: 07-24-2009


Message 197 of 315 (517604)
08-01-2009 9:11 PM
Reply to: Message 186 by Parasomnium
07-30-2009 6:55 AM


Re: Constraints
quote:
So the evolutionary process set up by the researcher would evolve ordinary oscillators just like those the researcher probably could have thought of himself? No novelty would come out of this process?
No novelty. A computer might find something that the scientists themselves had overlooked, but even that novelty was already contained in the whole sequence space programmed into the computer.

This message is a reply to:
 Message 186 by Parasomnium, posted 07-30-2009 6:55 AM Parasomnium has not replied

Smooth Operator


Message 198 of 315 (517606)
08-01-2009 9:16 PM
Reply to: Message 187 by PaulK
07-30-2009 7:52 AM


Re: Three failures of CSI
quote:
None of this addresses my point. The Design Inference is purely negative. Actual design detection uses positive evidence, too, to construct an inference to the best explanation. Therefore the Design Inference fails to fully capture the way in which we identify design.
Explain, how exactly is it purely negative.
quote:
1) He ruled out evolutionary explanations on the grounds that Behe asserted that IC systems couldn't evolve. Unfortunately Behe (correctly) admitted in Darwin's Black Box that IC systems could evolve by what he called "indirect" routes. Behe dismisses this option only on the grounds that he considers it too improbable - however he has not provided any solid grounds for this, and even if he did the probability could still be greater than Dembski's probability bound.
Actually no. He ruled out evolution because of the NFL theorem.
quote:
2) Dembski failed to provide an adequate specification. This is a very important point because the probability calculation that must be done is the probability that the specification is met. Without a valid specification that can be used in the calculation the necessary calculation cannot be done.
All the specifications are there: the number of genes, and the number of proteins used to make a flagellum.
quote:
3) Dembski calculated the wrong probability (and apparently botched the calculation, too - by 65 orders of magnitude). Of course he couldn't calculate the right probability because he wrongly ruled out evolution a priori and failed to provide an adequate specification.
Instead he produces a probability based on randomly assembling an individual flagellum from protein sub-units - without taking into account the actual mechanisms by which a flagellum grows.
Well, like I said before, he ruled out evolution because of the NFL theorem. So the only thing you are left with is blind chance.
quote:
You are incorrect here. A specification must be - in Dembski's term - "separable" from the event. That is, it must be one that can be DESCRIBED without appealing to specific features of the event. Perhaps it is better understood as a specification that might reasonably be proposed without detailed knowledge of the event.
However it is absolutely legitimate - according to Dembski - to use knowledge of the event in proposing the specification (e.g. the specification Dembski uses in the Caputo case is based on the knowledge that the results favoured the Democrats, and the degree to which the results favoured them).
But you always use the knowledge of the event together with your background knowledge. And that background knowledge in this case is that in 40 out of 41 cases the Democrats were first on the ballot. And that this corresponds to an independently given pattern of Democrats having a better chance of winning elections, and that Caputo himself was a Democrat.
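The Caputo numbers discussed here can be checked with a short calculation. This is a minimal sketch, assuming the fair-drawing null model (each of the 41 drawings independently gives the Democrats the top ballot line with probability 1/2); it is an illustration of the chance hypothesis being tested, not Dembski's own computation:

```python
from math import comb

# Caputo case: 40 of 41 ballot drawings put Democrats first.
# Under a fair 1/2-per-drawing chance model, the probability of a
# result at least this favorable to the Democrats is the binomial
# tail P(X >= 40) with n = 41.
n = 41
tail = sum(comb(n, k) for k in (40, 41)) / 2**n
print(f"P(X >= 40 of 41) = {tail:.2e}")
```

The tail counts both the 40-of-41 outcomes and the single 41-of-41 outcome, which is why the "at least as extreme" probability is slightly larger than the probability of the exact result observed.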

This message is a reply to:
 Message 187 by PaulK, posted 07-30-2009 7:52 AM PaulK has replied

Replies to this message:
 Message 219 by PaulK, posted 08-02-2009 2:21 PM Smooth Operator has replied

Smooth Operator


Message 199 of 315 (517607)
08-01-2009 9:17 PM
Reply to: Message 188 by kongstad
07-30-2009 9:22 AM


quote:
It means no such thing. It means the search is better than a random search for these types of problems.
No one is arguing that the search is in any way optimal.
Everyone is. For an algorithm to perform better than blind search, you have to optimize it for the problem. The NFL theorem says so.
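The disputed claim can be made concrete with a toy comparison (an illustrative sketch, not from any poster): on one particular structured landscape, a hill climber that exploits the structure tends to beat uniform random guessing given the same evaluation budget, while the NFL theorems only guarantee that the two tie when averaged over all possible landscapes:

```python
import random

random.seed(1)
N, EVALS = 40, 200   # bitstring length and evaluation budget

def fitness(bits):
    # toy structured landscape: fitness is the count of 1-bits
    return sum(bits)

def random_bits():
    return [random.randint(0, 1) for _ in range(N)]

# blind search: EVALS independent uniform guesses
rs = max(fitness(random_bits()) for _ in range(EVALS))

# hill climber: same budget, but it keeps single-bit flips that help
best = random_bits()
for _ in range(EVALS - 1):
    cand = best[:]
    cand[random.randrange(N)] ^= 1   # flip one random bit
    if fitness(cand) > fitness(best):
        best = cand
hc = fitness(best)

print("random search best:", rs, " hill climber best:", hc)
```

On this landscape the hill climber reliably finishes far ahead; on a landscape with no exploitable structure (or averaged over all landscapes) it would not.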

This message is a reply to:
 Message 188 by kongstad, posted 07-30-2009 9:22 AM kongstad has not replied

Smooth Operator


Message 200 of 315 (517609)
08-01-2009 9:35 PM
Reply to: Message 194 by Percy
07-31-2009 7:37 AM


quote:
I gave the multiply program the same information any human would have. The program is producing the same information that a person multiplying two numbers together would produce. The program could even be modified to model a person carrying out multiplication by hand using pencil and paper. If a person multiplying two numbers together is producing new information, then so is a computer.
But your program is using the already programmed-in instructions from the computer. Like , etc. Did those references appear from thin air, or were they designed prior to your use of them? And could you program anything without them? Obviously not.
quote:
Concerning GA's, of course the algorithm is "programmed in." GA's model evolution, so of course an algorithm that models evolution is "programmed in." The random mutations of evolution are modeled by random changes to design parameters. Natural selection is modeled by an assessing algorithm. Reproduction is modeled by randomly "mating" design alternatives and randomly combining their design parameters.
Ah, but here comes the problem. Evolution has no knowledge of the search target. Therefore it's as useful as blind chance.
quote:
Are you saying that before a design team even gathers that the solutions are already there, that they just have to find them? This is a much more sweeping argument than you were making before. In effect you're saying that neither computers nor people produce new information. Apparently for you the solutions are already out there just floating around somewhere waiting to be discovered.
No, people do make new information by calculations. Computers just process it faster than we do.
quote:
I think you're confusing the potential to produce a design with the design itself when you say the solutions already exist, and that the designer's task is just a matter of finding them. The multiply program I provided as an example has the potential to solve many multiplication problems, but that doesn't mean the answers already exist. When you run the program and enter two numbers, you get a result you didn't know before. New information has been created for you.
No, it hasn't. It has only been processed by the algorithm you produced. All the information needed to produce it was already in there. The whole search space was in there from the start. You just optimized an algorithm to find it faster than blind chance would.
quote:
"... no operation performed by a computer can create new information."
Look, no operation by a computer can create new information. It's a well-known fact.
The Evolutionary Informatics Lab - EvoInfo.org
quote:
The initial parameters are part of the model. Just as evolving bacteria in a laboratory experiment have initial conditions, so must any computer model of evolution. The evolutionary model must have access to the same information (or at least a reasonable approximation, or analogous information in the case of GA's) as the real world. The principles of modeling the real world are the same regardless of whether one is modelling the weather or evolution.
But in the real world, evolution has no information about the search problem.
quote:
I'd love to see this calculation. Could you please provide it?
It's in the book, but I did manage to find an online version.
It's from pages 289-302.
Dembski - No Free Lunch
quote:
I think that if you look up Dembski at Wikipedia you'll find that what I said was true. He really is a professor at Southwestern Baptist Theological Seminary in Fort Worth, Texas, and he really does teach courses there in its Department of Philosophy of Religion. And gee, Wikipedia says the exact same thing!
The point remains that you didn't mention his other education degrees. Like these:
quote:
He returned to school at the University of Illinois at Chicago (UIC), where he studied psychology (in which he received a B.A. in 1981) and statistics (receiving an M.S. in 1983). He was awarded an S.M. in mathematics in 1985, and a Ph.D., also in mathematics, in 1988, both from the University of Chicago, after which he held a postdoctoral fellowship in mathematics from the National Science Foundation from 1988 until 1991, and another in the history and philosophy of science at Northwestern University from 1992 until 1993. He was awarded an M.A. in philosophy in 1993, and a Ph.D. in the same subject in 1996, both from UIC, and an M.Div from Princeton Theological Seminary, also in 1996.
quote:
If you think there's relevant research from the Biologic Institute then please just enter it into the discussion.
I'm just saying there is ID-based research. You made it look like there isn't.
quote:
But if you're going to invent your own lingo you have to tell people what it means. In this case there's no way to know whether your "turned on" corresponds to cleaved or uncleaved.
The article says that they interfere with the working of LexA and the evolution of resistance stops.
Edited by Admin, : Shorten long link.
Edited by Admin, : Change angle brackets to literals, fix errors in quoting.

This message is a reply to:
 Message 194 by Percy, posted 07-31-2009 7:37 AM Percy has replied

Replies to this message:
 Message 207 by Percy, posted 08-02-2009 6:55 AM Smooth Operator has replied

Smooth Operator


Message 201 of 315 (517611)
08-01-2009 9:48 PM
Reply to: Message 195 by Percy
07-31-2009 8:03 AM


quote:
You were saying the NFL theorem says that evolution can be no better than random search. Just repeating that claim is no help to me. In order for me to assess your claim you need to provide a general idea of how random search differs from evolution.
It doesn't. They give you the same results on average.
quote:
Sure it can. In Shannon information the problem of communication can be reduced to reproducing at one point a message from a set of messages at another point. Everything that happens in the universe can be interpreted this way.
Uh, no. I cited the article where it says that it can't. Did you miss it? If his model does not account for semantics, then it can't be used to measure biological function.
quote:
This is untrue, but the real question is why you believe that statistical approaches are excluded from the biological realm.
Oh, but it's very true.
quote:
In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that
...
Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.
See, it's only about the statistical approach.
Information theory - Wikipedia
And I never said that the statistical approach is excluded from the biological realm. I'm saying that it's not enough to describe and measure biological information. You need more than that.
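For reference, the "number of bits needed to describe the data" in the quoted passage is the source entropy, H = sum over symbols of p·log2(1/p). A minimal sketch (illustrative, not from the thread), estimating it from symbol frequencies:

```python
from collections import Counter
from math import log2

def entropy_bits(symbols):
    # Shannon entropy of a source, estimated from symbol frequencies:
    # H = sum of (p * log2(1/p)) over symbols, in bits per symbol
    n = len(symbols)
    return sum((c / n) * log2(n / c) for c in Counter(symbols).values())

print(entropy_bits("AAAA"))  # a single-symbol source: 0.0 bits/symbol
print(entropy_bits("ABAB"))  # two equiprobable symbols: 1.0 bit/symbol
```

This quantifies only the statistical level being argued about here; it assigns no meaning to the symbols.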
quote:
I understand that you accept the claims of people like Dembski, Abel and Trevors, but you need to go beyond just repeating their claims. I provided an example of how the amount of information in a population is increased by random mutation. If you think I was incorrect then you have to go beyond just stating I'm wrong. You have to show how I'm wrong. Here's the example again:
Oh, you mean that D appears. Well, in that case, such a thing has never been observed. Furthermore having more genes does not equal more information.
Consider this example.
"MY HOUSE IS BIG"
This statement is made of 4 words. This represents information.
Now if I were to double this, I would have:
"MY HOUSE IS BIG"
"MY HOUSE IS BIG"
By your definition I would have more information. But I do not. I would only have more of the statistical part of information, but no new meaning. And since information consists of statistics, syntax, semantics, pragmatics, and apobetics, you need to increase all five to have new information, not just the statistical part.
To get more information you would have to type something like this:
"MY HOUSE IS BIG"
"IT HAS A RED ROOF"
Now you know something else. And thus you have more information.
Gene duplication does not give you that. It has never been observed to create new biological functions. You might have more genes, but they do not perform new functions.

This message is a reply to:
 Message 195 by Percy, posted 07-31-2009 8:03 AM Percy has replied

Replies to this message:
 Message 205 by Parasomnium, posted 08-02-2009 6:04 AM Smooth Operator has replied
 Message 208 by Percy, posted 08-02-2009 7:35 AM Smooth Operator has replied
 Message 230 by kongstad, posted 08-03-2009 1:03 AM Smooth Operator has replied

Smooth Operator


Message 209 of 315 (517737)
08-02-2009 11:39 AM
Reply to: Message 202 by Wounded King
08-02-2009 4:46 AM


quote:
Um, that was rather my point. The antibiotics are designed to bind to gyrase. Are you claiming that after the antibiotics were designed the information content of gyrase just magically went up without any change in its genetic sequence because it suddenly had a high affinity binding molecule created for it?
No, there certainly was a change in the genetic sequence, but it degraded the information.
quote:
If you are talking about a loss of specificity in the gyrase are you claiming that the binding affinity is a result of complex specified information in the gyrase? If so then how are you not claiming that the gyrase had evolved or been designed to bind the antibiotic, rather than the other way round. And unless this is the case how can it possibly be a loss of information for the gyrase when it mutates in such a way as to lose this affinity?
Because the gyrase was designed to do its job. It was doing it just fine. And if it cannot perform it as well anymore, it must have lost information in the process.
quote:
Then how can it be considered a loss of information for the gyrase? The only 'functional' element lost to the gyrase is that of binding a molecule which makes it damaging or possibly lethal to the organism, and that was a function that didn't exist until the antibiotic was developed.
The function and the shape were always there. It lost them due to mutations.
quote:
Are you saying that gyrase was intelligently designed to be susceptible to the antibiotics in the future? That seems a lot of effort for the intelligent designer to go to for only a few decades of antibiotic protection.
Simply saying 'it's a loss of specificity' is not a sufficient answer.
But that's what it is. What am I supposed to say? The gyrase was designed the way it was. Just because we find some weakness doesn't mean it wasn't designed.
quote:
How is this substantially different? Are you saying 'this is the range of informational variation we know about, anything beyond this is a loss of information'? If not how can we measure this existing range? This can hardly be anything other than pure speculation since we don't know what all the existing genetic variations have been for a whole species ever. So how can you possibly know what the existing 'informational range' is for anything?
You are confusing expression with degradation of information. Since we know that a physical process cannot produce new information, it is obvious that all changes are due either to gene expression or to degradation of information.
quote:
Then why haven't bacteria simply vanished over the billions of generations of their existence?
Maybe because they are not billions of years old?

This message is a reply to:
 Message 202 by Wounded King, posted 08-02-2009 4:46 AM Wounded King has replied

Replies to this message:
 Message 221 by Wounded King, posted 08-02-2009 2:35 PM Smooth Operator has replied

Smooth Operator


Message 210 of 315 (517740)
08-02-2009 11:46 AM
Reply to: Message 204 by Rrhain
08-02-2009 5:14 AM


quote:
But if your claim of "no new information" is true, then they all have the same genetic profile. Therefore, they should either all die or all live. With no new "information" to confer resistance, then the entire lawn necessarily behaves as one as they are all descended from a single ancestor.
So since the lawn is not behaving as a single unit, since only some live and some die, the premise that no new "information" has been introduced is proven false.
No, that's simply not the case. There are billions of ways a genome can mutate, so there are billions of ways an individual can end up with a specific genetic profile.
quote:
Incorrect. Because you can repeat the process by taking the new, resistant bacteria, isolating a single bacterium, and having it reproduce to form a lawn and re-infecting the lawn with T4 phage. By your logic, with no new "information" to confer infectivity, the entire lawn should survive. Instead, we find plaques starting to form.
Now, this can't possibly be a case of the bacterium reverting to wild-type. If there were a bacterium that had reverted, it would be infected by T4 and die, but it is surrounded by the immune bacteria who would reproduce and fill in the gap.
So it clearly isn't the bacteria that evolved but rather the phage.
But by your logic, since all this is "loss of information," how on earth is anything still alive?
Didn't I already respond to this?
quote:
Kevin Anderson? Creation Research Society Quarterly? Those are your references? At any rate, your reference doesn't actually support the claim. The antibiotics disrupt cellular activity via a certain pathway. The bacteria acquire mutations that allow them to reproduce without using that pathway. That isn't "loss of information."
And for further evidence, the process by which bacteria become resistant to ciprofloxacin (one of the antibiotics in your source) is by actually creating new genes with an altered amino acid sequence such that the ciprofloxacin doesn't bond well to the topoisomerase anymore.
Where is this "loss of information" you are crowing about?
Yes, it is a loss of information, since the pathway has become non-functional.
quote:
But you can keep running the experiment over and over, having the two continually change their genes to maintain the standoff. If this were the result of "loss of information" as you keep claiming, then eventually there wouldn't be anything left in either genome.
So since the bacteria and phage are still around and have just as big a genome as they always have, where did the "loss of information" go?
Yes that's true. The genome is deteriorating. Read Sanford's Genetic Entropy.
quote:
Use any one you want. For those three comparisons, which one has more "information"?
1.) In Shannon's case AB; in Gitt's case AB.
2.) Both the same in either case.
3.) In Shannon's case AA; in Gitt's case both the same.

This message is a reply to:
 Message 204 by Rrhain, posted 08-02-2009 5:14 AM Rrhain has replied

Replies to this message:
 Message 228 by Rrhain, posted 08-02-2009 11:01 PM Smooth Operator has replied

Smooth Operator


Message 211 of 315 (517742)
08-02-2009 11:48 AM
Reply to: Message 205 by Parasomnium
08-02-2009 6:04 AM


Re: New information? Easy!
quote:
Or this:
"MY HOUSE IS BIG"
"MY MOUSE IS BIG"
A doubling followed by a small mutation can easily result in new information. This is how it can and does happen in genomes. It has been observed and documented. Repeatedly negating this fact doesn't make it go away. It makes one look ignorant.
I specifically explained why this is not a case of new information. Yet you decided not to read it.
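The duplicate-then-mutate scenario quoted above takes only a few lines to sketch (illustrative only; whether the result counts as "new information" is precisely what is in dispute here):

```python
original = "MY HOUSE IS BIG"

# duplicate the statement, then "mutate" one character of the copy
copy = list(original)
copy[3] = "M"                    # HOUSE -> MOUSE
mutant = "".join(copy)

print(mutant)                    # MY MOUSE IS BIG
print(len({original, mutant}))   # 2 distinct statements
```

Unlike exact duplication, the mutated copy is a distinct string, which is the point the quoted message is making.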

This message is a reply to:
 Message 205 by Parasomnium, posted 08-02-2009 6:04 AM Parasomnium has replied

Replies to this message:
 Message 213 by Parasomnium, posted 08-02-2009 12:23 PM Smooth Operator has replied

Smooth Operator


Message 212 of 315 (517744)
08-02-2009 12:04 PM
Reply to: Message 207 by Percy
08-02-2009 6:55 AM


quote:
I think you meant to say that the instructions come from people, right?
Right.
quote:
But a person performing multiplication using pencil and paper is just following the instructions he learned in fifth grade. There's no difference between a person following an algorithm and a computer following an algorithm when it comes to creating new information.
There is a difference. Instructions themselves do not lead to calculations. For example: just because you know the alphabet, you may or may not write a novel. If a computer knows the alphabet, there is no chance it will write one by itself.
quote:
The implication of your position is that no new information has been created from performing multiplication since someone first figured out how to do it. That inventor of multiplication created new information, and everyone since has just been following instructions. And even the inventor of multiplication was just taking advantage of information he was taught by others before him and merely economized by showing how people could perform multiplication of many digits just by memorizing the times table for single digits, so he didn't create new information, either.
Obviously that's an unworkable definition of new information.
The problem here is that people choose what they are going to do. A computer does not. Just knowing how to calculate doesn't mean you will calculate something. A computer has to, if you make it do it.
quote:
Shannon defined the problem of communication as one of replicating at one point a message from a set of messages originating from another point. When a message from the message set is sent from point A to point B then information has been communicated.
So it works like this. A person sending you messages from his message set (his personal store of knowledge that he keeps in his brain) is adding to your own personal message set every time he tells you something you didn't already know. For you, everything you didn't already know is new information. You add it to your personal message set, and now this becomes a message that you can send to someone else.
So let's say you're chatting online with someone who tells you that 17x26 is 442. This is new information for you. You could easily have figured it out yourself, but you didn't, so your online friend has now added information to your message set. Your message set has increased in size. For the length of time that you remember that 17x26 is 442, this is a message that you can pass on to others, thereby increasing their personal message sets.
But it makes no difference where the message that 17x26 is 442 came from. If you had instead used your calculator you would have still added new information to your message set. In other words, it doesn't matter if the new information came from a person or an object. For all you care the clouds could have formed into the equation "17x26=442" in the sky and it would still represent new information for you.
But the information in the calculator also originated in the mind first.
quote:
In other words, the creation of new information doesn't mean that the same information hasn't been created before. It would make no sense to say that of two independent inventors who create the same invention with no knowledge of the other's work, that the inventor who completed the invention first created new information and the other did not.
That's true, but matter by itself does not create new information. It only processes it and transfers it. It always originates in a mind first.
quote:
So new information is everything you learn that you didn't already know. The source of the information is irrelevant.
New information is what you make in your mind. And the source, when it is known, is always a mind.
quote:
All that remains is to add to this the fact that information is sent and received by everything everywhere in existence. In other words, the sharing and creation of information is not a special trait of human beings. It is possessed by all matter everywhere.
This is simply not true. All matter can transfer information, yes. But it only originates in the mind.
quote:
This is half correct. Mutation has no knowledge of any "search target," but selection is the very opposite of random. The best adapted survive and contribute their genes to the next generation, including any mutations they might have. That's why white rabbits evolve in the arctic and not the rain forest. If evolution were truly random then white rabbits could evolve anywhere.
No, it's actually totally correct, since natural selection also has no knowledge of the search target. It does not know what function to select for. So the result is the same as blind chance.
And as for the rabbits, that's probably an epigenetic factor.
quote:
So we're back to the same question. You cited the NFL theorem which holds that one algorithm cannot perform better than another algorithm unless it has more information. So you're talking about two different algorithms, one that you call "evolution," and the other that you call "random". How does the "evolution" algorithm differ from the "random" algorithm?
It doesn't.
quote:
So the whole search space is there from the start, and if designers search the search space and find a solution, then that is new information. And if a computer searches the search space and finds a solution, then that's not new information.
Information is there from the start. It only depends on who extracts it. In both cases, the origin of the search space, and of the whole of the information, is a mind.
quote:
But you can't just cite Mr. Robertson. You have to understand why Mr. Robertson said this and explain here why I'm wrong. Otherwise I can go off and search the web for quotes of people saying that computers *can* create new information. The purpose of discussion isn't to make arguments from authority, otherwise we'll end up arguing who cited the best authority. The goal is to actually understand what you're debating to the point where you can make the arguments yourself.
I didn't make an argument from authority. I'm not saying that you should accept it because he said it; I'm just trying to show you that I'm not making things up.
quote:
And how did the information come into the DNA program? Through evolution, which potentially reflects copious information, perhaps 10^35 bits of feedback.
Well, he's simply wrong about that, since the sequence space has been programmed in from the start by a mind.
quote:
You refer me to a Google Books page in Croatian? That doesn't work?
If you have an argument to make about CSI based upon Dembski's book No Free Lunch, could you please enter the argument into the discussion in your own words?
I'm sorry, but it works for me. The book is in English. And there is no specific argument. I just said he calculated the CSI of a flagellum, that's all.
quote:
I didn't mention any of Dembski's degrees. The point is that scientists aren't producing advances based upon CSI, not even Dembski who is working as a professor at a Bible college where he teaches courses in the philosophy of religion. If you think the Biologic Institute is producing evidence of CSI, then I think it would be highly relevant to this discussion if you would tell us about it.
There's nothing to discuss here. They are doing their work, just as any other institute does. Except this one is ID-based.
quote:
You're just stating your original position again.
I have no idea why the authors of the article chose to overstate the point. Obviously evolution does not stop. There is no process that can make the copying of genetic material perfect.
Then tell me: how long does it take to evolve resistance without LexA?

This message is a reply to:
 Message 207 by Percy, posted 08-02-2009 6:55 AM Percy has seen this message but not replied

Smooth Operator


Message 214 of 315 (517754)
08-02-2009 12:24 PM
Reply to: Message 208 by Percy
08-02-2009 7:35 AM


quote:
That they give the same results is what you claim the NFL theorem tells us about the two different algorithms, "random" on the one hand and "evolution" on the other. How do these two algorithms differ in their definition? I know how evolution works. How does this "random" algorithm that you're contrasting evolution with work?
It picks sequences randomly.
quote:
I think you're confusing what a gene does with meaning. Meaning and semantics are a human interpretation.
Not really. They are an objective reality. Noise has no meaning. A picture of a person has meaning: it represents that person. The meaning is that person.
quote:
Semantics cannot be quantified,
Of course it can. CSI does it perfectly. So do Abel and Trevors with their FSC.
quote:
is not part of information theory, and isn't even relevant.
Of course it is. It's not part of Shannon's information model, but Shannon's information model is not the whole of information theory. That's like saying that mutations are not part of evolutionary theory because Darwin didn't mention them.
Information consists of 5 levels.
Statistics
Syntax
Semantics
Pragmatics
Apobetics
All of those levels together make information.
quote:
This is as true today as it was then.
Yes, and you totally missed the point. He said that semantics are meaningless to engineering problems, not to the theory of information itself. If you want to transfer information in systems, it is enough to describe it with a statistical model like Shannon's. That is true. But if you are going to measure biological functions, then no, it is not enough. You need at least two more levels to describe them, because now you are dealing with semantics. And biological information has semantics in the form of functions.
quote:
Yes, I know you believe this, but can you support your position with evidence and arguments? Your most common responses to everyone seem to be variants of either, "No, I'm right," or "No, you're wrong."
That's because it's tiresome to constantly have to repeat the same thing over and over again.
quote:
Population genetics is an extremely statistical science, and this flatly contradicts your position.
Almost all medical studies are statistical in nature, and this also flatly contradicts your position.
No, they are not dealing with measuring biological functions.
This goes back to Shannon's original paper.
quote:
But my original reason for responding was to point out you were wrong to say that Shannon information "deals only with statistical aspect of information" in your Message 182. Are you talking about the quantification of information? Not statistical. Are you talking about the introduction of noise into communication? Very statistical. In other words, Shannon information has both statistical and non-statistical aspects. Like many things. I thought the Wikipedia article made this pretty clear.
So statistical approaches are appropriate in the biological realm. Indeed, where wouldn't statistical approaches be appropriate? Statistics is a tool (among many) that one can probably apply to virtually any problem.
There you see. This is a prime example of why this discussion is getting boring. You totally and completely misunderstood me. You don't know what I meant by the word "statistical". Not the approach that statisticians use!
I meant the number of entities in a system. For example, the number of bits used to convey a message.
That is just one part of what information is. Like I said, there are still:
Syntax - do you know what this is?
Semantics - you probably know what this is?
Pragmatics - how about this?
Apobetics - you probably don't know what this is...
quote:
Mutations not currently present in a population have never been observed? Could you please return to reality?
Again, you misunderstood me. That's why this discussion is boring. I said that there are no cases of mutations producing new biological functions.
quote:
My example was the addition of a single allele to a pre-existing gene, but gene duplication adds even more information. Let's go back to Shannon again, saying what I've already said, but I want to show you that I've been accurately describing information theory:
Shannon information? Yes it does. But that's not a good description of biological functions.
quote:
So if we increase the number of alleles in a gene from 3 to 4, the amount of information in the message set rises from 1.585 bits to 2 bits, an increase of 0.415 bits.
You evidently thought I was talking about gene duplication when I was actually talking about a single mutation causing the addition of an allele, but let's talk about gene duplication using your example.
But this is using Shannon information, and that is not enough. This way you will not get new biological functions. More copies of the same gene will not give you new biological functions.
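The figures in the quote above are just Shannon's log2 measure for equally likely alternatives; a quick check in Python, assuming uniform allele frequencies (a simplification):

```python
import math

def allele_bits(n_alleles):
    """Bits needed to specify one of n equally likely alleles: log2(n)."""
    return math.log2(n_alleles)

print(round(allele_bits(3), 3))                   # 1.585 bits for 3 alleles
print(round(allele_bits(4), 3))                   # 2.0 bits for 4 alleles
print(round(allele_bits(4) - allele_bits(3), 3))  # 0.415 bits gained
```

With n equally likely alleles the measure is log2(n), which is where the quoted 1.585 and 2 come from.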
quote:
First you have this gene:
My house is big.
Then there's gene duplication and you have this:
My house is big.
My house is big.
We can argue about whether this represents more information or not, but we don't need to. Now the duplicated gene experiences a mutation and we get this:
My house is big.
My mouse is big.
And then another mutation:
My house is big.
My mouse is bit.
And another:
My house is big.
My mouse is lit.
And so on, every change creating new information. And assuming there was reproduction involved, this new gene now has the alleles "My house is big," "My mouse is big," "My mouse is bit" and "My mouse is lit." That's quite a bit of new information in the population.
There is only one problem: this is the same amount of information. So it's only fine-tuning.
Even if you got a longer sentence, it still couldn't be applied to reality, because such things don't happen in real life. That's the problem. Random chance will not do this. It will be more like this:
MY HOUSE IS BIG
MA HOUSE IS BIG
MO HIUSE IS BIG
LO FIUSE IS BIO
NI KOFSA IL FIO
This is what is happening in reality with biological information. It is deteriorating.
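The kind of unselected drift sketched above is easy to simulate. The toy Python sketch below (not a biological model; the 15-character string and the 50-step mutation count are invented for illustration) applies repeated point mutations with no selection and counts how many positions still match the original:

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "
ORIGINAL = "MY HOUSE IS BIG"

def point_mutation(s, rng):
    """Replace one randomly chosen character with a random character."""
    i = rng.randrange(len(s))
    return s[:i] + rng.choice(ALPHABET) + s[i + 1:]

rng = random.Random(0)   # fixed seed so the run is repeatable
s = ORIGINAL
for _ in range(50):      # 50 generations, no selection at all
    s = point_mutation(s, rng)

matches = sum(a == b for a, b in zip(s, ORIGINAL))
print(s)
print(f"{matches}/{len(ORIGINAL)} positions still match")
```

Without any selection step, each position is overwritten by a uniformly random character sooner or later, so the match count drifts toward chance levels.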

This message is a reply to:
 Message 208 by Percy, posted 08-02-2009 7:35 AM Percy has replied

Replies to this message:
 Message 247 by Percy, posted 08-03-2009 3:14 PM Smooth Operator has replied

Smooth Operator
Member (Idle past 5145 days)
Posts: 630
Joined: 07-24-2009


Message 215 of 315 (517755)
08-02-2009 12:26 PM
Reply to: Message 213 by Parasomnium
08-02-2009 12:23 PM


Re: New information? Easy!
quote:
In your example we get to know that the place where you live is a big place. Saying it twice does not teach us anything new. I think we can agree on that.
But in my example, first we learn that the place where you live is a big place, and next we learn that your rodent is a big rodent. These are two completely different pieces of information.
If we started out with "MY HOUSE IS BIG", got a duplicate of that, after which the duplicate acquired a small mutation - "HOUSE" becomes "MOUSE" - we genuinely do have some new information.
My example models exactly what can and does happen in genomes. Could you please explain how this is not an example of new information?
Because it takes the same amount of bits to describe it. You need a longer sentence. But the problem is, again, that this doesn't happen. The opposite is happening: random mutations cause the genome to deteriorate.

This message is a reply to:
 Message 213 by Parasomnium, posted 08-02-2009 12:23 PM Parasomnium has replied

Replies to this message:
 Message 216 by Parasomnium, posted 08-02-2009 12:44 PM Smooth Operator has replied

Smooth Operator
Member (Idle past 5145 days)
Posts: 630
Joined: 07-24-2009


Message 253 of 315 (518026)
08-03-2009 7:29 PM
Reply to: Message 216 by Parasomnium
08-02-2009 12:44 PM


Re: New information? Easy!
quote:
No, no, no! You don't get it. I'll put it in a different format, maybe you'll get it then.
First we have:
"MY HOUSE IS BIG"
By duplication we get:
"MY HOUSE IS BIGMY HOUSE IS BIG"
Then the second H is replaced by an M:
"MY HOUSE IS BIGMY MOUSE IS BIG"
This sentence is longer and contains more information. Moreover, it contains new information.
Yes, now it contains everything it needs to be new information. But the problem is, this does not happen through the action of matter itself in real life.
quote:
I'd say that random mutations cause the genome to change, not necessarily to deteriorate. And next, selection kicks in, selecting those changes that do well, while weeding out those that do not.
No, it doesn't work that way. It only works if you do it. Mutations either modify the genome to give a different expression of genes, with the same informational content, or they deteriorate it. There is no adding of information. And selection can't help you, because of the NFL theorems.
quote:
My sentence "MY MOUSE IS BIG" would fit in there nicely. If the selection pressure was about correct sentences, then most of them would not be selected, but "MY MOUSE IS BIG" most certainly would. ("MA HOUSE IS BIG" might also be selected for, if we happened to be in, say, Arkansas.)
Do you now understand what I mean?
Yes I do. But do you understand that this does not happen in real life, because evolution does not know what it is supposed to pick? And since it doesn't, it's going to select whatever has the best fitness on average. But fitness is not correlated with new information, so it's useless.

This message is a reply to:
 Message 216 by Parasomnium, posted 08-02-2009 12:44 PM Parasomnium has replied

Replies to this message:
 Message 269 by Parasomnium, posted 08-04-2009 4:33 AM Smooth Operator has replied

Smooth Operator
Member (Idle past 5145 days)
Posts: 630
Joined: 07-24-2009


Message 254 of 315 (518028)
08-03-2009 7:35 PM
Reply to: Message 219 by PaulK
08-02-2009 2:21 PM


Re: Three failures of CSI
quote:
Odd how you suddenly fail to understand the argument.
It is purely negative because it relies solely on eliminating alternatives. No positive design hypothesis is stated, nor is any argument made for it.
It's a deductive argument. You remove all the other possibilities, and the only logical alternative left is design. Because intelligence is known to produce CSI, that is called the inference to the best explanation. That is the positive part of the argument.
quote:
No. In fact that whole claim that IC systems are CSI is based on the assumed impossibility of evolving IC systems. Even though Behe - the supposed authority behind the claim - doesn't even agree.
Forget about IC systems. I'm talking about NFL theorems.
quote:
That sounds like a pretty clear example of a "fabrication". And it is not even adequate for Dembski's actual calculation (there is nothing for the "configuration" aspect, for instance),
No, it's not, since we can describe all the aspects of the flagellum without looking at it in advance.
quote:
Even if you were correct, that would still be invalid. The NFL theorems do not (and obviously cannot) show that evolution can never work.
That is precisely what they do show. They show that without intelligent input, any algorithm, including evolution, performs no better than blind chance.
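For reference, the averaged form of the NFL result (Wolpert and Macready's theorem) can be checked by brute force on a toy search space. The sketch below (the ring layout, budget, and all names are invented for the illustration) averages an adaptive neighbour-climbing search and a blind fixed-order search over every possible fitness function on four points; the theorem says the averages must come out identical:

```python
import itertools

POINTS = [0, 1, 2, 3]   # tiny search space, arranged as a ring
VALUES = [0, 1, 2, 3]   # possible fitness values
BUDGET = 3              # number of evaluations each search is allowed

def blind_search(f):
    """Evaluate points in a fixed order, ignoring all feedback."""
    return max(f[x] for x in POINTS[:BUDGET])

def hill_search(f):
    """Adaptive, non-revisiting: always probe next to the best point so far."""
    seen = {0: f[0]}
    while len(seen) < BUDGET:
        best = max(seen, key=seen.get)
        neighbours = [(best - 1) % 4, (best + 1) % 4]
        unseen = [x for x in neighbours if x not in seen]
        if not unseen:
            unseen = [x for x in POINTS if x not in seen]
        seen[unseen[0]] = f[unseen[0]]
    return max(seen.values())

# Average "best value found" over ALL 4^4 = 256 possible fitness functions.
blind_total = hill_total = 0
for f in itertools.product(VALUES, repeat=len(POINTS)):
    blind_total += blind_search(f)
    hill_total += hill_search(f)

print(blind_total / 256, hill_total / 256)  # prints: 2.4375 2.4375
```

Note what the theorem does and does not say: averaged over all possible fitness functions, the two searches tie exactly, but on any particular fitness function one can still outperform the other.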
quote:
I was answering your false assertion that the use of the event to derive a specification was invalid. I note that you implicitly acknowledge that that assertion was false.
We use the event that happened to detect design, but we have to have prior knowledge of that pattern.
quote:
However I note that you have helped prove my initial point - that in real design detection cases we use positive evidence when we can. The fact that Caputo supported the Democrats and was in a position of authority that might have enabled him to rig the draw is indeed relevant - but it is not part of Dembski's Design Inference. That method avoids any talk of possible designers. You automatically appeal to that circumstantial evidence, just as I said.
Caputo is the designer.

This message is a reply to:
 Message 219 by PaulK, posted 08-02-2009 2:21 PM PaulK has not replied

Smooth Operator
Member (Idle past 5145 days)
Posts: 630
Joined: 07-24-2009


Message 256 of 315 (518032)
08-03-2009 7:42 PM
Reply to: Message 221 by Wounded King
08-02-2009 2:35 PM


quote:
That was subsequently according to you, when the gyrase lost binding affinity. What I'm asking is did the gyrase have complex specified information in the pre-mutated sequence even though it had no binding partner to have an affinity for prior to the development of the antibiotic?
Yes it did. It was designed that way. The antibiotic was also designed to fit the gyrase.
quote:
No problem, but the gyrase's job isn't to bind an antibiotic that would impair its function. If you are claiming the mutation had an effect other than to change the affinity of gyrase and the antibiotic then that would be a good thing to provide evidence for. All the paper you directed me to does is claim that the loss of affinity between the gyrase and the antibiotic represents a loss of information for the gyrase.
So tell me what job the gyrase no longer performs as well?
I know it isn't, but it lost something. That is a loss of information. The deterioration of its structure is the loss of information. If this continues, it can stop doing its job altogether, and then the organism dies, since there is nothing to coil the DNA.
quote:
So it functioned to bind an antibiotic for millennia before that antibiotic was developed? In what way does that fulfil the concept of function?
Nope. It coils DNA. But its structure is damaged; therefore it lost information. It's obviously not a fatal loss, but it is a loss nonetheless.
quote:
Please state this explicitly, you are saying the designer designed gyrase to have a specific genetic sequence which it maintained for thousands of years simply so humans could develop an antibiotic that would be effective for a few decades before the gyrase mutated and made the antibiotic ineffective?
Is that what you are saying?
Obviously not.
quote:
Well we don't know that, you may claim that but it is a quite different thing.
It has never been observed, nor has matter been seen to have properties that are known to produce information, so there is no reason to believe that it can.
Hey, maybe there are little dwarfs turning the Moon around us. But we haven't seen them, so why believe in them?
quote:
So bacteria have a generation time of 1 or so years do they? I'm glad you have helped us gauge your knowledge of biology so succinctly.
Your argument seems to do nothing except make a nonsense of the concept of function as useful in terms of information.
Is this a real argument?

This message is a reply to:
 Message 221 by Wounded King, posted 08-02-2009 2:35 PM Wounded King has replied

Replies to this message:
 Message 268 by Wounded King, posted 08-04-2009 3:11 AM Smooth Operator has replied

Smooth Operator
Member (Idle past 5145 days)
Posts: 630
Joined: 07-24-2009


Message 257 of 315 (518033)
08-03-2009 7:52 PM
Reply to: Message 228 by Rrhain
08-02-2009 11:01 PM


quote:
So? If you were to sequence every single bacterium in the lawn, you'd find mutations all over the place. That's the entire point: New "information" is being created. Despite the fact that all of the bacteria are descended from a single ancestor, the genetic sequence of the descendants is not the same as that of the ancestor.
I know it's not the same. That doesn't mean it's new information.
quote:
At least one of those bits of new "information" provides resistance to T4 phage. That's why most of the lawn dies but some colonies survive. If we go with your claim that "no new information" is created, then either every single bacterium is capable of fending off T4 phage or none of them are capable of doing so and thus the entire lawn necessarily reacts as one.
Since the lawn does not react as a single unit, since some bacteria die while others live, your premise of "no new information" is necessarily proven untrue.
Again, you simply don't get it. New genes do not equal new information.
quote:
No. If there is this continual "loss of information," how on earth is anything still alive?
Because the loss is gradual.
quote:
That isn't a loss of "information," though. At the very least, it is a neutral shift in the genetic sequence.
Nothing is perfectly neutral. If something loses its function, it means it lost the information for that function to do its job.
quote:
There is an experiment you can run with removing the lactose operon from E. coli. This is the gene that allows them to be able to digest lactose. Under similar processes as the T4 phage experiment (take one, let it grow to a lawn, letting the generations pile up the mutations in the genome), they eventually regain the ability to digest lactose.
Yes, because they will use transposons to produce it again and again.
quote:
How is that not "new information"? They literally did not have any ability to digest lactose. If you had fed them only lactose, they would have died. So why is it that the descendants of these bacteria are able to do something that their ancestors can't? If your claim of "no new information" is true, then the lactose operon is always and forever gone because it was specifically and deliberately removed from the genome.
So where did this new operon that can digest lactose come from? A miracle?
Nope, it's a designed mechanism that does this. It has an algorithm that mutates the genome. But algorithms can only give you as much information as you input into them. They can't produce more than you give them. So this ability is just an expression of what the algorithm can do.
quote:
So if I start with a genetic sequence of A and we see a duplication event so that we have AA and then we see a mutation event so that we have AB, how is that not an "increase in information"?
Because that is biologically meaningless. You need new CSI, not just one new nucleotide. No new functions are gained by adding just one more nucleotide.
quote:
According to your description of Gitt saying that A and AA have the same "information" and that A and B have the same "information," then this process that involves two actions that don't by themselves create new "information" actually creates new "information" since we started with A and we ended with AB which, according to your own description, is new "information."
Gitt's information is not used for biological functions, since the last two levels can't be quantitatively measured. It describes information in general.
For biological information you need to use CSI. But CSI starts at 400 bits and up. That's about 85 letters of the Latin alphabet in English text.
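The letter estimate above is just the bit threshold divided by the information per letter; a quick check assuming all 26 letters are equally likely (a simplification, since English letter frequencies are not uniform):

```python
import math

BITS_PER_LETTER = math.log2(26)  # ~4.70 bits if all 26 letters are equally likely
THRESHOLD = 400                  # the bit figure used in the post above

print(round(BITS_PER_LETTER, 2))              # 4.7
print(round(THRESHOLD / BITS_PER_LETTER, 1))  # 85.1 letters
```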
quote:
By your description of Shannon, the new "information" step happened at the duplication stage.
So I have to ask you: Where is your justification of "no new information" when we have directly observed processes that result in what you claim is "new information"?
Because Shannon's definition of information can't be used for biological functions. It only deals with the first level of information, statistics, and that's not enough.

This message is a reply to:
 Message 228 by Rrhain, posted 08-02-2009 11:01 PM Rrhain has replied

Replies to this message:
 Message 260 by Coyote, posted 08-03-2009 8:46 PM Smooth Operator has replied
 Message 298 by Rrhain, posted 08-19-2009 5:34 AM Smooth Operator has replied

Copyright 2001-2023 by EvC Forum, All Rights Reserved
