Topic: How long does it take to evolve?
RAZD, Member (Idle past 1435 days), Posts: 20714, From: the other end of the sidewalk
edited to |add| some detail and change tone somewhat ...
Let us go step by step.
Great idea. It is only by exploring that we discover.
quote: How are you addressing my first point?
The question really is whether it really is that remarkable. If you remember I had said:
quote: A skin cell could easily be considered a very primitive eye, all it can detect is if it receives sunlight or not. |From there we can consider the skin to be a primitive retina which can be used to determine the approximate direction of the sun.| The point is that you can experience this primitive level of sensation yourself |and see that there would be benefit to having these sensations|. As others have noted this easily reduces further to the action of sunlight on a protein that changes how the protein behaves in the cell. Curiously I don't find this at all remarkable, as the cell is already a highly evolved system that reacts to its environment |and there are many single cell organisms that sense light and react to it|. Single cell life was around for a billion years or so before multicellular life and the development of organisms that would find more |developed| sense/reaction systems |to be even more beneficial|.
... My point is, according to my understanding, every one of those steps seems to exhibit IC. And I think it would be better to use words than terminology that is a bit iffy to start with ... I get the impression "IC" is often tossed around as if it were a real thing, when at best it is ignorance of how something actually developed. Again you had said:
quote: |Yes you do need to explain, as what has been discussed so far on similarities to other known systems being used as intermediates are all useful stages of sensing light and benefiting from it.|
Everything living is a result of the DNA coding. If a human cell contains 6 ft of microscopic DNA coding, let us take an arbitrary guess at how much DNA would be needed to program for a simple light receptor -- let's say 1/10 of a mm (pick your own guess). That would be far, far more organization than the word "monkey" right there. And less likely to happen than it is to have any word formed by shaking up a bunch of letters and pulling them one by one. ...
Others have already dealt with this.
... So maybe IC or ID is not the right word... Never good to rely on |questionable| acronyms for explanations, so yes not the right words imho. Perhaps you should try to put them in your own words.
... let's call it a statistical improbability. ...
Oh good, that proves that it couldn't happen? Again we go back to the lottery: the "statistical improbability" of a single ticket winning is high ... but the "statistical improbability" of the lottery being won is low. Statistics really prove nothing, and worse, they are irrelevant unless you know ALL the possibilities.
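RAZD's lottery point can be checked with a few lines of arithmetic. The numbers below (a 6-of-49 draw, 30 million independent random tickets) are purely illustrative assumptions, not figures from the thread; the shape of the result is what matters: any one ticket winning is wildly improbable, yet somebody winning is likely.

```python
from math import comb

# Hypothetical 6-of-49 lottery: odds for ONE ticket vs odds that SOMEBODY wins.
ways = comb(49, 6)                  # 13,983,816 possible draws
p_single = 1 / ways                 # one ticket: ~7e-8 -- "statistically improbable"

tickets_sold = 30_000_000           # assumed number of independent random tickets
# Probability that at least one of those tickets matches the draw:
p_somebody = 1 - (1 - p_single) ** tickets_sold

print(f"one ticket wins: {p_single:.2e}")
print(f"somebody wins:   {p_somebody:.3f}")
```

The improbability of a specified outcome tells you nothing about the probability of *some* outcome of that kind occurring.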
... It is towards this first step that I do not see how NS could aid in organizing.
Life reacts to stimuli -- that is the first step ...
Enjoy
Edited by RAZD: rewrote parts
by our ability to understand
Rebel☮American☆Zen☯Deist
... to learn ... to think ... to live ... to laugh ... to share.
Join the effort to solve medical problems, AIDS/HIV, Cancer and more with Team EvC!
MrHambre, Member (Idle past 1423 days), Posts: 1495, From: Framingham, MA, USA
Percy writes:
quote: DNA has to contain the information, else the information is nowhere and we couldn't exist.
Or maybe we just belabor the computer metaphor past its usefulness when we talk about the complex cellular machinery in living organisms. Harvard geneticist Richard Lewontin is no creationist crank, but he seems to think the DNA-as-program metaphor is something we respond to because it panders to the high-tech chauvinism that privileges the master-plan over the microscopic drones that do the work of building a living thing:
The problem is not one of dimension, but one of size. The nucleus of a cell of the fruit fly Drosophila, the favorite organism of geneticists, has enough DNA to specify the structure of about five thousand different proteins, and about thirty times that much DNA is available to provide spatial and temporal instructions about when the productions of proteins by those genes should be turned on and turned off. But this is simply too little, by many orders of magnitude, to tell every cell when it should divide, exactly where it should move next, and what cellular structures it should produce over the entire developmental history of the fly. One needs to imagine an instruction manual that will tell every New Yorker when to wake up, where to go, and what to do, hour by hour, day by day, for the next century. There is just not enough DNA to go around.
-from "The Science of Metamorphoses," New York Review of Books, April 27, 1989.
Edited by MrHambre: Attribution
Omnivorous, Member, Posts: 3992, From: Adirondackia, Member Rating: 7.5
Lamden writes:
Secondly, the light receptor is still 100% useless without a brain capable of deciphering the light into a "message". Think webcam without a computer. (This point I actually heard from someone else, who likely heard it from some creation science guy or something like that. But I think it's a great point.) There would be no reason for NS to aid in the dominance or propagation until the brain was there (another very organized block of mush, even at its simplest level). Thirdly, (back to my own thinking), even after deciphered into a message, a light message requires further action from the brain. Does the light mean I should jump into the fire, or away from the fire? A further impediment from allowing NS to help out. All this is for the simplest level of light receptor.
Your level of incredulity is high, Lamden, but only with regard to science. I assume there are religious tenets at work. I don't deal well with that kind of preemptive obduracy, so I'll post this as an exit from the thread.
You balk at the notion of gradual stages of eye evolution because you think there must be a brain for any of those stages to confer any benefit. But photoacceptors, molecules that react to light, exist in single-celled organisms like E. coli; in fact, some wavelengths of light promote population growth and others inhibit it. In a local environment, this has the effect of shifting the (now larger) population toward the beneficial light source. In addition, E. coli can be motile. So only one change would be required to produce a bacterium that moves toward the good light: essentially, the detection of that light already exists (in the stimulated metabolic pathway) and the ability to move to that light already exists. Add a tropism, with the motile bacteria reacting to and thus moving 'upstream' a gradient of goodness (light), and you've got productive 'vision' with nary a brain.
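Omnivorous's brainless "vision" can be sketched in a toy model. This is a hypothetical run-and-tumble walker, loosely inspired by E. coli chemotaxis rather than any real experiment: the "cell" keeps its heading while its light reading improves and tumbles to a random heading when it worsens. No brain, one rule, and it still climbs the gradient.

```python
import random

random.seed(1)

def light(x):
    """Assumed 1-D light gradient: brighter as x increases, with sensor noise."""
    return x + random.gauss(0, 0.05)

def run_and_tumble(steps=2000, step_size=0.1):
    """One rule, no brain: keep going while the light improves, otherwise
    tumble to a random heading. This biases the walk up the gradient."""
    x, heading = 0.0, random.choice([-1, 1])
    prev = light(x)
    for _ in range(steps):
        x += heading * step_size
        now = light(x)
        if now < prev:                 # got darker -> tumble
            heading = random.choice([-1, 1])
        prev = now
    return x

final_x = run_and_tumble()
print(f"net displacement toward the light: {final_x:+.1f}")
```

The walker ends far up the gradient even though each decision uses only "did that last step make things better or worse" -- the kind of behavior a purely hard-wired chemical pathway can implement.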
Every benefit of vision that you think requires a brain is exhibited in single-cell organisms: photoacceptors, photosensitives, phototropics, photophilics... and keep in mind that bacteria can exchange packets of beneficial genes. What seems impossible to you for evolution to achieve in vertebrates was sketched out by single-cell organisms long before vertebrates existed. What were the odds when you consider those billions of years and add an incalculable number of organisms working on the problem, sharing steps in its solution?
It took me 20 minutes on Google to trace the discovery that some broad wavelengths promote and some inhibit bacterial growth (19th century), that monochromatic lasers can refine the effect (1980s), that optimal levels of intensity, pulse frequency and wavelength can be determined for an effect (1990s), and that precise correspondences can be established between specific wavelengths and specific metabolic pathways in E. coli -- an organism that cannot long survive outside the darkness of our guts. Light got in the picture early.
Beware of how easy it is to believe what you'd like to believe. Look around. Life is infinitely more clever than theology.
Edited by Omnivorous: No reason given.
"If you can keep your head while those around you are losing theirs, you can collect a lot of heads."
Homo sum, humani nihil a me alienum puto. -Terence
dwise1, Member, Posts: 5952, Member Rating: 5.7
First, perhaps I should explain why I write the way that I do (on Facebook, one old friend even complained that reading my emails is like reading a college assignment). It's how I get to think about things, to think them through, to clarify my thoughts. For example, until I started explaining it to you, I hadn't really thought through how science is independent of questions of whether the ultimate origin of the universe was by purely natural or by supernatural means and hence what the consequences of that would be.
I think it's an outgrowth of a software debugging technique I developed in school. If my program had a bug that I couldn't figure out, I'd sit somebody down and explain to him what my program did step-by-step. It didn't matter whether that person understood what I was telling him. What did matter was that it forced me to go through my code and think about what each line did instead of looking at that code and already "knowing" what it did (ie, I knew what I had intended it to do, not what it was actually doing). It's the same idea as a writer having somebody else proofread something he wrote, since when we look at something we wrote we see what we "know" we had written instead of the words that are actually there (our brains play "fill in the blank" tricks on us all the time). Anyway, kind of an explanation and an apology, if necessary. And also, thank you for motivating me to make a couple updates to my site. As a working software engineer, it can be difficult for me to work on my many personal projects in "my copious spare time" (engineering inside joke, since we have so little spare time).
... and thus the monkey/weasel computer model would not apply to this first step.
And from Message 115:
That would be far, far more organization than the word "monkey" right there.
I get the feeling that there's still a bit of confusion about MONKEY. For one thing, it was never written to produce the word "monkey", even though you could choose that string if you wished. Rather, the task had reminded me of the frequently rehashed idea of an infinite number of monkeys banging away on typewriters producing literature (most commonly it's Hamlet, though the original formulation was all the books in the British Museum (see the quote below)). And it allowed me to include some interesting and humorous quotations, namely (from my MONKEY page):
quote:
My personal favorite was Lennon and McCartney, since it's the creationists who have so much to hide, unlike me.
Also, for your edification, RFC means "Request For Comment", which quickly turned into the de facto documentation for TCP/IP and the Internet. Over the years on April Fools Day, somebody will post a humorous RFC, such as RFC 2324 ("Hyper Text Coffee Pot Control Protocol (HTCPCP/1.0)", 1 April 1998) or RFC 2549 ("IP over Avian Carriers with Quality of Service", 1 April 1999) or RFC 6921 ("Design Considerations for Faster-Than-Light (FTL) Communication", 1 April 2013). BYTE magazine used to have similar articles in their April editions in the 70's and 80's. See Wikipedia's April Fools' Day Request for Comments for more information and examples. That page's listing of RFC 2795 ("The Infinite Monkey Protocol Suite (IMPS)") provides a link to the infinite monkey theorem, which is what I was referring to by naming my program "MONKEY".
As for what MONKEY is and is not, I found a readme file which was an email to a creationist I corresponded with back in 1990 -- it's the file, READ.ME, in the ZIP file you can download from my MONKEY page (and which I have just updated and will upload tonight). Rather than post it here, you can download and read it off-line.
Basically, MONKEY is heavily abstract. You may want to have something simple and neat to study, but life is not the least bit simple nor neat. Life is very complex and very messy. What Dawkins had done with his WEASEL was to take one aspect of life to study, natural selection. To do that, he had to abstract its essential properties, thus arriving at cumulative selection. To test that abstract idea, he needed to abstract the shite out of the supporting life function concepts such as fitness (abstracted immensely to include a target string), viability, and reproduction.
He even had to abstract away the distinction between genotype and phenotype and, of course, development (ie, the translation of a genotype into a phenotype, most commonly through embryonic development). And since I was repeating Dawkins' work, I had to perform the same abstractions.
A common mistake others will make is to take MONKEY too literally and to try to apply it directly to life examples. It is far too abstract for that. Rather, it proves out the difference between single-step selection and cumulative selection and demonstrates how well a system using cumulative selection can perform and converge onto a solution. Life does not even begin to use "the monkey/weasel computer model"; rather, life is subject to natural selection, which is a form of cumulative selection. So the correct way of looking at it is to say that evolution uses natural selection, that natural selection is a type of cumulative selection, and that we know from abstract mathematical studies of cumulative selection that systems that use it should have a very high probability of converging on solutions very quickly.
Which raises the question of whether you understand natural selection. That is after all part of my larger question: how do others understand that evolution is supposed to work? I think that if we can learn that, then we can begin to understand how they form their ideas of what evolution could or could not do.
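dwise1's MONKEY itself isn't reproduced here, but the cumulative selection it tests can be re-sketched in a few lines. This is a generic Dawkins-style WEASEL, not dwise1's actual program: random per-character mutation plus keep-the-best selection (with the parent retained, so the score never regresses) converges on the target string in a modest number of generations, whereas single-step selection (drawing whole strings at random) would effectively never finish.

```python
import random

random.seed(42)
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    """Fitness, abstracted down to a target string: count of matching positions."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy the string with a small per-character chance of a random change."""
    return "".join(random.choice(CHARS) if random.random() < rate else c for c in s)

# Cumulative selection: each generation keeps the best of the parent plus
# 100 mutated offspring, so progress accumulates instead of starting over.
parent = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET and generation < 10_000:
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=score)
    generation += 1

print(f"'{parent}' reached in {generation} generations")
```

Keeping the parent in the candidate pool is a deliberate simplification (elitism) so the demonstration is monotone; it does not change the point being made about cumulative versus single-step selection.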
dwise1, Member, Posts: 5952, Member Rating: 5.7
Or maybe we just belabor the computer metaphor past its usefulness when we talk about the complex cellular machinery in living organisms.
I do agree that the computer metaphor continually gets taken way too far. However, in the current discussion it may help Lamden understand something. He's trying to look at a particular length of DNA and figure out how much "information" it contains. That would be like looking at how many lines of code a program contains to figure out how much it does.
At a very simple level, software contains conditional statements, AKA "if-then-else". There are entire sections of code that either will or will not be executed depending on certain conditions. Similarly, in our DNA we have regulator genes that control whether other genes are active or not. So a straight count of lines of code or length of DNA will not give us an accurate idea of how much either "code" will do.
There are also loops and similar control structures in software. For example, I was assigned the job of maintaining some Pascal code written by a programmer whose main experience was in FORTRAN. Her code performed the same operations repeatedly but with different variables, and she wrote it that way, just as she was used to doing in FORTRAN. Her source code was over 40 pages long. I took those operations and put them into a procedure (AKA "void function" in C) and then replaced that code with a call to that procedure. My version was less than 8 pages long. By straight lines-of-code count, her version was more than 2.5 times larger than mine and so, by Lamden's apparent reasoning, should contain more than 2.5 times as much information as mine and do more than 2.5 times as much as mine. Yet both versions contained just as much information and did exactly the same thing. Size is not a reliable metric for determining power.
Similarly, regulatory genes can cause genetic "code" to "loop". Dawkins discussed this in The Blind Watchmaker.
Let's take a literalistic centipede with 100 legs (id est, with 50 body segments that each have a leg on either side). Does its genetic code need to have separate genes for each and every one of those body segments? No. All you need is code to make one body segment and regulatory genes to repeat that code 50 times. And as I recall, Dawkins' metaphor for genetic code was not that it contained instructions, but rather a blueprint specification. But then it's been more than 25 years.
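The centipede point can be made concrete. This is a hypothetical sketch, not real genetics: one "gene" that builds a segment plus one "regulatory" repeat count produces all 100 legs, so the length of the "code" says little about the size of what it builds.

```python
def make_segment(i):
    """The 'code' for one body segment: a segment with a leg on either side."""
    return {"segment": i, "legs": ["left", "right"]}

# One segment 'gene' plus a regulatory 'repeat 50 times' instruction,
# instead of 50 separate copies of the segment-building code.
SEGMENTS = 50
centipede = [make_segment(i) for i in range(SEGMENTS)]

total_legs = sum(len(seg["legs"]) for seg in centipede)
print(f"{len(centipede)} segments, {total_legs} legs")
```

Doubling the repeat count would double the body without adding a single line of segment-building "code" -- the same looping effect attributed to regulatory genes above.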
dwise1, Member, Posts: 5952, Member Rating: 5.7
Everything living is a result of the DNA coding. If a human cell contains 6 ft of microscopic DNA coding, let us take an arbitrary guess at how much DNA would be needed to program for a simple light receptor -- let's say 1/10 of a mm (pick your own guess). That would be far, far more organization than the word "monkey" right there. And less likely to happen than it is to have any word formed by shaking up a bunch of letters and pulling them one by one. So maybe IC or ID is not the right word... let's call it a statistical improbability. It is towards this first step that I do not see how NS could aid in organizing.
Your message seems to display some confusion about the roles of genetics and of natural selection, as well as a fixation on DNA (also seen in your Message 119). And trying to over-apply MONKEY directly to questions about life, which I've already talked about.
How does evolution work? To make a general statement of how evolution is understood to work, we have these basic concepts:
Now we can look at how evolution works so that we can compare it with how you appear to be thinking about it. We start with a population of organisms. The wording chosen also assumes a species that reproduces sexually and also implicitly assumes those organisms to be animals.
Does that make sense?
RAZD, Member (Idle past 1435 days), Posts: 20714, From: the other end of the sidewalk
And as I recall, Dawkins' metaphor for genetic code was not that it contained instructions, but rather a blueprint specification. ...
Cake recipe.
Enjoy
Tanypteryx, Member, Posts: 4451, From: Oregon, USA, Member Rating: 5.5
And as I recall, Dawkins' metaphor for genetic code was not that it contained instructions, but rather a blueprint specification. ... Cake recipe.
Wouldn't you say that beyond just the recipe, parts of it (DNA) are also replicator/fabricator of raw materials and building blocks?
What if Eleanor Roosevelt had wings? -- Monty Python
One important characteristic of a theory is that it has survived repeated attempts to falsify it. Contrary to your understanding, all available evidence confirms it. --Subbie
If evolution is shown to be false, it will be at the hands of things that are true, not made up. --percy
dwise1, Member, Posts: 5952, Member Rating: 5.7
Cake recipe.
Yeah. Though what I seem to remember more clearly was plans for making a bicycle. And the example of a jetliner with an emphasis on the idea that with a basic plan for making a fuselage section you could then make a stretch version of the plane pretty much just by increasing the number of sections (though obviously there'd be repercussions on the entire design).
dwise1, Member, Posts: 5952, Member Rating: 5.7
Let us take the eye, as a tribute to RAZD that mentioned it. Firstly, and most importantly, I imagine that even the most primitive light receptor is the result of a remarkable organization, be it natural or not. Again, look to Nature for examples of animals with some kind of light receptor and observe directly how much organization that requires. While Richard Dawkins in The Blind Watchmaker did a good job of presenting the stages of the evolution of the eye as observed in living species (in Wikipedia, see Evolution of the eye for graphics), that was just an elaboration of Charles Darwin's own presentation in the "Difficulties of the Theory" chapter of Origin of Species (1859) in the section, "Organs of extreme perfection and complication.":
quote:
In subsequent editions, Darwin expanded that discussion to two or three pages of known examples. Also, it should be noted that this quote is frequently misquoted by creationists by always stopping at the end of the very first sentence.
Just to point out, Darwin was talking about a single nerve ending close to the skin being enough to confer sensitivity to light. No complex structure before it could detect light, just an almost-exposed nerve ending. Those who had that nerve ending and could benefit from it would have been more fit and would have passed that trait on to the next generation. IOW, natural selection at work. Then any pigmentation in the skin over that nerve ending would serve to collect more light and amplify the stimulation, and so on. Those who had that pigmentation and whom it benefited would in turn cause that trait to be selected. And so on over the generations. Natural selection would work quite well here.
Secondly, the light receptor is still 100% useless without a brain capable of deciphering the light into a "message".
Others have already discussed how bacteria and single-celled animals can respond to light without any brain nor any kind of nerve tissue. There is also the hydra, which is little more than a sac with tentacles, all of which (ie, the tentacles and the wall of the sac body) is only two cells thick, and which responds quite readily to tactile stimulus. It has no brain whatsoever, but rather a neural network, a network of nerves connected to each other. I do not know of it having any sensitivity to light, but it does react to stimuli without benefit of brains. More advanced invertebrates have ganglia, small clusters of nerves that begin to act like a brain, but come nowhere near what laymen would consider a brain and are certainly completely unsuited for what you would want a brain to be able to do with a visual image, since you're thinking of "webcam without a computer."
Think webcam without a computer.
No, not a webcam. A webcam analogy would have to come much later. In the beginning of the evolution of the eye, it would not yet be capable of discerning an image; rather, it's more at the early stages of detecting the presence or absence of light and maybe some basic idea of direction. Think a single photo-detector, like the safety device for your garage door opener or the old arriving-customer detector in shops that would ring a bell when you'd enter and break the light beam. A webcam would be an entire array of really tiny photodetectors, but this is just a single one.
(This point I actually heard from someone else, who likely heard it from some creation science guy or something like that. But I think it's a great point.)
Sounds like the kind of thing a creationist would come up with. He doesn't really know what he's talking about and those who also don't know enough think it sounds great. Kind of like that "chicken or egg" argument Bill Morgan loves to tell and his audience thinks is really great stuff. Do you still think that webcam analogy is such a great point?
There would be no reason for NS to aid in the dominance or propagation until the brain was there
No, natural selection would still operate with or without a brain. As we can see in photosensitive single-celled life. As we can see in simple animals with neural nets and no brains. As we can see in invertebrates with neural ganglia, which are primarily disorganized clusters of nerves. An organism's traits that can be inherited by its progeny and that confer any degree of greater fitness, regardless of how slight, would still serve as grist for the mill of natural selection.
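The "grist for the mill" claim is easy to demonstrate with a toy model. All numbers here are assumptions chosen for illustration (a 10% fitness edge, a population of 500), not measurements: a slightly advantageous heritable trait, with no brain anywhere in the model, reliably spreads toward fixation.

```python
import random

random.seed(0)

def simulate(advantage=0.10, pop_size=500, start_freq=0.20, generations=150):
    """Track the frequency of a heritable trait whose carriers have fitness
    1 + advantage versus 1. Each offspring's parent is drawn in proportion
    to fitness, so the trait's share compounds generation after generation."""
    freq = start_freq
    for _ in range(generations):
        weighted = freq * (1 + advantage)
        p_inherit = weighted / (weighted + (1 - freq))
        carriers = sum(random.random() < p_inherit for _ in range(pop_size))
        freq = carriers / pop_size
        if freq in (0.0, 1.0):          # trait lost or fixed
            break
    return freq

final = simulate()
print(f"trait frequency after selection: {final:.2f}")
```

Nothing in the model reasons about anything; differential reproduction alone does the organizing.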
Thirdly, (back to my own thinking), even after deciphered into a message, a light message requires further action from the brain. Does the light mean I should jump into the fire, or away from the fire? A further impediment from allowing NS to help out.
Huh? So now this invertebrate with nothing more than a ganglion must be capable of rational thought and problem solving? Huh??
Let's try a computer analogy again, especially since my initial technical training was as an electronic digital computer technician. Digital electronics use voltage levels that can be in two states, high or low (some outputs can have a third state, high impedance, which effectively disconnects them from the circuit; we will not refer to that again). We assign to those two voltage levels boolean values of true or false, or binary values of one or zero. There are three basic types of digital circuits based on the three basic operators of Boolean Algebra: AND (the output is true only when all inputs are true), OR (the output is true when any input is true), and NOT (the output inverts the input).
You can combine these three fundamental gates to create more complicated circuits: NAND gates (which invert the output of an AND gate), NOR gates, XOR gates (exclusive-or, which only outputs true if the two inputs are different), flip-flops (a basic memory cell; you input a one or a zero and it remembers that value), counters (a series of flip-flops that will step through a count as you pulse it), registers (a series of flip-flops that store several binary digits (AKA "bits") that taken altogether form a number or an address), shift registers (registers that allow you to shift bits to the adjacent flip-flops; used to multiply or divide by two), adders (gates that add two bits together and produce a sum and a carry), etc.
You can also connect gates to form a combinatorial network, which will generate output values in response to a given set of input values -- we will return to this idea shortly. For example, you could have photo-detectors and door switches providing inputs and have control and alarm voltages as outputs to detect a condition that you need to take action on (eg, the photo-detector for your garage door opener losing light as the door is descending, which means there's something standing there like a small child, will cause a control voltage that will stop the door and make it go back up). A more ubiquitous example would be 7-segment decoders, which cause a number to be displayed on an LCD display by using the number's bits as inputs and generating outputs to each of the seven display segments to turn each one on or off (in my digital design class, designing that combinatorial network was one of our assignments).
Add a counter to a combinatorial network and you have a sequential network. A digital clock is a good example of that. A computer combines all that and more.
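The gate descriptions above can be sketched directly. This is a toy model in place of real hardware: the three basic gates, a couple of derived gates, and a half adder as the promised "sum and a carry" combinatorial network, where a given set of inputs always produces the same outputs.

```python
# The three basic gates, modeled on bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# Derived gates built by combining the basic three.
def NAND(a, b): return NOT(AND(a, b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))  # true only when inputs differ

def half_adder(a, b):
    """A small combinatorial network: add two bits, producing a sum and a carry.
    Hard-wired behavior -- the same inputs always yield the same outputs."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```

Note that `half_adder` cannot be talked into doing anything else; to change its behavior you would have to rewire it, which is exactly the contrast with a programmable computer drawn below.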
With a computer, we really up the ante with a CPU ("central processing unit", now also called a microprocessor, with the simpler ones being called microcontrollers; the latter go into washing machines and microwave ovens). The CPU reads numbers from memory (basically a huge two-dimensional array of flip-flops organized into sequential addresses, each of which accesses a register), deciphers them as instruction codes which it then uses to generate control signals which tell the computer's combinatorial and sequential and other digital circuitry to retrieve the other values from memory and perform the required operations on them, including storing the results in a particular register or memory location.
The basic difference between combinatorial and sequential networks and computers is that those networks are hard-wired to only do the thing that they were designed to do, whereas the computer does whatever its program tells it to do. In order to change what a network does, you have to redesign it and completely rebuild it. In order to change what a computer does, you simply give it a different program. Indeed, computers have been called "the universal machine" because the same machine can do almost anything simply by giving it a different program. As a side-note, notice that the computer is constructed entirely of combinatorial and sequential networks -- I know because in tech school we chased sparks (ie, traced signals) through the logic diagrams of a functional computer. And while on the top level the computer can do almost anything, in the lower levels its programming is still hard-wired. Under the hood, most of it is still combinatorial and sequential networks.
Now let's apply that to the brain and to ganglia and neural networks. We can only compare our brains to a computer in general terms, since our brain's method of reprogramming consists basically of rewiring itself.
However, our brains are indeed capable of learning and of reasoning, things that in computers have to be simulated through software. But like the computer, our brains operate on different levels in kind of a hierarchy of circuits (I'm drawing here from the BYTE book, The Brains of Men and Machines). The top-most layer of the brain decides upon an action we want to take, then that decision goes down through layer after layer before that action is actually taken. This is required because the brain is rather slow and having to handle all the details at the top-most level would overwhelm it -- consider what happens when you try to learn a new dance step or other complex movement; when you have to think your way through it you cannot do it because you're too slow and clumsy, but as you move it "down into muscle memory" to where you don't have to think about it anymore, then you become faster and more adroit.
At the same time, each successively deeper layer becomes increasingly basic and hard-wired. For example, in order to regulate the actual muscle tension and positioning of a part of the body, you have pairs of nerve networks that detect the muscle tension of opposing sets of muscles and increase or relax the tension as appropriate to keep the body where you had decided you wanted it. Above that you have reflexes, in which the body bypasses the brain and responds to a stimulus locally on its own; no reasoning anything out there! And even behaviorally within the brain, we still have certain instinctual behaviors and drives that can take control unless overridden by the conscious brain, difficult though that can often be.
Perhaps a simpler illustration of this hierarchy of brain circuitry would be touch-typing, which I learned in junior high. The entire course of instruction consists primarily of creating muscle memory. You learn through constant drilling which letter can be reached by which finger by moving that finger up or down and keeping it over its home key.
Then you drill on the most frequent words in English, especially the two-, three-, and four-letter words. And the most common suffixes, by practicing words that contain them. The end result is that when you see a long word you don't usually type, you spell it out in your head and your fingers "know" where to go. And if that uncommon word ends in a common ending, suddenly your fingers speed up and rip through it. And the common words you never have to think about. That is all because with those drills you had built up in that neural hierarchy of your brain the lower levels that knew how to type any given key and how to type each and every one of those common words, such that all your conscious brain had to do was to think of the word (even that could become unconscious, such that you could just look at some text and transcribe it without thinking). I realized that the very first time I typed a paper for German class. Suddenly, I could barely type! I had to spell out each and every German word, even the most common ones. But by the time I finished typing that paper, I had committed those common German words to muscle memory.
OK, that works for us brainy animals, which should also apply to various degrees to most all vertebrates (since they all have some form of central nervous system; that neural cord is a prerequisite for joining our club). But what about the brainless invertebrates? The ones with nothing more than a ganglion? In their cases, they primarily have just hard-wired neural networks with extremely little or no reasoning ability. Basically, a given set of sensory inputs will produce the same behavioral response. They should be running almost purely on instinct. That would render moot your concern for them to be able to reason through a situation.
Though that also raises the question of human reactions to dangerous situations. Basically, when in danger we instinctively revert to "fight or flight".
More exactly, blood flow to the neo-cortex is restricted and redirected to the limbic complex, thus shutting down our ability for rational thought and ramping up our emotional and instinctual responses. That is what strong emotions such as panic and rage do to our brains. That is why the military drills us on what to do in emergency situations, so that when the balloon does go up we know what to do. That is why A1C Stone responded immediately and effectively against that threat on that train. In our case, we can learn different responses to danger, new behaviors for when instinct kicks in. In the case of that lowly invertebrate, it's primarily all instinct. Instinct that it was born with. Instinct that it largely inherited from its parent(s).
Thirdly (back to my own thinking), even after being deciphered into a message, a light message requires further action from the brain. Does the light mean I should jump into the fire, or away from the fire? A further impediment from allowing NS to help out.

OK, the invertebrate inherited its instinctual response to light. Like your garage door opener, it has a hard-wired instinctual response to light and to the lack of light. How did that instinctual behavior develop? By natural selection.

Way back when, über-great Ur-grandpappy invertebrate could sense light but didn't have any instinct for doing anything about it. OK, that's not quite right. There was an ancestral population with the ability to sense light and a variety of instinctual responses to it, including ignoring it. There were certain benefits to moving towards the light (photophilic) and certain benefits to moving away from it (photophobic). Actually, this presents a situation in which a population would split into two sub-populations and end up evolving into two species. The photophobes would benefit by moving into dark safe places away from predators (think cockroaches), while the photophiles would have some other benefit that I can't quite think of at the moment (possibly getting out into the open during the daytime in order to feed on flowers' nectar and pollen -- guess I'm moving towards moths at this point).

In the resultant photophobe species, offspring could develop a more photophilic behavior, but that would make them less fit, plus I'm sure it would crimp their love life (moving away from potential mates). Photophile offspring that stopped moving towards the light would likewise lose their food source, etc, and be selected against. In both cases, natural selection would have established the species and maintained their traits.

All that evolved before fire. What effect would that have? We know that moths are photophiles; think "drawn like moths to a flame."
So we have a large population of moths that are photophilic and drawn to light and whose range is an area of 100 sq. miles. A party of humans arrives to camp for a week in their range. Some moths are drawn to their campfire and die, but most of the population are not close enough to be affected.

OK, so now let's have the humans settle the area and populate it densely, such that the entire 100 sq. miles is completely covered by them and all of them have campfires every night. That would severely impact the moth population. If they all have the very simple behavior of moving towards any source of light, then they're goners. But if some of them have developed through mutation or recombination the ability to distinguish between different intensities of light, then they could distinguish between sunlight and the much dimmer firelight. That ability could have developed countless generations ago, but was neutral because it made no difference. But now it would be important. Another nascent trait could be a preference for either brighter or dimmer light. So now those with the ability to distinguish between sun and fire, and who preferred the brighter sunlight, would not be drawn to the fires. The end result would be a new species of moth. Because of natural selection.

But to answer your basic question, jumping into the fire or avoiding it would depend on what your instinctual behavior to that stimulus is. And those whose behavior favored avoiding the fire would be favored by natural selection. Remember: natural selection happens (like in Forrest Gump, sh*t happens). The actual results of natural selection will vary, but as long as life keeps doing what life does, selection will happen and evolution will happen. They never stop.
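The moth scenario can be sketched as a toy simulation. The death rates, population sizes, and trait names below are all invented for illustration; the point is only that a heritable trait which avoids the fires comes to dominate the population within a few generations of selection:

```python
import random

random.seed(42)

# Each moth carries one heritable behavior: "naive" moths fly toward any
# light, including campfires; "discriminating" moths only respond to light
# as bright as sunlight. Per-generation fire mortality rates are made up.
FIRE_DEATH_RATE = {"naive": 0.8, "discriminating": 0.05}

def next_generation(population, size=1000):
    """Kill moths drawn into fires, then repopulate from the survivors."""
    survivors = [m for m in population
                 if random.random() > FIRE_DEATH_RATE[m]]
    return [random.choice(survivors) for _ in range(size)]

# Start with the discriminating trait rare: 10% of the population.
population = ["naive"] * 900 + ["discriminating"] * 100
for generation in range(10):
    population = next_generation(population)

frequency = population.count("discriminating") / len(population)
# After ten generations of nightly fires, the discriminating trait
# has all but fixed in the population -- selection did the work.
```

No individual moth changed its behavior; the trait frequencies shifted because the environment (campfires everywhere) kept culling one behavior and sparing the other.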
dwise1 Member Posts: 5952 Joined: Member Rating: 5.7 |
At http://cre-ev.dwise1.net/monkey.html:
quote:

To re-iterate, the main problem is that the newer 64-bit Windows systems (I've got two of them) refuse to run 16-bit applications, which is what the old MONKEY executable is. Therefore, I needed to provide an executable that the newer boxes could run. In order to do that, I had to recompile it with a 32-bit compiler. Since 32-bit Turbo Pascal compilers are hard to come by, I had to convert it to another language, like C.

The other problem is that the original program used the conio library, which is not universally supported. Of the three development systems I have (MinGW gcc, Pelles C, Microsoft Visual Studio 2008), only MinGW gcc supports conio. So that's the one I used to build the new executable.

MinGW gcc depends on a distributable Microsoft Visual C++ runtime library. In order to get around that, I chose the build option to link the libraries in statically. It did increase the size of the executable very noticeably, as I would have expected. My understanding is that that should remove the requirement for that distributable runtime file. Unfortunately, I could be wrong. Doubly unfortunately, all the computers I have access to have MinGW gcc installed and hence also that runtime file. That means that I have no means to test the new executable. Therefore, if you encounter problems running MONKEY, please inform me of that fact and give me enough information to resolve the problem.

Here's a false positive you may get. The other old farts ... er, experienced geeks ... will remember the Norton Utilities, a real treasure trove for geeks. Part of that was the Norton Index. The standard was the "true blue" IBM PC/XT (by "true blue", that means the actual IBM product and not a clone). The "true blue" XT ran at 4.77 MHz, while most all the clones ran at 8 MHz. Using the "true blue" XT as the standard, Norton assigned it a Norton Index of 1. Therefore, whatever Norton Index your PC got was the number of times faster than a "true blue" XT your PC was running.
A clone XT had an index of 2, running twice as fast. I think an AT ran at 5. So what's the Norton Index of the current machines? I don't know for sure, but I think it's up around 2000. That means that MONKEY runs much faster than when I first developed it at a Norton Index of 2.

When I first ran MONKEY on a newer machine, I thought it was broken. Nothing seemed to have happened and it reported that zero time had transpired. But it reported having arrived at the solution. It had, but it had done so too fast for the time counters. Sure had me going for a while. With a generation size of 100, you arrive at the solution far too quickly for it to register. You have to pare it down to something smaller, like 10, to be able to observe it approaching the target and then backsliding away, etc.

BTW, for the amount of time the single-step selection method needs to reach one chance in a million of succeeding, I postulated a super-computer capable of one million attempts per second. We're still not there yet. To the single-step selection display, I added a new statistic shown when you stop (with Esc, not space): the number of attempts per second. Currently on my Win7 box it's just over 1800.

Share and enjoy!
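MONKEY's actual source isn't shown here, so the following is only a generic Dawkins-style sketch of the cumulative-selection method the program demonstrates, written in Python rather than the original Turbo Pascal or the C port, with a made-up generation size and mutation rate. It shows why cumulative selection finishes in a blink on modern hardware while single-step selection (generating whole random strings until one matches) would need on the order of 27^28 attempts:

```python
import random
import string

random.seed(1)

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def cumulative_selection(generation_size=100, mutation_rate=0.05):
    """Each generation, mutate copies of the parent and keep the best child.
    Matching letters tend to be retained, so progress accumulates."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        children = ["".join(random.choice(ALPHABET)
                            if random.random() < mutation_rate else c
                            for c in parent)
                    for _ in range(generation_size)]
        parent = max(children, key=score)
        generations += 1
    return generations

gens = cumulative_selection()
# Converges in a modest number of generations -- effectively instantaneous
# on any modern machine, which is why MONKEY's timers read zero.
```

By contrast, single-step selection has a per-attempt success probability of (1/27)^28, which is why even a postulated million-attempts-per-second super-computer gets nowhere with it.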
Coyote Member (Idle past 2136 days) Posts: 6117 Joined: |
Back to the main point.
How long does it take to evolve? Dunno. I've been waiting for quite a few years and I seem to be devolving more than I am evolving. Hmmmm. What's wrong here???

Religious belief does not constitute scientific evidence, nor does it convey scientific knowledge. Belief gets in the way of learning--Robert A. Heinlein
How can I possibly put a new idea into your heads, if I do not first remove your delusions?--Robert A. Heinlein
It's not what we don't know that hurts, it's what we know that ain't so--Will Rogers
If I am entitled to something, someone else is obliged to pay--Jerry Pournelle
If a religion's teachings are true, then it should have nothing to fear from science...--dwise1
"Multiculturalism" demands that the US be tolerant of everything except its own past, culture, traditions, and identity.
dwise1 Member Posts: 5952 Joined: Member Rating: 5.7 |
I know the feeling. You're just getting old. Deal with it.
dwise1 Member Posts: 5952 Joined: Member Rating: 5.7
We have already gone through a lot of discussion about natural selection acting upon phenotypes rather than genotypes, and a lot more.
When we speak of evolution, we end up speaking about speciation events, the formation of a new species -- see Wikipedia at Speciation. Back in 1983/4, I heard a very good presentation about "creation science" in which the speaker, Fred Edwords, cited the most radically rapid rate of speciation given by the most radical scientists: 50,000 years. I do not know what his source was and I do not know what it is based on.

The context of his remark is that the standard creationist argument for being able to stuff so many animals onto Noah's Ark (yes, they are very serious about that!) was to postulate "basic created kinds" such as the basic "canid kind" and the basic "felid kind" and the basic "worm kind" and the basic "beetle kind", and then after the Ark landed each of those kinds underwent rapid "micro-evolution" to produce all the species, genera, and much higher taxa that we observe today. His argument was that whereas 50,000 years for a single speciation event is radically rapid by scientific standards, creationists are arguing for vastly more rapid evolution in their attempts to "disprove" evolution.

Actually, it is much worse than that. Le Baron Georges de Cuvier, the Father of Paleontology, in Napoleon's time was a staunch anti-evolutionist. He was also a young-earther. And he examined the mummies that Napoleon's army brought back from Egypt, including many mummified animals, and he could not find any difference between the mummies and modern animals and humans. Therefore, in all those thousands of years, no evolution had occurred. That shrinks the time for creationist "basic created kind" evolution not only down to near-zero, but also deep into the negative scale.

But think about what you are trying to do. You want to somehow map out the genetic changes from a single-celled organism to a human. But the differences between a single-celled organism and a human are all measured in the phenotype, and yet you want to measure them through the genotype.
That seems like a fairly major disconnect. There are a number of problems with what you are proposing. Here's one that I'm sure you didn't anticipate.

To measure the genetic difference between that ancestral single-celled organism and humans, you need the genome of that ancestral single-celled organism so that you can measure the differences between it and modern humans. Can you do that? No, because those ancestral single-celled organisms no longer exist. But, you say, yes they do! We can still find them. No, we can find their modern descendants, but not the original ones.

A couple/few decades ago, an Australian medical doctor, Michael Denton, wrote a book, Evolution: A Theory in Crisis. This book became very popular with anti-evolutionists. But then after its publication he became aware, through many conversations, of the gross errors that he had made. He said that if he were to write it again it would be very different, but he has no plans to rewrite it. Here is something I had written about it:
quote:

So then, your best bet is to be able to take that ancestral DNA of a single-celled organism and compare it directly to that of modern-day humans in order to see exactly what genetic changes had occurred. But you don't have that information, do you?

Edited by dwise1, : Added an extremely strategic period at the end of the first sentence of the third paragraph.
Edited by dwise1, : Fracking fracking bill shite! That was intended to be a period, not a comma!!!!!
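The point about only having modern descendants can be illustrated with a toy calculation. The sequences and mutation counts below are fabricated; the takeaway is that comparing two modern genomes measures the divergence accumulated along both lineages since their common ancestor, and without that (extinct) ancestor's genome you cannot apportion the changes to either lineage:

```python
import random

random.seed(7)

BASES = "ACGT"

def mutate(seq, n_mutations):
    """Apply n random point substitutions to a sequence."""
    seq = list(seq)
    for _ in range(n_mutations):
        i = random.randrange(len(seq))
        seq[i] = random.choice(BASES.replace(seq[i], ""))  # pick a different base
    return "".join(seq)

def distance(a, b):
    """Count differing positions (Hamming distance)."""
    return sum(x != y for x, y in zip(a, b))

# A toy ancestral gene, long since extinct.
ancestor = "".join(random.choice(BASES) for _ in range(200))

# Two lineages diverge from it, each accumulating its own mutations.
modern_microbe = mutate(ancestor, 30)
modern_human_gene = mutate(ancestor, 30)

# Comparing the two MODERN sequences measures the sum of the changes
# along BOTH lineages since the split -- not the changes along either one.
d_modern = distance(modern_microbe, modern_human_gene)
d_one_lineage = distance(ancestor, modern_human_gene)
```

In this toy, `d_modern` comes out roughly twice `d_one_lineage`; with only the two modern sequences in hand, the per-lineage history is simply not recoverable.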
Percy Member Posts: 22506 From: New Hampshire Joined: Member Rating: 5.4 |
conio.h isn't supported on Linux.
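For anyone porting conio-based code to Linux, the usual POSIX workaround for conio's getch() is to drop the terminal into raw mode for a single read. Here is a minimal sketch of that technique using Python's standard termios and tty modules; it illustrates the idea only and is not part of MONKEY (a C port would use the same tcgetattr/tcsetattr calls from <termios.h>):

```python
import sys
import termios
import tty

def getch():
    """Read a single keypress without waiting for Enter, like conio's getch().
    POSIX-only: requires stdin to be a real terminal."""
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)   # save current terminal modes
    try:
        tty.setraw(fd)                     # raw mode: no echo, no line buffering
        ch = sys.stdin.read(1)
    finally:
        # Always restore the terminal, even if the read is interrupted.
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
    return ch
```

The try/finally restore matters: leaving the terminal in raw mode after a crash is the classic failure mode of this trick.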
--Percy
Copyright 2001-2023 by EvC Forum, All Rights Reserved
Version 4.2
Innovative software from Qwixotic © 2024