


Topic: A proof against ID and Creationism
cavediver Member (Idle past 3771 days) Posts: 4129 From: UK Joined: 
quote: Sorry, but even Stephen Hawking seems to disagree with you on that one.
Believe me, he doesn't... Your quote is trying, very unsuccessfully, to describe the Hawking/Penrose singularity theorems. The theorems say nothing concerning creation and cause. Have you ever seen anything created or caused to exist? Or have you just seen pre-existing matter rearranged? This message has been edited by cavediver, 04-11-2006 07:30 AM


Modulous Member (Idle past 113 days) Posts: 7801 From: Manchester, UK Joined: 
quote: As far as the Universe not being complex, I'm not trying to be mean here, however you may want to read up on the little science has been able to deduce from the heavens. Every time we delve into her mysteries, the Universe turns out to be far more complex than we ever thought possible.
I'm not sure that's entirely true. 'Strange' might be a good term, perhaps 'difficult to understand', but 'complex' isn't necessarily true. As my post indicated, you'll have to define complex. As per my post, you'll also need to take into account the entirety of the universe, on average. This includes time (remember your 4 dimensions), and as far as I am aware the universe is going to spend the majority of its existence in heat death, which by any definition I'm aware of is extremely non-complex. This message has been edited by Modulous, Tue, 11-April-2006 01:00 PM


gregor Inactive Member 
Many people still believe that random developmental processes are hopelessly inefficient at solving difficult problems. I therefore want to point to a simple random process which may find good solutions to a very difficult problem: the traveling salesman problem.

The salesman should visit a number of towns, one at a time, and wants to know in what order the towns should be visited in order to make the tour as short as possible. Suppose that the number of towns is n = 60. For a random process, this is like having a deck of cards numbered 1, 2, 3, ..., 59, 60, where the number of permutations is of the same order of magnitude as the total number of atoms in the universe. If the hometown is not counted, the number of possible tours becomes 60*59*58*...*4*3 (about 10 raised to 80). The probability of finding the shortest tour by random permutation of the cards is about one in 10 raised to 80, so it will never happen.

But natural evolution uses an inversion operator which, in principle, is extremely well suited to finding good solutions to the problem. A part of the card deck, chosen at random, is taken out, turned in the opposite direction and put back in the deck again. If this inversion takes place where the tour happens to have a loop, then the loop is opened and the salesman is guaranteed a shorter tour. In a population of one million card decks this might happen at least 200 times in every generation.

I have simulated this with a population of 180 card decks, from which 60 decks are selected in every generation (using MATLAB, the language of technical computing). After about 1500 generations all loops have been removed and the length of the random tour at the start has been reduced to 1/5 of the original tour. In a special case, when all towns are equidistantly placed along a circle, the optimal solution has been found once all loops have been removed. This means that this simple random process has been able to find one optimal tour out of as many as 10 raised to 80. This also means that random variation and selection is a very important principle for creating a huge amount of information. So, there is no reason to distrust random developmental processes.

See also Goldberg, D. E. Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, New York, 1989.
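To make the inversion operator concrete, here is a minimal Python sketch. It is not gregor's MATLAB program (which evolved a population of 180 decks); it improves a single "card deck" by accepting only shortening inversions, and uses the special case mentioned above where the towns sit equidistantly on a circle. All names are illustrative.

```python
import math
import random

def tour_length(tour, coords):
    """Total length of the closed tour visiting coords in the given order."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def inversion_search(coords, attempts=60000, seed=1):
    """Improve one 'card deck' by random inversions: take out a random part of
    the deck, turn it around, and keep the result only if the tour shortens."""
    rng = random.Random(seed)
    tour = list(range(len(coords)))
    rng.shuffle(tour)
    best = tour_length(tour, coords)
    for _ in range(attempts):
        i, j = sorted(rng.sample(range(len(coords)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        length = tour_length(candidate, coords)
        if length < best:  # an opened loop guarantees a shorter tour
            tour, best = candidate, length
    return tour, best

# The special case from the post: 60 towns equidistant on a circle, where the
# optimal tour is the regular 60-gon around the circle.
n = 60
coords = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
          for k in range(n)]
tour, length = inversion_search(coords)
optimal = n * 2 * math.sin(math.pi / n)  # perimeter of the regular 60-gon
```

A random 60-town tour on the unit circle is tens of units long; after the inversions its length falls close to the 60-gon perimeter of about 6.28, illustrating the loop-opening argument in the post.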


ramoss Member (Idle past 740 days) Posts: 3228 Joined: 
Except, of course, that this is a bad analogy, since the route is a cumulative process.


Modulous Member (Idle past 113 days) Posts: 7801 From: Manchester, UK Joined: 
Perhaps you could expand on that response. Why does the route being a cumulative process make it a bad analogy?


gregor Inactive Member 
I still think that the traveling salesman example may be seen as simulated evolution solving a combinatorial problem. The process is working with random variation and selection of individuals.
In addition I would like to show an example of parametric simulated evolution, and a little of my background. In the middle of the 1960s I worked at a Swedish telecom company on analysis and optimisation of signal-processing systems. Formerly such systems consisted of interconnected components such as resistors, inductors and capacitors.

In the late 1960s my boss formulated a technical problem: "Try to find system solutions that are insensitive to variations in parameter or component values due to the statistical spread in manufacturing," he said. This means that he wanted the manufacturing yield maximized. If we have only two components, each having a parameter value, the problem is very simple. Let the first parameter value be the shortest distance to the left edge of a picture, while the second value is the distance to the bottom edge. Then, if the interconnection is given, a point in the picture represents the system unambiguously. Suppose now that all points inside a certain triangle (the region of acceptability) meet all requirements according to the specification of the system, while all other points do not, and that the spread of parameter values is uniformly distributed over a circle. Then, if the circle touches the three sides of the triangle, the centre of the circle would be a perfect solution to the problem. But if we have 10 or 100 parameters, then the number of possible parameter combinations becomes super-astronomical and the region of acceptability cannot possibly be surveyed. I began to think that the man was not all there.

The problem was almost forgotten until a system designer entered my room about half a year later. He wanted to maximize the manufacturing yield of his system, which was able to meet all requirements according to the specification, but with a very poor yield. Oh, dear! I would not like to get fired immediately. So, we wrote a computer program in a hurry, using a random number generator giving normally (Gaussian) distributed numbers.

The system functions of each randomly chosen system were calculated and compared with the requirements. In this way we got a population (generation) of about 1000 systems, from which a certain fraction of approved systems was selected. For the next generation the centre of gravity of the normal distribution was moved to the centre of gravity of the approved systems, and this process was repeated for many generations. After about 100 generations the centres of gravity reached a state of equilibrium. Then the designer said, "but this looks very good". And we were both astonished, because we had only put some things together by chance.

A closer look revealed that there is a mathematical theorem (the theorem of normal or Gaussian adaptation), valid for normal distributions only, stating: "If the centre of gravity of the approved systems coincides with the centre of gravity of the normal distribution in a state of selective equilibrium, then the yield is maximal." For the proof see Kjellström (1970) and Kjellström & Taxén (1981). This gave an almost religious experience. Here a mathematical theorem solved our problem without our knowledge and independently of the structure of the region of acceptability. Our very simple process was similar to evolution in the sense that it worked with random variation and selection. Later, it turned out that evolution might as well use the theorem, and much more than that. Today I would not hesitate to regard this as an example of "intelligent design" effectuated by a mathematical theorem and a process using random variation and selection.

References
Kjellström, G. Optimization of electrical Networks with respect to Tolerance Costs. Ericsson Technics, no. 3, pp. 157-175, 1970.
Kjellström, G. & Taxén, L. Stochastic Optimization in System Design. IEEE Trans. on Circ. and Syst., vol. CAS-28, no. 7, July 1981.
This message has been edited by gregor, 04-16-2006 09:51 AM
gkm
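The centring procedure described above can be sketched in a few lines of Python. This is a simplified stand-in, not the original program: the "region of acceptability" here is an invented two-parameter triangle rather than real system functions, and all names and numbers are illustrative. Each generation, the Gaussian's centre is moved to the centre of gravity of the approved samples.

```python
import random

def inside_triangle(p):
    """Region of acceptability: points meeting the spec. Here it is the
    triangle with vertices (0, 0), (4, 0) and (0, 4) -- an invented example."""
    x, y = p
    return x >= 0 and y >= 0 and x + y <= 4

def gaussian_adaptation(centre, sigma=1.0, generations=100, pop=1000, seed=7):
    """Move the Gaussian's centre to the centre of gravity of the approved
    samples, generation after generation, as described in the post."""
    rng = random.Random(seed)
    final_yield = 0.0
    for _ in range(generations):
        samples = [(rng.gauss(centre[0], sigma), rng.gauss(centre[1], sigma))
                   for _ in range(pop)]
        accepted = [p for p in samples if inside_triangle(p)]
        final_yield = len(accepted) / pop
        if accepted:  # centre of gravity of the approved systems
            centre = (sum(x for x, _ in accepted) / len(accepted),
                      sum(y for _, y in accepted) / len(accepted))
    return centre, final_yield

# Start well outside the region of acceptability; the yield climbs as the
# centre migrates into the triangle and settles near an equilibrium.
centre, final_yield = gaussian_adaptation(centre=(3.5, 3.5))
```

Starting from a centre where almost every sample fails the spec, the process drifts into the triangle and the fraction of approved systems rises sharply, which is the behaviour the theorem of Gaussian adaptation predicts at selective equilibrium.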


gregor Inactive Member 
In short, there is a pocketful of mathematical theorems ruling evolution, at least to a fairly good statistical second-order approximation. This means that the gene pool is approximated by a normal distribution. Then, by using the rules of genetic variation (crossing over, inversion, etcetera) as a random number generator, evolution may effectuate a simultaneous maximization of mean fitness and genetic disorder/diversity.
This means that evolution strives to secure our survival with the largest possible margins to spare, while the disorder stands for imagination and creativity.

1. The central limit theorem: the sum of a large number of random steps tends to become normally (Gaussian) distributed. Since the development from fertilized egg to adult individual may be seen as a stepwise modified repetition of the evolution of a particular individual, morphological characters (parameters) tend to become normally distributed. As examples of such parameters we may mention the length of a bone or the distance between the pupils. Even mental parameters such as IQ may be normally distributed. See Cramér in the references.

2. The normal distribution is the most disordered distribution among all statistical distributions having the same variance. See Middleton.

3. The theorem of normal (Gaussian) adaptation, or normal (Gaussian) centring (we have many names for the things we love): if the centre of gravity (m*) of the gene pool of the parents to offspring in the next generation coincides with the centre (m) of the normally distributed gene pool in the next generation, in a state of selective equilibrium (m* = m), then the mean fitness is maximal. See Kjellström (1970) and Kjellström & Taxén (1981). This theorem may be proved in two different ways. Firstly, one may maximize mean fitness while keeping the disorder of the normal distribution constant. Secondly, one may maximize the disorder of the normal distribution while keeping the mean fitness constant. In both cases the condition of optimality is the same, m* = m. This means that evolution effectuates a simultaneous maximization of mean fitness and genetic disorder/diversity. A more general formulation of the theorem includes the mean value of information and the moment matrix M of the normal distribution, allowing the disorder to increase even more while still keeping the mean fitness constant. The condition of optimality becomes M* proportional to M, where M* is the moment matrix of the parental distribution. This makes normal adaptation a second-order approximation of evolution. Note that mean fitness is calculated as a mean over the set of individuals, in contrast to the fundamental theorem of biology (Fisher, 1930), where mean fitness is calculated over the set of genes, leading to a dubious theorem.

4. The theorem for the choice of a breeding partner (Hardy-Weinberg): if mating takes place at random, then the allele frequencies in the next generation are exactly the same as they were for the parents. See Hartl, 1981. Since the centres of gravity (m* and m) will behave similarly, evolution will strive to fulfill the condition of optimality of the theorem of normal adaptation according to point 3, and mean fitness will be maximized.

5. The second law of thermodynamics (the entropy law): the disorder will always increase in all isolated systems. But in order to avoid considering isolated systems I prefer an alternative formulation: a system attains its possible states in proportion to their probability of occurrence. See Reif, 1985, and Brooks, 1986. Thus, the system will attain its most probable disordered states even if some force from outside influences it. If the mutation rate is sufficiently high, evolution will also be able to maximize the genetic disorder/diversity in accordance with the theorem of normal adaptation, point 3.

6. The theorem of efficiency: the most important difference between natural evolution and the simulated evolution in my PC is that the natural one is able to test millions of individuals in parallel, while my PC has to test one at a time. But the efficiency of evolution also depends on the mutation rate, and this is plain from the theorem of efficiency, which is based on the theory of information (Shannon 1948; see Middleton). So, if P is the probability that an individual in a large population will be able to survive, then the negative logarithm of P, -log(P), is the information in the art of survival gained when a survivor has been found. Since the inverse of P is proportional to the work or time needed to find a survivor, -P*log(P) becomes a measure of efficiency. A simplified version of the theorem states that all measures of efficiency that satisfy certain postulates are asymptotically proportional to -P*log(P) when the number of statistically independent parameters tends towards infinity. See Kjellström, 1991. Maximum efficiency is attained when P = 1/e = 0.3679, where e is the base of the natural logarithm. As an example of a measure of efficiency not based on the theory of information, I may mention the average speed of a random walk in a simplex region. The average speed asymptotically tends to -P*log(P) when the number of dimensions tends towards infinity. See Kjellström, 1969.

References
Brooks, D. R. & Wiley, E. O. Evolution as Entropy: Towards a Unified Theory of Biology. The University of Chicago Press, 1986.
Cramér, H. Mathematical Methods of Statistics. Princeton University Press, Princeton, 1961.
Hartl, D. L. A Primer of Population Genetics. Sinauer, Sunderland, Massachusetts, 1981.
Kjellström, G. Network Optimization by Random Variation of Component Values. Ericsson Technics, vol. 25, no. 3, pp. 133-151, 1969.
Kjellström, G. Optimization of electrical Networks with respect to Tolerance Costs. Ericsson Technics, no. 3, pp. 157-175, 1970.
Kjellström, G. & Taxén, L. Stochastic Optimization in System Design. IEEE Trans. on Circ. and Syst., vol. CAS-28, no. 7, July 1981.
Kjellström, G. On the Efficiency of Gaussian Adaptation. Journal of Optimization Theory and Applications, vol. 71, no. 3, Dec. 1991.
Middleton, D. An Introduction to Statistical Communication Theory. McGraw-Hill, 1960.
Reif, F. Fundamentals of Statistical and Thermal Physics. McGraw-Hill, 1985.
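The efficiency measure in point 6 is easy to check numerically. A minimal Python sketch (illustrative, not taken from the cited references): information gained per unit of work is (-log P)/(1/P) = -P*log(P), and a grid search confirms the stated maximum at P = 1/e.

```python
import math

def efficiency(p):
    """-P*log(P): a survivor yields -log(p) units of information and costs on
    average 1/p trials to find, so information per unit work is -p*log(p)."""
    return -p * math.log(p)

# Grid search over (0, 1); the maximum sits at P = 1/e, about 0.3679.
best_value, best_p = max((efficiency(k / 10000), k / 10000)
                         for k in range(1, 10000))
```

Setting the derivative -log(p) - 1 to zero gives p = 1/e directly, and the maximal efficiency value is itself 1/e, which the grid search reproduces.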


Wounded King Member (Idle past 161 days) Posts: 4149 From: Cincinnati, Ohio, USA Joined: 
quote: Since the development from fertilized egg to adult individual may be seen as a stepwise modified repetition of the evolution of a particular individual,
Does this refer to something other than Haeckelian recapitulation? That is what it sounds like, unless your 'modified' is there to cover a multitude of sins. TTFN, WK


Brad McFall Member (Idle past 5161 days) Posts: 3428 From: Ithaca,NY, USA Joined: 
Well
quote: why can't your distribution be derived from some division of genes distributed PER individual.
Without THAT it seems that, though your idea is well designed, it might be more proof against ID than for Creationism etc. To me the difficulty in doing this comes from having to keep in sync 4 DIFFERENT hierarchies:
A) Transfinite, GENERATING a finitary list of real-number groups
B) Tracks, Nodes, Main Massings, Baselines, GENERATING a finite list of baselines
C) Ecosystem-Engineered Populations, GENERATING an actual list of species
D) Phenomenological Monohierarchies, GENERATING a divided place other than Boltzmann, listing differences of macrothermodynamics and hierarchical thermodynamics
If all four hierarchies were logically related (other than independently listed, as I did), it seems to me that your notion might apply to genes and individuals in the same structure. This message has been edited by Brad McFall, 04-18-2006 07:03 AM


gregor Inactive Member 
What I was trying to say was that the sequence of random steps stored in the DNA messages during billions of years of evolution has been modified by mutations.
gkm


gregor Inactive Member 
The “theory” I am advocating describes only the possible random climbing of a population of a certain species. In this way evolution may climb a genetic landscape, the complexity of which may be inherent in your propositions A to D.
I am only opposing the kind of thinking emanating from the “fundamental theorem of biology” due to Fisher (1930), where a gene may have a fitness of its own and be a unit of selection. Dawkins' The Selfish Gene follows the same track. I get the impression that egoism is a law of nature. But the theorems in my list have primarily nothing to do with egoism. On the contrary, since mean fitness is a collective parameter which is being maximized, evolution has strong forces in favor of the collective. But, of course, this has nothing to do with communism.

According to Maynard Smith the fundamental theorem states that “the rate of increase of mean fitness of any organism at any time is equal to its genetic variance at that time”. But a population may reach a state of selective equilibrium, in which case the increase of mean fitness is equal to zero, but not necessarily the genetic variance. Instead, according to the new model, the disorder and variances of the normal distribution are simultaneously maximized. So the “fundamental theorem” can hardly be a fundamental truth. Of course, the calculations of Fisher are certainly correct. But the result is seemingly wrong. I think the reason is that the premise is wrong and that a gene cannot have a fitness of its own. It is therefore meaningless to calculate the mean fitness over the set of genes. Genes may of course be enriched in the gene pool, but when an individual is selected, evolution can hardly point to certain good genes for selection. The whole individual will either be selected or rejected. So, it is the selection of individuals that rules evolution, not the selection of genes. Mayr, in What Evolution Is, seemingly comes to the same conclusion.

The same way of thinking appears in the definition of fitness according to Maynard Smith: “Fitness is a property, not of an individual, but of a class of individuals, for example homozygous for allele A at a particular locus.” This definition is certainly useful in breeding programs. But unfortunately, a theory based on this is completely useless as the basis of a model of an evolution selecting individuals.

The evolution of helper behavior is also explained in terms of egoism, as kin selection (Hamilton). But there is no need for any egoism to explain the phenomenon. If the individuals of some primitive species do not help their offspring to survive, then mean fitness may increase if a certain helper behavior evolves and, vice versa, an increase of mean fitness may cause a helper behavior to evolve. Further, if this behavior is extended to include relatives, or even any individual independent of race or religion, then the mean fitness of the total gene pool may increase even more. gkm


Brad McFall Member (Idle past 5161 days) Posts: 3428 From: Ithaca,NY, USA Joined: 
It seems to me that your notion requires one to second-guess that climbing by Corinthian columns (S. J. Gould's idea on the same "individual" of Mayr, but modified) cannot predominate. I doubt that it does (then I would have to be way wrong, and Wolfram all right in what he wrote on cellular automata), but it is the "good" scientist in me to wait for that kind of evidence first.


The Tiger Inactive Member 
Another possibility is that God is eternal and always has been, as the Bible says.


crashfrog Member (Idle past 1595 days) Posts: 19762 From: Silver Spring, MD Joined: 
quote: Another possibility is that God is eternal and always has been, as the Bible says.
If you can have one complex thing that isn't itself created, why not others?


inkorrekt Member (Idle past 6210 days) Posts: 382 From: Westminster,CO, USA Joined: 
What is the correct information please?



Copyright 2001-2023 by EvC Forum, All Rights Reserved