The Talk.Origins Archive: Exploring the Creation/Evolution Controversy

Genetic Algorithms and Evolutionary Computation

by Adam Marczyk
Copyright © 2004
[Posted: April 23, 2004]

Introduction

Creationists occasionally charge that evolution is useless as a scientific theory because it produces no practical benefits and has no relevance to daily life. However, the evidence of biology alone shows that this claim is untrue. There are numerous natural phenomena for which evolution gives us a sound theoretical underpinning. To name just one, the observed development of resistance - to insecticides in crop pests, to antibiotics in bacteria, to chemotherapy in cancer cells, and to anti-retroviral drugs in viruses such as HIV - is a straightforward consequence of the laws of mutation and selection, and understanding these principles has helped us to craft strategies for dealing with these harmful organisms. The evolutionary postulate of common descent has aided the development of new medical drugs and techniques by giving researchers a good idea of which organisms they should experiment on to obtain results that are most likely to be relevant to humans. Finally, humans have used the principle of selective breeding to great effect to create, for their own benefit, customized organisms unlike anything found in nature. The canonical example, of course, is the many varieties of domesticated dogs (breeds as diverse as bulldogs, chihuahuas and dachshunds have been produced from wolves in only a few thousand years), but less well-known examples include cultivated maize (very different from its wild relatives, none of which have the familiar "ears" of human-grown corn), goldfish (like dogs, we have bred varieties that look dramatically different from the wild type), and dairy cows (with immense udders far larger than would be required just for nourishing offspring).

Critics might charge that creationists can explain these things without recourse to evolution. For example, creationists often explain the development of resistance to antibiotic agents in bacteria, or the changes wrought in domesticated animals by artificial selection, by presuming that God decided to create organisms in fixed groups, called "kinds" or baramin. Though natural microevolution or human-guided artificial selection can bring about different varieties within the originally created "dog-kind," or "cow-kind," or "bacteria-kind" (!), no amount of time or genetic change can transform one "kind" into another. However, exactly how the creationists determine what a "kind" is, or what mechanism prevents living things from evolving beyond its boundaries, is never explained.

But in the last few decades, the continuing advance of modern technology has brought about something new. Evolution is now producing practical benefits in a very different field, and this time, the creationists cannot claim that their explanation fits the facts just as well. This field is computer science, and the benefits come from a programming strategy called genetic algorithms. This essay will explain what genetic algorithms are and will show how they are relevant to the evolution/creationism debate.

What is a genetic algorithm?

Concisely stated, a genetic algorithm (or GA for short) is a programming technique that mimics biological evolution as a problem-solving strategy. Given a specific problem to solve, the input to the GA is a set of potential solutions to that problem, encoded in some fashion, and a metric called a fitness function that allows each candidate to be quantitatively evaluated. These candidates may be solutions already known to work, with the aim of the GA being to improve them, but more often they are generated at random.

The GA then evaluates each candidate according to the fitness function. In a pool of randomly generated candidates, of course, most will not work at all, and these will be deleted. However, purely by chance, a few may hold promise - they may show activity, even if only weak and imperfect activity, toward solving the problem.

These promising candidates are kept and allowed to reproduce. Multiple copies are made of them, but the copies are not perfect; random changes are introduced during the copying process. These digital offspring then go on to the next generation, forming a new pool of candidate solutions, and are subjected to a second round of fitness evaluation. Those candidate solutions which were worsened, or made no better, by the changes to their code are again deleted; but again, purely by chance, the random variations introduced into the population may have improved some individuals, making them into better, more complete or more efficient solutions to the problem at hand. Again these winning individuals are selected and copied over into the next generation with random changes, and the process repeats. The expectation is that the average fitness of the population will increase each round, and so by repeating this process for hundreds or thousands of rounds, very good solutions to the problem can be discovered.
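To make the cycle concrete, here is a minimal sketch of this generate-evaluate-vary loop in Python. Everything in it is an illustrative assumption rather than a description of any particular GA from the literature: the toy "one-max" fitness function simply counts 1-bits, selection is plain truncation of the weaker half, and the population and mutation parameters are arbitrary.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 50, 100, 0.02

def random_genome():
    # A candidate solution encoded as a binary string (see Methods of representation).
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Toy fitness function ("one-max"): count the 1-bits.
    return sum(genome)

def mutate(genome):
    # Copying is imperfect: each bit may flip with small probability.
    return [1 - bit if random.random() < MUT_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Evaluate all candidates and delete the less promising half.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Survivors reproduce with random variation, forming the next generation.
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))  # near GENOME_LEN after enough rounds
```

Run repeatedly, this loop reliably drives a population of random noise toward all-1 strings, illustrating how average fitness climbs round by round.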

As astonishing and counterintuitive as it may seem to some, genetic algorithms have proven to be an enormously powerful and successful problem-solving strategy, dramatically demonstrating the power of evolutionary principles. Genetic algorithms have been used in a wide variety of fields to evolve solutions to problems as difficult as or more difficult than those faced by human designers. Moreover, the solutions they come up with are often more efficient, more elegant, or more complex than anything comparable a human engineer would produce. In some cases, genetic algorithms have come up with solutions that baffle the programmers who wrote the algorithms in the first place!

Methods of representation

Before a genetic algorithm can be put to work on any problem, a method is needed to encode potential solutions to that problem in a form that a computer can process. One common approach is to encode solutions as binary strings: sequences of 1's and 0's, where the digit at each position represents the value of some aspect of the solution. Another, similar approach is to encode solutions as arrays of integers or decimal numbers, with each position again representing some particular aspect of the solution. This approach allows for greater precision and complexity than the comparatively restricted method of using binary numbers only and often "is intuitively closer to the problem space" (Fleming and Purshouse 2002, p. 1228).
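As an illustration of the binary approach, the short Python sketch below (hypothetical names and ranges, chosen only for this example) shows how a string of bits can stand for a real-valued parameter: the bits are read as an unsigned integer and then scaled into the parameter's allowed interval.

```python
def decode_bits(bits, lo=0.0, hi=1.0):
    # Read the bit string as an unsigned integer, then scale it into [lo, hi].
    value = int("".join(str(b) for b in bits), 2)
    return lo + (hi - lo) * value / (2 ** len(bits) - 1)

# An 8-bit gene standing for a parameter between -5 and 5:
print(decode_bits([1, 0, 1, 1, 0, 0, 1, 0], lo=-5.0, hi=5.0))  # about 1.98
```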

This technique was used, for example, in the work of Steffen Schulze-Kremer, who wrote a genetic algorithm to predict the three-dimensional structure of a protein based on the sequence of amino acids that go into it (Mitchell 1996, p. 62). Schulze-Kremer's GA used real-valued numbers to represent the so-called "torsion angles" between the peptide bonds that connect amino acids. (A protein is made up of a sequence of basic building blocks called amino acids, which are joined together like the links in a chain. Once all the amino acids are linked, the protein folds up into a complex three-dimensional shape based on which amino acids attract each other and which ones repel each other. The shape of a protein determines its function.) Genetic algorithms for training neural networks often use this method of encoding also.

A third approach is to represent individuals in a GA as strings of letters, where each letter again stands for a specific aspect of the solution. One example of this technique is Hiroaki Kitano's "grammatical encoding" approach, where a GA was put to the task of evolving a simple set of rules called a context-free grammar that was in turn used to generate neural networks for a variety of problems (Mitchell 1996, p. 74).

The virtue of all three of these methods is that they make it easy to define operators that cause the random changes in the selected candidates: flip a 0 to a 1 or vice versa, add or subtract from the value of a number by a randomly chosen amount, or change one letter to another. (See the section on Methods of change for more detail about the genetic operators.) Another strategy, developed principally by John Koza of Stanford University and called genetic programming, represents programs as branching data structures called trees (Koza et al. 2003, p. 35). In this approach, random changes can be brought about by changing the operator or altering the value at a given node in the tree, or replacing one subtree with another.

Figure 1: Three simple program trees of the kind normally used in genetic programming. The mathematical expression that each one represents is given underneath.
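One minimal way to realize such program trees, sketched here in Python with nested lists (an illustrative representation, not Koza's actual implementation), is to store an operator together with its subtrees and evaluate recursively; a random change then amounts to swapping the operator or a leaf at some node.

```python
import random

# A program tree as nested lists: [operator, left, right], with "x" or a
# number at the leaves. This tree represents the expression (x + 3) * x.
tree = ["*", ["+", "x", 3], "x"]

def evaluate(node, x):
    # Recursively compute the value of the expression tree at a given x.
    if node == "x":
        return x
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    a, b = evaluate(left, x), evaluate(right, x)
    return {"+": a + b, "-": a - b, "*": a * b}[op]

def mutate(node, rate=0.1):
    # With small probability, change the operator or value at this node;
    # otherwise recurse into the subtrees.
    if isinstance(node, list):
        op = random.choice("+-*") if random.random() < rate else node[0]
        return [op, mutate(node[1], rate), mutate(node[2], rate)]
    if random.random() < rate:
        return random.choice(["x", random.randint(0, 9)])
    return node

print(evaluate(tree, 2))          # (2 + 3) * 2 = 10
print(evaluate(mutate(tree), 2))  # usually close by, occasionally different
```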

It is important to note that evolutionary algorithms do not need to represent candidate solutions as data strings of fixed length. Some do represent them in this way, but others do not; for example, Kitano's grammatical encoding discussed above can be efficiently scaled to create large and complex neural networks, and Koza's genetic programming trees can grow arbitrarily large as necessary to solve whatever problem they are applied to.

Methods of selection

There are many different techniques which a genetic algorithm can use to select the individuals to be copied over into the next generation, but listed below are some of the most common methods. Some of these methods are mutually exclusive, but others can be and often are used in combination. (Two of the most popular, roulette-wheel selection and tournament selection, are sketched in code following the list.)

Elitist selection: The most fit members of each generation are guaranteed to be selected. (Most GAs do not use pure elitism, but instead use a modified form where the single best, or a few of the best, individuals from each generation are copied into the next generation just in case nothing better turns up.)

Fitness-proportionate selection: More fit individuals are more likely, but not certain, to be selected.

Roulette-wheel selection: A form of fitness-proportionate selection in which the chance of an individual's being selected is proportional to its fitness relative to the total fitness of its competitors. (Conceptually, this can be represented as a game of roulette - each individual gets a slice of the wheel, but more fit ones get larger slices than less fit ones. The wheel is then spun, and whichever individual "owns" the section on which it lands each time is chosen.)

Scaling selection: As the average fitness of the population increases, the strength of the selective pressure also increases and the fitness function becomes more discriminating. This method can be helpful in making the best selection later on when all individuals have relatively high fitness and only small differences in fitness distinguish one from another.

Tournament selection: Subgroups of individuals are chosen from the larger population, and members of each subgroup compete against each other. Only one individual from each subgroup is chosen to reproduce.

Rank selection: Each individual in the population is assigned a numerical rank based on fitness, and selection is based on this ranking rather than absolute differences in fitness. The advantage of this method is that it can prevent very fit individuals from gaining dominance early at the expense of less fit ones, which would reduce the population's genetic diversity and might hinder attempts to find an acceptable solution.

Generational selection: The offspring of the individuals selected from each generation become the entire next generation. No individuals are retained between generations.

Steady-state selection: The offspring of the individuals selected from each generation go back into the pre-existing gene pool, replacing some of the less fit members of the previous generation. Some individuals are retained between generations.

Hierarchical selection: Individuals go through multiple rounds of selection each generation. Lower-level evaluations are faster and less discriminating, while those that survive to higher levels are evaluated more rigorously. The advantage of this method is that it reduces overall computation time by using faster, less selective evaluation to weed out the majority of individuals that show little or no promise, and only subjecting those who survive this initial test to more rigorous and more computationally expensive fitness evaluation.
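As promised above, here is a brief Python sketch of two of these schemes, roulette-wheel and tournament selection. Both function names are hypothetical, and the roulette version assumes the fitness function returns non-negative values.

```python
import random

def roulette_select(population, fitness):
    # Each individual owns a slice of the wheel proportional to its fitness;
    # a random spin decides which slice the ball lands on.
    total = sum(fitness(ind) for ind in population)
    spin = random.uniform(0, total)
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= spin:
            return ind
    return population[-1]  # guard against floating-point rounding

def tournament_select(population, fitness, k=3):
    # A random subgroup of k individuals competes; only the fittest
    # member of the subgroup is chosen to reproduce.
    return max(random.sample(population, k), key=fitness)
```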

Methods of change

Once selection has chosen fit individuals, they must be randomly altered in hopes of improving their fitness for the next generation. There are two basic strategies to accomplish this. The first and simplest is called mutation. Just as mutation in living things changes one gene to another, so mutation in a genetic algorithm causes small alterations at single points in an individual's code.

The second method is called crossover, and entails choosing two individuals to swap segments of their code, producing artificial "offspring" that are combinations of their parents. This process is intended to simulate the analogous process of recombination that occurs to chromosomes during sexual reproduction. Common forms of crossover include single-point crossover, in which a point of exchange is set at a random location in the two individuals' genomes and one individual contributes all its code from before that point while the other contributes all its code from after it, and uniform crossover, in which the value at each position in the offspring's genome comes from one parent or the other with 50/50 probability.
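For bit-string genomes like those in Figure 2, all three operators take only a few lines each. The following Python sketch is illustrative, assuming genomes stored as lists of bits; it implements point mutation, single-point crossover and uniform crossover as just described.

```python
import random

def point_mutation(genome, rate=0.01):
    # Small alterations at single points in an individual's code.
    return [1 - bit if random.random() < rate else bit for bit in genome]

def single_point_crossover(mom, dad):
    # One parent contributes all its code from before a random cut point,
    # the other all its code from after it.
    point = random.randint(1, len(mom) - 1)
    return mom[:point] + dad[point:]

def uniform_crossover(mom, dad):
    # Each position is taken from one parent or the other with 50/50 odds.
    return [random.choice(pair) for pair in zip(mom, dad)]

child = single_point_crossover([1, 1, 1, 1, 0, 0, 0, 0], [0, 0, 0, 0, 1, 1, 1, 1])
print(point_mutation(child))
```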

Figure 2: Crossover and mutation. The above diagrams illustrate the effect of each of these genetic operators on individuals in a population of 8-bit strings. The upper diagram shows two individuals undergoing single-point crossover; the point of exchange is set between the fifth and sixth positions in the genome, producing a new individual that is a hybrid of its progenitors. The second diagram shows an individual undergoing mutation at position 4, changing the 0 at that position in its genome to a 1.

Other problem-solving techniques

With the rise of artificial life computing and the development of heuristic methods, other computerized problem-solving techniques have emerged that are in some ways similar to genetic algorithms. This section explains some of these techniques, in what ways they resemble GAs and in what ways they differ.

Figure 3: A simple feedforward neural network, with one input layer consisting of four neurons, one hidden layer consisting of three neurons, and one output layer consisting of four neurons. The number on each neuron represents its activation threshold: it will only fire if it receives at least that many inputs. The diagram shows the neural network being presented with an input string and shows how activation spreads forward through the network to produce an output.
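The behavior of such threshold neurons is easy to sketch in code. The Python below is only an illustration: the connection pattern and thresholds are invented for this example and do not reproduce the exact network in Figure 3. Each neuron fires (outputs 1) only if the number of active incoming connections reaches its threshold.

```python
def layer_output(inputs, connections, thresholds):
    # connections[i][j] == 1 means input neuron i feeds neuron j of this layer.
    # Neuron j fires only if its count of active inputs reaches thresholds[j].
    return [int(sum(inp * row[j] for inp, row in zip(inputs, connections))
                >= thresholds[j])
            for j in range(len(thresholds))]

input_signal = [1, 0, 1, 1]                                  # four input neurons
in_to_hidden = [[1, 1, 0], [0, 1, 1], [1, 0, 1], [0, 1, 1]]  # three hidden neurons
hidden = layer_output(input_signal, in_to_hidden, thresholds=[2, 2, 1])
print(hidden)  # activation spreading forward through the network
```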

A brief history of GAs

The earliest instances of what might today be called genetic algorithms appeared in the late 1950s and early 1960s, programmed on computers by evolutionary biologists who were explicitly seeking to model aspects of natural evolution. It did not occur to any of them that this strategy might be more generally applicable to artificial problems, but that recognition was not long in coming: "Evolutionary computation was definitely in the air in the formative days of the electronic computer" (Mitchell 1996, p.2). By 1962, researchers such as G.E.P. Box, G.J. Friedman, W.W. Bledsoe and H.J. Bremermann had all independently developed evolution-inspired algorithms for function optimization and machine learning, but their work attracted little followup. A more successful development in this area came in 1965, when Ingo Rechenberg, then of the Technical University of Berlin, introduced a technique he called evolution strategy, though it was more similar to hill-climbers than to genetic algorithms. In this technique, there was no population or crossover; one parent was mutated to produce one offspring, and the better of the two was kept and became the parent for the next round of mutation (Haupt and Haupt 1998, p.146). Later versions introduced the idea of a population. Evolution strategies are still employed today by engineers and scientists, especially in Germany.
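Rechenberg's early scheme is simple enough to capture in a few lines: one parent, one mutated offspring, keep the better of the two. The Python below is a schematic (1+1)-style illustration with arbitrary parameters, not a reconstruction of Rechenberg's actual system.

```python
import random

def evolution_strategy(f, start, sigma=0.1, steps=1000):
    # (1+1)-style loop: mutate the single parent into a single offspring
    # and keep whichever of the two scores better under f.
    parent = start
    for _ in range(steps):
        child = [x + random.gauss(0, sigma) for x in parent]
        if f(child) >= f(parent):
            parent = child
    return parent

# Maximize a simple function of two variables (peak at the origin):
best = evolution_strategy(lambda v: -(v[0] ** 2 + v[1] ** 2), [3.0, 4.0])
print(best)  # drifts toward [0.0, 0.0]
```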

The next important development in the field came in 1966, when L.J. Fogel, A.J. Owens and M.J. Walsh introduced in America a technique they called evolutionary programming. In this method, candidate solutions to problems were represented as simple finite-state machines; like Rechenberg's evolution strategy, their algorithm worked by randomly mutating one of these simulated machines and keeping the better of the two (Mitchell 1996, p.2; Goldberg 1989, p.105). Also like evolution strategies, a broader formulation of the evolutionary programming technique is still an area of ongoing research today. However, what was still lacking in both these methodologies was recognition of the importance of crossover.

As early as 1962, John Holland's work on adaptive systems laid the foundation for later developments; most notably, Holland was also the first to explicitly propose crossover and other recombination operators. However, the seminal work in the field of genetic algorithms came in 1975, with the publication of Holland's book Adaptation in Natural and Artificial Systems. Building on earlier research and papers both by Holland himself and by colleagues at the University of Michigan, this book was the first to systematically and rigorously present the concept of adaptive digital systems using mutation, selection and crossover, simulating processes of biological evolution, as a problem-solving strategy. The book also attempted to put genetic algorithms on a firm theoretical footing by introducing the notion of schemata (Mitchell 1996, p.3; Haupt and Haupt 1998, p.147). That same year, Kenneth De Jong's important dissertation established the potential of GAs by showing that they could perform well on a wide variety of test functions, including noisy, discontinuous, and multimodal search landscapes (Goldberg 1989, p.107).

These foundational works established more widespread interest in evolutionary computation. By the early to mid-1980s, genetic algorithms were being applied to a broad range of subjects, from abstract mathematical problems like bin-packing and graph coloring to tangible engineering issues such as pipeline flow control, pattern recognition and classification, and structural optimization (Goldberg 1989, p. 128).

At first, these applications were mainly theoretical. However, as research continued to proliferate, genetic algorithms migrated into the commercial sector, their rise fueled by the exponential growth of computing power and the development of the Internet. Today, evolutionary computation is a thriving field, and genetic algorithms are "solving problems of everyday interest" (Haupt and Haupt 1998, p.147) in areas of study as diverse as stock market prediction and portfolio planning, aerospace engineering, microchip design, biochemistry and molecular biology, and scheduling at airports and assembly lines. The power of evolution has touched virtually every field one cares to name, shaping the world around us invisibly in countless ways, and new uses continue to be discovered as research proceeds. And at the heart of it all lies nothing more than Charles Darwin's simple, powerful insight: that the random chance of variation, coupled with the law of selection, is a problem-solving technique of immense power and nearly unlimited application.

What are the strengths of GAs?

What are the limitations of GAs?

Although genetic algorithms have proven to be an efficient and powerful problem-solving strategy, they are not a panacea. GAs do have certain limitations; however, it will be shown that all of these can be overcome and that none of them bears on the validity of biological evolution.

Some specific examples of GAs

As the power of evolution gains increasingly widespread recognition, genetic algorithms have been used to tackle a broad variety of problems in an extremely diverse array of fields, clearly showing their power and their potential. This section will discuss some of the more noteworthy uses to which they have been put.

Creationist arguments

As one might expect, the real-world demonstration of the power of evolution that GAs represent has proven surprising and disconcerting for creationists, who have always claimed that only intelligent design, not random variation and selection, could have produced the information content and complexity of living things. They have therefore argued that the success of genetic algorithms does not allow us to infer anything about biological evolution. The criticisms of two anti-evolutionists, representing two different viewpoints, will be addressed: young-earth creationist Dr. Don Batten of Answers in Genesis, who has written an article entitled "Genetic algorithms -- do they show that evolution works?", and old-earth creationist and intelligent-design advocate Dr. William Dembski, whose recent book No Free Lunch (Dembski 2002) discusses this topic.

Don Batten

William Dembski

Old-earth creationist Dr. William Dembski's recent book, No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence, is largely devoted to the topic of evolutionary algorithms and how they relate to biological evolution. In particular, Dembski's book is concerned with an elusive quality he calls "specified complexity," which he asserts is contained in abundance in living things, and which he further asserts evolutionary processes are incapable of generating, leaving "design" through unspecified mechanisms by an unidentified "intelligent designer" the only alternative. To bolster his case, Dembski appeals to a class of mathematical theorems known as the No Free Lunch theorems, which he claims prove that evolutionary algorithms, on the average, do no better than blind search.

Richard Wein has written an excellent and comprehensive rebuttal to Dembski, entitled Not a Free Lunch But a Box of Chocolates, and its points will not be reproduced here. I will instead focus on chapter 4 of Dembski's book, which deals in detail with genetic algorithms.

Dembski has one main argument against GAs, which is developed at length throughout this chapter. While he does not deny that they can produce impressive results - indeed, he says that there is something "oddly compelling and almost magical" (p.221) about the way GAs can find solutions that are unlike anything designed by human beings - he argues that their success is due to the specified complexity that is "smuggled into" them by their human designers and subsequently embodied in the solutions they produce. "In other words, all the specified complexity we get out of an evolutionary algorithm has first to be put into its construction and into the information that guides the algorithm. Evolutionary algorithms therefore do not generate or create specified complexity, but merely harness already existing specified complexity" (p.207).

The first problem evident in Dembski's argument is this. Although his chapter on evolutionary algorithms runs for approximately 50 pages, the first 30 of those pages discuss nothing but Dr. Richard Dawkins' "weasel" algorithm, which, as already discussed, is not a true genetic algorithm and is not representative of genetic algorithms. Dembski's other two examples - the crooked wire genetic antennas of Edward Altshuler and Derek Linden and the checkers-playing neural nets of Kumar Chellapilla and David Fogel - are only introduced within the last 10 pages of the chapter and are discussed for three pages, combined. This is a serious deficiency, considering that the "weasel" program is not representative of most work being done in the field of evolutionary computation; nevertheless, Dembski's arguments relating to it will be analyzed.

In regard to the weasel program, Dembski states that "Dawkins and fellow Darwinists use this example to illustrate the power of evolutionary algorithms" (p.182), and, again, "Darwinists... are quite taken with the METHINKS IT IS LIKE A WEASEL example and see it as illustrating the power of evolutionary algorithms to generate specified complexity" (p.183). This is a straw man of Dembski's creation (not least because Dawkins' book was written long before Dembski ever coined that term!). Here is what Dawkins really says about the purpose of his program:

"What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years." (Dawkins 1996, p.49, emphasis original)

In other words, the weasel program was intended to demonstrate the difference between two different kinds of selection: single-step selection, where a complex result is produced by pure chance in a single leap, and cumulative selection, where a complex result is built up bit by bit via a filtering process that preferentially preserves improvements. It was never intended to be a simulation of evolution as a whole.

Single-step selection is the absurdly improbable process frequently attacked in creationist literature by comparing it to a tornado blowing through a junkyard producing a 747 airliner, or an explosion in a print shop producing a dictionary. Cumulative selection is what evolution actually uses. Using single-step selection to achieve a functional result of any significant complexity, one would have to wait, on average, many times the current age of the universe. Using cumulative selection, however, that same result can be reached in a comparatively very short length of time. Demonstrating this difference was the point of Dawkins' weasel program, and that was the only point of that program. In a footnote to this chapter, Dembski writes, "It is remarkable how Dawkins' example gets recycled without any indication of the fundamental difficulties that attend it" (p.230), but it is only misconceptions in the minds of creationists such as Dembski and Batten, who attack the weasel program for not demonstrating something it was never intended to demonstrate, that give rise to these "difficulties".
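Readers can see the difference for themselves with a weasel-style program such as the Python sketch below. The mutation rate, number of copies per generation and alphabet are illustrative choices, not Dawkins' exact parameters. Cumulative selection keeps the best imperfect copy each round and reaches the target in tens of generations, where single-step selection, generating whole random phrases and hoping for an exact match, would flounder essentially forever.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = string.ascii_uppercase + " "

def score(phrase):
    # Fitness: how many characters already match the target phrase.
    return sum(a == b for a, b in zip(phrase, TARGET))

parent = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while parent != TARGET:
    # Cumulative selection: many imperfect copies, keep the best.
    # (Carrying the parent forward ensures fitness never decreases.)
    copies = [parent] + ["".join(c if random.random() > 0.05 else random.choice(CHARS)
                                 for c in parent) for _ in range(100)]
    parent = max(copies, key=score)
    generation += 1

print(generation, parent)  # typically well under 100 generations
```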

Unlike every example of evolutionary algorithms discussed in this essay, the weasel program does indeed have a single, prespecified outcome, and the quality of the solutions it generates is judged by explicitly comparing them to that prespecified outcome. Therefore, Dembski is quite correct when he says that the weasel program does not generate new information. However, he then makes a gigantic and completely unjustified leap when he extrapolates this conclusion to all evolutionary algorithms: "As the sole possibility that Dawkins' evolutionary algorithm can attain, the target sequence in fact has minimal complexity.... Evolutionary algorithms are therefore incapable of generating true complexity" (p.182). Even Dembski seems to recognize that the weasel program is atypical when he writes: "...most evolutionary algorithms in the literature are programmed to search a space of possible solutions to a problem until they find an answer - not, as Dawkins does here, by explicitly programming the answer into them in advance" (p.182). But then, having given a perfectly good reason why the weasel program is not representative of GAs as a whole, he inexplicably goes on to make precisely that fallacious generalization!

In reality, the weasel program is significantly different from most genetic algorithms, and therefore Dembski's argument from analogy does not hold up. True evolutionary algorithms, such as the examples discussed in this essay, do not simply find their way back to solutions already discovered by other methods - instead, they are presented with problems where the optimal solution is not known in advance, and are asked to discover that solution on their own. Indeed, if genetic algorithms could do nothing more than rediscover solutions already programmed into them, what would be the point of using them? It would be an exercise in redundancy to do so. However, the widespread scientific (and commercial) interest in GAs shows that there is far more substance to them than the rather trivial example Dembski tries to reduce this entire field to.

Having set up and then knocked down this straw man, Dembski moves on to his next line of argument: that the specified complexity exhibited by the outcomes of more representative evolutionary algorithms has, like the weasel program, been "smuggled in" by the designers of the algorithm. "But invariably we find that when specified complexity seems to be generated for free, it has in fact been front-loaded, smuggled in, or hidden from view" (p.204). Dembski suggests that the most common "hiding place" of specified complexity is in the GA's fitness function. "What the [evolutionary algorithm] has done is take advantage of the specified complexity inherent in the fitness function and used it in searching for and then locating the target..." (p.194). Dembski goes on to argue that, before an EA can search a given fitness landscape for a solution, some mechanism must first be employed to select that fitness landscape from what he calls a phase space of all the possible fitness landscapes, and if that mechanism is likewise an evolutionary one, some other mechanism must first be employed to select its fitness function from an even larger phase space, and so on. Dembski concludes that the only way to stop this infinite regress is through intelligence, which he holds to have some irreducible, mysterious ability to select a fitness function from a given phase space without recourse to higher-order phase spaces. "There is only one known generator of specified complexity, and that is intelligence" (p.207).

Dembski is correct when he writes that the fitness function "guid[es] an evolutionary algorithm into the target" (p.192). However, he is incorrect in his claim that selecting the right fitness function is a process that requires the generation of even more specified complexity than the EA itself produces. As Koza (1999, p. 39) writes, the fitness function tells an evolutionary algorithm "what needs to be done", not "how to do it". Unlike the unrepresentative weasel program example, the fitness function of an EA typically does not specify any particular form that the solution should take, and therefore it cannot be said to contribute "specified complexity" to the evolved solution in any meaningful sense.

An example will illustrate the point in greater detail. Dembski claims that in Chellapilla and Fogel's checkers experiment, their choice to hold the winning criterion constant from game to game "inserted an enormous amount of specified complexity" (p.223). It is certainly true that the final product of this process displayed a great deal of specified complexity (however one chooses to define that term). But is it true that the chosen fitness measure contained just as much specified complexity? Here is what Chellapilla and Fogel actually say:

"To appreciate the level of play that has been achieved, it may be useful to consider the following thought experiment. Suppose you are asked to play a game on an eight-by-eight board of squares with alternating colors. There are 12 pieces on each side arranged in a specific manner to begin play. You are told the rules of how the pieces move (i.e., diagonally, forced jumps, kings) and that the piece differential is available as a feature. You are not, however, told whether or not this differential is favorable or unfavorable (there is a version of checkers termed 'suicide checkers,' where the object is to 'lose' as fast as possible) or if it is even valuable information. Most importantly, you are not told the object of the game. You simply make moves and at some point an external observer declares the game over. They do not, however, provide feedback on whether or not you won, lost, or drew. The only data you receive comes after a minimum of five such games and is offered in the form of an overall point score. Thus, you cannot know with certainty which games contributed to the overall result or to what degree. Your challenge is to induce the appropriate moves in each game based only on this coarse level of feedback." (Chellapilla and Fogel 2001, p.427)

It exceeds the bounds of the absurd for Dembski to claim that this fitness measure inserted an "enormous" amount of specified complexity. If a human being who had never heard of checkers was given the same information, and we returned several months later to discover that he had become an internationally ranked checkers expert, should we conclude that specified complexity has been generated?

Dembski states that to overturn his argument, "one must show that finding the information that guides an evolutionary algorithm to a target is substantially easier than finding the target directly through a blind search" (p.204). I contend that this is precisely the case. Intuitively, it should not be surprising that the fitness function contains less information than the evolved solution. This is precisely the reason why GAs have found such widespread use: it is easier (requires less information) to write a fitness function that measures how good a solution is, than to design a good solution from scratch.

In more informal terms, consider Dembski's two examples, the crooked-wire genetic antenna and the evolved checkers-playing neural network named Anaconda. It requires a great deal of detailed information about the game of checkers to come up with a winning strategy (consider Chinook and its enormous library of endgames). However, it does not require equally detailed information to recognize such a strategy when one sees it: all we need observe is that that strategy consistently defeats its opponents. Similarly, a person who knew nothing about how to design an antenna that radiates evenly over a hemispherical region in a given frequency range could still test such an antenna and verify that it works as intended. In both cases, determining what constitutes high fitness is far easier (requires less information) than figuring out how to achieve high fitness.

Granted, even though choosing a fitness function for a given problem requires less information than actually solving the problem defined by that fitness function, it does take some information to specify the fitness function in the first place, and it is a legitimate question to ask where this initial information comes from. Dembski may still ask about the origin of human intelligence that enables us to decide to solve one problem rather than another, or about the origin of the natural laws of the cosmos that make it possible for life to exist and flourish and for evolution to occur. These are valid questions, and Dembski is entitled to wonder about them. However, by this point - seemingly unnoticed by Dembski himself - he has now moved away from his initial argument. He is no longer claiming that evolution cannot happen; instead, he is essentially asking why we live in a universe where evolution can happen. In other words, what Dembski does not seem to realize is that the logical conclusion of his argument is theistic evolution. It is fully compatible with a God who (as many Christians, including evolutionary biologist Kenneth Miller, believe) used evolution as his creative tool, and set up the universe in such a way as to make evolution not just likely, but certain.

I will conclude by clearing up some additional, minor misconceptions in chapter 4 of No Free Lunch. For starters, although Dembski, unlike Batten, is clearly aware of the field of multiobjective optimization, he erroneously states that "until some form of univalence is achieved, optimization cannot begin" (p.186). This essay's discussion of multiple-objective genetic algorithms shows the error of this. Perhaps other design techniques have this restriction, but one of the virtues of GAs is precisely that they can make trade-offs and optimize several mutually exclusive objectives simultaneously, and the human overseers can then pick whichever solution best achieves their goals from the final group of Pareto-optimal solutions. No method of combining multiple criteria into one is necessary.
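The notion of the final Pareto-optimal group from which human overseers pick is straightforward to state in code. In the hypothetical Python sketch below, each candidate's score is a tuple of objective values with higher being better; a candidate survives if no other candidate beats it on every objective.

```python
def dominates(a, b):
    # a dominates b if it is at least as good on every objective
    # and strictly better on at least one (higher is better here).
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    # Keep only the non-dominated candidates: the set of trade-offs
    # from which a human can pick the solution that best fits the goals.
    return [s for s in scores if not any(dominates(t, s) for t in scores if t is not s)]

print(pareto_front([(1, 5), (2, 4), (2, 3), (3, 1)]))  # drops (2, 3), beaten by (2, 4)
```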

Dembski also states that GAs "seem less adept at constructing integrated systems that require multiple parts to achieve novel functions" (p.237). The many examples detailed in this essay (particularly John Koza's use of genetic programming to engineer complex analog circuits) show this claim to be false as well.

Finally, Dembski mentions that INFORMS, the professional organization of the operations research community, pays very little attention to GAs, and this "is reason to be skeptical of the technique's general scope and power" (p.237). However, just because a particular scientific society is not making widespread use of GAs does not mean that such uses are not widespread elsewhere or in general, and this essay has endeavored to show that this is in fact the case. Evolutionary techniques have found a wide variety of uses in virtually every field of science one would care to name, as well as among many companies in the commercial sector.

Given the dearth of scientific discoveries and research stimulated by intelligent design, Dembski is in a poor position to complain about lack of practical application. Intelligent design is a vacuous hypothesis, telling us nothing more than "Some designer did something, somehow, at some time, to cause this result." By contrast, this essay has hopefully demonstrated that evolution is a problem-solving strategy rich with practical applications.

Conclusion

Even creationists find it impossible to deny that the combination of mutation and natural selection can produce adaptation. Nevertheless, they still attempt to justify their rejection of evolution by dividing the evolutionary process into two categories - "microevolution" and "macroevolution" - and arguing that only the second is controversial, and that any evolutionary change we observe is only an example of the first.

Now, microevolution and macroevolution are terms that have meaning to biologists; they are defined, respectively, as evolution below the species level and evolution at or above the species level. But the crucial difference between the way creationists use these terms and the way scientists use them is that scientists recognize that these two are fundamentally the same process with the same mechanisms, merely operating at different scales. Creationists, however, are forced to postulate some type of unbridgeable gap separating the two, in order for them to deny that the processes of change and adaptation we see operating in the present can be extrapolated to produce all the diversity observed in the living world.

However, genetic algorithms make this view untenable by demonstrating the fundamental seamlessness of the evolutionary process. Take, for example, a problem that consists of designing a circuit to discriminate between a 1-kilohertz and a 10-kilohertz tone, and respond respectively with steady outputs of 0 and 5 volts. Say we have a candidate solution that can accurately discriminate between the two tones, but whose outputs are not quite as steady as required; they show small waveforms rather than the requisite unchanging voltage. Presumably, according to the creationist view, to change this circuit from its present state to the perfect solution would be "microevolution", a small change within the ability of mutation and selection to produce. But surely, a creationist would argue, to arrive at this same final state from a completely random initial arrangement of components would be "macroevolution" and beyond the reach of an evolutionary process. However, genetic algorithms were able to accomplish both, evolving the system from a random arrangement to the near-perfect solution and finally to the perfect, optimal solution. At no step of the way did an insoluble difficulty or an unbridgeable gap turn up. At no point whatsoever was human intervention required to assemble an irreducibly complex core of components (despite the fact that the finished product does contain such a thing) or to "guide" the evolving system over a difficult peak. The circuit evolved, without any intelligent guidance, from a completely random and non-functional state to a tightly complex, efficient and optimal state. How can this not be a compelling experimental demonstration of the power of evolution?

It has been said that human cultural evolution has superseded the biological kind - that we as a species have reached a point where we are able to consciously control our society, our environment and even our genes to a sufficient degree to make the evolutionary process irrelevant. It has been said that the cultural whims of our rapidly changing society, rather than the comparatively glacial pace of genetic mutation and natural selection, are what determine fitness today. In a sense, this may well be true.

But in another sense, nothing could be further from the truth. Evolution is a problem-solving process whose power we are only beginning to understand and exploit; despite this, it is already at work all around us, shaping our technology and improving our lives, and in the future, these uses will only multiply. Without a detailed understanding of the evolutionary process, none of the countless advances we owe to genetic algorithms would have been possible. There is a lesson here to those who deny the power of evolution, as well as those who deny that knowledge of it has any practical benefit. As incredible as it may seem, evolution works. As the poet Lord Byron put it: "'Tis strange but true; for truth is always strange, stranger than fiction."

References and resources

"Adaptive Learning: Fly the Brainy Skies." Wired, vol.10, no.3 (March 2002). Available online at http://www.wired.com/wired/archive/10.03/everywhere.html?pg=2.

Altshuler, Edward and Derek Linden. "Design of a wire antenna using a genetic algorithm." Journal of Electronic Defense, vol.20, no.7, p.50-52 (July 1997).

Andre, David and Astro Teller. "Evolving team Darwin United." In RoboCup-98: Robot Soccer World Cup II, Minoru Asada and Hiroaki Kitano (eds). Lecture Notes in Computer Science, vol.1604, p.346-352. Springer-Verlag, 1999.

Andreou, Andreas, Efstratios Georgopoulos and Spiridon Likothanassis. "Exchange-rates forecasting: A hybrid algorithm based on genetically optimized adaptive neural networks." Computational Economics, vol.20, no.3, p.191-210 (December 2002).

Ashley, Steven. "Engineous explores the design space." Mechanical Engineering, February 1992, p.49-52.

Assion, A., T. Baumert, M. Bergt, T. Brixner, B. Kiefer, V. Seyfried, M. Strehle and G. Gerber. "Control of chemical reactions by feedback-optimized phase-shaped femtosecond laser pulses." Science, vol.282, p.919-922 (30 October 1998).

Au, Wai-Ho, Keith Chan, and Xin Yao. "A novel evolutionary data mining algorithm with applications to churn prediction." IEEE Transactions on Evolutionary Computation, vol.7, no.6, p.532-545 (December 2003).

Beasley, J.E., J. Sonander and P. Havelock. "Scheduling aircraft landings at London Heathrow using a population heuristic." Journal of the Operational Research Society, vol.52, no.5, p.483-493 (May 2001).

Begley, Sharon and Gregory Beals. "Software au naturel." Newsweek, May 8, 1995, p.70.

Benini, Ernesto and Andrea Toffolo. "Optimal design of horizontal-axis wind turbines using blade-element theory and evolutionary computation." Journal of Solar Energy Engineering, vol.124, no.4, p.357-363 (November 2002).

Burke, E.K. and J.P. Newall. "A multistage evolutionary algorithm for the timetable problem." IEEE Transactions on Evolutionary Computation, vol.3, no.1, p.63-74 (April 1999).

Charbonneau, Paul. "Genetic algorithms in astronomy and astrophysics." The Astrophysical Journal Supplement Series, vol.101, p.309-334 (December 1995).

Chellapilla, Kumar and David Fogel. "Evolving an expert checkers playing program without using human expertise." IEEE Transactions on Evolutionary Computation, vol.5, no.4, p.422-428 (August 2001). Available online at http://www.natural-selection.com/NSIPublicationsOnline.htm.

Chellapilla, Kumar and David Fogel. "Anaconda defeats Hoyle 6-0: a case study competing an evolved checkers program against commercially available software." In Proceedings of the 2000 Congress on Evolutionary Computation, p.857-863. IEEE Press, 2000. Available online at http://www.natural-selection.com/NSIPublicationsOnline.htm.

Chellapilla, Kumar and David Fogel. "Verifying Anaconda's expert rating by competing against Chinook: experiments in co-evolving a neural checkers player." Neurocomputing, vol.42, no.1-4, p.69-86 (January 2002).

Chryssolouris, George and Velusamy Subramaniam. "Dynamic scheduling of manufacturing job shops using genetic algorithms." Journal of Intelligent Manufacturing, vol.12, no.3, p.281-293 (June 2001).

Coale, Kristi. "Darwin in a box." Wired News, July 14, 1997. Available online at http://www.wired.com/news/technology/0,1282,5152,00.html.

Coello, Carlos. "An updated survey of GA-based multiobjective optimization techniques." ACM Computing Surveys, vol.32, no.2, p.109-143 (June 2000).

Davidson, Clive. "Creatures from primordial silicon." New Scientist, vol.156, no.2108, p.30-35 (November 15, 1997). Available online at http://www.newscientist.com/hottopics/ai/primordial.jsp.

Dawkins, Richard. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design. W.W. Norton, 1996.

Dembski, William. No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence. Rowman & Littlefield, 2002.

Fleming, Peter and R.C. Purshouse. "Evolutionary algorithms in control systems engineering: a survey." Control Engineering Practice, vol.10, p.1223-1241 (2002).

Fonseca, Carlos and Peter Fleming. "An overview of evolutionary algorithms in multiobjective optimization." Evolutionary Computation, vol.3, no.1, p.1-16 (1995).

Forrest, Stephanie. "Genetic algorithms: principles of natural selection applied to computation." Science, vol.261, p.872-878 (1993).

Gibbs, W. Wayt. "Programming with primordial ooze." Scientific American, October 1996, p.48-50.

Gillet, Valerie. "Reactant- and product-based approaches to the design of combinatorial libraries." Journal of Computer-Aided Molecular Design, vol.16, p.371-380 (2002).

Giro, R., M. Cyrillo and D.S. Galvão. "Designing conducting polymers using genetic algorithms." Chemical Physics Letters, vol.366, no.1-2, p.170-175 (November 25, 2002).

Glen, R.C. and A.W.R. Payne. "A genetic algorithm for the automated generation of molecules within constraints." Journal of Computer-Aided Molecular Design, vol.9, p.181-202 (1995).

Goldberg, David. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989.

Graham-Rowe, Duncan. "Radio emerges from the electronic soup." New Scientist, vol.175, no.2358, p.19 (August 31, 2002). Available online at http://www.newscientist.com/news/news.jsp?id=ns99992732.

Graham-Rowe, Duncan. "Electronic circuit 'evolves' from liquid crystals." New Scientist, vol.181, no.2440, p.21 (March 27, 2004).

Haas, O.C.L., K.J. Burnham and J.A. Mills. "On improving physical selectivity in the treatment of cancer: A systems modelling and optimisation approach." Control Engineering Practice, vol.5, no.12, p.1739-1745 (December 1997).

Hanne, Thomas. "Global multiobjective optimization using evolutionary algorithms." Journal of Heuristics, vol.6, no.3, p.347-360 (August 2000).

Haupt, Randy and Sue Ellen Haupt. Practical Genetic Algorithms. John Wiley & Sons, 1998.

He, L. and N. Mort. "Hybrid genetic algorithms for telecommunications network back-up routeing." BT Technology Journal, vol.18, no.4, p.42-50 (October 2000).

Holland, John. "Genetic algorithms." Scientific American, July 1992, p.66-72.

Hughes, Evan and Maurice Leyland. "Using multiple genetic algorithms to generate radar point-scatterer models." IEEE Transactions on Evolutionary Computation, vol.4, no.2, p.147-163 (July 2000).

Jensen, Mikkel. "Generating robust and flexible job shop schedules using genetic algorithms." IEEE Transactions on Evolutionary Computation, vol.7, no.3, p.275-288 (June 2003).

Kewley, Robert and Mark Embrechts. "Computational military tactical planning system." IEEE Transactions on Systems, Man and Cybernetics, Part C - Applications and Reviews, vol.32, no.2, p.161-171 (May 2002).

Kirkpatrick, S., C.D. Gelatt and M.P. Vecchi. "Optimization by simulated annealing." Science, vol.220, p.671-678 (1983).

Koza, John, Forest Bennett, David Andre and Martin Keane. Genetic Programming III: Darwinian Invention and Problem Solving. Morgan Kaufmann Publishers, 1999.

Koza, John, Martin Keane, Matthew Streeter, William Mydlowec, Jessen Yu and Guido Lanza. Genetic Programming IV: Routine Human-Competitive Machine Intelligence. Kluwer Academic Publishers, 2003.

Keane, A.J. and S.M. Brown. "The design of a satellite boom with enhanced vibration performance using genetic algorithm techniques." In Adaptive Computing in Engineering Design and Control '96 - Proceedings of the Second International Conference, I.C. Parmee (ed), p.107-113. University of Plymouth, 1996.

Lee, Yonggon and Stanislaw H. Zak. "Designing a genetic neural fuzzy antilock-brake-system controller." IEEE Transactions on Evolutionary Computation, vol.6, no.2, p.198-211 (April 2002).

Lemley, Brad. "Machines that think." Discover, January 2001, p.75-79.

Mahfoud, Sam and Ganesh Mani. "Financial forecasting using genetic algorithms." Applied Artificial Intelligence, vol.10, no.6, p.543-565 (1996).

Mitchell, Melanie. An Introduction to Genetic Algorithms. MIT Press, 1996.

Naik, Gautam. "Back to Darwin: In sunlight and cells, science seeks answers to high-tech puzzles." The Wall Street Journal, January 16, 1996, p.A1.

Obayashi, Shigeru, Daisuke Sasaki, Yukihiro Takeguchi, and Naoki Hirose. "Multiobjective evolutionary computation for supersonic wing-shape optimization." IEEE Transactions on Evolutionary Computation, vol.4, no.2, p.182-187 (July 2000).

Petzinger, Thomas. "At Deere they know a mad scientist may be a firm's biggest asset." The Wall Street Journal, July 14, 1995, p.B1.

Porto, Vincent, David Fogel and Lawrence Fogel. "Alternative neural network training methods." IEEE Expert, vol.10, no.3, p.16-22 (June 1995).

Rao, Srikumar. "Evolution at warp speed." Forbes, vol.161, no.1, p.82-83 (January 12, 1998).

Rizki, Mateen, Michael Zmuda and Louis Tamburino. "Evolving pattern recognition systems." IEEE Transactions on Evolutionary Computation, vol.6, no.6, p.594-609 (December 2002).

Robin, Franck, Andrea Orzati, Esteban Moreno, Otte Homan, and Werner Bachtold. "Simulation and evolutionary optimization of electron-beam lithography with genetic and simplex-downhill algorithms." IEEE Transactions on Evolutionary Computation, vol.7, no.1, p.69-82 (February 2003).

Sagan, Carl. Broca's Brain: Reflections on the Romance of Science. Ballantine, 1979.

Sambridge, Malcolm and Kerry Gallagher. "Earthquake hypocenter location using genetic algorithms." Bulletin of the Seismological Society of America, vol.83, no.5, p.1467-1491 (October 1993).

Sasaki, Daisuke, Masashi Morikawa, Shigeru Obayashi and Kazuhiro Nakahashi. "Aerodynamic shape optimization of supersonic wings by adaptive range multiobjective genetic algorithms." In Evolutionary Multi-Criterion Optimization: First International Conference, EMO 2001, Zurich, Switzerland, March 2001: Proceedings, K. Deb, L. Theile, C. Coello, D. Corne and E. Zitler (eds). Lecture Notes in Computer Science, vol.1993, p.639-652. Springer-Verlag, 2001.

Sato, S., K. Otori, A. Takizawa, H. Sakai, Y. Ando and H. Kawamura. "Applying genetic algorithms to the optimum design of a concert hall." Journal of Sound and Vibration, vol.258, no.3, p.517-526 (2002).

Schechter, Bruce. "Putting a Darwinian spin on the diesel engine." The New York Times, September 19, 2000, p. F3.

Srinivas, N. and Kalyanmoy Deb. "Multiobjective optimization using nondominated sorting in genetic algorithms." Evolutionary Computation, vol.2, no.3, p.221-248 (Fall 1994).

Soule, Terrence and Amy Ball. "A genetic algorithm with multiple reading frames." In GECCO-2001: Proceedings of the Genetic and Evolutionary Computation Conference, Lee Spector and Eric Goodman (eds). Morgan Kaufmann, 2001. Available online at http://www.cs.uidaho.edu/~tsoule/research/papers.html.

Tang, K.S., K.F. Man, S. Kwong and Q. He. "Genetic algorithms and their applications." IEEE Signal Processing Magazine, vol.13, no.6, p.22-37 (November 1996).

Weismann, Dirk, Ulrich Hammel, and Thomas Bäck. "Robust design of multilayer optical coatings by means of evolutionary algorithms." IEEE Transactions on Evolutionary Computation, vol.2, no.4, p.162-167 (November 1998).

Williams, Edwin, William Crossley and Thomas Lang. "Average and maximum revisit time trade studies for satellite constellations using a multiobjective genetic algorithm." Journal of the Astronautical Sciences, vol.49, no.3, p.385-400 (July-September 2001).

Zitzler, Eckart and Lothar Thiele. "Multiobjective evolutionary algorithms: a comparative case study and the Strength Pareto approach." IEEE Transactions on Evolutionary Computation, vol.3, no.4, p.257-271 (November 1999).

