APPLYING A GENETIC ALGORITHM TO STELLAR SPECTRUM FITTING PROBLEMS
Formal report on the writing and testing of a genetic algorithm for use in fitting stellar spectra
1. Introduction
1.1. Short Description of SFIT
2. Brief Introduction to Genetic Algorithms
2.1. Reproduction in Living Organisms
2.2. Application to Computing Problems
3. More About Genetic Algorithms
3.1. Selection Methods
3.2. Advantages of Genetic Algorithms
4. Example of a Genetic Algorithm: Finding a Rational Approximation to Pi
5. Applying Genetic Algorithms to Stellar Fitting
5.1. Encoding Parameters
5.2. Fitness Function
5.3. Other Changes
5.4. Input File
5.5. Testing the Algorithm
6. The Next Step: Parallelising the Genetic Algorithm
7. Conclusion
8. References
1. Introduction
We worked on a stellar spectrum fitting package known as SFIT, with a view to finding ways of improving it. We added a new minimisation subroutine, which aims to find the combination of parameters giving the lowest chi-squared value. We used a genetic algorithm, which applies evolutionary principles to computing problems. Finally we attempted to run this program on several processors in parallel, using PVM software. All code is written in Fortran 90.
1.1. Short Description of SFIT
SFIT is a package for fitting stellar spectra. It attempts to find the best possible match for parameters such as effective temperature (Teff), surface gravity (g) and chemical composition. It can also take into account rotational and translational motion. Using a grid of high-resolution spectra the program attempts to find a best fit for Teff, log g, He abundance, v sin i and other parameters.
When we began work on SFIT it had two subroutines which attempted to find this best-fit solution. AMOEBA is a downhill simplex method which begins with an initial guess of the parameters and gradually moves 'downhill' towards the minimum, in this case of chi-squared.
The other option, which had not been used for some time, was Levenberg-Marquardt. This is a non-linear least-squares method which uses derivatives to find stationary values on the solution space. We added a third: the genetic algorithm.
2. Brief Introduction to Genetic Algorithms
Genetic Algorithms borrow from the ideas of natural selection and genetics, but instead of living organisms strings of numbers "evolve". These strings encode information on a potential solution to the problem. Survival of the fittest is the key idea here as the most successful strings will be used to create the next generation.
The basics of genetic algorithms can be explained by referring to a simplified model of reproduction in living creatures.
2.1. Reproduction in Living Organisms
In the cells of all living organisms on Earth the information determining characteristics is contained within DNA. The 'alphabet' spelling out the information has four 'letters' (four chemical bases), often abbreviated to A, C, T and G. DNA is arranged into chromosomes, each of which contains thousands of genes, which contain hundreds of bases. Genes give us our own individual traits, and determine our 'fitness' for survival in the environment in which we live. Within a population those individuals with the genetic make-up best suited to their environment are the most likely to survive long enough to reproduce.
During sexual reproduction chromosomes will 'cross-over' (see figure 1). This gives rise to different combinations of genes and a greater degree of variation. This is important for natural selection as the more variation there is within the new generation the higher the chances of new individuals arising that are fitter than any in older generations.
Another source of variation within a reproducing population is mutation. This occurs when one or more of the bases changes randomly during the production of new DNA. This is normally detrimental to the fitness of the offspring, but occasionally it gives rise to stronger individuals. Mutation is a very important source of variation in organisms.
With each new generation the population will be better suited to its environment, as the organisms accumulate favourable characteristics, and any members of the population that are more susceptible to disease or predators, for example, will be more likely to die before passing on their genes.
Of course real life is more complicated than this, as environments change, but this model is sufficient to aid understanding of genetic algorithms.
2.2. Application to Computing Problems
These ideas can easily be applied to a wide variety of problems that computers may be required to solve. Instead of an initial population of organisms we have a randomly generated population of potential solutions to the problem, which samples over the whole of the solution space. The members of this population are known as "parents". Each parent will normally consist of one 'chromosome' which now encodes information on the potential solution. The 'alphabet' will now be a string of digits, which will encode a potential solution in some way. For a multi-parameter problem the chromosome may consist of a number of 'genes', each encoding one parameter.
A fitness function must be calculated for each member of the population. For example, if we are trying to maximise the value of a mathematical function within a given range then the fitness function will be the mathematical function itself.
To create the next generation we choose parents from the population in such a way that the probability of being selected is related to the fitness function (eg the probability is proportional to the fitness function). A crossover probability and a mutation probability are assigned at the start of the program. If crossover occurs, a point is chosen at random on the string and all the digits from that point to the end are swapped over, forming new offspring. Otherwise the parents become the offspring, ie they are cloned.
The offspring may then be mutated. If mutated, a digit will be randomly changed to another digit. Often binary alphabets are used, in which case mutation will change 1 to 0 and vice-versa. The crossover probability will typically be around 0.9, while the mutation probability is of the order of 0.01. Crossover and mutation are crucial in creating new variations, allowing the whole of the search space to be explored. The offspring become the parent generation and the process repeats (it goes through another generation). The "evolution" continues until one region of the space is populated while others are more or less empty. In this way the algorithm homes in on a solution. The number of generations the program goes through can either be fixed by the user or the GA may stop when a given level of accuracy has been obtained.
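As a concrete illustration, the crossover and mutation steps just described might be sketched as follows (Python is used here for illustration only; the programs described in this report are written in Fortran 90):

```python
import random

def crossover(p1, p2, p_cross=0.9):
    """Single-point crossover: with probability p_cross, pick a random
    point on the string and swap all digits from there to the end;
    otherwise the parents are cloned unchanged."""
    if random.random() < p_cross:
        point = random.randrange(1, len(p1))   # cut point, never at the ends
        return p1[:point] + p2[point:], p2[:point] + p1[point:]
    return p1[:], p2[:]

def mutate(child, p_mut=0.01, alphabet="01"):
    """Each digit is randomly changed to a different symbol with
    probability p_mut (for a binary alphabet, 1 becomes 0 and vice-versa)."""
    return "".join(
        random.choice([a for a in alphabet if a != d])
        if random.random() < p_mut else d
        for d in child)
```

Repeatedly selecting two parents, applying crossover and then mutate builds the offspring generation.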
3. More About Genetic Algorithms
3.1 Selection Methods
There are several ways of selecting which individuals are going to produce the next generation. Some of the simplest and most widely used are described below.
3.1.1. Fitness Proportionate (FP): the probability of each individual being selected is proportional to its fitness. This is often known as 'roulette wheel selection', as it can be imagined as each parent being assigned a section of a roulette wheel whose size is in proportion to its fitness. The disadvantage is that during early generations a few highly fit individuals can multiply very quickly, causing the GA to become trapped in a local minimum.
3.1.2. Tournament: a number of individuals are chosen at random and the one with the highest suitability will pair with another parent chosen in the same way, to either crossover or be cloned. The individuals are returned to the original population, and may be chosen again. This is computationally more efficient than FP.
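One tournament draw can be sketched like this (a Python illustration; `fitness` stands for whatever suitability function is in use):

```python
import random

def tournament_select(population, fitness, contestants=3):
    """Choose `contestants` individuals at random and return the fittest;
    the entrants go back into the pool and may be chosen again."""
    entrants = random.sample(population, contestants)
    return max(entrants, key=fitness)
```

Two such draws give the pair of parents that will either cross over or be cloned.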
3.1.3. Rank: similar to FP, in that each individual has a probability of being selected which is determined by its fitness. In this case the probability is based solely on the rank of the individual within the population. This means that the probabilities of the highest and second highest being selected will always be the same, regardless of their relative fitnesses. This method doesn't have the problems of rapid-convergence that may affect FP, but it loses some potentially important information about individuals.
A new array is set up which gives each parent a rank, determined by the suitability. The higher the suitability the higher the rank (so in a population of 50 the fittest individual has rank 50). The probability of parent i being chosen is given by a linear equation (equation 1):

P(i) = [Min + (Max - Min)(rank(i) - 1)/(N - 1)] / N     (1)

N is the number of parents in the population. Min and Max are values defined in the program. They must be chosen such that the sum of the probabilities is one. This means that Min = 2 - Max and 1 <= Max <= 2. Baker found the best value for Max to be 1.1.
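The ranking probabilities can be sketched as follows (illustrative Python; with Min = 2 - Max the probabilities sum to one, and the fittest individual is chosen with probability Max/N):

```python
def rank_probabilities(n, p_max=1.1):
    """Linear ranking: the individual of rank r (1 = least fit, n = fittest)
    is selected with probability
        P(r) = [Min + (Max - Min)(r - 1)/(n - 1)] / n,
    with Min = 2 - Max so that the probabilities sum to one."""
    p_min = 2.0 - p_max
    return [(p_min + (p_max - p_min) * (r - 1) / (n - 1)) / n
            for r in range(1, n + 1)]
```

Note that the ratio of best to worst probability is fixed at Max/Min regardless of the actual fitness values, which is exactly the information rank selection discards.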
3.1.4. Elitism: this can be used in conjunction with any of the above methods. It chooses the best few individuals, and these always go through to the next generation unchanged. The advantage is clear, since the best individuals are never lost, but as we have seen, favouring very good individuals early on may lead to convergence on a local minimum.
3.2. Advantages of Genetic Algorithms
Perhaps it isn't obvious why such an algorithm should lead to accurate solutions for optimisation problems. Crossover is a crucial aspect of any genetic algorithm, but it may seem that it will dramatically change parents with a high fitness function so that they will no longer be fit. However this is not the case. As in biology, crossover can lead to new combinations of genes which are more fit than any in the previous generations. Other offspring will be created which are less fit but these will have a low probability of being selected to go on to the next generation. Creating new variants is the key to genetic algorithms, as there is a good chance of finding better solutions. This is why mutation is also a necessary part of the program. It will create offspring which would not have arisen otherwise, and may lead to a better solution.
Other optimisation algorithms have the disadvantage that some kind of initial guess is required and this may bias the final result. GAs on the other hand only require a search range, which need only be constrained by prior knowledge of the physical properties of the system. Effectively they search the whole of the solution space, without calculating the fitness function at every point. This can help avoid a danger in any optimisation problem: being trapped in local maxima or minima. There are two main reasons for this: i) the initial population, being randomly generated, will sample the whole of the solution space, and not just a small area; ii) variation-inducing tactics, ie crossover and mutation, prevent the algorithm being trapped in one part of the solution space.
The starting point for our GAs was a Fortran genetic algorithm available on Phil Brierley's webpage (see references).
4. Example of a Genetic Algorithm: Finding a Rational Approximation to Pi
A straightforward example of a genetic algorithm is one which attempts to find the best rational approximation to pi, ie a number of the form x1/x2. This is a two-parameter problem, as we are trying to find values of x1 and x2 which give the closest approximation to pi. A population of parents is randomly generated by producing strings of numbers, the length of which is defined in the program. Each string consists of digits between zero and nine; however, it is very straightforward to convert this to binary, or any other alphabet. The relative merits of different alphabet lengths are considered later. The first half of each chromosome ('string') encodes the value of x1 and the second half encodes x2. The fitness function ('suitability') is very simple:
SUITABILITY = -|x1/x2 - 3.14159265|
This will always be negative or zero. The best possible solution would give a suitability equal to zero. The program will attempt to maximise suitability, ie converge towards zero, so the solution will be a rational number which is as close to our approximation of pi as possible.
The algorithm calculates the suitability of each parent in a subroutine known as CALCSUITABILITY. In this subroutine the values of x1 and x2 encoded by each parent are calculated, allowing the suitability to be found. In the BREED subroutine 'dads' are chosen in a form of tournament selection. The next generation is produced and the program begins again.
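The actual program is written in Fortran 90; a minimal Python sketch of the same scheme (digit strings, tournament selection with three contestants, single-point crossover and per-digit mutation; the names echo but do not reproduce CALCSUITABILITY and BREED) might read:

```python
import random

STRING_LEN = 6            # three digits encode x1, three encode x2
POP_SIZE = 100
P_CROSS, P_MUT = 0.9, 0.01
DIGITS = "0123456789"
PI_APPROX = 3.14159265

def decode(chrom):
    """First half of the digit string is x1, second half x2."""
    half = STRING_LEN // 2
    x1 = int("".join(chrom[:half]))
    x2 = int("".join(chrom[half:]))
    return x1, max(x2, 1)              # guard against x2 = 0

def suitability(chrom):
    x1, x2 = decode(chrom)
    return -abs(x1 / x2 - PI_APPROX)

def breed(pop):
    """Tournament selection (three contestants) with single-point
    crossover and per-digit mutation."""
    new = []
    while len(new) < len(pop):
        mum = max(random.sample(pop, 3), key=suitability)
        dad = max(random.sample(pop, 3), key=suitability)
        if random.random() < P_CROSS:
            cut = random.randrange(1, STRING_LEN)
            kids = [mum[:cut] + dad[cut:], dad[:cut] + mum[cut:]]
        else:
            kids = [mum[:], dad[:]]
        for kid in kids:
            new.append([random.choice(DIGITS) if random.random() < P_MUT else d
                        for d in kid])
    return new[:len(pop)]

random.seed(42)
pop = [[random.choice(DIGITS) for _ in range(STRING_LEN)] for _ in range(POP_SIZE)]
best = max(pop, key=suitability)
history = [suitability(best)]          # bookkeeping: best suitability so far
for generation in range(200):
    if suitability(best) > -1e-5:      # stop once within tolerance
        break
    pop = breed(pop)
    best = max(pop + [best], key=suitability)
    history.append(suitability(best))
```

The loop tracks the best individual seen so far, so `history` never decreases; the run stops once the tolerance of -10^-5 is met or 200 generations have passed.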
A typical convergence is shown in table 1. Since we are using a string-length of six and digits between zero and nine, the highest value that either x1 or x2 can take is 999. The program is designed to stop once the suitability is greater than a given tolerance (-10^-5).
GEN  MAX_SUITABILITY  x_1  x_2  x_1/x_2
1 -7.912798E-02 715.0000 222.0000 3.220721
2 -1.953101E-03 697.0000 222.0000 3.139640
3 -1.953101E-03 697.0000 222.0000 3.139640
4 -1.953101E-03 697.0000 222.0000 3.139640
5 -2.773456E-03 893.0000 284.0000 3.144366
6 -1.953101E-03 697.0000 222.0000 3.139640
7 -1.953101E-03 697.0000 222.0000 3.139640
8 -1.953101E-03 697.0000 222.0000 3.139640
9 -1.793414E-07 710.0000 226.0000 3.141593
710/226 can be reduced to 355/113, the best rational approximation to pi available in the allowed range. This was achieved after only nine generations, with an initial population of 100. With smaller populations it may still be possible to reach an acceptable solution if the tolerance is relaxed.
5. Applying Genetic Algorithms to Stellar Fitting
The next stage was to apply this algorithm to the stellar fitting problem. To do this we adapted the structure of the simple GA used previously into a subroutine of the main SFIT program, called SFIT_GA.
5.1. Encoding Parameters
This is a multi-parameter problem and the string-length must be variable, as it depends on the number of parameters to be found. It was also necessary to find a way to convert the string of digits into the parameter required. The method should not depend on which parameter is being varied, in order that it is equally easy to change any parameter. This was achieved by having a variable in the program called ga__stringsize, which is equal to the total length of each chromosome. A substring size is chosen (ie the number of digits assigned to each parameter), and this is multiplied by the number of parameters to be found. Each substring is made to represent a number between zero and one. In an input file a range for each parameter is defined and the substring is multiplied by the size of this range. For example
Alphabet = binary
Substring length = 10
Substring representing Teff = 0110001011 = 395
2^10 - 1 = 1023
395/1023 = 0.386119257
hence Teff = Teff(low) + (Teff(high) - Teff(low)) * 0.386119257 = 37.72
This is done for all the parameters in the subroutine CALCPARAM.
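In outline, the decoding performed for each parameter is the following (a Python sketch of the idea, not the CALCPARAM source):

```python
def decode_param(substring, low, high, alphabet_size):
    """Read a substring of digits (base `alphabet_size`) as an integer,
    scale it to a fraction between zero and one, then stretch it over
    the [low, high] range given in the input file."""
    value = 0
    for digit in substring:
        value = value * alphabet_size + digit
    fraction = value / (alphabet_size ** len(substring) - 1)
    return low + (high - low) * fraction
```

For the binary example above, the ten bits 0110001011 give the integer 395, the fraction 395/1023, and hence a value 0.386119257 of the way across the Teff range.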
5.2. Fitness Function
An appropriate fitness function was also needed. The ideal function is chi-squared, as this is a measure of how observed results compare with a predicted function. We require the minimum value of the reduced chi-squared, which is chi-squared divided by the number of degrees of freedom (the number of observations reduced by the number of fitted parameters). Since we need to find a reduced chi-squared as close to zero as possible (rather than one, as in some applications), the fitness function is the reduced chi-squared and the program is written so as to minimise it. The minimum value should correspond to the correct parameters for the star. The GA calls an external subroutine, NR_CHISQ, to calculate chi-squared for each parent.
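NR_CHISQ itself belongs to SFIT and is not reproduced here; in outline the quantity being minimised is the following (a sketch, with `sigma` standing for the observational errors):

```python
def reduced_chi_squared(observed, model, sigma, n_params):
    """Chi-squared of the model spectrum against the observed one,
    divided by the number of degrees of freedom (observations minus
    fitted parameters)."""
    chi2 = sum(((o - m) / s) ** 2 for o, m, s in zip(observed, model, sigma))
    return chi2 / (len(observed) - n_params)
```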
5.3 Other Changes
There are several values required by the GA which may be chosen by the user, for example the population size, the crossover and mutation probabilities, the substring size and the alphabet size. The SFIT input file was modified so that these values may be entered easily (more on this later). The BREED subroutine was extended so that there are three selection methods: tournament, rank and fitness proportionate. The user can select which method should be used. This is entered into the input file. We also added the option of elitism.
We made an addition to the program which restarts the algorithm if it appears to be converging towards a local minimum. It does this by checking the difference between the best and worst suitability in each generation; if this is lower than a given value while the best reduced chi-squared is still greater than the tolerance, the program makes another attempt at finding the minimum, beginning again with a new population. A maximum number of attempts (3) is built into the program. After that many attempts the program reports the best chi-squared found so far and prints the corresponding parameters.
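The restart test amounts to the following check after each generation (a sketch; `spread_limit` is an illustrative name for the 'given value' above, and the suitabilities here are reduced chi-squared values, so smaller is better):

```python
def should_restart(chi_squareds, tolerance, spread_limit=1e-6):
    """True if the population has collapsed (best and worst reduced
    chi-squared nearly equal) while the best value is still above the
    tolerance: the run has converged on a poor local minimum."""
    best, worst = min(chi_squareds), max(chi_squareds)
    return (worst - best) < spread_limit and best > tolerance
```

The program gives up after three such restarts and reports the best parameters found.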
5.4 Input File
When using the genetic method, several parameters are used to control the performance of the genetic algorithm search procedure. These should be entered into a suitable input file, similar to figure 2.
A short explanation of the input parameters is given:
generations: the maximum number of generations the program will go through, before stopping, if the solution is not within the given tolerance.
population: the number of parents randomly generated in the first generation, and in each subsequent generation.
prob_mut: the probability of mutation in the offspring, explained earlier.
prob_cross: probability of crossover between two parents, explained earlier.
tolerance: highest chi-squared value which will be tolerated.
selection_method: there are three selection methods available: tournament, rank, and fitness proportionate, which were discussed earlier. The method used can be chosen by entering either: rank, tournament or fitness into the input file.
contestants: specific to tournament selection. This number of parents will be selected at random and from these the two with the lowest chi-squared values will be chosen as a pair of parents.
substring_length: this defines the number of characters coding each parameter. This is multiplied by the number of parameters being varied to give the total string length for each parent.
alphabet_size: the number of different characters encoding the parent. For instance, if this variable is given the value 2 the parent will be encoded by a string of ones and twos.
elitism: this is an option which, if chosen, always keeps the best few members of each generation. If this is required 'yes' should be typed into the input file, otherwise 'no' should be entered.
kept: here the number of individuals to be carried over to the next generation (if elitism is selected) is chosen.
In place of Pars and Delta_P, lower and upper limits for each of the stellar parameters should be defined. If a parameter is not to be varied by the program, its fixed value should be entered in the first column and a zero in the second column. Otherwise the lower limit goes in the first column and the upper limit in the second.
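Figure 2 is not reproduced here; an input block using the keywords above might look like the following (the values and layout are illustrative only, not the actual figure):

```
generations        100
population         60
prob_mut           0.01
prob_cross         0.85
tolerance          1.0
selection_method   tournament
contestants        3
substring_length   10
alphabet_size      4
elitism            no
kept               0
Teff      20.0    60.0     ! varied: lower and upper limits
log_g      4.5     0.0     ! fixed at 4.5: zero in the second column
```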
5.5 Testing the Algorithm
The final output is the set of parameters encoded by the best member of the final generation.
In order to test the algorithm test spectra were created using model spectra from the grid. Using these we could do blind tests on spectra whose parameters are known.
An example of the output given by such a spectrum is shown in table 2. The method of selection is tournament. We used a model spectrum for a star with Teff = 40 kK, log g = 4.5, He abundance = 0.997 and delta_v = 5 m s^-1. Teff and log g were allowed to vary while the other parameters remained constant. In all tests population size = 60, alphabet size = 4, string length = 10, crossover probability = 0.85, mutation probability = 0.01 and number of contestants (for tournament selection) = 3.
Gen Best red chi-sq Ave red chi-sq T1/1000 K log g1
1 1.207662 107632520.000000 40.9142113 4.5436082
2 1.207732 85.626022 40.9142494 4.5436082
3 2.488284 23.908514 39.5697594 4.6649842
4 0.568621 13.585575 39.9314308 4.5399842
5 0.568621 6.570744 39.9314308 4.5399842
6 1.959439 6.086067 39.5697594 4.6337342
7 0.925172 4.825766 39.5698738 4.5399842
8 0.924842 2.322456 39.5701027 4.5399842
9 0.748389 1.624009 39.5697594 4.5087342
10 0.747986 1.077167 39.5701027 4.5087342
11 0.748254 1.669584 39.5698738 4.5087342
12 0.748254 1.672681 39.5698738 4.5087342
The table shows the lowest reduced chi-squared for each generation, the parameters corresponding to it, and the average chi-squared over the population.
If the third column of table 2 is studied it can be seen that the average chi-squared in the population falls dramatically during the twelve generations, from over 100 million to under 1.7. This shows the power of the method in removing unsuitable members of the population, so that the search proceeds within a highly fit population.
The program was run again using 'rank' selection. The results are shown in table 3.
Gen Best red chi-sq Ave red chi-sq T1/1000 K log g1
1 1.207662 107632520.000000 40.9142113 4.5436082
2 3.213858 400.530701 42.1352615 4.7910833
3 0.823334 304.934784 39.3407631 4.4454637
4 1.104495 625.592163 39.2186546 4.4742379
5 3.130505 929.271607 42.0595017 4.7910833
6 3.213681 635.353088 42.1351090 4.7910833
7 3.184707 882.940003 42.1094742 4.7910833
8 3.185941 1237.373291 42.1105805 4.7910833
9 0.902313 719.689331 39.3433952 4.4740281
10 0.902480 1736.840576 39.3432808 4.4740281
11 1.010192 1482.041626 39.3433952 4.3334026
12 0.514197 3559.208497 40.0464821 4.4785829
The program appears to have been successful, as the final best reduced chi-squared is the best from any generation. The parameter values are close to the expected values. Given the inherent 'graininess' in the search space arising from the relatively low string length and alphabet size it may be difficult for the program to find the exact parameters.
In table 4 showing the output of fitness proportionate selection, it can be seen that the average chi-squared decreases by several orders of magnitude, although the best chi-squared actually increases. Perhaps if the program had been allowed to run for a few more generations it would give a better solution.
Gen Best red chi-sq Ave red chi-sq T1/1000 K log g1
1 1.207662 107632520.000000 40.9142113 4.5436082
2 1.342192 747663.187500 39.2175102 4.2751179
3 1.080467 627.612610 39.3407631 4.3204632
4 4.629039 188.925995 36.6841660 4.0129910
5 6.027118 503.876648 36.3069305 4.0129681
6 4.844400 196.259369 36.5722046 3.8540401
7 2.446665 638.533020 39.3407631 4.6282372
8 2.162004 332.287323 38.4836006 4.0989971
9 2.161388 53.880573 38.4824562 4.0989971
10 2.163022 102.136383 38.4854698 4.0989971
11 2.126919 589.114380 38.1821594 4.0989971
12 2.642960 63.104702 37.5571594 4.0141964
The GA was allowed to go through another eight generations, and the best reduced chi-squared values did improve:
Gen Best red chi-sq Ave red chi-sq T1/1000 K log g1
13 2.353299 164.482758 38.1821594 4.2630596
14 2.112086 289.003265 38.1821594 4.1065311
15 2.114271 28.593861 38.1717835 4.1077557
16 2.198078 23.207083 38.1821594 4.2239971
17 0.628230 16.831244 39.6923256 4.4195809
18 0.758736 12.203951 39.3833351 4.4195809
19 1.395559 9.995948 39.0702248 4.4739976
20 1.395559 6.687889 39.0702248 4.4739976
Using the final best reduced chi-squared as a gauge, rank selection appears to be the most successful of the three GA selection methods, although it is difficult to single one method out, as all have good and bad runs. This result is perhaps not surprising. Tournament has quite a large element of luck in it: a few parents are chosen at a time and the two with the lowest chi-squared values give rise to a member of the next generation. FP, on the other hand, strongly favours the fittest members of the generation, so it carries a danger of losing members of the population which are less fit but still carry useful information. Rank selection is implemented much like FP, but the probability of the best parent being chosen never varies.
Not shown here is the output given when all the members of the population are the same. It is possible to see that this has happened by looking at the best and average chi-squared values; if they are very similar it is likely that the algorithm has converged. The tolerance can be set to an appropriate value in the input file, and if the reduced chi-squared is not below this value when convergence occurs the algorithm restarts. It will do this a maximum of three times.
It is useful to look at the behaviour of the GA as some of the parameters are changed. It is interesting that as the population is increased the time taken to run the program increases as expected, but there does not appear to be a corresponding improvement in the results achieved, ie there is no reduction in the final chi-squared values. There does not seem to be anything to be gained from having high population sizes. The default value in the program (50) is a good value to use.
Increasing the substring size and the alphabet size allows more accurate results to be obtained. The increment between parameter values which can be obtained is:
E(I)=(DELTA_P - PARS(I))/((GA__ALPHA^GA__SUBSTRING) - 1)
For Teff, if the alphabet size is 4 and the substring length 10, with the range for Teff from 20 kK to 60 kK, E is approximately 0.04 K. To avoid making the program too slow it is probably best not to make both of these too high. If higher accuracy is required then it is preferable to increase the string length rather than the alphabet size. Longer strings give more opportunities for crossover, allowing more variations to be explored.
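The graininess formula above can be checked with a short sketch (Python for illustration; with the settings used in the tests, alphabet size 4 and substring length 10, the Teff step over a 20-60 kK range comes out near 0.04 K):

```python
def parameter_increment(low, high, alphabet_size, substring_length):
    """Smallest step between encodable parameter values: the search
    range divided by the number of distinct codes minus one."""
    return (high - low) / (alphabet_size ** substring_length - 1)
```

A binary alphabet with the same substring length gives a far coarser grid, which is why longer strings or larger alphabets improve the attainable accuracy.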
6. The Next Step: Parallelisation of the GA
Genetic algorithms require a large number of calculations, and can take a long time to run. This time could be greatly reduced by using several processors which each carry out a fraction of the calculations. A software package which allows this to be done easily is PVM (parallel virtual machine). This is available free on the Internet (see references). It installs PVM daemons on all host machines in the PVM. A typical program will consist of a master and a number of slaves. The master sends messages to the slaves. The slaves will carry out operations on the data contained within the messages and return the results to the master. The master waits for all the results from the slaves before continuing with the program. All processes which enroll in PVM are assigned a task id (tid). This is chosen by the PVM daemon and not the user. This allows processes to be identified and results to be coordinated when they are sent to the master.
A Beowulf cluster was set up in the Observatory for use as such a parallel virtual machine. It consists of five Linux Alphas plus a sixth Alpha, which will run the master programs.
In our GA the limiting step is the call to NR_CHISQ: the calculation of chi-squared involves thousands of individual calculations. It should be straightforward to parallelise this call. The master program generates the parents and calculates the parameters. It then divides the population up among the slaves and sends each slave its share of the parameters in an array. Each slave calls NR_CHISQ and returns the results to the master. When the master has received all of the results the GA continues as before. Since there are six machines on our PVM we hope that the time taken to run the program will be reduced by a factor of up to six.
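Ignoring the PVM message-passing calls themselves, the bookkeeping on the master side amounts to splitting the population into one chunk per slave (a Python sketch of the idea; the real master and slave programs are Fortran 90):

```python
def partition(population, n_slaves):
    """Split the parent population into n_slaves roughly equal chunks;
    each slave evaluates chi-squared for its own chunk and returns the
    results to the master."""
    size, extra = divmod(len(population), n_slaves)
    chunks, start = [], 0
    for i in range(n_slaves):
        stop = start + size + (1 if i < extra else 0)
        chunks.append(population[start:stop])
        start = stop
    return chunks
```

With six machines the ideal speed-up is a factor of six, less the overhead of message passing.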
Unfortunately we were unable to make PVM operational in the time we spent in Armagh, but we have made significant progress in writing the necessary master and slave programs, which should help the people who continue the work after us.
7. Conclusion
Genetic algorithms are very useful tools in search and optimisation problems. It has been seen that when a GA is used for stellar fitting it will often successfully find the parameters. It produces solutions which are as good as those of the methods used before, but unfortunately it takes much longer. It is a pity that we were unable to make PVM operational: the extra computing power would have made the GA much faster, and would have allowed more accurate solutions. We hope that those who continue this work after us will be more successful in parallelising the program, so taking complete advantage of the GA.
It is very interesting that genetic algorithms work so well. It is very different to more traditional methods, but it should not be surprising that this approach works. After all, evolution produces organisms that are better equipped for survival by accumulating favourable characteristics and weeding out those which handicap the organism. Similar processes are believed to occur in human culture. The analogues of genes here are known as memes (a term coined by Richard Dawkins), which are important facets of human society. Classic memes are religions, fashion trends and brand names. Successful memes will survive, exchange ideas with other successful memes, and essentially "evolve". Less successful ones die out. Also, in farms, greenhouses and gardens all over the world we have plants and animals which are dramatically different from their wild counterparts. Humans controlling the breeding of these organisms will select in favour of traits which will increase profit, but may have caused a disadvantage for the organisms if they lived in the wild. Dogs are an excellent example of how, in a relatively short time in evolution terms, artificial selection can give rise to animals which are very different from their wild ancestors, but ideally suited for their purpose.
The PC of a scientist is perhaps the ideal place to apply this artificial selection, for GAs are more easily compared with artificial selection than with natural selection. Natural selection has no goal in mind; it simply responds to the environment, producing organisms which can live, and more importantly reproduce, successfully there. It does not have an ultimate goal of producing an ideal species (not even Homo sapiens). The end result of natural selection is a greater number of different species than there were before, as evolution produces many possible 'solutions' to the problem of survival and procreation. Artificial selection does have a goal, for example a cow which produces more milk. In a problem to which a GA is applied there will normally be one solution, and it is our aim to find it. We decide what makes a solution suitable and aim towards the one that is best on those terms. The basic ideas of evolution are still used, however, and GAs put them to work in the best way possible.
The time spent here has been very useful for us, what we have learned in these weeks is likely to be applicable in our future careers. Genetic methods are possibly still tools that most scientists are unaware of, so it is exciting to have learned how and when to apply them.
We hope that the work we have done will be applied in other areas of research going on in the Observatory, and elsewhere. It should be possible to make minor changes which will allow our program to calculate parameters in other astronomical problems.
We are very happy to have been able to produce something which will be used in real scientific research, and we hope to be able to continue doing so in our future careers.
8. References
An Introduction to Genetic Algorithms: Melanie Mitchell, MIT Press.
Genetic Algorithms in Search, Optimization and Machine Learning: David E. Goldberg, Addison-Wesley.
Rochester University, GA example: http://www.cs.rochester.edu/users/faculty/leblanc/csc173/genetic-algs/example.html
Phil Brierley, Fortran 90 GA code:
University of Cambridge, Overview of Genetic Algorithms:
University of Birmingham; Optimisation with Genetic Algorithms:
Parallel Virtual Machine (PVM) Version 3:
Joint Institute for Computational Science, Beginners Guide to PVM
Computational Astroseismology, Travis Scott Metcalfe, B.S., M.A., University of Texas at Austin.
Figure 1: crossover and mutation
Table 2: tournament selection
Table 3: rank selection
Table 4: fitness proportionate selection
Figure 2: example input file