We believe that Machine Learning using Evolutionary Computing to obtain a Model Algorithm is the next big step in the AI race.

If the above statement sounds like too much marketing hype, just set that feeling aside for now.
We have a log of an (unsupervised) machine learning session. At the beginning there is training data: numbers between 3 and 122. Primes are labeled with 0, nonprimes with 1. The third column is all 1’s, which simply means “use this row as part of the valid training data”. This table of three columns is the entire world our Spector-Profunder (SP) gets to see. Otherwise SP is a “blank slate”, except for some basic mathematical operations.

Now the SP begins to observe the world around it: a small 120-by-3 matrix of numbers. In the first column there are integers from 3 to 122, in the second column there is either 0 or 1 (prime or nonprime), and in the last column there is always a 1 in this case.
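The table described above can be sketched in a few lines of Python. The primality test here is ordinary trial division; the exact in-memory encoding used in the original session is an assumption:

```python
def is_prime(n):
    """Trial-division primality test, enough for this small range."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# 120 rows of (number, label, validity): primes labeled 0, nonprimes 1,
# and a constant 1 marking every row as valid training data.
table = [(n, 0 if is_prime(n) else 1, 1) for n in range(3, 123)]
# e.g. table[0] == (3, 0, 1) and table[1] == (4, 1, 1)
```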

SP is like a person who woke up from a deep coma with total amnesia. It has no knowledge about itself, of course; only the built-in mathematical skills kick in. Its thinking language is a set of Turing-complete computer commands. This language is quite understandable to humans, insofar as any programming language is.

In the log below, there are chronologically ordered algorithms with which the SP models the world (the number table) it sees. The first one is probably completely random and does not explain a thing. After some random mutations, a slightly better generation comes along. Random mutation and nonrandom selection, the Darwinian way. The log file is too long (terabytes) to be readable, so only some of the more interesting “fossils” are shown.
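The mutate-then-select loop described above can be sketched as follows. The genome representation and the `mutate` and `error` functions are placeholders for illustration, not the SP’s actual internals:

```python
import random

def evolve(genome, mutate, error, generations):
    """Minimal (1+1) Darwinian loop: random mutation, nonrandom selection.
    Keeps a child only if it is no worse than the current champion."""
    best, best_err = genome, error(genome)
    for _ in range(generations):
        child = mutate(best)        # random variation
        err = error(child)
        if err <= best_err:         # nonrandom selection
            best, best_err = child, err
    return best, best_err
```

A toy run, evolving an integer toward a target, shows the shape of the process; the SP mutates programs instead of numbers, but the loop is the same.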

After 20 thousand generations (which take a fraction of a second), we only get a consistent algorithm which mostly misses the desired output (0 or 1 for prime or nonprime). Its error, measured in “percent squared”, is about 10^5. There may not be a single prime guessed correctly; SP might assign a nonsensical 2 to every number, or something equally wrong. But we have a large “damage number” in the form of the sum of squared differences between desired and actual values.
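The “damage number” described above, a sum of squared differences between desired and actual outputs, might look like this; the exact normalisation into “percent squared” used in the log is not shown in the source and is left out here:

```python
def damage(predicted, desired):
    """Sum of squared differences between the algorithm's outputs and
    the training labels -- the raw fitness used for selection."""
    return sum((p - d) ** 2 for p, d in zip(predicted, desired))

# An algorithm that nonsensically outputs 2 for every input,
# scored against toy 0/1 labels:
desired = [0, 1, 1, 0]
predicted = [2, 2, 2, 2]
damage(predicted, desired)  # 4 + 1 + 1 + 4 = 10
```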

After some 24 million generations, we get the first algorithm which correctly predicts the primality of every number in the list of triples. Had we used a neural network, we would say it is trained by now. But it is overfitted: too long and too cumbersome.

After 100 million generations, we have quite understandable code which behaves quite well. A sieve of Eratosthenes, of a kind, was reinvented. Evolved once again.
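For reference, here is the textbook sieve of Eratosthenes that the evolution effectively rediscovered; this is the classic human-written version, not the evolved code itself:

```python
def sieve(limit):
    """Sieve of Eratosthenes: cross out multiples of each prime,
    then collect everything left standing."""
    is_composite = [False] * (limit + 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if not is_composite[p]:
            for m in range(p * p, limit + 1, p):
                is_composite[m] = True
    return [n for n in range(2, limit + 1) if not is_composite[n]]
```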

****************** Generation#2,826,188,589 ****************

Algorithm’s error : 0
Algorithm’s cost : 0.0019871199949837033
Input register(s) : A
Output register(s): B

01 Nop
02 ID2 : B=B+1; C=B+1
03 Jmc : If (A%C==0) {24 Else 4}
04 Nop
05 Nop
06 Opc : If (B<A) {C=C+B}
07 Jmc : If (A%C==0) {20 Else 9}
08 Nop
09 Opc : If (B<A) {B=B+B}
10 Opc : If (B<A) {C=C+B}
11 Jmc : If (A%C==0) {20 Else 12}
12 Nop
13 Nop
14 Opc : If (B<A) {C=C+B}
15 Nop
16 Nop
17 Sq? : B=(A Is X^2)
18 Jmc : If (A%C==0) {21 Else 26}
19 Nop
20 Nop
21 Opc : If (B<A) {A=A Xor C}
22 Nop
23 Sgn : B=Sgn(A)
24 Nop
25 Nop
26 Nop
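The listing above can be read as ordinary code. Below is a hand transcription into Python, under a few assumptions about the instruction set: registers start at 0, `Sq?` tests whether A is a perfect square, and `Sgn` is the sign function. On the training range 3..122 it returns 0 for primes and 1 for nonprimes, exactly matching the labels:

```python
import math

def sp_evolved(a):
    """Hand transcription of generation #2,826,188,589 (an interpretation,
    not the original interpreter). Input register A, output register B."""
    b = 0 + 1                  # 02 ID2: B = B + 1
    c = b + 1                  #         C = B + 1  (C = 2)
    if a % c == 0:             # 03 Jmc: even input jumps past everything
        return b               # 26: output B = 1 (nonprime)
    if b < a:                  # 06 Opc
        c = c + b              # C = 3
    if a % c == 0:             # 07 Jmc -> 20
        if b < a:              # 21 Opc: A = A Xor C (zero exactly when A == C)
            a = a ^ c
        return 1 if a > 0 else 0   # 23 Sgn: B = Sgn(A)
    if b < a:                  # 09 Opc
        b = b + b              # B = 2
    if b < a:                  # 10 Opc
        c = c + b              # C = 5
    if a % c == 0:             # 11 Jmc -> 20, same exit as the C = 3 case
        if b < a:
            a = a ^ c
        return 1 if a > 0 else 0
    if b < a:                  # 14 Opc
        c = c + b              # C = 7
    r = math.isqrt(a)          # 17 Sq?: B = 1 iff A is a perfect square
    b = 1 if r * r == a else 0
    if a % c == 0:             # 18 Jmc -> 21, same divisor trick for C = 7
        if b < a:
            a = a ^ c
        return 1 if a > 0 else 0
    return b                   # 26: output B
```

The Xor-then-Sgn trick makes A%C==0 report “prime” exactly when A equals the divisor C itself. And after ruling out divisors 2, 3, 5, and 7, the only composite left below 123 is 121 = 11², which is why the perfect-square test at instruction 17 suffices: a neat overfit to the training range.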

The log of events during machine learning