computer-go: reducing the number of search strings
I have come up with a new idea to reduce the number of search strings.
Well, using brute force to solve Go would be an impossible task!
So, let's say we have a well-trained NN. (The size can be 5x5 or something else.)
The NN will probably have some points with a high output score (> 0.9 or something like that).
But suppose we consider only the best of the best points, and run "blindly" down a string of those best-of-the-best moves
for some large number of moves (say 10, 20, 50 or more).
Then we go back, take the second-best first move, and again run "blindly" down the best-of-the-best moves
(for 10, 20, 50 or more moves).
This second position should turn out totally different from the first.
Now compare the two strings (positions) and credit each of the two first moves according to the result of its string.
The next step is to move down to the reply to the first move and repeat the process,
to see whether the program actually made the best-of-the-best reply.
This method could reduce the problem of scoring a move depending on its surroundings.
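The procedure above can be sketched in a few lines of Python. The `policy`, `play`, and `score` functions here are hypothetical stand-ins for a real NN and Go engine (the toy demo at the bottom uses trivial versions of them); this is only an illustration of the rollout-and-compare idea, not a working Go program.

```python
def greedy_rollout(position, first_move, play, policy, depth=20):
    """Play first_move, then blindly follow the NN's top choice
    for `depth` further moves; return the final position."""
    pos = play(position, first_move)
    for _ in range(depth):
        best = policy(pos)[0]          # best-of-the-best point
        pos = play(pos, best)
    return pos

def compare_first_moves(position, play, policy, score, depth=20):
    """Run one rollout for the best first move and one for the
    second-best, then credit each first move with the score of
    the position its rollout reached."""
    candidates = policy(position)
    best, second = candidates[0], candidates[1]
    end_a = greedy_rollout(position, best, play, policy, depth)
    end_b = greedy_rollout(position, second, play, policy, depth)
    return {best: score(end_a), second: score(end_b)}

# Toy demo: a "position" is a tuple of moves played, the "NN"
# prefers the lowest-numbered unplayed point, and the score is
# just the sum of moves played -- all stand-ins for a real engine.
def policy(pos):
    return [m for m in range(5) if m not in pos]

def play(pos, move):
    return pos + (move,)

def score(pos):
    return sum(pos)

credits = compare_first_moves((), play, policy, score, depth=2)
```

The same `compare_first_moves` call, applied one ply deeper, would implement the "step down to the reply" idea.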
Kjeld
NB: I once had a program that could draw Mandelbrot pictures quite fast.
I think the program's name was fractint.exe (or something like that).
What made the program so fast was its method of calculation:
the fractals were calculated in integer arithmetic instead of reals.
Has anybody made an integer version of a NN to speed up a Go program?
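For what it's worth, an integer NN would likely use fixed-point arithmetic, in the same spirit as fractint's integer Mandelbrot math. Below is a minimal sketch, assuming a single toy neuron with a ReLU-style clamp; the scaling factor, names, and example numbers are all made up for illustration, not taken from any actual Go program.

```python
SHIFT = 8                 # fixed-point: value = integer / 2**SHIFT
SCALE = 1 << SHIFT

def to_fixed(x):
    """Convert a real number to fixed-point representation."""
    return int(round(x * SCALE))

def fixed_mul(a, b):
    """Multiply two fixed-point numbers; the double-scaled
    product needs one rescale back down."""
    return (a * b) >> SHIFT

def forward(weights, bias, inputs):
    """One integer neuron: fixed-point dot product plus bias,
    clamped at zero (ReLU) -- all in integer arithmetic."""
    acc = bias
    for w, x in zip(weights, inputs):
        acc += fixed_mul(w, x)
    return max(acc, 0)

# Example: 0.5*0.5 + 0.25*1.0 + 0.1 = 0.6, computed in integers.
w = [to_fixed(0.5), to_fixed(0.25)]
x = [to_fixed(0.5), to_fixed(1.0)]
out = forward(w, to_fixed(0.1), x)   # result is 0.6 * SCALE, give or take rounding
```

On hardware without a fast FPU this kind of integer evaluation can be much faster than floating point, which is presumably why fractint used it.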