
Wednesday, October 26, 2011

Lessons learned from Mona Lisa (part 2)


    I learned yet another lesson about why it's so important to provide the fitness evaluator with the exact same image that is shown to the user.

    If you recall, the original problem I reported was that the candidate image was being scaled down before evaluation, thereby allowing many pixels to avoid detection/correction. Yesterday I enabled anti-aliasing for the image shown to the user. I figured that so long as the evaluator was seeing 100% of the pixels (no scaling going on) I'd be safe, but it turns out this isn't enough.

    Take a look at the following images:

[Image: candidate rendered with anti-aliasing enabled]

[Image: candidate rendered with anti-aliasing disabled]

They show the exact same candidates with anti-aliasing enabled and disabled. Notice how the anti-aliased version has "streaks" of errors across the face, similar to the problem I was seeing when the candidate was being scaled. It turns out that the candidates sometimes contain polygons that introduce errors into the image in the form of "streaks" (polygons rendered with sub-pixel precision). The interesting thing is that aliasing suppresses these errors, so the evaluator function never sees them. Consequently, the user sees a whole bunch of errors which the fitness function will never fix. Sound familiar?

    In conclusion: you should always (always!) pass the fitness function the exact same image you display to the user. Better safe than sorry :)
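
Here is a minimal sketch of what I mean, assuming a Java/AWT renderer similar to the one used by Watchmaker's example (the class and types below are hypothetical, not the actual code): render each candidate exactly once, with a single set of rendering hints, and hand that same BufferedImage to both the fitness evaluator and the display.

    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.Polygon;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;
    import java.util.List;

    // Hypothetical sketch: render each candidate exactly once and reuse that image for
    // both fitness evaluation and display, so the evaluator sees every artifact
    // (anti-aliasing included) that the user sees.
    public final class CandidateRenderer
    {
        public BufferedImage render(List<ColoredPolygon> candidate, int width, int height)
        {
            BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = image.createGraphics();
            // One set of rendering hints, used everywhere. Never render a second,
            // differently-configured copy just for the evaluator.
            g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
            for (ColoredPolygon polygon : candidate)
            {
                g.setColor(polygon.color);
                g.fillPolygon(polygon.shape);
            }
            g.dispose();
            return image;
        }

        // Simple holder for a polygon and its color (hypothetical).
        public static final class ColoredPolygon
        {
            final Polygon shape;
            final Color color;

            ColoredPolygon(Polygon shape, Color color)
            {
                this.shape = shape;
                this.color = color;
            }
        }
    }

If the image passed to the evaluator and the image painted on the screen ever diverge (scaling, anti-aliasing, color model), the evaluator ends up blind to errors the user can plainly see.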

Friday, October 21, 2011

Lessons learned from Mona Lisa

I've been studying Genetic Algorithms using Roger Alsing's Evolution of Mona Lisa algorithm over the past month and made some interesting discoveries:

  1. Using opaque polygons improves rendering performance (and by extension the performance of the fitness function) by an order of magnitude.
  2. In all things, favor many small changes over drastic large changes...
  3. When adding a new polygon, give it a size of 1 pixel instead of assigning it vertexes with random coordinates. This improves its chances of survival.
  4. When adding a new vertex, instead of dropping it into a random position give it the same position as an existing vertex in the polygon. It won't modify the polygon in any noticeable way, but it will open the door for the "move vertex" mutation to move more vertexes than it could before.
  5. When moving vertexes, favor many small moves (3-10 pixels at a time) instead of trying to span the entire canvas. (A sketch of the mutation tweaks from items 3-5 follows this list.)
  6. If you're going to damp mutations over time, damp the amount of change as opposed to the probability of change.
  7. The effects of damping are minimal. It turns out that if you've followed the above steps (favor small changes) there should be no real need to damp.
  8. Don't use Crossover mutation. It introduces a high mutation rate, which is great early on, but high mutation quickly becomes a liability because an image that is mostly converged will reject all but small mutations.
  9. Don't scale the image in the fitness evaluator function. This one took me a while to figure out. If your input image is 200x200 but the fitness evaluator scales the image down to 100x100 before generating a fitness score, candidate solutions will contain streaks/lines of errors that are invisible to the fitness function but clearly wrong to the end-user. The fitness function should process the entire image or not at all. A better solution is to scale the target image across the board so your fitness function is processing 100% of the pixels. If 100x100 is too small to display on the screen, simply up-scale the image for display. Now the user can see the image clearly and the fitness function isn't missing a thing.
  10. Prevent the creation of self-intersecting polygons. They never yield good results, so we can substantially speed up the algorithm by preventing mutations from creating them. Implementing the check for self-intersecting polygons was a pain in the ass, but it was worth the trouble in the end. (A sketch of the check follows this list.)
  11. I've modified the fitness score to encourage the removal of hidden polygons (purely for performance reasons) by adding "fitness += candidate.size();" to the score. This means that the fitness score will never hit zero. (The fitness sketch after this list compares every pixel and includes this penalty.)
  12. I've increased the maximum number of polygons from 50 to 65535.
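
Here is a rough sketch of the mutation tweaks from items 3-5. The ArrayList-of-Point representation and the method names are mine, chosen purely for illustration:

    import java.awt.Point;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Hypothetical sketch of the mutation tweaks described in items 3-5.
    // A polygon is represented as a mutable list of vertexes (its color is omitted here).
    public final class Mutations
    {
        private final Random random = new Random();

        // Item 3: a new polygon starts out roughly 1 pixel in size at a random location,
        // instead of having its vertexes scattered randomly across the canvas.
        public List<Point> newPolygon(int width, int height)
        {
            int x = random.nextInt(width - 1);
            int y = random.nextInt(height - 1);
            List<Point> polygon = new ArrayList<>();
            polygon.add(new Point(x, y));
            polygon.add(new Point(x + 1, y));
            polygon.add(new Point(x, y + 1));
            return polygon;
        }

        // Item 4: a new vertex is a copy of an existing vertex, so the polygon's shape is
        // unchanged but the "move vertex" mutation now has more vertexes to work with.
        public void addVertex(List<Point> polygon)
        {
            int index = random.nextInt(polygon.size());
            polygon.add(index, new Point(polygon.get(index)));
        }

        // Item 5: move a vertex by a small amount (3-10 pixels) instead of anywhere on the canvas.
        public void moveVertex(List<Point> polygon)
        {
            Point vertex = polygon.get(random.nextInt(polygon.size()));
            int distance = 3 + random.nextInt(8); // 3 to 10 pixels
            vertex.translate(random.nextBoolean() ? distance : -distance,
                random.nextBoolean() ? distance : -distance);
        }
    }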
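
The fitness function from items 9 and 11 boils down to something like the following sketch. It assumes both images are BufferedImages of identical size and that lower scores are better; the names are hypothetical:

    import java.awt.image.BufferedImage;
    import java.util.List;

    // Hypothetical sketch: compare every pixel of the candidate against the target
    // (no scaling, per item 9) and add a penalty per polygon so that hidden/useless
    // polygons get pruned over time (item 11). Because of the per-polygon penalty
    // the score never reaches zero.
    public final class FitnessEvaluator
    {
        public long evaluate(BufferedImage candidateImage, BufferedImage target, List<?> candidate)
        {
            assert candidateImage.getWidth() == target.getWidth() &&
                candidateImage.getHeight() == target.getHeight(): "images must be the same size";
            long fitness = 0;
            for (int y = 0; y < target.getHeight(); ++y)
            {
                for (int x = 0; x < target.getWidth(); ++x)
                {
                    int c = candidateImage.getRGB(x, y);
                    int t = target.getRGB(x, y);
                    int dr = ((c >> 16) & 0xFF) - ((t >> 16) & 0xFF);
                    int dg = ((c >> 8) & 0xFF) - ((t >> 8) & 0xFF);
                    int db = (c & 0xFF) - (t & 0xFF);
                    fitness += (long) dr * dr + (long) dg * dg + (long) db * db;
                }
            }
            // Item 11: penalize the number of polygons so hidden ones get removed.
            fitness += candidate.size();
            return fitness;
        }
    }

Scaling only ever happens on the display side; the evaluator always compares at the target image's native resolution.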
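
The self-intersection check from item 10 can lean on java.awt.geom.Line2D. This is just a sketch of the brute-force version (it checks every pair of non-adjacent edges), not necessarily the most efficient approach:

    import java.awt.Point;
    import java.awt.geom.Line2D;
    import java.util.List;

    // Hypothetical sketch: reject mutations that would produce a self-intersecting polygon.
    // O(n^2) over the edges, which is fine because each polygon only has a handful of vertexes.
    // Note: zero-length edges created by the "duplicate vertex" mutation may need to be skipped.
    public final class PolygonChecks
    {
        public static boolean isSelfIntersecting(List<Point> polygon)
        {
            int n = polygon.size();
            for (int i = 0; i < n; ++i)
            {
                Point a1 = polygon.get(i);
                Point a2 = polygon.get((i + 1) % n);
                for (int j = i + 2; j < n; ++j)
                {
                    // Edges 0 and n-1 share a vertex, so skip that pair.
                    if (i == 0 && j == n - 1)
                        continue;
                    Point b1 = polygon.get(j);
                    Point b2 = polygon.get((j + 1) % n);
                    if (Line2D.linesIntersect(a1.x, a1.y, a2.x, a2.y, b1.x, b1.y, b2.x, b2.y))
                        return true;
                }
            }
            return false;
        }
    }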

    When I first tried running Watchmaker's Mona Lisa example it would run for days without producing anything close to the target image. Roger's algorithm was better, but it still stagnated after an hour. Using the new algorithm I managed to recreate the target image in less than 15 minutes. The fitness score reads 150,000, but to the naked eye the candidate looks almost identical to the original.

    I put together a diagnostics display that shows me the entire population evolving over time. It also tells me how many unique candidates are active in the population at any given time. A low number indicates a lack of variance. Either the population pressure is too high or the mutation rates are too low. In my experience, a decent population contains at least 50% unique candidates.
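
Measuring diversity is straightforward so long as candidates implement equals() and hashCode(); here is a minimal sketch (a hypothetical helper, not part of any framework):

    import java.util.HashSet;
    import java.util.List;

    // Hypothetical sketch: measure population diversity as the fraction of unique candidates.
    // A value below ~0.5 suggests the population pressure is too high or the mutation rate too low.
    public final class Diagnostics
    {
        public static <T> double uniqueFraction(List<T> population)
        {
            return (double) new HashSet<>(population).size() / population.size();
        }
    }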

I used this diagnostic display to tune the algorithm. Whenever the number of unique candidates was too low, I'd increase the mutation rate. Whenever the algorithm was stagnating too quickly, I'd examine what was going on in the population. Very frequently I'd notice that the mutation amount was too high (colors or vertexes moving too quickly).

    I'm glad I spent the past month studying this problem. It's taught me a lot about the nature of GAs: success has a lot more to do with design than with code optimization. I've also discovered that it's extremely important to watch the entire population evolve in real time as opposed to only studying the fittest candidate. This lets you discover fairly quickly which mutations are effective and whether your mutation rate is too low or too high.

Genetic Algorithms are a lot of fun. I encourage you to play with them yourself.