Image Recognition

Experiment 1 - IRaaS:

Image Recognition as a Service (IRaaS) was a simple clustering algorithm I wrote in NodeJS. I then used the edge data to run a simple comparison against letter templates, scoring each letter by the percentage of pixels that matched.
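The pixel-percentage comparison can be sketched roughly like this (a hypothetical reconstruction in Python, not my actual NodeJS code; the template data is made up for illustration):

```python
# Sketch of an IRaaS-style comparison: given a binary edge map of an
# unknown symbol and same-size letter templates, score each template by
# the fraction of pixels that agree, then pick the best-scoring letter.

def match_percentage(edges, template):
    """Fraction of pixels that agree between two equal-size binary grids."""
    total = len(edges) * len(edges[0])
    matches = sum(
        1
        for row_e, row_t in zip(edges, template)
        for e, t in zip(row_e, row_t)
        if e == t
    )
    return matches / total

def best_letter(edges, templates):
    """Return the letter whose template has the highest match percentage."""
    return max(templates, key=lambda letter: match_percentage(edges, templates[letter]))

# Toy 3x3 "templates" for two letters (assumed data, not real glyphs).
templates = {
    "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "L": [[1, 0, 0], [1, 0, 0], [1, 1, 1]],
}
unknown = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
print(best_letter(unknown, templates))  # → I
```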

Scale and Rotation:

I made it account for discrepancies in the scale of the symbols (letters), but not rotation yet. Adding rotation support would be straightforward, similar to the algorithm I used to detect which stamp was used back at SnowShoeStamp.com. There are some subtle differences that will make image recognition more difficult, but I doubt I can discuss them without breaking some kind of non-disclosure agreement.
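One common way to handle scale (a sketch of the general idea, not necessarily how my code did it) is to crop to the symbol's bounding box and resample it onto a fixed-size grid by nearest-neighbor sampling before comparing against the templates:

```python
# Resample a binary pixel grid onto a fixed size x size grid so symbols
# of different scales can be compared against the same templates.
# Nearest-neighbor sampling: each output cell reads the proportionally
# located input cell.

def normalize_scale(pixels, size=8):
    """Resample a binary pixel grid onto a size x size grid."""
    h, w = len(pixels), len(pixels[0])
    return [
        [pixels[r * h // size][c * w // size] for c in range(size)]
        for r in range(size)
    ]

# A 4x4 checker pattern downsampled to 2x2 keeps its structure.
print(normalize_scale([[1, 1, 0, 0],
                       [1, 1, 0, 0],
                       [0, 0, 1, 1],
                       [0, 0, 1, 1]], size=2))  # → [[1, 0], [0, 1]]
```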

Speed:

Obviously NodeJS would not be fast enough, so next time I think I will use Python, which should also help me dust off my Python skills. (I haven't used Python since I wrote a crude three-dimensional engine while bored in Trig my sophomore year of college.)

What is next?

Machine Learning:

The next step is to apply some basic machine learning to my image recognition. There are a lot of machine learning/data analysis techniques one could use, but my favorites so far are my own home-baked genetic learning algorithms. You can read more about them in the Machine Learning section of this site.
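For flavor, here is a minimal, generic genetic-algorithm sketch (not my home-baked version, just the textbook shape of the technique): tournament selection, single-point crossover, and occasional bit-flip mutation, evolving bit strings toward an all-ones target.

```python
import random

# Minimal genetic algorithm: evolve bit strings toward all ones.
# Fitness is simply the count of 1-bits (the classic "OneMax" toy problem).

def evolve(length=16, pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection of size 2: keep the fitter of two.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)       # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:               # bit-flip mutation
                i = rng.randrange(length)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "of 16 bits set")
```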

Future Experiment - Instagram's APIs:

A while ago I launched a free app on the Android Market called Time Machine. Time Machine was a simple photo-browsing app that pulled photos geotagged near you in the past 12-24 hours. It was funny: while testing it one Sunday, I saw pictures of some people I knew but was not connected with, wearing togas and pre-gaming for the Badger football game the day before. The app has since been taken down because I failed to monetize it and therefore had little interest in maintaining it.

Getting to the point: I could use my image recognition service, tied together with the basic machine learning, to try to determine which algorithms are best suited to deciding whether a given landmark appears in an image.

Future Experiment 2 - Tie it together:

This makes one wonder what would happen if you took Instagram's APIs out of the equation and used signal trilateration to determine the location of the device taking the photo and of the devices around it, then analyzed the photo to see whether image recognition could identify those devices in the image. Just wondering...
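The trilateration half of that idea is standard geometry. As a sketch (one common 2D formulation, assumed here rather than taken from any existing code of mine): given three anchor devices at known positions and a measured distance to each, subtracting the circle equations pairwise yields a linear system you can solve for the unknown position.

```python
# 2D trilateration: each anchor i at (xi, yi) with measured distance di
# defines a circle (x - xi)^2 + (y - yi)^2 = di^2. Subtracting the
# equations pairwise cancels the x^2 and y^2 terms, leaving two linear
# equations in (x, y).

def trilaterate(p1, d1, p2, d2, p3, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # assumes anchors are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Device at (3, 4): exactly 5 units from each of these three anchors.
print(trilaterate((0, 0), 5, (6, 0), 5, (0, 8), 5))  # → (3.0, 4.0)
```

In practice the measured distances would be noisy, so a real implementation would want a least-squares fit over more than three anchors, but the linearization is the same.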