Artificial Intelligence and Bots have interested me for a long time now.
Experiment 1 - The Fish Tank:
The Fish Tank was my first attempt at creating a genetic learning algorithm, built during a hackathon a year or two back. I started by teaching it to play tic-tac-toe. It was a short one-day hackathon (not a 24-hour sprint), so I didn't get too far, but it was a start.
Experiment 2 - Chaos Engine:
Chaos Engine was a far more ambitious endeavor, though it took a simpler approach: rather than learning its actions from scratch, it had a series of simple actions preprogrammed into it. I built it around an old idea for a game called Owen's World, which I had been using to teach a nine-year-old to program.
Bots and Data Acquisition:
Project 1 - NJax:
https://github.com/schematical/njax is a lightweight framework built on top of the MEAN stack. It is designed to be everything I wanted in the book I wrote: it is built around microservice architecture and around how best to utilize the human element, your developers, when building your end product.
That is all great, but what does it have to do with data and machine learning? NJax is uniquely designed so that the core service can tag, comment on, subscribe to, and store event data on entities belonging to its running child microservices. But it doesn't use ObjectIds as foreign keys. Instead, every entity across the entire system is assigned a unique URL, and that URL is used as the foreign key. This way the system can easily query data about, or link users to, the tagged, commented-on, or subscribed-to entity. This is vitally important, since each microservice runs on another server on another subdomain.
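To make the idea concrete, here is a minimal sketch of URL-as-foreign-key in plain JavaScript. The function names, URLs, and entity shapes are illustrative assumptions, not the actual NJax API; the point is only that a record on one service can reference an entity on another service by its URL alone.

```javascript
// Hypothetical sketch: entities are stored under their full URL instead of
// a database ObjectId. Names and URLs below are illustrative, not NJax's API.
const entities = {}; // url -> entity record

function registerEntity(url, data) {
  // Any service (on any subdomain) can register what it knows about a URL.
  entities[url] = { url, ...data };
  return entities[url];
}

function comment(userUrl, targetUrl, body) {
  // The foreign keys are just URLs, so this comment can reference an
  // entity owned by a microservice on a completely different server.
  return { user: userUrl, target: targetUrl, body };
}

// A "user" on the core service and a "post" on a child service:
registerEntity("https://core.example.com/users/alice", { type: "user" });
registerEntity("https://blog.example.com/posts/42", { type: "post", title: "Hello" });

const c = comment(
  "https://core.example.com/users/alice",
  "https://blog.example.com/posts/42",
  "Nice post!"
);

// Resolving the foreign key is a plain lookup by URL:
const target = entities[c.target];
```

Because the key is globally unique and self-describing, no service needs access to another service's database to know what a reference points at.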
That is cool, but it's not mind-blowing until you ask yourself, "What else can I tag, subscribe to, or comment on?" The answer: anything with a URL. You can comment on, subscribe to, tag, or trigger events that our system will remember on basically any URL on the web.
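The consequence of the design is that internal entities and arbitrary external pages are tagged through exactly the same code path. A minimal sketch, with illustrative function names and URLs that are my assumptions rather than NJax's real interface:

```javascript
// Hypothetical sketch: because the key is a URL, the target does not have
// to live inside the system at all.
const tags = []; // { user, url, tag } records

function tagUrl(user, url, tag) {
  tags.push({ user, url, tag });
}

function tagsFor(url) {
  // All tags anyone has attached to this URL, internal or external.
  return tags.filter((t) => t.url === url).map((t) => t.tag);
}

// Tag an internal entity and an arbitrary external page the same way:
tagUrl("alice", "https://blog.example.com/posts/42", "ai");
tagUrl("alice", "https://en.wikipedia.org/wiki/Genetic_algorithm", "ai");
tagUrl("bob", "https://en.wikipedia.org/wiki/Genetic_algorithm", "to-read");

const wikiTags = tagsFor("https://en.wikipedia.org/wiki/Genetic_algorithm");
```

The same pattern works for subscriptions, comments, and event triggers: the record just carries a different payload alongside the target URL.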
This technology isn't anything new if you are a giant, well-funded startup in Silicon Valley, but to the average dorm-room hacker it could be a game changer.
How do other URLs get in the system?
1 - Shares:
When people share links in the system, by default we can crawl them. Interactions then happen through our news feed: commenting, subscribing, 'liking', 'favoriting', or whatever the designer decides to publicly call it.
2 - Embeds:
NJax is built to be embeddable. When an application built on NJax allows users to embed some of its functionality into their own pages, we can track users' interactions with the widget on that page.
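The widget-side tracking described above might look something like the following sketch. The event shape, field names, and URLs are assumptions for illustration; the real reporting mechanism would presumably POST these records back to the NJax core service.

```javascript
// Hypothetical sketch: when a widget is embedded on a third-party page,
// each interaction is recorded with the host page's URL attached, so the
// external page itself becomes a tracked entity.
const eventLog = [];

function trackInteraction(hostPageUrl, userUrl, action) {
  eventLog.push({
    page: hostPageUrl, // the page the widget is embedded on
    user: userUrl,     // the acting user, also identified by URL
    action,            // e.g. "comment", "subscribe"
    at: Date.now(),
  });
}

// A user subscribing through a widget embedded on someone else's site:
trackInteraction(
  "https://thirdparty.example.net/reviews",
  "https://core.example.com/users/alice",
  "subscribe"
);
```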
What to do once we have the data:
Eventually, once we have all this data on who commented on what and interacted with which URLs and websites across the web, we can start to find trends in the data and in user behavior.
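As a first pass at what "finding trends" could mean, here is a sketch that counts interaction events per URL and surfaces the most active ones. The event shape is an assumption carried over from the illustrations above, not a defined NJax format.

```javascript
// Hypothetical sketch: with interaction events keyed by URL, a simple
// notion of "trending" is a count-and-sort over the event stream.
function trendingUrls(events, topN) {
  const counts = {};
  for (const e of events) {
    counts[e.url] = (counts[e.url] || 0) + 1;
  }
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1]) // most interactions first
    .slice(0, topN)
    .map(([url, count]) => ({ url, count }));
}

const events = [
  { url: "https://a.example.com", action: "comment" },
  { url: "https://b.example.com", action: "tag" },
  { url: "https://a.example.com", action: "subscribe" },
];
const top = trendingUrls(events, 1);
// top[0] is { url: "https://a.example.com", count: 2 }
```

A real system would weight actions differently and decay counts over time, but the URL-keyed event log is what makes even this naive aggregation possible across services.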
...I am still writing this section...