Summary

There are a few areas of AI and Machine Learning that I find interesting to explore. I had a brief introduction to standard feedforward neural networks through my Machine Learning class in Shanghai, as well as through a Customer Behavior Prediction algorithm I built for the HSBC Hackathon. I also learned about CNNs in that class and implemented one in a web app that classifies ASL signs made toward a camera into the corresponding letter. RNNs, LSTMs, and GANs are the neural network architectures I have not yet dived into deeply, but they are on the list!

This page is dedicated to my recent exploration of Reinforcement Learning, the field of AI that I believe will produce the most innovation and benefit to humanity. I relate the way these algorithms work, at least at a very high level, to the way our dopamine reward circuitry guides our own actions. So the more the neural network architectures in this area improve, the closer I predict we will get to a generalized intelligence.

Q-Learning vs. Policy Gradient

There are two broad approaches to Reinforcement Learning: Policy Gradient methods and Q-Learning. Q-Learning learns the value of each state-action pair and derives a policy by acting greedily with respect to those values, while Policy Gradient methods parameterize the policy directly and nudge it in the direction that increases expected return.
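To make the contrast concrete, here is a minimal sketch of one update step from each family. It assumes a toy discrete problem where Q and theta are small NumPy tables indexed by state and action; the function names and sizes are just for illustration.

```python
import numpy as np

# Q-Learning (value-based): learn Q(s, a), then act greedily with respect to it.
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q[s, a] toward the bootstrapped target."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Policy Gradient (REINFORCE): adjust policy parameters so that
# high-return actions become more probable.
def reinforce_update(theta, states, actions, returns, alpha=0.01):
    """One REINFORCE step for a softmax policy with logits theta[s, a]."""
    for s, a, G in zip(states, actions, returns):
        probs = np.exp(theta[s]) / np.sum(np.exp(theta[s]))
        grad_log_pi = -probs
        grad_log_pi[a] += 1.0          # gradient of log pi(a|s) for a softmax policy
        theta[s] += alpha * G * grad_log_pi
    return theta

# Tiny usage example on a hypothetical 4-state, 2-action problem.
Q = np.zeros((4, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
theta = np.zeros((4, 2))
theta = reinforce_update(theta, states=[0, 2], actions=[1, 0], returns=[1.0, 0.5])
```

The key difference shows up in what gets updated: Q-Learning updates an estimate of value and the policy falls out of it, while REINFORCE updates the policy itself.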

DDPG

DDPG (Deep Deterministic Policy Gradient) is the coming together of policy gradient and Q-Learning methods: an actor network outputs a deterministic action (the policy gradient side) and a critic network estimates its Q-value (the Q-Learning side), which makes it well suited to continuous action spaces.
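Below is a hedged PyTorch sketch of one DDPG gradient step, just to show how the actor and critic interact. The network sizes, learning rates, and the ddpg_update helper are assumptions for illustration, not a full training loop (a real implementation would also use a replay buffer and exploration noise).

```python
import copy
import torch
import torch.nn as nn

# Hypothetical dimensions; a real task would take these from the environment.
OBS_DIM, ACT_DIM, GAMMA, TAU = 8, 2, 0.99, 0.005

actor = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACT_DIM), nn.Tanh())           # deterministic policy mu(s)
critic = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))                            # Q(s, a)
actor_targ, critic_targ = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s_next, done):
    """One DDPG gradient step on a batch of transitions (all tensors)."""
    # Critic: Q-learning-style bootstrapped target using the target networks.
    with torch.no_grad():
        q_next = critic_targ(torch.cat([s_next, actor_targ(s_next)], dim=-1))
        target = r + GAMMA * (1 - done) * q_next
    q = critic(torch.cat([s, a], dim=-1))
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: policy-gradient-style step, pushing mu(s) toward actions the critic rates highly.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft (Polyak) update of the target networks.
    for net, targ in ((actor, actor_targ), (critic, critic_targ)):
        for p, p_targ in zip(net.parameters(), targ.parameters()):
            p_targ.data.mul_(1 - TAU).add_(TAU * p.data)

# Example call with a hypothetical batch of 32 random transitions.
B = 32
ddpg_update(torch.randn(B, OBS_DIM), torch.rand(B, ACT_DIM) * 2 - 1,
            torch.randn(B, 1), torch.randn(B, OBS_DIM), torch.zeros(B, 1))
```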

MADDPG

MADDPG (Multi-Agent DDPG) reuses parts of the DDPG architecture to do agent-based modeling, with an RL algorithm as the decision-making mechanism for each agent: every agent keeps its own actor for decentralized execution, while training uses centralized critics that see all agents' observations and actions. A sketch of that layout follows.
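This is a minimal sketch of the centralized-critic / decentralized-actor layout, assuming a hypothetical two-agent setup with made-up observation and action sizes; the training losses themselves would mirror the DDPG step above, applied per agent.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for a two-agent illustration.
N_AGENTS, OBS_DIM, ACT_DIM = 2, 8, 2

# Decentralized execution: each agent's actor sees only its own observation.
actors = [nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                        nn.Linear(64, ACT_DIM), nn.Tanh())
          for _ in range(N_AGENTS)]

# Centralized training: each agent's critic scores the joint observation-action.
critics = [nn.Sequential(nn.Linear(N_AGENTS * (OBS_DIM + ACT_DIM), 64), nn.ReLU(),
                         nn.Linear(64, 1))
           for _ in range(N_AGENTS)]

def joint_q_values(obs_all, act_all):
    """Q_i(o_1..o_N, a_1..a_N) for every agent i, given per-agent tensors."""
    joint = torch.cat(obs_all + act_all, dim=-1)
    return [critic(joint) for critic in critics]

# Acting stays fully decentralized: agent i only needs obs_all[i].
obs_all = [torch.randn(1, OBS_DIM) for _ in range(N_AGENTS)]
act_all = [actor(o) for actor, o in zip(actors, obs_all)]
print(joint_q_values(obs_all, act_all))
```

The centralized critics are only needed during training; once the actors are trained, each agent can act from its own observation alone, which is what makes this attractive for agent-based modeling.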