Recurrent Neural Networks (RNNs) have been successful in modeling time series data.
People say that RNNs are great for modeling sequential data because they are designed to, potentially, remember the entire history of a time series when predicting values. "In theory" this may be true.
But when it comes to implementing an RNN model in Keras, practitioners need to specify a "length of time series" in batch_shape:
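For example, with the functional API the batch_shape argument of the Input layer fixes both the batch size and the sequence length up front. A minimal sketch (the numbers below are placeholders, not values from the post):

```python
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

# batch_shape = (batch size, length of time series, number of features);
# all three numbers here are illustrative placeholders.
batch_size, time_steps, n_features = 32, 10, 1

inputs = Input(batch_shape=(batch_size, time_steps, n_features))
x = LSTM(64, stateful=True)(inputs)   # stateful RNNs are one case where a fixed batch_shape is required
outputs = Dense(1)(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```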
Create a simple game using pygame
The GIF above shows my very first game. In this blog post, I will introduce a very simple game created using pygame.
What is this game about?
In this game, you are a "princess" and the goal is to rescue the "prince". But be careful: there is a snake monster that could hit you!
Create deep learning calculators based on Encoder-Decoder RNN using Keras
Google search is great. You can ask any question you have, and it gives you links to potential solutions. It can sometimes give you the solution itself when the question is simple enough. One such example is that Google search can act as a calculator. You can ask "1+1" or "1 + 1" to get the value 2. It somehow knows that the spaces are a nuisance and returns the correct value. You can even ask for the calculation with some strings. See pic below:
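As for the encoder-decoder calculator itself, a minimal Keras sketch might look like the following; the vocabulary size, sequence lengths, and layer widths are placeholder assumptions, not values from the post:

```python
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

# Placeholder assumptions: 13 characters ("0"-"9", "+", " ", and a start/pad marker),
# expressions up to 7 characters long, answers up to 5 characters long.
num_tokens, max_enc_len, max_dec_len = 13, 7, 5

# Encoder reads the one-hot encoded arithmetic expression and summarizes it in its states
enc_inputs = Input(shape=(max_enc_len, num_tokens))
_, state_h, state_c = LSTM(128, return_state=True)(enc_inputs)

# Decoder generates the answer characters, conditioned on the encoder states
dec_inputs = Input(shape=(max_dec_len, num_tokens))
dec_seq = LSTM(128, return_sequences=True)(dec_inputs, initial_state=[state_h, state_c])
dec_outputs = Dense(num_tokens, activation="softmax")(dec_seq)

model = Model([enc_inputs, dec_inputs], dec_outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```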
Assess the robustness of CapsNet
In Understanding and Experimenting Capsule Networks, I experimented with Hinton's Capsule Network.
Dynamic Routing Between Capsules discusses the robustness of Capsule Networks to affine transformations:
"Experiments show that each DigitCaps capsule learns a more robust representation for each class than a traditional convolutional network. Because there is natural variance in skew, rotation, style, etc in hand written digits, the trained CapsNet is moderately robust to small affine transformations of the training data (Section 5.2, page 6)."
Understanding and Experimenting Capsule Networks
This blog is inspired by Dynamic Routing Between Capsules and aims to understand Capsule Networks with hands-on coding.
I use Keras with the TensorFlow backend. The code here was created by modifying Kevin Mader's IPython notebook script from a Kaggle competition, which, in turn, was written by adapting Xifeng Guo's script on GitHub.
CNN modeling with image translations using MNIST data
In this blog, I train a standard CNN model on the MNIST data and assess its performance.
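For reference, a minimal sketch of such a model in Keras might look like this; the architecture and hyperparameters are illustrative assumptions, not necessarily those used in the post:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

# Load and normalize MNIST, adding the single channel dimension
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_train, y_test = to_categorical(y_train), to_categorical(y_test)

# A small "standard" CNN: two conv/pool blocks followed by a dense classifier
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_data=(x_test, y_test))
```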
Learn the breed of a dog using deep learning
My friend asked me if I could figure out the breed of his dog, Loki. As I am not a dog expert, I will ask for opinions from deep learning. Here, I use VGG16 trained on the ImageNet dataset.
What are VGG16 and ImageNet?
According to Wikipedia,
"The ImageNet project is a large visual database designed for use in visual object recognition software research...Since 2010, the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a competition where research teams evaluate their algorithms on the given data set, and compete to achieve higher accuracy on several visual recognition tasks."
The first deep learning model for NLP - Let AI tweet like President Trump -
The goal of this blog is to learn the functionalities of Keras for language processing applications.
In the first section, I create a very simple single-word-in, single-word-out model based on a single sentence. With this application, I make sure that the model works in this simplest possible scenario and can correctly predict the next word given the current word for this training sentence.
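A minimal sketch of what such a single-word-in, single-word-out Keras model could look like follows; the training sentence, layer sizes, and epoch count are placeholder assumptions rather than the post's actual settings:

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Placeholder training sentence and hyperparameters (not the post's actual settings)
sentence = "make america great again"
tokenizer = Tokenizer()
tokenizer.fit_on_texts([sentence])
encoded = tokenizer.texts_to_sequences([sentence])[0]
vocab_size = len(tokenizer.word_index) + 1

# (current word -> next word) pairs from the single sentence
X = np.array(encoded[:-1]).reshape(-1, 1)
y = np.array(encoded[1:])

model = Sequential([
    Embedding(vocab_size, 10),
    LSTM(50),
    Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=300, verbose=0)

# Predict the word that follows "america"
seed = np.array(tokenizer.texts_to_sequences(["america"])[0]).reshape(1, 1)
next_id = int(np.argmax(model.predict(seed, verbose=0), axis=-1)[0])
print(tokenizer.index_word[next_id])
```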
Extract someone's tweet using tweepy
This blog post is to remind myself of the simple usage of tweepy. I will extract someone's past tweets using tweepy and create a .csv file that can be used to train machine learning models. I created the scripts by referencing the following seminal blog posts:
Importing the necessary Python packages.
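A hedged sketch of what this looks like with the tweepy v3-style API; the credentials, screen name, and output file name are placeholders, not values from the post, and newer tweepy/Twitter API versions differ:

```python
import csv
import tweepy

# Authenticate with placeholder credentials (tweepy v3-style API)
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Fetch up to 200 of the most recent tweets for one account (placeholder screen name)
tweets = api.user_timeline(screen_name="some_user", count=200)

# Write the tweets to a .csv file that can later feed a machine learning model
with open("tweets.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "created_at", "text"])
    for t in tweets:
        writer.writerow([t.id_str, t.created_at, t.text])
```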
Visualization of Filters with Keras
The goal of this blog post is to understand "what my CNN model is looking at". People call this visualization of the filters. But more precisely, what I will do here is visualize the input images that maximize the (sum of the) activation maps (or feature maps) of the filters. I will visualize the filters of deep learning models for two different applications:
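In each case, the underlying idea is gradient ascent on the input image to maximize the activation of a chosen filter. A minimal sketch using tf.GradientTape and VGG16 follows; the layer name, filter index, image size, step size, and iteration count are arbitrary placeholders, and the original post may use a different model or the older K.gradients API:

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16

# Build a model that outputs the activations of one convolutional layer
base = VGG16(weights="imagenet", include_top=False)
layer = base.get_layer("block3_conv1")               # placeholder layer choice
feature_extractor = tf.keras.Model(base.input, layer.output)

filter_index = 0                                     # placeholder filter choice
img = tf.Variable(tf.random.uniform((1, 128, 128, 3)) * 0.25 + 0.5)  # start from gray-ish noise

# Gradient ascent: nudge the input image toward higher mean activation of the filter
for _ in range(30):
    with tf.GradientTape() as tape:
        activation = feature_extractor(img)
        loss = tf.reduce_mean(activation[:, :, :, filter_index])
    grads = tape.gradient(loss, img)
    grads = tf.math.l2_normalize(grads)              # normalized ascent step
    img.assign_add(10.0 * grads)

result = img.numpy()[0]  # the input image that (approximately) maximizes this filter
```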