Automatic Graphic Generation from Text using Generative Adversarial Networks
A computer that can generate graphics in a desired style from text captions has powerful applications in marketing, advertising, and gaming. We present the Generative Styled Network (GSN), which takes as input a text caption and a set of style images, and produces an image on a white background that matches both the text description and the desired style.
Jeff Chen, Stephanie Dong, Nick Guo
2018
High Performance Question Answering using Natural Language Processing with Neural Networks
We trained a question-answering model on Wikipedia text using the Stanford Question Answering Dataset, achieving a state-of-the-art F1 score of 75.8% on the test set. We demonstrate that the model generalizes, scoring 68-90% correct on our hand-crafted datasets spanning three other types of text: Top News, Financial News, and Corporate Annual Reports. Code available upon request.
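For reference, the F1 metric reported above is the harmonic mean of precision and recall (in SQuAD-style evaluation it is computed per answer over token overlap, then averaged). A minimal sketch with illustrative numbers, not the paper's evaluation code:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values only: precision 0.80 and recall 0.72 give F1 ~ 0.758.
print(round(f1_score(0.80, 0.72), 3))
```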
Jeff Chen, Alexandre Gauthier
2018
Deep Bin Picking using Reinforcement Learning
We use reinforcement learning to solve the challenge of having a robot pick items from a deep bin. The task is challenging because lifting one item may unintentionally lift other items as well. We show that PyBullet and Dex-Net can be used to build the simulation environment for this task, and that Q-learning significantly outperforms random and greedy baselines.
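The Q-learning baseline referenced above follows the standard tabular update rule, Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). A minimal sketch with hypothetical state and action names, not the paper's actual bin-picking environment:

```python
from collections import defaultdict

def q_learning_update(Q, state, action, reward, next_state, actions,
                      alpha=0.1, gamma=0.99):
    """One tabular Q-learning step toward the TD target r + gamma * max Q(s', .)."""
    best_next = max(Q[(next_state, a)] for a in actions)
    td_target = reward + gamma * best_next
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])

Q = defaultdict(float)
actions = ["grasp_top", "grasp_side", "push"]  # hypothetical action set
q_learning_update(Q, "bin_full", "grasp_top", reward=1.0,
                  next_state="bin_partial", actions=actions)
print(Q[("bin_full", "grasp_top")])  # 0.1 * (1.0 + 0.99 * 0 - 0) = 0.1
```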
Jeff Chen, Tori Fujinami, Ethan Li
2018
Affordable Self Driving Cars and Robots with Semantic Segmentation
Image segmentation is critical for the camera system in self-driving cars, as it informs the car of object boundaries for precise navigation. We present image cropping as a method to speed up training of a Fully Convolutional Network, and compare it against softmax regression and maximum likelihood methods on the Cityscapes dataset.
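The cropping approach described above can be illustrated as paired random crops of an image and its segmentation label map, so each training step processes a smaller patch. A hypothetical helper, not the paper's code:

```python
import numpy as np

def random_crop(image, label, size=256, rng=None):
    """Crop matching patches from an image and its per-pixel label map.

    Using the same offsets for both keeps pixels and labels aligned,
    which is what lets cropping speed up FCN training without
    corrupting the supervision signal.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    return (image[top:top + size, left:left + size],
            label[top:top + size, left:left + size])

img = np.zeros((512, 1024, 3), dtype=np.uint8)   # dummy RGB frame
lbl = np.zeros((512, 1024), dtype=np.int64)      # dummy label map
crop_img, crop_lbl = random_crop(img, lbl, size=256)
print(crop_img.shape, crop_lbl.shape)  # (256, 256, 3) (256, 256)
```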
Gaurav Bansal, Jeff Chen, Evan Darke
2017