A Neural Chatbot with Personality

Conversational modeling is an important task in natural language processing and machine learning. Previous conversational models have focused on specific domains, such as booking hotels or recommending restaurants. In this paper, we experiment with building an open-domain response generator with personality and identity. We build chatbots that imitate characters in popular TV shows.

We use the sequence-to-sequence (seq2seq) model covered in CS224N. This model has been applied successfully to many natural language processing tasks, such as alignment, translation, and summarization. In our model, the encoder processes a human utterance, and the decoder generates the response to that utterance. Word embeddings are trained jointly with the rest of the model.
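As a rough illustration of this encoder-decoder setup, the sketch below (in PyTorch) shows an encoder that reads the input utterance, a decoder that generates the response, and an embedding layer trained with the rest of the network. The GRU cells, dimensions, and toy data are assumptions for illustration only, not the configuration used in the report.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder: the encoder reads the human utterance,
    the decoder generates the bot's response token by token."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        # Word embeddings are learned jointly with the rest of the model.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode the input utterance into a final hidden state.
        _, hidden = self.encoder(self.embedding(src_ids))
        # Decode the response conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.embedding(tgt_ids), hidden)
        return self.out(dec_out)  # logits over the vocabulary

# Toy usage: a batch of 2 utterances (length 5) and target responses (length 6).
model = Seq2Seq(vocab_size=1000)
src = torch.randint(0, 1000, (2, 5))
tgt = torch.randint(0, 1000, (2, 6))
logits = model(src, tgt)  # shape: (2, 6, 1000)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), tgt.reshape(-1))
loss.backward()
```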

Our approach is inspired by Google's zero-shot multilingual translation system. To evaluate our models, we use both automatic metrics and human judgment. We find that our bots produce reasonable responses and can sometimes hold decent conversations. Building chatbots with a consistent personality or persona remains an open challenge.
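In Google's multilingual system, a special token naming the target language is prepended to the source sentence; a persona could plausibly be injected the same way. The snippet below is a hypothetical preprocessing step in that spirit; the token format and character name are illustrative assumptions, not the report's actual scheme.

```python
def add_persona_token(utterance_tokens, character):
    """Prepend a special token identifying the target character, analogous to
    the target-language token in Google's multilingual translation system.
    The token format and character name here are illustrative only."""
    return [f"<{character}>"] + utterance_tokens

print(add_persona_token(["how", "are", "you", "?"], "sheldon"))
# ['<sheldon>', 'how', 'are', 'you', '?']
```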

