In this work, we investigate sequence-to-sequence models for dialogue generation, where, given an utterance, the model should generate a relevant response. We suggest that three underlying obstacles are inherent to these models. First, the learning objective of the sequence-to-sequence model does not consider the relevance of the response to the given utterance; it only forces the model to reproduce the reference response of the training example.
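
For reference, the learning objective in question is, under the standard formulation (notation assumed here, not taken from the abstract), the per-token maximum-likelihood loss over the reference response, which contains no term that explicitly measures relevance to the input utterance:

% Standard maximum-likelihood (cross-entropy) objective for a seq2seq model.
% x = input utterance, y = reference response of length T, \theta = model parameters.
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_{\theta}\left(y_t \mid y_{<t}, x\right)

Because the loss is computed only against the single reference response y, a model can lower it by producing safe, generic continuations, without any pressure to make the output specific to x.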