1. & Marcinkiewicz

    1. Mentioned In 9 Articles

    2. Statistical Methods in Natural Language Processing

      Statistical Methods in Natural Language Processing Michael Collins AT&T Labs-Research Overview Some NLP problems: Information extraction (Named entities, Relationships between entities, etc.) Finding linguistic structure Part-of-speech tagging, “Chunking”, Parsing Techniques: Log-linear (maximum-entropy) taggers Probabilistic context-free grammars (PCFGs) PCFGs with enriched non-terminals Discriminative methods: Conditional MRFs, Perceptron algorithms, Kernel methods
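
      The slides above list log-linear (maximum-entropy) taggers among the techniques. As a rough illustration of the local decision such a tagger makes, the sketch below computes p(tag | history) as a normalized exponential over weighted indicator features; the tag set, feature templates, and weights are invented for the example and are not taken from the tutorial.

      # A minimal sketch of the local decision in a log-linear (maximum-entropy)
      # tagger.  Tag set and feature templates are toy assumptions.
      import math
      from collections import defaultdict

      TAGS = ["DT", "NN", "VB"]                      # toy tag set (assumption)

      def features(word, prev_tag, tag):
          """Indicator features on the (history, tag) pair."""
          return {f"word={word}|tag={tag}": 1.0,
                  f"prev={prev_tag}|tag={tag}": 1.0,
                  f"suffix3={word[-3:]}|tag={tag}": 1.0}

      def tag_probabilities(weights, word, prev_tag):
          """p(tag | history) = exp(w . f(history, tag)) / sum over all tags."""
          scores = {t: sum(weights[k] * v
                           for k, v in features(word, prev_tag, t).items())
                    for t in TAGS}
          z = sum(math.exp(s) for s in scores.values())
          return {t: math.exp(s) / z for t, s in scores.items()}

      weights = defaultdict(float)
      weights["word=dog|tag=NN"] = 2.0               # toy weight
      print(tag_probabilities(weights, "dog", "DT"))  # NN gets most of the mass
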
    3. Parsing with a Single Neuron: Convolution Kernels for Natural Language Problems

      Parsing with a Single Neuron: Convolution Kernels for Natural Language Problems Michael Collins† and Nigel Duffy‡ †AT&T Labs-Research, Florham Park, New Jersey. mcollins@research.att.com ‡Department of Computer Science, University of California at Santa Cruz. nigeduff@cs.ucsc.edu Abstract This paper introduces new training criteria and algorithms for NLP problems, based on the Support Vector Machine (SVM) approach to classification problems. SVMs can ...
    4. Discriminative Reranking for Natural Language Parsing

      Michael Collins MCOLLINS@RESEARCH.ATT.COM AT&T Labs--Research, Rm A-253, Shannon Laboratory, 180 Park Avenue, Florham Park, NJ 07932 Abstract This paper considers approaches which rerank the output of an existing probabilistic parser. The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. A second model then attempts to improve upon this initial ...
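
      A minimal sketch of the reranking setup the abstract describes: the base parser supplies candidate parses with log-probabilities, and a second linear model adjusts the initial ranking. The candidate format, the feature names, and the mixing weight alpha below are assumptions for illustration, not the features or training procedure of the paper.

      def rerank(candidates, weights, alpha=1.0):
          """candidates: list of (base_log_prob, feature_dict) pairs.
          Returns the index of the best candidate under the combined score."""
          def score(log_prob, feats):
              # base-model log-probability plus a learned linear correction
              return alpha * log_prob + sum(weights.get(k, 0.0) * v
                                            for k, v in feats.items())
          return max(range(len(candidates)), key=lambda i: score(*candidates[i]))

      # usage: pick among three candidate parses of one sentence
      weights = {"rule=NP->DT NN": 0.7, "rule=NP->NN NN": -0.3}
      candidates = [(-12.1, {"rule=NP->DT NN": 2}),
                    (-11.8, {"rule=NP->NN NN": 1}),
                    (-13.0, {})]
      print(rerank(candidates, weights))  # -> 0: the correction flips the base ranking
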
    5. Convolution Kernels for Natural Language

      Michael Collins AT&T Labs--Research 180 Park Avenue, New Jersey, NJ 07932 mcollins@research.att.com Nigel Duffy Department of Computer Science University of California at Santa Cruz nigeduff@cse.ucsc.edu Abstract We describe the application of kernel methods to Natural Language Processing (NLP) problems. In many NLP tasks the objects being modeled are strings, trees, graphs or other discrete structures which require some mechanism to convert them into ...
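
      The abstract is about kernels over discrete structures such as parse trees. The sketch below is a compact all-subtrees tree kernel in that spirit: K(T1, T2) sums, over pairs of nodes with matching productions, a recursively computed count of common subtrees, damped by a decay parameter lam. The Node representation, the decay value, and the toy trees are illustrative assumptions.

      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Node:
          label: str
          children: List["Node"]

      def nodes(t):
          """All nodes of the tree, root first."""
          yield t
          for c in t.children:
              yield from nodes(c)

      def production(n):
          return (n.label, tuple(c.label for c in n.children))

      def is_preterminal(n):
          return len(n.children) == 1 and not n.children[0].children

      def common_subtrees(n1, n2, lam):
          """Weighted count of common subtrees rooted at n1 and n2."""
          if production(n1) != production(n2):
              return 0.0
          if is_preterminal(n1):
              return lam
          result = lam
          for c1, c2 in zip(n1.children, n2.children):
              result *= 1.0 + common_subtrees(c1, c2, lam)
          return result

      def tree_kernel(t1, t2, lam=0.5):
          return sum(common_subtrees(n1, n2, lam)
                     for n1 in nodes(t1) if n1.children
                     for n2 in nodes(t2) if n2.children)

      # usage: two NP fragments that share the "the" determiner subtree
      np1 = Node("NP", [Node("DT", [Node("the", [])]), Node("NN", [Node("dog", [])])])
      np2 = Node("NP", [Node("DT", [Node("the", [])]), Node("NN", [Node("cat", [])])])
      print(tree_kernel(np1, np2))  # 0.5 (DT pair) + 0.75 (NP pair) = 1.25
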
    6. New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron

      New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron Michael Collins AT&T Labs-Research, Florham Park, New Jersey. mcollins@research.att.com Nigel Duffy iKuni Inc., 3400 Hillview Ave., Building 5, Palo Alto, CA 94304. nigeduff@cs.ucsc.edu Abstract This paper introduces new learning algorithms for natural language processing based on the perceptron algorithm. We show how the algorithms can be efficiently applied ...
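
      As a rough sketch of the perceptron-style ranking training the abstract builds on, the loop below scores each candidate structure with a weight vector over explicit features and, whenever the top-scoring candidate is not the correct one, applies a simple additive update toward the correct candidate. This is the primal form with made-up feature dictionaries; the point of the paper is that the same algorithm can be run in the dual, with kernels over trees and tagged sequences standing in for explicit feature vectors.

      from collections import defaultdict

      def perceptron_rank_train(training_set, epochs=5):
          """training_set: list of candidate lists, where each candidate is a
          feature dict and the first candidate in each list is the gold one
          (an assumed data format)."""
          w = defaultdict(float)
          score = lambda f: sum(w[k] * v for k, v in f.items())
          for _ in range(epochs):
              for candidates in training_set:
                  gold = candidates[0]
                  predicted = max(candidates, key=score)
                  if predicted is not gold:
                      # additive update toward the gold structure's features
                      for k, v in gold.items():
                          w[k] += v
                      for k, v in predicted.items():
                          w[k] -= v
          return dict(w)
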
    7. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms

      Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms Michael Collins AT&T Labs-Research, Florham Park, New Jersey. mcollins@research.att.com Abstract We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms ...
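
      A minimal sketch of the training loop the abstract describes: each training sentence is Viterbi-decoded under the current weights, and when the decoded tag sequence differs from the gold one the weights receive a simple additive update (add the features of the gold sequence, subtract the features of the predicted one). The feature templates, tag set, boundary symbol, and data format below are illustrative assumptions, not the exact setup of the paper.

      from collections import defaultdict

      def features(words, i, prev_tag, tag):
          return [f"emit:{words[i]}|{tag}", f"trans:{prev_tag}>{tag}"]

      def viterbi(words, tags, w):
          """Highest-scoring tag sequence under weights w (standard DP,
          storing whole back-paths for brevity)."""
          V = [{t: (sum(w[f] for f in features(words, 0, "<s>", t)), ["<s>", t])
                for t in tags}]
          for i in range(1, len(words)):
              V.append({})
              for t in tags:
                  arc = lambda p: V[i - 1][p][0] + sum(w[f] for f in features(words, i, p, t))
                  best_prev = max(tags, key=arc)
                  V[i][t] = (arc(best_prev), V[i - 1][best_prev][1] + [t])
          best_last = max(tags, key=lambda t: V[-1][t][0])
          return V[-1][best_last][1][1:]            # drop the <s> boundary tag

      def train(sentences, tags, epochs=5):
          """sentences: list of (words, gold_tags) pairs."""
          w = defaultdict(float)
          for _ in range(epochs):
              for words, gold in sentences:
                  pred = viterbi(words, tags, w)
                  if pred != gold:
                      prev_g = prev_p = "<s>"
                      for i in range(len(words)):
                          for f in features(words, i, prev_g, gold[i]):
                              w[f] += 1.0
                          for f in features(words, i, prev_p, pred[i]):
                              w[f] -= 1.0
                          prev_g, prev_p = gold[i], pred[i]
          return w

      # usage: two toy training sentences, then tag an unseen one
      data = [(["the", "dog", "barks"], ["DT", "NN", "VB"]),
              (["a", "cat", "sleeps"], ["DT", "NN", "VB"])]
      w = train(data, tags=["DT", "NN", "VB"])
      print(viterbi(["the", "cat", "barks"], ["DT", "NN", "VB"], w))  # ['DT', 'NN', 'VB']
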
    8. Machine Learning Methods in Natural Language Processing

      Machine Learning Methods in Natural Language Processing Michael Collins MIT CSAIL Some NLP Problems Information extraction – Named entities – Relationships between entities Finding linguistic structure – Part-of-speech tagging – Parsing Machine translation Common Themes Need to learn mapping from one discrete structure to another – Strings to hidden state sequences Named-entity extraction, part-of-speech tagging – Strings to strings Machine translation – Strings to underlying trees ...
  1. Categories

    1. Default:

      Discourse, Entailment, Machine Translation, NER, Parsing, Segmentation, Semantic, Sentiment, Summarization, WSD