    1. Humans can still extort more money from me than machines can

      Like lots of folks, I wonder sometimes about AI and jobs. I'm neither a believer that there's a catastrophe coming up, nor am I a believer that everything will magically work out and we're not entering a world with new forms of inequities. I did have an experience recently that made me think somewhat differently about what sorts of jobs are at risk. I was visiting family in LA, and renting a car (because LA). I can't remember what company it was, but they didn't have "live people" at the booth.

    2. Structured prediction is *not* RL

      It's really strange to look back now over the past ten to fifteen years and see a very small pendulum that no one really cares about swing around. I've spent the last ten years trying to convince you that structured prediction is RL; now I'm going to tell you that was a lie :). Short Personal History: Back in 2005, John Langford, Daniel Marcu and I had a workshop paper at NIPS on relating structured prediction to reinforcement learning.

    3. Initial thoughts on fairness in paper recommendation?

      There are a handful of definitions of "fairness" lying around, of which the most common is disparate impact: the rate at which you hire members of a protected category should be at least 80% of the rate you hire members not of that category. (Where "hire" is, for our purposes, a prediction problem, and 80% is arbitrary.) DI has all sorts of issues, as do many other notions of fairness, but all the ones I've seen rely on a pre-ordained notion of "protected category".
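The 80% rule described above is just a ratio test. As a minimal sketch (toy data and function names are mine, not from the post), assuming binary hire decisions and a binary protected attribute:

```python
# Minimal sketch of the four-fifths (80%) disparate impact check.
# Data and names are hypothetical, purely for illustration.
def disparate_impact(hired, protected):
    """Ratio of hire rates: protected group over unprotected group.

    Assumes both groups are non-empty. hired/protected are parallel
    lists of 0/1 flags.
    """
    n_prot = sum(protected)
    p_rate = sum(h for h, g in zip(hired, protected) if g) / n_prot
    u_rate = sum(h for h, g in zip(hired, protected) if not g) / (len(protected) - n_prot)
    return p_rate / u_rate

hired     = [1, 0, 1, 1, 0, 1, 1, 0]
protected = [1, 1, 1, 1, 0, 0, 0, 0]  # first four applicants are in the protected group
ratio = disparate_impact(hired, protected)   # 0.75 / 0.5 = 1.5
passes_di = ratio >= 0.8                     # the (arbitrary) 80% threshold
```

Note that, as the excerpt says, this presupposes you have already fixed which category counts as "protected".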

    4. Trying to Learn How to be Helpful (IWD++)

      Over the past week, in honor of International Women's Day, I had several posts, broadly around the topic of women in STEM. Previous posts in this series include: Awesome People: Bonnie Dorr, Awesome People: Ellen Riloff, Awesome People: Lise Getoor, Awesome People: Karen Spärck Jones and Awesome People: Kathy McKeown. (Today's is delayed one day, sorry!) I've been incredibly fortunate to have a huge number of influential women in my life and my career.

    5. Awesome people: Kathy McKeown (IWD++)

      To honor women this International Women's Day, I have several posts, broadly around the topic of women in STEM. Previous posts in this series include: Awesome People: Bonnie Dorr, Awesome People: Ellen Riloff, Awesome People: Lise Getoor and Awesome People: Karen Spärck Jones. Continuing on the topic of "who has been influential in my career and helped me get where I am?"

    6. Awesome people: Karen Spärck Jones (IWD++)

      To honor women this International Women's Day, I have several posts, broadly around the topic of women in STEM. Previous posts in this series include: Awesome People: Bonnie Dorr, Awesome People: Ellen Riloff and Awesome People: Lise Getoor. Today is the continuation of the theme "who has been influential in my career and helped me get where I am?" and in that vein, I want to talk about another awesome person: Karen Spärck Jones.

    7. Awesome people: Lise Getoor (IWD++)

      To honor women this International Women's Day, I have several posts, broadly around the topic of women in STEM. Previous posts in this series include: Awesome People: Bonnie Dorr and Awesome People: Ellen Riloff. Today is the continuation of the theme "who has been influential in my career and helped me get where I am?" and in that vein, I want to talk about another awesome person: Lise Getoor.

    8. Awesome people: Ellen Riloff (IWD++)

      To honor women this International Women's Day, I have several posts, broadly around the topic of women in STEM. Previous posts in this series include: Awesome People: Bonnie Dorr. Today is the continuation of the theme "who has been influential in my career and helped me get where I am?" and in that vein, I want to talk about another awesome person: Ellen Riloff.

    9. Awesome people: Bonnie Dorr (IWD++)

      To honor women this International Women's Day, I have several posts, broadly around the topic of women in STEM. This is the first, and the topic is "who has been influential in my career and helped me get where I am?" There are many such people, and any list will be woefully incomplete, but today I'm going to highlight Bonnie Dorr (who founded the CLIP lab together with Amy Weinberg and Louiqa Raschid, and who is also a recent fellow of the ACL!).

    10. Should the NLP and ML Communities have a Code of Ethics?

      At ACL this past summer, Dirk Hovy presented a very nice opinion paper on Ethics in NLP. There's also been a great surge of interest in FAT-everything (FAT = Fairness, Accountability and Transparency), typified by FATML, but there are others. And yet, despite this recent interest in ethics-related topics, none of the major organizations that I'm involved in (the ACL, the NIPS foundation, or the IMLS) has a Code of Ethics.

    11. Whence your reward function?

      I ran a grad seminar in reinforcement learning this past semester, which was a lot of fun and also gave me an opportunity to catch up on some stuff I'd been meaning to learn but hadn't had a chance to, and old stuff I'd largely forgotten about. It's hard to believe, but my first RL paper was eleven years ago, at a NIPS workshop where Daniel Marcu, John Langford and I had a first paper on reducing structured prediction to reinforcement learning, essentially by running Conservative Policy Iteration. (This work eventually became Searn.) Most of my own ...

    12. Workshops and mini-conferences

      I've attended and organized two types of workshops in my time, one of which I'll call the ACL-style workshop (or "mini-conference"), the other of which I'll call the NIPS-style workshop (or "actual workshop"). Of course this is a continuum, and some workshops at NIPS are ACL-style and vice versa. As I've already given away with phrasing, I much prefer the NIPS style.

    13. Bias in ML, and Teaching AI

      Yesterday I gave a super duper high-level 12-minute presentation about some issues of bias in AI. I should emphasize (if it's not clear) that this is something I am not an expert in; most of what I know is from reading great papers by other people (there is a completely non-academic sample at the end of this post). This blog post is a variant of that presentation. Structure: most of the images below are prompts for talking points, which are generally written below the corresponding image.

    14. Debugging machine learning

      I've been thinking, mostly in the context of teaching, about how to specifically teach debugging of machine learning. Personally I find it very helpful to break things down in terms of the usual error terms: Bayes error (how much error is there in the best possible classifier), approximation error (how much do you pay for restricting to some hypothesis class), estimation error (how much do you pay because you only have finite samples), optimization error (how much do you pay because you didn't find a global optimum to your optimization problem). I've generally found that trying to ...
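The four error terms listed in the excerpt can be written as a telescoping sum. The notation here is mine, not the post's: with $f^*$ the Bayes-optimal predictor, $f^*_{\mathcal{H}}$ the best predictor in the hypothesis class $\mathcal{H}$, $\hat{f}_n$ the empirical risk minimizer on $n$ samples, and $\tilde{f}$ the predictor your optimizer actually returns:

```latex
\mathrm{err}(\tilde{f})
  = \underbrace{\mathrm{err}(f^*)}_{\text{Bayes error}}
  + \underbrace{\mathrm{err}(f^*_{\mathcal{H}}) - \mathrm{err}(f^*)}_{\text{approximation error}}
  + \underbrace{\mathrm{err}(\hat{f}_n) - \mathrm{err}(f^*_{\mathcal{H}})}_{\text{estimation error}}
  + \underbrace{\mathrm{err}(\tilde{f}) - \mathrm{err}(\hat{f}_n)}_{\text{optimization error}}
```

Each debugging step then targets one term: richer features or models attack the approximation term, more data attacks the estimation term, and better training attacks the optimization term.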

    15. Feature (or architecture) ablation

      I wrote my first (and only) coreference paper back in 2005. At the time, my goals were to (a) do well on coref, (b) integrate background knowledge (like "Bush" is "president") using simple techniques, and (c) try to figure out how important different (types of) features were for making coreference decisions. For the last, there is a reasonably extensive feature-type ablation experiment using backward selection (which I trust far more than forward selection).
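Backward selection, as used in the ablation experiment mentioned above, can be sketched in a few lines. This is a generic sketch under my own naming, not the paper's actual setup: repeatedly drop whichever feature group hurts dev performance least when removed.

```python
# Hedged sketch of backward feature-type ablation (names hypothetical).
def backward_ablation(groups, evaluate):
    """groups: feature-group names; evaluate: fn(set of groups) -> dev score.

    Returns the trajectory of (active groups, score); the drop in score at
    each step indicates the removed group's marginal importance.
    """
    active = set(groups)
    history = [(frozenset(active), evaluate(active))]
    while len(active) > 1:
        # Try removing each remaining group; keep the best-scoring reduced set.
        candidates = [(g, evaluate(active - {g})) for g in active]
        removed, score = max(candidates, key=lambda c: c[1])
        active.remove(removed)
        history.append((frozenset(active), score))
    return history

# Toy usage: scores are additive in made-up group weights.
weights = {"lexical": 3, "syntactic": 2, "knowledge": 1}
trajectory = backward_ablation(list(weights), lambda s: sum(weights[g] for g in s))
```

The reason to trust backward over forward selection, as the excerpt hints, is that each group is evaluated in the context of all the others, so redundant-but-useful groups are not prematurely discarded.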

    16. Some papers I liked at ACL 2016

      A conference just ended, so it's that time of year! Here are some papers I liked with the usual caveats about recall. Before I go to the list, let me say that I really really enjoyed ACL this year. I was completely on the fence about going, and basically decided to go only because of giving a talk at Repl4NLP , and wanted to attend the business meeting for the discussion of diversity in the ACL community, led by Joakim Nivre with an amazing report that he, Lyn Walker, Yejin Choi and Min-Yen Kan put together. (Likely I'll post ...

    17. Fast & easy baseline text categorization with vw

      About a month ago, the paper Bag of Tricks for Efficient Text Categorization was posted to arxiv. I found it thanks to Yoav Goldberg's rather incisive tweet. Yoav is basically referring to the fact that the paper is all about (a) hashing features, (b) bigrams, and (c) a projection that doesn't totally make sense to me; of these, (a) vw does by default, (b) requires "--ngrams 2", and (c) I don't think is necessary.

    18. A quick comment on structured input vs structured output learning

      When I think of structured input models, I typically think of things like kernels over discrete input spaces. For instance, the famous all-substrings kernel for which K(d1,d2) effectively counts the number of common substrings in two documents, without spending exponential time enumerating them all. Of course there are many more ways of thinking about structured inputs: tree-to-string machine translation has a tree structured input.
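For intuition, the all-substrings kernel can be written down naively. This sketch (mine, not from the post) enumerates substrings explicitly, which costs quadratically many substrings per document; the whole point of the real kernel is to compute the same quantity efficiently via suffix structures, without materializing them.

```python
from collections import Counter

def all_substrings_kernel(s, t):
    """Naive K(s, t): sum over substrings u of count_s(u) * count_t(u).

    Illustration only -- the practical kernel computes this without
    enumerating the substrings.
    """
    def substr_counts(x):
        return Counter(x[i:j] for i in range(len(x))
                               for j in range(i + 1, len(x) + 1))
    cs, ct = substr_counts(s), substr_counts(t)
    return sum(n * ct[u] for u, n in cs.items())
```

For example, K("ab", "ab") counts "a", "b", and "ab" once each, giving 3.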

    19. Decoding (neural?) representations

      I remember back in grad school days some subset of the field was thinking about the following question. I train an unsupervised HMM on some language data to get something-like-part-of-speech tags out. And naturally the question arises: these tags that come out... what are they actually encoding? At the time, there were essentially three ways of approaching this question that I knew about:
    20. Some picks from NAACL 2016

      Usual caveats: didn't see all talks, didn't read all papers, there's lots of good stuff at NAACL that isn't listed here! That said, here are some papers I particularly liked at NAACL, with some comments. Please add comments with papers you liked! Anyone who has taught has suffered the following dilemma. You ask students for feedback throughout the course, and you have to provide free text because if you could anticipate their problems, you'd have addressed them already.

    21. Rating the quality of reviews, after the fact

      Groan groan groan reviewers are horrible people. Not you and me. Those other reviewers over there! tldr: In general we actually don't think our reviews are that bad, though of course it's easy to remember the bad ones. Author perception of review quality is colored by, but not determined by, the overall accept/reject decision and/or the overall score that review gave to the paper.

    22. Language bias and black sheep

      Tolga Bolukbasi and colleagues recently posted an article about bias in what is learned with word2vec, on the standard Google News crawl (h/t Jack Clark). Essentially what they found is that word embeddings reflect stereotypes regarding gender (for instance, "nurse" is closer to "she" than "he" and "hero" is the reverse) and race ("black male" is closest to "assaulted" and "white male" to "entitled"). This is not hugely surprising, and it's nice to see it confirmed.
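The "closer to" comparison behind those findings is just cosine similarity between embedding vectors. Here is a toy illustration with made-up 2-d vectors chosen to mimic the reported effect (these are emphatically not real word2vec embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 2-d vectors, hand-picked only to illustrate the comparison.
emb = {
    "she":   (1.0, 0.1),
    "he":    (0.1, 1.0),
    "nurse": (0.9, 0.3),
    "hero":  (0.2, 0.8),
}
nurse_bias = cosine(emb["nurse"], emb["she"]) - cosine(emb["nurse"], emb["he"])
hero_bias  = cosine(emb["hero"],  emb["she"]) - cosine(emb["hero"],  emb["he"])
# nurse_bias > 0 ("nurse" leans toward "she"); hero_bias < 0 (the reverse)
```

With real embeddings the same subtraction, computed over the Google News vectors, is what surfaces the stereotyped associations.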
