Articles in category: Summarization

    1. Video summarization using sparse basis function combination

      A method for determining a video summary from a video sequence including a time sequence of video frames, comprising: defining a global feature vector representing the entire video sequence; selecting a plurality of subsets of the video frames; extracting a frame feature vector for each video frame in the selected subsets of video frames; defining a set of basis functions, wherein each basis function is associated with the frame feature vectors for the video frames in a particular subset of video frames; using a data processor to automatically determine a sparse combination of the basis functions representing the global feature ...
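
      The claim above describes choosing a sparse combination of subset-level basis vectors that reconstructs a global feature vector. The sketch below is a minimal matching-pursuit-style illustration of that idea, not the patented procedure; the feature values, the subset construction, and `greedy_sparse_combination` are invented for the example.

```python
import numpy as np

def greedy_sparse_combination(basis, global_feat, k=3):
    """Greedily pick k basis vectors whose combination approximates the
    global feature vector (a matching-pursuit style sketch only)."""
    residual = global_feat.astype(float)
    chosen = []
    for _ in range(k):
        # score each unused basis vector by correlation with the residual
        scores = [abs(b @ residual) / (np.linalg.norm(b) + 1e-9) for b in basis]
        for i in chosen:
            scores[i] = -np.inf
        best = int(np.argmax(scores))
        chosen.append(best)
        b = basis[best]
        residual = residual - (b @ residual) / (b @ b + 1e-9) * b
    return chosen  # indices of frame subsets to include in the summary

# Toy example: 6 frame subsets, each represented by an 8-dim mean feature.
rng = np.random.default_rng(0)
frame_subset_features = rng.normal(size=(6, 8))        # hypothetical features
global_feature = frame_subset_features[[1, 4]].sum(0)  # "whole video" vector
print(greedy_sparse_combination(frame_subset_features, global_feature, k=2))
```
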
    2. Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse.

      Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse. Artif Intell Med. 2012 Nov;56(3):157-72. Authors: Hunter J, Freer Y, Gatt A, Reiter E, Sripada S, Sykes C. Abstract: INTRODUCTION: Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). METHODS: A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in ...
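
      BT-Nurse is a full data-to-text pipeline; the toy sketch below only illustrates the general data-to-text pattern of turning structured patient-record values into shift-summary sentences. The field names, thresholds, and wording are invented and are not taken from the BT-Nurse system.

```python
# Toy data-to-text sketch: map structured record values to sentences.
record = {"spo2_min": 88, "spo2_mean": 95, "ventilation": "CPAP",
          "heart_rate_mean": 152, "bradycardia_events": 2}

def respiratory_summary(rec):
    sents = [f"The infant remained on {rec['ventilation']} throughout the shift."]
    if rec["spo2_min"] < 90:
        sents.append(f"SpO2 dipped to {rec['spo2_min']}% "
                     f"(mean {rec['spo2_mean']}%).")
    return " ".join(sents)

def cardiovascular_summary(rec):
    sents = [f"Mean heart rate was {rec['heart_rate_mean']} bpm."]
    if rec["bradycardia_events"]:
        sents.append(f"There were {rec['bradycardia_events']} bradycardic episodes.")
    return " ".join(sents)

print(respiratory_summary(record))
print(cardiovascular_summary(record))
```
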
    3. Systems and methods for presenting a content summary of a media item to a user based on a position within the media item

      Systems and methods for presenting a content summary of a media item to a user based on a position within the media item are disclosed herein. According to an aspect, a method may include receiving identification of a position within a media item residing on an electronic device. For example, the identified position may be a bookmarked position within an e-book residing on an e-book reader. The method may also include generating a content summary for a portion of the media item based on the identified position. For example, an electronic device may dynamically generate a content summary based on ...
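
      One plausible reading of the method, sketched below, is to take the portion of text before the bookmarked position and extract its highest-scoring sentences as a recap. The frequency-based scorer and `summary_up_to` are illustrative stand-ins, not the claimed implementation.

```python
import re
from collections import Counter

def summary_up_to(text, bookmark_char_offset, max_sentences=2):
    """Extractive recap of everything before the bookmarked position."""
    seen = text[:bookmark_char_offset]
    sentences = re.split(r"(?<=[.!?])\s+", seen.strip())
    freq = Counter(re.findall(r"[a-z']+", seen.lower()))
    def score(s):
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks) / (len(toks) + 1)
    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # keep original order so the recap reads chronologically
    return " ".join(s for s in sentences if s in top)

book = ("Ada found the map in the attic. The map showed a door under the river. "
        "She told no one. Days later she packed a lamp and rope. "
        "The door was real, and it was already open.")
print(summary_up_to(book, bookmark_char_offset=len(book) // 2))
```
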
    4. Character-based automated media summarization

      Methods, devices, systems and tools are presented that allow the summarization of text, audio, and audiovisual presentations, such as movies, into less lengthy forms. High-content media files are shortened in a manner that preserves important details, by splitting the files into segments, rating the segments, and reassembling preferred segments into a final abridged piece. Summarization of media can be customized by user selection of criteria, and opens new possibilities for delivering entertainment, news, and information in the form of dense, information-rich content that can be viewed by means of broadcast or cable distribution, "on-demand" distribution, internet and cell phone digital ...
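
      The split/rate/reassemble loop described above can be sketched generically. The segment scorer and the length budget below are placeholders rather than the patented rating criteria.

```python
def abridge(segments, score, budget):
    """Keep the highest-rated segments that fit the budget, then
    reassemble them in their original order."""
    ranked = sorted(range(len(segments)),
                    key=lambda i: score(segments[i]), reverse=True)
    kept, used = set(), 0
    for i in ranked:
        length = len(segments[i])
        if used + length <= budget:
            kept.add(i)
            used += length
    return [segments[i] for i in sorted(kept)]

# Toy example: "segments" are scene descriptions, rated by a keyword score.
scenes = ["opening credits roll", "hero meets the informant",
          "long driving montage", "the heist goes wrong",
          "quiet epilogue at the docks"]
keywords = {"hero", "informant", "heist", "wrong"}
score = lambda s: sum(w in keywords for w in s.split())
print(abridge(scenes, score, budget=60))
```
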
    5. EpiDEA: Extracting Structured Epilepsy and Seizure Information from Patient Discharge Summaries for Cohort Identification.

      EpiDEA: Extracting Structured Epilepsy and Seizure Information from Patient Discharge Summaries for Cohort Identification. AMIA Annu Symp Proc. 2012;2012:1191-200. Authors: Cui L, Bozorgi A, Lhatoo SD, Zhang GQ, Sahoo SS. Abstract: Sudden Unexpected Death in Epilepsy (SUDEP) is a poorly understood phenomenon. Patient cohorts to power statistical studies in SUDEP need to be drawn from multiple centers due to the low rate of reported SUDEP incidents. However, the current practice of manually reviewing Epilepsy Monitoring Unit (EMU) patient discharge summaries is time-consuming, tedious, and not scalable for large studies. To address this challenge in the multi-center ...
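
      EpiDEA relies on a full clinical NLP pipeline; the regex sketch below only illustrates the general pattern of pulling a few structured fields out of free-text discharge summaries. The field names and patterns are invented, not EpiDEA's actual extraction rules.

```python
import re

NOTE = ("Patient with intractable epilepsy. Seizure onset at age 12. "
        "Seizure frequency: 3 per month. Currently on levetiracetam.")

# Hypothetical field patterns for illustration only.
FIELDS = {
    "onset_age": r"onset at age (\d+)",
    "seizure_frequency": r"frequency:\s*(\d+\s*per\s*\w+)",
    "medication": r"on (levetiracetam|lamotrigine|carbamazepine)",
}

def extract(note):
    out = {}
    for field, pattern in FIELDS.items():
        m = re.search(pattern, note, re.IGNORECASE)
        if m:
            out[field] = m.group(1)
    return out

print(extract(NOTE))  # structured record usable for cohort queries
```
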
    6. Summarizing streams of information

      Concepts and technologies are described herein for summarizing streams of information. A stream of information is obtained and analyzed. One or more entities are identified in the stream. The data in the stream is grouped into one or more clusters corresponding to the identified entities. The data in the clusters is summarized, and a timeline corresponding to the data in the cluster is determined. In some embodiments, a format can be selected for presentation of the summarized stream data. The data in the stream can be formatted in the selected format, and the summarized data can be presented in the ...
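
      A compact sketch of the pipeline shape described above (identify entities, cluster by entity, summarize each cluster, order it as a timeline). The capitalized-word "entity" detector and the first-item "summary" are naive stand-ins for the real components.

```python
import re
from collections import defaultdict

stream = [
    ("2013-01-02", "NASA announces a new rover mission."),
    ("2013-01-05", "The rover mission from NASA gets a launch window."),
    ("2013-01-07", "Intel reports quarterly earnings."),
]

def entities(text):
    # Naive stand-in for entity recognition: capitalized tokens.
    return set(re.findall(r"\b[A-Z][A-Za-z]+\b", text)) - {"The", "A"}

clusters = defaultdict(list)          # entity -> [(timestamp, text), ...]
for ts, text in stream:
    for ent in entities(text):
        clusters[ent].append((ts, text))

for ent, items in clusters.items():
    items.sort()                      # timeline for the cluster
    summary = items[0][1]             # stand-in: earliest item as "summary"
    print(f"{ent}: {summary}  ({len(items)} items, {items[0][0]}..{items[-1][0]})")
```
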
    7. An Approach to Extraction of Linguistic Recommendation Rules – Application of Modal Conditionals Grounding

      An approach to linguistic summarization of distributed databases is considered. It is assumed that summarizations are produced under incomplete access to the existing data; to cope with this, the stored data are processed only partially (sampled). In consequence, the summarizations become equivalent to natural language modal conditionals with modal operators of knowledge, belief, and possibility. To capture this case of knowledge processing, an original theory for the grounding of modal languages is applied. Simple implementation scenarios and related computational techniques are suggested to illustrate a possible utilization of this model of linguistic summarization. Content Type: Book Chapter. Pages: 249-258. DOI: 10 ...
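
      The chapter's grounding theory is not reproduced here; the sketch below only shows the simpler underlying move of estimating a summary's support from a data sample and attaching a modal operator (knowledge, belief, possibility) by threshold. The thresholds and wording are arbitrary placeholders.

```python
import random

def modal_summary(sample, predicate, label):
    """Estimate how often a predicate holds on a data sample and wrap the
    summary in a modal operator chosen by (arbitrary) thresholds."""
    support = sum(predicate(row) for row in sample) / len(sample)
    if support >= 0.9:
        modal = "It is known that"
    elif support >= 0.6:
        modal = "It is believed that"
    elif support >= 0.2:
        modal = "It is possible that"
    else:
        return None
    return f"{modal} {label} (support {support:.2f} on the sample)"

random.seed(1)
orders = [{"value": random.randint(10, 500)} for _ in range(1000)]
sample = random.sample(orders, 50)   # partial (sampled) access to the data
print(modal_summary(sample, lambda r: r["value"] > 100, "most orders are large"))
```
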
    8. Building a Lexically and Semantically-Rich Resource for Paraphrase Processing

      In this paper, we present a methodology for building a lexically and semantically rich resource for paraphrase processing in French. The paraphrase extraction model is rule-based and is guided by means of predicates. The extraction process comprises four main processing modules: 1. derived-word extraction; 2. sentence extraction; 3. chunking and head-word identification; and 4. predicate-argument structure mapping. We use a corpus provided by an agro-food industry enterprise to test the four modules of the paraphrase structure extractor. We explain how each processing module functions. Content Type: Book Chapter. Pages: 138-143. DOI: 10.1007/978-3-642-33983-7_14. Authors: Wannachai Kampeera, Centre Tesnière - Équipe d’Accueil EA ...
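
      The four-module pipeline enumerated above lends itself to a simple skeleton; the function bodies below are placeholders that sketch the data flow only. The module names follow the abstract, but their internals (and the toy French corpus) are invented.

```python
def extract_derived_words(lexicon=None):
    # Module 1 (placeholder): group morphologically related words.
    return {"livraison": ["livrer", "livré"], "production": ["produire"]}

def extract_sentences(corpus, derived):
    # Module 2 (placeholder): keep sentences containing the derived forms.
    targets = {w for forms in derived.values() for w in forms} | set(derived)
    return [s for s in corpus if targets & set(s.lower().split())]

def chunk_and_find_heads(sentences):
    # Module 3 (placeholder): pretend the second token is the head word.
    return [(s, s.split()[1]) for s in sentences]

def map_predicate_arguments(chunked):
    # Module 4 (placeholder): pair each head with its sentence as "arguments".
    return [{"predicate": head, "arguments": sent} for sent, head in chunked]

corpus = ["la livraison arrive demain", "ils vont produire plus de lait"]
paraphrase_entries = map_predicate_arguments(
    chunk_and_find_heads(extract_sentences(corpus, extract_derived_words())))
print(paraphrase_entries)
```
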
    9. Extracting lexical and phrasal paraphrases: a review of the literature

      Abstract: Recent advances in natural language processing have increased the popularity of paraphrase extraction. Most of the attention, however, has focused on extraction methods alone, without taking the resource factor into consideration. Yet the two are strongly related, and the resource factor plays an equally important role in paraphrase extraction. In addition, almost all previous studies have focused on corpus-based methods that extract paraphrases from corpora based solely on syntactic similarity. Despite the popularity of corpus-based methods, a considerable amount of research has consistently shown that these methods are vulnerable to ...
    10. System and method for automatically summarizing fine-grained opinions in digital text

      A method and system for automatically summarizing fine-grained opinions in digital text are disclosed. Accordingly, a digital text is analyzed for the purpose of extracting all opinion expressions found in the text. Next, the extracted opinion expressions (referred to herein as opinion frames) are analyzed to generate opinion summaries. In forming an opinion summary, those opinion frames sharing in common an opinion source and/or opinion topic may be combined, such that an overall opinion summary indicates an aggregate opinion held by the common source toward the common topic.
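
      The aggregation step described above, combining opinion frames that share a source and/or topic, can be sketched directly. The frame fields and the polarity averaging are assumptions made for illustration.

```python
from collections import defaultdict

# Hypothetical opinion frames: (source, topic, polarity in [-1, 1]).
frames = [
    ("the reviewer", "battery life", +0.8),
    ("the reviewer", "battery life", +0.4),
    ("the reviewer", "screen", -0.6),
    ("most users",   "screen", +0.5),
]

grouped = defaultdict(list)
for source, topic, polarity in frames:
    grouped[(source, topic)].append(polarity)

for (source, topic), polarities in grouped.items():
    avg = sum(polarities) / len(polarities)
    stance = "positive" if avg > 0 else "negative"
    print(f"{source} is {stance} about {topic} "
          f"({len(polarities)} opinion(s), mean polarity {avg:+.2f})")
```
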
    11. Paraphrase Based Similar Expression Generation

      In this paper we propose a novel paraphrase-based method to improve similar-expression generation, which is widely used in writing assistance systems. Users' queries are paraphrased into multiple expressions that cover abundant example sentences in the system. Three paraphrase methods are presented to satisfy users' different intentions. First, we employ a statistical collocation model to generate collocations for two-word queries. Then we use a phrase-based paraphrase model and a dependency-based paraphrase model to substitute contiguous phrases and long-distance collocations in multi-word queries (> 2 words). The experimental results indicate ...
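
      A toy version of the query-expansion step, assuming a hand-written phrase table in place of the statistical collocation and paraphrase models the paper actually learns:

```python
# Hypothetical phrase table; the paper learns these substitutions statistically.
PHRASE_TABLE = {
    "take part in": ["participate in", "join"],
    "a great deal of": ["plenty of", "a lot of"],
}

def paraphrase_query(query):
    """Expand a query into multiple expressions by substituting known phrases."""
    variants = {query}
    for phrase, alternatives in PHRASE_TABLE.items():
        if phrase in query:
            for alt in alternatives:
                variants.add(query.replace(phrase, alt))
    return sorted(variants)

for q in paraphrase_query("take part in the experiment"):
    print(q)   # each variant can then retrieve example sentences
```
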
    12. Analyzing the capabilities of crowdsourcing services for text summarization

      Abstract: This paper presents a detailed analysis of the use of crowdsourcing services for the text summarization task in the tourist domain. In particular, our aim is to retrieve relevant information about a place or an object pictured in an image in order to provide a short summary that will be of great help to a tourist. For tackling this task, we propose a broad set of experiments using crowdsourcing services, which could serve as a reference for others who also want to rely on crowdsourcing. From the analysis carried out through our experimental setup and ...
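
      Crowdsourced summarization experiments typically come down to aggregating many workers' judgments; the vote-counting sketch below is a generic illustration and is not the paper's actual experimental setup.

```python
from collections import Counter

sentences = [
    "The cathedral was built in the 13th century.",
    "Tickets can be bought at the side entrance.",
    "It is the tallest building in the old town.",
]
# Each crowd worker marks the sentences they consider summary-worthy.
worker_votes = [
    {0, 2},
    {0},
    {0, 2},
    {1, 2},
]

votes = Counter(i for marked in worker_votes for i in marked)
threshold = len(worker_votes) / 2          # simple majority
summary = [sentences[i] for i in range(len(sentences)) if votes[i] > threshold]
print(" ".join(summary))
```
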
    13. Presenting multiple document summarization with search results

      Methods and computer-readable media are provided for summarizing the content of a plurality of documents and presenting the results of such multiple-document summarization to a user in such a way that the user is able to quickly and easily discern what, if any, unique information each document contains. Each sentence of each document is assigned a score based upon the perceived importance of the information contained therein. The sentences receiving the highest scores are then compared with one another to identify and remove any duplicate sentences. The remaining high-scoring sentences are extracted from the corresponding documents and presented to the ...
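
      The score-then-deduplicate procedure described above can be sketched as follows; the word-frequency scorer and the Jaccard overlap test are generic stand-ins for the patented scoring and duplicate detection.

```python
import re
from collections import Counter

def sentences(doc):
    return [s for s in re.split(r"(?<=[.!?])\s+", doc.strip()) if s]

def tokens(s):
    return set(re.findall(r"[a-z']+", s.lower()))

def summarize(docs, top_k=3, overlap=0.8):
    all_sents = [s for d in docs for s in sentences(d)]
    freq = Counter(t for s in all_sents for t in tokens(s))
    scored = sorted(all_sents,
                    key=lambda s: sum(freq[t] for t in tokens(s)),
                    reverse=True)
    picked = []
    for s in scored:
        ts = tokens(s)
        # drop near-duplicates of already selected sentences
        if any(len(ts & tokens(p)) / max(len(ts | tokens(p)), 1) > overlap
               for p in picked):
            continue
        picked.append(s)
        if len(picked) == top_k:
            break
    return picked

docs = ["The storm hit the coast on Monday. Power was lost in three towns.",
        "The storm hit the coast on Monday. Repairs may take a week."]
print(summarize(docs, top_k=2))
```
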
    14. Text summarization as a decision support aid

      Abstract: Background: PubMed data can potentially provide decision support information, but PubMed was not exclusively designed to be a point-of-care tool. Natural language processing applications that summarize PubMed citations hold promise for extracting decision support information. The objective of this study was to evaluate the efficiency of a text summarization application called Semantic MEDLINE, enhanced with a novel dynamic summarization method, in identifying decision support data. Methods: We downloaded PubMed citations addressing the prevention and drug treatment of four disease topics. We then processed the citations with Semantic MEDLINE, enhanced with the dynamic summarization method. We also processed the citations ...
    15. A Zipf-Like Distant Supervision Approach for Multi-document Summarization Using Wikinews Articles

      This work presents a sentence ranking strategy based on distant supervision for the multi-document summarization problem. Due to the difficulty of obtaining large training datasets formed by document clusters and their respective human-made summaries, we propose building a training and a testing corpus from Wikinews. Wikinews articles are modeled as “distant” summaries of their cited sources, considering that first sentences of Wikinews articles tend to summarize the event covered in the news story. Sentences from cited sources are represented as tuples of numerical features and labeled according to a relationship with the given distant summary that is based on the ...
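
      A minimal sketch of the distant-supervision labelling idea: sentences from a cited source are labelled by their overlap with the lead of the Wikinews article that cites them. The overlap threshold and the features are placeholders, and the paper's Zipf-based weighting is not reproduced.

```python
import re

def tokens(s):
    return set(re.findall(r"[a-z']+", s.lower()))

wikinews_lead = ("A magnitude 6.1 earthquake struck off the coast on Friday, "
                 "prompting a brief tsunami warning.")
cited_source = [
    "A 6.1 magnitude earthquake struck off the coast early on Friday.",
    "Local schools were closed for the annual festival.",
    "Authorities issued a brief tsunami warning, later lifted.",
]

lead = tokens(wikinews_lead)
training_examples = []
for position, sent in enumerate(cited_source):
    ts = tokens(sent)
    overlap = len(ts & lead) / len(ts)
    label = 1 if overlap >= 0.5 else 0      # "distant" summary-worthiness label
    features = {"overlap": round(overlap, 2), "length": len(ts),
                "position": position}
    training_examples.append((features, label))

for ex in training_examples:
    print(ex)   # such pairs would train the sentence-ranking model
```
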
    16. Cross-lingual training of summarization systems using annotated corpora in a foreign language

      Abstract: The increasing trend of cross-border globalization and acculturation requires text summarization techniques to work equally well for multiple languages. However, only some of the automated summarization methods can be defined as “language-independent,” i.e., not based on any language-specific knowledge. Such methods can be used for multilingual summarization, defined in Mani (Automatic summarization. Natural language processing. John Benjamins Publishing Company, Amsterdam, 2001) as “processing several languages, with a summary in the same language as input”, but their performance is usually unsatisfactory due to the exclusion of language-specific knowledge. Moreover, supervised machine learning approaches need training corpora in multiple languages ...
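
      The core idea, training on language-independent sentence features in one language and applying the model unchanged to another, can be sketched with a tiny feature set. The features and weights below are invented; the paper's learner and corpora are not reproduced.

```python
def features(sentence, position, doc_len):
    # Language-independent surface features only: no words, no syntax.
    return [1.0 - position / doc_len,               # earlier sentences matter more
            min(len(sentence.split()), 30) / 30.0,  # normalized length
            1.0 if any(ch.isdigit() for ch in sentence) else 0.0]

def score(feats, weights):
    return sum(f * w for f, w in zip(feats, weights))

# Pretend these weights were learned from an annotated English corpus.
weights = [0.7, 0.2, 0.1]

# Apply the same model, unchanged, to a document in another language.
doc_fr = ["Le séisme a fait 12 blessés vendredi.",
          "Les secours sont arrivés rapidement.",
          "La météo de la semaine reste incertaine."]
ranked = sorted(range(len(doc_fr)),
                key=lambda i: score(features(doc_fr[i], i, len(doc_fr)), weights),
                reverse=True)
print(doc_fr[ranked[0]])   # top-ranked sentence as a one-line summary
```
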
    17. Review of Proposed Architectures for Automated Text Summarization

      Automatic summarization is the creation of a shortened version of a text by a computer program. It is a brief and accurate representation of the input text such that the output covers the most important concepts of the source in a condensed manner. The summarization process can be extractive or abstractive. Extractive summaries contain sentences that are copied exactly from the source document. In abstractive approaches, the aim is to derive the main concepts of the source text without necessarily copying its exact sentences. It is generally agreed that automating the summarization procedure should be based on text understanding that mimics ...
    18. Automatic Text Summarization: Past, Present and Future

      Automatic text summarization, the computer-based production of condensed versions of documents, is an important technology for the information society. Without summaries it would be practically impossible for human beings to get access to the ever-growing mass of information available online. Although research in text summarization is over 50 years old, some efforts are still needed given the insufficient quality of automatic summaries and the number of interesting summarization topics being proposed in different contexts by end users (“domain-specific summaries”, “opinion-oriented summaries”, “update summaries”, etc.). This paper gives a short overview of summarization methods and evaluation. Content Type: Book Chapter. Pages ...