Computational Linguistics: 16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019, Hanoi, Vietnam, ... in Computer and Information Science (1215))


E-Book Overview

This book constitutes the refereed proceedings of the 16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019, held in Hanoi, Vietnam, in October 2019.

The 28 full papers and 14 short papers presented were carefully reviewed and selected from 70 submissions. The papers are organized in topical sections on text summarization; relation and word embedding; machine translation; text classification; web analyzing; question and answering, dialog analyzing; speech and emotion analyzing; parsing and segmentation; information extraction; and grammar error and plagiarism detection.


E-Book Information

  • Series: Communications in Computer and Information Science, Book 1215

  • Year: 2020

  • Edition: 1st ed. 2020

  • Pages: 538

  • Pages In File: 525

  • Language: English

  • Identifier: ISBN-10 9811561672, ISBN-13 9789811561672

  • Org File Size: 47,134,324 bytes (≈45 MB)

  • Extension: pdf

  • Toc:

    Preface
    Organization
    Contents

    Text Summarization

      A Submodular Approach for Reference Recommendation
        1 Introduction
        2 Related Work
        3 Submodularity Background
          3.1 Definitions
          3.2 Submodular Functions Used in Document Summarization
        4 Submodular Reference Recommendation
          4.1 Non-monotone Submodular Functions
          4.2 Monotone Submodular Functions
        5 Experiments
          5.1 Corpus
          5.2 Evaluation Metrics
          5.3 Experimental Settings
          5.4 Parameter Tuning
          5.5 Performance Comparison
        6 Conclusion
        References

      Split First and Then Rephrase: Hierarchical Generation for Sentence Simplification
        1 Introduction
        2 Split-First-and-Then-Rephrase Model
        3 Experiments
          3.1 Dataset
          3.2 Training Details
          3.3 Results
          3.4 Segmentation Analysis
          3.5 Error Analysis
        4 Related Work
          4.1 Other Approaches to the Split-and-Rephrase Task
          4.2 Hierarchical Text Generation in Other Tasks
        5 Conclusion
        References

      Abstractive Text Summarization Using LSTMs with Rich Features
        1 Introduction
        2 Related Work
        3 Proposed Model
          3.1 Baseline Model
          3.2 Our Proposed Model
        4 Experiments and Results
          4.1 Dataset
          4.2 Processing Data
          4.3 Experiments
          4.4 Results
        5 Conclusions
        References

    Relation and Word Embedding

      SemSeq: A Regime for Training Widely-Applicable Word-Sequence Encoders
        1 Introduction
        2 Related Work
        3 Proposed Approach
          3.1 Word-Sequence Extraction
          3.2 Model Architecture
        4 Experiments
          4.1 Training Data
          4.2 Models and the Training Setup
          4.3 Evaluation Setup
          4.4 Word-Sequence Length Impact
        5 Results and Discussion
          5.1 Supervised Tasks
          5.2 Unsupervised Tasks
        6 Conclusion
        References

      Learning to Compose Relational Embeddings in Knowledge Graphs
        1 Introduction
        2 Background
          2.1 Knowledge Graph Embedding Methods
          2.2 Relational Walk
          2.3 Inference in Knowledge Graphs
        3 Relation Composition
          3.1 Unsupervised Relation Composition
          3.2 Supervised Relation Composition
        4 Experiments
          4.1 Datasets
          4.2 Relation Composition Ranking
          4.3 Triple Classification
        5 Conclusion
        References

      Context-Guided Self-supervised Relation Embeddings
        1 Introduction
        2 Related Work
          2.1 Pattern-Based Approach for Relations