
Search in the Catalogues and Directories

Hits 81–100 of 1,255

81  How effective is BERT without word ordering? Implications for language understanding and data privacy ... (BASE)
82  GEM: Natural Language Generation, Evaluation, and Metrics - Part 4 ... (BASE)
83  The statistical advantage of automatic NLG metrics at the system level ... (BASE)
84  Counter-Argument Generation by Attacking Weak Premises ... (BASE)
85  Supporting Cognitive and Emotional Empathic Writing of Students ... (BASE)
86  What's in the Box? An Analysis of Undesirable Content in the Common Crawl Corpus ... (BASE)
87  Are Pretrained Convolutions Better than Pretrained Transformers? ... (BASE)
88  Evaluation Examples are not Equally Informative: How should that change NLP Leaderboards? ... (BASE)
89  Beyond Offline Mapping: Learning Cross-lingual Word Embeddings through Context Anchoring ... (BASE)
90  Hate Speech Detection Based on Sentiment Knowledge Sharing ... (BASE)
91  Taming Pre-trained Language Models with N-gram Representations for Low-Resource Domain Adaptation ... (BASE)
92  Tail-to-Tail Non-Autoregressive Sequence Prediction for Chinese Grammatical Error Correction ... (BASE)
Abstract: We investigate the problem of Chinese Grammatical Error Correction (CGEC) and present a new framework named Tail-to-Tail (TtT) non-autoregressive sequence prediction to address the deep issues hidden in CGEC. Since most tokens are correct and can be conveyed directly from source to target, and since error positions can be estimated and corrected from the bidirectional context, we employ a BERT-initialized Transformer Encoder as the backbone model to conduct information modeling and conveying. Because same-position substitution alone cannot handle variable-length corrections, operations such as substitution, deletion, insertion, and local paraphrasing are required jointly. Therefore, a Conditional Random Fields (CRF) layer is stacked on the up tail to conduct non-autoregressive sequence prediction by modeling the token dependencies. Since most tokens are correct and ...
Read paper: https://www.aclanthology.org/2021.acl-long.385
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/xnpa-9w66
https://underline.io/lecture/25707-tail-to-tail-non-autoregressive-sequence-prediction-for-chinese-grammatical-error-correction
(A minimal code sketch of the encoder-plus-CRF idea described in this abstract appears after the results list below.)
93  WikiSum: Coherent Summarization Dataset for Efficient Human-Evaluation ... (BASE)
94  An End-to-End Progressive Multi-Task Learning Framework for Medical Named Entity Recognition and Normalization ... (BASE)
95  How does Attention Affect the Model? ... (BASE)
96  Improve Query Focused Abstractive Summarization by Incorporating Answer Relevance ... (BASE)
97  Missing Modality Imagination Network for Emotion Recognition with Uncertain Missing Modalities ... (BASE)
98  Neural Machine Translation with Monolingual Translation Memory ... (BASE)
99  Using Meta-Knowledge Mined from Identifiers to Improve Intent Recognition in Conversational Systems ... (BASE)
100  Modeling Transitions of Focal Entities for Conversational Knowledge Base Question Answering ... (BASE)
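Code sketch for entry 92: the abstract describes a BERT-initialized Transformer encoder as the backbone with a Conditional Random Fields (CRF) layer stacked on top for non-autoregressive sequence prediction. The following is a minimal sketch of that encoder-plus-CRF structure, not the authors' implementation. It assumes the Hugging Face transformers package and the pytorch-crf package (module torchcrf), and it uses a small toy output inventory (num_tags=512), since a CRF over the full Chinese vocabulary would need the approximations discussed in the paper.

import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF


class TtTSketch(nn.Module):
    """Sketch of a BERT encoder with a CRF output layer for
    non-autoregressive, same-length token prediction (CGEC-style)."""

    def __init__(self, bert_name="bert-base-chinese", num_tags=512):
        super().__init__()
        # BERT-initialized Transformer encoder as the backbone.
        self.encoder = BertModel.from_pretrained(bert_name)
        # Per-position emission scores over the output token inventory.
        # num_tags=512 is a toy inventory for illustration only.
        self.emit = nn.Linear(self.encoder.config.hidden_size, num_tags)
        # The CRF models dependencies between neighbouring output tokens,
        # replacing left-to-right autoregressive decoding.
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, target_ids=None):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        emissions = self.emit(hidden)
        mask = attention_mask.bool()
        if target_ids is not None:
            # Training: negative log-likelihood of the corrected sequence.
            return -self.crf(emissions, target_ids, mask=mask, reduction="mean")
        # Inference: best output token for every position, decoded jointly.
        return self.crf.decode(emissions, mask=mask)

Training minimizes the CRF negative log-likelihood of the corrected sequence; decoding returns a corrected token for every position in parallel. This same-length formulation only illustrates substitution; handling insertions, deletions, and local paraphrasing as described in the abstract requires additional machinery beyond this sketch.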


Hit counts by source type: Catalogues 0; Bibliographies 0; Linked Open Data catalogues 0; Online resources 0; Open access documents 1,255