
Search in the Catalogues and Directories

Hits 81–100 of 1,643

81
Idiomatic Expression Identification using Semantic Compatibility
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1546-1562 (2021)
BASE
82
KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 176-194 (2021)
BASE
83
Reducing Confusion in Active Learning for Part-Of-Speech Tagging
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1-16 (2021)
BASE
84
Differentiable Subset Pruning of Transformer Heads
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1442-1459 (2021)
BASE
85
Compressing Large-Scale Transformer-Based Models: A Case Study on BERT
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1061-1080 (2021)
BASE
86
Parameter Space Factorization for Zero-Shot Learning across Tasks and Languages
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 410-428 (2021)
BASE
87
Data-to-text Generation with Macro Planning
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 510-527 (2021)
BASE
88
Structured Self-Supervised Pretraining for Commonsense Knowledge Graph Completion
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1268-1284 (2021)
BASE
89
RYANSQL: Recursively Applying Sketch-based Slot Fillings for Complex Text-to-SQL in Cross-Domain Databases
In: Computational Linguistics, Vol 47, Iss 2, Pp 309-332 (2021)
BASE
90
Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1032-1046 (2021)
BASE
91
Maintaining Common Ground in Dynamic Environments
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 995-1011 (2021)
BASE
92
Infusing Finetuning with Semantic Dependencies
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 226-242 (2021)
BASE
93
On Generative Spoken Language Modeling from Raw Audio
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1336-1354 (2021)
BASE
94
Pretraining the Noisy Channel Model for Task-Oriented Dialogue
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 657-674 (2021)
BASE
95
Approximating Probabilistic Models as Weighted Finite Automata
In: Computational Linguistics, Vol 47, Iss 2, Pp 221-254 (2021)
BASE
96
Sensitivity as a Complexity Measure for Sequence Classification Tasks
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 891-908 (2021)
Abstract: We introduce a theoretical framework for understanding and predicting the complexity of sequence classification tasks, using a novel extension of the theory of Boolean function sensitivity. The sensitivity of a function, given a distribution over input sequences, quantifies the number of disjoint subsets of the input sequence that can each be individually changed to change the output. We argue that standard sequence classification methods are biased towards learning low-sensitivity functions, so that tasks requiring high sensitivity are more difficult. To that end, we show analytically that simple lexical classifiers can only express functions of bounded sensitivity, and we show empirically that low-sensitivity functions are easier to learn for LSTMs. We then estimate sensitivity on 15 NLP tasks, finding that sensitivity is higher on challenging tasks collected in GLUE than on simple text classification tasks, and that sensitivity predicts the performance both of simple lexical classifiers and of vanilla BiLSTMs without pretrained contextualized embeddings. Within a task, sensitivity predicts which inputs are hard for such simple models. Our results suggest that the success of massively pretrained contextual representations stems in part from the fact that they provide representations from which information can be extracted by low-sensitivity decoders.
Keyword: Computational linguistics. Natural language processing; P98-98.5
URL: https://doaj.org/article/958ef3445dbd4c7dbaea2d7c380df722
https://doi.org/10.1162/tacl_a_00403
BASE
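The sensitivity measure summarized in the abstract above is concrete enough to sketch. Below is a minimal Python illustration of classic Boolean sensitivity (single-position flips), the special case that the article generalizes to disjoint multi-position blocks averaged over an input distribution; the helper names and the parity/OR examples here are illustrative assumptions, not the article's code.

from itertools import product

def sensitivity_at(f, x):
    # Count the positions of x whose individual flip changes f(x).
    # This is classic Boolean sensitivity; the article's measure
    # generalizes it to disjoint multi-position blocks and averages
    # over a distribution of input sequences.
    base = f(x)
    return sum(
        f(x[:i] + (1 - x[i],) + x[i + 1:]) != base
        for i in range(len(x))
    )

def avg_sensitivity(f, n):
    # Average sensitivity of f over all length-n Boolean inputs.
    inputs = list(product((0, 1), repeat=n))
    return sum(sensitivity_at(f, x) for x in inputs) / len(inputs)

# Parity is maximally sensitive: flipping any single bit flips the output.
parity = lambda x: sum(x) % 2
print(avg_sensitivity(parity, 4))                  # 4.0

# OR is low-sensitivity: only inputs of Hamming weight 0 or 1 react.
print(avg_sensitivity(lambda x: int(any(x)), 4))   # 0.5

The parity/OR contrast mirrors the abstract's argument: a task whose label behaves like parity demands a high-sensitivity decision function, which the article argues standard sequence classifiers are biased against learning.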
97
Unsupervised Learning of KB Queries in Task-Oriented Dialogs
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 374-390 (2021)
BASE
98
ParsiNLU: A Suite of Language Understanding Challenges for Persian
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1147-1162 (2021)
BASE
99
Adaptive Semiparametric Language Models
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 362-373 (2021)
BASE
100
Strong Equivalence of TAG and CCG
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 707-720 (2021)
BASE


Sources: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 1,643