23 | Higher-order Derivatives of Weighted Finite-state Machines ...

25 | A surprisal–duration trade-off across and within the world's languages ...

31 | What About the Precedent: An Information-Theoretic Analysis of Common Law ...

35 | Examining the Inductive Bias of Neural Language Models with Artificial Languages ...

36 | Finding Concept-specific Biases in Form–Meaning Associations ...

38 | On Finding the K-best Non-projective Dependency Trees ...

Abstract: The connection between the maximum spanning tree in a directed graph and the best dependency tree of a sentence has been exploited by the NLP community. However, for many dependency parsing schemes, an important detail of this approach is that the spanning tree must have exactly one edge emanating from the root. While work has been done to efficiently solve this problem for finding the one-best dependency tree, no research has attempted to extend this solution to finding the K-best dependency trees. This is arguably a more important extension, as a larger proportion of decoded trees will not be subject to the root constraint of dependency trees. Indeed, we show that the rate of root constraint violations increases by an average of 13 times when decoding with K=50 as opposed to K=1. In this paper, we provide a simplification of the K-best spanning tree algorithm of Camerini et al. (1980). Our simplification allows us to obtain a constant-time speed-up over the original algorithm. Furthermore, we present a ...

Published in: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing ...

URL: https://dx.doi.org/10.3929/ethz-b-000519003
http://hdl.handle.net/20.500.11850/521262
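
As a rough illustration of the decoding setup this abstract describes, the sketch below decodes a one-best spanning arborescence over a dense head-dependent score matrix and checks the root constraint (exactly one edge leaving ROOT). It is not the paper's K-best algorithm or the Camerini et al. (1980) method; it relies on networkx's generic Edmonds implementation, and the score matrix and function names are hypothetical.

# Minimal sketch, not the paper's algorithm: unconstrained one-best
# spanning-tree decoding plus a root-constraint check. All names and
# the random score matrix are illustrative assumptions.
import numpy as np
import networkx as nx

def decode_one_best(scores):
    """scores[h, d] = score of attaching dependent d to head h; node 0 is ROOT."""
    n = scores.shape[0]
    G = nx.DiGraph()
    for h in range(n):
        for d in range(1, n):  # ROOT (node 0) never receives an incoming edge
            if h != d:
                G.add_edge(h, d, weight=scores[h, d])
    # The maximum spanning arborescence is necessarily rooted at node 0,
    # the only node with no incoming edges.
    return nx.maximum_spanning_arborescence(G)

rng = np.random.default_rng(0)
scores = rng.uniform(size=(6, 6))  # ROOT + 5 words
tree = decode_one_best(scores)
root_out = [d for h, d in tree.edges if h == 0]
# An unconstrained decoder may attach several words to ROOT; the abstract's
# point is that such violations become far more frequent in K-best lists.
print(sorted(tree.edges))
print("root constraint satisfied:", len(root_out) == 1)
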
39 | Efficient computation of expectations under spanning tree distributions ...

40 | Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs ...