Higher-order Derivatives of Weighted Finite-state Machines
Abstract:
Weighted finite-state machines are a fundamental building block of NLP systems. They have withstood the test of time---from their early use in noisy channel models in the 1990s up to modern-day neurally parameterized conditional random fields. This work examines the computation of higher-order derivatives with respect to the normalization constant for weighted finite-state machines. We provide a general algorithm for evaluating derivatives of all orders, which has not been previously described in the literature. In the case of second-order derivatives, our scheme runs in the optimal O(A^2 N^4) time, where A is the alphabet size and N is the number of states. Our algorithm is significantly faster than prior algorithms. Additionally, our approach leads to a significantly faster algorithm for computing second-order expectations, such as covariance matrices and gradients of first-order expectations.

Read paper: https://www.aclanthology.org/2021.acl-short.32
URL: https://dx.doi.org/10.48448/b0dp-6v40
https://underline.io/lecture/25998-higher-order-derivatives-of-weighted-finite-state-machines
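To make "derivatives of the normalization constant" concrete, here is a minimal sketch (not the paper's O(A^2 N^4) algorithm). It assumes a cyclic WFSM whose arc weights are aggregated over the alphabet into an N x N matrix `W`, with start weights `lam` and final weights `rho` (all names here are illustrative). When the pathsum converges, Z = lam^T (I - W)^{-1} rho, and first- and second-order derivatives of Z have closed forms in the forward pathsums alpha and backward pathsums beta:

```python
import numpy as np

def wfsm_Z_and_grad(W, lam, rho):
    # Kleene closure: M = (I - W)^{-1} = sum_{k>=0} W^k sums over all path lengths.
    N = W.shape[0]
    M = np.linalg.inv(np.eye(N) - W)
    alpha = lam @ M                  # alpha_k: weight of all paths from the start into state k
    beta = M @ rho                   # beta_l:  weight of all paths from state l to a final state
    Z = alpha @ rho                  # normalization constant (total pathsum)
    grad = np.outer(alpha, beta)     # dZ/dW[k,l] = alpha_k * beta_l
    return Z, grad

def wfsm_hessian(W, lam, rho):
    # Second derivative of Z with respect to two arc weights:
    #   d^2 Z / dW[i,j] dW[k,l] = alpha_i * M[j,k] * beta_l + alpha_k * M[l,i] * beta_j
    N = W.shape[0]
    M = np.linalg.inv(np.eye(N) - W)
    alpha = lam @ M
    beta = M @ rho
    H = (np.einsum('i,jk,l->ijkl', alpha, M, beta)
         + np.einsum('k,li,j->ijkl', alpha, M, beta))
    return H

# Toy machine: small random weights keep the spectral radius below 1,
# so the infinite pathsum converges.
rng = np.random.default_rng(0)
N = 4
W = 0.1 * rng.random((N, N))
lam = rng.random(N)
rho = rng.random(N)
Z, grad = wfsm_Z_and_grad(W, lam, rho)
```

This dense-matrix formulation only illustrates the quantities involved; the paper's contribution is a general scheme for derivatives of all orders and the resulting speedups for second-order expectations such as covariance matrices.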
A surprisal--duration trade-off across and within the world's languages
What About the Precedent: An Information-Theoretic Analysis of Common Law
Examining the Inductive Bias of Neural Language Models with Artificial Languages
Finding Concept-specific Biases in Form–Meaning Associations
Efficient computation of expectations under spanning tree distributions
Multimodal pretraining unmasked: A meta-analysis and a unified framework of vision-and-language BERTs