
Search in the Catalogues and Directories

Hits 1 – 11 of 11

1. On the Distribution, Sparsity, and Inference-time Quantization of Attention Values in Transformers ... (BASE)
2. Don't Let Discourse Confine Your Model: Sequence Perturbations for Improved Event Language Models ... (BASE)
3. TellMeWhy: A Dataset for Answering Why-Questions in Narratives ... (BASE)
4. IrEne: Interpretable Energy Prediction for Transformers ... (BASE)
5. On the Distribution, Sparsity, and Inference-time Quantization of Attention Values in Transformers ... (BASE)
Abstract: How much information do NLP tasks really need from a transformer's attention mechanism at application-time (inference)? From recent work, we know that there is sparsity in transformers and that the floating-points within its computation can be discretized to fewer values with minimal loss to task accuracies. However, this requires retraining or even creating entirely new models, both of which can be expensive and carbon-emitting. Focused on optimizations that do not require training, we systematically study the full range of typical attention values necessary. This informs the design of an inference-time quantization technique using both pruning and log-scaled mapping which produces only a few (e.g. 2^3) unique values. Over the tasks of question answering and sentiment analysis, we find nearly 80% of attention values can be pruned to zeros with minimal (< 1.0%) relative loss in accuracy. We use this pruning technique in conjunction with quantizing the attention values to only a 3-bit format, without ...
Keywords: Computational Linguistics; Condensed Matter Physics; FOS Physical sciences; Information and Knowledge Engineering; Machine Learning; Neural Network; Semantics
URL: https://underline.io/lecture/29879-on-the-distribution,-sparsity,-and-inference-time-quantization-of-attention-values-in-transformers
https://dx.doi.org/10.48448/jn9k-w368
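(An illustrative sketch of the pruning and log-scaled quantization described in this abstract appears after the hit list below.)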
6. Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension ... (BASE)
7. Modeling Label Semantics for Predicting Emotional Reactions ... (BASE)
8. Residualized Factor Adaptation for Community Social Media Prediction Tasks ... (BASE)
9. The Fine Line between Linguistic Generalization and Failure in Seq2Seq-Attention Models ... (BASE)
10. Generating Coherent Event Schemas at Scale (BASE)
11. Improved Document Representation for Classification Tasks for the Intelligence Community (BASE)
    In: School of Information Studies - Faculty Scholarship (2005)
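
The abstract of hit 5 outlines an inference-time scheme: prune most attention values to zero and map the survivors onto a log-scaled grid with only a few (e.g. 2^3) unique levels. Below is a minimal illustrative sketch of that idea in NumPy; it is not the authors' implementation, and the function name prune_and_log_quantize, the quantile-based pruning cutoff, and the evenly spaced log grid are assumptions made for illustration.

```python
import numpy as np

def prune_and_log_quantize(attn, prune_frac=0.8, bits=3):
    """Illustrative sketch (not the paper's code).

    attn       : attention weights in (0, 1], e.g. one softmax row
    prune_frac : fraction of values to zero out (the abstract reports ~80%)
    bits       : bit width of the quantized values (2**bits unique levels)
    """
    attn = np.asarray(attn, dtype=np.float64)
    out = attn.copy()

    # Pruning step: zero the smallest `prune_frac` of the values.
    cutoff = np.quantile(attn, prune_frac)
    out[out < cutoff] = 0.0

    # Log-scaled mapping: snap surviving values to 2**bits log-spaced levels.
    nonzero = out > 0
    if nonzero.any():
        logs = np.log(out[nonzero])
        lo, hi = logs.min(), logs.max()
        levels = 2 ** bits
        if hi > lo:
            idx = np.rint((logs - lo) / (hi - lo) * (levels - 1))
            out[nonzero] = np.exp(lo + idx * (hi - lo) / (levels - 1))
    return out

# Toy usage: a single softmax row over 10 tokens.
row = np.random.dirichlet(np.ones(10))
print(prune_and_log_quantize(row, prune_frac=0.8, bits=3))
```

The sketch just makes the two steps concrete: pruning first, then a 3-bit log-scaled mapping of what remains, which is the combination the abstract says costs under 1% relative accuracy on question answering and sentiment analysis.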

Results by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 11