
Search in the Catalogues and Directories

Hits 1 – 4 of 4

1. Don't Let Discourse Confine Your Model: Sequence Perturbations for Improved Event Language Models ...
Source: BASE
2. TellMeWhy: A Dataset for Answering Why-Questions in Narratives ...
Source: BASE
3. IrEne: Interpretable Energy Prediction for Transformers ...
Abstract: Existing software-based energy measurements of NLP models are not accurate because they do not consider the complex interactions between energy consumption and model execution. We present IrEne, an interpretable and extensible energy prediction system that accurately predicts the inference energy consumption of a wide range of Transformer-based NLP models. IrEne constructs a model tree graph that breaks down the NLP model into modules that are further broken down into low-level machine learning (ML) primitives. IrEne predicts the inference energy consumption of the ML primitives as a function of generalizable features and fine-grained runtime resource usage. IrEne then aggregates these low-level predictions recursively to predict the energy of each module and finally of the entire model. Experiments across multiple Transformer models show IrEne predicts inference energy consumption of transformer models with an error of under 7% compared to ...
Paper: https://www.aclanthology.org/2021.acl-long.167
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/fe76-z925
https://underline.io/lecture/25487-irene-interpretable-energy-prediction-for-transformers
Source: BASE (a code sketch of the recursive aggregation described in this abstract follows the result list)
4. On the Distribution, Sparsity, and Inference-time Quantization of Attention Values in Transformers ...
Source: BASE
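
The IrEne abstract above (hit 3) describes breaking an NLP model into a tree whose leaves are low-level ML primitives, predicting each primitive's energy from generalizable features and fine-grained runtime resource usage, and then aggregating those predictions recursively up to the full model. Below is a minimal Python sketch of that recursive aggregation, under stated assumptions: the names EnergyNode and predict_energy are hypothetical rather than IrEne's actual API, the feature set is illustrative, and plain summation stands in for whatever aggregation IrEne actually uses.

# Hypothetical sketch, not IrEne's real code. A model tree node is either an
# ML primitive (leaf) or a module containing other nodes. Leaf energy comes
# from a regressor over features such as FLOPs and runtime resource usage;
# module energy is the recursive aggregate of its children. Plain summation
# is assumed here purely for illustration.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class EnergyNode:
    name: str
    features: Dict[str, float] = field(default_factory=dict)
    children: List["EnergyNode"] = field(default_factory=list)

def predict_energy(node: EnergyNode,
                   primitive_model: Callable[[Dict[str, float]], float]) -> float:
    if not node.children:                      # leaf: an ML primitive
        return primitive_model(node.features)
    return sum(predict_energy(c, primitive_model) for c in node.children)

# Toy usage: a "linear" module made of matmul and bias-add primitives.
linear = EnergyNode("linear", children=[
    EnergyNode("matmul", {"flops": 2.4e9, "mem_reads": 1.2e6}),
    EnergyNode("bias_add", {"flops": 3.0e4, "mem_reads": 3.0e4}),
])
toy_regressor = lambda f: 1e-9 * f["flops"] + 5e-8 * f["mem_reads"]
print(predict_energy(linear, toy_regressor))   # predicted energy (toy numbers)

Swapping the toy regressor for one fit on measured primitive energies, and the toy tree for a module breakdown of a real Transformer, would give an IrEne-style predictor in this sketch's terms.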
