
Search in the Catalogues and Directories

Hits 1 – 4 of 4

1
DoT: An efficient Double Transformer for NLP tasks with tables ...
BASE
2
MATE: Multi-view Attention for Table Transformer Efficiency ...
Anthology paper link: https://aclanthology.org/2021.emnlp-main.600/
Abstract: This work presents a sparse-attention Transformer architecture for modeling documents that contain large tables. Tables are ubiquitous on the web, and are rich in information. However, more than 20% of relational tables on the web have 20 or more rows (Cafarella et al., 2008), and these large tables present a challenge for current Transformer models, which are typically limited to 512 tokens. Here we propose MATE, a novel Transformer architecture designed to model the structure of web tables. MATE uses sparse attention in a way that allows heads to efficiently attend to either rows or columns in a table. This architecture scales linearly with respect to speed and memory, and can handle documents containing more than 8000 tokens with current accelerators. MATE also has a more appropriate inductive bias for tabular data, and sets a new state-of-the-art for three table reasoning datasets. For HybridQA (Chen et al., 2020b), a dataset ...
Keyword: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing; Sentiment Analysis
URL: https://underline.io/lecture/37326-mate-multi-view-attention-for-table-transformer-efficiency
https://dx.doi.org/10.48448/fney-f862
BASE
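The MATE abstract above describes heads that attend only within a table row or only within a table column. The following is a minimal sketch of that idea, not the authors' implementation: the token coordinates, the head split into "row" and "column" types, and the dense boolean mask are all illustrative assumptions (the paper's linear scaling comes from never materializing a full attention matrix, which this toy version does not attempt).

```python
# Illustrative sketch of row/column sparse attention as described in the MATE
# abstract. Not the paper's code: coordinates, head types, and the dense mask
# are hypothetical simplifications for readability.
import numpy as np

def sparse_table_mask(row_ids, col_ids, head_type):
    """Boolean mask: True where a query token may attend to a key token."""
    if head_type == "row":
        return row_ids[:, None] == row_ids[None, :]   # same table row
    if head_type == "column":
        return col_ids[:, None] == col_ids[None, :]   # same table column
    raise ValueError(head_type)

def masked_attention(q, k, v, mask):
    """Standard scaled dot-product attention with disallowed positions masked out."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy table: 6 tokens laid out as 2 rows x 3 columns.
row_ids = np.array([0, 0, 0, 1, 1, 1])
col_ids = np.array([0, 1, 2, 0, 1, 2])
q = k = v = np.random.randn(6, 8)

out_row_head = masked_attention(q, k, v, sparse_table_mask(row_ids, col_ids, "row"))
out_col_head = masked_attention(q, k, v, sparse_table_mask(row_ids, col_ids, "column"))
```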
3
DoT: An efficient Double Transformer for NLP tasks with tables ...
BASE
4
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning ...
BASE

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 4