1. Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models
|

2. Structural Guidance for Transformer Language Models

Abstract:
Transformer-based language models pre-trained on large amounts of text data have proven remarkably successful at learning generic, transferable linguistic representations. Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data. We explore two general ideas. The "Generative Parsing" idea jointly models the incremental parse and the word sequence as part of the same sequence modeling task. The "Structural Scaffold" idea guides the language model's representation via an additional structure loss that separately predicts the incremental constituency parse. We train the proposed models, along with a vanilla Transformer language model baseline, on a 14 million-token and a 46 million-token subset of the BLLIP dataset, and evaluate the models' syntactic generalization performance on the SG Test Suites and sized BLiMP. Experimental results across the two benchmarks suggest converging evidence that ...
Comment: To be issued as paper revision for ACL 2021 ...

Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

DOI: https://dx.doi.org/10.48550/arxiv.2108.00104
URL: https://arxiv.org/abs/2108.00104
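
To make the abstract's two ideas concrete, here is a minimal PyTorch-style sketch of the "Structural Scaffold" objective only: an auxiliary head predicts an incremental parse action at each token position, and its loss is added to the ordinary next-word loss so the structure signal shapes the shared representation. The class name, layer sizes, action inventory, and the `scaffold_weight` coefficient are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ScaffoldedLM(nn.Module):
    """Hypothetical sketch: a causal Transformer LM plus an auxiliary head that
    predicts an incremental constituency-parse action at each token position."""

    def __init__(self, vocab_size, num_parse_actions, d_model=256, scaffold_weight=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.lm_head = nn.Linear(d_model, vocab_size)            # next-word prediction
        self.parse_head = nn.Linear(d_model, num_parse_actions)  # parse-action prediction
        self.scaffold_weight = scaffold_weight
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, tokens, next_tokens, parse_actions):
        # Causal mask: each position may attend only to earlier positions.
        seq_len = tokens.size(1)
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        h = self.encoder(self.embed(tokens), mask=mask)
        # Main objective: predict the next word at every position.
        lm_loss = self.loss_fn(self.lm_head(h).flatten(0, 1), next_tokens.flatten())
        # Scaffold objective: separately predict the incremental parse action.
        parse_loss = self.loss_fn(self.parse_head(h).flatten(0, 1), parse_actions.flatten())
        # Joint loss: structure supervision regularizes the shared representation h.
        return lm_loss + self.scaffold_weight * parse_loss

# Illustrative usage with random data (shapes: batch of 8 sequences, length 32).
model = ScaffoldedLM(vocab_size=10000, num_parse_actions=50)
tokens = torch.randint(0, 10000, (8, 32))       # input token ids
next_tokens = torch.randint(0, 10000, (8, 32))  # shifted next-word targets
parse_actions = torch.randint(0, 50, (8, 32))   # one parse action per token
loss = model(tokens, next_tokens, parse_actions)
loss.backward()
```

Under the alternative "Generative Parsing" idea, there would be no separate head: parse actions and words would instead be interleaved into a single sequence and predicted by one softmax, per the abstract's description of joint sequence modeling.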

3. A Systematic Assessment of Syntactic Generalization in Neural Language Models

4. Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study