
Search in the Catalogues and Directories

Hits 1 – 6 of 6

1
DAWGs for Parameterized Matching: Online Construction and Related Indexing Structures
Nakashima, Katsuhito; Fujisato, Noriki; Hendrian, Diptarama. In: LIPIcs – Leibniz International Proceedings in Informatics, 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020), 2020.
BASE
2
DAWGs for Parameterized Matching: Online Construction and Related Indexing Structures ...
Nakashima, Katsuhito; Fujisato, Noriki; Hendrian, Diptarama. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020.
BASE
3
DAWGs for parameterized matching: online construction and related indexing structures ...
BASE
4
An Extension of Linear-size Suffix Tries for Parameterized Strings ...
BASE
5
Efficiency in the Identification in the Limit Learning Paradigm
In: Jeffrey Heinz and José M. Sempere (eds.), Topics in Grammatical Inference, pp. 25–46, 2016. ISBN 978-3-662-48395-4. ⟨10.1007/978-3-662-48395-4_2⟩. https://hal.archives-ouvertes.fr/hal-01399418 ; http://www.springer.com/la/book/9783662483930
Abstract: Two different paths exist when one wants to validate an idea for a learning algorithm. On the one hand, the practical approach uses the available data to test the quality of the learning algorithm (for instance, via the widely used cross-validation technique). On the other hand, a theoretical approach uses a learning paradigm: an attempt to formalize what learning means. Such models provide a framework for studying the behavior of learning algorithms and for formally establishing their soundness. The most widely used learning paradigm in Grammatical Inference is identification in the limit. The original definition has been found lacking because it requires no efficiency bound, and how best to incorporate a notion of efficiency and tractability into this framework remains an open problem. This chapter surveys the refinements that have been developed and studied, and the challenges they face; the main results for each formalisation are provided, along with comparisons.
Keyword: [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG]
URL: https://hal.archives-ouvertes.fr/hal-01399418/document
https://hal.archives-ouvertes.fr/hal-01399418/file/Chapter-Efficiency.pdf
https://hal.archives-ouvertes.fr/hal-01399418
https://doi.org/10.1007/978-3-662-48395-4_2
BASE
6
Well-Nestedness Properly Subsumes Strict Derivational Minimalism
In: LACL 2011 – Logical Aspects of Computational Linguistics, 6th International Conference, Montpellier, France, 2011, pp. 112–128. https://hal.archives-ouvertes.fr/hal-00959629
BASE
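The abstract of hit 5 centers on identification in the limit. A toy illustration of that paradigm, using the classic fact that the class of finite languages is identifiable in the limit from positive data: the learner simply conjectures the set of strings seen so far, and on any presentation that eventually enumerates the whole (finite) target language, its conjectures converge. The helper names below are hypothetical, chosen for this sketch only.

```python
def learner(examples_so_far):
    """Conjecture a language (as a frozenset) from the positive examples seen."""
    return frozenset(examples_so_far)

def identifies_in_the_limit(target, presentation):
    """Feed the learner a presentation (a finite prefix of positive data)
    and check whether its final conjecture has converged to `target`."""
    seen = []
    conjecture = frozenset()
    for w in presentation:
        seen.append(w)
        conjecture = learner(seen)
    # For finite languages, the conjecture stabilizes once every string
    # of the target has appeared at least once in the presentation.
    return conjecture == frozenset(target)

# A presentation that enumerates every string of the finite target
# makes this learner converge; one that omits a string does not.
target = {"a", "ab", "abb"}
print(identifies_in_the_limit(target, ["a", "a", "ab", "abb", "ab"]))  # True
print(identifies_in_the_limit(target, ["a", "ab"]))                    # False
```

Efficiency is exactly what this sketch ignores: the learner converges, but nothing bounds how much data or time it may need, which is the gap the surveyed refinements try to close.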

Hits by source: Catalogues: 0 · Bibliographies: 0 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 6
© 2013 – 2024 Lin|gu|is|tik