
Search in the Catalogues and Directories

Hits 21 – 35 of 35

21
Reliability versus affiliation : selective trust in accented speakers
BASE
22
Formulaic language in L1 acquisition
In: Annual review of applied linguistics. - Cambridge, Mass. [et al.] : Univ. Press 32 (2012), 3-16
BLLDB
OLC Linguistik
23
The syntax of questions and variation in adult and child African American English
BASE
24
Can We Dissociate Contingency Learning from Social Learning in Word Acquisition by 24-Month-Olds?
Bannard, Colin; Tomasello, Michael. - Public Library of Science, 2012
BASE
25
'Frequent frames' in German child-directed speech: a limited cue to grammatical categories
In: Cognitive science. - Hoboken, NJ : Wiley-Blackwell 35 (2011) 6, 1190-1205
BLLDB
OLC Linguistik
26
Two- and three-year-olds' linguistic generalizations are prudent adaptations to the language they hear
In: Experience, variation and generalization (Amsterdam, 2011), p. 153-166
MPI für Psycholinguistik
27
Word meaning in context as a paraphrase distribution : evidence, learning, and inference
Abstract: In this dissertation, we introduce a graph-based model of instance-based usage meaning that is cast as a problem of probabilistic inference. The main aim of this model is to provide a flexible platform that can be used to explore multiple hypotheses about usage meaning computation. Our model takes up and extends the proposals of Erk and Pado [2007] and McCarthy and Navigli [2009] by representing usage meaning as a probability distribution over potential paraphrases. We use undirected graphical models to infer this probability distribution for every content word in a given sentence. Graphical models represent complex probability distributions through a graph: nodes stand for random variables, and edges stand for direct probabilistic interactions between them. The lack of an edge between two variables reflects an independence assumption. In our model, we represent each content word of the sentence through two adjacent nodes: the observed node represents the surface form of the word itself, and the hidden node represents its usage meaning. The distribution over values that we infer for the hidden node is a paraphrase distribution for the observed word. To encode the fact that lexical semantic information is exchanged between syntactic neighbors, the graph contains edges that mirror the dependency graph of the sentence. Further knowledge sources that influence the hidden nodes are represented through additional edges that, for example, connect to the document topic. The integration of adjacent knowledge sources is accomplished in the standard way, by multiplying factors and marginalizing over variables. Evaluating on a paraphrasing task, we find that our model outperforms the current state-of-the-art usage vector model [Thater et al., 2010] on all parts of speech except verbs, where the previous model wins by a small margin. Our main focus, however, is not on the numbers but on the fact that our model is flexible enough to encode different hypotheses about usage meaning computation. In particular, we concentrate on five questions (with minor variants):
- Nonlocal syntactic context: Existing usage vector models use only a word's direct syntactic neighbors for disambiguation or for inferring some other meaning representation. Would it help to instead have contextual information "flow" along the entire dependency graph, with each word's inferred meaning relying on the paraphrase distributions of its neighbors?
- Influence of collocational information: In some cases, it is intuitively plausible to use the selectional preference of a neighboring word toward the target to determine the target's meaning in context. How does building selectional preferences into the model affect performance?
- Non-syntactic bag-of-words context: To what extent can non-syntactic information in the form of bag-of-words context help in inferring meaning?
- Effects of parametrization: We experiment with two transformations of the maximum likelihood estimate: one interpolates several estimates, and the other transforms the estimate by exponentiating pointwise mutual information. Which performs better?
- Type of hidden nodes: Our model posits a tier of hidden nodes immediately adjacent to the surface tier of observed words to capture dynamic usage meaning. We examine two variants of the hidden nodes: in one, the nodes take actual words as values; in the other, they take nameless indexes as values. The former has the benefit of interpretability, while the latter allows more standard parameter estimation.
Portions of this dissertation are derived from joint work between the author and Katrin Erk [submitted].
Keyword: Computational linguistics; Lexical semantics; Natural language processing; Paraphrasing; Probabilistic graphical models; Word sense disambiguation
URL: http://hdl.handle.net/2152/ETD-UT-2011-08-4143
BASE
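To make the factor-multiplication step in the abstract above concrete, here is a minimal sketch in Python. It is not the dissertation's implementation: the candidate paraphrases, factor values, and knowledge sources are all invented for illustration. It shows how the factors attached to one hidden node (from the observed word, a dependency neighbor, and the document topic) can be multiplied pointwise and normalized into a paraphrase distribution.

# A minimal sketch (not the dissertation's implementation) of the
# factor-multiplication step described in the abstract: the paraphrase
# distribution for one hidden node is obtained by multiplying the
# factors from its adjacent knowledge sources and normalizing.
# All words and factor values below are invented for illustration.

def normalize(scores):
    """Turn unnormalized factor products into a probability distribution."""
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

# Hypothetical candidate paraphrases for the observed word "run"
# in a sentence like "the program runs fast".
candidates = ["execute", "operate", "sprint", "flow"]

# Unary factor: compatibility of each paraphrase with the observed
# surface form (e.g. from a distributional similarity model).
observed_factor = {"execute": 0.8, "operate": 0.7, "sprint": 0.6, "flow": 0.3}

# Pairwise factor along a dependency edge: selectional preference of
# the syntactic neighbor "program" toward the target's paraphrases.
neighbor_factor = {"execute": 0.9, "operate": 0.8, "sprint": 0.1, "flow": 0.2}

# Additional knowledge source: a bag-of-words / document-topic factor.
topic_factor = {"execute": 0.7, "operate": 0.6, "sprint": 0.2, "flow": 0.4}

# Multiply the factors pointwise over the hidden node's values, then
# normalize; the result is the paraphrase distribution for "run".
scores = {w: observed_factor[w] * neighbor_factor[w] * topic_factor[w]
          for w in candidates}
paraphrase_distribution = normalize(scores)

for word, p in sorted(paraphrase_distribution.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {p:.3f}")

In the full model, each hidden node would receive such factors from every dependency neighbor, and inference over the whole graph would marginalize over the neighbors' own hidden values rather than fixing them as this sketch does.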
28
Unsupervised partial parsing
BASE
29
Paper bullets of the brain
BASE
30
Children's production of unfamiliar word sequences is predicted by positional variability and latent classes in a large sample of child-directed speech
In: Cognitive science. - Hoboken, NJ : Wiley-Blackwell 34 (2010) 3, 465-488
BLLDB
OLC Linguistik
31
Repetition and reuse in child language learning
In: Acquisition, loss, psychological reality, functional explanations (2009), p. 297-322
MPI für Psycholinguistik
32
Modeling children's early grammatical knowledge
Bannard, Colin; Lieven, Elena; Tomasello, Michael. - National Academy of Sciences, 2009
BASE
33
Acquiring phrasal lexicons from corpora
Bannard, Colin James. - The University of Edinburgh, 2006
BASE
34
Learning about the meaning of verb–particle constructions from corpora
In: Computer speech and language. - Amsterdam [et al.] : Elsevier 19 (2005) 4, 467-478
OLC Linguistik
35
Multiword expressions
Villavicencio, Aline (ed.); Bond, Francis (ed.); Korhonen, Anna (ed.). - Amsterdam [et al.] : Elsevier, 2005
BLLDB
