
Search in the Catalogues and Directories

Hits 1 – 20 of 94

1
Towards Parallel Algorithms for Abstract Dialectical Frameworks ...
Hofer, Mathias. - : TU Wien, 2022
BASE
2
Detecting Signal Corruptions in Voice Recordings for Speech Therapy ; Igenkänning av Signalproblem i Röstinspelningar för Logopedi
Nylén, Helmer. - : KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021
BASE
3
Noise-skipping Earley parsing and in-order tree extraction from shared packed parse forests
Dohmann, Jeremy. - 2021
BASE
4
Temporal Social Network Analysis using Harvard Caselaw Access Project
Trias, Fernando. - 2021
BASE
5
Going beyond our means: A proposal for improving psycholinguistic methods
BASE
6
Extracting Human Behaviour and Personality Traits from Social Media
Singh, Ravinder. - 2021
BASE
7
Towards provably efficient algorithms for learning neural networks ...
Goel, Surbhi. - : The University of Texas at Austin, 2020
BASE
8
Factions: acts of worldbuilding on social media platforms ...
Little, Dana L. - : University of Glasgow, 2020
BASE
9
Parallel text index construction ...
Kurpicz, Florian. - : Technische Universität Dortmund, 2020
BASE
10
Understanding and generating language with abstract meaning representation
Damonte, Marco. - : The University of Edinburgh, 2020
BASE
11
Detección de linguaxe misóxino e xenófobo en redes sociais mediante aprendizaxe máquina [Detection of misogynistic and xenophobic language on social networks using machine learning]
BASE
12
Parallel text index construction
BASE
13
Towards provably efficient algorithms for learning neural networks
Goel, Surbhi. - 2020
Abstract: Neural networks (NNs) have seen a surge in popularity due to their unprecedented practical success in fields such as computer vision, robotics, and natural language. Developing provably efficient algorithms for learning commonly used neural network architectures continues to be a core challenge in understanding deep learning. In particular, even the problem of learning very basic architectures remains open. The underlying difficulty arises from the highly non-convex nature of the optimization problems posed by neural networks. Despite their practical success, the standard neural network training algorithms based on gradient descent (GD) and its variants have almost no provable guarantees. This necessitates a paradigm shift towards developing new principled algorithms with provable guarantees. In this thesis, we give the first set of efficient algorithms for learning commonly studied neural network architectures under minimal assumptions.

In the first part of the thesis, we focus on characterizing the computational complexity of learning a single non-linear unit. We combine techniques from kernel methods and polynomial approximation to give the first dimension-efficient algorithm for learning a single ReLU (rectified linear unit), the most popular activation function, in the agnostic learning model (arbitrary noise) for any distribution on the unit sphere. We further show that if the input distribution is assumed to be Gaussian, the problem is hard. Our results unconditionally imply that GD cannot agnostically learn a single ReLU. Lastly, we show that if we relax our learning guarantee, then there is a fully polynomial time algorithm that achieves a constant factor approximation for all isotropic log-concave distributions.

We further extend our results to shallow NNs. We give the first dimension-efficient algorithm for learning norm-bounded one-layer fully connected NNs. We subsequently show that if the marginal distribution on the input exhibits sufficient eigenvalue decay (low-dimensional structure), then one-hidden-layer NNs can be learned in polynomial time in all parameters. For one-hidden-layer convolutional NNs, we propose a simple iterative algorithm that efficiently recovers the underlying parameters for commonly used convolutional schemes from computer vision. We further give the first polynomial time algorithm for networks with more than one hidden layer in a weaker noise model. The techniques from this work also give improved results for problems related to boolean concept learning.

Lastly, we shift focus to the unsupervised learning setting through the lens of graphical models. We study Restricted Boltzmann Machines (RBMs), which are simple generative neural networks that model a probability distribution. We give the first algorithm for learning RBMs with non-negative interactions under arbitrary biases on binary as well as non-binary input.
Keyword: Efficient algorithms; Learning theory; Neural networks; Computer Sciences
URL: https://doi.org/10.26153/tsw/13349
https://hdl.handle.net/2152/86398
BASE
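The abstract above describes, among other results, an algorithm that learns a single ReLU agnostically by combining kernel methods with polynomial approximation. The sketch below is only an illustration of that general idea, not the algorithm from the thesis: the non-convex ReLU fit is replaced by convex least squares over explicit low-degree polynomial features. The dimensions, degree, noise level, and regularisation constant are arbitrary values chosen for the example.

```python
# Illustrative sketch (an assumption, not the thesis's method): fit a single ReLU
# agnostically by lifting inputs to low-degree polynomial features and solving a
# convex least-squares problem, instead of running gradient descent on the
# non-convex ReLU objective directly.
from itertools import combinations_with_replacement

import numpy as np

rng = np.random.default_rng(0)
d, n, degree, lam = 5, 2000, 4, 1e-3   # illustrative dimension, samples, degree, ridge term

# Ground-truth unit-norm weight vector and inputs on the unit sphere.
w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Labels: ReLU of a linear function plus noise (standing in for agnostic label noise).
y = np.maximum(X @ w_true, 0.0) + 0.1 * rng.normal(size=n)

def poly_features(X, degree):
    """All monomials of total degree <= degree (a crude explicit polynomial kernel)."""
    cols = [np.ones(X.shape[0])]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

# Convex surrogate: ridge-regularised least squares in the lifted feature space.
Phi = poly_features(X, degree)
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

pred = Phi @ coef
print("mean squared error, polynomial surrogate:", np.mean((pred - y) ** 2))
print("mean squared error, always predicting 0 :", np.mean(y ** 2))
```

The convexity of the lifted problem is the point of the sketch; the thesis's guarantees concern how well such polynomial (kernel) approximations can track the best ReLU under arbitrary label noise.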
14
Emoción, percepción, producción: un estudio psicolingüístico para detectar emociones en el habla [Emotion, perception, production: a psycholinguistic study for detecting emotions in speech]
Gibson, M. (Mark); González-Machorro, M. (Mónica). - 2020
BASE
15
Language Recognition in the Sliding Window Model ... : Formale Sprachen im Sliding-Window-Modell ...
Ganardi, Moses. - : Universitätsbibliothek Siegen, 2019
BASE
16
System-Aware Algorithms For Machine Learning
Mendler-Dünner, Celestine. - : ETH Zurich, 2019
BASE
17
Seguridad del paciente: estudio de factores para su consecución [Patient safety: a study of factors for achieving it]
Figueiredo Escribá, Carlos de. - : Universitat de Barcelona, 2019
In: TDX (Tesis Doctorals en Xarxa) (2019)
BASE
18
Fast machine translation on parallel and massively parallel hardware
Bogoychev, Nikolay Veselinov. - : The University of Edinburgh, 2019
BASE
19
Preference inference based on lexicographic and Pareto models
George, Anne-Marie. - : University College Cork, 2019
BASE
20
Seguridad del paciente: estudio de factores para su consecución [Patient safety: a study of factors for achieving it]
Figueiredo Escribá, Carlos de. - : Universitat de Barcelona, 2019
BASE


Catalogues: 1
Bibliographies: 8
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 85