1 |
Preventing author profiling through zero-shot multilingual back-translation

In: 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), Nov 2021, Punta Cana, Dominican Republic ; https://hal.inria.fr/hal-03350906

BASE
2 |
On the effect of normalization layers on Differentially Private training of deep Neural networks

In: https://hal.inria.fr/hal-03475600 ; 2021
Abstract:
Differentially private stochastic gradient descent (DPSGD) is a variant of stochastic gradient descent based on the Differential Privacy (DP) paradigm, which can mitigate privacy threats arising from the presence of sensitive information in training data. However, one major drawback of training deep neural networks with DPSGD is a reduction in the model's accuracy. In this paper, we study the effect of normalization layers on the performance of DPSGD. We demonstrate that normalization layers significantly impact the utility of deep neural networks with noisy parameters and should be considered essential ingredients of training with DPSGD. In particular, we propose a novel method for integrating batch normalization with DPSGD without incurring an additional privacy loss. With our approach, we are able to train deeper networks and achieve a better utility-privacy trade-off.
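The DPSGD baseline the abstract refers to follows the standard recipe of per-example gradient clipping followed by Gaussian noise addition. A minimal sketch of one such update step is below; the function name, parameter defaults, and toy setup are illustrative, and this does not implement the paper's batch-normalization integration:

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.1, rng=None):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    `clip_norm`, sum, add Gaussian noise with std
    `noise_multiplier * clip_norm`, average over the batch, and step."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so every example's gradient has norm <= clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    batch = len(clipped)
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_multiplier * clip_norm,
                               size=params.shape)) / batch
    return params - lr * noisy_mean
```

The clipping bound is what makes the Gaussian noise scale meaningful: with a known per-example sensitivity of `clip_norm`, the added noise yields a quantifiable privacy guarantee per step.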

Keyword:
[INFO.INFO-TS] Computer Science [cs] / Signal and Image Processing

URL: https://hal.inria.fr/hal-03475600