A. Madsen and A. Johansen (2019). Measuring Arithmetic Extrapolation Performance. Published at Science meets Engineering of Deep Learning at the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. arXiv:1910.01888.
Abstract
The Neural Arithmetic Logic Unit (NALU) is a neural network layer that can
learn exact arithmetic operations between the elements of a hidden state. The
goal of NALU is to learn perfect extrapolation, which requires learning the
exact underlying logic of an unknown arithmetic problem. Evaluating the
performance of the NALU is non-trivial as one arithmetic problem might have
many solutions. As a consequence, single-instance MSE has been used to evaluate
and compare performance between models. However, it can be hard to interpret
what magnitude of MSE represents a correct solution, and how sensitive a model
is to initialization. We propose using a success-criterion to measure if and when a
model converges. Using a success-criterion we can summarize success-rate over
many initialization seeds and calculate confidence intervals. We contribute a
generalized version of the previous arithmetic benchmark to measure a model's
sensitivity under different conditions. This is, to our knowledge, the first
extensive evaluation with respect to convergence of the NALU and its sub-units.
Using a success-criterion to summarize 4800 experiments we find that
consistently learning arithmetic extrapolation is challenging, in particular
for multiplication.
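The abstract proposes summarizing many differently-seeded runs with a success-rate and a confidence interval rather than raw MSE. A minimal sketch of that idea, assuming a hypothetical fixed MSE threshold as the success criterion (the paper's actual criterion is more refined, comparing extrapolation error against a near-optimal solution):

```python
import math

def success_rate_with_ci(extrapolation_mses, threshold=1e-5, z=1.96):
    """Summarize runs over many seeds as a success-rate with a 95% Wilson CI.

    `threshold` is a hypothetical success criterion used here for
    illustration; a run "succeeds" if its extrapolation MSE falls below it.
    """
    n = len(extrapolation_mses)
    successes = sum(1 for mse in extrapolation_mses if mse < threshold)
    p = successes / n
    # Wilson score interval: better behaved than the normal approximation
    # when the success-rate is near 0 or 1, or the number of seeds is small.
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, (max(0.0, center - half), min(1.0, center + half))

# Example: final extrapolation MSEs from 10 differently-seeded runs
# (made-up numbers; some runs converge, some do not).
mses = [1e-8, 2e-7, 0.3, 4e-9, 1.2, 8e-7, 5e-6, 0.9, 3e-8, 2.5]
rate, (lo, hi) = success_rate_with_ci(mses)
```

Reporting `rate` with `(lo, hi)` makes "6 of 10 seeds converged" directly comparable across models, which a single-instance MSE cannot express.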
@article{madsen2019measuring,
abstract = {The Neural Arithmetic Logic Unit (NALU) is a neural network layer that can
learn exact arithmetic operations between the elements of a hidden state. The
goal of NALU is to learn perfect extrapolation, which requires learning the
exact underlying logic of an unknown arithmetic problem. Evaluating the
performance of the NALU is non-trivial as one arithmetic problem might have
many solutions. As a consequence, single-instance MSE has been used to evaluate
and compare performance between models. However, it can be hard to interpret
what magnitude of MSE represents a correct solution, and how sensitive a model
is to initialization. We propose using a success-criterion to measure if and when a
model converges. Using a success-criterion we can summarize success-rate over
many initialization seeds and calculate confidence intervals. We contribute a
generalized version of the previous arithmetic benchmark to measure a model's
sensitivity under different conditions. This is, to our knowledge, the first
extensive evaluation with respect to convergence of the NALU and its sub-units.
Using a success-criterion to summarize 4800 experiments we find that
consistently learning arithmetic extrapolation is challenging, in particular
for multiplication.},
added-at = {2020-01-06T03:21:25.000+0100},
author = {Madsen, Andreas and Johansen, Alexander Rosenberg},
biburl = {https://www.bibsonomy.org/bibtex/26d7e6eb59bac73552339f1e324f882ff/kirk86},
description = {[1910.01888] Measuring Arithmetic Extrapolation Performance},
interhash = {95de9b3b26f04f0e118ab0cbf0c5071b},
intrahash = {6d7e6eb59bac73552339f1e324f882ff},
keywords = {learning sets},
note = {Published at Science meets Engineering of Deep Learning at the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada},
timestamp = {2020-01-06T03:21:25.000+0100},
title = {Measuring Arithmetic Extrapolation Performance},
url = {http://arxiv.org/abs/1910.01888},
year = 2019
}