Deep learning and the Low Dimensional Manifold Hypothesis

The low dimensional manifold hypothesis posits that the data arising in many applications, such as those involving natural images, lie (approximately) on low dimensional manifolds embedded in a high dimensional Euclidean space. In this setting, a typical neural network defines a function that takes a finite number of vectors in the embedding space as input. However, one often needs to evaluate the trained network at points outside the training distribution. This project considers the case in which the training data are distributed in a linear subspace $M$ of $\mathbb{R}^d$. We derive estimates on the variation of the learned function, defined by a neural network, in the directions transversal to the subspace. We study the potential regularization effects associated with the network’s depth and with noise in the codimension of the data manifold, and we present additional side effects in training caused by the presence of noise.
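The following is a minimal sketch of this setting, written in PyTorch with dimensions, architecture, and target function chosen purely for illustration (they are not taken from the paper): training data are generated on a $k$-dimensional linear subspace $M$ of $\mathbb{R}^d$, a small ReLU network is fit to them, and the derivative of the learned function along a direction $n_M$ normal to $M$ is then evaluated.

```python
# Illustrative sketch (hypothetical sizes and architecture): train a small ReLU
# network on data confined to a k-dimensional linear subspace M of R^d, then
# probe the derivative of the learned function along a direction normal to M.
import torch

torch.manual_seed(0)
d, k, n = 10, 2, 500                       # ambient dim, subspace dim, #samples

# Orthonormal basis of M (columns of B) and one unit normal direction n_M.
Q, _ = torch.linalg.qr(torch.randn(d, k + 1))
B, n_M = Q[:, :k], Q[:, k]

# Training data lie exactly on M; labels come from a smooth target defined on M.
Z = torch.randn(n, k)
X = Z @ B.T                                # points in R^d, all on M
y = torch.sin(Z[:, 0:1]) + 0.5 * Z[:, 1:2]

net = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):                      # plain full-batch gradient training
    opt.zero_grad()
    torch.nn.functional.mse_loss(net(X), y).backward()
    opt.step()

# df/dn_M at a training point: the directional derivative transversal to M.
x0 = X[0].clone().requires_grad_(True)
(grad,) = torch.autograd.grad(net(x0).squeeze(), x0)
print("df/dn_M at x0:", (grad @ n_M).item())
```

Since every training input lies on $M$, the loss does not constrain the network along $n_M$; the printed value reflects whatever the depth, the activation, and the initialization happen to impose, which is the behavior analyzed in the results below.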

Main results

  1. If the data points, including any noise, lie exactly on $M$, the depth of a linear network may provide certain regularization effects, but also side effects. For ReLU neural networks, we prove that $\frac{\partial f}{\partial n_M}$, the derivative of the learned function $f$ in the direction $n_M$ normal to $M$, is sensitive to the initialization of a set of “untrainable” parameters.
  2. If the noise has a small positive variance in the orthogonal complement of $M$, then:
    • $\frac{\partial f}{\partial n_M}$ can be made arbitrarily small, provided that the number of data points scales as an inverse power of the noise variance, for both deep linear and nonlinear neural networks (see the sketch after this list);
    • For linear neural network models, gradient descent may take a time exponential in the reciprocal of the variance to converge to the unique optimal model parameters, which yield a small $\frac{\partial f}{\partial n_M}$; it may also take a long time to escape a neighborhood of the origin.
  3. The stability-accuracy trade-off. The noise can be interpreted as a stabilizer for the model when it is evaluated at points outside of the (clean) data distribution. However, adding noise to the data set degrades the accuracy of the network within the data distribution, i.e., its generalization error. For nonlinear data manifolds, uniform noise may even render the labeled data incompatible, with nearby perturbed points receiving conflicting labels.
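A minimal numerical sketch of items 2 and 3 (with the same hypothetical setup as above, and a noise level sigma chosen only for illustration): the same network is trained once on clean data lying on $M$ and once on data perturbed by small Gaussian noise along the normal direction $n_M$, after which the magnitude of $\frac{\partial f}{\partial n_M}$ and the in-distribution fit can be compared.

```python
# Illustrative sketch (hypothetical sizes, noise level, and architecture):
# train the same ReLU network on (a) clean data lying on the subspace M and
# (b) data with small Gaussian noise of std sigma added along the normal
# direction n_M, then compare df/dn_M and the in-distribution error.
import torch

def make_net(d):
    return torch.nn.Sequential(
        torch.nn.Linear(d, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 1),
    )

def train(net, X, y, steps=2000, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        torch.nn.functional.mse_loss(net(X), y).backward()
        opt.step()
    return net

def normal_derivative(net, x, n_M):
    # |df/dn_M| at the point x.
    x = x.clone().requires_grad_(True)
    (g,) = torch.autograd.grad(net(x).squeeze(), x)
    return (g @ n_M).abs().item()

torch.manual_seed(0)
d, k, n, sigma = 10, 2, 500, 0.05
Q, _ = torch.linalg.qr(torch.randn(d, k + 1))
B, n_M = Q[:, :k], Q[:, k]

Z = torch.randn(n, k)
X_clean = Z @ B.T                                      # data on M
y = torch.sin(Z[:, 0:1]) + 0.5 * Z[:, 1:2]
X_noisy = X_clean + sigma * torch.randn(n, 1) * n_M    # noise along n_M only

for name, X in [("clean", X_clean), ("noisy", X_noisy)]:
    torch.manual_seed(1)                               # identical initialization
    net = train(make_net(d), X, y)
    dfdn = normal_derivative(net, X_clean[0], n_M)
    err = torch.nn.functional.mse_loss(net(X_clean), y).item()
    print(f"{name:5s}  |df/dn_M| = {dfdn:.4f}   in-distribution MSE = {err:.5f}")
```

In a run of this kind one would typically see a smaller transversal derivative for the noisy training set at the price of a somewhat larger in-distribution error, which is the stability-accuracy trade-off described in item 3.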

Publications

  • He, J., Tsai, R., & Ward, R. (2023). Side effects of learning from low-dimensional data embedded in a Euclidean space. Research in the Mathematical Sciences, 10(1), 13.