Top: Cross-entropy loss for a one-hidden-layer instance of Parametric UMAP versus a neural network trained to predict nonparametric embeddings using MSE on MNIST. The same network architectures are used in each case. The x-axis varies the number of neurons in the network's single hidden layer. The dashed gray line is the loss for the nonparametric embedding. Bottom: Projections corresponding to the losses shown in the panel above.
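The MSE baseline in this comparison can be sketched as follows. This is a minimal illustration, not the paper's implementation: synthetic arrays stand in for MNIST pixels and for precomputed nonparametric UMAP embeddings, and scikit-learn's `MLPRegressor` stands in for the single-hidden-layer network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-ins: random "images" and random 2-D "embeddings".
# In the actual comparison, X would be MNIST pixels and Y the
# 2-D coordinates produced by a nonparametric UMAP run.
X = rng.random((500, 784)).astype(np.float32)
Y = rng.normal(size=(500, 2)).astype(np.float32)

# One hidden layer; its width is the quantity varied on the x-axis.
model = MLPRegressor(hidden_layer_sizes=(100,), max_iter=200, random_state=0)
model.fit(X, Y)  # regress embeddings from inputs under MSE

pred = model.predict(X)
print(pred.shape)
```

Varying `hidden_layer_sizes` reproduces the sweep over hidden-layer width; the Parametric UMAP side of the comparison instead optimizes the UMAP cross-entropy objective directly through the same architecture.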