Lecture 211: Modal Analysis
jasonyang7

I'm curious why the authors of this paper decided that an autoencoder would be the best model type for this kind of task. The paper mentions that a much higher dimensionality than the simplified linear model is needed to fully capture what is going on in the system, but isn't the point of an autoencoder to effectively reduce the dimensionality of the problem?

jasonyang7

Also, the paper mentions coupling PCA with a deep encoding net. Doesn't PCA also reduce the dimensionality of the problem too much, just like the simplified linear model?

longh2000

PCA does reduce the dimensionality of the data, but the deep encoding net adds nonlinearity on top of it. I think that as long as the latent representation isn't deliberately made too low-dimensional, this is exactly the goal of the autoencoder: reduce dimensionality while keeping enough dimensions for expressivity.
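
As a toy sketch of that idea (not the paper's actual setup; the data, dimensions, and network sizes below are made up for illustration): let PCA capture the dominant linear structure, then let a small neural network learn the nonlinear residual that the linear reconstruction misses.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy data: a 1-D latent variable embedded nonlinearly in 10 dimensions,
# so a purely linear model cannot reconstruct it exactly.
t = rng.uniform(-1, 1, size=(2000, 1))
noise_cols = [0.01 * rng.standard_normal((2000, 1)) for _ in range(7)]
X = np.hstack([t, t**2, np.sin(3 * t)] + noise_cols)

# Step 1: PCA keeps a low-dimensional *linear* subspace.
pca = PCA(n_components=2)
Z = pca.fit_transform(X)          # low-dimensional code
X_lin = pca.inverse_transform(Z)  # linear reconstruction

# Step 2: a small nonlinear net learns the residual that PCA misses,
# standing in for the deep encoding net coupled with PCA.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(Z, X - X_lin)
X_nonlin = X_lin + net.predict(Z)

print("PCA-only reconstruction error:      ", np.mean((X - X_lin) ** 2))
print("PCA + nonlinear reconstruction error:", np.mean((X - X_nonlin) ** 2))
```

The code stays low-dimensional (2 components) throughout; the nonlinear part only improves how faithfully that low-dimensional code maps back to the full state.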

jierui-cell

It is interesting to see how the neural network was able to capture the nonlinear part of the model, and how adding it corrected for what PCA was missing. But it is still impressive how well PCA performs given that it is such a simple, easy-to-implement model.
