Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks


My first talk at UBC!
The theme of this semester's machine learning seminar was to (try to) better understand why deep learning works so well.
In this talk I presented and discussed a paper on generalization bounds for neural networks.

You can look at the slides here.