7—Exotic CNN architectures; RNN from scratch

This is the last lesson of part 1 of Practical Deep Learning For Coders! This lesson is in two parts:

  1. We look at a range of more ‘exotic’ CNN architectures to learn how to deal with multiple inputs (such as incorporating metadata), multiple outputs (including predicting bounding boxes for localization), creating heatmaps, and handling larger images. These architectures all draw on Keras’ functional API, which we learnt about in the last lesson; so be sure to study up on that powerful tool before diving into this lesson.
  2. We build a recurrent neural network from scratch in pure Python/NumPy, including implementing the gradient calculations and SGD updates ourselves. Then we build a gated recurrent unit (GRU) in Theano.
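To give a feel for the multiple-input idea before you open the notebook, here is a hedged numpy sketch (not the lesson's Keras code): image-derived features and a metadata vector are concatenated before a shared dense layer, which is the same shape of computation that Keras' functional API expresses with `Input`, `concatenate`, and `Dense`. All sizes below are illustrative.

```python
# Hedged sketch of a multiple-input model: concatenate image features with
# metadata, then apply one dense softmax layer. Pure numpy, toy sizes.
import numpy as np

rng = np.random.default_rng(1)

img_feats = rng.normal(size=(2, 512))  # e.g. pooled CNN activations, batch of 2
metadata = rng.normal(size=(2, 3))     # e.g. three numeric metadata columns

# The "merge" step: one combined representation per example
merged = np.concatenate([img_feats, metadata], axis=1)  # shape (2, 515)

W = rng.normal(0, 0.01, size=(515, 10))  # dense layer to 10 classes
b = np.zeros(10)
logits = merged @ W + b

# Row-wise softmax: one class distribution per example
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
print(probs.shape)  # (2, 10)
```

In the functional API the same structure is two `Input` layers merged into one graph; the point of the sketch is just that "multiple inputs" ends up as an ordinary concatenation once the shapes line up.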
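The second part of the lesson can likewise be previewed in miniature. The following is a minimal sketch, in the spirit of (but not copied from) the lesson's notebook, of a vanilla RNN where the gradient calculations (backpropagation through time) and the SGD updates are written out by hand in pure numpy; the toy sizes and learning rate are assumptions.

```python
# A vanilla RNN with hand-derived gradients and plain SGD, in pure numpy.
import numpy as np

rng = np.random.default_rng(0)

vocab, hidden = 4, 8                         # toy sizes: 4 "characters", 8 hidden units
Wxh = rng.normal(0, 0.1, (hidden, vocab))    # input -> hidden
Whh = rng.normal(0, 0.1, (hidden, hidden))   # hidden -> hidden
Why = rng.normal(0, 0.1, (vocab, hidden))    # hidden -> output
bh, by = np.zeros(hidden), np.zeros(vocab)

def step(inputs, targets, h, lr=0.1):
    """One forward/backward pass over a sequence, then an SGD update."""
    xs, hs, ps = {}, {-1: h}, {}
    loss = 0.0
    # Forward: standard tanh RNN with a softmax readout and cross-entropy loss
    for t, (i, tgt) in enumerate(zip(inputs, targets)):
        xs[t] = np.zeros(vocab); xs[t][i] = 1.0
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t - 1] + bh)
        y = Why @ hs[t] + by
        ps[t] = np.exp(y - y.max()); ps[t] /= ps[t].sum()
        loss -= np.log(ps[t][tgt])
    # Backward: backpropagation through time, derived by hand
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dbh, dby, dhnext = np.zeros_like(bh), np.zeros_like(by), np.zeros(hidden)
    for t in reversed(range(len(inputs))):
        dy = ps[t].copy(); dy[targets[t]] -= 1.0     # d(loss)/d(logits)
        dWhy += np.outer(dy, hs[t]); dby += dy
        dh = Why.T @ dy + dhnext                     # gradient into the hidden state
        draw = (1 - hs[t] ** 2) * dh                 # back through the tanh
        dWxh += np.outer(draw, xs[t]); dWhh += np.outer(draw, hs[t - 1]); dbh += draw
        dhnext = Whh.T @ draw
    # Plain SGD update, written out ourselves
    for W, dW in [(Wxh, dWxh), (Whh, dWhh), (Why, dWhy), (bh, dbh), (by, dby)]:
        W -= lr * dW
    return loss

seq = [0, 1, 2, 3, 0, 1, 2, 3]    # learn to predict the next "character" in a cycle
losses = [step(seq[:-1], seq[1:], np.zeros(hidden)) for _ in range(200)]
print(losses[0] > losses[-1])     # loss falls as the RNN memorises the cycle
```

The GRU in Theano follows the same pattern, except that Theano derives the gradients symbolically instead of us writing the backward pass by hand.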

These more advanced topics are designed to be a stepping stone towards part 2 of the course, which will be taught at the Data Institute at USF from Feb 27, 2017, and will be online sometime around May, 2017. It may take more study to get a complete understanding of this week’s lesson than some previous weeks—indeed, we will be revisiting some of this week’s material and discussing it in more depth in the next part of the course. In the meantime, we hope that you’ve got plenty to keep you busy! Many thanks for investing your time in this course!

If you’ve got something out of it, perhaps you could give back by contributing to the Fred Hollows Foundation? As we discussed in the class, just $25 is enough to restore sight to one eye. Since we’ve learnt to create software that can see, let’s help more people see too. :)