02_deep-convolutional-models-case-studies

Note

This is my personal note from studying the 2nd week of the Convolutional Neural Networks course; the copyright belongs to deeplearning.ai.

01_case-studies

01_why-look-at-case-studies

Hello and welcome back. This week, the first thing we'll do is show you a number of case studies of effective convolutional neural networks. So why look at case studies? Last week we learned about the basic building blocks of conv nets, such as convolutional layers, pooling layers and fully connected layers. It turns out that a lot of computer vision research in the past few years has been about how to put together these basic building blocks to form effective convolutional neural networks. And one of the best ways for you to gain intuition yourself is to see some of these examples. Just as many of you may have learned to write code by reading other people's code, a good way to get intuition on how to build conv nets is to read or see examples of effective conv nets. It also turns out that a neural network architecture that works well on one computer vision task often works well on other tasks as well, such as maybe your task. So if someone else has trained a neural network and come up with an architecture that is very good at recognizing cats, dogs, and people, but you have a different computer vision task, like maybe you're trying to build a self-driving car, you might well be able to take someone else's neural network architecture and apply it to your problem. And finally, after the next few videos, you'll be able to read some of the research papers from the field of computer vision, and I hope you'll find that satisfying as well. You don't have to do this for the class, but I hope you might find it satisfying to be able to read some of these seminal computer vision research papers and see that you're able to understand them. So with that, let's get started.

As an outline of what we’ll do in the next few videos, we’ll first show you a few classic networks.

The LeNet-5 network, which dates from 1998, AlexNet, which is often cited, and the VGG network. These are examples of pretty effective neural networks, and some of their ideas laid the foundation for modern computer vision. You'll see ideas from these papers that will probably be useful for your own work as well.

Then I want to show you ResNet, or the residual network. You might have heard that neural networks are getting deeper and deeper; the ResNet paper trained a very, very deep 152-layer neural network, and it has some very interesting tricks and ideas for how to do that effectively.

And then finally, you'll also see a case study of the Inception neural network. After seeing these neural networks, I think you'll have much better intuition about how to build effective convolutional neural networks. And even if you don't end up working in computer vision yourself, you'll find that a lot of the ideas from these examples, such as ResNet and the Inception network, are cross-fertilizing and making their way into other disciplines. So even if you don't end up building computer vision applications yourself, I think you'll find some of these ideas very interesting and helpful for your work.

02_classic-networks

In this video, you'll learn about some of the classic neural network architectures, starting with LeNet-5, then AlexNet, and then VGGNet. Let's take a look.


Here is the LeNet-5 architecture. You start off with an image that is, say, 32 by 32 by 1. The goal of LeNet-5 was to recognize handwritten digits, so maybe an image of a digit like that. LeNet-5 was trained on grayscale images, which is why it's 32 by 32 by 1. This neural network architecture is actually quite similar to the last example you saw last week. In the first step, you use a set of six 5 by 5 filters with a stride of one, and because you use six filters you end up with 28 by 28 by 6 over there. With a stride of one and no padding, the image dimensions reduce from 32 by 32 down to 28 by 28. Then the LeNet neural network applies pooling. Back when this paper was written, people used average pooling much more; if you were building a modern variant, you'd probably use max pooling instead. But in this example, you average pool with a filter width of two and a stride of two, so you wind up reducing the height and width by a factor of two, and we now end up with a 14 by 14 by 6 volume. The height and width of these volumes aren't entirely drawn to scale; technically, if I were drawing them to scale, the height and width would shrink by a factor of two. Next, you apply another convolutional layer. This time you use a set of 16 filters that are 5 by 5, so you end up with 16 channels in the next volume. Back when this paper was written in 1998, people didn't really use padding; they always used valid convolutions, which is why every time you apply a convolutional layer the height and width shrink. That's why you go from 14 by 14 down to 10 by 10. Then another pooling layer reduces the height and width by a factor of two again, so you end up with 5 by 5 over here. If you multiply all these numbers, 5 by 5 by 16, this multiplies out to 400 (25 times 16 is 400). The next layer is then a fully connected layer that fully connects each of these 400 nodes with every one of 120 neurons. Sometimes you would draw this explicitly as a layer with 400 nodes; I'm skipping that here. So there's a fully connected layer and then another fully connected layer. And then the final step uses these 84 features with one final output; you could draw one more node here to make a prediction for ŷ, and ŷ took on 10 possible values corresponding to recognizing each of the digits from 0 to 9. A modern version of this neural network would use a softmax layer with a 10-way classification output, although back then LeNet-5 actually used a different classifier at the output layer, one that isn't used today. So this neural network was small by modern standards; it had about 60,000 parameters. Today, you often see neural networks with anywhere from 10 million to 100 million parameters, and it's not unusual to see networks that are literally about a thousand times bigger than this network. But one thing you do see is that as you go deeper in the network, so as you go from left to right, the height and width tend to go down: from 32 by 32, to 28, to 14, to 10, to 5, whereas the number of channels increases, going from 1 to 6 to 16 as you go deeper into the layers of the network.
One other pattern you see in this neural network that's still often repeated today is that you might have one or more conv layers followed by a pooling layer, then one or more conv layers followed by another pooling layer, and then some fully connected layers and then the output. This type of arrangement of layers is quite common.
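
To make the dimensions above concrete, here is a minimal Keras sketch of a modernized LeNet-5-style model. It follows the layer sizes in the walkthrough, but uses max pooling, ReLU, and a softmax output rather than the original average pooling, sigmoid/tanh, and output classifier, so treat it as an illustrative variant rather than the paper's exact network.

```python
from tensorflow.keras import layers, models

# Modernized LeNet-5-style sketch: 32x32x1 -> 28x28x6 -> 14x14x6
# -> 10x10x16 -> 5x5x16 -> 400 -> 120 -> 84 -> 10-way softmax.
model = models.Sequential([
    layers.Conv2D(6, (5, 5), strides=1, activation="relu",
                  input_shape=(32, 32, 1)),                    # 28x28x6 (valid conv)
    layers.MaxPooling2D(pool_size=2, strides=2),               # 14x14x6
    layers.Conv2D(16, (5, 5), strides=1, activation="relu"),   # 10x10x16
    layers.MaxPooling2D(pool_size=2, strides=2),               # 5x5x16
    layers.Flatten(),                                          # 400 units
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(10, activation="softmax"),                    # digits 0-9
])
model.summary()  # roughly 60k parameters, matching the text
```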

Now finally, this is maybe only for those of you who want to try reading the paper. There are a couple of other things that were different. For the rest of this slide, I'm going to make a few more advanced comments, only for those of you who want to try to read this classic paper. Everything I'm going to write in red you can safely skip, and there's maybe an interesting historical footnote that's okay if you don't follow fully. It turns out that if you read the original paper, back then people used sigmoid and tanh nonlinearities; people weren't using ReLU nonlinearities back then. So if you look at the paper, you'll see sigmoid and tanh referred to. There are also some ways this network was wired that are funny by modern standards. For example, you've seen how if you have an nH by nW by nC volume with nC channels, then you use an f by f by nC dimensional filter, where every filter looks at every one of those channels. But back then, computers were much slower, and so to save on computation as well as on parameters, the original LeNet-5 had a crazily complicated scheme where different filters would look at different channels of the input block. The paper talks about those details, but a more modern implementation wouldn't have that type of complexity these days. One last thing that was done back then but isn't really done now is that the original LeNet-5 had a non-linearity after pooling; I think it actually uses a sigmoid non-linearity after the pooling layer. If you do read this paper, it is one of the harder ones to read of the papers we'll go over in the next few videos; the next one might be an easier one to start with. Most of the ideas on this slide appear in sections two and three of the paper, and later sections of the paper talk about some other ideas, such as something called the graph transformer network, which isn't widely used today. So if you do try to read this paper, I recommend focusing really on section two, which talks about this architecture, and maybe taking a quick look at section three, which has a bunch of experiments and results, which is pretty interesting.


The second example of a neural network I want to show you is AlexNet, named after Alex Krizhevsky, who was the first author of the paper describing this work. The other authors were Ilya Sutskever and Geoffrey Hinton. AlexNet's input starts with 227 by 227 by 3 images. If you read the paper, the paper refers to 224 by 224 by 3 images, but if you look at the numbers, I think they only make sense with 227 by 227. The first layer applies a set of 96 filters that are 11 by 11 with a stride of four, and because it uses a large stride of four, the dimension shrinks to 55 by 55, roughly going down by a factor of four because of the large stride. Then it applies max pooling with a 3 by 3 filter, so f equals three and a stride of two; this reduces the volume to 27 by 27 by 96. Then it performs a 5 by 5 same convolution, same padding, so you end up with 27 by 27 by 256. Max pooling again reduces the height and width to 13. Then another same convolution, so same padding, gives 13 by 13 by now 384 filters. Then a 3 by 3 same convolution again gives you that, then another 3 by 3 same convolution gives you that, and then a max pool brings it down to 6 by 6 by 256. If you multiply all these numbers, 6 times 6 times 256, that's 9,216, so we're going to unroll this into 9,216 nodes. Then finally, it has a few fully connected layers, and at the end it uses a softmax to output which one of 1,000 classes the object could be. So this neural network actually had a lot of similarities to LeNet, but it was much bigger: whereas LeNet-5 from the previous slide had about 60,000 parameters, AlexNet had about 60 million parameters. The fact that they could take pretty similar basic building blocks, have a lot more hidden units, and train on a lot more data (they trained on the ImageNet dataset) allowed it to have just remarkable performance. Another aspect of this architecture that made it much better than LeNet was using the ReLU activation function. And again, if you read the paper, there are some more advanced details that you don't really need to worry about if you don't read it. One is that when this paper was written, GPUs were still a little bit slower, so it had a complicated way of training on two GPUs. The basic idea was that a lot of these layers were actually split across two different GPUs, and there was a thoughtful design for when the two GPUs would communicate with each other. The original AlexNet architecture also had another type of layer called Local Response Normalization. This type of layer isn't really used much anymore, which is why I didn't talk about it. But the basic idea of Local Response Normalization (LRN) is that if you look at one of these volumes, say for the sake of argument the 13 by 13 by 256 one, you look at one position in height and width and look down across all the channels: you take all 256 numbers and normalize them. The motivation for Local Response Normalization was that for each position in this 13 by 13 image, maybe you don't want too many neurons with a very high activation. But subsequently, many researchers have found that this doesn't help that much, so this is one of those ideas I'm drawing in red because it's less important for you to understand.
And in practice, local response normalization isn't really used in the networks trained today. If you're interested in the history of deep learning: even before AlexNet, deep learning was starting to gain traction in speech recognition and a few other areas, but it was really this paper that convinced a lot of the computer vision community to take a serious look at deep learning, and convinced them that deep learning really works in computer vision. It then grew to have a huge impact not just in computer vision but beyond it as well. If you want to try reading some of these papers yourself (you really don't have to for this course), this one is one of the easier ones to read, so it might be a good one to start with. So whereas AlexNet had a relatively complicated architecture, there were just a lot of hyperparameters, right? You have all these numbers that Alex Krizhevsky and his co-authors had to come up with.
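
As a rough illustration of the layer dimensions described above, here is a minimal Keras sketch of an AlexNet-like model. It follows the filter sizes, strides, and channel counts in the text but omits details of the original paper such as the two-GPU split and Local Response Normalization, so it is a sketch rather than a faithful reproduction.

```python
from tensorflow.keras import layers, models

# AlexNet-like sketch following the dimensions in the text:
# 227x227x3 -> 55x55x96 -> 27x27x96 -> 27x27x256 -> 13x13x256
# -> 13x13x384 -> 13x13x384 -> 13x13x256 -> 6x6x256 -> FC -> 1000-way softmax
model = models.Sequential([
    layers.Conv2D(96, (11, 11), strides=4, activation="relu",
                  input_shape=(227, 227, 3)),                        # 55x55x96
    layers.MaxPooling2D(pool_size=3, strides=2),                     # 27x27x96
    layers.Conv2D(256, (5, 5), padding="same", activation="relu"),   # 27x27x256
    layers.MaxPooling2D(pool_size=3, strides=2),                     # 13x13x256
    layers.Conv2D(384, (3, 3), padding="same", activation="relu"),   # 13x13x384
    layers.Conv2D(384, (3, 3), padding="same", activation="relu"),   # 13x13x384
    layers.Conv2D(256, (3, 3), padding="same", activation="relu"),   # 13x13x256
    layers.MaxPooling2D(pool_size=3, strides=2),                     # 6x6x256
    layers.Flatten(),                                                # 9216 nodes
    layers.Dense(4096, activation="relu"),
    layers.Dense(4096, activation="relu"),
    layers.Dense(1000, activation="softmax"),
])
```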


Let me show you a third and final example in this video, called the VGG, or VGG-16, network. A remarkable thing about the VGG-16 net is that instead of having so many hyperparameters, it uses a much simpler design where you focus on just having conv layers that are 3 by 3 filters with a stride of one, always using same padding, and you make all your max pooling layers 2 by 2 with a stride of two. So one very nice thing about the VGG network is that it really simplified these neural network architectures. Let's go through the architecture. You start off with an image, and then the first two layers are convolutions, which are these 3 by 3 filters. The first two layers use 64 filters; you end up with 224 by 224 (because these are same convolutions) by 64 channels. Because VGG-16 is a relatively deep network, I'm not going to draw all the volumes here. What this little picture denotes is what we would previously have drawn as a 224 by 224 by 3 volume, then a convolution that results in a 224 by 224 by 64 volume drawn as a deeper volume, and then another layer that results in 224 by 224 by 64. So this CONV 64 times two represents that you're doing two conv layers with 64 filters. And as I mentioned earlier, the filters are always 3 by 3 with a stride of one, and they are always same convolutions. Rather than drawing all these volumes, I'm just going to use text to represent this network. Next, it uses a pooling layer, which reduces 224 by 224 down to 112 by 112 by 64. Then it has a couple more conv layers with 128 filters, and because these are same convolutions the new dimension is 112 by 112 by 128, and then another pooling layer, so you can figure out the new dimension after that. Then three conv layers with 256 filters followed by a pooling layer, then a few more conv layers, a pooling layer, more conv layers, and a pooling layer. It then takes this final 7 by 7 by 512 volume into a fully connected layer with 4,096 units, another fully connected layer with 4,096 units, and then a softmax output over 1,000 classes. By the way, the 16 in VGG-16 refers to the fact that this network has 16 layers that have weights. This is a pretty large network; it has a total of about 138 million parameters, which is pretty large even by modern standards. But the simplicity of the VGG-16 architecture made it quite appealing. You can tell this architecture is really quite uniform: there are a few conv layers followed by a pooling layer, which reduces the height and width, and you have a few of those. And if you look at the number of filters in the conv layers, you have 64 filters, then it doubles to 128, doubles to 256, and doubles to 512. Then I guess the authors thought 512 was big enough and didn't double again after that. This roughly doubling at every step, or doubling through every stack of conv layers, was another simple principle used to design the architecture of this network. And so I think the relative uniformity of this architecture made it quite attractive to researchers.

The main downside was that it was a pretty large network in terms of the number of parameters you had to train. If you read the literature, you sometimes see people talk about VGG-19, which is an even bigger version of this network; you can see the details in the paper cited at the bottom by Karen Simonyan and Andrew Zisserman. But because VGG-16 does almost as well as VGG-19, a lot of people will use VGG-16. The thing I liked most about this is the pattern of how, as you go deeper, the height and width go down, by a factor of two each time through the pooling layers, whereas the number of channels increases, roughly by a factor of two every time you have a new set of conv layers. By making the rate at which these go down and go up very systematic, I thought this paper was very attractive from that perspective.
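
Here is a minimal Keras sketch of the full VGG-16 pattern described above: 3 by 3 same convolutions, 2 by 2 max pooling with a stride of two, and filter counts doubling from 64 up to 512, followed by the fully connected layers. The helper function is an illustrative convenience, not part of the original paper's description.

```python
from tensorflow.keras import layers, models

def add_conv_block(model, filters, n_convs):
    """Add n_convs 3x3 same convolutions followed by a 2x2, stride-2 max pool."""
    for _ in range(n_convs):
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
    model.add(layers.MaxPooling2D(pool_size=2, strides=2))

model = models.Sequential()
model.add(layers.Conv2D(64, (3, 3), padding="same", activation="relu",
                        input_shape=(224, 224, 3)))   # 224x224x64
model.add(layers.Conv2D(64, (3, 3), padding="same", activation="relu"))
model.add(layers.MaxPooling2D(pool_size=2, strides=2))  # 112x112x64
add_conv_block(model, 128, 2)                           # 56x56x128 after the pool
add_conv_block(model, 256, 3)                           # 28x28x256 after the pool
add_conv_block(model, 512, 3)                           # 14x14x512 after the pool
add_conv_block(model, 512, 3)                           # 7x7x512 after the pool
model.add(layers.Flatten())
model.add(layers.Dense(4096, activation="relu"))
model.add(layers.Dense(4096, activation="relu"))
model.add(layers.Dense(1000, activation="softmax"))
# 13 conv layers + 3 fully connected layers = the 16 weight layers of VGG-16,
# totaling roughly 138 million parameters.
```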

So that's it for the three classic architectures. If you want, you should really now read some of these papers. I recommend starting with the AlexNet paper, followed by the VGG net paper; the LeNet paper is a bit harder to read, but it is a good classic once you get through the others. But next, let's go beyond these classic networks and look at some even more advanced, even more powerful neural network architectures. Let's go on to the next video.

03_resnets

Very, very deep neural networks are difficult to train because of vanishing and exploding gradient types of problems. In this video, you'll learn about skip connections, which allow you to take the activation from one layer and feed it to another layer much deeper in the neural network. And using that, you'll build ResNets, which enable you to train very, very deep networks, sometimes even networks of over 100 layers. Let's take a look.


ResNets are built out of something called a residual block, so let's first describe what that is. Here are two layers of a neural network where you start off with some activation a[l], then go to a[l+1], and the activation two layers later is a[l+2]. Let's go through the steps of this computation. You have a[l], and the first thing you do is apply the linear operator to it, which is governed by this equation: you go from a[l] to compute z[l+1] by multiplying by the weight matrix and adding the bias vector. After that, you apply the ReLU nonlinearity to get a[l+1], governed by the equation a[l+1] = g(z[l+1]). Then in the next layer, you apply the linear step again, governed by an equation quite similar to the one we saw on the left. And finally, you apply another ReLU operation, governed by that equation, where g is again the ReLU nonlinearity, and this gives you a[l+2]. So in other words, for information from a[l] to flow to a[l+2], it needs to go through all of these steps, which I'm going to call the main path of this set of layers. In a residual network, we're going to make a change to this: we're going to take a[l], fast forward it, copy it much further into the neural network, and add a[l] just before applying the ReLU non-linearity. I'm going to call this the shortcut. So rather than needing to follow the main path, the information from a[l] can now follow a shortcut to go much deeper into the neural network. What that means is that the last equation goes away, and we instead have that the output a[l+2] is the ReLU non-linearity g applied to z[l+2] as before, but now plus a[l]: a[l+2] = g(z[l+2] + a[l]). The addition of this a[l] is what makes this a residual block. In pictures, you can also modify the picture on top by drawing this shortcut, and we draw it as going into the second layer because the shortcut is actually added before the ReLU non-linearity. Each of these nodes applies a linear function and a ReLU, so a[l] is being injected after the linear part but before the ReLU part. Sometimes instead of the term shortcut, you also hear the term skip connection, which refers to a[l] just skipping over a layer, or skipping over almost two layers, in order to pass information deeper into the neural network. What the inventors of ResNet (that would be Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun) found was that using residual blocks allows you to train much deeper neural networks. And the way you build a ResNet is by taking many of these residual blocks and stacking them together to form a deep network.
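
As a concrete illustration of the residual block just described, here is a minimal Keras sketch using the functional API. The two 3 by 3 same convolutions, the filter count, and the helper name residual_block are illustrative assumptions; the actual ResNet paper also uses batch normalization and, in deeper variants, bottleneck blocks.

```python
from tensorflow.keras import layers

def residual_block(a_l, filters=64):
    """A simple residual block: a two-conv main path plus a shortcut.

    The shortcut adds a[l] to z[l+2] *before* the final ReLU, so
    a[l+2] = g(z[l+2] + a[l]).
    """
    shortcut = a_l                                               # the skip connection
    z = layers.Conv2D(filters, (3, 3), padding="same")(a_l)      # z[l+1]
    a = layers.Activation("relu")(z)                             # a[l+1] = g(z[l+1])
    z = layers.Conv2D(filters, (3, 3), padding="same")(a)        # z[l+2]
    z_plus_shortcut = layers.Add()([z, shortcut])                # z[l+2] + a[l]
    return layers.Activation("relu")(z_plus_shortcut)            # a[l+2]

# Example: stack two residual blocks on a 56x56x64 input volume
# (the input channels must match `filters` so the addition is valid).
x = layers.Input(shape=(56, 56, 64))
out = residual_block(residual_block(x))
```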


So, let's look at this network. This is not a residual network; it is called a plain network (this is the terminology of the ResNet paper). To turn this into a ResNet, what you do is add all those skip connections, or shortcut connections, like so. Every two layers ends up with the additional change we saw on the previous slide, turning each pair into a residual block. So this picture shows five residual blocks stacked together, and this is a residual network. It turns out that if you use a standard optimization algorithm, such as gradient descent or one of the fancier optimization algorithms, to train a plain network (so without all the extra shortcuts or skip connections I just drew in), empirically you find that as you increase the number of layers, the training error tends to decrease for a while but then tends to go back up. In theory, as you make a neural network deeper, it should only do better and better on the training set; having a deeper network should only help. But in practice, having a plain network that is very deep means that your optimization algorithm has a much harder time training, and so, in reality, your training error gets worse if you pick a network that's too deep. What happens with ResNets is that even as the number of layers gets deeper, you can have the training error keep going down, even if you train a network with over a hundred layers. Some people are even experimenting with networks of over a thousand layers, although I don't see that used much in practice yet. But by taking these activations, be it x or these intermediate activations, and allowing them to go much deeper into the neural network, this really helps with the vanishing and exploding gradient problems and allows you to train much deeper neural networks without a real loss in performance. Maybe at some point this will plateau and flatten out, and going deeper and deeper won't help much more; but ResNets really are effective at helping you train very deep networks.

So you've now gotten an overview of how ResNets work. And in fact, in this week's programming exercise, you get to implement these ideas and see them work for yourself. But next, I want to share with you some better intuition, or even more intuition, about why ResNets work so well. Let's go on to the next video.

04_why-resnets-work

So, why do ResNets work so well? Let's go through one example that illustrates why ResNets work so well, at least in the sense of how you can make them deeper and deeper without really hurting your ability to at least get them to do well on the training set. And hopefully, as you've understood from the third course in this sequence, doing well on the training set is usually a prerequisite to doing well on your hold-out, or dev, or test sets. So, being able to at least train a ResNet to do well on the training set is a good first step toward that. Let's look at an example.

What we saw in the last video was that if you make a network deeper, it can hurt your ability to train the network to do well on the training set. And that's why sometimes you don't want a network that is too deep. But this is not true, or at least is much less true, when you train a ResNet. So let's go through an example.


Let's say you have x feeding into some big neural network that outputs some activation a[l]. Let's say for this example that you are going to modify the neural network to make it a little bit deeper. So, you use the same big NN, which outputs a[l], and we're going to add a couple of extra layers to this network, one layer there and another layer there, and this outputs a[l+2]. Only let's make this a ResNet block, a residual block with that extra shortcut. And for the sake of argument, let's say that throughout this network we're using ReLU activation functions, so all the activations are going to be greater than or equal to zero, with the possible exception of the input x, because the ReLU activation outputs numbers that are either zero or positive. Now, let's look at what a[l+2] will be. Copying the expression from the previous video, a[l+2] will be the ReLU applied to z[l+2] plus a[l], where this addition of a[l] comes from the shortcut, from the skip connection that we just added. And if we expand this out, this is equal to g(w[l+2] a[l+1] + b[l+2] + a[l]), since z[l+2] is equal to w[l+2] a[l+1] + b[l+2]. Now notice something: if you are using L2 regularization, or weight decay, that will tend to shrink the value of w[l+2]. If you are applying weight decay to b, that will also shrink it, although in practice sometimes you do and sometimes you don't apply weight decay to b; w is really the key term to pay attention to here. And if w[l+2] is equal to zero, and let's say for the sake of argument that b[l+2] is also equal to zero, then those terms go away, and g(a[l]) is just equal to a[l], because we assumed we're using the ReLU activation function and so all of the activations are non-negative; g(a[l]) is the ReLU applied to a non-negative quantity, so you just get back a[l]. What this shows is that the identity function is easy for a residual block to learn: it's easy to get a[l+2] equal to a[l] because of this skip connection. And what that means is that adding these two layers to your neural network doesn't really hurt your neural network's ability to do as well as the simpler network without these two extra layers, because it's quite easy for it to learn the identity function and just copy a[l] to a[l+2] despite the addition of these two layers. This is why adding this residual block somewhere in the middle or at the end of a big neural network doesn't hurt performance. But of course our goal is not just to not hurt performance; it's to help performance, and so you can imagine that if all of these hidden units actually learn something useful, then maybe you can do even better than learning the identity function. What goes wrong in very deep plain nets, very deep networks without these residual or skip connections, is that when you make the network deeper and deeper, it's actually very difficult for it to choose parameters that learn even the identity function, which is why a lot of layers end up making your result worse rather than better. I think the main reason the residual network works is that it's so easy for these extra layers to learn the identity function that you're kind of guaranteed that it doesn't hurt performance, and then a lot of the time you may get lucky and it even helps performance.
At least it is easy to start from a decent baseline of not hurting performance, and then gradient descent can only improve the solution from there.
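
Written out compactly (a restatement of the derivation above, with g the ReLU non-linearity and with weight decay driving w[l+2] and b[l+2] toward zero):

```latex
\begin{aligned}
a^{[l+2]} &= g\big(z^{[l+2]} + a^{[l]}\big)
           = g\big(W^{[l+2]} a^{[l+1]} + b^{[l+2]} + a^{[l]}\big) \\
W^{[l+2]} = 0,\; b^{[l+2]} = 0
  \;\Rightarrow\; a^{[l+2]} &= g\big(a^{[l]}\big) = a^{[l]}
  \quad \text{(since } a^{[l]} \ge 0 \text{ and } g \text{ is the ReLU)}.
\end{aligned}
```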


So, one more detail about the residual network that's worth discussing: in this addition, we're assuming that z[l+2] and a[l] have the same dimension. And so what you see in ResNets is a lot of use of same convolutions, so that the dimension of the output is equal to the dimension of the input layer and we can actually do this shortcut connection; because same convolutions preserve dimensions, it's easy to carry out the shortcut and this addition of two equal-dimension vectors. In case the input and output have different dimensions, for example if a[l] is 128-dimensional and z[l+2], and therefore a[l+2], is 256-dimensional, what you would do is add an extra matrix, call it Ws; in this example Ws would be a 256 by 128 dimensional matrix, so that Ws times a[l] becomes 256-dimensional and the addition is now between two 256-dimensional vectors. There are a few things you could do with Ws: it could be a matrix of parameters that you learn, or it could be a fixed matrix that just implements zero padding, taking a[l] and zero padding it to be 256-dimensional; either of those versions could work.
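
When the shortcut and main path dimensions don't match, one common implementation choice (a sketch, not necessarily the exact configuration used in the ResNet paper) is to implement Ws as a learned 1 by 1 convolution on the shortcut, as in the following Keras snippet.

```python
from tensorflow.keras import layers

def residual_block_projection(a_l, out_filters):
    """Residual block whose shortcut uses a learned 1x1 conv (the Ws matrix)
    to change the number of channels so the addition is well-defined."""
    z = layers.Conv2D(out_filters, (3, 3), padding="same")(a_l)
    a = layers.Activation("relu")(z)
    z = layers.Conv2D(out_filters, (3, 3), padding="same")(a)               # z[l+2]
    ws_shortcut = layers.Conv2D(out_filters, (1, 1), padding="same")(a_l)   # Ws * a[l]
    return layers.Activation("relu")(layers.Add()([z, ws_shortcut]))        # a[l+2]

# Example: a[l] has 128 channels, the block's output has 256 channels.
x = layers.Input(shape=(28, 28, 128))
out = residual_block_projection(x, out_filters=256)
```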


So finally, let's take a look at ResNets on images. These are images I got from the ResNet paper by He et al. This is an example of a plain network, in which you input an image and then have a number of conv layers until eventually you have a softmax output at the end. To turn this into a ResNet, you add those extra skip connections. I'll just mention a few details: there are a lot of 3 by 3 convolutions here, and most of these are 3 by 3 same convolutions, which is why you're adding equal-dimension feature vectors. These are actually convolutional layers rather than fully connected layers, but because they are same convolutions, the dimensions are preserved, and so the addition z[l+2] plus a[l] makes sense. Similar to what you've seen in a lot of networks before, you have a bunch of convolutional layers and then occasionally there are pooling layers, or pooling-like layers, as well. Whenever one of those occurs, you need to make an adjustment to the dimension, which we saw on the previous slide you can do with the matrix Ws. And then, as is common in these networks, you have conv conv conv pool, conv conv conv pool, conv conv conv pool, and at the end you have a fully connected layer that makes a prediction using a softmax.

So that's it for ResNets. Next, there's a very interesting idea behind using one by one filters, or one by one convolutions, in neural networks. Let's take a look at that in the next video.

05_networks-in-networks-and-1x1-convolutions

In terms of designing ConvNet architectures, one of the ideas that really helps is using a one by one convolution. Now, you might be wondering, what does a one by one convolution do? Isn't that just multiplying by a number? That seems like a funny thing to do. It turns out it's not quite like that. Let's take a look.


So here's a one by one filter, and we'll put the number two in there. If you take a six by six image, six by six by one, and convolve it with this one by one by one filter, you end up just taking the image and multiplying it by two: one, two, three ends up being two, four, six, and so on. So a convolution with a one by one filter doesn't seem particularly useful; you just multiply by some number. But that's the case for a six by six by one channel image. If you have a 6 by 6 by 32 volume instead of 6 by 6 by 1, then a convolution with a 1 by 1 filter can do something that makes much more sense. In particular, what a one by one convolution will do is look at each of the 36 different positions, take the element-wise product between the 32 numbers at that position and the 32 numbers in the filter, and then apply a ReLU non-linearity. So to look at one of the 36 positions, one slice through this volume, you take those 32 numbers, multiply them by one slice through the filter, and you end up with a single real number, which then gets placed at one of the positions of the output. In fact, one way to think about the 32 numbers in this 1 by 1 by 32 filter is that it's as if you have a single neuron taking as input 32 numbers, the numbers in one slice at the same height and width position across these 32 different channels, multiplying them by 32 weights, applying a ReLU non-linearity, and then outputting the corresponding value. More generally, if you have not just one filter but multiple filters, then it's as if you have not just one unit but multiple units, taking as input all the numbers in one slice and building them up into an output that is six by six by the number of filters. So one way to think about the one by one convolution is that it is basically a fully connected neural network that applies to each of the 36 different positions. What does that fully connected network do? It takes 32 numbers as input and outputs as many numbers as you have filters; as a point of notation, the number of filters is really nC of the next layer, nC[l+1]. By doing this at each of the 36 positions, each of the six by six positions, you end up with an output that is six by six by the number of filters. And this can carry out a pretty non-trivial computation on your input volume. This idea is often called a one by one convolution, but it's sometimes also called Network in Network, and is described in a paper by Min Lin, Qiang Chen, and Shuicheng Yan. Even though the details of the architecture in that paper aren't used widely, this idea of a one by one convolution, or the Network in Network idea, has been very influential; it has influenced many other neural network architectures, including the Inception network, which we'll see in the next video.
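
A small NumPy sketch of the interpretation above: a 1 by 1 convolution over a 6 by 6 by 32 volume is the same as applying one small fully connected layer (32 inputs, one output per filter, followed by a ReLU) independently at each of the 36 spatial positions. The shapes follow the example in the text; everything else is illustrative.

```python
import numpy as np

def conv_1x1(volume, filters):
    """volume: (H, W, C_in); filters: (C_in, n_filters). Returns (H, W, n_filters).

    At every spatial position, take the C_in channel values, multiply by the
    filter weights, sum, and apply a ReLU -- i.e. a tiny dense layer per position.
    """
    z = np.einsum("hwc,cf->hwf", volume, filters)   # per-position dot products
    return np.maximum(z, 0)                         # ReLU

volume = np.random.randn(6, 6, 32)     # 6 x 6 x 32 input
filters = np.random.randn(32, 16)      # sixteen 1x1x32 filters
out = conv_1x1(volume, filters)
print(out.shape)                       # (6, 6, 16)
```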


But to give you an example of where a one by one convolution is useful, here's something you could do with it. Let's say you have a 28 by 28 by 192 volume. If you want to shrink the height and width, you can use a pooling layer; we know how to do that. But what if the number of channels has gotten too big and we want to shrink that? How do you shrink it to a 28 by 28 by 32 dimensional volume? Well, what you can do is use 32 filters that are one by one. Technically, each filter would be of dimension 1 by 1 by 192, because the number of channels in your filter has to match the number of channels in your input volume, but you use 32 filters, and the output of this process will be a 28 by 28 by 32 volume. So this is a way to let you shrink nC as well, whereas pooling layers are used just to shrink nH and nW, the height and width of these volumes. And we'll see later how this idea of one by one convolutions allows you to shrink the number of channels and therefore save on computation in some networks.
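
In Keras, shrinking the channel dimension this way is a single layer; a minimal sketch (the input shape and filter count are taken from the example in the text):

```python
from tensorflow.keras import layers

x = layers.Input(shape=(28, 28, 192))                 # 28 x 28 x 192 volume
# Thirty-two 1x1x192 filters: shrinks the channels from 192 to 32
# while leaving the 28 x 28 height and width untouched.
y = layers.Conv2D(32, (1, 1), activation="relu")(x)   # output: 28 x 28 x 32
```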

But of course, if you want to keep the number of channels at 192, that's fine too. The effect of the one by one convolution is then that it just adds non-linearity: it allows you to learn a more complex function by adding another layer that inputs 28 by 28 by 192 and outputs 28 by 28 by 192.

So, you've now seen how a one by one convolutional layer actually does something pretty non-trivial: it adds non-linearity to your neural network and allows you to shrink the number of channels in your volumes, keep it the same, or even increase it if you want. In the next video, you'll see that this can be used to help build up to the Inception network. Let's go on to the next video.

06_inception-network-motivation

When designing a layer for a ConvNet, you might have to pick: do you want a 1 by 1 filter, or 3 by 3, or 5 by 5, or do you want a pooling layer? What the Inception network says is, why not do them all? This makes the network architecture more complicated, but it also works remarkably well. Let's see how this works.


Let's say, for the sake of example, that you have input a 28 by 28 by 192 dimensional volume. What the Inception network, or an Inception layer, says is: instead of choosing what filter size you want in a conv layer, or even whether you want a convolutional layer or a pooling layer, let's do them all. So you could use a 1 by 1 convolution, and that will output a 28 by 28 by something; let's say a 28 by 28 by 64 output, and you just have a volume there. But maybe you also want to try a 3 by 3, and that might output 28 by 28 by 128; what you do is just stack up this second volume next to the first volume. To make the dimensions match up, let's make this a same convolution, so the output is still 28 by 28, the same as the input in terms of height and width, but 28 by 28 by, in this example, 128. And maybe you might say, well, I want to hedge my bets: maybe a 5 by 5 filter works better, so let's do that too and have it output 28 by 28 by 32, again using same convolutions to keep the dimensions the same. And maybe you don't want a convolutional layer at all; then let's apply pooling, which gives some other output, and let's stack that up as well; here the pooling outputs 28 by 28 by 32. Now, in order to make all the dimensions match, you actually need to use padding for the max pooling. This is an unusual form of pooling, because if you want the 28 by 28 input to give an output whose height and width match everything else at 28 by 28, you need to use same padding as well as a stride of one for the pooling. This detail might seem a bit funny to you now, but let's keep going; we'll make it all work later. So with an Inception module like this, you can input some volume and output, in this case, if you add up all these numbers, 32 plus 32 plus 128 plus 64, which is equal to 256. So you will have one Inception module that inputs 28 by 28 by 192 and outputs 28 by 28 by 256. This is the heart of the Inception network, which is due to Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke and Andrew Rabinovich. The basic idea is that instead of needing to pick one of these filter sizes or pooling and committing to it, you can do them all, just concatenate all the outputs, and let the network learn whatever parameters it wants to use, whatever combinations of these filter sizes it wants. Now, it turns out that there is a problem with the Inception layer as we've described it here, which is computational cost. On the next slide, let's figure out the computational cost of this 5 by 5 filter resulting in this block over here.


So just focusing on the 5 by 5 part from the previous slide: we had as input a 28 by 28 by 192 block, and you implement a 5 by 5 same convolution with 32 filters to output 28 by 28 by 32. On the previous slide I had drawn this as a thin purple slice; I'm just going to draw it as a more normal-looking blue block here. Let's look at the computational cost of outputting this 28 by 28 by 32. You have 32 filters, because the output has 32 channels, and each filter is going to be 5 by 5 by 192. The output size is 28 by 28 by 32, so you need to compute 28 by 28 by 32 numbers, and for each of them you need to do 5 times 5 times 192 multiplications. So the total number of multiplies you need is the number of multiplies per output value times the number of output values you need to compute, and if you multiply all of these numbers together, this is equal to about 120 million. And while you can do 120 million multiplies on a modern computer, this is still a pretty expensive operation. On the next slide you'll see how, using the idea of 1 by 1 convolutions, which you learned about in the previous video, you'll be able to reduce the computational cost by about a factor of 10, going from about 120 million multiplies to about one tenth of that. So please remember the number 120 million so you can compare it with what you see on the next slide.


Here is an alternative architecture for inputting 28 by 28 by 192 and outputting 28 by 28 by 32, which is the following. You take the input volume, use a 1 by 1 convolution to reduce the volume to 16 channels instead of 192 channels, and then, on this much smaller volume, run your 5 by 5 convolution to give you the final output. Notice that the input and output dimensions are still the same: you input 28 by 28 by 192 and output 28 by 28 by 32, same as on the previous slide. But what we've done is take the huge volume we had on the left and shrink it to a much smaller intermediate volume, which has only 16 instead of 192 channels. Sometimes this is called a bottleneck layer; I guess because a bottleneck is usually the smallest part of something. If you have a glass bottle, the bottleneck is the smallest part of the bottle, and in the same way, the bottleneck layer is the smallest part of this network: we shrink the representation before increasing the size again. Now let's look at the computational costs involved. To apply this 1 by 1 convolution, we have 16 filters, and each of the filters is of dimension 1 by 1 by 192; this 192 matches that 192. So the cost of computing this 28 by 28 by 16 volume is that you need that many outputs, and for each of them you need to do 192 multiplications (I could have written 1 times 1 times 192). If you multiply this out, it's about 2.4 million. So that's the cost of the first convolutional layer. The cost of the second convolutional layer is that you have 28 by 28 by 32 outputs, and for each of the outputs you have to apply a 5 by 5 by 16 dimensional filter; multiplying that out gives about 10 million. So the total number of multiplications you need to do is the sum of those, which is about 12.4 million multiplications. If you compare this with what we had on the previous slide, you reduce the computational cost from about 120 million multiplies down to about one tenth of that, to about 12.4 million multiplications. (The number of additions you need to do is very similar to the number of multiplications, which is why I'm only counting multiplications.)
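
A quick sanity check of the arithmetic above (the numbers are exactly the ones in the text):

```python
# Naive 5x5 same convolution: 28x28x32 outputs, each needing 5*5*192 multiplies.
naive = 28 * 28 * 32 * (5 * 5 * 192)
# Bottleneck version: 1x1 conv down to 16 channels, then the 5x5 conv on top of it.
reduce_1x1 = 28 * 28 * 16 * (1 * 1 * 192)
conv_5x5   = 28 * 28 * 32 * (5 * 5 * 16)

print(naive)                   # 120422400  (~120 million)
print(reduce_1x1, conv_5x5)    # 2408448 and 10035200
print(reduce_1x1 + conv_5x5)   # 12443648  (~12.4 million, about one tenth)
```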

So to summarize: if you are building a layer of a neural network and you don't want to have to decide between a 1 by 1, 3 by 3, or 5 by 5 convolution, or a pooling layer, the Inception module lets you do them all and concatenate the results. We then ran into the problem of computational cost, and what you saw here was how, using a 1 by 1 convolution, you can create a bottleneck layer and thereby reduce the computational cost significantly. Now you might be wondering: does shrinking down the representation size so dramatically hurt the performance of your neural network? It turns out that as long as you implement the bottleneck layer within reason, you can shrink down the representation size significantly without it seeming to hurt performance, while saving a lot of computation. So these are the key ideas of the Inception module. Let's put them together, and in the next video I'll show you what the full Inception network looks like.

07_inception-network

In a previous video, you’ve already seen all the basic building blocks of the Inception network. In this video, let’s see how you can put these building blocks together to build your own Inception network.


So the Inception module takes as input the activation, or the output, from some previous layer. Let's say, for the sake of argument, this is 28 by 28 by 192, same as in our previous video. The example we worked through in depth was the 1 by 1 followed by the 5 by 5; there, the 1 by 1 has 16 channels, and the 5 by 5 outputs 28 by 28 by, let's say, 32 channels. This is the example we worked through on the last slide of the previous video. Then, to save computation on your 3 by 3 convolution, you can also do the same there, and the 3 by 3 outputs 28 by 28 by 128. Then maybe you want a 1 by 1 convolution as well; there's no need to do a 1 by 1 conv followed by another 1 by 1 conv, so there's just one step there, and let's say it outputs 28 by 28 by 64. And then finally there is the pooling layer. Here I'm going to do something funny: in order to really concatenate all of these outputs at the end, we are going to use same padding for the pooling, so that the output height and width is still 28 by 28 and we can concatenate it with these other outputs. But notice that if you do max pooling, even with same padding, a 3 by 3 filter, and a stride of 1, the output here will be 28 by 28 by 192: it will have the same number of channels, the same depth, as the input we had. That seems like a lot of channels, so what we're going to do is add one more 1 by 1 conv layer, like what we saw in the 1 by 1 convolution video, to shrink the number of channels, getting it down to 28 by 28 by, let's say, 32. The way you do that is to use 32 filters of dimension 1 by 1 by 192, so the output dimension has the number of channels shrunk down to 32, and we don't end up with the pooling branch taking up all the channels in the final output. Finally, you take all of these blocks and do channel concatenation: you just concatenate across this 64 plus 128 plus 32 plus 32, which, if you add it up, gives you a 28 by 28 by 256 dimensional output. Channel concat just means concatenating the blocks the way we saw in the previous video. So this is one Inception module, and what the Inception network does is, more or less, put a lot of these modules together.
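
Here is a minimal Keras sketch of the Inception module just described, using the functional API. The branch output channel counts (64, 128, 32, 32) and the 16-channel bottleneck before the 5 by 5 come from the text; the 96-channel bottleneck before the 3 by 3 and other details are illustrative assumptions rather than the exact GoogLeNet configuration.

```python
from tensorflow.keras import layers

def inception_module(x):
    """One Inception module: four parallel branches, concatenated along channels."""
    # Branch 1: plain 1x1 convolution -> 28x28x64
    b1 = layers.Conv2D(64, (1, 1), padding="same", activation="relu")(x)
    # Branch 2: 1x1 bottleneck, then 3x3 -> 28x28x128
    b2 = layers.Conv2D(96, (1, 1), padding="same", activation="relu")(x)
    b2 = layers.Conv2D(128, (3, 3), padding="same", activation="relu")(b2)
    # Branch 3: 1x1 bottleneck (16 channels), then 5x5 -> 28x28x32
    b3 = layers.Conv2D(16, (1, 1), padding="same", activation="relu")(x)
    b3 = layers.Conv2D(32, (5, 5), padding="same", activation="relu")(b3)
    # Branch 4: 3x3 max pooling (stride 1, same padding), then 1x1 -> 28x28x32
    b4 = layers.MaxPooling2D(pool_size=3, strides=1, padding="same")(x)
    b4 = layers.Conv2D(32, (1, 1), padding="same", activation="relu")(b4)
    # Channel concatenation: 64 + 128 + 32 + 32 = 256 channels.
    return layers.Concatenate(axis=-1)([b1, b2, b3, b4])

x = layers.Input(shape=(28, 28, 192))
out = inception_module(x)          # 28 x 28 x 256
```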


Here's a picture of the Inception network, taken from the paper by Szegedy et al., and you'll notice a lot of repeated blocks in it. Maybe this picture looks really complicated, but if you look at one of the blocks, that block is basically the Inception module you saw on the previous slide. And subject to little details I won't discuss, this is another Inception block, and this is another Inception block. There are some extra max pooling layers here to change the height and width, and then another Inception block, then another max pool to change the height and width, and basically another Inception block. So the Inception network is just a lot of these blocks that you've learned about, repeated at different positions in the network. If you understand the Inception block from the previous slide, then you understand the Inception network.


It turns out that there's one last detail to the Inception network, if you read the optional research paper, which is that there are these additional side branches. So what do they do? Well, the last few layers of the network are a fully connected layer followed by a softmax layer that tries to make a prediction. What these side branches do is take some hidden layer and try to use it to make a prediction; so this is actually a softmax output, and so is that. The other side branch again takes a hidden layer, passes it through a few layers, a few fully connected layers, and then has a softmax try to predict the output label. You should think of this as maybe just another detail of the Inception network, but what it does is help ensure that the features computed, even in the hidden units, even at intermediate layers, are not too bad for predicting the output class of an image. This appears to have a regularizing effect on the Inception network and helps prevent the network from overfitting. And by the way, this particular Inception network was developed by authors at Google, who called it GoogLeNet, spelled like that, to pay homage to the LeNet network that you learned about in an earlier video. I think it's actually really nice that the deep learning community is so collaborative, and that there's such strong, healthy respect for each other's work in the deep learning community.


Finally, here's one fun fact: where does the name Inception network come from? The Inception paper actually cites this "We need to go deeper" meme, and this URL is an actual reference in the Inception paper, which links to this image. If you've seen the movie Inception, maybe this meme will make sense to you. The authors actually cite this meme as motivation for needing to build deeper neural networks, and that's how they came up with the Inception architecture. So I guess it's not often that research papers get to cite Internet memes in their citations, but in this case it worked out quite well.

So to summarize: if you understand the Inception module, then you understand the Inception network, which is largely the Inception module repeated a bunch of times throughout the network. Since the development of the original Inception module, the authors and others have built on it and come up with other versions as well, so there are research papers on newer versions of the Inception architecture, and you sometimes see people use some of these later versions in their work, like Inception v2, Inception v3, and Inception v4. There's also an Inception version that combines these ideas with the ResNet idea of having skip connections, and that sometimes works even better. But all of these variations are built on the basic idea you learned about in the previous video of coming up with the Inception module and then stacking a bunch of them together. With these videos, you should be able to read and understand, I think, the Inception paper, as well as some of the papers describing the later variations. So that's it; you've gone through quite a lot of specialized neural network architectures. In the next video, I want to start showing you some more practical advice on how you actually use these algorithms to build your own computer vision system. Let's go on to the next video.

02_practical-advices-for-using-convnets

01_using-open-source-implementation

You've now learned about several highly effective neural network and ConvNet architectures. What I want to do in the next few videos is share with you some practical advice on how to use them, starting with using open-source implementations. It turns out that a lot of these neural networks are difficult or finicky to replicate, because a lot of details about tuning the hyperparameters, such as learning rate decay and other things, make some difference to the performance. And so I've found that it's sometimes difficult even for, say, deep learning PhD students, even at the top universities, to replicate someone else's published work just from reading the paper. Fortunately, a lot of deep learning researchers routinely open source their work on the Internet, such as on GitHub, and as you do work yourself, I certainly encourage you to consider contributing back your code to the open-source community. But if you see a research paper whose results you would like to build on top of, one thing you should consider doing, one thing I do quite often, is just look online for an open-source implementation. Because if you can get the author's implementation, you can usually get going much faster than if you were to try to reimplement it from scratch, although sometimes reimplementing from scratch can be a good exercise as well. If you're already familiar with how to use GitHub, this video might be less necessary or less important for you, but if you aren't used to downloading open-source code from GitHub, let me quickly show you how easy it is.


Let's say you're excited about residual networks and you want to use them, so let's search for ResNets on GitHub. You'll actually see a lot of different implementations of ResNets on GitHub, and I'm just going to go to the first URL here. This is a GitHub repo that implements ResNets. Along with the GitHub webpage, if you scroll down, there will be some text describing the work or the particular implementation. This particular GitHub repository was actually by the original authors of the ResNet paper, and this code is licensed under an MIT license; you can click through to take a look at the implications of that license. The MIT license is one of the more permissive, or more open, open-source licenses. So I'm going to go ahead and download the code, and to do that, click on this link. This gives you the URL that you can use to download the code; I'm going to click on this button over here to copy the URL to my clipboard, and then go over here. Then all you have to do is type git clone, then Ctrl+V for the URL, and hit Enter, and in a couple of seconds it has downloaded, it has cloned, this repository to my local hard disk. So let's go into the directory and take a look. I'm more used to Mac than Windows, but let's see: let's go to prototxt, and I think this is where it has the files specifying the network. Let's take a look at this file, because it's a very long file that specifies the detailed configuration of a ResNet with 101 layers. And it looks like, from what I remember seeing on the webpage, this particular implementation uses the Caffe framework. But if you wanted an implementation of this code using some other programming framework, you might be able to find it as well.

So if you're developing a computer vision application, a very common workflow is to pick an architecture that you like, maybe one of the ones you learned about in this course, or maybe one you heard about from a friend or from the literature, look for an open-source implementation, and download it from GitHub to start building from there. One of the advantages of doing so is that sometimes these networks take a long time to train, and someone else might have used multiple GPUs and a very large dataset to pretrain some of these networks; that allows you to do transfer learning using these networks, which we'll discuss in the next video as well. Of course, if you're a computer vision researcher implementing these things from scratch, then your workflow will be different, and if you do that, then do contribute your work back to the open-source community. But because so many vision researchers have done so much work implementing these architectures, I've found that often starting with an open-source implementation is a better way, or certainly a faster way, to get started on a new project.

02_transfer-learning

If you're building a computer vision application, rather than training the weights from scratch, from random initialization, you often make much faster progress if you download weights that someone else has already trained on a network architecture, use that as pre-training, and transfer it to a new task that you might be interested in. The computer vision research community has been pretty good at posting lots of datasets on the Internet, so if you hear of things like ImageNet, or MS COCO, or PASCAL types of datasets, these are the names of different datasets that people have posted online and that a lot of computer vision researchers have trained their algorithms on. Sometimes this training takes several weeks and might require many GPUs, and the fact that someone else has done this and gone through the painful hyperparameter search process means that you can often download open-source weights that took someone else many weeks or months to figure out, and use that as a very good initialization for your own neural network. You can use transfer learning to transfer knowledge from some of these very large public datasets to your own problem. Let's take a deeper look at how to do this.


Let’s start with an example. Say you’re building a cat detector to recognize your own pet cats. According to the Internet, Tigger is a common cat name and Misty is another common cat name. Let’s say your cats are called Tigger and Misty, and there’s also the case of neither. So you have a classification problem with three classes: is this picture Tigger, or is it Misty, or is it neither? (We’ll ignore the case of both of your cats appearing in the same picture.) Now, you probably don’t have a lot of pictures of Tigger or Misty, so your training set will be small. What can you do? I recommend you go online and download some open-source implementation of a neural network, and download not just the code but also the weights. There are a lot of networks you can download that have been trained on, for example, the ImageNet dataset, which has a thousand different classes, so the network might have a softmax unit that outputs one of a thousand possible classes. What you can do is get rid of that softmax layer and create your own softmax unit that outputs Tigger, Misty, or neither. In terms of the network, I’d encourage you to think of all of the earlier layers as frozen: you freeze the parameters in all of those layers and just train the parameters associated with your softmax layer, that is, the softmax layer with three possible outputs, Tigger, Misty, or neither. By using someone else’s pretrained weights, you can probably get pretty good performance on this even with a small dataset.

Fortunately, a lot of deep learning frameworks support this mode of operation. In fact, depending on the framework, it might have something like trainableParameter=0 that you can set for some of the early layers; in others you just say don’t train those weights, or sometimes you have a parameter like freeze=1. These are different ways, in different deep learning frameworks, of specifying whether or not to train the weights associated with a particular layer. In this case, you would train only the softmax layer’s weights and freeze all of the earlier layers’ weights.
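
As a concrete illustration, here is a minimal sketch of this freeze-everything-but-the-softmax setup in Keras, assuming TensorFlow 2.x is available; the choice of ResNet50, the input size, and the optimizer are illustrative rather than anything specified in the lecture.

```python
# Minimal sketch, assuming TensorFlow 2.x / Keras. The choice of ResNet50,
# the input size, and the optimizer are illustrative, not from the lecture.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet",        # weights someone else already trained
    include_top=False,         # drop the original 1000-way softmax
    pooling="avg",
    input_shape=(224, 224, 3),
)
base.trainable = False         # freeze all of the pretrained layers

outputs = tf.keras.layers.Dense(3, activation="softmax")(base.output)  # Tigger / Misty / neither
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)   # your small labeled dataset (hypothetical names)
```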

One other neat trick that may help for some implementations: because all of these early layers are frozen, they form a fixed function that doesn’t change, since you’re not training it, and it takes the input image X and maps it to some set of activations in that layer. So one trick that can speed up training is to pre-compute that layer’s activations, the features from that layer, for all the examples in your training set and save them to disk. What you’re doing is using this fixed function, the first part of the neural network, to take any input image X and compute a feature vector for it, and then you train a shallow softmax model from that feature vector to make a prediction. The advantage of this pre-compute-and-save-to-disk method is that you don’t need to recompute those activations every time you take an epoch, or a pass, through your training set. This is what you can do if you have a pretty small training set for your task.
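
Here is a rough sketch of that precompute-and-save trick, reusing the frozen `base` model from the previous snippet; `train_images` and `train_labels` are hypothetical NumPy arrays standing in for your own data.

```python
# Sketch of the precompute-and-save trick, reusing the frozen `base` model from
# the previous snippet; `train_images` and `train_labels` are assumed NumPy arrays
# (labels assumed one-hot encoded).
import numpy as np
import tensorflow as tf

features = base.predict(train_images, batch_size=32)  # run the frozen layers once
np.save("train_features.npy", features)               # cache the activations on disk

# Later, train a shallow softmax classifier on the cached features.
features = np.load("train_features.npy")
clf = tf.keras.Sequential([tf.keras.layers.Dense(3, activation="softmax")])
clf.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
clf.fit(features, train_labels, epochs=10)
```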


What if you have a larger training set? One rule of thumb is that if you have a larger labeled dataset, maybe a ton of pictures of Tigger and Misty as well as pictures of neither of them, one thing you could do is freeze fewer layers. Maybe you freeze just the earlier layers and then train the later layers. Although if the output layer has different classes, you still need your own output unit for Tigger, Misty, or neither. There are a couple of ways to do this: you could take the last few layers’ weights and just use them as initialization and do gradient descent from there, or you could blow away those last few layers and use your own new hidden units and your own final softmax output. Either of these methods could be worth trying. But the general pattern is that if you have more data, the number of layers you freeze could be smaller and the number of layers you train on top could be greater. The idea is that with a bigger dataset you may have enough data not just to train a single softmax unit but to train some moderately sized neural network comprising the last few layers of the final network you end up using.
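
Continuing the earlier Keras sketch, one way to express this is to re-freeze only the first chunk of layers and fine-tune the rest; `n_frozen` below is a made-up hyperparameter you would tune for your own dataset.

```python
# Continuing the earlier sketch: with more data, freeze only the early layers
# and fine-tune the rest. `n_frozen` is a made-up hyperparameter you would tune.
n_frozen = 100
base.trainable = True                      # unfreeze everything first
for layer in base.layers[:n_frozen]:
    layer.trainable = False                # then re-freeze just the early layers

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),  # smaller learning rate for fine-tuning
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)
```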


Finally, if you have a lot of data, one thing you might do is take the open-source network and weights and use the whole thing just as initialization, and train the whole network. Again, if the original network had a thousand-way softmax and you have just three outputs, you need your own softmax output layer for the labels you care about. The more labeled data you have for your task, the more pictures you have of Tigger, Misty, and neither, the more layers you can train. In the extreme case, you use the weights you downloaded just as initialization, replacing random initialization, and then do gradient descent, training and updating all the weights in all the layers of the network.
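
In the Keras sketch above, this extreme case just amounts to unfreezing everything and training with a small learning rate so the pretrained features are not destroyed too quickly; the dataset names here are again hypothetical.

```python
# With lots of data: use the downloaded weights purely as initialization and
# train every layer, typically with a small learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(large_train_images, large_train_labels, epochs=20)   # assumed large dataset
```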

So that’s transfer learning for training ConvNets. In practice, because the open datasets on the Internet are so big, and the weights you can download have been trained by someone else over weeks on so much data, you’ll find that for a lot of computer vision applications you do much better if you download someone else’s open-source weights and use them as the initialization for your problem. Across all the different applications of deep learning, I think computer vision is one where transfer learning is something you should almost always do, unless you have an exceptionally large dataset and a very large computation budget to train everything from scratch yourself.

03_data-augmentation

Most computer vision tasks could use more data, and data augmentation is one of the techniques often used to improve the performance of computer vision systems. I think computer vision is a pretty complicated task: you take all these input pixels and have to figure out what is in the picture, and it seems you need to learn a decently complicated function to do that. In practice, for almost all computer vision tasks, having more data helps. This is unlike some other domains, where sometimes you can get enough data and don’t feel as much pressure to get even more. But the state of computer vision today is that, for the majority of computer vision problems, we feel like we just can’t get enough data. This isn’t true for all applications of machine learning, but it does feel true for computer vision. So when you’re training a computer vision model, data augmentation will often help, and this is true whether you’re using transfer learning, starting from someone else’s pretrained weights, or whether you’re training something yourself from scratch. Let’s take a look at the common data augmentation techniques used in computer vision.


Perhaps the simplest data augmentation method is mirroring along the vertical axis: if you have this example in your training set, you flip it horizontally to get the image on the right. For most computer vision tasks, if the left picture is a cat, then its mirror image is still a cat. So if the mirroring operation preserves whatever you’re trying to recognize in the picture, this is a good data augmentation technique to use. Another commonly used technique is random cropping. Given this dataset, you pick a few random crops: you might take that crop, and that crop, and this one, and that gives you different examples to feed into your training set, different random crops of your data. Random cropping isn’t a perfect data augmentation; you might randomly end up with a crop that doesn’t contain much of the cat. But in practice it’s worthwhile so long as your random crops are reasonably large subsets of the actual image. So mirroring and random cropping are frequently used. In theory you could also use things like rotation, shearing of the image, distorting it in various ways, and introducing various forms of local warping, and there’s really no harm in trying all of these as well, although in practice they seem to be used a bit less, perhaps because of their complexity.
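
A rough sketch of mirroring plus random cropping, assuming TensorFlow’s tf.image utilities; the crop and resize sizes are illustrative, and the input is assumed to be a float image tensor larger than the crop size.

```python
# Rough sketch of mirroring and random cropping with tf.image (TensorFlow assumed);
# the crop and resize sizes are illustrative, and the input image is assumed to be
# a float tensor larger than the crop size.
import tensorflow as tf

def augment(image):
    image = tf.image.random_flip_left_right(image)           # mirroring on the vertical axis
    image = tf.image.random_crop(image, size=(200, 200, 3))  # a reasonably large random crop
    image = tf.image.resize(image, (224, 224))               # back to the network's input size
    return image
```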


The second type of data augmentation that is commonly used is color shifting. Given a picture like this, say you add different distortions to the R, G, and B channels. In this example we add to the red and blue channels and subtract from the green channel; red and blue make purple, so this makes the whole image a bit more purplish and creates a distorted image for the training set. For illustration purposes I’m making somewhat dramatic changes to the colors; in practice, the amounts you add to R, G, and B are drawn from some distribution and could be quite small. But what you do is take different values for R, G, and B and use them to distort the color channels. So in the second example we’re making the image less red, and more green and more blue, which turns it a bit more yellowish. And here we’re making it much more blue. In practice, the R, G, and B shifts are drawn from some probability distribution. The motivation is that if, say, the sunlight was a bit more yellow, or the indoor illumination was a bit more yellow, that could easily change the colors of an image, but the identity of the cat, the content, the label y, stays the same. So introducing these color distortions, doing color shifting, makes your learning algorithm more robust to changes in the colors of your images. A comment for the advanced learners in this course: it’s okay if you don’t understand what I’m about to say. There are different ways to sample R, G, and B. One way to implement color distortion uses PCA, Principal Component Analysis, which I talked about in the ml-class.org Machine Learning course on Coursera. The details are given in the AlexNet paper, and this is sometimes called PCA color augmentation. The rough idea of PCA color augmentation is that if your image is mainly purple, if it mainly has red and blue tints and very little green, then PCA color augmentation will add and subtract a lot from red and blue but relatively little from green, so it roughly keeps the overall tint the same. If you didn’t understand any of this, don’t worry about it; you can search online for it, you can read about the details in the AlexNet paper, and you can also find open-source implementations of PCA color augmentation and just use those.
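
A minimal sketch of simple color shifting is below; the uniform distribution and its scale are illustrative choices, not from the lecture. PCA color augmentation, as described in the AlexNet paper, would instead draw the shift along the principal components of the RGB values so that the overall tint is roughly preserved.

```python
# Simple color shifting: draw a small random offset per channel and add it to
# R, G, and B. The uniform distribution and scale are illustrative.
import numpy as np

def color_shift(image, scale=0.1):
    # image: float array of shape (H, W, 3) with values in [0, 1]
    shift = np.random.uniform(-scale, scale, size=3)  # one offset each for R, G, B
    return np.clip(image + shift, 0.0, 1.0)
```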


You might have your training data stored on a hard disk; I’ll use this round bucket symbol to represent the hard disk. If you have a small training set, you can do almost anything and you’ll be okay. But if you have a very large training set, this is how people will often implement it: you might have a CPU thread that is constantly loading images off your hard disk, so you have a stream of images coming in from disk. You can then use a CPU thread to implement the distortions, that is, the random cropping, or the color shifting, or the mirroring, so for each image you end up with some distorted version of it. For instance, this image gets mirrored, and this one gets color shifted, so you end up with a differently colored cat. Your CPU thread is constantly loading data and implementing whatever distortions are needed to form a batch, or really many batches, of data, and this data is then constantly passed to some other thread or process that does the training, which could run on the CPU or, increasingly, on the GPU if you have a large neural network to train. So a pretty common way of implementing data augmentation is to have one thread, or multiple threads, responsible for loading the data and implementing the distortions, and to pass that to some other thread or process that does the training. And often the data loading and augmentation can run in parallel with the training.
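
One way to get this load-and-distort-in-parallel pattern, assuming TensorFlow, is a tf.data pipeline; `file_paths`, `labels`, and `load_and_augment` are placeholder names standing in for your own file list, labels, and decode-plus-augment function.

```python
# The "CPU loads and distorts, training consumes" pattern as a tf.data pipeline
# (TensorFlow assumed). `file_paths`, `labels`, and `load_and_augment` are
# placeholder names for your own data and decode/augment code.
import tensorflow as tf

dataset = (
    tf.data.Dataset.from_tensor_slices((file_paths, labels))
    .map(load_and_augment, num_parallel_calls=tf.data.AUTOTUNE)  # distortions in background threads
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)   # overlap data preparation with training
)
# model.fit(dataset, epochs=10)
```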

So, that’s it for data augmentation. Similar to other parts of training a deep neural network, the data augmentation process has a few hyperparameters, such as how much color shifting to apply and exactly what parameters to use for random cropping. So, similar to elsewhere in computer vision, a good place to get started is to use someone else’s open-source implementation of data augmentation. But of course, if you want to capture more invariances than you think someone else’s open-source implementation captures, it might be reasonable to tune these hyperparameters yourself. With that, I hope you’ll use data augmentation to get your computer vision applications to work better.

04_state-of-computer-vision

Deep learning has been successfully applied to computer vision, natural language processing, speech recognition, online advertising, logistics, many, many problems. There are a few things that are unique about the application of deep learning to computer vision, about the status of computer vision. In this video, I’ll share with you some of my observations about deep learning for computer vision, and I hope that will help you better navigate the literature and the set of ideas out there, and how to build these systems yourself.


You can think of most machine learning problems as falling somewhere on a spectrum between having relatively little data and having lots of data. For example, I think that today we have a decent amount of data for speech recognition, relative to the complexity of the problem. And even though there are reasonably large datasets today for image recognition or image classification, because image recognition is just a complicated problem, looking at all those pixels and figuring out what it is, it feels like, even though the online datasets are quite big, with over a million images, we still wish we had more data. And there are some problems, like object detection, where we have even less data. As a reminder, image recognition is the problem of looking at a picture and telling you, is this a cat or not, whereas object detection means looking at a picture and actually putting bounding boxes around the objects, telling you where in the picture things such as cars are. Because getting bounding boxes is more expensive, it just costs more to label the objects with their bounding boxes, we tend to have less data for object detection than for image recognition. Object detection is something we’ll discuss next week. So if you look across a broad spectrum of machine learning problems, you see that, on average, when you have a lot of data people tend to get away with using simpler algorithms and less hand-engineering. There’s less need to carefully design features for the problem; instead you can have a giant neural network, even a simpler architecture, and just let the network learn whatever it wants to learn when you have a lot of data. Whereas, in contrast, when you don’t have that much data, you see people on average engaging in more hand-engineering, and if you want to be ungenerous you could call those hacks. But I think when you don’t have much data, hand-engineering is actually the best way to get good performance.

So when I look at machine learning applications, I think a learning algorithm usually has two sources of knowledge. One source of knowledge is the labeled data, the (x,y) pairs you use for supervised learning. The second source of knowledge is hand-engineering, and there are lots of ways to hand-engineer a system, from carefully hand-designing the features, to carefully hand-designing the network architecture, to other components of your system. So when you don’t have much labeled data, you just have to rely more on hand-engineering. I think computer vision is trying to learn a really complex function, and it often feels like we don’t have enough data for it. Even though datasets are getting bigger and bigger, often we just don’t have as much data as we need. This is why computer vision, historically and even today, has relied more on hand-engineering, and I think it’s also why the field of computer vision has developed rather complex network architectures: in the absence of more data, the way to get good performance is to spend more time architecting, or fiddling around with, the network architecture. And in case you think I’m being derogatory about hand-engineering, that’s not at all my intent. When you don’t have enough data, hand-engineering is a very difficult, very skillful task that requires a lot of insight, and someone who is insightful with hand-engineering will get better performance; it’s a great contribution to a project to do that hand-engineering when you don’t have enough data. It’s just that when you have lots of data, I wouldn’t spend time hand-engineering; I would spend the time building up the learning system instead. But historically the field of computer vision has used very small datasets, so the computer vision literature has relied on a lot of hand-engineering. Even though in the last few years the amount of data for many computer vision tasks has increased dramatically, which has led to a significant reduction in the amount of hand-engineering being done, there is still a lot of hand-engineering of network architectures in computer vision. That’s why you see very complicated hyperparameter choices in computer vision, more complex than in a lot of other disciplines. And in fact, because you usually have smaller object detection datasets than image recognition datasets, when we talk about object detection next week, you’ll see that the algorithms become even more complex and have even more specialized components. Fortunately, one thing that helps a lot when you have little data is transfer learning. For the example from the previous slide, the Tigger, Misty, neither detection problem, you have so little data that transfer learning will help a lot. So that’s another technique that’s used a lot when you have relatively little data.


If you look at the computer vision literature and the sort of ideas out there, you also find that people are really enthusiastic about doing well on standardized benchmark datasets and on winning competitions. For computer vision researchers, if you do well on a benchmark, it’s easier to get a paper published, so there’s a lot of attention on doing well on these benchmarks. The positive side of this is that it helps the whole community figure out what the most effective algorithms are. But you also see in papers people doing things that let you do well on a benchmark but that you wouldn’t really use in a production system that you deploy in an actual application. So here are a few tips for doing well on benchmarks; these are things that I myself pretty much never use if I’m putting a system into production to actually serve customers.

One is ensembling. What that means is, after you’ve figured out what neural network you want, train several neural networks independently and average their outputs. So initialize, say, 3, or 5, or 7 neural networks randomly, train them all up, and then average their outputs. By the way, it is important to average their outputs, the y-hats; don’t average their weights, that won’t work. So if you have, say, seven neural networks, you get seven different predictions, and you average those. This will cause you to do maybe 1% or 2% better on some benchmark, which can really help win a competition. But because ensembling means that to test on each image you might need to run it through anywhere from, say, 3 to 15 different networks, which is quite typical, it slows down your running time by a factor of 3 to 15, or sometimes even more. So ensembling is one of those tips people use for doing well on benchmarks and for winning competitions, but I think it is almost never used in production to serve actual customers, unless, I guess, you have a huge computational budget and don’t mind burning a lot more of it per customer image.
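
A minimal sketch of ensembling is below; `models` is assumed to be a list of trained Keras models with the same output shape, and as the lecture stresses, it is the predictions that get averaged, never the weights.

```python
# Ensembling sketch: average the predicted probabilities (the y-hats) of several
# independently trained models. `models` is an assumed list of trained Keras
# models with the same output shape; never average their weights.
import numpy as np

def ensemble_predict(models, x):
    preds = [m.predict(x) for m in models]   # each prediction: (batch, num_classes)
    return np.mean(preds, axis=0)            # average the outputs
```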

Another thing you see in papers that really helps on benchmarks is multi-crop at test time. You’ve seen how you can do data augmentation; multi-crop is a form of applying data augmentation to your test image as well. For example, take a cat image and also its mirrored version. There’s a technique called 10-crop, which basically says: take the central crop and run it through your classifier; then take the upper left-hand corner crop and run it through the classifier; the upper right-hand corner, shown in green; the lower left, shown in yellow; the lower right, shown in orange; and run each of those through the classifier. Then do the same thing with the mirrored image: the central crop plus the four corner crops. Add these up and that’s 10 different crops, hence the name 10-crop. What you do is run these 10 images through your classifier and then average the results. If you have the computational budget you could do this, and maybe you don’t need as many as 10 crops, you can use a few crops, and this might get you a little bit better performance in a production system, by which I mean a system you’re deploying for actual users. But this is another technique that is used much more for doing well on benchmarks than in actual production systems. One of the big problems of ensembling is that you need to keep all these different networks around, which takes up a lot more computer memory. With multi-crop, at least you keep just one network around, so it doesn’t use up as much memory, but it still slows down your run time quite a bit. So these are tips you see in research papers, and the papers will refer to them, but I personally do not tend to use these methods when building production systems, even though they are great for doing better on benchmarks and for winning competitions.
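
A rough 10-crop sketch is shown below: the center crop plus the four corner crops, for the image and its mirror, with the ten predictions averaged. The crop size is illustrative, and the image is assumed to be a NumPy array at least `crop` pixels on each side.

```python
# Rough 10-crop sketch: center crop plus four corner crops, for the image and its
# mirror, with the ten predictions averaged. Crop size is illustrative; the image
# is assumed to be at least `crop` pixels on each side.
import numpy as np

def ten_crop_predict(model, image, crop=224):
    h, w = image.shape[:2]
    offsets = [(0, 0), (0, w - crop), (h - crop, 0), (h - crop, w - crop),
               ((h - crop) // 2, (w - crop) // 2)]          # four corners plus the center
    crops = []
    for img in (image, image[:, ::-1]):                     # original and mirrored image
        for top, left in offsets:
            crops.append(img[top:top + crop, left:left + crop])
    preds = model.predict(np.stack(crops))                  # ten forward passes
    return preds.mean(axis=0)                               # averaged prediction
```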


Because a lot of computer vision problems are in the small-data regime, the field has done a lot of hand-engineering of network architectures. And a neural network that works well on one vision problem often, maybe surprisingly, works well on other vision problems as well. So to build a practical system, you often do well to start off with someone else’s neural network architecture. Use an open-source implementation if possible, because the open-source implementation might have figured out all the finicky details, like the learning rate decay schedule and other hyperparameters. And finally, someone else may have spent weeks training a model on half a dozen GPUs and on over a million images, so by using someone else’s pretrained model and fine-tuning on your own dataset, you can often get going much faster on an application. But of course, if you have the compute resources and the inclination, don’t let me stop you from training your own networks from scratch; in fact, if you want to invent your own computer vision algorithm, that’s what you might have to do.

So that’s it for this week. I hope that seeing a number of computer vision architectures helps you get a sense of what works. In this week’s programming exercises you’ll actually learn another programming framework and use it to implement ResNets. I hope you enjoy that exercise, and I look forward to seeing you.
