Note
These are my personal programming assignments from the fourth week of the course Convolutional Neural Networks; the copyright belongs to deeplearning.ai.
Deep Learning & Art: Neural Style Transfer
Welcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576).
In this assignment, you will:
- Implement the neural style transfer algorithm
- Generate novel artistic images using your algorithm
Most of the algorithms you’ve studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you’ll optimize a cost function to get pixel values!
import os
1 - Problem Statement
Neural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a “content” image (C) and a “style” image (S), to create a “generated” image (G). The generated image G combines the “content” of the image C with the “style” of image S.
In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).
Let’s see how you can do this.
2 - Transfer Learning
Neural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning.
Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we’ll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers).
Run the following code to load parameters from the VGG model. This may take a few seconds.
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
{'input': <tf.Variable 'Variable:0' shape=(1, 300, 400, 3) dtype=float32_ref>,
 'conv1_1': <tf.Tensor 'Relu:0' shape=(1, 300, 400, 64) dtype=float32>,
 'conv1_2': <tf.Tensor 'Relu_1:0' shape=(1, 300, 400, 64) dtype=float32>,
 'avgpool1': <tf.Tensor 'AvgPool:0' shape=(1, 150, 200, 64) dtype=float32>,
 'conv2_1': <tf.Tensor 'Relu_2:0' shape=(1, 150, 200, 128) dtype=float32>,
 'conv2_2': <tf.Tensor 'Relu_3:0' shape=(1, 150, 200, 128) dtype=float32>,
 'avgpool2': <tf.Tensor 'AvgPool_1:0' shape=(1, 75, 100, 128) dtype=float32>,
 'conv3_1': <tf.Tensor 'Relu_4:0' shape=(1, 75, 100, 256) dtype=float32>,
 'conv3_2': <tf.Tensor 'Relu_5:0' shape=(1, 75, 100, 256) dtype=float32>,
 'conv3_3': <tf.Tensor 'Relu_6:0' shape=(1, 75, 100, 256) dtype=float32>,
 'conv3_4': <tf.Tensor 'Relu_7:0' shape=(1, 75, 100, 256) dtype=float32>,
 'avgpool3': <tf.Tensor 'AvgPool_2:0' shape=(1, 38, 50, 256) dtype=float32>,
 'conv4_1': <tf.Tensor 'Relu_8:0' shape=(1, 38, 50, 512) dtype=float32>,
 'conv4_2': <tf.Tensor 'Relu_9:0' shape=(1, 38, 50, 512) dtype=float32>,
 'conv4_3': <tf.Tensor 'Relu_10:0' shape=(1, 38, 50, 512) dtype=float32>,
 'conv4_4': <tf.Tensor 'Relu_11:0' shape=(1, 38, 50, 512) dtype=float32>,
 'avgpool4': <tf.Tensor 'AvgPool_3:0' shape=(1, 19, 25, 512) dtype=float32>,
 'conv5_1': <tf.Tensor 'Relu_12:0' shape=(1, 19, 25, 512) dtype=float32>,
 'conv5_2': <tf.Tensor 'Relu_13:0' shape=(1, 19, 25, 512) dtype=float32>,
 'conv5_3': <tf.Tensor 'Relu_14:0' shape=(1, 19, 25, 512) dtype=float32>,
 'conv5_4': <tf.Tensor 'Relu_15:0' shape=(1, 19, 25, 512) dtype=float32>,
 'avgpool5': <tf.Tensor 'AvgPool_4:0' shape=(1, 10, 13, 512) dtype=float32>}
The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable’s value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the tf.assign function. In particular, you will use the assign function like this:
model["input"].assign(image)
This assigns the image as the input to the model. After this, if you want to access the activations of a particular layer, say 4_2, when the network is run on this image, you would run a TensorFlow session on the correct tensor conv4_2, as follows:
sess.run(model["conv4_2"])
3 - Neural Style Transfer
We will build the NST algorithm in three steps:
- Build the content cost function $J_{content}(C,G)$
- Build the style cost function $J_{style}(S,G)$
- Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$.
3.1 - Computing the content cost
In our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.
content_image = scipy.misc.imread("images/louvre.jpg")
The content image (C) shows the Louvre museum’s pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.
3.1.1 - How do you ensure the generated image G matches the content of the image C?
As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes.
We would like the “generated” image G to have similar content to the input image C. Suppose you have chosen some layer’s activations to represent the content of an image. In practice, you’ll get the most visually pleasing results if you choose a layer in the middle of the network, neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)
So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we’ll drop the superscript $[l]$ to simplify the notation.) This will be an $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activations. We will define the content cost function as:
$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$
Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer’s activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn’t needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)
Exercise: Compute the “content cost” using TensorFlow.
Instructions: The 3 steps to implement this function are:
- Retrieve dimensions from a_G. (To retrieve dimensions from a tensor X, use X.get_shape().as_list().)
- Unroll a_C and a_G as explained in the picture above.
- Compute the content cost.
# GRADED FUNCTION: compute_content_cost
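The graded cell above is truncated in this export. Here is a minimal sketch of how compute_content_cost can be implemented, assuming TensorFlow 1.x and activations of shape (1, n_H, n_W, n_C); it is one possible solution, not necessarily the notebook's exact code:

```python
import tensorflow as tf

def compute_content_cost(a_C, a_G):
    """
    Computes the content cost of equation (1).

    Arguments:
    a_C -- tensor of shape (1, n_H, n_W, n_C), hidden layer activations for image C
    a_G -- tensor of shape (1, n_H, n_W, n_C), hidden layer activations for image G
    """
    # Retrieve the dimensions from a_G
    m, n_H, n_W, n_C = a_G.get_shape().as_list()

    # Unroll the 3D volumes into matrices of shape (n_C, n_H * n_W)
    # (not strictly required for J_content, but good practice for J_style)
    a_C_unrolled = tf.transpose(tf.reshape(a_C, [n_H * n_W, n_C]))
    a_G_unrolled = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))

    # Sum of squared differences, normalized as in equation (1)
    J_content = tf.reduce_sum(tf.square(a_C_unrolled - a_G_unrolled)) / (4 * n_H * n_W * n_C)

    return J_content
```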
tf.reset_default_graph()
J_content = 6.7655926
Expected Output:
| **J_content** | 6.76559 |
3.2 - Computing the style cost
For our running example, we will use the following style image:
style_image = scipy.misc.imread("images/monet_800600.jpg")
This painting is in the style of Impressionism.
Let’s see how you can now define a “style” cost function $J_{style}(S,G)$.
3.2.1 - Style matrix
The style matrix is also called a “Gram matrix.” In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are $G_{ij} = v_{i}^T v_{j} = \text{np.dot}(v_{i}, v_{j})$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: if they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large.
Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context.
In NST, you can compute the Style matrix by multiplying the “unrolled” filter matrix by its transpose:
The result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$.
One important property of the Gram matrix is that the diagonal elements $G_{ii}$ also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: if $G_{ii}$ is large, this means that the image has a lot of vertical texture.
By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image.
Exercise:
Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The Gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at Hint 1 and Hint 2.
# GRADED FUNCTION: gram_matrix
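A minimal sketch of this function, assuming A is a 2D tensor of shape (n_C, n_H * n_W):

```python
import tensorflow as tf

def gram_matrix(A):
    """
    Computes the Gram matrix G_A = A A^T.

    Argument:
    A -- matrix of shape (n_C, n_H * n_W)

    Returns:
    GA -- Gram matrix of A, of shape (n_C, n_C)
    """
    GA = tf.matmul(A, tf.transpose(A))
    return GA
```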
tf.reset_default_graph()
GA = [[ 6.422305 -4.429122 -2.096682]
[-4.429122 19.465837 19.563871]
[-2.096682 19.563871 20.686462]]
Expected Output:
| **GA** | [[ 6.42230511 -4.42912197 -2.09668207] [ -4.42912197 19.46583748 19.56387138] [ -2.09668207 19.56387138 20.6864624 ]] |
3.2.2 - Style cost
After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the “style” image S and that of the “generated” image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as:
$$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum_{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$
where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the “style” image and the “generated” image, computed using the hidden layer activations for a particular hidden layer in the network.
Exercise: Compute the style cost for a single layer.
Instructions: The 4 steps to implement this function are:
- Retrieve dimensions from the hidden layer activations a_G. (To retrieve dimensions from a tensor X, use X.get_shape().as_list().)
- Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above.
- Compute the Style matrix of the images S and G. (Use the function you had previously written.)
- Compute the Style cost.
# GRADED FUNCTION: compute_layer_style_cost
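A minimal sketch following the four steps above, assuming TensorFlow 1.x and the gram_matrix function defined earlier; one possible solution, not necessarily the notebook's exact code:

```python
import tensorflow as tf

def compute_layer_style_cost(a_S, a_G):
    """
    Computes the style cost of equation (2) for a single layer.

    Arguments:
    a_S -- tensor of shape (1, n_H, n_W, n_C), hidden layer activations for image S
    a_G -- tensor of shape (1, n_H, n_W, n_C), hidden layer activations for image G
    """
    # Retrieve the dimensions from a_G
    m, n_H, n_W, n_C = a_G.get_shape().as_list()

    # Unroll the activations into matrices of shape (n_C, n_H * n_W)
    a_S = tf.transpose(tf.reshape(a_S, [n_H * n_W, n_C]))
    a_G = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))

    # Gram matrices of the style and generated images
    GS = gram_matrix(a_S)
    GG = gram_matrix(a_G)

    # Sum of squared differences, normalized as in equation (2)
    J_style_layer = tf.reduce_sum(tf.square(GS - GG)) / (4 * (n_C ** 2) * ((n_H * n_W) ** 2))

    return J_style_layer
```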
tf.reset_default_graph()
J_style_layer = 9.190277
Expected Output:
| **J_style_layer** | 9.19028 |
3.2.3 - Style weights
So far you have captured the style from only one layer. We’ll get better results if we “merge” style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. But for now, this is a pretty reasonable default:
STYLE_LAYERS = [
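The cell above is truncated. A reasonable completion, weighting five layers equally (these are sensible defaults, not necessarily the notebook's exact values):

```python
STYLE_LAYERS = [
    ('conv1_1', 0.2),
    ('conv2_1', 0.2),
    ('conv3_1', 0.2),
    ('conv4_1', 0.2),
    ('conv5_1', 0.2)]
```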
You can combine the style costs for different layers as follows:
$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$
where the values for $\lambda^{[l]}$ are given in STYLE_LAYERS.
We’ve implemented a compute_style_cost(...) function. It simply calls your compute_layer_style_cost(...) several times, and weights their results using the values in STYLE_LAYERS. Read over it to make sure you understand what it’s doing.
def compute_style_cost(model, STYLE_LAYERS):
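The body of this function looks roughly like the sketch below, assuming a default TensorFlow session sess has been started (as in section 4) and the style image has already been assigned as the model's input:

```python
def compute_style_cost(model, STYLE_LAYERS):
    """
    Computes the overall style cost from several chosen layers.
    """
    J_style = 0

    for layer_name, coeff in STYLE_LAYERS:
        # Output tensor of the currently selected layer
        out = model[layer_name]

        # a_S is evaluated now, with the style image as the model input
        a_S = sess.run(out)

        # a_G references the same tensor; it will be evaluated later,
        # once the generated image is assigned as the model input
        a_G = out

        # Accumulate the weighted style cost of this layer
        J_style += coeff * compute_layer_style_cost(a_S, a_G)

    return J_style
```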
Note: In the inner loop of the for-loop above, a_G is a tensor and hasn’t been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.
3.3 - Defining the total cost to optimize
Finally, let’s create a cost function that combines both the content cost and the style cost. The formula is:
$$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$
Exercise: Implement the total cost function which includes both the content cost and the style cost.
# GRADED FUNCTION: total_cost
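A minimal sketch of total_cost, directly implementing the formula above:

```python
def total_cost(J_content, J_style, alpha = 10, beta = 40):
    """
    Computes the total cost J(G) = alpha * J_content + beta * J_style.

    Arguments:
    J_content -- content cost computed earlier
    J_style -- style cost computed earlier
    alpha -- hyperparameter weighting the importance of the content cost
    beta -- hyperparameter weighting the importance of the style cost
    """
    J = alpha * J_content + beta * J_style
    return J
```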
tf.reset_default_graph()
J = 35.34667875478276
Expected Output:
| **J** | 35.34667875478276 |
4 - Solving the optimization problem
Finally, let’s put everything together to implement Neural Style Transfer!
Here’s what the program will have to do:
- Create an Interactive Session
- Load the content image
- Load the style image
- Randomly initialize the image to be generated
- Load the VGG19 model
- Build the TensorFlow graph:
- Run the content image through the VGG19 model and compute the content cost
- Run the style image through the VGG19 model and compute the style cost
- Compute the total cost
- Define the optimizer and the learning rate
- Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step.
You’ve previously implemented the overall cost $J(G)$. We’ll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an “Interactive Session“. Unlike a regular session, the “Interactive Session” installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code.
Let’s start the interactive session.
# Reset the graph
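A sketch of this cell, assuming TensorFlow 1.x:

```python
import tensorflow as tf

# Clear any previously defined graph
tf.reset_default_graph()

# An InteractiveSession installs itself as the default session
sess = tf.InteractiveSession()
```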
Let’s load, reshape, and normalize our “content” image (the Louvre museum picture):
content_image = scipy.misc.imread("images/louvre_small.jpg")
Let’s load, reshape and normalize our “style” image (Claude Monet’s painting):
style_image = scipy.misc.imread("images/monet.jpg")
Now, we initialize the “generated” image as a noisy image created from the content_image. By initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image, we help the content of the “generated” image more rapidly match the content of the “content” image. (Feel free to look in nst_utils.py to see the details of generate_noise_image(...); to do so, click “File -> Open…” at the upper-left corner of this Jupyter notebook.)
generated_image = generate_noise_image(content_image)
(Displaying the noisy generated image here raises "ValueError: Floating point image RGB values must be in the 0..1 range," because its pixel values fall outside [0, 1].)
Next, as explained in part (2), let’s load the VGG19 model.
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
To get the program to compute the content cost, we will now assign a_C and a_G to be the appropriate hidden layer activations. We will use layer conv4_2 to compute the content cost. The code below does the following:
- Assign the content image to be the input to the VGG model.
- Set a_C to be the tensor giving the hidden layer activation for layer “conv4_2”.
- Set a_G to be the tensor giving the hidden layer activation for the same layer.
- Compute the content cost using a_C and a_G.
# Assign the content image to be the input of the VGG model.
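A sketch of this cell, following the four steps listed above (assuming the session sess and model from the previous cells):

```python
# Assign the content image to be the input of the VGG model
sess.run(model['input'].assign(content_image))

# Select the output tensor of layer conv4_2
out = model['conv4_2']

# a_C is evaluated now, with the content image as the model input
a_C = sess.run(out)

# a_G references the same tensor; it will be evaluated later,
# once the generated image is assigned as the model input
a_G = out

# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
```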
Note: At this point, a_G is a tensor and hasn’t been evaluated. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.
# Assign the input of the model to be the "style" image
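A sketch of this cell, assuming the same session and the compute_style_cost function from section 3.2.3:

```python
# Assign the style image to be the input of the VGG model
sess.run(model['input'].assign(style_image))

# Compute the style cost over the layers listed in STYLE_LAYERS
J_style = compute_style_cost(model, STYLE_LAYERS)
```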
Exercise: Now that you have J_content and J_style, compute the total cost J by calling total_cost(). Use alpha = 10 and beta = 40.
### START CODE HERE ### (1 line)
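One way to complete the line, using the values given above:

```python
# Combine content and style costs with the given weights
J = total_cost(J_content, J_style, alpha = 10, beta = 40)
```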
You’ve previously learned how to set up the Adam optimizer in TensorFlow. Let’s do that here, using a learning rate of 2.0 (see the TensorFlow reference for tf.train.AdamOptimizer).
# define optimizer (1 line)
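A sketch of the optimizer setup, assuming TensorFlow 1.x and the total cost J defined above:

```python
# Define the Adam optimizer with a learning rate of 2.0
optimizer = tf.train.AdamOptimizer(2.0)

# train_step minimizes the total cost with respect to the trainable
# input variable (the generated image's pixels)
train_step = optimizer.minimize(J)
```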
Exercise: Implement the model_nn() function, which initializes the variables of the TensorFlow graph, assigns the input image (initial generated image) as the input of the VGG19 model, and runs the train_step for a large number of steps.
def model_nn(sess, input_image, num_iterations = 200):
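A sketch of model_nn, assuming train_step, J, J_content, and J_style from the cells above, and a save_image(...) helper as provided in nst_utils.py:

```python
def model_nn(sess, input_image, num_iterations = 200):
    # Initialize the graph's global variables
    sess.run(tf.global_variables_initializer())

    # Assign the noisy initial image to the model's input variable
    sess.run(model['input'].assign(input_image))

    for i in range(num_iterations):
        # Run one optimizer step on the total cost
        sess.run(train_step)

        # The generated image is the model's input variable after the update
        generated_image = sess.run(model['input'])

        # Print the costs and save the image every 20 iterations
        if i % 20 == 0:
            Jt, Jc, Js = sess.run([J, J_content, J_style])
            print("Iteration " + str(i) + " :")
            print("total cost = " + str(Jt))
            print("content cost = " + str(Jc))
            print("style cost = " + str(Js))
            save_image("output/" + str(i) + ".png", generated_image)

    # Save the final generated image
    save_image('output/generated_image.jpg', generated_image)

    return generated_image
```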
Run the following cell to generate an artistic image. It should take about 3 minutes on CPU for every 20 iterations, but you start observing attractive results after about 140 iterations. Neural Style Transfer is generally run on GPUs.
model_nn(sess, generated_image)
Expected Output:
| **Iteration 0:** | total cost = 5.05035e+09 content cost = 7877.67 style cost = 1.26257e+08 |
You’re done! After running this, in the upper bar of the notebook click on “File” and then “Open”. Go to the “/output” directory to see all the saved images. Open “generated_image” to see the generated image! :)
You should see something like the image presented below on the right:
We didn’t want you to wait too long to see an initial result, and so had set the hyperparameters accordingly. To get better-looking results, you can run the optimization algorithm longer (and perhaps with a smaller learning rate). After completing and submitting this assignment, we encourage you to come back and play more with this notebook, and see if you can generate even better looking images.
Here are a few other examples:
The beautiful ruins of the ancient city of Persepolis (Iran) with the style of Van Gogh (The Starry Night)
The tomb of Cyrus the great in Pasargadae with the style of a Ceramic Kashi from Ispahan.
A scientific study of a turbulent fluid with the style of an abstract blue fluid painting.
5 - Test with your own image (Optional/Ungraded)
Finally, you can also rerun the algorithm on your own images!
To do so, go back to part 4 and change the content image and style image with your own pictures. In detail, here’s what you should do:
- Click on “File -> Open” in the upper tab of the notebook
- Go to “/images” and upload your images (requirement: width 300, height 225 pixels), and rename them “my_content.jpg” and “my_style.jpg”, for example.
- Change the code in part (3.4) from:

content_image = scipy.misc.imread("images/louvre.jpg")
style_image = scipy.misc.imread("images/claude-monet.jpg")

to:

content_image = scipy.misc.imread("images/my_content.jpg")
style_image = scipy.misc.imread("images/my_style.jpg")

- Rerun the cells (you may need to restart the Kernel in the upper tab of the notebook).
You can share your generated images with us on social media with the hashtag #deeplearniNgAI or by direct tagging!
You can also tune your hyperparameters:
- Which layers are responsible for representing the style? STYLE_LAYERS
- How many iterations do you want to run the algorithm? num_iterations
- What is the relative weighting between content and style? alpha/beta
6 - Conclusion
Great job on completing this assignment! You are now able to use Neural Style Transfer to generate artistic images. This is also your first time building a model in which the optimization algorithm updates the pixel values rather than the neural network’s parameters. Deep learning has many different types of models and this is only one of them!
What you should remember:
- Neural Style Transfer is an algorithm that, given a content image C and a style image S, can generate an artistic image.
- It uses representations (hidden layer activations) based on a pretrained ConvNet.
- The content cost function is computed using one hidden layer's activations.
- The style cost function for one layer is computed using the Gram matrix of that layer's activations. The overall style cost function is obtained using several hidden layers.
- Optimizing the total cost function results in synthesizing new images.

This was the final programming exercise of this course. Congratulations, you’ve finished all the programming exercises of this course on Convolutional Networks! We hope to also see you in Course 5, on Sequence Models!
References:
The Neural Style Transfer algorithm is due to Gatys et al. (2015). Harish Narayanan and GitHub user “log0” also have highly readable write-ups from which we drew inspiration. The pre-trained network used in this implementation is a VGG network, which is due to Simonyan and Zisserman (2015). Pre-trained weights were from the work of the MatConvNet team.
- Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, (2015). A Neural Algorithm of Artistic Style (https://arxiv.org/abs/1508.06576)
- Harish Narayanan, Convolutional neural networks for artistic style transfer. https://harishnarayanan.org/writing/artistic-style-transfer/
- Log0, TensorFlow Implementation of “A Neural Algorithm of Artistic Style”. http://www.chioka.in/tensorflow-implementation-neural-algorithm-of-artistic-style
- Karen Simonyan and Andrew Zisserman (2015). Very deep convolutional networks for large-scale image recognition (https://arxiv.org/pdf/1409.1556.pdf)
- MatConvNet. http://www.vlfeat.org/matconvnet/pretrained/