
Commit 47a4e92

108 of 100 Days of Python

I experimented with using a pretrained neural network for image segmentation, but that failed; I may give it another shot in the future. I began the chapter on Recurrent Neural Networks (chapter 15).

1 parent fef8a10 commit 47a4e92

7 files changed: +235 −48 lines
Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Chapter 15. Processing Sequences Using RNNs and CNNs\n",
+    "\n",
+    "Recurrent neural networks can work on sequences of arbitrary length, making them very useful for time series data or text processing."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "import pandas as pd\n",
+    "import matplotlib as mpl\n",
+    "import matplotlib.pyplot as plt\n",
+    "import seaborn as sns\n",
+    "import tensorflow as tf\n",
+    "import tensorflow.keras as keras\n",
+    "\n",
+    "%matplotlib inline\n",
+    "np.random.seed(0)\n",
+    "sns.set_style('whitegrid')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Recurrent neurons and layers\n",
+    "\n",
+    "A recurrent neuron looks just like a normal feedforward neuron except it also has connections pointing backwards.\n",
+    "At each *time step t* (or *frame*), a recurrent neuron receives the inputs $\\textbf{x}_{(t)}$ as well as its own output from a previous time step, $\\textbf{y}_{(t-1)}$.\n",
+    "Thus, each neuron has two sets of weights, $\\textbf{w}_x$ and $\\textbf{w}_y$.\n",
+    "These inputs and weights get multiplied together and passed to an activation function just like for a feedforward network.\n",
+    "The following function is for a layer of recurrent neurons at a time frame $t$ where $\\phi$ is the activation function and $b$ is the bias.\n",
+    "\n",
+    "$$\n",
+    "\\textbf{y}_{(t)} = \\phi(\\textbf{W}_x^T \\textbf{x}_{(t)} + \\textbf{W}_y^T \\textbf{y}_{(t-1)} + b)\n",
+    "$$\n",
+    "\n",
+    "Generally, the initial value of $\\textbf{y}$ at $t=0$ is set to 0.\n",
+    "It is common to see a recurrent neuron displayed across the time axis - this is called *unrolling the network through time*.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.7.6"
+  },
+  "toc": {
+   "base_numbering": 1,
+   "nav_menu": {},
+   "number_sections": true,
+   "sideBar": true,
+   "skip_h1_title": false,
+   "title_cell": "Table of Contents",
+   "title_sidebar": "Contents",
+   "toc_cell": false,
+   "toc_position": {},
+   "toc_section_display": true,
+   "toc_window_display": false
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
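The recurrent-layer formula in the new chapter-15 notebook maps directly onto a few lines of NumPy. The following is a minimal sketch, not code from the commit: the shapes, variable names, and the choice of tanh for $\phi$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_neurons, batch = 3, 5, 4
Wx = rng.normal(size=(n_inputs, n_neurons))   # input weights, W_x
Wy = rng.normal(size=(n_neurons, n_neurons))  # recurrent weights, W_y
b = np.zeros(n_neurons)                       # bias

def rnn_step(x_t, y_prev):
    # y_(t) = phi(W_x^T x_(t) + W_y^T y_(t-1) + b), with phi = tanh;
    # row-vector inputs, so the transposes become right-multiplications
    return np.tanh(x_t @ Wx + y_prev @ Wy + b)

x0 = rng.normal(size=(batch, n_inputs))
y0 = rnn_step(x0, np.zeros((batch, n_neurons)))  # y at t=0 starts from zeros
x1 = rng.normal(size=(batch, n_inputs))
y1 = rnn_step(x1, y0)                            # reuses the same weights
print(y0.shape, y1.shape)  # (4, 5) (4, 5)
```

Each output has one value per neuron per instance, and the same two weight matrices are shared across every time step.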

HandsOnMachineLearningWithScikitLearnAndTensorFlow/.ipynb_checkpoints/homl_ch14_Deep-computer-vision-using-convolutional-neural-networks-checkpoint.ipynb

Lines changed: 3 additions & 19 deletions
@@ -9,7 +9,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": 31,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -20,6 +20,7 @@
    "import seaborn as sns\n",
    "import tensorflow as tf\n",
    "import tensorflow.keras as keras\n",
+   "import tensorflow_datasets as tfds\n",
    "\n",
    "np.random.seed(0)\n",
    "sns.set_style('whitegrid')\n",
@@ -1033,7 +1034,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 31,
+   "execution_count": 29,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -1115,23 +1116,6 @@
    "It is actually an *instance segmentation model* where each instance is kept separate instead of being lumped together with other instances (like a segmentation model would do).\n",
    "It provides output of both bounding boxes with estimated class probabilities and a pixel mask that locates pixels in the bounding box that belong to each object.\n"
   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 32,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Use this tutorial to help download and use the MobileNetV2 pre-trained model.\n",
-    "# https://www.tensorflow.org/tutorials/images/segmentation"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {
Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
+# Chapter 15. Processing Sequences Using RNNs and CNNs
+
+Recurrent neural networks can work on sequences of arbitrary length, making them very useful for time series data or text processing.
+
+
+```python
+import numpy as np
+import pandas as pd
+import matplotlib as mpl
+import matplotlib.pyplot as plt
+import seaborn as sns
+import tensorflow as tf
+import tensorflow.keras as keras
+
+%matplotlib inline
+np.random.seed(0)
+sns.set_style('whitegrid')
+```
+
+## Recurrent neurons and layers
+
+A recurrent neuron looks just like a normal feedforward neuron except it also has connections pointing backwards.
+At each *time step t* (or *frame*), a recurrent neuron receives the inputs $\textbf{x}_{(t)}$ as well as its own output from a previous time step, $\textbf{y}_{(t-1)}$.
+Thus, each neuron has two sets of weights, $\textbf{w}_x$ and $\textbf{w}_y$.
+These inputs and weights get multiplied together and passed to an activation function just like for a feedforward network.
+The following function is for a layer of recurrent neurons at a time frame $t$ where $\phi$ is the activation function and $b$ is the bias.
+
+$$
+\textbf{y}_{(t)} = \phi(\textbf{W}_x^T \textbf{x}_{(t)} + \textbf{W}_y^T \textbf{y}_{(t-1)} + b)
+$$
+
+Generally, the initial value of $\textbf{y}$ at $t=0$ is set to 0.
+It is common to see a recurrent neuron displayed across the time axis - this is called *unrolling the network through time*.
+
+
+
+```python
+
+```

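The "unrolling the network through time" remark above can be made concrete by looping one step function over every frame of a sequence, reusing the same weights at each step. A hedged NumPy sketch, not part of the commit; all sizes and names are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_steps, n_inputs, n_neurons = 6, 2, 4

Wx = rng.normal(scale=0.1, size=(n_inputs, n_neurons))   # input weights
Wy = rng.normal(scale=0.1, size=(n_neurons, n_neurons))  # recurrent weights
b = np.zeros(n_neurons)

X = rng.normal(size=(n_steps, n_inputs))  # one sequence of 6 frames
y = np.zeros(n_neurons)                   # y at t=0 is initialized to 0
outputs = []
for x_t in X:                             # unrolling: the same Wx, Wy, b at every t
    y = np.tanh(x_t @ Wx + y @ Wy + b)
    outputs.append(y)
outputs = np.stack(outputs)
print(outputs.shape)  # (6, 4)
```

Drawing one copy of this loop body per time step, with arrows carrying `y` forward, gives exactly the unrolled diagram the text describes.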
HandsOnMachineLearningWithScikitLearnAndTensorFlow/homl_ch14_Deep-computer-vision-using-convolutional-neural-networks.ipynb

Lines changed: 2 additions & 18 deletions
@@ -20,6 +20,7 @@
    "import seaborn as sns\n",
    "import tensorflow as tf\n",
    "import tensorflow.keras as keras\n",
+   "import tensorflow_datasets as tfds\n",
    "\n",
    "np.random.seed(0)\n",
    "sns.set_style('whitegrid')\n",
@@ -1033,7 +1034,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 31,
+   "execution_count": 29,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -1115,23 +1116,6 @@
    "It is actually an *instance segmentation model* where each instance is kept separate instead of being lumped together with other instances (like a segmentation model would do).\n",
    "It provides output of both bounding boxes with estimated class probabilities and a pixel mask that locates pixels in the bounding box that belong to each object.\n"
   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 32,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Use this tutorial to help download and use the MobileNetV2 pre-trained model.\n",
-    "# https://www.tensorflow.org/tutorials/images/segmentation"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {

HandsOnMachineLearningWithScikitLearnAndTensorFlow/homl_ch14_Deep-computer-vision-using-convolutional-neural-networks.md

Lines changed: 1 addition & 11 deletions
@@ -9,6 +9,7 @@ import matplotlib.pyplot as plt
 import seaborn as sns
 import tensorflow as tf
 import tensorflow.keras as keras
+import tensorflow_datasets as tfds
 
 np.random.seed(0)
 sns.set_style('whitegrid')
@@ -840,14 +841,3 @@ Also, *Mask R-CNN* is pretrained model in the TF Models project.
 It is actually an *instance segmentation model* where each instance is kept separate instead of being lumped together with other instances (like a segmentation model would do).
 It provides output of both bounding boxes with estimated class probabilities and a pixel mask that locates pixels in the bounding box that belong to each object.
 
-
-
-```python
-# Use this tutorial to help download and use the MobileNetV2 pre-trained model.
-# https://www.tensorflow.org/tutorials/images/segmentation
-```
-
-
-```python
-
-```

README.md

Lines changed: 4 additions & 0 deletions
@@ -486,3 +486,7 @@ Finally, we breifly demonstrated a simple example of implementing transfer learn
 
 **Day 107 - February 10, 2020:**
 I finished Chapter 14 on computer vision with deep models by learning about object localization, object detection, and semantic segmentation.
+
+**Day 108 - February 11, 2020:**
+I experimented with using a pretraining neural network for image segmentation, but that failed - I may give it another shot in the future.
+I began the chapter on Recurrent Neural Networks (chapter 15).