In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.

In [1]:
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```

A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!

In [2]:
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
```

In [3]:
```
rides.head()
```

Out[3]: (first five rows of the `rides` DataFrame; table output not shown)

This dataset has the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. The number of riders is split between casual and registered, and the two are summed up in the `cnt` column. You can see the first few rows of the data above.
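As a quick sanity check (this line is my addition, not part of the provided notebook), you can confirm that `cnt` really is the sum of the two rider types:

```
# Sanity check: cnt should equal casual + registered for every row
assert (rides['casual'] + rides['registered'] == rides['cnt']).all()
```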

Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all of this with your model.

In [4]:
```
rides[:24*10].plot(x='dteday', y='cnt')
```

Out[4]: (hourly `cnt` over the first 10 days, plotted against `dteday`; figure not shown)

In [5]:
```
# Convert the categorical variables into binary dummy columns
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)

# Drop the original categorical columns and a few fields we won't use
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
                  'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```

Out[5]: (first five rows of `data` after adding the dummy variables; table not shown)

To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.

The scaling factors are saved so we can go backwards when we use the network for predictions.

In [87]:
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']

# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean) / std
```
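As a small illustration of "going backwards" (a sketch I'm adding here, using only the `scaled_features` dictionary defined above), the saved mean and standard deviation undo the standardization:

```
# Sketch: convert standardized values back to the original units for one feature
mean, std = scaled_features['cnt']
original_cnt = data['cnt'] * std + mean   # back to raw ride counts
```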

In [7]:
```
# Save the last 21 days as a test set
test_data = data[-21*24:]
data = data[:-21*24]

# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```

In [8]:
```
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
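A quick way to confirm the split (my addition, not part of the original notebook) is to print the shape of each set:

```
# Sanity check: sizes of the training, validation, and test splits
print("train:", train_features.shape,
      "validation:", val_features.shape,
      "test:", test_features.shape)
```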

Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

The network has two layers: a hidden layer and an output layer. The hidden layer uses the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of that node is the same as its input. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, taking the threshold into account, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons in the next layer. This process is called *forward propagation*.
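For a concrete picture, here is a minimal sketch of that forward pass with made-up shapes (the variable names and sizes are illustrative, not part of the project's class):

```
import numpy as np

# Illustrative forward pass: 3 inputs -> 2 hidden units (sigmoid) -> 1 output (f(x) = x)
x = np.random.rand(3, 1)               # one record with 3 features, as a column vector
W_in_hidden = np.random.rand(2, 3)     # weights from input layer to hidden layer
W_hidden_out = np.random.rand(1, 2)    # weights from hidden layer to output layer

hidden_inputs = W_in_hidden @ x                    # signals into the hidden layer
hidden_outputs = 1 / (1 + np.exp(-hidden_inputs))  # sigmoid activation
output = W_hidden_out @ hidden_outputs             # output activation is f(x) = x
```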

We use the weights to propagate signals forward from the input to the output layer of the network. We also use the weights to propagate the error backwards from the output into the network so we can update the weights. This is called *backpropagation*.
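Concretely (the notation here is mine, chosen to match the implementation below), with input $x$, hidden-layer inputs $z_h$, hidden-layer outputs $h$, prediction $\hat{y}$, target $y$, and learning rate $\eta$, the error terms and weight updates are:

$$
e_o = y - \hat{y}, \qquad e_h = W_{h\to o}^{\mathsf T}\, e_o,
$$
$$
\Delta W_{h\to o} = \eta\, e_o\, h^{\mathsf T}, \qquad
\Delta W_{i\to h} = \eta\, \big(e_h \odot \sigma'(z_h)\big)\, x^{\mathsf T},
$$

where $\sigma'$ is the derivative of the sigmoid and $\odot$ is element-wise multiplication. Because the output activation is $f(x)=x$, its derivative is 1 and drops out of $e_o$.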

Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.

Below, you have these tasks:

- Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
- Implement the forward pass in the `train` method.
- Implement the backpropagation algorithm in the `train` method, including calculating the output error.
- Implement the forward pass in the `run` method.

In [430]:
```
def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def sigmoid_derivative(x):
    return sigmoid(x) * (1 - sigmoid(x))


class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                                         (self.hidden_nodes, self.input_nodes))
        self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
                                                          (self.output_nodes, self.hidden_nodes))
        self.lr = learning_rate

        #### Set this to your implemented sigmoid function ####
        # Activation function is the sigmoid function
        self.activation_function = sigmoid

        # Activation function derivative.
        # We could use activation_function directly in the code, but that would make it very
        # dependent on the actual function being used. For class flexibility and clarity of
        # steps, let's leave it defined separately.
        self.activation_function_derivative = sigmoid_derivative

    def train(self, inputs_list, targets_list):
        # Convert inputs list to 2d array
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T

        ### Forward pass ###
        ## Hidden layer
        # Inputs to hidden layer: weights_input_to_hidden . inputs.
        # This already gives us a matrix of the right dimensionality for the hidden layer.
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
        # Outputs of hidden layer: f(hidden_inputs) => sigmoid(hidden_inputs).
        # Activation functions do not change the dimensionality of the data.
        hidden_outputs = self.activation_function(hidden_inputs)

        ## Output layer
        # Input to output layer: weights_hidden_to_output . hidden_outputs
        output_layer_in = np.dot(self.weights_hidden_to_output, hidden_outputs)
        # Output of output layer: f(output_layer_in) = output_layer_in (f(x) = x)
        output = output_layer_in

        ### Backward pass ###
        # Output error: desired - predicted
        output_errors = targets - output
        # Pending question: should we square the error? Doing so produces divergence. Why?

        # hidden_errors: errors distributed to the hidden layer, according to the weights
        # connecting the hidden and output layers
        hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)

        # hidden_grad: derivative of the hidden layer's activation function, evaluated at the
        # input that layer originally received. Since the activation is the sigmoid, an
        # equivalent way to write this is:
        #   hidden_grad = hidden_outputs * (1 - hidden_outputs)
        # These are element-wise operations, so they do not change dimensionality:
        # hidden_errors holds one error per hidden node, and hidden_grad holds one gradient
        # factor per hidden node. A previous version of this line was:
        #   hidden_grad = hidden_errors * self.activation_function_derivative(hidden_inputs)
        # Project review note:
        # > In this case, we're calling the hidden gradient just the derivative of the sigmoid
        # > function, so please change hidden_grad to be just the derivative of the sigmoid
        # > function. Remove hidden errors during calculation. Also update delta_w_i_h
        # > accordingly. Your implementation should just be as is. Just some code refactoring.
        hidden_grad = self.activation_function_derivative(hidden_inputs)

        # Delta for the output-layer weights: learning rate * error * input to the output layer
        # (hidden_outputs); transposing hidden_outputs so it takes the shape of the weight matrix
        delta_w_h_o = self.lr * np.dot(output_errors, hidden_outputs.T)
        # Delta for the hidden-layer weights: learning rate * error gradient * input layer
        # (inputs); transposing inputs so it takes the shape of the weight matrix
        delta_w_i_h = self.lr * hidden_errors * np.dot(hidden_grad, inputs.T)

        # Update weights
        self.weights_hidden_to_output += delta_w_h_o
        self.weights_input_to_hidden += delta_w_i_h

    def run(self, inputs_list):
        # Run a forward pass through the network
        inputs = np.array(inputs_list, ndmin=2).T

        #### Implement the forward pass here ####
        # Hidden layer
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)   # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)       # signals from hidden layer

        # Output layer
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # signals into final output layer
        # The activation function for the final layer is f(x) = x, so no change is required
        final_outputs = final_inputs                                   # signals from final output layer

        return final_outputs
```

In [431]:
```
def MSE(y, Y):
    return np.mean((y - Y)**2)
```
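As a quick check of the function: `MSE(np.array([1.0, 2.0]), np.array([1.0, 3.0]))` returns `0.5`, the mean of the squared differences `0` and `1`.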

Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
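The training cell further down implements this; as a stripped-down sketch of a single SGD pass (the names here follow that cell, so run it in the same context):

```
# One stochastic-gradient-descent pass: sample a small random batch and train on it
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values,
                          train_targets.loc[batch]['cnt']):
    network.train(record, target)
```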

The number of epochs is the number of times the dataset passes through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well, but not so many that you overfit.

The learning rate scales the size of the weight updates. If it is too big, the weights tend to explode and the network fails to fit the data. A good starting value is 0.1. If the network has trouble fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps in the weight updates and the longer it takes the neural network to converge.

The number of hidden nodes controls how much the model can learn: more hidden nodes let the model make more accurate predictions, up to a point. Try a few different numbers and see how they affect the performance. You can look at the losses dictionary for a metric of the network's performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
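One way to use the losses dictionary mentioned above (a sketch of my own; `losses` is filled in by the training cell below) is to compare hyperparameter settings by their best validation loss:

```
# After training, compare settings by their lowest validation loss
best_val_loss = min(losses['validation'])
print("best validation loss:", best_val_loss)
```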

In [432]:
```
import sys

### Set the hyperparameters here ###
epochs = 1000          # more epochs don't make much of a difference in the loss
learning_rate = 0.15   # values above 0.2 seem to oscillate too much
# hidden_nodes = 7     # 7 seemed a "good" trade-off between loss and speed; larger numbers didn't increase accuracy
# Project review note:
# > I think you can still do better here. Refer to this question:
# > https://www.quora.com/How-do-I-decide-the-number-of-nodes-in-a-hidden-layer-of-a-neural-network
# The thread suggests either the geometric mean of the input and output node counts (about 7
# here, the same as before) or the arithmetic mean (about 28); 28 is used below.
hidden_nodes = 28
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for e in range(epochs):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,
                              train_targets.loc[batch]['cnt']):
        debug = (e == 1) or (e == 0)   # unused debug flag, kept from development
        network.train(record, target)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
    sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4]
                     + "% ... Training loss: " + str(train_loss)[:5]
                     + " ... Validation loss: " + str(val_loss)[:5])

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)
```

(Training progress printout not shown.)

In [433]:
```
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(top=0.5)
```

Out[433]: (training and validation loss per epoch; figure not shown)

In [434]:
```
fig, ax = plt.subplots(figsize=(8, 4))

# Undo the standardization so predictions are in the original ride-count units
mean, std = scaled_features['cnt']
predictions = network.run(test_features) * std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt'] * std + mean).values, label='Data')
ax.set_xlim(right=len(predictions[0]))
ax.legend()

# Label the x-axis with the date of each test-set day
dates = pd.to_datetime(rides.loc[test_data.index, 'dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```

(Predicted vs. actual hourly ridership over the test period; figure not shown.)

Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?

Note: You can edit the text in this cell by double-clicking on it. When you want to render the text, press Control + Enter.

The model predicts the data quite accurately when ridership hovers around the 1.5 mark. This very likely means there is a pattern in the parameters and inputs leading up to that range. However, the predictions keep following that learned pattern over a time range where actual ridership was a lot lower, so the pattern no longer applies.

The lower numbers in the actual data correspond to Christmas and New Year's Eve, which could explain in the real world why the pattern changes. (Maybe people don't rent bikes during the festivities?)

Regardless of the reason, these are aspects that our data did not include, and so the neural network could not learn them.

In [416]:
```
import unittest

inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
                       [-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])


class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')

    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))

    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output,
                                    np.array([[ 0.37275328, -0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[ 0.10562014,  0.39775194, -0.29887597],
                                              [-0.20185996,  0.50074398,  0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))


suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```

Out[416]: (unittest runner output not shown)
