Method to get the shape of a TensorFlow element
Saturday, March 11, 2023
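A minimal sketch of two common ways to read a tensor's shape in TensorFlow 2.x; the tensor `t` below is just an illustrative example, not from the original notes:

```python
import tensorflow as tf

t = tf.zeros((3, 4))

# Static shape: known at graph-construction time, a TensorShape object
static_shape = t.shape

# Dynamic shape: an int32 tensor, useful when dimensions are only known at runtime
dynamic_shape = tf.shape(t)
```

`t.shape` is cheap and readable; `tf.shape(t)` is the one to use inside a `tf.function` when a dimension may be `None` statically.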
Tensorflow general methods
Wednesday, March 8, 2023
Synchronously shuffle X,Y
import numpy as np
seed = 0  # example value; any fixed seed makes the shuffle reproducible
np.random.seed(seed)
m = X.shape[1]  # number of training examples
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]  # apply the same column permutation to X and Y
shuffled_Y = Y[:, permutation].reshape((1, m))
Saturday, March 4, 2023
Dropout
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5.
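The scaling described in the last bullet (inverted dropout) can be sketched as a small helper; `dropout_forward` and the shapes are illustrative, not part of the original notes:

```python
import numpy as np

def dropout_forward(A, keep_prob, rng):
    """Apply inverted dropout to an activation matrix A (sketch)."""
    # Keep each unit independently with probability keep_prob
    D = rng.random(A.shape) < keep_prob
    # Zero out dropped units, then divide by keep_prob so the
    # expected value of the activations is unchanged
    A = A * D / keep_prob
    return A, D  # the mask D is reused in backward propagation
```

For example, with `keep_prob = 0.5` the surviving activations are doubled, so the layer's output keeps the same expected value on average.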
Friday, March 3, 2023
python - Initialization of weights
The main difference between a Gaussian random variable (numpy.random.randn()) and a uniform random variable is the distribution of the generated numbers:
- numpy.random.rand() produces numbers from a uniform distribution.
- numpy.random.randn() produces numbers from a normal distribution.
When used for weight initialization, randn() helps most of the weights avoid being close to the extremes, concentrating them near the center of the range.
An intuitive way to see it: take the sigmoid() activation function.
Its slope near 0 or near 1 is extremely small, so weights that push activations toward those extremes converge much more slowly; having most of them near the center speeds up convergence.
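A quick sanity check of the two distributions; the sample size and the `default_rng` seed are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform samples: spread evenly over [0, 1), mean near 0.5
u = rng.random(100_000)

# Gaussian samples: centered at 0, most mass within a few standard deviations
g = rng.standard_normal(100_000)
```

The uniform samples never leave [0, 1), while the Gaussian samples cluster around 0, which is what makes `randn()`-style initialization keep most weights away from the saturated ends of sigmoid.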
Initialization of weights
- The weights 𝑊[𝑙] should be initialized randomly to break symmetry.
- However, it's okay to initialize the biases 𝑏[𝑙] to zeros. Symmetry is still broken so long as 𝑊[𝑙] is initialized randomly.
- Initializing weights to very large random values doesn't work well.
- Initializing with small random values should do better.
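The bullets above can be sketched as an initializer; `initialize_parameters`, the 0.01 scale, and the seed are illustrative assumptions, not from the original notes:

```python
import numpy as np

def initialize_parameters(layer_dims, scale=0.01, seed=3):
    """Small random weights to break symmetry, zero biases (sketch)."""
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        # Random Gaussian weights scaled small, so activations don't saturate
        params["W" + str(l)] = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) * scale
        # Zero biases are fine: symmetry is already broken by the random weights
        params["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return params
```

For example, `initialize_parameters([2, 4, 1])` builds a 2-input network with one hidden layer of 4 units.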
Wednesday, March 1, 2023
python code to plot cost
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

def plot_costs(costs, learning_rate=0.0075):
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

# Assuming "costs" is a list of costs recorded every hundred training iterations,
# call the function with some learning rate:
plot_costs(costs, learning_rate)
output: (plot of cost vs. iterations)
Deep Learning methodology using gradient descent
Usual Deep Learning methodology to build the model:
- Initialize parameters / Define hyperparameters
- Loop for num_iterations: