Wednesday, October 18, 2023

pyspark code to get estimated size of dataframe in bytes

from pyspark.sql import SparkSession

import sys
# Initialize a Spark session
spark = SparkSession.builder.appName("DataFrameSize").getOrCreate()

# Create a PySpark DataFrame
data = [(1, "John"), (2, "Alice"), (3, "Bob")]
columns = ["id", "name"]
df = spark.createDataFrame(data, columns)

# Rough estimate of the DataFrame size in bytes:
# flatten each Row into its field values and sum their Python object sizes
size_in_bytes = df.rdd.flatMap(lambda x: x).map(lambda x: sys.getsizeof(x) if x is not None else 0).sum()
print(f"Size of the DataFrame: {size_in_bytes} bytes")

# Stop the Spark session
spark.stop()

Wednesday, July 19, 2023

replaceWhere

If we want to replace the contents of a table, or of the files at a path, the options below make that possible.

  • The replaceWhere option atomically replaces all records that match a given predicate.

  • You can replace directories of data based on how tables are partitioned using dynamic partition overwrites.

Python:
(replace_data.write
  .format("delta")
  .mode("overwrite")
  .option("replaceWhere", "start_date >= '2017-01-02' AND end_date <= '2017-01-30'")
  .save("/tmp1/delta/events"))
SQL:
INSERT INTO TABLE events REPLACE WHERE start_date >= '2017-01-01' AND end_date <= '2017-01-31' SELECT * FROM replace_data

Friday, July 14, 2023

Magic commands

 

The following magic commands are available in Databricks:

  1. %python
  2. %sql
  3. %scala
  4. %sh
  5. %fs → Alternatively, one can use dbutils.fs (see the example below the notes)
  6. %md

Note:

  • The first line in the cell must be the magic command
  • One cell allows only one magic command
  • Magic commands are case sensitive
  • When you change the default language of a notebook, the existing cells are automatically prefixed with the magic command of the previous default language.
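
For example, item 5 above: a %fs cell and its dbutils.fs equivalent list the same directory (the /databricks-datasets path is Databricks' built-in sample data, used here only as an illustration). Each snippet below goes in its own cell:

%fs ls /databricks-datasets

# equivalent, in a Python cell
display(dbutils.fs.ls("/databricks-datasets"))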

Saturday, March 11, 2023

Tensorflow general methods

  • Method to get the shape of a TensorFlow element
  • Method to apply a function to all elements
  • Defining variables and constants in TensorFlow for a matrix of elements
  • Casting, activation functions
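
A minimal sketch of these operations (the tensor values and shapes below are just illustrative):

import tensorflow as tf

# Shape of a tensor
t = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(t.shape)           # static shape: (2, 3)
print(tf.shape(t))       # dynamic shape as a tensor

# Apply a function to all elements
squared = tf.map_fn(lambda x: x * x, t)   # element-wise square via map_fn
doubled = t * 2.0                         # plain broadcasting is usually simpler

# Variables and constants for a matrix of elements
W = tf.Variable(tf.random.normal((3, 2)), name="W")    # trainable variable
b = tf.constant([[0.1], [0.2], [0.3]])                 # fixed constant

# Casting and activation functions
ints = tf.cast(t, tf.int32)                # float32 -> int32
relu_out = tf.keras.activations.relu(t)    # ReLU activation
sig_out = tf.math.sigmoid(t)               # sigmoid activation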

Wednesday, March 8, 2023

Synchronously shuffle X,Y

    import numpy as np

    # Assumes X has shape (features, m), Y has shape (1, m), and seed is an integer
    np.random.seed(seed)

    m = X.shape[1]                  # number of training examples

    permutation = list(np.random.permutation(m))

    shuffled_X = X[:, permutation]                    # reorder the columns of X

    shuffled_Y = Y[:, permutation].reshape((1, m))    # reorder Y with the same permutation

Saturday, March 4, 2023

Dropout

DROPOUT is a widely used regularization technique that is specific to deep learning. It randomly shuts down some neurons in each iteration. At each iteration, you shut down (= set to zero) each neuron of a layer with probability 1 − keep_prob, or keep it with probability keep_prob (e.g. 50% when keep_prob = 0.5). The dropped neurons don't contribute to the training in either the forward or backward propagation of that iteration.

When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.

  • Dropout is a regularization technique.
  • You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
  • Apply dropout both during forward and backward propagation.
  • During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5.
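
A minimal numpy sketch of this ("inverted dropout") on one layer's activations; the array shapes and the keep_prob value are just illustrative:

import numpy as np

keep_prob = 0.5
A1 = np.random.rand(4, 5)                     # activations of some hidden layer, shape (units, examples)

# Forward propagation with dropout
D1 = np.random.rand(*A1.shape) < keep_prob    # mask: True to keep a neuron, False to shut it down
A1 = A1 * D1                                  # shut down the dropped neurons
A1 = A1 / keep_prob                           # scale up so the expected value of the activations is unchanged

# Backward propagation: apply the same mask and scaling to the incoming gradient dA1
dA1 = np.random.rand(*A1.shape)               # placeholder gradient, just for illustration
dA1 = dA1 * D1
dA1 = dA1 / keep_prob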

L2 Regularization

The L2-regularized cost adds a penalty on the squared weights to the usual cross-entropy cost:

J_regularized = (cross-entropy cost) + \frac{\lambda}{2m} \sum_{l} \sum_{k} \sum_{j} \left( W_{k,j}^{[l]} \right)^2

where

m = # of training examples

l = layer

k, j = shape of the weight matrix W^{[l]}
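
A minimal numpy sketch of the penalty term; the weight matrices, layer sizes and lambd value below are just illustrative:

import numpy as np

lambd = 0.1
m = 5                                         # number of training examples
W1 = np.random.randn(4, 3)                    # example weight matrix of layer 1
W2 = np.random.randn(1, 4)                    # example weight matrix of layer 2

# (lambda / 2m) * sum of squared weights over all layers
l2_penalty = (lambd / (2 * m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)))

# regularized cost = cross-entropy cost + l2_penalty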

Friday, March 3, 2023

python - Initialization of weights

The main difference between a Gaussian random variable (numpy.random.randn()) and a uniform random variable (numpy.random.rand()) is the distribution of the generated random numbers:

When used for weight initialization, randn() helps most of the weights avoid being close to the extremes, concentrating most of them around the center of the range.

An intuitive way to see this is to take the sigmoid() activation function as an example: its slope near outputs of 0 or 1 is extremely small, so weights pushed toward those extremes converge much more slowly, and having most of them near the center of the range speeds up convergence.

Initialization of weights

 

  • The weights W^[l] should be initialized randomly to break symmetry.
  • However, it's okay to initialize the biases b^[l] to zeros. Symmetry is still broken so long as W^[l] is initialized randomly.
  • Initializing weights to very large random values doesn't work well.
  • Initializing with small random values should do better.
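
A minimal sketch of this for one layer; the layer sizes and the 0.01 scaling factor are just illustrative:

import numpy as np

n_prev, n_curr = 4, 3                         # units in the previous layer and in the current layer

W = np.random.randn(n_curr, n_prev) * 0.01    # small random Gaussian values break symmetry
b = np.zeros((n_curr, 1))                     # biases can safely start at zero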

Wednesday, March 1, 2023

python code to plot cost

import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline 

def plot_costs(costs, learning_rate=0.0075):

    plt.plot(np.squeeze(costs))

    plt.ylabel('cost')

    plt.xlabel('iterations (per hundreds)')

    plt.title("Learning rate =" + str(learning_rate))

    plt.show()


# Assuming "costs" is a list of costs recorded every hundred training iterations
# and "learning_rate" is the rate that was used during training

plot_costs(costs, learning_rate)

Output: a plot of the cost decreasing over iterations (per hundreds), titled with the learning rate.

Deep Learning methodology using gradient descent

Usual Deep Learning methodology to build the model:

  1. Initialize parameters / Define hyperparameters
  2. Loop for num_iterations:
        a. Forward propagation
        b. Compute cost function
        c. Backward propagation
        d. Update parameters (using parameters, and grads from backprop)
  3. Use trained parameters to predict labels
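
As a tiny runnable illustration of those steps, a minimal logistic-regression version of the loop; the data, sizes and hyperparameter values below are made up:

import numpy as np

np.random.seed(1)
X = np.random.randn(2, 100)                    # 2 features, 100 examples (illustrative)
Y = (X[0:1, :] + X[1:2, :] > 0).astype(float)  # made-up labels, shape (1, 100)

# 1. Initialize parameters / define hyperparameters
W = np.random.randn(1, 2) * 0.01
b = np.zeros((1, 1))
learning_rate = 0.1
num_iterations = 500
m = X.shape[1]

# 2. Loop for num_iterations
for i in range(num_iterations):
    Z = np.dot(W, X) + b                                        # a. forward propagation
    A = 1 / (1 + np.exp(-Z))
    cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))    # b. compute cost function
    dZ = A - Y                                                  # c. backward propagation
    dW = np.dot(dZ, X.T) / m
    db = np.mean(dZ, keepdims=True)
    W = W - learning_rate * dW                                  # d. update parameters
    b = b - learning_rate * db

# 3. Use trained parameters to predict labels
predictions = (1 / (1 + np.exp(-(np.dot(W, X) + b))) > 0.5).astype(int)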

Sunday, December 18, 2022

split data set to train, cross validation and test sets

print(f"the shape of the original set (input) is: {x.shape}")

print(f"the shape of the original set (target) is: {y.shape}\n")


from sklearn.model_selection import train_test_split

# Get 60% of the dataset as the training set. Put the remaining 40% in temporary variables.
x_train, x_, y_train, y_ = train_test_split(x, y, test_size=0.40, random_state=1)

# Split the 40% subset above into two: one half for cross validation and the other for the test set
x_cv, x_test, y_cv, y_test = train_test_split(x_, y_, test_size=0.50, random_state=1)

# Delete temporary variables
del x_, y_

print(f"the shape of the training set (input) is: {x_train.shape}")
print(f"the shape of the training set (target) is: {y_train.shape}\n")
print(f"the shape of the cross validation set (input) is: {x_cv.shape}")
print(f"the shape of the cross validation set (target) is: {y_cv.shape}\n")
print(f"the shape of the test set (input) is: {x_test.shape}")
print(f"the shape of the test set (target) is: {y_test.shape}")





Tuesday, December 13, 2022

Epochs and batches

We provide epoch value while fitting/training the model as below.

Example: model.fit(X, Y, epochs=100)


In the fit statement above, the number of epochs was set to 100. This specifies that the entire data set should be applied during training 100 times. During training, you see output describing the progress of training that looks like this:

Epoch 1/100
157/157 [==============================] - 0s 1ms/step - loss: 2.2770

The first line, Epoch 1/100, describes which epoch the model is currently running. For efficiency, the training data set is broken into 'batches'. The default batch size in TensorFlow is 32. If a model has 5000 training examples (X_train), that works out to roughly 157 batches per epoch (5000 / 32 ≈ 157). The notation on the second line, 157/157 [====, describes which batch has just been executed.

Loss (cost)

Ideally, the cost will decrease as the number of iterations of the algorithm increases. TensorFlow refers to the cost as loss.
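
A minimal sketch of such a fit call; the model architecture and data below are made up purely to illustrate the epochs and batch_size arguments:

import numpy as np
import tensorflow as tf

X_train = np.random.rand(5000, 10)                 # 5000 examples, 10 features (illustrative)
Y_train = np.random.randint(0, 2, size=(5000, 1))  # binary targets (illustrative)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam")

# 100 passes over the data; with the default batch size of 32, 5000 / 32 ≈ 157 batches per epoch
model.fit(X_train, Y_train, epochs=100, batch_size=32)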