List out the most commonly used functions in Keras

Below are some of the key functions of the Keras Sequential model, with descriptions of how they are used when building and training sentiment analysis models:

model.compile(optimizer, loss, metrics):
Description: Compiles the model for training.
Arguments:
optimizer: Optimizer algorithm to use during training (e.g., 'adam', 'sgd').
loss: Loss function to optimize during training (e.g., 'binary_crossentropy').
metrics: List of metrics to monitor during training (e.g., ['accuracy']).

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
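The optimizer can also be passed as an object instead of a string, which is useful when you want a non-default learning rate. A minimal sketch, assuming a recent Keras version where Adam accepts the learning_rate argument:

from keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])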

model.fit(x, y, batch_size, epochs, validation_data):
Description: Trains the model on training data.
Arguments:
x: Input data (features).
y: Target data (labels).
batch_size: Number of samples per gradient update.
epochs: Number of training epochs.
validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch.

model.fit(X_train, y_train, epochs=10, batch_size=32)
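The validation_data argument listed above is passed as a tuple of held-out arrays. A minimal sketch, assuming X_val and y_val are validation arrays split off beforehand (they are not defined in the earlier snippets):

model.fit(X_train, y_train,
          epochs=10,
          batch_size=32,
          validation_data=(X_val, y_val))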

model.evaluate(x, y):
Description: Evaluates the model on test data.
Arguments:
x: Input data (features).
y: Target data (labels).
Returns: Test loss and any model metrics specified during compilation.

loss, accuracy = model.evaluate(X_test, y_test)

model.predict(x):
Description: Generates output predictions for the input samples.
Arguments:
x: Input data (features).
Returns: Predicted outputs for the input samples.

predictions = model.predict(X_test)
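Since the output layer uses a sigmoid activation, predict returns probabilities between 0 and 1, so a common follow-up step is to threshold them at 0.5 to turn them into sentiment labels. A minimal sketch:

predictions = model.predict(X_test)
predicted_labels = (predictions >= 0.5).astype(int)  # 1 = positive, 0 = negative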

model.summary():
Description: Prints a summary representation of the model architecture.
Output: Provides a table showing the layers in the model, the number of parameters in each layer, and the total number of parameters.
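For the two-layer model defined in the compile example above, the call is simply:

model.summary()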

model.add(layer):
Description: Adds a layer on top of the current layer stack.
Arguments:
layer: Keras layer object to be added to the model.

model.pop():
Description: Removes the last layer from the model.
Returns: The removed layer.

model.layers:
Description: A list of all layers in the model (a property rather than a method).

model.get_layer(name):
Description: Retrieves a layer in the model by its name.
Arguments:
name: Name of the layer to retrieve.
A combined sketch of these layer-management methods is shown after the save/load example below.

model.save(filepath) and load_model(filepath):
Description: Save or load the model's architecture, weights, and training configuration to/from a file.

These are some of the most commonly used functions of the Sequential model in Keras for building and training sentiment analysis models. They provide the tools needed for compiling, training, evaluating, and deploying models efficiently. For example, a trained sentiment model can be saved and loaded back like this:

model.save('sentiment_model.h5')
from keras.models import load_model

loaded_model = load_model('sentiment_model.h5')
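As referenced above, here is a minimal sketch of the layer-management methods (add, pop, layers, get_layer). It rebuilds the two-layer model from the compile example; the layer name 'output_layer' is only an illustrative, assumed name:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid', name='output_layer'))

print(len(model.layers))                        # 2 layers in the stack
output_layer = model.get_layer('output_layer')  # retrieve a layer by its name

model.pop()                                     # remove the last layer
print(len(model.layers))                        # 1 layer remains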

=======================================================

Example of Sentiment Analysis on the IMDB Dataset

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Load IMDB dataset
num_words = 10000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=num_words)
max_length = 100
X_train = pad_sequences(X_train, maxlen=max_length)
X_test = pad_sequences(X_test, maxlen=max_length)

# Define the Sequential model
model = Sequential()
model.add(Embedding(input_dim=num_words, output_dim=128, input_length=max_length))
model.add(LSTM(units=128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(units=1, activation='sigmoid'))

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Print model summary
model.summary()

# Train the model
batch_size = 32
epochs = 3
history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(X_test, y_test))

# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)
print(f'Test Loss: {loss:.4f}, Test Accuracy: {accuracy:.4f}')

# Make predictions on raw text
# The IMDB data is encoded as integer word indices, so new text must be encoded
# with the same word index (indices are offset by 3; 1 = start, 2 = out-of-vocabulary)
word_index = imdb.get_word_index()

def predict_sentiment(text):
    tokens = text.lower().replace('!', '').replace(',', '').split()
    sequence = [1] + [word_index[w] + 3 if w in word_index and word_index[w] + 3 < num_words else 2
                      for w in tokens]
    padded_sequence = pad_sequences([sequence], maxlen=max_length)
    prediction = model.predict(padded_sequence)[0][0]
    sentiment = "positive" if prediction >= 0.5 else "negative"
    return sentiment

# Example usage
text = "I loved the movie, it was fantastic!"
sentiment = predict_sentiment(text)
print(f'Sentiment: {sentiment}')
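Because imdb.load_data returns each review as a sequence of integer word indices rather than raw text, it can help to decode a sequence back into words when inspecting the data. A minimal sketch using the same word index offset as in predict_sentiment above (padding positions and special tokens show up as '?'):

word_index = imdb.get_word_index()
reverse_word_index = {index + 3: word for word, index in word_index.items()}

def decode_review(sequence):
    return ' '.join(reverse_word_index.get(i, '?') for i in sequence)

print(decode_review(X_train[0]))  # first (padded) training review as approximate text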
