Multiclass Classification using Deep Learning

Multiclass classification is a machine learning classification task in which an input is assigned to one of more than two classes. In a multiclass setting, the model outputs a probability for every class, and the prediction is the class with the highest probability.
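As a minimal illustration (plain NumPy, independent of any particular model, with made-up probability values), picking the predicted class from a vector of class probabilities looks like this:

import numpy as np

# Hypothetical softmax output for one image over 10 classes
probs = np.array([0.02, 0.01, 0.05, 0.60, 0.03, 0.10, 0.04, 0.05, 0.05, 0.05])

# The prediction is simply the index of the largest probability
predicted_class = np.argmax(probs)
print(predicted_class)  # -> 3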

This article is for image processing and computer vision enthusiasts, and for anyone who wants to learn how machine learning and deep learning can be used to solve such problems. The idea behind multiclass classification is general: by following these basics, you can classify almost anything into any number of classes.

For this purpose, we use the CIFAR-10 dataset, which contains 60,000 images of size 32 x 32 divided into 10 classes. The data is already split into training and test sets, and you can easily download it through Keras. The 10 classes are airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. This dataset is excellent for practicing multiclass classification in general, and because the images are small, training does not take much time.

You can use this data simply by importing it from the Keras library.

from keras.datasets import cifar10
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import BatchNormalization
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from keras.utils import normalize, to_categorical
from keras.layers import Dropout
from keras.optimizers import SGD, RMSprop

Load the data, which is already divided into training and test sets, and normalize it by dividing the pixel values by 255. The output labels y_train and y_test are integer class indices, so they need to be converted into one-hot encoded form with to_categorical.

(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.astype('float32') / 255.
X_test = X_test.astype('float32') / 255.
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
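
As a quick sanity check (a small sketch, not part of the original code), you can print the array shapes and display one of the training images together with its class name:

# CIFAR-10 class names, in label order
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

print(X_train.shape)  # (50000, 32, 32, 3)
print(X_test.shape)   # (10000, 32, 32, 3)

# Show the first training image with its class name
plt.imshow(X_train[0])
plt.title(class_names[y_train[0].argmax()])  # argmax undoes the one-hot encoding
plt.show()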
In this section, we define the model for the multiclass classification task. It consists of six convolutional layers, plus pooling, dropout, and dense layers, with 552,874 parameters in total.

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(32, 32, 3)))
model.add(BatchNormalization())

model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))

model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(BatchNormalization())

model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.3))

model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(BatchNormalization())

model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.4))

model.add(Flatten())
model.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
model.add(BatchNormalization())
model.add(Dropout(0.5))

model.add(Dense(10, activation='softmax'))

# compile model
opt = SGD(learning_rate=0.001, momentum=0.9)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
print(model.summary())

This will print the summary of the whole model.
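
The training call below passes a callbacks_list. A minimal sketch of such a list, assuming the standard Keras EarlyStopping and ModelCheckpoint callbacks (you can adjust or drop these as needed), could be:

from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks_list = [
    # Stop training if the validation loss has not improved for 5 epochs
    EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True),
    # Save the best model seen so far (the filename is an arbitrary choice)
    ModelCheckpoint('cifar10_best.h5', monitor='val_loss', save_best_only=True),
]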
history = model.fit(X_train, y_train,
                    epochs=25, batch_size=64,
                    validation_data=(X_test, y_test),
                    verbose=1, callbacks=callbacks_list)
_, acc = model.evaluate(X_test, y_test) 
print("Accuracy = ", (acc * 100.0), "%")


You can plot the training and validation loss and accuracy at each epoch. By visualizing these curves you can see whether the model is underfitting or overfitting, and on that basis adjust the parameters and hyperparameters to make your model handle multiclass classification more effectively.

loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'y', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()


acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
plt.plot(epochs, acc, 'y', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
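
One common way to improve these curves is data augmentation. ImageDataGenerator is already imported above; a minimal sketch of how it could be plugged into training (the shift and flip settings are illustrative choices, not values from the article):

# Randomly shift and flip the training images on the fly
datagen = ImageDataGenerator(width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

train_iter = datagen.flow(X_train, y_train, batch_size=64)

# Train on the augmented batches instead of the raw arrays
history_aug = model.fit(train_iter,
                        epochs=25,
                        validation_data=(X_test, y_test),
                        verbose=1)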
 

For further reading, check here.

Study more: Blockchain semantic segmentation with UNET, Simple CNN basic model from scratch.

 

 
