Facial Expression Recognition using Keras
ABSTRACT
Facial expressions are part of human language and are often used to convey emotions. With the development of human-computer interaction technology, facial expression recognition (FER) has attracted growing attention, and the field has made steady progress. In this paper, we review the architectures that shaped FER: VGGNet, ResNet, GoogLeNet, and AlexNet. We study the core ideas of the convolutional neural network (CNN) and use FER2013, one of the largest public databases of human facial expressions, as our dataset. We also make several improvements on the original FER methods. Training on FER2013 with these revised approaches, the best accuracy we obtained is 0.6424. Finally, we summarize the progress made and the remaining deficiencies of this study.
Introduction
Facial expression is one of the most important cues for human emotion recognition.
It was introduced as a research field by Darwin in his book “The Expression of the Emotions in Man and Animals”.
It can be defined as the facial changes in response to a person’s internal emotional state, intentions, or social communication. Nowadays, automated facial expression recognition has a large variety of applications, such as data-driven animation, neuromarketing, interactive games, sociable robotics and many other human-computer interaction systems.
A facial emotion recognition system comprises a two-step process: face detection (producing a bounded face region) in the image, followed by emotion detection on that region. Two techniques handle these respective tasks: a Haar cascade for face detection (below), and a CNN classifier for emotion recognition (built in the tasks that follow).
- Haar feature-based cascade classifiers: these detect frontal faces in an image well, run in real time, and are faster than most other face detectors. This post uses the implementation from OpenCV, as sketched below.
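As a minimal sketch of this detection step (assuming OpenCV is installed; the cascade file ships with opencv-python, and 'sample.jpg' is a hypothetical test image):

import cv2

# load OpenCV's pretrained frontal-face Haar cascade
face_cascade=cv2.CascadeClassifier(cv2.data.haarcascades+'haarcascade_frontalface_default.xml')
img=cv2.imread('sample.jpg')                 # hypothetical test image
gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)    # the cascade operates on grayscale
faces=face_cascade.detectMultiScale(gray,scaleFactor=1.3,minNeighbors=5)
for (x,y,w,h) in faces:
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)   # draw the bounded face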
Dataset
Both training and evaluation are handled with the FER2013 dataset; download it before starting.
Requirements
The code below needs Python with TensorFlow (Keras), OpenCV, NumPy, Seaborn, Matplotlib, and Flask.
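These can be installed in one go with pip (versions unpinned here; the list is inferred from the code below, not from the original downloadable requirements file):

pip install tensorflow opencv-python numpy seaborn matplotlib flask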
P.S.: We developed this on Linux, so if you face any error you cannot resolve, leave a comment below and we will resolve it as soon as possible.
Let's start with the code:
Task 1: Import Libraries:
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import utils
import os
%matplotlib inline
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, Input, Dropout,Flatten, Conv2D
from tensorflow.keras.layers import BatchNormalization, Activation, MaxPooling2D
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.utils import plot_model
from IPython.display import SVG, Image
import tensorflow as tf
print("Tensorflow version:", tf.__version__)
Task 2: Plot Sample Images (optional):
# option 1: plot example images using the helper shipped with the project files
utils.datasets.fer.plot_example_images(plt).show()
# option 2: count the images per class instead
for expression in os.listdir("C:/Users/lenovo/Project/train/"):
    print(str(len(os.listdir("C:/Users/lenovo/Project/train/"+expression)))+" "+expression+' images')
# we can see here that the dataset is fairly balanced across classes, except for 'disgust'
Task 3: Generate Training and Validation Batches:
img_size=48
batch_size=64
datagen_train=ImageDataGenerator(horizontal_flip=True)   # simple augmentation: random horizontal flips
train_generator=datagen_train.flow_from_directory("C:/Users/lenovo/Project/train/",
                                                  target_size=(img_size,img_size),
                                                  color_mode='grayscale',
                                                  batch_size=batch_size,
                                                  class_mode='categorical',
                                                  shuffle=True)
datagen_validation=ImageDataGenerator(horizontal_flip=True)
validation_generator=datagen_validation.flow_from_directory("C:/Users/lenovo/Project/test/",
                                                  target_size=(img_size,img_size),
                                                  color_mode='grayscale',
                                                  batch_size=batch_size,
                                                  class_mode='categorical',
                                                  shuffle=False)   # validation data need not be shuffled
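flow_from_directory infers the seven emotion classes from the subfolder names, so both directories are assumed to follow the usual FER2013 folder export, one subfolder per class:

train/
    angry/
    disgust/
    fear/
    happy/
    neutral/
    sad/
    surprise/

(test/ mirrors the same layout.)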
Task 4: Create CNN Model:
model=Sequential()
# conv block 1
model.add(Conv2D(64,(3,3),padding='same',input_shape=(48,48,1)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# conv block 2
model.add(Conv2D(128,(5,5),padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# conv block 3
model.add(Conv2D(512,(3,3),padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# conv block 4
model.add(Conv2D(512,(3,3),padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# fully connected layers
model.add(Flatten())
model.add(Dense(256))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
# output layer: one unit per emotion class
model.add(Dense(7,activation='softmax'))
opt=Adam(learning_rate=0.0005)   # Adam optimizer with learning rate 0.0005
model.compile(optimizer=opt,loss='categorical_crossentropy',metrics=['accuracy'])
model.summary()
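Task 5: Visualize Model Architecture (optional):
This is where the plot_model, SVG, and Image imports from Task 1 come in; a minimal sketch (plot_model also needs the pydot and graphviz packages installed):

plot_model(model,to_file='model.png',show_shapes=True,show_layer_names=True)
Image('model.png')   # display the rendered architecture diagram in the notebook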
Task 6: Train and Evaluate Model:
#from livelossplot import PlotLossesTensorFlowKeras   # optional: live loss plots during training
epochs=15
steps_per_epoch=train_generator.n//train_generator.batch_size
validation_steps=validation_generator.n//validation_generator.batch_size
# save the best weights (by validation accuracy) and shrink the learning rate when validation loss plateaus
checkpoint=ModelCheckpoint("model_weight.h5",monitor='val_accuracy',
                           save_weights_only=True,mode='max',verbose=1)
reduce_lr=ReduceLROnPlateau(monitor='val_loss',factor=0.1,patience=2,min_lr=0.00001,mode='auto')
callbacks=[checkpoint,reduce_lr]
history=model.fit(
    x=train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_steps,
    callbacks=callbacks
)
model.save('my_expression.h5')
Task 7: Represent Model as JSON String:
model_json=model.to_json()   # the JSON stores the architecture only; the weights live in model_weight.h5
with open("model.json","w") as json_file:
    json_file.write(model_json)
Now create a folder on the desktop to hold all the files we are going to create; we named it "facial expression recognition".
Here is a preview of its contents:
In it, 'train' and 'test' are the dataset folders we used to train and evaluate the model.
'Videos' is a folder containing a sample video.
'haarcascade_frontalface_default' is the Haar cascade face-detection model.
'model.json' is the model architecture we saved in Task 7.
The files "camera", "main", and "model" are the Python files we created.
The "model" file reads the model we created and defines the list of emotions to display; a sketch follows the download link below.
You can download it from here:
https://goeduhub.com/?qa=blob&qa_blobid=6477846344790526406
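In case the link is unavailable, here is a minimal sketch of what such a file might contain, assuming the model.json and model_weight.h5 filenames from Tasks 6 and 7, with class labels in the alphabetical order flow_from_directory assigns:

from tensorflow.keras.models import model_from_json
import numpy as np

class FacialExpressionModel(object):
    # FER2013 classes, alphabetical (matching flow_from_directory's label order)
    EMOTIONS_LIST=["Angry","Disgust","Fear","Happy","Neutral","Sad","Surprise"]

    def __init__(self,model_json_file,model_weights_file):
        # rebuild the architecture from JSON, then load the trained weights
        with open(model_json_file,"r") as json_file:
            self.loaded_model=model_from_json(json_file.read())
        self.loaded_model.load_weights(model_weights_file)

    def predict_emotion(self,img):
        # img: a 1x48x48x1 grayscale batch, as prepared by the camera file
        preds=self.loaded_model.predict(img)
        return FacialExpressionModel.EMOTIONS_LIST[np.argmax(preds)]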
File name "camera" is basically for the opencv part , in this we are just reading the file or we are having the live web cam started .
you can download it from here:
https://goeduhub.com/?qa=blob&qa_blobid=6102762457410011874
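A hedged sketch of the camera file (the downloadable version may differ): open the webcam or a video file, detect faces with the Haar cascade, classify each 48x48 face crop, and return an annotated JPEG frame for streaming:

import cv2
import numpy as np
from model import FacialExpressionModel   # the class sketched above

facec=cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
model=FacialExpressionModel("model.json","model_weight.h5")
font=cv2.FONT_HERSHEY_SIMPLEX

class VideoCamera(object):
    def __init__(self):
        self.video=cv2.VideoCapture(0)   # 0 = live webcam; pass a file path for a sample video

    def __del__(self):
        self.video.release()

    def get_frame(self):
        _,fr=self.video.read()
        gray=cv2.cvtColor(fr,cv2.COLOR_BGR2GRAY)
        faces=facec.detectMultiScale(gray,1.3,5)
        for (x,y,w,h) in faces:
            fc=gray[y:y+h,x:x+w]
            roi=cv2.resize(fc,(48,48))   # match the model's 48x48 grayscale input
            pred=model.predict_emotion(roi[np.newaxis,:,:,np.newaxis])
            cv2.putText(fr,pred,(x,y),font,1,(255,255,0),2)
            cv2.rectangle(fr,(x,y),(x+w,y+h),(255,0,0),2)
        _,jpeg=cv2.imencode('.jpg',fr)   # encode the annotated frame for the Flask stream
        return jpeg.tobytes()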
File name "main" is to create a flask aap :
you can download it from here :
https://goeduhub.com/?qa=blob&qa_blobid=2723678433170150040
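A minimal sketch of the Flask app, assuming the VideoCamera class above and a hypothetical templates/index.html page whose img tag points at /video_feed:

from flask import Flask, render_template, Response
from camera import VideoCamera

app=Flask(__name__)

def gen(camera):
    # serve an MJPEG stream: one JPEG frame per multipart chunk
    while True:
        frame=camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n'+frame+b'\r\n\r\n')

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__=='__main__':
    app.run(debug=True)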
Now, after all this work, open the terminal (cmd). First run the "model" file, then the "camera" file, then the "main" file; at the end you will get a localhost port number. Open it in your browser, and that's all.
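Assuming the three files were saved as model.py, camera.py, and main.py, the terminal session looks roughly like this (Flask serves on port 5000 by default):

python model.py
python camera.py
python main.py
# then open http://127.0.0.1:5000/ in your browser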
THIS PROJECT IS CREATED BY:
- SURENDRA KUMAR
- TOSHIK KUMAWAT
- MAHIPAL PAREEK
AND GUIDED BY:
- SHARDA GODARA
FOR ANY QUERY RELATED TO THIS PROJECT, PLEASE TYPE IT IN THE COMMENT SECTION. WE WILL BE GLAD TO HELP YOU.