Age and Gender prediction
Introduction
Age and gender are two key facial attributes that play a foundational role in social interactions. This makes age and gender estimation from a single face image an important task for intelligent applications such as access control, human-computer interaction, law enforcement, marketing intelligence and visual surveillance.
In this part of the project we did not train a model on a dataset ourselves; instead we used pre-trained Caffe models, "age_net.caffemodel" and "gender_net.caffemodel".
A Caffe model has two associated files:
1. prototxt — The .prototxt file describes the structure (the 'protocol') of the network in Protocol Buffer text format; the protobuf compiler can turn such definitions into Python, C++ or Java code to serialize and deserialize data with that structure. The definition of the CNN goes in here: this file defines the layers of the neural network and each layer's inputs, outputs and functionality.
2. caffemodel — The standard, compact model format. Running caffe train produces a binary .caffemodel file that contains the weights of the trained neural network; it can be deployed against new data from the command line or through the Python or MATLAB interfaces.
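To see how the two files fit together in practice, here is a minimal sketch (not part of the original code) of loading a prototxt/caffemodel pair with OpenCV's DNN module, using the age model files this project works with:

import cv2

# The .prototxt describes the network architecture, the .caffemodel holds
# the trained weights; OpenCV pairs them into a single cv2.dnn.Net object.
ageNet = cv2.dnn.readNetFromCaffe("age_deploy.prototxt", "age_net.caffemodel")
print(type(ageNet))  # <class 'cv2.dnn.Net'>

Later in the project the more general cv2.dnn.readNet is used instead; both calls produce the same kind of network object.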
This is how the project works. Let's get started with the code; a working knowledge of Python is assumed.
Task 1 : Import libraries :
import cv2
import math
import argparse
You are probably already familiar with the "cv2" and "math" libraries, so let us tell you about "argparse". argparse is the recommended command-line parsing module in the Python standard library; it is what you use to get command-line arguments into your program.
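As a minimal sketch of how it is used (the help text here is an illustrative addition; the actual parser used in this project is shown in Task 3):

import argparse

# Declare one optional argument: the path to an input image.
parser = argparse.ArgumentParser(description="Age and gender prediction")
parser.add_argument('--image', help="Path to the input image; the webcam is used if omitted")
args = parser.parse_args()
print(args.image)  # None when --image is not supplied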
Task 2 : Face detection :
In this step we detect faces in a frame and draw bounding boxes around them with the help of the "OpenCV" library.
def highlightFace(net, frame, conf_threshold=0.7):
    frameOpencvDnn = frame.copy()
    frameHeight = frameOpencvDnn.shape[0]
    frameWidth = frameOpencvDnn.shape[1]
    # Preprocess the frame into the 300x300 blob expected by the face detector
    blob = cv2.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300), [104, 117, 123], True, False)
    net.setInput(blob)
    detections = net.forward()
    faceBoxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            # Scale the normalized box coordinates back to the frame size
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)
            faceBoxes.append([x1, y1, x2, y2])
            cv2.rectangle(frameOpencvDnn, (x1, y1), (x2, y2), (0, 255, 0), int(round(frameHeight / 150)), 8)
    return frameOpencvDnn, faceBoxes
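If you would like to try highlightFace on its own before wiring up the full loop, a small standalone sketch like the one below could be used. It is only an illustration: "test.jpg" is a placeholder file name, and faceNet is the face-detection network loaded in Task 4.

# Hypothetical standalone test of highlightFace (assumes faceNet is loaded,
# see Task 4, and that a file named test.jpg exists in the working directory).
img = cv2.imread("test.jpg")
if img is None:
    raise SystemExit("test.jpg not found")
annotated, boxes = highlightFace(faceNet, img)
print(f"Detected {len(boxes)} face(s): {boxes}")
cv2.imshow("Faces", annotated)
cv2.waitKey(0)
cv2.destroyAllWindows()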
Task 3 : Setting up the pre-trained models :
We are not going to train a model ourselves; since we use pre-trained Caffe models, we only declare the model files and the constants needed to read them.
parser=argparse.ArgumentParser()
parser.add_argument('--image')
args=parser.parse_args()
faceProto="opencv_face_detector.pbtxt"
faceModel="opencv_face_detector_uint8.pb"
ageProto="age_deploy.prototxt"
ageModel="age_net.caffemodel"
genderProto="gender_deploy.prototxt"
genderModel="gender_net.caffemodel"
MODEL_MEAN_VALUES=(78.4263377603, 87.7689143744, 114.895847746)  # mean values subtracted from the face blob before inference
ageList=['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']  # age buckets the age model predicts
genderList=['Male','Female']  # gender labels the gender model predicts
All of these files ("opencv_face_detector.pbtxt", "opencv_face_detector_uint8.pb", "age_deploy.prototxt", "age_net.caffemodel", "gender_deploy.prototxt" and "gender_net.caffemodel") can be downloaded from the link given below.
Task 4 : Loading the models :
Here we read the face, age and gender networks into OpenCV's DNN module with cv2.dnn.readNet.
faceNet=cv2.dnn.readNet(faceModel,faceProto)
ageNet=cv2.dnn.readNet(ageModel,ageProto)
genderNet=cv2.dnn.readNet(genderModel,genderProto)
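cv2.dnn.readNet infers the framework from the file extensions: TensorFlow for the .pb/.pbtxt face detector and Caffe for the age and gender networks. As an optional tweak that is not part of the original project, you can also ask the DNN module for a specific backend and target, assuming your OpenCV build supports them:

# Optional: explicitly select OpenCV's own backend on the CPU (the default).
for net in (faceNet, ageNet, genderNet):
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    # With an OpenCV 4.2+ build compiled with CUDA you could instead use:
    # net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    # net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)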
Task 5 : Testing / Prediction :
In this step we test the models, i.e. predict the age and gender of each detected face from an image or the webcam.
video = cv2.VideoCapture(args.image if args.image else 0)
padding = 20
while cv2.waitKey(1) < 0:
    hasFrame, frame = video.read()
    if not hasFrame:
        cv2.waitKey()
        break
    resultImg, faceBoxes = highlightFace(faceNet, frame)
    if not faceBoxes:
        print("No face detected")
    for faceBox in faceBoxes:
        # Crop the detected face with some padding, clamped to the frame bounds
        face = frame[max(0, faceBox[1]-padding):min(faceBox[3]+padding, frame.shape[0]-1),
                     max(0, faceBox[0]-padding):min(faceBox[2]+padding, frame.shape[1]-1)]
        blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)
        # gender prediction
        genderNet.setInput(blob)
        genderPreds = genderNet.forward()
        gender = genderList[genderPreds[0].argmax()]
        print(f'Gender: {gender}')
        # age prediction
        ageNet.setInput(blob)
        agePreds = ageNet.forward()
        age = ageList[agePreds[0].argmax()]
        print(f'Age: {age[1:-1]} years')
        # displaying the result
        cv2.putText(resultImg, f'{gender}, {age}', (faceBox[0], faceBox[1]-10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)
        cv2.imshow("Detecting age and gender", resultImg)
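The loop above keeps reading frames until a key is pressed or the input ends. As a small addition that is not in the original snippet, it is good practice to release the capture device and close the window once the loop finishes:

# Cleanup after the loop: release the webcam/video source and close the window.
video.release()
cv2.destroyAllWindows()

Assuming the script is saved as, say, gad.py, it can then be run with python gad.py --image <path-to-image>, or without --image to use the webcam.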
You can download the files from here:
click here to download
As we have seen in this article, in just a few lines of code we have built an age and gender detection model. From here you can also incorporate object detection into the same pipeline and create a fully functional application.
Hopefully you found this article a good read and useful in your quest to recognize a person's age and gender. Let us know your doubts in the comment section.
THIS PROJECT IS CREATED BY:
- SURENDRA KUMAR
- TOSHIK KUMAWAT
- MAHIPAL PAREEK
AND GUIDED BY:
- SHARDA GODARA
FOR ANY QUERY RELATED TO THIS PROJECT, PLEASE TYPE IT IN THE COMMENT SECTION. WE WILL BE GLAD TO HELP YOU.