Social distancing means staying away from crowds or gatherings of 10 or more people, with the aim of minimizing the transmission of infectious disease and slowing its spread.

This article shows how to detect the humans in a frame with the YOLOv3 convolutional neural network, calculate the distance between every pair of detected people, and visualize the result: if two people are too close to each other, their boxes are drawn in red; otherwise in yellow or green.

Social Distancing Detector using Python | Deep learning | OpenCV

   In this article, we will first learn about the COVID-19 disease:

    - how to protect yourself from COVID-19.

    - how the coronavirus infection spreads.

Coronavirus disease (novel COVID-19) is an infection caused by a large group of viruses that consist of a core of genetic material surrounded by an envelope with protein spikes, which gives the virus the appearance of a crown; the Latin word for crown is corona. The 2019 novel coronavirus was first identified in the city of Wuhan in China, where it initially occurred in a group of people with pneumonia.

TRANSMISSION:

In general, respiratory viruses are transmitted through droplets created when an infected person coughs or sneezes, or through objects that have been contaminated with the virus. The people most at risk of infection from the novel coronavirus are those in close contact with animals.


REQUIREMENTS

Set up the YOLOv3 prerequisites (read the YOLO article for details):

      - yolov3.cfg (configuration file)

      - yolov3.weights (trained model to detect objects)

      - coco.names (names of the 80 object classes)

You can find all three files on the official YOLO website. The configuration file and the class-names file are small text files; the pre-trained weights file must be downloaded separately.
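Once the three files are downloaded, a quick sanity check (a minimal sketch, assuming the files sit in your working directory) confirms that OpenCV can load them:

# minimal sketch: verify the YOLO files load before going further
import cv2
net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")
with open("coco.names") as f:
    names = [line.strip() for line in f]
print(len(names))  # 80 COCO class names, starting with 'person'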

2 Answers

Best answer

Social Distancing Detector using Python | Deep learning | OpenCV Part 1


What is Social Distancing?

Figure: Social distancing is important in times of epidemics and pandemics to prevent the spread of disease.

Social distancing is the practice of keeping physical distance between people.

Coding and Implementation     

First, we import all the required libraries. There are four sections we need to code in order to build our social distancing analyzer.

Then we load the yolov3.weights file and yolov3.cfg (configuration file) into the network object "net"; here "dnn" stands for deep neural network.

We then define a list "classes", which is filled with the class names (including "person") read from the coco.names file.

# import the necessary packages

import numpy as np

import cv2

import math

import time

# Load Yolo

net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")

classes = []

with open("coco.names", "r") as f:

    classes = [line.strip() for line in f.readlines()]
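For reference, the first entries of coco.names look like this; the "person" class we filter on later is the very first one:

print(classes[:3])  # ['person', 'bicycle', 'car']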

From the net, we load the layer names. The output_layers are the layers at which the network emits its detections. Then a random color is assigned to each class: if we have 80 classes, 80 random colors are generated. The '3' is the number of channels (for RGB).

layer_names = net.getLayerNames()

#.flatten() keeps this working across OpenCV versions, where getUnconnectedOutLayers() returns either an Nx1 array or a flat one

output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers().flatten()]

#one random color per class (the boxes below are colored by risk status instead)

colors = np.random.uniform(0, 255, size=(len(classes),3))
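On the standard yolov3.cfg, printing the output layers is a useful check that the configuration loaded correctly; it typically shows the three YOLO detection heads:

print(output_layers)  # typically ['yolo_82', 'yolo_94', 'yolo_106'] for YOLOv3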

Now, we open the video whose frames we want to analyze, using cv2.VideoCapture from OpenCV.

Pass a file name to read from a video file, 0 for the default webcam, or a stream URL for an IP camera.

We also prepare a cv2.VideoWriter so the annotated frames can be saved. The FourCC code selects the codec ('MJPG' here), followed by the frame rate (20.0) and the frame size, which must match every frame we write.

cap = cv2.VideoCapture('video.mp4')

#cap = cv2.VideoCapture(0)

#cap = cv2.VideoCapture('http://192.168.43.1:8080//video')

#FourCC code identifies the codec for the video writer

fourcc = cv2.VideoWriter_fourcc(*'MJPG')

#frame size is taken from the capture, since no frame has been read yet

output = cv2.VideoWriter('output4.mp4', fourcc, 20.0, (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))

Select which camera to use: for live video from the built-in webcam use 0; for an additional camera use 1. If you want to use CCTV or the IP Webcam app, put the stream URL in quotes.

#Euclidean distance between two center points

def E_dist(p1, p2):

    return ((p1[0] - p2[0]) ** 2 +  (p1[1] - p2[1]) ** 2) ** 0.5

def isclose(p1, p2):

    c_d = E_dist(p1, p2)

    #crude perspective calibration: people lower in the frame (larger y)

    #appear bigger, so the pixel thresholds scale with the average y

    calib = (p1[1] + p2[1]) / 2

    if 0 < c_d < 0.15 * calib:

        return 1    #too close

    elif 0 < c_d < 0.2 * calib:

        return 2    #medium distance

    else:

        return 0    #safe

height,width=(None,None)

q=0

The E_dist function takes two points and returns the Euclidean distance between them: the square root of the sum of the squared coordinate differences.

The isclose function then classifies a pair of detections. calib averages the two y-coordinates as a rough perspective calibration, so the pixel thresholds scale with how far down the frame the people appear.

If p1 and p2 are too close, it returns 1; if the distance is medium, it returns 2; otherwise 0.
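As a quick sanity check on these thresholds, consider two hypothetical center points at the same image height (y = 400): the cutoffs work out to 0.15 × 400 = 60 px for "too close" and 0.2 × 400 = 80 px for "medium":

print(isclose((100, 400), (150, 400)))  # 1: 50 px apart, too close
print(isclose((100, 400), (175, 400)))  # 2: 75 px apart, medium distance
print(isclose((100, 400), (300, 400)))  # 0: 200 px apart, safe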

while(cap.isOpened()):

    # Capture frame-by-frame

    ret, img = cap.read()    

    print(ret)

    if not ret:

        break

    if width is None or height is None:

        height,width=img.shape[:2]

        q=width

 #height, width, channels = img.shape

    img = img[0:height, 0:q]

    height,width=img.shape[:2]

    # Detecting objects 0.00392

    blob = cv2.dnn.blobFromImage(img,0.00392, (416, 416), (0,0,0), True, crop=False)

    net.setInput(blob)

    start = time.time()

    outs = net.forward(output_layers)

    end=time.time()

We can't feed the image directly to the network; we first have to convert it into a blob.

A blob is the preprocessed input tensor: the scale factor 0.00392 (≈ 1/255) rescales pixel values to [0, 1], (416, 416) is the standard YOLOv3 input size, and the True flag swaps the R and B channels.

Then we pass the blob to the network with net.setInput, and finally run net.forward on the output layers, collecting the detections in 'outs'. The short sketch below makes the blob shape concrete.
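The blob is simply a 4-D tensor in N x C x H x W order; inspecting its shape (a small sketch using the same parameters as above) shows exactly what the network receives:

blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
print(blob.shape)  # (1, 3, 416, 416): one image, 3 channels, 416x416 pixels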

    # Showing information on the screen

    class_ids = []

    confidences = []

    boxes = []

    for out in outs:

        for detection in out:

            scores = detection[5:]

            class_id = np.argmax(scores)

            confidence = scores[class_id]

            if classes[class_id]=="person":

                #0.5 is the threshold for confidence

                if confidence > 0.5:

                    # Object detected

                    #Purpose : Converts center coordinates to rectangle coordinates

                    # x, y = midpoint of box

                    center_x = int(detection[0] * width)

                    center_y = int(detection[1] * height)           

                    # w, h = width, height of the box

                    w = int(detection[2] * width)

                    h = int(detection[3] * height)

                    # Rectangle coordinates

                    x = int(center_x - w / 2)

                    y = int(center_y - h / 2)

                    boxes.append([x, y, w, h])

                    confidences.append(float(confidence))

                    class_ids.append(class_id)

We create three empty lists: class_ids, confidences, and boxes.

For each detection, scores holds the per-class scores (entries 5 onward of the detection vector), class_id is the index of the highest score, and confidence is that score.

If the detected class is "person" and confidence > 0.5 (50%), the detection is kept. We compute the center point (center_x, center_y) and the width and height (w, h) of the box, and from them the top-left corner (x, y) of the rectangle.

Only when the confidence exceeds the threshold are class_id, confidence, and the box appended to their lists; each entry of boxes holds the (x, y, w, h) of one rectangle.

Here 0.5 is the confidence threshold, but you can change it to suit your project; only high-scoring regions of the image are kept as detections. The note below summarizes the layout of one detection vector.
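For orientation, each detection vector produced by a YOLOv3 model trained on COCO has 85 entries, which is why the code slices from index 5 onward:

# layout of one YOLOv3 detection vector (85 values for the 80 COCO classes):
#   detection[0:4]  -> box center x, center y, width, height (normalized 0-1)
#   detection[4]    -> objectness score
#   detection[5:]   -> one confidence score per class ('person' is index 0)
scores = detection[5:]  # the 80 per-class scores, as sliced in the loop above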

    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.5)

    #print(indexes)

    font = cv2.FONT_HERSHEY_SIMPLEX  

    if len(indexes)>0:       

        status=list()       

        idf = indexes.flatten()     

        close_pair = list()   

        s_close_pair = list()        

        center = list()        

        dist = list()        

        for i in idf:            

            (x, y) = (boxes[i][0], boxes[i][1])          

            (w, h) = (boxes[i][2], boxes[i][3])           

            center.append([int(x + w / 2), int(y + h / 2)])            

            status.append(0)            

        for i in range(len(center)):           

            for j in range(len(center)):               

                #compare the closeness of two values

                g=isclose(center[i], center[j])                

                if g ==1:                    

                    close_pair.append([center[i],center[j]])                

                    status[i] = 1               

                    status[j] = 1                 

                elif g == 2:               

                    s_close_pair.append([center[i], center[j]])                 

                    if status[i] != 1:                 

                        status[i] = 2               

                    if status[j] != 1:                        

                        status[j] = 2

        total_p = len(center)        

        low_risk_p = status.count(2)        

        high_risk_p = status.count(1)        

        safe_p = status.count(0)        

        kk = 0        

        for i in idf:            

            sub_img = img[10:170, 10:width - 10]            

            black_rect = np.ones(sub_img.shape, dtype=np.uint8)*0            

            res = cv2.addWeighted(sub_img, 0.77, black_rect,0.23, 1.0)

            img[10:170, 10:width - 10] = res           

Non-max suppression removes multiple overlapping boxes around the same object. cv2.dnn.NMSBoxes takes the boxes, their confidences, a score threshold, and an NMS (overlap) threshold, and returns the indices of the boxes to keep.

For each surviving index we read back the (x, y, w, h) of the box. (x, y) is the top-left corner and (w, h) are the width and height respectively, so (x + w, y + h) gives the bottom-right corner passed to cv2.rectangle. The '2' in the function is the line thickness of the rectangle.

Since we only keep "person" detections, each rectangle is colored not by class but by risk status, and cv2.putText() is used to draw the legend and the zone counters on the frame. A tiny standalone example of NMS follows before the listing continues.
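This hypothetical example shows what NMSBoxes does: two heavily overlapping boxes around the same person collapse to the one with the higher confidence (the exact shape of the returned index array varies between OpenCV versions):

boxes = [[100, 100, 50, 120], [105, 98, 52, 118], [400, 90, 48, 125]]
confidences = [0.9, 0.6, 0.8]
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.5)
print(keep)  # indices of the surviving boxes, e.g. [0, 2]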

            # adding text to the image
            #(image, text, org(x, y), font, fontScale, color, thickness)

            cv2.putText(img, "Social Distancing Detection - During COVID19 ", (255, 45),font, 1, (255, 255, 255), 2)

            cv2.putText(img, "GOEDUHUB TECHNOLOGY", (450, 200),font, 1, (0, 255, 255), 2)          

            #image = cv2.rectangle(image, start_point, end_point, color, thickness)

            cv2.rectangle(img, (20, 60), (625, 160), (170, 170, 170), 2)         

            cv2.putText(img, "Connecting lines shows closeness among people. ", (45, 80),font, 0.6, (255, 255, 0), 1)            

            cv2.putText(img, "YELLOW: CLOSE", (45, 110),font, 0.5, (0, 255, 255), 1)            

            cv2.putText(img, "RED: VERY CLOSE", (45, 130),font, 0.5, (0, 0, 255), 1)

            cv2.rectangle(img, (675, 60), (width -20, 160), (170, 170, 170), 2)            

            cv2.putText(img, "Bounding box shows the level of risk to the person.",(685, 80),font, 0.6, (255, 255, 0), 1)                        

            cv2.putText(img, "DARK RED: HIGH RISK", (685, 110),font, 0.5, (0, 0, 150), 1)            

            cv2.putText(img, "ORANGE: LOW RISK", (685, 130),font, 0.5, (0, 120, 255), 1)

            cv2.putText(img, "GREEN: CONGRATULATIONS YOU ARE SAFE", (685, 150),font, 0.5, (0, 255, 0), 1)

            tot_str = "NUMBER OF PEOPLE: " + str(total_p)            

            high_str = "RED ZONE: " + str(high_risk_p)            

            low_str = "ORANGE ZONE: " + str(low_risk_p)            

            safe_str = "GREEN ZONE: " + str(safe_p)            

            #image ROI

            sub_img = img[height - 120:height-20, 0:500]

            #cv2.imshow("sub_img",sub_img)            

            black_rect = np.ones(sub_img.shape, dtype=np.uint8) * 0

            res = cv2.addWeighted(sub_img, 0.8, black_rect, 0.2, 1.0)

            img[height - 120:height-20, 0:500] = res

            cv2.putText(img, tot_str, (10, height - 75),font, 0.6, (255, 255, 255), 1)           

            cv2.putText(img, safe_str, (300, height - 75),font, 0.6, (0, 255, 0), 1)           

            cv2.putText(img, low_str, (10, height - 50),font, 0.6, (0, 120, 255), 1)          

            cv2.putText(img, high_str, (300, height - 50),font, 0.6, (0, 0, 150), 1)

            (x, y) = (boxes[i][0], boxes[i][1])           

            (w, h) = (boxes[i][2], boxes[i][3])                    

            #color of the rectangle according to closeness status

            if status[kk] == 1:              

                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 150), 2)

            elif status[kk] == 0:                

                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

            else:

                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 120, 255), 2)

            kk += 1

        for h in close_pair:            

            cv2.line(img, tuple(h[0]), tuple(h[1]), (0, 0, 255), 2)            

        for b in s_close_pair:            

            cv2.line(img, tuple(b[0]), tuple(b[1]), (0, 255, 255), 2) 

Social Distancing Detector using Python | Deep learning | OpenCV Part 2

For adding text to an image we need:

-the frame in which we want to put the data (text).

-the position coordinates of where to put it (the bottom-left corner where the text starts).

-the font type (check the cv2.putText() docs for supported fonts).

-the font scale (specifies the size of the font).

-the regular things like color, thickness, lineType, etc. For a better look, lineType = cv2.LINE_AA is recommended.

cv2.rectangle draws the outlined boxes behind the legend text. The "sub_img" region of the frame, blended with the black rectangle "black_rect" via cv2.addWeighted, produces the semi-transparent panels on which the results are displayed, as shown in the sketch below.

If two people are too close, their rectangles change color according to status (dark red for high risk, orange for low risk); a person far enough from everyone else gets a green rectangle. A line is also drawn between the center points of each close pair.
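The semi-transparent panel effect is worth isolating; this is a minimal sketch of the trick used above, blending a strip of the frame with an all-black array of the same shape:

# blend a strip of the frame with black: 77% frame + 23% black
panel = img[10:170, 10:width - 10]
black = np.zeros(panel.shape, dtype=np.uint8)
img[10:170, 10:width - 10] = cv2.addWeighted(panel, 0.77, black, 0.23, 1.0)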

    cv2.imshow('image',img)

    output.write(img)

    # press 'q' to release the window

    if cv2.waitKey(1) & 0xFF == ord('q'):

        break

cap.release()

output.release()

cv2.destroyAllWindows()

Set up the video writer, process the video frame-by-frame, and save the annotated result.

For images this is very simple: just use cv2.imwrite(). For video, a little more work is required.

The FourCC code is passed as cv2.VideoWriter_fourcc('M','J','P','G') or cv2.VideoWriter_fourcc(*'MJPG') for MJPG. Finally, we display the output.

Using cv2.imshow(), we display the image: the first argument is the window name and the second is the annotated frame.

waitKey() holds each frame on the screen; without it the frame would be displayed for too short an interval to be visible.

cv2.destroyAllWindows() then destroys all the open windows. Without it the output window hangs, and we would have to restart the kernel every time we run the program.
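To make the image-versus-video distinction concrete, here is a hedged sketch with hypothetical file names and a fixed frame size:

# for a single image, one call is enough:
cv2.imwrite("snapshot.png", img)
# for video, create one writer up front and feed it frame-by-frame;
# MJPG pairs most reliably with an .avi container:
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
writer = cv2.VideoWriter('out.avi', fourcc, 20.0, (640, 480))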

Here is the running code:

# import the necessary packages

import numpy as np

import cv2

import math

import time

# Load Yolo Model

net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")

classes = ["person"]

with open("coco.names", "r") as f:

    classes = [line.strip() for line in f.readlines()]

layer_names = net.getLayerNames()

#.flatten() keeps this working across OpenCV versions, where getUnconnectedOutLayers() returns either an Nx1 array or a flat one

output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers().flatten()]

#one random color per class (the boxes below are colored by risk status instead)

colors = np.random.uniform(0, 255, size=(len(classes),3))

#Start video or live camera

cap = cv2.VideoCapture('video.mp4')

#cap = cv2.VideoCapture(0)

#cap = cv2.VideoCapture('http://192.168.43.1:8080//video')

#create the MJPG writer once, before the loop; frame size comes from the capture

fourcc = cv2.VideoWriter_fourcc(*'MJPG')

output = cv2.VideoWriter('output4.mp4', fourcc, 20.0, (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))

#Euclidean distance between two center points

def E_dist(p1, p2):

    return ((p1[0] - p2[0]) ** 2 +  (p1[1] - p2[1]) ** 2) ** 0.5

def isclose(p1, p2):

    c_d = E_dist(p1, p2)

    #crude perspective calibration: people lower in the frame (larger y)

    #appear bigger, so the pixel thresholds scale with the average y

    calib = (p1[1] + p2[1]) / 2

    if 0 < c_d < 0.15 * calib:

        return 1    #too close

    elif 0 < c_d < 0.2 * calib:

        return 2    #medium distance

    else:

        return 0    #safe

    

height,width=(None,None)

q=0       

#Start working on video or camera

while(cap.isOpened()):

    # Capture frame-by-frame

    ret, img = cap.read()  

    print(ret)

    if not ret:

        break

    if width is None or height is None: 

        height,width=img.shape[:2]

        q=width

    #height, width, channels = img.shape

    img = img[0:height, 0:q]

    height,width=img.shape[:2]

    # Detecting objects 0.00392

    blob = cv2.dnn.blobFromImage(img,0.00392, (416, 416), (0,0,0), True, crop=False)

    net.setInput(blob)

    start = time.time()

    outs = net.forward(output_layers)

    end=time.time()

    # Showing information on the screen

    class_ids = []

    confidences = []

    boxes = []

    for out in outs:

        for detection in out:

            scores = detection[5:]

            class_id = np.argmax(scores)

            confidence = scores[class_id]

            #keep only people; 0.5 is the confidence threshold

            if classes[class_id] == "person" and confidence > 0.5:

                # Object detected

                #Purpose : Converts center coordinates to rectangle coordinates

                # x, y = midpoint of box

                center_x = int(detection[0] * width)

                center_y = int(detection[1] * height)

                 # w, h = width, height of the box

                w = int(detection[2] * width)

                h = int(detection[3] * height)

                # Rectangle coordinates

                x = int(center_x - w / 2)

                y = int(center_y - h / 2)

                boxes.append([x, y, w, h])

                confidences.append(float(confidence))

                class_ids.append(class_id)

    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.5)

    #print(indexes)

    font = cv2.FONT_HERSHEY_SIMPLEX    

    if len(indexes)>0:        

        status=list()        

        idf = indexes.flatten()        

        close_pair = list()        

        s_close_pair = list()        

        center = list()        

        dist = list()        

        for i in idf:            

            (x, y) = (boxes[i][0], boxes[i][1])            

            (w, h) = (boxes[i][2], boxes[i][3])            

            center.append([int(x + w / 2), int(y + h / 2)])            

            status.append(0)            

        for i in range(len(center)):            

            for j in range(len(center)):                

                #compare the closeness of two values

                g=isclose(center[i], center[j])                

                if g ==1:                    

                    close_pair.append([center[i],center[j]])                    

                    status[i] = 1                    

                    status[j] = 1                    

                elif g == 2:                    

                    s_close_pair.append([center[i], center[j]])                    

                    if status[i] != 1:                        

                        status[i] = 2                        

                    if status[j] != 1:                        

                        status[j] = 2

        total_p = len(center)        

        low_risk_p = status.count(2)        

        high_risk_p = status.count(1)        

        safe_p = status.count(0)        

        kk = 0        

        for i in idf:            

            sub_img = img[10:170, 10:width - 10]            

            black_rect = np.ones(sub_img.shape, dtype=np.uint8)*0            

            res = cv2.addWeighted(sub_img, 0.77, black_rect,0.23, 1.0)

            img[10:170, 10:width - 10] = res           

            

            # adding text to the image
            #(image, text, org(x, y), font, fontScale, color, thickness)

            cv2.putText(img, "Social Distancing Detection - During COVID19 ", (255, 45),font, 1, (255, 255, 255), 2)

            cv2.putText(img, "GOEDUHUB TECHNOLOGY", (450, 200),font, 1, (0, 255, 255), 2)            

            #image = cv2.rectangle(image, start_point, end_point, color, thickness)

            cv2.rectangle(img, (20, 60), (625, 160), (170, 170, 170), 2)            

            cv2.putText(img, "Connecting lines shows closeness among people. ", (45, 80),font, 0.6, (255, 255, 0), 1)            

            cv2.putText(img, "YELLOW: CLOSE", (45, 110),font, 0.5, (0, 255, 255), 1)            

            cv2.putText(img, "RED: VERY CLOSE", (45, 130),font, 0.5, (0, 0, 255), 1)

            cv2.rectangle(img, (675, 60), (width -20, 160), (170, 170, 170), 2)            

            cv2.putText(img, "Bounding box shows the level of risk to the person.",(685, 80),font, 0.6, (255, 255, 0), 1)           

            

            cv2.putText(img, "DARK RED: HIGH RISK", (685, 110),font, 0.5, (0, 0, 150), 1)      

            cv2.putText(img, "ORANGE: LOW RISK", (685, 130),font, 0.5, (0, 120, 255), 1)

            cv2.putText(img, "GREEN: CONGRATULATIONS YOU ARE SAFE", (685, 150),font, 0.5, (0, 255, 0), 1)

            tot_str = "NUMBER OF PEOPLE: " + str(total_p)            

            high_str = "RED ZONE: " + str(high_risk_p)            

            low_str = "ORANGE ZONE: " + str(low_risk_p)            

            safe_str = "GREEN ZONE: " + str(safe_p)            

            #image ROI

            sub_img = img[height - 120:height-20, 0:500]

            #cv2.imshow("sub_img",sub_img)            

            black_rect = np.ones(sub_img.shape, dtype=np.uint8) * 0

            res = cv2.addWeighted(sub_img, 0.8, black_rect, 0.2, 1.0)

            img[height - 120:height-20, 0:500] = res

            cv2.putText(img, tot_str, (10, height - 75),font, 0.6, (255, 255, 255), 1)            

            cv2.putText(img, safe_str, (300, height - 75),font, 0.6, (0, 255, 0), 1)            

            cv2.putText(img, low_str, (10, height - 50),font, 0.6, (0, 120, 255), 1)            

            cv2.putText(img, high_str, (300, height - 50),font, 0.6, (0, 0, 150), 1)

            (x, y) = (boxes[i][0], boxes[i][1])            

            (w, h) = (boxes[i][2], boxes[i][3])        

            #color of the rectangle according to closeness status

            if status[kk] == 1:                

                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 150), 2)

            elif status[kk] == 0:                

                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

            else:

                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 120, 255), 2)

            kk += 1

        for h in close_pair:            

            cv2.line(img, tuple(h[0]), tuple(h[1]), (0, 0, 255), 2)         

        for b in s_close_pair:

            cv2.line(img, tuple(b[0]), tuple(b[1]), (0, 255, 255), 2)

    cv2.imshow('image',img)

    if cv2.waitKey(1) & 0xFF == ord('q'):

        break

    output.write(img)

cap.release()

output.release()

cv2.destroyAllWindows()

# press 'q' to release the window.
