In view of the COVID-19 situation, several governments and industries now want to verify that social distancing is being followed. This is a human detection system that checks whether social distancing is maintained and raises an alert when it is not.



1 Answer

Best answer

Social Distance Alert System

What is Social Distancing?

Social distancing is the practice of people staying physically apart from each other. For example, in the current COVID-19 situation it is being followed widely in order to prevent the spread of the disease.

Why do we need Social Distancing?

To slow down the spread of the coronavirus we need preventive measures. If people do not come in contact with each other, the chance of the disease being transmitted drops significantly.

Social Distancing Alert System 

  1. This system detects humans in a video and finds the distance between them.

  2. If two people are walking too close together, it draws a red line between them.

  3. It also counts the number of humans in each frame and raises an alert when the count in a frame exceeds a limit.

  4. It saves the frames into two folders.

  5. In the first folder, frames from the video are saved.

  6. In the second folder, only those frames are saved in which the number of offending pairs (red lines) exceeds a particular value.

CODE


1. Loading model

  • We perform human detection with YOLO. For its tutorial, click YOLO tutorial.

  • First, we import the required libraries: cv2, numpy, and math.

  • Then the yolov3.weights file and yolov3.cfg (configuration file) are loaded into the network 'net'. Here 'dnn' stands for deep neural network.

  • We then define an empty list "classes", into which all the class names like 'person', 'car', 'bicycle' etc. are read from the coco.names file.

  • From net, we load the layer names.

  • output_layers holds the YOLO output layers, from which we get the detection of every object (a quick sanity check is shown after the code below).

2. Load Video 

Then we load the video on which we want to check social distancing.

# import the necessary packages
import numpy as np
import cv2
import math
 

# Load Yolo
net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")
#classes=['Person','Car']
classes = []
with open("coco.names", "r") as f:
    classes = [line.strip() for line in f.readlines()]
   
layer_names = net.getLayerNames()
# flatten() handles both the old (Nx1) and newer (1-D) return shapes of getUnconnectedOutLayers
output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers().flatten()]
 

# Load Video

cap = cv2.VideoCapture('aa.avi')

count = 0       # frames seen since the last save
count_pic = 0   # index for frames saved in the main (dataset) folder
count_off = 0   # index for frames saved in the offenders folder
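
As a quick sanity check after loading, you can print the first few class names and the output layer names. (The exact layer names depend on your yolov3.cfg; the values shown in the comments are typical for stock YOLOv3.)

print(classes[:3])     # ['person', 'bicycle', 'car'] from coco.names
print(output_layers)   # typically ['yolo_82', 'yolo_94', 'yolo_106'] for YOLOv3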


3. Capturing Frames

  • The video is captured frame by frame; hence, all the operations are performed on images.

  • We read each frame in which objects need to be detected using cap.read(), which returns a flag ret and the frame img.

  • Since a frame may be too large, we can resize it with the cv2.resize function. The first argument is the image; 'None' means we are not specifying an absolute size; fx and fy set to 0.4 scale the width and height to 40% (this step is shown commented out in the code below).

  • The height, width, and number of channels are read from the frame's shape.

4. Blob Conversion

  • We can't give the image directly to the algorithm. We first have to convert it into a blob, the preprocessed input tensor the network expects: pixels are scaled by 0.00392 (≈ 1/255), the frame is resized to the standard (416, 416) input size, and the BGR channels are swapped to RGB. A quick check of the blob's shape follows below.

  • Then we pass the blob to the network with net.setInput().

  • Then, finally, we run net.forward(output_layers) and collect the network's output in 'outs' for the final result.
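
Before looking at the full loop, here is a minimal, self-contained check of what blobFromImage produces (a dummy black frame stands in for a real video frame; note 0.00392 ≈ 1/255):

import numpy as np
import cv2

# Hypothetical frame: a black 480x640 BGR image stands in for a real video frame
img = np.zeros((480, 640, 3), dtype=np.uint8)

blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), (0, 0, 0), swapRB=True, crop=False)
print(blob.shape)  # (1, 3, 416, 416): a batch of one 3-channel 416x416 tensor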

while cap.isOpened():
    # Capture frame-by-frame
    ret, img = cap.read()
    if not ret:  # stop once the video ends
        break

    # Optional: downscale large frames to 40%, as described in step 3
    # img = cv2.resize(img, None, fx=0.4, fy=0.4)

    height, width, channels = img.shape

    # Detecting objects
    blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)

    net.setInput(blob)
    outs = net.forward(output_layers)

    # Collect detection information for this frame
    class_ids = []
    confidences = []
    boxes = []


5. Detecting Objects

  • We create three empty lists: class_ids, confidences, and boxes.

  • First, the scores are extracted from each detection; elements 5 onward of the detection vector are the per-class scores.

  • class_ids will contain the class (object) IDs of all detected objects; each class_id is chosen with np.argmax, i.e., the index of the highest score.

  • confidence stores the score for that class_id.

  • If confidence > 0.5 (50%), the object is taken as detected. We then compute the center point (center_x, center_y) and the size (w, h), from which the top-left corner (x, y) of the rectangle is derived.

  • Only if the confidence is greater than the confidence threshold are the box, confidence, and class_id appended to the lists. Each box contains the (x, y, w, h) of a rectangle.

  • Here 0.5 is the confidence threshold, but you can change it according to your project's requirements. Only high-scoring regions of the image are considered detections.

    for out in outs:
        for detection in out:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:
                # Object detected
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)

                # Rectangle coordinates
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)

                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)


6. Non-Max Suppression

  • Non-max suppression removes multiple overlapping boxes around the same object; cv2.dnn.NMSBoxes returns the indexes of the boxes to keep (a toy example is sketched below).

  • From boxes, we extract the x, y, w, h coordinates of each kept object and label it with its class_id.
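
Here is a minimal, self-contained sketch of what NMSBoxes does, with toy numbers that are not from the project: two heavily overlapping boxes and one separate box go in, and the lower-scoring duplicate is suppressed.

import numpy as np
import cv2

boxes = [[100, 100, 50, 80], [105, 102, 50, 80], [300, 200, 40, 60]]  # (x, y, w, h)
confidences = [0.9, 0.6, 0.8]

# Keep boxes scoring above 0.5; suppress overlaps with IoU above 0.4
indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
print(np.array(indexes).flatten())  # [0 2]: the duplicate box 1 is removed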

7. Detecting Humans 

  • Since the coordinates of the detected humans are stored in boxes, we extract them in a loop and draw rectangles around them.

  • x, y, w, h hold the coordinates of each box.

  • Then we extract the label and only draw a rectangle when the label is "person".

  • We initialize a count_ppl variable to count the number of people in the frame.

  • LIST - We keep two lists, l and lf. Into l we append the x coordinate of a detected human and then the y coordinate. Then we append l into lf, the final list, and reset l to collect the coordinates of the next human. We end up with [x, y] pairs inside the final list.

  • In cv2.rectangle, using (x, y, w, h), we draw a rectangle around each detected object. (x, y) is the top-left corner and (w, h) are the width and height, so (x + w, y + h) gives the bottom-right corner. (0, 255, 0) makes the rectangle green, and the '2' is the line thickness.

  • Finally, we label the detected object with its class name via the cv2.putText() function; if the object detected is a human, it is labelled 'person'.

    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)

    font = cv2.FONT_HERSHEY_PLAIN

    count_ppl = 0
    l = []
    lf = []

    for i in range(len(boxes)):
        if i in indexes:
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            if label == 'person':
                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
                cv2.putText(img, label, (x, y - 5), font, 1, (0, 255, 0), 1)  # label the box
                l = []
                l.append(x)
                l.append(y)
                lf.append(l)
                count_ppl += 1

            


8. Calculating distances

  • We initialize a variable off to count the number of offenders, i.e., pairs of people standing too close; it is later used when saving offender frames.

  • We run two nested loops over the final list, which contains the coordinates of the detected humans.

  • Hence, we calculate the distance between every pair of humans and store it in the variable d.

  • If d is less than 60 (pixels), we consider that they are walking too close; a worked example with toy numbers follows the code below.

  • Then we draw a red line between the humans who are too close, using the cv2.line function (the +15/+35 offsets shift the line endpoints from the boxes' top-left corners roughly toward the bodies).

  • And we increment off, so after the loops we have the number of offending pairs in the current frame.

    off = 0
    for i in range(len(lf)):
        for j in range(i + 1, len(lf)):
            # Euclidean distance between the two detected people (in pixels)
            d = math.sqrt(((lf[j][1] - lf[i][1]) ** 2) + ((lf[j][0] - lf[i][0]) ** 2))
            if d < 60:
                # Too close: draw a red line between them
                img = cv2.line(img, (lf[i][0] + 15, lf[i][1] + 35), (lf[j][0] + 15, lf[j][1] + 35), (0, 0, 255), 2)
                off += 1
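
A worked example of the distance check, with purely illustrative coordinates:

import math

p1, p2 = (100, 200), (130, 240)  # top-left corners of two detected people
d = math.sqrt((p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2)
print(d)  # 50.0 -> below the 60-pixel threshold, so a red line would be drawn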


9. Working of Frames

Since the video captures many frames every second, saving and working with each frame would increase the load. Therefore, we save only every 5th frame.

10. Saving frames in folders

  • We save the frames in two folders: the main (dataset) folder and the offenders folder.

  • In the main folder, every 5th frame is saved.

  • In the offenders folder, only those frames are saved in which the number of offending pairs (red lines) exceeds a particular limit.

11. Generating Alert

When the number of people in a frame exceeds a particular limit, we build an alert message in the variable 'a' and print it. In this case we have taken the limit as 41 people.

    count += 1
    if count >= 5:
        print("FRAME " + str(count_pic) + "    People Count : " + str(count_ppl) + "   RL : " + str(off))
        cv2.imwrite('dataset\\img' + str(count_pic) + '.png', img)  # save every 5th frame in the main folder
        count_pic += 1
        if off > 1:
            cv2.imwrite('offenders\\img' + str(count_off) + '.png', img)  # save frames with too many red lines
            count_off += 1

    if count_ppl >= 41 and count >= 5:
        a = "HIGH ALERT " + str(count_ppl) + " people in your area!"
        print(a)

    if count >= 5:
        count = 0
        off = 0

DATASET FOLDER - Contains every 5th frame

OFFENDERS FOLDER - Contains only those frames in which the number of red lines exceeds a particular limit.


12. Displaying each frame in video form

  • Finally, we display the output.

  • Using cv2.imshow(), we display the image. The first argument is the window name and the second is the image in which the objects were detected.

  • If we press 'q', the video stops.

13. Releasing Windows

  • waitKey() holds the output on screen; without it, each frame would be displayed for an interval too short to be visible.

  • cv2.destroyAllWindows() then closes all open windows. Without it the output can hang, and we would have to restart the kernel every time we run the program.

    cv2.imshow('frame', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to stop
        break

cap.release()
cv2.destroyAllWindows()
cv2.waitKey(1)


OUTPUT 

FRAME 0    People Count : 38   RL : 13
FRAME 1    People Count : 38   RL : 13
FRAME 2    People Count : 38   RL : 14
FRAME 3    People Count : 38   RL : 14
FRAME 4    People Count : 36   RL : 13
FRAME 5    People Count : 38   RL : 16
FRAME 6    People Count : 37   RL : 13
FRAME 7    People Count : 38   RL : 14
FRAME 8    People Count : 40   RL : 17
FRAME 9    People Count : 43   RL : 21
HIGH ALERT 43 people in your area!
FRAME 10    People Count : 40   RL : 13
FRAME 11    People Count : 40   RL : 15

Here, each line of output shows the people count in that frame along with the number of red lines (RL) formed in the frame. When the people count in a frame reaches 41 or more, an alert message is generated.

