
How to Design Motion Detection Squid Game using ESP32 CAM & OpenCV

In this article, we will develop a Motion Detection project based on Squid Game using the ESP32-CAM & OpenCV. With the help of a Python program and the ESP32 Camera Module, we will build a Red Light, Green Light game inspired by the famous Netflix series "Squid Game". The ESP32-CAM captures frames of the player's movement. If any motion is detected in the live video stream while the red light is on, the player is eliminated and the game is over; while the green light is shown, the player is free to move.


Red Light = no motion allowed & Green Light = motion allowed

To get started, one should have sound knowledge of Python, image processing, embedded systems, and IoT. In this project, we will understand how to detect the motion of a person and what is required to run the Python program. First, we will test the whole Python script with a webcam or the internal camera of a laptop. Later, the motion detection program is implemented with the ESP32-CAM. So let's see how we can build a Motion Detection project like the Squid Game Red Light, Green Light.


You can go through the earlier projects where we did Gesture Recognition and also the Face Recognition project using ESP32 CAM & OpenCV.

Hardware Required:

  • ESP32-CAM Board: AI-Thinker ESP32 Camera Module
  • FTDI Module: USB-to-TTL Converter Module
  • USB Cable: 5V Mini-USB Data Cable
  • Jumper Wires: Female-to-Female Connectors

Motion Detection Squid Game using PC Camera

Before moving on to the ESP32-CAM, let's first develop the Motion Detection Squid Game using the PC's camera.

Installing Python & Required Libraries

In order for the live video stream to appear on our computer, we must develop a Python script that allows us to retrieve the video frames. The first step is to get Python installed. Download the most recent version of Python from python.org.

Open the command prompt once Python has been downloaded and installed. Now we must set up a few libraries. To do so, run the commands below one by one until all of the libraries are installed.

pip install numpy
pip install opencv-python
pip install mediapipe
pip install playsound==1.2.2

These libraries will be installed once you run the commands above in the command prompt. Now make a new folder for the project.
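If you want to confirm the installs succeeded before writing any game code, a quick check with Python's importlib can list anything still missing (a minimal sketch; note that the opencv-python package imports as cv2):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of module names that are not importable."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# The game script needs these modules (opencv-python installs as "cv2").
required = ["numpy", "cv2", "mediapipe", "playsound"]
print("Missing:", missing_packages(required) or "none")
```

If anything is reported missing, re-run the corresponding pip command before continuing.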

Test Code/Program

  • Create a new Python file in that folder and paste the code below into it.
import mediapipe as mp
import cv2
import numpy as np
import time
from playsound import playsound

cap = cv2.VideoCapture(0)
cPos = 0
startT = 0
endT = 0
userSum = 0
dur = 0
isAlive = 1

isInit = False
cStart, cEnd = 0, 0
isCinit = False
tempSum = 0
winner = 0
inFrame = 0
inFramecheck = False
thresh = 180

# Sum of the x positions of body landmarks 11-32, scaled to the frame width
def calc_sum(landmarkList):
    tsum = 0
    for i in range(11, 33):
        tsum += (landmarkList[i].x * 480)
    return tsum

# Vertical distance between the ankle (28) and hip (24), scaled to the frame height
def calc_dist(landmarkList):
    return (landmarkList[28].y * 640 - landmarkList[24].y * 640)

def isVisible(landmarkList):
    if (landmarkList[28].visibility > 0.7) and (landmarkList[24].visibility > 0.7):
        return True
    return False

mp_pose = mp.solutions.pose
pose = mp_pose.Pose()
drawing = mp.solutions.drawing_utils

im1 = cv2.imread('im1.jpg')   # green-light image
im2 = cv2.imread('im2.jpg')   # red-light image

currWindow = im1

while True:
    _, frm = cap.read()
    rgb = cv2.cvtColor(frm, cv2.COLOR_BGR2RGB)
    res = pose.process(rgb)
    frm = cv2.blur(frm, (5, 5))
    drawing.draw_landmarks(frm, res.pose_landmarks, mp_pose.POSE_CONNECTIONS)

    if not inFramecheck:
        try:
            if isVisible(res.pose_landmarks.landmark):
                inFrame = 1
                inFramecheck = True
            else:
                inFrame = 0
        except:
            print("You are not visible at all")

    if inFrame == 1:
        # Green-light phase: start a timer with a random duration
        if not isInit:
            playsound('greenLight.mp3')
            currWindow = im1
            startT = time.time()
            endT = startT
            dur = np.random.randint(1, 5)
            isInit = True

        if (endT - startT) <= dur:
            try:
                m = calc_dist(res.pose_landmarks.landmark)
                if m < thresh:
                    cPos += 1
                print("current progress is : ", cPos)
            except:
                print("Not visible")
            endT = time.time()

        else:
            if cPos >= 100:
                print("WINNER")
                winner = 1
            else:
                # Red-light phase: record the pose and watch for movement
                if not isCinit:
                    isCinit = True
                    cStart = time.time()
                    cEnd = cStart
                    currWindow = im2
                    playsound('redLight.mp3')
                    userSum = calc_sum(res.pose_landmarks.landmark)

                if (cEnd - cStart) <= 3:
                    tempSum = calc_sum(res.pose_landmarks.landmark)
                    cEnd = time.time()
                    if abs(tempSum - userSum) > 150:
                        print("DEAD ", abs(tempSum - userSum))
                        isAlive = 0
                else:
                    isInit = False
                    isCinit = False

        cv2.circle(currWindow, ((55 + 6 * cPos), 280), 15, (0, 0, 255), -1)

        mainWin = np.concatenate((cv2.resize(frm, (800, 400)), currWindow), axis=0)
        cv2.imshow("Main Window", mainWin)
        # cv2.imshow("window", frm)
        # cv2.imshow("light", currWindow)

    else:
        cv2.putText(frm, "Please make sure you are fully in frame", (20, 200), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 4)
        cv2.imshow("window", frm)

    if cv2.waitKey(1) == 27 or isAlive == 0 or winner == 1:
        cv2.destroyAllWindows()
        cap.release()
        break

frm = cv2.blur(frm, (5, 5))

if isAlive == 0:
    cv2.putText(frm, "You are Dead", (50, 200), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 4)
    cv2.imshow("Main Window", frm)

if winner == 1:
    cv2.putText(frm, "You are Winner", (50, 200), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 4)
    cv2.imshow("Main Window", frm)

cv2.waitKey(0)


Adding Images & Audio Files

  • Copy and paste the following photos into the same folder, naming them im1.jpg and im2.jpg, respectively.

Also, download the following audio files into the same folder and rename them greenLight.mp3 and redLight.mp3, respectively.

1. Audio Red Light: Download

2. Audio Green Light: Download

Motion Detection Algorithm & Testing

Now, we’ll utilize the mediapipe library to detect motion. With this, we can locate landmarks of a person standing in front of the camera, as seen in the figure below.

We'll use landmarks 24, 23, 28, and 27 to identify motion. As a person walks, the distance between the hip and the opposite ankle grows and shrinks, so if the person moves, the distance between landmarks 23 and 28 or between 24 and 27 will vary. That variation is how motion is sensed.
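The two metrics the script derives from these landmarks can be sketched with plain numbers, independent of mediapipe (the Landmark class below is a stand-in for mediapipe's landmark objects; the 480/640 scale factors follow the script):

```python
from dataclasses import dataclass

# Stand-in for a mediapipe pose landmark: normalized x, y plus visibility.
@dataclass
class Landmark:
    x: float
    y: float
    visibility: float = 1.0

def calc_dist(lms):
    # Vertical hip-to-ankle separation, scaled to an assumed 640 px frame
    # height: landmark 24 is a hip, landmark 28 the same-side ankle.
    return (lms[28].y - lms[24].y) * 640

def calc_sum(lms):
    # Sum of body-landmark x positions (landmarks 11-32), scaled to an
    # assumed 480 px width; a jump in this sum between frames means motion.
    return sum(lms[i].x * 480 for i in range(11, 33))

# Demo with 33 synthetic landmarks laid out down the frame.
lms = [Landmark(x=0.5, y=i / 33) for i in range(33)]
print(round(calc_dist(lms), 2))
print(calc_sum(lms))  # 22 landmarks * 0.5 * 480
```

When the red light is on, the script compares calc_sum across frames; a difference above a threshold is treated as movement.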

  • After placing all of the files in the folder, execute the Python code to run the game.
  • Once our Python code is working, we will move on to the firmware.

ESP32 CAM Module

The ESP32-based camera module was developed by AI-Thinker. The controller contains a Wi-Fi + Bluetooth/BLE chip and is powered by a 32-bit CPU. It has 520 KB of internal SRAM and an external 4 MB PSRAM. Its GPIO pins support UART, SPI, I2C, PWM, ADC, and DAC.

The module is compatible with the OV2640 Camera Module, which has a camera resolution of 1600 x 1200 pixels. A 24-pin gold plated connector links the camera to the ESP32 CAM Board. A 4GB SD Card can be used on the board. The photographs captured are saved on the SD Card.

ESP32-CAM Features 

  • The smallest 802.11b/g/n Wi-Fi + BT SoC module.
  • Low-power 32-bit CPU that can also serve as the application processor.
  • Up to 160 MHz clock speed, with total computing power up to 600 DMIPS.
  • Built-in 520 KB SRAM, external 4 MB PSRAM.
  • Supports UART/SPI/I2C/PWM/ADC/DAC.
  • Supports OV2640 and OV7670 cameras, with a built-in flash lamp.
  • Supports image upload over Wi-Fi.
  • Supports TF card.
  • Supports multiple sleep modes.
  • Embedded LwIP and FreeRTOS.
  • Supports STA/AP/STA+AP operation modes.
  • Supports Smart Config/AirKiss technology.
  • Supports local serial-port and remote firmware upgrades (FOTA).

ESP32-CAM FTDI Connection

  • There is no programmer chip on the PCB. So, any form of USB-to-TTL Module can be used to program this board. FTDI Modules based on the CP2102 or CP2104 chip, or any other chip, are widely accessible.
  • Connect the FTDI Module to the ESP32 CAM Module as shown below.
ESP32-CAM FTDI Module Connection

ESP32-CAM      FTDI Programmer
GND            GND
5V             VCC
U0R            TX
U0T            RX
GPIO0          GND

Connect the ESP32-CAM's 5V and GND pins to the FTDI Module's VCC and GND. Likewise, connect U0R to TX and U0T to RX. Most importantly, you must short the GPIO0 pin to GND, which puts the device into programming mode. You can remove this connection once programming is complete.

Project PCB Gerber File & PCB Ordering Online

If you don't want to assemble the circuit on a breadboard and prefer a PCB instead, the PCB board for the ESP32-CAM was designed using EasyEDA's online circuit schematic & PCB design tool. The PCB appears as seen below.

The Gerber File for the PCB is given below. You can simply download the Gerber File and order the PCB from https://www.nextpcb.com/

Download Gerber File: ESP32-CAM Multipurpose PCB

Now you can visit the NextPCB official website here: https://www.nextpcb.com/.

  • You can now upload the Gerber File to the Website and place an order. The PCB quality is excellent. That is why the majority of people entrust NextPCB with their PCB and PCBA needs.
  • The components can be assembled on the PCB Board.

Installing ESP32CAM Library

Instead of the generic ESP web server example, we will use a different streaming approach, so an additional library is required. The esp32cam library provides an object-oriented API for using the OV2640 camera on the ESP32 microcontroller. It is a wrapper around the esp32-camera library.

Download the zip library as shown in the image from the following Github Link

After downloading, unzip the library and place it in the Arduino Library folder. To do so, follow the instructions below:

Open Arduino -> Sketch -> Include Library -> Add .ZIP Library… -> Navigate to downloaded zip file -> add

Source Code/Program for ESP32 CAM Module

The source code/program for the ESP32-CAM can be found in the library examples. Go to File -> Examples -> esp32cam -> WifiCam.

You must make a little adjustment to the code before uploading it. Change the SSID and password variables to match the WiFi network you’re using.

Compile the code and upload it to the ESP32-CAM board. However, you must follow a few steps each time you upload.

  • When you press the upload button, make sure the GPIO0 pin is shorted to ground.
  • If you see dots and dashes during uploading, immediately press the reset button.
  • After the code has been uploaded, remove the GPIO0-to-GND short and press the reset button once more.
  • If the output still does not appear on the Serial Monitor, press the reset button again.

Now you can see a similar output as in the image below.

So that's it for the ESP32-CAM section. Since the ESP32-CAM is now broadcasting live video, make a note of the IP address displayed.

Python Code + Motion Detection ESP32 CAM

Now we return to our Python code and make the necessary adjustments, or simply paste the code provided below.

import mediapipe as mp
import cv2
import numpy as np
import time
from playsound import playsound
import urllib.request

# cap = cv2.VideoCapture(0)
url = "http://192.168.1.61/cam-hi.jpg"
cPos = 0
startT = 0
endT = 0
userSum = 0
dur = 0
isAlive = 1
isInit = False
cStart, cEnd = 0, 0
isCinit = False
tempSum = 0
winner = 0
inFrame = 0
inFramecheck = False
thresh = 180

# Sum of the x positions of body landmarks 11-32, scaled to the frame width
def calc_sum(landmarkList):
    tsum = 0
    for i in range(11, 33):
        tsum += (landmarkList[i].x * 480)
    return tsum

# Vertical distance between the ankle (28) and hip (24), scaled to the frame height
def calc_dist(landmarkList):
    return (landmarkList[28].y * 640 - landmarkList[24].y * 640)

def isVisible(landmarkList):
    if (landmarkList[28].visibility > 0.7) and (landmarkList[24].visibility > 0.7):
        return True
    return False

mp_pose = mp.solutions.pose
pose = mp_pose.Pose()
drawing = mp.solutions.drawing_utils

im1 = cv2.imread('im1.jpg')   # green-light image
im2 = cv2.imread('im2.jpg')   # red-light image

currWindow = im1

while True:
    # _, frm = cap.read()
    # Fetch a single JPEG frame from the ESP32-CAM over HTTP
    img_resp = urllib.request.urlopen(url)
    imgnp = np.array(bytearray(img_resp.read()), dtype=np.uint8)
    frm = cv2.imdecode(imgnp, 1)
    rgb = cv2.cvtColor(frm, cv2.COLOR_BGR2RGB)
    res = pose.process(rgb)
    frm = cv2.blur(frm, (5, 5))
    drawing.draw_landmarks(frm, res.pose_landmarks, mp_pose.POSE_CONNECTIONS)

    if not inFramecheck:
        try:
            if isVisible(res.pose_landmarks.landmark):
                inFrame = 1
                inFramecheck = True
            else:
                inFrame = 0
        except:
            print("You are not visible at all")

    if inFrame == 1:
        # Green-light phase: start a timer with a random duration
        if not isInit:
            playsound('greenLight.mp3')
            currWindow = im1
            startT = time.time()
            endT = startT
            dur = np.random.randint(1, 5)
            isInit = True

        if (endT - startT) <= dur:
            try:
                m = calc_dist(res.pose_landmarks.landmark)
                if m < thresh:
                    cPos += 1
                print("current progress is : ", cPos)
            except:
                print("Not visible")
            endT = time.time()

        else:
            if cPos >= 100:
                print("WINNER")
                winner = 1
            else:
                # Red-light phase: record the pose and watch for movement
                if not isCinit:
                    isCinit = True
                    cStart = time.time()
                    cEnd = cStart
                    currWindow = im2
                    playsound('redLight.mp3')
                    userSum = calc_sum(res.pose_landmarks.landmark)

                if (cEnd - cStart) <= 3:
                    tempSum = calc_sum(res.pose_landmarks.landmark)
                    cEnd = time.time()
                    if abs(tempSum - userSum) > 150:
                        print("DEAD ", abs(tempSum - userSum))
                        isAlive = 0
                else:
                    isInit = False
                    isCinit = False

        cv2.circle(currWindow, ((55 + 6 * cPos), 280), 15, (0, 0, 255), -1)

        mainWin = np.concatenate((cv2.resize(frm, (800, 400)), currWindow), axis=0)
        cv2.imshow("Main Window", mainWin)
        # cv2.imshow("window", frm)
        # cv2.imshow("light", currWindow)

    else:
        cv2.putText(frm, "Please make sure you are fully in frame", (20, 200), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 4)
        cv2.imshow("window", frm)

    if cv2.waitKey(1) == 27 or isAlive == 0 or winner == 1:
        cv2.destroyAllWindows()
        # cap.release()
        break

frm = cv2.blur(frm, (5, 5))

if isAlive == 0:
    cv2.putText(frm, "You are Dead", (50, 200), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 4)
    cv2.imshow("Main Window", frm)

if winner == 1:
    cv2.putText(frm, "You are Winner", (50, 200), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 4)
    cv2.imshow("Main Window", frm)

cv2.waitKey(0)
  • Check that the IP address in the URL variable has been updated; if it has, you are set to go.

Now, connect your ESP32-CAM module and the laptop running our Python code to the same local Wi-Fi network. We run the Python code, and the result is the image below. Our ESP32-CAM-based game is now complete.
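If the stream URL is wrong or unreachable, the script will raise an error at urllib.request.urlopen, so it can help to exercise the frame-fetch step on its own first. The sketch below does that against a local stand-in server (FrameHandler and FAKE_JPEG are test scaffolding, not part of the project; a real run would point fetch_frame at the ESP32-CAM's /cam-hi.jpg URL):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in payload; a real ESP32-CAM returns actual JPEG bytes.
FAKE_JPEG = b"\xff\xd8fake-jpeg-data\xff\xd9"

class FrameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.end_headers()
        self.wfile.write(FAKE_JPEG)

    def log_message(self, *args):
        pass  # silence per-request logging

def fetch_frame(url, timeout=5):
    """Fetch one still frame (raw bytes) from the camera endpoint."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

# Demo against the local stand-in server instead of real hardware.
server = HTTPServer(("127.0.0.1", 0), FrameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
frame = fetch_frame(f"http://127.0.0.1:{server.server_port}/cam-hi.jpg")
server.shutdown()
print(len(frame))
```

Once fetch_frame returns bytes from the real camera URL, cv2.imdecode turns them into a frame exactly as in the main script.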

The frames of the moving player are captured by the ESP32-CAM. If motion is recognized in the live video stream while the red light is on, the player is dead and the game is over; otherwise, the green light indicates that the player may move.

This is how you can create a Motion Detection-based Squid Game with the ESP32 CAM and OpenCV.

Conclusion:

I hope you all understood how to design a Motion Detection-based Squid Game with the ESP32-CAM and OpenCV. We will be back soon with more informative blogs.
