
MediaPipe - (3)


Following up on the previous MediaPipe Hands post, this time I'll test MediaPipe Pose.

Example code

import time
import cv2 as cv
import mediapipe as mp
mp_drawing = mp.solutions.drawing_utils
mp_pose = mp.solutions.pose

prevTime = 0
idx = 0
pose = mp_pose.Pose(
    min_detection_confidence=0.5, min_tracking_confidence=0.5)
cap = cv.VideoCapture('./ufc.gif')
while cap.isOpened():
    success, image = cap.read()
    curTime = time.time()
    if not success:
        break
    
    # Flip horizontally (selfie view) and convert BGR -> RGB for MediaPipe
    image = cv.cvtColor(cv.flip(image, 1), cv.COLOR_BGR2RGB)
    # Mark the frame read-only so MediaPipe can avoid copying it
    image.flags.writeable = False
    results = pose.process(image)

    # Convert back to a writeable BGR image and draw the detected landmarks
    image.flags.writeable = True
    image = cv.cvtColor(image, cv.COLOR_RGB2BGR)
    mp_drawing.draw_landmarks(
        image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)

    sec = curTime - prevTime
    prevTime = curTime
    fps = 1 / sec if sec > 0 else 0.0  # guard against division by zero
    fps_text = f"FPS : {fps:0.1f}"  # avoid shadowing the builtin `str`

    cv.putText(image, fps_text, (0, 100),
               cv.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0))
    cv.imshow('MediaPipe Pose', image)
    cv.imwrite(f"./sample_{idx:05d}.jpg", image) # for making gif
    idx += 1
    if cv.waitKey(1) & 0xFF == ord('q'):
        break
pose.close()
cap.release()
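MediaPipe returns each landmark's `x` and `y` normalized to [0, 1] relative to the frame. A small helper (my own sketch, not part of the original code) converts those to pixel coordinates, which is handy for checking where a joint actually landed:

```python
def to_pixel_coords(landmark_x, landmark_y, frame_width, frame_height):
    """Convert MediaPipe's normalized [0, 1] landmark coordinates
    to integer pixel coordinates inside the frame."""
    x_px = min(int(landmark_x * frame_width), frame_width - 1)
    y_px = min(int(landmark_y * frame_height), frame_height - 1)
    return x_px, y_px

# Example: a landmark at the center of a 640x480 frame
print(to_pixel_coords(0.5, 0.5, 640, 480))  # → (320, 240)
```

In the loop above you could feed it e.g. `results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE].x` and `.y` together with the frame size to print the nose position per frame.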

I'll need to experiment more, but with the current code it doesn't seem to track the pose properly..

Hmm… I'll have to tweak the code a bit..
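Two knobs worth trying when tweaking (my guess at what might help, not something verified in this post) are `model_complexity` and a lower `min_detection_confidence` on the `Pose` constructor:

```python
import mediapipe as mp

mp_pose = mp.solutions.pose
# model_complexity: 0 = lite (fastest), 1 = full (default), 2 = heavy (most accurate)
pose = mp_pose.Pose(
    model_complexity=2,
    min_detection_confidence=0.3,  # accept weaker initial detections
    min_tracking_confidence=0.5)
```

The heavy model is noticeably slower, so FPS would drop, but it may hold onto fast-moving bodies like the fighters in the test gif better.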

P.S

  • Next time… maybe the Bazel version..?