
Single-Person Pose Detection | TensorFlow SinglePose

Posted: 2022-10-24 20:00:00

Single-person pose detection on an image

If any image here infringes your copyright, contact me and I will remove images that are not mine.

Packages to install

!pip install tensorflow==2.4.1 tensorflow-gpu==2.4.1 tensorflow-hub opencv-python matplotlib

Import the following packages:

tensorflow_hub: loads the model from TF Hub

cv2: OpenCV, used to draw points and lines and to work with images and video

import tensorflow as tf
import tensorflow_hub as hub
import cv2
from matplotlib import pyplot as plt
import numpy as np

Load and run the model, which returns the 17 keypoints

def movenet(input_image):
    """Runs detection on an input image.

    Args:
      input_image: A [1, height, width, 3] tensor represents the input image
        pixels. Note that the height/width should already be resized and match the
        expected input resolution of the model before passing into this function.

    Returns:
      A [1, 17, 3] float numpy array representing the predicted keypoint
      coordinates and scores.
    """
    # Download the model from TF Hub.
    model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
    model = model.signatures['serving_default']

    # SavedModel format expects tensor type of int32.
    input_image = tf.cast(input_image, dtype=tf.int32)
    # Run model inference.
    outputs = model(input_image)
    # Output is a [1, 1, 17, 3] tensor.
    keypoints_with_scores = outputs['output_0'].numpy()

    keypoints_with_scores = keypoints_with_scores.reshape((1, 17, 3))

    return keypoints_with_scores

You can visit TensorFlow Hub to browse the SinglePose and MultiPose models and their example programs:

https://tfhub.dev/s?module-type=image-pose-detection

Draw the 17 keypoints

def draw_keypoints(frame, keypoints, confidence_threshold):
    y, x, c = frame.shape
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))
    print("shaped in draw_keypoints:", shaped)
    for kp in shaped:
        ky, kx, kp_conf = kp
        if kp_conf > confidence_threshold:
            cv2.circle(frame, (int(kx), int(ky)), 6, (0, 255, 0), -1)

Draw the lines between keypoints

The values below tell us how to connect the body keypoints. For example, the first entry, (0, 1): 'm', connects the nose to the left eye, and the last entry, (14, 16): 'c', connects the right knee to the right ankle.

The 17 body keypoints are ordered from 0 to 16:

nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, right ankle

EDGES = {
    (0, 1): 'm',
    (0, 2): 'c',
    (1, 3): 'm',
    (2, 4): 'c',
    (0, 5): 'm',
    (0, 6): 'c',
    (5, 7): 'm',
    (7, 9): 'm',
    (6, 8): 'c',
    (8, 10): 'c',
    (5, 6): 'y',
    (5, 11): 'm',
    (6, 12): 'c',
    (11, 12): 'y',
    (11, 13): 'm',
    (13, 15): 'm',
    (12, 14): 'c',
    (14, 16): 'c'
}
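To make the EDGES keys easier to read against the joint order above, here is a small helper of my own (KEYPOINT_NAMES and edge_description are not part of the original code) that turns an edge such as (0, 1) into a human-readable description:

```python
# Hypothetical helper: map MoveNet's keypoint indices (0-16) to joint names,
# following the order listed above.
KEYPOINT_NAMES = [
    "nose", "left eye", "right eye", "left ear", "right ear",
    "left shoulder", "right shoulder", "left elbow", "right elbow",
    "left wrist", "right wrist", "left hip", "right hip",
    "left knee", "right knee", "left ankle", "right ankle",
]

def edge_description(edge):
    """Describe an EDGES key such as (0, 1) as 'nose -> left eye'."""
    p1, p2 = edge
    return f"{KEYPOINT_NAMES[p1]} -> {KEYPOINT_NAMES[p2]}"

print(edge_description((0, 1)))    # nose -> left eye
print(edge_description((14, 16)))  # right knee -> right ankle
```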

How does the draw_connections function connect the 17 keypoints?

def draw_connections(frame, keypoints, edges, confidence_threshold):
    print('frame', frame)
    y, x, c = frame.shape
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))

    for edge, color in edges.items():
        p1, p2 = edge
        y1, x1, c1 = shaped[p1]
        y2, x2, c2 = shaped[p2]

        if (c1 > confidence_threshold) & (c2 > confidence_threshold):
            cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 4)

Draw each person's keypoints and connecting lines

def loop_through_people(frame, keypoints_with_scores, edges, confidence_threshold):
    for person in keypoints_with_scores:
        draw_connections(frame, person, edges, confidence_threshold)
        draw_keypoints(frame, person, confidence_threshold)

Load your own image

image_path = 'fitness_pic.jpg'

image = tf.io.read_file(image_path)
image = tf.compat.v1.image.decode_jpeg(image)

Resize it to (192, 192)

Notes:

1. The height and width must be multiples of 32.

2. The aspect ratio should stay as close as possible to that of the original image.

3. Neither the height nor the width may exceed 256. For example, a 720p image (i.e. 720x1280, HxW) should be resized and padded to 160x256.

Our example keeps it simple and uses a size of (192, 192).
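The three sizing rules above can be sketched as a small helper. This is my own illustration (suggest_input_size is not part of the original code): scale so the longer side hits the cap, then round each side up to the nearest multiple of 32.

```python
import math

def suggest_input_size(height, width, max_side=256, multiple=32):
    """Suggest a (height, width) that follows the three rules above:
    multiples of 32, aspect ratio close to the source, sides capped at max_side."""
    scale = max_side / max(height, width)

    def round_up(v):
        # Round up to the nearest multiple of 32.
        return multiple * math.ceil(v / multiple)

    return round_up(height * scale), round_up(width * scale)

print(suggest_input_size(720, 1280))  # (160, 256), matching the 720p example above
```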

# Resize and pad the image to keep the aspect ratio and fit the expected size.
input_size = 192
input_image = tf.expand_dims(image, axis=0)
input_image = tf.image.resize_with_pad(input_image, input_size, input_size)

Run model inference. keypoints_with_scores has shape [1, 17, 3].

The first dimension is the batch dimension, always equal to 1.
The second dimension holds the predicted keypoint locations and scores. The 17 * 3 values are laid out as [y_0, x_0, s_0, y_1, x_1, s_1, ..., y_16, x_16, s_16], where y_i, x_i are the coordinates of the i-th joint (normalized to the image frame, i.e. in the range [0.0, 1.0]) and s_i is its confidence score. The 17 joints are ordered: [nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, right ankle].

# Run model inference.
keypoints_with_scores = movenet(input_image)

Display the original image and the image with every keypoint drawn on it

display_image = tf.cast(tf.image.resize_with_pad(image, 1280, 1280), dtype=tf.int32)
display_image = np.array(display_image)
origin_image = np.copy(display_image)

loop_through_people(display_image, keypoints_with_scores, EDGES, 0.1)

plt.subplot(1, 2, 1)
plt.imshow(origin_image)
plt.subplot(1, 2, 2)
plt.imshow(display_image)
plt.show()

 

Full code

import tensorflow as tf
import tensorflow_hub as hub
import cv2
from matplotlib import pyplot as plt
import numpy as np

def movenet(input_image):
    """Runs detection on an input image.

    Args:
      input_image: A [1, height, width, 3] tensor represents the input image
        pixels. Note that the height/width should already be resized and match the
        expected input resolution of the model before passing into this function.

    Returns:
      A [1, 17, 3] float numpy array representing the predicted keypoint
      coordinates and scores.
    """
    # Download the model from TF Hub.
    model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
    model = model.signatures['serving_default']

    # SavedModel format expects tensor type of int32.
    input_image = tf.cast(input_image, dtype=tf.int32)
    # Run model inference.
    outputs = model(input_image)
    # Output is a [1, 1, 17, 3] tensor.
    keypoints_with_scores = outputs['output_0'].numpy()

    keypoints_with_scores = keypoints_with_scores.reshape((1, 17, 3))

    return keypoints_with_scores

def draw_keypoints(frame, keypoints, confidence_threshold):
    y, x, c = frame.shape
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))
    print("shaped in draw_keypoints:", shaped)
    for kp in shaped:
        ky, kx, kp_conf = kp
        if kp_conf > confidence_threshold:
            cv2.circle(frame, (int(kx), int(ky)), 6, (0, 255, 0), -1)


EDGES = {
    (0, 1): 'm',
    (0, 2): 'c',
    (1, 3): 'm',
    (2, 4): 'c',
    (0, 5): 'm',
    (0, 6): 'c',
    (5, 7): 'm',
    (7, 9): 'm',
    (6, 8): 'c',
    (8, 10): 'c',
    (5, 6): 'y',
    (5, 11): 'm',
    (6, 12): 'c',
    (11, 12): 'y',
    (11, 13): 'm',
    (13, 15): 'm',
    (12, 14): 'c',
    (14, 16): 'c'
}

def draw_connections(frame, keypoints, edges, confidence_threshold):
    print('frame', frame)
    y, x, c = frame.shape
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))

    for edge, color in edges.items():
        p1, p2 = edge
        y1, x1, c1 = shaped[p1]
        y2, x2, c2 = shaped[p2]

        if (c1 > confidence_threshold) & (c2 > confidence_threshold):
            cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 4)

def loop_through_people(frame, keypoints_with_scores, edges, confidence_threshold):
    for person in keypoints_with_scores:
        draw_connections(frame, person, edges, confidence_threshold)
        draw_keypoints(frame, person, confidence_threshold)


image_path = 'C:/Users/Harry/Desktop/fitness.jpeg'

image = tf.io.read_file(image_path)
# image = tf.compat.v1.image.decode_image(image)
image = tf.compat.v1.image.decode_jpeg(image)


# Resize and pad the image to keep the aspect ratio and fit the expected size.
input_size = 192
input_image = tf.expand_dims(image, axis=0)
input_image = tf.image.resize_with_pad(input_image, input_size, input_size)

# Run model inference.
keypoints_with_scores = movenet(input_image)


display_image = tf.cast(tf.image.resize_with_pad(image, 1280, 1280), dtype=tf.int32)
display_image = np.array(display_image)
origin_image = np.copy(display_image)

loop_through_people(display_image, keypoints_with_scores, EDGES, 0.1)

plt.subplot(1, 2, 1)
plt.imshow(origin_image)
plt.subplot(1, 2, 2)
plt.imshow(display_image)
plt.show()

References

https://tfhub.dev/google/movenet/singlepose/lightning/4
