python - Eye gaze detection: identifying where on the board the user is looking
Problem Description
I am working on a project involving a board and a camera. The goal is to identify students who are looking at the blackboard and to determine where on the board their gaze lands.
Currently, I plan to tackle the challenge in the following steps:
- Detect the students' faces
- Extract the ROI of both eyes from each detected face
- Locate the pupil/iris centers of their eyes and estimate head pose
- Decide whether the person is looking at the blackboard
- If so, determine which region of the blackboard the student is looking at
So far, I have been able to do the following:
- Detect face and eye landmarks, and the head pose vector (X, Y, Z)
Below is the code:
from scipy.spatial import distance as dist
from imutils.video import FileVideoStream
from imutils.video import VideoStream
from imutils import face_utils
from gaze_codefiles import get_head_pose, draw_border, iris_center
import numpy as np
import imutils
import time
import dlib
import cv2

# Edges of the head-pose cube, used for optional reprojection drawing
line_pairs = [[0, 1], [1, 2], [2, 3], [3, 0],
              [4, 5], [5, 6], [6, 7], [7, 4],
              [0, 4], [1, 5], [2, 6], [3, 7]]

print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('./shape_predictor_68_face_landmarks.dat')

print("[INFO] camera sensor warming up...")
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()  # Raspberry Pi
time.sleep(2.0)

# Landmark index ranges for the two eyes in the 68-point model
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

while True:
    frame = vs.read()
    frame = imutils.resize(frame, width=400)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rects = detector(gray, 0)

    for rect in rects:
        # Draw a rounded border around the detected face
        (bx, by, bw, bh) = face_utils.rect_to_bb(rect)
        draw_border(frame, (bx, by), (bx + bw, by + bh), (127, 255, 255), 1, 10, 20)

        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)

        # Outline both eyes with their convex hulls
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (127, 255, 255), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (127, 255, 255), 1)

        # Estimate head pose from the 68 landmarks
        reprojectdst, euler_angle = get_head_pose(shape)

        image_points = np.float32([shape[17], shape[21], shape[22], shape[26], shape[36],
                                   shape[39], shape[42], shape[45], shape[31], shape[35],
                                   shape[48], shape[54], shape[57], shape[8]])

        # for start, end in line_pairs:
        #     cv2.line(frame, reprojectdst[start], reprojectdst[end], (0, 0, 255))

        for p in image_points:
            cv2.circle(frame, (int(p[0]), int(p[1])), 1, (0, 0, 255), -1)

        # p1 = (int(shape[34][0]), int(shape[34][1]))
        # p2 = (int(reprojectdst[0][0]), int(reprojectdst[0][1]))
        # cv2.line(frame, p1, p2, (255, 0, 0), 2)

        # Overlay the head-pose Euler angles on the frame
        cv2.putText(frame, "X: " + "{:7.2f}".format(euler_angle[0, 0]), (20, 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (127, 255, 255), thickness=1)
        cv2.putText(frame, "Y: " + "{:7.2f}".format(euler_angle[1, 0]), (20, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (127, 255, 255), thickness=1)
        cv2.putText(frame, "Z: " + "{:7.2f}".format(euler_angle[2, 0]), (20, 80),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (127, 255, 255), thickness=1)

        # cv2.putText(frame, "Left Eye Center is:{}".format(tuple(lefteyecenter)), (20, 100),
        #             cv2.FONT_HERSHEY_SIMPLEX, 0.75, (127, 255, 255), thickness=2)
        # cv2.putText(frame, "Right Eye Center is:{}".format(tuple(righteyecenter)), (20, 120),
        #             cv2.FONT_HERSHEY_SIMPLEX, 0.75, (127, 255, 255), thickness=2)

    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

cv2.destroyAllWindows()
vs.stop()
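The `iris_center` helper imported above is project-specific and its implementation is not shown. As a rough stand-in, the centroid of the six dlib eye landmarks gives a coarse eye-center estimate; a real iris/pupil locator would refine this with thresholding or Hough circles on the eye ROI. A hypothetical sketch:

```python
import numpy as np

def eye_center(eye_landmarks):
    """Centroid of the six dlib eye landmarks as a rough
    eye-center estimate (a coarse proxy, not a true iris fit)."""
    pts = np.asarray(eye_landmarks, dtype=float)
    return pts.mean(axis=0)

# Example with a hypothetical set of left-eye landmark coordinates:
center = eye_center([(36, 40), (39, 38), (43, 38),
                     (46, 40), (43, 42), (39, 42)])
```

In the loop above this would be called as `eye_center(leftEye)` / `eye_center(rightEye)` on the landmark slices already extracted.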
Here is the output:
I am able to obtain the gaze direction for both eyes; now I just need to project these vectors into 3D space in the real world (the board or a laptop screen). Can anyone guide me?
Solution
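The final step the question asks about, projecting a gaze vector onto the board, reduces to a ray-plane intersection. A minimal sketch, assuming the eye position, gaze direction, and board plane have already been calibrated into a common world coordinate frame (obtaining that calibration from `solvePnP` extrinsics is the hard part and is not shown here):

```python
import numpy as np

def gaze_board_intersection(origin, direction, plane_point, plane_normal):
    """Intersect a gaze ray with the board plane.

    origin:       3D eye position in world coordinates
    direction:    3D gaze direction (need not be unit length)
    plane_point:  any point on the board plane
    plane_normal: normal vector of the board plane
    Returns the 3D intersection point, or None when the ray is
    (nearly) parallel to the plane or the board is behind the viewer.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)

    denom = direction.dot(plane_normal)
    if abs(denom) < 1e-9:   # ray parallel to the board plane
        return None
    t = (plane_point - origin).dot(plane_normal) / denom
    if t < 0:               # intersection lies behind the eye
        return None
    return origin + t * direction

# Example: an eye 2 m in front of a board lying in the z = 0 plane,
# looking straight ahead along -z; the hit point is (0.5, 1.2, 0).
hit = gaze_board_intersection([0.5, 1.2, 2.0], [0, 0, -1],
                              [0, 0, 0], [0, 0, 1])
```

Once the 3D hit point is known, expressing it in the board's own 2D axes (two orthogonal unit vectors spanning the plane) gives the region of the blackboard being looked at.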