Scaling the webcam video on a web page for PC and mobile

Problem description

I followed a codelab to build a smart webcam that does object detection with TensorFlow.js. It is a simple website, so I later tried to add some visual touches: colors, a title bar, buttons, and so on. The page now runs almost smoothly on my PC screen, but I run into a lot of problems on mobile. To explain what I am doing: first the page loads with some text and downloads the TensorFlow.js model => a button appears => pressing it hides the button and shows the video plus two new buttons (switch and close camera). The model then starts running and draws bounding boxes. I have two problems:

  1. I cannot use the rear camera on my phone.
  2. I cannot position and size the video so that it fits both PC and mobile screens.

Link to the web page: https://lordharsh.github.io/Object-Detection-with-Webcam/

Code:

HTML:

<!DOCTYPE html>
<html lang="en">

<head>
    <title>OD using TensorFlow</title>
    <meta charset="utf-8">
    <!-- Import the webpage's stylesheet -->
    <link rel="stylesheet" href="style.css">
    <meta name="viewport" content="width=1280, initial-scale=1">
</head>

<body class= "page">
    <header>
        <a>Multiple Object Detection</a>
    </header> 
    <main> 
        <p id ='p1' class= 'para'>Multiple object detection using a pre-trained model in TensorFlow.js. Wait for the model to load before clicking the button to enable the webcam - at which point it will become
            visible to use.</p>

        <section id="demos" class="invisible">

            <p id ='p2'>Hold some objects up close to your webcam to get a real-time classification! When ready click "enable
                webcam"
                below and accept access to the webcam when the browser asks (check the top left of your window)</p>

            <div id="liveView" class="camView">
                <button id="webcamButton" class="button b1">ENABLE CAMERA</button>
                <video id="webcam" autoplay width="640" height="480"></video>
                <button id="webcamFlipButton" class="button b2">SWITCH CAMERA</button>
                <button id="webcamCloseButton" class="button b3">CLOSE CAMERA</button>
            </div>
        </section>

        <!-- Import TensorFlow.js library -->
        <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js" type="text/javascript"></script>
        <!-- Load the coco-ssd model to use to recognize things in images -->
        <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd"></script>

        <!-- Import the page's JavaScript to do some stuff -->
        <script src="script.js" defer></script>
    </main>
</body>

</html>

JavaScript:

const video = document.getElementById('webcam');
const liveView = document.getElementById('liveView');
const demosSection = document.getElementById('demos');
const enableWebcamButton = document.getElementById('webcamButton');
const flipWebcamButton = document.getElementById('webcamFlipButton');
const closeWebcamButton = document.getElementById('webcamCloseButton');
const para1 = document.getElementById('p1');
const para2 = document.getElementById('p2');

let camDirection = 'user';

console.log(window.screen.availHeight+" "+window.screen.availWidth+' '+window.devicePixelRatio);


// Check if webcam access is supported.
function getUserMediaSupported() {
  return !!(navigator.mediaDevices &&
    navigator.mediaDevices.getUserMedia);
}

// If webcam supported, add event listener to button for when user
// wants to activate it to call enableCam function which we will 
// define in the next step.
if (getUserMediaSupported()) {
  enableWebcamButton.addEventListener('click', enableCam);
} else {
  console.warn('getUserMedia() is not supported by your browser');
}

// Enable the live webcam view and start classification.
function enableCam(event) {
  // Only continue if the COCO-SSD has finished loading.
  if (!model) {
    return;
  }
  if (event.target === flipWebcamButton) {
    if (camDirection === 'user'){
      camDirection = 'environment';
      console.log('should change');
    }
    else
      camDirection = 'user';
  }

  // Hide the button once clicked.
  para1.classList.add('removed');
  para2.classList.add('removed');
  event.target.classList.add('removed');
  video.classList.add('vid_show')
  flipWebcamButton.classList.add('show')
  closeWebcamButton.classList.add('show')
  // getUsermedia parameters to force video but not audio.
  const constraints = {
    video: {facingMode: { exact: camDirection }}
  };

  // Activate the webcam stream.
  navigator.mediaDevices.getUserMedia(constraints).then(function (stream) {
    video.srcObject = stream;
    video.addEventListener('loadeddata', predictWebcam);
  });
}

flipWebcamButton.addEventListener('click', enableCam);
closeWebcamButton.addEventListener('click', restartPage);

function restartPage(event) {
  location.reload()
}


// Store the resulting model in the global scope of our app.
var model = undefined;

// Before we can use COCO-SSD class we must wait for it to finish
// loading. Machine Learning models can be large and take a moment 
// to get everything needed to run.
// Note: cocoSsd is an external object loaded from our index.html
// script tag import so ignore any warning in Glitch.
cocoSsd.load().then(function (loadedModel) {
  model = loadedModel;
  // Show demo section now model is ready to use.
  demosSection.classList.remove('invisible');
});

var children = [];

function predictWebcam() {
  // Now let's start classifying a frame in the stream.
  model.detect(video).then(function (predictions) {
    // Remove any highlighting we did previous frame.
    for (let i = 0; i < children.length; i++) {
      liveView.removeChild(children[i]);
    }
    children.splice(0);

    // Now lets loop through predictions and draw them to the live view if
    // they have a high confidence score.
    for (let n = 0; n < predictions.length; n++) {
      // If we are over 66% sure we classified it right, draw it!
      if (predictions[n].score > 0.66) {
        const p = document.createElement('p');
        p.innerText = predictions[n].class + ' - with '
          + Math.round(parseFloat(predictions[n].score) * 100)
          + '% confidence.';
        p.style = 'margin-left: ' + predictions[n].bbox[0] + 'px; margin-top: '
          + (predictions[n].bbox[1] - 10) + 'px; width: '
          + (predictions[n].bbox[2] - 10) + 'px; top: 0; left: 0;';

        const highlighter = document.createElement('div');
        highlighter.setAttribute('class', 'highlighter');
        highlighter.style = 'left: ' + predictions[n].bbox[0] + 'px; top: '
          + predictions[n].bbox[1] + 'px; width: '
          + predictions[n].bbox[2] + 'px; height: '
          + predictions[n].bbox[3] + 'px;';
        liveView.appendChild(highlighter);
        liveView.appendChild(p);
        children.push(highlighter);
        children.push(p);
      }
    }

    // Call this function again to keep predicting when the browser is ready.
    window.requestAnimationFrame(predictWebcam);
  });
}

CSS:

:root {
  --primary: #000000;
  --secondary: #;
  --primaryLight: #2c2c2c;
  --primaryDark: #000000;
  --secondaryLight: #73ffd8;
  --secondaryDark: #00ca77;
}

header {
  overflow: hidden;
  background-color:black;
  position: fixed; /* Set the navbar to fixed position */
  top: 0; /* Position the navbar at the top of the page */
  width: 1300px; /* Full width */
  padding: 5px;
  left: 0;
  shape-margin: 5px;
  text-align: left;
  text-shadow: 50px #000000;
}
header a{
  padding: 10px;
  left: 50px;
  font-family:Impact, Haettenschweiler, 'Arial Narrow Bold', sans-serif;
  color: whitesmoke;
  font-size: 5vh;
  font-weight: 400;
  letter-spacing: 0.08em;
}


/* Main content */
main {
  padding-left: 12px;
  margin-top:10vh; /* Add a top margin to avoid content overlay */
  text-align: center;
  font-size: large;
  align-content: center;
  text-align: center;
}

.para{
  margin-top: 40vh;
}

body {
  font-family: "Lucida Console";
  color: #ffffff;
  background-color: #2c2c2c;
}



video{
  display: none;
  border-radius: 12px; 
  align-self: center;
}

.vid_show{
  display:block;
  align-self: center;
}

section {
  opacity: 1;
  transition: opacity 500ms ease-in-out;
}

.removed {
  display: none;
}

.invisible {
  opacity: 0.2;
}

.camView {
  vertical-align: middle;
  position: relative;
  cursor: pointer;
  align-content: center;
}

.camView p {
  position: absolute;
  padding: 5px;
  background-color: #1df3c2;
  color: #fff;
  border: 1px dashed rgba(255, 255, 255, 0.7);
  z-index: 2;
  font-size: 12px;
  align-content: center;
}

.highlighter {
  background: rgba(0, 255, 0, 0.25);
  border: 1px dashed #fff;
  z-index: 1;
  position: absolute;
}

.button{
  height:max-content;
  width: max-content;
  color: #000000;
  box-shadow: 0 8px 16px 0 rgba(0,0,0,0.2), 0 6px 20px 0 rgba(0,0,0,0.19);
  border-radius: 3ch;
  left: 50%;
  right: 50%;
  border-collapse: collapse;
  font-size: 1.7vh;
  font-weight: bold;
  font-family:Verdana, Geneva, Tahoma, sans-serif;
  padding: 1vh;
  transition-duration: 0.4s;
}

.button:hover {
  background-color: #000000;
  color: white;
  box-shadow: 0 6px 10px 0 rgb(231, 231, 231), 0 6px 10px 0 rgb(231, 231, 231);
}

.b2{
  display:none;
}

.b3{
  display: none;
}

.show{
  display:inline-grid;
}

You can also view this code on GitHub: https://github.com/LordHarsh/Objecct-Detection-with-Webcam

Sorry for the long question, but I am new to web development.

Tags: javascript, web-deployment, object-detection, tensorflow.js, bounding-box

Solution
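For the first problem, there are likely two causes working together. `facingMode: { exact: 'environment' }` makes the browser reject the request with an `OverconstrainedError` if it cannot match a rear camera exactly, and the old stream is never stopped, so on many phones the front camera is still held open when the second `getUserMedia` call is made. A sketch of a more forgiving flow; the helper names `buildConstraints`, `stopStream`, and `openCamera` are my own, not part of the original script:

```javascript
// Build getUserMedia constraints. "ideal" lets the browser fall back to
// whatever camera is available instead of rejecting outright, which is
// what "exact" does on devices with no matching camera.
function buildConstraints(direction) {
  return { video: { facingMode: { ideal: direction } } };
}

// Stop every track of the current stream before requesting a new one;
// some mobile browsers will not open a second camera while the first
// one is still live.
function stopStream(video) {
  if (video.srcObject) {
    video.srcObject.getTracks().forEach(function (track) { track.stop(); });
    video.srcObject = null;
  }
}

// Hypothetical replacement for the stream-opening part of enableCam().
function openCamera(video, direction) {
  stopStream(video);
  return navigator.mediaDevices.getUserMedia(buildConstraints(direction))
    .then(function (stream) {
      video.srcObject = stream;
      return stream;
    })
    .catch(function (err) {
      console.warn('Could not open ' + direction + ' camera: ' + err.name);
    });
}
```

In `enableCam`, after toggling `camDirection`, you would call `openCamera(video, camDirection)` instead of calling `getUserMedia` directly, so the previous stream is always released first.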

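The second problem most likely comes from the viewport meta tag: `content="width=1280"` forces mobile browsers to lay the page out at a fixed 1280px and then shrink it, so nothing ever fits the phone screen naturally. Switching to `width=device-width` and replacing the fixed pixel widths with fluid ones should let the same markup work on both. A sketch, assuming the video keeps its 640px attribute width as a maximum (adjust the values to taste):

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  /* Let the video shrink with the screen instead of overflowing it. */
  video {
    width: 100%;
    max-width: 640px;
    height: auto;
  }
  /* The header was hard-coded to 1300px; make it span the viewport. */
  header {
    width: 100%;
  }
</style>
```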

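One follow-up consequence of making the video fluid: coco-ssd reports `bbox` values in the video's intrinsic pixel space (640×480 here), so once CSS scales the element, the highlighter divs will drift off the detected objects. Scaling each box by the ratio of displayed size to intrinsic size before positioning fixes that. A minimal sketch; `scaleBbox` is a hypothetical helper, not part of the original script:

```javascript
// Convert a coco-ssd bbox ([x, y, width, height] in the video's intrinsic
// pixel space) into the element's displayed coordinate space.
function scaleBbox(bbox, intrinsicW, intrinsicH, displayW, displayH) {
  const sx = displayW / intrinsicW;
  const sy = displayH / intrinsicH;
  return [bbox[0] * sx, bbox[1] * sy, bbox[2] * sx, bbox[3] * sy];
}

// Inside predictWebcam(), scale before building the highlighter style:
// const [x, y, w, h] = scaleBbox(predictions[n].bbox,
//   video.videoWidth, video.videoHeight,
//   video.clientWidth, video.clientHeight);
// highlighter.style = 'left: ' + x + 'px; top: ' + y + 'px; width: '
//   + w + 'px; height: ' + h + 'px;';
```

The same scaled coordinates should also be used for the label `<p>` elements, since they are positioned with the same bbox values.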