c++ - How do I get the depth at pixel (x, y) after post-processing with RealSense?
Problem description
Consider the following code:
// Declare pointcloud object, for calculating pointclouds and texture mappings
rs2::pointcloud pc;
// We want the points object to be persistent so we can display the last cloud when a frame drops
rs2::points points;
// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();
// Declare filters
rs2::decimation_filter dec_filter; // Decimation - reduces depth frame density
rs2::threshold_filter thr_filter; // Threshold - removes values outside recommended range
rs2::spatial_filter spat_filter; // Spatial - edge-preserving spatial smoothing
rs2::temporal_filter temp_filter; // Temporal - reduces temporal noise
rs2::disparity_transform depth_to_disparity(true);
rs2::disparity_transform disparity_to_depth(false);
// Initialize a vector that holds filters and their options
std::vector<rs2::filter*> filters;
// The following order of emplacement will dictate the orders in which filters are applied
filters.emplace_back(&dec_filter);
filters.emplace_back(&thr_filter);
filters.emplace_back(&depth_to_disparity);
filters.emplace_back(&spat_filter);
filters.emplace_back(&temp_filter);
filters.emplace_back(&disparity_to_depth);
while (app) // Application still alive?
{
    // Wait for the next set of frames from the camera
    auto frames = pipe.wait_for_frames();
    rs2::video_frame color = frames.get_color_frame();
    // For cameras that don't have RGB sensor, we'll map the pointcloud to infrared instead of color
    if (!color)
        color = frames.get_infrared_frame();
    rs2::depth_frame depth = frames.get_depth_frame();
    int centerX = depth.get_width() / 2;
    int centerY = depth.get_height() / 2;
    // A: Pre-filtered
    float prefiltered_distance = depth.get_distance(centerX, centerY);
    // B: Filter frames
    for (auto filter : filters)
    {
        depth = filter->process(depth);
    }
    // C: Post-filtered (fails)
    float postfiltered_distance = depth.get_distance(centerX, centerY);
    // Tell pointcloud object to map to this color frame
    pc.map_to(color);
    // Generate the pointcloud and texture mappings
    points = pc.calculate(depth);
    // ...
}
Why does calling depth.get_distance(centerX, centerY); before filtering the frames work fine, but the same call after filtering fails with out of range value for argument "y"?
In short: how do I get the filtered distance (z) at pixel (x, y)?
Solution
The decimation filter reduces the resolution of the depth frame. After running the filter chain, query the frame's resolution again and update your centerX and centerY variables so that they are no longer out of range.