Binocular Stereo Vision (6) - ZED2 Stereo Vision Examples for Beginners (C++/Linux) - Part 2

2023-12-01 21:40

This article introduces Binocular Stereo Vision (6) - ZED2 Stereo Vision Examples for Beginners (C++/Linux) - Part 2. We hope it provides a useful reference for solving programming problems; interested developers can follow along and learn.

项目地址:https://github.com/stereolabs/zed-examples

官方文档:https://www.stereolabs.com/docs/


 

目录

Tutorial 4: Positional tracking with the ZED

Prerequisites

Build the program

Code overview

Create a camera

Enable positional tracking

Capture pose data

Inertial Data

Tutorial 5: Spatial mapping with the ZED

Prerequisites

Build the program

Code overview

Create a camera

Enable positional tracking

Enable spatial mapping

Capture data

Extract mesh

Disable modules and exit

Tutorial 6: Object Detection with the ZED 2

Prerequisites

Build the program

Code overview

Create a camera

Enable Object detection

Capture data

Disable modules and exit

Tutorial 7: Getting sensors data from ZED Mini and ZED2

Prerequisites

Build the program

Code overview

Create a camera

Sensors data capture

Process data

Close camera and exit


Tutorial 4: Positional tracking with the ZED

This tutorial shows how to use the ZED as a positional tracker. The program will loop until 1000 positions have been grabbed. We assume that you have followed the previous tutorials.

Prerequisites

  • Windows 10, Ubuntu LTS, L4T
  • ZED SDK and its dependencies (CUDA)

Build the program

Download the sample and follow the instructions below:

Build for Windows

  • Create a "build" folder in the source folder
  • Open cmake-gui and select the source and build folders
  • Generate the Visual Studio Win64 solution
  • Open the resulting solution and change configuration to Release
  • Build solution

Build for Linux

Open a terminal in the sample directory and execute the following command:

    mkdir build
    cd build
    cmake ..
    make

Code overview

#include <sl/Camera.hpp>

using namespace std;
using namespace sl;

int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_parameters;
    init_parameters.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
    init_parameters.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
    init_parameters.coordinate_units = UNIT::METER; // Set units in meters
    init_parameters.sensors_required = true;

    // Open the camera
    auto returned_state = zed.open(init_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Enable positional tracking with default parameters
    PositionalTrackingParameters tracking_parameters;
    returned_state = zed.enablePositionalTracking(tracking_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Track the camera position during 1000 frames
    int i = 0;
    Pose zed_pose;

    // Check if an IMU is available (ZED Mini / ZED 2; the original ZED has none)
    bool zed_has_imu = zed.getCameraInformation().sensors_configuration.isSensorAvailable(sl::SENSOR_TYPE::GYROSCOPE);
    SensorsData sensor_data;

    while (i < 1000) {
        if (zed.grab() == ERROR_CODE::SUCCESS) {
            // Get the pose of the left eye of the camera with reference to the world frame
            zed.getPosition(zed_pose, REFERENCE_FRAME::WORLD);

            // Get the translation information
            auto zed_translation = zed_pose.getTranslation();
            // Get the orientation information
            auto zed_orientation = zed_pose.getOrientation();
            // Get the timestamp
            auto ts = zed_pose.timestamp.getNanoseconds();

            // Display the translation, orientation and timestamp
            cout << "Camera Translation: {" << zed_translation << "}, Orientation: {" << zed_orientation << "}, timestamp: " << ts << "ns\n";

            // Display IMU data
            if (zed_has_imu) {
                // Get IMU data at the time the image was captured
                zed.getSensorsData(sensor_data, TIME_REFERENCE::IMAGE);
                // Get the filtered orientation quaternion
                auto imu_orientation = sensor_data.imu.pose.getOrientation();
                // Get the raw acceleration
                auto acceleration = sensor_data.imu.linear_acceleration;
                cout << "IMU Orientation: {" << imu_orientation << "}, Acceleration: {" << acceleration << "}\n";
            }
            i++;
        }
    }

    // Disable positional tracking and close the camera
    zed.disablePositionalTracking();
    zed.close();
    return EXIT_SUCCESS;
}

Create a camera

As in previous tutorials, we create, configure and open the ZED.

// Create a ZED camera object
Camera zed;

// Set configuration parameters
InitParameters init_params;
init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
init_params.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
init_params.coordinate_units = UNIT::METER; // Set units in meters

// Open the camera
ERROR_CODE err = zed.open(init_params);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

Enable positional tracking

Once the camera is opened, we must enable the positional tracking module in order to get the position and orientation of the ZED.

// Enable positional tracking with default parameters
sl::PositionalTrackingParameters tracking_parameters;
err = zed.enablePositionalTracking(tracking_parameters);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

In the above example, we keep the default tracking parameters. For the full list of available parameters, check the online documentation.

Capture pose data

Now that the ZED is opened and the positional tracking enabled, we create a loop to grab and retrieve the camera position.

The camera position is given by the class sl::Pose. This class contains the translation and orientation of the camera, as well as the image timestamp and tracking confidence (quality).
A pose is always linked to a reference frame. The SDK provides two reference frames: REFERENCE_FRAME::WORLD and REFERENCE_FRAME::CAMERA.
It is not the purpose of this tutorial to go into the details of these reference frames. Read the documentation for more information.
In this example, we get the device position in the World Frame.

// Track the camera position during 1000 frames
int i = 0;
sl::Pose zed_pose;
while (i < 1000) {
    if (zed.grab() == ERROR_CODE::SUCCESS) {
        // Get the pose of the left eye of the camera with reference to the world frame
        zed.getPosition(zed_pose, REFERENCE_FRAME::WORLD);

        // Display the translation and timestamp
        printf("Translation: Tx: %.3f, Ty: %.3f, Tz: %.3f, Timestamp: %llu\n",
               zed_pose.getTranslation().tx, zed_pose.getTranslation().ty,
               zed_pose.getTranslation().tz, zed_pose.timestamp.getNanoseconds());

        // Display the orientation quaternion
        printf("Orientation: Ox: %.3f, Oy: %.3f, Oz: %.3f, Ow: %.3f\n\n",
               zed_pose.getOrientation().ox, zed_pose.getOrientation().oy,
               zed_pose.getOrientation().oz, zed_pose.getOrientation().ow);
        i++;
    }
}

Inertial Data

If a ZED Mini is open, we can access the inertial data from the integrated IMU:

bool zed_mini = (zed.getCameraInformation().camera_model == MODEL::ZED_M);

First, we test that the opened camera is a ZED Mini, then we display some useful IMU data, such as the orientation quaternion and the linear acceleration.

if (zed_mini) { // Display IMU data
    // Get IMU data at the image timestamp
    zed.getIMUData(imu_data, TIME_REFERENCE::IMAGE);

    // Filtered orientation quaternion
    printf("IMU Orientation: Ox: %.3f, Oy: %.3f, Oz: %.3f, Ow: %.3f\n",
           imu_data.getOrientation().ox, imu_data.getOrientation().oy,
           imu_data.getOrientation().oz, imu_data.getOrientation().ow);

    // Raw acceleration
    printf("IMU Acceleration: x: %.3f, y: %.3f, z: %.3f\n",
           imu_data.linear_acceleration.x, imu_data.linear_acceleration.y,
           imu_data.linear_acceleration.z);
}

This will loop until the ZED has been tracked for 1000 frames. We display the camera translation (in meters) in the console window and close the camera before exiting the application.

// Disable positional tracking and close the camera
zed.disablePositionalTracking();
zed.close();
return 0;

You can now use the ZED as an inside-out positional tracker. Read the next tutorial to learn how to use spatial mapping.

CMAKE_MINIMUM_REQUIRED(VERSION 2.4)
PROJECT(ZED_Tutorial_4)

option(LINK_SHARED_ZED "Link with the ZED SDK shared executable" ON)

if (NOT LINK_SHARED_ZED AND MSVC)
    message(FATAL_ERROR "LINK_SHARED_ZED OFF : ZED SDK static libraries not available on Windows")
endif()

if(COMMAND cmake_policy)
    cmake_policy(SET CMP0003 OLD)
    cmake_policy(SET CMP0015 OLD)
endif(COMMAND cmake_policy)

if (NOT CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "")
    SET(CMAKE_BUILD_TYPE "RelWithDebInfo")
endif()

SET(EXECUTABLE_OUTPUT_PATH ".")

find_package(ZED 3 REQUIRED)
find_package(CUDA ${ZED_CUDA_VERSION} EXACT REQUIRED)

include_directories(${CUDA_INCLUDE_DIRS})
include_directories(${ZED_INCLUDE_DIRS})
link_directories(${ZED_LIBRARY_DIR})
link_directories(${CUDA_LIBRARY_DIRS})

ADD_EXECUTABLE(${PROJECT_NAME} main.cpp)
add_definitions(-std=c++14 -O3)

if (LINK_SHARED_ZED)
    SET(ZED_LIBS ${ZED_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_CUDART_LIBRARY})
else()
    SET(ZED_LIBS ${ZED_STATIC_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_LIBRARY})
endif()

TARGET_LINK_LIBRARIES(${PROJECT_NAME} ${ZED_LIBS})

Tutorial 5: Spatial mapping with the ZED

This tutorial shows how to use the spatial mapping module with the ZED. It will loop until 500 frames are grabbed, extract a mesh, filter it and save it as an .obj file.
We assume that you have followed the previous tutorials.

Prerequisites

  • Windows 10, Ubuntu LTS, L4T
  • ZED SDK and its dependencies (CUDA)

Build the program

Download the sample and follow the instructions below:

Build for Windows

  • Create a "build" folder in the source folder
  • Open cmake-gui and select the source and build folders
  • Generate the Visual Studio Win64 solution
  • Open the resulting solution and change configuration to Release
  • Build solution

Build for Linux

Open a terminal in the sample directory and execute the following command:

mkdir build
cd build
cmake ..
make

Code overview


#include <sl/Camera.hpp>

using namespace std;
using namespace sl;

int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_parameters;
    init_parameters.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
    init_parameters.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
    init_parameters.coordinate_units = UNIT::METER; // Set units in meters

    // Open the camera
    auto returned_state = zed.open(init_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Enable positional tracking with default parameters.
    // Positional tracking needs to be enabled before using spatial mapping
    sl::PositionalTrackingParameters tracking_parameters;
    returned_state = zed.enablePositionalTracking(tracking_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Enable spatial mapping
    sl::SpatialMappingParameters mapping_parameters;
    returned_state = zed.enableSpatialMapping(mapping_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Grab data during 500 frames
    int i = 0;
    sl::Mesh mesh; // Create a mesh object
    while (i < 500) {
        // For each new grab, mesh data is updated
        if (zed.grab() == ERROR_CODE::SUCCESS) {
            // In the background, spatial mapping will use newly retrieved images, depth and pose to update the mesh
            sl::SPATIAL_MAPPING_STATE mapping_state = zed.getSpatialMappingState();

            // Print spatial mapping state
            cout << "\rImages captured: " << i << " / 500  ||  Spatial mapping state: " << mapping_state << "\t" << flush;
            i++;
        }
    }
    cout << endl;

    // Extract, filter and save the mesh in an obj file
    cout << "Extracting Mesh...\n";
    zed.extractWholeSpatialMap(mesh); // Extract the whole mesh
    cout << "Filtering Mesh...\n";
    mesh.filter(sl::MeshFilterParameters::MESH_FILTER::LOW); // Filter the mesh (remove unnecessary vertices and faces)
    cout << "Saving Mesh...\n";
    mesh.save("mesh.obj"); // Save the mesh in an obj file

    // Disable tracking and mapping and close the camera
    zed.disableSpatialMapping();
    zed.disablePositionalTracking();
    zed.close();
    return EXIT_SUCCESS;
}

Create a camera

As in previous tutorials, we create, configure and open the ZED. In this example, we choose a right-handed coordinate system with the Y axis up, since it is the most common system in 3D viewing software (MeshLab, for example).

// Create a ZED camera object
Camera zed;

// Set configuration parameters
InitParameters init_params;
init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
init_params.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
init_params.coordinate_units = UNIT::METER; // Set units in meters

// Open the camera
ERROR_CODE err = zed.open(init_params);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

Enable positional tracking

Spatial mapping needs positional tracking to be active. Therefore, as in tutorial 4 - Positional tracking, we need to enable the tracking module first.

sl::PositionalTrackingParameters tracking_parameters;
err = zed.enablePositionalTracking(tracking_parameters);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

Enable spatial mapping

Now that tracking is enabled, we need to enable the spatial mapping module. You will see that it is very similar to positional tracking: we create spatial mapping parameters and call the enableSpatialMapping() function with them.

sl::SpatialMappingParameters mapping_parameters;
err = zed.enableSpatialMapping(mapping_parameters);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

It is not the purpose of this tutorial to go into the details of the SpatialMappingParameters class, but you will find more information in the API documentation.

The spatial mapping is now activated.

Capture data

Spatial mapping does not require any function call in the grab process. The ZED SDK checks that a new image, depth and position can be ingested in the mapping module and automatically launches the calculation asynchronously.
This means that you simply have to grab images to have a mesh created in the background.
In this tutorial, we grab 500 frames and then stop the loop to extract the mesh.

// Grab data during 500 frames
int i = 0;
sl::Mesh mesh; // Create a mesh object
while (i < 500) {
    if (zed.grab() == ERROR_CODE::SUCCESS) {
        // In the background, spatial mapping will use new images, depth and pose to create and update the mesh.
        // No specific functions are required here
        sl::SPATIAL_MAPPING_STATE mapping_state = zed.getSpatialMappingState();

        // Print spatial mapping state
        std::cout << "\rImages captured: " << i << " / 500  ||  Spatial mapping state: " << mapping_state << "                     " << std::flush;
        i++;
    }
}

Extract mesh

We have now grabbed 500 frames and the mesh has been created in the background. Now we need to extract it.
First, we need a mesh object to manipulate: a sl::Mesh. Then we launch the extraction with Camera::extractWholeSpatialMap(). This function blocks until the mesh is available.

zed.extractWholeSpatialMap(mesh); // Extract the whole mesh

We now have a mesh. This mesh can be filtered (if needed) to remove duplicate vertices and unneeded faces, making it lighter to manipulate.
Since we are manipulating the mesh, this function is a member of sl::Mesh.

mesh.filter(sl::MeshFilterParameters::MESH_FILTER::LOW); // Filter the mesh (remove unnecessary vertices and faces)

You can see that filter() takes a filtering parameter, allowing you to fine-tune the processing. Again, more information on the filtering parameters is given in the API documentation.

You can now save the mesh as an obj file for external manipulation:

mesh.save("mesh.obj"); // Save the mesh in an obj file

Disable modules and exit

Once the mesh is extracted and saved, don't forget to disable the modules and close the camera before exiting the program.
Since spatial mapping requires positional tracking, always disable spatial mapping before disabling tracking.

// Disable tracking and mapping and close the camera
zed.disableSpatialMapping();
zed.disablePositionalTracking();
zed.close();
return 0;

And this is it!

You can now map your environment with the ZED.

CMAKE_MINIMUM_REQUIRED(VERSION 2.4)
PROJECT(ZED_Tutorial_5)

option(LINK_SHARED_ZED "Link with the ZED SDK shared executable" ON)

if (NOT LINK_SHARED_ZED AND MSVC)
    message(FATAL_ERROR "LINK_SHARED_ZED OFF : ZED SDK static libraries not available on Windows")
endif()

if(COMMAND cmake_policy)
    cmake_policy(SET CMP0003 OLD)
    cmake_policy(SET CMP0015 OLD)
endif(COMMAND cmake_policy)

if (NOT CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "")
    SET(CMAKE_BUILD_TYPE "RelWithDebInfo")
endif()

SET(EXECUTABLE_OUTPUT_PATH ".")

find_package(ZED 3 REQUIRED)
find_package(CUDA ${ZED_CUDA_VERSION} EXACT REQUIRED)

include_directories(${CUDA_INCLUDE_DIRS})
include_directories(${ZED_INCLUDE_DIRS})
link_directories(${ZED_LIBRARY_DIR})
link_directories(${CUDA_LIBRARY_DIRS})

ADD_EXECUTABLE(${PROJECT_NAME} main.cpp)
add_definitions(-std=c++14 -O3)

if (LINK_SHARED_ZED)
    SET(ZED_LIBS ${ZED_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_CUDART_LIBRARY})
else()
    SET(ZED_LIBS ${ZED_STATIC_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_LIBRARY})
endif()

TARGET_LINK_LIBRARIES(${PROJECT_NAME} ${ZED_LIBS})

Tutorial 6: Object Detection with the ZED 2

This tutorial shows how to use the object detection module with the ZED 2.
We assume that you have followed previous tutorials.

Prerequisites

  • Windows 10, Ubuntu LTS, L4T
  • ZED SDK and its dependencies (CUDA)

Build the program

Download the sample and follow the instructions below:

Build for Windows

  • Create a "build" folder in the source folder
  • Open cmake-gui and select the source and build folders
  • Generate the Visual Studio Win64 solution
  • Open the resulting solution and change configuration to Release
  • Build solution

Build for Linux

Open a terminal in the sample directory and execute the following command:

mkdir build
cd build
cmake ..
make

Code overview


/**********************************************************************************
 ** This sample demonstrates how to use the object detection module              **
 ** with the ZED SDK and display the result                                      **
 **********************************************************************************/

// Standard includes
#include <iostream>
#include <iomanip> // for setprecision
#include <fstream>

// ZED includes
#include <sl/Camera.hpp>

// Using std and sl namespaces
using namespace std;
using namespace sl;

int main(int argc, char** argv) {
    // Create ZED objects
    Camera zed;
    InitParameters init_parameters;
    init_parameters.camera_resolution = RESOLUTION::HD720;
    init_parameters.depth_mode = DEPTH_MODE::PERFORMANCE;
    init_parameters.coordinate_units = UNIT::METER;
    init_parameters.sdk_verbose = true;

    // Open the camera
    auto returned_state = zed.open(init_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Define the object detection module parameters
    ObjectDetectionParameters detection_parameters;
    // Run detection for every camera grab
    detection_parameters.image_sync = true;
    // Track detected objects across time and space
    detection_parameters.enable_tracking = true;
    // Compute a binary mask for each object aligned on the left image
    detection_parameters.enable_mask_output = true; // designed to give a person pixel mask

    // Object tracking requires positional tracking to be enabled first
    if (detection_parameters.enable_tracking)
        zed.enablePositionalTracking();

    cout << "Object Detection: Loading Module..." << endl;
    returned_state = zed.enableObjectDetection(detection_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        zed.close();
        return EXIT_FAILURE;
    }

    // Detection runtime parameters
    ObjectDetectionRuntimeParameters detection_parameters_rt;
    // Detection output
    Objects objects;
    cout << setprecision(3);

    int nb_detection = 0;
    while (nb_detection < 100) {
        if (zed.grab() == ERROR_CODE::SUCCESS) {
            zed.retrieveObjects(objects, detection_parameters_rt);
            if (objects.is_new) {
                cout << objects.object_list.size() << " Object(s) detected\n\n";
                if (!objects.object_list.empty()) {
                    auto first_object = objects.object_list.front();

                    cout << "First object attributes:\n";
                    cout << " Label '" << first_object.label << "' (conf. " << first_object.confidence << "/100)\n";
                    if (detection_parameters.enable_tracking)
                        cout << " Tracking ID: " << first_object.id << " tracking state: "
                             << first_object.tracking_state << " / " << first_object.action_state << "\n";
                    cout << " 3D position: " << first_object.position
                         << " Velocity: " << first_object.velocity << "\n";
                    cout << " 3D dimensions: " << first_object.dimensions << "\n";
                    if (first_object.mask.isInit())
                        cout << " 2D mask available\n";

                    cout << " Bounding Box 2D\n";
                    for (auto it : first_object.bounding_box_2d)
                        cout << "    " << it << "\n";
                    cout << " Bounding Box 3D\n";
                    for (auto it : first_object.bounding_box)
                        cout << "    " << it << "\n";

                    cout << "\nPress 'Enter' to continue...\n";
                    cin.ignore();
                }
                nb_detection++;
            }
        }
    }
    zed.close();
    return EXIT_SUCCESS;
}

Create a camera

As in previous tutorials, we create, configure and open the ZED 2. Please note that the first-generation ZED is not compatible with the object detection module.

This module uses the GPU to perform deep neural network computations. On platforms with a limited amount of memory such as the Jetson Nano, it is advised to disable the GUI to improve performance and avoid memory overflow.

// Create ZED objects
Camera zed;
InitParameters initParameters;
initParameters.camera_resolution = RESOLUTION::HD720;
initParameters.depth_mode = DEPTH_MODE::PERFORMANCE;
initParameters.sdk_verbose = true;

// Open the camera
ERROR_CODE zed_error = zed.open(initParameters);
if (zed_error != ERROR_CODE::SUCCESS) {
    std::cout << "Error " << zed_error << ", exit program.\n";
    return 1; // Quit if an error occurred
}

Enable Object detection

We will now define the object detection parameters. Note that object tracking requires positional tracking to be able to track objects in the world reference frame.

// Define the object detection module parameters
ObjectDetectionParameters detection_parameters;
detection_parameters.enable_tracking = false;
detection_parameters.enable_mask_output = false;
detection_parameters.image_sync = false;

// Object tracking requires the positional tracking module
if (detection_parameters.enable_tracking)
    zed.enablePositionalTracking();

Then we can start the module; it will load the detection model. This operation can take a few seconds. The first time the module is used, the model is optimized for the hardware, which takes more time. This optimization is done only once.

std::cout << "Object Detection: Loading Module..." << std::endl;
zed_error = zed.enableObjectDetection(detection_parameters);
if (zed_error != ERROR_CODE::SUCCESS) {
    std::cout << "Error " << zed_error << ", exit program.\n";
    zed.close();
    return 1;
}

The object detection is now activated.

Capture data

The object confidence threshold can be adjusted at runtime to select only the relevant objects depending on the scene complexity. Since image_sync has been set, for each grab call the image is fed into the AI module, which outputs the detections for each frame.

// Detection runtime parameters
ObjectDetectionRuntimeParameters detection_parameters_rt;
detection_parameters_rt.detection_confidence_threshold = 40;

// Detection output
Objects objects;
while (zed.grab() == ERROR_CODE::SUCCESS) {
    zed_error = zed.retrieveObjects(objects, detection_parameters_rt);
    if (objects.is_new) {
        std::cout << objects.object_list.size() << " Object(s) detected ("
                  << zed.getCurrentFPS() << " FPS)" << std::endl;
    }
}

Disable modules and exit

Once the program is over, the modules can be disabled and the camera closed. This step is optional, since zed.close() takes care of disabling all the modules. This function is also called automatically by the destructor if necessary.

// Disable object detection and close the camera
zed.disableObjectDetection();
zed.close();
return 0;

And this is it!

You can now detect objects in 3D with the ZED 2.


Tutorial 7: Getting sensors data from ZED Mini and ZED2

This tutorial shows how to retrieve sensor data from the ZED Mini and ZED 2. Contrary to the other samples, this one does not focus on images or depth information but on the embedded sensors. It will loop for 5 seconds, printing the retrieved sensor values to the console.
We assume that you have followed previous tutorials.

Prerequisites

  • Windows 10, Ubuntu LTS, L4T
  • ZED SDK and its dependencies (CUDA)

Build the program

Download the sample and follow the instructions below:

Build for Windows

  • Create a "build" folder in the source folder
  • Open cmake-gui and select the source and build folders
  • Generate the Visual Studio Win64 solution
  • Open the resulting solution and change configuration to Release
  • Build solution

Build for Linux

Open a terminal in the sample directory and execute the following command:

mkdir build
cd build
cmake ..
make

Code overview


#include <sl/Camera.hpp>

using namespace std;
using namespace sl;

// Basic structure to compare timestamps of a sensor. Determines if specific sensor data has been updated or not.
struct TimestampHandler {

    // Compare the new timestamp to the last valid one. If it is higher, save it as the new reference.
    inline bool isNew(Timestamp& ts_curr, Timestamp& ts_ref) {
        bool new_ = ts_curr > ts_ref;
        if (new_) ts_ref = ts_curr;
        return new_;
    }
    // Specific function for IMUData.
    inline bool isNew(SensorsData::IMUData& imu_data) {
        return isNew(imu_data.timestamp, ts_imu);
    }
    // Specific function for MagnetometerData.
    inline bool isNew(SensorsData::MagnetometerData& mag_data) {
        return isNew(mag_data.timestamp, ts_mag);
    }
    // Specific function for BarometerData.
    inline bool isNew(SensorsData::BarometerData& baro_data) {
        return isNew(baro_data.timestamp, ts_baro);
    }

    Timestamp ts_imu = 0, ts_baro = 0, ts_mag = 0; // Initial values
};

// Function to display sensor parameters.
void printSensorConfiguration(SensorParameters& sensor_parameters) {
    if (sensor_parameters.isAvailable) {
        cout << "*****************************" << endl;
        cout << "Sensor Type: " << sensor_parameters.type << endl;
        cout << "Max Rate: "    << sensor_parameters.sampling_rate << " " << SENSORS_UNIT::HERTZ << endl;
        cout << "Range: ["      << sensor_parameters.range << "] " << sensor_parameters.sensor_unit << endl;
        cout << "Resolution: "  << sensor_parameters.resolution << " " << sensor_parameters.sensor_unit << endl;
        if (isfinite(sensor_parameters.noise_density))
            cout << "Noise Density: " << sensor_parameters.noise_density << " " << sensor_parameters.sensor_unit << "/√Hz" << endl;
        if (isfinite(sensor_parameters.random_walk))
            cout << "Random Walk: " << sensor_parameters.random_walk << " " << sensor_parameters.sensor_unit << "/s/√Hz" << endl;
    }
}

int main(int argc, char **argv) {
    // Create a ZED camera object.
    Camera zed;

    // Set configuration parameters.
    InitParameters init_parameters;
    init_parameters.depth_mode = DEPTH_MODE::NONE; // No depth computation required here.

    // Open the camera.
    auto returned_state = zed.open(init_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Check camera model.
    auto info = zed.getCameraInformation();
    MODEL cam_model = info.camera_model;
    if (cam_model == MODEL::ZED) {
        cout << "This tutorial only works with ZED 2 and ZED-M cameras. ZED does not have additional sensors.\n" << endl;
        return EXIT_FAILURE;
    }

    // Display camera information (model, serial number, firmware versions).
    cout << "Camera Model: " << cam_model << endl;
    cout << "Serial Number: " << info.serial_number << endl;
    cout << "Camera Firmware: " << info.camera_configuration.firmware_version << endl;
    cout << "Sensors Firmware: " << info.sensors_configuration.firmware_version << endl;

    // Display sensors configuration (imu, barometer, magnetometer).
    printSensorConfiguration(info.sensors_configuration.accelerometer_parameters);
    printSensorConfiguration(info.sensors_configuration.gyroscope_parameters);
    printSensorConfiguration(info.sensors_configuration.magnetometer_parameters);
    printSensorConfiguration(info.sensors_configuration.barometer_parameters);

    // Used to store sensors data.
    SensorsData sensors_data;
    // Used to store sensors timestamps and check if new data is available.
    TimestampHandler ts;

    // Retrieve sensors data during 5 seconds.
    auto start_time = std::chrono::high_resolution_clock::now();
    int count = 0;
    double elapse_time = 0;

    while (elapse_time < 5000) {
        // Depending on your camera model, different sensors are available.
        // They do not run at the same rate: therefore, to not miss any new samples we iterate as fast as possible
        // and compare timestamps to determine when a given sensor's data has been updated.
        // NOTE: There is no need to acquire images with grab(). getSensorsData runs in a separate internal capture thread.
        if (zed.getSensorsData(sensors_data, TIME_REFERENCE::CURRENT) == ERROR_CODE::SUCCESS) {
            // Check if a new IMU sample is available. The IMU is the sensor with the highest update frequency.
            if (ts.isNew(sensors_data.imu)) {
                cout << "Sample " << count++ << "\n";
                cout << " - IMU:\n";
                cout << " \t Orientation: {" << sensors_data.imu.pose.getOrientation() << "}\n";
                cout << " \t Acceleration: {" << sensors_data.imu.linear_acceleration << "} [m/sec^2]\n";
                cout << " \t Angular Velocity: {" << sensors_data.imu.angular_velocity << "} [deg/sec]\n";

                // Check if Magnetometer data has been updated.
                if (ts.isNew(sensors_data.magnetometer))
                    cout << " - Magnetometer\n \t Magnetic Field: {" << sensors_data.magnetometer.magnetic_field_calibrated << "} [uT]\n";

                // Check if Barometer data has been updated.
                if (ts.isNew(sensors_data.barometer))
                    cout << " - Barometer\n \t Atmospheric pressure: " << sensors_data.barometer.pressure << " [hPa]\n";
            }
        }
        // Compute the elapsed time since the beginning of the main loop.
        elapse_time = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::high_resolution_clock::now() - start_time).count();
    }

    // Close the camera
    zed.close();
    return EXIT_SUCCESS;
}

Create a camera

As in previous tutorials, we create, configure and open the ZED camera. Since we do not need depth information here, we can disable its computation to save processing power.

    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_parameters;
    // No depth computation required here
    init_parameters.depth_mode = DEPTH_MODE::NONE;

    // Open the camera
    ERROR_CODE err = zed.open(init_parameters);
    if (err != ERROR_CODE::SUCCESS) {
        cout << "Error " << err << ", exit program.\n";
        return -1;
    }

Sensors data capture

Depending on your camera model, different sensors may send information. To simplify retrieval, a single class, SensorsData, encapsulates all the sensor data.

    SensorsData sensors_data;
    double elapse_time = 0;
    while (elapse_time < 5000) {
        if (zed.getSensorsData(sensors_data, TIME_REFERENCE::CURRENT) == ERROR_CODE::SUCCESS) {
            [...]
        }
    }

Process data

As mentioned above, the sensors run at different frequencies and are stored in a single shared class, which means that between two getSensorsData calls, some sensors may not have new data to provide. To handle this, each sensor reports the timestamp of its data; by checking whether a given timestamp is newer than the previous one, we know whether the data is new.

In this sample we use a basic class, TimestampHandler, to store timestamps and check for data updates.

    TimestampHandler ts;
    if (ts.isNew(sensors_data.imu)) {
        // sensors_data.imu contains new data
    }

If the data has been updated, we display it:

    cout << " - IMU:\n";
    // Filtered orientation quaternion
    cout << " \t Orientation: {" << sensors_data.imu.pose.getOrientation() << "}\n";
    // Filtered acceleration
    cout << " \t Acceleration: {" << sensors_data.imu.linear_acceleration << "} [m/sec^2]\n";
    // Filtered angular velocities
    cout << " \t Angular Velocities: {" << sensors_data.imu.angular_velocity << "} [deg/sec]\n";

    // Check if Magnetometer data has been updated
    if (ts.isNew(sensors_data.magnetometer))
        // Filtered magnetic fields
        cout << " - Magnetometer\n \t Magnetic Field: {" << sensors_data.magnetometer.magnetic_field_calibrated << "} [uT]\n";

    // Check if Barometer data has been updated
    if (ts.isNew(sensors_data.barometer))
        // Atmospheric pressure
        cout << " - Barometer\n \t Atmospheric pressure: " << sensors_data.barometer.pressure << " [hPa]\n";

You do not have to check your camera model to access the sensor fields: if a sensor is not available, its data will contain NaN values and its timestamp will be 0.
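Given that rule, a defensive check before using a reading could look like the following sketch, which assumes exactly what is stated above: unavailable sensors report NaN values and a zero timestamp.

```cpp
#include <cmath>
#include <cstdint>

// A reading is usable only if its timestamp is non-zero and the value is not NaN.
bool sensor_available(float value, uint64_t timestamp_ns) {
    return timestamp_ns != 0 && !std::isnan(value);
}
```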

Depending on your camera model and firmware, different sensors can report their temperature. To access it, iterate over the sensor locations and check whether the data is available:

    cout << " - Temperature\n";
    float temperature;
    for (int s = 0; s < static_cast<int>(SensorsData::TemperatureData::SENSOR_LOCATION::LAST); s++) {
        auto sensor_loc = static_cast<SensorsData::TemperatureData::SENSOR_LOCATION>(s);
        if (sensors_data.temperature.get(sensor_loc, temperature) == ERROR_CODE::SUCCESS)
            cout << " \t " << sensor_loc << ": " << temperature << "C\n";
    }
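The loop above relies on a common C++ pattern for iterating a scoped enum: cast the integer index back to the enum type and stop at a LAST sentinel. A self-contained sketch, with a hypothetical enum standing in for SENSOR_LOCATION (the enumerator names here are illustrative, not the SDK's):

```cpp
#include <vector>

// Hypothetical scoped enum mirroring the shape of
// SensorsData::TemperatureData::SENSOR_LOCATION.
enum class Location { IMU, BAROMETER, ONBOARD_LEFT, ONBOARD_RIGHT, LAST };

// Enumerate every real location by casting the loop index, stopping at LAST.
std::vector<Location> all_locations() {
    std::vector<Location> out;
    for (int s = 0; s < static_cast<int>(Location::LAST); ++s)
        out.push_back(static_cast<Location>(s));
    return out;
}
```

The sentinel trick works because the enumerators take consecutive values starting at 0, which is the default for a scoped enum with no explicit initializers.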

Close camera and exit

Once the data are extracted, don't forget to close the camera before exiting the program.

    // Close camera
    zed.close();
    return 0;

And this is it!

You can now get all the sensor data from ZED-M and ZED2 cameras.

CMAKE_MINIMUM_REQUIRED(VERSION 2.4)
PROJECT(ZED_Tutorial_7)

option(LINK_SHARED_ZED "Link with the ZED SDK shared executable" ON)

if (NOT LINK_SHARED_ZED AND MSVC)
    message(FATAL_ERROR "LINK_SHARED_ZED OFF : ZED SDK static libraries not available on Windows")
endif()

if(COMMAND cmake_policy)
    cmake_policy(SET CMP0003 OLD)
    cmake_policy(SET CMP0015 OLD)
endif(COMMAND cmake_policy)

if (NOT CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "")
    SET(CMAKE_BUILD_TYPE "RelWithDebInfo")
endif()

SET(EXECUTABLE_OUTPUT_PATH ".")

find_package(ZED 3 REQUIRED)
find_package(CUDA ${ZED_CUDA_VERSION} EXACT REQUIRED)

include_directories(${CUDA_INCLUDE_DIRS})
include_directories(${ZED_INCLUDE_DIRS})
link_directories(${ZED_LIBRARY_DIR})
link_directories(${CUDA_LIBRARY_DIRS})

ADD_EXECUTABLE(${PROJECT_NAME} main.cpp)
add_definitions(-std=c++14 -O3)

if (LINK_SHARED_ZED)
    SET(ZED_LIBS ${ZED_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_CUDART_LIBRARY})
else()
    SET(ZED_LIBS ${ZED_STATIC_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_LIBRARY})
endif()

TARGET_LINK_LIBRARIES(${PROJECT_NAME} ${ZED_LIBS})

