Direct LiDAR-Inertial Odometry


DLIO

Running

Demo video: data/dlio_ss.mp4

Paper

Abstract

  • Challenges: fast motion or traversal of irregular terrain degrades accuracy; existing methods either rely on overly simplistic motion assumptions or incur excessive computation.
  • Proposed solution, Direct LiDAR-Inertial Odometry (DLIO): a lightweight LiDAR-inertial odometry algorithm with a new coarse-to-fine approach for constructing continuous-time trajectories for accurate motion correction.
    • Deskewing: the correction equations are parameterized only by time, giving fast, parallelizable per-point offsets.
    • Convergence: IMU preintegration provides a sound state estimate, so motion correction and the registration prior are produced in a single step before scan-to-map alignment.

Contributions:

  • First, we propose a new coarse-to-fine technique for constructing continuous-time trajectories, deriving a set of analytical equations under a constant-jerk and constant angular acceleration motion model for fast, parallelizable point-wise motion correction.
  • Second, we propose a novel condensed architecture that combines motion correction and prior construction into a single step and performs scan-to-map registration directly, significantly reducing the algorithm's overall computational overhead.
  • Third, we leverage a new nonlinear geometric observer [10] with strong performance guarantees (essential for realizing the first two contributions) to robustly generate accurate estimates of the robot's full state with minimal computational complexity. B. T. Lopez, "A contracting hierarchical observer for pose-inertial fusion," arXiv:2303.02777, 2023.
  • Finally, the effectiveness of our approach is verified through extensive experimental results against the state of the art on multiple datasets.

System Overview


  • DLIO's lightweight architecture combines motion correction and prior construction into a single step, and additionally removes the scan-to-scan stage required by previous LiDAR-based odometry pipelines. Point-wise continuous-time integration in W ensures maximum fidelity of the corrected point cloud, which is registered to the robot's map by a custom GICP-based scan matcher. The system's state is subsequently updated by a nonlinear geometric observer [10] with strong convergence properties, and these estimates of pose, velocity, and biases then initialize the next iteration.

Related Work

  • Continuous-time equations are queried for the correction offset of each individual LiDAR point.

  • We use a higher-order motion model to represent the underlying system dynamics, capturing high-frequency motion that would otherwise be lost by methods that fit a smooth trajectory to a set of control points.

  • The method is built into a simplified LIO architecture that performs scan-to-map alignment directly without an intermediate scan-to-scan stage, and carries out motion correction and GICP prior construction in one shot; this is all possible because our novel geometric observer provides strong convergence guarantees with provably correct state estimation.

Method

  • LiDAR points: a scan acquired at time $t_k$ is denoted $P_k$ (subscript $k$); each point $p_k^n \in \mathbb{R}^3$ in $P_k$ carries a time offset $\Delta t_k^n,\ n = 1,\cdots,N$ relative to the scan start.
  • Coordinate frames:
    • world frame $W$
    • robot frame $R$, located at the center of gravity: x forward, y left, z up
    • IMU frame $B$
    • LiDAR frame $L$
  • Robot state vector, where $\hat{x}$ denotes a measured or estimated quantity and $x$ the corresponding true value:
    • $X_k = [\,p_k^W,\ q_k^W,\ v_k^W,\ b_k^a,\ b_k^w\,]$
    • $\hat a_i = (a_i - g) + b_i^a + n_i^a$
    • $\hat w_i = w_i + b_i^w + n_i^w$
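
Rearranging the two measurement models above (and dropping the noise terms) gives the bias-corrected quantities that the IMU callback later reconstructs from raw readings:

$a_i \approx \hat a_i - b_i^a + g, \qquad w_i \approx \hat w_i - b_i^w$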

Preprocessing

  • LiDAR (e.g. Velodyne, 10-20 Hz) and IMU (100-500 Hz); linear acceleration and angular velocity are time-synchronized.
  • All sensors are transformed into the robot frame $R$ via extrinsic calibration.
  • For the IMU, if the sensor does not coincide with the center of gravity, the effect of the displacement on the measured linear acceleration of the rigid body must be accounted for; this is done via cross products with the angular velocity and angular acceleration so that all contributions to the linear acceleration at $R$ are captured.
  • To prevent information loss, the point cloud is not feature-processed (no corner, edge, or planar point extraction); it is only downsampled with a box crop and a voxel filter.
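
As a concrete illustration of this minimal preprocessing, here is a hedged PCL sketch; the standalone `preprocess` helper and the parameter values are illustrative, not DLIO's actual API (DLIO does the same steps inside its node with configurable crop size and voxel resolution):

```cpp
#include <boost/make_shared.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/crop_box.h>
#include <pcl/filters/voxel_grid.h>

pcl::PointCloud<pcl::PointXYZI>::Ptr preprocess(
    const pcl::PointCloud<pcl::PointXYZI>::Ptr& in) {
  auto cropped = boost::make_shared<pcl::PointCloud<pcl::PointXYZI>>();
  auto out = boost::make_shared<pcl::PointCloud<pcl::PointXYZI>>();

  // Remove points within +/-1 m of the sensor (e.g. the robot body)
  pcl::CropBox<pcl::PointXYZI> crop;
  crop.setNegative(true);
  crop.setMin(Eigen::Vector4f(-1.0, -1.0, -1.0, 1.0));
  crop.setMax(Eigen::Vector4f( 1.0,  1.0,  1.0, 1.0));
  crop.setInputCloud(in);
  crop.filter(*cropped);

  // Voxel downsampling at 5 cm resolution (no feature extraction)
  pcl::VoxelGrid<pcl::PointXYZI> voxel;
  voxel.setLeafSize(0.05f, 0.05f, 0.05f);
  voxel.setInputCloud(cropped);
  voxel.filter(*out);
  return out;
}
```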

Algorithm

  • Algorithm 1: LiDAR-Inertial Odometry

    Continuous-time motion correction with a joint prior

    • Because a spinning LiDAR collects points at different instants over a sweep, the point cloud is distorted by motion during the scan.
    • Rather than assuming simple motion during the sweep (i.e. constant velocity), which may not capture fine motion accurately, we use a more precise constant-jerk and constant angular acceleration model, computing a unique transform for each point via a two-step coarse-to-fine propagation scheme. This strategy minimizes errors caused by the IMU sampling rate and the time offset between IMU and LiDAR point measurements. The trajectory over the sweep is first coarsely constructed through numerical IMU integration [29], then refined by solving a set of analytical continuous-time equations in W.
    • For each point cloud, in the frame $W$, the IMU measurements are first integrated:
      • $\hat p_i = \hat p_{i-1} + \hat v_{i-1} \Delta t_i + \frac{1}{2} R(\hat q_{i-1})\hat a_{i-1} \Delta t_i^2 + \frac{1}{6} \hat j_i \Delta t_i^3$
      • $\hat v_i = \hat v_{i-1} + R(\hat q_{i-1})\hat a_{i-1} \Delta t_i$
      • $\hat q_i = \hat q_{i-1} + \frac{1}{2}(\hat q_{i-1} \otimes \hat w_{i-1})\Delta t_i + \frac{1}{4}(\hat q_{i-1} \otimes \hat \alpha_i)\Delta t_i^2$
    • where $\hat j_i = \frac{1}{\Delta t_i}\big(R(\hat q_i)\hat a_i - R(\hat q_{i-1})\hat a_{i-1}\big)$ and $\hat \alpha_i = \frac{1}{\Delta t_i}(\hat w_i - \hat w_{i-1})$ are the jerk and angular acceleration.
    • Continuous-time motion correction: for each point in the cloud, a unique transform is computed by solving a set of closed-form motion equations initialized at the nearest preceding IMU measurement. This yields precise, parallelizable continuous-time motion correction, as the sketch below illustrates.
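
To make the coarse propagation concrete, here is a hedged sketch of one integration step under the constant-jerk / constant angular acceleration model above; the `ImuSample`/`StateSample` types and the `integrateStep` helper are illustrative, not DLIO's actual API:

```cpp
#include <Eigen/Dense>

struct ImuSample {
  double stamp;
  Eigen::Vector3f lin_accel; // bias-corrected linear acceleration
  Eigen::Vector3f ang_vel;   // bias-corrected angular velocity
};

struct StateSample {
  Eigen::Vector3f p, v;
  Eigen::Quaternionf q;
};

// Propagates the state from IMU sample (i-1) to sample i.
StateSample integrateStep(const StateSample& prev,
                          const ImuSample& imu_prev, const ImuSample& imu_curr,
                          const Eigen::Quaternionf& q_curr_estimate) {
  const float dt = static_cast<float>(imu_curr.stamp - imu_prev.stamp);

  const Eigen::Vector3f a_prev = prev.q * imu_prev.lin_accel;            // R(q_{i-1}) a_{i-1}
  const Eigen::Vector3f a_curr = q_curr_estimate * imu_curr.lin_accel;   // R(q_i) a_i
  const Eigen::Vector3f jerk   = (a_curr - a_prev) / dt;                 // \hat j_i
  const Eigen::Vector3f alpha  = (imu_curr.ang_vel - imu_prev.ang_vel) / dt; // \hat alpha_i

  StateSample next;
  next.p = prev.p + prev.v * dt + 0.5f * a_prev * dt * dt
         + (1.0f / 6.0f) * jerk * dt * dt * dt;
  next.v = prev.v + a_prev * dt;

  // q_i = q_{i-1} + 1/2 (q_{i-1} (x) w_{i-1}) dt + 1/4 (q_{i-1} (x) alpha_i) dt^2
  const Eigen::Quaternionf w_prev(0.f, imu_prev.ang_vel.x(), imu_prev.ang_vel.y(), imu_prev.ang_vel.z());
  const Eigen::Quaternionf alpha_q(0.f, alpha.x(), alpha.y(), alpha.z());
  const Eigen::Quaternionf qw = prev.q * w_prev;
  const Eigen::Quaternionf qa = prev.q * alpha_q;
  next.q.coeffs() = prev.q.coeffs() + 0.5f * dt * qw.coeffs() + 0.25f * dt * dt * qa.coeffs();
  next.q.normalize();
  return next;
}
```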

Scan-to-Map Matching

  • GICP solves: $\Delta \hat T_k = \underset{\Delta T_k}{\operatorname{argmin}}\ \mathcal{E}(\Delta T_k \hat P_k^W, \hat S_k^W)$
    • where the error $\mathcal{E}$ is defined as $\mathcal{E}(\Delta T_k \hat P_k^W, \hat S_k^W) = \sum_{c \in \mathcal{C}} d_c^\top \big(C_{k,c}^S + \Delta T_k C_{k,c}^P \Delta T_k^\top\big)^{-1} d_c$
    • over the set of correspondences $\mathcal{C}$ between the scan $\hat P_k^W$ and the submap $\hat S_k^W$, with residual $d_c = \hat s_k^c - \Delta T_k \hat p_k^c$, $\hat p_k^c \in \hat P_k^W$, $\hat s_k^c \in \hat S_k^W$, $\forall c \in \mathcal{C}$
    • $C_{k,c}^P$ and $C_{k,c}^S$ are the covariance matrices of point $c$ in $\hat P_k^W$ and $\hat S_k^W$, respectively
    • the point-to-plane formulation becomes a plane-to-plane optimization by regularizing the covariance matrices $C_{k,c}^P$ and $C_{k,c}^S$ to have eigenvalues $(1, 1, \varepsilon)$, where $\varepsilon$ represents the uncertainty of the surface normal
    • the result $\Delta \hat T_k$ is the optimal correction transform that best aligns the scan with the submap, therefore:
    • $\hat T_k^W = \Delta \hat T_k\, \hat T_M^W$ is the robot's global pose, where $\hat T_M^W$ is the IMU-integrated pose at the last point of the sweep. A sketch of a single residual term follows.
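
For illustration, a hedged sketch of evaluating one GICP plane-to-plane residual $d_c^\top (C_{k,c}^S + \Delta T_k C_{k,c}^P \Delta T_k^\top)^{-1} d_c$; the `gicpResidual` function and its signature are hypothetical, not part of DLIO or nano_gicp:

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>

double gicpResidual(const Eigen::Vector3d& p,      // point from \hat P_k^W
                    const Eigen::Vector3d& s,      // matched point from submap \hat S_k^W
                    const Eigen::Matrix3d& Cp,     // point covariance C_{k,c}^P
                    const Eigen::Matrix3d& Cs,     // submap covariance C_{k,c}^S
                    const Eigen::Isometry3d& dT) { // candidate correction dT_k
  const Eigen::Vector3d d = s - dT * p;            // residual d_c
  // For a rigid transform, only the rotation affects the covariance.
  const Eigen::Matrix3d R = dT.rotation();
  const Eigen::Matrix3d M = Cs + R * Cp * R.transpose();
  return d.dot(M.ldlt().solve(d));                 // d^T M^{-1} d
}
```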

Geometric Observer

  • $\hat T_k^W$ from the scan-to-map alignment is fused with the IMU measurements to generate the full state $\hat X_k$, via a novel hierarchical nonlinear geometric observer.
  • $\gamma_{e \in \{1,\cdots,5\}}$ are positive constants and $\Delta t_k^+$ is the time of the GICP pose; $q_e := (q_e^o, \vec q_e) = \hat q_i^* \otimes \hat q_k$ and $p_e = \hat p_k - \hat p_i$ are the errors between the propagated and measured poses.
  • State correction:
    • $\hat q_i = \hat q_i + \Delta t_k^+ \gamma_1\, \hat q_i \otimes \begin{bmatrix} 1 - |q_e^o| \\ \operatorname{sgn}(q_e^o)\, \vec q_e \end{bmatrix}$
    • $\hat b_i^w = \hat b_i^w - \Delta t_k^+ \gamma_2\, q_e^o\, \vec q_e$
    • $\hat p_i = \hat p_i + \Delta t_k^+ \gamma_3\, p_e$
    • $\hat v_i = \hat v_i + \Delta t_k^+ \gamma_4\, p_e$
    • $\hat b_i^a = \hat b_i^a - \Delta t_k^+ \gamma_5\, R(\hat q_i)^\top p_e$
  • Note that the state correction is hierarchical: the attitude updates (first two equations) are fully decoupled from the translational updates (last three).
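
A hedged sketch of one observer correction step, mirroring the five equations above; the `ObserverState` type is illustrative, and the gains $\gamma_{1..5}$ are set to 1 for brevity (DLIO exposes them as parameters, and also clamps the biases):

```cpp
#include <Eigen/Dense>
#include <cmath>

struct ObserverState {
  Eigen::Vector3f p, v, ba, bg;
  Eigen::Quaternionf q;
};

void observerCorrect(ObserverState& x,
                     const Eigen::Vector3f& p_meas,    // \hat p_k from scan-to-map
                     const Eigen::Quaternionf& q_meas, // \hat q_k from scan-to-map
                     float dt) {                       // dt_k^+
  // Errors between propagated and measured poses
  const Eigen::Quaternionf qe = x.q.conjugate() * q_meas; // q_e
  const Eigen::Vector3f pe = p_meas - x.p;                // p_e
  const float sgn = (qe.w() < 0.f) ? -1.f : 1.f;

  // Attitude and gyro-bias updates (decoupled from translation)
  Eigen::Quaternionf qcorr(1.f - std::abs(qe.w()),
                           sgn * qe.x(), sgn * qe.y(), sgn * qe.z());
  qcorr = x.q * qcorr;
  x.q.coeffs() += dt * qcorr.coeffs(); // gamma_1 = 1
  x.q.normalize();
  x.bg -= dt * qe.w() * qe.vec();      // gamma_2 = 1

  // Translational updates
  x.p += dt * pe;                      // gamma_3 = 1
  x.v += dt * pe;                      // gamma_4 = 1
  x.ba -= dt * (x.q.conjugate() * pe); // gamma_5 = 1, R(q)^T p_e
}
```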

Code-Odom

Member Variables

  • State: State state;
  • Poses: Pose lidarPose, imuPose
  • Resource usage: CPU type, utilization percentages, previous CPU time samples

Type Definitions

```cpp
// Extrinsics
struct Extrinsics {
  struct SE3 {
    Eigen::Vector3f t;
    Eigen::Matrix3f R;
  };
  SE3 baselink2imu;
  SE3 baselink2lidar;
  Eigen::Matrix4f baselink2imu_T;
  Eigen::Matrix4f baselink2lidar_T;
};
Extrinsics extrinsics;

// Geometric observer
struct Geo {
  bool first_opt_done;
  std::mutex mtx;
  double dp;
  double dq_deg;
  Eigen::Vector3f prev_p;
  Eigen::Quaternionf prev_q;
  Eigen::Vector3f prev_vel;
};
Geo geo;

// IMU biases
struct ImuBias {
  Eigen::Vector3f gyro;
  Eigen::Vector3f accel;
};

// Velocity -> linear + angular -> body frame + world frame
struct Frames {
  Eigen::Vector3f b;
  Eigen::Vector3f w;
};
struct Velocity {
  Frames lin;
  Frames ang;
};

// State vector
struct State {
  Eigen::Vector3f p;    // position in world frame
  Eigen::Quaternionf q; // orientation in world frame
  Velocity v;
  ImuBias b;            // imu biases in body frame
};
State state;

// Pose
struct Pose {
  Eigen::Vector3f p;    // position in world frame
  Eigen::Quaternionf q; // orientation in world frame
};
Pose lidarPose;
Pose imuPose;

// Metrics
struct Metrics {
  std::vector<float> spaciousness;
  std::vector<float> density;
};
Metrics metrics;

std::string cpu_type;
std::vector<double> cpu_percents;
clock_t lastCPU, lastSysCPU, lastUserCPU;
int numProcessors;
```

Construction

Constructor

Steps:

  • Read parameters

      • version, frames, ns (get the node namespace and strip the leading character), concatenated frame names (ns + frame)
      • deskew the point cloud?, gravity magnitude (9.80665), keyframe thresholds (D: 0.1, R: 1.0)
      • submap (knn: 10, kcv: 10, kcc: 10)
      • filter the dense map?, wait until movement before publishing the map?, etc.
      • extrinsics: center of gravity to IMU, center of gravity to LiDAR
      • IMU parameters: calibrate accel/gyro?, imu_calib_time (3), imu_buff (2000), gravity_align?, imu_calib?, scale-misalignment matrix
      • if the IMU is not to be calibrated, the calibration parameters take their prior/default values
      • GICP parameters, geometric observer parameters
  • Reset parameters

    • dlio_initialized: false, first valid LiDAR scan: false
  • Subscribers: LiDAR + IMU; publishers: odom + pose + path + kf_pose + kf_cloud + deskewed cloud

  • Data initialization

    • T, T_prior, T_corr

    • state

    • imu_meas

  • CPU specs -> /proc/cpuinfo

Constructor code:

```cpp
dlio::OdomNode::OdomNode(ros::NodeHandle node_handle) : nh(node_handle) {

  // Read parameters
  this->getParams();

  this->num_threads_ = omp_get_max_threads();

  this->dlio_initialized = false;
  this->first_valid_scan = false;
  this->first_imu_received = false;
  if (this->imu_calibrate_) {
    this->imu_calibrated = false;
  } else {
    this->imu_calibrated = true;
  }
  this->deskew_status = false;
  this->deskew_size = 0;

  // Subscribe to LiDAR and IMU
  this->lidar_sub = this->nh.subscribe("pointcloud", 1,
      &dlio::OdomNode::callbackPointCloud, this, ros::TransportHints().tcpNoDelay());
  this->imu_sub = this->nh.subscribe("imu", 1000,
      &dlio::OdomNode::callbackImu, this, ros::TransportHints().tcpNoDelay());

  this->odom_pub     = this->nh.advertise<nav_msgs::Odometry>("odom", 1, true);
  this->pose_pub     = this->nh.advertise<geometry_msgs::PoseStamped>("pose", 1, true);
  this->path_pub     = this->nh.advertise<nav_msgs::Path>("path", 1, true);
  this->kf_pose_pub  = this->nh.advertise<geometry_msgs::PoseArray>("kf_pose", 1, true);
  this->kf_cloud_pub = this->nh.advertise<sensor_msgs::PointCloud2>("kf_cloud", 1, true);
  this->deskewed_pub = this->nh.advertise<sensor_msgs::PointCloud2>("deskewed", 1, true);

  this->publish_timer = this->nh.createTimer(ros::Duration(0.01), &dlio::OdomNode::publishPose, this);

  this->T = Eigen::Matrix4f::Identity();
  this->T_prior = Eigen::Matrix4f::Identity();
  this->T_corr = Eigen::Matrix4f::Identity();

  this->origin = Eigen::Vector3f(0., 0., 0.);

  this->state.p = Eigen::Vector3f(0., 0., 0.);
  this->state.q = Eigen::Quaternionf(1., 0., 0., 0.);
  this->state.v.lin.b = Eigen::Vector3f(0., 0., 0.);
  this->state.v.lin.w = Eigen::Vector3f(0., 0., 0.);
  this->state.v.ang.b = Eigen::Vector3f(0., 0., 0.);
  this->state.v.ang.w = Eigen::Vector3f(0., 0., 0.);

  this->lidarPose.p = Eigen::Vector3f(0., 0., 0.);
  this->lidarPose.q = Eigen::Quaternionf(1., 0., 0., 0.);

  // IMU measurement initialization
  this->imu_meas.stamp = 0.;
  this->imu_meas.ang_vel[0] = 0.;
  this->imu_meas.ang_vel[1] = 0.;
  this->imu_meas.ang_vel[2] = 0.;
  this->imu_meas.lin_accel[0] = 0.;
  this->imu_meas.lin_accel[1] = 0.;
  this->imu_meas.lin_accel[2] = 0.;

  this->imu_buffer.set_capacity(this->imu_buffer_size_);
  this->first_imu_stamp = 0.;
  this->prev_imu_stamp = 0.;

  this->original_scan = pcl::PointCloud<PointType>::ConstPtr (boost::make_shared<const pcl::PointCloud<PointType>>());
  this->deskewed_scan = pcl::PointCloud<PointType>::ConstPtr (boost::make_shared<const pcl::PointCloud<PointType>>());
  this->current_scan  = pcl::PointCloud<PointType>::ConstPtr (boost::make_shared<const pcl::PointCloud<PointType>>());
  this->submap_cloud  = pcl::PointCloud<PointType>::ConstPtr (boost::make_shared<const pcl::PointCloud<PointType>>());

  this->num_processed_keyframes = 0;

  this->submap_hasChanged = true;
  this->submap_kf_idx_prev.clear();

  this->first_scan_stamp = 0.;
  this->elapsed_time = 0.;
  this->length_traversed = 0.;

  this->convex_hull.setDimension(3);
  this->concave_hull.setDimension(3);
  this->concave_hull.setAlpha(this->keyframe_thresh_dist_);
  this->concave_hull.setKeepInformation(true);

  this->gicp.setCorrespondenceRandomness(this->gicp_k_correspondences_);
  this->gicp.setMaxCorrespondenceDistance(this->gicp_max_corr_dist_);
  this->gicp.setMaximumIterations(this->gicp_max_iter_);
  this->gicp.setTransformationEpsilon(this->gicp_transformation_ep_);
  this->gicp.setRotationEpsilon(this->gicp_rotation_ep_);
  this->gicp.setInitialLambdaFactor(this->gicp_init_lambda_factor_);

  this->gicp_temp.setCorrespondenceRandomness(this->gicp_k_correspondences_);
  this->gicp_temp.setMaxCorrespondenceDistance(this->gicp_max_corr_dist_);
  this->gicp_temp.setMaximumIterations(this->gicp_max_iter_);
  this->gicp_temp.setTransformationEpsilon(this->gicp_transformation_ep_);
  this->gicp_temp.setRotationEpsilon(this->gicp_rotation_ep_);
  this->gicp_temp.setInitialLambdaFactor(this->gicp_init_lambda_factor_);

  pcl::Registration<PointType, PointType>::KdTreeReciprocalPtr temp;
  this->gicp.setSearchMethodSource(temp, true);
  this->gicp.setSearchMethodTarget(temp, true);
  this->gicp_temp.setSearchMethodSource(temp, true);
  this->gicp_temp.setSearchMethodTarget(temp, true);

  this->geo.first_opt_done = false;
  this->geo.prev_vel = Eigen::Vector3f(0., 0., 0.);

  pcl::console::setVerbosityLevel(pcl::console::L_ERROR);

  this->crop.setNegative(true);
  this->crop.setMin(Eigen::Vector4f(-this->crop_size_, -this->crop_size_, -this->crop_size_, 1.0));
  this->crop.setMax(Eigen::Vector4f(this->crop_size_, this->crop_size_, this->crop_size_, 1.0));

  this->voxel.setLeafSize(this->vf_res_, this->vf_res_, this->vf_res_);

  this->metrics.spaciousness.push_back(0.);
  this->metrics.density.push_back(this->gicp_max_corr_dist_);

  // CPU Specs
  char CPUBrandString[0x40];
  memset(CPUBrandString, 0, sizeof(CPUBrandString));
  this->cpu_type = "";

#ifdef HAS_CPUID
  unsigned int CPUInfo[4] = {0,0,0,0};
  __cpuid(0x80000000, CPUInfo[0], CPUInfo[1], CPUInfo[2], CPUInfo[3]);
  unsigned int nExIds = CPUInfo[0];
  for (unsigned int i = 0x80000000; i <= nExIds; ++i) {
    __cpuid(i, CPUInfo[0], CPUInfo[1], CPUInfo[2], CPUInfo[3]);
    if (i == 0x80000002)
      memcpy(CPUBrandString, CPUInfo, sizeof(CPUInfo));
    else if (i == 0x80000003)
      memcpy(CPUBrandString + 16, CPUInfo, sizeof(CPUInfo));
    else if (i == 0x80000004)
      memcpy(CPUBrandString + 32, CPUInfo, sizeof(CPUInfo));
  }
  this->cpu_type = CPUBrandString;
  boost::trim(this->cpu_type);
#endif

  FILE* file;
  struct tms timeSample;
  char line[128];

  this->lastCPU = times(&timeSample);
  this->lastSysCPU = timeSample.tms_stime;
  this->lastUserCPU = timeSample.tms_utime;

  file = fopen("/proc/cpuinfo", "r");
  this->numProcessors = 0;
  while (fgets(line, 128, file) != nullptr) {
    if (strncmp(line, "processor", 9) == 0) this->numProcessors++;
  }
  fclose(file);
}

void dlio::OdomNode::getParams() {

  // Version
  ros::param::param<std::string>("~dlio/version", this->version_, "0.0.0");

  // Frames
  ros::param::param<std::string>("~dlio/frames/odom", this->odom_frame, "odom");
  ros::param::param<std::string>("~dlio/frames/baselink", this->baselink_frame, "base_link");
  ros::param::param<std::string>("~dlio/frames/lidar", this->lidar_frame, "lidar");
  ros::param::param<std::string>("~dlio/frames/imu", this->imu_frame, "imu");

  // Get Node NS and Remove Leading Character
  std::string ns = ros::this_node::getNamespace();
  ns.erase(0,1);

  // Concatenate Frame Name Strings
  this->odom_frame = ns + "/" + this->odom_frame;
  this->baselink_frame = ns + "/" + this->baselink_frame;
  this->lidar_frame = ns + "/" + this->lidar_frame;
  this->imu_frame = ns + "/" + this->imu_frame;

  // Deskew Flag
  ros::param::param<bool>("~dlio/pointcloud/deskew", this->deskew_, true);

  // Gravity
  ros::param::param<double>("~dlio/odom/gravity", this->gravity_, 9.80665);

  // Keyframe Threshold
  ros::param::param<double>("~dlio/odom/keyframe/threshD", this->keyframe_thresh_dist_, 0.1);
  ros::param::param<double>("~dlio/odom/keyframe/threshR", this->keyframe_thresh_rot_, 1.0);

  // Submap
  ros::param::param<int>("~dlio/odom/submap/keyframe/knn", this->submap_knn_, 10);
  ros::param::param<int>("~dlio/odom/submap/keyframe/kcv", this->submap_kcv_, 10);
  ros::param::param<int>("~dlio/odom/submap/keyframe/kcc", this->submap_kcc_, 10);

  // Dense map resolution
  ros::param::param<bool>("~dlio/map/dense/filtered", this->densemap_filtered_, true);

  // Wait until movement to publish map
  ros::param::param<bool>("~dlio/map/waitUntilMove", this->wait_until_move_, false);

  // Crop Box Filter
  ros::param::param<double>("~dlio/odom/preprocessing/cropBoxFilter/size", this->crop_size_, 1.0);

  // Voxel Grid Filter
  ros::param::param<bool>("~dlio/pointcloud/voxelize", this->vf_use_, true);
  ros::param::param<double>("~dlio/odom/preprocessing/voxelFilter/res", this->vf_res_, 0.05);

  // Adaptive Parameters
  ros::param::param<bool>("~dlio/adaptive", this->adaptive_params_, true);

  // Extrinsics
  std::vector<float> t_default{0., 0., 0.};
  std::vector<float> R_default{1., 0., 0., 0., 1., 0., 0., 0., 1.};

  // center of gravity to imu
  std::vector<float> baselink2imu_t, baselink2imu_R;
  ros::param::param<std::vector<float>>("~dlio/extrinsics/baselink2imu/t", baselink2imu_t, t_default);
  ros::param::param<std::vector<float>>("~dlio/extrinsics/baselink2imu/R", baselink2imu_R, R_default);
  this->extrinsics.baselink2imu.t =
    Eigen::Vector3f(baselink2imu_t[0], baselink2imu_t[1], baselink2imu_t[2]);
  this->extrinsics.baselink2imu.R =
    Eigen::Map<const Eigen::Matrix<float, -1, -1, Eigen::RowMajor>>(baselink2imu_R.data(), 3, 3);

  this->extrinsics.baselink2imu_T = Eigen::Matrix4f::Identity();
  this->extrinsics.baselink2imu_T.block(0, 3, 3, 1) = this->extrinsics.baselink2imu.t;
  this->extrinsics.baselink2imu_T.block(0, 0, 3, 3) = this->extrinsics.baselink2imu.R;

  // center of gravity to lidar
  std::vector<float> baselink2lidar_t, baselink2lidar_R;
  ros::param::param<std::vector<float>>("~dlio/extrinsics/baselink2lidar/t", baselink2lidar_t, t_default);
  ros::param::param<std::vector<float>>("~dlio/extrinsics/baselink2lidar/R", baselink2lidar_R, R_default);

  this->extrinsics.baselink2lidar.t =
    Eigen::Vector3f(baselink2lidar_t[0], baselink2lidar_t[1], baselink2lidar_t[2]);
  this->extrinsics.baselink2lidar.R =
    Eigen::Map<const Eigen::Matrix<float, -1, -1, Eigen::RowMajor>>(baselink2lidar_R.data(), 3, 3);

  this->extrinsics.baselink2lidar_T = Eigen::Matrix4f::Identity();
  this->extrinsics.baselink2lidar_T.block(0, 3, 3, 1) = this->extrinsics.baselink2lidar.t;
  this->extrinsics.baselink2lidar_T.block(0, 0, 3, 3) = this->extrinsics.baselink2lidar.R;

  // IMU
  ros::param::param<bool>("~dlio/odom/imu/calibration/accel", this->calibrate_accel_, true);
  ros::param::param<bool>("~dlio/odom/imu/calibration/gyro", this->calibrate_gyro_, true);
  ros::param::param<double>("~dlio/odom/imu/calibration/time", this->imu_calib_time_, 3.0);
  ros::param::param<int>("~dlio/odom/imu/bufferSize", this->imu_buffer_size_, 2000);

  std::vector<float> accel_default{0., 0., 0.}; std::vector<float> prior_accel_bias;
  std::vector<float> gyro_default{0., 0., 0.};  std::vector<float> prior_gyro_bias;

  ros::param::param<bool>("~dlio/odom/imu/approximateGravity", this->gravity_align_, true);
  ros::param::param<bool>("~dlio/imu/calibration", this->imu_calibrate_, true);
  ros::param::param<std::vector<float>>("~dlio/imu/intrinsics/accel/bias", prior_accel_bias, accel_default);
  ros::param::param<std::vector<float>>("~dlio/imu/intrinsics/gyro/bias", prior_gyro_bias, gyro_default);

  // scale-misalignment matrix
  std::vector<float> imu_sm_default{1., 0., 0., 0., 1., 0., 0., 0., 1.};
  std::vector<float> imu_sm;
  ros::param::param<std::vector<float>>("~dlio/imu/intrinsics/accel/sm", imu_sm, imu_sm_default);

  if (!this->imu_calibrate_) {
    this->state.b.accel[0] = prior_accel_bias[0];
    this->state.b.accel[1] = prior_accel_bias[1];
    this->state.b.accel[2] = prior_accel_bias[2];
    this->state.b.gyro[0] = prior_gyro_bias[0];
    this->state.b.gyro[1] = prior_gyro_bias[1];
    this->state.b.gyro[2] = prior_gyro_bias[2];
    this->imu_accel_sm_ = Eigen::Map<const Eigen::Matrix<float, -1, -1, Eigen::RowMajor>>(imu_sm.data(), 3, 3);
  } else {
    this->state.b.accel = Eigen::Vector3f(0., 0., 0.);
    this->state.b.gyro = Eigen::Vector3f(0., 0., 0.);
    this->imu_accel_sm_ = Eigen::Matrix3f::Identity();
  }

  // GICP
  ros::param::param<int>("~dlio/odom/gicp/minNumPoints", this->gicp_min_num_points_, 100);
  ros::param::param<int>("~dlio/odom/gicp/kCorrespondences", this->gicp_k_correspondences_, 20);
  ros::param::param<double>("~dlio/odom/gicp/maxCorrespondenceDistance", this->gicp_max_corr_dist_,
      std::sqrt(std::numeric_limits<double>::max()));
  ros::param::param<int>("~dlio/odom/gicp/maxIterations", this->gicp_max_iter_, 64);
  ros::param::param<double>("~dlio/odom/gicp/transformationEpsilon", this->gicp_transformation_ep_, 0.0005);
  ros::param::param<double>("~dlio/odom/gicp/rotationEpsilon", this->gicp_rotation_ep_, 0.0005);
  ros::param::param<double>("~dlio/odom/gicp/initLambdaFactor", this->gicp_init_lambda_factor_, 1e-9);

  // Geometric Observer
  ros::param::param<double>("~dlio/odom/geo/Kp", this->geo_Kp_, 1.0);
  ros::param::param<double>("~dlio/odom/geo/Kv", this->geo_Kv_, 1.0);
  ros::param::param<double>("~dlio/odom/geo/Kq", this->geo_Kq_, 1.0);
  ros::param::param<double>("~dlio/odom/geo/Kab", this->geo_Kab_, 1.0);
  ros::param::param<double>("~dlio/odom/geo/Kgb", this->geo_Kgb_, 1.0);
  ros::param::param<double>("~dlio/odom/geo/abias_max", this->geo_abias_max_, 1.0);
  ros::param::param<double>("~dlio/odom/geo/gbias_max", this->geo_gbias_max_, 1.0);
}
```
Debug

debug() prints a neatly formatted status panel:

+-------------------------------------------------------------------+
|               Direct LiDAR-Inertial Odometry v1.1.1               |
+-------------------------------------------------------------------+
| Wed Mar 15 03:33:45 2023              Elapsed Time: 44.89 seconds |
| Intel(R) Core(TM) i5-10210U CPU @ 1.60GHz x 8                     |
| Sensor Rates: Ouster @ 19.99 Hz, IMU @ 99.94 Hz                   |
|===================================================================|
| Position     {W}  [xyz] :: 1.5051 -0.5809 2.3530                  |
| Orientation  {W} [wxyz] :: -0.2946 -0.4288 0.1831 0.8342          |
| Lin Velocity {B}  [xyz] :: 1.4235 -0.8611 -0.1093                 |
| Ang Velocity {B}  [xyz] :: 0.0151 -0.0592 0.0305                  |
| Accel Bias        [xyz] :: 2.63260126 -3.64898181 4.58988619      |
| Gyro Bias         [xyz] :: -0.00235719 0.04470301 -0.02521588     |
|                                                                   |
| Distance Traveled  :: 58.0019 meters                              |
| Distance to Origin :: 2.8530 meters                               |
| Registration       :: keyframes: 2, deskewed points: 501          |
|                                                                   |
| Computation Time ::  14.54 ms    // Avg:  13.88 / Max:  92.68     |
| Cores Utilized   ::   0.80 cores // Avg:   0.94 / Max:   4.00     |
| CPU Load         ::  10.00 %     // Avg:  11.74 / Max:  50.00     |
| RAM Allocation   :: 105.98 MB                                     |
+-------------------------------------------------------------------+

Code:

```cpp
void dlio::OdomNode::debug() {

  // Total length traversed
  double length_traversed = 0.;
  Eigen::Vector3f p_curr = Eigen::Vector3f(0., 0., 0.);
  Eigen::Vector3f p_prev = Eigen::Vector3f(0., 0., 0.);
  for (const auto& t : this->trajectory) {
    if (p_prev == Eigen::Vector3f(0., 0., 0.)) {
      p_prev = t.first;
      continue;
    }
    p_curr = t.first;
    double l = sqrt(pow(p_curr[0] - p_prev[0], 2) + pow(p_curr[1] - p_prev[1], 2) + pow(p_curr[2] - p_prev[2], 2));

    if (l >= 0.1) {
      length_traversed += l;
      p_prev = p_curr;
    }
  }
  this->length_traversed = length_traversed;

  // Average computation time
  double avg_comp_time =
    std::accumulate(this->comp_times.begin(), this->comp_times.end(), 0.0) / this->comp_times.size();

  // Average sensor rates
  int win_size = 100;
  double avg_imu_rate;
  double avg_lidar_rate;
  if (this->imu_rates.size() < win_size) {
    avg_imu_rate =
      std::accumulate(this->imu_rates.begin(), this->imu_rates.end(), 0.0) / this->imu_rates.size();
  } else {
    avg_imu_rate =
      std::accumulate(this->imu_rates.end()-win_size, this->imu_rates.end(), 0.0) / win_size;
  }
  if (this->lidar_rates.size() < win_size) {
    avg_lidar_rate =
      std::accumulate(this->lidar_rates.begin(), this->lidar_rates.end(), 0.0) / this->lidar_rates.size();
  } else {
    avg_lidar_rate =
      std::accumulate(this->lidar_rates.end()-win_size, this->lidar_rates.end(), 0.0) / win_size;
  }

  // RAM Usage
  double vm_usage = 0.0;
  double resident_set = 0.0;
  std::ifstream stat_stream("/proc/self/stat", std::ios_base::in); // get info from proc directory
  std::string pid, comm, state, ppid, pgrp, session, tty_nr;
  std::string tpgid, flags, minflt, cminflt, majflt, cmajflt;
  std::string utime, stime, cutime, cstime, priority, nice;
  std::string num_threads, itrealvalue, starttime;
  unsigned long vsize;
  long rss;
  stat_stream >> pid >> comm >> state >> ppid >> pgrp >> session >> tty_nr
              >> tpgid >> flags >> minflt >> cminflt >> majflt >> cmajflt
              >> utime >> stime >> cutime >> cstime >> priority >> nice
              >> num_threads >> itrealvalue >> starttime >> vsize >> rss; // don't care about the rest
  stat_stream.close();
  long page_size_kb = sysconf(_SC_PAGE_SIZE) / 1024; // for x86-64 is configured to use 2MB pages
  vm_usage = vsize / 1024.0;
  resident_set = rss * page_size_kb;

  // CPU Usage
  struct tms timeSample;
  clock_t now;
  double cpu_percent;
  now = times(&timeSample);
  if (now <= this->lastCPU || timeSample.tms_stime < this->lastSysCPU ||
      timeSample.tms_utime < this->lastUserCPU) {
    cpu_percent = -1.0;
  } else {
    cpu_percent = (timeSample.tms_stime - this->lastSysCPU) + (timeSample.tms_utime - this->lastUserCPU);
    cpu_percent /= (now - this->lastCPU);
    cpu_percent /= this->numProcessors;
    cpu_percent *= 100.;
  }
  this->lastCPU = now;
  this->lastSysCPU = timeSample.tms_stime;
  this->lastUserCPU = timeSample.tms_utime;
  this->cpu_percents.push_back(cpu_percent);
  double avg_cpu_usage =
    std::accumulate(this->cpu_percents.begin(), this->cpu_percents.end(), 0.0) / this->cpu_percents.size();

  // Print to terminal
  printf("\033[2J\033[1;1H");

  std::cout << std::endl
            << "+-------------------------------------------------------------------+" << std::endl;
  std::cout << "|               Direct LiDAR-Inertial Odometry v" << this->version_ << "               |"
            << std::endl;
  std::cout << "+-------------------------------------------------------------------+" << std::endl;

  std::time_t curr_time = this->scan_stamp;
  std::string asc_time = std::asctime(std::localtime(&curr_time)); asc_time.pop_back();
  std::cout << "| " << std::left << asc_time;
  std::cout << std::right << std::setfill(' ') << std::setw(42)
            << "Elapsed Time: " + to_string_with_precision(this->elapsed_time, 2) + " seconds "
            << "|" << std::endl;

  if ( !this->cpu_type.empty() ) {
    std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
              << this->cpu_type + " x " + std::to_string(this->numProcessors)
              << "|" << std::endl;
  }

  if (this->sensor == dlio::SensorType::OUSTER) {
    std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
              << "Sensor Rates: Ouster @ " + to_string_with_precision(avg_lidar_rate, 2)
                 + " Hz, IMU @ " + to_string_with_precision(avg_imu_rate, 2) + " Hz"
              << "|" << std::endl;
  } else if (this->sensor == dlio::SensorType::VELODYNE) {
    std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
              << "Sensor Rates: Velodyne @ " + to_string_with_precision(avg_lidar_rate, 2)
                 + " Hz, IMU @ " + to_string_with_precision(avg_imu_rate, 2) + " Hz"
              << "|" << std::endl;
  } else if (this->sensor == dlio::SensorType::HESAI) {
    std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
              << "Sensor Rates: Hesai @ " + to_string_with_precision(avg_lidar_rate, 2)
                 + " Hz, IMU @ " + to_string_with_precision(avg_imu_rate, 2) + " Hz"
              << "|" << std::endl;
  } else {
    std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
              << "Sensor Rates: Unknown LiDAR @ " + to_string_with_precision(avg_lidar_rate, 2)
                 + " Hz, IMU @ " + to_string_with_precision(avg_imu_rate, 2) + " Hz"
              << "|" << std::endl;
  }

  std::cout << "|===================================================================|" << std::endl;

  std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
            << "Position     {W}  [xyz] :: " + to_string_with_precision(this->state.p[0], 4) + " "
               + to_string_with_precision(this->state.p[1], 4) + " "
               + to_string_with_precision(this->state.p[2], 4)
            << "|" << std::endl;
  std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
            << "Orientation  {W} [wxyz] :: " + to_string_with_precision(this->state.q.w(), 4) + " "
               + to_string_with_precision(this->state.q.x(), 4) + " "
               + to_string_with_precision(this->state.q.y(), 4) + " "
               + to_string_with_precision(this->state.q.z(), 4)
            << "|" << std::endl;
  std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
            << "Lin Velocity {B}  [xyz] :: " + to_string_with_precision(this->state.v.lin.b[0], 4) + " "
               + to_string_with_precision(this->state.v.lin.b[1], 4) + " "
               + to_string_with_precision(this->state.v.lin.b[2], 4)
            << "|" << std::endl;
  std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
            << "Ang Velocity {B}  [xyz] :: " + to_string_with_precision(this->state.v.ang.b[0], 4) + " "
               + to_string_with_precision(this->state.v.ang.b[1], 4) + " "
               + to_string_with_precision(this->state.v.ang.b[2], 4)
            << "|" << std::endl;
  std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
            << "Accel Bias        [xyz] :: " + to_string_with_precision(this->state.b.accel[0], 8) + " "
               + to_string_with_precision(this->state.b.accel[1], 8) + " "
               + to_string_with_precision(this->state.b.accel[2], 8)
            << "|" << std::endl;
  std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
            << "Gyro Bias         [xyz] :: " + to_string_with_precision(this->state.b.gyro[0], 8) + " "
               + to_string_with_precision(this->state.b.gyro[1], 8) + " "
               + to_string_with_precision(this->state.b.gyro[2], 8)
            << "|" << std::endl;

  std::cout << "|                                                                   |" << std::endl;

  std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
            << "Distance Traveled  :: " + to_string_with_precision(length_traversed, 4) + " meters"
            << "|" << std::endl;
  std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
            << "Distance to Origin :: "
               + to_string_with_precision( sqrt(pow(this->state.p[0]-this->origin[0],2) +
                                                pow(this->state.p[1]-this->origin[1],2) +
                                                pow(this->state.p[2]-this->origin[2],2)), 4) + " meters"
            << "|" << std::endl;
  std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
            << "Registration       :: keyframes: " + std::to_string(this->keyframes.size()) + ", "
               + "deskewed points: " + std::to_string(this->deskew_size)
            << "|" << std::endl;
  std::cout << "|                                                                   |" << std::endl;

  std::cout << std::right << std::setprecision(2) << std::fixed;
  std::cout << "| Computation Time :: "
            << std::setfill(' ') << std::setw(6) << this->comp_times.back()*1000. << " ms    // Avg: "
            << std::setw(6) << avg_comp_time*1000. << " / Max: "
            << std::setw(6) << *std::max_element(this->comp_times.begin(), this->comp_times.end())*1000.
            << "     |" << std::endl;
  std::cout << "| Cores Utilized   :: "
            << std::setfill(' ') << std::setw(6) << (cpu_percent/100.) * this->numProcessors << " cores // Avg: "
            << std::setw(6) << (avg_cpu_usage/100.) * this->numProcessors << " / Max: "
            << std::setw(6) << (*std::max_element(this->cpu_percents.begin(), this->cpu_percents.end()) / 100.)
                               * this->numProcessors
            << "     |" << std::endl;
  std::cout << "| CPU Load         :: "
            << std::setfill(' ') << std::setw(6) << cpu_percent << " %     // Avg: "
            << std::setw(6) << avg_cpu_usage << " / Max: "
            << std::setw(6) << *std::max_element(this->cpu_percents.begin(), this->cpu_percents.end())
            << "     |" << std::endl;
  std::cout << "| " << std::left << std::setfill(' ') << std::setw(66)
            << "RAM Allocation   :: " + to_string_with_precision(resident_set/1000., 2) + " MB"
            << "|" << std::endl;

  std::cout << "+-------------------------------------------------------------------+" << std::endl;
}
```

Callbacks

callbackImu

Steps:

  • Compute the time difference between consecutive IMU frames: dt; if (dt == 0) dt = 1./200
  • Transform the IMU data with transformImu
    • Transform the angular velocity (identical across a rigid body, so it only needs rotating into the ROS convention): $^B\omega = {}^B_I R\, {}^I\omega$
    • Transform the linear acceleration: $^B a = {}^B_I R\, {}^I a + \frac{{}^B\omega - {}^B\omega_{prev}}{dt} \times (-{}^B_I t) + {}^B\omega \times \big({}^B\omega \times (-{}^B_I t)\big)$, where $^B_I t$ is the body-to-IMU translation (see the sketch after this list)
      • the transformed linear acceleration must account for the component induced by the translational offset
  • If the IMU is not yet calibrated, run the calibration
    • Compute the mean linear acceleration $\vec a_{avg}$ and mean angular velocity $\vec\omega_{avg}$
      • accumulate angular velocity and linear acceleration for imu_calib_time seconds, then average
    • If gravity alignment is requested, estimate the gravity vector
      • gravity vector: $\vec g = \operatorname{normalize}(\vec a_{avg} - b_a) \cdot g$
        • note: the mean acceleration is the current gravity direction, renormalized to magnitude $g$
      • rotation from the gravity vector to gravity: $q_{grav} = \vec g\ \ \mathbf{To}\ \ \vec G_{(0,0,g)}$
        • via Eigen::Quaternionf::FromTwoVectors
        • the alternate representation (within ±180°) is used if it yields a smaller yaw
    • If accel calibration is requested: $b_a = \vec a_{avg} - \vec g$
      • mean acceleration minus the gravity vector
    • If gyro calibration is requested: $b_g = \vec\omega_{avg}$
    • Calibration done
  • Once calibrated, the normal workflow:
    • 1. Compute imu_rate (note: obtained with a low-pass filter)

    • 2. Compute the corrected IMU data (note: biases subtracted)

      • both $b_a$ and $b_g$ are subtracted
    • 3. Store the data in imu_buffer and notify via condition variable

      • notifies the callbackPointCloud thread that IMU data exists for this time
      • insertion is guarded by mtx_imu and uses push_front
      • the container is a boost::circular_buffer with fixed capacity
    • 4. If the first optimization has already run, propagate the state with propagateState (sketched below)

      • note: the state is protected by its own mutex, geo.mtx

      • transform the acceleration from the body frame to the world frame: $^W a = {}^W q \otimes a$

      • position update: $p = p + v\, dt + 0.5\, a\, dt^2$

        • note: gravity must be subtracted from the z acceleration, for both p and v
        • $a = \hat a - \vec G$
      • linear velocity update:

        • $^W v_{lin} = {}^W v_{lin} + a\, dt$
        • $^B v = {}^W q^{-1}\, {}^W v_{lin}$
      • orientation update:

        • $q \leftarrow \operatorname{normalize}\big(q + \tfrac{1}{2}\,(q \otimes [0,\omega])\, dt\big)$
      • angular velocity update:

        • $^B v_{ang} = \omega_{imu}$
        • $^W v_{ang} = {}^W q\, {}^B v_{ang}$
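
Before the full callback below, two hedged sketches of the helpers it relies on: the lever-arm acceleration transform from transformImu, and the dead-reckoning in propagateState. Function names and signatures are illustrative; the real implementations live in the DLIO node and differ in details (member access, locking via geo.mtx, etc.). The `State` type is the struct shown earlier.

```cpp
#include <Eigen/Dense>

// transformImu(): rotate the IMU acceleration into the body frame R and add
// the lever-arm terms from the formula above. R_bi / t_bi are the
// baselink2imu extrinsics; w_b is the body-frame angular velocity.
Eigen::Vector3f transformAccel(const Eigen::Matrix3f& R_bi, const Eigen::Vector3f& t_bi,
                               const Eigen::Vector3f& accel_i,
                               const Eigen::Vector3f& w_b, const Eigen::Vector3f& w_b_prev,
                               float dt) {
  return R_bi * accel_i
       + ((w_b - w_b_prev) / dt).cross(-t_bi)  // angular-acceleration (Euler) term
       + w_b.cross(w_b.cross(-t_bi));          // centripetal term
}

// propagateState(): dead-reckon the state forward by one IMU sample.
void propagateStateSketch(State& s, const Eigen::Vector3f& lin_accel,
                          const Eigen::Vector3f& ang_vel, float dt, float gravity) {
  // {}^W a = {}^W q * a, with gravity subtracted on z
  Eigen::Vector3f a_w = s.q * lin_accel;
  a_w[2] -= gravity;

  // p <- p + v dt + 0.5 a dt^2 ; v <- v + a dt
  s.p += s.v.lin.w * dt + 0.5f * dt * dt * a_w;
  s.v.lin.w += a_w * dt;
  s.v.lin.b = s.q.conjugate() * s.v.lin.w;  // body-frame linear velocity

  // q <- normalize(q + 0.5 dt (q (x) [0, w]))
  Eigen::Quaternionf omega(0.f, ang_vel.x(), ang_vel.y(), ang_vel.z());
  Eigen::Quaternionf tmp = s.q * omega;
  s.q.coeffs() += 0.5f * dt * tmp.coeffs();
  s.q.normalize();

  // angular velocity in body and world frames
  s.v.ang.b = ang_vel;
  s.v.ang.w = s.q * s.v.ang.b;
}
```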
```cpp
void dlio::OdomNode::callbackImu(const sensor_msgs::Imu::ConstPtr& imu_raw) {

  this->first_imu_received = true;

  // Transform the IMU data to base_link
  sensor_msgs::Imu::Ptr imu = this->transformImu( imu_raw );
  this->imu_stamp = imu->header.stamp;

  Eigen::Vector3f lin_accel;
  Eigen::Vector3f ang_vel;

  // Get IMU samples
  ang_vel[0] = imu->angular_velocity.x;
  ang_vel[1] = imu->angular_velocity.y;
  ang_vel[2] = imu->angular_velocity.z;

  lin_accel[0] = imu->linear_acceleration.x;
  lin_accel[1] = imu->linear_acceleration.y;
  lin_accel[2] = imu->linear_acceleration.z;

  if (this->first_imu_stamp == 0.) {
    this->first_imu_stamp = imu->header.stamp.toSec();
  }

  // IMU calibration procedure - do for three seconds
  if (!this->imu_calibrated) {

    static int num_samples = 0;
    static Eigen::Vector3f gyro_avg (0., 0., 0.);
    static Eigen::Vector3f accel_avg (0., 0., 0.);
    static bool print = true;

    if ((imu->header.stamp.toSec() - this->first_imu_stamp) < this->imu_calib_time_) {

      num_samples++;

      gyro_avg[0] += ang_vel[0];
      gyro_avg[1] += ang_vel[1];
      gyro_avg[2] += ang_vel[2];

      accel_avg[0] += lin_accel[0];
      accel_avg[1] += lin_accel[1];
      accel_avg[2] += lin_accel[2];

      if(print) {
        std::cout << std::endl << " Calibrating IMU for " << this->imu_calib_time_ << " seconds... ";
        std::cout.flush();
        print = false;
      }

    } else {

      std::cout << "done" << std::endl << std::endl;

      // Average the collected samples to get the gyro + accel means
      gyro_avg /= num_samples;
      accel_avg /= num_samples;

      Eigen::Vector3f grav_vec (0., 0., this->gravity_);

      // If gravity alignment is requested, estimate the gravity vector
      if (this->gravity_align_) {

        // Estimate gravity vector - Only approximate if biases have not been pre-calibrated
        grav_vec = (accel_avg - this->state.b.accel).normalized() * abs(this->gravity_);
        Eigen::Quaternionf grav_q = Eigen::Quaternionf::FromTwoVectors(grav_vec, Eigen::Vector3f(0., 0., this->gravity_));

        // set gravity aligned orientation
        this->state.q = grav_q;
        this->T.block(0,0,3,3) = this->state.q.toRotationMatrix();
        this->lidarPose.q = this->state.q;

        // rpy
        auto euler = grav_q.toRotationMatrix().eulerAngles(2, 1, 0);
        double yaw = euler[0] * (180.0/M_PI);
        double pitch = euler[1] * (180.0/M_PI);
        double roll = euler[2] * (180.0/M_PI);

        // use alternate representation if the yaw is smaller
        if (abs(remainder(yaw + 180.0, 360.0)) < abs(yaw)) {
          yaw   = remainder(yaw + 180.0,   360.0);
          pitch = remainder(180.0 - pitch, 360.0);
          roll  = remainder(roll + 180.0,  360.0);
        }

        std::cout << " Estimated initial attitude:" << std::endl;
        std::cout << "   Roll  [deg]: " << to_string_with_precision(roll, 4) << std::endl;
        std::cout << "   Pitch [deg]: " << to_string_with_precision(pitch, 4) << std::endl;
        std::cout << "   Yaw   [deg]: " << to_string_with_precision(yaw, 4) << std::endl;
        std::cout << std::endl;
      }

      if (this->calibrate_accel_) {
        // subtract gravity from avg accel to get bias
        this->state.b.accel = accel_avg - grav_vec;
        std::cout << " Accel biases [xyz]: " << to_string_with_precision(this->state.b.accel[0], 8) << ", "
                  << to_string_with_precision(this->state.b.accel[1], 8) << ", "
                  << to_string_with_precision(this->state.b.accel[2], 8) << std::endl;
      }

      if (this->calibrate_gyro_) {
        this->state.b.gyro = gyro_avg;
        std::cout << " Gyro biases  [xyz]: " << to_string_with_precision(this->state.b.gyro[0], 8) << ", "
                  << to_string_with_precision(this->state.b.gyro[1], 8) << ", "
                  << to_string_with_precision(this->state.b.gyro[2], 8) << std::endl;
      }

      this->imu_calibrated = true;
    }

  } else {

    double dt = imu->header.stamp.toSec() - this->prev_imu_stamp;
    if (dt == 0) { dt = 1.0/200.0; }
    this->imu_rates.push_back( 1./dt ); // IMU rate

    // Apply the calibrated bias to the new IMU measurements
    this->imu_meas.stamp = imu->header.stamp.toSec();
    this->imu_meas.dt = dt;
    this->prev_imu_stamp = this->imu_meas.stamp;

    Eigen::Vector3f lin_accel_corrected = (this->imu_accel_sm_ * lin_accel) - this->state.b.accel;
    Eigen::Vector3f ang_vel_corrected = ang_vel - this->state.b.gyro;

    this->imu_meas.lin_accel = lin_accel_corrected;
    this->imu_meas.ang_vel = ang_vel_corrected;

    // Store calibrated IMU measurements into imu buffer for manual integration later
    this->mtx_imu.lock();
    this->imu_buffer.push_front(this->imu_meas);
    this->mtx_imu.unlock();

    // Notify the callbackPointCloud thread that IMU data exists for this time
    this->cv_imu_stamp.notify_one();

    if (this->geo.first_opt_done) {
      // Geometric Observer: Propagate State
      this->propagateState();
    }
  }
}
```
callbackPointCloud
  • Lock and set the main-loop flag: main_loop_running = true

  • If DLIO is not initialized, initialize it with initializeDLIO

    • requires IMU data received and calibration complete
  • Convert the incoming scan to DLIO format with getScanFromROS

    • remove NaN points via pcl::removeNaNFromPointCloud
    • crop-box filtering: drop points within ±1 m, pcl::CropBox<PointType>
    • determine the sensor type from field.name = t/time/timestamp: OUSTER, VELODYNE, HESAI
    • if unknown, deskewing is disabled
  • Preprocess points with preprocessPoints

    • deskewing
      • if deskewing is enabled, call deskewPointcloud, then return while !first_valid_scan
      • otherwise, organize the data directly
        • when !first_valid_scan, set first_valid_scan = true

          • if the first scan's timestamp predates the start of imu_buffer, return
          • otherwise this->first_valid_scan = true
        • integrate the IMU to get the per-point states, integrateImu

    • apply the voxel filter depending on the vf_use_ parameter
  • Check that the scan is acceptable

    • first valid scan not yet set: return
    • fewer points than the GICP minimum threshold: return
  • Compute metrics on a separate thread, computeMetrics

    • computeSpaciousness: average range

      • compute the median horizontal range $\sqrt{x^2+y^2}$ of the current scan, smoothed by a low-pass filter: $0.95\, m_{prev} + 0.05\, m_{curr}$
      • variable: spaciousness
    • computeDensity: average density

      • the current density comes from gicp.source_density_, also low-pass filtered
      • variable: density
  • If adaptive parameters are enabled, adapt them to the scan with setAdaptiveParams (see the sketch after this list)

    • set the keyframe distance threshold from the spaciousness

      • clamp: $sp < 0.5 \Rightarrow sp = 0.5$; $sp > 5 \Rightarrow sp = 5$
      • keyframe_thresh_dist_ = sp
    • set the GICP correspondence distance from spaciousness + density

      • den < 0.5*gicp_max_corr_dist_ ? den = 0.5*gicp_max_corr_dist_
      • den > 2*gicp_max_corr_dist_ ? den = 2*gicp_max_corr_dist_
      • sp < 5 ? den = 0.5*gicp_max_corr_dist_
      • sp > 5 ? den = 2*gicp_max_corr_dist_
      • gicp.setMaxCorrespondenceDistance(den)
  • Set the new frame as the input source, setInputSource

    • this->gicp.setInputSource(this->current_scan)
    • this->gicp.calculateSourceCovariances();
  • If there are no keyframes yet, make the initial frame the first keyframe

    • initialize the input target, initializeInputTarget
    • main-loop flag: main_loop_running = false
    • launch an async thread to build the submap and wait for completion, buildKeyframesAndSubmap
    • return
  • Get the next pose with getNextPose (IMU + S2M + GEO)

  • Update the current keyframe poses and map with updateKeyframes

  • Build keyframe normals and the submap if needed; otherwise signal the waiting builder

    • when a new build is needed:

      • main-loop flag: main_loop_running = false
      • launch an async thread to build the submap, buildKeyframesAndSubmap
    • otherwise: this->submap_build_cv.notify_one();

  • Publish data to the ROS interface, also on its own thread: std::thread( &dlio::OdomNode::publishToROS

  • Update statistics

  • Debug printout, also on its own thread: std::thread( &dlio::OdomNode::debug
```cpp
void dlio::OdomNode::callbackPointCloud(const sensor_msgs::PointCloud2ConstPtr& pc) {

  std::unique_lock<decltype(this->main_loop_running_mutex)> lock(main_loop_running_mutex);
  this->main_loop_running = true;
  lock.unlock();

  double then = ros::Time::now().toSec();

  if (this->first_scan_stamp == 0.) {
    this->first_scan_stamp = pc->header.stamp.toSec();
  }

  // DLIO Initialization procedures (IMU calib, gravity align)
  if (!this->dlio_initialized) {
    this->initializeDLIO();
  }

  // Convert incoming scan into DLIO format
  this->getScanFromROS(pc);

  // Preprocess points
  this->preprocessPoints();

  if (!this->first_valid_scan) {
    return;
  }

  if (this->current_scan->points.size() <= this->gicp_min_num_points_) {
    ROS_FATAL("Low number of points in the cloud!");
    return;
  }

  // Compute Metrics
  this->metrics_thread = std::thread( &dlio::OdomNode::computeMetrics, this );
  this->metrics_thread.detach();

  // Set Adaptive Parameters
  if (this->adaptive_params_) {
    this->setAdaptiveParams();
  }

  // Set new frame as input source
  this->setInputSource();

  // Set initial frame as first keyframe
  if (this->keyframes.size() == 0) {
    this->initializeInputTarget();
    this->main_loop_running = false;
    this->submap_future =
      std::async( std::launch::async, &dlio::OdomNode::buildKeyframesAndSubmap, this, this->state );
    this->submap_future.wait(); // wait until completion
    return;
  }

  // Get the next pose via IMU + S2M + GEO
  this->getNextPose();

  // Update current keyframe poses and map
  this->updateKeyframes();

  // Build keyframe normals and submap if needed (and if we're not already waiting)
  if (this->new_submap_is_ready) {
    this->main_loop_running = false;
    this->submap_future =
      std::async( std::launch::async, &dlio::OdomNode::buildKeyframesAndSubmap, this, this->state );
  } else {
    lock.lock();
    this->main_loop_running = false;
    lock.unlock();
    this->submap_build_cv.notify_one();
  }

  // Update trajectory
  this->trajectory.push_back( std::make_pair(this->state.p, this->state.q) );

  // Update time stamps
  this->lidar_rates.push_back( 1. / (this->scan_stamp - this->prev_scan_stamp) );
  this->prev_scan_stamp = this->scan_stamp;
  this->elapsed_time = this->scan_stamp - this->first_scan_stamp;

  // Publish stuff to ROS
  pcl::PointCloud<PointType>::ConstPtr published_cloud;
  if (this->densemap_filtered_) {
    published_cloud = this->current_scan;
  } else {
    published_cloud = this->deskewed_scan;
  }
  this->publish_thread = std::thread( &dlio::OdomNode::publishToROS, this, published_cloud, this->T_corr );
  this->publish_thread.detach();

  // Update some statistics
  this->comp_times.push_back(ros::Time::now().toSec() - then);
  this->gicp_hasConverged = this->gicp.hasConverged();

  // Debug statements and publish custom DLIO message
  this->debug_thread = std::thread( &dlio::OdomNode::debug, this );
  this->debug_thread.detach();

  this->geo.first_opt_done = true;
}
```

Other Functions

getNextPose

Steps:

  • Check whether a new submap is ready: submap_future.wait_for(std::chrono::seconds(0)) == std::future_status::ready

  • If a new submap is ready and it has changed, update the GICP target

    • GICP target update:
      • set the current global submap as the target cloud, gicp.registerInputTarget
      • set the global gicp.target_kdtree_
      • set the global covariances, gicp.setTargetCovariances
    • then mark the submap as no longer changed: submap_hasChanged = false
  • Align against the current submap using the global IMU transform as the initial guess

    • this->gicp.align(*aligned);
  • Obtain the final pose:

    • correction transform: $T_{corr}$
    • corrected pose: $T = T_{corr}\, T_{prior}$
  • Propagate the next global pose, propagateGICP

    • since both the source and target clouds are in the global frame, the transform is global as well

    • convert the pose T (Matrix4f) -> Pose (Vector3f + Quaternionf), stored in lidarPose

  • Update the state, updateState

    • state bias updates:

      • $q_e = q_{state}^{*} \otimes q_{lidar}$

      • $q_{corr} = \big(1 - |q_{e,w}|,\ \operatorname{sgn}(q_{e,w})\, \vec q_e\big)$

      • $q_{corr} = q_{state} \otimes q_{corr}$

      • $p_{err} = p_{lidar} - p_{state}$

      • $p_{err}^B = q_{state}^{*}\, p_{err}$ (error rotated into the body frame)

      • $b_{acc} = b_{acc} - dt\, K_{ab}\, p_{err}^B$

      • $b_{gyro}\ {-}{=}\ dt\, K_{gb}\, \{q_{e,w} q_{e,x},\ q_{e,w} q_{e,y},\ q_{e,w} q_{e,z}\}$

      • note the clamping idiom: this->state.b.accel.array().min(abias_max).max(-abias_max);

    • state pose updates:

      • $p_{state} = p_{state} + dt\, K_p\, p_{err}$
      • $v_{lin,w} = v_{lin,w} + dt\, K_v\, p_{err}$
      • $q_{state} = q_{state} + dt\, K_q\, q_{corr}$ (then normalized)
    • geo.prev_p, prev_q, prev_vel updates: assigned directly from the state
```cpp
void dlio::OdomNode::getNextPose() {

  // Check if the new submap is ready to be used
  this->new_submap_is_ready = (this->submap_future.wait_for(std::chrono::seconds(0)) == std::future_status::ready);

  if (this->new_submap_is_ready && this->submap_hasChanged) {

    // Set the current global submap as the target cloud
    this->gicp.registerInputTarget(this->submap_cloud);

    // Set submap kdtree
    this->gicp.target_kdtree_ = this->submap_kdtree;

    // Set target cloud's normals as submap normals
    this->gicp.setTargetCovariances(this->submap_normals);

    this->submap_hasChanged = false;
  }

  // Align with current submap with global IMU transformation as initial guess
  pcl::PointCloud<PointType>::Ptr aligned (boost::make_shared<pcl::PointCloud<PointType>>());
  this->gicp.align(*aligned);

  // Get final transformation in global frame
  this->T_corr = this->gicp.getFinalTransformation(); // "correction" transformation
  this->T = this->T_corr * this->T_prior;

  // Update next global pose
  // Both source and target clouds are in the global frame now, so transformation is global
  this->propagateGICP();

  // Geometric observer update
  this->updateState();
}
```
updateKeyframes
  • Keyframe storage format:

    • std::vector<std::pair<std::pair<Eigen::Vector3f, Eigen::Quaternionf>, pcl::PointCloud<PointType>::ConstPtr>> keyframes;

    • keyframe: Pose + pointcloud, where pose = position + rotation

  • Steps:

    • iterate over the existing keyframes and find the one closest to the current pose
      • compute the Euclidean distance directly, $\Delta d = \|p_{state} - p_i\|$
      • count how many are within 1.5x the threshold, num_nearby
        • $\Delta d \le 1.5 \cdot \mathrm{keyframe\_thresh\_dist\_}$ ? ++num_nearby;
      • store the closest index closest_idx and the closest distance closest_d
    • take the closest position + rotation and compute the difference to the current state (position + angle)
      • position difference: $\Delta d = \|p_{state} - p_{close}\|$
      • angle difference:
        • $\delta q = q_{state} \otimes q_{close}^{-1}$ (sign-corrected so both quaternions lie in the same hemisphere)
        • rotation angle: $\theta = 2 \cdot \operatorname{atan2}\big(\|\vec{\delta q}\|,\ \delta q_w\big)$
        • in degrees: $\Theta = 180 \cdot \theta / \pi$
    • new-keyframe criteria (arguably, rules 1 and 2 could be merged):
      • 1. if the position difference exceeds the distance threshold, or the angle difference exceeds the rotation threshold: newKeyframe = true
      • 2. if the position difference is within the distance threshold: newKeyframe = false
      • 3. if the position difference is within the distance threshold, the angle difference exceeds the rotation threshold, and num_nearby <= 1: newKeyframe = true
    • if the current frame becomes a new keyframe, push the data into
      • keyframes
      • keyframe_timestamps
      • keyframe_normals: keyframe_normals.push_back(this->gicp.getSourceCovariances())
      • keyframe_transformations: keyframe_transformations.push_back(this->T_corr);
```cpp
void dlio::OdomNode::updateKeyframes() {

  // calculate difference in pose and rotation to all poses in trajectory
  float closest_d = std::numeric_limits<float>::infinity();
  int closest_idx = 0;
  int keyframes_idx = 0;
  int num_nearby = 0;

  for (const auto& k : this->keyframes) {

    // calculate distance between current pose and pose in keyframes
    float delta_d = sqrt( pow(this->state.p[0] - k.first.first[0], 2) +
                          pow(this->state.p[1] - k.first.first[1], 2) +
                          pow(this->state.p[2] - k.first.first[2], 2) );

    // count the number nearby current pose
    if (delta_d <= this->keyframe_thresh_dist_ * 1.5) {
      ++num_nearby;
    }

    // store into variable
    if (delta_d < closest_d) {
      closest_d = delta_d;
      closest_idx = keyframes_idx;
    }

    keyframes_idx++;
  }

  // get closest pose and corresponding rotation
  Eigen::Vector3f closest_pose = this->keyframes[closest_idx].first.first;
  Eigen::Quaternionf closest_pose_r = this->keyframes[closest_idx].first.second;

  // calculate distance between current pose and closest pose from above
  float dd = sqrt( pow(this->state.p[0] - closest_pose[0], 2) +
                   pow(this->state.p[1] - closest_pose[1], 2) +
                   pow(this->state.p[2] - closest_pose[2], 2) );

  // calculate difference in orientation using SLERP
  Eigen::Quaternionf dq;
  if (this->state.q.dot(closest_pose_r) < 0.) {
    Eigen::Quaternionf lq = closest_pose_r;
    lq.w() *= -1.; lq.x() *= -1.; lq.y() *= -1.; lq.z() *= -1.;
    dq = this->state.q * lq.inverse();
  } else {
    dq = this->state.q * closest_pose_r.inverse();
  }

  double theta_rad = 2. * atan2(sqrt( pow(dq.x(), 2) + pow(dq.y(), 2) + pow(dq.z(), 2) ), dq.w());
  double theta_deg = theta_rad * (180.0/M_PI);

  // update keyframes
  bool newKeyframe = false;

  if (abs(dd) > this->keyframe_thresh_dist_ || abs(theta_deg) > this->keyframe_thresh_rot_) {
    newKeyframe = true;
  }
  if (abs(dd) <= this->keyframe_thresh_dist_) {
    newKeyframe = false;
  }
  if (abs(dd) <= this->keyframe_thresh_dist_ && abs(theta_deg) > this->keyframe_thresh_rot_ && num_nearby <= 1) {
    newKeyframe = true;
  }

  if (newKeyframe) {
    // update keyframe vector
    std::unique_lock<decltype(this->keyframes_mutex)> lock(this->keyframes_mutex);
    this->keyframes.push_back(std::make_pair(std::make_pair(this->lidarPose.p, this->lidarPose.q), this->current_scan));
    this->keyframe_timestamps.push_back(this->scan_header_stamp);
    this->keyframe_normals.push_back(this->gicp.getSourceCovariances());
    this->keyframe_transformations.push_back(this->T_corr);
    lock.unlock();
  }
}
```
buildKeyframesAndSubmap
  • Notes:
    • num_processed_keyframes, the count of keyframes already processed, is a member variable

Steps:

  • Starting from the already-processed keyframes, iterate over new keyframes and update each one's points and covariances with its correction transform

    • take the current keyframe's cloud + covariances, raw_keyframe + raw_covariances, and the correction T

    • transform the cloud by the correction $T$: pcl::transformPointCloud (*raw_keyframe, *transformed_keyframe, T);

      • note: the correction is recorded when a new keyframe is created, but the keyframe itself is not updated until here
    • transform the covariances:

      • std::transform(raw_covariances->begin(), raw_covariances->end(), transformed_covariances->begin(), [&Td](Eigen::Matrix4d cov) { return Td * cov * Td.transpose(); });

    • write the corrected points + covariances back into the keyframe

    • publish the keyframe on its own thread: publish_keyframe_thread = std::thread

  • Pause so as not to steal resources from a running main loop, pauseSubmapBuildIfNeeded

    • with the mutex held, wait until the main loop is no longer running (main_loop_running)
  • Build the submap, buildSubmap

  • Done
```cpp
void dlio::OdomNode::buildKeyframesAndSubmap(State vehicle_state) {

  // transform the new keyframe(s) and associated covariance list(s)
  std::unique_lock<decltype(this->keyframes_mutex)> lock(this->keyframes_mutex);

  for (int i = this->num_processed_keyframes; i < this->keyframes.size(); i++) {
    pcl::PointCloud<PointType>::ConstPtr raw_keyframe = this->keyframes[i].second;
    std::shared_ptr<const nano_gicp::CovarianceList> raw_covariances = this->keyframe_normals[i];
    Eigen::Matrix4f T = this->keyframe_transformations[i];
    lock.unlock();

    Eigen::Matrix4d Td = T.cast<double>();

    pcl::PointCloud<PointType>::Ptr transformed_keyframe (boost::make_shared<pcl::PointCloud<PointType>>());
    pcl::transformPointCloud (*raw_keyframe, *transformed_keyframe, T);

    std::shared_ptr<nano_gicp::CovarianceList> transformed_covariances (std::make_shared<nano_gicp::CovarianceList>(raw_covariances->size()));
    std::transform(raw_covariances->begin(), raw_covariances->end(), transformed_covariances->begin(),
                   [&Td](Eigen::Matrix4d cov) { return Td * cov * Td.transpose(); });

    ++this->num_processed_keyframes;

    lock.lock();
    this->keyframes[i].second = transformed_keyframe;
    this->keyframe_normals[i] = transformed_covariances;

    this->publish_keyframe_thread = std::thread( &dlio::OdomNode::publishKeyframe, this, this->keyframes[i], this->keyframe_timestamps[i] );
    this->publish_keyframe_thread.detach();
  }

  lock.unlock();

  // Pause to prevent stealing resources from the main loop if it is running.
  this->pauseSubmapBuildIfNeeded();

  this->buildSubmap(vehicle_state);
}
```
buildSubmap
  • Requires holding keyframes_mutex

Steps

  • Iterate over the processed keyframes, computing each one's distance to the given vehicle state along with its index

    • $\delta d = \|p_{state} - p_{i,key}\|$, keyframe_nn.push_back(i)
  • Get the indices of the top-K nearest-neighbor keyframe poses, pushSubmapIndices

    • using a std::priority_queue (a max-heap holding at most k distances):

        for (auto d : dists) {
          if (pq.size() >= k && pq.top() > d) {
            pq.push(d);
            pq.pop();
          } else if (pq.size() < k) {
            pq.push(d);
          }
        }

    • the k-th smallest element ends up at the top of the heap

    • iterate over the keyframe set and keep the frames whose distance is <= that k-th value, pushSubmapIndices

  • Get the indices of the top 10 (submap_kcv_) nearest keyframes on the convex hull

    • compute the convex hull, computeConvexHull
      • needs at least 4 processed keyframes, otherwise return immediately
      • build a point cloud cloud from the keyframe positions
      • for each processed keyframe, push its position into the cloud
      • feed it to the hull checker, convex_hull.setInputCloud(cloud)
      • compute the convex polygon of all input points, convex_hull.reconstruct
      • take the polygon's indices, convex_hull.getHullPointIndices
    • look up the distance of each keyframe on the convex hull
      • indexed into the distance array
    • take the top 10 nearest hull indices, pushSubmapIndices
  • Get the indices of the top 10 (submap_kcc_) nearest keyframes on the concave hull

    • compute the concave hull, computeConcaveHull
      • needs at least 5 processed keyframes, otherwise return immediately
      • build a point cloud cloud from the keyframe positions
        • for each processed keyframe, push its position into the cloud
      • feed it to the hull checker, concave_hull.setInputCloud(cloud)
      • compute the concave polygon of all input points, concave_hull.reconstruct
      • take the polygon's indices, concave_hull.getHullPointIndices
    • look up the distance of each keyframe on the concave hull
      • indexed into the distance array
    • take the top 10 nearest hull indices, pushSubmapIndices
  • Sort the current and previous submap keyframe index lists with std::sort and deduplicate with std::unique

    • submap_kf_idx_curr
    • submap_kf_idx_prev
  • If the two lists differ, rebuild the map from the clouds in submap_kf_idx_curr

    • mark the submap as changed: submap_hasChanged = true
    • regenerate the cloud + normals: submap_cloud + submap_normals
    • rebuild the kd-tree from the new cloud
    • update the previous submap index list: submap_kf_idx_prev = submap_kf_idx_curr
```cpp
void dlio::OdomNode::buildSubmap(State vehicle_state) {

  // clear vector of keyframe indices to use for submap
  this->submap_kf_idx_curr.clear();

  // calculate distance between current pose and poses in keyframe set
  std::unique_lock<decltype(this->keyframes_mutex)> lock(this->keyframes_mutex);
  std::vector<float> ds;
  std::vector<int> keyframe_nn;
  for (int i = 0; i < this->num_processed_keyframes; i++) {
    float d = sqrt( pow(vehicle_state.p[0] - this->keyframes[i].first.first[0], 2) +
                    pow(vehicle_state.p[1] - this->keyframes[i].first.first[1], 2) +
                    pow(vehicle_state.p[2] - this->keyframes[i].first.first[2], 2) );
    ds.push_back(d);
    keyframe_nn.push_back(i);
  }
  lock.unlock();

  // get indices for top K nearest neighbor keyframe poses
  this->pushSubmapIndices(ds, this->submap_knn_, keyframe_nn);

  // get convex hull indices
  this->computeConvexHull();

  // get distances for each keyframe on convex hull
  std::vector<float> convex_ds;
  for (const auto& c : this->keyframe_convex) {
    convex_ds.push_back(ds[c]);
  }

  // get indices for top kNN for convex hull
  this->pushSubmapIndices(convex_ds, this->submap_kcv_, this->keyframe_convex);

  // get concave hull indices
  this->computeConcaveHull();

  // get distances for each keyframe on concave hull
  std::vector<float> concave_ds;
  for (const auto& c : this->keyframe_concave) {
    concave_ds.push_back(ds[c]);
  }

  // get indices for top kNN for concave hull
  this->pushSubmapIndices(concave_ds, this->submap_kcc_, this->keyframe_concave);

  // sort current and previous submap kf list of indices
  std::sort(this->submap_kf_idx_curr.begin(), this->submap_kf_idx_curr.end());
  std::sort(this->submap_kf_idx_prev.begin(), this->submap_kf_idx_prev.end());

  // remove duplicate indices
  auto last = std::unique(this->submap_kf_idx_curr.begin(), this->submap_kf_idx_curr.end());
  this->submap_kf_idx_curr.erase(last, this->submap_kf_idx_curr.end());

  // check if submap has changed from previous iteration
  if (this->submap_kf_idx_curr != this->submap_kf_idx_prev) {

    this->submap_hasChanged = true;

    // Pause to prevent stealing resources from the main loop if it is running.
    this->pauseSubmapBuildIfNeeded();

    // reinitialize submap cloud and normals
    pcl::PointCloud<PointType>::Ptr submap_cloud_ (boost::make_shared<pcl::PointCloud<PointType>>());
    std::shared_ptr<nano_gicp::CovarianceList> submap_normals_ (std::make_shared<nano_gicp::CovarianceList>());

    for (auto k : this->submap_kf_idx_curr) {
      // create current submap cloud
      lock.lock();
      *submap_cloud_ += *this->keyframes[k].second;
      lock.unlock();

      // grab corresponding submap cloud's normals
      submap_normals_->insert( std::end(*submap_normals_),
          std::begin(*(this->keyframe_normals[k])), std::end(*(this->keyframe_normals[k])) );
    }

    this->submap_cloud = submap_cloud_;
    this->submap_normals = submap_normals_;

    // Pause to prevent stealing resources from the main loop if it is running.
    this->pauseSubmapBuildIfNeeded();

    this->gicp_temp.setInputTarget(this->submap_cloud);
    this->submap_kdtree = this->gicp_temp.target_kdtree_;

    this->submap_kf_idx_prev = this->submap_kf_idx_curr;
  }
}
```
deskewPointcloud
  • Deskewing: every point's timestamp is made relative to one reference timestamp

Steps:

  • Point timestamps are taken relative to: sweep_ref_time = this->scan_header_stamp
  • Sort the points by timestamp and build the timestamp list
    • define three function objects: timestamp comparison, timestamp inequality, and timestamp extraction
    • sort the points by time with std::partial_sort_copy
    • filter so timestamps are unique: boost::adaptors::indexed() | boost::adaptors::adjacent_filtered
    • extract the timestamps from the points into their own list
      • loop over points_unique_timestamps, filling timestamps and unique_time_indices
    • set the scan time to the median point's time: scan_stamp = timestamps[median_pt_index]
  • Do not process scans until IMU data is present
    • when !this->first_valid_scan, initialize here, then return
  • Deskew using the IMU data
    • integrate the IMU through the timestamps in the global frame to get a pose for each unique point time
      • integrateImu returns the per-timestamp poses: frames
    • if integration fails (no frames between scan start and end), there is likely a synchronization problem between LiDAR and IMU
      • in that case just transform to the base_link frame, pcl::transformPointCloud
      • deskew_status = false, and finish this round
    • if integration succeeds, apply the correction
      • multi-threaded with #pragma omp parallel for num_threads(this->num_threads_)
      • for each point:
        • compose that point's lidar-to-world transform from its base-to-world pose and the extrinsic: $^W_L T = {}^W_B T\, {}^B_L T$
        • transform the point into the world frame: $p^W = {}^W_L T\, p^L$
      • deskewing complete
void dlio::OdomNode::deskewPointcloud() {

  pcl::PointCloud<PointType>::Ptr deskewed_scan_ (boost::make_shared<pcl::PointCloud<PointType>>());
  deskewed_scan_->points.resize(this->original_scan->points.size());

  // individual point timestamps should be relative to this time
  double sweep_ref_time = this->scan_header_stamp.toSec();

  // sort points by timestamp and build list of timestamps
  std::function<bool(const PointType&, const PointType&)> point_time_cmp;
  std::function<bool(boost::range::index_value<PointType&, long>,
                     boost::range::index_value<PointType&, long>)> point_time_neq;
  std::function<double(boost::range::index_value<PointType&, long>)> extract_point_time;

  if (this->sensor == dlio::SensorType::OUSTER) {
    point_time_cmp = [](const PointType& p1, const PointType& p2)
      { return p1.t < p2.t; };
    point_time_neq = [](boost::range::index_value<PointType&, long> p1,
                        boost::range::index_value<PointType&, long> p2)
      { return p1.value().t != p2.value().t; };
    extract_point_time = [&sweep_ref_time](boost::range::index_value<PointType&, long> pt)
      { return sweep_ref_time + pt.value().t * 1e-9f; };
  } else if (this->sensor == dlio::SensorType::VELODYNE) {
    point_time_cmp = [](const PointType& p1, const PointType& p2)
      { return p1.time < p2.time; };
    point_time_neq = [](boost::range::index_value<PointType&, long> p1,
                        boost::range::index_value<PointType&, long> p2)
      { return p1.value().time != p2.value().time; };
    extract_point_time = [&sweep_ref_time](boost::range::index_value<PointType&, long> pt)
      { return sweep_ref_time + pt.value().time; };
  } else if (this->sensor == dlio::SensorType::HESAI) {
    point_time_cmp = [](const PointType& p1, const PointType& p2)
      { return p1.timestamp < p2.timestamp; };
    point_time_neq = [](boost::range::index_value<PointType&, long> p1,
                        boost::range::index_value<PointType&, long> p2)
      { return p1.value().timestamp != p2.value().timestamp; };
    extract_point_time = [&sweep_ref_time](boost::range::index_value<PointType&, long> pt)
      { return pt.value().timestamp; };
  }

  // copy points into deskewed_scan_ in order of timestamp
  std::partial_sort_copy(this->original_scan->points.begin(), this->original_scan->points.end(),
                         deskewed_scan_->points.begin(), deskewed_scan_->points.end(), point_time_cmp);

  // filter unique timestamps
  auto points_unique_timestamps = deskewed_scan_->points
                                  | boost::adaptors::indexed()
                                  | boost::adaptors::adjacent_filtered(point_time_neq);

  // extract timestamps from points and put them in their own list
  std::vector<double> timestamps;
  std::vector<int> unique_time_indices;
  for (auto it = points_unique_timestamps.begin(); it != points_unique_timestamps.end(); it++) {
    timestamps.push_back(extract_point_time(*it));
    unique_time_indices.push_back(it->index());
  }
  unique_time_indices.push_back(deskewed_scan_->points.size());

  // set this->scan_stamp to the timestamp of the median point
  int median_pt_index = timestamps.size() / 2;
  this->scan_stamp = timestamps[median_pt_index];

  // don't process scans until IMU data is present
  if (!this->first_valid_scan) {
    if (this->imu_buffer.empty() || this->scan_stamp <= this->imu_buffer.back().stamp) {
      return;
    }

    this->first_valid_scan = true;
    this->T_prior = this->T; // assume no motion for the first scan
    pcl::transformPointCloud(*deskewed_scan_, *deskewed_scan_,
                             this->T_prior * this->extrinsics.baselink2lidar_T);
    this->deskewed_scan = deskewed_scan_;
    this->deskew_status = true;
    return;
  }

  // IMU prior & deskewing for second scan onwards
  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>> frames;
  frames = this->integrateImu(this->prev_scan_stamp, this->lidarPose.q, this->lidarPose.p,
                              this->geo.prev_vel.cast<float>(), timestamps);
  this->deskew_size = frames.size(); // if integration successful, equal to timestamps.size()

  // if there are no frames between the start and end of the sweep,
  // that probably means that there's a sync issue
  if (frames.size() != timestamps.size()) {
    ROS_FATAL("Bad time sync between LiDAR and IMU!");
    this->T_prior = this->T;
    pcl::transformPointCloud(*deskewed_scan_, *deskewed_scan_,
                             this->T_prior * this->extrinsics.baselink2lidar_T);
    this->deskewed_scan = deskewed_scan_;
    this->deskew_status = false;
    return;
  }

  // update prior to be the estimated pose at the median time of the scan (corresponds to this->scan_stamp)
  this->T_prior = frames[median_pt_index];

  #pragma omp parallel for num_threads(this->num_threads_)
  for (int i = 0; i < timestamps.size(); i++) {
    Eigen::Matrix4f T = frames[i] * this->extrinsics.baselink2lidar_T;

    // transform point to world frame
    for (int k = unique_time_indices[i]; k < unique_time_indices[i+1]; k++) {
      auto& pt = deskewed_scan_->points[k];
      pt.getVector4fMap()[3] = 1.;
      pt.getVector4fMap() = T * pt.getVector4fMap();
    }
  }

  this->deskewed_scan = deskewed_scan_;
  this->deskew_status = true;
}
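The indexed/adjacent_filtered pipeline above is the trickiest part of this function. A standalone toy (not DLIO code) showing how it keeps only the first element of each run of equal timestamps, together with that element's index in the sorted sequence:

// Illustration of boost::adaptors::indexed() | adjacent_filtered(!=):
// the first element is always kept, and each later element is kept only
// when it differs from its predecessor, i.e. the first of each run survives.
#include <boost/range/adaptor/indexed.hpp>
#include <boost/range/adaptor/adjacent_filtered.hpp>
#include <iostream>
#include <vector>

int main() {
  std::vector<double> stamps = {0.00, 0.00, 0.01, 0.01, 0.01, 0.02};

  auto unique_stamps = stamps
      | boost::adaptors::indexed()
      | boost::adaptors::adjacent_filtered(
          [](const auto& a, const auto& b) { return a.value() != b.value(); });

  for (const auto& e : unique_stamps) {
    std::cout << "index " << e.index() << " -> t = " << e.value() << "\n";
  }
  // prints: index 0 -> t = 0, index 2 -> t = 0.01, index 5 -> t = 0.02
}

The surviving indices become unique_time_indices, so points with index in [unique_time_indices[i], unique_time_indices[i+1]) all share timestamp timestamps[i] and receive the same correction transform.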
integrateImu
  • Parameters:
    • start_time: prev_scan_stamp
    • q_init + p_init: lidarPose
    • v_init: geo.prev_vel
    • sorted_timestamps: the query times (the unique point timestamps, or {this->scan_stamp})
  • Flow:
    • Return empty if sorted_timestamps is empty, or if start_time is later than the first query time
    • Find the IMU measurements that bracket [start_time, sorted_timestamps.back()] in the IMU buffer: imuMeasFromTimeRange
    • Let first_imu $f_1$ and second_imu $f_2$ be the two IMU samples around start_time; integrate backwards for their orientations (this is where angular acceleration enters)
      • q_init is the orientation at start_time, between $f_1$ and $f_2$
      • Orientation at $f_1$, with $\alpha = \frac{\omega_2 - \omega_1}{\Delta t}$ and $idt = t_{init} - t_1$: $q_1 = q_{init} \otimes \{1, -\frac{1}{2}(\omega_1 + \frac{1}{2}\alpha \, idt) \, idt\}$
      • Orientation at $f_2$: $q_2 = q_1 \otimes \{1, \frac{1}{2}(\omega_1 + \frac{1}{2}(\omega_2 - \omega_1)) \, \Delta t\}$
    • Rotate the accelerations at $f_1$, $f_2$ into the world frame (gravity removed):
      • $a_1^W = q_1 \otimes a_1^B - \vec g$
      • $a_2^W = q_2 \otimes a_2^B - \vec g$
    • Compute the jerk between the two samples, then the velocity and position at $f_1$:
      • $\mathbf{j} = \frac{a_2^W - a_1^W}{\Delta t}$
      • with $idt = t_{init} - t_1$:
      • $v_1 = v_{init} - a_1^W \, idt - \frac{1}{2} \mathbf{j} \, idt^2$
      • $p_1 = p_{init} - \left( v_1 \, idt + \frac{1}{2} a_1^W \, idt^2 + \frac{1}{6} \mathbf{j} \, idt^3 \right)$
    • Return the poses interpolated at the query times by integrateImuInternal
std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>>
dlio::OdomNode::integrateImu(double start_time, Eigen::Quaternionf q_init, Eigen::Vector3f p_init,
                             Eigen::Vector3f v_init, const std::vector<double>& sorted_timestamps) {

  // sorted_timestamps holds the scan point times to query; bail out on invalid input
  const std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>> empty;
  if (sorted_timestamps.empty() || start_time > sorted_timestamps.front()) {
    // invalid input, return empty vector
    return empty;
  }

  // find the IMU measurements that bracket [start_time, sorted_timestamps.back()]
  boost::circular_buffer<ImuMeas>::reverse_iterator begin_imu_it;
  boost::circular_buffer<ImuMeas>::reverse_iterator end_imu_it;
  if (this->imuMeasFromTimeRange(start_time, sorted_timestamps.back(), begin_imu_it, end_imu_it) == false) {
    // not enough IMU measurements, return empty vector
    return empty;
  }

  // Backwards integration to find pose at first IMU sample
  const ImuMeas& f1 = *begin_imu_it;
  const ImuMeas& f2 = *(begin_imu_it + 1);

  // Time between first two IMU samples
  double dt = f2.dt;

  // Time between first IMU sample and start_time
  double idt = start_time - f1.stamp;

  // Angular acceleration between first two IMU samples
  Eigen::Vector3f alpha_dt = f2.ang_vel - f1.ang_vel;
  Eigen::Vector3f alpha = alpha_dt / dt;

  // Average angular velocity (reversed) between first IMU sample and start_time
  Eigen::Vector3f omega_i = -(f1.ang_vel + 0.5*alpha*idt);

  // Set q_init to orientation at first IMU sample
  q_init = Eigen::Quaternionf(
    q_init.w() - 0.5*( q_init.x()*omega_i[0] + q_init.y()*omega_i[1] + q_init.z()*omega_i[2] ) * idt,
    q_init.x() + 0.5*( q_init.w()*omega_i[0] - q_init.z()*omega_i[1] + q_init.y()*omega_i[2] ) * idt,
    q_init.y() + 0.5*( q_init.z()*omega_i[0] + q_init.w()*omega_i[1] - q_init.x()*omega_i[2] ) * idt,
    q_init.z() + 0.5*( q_init.x()*omega_i[1] - q_init.y()*omega_i[0] + q_init.w()*omega_i[2] ) * idt);
  q_init.normalize();

  // Average angular velocity between first two IMU samples
  Eigen::Vector3f omega = f1.ang_vel + 0.5*alpha_dt;

  // Orientation at second IMU sample
  Eigen::Quaternionf q2(
    q_init.w() - 0.5*( q_init.x()*omega[0] + q_init.y()*omega[1] + q_init.z()*omega[2] ) * dt,
    q_init.x() + 0.5*( q_init.w()*omega[0] - q_init.z()*omega[1] + q_init.y()*omega[2] ) * dt,
    q_init.y() + 0.5*( q_init.z()*omega[0] + q_init.w()*omega[1] - q_init.x()*omega[2] ) * dt,
    q_init.z() + 0.5*( q_init.x()*omega[1] - q_init.y()*omega[0] + q_init.w()*omega[2] ) * dt);
  q2.normalize();

  // Acceleration at first IMU sample
  Eigen::Vector3f a1 = q_init._transformVector(f1.lin_accel);
  a1[2] -= this->gravity_;

  // Acceleration at second IMU sample
  Eigen::Vector3f a2 = q2._transformVector(f2.lin_accel);
  a2[2] -= this->gravity_;

  // Jerk between first two IMU samples
  Eigen::Vector3f j = (a2 - a1) / dt;

  // Set v_init to velocity at first IMU sample (go backwards from start_time)
  v_init -= a1*idt + 0.5*j*idt*idt;

  // Set p_init to position at first IMU sample (go backwards from start_time)
  p_init -= v_init*idt + 0.5*a1*idt*idt + (1/6.)*j*idt*idt*idt;

  return this->integrateImuInternal(q_init, p_init, v_init, sorted_timestamps, begin_imu_it, end_imu_it);
}
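The component-wise Eigen::Quaternionf constructors in integrateImu (and below in integrateImuInternal) are just the expansion of one first-order integration step of the quaternion kinematics, a standard identity:

$$q' = q + \frac{1}{2}\,(q \otimes [0,\ \boldsymbol{\omega}])\,\Delta t$$

which, with $q = (w, x, y, z)$, expands component-wise to

$$
\begin{aligned}
w' &= w - \tfrac{1}{2}(x\,\omega_x + y\,\omega_y + z\,\omega_z)\,\Delta t \\
x' &= x + \tfrac{1}{2}(w\,\omega_x - z\,\omega_y + y\,\omega_z)\,\Delta t \\
y' &= y + \tfrac{1}{2}(z\,\omega_x + w\,\omega_y - x\,\omega_z)\,\Delta t \\
z' &= z + \tfrac{1}{2}(x\,\omega_y - y\,\omega_x + w\,\omega_z)\,\Delta t
\end{aligned}
$$

followed by normalization, which is exactly what each constructor call followed by normalize() spells out.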
imuMeasFromTimeRange
  • Returns, via the reverse iterators begin_imu_it and end_imu_it, the span of IMU measurements in the buffer that brackets [start_time, end_time]
bool dlio::OdomNode::imuMeasFromTimeRange(double start_time, double end_time,
                                          boost::circular_buffer<ImuMeas>::reverse_iterator& begin_imu_it,
                                          boost::circular_buffer<ImuMeas>::reverse_iterator& end_imu_it) {

  // block on the condition variable until the IMU buffer covers end_time
  if (this->imu_buffer.empty() || this->imu_buffer.front().stamp < end_time) {
    // Wait for the latest IMU data
    std::unique_lock<decltype(this->mtx_imu)> lock(this->mtx_imu);
    this->cv_imu_stamp.wait(lock, [this, &end_time]{ return this->imu_buffer.front().stamp >= end_time; });
  }

  // walk the buffer (newest first): last_imu_it ends at the earliest
  // measurement with stamp >= end_time
  auto imu_it = this->imu_buffer.begin();
  auto last_imu_it = imu_it;
  imu_it++;  // advanced once here, so it may be advanced once more below
  while (imu_it != this->imu_buffer.end() && imu_it->stamp >= end_time) {
    last_imu_it = imu_it;
    imu_it++;
  }

  // continue until the first measurement with stamp < start_time
  while (imu_it != this->imu_buffer.end() && imu_it->stamp >= start_time) {
    imu_it++;
  }

  // if we ran off the end of the buffer, the range is not fully covered
  if (imu_it == this->imu_buffer.end()) {
    // not enough IMU measurements, return false
    return false;
  }
  imu_it++;

  // Set reverse iterators (to iterate forward in time)
  end_imu_it = boost::circular_buffer<ImuMeas>::reverse_iterator(last_imu_it);
  begin_imu_it = boost::circular_buffer<ImuMeas>::reverse_iterator(imu_it);

  return true;
}
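The reverse-iterator construction at the end is easy to misread: a reverse iterator built from a forward iterator dereferences to the element *before* it, which is why imu_it is advanced one extra step beforehand. A standalone demonstration (not DLIO code):

// reverse_iterator(it) points to *(it - 1); DLIO relies on this so that
// begin_imu_it covers the sample just before start_time when iterating
// forward in time over a newest-first buffer.
#include <boost/circular_buffer.hpp>
#include <iostream>

int main() {
  boost::circular_buffer<int> buf(4);
  for (int v : {40, 30, 20, 10}) buf.push_back(v);  // newest-first, like the IMU buffer

  auto it = buf.begin() + 3;                        // forward iterator at element 10
  boost::circular_buffer<int>::reverse_iterator rit(it);
  std::cout << *rit << "\n";                        // prints 20, i.e. *(it - 1)
}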
integrateImuInternal

Steps:

  • Initialize the state in the world frame: $q, p, v, a$
  • Iterate over the measurements from begin_imu_it to end_imu_it
    • Take consecutive IMU pairs: $f_0$ = prev_imu_it, $f$ = prev_imu_it + 1
    • From the previous sample $f_0$, compute the state at the current sample $f$
      • Angular acceleration: $\alpha = \frac{\omega_f - \omega_{f_0}}{\Delta t}$; average angular velocity: $\omega_{avg} = \omega_{f_0} + \frac{1}{2} \alpha \, \Delta t$
      • Orientation at $f$: $q_f = q_{f_0} \otimes \{1, \frac{1}{2} \omega_{avg} \, \Delta t\}$
      • Linear acceleration at $f$: $a^W = q_f \otimes a^B - \vec g$; jerk: $j = \frac{a^W - a_0^W}{\Delta t}$
    • While the query time stamp_it from sorted_timestamps falls within this IMU interval, interpolate
      • Compute idt, the offset from the previous IMU sample
      • Orientation: $q_i = q_{f_0} \otimes \{1, \frac{1}{2}(\omega_{f_0} + \frac{1}{2} \alpha \, idt) \, idt\}$
      • Position: $p_i = p_0 + v_0 \, idt + \frac{1}{2} a_0 \, idt^2 + \frac{1}{6} j \, idt^3$
      • Append the resulting pose to the output and advance stamp_it
    • Advance position and velocity to the current sample $f$
      • $p_f = p_0 + v_0 \, \Delta t + \frac{1}{2} a_0 \, \Delta t^2 + \frac{1}{6} j \, \Delta t^3$
      • $v_f = v_0 + a_0 \, \Delta t + \frac{1}{2} j \, \Delta t^2$
    • prev_imu_it = imu_it
std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>>
dlio::OdomNode::integrateImuInternal(Eigen::Quaternionf q_init, Eigen::Vector3f p_init, Eigen::Vector3f v_init,
                                     const std::vector<double>& sorted_timestamps,
                                     boost::circular_buffer<ImuMeas>::reverse_iterator begin_imu_it,
                                     boost::circular_buffer<ImuMeas>::reverse_iterator end_imu_it) {

  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>> imu_se3;

  // Initialization
  Eigen::Quaternionf q = q_init;
  Eigen::Vector3f p = p_init;
  Eigen::Vector3f v = v_init;
  Eigen::Vector3f a = q._transformVector(begin_imu_it->lin_accel);
  a[2] -= this->gravity_;

  // Iterate over IMU measurements and timestamps
  auto prev_imu_it = begin_imu_it;
  auto imu_it = prev_imu_it + 1;
  auto stamp_it = sorted_timestamps.begin();

  for (; imu_it != end_imu_it; imu_it++) {
    const ImuMeas& f0 = *prev_imu_it;
    const ImuMeas& f = *imu_it;

    // Time between IMU samples
    double dt = f.dt;

    // Angular acceleration
    Eigen::Vector3f alpha_dt = f.ang_vel - f0.ang_vel;
    Eigen::Vector3f alpha = alpha_dt / dt;

    // Average angular velocity
    Eigen::Vector3f omega = f0.ang_vel + 0.5*alpha_dt;

    // Orientation
    q = Eigen::Quaternionf(
      q.w() - 0.5*( q.x()*omega[0] + q.y()*omega[1] + q.z()*omega[2] ) * dt,
      q.x() + 0.5*( q.w()*omega[0] - q.z()*omega[1] + q.y()*omega[2] ) * dt,
      q.y() + 0.5*( q.z()*omega[0] + q.w()*omega[1] - q.x()*omega[2] ) * dt,
      q.z() + 0.5*( q.x()*omega[1] - q.y()*omega[0] + q.w()*omega[2] ) * dt);
    q.normalize();

    // Acceleration
    Eigen::Vector3f a0 = a;
    a = q._transformVector(f.lin_accel);
    a[2] -= this->gravity_;

    // Jerk
    Eigen::Vector3f j_dt = a - a0;
    Eigen::Vector3f j = j_dt / dt;

    // Interpolate for given timestamps
    while (stamp_it != sorted_timestamps.end() && *stamp_it <= f.stamp) {
      // Time between previous IMU sample and given timestamp
      double idt = *stamp_it - f0.stamp;

      // Average angular velocity
      Eigen::Vector3f omega_i = f0.ang_vel + 0.5*alpha*idt;

      // Orientation
      Eigen::Quaternionf q_i(
        q.w() - 0.5*( q.x()*omega_i[0] + q.y()*omega_i[1] + q.z()*omega_i[2] ) * idt,
        q.x() + 0.5*( q.w()*omega_i[0] - q.z()*omega_i[1] + q.y()*omega_i[2] ) * idt,
        q.y() + 0.5*( q.z()*omega_i[0] + q.w()*omega_i[1] - q.x()*omega_i[2] ) * idt,
        q.z() + 0.5*( q.x()*omega_i[1] - q.y()*omega_i[0] + q.w()*omega_i[2] ) * idt);
      q_i.normalize();

      // Position
      Eigen::Vector3f p_i = p + v*idt + 0.5*a0*idt*idt + (1/6.)*j*idt*idt*idt;

      // Transformation
      Eigen::Matrix4f T = Eigen::Matrix4f::Identity();
      T.block(0, 0, 3, 3) = q_i.toRotationMatrix();
      T.block(0, 3, 3, 1) = p_i;

      imu_se3.push_back(T);
      stamp_it++;
    }

    // Position
    p += v*dt + 0.5*a0*dt*dt + (1/6.)*j_dt*dt*dt;

    // Velocity
    v += a0*dt + 0.5*j_dt*dt;

    prev_imu_it = imu_it;
  }

  return imu_se3;
}

Code-Map

Constructor

  • Read parameters:

    • odom_frame="odom"; publish_full_map_=true; publish_freq_=1.0; leaf_size_=0.5;
    • Get the node namespace and strip the leading character: ns
    • Concatenate the frame name string: odom_frame = ns + "/" + this->odom_frame;
  • Publish the global map at a fixed frequency: sensor_msgs/PointCloud2

  • Subscribed topics / services:

    • Keyframes: "keyframes", 10, &dlio::MapNode::callbackKeyframe
    • Save-map command: "save_pcd", &dlio::MapNode::savePcd
dlio::MapNode::MapNode(ros::NodeHandle node_handle) : nh(node_handle) {

  this->getParams();

  // publish the accumulated map at a fixed frequency
  this->publish_timer = this->nh.createTimer(ros::Duration(this->publish_freq_),
                                             &dlio::MapNode::publishTimer, this);

  // subscribe to keyframe clouds from the odometry node
  this->keyframe_sub = this->nh.subscribe("keyframes", 10,
                                          &dlio::MapNode::callbackKeyframe, this,
                                          ros::TransportHints().tcpNoDelay());

  this->map_pub = this->nh.advertise<sensor_msgs::PointCloud2>("map", 100);

  // service for dumping the map to a PCD file
  this->save_pcd_srv = this->nh.advertiseService("save_pcd", &dlio::MapNode::savePcd, this);

  this->dlio_map = pcl::PointCloud<PointType>::Ptr (boost::make_shared<pcl::PointCloud<PointType>>());

  pcl::console::setVerbosityLevel(pcl::console::L_ERROR);
}

callbackKeyframe

  • Convert the keyframe from ROS to PCL format: pcl::fromROSMsg
  • Voxel-downsample the keyframe cloud: voxelgrid.filter
  • Accumulate the filtered keyframe into the global map with PCL's operator+=
void dlio::MapNode::callbackKeyframe(const sensor_msgs::PointCloud2ConstPtr& keyframe) {

  // convert scan to pcl format
  pcl::PointCloud<PointType>::Ptr keyframe_pcl =
    pcl::PointCloud<PointType>::Ptr (boost::make_shared<pcl::PointCloud<PointType>>());
  pcl::fromROSMsg(*keyframe, *keyframe_pcl);

  // voxel filter
  this->voxelgrid.setLeafSize(this->leaf_size_, this->leaf_size_, this->leaf_size_);
  this->voxelgrid.setInputCloud(keyframe_pcl);
  this->voxelgrid.filter(*keyframe_pcl);

  // save filtered keyframe to map for rviz
  *this->dlio_map += *keyframe_pcl;
}

savePcd

  • Voxel-filter the map, then save it

Steps:

  • Read the map voxel leaf size and the save path from the request
  • Voxel-downsample the global map
  • Save it to disk as a PCD file (a hedged sketch follows)
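The handler itself is not reproduced in this walkthrough; below is a minimal sketch under the assumption that the service request carries a leaf size and a save path. The function name, signature, and output file name here are hypothetical, not the actual DLIO service definition:

// Hypothetical sketch of the save-map logic: downsample a copy of the
// accumulated map, then write it out as a binary PCD file.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/io/pcd_io.h>
#include <string>

using PointType = pcl::PointXYZI;

bool saveMap(const pcl::PointCloud<PointType>::Ptr& map,
             float leaf_size, const std::string& save_path) {
  // voxel-downsample a copy of the global map
  pcl::PointCloud<PointType>::Ptr filtered(new pcl::PointCloud<PointType>);
  pcl::VoxelGrid<PointType> vg;
  vg.setLeafSize(leaf_size, leaf_size, leaf_size);
  vg.setInputCloud(map);
  vg.filter(*filtered);

  // write to disk; savePCDFileBinary returns 0 on success
  return pcl::io::savePCDFileBinary(save_path + "/dlio_map.pcd", *filtered) == 0;
}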

GICP

calculateSourceCovariances
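The walkthrough stops at this heading. For context, here is a heavily hedged sketch of what a GICP-style per-point covariance routine typically computes; this is not the nano_gicp source, just the standard plane-regularized construction from the original GICP paper, and details such as the neighbor count and regularization values are assumptions:

// Hedged sketch: for each point, take its k nearest neighbors, form the
// neighborhood scatter matrix, and clamp its singular values to (1, 1, eps)
// so every local surface patch behaves like a plane during registration.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <Eigen/Dense>
#include <vector>

using PointType = pcl::PointXYZI;

std::vector<Eigen::Matrix3d> computeCovariances(
    const pcl::PointCloud<PointType>::Ptr& cloud, int k = 20, double eps = 1e-3) {
  pcl::KdTreeFLANN<PointType> tree;
  tree.setInputCloud(cloud);

  std::vector<Eigen::Matrix3d> covs(cloud->size());
  std::vector<int> idx(k);
  std::vector<float> sq_dists(k);

  for (size_t i = 0; i < cloud->size(); i++) {
    tree.nearestKSearch(cloud->points[i], k, idx, sq_dists);

    // mean and scatter of the neighborhood
    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (int j : idx) mean += cloud->points[j].getVector3fMap().cast<double>();
    mean /= k;

    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (int j : idx) {
      Eigen::Vector3d d = cloud->points[j].getVector3fMap().cast<double>() - mean;
      cov += d * d.transpose();
    }

    // plane-regularize: keep the principal directions, clamp singular values
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
    covs[i] = svd.matrixU() * Eigen::Vector3d(1.0, 1.0, eps).asDiagonal()
              * svd.matrixV().transpose();
  }
  return covs;
}

These per-point covariances are what the walkthrough refers to as keyframe_normals / submap_normals in buildSubmap above.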

End

This concludes this walkthrough of Direct LiDAR-Inertial Odometry; we hope it is of some help to fellow developers!


