Manually driving a car in CARLA, adding a stereo camera and a LiDAR

2023-10-30 13:30

This article walks through manually driving a car in CARLA and attaching a stereo camera (built from two RGB cameras) and a LiDAR to it. Hopefully it is a useful reference for developers working on the same problems.

The general workflow in CARLA:

First, create a client.

Set a timeout so the client aborts if the server returns nothing within 2 seconds.

client = carla.Client("127.0.0.1", 2000)
client.set_timeout(2.0)

Get the world from the client.

world = client.get_world()

Modify the world settings and apply them.

Here we use asynchronous mode with a fixed time step.

settings = world.get_settings()
settings.synchronous_mode = False    # Asynchronous mode
settings.fixed_delta_seconds = None
delta = 0.05
settings.fixed_delta_seconds = delta
world.apply_settings(settings)

Set up the traffic manager.

traffic_manager = client.get_trafficmanager(8000)
traffic_manager.set_synchronous_mode(False)
traffic_manager.set_random_device_seed(0)
random.seed(0)

Create a spectator view.

The spectator is the viewpoint we observe from; in the loop later we make it follow the vehicle, showing a top-down view of the car.

spectator = world.get_spectator()

Get all valid spawn points from the map.

spawn_points contains all valid spawn points; when we spawn the vehicle later, we have to specify one of them.

spawn_points = world.get_map().get_spawn_points()

The logic for adding anything (vehicles, cameras, LiDARs, etc.) in CARLA is always the same.

Taking the vehicle as an example: first get the blueprint library of everything from the world, filter the vehicle's blueprint out of the library, then combine that blueprint with a spawn point to spawn a vehicle.

blueprint_library = world.get_blueprint_library()
vehicle_bp = blueprint_library.filter("model3")[0]
vehicle = world.try_spawn_actor(vehicle_bp, spawn_points[0])

blueprint_library is the library of all blueprints and vehicle_bp is the vehicle's blueprint; combined with a spawn point, it produces the vehicle instance vehicle.

The camera follows the same logic:

camera_init_trans = carla.Transform(carla.Location(x=1, y=-1, z=2), carla.Rotation(pitch=0))
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera_bp.set_attribute("image_size_x", str(1280))
camera_bp.set_attribute("image_size_y", str(720))
camera_bp.set_attribute("fov", str(105))
camera = world.spawn_actor(camera_bp, camera_init_trans, attach_to=vehicle)

The camera must be attached to the vehicle. carla.Location is expressed in the vehicle-fixed coordinate frame: the origin lies at the height of the plane where the wheels touch the ground and, seen from above, at the center of the car. The x axis points toward the front of the car, the y axis toward the right (starboard) side, and the z axis toward the sky; units are meters.

Register the camera's listen callback:

# Get camera dimensions
image_w = camera_bp.get_attribute("image_size_x").as_int()
image_h = camera_bp.get_attribute("image_size_y").as_int()

# Instantiate objects for rendering and vehicle control
renderObject = RenderObject(image_w, image_h)
controlObject = ControlObject(vehicle)

camera.listen(lambda image: pygame_callback1(image, renderObject))
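For reference, pygame_callback1 (defined in the complete code below) converts CARLA's raw BGRA byte buffer into an RGB PyGame surface; its core is:

def pygame_callback1(data, obj):
    # The raw data is a flat BGRA buffer; reshape it to (height, width, 4)
    img = np.reshape(np.copy(data.raw_data), (data.height, data.width, 4))
    img = img[:, :, :3]    # drop the alpha channel
    img = img[:, :, ::-1]  # BGR -> RGB
    # PyGame surfaces are indexed (width, height), so swap the first two axes
    obj.surface = pygame.surfarray.make_surface(img.swapaxes(0, 1))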

CARLA has no built-in stereo camera; you get the effect of one by adding two RGB cameras yourself.
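Concretely, this article mounts the left camera at y=-1 and the right camera at y=+1 in the vehicle frame, giving a 2 m stereo baseline; the right camera is spawned exactly like the left one (this snippet also appears in the complete code below):

camera2_init_trans = carla.Transform(carla.Location(x=1, y=1, z=2), carla.Rotation(pitch=0))
camera2_bp = blueprint_library.find('sensor.camera.rgb')
camera2_bp.set_attribute("image_size_x", str(1280))
camera2_bp.set_attribute("image_size_y", str(720))
camera2_bp.set_attribute("fov", str(105))
camera2 = world.spawn_actor(camera2_bp, camera2_init_trans, attach_to=vehicle)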

How should we understand the focal length of a camera in CARLA?

When you define the camera there is no focal-length attribute. You first set the window size, i.e., how many pixels the sensor has horizontally and vertically, and then set fov, the horizontal field of view in degrees. The focal length f is derived from these two values, and if you want to recover depth from the stereo pair you need to compute f this way. Picture the camera's normalized plane, i.e., the plane at z = 1 m in front of the camera (this z is the SLAM-style camera frame, not the CARLA frame), and ask how long a horizontal bar placed there would exactly fill the sensor horizontally: the bar subtends the whole fov, so solving the similar triangles gives the SLAM-style intrinsic f = W / (2 * tan(fov / 2)), where W is the image width in pixels. Half of the window size gives the other SLAM intrinsic, the principal point c: cx = W / 2 and cy = H / 2.
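As a worked sketch (the helper name is mine; the numbers match the 1280x720, fov=105 cameras above, and the depth formula z = f * b / d assumes an ideally rectified, purely horizontally offset pair, which the two cameras here are by construction):

import numpy as np

def camera_intrinsics(image_w, image_h, fov_deg):
    # Pinhole intrinsics derived from CARLA's image size and horizontal fov
    f = image_w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    cx, cy = image_w / 2.0, image_h / 2.0
    return f, cx, cy

f, cx, cy = camera_intrinsics(1280, 720, 105)  # f is roughly 491 pixels

# Stereo depth: baseline b in meters, disparity d in pixels
baseline = 2.0    # the two cameras sit at y=-1 and y=+1
disparity = 20.0  # hypothetical disparity of one matched pixel
z = f * baseline / disparity
print("depth: %.2f m" % z)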

Keyboard controller:

See the ControlObject class in the complete code.

In each iteration of the loop:

world.tick()  # get this frame's results from the server
# Move the spectator to follow the vehicle
transform = vehicle.get_transform()  # we get the transform of vehicle
spectator.set_transform(carla.Transform(transform.location + carla.Location(z=50),
                                        carla.Rotation(pitch=-90)))
for event in pygame.event.get():  # poll keyboard events
    ...

Complete code:

import carla
import random
import time
import sys
import pygame
import datetime
import numpy as np
import open3d as o3d
from matplotlib import cm

VIDIDIS = np.array(cm.get_cmap("plasma").colors)
VID_RANGE = np.linspace(0.0, 1.0, VIDIDIS.shape[0])

# Render object to keep and pass the PyGame surface
class RenderObject(object):
    def __init__(self, width, height):
        init_image = np.random.randint(0, 255, (height, width, 3), dtype='uint8')
        self.surface = pygame.surfarray.make_surface(init_image.swapaxes(0, 1))

# Control object to manage vehicle controls
class ControlObject(object):
    def __init__(self, veh):
        # Control parameters to store the control state
        self._vehicle = veh
        self._steer = 0
        self._throttle = False
        self._brake = False
        self._steer = None
        self._steer_cache = 0
        # A carla.VehicleControl object is needed to alter the
        # vehicle's control state
        self._control = carla.VehicleControl()

    # Check for key press events in the PyGame window
    # and define the control state
    def parse_control(self, event):
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_RETURN:
                self._vehicle.set_autopilot(True)
            if event.key == pygame.K_UP:
                self._throttle = True
            if event.key == pygame.K_DOWN:
                self._brake = True
            if event.key == pygame.K_RIGHT:
                self._steer = 1
            if event.key == pygame.K_LEFT:
                self._steer = -1
        if event.type == pygame.KEYUP:
            if event.key == pygame.K_UP:
                self._throttle = False
            if event.key == pygame.K_DOWN:
                self._brake = False
                self._control.reverse = False
            if event.key == pygame.K_RIGHT:
                self._steer = None
            if event.key == pygame.K_LEFT:
                self._steer = None

    # Process the current control state, change the control parameter
    # if the key remains pressed
    def process_control(self):
        if self._throttle:
            self._control.throttle = min(self._control.throttle + 0.01, 1)
            self._control.gear = 1
            self._control.brake = False
        elif not self._brake:
            self._control.throttle = 0.0

        if self._brake:
            # If the down arrow is held down when the car is stationary, switch to reverse
            if self._vehicle.get_velocity().length() < 0.01 and not self._control.reverse:
                self._control.brake = 0.0
                self._control.gear = 1
                self._control.reverse = True
                self._control.throttle = min(self._control.throttle + 0.1, 1)
            elif self._control.reverse:
                self._control.throttle = min(self._control.throttle + 0.1, 1)
            else:
                self._control.throttle = 0.0
                self._control.brake = min(self._control.brake + 0.3, 1)
        else:
            self._control.brake = 0.0

        if self._steer is not None:
            if self._steer == 1:
                self._steer_cache += 0.03
            if self._steer == -1:
                self._steer_cache -= 0.03
            self._steer_cache = min(0.7, max(-0.7, self._steer_cache))  # clamp to [-0.7, 0.7]
            self._control.steer = round(self._steer_cache, 1)
        else:
            if self._steer_cache > 0.0:
                self._steer_cache *= 0.2
            if self._steer_cache < 0.0:
                self._steer_cache *= 0.2
            if 0.01 > self._steer_cache > -0.01:
                self._steer_cache = 0.0
            self._control.steer = round(self._steer_cache, 1)

        # Apply the control parameters to the ego vehicle
        self._vehicle.apply_control(self._control)

def generate_lidar_bp(blueprint_library, delta):
    """To get lidar bp

    :param blueprint_library: the world blueprint_library
    :param delta: update rate (s)
    :return: lidar bp
    """
    lidar_bp = blueprint_library.find("sensor.lidar.ray_cast")
    lidar_bp.set_attribute("dropoff_general_rate", "0.0")
    lidar_bp.set_attribute("dropoff_intensity_limit", "1.0")
    lidar_bp.set_attribute("dropoff_zero_intensity", "0.0")
    lidar_bp.set_attribute("upper_fov", str(15.0))
    lidar_bp.set_attribute("lower_fov", str(-25.0))
    lidar_bp.set_attribute("channels", str(64.0))
    lidar_bp.set_attribute("range", str(100.0))
    lidar_bp.set_attribute("rotation_frequency", str(1.0 / delta))
    lidar_bp.set_attribute("points_per_second", str(500000))
    return lidar_bp

def lidar_callback(point_cloud, point_list):
    # We need to convert the point cloud (carla format) into a numpy.ndarray
    data = np.copy(np.frombuffer(point_cloud.raw_data, dtype=np.dtype("f4")))
    data = np.reshape(data, (int(data.shape[0] / 4), 4))

    intensity = data[:, -1]
    intensity_col = 1.0 - np.log(intensity) / np.log(np.exp(-0.004 * 100))
    int_color = np.c_[
        np.interp(intensity_col, VID_RANGE, VIDIDIS[:, 0]),
        np.interp(intensity_col, VID_RANGE, VIDIDIS[:, 1]),
        np.interp(intensity_col, VID_RANGE, VIDIDIS[:, 2])]

    points = data[:, :-1]  # we only use the x, y, z coordinates
    points[:, 1] = -points[:, 1]  # This is different from the official script
    point_list.points = o3d.utility.Vector3dVector(points)
    point_list.colors = o3d.utility.Vector3dVector(int_color)

# Camera sensor callback, reshapes raw data from camera into 2D RGB and applies to PyGame surface
def pygame_callback1(data, obj):
    img = np.reshape(np.copy(data.raw_data), (data.height, data.width, 4))
    img = img[:, :, :3]
    img = img[:, :, ::-1]
    # msg = bridge.cv2_to_imgmsg(img, encoding="bgr8")
    obj.surface = pygame.surfarray.make_surface(img.swapaxes(0, 1))
    # pub.publish(msg)
    print("taking photos1")

def pygame_callback2(data, obj):
    img = np.reshape(np.copy(data.raw_data), (data.height, data.width, 4))
    img = img[:, :, :3]
    img = img[:, :, ::-1]
    # msg = bridge.cv2_to_imgmsg(img, encoding="bgr8")
    # pub2.publish(msg)
    print("taking photos2")

client = carla.Client("127.0.0.1", 2000)
client.set_timeout(2.0)
world = client.get_world()

settings = world.get_settings()
settings.synchronous_mode = False    # Asynchronous mode
settings.fixed_delta_seconds = None
delta = 0.05
settings.fixed_delta_seconds = delta
world.apply_settings(settings)

traffic_manager = client.get_trafficmanager(8000)
traffic_manager.set_synchronous_mode(False)
traffic_manager.set_random_device_seed(0)
random.seed(0)

# We will set up the spectator so we can see what we do
spectator = world.get_spectator()

# Retrieve the map's spawn points
spawn_points = world.get_map().get_spawn_points()

blueprint_library = world.get_blueprint_library()
vehicle_bp = blueprint_library.filter("model3")[0]
vehicle = world.try_spawn_actor(vehicle_bp, spawn_points[0])
vehicle.set_autopilot(False)

# Initialise the left camera of the stereo pair
camera_init_trans = carla.Transform(carla.Location(x=1, y=-1, z=2), carla.Rotation(pitch=0))
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera_bp.set_attribute("image_size_x", str(1280))
camera_bp.set_attribute("image_size_y", str(720))
camera_bp.set_attribute("fov", str(105))
camera = world.spawn_actor(camera_bp, camera_init_trans, attach_to=vehicle)

# Initialise the right camera of the stereo pair
camera2_init_trans = carla.Transform(carla.Location(x=1, y=1, z=2), carla.Rotation(pitch=0))
camera2_bp = blueprint_library.find('sensor.camera.rgb')
camera2_bp.set_attribute("image_size_x", str(1280))
camera2_bp.set_attribute("image_size_y", str(720))
camera2_bp.set_attribute("fov", str(105))
camera2 = world.spawn_actor(camera2_bp, camera2_init_trans, attach_to=vehicle)

# Get camera dimensions
image_w = camera_bp.get_attribute("image_size_x").as_int()
image_h = camera_bp.get_attribute("image_size_y").as_int()

# Instantiate objects for rendering and vehicle control
renderObject = RenderObject(image_w, image_h)
controlObject = ControlObject(vehicle)

# Start camera with PyGame callback
camera.listen(lambda image: pygame_callback1(image, renderObject))

lidar_bp = generate_lidar_bp(blueprint_library, delta)
lidar_transform = carla.Transform(carla.Location(x=-0.5, z=1.8))
lidar = world.spawn_actor(lidar_bp, lidar_transform, attach_to=vehicle)

point_list = o3d.geometry.PointCloud()
lidar.listen(lambda data: lidar_callback(data, point_list))

vis = o3d.visualization.Visualizer()
vis.create_window(window_name="Display Point Cloud",
                  width=960, height=540, left=480, top=270)
vis.get_render_option().background_color = [0.05, 0.05, 0.05]
vis.get_render_option().point_size = 1
vis.get_render_option().show_coordinate_frame = True

frame = 0
dt0 = datetime.datetime.now()

# Initialise the display
pygame.init()
gameDisplay = pygame.display.set_mode((image_w, image_h), pygame.HWSURFACE | pygame.DOUBLEBUF)
# Draw black to the display
gameDisplay.fill((0, 0, 0))
gameDisplay.blit(renderObject.surface, (0, 0))
pygame.display.flip()

crashed = False
while not crashed:
    # if frame == 2:
    #     vis.add_geometry(point_list)
    #
    # vis.update_geometry(point_list)
    # vis.poll_events()
    # vis.update_renderer()
    # time.sleep(0.005)
    world.tick()
    transform = vehicle.get_transform()  # we get the transform of vehicle
    spectator.set_transform(carla.Transform(transform.location + carla.Location(z=50),
                                            carla.Rotation(pitch=-90)))
    # Update the display
    gameDisplay.blit(renderObject.surface, (0, 0))
    pygame.display.flip()
    controlObject.process_control()
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            crashed = True
        controlObject.parse_control(event)
    process_time = datetime.datetime.now() - dt0
    sys.stdout.write("\r" + "FPS: " + str(1.0 / process_time.total_seconds()) +
                     " Current Frame: " + str(frame))
    sys.stdout.flush()
    dt0 = datetime.datetime.now()
    frame += 1

# Stop cameras and quit PyGame after exiting game loop
camera.stop()
camera2.stop()
pygame.quit()

There is a bug here: my code contains both LiDAR point-cloud visualization and camera image visualization, but the two cannot coexist. If the LiDAR point-cloud view updates, the camera image stops updating, and if the camera updates, the LiDAR view stops. I don't know why; if you do, please leave a comment.
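One untested guess (my own, not from the article): the PyGame loop and Open3D's poll_events may be starving each other when driven from the same thread, so a possible workaround is to give the Open3D visualizer its own thread. A sketch building on the script above, assuming your platform allows creating the GL window off the main thread (this is platform-dependent):

import threading

def lidar_vis_loop(point_list):
    # Hypothetical workaround: run Open3D in its own thread and window so it
    # does not share an event loop with PyGame
    vis = o3d.visualization.Visualizer()
    vis.create_window(window_name="Display Point Cloud", width=960, height=540)
    added = False
    while True:
        if not added and len(point_list.points) > 0:
            vis.add_geometry(point_list)
            added = True
        if added:
            vis.update_geometry(point_list)
        if not vis.poll_events():  # returns False once the window is closed
            break
        vis.update_renderer()
        time.sleep(0.005)
    vis.destroy_window()

threading.Thread(target=lidar_vis_loop, args=(point_list,), daemon=True).start()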
