
The inverse of the transformation matrix in rigid-body motion, and how to understand the obtain_sensor2top function

The inverse of the transformation matrix in rigid-body motion

The transformation matrix in rigid-body motion is:

$$
T = \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix}
$$

where $R$ is the rotation matrix ($R^{-1} = R^{T}$) and $t$ is the translation vector. (The inverse of a rotation matrix does equal its transpose. This property holds for all orthogonal matrices, and rotation matrices are a subset of orthogonal matrices; an orthogonal matrix is, by definition, one whose inverse equals its transpose.)
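A quick numeric check of this property (a minimal NumPy sketch; the rotation angle is chosen arbitrarily):

```python
import numpy as np

theta = 0.3  # arbitrary rotation angle about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])

# for a rotation matrix, the inverse equals the transpose: R^{-1} = R^T
assert np.allclose(np.linalg.inv(R), R.T)
# equivalently, R @ R.T is the identity (the definition of an orthogonal matrix)
assert np.allclose(R @ R.T, np.eye(3))
```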

For a block upper-triangular matrix:

$$
M = \begin{pmatrix} A & B \\ 0 & D \end{pmatrix}
$$

its inverse is:

$$
M^{-1} = \begin{pmatrix} A^{-1} & -A^{-1} B D^{-1} \\ 0 & D^{-1} \end{pmatrix}
$$
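The block-wise inverse formula can be verified numerically (a sketch with arbitrary invertible blocks; a small diagonal shift is added to keep $A$ and $D$ invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 3 * np.eye(3)  # arbitrary invertible block
D = rng.normal(size=(2, 2)) + 3 * np.eye(2)  # arbitrary invertible block
B = rng.normal(size=(3, 2))

# block upper-triangular matrix [[A, B], [0, D]]
M = np.block([[A, B], [np.zeros((2, 3)), D]])

# block-wise inverse: [[A^{-1}, -A^{-1} B D^{-1}], [0, D^{-1}]]
Ai, Di = np.linalg.inv(A), np.linalg.inv(D)
M_inv = np.block([[Ai, -Ai @ B @ Di], [np.zeros((2, 3)), Di]])

assert np.allclose(M_inv, np.linalg.inv(M))
```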
Applying this formula (with $A = R$, $B = t$, $D = 1$), the inverse of the transformation matrix is:

$$
T^{-1} = \begin{pmatrix} R^{T} & -R^{T} t \\ 0 & 1 \end{pmatrix}
$$
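The closed-form inverse can be checked against a direct numerical inverse (a sketch; the rotation and translation values are arbitrary):

```python
import numpy as np

# arbitrary rigid transform: a rotation about z plus a translation
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
t = np.array([1.0, 2.0, 3.0])

T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# closed-form inverse: rotation part R^T, translation part -R^T t
T_inv = np.eye(4)
T_inv[:3, :3] = R.T
T_inv[:3, 3] = -R.T @ t

assert np.allclose(T_inv, np.linalg.inv(T))
```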

Appendix:

In the BEVFormer and BEVDet pipelines, when generating the pkl info files from nuScenes data, we need the extrinsics from each sensor to the **top LiDAR (here "top LiDAR" refers only to the top LiDAR of the key-frame sample)**. The sensors fall into three groups:

- the 6 cameras, each to the top LiDAR;
- the radars (['RADAR_FRONT', 'RADAR_FRONT_LEFT', 'RADAR_FRONT_RIGHT', 'RADAR_BACK_LEFT', 'RADAR_BACK_RIGHT']), each to the top LiDAR;
- the continuous-frame sweeps, each to the single key-frame top LiDAR.

These extrinsics are computed by the obtain_sensor2top function in tools/data_converter/nuscenes_converter.py, called from tools/create_data.py.
The code is as follows:

import os

import numpy as np
from pyquaternion import Quaternion


def obtain_sensor2top(nusc,
                      sensor_token,
                      l2e_t,
                      l2e_r_mat,
                      e2g_t,
                      e2g_r_mat,
                      sensor_type='lidar'):
    """Obtain the info with RT matric from general sensor to Top LiDAR.
    Args:
        nusc (class): Dataset class in the nuScenes dataset.
        sensor_token (str): Sample data token corresponding to the
            specific sensor type.
        l2e_t (np.ndarray): Translation from lidar to ego in shape (1, 3).
        l2e_r_mat (np.ndarray): Rotation matrix from lidar to ego
            in shape (3, 3).
        e2g_t (np.ndarray): Translation from ego to global in shape (1, 3).
        e2g_r_mat (np.ndarray): Rotation matrix from ego to global
            in shape (3, 3).
        sensor_type (str): Sensor to calibrate. Default: 'lidar'.
    Returns:
        sweep (dict): Sweep information after transformation.
    """
    sd_rec = nusc.get('sample_data', sensor_token)
    cs_record = nusc.get('calibrated_sensor',
                         sd_rec['calibrated_sensor_token'])
    pose_record = nusc.get('ego_pose', sd_rec['ego_pose_token'])
    data_path = str(nusc.get_sample_data_path(sd_rec['token']))
    if os.getcwd() in data_path:  # path from lyftdataset is absolute path
        data_path = data_path.split(f'{os.getcwd()}/')[-1]  # relative path
    sweep = {
        'data_path': data_path,
        'type': sensor_type,
        'sample_data_token': sd_rec['token'],
        'sensor2ego_translation': cs_record['translation'],
        'sensor2ego_rotation': cs_record['rotation'],
        'ego2global_translation': pose_record['translation'],
        'ego2global_rotation': pose_record['rotation'],
        'timestamp': sd_rec['timestamp']
    }
    l2e_r_s = sweep['sensor2ego_rotation']
    l2e_t_s = sweep['sensor2ego_translation']
    e2g_r_s = sweep['ego2global_rotation']
    e2g_t_s = sweep['ego2global_translation']

    # obtain the RT from sensor to Top LiDAR
    # sweep->ego->global->ego'->lidar
    # Quaternion.rotation_matrix: a 3x3 orthogonal rotation matrix (NumPy array);
    # Quaternion.transformation_matrix: a 4x4 homogeneous transformation matrix.
    l2e_r_s_mat = Quaternion(l2e_r_s).rotation_matrix
    e2g_r_s_mat = Quaternion(e2g_r_s).rotation_matrix
    R = (l2e_r_s_mat.T @ e2g_r_s_mat.T) @ (
        np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T)
    T = (l2e_t_s @ e2g_r_s_mat.T + e2g_t_s) @ (
        np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T)
    T -= e2g_t @ (np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T
                  ) + l2e_t @ np.linalg.inv(l2e_r_mat).T
    sweep['sensor2lidar_rotation'] = R.T  # points @ R.T + T
    sweep['sensor2lidar_translation'] = T
    return sweep
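The chain sweep→ego′→global→ego→lidar that `R` and `T` implement can be sanity-checked with random rigid transforms (a sketch; all values are synthetic, not real nuScenes calibration data):

```python
import numpy as np

def random_rotation(rng):
    # build a random proper rotation via QR decomposition, sign-fixed to det = +1
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.linalg.det(q))

rng = np.random.default_rng(0)
l2e_r_mat, e2g_r_mat = random_rotation(rng), random_rotation(rng)      # key-frame lidar->ego, ego->global
l2e_r_s_mat, e2g_r_s_mat = random_rotation(rng), random_rotation(rng)  # sweep sensor->ego', ego'->global
l2e_t, e2g_t = rng.normal(size=3), rng.normal(size=3)
l2e_t_s, e2g_t_s = rng.normal(size=3), rng.normal(size=3)

# R and T exactly as computed inside obtain_sensor2top
R = (l2e_r_s_mat.T @ e2g_r_s_mat.T) @ (
    np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T)
T = (l2e_t_s @ e2g_r_s_mat.T + e2g_t_s) @ (
    np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T)
T -= e2g_t @ (np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T
              ) + l2e_t @ np.linalg.inv(l2e_r_mat).T

# reference: carry a point step by step along sweep->ego'->global->ego->lidar
p = rng.normal(size=3)                                               # a point in the sweep sensor frame
p_global = (p @ l2e_r_s_mat.T + l2e_t_s) @ e2g_r_s_mat.T + e2g_t_s   # sensor->ego'->global
p_ego = (p_global - e2g_t) @ np.linalg.inv(e2g_r_mat).T              # global->key-frame ego
p_lidar = (p_ego - l2e_t) @ np.linalg.inv(l2e_r_mat).T               # ego->top lidar

# the step-by-step result matches the fused transform: points @ R + T
# (i.e. points @ sensor2lidar_rotation.T + sensor2lidar_translation)
assert np.allclose(p_lidar, p @ R + T)
```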


The camera is a special case of a sweep sensor. I wrote out a derivation by hand, using the camera as an example:

[figure: hand-written derivation of the camera-to-top-LiDAR extrinsics]


Original article: https://blog.csdn.net/qq_39523365/article/details/140662015
