
A Survey of Vision-Based 3D Occupancy Networks

Survey paper: https://arxiv.org/pdf/2405.02595

[Figure: Timeline overview of vision-based 3D occupancy prediction methods]

[Figure: Hierarchical taxonomy of vision-based 3D occupancy prediction methods for autonomous driving]

Methods from 2023:

TPVFormer, OccDepth, SimpleOccupancy, StereoScene, OccupancyM3D, VoxFormer, OccFormer, OVO, UniOcc, MiLO, Multi-Scale Occ, PanoOcc, Symphonies, FB-OCC, UniWorld, PointOcc, RenderOcc, FlashOcc, OccWorld, DepthSSC, OctreeOcc, COTR, SGN, OccNeRF, Vampire, RadOcc, SparseOcc

Methods from 2024:

SelfOcc, S2TPVFormer, POP-3D, UniVision, InverseMatrixVT3D, OccFlowNet, CoHFF, OccTransformer, FastOcc, MonoOcc

Paper summaries:

TPVFormer: An academic alternative to Tesla’s Occupancy Network

Paper: https://arxiv.org/pdf/2302.07817
Code: https://github.com/wzzheng/TPVFormer

OccDepth: A Depth-Aware Method for 3D Semantic Scene Completion

Paper: https://arxiv.org/abs/2302.13540
Code: https://github.com/megvii-research/OccDepth

SimpleOccupancy: A Simple Framework for 3D Occupancy Estimation in Autonomous Driving

Paper: https://arxiv.org/pdf/2303.10076
Code: https://github.com/GANWANSHUI/SimpleOccupancy

StereoScene: Bridging Stereo Geometry and BEV Representation with Reliable Mutual Interaction for Semantic Scene Completion

Paper: https://arxiv.org/pdf/2303.13959v3
Code: https://github.com/Arlo0o/StereoScene


Original article: https://blog.csdn.net/stephanezhang/article/details/144373905
