Kailun Yang, Ph.D., Professor, doctoral and master's supervisor, recipient of the National Science Fund for Excellent Young Scientists (Overseas), and Yuelu Scholar of Hunan University. In June 2014, he received dual bachelor's degrees: Measurement and Control Technology and Instruments from Beijing Institute of Technology and Economics from Peking University. In June 2019, he received his Ph.D. in Measurement Technology and Instruments from Zhejiang University. From September 2017 to September 2018, he was a visiting Ph.D. student with the Robotics and eSafety (RobeSafe) research group at the University of Alcalá (UAH), Spain. From November 2019 to January 2023, he conducted postdoctoral research at the Computer Vision for Human-Computer Interaction (CV:HCI) Lab at Karlsruhe Institute of Technology (KIT), Germany. He joined the School of Robotics at Hunan University in February 2023.
His research covers computational optics and computational vision based on multi-modal, high-dimensional, omnidirectional sensing, supporting applications such as autonomous driving, assistance for the visually impaired, intelligent transportation systems, and intelligent motion analysis. He has published more than 100 papers in IEEE Transactions, including TPAMI, TIP, TNNLS, T-ITS, TMM, T-ASE, T-IV, TIM, TCI, and TAI, and at top conferences in computer vision, machine learning, artificial intelligence, robotics, multimedia, and intelligent transportation systems, including CVPR, NeurIPS, ECCV, AAAI, IJCAI, ICRA, IROS, MM, IV, and ITSC, and has been listed among Stanford University's World's Top 2% Scientists. He holds, solely or jointly, more than 40 patents, 4 of which have been transferred to industry, and won the national grand championship of the "Chuang Qingchun" Innovation and Entrepreneurship Competition organized by the Central Committee of the Communist Youth League. CMX was selected as an IEEE T-ITS Top-10 Popular Article; MateRobot was a Finalist for the Best Paper Award on Human-Robot Interaction at the top robotics conference IEEE ICRA 2024; ACNet was selected among the Most Cited Papers of IEEE ICIP 2019. He received the Best Paper Award at the flagship intelligent vehicles conference IEEE IV 2021, the Best Presentation Award at the image processing conference ICFIP 2018, the Accessibility Challenge Judges' Award at the top accessibility conference W4A 2023, and Applied Optics Editors' Picks twice (2018, 2019). He serves as an Associate Editor (AE) of IEEE Robotics and Automation Letters and Robot Learning, Local Chair of the International Symposium on Intelligent Robotics and Systems (ISoIRS 2024), BSL Workshop Chair of the flagship intelligent vehicles conference IV 2022, AE of the top robotics conference ICRA 2024, and Exhibition Chair of the China Conference on Image and Graphics (CCIG 2025). He has served as a reviewer for more than 80 journals and conferences, including TPAMI, IJCV, CVPR, ICCV, ICML, NeurIPS, and ICLR, and received Outstanding Reviewer Awards at the top computer vision conferences ECCV 2022 and ACCV 2022. Students he has supervised have gone on to further study or employment at KIT, Huawei, ByteDance, NIO, and other organizations.
Artificial intelligence, as the engine of a new round of technological revolution, is quietly reshaping the global landscape of science, technology, and industry, and even redefining our daily lives. From intelligent driving and AI guidance for the blind to mixed reality, smart factories, motion assistance, and robot parkour, AI reaches everywhere. Behind all of this, vision serves as AI's super data source; its importance is self-evident, and it is the core driving force behind precise robotic decision-making and execution. The Computer Vision for Panoramic Understanding Lab (CV:PU) takes vision as its entry point, integrating computational imaging, multi-modal perception, panoramic understanding, and video analysis to tackle real-world perception and modeling problems, addressing sensing challenges under insufficient illumination, extreme weather, domain shift, highly dynamic multi-object scenarios, wide field-of-view and broad-spectrum sensing, and lightweight, low-compute constraints. We also study embodied perception and skill learning, explore collaboration among active-passive heterogeneous multi-agents, and comprehensively optimize world models and human-robot interaction to improve the overall performance and interpretability of robotic systems. Our research extends to application scenarios such as autonomous driving, mobile robots, quadruped robots, intelligent guidance and assistance for the visually impaired, intelligent industrial manufacturing, and intelligent motion analysis, providing technical support for the future development of robotics.
Openings are currently available for postdocs, Ph.D. students, direct-admission Ph.D. students, master's students, and research assistants. The most important goal of education is to shape an independent personality and a free spirit, to encourage attempts and tolerate failure, and to cultivate intellectuals with devotion to their country who aspire to transform life, society, and the world. We deeply recognize that university education is not merely about imparting knowledge, nor even only about developing ability; more importantly, it is about fostering an egalitarian academic atmosphere and inspiring scientific thinking and research through interaction among students, forming an "independent personality". The group emphasizes an atmosphere of open, equal communication and discussion and advocates collaboration in the form of co-work. The CV:PU group is young and communicates smoothly; beyond group members, students study and conduct research together with students of other supervisors at Karlsruhe Institute of Technology, the College of Optical Science and Engineering at Zhejiang University, and the School of Robotics at Hunan University, enabling ample cross-disciplinary collaborative innovation. Ph.D. students have at least two opportunities to attend conferences abroad during their studies, and master's students at least one. Students who excel in research and complete their projects ahead of schedule can be recommended for joint training of about one year at renowned overseas groups such as NVIDIA, KIT, TUM, and TU Stuttgart. If you are interested in {Computer Vision, Deep Learning, Scene Understanding, Video Understanding, Embodied Intelligence, Autonomous Driving}, feel free to contact me.
Email: kailun.yang@hnu.edu.cn
Related links: Personal Homepage, Google Scholar, ResearchGate, DBLP, GitHub, Group Gallery
Education and Professional Experience:
2023.11 – Present  Hunan University, School of Robotics, Professor, doctoral and master's supervisor
2023.02 – 2023.10  Hunan University, School of Robotics, Associate Professor, doctoral and master's supervisor
2019.11 – 2023.01  Karlsruhe Institute of Technology (KIT), Germany, Computer Vision for Human-Computer Interaction (CV:HCI) Lab, Postdoctoral Researcher
2014.09 – 2019.06  Zhejiang University, State Key Laboratory of Modern Optical Instrumentation, Ph.D.
2017.09 – 2018.09  University of Alcalá (UAH), Spain, Robotics and eSafety (RobeSafe) Research Group, Visiting Ph.D. Student
2012.09 – 2014.06  Peking University, National School of Development, Dual Bachelor's Degree in Economics
2010.09 – 2014.06  Beijing Institute of Technology, School of Optics and Photonics, B.Eng. in Measurement and Control Technology and Instruments
Research / Student Training Directions:
Computer Vision: deep learning, semantic segmentation, panoptic segmentation, depth estimation, optical flow estimation, knowledge distillation, video understanding, semantic occupancy prediction, etc.
Intelligent Transportation Systems: intelligent vehicles, autonomous driving, scene understanding, domain adaptation, world models, BEV vectorized maps, V2X cooperative perception, etc.
Robotics: 3D vision, embodied intelligence, multi-modal perception, visual odometry, visual localization and mapping, action recognition, human-robot interaction, etc.
Optical Sensing: RGB-X sensing, panoramic imaging, polarization imaging, event cameras, computational imaging, light-field imaging, minimalist optical systems, etc.
Assistive Technology: Advanced Driver Assistance Systems (ADAS), wearable assistive systems for the visually impaired, accessibility, etc.
Research Projects:
[1] Research on Panoramic Computational Imaging-Driven Continual Scene Parsing for Autonomous Driving. National Natural Science Foundation of China (NSFC) General Program, 2025.01-2028.12 (PI)
[2] Robotic Visual Perception and Assistive Technology. NSFC Excellent Young Scientists Fund (Overseas), 2024.01-2026.12 (PI)
[3] Accessible Maps: Barrier-free maps to improve the occupational mobility of people with visual or mobility impairments. German Federal Ministry of Labour and Social Affairs (BMAS) project (01KM151112), 2019.11-2022.12 (Key Participant)
[4] KIT Future Fields. KIT campus project, 2021.01-2023.01 (Key Participant)
[5] Research on Visual Precise Localization Technology. Industry-funded project (K横20180747), 2018.05-2020.04 (Key Participant)
[6] Research on Visual Sensing Technology Fusing Multi-Dimensional Parameters. Industry-funded project (K横20181674), 2018.08-2019.08 (Key Participant)
[7] Semantic Perception for Navigation Assistance. Zhejiang University-sponsored overseas exchange program, 2017.09-2018.09 (PI)
[8] Visual Assistance Technology for the Blind Based on 3D Terrain Sensing. Agriculture and Social Development public welfare project (KN20161853), 2016.01-2017.12 (Key Participant)
Representative Publications (all works of the group are carried out collaboratively as co-work)
Computer Vision and Scene Understanding:
[1] J. Zhang, K. Yang†, H. Shi, S. Reiß, K. Peng, C. Ma, H. Fu, P.H.S. Torr, K. Wang, R. Stiefelhagen. Behind Every Domain There is a Shift: Adapting Distortion-aware Vision Transformers for Panoramic Semantic Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024 [PDF]
[2] K. Yang†, X. Hu, R. Stiefelhagen. Is Context-Aware CNN Ready for the Surroundings? Panoramic Semantic Segmentation in the Wild. IEEE Transactions on Image Processing (TIP), 2021 [PDF]
[3] Q. Jiang*, S. Gao*, Y. Gao, K. Yang†, Z. Yi, H. Shi, L. Sun, K. Wang†. Minimalist and High-Quality Panoramic Imaging with PSF-aware Transformers. IEEE Transactions on Image Processing (TIP), 2024 [PDF]
[4] H. Shi*, C. Pang*, J. Zhang*, K. Yang†, Y. Wu, H. Ni, Y. Lin, R. Stiefelhagen, K. Wang†. CoBEV: Elevating Roadside 3D Object Detection with Depth and Height Complementarity. IEEE Transactions on Image Processing (TIP), 2024 [PDF]
[5] J. Lin, J. Chen, K. Yang†, A. Roitberg, S. Li, Z. Li†, S. Li. AdaptiveClick: Click-aware Transformer with Adaptive Focal Loss for Interactive Image Segmentation. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024 [PDF]
[6] K. Peng, A. Roitberg, K. Yang†, J. Zhang, R. Stiefelhagen. Delving Deep into One-Shot Skeleton-based Action Recognition with Diverse Occlusions. IEEE Transactions on Multimedia (TMM), 2023 [PDF]
[7] F. Teng*, J. Zhang*, K. Peng, Y. Wang, R. Stiefelhagen, K. Yang†. OAFuser: Towards Omni-Aperture Fusion for Light Field Semantic Segmentation. IEEE Transactions on Artificial Intelligence (TAI), 2024 [PDF]
[8] K. Yang†, J. Zhang, S. Reiß, X. Hu, R. Stiefelhagen. Capturing Omni-Range Context for Omnidirectional Segmentation. In CVPR, 2021 [PDF]
[9] J. Zhang, K. Yang†, C. Ma, S. Reiß, K. Peng, R. Stiefelhagen. Bending Reality: Distortion-aware Transformers for Adapting to Panoramic Semantic Segmentation. In CVPR, 2022 [PDF]
[10] J. Zhang*, R. Liu*, H. Shi, K. Yang†, S. Reiß, H. Fu, K. Peng, K. Wang, R. Stiefelhagen. Delivering Arbitrary-Modal Semantic Segmentation. In CVPR, 2023 [PDF]
[11] K. Peng, D. Wen, K. Yang†, A. Luo, Y. Chen, J. Fu, M.S. Sarfraz, A. Roitberg, R. Stiefelhagen. Advancing Open-Set Domain Generalization Using Evidential Bi-Level Hardest Domain Scheduler. In NeurIPS, 2024 [PDF]
[12] K. Peng, C. Yin, J. Zheng, R. Liu, D. Schneider, J. Zhang, K. Yang†, M.S. Sarfraz, R. Stiefelhagen, A. Roitberg. Navigating Open Set Scenarios for Skeleton-based Action Recognition. In AAAI, 2024 [PDF]
[13] Y. Cao*, J. Zhang*, H. Shi, K. Peng, Y. Zhang, H. Zhang†, R. Stiefelhagen, K. Yang†. Occlusion-Aware Seamless Segmentation. In ECCV, 2024 [PDF]
[14] K. Peng*, J. Fu*, K. Yang†, D. Wen, Y. Chen, R. Liu, J. Zheng, J. Zhang, M.S. Sarfraz, R. Stiefelhagen, A. Roitberg. Referring Atomic Video Action Recognition. In ECCV, 2024 [PDF]
[15] K. Zeng, H. Shi, J. Lin, S. Li, J. Cheng, K. Wang, Z. Li†, K. Yang†. MambaMOS: LiDAR-based 3D Moving Object Segmentation with Motion-aware State Space Model. In MM, 2024 [PDF]
[16] K. Peng, D. Schneider, A. Roitberg, K. Yang†, J. Zhang, C. Deng, K. Zhang, M.S. Sarfraz, R. Stiefelhagen. Towards Video-based Activated Muscle Group Estimation in the Wild. In MM, 2024 [PDF]
[17] X. Hu, K. Yang, L. Fei, K. Wang. ACNet: Attention Based Network to Exploit Complementary Features for RGBD Semantic Segmentation. In ICIP, 2019 (Most Cited Paper) [PDF]
Extreme Photonics and Computational Imaging:
[1] S. Gao, K. Yang†, H. Shi, K. Wang†, J. Bai. Review on Panoramic Imaging and Its Applications in Scene Understanding. IEEE Transactions on Instrumentation and Measurement (TIM), 2022 [PDF]
[2] Q. Jiang*, H. Shi*, S. Gao, J. Zhang, K. Yang†, L. Sun, H. Ni, K. Wang†. Computational Imaging for Machine Perception: Transferring Semantic Segmentation beyond Aberrations. IEEE Transactions on Computational Imaging (TCI), 2024 [PDF]
[3] Q. Jiang, H. Shi, L. Sun, S. Gao, K. Yang, K. Wang. Annular Computational Imaging: Capture Clear Panoramic Images through Simple Lens. IEEE Transactions on Computational Imaging (TCI), 2022 [PDF]
[4] K. Xiang, K. Yang, K. Wang. Polarization-driven Semantic Segmentation via Efficient Attention-bridged Fusion. Optics Express (OE), 2021 [PDF]
[5] K. Yang, L.M. Bergasa, E. Romera, K. Wang. Robustifying Semantic Cognition of Traversability across Wearable RGB-Depth Cameras. Applied Optics (AO), 2019 (Editors' Pick) [PDF]
[6] K. Yang, K. Wang, H. Chen, J. Bai. Reducing the Minimum Range of a RGB-Depth Sensor to Aid Navigation in Visually Impaired Individuals. Applied Optics (AO), 2018 (Editors' Pick) [PDF]
[7] K. Zhou, K. Yang, K. Wang. Panoramic Depth Estimation via Supervised and Unsupervised Learning in Indoor Scenes. Applied Optics (AO), 2021 [PDF]
[8] K. Yang, K. Wang, H. Chen, J. Bai. IR Stereo RealSense: Decreasing Minimum Range of Navigational Assistance for Visually Impaired Individuals. Journal of Ambient Intelligence and Smart Environments (JAISE), 2017 [PDF]
[9] H. Chen, K. Yang, W. Hu, J. Bai, K. Wang. Semantic Visual Odometry Based on Panoramic Annular Imaging. Acta Optica Sinica, 2021 [PDF]
Autonomous Driving and Human-Computer Interaction:
[1] K. Yang, X. Hu, L.M. Bergasa, E. Romera, K. Wang. PASS: Panoramic Annular Semantic Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2019 [PDF]
[2] K. Yang†, X. Hu, Y. Fang, K. Wang, R. Stiefelhagen. Omnisupervised Omnidirectional Semantic Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2020 [PDF]
[3] A. Jaus, K. Yang†, R. Stiefelhagen. Panoramic Panoptic Segmentation: Insights Into Surrounding Parsing for Mobile Agents via Unsupervised Contrastive Learning. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2023 [PDF]
[4] J. Zhang, K. Yang†, R. Stiefelhagen. Exploring Event-driven Dynamic Context for Accident Scene Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2021 [PDF]
[5] J. Zhang, K. Yang†, A. Constantinescu, K. Peng, K. Müller, R. Stiefelhagen. Trans4Trans: Efficient Transformer for Transparent Object and Semantic Scene Segmentation in Real-World Navigation Assistance. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2022 [PDF]
[6] R. Liu, K. Yang†, A. Roitberg, J. Zhang, K. Peng, H. Liu, Y. Wang, R. Stiefelhagen. TransKD: Transformer Knowledge Distillation for Efficient Semantic Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2024 [PDF]
[7] J. Zhang*, H. Liu*, K. Yang*†, X. Hu, R. Liu, R. Stiefelhagen. CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2023 (Top-10 Popular Article) [PDF]
[8] S. Li, J. Lin, H. Shi, J. Zhang, S. Wang, Y. Yao, Z. Li†, K. Yang†. DTCLMapper: Dual Temporal Consistent Learning for Vectorized HD Map Construction. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2024 [PDF]
[9] J. Lin*, J. Chen*, K. Peng*, X. He, Z. Li†, R. Stiefelhagen, K. Yang†. EchoTrack: Auditory Referring Multi-Object Tracking for Autonomous Driving. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2024 [PDF]
[10] H. Shi*, Y. Zhou*, K. Yang†, X. Yin, Z. Wang, Y. Ye, Z. Yin, S. Meng, P. Li, K. Wang†. PanoFlow: Learning 360° Optical Flow for Surrounding Temporal Understanding. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2023 [PDF]
[11] J. Zhang, C. Ma, K. Yang†, A. Roitberg, K. Peng, R. Stiefelhagen. Transfer beyond the Field of View: Dense Panoramic Semantic Segmentation via Unsupervised Domain Adaptation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2021 [PDF]
[12] Z. Wang*, K. Yang*†, H. Shi, P. Li, F. Gao, J. Bai, K. Wang†. LF-VISLAM: A SLAM Framework for Large Field-of-View Cameras with Negative Imaging Plane on Mobile Agents. IEEE Transactions on Automation Science and Engineering (T-ASE), 2023 [PDF]
[13] Z. Wang, K. Yang†, H. Shi, Y. Zhang, Z. Xu, F. Gao, K. Wang†. LF-PGVIO: A Visual-Inertial-Odometry Framework for Large Field-of-View Cameras using Points and Geodesic Segments. IEEE Transactions on Intelligent Vehicles (T-IV), 2024 [PDF]
[14] H. Shi*, Q. Jiang*, K. Yang†, X. Yin, Z. Wang, K. Wang†. Beyond the Field-of-View: Enhancing Scene Visibility and Perception with Clip-Recurrent Transformer. IEEE Transactions on Intelligent Vehicles (T-IV), 2024 [PDF]
[15] Z. Yi*, H. Shi*, K. Yang†, Q. Jiang, Y. Ye, Z. Wang, K. Wang†. FocusFlow: Boosting Key-Points Optical Flow Estimation for Autonomous Driving. IEEE Transactions on Intelligent Vehicles (T-IV), 2023 [PDF]
[16] L. Sun, K. Yang, X. Hu, W. Hu, K. Wang. Real-time Fusion Network for RGB-D Semantic Segmentation Incorporating Unexpected Obstacle Detection for Road-driving Images. IEEE Robotics and Automation Letters (RA-L), 2020 (Main Publication in Google Scholar Metrics) [PDF]
[17] S. Li, K. Yang†, H. Shi, J. Zhang, J. Lin, Z. Teng, Z. Li†. Bi-Mapper: Holistic BEV Semantic Mapping for Autonomous Driving. IEEE Robotics and Automation Letters (RA-L), 2023 [PDF]
[18] J. Zheng, J. Zhang, K. Yang†, K. Peng, R. Stiefelhagen. MateRobot: Material Recognition in Wearable Robotics for People with Visual Impairments. In ICRA, 2024 (Finalist for Best Paper Award on Human-Robot Interaction) [PDF]
[19] J. Zhang, K. Yang†, R. Stiefelhagen. ISSAFE: Improving Semantic Segmentation in Accidents by Fusing Event-based Data. In IROS, 2021 [PDF]
[20] K. Yang†, L.M. Bergasa, E. Romera, R. Cheng, T. Chen, K. Wang. Unifying Terrain Awareness through Real-Time Semantic Segmentation. In IV, 2018 (Main Publication in Google Scholar Metrics) [PDF]
[21] A. Jaus, K. Yang†, R. Stiefelhagen. Panoramic Panoptic Segmentation: Towards Complete Surrounding Understanding via Unsupervised Contrastive Learning. In IV, 2021 (Best Paper Award) [PDF]
[22] E. Romera, L.M. Bergasa, K. Yang, J.M. Alvarez, R. Barea. Bridging the Day and Night Domain Gap for Semantic Segmentation. In IV, 2019 (Main Publication in Google Scholar Metrics) [PDF]
[23] K. Yang†, L.M. Bergasa, E. Romera, X. Huang, K. Wang. Predicting Polarization beyond Semantics for Wearable Robotics. In Humanoids, 2018 [PDF]
Graduated Students:
Daniel Bucher (Topic: Improving Robustness of 3D Semantic Segmentation via Transformer-based Fusion and Knowledge Distillation);
Yu Li (Topic: Fisheye Semantic Completion: Unifying Extrapolation and Semantic Completion), now at HiRain Technologies;
Fei Teng (Topic: OAFuser: Towards Omni-Aperture Fusion for Light Field Semantic Segmentation), now a Ph.D. student at HNU;
Ke Cao (Topic: Tightly-coupled LiDAR-visual SLAM Based on Geometric Features), now at Akkodis;
Zhifeng Teng (Topic: PanoBEV: Panoramic Semantic Mapping from Monocular Egocentric Images to Holistic Bird's Eye View), now at Solarlab Aiko Europe;
Zihan Chen (Topic: Accessible Chemical Structural Formulas through Interactive Labeling), now at ZF Automotive Technologies;
Xinyu Luo (Topic: Improving Semantic Segmentation of Accident Scenes via Multi-Source Mixed Sampling and Meta-Learning with Transformers), now at the Bank of Communications Data Center;
Ruiping Liu (Topic: Transformer-based Knowledge Distillation for Efficient Semantic Segmentation), now a Ph.D. student at KIT;
Qing Wang (Topic: MatchFormer: Interleaving Attention in Transformers for Feature Matching), now at Huawei;
Wenyan Ou (Topic: Dynamic Visual SLAM with Semantic Information for Seeing Impaired People), now at Continental;
Huayao Liu (Topic: Indoor Scene Understanding for the Visually Impaired Based on Semantic Segmentation), now at NIO;
Alexander Jaus (Topic: Panoramic Panoptic Image Segmentation), now a Ph.D. student at KIT;
Chaoxiang Ma (Topic: Unsupervised Domain Adaptation for Panoramic Semantic Segmentation), now at ByteDance;
Shuo Chen (Topic: An Efficient Network for Scene Change Detection), now at Z-One Technology;
Yingzhi Zhang (Topic: Assisting the Visually Impaired Based on Scene Recognition and Semantic Segmentation), now at Zongmu Technology;
Lukas Vojkovic (Topic: Development and Evaluation of a Computer Vision Based Navigation System for the Visually Impaired);
Jiaming Zhang (Topic: Semantic Segmentation in Accident Scenarios Based on Event Data), now a Ph.D. student at KIT;
Haoye Chen (Topic: Semantic Visual Localization for Visually Impaired People), now at ZF;
Wei Mao (Topic: Efficient Panoptic Segmentation for Navigating the Visually Impaired), now at 吉咖机器人.
Selected Awards:
[1] Excellent Advisor of Undergraduate Theses (Projects), Class of 2024, Hunan University, 2024.06.
[2] IEEE ICRA 2024 Finalist for Best Paper Award on Human-Robot Interaction, 2024.04.
[3] ACCV 2022 Outstanding Reviewer Award, 2022.12.
[4] ECCV 2022 Outstanding Reviewer Award, 2022.10.
[5] IEEE Intelligent Vehicles Symposium (IV) 2021 Best Paper Award, 2021.07.
[6] National Scholarship for Ph.D. Students, 2018.12.
[7] ICFIP 2018 Best Presentation Award, 2018.03.
[8] Champion Award of the 3rd "Chuang Qingchun" China Youth Internet Entrepreneurship Competition, 2017.08.
[9] Gold Award of the 3rd Zhejiang Province "Internet+" College Students Innovation and Entrepreneurship Competition, 2017.07.
[10] Champion & Best Player, Graduation Cup Football Tournament, School of Optics and Photonics, Beijing Institute of Technology, 2014.06.
Representative Patents:
[1] Kailun Yang, Xinxin Hu, Dongming Sun, Huabing Li. A Continuous Segmentation Method for Panoramic Images. Granted. Patent No.: CN202010198068.0.
[2] Kailun Yang, Kaiwei Wang, Ruiqi Cheng. A Single-Camera Polarization Information Prediction Method. Granted. Patent No.: CN201810534076.0.
[3] Kailun Yang, Kaiwei Wang, Honglei Yu, Weijian Hu. Smart Assistive Glasses for the Blind. Granted; secured tens of millions of CNY in Pre-A round financing. Patent No.: CN201610590755.0.
[4] Kailun Yang, Kaiwei Wang, Ruiqi Cheng, Hao Chen. A Sound-Encoding Interaction System Based on an RGB-IR Camera. Transferred (for CNY 600,000). Patent No.: CN201610018944.0.
[5] Kailun Yang, Kaiwei Wang, Chen Wang. A Smart Car Reversing Assistance System and Method. Granted. Patent No.: CN201510186028.3.
Teaching:
[1] Digital Circuits and System Design, 2023-2024, Hunan University.
[2] Machine Vision and Human-Computer Interaction, 2024, Hunan University.
[3] High-Level Academic Paper Writing, 2023, Hunan University.
[4] Professional English for Robotics, 2023, Hunan University.
[5] Deep Learning for Computer Vision – Advanced Topics, 2021-2022, Karlsruhe Institute of Technology.