
Faculty Profile


Kailun Yang (杨恺伦)

Published: 2023-03-06


Kailun Yang, Ph.D., is a Professor, doctoral and master's supervisor, recipient of the National Science Fund for Excellent Young Scientists (Overseas), and Yuelu Scholar at Hunan University. In June 2014, he received dual bachelor's degrees: in Measurement & Control Technology and Instrumentation from Beijing Institute of Technology and in Economics from Peking University. In June 2019, he received his Ph.D. in Measuring and Testing Technologies and Instruments from Zhejiang University. From September 2017 to September 2018, he was a visiting Ph.D. student in the Robotics and Electronic Safety (RobeSafe) research group at the University of Alcalá (UAH), Spain. From November 2019 to January 2023, he conducted postdoctoral research in the Computer Vision for Human-Computer Interaction (CV:HCI) Lab at Karlsruhe Institute of Technology (KIT), Germany. He joined the School of Robotics at Hunan University in February 2023.

His research covers computational optics and computational vision based on multimodal, high-dimensional, omnidirectional sensing, supporting applications such as autonomous driving, assistance for blind and visually impaired people, intelligent transportation, and intelligent robotics. He has published over 100 papers (over 50 as first/corresponding author) in IEEE Transactions including TIP, TNNLS, T-ITS, TMM, T-ASE, T-IV, TIM, and TCI, and at top conferences in computer vision, artificial intelligence, robotics, and multimedia including CVPR, ECCV, AAAI, ICRA, IROS, and ACM MM. He holds, solely or jointly, over 40 patents, 4 of which have been transferred, and won the national grand championship of the "Chuang Qingchun" innovation and entrepreneurship competition organized by the Central Committee of the Communist Youth League. CMX was selected as an IEEE T-ITS Top-10 Popular Article; MateRobot was a Finalist for the Best Paper Award on Human-Robot Interaction at the robotics flagship conference IEEE ICRA; ACNet was the most cited paper of IEEE ICIP 2019. He received the Best Paper Award at the intelligent vehicles flagship conference IEEE IV 2021, the Best Presentation Award at the image processing conference ICFIP 2018, and the Accessibility Challenge Judges' Award at the accessibility conference W4A 2023, and was twice featured as an Editors' Pick in Applied Optics [2018][2019]. He serves as Associate Editor of IEEE Robotics and Automation Letters (RA-L) and of Robot Learning, Local Chair of the International Symposium on Intelligent Robots and Systems (ISoIRS) 2024, BSL Workshop Chair of IEEE IV 2022, and Associate Editor for IEEE ICRA 2024 and IEEE IV 2024. He has reviewed for over 80 journals and conferences, including TPAMI, TIP, IJCV, CVPR, ICCV, ICML, NeurIPS, and ICLR, and received Outstanding Reviewer Awards at ECCV 2022 and ACCV 2022. Students he has supervised have gone on to further study or work at KIT, the University of Oxford, Huawei, ByteDance, NIO, and elsewhere.

Openings are available for postdocs, Ph.D. students, direct-entry Ph.D. students, master's students, and research assistants. The most important goal of education is to shape an independent personality and a free spirit: to encourage attempts, tolerate failure, and cultivate intellectuals who care for their country and aspire to transform their own lives, society, and the world. University education is not merely about imparting knowledge, nor even only about developing ability; more important is creating an egalitarian academic atmosphere and, through interaction with students, inspiring scientific thinking and scientific research that form that "independent personality". The group emphasizes open, equal exchange and discussion, and advocates collaboration in the form of co-work. The Computer Vision for Panoramic Understanding Lab (CV:PU) is a young group with easygoing communication; beyond its own members, students also study and do research alongside students of other advisors at Karlsruhe Institute of Technology, the College of Optical Science and Engineering at Zhejiang University, and the School of Robotics at Hunan University, enabling ample cross-disciplinary collaborative innovation. Ph.D. students have at least two opportunities to attend conferences abroad during their studies, and master's students at least one. Students who excel in research and complete their projects ahead of schedule can be recommended for roughly one year of joint training at renowned overseas groups such as NVIDIA, KIT, TUM, and TU Stuttgart. If you are interested in {Computer Vision, Deep Learning, Scene Understanding, Autonomous Driving}, feel free to contact me.

Email: kailun.yang@hnu.edu.cn

Links: Personal Homepage · Google Scholar · ResearchGate · DBLP · GitHub · Group Gallery


Education and Work Experience:

2023.11 – present  Hunan University, School of Robotics, Professor, doctoral and master's supervisor

2023.02 – 2023.10  Hunan University, School of Robotics, Associate Professor, doctoral and master's supervisor

2019.11 – 2023.01  Karlsruhe Institute of Technology (KIT), Germany, Computer Vision for Human-Computer Interaction (CV:HCI) Lab, Postdoctoral Researcher

2014.09 – 2019.06  Zhejiang University, State Key Laboratory of Modern Optical Instrumentation, Ph.D.

2017.09 – 2018.09  University of Alcalá (UAH), Spain, Robotics and Electronic Safety (RobeSafe) Research Group, Visiting Ph.D. Student

2012.09 – 2014.06  Peking University, National School of Development, Dual Bachelor's Degree in Economics

2010.09 – 2014.06  Beijing Institute of Technology, School of Optics and Photonics, Measurement & Control Technology and Instrumentation, B.Eng.


Research / Student Training Directions:

Computer Vision: deep learning, semantic segmentation, panoptic segmentation, depth estimation, optical flow estimation, knowledge distillation, vision Transformers, etc.

Intelligent Transportation Systems: intelligent vehicles, autonomous driving, scene understanding, domain adaptation, BEV semantic mapping, V2X cooperative perception, semantic scene completion, etc.

Robotics: 3D vision, multimodal perception, sensor fusion, visual odometry, visual localization and mapping, action recognition, human-robot interaction, etc.

Optical Sensing: RGB-D sensing, panoramic imaging, polarization imaging, event cameras, computational imaging, light field imaging, minimalist optical systems, etc.

Assistive Technology: advanced driver-assistance systems (ADAS), wearable assistive systems for blind people, accessibility, etc.


Research Projects:

[1] Accessible Maps: Barrier-free maps to improve the occupational mobility of people with visual or mobility impairments. German Federal Ministry of Labour and Social Affairs (BMAS) project (01KM151112), 2019.11–2022.12 (participant)

[2] KIT Future Fields. KIT campus project, 2021.01–2023.01 (participant)

[3] Research on visual precise localization technology. Industry-funded project (K横20180747), 2018.05–2020.04 (participant)

[4] Research on visual sensing technology fusing multi-dimensional parameters. Industry-funded project (K横20181674), 2018.08–2019.08 (participant)

[5] Semantic Perception for Navigation Assistance. Zhejiang University overseas exchange program, 2017.09–2018.09 (PI)

[6] Visual assistance technology for blind people based on 3D terrain sensing. Public-welfare project of the Department of Agriculture and Social Development (KN20161853), 2016.01–2017.12 (participant)


Representative Publications (all work in the group is conducted collaboratively in the form of co-work):

Computer Vision and Scene Understanding:

[1] K. Yang†, X. Hu, R. Stiefelhagen. Is Context-Aware CNN Ready for the Surroundings? Panoramic Semantic Segmentation in the Wild. IEEE Transactions on Image Processing (TIP), 2021 [PDF]

[2] J. Lin, J. Chen, K. Yang†, A. Roitberg, S. Li, Z. Li†, S. Li. AdaptiveClick: Click-aware Transformer with Adaptive Focal Loss for Interactive Image Segmentation. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024 [PDF]

[3] K. Peng, A. Roitberg, K. Yang†, J. Zhang, R. Stiefelhagen. Delving Deep into One-Shot Skeleton-based Action Recognition with Diverse Occlusions. IEEE Transactions on Multimedia (TMM), 2023 [PDF]

[4] K. Yang†, J. Zhang, S. Reiß, X. Hu, R. Stiefelhagen. Capturing Omni-Range Context for Omnidirectional Segmentation. In CVPR, 2021 [PDF]

[5] J. Zhang, K. Yang†, C. Ma, S. Reiß, K. Peng, R. Stiefelhagen. Bending Reality: Distortion-aware Transformers for Adapting to Panoramic Semantic Segmentation. In CVPR, 2022 [PDF]

[6] J. Zhang*, R. Liu*, H. Shi, K. Yang†, S. Reiß, H. Fu, K. Peng, K. Wang, R. Stiefelhagen. Delivering Arbitrary-Modal Semantic Segmentation. In CVPR, 2023 [PDF]

[7] K. Peng, C. Yin, J. Zheng, R. Liu, D. Schneider, J. Zhang, K. Yang*, M.S. Sarfraz, R. Stiefelhagen, A. Roitberg. Navigating Open Set Scenarios for Skeleton-based Action Recognition. In AAAI, 2024 [PDF]

[8] X. Hu, K. Yang, L. Fei, K. Wang. ACNet: Attention Based Network to Exploit Complementary Features for RGBD Semantic Segmentation. Most Cited Paper at ICIP 2019 [PDF]

[9] Q. Wang*, J. Zhang*, K. Yang†, K. Peng, R. Stiefelhagen. MatchFormer: Interleaving Attention in Transformers for Feature Matching. In ACCV, 2022 [PDF]


Extreme Photonics and Computational Imaging:

[1] S. Gao, K. Yang†, H. Shi, K. Wang†, J. Bai. Review on Panoramic Imaging and Its Applications in Scene Understanding. IEEE Transactions on Instrumentation and Measurement (TIM), 2022 [PDF]

[2] Q. Jiang*, H. Shi*, S. Gao, J. Zhang, K. Yang†, L. Sun, H. Ni, K. Wang†. Computational Imaging for Machine Perception: Transferring Semantic Segmentation beyond Aberrations. IEEE Transactions on Computational Imaging (TCI), 2024 [PDF]

[3] Q. Jiang, H. Shi, L. Sun, S. Gao, K. Yang, K. Wang. Annular Computational Imaging: Capture Clear Panoramic Images through Simple Lens. IEEE Transactions on Computational Imaging (TCI), 2022 [PDF]

[4] K. Xiang, K. Yang, K. Wang. Polarization-driven Semantic Segmentation via Efficient Attention-bridged Fusion. Optics Express (OE), 2021 [PDF]

[5] K. Yang, L.M. Bergasa, E. Romera, K. Wang. Robustifying Semantic Cognition of Traversability across Wearable RGB-Depth Cameras. Editors' Pick at Applied Optics (AO), 2019 [PDF]

[6] K. Yang, K. Wang, H. Chen, J. Bai. Reducing the Minimum Range of a RGB-Depth Sensor to Aid Navigation in Visually Impaired Individuals. Editors' Pick at Applied Optics (AO), 2018 [PDF]

[7] K. Zhou, K. Yang, K. Wang. Panoramic Depth Estimation via Supervised and Unsupervised Learning in Indoor Scenes. Applied Optics (AO), 2021 [PDF]

[8] K. Yang, K. Wang, H. Chen, J. Bai. IR Stereo RealSense: Decreasing Minimum Range of Navigational Assistance for Visually Impaired Individuals. Journal of Ambient Intelligence and Smart Environments (JAISE), 2017 [PDF]

[9] H. Chen, K. Yang, W. Hu, J. Bai, K. Wang. Semantic Visual Odometry Based on Panoramic Annular Imaging. Acta Optica Sinica (光学学报), 2021 [PDF]


Autonomous Driving and Human-Computer Interaction:

[1] K. Yang, X. Hu, L.M. Bergasa, E. Romera, K. Wang. PASS: Panoramic Annular Semantic Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2019 [PDF]

[2] K. Yang†, X. Hu, Y. Fang, K. Wang, R. Stiefelhagen. Omnisupervised Omnidirectional Semantic Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2020 [PDF]

[3] A. Jaus, K. Yang†, R. Stiefelhagen. Panoramic Panoptic Segmentation: Insights Into Surrounding Parsing for Mobile Agents via Unsupervised Contrastive Learning. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2023 [PDF]

[4] J. Zhang, K. Yang†, R. Stiefelhagen. Exploring Event-driven Dynamic Context for Accident Scene Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2021 [PDF]

[5] J. Zhang, K. Yang†, A. Constantinescu, K. Peng, K. Müller, R. Stiefelhagen. Trans4Trans: Efficient Transformer for Transparent Object and Semantic Scene Segmentation in Real-World Navigation Assistance. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2022 [PDF]

[6] J. Zhang*, H. Liu*, K. Yang*†, X. Hu, R. Liu, R. Stiefelhagen. CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers. IEEE Transactions on Intelligent Transportation Systems (T-ITS) Top-10 Popular Article, 2023 [PDF]

[7] H. Shi*, Y. Zhou*, K. Yang†, X. Yin, Z. Wang, Y. Ye, Z. Yin, S. Meng, P. Li, K. Wang†. PanoFlow: Learning 360° Optical Flow for Surrounding Temporal Understanding. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2023 [PDF]

[8] J. Zhang, C. Ma, K. Yang†, A. Roitberg, K. Peng, R. Stiefelhagen. Transfer beyond the Field of View: Dense Panoramic Semantic Segmentation via Unsupervised Domain Adaptation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2021 [PDF]

[9] Z. Wang*, K. Yang*†, H. Shi, P. Li, F. Gao, J. Bai, K. Wang†. LF-VISLAM: A SLAM Framework for Large Field-of-View Cameras with Negative Imaging Plane on Mobile Agents. IEEE Transactions on Automation Science and Engineering (T-ASE), 2023 [PDF]

[10] Z. Wang, K. Yang†, H. Shi, Y. Zhang, Z. Xu, F. Gao, K. Wang†. LF-PGVIO: A Visual-Inertial-Odometry Framework for Large Field-of-View Cameras using Points and Geodesic Segments. IEEE Transactions on Intelligent Vehicles (T-IV), 2024 [PDF]

[11] Z. Yi*, H. Shi*, K. Yang†, Q. Jiang, Y. Ye, Z. Wang, K. Wang†. FocusFlow: Boosting Key-Points Optical Flow Estimation for Autonomous Driving. IEEE Transactions on Intelligent Vehicles (T-IV), 2023 [PDF]

[12] S. Li, K. Yang†, H. Shi, J. Zhang, J. Lin, Z. Teng, Z. Li†. Bi-Mapper: Holistic BEV Semantic Mapping for Autonomous Driving. IEEE Robotics and Automation Letters (RA-L), 2023 [PDF]

[13] J. Zheng, J. Zhang, K. Yang†, K. Peng, R. Stiefelhagen. MateRobot: Material Recognition in Wearable Robotics for People with Visual Impairments. In ICRA Finalist for Best Paper Award on Human-Robot Interaction, 2024 [PDF]

[14] J. Zhang, K. Yang†, R. Stiefelhagen. ISSAFE: Improving Semantic Segmentation in Accidents by Fusing Event-based Data. In IROS, 2021 [PDF]

[15] K. Yang, L.M. Bergasa, E. Romera, R. Cheng, T. Chen, K. Wang. Unifying Terrain Awareness through Real-Time Semantic Segmentation. Main Publication in Google Scholar Metrics at IV 2018 [PDF]

[16] A. Jaus, K. Yang†, R. Stiefelhagen. Panoramic Panoptic Segmentation: Towards Complete Surrounding Understanding via Unsupervised Contrastive Learning. Best Paper Award at IV 2021 [PDF]

[17] E. Romera, L.M. Bergasa, K. Yang, J.M. Alvarez, R. Barea. Bridging the Day and Night Domain Gap for Semantic Segmentation. Main Publication in Google Scholar Metrics at IV 2019 [PDF]

[18] K. Yang, L.M. Bergasa, E. Romera, X. Huang, K. Wang. Predicting Polarization beyond Semantics for Wearable Robotics. In Humanoids, 2018 [PDF]


Graduated Students:

Daniel Bucher (Topic: Improving Robustness of 3D Semantic Segmentation via Transformer-based Fusion and Knowledge Distillation);

Yu Li (李钰) (Topic: Fisheye Semantic Completion: Unifying Extrapolation and Semantic Completion);

Fei Teng (滕飞) (Topic: OAFuser: Towards Omni-Aperture Fusion for Light Field Semantic Segmentation), now: Ph.D. at HNU;

Ke Cao (曹可) (Topic: Tightly-coupled LiDAR-visual SLAM Based on Geometric Features);

Zhifeng Teng (滕志峰) (Topic: PanoBEV: Panoramic Semantic Mapping from Monocular Egocentric Images to Holistic Bird's Eye View), now: Solarlab Aiko Europe;

Zihan Chen (陈子涵) (Topic: Accessible Chemical Structural Formulas through Interactive Labeling), now: ZF Automotive Technologies;

Xinyu Luo (罗心雨) (Topic: Improving Semantic Segmentation of Accident Scenes via Multi-Source Mixed Sampling and Meta-Learning with Transformers), now: Bank of Communications Data Center;

Ruiping Liu (刘瑞平) (Topic: Transformer-based Knowledge Distillation for Efficient Semantic Segmentation), now: Ph.D. at KIT;

Qing Wang (王庆) (Topic: MatchFormer: Interleaving Attention in Transformers for Feature Matching), now: Huawei;

Wenyan Ou (欧文彦) (Topic: Dynamic Visual SLAM with Semantic Information for Seeing Impaired People), now: Continental;

Huayao Liu (刘华耀) (Topic: Indoor Scene Understanding for the Visually Impaired Based on Semantic Segmentation), now: NIO;

Alexander Jaus (Topic: Panoramic Panoptic Image Segmentation), now: Ph.D. at KIT;

Chaoxiang Ma (马超翔) (Topic: Unsupervised Domain Adaptation for Panoramic Semantic Segmentation), now: ByteDance;

Shuo Chen (陈硕) (Topic: An Efficient Network for Scene Change Detection), now: Z-One Technology (零束科技);

Yingzhi Zhang (张樱之) (Topic: Assisting the Visually Impaired Based on Scene Recognition and Semantic Segmentation), now: Zongmu Technology (纵目科技);

Lukas Vojkovic (Topic: Development and Evaluation of a Computer Vision Based Navigation System for the Visually Impaired);

Jiaming Zhang (张嘉明) (Topic: Semantic Segmentation in Accident Scenarios Based on Event Data), now: Ph.D. at KIT;

Haoye Chen (陈皓业) (Topic: Semantic Visual Localization for Visually Impaired People), now: ZF (采埃孚);

Wei Mao (毛威) (Topic: Efficient Panoptic Segmentation for Navigating the Visually Impaired), now: Jika Robotics (吉咖机器人).


Major Awards:

[1] IEEE ICRA Finalist for Best Paper Award on Human-Robot Interaction, 2024.04.

[2] ACCV 2022 Outstanding Reviewer Award, 2022.12.

[3] ECCV 2022 Outstanding Reviewer Award, 2022.10.

[4] Best Paper Award, IEEE Intelligent Vehicles Symposium (IV) 2021, 2021.07.

[5] Outstanding Ph.D. Graduate of Zhejiang Province, 2019.06.

[6] National Scholarship for Doctoral Students, 2018.12.

[7] ICFIP 2018 Best Presentation Award, 2018.03.

[8] Champion, 3rd "Chuang Qingchun" China Youth Internet Entrepreneurship Competition, 2017.08.

[9] Gold Award, 3rd Zhejiang Province "Internet+" College Student Innovation and Entrepreneurship Competition, 2017.07.

[10] Champion & Best Player, Graduation Cup Football Tournament, School of Optics and Photonics, Beijing Institute of Technology, 2014.06.


Representative Patents:

[1] Kailun Yang, Xinxin Hu, Dongming Sun, Huabing Li. A continuity segmentation method for panoramic images. Granted. Patent No.: CN202010198068.0.

[2] Kailun Yang, Kaiwei Wang, Ruiqi Cheng. A single-camera polarization information prediction method. Granted. Patent No.: CN201810534076.0.

[3] Kailun Yang, Kaiwei Wang, Honglei Yu, Weijian Hu. Smart assistive glasses for blind people. Granted; the associated venture secured tens of millions of CNY in Pre-A round financing. Patent No.: CN201610590755.0.

[4] Kailun Yang, Kaiwei Wang, Ruiqi Cheng, Hao Chen. A sound-encoding interaction system based on an RGB-IR camera. Transferred (transfer amount: 600,000 CNY). Patent No.: CN201610018944.0.

[5] Kailun Yang, Kaiwei Wang, Chen Wang. An intelligent vehicle reversing assistance system and method. Granted. Patent No.: CN201510186028.3.


Teaching:

[1] Digital Circuits and System Design, 2023-2024, Hunan University.

[2] Machine Vision and Human-Machine Interaction, 2024, Hunan University.

[3] Academic Writing for High-Quality Research Papers, 2023, Hunan University.

[4] Professional English for Robotics, 2023, Hunan University.

[5] Deep Learning for Computer Vision – Advanced Topics, 2021-2022, Karlsruhe Institute of Technology.


