Jiarong Lin

Ph.D. in Robotics🤖

The University of Hong Kong

    I am Jiarong Lin (林家荣). I graduated from The University of Hong Kong (HKU) with a Ph.D. in robotics. My research interests lie in the areas of Simultaneous Localization and Mapping (SLAM), Multi-Sensor Fusion, and 3D Reconstruction. I have a proven track record of producing high-quality research: I am the first author of 9 papers, including 2 T-RO papers, 1 T-PAMI paper (in revision), and 1 RA-L journal paper, as well as 3 ICRA and 2 IROS conference papers (see my publication list for details).
    In addition to my academic pursuits, I am also an active open-source contributor😊. I have benefited greatly from the open-source community, and in return I have dedicated my contributions back to it. I have made all the code for my publications available on GitHub, where it has received over 7.8k stars⭐ from the community. Some of my most popular works include R3LIVE (★1.8k), FAST-LIO (★2.1k), loam-livox (★1.4k), R2LIVE (★0.7k), and ImMesh🆕 (★0.5k).
    I am dedicated to producing high-quality research and making meaningful contributions to both academia and industry.

Skills

LiDAR SLAM

Leading LiDAR SLAM researcher and developer in both the academic and industrial communities

Multi-Sensor Fusion

Expert in tightly-coupled LiDAR-Inertial, Visual-Inertial, and LiDAR-Inertial-Visual fusion

3D reconstruction

Expert in point-cloud mapping, triangle-mesh reconstruction, and texturing

Robotics

Skilled in drone assembly, UAV motion planning, and control

C/C++

Over 12 years of experience

CAD

Skilled in 3D printing, OpenGL, Unreal Engine, SolidWorks, and 3ds Max

Experience

The University of Hong Kong (HKU)
Ph.D. in Robotics
January 2019 – October 2023 Hong Kong

Research interests include:

  • LiDAR SLAM
  • Sensor Fusion
  • 3D reconstruction
 
Hong Kong University of Science and Technology (HKUST)
Ph.D. student
August 2018 – January 2019 Hong Kong

Research interests include:

  • UAV motion planning and control
  • Deep reinforcement learning
 
Da-Jiang Innovations (DJI)
Computer Vision Engineer
August 2015 – July 2018 Shenzhen

At DJI, I worked on:

University of Electronic Science and Technology of China (UESTC)
Bachelor of Engineering
September 2011 – July 2015 Chengdu

Specialization:

  • Optical Information Science and Technology

My popular open-source projects

🆕ImMesh: An Immediate LiDAR Localization and Meshing Framework
ImMesh is a novel LiDAR(-inertial) odometry and meshing framework that takes LiDAR data as input and achieves simultaneous localization and meshing in real time. ImMesh comprises four tightly-coupled modules: receiver, localization, meshing, and broadcaster. The localization module utilizes the preprocessed sensor data from the receiver, estimates the sensor pose online by registering LiDAR scans to the map, and dynamically grows the map. Our meshing module then takes each registered LiDAR scan and incrementally reconstructs the triangle mesh on the fly. Finally, the real-time odometry, map, and mesh are published via our broadcaster.
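
For intuition, here is a minimal C++ skeleton of the four-module data flow described above (receiver → localization → meshing → broadcaster). All class and function names are illustrative placeholders rather than the ImMesh API, and every stage is a stub.

```cpp
// Hypothetical skeleton of the receiver -> localization -> meshing ->
// broadcaster data flow; names are placeholders, not the ImMesh API.
#include <vector>

struct LidarScan { std::vector<float> xyz; double stamp = 0.0; };
struct Pose { double xyz[3] = {0}; double rpy[3] = {0}; };
struct TriangleMesh { /* vertices and facets omitted in this sketch */ };

class ImmediateMeshingPipeline {            // hypothetical class name
 public:
  // Called once per incoming LiDAR scan.
  void OnScan(const LidarScan& raw) {
    LidarScan scan = Preprocess(raw);       // receiver: filter/undistort input
    Pose pose = RegisterToMap(scan);        // localization: scan-to-map pose
    GrowMap(scan, pose);                    // localization: append to map
    UpdateMesh(scan, pose);                 // meshing: incremental triangles
    Publish(pose, mesh_);                   // broadcaster: odometry/map/mesh
  }

 private:
  // Stubs only -- the real modules are far more involved.
  LidarScan Preprocess(const LidarScan& s) { return s; }
  Pose RegisterToMap(const LidarScan&) { return Pose{}; }
  void GrowMap(const LidarScan&, const Pose&) {}
  void UpdateMesh(const LidarScan&, const Pose&) {}
  void Publish(const Pose&, const TriangleMesh&) {}

  TriangleMesh mesh_;
};
```
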
R$^3$LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package
R$^3$LIVE is a versatile and well-engineered system for a wide range of possible applications: it can not only serve as a SLAM system for real-time robotic applications but can also reconstruct dense, precise, RGB-colored 3D maps for applications such as surveying and mapping. In addition, we have developed a series of offline utilities for reconstructing and texturing meshes for various 3D applications.
FAST-LIO2: Fast Direct LiDAR-inertial Odometry
FAST-LIO (Fast LiDAR-Inertial Odometry) is a computationally efficient and robust LiDAR-inertial odometry package. It fuses LiDAR points with IMU data using a tightly-coupled iterated extended Kalman filter to allow robust navigation in fast-motion, noisy, or cluttered environments where degeneration occurs. Our package addresses many key issues: 1) a fast error-state iterated Kalman filter (ESIKF) for odometry optimization; 2) incremental mapping using an ikd-Tree, achieving faster speed and supporting LiDAR update rates of over 100 Hz; 3) direct registration without the need for feature extraction, so FAST-LIO2 can support many types of LiDAR, including spinning (Velodyne, Ouster) and solid-state (Livox Avia, Horizon, MID-70) LiDARs, and can easily be extended to support more.
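
As a rough illustration of the iterated Kalman update at the heart of this class of LiDAR-inertial odometry, below is a minimal, self-contained C++/Eigen sketch of an iterated EKF measurement update on a toy 3-D state with a single range measurement. The state, the measurement model, and all names are hypothetical simplifications and are not code from FAST-LIO2, which operates on a full error state (pose, velocity, IMU biases) with point-to-plane LiDAR residuals.

```cpp
// Toy iterated (extended) Kalman filter measurement update in C++/Eigen.
// Illustrative sketch only -- NOT the FAST-LIO2 implementation.
#include <Eigen/Dense>
#include <iostream>

using Vec3 = Eigen::Vector3d;
using Mat3 = Eigen::Matrix3d;

// Toy measurement model: range from the origin to the estimated position.
double h(const Vec3& x) { return x.norm(); }

// Gradient (transposed Jacobian) of the range measurement w.r.t. the state.
Vec3 grad_h(const Vec3& x) { return x / x.norm(); }

int main() {
  const Vec3 x_prior(1.0, 2.0, 2.0);   // predicted (prior) state
  Mat3 P = Mat3::Identity() * 0.5;     // prior covariance
  const double R = 0.01;               // measurement noise variance
  const double z = 3.2;                // observed range

  // Iterated update: re-linearize the measurement model about the latest
  // estimate until the state increment becomes negligible.
  Vec3 x = x_prior;
  for (int iter = 0; iter < 10; ++iter) {
    const Vec3 g = grad_h(x);                  // H^T at the current estimate
    const double S = g.dot(P * g) + R;         // innovation variance
    const Vec3 K = P * g / S;                  // Kalman gain
    const double innov = z - h(x) - g.dot(x_prior - x);
    const Vec3 x_next = x_prior + K * innov;
    const bool converged = (x_next - x).norm() < 1e-9;
    x = x_next;
    if (converged) break;
  }

  // Covariance update at the final linearization point.
  const Vec3 g = grad_h(x);
  const Vec3 K = P * g / (g.dot(P * g) + R);
  P = (Mat3::Identity() - K * g.transpose()) * P;

  std::cout << "posterior state: " << x.transpose() << std::endl;
  return 0;
}
```

At convergence this iteration is equivalent to a Gauss-Newton step on the maximum-a-posteriori problem, which is why the iterated update stays accurate when the measurement model is strongly nonlinear.
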
R$^2$LIVE: A Robust, Real-time, LiDAR-Inertial-Visual tightly-coupled state Estimator and mapping
R$^2$LIVE is a robust, real-time, tightly-coupled multi-sensor fusion framework that fuses measurements from LiDAR, an inertial sensor, and a visual camera to achieve robust and accurate state estimation. Taking advantage of the measurements from all individual sensors, our algorithm is robust to various visual-failure and LiDAR-degenerated scenarios, and is able to run in real time on an on-board computation platform, as shown by extensive experiments conducted in indoor, outdoor, and mixed environments of different scales.
LOAM_Livox: A robust LiDAR Odometry and Mapping (LOAM) package for Livox-LiDAR
Loam-Livox is a robust, low-drift, and real-time odometry and mapping package for Livox LiDARs, which are low-cost, high-performance LiDARs designed for mass industrial use. Our package addresses many key issues: feature extraction and selection in a very limited FOV, robust outlier rejection, moving-object filtering, and motion-distortion compensation. In addition, we integrate other features such as a parallelizable pipeline, point-cloud management using cells and maps, loop closure, and utilities for saving and reloading maps. For more details, please refer to our related paper :)
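
To illustrate one of the ingredients mentioned above, motion-distortion compensation, here is a short C++/Eigen sketch that re-projects every point of a scan into the end-of-scan frame by interpolating the sensor motion with the per-point timestamp. The types and names (TimedPoint, ScanMotion, Deskew) and the constant-velocity assumption are hypothetical simplifications, not code taken from the loam_livox repository.

```cpp
// Minimal sketch of LiDAR motion-distortion compensation ("de-skewing").
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <vector>

struct TimedPoint {
  Eigen::Vector3d p;   // point measured in the LiDAR frame at capture time
  double t = 0.0;      // capture time, normalized to [0, 1] over the scan
};

// Relative motion of the LiDAR from scan start to scan end (e.g. predicted
// from the previous odometry estimate or from IMU integration).
struct ScanMotion {
  Eigen::Quaterniond dq = Eigen::Quaterniond::Identity();  // rotation
  Eigen::Vector3d dt = Eigen::Vector3d::Zero();            // translation
};

// Re-express every point in the LiDAR frame at the *end* of the scan,
// assuming constant velocity over the sweep.
std::vector<Eigen::Vector3d> Deskew(const std::vector<TimedPoint>& scan,
                                    const ScanMotion& motion) {
  std::vector<Eigen::Vector3d> out;
  out.reserve(scan.size());
  const Eigen::Quaterniond q0 = Eigen::Quaterniond::Identity();
  for (const TimedPoint& pt : scan) {
    const double s = pt.t;                                  // elapsed fraction
    const Eigen::Quaterniond q_s = q0.slerp(s, motion.dq);  // rotation at time s
    const Eigen::Vector3d t_s = s * motion.dt;              // translation at time s
    const Eigen::Vector3d p_start = q_s * pt.p + t_s;       // into start frame
    // Transform from the start frame into the end-of-scan frame.
    const Eigen::Vector3d p_end = motion.dq.conjugate() * (p_start - motion.dt);
    out.push_back(p_end);
  }
  return out;
}
```
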

Publications

(2022). Fast 3D Sparse Topological Skeleton Graph Generation for Mobile Robot Global Planning. Accepted to IROS2022.


(2022). MARSIM: A light-weight point-realistic simulator for LiDAR-based UAVs. Accepted to RA-L2023.


(2019). A Screen-Based Method for Automated Camera Intrinsic Calibration on Production Lines. Accepted to CASE2019.


Talks

Simultaneous Localization and Mapping with Multi-sensor Fusion

Invited by shenlanxueyuan.com, I gave an online talk on “Simultaneous Localization and Mapping with Multi-sensor Fusion”. In this talk, I shared the research from my Ph.D. studies.

Contact

  • ziv.lin.ljrATgmail.com
  • +852 5624 9033
  • LG-02, Haking Wong Building, Pok Fu Lam Road,
The University of Hong Kong (HKU),
    Hong Kong SAR, China
  • 10:00 AM to 10:00 PM
  • ziv-lin
  • ziv.lin
  • Search for id: Ziv-Lin-LJR