Mobile robotics & Marine robotics
Vehicle autonomy & Navigation
SLAM (Simultaneous Localization & Mapping)
Computer vision & Robot vision
Spatial AI (Artificial Intelligence)
We have developed a monocular vision-based autonomous docking framework designed for nonholonomic robotic vehicles with limited sensing capabilities. The proposed method integrates bearing-only SLAM with active control, enabling robust relative navigation between the vehicle and docking station using only a monocular camera. At each time step, control inputs are optimized to generate trajectories that improve state observability while strictly satisfying vehicle motion constraints.
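As a rough illustration of the active-control idea (not the published method), the sketch below scores candidate turn rates of a unicycle-type vehicle by the parallax each one induces toward a landmark: a larger change in the line-of-sight angle between successive bearing measurements gives a better triangulation baseline in bearing-only estimation. All function names, the one-step greedy horizon, and the candidate set are illustrative assumptions.

```python
import math

def unicycle_step(x, y, th, v, w, dt=0.1):
    """One step of nonholonomic unicycle kinematics (turn, then translate)."""
    th_n = th + w * dt
    return x + v * math.cos(th_n) * dt, y + v * math.sin(th_n) * dt, th_n

def parallax_score(x, y, xn, yn, lx, ly):
    """Change in the world-frame line-of-sight angle to landmark (lx, ly):
    a crude proxy for the triangulation baseline a motion provides."""
    return abs(math.atan2(ly - yn, lx - xn) - math.atan2(ly - y, lx - x))

def best_turn_rate(x, y, th, lx, ly, v=1.0, candidates=(-0.5, 0.0, 0.5)):
    """Greedy one-step lookahead over an admissible set of turn rates."""
    scored = []
    for w in candidates:
        xn, yn, _ = unicycle_step(x, y, th, v, w)
        scored.append((parallax_score(x, y, xn, yn, lx, ly), w))
    return max(scored)[1]
```

Note that driving straight at the landmark yields zero parallax, so this proxy correctly prefers motion with a component perpendicular to the line of sight, which is what makes range observable from bearings alone.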
We have developed an integrated guidance, navigation, and control (GNC) framework for autonomous quadruped robots operating in complex environments. The proposed system combines LiDAR-based mapping, map-based localization, and grid-based path planning within a closed-loop control pipeline. Using onboard sensor measurements, the robot constructs a 3D point cloud map, estimates its pose in real time, and generates feasible paths through a traversability grid. This integrated perception–planning–control framework enables reliable autonomous navigation for legged robots in challenging real-world environments.
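The grid-based planning stage can be illustrated with a textbook A* search over a binary traversability grid. This is a generic sketch, not the project's planner; the grid encoding (1 = traversable, 0 = blocked), 4-connectivity, and unit step cost are assumptions.

```python
import heapq

def astar(grid, start, goal):
    """A* over a traversability grid with Manhattan-distance heuristic.
    grid[r][c] == 1 means traversable; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = node[0] + dr, node[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None
```

Because the Manhattan heuristic is consistent on a 4-connected grid, the first time the goal is popped the path is cost-optimal.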
We have developed a robust underwater object detection framework using two-dimensional forward-looking sonar (FLS) imagery, targeting autonomous operations in low- or zero-visibility environments. Unlike learning-based approaches that rely on prior target information, the proposed method detects underwater objects directly from acoustic sonar images without requiring predefined object models. To enhance robustness against false positives caused by acoustic artifacts, a successive image registration scheme is applied to sequential detection results, significantly improving detection reliability over time.
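One way to read the successive-registration idea is as a persistence check: a detection is trusted only if it reappears near the same (registration-compensated) location across several frames, while one-off artifact responses do not persist. The sketch below assumes detections have already been registered into a common frame; the function name, hit count, and gating radius are illustrative.

```python
def persistent_detections(frames, min_hits=3, radius=2.0):
    """Keep detections from the latest frame that were also observed
    (within `radius`) in at least min_hits - 1 earlier registered frames.
    `frames` is a list of per-frame lists of (x, y) detections."""
    latest = frames[-1]
    keep = []
    for (x, y) in latest:
        hits = sum(
            any((x - u) ** 2 + (y - v) ** 2 <= radius ** 2 for (u, v) in f)
            for f in frames[:-1]
        )
        if hits + 1 >= min_hits:           # +1 counts the latest frame itself
            keep.append((x, y))
    return keep
```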
We have developed a simulation-aided image translation framework to enhance underwater object detection and relative navigation using imaging sonar. Since GPS signals cannot propagate underwater, autonomous marine systems must rely on acoustic sensing, where sonar imagery is often degraded by low signal-to-noise ratios and acoustic artifacts. To address this challenge, the proposed method employs a deep learning-based unpaired image translation model trained with simulated sonar data, enabling the model to learn domain-specific noise characteristics and effectively suppress background clutter in real sonar images.
We have developed an adaptive weld line detection framework to support precise localization of underwater hull-cleaning robots operating on full-scale ships. Weld lines on ship hull surfaces provide valuable structural cues for robot positioning. However, deep learning-based detection models often suffer significant performance degradation when applied to visually different hull surfaces not included in the original training dataset. To address this challenge, the proposed framework integrates a deep learning-based object detector with an unpaired image translation method, enabling improved robustness across varying visual domains without requiring paired training data.
We have developed a ping-level side-scan sonar (SSS) SLAM framework for reliable underwater localization in feature-scarce seabed environments. Unlike conventional image-based SSS SLAM approaches that rely on stacked acoustic images and are vulnerable to geometric distortion and unreliable data association, the proposed method directly exploits raw backscatter intensity profiles at the ping level. By modeling the nominal seafloor response and detecting structurally salient deviations, robust landmark measurements are extracted and integrated into a landmark-based SLAM framework.
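A minimal version of the "nominal response plus salient deviation" idea, assuming a 1-D backscatter intensity profile per ping: estimate the nominal seafloor return with a running median and flag range bins whose residual is large relative to the median absolute deviation (MAD). The window size and threshold are illustrative, and this omits the cross-ping association the actual framework performs.

```python
import statistics

def detect_salient_bins(ping, window=5, k=3.0):
    """Return indices of range bins whose backscatter deviates saliently
    from a running-median estimate of the nominal seafloor response."""
    n = len(ping)
    nominal = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        nominal.append(statistics.median(ping[lo:hi]))
    resid = [p - m for p, m in zip(ping, nominal)]
    # Robust scale estimate; floor avoids division issues on flat profiles
    mad = statistics.median(abs(r) for r in resid) or 1e-9
    return [i for i, r in enumerate(resid) if abs(r) > k * mad]
```

The running median is what makes this usable on feature-scarce seabeds: an isolated strong return barely shifts the local median, so it stands out cleanly in the residual.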
We have developed a 3D multibeam sonar-based SLAM framework for underwater environments where GPS signals are unavailable and optical sensing is unreliable. The proposed approach leverages a limited field-of-view multibeam sonar sensor and integrates dead-reckoning factors with sonar-inertial odometry within a pose graph SLAM framework. A direct point cloud registration method is employed to ensure robust relative localization despite constrained sensing coverage.
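For known point correspondences, rigid registration between two scans has a closed-form solution (Kabsch/Umeyama). The 2-D sketch below illustrates that core step under the known-correspondence assumption; real sonar registration must additionally handle association, outliers, and 3-D geometry, so this is an illustration rather than the method itself.

```python
import math

def align_2d(src, dst):
    """Closed-form 2-D rigid alignment: returns (theta, tx, ty) such that
    rotating src by theta and translating by (tx, ty) best fits dst,
    assuming src[i] corresponds to dst[i]."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = syy = sxy = syx = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x -= csx; y -= csy; u -= cdx; v -= cdy
        sxx += x * u; syy += y * v; sxy += x * v; syx += y * u
    theta = math.atan2(sxy - syx, sxx + syy)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty
```

In a pose graph SLAM setting, the recovered relative transform (plus a covariance) would be added as a loop-closure or odometry factor between the two sonar keyframes.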
We have developed a terrain-based place recognition algorithm for quadruped robots equipped with limited field-of-view solid-state LiDAR sensors. While solid-state LiDAR offers advantages in cost, durability, and mechanical simplicity, its restricted sensing range makes reliable place recognition highly challenging—often leading to degraded SLAM performance. To address this limitation, the proposed method reconstructs and compares feature-based terrain representations, leveraging foot contact information unique to quadruped locomotion.
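Feature-based terrain comparison can be illustrated with a toy descriptor: a normalized histogram of terrain heights sampled at foot contacts, compared by cosine similarity. The descriptor, binning range, and similarity measure here are illustrative assumptions, not the published representation.

```python
import math

def terrain_descriptor(heights, bins=8, h_min=-1.0, h_max=1.0):
    """Normalized histogram of foot-contact heights (metres)."""
    hist = [0] * bins
    for h in heights:
        i = min(bins - 1, max(0, int((h - h_min) / (h_max - h_min) * bins)))
        hist[i] += 1
    total = sum(hist) or 1
    return [c / total for c in hist]

def cosine_similarity(a, b):
    """Cosine similarity between two descriptors; 1.0 means identical shape."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

A place-recognition query would then match the current descriptor against stored ones and accept the best match above a similarity threshold, triggering a loop-closure candidate for SLAM.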
We have developed a line segment-based monocular visual SLAM framework for indoor mobile robots equipped with an obliquely downward-facing camera. In GPS-denied indoor environments, dead-reckoning inevitably accumulates drift, and conventional point feature-based SLAM often struggles in low-texture or repetitive floor patterns. The proposed method instead exploits ground-level line segment features, combined with homography-based perspective correction and pixel-to-meter scale estimation, to achieve metrically consistent localization without full camera calibration.
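The homography-based correction for an obliquely downward-facing camera amounts to inverse perspective mapping: back-projecting each pixel to a ray and intersecting it with the ground plane. A pinhole sketch assuming known camera height and tilt (in the method itself the metric scale is estimated rather than given, and no full calibration is required):

```python
import math

def pixel_to_ground(u, v, fx, fy, cx, cy, height, tilt):
    """Map pixel (u, v) to ground-plane coordinates in metres.
    Pinhole camera at `height` m, optical axis `tilt` rad below horizontal.
    Camera frame: z forward, x right, y down; world: X forward, Y right, Z up."""
    x = (u - cx) / fx
    y = (v - cy) / fy
    # Ray direction rotated into the world frame
    dx = math.cos(tilt) - y * math.sin(tilt)
    dy = x
    dz = -(math.sin(tilt) + y * math.cos(tilt))
    s = height / -dz              # scale at which the ray reaches Z = 0
    return s * dx, s * dy         # (forward, lateral) distance in metres
```

Once two pixels with known metric separation are available (e.g., endpoints of a line segment of known length), the pixel-to-meter scale falls out of the same mapping, which is what keeps the estimated trajectory metrically consistent.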
We have developed a multi-target tracking filter for estimating the trajectories of multiple underwater vehicles operating without GPS. Because GPS signals cannot propagate underwater, multi-vehicle navigation often relies on relative measurements referenced to a surface vehicle. The proposed approach employs a Joint Probabilistic Data Association Filter (JPDAF) to robustly estimate 3D trajectories in the presence of cluttered measurements caused by sensor noise, reflections, and environmental disturbances.
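At the heart of (J)PDA filtering is weighting each gated measurement by its likelihood under the predicted measurement density against a clutter/missed-detection hypothesis, then updating the state with the probability-weighted innovation. The stripped-down single-target, 1-D version below (scalar innovation variance, simplified clutter model relative to the full JPDAF) shows only that weighting step; all parameter values are illustrative.

```python
import math

def pda_weights(z_pred, S, measurements, pd=0.9, clutter_density=0.1):
    """Association probabilities for a single target with scalar innovation
    variance S. Returns [beta_0, beta_1, ...], where beta_0 is the
    no-detection (all clutter) hypothesis and beta_i weights measurement i."""
    def gauss(z):
        return math.exp(-0.5 * (z - z_pred) ** 2 / S) / math.sqrt(2 * math.pi * S)
    lik = [pd * gauss(z) for z in measurements]
    b0 = (1.0 - pd) * clutter_density     # missed-detection hypothesis weight
    norm = b0 + sum(lik)
    return [b0 / norm] + [l / norm for l in lik]
```

In the full JPDAF these weights are computed jointly over all targets so that one measurement cannot be fully claimed by two vehicles at once, which is what gives robustness in cluttered acoustic measurement sets.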
Chaewon Kim, Seonghun Hong*, Jeonghong Park, Jinwoo Choi, and Hye-Jin Kim, "Generation of navigation database using AIS data for remote situational awareness of coastal vessels," Applied Ocean Research, 154:104401, Jan. 2025.
Chaewon Kim, Seonghun Hong*, Jeonghong Park, Jinwoo Choi, and Hye-Jin Kim, "Remote situational awareness for coastal vessels using AIS data-based navigation pattern DB," In Late Breaking Results Poster, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, UAE, Oct. 2024.
Chaewon Kim, Seonghun Hong*, Jeonghong Park, Jinwoo Choi, and Hye-Jin Kim, "Safe route generation for coastal vessels using a long-term AIS data-based navigation pattern DB," In Proceedings of the Joint Conference of the Korean Association of Ocean Science and Technology Societies (KAOSTS), May 8-10, 2025. (in Korean)
Chaewon Kim, Seonghun Hong*, Jeonghong Park, Jinwoo Choi, and Hye-Jin Kim, "Efficient estimation of position probability distributions for remote situational awareness of coastal vessels based on long-term AIS data," In Proceedings of the Korea Robotics Society Annual Conference (KRoC), Feb. 21-24, 2024. (in Korean)
Chaewon Kim, Seonghun Hong*, Jeonghong Park, Jinwoo Choi, and Hye-Jin Kim, "Generation of an AIS data-based navigation pattern database for remote situational awareness of coastal vessels," In Proceedings of the Joint Conference of the Korean Association of Ocean Science and Technology Societies (KAOSTS), May 2-4, 2023. (in Korean)
Jeonghong Park, Jinwoo Choi, Chaewon Kim, Seonghun Hong*, and Hye-Jin Kim, "Design of an emergency situation awareness framework for autonomous surface ships based on route prediction and track information of the operating area," In Proceedings of the Korean Institute of Navigation and Port Research (KINPR) Fall Conference, Nov. 10-11, 2022. (in Korean)
Chaewon Kim, Seonghun Hong*, Jeonghong Park, Jinwoo Choi, and Hye-Jin Kim, "A study on estimating position probability distributions of ship trajectories using AIS data," In Proceedings of the Joint Conference of the Korean Association of Ocean Science and Technology Societies (KAOSTS), June 2-4, 2022. (in Korean)
Best Student Paper Award (Chaewon Kim), Spring Conference of the Korean Underwater and Surface Robot Technology Research Society, 2023.
Vision-based navigation of an underwater vehicle for automated inspection of fish cages
Design of a standard software architecture for autonomous underwater vehicles
Terrain-based navigation for autonomous operation of unmanned vehicles in 3D space