I am currently a PhD student at KAIST, advised by Prof. Kyung-Soo Kim.
My research focuses on control and state estimation for legged robots, particularly on developing agile and dynamic motions for quadruped robots and humanoids. I’m also interested in reinforcement learning frameworks to enable robots to reason about their body dynamics and interactions with complex environments, ultimately achieving more adaptive and versatile behaviors.
If you have any questions or would like to discuss ideas, feel free to reach out via email!
selected publications
Legged Robot State Estimation Using Invariant Neural-Augmented Kalman Filter with a Neural Compensator
Seokju Lee,
Hyun-Bin Kim,
and Kyung-Soo Kim
IROS
2025
[Abs]
[arXiv]
[Website]
This paper presents an algorithm to improve state estimation for legged robots. Among existing model-based state estimation methods for legged robots, the contact-aided invariant extended Kalman filter defines the state on a Lie group to preserve invariance, thereby significantly accelerating convergence. It achieves more accurate state estimation by leveraging contact information as measurements in the update step. However, when the model exhibits strong nonlinearity, estimation accuracy decreases: such nonlinearities can cause initial errors to accumulate and lead to large drift over time. To address this issue, we propose compensating for these errors by augmenting the Kalman filter with an artificial neural network that serves as a nonlinear function approximator. Furthermore, we design this neural network to respect the Lie group structure so that invariance is preserved, resulting in our proposed Invariant Neural-Augmented Kalman Filter (InNKF). The proposed algorithm offers improved state estimation performance by combining the strengths of model-based and learning-based approaches.
Learning Legged Mobile Manipulation Using Reinforcement Learning
Seokju Lee,
Seunghun Jeon,
and Jemin Hwangbo
RiTA
2022
[Abs]
[arXiv (available soon)]
Many studies on quadrupedal manipulators have aimed to extend the workspace of the end-effector. Many of these, especially recent ones, use model-based control for the arm and learning-based control for the legs, while some rely solely on model-based control for both the base and the arm. However, model-based controllers such as MPC can become computationally inefficient when there are many contacts between the end-effector and the object. Moreover, the dynamics of the interaction between a quadrupedal manipulator and an object in contact are complex and often unpredictable without high-resolution contact sensors on the end-effector. In this study, we investigate the feasibility of using a reinforcement learning strategy to control the end-effector of a legged mobile manipulator. The proposed framework is verified on a combined walking and end-effector tracking task in a simulation environment.