Categories: Undergraduate, ypec-2024

UG26 – Autonomous navigation of surgical robots through deep reinforcement learning of simulated environment interaction

This project presents advances in the use of artificial intelligence (AI) for precise control of robotic systems in surgical procedures, focusing on closed-loop control strategies. Closed-loop control employing data-driven feedback mechanisms such as deep reinforcement learning (DRL) and imitation learning (IL) offers adaptability to unstructured environments, but it requires substantial training data, raising challenges such as data scarcity and privacy concerns. We address these challenges by proposing end-to-end training strategies for motion planning, mimicking human behavior through DRL-driven autonomous exploration of and interaction with the environment. A key focus is the navigation of flexible endoscopes within the gastrointestinal tract, which is vital for diagnostic and therapeutic procedures. By modeling tissue deformation with the finite element method (FEM) and training control policies through on-policy DRL, we enable autonomous navigation in dynamic soft-tissue environments. Our contributions include employing model-free DRL to control tendon-driven flexible robots in contact scenarios and integrating intuitive hand-gesture recognition for intraoperative intervention, enhancing safety and efficiency. This research underscores the potential of AI-driven control systems in revolutionizing robotic surgery, promising precise and adaptable robotic interventions with improved patient outcomes.
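To illustrate the on-policy DRL idea mentioned above, the following is a minimal sketch of an on-policy policy-gradient (REINFORCE) loop on a toy 1-D navigation task. This is not the project's actual method or simulator: the environment, reward shaping, and hyperparameters are all illustrative assumptions, standing in for the far more complex FEM-simulated endoscope environment.

```python
import numpy as np

# Toy 1-D "navigation" task: agent starts at position 0 and must
# reach position +4 within a step budget. Positions are clipped to
# [-4, +4]; actions are {left, right}. All values are illustrative.
rng = np.random.default_rng(0)
GOAL, MAX_STEPS, N_POS = 4, 10, 9  # states index positions -4..+4

# Tabular policy: one pair of action logits per discrete position.
theta = np.zeros((N_POS, 2))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def run_episode():
    """Roll out one episode with the current (on-policy) policy."""
    pos, traj = 0, []
    for _ in range(MAX_STEPS):
        s = pos + 4                      # map position to state index
        a = rng.choice(2, p=softmax(theta[s]))
        pos = max(-4, min(4, pos + (1 if a == 1 else -1)))
        r = 1.0 if pos == GOAL else -0.05  # sparse goal + step penalty
        traj.append((s, a, r))
        if pos == GOAL:
            break
    return traj

alpha, gamma = 0.5, 0.95
for _ in range(300):           # on-policy: fresh rollout per update
    traj = run_episode()
    G = 0.0
    for s, a, r in reversed(traj):
        G = r + gamma * G      # discounted return from this step
        p = softmax(theta[s])
        grad = -p              # grad of log pi(a|s) w.r.t. logits
        grad[a] += 1.0
        theta[s] += alpha * G * grad   # REINFORCE update

# After training, the start state should prefer moving toward the goal.
print(softmax(theta[4]))
```

The "on-policy" aspect is that each gradient update uses a rollout collected with the current policy, which is then discarded; this mirrors (in miniature) how on-policy DRL interleaves simulated interaction and policy improvement.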
