Papers

ARTEMIS: AI-Driven Robotic Triage Labeling and Emergency Information System

Dataset Human-Robot Interaction GUI
Submitted to IEEE-IROS 2024

Revanth Krishna Senthilkumaran, Mridu Prashanth, Hrishikesh Viswanath, Sathvika Kotha, Kshitij Tiwari, Aniket Bera

Mass casualty incidents (MCIs) present major challenges for emergency medical services, overwhelming resources and demanding efficient victim assessment. We introduce ARTEMIS, an AI-driven robotic system that combines speech processing, natural language processing, and deep learning for acuity classification, deployed on a mobile quadruped for victim localization and severity assessment. Our simulations and outdoor tests show that ARTEMIS achieves triage classification precision exceeding 74% overall and 99% for the most critical cases, significantly outperforming existing deep-learning-based triage systems.

Read Paper Watch Video
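The acuity labels ARTEMIS predicts follow standard triage categories. As a hedged illustration of that label space only (the paper's classifier is learned from speech and NLP features, not hand-written rules), here is a minimal START-style binning sketch; the vitals fields are hypothetical inputs:

```python
# Illustrative START-style triage binning (NOT the ARTEMIS classifier):
# this rule-based sketch only shows the label space such a system targets.
from dataclasses import dataclass

@dataclass
class VictimVitals:
    ambulatory: bool           # can the victim walk?
    breathing: bool
    respiratory_rate: float    # breaths per minute
    capillary_refill_s: float  # seconds
    obeys_commands: bool

def start_acuity(v: VictimVitals) -> str:
    """Map vitals to the four standard START triage categories."""
    if v.ambulatory:
        return "minor"         # green tag
    if not v.breathing:
        return "expectant"     # black tag
    if v.respiratory_rate > 30 or v.capillary_refill_s > 2 or not v.obeys_commands:
        return "immediate"     # red tag
    return "delayed"           # yellow tag

print(start_acuity(VictimVitals(False, True, 34.0, 1.5, True)))  # -> "immediate"
```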

UPPLIED: UAV Path Planning for Learning from Demonstration

Path Planning UAV/UAS Inspection RL/LfD
Published in IEEE-IROS 2023

Shyam Sundar Kannan*, Vishnunandan L. N. Venkatesh*, Revanth Krishna Senthilkumaran, Byung-Cheol Min

* - Equal Contribution

In this paper, we present UPPLIED, a novel path-planning framework for UAV-based visual inspection of large structures. Leveraging demonstrated trajectories, UPPLIED generates new, optimized paths for inspecting similar structures while maintaining geometric consistency and focus on the targeted regions. Experiments demonstrate that the approach adapts inspection trajectories across different structures with minimal deviation.

Read Paper Watch Video
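As a toy geometric sketch of the trajectory-transfer idea (not UPPLIED's actual optimization, which also enforces geometric consistency and region focus), one could normalize demonstrated waypoints by the source structure's bounding box and rescale them into the target's:

```python
# Toy trajectory transfer between structures via bounding-box rescaling.
# A simplistic stand-in for demonstration-based path adaptation.
import numpy as np

def transfer_trajectory(demo_pts, src_min, src_max, dst_min, dst_max):
    """Map Nx3 demonstrated waypoints from one bounding box to another."""
    demo_pts = np.asarray(demo_pts, float)
    src_min, src_max = np.asarray(src_min, float), np.asarray(src_max, float)
    dst_min, dst_max = np.asarray(dst_min, float), np.asarray(dst_max, float)
    unit = (demo_pts - src_min) / (src_max - src_min)  # normalize to [0,1]^3
    return dst_min + unit * (dst_max - dst_min)        # rescale to target box

demo = [[0.0, 0.0, 2.0], [5.0, 0.0, 2.0], [5.0, 3.0, 4.0]]
new_path = transfer_trajectory(demo, [0, 0, 0], [5, 3, 4], [0, 0, 0], [10, 6, 8])
print(new_path)  # same inspection pattern, scaled to the larger structure
```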

MOCAS: A Multimodal Dataset for Objective Cognitive Workload Assessment on Simultaneous Tasks

Dataset Multi-robot Biosensors Cognitive Workload
Under Review in IEEE-TAFFC 2022

Wonse Jo*, Ruiqi Wang*, Su Sun, Revanth Krishna Senthilkumaran, Daniel Foti, Byung-Cheol Min

* - Equal Contribution

MOCAS is a multimodal dataset for assessing human cognitive workload (CWL) derived from realistic CCTV monitoring tasks, unlike traditional virtual game-based datasets. It integrates physiological and behavioral data from wearable sensors and a webcam, collected from 21 participants and supplemented by CWL self-assessments and personal background surveys. Technical validation confirms that the tasks elicit distinct CWL levels, making the dataset a valuable tool for real-world CWL recognition.

Read Paper
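A common first step with a multimodal dataset like MOCAS is aligning sensor streams recorded at different rates. A minimal sketch, assuming hypothetical stream and column names rather than the dataset's actual schema:

```python
# Nearest-timestamp alignment of two sensor streams; column names here
# are illustrative assumptions, not MOCAS's actual schema.
import pandas as pd

def align_streams(eeg: pd.DataFrame, ppg: pd.DataFrame, tol_s: float = 0.1):
    """Join two streams on their nearest timestamps within a tolerance."""
    eeg = eeg.sort_values("t")
    ppg = ppg.sort_values("t")
    return pd.merge_asof(eeg, ppg, on="t", direction="nearest", tolerance=tol_s)

eeg = pd.DataFrame({"t": [0.00, 0.25, 0.50], "eeg_alpha": [0.9, 1.1, 1.0]})
ppg = pd.DataFrame({"t": [0.02, 0.27, 0.49], "heart_rate": [71, 73, 74]})
print(align_streams(eeg, ppg))  # one row per EEG sample, HR attached
```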

SMARTmBOT: A ROS2-based low-cost and open-source mobile robot platform

Multi-robot GUI Human-Robot Interaction
Submitted to IEEE-IROS 2022

Wonse Jo, Jaeeun Kim, Ruiqi Wang, Jeremy Pan, Revanth Krishna Senthilkumaran, Byung-Cheol Min

The paper presents SMARTmBOT, an affordable, open-source mobile robot platform for robotics research and education, built on ROS2 with a low-cost, modular, customizable, and expandable design. With most components 3D-printable and a total cost of about $210, SMARTmBOT is accessible and easy to repair, and it carries a comprehensive sensor suite suited to tasks such as navigation and obstacle avoidance. Its capabilities are demonstrated through diverse experiments, and all source code for sensor integration, camera streaming, and robot control is available in the GitHub repository linked below.

GitHub Repository Read Paper
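As a minimal sketch of how a ROS2 base like SMARTmBOT is typically commanded (the /cmd_vel topic name and message type are assumptions; the repository documents the robot's actual interfaces):

```python
# Minimal rclpy node publishing velocity commands to a ROS2 mobile base.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class Driver(Node):
    def __init__(self):
        super().__init__("smartmbot_driver")
        self.pub = self.create_publisher(Twist, "/cmd_vel", 10)
        self.timer = self.create_timer(0.1, self.step)  # 10 Hz command loop

    def step(self):
        msg = Twist()
        msg.linear.x = 0.1   # m/s forward
        msg.angular.z = 0.3  # rad/s turn
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = Driver()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```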

Presentations

Data Collection for Mobile Manipulators for Learning from Demonstration

Perception RL/LfD Dataset
Presented at University of Minnesota Summer URC 2023

This research leverages Spot's advanced hardware and basic behaviors to develop a robust data-collection method for robot learning from task demonstrations, such as 'putting a chair next to a table.' Each demonstration is converted into a voxel map encompassing robot state and goal information, which is fed into PerAct, a modified vision-language model that trains on 3D colored voxels. The dataset, comprising around 25 varied demonstrations collected with Spot, includes comprehensive data recordings used to train the model, emphasizing practical robotic manipulation and navigation.

View Poster
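A minimal sketch of the voxelization step described above: discretizing an RGB point cloud into the kind of colored voxel grid PerAct-style models consume. The grid resolution and workspace bounds are illustrative assumptions, not the project's actual settings:

```python
# Discretize an RGB point cloud into a colored voxel grid (toy example).
import numpy as np

def voxelize(points, colors, bounds_min, bounds_max, grid=32):
    """Average point colors into a (grid, grid, grid, 3) voxel volume."""
    points = np.asarray(points, float)
    colors = np.asarray(colors, float)
    bmin = np.asarray(bounds_min, float)
    bmax = np.asarray(bounds_max, float)
    idx = ((points - bmin) / (bmax - bmin) * grid).astype(int).clip(0, grid - 1)
    vol = np.zeros((grid, grid, grid, 3))
    cnt = np.zeros((grid, grid, grid, 1))
    for (i, j, k), c in zip(idx, colors):
        vol[i, j, k] += c
        cnt[i, j, k] += 1
    return vol / np.maximum(cnt, 1)  # mean color per occupied voxel

pts = np.random.rand(1000, 3) * 2.0  # fake 2 m cube of scene points
rgb = np.random.rand(1000, 3)
print(voxelize(pts, rgb, np.zeros(3), np.full(3, 2.0)).shape)  # (32, 32, 32, 3)
```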

VF-PLUME: Vertical Farming Plant Localizing UAV with Mass Estimation

Biosensors UAV/UAS Agriculture
Presented at Purdue Spring URC 2023

VF-PLUME addresses the challenge of monitoring compact, high-yield vertical farms by introducing a single unmanned aerial vehicle (UAV) equipped to assess plant growth and environmental conditions. Using localization algorithms and onboard environmental sensors, the UAV builds detailed 3D maps capturing data such as temperature, humidity, and air quality. This approach streamlines farm monitoring, enabling efficient, data-driven agriculture capable of supporting growing urban populations.

Presentation PDF
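A hedged sketch of the map-building idea: pairing UAV positions with environmental readings in a coarse 3D grid. The sensor fields and cell size are assumptions for illustration, not VF-PLUME's actual design:

```python
# Bucket georeferenced sensor samples into grid cells, averaging readings.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Sample:
    x: float; y: float; z: float   # UAV position (m)
    temp_c: float
    humidity_pct: float
    air_quality_idx: float

def build_map(samples, cell=0.5):
    """Average each cell's (temperature, humidity, air quality) readings."""
    cells = defaultdict(list)
    for s in samples:
        key = (int(s.x / cell), int(s.y / cell), int(s.z / cell))
        cells[key].append((s.temp_c, s.humidity_pct, s.air_quality_idx))
    return {k: tuple(sum(v) / len(v) for v in zip(*readings))
            for k, readings in cells.items()}

env_map = build_map([Sample(1.2, 0.4, 2.0, 23.1, 61.0, 12.0),
                     Sample(1.3, 0.4, 2.1, 23.5, 60.0, 14.0)])
print(env_map)  # {(2, 0, 4): (23.3, 60.5, 13.0)}
```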

A Dynamic Cognitive Workload Allocation Method for Human-Robot Interaction

Biosensors Multi-robot Human-Robot Interaction Cognitive Workload User Study
Presented at Purdue Fall URC 2022

This work explores the dynamic allocation and measurement of visual-perceptual and cognitive workload in HRI, using a novel workload allocation algorithm and an affective prediction algorithm, supplemented by a user study. The study leverages Husformer, a multimodal framework built on cross-modal transformers, to analyze data from biosensors and behavioral sensors and better estimate human state during interaction tasks. The approach is validated through user experiments and builds on prior studies correlating cognitive workload with changes in GUI complexity and object motion, allowing task allocation to be optimized for each individual's cognitive load.

View Poster
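As an illustrative stand-in for the allocation step (a greedy heuristic, not the paper's algorithm), new robot tasks can be routed to whichever operator's predicted cognitive load stays lowest:

```python
# Greedy workload-aware task allocation: always assign the next task to
# the operator whose running predicted load is smallest.
import heapq

def allocate(tasks, operators, predicted_load):
    """predicted_load maps operator -> current CWL estimate in [0, 1]."""
    heap = [(predicted_load[op], op) for op in operators]
    heapq.heapify(heap)
    assignment = {}
    for task, cost in tasks:           # cost: estimated added CWL
        load, op = heapq.heappop(heap)
        assignment[task] = op
        heapq.heappush(heap, (load + cost, op))
    return assignment

tasks = [("monitor_robot_1", 0.20), ("teleop_robot_2", 0.50), ("review_alert", 0.10)]
print(allocate(tasks, ["alice", "bob"], {"alice": 0.30, "bob": 0.55}))
# -> {'monitor_robot_1': 'alice', 'teleop_robot_2': 'alice', 'review_alert': 'bob'}
```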

A GUI for Measuring Cognitive Workload Stimulus in Human-Robot Interaction

Biosensors Multi-robot Human-Robot Interaction Cognitive Workload User Study
Presented at Purdue Spring URC 2022

This research introduces a new GUI-based method for delivering cognitive workload stimulus in human-robot interaction and assessing user workload and state, replacing dated tools such as the Dual N-Back task. The study involved 30 participants interacting with SMARTmBOTs while wearing biosensors, with the data integrated into a multimodal perception model to analyze cognitive load. The GUI, developed with PyQt for ROS2 compatibility and designed for this specific experiment, is being made open source for broader use in human-robot interaction research.

Watch Talk
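Combining PyQt with ROS2 typically means using a Qt timer to pump the ROS executor so widget events and topic callbacks share one thread. A minimal sketch of that general pattern, not the study's actual GUI; the /robot_status topic name is an assumption:

```python
# Minimal PyQt5 + rclpy integration: a QTimer services ROS2 callbacks so
# the Qt event loop and the ROS node coexist on one thread.
import sys
import rclpy
from rclpy.node import Node
from std_msgs.msg import String
from PyQt5.QtWidgets import QApplication, QLabel
from PyQt5.QtCore import QTimer

class StatusNode(Node):
    def __init__(self, label: QLabel):
        super().__init__("cwl_gui")
        self.create_subscription(String, "/robot_status",
                                 lambda msg: label.setText(msg.data), 10)

def main():
    rclpy.init()
    app = QApplication(sys.argv)
    label = QLabel("waiting for /robot_status ...")
    label.show()
    node = StatusNode(label)
    timer = QTimer()
    timer.timeout.connect(lambda: rclpy.spin_once(node, timeout_sec=0))
    timer.start(50)  # service ROS2 callbacks every 50 ms
    try:
        sys.exit(app.exec_())
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```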