Show simple item record

dc.contributor.advisor: Amrutur, Bharadwaj
dc.contributor.author: Sharma, Himanshu
dc.date.accessioned: 2023-06-28T06:08:23Z
dc.date.available: 2023-06-28T06:08:23Z
dc.date.submitted: 2023
dc.identifier.uri: https://etd.iisc.ac.in/handle/2005/6142
dc.description.abstract: It is worth pausing to appreciate how remarkably well humans perform tasks with their hands, from picking up a coin to buttoning a shirt. For robots, such tasks remain at the forefront of robotics research, demanding significant interaction between vision, perception, planning, and control; mastering all of them is a considerable challenge. Tele-operation augments a robot's ability to perform complex tasks in unstructured environments and with unfamiliar objects by drawing on human support, lending the robot the operator's reasoning, intuition, and creativity. However, most tele-operation techniques rely on wearable sensors or gloves, or on expensive cameras, to capture the operator's gestures, making the setup bulky as well as costly. We present a vision-based tele-operation system for the KUKA IIWA industrial robot arm that imitates, in real time, the natural motion of a human operator observed through a depth camera. First, we discuss Wahba's algorithm, which is used to estimate the 6-DoF pose of the operator's hand from the predicted 3-D locations of the 21 hand landmarks produced by Google's MediaPipe. The estimated hand orientation is used to tele-operate the 7-DoF KUKA IIWA manipulator in both master-slave and semi-autonomous modes. We then describe how an object's orientation is estimated and used in the semi-autonomous mode of operation. The object of interest for manipulation is selected by the operator pointing at it in the video stream; the focused object is then detected and segmented, and its pose is estimated from the geometry of its surface normals. Finally, the object's 6-DoF pose is transformed into the robot frame using hand-eye calibration, and the robot's motion is planned along a B-spline trajectory.
Combining these techniques, two modes of tele-operation for the KUKA IIWA are proposed. These methods provide efficient operation of the robot imitating human motion, as well as gesture-based operation in the semi-autonomous mode. [en_US]
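The hand-pose step described in the abstract, fitting a rotation to MediaPipe's 21 landmark points, can be sketched with the standard SVD solution to Wahba's problem. This is an illustrative implementation, not code from the thesis; the function name `wahba_rotation`, the use of NumPy, and the unit weights are assumptions.

```python
import numpy as np

def wahba_rotation(a, b, w=None):
    """Solve Wahba's problem: find the rotation R minimizing
    sum_i w_i * ||b_i - R @ a_i||**2, where a (N x 3) are reference
    vectors (e.g. a canonical hand-landmark template) and b (N x 3)
    are the observed vectors (e.g. MediaPipe 3-D landmarks, centred
    on the wrist). Uses the SVD-based closed-form solution."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    if w is None:
        w = np.ones(len(a))
    # Attitude profile matrix B = sum_i w_i * b_i a_i^T  (3 x 3)
    B = (w[:, None] * b).T @ a
    U, _, Vt = np.linalg.svd(B)
    # Fix a possible reflection so that det(R) = +1
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt
```

Given exact, noise-free correspondences the recovered rotation matches the true one to machine precision; with noisy landmarks it returns the least-squares best-fit attitude, which is what makes it suitable for per-frame hand-orientation estimation.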
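The trajectory-planning step mentioned at the end of the abstract can likewise be sketched: a cubic B-spline interpolating a few Cartesian waypoints for the end-effector. This is a minimal sketch using SciPy's `make_interp_spline`, not the thesis implementation; the waypoint values are purely illustrative.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical end-effector waypoints in metres (e.g. approach,
# pre-grasp, and grasp poses derived from the estimated object pose).
waypoints = np.array([
    [0.40, 0.00, 0.50],
    [0.45, 0.10, 0.40],
    [0.50, 0.15, 0.30],
    [0.55, 0.15, 0.25],
])
t = np.linspace(0.0, 1.0, len(waypoints))  # parameter for each waypoint

# Cubic (k=3) B-spline through the waypoints, evaluated densely
# to obtain a smooth path the controller can track.
spline = make_interp_spline(t, waypoints, k=3)
dense_t = np.linspace(0.0, 1.0, 50)
trajectory = spline(dense_t)  # shape (50, 3)
```

A B-spline gives a C²-continuous path, so velocity and acceleration vary smoothly, which is the usual reason for preferring it over straight-line segments when commanding a manipulator such as the KUKA IIWA.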
dc.language.iso: en_US [en_US]
dc.relation.ispartofseries: ;ET00155
dc.rights: I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation. [en_US]
dc.subject: Robotics [en_US]
dc.subject: Manipulator [en_US]
dc.subject: Tele-Operation [en_US]
dc.subject.classification: Research Subject Categories::TECHNOLOGY::Information technology [en_US]
dc.title: Vision-driven Tele-Operation for Robot Manipulation [en_US]
dc.type: Thesis [en_US]
dc.degree.name: MTech (Res) [en_US]
dc.degree.level: Masters [en_US]
dc.degree.grantor: Indian Institute of Science [en_US]
dc.degree.discipline: Engineering [en_US]

