dc.contributor.advisor | Amrutur, Bharadwaj | |
dc.contributor.author | Sharma, Himanshu | |
dc.date.accessioned | 2023-06-28T06:08:23Z | |
dc.date.available | 2023-06-28T06:08:23Z | |
dc.date.submitted | 2023 | |
dc.identifier.uri | https://etd.iisc.ac.in/handle/2005/6142 | |
dc.description.abstract | It is worth pausing to appreciate how remarkably well we humans perform tasks with our hands, from picking up a coin to buttoning a shirt. For robots, such tasks remain at the very forefront of robotics research and require significant interplay between vision, perception, planning, and control; achieving expertise in all of them is a considerable challenge. Tele-operation augments a robot's ability to perform complex tasks in unstructured environments and with unfamiliar objects by drawing on human support: it lends the robot the operator's reasoning, intuition, and creativity.
However, most tele-operation techniques rely on wearable sensors/gloves or expensive camera rigs to capture the operator's gestures, making the setup both bulky and costly. We present a vision-based tele-operation system for the KUKA IIWA industrial robot arm that imitates, in real time, the natural motion of a human operator observed through a depth camera.
First, we discuss Wahba's algorithm, which is used to estimate the 6-DoF pose of the operator's hand. Wahba's algorithm takes the predicted 3-D locations of the 21 hand landmarks from Google's MediaPipe and recovers the hand's orientation. The estimated hand pose is used to tele-operate the 7-DoF KUKA IIWA manipulator in both master-slave and semi-autonomous modes. We then describe how an object's orientation is estimated and used in the semi-autonomous mode of operation. The object of interest is selected by the operator pointing at it in the video stream; the selected object is then detected and segmented, and its pose is estimated from the geometry of its surface normals.
Finally, the object's 6-DoF pose is expressed in the robot frame using hand-eye calibration, and the robot's motion is planned with a B-spline trajectory. Combining these techniques, two modes of tele-operation for the KUKA IIWA are proposed: efficient operation with the robot imitating human motion, and gesture-based operation in the semi-autonomous mode. | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartofseries | ;ET00155 | |
dc.rights | I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation. | en_US |
dc.subject | Robotics | en_US |
dc.subject | Manipulator | en_US |
dc.subject | Tele-Operation | en_US |
dc.subject.classification | Research Subject Categories::TECHNOLOGY::Information technology | en_US |
dc.title | Vision-driven Tele-Operation for Robot Manipulation | en_US |
dc.type | Thesis | en_US |
dc.degree.name | MTech (Res) | en_US |
dc.degree.level | Masters | en_US |
dc.degree.grantor | Indian Institute of Science | en_US |
dc.degree.discipline | Engineering | en_US |
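The abstract's hand-orientation step rests on Wahba's problem: finding the rotation that best aligns two sets of corresponding 3-D points (here, a canonical hand-landmark model and the MediaPipe-predicted landmarks). As an illustrative sketch only — the thesis's actual landmark weighting and reference model are not given in this record, and the NumPy implementation below is an assumption — the standard SVD solution can be written as:

```python
import numpy as np

def wahba_rotation(a, b, w=None):
    """Solve Wahba's problem: find rotation R minimizing
    sum_i w_i * ||b_i - R @ a_i||^2 for corresponding 3-D points.

    a, b : (N, 3) arrays of corresponding points (b ~ R @ a).
    w    : optional (N,) non-negative weights (defaults to uniform).
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    if w is None:
        w = np.ones(len(a))
    # Weighted sum of outer products: B = sum_i w_i * b_i a_i^T
    B = (w[:, None, None] * (b[:, :, None] @ a[:, None, :])).sum(axis=0)
    U, _, Vt = np.linalg.svd(B)
    # Diagonal correction guarantees a proper rotation (det R = +1),
    # not a reflection.
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt
```

A typical use is to rotate the canonical landmark set by the recovered `R` and read the hand's orientation from it; the SVD correction term matters whenever noise or near-planar landmark configurations would otherwise let the least-squares optimum flip into a reflection.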