Robot Kompan

A scientific project funded by WUT (Warsaw University of Technology)

Look & learn: Skill acquisition by a companion robot based on task demonstration


Start: July 2020
Finish: December 2021
Principal Investigator: Wojciech Szynkiewicz
Researchers: Wojciech Dudek, Maksym Figat, Włodzimierz Kasprzak, Dawid Seredyński, Maciej Stefańczyk, Tomasz Winiarski, Maciej Węgierek, Cezary Zieliński

The aim of the project is to develop the key technologies and methods necessary to construct the control system of a companion robot. The project focuses on the most important research issues in robot programming by demonstration: the robot learns from examples of tasks performed by humans ("look and learn"). Delegating tasks to the robot must be easy for unskilled users to master; just as young children learn activities by imitating adults, the companion robot should learn in a similar way. To make this possible, the following problems must be solved:
  a) perception of the environment and of the human activities demonstrated to the robot,
  b) visual tracking of human movements and of moving objects,
  c) synthesis and reproduction of human movements by a device with specific kinematics,
  d) creation of a meta-model of the robot control system enabling automatic generation of the controller code for a given task,
  e) an ontology enabling knowledge representation from the robot's point of view, yet concerning its environment, and accounting for the need to generate action plans and to define or learn the tasks to be performed.

Teaching a robot by demonstrating human activities is difficult mainly because of the ambiguity of visual data, the small number of examples and the low variability of the training data. To resolve the ambiguity, an approach based on high-level interpretation of dynamic movements performed by a human, exploiting feedback from the interpretation of image features, will be studied. Training sets that are too small will be augmented with synthetic data generated by Generative Adversarial Networks or Deep Belief Networks. A motion synthesis algorithm for a humanoid robot that faithfully reproduces human posture and movement will be developed. To perform operations on objects, a computationally efficient predictive visual servo with an innovative mechanism for adaptive reconfiguration will be developed.

Based on a meta-model in the form of a hierarchical Petri net, a system model will be built and then transformed automatically into the executable program code of the controller. An important element of the project is an agent that acquires knowledge both from its own observation of the environment and through communication with other agents. This knowledge will be encoded in a form understandable to the robot, using a domain ontology. A communication language covering communication acts, interaction protocols and a content language based on the formulated ontology will be defined.

Robot task planning will combine Hierarchical Task Network planning with planning under uncertainty in the observation of the environment and in the outcomes of actions, modelled as a Partially Observable Markov Decision Process. The robot control system will contain modules responsible for knowledge representation and processing, task planning, and control of the robot's effectors and receptors. The knowledge representation module will take perception into account, i.e. the results of processing the current observation of the environment. The methods developed in the project will be verified experimentally on real robots.
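
One of the planned techniques is augmenting small training sets with synthetic data generated by Generative Adversarial Networks. The fragment below is only a minimal sketch of a generator and a discriminator in PyTorch, assuming that demonstrations are encoded as fixed-length feature vectors; the dimensions, network sizes and training loop are illustrative assumptions, not part of the project design.

    import torch
    import torch.nn as nn

    FEAT_DIM = 64    # assumed length of an encoded demonstration feature vector
    NOISE_DIM = 16   # latent noise dimension (illustrative)

    # Generator: maps random noise to a synthetic demonstration feature vector.
    generator = nn.Sequential(
        nn.Linear(NOISE_DIM, 128), nn.ReLU(),
        nn.Linear(128, FEAT_DIM),
    )

    # Discriminator: estimates the probability that a sample is a real demonstration.
    discriminator = nn.Sequential(
        nn.Linear(FEAT_DIM, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1), nn.Sigmoid(),
    )

    bce = nn.BCELoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(real_batch):
        """One adversarial update on a batch of real demonstration features."""
        batch = real_batch.size(0)
        noise = torch.randn(batch, NOISE_DIM)
        fake = generator(noise)

        # Discriminator update: real samples -> 1, generated samples -> 0.
        opt_d.zero_grad()
        loss_d = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(batch, 1))
        loss_d.backward()
        opt_d.step()

        # Generator update: try to make generated samples look real.
        opt_g.zero_grad()
        loss_g = bce(discriminator(fake), torch.ones(batch, 1))
        loss_g.backward()
        opt_g.step()
        return loss_d.item(), loss_g.item()

    # After training, generator(torch.randn(n, NOISE_DIM)) yields n synthetic
    # samples that can be added to the training set.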

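For operations on objects, the project plans a computationally efficient predictive visual servo with adaptive reconfiguration. That scheme is not specified above; as a point of reference only, the sketch below implements the classical image-based visual servoing law v = -lambda * L+ (s - s*) for point features in NumPy, with made-up feature coordinates and depths.

    import numpy as np

    LAMBDA = 0.5  # control gain (illustrative value)

    def interaction_matrix(x, y, Z):
        """Interaction (image Jacobian) matrix of a single image point (x, y) at
        depth Z, relating feature velocity to camera twist (vx, vy, vz, wx, wy, wz)."""
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def ibvs_velocity(features, desired, depths):
        """Classical IBVS law: v = -lambda * pinv(L) @ (s - s*)."""
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
        return -LAMBDA * np.linalg.pinv(L) @ error

    # Example: four tracked point features and their desired image positions.
    s   = [(0.10, 0.05), (-0.12, 0.07), (0.09, -0.11), (-0.08, -0.06)]
    s_d = [(0.05, 0.05), (-0.05, 0.05), (0.05, -0.05), (-0.05, -0.05)]
    Z   = [1.2, 1.2, 1.1, 1.3]
    print(ibvs_velocity(s, s_d, Z))   # 6-vector camera velocity command

A predictive variant would replace this one-step law with a receding-horizon optimisation over the same feature-error model.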

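The controller code is to be generated automatically from a system model expressed as a hierarchical Petri net (cf. publication 1 below). The project's meta-model is not reproduced here; the following plain-Python sketch only illustrates basic Petri-net semantics (places holding tokens and transitions that fire when all their input places are marked), with hierarchy mimicked by a transition whose action executes a child net. All class, place and task names are hypothetical.

    class PetriNet:
        """Minimal place/transition net with integer token markings."""

        def __init__(self, places):
            self.marking = dict(places)     # place name -> token count
            self.transitions = {}           # name -> (inputs, outputs, action)

        def add_transition(self, name, inputs, outputs, action=None):
            self.transitions[name] = (inputs, outputs, action)

        def enabled(self, name):
            inputs, _, _ = self.transitions[name]
            return all(self.marking[p] > 0 for p in inputs)

        def fire(self, name):
            """Consume one token from each input place, produce one in each output."""
            inputs, outputs, action = self.transitions[name]
            assert self.enabled(name), f"transition {name} is not enabled"
            for p in inputs:
                self.marking[p] -= 1
            if action is not None:
                action()                    # e.g. run a behaviour or a whole subnet
            for p in outputs:
                self.marking[p] += 1

    # Hypothetical top-level task net: observe a demonstration, then reproduce it.
    subnet = PetriNet({"sub_start": 1, "sub_done": 0})
    subnet.add_transition("reproduce_motion", ["sub_start"], ["sub_done"],
                          action=lambda: print("  executing motion primitive"))

    def run_subnet():
        """Hierarchy: firing the parent transition executes a whole child net."""
        while subnet.enabled("reproduce_motion"):
            subnet.fire("reproduce_motion")

    net = PetriNet({"idle": 1, "observed": 0, "done": 0})
    net.add_transition("observe_demo", ["idle"], ["observed"],
                       action=lambda: print("observing human demonstration"))
    net.add_transition("execute_task", ["observed"], ["done"], action=run_subnet)

    net.fire("observe_demo")
    net.fire("execute_task")
    print(net.marking)    # {'idle': 0, 'observed': 0, 'done': 1}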

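Task planning is to combine Hierarchical Task Network decomposition with planning under uncertainty in observations and action outcomes, modelled as a Partially Observable Markov Decision Process. As a small illustration of the POMDP side only, the sketch below performs a discrete Bayesian belief update after an action and an observation; the states, transition and observation probabilities are invented for the example.

    import numpy as np

    # Hypothetical hidden states: has the demonstrated object been grasped?
    STATES = ["not_grasped", "grasped"]

    # P(s' | s, a) for a single action "close_gripper" (illustrative numbers).
    T = np.array([[0.3, 0.7],    # from not_grasped: 70% chance of success
                  [0.0, 1.0]])   # from grasped: stays grasped

    # P(o | s') for observations [force_detected, no_force] (illustrative numbers).
    O = np.array([[0.2, 0.8],    # not_grasped: force rarely detected
                  [0.9, 0.1]])   # grasped: force usually detected

    def belief_update(belief, obs_index):
        """Bayes filter: b'(s') is proportional to P(o|s') * sum_s P(s'|s,a) b(s)."""
        predicted = T.T @ belief                  # prediction over the action model
        unnormalised = O[:, obs_index] * predicted
        return unnormalised / unnormalised.sum()

    belief = np.array([0.9, 0.1])                 # initially almost surely not grasped
    belief = belief_update(belief, obs_index=0)   # action executed, force detected
    print(belief)   # posterior probability of "grasped" increases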


List of selected publications:
  1. M. Figat and C. Zieliński
    Parameterised robotic system meta-model expressed by Hierarchical Petri nets
    Robotics and Autonomous Systems, p. 103987, 2022
  2. T. Winiarski, J. Sikora, D. Seredyński, and W. Dudek
    DAIMM Simulation Platform for Dual-Arm Impedance Controlled Mobile Manipulation
    in 7th International Conference on Automation, Robotics and Applications (ICARA), 2021, pp. 180–184
  3. T. Winiarski, S. Jarocki, and D. Seredyński
    Grasped Object Weight Compensation in Reference to Impedance Controlled Robots
    Energies, vol. 14, no. 20, p. 6693, 2021