In this paper, we propose a method for automatically determining the motion parameters that a robot needs to execute target tasks such as “scooping powdered tea and putting it into a teacup”. For robots to handle everyday objects, they must determine motion parameters suited to objects of various shapes and sizes. There are two approaches to determining motion parameters: one uses a 3D model of the object, and the other does not. The latter approach is effective in environments containing a wide variety of everyday objects, such as homes. However, it assumes that the object is placed face up, so it cannot be used when the object is placed face down. We propose a method for determining motion parameters that handles changes in the shapes, sizes, and poses (face up or face down) of objects. Our method uses a 3D deep neural network to recognize an object’s functions (e.g., “scoop” and “grasp”) and infers the object’s pose from this function information. Motion parameters are then determined from the recognition results. We evaluated the method on five spoons of different shapes, sizes, and poses; it achieved a success rate of approximately 86%.
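The pipeline summarized above (function recognition, then pose inference, then parameter determination) can be sketched as follows. This is a minimal, hypothetical illustration assuming a point-cloud input: the simple geometric rules stand in for the 3D deep neural network, and all function names, labels, and thresholds are illustrative rather than the paper’s actual implementation.

```python
# Hypothetical sketch of the function -> pose -> motion-parameter pipeline.
# The geometric rules below stand in for the learned recognizer.

def recognize_functions(cloud):
    """Stand-in for function recognition: label each (x, y, z) point as
    "scoop" (bowl end, small x in this toy frame) or "grasp" (handle end)."""
    return ["scoop" if x < 0.05 else "grasp" for x, _, z in cloud]

def infer_pose(cloud, labels):
    """Infer the pose from function-labeled points: if the scoop region lies
    at or below the handle, assume the spoon is face up (bowl opening upward)."""
    scoop_z = [z for (_, _, z), lab in zip(cloud, labels) if lab == "scoop"]
    grasp_z = [z for (_, _, z), lab in zip(cloud, labels) if lab == "grasp"]
    mean = lambda vals: sum(vals) / len(vals)
    return "face_up" if mean(scoop_z) <= mean(grasp_z) else "face_down"

def motion_parameters(cloud):
    """Determine motion parameters (grasp point, approach strategy) from the
    recognized functions and the inferred pose."""
    labels = recognize_functions(cloud)
    pose = infer_pose(cloud, labels)
    handle = [p for p, lab in zip(cloud, labels) if lab == "grasp"]
    n = len(handle)
    grasp_point = tuple(sum(c) / n for c in zip(*handle))  # handle centroid
    approach = "grasp_from_above" if pose == "face_up" else "reorient_then_grasp"
    return {"pose": pose, "grasp_point": grasp_point, "approach": approach}
```

For example, a face-up toy spoon whose bowl points dip slightly below the handle, `[(0.02, 0.0, -0.01), (0.04, 0.0, -0.01), (0.10, 0.0, 0.0), (0.15, 0.0, 0.0)]`, is recognized as `face_up` with the grasp point at the handle centroid.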