Abstract
Robots are increasingly deployed in real-world applications, and accurate grasp detection is a key component of any robotic manipulation pipeline. This paper proposes an end-to-end method for robotic grasp detection in an RGB image containing objects: the whole image is taken as input and the prediction is produced directly, without traditional sliding windows or region extraction. Since different grasp points lead to different grasp orientations, the method proceeds in two steps. First, a convolutional neural network is trained to predict the positions of grasp points. Second, a square region centered on each predicted grasp point is cropped from the image; edges are extracted with the Canny edge detector and lines are detected with the Hough transform. A principal-direction detection algorithm then analyzes these lines to determine the grasp orientation and the distance between the two parallel gripper fingers. The method yields improved grasp detection by combining deep learning with traditional computer-vision algorithms.