P.W.S.I. Wijethunga, I.A. Chandrawansa, B.M.D.T. Rathnayake, W.A.N.I. Harischandra, W.M.M.T.S. Weerakoon, B.G.L.T. Samaranayake
(Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, 20400, Peradeniya, Sri Lanka)
Abstract: A supportive mobile robot for assisting the elderly is an emerging requirement, mainly in countries like Japan where population ageing will become increasingly relevant in the near future. Fall-related injuries are considered a critical issue for the physical health of older people. A personal assistive robot with the capability of picking up and carrying objects over long or short distances can be used to overcome or lessen this problem. Here, we design and introduce a 3D dynamic simulation of such an assistive robot to perform pick and place of objects through visual recognition. The robot consists of two major components: a robotic arm or manipulator to perform the pick and place, and an omnidirectional wheeled robotic platform to provide mobility. Both components are designed and operated according to their kinematics and dynamics, and the controllers are integrated for the combined performance. The objective was to improve the accuracy of the robot at a considerably high speed. The designed mobile manipulator has been successfully tested and simulated with a stereo vision system to perform object recognition and tracking in a virtual environment resembling a room of an elderly care facility. The tracking accuracy of the mobile manipulator at an average speed of 0.5 m/s is approximately 90%, which is well suited for the proposed application.
Keywords: Omnidirectional Wheeled Robot, Mobile Manipulator, Stereo Vision
Service robots have been a hot topic in robotic research lately. Recent developments have been constantly proving that the dream of possessing robots that could help humankind in every field is coming true. Thousands of mobile and immobile robots are working alongside humans to fulfill various requirements. Service robots designed to support the elderly are one such task-oriented robot type.
One of the major issues now faced by countries such as Japan and the United Kingdom is a declining birth rate combined with an ageing population. This condition is more or less experienced in many other countries in the world as well, and it results in the reduction of the working population of a country [1]. Hence, it is necessary to develop a methodology to support the elderly in performing even basic tasks.
As in [2], the physical health of older people declines with age, and injuries sustained from falls become a key concern, especially if they live on their own. Surveys have identified that the older age group prefers an assistive device, in general, for reaching and moving tasks to facilitate their mobility requirements [3]. In [4], the authors identified several mobile robots, such as Meldog, Guido and Smart Robot, designed to support the elderly in motion. A model of a voice-controlled personal assistant robot is proposed in [5] to assist vision-impaired and mobility-restricted old people. It has been identified that the best match to support an elderly person with mobility issues is a mobile manipulator with a high degree of freedom (DOF) [6]. Several research efforts have been undertaken to develop mobile manipulators over the past decade [7-9]. With its omnidirectional mobility, better flexibility and higher maneuverability, a high-DOF mobile manipulator becomes the ideal candidate as a supporter of the elderly [10]. Combining the advantages of the robotic manipulator and the omnidirectional wheeled platform, these robots can be applied to several supportive tasks. For instance, the omnidirectional wheeled robot extends the workspace of the robotic manipulator, and the manipulator in turn adds additional functions, such as picking and carrying objects, to the wheeled robot.
In this study, we have designed and modelled a 6 DOF omnidirectional mobile manipulator in a simulated environment to operate at a higher speed with improved accuracy. First, we theoretically analyzed both the robotic arm and the wheeled mobile robot; the respective kinematic and dynamic models are presented in Section 2. The dynamic models and the corresponding controllers of the robot arm and the wheeled mobile robot have been implemented in WEBOTS™, an open-source mobile robot simulation software developed by Cyberbotics Ltd. [11]. In the same simulation environment, we implemented a stereo vision system for object detection and tracking. All three control modules were combined in a master algorithm for the simulation of pick and place of assigned objects in a virtual room, resembling an assistive robotic task. Several trials were carried out to quantify the tracking accuracy against the speed of the mobile manipulator, and the performance is discussed in Section 4. Conclusions are drawn in Section 5.
This section describes in detail the mathematical formulation of the kinematic and dynamic models for each of the major components of the assistive robot: the 5 DOF manipulator and the omnidirectional wheeled robot. Before moving into the individual formulations, we explain the workflow of the overall robotic system as depicted in Fig.1.
Fig.1 Working Process of the Robot
With the user input data and the data obtained from the stereo cameras, the decision-making algorithm identifies the locations and positions of the objects. Using the locations, it decides how the robot should react and sends the relevant data to the robot accordingly. From that data, the kinematic models of the robot calculate how it should move. The dynamic model derives the forces that act on the robot when moving, picking and placing objects. Using both the kinematic and dynamic model outputs, the robot performs all the movements.
2.1.1 Forward Kinematics
Forward kinematic modelling is done to calculate the position of the gripper for a given set of joint angles. In order to control a robot manipulator (robot arm), it is necessary to understand the coordinate frame (Fig.2) and the mathematical model of the robot arm. In kinematic modelling, the relationship between the joint angles and the gripper position is calculated. This is used to control the robot joint angles to bring the gripper to a desired position. In mathematical modelling, it is necessary to calculate the gripper position from the joint angles and vice versa; these processes are the forward and inverse kinematic calculations. For these calculations, the following parameters and design details are needed.
The robot manipulator is vertically articulated and has 5 degrees of freedom (DOF), with 4 revolute joints and one prismatic joint. Fig.2 shows the coordinate frame of the robot manipulator [12]. Each joint is noted with respect to the inertial frame (frame 0). Thereby the position and orientation of the tool frame (frame 6) can be represented with respect to the inertial frame.
For the derivation of the kinematic model of the robot manipulator, Denavit-Hartenberg (DH) parameters can be used as explained in [12]. This is a simple procedure to describe the end-tool with respect to the base coordinates. As shown in Table 1, there are 24 parameters involved in the derivation of the kinematic model.
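For reference, each row of Table 1 defines one link transformation. In the standard DH convention, the homogeneous matrix of link i built from the four parameters (θ_i, d_i, a_i, α_i) is

$$A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$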
Fig.2 The Robot Coordinate Frames
Table 1 The DH Parameters of the Robot
To discuss the relationship between neighboring links, we need a transformation matrix. The transformation matrix of the tool frame with respect to the base coordinates (A_6^0) can be obtained by multiplying the homogeneous transformation matrices as in (1). P is the position and orientation matrix of the tool end, and A is the transformation matrix with respect to the world frame.
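Equation (1) is not reproduced in this copy; in the standard formulation it is the chain product of the per-link DH matrices,

$$P = A_6^0 = A_1\,A_2\,A_3\,A_4\,A_5\,A_6,$$

so that the top-left 3x3 block of A_6^0 gives the tool orientation and the last column gives its position.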
2.1.2 Inverse Kinematics
By equating the corresponding matrix elements of the left- and right-hand matrices, solutions for the joint angles can be calculated.
Joint angles: the angles for the respective joints are derived as shown in (3) to (7).
This value is chosen for θ5 to make the gripper of the robot manipulator horizontal.
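The closed-form expressions (3) to (7) are not reproduced in this copy. As an illustrative sketch of the matrix-equating approach, for an articulated arm whose shoulder-elbow sub-chain has link lengths a3 and a4 and base height d1 (the paper's exact joint assignment, including the prismatic joint, may differ), the waist and elbow angles follow the familiar pattern

$$\theta_1 = \operatorname{atan2}(y_c, x_c), \qquad \cos\theta_3 = \frac{r^2 + s^2 - a_3^2 - a_4^2}{2\,a_3 a_4},$$

with $r = \sqrt{x_c^2 + y_c^2}$ and $s = z_c - d_1$ for a wrist-center position $(x_c, y_c, z_c)$.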
When the robot arm performs pick and place tasks, various forces act on it. The dynamic model is used to calculate how these forces act on the joints of the robot arm under various conditions.
Of the two available methods to derive a dynamic model of a robot arm, namely the Newton-Euler method and the Euler-Lagrange method, the Euler-Lagrange approach is followed, considering the reduced effort required to reach the solution [13]. For this method, the kinetic and potential energies must be calculated, which in turn requires the velocity information of the robot.
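In the Euler-Lagrange formulation, the Lagrangian is the difference between the kinetic and potential energies, and each joint force/torque follows from it. This standard form is stated here because the paper's own equations are not reproduced in this copy:

$$\mathcal{L} = K - P, \qquad \tau_k = \frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{\theta}_k} - \frac{\partial \mathcal{L}}{\partial \theta_k}.$$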
To control a robot arm, calculating only the positions and joint angles is not enough. The velocity of the joint movements and the gripper velocity must be known for smoother control and a better understanding of the robot. In order to derive the velocity relationships from the forward kinematic functions, the 6x6 velocity Jacobian given in Appendix B is calculated.
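Using the Appendix B notation, the Jacobian maps joint rates to the end-effector twist, and the kinetic energy can then be written in the standard quadratic form

$$\begin{bmatrix} v \\ \omega \end{bmatrix} = J(\theta)\,\dot{\theta}, \qquad K = \frac{1}{2}\dot{\theta}^{T} D(\theta)\,\dot{\theta}, \qquad D(\theta) = \sum_{i} \left[ m_i J_{vi}^{T} J_{vi} + J_{\omega i}^{T} R_i I_i R_i^{T} J_{\omega i} \right].$$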
With further simplifications and substitutions,Christoffel symbols can be introduced.
The Euler-Lagrange equations for the joint forces can then be derived as follows.
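The equations themselves are not reproduced in this copy; they presumably take the standard form, with d_{kj} the entries of D(θ) and c_{ijk} the Christoffel symbols of the first kind:

$$c_{ijk} = \frac{1}{2}\left(\frac{\partial d_{kj}}{\partial \theta_i} + \frac{\partial d_{ki}}{\partial \theta_j} - \frac{\partial d_{ij}}{\partial \theta_k}\right), \qquad \tau_k = \sum_{j} d_{kj}\,\ddot{\theta}_j + \sum_{i,j} c_{ijk}\,\dot{\theta}_i \dot{\theta}_j + \frac{\partial P}{\partial \theta_k}.$$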
In the configuration of the four-wheeled omnidirectional robot (mobile platform) in Fig.3, all the forces, velocities and coordinates are marked [14]. All the variables used throughout this section are noted in Appendix B. To move the robot, we must know its wheel velocities. The working principle used here is given below.
Fig.3 Coordinate Frame of Omnidirectional Wheel Robot
Target vectors are always given with respect to the world frame. A transformation is required to convert velocities from the world frame to the robot axis frame (12).
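Equation (12) is not reproduced in this copy; for a planar platform at heading θ it is presumably the usual rotation from world to robot axes:

$$\begin{bmatrix}\dot{x}_R\\ \dot{y}_R\\ \dot{\theta}_R\end{bmatrix} = \begin{bmatrix}\cos\theta & \sin\theta & 0\\ -\sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\dot{x}_W\\ \dot{y}_W\\ \dot{\theta}_W\end{bmatrix}.$$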
Using these obtained robot-axis velocities, the relationship between the robot wheel velocities and the robot body velocities can be derived.
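That relationship is likewise not reproduced in this copy. As a hedged sketch, assuming four mecanum wheels of radius r at the corners of an l_x by l_y footprint (the actual wheel arrangement of Fig.3 follows [14] and may differ), the inverse kinematics would read

$$\begin{bmatrix}\omega_1\\ \omega_2\\ \omega_3\\ \omega_4\end{bmatrix} = \frac{1}{r}\begin{bmatrix}1 & -1 & -(l_x + l_y)\\ 1 & 1 & (l_x + l_y)\\ 1 & 1 & -(l_x + l_y)\\ 1 & -1 & (l_x + l_y)\end{bmatrix}\begin{bmatrix}v_x\\ v_y\\ \omega\end{bmatrix}.$$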
The dynamic behavior of the omnidirectional wheeled robot can be summarized in a state-space model as shown in (15). All the terms are explained in Appendix B.
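Equation (15) follows [14] and is not reproduced here. Generically, such a model is linear in the body velocities and driven by the wheel torques,

$$\dot{x} = Ax + Bu, \qquad x = \begin{bmatrix} v_x & v_y & \omega \end{bmatrix}^{T}, \quad u = \begin{bmatrix} \tau_1 & \tau_2 & \tau_3 & \tau_4 \end{bmatrix}^{T},$$

where A captures the inertial and friction terms and B maps the wheel torques onto the body axes.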
Fig.4 shows the PID-based controller, in which the current velocity vectors and orientation are taken as input to the simulation model and subtracted from the desired velocity vectors and orientation; the difference is given to the PID controllers as input. The output from the controllers is sent to the inverse kinematics calculations of the mobile platform, which give the wheel velocities as outputs to each wheel. These values are again taken as feedback and, through forward dynamics, the readings are converted into the turned direction of the platform.
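As a minimal sketch of this loop (not the authors' exact controller: the gains, the three-channel structure and the inverse_kinematics helper are illustrative assumptions; the 32 ms step matches the simulation interval reported later), the error on each of v_x, v_y and the heading rate can be fed through an independent PID whose output goes to the platform's inverse kinematics:

```python
# Minimal sketch of the PID velocity loop described above; gains, the
# three-channel structure and inverse_kinematics() are illustrative
# assumptions rather than the authors' exact implementation.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.032  # 32 ms control interval, matching the simulation step
pids = [PID(kp=2.0, ki=0.5, kd=0.05, dt=dt) for _ in range(3)]  # vx, vy, omega

def control_step(desired, current, inverse_kinematics):
    """One loop iteration: PID on the velocity error, then wheel speeds."""
    command = [pid.step(d - c) for pid, d, c in zip(pids, desired, current)]
    return inverse_kinematics(command)  # -> four wheel angular velocities
```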
Fig.4 Simulation Model for PID Based Omnidirectional Wheeled Robot
The simulation model for the robot arm with the PID controller is shown in Fig.5, which is extracted from Fig.1.
Fig.5 Simulation Model for PID Based 5DOF Robot Arm
Input data for the model are the current position coordinates. These data, subtracted from the desired position values, are fed into the PID controllers. The output is taken for the inverse kinematics calculations. The obtained joint angle readings are then fed into the robot arm as the output of the simulation model. This output is once again taken as feedback and, through forward dynamics, converted back into position coordinates for the error calculations.
The controllers were implemented separately on the two robots and tested for errors. After the integration of the robot arm and the mobile platform (Fig.6), the controllers were tested again to verify that the axes of the two robots are correctly aligned and conform to the created model.
Fig.6 Integrated Robot with the Manipulator and Wheeled Robot
The mobile manipulator was designed to perform a vertical or a horizontal pick according to the nature of the object.
To make the integrated mobile manipulator detect the assigned object, track it and perform pick and place, stereo vision can be used, considering its convenience in depth calculation.
Fig.7 Room Model with Stereo Camera Locations
A room model was designed using WEBOTS and two stereo cameras were placed, one on a wall at coordinates (2.5, 0, 1) with respect to the top right corner of the room, and the other on the roof at (2.5, 3, 2.5), in order to get a wider and more accurate view (Fig.7). The use of two stereo cameras makes depth calculation more accurate and triangulation of the object much easier [15-16].
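With a calibrated stereo pair, the depth of a matched point follows from its disparity; this is the relationship used in Listing 2 of Appendix A:

$$Z = \frac{fB}{x_{\text{left}} - x_{\text{right}}},$$

where B is the baseline between the two cameras and f is the focal length expressed in pixels.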
For the simulation, over 4200 data points were used at 32 ms intervals. The actual gripper position of the robot arm is detected using internal GPS sensors (IGPS). Objects carry different QR codes for identification, which improves robustness. Objects were placed at different positions and the simulation was carried out for several trials to gather data and calibrate the stereo vision code, in order to obtain better calculated positions of the gripper and the object. The pseudocode for identifying and locating the positions of the robot and the object is given in Appendix A.
The parameter values used in Table 1 for the DH parameters of the 5 DOF robot manipulator are a3 = 0.18 m, a4 = 0.18 m, d1 = 0.065 m and d5 = 0.09 m [17]. The simulations were performed in WEBOTS 2020a, and the specifications of the PC used for the simulations are given in Table 2.
Table 2 Software and Computer Details
The dimensions of the omnidirectional mobile manipulator, as in Table 3, were selected to suit the physical boundaries of a general room of length 6 m, width 5 m and height 3 m.
Table 3 Dimensions Used for the Robot
The masses and center-of-mass coordinates of each joint and the base of the robot were extracted, as in Table 4, with respect to the base of the robot.
Table 4 Mass and Center of Mass of Each Link and the Base (Robot Arm Links Are Oriented along the Z Axis)
The parameters applied to derive the dynamic model of the mobile platform are given in Table 5 [14].
Table 5 Parameters Used for the Platform Model
The experimental plots of the robot arm and mobile platform controllers are shown in Fig.8 and Fig.9. These tests were carried out to check the performance and error behavior of the controllers and to verify whether the separate implementation of the controllers is effective for the robot. The output position for a given input was examined over time; the mobile platform was noted to have settled to a steady state within 1.3 seconds, and the robot arm reached steady state within around 2.2 seconds.
Fig.10 shows the main steps of the simulation process. The robot approaches the object location, picks up the object and, after carrying it to the assigned destination, places it. There are two methods of reaching the object, chosen according to the object and the environment in which it is placed: the vertical approach and the horizontal approach. In this simulation the horizontal approach is used due to the height of the object.
Fig.11 shows the stereo camera view of the room with object recognition. Using the workspace of the robot and the object location, the reachability of the arm was calculated. Fig.12 shows the fully stretched arm that has reached the maximum distance across the table for a height of 0.32 m. Any object located beyond the workspace of the robot is considered unreachable.
Fig.8 PID Controller Testing for the Mobile Platform
Fig.10 Robot Simulation
Objects 1 and 2 were originally placed in the reachable area, and object 1 was gradually moved along the x axis while object 2 was kept stationary. Table 6 shows the decisions taken by the robot on their reachability. For this test, the table was placed parallel to the y axis, with the edge of the table located on the x = 0.16 axis. The robot gripper could reach up to 0.3 m across the table.
Fig.9 PID Controller Testing for the Robot Arm Joint 2
All the stereo and actual tracking paths showed similar variations with slight deviations. The variations of the object, the gripper, and the object and gripper together were tested against time. The results were plotted as shown in Fig.13-15 for a pickup location of (0.3, 0.66, 0.63) and a placing location of (-2.3, -1.45, 0.65). The main variant positions are described in Table 7.
Fig.14 shows the gripper position and the object position throughout the simulation, as tracked by the stereo cameras. It is noted that the cameras are able to successfully identify both the gripper and the object when the gripper picks up the object.
Table 6 Reachability and Non-reachability of an Object
Fig.13 Robot Gripper Position as Indicated by IGPS
Fig.14 Object Position and Gripper Position Tracked by Stereo Cameras
Fig.15 Zoomed View of Object Pickup State of the Robot as Tracked by Stereo Cameras
Table 7 Simulation Process with Time Stamps
The accuracy of the stereo vision cameras is a vital part of this system. High accuracy is required to correctly track both the object and the robot arm. To measure the accuracy, the error between the theoretically planned path and the actual path taken by the robot, as seen by the stereo cameras, was calculated in millimeters. This process was repeated for several speeds between the minimum and maximum speed of the robot. As the speed of the robot increases, the accuracy of the system reduces. As shown in Fig.16, when the robot moves at an average speed of 0.5 m/s, it has an accuracy of 89.11%. To achieve an accuracy higher than 90%, the speed of the robot must be kept below 0.387 m/s.
Fig.16 Variation of Stereo Vision Tracking Accuracy with the Speed of the Robot
Elderly people with mobility issues face the risk of falls and injuries. A robot assistant that can move, carry and handle objects is preferred by this community to minimize the issue. An omnidirectional mobile manipulator with 6 degrees of freedom was designed in this research as a robot that fulfills the object reach-and-carry requirement of the elderly. The high degree of freedom facilitated the ease of motion of the robot, and the omnidirectional capability allowed the robot to move and reach an object in any direction. The robot was simulated in a known environment, and two stereo cameras were used for object detection and tracking. The accuracy of the robot's performance was tested against its speed, and an accuracy of 89.11% was obtained at an average speed of 0.5 m/s.
APPENDIX A
● Input Image
● Search for QR code
● Read the QR code
● Calculate the QR code pixel position
● Return the pixel position
Listing 1: Pseudo code to find the object in the image
● Input pixel position of QR code, frame right, frame left, baseline, focal length, alpha
● Convert focal length from mm to pixel
● Calculate the disparity between left and right frames (xleft - xright)
● Calculate the depth (baseline × focal length in pixels / disparity)
● Calculate the position (depth, left frame,right frame)
● Return the position
Listing 2: Pseudo code to find position
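A minimal Python realization of the two listings above is sketched below; the use of OpenCV's cv2.QRCodeDetector, the function names, and the interpretation of "alpha" as the horizontal field of view are assumptions rather than the authors' exact code.

```python
# Sketch of Listings 1 and 2: locate a QR code in each camera frame,
# then triangulate its 3D position from the left/right pixel positions.
import math
import cv2
import numpy as np

def find_qr_pixel(image):
    """Listing 1: find the QR code and return (data, centre pixel) or (None, None)."""
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(image)
    if points is None:
        return None, None
    centre = points.reshape(-1, 2).mean(axis=0)  # centroid of the 4 corners
    return data, (float(centre[0]), float(centre[1]))

def triangulate(px_left, px_right, frame_width, frame_height, baseline, alpha):
    """Listing 2: depth and position from the left/right pixel positions.

    baseline is in metres; alpha is the horizontal field of view in radians.
    """
    # focal length expressed in pixels (assumed pinhole camera model)
    f_px = (frame_width / 2.0) / math.tan(alpha / 2.0)
    disparity = px_left[0] - px_right[0]
    if disparity <= 0:
        return None  # point cannot be triangulated from this pair
    z = baseline * f_px / disparity                   # depth
    x = z * (px_left[0] - frame_width / 2.0) / f_px   # lateral offset
    y = z * (px_left[1] - frame_height / 2.0) / f_px  # vertical offset
    return np.array([x, y, z])
```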
APPENDIX B
J_vi = linear velocity Jacobian of the i-th link
J_ωi = rotational velocity Jacobian of the i-th link
R_i = rotation matrix of the i-th link with respect to the origin
I_i = inertia matrix of the i-th link
θ = joint angles
θ̇ = joint angle velocities
g = gravitational acceleration
m_i = mass of the i-th link
τ_k = force of the k-th joint
Velocity Jacobian