Rescue workers’ lives are often under threat during and after emergencies and disasters, and sometimes casualties are unfortunately unavoidable. With the development of robotics, robots have increasingly been used to replace human beings in miscellaneous tasks in such dangerous scenarios[1–4].
During rescue work, robots are often required to traverse unstructured and complicated terrains, even climbing up and down stairs, while carrying miscellaneous equipment and sensors to deal with dangerous situations and clear away cumbersome obstacles[5]. Therefore, most rescue robots have been developed on legged or tracked platforms to ensure mobility. To date, many legged robots have been developed by research organizations and industrial companies[6,7], and some of them have appeared in various competitions, such as the DARPA Robotics Challenge[8] and the DARPA Subterranean Challenge[9]. To further improve the mobility of legged robots, hybrid legged robots have also been built with wheels or tracks attached at the ends of their legs to replace the feet, e.g., RoboSimian[10], Momaro[11], and CHIMP[12]. However, these robots have very complicated structures and low-level controllers that consume considerable computation power and control time, resulting in relatively fragile systems under the consistently large workloads of rescue work. To date, only the quadrupedal robot ANYmal has been successfully deployed in real rescue scenarios[13]. As for tracked robots, popular designs are often equipped with swing arms that help them cross diverse obstacles, carry high payloads, and perform stable locomotion. After the 2016 earthquake in Italy, the tracked robot TRADR was used to inspect damaged buildings[14]. In the ARGOS challenge[15], Team Argonauts also used a tracked robot with swing arms to win the championship[16]. Other groups have even realized autonomous navigation for such tracked robots when climbing stairs[17] and slippery slopes[18]. However, because of the swing arms, these robots lose the capability of clearing away cumbersome obstacles. To overcome these drawbacks, in this work we have designed a unique structure that provides a tracked system with the capability of both climbing stairs and clearing obstacles.
In addition to thoughtful structural design, autonomous operation can greatly improve rescue robots’ efficiency in different rescue missions. A typical application scenario is exploring signal-blocked areas after emergencies or disasters have occurred. To overcome the loss of telecommunication between robots and operators, autonomous navigation is highly desirable for rescue robots[19,20]. Miscellaneous sensors can be leveraged to conduct simultaneous localization of the robot and mapping of the unknown area[21–23]. We integrated a light detection and ranging (LiDAR) sensor and an inertial measurement unit (IMU) to build grid maps of the surrounding environment, estimate the position and pose of the robot, and realize autonomous navigation. Note that no global positioning system (GPS) is needed in this process, making it particularly suitable for signal-blocked areas.
Even with autonomous navigation, the robot still needs the operators’ help for dexterous manipulation tasks such as opening doors. There was even a case where seven operators were needed to cooperate in controlling a single robot[24]. To relieve the operational burden, we have developed depth camera-aided semi-autonomous manipulation for the robotic arm in door-opening tasks, which can quickly locate the door handle. The whole process only requires cooperation between two operators, one controlling the base and one the arm, largely reducing operation complexity and increasing operation efficiency.
Consequently, the teleoperation system of a rescue robot is critical for successful rescue missions: its effectiveness and reliability determine the lower bound of the robot’s performance. Therefore, a multimodal teleoperation system that provides enough redundancy to deal with different conditions becomes necessary.
Based on the above considerations, we present our newly designed rescue robot, which addresses the aforementioned four points of functionality. We have named it Earthshaker, not only because it “shakes” the ground when it moves around but also because we hope it can bring earthshaking improvement to the role of robots in real rescue work. An overview of Earthshaker is shown in Fig. 1. The remainder of the paper first introduces the various systems of Earthshaker in Section 2, including the tracked chassis, the robotic arm and gripper, the perception system, the teleoperation system, and their mechatronic integration. In Section 3, the control frameworks for multimodal teleoperation, depth camera-aided semi-autonomous manipulation, and LiDAR-aided autonomous navigation are presented in detail. Section 4 summarizes the performance of Earthshaker in the finals of the 2020 Advanced Technology Engineering Challenge (A-TEC) championships as experimental validation of the system integration and control philosophy. The experience obtained from the competition and possible future directions are discussed in Section 5.
Figure 1. Overview of the rescue robot Earthshaker.
The tracked chassis supports all the other systems onboard with corresponding mechanical and electronic interfaces to form the robot as a whole. It determines the upper limit of the whole system’s mobility[25]. The tracked chassis is made of alloy steel through casting and welding. It combines the design of Christie suspension and Matilda suspension to achieve excellent traversing capability. The vibration and impact from rough terrains can be effectively absorbed by the chassis to maintain a stable operation environment for the onboard systems. The chassis is driven by two 1000 W brushless motors, which can support a maximum running speed of 1.6 m/s and a maximum climbing inclination of 40°. Four packs of LiPo batteries inside the chassis can power Earthshaker to continuously work for 3 h at medium workloads. Each battery pack supports an individual system to ensure power isolation and security, namely, one 48 V 60 A•h pack for the chassis and three 24 V 16 A•h packs for the manipulation system, the perception system, and the teleoperation system. Ultimately, the Earthshaker is 0.72 m wide and 1.22 m high, and its length can vary from 1.33 m to 1.49 m, with a total weight of approximately 250 kg.
To enhance the robot’s capability of clearing cumbersome obstacles and climbing up and down stairs, a swing arm—dozer blade structure has been designed and attached to the rear end of the chassis. The structure consists of an electric linear actuator, two tracked swing arms, and a dozer blade. The electric linear actuator can be teleoperated to rotate the swing arms, adjusting the pose of the dozer blade from 65° to –45° with respect to the horizontal direction. On flat terrain, the structure is folded to reduce motion resistance and increase agility, while on stairs, it can be used to adjust the pitch angle of the robot to improve stability, as shown in Fig. 2. When there are cumbersome obstacles in the way, the dozer blade can be set vertical to the ground to push them away efficiently, as long as they are under 75 kg.
Figure 2. Demonstration of the swing arm—dozer blade structure. The structure can be folded or extended based on needs.
2.2 Robotic arm and gripper for dexterous manipulation
Without dexterous manipulation, tasks such as pressing buttons, opening doors, turning off valves, picking up small objects, and moving wounded victims cannot be accomplished. Earthshaker has been equipped with a UR5e robotic arm and an AG95 two-finger gripper for these purposes. The arm can realize dexterous manipulation within a radius of 750 mm, with a maximum payload of 5 kg[26]. The arm is installed at the front end of the robot to guarantee enough workspace and to balance the extra weight introduced by the swing arm—dozer blade at the rear end. The original equipment manufacturer (OEM) control box of the arm has been customized to save space on the robot and to work from 24 V DC power instead of 220 V AC power. The velocity control of each joint of the arm and the gripper is mapped to the remote controller; thus, precise impedance control can be achieved. Additionally, to facilitate semi-autonomous manipulation, an Intel D435i RGBD camera is mounted on the gripper, the use of which is discussed in Section 3.2.
2.3 Sensors for diverse perception
Earthshaker has a platform for sensor installation between the robotic arm and the swing arms. The four sides of the rectangular platform hold four wide-angle cameras angled slightly downward to provide a panorama of the environment surrounding the robot, so the remote operator can plan paths and avoid obstacles accordingly. There are also two infrared cameras on the sensor platform to help identify objects in smoky environments; they are placed opposite to each other, one pointing forward and one pointing backward[27]. On the front panel of the chassis, another microcamera provides a wide view of the environment in front of the robot. With further help from the lasers on both sides of the robot, the operator can precisely drive Earthshaker through narrow doors or corridors.
2.4 Teleoperation and communication
Earthshaker is teleoperated by two operators using two AT9S remote controllers, one for the tracked chassis and one for the robotic arm and gripper. Each controller has 12 channels and transmits digital commands at 2.4 GHz to its receiver on the robot. An STM32F091-based microcontroller decodes the signals to achieve closed-loop control of the chassis and other peripherals, such as the swing arm—dozer blade, lasers, and LEDs. Meanwhile, the signals received for the robotic arm and gripper are translated into specific actions by an Intel NUC minicomputer, which has 16 GB of RAM and an Intel Core i7-1165G7 CPU with a maximum clock frequency of 4.7 GHz. In the other direction, the video transmitted back to the operators consists of images from the wide-angle cameras, the infrared cameras, the microcamera, and the operating-system screen of the NUC minicomputer. These eight images are selected and combined into one single image for transmission to save bandwidth.
In addition to the 2.4 GHz direct communication, there are also two redundant communication paths on Earthshaker, the 1.8 GHz MIMO-mesh radios[28] and 4G/5G mobile telecommunication. These additional paths can overcome the relatively short communication distance of the 2.4 GHz signals and ensure the robustness of teleoperation for Earthshaker.
2.5 Mechatronic integration
Fig. 3 summarizes the major mechatronic components of Earthshaker, as well as the signal paths for multimodal teleoperation. Note that the NUC minicomputer can also control the chassis over the CAN bus, depending on its priority relative to the STM32F091 microcontroller; consequently, switching between teleoperation and autonomous navigation can be organized. To accomplish autonomous navigation and dexterous manipulation, the NUC is also connected to the LiDAR, the RGBD camera, and the gripper via USB cables and to the robotic arm via a network switch. The same switch also connects to the MIMO-mesh radio and the 4G/5G router. As a result, the switch builds a 100 Mbps network with the operators’ computer, whose signal quality affects the latency of teleoperation.
3.1 Multimodal teleoperation
In real rescue work, it is inevitable to face environments with detrimental magnetic fields or poor signal transmission conditions, where the regular 2.4 GHz teleoperation signals decay greatly, with a reduced signal-to-noise ratio and increased data packet loss. To maintain robust signal transmission between the operators and Earthshaker for real-time teleoperation, a multimodal teleoperation framework has been developed to ensure that a communication path stays unblocked, as shown in Fig. 4a.
Figure 4. Flow charts of the control algorithms for Earthshaker.
Within the framework, when Earthshaker is close enough to the operators that the AT9S remote controllers can talk to the receivers on the robot directly, the 2.4 GHz link is used. Once the distance increases or the signals are blocked to the point where direct communication fails, the 1.8 GHz band is adopted, and the signals are transmitted through the MIMO-mesh radios. When necessary, the robotic arm can even drop an extra relay radio carried onboard to further increase the communication distance and quality. Multiple MIMO-mesh radios can form a distributed network of various topologies, e.g., a line, a star, a mesh, or a mixture of those, which can flexibly adapt to fast node movement and node-to-node signal quality variation, realizing consistently high-quality signal transmission. To ensure teleoperation in case even the MIMO-mesh radios fail, one more redundant communication path realized by the 4G/5G router has been added to Earthshaker. The router can either connect to nearby base stations of the selected Internet service provider or be relayed by nearby unmanned aerial vehicles to build a network with a preset cloud server. The operators can then access the server, monitor real-time data from the robot, and issue corresponding commands.
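The switching among these three communication paths, together with the fallback to autonomy or emergency stop described below, can be sketched as follows; the path names, the timeout value, and the polling helper are illustrative assumptions rather than Earthshaker’s actual implementation.

```python
# Communication paths in descending priority order (names are assumptions).
PATHS = ["2.4GHz_direct", "1.8GHz_mimo_mesh", "4g5g_cloud"]
TIMEOUT_S = 0.5  # prescribed wait per path (illustrative value)

def poll_path(path, timeout_s):
    """Placeholder: poll `path` for an effective command for up to
    `timeout_s` seconds; returns the command, or None on timeout."""
    return None  # every path is silent in this sketch

def select_command(autonomy_allowed):
    """Check the paths by priority; fall back to autonomy or e-stop."""
    for path in PATHS:
        cmd = poll_path(path, TIMEOUT_S)
        if cmd is not None:
            return ("teleop", path, cmd)
    # All three paths failed: autonomous navigation or emergency stop.
    return ("autonomous", None, None) if autonomy_allowed else ("estop", None, None)
```

In a real system `poll_path` would wrap the receiver drivers; the point here is only the priority order and the two terminal fallback modes.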
Earthshaker checks the control signals from these three paths according to their priority levels and signal quality. If effective data are not received within a prescribed time, the path with the next lower priority level is checked. If all three paths fail, the program determines whether to enter the autonomous navigation mode or an emergency stop mode. Once any communication path is successfully established, the remote controller in the base operator’s hands can drive the robot to move forward or backward and rotate about its center point. The microcontroller on the robot first unifies the joystick values obtained from the remote controller to $\boldsymbol{J}_{\mathrm{norm}}$,
$$\boldsymbol{J}_{\mathrm{norm}}=\frac{2\boldsymbol{J}_{\mathrm{input}}-\boldsymbol{J}_{\max}-\boldsymbol{J}_{\min}}{\boldsymbol{J}_{\max}-\boldsymbol{J}_{\min}},\tag{1}$$
where $\boldsymbol{J}_{\mathrm{input}}=[l_{\mathrm{input}},r_{\mathrm{input}}]^{\mathrm{T}}$, with $l$ denoting the value from the left joystick and $r$ that from the right one, $\boldsymbol{J}_{\mathrm{norm}}=[l_{\mathrm{norm}},r_{\mathrm{norm}}]^{\mathrm{T}}$ represents the unified joystick commands, and $\boldsymbol{J}_{\max}$ and $\boldsymbol{J}_{\min}$ denote the maximum and minimum values of the joysticks, respectively. When the unified value is zero, the robot stays still. The unified values are then interpreted as the base’s linear velocity $v$ and angular velocity $\omega$ as

$$v=\begin{cases}0, & \text{if } |l_{\mathrm{norm}}^{3}|<0.001,\\ l_{\mathrm{norm}}^{3}\cdot v_{\max}, & \text{otherwise},\end{cases}\tag{2}$$

and

$$\omega=\begin{cases}0, & \text{if } |r_{\mathrm{norm}}^{3}|<0.001,\\ r_{\mathrm{norm}}^{3}\cdot \omega_{\max}, & \text{otherwise},\end{cases}\tag{3}$$
where $v_{\max}$ and $\omega_{\max}$ are the maximum linear and angular velocities supported by the base. Through the kinematic model of differential drive, the angular velocities of the left and right motors of the base, $\omega_l$ and $\omega_r$, can be calculated as

$$\begin{cases}\omega_l=\dfrac{2v-l\omega}{2r},\\[2mm]\omega_r=\dfrac{2v+l\omega}{2r},\end{cases}\tag{4}$$

where $l$ is the distance between the tracks and $r$ represents the radius of the drive wheel. The calculated $\omega_l$ and $\omega_r$ are then sent to the motors as control commands.
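The joystick mapping of Eqs. (1)–(4) can be sketched in Python as follows; the raw joystick range, the maximum angular velocity, the track separation, the wheel radius, and the dead-zone threshold are illustrative assumptions (only the 1.6 m/s top speed comes from the chassis specification).

```python
def joystick_to_wheel_speeds(l_input, r_input,
                             j_min=200.0, j_max=1800.0,  # raw stick range (assumed)
                             v_max=1.6, w_max=2.0,       # w_max is assumed
                             track_l=0.6, wheel_r=0.1,   # geometry (assumed)
                             dead=1e-3):
    """Map raw joystick readings to left/right wheel angular velocities.

    Implements Eqs. (1)-(4): normalize each stick to [-1, 1], cube it for
    finer control near zero, apply a dead zone, then apply the
    differential-drive kinematics.
    """
    def unify(j):                                    # Eq. (1)
        return (2.0 * j - j_max - j_min) / (j_max - j_min)

    l_norm, r_norm = unify(l_input), unify(r_input)
    v = 0.0 if abs(l_norm ** 3) < dead else l_norm ** 3 * v_max   # Eq. (2)
    w = 0.0 if abs(r_norm ** 3) < dead else r_norm ** 3 * w_max   # Eq. (3)
    w_left = (2.0 * v - track_l * w) / (2.0 * wheel_r)            # Eq. (4)
    w_right = (2.0 * v + track_l * w) / (2.0 * wheel_r)
    return w_left, w_right
```

The cubic term attenuates small stick deflections, which is why the dead-zone test in Eqs. (2) and (3) is applied to the cube rather than to the raw normalized value.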
3.2 Control logic of the arm and gripper
The teleoperation programs for the arm and gripper consist of an operation-assisting module for the door-opening task and several interface modules that maintain communication between the NUC minicomputer and the other components, including the signal receiver, the UR5e arm, the AG95 gripper, and the D435i RGBD camera. Inside these programs, a network socket is first created according to the arm controller’s IP address and port number, such that the built-in input/output functions can be called to read from or write to the socket to interact with the arm controller. At the same time, the serial ports connected to the signal receiver and the gripper are initialized using the RS485 protocol. Once the NUC receives remote control instructions through the signal receiver, it parses them into positions and velocities for each joint of the arm, as well as the opening angle and holding force for the gripper.
To facilitate semi-autonomous door opening for Earthshaker, the operation-assisting module is developed using the depth camera in an eye-to-hand manner. This module, as shown in Fig. 4b, can greatly simplify the process of door opening, avoiding the potential mistakes that could be introduced by the operator through teleoperation. In the module, the coordinate frames of the camera and the arm are first calibrated to obtain the transformation between them. With the intrinsic parameter matrix $\boldsymbol{K}$, the pixels of the depth images obtained from the RGBD camera can be converted into a three-dimensional point cloud as
$$\boldsymbol{P}=D\boldsymbol{K}\boldsymbol{P}',\quad \boldsymbol{K}=\begin{bmatrix}1/f_x & 0 & 0\\ 0 & 1/f_y & 0\\ 0 & 0 & 1\end{bmatrix},\tag{5}$$
where P are the coordinates of the 3D point, D denotes the depth measured on the ray of the pixel, and P′ are the coordinates of the point in the image. Next, objects can be identified within the point cloud converted from the depth image. Specifically, in the task of door-opening, the position and orientation of the door and the handle should be estimated to serve as the goal for path planning of the arm and gripper. The position of the door is determined by fitting planes to the point cloud. Before the operation assisting module is started, the robot needs to be in front of the door such that the door is inside the field of view (FOV) of the RGBD camera. Consequently, the point cloud corresponding to the door can be recognized by planar segmentation. To determine the orientation of the door, the principal component analysis (PCA) method[29] is exploited to calculate the normal vector of the door plane in the point cloud. Then the axis-angle (θ,n) of the door’s normal direction can be calculated as
$$\begin{cases}\theta=\arccos(\boldsymbol{a}\cdot\boldsymbol{x}),\\ \boldsymbol{n}=\boldsymbol{a}\times\boldsymbol{x},\end{cases}\tag{6}$$
where $\theta$ denotes the angle between the normal vector $\boldsymbol{a}$ of the door plane and the unit vector $\boldsymbol{x}$ of the $x$ axis, and $\boldsymbol{n}$ represents the rotation axis from vector $\boldsymbol{a}$ to vector $\boldsymbol{x}$. Thus, the rotation matrix $\boldsymbol{q}$ can be further calculated with Rodrigues’ formula[30],
$$\boldsymbol{q}=\cos\theta\,\boldsymbol{I}+(1-\cos\theta)\boldsymbol{n}\boldsymbol{n}^{\mathrm{T}}+\sin\theta\,\boldsymbol{n}^{\wedge},\tag{7}$$
where $\boldsymbol{I}$ is the identity matrix and $\boldsymbol{n}^{\wedge}$ denotes the skew-symmetric matrix of $\boldsymbol{n}$. Subsequently, the DBSCAN algorithm[31] is used to cluster the cloud points that are close to the door plane. The cluster with a proper size is identified as the point cloud of the door handle, and the cluster center $\boldsymbol{p}$, computed as the mean of all the cluster points, is set as the target position for the arm to grip. As a result, the orientation $\boldsymbol{q}$ and position $\boldsymbol{p}$ serve as the target pose for approaching the handle. However, due to the observation model of the RGBD sensor, the depth measurement error is proportional to the square of the distance; the eye-to-hand setup leads to a relatively long separation between the target and the sensor, inevitably causing observation errors in the gripping pose. Additionally, the vibration introduced by the movement of the base makes it difficult to realize visual feedback control of gripping. Hence, at this point, the algorithm is only adopted to provide an initial pose for the door-opening task, and the remaining operation still needs to be completed by the operators. Even with this level of semi-autonomy, the operating steps are greatly simplified, and the operational burden on the operators is substantially relieved.
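A minimal NumPy sketch of the pipeline behind Eqs. (5)–(7) is given below, assuming PCA is realized via SVD of the centered points; the function names, and the omission of planar segmentation and DBSCAN clustering, are simplifications rather than the module’s actual implementation.

```python
import numpy as np

def depth_to_cloud(depth, fx, fy):
    """Back-project a depth image into a point cloud via Eq. (5): P = D K P'."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))       # pixel coordinates
    p_img = np.stack([u, v, np.ones_like(u)], axis=-1)   # homogeneous P'
    K = np.diag([1.0 / fx, 1.0 / fy, 1.0])               # matrix of Eq. (5)
    return (depth[..., None] * (p_img @ K.T)).reshape(-1, 3)

def door_rotation(points):
    """Fit the door-plane normal by PCA, then build the rotation of Eq. (7).

    Assumes the normal is not already aligned with the x axis.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    a = vt[-1]                                     # plane normal (smallest PC)
    x = np.array([1.0, 0.0, 0.0])
    theta = np.arccos(np.clip(a @ x, -1.0, 1.0))   # Eq. (6): angle to x axis
    n = np.cross(a, x)                             # Eq. (6): rotation axis
    n /= np.linalg.norm(n) + 1e-12
    n_hat = np.array([[0.0, -n[2], n[1]],          # skew-symmetric n^
                      [n[2], 0.0, -n[0]],
                      [-n[1], n[0], 0.0]])
    # Eq. (7): Rodrigues' formula.
    return np.cos(theta) * np.eye(3) + (1 - np.cos(theta)) * np.outer(n, n) \
        + np.sin(theta) * n_hat
```

By construction, the returned matrix rotates the fitted plane normal onto the $x$ axis, which is exactly the orientation target used when approaching the handle.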
3.3 Autonomous navigation
When autonomous navigation is desired, control authority over Earthshaker can be handed to the NUC. This helps the robot actively explore unknown and signal-blocked areas and search for an exit toward a desired direction; the algorithm can be found in Fig. 4c. Once autonomous navigation is started, the NUC analyzes the data scanned by the LiDAR to build a grid map of the current environment and simultaneously estimate the robot’s ego-motion. Feature matching-based methods such as LOAM[32] are popular pose estimation methods that have demonstrated robustness and efficacy in complex off-road environments, so scan matching is also incorporated into Earthshaker’s autonomous navigation algorithms. Features are extracted from each frame of the LiDAR sweep $\mathcal{X}^{\mathrm{L}}$ according to the smoothness $c$,

$$c=\frac{1}{|S|\cdot\left\|\boldsymbol{X}_{i}^{\mathrm{L}}\right\|}\left\|\sum_{j\in S,\,j\neq i}\left(\boldsymbol{X}_{i}^{\mathrm{L}}-\boldsymbol{X}_{j}^{\mathrm{L}}\right)\right\|,\tag{8}$$

where $\boldsymbol{X}_{i}^{\mathrm{L}}$ denotes the $i$-th point within the sweep, and $S$ defines a set of consecutive points obtained by the same laser beam near point $i$. The number of points within $S$ is empirically set to 10. By setting a threshold on the smoothness, a point can be classified as an edge feature (greater smoothness) or a planar feature (less smoothness). Then, the edge and planar features of consecutive frames can be registered separately to recover the motion between frames using the iterative closest point (ICP) algorithm[33]. The objective of the ICP algorithm is to minimize the cost with respect to the estimated transformation $\boldsymbol{T}$,

$$\min_{\boldsymbol{T}}\left(\sum_{i=1}^{m}d_{\epsilon,i}^{2}+\sum_{j=1}^{n}d_{H,j}^{2}\right),\tag{9}$$

where $m$ and $n$ denote the numbers of matched edge features and planar features, respectively, $d_{\epsilon}$ represents the distance between two matched edge features, and $d_{H}$ represents the distance between two matched planar features.
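The smoothness-based feature classification described above (following LOAM) can be sketched as follows; the function name, the neighbor-window handling at scan-line borders, and the test geometry are illustrative assumptions.

```python
import numpy as np

def smoothness(scan, i, half_window=5):
    """Smoothness c of point i on one scan line: the norm of the summed
    coordinate differences to its neighbors S (points from the same laser
    beam), normalized by |S| and the point's range. A larger c suggests an
    edge feature; a smaller c suggests a planar feature."""
    lo, hi = max(0, i - half_window), min(len(scan), i + half_window + 1)
    S = [j for j in range(lo, hi) if j != i]   # |S| = 10 away from borders
    diff = sum((scan[i] - scan[j] for j in S), start=np.zeros(3))
    return np.linalg.norm(diff) / (len(S) * np.linalg.norm(scan[i]))
```

On a locally straight (planar) stretch of the scan line, the symmetric differences cancel and c is near zero; at a corner or depth discontinuity, they accumulate and c grows, which is what the threshold separates.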
Due to the vibration of the base caused by the tracks and rugged terrains, IMU preintegration[34] is introduced into the system to further improve the robustness of the localization results. As shown in Fig. 4c, an extended Kalman filter (EKF)[35] is used to infer the state of the robot, fusing the scan-matching results and the IMU preintegration results in a tightly coupled manner.
With the high-precision laser-inertial odometry estimated from the EKF fusion, the laser scans are merged into an occupancy grid map. In general, the exploration task is to maximize the covered area on the grid map. Herein, a frontier-based method[36] is used to guide the robot to explore along the boundary between the unknown area and the free area on the grid map. In this method, a random tree incrementally expands toward the boundaries during exploration by sampling viewpoints as new nodes. Each newly added node in the random tree is then evaluated with its information gain and traversing cost as

$$s(\boldsymbol{x})=g(\boldsymbol{x})\,\mathrm{e}^{-\lambda c(\boldsymbol{x})},\tag{10}$$

where $g(\boldsymbol{x})$ is the expected information gain at position $\boldsymbol{x}$, $c(\boldsymbol{x})$ is the distance cost between the robot and position $\boldsymbol{x}$, and $\lambda$ denotes a coefficient that controls the penalty on the distance cost. The branch with the maximum score is selected, and its first edge is set as the next best view to navigate to. The move_base navigation module provided by the robot operating system (ROS)[37] is employed to calculate the shortest path based on the Dijkstra algorithm[38]. The robot follows the generated path to gradually explore the environment. Once the target point is reached, the next round of exploration planning continues. The whole process repeats until the robot covers the whole area or finds the exit.
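The gain-versus-cost node evaluation described above can be sketched as follows; the exponential discount form, the value of λ, and the dictionary-based node representation are assumptions for illustration, not the paper’s exact scoring function.

```python
import math

def node_score(gain, dist, lam=0.5):
    """Information gain discounted by travel distance; lam penalizes
    distant viewpoints (the exponential form is an assumption)."""
    return gain * math.exp(-lam * dist)

def next_best_view(nodes, lam=0.5):
    """Return the tree node with the highest score; its first edge from
    the robot is then used as the next navigation target."""
    return max(nodes, key=lambda n: node_score(n["gain"], n["dist"], lam))
```

With this trade-off, a nearby frontier with modest gain can outrank a distant frontier with large gain, keeping exploration locally efficient.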
4. Experimental validation
To examine the functionality of Earthshaker and demonstrate its superiority, it was sent to the first A-TEC championships in 2020 as an experimental validation. The competition was held by the government of Shenzhen, Guangdong, China, to further advance robotic techniques and seek industrial opportunities[39]. In the finals, the competition was divided into five sessions, and all the teams were ranked based on their performance in those sessions, considering task difficulty, task completion, and time consumption. The five sessions, in turn, were traversing rough terrains, clearing cumbersome obstacles and opening doors, climbing up and down regular stairs, passing through signal-blocked areas, and searching and rescuing in smoky indoor environments, as illustrated in Fig. 5. Specifically, passing through signal-blocked areas required the robots to autonomously navigate inside a maze and search for the only exit, while in the other sessions, the robots were remotely controlled by operators from the first-person point of view from hundreds of meters away. These diverse sessions examined the capability of the participating robots in locomotion, manipulation, perception, and telecommunication[40–42]. Robust and consistent performance in all sessions was more important than outstanding performance in any single session[43].
Figure 5. Overview of the sessions of the A-TEC championships.
Fifteen teams from around the world entered the intense finals. Of those teams, Earthshaker took first place with a score of 109 points, whereas the robots from Tsinghua University and Chongqing University took second and third places with 79 and 70.5 points, respectively.
Compared to the Seeker robot from Tsinghua University, the MIST-Robot from Chongqing University, and the robots from the other teams, Earthshaker realized both stair-climbing capability for a tracked system and improved clearing of cumbersome obstacles in the most economical way, via the swing arm—dozer blade structure. Earthshaker also won thanks to the diverse sensors integrated into the robot, which allowed it to be robustly teleoperated and even achieve autonomous navigation. The following subsections describe the performance of Earthshaker in each session of the finals.
4.1 Traversing miscellaneous terrains
This session required the robot to first traverse a 30 m × 3 m rough terrain that could be covered with rubble, bricks, or irregular concrete debris, depending on the difficulty selected by each team. Following that, the robot needed to pass through an area covered by large immobile obstacles, climb up and down slopes of at most 36°, and travel on a bridge tilted sideways by 27°. Even though these tasks were relatively easy, they relied heavily on the robot’s speed and agility. Because the robotic arm did not need to be operated during this session, the corresponding operator was able to fly a DJI Mavic unmanned aerial vehicle (UAV) to provide a global view of the field from above, which allowed the base operator to plan operations beforehand and greatly reduced the time consumption. Benefiting from the great horsepower and well-designed suspension of the chassis, Earthshaker performed excellently in these tasks and ranked first among all the robots.
In addition to the aforementioned regular tasks, there were also challenging tasks in this session, where the robots needed to traverse muddy terrain with potholes, flat terrain with trenches of various widths, and pools filled with water of different depths. Earthshaker successfully accomplished these challenging tasks, as shown in Fig. 6. Specifically, when faced with the trenches, Earthshaker put down the swing arm—dozer blade to increase the effective body length of the chassis and thereby crossed a trench 600 mm wide. As for the water pools, because the whole body of Earthshaker was waterproof to level IP64 and the chassis even to level IP66, Earthshaker was capable of dealing with a pool 500 mm deep. It is worth noting that Earthshaker was prepared for the possible rainy weather during the competition, whereas many other robots were not; some suffered from the rain with their exposed electronic interfaces and ended up unable to finish the competition.
Figure 6. Snapshots of Earthshaker in Session 1. (a)–(c) Earthshaker was passing through a pool filled with water 500 mm in depth. (d) Earthshaker was traversing muddy terrain. (e) Earthshaker was crossing a trench 600 mm in width.
4.2 Clearing cumbersome obstacles and opening doors
In this session, the robots were required to first clear up a 10 m × 20 m area by moving obstacles to designated places and then open and enter a door with automatic closers. The obstacles included hollow steel tubes as light as 5 kg and steel beams and concrete blocks as heavy as 50 kg. Earthshaker successfully utilized the dozer blade to push all the obstacles to the target positions.
There were multiple difficulty levels for door opening, with different types of doors and door handles: unifold or bifold doors with spherical handles, L-shaped handles, or valves. The most challenging combination, a unifold door with a spherical handle, was selected for Earthshaker in the competition. Because of the automatic door closer, the robot needed to rotate the handle and maintain the rotation while opening the door, so the two operators had to cooperate. One operator first aligned the 0.8 m wide Earthshaker with the 1 m wide door frame with the help of the equipped laser pointers and then kept commanding the base to move forward slowly as the door handle was rotated, until the front end of the chassis was pushed against the door and the handle could be released by the gripper. The other operator fine-tuned the robotic arm and the gripper after the initial semi-autonomous manipulation, then gripped and rotated the door handle as the chassis approached the door. Fig. 7a–e shows snapshots of the whole process of this session. Earthshaker finished this session within 31 min 12 s.
Figure 7. Snapshots of Earthshaker in Sessions 2 and 3. (a)–(b) Earthshaker was clearing a light obstacle on the left and a heavy one on the right. (c)–(e) Earthshaker was opening a unifold door with a spherical door handle. (f)–(h) Earthshaker was climbing up and down the stairs.
4.3 Climbing up and down stairs
Robots in this session needed to climb up to and down from a platform, as shown in Fig. 7f–h, either via vertical ladders or via regular stairs. Its tracked chassis meant that Earthshaker could only take the regular stairs, which was the common choice among all the robots in the competition. The stairs had 24 steps one-way, each step with a depth of 300 mm and a height of 175 mm; thus, the inclination angle was approximately 30°. There was a turning platform between the two sections of stairs.
When Earthshaker was climbing up the stairs, the swing arm–dozer blade structure was adjusted to provide enough contact length for the chassis and help the robot move smoothly. However, the swing arm was not put fully flat, because the detrimental friction generated by the passive arm tracks would hinder the robot from thrusting upward. The angle of the swing arm was empirically set just high enough to support the robot in climbing the stairs. On the other hand, when the robot was climbing down the stairs, the swing arm could be put fully flat to take advantage of its length and the passive friction generated, increasing stability. Earthshaker finished this session in 6 min 13 s, with the climb up taking the majority of the time. Compared to the other, smaller tracked robots in the competition, Earthshaker was slower due to its relatively cumbersome body on the stairway.
4.4 Autonomous navigation
The autonomous navigation session tested the robot's intelligence in building maps and finding exits within unknown areas without human help. To simulate real-world signal loss, the referee turned on a signal blocker once the robot entered the maze, and the operators inside the control room were not allowed to touch the remote controllers during this period. The maze had three possible entrances and three possible exits; when the robot arrived, only one entrance would be open and only one exit would be usable. For fairness, movable doors inside the maze were adjusted for each robot to form a different unknown structure. Earthshaker reached the exit within 41.13 s in this session, ranking second fastest among all the robots. To check the map built for the maze, the point cloud stored in the NUC was extracted after the competition, as shown in Fig. 8. Fig. 8a demonstrates the map built when the robot had just entered the maze from the bottom-left entrance, where the white line connects the robot to its target position on the far side of the maze. Fig. 8b shows the map built while the robot was halfway through the maze. Fig. 8c presents the map built when the robot successfully found the exit on the top left, with the path it followed in the maze indicated by a red solid line.
Figure 8. The map built for the maze during the competition. The red slim line represents the path that the robot followed. The gray area indicates the accessible part of the map, while the cyan areas with dark red boundaries indicate the inaccessible parts. The three subfigures show the built maps (a) at the beginning, (b) in the middle, and (c) at the end of the autonomous navigation through the maze.
The last session of the competition involved indoor rescue work. The robot was supposed to enter rooms filled with dense black smoke and search for a fire source and a wounded person. The smoke was real, spread by smoke generators, while the fire source was represented by an electric oven and the wounded person by a human-shaped sand bag. The dummy weighed about 50 kg and was dressed in clothes that could generate heat for a period of time to simulate a real person. There were eight similar rooms in total, and the fire source and the wounded person were randomly distributed among them. The rooms also contained common items, such as tables, chairs, and cabinets, similar to regular rooms in daily life. The robot needed to find the fire source and turn it off, and to find the wounded person and carry it out of the room to a designated area. The smoke was quite dense, and the visible distance inside the rooms was less than half a meter. Earthshaker had to search every room under teleoperation to locate the wounded person and the oven with its two infrared cameras, and then use the gripper to turn the oven off and carry the wounded person out. This again required cooperation between the two operators. To carry the wounded person out of the room, a customized lasso was installed on Earthshaker before setting off. Once the wounded person was located, the robotic arm and gripper picked up the lasso using preset control trajectories and put the lasso around the wounded person's arm through teleoperation. The lasso then automatically locked once the robot started to drag the wounded person. Fig. 9 shows scenes from this session. Earthshaker spent 11 min 36 s finishing all the tasks.
Figure 9. Snapshots of Earthshaker in Session 5. (a) The infrared thermal image of the smoky environment. (b)–(e) The scene and the strategy used to rescue the dummy.
Earthshaker performed reasonably well in each session of the competition and even ranked first in two of the five sessions, which eventually allowed it to take first place among all the robots. The overall score table is shown in Table 1. Earthshaker obtained 109 points in the finals, whereas the runner-up obtained only 79 points. Earthshaker stood out for its multimodal teleoperation, its modular and waterproof mechatronic design, and the sufficient experiments and practice conducted before the ultimate test. The competition required a complete and robust rescue robot as a whole, not just any advanced individual module. However, some of Earthshaker's shortcomings were also revealed in the competition. Its large size limited its flexibility of movement, making it difficult to pass through certain narrow spaces in actual use. At the same time, the payload of the manipulator was limited, so the robot could not complete dexterous manipulation tasks with large loads. Even though Earthshaker still had much room for improvement, it was the excellent mechatronic integration and the advanced control philosophy that made it the winner of the A-TEC championships in 2020.
Table 1. Final score table of the A-TEC championships 2020.
This paper introduces a rescue robot, Earthshaker, including its system integration and control algorithms. The robot's performance was evaluated as excellent during the A-TEC robotic championships in 2020. The unique swing arm–dozer blade structure of Earthshaker extends the capability of a conventional tracked chassis, improving its performance in cumbersome obstacle clearing and regular stair climbing. The multimodal teleoperation system provides redundancy and robustness when the operators cannot be on site. The finite autonomy in the operation of the robotic arm and gripper helps relieve the operators' workload to a suitable extent. When teleoperation signals are lost, the robot can also enter the autonomous navigation mode to search for an exit by itself and give control authority back to the operators afterward. Overall, the championship that Earthshaker earned has shown the efficacy of all the aforementioned efforts. It can play a significant role in search and rescue in disaster scenarios, such as nuclear accidents, toxic gas leaks, and fires, where human workers cannot be deployed due to radiation, toxic contamination, or architectural collapse. Future efforts can be put into improving the robot's autonomy in many foreseeable tasks for emergencies and disasters to further increase its efficiency and robustness. More earthshaking endeavors in helping the human community can be expected from Earthshaker.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (U21A20119) and the championship prize funded by Shenzhen Leaguer Co., Ltd. The research of Wei Gao was also supported in part by the Fundamental Research Funds for the Central Universities.
Conflict of interest
The authors declare that they have no conflict of interest.
For any function field $F/\mathbb{F}_{p^e}$, there exists a function field $F'/\mathbb{F}_{p^e}$ with an isomorphism $\phi_{h}\colon F\rightarrow F'$ satisfying $\phi_{h}(a)=a^{p^{e-h}}$ for all $a\in\mathbb{F}_{p^e}$.
We showed that the $h$-Galois dual code of the algebraic geometry code $C_{\mathcal{L},F}(D,G)$ can be represented as $C_{\varOmega,F'}(\phi_{h}(D),\phi_{h}(G))$.
As an application of the above result, we constructed a class of $h$-Galois LCD MDS codes.
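On $\mathbb{F}_{p^e}$, the defining property $\phi_{h}(a)=a^{p^{e-h}}$ is a power of the Frobenius automorphism, so it preserves addition and multiplication and fixes the prime subfield pointwise. A minimal sketch verifying this for the toy case $p=3$, $e=2$, $h=1$ (the map $a\mapsto a^{3}$ on $\mathbb{F}_{9}$), with $\mathbb{F}_{9}$ realized as $\mathbb{F}_{3}[x]/(x^{2}+1)$; the explicit field construction is our illustrative choice, not from the paper:

```python
from itertools import product

P = 3  # characteristic p

# Elements of GF(9) = F_3[x]/(x^2 + 1) as pairs (a, b) meaning a + b*x.
def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def mul(u, v):
    # (a + bx)(c + dx) = (ac - bd) + (ad + bc)x, using x^2 = -1.
    a, b = u
    c, d = v
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def power(u, n):
    r = (1, 0)  # multiplicative identity
    for _ in range(n):
        r = mul(r, u)
    return r

# phi_h with p = 3, e = 2, h = 1: a |-> a^(p^(e-h)) = a^3.
def phi(u):
    return power(u, 3)

field = list(product(range(P), repeat=2))

# phi is additive and multiplicative on all of GF(9)...
assert all(phi(add(u, v)) == add(phi(u), phi(v)) for u in field for v in field)
assert all(phi(mul(u, v)) == mul(phi(u), phi(v)) for u in field for v in field)
# ...and fixes the prime subfield F_3 pointwise (Fermat: a^3 = a mod 3).
assert all(phi((a, 0)) == (a, 0) for a in range(P))
print("phi is a field automorphism fixing F_3")
```

Note that $\phi$ sends the root $x$ of $x^{2}+1$ to the other root $-x$, as a Frobenius power should permute the roots of the defining polynomial.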