
A robot obstacle avoidance control strategy

1 Introduction

At present, most research on biped robots focuses on balance and walking. A dynamic, mobile intelligent robot should be able to independently detect obstacles, avoid them, and plan its path. Obstacle detection is essentially the process of recovering the three-dimensional depth information of the surrounding environment.

In a robot system, the vision system captures and interprets information about the surrounding environment. Extracting three-dimensional information from two-dimensional images and reconstructing the three-dimensional scene is one of its key functions. Different techniques can be used to recover this information, such as binocular stereo vision, optical flow, and depth recovery from focal length (zoom).

In this paper, we propose a real-time obstacle avoidance method for robots based on the principle of recovering depth information from focal length. The proposed method uses two CCD cameras with different focal lengths to capture images of the same scene and then compares the sharpness of corresponding regions in the two images. The distance corresponding to the focal length of the image in which a region appears sharpest is taken as the depth of that region. Although using more cameras would give better depth recovery, to simplify the computation the actual control algorithm only needs coarse depth information for each region of the image, which in this algorithm is simply a label such as "far" or "near". This label becomes the depth mark of the whole region. We then build a depth map of the full image; the process is similar to recovering three-dimensional terrain information from focal length, but our algorithm is greatly simplified. Finally, the robot's decision-making system designs its obstacle avoidance control strategy based on this depth map.
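To make this labeling step concrete, here is a minimal Python sketch. Everything in it is illustrative: the function names, the 4x4 region grid, and the gradient-energy sharpness score are assumptions rather than the paper's actual implementation (the paper's own sharpness criterion is introduced later).

```python
import numpy as np

# Assumed setup: camera 0 is focused on a short distance, camera 1 on a
# long distance. The camera that renders a region sharpest tells us
# roughly how far the objects in that region are.
LABEL_OF_CAMERA = {0: "near", 1: "far"}

def region_sharpness(region):
    """Crude sharpness score for one image region (placeholder measure):
    mean gradient energy, which drops when the region is defocused."""
    gy, gx = np.gradient(region.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def coarse_depth_map(img_near, img_far, grid=(4, 4)):
    """Split both grayscale images into a grid of regions and, for each
    region, keep the label of the camera that produced the sharper view."""
    h, w = img_near.shape
    rows, cols = grid
    labels = []
    for r in range(rows):
        row_labels = []
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            scores = [region_sharpness(img_near[ys, xs]),
                      region_sharpness(img_far[ys, xs])]
            row_labels.append(LABEL_OF_CAMERA[int(np.argmax(scores))])
        labels.append(row_labels)
    return labels  # e.g. [["far", "far", "near", ...], ...]
```

With more than two cameras or focal settings, `LABEL_OF_CAMERA` would simply gain more entries, for example a "middle" label.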

The second part of this paper introduces the relevant theory in this field. The third part introduces the depth marking map and the specific implementation of the obstacle avoidance control strategy. Finally, some experimental results are given.

2 Early work

Focal length analysis has long been used in autofocus systems and in recovering depth from captured images. The earliest work in this field is [1], in which the author analyzed the degree of focus of images using the Fourier transform. Pentland proposed two methods for reconstructing the depth image of a scene in [2]. The first is based on measuring blurred edges in the defocused image and requires knowledge of the position and amplitude of the image edges. The second compares two images taken by cameras with different apertures and computes the change in defocus of corresponding regions to obtain depth information. Pentland achieved very good experimental results, processing images at up to 8 frames per second. Other researchers used more accurate mathematical models of defocus to improve the accuracy of depth recovery [3]. However, because these methods must convolve and filter the image, they require large amounts of computing resources to run in real time. Moreover, they only apply to static, fixed environments, and the depth information they can recover is limited to a very shallow range [4]. Krotkov proposed extracting depth information from the degree of focus in his 1987 paper [5]. The principle is to find the point of maximum focus from a large number of images taken at different focal lengths. Because the filtering method relies on time averaging, Krotkov's approach also requires a static image and can only recover the depth of one window in the image, whereas a robot needs multiple depth maps to achieve continuous, smooth obstacle avoidance.

In 1993, Krotkov and Bajcsy successfully developed a vision system that combines stereo vision and focus [6], with a depth recovery range of up to two meters. Darrell and Wohn proposed a pyramid method for recovering depth information from focus in 1988 [7]; they used a servo system to control the focal length of the lens and took 8 to 30 images, achieving good accuracy. Like the earlier methods, however, it is only suitable for static images and is computationally expensive. To overcome the heavy computational load of the above methods, our algorithm adopts a simple mathematical model based on the principle of recovering depth from focal length, and the control strategy is designed according to the principles of simplicity, feasibility, and stability.

3 Obstacle avoidance control strategy

To establish a simple robot navigation system, it is important to understand the minimum capabilities the robot's perception system must have to achieve stable and continuous obstacle avoidance. Stability means reliable operation and accurate identification of obstacles. Continuity means that the robot's motion should be continuous, without pauses, which requires the perception system to provide depth information to the decision-making system ahead of time. At the same time, the obstacle avoidance strategy should be designed around the speed and reaction time of the robot's walking system. Figure 1 shows the flow chart of the robot's obstacle avoidance control. For the simplest control process shown in Figure 1, the vision system must be able to distinguish left from right, and far, middle, and near in depth. This means the control strategy only has to deal with three types of distance marks, where the "far" and "near" marks each represent a certain range of depths. Of course, the thresholds separating these three depth ranges should be based on the maximum speed of the robot. In other words, at the "near" depth, the robot must still have enough time to reduce its forward speed from maximum to zero, leaving the robot's control system and vision system sufficient reaction time.
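As a worked example of how such a threshold could be chosen, the sketch below computes the smallest safe "near" distance from an assumed maximum speed, deceleration, and reaction time; none of these numbers come from the paper.

```python
def near_threshold(v_max, decel, t_react):
    """Smallest 'near' distance (meters) that still lets the robot stop:
    distance covered while the vision and control systems react, plus the
    braking distance from v_max down to zero."""
    return v_max * t_react + v_max ** 2 / (2.0 * decel)

# Assumed example numbers (not from the paper):
# walking speed 0.3 m/s, deceleration 0.5 m/s^2, 0.4 s total reaction time.
print(near_threshold(v_max=0.3, decel=0.5, t_react=0.4))  # -> 0.21 m
```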

By dividing the depth information of the scene in front of the robot into three categories, the algorithm becomes much simpler than previous algorithms. How the vision system judges the distance of the scene ahead then guides the control strategy. Since the method is based on recovering depth from focal length, we need a criterion for measuring the degree of focus, and the most intuitive criterion is sharpness.
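The requirement from Figure 1, distinguishing left from right and far, middle, near, suggests a decision rule of roughly the following shape. The rule below is only an assumed illustration of such a policy, not the authors' actual strategy.

```python
RANK = {"near": 0, "middle": 1, "far": 2}

def choose_action(left_label, right_label):
    """Toy obstacle-avoidance rule over the coarse depth marks of the
    left and right halves of the field of view (illustrative only)."""
    if left_label == right_label == "near":
        return "stop"                      # no free direction ahead
    if RANK[left_label] > RANK[right_label]:
        return "turn_left"                 # more free space on the left
    if RANK[right_label] > RANK[left_label]:
        return "turn_right"
    return "go_forward"                    # both sides equally clear

print(choose_action("far", "near"))       # -> turn_left
print(choose_action("middle", "middle"))  # -> go_forward
```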

4 Sharpness criterion

To dynamically measure the degree of focus in a certain region of the image, we need a criterion for computing the sharpness of that region. Compared with a focused image, a defocused image has lost some of the underlying information, which is equivalent to reducing the quality of the image.
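The paper goes on to define its own criterion; one widely used measure of regional sharpness, shown here purely for comparison, is the variance of the Laplacian response of the region, since defocus suppresses the high-frequency content that the Laplacian picks up.

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_variance(region):
    """Variance of the Laplacian response of a grayscale region:
    blurred (defocused) regions lose high-frequency content, so their
    Laplacian response has a smaller variance than focused regions."""
    region = region.astype(float)
    h, w = region.shape
    # Valid 2-D filtering with the 3x3 Laplacian kernel.
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * region[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())
```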
