Foreign Literature Translation for Graduation Design
Title: Indoor Mobile Robot Navigation by Detecting Fluorescent Tubes
Major: Mechanical Design, Manufacturing and Automation
Class:    Student:    Student ID:    Advisor:
April 8, 2012

Autonomous Indoor Mobile Robot Navigation by Detecting Fluorescent Tubes

Fabien LAUNAY, Akihisa OHYA, Shinichi YUTA
Intelligent Robot Laboratory, University of Tsukuba
1-1-1 Tennoudai, Tsukuba, Ibaraki 305-8573 JAPAN
{launay, ohya, yuta}@roboken.esys.tsukuba.ac.jp

Abstract
This paper proposes an indoor navigation system for an autonomous mobile robot, including the teaching of its environment. The self-localization of the vehicle is done by detecting the position and orientation of fluorescent tubes located above its desired path, thanks to a camera pointing at the ceiling. A map of the lights, based on odometry data, is built in advance by the robot while guided by an operator. A graphic user interface is then used to define the trajectory the robot must follow with respect to the lights. While the robot is moving, the position and orientation of the lights it detects are compared to the map values, which enables the vehicle to cancel odometry errors.

1 Introduction
When a wheel-type mobile robot navigates on a two-dimensional plane, it can use sensors to estimate its relative localization by summing the elementary displacements provided by incremental encoders mounted on its wheels. The main drawback of this method, known as odometry, is that its estimation error tends to increase without bound [1]. For long-distance navigation, odometry and other dead-reckoning solutions may therefore be supported by an absolute localization technique providing position information at a low frequency. Absolute localization in indoor navigation using landmarks located on the ground or on the walls is sometimes difficult to implement, since various objects can obstruct them; a navigation system based on ceiling-landmark recognition can therefore be seen as an alternative. The navigation system we developed consists of two steps. In the first step, the vehicle is provided with a map of the ceiling lights. Building such a map by hand quickly becomes a heavy task as its size grows. Instead, the robot is guided manually
under each light and builds the map automatically. The second step consists in defining a navigation path for the vehicle and enabling its position and orientation correction whenever it detects a light recorded previously in the map. Since the map built by the robot is based on odometry, whose estimation error grows without bound, the position and orientation of the lights in the map do not correspond to reality. However, if the trajectory to be followed by the vehicle during the navigation process is defined appropriately above this distorted map, the robot can still move along any desired trajectory in the real world. A GUI has been developed to facilitate this map-based path-definition process. We equipped a mobile robot with a camera pointing at the ceiling. During the navigation process, when a light is detected, the robot calculates the position and orientation of this landmark in its own reference frame and, thanks to the map of the lights built in advance, it can estimate its absolute position and orientation with respect to that map. We define the pose of an object as its position and orientation with respect to a given referential.

2 Related work
The idea of using lights as landmarks for indoor navigation is not new. Hashino [2] developed a fluorescent-light sensor to detect the inclination angle between an unmanned vehicle and a fluorescent lamp attached to the ceiling; the objective was to carry out the main part of the process with a hardware logic circuit. Instead of lights, ceiling openings for aeration have also been used as landmarks to track: Oota et al. [3] based this tracking on edge detection, whereas Fukuda [4] developed a more complex system using fuzzy template matching. Hashiba et al. [5] used development images of the ceiling to propose a motion-planning method. More recently, Amat et al. [6] presented a vision-based navigation system using several fluorescent light tubes located in captured images, whose absolute pose-estimation accuracy is better than that of a GPS system. One advantage of the system proposed here is its low memory and processing-speed requirements, which make it possible to implement on a robot with limited image-processing hardware. Moreover, our navigation system includes a landmark-map construction process based entirely on the robot's odometry data. The development of a GUI connects the lights map produced during the teaching process with the autonomous robot navigation, resulting in a complete navigation system. This is the main difference from previous works, which either assume knowledge of the ceiling landmarks' exact pose thanks to CAD data of building maps, or require the absolute vehicle pose to be entered manually and periodically during landmark-map construction so as to cancel odometry errors.

Figure 1: Target environment consisting of lights of different shapes in corridors exposed to luminosity variations due to sunlight.

3 Lights map building
In order to cancel odometry errors whenever a light is detected, the robot needs to know in advance the pose, in a given referential, of the lights under which it is supposed to navigate. Since we are aiming at long-distance autonomous indoor navigation, the size of the landmark map is unbounded.
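To make the map-building bookkeeping concrete, here is a minimal sketch in Python. The differential-drive odometry update and the map-entry structure are illustrative assumptions — the paper does not specify the robot's kinematics or map format — and all names (`Pose`, `odometry_update`, `record_light`) are hypothetical:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """A pose (x, y, theta) in the robot's odometry frame."""
    x: float = 0.0
    y: float = 0.0
    theta: float = 0.0

def odometry_update(pose, d_left, d_right, wheel_base):
    """Sum one pair of elementary wheel displacements (section 1).

    d_left and d_right are the increments (metres) reported by the
    wheel encoders since the last update; their errors accumulate
    without bound, which is why the light map is needed at all.
    """
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    # Integrate along the arc, using the mid-step heading.
    pose.x += d_center * math.cos(pose.theta + d_theta / 2.0)
    pose.y += d_center * math.sin(pose.theta + d_theta / 2.0)
    pose.theta += d_theta
    return pose

# The map is simply a list of light poses expressed in the
# (drifting) odometry frame, appended during the teaching phase.
light_map = []

def record_light(robot_pose, lx, ly, ltheta):
    """Record a detected light's pose in the map.

    (lx, ly, ltheta) is the light's pose measured in the robot
    frame by the ceiling camera; it is transformed into the
    odometry frame before being stored.
    """
    c, s = math.cos(robot_pose.theta), math.sin(robot_pose.theta)
    light_map.append(Pose(robot_pose.x + c * lx - s * ly,
                          robot_pose.y + s * lx + c * ly,
                          robot_pose.theta + ltheta))
```

Because every entry is expressed in the odometry frame, the stored poses inherit the accumulated drift — which is exactly the distortion discussed in section 4.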
Building such a map manually becomes a heavy task for the operator, and we believe that an autonomous mobile robot can cope with this issue. During the learning process, the vehicle, equipped with a camera pointing at the ceiling, is guided manually under each light and adds landmark information to the map whenever a new light appears above its path. This human-assisted map building is the first step of our research concerning landmark-map building; we intend to replace it with a fully autonomous map-building system. As the image processing involved during the learning process is identical to that used during navigation, we present the feature-extraction method in sections 5 and 6. Once the teaching phase is completed, the robot holds a map of the lights that can later be used for the autonomous navigation process.

4 Dealing with a robot-made map
4.1 Influence of odometry errors on the map
Asking the robot to build a map implies dealing with the odometry errors that occur during the learning process itself. As the robot is guided under new lights, the accumulation of odometry errors makes the pose of the landmarks recorded in the map more and more different
from the values corresponding to the real world. Several maps of the environment represented in Fig. 1 are given in Fig. 2; the odometry data recorded by the robot during the learning process is also shown for one of the maps.

4.2 Usage of the map
Only one map is needed by the robot to correct its pose during the navigation process. Whenever the robot detects a light learnt previously, it corrects its absolute pose using the landmark information recorded in the map. Since the map contents do not correspond to real-world values, the trajectory of the robot has to be specified according to the pose of the lights in the map, not according to the trajectory we want the robot to follow in its real environment. For example, if the mobile robot's task is to navigate right below a straight corridor's lights, the robot is not requested to follow a straight line along the
middle of the corridor. Instead of this simple motion command, the robot has to trace every segment connecting the projections on the ground of the centers of two successive lights. This is illustrated in Fig. 3, where a zoom of the trajectory specified to the robot appears as a dotted line. A GUI has been developed in Tcl/Tk to make it easy to specify different types of trajectories with respect to the map learnt by the robot. This GUI can also be used on-line to follow the evolution of the robot in real time on the landmark map during the learning and navigation processes.

Figure 2: Several maps of the environment represented in Fig. 1, built by the same robot. Rectangles and circles represent lights of different shapes.

5 Fluorescent tube detection
5.1 Fluorescent tube model
It is natural to think of a fluorescent tube as a natural landmark for a vision-based process aimed at improving the localization of a mobile robot in an indoor environment. Indeed, problems such as dirt, shadows, light reflection on the ground, or obstruction of the landmarks usually do not appear in this case. One advantage of fluorescent tubes compared with other possible ceiling landmarks is that, once they are switched on, their recognition in an image can be performed with a very simple image-processing algorithm, since they are the only bright elements permanently found in such a place. If a 256-grey-level image containing a fluorescent tube is binarized with an appropriate threshold 0 < T < 255, the only element that remains after this operation is a rectangular shape. Fig. 4(a) shows a typical camera image of the ceiling of a corridor containing a fluorescent light; the axis of the camera is perpendicular to the ceiling. Shown in (b) is the binarized image of (a). If we suppose that the distance between the camera and the ceiling remains constant, and that no more than one light at a time can be seen by the camera located on top of the robot, a fluorescent tube can be modeled by a given area S0 in a thresholded image of the ceiling.

Figure 4: (a) Sample image of a fluorescent light, (b) binarized image.

5.2 Fluorescent light detection process
Using odometry, the robot is able to know when it gets close to a light recorded in its map by comparing, in a closed loop, its actual estimated position to
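The binarize-and-match model of section 5.1 can be sketched as follows. The threshold `T`, expected area `S0`, tolerance, and the moment-based orientation estimate are illustrative assumptions rather than values or methods stated in the paper, and `detect_tube` is a hypothetical name:

```python
import numpy as np

def detect_tube(grey, T=200, S0=1500, tol=0.3):
    """Detect a fluorescent tube in a 256-grey-level ceiling image.

    Binarize with threshold T, then model the tube as the single
    bright region whose pixel area is close to the expected area S0.
    Returns (cx, cy, angle) -- the blob centroid in pixels and the
    orientation of its major axis -- or None if no tube is found.
    """
    mask = grey > T                      # binarization (section 5.1)
    ys, xs = np.nonzero(mask)            # bright-pixel coordinates
    area = xs.size
    if not (1 - tol) * S0 <= area <= (1 + tol) * S0:
        return None                      # area does not match the tube model
    cx, cy = xs.mean(), ys.mean()        # centroid (first-order moments)
    # Central second-order moments give the major-axis orientation.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return cx, cy, angle
```

On a synthetic image containing one bright horizontal bar, this returns the bar's center and an angle near zero; the centroid and axis are exactly the two quantities the robot needs to compare a detected light against its map entry.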