Foreign Literature Translation: Multi-modal Force/Vision Sensor Fusion in 6-DOF Pose Tracking

Appendix A

Multi-modal Force/Vision Sensor Fusion in 6-DOF Pose Tracking

Abstract—Sensor based robot control allows manipulation in dynamic and uncertain environments. Vision can be used to estimate the 6-DOF pose of an object by model-based pose estimation methods, but the estimate is not accurate in all degrees of freedom. Force offers a complementary sensor modality, allowing accurate measurements of local object shape when the tooltip is in contact with the object. As force and vision are fundamentally different sensor modalities, they cannot be fused directly. We present a method which fuses force and visual measurements using positional information of the end-effector. By transforming the positions of the tooltip and the camera into the same coordinate frame and modeling the uncertainties of the visual measurement, the sensors can be fused together in an Extended Kalman filter. Experimental results show greatly improved pose estimates when the sensor fusion is used.

I. INTRODUCTION

Robot control in unstructured environments is a challenging problem. Simple position based control is not adequate if the position of the workpiece is unknown during manipulation, as uncertainties present in the robot task prevent the robot from following a preprogrammed trajectory. Sensor based manipulation allows a robot to adapt to a dynamic and uncertain environment. With sensors the uncertainties of the environment can be modeled and the robot can take actions based on the sensory input.

In visual servoing the robot is controlled based on the sensory input from a visual sensor. A 3-D model of the workpiece can be created and the 6-DOF pose of the object can be determined by pose estimation algorithms. Visual servoing enables such tasks as tracking a moving object with an end-effector mounted camera. However, a single camera visual measurement is often not accurate in all degrees of freedom. Only the object translations perpendicular to the camera axis can be determined accurately. Object translation along the camera axis is difficult to measure, as even a large change in object distance induces only a small change in the image. The same applies for the rotations: only the rotation around the camera axis can be determined accurately, whereas rotations around the off axes yield only a diminishing change in the image.

Vision can be complemented by other sensor modalities in order to alleviate these problems. With a tactile or force sensor the local shape of the object can be probed. When the tooltip is in contact with an object and the position of the tooltip is known, information about the object can be extracted. However, a single tooltip measurement can only give one point on the object surface. Without other information this measurement would be useless, as we do not know at which location of the object the measurement is taken. Also, if the object is moving, the point of contact can move even if the position of the tooltip is stationary.

Combining a force sensor with vision is appealing as these two sensors can complement each other. Since force and vision measure fundamentally different sensor modalities, the information from these sensors cannot be fused directly. Vision can give the full pose of an object with respect to the camera, but a force sensor can measure forces only locally. When the force sensor is used only to detect whether the tooltip is in contact with the object, no other information can be gained. Combining this binary information with the visual measurement requires that both the position of the tooltip and the camera are known in the same coordinate frame. This can be achieved as the incremental encoders or joint angle sensors of the robot can determine the position of the robot end-effector in world coordinates. If the hand-eye calibration of the camera and the tool geometry are also known, both measurements can be transformed into the world coordinate frame.

A single tooltip measurement can only give constraints on the pose of the object but not the full pose. Therefore a single measurement is meaningless unless it can be fused with other sensor modalities or over time. Combining several sensor modalities or multiple measurements over time can reduce the uncertainty of the measurements, but in order to fuse the measurements the uncertainty of each individual measurement must be estimated. The sensor delay of the visual measurements must also be taken into account when fusing the measurements. In particular, an eye-in-hand configuration requires accurate synchronization of the positional information and the visual measurement; otherwise vision will give erroneous information while the end-effector is in motion.
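To make the frame bookkeeping described above concrete, the following Python sketch chains homogeneous transforms to express both the visual measurement and the tooltip contact point in the world frame. It is a minimal illustration, not the authors' implementation; the transform names (world_T_ee, ee_T_cam, ee_T_tip, cam_T_obj) are hypothetical placeholders for the forward kinematics, hand-eye calibration, tool geometry, and vision-based pose estimate.

import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed inputs (hypothetical names):
#   world_T_ee - end-effector pose in the world frame, from joint encoders / forward kinematics
#   ee_T_cam   - hand-eye calibration: camera frame expressed in the end-effector frame
#   ee_T_tip   - known tool geometry: tooltip frame in the end-effector frame
#   cam_T_obj  - object pose relative to the camera, from model-based pose estimation

def fuse_frames(world_T_ee, ee_T_cam, ee_T_tip, cam_T_obj):
    """Express the visual measurement and the tooltip measurement in the world frame."""
    world_T_obj = world_T_ee @ ee_T_cam @ cam_T_obj   # object pose seen by vision
    world_p_tip = (world_T_ee @ ee_T_tip)[:3, 3]      # contact point probed by the tool
    return world_T_obj, world_p_tip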

In this paper, we present how vision and force can be fused together, taking into account the uncertainty of each individual measurement. A model based pose estimation algorithm is used to extract the unknown pose of a moving target. The uncertainty of the pose depends on the uncertainty of the measured feature points in the image plane, and this uncertainty is projected into Cartesian space. A tooltip measurement is used to probe the local shape of the object by moving on the object surface and keeping a constant contact force. An Extended Kalman filter (EKF) is then used to fuse the measurements over time, taking into account the uncertainty of each individual measurement. To our knowledge this is the first work using contact information to compensate for the uncertainty of visual tracking while the tooltip is sliding on the object surface.
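The fusion step relies on a standard EKF measurement update, where a modality with larger measurement covariance pulls the state estimate less. The generic update below, written in Python, is shown only to make that idea concrete; the authors' actual state vector, measurement models h(), Jacobians H, and covariances (R_vision, R_tip below) are not specified here and are placeholders.

import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update: state x, covariance P, measurement z,
    measurement model h(x), its Jacobian H, and measurement covariance R."""
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y                     # corrected state
    P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x, P

# Both modalities update the same object-pose state, e.g. (placeholder quantities):
#   x, P = ekf_update(x, P, z_vision, h_vision, H_vision, R_vision)  # full pose, noisy along the optical axis
#   x, P = ekf_update(x, P, z_tip,    h_tip,    H_tip,    R_tip)     # contact constraint, locally accurate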

II. RELATED WORK

Reduction of measurement errors and fusion of several sensory modalities using a Kalman filter (KF) framework is widely used in robotics, for example in 6-DOF pose tracking [1]. However, in the visual servoing context Kalman filters are typically used only for filtering uncertain visual measurements and do not take into account the positional information of the end-effector. Wilson et al. [2] proposed to solve the pose estimation problem for position-based visual servoing using the KF framework, as this balances the effect of measurement uncertainties. Lippiello et al. propose a method for combining visual information from several cameras and the pose of the end-effector in a KF [3]. However, in their approaches the KF can be understood as a single iteration of an iterative Gauss-Newton procedure for pose estimation, and as such is not likely to give optimal results for the non-linear pose estimation problem.

Control and observation are dual problems. Combining force and vision is often done on the level of control [4], [5], [6]. As there is no common representation for the two sensor modalities, combining the information in one observation model is not straightforward. Previous work on combining haptic information with vision on the observation level primarily uses the two sensors separately. Vision is used to generate a 3-D model of an object and a force sensor to extract physical properties such as the stiffness of the object [7]. Pomares et al. combined a force sensor and an eye-in-hand camera using structured light to detect changes in the contact surface [8]. Vision is first used to detect zones likely to have discontinuities on the surface, and the force sensor is used for verifying the discontinuity.

Our work combines contact information with vision to extract more accurate information of the object pose. We assume a constant stiffness for the object, making it possible to use the tooltip measurements for determining the object position and orientation. The method is independent of friction and can be used even when the tooltip is sliding on the object surface while the object is in motion.

In [9] a robot was used to probe the pose of an object as well as contact parameters. However, the proposed approach used vision only to estimate the pose of the tool and not the pose of the object. Our approach uses vision to estimate the pose of the object, and the position of the tooltip as well as the pose of the camera are obtained from the joint sensors of a parallel manipulator. An algorithm presented in [10] combines vision with force and joint angle sensors: a camera fixed to the world frame as well as a wrist force sensor and the joint sensors of a 6-DOF industrial robot are fused in an EKF. While their approach takes advantage of the force sensor measurements directly in the pose estimate as well as the positional information from the joint sensors, they assume a frictionless point contact, making it impossible to use the sensor fusion while the tooltip is moving on a physical surface.

III. MODEL BASED POSE ESTIMATION

The pose of an object relative to a camera is obtainable with model based pose estimation methods. We used marker based tracking with a predefined 3-D marker model. The marker system was designed so that perspective projection does not cause inaccuracies in determining the marker location. In our approach the marker features are points and do not suffer from perspective projection. Each marker consists of three corners which can be recognized with corner extraction methods [11]. The marker system and the coordinate axes of the estimated relative pose are shown in Fig. 1.
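The paper only cites standard corner extraction [11] for locating the three corners of each marker; the exact detector is not specified. As a hedged illustration of one common choice, the snippet below uses OpenCV's Shi-Tomasi detector with sub-pixel refinement; the function name detect_marker_corners and the parameter values are assumptions for this sketch.

import cv2
import numpy as np

def detect_marker_corners(gray, max_corners=3):
    """Detect up to max_corners strong corners in a grayscale image and refine them to sub-pixel accuracy."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.05, minDistance=10)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    cv2.cornerSubPix(gray, corners, winSize=(5, 5), zeroZone=(-1, -1), criteria=criteria)
    return corners.reshape(-1, 2)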

With model based pose estimation the pose of the object relative to the camera CTO can be determined if the intrinsic camera parameters are known. Pose estimation methods require at least three 2-D–3-D feature pairs that are not on the same line. An initial guess for the pose was calculated using DeMenthon's model-based pose estimation method [12]. However, this approach does not converge to a local optimum of the pose and therefore a local gradient descent approach was used to fine tune the pose.
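For readers who want to experiment with this 2-D–3-D pose estimation step, the sketch below uses OpenCV's solvePnP as a stand-in solver. It is not DeMenthon's method nor the authors' gradient descent refinement, but it solves the same problem of recovering CTO from point correspondences and known intrinsics; note that the iterative solver used here needs at least four correspondences, and the function name estimate_pose is an assumption.

import cv2
import numpy as np

def estimate_pose(object_points, image_points, K, dist_coeffs=None):
    """Estimate the object pose relative to the camera (CTO) from 2-D/3-D correspondences.

    object_points: (N, 3) 3-D model points, N >= 4 for the iterative solver
    image_points:  (N, 2) corresponding image points
    K:             3x3 intrinsic camera matrix
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(object_points.astype(np.float64),
                                  image_points.astype(np.float64),
                                  K, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> rotation matrix
    cam_T_obj = np.eye(4)             # homogeneous transform, i.e. CTO
    cam_T_obj[:3, :3] = R
    cam_T_obj[:3, 3] = tvec.ravel()
    return cam_T_obj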

In our setup the camera is attached rigidly to the end-effector, but it is not in the center of the end-effector. The transformation from the camera to the object CTO is measured with vision. The translation and rotation of the camera with respect to the end-effector EETC must be
