Translated paper: Multi-modal 6-DOF Force/Vision Sensor Fusion for Pose Tracking

Appendix A
Multi-modal Force/Vision Sensor Fusion in 6-DOF Pose Tracking

Abstract—Sensor based robot control allows manipulation in dynamic and uncertain environments. Vision can be used to estimate the 6-DOF pose of an object by model-based pose estimation methods, but the estimate is not accurate in all degrees of freedom. Force offers a complementary sensor modality, allowing accurate measurements of local object shape when the tooltip is in contact with the object. As force and vision are fundamentally different sensor modalities, they cannot be fused directly. We present a method which fuses force and visual measurements using positional information of the end-effector. By transforming the positions of the tooltip and the camera to the same coordinate frame and modeling the uncertainties of the visual measurement, the sensors can be fused together in an Extended Kalman filter. Experimental results show greatly improved pose estimates when the sensor fusion is used.
I. INTRODUCTION
Robot control in unstructured environments is a challenging problem. Simple position based control is not adequate if the position of the workpiece is unknown during manipulation, as uncertainties present in the robot task prevent the robot from following a preprogrammed trajectory. Sensor based manipulation allows a robot to adapt to a dynamic and uncertain environment. With sensors the uncertainties of the environment can be modeled and the robot can take actions based on the sensory input. In visual servoing the robot is controlled based on the sensory input from a visual sensor. A 3-D model of the workpiece can be created and the 6-DOF pose of the object can be determined by pose estimation algorithms. Visual servoing enables such tasks as tracking a moving object with an end-effector mounted camera. However, a single-camera visual measurement is often not accurate in all degrees of freedom. Only the object translations perpendicular to the camera axis can be determined accurately. Object translation along the camera axis is difficult to measure, as even a large change in object distance induces only a small change in the image. The same applies for the rotations: only the rotation around the camera axis can be determined accurately, whereas rotations around the off axes yield only a diminishing change in the image.

Vision can be complemented by other sensor modalities in order to alleviate these problems. With a tactile or force sensor the local shape of the object can be probed. When the tooltip is in contact with an object and the position of the tooltip is known, information about the object can be extracted. However, a single tooltip measurement can only give one point on the object surface. Without other information this measurement would be useless, as we do not know at which location on the object the measurement is taken. Also, if the object is moving, the point of contact can move even if the position of the tooltip is stationary.

Combining a force sensor with vision would seem appealing, as these two sensors can complement each other. Since force and vision measure fundamentally different sensor modalities, the information from these sensors cannot be fused directly. Vision can give the full pose of an object with respect to the camera, but a force sensor can measure forces only locally. When the force sensor is used only to detect whether the tooltip is in contact with the object, no other information can be gained. Combining this binary information with the visual measurement requires that both the position of the tooltip and the camera are known in the same coordinate frame. This can be achieved, as the incremental encoders or joint angle sensors of the robot can determine the position of the robot end-effector in world coordinates. If also the hand-eye calibration of the camera and the tool geometry are known, both of the measurements can be transformed into the world coordinate frame.

A single tooltip measurement can only give constraints on the pose of the object, but not the full pose. Therefore a single measurement is meaningless unless it can be fused with other sensor modalities or over time. Combining several sensor modalities or multiple measurements over time can reduce the uncertainty of the measurements, but in order to fuse the measurements the uncertainty of each individual measurement must be estimated. Also, the sensor delay of the visual measurements must be taken into account when fusing the measurements. Especially, the eye-in-hand configuration requires accurate synchronization of the positional information and the visual measurement. Otherwise vision will give erroneous information while the end-effector is in motion.
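The frame transformations described above can be sketched with homogeneous transforms. The following is a minimal illustration, not the paper's implementation; all numeric values (end-effector pose, hand-eye offset, tool length) are hypothetical placeholders.

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical example values (not from the paper):
# end-effector pose in the world frame, e.g. from joint encoders
T_w_ee = hom(np.eye(3), [0.5, 0.0, 0.8])
# hand-eye calibration: camera pose in the end-effector frame
T_ee_c = hom(np.eye(3), [0.0, 0.05, 0.0])
# visual measurement: object pose in the camera frame
T_c_o = hom(np.eye(3), [0.0, 0.0, 0.3])
# tool geometry: tooltip position in the end-effector frame
p_ee_tip = np.array([0.0, 0.0, 0.15, 1.0])

# Both measurements expressed in the same (world) coordinate frame:
T_w_o = T_w_ee @ T_ee_c @ T_c_o   # object pose from vision
p_w_tip = T_w_ee @ p_ee_tip       # tooltip position from kinematics
```

Once both the visual pose estimate and the tooltip position live in the world frame, they refer to the same geometry and can be compared and fused.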
In this paper, we present how vision and force can be fused together, taking into account the uncertainty of each individual measurement. A model based pose estimation algorithm is used to extract the unknown pose of a moving target. The uncertainty of the pose depends on the uncertainty of the measured feature points in the image plane, and this uncertainty is projected into Cartesian space. A tooltip measurement is used to probe the local shape of the object by moving on the object surface and keeping a constant contact force. An Extended Kalman filter (EKF) is then used to fuse the measurements over time, taking into account the uncertainty of each individual measurement. To our knowledge this is the first work using contact information to compensate for the uncertainty of visual tracking while the tooltip is sliding on the object surface.
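The core fusion idea, an accurate but local contact measurement tightening an uncertain visual estimate, can be illustrated with a plain linear Kalman update on a toy 3-D position state. This is a deliberately simplified sketch of the weighting mechanism, not the paper's full 6-DOF EKF; the covariance values are invented for illustration.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update: weight the innovation
    by the relative uncertainty of state and measurement."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy state: object position [x, y, z] in the world frame,
# with the camera looking along z.
x = np.zeros(3)
P = np.eye(3)

# Visual measurement: accurate perpendicular to the optical axis (x, y),
# uncertain along it (z) -- encoded in the measurement covariance.
z_vis = np.array([0.10, 0.02, 0.55])
R_vis = np.diag([1e-4, 1e-4, 1e-1])
x, P = kalman_update(x, P, z_vis, np.eye(3), R_vis)

# Contact measurement: the tooltip touching the surface constrains
# the depth direction accurately.
z_tip = np.array([0.50])
H_tip = np.array([[0.0, 0.0, 1.0]])
R_tip = np.diag([1e-5])
x, P = kalman_update(x, P, z_tip, H_tip, R_tip)
```

After the second update the depth estimate is dominated by the precise contact measurement while the lateral components still come from vision, which is exactly the complementarity the paper exploits.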
II. RELATED WORK
Reduction of measurement errors and fusion of several sensory modalities using a Kalman filter (KF) framework is widely used in robotics, for example, in 6-DOF pose tracking [1]. However, in the visual servoing context Kalman filters are typically used only for filtering uncertain visual measurements and do not take into account the positional information of the end-effector. Wilson et al. [2] proposed to solve the pose estimation problem for position-based visual servoing using the KF framework, as this balances the effect of measurement uncertainties. Lippiello et al. propose a method for combining visual information from several cameras and the pose of the end-effector together in a KF [3]. However, in their approaches the KF can be understood as a single iteration of an iterative Gauss-Newton procedure for pose estimation, and as such is not likely to give optimal results for the non-linear pose estimation problem.
Control and observation are dual problems. Combining force and vision is often done on the level of control [4], [5], [6]. As there is no common representation for the two sensor modalities, combining the information in one observation model is not straightforward. Previous work on combining haptic information with vision on the observation level primarily uses the two sensors separately. Vision is used to generate a 3-D model of an object, and a force sensor to extract physical properties such as the stiffness of the object [7]. Pomares et al. combined a force sensor and an eye-in-hand camera using structured light to detect changes in the contact surface [8]. Vision is first used to detect zones likely to have discontinuities on the surface, and the force sensor is used for verifying the discontinuity.
Our work combines contact information with vision to extract more accurate information about the object pose. We assume a constant stiffness for the object, making it possible to use the tooltip measurements for determining the object position and orientation. The method is independent of the friction and can be used even when the tooltip is sliding on the object surface while the object is in motion.
In [9] a robot was used to probe the pose of an object as well as contact parameters. However, the proposed approach used vision only to estimate the pose of the tool, not the pose of the object. Our approach uses vision to estimate the pose of the object, and the position of the tooltip as well as the pose of the camera are obtained from the joint sensors of a parallel manipulator. An algorithm presented in [10] combines vision with force and joint angle sensors. A camera fixed to the world frame, as well as a wrist force sensor and the joint sensors of a 6-DOF industrial robot, are fused in an EKF. While their approach takes advantage of the force sensor measurements directly in the pose estimate, as well as the positional information from the joint sensors, they assume a frictionless point contact, making it impossible to use the sensor fusion while the tooltip is moving on a physical surface.
III. MODEL BASED POSE ESTIMATION
The pose of an object relative to a camera is obtainable with model based pose estimation methods. We used marker based tracking with a predefined 3-D marker model. The marker system was designed so that perspective projection does not cause inaccuracies in determining the marker location: in our approach the marker features are points and do not suffer from perspective projection. Each marker consists of three corners which can be recognized with corner extraction methods [11]. The marker system and the coordinate axes of the estimated relative pose are shown in Fig. 1.
With model based pose estimation, the pose of the object relative to the camera, CTO, can be determined if the intrinsic camera parameters are known. Pose estimation methods require at least three 2-D–3-D feature pairs that are not on the same line. An initial guess for the pose was calculated using DeMenthon's model-based pose estimation method [12]. However, this approach does not converge to a local optimum of the pose, and therefore a local gradient descent approach was used to fine-tune the pose.
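The initialize-then-refine scheme above can be sketched as a local minimization of the reprojection error. The sketch below substitutes a Gauss-Newton loop with a numerical Jacobian for the paper's gradient descent, and the camera parameters, marker geometry, and initial guess are all hypothetical; it only illustrates how a rough pose from a method like DeMenthon's is fine-tuned against the 2-D–3-D feature pairs.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def project(pose, pts3d, f=800.0, c=320.0):
    """Pinhole projection of 3-D model points under pose = (w, t)."""
    R, t = rodrigues(pose[:3]), pose[3:]
    Pc = pts3d @ R.T + t
    return f * Pc[:, :2] / Pc[:, 2:3] + c

def refine_pose(pose0, pts3d, pts2d, iters=50):
    """Local refinement: Gauss-Newton steps on the reprojection error,
    with a numerically estimated Jacobian."""
    pose = pose0.copy()
    for _ in range(iters):
        r = (project(pose, pts3d) - pts2d).ravel()
        J = np.empty((r.size, 6))
        for i in range(6):
            d = np.zeros(6)
            d[i] = 1e-6
            J[:, i] = ((project(pose + d, pts3d) - pts2d).ravel() - r) / 1e-6
        pose -= np.linalg.lstsq(J, r, rcond=None)[0]
    return pose

# Hypothetical marker corners in the model frame (not the paper's geometry):
model = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0.1, 0.1, 0.05]])
true_pose = np.array([0.1, -0.05, 0.02, 0.01, 0.03, 0.5])
image_pts = project(true_pose, model)

# Start from a rough initial guess (standing in for DeMenthon's output)
# and fine-tune it locally:
est = refine_pose(np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.45]), model, image_pts)
```

The refinement only needs a reasonable starting point; this is why the coarse closed-form initialization and the local iterative step are used together.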
In our setup the camera is attached rigidly to the end-effector, but it is not in the center of the end-effector. The transformation from the camera to the object, CTO, is measured with vision. The translation and rotation of the camera with respect to the end-effector, EETC, must be