
Test highway localization feasibility of the current Autoware pipeline #4696

Open · 8 tasks done
liuXinGangChina opened this issue May 7, 2024 · 34 comments
Labels: component:localization Vehicle's position determination in its environment.
liuXinGangChina commented May 7, 2024

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

Several months ago, we tested the NDT-based localization pipeline under 60 km/h; you can find the result here.
We also noticed that Autoware has introduced a new vision-based localization method called "YabLoc", together with test reports here and here under 30 km/h.
On the road to L4 autonomous driving, Autoware needs to cover the highway scenario. To make that happen, we plan to focus on highway localization first.

Purpose

Test Autoware's current localization pipeline in a highway scenario
Leave comments for localization enhancement if necessary

Possible approaches

  • 1. Scan a map of a highway test field
  • 2. Integrate our sensors into Autoware
  • 3. Perform road tests for 4 velocity zones (70 - 100)
  • 4. Perform road tests for 4 velocity zones (70 - 100), with ramps for driving onto and off the highway
  • 5. Compare the results of different localization pipelines and leave our comments on localization enhancement for the highway scenario

Definition of done

  1. Integration finished
  2. Test report
  3. Test report with ramp scenario
  4. Comments for enhancement
@KYabuuchi KYabuuchi added the component:localization Vehicle's position determination in its environment. label May 7, 2024
@liuXinGangChina

Morning, Kento-san @KYabuuchi. Currently we are working on creating the map for the localization test.
Given the different sensor configurations between TIER IV and AutoCore, I wonder whether a 2MP, 120° FoV camera suits your algorithm, YabLoc?

@KYabuuchi

@liuXinGangChina Good morning. 2MP and 120° FoV are sufficient for operating YabLoc. 👍 (Increasing the resolution beyond this won't bring any benefits. )
Please note that YabLoc relies not only on the camera but also on GNSS, IMU, and vehicle wheel odometry.

@KYabuuchi

By the way, the link in the initial post might be incorrect. Please check it.

> you can find the result here

@liuXinGangChina

> By the way, the link in the initial post might be incorrect. Please check it.

Already updated the link, thank you for the reminder.

@liuXinGangChina

> @liuXinGangChina Good morning. 2MP and 120° FoV are sufficient for operating YabLoc. 👍 […]

That would be great. Since our camera meets YabLoc's requirements and we have all the other sensors you mentioned, we will continue this task.

Thank you.

@liuXinGangChina

Morning, Yabuuchi-san @KYabuuchi. While preparing the test, we found a note in the code's limitations section that says, "If the road boundary or road surface markings are not included in the Lanelet2, the estimation is likely to fail."

Currently, our test lanelet2 map only contains the lane lines of the road. Can you provide some additional material or an example showing what form "road boundary or road surface markings" should take in a lanelet2 file?

@KYabuuchi

KYabuuchi commented May 28, 2024

Hi @liuXinGangChina, "road boundary or road surface markings" includes lane lines, stop lines, crosswalks, bus stops, etc.

The figure below is a lanelet2 provided in the AWSIM tutorial, which is ideal as it contains crosswalks, stop lines, and bus stops.

Since highways usually only have lane lines, I think your map is sufficiently compatible with YabLoc.

@liuXinGangChina

> Hi @liuXinGangChina, "road boundary or road surface markings" includes lane lines, stop lines, crosswalks, bus stops, etc. […]

Got it, that's true. There are only lane lines on the highway.

@liuXinGangChina

liuXinGangChina commented Jun 3, 2024

Hi, Yabuuchi-san @KYabuuchi. During our test on the highway, some issues confuse me. In the image below, we can see that line extraction and graph segmentation went well on our dataset, but we got nothing in "pf/match_image" and no lanes projected onto /pf/lanelet2_overlay_image. Do you have any clue?
[image]

@KYabuuchi

KYabuuchi commented Jun 4, 2024

@liuXinGangChina Please check that /localization/pose_estimator/yabloc/image_processing/projected_image and /localization/pose_estimator/yabloc/pf/cost_map_image are published correctly.

[image]
yabloc#image-topics-for-debug

projected_image is an image of the line segments and segmentation results projected onto the ground.
cost_map_image is an image of the cost map generated from lanelet2.

If projected_image is not being published, there may be no tf from base_link to camera.
If cost_map_image is not being published, some of the lanelet2 elements might not be loaded properly.
Edit this as necessary:
https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/localization/yabloc/ll2_decomposer.param.yaml#L3

@liuXinGangChina

liuXinGangChina commented Jun 4, 2024

> @liuXinGangChina Please check /localization/pose_estimator/yabloc/image_processing/projected_image & /localization/pose_estimator/yabloc/pf/cost_map_image are published correctly. […]

Thanks for your reply @KYabuuchi. I just checked my lanelet2 file; there are only lane_thin elements with sub_type dash and solid in the map. So should I edit https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/localization/yabloc/ll2_decomposer.param.yaml#L3 to leave only lane_thin in road_marking_labels?

@KYabuuchi

@liuXinGangChina If road_marking_labels includes lane_thin, it is fine. There is no need to remove the other elements.

Was /localization/pose_estimator/yabloc/pf/cost_map_image not being published?

@liuXinGangChina

[image]

@KYabuuchi, hi Yabuuchi-san.
With your kind help, I can now run the whole YabLoc pipeline.
But there is still a problem: when I manually or automatically initialize the first pose, the particle filter node works well and gives a good-looking distribution, but the EKF output starts to move unpredictably (you can see the blue line, which represents the EKF history path, in the image). In that case, I cannot get a correct "LL2 overlay image" result.

@KYabuuchi

KYabuuchi commented Jun 14, 2024

@liuXinGangChina LL2_overlay depends on /localization/pose_twist_fusion_filter/pose. Since that topic is published by ekf_localizer, the real issue is likely with ekf_localizer or the final output of YabLoc.

Please visualize /localization/pose_estimator/yabloc/pf/pose as shown in the image below and verify if it is correct. This is the centroid position of the yabloc particle filter.

Also, please check /localization/pose_estimator/pose_with_covariance . It is the output of YabLoc & the input of ekf_localizer (Ideally, it would be great to visualize this topic as pose history, but it is not supported. )

@liuXinGangChina

[image]

@KYabuuchi Thank you for your quick reply. I visualized the path of yabloc/pf and it looks pretty good.
I list the topics and their publish rates below; maybe that can help find out why the EKF malfunctions.

  1. /compressed_image and camera_info: 30 Hz
  2. gnss/pose: 20 Hz
  3. twist_estimator/twist_with_covariance: 100 Hz
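As a side note, rates like the ones above can also be sanity-checked offline from recorded message timestamps; the sketch below uses the same arithmetic `ros2 topic hz` reports as the mean rate (the stamp list is hypothetical):

```python
def topic_hz(stamps):
    """Average publish rate (Hz) from a list of message timestamps in seconds."""
    if len(stamps) < 2:
        return 0.0
    # (number of intervals) / (elapsed time)
    return (len(stamps) - 1) / (stamps[-1] - stamps[0])

# e.g. four stamps 50 ms apart -> roughly 20 Hz
print(topic_hz([0.0, 0.05, 0.10, 0.15]))
```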

@KYabuuchi

@liuXinGangChina Good evening.
If the output of YabLoc is correct, then there might be an issue with twist_estimator/twist_with_covariance.
However, it is strange, because YabLoc's particle filter also uses that twist to update the particles. 🤔

Could you record the topics in a ROS bag and share it?
Recording all topics might make the ROS bag too large, so it would be helpful if you could record and share the following topics for investigating the cause.

/initialpose3d
/localization/pose_estimator/pose_with_covariance
/localization/twist_estimator/twist_with_covariance
/localization/pose_twist_fusion_filter/kinematic_state

@KYabuuchi

Or, less likely, maybe the covariance of twist_with_covariance is incorrect...

@liuXinGangChina

liuXinGangChina commented Jun 17, 2024

> Or, less likely, maybe the covariance of twist_with_covariance is incorrect...

Highly likely; for now, the covariance is a matrix made entirely of zeros.
By the way, regarding the covariance matrix of twist_with_covariance, what values should I assign for linear x and angular z? @KYabuuchi

@KYabuuchi

@liuXinGangChina You need to ensure that the diagonal elements of the matrix are always set to non-zero values.

example:

```
[0,0] = 0.04
[1,1] = 100000.0  # large value because we cannot observe this
[2,2] = 100000.0  # large value because we cannot observe this
[3,3] = 1.1492099999999998e-05
[4,4] = 1.1492099999999998e-05
[5,5] = 1.1492099999999998e-05
```

Other elements can be 0.
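For reference, geometry_msgs/TwistWithCovariance stores the covariance as a flattened 6×6 row-major array in the order [vx, vy, vz, wx, wy, wz]. A minimal sketch packing the diagonal values from the example above (plain Python, no ROS dependency):

```python
def make_twist_covariance(vx_var=0.04, unobserved=1.0e5,
                          gyro_var=1.1492099999999998e-05):
    """Build the flattened 6x6 row-major covariance for TwistWithCovariance.
    Diagonal element [i,i] of the 6x6 matrix lands at flat index i*6 + i."""
    cov = [0.0] * 36
    cov[0] = vx_var       # [0,0] linear x, observable from wheel odometry
    cov[7] = unobserved   # [1,1] linear y, not observable -> large value
    cov[14] = unobserved  # [2,2] linear z, not observable -> large value
    cov[21] = gyro_var    # [3,3] angular x
    cov[28] = gyro_var    # [4,4] angular y
    cov[35] = gyro_var    # [5,5] angular z
    return cov
```

The keyword defaults mirror the example values above; tune them to your own sensor noise.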

@liuXinGangChina

Thanks for your quick reply, Yabuuchi-san @KYabuuchi.
After assigning values to the matrix, the EKF now works well.
I noticed that when using gnss_enabled for the pose initializer, the heading it gives may be wrong (because when the ego is stopped, it is hard to estimate heading using the GNSS antenna). I found that YabLoc introduces a camera pose initializer; will that help the pose initializer give a correct heading when GNSS only gives a position?

@KYabuuchi

@liuXinGangChina When Autoware is started with the option pose_source:=yabloc, the initial position estimation that combines GNSS and camera will automatically be activated. It uses the GNSS observation position as the initial position and determines the orientation that best matches the image with lanelet2.

If yabloc_enabled and gnss_enabled are both true in the output of the following command, the initialization mechanism is active:
`ros2 param dump /localization/util/pose_initializer`
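If scrolling through the full dump is inconvenient, the flags can be picked out of the dumped YAML text with a small script (a sketch; the inline sample stands in for output saved via something like `ros2 param dump ... > dump.yaml`):

```python
def enabled_flags(dump_text):
    """Collect the *_enabled flags from a `ros2 param dump` YAML text.
    Naive line scan; assumes plain 'key: true/false' lines with no
    trailing comments, which matches the default dump format."""
    flags = {}
    for line in dump_text.splitlines():
        line = line.strip()
        if line.endswith("_enabled: true") or line.endswith("_enabled: false"):
            key, _, value = line.partition(": ")
            flags[key] = (value == "true")
    return flags

sample = """
/localization/util/pose_initializer:
  ros__parameters:
    ekf_enabled: true
    gnss_enabled: true
    ndt_enabled: false
    yabloc_enabled: true
"""
print(enabled_flags(sample))
```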

@liuXinGangChina

Hi, Yabuuchi-san @KYabuuchi. I used the command you provided. I see gnss_enabled: true, ekf_enabled: true, and ndt_enabled: false, but I did not find yabloc_enabled.

@KYabuuchi

@liuXinGangChina It's hard to believe. 🤔 Did you possibly miss it since yabloc_enabled appears at the very bottom?
I'll also share the results of my command.
If it really doesn't exist, please provide the commit hash for autoware.universe.

The result of `ros2 param dump /localization/util/pose_initializer`:
/localization/util/pose_initializer:
  ros__parameters:
    ekf_enabled: true
    gnss_enabled: true
    gnss_particle_covariance:
    - 1.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 1.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.01
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.01
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.01
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 10.0
    gnss_pose_timeout: 3.0
    map_height_fitter:
      map_loader_name: /map/pointcloud_map_loader
      target: pointcloud_map
    ndt_enabled: false
    output_pose_covariance:
    - 1.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 1.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.01
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.01
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.01
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.0
    - 0.2
    qos_overrides:
      /clock:
        subscription:
          depth: 1
          durability: volatile
          history: keep_last
          reliability: best_effort
      /parameter_events:
        publisher:
          depth: 1000
          durability: volatile
          history: keep_last
          reliability: reliable
    stop_check_enabled: false
    use_sim_time: true
    user_defined_initial_pose:
      enable: false
    yabloc_enabled: true # <======= HERE

@liuXinGangChina

Thanks for your help, Yabuuchi-san @KYabuuchi. Now I can run YabLoc well, but I found that lane line detection sometimes fails in far-away areas, around 10 meters out (green circle area), especially for dashed lanes; in that case, the lane line match result may be inaccurate. What can I do about this issue? Are there any parameters to tune?
[image]

@KYabuuchi

@liuXinGangChina In my experience, it is not necessary to extract all road markings within the visible range. Due to extrinsic calibration issues and the impact of slopes, distant markings do not contribute much to accuracy.
Additionally, false positives in road marking detection negatively impact accuracy, but false negatives do not.

It would be sufficient if this blue circle range could be detected, and I think that is the limit of what YabLoc can detect by adjusting parameters.

Anyway, the parameters for line segment detection are rarely adjusted, so they are hard-coded here.
See this document for the definition of each parameter.

If you want to adjust the parameters, it is convenient to start the line_segment_detector and camera with the following commands.

ros2 run yabloc_image_processing yabloc_line_segment_detector_node
ros2 run v4l2_camera v4l2_camera_node --ros-args -r /image_raw:=/line_detector/input/image_raw

@liuXinGangChina

liuXinGangChina commented Jun 18, 2024

Good evening, Yabuuchi-san @KYabuuchi. During our test, I found one small issue:
1. The initial heading estimate (using auto init with camera and GNSS) is sometimes inaccurate, which can lead to a bimodal distribution (the particles split into two separate clusters).
[image]
[image]
After a while, the particles automatically converge into one tight distribution and everything seems fine.
[image]

@KYabuuchi

@liuXinGangChina Good morning, and thank you for reporting the issue.
Honestly, the current initial position estimation in YabLoc is a very basic implementation and not the best solution. I would like to improve it, but I don't have enough time to address it.

If you want to resolve it with the existing implementation, increasing angle_resolution might help.
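To illustrate why this might help (a hypothetical sketch of the idea, not YabLoc's actual code): an initializer of this kind scores a discrete grid of heading hypotheses around the GNSS position, and a finer grid reduces the chance that two well-separated headings score almost equally and split the particles. The `score` callable below is a placeholder standing in for the real image-vs-lanelet2 matcher:

```python
import math

def heading_candidates(angle_resolution):
    """Discrete heading hypotheses over [0, 2*pi), one per grid cell."""
    return [2.0 * math.pi * k / angle_resolution for k in range(angle_resolution)]

def best_heading(score, angle_resolution):
    """Pick the hypothesis with the highest match score.
    A larger angle_resolution -> finer grid -> less ambiguity."""
    return max(heading_candidates(angle_resolution), key=score)
```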

@liuXinGangChina

Thanks for your reply, Yabuuchi-san @KYabuuchi. The idea of the camera pose initializer is brilliant. I'm using a cheap GPS device, so sometimes the initial pose and orientation may be wrong, which leads to a bad estimation result.

@liuXinGangChina

liuXinGangChina commented Jun 24, 2024

Hi everyone, we have proven the feasibility of Autoware's camera-based localization pipeline in the highway scenario.

Test environment

| Item | Description | Additions |
| --- | --- | --- |
| Test infrastructure | closed test field including a highway (multi-lane, ramp) | [image] |
| Test conditions | speed range 100 km/h ~ 120 km/h | |
| Ego sensors for localization | 1. camera 2MP 120° · 2. odometer · 3. Trimble BD992 without RTK · 4. IMU | |
| Ground truth | Asensing 571-INS with RTK | [image] |

Test Result

| Test case | Result | Additions (red line: ground truth, blue line: EKF; deviation plots: blue = square root, green = longitudinal, red = lateral) |
| --- | --- | --- |
| Highway cruise · automatic initial pose estimation · lane changing | Failed: the estimated pose is in the adjacent lane | [image] yabloc-result-1 |
| Highway cruise · manual pose initialization · lane changing | Successful: after the ego pose converges with ground truth, it stays close to ground truth even during lane changes | [image] yabloc-result-2 |
| Highway cruise · automatic initial pose estimation · lane changing · ramp | Successful: after convergence, the pose stays close to ground truth even during lane changes; however, it may sometimes fail due to wrong heading estimation | [image] yabloc-result-ramp-1 |
| Highway cruise · manual pose initialization · lane changing · ramp | Successful: after convergence, the pose stays close to ground truth even during lane changes | [image] yabloc-result-ramp-4 |
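For readers reproducing these plots: the longitudinal/lateral deviation curves can be obtained by projecting the position error onto the ground-truth heading, with the "square root" curve presumably the Euclidean norm of the error. A minimal sketch (pure Python; pose values are hypothetical):

```python
import math

def deviation(gt_x, gt_y, gt_yaw, est_x, est_y):
    """Split the position error (estimate minus ground truth) into
    longitudinal/lateral components in the ground-truth vehicle frame.
    gt_yaw is the ground-truth heading in radians."""
    dx, dy = est_x - gt_x, est_y - gt_y
    lon = dx * math.cos(gt_yaw) + dy * math.sin(gt_yaw)    # along heading
    lat = -dx * math.sin(gt_yaw) + dy * math.cos(gt_yaw)   # across heading
    total = math.hypot(dx, dy)                             # Euclidean norm
    return lon, lat, total
```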

Summary

Autoware's camera-based localization pipeline (YabLoc) can achieve lane-level localization accuracy in highway scenarios.
There are still some things that can be improved:

  1. Initial heading estimation may go wrong

@KYabuuchi

@liuXinGangChina Thank you for sharing these interesting experimental results. 👏
In each graph, there is a consistent error for the first 20 seconds. Is the vehicle stopped during this period?

Additionally, there is a spike in the error afterward. Could this be due to the vehicle accelerating rapidly?

@liuXinGangChina

That's right, Yabuuchi-san @KYabuuchi.

  1. In order to test the whole localization pipeline (including automatic pose init), every test bag begins in a steady state.
  2. In order to mimic a mixed urban/highway scenario, the ego starts from 0 km/h, speeds up to 120 km/h, and maintains that speed (urban driving switching into highway driving).

@KYabuuchi

That makes sense. Thanks for explaining!

@liuXinGangChina

Hi, Yabuuchi-san @KYabuuchi. I left a message for you on Discourse related to this test; could you give your feedback when you see it?

@liuXinGangChina

Created a separate page to summarize the test results here.
