jackkuo committed
Commit e0a3ef8 · verified · 1 Parent(s): 1dbfd07

Add files using upload-large-folder tool

Note: This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.

Files changed (50):
  1. -NFQT4oBgHgl3EQfKDVk/content/tmp_files/2301.13258v1.pdf.txt +4177 -0
  2. -NFQT4oBgHgl3EQfKDVk/content/tmp_files/load_file.txt +0 -0
  3. -dE4T4oBgHgl3EQfDwsm/vector_store/index.faiss +3 -0
  4. -tFST4oBgHgl3EQfcjgx/content/tmp_files/2301.13803v1.pdf.txt +1855 -0
  5. -tFST4oBgHgl3EQfcjgx/content/tmp_files/load_file.txt +0 -0
  6. .gitattributes +59 -0
  7. 0tE1T4oBgHgl3EQflAQk/content/tmp_files/2301.03279v1.pdf.txt +1123 -0
  8. 0tE1T4oBgHgl3EQflAQk/content/tmp_files/load_file.txt +0 -0
  9. 19AzT4oBgHgl3EQfDfqo/content/tmp_files/2301.00978v1.pdf.txt +1044 -0
  10. 19AzT4oBgHgl3EQfDfqo/content/tmp_files/load_file.txt +382 -0
  11. 1NAyT4oBgHgl3EQfofhG/vector_store/index.faiss +3 -0
  12. 1tE0T4oBgHgl3EQf_wKi/content/tmp_files/2301.02831v1.pdf.txt +756 -0
  13. 1tE0T4oBgHgl3EQf_wKi/content/tmp_files/load_file.txt +380 -0
  14. 1tFLT4oBgHgl3EQfqC-n/content/tmp_files/2301.12138v1.pdf.txt +0 -0
  15. 1tFLT4oBgHgl3EQfqC-n/content/tmp_files/load_file.txt +0 -0
  16. 2dAzT4oBgHgl3EQfDvol/vector_store/index.faiss +3 -0
  17. 2tAzT4oBgHgl3EQfRvvX/content/2301.01222v1.pdf +3 -0
  18. 2tAzT4oBgHgl3EQfRvvX/vector_store/index.faiss +3 -0
  19. 2tAzT4oBgHgl3EQfRvvX/vector_store/index.pkl +3 -0
  20. 49FIT4oBgHgl3EQf7St_/vector_store/index.faiss +3 -0
  21. 49FIT4oBgHgl3EQf7St_/vector_store/index.pkl +3 -0
  22. 4dFQT4oBgHgl3EQf4Ta7/vector_store/index.pkl +3 -0
  23. 5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf +3 -0
  24. 5NE3T4oBgHgl3EQfQgli/vector_store/index.faiss +3 -0
  25. 5NE3T4oBgHgl3EQfQgli/vector_store/index.pkl +3 -0
  26. 5tAzT4oBgHgl3EQff_wz/content/tmp_files/2301.01460v1.pdf.txt +526 -0
  27. 5tAzT4oBgHgl3EQff_wz/content/tmp_files/load_file.txt +430 -0
  28. 5tE2T4oBgHgl3EQfkQe0/content/2301.03977v1.pdf +3 -0
  29. 5tE2T4oBgHgl3EQfkQe0/vector_store/index.faiss +3 -0
  30. 7dAyT4oBgHgl3EQfQvYd/content/tmp_files/2301.00050v1.pdf.txt +1826 -0
  31. 7dAyT4oBgHgl3EQfQvYd/content/tmp_files/load_file.txt +0 -0
  32. 7dE2T4oBgHgl3EQfPQbU/vector_store/index.faiss +3 -0
  33. 89FLT4oBgHgl3EQfBi6R/content/tmp_files/2301.11971v1.pdf.txt +0 -0
  34. 89FLT4oBgHgl3EQfBi6R/content/tmp_files/load_file.txt +0 -0
  35. 8NFLT4oBgHgl3EQfsy_c/content/tmp_files/2301.12149v1.pdf.txt +1800 -0
  36. A9E1T4oBgHgl3EQf9QZb/vector_store/index.pkl +3 -0
  37. B9E1T4oBgHgl3EQfpgVg/content/2301.03332v1.pdf +3 -0
  38. B9E1T4oBgHgl3EQfpgVg/vector_store/index.faiss +3 -0
  39. B9E1T4oBgHgl3EQfpgVg/vector_store/index.pkl +3 -0
  40. C9AyT4oBgHgl3EQfSPck/content/tmp_files/2301.00080v1.pdf.txt +479 -0
  41. C9AyT4oBgHgl3EQfSPck/content/tmp_files/load_file.txt +316 -0
  42. C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf +3 -0
  43. C9E0T4oBgHgl3EQfyQLe/vector_store/index.pkl +3 -0
  44. CNFQT4oBgHgl3EQf-jfx/content/tmp_files/2301.13455v1.pdf.txt +541 -0
  45. CNFQT4oBgHgl3EQf-jfx/content/tmp_files/load_file.txt +361 -0
  46. D9E0T4oBgHgl3EQfywIT/content/2301.02662v1.pdf +3 -0
  47. D9E0T4oBgHgl3EQfywIT/vector_store/index.faiss +3 -0
  48. D9E0T4oBgHgl3EQfywIT/vector_store/index.pkl +3 -0
  49. DNFQT4oBgHgl3EQf_zdP/content/2301.13459v1.pdf +3 -0
  50. DNFQT4oBgHgl3EQf_zdP/vector_store/index.pkl +3 -0
-NFQT4oBgHgl3EQfKDVk/content/tmp_files/2301.13258v1.pdf.txt ADDED
@@ -0,0 +1,4177 @@
+ Draft version February 1, 2023
+ Typeset using LaTeX twocolumn style in AASTeX631
+
+ A Pilot Study of Nulling in 22 Pulsars Using Mixture Modeling
+
+ Akash Anumarlapudi,1 Joseph K. Swiggum,1,2 David L. Kaplan,1 and Travis D. J. Fichtenbauer1
+
+ 1 Center for Gravitation, Cosmology, and Astrophysics, Department of Physics, University of Wisconsin-Milwaukee, PO Box 413, Milwaukee, WI, 53201, USA
+ 2 Dept. of Physics, 730 High St., Lafayette College, Easton, PA 18042, USA
+
+ ABSTRACT
+
+ The phenomenon of pulsar nulling, observed as the temporary inactivity of a pulsar, remains poorly understood both observationally and theoretically. Most observational studies that quantify nulling employ a variant of Ritchings (1976)'s algorithm, which can suffer significant biases for pulsars where the emission is weak. Using a more robust mixture-model method, we study pulsar nulling in a sample of 22 recently discovered pulsars, for which we publish the nulling fractions for the first time. These data clearly demonstrate biases of the former approach and show how an otherwise non-nulling pulsar can be classified as having significant nulls. We show that population-wide studies that find a positive correlation of nulling with pulsar period/characteristic age can similarly be biased because of the bias in estimating the nulling fraction. We use our probabilistic approach to find evidence for periodicity in the nulls in a subset of three pulsars in our sample. In addition, we also provide improved timing parameters for 17 of the 22 pulsars that had no prior follow-up.
+
+ Keywords: Pulsar Nulling — Neutron Stars — Radio Astronomy
+ 1. INTRODUCTION
25
+ Pulsar nulling, initially observed by Backer (1970a),
26
+ is the absence of observed emission from a pulsar for
27
+ one or more pulse periods.
28
+ Observationally, the phe-
29
+ nomenon of pulsar nulling remains poorly understood.
30
+ It is clear that nulling is a broadband phenomenon, ob-
31
+ served from 102 MHz (Davies et al. 1984) to 8.35 GHz
32
+ (Honnappa et al. 2012).
33
+ However, it is not firmly
34
+ established whether nulling is simultaneous over this
35
+ frequency range using a large sample of nulling pul-
36
+ sars.
37
+ Prior studies found contradictory conclusions.
38
+ For example, observing over two frequency ranges, 50-
39
+ 140 MHz and 275-430 MHz, Taylor et al. (1975) found
40
+ that nulls are simultaneous in two different pulsars (PSR
41
+ B0031−07, PSR B0809+74), while Davies et al. (1984)
42
+ found the evidence for excessive nulls in single pulses at
43
+ 102 MHz compared to 406 MHz in PSR B0809+74. A
44
+ more recent study by Gajjar et al. (2014a) found that the
45
+ nulls are highly coherent in three pulsars at four different
46
+ frequencies — 313, 607, 1380, and 4850 MHz. In addi-
47
+ Corresponding author: Akash Anumarlapudi
48
49
+ tion, it is also not clear whether pulsars null randomly.
50
+ Redman & Rankin (2009) and Gajjar et al. (2012) found
51
+ that nulls might not occur randomly but might be clus-
52
+ tered, where nulls and bursts tend to occur in groups,
53
+ but the latter found that the null durations can be ran-
54
+ dom. However, for many of these results the dependency
55
+ of the nulling inferences on signal-to-noise ratio makes
56
+ it hard to robustly interpret their findings.
57
+ Although the formation of a pair cascade and the radi-
58
+ ation from these accelerated pairs in the pulsar magne-
59
+ tosphere is often invoked to explain the observed emis-
60
+ sion from a pulsar (Ruderman & Sutherland 1975), a
61
+ full theory of pulsar magnetospheres and its emission to
62
+ explain the diverse morphology in pulse profiles and phe-
63
+ nomenology is yet to be developed. As such, the theory
64
+ of pulsar nulling remains largely speculative, though it is
65
+ often attributed to one of two classes: i) inherent to the
66
+ magnetosphere itself such as loss of coherence condition
67
+ required for radio emission, e.g., Filippenko & Radhakr-
68
+ ishnan (1982), or the depletion of pairs in the magneto-
69
+ sphere themselves, e.g., Kramer et al. (2006) or ii) geo-
70
+ metrical factors external to the magnetosphere such as
71
+ the line of sight traversing through the ‘empty’ region
72
+ between rotating emission carousels, e.g., Herfindal &
73
+ Rankin (2007, 2009). Further progress may require ad-
74
+ arXiv:2301.13258v1 [astro-ph.HE] 30 Jan 2023
75
+
76
+ ID2
77
+ Anumarlapudi et al.
78
+ ditional observational data to understand how the prop-
79
+ erties of nulling relate to the properties of the pulsars
80
+ themselves.
81
+ Nulling as a phenomenon may be related to other more
82
+ extreme forms of intensity modulation, where the pulses
83
+ can disappear for hours to months in the cases of rotat-
84
+ ing radio transients (RRATs; McLaughlin et al. 2006)
85
+ or intermittent pulsars (Kramer et al. 2006; Lyne 2009).
86
+ However, the connection between these populations is
87
+ not clear. Furthermore, pulsar nulling is often discussed
88
+ in tandem with two other forms of single pulse vari-
89
+ ations: mode changing – a phenomenon in which an
90
+ otherwise stable pulse profile switches between multiple
91
+ shapes (or modes) (Backer 1970b) and sub-pulse drift-
92
+ ing – a phenomenon in which the single pulse phase
93
+ shows a uniform periodic drift (Drake & Craft 1968).
94
+ Regardless, in all of these cases the appearance of these
95
+ phenomena can be limited by instrumental sensitivity:
96
+ without enough sensitivity to probe single pulses at high
97
+ significance, one cannot be certain whether the pulsar
98
+ emission is truly missing during the nulls or the pulsar
99
+ switches to an alternate mode with lower intensity. To-
100
+ gether all three are often thought of as different represen-
101
+ tatives of a larger underlying phenomenon of sub-pulse
102
+ intensity variations (Lorimer & Kramer 2004).
+ Nulling is usually quantified by the fraction of pulses in which there is no discernible emission, called the Nulling Fraction (NF). The NF can vary from 0 — in the case of the standard emission picture with no nulls — to 1, in the extreme case where the pulsar emission is visible only between long nulls (intermittent pulsars and RRATs). The NF has been measured in roughly 8% of pulsars, but this has more to do with the lack of single-pulse studies than with nulling being restricted to a small subset of pulsars. This smaller data set of nulling pulsars is entirely restricted to normal (not recycled) pulsars, owing to the high sensitivity that would be needed to observe single pulses of millisecond pulsars (MSPs), although some recent studies of a sample of bright MSPs (Rajwade et al. 2014) did not find a signature of nulling with high confidence. In addition, there can be a bias against discovering normal pulsars that have a high NF or are intermittent. Hence the fraction (8%) can only be considered a conservative lower limit.
+
+ Such a small data set restricts our ability to infer population-wide properties, which might give clues to the origin of the phenomenon, and hence studies done thus far have not reached a consensus. An initial study by Ritchings (1976) claimed a correlation between NF and pulsar period (with longer-period pulsars experiencing higher NF) and an even stronger correlation with characteristic age. Wang et al. (2007) also suggested a correlation with spin-down age, albeit qualitatively, with older pulsars experiencing higher NF before eventually crossing the death line. Konar & Deka (2019) found that there may be two different populations of pulsars separated by a NF of ∼40% but did not find correlations with any intrinsic pulsar properties, while Sheikh & MacDonald (2021) claimed that there is no strong evidence for the existence of two sub-populations. All of these studies may be significantly biased since the samples used are restricted to the pulsars that explicitly showed nulling.
+
+ In general, most studies (Wang et al. 2007; Gajjar et al. 2012, 2014b,a; Herfindal & Rankin 2009) estimate the NF using the methodology (or a variant) proposed by Ritchings (1976). But as Kaplan et al. (2018) demonstrated, this method can suffer strong biases in the case of weaker pulsars, which can lead to overestimating the NF and classifying an otherwise standard weak pulsar as a nulling pulsar. This can also lead to systematic biases in population inferences. In addition, Kaplan et al. (2018) proposed an alternate method that uses Gaussian mixtures to model the single-pulse intensities and estimate the NF, and demonstrated the reliability of this method in accurately measuring the NF in weaker pulsars. In this study, we expand on the Gaussian Mixture Model (GMM) of Kaplan et al. (2018) [1] to generalize their method and apply it to a larger sample of 22 pulsars [2].
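+ The core idea of the mixture-model approach can be illustrated with a small sketch (an illustration of the idea, not the authors' pipeline): single-pulse on-window intensities are modeled as a two-component Gaussian mixture, and the weight of the component consistent with the off-pulse noise level estimates the NF. The simulated means, widths, and nulling fraction below are arbitrary.
+
+ ```python
+ import numpy as np
+ from sklearn.mixture import GaussianMixture
+
+ rng = np.random.default_rng(42)
+
+ # Simulate 2000 single pulses: 30% nulls (pure noise), 70% bursts
+ # (noise + signal). Intensities are in arbitrary SNR-like units.
+ true_nf = 0.3
+ n_pulses = 2000
+ is_null = rng.random(n_pulses) < true_nf
+ on_intensity = rng.normal(0.0, 1.0, n_pulses)  # noise in the ON window
+ on_intensity[~is_null] += rng.normal(5.0, 1.0, (~is_null).sum())  # add signal
+
+ # Fit a two-component Gaussian mixture to the ON-window intensities.
+ gmm = GaussianMixture(n_components=2, random_state=0)
+ gmm.fit(on_intensity.reshape(-1, 1))
+
+ # The NF estimate is the weight of the component whose mean is
+ # closest to the off-pulse (zero-mean) noise level.
+ null_comp = np.argmin(np.abs(gmm.means_.ravel()))
+ nf_est = gmm.weights_[null_comp]
+ print(f"estimated NF = {nf_est:.2f}")  # recovers a value near the injected 0.3
+ ```
+
+ With well-separated null and burst components, the mixture weight tracks the injected nulling fraction; the published method additionally models the OFF-window histogram and propagates uncertainties, which this sketch omits.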
160
+ Pulsars selected for this study were discovered as a
161
+ part of the Green Bank North Celestial Cap (GBNCC)
162
+ pulsar survey (Stovall et al. 2014) in 2-min drift scans
163
+ at 350 MHz with a 100 MHz bandwidth and with data
164
+ sampled every 81.92 µs. At 350 MHz the beam size is
165
+ 36′ (Full Width at Half Maximum; FWHM) and hence
166
+ the astrometric precision prior to a coherent timing so-
167
+ lution is limited by the beam size depending on the
168
+ Signal-to-noise ratio (SNR) of the discovery candidate.
169
+ These were later followed up at the Green Bank Tele-
170
+ scope (GBT) and Arecibo Observatory (AO) to improve
171
+ their timing solutions and establish their nulling char-
172
+ acteristics.
173
+ The structure of this paper is as follows: In Section 2,
174
+ we detail our data acquisition and reduction methods,
175
+ and provide updated timing solutions for the pulsars in
176
+ this study. We then describe the mixture model and pro-
177
+ vide our basic results in Section 3. Finally, we present
178
+ 1 As noted in Kaplan et al. (2018), a similar method may have
179
+ been used in Arjunwadkar et al. (2014).
180
+ 2 All of our code is available at https://github.com/AkashA98/
181
+ pulsar nulling
182
+
183
+ Pulsar Nulling with Mixture Models
184
+ 3
185
+ the implications of the results in Section 4 and conclude
186
+ in Section 5.
+ 2. DATA ANALYSIS
+
+ 2.1. Observations and Data Reduction
+
+ A sample of 22 recently discovered pulsars was selected for this pilot study if they showed any signs of intermittency in their discovery plots [3]. Data for 15 of the 22 pulsars (hereafter referred to as the GBT sample) were collected using the 100-m Robert C. Byrd Green Bank Telescope (GBT), operating at 820 MHz with a bandwidth of 200 MHz, in 2 hr contiguous scans, with the primary aim of determining the pulsars' nulling characteristics (project code 18A−436; PI: J. Swiggum). Data for another nine pulsars (hereafter referred to as the AO sample) were collected at the 305-m William E. Gordon Telescope at Arecibo Observatory (AO), operating at 430 MHz over a bandwidth of 24 MHz, with the goals of both establishing coherent timing solutions and determining nulling characteristics (project code P3436; PI: J. Swiggum). Two pulsars in our sample, PSR J0414+31 and PSR J1829+25, were observed at both observatories.
+
+ Six of the 15 pulsars in the GBT sample already had coherent timing solutions (Lynch et al. 2018), and the data for these were collected in coherent search mode using the Green Bank Ultimate Pulsar Processing Instrument (GUPPI; Ransom et al. 2009) with 128 frequency channels sampled at 10.24 µs, retaining full polarization information. The remaining nine pulsars had no prior follow-up campaigns, so we first improved their positions using gridding observations and then observed them in incoherent search mode with 2048 frequency channels sampled at 40.96 µs. Data for the AO sample were collected in coherent search mode using the Puerto Rico Ultimate Pulsar Processing Instrument (PUPPI) [4], with 64 channels sampled at 40.96 µs, over a span of ∼6 months, to establish coherent timing solutions in addition to studying the nulling properties. A summary of observations for each pulsar is provided in Tables 1 and 2.
+
+ Starting with the raw search-mode data, we used dspsr (van Straten & Bailes 2011) to fold the data. We then used pazi, the interactive zapping routine in psrchive (van Straten et al. 2011), to remove radio frequency interference (RFI)-affected frequency channels and single pulses. For GBT data, we also made use of RFI scans taken at the observatory [5], when available, to identify the frequency bands that are affected by RFI, which are otherwise not obvious visually. In some cases, we found that one of the polarization channels was persistently affected by RFI, and in such cases we excluded data from that polarization channel at the cost of SNR. Fortunately, this did not have a significant impact on the determination of the nulling fractions. Some of the AO data had periodic "drop-outs" with sub-millisecond periodicity at zero dispersion measure (DM), caused by data-rate overflow during the observations. We cleaned these "drop-outs" by replacing the data with NaN values and being careful to exclude those when folding/averaging. After cleaning the RFI, both for timing and for estimating nulling, we averaged polarizations to measure the total intensity.
+
+ Table 1. Times and durations of GBT observations
+
+ Pulsar        Observations MJD (hr)        Total Time (hr)
+ J0054+6946    58163 (2.00)                 2.00
+ J0111+6624    58163 (2.24)                 2.24
+ J0325+6744    58163 (1.52), 58164 (0.48)   2.00
+ J0414+31a     58164 (1.50)                 1.50
+ J0614+83      58164 (1.90)                 1.90
+ J0738+6904    58209 (2.00)                 2.00
+ J1529−26      58209 (1.50)                 1.50
+ J1536−30      58209 (1.50)                 1.50
+ J1629+33      58209 (1.50)                 1.50
+ J1821+4147    58209 (1.69)                 1.69
+ J1829+25a     58246 (1.50)                 1.50
+ J1901−04      58246 (1.50)                 1.50
+ J2040−21      58246 (1.50)                 1.50
+ J2131−31      58246 (0.33)                 0.33
+ J2310+6706    58246 (1.75)                 1.75
+
+ Note—For each pulsar we give the individual Modified Julian Date (MJD) and duration of each session, as well as the total observing time.
+ a This pulsar was observed at both AO and GBT.
+
+ [3] See the GBNCC discovery page: http://astro.phys.wvu.edu/GBNCC.
+ [4] http://www.naic.edu/puppi-observing/
+ [5] https://greenbankobservatory.org/rfi-gui-user-guide/
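+ The "replace with NaN and exclude when averaging" cleaning step can be sketched with NumPy's NaN-aware reductions (a generic illustration; the array shape and names below are ours, not the pipeline's):
+
+ ```python
+ import numpy as np
+
+ # Fake folded data cube: (npol, nchan, nbin); flag two dropout samples.
+ data = np.ones((2, 64, 512))
+ data[0, 10, 100] = np.nan  # dropout replaced by NaN during cleaning
+ data[1, 20, 200] = np.nan
+
+ # Average over polarization and frequency, ignoring flagged samples,
+ # to get the total-intensity pulse profile versus rotational phase.
+ profile = np.nanmean(data, axis=(0, 1))
+ assert profile.shape == (512,)
+ assert not np.isnan(profile).any()  # NaNs are excluded, not propagated
+ ```
+
+ A plain `mean` would propagate the NaNs into the affected phase bins; `nanmean` simply renormalizes by the number of unflagged samples.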
+ 2.2. Timing
+
+ For the 16 pulsars in our sample that had no prior follow-up, we first tried to improve the timing parameters. We used paas from psrchive (van Straten et al. 2011) to make a standard template and then used pat to extract the Times of Arrival (TOAs) from the data. For the GBT data, our goal was to improve the spin frequency (F0) and DM measurements, since we had only a 2-hour scan at a single epoch for each source. For the AO data, the data spanned ∼3–6 months depending on the pulsar, and hence we can generate a phase-connected solution. However, the relatively narrow bandwidth of the observations (24 MHz) restricted our ability to fit for DM using sub-banded TOAs, and hence we used the DM of the discovery candidate found on the GBNCC discovery page.
+
+ The timing solutions for all the pulsars in this study are given in Table 3. For pulsars observed at GBT we improved the positions through gridding, and the F0 and DM estimates through timing. For pulsars observed at AO, we improved the gridded positions, F0, and the frequency derivative F1 = Ḟ0 through coherent timing. For the two overlapping pulsars observed at both GBT and AO, a timing solution was obtained by combining the TOAs from both observatories. In the case of pulsars observed at AO for only ∼3 months (J0355+28, J0414+31, J1822+02), and pulsars where a combination of low SNR and nulling resulted in few TOAs with SNR > 8 (J1928+28), it is difficult to estimate both position and F1 precisely (they are highly covariant). In such cases, we rely on the F-statistic, given by
+
+ F = [(χ²₀ − χ²) / (p₀ − p)] / (χ² / p),
+
+ where χ²₀ and χ² are the chi-squared values of the timing residuals, and p₀ and p are the degrees of freedom, before and after the addition of F1 (or any additional parameter(s), in general). This F-statistic follows an F-distribution (Lomax 2007), and hence we include F1 in the fit only if the probability that the improvement in the goodness of fit (χ²) due to F1 arose by chance is <1%. The resulting timing residuals are shown in Figure 1.
+
+ Table 2. Times and durations of Arecibo observations
+
+ Pulsar      Observations MJD (hr)                       Total Time (hr)
+ J0355+28    58890 (0.25), 58922 (0.33), 58924 (0.42),   2.95
+             58928 (0.39), 58936 (0.39), 58951 (0.39),
+             58982 (0.39), 59013 (0.39)
+ J0414+31a   58890 (0.50), 58922 (0.38), 58924 (0.50),   3.46
+             58928 (0.35), 58936 (0.30), 58951 (0.40),
+             58982 (0.63), 59013 (0.40)
+ J1822+02    58941 (0.22), 58968 (0.17), 58970 (0.17),   1.55
+             58974 (0.17), 58981 (0.17), 59000 (0.33),
+             59029 (0.17), 59063 (0.17)
+ J1829+25a   58852 (0.17), 58941 (0.17), 58968 (0.14),   1.03
+             58970 (0.11), 58974 (0.11), 58981 (0.11),
+             59029 (0.11), 59063 (0.11)
+ J1904+33    58852 (0.17), 58882 (0.17), 58941 (0.17),   1.34
+             58968 (0.14), 58970 (0.14), 58974 (0.14),
+             58981 (0.14), 59029 (0.14), 59063 (0.14)
+ J1928+28    58852 (0.17), 58882 (0.17), 58941 (0.17),   1.98
+             58968 (0.14), 58970 (0.17), 58974 (0.17),
+             58981 (0.17), 59000 (0.50), 59029 (0.17),
+             59063 (0.17)
+ J1941+02    58852 (0.17), 58882 (0.17), 58912 (0.14),   1.5
+             58941 (0.17), 58968 (0.14), 58970 (0.14),
+             58974 (0.14), 58981 (0.10), 59029 (0.17),
+             59063 (0.17)
+ J2000+29    58852 (0.39), 58882 (0.17), 58941 (0.10),   1.83
+             58968 (0.14), 58970 (0.14), 58974 (0.14),
+             58981 (0.14), 59000 (0.33), 59029 (0.14),
+             59063 (0.14)
+ J2044+28    58852 (0.17), 58882 (0.17), 58968 (0.07),   1.18
+             58970 (0.14), 58974 (0.14), 58981 (0.14),
+             59000 (0.07), 59029 (0.14), 59063 (0.14)
+
+ a This pulsar was observed at both AO and GBT.
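+ The F-test criterion used in Section 2.2 can be written out explicitly as a short sketch (the χ² values below are made up for illustration; SciPy's F survival function gives the chance probability):
+
+ ```python
+ from scipy.stats import f as f_dist
+
+ def f_test(chi2_0, p0, chi2, p):
+     """Significance of adding parameters to a timing fit.
+
+     chi2_0, p0: residual chi-squared and degrees of freedom before the
+     addition; chi2, p: the same quantities after the addition.
+     """
+     F = ((chi2_0 - chi2) / (p0 - p)) / (chi2 / p)
+     # Survival function of the F(p0 - p, p) distribution = probability
+     # that an improvement at least this large arises by chance.
+     return F, f_dist.sf(F, p0 - p, p)
+
+ # Example: adding F1 drops chi-squared from 120 to 90 for one extra parameter.
+ F, prob = f_test(chi2_0=120.0, p0=100, chi2=90.0, p=99)
+ # Include F1 in the fit only if the improvement is <1% likely by chance.
+ include_F1 = prob < 0.01
+ ```
+
+ This is the standard nested-model F-test; the <1% threshold mirrors the acceptance criterion stated in the text.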
+ 2.3. ON/OFF histograms
+
+ Once we had improved the timing solution, we used dspsr in single-pulse mode to generate single pulses for all scans and used psradd, from psrchive, to phase-align pulses from different scans after cleaning the data for RFI. We then averaged the data along the polarization and frequency axes to obtain the pulse intensity of the single pulses as a function of rotational phase, and generated single-pulse stacks such as that shown in Figure 2.
468
+ The most important aspect in estimating the nulling
469
+ fraction is determining the “ON”-pulse and “OFF”-
470
+
471
+ Pulsar Nulling with Mixture Models
472
+ 5
Table 3. Timing Parameters for the GBNCC pulsars used to study nulling

Pulsar        RA (J2000)      RA err (″)  DEC (J2000)      DEC err (″)  Period (s)          Period deriv. (10⁻¹⁵ s/s)  DM (pc/cm³)

GBT sample
J0054+6946a   00h 54m 59.s1   00.1        +69° 46′ 16.″8   00.0(3)      0.832911328744(4)   −0.7194(8)                 116.52(5)
J0111+6624a   01h 11m 21.s9   01.7        +66° 24′ 10.″9   00.6         4.3018721007(3)     −8.4(2)                    111.20(3)
J0325+6744a   03h 25m 05.s1   00.3        +67° 44′ 59.″4   00.1         1.36467876728(1)    −1.553(9)                  65.28(5)
J0414+31b     04h 14m 35.s6   02.6        +31° 38′ 35.″4   25.3         1.0805116(1)        −3.6(5)                    64.64(3)
J0614+83c     06h 14m 03.s4   34.6        +83° 13′ 46.″2   34.6         1.03918794(5)       · · ·                      44.2(1)
J0738+6904a   07h 38m 22.s6   00.5        +69° 04′ 20.″0   00.3         6.8276928023(5)     −26.97(4)                  17.22(2)
J1529−26c     15h 29m 07.s2   38.9        −26° 26′ 35.″5   38.9         0.79857094(5)       · · ·                      44.7(1)
J1536−30c     15h 36m 33.s4   17.3        −30° 06′ 14.″4   17.3         0.190084143(9)      · · ·                      63.40(7)
J1629+33c     16h 29m 22.s6   99.2        +33° 23′ 35.″9   99.2         1.5247311(3)        · · ·                      34.8(5)
J1821+4147a   18h 21m 52.s3   00.1        +41° 47′ 02.″6   00.0(4)      1.26185719(3)       −1.7292(9)                 40.63(5)
J1829+25b     18h 30m 31.s8   01.8        +25° 08′ 00.″4   01.4         2.85769207(9)       −1.9(4)                    73.64(9)
J1901−04c     19h 01m 37.s1   62.0        −04° 54′ 44.″9   62.0         1.8255459(8)        · · ·                      105.4(9)
J2040−21c     20h 40m 40.s6   09.7        +21° 52′ 51.″6   09.7         0.562564125(4)      · · ·                      23.77(1)
J2131−31c     21h 31m 30.s9   65.9        −31° 32′ 53.″4   65.9         3.32537(3)          · · ·                      31.753
J2310+6706a   23h 10m 42.s1   02.9        +67° 06′ 52.″1   00.9         1.944788973(1)      −0.06(5)                   97.7(2)

AO sample
J0355+28      03h 55m 22.s8   00.4        +28° 38′ 50.″1   00.8         0.36492919909(3)    · · ·                      48.788
J0414+31b     04h 14m 35.s6   02.6        +31° 38′ 35.″4   25.3         1.0805116(1)        −3.6(5)                    64.64(3)
J1822+02      18h 22m 43.s6   01.4        +02° 28′ 53.″8   01.2         1.5081132778(9)     · · ·                      103.22
J1829+25b     18h 30m 31.s8   01.8        +25° 08′ 00.″4   01.4         2.85769207(9)       −1.9(4)                    73.64(9)
J1904+33      19h 04m 40.s2   00.2        +33° 58′ 25.″9   00.1         0.417032327(1)      −0.247(5)                  81.139
J1928+28      19h 27m 58.s4   01.1        +28° 59′ 12.″4   01.0         1.0630373062(5)     · · ·                      79.34
J1941+02      19h 40m 34.s1   00.8        +02° 39′ 21.″7   01.0         1.23229077(1)       −0.18(9)                   87.478
J2000+29      20h 00m 16.s5   00.4        +29° 20′ 07.″6   00.1         3.07377646(2)       −37.37(8)                  132.62
J2044+28      20h 43m 36.s9   00.4        +28° 28′ 37.″3   00.2         1.61816650(1)       −3.99(4)                   90.169

Note—Quantities in parentheses are 1σ uncertainties on the last digit.
a Coherent timing solutions are given in Lynch et al. (2018).
b Timing solution is obtained by combining AO and GBT data.
c Astrometric positions are estimated from gridding, and the positional uncertainties are estimated from the beam size (15′) and the Signal to Noise Ratio (SNR).
Anumarlapudi et al.

[Figure 1 panels: timing residuals (in ms and in cycles) versus Modified Julian Date (MJD 58800–59100) for PSRs J0355+28, J0414+31, J1822+02, J1829+25, J1904+33, J1928+28, J1941+02, J2000+29, and J2044+28.]

Figure 1. Timing residuals for the pulsars observed in the timing/nulling campaign at the AO. The red dots are the residuals (in milliseconds) from the timing model, with the error bars representing the 1σ errors on the TOAs. The timing model solutions are presented in Table 3.
[Figure 2 panels: (a) single pulse stack of PSR J0325+6744 (single pulse number versus pulse phase, with ON and OFF windows and per-pulse null probabilities NP); (b) pulse intensity histogram for PSR J0325+6744 (density versus normalized intensity for the ON and OFF windows).]

Figure 2. (a) The bottom left panel shows the single pulse stack with the ON and OFF windows marked with black dashed lines. Null probabilities (NP) for every single pulse are calculated using the method described in §3.2 and are shown in the bottom right plot. The distribution of NP is shown in the top right panel, where we can clearly see the evidence for two classes of pulses. The summed profile of all the single pulses with null probability < 0.5 is shown in the top left panel, while the summed profile for pulses with null probability > 0.5 is shown in the middle panel. (b) The pulse intensities in the OFF and ON windows are shown as blue and orange histograms. The presence of excess counts in the ON histogram (the null component) at the background noise level, separated from a second component at higher intensities (the emission component), is evidence for the nulling behavior.
pulse phase windows. The single pulse intensities in the “OFF”-pulse window should be entirely due to radiometer noise, while the intensities in the “ON”-pulse window should be the sum of the radiometer noise component (the same as in the “OFF”-pulse window) and the pulsar emission component. We first generated the average pulse profile to visually select ON and OFF windows of the same width. We then fit a sixth-order polynomial as a function of pulse phase to each single pulse (similar to Rosen et al. 2013; Lynch et al. 2013; Kaplan et al. 2018), after masking the ON/OFF windows, to remove any trends and construct a flat baseline. We recorded the ON/OFF intensities as the sums of the baseline-subtracted intensities across the respective windows. Finally, we constructed histograms of the ON/OFF intensities, which we used to determine the nulling properties.
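A minimal numpy sketch of this baseline-subtraction step on a toy pulse; the window locations, noise level, and pulse shape here are illustrative, not the survey values:

```python
import numpy as np

def window_intensities(pulse, on_slice, off_slice, deg=6):
    """Baseline-subtract one single pulse and sum the ON/OFF windows.

    A polynomial of degree `deg` in pulse phase is fit to the profile
    with the ON and OFF windows masked out, then subtracted to flatten
    the baseline, as described in the text.
    """
    nbin = pulse.size
    phase = np.linspace(0.0, 1.0, nbin, endpoint=False)
    mask = np.ones(nbin, dtype=bool)
    mask[on_slice] = False      # exclude both windows from the fit
    mask[off_slice] = False
    coeffs = np.polyfit(phase[mask], pulse[mask], deg)
    flat = pulse - np.polyval(coeffs, phase)
    return flat[on_slice].sum(), flat[off_slice].sum()

# Toy single pulse: a slow baseline drift plus a pulse in the ON window.
rng = np.random.default_rng(0)
phase = np.linspace(0, 1, 512, endpoint=False)
pulse = 0.5 * phase + np.exp(-0.5 * ((phase - 0.25) / 0.01) ** 2)
pulse += rng.normal(0, 0.01, 512)
on_int, off_int = window_intensities(pulse, slice(110, 146), slice(300, 336))
```

The ON sum carries the pulse energy while the OFF sum scatters around zero, which is what the ON/OFF histograms below are built from.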
Figure 2 shows the single pulse intensity distributions in the ON/OFF windows. The OFF histogram can be accurately described by a single component (Gaussian noise), but the ON histogram can have multiple components — “null” and “emission” components. The presence of nulling manifests in the ON histogram as an excess of samples at levels consistent with the OFF component, which we refer to as the null component. The residual distribution, after removing the null component, should be a realization of the pulsar's emission distribution (hereafter referred to as the ‘emission’ component). The emission component can be a single distribution or a combination of multiple distributions. The ON distribution can thus be thought of as the sum of the null and the emission components.
3. METHODS & RESULTS

3.1. Determining Nulling Fractions

As demonstrated by Kaplan et al. (2018), Ritchings' method can give biased estimates of the NF (hereafter referred to as NFr) in pulsars where the emission component is close to the noise level. Therefore, following Kaplan et al. (2018), we adopt a method which models the ON/OFF histograms using a mixture model (MM). This means that the intensities x can be considered as random draws from the probability density function (PDF)

    p(x|θ̄) = Σ_{n=1}^{m} c_n F_n(x|{θ_n}),    (1)

where the F_n functions are the individual probability density functions parameterized by the sets {θ_n}, and the c_n are the weights. In the case where all the F_n functions are the same and are normal distributions,

    F_n(x; μ_n, σ_n) = N(x; μ_n, σ_n) = (1/(√(2π) σ_n)) exp[−(1/2)((x − μ_n)/σ_n)²],

where {μ_n} and {σ_n} are the means and standard deviations of component n, this reduces to a Gaussian mixture model (GMM), but more general models are considered.
There is an additional constraint that the weights c_n sum to one,

    Σ_{n=1}^{m} c_n = 1,

which comes from the normalization of the PDF. This leaves the total number of free parameters to be determined as Σ_{n=1}^{m} dim({θ_n}) model parameters and m − 1 latent parameters.
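Equation 1 with Gaussian components can be evaluated directly; a small sketch (the component values are arbitrary), showing that the weight constraint makes the mixture a proper PDF:

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    """Normal density N(x; mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def mixture_pdf(x, weights, mus, sigmas):
    """Equation 1 for a Gaussian mixture: p(x) = sum_n c_n N(x; mu_n, sigma_n).
    The weights c_n must sum to one (the normalization constraint)."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0)
    comps = [c * norm_pdf(x, m, s) for c, m, s in zip(weights, mus, sigmas)]
    return np.sum(comps, axis=0)

x = np.linspace(-5, 10, 2001)
p = mixture_pdf(x, [0.4, 0.6], [0.0, 3.0], [1.0, 1.5])
# Because each component integrates to one and the weights sum to one,
# the mixture integrates to one as well (checked here by a Riemann sum).
area = np.sum(p) * (x[1] - x[0])
```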
In general, the OFF histogram can be well described by a Gaussian, as expected for radiometer noise (assuming that RFI has been sufficiently removed), and this is what we observe in our data. The emission component can usually be described by a single Gaussian as well. However, there are cases when it deviates from a single Gaussian component. More than one component is a possibility considered in Kaplan et al. (2018), which can be tested against the single-component model through a model comparison test. However, we also consider non-Gaussian models here. Specifically, multi-path propagation of the pulses through the interstellar medium (ISM) (Smith 1973; Bhat et al. 2003; Lorimer & Kramer 2004) can result in the emission distribution having long tails towards higher intensities. This effect can be reasonably well described by the intensity distribution

    F(x; μ, σ, τ) = (1/(2τ)) exp(σ²/(2τ²)) exp[−(x − μ)/τ] erfc[−(x − (μ + σ²/τ))/(√2 σ)],    (2)

which is a convolution of a Gaussian N(x; μ, σ) and a one-sided exponential (1/τ) exp(−x/τ) U(x), where U(x) is the Heaviside or step function, erfc(x) is the complementary error function, and τ is the decay time of the exponential (McKinnon 2014). Hence we model the emission component using multi-component Gaussians and Gaussians with exponential tails, and rank the models using their Bayesian Information Criterion (BIC) values to choose the best-fit model.
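Equation 2 is the exponentially modified Gaussian, which also ships in scipy as exponnorm with shape parameter K = τ/σ; the sketch below checks the two forms against each other and shows the BIC used for the model ranking (the parameter values and the BIC inputs are arbitrary):

```python
import numpy as np
from scipy import stats
from scipy.special import erfc

def emg_pdf(x, mu, sigma, tau):
    """Exponentially modified Gaussian of Equation 2: a Gaussian
    N(x; mu, sigma) convolved with a one-sided exponential of decay tau."""
    arg = -(x - (mu + sigma**2 / tau)) / (np.sqrt(2) * sigma)
    return (1.0 / (2 * tau) * np.exp(sigma**2 / (2 * tau**2))
            * np.exp(-(x - mu) / tau) * erfc(arg))

# The same distribution is scipy's exponnorm with K = tau/sigma.
x = np.linspace(-3, 15, 500)
mu, sigma, tau = 1.0, 0.8, 2.0
ours = emg_pdf(x, mu, sigma, tau)
ref = stats.exponnorm.pdf(x, tau / sigma, loc=mu, scale=sigma)

def bic(logL, n_params, n_data):
    """Bayesian Information Criterion: lower is better, so extra
    components must buy enough likelihood to pay the k*ln(N) penalty."""
    return n_params * np.log(n_data) - 2.0 * logL
```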
We employ the scikit-learn Gaussian mixture model (Pedregosa et al. 2011) to derive an initial fit for the ON and OFF histograms. This is based on the expectation–maximization (EM) algorithm, in which parameters are estimated by maximizing the likelihood function L(data | θ̄) (see Ivezić et al. 2020, for details). This produces a very good fit for the OFF histogram. However, in the case of weaker pulsars, where the emission can be confused with the background, Kaplan et al. (2018) showed that this method can still fail to produce a reliable fit for the null and emission components of the ON histogram simultaneously, although the bias can be small compared to Ritchings' algorithm. As such, a refined fit for the null and emission components can be obtained by performing a Markov-Chain Monte Carlo (MCMC) analysis.
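The paper's initial fit uses scikit-learn's GaussianMixture; the EM iteration behind it can be sketched in plain numpy for the one-dimensional, two-component case (toy data; the quantile-based initialization is our choice, not scikit-learn's default):

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def em_gmm_1d(x, n_iter=200):
    """Expectation-maximization for a 1-D two-component Gaussian mixture,
    mirroring what scikit-learn's GaussianMixture does for the initial fit."""
    mu = np.percentile(x, [25, 75]).astype(float)   # crude initialization
    sigma = np.array([x.std(), x.std()])
    c = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        w = c[:, None] * norm_pdf(x[None, :], mu[:, None], sigma[:, None])
        r = w / w.sum(axis=0)
        # M-step: re-estimate weights, means, and widths
        nk = r.sum(axis=1)
        c = nk / x.size
        mu = (r * x).sum(axis=1) / nk
        sigma = np.sqrt((r * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / nk)
    return c, mu, sigma

rng = np.random.default_rng(42)
# toy "ON" intensities: 30% nulls (noise around 0), 70% emission around 4
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 700)])
c, mu, sigma = em_gmm_1d(x)
```

With the components this well separated, EM recovers the injected weights and means; the bias discussed in the text appears when the two means approach each other.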
For the MCMC analysis, the likelihood function is given by

    L(x̄|θ̄) = Π_i p(x_i|θ̄),    (3)

following p(x_i|θ̄) from Equation 1. The priors chosen are:

• the initial Gaussian fit from the EM algorithm for the off-pulse mean and standard deviation;

• uniform distributions, between the bounds dictated by the on-pulse intensities, for the parameters governing the pulsar emission component;

• a Dirichlet distribution for the m coefficients c_n (Wilks 2008).

We use the emcee (Foreman-Mackey et al. 2013) ensemble sampler to sample the posterior. We initialize 32 walkers within a ±5σ range of the initial fit values of the parameters. To account for the finite correlation length of the chains and produce independent samples, we first let the walkers “burn in” to erase their starting conditions, and we then let the walkers explore the parameter space until we have at least 100 independent samples per walker.
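The paper samples this posterior with emcee; purely as a self-contained illustration, a plain-numpy Metropolis walk over NF alone, with the component shapes held fixed, a flat prior, and toy data, captures the idea:

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def log_like(nf, x, off=(0.0, 1.0), em=(5.0, 1.0)):
    """Log of Equation 3 with the component shapes held fixed, so the
    only free parameter is the nulling fraction NF."""
    p = nf * norm_pdf(x, *off) + (1 - nf) * norm_pdf(x, *em)
    return np.log(p).sum()

rng = np.random.default_rng(1)
# toy ON intensities with a true NF of 0.3
x = np.concatenate([rng.normal(0, 1, 600), rng.normal(5, 1, 1400)])

# Metropolis random walk over NF in (0, 1) with a flat prior
nf, lp = 0.5, log_like(0.5, x)
chain = []
for _ in range(4000):
    prop = nf + rng.normal(0, 0.02)
    if 0.0 < prop < 1.0:                      # reject outside the prior
        lp_prop = log_like(prop, x)
        if np.log(rng.uniform()) < lp_prop - lp:
            nf, lp = prop, lp_prop
    chain.append(nf)
posterior = np.array(chain[1000:])            # discard burn-in
```

The posterior mean lands near the injected NF of 0.3; the real analysis lets the component means, widths, and Dirichlet-weighted coefficients float as well.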
Figure 3 (left column) shows the pulse intensity histograms for PSR J0325+6744, a pulsar in which the emission component is easily discernible from the noise, and PSR J1529−26, a pulsar where these two start to blend into each other. Looking at the null component in the ON histogram for the two pulsars, the evidence for nulling is clear in J0325+6744, while J1529−26 behaves like a non-nulling pulsar whose emission is weak. The blue, green, and orange filled regions show the fits for the OFF, null, and emission components, respectively, and the black dotted line shows the overall fit for the ON component. The posteriors for the model parameters are presented in Figure 3 (right column), with the point estimates (medians⁶) of the NF from the MM given in Table 4.

For PSR J0325+6744, where the null and emission components are well separated (a bright pulsar), our method yields NF = 53.92 ± 0.81%, while Ritchings'

⁶ In the case of non-nulling pulsars, where the distribution of NF is one-sided, the median will be over-estimated compared to the true value. Even so, the uncertainty on NF is larger than the difference between the median and the mode, and hence NF is still consistent with 0.
Table 4. Nulling properties of the GBNCC pulsars

Pulsar        Model  NF (%)     NFr (%)  Null period (pulse periods)  Null length  Em. length

GBT sample
J0054+6946    G3     27.5±5.1   36.8     · · ·    2      3
J0111+6624    G2     10.2±1.7   17.9     · · ·    2      7
J0325+6744    G2     53.9±0.8   55.1     · · ·    3      4
J0414+31      G2     27.5±1.9   40.7     28.4c    2      4
J0614+83      G2     06.7±3.1   52.3     · · ·    1-2a   · · ·
J0738+6904    Eg2    66.6±1.5   64.9     42.7c    9      4
J1529−26      G2     05.4±4.3   48.5     · · ·    1-2a   · · ·
J1536−30      G2     43.1±2.2   57.5     · · ·    4
J1629+33      G2     83.8±1.9   83.9     · · ·    12     1-2a
J1821+4147    G2     00.0±0.6   20.9     · · ·    1-2a   · · ·
J1829+25      G2     00.0±0.6   07.8     · · ·    0b     · · ·
J1901−04      G2     13.9±4.1   50.4              1a     · · ·
J2040−21      G2     25.4±1.8   42.4     23.3c    2      5
J2131−31      G2     49.8±8.6   54.2     · · ·    3      3
J2310+6706    Eg2    54.1±2.7   52.7              3      3

AO sample
J0355+28      G2     01.6±1.1   30.3     · · ·    1-2a   · · ·
J0414+31      G2     33.0±0.7   37.1     28.4c    2      4
J1822+02      G2     00.1±0.7   09.3     · · ·    1a     · · ·
J1829+25      G2     00.0±0.6   05.5     · · ·    0b     · · ·
J1904+33      G2     00.0±0.1   09.4     · · ·    1a     · · ·
J1928+28      G2     47.6±2.4   71.9     · · ·    3      3
J1941+02      G2     00.2±1.7   31.1     · · ·    1-3a   · · ·
J2000+29      G2     19.3±1.1   23.4     · · ·    1-2a   3
J2044+28      G2     15.2±0.9   17.4     · · ·    1-2a   6

Note—The naming convention for the model gives the model used to describe the emission histogram (G = Gaussian, Eg = exponentially modified Gaussian), followed by the number of components in the ON histogram. Null and emission lengths are in pulse periods.
a We find that in extreme cases (non-nulling/highly-nulling), one of the distributions is confined to very few bins, so we quote this range rather than fitting for it.
b We find that there are no single pulses with NP > 0.5.
c We observe quasi-periodicity in these cases.
method (see Ritchings 1976; Wang et al. 2007; Kaplan et al. 2018, for implementation) gives a comparable estimate of 55.01%. However, in the case of a weaker pulsar, PSR J1529−26, where the emission component is closer to the background noise, our method gives a best-fit value of NF = 5.55 ± 4.4%, compared to the 48.1% given by Ritchings' method. The latter is significantly overestimated and can easily lead to (mis)classifying the source as a nulling pulsar, further illuminating the bias of Ritchings' method for weaker pulsars.

Full results for all 23 pulsars, including the single pulse stacks, the posteriors from the MCMC runs, and the resultant ON/OFF histogram model fits, are shown in Appendix A.
3.2. Nulling Correlations

After determining the nulling properties, we wish to know whether the locations and durations of nulls are completely random, or whether there is any correlation between different nulling and emission episodes in a pulsar. Specifically, given a single pulse that shows emission (or that nulls), how likely are we to see emission in the next pulse, and are there any patterns of longer duration?

We test this using the probability of a null (the nulling “responsibility”) evaluated for each individual pulse of intensity I,

    NP_I = c_0 F_0(I|{θ_0}) / Σ_{n=1}^{m} c_n F_n(I|{θ_n}).    (4)
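Equation 4 is just the posterior responsibility of the null component under the fitted mixture; a sketch for a fitted two-component Gaussian model (the parameter values are invented):

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def null_probability(I, weights, mus, sigmas):
    """Equation 4: the responsibility of the null component (index 0)
    for a pulse of ON intensity I, under the fitted mixture model."""
    comps = np.array([c * norm_pdf(I, m, s)
                      for c, m, s in zip(weights, mus, sigmas)])
    return comps[0] / comps.sum(axis=0)

# hypothetical fitted model: null (around 0) and emission (around 4)
weights, mus, sigmas = [0.3, 0.7], [0.0, 4.0], [1.0, 1.0]
I = np.array([-0.2, 0.5, 2.0, 3.8, 6.0])
NP = null_probability(I, weights, mus, sigmas)
```

Pulses near the noise level get NP close to 1, bright pulses get NP close to 0, and NP = 0.5 marks the null/emission boundary used below.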
We divided the data into stacks of 256 pulses (similar to Ritchings 1976; Herfindal & Rankin 2009) to obtain more robust estimates and to be less sensitive to long-term variations such as scintillation and system temperature changes, and used Equation 4 to calculate the probability of a given single pulse being a null. We then looked for periodic signatures by taking the Fourier transform (FT) within each stack and co-adding the power from all stacks incoherently. Figure 4 shows the resultant spectrum for PSR J0414+31, in which a certain pattern of emission and nulls appears to be periodic over ∼28 pulse periods. We estimate the significance of peaks in the stacked power spectra assuming that the null distribution from n stacks follows a χ² distribution with 2n degrees of freedom (this assumes white noise). We see significant periodic or quasi-periodic (a significant broad peak in the power spectrum) signatures in a few other pulsars, and tabulate their periods in Table 4. In the case of precise period measurements, we estimate the uncertainty as described in Ransom et al. (2002).
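The stacking-and-FFT search can be sketched as follows, on toy NP sequences with an injected period of 8 pulses; the χ² quantile shown is the 99.9% white-noise level up to an overall normalization:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
nstack, npulse, period = 8, 256, 8

# co-add the power spectra of per-stack null-probability sequences
power = np.zeros(npulse // 2 + 1)
for _ in range(nstack):
    # toy NP sequence: nulls recur every `period` pulses, plus noise
    seq = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(npulse) / period)
    seq += rng.normal(0, 0.05, npulse)
    spec = np.fft.rfft(seq - seq.mean())
    power += np.abs(spec) ** 2

freqs = np.fft.rfftfreq(npulse)        # in units of 1/pulse period
idx = np.argmax(power[1:]) + 1         # skip the DC bin
peak_freq = freqs[idx]                 # recovers 1/period

# significance: co-added white-noise power from n stacks follows a
# chi^2 distribution with 2n degrees of freedom (up to normalization);
# this is the 99.9% quantile used for the dotted limit in Figure 4.
threshold = chi2.isf(1e-3, 2 * nstack)
```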
However, this only points to the periodic nature of a certain pattern of emission and nulls. To find how emission and nulls are ‘bunched’, we look at the distributions of continuous emission and null runs, where we use NP_I = 0.5 as the boundary between an emission and a null. Figure 5 shows the emission and null length
Anumarlapudi et al.

[Figure 3 panels: (a1) ON/OFF intensity histograms and mixture-model fits for PSR J0325+6744 (NF = 53.92 ± 0.81% vs NFr = 55.01%); (a2) model parameter posteriors for PSR J0325+6744 (μ₀ = −0.001, μ₁ = 2.183, σ₀ = 0.355, σ₁ = 0.853, NF = 0.54); (b1) the same histograms and fits for PSR J1529−26 (NF = 5.4 ± 4.4% vs NFr = 48.5%); (b2) model parameter posteriors for PSR J1529−26 (μ₀ = 0.011, μ₁ = 1.082, σ₀ = 1.35, σ₁ = 1.389, NF = 0.054).]

Figure 3. Left (a1, b1): Two-component Gaussian model fits for the ON and OFF histograms. The individual ON/OFF histograms are shown as solid black lines. The blue, green, and orange filled regions show the OFF, the null (NF × OFF), and the emission (ON − NF × OFF) components, respectively, where the estimate of NF is obtained using the mixture model. The black dotted line shows the overall fit for the ON pulse distribution. Right (a2, b2): Corner plots for the two-component Gaussian fits to the ON/OFF histograms, parameterized by the means {μ₁, μ₂}, standard deviations {σ₁, σ₂}, and the nulling fraction NF. The dashed vertical lines mark the quoted median point estimates of the parameters.
[Figure 4 panels: stacked power spectra of the null probability (power versus Fourier frequency, in 1/P) for (a) PSR J0414+31 (GBT), (b) PSR J0414+31 (AO), (e) PSR J2040−21, and (f) PSR J0738+6904; each panel shows the NP FFT of the ON and OFF components and the analytical limit.]

Figure 4. Fourier transform of the null probability for the pulsars in our sample that show periodicity. Power combined incoherently from multiple stacks of 256 pulses is shown at 129 discrete frequencies (in units of 1/pulse period) as the blue line. The orange curve shows the same for the OFF component (background noise), which can be used to eliminate any instrumental variations/artifacts and/or RFI. The black dotted line shows the upper limit that allows for 1 false positive in 1000 trials, corresponding to a 99.9% confidence limit. The gray curves are the normalized power from the individual stacks (not to scale), which are used to look for quasi-periodicity. The values of the periodicities are given in Table 4.
[Figure 5: histogram of normalized counts versus pulse periods for the null and emission episode lengths of PSR J0414+31, with exponential fits.]

Figure 5. Distribution of emission lengths and null lengths for J0414+31. The gray filled and black open histograms show the distributions of null and emission episodes, respectively. The orange curve shows an exponential fit to the emission length distribution with decay constant τem = 0.3, whereas the blue curve shows the same for the null length distribution with τnull = 0.49.
distributions for the single pulses of PSR J0414+31. We find that these distributions can be well described by an exponential distribution, p(x) = λ⁻¹ exp(−x/λ), where x is the null or emission length and λ is the mean duration of the episode. We find that for PSR J0414+31 the emission episodes have a characteristic length of four pulse periods, whereas the nulls are two periods long, which is consistent with the observed nulling fraction of ∼33% (see Table 4). We repeated this for all the pulsars, and the results are tabulated in Table 4.
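Extracting episode lengths from the per-pulse null probabilities reduces to a run-length computation; a sketch with an invented NP sequence:

```python
import numpy as np

def run_lengths(null_prob, threshold=0.5):
    """Split a per-pulse null-probability sequence into continuous
    null (NP > threshold) and emission (NP <= threshold) episodes and
    return the lengths of each kind, in pulse periods."""
    is_null = np.asarray(null_prob) > threshold
    # indices where the state flips mark episode boundaries
    edges = np.flatnonzero(np.diff(is_null.astype(int))) + 1
    runs = np.split(is_null, edges)
    null_len = [len(r) for r in runs if r[0]]
    em_len = [len(r) for r in runs if not r[0]]
    return null_len, em_len

NP = [0.9, 0.8, 0.1, 0.2, 0.1, 0.95, 0.7, 0.9, 0.3, 0.2]
nulls, ems = run_lengths(NP)
# For an exponential distribution p(x) = exp(-x/lambda)/lambda, the
# maximum-likelihood estimate of the mean episode length is the sample mean.
mean_null = np.mean(nulls)
```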
3.3. Sub-pulse Drifting

Beyond nulling, we also look for any correlations between nulling and sub-pulse drifting. Drifting is usually characterized by two periods: the drifting period P3, defined as the period over which the pulse is seen at the same longitude (phase), and P2, the spacing between two sub-pulses within the same single pulse (see Figure 6). To estimate both, we prepared the data by selecting only the on-pulse window (np phase bins) for all the single pulses (ns single pulses). We then calculated Longitude Resolved Fluctuation Spectra (LRFS; Backer 1970c), where we take a 1-D Fourier transform of the (ns × np) data along the ns axis. Figure 6 shows one of the two pulsars in our sample, J1822+02, that shows clear signs of drifting. A period P3 of ∼28 pulse periods and a P2 of ∼35/1024 pulse periods can be clearly seen. We also find evidence for drifting in PSR J1829+25 (see Figure 7), with a P3 of ∼3 pulse periods and a P2 of 1/128 pulse periods, with similar inferences from the data from both AO and GBT.
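The LRFS computation is a one-dimensional FFT along the pulse-number axis of the single-pulse stack; a sketch on a toy drifting stack with an injected P3 of 16 pulse periods (the drift model here is illustrative):

```python
import numpy as np

def lrfs(stack):
    """Longitude Resolved Fluctuation Spectrum: Fourier transform the
    (ns x np) single-pulse stack along the single-pulse axis and return
    the power at each (fluctuation frequency, phase bin)."""
    spec = np.fft.rfft(stack - stack.mean(axis=0), axis=0)
    return np.abs(spec) ** 2

# toy drifting pulsar: sub-pulse intensity modulated with P3 = 16
# periods, with a phase-dependent offset mimicking the drift
ns, nbin, P3 = 256, 64, 16
i = np.arange(ns)[:, None]          # single-pulse index
j = np.arange(nbin)[None, :]        # phase bin
stack = 1.0 + np.cos(2 * np.pi * (i / P3 + j / nbin))
power = lrfs(stack)

profile = power.sum(axis=1)         # scrunched along the phase axis
k3 = np.argmax(profile[1:]) + 1     # fluctuation-frequency bin (skip DC)
P3_est = ns / k3                    # recovered P3 in pulse periods
```

Scrunching along the frequency axis instead gives the phase dependence of the drifting power, as in the side panel of Figure 6.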
4. DISCUSSION

4.1. Biases in Nulling Models

Kaplan et al. (2018) demonstrated the bias of Ritchings' method for weaker pulsars using simulated data, for which the mixture model was able to recover the true injected nulling fraction. They also showed that, for Gaussian mixtures, an analytical correction can be applied to the biased estimate of Ritchings' method to recover the true value. We extend the same technique to our sample of 22 pulsars. Figure 8 shows a comparison of the NF estimates derived using both methods. The blue points show the NF estimates derived using Ritchings' algorithm (NFr), the orange points show the NFr estimates corrected for the bias (as in Kaplan et al. 2018), and the green points show the NF derived using mixture modeling. In the case of highly nulling pulsars, the contamination of the null component by the emission component can be small, and both methods perform comparably. However, in the case of pulsars with small NF, a systematic bias can be seen as the pulsar emission component becomes blended with the background noise; the fact that the green and orange points agree quite well demonstrates our confidence in estimating the bias of the Ritchings method and the utility of mixture models.
4.2. Is the Nulling Fraction Correlated with Pulsar Properties?

Comparing the nulling estimates from the mixture modelling and Ritchings' method in Table 4, it can be seen that there can be significant differences between these estimates. Such differences can lead to significant biases in population-wide studies that look for correlations between the nulling fraction and pulsar properties. Figure 9 shows the most complete list of nulling pulsars, extended from Konar & Deka (2019), on the P–Ṗ diagram. We do not find any clear visual trends of NF with respect to period (P), spin-down rate (Ṗ), characteristic age (τc), or surface magnetic field (Bsurf), although we emphasize that most of the pulsars here (142/164) have NF estimates derived using some variant of the Ritchings method.

Our sample size of 22 pulsars is too small to derive reliable correlations. However, we can test the similarity/disparity of the correlations obtained using nulling estimates derived with mixture models versus the Ritchings algorithm. We use the Spearman correlation test, a non-parametric correlation test, to quantify any correlations between the relevant parameters (P, Ṗ, Bsurf, τc) and NF. Table 5 shows the correlation coefficients of the nulling fraction with the parameters of interest (P, Ṗ, Bsurf, τc). In no case do we see evidence for strong correlations, but we can see large differences between the coefficients obtained using the NF derived with the two
Pulsar Nulling with Mixture Models

[Figure 6 left panel: a stack of single pulses of PSR J1822+02 (single pulse number versus phase) showing sub-pulse drifting, with P2 and P3 marked; right panel: the LRFS 2-D spectrogram (frequency in 1/Period versus pulse phase), with summed-power side panels.]

Figure 6. Left: A stack of 300 single pulses of PSR J1822+02, clearly showing the sub-pulse drifting phenomenon. The drifting periods P2 and P3 are shown. Right: LRFS of the single pulse stack of J1822+02. The 2-D spectrogram shows the Fourier transform of the data along the axis of single pulses. The evidence for a single drifting frequency across the phase bins is evident from the spectrogram. The bottom panel shows the 2-D spectrogram scrunched along the phase axis, and the right-hand plot shows the same scrunched along the frequency axis.
1639
methods. We emphasize that these values have to be taken with a high degree of caution, given the relatively small sample size under study and the presence of outliers. In particular, we find that PSR J2310+6706 turns out to be a strong outlier, especially in the τc and Bsurf space, and this significantly affects the results (see Table 5), further illustrating the limitations of a small sample size.

Previously, using a sample size (23) comparable to ours, Wang et al. (2007) qualitatively found that NF is related to age, with the older population experiencing larger nulling fractions. Ritchings (1976) found a positive correlation with both the pulsar period and age in a sample (32) slightly larger than the one in this study. However, as mentioned above, those and most other nulling estimates in the literature are derived using some variant of Ritchings' algorithm. Computing the Spearman coefficient for all of the archival sources, we cannot confirm either correlation and suggest caution in interpreting results that use Ritchings' algorithm.

However, we also note that the source of this disparity does not seem to be straightforward: for a sample of pulsars with a given SNR, the energy per single pulse will be lower for pulsars with shorter periods, which means that the NF estimates for the short-period pulsars should experience larger biases and have higher nulling fractions measured with Ritchings' method. Under the (overly simplistic) assumption of a uniform distribution of luminosity with period (cf. Faucher-Giguère & Kaspi 2006; Bates et al. 2014), the correlation of inferred nulling fraction with period would then be negative, which is contrary to the previous studies. This suggests that the source of this bias is not simple and needs careful understanding of the underlying distribution of NF with pulsar properties and a larger sample of pulsars with more robust and unbiased NF estimates.

Table 5. Spearman rank correlation coefficients for our sample data set and the archival data set.

Parameter     MM        Ritchings   Catalog
P              0.356      0.008      0.311
               0.314     −0.064       ···
|Ṗ|            0.274      0.035     −0.013
               0.457      0.057       ···
τc            −0.353     −0.088      0.149
              −0.557     −0.207       ···
Bsurf          0.291     −0.006      0.110
               0.450      0.071       ···

Note: Not all of the pulsars in the sample have Ṗ measurements, so the sample size used for period is larger. The two rows for each parameter correspond to the rank coefficients including and excluding PSR J2310+6706 (see Figure 10).
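For readers who want to reproduce this kind of check, the Spearman coefficient is simply the Pearson correlation computed on the ranks of the two variables. A minimal NumPy-only sketch follows; the period and NF values in it are invented for illustration and are not our measured sample:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (Assumes distinct values; ties would need average ranks.)"""
    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(len(a), dtype=float)
        return r
    rx, ry = ranks(np.asarray(x, float)), ranks(np.asarray(y, float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Hypothetical toy sample: spin period (s) vs. nulling fraction
period = np.array([0.5, 1.1, 2.3, 0.9, 4.2, 1.7])
nf = np.array([0.05, 0.20, 0.45, 0.10, 0.60, 0.30])
print(spearman_rho(period, nf))  # perfectly monotone toy data -> 1.0
```

A coefficient near ±1 indicates a monotonic relation; coefficients like those in Table 5 (|ρ| ≲ 0.5 with a sample of ~20 objects) are far from compelling on their own.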
1717
4.3. Is Nulling Periodic?

As shown in Section 3.2, we find that nulling appears periodic or quasi-periodic in a subset of pulsars, with their periods noted in Table 4. Herfindal & Rankin (2007, 2009) also find evidence for such signatures and attributed this to the line of sight passing through a structured rotating carousel. In addition, we find that in PSR J0414+31, which was observed at two different frequencies with different instruments, this period is the same. It should be noted that the frequency resolution here is ∼0.004 pulse period⁻¹ (from the stacks of 256 pulses), so we are insensitive to any changes finer than this. Although significant correlations cannot be drawn from these periodicities given our sample size and the number of pulsars that show periodic nulling, the occurrence of such a phenomenon in a modest subset of the pulsars in our sample suggests that it might not be uncommon and should be searched for in future data.

Anumarlapudi et al.

Figure 7. Sub-pulse drifting in PSR J1829+25: The left panels show the stacks of single pulses, in the data taken at AO and GBT, which show the signature of the drifting phenomenon. The right panels show the LRFS (see §3.3) of the single-pulse stacks. The data from AO (top right) show a strong feature with a periodicity of ∼3 pulse periods. The data from GBT (bottom right) show a quasi-periodic (broad) peak consistent with the period from the AO data.
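The LRFS used above is conceptually simple: Fourier transform the single-pulse stack along the pulse-number axis, so a modulation (drifting or periodic nulling) that repeats every N rotations shows up as a peak at 1/N cycles per period. A rough NumPy sketch on a synthetic stack follows; all numbers in it are invented for the demonstration, and this is not our reduction pipeline:

```python
import numpy as np

def lrfs(stack):
    """Longitude-resolved fluctuation spectrum: power of the FFT taken
    along the pulse-number axis of a (n_pulse, n_phase) stack."""
    n_pulse = stack.shape[0]
    spec = np.abs(np.fft.rfft(stack - stack.mean(axis=0), axis=0)) ** 2
    freqs = np.fft.rfftfreq(n_pulse)  # in cycles per pulse period
    return freqs, spec

# Synthetic stack: 256 pulses, a Gaussian pulse window in phase,
# with the intensity modulated every 3 rotations
n_pulse, n_phase = 256, 64
phase = np.arange(n_phase)
window = np.exp(-0.5 * ((phase - 32) / 4.0) ** 2)
modulation = 1.0 + np.cos(2 * np.pi * np.arange(n_pulse) / 3.0)
stack = modulation[:, None] * window[None, :]

freqs, spec = lrfs(stack)
scrunched = spec.sum(axis=1)                # collapse along pulse phase
peak = freqs[np.argmax(scrunched[1:]) + 1]  # skip the DC bin
print(peak)  # close to 1/3 cycles per period
```

With 256-pulse stacks the frequency resolution is 1/256 ≈ 0.004 cycles per period, which is the limit quoted in the text.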
1831
5. CONCLUSIONS

In this study, we have extended the Gaussian mixture model of Kaplan et al. (2018) to study nulling behavior in 22 pulsars, spanning a wider range of properties than in the initial paper but still not selected independently of nulling behavior. We find that all of the pulsars can be well represented by a mixture model, but a single Gaussian is not sufficient to describe the emission component in some pulsars⁷. Similar to Kaplan et al. (2018), we find that previous methods used to estimate NF can suffer significant biases when the pulsar emission is weak compared to the background noise. Such biases may lead to misinterpreting weak pulsars as nulling pulsars. We also show that these biases may lead to spurious correlations between the NF and pulsar properties in population-wide studies.

Figure 8. Comparison of NF estimates from Ritchings' algorithm and the mixture model as a function of pulsar emission-component significance (in units of σOFF). The blue error bars show the estimates from Ritchings' algorithm, while the orange error bars are from the mixture model. The green error bars are derived by estimating the systematic bias in Ritchings' method and clearly depict the bias in the cases where the emission component is weak compared to the background.

Figure 9. Period-period derivative (P−Ṗ) diagram highlighting nulling pulsars. Shown in grey circles are all the pulsars from the ATNF catalog (Manchester et al. 2005), in colored circles are the archival nulling pulsars from Konar & Deka (2019), and in diamonds are the pulsars from this study. The contours represent lines of constant characteristic age (τc) and dipolar surface magnetic field (Bsurf). The color bar shows the nulling fraction, which ranges from 0 to 1. No clearly discernible trend of NF with any of P/Ṗ/Bsurf/τc is visible.

Drawing on the more robust statistics that we calculate, we find that nulling can appear periodic, with three pulsars in our sample showing this behavior. Two pulsars in our sample, PSR J1822+02 and PSR J1829+25, show clear signs of sub-pulse drifting, and they have inferred nulling fractions consistent with 0. In contrast, studies like Gajjar et al. (2014a) and Davies et al. (1984) find sub-pulse drifting in pulsars that exhibit moderate nulling, indicating that sub-pulse drifting and nulling might be two independent manifestations of sub-pulse intensity variations. In all cases we look forward to using larger, less-biased samples to more robustly explore the nulling population and to see if it is related to other phenomenology.
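The essence of the mixture-model NF estimate can be caricatured in a few lines: model the ON-window intensity distribution as a weighted sum of a noise-like null component and one or more emission components, and read the NF off as the weight of the null component. The sketch below is a bare-bones expectation-maximization fit to simulated intensities; it is not the MCMC machinery of Kaplan et al. (2018), and every number in it is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ON-window intensities: 30% nulls (pure noise about zero)
# and 70% emission pulses (offset, broader Gaussian)
nf_true, n = 0.3, 5000
is_null = rng.random(n) < nf_true
x = np.where(is_null, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.2, n))

def gauss(v, mu, sig):
    return np.exp(-0.5 * ((v - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

# EM for a two-component 1-D Gaussian mixture
w = np.array([0.5, 0.5])
mu = np.array([0.0, x.mean() + 1.0])
sig = np.array([x.std(), x.std()])
for _ in range(200):
    resp = w[None, :] * gauss(x[:, None], mu[None, :], sig[None, :])
    resp /= resp.sum(axis=1, keepdims=True)   # E-step: responsibilities
    nk = resp.sum(axis=0)                     # M-step: update parameters
    w = nk / n
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((resp * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk)

nf_est = w[np.argmin(np.abs(mu))]  # weight of the zero-centred component
print(nf_est)  # recovers roughly the injected 0.3
```

A hard intensity threshold in the style of Ritchings (1976) would instead misclassify the overlap region between the two components, which is exactly the bias discussed above.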
1929
Two pulsars in our sample, PSR J0414+31 and PSR J1829+25, were observed at two different frequencies (430 MHz and 820 MHz), albeit not simultaneously. PSR J1829+25 has nulling estimates that agree at both frequencies, consistent with 0, but we find that PSR J0414+31 has NF estimates in tension at the ∼2σ level, with the NF higher at lower frequencies. Although it is hard to draw definite conclusions from these two pulsars since the observations are not simultaneous, this emphasizes the need for simultaneous observations at multiple frequencies (or across a larger bandwidth). Observing at 4 different frequencies (325, 610, 1400, 4850 MHz), Gajjar et al. (2014a) find coherent nulling in three different pulsars, whereas Bhat et al. (2007) find evidence for a null excess at lower frequencies in PSR B1133+16, further emphasizing the need for multi-frequency observations of a larger sample to find whether nulling is universally broadband.

One of the pulsars in our sample (PSR J2310+6706) has a two-component profile with a faint leading peak in addition to the primary peak. The very low SNR of the leading component limits our ability to find a stringent estimate of the NF independent of the primary component, but we find that the NF values obtained from the two components are consistent. Analyzing nulling characteristics in pulsars with multi-component pulse profiles with a robust method like mixture modeling can provide insights into simultaneous nulling in the different regions of the pulsar's magnetosphere.

⁷ PSR J0054+6946 is better described by 2 different emission components, one at lower amplitude and the other at higher amplitude, as seen in Figure 11.

Figure 10. Scatter plot showing the NF of the pulsars in this study vs. their properties. It can be seen that the pulsars appear scattered in the P/Ṗ space. However, with the exclusion of PSR J2310+6706, which appears as an outlier in the τc/Bsurf space, a rough trend can be seen of NF decreasing with age (τc) and increasing with surface magnetic field (Bsurf). The correlation coefficients are given in Table 5.

So far we have only analyzed normal, non-recycled pulsars. Current sensitivity limitations restrict the sample of nulling pulsars to normal pulsars (as is evident from Figure 9), while MSPs are largely unexplored. Initial single-pulse studies by Rajwade et al. (2014) do not find any compelling evidence for nulling in MSPs. Using the mixture-model technique, which does not suffer from the same biases at low signal-to-noise, for MSPs, together with newer higher-sensitivity facilities, may help explore whether the nulling phenomenon affects all pulsars or is limited to a sub-population.
2004
We thank an anonymous referee for helpful suggestions that clarified this work. AA, JS, and DK receive support from National Science Foundation (NSF) Physics Frontiers Center award numbers 1430284 and 2020265. AA thanks Alex McEwen for helpful discussions during the data reduction stage. The Arecibo Observatory is a facility of the NSF operated under cooperative agreement (#AST-1744119) by the University of Central Florida (UCF) in alliance with Universidad Ana G. Méndez (UAGM) and Yang Enterprises (YEI), Inc. The Green Bank Observatory is a facility of the NSF operated under cooperative agreement by Associated Universities, Inc.

Facilities: GBT (GUPPI), Arecibo (PUPPI)

Software: PINT (Luo et al. 2019), PSRCHIVE (van Straten et al. 2011), dspsr (van Straten & Bailes 2011), NumPy (Harris et al. 2020), Matplotlib (Hunter 2007), AstroPy (Astropy Collaboration et al. 2013, 2018), emcee (Foreman-Mackey et al. 2013)

APPENDIX

A. NULLING RESULTS FOR ALL PULSARS

We show pulse profiles, MCMC corner plot results, and nulling histograms for all of the pulsars in our sample.
2046
REFERENCES

Arjunwadkar, M., Rajwade, K., & Gupta, Y. 2014, in Astronomical Society of India Conference Series, Vol. 13, 79–81
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
Backer, D. C. 1970a, Nature, 228, 42, doi: 10.1038/228042a0
—. 1970b, Nature, 228, 1297, doi: 10.1038/2281297a0
—. 1970c, Nature, 228, 752, doi: 10.1038/228752a0
Bates, S. D., Lorimer, D. R., Rane, A., & Swiggum, J. 2014, MNRAS, 439, 2893, doi: 10.1093/mnras/stu157
Bhat, N. D. R., Cordes, J. M., & Chatterjee, S. 2003, ApJ, 584, 782, doi: 10.1086/345775
Bhat, N. D. R., Gupta, Y., Kramer, M., et al. 2007, A&A, 462, 257, doi: 10.1051/0004-6361:20053157
Davies, J. G., Lyne, A. G., Smith, F. G., et al. 1984, MNRAS, 211, 57, doi: 10.1093/mnras/211.1.57
Drake, F. D., & Craft, H. D. 1968, Nature, 220, 231, doi: 10.1038/220231a0
Faucher-Giguère, C.-A., & Kaspi, V. M. 2006, ApJ, 643, 332, doi: 10.1086/501516
Filippenko, A. V., & Radhakrishnan, V. 1982, ApJ, 263, 828, doi: 10.1086/160553
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067
Gajjar, V., Joshi, B. C., & Kramer, M. 2012, MNRAS, 424, 1197, doi: 10.1111/j.1365-2966.2012.21296.x
Gajjar, V., Joshi, B. C., Kramer, M., Karuppusamy, R., & Smits, R. 2014a, ApJ, 797, 18, doi: 10.1088/0004-637X/797/1/18
Gajjar, V., Joshi, B. C., & Wright, G. 2014b, MNRAS, 439, 221, doi: 10.1093/mnras/stt2389
Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
Herfindal, J. L., & Rankin, J. M. 2007, MNRAS, 380, 430, doi: 10.1111/j.1365-2966.2007.12089.x
—. 2009, MNRAS, 393, 1391, doi: 10.1111/j.1365-2966.2008.14119.x
Honnappa, S., Lewandowski, W., Kijak, J., et al. 2012, MNRAS, 421, 1996, doi: 10.1111/j.1365-2966.2012.20424.x
Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
Ivezić, Ž., Connolly, A. J., VanderPlas, J. T., & Gray, A. 2020, Statistics, Data Mining, and Machine Learning in Astronomy: A Practical Python Guide for the Analysis of Survey Data, Updated Edition
Kaplan, D. L., Swiggum, J. K., Fichtenbauer, T. D. J., & Vallisneri, M. 2018, ApJ, 855, 14, doi: 10.3847/1538-4357/aaab62
Konar, S., & Deka, U. 2019, Journal of Astrophysics and Astronomy, 40, 42, doi: 10.1007/s12036-019-9608-z
Kramer, M., Lyne, A. G., O'Brien, J. T., Jordan, C. A., & Lorimer, D. R. 2006, Science, 312, 549, doi: 10.1126/science.1124060
Lomax, R. 2007, Statistical Concepts: A Second Course (Lawrence Erlbaum Associates). https://books.google.com/books?id=p17rT373FNAC
Lorimer, D. R., & Kramer, M. 2004, Handbook of Pulsar Astronomy, Vol. 4
Luo, J., Ransom, S., Demorest, P., et al. 2019, PINT: High-precision pulsar timing analysis package, Astrophysics Source Code Library, record ascl:1902.007. http://ascl.net/1902.007
Lynch, R. S., Boyles, J., Ransom, S. M., et al. 2013, ApJ, 763, 81, doi: 10.1088/0004-637X/763/2/81
Lynch, R. S., Swiggum, J. K., Kondratiev, V. I., et al. 2018, ApJ, 859, 93, doi: 10.3847/1538-4357/aabf8a
Lyne, A. G. 2009, in Astrophysics and Space Science Library, Vol. 357, ed. W. Becker, 67, doi: 10.1007/978-3-540-76965-1_4
Manchester, R. N., Hobbs, G. B., Teoh, A., & Hobbs, M. 2005, AJ, 129, 1993, doi: 10.1086/428488
McKinnon, M. M. 2014, PASP, 126, 476, doi: 10.1086/676975
McLaughlin, M. A., Lyne, A. G., Lorimer, D. R., et al. 2006, Nature, 439, 817, doi: 10.1038/nature04440
Pedregosa, F., Varoquaux, G., Gramfort, A., et al. 2011, Journal of Machine Learning Research, 12, 2825. http://jmlr.org/papers/v12/pedregosa11a.html
Rajwade, K., Gupta, Y., Kumar, U., & Arjunwadkar, M. 2014, in Astronomical Society of India Conference Series, Vol. 13, 73–77
Ransom, S. M., Demorest, P., Ford, J., et al. 2009, in American Astronomical Society Meeting Abstracts, Vol. 214, 605.08
Ransom, S. M., Eikenberry, S. S., & Middleditch, J. 2002, AJ, 124, 1788, doi: 10.1086/342285
Redman, S. L., & Rankin, J. M. 2009, MNRAS, 395, 1529, doi: 10.1111/j.1365-2966.2009.14632.x
Ritchings, R. T. 1976, MNRAS, 176, 249, doi: 10.1093/mnras/176.2.249
Rosen, R., Swiggum, J., McLaughlin, M. A., et al. 2013, ApJ, 768, 85, doi: 10.1088/0004-637X/768/1/85
Ruderman, M. A., & Sutherland, P. G. 1975, ApJ, 196, 51, doi: 10.1086/153393
Sheikh, S. Z., & MacDonald, M. G. 2021, MNRAS, 502, 4669, doi: 10.1093/mnras/stab282
Smith, F. G. 1973, MNRAS, 161, 9P, doi: 10.1093/mnras/161.1.9P
Stovall, K., Lynch, R. S., Ransom, S. M., et al. 2014, ApJ, 791, 67, doi: 10.1088/0004-637X/791/1/67
Taylor, J. H., Manchester, R. N., & Huguenin, G. R. 1975, ApJ, 195, 513, doi: 10.1086/153351
van Straten, W., & Bailes, M. 2011, PASA, 28, 1, doi: 10.1071/AS10021
van Straten, W., Demorest, P., Khoo, J., et al. 2011, PSRCHIVE: Development Library for the Analysis of Pulsar Astronomical Data, Astrophysics Source Code Library, record ascl:1105.014. http://ascl.net/1105.014
Wang, N., Manchester, R. N., & Johnston, S. 2007, MNRAS, 377, 1383, doi: 10.1111/j.1365-2966.2007.11703.x
Wilks, S. 2008, Mathematical Statistics (Read Books). https://books.google.com/books?id=iMDWgCcqswkC
2170
Figure 11. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR J0054+6946. In this case the best-fit model is a 3-component Gaussian mixture; the corner-plot best fit gives NF = 0.273.
2276
Figure 12. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR J0111+6624. In this case the best-fit model is a 2-component Gaussian mixture; the corner-plot best fit gives NF = 0.108.
2360
Figure 13. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR J0325+6744. In this case the best-fit model is a 2-component Gaussian mixture; the corner-plot best fit gives NF = 0.54.
2446
Figure 14. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR J0355+28. In this case the best-fit model is a 2-component Gaussian mixture; the corner-plot best fit gives NF = 0.018.
2528
Figure 15. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR J0414+31 (GBT). In this case the best-fit model is a 2-component Gaussian mixture; the corner-plot best fit gives NF = 0.275.
2609
Figure 16. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR J0414+31 (Arecibo). In this case the best-fit model is a 2-component Gaussian mixture; the corner-plot best fit gives NF = 0.329.
2688
Figure 17. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR J0614+83. In this case the best-fit model is a 2-component Gaussian mixture; the corner-plot best fit gives NF = 0.074.
2770
Figure 18. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR J0738+6904. In this case the best-fit model is a 2-component exponential-convolved Gaussian mixture; the corner-plot best fit gives NF = 0.666.
2853
Figure 19. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR J1529-26. In this case the best-fit model is a 2-component Gaussian mixture; the corner-plot best fit gives NF = 0.054.
2934
+
2935
+ 28
2936
+ Anumarlapudi et al.
2937
+ 0.0
2938
+ 0.5
2939
+ 1.0
2940
+ 0.0
2941
+ 0.2
2942
+ 0.4
2943
+ 0.6
2944
+ 0.8
2945
+ 1.0
2946
+ Pulse phase
2947
+ 0
2948
+ 100
2949
+ 200
2950
+ 300
2951
+ 400
2952
+ Single pulses
2953
+ ON
2954
+ OFF
2955
+ Intensity
2956
+ µ0
2957
+ µ0=-0.018
2958
+ 1.50
2959
+ 1.75
2960
+ 2.00
2961
+ µ1
2962
+ µ1=1.786
2963
+ 0.55
2964
+ 0.60
2965
+ 0.65
2966
+ σ0
2967
+ σ0=0.607
2968
+ 1.20
2969
+ 1.35
2970
+ σ1
2971
+ σ1=1.33
2972
+ −0.05
2973
+ 0.00
2974
+ µ0
2975
+ 0.40
2976
+ 0.48
2977
+ NF
2978
+ 1.50
2979
+ 1.75
2980
+ 2.00
2981
+ µ1
2982
+ 0.55
2983
+ 0.60
2984
+ 0.65
2985
+ σ0
2986
+ 1.20
2987
+ 1.35
2988
+ σ1
2989
+ 0.40
2990
+ 0.48
2991
+ NF
2992
+ NF=0.429
2993
+ −4
2994
+ −2
2995
+ 0
2996
+ 2
2997
+ 4
2998
+ 6
2999
+ 8
3000
+ 10
3001
+ Raw Intensity
3002
+ 0.0
3003
+ 0.1
3004
+ 0.2
3005
+ 0.3
3006
+ 0.4
3007
+ 0.5
3008
+ 0.6
3009
+ Probability Density
3010
+ MM On fit
3011
+ ON/OFF histograms
3012
+ MM Off fit
3013
+ MM emission fit
3014
+ MM null fit
3015
+ Figure 20. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3016
+ J1536-30. In this case the best fit model is a 2-component Gaussian mixture
3017
+
3018
+ Pulsar Nulling with Mixture Models
3019
+ 29
3020
+ 0
3021
+ 1
3022
+ 0.0
3023
+ 0.2
3024
+ 0.4
3025
+ 0.6
3026
+ 0.8
3027
+ 1.0
3028
+ Pulse phase
3029
+ 0
3030
+ 100
3031
+ 200
3032
+ 300
3033
+ 400
3034
+ Single pulses
3035
+ ON
3036
+ OFF
3037
+ Intensity
3038
+ µ0
3039
+ µ0=0.153
3040
+ 4.5
3041
+ 6.0
3042
+ 7.5
3043
+ µ1
3044
+ µ1=5.61
3045
+ 2.55
3046
+ 2.70
3047
+ σ0
3048
+ σ0=2.655
3049
+ 4.8
3050
+ 5.6
3051
+ σ1
3052
+ σ1=4.952
3053
+ 0.00
3054
+ 0.15
3055
+ 0.30
3056
+ µ0
3057
+ 0.78
3058
+ 0.84
3059
+ 0.90
3060
+ NF
3061
+ 4.5
3062
+ 6.0
3063
+ 7.5
3064
+ µ1
3065
+ 2.55
3066
+ 2.70
3067
+ σ0
3068
+ 4.8
3069
+ 5.6
3070
+ σ1
3071
+ 0.78
3072
+ 0.84
3073
+ 0.90
3074
+ NF
3075
+ NF=0.84
3076
+ −10
3077
+ 0
3078
+ 10
3079
+ 20
3080
+ 30
3081
+ Raw Intensity
3082
+ 0.00
3083
+ 0.02
3084
+ 0.04
3085
+ 0.06
3086
+ 0.08
3087
+ 0.10
3088
+ 0.12
3089
+ 0.14
3090
+ 0.16
3091
+ Probability Density
3092
+ MM On fit
3093
+ ON/OFF histograms
3094
+ MM Off fit
3095
+ MM emission fit
3096
+ MM null fit
3097
+ Figure 21. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3098
+ J1629+33. In this case the best fit model is a 2-component Gaussian mixture
3099
+
3100
+ 30
3101
+ Anumarlapudi et al.
3102
+ 0.0
3103
+ 0.5
3104
+ 1.0
3105
+ 0.0
3106
+ 0.2
3107
+ 0.4
3108
+ 0.6
3109
+ 0.8
3110
+ 1.0
3111
+ Pulse phase
3112
+ 0
3113
+ 100
3114
+ 200
3115
+ 300
3116
+ 400
3117
+ Single pulses
3118
+ ON
3119
+ OFF
3120
+ Intensity
3121
+ µ0
3122
+ µ0=0.0
3123
+ 1.00
3124
+ 1.04
3125
+ 1.08
3126
+ µ1
3127
+ µ1=1.028
3128
+ 0.60
3129
+ 0.65
3130
+ 0.70
3131
+ σ0
3132
+ σ0=0.646
3133
+ 0.850
3134
+ 0.875
3135
+ σ1
3136
+ σ1=0.861
3137
+ −0.04
3138
+ 0.00
3139
+ 0.04
3140
+ µ0
3141
+ 0.015
3142
+ 0.030
3143
+ NF
3144
+ 1.00
3145
+ 1.04
3146
+ 1.08
3147
+ µ1
3148
+ 0.60
3149
+ 0.65
3150
+ 0.70
3151
+ σ0
3152
+ 0.850
3153
+ 0.875
3154
+ σ1
3155
+ 0.015
3156
+ 0.030
3157
+ NF
3158
+ NF=0.004
3159
+ −2
3160
+ 0
3161
+ 2
3162
+ 4
3163
+ 6
3164
+ Raw Intensity
3165
+ 0.0
3166
+ 0.1
3167
+ 0.2
3168
+ 0.3
3169
+ 0.4
3170
+ 0.5
3171
+ 0.6
3172
+ Probability Density
3173
+ MM On fit
3174
+ ON/OFF histograms
3175
+ MM Off fit
3176
+ MM emission fit
3177
+ MM null fit
3178
+ Figure 22. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3179
+ J1821+4147. In this case the best fit model is a 2-component Gaussian mixture
3180
+
3181
+ Pulsar Nulling with Mixture Models
3182
+ 31
3183
+ 0.0
3184
+ 0.5
3185
+ 1.0
3186
+ 0.0
3187
+ 0.2
3188
+ 0.4
3189
+ 0.6
3190
+ 0.8
3191
+ 1.0
3192
+ Pulse phase
3193
+ 0
3194
+ 100
3195
+ 200
3196
+ 300
3197
+ 400
3198
+ Single pulses
3199
+ ON
3200
+ OFF
3201
+ Intensity
3202
+ µ0
3203
+ µ0=0.002
3204
+ 0.99
3205
+ 1.02
3206
+ 1.05
3207
+ µ1
3208
+ µ1=1.003
3209
+ 0.30
3210
+ 0.33
3211
+ 0.36
3212
+ σ0
3213
+ σ0=0.334
3214
+ 0.58
3215
+ 0.60
3216
+ 0.62
3217
+ σ1
3218
+ σ1=0.596
3219
+ −0.02
3220
+ 0.00
3221
+ 0.02
3222
+ µ0
3223
+ 0.02
3224
+ 0.04
3225
+ NF
3226
+ 0.99
3227
+ 1.02
3228
+ 1.05
3229
+ µ1
3230
+ 0.30
3231
+ 0.33
3232
+ 0.36
3233
+ σ0
3234
+ 0.58
3235
+ 0.60
3236
+ 0.62
3237
+ σ1
3238
+ 0.02
3239
+ 0.04
3240
+ NF
3241
+ NF=0.007
3242
+ −3
3243
+ −2
3244
+ −1
3245
+ 0
3246
+ 1
3247
+ 2
3248
+ 3
3249
+ 4
3250
+ Raw Intensity
3251
+ 0.0
3252
+ 0.2
3253
+ 0.4
3254
+ 0.6
3255
+ 0.8
3256
+ 1.0
3257
+ 1.2
3258
+ Probability Density
3259
+ MM On fit
3260
+ ON/OFF histograms
3261
+ MM Off fit
3262
+ MM emission fit
3263
+ MM null fit
3264
+ Figure 23. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3265
+ J1822+02. In this case the best fit model is a 2-component Gaussian mixture
3266
+
3267
+ 32
3268
+ Anumarlapudi et al.
3269
+ 0.0
3270
+ 0.5
3271
+ 1.0
3272
+ 0.0
3273
+ 0.2
3274
+ 0.4
3275
+ 0.6
3276
+ 0.8
3277
+ 1.0
3278
+ Pulse phase
3279
+ 0
3280
+ 100
3281
+ 200
3282
+ 300
3283
+ 400
3284
+ Single pulses
3285
+ ON
3286
+ OFF
3287
+ Intensity
3288
+ µ0
3289
+ µ0=0.014
3290
+ 1.00
3291
+ 1.05
3292
+ 1.10
3293
+ µ1
3294
+ µ1=1.04
3295
+ 0.36
3296
+ 0.42
3297
+ σ0
3298
+ σ0=0.389
3299
+ 0.57
3300
+ 0.60
3301
+ 0.63
3302
+ σ1
3303
+ σ1=0.584
3304
+ −0.04
3305
+ 0.00
3306
+ 0.04
3307
+ µ0
3308
+ 0.02
3309
+ 0.04
3310
+ NF
3311
+ 1.00
3312
+ 1.05
3313
+ 1.10
3314
+ µ1
3315
+ 0.36
3316
+ 0.42
3317
+ σ0
3318
+ 0.57
3319
+ 0.60
3320
+ 0.63
3321
+ σ1
3322
+ 0.02
3323
+ 0.04
3324
+ NF
3325
+ NF=0.004
3326
+ −1
3327
+ 0
3328
+ 1
3329
+ 2
3330
+ 3
3331
+ Raw Intensity
3332
+ 0.0
3333
+ 0.2
3334
+ 0.4
3335
+ 0.6
3336
+ 0.8
3337
+ 1.0
3338
+ 1.2
3339
+ Probability Density
3340
+ MM On fit
3341
+ ON/OFF histograms
3342
+ MM Off fit
3343
+ MM emission fit
3344
+ MM null fit
3345
+ Figure 24. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3346
+ J1829+25 (GBT). In this case the best fit model is a 2-component Gaussian mixture
3347
+
3348
+ Pulsar Nulling with Mixture Models
3349
+ 33
3350
+ 0.0
3351
+ 0.5
3352
+ 1.0
3353
+ 0.0
3354
+ 0.2
3355
+ 0.4
3356
+ 0.6
3357
+ 0.8
3358
+ 1.0
3359
+ Pulse phase
3360
+ 0
3361
+ 50
3362
+ 100
3363
+ 150
3364
+ 200
3365
+ 250
3366
+ 300
3367
+ Single pulses
3368
+ ON
3369
+ OFF
3370
+ Intensity
3371
+ µ0
3372
+ µ0=0.014
3373
+ 1.00
3374
+ 1.05
3375
+ 1.10
3376
+ µ1
3377
+ µ1=1.04
3378
+ 0.36
3379
+ 0.42
3380
+ σ0
3381
+ σ0=0.389
3382
+ 0.57
3383
+ 0.60
3384
+ 0.63
3385
+ σ1
3386
+ σ1=0.584
3387
+ −0.04
3388
+ 0.00
3389
+ 0.04
3390
+ µ0
3391
+ 0.02
3392
+ 0.04
3393
+ NF
3394
+ 1.00
3395
+ 1.05
3396
+ 1.10
3397
+ µ1
3398
+ 0.36
3399
+ 0.42
3400
+ σ0
3401
+ 0.57
3402
+ 0.60
3403
+ 0.63
3404
+ σ1
3405
+ 0.02
3406
+ 0.04
3407
+ NF
3408
+ NF=0.004
3409
+ −1
3410
+ 0
3411
+ 1
3412
+ 2
3413
+ 3
3414
+ Raw Intensity
3415
+ 0.0
3416
+ 0.2
3417
+ 0.4
3418
+ 0.6
3419
+ 0.8
3420
+ 1.0
3421
+ 1.2
3422
+ 1.4
3423
+ Probability Density
3424
+ MM On fit
3425
+ ON/OFF histograms
3426
+ MM Off fit
3427
+ MM emission fit
3428
+ MM null fit
3429
+ Figure 25. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3430
+ J1829+25 (AO). In this case the best fit model is a 2-component Gaussian mixture
3431
+
3432
+ 34
3433
+ Anumarlapudi et al.
3434
+ 0.0
3435
+ 0.5
3436
+ 1.0
3437
+ 0.0
3438
+ 0.2
3439
+ 0.4
3440
+ 0.6
3441
+ 0.8
3442
+ 1.0
3443
+ Pulse phase
3444
+ 0
3445
+ 100
3446
+ 200
3447
+ 300
3448
+ 400
3449
+ Single pulses
3450
+ ON
3451
+ OFF
3452
+ Intensity
3453
+ µ0
3454
+ µ0=-0.015
3455
+ 1.0
3456
+ 1.2
3457
+ 1.4
3458
+ µ1
3459
+ µ1=1.17
3460
+ 1.4
3461
+ 1.6
3462
+ 1.8
3463
+ σ0
3464
+ σ0=1.646
3465
+ 1.3
3466
+ 1.4
3467
+ 1.5
3468
+ σ1
3469
+ σ1=1.384
3470
+ −0.15
3471
+ 0.00
3472
+ 0.15
3473
+ µ0
3474
+ 0.15
3475
+ 0.30
3476
+ NF
3477
+ 1.0
3478
+ 1.2
3479
+ 1.4
3480
+ µ1
3481
+ 1.4
3482
+ 1.6
3483
+ 1.8
3484
+ σ0
3485
+ 1.3
3486
+ 1.4
3487
+ 1.5
3488
+ σ1
3489
+ 0.15
3490
+ 0.30
3491
+ NF
3492
+ NF=0.147
3493
+ −7.5
3494
+ −5.0
3495
+ −2.5
3496
+ 0.0
3497
+ 2.5
3498
+ 5.0
3499
+ 7.5
3500
+ 10.0
3501
+ Raw Intensity
3502
+ 0.00
3503
+ 0.05
3504
+ 0.10
3505
+ 0.15
3506
+ 0.20
3507
+ 0.25
3508
+ 0.30
3509
+ Probability Density
3510
+ MM On fit
3511
+ ON/OFF histograms
3512
+ MM Off fit
3513
+ MM emission fit
3514
+ MM null fit
3515
+ Figure 26. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3516
+ J1901-04. In this case the best fit model is a 2-component Gaussian mixture
3517
+
3518
+ Pulsar Nulling with Mixture Models
3519
+ 35
3520
+ 0.0
3521
+ 0.5
3522
+ 1.0
3523
+ 0.0
3524
+ 0.2
3525
+ 0.4
3526
+ 0.6
3527
+ 0.8
3528
+ 1.0
3529
+ Pulse phase
3530
+ 0
3531
+ 100
3532
+ 200
3533
+ 300
3534
+ 400
3535
+ Single pulses
3536
+ ON
3537
+ OFF
3538
+ Intensity
3539
+ µ0
3540
+ µ0=-0.002
3541
+ 0.990
3542
+ 1.005
3543
+ µ1
3544
+ µ1=0.997
3545
+ 0.475
3546
+ 0.500
3547
+ 0.525
3548
+ σ0
3549
+ σ0=0.503
3550
+ 0.59
3551
+ 0.60
3552
+ σ1
3553
+ σ1=0.593
3554
+ −0.015
3555
+ 0.000
3556
+ 0.015
3557
+ µ0
3558
+ 0.003
3559
+ 0.006
3560
+ NF
3561
+ 0.990
3562
+ 1.005
3563
+ µ1
3564
+ 0.475
3565
+ 0.500
3566
+ 0.525
3567
+ σ0
3568
+ 0.59
3569
+ 0.60
3570
+ σ1
3571
+ 0.003
3572
+ 0.006
3573
+ NF
3574
+ NF=0.001
3575
+ −2
3576
+ −1
3577
+ 0
3578
+ 1
3579
+ 2
3580
+ 3
3581
+ 4
3582
+ Raw Intensity
3583
+ 0.0
3584
+ 0.1
3585
+ 0.2
3586
+ 0.3
3587
+ 0.4
3588
+ 0.5
3589
+ 0.6
3590
+ 0.7
3591
+ 0.8
3592
+ Probability Density
3593
+ MM On fit
3594
+ ON/OFF histograms
3595
+ MM Off fit
3596
+ MM emission fit
3597
+ MM null fit
3598
+ Figure 27. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3599
+ J1904+33. In this case the best fit model is a 2-component Gaussian mixture
3600
+
3601
+ 36
3602
+ Anumarlapudi et al.
3603
+ 0.0
3604
+ 0.5
3605
+ 1.0
3606
+ 0.0
3607
+ 0.2
3608
+ 0.4
3609
+ 0.6
3610
+ 0.8
3611
+ 1.0
3612
+ Pulse phase
3613
+ 0
3614
+ 100
3615
+ 200
3616
+ 300
3617
+ 400
3618
+ Single pulses
3619
+ ON
3620
+ OFF
3621
+ Intensity
3622
+ µ0=-0.015
3623
+ 1.8
3624
+ 2.1
3625
+ µ1=1.836
3626
+ 1.4
3627
+ 1.5
3628
+ 1.6
3629
+ σ0=1.467
3630
+ 2.40
3631
+ 2.55
3632
+ σ1=2.517
3633
+ −0.08
3634
+ 0.00
3635
+ 0.08
3636
+ 0.40
3637
+ 0.48
3638
+ 0.56
3639
+ 1.8
3640
+ 2.1
3641
+ 1.4
3642
+ 1.5
3643
+ 1.6
3644
+ 2.40
3645
+ 2.55
3646
+ 0.40
3647
+ 0.48
3648
+ 0.56
3649
+ NF=0.476
3650
+ −10
3651
+ −5
3652
+ 0
3653
+ 5
3654
+ 10
3655
+ 15
3656
+ Raw Intensity
3657
+ 0.00
3658
+ 0.05
3659
+ 0.10
3660
+ 0.15
3661
+ 0.20
3662
+ 0.25
3663
+ 0.30
3664
+ Probability Density
3665
+ MM On fit
3666
+ ON/OFF histograms
3667
+ MM Off fit
3668
+ MM emission fit
3669
+ MM null fit
3670
+ Figure 28. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3671
+ J1928+28. In this case the best fit model is a 2-component Gaussian mixture
3672
+
3673
+ Pulsar Nulling with Mixture Models
3674
+ 37
3675
+ 0.0
3676
+ 0.5
3677
+ 1.0
3678
+ 0.0
3679
+ 0.2
3680
+ 0.4
3681
+ 0.6
3682
+ 0.8
3683
+ 1.0
3684
+ Pulse phase
3685
+ 0
3686
+ 100
3687
+ 200
3688
+ 300
3689
+ 400
3690
+ Single pulses
3691
+ ON
3692
+ OFF
3693
+ Intensity
3694
+ µ0
3695
+ µ0=0.006
3696
+ 1.04
3697
+ 1.12
3698
+ µ1
3699
+ µ1=1.027
3700
+ 0.78
3701
+ 0.84
3702
+ 0.90
3703
+ σ0
3704
+ σ0=0.831
3705
+ 0.92
3706
+ 0.96
3707
+ 1.00
3708
+ σ1
3709
+ σ1=0.979
3710
+ −0.05
3711
+ 0.00
3712
+ 0.05
3713
+ µ0
3714
+ 0.04
3715
+ 0.08
3716
+ NF
3717
+ 1.04
3718
+ 1.12
3719
+ µ1
3720
+ 0.78
3721
+ 0.84
3722
+ 0.90
3723
+ σ0
3724
+ 0.92
3725
+ 0.96
3726
+ 1.00
3727
+ σ1
3728
+ 0.04
3729
+ 0.08
3730
+ NF
3731
+ NF=0.013
3732
+ −4
3733
+ −2
3734
+ 0
3735
+ 2
3736
+ 4
3737
+ 6
3738
+ Raw Intensity
3739
+ 0.0
3740
+ 0.1
3741
+ 0.2
3742
+ 0.3
3743
+ 0.4
3744
+ 0.5
3745
+ Probability Density
3746
+ MM On fit
3747
+ ON/OFF histograms
3748
+ MM Off fit
3749
+ MM emission fit
3750
+ MM null fit
3751
+ Figure 29. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3752
+ J1941+02. In this case the best fit model is a 2-component Gaussian mixture
3753
+
3754
+ 38
3755
+ Anumarlapudi et al.
3756
+ 0.0
3757
+ 0.5
3758
+ 1.0
3759
+ 0.0
3760
+ 0.2
3761
+ 0.4
3762
+ 0.6
3763
+ 0.8
3764
+ 1.0
3765
+ Pulse phase
3766
+ 0
3767
+ 100
3768
+ 200
3769
+ 300
3770
+ 400
3771
+ Single pulses
3772
+ ON
3773
+ OFF
3774
+ Intensity
3775
+ µ0
3776
+ µ0=0.001
3777
+ 1.20
3778
+ 1.25
3779
+ 1.30
3780
+ µ1
3781
+ µ1=1.246
3782
+ 0.120
3783
+ 0.135
3784
+ 0.150
3785
+ σ0
3786
+ σ0=0.137
3787
+ 0.64
3788
+ 0.68
3789
+ 0.72
3790
+ σ1
3791
+ σ1=0.686
3792
+ 0.000
3793
+ 0.015
3794
+ µ0
3795
+ 0.18
3796
+ 0.21
3797
+ NF
3798
+ 1.20
3799
+ 1.25
3800
+ 1.30
3801
+ µ1
3802
+ 0.120
3803
+ 0.135
3804
+ 0.150
3805
+ σ0
3806
+ 0.64
3807
+ 0.68
3808
+ 0.72
3809
+ σ1
3810
+ 0.18
3811
+ 0.21
3812
+ NF
3813
+ NF=0.197
3814
+ −1
3815
+ 0
3816
+ 1
3817
+ 2
3818
+ 3
3819
+ 4
3820
+ 5
3821
+ Raw Intensity
3822
+ 0.0
3823
+ 0.5
3824
+ 1.0
3825
+ 1.5
3826
+ 2.0
3827
+ 2.5
3828
+ 3.0
3829
+ Probability Density
3830
+ MM On fit
3831
+ ON/OFF histograms
3832
+ MM Off fit
3833
+ MM emission fit
3834
+ MM null fit
3835
+ Figure 30. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3836
+ J2000+29. In this case the best fit model is a 2-component Gaussian mixture
3837
+
3838
+ Pulsar Nulling with Mixture Models
3839
+ 39
3840
+ 0.0
3841
+ 0.5
3842
+ 1.0
3843
+ 0.0
3844
+ 0.2
3845
+ 0.4
3846
+ 0.6
3847
+ 0.8
3848
+ 1.0
3849
+ Pulse phase
3850
+ 0
3851
+ 100
3852
+ 200
3853
+ 300
3854
+ 400
3855
+ Single pulses
3856
+ ON
3857
+ OFF
3858
+ Intensity
3859
+ µ0
3860
+ µ0=0.016
3861
+ 1.2
3862
+ 1.3
3863
+ 1.4
3864
+ µ1
3865
+ µ1=1.328
3866
+ 0.68
3867
+ 0.72
3868
+ σ0
3869
+ σ0=0.697
3870
+ 1.05
3871
+ 1.10
3872
+ 1.15
3873
+ σ1
3874
+ σ1=1.119
3875
+ 0.00
3876
+ 0.04
3877
+ µ0
3878
+ 0.20
3879
+ 0.25
3880
+ 0.30
3881
+ NF
3882
+ 1.2
3883
+ 1.3
3884
+ 1.4
3885
+ µ1
3886
+ 0.68
3887
+ 0.72
3888
+ σ0
3889
+ 1.05
3890
+ 1.10
3891
+ 1.15
3892
+ σ1
3893
+ 0.20
3894
+ 0.25
3895
+ 0.30
3896
+ NF
3897
+ NF=0.254
3898
+ −4
3899
+ −2
3900
+ 0
3901
+ 2
3902
+ 4
3903
+ 6
3904
+ Raw Intensity
3905
+ 0.0
3906
+ 0.1
3907
+ 0.2
3908
+ 0.3
3909
+ 0.4
3910
+ 0.5
3911
+ 0.6
3912
+ Probability Density
3913
+ MM On fit
3914
+ ON/OFF histograms
3915
+ MM Off fit
3916
+ MM emission fit
3917
+ MM null fit
3918
+ Figure 31. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3919
+ J2040-21. In this case the best fit model is a 2-component Gaussian mixture
3920
+
3921
+ 40
3922
+ Anumarlapudi et al.
3923
+ 0.0
3924
+ 0.5
3925
+ 1.0
3926
+ 0.0
3927
+ 0.2
3928
+ 0.4
3929
+ 0.6
3930
+ 0.8
3931
+ 1.0
3932
+ Pulse phase
3933
+ 0
3934
+ 100
3935
+ 200
3936
+ 300
3937
+ 400
3938
+ Single pulses
3939
+ ON
3940
+ OFF
3941
+ Intensity
3942
+ µ0
3943
+ µ0=-0.001
3944
+ 1.16
3945
+ 1.20
3946
+ µ1
3947
+ µ1=1.188
3948
+ 0.180
3949
+ 0.195
3950
+ 0.210
3951
+ σ0
3952
+ σ0=0.198
3953
+ 0.475
3954
+ 0.500
3955
+ 0.525
3956
+ σ1
3957
+ σ1=0.499
3958
+ −0.015
3959
+ 0.000
3960
+ 0.015
3961
+ µ0
3962
+ 0.14
3963
+ 0.16
3964
+ NF
3965
+ 1.16
3966
+ 1.20
3967
+ µ1
3968
+ 0.180
3969
+ 0.195
3970
+ 0.210
3971
+ σ0
3972
+ 0.475
3973
+ 0.500
3974
+ 0.525
3975
+ σ1
3976
+ 0.14
3977
+ 0.16
3978
+ NF
3979
+ NF=0.152
3980
+ 0
3981
+ 1
3982
+ 2
3983
+ 3
3984
+ 4
3985
+ Raw Intensity
3986
+ 0.0
3987
+ 0.5
3988
+ 1.0
3989
+ 1.5
3990
+ 2.0
3991
+ 2.5
3992
+ Probability Density
3993
+ MM On fit
3994
+ ON/OFF histograms
3995
+ MM Off fit
3996
+ MM emission fit
3997
+ MM null fit
3998
+ Figure 32. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
3999
+ J2044+28. In this case the best fit model is a 2-component Gaussian mixture
4000
+
4001
+ Pulsar Nulling with Mixture Models
4002
+ 41
4003
+ 0.0
4004
+ 0.5
4005
+ 1.0
4006
+ 0.0
4007
+ 0.2
4008
+ 0.4
4009
+ 0.6
4010
+ 0.8
4011
+ 1.0
4012
+ Pulse phase
4013
+ 0
4014
+ 50
4015
+ 100
4016
+ 150
4017
+ 200
4018
+ 250
4019
+ 300
4020
+ Single pulses
4021
+ ON
4022
+ OFF
4023
+ Intensity
4024
+ µ0
4025
+ µ0=0.02
4026
+ 1.6
4027
+ 2.4
4028
+ µ1
4029
+ µ1=2.032
4030
+ 0.75
4031
+ 1.00
4032
+ 1.25
4033
+ σ0
4034
+ σ0=0.998
4035
+ 0.8
4036
+ 1.2
4037
+ 1.6
4038
+ σ1
4039
+ σ1=1.026
4040
+ −0.25
4041
+ 0.00
4042
+ 0.25
4043
+ µ0
4044
+ 0.25
4045
+ 0.50
4046
+ NF
4047
+ 1.6
4048
+ 2.4
4049
+ µ1
4050
+ 0.75
4051
+ 1.00
4052
+ 1.25
4053
+ σ0
4054
+ 0.8
4055
+ 1.2
4056
+ 1.6
4057
+ σ1
4058
+ 0.25
4059
+ 0.50
4060
+ NF
4061
+ NF=0.485
4062
+ −2
4063
+ 0
4064
+ 2
4065
+ 4
4066
+ Raw Intensity
4067
+ 0.0
4068
+ 0.1
4069
+ 0.2
4070
+ 0.3
4071
+ 0.4
4072
+ 0.5
4073
+ 0.6
4074
+ Probability Density
4075
+ MM On fit
4076
+ ON/OFF histograms
4077
+ MM Off fit
4078
+ MM emission fit
4079
+ MM null fit
4080
+ Figure 33. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
4081
+ J2131-31. In this case the best fit model is a 2-component Gaussian mixture
4082
+
4083
+ 42
4084
+ Anumarlapudi et al.
4085
+ 0.0
4086
+ 0.5
4087
+ 1.0
4088
+ 0.0
4089
+ 0.2
4090
+ 0.4
4091
+ 0.6
4092
+ 0.8
4093
+ 1.0
4094
+ Pulse phase
4095
+ 0
4096
+ 100
4097
+ 200
4098
+ 300
4099
+ 400
4100
+ Single pulses
4101
+ ON
4102
+ OFF
4103
+ Intensity
4104
+ µ0
4105
+ µ0=0.03
4106
+ 0.0
4107
+ 0.3
4108
+ 0.6
4109
+ µ1
4110
+ µ1=0.371
4111
+ 0.70
4112
+ 0.75
4113
+ 0.80
4114
+ σ0
4115
+ σ0=0.727
4116
+ 0.2
4117
+ 0.4
4118
+ 0.6
4119
+ σ1
4120
+ σ1=0.349
4121
+ 0.54
4122
+ 0.60
4123
+ 0.66
4124
+ λ
4125
+ λ=0.589
4126
+ 0.00
4127
+ 0.05
4128
+ 0.10
4129
+ µ0
4130
+ 0.30
4131
+ 0.45
4132
+ 0.60
4133
+ NF
4134
+ 0.0
4135
+ 0.3
4136
+ 0.6
4137
+ µ1
4138
+ 0.70
4139
+ 0.75
4140
+ 0.80
4141
+ σ0
4142
+ 0.2
4143
+ 0.4
4144
+ 0.6
4145
+ σ1
4146
+ 0.54
4147
+ 0.60
4148
+ 0.66
4149
+ λ
4150
+ 0.30
4151
+ 0.45
4152
+ 0.60
4153
+ NF
4154
+ NF=0.533
4155
+ −2.5
4156
+ 0.0
4157
+ 2.5
4158
+ 5.0
4159
+ 7.5
4160
+ 10.0
4161
+ Raw Intensity
4162
+ 0.0
4163
+ 0.1
4164
+ 0.2
4165
+ 0.3
4166
+ 0.4
4167
+ 0.5
4168
+ 0.6
4169
+ Probability Density
4170
+ MM On fit
4171
+ ON/OFF histograms
4172
+ MM Off fit
4173
+ MM emission fit
4174
+ MM null fit
4175
+ Figure 34. Single pulse stack (upper left), MCMC corner plot (bottom), and pulse intensity histogram (upper right) for PSR
4176
+ J2310+6706. In this case the best fit model is a 2-component Exponential convolved Gaussian mixture
4177
+
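For reference, the 2-component Gaussian mixture named in these captions weights a narrow null (off) component by the nulling fraction NF and an emission component by 1 - NF. The sketch below is only an illustration of such a density, using the Figure 19 best-fit values as example inputs; the paper's exact likelihood may differ.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Normal density N(x; mu, sigma)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(x, nf, mu0, sigma0, mu1, sigma1):
    """2-component Gaussian mixture: NF weights the null component,
    (1 - NF) weights the emission component."""
    return nf * gaussian_pdf(x, mu0, sigma0) + (1.0 - nf) * gaussian_pdf(x, mu1, sigma1)

# Illustrative values read off the Figure 19 corner plot (PSR J1529-26).
params = dict(nf=0.054, mu0=0.011, sigma0=1.35, mu1=1.082, sigma1=1.389)

# A valid density should integrate to ~1 over a wide intensity range.
xs = [-20.0 + 0.01 * i for i in range(4001)]
area = sum(mixture_pdf(x, **params) * 0.01 for x in xs)
```

Fitting then amounts to maximizing the product of this density over the observed single-pulse intensities, with NF reported as the posterior nulling fraction.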
-NFQT4oBgHgl3EQfKDVk/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-dE4T4oBgHgl3EQfDwsm/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7f64b6e79ee79323d98af9b51bfeafb7ad98d8a59ade308c5d52381bf9188bde
3
+ size 4128813
-tFST4oBgHgl3EQfcjgx/content/tmp_files/2301.13803v1.pdf.txt ADDED
@@ -0,0 +1,1855 @@
Fairness-aware Vision Transformer via Debiased Self-Attention

Yao Qiang, Chengyin Li, Prashant Khanduri, Dongxiao Zhu
Department of Computer Science, Wayne State University
{yao,cyli,khanduri.prashant,dzhu}@wayne.edu

Abstract

Vision Transformer (ViT) has recently gained significant interest in solving computer vision (CV) problems due to its capability of extracting informative features and modeling long-range dependencies through the self-attention mechanism. To fully realize the advantages of ViT in real-world applications, recent works have explored the trustworthiness of ViT, including its robustness and explainability. However, another desideratum, fairness, has not yet been adequately addressed in the literature. We establish that the existing fairness-aware algorithms (primarily designed for CNNs) do not perform well on ViT. This necessitates the development of our novel framework via Debiased Self-Attention (DSA). DSA is a fairness-through-blindness approach that enforces ViT to eliminate spurious features correlated with the sensitive attributes for bias mitigation. Notably, adversarial examples are leveraged to locate and mask the spurious features in the input image patches. In addition, DSA utilizes an attention weights alignment regularizer in the training objective to encourage learning informative features for target prediction. Importantly, our DSA framework leads to improved fairness guarantees over prior works on multiple prediction tasks without compromising target prediction performance.
1. Introduction

Recently, Vision Transformer (ViT) [11, 30] has emerged as an architectural paradigm and a viable alternative to the standard Convolutional Neural Network (CNN) [19, 27, 42] for computer vision (CV) tasks. Unlike CNN, ViT is capable of extracting global relationships via the self-attention mechanism as well as informative features from the input images, leading to impressive feature representation capabilities. Consequently, ViT has demonstrated improved performance in a variety of CV tasks, including image classification [11, 30], object detection [3, 9], semantic segmentation [55, 67], and image generation [21], to name a few. Due to its promising performance, it is anticipated that ViT will form the architectural backbone of CV algorithms in the near future for real-world applications. This has led researchers to analyze the trustworthiness of ViT for solving CV tasks.

[Figure 1 panels: (a) Original Image, (b) Vanilla, (c) DSA.]
Figure 1. An illustrative example. The prediction target is hair color and the sensitive attribute is gender. The heatmaps of attention weights show that the vanilla ViT (b) uses gender-sensitive features, e.g., ‘red lip’ and ‘eye shadow’, whereas our fairness-aware ViT DSA (c) uses informative features, e.g., ‘hair’, for predictions.

Studying the robustness of ViT has recently attracted growing interest [2, 13, 38, 50, 68]. It is critical to improve ViT’s robustness in order to deploy it safely in the real world. On the other hand, investigating ViT’s vulnerability to attacks can give us a deeper understanding of its underlying working mechanism. In the past, researchers have dissected the self-attention mechanism [1, 47] and gradient-based attribution [4] to offer a faithful explanation of the inner workings of ViT, or Transformers at large.

Besides robustness and explainability, fairness also stands as a core trustworthiness desideratum for both industry [20] and academia [7]. Several studies show that many deep-learning-based CV models simply make predictions by exploiting spurious correlations with the input features [23, 58]. These spurious features are statistically informative features that work for a majority of training examples but do not capture the underlying relationship between the input features and the target labels. For illustration, let us consider the example in Figure 1 (taken from the CelebA dataset). Since the target label, hair color, is spuriously correlated with gender-related sensitive attributes, e.g., ‘eye shadow’ or ‘red lips’ in Figure 1(b), a vanilla ViT model would simply learn these spurious features as a shortcut to predict the hair color, whereas our fairness-aware ViT model learns the informative features, e.g., ‘hair’ in Figure 1(c), to make the prediction.

arXiv:2301.13803v1 [cs.CV] 31 Jan 2023
Such spurious correlations can cause ViT to behave in a biased manner, e.g., with lower performance on some population subgroups [54, 61]. Although an array of debiasing algorithms has been proposed for image classification tasks [23, 36, 46, 60, 65, 66], most are designed for learning with CNN models. Whether these algorithms are compatible with, or even transferable to, the ViT architecture is unclear. Regardless of the neural network architecture, limiting the spurious correlation between the input features and the target labels for bias mitigation is still a challenging problem. The difficulty arises from the fact that automatically locating the spurious features in the input images is computationally intractable. For example, one simple solution is to have domain experts and/or crowd workers curate the entire training set, which neither works well with unknown bias [29] nor is scalable to large-scale datasets [39]. Moreover, even if one can identify the spurious features, the major challenge is how to make the classifier blind to such features. Image in-painting [35, 59] appears to be a promising approach to remove the undesired features; nevertheless, significant challenges remain regarding which old features to cut out for debiasing and which new features to fill in to repair the corrupted images.

To address the above challenges, we propose a novel framework for ensuring bias-mitigated training of ViT via Debiased Self-Attention (DSA) to decouple the target prediction from the spurious features. DSA takes a hierarchical approach: in the first stage, we localize the spurious features among the input image patches. This is achieved by training a bias-only model that exploits the spurious features to explicitly predict the sensitive attributes (e.g., gender and race). We then use adversarial attacks against the bias-only model to identify and perturb (or mask) the top patches that are responsible for the decreased accuracy in predicting the sensitive attributes. Notably, our approach for fair ViTs is a novel addition to the growing body of work on “adversarial examples for fairness” [62, 66].

DSA relies on the intuitive hypothesis that adversarial attacks, initially designed to evaluate and understand the robustness of ViT, can also be a viable approach for identifying and removing the spurious features towards training fair ViT models. While it remains a matter of contention whether the approaches that generate adversarial examples for CNNs are transferable to ViT [17, 41, 44, 45, 53], the work in [13] proposes Patch-Fool as one of the first approaches to fool the self-attention mechanism by attacking image patches (as opposed to pixels) during ViT’s self-attention computations. In this work, we apply Patch-Fool to attack the bias-only model with the goal of capturing the most important patches for learning the sensitive attributes. As a result, the effect of sensitive features can be mitigated with adversarial examples, which are constructed by directly perturbing (attacking) the sensitive patches.
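The first-stage idea can be sketched as follows. This is a simplified illustration, not the Patch-Fool attack itself: a gradient-norm patch score stands in as a proxy for Patch-Fool’s patch selection, and the shapes and the choice of k are hypothetical.

```python
import numpy as np

def rank_patches_by_saliency(patch_grads):
    """Rank patches by the L2 norm of the bias-only model's gradient
    w.r.t. each patch embedding (a proxy for adversarial patch scoring)."""
    scores = np.linalg.norm(patch_grads, axis=1)   # (num_patches,)
    return np.argsort(scores)[::-1]                # most sensitive first

def mask_top_patches(patches, patch_grads, k):
    """Zero out the k patches most responsible for predicting the
    sensitive attribute, yielding a 'debiased' input."""
    order = rank_patches_by_saliency(patch_grads)
    masked = patches.copy()
    masked[order[:k]] = 0.0
    return masked

rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 768))   # e.g., 14x14 patches, ViT-Base embedding dim
grads = rng.normal(size=(196, 768))     # gradients from the bias-only model (stand-in)
debiased = mask_top_patches(patches, grads, k=10)
```

In DSA the perturbation comes from the adversarial attack rather than simple zeroing, but the end result is the same kind of debiased example used to augment training.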
142
+ In the second stage, in addition to augmenting the
143
+ original training set with these adversarial examples as the
144
+ debiased training set, we also align the biased examples
145
+ and their corresponding (unbiased) adversarial examples
146
+ via an attention weights aligning regularizer tailor-made
147
+ for self-attention mechanism in ViT. This leads to a novel
148
+ training objective that encourages learning informative
149
+ features while ensuring fairness of the trained ViT models.
150
+ Major contributions: We summarize our major contribu-
151
+ tions as follows: (1) We tackle the under-addressed fair-
152
+ ness problem in ViT from a novel perspective of leveraging
153
+ adversarial examples to eliminate spurious features while
154
+ utilizing attention weights alignment to retain informative
155
+ features. (2) We design a novel DSA framework for ViT
156
+ to mitigate bias in both training set and learning algorithm
157
+ via identifying and decorrelating the sensitive features from
158
+ the target label. (3) DSA presents a flexible and modular
159
+ debiasing approach that can be used either standalone or
160
+ with other fairness-aware training algorithms to ensure ViT
161
+ fairness. (4) Experimental results show that DSA improves
162
+ group fairness with respect to demographic parity (DP) and
163
+ equality of odds (EO) metrics while achieving a competitive
164
+ or even better prediction accuracy compared to the base-
165
+ lines. The qualitative analysis further indicates that DSA
166
+ has reduced attention on sensitive features.
167
+ 2. Related Work
168
+ ViT based Classification.
169
+ The earlier exploration of ViT
170
+ either used a hybrid architecture combining convolution and
171
+ self-attention [3] or a pure self-attention architecture with-
172
+ out convolution [48]. The work in [11] proposed a ViT
173
+ that achieves impressive results on image classification us-
174
+ ing the ImageNet dataset. This success has motivated a se-
175
+ ries of subsequent works to further exploit ViT’s expressive
176
+ power from various perspectives, such as incorporating lo-
177
+ cality into ViT [28,30,63], and finding well-performing ViT
178
+ using neural architecture search (NAS) [6].
179
+ Fairness and Debiased Learning.
180
+ The field of fairness in
181
+ deep learning has grown significantly over the past several
182
+ years as a result of bias in training data and algorithms [36,
183
+ 46]. The existing techniques for debiased learning can be
184
+ roughly categorized into pre-, in-, and post-processing.
185
+ – Pre-processing methods attempt to debias and increase
186
+ the quality of the training set with the assumption that fair
187
+ training sets would result in fair models [8, 25, 66]. The
188
+ work in [66] proposed to balance the data distribution over
189
+ different protected attributes by generating adversarial ex-
190
+ amples to supplement the training dataset. Similarly, [25]
191
+ generated the bias-swapped image augmentations to bal-
192
+ ance protected attributes, which would remove spurious
193
+
194
+ correlation between the target label and protected attributes.
195
+ In [8], the authors presented fair mixup as a new data aug-
196
+ mentation method to generate interpolated samples to find
197
+ middle-ground representation for different sensitive groups.
198
+ The work [46] described a novel generative data augmen-
199
+ tation approach to create counterfactual samples that d-
200
+ separates the sensitive attributes and the targets ensuring
201
+ fairness and attribution-based explainability.
202
+ – In-processing approaches aim to mitigate bias during the
203
+ training process by directly modifying the learning algo-
204
+ rithm and model weights with specifically designed fair-
205
+ ness penalties/constraints or adversarial mechanism [24,36,
206
+ 40, 49, 65]. To enforce the fairness constraints, one line
207
+ of works either disentangles the association between model
208
+ predictions and the sensitive attributes via an auxiliary reg-
209
+ ularization term [40] or minimizes the performance differ-
210
+ ence between protected groups with a novel objective func-
211
+ tion [49]. However, the issue is that the trained models may
212
+ behave differently at the inference stage even though such
213
+ fairness constraints are satisfied during the training. An-
214
+ other line of works [24, 36, 60, 65] enforce the model to
215
+ generate fair outputs with adversarial training techniques
216
+ through the min-max objective: maximizing accuracy while
217
+ minimizing the ability of a discriminator to predict the pro-
218
+ tected (sensitive) attribute. Nevertheless, this process can
219
+ compromise the model performance on the main prediction
220
+ task. A further line of works imposes either orthogonal-
221
+ ity [51], disentanglement [32], or feature alignment [23]
222
+ constraints on the feature representation and force the repre-
223
+ sentation to be agnostic to the sensitive attributes. We note
224
+ that most of these approaches are exclusively designed for
225
+ CNN architectures, and whether these approaches are trans-
226
+ ferable to the ViT has not yet been demonstrated.
227
+ – Post-processing techniques directly calibrate or modify
228
+ the classifier’s decisions to certain fairness criteria at infer-
229
+ ence time [26, 33]. These methods require access to the
230
+ sensitive attribute for fair inference, which may not be fea-
231
+ sible in real-world applications due to the salient security
232
+ and privacy concerns.
233
+ Fairness in ViT.
234
+ Recently, [16] explored how the spuri-
235
+ ous correlations are manifested in ViT and performed exten-
236
+ sive experiments to understand the role of the self-attention
237
+ mechanism in debiased learning of ViT. Despite the new
238
+ insights, the authors did not provide any debiasing tech-
239
+ niques for ViT. The authors in [56] proposed a new method,
240
+ named TADeT, for debiasing ViT that aims to discover and
241
+ remove bias primarily from query matrix features. To our
242
+ knowledge, this is the only published work along the line
243
+ of fairness ViT. Nevertheless, this pioneering work TADeT
244
+ has two weaknesses: first, it requires parameter sharing
245
+ across the key and value weights in self-attention mecha-
246
+ nism, which may conflict with most ViT architectures; sec-
247
+ ond, the complex alignment strategy on the query matrix
248
+ is not straightforward and well investigated. Thus, TADeT
249
+ does not even outperform the compared baselines that are
+ primarily designed for CNNs.
251
+ In contrast to the above works, this work tackles the de-
252
+ biasing problem through a novel perspective of fairness-
253
+ through-adversarial-attack. The proposed DSA framework
254
+ combines the strengths of both pre- and in-processing ap-
255
+ proaches via leveraging data augmentation (for ensuring
256
+ fairness in the training set) and feature alignment for bias
257
+ mitigation. The adversarial examples are used to both dis-
258
+ entangle spurious features from informative features and to
259
+ align attention weights, specifically, tailor-made for the self-
260
+ attention mechanism in ViT.
261
+ 3. Preliminaries
262
+ 3.1. Overview of Vision Transformer
263
+ Similar to the Transformer architecture [57], the ViT model
264
+ expects the input to be a linear sequence of token/patch
265
+ embeddings. An input image is first partitioned into non-
266
+ overlapping fixed-size square patches with resolution p×p,
267
+ resulting in a sequence of flattened 2D patches. For ex-
268
+ ample, given an image of size 384 × 384 and patch size
269
+ p = 16, the image is divided into patches of resolution
270
+ 16 × 16, resulting in 576 image patches. These patches are
271
+ then mapped to constant-size embeddings using a trainable
272
+ linear projection. In this example, the projection layer will
273
+ produce 576 embedding vectors of fixed dimensions. Next,
274
+ position embeddings are added to the patch embeddings to
275
+ imbibe relative positional information of the patches. Fi-
276
+ nally. the ViT model prepends a learnable embedding (class
277
+ token) to the sequence of embedded patches following [10],
278
+ which is used as image representation at the model’s output.
279
+ The core architecture of ViT mainly consists of mul-
280
+ tiple stacked encoder blocks, where each block primarily
281
+ consists of a Multi-head Self Attention (MSA) layer and
282
+ a Feed-Forward Network (FFN) layer.
283
+ Within the MSA
284
+ layer, multiple self-attention heads learn relationships be-
285
+ tween each pair of input patches. Using three different lin-
286
+ ear transformations, the input patch xi is first projected to
287
+ a query qi, a key ki, and a value vi in each self-attention
288
+ head, where i indexes the patches. The query qi then
+ computes dot products with all the keys k, which are
+ further scaled and normalized by the softmax layer to obtain
+ the attention weights. The head then outputs hi as the
+ weighted sum of all the values v under the obtained attention weights.
293
+ Finally, the outputs from all heads are concatenated and
294
+ re-projected by a linear layer into an output patch. FFN
295
+ consists of two linear layers, which are connected by the
296
+ GeLU activation function and process each hi ∈ Rd from
297
+ the precedent MSA layer individually. Both MSA and FFN
298
+ layers function as the residual connection.
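The patch partition and single-head self-attention computation described above can be sketched in plain NumPy (a toy illustration with random weights, not the actual ViT implementation; shapes follow the 384 × 384, p = 16 running example):

```python
import numpy as np

def patchify(img, p):
    """Split an (H, W, C) image into non-overlapping p x p patches,
    each flattened into a vector of length p * p * C."""
    H, W, C = img.shape
    assert H % p == 0 and W % p == 0
    x = img.reshape(H // p, p, W // p, p, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * C)

def self_attention(X, Wq, Wk, Wv):
    """One self-attention head over a sequence of patch embeddings X (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # scaled dot products
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    A = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return A @ V, A                               # outputs and attention weights

rng = np.random.default_rng(0)
img = rng.random((384, 384, 3))
patches = patchify(img, 16)                       # (576, 768): 576 patches
d = 64
X = patches @ rng.standard_normal((768, d)) * 0.01  # toy linear projection
H, A = self_attention(X, *(rng.standard_normal((d, d)) * 0.1 for _ in range(3)))
```

Each row of A sums to 1 and gives one patch's attention distribution over all 576 patches; the real model additionally prepends the class token, adds position embeddings, and runs multiple heads and blocks.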
299
+
300
+ 3.2. Fairness Metrics
301
+ Many different notions of fairness have been proposed in
+ the literature [12, 18]. In this work, we mainly focus on
+ the two most widely used definitions, demographic parity [12]
+ and equalized odds [18], as the metrics to assess group
+ fairness of the model. Demographic Parity (DP) measures
+ whether the true positive rates across all groups (defined by
+ a sensitive attribute s, e.g., gender) are equal, particularly
+ between the vulnerable minority group (s = 0) and others
+ (s = 1), formally: DP = TPR_{s=1} − TPR_{s=0}.
+ Equalized Odds (EO) is used to understand the disparities in
+ both the true positive rates and the false positive rates of
+ the vulnerable group compared to others:
+ EO = (1/2)[TPR_{s=1} − TPR_{s=0}] + (1/2)[FPR_{s=1} − FPR_{s=0}].
+ In addition, we also use Accuracy (ACC) and Balanced
+ Accuracy (BA) [43], where
+ BA = (1/4)[TPR_{s=0} + TNR_{s=0} + TPR_{s=1} + TNR_{s=1}],
+ to evaluate the utility of the model. However, when a dataset
+ is class imbalanced, BA will have an implicit bias against the
+ minority class. Therefore, we introduce the Difference of
+ Balanced Accuracy (DBA) as a way to measure the difference in
+ a model's performance across groups defined by a sensitive
+ attribute while accounting for class imbalance, formally:
+ DBA = (1/2)[TPR_{s=1} + TNR_{s=1}] − (1/2)[TPR_{s=0} + TNR_{s=0}].
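As a concrete reference, all four metrics can be computed directly from per-group true/false positive rates (a minimal sketch; the function and variable names are ours):

```python
import numpy as np

def rates(y_true, y_pred):
    """True-positive and true-negative rates of binary predictions."""
    tpr = np.mean(y_pred[y_true == 1] == 1)
    tnr = np.mean(y_pred[y_true == 0] == 0)
    return tpr, tnr

def fairness_metrics(y_true, y_pred, s):
    """DP, EO, BA, and DBA as defined above, for a binary sensitive attribute s."""
    tpr0, tnr0 = rates(y_true[s == 0], y_pred[s == 0])
    tpr1, tnr1 = rates(y_true[s == 1], y_pred[s == 1])
    fpr0, fpr1 = 1 - tnr0, 1 - tnr1
    dp  = tpr1 - tpr0
    eo  = 0.5 * (tpr1 - tpr0) + 0.5 * (fpr1 - fpr0)
    ba  = 0.25 * (tpr0 + tnr0 + tpr1 + tnr1)
    dba = 0.5 * (tpr1 + tnr1) - 0.5 * (tpr0 + tnr0)
    return float(dp), float(eo), float(ba), float(dba)

# A perfectly accurate, perfectly fair toy case: all gaps are 0 and BA is 1.
y = np.array([0, 1, 0, 1]); s = np.array([0, 0, 1, 1])
print(fairness_metrics(y, y, s))  # (0.0, 0.0, 1.0, 0.0)
```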
330
+ 4. The Proposed Framework
331
+ 4.1. Problem Formulation
332
+ We consider a supervised classification task with training
333
+ samples {x, y, s} ∼ pdata, where x ∈ X is the input fea-
334
+ ture, y ∈ Y is the target label, and s ∈ S is an annotated
335
+ sensitive categorical attribute that we wish to protect. Some
336
+ examples of s include gender, race, age or other attributes
337
+ that can identify a certain protected group. We assume that
338
+ the sensitive attributes S can only be used during training
339
+ phase, and are not accessible during the inference (post-
340
+ training phase). Moreover, we suppose that each input fea-
341
+ ture x can be split into two parts, one with sensitive features
342
+ xs that are highly relevant to the sensitive attribute s, and
343
+ the rest xt that are relevant to the prediction of the target
344
+ label y, i.e., we have x = (xs, xt) ∈ X.
345
+ We develop a two-step hierarchical approach for bias
346
+ mitigation, wherein, in the first stage, we localize and mask
347
+ the sensitive attributes xs from the input x in order to
348
+ disentangle xs from xt.
349
+ This is accomplished by trans-
350
+ forming the model prediction from p(x) = p(y|x_s, x_t) to
+ p(x) ∝ p(x′) = p(y|x′_t), where x′ is the sample constructed
353
+ after masking the sensitive attributes xs from x via adver-
354
+ sarial attacks. In the second stage, we utilize the original
355
+ x and the augmented data x′ to train a ViT model f(·) for
356
+ generating the prediction, as ˆy = f(x), while at the same
357
+ time satisfying certain fairness requirements (i.e., DP, EO,
358
+ and DBA) with respect to the sensitive attributes s.
359
+ 4.2. Bias in Training Set and ViT Model
360
+ The tendency of neural networks (including ViT) to learn
361
+ spurious correlations makes them particularly vulnerable
362
+ to utilizing sensitive features to make predictions, thereby,
363
+ propagating biases towards a particular group [15]. This
364
+ issue is particularly salient with the current deep learning
365
+ models that follow the data-driven learning paradigm and
366
+ are trained with imbalanced data set where some sensitive
367
+ features could have a high correlation with certain class la-
368
+ bels. Our work is motivated by the empirical observation
369
+ that the bias in learning is mainly caused by the model’s
370
+ reliance on sensitive features for prediction. Note that the
371
+ sensitive features xs are parts of the input features x that
372
+ are highly predictive of the sensitive attribute s. In Figure
373
+ 1, we visualize the attention weights from the ViT model to
374
+ analyze the importance of different features. In this exam-
375
+ ple, gender is the sensitive attribute that is highly correlated
376
+ with the prediction task of hair color. The Vanilla model
377
+ may pay more attention to the gender-related features, in-
378
+ dicating that it has associated gender with the hair color.
379
+ This association might lead the ViT model to discriminate
380
+ against the female group. We have thus established that, for
381
+ the image classification task using CelebA dataset, the ViT
382
+ model is heavily biased as it relies on the sensitive features
383
+ for prediction. This observation naturally leads to our DSA
384
+ framework for bias mitigation discussed next.
385
+ 4.3. Debiased Self-Attention (DSA) Framework
386
+ The discussion in Section 4.2 demonstrates that the reliance
387
+ of ViT on the sensitive features for prediction can lead to
388
+ bias. Therefore, to mitigate the bias originating from the
389
+ sensitive features, we propose to achieve fairness by miti-
390
+ gating the influence of sensitive features on the prediction
391
+ task. However, note that it is a challenging task to locate the
392
+ sensitive features in the input. To address this challenge, we
393
+ propose a hierarchical framework as discussed in Section
394
+ 4.1. Specifically, our DSA framework follows a two-step
395
+ procedure (Figure 2):
396
+ Step 1: We first train a bias-only model that deliberately
+ maximizes the usage of sensitive features for prediction,
+ followed by an adversarial attack on the bias-only model to
+ localize and mask the sensitive attributes.
+ Step 2: We then train a debiased model with the augmented
+ adversarial examples and attention weights alignment.
402
+ 4.3.1
403
+ Training the Bias-only Model
404
+ Recall that the input feature x = (xs, xt) ∈ X where xs are
405
+ the sensitive features while xt are the target related features.
406
+ The goal of Step 1 (see Section 4.2) is to learn only the sen-
407
+ sitive features xs, during training the bias-only model. To
408
+ achieve this, we first build a bias-only ViT model which
409
+ maximally utilizes the sensitive features for prediction. We
410
+
411
459
+ Figure 2. The DSA framework. The bias-only model is first trained to learn the spurious features (the green patches) for predicting sensitive
460
+ attribute (s ∈ S) (see Section 4.3.1). Adversarial attack is then applied against the bias-only model to generate the adversarial examples,
461
+ (x′), by perturbing the sensitive patches (the grid shadow patches) of the original inputs (x ∈ X) (see Section 4.3.2). Finally, both x
462
+ and x′ are used to train a fairness-aware ViT with an attention weights alignment objective (see Eq. (10)) and learn the target (y)-related
463
+ informative features (the red patches) (see Sections 4.3.3 and 4.3.4). Best viewed in color.
464
+ denote the bias-only model by fB(x, s) = c(h(x), s),
465
+ where h(x) is the intermediate representation of the input
466
+ x, and c(·) maps the intermediate representation to the final
467
+ prediction. Note that h(x) contains only m elements from
468
+ the categories in S, e.g., m = 2 in most of our experimental
469
+ settings. The key motivation of using the m elements for
470
+ input representation h(x) is to force the bias-only model
471
+ to only utilize sensitive attributes to obtain the prediction
472
+ fB(x, s).
473
+ Given N pairs of inputs x_i and sensitive attributes s_i,
+ {x_i, s_i}_{i=1}^{N}, the bias-only model minimizes the
+ following loss:
+ L_B(x) = −(1/N) Σ_{i=1}^{N} [ s_i log(f_B(x_i, s_i)) + (1 − s_i) log(1 − f_B(x_i, s_i)) ].   (1)
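The loss in Eq. (1) is a standard binary cross-entropy on the sensitive label; a minimal NumPy sketch (the probabilities here stand in for the outputs of f_B):

```python
import numpy as np

def bias_only_loss(probs, s, eps=1e-12):
    """Eq. (1): binary cross-entropy of predicted sensitive-attribute
    probabilities against the binary sensitive labels s."""
    probs = np.clip(probs, eps, 1 - eps)  # guard against log(0)
    return -np.mean(s * np.log(probs) + (1 - s) * np.log(1 - probs))

s = np.array([1, 0, 1, 0])
confident = bias_only_loss(np.array([0.99, 0.01, 0.99, 0.01]), s)  # near 0
uncertain = bias_only_loss(np.array([0.5, 0.5, 0.5, 0.5]), s)      # ~0.693
```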
485
+ We illustrate the idea using the example in Figure 2. We
486
+ consider the hair color classification tasks with gender bias.
487
+ Input representation h(x) is denoted using two elements,
488
+ indicating the sensitive attributes male and female, respec-
489
+ tively. The bias-only model fB(x, s) mainly relies on the
490
+ sensitive features, like ‘eye shadow’ and/or ‘red lips’, to
491
+ predict the label as female, while at the same time pay-
492
+ ing nearly no attention to the hair color related features like
493
+ ‘hair’ themselves.
494
+ 4.3.2
495
+ Adversarial Attack Against the Bias-only Model
496
+ After obtaining the bias-only model, the following procedure
+ in Step 1 of the DSA framework localizes and masks
498
+ the spurious (sensitive) features via adversarial attacks that
499
+ are generated using the Patch-Fool construction proposed
500
+ in [13].
501
+ Specifically, Patch-Fool is designed to fool the
502
+ self-attention mechanism in ViTs by attacking their basic
503
+ component (i.e., a single patch) with a series of attention-
504
+ aware optimization techniques, demonstrating that the ViTs
505
+ are more vulnerable to adversarial attacks than the CNNs.
506
+ However, in contrast to [13], instead of applying Patch-Fool
507
+ as an adversarial attack method to evaluate the robustness of
508
+ ViT, we utilize it to efficiently localize and mask the sensi-
509
+ tive features in the inputs. To this end, we adapt the objec-
510
+ tive function of Patch-Fool in order to attack the bias-only
511
+ model on the sensitive labels instead of the target labels.
512
+ Specifically, given the objective function L_B(x) and a series
+ of input image patches X = [x_1, · · · , x_p, · · · , x_n]^T ∈ R^{n×d}
+ with its associated sensitive label s, the objective of the
+ adversarial algorithm is
+ arg max_{1≤p≤n, E∈R^{n×d}} L_B(X + 1_p ⊙ E, s),   (2)
+ where E denotes the adversarial perturbation; 1_p ∈ R^n is the
+ one-hot vector indicating whether the current p-th patch is
+ selected; and ⊙ represents the penetrating face product [13].
+ Thus, Patch-Fool needs to (1) select the adversarial patch p,
+ and (2) optimize the corresponding adversarial perturbation E.
525
+ Selection of p: For the encoder blocks in the ViT, we define
+ t_j^{(l)} = Σ_{h,i} a_j^{(l,h,i)} to measure the importance of
+ the j-th patch in the l-th block based on its contributions to
+ other patches in the self-attention calculation, where
+ a^{(l,h,i)} = [a_1^{(l,h,i)}, · · · , a_n^{(l,h,i)}] denotes the
+ attention distribution for the i-th patch of the h-th head in
+ the l-th block. The motivation behind applying Patch-Fool is
+ to localize the most influential patches according to the
+ predicted sensitive attribute s. Here, we derive the top k
+ (a tunable hyper-parameter) most important patches from
+ arg max_j t_j^{(l)}.
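Assuming the attention maps of one encoder block are available as an array of shape (heads, n, n), the top-k selection by the importance score t_j can be sketched as:

```python
import numpy as np

def top_k_patches(attn, k):
    """Select the k most-attended patches of one encoder block.

    attn[h, i, j] is how much query patch i attends to patch j in head h,
    so t[j] = sum over heads h and query patches i of attn[h, i, j]
    measures patch j's contribution to the other patches."""
    t = attn.sum(axis=(0, 1))        # importance score per patch j
    return np.argsort(t)[::-1][:k]   # indices of the top-k patches

rng = np.random.default_rng(0)
raw = rng.random((8, 576, 576))
attn = raw / raw.sum(axis=-1, keepdims=True)  # rows normalized like softmax
idx = top_k_patches(attn, k=5)
```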
546
+ Optimize E: Given the selected adversarial patch index p from
+ the previous step, an attention-aware loss is applied for the
+ l-th block as L_Attn = Σ_{h,i} a_p^{(l,h,i)}. This loss is
+ maximized so that the adversarial patch p, serving as the
+ target adversarial patch, attracts more attention from the
+ other patches and thus effectively fools ViTs. The perturbation
+ E is then updated based on both the final sensitive
+ classification loss and a layer-wise attention-aware loss:
+ L(X′, s, p) = L_B(X′, s) + α Σ_l L_Attn(X′, p),   (3)
+ where X′ ≜ X + 1_p ⊙ E and α is a weight hyper-parameter set
+ to 0.5 in the experiments. Moreover, PCGrad [64] is adopted
+ to avoid the gradient conflict of the two losses, and E is
+ updated using
+ δ_E = ∇_E L(X′, s, p) − α Σ_l β_l ∇_E L_B(X′, s),   (4)
+ where
+ β_l = 0, if ⟨∇_E L_B(X′, s), ∇_E L_Attn(X′, p)⟩ > 0;
+ β_l = ⟨∇_E L_B(X′, s), ∇_E L_Attn(X′, p)⟩ / ∥∇_E L_B(X′, s)∥², otherwise.   (5)
+ Following PGD [37], we iteratively update E using an Adam
+ optimizer: E_{t+1} = E_t + η · Adam(δ_{E_t}), where η is the
+ step size for each update.
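A single-layer sketch of the conflict-avoiding update in Eqs. (4) and (5), written on flat gradient vectors (our own simplification; the three gradients are passed in explicitly rather than computed by autodiff):

```python
import numpy as np

def pcgrad_delta(grad_joint, grad_b, grad_attn, alpha=0.5):
    """Single-layer version of Eqs. (4)-(5).

    grad_joint: gradient of the joint loss L w.r.t. the perturbation E.
    grad_b:     gradient of the bias-only classification loss L_B.
    grad_attn:  gradient of the attention-aware loss L_Attn.
    beta = 0 when the two gradients agree (positive inner product);
    otherwise beta = <grad_b, grad_attn> / ||grad_b||^2, which adjusts
    the update along grad_b to resolve the conflict."""
    inner = float(grad_b @ grad_attn)
    beta = 0.0 if inner > 0 else inner / float(grad_b @ grad_b)
    return grad_joint - alpha * beta * grad_b

g_joint = np.array([1.0, 1.0])
g_b = np.array([1.0, 0.0])
agree = pcgrad_delta(g_joint, g_b, np.array([1.0, 1.0]))      # beta = 0
conflict = pcgrad_delta(g_joint, g_b, np.array([-1.0, 1.0]))  # beta = -1
```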
586
+ 4.3.3
587
+ Attention Weights Alignment
588
+ After Step 1, the DSA framework generates the adversarial
589
+ example x′, whose top k patches containing sensitive at-
590
+ tributes are perturbed through the adversarial attack. Here,
591
+ besides using these adversarial examples as augmentation
592
+ during training of the debiased ViT models, we also lever-
593
+ age them via attention weights alignment to further guide
594
+ the model to pay more attention to the target-related fea-
595
+ tures. This also allows more sensitive features to be dis-
596
+ covered and ignored by self-attention mechanism in the
597
+ ViT models as shown in Figure 2. In particular, we ap-
598
+ ply three different feature discrepancy metrics D(·, ·), i.e.,
599
+ Mean Squared Error (MSE), Kullback-Leibler Divergence
600
+ (KL-Div), and Attention Transfer (AT), to evaluate the dis-
601
+ crepancy between the attention weights Ax and Ax′ from
602
+ the original sample x and the adversarial example x′, re-
603
+ spectively. We define the three metrics as:
604
+ L_A = D⋆(A^x, A^{x′}),   (6)
+ where D⋆ is either
+ D_MSE(A^x, A^{x′}) = (1/2) Σ_{j∈I} ∥A^x_j − A^{x′}_j∥²,   (7)
+ D_KL-Div(A^x ∥ A^{x′}) = Σ_{j∈I} A^x_j log(A^x_j / A^{x′}_j),   (8)
+ D_AT(A^x, A^{x′}) = (1/2) Σ_{j∈I} ∥ A^x_j / ∥A^x_j∥_2 − A^{x′}_j / ∥A^{x′}_j∥_2 ∥²_2,   (9)
641
+ where I denotes the indices of all the adversarial examples
642
+ and the original example attention weights pairs for which
643
+ we perform alignment. Finally, to incorporate the attention
644
+ distributions of Ax and Ax′ in the objective, we add LA as
645
+ a regularizer in the overall objective.
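The three choices of D⋆ in Eqs. (7), (8), and (9) can be sketched on row-normalized attention weights (a NumPy illustration; the arrays stand in for the aligned attention-weight pairs A^x and A^{x′}):

```python
import numpy as np

def d_mse(Ax, Axp):
    """Eq. (7): half the summed squared differences of attention rows."""
    return 0.5 * np.sum((Ax - Axp) ** 2)

def d_kl(Ax, Axp, eps=1e-12):
    """Eq. (8): KL divergence between attention distributions."""
    return np.sum(Ax * np.log((Ax + eps) / (Axp + eps)))

def d_at(Ax, Axp):
    """Eq. (9): attention-transfer distance on L2-normalized rows."""
    nx = Ax / np.linalg.norm(Ax, axis=-1, keepdims=True)
    ny = Axp / np.linalg.norm(Axp, axis=-1, keepdims=True)
    return 0.5 * np.sum((nx - ny) ** 2)

rng = np.random.default_rng(0)
A = rng.random((4, 16))
A /= A.sum(-1, keepdims=True)  # each row is an attention distribution
```

All three vanish when the original and adversarial attention maps match, so minimizing any of them pulls the debiased model's attention toward the perturbed (sensitive-feature-masked) inputs.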
646
+ 4.3.4
647
+ Overall Loss Function
648
+ Putting the above Steps 1 and 2 together, the overall objec-
649
+ tive for training the proposed debiased model is:
650
+ L = λ_1 L_CE(x, y) + λ_2 L_CE(x′, y) + λ_3 L_A,   (10)
652
+ where LCE denotes the standard cross entropy (CE) loss;
653
+ λ1, λ2, and λ3 are three weighted coefficients for control-
654
+ ling the three losses. These parameters are designed for
655
+ controlling the fairness-utility trade-off. We provide further
656
+ ablation study on these terms in the experiments.
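Assembling the objective in Eq. (10) is then a weighted sum; a minimal sketch with stand-in scalar losses:

```python
def dsa_objective(ce_clean, ce_adv, align, lam1=1.0, lam2=1.0, lam3=1.0):
    """Eq. (10): weighted sum of the CE loss on clean inputs, the CE loss
    on the adversarially masked inputs, and the attention-alignment
    regularizer. lam1/lam2 weight utility, lam3 weights fairness."""
    return lam1 * ce_clean + lam2 * ce_adv + lam3 * align

total = dsa_objective(0.4, 0.6, 0.1, lam3=0.5)
```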
657
+ 5. Experimental Settings
658
+ 5.1. Datasets
659
+ We evaluate the DSA framework on two popular CV
660
+ datasets, namely, Waterbirds [49] and CelebA [31]. Wa-
661
+ terbirds dataset contains spurious correlation between the
662
+ background features S = {water, land} and target label Y =
663
+ {waterbird, landbird}. The spurious correlation is injected
664
+ by pairing waterbirds with the water background and land-
665
+ birds with the land background more frequently, as com-
666
+ pared to other combinations. CelebA dataset, which has
667
+ been widely used in the fairness literature, contains 200k
668
+ celebrity face images with annotations for 40 binary at-
669
+ tributes. We present the results on three settings follow-
670
+ ing [16,56], each with a corresponding binary task (Y) that
671
+ the model is trained to predict, and a binary sensitive at-
672
+ tribute (S) over which we wish the model to be unbiased.
673
+ The three settings described as a tuple (Y, S) are as follows:
674
+ (Gray Hair, Gender), (Wavy Hair, Gender), and (Smiling,
675
+ High Cheekbones). We provide more details of these set-
676
+ tings in the Supplementary Materials.
677
+ 5.2. Implementation Details
678
+ We train the ViT-S/16 models from scratch for each pre-
679
+ diction task. The ViT-S/16 model consists of 196 patches
680
+
681
+ [Figure 3: radar charts comparing ACC, DP, BA, DBA, and EO for Vanilla,
+ TADeT, MMD, MFD, DANN, LAFTRE, AM, and DSA (Ours). Panels: (a) Y: Gray
+ Hair, S: Gender; (b) Y: Wavy Hair, S: Gender; (c) Y: Smiling, S: High
+ Cheekbones.]
786
+ Figure 3. Fairness and accuracy evaluation for all methods over different combinations of target (y) and sensitive (s) on CelebA dataset. For
787
+ DSA, we use LA = DAT . The test accuracies of the bias-only model used in AM and DSA for predicting gender and high cheekbones are
788
+ 82.62% and 80.71%, respectively. The success rates of adversarial attacks are reported in Supplementary Material. The ↙ signs indicate
789
+ the lower value of the corresponding metric is better, while ↗ denotes the higher value is better. Best viewed in color.
790
+ (each representing a 16x16 sub-image), 1 class token patch,
791
+ 12 transformer encoder layers, and 8 attention heads. We
792
+ flatten and project each patch into a 64-dimensional vec-
793
+ tor and add positional embeddings. The embedded patches
794
+ are fed into the ViT encoder. After the ViT encoder pro-
795
+ cesses the patch embeddings, the class token patch is fed
796
+ into 2 fully-connected layers (with hidden state size as 256)
797
+ and a sigmoid layer to produce a single normalized output
798
+ score (since we deal with binary classification). We train the
799
+ ViT models using momentum Stochastic Gradient Descent
800
+ (SGD) with a momentum parameter of 0.9 and an initial
801
+ learning rate of 3e-2 for 20 epochs. We use a fixed batch
802
+ size of 32, gradient clipping at global norm 1, and a cosine
803
+ decay learning rate schedule with a linear warmup follow-
804
+ ing [16]. We select the model with the best accuracy on the
805
+ validation sets.
806
+ 5.3. Baselines
807
+ We select the following debiasing algorithms as baselines
808
+ for performance evaluation, for which we select the best
809
+ model that yields the highest validation performance. To
810
+ our knowledge, besides the proposed DSA and AM as a
811
+ home run method, TADeT is the only third-party fairness-
812
+ aware algorithm tailor-made for ViT while all the others are
813
+ designed for CNN. We consider the following baselines:
814
+ Vanilla [11]: The ViT models are only trained with CE
815
+ loss for target prediction. Attention Masking (AM): The
816
+ self-attention mechanism is critical in ViT as it provides
817
+ important weights for extracting visual features. We pro-
818
+ pose the AM method as a home run that directly masks
819
+ the top-k patches with highest attention scores for the bias-
820
+ only model. Mitigating Bias in ViT via Target Alignment
821
+ (TADeT) [56] uses a targeted alignment strategy for debi-
822
+ asing ViT that aims to identify and remove bias primarily
823
+ from query matrix features. Maximum Mean Discrepancy
824
+ (MMD) [34] calculates the mean of penultimate layer fea-
825
+ ture activation values for each sensitive attribute setting and
826
+ then minimizes their L2 distance. MMD-based Fair Dis-
827
+ tillation (MFD) [23] adds a MMD-based regularizer that
828
+ utilizes the group-indistinguishable predictive features from
829
+ the teacher model while discouraging the student model
830
+ from discriminating against any protected group. Domain
831
+ Adversarial Neural Network (DANN) [14] employs a sen-
832
+ sitive attribute adversary learned on top of the penultimate
833
+ layer activation. The adversarial head consists of two linear
834
+ layers in the same dimension as the class token, followed by
835
+ a sigmoid function. Learning Adversarially Fair and Trans-
836
+ ferable Representation (LAFTR) [36] trains a model with
837
+ a modified adversarial objective that attempts to meet the
838
+ EO fairness criterion. This objective is implemented by
839
+ minimizing the average absolute difference on each task.
840
+ 6. Main Results and Discussion
841
+ In this Section, we report the results of fairness and accu-
842
+ racy evaluations, the ablation study, and the effects of model
843
+ size and patch size.
844
+ In Supplementary Materials, many
845
+ more experimental results are reported, including the im-
846
+ pact of several tunable hyper-parameters, results with dif-
847
+ ferent D⋆ in the regularizer LA, and some qualitative eval-
848
+ uations.
849
+ 6.1. Fairness and Accuracy Evaluations
850
+ We report the fairness and accuracy performance on the
851
+ three aforementioned settings (see Section 5.1) on CelebA
852
+ dataset in Figure 3. We make the following observations.
853
+ First, DSA outperforms all the baselines, demonstrated with
854
+ the largest area (enclosed by the red lines) in the radar
855
+ charts, significantly improving the ViT fairness with lower
856
+ EO, DP, and DBA while maintaining higher accuracy in
857
+ terms of BA and ACC. Second, several baseline methods
858
+
859
+ [Figure 4: DP vs. EO scatter plot. Legend with ACC: Vanilla (62.36),
+ TADeT (69.05), MMD (67.81), MFD (67.36), DANN (60.04), LAFTRE (64.80),
+ AM (61.70), DSA (69.58).]
883
+ Figure 4. Fairness and accuracy evaluation on Waterbirds dataset.
884
+ The ACCs are shown in the legends. All tunable hyper- parameters
885
+ and other settings are same as in Figure 3.
886
+ (e.g., MMD, MFD, and DANN) that have shown strong per-
887
+ formance with CNN models, do not even outperform the
888
+ vanilla model on some fairness metrics (e.g., EO), partic-
889
+ ularly under the (Wavy Hair, Gender) setting. This may
890
+ happen because ViT primarily learns global image fea-
891
+ tures by modeling long-range dependencies using the self-
892
+ attention mechanism, which is fundamentally different form
893
+ convolution-based local feature leaning with CNN. As such,
894
+ these baseline methods (designed for the CNNs) are not
895
+ transferable for bias mitigation with the ViT models. Third,
896
+ we note the home run method AM is also designed by blind-
897
+ ing the sensitive attributes in the input based on only the
898
+ attention weights of the bias-only model. However, sev-
899
+ eral works [1, 22, 52] have questioned whether highly at-
900
+ tentive inputs would significantly impact the model outputs.
901
+ Since self-attention mechanism involves the computation of
902
+ queries, keys, and values, reducing it only to the derived
903
+ attention weights (inner products of queries and keys) can
904
+ be insufficient to capture the importance of the features.
905
+ Hence, the home run AM method fails to achieve a com-
906
+ parable performance with the proposed DSA method.
907
+ Similarly, we observe the same patterns on the results of
908
+ Waterbirds dataset as shown in Figure 4. DSA outperforms
909
+ all other baselines in terms of fairness evaluations, i.e., DP
910
+ and EO, as well as accuracy performance.
911
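For concreteness, the DP and EO gaps reported throughout can be sketched as follows; this is a minimal NumPy version, and the exact averaging used in the paper's EO implementation may differ.

```python
import numpy as np

def dp_gap(y_pred, s):
    """Demographic parity gap: |P(yhat=1 | S=0) - P(yhat=1 | S=1)|."""
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def eo_gap(y_pred, y_true, s):
    """Equalized odds gap: average absolute TPR/FPR difference
    between the two sensitive groups (averaging choice assumed)."""
    gaps = [abs(y_pred[(y_true == y) & (s == 0)].mean()
                - y_pred[(y_true == y) & (s == 1)].mean())
            for y in (0, 1)]
    return sum(gaps) / len(gaps)

# Toy predictions: group S=1 is favored on both label slices.
y_pred = np.array([1, 0, 1, 1])
y_true = np.array([1, 0, 1, 0])
s      = np.array([0, 0, 1, 1])
```

Lower values of both gaps indicate a fairer classifier; a perfectly group-independent predictor drives both to zero.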
6.2. Ablating DSA
The training objective of DSA contains three essential components for bias mitigation. We conduct an ablation study using the (Gray Hair, Gender) setting to analyze their individual contributions and report the results in Table 1 (the other two settings are reported in the Supplementary Materials). We summarize our major findings. First, all of the components contribute towards improved fairness performance across all three fairness metrics (i.e., EO, DP, and DBA). Second, both target (task) related CE losses in Eq. (10) are critical in preventing DSA from compromising prediction performance (otherwise, the accuracy drops from 90.95 to 88.32 and 88.54, respectively). Third, the training objective LA in Eq. (10) contributes the most to the higher fairness measures, as is clearly shown by: EO (0.2934→0.2558), DP (0.2865→0.2337), and DBA (0.0206→0.0031).
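The three-term objective being ablated can be sketched as a weighted sum; the specific combination λ1·LCE(x, y) + λ2·LCE(x′, y) + λ3·LA is an assumed reading of Eq. (10), with the default weights taken from the hyper-parameter study in the Supplementary Materials.

```python
def dsa_objective(l_ce_clean, l_ce_adv, l_align, lam=(1.0, 1.0, 0.5)):
    """Combine the two task CE losses (on original and debiased inputs)
    with the attention-alignment regularizer L_A, weighted by
    (lambda1, lambda2, lambda3). The exact form is an assumption."""
    l1, l2, l3 = lam
    return l1 * l_ce_clean + l2 * l_ce_adv + l3 * l_align

# Dropping a term (setting its weight to 0) reproduces a 'w/o' ablation row.
total = dsa_objective(1.0, 2.0, 4.0)
```

Setting any one weight to zero corresponds to the "w/o" rows of Table 1.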
Table 1. Ablation study of the three training objectives. Best results are bold faced. 'w/o' represents without. (Y: Gray Hair, S: Gender)
Models           EO↓     DP↓     DBA↓    BA↑    ACC↑
L(all)           0.2558  0.2337  0.0031  82.92  90.95
w/o LCE(x, y)    0.2754  0.2541  0.0175  81.21  88.32
w/o LCE(x′, y)   0.2641  0.2503  0.0129  80.65  88.54
w/o LA           0.2934  0.2865  0.0206  81.54  89.91
6.3. Effect of ViT Model Size and Patch Size
We examine the effect of ViT model size and patch size on DSA in Table 2. The ViT-B model is much larger than the ViT-S model, which has 12 self-attention heads in each block and a hidden state size of 256 in the two fully-connected layers. Each patch is flattened and projected into a vector of 768 dimensions. We draw two conclusions from Table 2. First, the larger ViT-B models outperform the smaller ViT-S on most of the fairness and accuracy metrics, demonstrating better feature learning capabilities with higher feature dimensions and more self-attention heads. Second, the smaller patch size (16) achieves better performance on both fairness and accuracy measurements because small patches enable the extraction of more fine-grained features [5].
Table 2. Evaluations with different ViT models (i.e., ViT-B (B) and ViT-S (S)) and patch sizes (i.e., 16 and 32). All tunable hyper-parameters are set the same as in Figure 3. VA is short for Vanilla. (Y: Gray Hair, S: Gender)
Model         EO↓     DP↓     DBA↓    BA↑    ACC↑
B/16   VA     0.2984  0.2841  0.0142  81.95  91.05
       DSA    0.2424  0.2205  0.0081  83.42  91.24
S/16   VA     0.2763  0.3185  0.0422  81.84  90.25
       DSA    0.2558  0.2337  0.0031  82.92  90.95
B/32   VA     0.2982  0.2976  0.0205  81.11  90.16
       DSA    0.2629  0.2520  0.0109  82.73  91.03
S/32   VA     0.3014  0.3213  0.0198  80.64  89.18
       DSA    0.2935  0.3165  0.0086  80.86  89.45
7. Conclusion
In this work, we proposed a novel hierarchical fairness-aware ViT training framework named DSA for bias mitigation in both the training set and the learning algorithm. The DSA framework eliminates the spurious features through adversarial attacks on the bias-only model while retaining the informative features through an attention-weights alignment regularizer. The experimental results demonstrate the effectiveness of DSA for bias mitigation without compromising prediction performance.
References
[1] Samira Abnar and Willem Zuidema. Quantifying attention flow in transformers. arXiv preprint arXiv:2005.00928, 2020.
[2] Srinadh Bhojanapalli, Ayan Chakrabarti, Daniel Glasner, Daliang Li, Thomas Unterthiner, and Andreas Veit. Understanding robustness of transformers for image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10231–10241, 2021.
[3] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision, pages 213–229. Springer, 2020.
[4] Hila Chefer, Shir Gur, and Lior Wolf. Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 782–791, 2021.
[5] Chun-Fu Richard Chen, Quanfu Fan, and Rameswar Panda. CrossViT: Cross-attention multi-scale vision transformer for image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 357–366, 2021.
[6] Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling. AutoFormer: Searching transformers for visual recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12270–12280, 2021.
[7] Alexandra Chouldechova and Aaron Roth. The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810, 2018.
[8] Ching-Yao Chuang and Youssef Mroueh. Fair mixup: Fairness via interpolation. arXiv preprint arXiv:2103.06503, 2021.
[9] Zhigang Dai, Bolun Cai, Yugeng Lin, and Junying Chen. UP-DETR: Unsupervised pre-training for object detection with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1601–1610, 2021.
[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[12] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 214–226, 2012.
[13] Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, and Yingyan Lin. Patch-Fool: Are vision transformers always robust against adversarial perturbations? arXiv preprint arXiv:2203.08392, 2022.
[14] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pages 1180–1189. PMLR, 2015.
[15] Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665–673, 2020.
[16] Soumya Suvra Ghosal, Yifei Ming, and Yixuan Li. Are vision transformers robust to spurious correlations? arXiv preprint arXiv:2203.09125, 2022.
[17] Jindong Gu, Volker Tresp, and Yao Qin. Are vision transformers robust to patch perturbations? In European Conference on Computer Vision, pages 404–421. Springer, 2022.
[18] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems, 29, 2016.
[19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[20] Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudik, and Hanna Wallach. Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–16, 2019.
[21] Drew A Hudson and Larry Zitnick. Generative adversarial transformers. In International Conference on Machine Learning, pages 4487–4499. PMLR, 2021.
[22] Sarthak Jain and Byron C Wallace. Attention is not explanation. arXiv preprint arXiv:1902.10186, 2019.
[23] Sangwon Jung, Donggyu Lee, Taeeon Park, and Taesup Moon. Fair feature distillation for visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12115–12124, 2021.
[24] Byungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, and Junmo Kim. Learning not to learn: Training deep neural networks with biased data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9012–9020, 2019.
[25] Eungyeup Kim, Jihyeon Lee, and Jaegul Choo. BiaSwap: Removing dataset bias with bias-tailored swapping augmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14992–15001, 2021.
[26] Michael P Kim, Amirata Ghorbani, and James Zou. Multiaccuracy: Black-box post-processing for fairness in classification. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 247–254, 2019.
[27] Xin Li, Xiangrui Li, Deng Pan, and Dongxiao Zhu. Improving adversarial robustness via probabilistically compact loss with logit constraints. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 8482–8490, 2021.
[28] Yawei Li, Kai Zhang, Jiezhang Cao, Radu Timofte, and Luc Van Gool. LocalViT: Bringing locality to vision transformers. arXiv preprint arXiv:2104.05707, 2021.
[29] Zhiheng Li, Anthony Hoogs, and Chenliang Xu. Discover and mitigate unknown biases with debiasing alternate networks. In European Conference on Computer Vision, pages 270–288. Springer, 2022.
[30] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021.
[31] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738, 2015.
[32] Francesco Locatello, Gabriele Abbati, Thomas Rainforth, Stefan Bauer, Bernhard Schölkopf, and Olivier Bachem. On the fairness of disentangled representations. Advances in Neural Information Processing Systems, 32, 2019.
[33] Pranay K Lohia, Karthikeyan Natesan Ramamurthy, Manish Bhide, Diptikalyan Saha, Kush R Varshney, and Ruchir Puri. Bias mitigation post-processing for individual and group fairness. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2847–2851. IEEE, 2019.
[34] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning, pages 97–105. PMLR, 2015.
[35] Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. RePaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11461–11471, June 2022.
[36] David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In International Conference on Machine Learning, pages 3384–3393. PMLR, 2018.
[37] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[38] Xiaofeng Mao, Gege Qi, Yuefeng Chen, Xiaodan Li, Ranjie Duan, Shaokai Ye, Yuan He, and Hui Xue. Towards robust vision transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12042–12051, 2022.
[39] Tyler McDonnell, Matthew Lease, Mucahid Kutlu, and Tamer Elsayed. Why is that relevant? Collecting annotator rationales for relevance judgments. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 4, pages 139–148, 2016.
[40] Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. Learning from failure: De-biasing classifier from biased classifier. Advances in Neural Information Processing Systems, 33:20673–20684, 2020.
[41] Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Shahbaz Khan, and Fatih Porikli. On improving adversarial transferability of vision transformers. arXiv preprint arXiv:2106.04169, 2021.
[42] Deng Pan, Xin Li, and Dongxiao Zhu. Explaining deep neural network models with adversarial gradient integration. In IJCAI, pages 2876–2883, 2021.
[43] Sungho Park, Dohyung Kim, Sunhee Hwang, and Hyeran Byun. README: Representation learning by fairness-aware disentangling method. arXiv preprint arXiv:2007.03775, 2020.
[44] Sayak Paul and Pin-Yu Chen. Vision transformers are robust learners. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2071–2081, 2022.
[45] Francesco Pinto, Philip Torr, and Puneet K Dokania. Are vision transformers always more robust than convolutional neural networks? 2021.
[46] Yao Qiang, Chengyin Li, Marco Brocanelli, and Dongxiao Zhu. Counterfactual interpolation augmentation (CIA): A unified approach to enhance fairness and explainability of DNN. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, volume 7, pages 732–739, 2022.
[47] Yao Qiang, Deng Pan, Chengyin Li, Xin Li, Rhongho Jang, and Dongxiao Zhu. AttCAT: Explaining transformers via attentive class activation tokens. In Advances in Neural Information Processing Systems, 2022.
[48] Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jon Shlens. Stand-alone self-attention in vision models. Advances in Neural Information Processing Systems, 32, 2019.
[49] Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019.
[50] Hadi Salman, Saachi Jain, Eric Wong, and Aleksander Madry. Certified patch robustness via smoothed vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15137–15147, 2022.
[51] Mhd Hasan Sarhan, Nassir Navab, Abouzar Eslami, and Shadi Albarqouni. Fairness by learning orthogonal disentangled representations. In European Conference on Computer Vision, pages 746–761. Springer, 2020.
[52] Sofia Serrano and Noah A Smith. Is attention interpretable? arXiv preprint arXiv:1906.03731, 2019.
[53] Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. On the adversarial robustness of vision transformers. arXiv preprint arXiv:2103.15670, 2021.
[54] Krishna Kumar Singh, Dhruv Mahajan, Kristen Grauman, Yong Jae Lee, Matt Feiszli, and Deepti Ghadiyaram. Don't judge an object by its context: Learning to overcome contextual bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11070–11078, 2020.
[55] Robin Strudel, Ricardo Garcia, Ivan Laptev, and Cordelia Schmid. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7262–7272, 2021.
[56] Sruthi Sudhakar, Viraj Prabhu, Arvindkumar Krishnakumar, and Judy Hoffman. Mitigating bias in visual transformers via targeted alignment. 2021.
[57] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[58] Mei Wang and Weihong Deng. Mitigating bias in face recognition using skewness-aware reinforcement learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9322–9331, 2020.
[59] Wentao Wang, Li Niu, Jianfu Zhang, Xue Yang, and Liqing Zhang. Dual-path image inpainting with auxiliary GAN inversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11421–11430, 2022.
[60] Zhibo Wang, Xiaowei Dong, Henry Xue, Zhifei Zhang, Weifeng Chiu, Tao Wei, and Kui Ren. Fairness-aware adversarial perturbation towards bias mitigation for deployed deep models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10379–10388, 2022.
[61] Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern. Predictive inequity in object detection. arXiv preprint arXiv:1902.11097, 2019.
[62] Han Xu, Xiaorui Liu, Yaxin Li, Anil Jain, and Jiliang Tang. To be robust or to be fair: Towards fairness in adversarial training. In International Conference on Machine Learning, pages 11492–11501. PMLR, 2021.
[63] Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, and Jianfeng Gao. Focal self-attention for local-global interactions in vision transformers. arXiv preprint arXiv:2107.00641, 2021.
[64] Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. Advances in Neural Information Processing Systems, 33:5824–5836, 2020.
[65] Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 335–340, 2018.
[66] Yi Zhang and Jitao Sang. Towards accuracy-fairness paradox: Adversarial example-based data augmentation for visual debiasing. In Proceedings of the 28th ACM International Conference on Multimedia, pages 4346–4354, 2020.
[67] Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip HS Torr, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6881–6890, 2021.
[68] Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, Animashree Anandkumar, Jiashi Feng, and Jose M Alvarez. Understanding the robustness in vision transformers. In International Conference on Machine Learning, pages 27378–27394. PMLR, 2022.
1375
+ 8. Supplementary Materials
1376
+ In this Section, we provide additional experiments for per-
1377
+ formance evaluation of the proposed DSA framework on
1378
+ CelebA and Waterbird datasets.
1379
+ 8.1. Dataset Statistics
1380
+ Recall from Section 5.1 that we choose three settings from
1381
+ the CelebA dataset and one setting from the Waterbird
1382
+ dataset to evaluate the baselines against the proposed DSA
1383
+ framework. We describe these four settings using the tuples
1384
+ (Y, S) as follows: a) (Smiling, High Cheekbones), b) (Wavy
1385
+ Hair, Gender), c) (Gray Hair, Gender), and d) (Waterbird,
1386
+ Place). Note that the first three settings are considered for
1387
+ the CelebA dataset while the last setting is considered for
1388
+ the Waterbird dataset. We first provide the data statistics
1389
+ for all these settings in Figure 5. We note that significant
1390
+ biases exist in all these settings. For example, a majority
1391
+ of “Smiling” faces are correlated with “High Cheekbones”
1392
+ whereas the majority of “Not Smiling” faces are correlated
1393
+ with “Not High Cheekbones”. Similar spurious correlations
1394
+ are also observed in other settings as well, which can lead to
1395
+ biased models. We establish this by further analyzing and
1396
+ reporting the True Positive Rate (TPR) of the vanilla ViT
1397
+ models trained on these biased datasets in Table 3. Clearly,
1398
+ the biased ViT models perform significantly worse on the
1399
+ minority groups, e.g., predicting “Smiling” when the indi-
1400
+ vidual does not have “High Cheekbones” (S = 0: 63.93%)
1401
+ compared to the ones that have “High Cheekbones” (S = 1:
1402
+ 96.50%). Next, we analyze the effect of the tunable hyper-
1403
+ parameters on the performance of DSA.
1404
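The per-group TPR disparities in Table 3 can be computed with a short helper like the following; this is a sketch, with tiny synthetic labels standing in for the real CelebA predictions.

```python
import numpy as np

def group_tpr(y_true, y_pred, s):
    """True positive rate P(yhat=1 | y=1, S=g) for each sensitive group g."""
    return {g: y_pred[(y_true == 1) & (s == g)].mean()
            for g in np.unique(s)}

# Toy example: group 1 gets every positive right, group 0 only half.
y_true = np.array([1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1])
s      = np.array([0, 0, 1, 1, 0, 1])
tpr = group_tpr(y_true, y_pred, s)
delta_tpr = abs(tpr[1] - tpr[0])   # the ∆TPR column of Table 3
```

A large ∆TPR is exactly the disparity the table reports for the vanilla models.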
Table 3. Disparities of true positive rate (TPR) among different task and sensitive attribute tuples.
Y           S                         TPR%↑   ∆TPR%
Smiling     Not High Cheekbones (0)   63.93   32.57
            High Cheekbones (1)       96.50
Wavy Hair   Female (0)                77.62   20.04
            Male (1)                  57.58
Gray Hair   Female (0)                63.31   31.85
            Male (1)                  95.16
Waterbird   Land (0)                  55.75   14.14
            Water (1)                 69.89
8.2. Effect of Discrepancy Metrics
We apply three different feature discrepancy metrics D(·, ·), i.e., MSE, KL-Div, and AT, to evaluate the discrepancy between the attention weights Ax and Ax′ in (6); we report their effect in Table 4. Although the differences between these metrics are relatively small, AT clearly achieves the best performance, especially on the fairness metrics. Since AT captures the most significant differences between Ax and Ax′, as shown in (9), the regularizer LA is more efficient at minimizing their differences.
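For reference, the three discrepancy metrics compared in Table 4 can be sketched as below. The precise forms are assumptions: in particular, AT is written here in the attention-transfer style (L2 distance between l2-normalized flattened maps), which may differ in detail from Eq. (9).

```python
import numpy as np

def mse_disc(A_x, A_xp):
    """Mean squared error between two attention maps."""
    return float(np.mean((A_x - A_xp) ** 2))

def kl_disc(A_x, A_xp, eps=1e-8):
    """KL divergence after normalizing each map to a distribution."""
    p = A_x / (A_x.sum() + eps)
    q = A_xp / (A_xp.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def at_disc(A_x, A_xp, eps=1e-8):
    """Attention-transfer style distance: L2 between l2-normalized maps."""
    f = lambda A: A.ravel() / (np.linalg.norm(A.ravel()) + eps)
    return float(np.linalg.norm(f(A_x) - f(A_xp)))

A = np.array([[0.7, 0.3], [0.4, 0.6]])   # toy attention maps
B = np.array([[0.5, 0.5], [0.5, 0.5]])
```

All three vanish for identical maps and grow as the maps diverge; they differ in how strongly they weight large local deviations.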
Table 4. Evaluations with different discrepancy metrics in the regularizer (6). (Y: Gray Hair, S: Gender)
D⋆       EO↓     DP↓     DBA↓    BA↑    Acc%↑
MSE      0.2706  0.2488  0.0136  82.07  90.13
KL-Div   0.2608  0.2467  0.0106  83.26  89.48
AT       0.2558  0.2337  0.0031  82.92  90.95
8.3. Effect of Tunable Hyper-parameters
There are several tunable hyper-parameters in the proposed DSA framework, including the coefficient weights in the objective function and the number of masked patches learned during the adversarial attack.
We tune the three coefficient weights in the objective function (10) to identify the best-performing model, as shown in Table 5. To improve model performance, these coefficient weights should be carefully tuned and selected for different settings and datasets.
The effect of the number of masked patches learned during the adversarial attack is shown in Table 6. In our experiments, the ViT model with k = 3 patches achieves the best performance on all compared metrics in most settings. Looking at the adversarial examples in Figure 6 in more detail, if we perturb only one patch out of all the input patches, some sensitive attributes may not be localized and masked. On the contrary, perturbing excessive patches (e.g., 5) increases the risk of masking attributes related to the target task, resulting in worse prediction performance. For example, the ACC drops from 90.95 to 88.55 in the (Gray Hair, Gender) setting with 5 perturbed patches, as shown in Table 6.
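The patch-masking step can be illustrated as follows: given per-patch attention scores from the bias-only model, the k most-attended patches are blinded. Zero-filling is an assumption for illustration; the paper's attack optimizes the perturbation rather than simply zeroing the patches.

```python
import numpy as np

def mask_top_k_patches(patches, attn, k=3):
    """Blind the k patches the bias-only model attends to most.
    patches: (N, D) array of patch embeddings; attn: (N,) scores."""
    idx = np.argsort(attn)[-k:]        # indices of the k largest scores
    masked = patches.copy()
    masked[idx] = 0.0                  # zero-fill the selected patches (assumed)
    return masked, set(idx.tolist())

patches = np.ones((4, 2))
attn = np.array([0.1, 0.5, 0.2, 0.9])
masked, idx = mask_top_k_patches(patches, attn, k=2)
```

Varying k here corresponds directly to the k = 1, 3, 5 rows of Table 6: too few patches miss the sensitive region, too many start erasing task-relevant content.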
Table 5. Evaluations with different tunable coefficient weights in the objective function (10). (Y: Gray Hair, S: Gender)
λ1, λ2, λ3      EO↓     DP↓     DBA↓    BA↑    Acc%↑
1.0, 0.5, 0.5   0.2843  0.2675  0.0125  81.45  91.12
0.5, 1.0, 0.5   0.2633  0.2578  0.0106  81.32  89.26
1.0, 1.0, 0.5   0.2558  0.2337  0.0031  82.92  90.95
8.4. Ablation Studies and Effect of Patch Size
We report the adversarial success rates of DSA on the sensitive attributes as targets with different numbers of masked patches in Table 7. Note that we only generate adversarial examples for the training sets.
In Table 8, we report additional ablation study results for the DSA framework on the other two settings from the CelebA dataset. It is straightforward to make a similar conclusion
1531
+ Smiling
1532
+ Not Smiling
1533
+ 0
1534
+ 20000
1535
+ 40000
1536
+ 60000
1537
+ 80000
1538
+ 78899
1539
+ 13290
1540
+ 18770
1541
+ 91640
1542
+ High CheekBones
1543
+ Not High CheekBones
1544
+ (a) Y: Smiling S: High Cheeckbones
1545
+ Wavy Hair
1546
+ Not Wavy Hair
1547
+ 0
1548
+ 10000
1549
+ 20000
1550
+ 30000
1551
+ 40000
1552
+ 50000
1553
+ 60000
1554
+ 70000
1555
+ 11892
1556
+ 72542
1557
+ 52852
1558
+ 65313
1559
+ Male
1560
+ Female
1561
+ (b) Y: Wavy hair S: Gender
1562
+ Gray Hair
1563
+ Not Gray Hair
1564
+ 0
1565
+ 1000
1566
+ 2000
1567
+ 3000
1568
+ 4000
1569
+ 5000
1570
+ 6000
1571
+ 6136
1572
+ 1262
1573
+ 1262
1574
+ 6136
1575
+ Male
1576
+ Female
1577
+ (c) Y: Gray hair S: Gender
1578
+ Waterbird
1579
+ Landbird
1580
+ 0
1581
+ 1000
1582
+ 2000
1583
+ 3000
1584
+ 4000
1585
+ 5000
1586
+ 6000
1587
+ 6220
1588
+ 831
1589
+ 2905
1590
+ 1832
1591
+ Water
1592
+ Land
1593
+ (d) Y: Waterbird S: Place
1594
+ Figure 5. Spurious correlation between tasks (Y ) and sensitive attributes (S) tuples (Y, S). Note that Figures 5a, 5b and 5c represent the
1595
+ data statistics for the CelebA dataset while Figure 5d represents the data statistics of the Waterbird dataset.
1596
+ Table 6. Performance of DSA with different number of masked or perturbed patches.
1597
+ k
1598
+ Y : Smiling
1599
+ S: High Cheekbones
1600
+ Y : Wavy Hair
1601
+ S: Gender
1602
+ Y : Gray Hair
1603
+ S: Gender
1604
+ EO↓
1605
+ DP↓
1606
+ DBA↓
1607
+ BA(%)↑
1608
+ Acc(%)↑
1609
+ EO↓
1610
+ DP↓
1611
+ DBA↓
1612
+ BA(%)↑
1613
+ Acc(%)↑
1614
+ EO↓
1615
+ DP↓
1616
+ DBA↓
1617
+ BA(%)↑
1618
+ Acc(%)↑
1619
+ 1
1620
+ 0.3502
1621
+ 0.3341
1622
+ 0.0046
1623
+ 77.84
1624
+ 87.18
1625
+ 0.1822
1626
+ 0.2036
1627
+ 0.0098
1628
+ 72.79
1629
+ 77.26
1630
+ 0.2946
1631
+ 0.3075
1632
+ 0.0110
1633
+ 81.77
1634
+ 90.61
1635
+ 3
1636
+ 0.3012
1637
+ 0.2864
1638
+ 0.0034
1639
+ 80.10
1640
+ 89.23
1641
+ 0.1618
1642
+ 0.1844
1643
+ 0.0056
1644
+ 73.34
1645
+ 79.34
1646
+ 0.2558
1647
+ 0.2337
1648
+ 0.0031
1649
+ 82.92
1650
+ 90.95
1651
+ 5
1652
+ 0.3218
1653
+ 0.3179
1654
+ 0.0040
1655
+ 79.88
1656
+ 88.12
1657
+ 0.1604
1658
+ 0.1776
1659
+ 0.0087
1660
+ 72.13
1661
+ 78.16
1662
+ 0.2776
1663
+ 0.2560
1664
+ 0.0216
1665
+ 81.91
1666
+ 88.55
1667
Table 7. Adversarial attack success rates of DSA on the sensitive attributes target with different numbers of masked patches, k.
S                 k   Success Rate%↑
Gender            1   88.52
                  3   91.47
                  5   93.69
High Cheekbones   1   85.41
                  3   88.64
                  5   91.58
as in Section 6.2. We note that all the terms in the objective function in (10) contribute towards better fairness and accuracy performance.
Additional evaluations capturing the effect of different patch sizes on the performance of DSA are reported in Table 9. Similar to our conclusion in Section 6.3, the ViT models with the smaller patch size, i.e., 16, achieve the best performance on the two other settings from the CelebA dataset.
Figure 6. Adversarial examples with different numbers of masked patches in the (Gray Hair, Gender) setting.
+ 8.5. Qualitative Evaluations
+ In Figures 7, 8, and 9, we present additional qualitative evaluations that further demonstrate the effectiveness of the DSA approach. We note that the attention weights of the ViT models trained with the vanilla method simply focus on the sensitive attributes, e.g., "eye shadow". This demonstrates that the vanilla ViT models are biased and simply leverage the sensitive features to predict the target labels. On the contrary, DSA reduces the attention on these sensitive features and pays more attention to the target-related features, e.g., the hair, which actually determines the target labels Gray and Wavy Hair.
+ 8.6. Summary
+ We summarize the major findings of our experimental study here. First, DSA reduces the attention on the sensitive features while focusing on the target-related features, making it an effective approach to bias mitigation. Second, the additional ablation studies demonstrate that each term in the objective function (10) contributes towards the improved fairness and accuracy performance of DSA. Third, we noted that smaller patch sizes result in better performance of DSA due to their capability of efficiently extracting fine-grained features.
+ Figure 7. Qualitative evaluation of DSA. Y: Smiling, S: High Cheekbones. Panels: (a) Original Image, (b) Vanilla, (c) DSA.
+ Figure 8. Qualitative evaluation. Y: Gray Hair, S: Gender. Panels: (a) Original Image, (b) Vanilla, (c) DSA.
+ Figure 9. Qualitative evaluation. Y: Wavy Hair, S: Gender. Panels: (a) Original Image, (b) Vanilla, (c) DSA.
+ Table 8. Ablation study of DSA for the three training objectives on two other settings from the CelebA dataset. Best results are bold faced. 'w/o' represents without.
+ | Models         | Y: Wavy Hair, S: Gender           | Y: Smiling, S: High Cheekbones    |
+ |                | EO↓    DP↓    DBA↓   BA↑   ACC↑   | EO↓    DP↓    DBA↓   BA↑   ACC↑   |
+ | L(all)         | 0.1618 0.1844 0.0056 73.34 79.34  | 0.3012 0.2864 0.0034 80.10 89.23  |
+ | w/o LCE(x, y)  | 0.2114 0.2203 0.0288 72.42 77.56  | 0.3105 0.3014 0.0231 79.54 88.06  |
+ | w/o LCE(x′, y) | 0.2004 0.2154 0.0275 72.45 77.98  | 0.3198 0.2987 0.0129 78.39 88.54  |
+ | w/o LA         | 0.1942 0.2012 0.0312 72.33 77.45  | 0.3125 0.2955 0.0198 79.21 87.95  |
+ Table 9. Performance evaluation of DSA with different patch sizes (i.e., 16 and 32). All tunable hyper-parameters are set the same as in Figure 3. VA is short for Vanilla.
+ | Model      | Y: Wavy Hair, S: Gender           | Y: Smiling, S: High Cheekbones    |
+ |            | EO↓    DP↓    DBA↓   BA↑   ACC↑   | EO↓    DP↓    DBA↓   BA↑   ACC↑   |
+ | S/16  VA   | 0.2193 0.2204 0.0310 72.69 78.78  | 0.3382 0.3256 0.0125 77.80 88.04  |
+ |       DSA  | 0.1618 0.1844 0.0056 73.34 79.34  | 0.3012 0.2864 0.0034 80.10 89.23  |
+ | S/32  VA   | 0.2236 0.2319 0.0450 72.05 78.68  | 0.3398 0.3315 0.0213 78.12 87.54  |
+ |       DSA  | 0.1805 0.2196 0.0254 72.58 79.02  | 0.3209 0.3177 0.0156 79.27 88.15  |
+
-tFST4oBgHgl3EQfcjgx/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
.gitattributes CHANGED
@@ -2134,3 +2134,62 @@ cdE5T4oBgHgl3EQffg9z/content/2301.05627v1.pdf filter=lfs diff=lfs merge=lfs -tex
2134
  FtE4T4oBgHgl3EQfHAzF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2135
  pNAzT4oBgHgl3EQfOfvU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2136
  T9E3T4oBgHgl3EQfaQqr/content/2301.04505v1.pdf filter=lfs diff=lfs merge=lfs -text
2137
+ mNE1T4oBgHgl3EQfOAOw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2138
+ EtAyT4oBgHgl3EQfSfdv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2139
+ hdE4T4oBgHgl3EQfrg0j/content/2301.05208v1.pdf filter=lfs diff=lfs merge=lfs -text
2140
+ 1NAyT4oBgHgl3EQfofhG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2141
+ EtAyT4oBgHgl3EQfSfdv/content/2301.00087v1.pdf filter=lfs diff=lfs merge=lfs -text
2142
+ oNFQT4oBgHgl3EQfqzYC/content/2301.13381v1.pdf filter=lfs diff=lfs merge=lfs -text
2143
+ o9E2T4oBgHgl3EQf0Aih/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2144
+ 5tE2T4oBgHgl3EQfkQe0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2145
+ 5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf filter=lfs diff=lfs merge=lfs -text
2146
+ KNAzT4oBgHgl3EQfVfxu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2147
+ stAyT4oBgHgl3EQf0Pls/content/2301.00714v1.pdf filter=lfs diff=lfs merge=lfs -text
2148
+ KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf filter=lfs diff=lfs merge=lfs -text
2149
+ 5tE2T4oBgHgl3EQfkQe0/content/2301.03977v1.pdf filter=lfs diff=lfs merge=lfs -text
2150
+ jNAyT4oBgHgl3EQfx_lt/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2151
+ btAyT4oBgHgl3EQfwPm0/content/2301.00646v1.pdf filter=lfs diff=lfs merge=lfs -text
2152
+ 7dE2T4oBgHgl3EQfPQbU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2153
+ VNE0T4oBgHgl3EQf2wJn/content/2301.02716v1.pdf filter=lfs diff=lfs merge=lfs -text
2154
+ cdE5T4oBgHgl3EQffg9z/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2155
+ L9AyT4oBgHgl3EQf6voI/content/2301.00825v1.pdf filter=lfs diff=lfs merge=lfs -text
2156
+ 2tAzT4oBgHgl3EQfRvvX/content/2301.01222v1.pdf filter=lfs diff=lfs merge=lfs -text
2157
+ L9FLT4oBgHgl3EQfMi8T/content/2301.12016v1.pdf filter=lfs diff=lfs merge=lfs -text
2158
+ wtFRT4oBgHgl3EQfgzeb/content/2301.13581v1.pdf filter=lfs diff=lfs merge=lfs -text
2159
+ 2dAzT4oBgHgl3EQfDvol/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2160
+ btAyT4oBgHgl3EQfwPm0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2161
+ ctAzT4oBgHgl3EQfZ_wC/content/2301.01359v1.pdf filter=lfs diff=lfs merge=lfs -text
2162
+ QtAzT4oBgHgl3EQfW_yd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2163
+ X9AyT4oBgHgl3EQf9foH/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2164
+ R9E4T4oBgHgl3EQf_g7i/content/2301.05372v1.pdf filter=lfs diff=lfs merge=lfs -text
2165
+ _NE1T4oBgHgl3EQfCwKv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2166
+ D9E0T4oBgHgl3EQfywIT/content/2301.02662v1.pdf filter=lfs diff=lfs merge=lfs -text
2167
+ 2tAzT4oBgHgl3EQfRvvX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2168
+ 5NE3T4oBgHgl3EQfQgli/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2169
+ C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf filter=lfs diff=lfs merge=lfs -text
2170
+ oNFQT4oBgHgl3EQfqzYC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2171
+ 49FIT4oBgHgl3EQf7St_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2172
+ D9E0T4oBgHgl3EQfywIT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2173
+ ENE0T4oBgHgl3EQfywJf/content/2301.02663v1.pdf filter=lfs diff=lfs merge=lfs -text
2174
+ -dE4T4oBgHgl3EQfDwsm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2175
+ W9FKT4oBgHgl3EQfoC5E/content/2301.11864v1.pdf filter=lfs diff=lfs merge=lfs -text
2176
+ f9E3T4oBgHgl3EQfHwmY/content/2301.04327v1.pdf filter=lfs diff=lfs merge=lfs -text
2177
+ ENE0T4oBgHgl3EQfywJf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2178
+ kdE5T4oBgHgl3EQfGw4w/content/2301.05433v1.pdf filter=lfs diff=lfs merge=lfs -text
2179
+ E9E1T4oBgHgl3EQfEgPe/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2180
+ vNE0T4oBgHgl3EQf9wJp/content/2301.02805v1.pdf filter=lfs diff=lfs merge=lfs -text
2181
+ vNE1T4oBgHgl3EQfkQSg/content/2301.03272v1.pdf filter=lfs diff=lfs merge=lfs -text
2182
+ B9E1T4oBgHgl3EQfpgVg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2183
+ kdE5T4oBgHgl3EQfGw4w/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2184
+ vNE0T4oBgHgl3EQf9wJp/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2185
+ m9AzT4oBgHgl3EQfN_u0/content/2301.01159v1.pdf filter=lfs diff=lfs merge=lfs -text
2186
+ B9E1T4oBgHgl3EQfpgVg/content/2301.03332v1.pdf filter=lfs diff=lfs merge=lfs -text
2187
+ DNFQT4oBgHgl3EQf_zdP/content/2301.13459v1.pdf filter=lfs diff=lfs merge=lfs -text
2188
+ ddE1T4oBgHgl3EQfLQO1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2189
+ _NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf filter=lfs diff=lfs merge=lfs -text
2190
+ wtFRT4oBgHgl3EQfgzeb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2191
+ m9AzT4oBgHgl3EQfN_u0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2192
+ R9E4T4oBgHgl3EQf_g7i/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2193
+ L9AyT4oBgHgl3EQf6voI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
2194
+ g9E0T4oBgHgl3EQf6gID/content/2301.02763v1.pdf filter=lfs diff=lfs merge=lfs -text
2195
+ YNE2T4oBgHgl3EQfvAgF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
0tE1T4oBgHgl3EQflAQk/content/tmp_files/2301.03279v1.pdf.txt ADDED
@@ -0,0 +1,1123 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ arXiv:2301.03279v1 [cs.GT] 9 Jan 2023
+ Revisiting the Distortion of Distributed Voting
+ Aris Filos-Ratsikas¹ and Alexandros A. Voudouris²
+ ¹School of Informatics, University of Edinburgh, UK
+ ²School of Computer Science and Electronic Engineering, University of Essex, UK
+ Abstract
+ We consider a setting with agents that have preferences over alternatives and are partitioned into disjoint districts. The goal is to choose one alternative as the winner using a mechanism which first decides a representative alternative for each district based on a local election with the agents therein as participants, and then chooses one of the district representatives as the winner. Previous work showed bounds on the distortion of a specific class of deterministic plurality-based mechanisms depending on the available information about the preferences of the agents in the districts. In this paper, we first consider the whole class of deterministic mechanisms and show asymptotically tight bounds on their distortion. We then initiate the study of the distortion of randomized mechanisms in distributed voting and show bounds based on several informational assumptions, which in many cases turn out to be tight. Finally, we also experimentally compare the distortion of many different mechanisms of interest using synthetic and real-world data.
+ 1 Introduction
+ Voting is a ubiquitous method for making decisions with a large number of applications, such as electing political representatives, deciding how to split a public budget between projects, or choosing which services (restaurants, hotels, etc.) to recommend to new users based on past user experiences. As such, it has been at the epicenter of research within multiple disciplines including political science, economics and computer science [Brandt et al., 2016]. The most prominent question in this research agenda is to identify the best voting rule to use to collectively aggregate the preferences of agents over alternative options into a single winning alternative, with most of the earlier literature focusing on axiomatic properties that good voting rules should have. An alternative way to tackle this question that has been proposed in computer science is through the distortion framework [Anshelevich et al., 2021], which allows one to compare different voting rules based on how well they approximate the optimal choice as measured in terms of a social objective function like the utilitarian social welfare.
+ Since its inception in 2006 by Procaccia and Rosenschein [2006], the distortion framework has been applied to several utilitarian social choice settings (e.g., [Boutilier et al., 2015, Anshelevich et al., 2018, Gkatzelis et al., 2020]). The lion's share of previous work has focused on centralized models with a single pool of agents whose preferences are directly given as input to a voting rule, which thus can utilize all the given information at once to make a decision. However, there are many applications with multiple pools of agents which make independent local decisions that can be thought of as recommendations for the final decision. To give a concrete example, in most political election systems, the citizens are partitioned into districts based on geographic or other criteria, and vote within their districts to propose the candidate (party) they would like to be selected as the winner.
+ Inspired by situations like the one described above, Filos-Ratsikas et al. [2020] initiated the study of the distortion of mechanisms in a distributed single-winner setting where a set of n agents with cardinal preferences over a set of m alternatives are partitioned into k disjoint districts. The authors focused on deterministic mechanisms of the form Plurality-of-f, which first choose a representative alternative for each district according to some rule f, by holding a local election with the agents of the district as the voters, and then picking the winner to be the alternative that is representative of the most districts (i.e., using the Plurality rule). Filos-Ratsikas et al. considered mechanisms for which the rule f can be cardinal or ordinal, i.e., it may use the actual numerical information about the preferences of the agents within the districts or just consistent rankings. The authors showed that, when the districts are symmetric (that is, each of them contains the same number of agents), the distortion of a cardinal mechanism, namely Plurality-of-Range-Voting, is O(km), and provided an asymptotically matching lower bound of Ω(km) on the distortion of any Plurality-of-f mechanism. For ordinal mechanisms, they showed that Plurality-of-Plurality achieves a distortion of O(km²), and that this is asymptotically best among all ordinal Plurality-of-f mechanisms.
+ Table 1: An overview of our results. Specific details can be found in the appropriate sections.
+ |               | Deterministic | Randomized-of-Deterministic | Randomized-of-Randomized |
+ | Ordinal       | Θ(km²)        | Θ(km²)                      | Ω(√m), O(√m log m)       |
+ | Cardinal      | Θ(km)         | Θ(k)                        | Θ(k)                     |
+ | Strategyproof | Θ(nm)         | Θ(nm)                       | Ω(√m), O(√m log m)       |
+ 1.1 Revisiting the distortion of distributed voting
+ A first observation about the results of Filos-Ratsikas et al. [2020] is that there is a priori no reason to restrict our attention only to mechanisms in the class Plurality-of-f, as using other over-districts rules could potentially lead to better distortion. Indeed, follow-up work considered distributed social choice settings with metric preferences [Anshelevich et al., 2022, Filos-Ratsikas and Voudouris, 2021] without such restrictions on the over-districts rule. In addition, all of the previous work on these settings only considered deterministic mechanisms that use deterministic in-district and over-districts rules. Randomization has proven to be a very useful tool in achieving better (expected) distortion bounds in the centralized setting (see Boutilier et al. [2015], Ebadian et al. [2022]), so it is only natural to consider randomized mechanisms in the distributed setting as well. Finally, an important question is how the distortion bounds are affected in case the participants act selfishly, and whether there are strategyproof mechanisms with good distortion bounds. This question has been considered in the centralized setting [Filos-Ratsikas and Miltersen, 2014, Bhaskar and Ghosh, 2018, Bhaskar et al., 2018, Ebadian et al., 2022] and also in the distributed metric setting [Filos-Ratsikas and Voudouris, 2021]; we consider it in the context of the normalized setting of Filos-Ratsikas et al. [2020] as well.
+ 1.2 Our Contributions
+ We consider the class of all mechanisms for distributed voting in the setting of [Filos-Ratsikas et al., 2020]. In particular, we consider the fover-of-fin class of mechanisms, where fin is an in-district rule that takes as input the preferences of the agents within each district and outputs a representative alternative for the district, while fover is a rule that takes as input the representative alternatives of all districts and chooses one of them as the overall winner. We consider several different cases depending on the nature of fover and fin (deterministic or randomized), and the type of information they can utilize (cardinal or ordinal). We show the following results; see Table 1 for an overview.
+ Deterministic Mechanisms. When fover and fin are both deterministic and the districts are symmetric, we show that the best possible distortion is Θ(km) when the valuation functions of the agents are accessible (cardinal mechanisms), and is Θ(km²) when only ordinal information about the preferences of the agents is available (ordinal mechanisms). The upper bounds were shown by Filos-Ratsikas et al. [2020] and here we provide asymptotically tight lower bounds. These results show that for general, unstructured (normalized) valuations, employing different over-districts rules in fact does not result in improvements on the distortion. We present these results in Section 3.
+ Randomized Mechanisms. In Section 4, we consider for the first time the distortion of randomized mechanisms in distributed voting. We first prove a simple composition theorem, which shows that using an in-district rule with known distortion δ in the centralized setting and then selecting the winner uniformly at random from the set of representatives defines a distributed mechanism with distortion O(kδ). Using this, complemented with new lower bounds, we show that the best possible distortion for cardinal unanimous mechanisms is Θ(k); in fact, this is true even when the districts are asymmetric and when fover is randomized but fin is deterministic.
+ For ordinal mechanisms, we consider two cases: (a) mechanisms that use deterministic in-district rules fin, and (b) fully-randomized mechanisms, where both fover and fin are randomized rules. For (a), we show that the best possible distortion is Θ(km²). The upper bound follows from the bound on Plurality-of-Plurality proven in [Filos-Ratsikas et al., 2020]; here, we provide an asymptotically matching lower bound assuming a natural universal tie-breaking rule. For (b), we prove a simple but very interesting result: for a well-studied class of randomized centralized voting rules called point-voting schemes (e.g., see Gibbard [1977], Barbera [1978]), there exists a distributed implementation so that there is no effect on the induced probability distribution, even for asymmetric districts. Simply put, using such rules it is possible to escape the ill effects of districts in terms of the distortion, even when the districts are asymmetric. From this result, it follows that there exists a distributed implementation of a well-known mechanism of Boutilier et al. [2015] that achieves distortion O(√m log m), almost matching the best possible lower bound of Ω(√m).
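The reason point-voting schemes compose cleanly with districts can be checked numerically: if each agent i contributes a probability vector p_i over the alternatives and the centralized scheme selects a with probability (1/n)·Σ_i p_i(a), then running the scheme inside each district and sampling district d with probability n_d/n induces exactly the same distribution. A minimal sketch of this equivalence (illustrative code, not from the paper):

```python
import numpy as np

def centralized(p):
    # p: (n, m) matrix; row i is agent i's point-voting distribution.
    return p.mean(axis=0)

def distributed(p, districts):
    # districts: list of index arrays partitioning the n agents.
    n = p.shape[0]
    out = np.zeros(p.shape[1])
    for idx in districts:
        # In-district point voting yields the average of the members' rows...
        rep = p[idx].mean(axis=0)
        # ...and district d is then sampled with probability n_d / n.
        out += (len(idx) / n) * rep
    return out

rng = np.random.default_rng(0)
p = rng.random((7, 4))
p /= p.sum(axis=1, keepdims=True)
districts = [np.array([0, 1, 2]), np.array([3, 4]), np.array([5, 6])]  # asymmetric
assert np.allclose(centralized(p), distributed(p, districts))
```

The equality holds because Σ_d (n_d/n)·(1/n_d)·Σ_{i∈d} p_i = (1/n)·Σ_i p_i for any partition into districts.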
+ Strategyproof Mechanisms. For strategyproof mechanisms, which are resilient to strategic manipulation, we show that a best-possible distortion of Θ(nm) for deterministic mechanisms (and more generally mechanisms with a deterministic in-district rule) is easy to achieve by a variation of a dictatorship rule. For randomized mechanisms, since point-voting schemes are strategyproof, the bound O(√m log m) carries over to this class as well. Results about deterministic strategyproof mechanisms are presented in Section 3, and about randomized strategyproof mechanisms in Section 4.
+ Experiments. Finally, in Section 5, we perform experiments using real-world data and synthetic data to evaluate the effect of distributed decision making on the distortion in settings closer to practice. The main conclusions of our experimental results mirror those of our theoretical results in Sections 3 and 4.
+ 1.3 Further Related Work
+ The distortion literature is by now rather extensive, including topics such as single-winner voting [Boutilier et al., 2015, Anshelevich et al., 2018, Gkatzelis et al., 2020, Kizilkaya and Kempe, 2022], multi-winner voting [Caragiannis et al., 2017, 2022], matching problems [Filos-Ratsikas et al., 2014, Amanatidis et al., 2022a], and participatory budgeting [Benadè et al., 2017]. Generally speaking, most works can be categorized as either studying a normalized utilitarian setting (e.g., [Procaccia and Rosenschein, 2006, Boutilier et al., 2015, Filos-Ratsikas et al., 2014, Benadè et al., 2017, Ebadian et al., 2022]) or a metric preference setting (e.g., [Anshelevich and Sekar, 2016, Anshelevich et al., 2018, Gkatzelis et al., 2020, Caragiannis et al., 2022, Kizilkaya and Kempe, 2022]). Some more recent works have also studied the interplay between information and distortion [Amanatidis et al., 2021, 2022a,b, Mandal et al., 2019, 2020, Abramowitz et al., 2019], and there have also been several works on strategyproofness in the context of distortion [Filos-Ratsikas and Miltersen, 2014, Filos-Ratsikas et al., 2014, Bhaskar and Ghosh, 2018, Bhaskar et al., 2018, Ebadian et al., 2022]. We refer the reader to the survey of Anshelevich et al. [2021] for a detailed overview of the related literature.
+ Besides the aforementioned works on distributed voting, Borodin et al. [2019] studied a related two-stage setting in which the voters participate in a central election, but the candidates themselves come from local elections within the political parties' electorates. Beyond distortion, in the context of district-based elections, there have also been other works that have considered the degree of deviation from proportional representation (e.g., see [Bachrach et al., 2016] and references therein), and some works that have studied the complexity of manipulation (e.g., see [Elkind et al., 2021, Lewenberg et al., 2017, Lev and Lewenberg, 2019, Borodin et al., 2018]).
+ 2 Preliminaries
+ An instance I of our problem is given by a tuple I = (N, A, v, D). There is a set N of n agents (or voters) that have preferences over a set A of m alternatives (or candidates). The preferences of each agent i ∈ N are captured by a valuation function vi : A → R≥0 that maps every alternative a ∈ A to a real non-negative value vi(a) = via. Following previous work, we assume that the valuation functions are normalized such that ∑_{a∈A} via = 1 for every i ∈ N (unit-sum assumption). Let v = (vi)_{i∈N} be the valuation profile consisting of the valuation functions of all agents. The agents are also partitioned into a set D of k disjoint districts.
+ For every district d ∈ D, let Nd be the set of agents it contains, such that ⋃_{d∈D} Nd = N. In the symmetric case, each district d contains exactly λ = n/k agents. In the asymmetric case, each district d contains a number nd of agents. All our lower bounds follow from instances consisting of symmetric districts, whereas our upper bounds in Section 4 hold for asymmetric districts.
+ 2.1 Mechanisms
+ Our goal is to choose an alternative to satisfy several criteria of interest. This choice must be done using a distributed mechanism that uses an in-district voting rule fin and an over-districts voting rule fover to implement the following two independent steps:
+ • Step 1: For each district d, choose a representative alternative ad ∈ A by holding a local election based on fin.
+ • Step 2: Choose a district representative as the winner based on fover by considering the districts as voters and their representatives as the candidates they approve.
+ For simplicity we refer to such mechanisms as fover-of-fin. Different choices of fin and fover lead to different distributed mechanisms. Note that the in-district rule can in general use various types of information about the preferences of the agents. For instance, it may be able to use exact cardinal information about the valuation functions, or only ordinal information that is induced by the values (i.e., rankings of alternatives that are consistent with the values of the agents for them). In the latter case, we will use σi to denote the preference ranking of agent i ∈ N so that σi(a) is the rank of alternative a ∈ A in the ranking of i, and σi(a) < σi(b) whenever vi(a) > vi(b); let σ = (σi)_{i∈N} be the ordinal profile consisting of the preference rankings of all agents. To be concise in the definitions below, let δ(I) be the information about the preferences of the agents in instance I = (N, A, v, D) that is used by a mechanism; that is, δ(I) = v in case of cardinal information, or δ(I) = σ in case of ordinal information.
+ We will focus on different classes of distributed mechanisms depending on the available information about the preferences of the agents at the district level (cardinal or ordinal), and also on whether their decision is deterministic or randomized (that is, whether they choose the district representatives or final winner based on probability distributions).
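As a concrete instance of the two-step template above, here is a minimal sketch of the ordinal Plurality-of-Plurality mechanism (function names and the fixed-order tie-breaking are illustrative choices, not taken from the paper):

```python
from collections import Counter

def plurality(rankings, alternatives):
    """Return the alternative ranked first by the most voters,
    breaking ties by the fixed order of `alternatives`."""
    tops = Counter(r[0] for r in rankings)
    return max(alternatives, key=lambda a: (tops[a], -alternatives.index(a)))

def plurality_of_plurality(districts, alternatives):
    # Step 1: each district elects a representative via in-district Plurality.
    reps = [plurality(d, alternatives) for d in districts]
    # Step 2: the alternative representing the most districts wins.
    return plurality([[r] for r in reps], alternatives)
```

For example, with three districts whose local winners are 'a', 'b' and 'c' respectively, the over-districts Plurality step breaks the three-way tie in favor of the first alternative in the fixed order.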
+ 2.2 Social Welfare and Distortion
+ Given an instance I, the social welfare of an alternative a ∈ A is the total value that the agents have for a, that is, SW(a|I) = ∑_{i∈N} via. So, the expected social welfare achieved by a randomized distributed mechanism M that chooses alternative a ∈ A as the winner w with probability Pr_M[w = a] is
+ E[SW(M(I))] = ∑_{a∈A} Pr_M[w = a] · SW(a|I).
+ The efficiency of a distributed mechanism is measured by the notion of distortion. The distortion of a distributed mechanism M is the worst-case ratio (over all possible instances with n agents, m alternatives, and k districts) of the maximum social welfare achieved by any alternative over the (expected) social welfare of the alternative chosen by the mechanism as the winner w, that is,
+ dist(M) = sup_I max_{a∈A} SW(a|I) / E[SW(M(δ(I)))].
+ Clearly, dist(M) ≥ 1. When the denominator in the definition of the distortion tends to 0, we will say that the distortion is infinite or unbounded. Our goal is to identify the best possible distributed mechanisms in terms of distortion.
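On any single fixed instance, the ratio inside the sup can be computed directly; a minimal sketch for a deterministic winner (the distortion itself is a supremum over all instances, which this of course does not compute):

```python
def social_welfare(v, a):
    # v: dict agent -> dict alternative -> value (unit-sum per agent).
    return sum(vi[a] for vi in v.values())

def distortion_on_instance(v, winner):
    # Ratio of the best achievable social welfare to the winner's welfare.
    alternatives = next(iter(v.values()))
    best = max(social_welfare(v, a) for a in alternatives)
    return best / social_welfare(v, winner)
```

With two agents valuing {a: 0.6, b: 0.4} and {a: 0.1, b: 0.9}, alternative b has social welfare 1.3 and a has 0.7, so choosing a yields a ratio of 13/7 on this instance.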
+ 2.3 Strategyproofness
+ Another important property that we would like our mechanisms to satisfy is that of strategyproofness. A strategyproof mechanism makes decisions such that providing false information never leads to the selection of an alternative that an agent prefers over the alternative chosen when the agent provides truthful information. In particular, for any instance I, it must be the case that vi(M(δ(I))) ≥ vi(M(δ(I′))) for any agent i ∈ N, where I′ is the instance obtained when only agent i reports information different from that in I.
+ 2.4
230
+ Some useful observations and properties
231
+ Before we present our technical results, let us briefly discuss some useful properties.
232
+ Locality of distributed mechanisms: First, observe that any distributed mechanism fover-of-fin
233
+ satisfies a locality property in the following sense. A district d (that is, the preferences of a number
234
+ of agents) appears in different instances if in each of these instances there is a district with the same
235
+ number of agents and the same information about theirs preferences as in d (depending on what is
236
+ required by the mechanism). Since the information is the same, the in-district rule fin must decide the
237
+ same alternative as the representative of the district in all these instances. Similarly, in all instances
238
+ where the mechanism has decided the same set of district representatives, the over-districts rule fover
239
+ must decide the same final winner.
240
+ Distortion of distributed vs centralized: Another useful observation is that the distortion of a
241
+ distributed mechanism fover-of-fin is at least as much as the distortion of the in-district centralized
242
+ voting rule fin. Indeed, when k = 1, there is only one representative alternative chosen by fin, and
243
+ thus this alternative must be chosen as the winner by fover; this is also true for instances with k ≥ 2
244
+ districts which are all copies of one district. Consequently, the distortion of fin is a lower bound on
245
+ the distortion of fover-of-fin.
246
+ Strategyproofness: Observe that for a distributed mechanism fover-of-fin to be strategyproof it is
249
+ necessary that the in-district rule fin is strategyproof. This again follows by how the mechanism would
250
+ work in instances with a single district, in which case the over-districts rule fover does not play any
251
+ role in the selection of the final winner.
252
+ Unanimity: A few of our results will require the in-district rules fin to be unanimous. Unanimity
253
+ stipulates that if all of the agents have the same alternative as the top preference, that alternative
254
+ must be selected (with probability 1). Unanimity is a very natural property of “reasonable” voting
255
+ rules, especially deterministic ones. For randomized rules, there might be reasons to consider non-
256
+ unanimous choices, e.g., see Gibbard [1977], Filos-Ratsikas and Miltersen [2014].
257
+ 3 Deterministic mechanisms
259
+ We start with deterministic distributed mechanisms and focus explicitly on the case of symmetric
260
+ districts in this section (that is, the size of each district is λ). When full information about the valuations
261
+ of the agents is known at the district level, Filos-Ratsikas et al. [2020] showed that the mechanism
262
+ Plurality-of-Range-Voting, which chooses the representative of each district to be the alternative
263
+ with maximum social welfare for the agents in the district, has distortion O(km). We show that this
264
+ mechanism is asymptotically best possible over all possible deterministic distributed mechanisms that
265
+ use unanimous in-district rules (but may not use Plurality as the over-districts rule).
266
+ Theorem 3.1. The distortion of any deterministic distributed mechanism with a unanimous in-district
267
+ rule is Ω(km).
268
+ Proof. Let M be some deterministic distributed mechanism with a unanimous in-district rule. Without
269
+ loss of generality, whenever there are k distinct district representatives {a1, . . . , ak}, we assume that
270
+ M chooses a1 as the overall winner. Let ε > 0 be some positive infinitesimal and consider the following
271
+ instance with k districts {d1, . . . , dk} and m > k alternatives:
272
+ • In district d1, all agents have value 1/m + ε for alternative a1, and value 1/m − ε/(m − 1) for
273
+ any other alternative.
274
+ • For any ℓ ∈ {2, . . . , k}, in district dℓ, all agents have value 1/2 + ε for alternative aℓ, value
275
+ 1/2 − ε for alternative x, and value 0 for any other alternative.
276
+ Since the in-district rule is unanimous, the district representatives are alternatives {a1, . . . , ak}, and
277
+ the overall winner is thus a1. The social welfare of alternative a1 is approximately λ/m, whereas the
278
+ social welfare of alternative x is approximately k · λ/2, leading to distortion Ω(km).
279
+ When only ordinal information about the preferences of the agents is available, Filos-Ratsikas
280
+ et al. [2020] showed that Plurality-of-Plurality, which chooses the favorite alternative of the most
+ agents in a district as its representative and then the alternative that represents the most districts
282
+ as the winner, has distortion O(km2). We show that this mechanism is asymptotically best possible
283
+ among all ordinal distributed mechanisms (without any restrictions), thus improving upon the result
284
+ of Filos-Ratsikas et al. [2020] who showed that Plurality-of-Plurality is best possible only within
285
+ the class of mechanisms they studied.
286
+ We first prove an easy but important lemma showing that when only ordinal information is
+ available, to achieve finite distortion, it is necessary for the representative of each district to be some alternative
288
+ that is the favorite of at least one agent in the district.
289
+ Lemma 3.2. The representative of any district must be some top-ranked alternative, otherwise the
+ distortion is infinite.
293
+ Proof. Let d be a district and let T be the set of top-ranked alternatives. Suppose that the representative
294
+ of d is chosen to be some alternative x ̸∈ T. Then, in any instance consisting of copies of d, the winner
295
+ must be x. However, the valuation profile might be such that all agents have value 1 for their favorite
296
+ alternative and 0 for any other alternative. Consequently, the social welfare of x might be 0, whereas
297
+ the social welfare of any top-ranked alternative is positive, leading to infinite distortion.
298
+ We say that a district is divided if its λ agents are partitioned into m/2 equal-sized sets such that all
299
+ the 2λ/m agents in each set rank the same alternative first and different sets of agents have different
300
+ top-ranked alternatives. By Lemma 3.2, the representative of such a district must be one of the
+ top-ranked alternatives. The following lemma shows that choosing the representative of a divided district
302
+ as the winner is, under some circumstances, a bad choice.
303
+ Lemma 3.3. Suppose that some alternative y1 is chosen as the winner by a deterministic ordinal dis-
304
+ tributed mechanism when the set of representatives is {y1, . . . , yk}. If there exists a divided district that
305
+ is represented by y1, then there are k − 1 districts with representatives y2, . . . , yk, and altogether these k
306
+ districts define an instance such that the distortion of the mechanism is Ω(km2).
307
+ Proof. Let M be a deterministic ordinal distributed mechanism that selects y1 as the winner when
308
+ the set of representatives is {y1, . . . , yk}, and let d be the divided district that is represented by y1.
309
+ Consider the following k districts:
310
+ • The first district is a copy of d.
311
+ • For every ℓ ∈ {2, . . . , k}, the ℓ-th district is such that all agents therein rank yℓ first, x ̸∈
312
+ {y1, . . . , yk} second, and then all other alternatives. By Lemma 3.2, M must choose yℓ as the
313
+ representative of the ℓ-th district, as this is the only top-ranked alternative.
314
+ So, indeed the set of representatives is {y1, . . . , yk} and M chooses y1 as the winner by assumption.
315
+ One possible valuation profile is the following:
316
+ • In the first, divided district, the 2λ/m agents that rank y1 first have value 1/m for all alternatives,
317
+ and the remaining agents all have value 1 for their favorite alternative.
318
+ • For every ℓ ∈ {2, . . . , k}, all agents in the ℓ-th district have value 1/2 for their two favorite
319
+ alternatives (yℓ and x).
320
+ Consequently, the social welfare of y1 is 2λ/m2, whereas the social welfare of x is approximately k·λ/2,
321
+ and thus the distortion is Ω(km2).
322
+ Lemma 3.3 shows that deterministic ordinal distributed mechanisms with distortion o(km2) must
323
+ not output the representative of a divided district as the winner when it is given a set of districts with
324
+ different representatives. However, as we show in the proof of the next theorem, there are instances
325
+ where such a choice is inevitable, and thus the distortion is Ω(km2).
326
+ Theorem 3.4. The distortion of any deterministic ordinal distributed mechanism is Ω(km2).
327
+ Proof. Let M be a deterministic ordinal distributed mechanism. We focus on instances with k districts
328
+ and sets of alternatives A ∪ B ∪ C ∪ {x}, where A = {a1, . . . , ak}, B = {b1, . . . , bm/2+k−1}, and
329
+ C = {c1, . . . , cm−2k}. Without loss of generality, suppose that when the district representatives are
332
+ {a1, . . . , ak}, M chooses a1 as the overall winner.
333
+ Let d1 be a divided district with set of top-ranked alternatives {a1, b1, . . . , bm/2−1}. By Lemma 3.3,
334
+ if a1 is the representative of d1, then there exists an instance such that the distortion of M is Ω(km2).
335
+ So, suppose that the representative of d1 is some other top-ranked alternative, say b1. Again by
336
+ Lemma 3.3, if b1 is chosen as the winner whenever she is part of a representative set consisting of
337
+ k distinct alternatives, then the distortion of M would be Ω(km2). So, let us assume that when the
338
+ district representatives are {b1, a2, . . . , ak}, the winner is an alternative different than b1, say a2.
339
+ We can now repeat this argument step by step for each alternative aℓ, ℓ ∈ {2, . . . , k}. In particular,
340
+ let dℓ be a divided district with top-ranked alternatives {aℓ, bℓ, . . . , bm/2+ℓ−2} (note that alternatives
341
+ b1, . . . , bℓ−1 do not appear as top-ranked alternatives in dℓ). By Lemma 3.3, if aℓ is the representative
342
+ of dℓ then the distortion of M is Ω(km2), so the representative is some other alternative from the set
343
+ {bℓ, . . . , bm/2+ℓ−2}, say bℓ. Again by Lemma 3.3, if bℓ is chosen as the winner whenever she is part of
344
+ a representative set consisting of k distinct alternatives, then the distortion of M would be Ω(km2).
345
+ So, when the district representatives are {b1, . . . , bℓ, aℓ+1, . . . , ak}, the winner is an alternative not in
346
+ {b1, . . . , bℓ}, say aℓ+1.
347
+ The last step of this repeated argument leads to the lower bound of Ω(km2): We have reached an
348
+ instance with set of representatives {b1, . . . , bk} all of whom are representative of some divided district,
349
+ and thus no matter which of them is chosen as the winner, by Lemma 3.3 there exists an instance that
350
+ includes the corresponding divided district and k − 1 unanimous districts (like in the proof of the
351
+ lemma) such that the distortion is Ω(km2).
352
+ Finally, let us discuss the case of deterministic strategyproof distributed mechanisms. Bhaskar and
353
+ Ghosh [2018] showed that the distortion of any deterministic centralized strategyproof voting rule
354
+ (including those that have access to the valuation functions) is Θ(nm). From the discussion in Section 2.4,
355
+ we directly obtain a lower bound of Ω(nm) for the distributed setting as well. A tight upper bound is
356
+ also not hard to derive by considering the straightforward First-of-First mechanism which works as
357
+ follows:
358
+ • For each district d, choose the favorite alternative of the first agent therein as the representative.
359
+ • Choose the representative of the first district as the winner.
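The two steps above can be sketched in a few lines; this is our own illustrative code (the encoding of districts as lists of per-agent preference rankings is an assumption of the sketch):

```python
def first_of_first(districts):
    """districts: list of districts; each district is a list of preference
    rankings (best alternative first), one ranking per agent."""
    # In-district rule: the first agent's favorite alternative.
    representatives = [district[0][0] for district in districts]
    # Over-districts rule: the first district's representative wins.
    return representatives[0]

districts = [[["b", "a"], ["a", "b"]],   # district 1: its first agent favors "b"
             [["a", "b"], ["a", "b"]]]   # district 2
```

The first agent of the first district acts as a dictator, which is what makes the mechanism strategyproof.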
360
+ Theorem 3.5. First-of-First is strategyproof and achieves an asymptotically best possible distortion of
361
+ Θ(nm) within the class of deterministic strategyproof distributed mechanisms.
362
+ Proof. The mechanism is clearly strategyproof since the winner is the favorite alternative of the first
363
+ agent of the first district who acts as a dictator. Since the winner is ranked first by an agent, the social
364
+ welfare of the mechanism is at least 1/m. The maximum possible social welfare is n, and thus the
365
+ distortion is O(nm).
366
+ 4 Randomized mechanisms
368
+ We start our discussion on randomized distributed mechanisms by analyzing a general class of mech-
369
+ anisms that we call Uniform-of-δ-Approximate. A mechanism M in this class works as follows:
370
+ • For each district d, M chooses the representative ad according to some centralized voting rule
371
+ fin that has distortion at most δ.
372
+ • M chooses the winner uniformly at random from the set of representatives.
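A minimal sketch of one mechanism in this class, using Range-Voting (distortion 1) as the in-district rule; this is our own code, with districts encoded as lists of per-agent value dictionaries (an assumption of the sketch):

```python
import random

def range_voting(district_vals, alternatives):
    # Distortion-1 in-district rule: the welfare-maximizing alternative.
    return max(alternatives, key=lambda a: sum(v[a] for v in district_vals))

def uniform_of_range_voting(districts, alternatives, rng=random):
    representatives = [range_voting(d, alternatives) for d in districts]
    return rng.choice(representatives)  # uniform over the k representatives

districts = [[{"a": 0.9, "b": 0.1}],    # district 1 strongly prefers "a"
             [{"a": 0.2, "b": 0.8}]]    # district 2 strongly prefers "b"
winner = uniform_of_range_voting(districts, ["a", "b"])
```

Note that the winner is drawn uniformly over the multiset of representatives, so an alternative representing several districts wins with proportionally higher probability.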
373
+ Picking the winner uniformly at random from the representatives that have been selected seems to be
376
+ the most natural choice as there is not much information about the preferences of the agents in the
377
+ districts, and essentially all we can do is assign higher proportional probability to an alternative that
378
+ is representative of more districts. We have the following result.
379
+ Theorem 4.1. The distortion of any Uniform-of-δ-Approximate mechanism is O(kδ).
380
+ Proof. Consider an arbitrary instance. Let o be the optimal alternative, ad the representative of district
+ d, and w the final winner. Denote by SWd(x) the social welfare of alternative x only from the agents
+ in d; clearly, SW(x) = Σd∈D SWd(x). The expected social welfare of the mechanism is
+ E[SW(M)] = Σa∈A Pr[w = a] · SW(a)
+ = (1/k) Σa∈A ( Σd∈D Pr[ad = a] ) · SW(a)
+ = (1/k) Σd∈D Σa∈A Pr[ad = a] · SW(a)
+ = (1/k) Σd∈D E[SW(ad)]
+ ≥ (1/k) Σd∈D E[SWd(ad)].
+ Since ad is chosen based on a voting rule with distortion at most δ, we have that E[SWd(ad)] ≥
+ (1/δ) · SWd(o). Combining this together with the fact that SW(o) = Σd∈D SWd(o), and using the
+ linearity of expectation, we obtain
+ E[SW(M)] ≥ (1/k) Σd∈D E[SWd(ad)] ≥ (1/k) Σd∈D (1/δ) · SWd(o) = (1/(kδ)) · SW(o).
+ Hence, the distortion of the mechanism is at most kδ.
433
+ Theorem 4.1 is a simple composition theorem, analogous to the one presented by Anshelevich
+ et al. [2022] for the metric setting. Based on it, we can define randomized distributed mechanisms
+ with proven distortion guarantees by appropriately choosing the in-district rule. Before we continue,
+ observe that the sizes of the districts do not appear in the proof of Theorem 4.1, and thus the distortion
437
+ of any Uniform-of-δ-Approximate mechanism is O(kδ) even if the districts are asymmetric. So, the
438
+ distortion of the mechanism depends on the number of agents only if the distortion δ of the in-district
439
+ rule depends on the number of agents.
440
+ If cardinal information is available at the district level, by using Range-Voting with δ = 1 as the
441
+ in-district rule, we obtain the following.
442
+ Corollary 4.2. The distortion of Uniform-of-Range-Voting is O(k).
443
+ If only ordinal information about the preferences of the agents is given at the district level, then we
444
+ can use Plurality with δ = O(m2) and the randomized Stable-Lottery mechanism of Ebadian
445
+ et al. [2022] with δ = O(√m) as the in-district rule to obtain the following results.
446
+ Corollary 4.3. The distortion of Uniform-of-Plurality is O(km2).
449
+ Corollary 4.4. The distortion of Uniform-of-Stable-Lottery is O(k√m).
450
+ An important question to ask next is under what circumstances the aforementioned upper bounds
451
+ of Corollaries 4.2, 4.3 and 4.4 are tight. First, we show that Uniform-of-Range-Voting is the best
452
+ among mechanisms with unanimous in-district rules which may even use cardinal information.
453
+ Theorem 4.5. The distortion of any randomized distributed mechanism with a unanimous in-district rule
454
+ is Ω(k).
455
+ Proof. Let ε > 0 be a positive infinitesimal. Consider an instance with the following k symmetric
456
+ districts: For any ℓ ∈ [k], in district dℓ, all λ agents therein have value 1/2 + ε for alternative aℓ,
457
+ 1/2 − ε for alternative x, and 0 for any other alternative. Since the in-district rule is unanimous, the
458
+ representative of district dℓ must be aℓ with probability 1. Hence, no matter what the probability of
459
+ choosing a district representative as the winner is, the expected social welfare of the mechanism is
460
+ λ · (1/2 + ε). However, the social welfare of alternative x is k · λ · (1/2 − ε), and thus the distortion
461
+ is Ω(k).
462
+ If we consider non-unanimous in-district rules, but require the in-district rule to be deterministic,
+ then we can show a weaker lower bound of Ω(√k); notice that the theorem also implies the same
+ bound for fully deterministic distributed mechanisms without unanimous in-district rules.
467
+ Theorem 4.6. The distortion of any randomized distributed mechanism with a deterministic in-district
+ rule is Ω(√k).
471
+ Proof. Consider a district dℓ in which all agents have value 1/2 for alternative aℓ, value 1/(2√k) for
+ each alternative in {b1, . . . , b√k}, and 0 for any other alternative. If the representative of this district is
+ not aℓ, then in instances consisting of copies of this district, the distortion is at least √k; in particular, it
+ is at least that much if some alternative in {b1, . . . , b√k} is chosen and infinite if any other alternative
+ is chosen. So, suppose that the representative of dℓ is aℓ.
+ Next, consider an instance with k symmetric districts d1, . . . , dk. By the above discussion, for any
+ ℓ ∈ [k], the representative of dℓ is alternative aℓ with social welfare λ/2 (note that only the agents
+ of dℓ have positive value, equal to 1/2, for aℓ). Hence, no matter which district representative is
+ chosen as the winner (or the probability distribution over the representatives), the (expected) social
+ welfare of the mechanism is λ/2. In contrast, the social welfare of any alternative in {b1, . . . , b√k} is
+ k · λ/(2√k) = √k · λ/2, and thus the distortion is Ω(√k).
495
+ Next, we show that Uniform-of-Plurality is the best possible among ordinal randomized dis-
496
+ tributed mechanisms with deterministic in-district rules, assuming an arbitrary but fixed ordering of
497
+ the alternatives. This is quite surprising, as it shows that randomization over the districts is not better
+ than just choosing an arbitrary alternative that is representative of the most districts (i.e., not better
499
+ than Plurality-of-Plurality).
500
+ Theorem 4.7. The distortion of any ordinal distributed mechanism with a deterministic in-district rule is
501
+ Ω(km2), when there exists an arbitrary but fixed tie-breaking ordering of the alternatives.
502
+ Proof. Without loss of generality, suppose that the tie-breaking ordering of the alternatives is a1 ≻
503
+ . . . ≻ ak ≻ b1 ≻ . . . ≻ bm/2−1 ≻ x ≻ c1 ≻ . . . ≻ cm/2−k; the naming of the alternatives is arbitrary
504
+ but is assumed to be known and can be exploited. For simplicity, for any set of alternatives X, denote
505
+ by [X] an arbitrary ordering of the alternatives in X.
506
+ Consider an instance with k symmetric districts such that in district dℓ there is a set of 2λ/m
509
+ agents with preference ordering aℓ ≻ x ≻ [A\{aℓ, x}], a set of 2λ/m agents with preference ordering
510
+ b1 ≻ x ≻ [A \ {b1, x}], . . ., and a set of 2λ/m agents with preference ordering bm/2−1 ≻ x ≻
511
+ [A \ {bm/2−1, x}]. By Lemma 3.2, the representative of dℓ must be one of the top-ranked alternatives
512
+ (otherwise the distortion of the mechanism would be infinite). Since aℓ is ranked above the other
513
+ alternatives in the tie-breaking ordering, she is chosen as the representative of dℓ. Hence, the set of
514
+ representatives is {a1, . . . , ak}, and the winner is chosen according to some probability distribution
515
+ over this set.
516
+ The valuation profile may be such that the 2λ/m agents in district dℓ that rank aℓ first have value
517
+ 1/m for all alternatives, while all other agents in dℓ have value 1/2 for their two favorite alterna-
518
+ tives. Consequently, the social welfare of alternative aℓ is 2λ/m2, and thus the social welfare of the
519
+ mechanism is also this much, no matter the probability distribution over the district representatives.
520
+ In contrast, the social welfare of x is approximately kλ/2, leading to a distortion of Ω(km2).
521
+ When randomization at the district level can be leveraged by ordinal distributed mechanisms, then
522
+ we achieve distortion much better than what is implied by Corollary 4.4, while also achieving
+ strategyproofness. In particular, there are several centralized voting rules that can be implemented as
524
+ distributed mechanisms, in the sense that they define the same probability distribution over the alter-
525
+ natives. One such important class of voting rules is that of point-voting schemes, which is part of a
526
+ larger class of strategyproof mechanisms [Barbera, 1978, Hylland, 1980, Gibbard, 1977] and includes
527
+ rules with almost best possible distortion guarantees [Boutilier et al., 2015, Ebadian et al., 2022].
528
+ 4.1 Point-voting schemes
530
+ A point-voting scheme chooses an agent uniformly at random and then outputs her t-th favorite
+ alternative with probability pt, where p1 ≥ . . . ≥ pm ≥ 0 and Σt∈[m] pt = 1. Hence, the probability
+ according to which the point-voting scheme using the probability vector p = (p1, . . . , pm) chooses
+ alternative a ∈ A as the winner w is Pr[w = a] = (1/n) Σi∈N pσi(a), where σi(a) is the position that i
+ ranks a in her preference ranking σ.
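A point-voting scheme can be sampled directly from this definition; the following is our own sketch (the names and encodings are assumptions, not the paper's):

```python
import random

def point_voting_winner(rankings, p, rng=random):
    """rankings: one preference list per agent (best alternative first);
    p: non-increasing probability vector over positions, summing to 1."""
    voter = rng.choice(rankings)                    # agent chosen uniformly
    t = rng.choices(range(len(p)), weights=p)[0]    # position t with prob p[t]
    return voter[t]                                 # that agent's t-th favorite

# Example probability vector for m = 3 alternatives.
p = [0.5, 0.3, 0.2]
rankings = [["a", "b", "c"], ["b", "a", "c"]]
w = point_voting_winner(rankings, p)
```

Only ordinal information (the rankings) is used, which is what makes such schemes applicable in the ordinal setting.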
539
+ There are many point-voting schemes of interest. For every positional scoring rule using the
+ scoring vector s = (s1, . . . , sm), we can define a point-voting scheme f(s) by normalizing the scoring
+ vector, that is, define pt = st / (Σj∈[m] sj) for every t ∈ [m] so that the winning probability of
+ alternative a is
+ Pr[w = a] = (1/n) Σi∈N sσi(a) / (Σj∈[m] sj) = (Σi∈N sσi(a)) / (n · Σj∈[m] sj).
+ Another important point-voting scheme is the rule that chooses each alternative uniformly at random;
+ in this case, we have pt = 1/m for every t ∈ [m] so that Pr[w = a] = (1/n) Σi∈N (1/m) = 1/m.
568
+ For any point-voting scheme f that uses a probability vector p, we consider the distributed
+ mechanism Proportional-of-f-Point-Voting, which works as follows:
+ • For every district d, choose the representative ad to be alternative a ∈ A with probability
+ (1/nd) Σi∈Nd pσi(a).
+ • Choose the winner to be the representative of district d with probability nd/n.
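The induced winning distribution of this mechanism can be computed exactly and compared against the centralized scheme f; the following sketch (our own code and names) does this for a small profile, and the two distributions coincide:

```python
from collections import defaultdict

def point_voting_dist(rankings, p):
    # Pr[w = a] = (1/n) * sum over agents i of p at the position i ranks a.
    n = len(rankings)
    dist = defaultdict(float)
    for ranking in rankings:
        for t, a in enumerate(ranking):
            dist[a] += p[t] / n
    return dict(dist)

def proportional_of_f_dist(districts, p):
    # District d picks a with prob (1/n_d) * sum_{i in d} p_{sigma_i(a)},
    # and d's representative becomes the winner with prob n_d / n.
    n = sum(len(d) for d in districts)
    dist = defaultdict(float)
    for d in districts:
        for a, q in point_voting_dist(d, p).items():
            dist[a] += (len(d) / n) * q
    return dict(dist)

p = [0.5, 0.3, 0.2]
districts = [[["a", "b", "c"], ["b", "c", "a"]], [["c", "a", "b"]]]
central = point_voting_dist([r for d in districts for r in d], p)
distributed = proportional_of_f_dist(districts, p)
# The two dictionaries hold the same probability distribution.
```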
576
+ Theorem 4.8. Proportional-of-f-Point-Voting defines the same probability distribution as the point-
+ voting scheme f.
+ Proof. The probability that alternative a is chosen as the winner by Proportional-of-f-Point-Voting is
+ Pr[w = a] = Σd∈D Pr[w = ad] · Pr[ad = a] = Σd∈D (nd/n) · (1/nd) Σi∈Nd pσi(a) = (1/n) Σi∈N pσi(a),
+ that is, Proportional-of-f-Point-Voting chooses a with the same probability as f.
601
+ Theorem 4.8 shows that Proportional-of-f-Point-Voting achieves the same distortion bound
602
+ as the point-voting scheme f it uses as the in-district rule, and also that it inherits its strategyproofness
603
+ property. This is extremely useful, as there are centralized voting rules that are based on point-voting
604
+ schemes and achieve almost the best possible distortion.
605
+ Boutilier et al. [2015] considered a voting rule that is a convex combination of two point-voting
606
+ schemes: With probability 1/2 choose an alternative uniformly at random, and with probability 1/2
607
+ run the point-voting scheme defined by normalizing the harmonic scoring rule H = (1, 1/2, . . . , 1/m).
608
+ We will refer to this mechanism as BCHLPS. Boutilier et al. [2015] showed that this voting rule has
609
+ distortion O(√m log m). An important property of point-voting schemes is that any rule that is a
610
+ convex combination of point-voting schemes is also a point-voting scheme. The following lemma is
611
+ similar to lemmas proved before in the literature (e.g., see Filos-Ratsikas and Miltersen [2014], Barbera
612
+ [1978]); we provide a proof for completeness.
613
+ Lemma 4.9. Let f1, . . . , fκ be point-voting schemes defined by the probability vectors p1, . . . , pκ. For
+ any non-negative numbers q1, . . . , qκ such that Σj∈[κ] qj = 1, the voting rule f that chooses the outcome
+ of fj with probability qj is a point-voting scheme.
+ Proof. Let σ be an arbitrary preference profile. For any j ∈ [κ], denote the t-th coordinate of pj as pj,t,
+ and let Pj(a) = Pr[a = fj(σ)] be the probability of choosing a as the winner according to point-voting
+ scheme fj. Then, the voting rule f chooses alternative a as the winner w with probability
+ Pr[w = a] = Σj∈[κ] qj · Pj(a) = Σj∈[κ] qj · (1/n) Σi∈N pj,σi(a) = (1/n) Σi∈N Σj∈[κ] qj · pj,σi(a).
+ Hence, f is a point-voting scheme defined by the probability vector p with pt = Σj∈[κ] qj · pj,t.
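As an illustration of the lemma, the probability vector of the combined rule of Boutilier et al. [2015] (probability 1/2 uniform, probability 1/2 normalized harmonic) can be assembled directly; this is our own sketch:

```python
def bchlps_vector(m):
    """Combined vector of Lemma 4.9 for the BCHLPS rule:
    1/2 * uniform + 1/2 * normalized harmonic (1, 1/2, ..., 1/m)."""
    harmonic = [1 / (t + 1) for t in range(m)]
    total = sum(harmonic)
    return [0.5 / m + 0.5 * h / total for h in harmonic]

p = bchlps_vector(4)
# p is non-increasing and sums to 1, so it defines a valid point-voting scheme.
```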
644
+ Consequently, by Theorem 4.8 and Lemma 4.9, we can construct a randomized ordinal distributed
645
+ mechanism based on the point-voting scheme of Boutilier et al. [2015] that achieves the same distortion
646
+ bound and is strategyproof.
647
+ Corollary 4.10. There exists a randomized ordinal strategyproof distributed mechanism with distortion
650
+ O(√m log m).
651
+ This distortion bound is almost best possible as the lower bound of Ω(√m) for randomized
+ centralized rules holds trivially for distributed mechanisms by considering single-district instances.
653
+ 5 Experiments
655
+ In this section, we perform experiments with real and synthetic datasets, aiming to identify patterns in
656
+ the distortion of several well-known voting rules and examine whether these support our theoretical
657
+ findings. It is well-documented in the literature (e.g., see [Boutilier et al., 2015, Filos-Ratsikas et al.,
658
+ 2020]) that when working with real or realistic preferences, it often is the case that the distortion
659
+ bounds are small numbers quite close to 1. In this sense, our goal is not primarily to demonstrate the
660
+ distortion bounds themselves, but rather the dependence of these bounds on the distributed decision-
661
+ making process, in particular the number of districts, as well as the use of randomization. We perform
662
+ two main experiments, one with real-world preferences and valuation data, and one with synthetic
663
+ data. All our experiments are with symmetric districts.
664
+ 5.1 Experiments with the Jester Dataset
666
+ For our first experiment, we use the Jester Joke Dataset [Goldberg et al., 2001]. The dataset contains
667
+ ratings for 100 different jokes in the range [−10, 10], provided by 70000 users. We chose to work
668
+ with this dataset as it has also been employed by Boutilier et al. [2015] in the context of centralized
669
+ distortion bounds, and also by Filos-Ratsikas et al. [2020] for the distortion of deterministic distributed
670
+ mechanisms that use plurality as the over-district rule.
671
+ Following the methodology developed in these works, we construct inputs consisting of ratings
672
+ for the 8 most-rated jokes. In particular, we perform 1000 random runs in which we sample 100 users
673
+ from the set of all users that have provided rankings for all eight jokes, and then partition them into
674
+ k equal-sized districts uniformly at random, for k ∈ {1, 2, 5, 10, 20, 25}. Clearly, the case of k = 1
675
+ corresponds to the centralized setting and will be used as a reference point. We interpret the ratings
676
+ of the jokes as cardinal valuations: to be consistent with our setting (and with the experiments of
677
+ [Boutilier et al., 2015, Filos-Ratsikas et al., 2020]), we add 10 to each user’s rating vector, to ensure that
678
+ the values are positive and then apply the unit-sum normalization. For these inputs, we compute the
679
+ average distortion of a set of 20 voting rules over the 1000 runs of the experiment. In particular, we
680
+ consider distributed mechanisms fover-of-fin, where for fover we use Plurality or Uniform, whereas
681
+ for fin we have:
682
+ Deterministic Rules: We use simple positional scoring rules, namely Plurality (PL), Veto, Borda and
683
+ Harmonic, as well as Range-Voting (RV), which in the case of k = 1 finds the optimal alternative.
684
+ Randomized Rules: Here we use several natural point-voting schemes with probability vectors that
685
+ are proportional to the aforementioned scoring rules (recall the definition from Section 4), namely
686
+ • Proportional to Plurality Score (PropPL);
687
+ • Proportional to Borda Score (PropBorda);
688
+ • Proportional to Veto Score (PropVeto);
689
+ • Proportional to Harmonic Score (PropHarmonic).
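The preprocessing described above (shifting each rating by 10 and applying the unit-sum normalization) can be sketched as follows; this is our own illustrative code, not taken from the papers' implementations:

```python
def to_unit_sum_valuation(ratings):
    """Shift Jester ratings from [-10, 10] to nonnegative values, then
    normalize so each user's values sum to 1 (unit-sum normalization)."""
    shifted = [r + 10 for r in ratings]
    total = sum(shifted)
    return [s / total for s in shifted]

v = to_unit_sum_valuation([-10, 0, 10])  # shifted to [0, 10, 20]
```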
690
+ k    RV     PL     Veto   Borda  Harmonic  PropPL  PropVeto  PropBorda  PropHarmonic  BCHLPS
+ 1    1      1.049  1.035  1.007  1.017     1.135   1.166     1.155      1.156         1.166
+ 2    1.017  1.070  1.059  1.018  1.020     1.137   1.166     1.155      1.156         1.165
+ 5    1.018  1.064  1.070  1.020  1.036     1.133   1.162     1.155      1.156         1.165
+ 10   1.019  1.066  1.082  1.021  1.044     1.133   1.162     1.153      1.154         1.163
+ 20   1.024  1.066  1.107  1.030  1.050     1.134   1.165     1.154      1.155         1.164
+ 25   1.022  1.067  1.142  1.031  1.107     1.133   1.165     1.153      1.154         1.164
+ Table 2: Distortion bounds of various voting rules based on valuations defined by the provided scores
+ of the Jester dataset and random district partitions.
770
+ k       Distribution  RV     PL     Veto   Borda  Harmonic | PropPL  PropVeto  PropBorda  PropHarmonic  BCHLPS
+ k = 1   Uniform       1      1.038  1.045  1.006  1.019    | 1.079   1.087     1.085      1.085         1.087
+ k = 1   Beta          1      1.086  1.105  1.029  1.050    | 1.140   1.152     1.147      1.147         1.150
+ k = 1   Exponential   1      1.032  1.096  1.018  1.013    | 1.118   1.137     1.132      1.131         1.134
+ k = 2   Uniform       1.026  1.052  1.056  1.030  1.039    | 1.079   1.087     1.084      1.084         1.086
+ k = 2   Beta          1.044  1.111  1.118  1.064  1.080    | 1.140   1.152     1.147      1.147         1.150
+ k = 2   Exponential   1.039  1.062  1.115  1.055  1.051    | 1.118   1.136     1.132      1.130         1.135
+ k = 5   Uniform       1.031  1.050  1.057  1.029  1.038    | 1.076   1.084     1.081      1.081         1.084
+ k = 5   Beta          1.052  1.113  1.125  1.074  1.094    | 1.143   1.155     1.151      1.150         1.154
+ k = 5   Exponential   1.039  1.069  1.110  1.055  1.056    | 1.119   1.137     1.133      1.131         1.134
+ k = 20  Uniform       1.031  1.055  1.077  1.039  1.042    | 1.077   1.085     1.082      1.082         1.084
+ k = 20  Beta          1.055  1.105  1.145  1.073  1.084    | 1.141   1.154     1.149      1.149         1.152
+ k = 20  Exponential   1.047  1.069  1.123  1.060  1.058    | 1.115   1.133     1.128      1.127         1.129
+ k = 25  Uniform       1.031  1.056  1.071  1.036  1.044    | 1.077   1.085     1.082      1.0824        1.084
+ k = 25  Beta          1.054  1.124  1.149  1.084  1.094    | 1.148   1.155     1.150      1.150         1.151
+ k = 25  Exponential   1.042  1.069  1.129  1.060  1.054    | 1.116   1.134     1.129      1.128         1.131
+ Table 3: Distortion bounds of various voting rules based on valuations defined according to several
+ probability distributions and random district partitions. Results for deterministic mechanisms are
+ presented at the left of the vertical line, and results for randomized mechanisms are at the right.
953

We also use the rule of Boutilier et al. [2015] (we refer to it as BCHLPS in the following); recall that this is a point-voting scheme that with probability 1/2 selects an alternative at random and with probability 1/2 runs the PropHarmonic rule defined above. As established in Corollary 4.10 (and the discussion before the statement of the corollary), this is best possible in terms of the worst-case distortion.
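The mixed scheme just described lends itself to a compact sketch. The snippet below is an illustrative reconstruction, not the paper's code; in particular, `harmonic_scores` assumes the standard harmonic scoring rule (position j contributes weight 1/j), since the exact PropHarmonic definition appears earlier in the paper and not in this excerpt.

```python
import random

def harmonic_scores(profile, m):
    # profile: list of rankings; ranking[pos] is the alternative in position pos.
    # Assumed harmonic weights: position pos contributes 1 / (pos + 1).
    scores = [0.0] * m
    for ranking in profile:
        for pos, alt in enumerate(ranking):
            scores[alt] += 1.0 / (pos + 1)
    return scores

def bchlps(profile, m, rng=random.Random(0)):
    # With probability 1/2 pick an alternative uniformly at random,
    # otherwise pick proportionally to harmonic scores (PropHarmonic).
    if rng.random() < 0.5:
        return rng.randrange(m)
    scores = harmonic_scores(profile, m)
    return rng.choices(range(m), weights=scores, k=1)[0]
```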
The results of our experiments can be seen in Table 2. In the table, we only present the results where, as fover, we used Plurality for deterministic rules and Uniform for randomized rules. This is in accordance with our approach in the theoretical results of the previous sections. The bounds for the cases not shown are quite similar, and slightly larger in general. For each of the randomized rules, we perform 300 runs and calculate their expected social welfare, which we then use to calculate the distortion.
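The distortion estimate just described can be sketched as follows (an illustrative reconstruction, not the authors' experimental code): the optimal social welfare is divided by the empirical average welfare of the rule's winner over repeated runs. The `rule` callable below is a hypothetical stand-in for any randomized mechanism.

```python
def distortion_of_randomized_rule(values, rule, runs=300, rng=None):
    # values[i][a]: agent i's (normalized) value for alternative a.
    # Distortion = optimal social welfare / expected welfare of the rule's
    # winner, with the expectation estimated empirically over `runs` runs.
    import random
    rng = rng or random.Random(0)
    m = len(values[0])
    welfare = [sum(v[a] for v in values) for a in range(m)]
    opt = max(welfare)
    avg = sum(welfare[rule(rng)] for _ in range(runs)) / runs
    return opt / avg
```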
From the results of Table 2 we observe that, as expected, the existence of multiple districts has an adverse effect on the distortion of deterministic mechanisms, which becomes worse compared to the centralized case k = 1. For these rules, we can also observe that the distortion generally increases as k increases. In contrast, the distortion of randomized rules remains virtually unchanged for any value of k. This is in complete accordance with our theoretical findings, where we established that these rules induce the same probability distribution. The experiments showcase that this does not only hold in expectation, but also in practice (given sufficiently many runs).

Another crucial observation is that, in terms of the absolute distortion numbers, randomization does not seem to help; if anything, it makes the distortion bounds worse! This can be justified by the fact that real-world instances like those from the Jester dataset display a large degree of homogeneity, which results in the simple deterministic rules performing quite well. On the other hand, randomization often leads to suboptimal choices even on such “well-behaved” instances, worsening the distortion bounds on average. Surprisingly, among ordinal voting rules, Borda seems to perform best across the board even though the theoretical distortion of Borda is in fact unbounded.
5.2 Experiments with Synthetic Datasets
We also perform experiments with datasets that are generated from probability distributions. In particular, and to be consistent with the Jester experiment presented above, we create instances with 100 agents and 8 alternatives, by first drawing the values of the agents from a certain distribution, and then constructing the induced ordinal preference profile from those values. We use the following distributions:
• Uniform distribution on [1, 100]. This is the simplest case, where all possible values are equally likely.
• Beta distribution with α = 1/10 and β = 1/10. This distribution has a symmetric convex pdf centered around a mean of 1/2, assigning higher probabilities to values very close to 1 or 0.
• Exponential distribution with exponent 4, i.e., the pdf is f(x) = 4e^{−4x} for x ≥ 0 and f(x) = 0 otherwise. This distribution generates values close to 0 with high probability, and as the values increase, the probability of them being generated decreases exponentially.
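A minimal sketch of this instance-generation pipeline (an illustrative reconstruction, not the authors' code), using Python's standard `random` distributions; the unit-sum normalization applied in the experiments is included here as well.

```python
import random

def synthetic_profile(n=100, m=8, dist="uniform", rng=random.Random(1)):
    # Draw cardinal values per agent, then derive the induced ordinal ranking.
    draw = {
        "uniform": lambda: rng.uniform(1, 100),
        "beta": lambda: rng.betavariate(0.1, 0.1),
        "exponential": lambda: rng.expovariate(4.0),  # pdf 4*exp(-4x)
    }[dist]
    values, rankings = [], []
    for _ in range(n):
        v = [draw() for _ in range(m)]
        total = sum(v)
        v = [x / total for x in v]          # unit-sum normalization
        values.append(v)
        rankings.append(sorted(range(m), key=lambda a: -v[a]))
    return values, rankings
```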
For the rest of the experiment, we perform similar steps as in the case of the Jester dataset: We normalize the values to sum up to 1, and run the set of mechanisms described above. For each randomized mechanism we now perform 150 individual runs and calculate its expected welfare. We calculate the average distortions over 500 runs of the experiment for k symmetric districts, where k ∈ {1, 2, 5, 20, 25}. Note that the number of runs and the number of district sizes are slightly smaller in this experiment, because it is more computationally intensive (as we need to calculate bounds for 3 different distributions). Again, we use Plurality as fover for deterministic and Uniform for randomized mechanisms; the results for the other cases were similar and are not reported.
The results can be found in Table 3. Similarly to the Jester experiment, it is evident that the distortion of the deterministic mechanisms becomes worse for k ≥ 2, whereas it remains pretty much the same for randomized mechanisms. Again, we observe that randomization results in worse distortion bounds overall, and that Borda performs best among deterministic mechanisms. Interestingly, contrary to the Jester dataset, here we do not see a clear pattern of the distortion increasing as k increases for deterministic mechanisms (other than the jump from k = 1 to k = 2). This is probably due to the fact that the synthetic instances are highly homogeneous, and with uniform random district partitions, the districts end up being quite uniform, regardless of their number and size.
The role of unit-sum. We remark here that normalizing the values to sum up to 1 effectively makes the Uniform and Exponential distributions pretty similar, and this is reflected in the results. To get a sense of the effect of normalization, we also ran the experiments without it. We observe that the distortions for the exponential distribution are now larger than those of the uniform distribution. In general, the distortion bounds still lie in the range [1.03, 1.15] for all distributions, but their average values (over all documented distortion bounds) are larger for all distributions except Uniform. It is also the case that for the Beta distribution, the bounds of deterministic mechanisms are much closer to those of randomized ones. The distortion of randomized mechanisms is still almost the same for any number of districts.
6 Open Problems
From our results, an interesting technical challenge is to remove the requirement for a consistent tie-breaking ordering from the statement of Theorem 4.7. Similarly, we could attempt to remove unanimity from the lower bound of Theorem 3.1; although unanimity is usually pretty natural, removing it would make the theorem stronger. More interestingly, our result about point-voting schemes in Theorem 4.8 crucially does not depend on the normalization of the valuations, and hence could also be applied verbatim to the metric distributed social choice setting studied by Anshelevich et al. [2022], where randomized mechanisms have never been considered; this seems like a natural starting point for such an investigation.
References
Ben Abramowitz, Elliot Anshelevich, and Wennan Zhu. Awareness of voter passion greatly improves the distortion of metric social choice. In Proceedings of the 15th Conference on Web and Internet Economics (WINE), pages 3–16, 2019.
Georgios Amanatidis, Georgios Birmpas, Aris Filos-Ratsikas, and Alexandros A. Voudouris. Peeking behind the ordinal curtain: Improving distortion via cardinal queries. Artificial Intelligence, 296:103488, 2021.
Georgios Amanatidis, Georgios Birmpas, Aris Filos-Ratsikas, and Alexandros A. Voudouris. A few queries go a long way: Information-distortion tradeoffs in matching. Journal of Artificial Intelligence Research, 74, 2022a.
Georgios Amanatidis, Georgios Birmpas, Aris Filos-Ratsikas, and Alexandros A. Voudouris. Don't roll the dice, ask twice: The two-query distortion of matching problems and beyond. In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS), 2022b.
Elliot Anshelevich and Shreyas Sekar. Blind, greedy, and random: Algorithms for matching and clustering using only ordinal information. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), pages 390–396, 2016.
Elliot Anshelevich, Onkar Bhardwaj, Edith Elkind, John Postl, and Piotr Skowron. Approximating optimal social choice under metric preferences. Artificial Intelligence, 264:27–51, 2018.
Elliot Anshelevich, Aris Filos-Ratsikas, Nisarg Shah, and Alexandros A. Voudouris. Distortion in social choice problems: The first 15 years and beyond. In Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI), pages 4294–4301, 2021.
Elliot Anshelevich, Aris Filos-Ratsikas, and Alexandros A. Voudouris. The distortion of distributed metric social choice. Artificial Intelligence, 308:103713, 2022.
Yoram Bachrach, Omer Lev, Yoad Lewenberg, and Yair Zick. Misrepresentation in district voting. In IJCAI, pages 81–87, 2016.
Salvador Barbera. Nice Decision Schemes. Springer Netherlands, 1978.
Gerdus Benadè, Swaprava Nath, Ariel D. Procaccia, and Nisarg Shah. Preference elicitation for participatory budgeting. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI), pages 376–382, 2017.
Umang Bhaskar and Abheek Ghosh. On the welfare of cardinal voting mechanisms. In Proceedings of the 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), pages 27:1–27:22, 2018.
Umang Bhaskar, Varsha Dani, and Abheek Ghosh. Truthful and near-optimal mechanisms for welfare maximization in multi-winner elections. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), pages 925–932, 2018.
Allan Borodin, Omer Lev, Nisarg Shah, and Tyrone Strangway. Big city vs. the great outdoors: Voter distribution and how it affects gerrymandering. In IJCAI, pages 98–104, 2018.
Allan Borodin, Omer Lev, Nisarg Shah, and Tyrone Strangway. Primarily about primaries. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI), pages 1804–1811, 2019.
Craig Boutilier, Ioannis Caragiannis, Simi Haber, Tyler Lu, Ariel D. Procaccia, and Or Sheffet. Optimal social choice functions: A utilitarian view. Artificial Intelligence, 227:190–213, 2015.
Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia, editors. Handbook of Computational Social Choice. Cambridge University Press, 2016.
Ioannis Caragiannis, Swaprava Nath, Ariel D. Procaccia, and Nisarg Shah. Subset selection via implicit utilitarian voting. Journal of Artificial Intelligence Research, 58:123–152, 2017.
Ioannis Caragiannis, Nisarg Shah, and Alexandros A. Voudouris. The metric distortion of multiwinner voting. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI), pages 4900–4907, 2022.
Soroush Ebadian, Anson Kahng, Dominik Peters, and Nisarg Shah. Optimized distortion and proportional fairness in voting. In Proceedings of the 23rd ACM Conference on Economics and Computation (EC), pages 563–600, 2022.
Edith Elkind, Jiarui Gan, Svetlana Obraztsova, Zinovi Rabinovich, and Alexandros A. Voudouris. Protecting elections by recounting ballots. Artificial Intelligence, 290:103401, 2021.
Aris Filos-Ratsikas and Peter Bro Miltersen. Truthful approximations to range voting. In Proceedings of the 10th International Conference on Web and Internet Economics (WINE), pages 175–188, 2014.
Aris Filos-Ratsikas and Alexandros A. Voudouris. Approximate mechanism design for distributed facility location. In International Symposium on Algorithmic Game Theory, pages 49–63. Springer, 2021.
Aris Filos-Ratsikas, Søren Kristoffer Stiil Frederiksen, and Jie Zhang. Social welfare in one-sided matchings: Random priority and beyond. In Proceedings of the 7th Symposium of Algorithmic Game Theory (SAGT), pages 1–12, 2014.
Aris Filos-Ratsikas, Evi Micha, and Alexandros A. Voudouris. The distortion of distributed voting. Artificial Intelligence, 286:103343, 2020.
Allan Gibbard. Manipulation of schemes that mix voting with chance. Econometrica: Journal of the Econometric Society, pages 665–681, 1977.
Vasilis Gkatzelis, Daniel Halpern, and Nisarg Shah. Resolving the optimal metric distortion conjecture. In Proceedings of the 61st IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 1427–1438, 2020.
Ken Goldberg, Teresa Roeder, Dhruv Gupta, and Chris Perkins. Eigentaste: A constant time collaborative filtering algorithm. Information Retrieval, 4(2):133–151, 2001.
Aanund Hylland. Strategy proofness of voting procedures with lotteries as outcomes and infinite sets of strategies. Technical report, 1980.
Fatih Erdem Kizilkaya and David Kempe. Plurality veto: A simple voting rule achieving optimal metric distortion. In Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI), pages 349–355, 2022.
Omer Lev and Yoad Lewenberg. “Reverse gerrymandering”: Manipulation in multi-group decision making. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 2069–2076, 2019.
Yoad Lewenberg, Omer Lev, and Jeffrey S. Rosenschein. Divide and conquer: Using geographic manipulation to win district-based elections. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pages 624–632, 2017.
Debmalya Mandal, Ariel D. Procaccia, Nisarg Shah, and David P. Woodruff. Efficient and thrifty voting by any means necessary. In Proceedings of the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS), pages 7178–7189, 2019.
Debmalya Mandal, Nisarg Shah, and David P. Woodruff. Optimal communication-distortion tradeoff in voting. In Proceedings of the 21st ACM Conference on Economics and Computation (EC), pages 795–813, 2020.
Ariel D. Procaccia and Jeffrey S. Rosenschein. The distortion of cardinal preferences in voting. In International Workshop on Cooperative Information Agents (CIA), pages 317–331, 2006.
0tE1T4oBgHgl3EQflAQk/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
19AzT4oBgHgl3EQfDfqo/content/tmp_files/2301.00978v1.pdf.txt ADDED
@@ -0,0 +1,1044 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
arXiv:2301.00978v1 [math.NT] 3 Jan 2023
ON VALUES OF ISOTROPIC QUADRATIC FORMS
MANOJ CHOUDHURI AND PRASHANT J. MAKADIYA

Abstract. Let K be either a locally compact non-discrete field of characteristic p > 2 or K = Q_p, and Q be a non-degenerate isotropic quadratic form with coefficients in K. We obtain asymptotic estimates for the number of solutions, in the twofold product of a certain discrete set inside K, of the inequalities of the form |Q(x, y)| < δ for some δ > 0, where | · | is an ultrametric absolute value on K. The estimates are obtained in terms of continued fraction expansions of the coefficients of the quadratic form Q.

Mathematics Subject Classification: 11E16, 11E08, 11D88, 11A55, 11J70, 11K50, 37A44.
Keywords: Quadratic forms, locally compact fields, asymptotic estimates, continued fractions.
Contents
1. Introduction
2. K has positive characteristic (> 2)
3. K is the field of p-adic numbers
References
1. Introduction

The Oppenheim conjecture, solved by Margulis in 1987 (see [13] for more details), states that if Q is a real non-degenerate indefinite quadratic form which is not proportional to a form with rational coefficients, then Q(Z^n) is dense in R if n ≥ 3. After the Oppenheim conjecture was settled, people got interested in studying finer questions related to the distribution of the values of Q on integral points. Given a quadratic form as above, and a, b, ρ ∈ R with ρ > 0, let
    N_Q(a, b, ρ) := #{v ∈ Z^n : a < Q(v) < b, v ∈ B(ρ)},
B(ρ) being the ball of radius ρ around the origin in R^n. Also let
    V_Q(a, b, ρ) := Vol({v ∈ R^n : a < Q(v) < b, v ∈ B(ρ)}).
Then it was shown by Dani and Margulis in [7] that
    lim inf_{ρ→∞} N_Q(a, b, ρ) / V_Q(a, b, ρ) = 1.
An asymptotic upper bound for the quantity N_Q(a, b, ρ) / V_Q(a, b, ρ) was found by Eskin, Margulis and Mozes (see [9] for instance), and combining the result of [7], they showed that if Q is a quadratic form as above such that the signature of Q is neither (2, 1) nor (2, 2), then
    lim_{ρ→∞} N_Q(a, b, ρ) / V_Q(a, b, ρ) = 1.
+ VQ(a, b, ρ) = 1.
57
+ The Oppenheim conjecture fails for binary quadratic forms due to
58
+ the existence of badly approximable numbers. A real number α is called
59
+ badly approximable if there exists c > 0 such that
60
+ ���α − p
61
+ q
62
+ ��� > c
63
+ q2 for any
64
+ rational number p
65
+ q. Now, let Q be the binary quadratic form defined
66
+ by
67
+ Q(x, y) = (x + αy)y,
68
+ α being a badly approximable number. Then Q(Z2) avoids the neigh-
69
+ bourhood (−c, c) of zero. Nevertheless, one can study the distribution
70
+ of the values taken by such forms at integral points. This was done
71
+ in [6] with the interval (a, b) being a neighbourhood of 0.
72
+ In case
73
+ of binary quadratic forms, the asymptotic estimates depend on the
74
+ quadratic form under consideration, and they are given in terms of
75
+ the partial quotients of the continued fraction expansions of the coeffi-
76
+ cients of the quadratic form. There is a natural connection between the
77
+ values of non-degenerate indefinite binary quadratic forms at integral
78
+ points, and certain geometric and dynamical aspects of the orbits of
79
+ geodesic flow associated with the modular surface. In [6], the authors
80
+ explored this connection, and used a method of coding of geodesics on
81
+ the modular surface via nearest integer continued fraction which was
82
+ introduced by S. Katok and I. Ugarcovicci (see [10] for instance), to
83
+ obtain the estimates (see [18] for a different proof which does not uses
84
+ the mechinary of geodesic flow etc.). The method of [6] can be adopted
85
+ to obtain similar type of estimates in terms of a more general class of
86
+ continued farctions as well, see Remark 3.4 of [5] for more details.
87
+ In the present article, we do a similar study for non-degenerate
88
+ isotropic binary quadratic forms whose coefficients are coming from
89
+ a non-discrete locally compact field K such that either K has char-
90
+ acteristic p > 2, or K is the field of p-adic numbers. In the following
91
+ sections, we first deal with the positive characteristic case and then con-
92
+ sider quadratic forms with coefficients in Qp. Note that an analogue
93
+ of Oppenheim conjecture holds in S-arithmetic setting for isotropic
94
+ quadratic forms in n ≥ 3 variables (see [2] for more details) as well.
95
+
96
+ ON VALUES OF ISOTROPIC QUADRATIC FORMS
97
+ 3
98
2. K has positive characteristic (> 2)

By the classification of non-discrete locally compact fields, if K is of positive characteristic, then K is a field of formal Laurent series in one indeterminate over a finite field. Let p be an odd prime, q be a power of p, and F_q be the finite field of characteristic p consisting of q elements. We denote by Z the polynomial ring F_q[X] in one variable over F_q. Let F_q(X) be the field of rational functions with coefficients in F_q and K := F_q((X^{−1})) be the field of formal Laurent series in X^{−1} over F_q. More precisely, if α ∈ F_q((X^{−1})), then
    α = Σ_{j≥n_0} a_j X^{−j},   a_j ∈ F_q, n_0 ∈ Z.
Whenever α ∈ F_q((X^{−1})) \ F_q(X), we call α an irrational element. We define a valuation ν on K as follows: if α = Σ_{n≥n_0} a_n X^{−n}, then
    ν(α) := inf {j ∈ Z : a_j ≠ 0}.
This valuation gives rise to an absolute value on K as follows: if α (≠ 0) ∈ K and ν(α) = d_α, then
    |α| := q^{−d_α},
and the absolute value of the zero element of K is 0. Then K is the completion of F_q(X) with respect to this absolute value. As ν is a non-Archimedean valuation, the absolute value defined above is an ultrametric absolute value. Being a locally compact field, K admits a Haar measure (see [14] for details), which we denote by µ. For a ∈ K and r ∈ Z, let
    B(a, q^r) := {α ∈ K : |α − a| < q^r}
be the open disc around a of radius q^r; then µ(B(a, q^r)) = q^r. Let µ ⊗ µ be the corresponding product measure on K^2, which we denote by η.
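As a toy illustration of this absolute value, restricted to polynomials in Z = F_q[X] (where |t| = q^{deg t}), the following sketch checks the ultrametric inequality |f + g| ≤ max(|f|, |g|), with equality when the degrees differ. The representation and helper names are our own, not from the paper.

```python
# A polynomial over F_q (here q = p = 3) is a list of coefficients mod q,
# lowest degree first; |t| = q^(deg t) and |0| = 0.
q = 3

def trim(f):
    # drop trailing zero coefficients
    while f and f[-1] % q == 0:
        f.pop()
    return f

def add(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0) +
                  (g[i] if i < len(g) else 0)) % q for i in range(n)])

def absval(f):
    return 0 if not f else q ** (len(f) - 1)   # q^(deg f)
```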
As in the case of real numbers, any α in K has a unique continued fraction expansion
    α = b_0 + 1/(b_1 + 1/(b_2 + 1/(b_3 + ...))),
also written as
    α = [b_0, b_1, b_2, ...],
with b_j ∈ Z for j ≥ 0 and b_j of positive degree for j ≥ 1. Given any α = Σ_{j≥n_0} a_j X^{−j} in K, let
    ⌊α⌋ = Σ_{j=n_0}^{0} a_j X^{−j}  if n_0 ≤ 0,   and   ⌊α⌋ = 0  if n_0 ≥ 1.
Then the continued fraction algorithm is defined as follows:
    α_0 := α,   α_{n+1} := (α_n − b_n)^{−1}   and   b_n = ⌊α_n⌋.
Here the b_n's are called partial quotients and the α_n's are called complete quotients of the continued fraction expansion of α (see [16] for more details).
Now let s_n/t_n be the nth convergent of the continued fraction expansion of α, i.e.,
    s_n/t_n = [b_0, b_1, b_2, ..., b_n].
Then the sequences (s_n)_{n≥0} and (t_n)_{n≥0} in Z satisfy the following recurrence relations:
(1)    s_n = b_n s_{n−1} + s_{n−2},   t_n = b_n t_{n−1} + t_{n−2}.
They also satisfy the following equation:
(2)    s_{n+1} t_n − s_n t_{n+1} = (−1)^n,
which tells us that s_n and t_n are coprime, i.e., they do not have any common factor other than the constant polynomials in F_q[X]. The following equalities, which are special features of continued fraction theory, will be quite useful for this article. If α, b_n, s_n, t_n are as above, then
(3)    |t_n| = |b_n · · · b_1|  for all n ≥ 1,
(4)    |α − s_n/t_n| = 1/(|b_{n+1}| |t_n|^2),
and
(5)    |α − s_n/t_n| = 1/(|t_{n+1}| |t_n|).
Note that in the case of continued fractions for real numbers, inequalities hold instead of equalities in (4) and (5). This is because of the ultrametric nature of the absolute value on K. The following lemma is a simple characterization of the convergents of the continued fraction expansion of any element in K; the proof can be found in [16].
Lemma 1. Let s, t ∈ Z with t ≠ 0. Then s/t is a convergent to α if and only if
(6)    |α − s/t| < 1/|t|^2.
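For rational functions s/t ∈ F_q(X), the continued fraction algorithm reduces to the Euclidean algorithm on F_q[X]. The following sketch (our own illustration, not from the paper) computes partial quotients and convergents for p = 5 and checks recurrence (1), identity (2), and the degree formula behind (3).

```python
# Polynomials over F_p (p = 5): coefficient lists mod p, lowest degree first.
p = 5

def trim(f):
    while f and f[-1] % p == 0:
        f.pop()
    return f

def addp(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0) +
                  (g[i] if i < len(g) else 0)) % p for i in range(n)])

def mulp(f, g):
    if not f or not g:
        return []
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return trim(out)

def divmodp(f, g):
    # polynomial long division in F_p[X]
    f = trim(list(f))
    quo = [0] * max(len(f) - len(g) + 1, 1)
    inv = pow(g[-1], p - 2, p)          # inverse of the leading coefficient
    while f and len(f) >= len(g):
        c, d = (f[-1] * inv) % p, len(f) - len(g)
        quo[d] = c
        f = addp(f, mulp([0] * d + [(-c) % p], g))
    return trim(quo), f

def cf(s, t):
    # partial quotients of s/t via the Euclidean algorithm
    bs = []
    while t:
        b, r = divmodp(s, t)
        bs.append(b)
        s, t = t, r
    return bs

def convergents(bs):
    # recurrence (1): s_n = b_n s_{n-1} + s_{n-2}, t_n = b_n t_{n-1} + t_{n-2}
    sp, s_, tp, t_ = [], [1], [1], []   # s_{-2}, s_{-1}, t_{-2}, t_{-1}
    out = []
    for b in bs:
        sp, s_ = s_, addp(mulp(b, s_), sp)
        tp, t_ = t_, addp(mulp(b, t_), tp)
        out.append((s_, t_))
    return out
```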
Now, let us consider binary quadratic forms with coefficients in K. It is well known that if Q is a non-degenerate isotropic quadratic form with coefficients in a field F of characteristic not equal to 2, then there exists a basis {v_1, v_2} of F^2 such that if a_1, a_2 ∈ F, then
    Q(a_1 v_1 + a_2 v_2) = a_1 a_2.
This says in particular that if Q_0 is the quadratic form on K^2 defined by
    Q_0(x, y) = xy   for x, y ∈ K,
then for any isotropic quadratic form Q on K^2, there is a matrix A_Q in SL(2, K) and γ in K, such that
(7)    Q(x, y) = γ Q_0(A_Q(x, y)).
So, to study the asymptotic behaviour of the set of values of an isotropic quadratic form with coefficients in K, it is enough to consider a quadratic form Q given as follows:
    Q(x, y) = (ax + by)(cx + dy)   with a, b, c, d ∈ K, bc − ad = 1.
Now let Q be a quadratic form of the type Q(x, y) = (ax + by)(cx + dy) with a, b, c, d ∈ K, bc − ad = 1 (there is no loss of generality because one may replace γ by −γ in (7)) such that b/a is an irrational element of K. Also let p be the set of primitive elements of Z^2, i.e., p is the set of those (s, t) in Z^2 such that s and t do not have a common factor except constant polynomials. For fixed real numbers k and δ with k > 1 and 0 < δ < 1, let
    G(ρ) := {(s, t) ∈ p : 0 < |Q(s, t)| < δ, ||(s, t)|| ≤ ρ, |cs + dt| > k},
where ||(s, t)|| = max{|s|, |t|}. Let α = −b/a and β = ac, and let the continued fraction expansion of α be given by
    α = [b_0, b_1, b_2, ...],
with s_n/t_n being the nth convergent. Also let
    H(ρ) := {(x, y) ∈ K^2 : 0 < |Q(x, y)| < δ, ||(x, y)|| ≤ ρ, |cx + dy| > k}.
+ tient # G(ρ)
256
+ η (H(ρ)) as ρ → ∞. Now let
257
+ α− := lim inf
258
+ n→∞
259
+ 1
260
+ n
261
+ n
262
+
263
+ j=1
264
+ log |bj|
265
+ and
266
+ α+ := lim sup
267
+ n→∞
268
+ 1
269
+ n
270
+ n
271
+
272
+ j=1
273
+ log |bj|.
274
+ Also for 0 < δ < 1, let
275
+ e(δ) := lim inf
276
+ n→∞
277
+ 1
278
+ n#
279
+
280
+ j, 1 ≤ j ≤ n : |bj+1| ≥ 1
281
+ δ
282
+
283
+ and
284
+ f(δ) := lim sup
285
+ n→∞
286
+ 1
287
+ n#
288
+
289
+ j, 1 ≤ j ≤ n : |bj+1| ≥ 1
290
+ δ
291
+
292
+ .
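For a finite prefix of partial quotients, the quantities above are limits of simple empirical averages. The sketch below (our own illustration, not from the paper) computes the finite-n versions from the degrees deg b_j, so that |b_j| = q^{deg b_j}; note that e(δ) and f(δ) are defined via |b_{j+1}|, which this finite-prefix count matches only up to an index shift.

```python
import math

def cf_statistics(degs, q, delta, n):
    # degs[j-1] = deg b_j, hence |b_j| = q ** degs[j-1]
    logs = [d * math.log(q) for d in degs[:n]]
    avg = sum(logs) / n                 # finite-n version of alpha_-/alpha_+
    frac = sum(1 for d in degs[:n] if q ** d >= 1 / delta) / n  # of e/f(delta)
    return avg, frac
```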
The main result of this article is contained in the following theorem.

Theorem 2. Let Q be a quadratic form defined by
    Q(x, y) = (ax + by)(cx + dy)   with a, b, c, d ∈ K, bc − ad = 1,
and b/a an irrational element of K. Also let G(ρ), H(ρ), α_+, α_−, e(δ), f(δ) be as defined above. If α_− < ∞, then we have the following:
    lim inf_{ρ→∞} #G(ρ)/η(H(ρ)) ≥ c e(δ)/α_+
and
    lim sup_{ρ→∞} #G(ρ)/η(H(ρ)) ≤ c f(δ)/α_−,
where c is a constant depending on δ and q.
Remark 3. Let
    I(ρ) := {(s, t) ∈ p : 0 < |Q(s, t)| < δ, ||(s, t)|| ≤ ρ, |as + bt| > k}
and
    J(ρ) := {(x, y) ∈ K^2 : 0 < |Q(x, y)| < δ, ||(x, y)|| ≤ ρ, |ax + by| > k}.
Then one can obtain similar estimates for #I(ρ)/η(J(ρ)) in terms of the continued fraction expansion of −d/c, provided d/c is an irrational element of K.
Proof of Theorem 2:
Let
    G′(ρ) := {(s, t) ∈ p : |t(tα − s)| < δ, |t| ≤ ρ}.
It is easy to see that
(8)    Q(s, t) = (tα − s)(t + β(tα − s)).
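Identity (8) is purely algebraic, so it can be sanity-checked numerically over Q (as a stand-in for K): with α = −b/a and β = ac, expanding the right-hand side recovers (as + bt)(cs + dt) whenever bc − ad = 1. The particular numbers below are arbitrary.

```python
from fractions import Fraction as F

# Arbitrary coefficients; d is chosen so that bc - ad = 1.
a, b, c = F(3), F(5), F(2)
d = (b * c - 1) / a
alpha, beta = -b / a, a * c
for s, t in [(F(1), F(4)), (F(-7), F(2)), (F(11), F(-3))]:
    u = t * alpha - s
    # identity (8): Q(s, t) = (t*alpha - s)(t + beta*(t*alpha - s))
    assert (a * s + b * t) * (c * s + d * t) == u * (t + beta * u)
```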
If |Q(s, t)| < δ with |cs + dt| > k, then |as + bt| < δ/k, which implies that |tα − s| < δ/(|a|k), i.e., |tα − s| is bounded. Now by (8),
    |Q(s, t)| / |t(tα − s)| = |1 + (β/t)(tα − s)|.
Since |tα − s| is bounded, it follows that |Q(s, t)| / |t(tα − s)| = 1 if |t| is sufficiently large. Note that when |tα − s| is bounded, ||(s, t)|| → ∞ if and only if |t| → ∞. Also, if |t(tα − s)| < δ, then clearly |tα − s| is bounded and |Q(s, t)| / |t(tα − s)| = 1 for sufficiently large |t|. Combining all these facts, we can say that there exists a constant C > 0 such that
    #G′(ρ) − C ≤ #G(ρ) ≤ #G′(ρ) + C
for sufficiently large ρ. Since 0 < δ < 1, it follows from Lemma 1 that if (s, t) ∈ G′(ρ), then s = s_j and t = t_j, where s_j/t_j is a convergent of α in its continued fraction expansion. Also G′(ρ) = G′(|t_n|) if |t_n| ≤ ρ < |t_{n+1}|. Note that if (s_j, t_j) ∈ G′(|t_n|), then (a s_j, a t_j) ∈ G′(|t_n|) as well for any a ∈ F_q^*.
Now let us calculate the measure of H(ρ). Let A be the set given by
    A := {(x, y) ∈ K^2 : 0 < |xy| < δ, ||(x, y)|| ≤ ρ, |y| > k};
then
    η(H(ρ)) = |det(M)| η(A),
where M = ( a  b ; c  d ). Since bc − ad = 1, we have that η(H(ρ)) = η(A).
Note that for 0 < δ < 1, k > 1 and ρ ≥ k, there exist unique m_0, m′_0, t and i ∈ Z such that
    q^{m_0} ≤ δ < q^{m_0+1},   q^{m′_0} ≤ √δ < q^{m′_0+1},
    q^{m′_0+t} ≤ k < q^{m′_0+t+1}   and   q^{m′_0+t+i} ≤ ρ < q^{m′_0+t+i+1}.
Also for 1 ≤ n ≤ i, let
    A_n := {(x, y) ∈ K^2 : |x| ≤ q^{m_0−m′_0−t−n} and |y| = q^{m′_0+t+n}}.
Clearly the A_n's are disjoint, and it is easy to see that A = ∪_{n=1}^{i} A_n. Hence, η(A) = Σ_{n=1}^{i} η(A_n). Now
    {y ∈ K : |y| ≤ q^{m′_0+t+n}} = {y ∈ K : |y| < q^{m′_0+t+n}} ∪ {y ∈ K : |y| = q^{m′_0+t+n}}.
Therefore,
    η(A_n) = µ({x ∈ K : |x| ≤ q^{m_0−m′_0−t−n}}) · µ({y ∈ K : |y| = q^{m′_0+t+n}})
           = µ({x ∈ K : |x| ≤ q^{m_0−m′_0−t−n}})
             · (µ({y ∈ K : |y| ≤ q^{m′_0+t+n}}) − µ({y ∈ K : |y| < q^{m′_0+t+n}}))
           = (q^{m_0−m′_0−t−n+1}) · (q^{m′_0+t+n+1} − q^{m′_0+t+n})
           = (q^{m_0−m′_0−t−n+1}) (q^{m′_0+t+n}) (q − 1)
           = q^{m_0+1}(q − 1),
and consequently,
    η(H(ρ)) = η(A) = Σ_{n=1}^{i} η(A_n) = i q^{m_0+1}(q − 1).
Since q^{m′_0+t+i} ≤ ρ < q^{m′_0+t+i+1}, it follows that
    (m′_0 + t + i) log q ≤ log ρ < (m′_0 + t + i + 1) log q,
which implies that
    log ρ / log q − m′_0 − t − 1 < i ≤ log ρ / log q − m′_0 − t.
Hence,
(9)    (log ρ / log q − m′_0 − t − 1)(q − 1) q^{m_0+1} < η(H(ρ)) ≤ (log ρ / log q − m′_0 − t)(q − 1) q^{m_0+1}.
Now,

\[
\begin{aligned}
\liminf_{\rho\to\infty} \frac{\#G(\rho)}{\eta(H(\rho))}
&\ge \liminf_{\rho\to\infty} \frac{\#G'(\rho) - C}{\eta(H(\rho))} \\
&= \liminf_{n\to\infty} \frac{\#G'(|t_n|) - C}{\eta(H(|t_n|))} \qquad (\text{for } |t_n| \le \rho < |t_{n+1}|) \\
&= \liminf_{n\to\infty} \frac{\frac{1}{n}\big(\#G'(|t_n|) - C\big)}{\frac{1}{n}\,\eta(H(|t_n|))} \\
&\ge \frac{\displaystyle\liminf_{n\to\infty} \tfrac{1}{n}\,\#G'(|t_n|)}{\displaystyle\limsup_{n\to\infty} \tfrac{1}{n}\,\eta(H(|t_n|))} \\
&\ge \frac{\displaystyle\liminf_{n\to\infty} \tfrac{1}{n}(q-1)\,\#\big\{j : 1 \le j \le n,\ |b_j| \ge \tfrac{1}{\delta}\big\}}{\displaystyle\limsup_{n\to\infty} \tfrac{1}{n}\left(\tfrac{\log|t_n|}{\log q} - m'_0 - t\right) q^{m_0+1}(q-1)} \qquad (\text{by (4) and (9)}) \\
&\ge \frac{\displaystyle\liminf_{n\to\infty} \tfrac{1}{n}\,\#\big\{j : 1 \le j \le n,\ |b_j| \ge \tfrac{1}{\delta}\big\}}{\displaystyle\limsup_{n\to\infty} \tfrac{1}{n}\left(\tfrac{\log|b_1 b_2 \cdots b_n|}{\log q} - m'_0 - t\right) q^{m_0+1}} \qquad (\text{by (3)}) \\
&\ge \frac{\displaystyle\liminf_{n\to\infty} \tfrac{1}{n}\,\#\big\{j : 1 \le j \le n,\ |b_j| \ge \tfrac{1}{\delta}\big\}}{\displaystyle\limsup_{n\to\infty} \tfrac{1}{n}\left(\tfrac{\sum_{j=1}^{n} \log|b_j|}{\log q} - m'_0 - t\right) q^{m_0+1}} \\
&= \frac{e(\delta)}{\alpha_+}\,\frac{\log q}{q^{m_0+1}}.
\end{aligned}
\]

A similar calculation yields

\[ \limsup_{\rho\to\infty} \frac{\#G(\rho)}{\eta(H(\rho))} \le \frac{f(\delta)}{\alpha_-}\,\frac{\log q}{q^{m_0+1}}. \]
Corollary 4. Let Q be a quadratic form as in Theorem 2, and let 0 < δ < 1 be fixed. Then there exists a subset K′ of K with µ(K′) = µ(K) such that if α = −b/a ∈ K′, then

\[ \lim_{\rho\to\infty} \frac{\#G(\rho)}{\eta(H(\rho))} = \frac{q-1}{q^{\lceil \delta^{-1} \rceil + m_0 + 1}}, \]

where ⌈δ^{−1}⌉ denotes the smallest integer greater than or equal to δ^{−1}.

Proof. Let [b_0, b_1, b_2, …] be the continued fraction expansion of α = −b/a as above. It follows from Theorem 6 of [1] that there is a full measure subset K′ of K such that if α = −b/a ∈ K′, then

\[ \lim_{n\to\infty} |b_1 b_2 \cdots b_n|^{1/n} = q^{\frac{q}{q-1}}. \tag{10} \]

This implies that

\[ \lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^{n} \log|b_j| = \frac{q}{q-1}\,\log q, \]

and, therefore, α_− = α_+ = (q/(q−1)) log q. Also, for any 0 < δ < 1, there exists a unique l ∈ N such that l = ⌈δ^{−1}⌉. Then by Theorem 14 of [12], for α in a full measure set, which without loss of generality we may assume to be K′,

\[ \lim_{n\to\infty} \frac{1}{n}\,\#\{1 \le j \le n : |b_j| \ge q^l\} = \frac{1}{q^{l-1}}, \]

which implies that e(δ) = f(δ) = 1/q^{l−1} = 1/q^{⌈δ^{−1}⌉−1}. Then it follows from Theorem 2 above that, if α = −b/a ∈ K′, then

\[ \lim_{\rho\to\infty} \frac{\#G(\rho)}{\eta(H(\rho))} = \frac{1/q^{\lceil \delta^{-1} \rceil - 1}}{\frac{q}{q-1}\,\log q}\cdot\frac{\log q}{q^{m_0+1}} = \frac{q-1}{q^{\lceil \delta^{-1} \rceil + m_0 + 1}}. \qquad \square \]
Remark 5. Let Q, α be as in Theorem 2. Now, if the absolute values of the partial quotients in the continued fraction expansion of α are bounded by some real number, then it is easy to see that e(δ) = f(δ) = 0 if δ is sufficiently small. In this case,

\[ \lim_{\rho\to\infty} \frac{\#G(\rho)}{\eta(H(\rho))} = 0. \]

3. K is the field of p-adic numbers

In this section, we consider isotropic quadratic forms with coefficients in the field of p-adic numbers for a prime p. Recall that the field of p-adic numbers, denoted by Q_p, is the collection of all formal series of the form

\[ \sum_{j \ge n_0} a_j p^j, \quad \text{with } n_0 \in \mathbb{Z} \text{ and } a_j \in \{0, 1, \ldots, p-1\}. \]

The ultrametric absolute value on Q_p is defined as follows: if α (≠ 0) = Σ_{j≥n_0} a_j p^j, then

\[ |\alpha|_p := p^{-\nu_p(\alpha)}, \quad \text{and} \quad |0|_p = 0, \]

where ν_p(α) := inf{j ∈ Z : a_j ≠ 0}. The integer ν_p(α) is also known as the valuation of α. For a ∈ Q_p and r ∈ Z, let

\[ B(a, p^r) := \{\alpha \in \mathbb{Q}_p : |\alpha - a|_p < p^r\} \]

be the open disc of radius p^r around the point a. The Haar measure µ (say) on Q_p is normalized in such a way that µ(B(a, p^r)) = p^r. We denote by η again the product measure µ ⊗ µ on Q_p × Q_p.
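For a rational number viewed inside Q_p, the valuation ν_p and the absolute value |·|_p defined above can be computed directly. The following is a small illustrative sketch (the helper names `vp` and `abs_p` are ours, not notation from the paper):

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation nu_p of a nonzero rational x."""
    x = Fraction(x)
    assert x != 0
    v, n, d = 0, x.numerator, x.denominator
    while n % p == 0:       # powers of p in the numerator raise the valuation
        n, v = n // p, v + 1
    while d % p == 0:       # powers of p in the denominator lower it
        d, v = d // p, v - 1
    return v

def abs_p(x, p):
    """p-adic absolute value |x|_p = p^(-nu_p(x)), with |0|_p = 0."""
    x = Fraction(x)
    return Fraction(0) if x == 0 else Fraction(p) ** (-vp(x, p))
```

For instance, |3/4|_2 = 4 and |12|_2 = 1/4, and one can check the ultrametric inequality |x + y|_p ≤ max{|x|_p, |y|_p} on such examples.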
As in the case of real numbers and of elements of Laurent series fields over finite fields, continued fraction expansions exist for p-adic numbers as well. There are mainly two types of continued fractions for p-adic numbers: one was introduced by Schneider (see [17] for instance), and the other was introduced by Ruban (see [15] for instance) and modified later by Browkin (see [3], [4]). In this article, we are going to consider the continued fraction introduced by Ruban, which has some similarity with the simple continued fraction for real numbers. From now on, unless otherwise stated, we will be considering Ruban's continued fraction only. Let Z be the subset of Q_p given by

\[ Z := \left\{ a_0 + a_1 \frac{1}{p} + \cdots + a_n \frac{1}{p^n} : a_i \in \{0, 1, \ldots, p-1\} \text{ for } 0 \le i \le n \right\}. \]

It is easy to see that Z is a discrete set in the topology coming from the p-adic absolute value. For α (≠ 0) = Σ_{j≥n_0} a_j p^j, let

\[ \lfloor \alpha \rfloor = \begin{cases} \displaystyle\sum_{j=n_0}^{0} a_j p^j & \text{if } n_0 \le 0, \\[1ex] 0 & \text{if } n_0 \ge 1. \end{cases} \]

Given α ∈ Q_p, we define two sequences (α_n) and (b_n) as follows: α_0 = α, b_0 = ⌊α_0⌋; for n ≥ 0, if b_n = α_n, then α_{n+1} and b_{n+1} are not defined; otherwise, α_{n+1} = (α_n − b_n)^{−1} and b_{n+1} = ⌊α_{n+1}⌋. Any p-adic number α has a unique continued fraction expansion α = [b_0, b_1, …, b_n, …], which can be obtained by using the algorithm discussed above. Note that the partial quotients b_n are elements of Z. The nth convergent is given by s_n/t_n = [b_0, b_1, …, b_n], where s_n and t_n satisfy the recurrence relation as in (1), and equation (2) as well. The p-adic versions of equations (3), (4) and (5) are valid as well, with the absolute value in the Laurent series field replaced by the p-adic absolute value. As we could not find a proper reference for a p-adic version of Lemma 1, we include a proof here, following the proof of Lemma 1 given in [16].
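The algorithm above can be run verbatim on rational numbers, since ⌊α⌋ only involves the finitely many digits a_j with j ≤ 0, and each such digit can be read off by reducing modulo p. A hedged Python sketch (the helper names are ours):

```python
from fractions import Fraction

def vp(x, p):
    # p-adic valuation nu_p of a nonzero rational
    v, n, d = 0, x.numerator, x.denominator
    while n % p == 0:
        n, v = n // p, v + 1
    while d % p == 0:
        d, v = d // p, v - 1
    return v

def ruban_floor(x, p):
    # floor(x) = sum of the digits a_j p^j with n_0 <= j <= 0; it is 0 when nu_p(x) >= 1
    if x == 0 or vp(x, p) >= 1:
        return Fraction(0)
    f = Fraction(0)
    for j in range(vp(x, p), 1):
        y = (x - f) / Fraction(p) ** j   # p-integral, so its denominator is invertible mod p
        a = (y.numerator * pow(y.denominator, -1, p)) % p
        f += a * Fraction(p) ** j
    return f

def ruban_cf(x, p, max_terms=10):
    # partial quotients b_0, b_1, ... of Ruban's continued fraction
    terms = []
    for _ in range(max_terms):
        b = ruban_floor(x, p)
        terms.append(b)
        if x == b:                       # b_n = alpha_n: the expansion terminates
            break
        x = 1 / (x - b)
    return terms
```

For instance, in Q_5 one finds ⌊−4/5⌋ = 4 + 1/5, the expansion of 7/5 is the single term [7/5], and the expansion of 1/2 begins [3, 23/5, 24/5, …]; every partial quotient lies in the set Z defined above.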
Lemma 6. Let s, t ∈ Z with t ≠ 0. Then s/t is a convergent to α if and only if

\[ \left| \alpha - \frac{s}{t} \right|_p < \frac{1}{|t|_p^2}. \tag{11} \]

Proof. By the p-adic version of equation (4),

\[ \left| \alpha - \frac{s_n}{t_n} \right|_p = \frac{1}{|b_{n+1}|_p\, |t_n|_p^2} < \frac{1}{|t_n|_p^2} \]

for any convergent s_n/t_n corresponding to the continued fraction expansion of α.

Conversely, assume that s, t ∈ Z with t ≠ 0 are such that

\[ \left| \alpha - \frac{s}{t} \right|_p < \frac{1}{|t|_p^2}. \]

There is a unique n such that |t_n|_p ≤ |t|_p < |t_{n+1}|_p. Then

\[ \left| \alpha - \frac{s}{t} \right|_p < \frac{1}{|t|_p\, |t_n|_p}, \]

and

\[ \left| \alpha - \frac{s_n}{t_n} \right|_p = \frac{1}{|t_n|_p\, |t_{n+1}|_p} \quad (\text{by the } p\text{-adic version of (5)}) \quad < \frac{1}{|t|_p\, |t_n|_p}, \]

so that

\[ \left| \frac{s}{t} - \frac{s_n}{t_n} \right|_p = \left| \frac{s}{t} - \alpha + \alpha - \frac{s_n}{t_n} \right|_p \le \max\left\{ \left| \alpha - \frac{s}{t} \right|_p,\ \left| \alpha - \frac{s_n}{t_n} \right|_p \right\} < \frac{1}{|t|_p\, |t_n|_p}. \]

MANOJ CHOUDHURI AND PRASHANT J. MAKADIYA

Thus, s/t = s_n/t_n. □
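As a quick numerical sanity check of inequality (11) (ours, not part of the paper's argument), one can compute Ruban convergents of a rational number with exact arithmetic and verify the inequality for each of them; all helper names below are our own, following the algorithm and the recurrence (1) described above:

```python
from fractions import Fraction as F

def vp(x, p):
    # p-adic valuation of a nonzero rational
    v, n, d = 0, x.numerator, x.denominator
    while n % p == 0:
        n, v = n // p, v + 1
    while d % p == 0:
        d, v = d // p, v - 1
    return v

def abs_p(x, p):
    return F(0) if x == 0 else F(p) ** (-vp(x, p))

def ruban_floor(x, p):
    # sum of the digits a_j p^j with j <= 0
    if x == 0 or vp(x, p) >= 1:
        return F(0)
    f = F(0)
    for j in range(vp(x, p), 1):
        y = (x - f) / F(p) ** j
        f += (y.numerator * pow(y.denominator, -1, p)) % p * F(p) ** j
    return f

def ruban_cf(x, p, n_terms):
    bs = []
    for _ in range(n_terms):
        b = ruban_floor(x, p)
        bs.append(b)
        if x == b:
            break
        x = 1 / (x - b)
    return bs

def convergents(bs):
    # s_n = b_n s_{n-1} + s_{n-2}, t_n = b_n t_{n-1} + t_{n-2}  (recurrence (1))
    s_prev, t_prev, s, t = F(1), F(0), bs[0], F(1)
    out = [(s, t)]
    for b in bs[1:]:
        s_prev, s = s, b * s + s_prev
        t_prev, t = t, b * t + t_prev
        out.append((s, t))
    return out

p, alpha = 5, F(1, 2)
for s, t in convergents(ruban_cf(alpha, p, 4)):
    # inequality (11): |alpha - s/t|_p < 1/|t|_p^2
    assert abs_p(alpha - s / t, p) < 1 / abs_p(t, p) ** 2
```

For α = 1/2 in Q_5, the first two convergents are 3/1 and 74/23, with |1/2 − 74/23|_5 = 1/125 < 1/25 = 1/|23/5|_5², as (11) predicts.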
Now, let Q be a non-degenerate isotropic binary quadratic form with coefficients in Q_p. Since Q_p has characteristic zero, as explained in the previous section, it is enough to consider Q defined by

\[ Q(x, y) = (ax + by)(cx + dy) \]

with a, b, c, d in Q_p and bc − ad = 1. We also assume that b/a is not of the form s/t for some s, t ∈ Z with t ≠ 0. Let P denote the set of all those (s, t) ∈ Z × Z such that s and t do not have a common factor except the constant polynomials in 1/p inside Z. For k > 1 and 0 < δ < 1, we define G(ρ) and H(ρ) as in the previous section as follows:

\[ G(\rho) := \{(s, t) \in P : 0 < |Q(s, t)|_p < \delta,\ \|(s, t)\| \le \rho,\ |cs + dt|_p > k\}, \]
\[ H(\rho) := \{(x, y) \in K^2 : 0 < |Q(x, y)|_p < \delta,\ \|(x, y)\| \le \rho,\ |cx + dy|_p > k\}, \]

where ‖(s, t)‖ = max{|s|_p, |t|_p}. Also let α = −b/a and β = a/c, and let the continued fraction expansion of α be given by α = [b_0, b_1, b_2, …]. The quantities e(δ), f(δ), α_− and α_+ are defined similarly as in the previous section, with the absolute value replaced by the p-adic absolute value wherever applicable. Then an analogue of Theorem 2 holds in this setup as well.

Theorem 7. With all the notations as above, if α_− < ∞, then

\[ \liminf_{\rho\to\infty} \frac{\#G(\rho)}{\eta(H(\rho))} \ge c\,\frac{e(\delta)}{\alpha_+}, \quad \text{and} \quad \limsup_{\rho\to\infty} \frac{\#G(\rho)}{\eta(H(\rho))} \le c\,\frac{f(\delta)}{\alpha_-}, \]

where c = \frac{\log p}{p^{m_0+1}}.
Let X = B(0, 1) and T : X → X be the continued fraction map defined by

\[ T(\alpha) = \frac{1}{\alpha} - \left\lfloor \frac{1}{\alpha} \right\rfloor. \]

It is known that the map T is ergodic with respect to the Haar measure µ (see [15] for details). As an application of the ergodicity, we obtain a result similar to Theorem 14 of [12].

Lemma 8. Let α ∈ X and [0, b_1, b_2, …] be the continued fraction expansion of α. Then for any natural number l,

\[ \lim_{n\to\infty} \frac{1}{n}\,\#\{1 \le j \le n : -\nu_p(b_j) \ge l\} = \frac{1}{p^{l-1}} \]

almost everywhere with respect to the Haar measure µ.

Proof. Note that b_1 = b_1(α) can be thought of as a function on B(0, 1). Then it is easy to check that the function

\[ f(\alpha) = \chi_{[p^l, \infty)}(|b_1(\alpha)|_p), \quad \alpha \in B(0, 1), \]

is integrable on B(0, 1). Now, by the pointwise ergodic theorem (see Theorem 2.30 of [8] for instance),

\[
\begin{aligned}
\lim_{n\to\infty} \frac{1}{n}\,\#\{1 \le j \le n : -\nu_p(b_j) \ge l\}
&= \lim_{n\to\infty} \frac{1}{n}\,\#\{1 \le j \le n : |b_j|_p \ge p^l\} \\
&= \lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^{n} \chi_{[p^l, \infty)}(|b_1(T^j(\alpha))|_p) \\
&= \int_{B(0,1)} \chi_{[p^l, \infty)}(|b_1(\alpha)|_p)\, d\mu \\
&= \mu\{\alpha \in B(0, 1) : |b_1(\alpha)|_p \ge p^l\} \\
&= \mu\{\alpha \in B(0, 1) : |\alpha|_p \le p^{-l}\} \\
&= p^{-l+1} = \frac{1}{p^{l-1}}. \qquad \square
\end{aligned}
\]
Now, using Theorem 8 of [15] and Lemma 8 above, we obtain a p-adic version of Corollary 4.

Corollary 9. Let Q be a quadratic form as in Theorem 7, and let 0 < δ < 1 be fixed. Then there exists a subset K′ of K with µ(K′) = µ(K) such that if α = −b/a ∈ K′, then

\[ \lim_{\rho\to\infty} \frac{\#G(\rho)}{\eta(H(\rho))} = \frac{p-1}{p^{\lceil \delta^{-1} \rceil + m_0 + 1}}. \]
It is easy to see that a version of Remark 5 is true in the p-adic setup as well. As the statements are similar, we do not write it separately here. Rather, we give an example of a p-adic number whose continued fraction expansion consists of partial quotients with bounded absolute values. One may look at [11] and the references cited therein for similar examples in Laurent series fields over finite fields. Let α be the p-adic number given by α = Σ_{j≥−1} a_j p^j, with a_j = 1 for all j ≥ −1. Let the continued fraction expansion of α be [b_0, b_1, b_2, …]. Then b_0 = p^0 + p^{−1} and |b_0|_p = p. Now

\[ \alpha_1 = (\alpha_0 - b_0)^{-1} = \left( \sum_{j \ge 1} p^j \right)^{-1} = p^{-1} + \sum_{j \ge 0} (p-1)\, p^j. \]

Then b_1 = (p − 1)p^0 + p^{−1} and |b_1|_p = p. Again

\[ \alpha_2 = (\alpha_1 - b_1)^{-1} = \left( \sum_{j \ge 1} (p-1)\, p^j \right)^{-1} = \sum_{j \ge -1} (p-1)\, p^j. \]

Then b_2 = (p − 1)p^0 + (p − 1)p^{−1} and |b_2|_p = p. Observe that α_3 = (α_2 − b_2)^{−1} = α_2, and hence |b_3|_p = p. In a similar manner we get α_{n+1} = α_n, b_{n+1} = b_n, |b_{n+1}|_p = p for n ≥ 3 as well. Therefore, the absolute values of all the partial quotients of the continued fraction expansion of α are bounded by p.
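This computation can also be checked mechanically: in Q_p one has Σ_{j≥0} p^j = 1/(1−p), so α = Σ_{j≥−1} p^j is the rational number 1/p + 1/(1−p), and the steps above can be replayed in exact rational arithmetic (a sketch with our own helper names, not the paper's notation):

```python
from fractions import Fraction as F

def vp(x, p):
    # p-adic valuation of a nonzero rational
    v, n, d = 0, x.numerator, x.denominator
    while n % p == 0:
        n, v = n // p, v + 1
    while d % p == 0:
        d, v = d // p, v - 1
    return v

def ruban_floor(x, p):
    # sum of the digits a_j p^j with j <= 0
    if x == 0 or vp(x, p) >= 1:
        return F(0)
    f = F(0)
    for j in range(vp(x, p), 1):
        y = (x - f) / F(p) ** j
        f += (y.numerator * pow(y.denominator, -1, p)) % p * F(p) ** j
    return f

for p in (3, 7, 11):
    a0 = F(1, p) + F(1, 1 - p)            # alpha = sum_{j >= -1} p^j
    b0 = ruban_floor(a0, p)
    assert b0 == 1 + F(1, p)              # b0 = p^0 + p^{-1}
    a1 = 1 / (a0 - b0)
    b1 = ruban_floor(a1, p)
    assert b1 == (p - 1) + F(1, p)        # b1 = (p-1)p^0 + p^{-1}
    a2 = 1 / (a1 - b1)
    b2 = ruban_floor(a2, p)
    assert b2 == (p - 1) + F(p - 1, p)    # b2 = (p-1)p^0 + (p-1)p^{-1}
    a3 = 1 / (a2 - b2)
    assert a3 == a2                       # the tail is constant from here on
```

All the partial quotients produced this way indeed have p-adic absolute value exactly p, matching the bound stated above.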
Remark 10. As in the case of binary real quadratic forms, the Oppenheim conjecture fails to hold for non-degenerate isotropic quadratic forms with coefficients in a non-discrete locally compact non-Archimedean field as well. To see this, let us consider the quadratic form Q given by

\[ Q(x, y) = (x + \alpha y)\, y \]

with α ∈ F_q((X^{−1})) (or Q_p). Now, if the partial quotients in the continued fraction expansion of α have bounded absolute values, then, using Lemma 1 (or Lemma 6), it is easy to see that the set of values {|Q(s, t)| : s, t ∈ Z} (Z being either as in Section 2 or as in Section 3) avoids a certain neighbourhood of zero.

Acknowledgement. Prashant J. Makadiya acknowledges the support of the Government of Gujarat through the SHODH (ScHeme Of Developing High Quality Research) fellowship. Manoj Choudhuri thanks L. Singhal for helpful discussions.
References

[1] Valérie Berthé and Hitoshi Nakada. On continued fraction expansions in positive characteristic: equivalence relations and some metric properties. Expo. Math., 18(4):257–284, 2000.
[2] Armand Borel and Gopal Prasad. Values of isotropic quadratic forms at S-integral points. Compositio Math., 83(3):347–372, 1992.
[3] Jerzy Browkin. Continued fractions in local fields. I. Demonstratio Math., 11(1):67–82, 1978.
[4] Jerzy Browkin. Continued fractions in local fields. II. Math. Comp., 70(235):1281–1292, 2001.
[5] Manoj Choudhuri. On certain orbits of geodesic flow and (a, b)-continued fractions. Proc. Indian Acad. Sci. Math. Sci., 131(1):Paper No. 2, 19, 2021.
[6] Manoj Choudhuri and S. G. Dani. On values of binary quadratic forms at integer points. Math. Res. Lett., 22(4):1023–1045, 2015.
[7] S. G. Dani and G. A. Margulis. Limit distributions of orbits of unipotent flows and values of quadratic forms. In I. M. Gelfand Seminar, volume 16 of Adv. Soviet Math., pages 91–137. Amer. Math. Soc., Providence, RI, 1993.
[8] Manfred Einsiedler and Thomas Ward. Ergodic theory with a view towards number theory, volume 259 of Graduate Texts in Mathematics. Springer-Verlag London, Ltd., London, 2011.
[9] Alex Eskin, Gregory Margulis, and Shahar Mozes. On a quantitative version of the Oppenheim conjecture. Electron. Res. Announc. Amer. Math. Soc., 1(3):124–130, 1995.
[10] Svetlana Katok and Ilie Ugarcovici. Arithmetic coding of geodesics on the modular surface via continued fractions. In European women in mathematics—Marseille 2003, volume 135 of CWI Tract, pages 59–77. Centrum Wisk. Inform., Amsterdam, 2005.
[11] Alain Lasjaunias and Jean-Jacques Ruch. Algebraic and badly approximable power series over a finite field. Finite Fields Appl., 8(1):91–107, 2002.
[12] Poj Lertchoosakul and Radhakrishnan Nair. On the metric theory of continued fractions in positive characteristic. Mathematika, 60(2):307–320, 2014.
[13] G. A. Margulis. Oppenheim conjecture. In Fields Medallists' lectures, volume 5 of World Sci. Ser. 20th Century Math., pages 272–327. World Sci. Publ., River Edge, NJ, 1997.
[14] Dinakar Ramakrishnan and Robert J. Valenza. Fourier analysis on number fields, volume 186 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1999.
[15] A. A. Ruban. Certain metric properties of the p-adic numbers. Sibirsk. Mat. Ž., 11:222–227, 1970.
[16] Wolfgang M. Schmidt. On continued fractions and Diophantine approximation in power series fields. Acta Arith., 95(2):139–166, 2000.
[17] Th. Schneider. Über p-adische Kettenbrüche. In Symposia Mathematica, Vol. IV (INDAM, Rome, 1968/69), pages 181–189. Academic Press, London, 1970.
[18] David Simmons. The Hurwitz continued fraction expansion as applied to real numbers. Enseign. Math., 62(3-4):475–485, 2016.

Institute of Infrastructure, Technology, Research and Management, Near Khokhara Circle, Maninagar (East), Ahmedabad 380026, Gujarat, India.
Email address: [email protected]
Email address: [email protected]
19AzT4oBgHgl3EQfDfqo/content/tmp_files/load_file.txt ADDED
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf,len=381
2
+ page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
3
+ page_content='00978v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
4
+ page_content='NT] 3 Jan 2023 ON VALUES OF ISOTROPIC QUADRATIC FORMS MANOJ CHOUDHURI AND PRASHANT J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
5
+ page_content=' MAKADIYA Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
6
+ page_content=' Let K be either a locally compact non-discrete field of characteristic p > 2 or K = Qp, and Q be a non-degenerate isotropic quadratic form with coefficients in K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
7
+ page_content=' We obtain asymp- totic estimates for the number of solutions in the two fold product of certain discrete set inside K, of the inequalities of the form |Q(x, y)| < δ for some δ > 0, where | · | is an ultrametric abso- lute value on K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
8
+ page_content=' The estimates are obtained in terms of continued fraction expansions of the coefficients of the quadratic form Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
9
+ page_content=' Mathematics Subject Classification: 11E16, 11E08, 11D88, 11A55, 11J70, 11K50, 37A44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
10
+ page_content=' Keywords: Quadratic forms, locally compact fields, asymptotic esti- mates, continued fractions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
11
+ page_content=' Contents 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
12
+ page_content=' Introduction 1 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
13
+ page_content=' K has positive characteristic (> 2) 3 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
14
+ page_content=' K is the field of p-adic numbers 10 References 14 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
15
+ page_content=' Introduction The Oppenheim conjecture, solved by Margulis in 1987 (see [13] for more details), states that if Q is a real non-degenerate indefinite quadratic form which is not proportional to a form with rational coeffi- cients, then Q(Zn) is dense in R if n ≥ 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
16
+ page_content=' After Oppenheim conjecture was settled, people got interested in studying finer questions related to the distribution of the values of Q on integral points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
17
+ page_content=' Given a quadratic form as above, and a, b, ρ ∈ R with ρ > 0, let NQ(a, b, ρ) := # {v ∈ Zn : a < Q(v) < b, v ∈ B(ρ)}, B(ρ) being the ball of radius ρ around the origin in Rn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
18
+ page_content=' Also let VQ(a, b, ρ) := Vol ({v ∈ Rn : a < Q(v) < b, v ∈ B(ρ)}).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
19
+ page_content=' 1 2 MANOJ CHOUDHURI AND PRASHANT J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
20
+ page_content=' MAKADIYA Then it was shown by Dani and Margulis in [7] that lim inf ρ→∞ NQ(a, b, ρ) VQ(a, b, ρ) = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
21
+ page_content=' Asymptotic upper bound for the quantity NQ(a,b,ρ) VQ(a,b,ρ) was found by Eskin, Margulis and Mozes (see [9] for instance), and combining the result of [7], they showed that if Q is a quadratic form as above such that the signature of Q is neither (2, 1) nor (2, 2), then lim ρ→∞ NQ(a, b, ρ) VQ(a, b, ρ) = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
22
+ page_content=' The Oppenheim conjecture fails for binary quadratic forms due to the existence of badly approximable numbers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
23
+ page_content=' A real number α is called badly approximable if there exists c > 0 such that ���α − p q ��� > c q2 for any rational number p q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
24
+ page_content=' Now, let Q be the binary quadratic form defined by Q(x, y) = (x + αy)y, α being a badly approximable number.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
25
+ page_content=' Then Q(Z2) avoids the neigh- bourhood (−c, c) of zero.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
26
+ page_content=' Nevertheless, one can study the distribution of the values taken by such forms at integral points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
27
+ page_content=' This was done in [6] with the interval (a, b) being a neighbourhood of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
28
+ page_content=' In case of binary quadratic forms, the asymptotic estimates depend on the quadratic form under consideration, and they are given in terms of the partial quotients of the continued fraction expansions of the coeffi- cients of the quadratic form.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
29
+ page_content=' There is a natural connection between the values of non-degenerate indefinite binary quadratic forms at integral points, and certain geometric and dynamical aspects of the orbits of geodesic flow associated with the modular surface.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
30
+ page_content=' In [6], the authors explored this connection, and used a method of coding of geodesics on the modular surface via nearest integer continued fraction which was introduced by S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
31
+ page_content=' Katok and I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
32
+ page_content=' Ugarcovicci (see [10] for instance), to obtain the estimates (see [18] for a different proof which does not uses the mechinary of geodesic flow etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
33
+ page_content=').' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
34
+ page_content=' The method of [6] can be adopted to obtain similar type of estimates in terms of a more general class of continued farctions as well, see Remark 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
35
+ page_content='4 of [5] for more details.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
36
+ page_content=' In the present article, we do a similar study for non-degenerate isotropic binary quadratic forms whose coefficients are coming from a non-discrete locally compact field K such that either K has char- acteristic p > 2, or K is the field of p-adic numbers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
37
+ page_content=' In the following sections, we first deal with the positive characteristic case and then con- sider quadratic forms with coefficients in Qp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
38
+ page_content=' Note that an analogue of Oppenheim conjecture holds in S-arithmetic setting for isotropic quadratic forms in n ≥ 3 variables (see [2] for more details) as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
39
+ page_content=' ON VALUES OF ISOTROPIC QUADRATIC FORMS 3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
40
+ page_content=' K has positive characteristic (> 2) By the classification of non-discrete locally compact fields, if K is of positive characteristic, then K is the Laurent series fields in one indeterminate over a finite field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
41
+ page_content=' Let p be an odd prime, q be a power of p, and Fq be the finite field of characteristic p consisting of q elements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
42
+ page_content=' We denote by Z the polynomial ring Fq[X] in one variable over Fq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
43
+ page_content=' Let Fq(X) be the field of rational functions with coefficients in Fq and K := Fq((X−1)) be the field of formal Laurent series in X−1 over Fq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
44
+ page_content=' More precisely, if α ∈ Fq((X−1)), then α = � j≥n0 ajX−j, aj ∈ Fq, n0 ∈ Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
45
+ page_content=' Whenever α ∈ Fq((X−1))\\Fq(X), we call α an irrational element.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
46
+ page_content=' We define a valuation ν on K as follows: if α = � n≥n0 anX−n, then ν(α) := inf {j ∈ Z : aj ̸= 0}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
47
+ page_content=' This valuation gives rise to an absolute value on K as follows: if α(̸= 0) ∈ K and ν(α) = dα, then |α| := qdα, and the absolute value of the zero element in K is 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
48
+ page_content=' Then K is the completion of Fq(X) with respect to this absolute value.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
49
+ page_content=' As ν is a non-Archimedean valuation, the absolute value defined above is an ultrametric absolute value.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
50
+ page_content=' Being a locally compact field, K admits a Haar measure (see [14] for details) which we denote by µ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
51
+ page_content=' For a ∈ K and r ∈ Z, let B(a, qr) := {α ∈ K : |α − a| < qr} be the open disc around a of radius qr, then µ(B(a, qr)) = qr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
52
+ page_content=' Let µ⊗µ be the corresponding product measure on K2 which is denoted by η.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
53
+ page_content=' As in the case of real numbers, any α in K has a unique continued fraction expansion α = b0 + 1 b1 + 1 b2 + 1 b3 + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
54
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
55
+ page_content='. , also written as α = [b0, b1, b2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
56
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
57
+ page_content='.] with bj ∈ Z for j ≥ 0 and bj has positive degree for j ≥ 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
58
+ page_content=' Given any α = � j≥n0 ajX−j in K, let ⌊α⌋ = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 0 � j=n0 ajX−j if n0 ≤ 0 0 if n0 ≥ 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
59
MANOJ CHOUDHURI AND PRASHANT J. MAKADIYA

Then the continued fraction algorithm is defined as follows:
α_0 := α, b_n = ⌊α_n⌋, α_{n+1} := (α_n − b_n)^{−1}.
Here the b_n are called partial quotients and the α_n are called complete quotients of the continued fraction expansion of α (see [16] for more details).
Now let s_n/t_n be the nth convergent of the continued fraction expansion of α, i.e.,
s_n/t_n = [b_0, b_1, b_2, ..., b_n].
Then the sequences (s_n)_{n≥0} and (t_n)_{n≥0} in Z satisfy the following recurrence relations:
(1) s_n = b_n s_{n−1} + s_{n−2}, t_n = b_n t_{n−1} + t_{n−2}.
They also satisfy the following equation:
(2) s_{n+1} t_n − s_n t_{n+1} = (−1)^n,
which tells us that s_n and t_n are coprime, i.e., they do not have any common factor other than the constant polynomials in F_q[X].
The following equalities, which are special features of continued fraction theory, will be quite useful for this article. If α, b_n, s_n, t_n are as above, then
(3) |t_n| = |b_n ··· b_1| for all n ≥ 1,
(4) |α − s_n/t_n| = 1/(|b_{n+1}| |t_n|^2), and
(5) |α − s_n/t_n| = 1/(|t_{n+1}| |t_n|).
Note that in the case of continued fractions for real numbers, inequalities hold instead of the equalities in (4) and (5). This is because of the ultrametric nature of the absolute value on K.
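The partial quotients of a rational function in F_q(X) are exactly the successive quotients of the Euclidean algorithm on its numerator and denominator, so the recurrences (1) and the identities (2) and (3) can be checked mechanically. The following is an illustrative sketch of ours (not the authors' code), over F_3 with an arbitrary example fraction; polynomials are coefficient tuples with constant term first.

```python
# Illustrative sketch: continued fractions in F_q(X), here q = 3.
q = 3

def trim(f):
    f = list(f)
    while f and f[-1] == 0:
        f.pop()
    return tuple(f)

def deg(f):
    return len(f) - 1 if f else -1           # zero polynomial encoded as deg -1

def add(f, g):
    n = max(len(f), len(g))
    return trim([((f[k] if k < len(f) else 0) + (g[k] if k < len(g) else 0)) % q
                 for k in range(n)])

def mul(f, g):
    if not f or not g:
        return ()
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % q
    return trim(out)

def divmod_poly(f, g):
    """Long division in F_q[X]; returns (quotient, remainder)."""
    inv_lead = pow(g[-1], q - 2, q)          # inverse of the leading coefficient
    rem, quo = list(f), [0] * max(len(f) - len(g) + 1, 1)
    while trim(rem) and len(trim(rem)) >= len(g):
        rem = list(trim(rem))
        shift = len(rem) - len(g)
        c = (rem[-1] * inv_lead) % q
        quo[shift] = c
        for k, coef in enumerate(g):
            rem[shift + k] = (rem[shift + k] - c * coef) % q
    return trim(quo), trim(rem)

# The partial quotients of num/den are the successive Euclidean quotients.
num, den = (1, 0, 2, 0, 1), (2, 1, 1)        # (X^4 + 2X^2 + 1)/(X^2 + X + 2)
bs = []
while den:
    b, r = divmod_poly(num, den)
    bs.append(b)
    num, den = den, r

# Convergents via the recurrences (1), starting from s_{-1} = 1, t_{-1} = 0.
s_prev, s = (1,), bs[0]
t_prev, t = (), (1,)
for n in range(1, len(bs)):
    s_prev, s = s, add(mul(bs[n], s), s_prev)
    t_prev, t = t, add(mul(bs[n], t), t_prev)
    # (2): s_n t_{n-1} - s_{n-1} t_n is a nonzero constant, so gcd(s_n, t_n) = 1.
    det = add(mul(s, t_prev), mul(tuple((q - c) % q for c in t), s_prev))
    assert deg(det) == 0
    # (3): |t_n| = |b_n ... b_1|, i.e. deg t_n = deg b_1 + ... + deg b_n.
    assert deg(t) == sum(deg(bj) for bj in bs[1:n + 1])
```

On this example the expansion terminates after three partial quotients, and the final convergent s/t recovers the original fraction, as it must for a rational function.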
The following lemma is a simple characterization of the convergents of the continued fraction expansion of any element of K; the proof can be found in [16].

Lemma 1. Let s, t ∈ Z with t ≠ 0. Then s/t is a convergent to α if and only if
(6) |α − s/t| < 1/|t|^2.
Now, let us consider binary quadratic forms with coefficients in K. It is well known that if Q is a non-degenerate isotropic quadratic form with coefficients in a field F of characteristic not equal to 2, then there exists a basis {v_1, v_2} of F^2 such that, if a_1, a_2 ∈ F, then Q(a_1 v_1 + a_2 v_2) = a_1 a_2.

ON VALUES OF ISOTROPIC QUADRATIC FORMS

This says in particular that if Q_0 is the quadratic form on K^2 defined by Q_0(x, y) = xy for x, y ∈ K, then for any isotropic quadratic form Q on K^2 there is a matrix A_Q in SL(2, K) and γ in K such that
(7) Q(x, y) = γ Q_0(A_Q(x, y)).
So, to study the asymptotic behaviour of the set of values of an isotropic quadratic form with coefficients in K, it is enough to consider quadratic forms Q given as follows: Q(x, y) = (ax + by)(cx + dy) with a, b, c, d ∈ K, bc − ad = 1.
Now let Q be a quadratic form of the type Q(x, y) = (ax + by)(cx + dy) with a, b, c, d ∈ K, bc − ad = 1 (there is no loss of generality, because one may replace γ by −γ in (7)), such that b/a is an irrational element of K. Also let P be the set of primitive elements of Z^2, i.e., P is the set of those (s, t) in Z^2 such that s and t do not have a common factor except constant polynomials. For fixed real numbers k and δ with k > 1 and 0 < δ < 1, let
G(ρ) := {(s, t) ∈ P : 0 < |Q(s, t)| < δ, ∥(s, t)∥ ≤ ρ, |cs + dt| > k},
where ∥(s, t)∥ = max{|s|, |t|}.
Let α = −b/a and β = ac, and let the continued fraction expansion of α be given by α = [b_0, b_1, b_2, ...], with s_n/t_n its nth convergent. Also let
H(ρ) := {(x, y) ∈ K^2 : 0 < |Q(x, y)| < δ, ∥(x, y)∥ ≤ ρ, |cx + dy| > k}.
In this article, we find asymptotic lower and upper bounds for the quotient #G(ρ)/η(H(ρ)) as ρ → ∞. Now let
α_− := lim inf_{n→∞} (1/n) Σ_{j=1}^{n} log|b_j| and α_+ := lim sup_{n→∞} (1/n) Σ_{j=1}^{n} log|b_j|.
Also, for 0 < δ < 1, let
e(δ) := lim inf_{n→∞} (1/n) #{j : 1 ≤ j ≤ n, |b_{j+1}| ≥ 1/δ}
and
f(δ) := lim sup_{n→∞} (1/n) #{j : 1 ≤ j ≤ n, |b_{j+1}| ≥ 1/δ}.
The main result of this article is contained in the following theorem.
Theorem 2. Let Q be a quadratic form defined by Q(x, y) = (ax + by)(cx + dy) with a, b, c, d ∈ K, bc − ad = 1, and b/a an irrational element of K. Also let G(ρ), H(ρ), α_+, α_−, e(δ), f(δ) be as defined above. If α_− < ∞, then we have the following:
lim inf_{ρ→∞} #G(ρ)/η(H(ρ)) ≥ c e(δ)/α_+ and lim sup_{ρ→∞} #G(ρ)/η(H(ρ)) ≤ c f(δ)/α_−,
where c is a constant depending on δ and q.
Remark 3. Let
I(ρ) := {(s, t) ∈ P : 0 < |Q(s, t)| < δ, ∥(s, t)∥ ≤ ρ, |as + bt| > k}
and
J(ρ) := {(x, y) ∈ K^2 : 0 < |Q(x, y)| < δ, ∥(x, y)∥ ≤ ρ, |ax + by| > k}.
Then one can obtain similar estimates for #I(ρ)/η(J(ρ)) in terms of the continued fraction expansion of −d/c, provided d/c is an irrational element of K.
Proof of Theorem 2. Let G′(ρ) := {(s, t) ∈ P : |t(tα − s)| < δ, |t| ≤ ρ}. It is easy to see that
(8) Q(s, t) = (tα − s)(t + β(tα − s)).
If |Q(s, t)| < δ with |cs + dt| > k, then |as + bt| < δ/k, which implies that |tα − s| < δ/(|a|k); i.e., |tα − s| is bounded. Now by (8),
|Q(s, t)|/|t(tα − s)| = |1 + (β/t)(tα − s)|.
Since |tα − s| is bounded, it follows that |Q(s, t)|/|t(tα − s)| = 1 if |t| is sufficiently large. Note that when |tα − s| is bounded, ∥(s, t)∥ → ∞ if and only if |t| → ∞. Also, if |t(tα − s)| < δ, then clearly |tα − s| is bounded and |Q(s, t)|/|t(tα − s)| = 1 for sufficiently large |t|. Combining all these facts, we can say that there exists a constant C > 0 such that
#G′(ρ) − C ≤ #G(ρ) ≤ #G′(ρ) + C
for sufficiently large ρ.
Since 0 < δ < 1, it follows from Lemma 1 that if (s, t) ∈ G′(ρ), then s = s_j and t = t_j, where s_j/t_j is a convergent of the continued fraction expansion of α. Also G′(ρ) = G′(|t_n|) if |t_n| ≤ ρ < |t_{n+1}|. Note that if (s_j, t_j) ∈ G′(|t_n|), then (a s_j, a t_j) ∈ G′(|t_n|) as well, for any a ∈ F_q^*. Now let us calculate the measure of H(ρ).
Let A be the set given by
A := {(x, y) ∈ K^2 : 0 < |xy| < δ, ∥(x, y)∥ ≤ ρ, |y| > k};
then η(H(ρ)) = |det(M)| η(A), where M is the matrix with rows (a, b) and (c, d). Since bc − ad = 1, we have that η(H(ρ)) = η(A).
Note that for 0 < δ < 1, k > 1 and ρ ≥ k, there exist unique m_0, m′_0, t and i ∈ Z such that
q^{m_0} ≤ δ < q^{m_0+1}, q^{m′_0} ≤ √δ < q^{m′_0+1}, q^{m′_0+t} ≤ k < q^{m′_0+t+1}, and q^{m′_0+t+i} ≤ ρ < q^{m′_0+t+i+1}.
Also, for 1 ≤ n ≤ i, let
A_n := {(x, y) ∈ K^2 : |x| ≤ q^{m_0−m′_0−t−n} and |y| = q^{m′_0+t+n}}.
Clearly the A_n are disjoint, and it is easy to see that A = ∪_{n=1}^{i} A_n. Hence,
η(A) = Σ_{n=1}^{i} η(A_n).
Now
{y ∈ K : |y| ≤ q^{m′_0+t+n}} = {y ∈ K : |y| < q^{m′_0+t+n}} ∪ {y ∈ K : |y| = q^{m′_0+t+n}}.
Therefore,
η(A_n) = µ({x ∈ K : |x| ≤ q^{m_0−m′_0−t−n}}) · µ({y ∈ K : |y| = q^{m′_0+t+n}})
= µ({x ∈ K : |x| ≤ q^{m_0−m′_0−t−n}}) · (µ({y ∈ K : |y| ≤ q^{m′_0+t+n}}) − µ({y ∈ K : |y| < q^{m′_0+t+n}}))
= q^{m_0−m′_0−t−n+1} · (q^{m′_0+t+n+1} − q^{m′_0+t+n})
= q^{m_0−m′_0−t−n+1} · q^{m′_0+t+n} · (q − 1)
= q^{m_0+1}(q − 1),
and consequently,
η(H(ρ)) = η(A) = Σ_{n=1}^{i} η(A_n) = i q^{m_0+1}(q − 1).
Since q^{m′_0+t+i} ≤ ρ < q^{m′_0+t+i+1}, it follows that (m′_0 + t + i) log q ≤ log ρ < (m′_0 + t + i + 1) log q, which implies that
log ρ/log q − m′_0 − t − 1 < i ≤ log ρ/log q − m′_0 − t.
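Pinning down the integers m_0, m′_0, t, i is exact integer arithmetic, so the count η(H(ρ)) = i q^{m_0+1}(q − 1) can be sanity-checked without floating point. The following sketch uses illustrative values of ours (q = 3, δ = 1/3, k = 9, ρ = 3^10), not values from the paper:

```python
from fractions import Fraction

def flog(q, x, s=1):
    """Largest integer m with q**(s*m) <= x, for a positive rational x (exact)."""
    m = 0
    while Fraction(q) ** (s * m) > x:
        m -= 1
    while Fraction(q) ** (s * (m + 1)) <= x:
        m += 1
    return m

# Illustrative values (ours): q = 3, delta = 1/3, k = 9, rho = 3^10.
q, delta, k, rho = 3, Fraction(1, 3), Fraction(9), Fraction(3) ** 10

m0  = flog(q, delta)           # q^m0 <= delta < q^(m0+1)
m0p = flog(q, delta, s=2)      # q^m0p <= sqrt(delta) < q^(m0p+1), compared squared
t   = flog(q, k) - m0p         # q^(m0p+t) <= k < q^(m0p+t+1)
i   = flog(q, rho) - m0p - t   # q^(m0p+t+i) <= rho < q^(m0p+t+i+1)

# Check the defining inequalities of the four exponents.
assert Fraction(q) ** m0 <= delta < Fraction(q) ** (m0 + 1)
assert Fraction(q) ** (m0p + t) <= k < Fraction(q) ** (m0p + t + 1)
assert Fraction(q) ** (m0p + t + i) <= rho < Fraction(q) ** (m0p + t + i + 1)

# The measure computed in the text: eta(H(rho)) = i * q^(m0+1) * (q - 1).
eta = i * Fraction(q) ** (m0 + 1) * (q - 1)
```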
Hence,
(9) (log ρ/log q − m′_0 − t − 1)(q − 1) q^{m_0+1} < η(H(ρ)) ≤ (log ρ/log q − m′_0 − t)(q − 1) q^{m_0+1}.
Now,
lim inf_{ρ→∞} #G(ρ)/η(H(ρ))
≥ lim inf_{ρ→∞} (#G′(ρ) − C)/η(H(ρ))
= lim inf_{n→∞} (#G′(|t_n|) − C)/η(H(|t_n|))  (for |t_n| ≤ ρ < |t_{n+1}|)
= lim inf_{n→∞} [(1/n)(#G′(|t_n|) − C)] / [(1/n) η(H(|t_n|))]
≥ [lim inf_{n→∞} (1/n) #G′(|t_n|)] / [lim sup_{n→∞} (1/n) η(H(|t_n|))]
≥ [lim inf_{n→∞} (1/n)(q − 1) #{j : 1 ≤ j ≤ n, |b_j| ≥ 1/δ}] / [lim sup_{n→∞} (1/n)((log|t_n|/log q) − m′_0 − t) q^{m_0+1}(q − 1)]  (by (4) and (9))
≥ [lim inf_{n→∞} (1/n) #{j : 1 ≤ j ≤ n, |b_j| ≥ 1/δ}] / [lim sup_{n→∞} (1/n)((log|b_1 b_2 ··· b_n|/log q) − m′_0 − t) q^{m_0+1}]  (by (3))
≥ [lim inf_{n→∞} (1/n) #{j : 1 ≤ j ≤ n, |b_j| ≥ 1/δ}] / [lim sup_{n→∞} (1/n)((Σ_{j=1}^{n} log|b_j|/log q) − m′_0 − t) q^{m_0+1}]
= (e(δ)/α_+) · (log q/q^{m_0+1}).
A similar calculation yields
lim sup_{ρ→∞} #G(ρ)/η(H(ρ)) ≤ (f(δ)/α_−) · (log q/q^{m_0+1}).
Corollary 4. Let Q be a quadratic form as in Theorem 2, and let 0 < δ < 1 be fixed. Then there exists a subset K′ of K with µ(K′) = µ(K) such that if α = −b/a ∈ K′, then
lim_{ρ→∞} #G(ρ)/η(H(ρ)) = (q − 1)/q^{⌈δ^{−1}⌉+m_0+1},
where ⌈δ^{−1}⌉ denotes the smallest integer greater than or equal to δ^{−1}.
Proof. Let [b_0, b_1, b_2, ...] be the continued fraction expansion of α = −b/a as above. It follows from Theorem 6 of [1] that there is a full measure subset K′ of K such that if α = −b/a ∈ K′, then
(10) lim_{n→∞} |b_1 b_2 ··· b_n|^{1/n} = q^{q/(q−1)}.
This implies that
lim_{n→∞} (1/n) Σ_{j=1}^{n} log|b_j| = (q/(q−1)) log q,
and therefore α_− = α_+ = (q/(q−1)) log q.
Also, for any 0 < δ < 1, there exists a unique l ∈ N such that l = ⌈δ^{−1}⌉. Then by Theorem 14 of [12], for α in a full measure set, which without loss of generality we may assume to be K′,
lim_{n→∞} (1/n) #{1 ≤ j ≤ n : |b_j| ≥ q^l} = 1/q^{l−1},
which implies that e(δ) = f(δ) = 1/q^{l−1} = 1/q^{⌈δ^{−1}⌉−1}. Then it follows from Theorem 2 above that, if α = −b/a ∈ K′, then
lim_{ρ→∞} #G(ρ)/η(H(ρ)) = [1/q^{⌈δ^{−1}⌉−1}] · [1/((q/(q−1)) log q)] · (log q/q^{m_0+1}) = (q − 1)/q^{⌈δ^{−1}⌉+m_0+1}.
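Since the log q factors cancel in e(δ)/α_+ · (log q/q^{m_0+1}), the last display can be checked with exact arithmetic for sample values; here q = 3 and δ = 1/3, so m_0 = −1 and ⌈δ^{−1}⌉ = 3. These are illustrative values of ours, not from the paper:

```python
from fractions import Fraction

# Sample values (ours): q = 3, delta = 1/3, hence m0 = -1 and l = ceil(1/delta) = 3.
q, m0, l = 3, -1, 3

e_delta = Fraction(1, q ** (l - 1))        # e(delta) = f(delta) = 1/q^(l-1)
alpha_plus_over_logq = Fraction(q, q - 1)  # alpha_+ = (q/(q-1)) log q

# e(delta)/alpha_+ * (log q / q^(m0+1)) with the log q factors cancelled exactly:
limit = e_delta / alpha_plus_over_logq / Fraction(q) ** (m0 + 1)
assert limit == Fraction(q - 1, q ** (l + m0 + 1))
```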
□

Remark 5.
Let Q, α be as in Theorem 2. Now, if the absolute values of the partial quotients in the continued fraction expansion of α are bounded, then it is easy to see that e(δ) = f(δ) = 0 if δ is sufficiently small. In this case,
lim_{ρ→∞} #G(ρ)/η(H(ρ)) = 0.
3. K is the field of p-adic numbers

In this section, we consider isotropic quadratic forms with coefficients in the field of p-adic numbers, for a prime p.
Recall that the field of p-adic numbers, denoted by Q_p, is the collection of all formal series of the form Σ_{j≥n_0} a_j p^j, with n_0 ∈ Z and a_j ∈ {0, 1, ..., p − 1}. The ultrametric absolute value on Q_p is defined as follows: if α (≠ 0) = Σ_{j≥n_0} a_j p^j, then |α|_p := p^{−ν_p(α)}, and |0|_p = 0, where ν_p(α) := inf{j ∈ Z : a_j ≠ 0}. The integer ν_p(α) is also known as the valuation of α.
For a ∈ Q_p and r ∈ Z, let B(a, p^r) := {α ∈ Q_p : |α − a|_p < p^r} be the open disc of radius p^r around the point a. The Haar measure µ (say) on Q_p is normalized in such a way that µ(B(a, p^r)) = p^r. We denote by η again the product measure µ ⊗ µ on Q_p × Q_p.
As in the case of real numbers and of elements of Laurent series fields over finite fields, continued fraction expansions exist for p-adic numbers as well. There are mainly two types of continued fractions for p-adic numbers: one was introduced by Schneider (see [17] for instance), and the other was introduced by Ruban (see [15] for instance) and modified later by Browkin (see [3], [4]). In this article, we are going to consider the continued fraction introduced by Ruban, which has some similarity with the simple continued fraction for real numbers. From now on, unless otherwise stated, we will be considering Ruban's continued fraction only.
Let Z be the subset of Q_p given by
Z := {a_0 + a_1 (1/p) + ··· + a_n (1/p^n) : a_i ∈ {0, 1, ..., p − 1} for 0 ≤ i ≤ n}.
It is easy to see that Z is a discrete set in the topology coming from the p-adic absolute value.
For α (≠ 0) = Σ_{j≥n_0} a_j p^j, let
⌊α⌋ := Σ_{j=n_0}^{0} a_j p^j if n_0 ≤ 0, and ⌊α⌋ := 0 if n_0 ≥ 1.
Given α ∈ Q_p, we define two sequences (α_n) and (b_n) as follows: α_0 = α, b_0 = ⌊α_0⌋; for n ≥ 0, if b_n = α_n, then α_{n+1} and b_{n+1} are not defined; otherwise, α_{n+1} = (α_n − b_n)^{−1} and b_{n+1} = ⌊α_{n+1}⌋.
Any p-adic number α has a unique continued fraction expansion α = [b_0, b_1, ..., b_n, ...], which can be obtained by using the algorithm discussed above. Note that the partial quotients b_n are elements of Z.
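Ruban's algorithm is directly executable for rational α with exact arithmetic: the floor ⌊α⌋ collects the digits of the p-adic expansion from p^{ν_p(α)} up to p^0, each digit being a residue computed via a modular inverse of the denominator. The following is an illustrative sketch of ours (function names and the sample input are not from the paper):

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def ruban_floor(x, p):
    """The part sum_{j=n0}^{0} a_j p^j of the p-adic expansion of x
    (0 if x = 0 or v_p(x) >= 1), returned as an exact Fraction."""
    if x == 0 or vp(x, p) >= 1:
        return Fraction(0)
    out = Fraction(0)
    for j in range(vp(x, p), 1):             # j = n0, n0+1, ..., 0
        y = x / Fraction(p) ** j             # shift the digit a_j down to p^0
        a = (y.numerator * pow(y.denominator, -1, p)) % p
        out += a * Fraction(p) ** j
        x  -= a * Fraction(p) ** j
    return out

def ruban_cf(x, p, nmax=8):
    """First partial quotients b_0, b_1, ... of Ruban's expansion of x."""
    bs = []
    for _ in range(nmax):
        b = ruban_floor(x, p)
        bs.append(b)
        if x == b:                           # the expansion terminates
            break
        x = 1 / (x - b)
    return bs

bs = ruban_cf(Fraction(1, 3), 5, nmax=6)
# For 1/3 and p = 5 this yields [2, 22/5, 24/5, 24/5, ...]: the tail
# stabilises at 24/5 = p - 1/p, so the expansion is eventually periodic.
```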
The nth convergent is given by s_n/t_n = [b_0, b_1, ..., b_n], where s_n and t_n satisfy the recurrence relations in (1), as well as equation (2). The p-adic versions of equations (3), (4) and (5) are valid as well, with the absolute value on the Laurent series field replaced by the p-adic absolute value.
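As a concrete check of the p-adic version of (5), and of the inequality in Lemma 6 below, one can take the first few Ruban partial quotients of α = 1/3 for p = 5 (computed separately with Ruban's algorithm), build the convergents through the recurrences (1), and compare valuations. A self-contained sketch of ours:

```python
from fractions import Fraction

p = 5
alpha = Fraction(1, 3)
# First Ruban partial quotients of 1/3 for p = 5 (computed separately);
# from n = 2 on the quotient 24/5 repeats.
bs = [Fraction(2), Fraction(22, 5), Fraction(24, 5), Fraction(24, 5)]

def vp(x):
    """p-adic valuation of a nonzero rational x."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def pabs(x):
    """p-adic absolute value |x|_p of a nonzero rational x."""
    return Fraction(p) ** (-vp(x))

# Convergents via the recurrences (1); lists start at index -1.
s, t = [Fraction(1), bs[0]], [Fraction(0), Fraction(1)]   # s_{-1}, s_0; t_{-1}, t_0
for b in bs[1:]:
    s.append(b * s[-1] + s[-2])
    t.append(b * t[-1] + t[-2])

# p-adic version of (5): |alpha - s_n/t_n|_p = 1/(|t_{n+1}|_p |t_n|_p),
# which in particular gives |alpha - s_n/t_n|_p < 1/|t_n|_p^2.
for n in range(3):
    sn, tn, tn1 = s[n + 1], t[n + 1], t[n + 2]
    assert pabs(alpha - sn / tn) == 1 / (pabs(tn1) * pabs(tn))
    assert pabs(alpha - sn / tn) < 1 / pabs(tn) ** 2
```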
As we could not find a proper reference for a p-adic version of Lemma 1, we include a proof here, following the proof of Lemma 1 given in [16].

Lemma 6. Let s, t ∈ Z with t ≠ 0. Then s/t is a convergent to α if and only if
(11) |α − s/t|_p < 1/|t|_p^2.

Proof.
By the p-adic version of equation (4),
|α − s_n/t_n|_p = 1/(|b_{n+1}|_p |t_n|_p^2) < 1/|t_n|_p^2
for any convergent s_n/t_n corresponding to the continued fraction expansion of α. Conversely, assume that s, t ∈ Z with t ≠ 0 are such that |α − s/t|_p < 1/|t|_p^2. There is a unique n such that |t_n|_p ≤ |t|_p < |t_{n+1}|_p. Then
|α − s/t|_p < 1/(|t|_p |t_n|_p),
and
|α − s_n/t_n|_p = 1/(|t_n|_p |t_{n+1}|_p) (by the p-adic version of (5)) < 1/(|t|_p |t_n|_p),
so that
|s/t − s_n/t_n|_p = |s/t − α + α − s_n/t_n|_p ≤ max{|α − s/t|_p, |α − s_n/t_n|_p} < 1/(|t|_p |t_n|_p).
Thus, s/t = s_n/t_n.
□

Now, let Q be a non-degenerate isotropic binary quadratic form with coefficients in Q_p. Since Q_p has characteristic zero, as explained in the previous section, it is enough to consider Q defined by Q(x, y) = (ax + by)(cx + dy) with a, b, c, d in Q_p and bc − ad = 1. We also assume that b/a is not of the form s/t for some s, t ∈ Z with t ≠ 0.
+ page_content=' Let p denote the set of all those (s, t) ∈ Z such that s and t do not have a common factor except the constant polynomials in 1p inside Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
207
+ page_content=' For k > 1 and 0 < δ < 1, we define G(ρ) and H(ρ) as in the previous section as follows: G(ρ) := {(s, t) ∈ p : 0 < |Q(s, t)|p < δ, ||(s, t)|| ≤ ρ, |cs + dt|p > k}, H(ρ) := {(x, y) ∈ K2 : 0 < |Q(x, y)|p < δ, ||(x, y)|| ≤ ρ, |cx+dy|p > k}, here ||(s, t)|| = max { |s|p, |t|p }.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
208
+ page_content=' Also let α = −ba and β = ac, and the continued fraction expansion of α be given by α = [b0, b1, b2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
209
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
210
+ page_content='].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
211
+ page_content=' The quantities e(δ), f(δ), α− and α+ are defined similarly as in the previous section with the absolute value replaced by the p-adic absolute value wherever applicable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
212
+ page_content=' Then an analogue of Theorem 2 holds in this setting as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
213
+ page_content=' Theorem 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
214
+ page_content=' With all the notations as above, if α− < ∞, then lim inf ρ→∞ #G(ρ) η(H(ρ)) ≥ c e(δ) α+ , and lim sup ρ→∞ #G(ρ) η(H(ρ)) ≤ c f(δ) α− , where c = log p pm0 + 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
215
+ page_content=' Let X = B(0, 1) and T : X → X be the continued fraction map defined by T(α) = 1 α − � 1 α � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
216
+ page_content=' It is known that the map T is ergodic (see [15] for details) with respect to the Haar measure µ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
217
+ page_content=' As an application of the ergodicity, we obtain a result similar to Theorem 14 of [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
218
+ page_content=' Lemma 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
219
+ page_content=' Let α ∈ X and [0, b1, b2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
220
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
221
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
222
+ page_content=' ] be the continued fraction expansion of α.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
223
+ page_content=' Then for any natural number l, lim n→∞ #{1 ≤ j ≤ n : −νp(bj) ≥ l} = 1 pl−1 almost everywhere with respect to the Haar measure µ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
224
+ page_content=' ON VALUES OF ISOTROPIC QUADRATIC FORMS 13 Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
225
+ page_content=' Note that b1 = b1(α) can be thought of as a function on B(0, 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
226
+ page_content=' Then it is easy to check that the function f(α) = χ[pl,∞)(|b1(α)|p), α ∈ B(0, 1) is integrable on B(0, 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
227
+ page_content=' Now, by the pointwise ergodic theorem (see Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
228
+ page_content='30 of [8] for instance), lim n→∞ 1 n#{1 ≤ j ≤ n : −νp(bj) ≥ l} = lim n→∞ 1 n#{1 ≤ j ≤ n : |bj|p ≥ pl} = lim n→∞ 1 n n � j=1 χ[pl,∞)(|b1(T j(α))|p) = � B(0,1) χ[pl,∞)(|b1(α)|p)dµ = µ{α ∈ B(0, 1) : |b1(α)|p ≥ pl} = µ{α ∈ B(0, 1) : |α|p ≤ p−l} = p−l+1 = 1 pl−1 □ Now, using Theorem 8 of [15] and Lemma 8 above, we obtain a p-adic version of Corollary 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
229
+ page_content=' Corollary 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
230
+ page_content=' Let Q be a quadratic form as in Theorem 7, and 0 < δ < 1 be fixed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
231
+ page_content=' Then there exist a subset K ′ of K with µ(K ′) = µ(K) such that if α = −ba ∈ K ′, then lim ρ→∞ #G(ρ) η(H(ρ)) = p − 1 p⌈δ−1⌉+m0+1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
232
+ page_content=' It is easy to see that a version of Remark 5 is true in the p-adic setting as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
233
+ page_content=' As the statements are similar, we do not write it separately here.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
234
+ page_content=' Rather, we give an example of a p-adic number whose continued fraction expansion consists of partial quotients with bounded absolute values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
235
+ page_content=' One may look at [11] and references cited therein for similar examples in Laurent series fields over finite fields.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
236
+ page_content=' Let α be the p-adic number given by α = � j≥−1 ajpj, with aj = 1 for all j ≥ −1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
237
+ page_content=' Let the continued fraction expansion of α be [b0, b1, b2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
238
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
239
+ page_content='].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
240
+ page_content=' Then b0 = p0 + p−1 14 MANOJ CHOUDHURI AND PRASHANT J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
241
+ page_content=' MAKADIYA and |b0|p = p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
242
+ page_content=' Now α1 = (α0 − b0)−1 = \uf8eb \uf8ed� j≥1 pj \uf8f6 \uf8f8 −1 = p−1 + � j≥0 (p − 1)pj.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
243
+ page_content=' Then b1 = (p − 1)p0 + p−1 and |b1|p = p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
244
+ page_content=' Again α2 = (α1 − b1)−1 = \uf8eb \uf8ed� j≥1 (p − 1)pj \uf8f6 \uf8f8 −1 = � j≥−1 (p − 1)pj.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
245
+ page_content=' Then b2 = (p − 1)p0 + (p − 1)p−1 and |b2|p = p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
246
+ page_content=' Observe that α3 = (α2 − b2)−1 = α2, and hence |b3|p = p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
247
+ page_content=' In a similar manner we get αn+1 = αn, bn+1 = bn, |bn+1|p = p for n ≥ 3 as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
248
+ page_content=' Therefore, the absolute values of all the partial quotients of the continued fraction expansion of α are bounded by p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
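The periodic pattern computed in the extracted passage above (b0 = p^0 + p^{-1}, b1 = (p−1)p^0 + p^{-1}, b2 = (p−1)p^0 + (p−1)p^{-1}, then b_{n+1} = b_n) can be checked with exact rational arithmetic. The sketch below is an illustration, not code from the paper: it implements a Ruban-style p-adic continued fraction step for rational inputs, where the "integer part" collects the digits at exponents ≤ 0 of the p-adic expansion, and runs it on α = Σ_{j≥−1} p^j, which for p = 3 sums 3-adically to −1/6.

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational x."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def head(x: Fraction, p: int) -> Fraction:
    """Ruban 'integer part' of x: the digits of its p-adic expansion
    at exponents vp(x), ..., 0, each digit taken in {0, ..., p-1}."""
    b, y = Fraction(0), x
    for j in range(min(vp(x, p), 0), 1):
        if y == 0:
            break
        if vp(y, p) > j:
            continue  # digit at this exponent is 0
        u = y / Fraction(p) ** j  # p-adic unit: residue mod p gives the digit
        d = u.numerator * pow(u.denominator, -1, p) % p
        b += d * Fraction(p) ** j
        y -= d * Fraction(p) ** j
    return b

p = 3
alpha = Fraction(-1, 6)        # = sum_{j >= -1} 3^j in Q_3
a, partials, tails = alpha, [], [alpha]
for _ in range(5):
    b = head(a, p)
    partials.append(b)
    a = 1 / (a - b)            # the expansion is infinite here, so a != b
    tails.append(a)

# b0 = 1 + 1/3, b1 = 2 + 1/3, b2 = 2 + 2/3, and alpha_3 = alpha_2 (periodicity)
print(partials[:3], tails[3] == tails[2])
```

Every partial quotient produced has vp(b, 3) = −1, i.e. |b_n|_3 = 3, matching the claim that the absolute values of all partial quotients are bounded by p.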
249
+ page_content=' Remark 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
250
+ page_content=' As in the case of binary real quadratic forms, the Op- penheim conjecture fails to hold for non-degenerate isotropic quadratic form with coefficients in a non-discrete locally compact non-Archimedean field as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
251
+ page_content=' To see this, let us consider the quadratic form Q given by Q(x, y) = (x + αy)y with α ∈ Fq((X−1)) (or Qp).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
252
+ page_content=' Now if the partial quotients in the con- tinued fraction expansion of α have bounded absolute values, then us- ing Lemma 1 (or Lemma 6), it is easy to see that the set of values {|Q(s, t)| : s, t ∈ Z} (Z is either as in Section 1 or as in Section 2) avoids certain neighbourhood of zero.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
253
+ page_content=' Acknowledgement .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
254
+ page_content=' Prashant J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
255
+ page_content=' Makadiya acknowledges the support of the Government of Gujarat through the SHODH (ScHeme Of Developing High Quality Research) fellowship.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
256
+ page_content=' Manoj Choudhuri thanks L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
257
+ page_content=' Singhal for helpful discussions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
258
+ page_content=' References [1] Val´erie Berth´e and Hitoshi Nakada.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
259
+ page_content=' On continued fraction expansions in pos- itive characteristic: equivalence relations and some metric properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
260
+ page_content=' Expo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
261
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
262
+ page_content=', 18(4):257–284, 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
263
+ page_content=' [2] Armand Borel and Gopal Prasad.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
264
+ page_content=' Values of isotropic quadratic forms at S- integral points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
265
+ page_content=' Compositio Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
266
+ page_content=', 83(3):347–372, 1992.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
267
+ page_content=' ON VALUES OF ISOTROPIC QUADRATIC FORMS 15 [3] Jerzy Browkin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
268
+ page_content=' Continued fractions in local fields.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
269
+ page_content=' i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
270
+ page_content=' Demonstratio Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
271
+ page_content=', 11(1):67–82, 1978.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
272
+ page_content=' [4] Jerzy Browkin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
273
+ page_content=' Continued fractions in local fields.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
274
+ page_content=' ii.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
275
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
276
+ page_content=' Comp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
277
+ page_content=', 70(235):1281–1292, 2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
278
+ page_content=' [5] Manoj Choudhuri.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
279
+ page_content=' On certain orbits of geodesic flow and (a, b)-continued frac- tions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
280
+ page_content=' Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
281
+ page_content=' Indian Acad.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
282
+ page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
283
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
284
+ page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
285
+ page_content=', 131(1):Paper No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
286
+ page_content=' 2, 19, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
287
+ page_content=' [6] Manoj Choudhuri and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
288
+ page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
289
+ page_content=' Dani.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
290
+ page_content=' On values of binary quadratic forms at integer points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
291
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
292
+ page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
293
+ page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
294
+ page_content=', 22(4):1023–1045, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
295
+ page_content=' [7] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
296
+ page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
297
+ page_content=' Dani and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
298
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
299
+ page_content=' Margulis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
300
+ page_content=' Limit distributions of orbits of unipotent flows and values of quadratic forms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
301
+ page_content=' In I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
302
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
303
+ page_content=' Gelfand Seminar, volume 16 of Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
304
+ page_content=' Soviet Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
305
+ page_content=', pages 91–137.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
306
+ page_content=' Amer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
307
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
308
+ page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
309
+ page_content=', Providence, RI, 1993.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
310
+ page_content=' [8] Manfred Einsiedler and Thomas Ward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
311
+ page_content=' Ergodic theory with a view towards number theory, volume 259 of Graduate Texts in Mathematics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
312
+ page_content=' Springer-Verlag London, Ltd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
313
+ page_content=', London, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
314
+ page_content=' [9] Alex Eskin, Gregory Margulis, and Shahar Mozes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
315
+ page_content=' On a quantitative ver- sion of the Oppenheim conjecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
316
+ page_content=' Electron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
317
+ page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
318
+ page_content=' Announc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
319
+ page_content=' Amer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
320
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
321
+ page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
322
+ page_content=', 1(3):124–130, 1995.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
323
+ page_content=' [10] Svetlana Katok and Ilie Ugarcovici.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
324
+ page_content=' Arithmetic coding of geodesics on the modular surface via continued fractions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
325
+ page_content=' In European women in mathematics— Marseille 2003, volume 135 of CWI Tract, pages 59–77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
326
+ page_content=' Centrum Wisk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
327
+ page_content=' In- form.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
328
+ page_content=', Amsterdam, 2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
329
+ page_content=' [11] Alain Lasjaunias and Jean-Jacques Ruch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
330
+ page_content=' Algebraic and badly approximable power series over a finite field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
331
+ page_content=' Finite Fields Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
332
+ page_content=', 8(1):91–107, 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
333
+ page_content=' [12] Poj Lertchoosakul and Radhakrishnan Nair.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
334
+ page_content=' On the metric theory of continued fractions in positive characteristic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
335
+ page_content=' Mathematika, 60(2):307–320, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
336
+ page_content=' [13] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
337
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
338
+ page_content=' Margulis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
339
+ page_content=' Oppenheim conjecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
340
+ page_content=' In Fields Medallists’ lectures, volume 5 of World Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
341
+ page_content=' Ser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
342
+ page_content=' 20th Century Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
343
+ page_content=', pages 272–327.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
344
+ page_content=' World Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
345
+ page_content=' Publ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
346
+ page_content=', River Edge, NJ, 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
347
+ page_content=' [14] Dinakar Ramakrishnan and Robert J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
348
+ page_content=' Valenza.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
349
+ page_content=' Fourier analysis on number fields, volume 186 of Graduate Texts in Mathematics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
350
+ page_content=' Springer-Verlag, New York, 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
351
+ page_content=' [15] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
352
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
353
+ page_content=' Ruban.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
354
+ page_content=' Certain metric properties of the p-adic numbers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
355
+ page_content=' Sibirsk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
356
+ page_content=' Mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
357
+ page_content=' ˇZ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
358
+ page_content=', 11:222–227, 1970.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
359
+ page_content=' [16] Wolfgang M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
360
+ page_content=' Schmidt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
361
+ page_content=' On continued fractions and Diophantine approximation in power series fields.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
362
+ page_content=' Acta Arith.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
363
+ page_content=', 95(2):139–166, 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
364
+ page_content=' [17] Th.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
365
+ page_content=' Schneider.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
366
+ page_content=' ¨Uber p-adische Kettenbr¨uche.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
367
+ page_content=' In Symposia Mathematica, Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
368
+ page_content=' IV (INDAM, Rome, 1968/69), pages 181–189.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
369
+ page_content=' Academic Press, London, 1970.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
370
+ page_content=' [18] David Simmons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
371
+ page_content=' The Hurwitz continued fraction expansion as applied to real numbers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
372
+ page_content=' Enseign.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
373
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
374
+ page_content=', 62(3-4):475–485, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
375
+ page_content=' Institute of Infrastructure, Technology, Research and Manage- ment, Near Khokhara Circle, maninagar (East), Ahmedabad 380026, Gujarat, India.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
376
+ page_content=' Email address: manojchoudhuri@iitram.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
377
+ page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
378
+ page_content='in Email address: prashant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
379
+ page_content='makadiya.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
380
+ page_content='20pm@iitram.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
381
+ page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
382
+ page_content='in' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf'}
1NAyT4oBgHgl3EQfofhG/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:757c1f477e952fd08268c592763a0d2f09b62c99c8bc2a75a9b01769bb8a75e8
3
+ size 1966125
1tE0T4oBgHgl3EQf_wKi/content/tmp_files/2301.02831v1.pdf.txt ADDED
@@ -0,0 +1,756 @@
1
+ arXiv:2301.02831v1 [cs.IT] 7 Jan 2023
2
3
+ Joint Beamforming and Phase Shift Design for
4
+ Hybrid-IRS-aided Directional Modulation Network
5
+ Rongen Dong, Hangjia He, Feng Shu, Riqing Chen, and Jiangzhou Wang, Fellow, IEEE
6
+ Abstract—To make a good balance between performance,
7
+ cost, and power consumption, a hybrid intelligent reflecting
8
+ surface (IRS)-aided directional modulation (DM) network is
9
+ investigated in this paper, where the hybrid IRS consists of
10
+ passive and active reflecting elements. To maximize the achievable
11
+ rate, two optimization algorithms, called maximum signal-to-
12
+ noise ratio (SNR)-fractional programming (FP) (Max-SNR-FP)
13
+ and maximum SNR-equal amplitude reflecting (EAR) (Max-
14
+ SNR-EAR), are proposed to jointly design the beamforming
15
+ vector and IRS phase shift matrix by alternately optimizing one
16
+ and fixing another. The former employs the successive convex
17
+ approximation and FP methods to solve the beamforming vector
18
+ and hybrid IRS phase shift matrix, while the latter uses the
19
+ maximum signal-to-leakage-noise ratio method and the criteria
20
+ of phase alignment and EAR to design them. Simulation results
21
+ show that the rates harvested by the proposed two methods
22
+ are slightly lower than that of active IRS with higher power
23
+ consumption, which are 35 percent higher than those of no IRS
24
+ and random phase IRS, while passive IRS achieves only about
25
+ 17 percent rate gain over the latter. Moreover, compared to Max-
26
+ SNR-FP, the proposed Max-SNR-EAR method makes an obvious
27
+ complexity reduction at the cost of a slight rate performance loss.
28
+ Index Terms—Intelligent reflecting surface, directional modu-
29
+ lation, fractional programming, beamforming, phase shift
30
+ I. INTRODUCTION
31
+ Directional modulation (DM) is a promising solution to sig-
32
+ nificantly improve the performance of physical layer security
33
+ in wireless networks [1]. The design of DM synthesis is mainly
34
+ implemented in the radio frequency (RF) frontend or baseband.
35
+ For example, in [2], the signal was produced in a given
36
+ direction by shifting the phase of each antenna element at the
37
+ RF frontend. In [3], a multi-beam DM scenario was considered
38
+ to maximize the secure rate (SR), where the precoder and
39
+ the artificial noise (AN) were designed by maximizing signal-
40
+ to-leakage-noise ratio and maximizing the signal-to-AN ratio
41
+ methods, respectively.
42
+ Intelligent reflecting surface (IRS), as a cost and energy-
43
+ efficient solution to enhance the performance of the wire-
44
+ less communication system, has been adopted to aid various
45
+ This work was supported in part by the National Natural Science Foundation
46
+ of China (Nos.U22A2002, and 62071234), the Major Science and Technology
47
+ plan of Hainan Province under Grant ZDKJ2021022, and the Scientific
48
+ Research Fund Project of Hainan University under Grant KYQD(ZR)-21008.
49
+ Rongen Dong and Feng Shu are with the School of Information and Com-
50
+ munication Engineering, Hainan University, Haikou, 570228, China (Email:
51
52
+ Hangjia He is with the School of Electronic and Optical Engineering,
53
+ Nanjing University of Science and Technology, Nanjing, 210094, China.
54
+ Riqing Chen is with the Digital Fujian Institute of Big Data for Agriculture,
55
+ Fujian Agriculture and Forestry University, Fuzhou 350002, China (Email:
56
57
+ Jiangzhou Wang is with the School of Engineering, University of Kent,
58
+ Canterbury CT2 7NT, U.K. (Email: [email protected]).
59
+ wireless communication directions: unmanned aerial vehicle
60
+ communication [4], single-cell wireless communication [5],
61
+ multi-cell communication [6], etc. Recently, IRS-aided DM
62
+ systems have also been investigated. To maximize the SR
63
+ of IRS-aided DM system, the general alternating iterative
64
+ and null-space projection algorithms were proposed to jointly
65
+ obtain the transmit beamforming vectors and IRS phase shift
66
+ matrix in [7]. To maximize the receive power sum, the authors
67
+ in [8] proposed the general alternating optimization and zero-
68
+ forcing algorithms to jointly design the receive beamforming
69
+ vectors and IRS phase shift matrix.
70
+ However, all the above work was considered in the scenarios
71
+ of passive IRS, and the system may not be able to guarantee
72
+ a satisfactory achievable rate due to the presence of double
73
+ path loss in the cascaded channels. To overcome the “double
74
+ fading” effect and enhance the performance of the passive
75
+ IRS-aided wireless network, the fully active IRS has been
76
+ investigated [9], [10]. Due to the high power consumption and
77
+ hardware design of active IRS, a hybrid active-passive IRS
78
+ was proposed to overcome the limitation of passive and active
79
+ IRSs [11], [12]. The main idea of the hybrid IRS is to employ
80
+ some active elements in place of passive ones;
+ these active elements, with signal amplification capability,
82
+ can efficiently compensate for the path loss and increase the
83
+ achievable rate. To the best of the authors’ knowledge, the
84
+ hybrid IRS-aided DM system has not been investigated yet.
85
+ In this paper, we employ the hybrid IRS to further enhance
86
+ the performance of passive IRS-aided DM network. The main
87
+ contributions of this paper are summarized as follows:
88
+ 1) To make a good balance between performance, cost,
89
+ and power consumption, a hybrid IRS-aided DM system
90
+ model is proposed. To maximize the achievable rate,
91
+ the optimization problem of maximizing the signal-to-
92
+ noise ratio (SNR) is established, and the maximum SNR-
93
+ fractional programming (FP) (Max-SNR-FP) scheme is
94
+ proposed to jointly obtain the beamforming vector and
95
+ hybrid IRS phase shift matrix by optimizing one and
96
+ fixing another. In this scheme, the beamforming vector
97
+ and passive IRS phase shift matrix are solved by the
98
+ successive convex approximation algorithm, and the
99
+ active IRS phase shift matrix is computed by the FP
100
+ method.
101
+ 2) To reduce the high computational complexity of the
102
+ above scheme, a low-complexity maximum SNR-equal
103
+ amplitude reflecting (EAR) (Max-SNR-EAR) method is
104
+ proposed. By utilizing the maximum signal-to-leakage-
105
+ noise ratio (SLNR) method, the beamforming vector is
106
108
+ obtained. Moreover, the hybrid IRS phase shift matrix is
109
+ computed based on the criteria of phase alignment and
110
+ EAR. Simulation results show that the achievable rates
111
+ harvested by both the proposed methods are higher than
112
+ those of no IRS, random phase IRS, and passive IRS.
113
+ In addition, the difference in achievable rates between
114
+ these two methods is trivial when the number of hybrid
115
+ IRS elements becomes large.
116
+ The remainder of this paper is organized as follows. Section
117
+ II describes the system model of hybrid IRS-aided DM net-
118
+ work. The Max-SNR-FP scheme is presented in Section III.
119
+ Section IV describes the Max-SNR-EAR scheme. Numerical
120
+ simulation results are presented in Section V. Finally, we draw
121
+ conclusions in Section VI.
122
+ Notations: throughout this paper, boldface lower case and
123
+ upper case letters represent vectors and matrices, respectively.
124
+ Signs (·)T , (·)∗, (·)H, Tr(·), ℜ{·}, and diag{·} denote the
125
+ transpose, conjugate, conjugate transpose, trace, real part,
126
+ and diagonal operations, respectively. The sign | · | is the
127
+ determinant of a matrix or the absolute value of a scalar. The
128
+ symbol CN×N denotes the space of N × N complex-valued
129
+ matrix. The notation IN is the N × N identity matrix.
130
+ II. SYSTEM MODEL
131
+ As shown in Fig. 1, a hybrid IRS-aided DM system is
132
+ considered, where the base station (BS) is equipped with
133
+ N antennas, and the user (Bob) is equipped with a single
134
+ antenna. The hybrid IRS is equipped with M elements, which
135
+ consists of Ma active and Mp passive IRS reflecting elements
136
+ (M = Ma + Mp, 1 ≤ Ma ≤ Mp). It is assumed that the
137
+ active elements can tune both the phase and amplitude while
138
+ the passive ones can only shift the phase of the incident
139
+ signal. The signals reflected more than once on the hybrid IRS
140
+ are negligible due to the severe path loss [6]. All channels
141
+ are assumed to be line-of-sight channels since DM is only
142
+ applicable to line-of-sight channels. It is assumed that all the
143
+ channel state information is perfectly known through channel
144
+ estimation [13].
145
+ Fig. 1. System model of Hybrid-IRS-aided directional modulation network.
146
+ Similar to the conventional passive IRS, it is assumed that
147
+ each element of the hybrid IRS can independently reflect the inci-
148
+ dent signals. Let us denote the set of the Ma active elements by
149
+ Ω. Θ = diag{θ∗} = diag{θ1, · · · , θm, · · · , θM} ∈ CM×M,
150
+ Ψ = diag{ψ∗} ∈ CM×M, and Φ = diag{φ∗} ∈ CM×M are
151
+ the reflection coefficients of total elements, active elements,
152
+ and passive elements of hybrid IRS, respectively, where
153
+ θ_m = |β_m| e^{jµ_m}, if m ∈ Ω; θ_m = e^{jµ_m}, otherwise,  (1)
+ where µ_m ∈ [0, 2π) is the phase, and |β_m| is the amplifying
162
+ coefficient and determined by the total power of the active
163
+ elements. Let us define
164
+ Ψ = EMaΘ, Φ = EMpΘ,
165
+ (2)
166
+ where
167
+ EMa + EMp = IM, EMaEMp = 0M,
168
+ (3)
169
+ EMa is an M × M diagonal matrix whose non-zero elements
170
+ are all unity and have positions determined by Ω.
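The selection matrices in (2)-(3) are easy to construct and sanity-check. A minimal numpy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def partition_masks(M, omega):
    """Build the diagonal selection matrices E_Ma (active elements) and
    E_Mp (passive elements) for a hybrid IRS with index set `omega` active."""
    e_active = np.zeros(M)
    e_active[list(omega)] = 1.0          # ones at the active positions
    E_Ma = np.diag(e_active)             # selects the active elements
    E_Mp = np.eye(M) - E_Ma              # selects the passive elements
    return E_Ma, E_Mp

M, Ma = 8, 3
omega = range(Ma)                        # active elements fixed to the first Ma slots
E_Ma, E_Mp = partition_masks(M, omega)
# Properties (3): E_Ma + E_Mp = I_M and E_Ma E_Mp = 0_M
assert np.allclose(E_Ma + E_Mp, np.eye(M))
assert np.allclose(E_Ma @ E_Mp, np.zeros((M, M)))
```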
171
+ The transmitted signal at BS is
172
+ s = √P v x,  (4)
176
+ where P denotes the transmit power, v ∈ CN×1 and x are the
177
+ beamforming vector and the information symbol, satisfying
178
+ vHv = 1 and E[∥x∥2] = 1, respectively.
179
+ Taking the path loss into consideration, the received signal
180
+ at Bob is
181
+ y_b = (√ρ_srb h_rb^H Θ H_sr + √ρ_sb h_sb^H) s + √ρ_rb h_rb^H Ψ n_r + n_b
+ = √P (√ρ_srb h_rb^H Ψ H_sr + √ρ_srb h_rb^H Φ H_sr + √ρ_sb h_sb^H) v x + √ρ_rb h_rb^H Ψ n_r + n_b,  (5)
194
+ where ρsrb = ρsrρrb is the equivalent path loss coefficient
195
+ of BS-to-IRS channel and IRS-to-Bob channel, ρsb and ρrb
196
+ are the path loss coefficient of BS-to-Bob channel and IRS-
197
+ to-Bob channel, respectively. n_r ∼ CN(0, σ_r² I_{Ma}) and n_b ∼ CN(0, σ_b²)
+ denote the complex additive white Gaussian noise (AWGN) at the M_a active
+ elements of the hybrid IRS and at Bob, respectively. h_sb ∈ C^{N×1},
+ h_rb ∈ C^{M×1}, and H_sr = h_sr h_sr^H ∈ C^{M×N} are the BS-to-Bob, IRS-to-Bob, and BS-to-
205
+ IRS channels, respectively. Let us define the channel h_tr =
+ h(θ_tr), where the normalized steering vector h(θ) is
207
+ h(θ) = (1/√N) [e^{j2πΨ_θ(1)}, . . . , e^{j2πΨ_θ(n)}, . . . , e^{j2πΨ_θ(N)}]^T,  (6)
+ and the phase function Ψ_θ(n) is given by
+ Ψ_θ(n) ≜ −(n − (N + 1)/2) d cos θ / λ,  n = 1, . . . , N,  (7)
218
+ where θ represents the direction angle of arrival or departure,
219
+ n denotes the index of antenna, d is the spacing of adjacent
220
+ transmitting antennas, and λ represents the wavelength.
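For reference, (6)-(7) can be sketched in a few lines of numpy (a minimal sketch assuming half-wavelength spacing d = λ/2 by default; names are illustrative):

```python
import numpy as np

def steering_vector(theta, N, d_over_lambda=0.5):
    """Normalized steering vector h(theta) of (6), phase function (7)."""
    n = np.arange(1, N + 1)
    psi = -(n - (N + 1) / 2) * d_over_lambda * np.cos(theta)  # (7)
    return np.exp(1j * 2 * np.pi * psi) / np.sqrt(N)          # (6)

h = steering_vector(np.pi / 4, N=8)
```

By construction the vector has unit norm for any angle, since each of the N entries has magnitude 1/√N.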
221
+ In accordance with (5), the achievable rate at Bob can be
222
+ written as
223
+ R_b = log2(1 + SNR),  (8)
+ where
+ SNR = P |(√ρ_srb h_rb^H Ψ H_sr + √ρ_srb h_rb^H Φ H_sr + √ρ_sb h_sb^H) v|² / (σ_r² |√ρ_rb h_rb^H Ψ|² + σ_b²).  (9)
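Equations (8)-(9) can be checked numerically. A minimal numpy sketch (names are illustrative; channels are 1-D complex vectors, matching the definitions above):

```python
import numpy as np

def achievable_rate(v, Psi, Phi, h_rb, h_sb, H_sr,
                    P, rho_srb, rho_sb, rho_rb, sig2_r, sig2_b):
    """Achievable rate (8) at Bob with the SNR of (9)."""
    # Effective channel: active part + passive part + direct BS-to-Bob link
    eff = (np.sqrt(rho_srb) * h_rb.conj() @ (Psi + Phi) @ H_sr
           + np.sqrt(rho_sb) * h_sb.conj())
    signal = P * np.abs(eff @ v) ** 2
    # Thermal noise re-radiated by the active elements, plus receiver noise
    noise = sig2_r * rho_rb * np.linalg.norm(h_rb.conj() @ Psi) ** 2 + sig2_b
    return np.log2(1 + signal / noise)
```

Note that only the active part Ψ amplifies and forwards the IRS noise n_r, so the passive matrix Φ does not appear in the denominator.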
235
249
+ The transmit power of the active elements at the hybrid IRS
250
+ is given by
251
+ P_r = Tr( Ψ (ρ_sr P H_sr v v^H H_sr^H + σ_r² I_M) Ψ^H ),  (10)
+ which satisfies P_r ≤ P_r^max, where P_r^max represents the maximum
+ transmit power of the M_a active elements.
268
+ In this paper, we maximize the SNR by jointly optimizing
269
+ beamforming vector v, passive IRS phase shift matrix Φ, and
270
+ active IRS phase shift matrix Ψ. The optimization problem
271
+ can be formulated as
272
+ max_{v,Φ,Ψ} SNR  (11a)
+ s.t. v^H v = 1, P_r ≤ P_r^max,  (11b)
+ |Φ(m, m)| = 1, if m ∉ Ω,  (11c)
+ |Φ(m, m)| = 0, otherwise,  (11d)
+ |Ψ(m, m)| ≤ β_max, if m ∈ Ω,  (11e)
+ |Ψ(m, m)| = 0, otherwise,  (11f)
289
+ where β_max is the amplification budget. It is noted that this
290
+ optimization problem is a non-convex problem with a constant
291
+ modulus constraint, and it is challenging to solve it directly in
292
+ general. In what follows, we propose the alternating optimiza-
293
+ tion algorithm to design the beamforming vector and hybrid
294
+ IRS phase shift matrix, respectively.
295
+ III. PROPOSED MAX-SNR-FP SCHEME
296
+ In this section, we construct a Max-SNR-FP method to
297
+ jointly optimize the beamforming vector v, passive IRS phase
298
+ shift matrix Φ, and active IRS phase shift matrix Ψ. In what
299
+ follows, we will alternately solve for v, Φ, and Ψ.
300
+ A. Optimize v given Φ and Ψ
301
+ Firstly, we transform the power constraint in (11b) into a
302
+ convex constraint with respect to v as follows
303
+ P_r = v^H (ρ_sr P H_sr^H Ψ^H Ψ H_sr) v + Tr(σ_r² Ψ Ψ^H) ≤ P_r^max.  (12)
315
+ Then, given Φ and Ψ, the optimal beamforming vector v can
316
+ be found by solving the following problem
317
+ max_v v^H A v  s.t. v^H v = 1, (12),  (13)
+ where
+ A = (√ρ_srb h_rb^H Φ H_sr + √ρ_srb h_rb^H Ψ H_sr + √ρ_sb h_sb^H)^H (√ρ_srb h_rb^H Φ H_sr + √ρ_srb h_rb^H Ψ H_sr + √ρ_sb h_sb^H).  (14)
332
+ It is clear that this problem is not convex, and in accordance
333
+ with the Taylor series expansion, we have
334
+ v^H A v ≥ 2ℜ{v̄^H A v} − v̄^H A v̄,  (15)
+ where v̄ is a given vector. Then (13) can be recast as
+ max_v 2ℜ{v̄^H A v} − v̄^H A v̄  s.t. v^H v = 1, (12).  (16)
342
+ It is a convex optimization problem and can be solved by
343
+ employing the CVX tool.
344
+ B. Optimize Φ given v and Ψ
345
+ To simplify the SNR expression related to the phase shift
346
+ matrix Φ, we regard v and Ψ as two constants, and define
347
+ B = (√ρ_srb h_rb^H Ψ H_sr + √ρ_sb h_sb^H) v.  (17)
351
+ Then, the subproblem to optimize Φ can be expressed as
352
+ max_Φ |√ρ_srb h_rb^H Φ H_sr v + B|²  (18a)
+ s.t. |Φ(m, m)| = 1, if m ∉ Ω,  (18b)
+ |Φ(m, m)| = 0, otherwise.  (18c)
361
+ By defining
362
+ C = ρ_srb diag{h_rb^H} H_sr v v^H H_sr^H diag{h_rb^H}^H,  (19)
367
+ and based on the fact that diag{a}b = diag{b}a for a, b ∈
368
+ C^{M×1}, the objective function in (18) can be recast as
+ φ^H C φ + 2ℜ{√ρ_srb φ^H diag{h_rb^H} H_sr v B*} + |B|².  (20)
372
+ Based on the Taylor series expansion, we have
373
+ φ^H C φ ≥ 2ℜ{φ̄^H C φ} − φ̄^H C φ̄,  (21)
+ where φ̄ is a given vector. For the unit modulus constraint
+ (18b), it can be relaxed as
+ |Φ(m, m)| ≤ 1, if m ∉ Ω.  (22)
379
+ At this point, the problem (18) can be rewritten as
380
+ max_Φ 2ℜ{φ̄^H C φ} − φ̄^H C φ̄ + |B|² + 2ℜ{√ρ_srb φ^H diag{h_rb^H} H_sr v B*}
+ s.t. (22), (18c).  (23)
388
+ We can find that it is a convex optimization problem and can
389
+ be solved by employing the CVX tool.
390
+ C. Optimize Ψ given v and Φ
391
+ To optimize Ψ, we regard v and Φ as two given constants,
392
+ and transform the power constraint in (11b) into a convex
393
+ constraint on ψ as follows
394
+ P_r = Tr( Ψ (ρ_sr P H_sr v v^H H_sr^H + σ_r² I_M) Ψ^H )
+ = ψ^T (ρ_sr P diag{v^H H_sr^H} diag{H_sr v} + σ_r² I_M) ψ* ≤ P_r^max.  (24)
409
+ By neglecting the constant terms, the subproblem with respect
410
+ to Ψ is given by
411
+ max_Ψ |(√ρ_srb h_rb^H Ψ H_sr + √ρ_srb h_rb^H Φ H_sr + √ρ_sb h_sb^H) v|² / (σ_r² |√ρ_rb h_rb^H Ψ|² + σ_b²)  (25a)
+ s.t. (11e), (11f), (24).  (25b)
424
+ Let us define
425
+ D = (√ρ_srb h_rb^H Φ H_sr + √ρ_sb h_sb^H) v.  (26)
429
+ Then, the objective function in (25) can be converted to
430
+ (ψ^H C ψ + 2ℜ{ψ^H √ρ_srb diag{h_rb^H} H_sr v D*} + |D|²) / (σ_r² ρ_rb |ψ^H diag{h_rb^H}|² + σ_b²).  (27)
437
439
+ At this point, the optimization problem (25) becomes a nonlin-
440
+ ear fractional optimization problem. Based on the FP strategy
441
+ in [14], we introduce a parameter τ and transform the objective
442
+ function (27) as
443
+ ψ^H C ψ + 2ℜ{ψ^H √ρ_srb diag{h_rb^H} H_sr v D*} + |D|² − τ(σ_r² ρ_rb |ψ^H diag{h_rb^H}|² + σ_b²).  (28)
450
+ The optimal solution can be achieved if and only if
+ ψ^H C ψ + 2ℜ{ψ^H √ρ_srb diag{h_rb^H} H_sr v D*} + |D|² − τ(σ_r² ρ_rb |ψ^H diag{h_rb^H}|² + σ_b²) = 0. We linearize ψ^H C ψ
465
+ by employing Taylor series expansion at a given vector ¯ψ, the
466
+ subproblem with respect to Ψ can be rewritten as
467
+ max_{Ψ,τ} 2ℜ{ψ̄^H C ψ} − ψ̄^H C ψ̄ + 2ℜ{ψ^H √ρ_srb diag{h_rb^H} H_sr v D*} + |D|² − τ(σ_r² ρ_rb |ψ^H diag{h_rb^H}|² + σ_b²)
+ s.t. (11e), (11f), (24).  (29)
478
+ It should be noted that this problem is convex, which can be
479
+ effectively solved by the CVX tool. The whole procedure of
480
+ the Max-SNR-FP algorithm is described in Algorithm 1.
481
+ Algorithm 1 Proposed Max-SNR-FP algorithm
482
+ 1: Initialize v^(0), Φ^(0), and Ψ^(0); compute R_b^(0) based on (8).
+ 2: Set p = 0 and the threshold value ǫ.
+ 3: repeat
+ 4:   Given Φ^(p) and Ψ^(p), solve (16) to determine v^(p+1).
+ 5:   Given v^(p+1) and Ψ^(p), solve (23) to determine Φ^(p+1).
+ 6:   Given v^(p+1) and Φ^(p+1), solve (29) to determine Ψ^(p+1).
+ 7:   Compute R_b^(p+1) using v^(p+1), Φ^(p+1), and Ψ^(p+1).
+ 8:   p = p + 1.
+ 9: until |R_b^(p) − R_b^(p−1)| ≤ ǫ.
505
+ The computational complexity of the proposed Max-SNR-
506
+ FP algorithm is O(L((M + 1)³ + 2MN² + 2M²) ln(1/ǫ) +
+ M³ + N³ + 5M² + 2MN + 2M + 2MN²) floating-point operations
+ (FLOPs), where L is the number of alternating iterations and ǫ
+ denotes the accuracy.
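The τ-update used in Section III-C follows Dinkelbach's classical method [14]: at each step the auxiliary problem max N(x) − τ D(x) is solved, and τ is reset to the current ratio N(x)/D(x) until the auxiliary optimum reaches zero. A minimal generic sketch (a toy scalar ratio and a grid search stand in for the CVX subproblem (29); all names are illustrative):

```python
import numpy as np

def dinkelbach(N_func, D_func, argmax_aux, tau0=0.0, tol=1e-8, iters=100):
    """Generic Dinkelbach loop for max N(x)/D(x) with D(x) > 0.
    `argmax_aux(tau)` must return a maximizer of N(x) - tau*D(x)."""
    tau = tau0
    for _ in range(iters):
        x = argmax_aux(tau)
        F = N_func(x) - tau * D_func(x)   # optimality reached when F hits 0
        tau = N_func(x) / D_func(x)       # update the ratio parameter
        if abs(F) < tol:
            break
    return x, tau

# Toy example: maximize (2x - x^2 + 2) / (x + 1) over a grid on [0, 2];
# the maximum is 2, attained at x = 0.
grid = np.linspace(0.0, 2.0, 2001)
N = lambda x: 2 * x - x**2 + 2
D = lambda x: x + 1
argmax_aux = lambda tau: grid[np.argmax(N(grid) - tau * D(grid))]
x_star, tau_star = dinkelbach(N, D, argmax_aux)
```

In the paper's setting, the role of `argmax_aux` is played by the convex CVX subproblem (29) solved at the current τ.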
510
+ IV. PROPOSED MAX-SNR-EAR SCHEME
511
+ In the previous section, we proposed the Max-SNR-FP
512
+ method to design the beamforming vector v and the IRS phase shift
+ matrices Φ and Ψ. However, it has a high computational complexity.
514
+ To reduce the computational complexity, a low-complexity
515
+ method named Max-SNR-EAR is proposed in what follows.
516
+ A. Optimize v given Φ and Ψ
517
+ Given IRS phase shift matrices Φ and Ψ, in accordance with
518
+ the principle of maximizing SLNR in [15], the beamforming
519
+ vector v can be optimized by solving the following problem
520
+ max_v SLNR = v^H E v / (v^H (σ_b² I_N) v)  s.t. v^H v = 1, (12),  (30)
+ where
+ E = ρ_srb H_sr^H Φ^H h_rb h_rb^H Φ H_sr + ρ_srb H_sr^H Ψ^H h_rb h_rb^H Ψ H_sr + h_sb h_sb^H.  (31)
537
+ According to the Taylor series expansion and neglecting the
538
+ constant terms, the problem (30) can be recast as
+ max_v 2ℜ{v̄^H E v} − v̄^H E v̄  s.t. v^H v = 1, (12).  (32)
545
+ Note that it is a convex optimization problem and can be
546
+ solved with the CVX tool.
547
+ B. Optimize Φ and Ψ given v
548
+ Given the beamforming vector v, we first consider the design of
+ the hybrid IRS phase. The confidential message received
550
+ by Bob through the cascade path is expressed as
551
+ P ρ_srb h_rb^H Θ H_sr v v^H H_sr^H Θ^H h_rb.  (33)
555
+ To maximize the confidential message of the cascade path, the
556
+ phase alignment method is employed to design the hybrid IRS
557
+ phase θ̂, which is given by
+ θ̂ = [e^{−j arg(s_1)}, · · · , e^{−j arg(s_M)}]^T,  (34)
+ where s = diag{h_rb^H} H_sr v, and s_i is the i-th element of s.
562
+ Next, inspired by the amplitude design of fully active IRS
563
+ in [9], we assume that all active IRS elements have the same
564
+ amplitude. Based on the IRS power constraint in (11b), we
565
+ have
566
+ |β| = √(P_r^max / Q),  (35)
+ where
+ Q = Tr(θ̂^H (ρ_sr P diag{v^H H_sr^H E_Ma} diag{v^H H_sr^H E_Ma}^H + σ_r² E_Ma E_Ma) θ̂).  (36)
578
+ Based on (34) and (35), we can obtain the passive IRS phase
579
+ shift matrix and active IRS phase shift matrix as follows
580
+ Φ = E_Mp diag{θ̂}, Ψ = |β| E_Ma diag{θ̂}.  (37)
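The closed-form design of (34)-(37) can be sketched directly. A minimal numpy sketch under the definitions above (names are illustrative; σ_r² enters Q as in (36)):

```python
import numpy as np

def ear_phase_shifts(v, h_rb, H_sr, E_Ma, E_Mp, P, rho_sr, sig2_r, Pr_max):
    """Closed-form hybrid IRS design of Section IV-B: phase alignment (34)
    plus the equal-amplitude coefficient (35)-(36), yielding Phi and Psi (37)."""
    s = np.diag(h_rb.conj()) @ H_sr @ v
    theta = np.exp(-1j * np.angle(s))          # (34): align the cascaded phases
    x = E_Ma @ H_sr @ v                        # signal hitting the active elements
    # (36): power drawn by the active elements at unit amplitude
    Q = rho_sr * P * np.linalg.norm(theta * x) ** 2 + sig2_r * np.trace(E_Ma).real
    beta = np.sqrt(Pr_max / Q)                 # (35): common amplifying coefficient
    Phi = E_Mp @ np.diag(theta)                # passive part, unit modulus
    Psi = beta * E_Ma @ np.diag(theta)         # active part, equal amplitude
    return Phi, Psi
```

By construction, the resulting Ψ spends exactly the budget P_r^max in (10), since β² Q = P_r^max.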
582
+ Similar to Algorithm 1, we calculate v, Φ, and Ψ alternately
583
+ until convergence, i.e., |R_b^(p) − R_b^(p−1)| ≤ ǫ. The computational
+ complexity of the Max-SNR-EAR algorithm is O(K(2M² + N³ +
+ 2M² + 8N²M + 2MN)) FLOPs, where K is the number of
+ alternating iterations.
590
+ V. SIMULATION RESULTS AND DISCUSSIONS
591
+ In this section, simulation results are presented to evaluate
592
+ the performance of two proposed algorithms. Simulation de-
593
+ fault parameters are chosen as follows: N = 8, M = 128,
594
+ M_a = 32, d = λ/2, θ_sr = π/4, θ_sb = π/3, d_sr = 200 m,
+ d_sb = 220 m, σ_b² = −70 dBm, σ_r² = 2σ_b², P = 25 dBm,
+ P_r^max = 30 dBm. The path loss at the distance d is modeled
+ as g(d) = PL_0 − 10γ log10(d/d_0), where PL_0 = −30 dB is the
+ path loss at the reference distance d_0 = 1 m, and γ is the path loss
606
+ exponent. The path loss exponents of all channels are chosen
607
+ as 2. The positions of the IRS active elements are fixed to
608
+ Ω = {1, · · · , Ma}.
609
+ First, we investigate the convergence behaviour
+ of the proposed Max-SNR-FP and Max-SNR-EAR
611
+ algorithms. Fig. 2 shows the achievable rate versus the differ-
612
+ ent BS power, i.e., P = 20dBm, 25dBm. It can be seen from
613
+ the figure that both of the proposed algorithms converge within
614
+ limited iterations. The proposed Max-SNR-EAR algorithm has
615
+ a faster convergence rate than the Max-SNR-FP algorithm,
616
+ regardless of P = 20dBm or 25dBm.
617
634
+ Fig. 2. Convergence of the proposed algorithms at different BS power.
635
+ Fig. 3 depicts the curves of the achievable rate versus the
636
+ number of IRS phase shift elements, where Ma = M/2. We
637
+ compare two proposed algorithms to the benchmark schemes:
638
+ active IRS, passive IRS, no IRS, random phase IRS, and exist-
639
+ ing method in [11]. The achievable rates of the proposed Max-
640
+ SNR-FP and Max-SNR-EAR algorithms gradually increase
641
+ as the number of IRS elements increases, and the former
642
+ is better than the latter and existing method in [11]. The
643
+ achievable rates of both the proposed algorithms are much
644
+ better than that of the passive IRS, no IRS and random phase
645
+ IRS. Moreover, the difference in achievable rates between both
646
+ the proposed algorithms and active IRS gradually decreases
647
+ when the number of IRS elements becomes large.
648
663
+ Fig. 3. Achievable rate versus the number of IRS phase shift elements.
664
+ Fig. 4 plots the curves of the computational complexity
665
+ versus the number of IRS elements. It can be found that the
666
+ complexities of the proposed Max-SNR-FP method, proposed
667
+ Max-SNR-EAR method, and existing method in [11] are
668
+ similar at small-scale IRS. However, the complexities of the
669
+ existing method in [11] and proposed Max-SNR-FP method
670
+ are far higher than that of the proposed Max-SNR-EAR
671
+ method when the number of IRS elements becomes large.
672
+ VI. CONCLUSION
673
+ In this paper, we have investigated the hybrid
+ IRS-aided DM network. To fully explore the advantages of
675
+ hybrid IRS and maximize the achievable rate, the Max-SNR-
676
+ FP and Max-SNR-EAR algorithms were proposed to jointly
677
+ design the beamforming vector, passive IRS phase shift matrix,
678
+ and active IRS phase shift matrix by alternately optimizing one
679
+ and fixing the rest. Simulation results showed that the achievable
680
695
+ Fig. 4. Computational complexity versus the number of IRS elements.
696
+ rate of both proposed algorithms increases as the number of
697
+ IRS elements increases, and is much better than those of
698
+ the cases of random phase IRS, no IRS, and passive IRS.
699
+ Moreover, the proposed Max-SNR-FP method outperforms the
700
+ existing method in terms of the achievable rate and has lower
701
+ complexity.
702
+ REFERENCES
703
+ [1] Q. Cheng, S. Wang, V. Fusco, F. Wang, J. Zhu, and C. Gu, “Physical-
704
+ layer security for frequency diverse array-based directional modulation
705
+ in fluctuating two-ray fading channels,” IEEE Trans. Wirel. Commun.,
706
+ vol. 20, no. 7, pp. 4190–4204, Jul. 2021.
707
+ [2] M. P. Daly and J. T. Bemhard, “Directional modulation technique for
708
+ phased arrays,” IEEE Trans. Antennas Propag, vol. 57, no. 9, pp. 2633–
709
+ 2640, Sep. 2009.
710
+ [3] F. Shu, X. Wu, J. Li, R. Chen, and B. Vucetic, “Robust synthesis scheme
711
+ for secure multi-beam directional modulation in broadcasting systems,”
712
+ IEEE Access, vol. 4, pp. 6614–6623, Nov. 2016.
713
+ [4] Y. Pan, C. Wang, C. Pan, H. Zhu, and J. Wang, “UAV-assisted and intel-
714
+ ligent reflecting surfaces-supported terahertz communication,” Wireless
715
+ Commun. Lett., vol. 10, no. 6, pp. 1256–1260, Jun. 2021.
716
+ [5] Q. Wu and R. Zhang, “Intelligent reflecting surface enhanced wireless
717
+ network via joint active and passive beamforming,” IEEE Trans. Wirel.
718
+ Commun., vol. 18, no. 11, pp. 5394–5409, Nov. 2019.
719
+ [6] C. Pan, H. Ren, K. Wang, W. Xu, M. Elkashlan, A. Nallanathan, and
720
+ L. Hanzo, “Multicell MIMO communications relaying on intelligent
721
+ reflecting surfaces,” IEEE Trans. Wirel. Commun., vol. 19, no. 8, pp.
722
+ 5218–5233, Aug. 2020.
723
+ [7] F. Shu, Y. Teng, J. Li, M. Huang, W. Shi, J. Li, Y. Wu, and J. Wang,
724
+ “Enhanced secrecy rate maximization for directional modulation net-
725
+ works via IRS,” IEEE Trans. Commun., vol. 69, no. 12, pp. 8388–8401,
726
+ Dec. 2021.
727
+ [8] R. Dong, S. Jiang, X. Hua, Y. Teng, F. Shu, and J. Wang, “Low-
728
+ complexity joint phase adjustment and receive beamforming for di-
729
+ rectional modulation networks via IRS,” IEEE open journal of the
730
+ Communications Society, vol. 3, pp. 1234–1243, Aug. 2022.
731
+ [9] Z. Zhang, L. Dai, X. Chen, C. Liu, F. Yang, R. Schober, and H. V. Poor,
732
+ “Active RIS vs. passive RIS: which will prevail in 6G?” arXiv preprint
733
+ arXiv: 2103.15154, 2021.
734
+ [10] K. Liu, Z. Zhang, L. Dai, S. Xu, and F. Yang, “Active reconfigurable
735
+ intelligent surface: Fully-connected or sub-connected?” IEEE Commun.
736
+ Lett., vol. 26, no. 1, pp. 167–171, Jan. 2022.
737
+ [11] N. T. Nguyen, V.-D. Nguyen, Q. Wu, A. T¨olli, S. Chatzinotas, and
738
+ M. Juntti, “Hybrid active-passive reconfigurable intelligent surface-
739
+ assisted multi-user MISO systems,” 2022 IEEE 23rd International
740
+ Workshop on Signal Processing Advances in Wireless Communication
741
+ (SPAWC), pp. 1–5, Jul. 2022.
742
+ [12] N. T. Nguyen, Q.-D. Vu, K. Lee, and M. Juntti, “Hybrid relay-reflecting
743
+ intelligent surface-assisted wireless communications,” IEEE Trans. Veh.
744
+ Technol., Mar. 2022.
745
+ [13] Z. Wang, L. Liu, and S. Cui, “Channel estimation for intelligent reflect-
746
+ ing surface assisted multiuser communications: Framework, algorithms,
747
+ and analysis,” IEEE Trans. Wirel. Commun., vol. 19, no. 10, pp. 6607–
748
+ 6620, Oct. 2020.
749
+ [14] W. Dinkelbach, “On nonlinear fractional programming,” Manage Sci.,
750
+ vol. 13, no. 7, pp. 492–498, Mar. 1967.
751
753
+ [15] M. Sadek, A. Tarighat, and A. H. Sayed, “A leakage-based precoding
754
+ scheme for downlink multi-user MIMO channels,” IEEE Trans. Wirel.
755
+ Commun., vol. 6, no. 5, pp. 1711–1721, May. 2007.
756
+
1tE0T4oBgHgl3EQf_wKi/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,380 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf,len=379
2
+ page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
3
+ page_content='02831v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
4
+ page_content='IT] 7 Jan 2023 1 Joint Beamforming and Phase Shift Design for Hybrid-IRS-aided Directional Modulation Network Rongen Dong, Hangjia He, Feng Shu, Riqing Chen, and Jiangzhou Wang, Fellow, IEEE Abstract—To make a good balance between performance, cost, and power consumption, a hybrid intelligent reflecting surface (IRS)-aided directional modulation (DM) network is investigated in this paper, where the hybrid IRS consists of passive and active reflecting elements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
5
+ page_content=' To maximize the achievable rate, two optimization algorithms, called maximum signal-to- noise ratio (SNR)-fractional programming (FP) (Max-SNR-FP) and maximum SNR-equal amplitude reflecting (EAR) (Max- SNR-EAR), are proposed to jointly design the beamforming vector and IRS phase shift matrix by alternately optimizing one and fixing another.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
6
+ page_content=' The former employs the successive convex approximation and FP methods to solve the beamforming vector and hybrid IRS phase shift matrix, while the latter uses the maximum signal-to-leakage-noise ratio method and the criteria of phase alignment and EAR to design them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
7
+ page_content=' Simulation results show that the rates harvested by the proposed two methods are slightly lower than that of active IRS with higher power consumption, which are 35 percent higher than those of no IRS and random phase IRS, while passive IRS achieves only about 17 percent rate gain over the latter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
8
+ page_content=' Moreover, compared to Max- SNR-FP, the proposed Max-SNR-EAR method makes an obvious complexity reduction at the cost of a slight rate performance loss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
9
+ page_content=' Index Terms—Intelligent reflecting surface, directional modu- lation, fractional programming, beamforming, phase shift I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
10
+ page_content=' INTRODUCTION Directional modulation (DM) is a promising solution to sig- nificantly improve the performance of physical layer security in wireless networks [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
11
+ page_content=' The design of DM synthesis is mainly implemented in the radio frequency (RF) frontend or baseband.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
12
+ page_content=' For example, in [2], the signal was produced in a given direction by shifting the phase of each antenna element at the RF frontend.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
13
+ page_content=' In [3], a multi-beam DM scenario was considered to maximize the secure rate (SR), where the precoder and the artificial noise (AN) were designed by maximizing signal- to-leakage-noise ratio and maximizing the signal-to-AN ratio methods, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
14
+ page_content=' Intelligent reflecting surface (IRS), as a cost and energy- efficient solution to enhance the performance of the wire- less communication system, has been adopted to aid various This work was supported in part by the National Natural Science Foundation of China (Nos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
15
+ page_content='U22A2002, and 62071234), the Major Science and Technology plan of Hainan Province under Grant ZDKJ2021022, and the Scientific Research Fund Project of Hainan University under Grant KYQD(ZR)-21008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' Rongen Dong and Feng Shu are with the School of Information and Communication Engineering, Hainan University, Haikou, 570228, China (Email: shufeng0101@163.com).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
18
+ page_content=' Hangjia He is with the School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' Riqing Chen is with the Digital Fujian Institute of Big Data for Agriculture, Fujian Agriculture and Forestry University, Fuzhou 350002, China (Email: riqing.chen@fafu.edu.cn).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' Jiangzhou Wang is with the School of Engineering, University of Kent, Canterbury CT2 7NT, U.K. (Email: j.z.wang@kent.ac.uk).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
30
+ page_content=' wireless communication directions: unmanned aerial vehicle communication [4], single-cell wireless communication [5], multi-cell communication [6], etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' Recently, IRS-aided DM systems have also been investigated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
32
+ page_content=' To maximize the SR of IRS-aided DM system, the general alternating iterative and null-space projection algorithms were proposed to jointly obtain the transmit beamforming vectors and IRS phase shift matrix in [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
33
+ page_content=' To maximize the receive power sum, the authors in [8] proposed the general alternating optimization and zero- forcing algorithms to jointly design the receive beamforming vectors and IRS phase shift matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
34
+ page_content=' However, all the above work was considered in the scenarios of passive IRS, and the system may not be able to guarantee a satisfactory achievable rate due to the presence of double path loss in the cascaded channels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
35
+ page_content=' To overcome the “double fading” effect and enhance the performance of the passive IRS-aided wireless network, the fully active IRS has been investigated [9], [10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
36
+ page_content=' Due to the high power consumption and hardware design of active IRS, a hybrid active-passive IRS was proposed to overcome the limitation of passive and active IRSs [11], [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' The main idea of the hybrid IRS is to replace some of the passive elements with active ones; these active elements, which amplify the incident signal, can efficiently compensate for the path loss and increase the achievable rate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' To the best of the authors’ knowledge, the hybrid IRS-aided DM system has not been investigated yet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
39
+ page_content=' In this paper, we employ the hybrid IRS to further enhance the performance of passive IRS-aided DM network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
40
+ page_content=' The main contributions of this paper are summarized as follows: 1) To make a good balance between performance, cost, and power consumption, a hybrid IRS-aided DM system model is proposed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
41
+ page_content=' To maximize the achievable rate, the optimization problem of maximizing the signal-to- noise ratio (SNR) is established, and the maximum SNR- fractional programming (FP) (Max-SNR-FP) scheme is proposed to jointly obtain the beamforming vector and hybrid IRS phase shift matrix by optimizing one and fixing another.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
42
+ page_content=' In this scheme, the beamforming vector and passive IRS phase shift matrix are solved by the successive convex approximation algorithm, and the active IRS phase shift matrix is computed by the FP method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
43
+ page_content=' 2) To reduce the high computational complexity of the above scheme, a low-complexity maximum SNR-equal amplitude reflecting (EAR) (Max-SNR-EAR) method is proposed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' By utilizing the maximum signal-to-leakage-noise ratio (SLNR) method, the beamforming vector is obtained.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
45
+ page_content=' Moreover, the hybrid IRS phase shift matrix is computed based on the criteria of phase alignment and EAR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
46
+ page_content=' Simulation results show that the achievable rates harvested by both the proposed methods are higher than those of no IRS, random phase IRS, and passive IRS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' In addition, the difference in achievable rates between these two methods is negligible when the number of hybrid IRS elements becomes large.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
48
+ page_content=' The remainder of this paper is organized as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
49
+ page_content=' Section II describes the system model of hybrid IRS-aided DM net- work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
50
+ page_content=' The Max-SNR-FP scheme is presented in Section III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
51
+ page_content=' Section IV describes the Max-SNR-EAR scheme.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
52
+ page_content=' Numerical simulation results are presented in Section V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
53
+ page_content=' Finally, we draw conclusions in Section VI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
54
+ page_content=' Notations: throughout this paper, boldface lower case and upper case letters represent vectors and matrices, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
55
+ page_content=' Signs (·)T , (·)∗, (·)H, Tr(·), ℜ{·}, and diag{·} denote the transpose, conjugate, conjugate transpose, trace, real part, and diagonal operations, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
56
+ page_content=' The sign | · | is the determinant of a matrix or the absolute value of a scalar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
57
+ page_content=' The symbol CN×N denotes the space of N × N complex-valued matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
58
+ page_content=' The notation IN is the N × N identity matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' II. SYSTEM MODEL As shown in Fig. 1, a hybrid IRS-aided DM system is considered, where the base station (BS) is equipped with N antennas, and the user (Bob) is equipped with a single antenna.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
62
+ page_content=' The hybrid IRS is equipped with M elements, which consists of Ma active and Mp passive IRS reflecting elements (M = Ma + Mp, 1 ≤ Ma ≤ Mp).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
63
+ page_content=' It is assumed that the active elements can tune both the phase and amplitude while the passive ones can only shift the phase of the incident signal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
64
+ page_content=' The signals reflected more than once on the hybrid IRS are negligible due to the severe path loss [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
65
+ page_content=' All channels are assumed to be line-of-sight channels since DM is only applicable to line-of-sight channels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
66
+ page_content=' It is assumed that all the channel state information is perfectly known through channel estimation [13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' Fig. 1. System model of Hybrid-IRS-aided directional modulation network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' Similar to the conventional passive IRS, it is assumed that each element of the hybrid IRS can independently reflect the incident signals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
71
+ page_content=' Let us denote the set of the Ma active elements by Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
72
+ page_content=' Θ = diag{θ∗} = diag{θ1, · · · , θm, · · · , θM} ∈ CM×M, Ψ = diag{ψ∗} ∈ CM×M, and Φ = diag{φ∗} ∈ CM×M are the reflection coefficients of total elements, active elements, and passive elements of hybrid IRS, respectively, where θm = � |βm|ejµm, if m ∈ Ω, ejµm, otherwise, (1) µm ∈ [0, 2π) is the phase, and |βm| is the amplifying coefficient and determined by the total power of the active elements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
73
+ page_content=' Let us define Ψ = EMaΘ, Φ = EMpΘ, (2) where EMa + EMp = IM, EMaEMp = 0M, (3) EMa is an M × M diagonal matrix whose non-zero elements are all unity and have positions determined by Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
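The element model in (1)–(3) is easy to prototype numerically. Below is a minimal NumPy sketch (the function name and argument layout are our own, not from the paper) that builds Θ, Ψ, and Φ from the phases, the amplifying coefficients, and the active-element set Ω:

```python
import numpy as np

def hybrid_irs_matrices(M, active_idx, mu, beta):
    """Build Theta, Psi, Phi of eqs. (1)-(3) for a hybrid IRS.

    active_idx : indices of the Ma active elements (the set Omega)
    mu         : length-M phases mu_m in [0, 2*pi)
    beta       : length-M amplifying coefficients |beta_m|
                 (only the entries in active_idx are used)
    """
    theta = np.exp(1j * mu)                # passive element: unit-modulus e^{j mu_m}
    theta[active_idx] *= beta[active_idx]  # active element: |beta_m| e^{j mu_m}
    Theta = np.diag(theta)

    # Selection matrices of eq. (3): E_Ma + E_Mp = I_M, E_Ma E_Mp = 0
    E_Ma = np.zeros((M, M))
    E_Ma[active_idx, active_idx] = 1.0
    E_Mp = np.eye(M) - E_Ma

    Psi = E_Ma @ Theta                     # active part, eq. (2)
    Phi = E_Mp @ Theta                     # passive part, eq. (2)
    return Theta, Psi, Phi
```

By construction Ψ + Φ = Θ and ΨΦ = 0, mirroring the decomposition in (2)–(3).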
74
+ page_content=' The transmitted signal at BS is s = √ Pvx, (4) where P denotes the transmit power, v ∈ CN×1 and x are the beamforming vector and the information symbol, satisfying vHv = 1 and E[∥x∥2] = 1, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
75
+ page_content=' Taking the path loss into consideration, the received signal at Bob is yb = (√ρsrbhH rbΘHsr + √ρsbhH sb)s + √ρrbhH rbΨnr + nb = √ P(√ρsrbhH rbΨHsr + √ρsrbhH rbΦHsr + √ρsbhH sb)vx + √ρrbhH rbΨnr + nb, (5) where ρsrb = ρsrρrb is the equivalent path loss coefficient of BS-to-IRS channel and IRS-to-Bob channel, ρsb and ρrb are the path loss coefficient of BS-to-Bob channel and IRS- to-Bob channel, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
76
+ page_content=' nr ∼ CN(0, σ2 rIMa) and nb ∼ CN(0, σ2 b) denote the complex additive white Gaussian noise (AWGN) at the Ma active elements of the hybrid IRS and at Bob, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
77
+ page_content=' hsb ∈ CN×1, hrb ∈ CM×1, and Hsr = hsrhH sr ∈ CM×N are the BS-to-Bob, IRS-to-Bob, and BS-to- IRS channels, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' Let us define the channel htr = h(θtr), where the normalized steering vector is h(θ) = (1/√N)[ej2πΨθ(1), ..., ej2πΨθ(n), ..., ej2πΨθ(N)]T, (6) and the phase function Ψθ(n) is given by Ψθ(n) ≜ −(n − (N + 1)/2) d cos θ/λ, n = 1, ..., N, (7) where θ represents the direction angle of arrival or departure, n denotes the index of antenna, d is the spacing of adjacent transmitting antennas, and λ represents the wavelength.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
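Equations (6)–(7) describe a standard uniform-linear-array steering vector and can be transcribed directly; the helper below is illustrative, with half-wavelength spacing d/λ = 0.5 assumed as the default:

```python
import numpy as np

def steering_vector(N, theta, d_over_lambda=0.5):
    """Normalized steering vector h(theta) of eqs. (6)-(7).

    Phase function (7): Psi_theta(n) = -(n - (N + 1)/2) * (d/lambda) * cos(theta).
    """
    n = np.arange(1, N + 1)
    psi = -(n - (N + 1) / 2.0) * d_over_lambda * np.cos(theta)
    return np.exp(1j * 2.0 * np.pi * psi) / np.sqrt(N)
```

The 1/√N factor makes the vector unit-norm, consistent with the normalization in (6).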
88
+ page_content=' In accordance with (5), the achievable rate at Bob can be written as Rb = log2 (1 + SNR) , (8) where SNR = P|(√ρsrbhH rbΨHsr + √ρsrbhH rbΦHsr + √ρsbhH sb)v|2 σ2r|√ρrbhH rbΨ|2 + σ2 b .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
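The rate in (8)–(9) is straightforward to evaluate once the channels and reflection matrices are fixed. The sketch below uses 1-D complex arrays for h_rb and h_sb and is only a plausible transcription of (9), not code from the paper:

```python
import numpy as np

def achievable_rate(P, rho_srb, rho_sb, rho_rb, h_rb, h_sb, H_sr,
                    Psi, Phi, v, sigma_r2, sigma_b2):
    """Evaluate Rb = log2(1 + SNR) of eqs. (8)-(9)."""
    # Effective channel: active-IRS path + passive-IRS path + direct path
    g = (np.sqrt(rho_srb) * (h_rb.conj() @ Psi @ H_sr)
         + np.sqrt(rho_srb) * (h_rb.conj() @ Phi @ H_sr)
         + np.sqrt(rho_sb) * h_sb.conj())
    signal = P * np.abs(g @ v) ** 2
    # Noise amplified by the active elements, plus Bob's receiver noise
    noise = sigma_r2 * rho_rb * np.linalg.norm(h_rb.conj() @ Psi) ** 2 + sigma_b2
    return np.log2(1.0 + signal / noise)
```

Note that only the active part Ψ contributes the extra σr²-term in the denominator, which is what distinguishes (9) from the purely passive case.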
+ page_content=' (9) The transmit power of the active elements at the hybrid IRS is given by Pr = Tr(Ψ(ρsrPHsrvvHHH sr + σ2 rIM)ΨH), (10) which satisfies Pr ≤ P max r , where P max r represents the maximum transmit power of the Ma active elements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
90
+ page_content=' In this paper, we maximize the SNR by jointly optimizing beamforming vector v, passive IRS phase shift matrix Φ, and active IRS phase shift matrix Ψ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
91
+ page_content=' The optimization problem can be formulated as max v,Φ,Ψ SNR (11a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
92
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
93
+ page_content=' vHv = 1, Pr ≤ P max r , (11b) |Φ(m, m)| = 1, if m ̸∈ Ω, (11c) |Φ(m, m)| = 0, otherwise, (11d) |Ψ(m, m)| ≤ βmax, if m ∈ Ω, (11e) |Ψ(m, m)| = 0, otherwise, (11f) where βmax is the amplification budget.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' It is noted that this optimization problem is non-convex with a constant-modulus constraint, and it is challenging to solve directly in general.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
95
+ page_content=' In what follows, we propose the alternating optimiza- tion algorithm to design the beamforming vector and hybrid IRS phase shift matrix, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' III. PROPOSED MAX-SNR-FP SCHEME In this section, we construct a Max-SNR-FP method to jointly optimize the beamforming vector v, passive IRS phase shift matrix Φ, and active IRS phase shift matrix Ψ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
98
+ page_content=' In what follows, we will alternately solve for v, Φ, and Ψ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
99
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
100
+ page_content=' Optimize v given Φ and Ψ Firstly, we transform the power constraint in (11b) into a convex constraint with respect to v as follows Pr = vH � ρsrPHH srΨHΨHsr � v + Tr � σ2 rΨΨH� ≤ P max r .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' (12) Then, given Φ and Ψ, the optimal beamforming vector v can be found by solving the following problem max v vHAv s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
102
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
103
+ page_content=' vHv = 1, (12), (13) where A =(√ρsrbhH rbΦHsr + √ρsrbhH rbΨHsr + √ρsbhH sb)H (√ρsrbhH rbΦHsr + √ρsrbhH rbΨHsr + √ρsbhH sb).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
104
+ page_content=' (14) It is clear that this problem is not convex, and in accordance with the Taylor series expansion, we have vHAv ≥ 2ℜ{¯vHAv} − ¯vHA¯v, (15) where ¯v is a given vector.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
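The bound in (15) is the usual first-order (SCA) minorant of the convex quadratic vHAv, tight at v = v̄: since A in (14) has the form XHX it is positive semidefinite, so vHAv − (2ℜ{v̄HAv} − v̄HAv̄) = (v − v̄)HA(v − v̄) ≥ 0. A quick numerical check of both properties:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
# A in (14) has the form X^H X, hence it is positive semidefinite
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = X.conj().T @ X

def sca_minorant(v, v_bar):
    """Right-hand side of (15): 2*Re{v_bar^H A v} - v_bar^H A v_bar."""
    return 2.0 * np.real(v_bar.conj() @ A @ v) - np.real(v_bar.conj() @ A @ v_bar)

v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v_bar = rng.standard_normal(N) + 1j * rng.standard_normal(N)

exact = np.real(v.conj() @ A @ v)
assert sca_minorant(v, v_bar) <= exact + 1e-9   # global lower bound
assert np.isclose(sca_minorant(v, v), exact)    # tight at v = v_bar
```

Maximizing this linear minorant instead of the quadratic is what turns (13) into the convex problem (16).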
+ page_content=' Then (13) can be recast as max v 2ℜ{¯vHAv} − ¯vHA¯v s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
106
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
107
+ page_content=' vHv = 1, (12).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
108
+ page_content=' (16) It is a convex optimization problem and can be solved by employing CVX tool.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
109
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
110
+ page_content=' Optimize Φ given v and Ψ To simplify the SNR expression related to the phase shift matrix Φ, we regard v and Ψ as two constants, and define B = (√ρsrbhH rbΨHsr + √ρsbhH sb)v.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
111
+ page_content=' (17) Then, the subproblem to optimize Φ can be expressed as max Φ |√ρsrbhH rbΦHsrv + B|2 (18a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
112
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
113
+ page_content=' |Φ(m, m)| = 1, if m ̸∈ Ω, (18b) |Φ(m, m)| = 0, otherwise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' (18c) By defining C = ρsrbdiag{hH rb}HsrvvHHH srdiag{hH rb}H, (19) and based on the fact that diag{a}b = diag{b}a for a, b ∈ CM×1, the objective function in (18) can be recast as φHCφ + 2ℜ{√ρsrbφHdiag{hH rb}HsrvB∗} + |B|2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
115
+ page_content=' (20) Based on the Taylor series expansion, we have φHCφ ≥ 2ℜ{ ¯φHCφ} − ¯φHC ¯φ, (21) where ¯φ is a given vector.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
116
+ page_content=' For the unit modulus constraint (18b), it can be relaxed as |Φ(m, m)| ≤ 1, if m ̸∈ Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
117
+ page_content=' (22) At this point, the problem (18) can be rewritten as max Φ 2ℜ{ ¯φHCφ} − ¯φHC ¯φ + |B|2 + 2ℜ{√ρsrbφH• diag{hH rb}HsrvB∗} s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
118
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
119
+ page_content=' (22), (18c).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
120
+ page_content=' (23) We can find that it is a convex optimization problem and can be solved by employing CVX tool.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
121
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
122
+ page_content=' Optimize Ψ given v and Φ To optimize Ψ, we regard v and Φ as two given constants, and transform the power constraint in (11b) into a convex constraint on ψ as follows Pr = Tr � Ψ � ρsrPHsrvvHHH sr + σ2IM � ΨH� = ψT (ρsrPdiag{vHHH sr}diag{Hsrv} + σ2 rIM)ψ∗ ≤ P max r .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
123
+ page_content=' (24) By neglecting the constant terms, the subproblem with respect to Ψ is given by max Ψ |(√ρsrbhH rbΨHsr + √ρsrbhH rbΦHsr + √ρsbhH sb)v|2 σ2r|√ρrbhH rbΨ|2 + σ2 b (25a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
124
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
125
+ page_content=' (11e), (11f), (24).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
126
+ page_content=' (25b) Let us define D = (√ρsrbhH rbΦHsr + √ρsbhH sb)v.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
127
+ page_content=' (26) Then, the objective function in (25) can be converted to ψHCψ + 2ℜ{ψH√ρsrbdiag{hH rb}HsrvD∗} + |D|2 σ2rρrb|ψHdiag{hH rb}|2 + σ2 b .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
+ page_content=' (27) At this point, the optimization problem (25) becomes a nonlinear fractional optimization problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
129
+ page_content=' Based on the FP strategy in [14], we introduce a parameter τ and transform the objective function (27) as ψHCψ + 2ℜ{ψH√ρsrbdiag{hH rb}HsrvD∗} + |D|2 − τ(σ2 rρrb|ψHdiag{hH rb}|2 + σ2 b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
130
+ page_content=' (28) The optimal solution can be achieved if and only if ψHCψ + 2ℜ{ψH√ρsrbdiag{hH rb}HsrvD∗} + |D|2 − τ(σ2 rρrb|ψHdiag{hH rb}|2 + σ2 b) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
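Dinkelbach's method [14] reduces the ratio maximization to a sequence of parameterized subproblems F(τ) = max f(x) − τ g(x), updating τ to the current ratio until F(τ) ≈ 0. A generic sketch on a scalar toy problem; the hypothetical argmax_aux stands in for the convex subproblem (29):

```python
import numpy as np

def dinkelbach(f, g, argmax_aux, tau0=0.0, tol=1e-9, max_iter=100):
    """Dinkelbach's method [14] for max_x f(x)/g(x) with g(x) > 0.

    argmax_aux(tau) returns a maximizer of F(tau) = f(x) - tau*g(x);
    the optimum of the ratio is reached exactly when F(tau) = 0.
    """
    tau = tau0
    x = argmax_aux(tau)
    for _ in range(max_iter):
        F = f(x) - tau * g(x)
        if abs(F) < tol:
            break
        tau = f(x) / g(x)          # Newton-type parameter update
        x = argmax_aux(tau)
    return x, tau

# Toy example: maximize (x + 2) / (x^2 + 1) over a fine grid.
grid = np.linspace(-5.0, 5.0, 100001)
f = lambda x: x + 2.0
g = lambda x: x ** 2 + 1.0
argmax_aux = lambda tau: grid[np.argmax(f(grid) - tau * g(grid))]
x_opt, tau_opt = dinkelbach(f, g, argmax_aux)
# Analytic optimum of the toy ratio: x* = sqrt(5) - 2
```

In the paper's setting x is the active phase vector ψ and each argmax_aux call is one solve of the CVX problem (29).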
131
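The transformation in (28) is the classic Dinkelbach (parametric FP) structure: fix τ, maximize "numerator − τ·denominator", then update τ with the achieved ratio, stopping when the subtractive objective reaches zero. As an illustrative sketch only (not the paper's CVX-based ψ-subproblem), the same parametric loop applied to a generalized Rayleigh quotient — where the inner maximization reduces to a leading-eigenvector problem — looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
X = rng.standard_normal((n, n)); A = X @ X.T                  # numerator quadratic (PSD)
Y = rng.standard_normal((n, n)); B = Y @ Y.T + n * np.eye(n)  # denominator quadratic (PD)

def dinkelbach_rayleigh(A, B, tol=1e-10, max_iter=50):
    """Maximize x^T A x / x^T B x over ||x|| = 1 via Dinkelbach's parametric FP."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    tau = (x @ A @ x) / (x @ B @ x)        # initial ratio, like the parameter in (28)
    for _ in range(max_iter):
        # inner subtractive problem: max_{||x||=1} x^T (A - tau*B) x
        _, V = np.linalg.eigh(A - tau * B)
        x = V[:, -1]                       # leading eigenvector solves the inner problem
        F = x @ (A - tau * B) @ x          # equals 0 exactly at the optimum (cf. the condition above)
        tau = (x @ A @ x) / (x @ B @ x)    # Dinkelbach parameter update
        if abs(F) < tol:
            break
    return tau, x

tau_star, x_star = dinkelbach_rayleigh(A, B)
```

The converged τ equals the maximum of the ratio, which for this toy problem is the largest generalized eigenvalue of (A, B); the paper instead solves the inner problem for ψ with CVX after the Taylor linearization in (29).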
We linearize \psi^H C\psi by a first-order Taylor series expansion at a given vector \bar\psi, so the subproblem with respect to Ψ can be rewritten as

\max_{\Psi,\tau}\ 2\Re\{\bar\psi^H C\psi\} - \bar\psi^H C\bar\psi + 2\Re\{\psi^H\sqrt{\rho_{srb}}\,\mathrm{diag}\{h_{rb}^H\}H_{sr}vD^*\} + |D|^2 - \tau(\sigma_r^2\rho_{rb}|\psi^H\mathrm{diag}\{h_{rb}^H\}|^2 + \sigma_b^2)
\mathrm{s.t.}\ (11e), (11f), (24).  (29)

It should be noted that this problem is convex and can be solved effectively by the CVX tool. The whole procedure of the Max-SNR-FP algorithm is described in Algorithm 1.
Algorithm 1 Proposed Max-SNR-FP algorithm
1: Initialize v^(0), Φ^(0), and Ψ^(0); compute R_b^(0) based on (8).
2: Set p = 0 and the threshold value ǫ.
3: repeat
4: Given Φ^(p) and Ψ^(p), solve (16) to determine v^(p+1).
5: Given v^(p+1) and Ψ^(p), solve (23) to determine Φ^(p+1).
6: Given v^(p+1) and Φ^(p+1), solve (29) to determine Ψ^(p+1).
7: Compute R_b^(p+1) using v^(p+1), Φ^(p+1), and Ψ^(p+1).
8: p = p + 1.
9: until |R_b^(p) − R_b^(p−1)| ≤ ǫ.
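Algorithm 1 is a standard block-coordinate (alternate-and-fix) loop with an objective-change stopping rule. A minimal runnable sketch of the same control flow — with the three CVX subproblems (16), (23), (29) replaced by exact least-squares block updates on a toy concave objective, purely to illustrate the loop and the stopping test — is:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 12, 3
A, B, C = (rng.standard_normal((m, n)) for _ in range(3))  # stand-ins for the three blocks
t = rng.standard_normal(m)

def ls_block(M, resid):
    # exact block update: argmin_x ||M x - resid||^2 (stand-in for one CVX subproblem)
    return np.linalg.lstsq(M, resid, rcond=None)[0]

def alternating_opt(eps=1e-9, max_iter=500):
    v, phi, psi = (np.zeros(n) for _ in range(3))
    obj_prev, history = -np.inf, []
    for p in range(max_iter):
        v   = ls_block(A, t - B @ phi - C @ psi)   # step 4: update v given (phi, psi)
        phi = ls_block(B, t - A @ v - C @ psi)     # step 5: update phi given (v, psi)
        psi = ls_block(C, t - A @ v - B @ phi)     # step 6: update psi given (v, phi)
        obj = -np.linalg.norm(A @ v + B @ phi + C @ psi - t) ** 2  # step 7: objective proxy
        history.append(obj)
        if abs(obj - obj_prev) <= eps:             # step 9: |R^(p) - R^(p-1)| <= eps
            break
        obj_prev = obj
    return history

hist = alternating_opt()
```

Because each block update exactly maximizes the objective over its own variable, the sequence of objective values is nondecreasing — the same monotonicity that guarantees Algorithm 1's stopping rule eventually fires.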
The computational complexity of the proposed Max-SNR-FP algorithm is O(L((M+1)^3 + 2MN^2 + 2M^2)\ln(1/ǫ) + M^3 + N^3 + 5M^2 + 2MN + 2M + 2MN^2) floating-point operations (FLOPs), where L is the number of alternating iterations and ǫ denotes the accuracy.
IV. PROPOSED MAX-SNR-EAR SCHEME

In the previous section, we proposed the Max-SNR-FP method to design the beamforming vector v and the IRS phase shift matrices Φ and Ψ. However, it has a high computational complexity. To reduce the computational complexity, a low-complexity method named Max-SNR-EAR is proposed in what follows.
A. Optimize v given Φ and Ψ

Given the IRS phase shift matrices Φ and Ψ, in accordance with the principle of maximizing the SLNR in [15], the beamforming vector v can be optimized by solving the following problem:

\max_{v}\ \mathrm{SLNR} = \frac{v^H E v}{v^H(\sigma_b^2 I_N)v}\quad \mathrm{s.t.}\ v^H v = 1,\ (12),  (30)

where

E = \rho_{srb}H_{sr}^H\Phi^H h_{rb}h_{rb}^H\Phi H_{sr} + \rho_{srb}H_{sr}^H\Psi^H h_{rb}h_{rb}^H\Psi H_{sr} + h_{sb}h_{sb}^H.  (31)

Applying a first-order Taylor series expansion and neglecting the constant terms, problem (30) can be recast as

\max_{v}\ 2\Re\{\bar v^H E v\} - \bar v^H E\bar v\quad \mathrm{s.t.}\ v^H v = 1,\ (12).  (32)

Note that this is a convex optimization problem and can be solved with the CVX tool.
B. Optimize Φ and Ψ given v

Given the beamforming vector v, we first design the phase of the hybrid IRS. The confidential message received by Bob through the cascaded path is expressed as

P\rho_{srb}h_{rb}^H\Theta H_{sr}vv^H H_{sr}^H\Theta^H h_{rb}.  (33)

To maximize the confidential message over the cascaded path, the phase alignment method is employed to design the hybrid IRS phase \hat\theta, given by

\hat\theta = [e^{-i\arg(s_1)}, \cdots, e^{-i\arg(s_M)}]^T,  (34)

where s = \mathrm{diag}\{h_{rb}^H\}H_{sr}v and s_i is the i-th element of s. Next, inspired by the amplitude design of the fully active IRS in [9], we assume that all active IRS elements have the same amplitude. Based on the IRS power constraint in (11b), we have

|\beta| = \sqrt{P_r^{\max}/Q},  (35)

where

Q = \mathrm{Tr}\big(\hat\theta^H(\rho_{sr}P\,\mathrm{diag}\{v^H H_{sr}^H E_{M_a}\}\mathrm{diag}\{v^H H_{sr}^H E_{M_a}\}^H + \sigma_r^2 E_{M_a}E_{M_a})\hat\theta\big).  (36)

Based on (34) and (35), the passive and active IRS phase shift matrices are obtained as

\Phi = E_{M_p}\mathrm{diag}\{\hat\theta\},\quad \Psi = |\beta|E_{M_a}\mathrm{diag}\{\hat\theta\}.  (37)
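Under simplifying assumptions (unit path-loss coefficients, active elements at positions {1, …, M_a}, and a diagonal interpretation of the selection matrices), the closed-form design of (34)–(37) — co-phase every cascaded path, then scale the active elements to exhaust the reflect-power budget — can be sketched as follows; all sizes and power values here are illustrative, not the Section V settings:

```python
import numpy as np

rng = np.random.default_rng(2)
M, Ma, N = 16, 4, 4                  # illustrative sizes: M elements, Ma active, N antennas
P, Pr_max, sigma_r2 = 1.0, 2.0, 1e-3  # illustrative BS power, IRS budget, relay noise

h_rb = rng.standard_normal(M) + 1j * rng.standard_normal(M)            # IRS -> Bob channel
H_sr = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))  # BS -> IRS channel
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v /= np.linalg.norm(v)               # unit-norm transmit beamformer

# (34): phase alignment, s = diag{h_rb^H} H_sr v, theta_i = exp(-i arg(s_i))
s = h_rb.conj() * (H_sr @ v)
theta = np.exp(-1j * np.angle(s))

# (35): common active-element amplitude |beta| meeting the budget Pr_max
e_a = np.zeros(M); e_a[:Ma] = 1.0    # active-element selector, Omega = {1, ..., Ma}
Q = np.sum(e_a * (P * np.abs(H_sr @ v) ** 2 + sigma_r2))
beta = np.sqrt(Pr_max / Q)

# (37): passive and active phase shift matrices
Phi = np.diag((1.0 - e_a) * theta)
Psi = np.diag(beta * e_a * theta)

# co-phasing makes every cascaded term add constructively: theta_i * s_i = |s_i|
aligned = theta * s
```

By construction the reflected power \sum_i |\Psi_{ii}|^2 (P|[H_{sr}v]_i|^2 + \sigma_r^2) equals P_r^{max} exactly, which is the point of the common-amplitude choice in (35).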
+ page_content=' (37) Similar to Algorithm 1, we calculate v, Φ, and Ψ alternately until convergence, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
165
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
166
+ page_content=', |R(p) b −R(p−1) b | ≤ ǫ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
167
+ page_content=' The computational complexity of Max-SNR-EAR algorithm is O(K(2M 2+N 3+ 2M 2 + 8N 2M + 2MN) FLOPs, where K is the numbers of alternating iterations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
168
V. SIMULATION RESULTS AND DISCUSSIONS

In this section, simulation results are presented to evaluate the performance of the two proposed algorithms. The default simulation parameters are chosen as follows: N = 8, M = 128, M_a = 32, d = λ/2, θ_{sr} = π/4, θ_{sb} = π/3, d_{sr} = 200 m, d_{sb} = 220 m, σ_b^2 = −70 dBm, σ_r^2 = 2σ_b^2, P = 25 dBm, and P_r^{max} = 30 dBm. The path loss at distance d is modeled as g(d) = PL_0 − 10γ log_{10}(d/d_0), where PL_0 = −30 dB is the path loss at the reference distance d_0 = 1 m, and γ is the path loss exponent. The path loss exponents of all channels are set to 2. The positions of the active IRS elements are fixed to Ω = {1, ..., M_a}.
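As a quick sanity check on the channel-gain model, evaluating g(d) = PL_0 − 10γ log10(d/d_0) at the stated link distances gives:

```python
import math

def path_loss_db(d, pl0_db=-30.0, d0=1.0, gamma=2.0):
    """g(d) = PL0 - 10*gamma*log10(d/d0), with PL0 = -30 dB at d0 = 1 m."""
    return pl0_db - 10.0 * gamma * math.log10(d / d0)

g_sr = path_loss_db(200.0)   # BS -> IRS link, d_sr = 200 m  -> about -76.0 dB
g_sb = path_loss_db(220.0)   # BS -> Bob link, d_sb = 220 m  -> about -76.8 dB
```

With γ = 2 every doubling of distance costs about 6 dB, so the 200 m and 220 m links differ by under 1 dB of path loss.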
First, we investigate the convergence behaviour of the proposed Max-SNR-FP and Max-SNR-EAR algorithms. Fig. 2 shows the achievable rate at different BS powers, i.e., P = 20 dBm and 25 dBm. It can be seen from the figure that both proposed algorithms converge within a limited number of iterations. The proposed Max-SNR-EAR algorithm converges faster than the Max-SNR-FP algorithm, regardless of whether P = 20 dBm or 25 dBm.
Fig. 2. Convergence of the proposed algorithms at different BS power.
Fig. 3 depicts the curves of the achievable rate versus the number of IRS phase shift elements, where M_a = M/2. We compare the two proposed algorithms with the benchmark schemes: active IRS, passive IRS, no IRS, random-phase IRS, and the existing method in [11]. The achievable rates of the proposed Max-SNR-FP and Max-SNR-EAR algorithms gradually increase as the number of IRS elements increases, and the former outperforms both the latter and the existing method in [11]. The achievable rates of both proposed algorithms are much better than those of the passive IRS, no IRS, and random-phase IRS. Moreover, the gap in achievable rate between the two proposed algorithms and the active IRS gradually decreases as the number of IRS elements grows large.
Fig. 3. Achievable rate versus the number of IRS phase shift elements.
Fig. 4 plots the curves of the computational complexity versus the number of IRS elements. It can be found that the complexities of the proposed Max-SNR-FP method, the proposed Max-SNR-EAR method, and the existing method in [11] are similar for a small-scale IRS. However, the complexities of the existing method in [11] and the proposed Max-SNR-FP method are far higher than that of the proposed Max-SNR-EAR method when the number of IRS elements grows large.
VI. CONCLUSION

In this paper, we have investigated the hybrid IRS-aided DM network. To fully explore the advantages of the hybrid IRS and maximize the achievable rate, the Max-SNR-FP and Max-SNR-EAR algorithms were proposed to jointly design the beamforming vector, the passive IRS phase shift matrix, and the active IRS phase shift matrix by alternately optimizing one while fixing the rest.

Fig. 4. Computational complexity versus the number of IRS elements.

Simulation results showed that the achievable rate of both proposed algorithms increases as the number of IRS elements increases, and is much better than those of the random-phase IRS, no IRS, and passive IRS cases. Moreover, the proposed Max-SNR-FP method outperforms the existing method in terms of achievable rate and has lower complexity.
REFERENCES

[1] Q. Cheng, S. Wang, V. Fusco, F. Wang, J. Zhu, and C. Gu, "Physical-layer security for frequency diverse array-based directional modulation in fluctuating two-ray fading channels," IEEE Trans. Wirel. Commun., vol. 20, no. 7, pp. 4190–4204, Jul. 2021.
[2] M. P. Daly and J. T. Bernhard, "Directional modulation technique for phased arrays," IEEE Trans. Antennas Propag., vol. 57, no. 9, pp. 2633–2640, Sep. 2009.
[3] F. Shu, X. Wu, J. Li, R. Chen, and B. Vucetic, "Robust synthesis scheme for secure multi-beam directional modulation in broadcasting systems," IEEE Access, vol. 4, pp. 6614–6623, Nov. 2016.
[4] Y. Pan, C. Wang, C. Pan, H. Zhu, and J. Wang, "UAV-assisted and intelligent reflecting surfaces-supported terahertz communication," IEEE Wireless Commun. Lett., vol. 10, no. 6, pp. 1256–1260, Jun. 2021.
[5] Q. Wu and R. Zhang, "Intelligent reflecting surface enhanced wireless network via joint active and passive beamforming," IEEE Trans. Wirel. Commun., vol. 18, no. 11, pp. 5394–5409, Nov. 2019.
[6] C. Pan, H. Ren, K. Wang, W. Xu, M. Elkashlan, A. Nallanathan, and L. Hanzo, "Multicell MIMO communications relaying on intelligent reflecting surfaces," IEEE Trans. Wirel. Commun., vol. 19, no. 8, pp. 5218–5233, Aug. 2020.
[7] F. Shu, Y. Teng, J. Li, M. Huang, W. Shi, J. Li, Y. Wu, and J. Wang, "Enhanced secrecy rate maximization for directional modulation networks via IRS," IEEE Trans. Commun., vol. 69, no. 12, pp. 8388–8401, Dec. 2021.
[8] R. Dong, S. Jiang, X. Hua, Y. Teng, F. Shu, and J. Wang, "Low-complexity joint phase adjustment and receive beamforming for directional modulation networks via IRS," IEEE Open Journal of the Communications Society, vol. 3, pp. 1234–1243, Aug. 2022.
[9] Z. Zhang, L. Dai, X. Chen, C. Liu, F. Yang, R. Schober, and H. V. Poor, "Active RIS vs. passive RIS: Which will prevail in 6G?" arXiv preprint arXiv:2103.15154, 2021.
316
+ page_content=' [10] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
317
+ page_content=' Liu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
318
+ page_content=' Zhang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
319
+ page_content=' Dai, s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
320
+ page_content=' Xu, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
321
+ page_content=' Yang, “Active reconfigurable intelligent surface: Fully-connected or sub-connected?”' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
322
+ page_content=' IEEE Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
323
+ page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
324
+ page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
325
+ page_content=' 26, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
326
+ page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
327
+ page_content=' 167–171, Jan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
328
+ page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
329
+ page_content=' [11] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
330
+ page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
331
+ page_content=' Nguyen, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
332
+ page_content='-D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
333
+ page_content=' Nguyen, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
334
+ page_content=' Wu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
335
+ page_content=' T¨olli, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
336
+ page_content=' Chatzinotas, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
337
+ page_content=' Juntti, “Hybrid active-passive reconfigurable intelligent surface- assisted multi-user MISO systems,” 2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
338
+ page_content=' 1–5, Jul.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
339
+ page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
340
+ page_content=' [12] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
341
+ page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
342
+ page_content=' Nguyen, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
343
+ page_content='-D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
344
+ page_content=' Vu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
345
+ page_content=' Lee, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
346
+ page_content=' Juntti, “Hybrid relay-reflecting intelligent surface-assisted wireless communications,” IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
347
+ page_content=' Veh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
348
+ page_content=' Technol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
349
+ page_content=', Mar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
350
+ page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
351
+ page_content=' [13] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
352
+ page_content=' Wang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
353
+ page_content=' Liu, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
354
+ page_content=' Cui, “Channel estimation for intelligent reflect- ing surface assisted multiuser communications: Framework, algorithms, and analysis,” IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
355
+ page_content=' Wirel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
356
+ page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
357
+ page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
358
+ page_content=' 19, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
359
+ page_content=' 10, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
360
+ page_content=' 6607– 6620, Oct.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
361
+ page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
362
+ page_content=' [14] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
363
+ page_content=' Dinkelbach, “On nonlinear fractional programming,” Manage Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
364
+ page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
365
+ page_content=' 13, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
366
+ page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
367
+ page_content=' 492–498, Mar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
368
+ page_content=' 1967.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
369
+ page_content=' 6 [15] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
370
+ page_content=' Sadek, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
371
+ page_content=' Tarighat, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
372
+ page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
373
+ page_content=' Sayed, “A leakage-based precoding scheme for downlink multi-user MIMO channels,” IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
374
+ page_content=' Wirel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
375
+ page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
376
+ page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
377
+ page_content=' 6, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
378
+ page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
379
+ page_content=' 1711–1721, May.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
380
+ page_content=' 2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf'}
1tFLT4oBgHgl3EQfqC-n/content/tmp_files/2301.12138v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
1tFLT4oBgHgl3EQfqC-n/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
2dAzT4oBgHgl3EQfDvol/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca70a709fed2059f59e2a2bb93719badfb150e8a5af1e7d35309da90b6f474af
+ size 2621485
2tAzT4oBgHgl3EQfRvvX/content/2301.01222v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2411d6217b0c264ce7acfed8173567cea2527d102da50a157808e6be15c53f0b
+ size 678765
2tAzT4oBgHgl3EQfRvvX/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e00e11b31e4d2e976ca07801b07807a58298b8f3ae7cc6d927d4b62ecc2b68d
+ size 3276845
2tAzT4oBgHgl3EQfRvvX/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6038ae35d70fe08244e9ead74d0831f4939765720812aac34d7880aaf14f252d
+ size 105798
49FIT4oBgHgl3EQf7St_/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a0f45c65c6a29ae1adfa3df5213c0b46887e9483d008ec65ee59ea20734e1c83
+ size 4063277
49FIT4oBgHgl3EQf7St_/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6457294f8a6711c9be13a34f164969e3b1504528679d9851b17c85d52f95369c
+ size 145870
4dFQT4oBgHgl3EQf4Ta7/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1162fbb2cb91a27cf4da7147704815582da1ee605d198b555d9b4d604583215a
+ size 294241
5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81f22da432b8b45835727ba2cbfcec06ed7bd969a87c2ad9c8d8cb67c34be0d7
+ size 240775
5NE3T4oBgHgl3EQfQgli/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e498c4b2509996515eb73e4798be3faa04d4170aeb83fd2476bf1ab587e10ce
+ size 3997741
5NE3T4oBgHgl3EQfQgli/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:512f4ff1e5ae67a0d43ee9df6527ed842e841cc89c0f3dfe9968c6efa1b2c263
+ size 134099
5tAzT4oBgHgl3EQff_wz/content/tmp_files/2301.01460v1.pdf.txt ADDED
@@ -0,0 +1,526 @@
+ Prepared for submission to JINST
+ Computational Models for High-Power Cyclotrons and FFAs
+ Andreas Adelmann,𝑎,1 Chris T. Rogers𝑏
+ 𝑎Paul Scherrer Institut, Forschungsstrasse 111, CH-5232 Villigen, Switzerland
+ 𝑏STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot, OX11 0QX, United Kingdom
+ Abstract: A summary of numerical modeling capabilities regarding high-power cyclotrons and fixed-field alternating gradient machines is presented. This paper focuses on techniques made available by the OPAL simulation code.
+ Keywords: High Power Cyclotrons, High Power FFAs, Computational Models, OPAL
+ 1Corresponding author.
+ arXiv:2301.01460v1 [physics.acc-ph] 4 Jan 2023
+ Contents
+ 1 Overview on Computational Models 1
+ 1.1 Single particle modeling 1
+ 1.2 Large Scale Multiparticle Modeling 2
+ 1.3 Surrogate Model Construction 2
+ 2 Physics Modeling 2
+ 2.1 Modeling H- Injection and Painting in Vertical and Horizontal FFAs 2
+ 2.2 Beam stripping interactions 5
+ 2.3 Spiral inflector modeling 5
+ 2.4 Neighboring Turn Modeling 5
+ 3 Path Forward 6
+ 1 Overview on Computational Models
+ In all high-power particle accelerators one of the major limitations is particle losses. Losses may be controlled, resulting in beam particles impinging on dedicated equipment such as collimators, or uncontrolled, resulting in beam particles striking other equipment around the accelerator. Uncontrolled losses can damage and activate any equipment in the accelerator and so must be minimized. Controlled losses need to be carefully considered and also minimized. The amount and cause of loss are investigated by modeling accelerators with simulation codes that describe the behaviour of beams numerically. A review of available numerical codes can be found in the article of Smirnov [1]. In this paper the modeling capabilities available in OPAL [2] are discussed in more detail.
+ 1.1 Single particle modeling
+ For conventional cyclotrons (and FFAs) the single-particle tool box is well established and many different code variants exist [1]. For cyclotrons and horizontal FFAs the existing tools are mature and accurate. New machines such as vertical FFAs, currently studied for example at the Rutherford Appleton Laboratory (RAL) [3], require non-trivial modifications to the existing codes. These modifications are under way, for example in the code OPAL [2], and are expected to be available in the second quarter of 2022.
+ Recently, in the context of very-high-field, ultra-compact H− cyclotrons, beam stripping losses of ion beams by interactions with residual gas and electromagnetic fields have been evaluated [4]. The beam stripping algorithm implemented in OPAL evaluates the interaction of hydrogen ions with residual gas and with electromagnetic fields. In the first case, the cross sections of the processes are estimated as a function of energy by means of analytical functions (see Sec. II-A of [4]). The implementation allows the user to set the pressure, temperature, and composition of the residual gas, which can be selected for the calculations as either molecular hydrogen (H2) or dry air in the usual proportion. For precise simulations, a two-dimensional pressure field map can be imported into OPAL from an external file, providing more realistic vacuum conditions.
+ – 1 –
+ Concerning electromagnetic stripping, the electric dissociation lifetime is evaluated through the theoretical formalism of Sec. II-B of [4]. In both instances, the individual probability at each integration step is assessed for every particle.
+ A stochastic process is used to decide whether an interaction occurs. If it does, the particle is stripped and removed from the beam, or optionally transformed into a secondary heavy particle, depending on the interaction. In the latter case, the secondary particle continues its motion with the new particle properties.
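+ The per-step stochastic test described above can be sketched as follows. The cross section, vacuum pressure, temperature, ion speed, and timestep below are illustrative placeholders, not values from OPAL or [4]; the gas number density comes from the ideal-gas law.

```python
import math
import random

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def stripping_probability(sigma_m2, pressure_pa, temp_k, speed_m_s, dt_s):
    """Probability of a residual-gas interaction in one integration step:
    P = 1 - exp(-n * sigma * v * dt), with number density n = p / (k_B T)."""
    n = pressure_pa / (K_B * temp_k)
    return 1.0 - math.exp(-n * sigma_m2 * speed_m_s * dt_s)

def survives_step(p_interact, rng=random.random):
    """Stochastic test: False means the ion interacted (e.g. was stripped)."""
    return rng() >= p_interact

# Illustrative numbers: 1e-20 m^2 cross section, 1e-5 Pa vacuum, 300 K,
# 1.4e7 m/s ion speed, 1 ns integration step.
p = stripping_probability(1e-20, 1e-5, 300.0, 1.4e7, 1e-9)
```

+ Because the per-step probability is tiny, many steps accumulate before a stripping event; sampling one uniform number per particle per step is exactly the stochastic process the text describes.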
+ 1.2 Large Scale Multiparticle Modeling
+ In general, modeling losses in high-intensity accelerators requires 3D space-charge calculations and a sufficient number of simulation particles. A recent investigation [5] proposes a sparse-grid-based adaptive noise reduction strategy for electrostatic particle-in-cell (PIC) simulations. By projecting the charge density onto sparse grids, high-frequency particle noise is reduced, and hence an optimal number of grid points and simulation particles can be obtained. For a 3D Penning trap simulation, a maximum speedup of 2.8 and a 15-fold memory reduction have been obtained. This method is already integrated into OPAL.
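+ The motivation for noise reduction can be seen in a toy 1D nearest-grid-point deposition: for a uniform beam the deposited density should be flat, and its statistical fluctuation falls roughly as 1/sqrt(N) with the particle count. This is plain PIC deposition for illustration only, not the sparse-grid scheme of [5].

```python
import numpy as np

def deposit_ngp(positions, n_cells):
    """Nearest-grid-point charge deposition on a unit-length 1D grid,
    normalised so that a uniform beam gives density 1 in every cell."""
    hist, _ = np.histogram(positions, bins=n_cells, range=(0.0, 1.0))
    return hist / (len(positions) / n_cells)

rng = np.random.default_rng(0)
noise = {}
for n_p in (1_000, 100_000):
    rho = deposit_ngp(rng.uniform(0.0, 1.0, n_p), 64)
    noise[n_p] = rho.std()  # deviation from the ideal flat density
```

+ Raising the particle count by a factor of 100 reduces the density noise by roughly a factor of 10, which is why large-scale loss studies need both many particles and noise-aware solvers.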
+ 1.3 Surrogate Model Construction
+ Cheap-to-evaluate surrogate models have gained a lot of interest lately, built with statistical [6] or machine learning techniques [7]. Such models can, for example, replace a computationally heavy model in a multi-objective optimization [8], or in the future become part of an on-line model. Some surrogate modeling algorithms also include an intrinsic estimator of the model uncertainty [9].
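+ As a concrete illustration of a surrogate with an intrinsic uncertainty estimate, the following is a minimal Gaussian-process regression sketch (RBF kernel, near-noise-free interpolation). It is a generic textbook construction, not the specific algorithms of [6]-[9]; the training function and kernel length scale are arbitrary choices for the demonstration.

```python
import numpy as np

def rbf(a, b, length=0.2):
    """Squared-exponential kernel matrix between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_query, jitter=1e-8):
    """GP posterior mean and standard deviation at the query points."""
    k = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
    ks = rbf(x_query, x_train)
    low = np.linalg.cholesky(k)
    alpha = np.linalg.solve(low.T, np.linalg.solve(low, y_train))
    mean = ks @ alpha
    v = np.linalg.solve(low, ks.T)
    var = np.ones(len(x_query)) - np.sum(v ** 2, axis=0)  # k(x, x) = 1
    return mean, np.sqrt(np.maximum(var, 0.0))

# Surrogate of an "expensive" model y = sin(2*pi*x) built from 8 samples.
x_tr = np.linspace(0.0, 1.0, 8)
y_tr = np.sin(2.0 * np.pi * x_tr)
mu, sd = gp_predict(x_tr, y_tr, np.array([0.25, 0.5]))
```

+ The returned standard deviation is the intrinsic uncertainty estimate mentioned in the text: it shrinks near training samples and grows away from them, which is what makes such surrogates usable inside an optimizer.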
+ 2 Physics Modeling
+ In this section we present the latest additions to the open-source code OPAL [2] regarding cyclotron and FFA modeling capabilities.
+ 2.1 Modeling H- Injection and Painting in Vertical and Horizontal FFAs
+ Fixed Field Accelerators (FFAs) have fixed magnetic fields, like cyclotrons, but the bending field increases with momentum, so more compact designs can be realized. FFAs offer the power efficiency of cyclotrons combined with the energy reach of synchrotrons.
+ FFAs have never been used for high-power proton acceleration; however, the models necessary for their design are available in OPAL. Single-particle tracking has been benchmarked against the KURNS FFA [10]. A design for a 3-12 MeV H- FFA prototype ring is being pursued at RAL as a prototype for a MW-class neutron spallation source [3]. A scaling horizontal orbit excursion (hFFA) machine and a vertical orbit excursion (vFFA) machine are both under consideration. Both are non-isochronous machines using RF cavities with variable resonant frequency. Injection is planned using charge exchange of H− to H+ and phase space painting.
+ – 2 –
+ In hFFAs, magnetic rigidity varies with radius. The dipole field varies as [11]
+ 𝐵𝑧(𝑧 = 0) = 𝐵0(𝜓) (𝑟/𝑟0)^𝑘 .    (2.1)
+ 𝐵0(𝜓) is the dipole field as a function of a normalised azimuthal coordinate 𝜓, 𝑟 is the radial coordinate, 𝑟0 is a nominal (user-defined) radius, and 𝑘 is the field index. The field away from the midplane, at 𝑧 ≠ 0, may be calculated using a recursion relation arising from consideration of Maxwell's equations in free space. OPAL has the capability to calculate the expansion to arbitrary order, within machine precision. The normalised azimuthal coordinate
+ 𝜓 = 𝜙 − tan(𝛿) ln(𝑟/𝑟0)    (2.2)
+ is a measure of distance around the ring. Here 𝜙 is the geometrical azimuthal angle and 𝛿 is the spiral angle; for a sector FFA magnet 𝛿 = 0 and 𝜓 = 𝜙. The arrangement of fields in this way guarantees that single-particle trajectories and optical parameters at all orders scale exactly with momentum.
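+ Equations (2.1)-(2.2) can be evaluated directly. The nominal radius 𝑟0 = 4 m, field index 𝑘 = 8, spiral angle 0, and the flat 0.5 T 𝐵0 profile below are illustrative placeholders, not the parameters of the RAL test ring.

```python
import math

def scaling_bz(r, phi, b0, r0=4.0, k=8.0, delta=0.0):
    """Midplane field of a scaling hFFA magnet, Eqs. (2.1)-(2.2):
    B_z(z=0) = B0(psi) * (r/r0)**k,  psi = phi - tan(delta) * ln(r/r0)."""
    psi = phi - math.tan(delta) * math.log(r / r0)
    return b0(psi) * (r / r0) ** k

flat = lambda psi: 0.5               # constant-B0 sector magnet [T]
b_ref = scaling_bz(4.0, 0.1, flat)   # on the nominal radius
b_out = scaling_bz(4.2, 0.1, flat)   # 5% further out
```

+ The field grows by the factor (r/r0)^k between the two radii, which is exactly the scaling property that keeps the optics momentum-independent.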
+ In vFFAs, magnetic rigidity varies with height. As particles are accelerated, the closed orbit changes height. Successive acceleration kicks add incoherently, so overall the beam follows the closed orbit with no appreciable emittance growth. Rectangular vFFA magnets have been implemented in OPAL, with a dipole field that varies as [12]
+ 𝐵(𝑥𝑣 = 0) = 𝐵0(𝑠𝑣) exp(𝑚𝑧𝑣) .    (2.3)
+ 𝑧𝑣 is the height, 𝑠𝑣 is a nominal longitudinal coordinate and 𝑥𝑣 is a nominal horizontal coordinate in the rectangular coordinate system of the magnet. 𝐵0 describes the dipole field variation with longitudinal distance; a tanh model is available for vFFA fields. 𝑚 is the vFFA field index, roughly equivalent to the field index 𝑘 in hFFAs. Fields away from the plane having 𝑥𝑣 = 0 are calculated using a field expansion derived from consideration of Maxwell's laws. It is noted that the focusing in the magnet body is, to linear order, skew quadrupole. The fringe field has solenoid components parallel to 𝑠𝑣 that may be significant for short magnets. This arrangement of fields guarantees that trajectories and optical functions are identical as momentum increases, barring a vertical displacement. In particular, the path length of the beam is independent of momentum, the momentum compaction factor is exactly 0, and ultra-relativistic particles are isochronous.
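+ The exponential field law of Eq. (2.3) fixes how far the closed orbit moves vertically during acceleration: the orbit of momentum 𝑝2 sits where the field has grown by 𝑝2/𝑝1, so exp(𝑚 Δ𝑧) = 𝑝2/𝑝1. The field index value below is an illustrative placeholder.

```python
import math

def vffa_orbit_shift(p2_over_p1, m_index):
    """Vertical closed-orbit displacement in a vFFA with B ~ exp(m*z):
    exp(m * dz) = p2/p1  =>  dz = ln(p2/p1) / m."""
    return math.log(p2_over_p1) / m_index

# Doubling the momentum with an (illustrative) field index m = 1.3 per metre:
dz = vffa_orbit_shift(2.0, 1.3)
# Consistency check with Eq. (2.3): the field at the shifted height is
# exactly p2/p1 times the field at the original height.
growth = math.exp(1.3 * dz)
```

+ Because the displacement is purely vertical and the field profile is self-similar in height, the optics seen by the shifted orbit are unchanged, as stated above.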
+ In order to model injection into the FFA, OPAL was extended with models for:
+ • horizontal & vertical FFA magnets as described above;
+ • variable frequency RF cavities;
+ • arbitrary order multipoles with maxwellian fringe fields;
+ • foil model (scattering and energy loss);
+ • pulsed injected beam; and
+ • pulsed multipoles.
+ – 3 –
+ Figure 1: Injection system for the hFFA. (Left) field map of the hFFA, calculated using OPAL, with labels indicating the position of injection equipment; (top right) closed orbits for different bump magnets; (bottom right) required bump magnet fields.
+ All but the latter two features are available in the latest version of OPAL. This enabled a fully four-dimensional simulation of the injection system, including consideration of effects such as appropriate phasing of the pulsed dipoles and transverse breathing of the beam arising from the initial longitudinal mismatch at injection.
+ As an example, a schematic of an injection system and associated parameters for the 3-12 MeV test ring is shown for a horizontal FFA in Fig. 1. Owing to the compact nature of the ring, the injection system is spread across a number of cells. H− are brought into the ring and onto a foil. Bump magnets in the ring distort the proton closed orbit so that particles passing through the foil are returned to a nominal closed orbit. The foil is placed inside the defocusing (D) dipole magnet so that the distorted H+ closed orbit and the H− beam, initially separated, are brought onto the same trajectory. Electrons are stripped from the H−, leaving H+ (protons). The bump magnets are slowly varied, so that the proton closed orbit is moved away from the H− injection point and newly injected particles are at higher horizontal amplitude. In the H− injection line, pulsed magnets move the H− upwards so that newly injected particles are at higher vertical amplitude. Overall, a correlation is introduced between horizontal and vertical amplitude. Sample trajectories and bump magnet field strengths for the magnets in the ring are shown in Fig. 1. In this example vertical bumpers are not considered; they are all kept at 0 T. The beam following injection is shown in Fig. 2.
+ – 4 –
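+ The correlated painting described above can be sketched schematically: as the horizontal bump collapses over the injection turns, later turns acquire larger horizontal amplitude, while the ramped vertical offset in the injection line gives the same turns larger vertical amplitude. The linear ramps below are a toy schedule, not the ring's actual bump programme.

```python
import numpy as np

n_turns = 50
bump = np.linspace(1.0, 0.0, n_turns)   # normalised bump strength per turn
amp_x = 1.0 - bump                      # horizontal amplitude at injection
amp_z = np.linspace(0.0, 1.0, n_turns)  # vertical offset ramp per turn
corr = np.corrcoef(amp_x, amp_z)[0, 1]  # correlation across injection turns
```

+ With both schedules linear the turn-by-turn amplitudes are perfectly correlated; a real painting programme would shape the ramps to control the final transverse distribution.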
+ Figure 2: Beam (left) after injection is completed, but still on a distorted orbit, and (right) following collapse of the bump. 𝑥 is the position of the beam relative to the ring centre and 𝑦 is the height of the particle above the midplane. Particles are coloured according to the injection turn.
+ 2.2 Beam stripping interactions
+ Beam transmission optimization and loss characterization, in which beam stripping interactions are a key issue, play an important role in the design and operation of compact cyclotrons. A beam stripping model has been implemented in the three-dimensional object-oriented parallel code OPAL-cycl, a flavor of the OPAL framework. The model includes Monte Carlo methods for the interaction with residual gas and for dissociation by electromagnetic stripping. The model has been verified against theoretical models and applied to the AMIT cyclotron under design conditions [4].
+ 2.3 Spiral inflector modeling
+ In [13] a spiral inflector model implemented in OPAL is presented that enables highly realistic simulations of the spiral inflector system of a compact cyclotron (cf. Fig. 3). A new geometry class and field solver can handle the complicated boundary conditions posed by the electrode system in the central region of the cyclotron, both in terms of particle termination and in the calculation of self-fields. Results are benchmarked against the analytical solution of a coasting beam. As a practical example, the spiral inflector and the first revolution in a 1 MeV/amu test cyclotron, located at Best Cyclotron Systems, Inc., are modeled and compared with measurements [14, 15]. In conclusion, OPAL can handle realistic and arbitrary boundary geometries. Simulated injection efficiencies and beam shape compare well with measured efficiencies and a preliminary measurement of the beam distribution after injection.
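+ Particle tracking through the inflector's combined electric and magnetic fields is commonly done with the Boris scheme; the sketch below is a generic non-relativistic Boris step with uniform fields, not OPAL's integrator or the inflector field map. In a pure magnetic field the scheme conserves the particle speed exactly (up to roundoff), which the example exploits.

```python
import numpy as np

def boris_step(x, v, e_fld, b_fld, q_over_m, dt):
    """One non-relativistic Boris push: half electric kick, magnetic
    rotation, half electric kick, then a position drift."""
    v_minus = v + 0.5 * q_over_m * e_fld * dt
    t = 0.5 * q_over_m * b_fld * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * e_fld * dt
    return x + v_new * dt, v_new

Q_OVER_M = 9.58e7                  # proton charge-to-mass ratio [C/kg]
x = np.zeros(3)
v = np.array([1.0e5, 0.0, 0.0])    # 100 km/s, well below relativistic
b = np.array([0.0, 0.0, 1.0])      # 1 T axial field, no electric field
for _ in range(200):
    x, v = boris_step(x, v, np.zeros(3), b, Q_OVER_M, 1.0e-11)
speed_error = abs(np.linalg.norm(v) - 1.0e5) / 1.0e5
```

+ This energy-conserving rotation is one reason Boris-type pushers are the workhorse for long tracking runs through electrode and magnet fields.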
+ 2.4 Neighboring Turn Modeling
+ A hardware-architecture-independent implementation of an adaptive mesh refinement Poisson solver has been integrated into the electrostatic particle-in-cell beam dynamics code OPAL. The Poisson solver is solely based on second-generation Trilinos packages to ensure the desired hardware portability. Based on the massively parallel framework AMREX, formerly known as BoxLib, the new adaptive mesh refinement interface provides several refinement policies in order to enable precise large-scale neighbouring-bunch simulations in high-intensity cyclotrons. The solver is validated against a built-in multigrid solver of AMREX and a test problem with an analytical solution. The parallel scalability is presented, as well as an example of a neighbouring-bunch simulation that covers the scale of the anticipated physics simulations [16].
+ – 5 –
+ Figure 3: Spiral inflector with selected particle trajectories from an OPAL simulation. The beam enters axially (from the top) through an aperture (grey) and is bent into the mid-plane by a combination of the electrostatic field generated by the spiral electrodes (green and blue) and the cyclotron's main magnetic field. It is then accelerated by the two Dees (copper; Dummy-Dees not shown) [13].
+ Figure 4: Integrated projection of the electric field component 𝐸𝑥 onto the xy-plane showing 7 adjacent particle bunches [16].
3 Path Forward
While statistical and machine learning techniques have a lot of potential, high-fidelity physics simulations will always be needed, for example to produce the training set. In the case of high-intensity machines we need large numbers of particles and the associated fine mesh to solve the PDE in question. It is imperative that we make use of existing and future high-performance computing infrastructure.
A performance-portable implementation [16] is of utmost importance. The OPAL collaboration [2] is in the process of completely rewriting the code according to the sketch in Fig. 5. With this new architecture we will be able to make efficient use of the exascale architectures that will come online soon. The core algorithms of OPAL are already performance portable, as demonstrated in [17].
Figure 5: Outlook of the future OPAL architecture, targeting future exascale architectures in a performance-portable way.
Acknowledgments
The authors acknowledge the OPAL developer team for their continued support of this open source, community-driven code.
References
[1] V. Smirnov. Computer codes for beam dynamics analysis of cyclotronlike accelerators. Phys. Rev. Accel. Beams, 20:124801, 2017. doi: 10.1103/PhysRevAccelBeams.20.124801. URL https://link.aps.org/doi/10.1103/PhysRevAccelBeams.20.124801.
[2] The OPAL Framework: Version 2.4, 2021. http://amas.web.psi.ch/opal/Documentation/2.4/index.html.
[3] S. Machida, D. J. Kelliher, J.-B. Lagrange, and C. T. Rogers. Optics design of vertical excursion fixed-field alternating gradient accelerators. Phys. Rev. Accel. Beams, 24:021601, 2021. doi: 10.1103/PhysRevAccelBeams.24.021601. URL https://link.aps.org/doi/10.1103/PhysRevAccelBeams.24.021601.
[4] P. Calvo, I. Podadera, D. Gavela, C. Oliver, A. Adelmann, J. Snuverink, and A. Gsell. Beam stripping interactions in compact cyclotrons. Phys. Rev. Accel. Beams, 24:090101, 2021. doi: 10.1103/PhysRevAccelBeams.24.090101. URL https://link.aps.org/doi/10.1103/PhysRevAccelBeams.24.090101.
[5] Sriramkrishnan Muralikrishnan, Antoine J. Cerfon, Matthias Frey, Lee F. Ricketson, and Andreas Adelmann. Sparse grid-based adaptive noise reduction strategy for particle-in-cell schemes. Journal of Computational Physics: X, 11:100094, 2021. doi: 10.1016/j.jcpx.2021.100094. URL https://www.sciencedirect.com/science/article/pii/S2590055221000111.
[6] Andreas Adelmann. On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification, 7(2):383–416, 2019.
[7] Renato Bellotti, Romana Boiger, and Andreas Adelmann. Fast, efficient and flexible particle accelerator optimisation using densely connected and invertible neural networks. Information, 12(9), 2021. doi: 10.3390/info12090351. URL https://www.mdpi.com/2078-2489/12/9/351.
[8] Auralee Edelen, Nicole Neveu, Yannick Huber, Matthias Frey, and Andreas Adelmann. Machine learning to enable orders of magnitude speedup in multi-objective optimization of particle accelerator systems. Phys. Rev. Accel. Beams, 23:044601, 2020. doi: 10.1103/PhysRevAccelBeams.23.044601. URL https://link.aps.org/doi/10.1103/PhysRevAccelBeams.23.044601.
[9] Matthias Frey and Andreas Adelmann. Global sensitivity analysis on numerical solver parameters of particle-in-cell models in particle accelerator systems. Computer Physics Communications, 258:107577, 2021. doi: 10.1016/j.cpc.2020.107577. URL http://www.sciencedirect.com/science/article/pii/S0010465520302770.
[10] Suzanne Sheehy et al. Progress on Simulation of Fixed Field Alternating Gradient Accelerators. In 6th International Particle Accelerator Conference, page MOPJE077, 2015. doi: 10.18429/JACoW-IPAC2015-MOPJE077.
[11] K. R. Symon, D. W. Kerst, L. W. Jones, L. J. Laslett, and K. M. Terwilliger. Fixed-field alternating-gradient particle accelerators. Phys. Rev., 103:1837–1859, 1956. doi: 10.1103/PhysRev.103.1837. URL https://link.aps.org/doi/10.1103/PhysRev.103.1837.
[12] Stephen Brooks. Vertical orbit excursion fixed field alternating gradient accelerators. Phys. Rev. ST Accel. Beams, 16:084001, 2013. doi: 10.1103/PhysRevSTAB.16.084001. URL https://link.aps.org/doi/10.1103/PhysRevSTAB.16.084001.
[13] Daniel Winklehner, Andreas Adelmann, Achim Gsell, Tulin Kaman, and Daniela Campo. Realistic simulations of a cyclotron spiral inflector within a particle-in-cell framework. Phys. Rev. Accel. Beams, 20:124201, 2017. doi: 10.1103/PhysRevAccelBeams.20.124201. URL https://link.aps.org/doi/10.1103/PhysRevAccelBeams.20.124201.
[14] Daniel Winklehner, Andreas Adelmann, Achim Gsell, Tulin Kaman, and Daniela Campo. Realistic simulations of a cyclotron spiral inflector within a particle-in-cell framework. Phys. Rev. Accel. Beams, 20:124201, 2017. doi: 10.1103/PhysRevAccelBeams.20.124201. URL https://link.aps.org/doi/10.1103/PhysRevAccelBeams.20.124201.
[15] J. Alonso, S. Axani, L. Calabretta, D. Campo, L. Celona, J. M. Conrad, A. Day, G. Castro, F. Labrecque, and D. Winklehner. The IsoDAR high intensity H2+ transport and injection tests. Journal of Instrumentation, 10(10):T10003, 2015. doi: 10.1088/1748-0221/10/10/T10003. URL https://dx.doi.org/10.1088/1748-0221/10/10/T10003.
[16] Matthias Frey, Andreas Adelmann, and Uldis Locans. On architecture and performance of adaptive mesh refinement in an electrostatics particle-in-cell code (vol 247, 106912, 2020). Computer Physics Communications, 265, 2021.
[17] Sriramkrishnan Muralikrishnan, Matthias Frey, Alessandro Vinciguerra, Michael Ligotino, Antoine J. Cerfon, Miroslav Stoyanov, Rahulkumar Gayatri, and Andreas Adelmann. ALPINE: A set of performance portable plasma physics particle-in-cell mini-apps for exascale computing, 2022. arXiv:2205.11052.
5tAzT4oBgHgl3EQff_wz/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,430 @@
Prepared for submission to JINST
Computational Models for High-Power Cyclotrons and FFAs
Andreas Adelmann,𝑎,1 Chris T. Rogers𝑏
𝑎 Paul Scherrer Institut, Forschungsstrasse 111, CH-5232 Villigen, Switzerland
𝑏 STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot, OX11 0QX, United Kingdom
E-mail: andreas.adelmann@psi.ch, chris.rogers@stfc.ac.uk
Abstract: A summary of numerical modeling capabilities regarding high power cyclotrons and fixed field alternating gradient machines is presented. This paper focuses on techniques made available by the OPAL simulation code.
Keywords: High Power Cyclotrons, High Power FFAs, Computational Models, OPAL
1Corresponding author.
arXiv:2301.01460v1 [physics.acc-ph] 4 Jan 2023
Contents
1 Overview on Computational Models
1.1 Single particle modeling
1.2 Large Scale Multiparticle Modeling
1.3 Surrogate Model Construction
2 Physics Modeling
2.1 Modeling H- Injection and Painting in Vertical and Horizontal FFAs
2.2 Beam stripping interactions
2.3 Spiral inflector modeling
2.4 Neighboring Turn Modeling
3 Path Forward
1 Overview on Computational Models
In all high-power particle accelerators one of the major limitations is particle losses. Losses may be controlled, resulting in beam particles impinging on dedicated equipment such as collimators, or uncontrolled, resulting in beam particles striking other equipment around the accelerator. Uncontrolled losses can damage and activate any equipment in the accelerator and so must be minimized. Controlled losses need to be carefully considered and also minimized. The amount and cause of loss are investigated by modeling accelerators using simulation codes that model numerically the behaviour of beams. A review of available numerical codes can be found in the article of Smirnov [1]. In this paper, modeling capabilities available in OPAL [2] are discussed in more detail.
1.1 Single particle modeling
For conventional cyclotrons (and FFAs) the single-particle tool box is established and many different code variants exist [1]. For cyclotrons (and horizontal FFAs) the existing tools seem to be comfortable and accurate. New machines like vertical FFAs, currently studied for example at the Rutherford Appleton Laboratory (RAL) [3], require non-trivial modifications to the existing codes. These modifications are under way, for example in the code OPAL [2], and are expected to be available in the second quarter of 2022.
Recently, in the context of very high field and ultra-compact H− cyclotrons, beam stripping losses of ion beams through interactions with residual gas and electromagnetic fields were evaluated [4]. The beam stripping algorithm, implemented in OPAL, evaluates the interaction of hydrogen ions with residual gas and electromagnetic fields. In the first case, the cross sections of the processes are estimated according to the energy by means of analytical functions (see Sec. II-A of [4]). The implementation allows the user to set the pressure, temperature, and composition of the residual gas, which can be selected as either molecular hydrogen (H2) or dry air in the usual proportion. For precise simulations, a two-dimensional pressure field map from an external file can be imported into OPAL, providing more realistic vacuum conditions. Concerning electromagnetic stripping, the electric dissociation lifetime is evaluated through the theoretical formalism (see Sec. II-B of [4]). In both instances, the individual interaction probability at each integration step is assessed for every particle, and a stochastic process is used to evaluate whether an interaction occurs. If it does, the particle is stripped and removed from the beam, or optionally transformed into a secondary heavy particle, depending on the interaction; the secondary particle then continues its motion with the new particle properties.
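The per-step stochastic decision described above can be sketched as follows. This is a minimal illustration, not OPAL's actual implementation; the cross section, gas density, velocity, and time step in the usage line are hypothetical placeholders.

```python
import math
import random

def survival_probability(sigma, n_gas, v, dt):
    """Probability that an ion survives one integration step without a
    stripping interaction: exp(-n*sigma*v*dt), where n*sigma*v*dt is the
    expected number of interactions along the path length v*dt."""
    return math.exp(-n_gas * sigma * v * dt)

def is_stripped(sigma, n_gas, v, dt, rng=random.random):
    """Stochastic per-particle, per-step decision: compare the interaction
    probability with a uniform random number in [0, 1)."""
    p_interact = 1.0 - survival_probability(sigma, n_gas, v, dt)
    return rng() < p_interact

# Hypothetical numbers: sigma ~ 1e-20 m^2, n ~ 3e19 m^-3, v ~ 1e7 m/s, dt ~ 1 ns
p = 1.0 - survival_probability(1e-20, 3e19, 1e7, 1e-9)
```

A particle flagged by `is_stripped` would then be removed from the beam or replaced by the appropriate secondary species, as described in the text.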
1.2 Large Scale Multiparticle Modeling
In general, modeling losses in high-intensity accelerators requires 3D space charge and a sufficient number of simulation particles. Recent investigations [5] propose a sparse grid-based adaptive noise reduction strategy for electrostatic particle-in-cell (PIC) simulations. By projecting the charge density onto sparse grids, high-frequency particle noise is reduced and hence an optimal number of grid points and simulation particles can be obtained. For a 3D Penning trap simulation, a maximum speedup of 2.8 and a 15-fold memory reduction have been obtained. This method is already integrated into OPAL.
1.3 Surrogate Model Construction
Cheap-to-evaluate surrogate models have gained a lot of interest lately. Statistical [6] or machine learning techniques are used [7]. These models can, for example, replace a computationally heavy model in a multi-objective optimization [8] or, in the future, be part of an on-line model. Some surrogate modeling algorithms may include an intrinsic estimator for the model uncertainty [9].
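As a minimal illustration of the idea (not one of the cited methods), an expensive model can be sampled at a few design points and replaced by a cheap fit; the "expensive" model and the polynomial form below are stand-ins chosen purely for brevity.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly simulation evaluated at design point x.
    return np.sin(x) + 0.1 * x**2

# A handful of expensive evaluations form the training set ...
x_train = np.linspace(0.0, 3.0, 20)
y_train = expensive_model(x_train)

# ... and a cheap-to-evaluate surrogate (here a cubic polynomial) is fitted.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=3))

# The surrogate can now stand in for the heavy model, e.g. inside a
# multi-objective optimization loop, at negligible cost per evaluation.
max_train_error = float(np.max(np.abs(surrogate(x_train) - y_train)))
```

In practice the surrogate would be a polynomial chaos expansion or a neural network as in [6–8], and its intrinsic uncertainty estimate, where available, indicates where more expensive samples are needed.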
2 Physics Modeling
In this section we show the latest additions to the open source code OPAL [2] regarding cyclotron and FFA modeling capabilities.
2.1 Modeling H- Injection and Painting in Vertical and Horizontal FFAs
Fixed Field Accelerators (FFAs) have fixed magnetic fields, like cyclotrons, but the bending field increases with momentum, and hence more compact designs can be realized. FFAs offer the power efficiency of cyclotrons combined with the energy reach of synchrotrons. FFAs have never been used for high power proton acceleration; however, the necessary models for design are available in OPAL. Single particle tracking has been benchmarked against the KURNS FFA [10]. A design for a 3-12 MeV H- FFA prototype ring is being pursued at RAL as a prototype for a MW-class neutron spallation source [3]. A scaling horizontal orbit excursion FFA (hFFA) and a vertical orbit excursion FFA (vFFA) are both under consideration. Both are non-isochronous machines using RF cavities with variable resonant frequency. Injection is planned using charge exchange of H− to H+ and phase space painting.
In hFFAs, magnetic rigidity varies with radius. The dipole field varies as [11]
𝐵𝑧(𝑧 = 0) = 𝐵0(𝜓) (𝑟/𝑟0)^𝑘 .  (2.1)
𝐵0(𝜓) is the dipole field as a function of a normalised azimuthal coordinate 𝜓, 𝑟 is the radial coordinate, 𝑟0 is a nominal (user-defined) radius, and 𝑘 is the field index. The field away from the midplane, at 𝑧 ≠ 0, may be calculated using a recursion relation arising from consideration of Maxwell’s equations in free space. OPAL has the capability to calculate the expansion to arbitrary order, within machine precision. The normalised azimuthal coordinate
𝜓 = 𝜙 − tan(𝛿) ln(𝑟/𝑟0)  (2.2)
is a measure of distance around the ring. Here 𝜙 is the geometrical azimuthal angle and 𝛿 is the spiral angle; for a sector FFA magnet 𝛿 = 0 and 𝜓 = 𝜙. This arrangement of fields guarantees that single particle trajectories and optical parameters at all orders scale exactly with momentum.
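Eqs. (2.1)-(2.2) can be evaluated directly; the sketch below uses a hypothetical 8-cell azimuthal profile and made-up parameters purely for illustration.

```python
import math

def midplane_field(r, phi, b0=lambda psi: 1.0 + 0.5 * math.cos(8.0 * psi),
                   r0=4.0, k=8.0, delta=0.0):
    """Midplane field B_z(r, phi) of a scaling FFA magnet.

    b0(psi) -- azimuthal dipole profile B0 (hypothetical 8-cell example),
    r0 -- nominal radius, k -- field index, delta -- spiral angle [rad].
    """
    psi = phi - math.tan(delta) * math.log(r / r0)  # Eq. (2.2)
    return b0(psi) * (r / r0) ** k                  # Eq. (2.1)
```

For a sector magnet (delta = 0) psi reduces to phi; the radial power law (r/r0)^k is what makes orbits at different momenta geometrically similar.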
In vFFAs, magnetic rigidity varies with height. As particles are accelerated, the closed orbit changes height. Successive acceleration kicks add incoherently, so overall the beam follows the closed orbit with no appreciable emittance growth. Rectangular vFFA magnets have been implemented in OPAL, with a dipole field that varies as [12]
𝐵(𝑥𝑣 = 0) = 𝐵0(𝑠𝑣) 𝑒^(𝑚𝑧𝑣) .  (2.3)
𝑧𝑣 is the height, 𝑠𝑣 is a nominal longitudinal coordinate, and 𝑥𝑣 is a nominal horizontal coordinate in the rectangular coordinate system of the magnet. 𝐵0 describes the dipole field variation with longitudinal distance; a tanh model is available for vFFA fields. 𝑚 is the vFFA field index, roughly equivalent to the field index 𝑘 in hFFAs. Fields away from the plane 𝑥𝑣 = 0 are calculated using a field expansion derived from consideration of Maxwell’s laws. It is noted that the focusing in the magnet body is, to linear order, skew quadrupole. The fringe field has solenoid components parallel to 𝑠𝑣 that may be significant for short magnets. This arrangement of fields guarantees that trajectories and optical functions are identical as momentum increases, barring a vertical displacement. In particular, the path length of the beam is independent of momentum, the momentum compaction factor is exactly 0, and ultra-relativistic particles are isochronous.
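A sketch of Eq. (2.3) and the displacement property it implies; the longitudinal profile b0 and the index m below are hypothetical.

```python
import math

def vffa_field(z_v, s_v, b0=lambda s: math.exp(-s * s), m=1.3):
    """Dipole field of a rectangular vFFA magnet on the x_v = 0 plane,
    Eq. (2.3): B(z_v, s_v) = B0(s_v) * exp(m * z_v)."""
    return b0(s_v) * math.exp(m * z_v)

# Shifting the orbit up by dz multiplies the field everywhere by exp(m*dz),
# matching the rigidity increase: a higher-momentum orbit is the same
# trajectory displaced vertically.
dz = 0.1
ratio = vffa_field(0.25 + dz, 0.5) / vffa_field(0.25, 0.5)
```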
In order to model injection into the FFA, OPAL was extended with models for: horizontal & vertical FFA magnets as described above; variable frequency RF cavities; arbitrary order multipoles with Maxwellian fringe fields; a foil model (scattering and energy loss); pulsed injected beam; and pulsed multipoles.
Figure 1: Injection system for the hFFA. (Left) field map of the hFFA, calculated using OPAL, with labels indicating the position of injection equipment; (top right) closed orbits for different bump magnets; (bottom right) required bump magnet fields.
+ page_content=' with labels indicating the position of injection equipment (top right) closed orbits for different bump magnets (bottom right) required bump magnet fields.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
101
+ page_content=' All but the latter two features are available in the latest version of OPAL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
102
+ page_content=' This enabled a fully four-dimensional simulation of the injection system, including consideration of effects such as appropriate phasing of the pulsed dipoles and transverse breathing of the beam arising due to initial longitudinal mismatch at injection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
103
+ page_content=' As an example, a schematic of an injection system and associated parameters for the 3-12 MeV test ring is shown for a horizontal FFA in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
104
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
105
+ page_content=' Owing to the compact nature of the ring, the injection system is spread across a number of cells.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
106
+ page_content=' H− are brought into the ring and onto a foil.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
107
+ page_content=' Bump magnets in the ring distort the proton closed orbit so that particles passing through the foil are returned to a nominal closed orbit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
108
+ page_content=' The foil is placed inside the defocusing (D) dipole magnet so that the distorted H+ closed orbit and H− beam, initially separated, are brought onto the same trajectory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
109
+ page_content=' Electrons are stripped from the H− leaving H+ (protons).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
110
+ page_content=' The bump magnets are slowly varied, so that the proton closed orbit is moved away from the injection point for the H− and newly injected particles are at higher horizontal amplitude.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
111
+ page_content=' In the H− injection line, pulsed magnets move the H− upwards so that newly injected particles are at higher vertical amplitude.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
112
+ page_content=' Overall, a correlation is introduced between horizontal and vertical amplitude.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
113
+ page_content=' Sample trajectories and bump magnet field strengths for the magnets in the ring are shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
114
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
The beam following injection is shown in Fig. 2.
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
118
+ page_content=' – 4 – roΦ [m] for ro = 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
119
+ page_content='0 m 0 N 4 6 8 10 12 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
120
+ page_content='075 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
121
+ page_content='050 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
122
+ page_content='025 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
123
+ page_content='000 [m] 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
124
+ page_content='975 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
125
+ page_content='950 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
126
+ page_content='925 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
127
+ page_content='900 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
128
+ page_content='875 0 20 40 60 80 100 120 140 160 180 [o] Φ0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
129
+ page_content='00 h bump 1 hbump2 hbump 3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
130
+ page_content='02 h bump 4 hbump5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
131
+ page_content='04 vbump 1 [1] v bump 2 field 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
132
+ page_content='06 vbump3 Bump v bump 4 vbump 5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
133
+ page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
134
+ page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
135
+ page_content='12 3900 3920 3940 3960 3980 4000 4020 4040 4060 Radial position[mm]Orbit E 4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
136
+ page_content='4 B 2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
137
+ page_content='2 w 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
138
+ page_content='0 2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
139
+ page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
Figure 2: Beam (left) after injection is completed, but still on a distorted orbit; (right) following collapse of the bump. x is the position of the beam relative to the ring centre and y is the height of the particle above the midplane. Particles are coloured according to the injection turn.
2.2 Beam stripping interactions

Beam transmission optimization and loss characterization, in which beam stripping interactions are a key issue, play an important role in the design and operation of compact cyclotrons. A beam stripping model has been implemented in the three-dimensional object-oriented parallel code OPAL-cycl, a flavor of the OPAL framework. The model includes Monte Carlo methods for interactions with residual gas and for dissociation by electromagnetic stripping. The model has been verified against theoretical models and applied to the AMIT cyclotron under design conditions [4].
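A minimal Monte Carlo sketch of the residual-gas part of such a model is shown below, assuming a constant per-step stripping probability p = 1 − exp(−nσΔs). The cross section, gas density, and step length are placeholder values, not the AMIT design conditions; the actual OPAL-cycl model is described in [4].

```python
import numpy as np

# Minimal Monte Carlo sketch of residual-gas stripping, in the spirit of the
# OPAL-cycl model (placeholder values throughout, not AMIT data).

rng = np.random.default_rng(1)

sigma = 1e-18   # H- stripping cross section [cm^2], hypothetical
n_gas = 3e12    # residual gas number density [cm^-3], hypothetical
step = 10.0     # path length per tracking step [cm]

# per-step stripping probability for a thin slab of gas
p_strip = 1.0 - np.exp(-n_gas * sigma * step)

n_particles, n_steps = 100_000, 200
alive = np.ones(n_particles, dtype=bool)
for _ in range(n_steps):
    # each surviving particle is tested against the stripping probability
    alive &= rng.random(n_particles) >= p_strip

survival_mc = alive.mean()
survival_analytic = np.exp(-n_gas * sigma * step * n_steps)
print(f"MC survival {survival_mc:.4f} vs analytic {survival_analytic:.4f}")
```

The Monte Carlo estimate converges to the analytic exponential attenuation, which is the kind of theoretical check the verification above refers to.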
2.3 Spiral inflector modeling

In [13] a spiral inflector model implemented in OPAL is presented that enables highly realistic simulations of the spiral inflector system of a compact cyclotron (c.f. Fig. 3). A new geometry class and field solver handle the complicated boundary conditions posed by the electrode system in the central region of the cyclotron, both in terms of particle termination and the calculation of self-fields. Results are benchmarked against the analytical solution of a coasting beam. As a practical example, the spiral inflector and the first revolution in a 1 MeV/amu test cyclotron, located at Best Cyclotron Systems, Inc., are modeled and compared to the simulation results [14, 15]. In conclusion, OPAL can handle realistic and arbitrary boundary geometries. Simulated injection efficiencies and beam shape compare well with measured efficiencies and a preliminary measurement of the beam distribution after injection.
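The bending in a spiral inflector is governed by two length scales that can be estimated directly: the electric radius A = 2T/(qE) set by the electrode field, and the magnetic radius ρ = p/(qB) set by the cyclotron's main field. The back-of-the-envelope sketch below uses hypothetical field values, not the Best Cyclotron Systems parameters, for a 1 MeV/amu beam as in the example above.

```python
import math

# Two bending scales in a spiral inflector (illustrative placeholder values):
# electric radius A = 2T/(qE), magnetic radius rho = p/(qB).

q = 1.602176634e-19      # elementary charge [C]
amu = 1.66053906660e-27  # atomic mass unit [kg]

T_eV = 1.0e6             # kinetic energy per nucleon [eV] (1 MeV/amu)
E_field = 1.0e6          # electrode field strength [V/m], hypothetical
B = 1.0                  # main magnetic field [T], hypothetical

T = T_eV * q                      # kinetic energy [J] per nucleon
A = 2.0 * T / (q * E_field)       # electric radius [m]
p = math.sqrt(2.0 * amu * T)      # non-relativistic momentum per nucleon
rho = p / (q * B)                 # magnetic radius [m]
print(f"electric radius A = {A:.3f} m, magnetic radius rho = {rho:.3f} m")
```

Comparable electric and magnetic radii are what force the fully three-dimensional electrode geometry and self-field treatment described above.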
2.4 Neighboring Turn Modeling

This article presents a hardware-architecture-independent implementation of an adaptive mesh refinement Poisson solver that is integrated into the electrostatic particle-in-cell beam dynamics code OPAL. The Poisson solver is based solely on second-generation Trilinos packages to ensure the desired hardware portability. Based on the massively parallel framework AMReX, formerly known as BoxLib, the new adaptive mesh refinement interface provides several refinement policies in order to enable precise large-scale neighbouring-bunch simulations in high-intensity cyclotrons. The solver is validated with a built-in multigrid solver of AMReX and a test problem with an analytical solution. The parallel scalability is presented, as well as an example of a neighbouring-bunch simulation that covers the scale of the later anticipated physics simulation [16].

Figure 3: Spiral inflector with selected particle trajectories from an OPAL simulation. The beam enters axially (from the top) through an aperture (grey) and is bent into the mid-plane by a combination of the electrostatic field generated by the spiral electrodes (green and blue) and the cyclotron's main magnetic field. It is then accelerated by the two Dees (copper; Dummy-Dees not shown) [13].

Figure 4: Integrated projection of the electric field component 𝐸𝑥 onto the xy-plane, showing 7 adjacent particle bunches [16].
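The validation strategy mentioned above — checking a solver against a problem with an analytical solution — can be illustrated at a much smaller scale. The following sketch (plain second-order finite differences in 1D, not the Trilinos/AMReX machinery) solves −φ″ = ρ with ρ = π² sin(πx) and homogeneous Dirichlet boundaries, whose exact solution is φ = sin(πx).

```python
import numpy as np

# 1D Poisson validation against an analytical solution:
# solve -phi'' = rho with rho = pi^2 sin(pi x), phi(0) = phi(1) = 0,
# exact solution phi(x) = sin(pi x).

n = 200
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
rho = np.pi**2 * np.sin(np.pi * x[1:-1])

# tridiagonal second-order Laplacian acting on the interior points
A = (np.diag(2.0 * np.ones(n - 1))
     - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h**2

phi = np.zeros(n + 1)
phi[1:-1] = np.linalg.solve(A, rho)

err = np.max(np.abs(phi - np.sin(np.pi * x)))
print(f"max error vs analytic solution: {err:.2e}")  # O(h^2) convergence
```

Halving h reduces the error fourfold, confirming the scheme's second-order accuracy; the AMReX-based solver is exercised with the same kind of manufactured-solution check, just in 3D and on refined meshes.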
3 Path Forward

While statistical and machine learning techniques have a lot of potential, high-fidelity physics simulations will always be needed, for example to produce the training set. In the case of high-intensity machines we will need large numbers of particles and the associated fine mesh to solve the PDE in question. It is imperative that we make use of existing and future high-performance infrastructure.
A performance-portable implementation [16] is of utmost importance. The OPAL collaboration [2] is in the process of completely rewriting the code according to the sketch in Fig. 5. With this new architecture we will be able to make efficient use of the exascale architectures that will come online soon. The core algorithms of OPAL are already performance portable, as demonstrated in [17].

Figure 5: Outlook of the future OPAL architecture, targeting future exascale architectures in a performance-portable way.
Acknowledgments

The authors acknowledge the OPAL developer team for their continued support of this open source, community-driven code.
References

[1] V. Smirnov. Computer codes for beam dynamics analysis of cyclotronlike accelerators. Phys. Rev. Accel. Beams, 20:124801, 2017. doi: 10.1103/PhysRevAccelBeams.20.124801.

[2] The OPAL Framework: Version 2.4, 2021. http://amas.web.psi.ch/opal/Documentation/2.4/index.html.

[3] S. Machida, D. J. Kelliher, J-B. Lagrange, and C. T. Rogers. Optics design of vertical excursion fixed-field alternating gradient accelerators. Phys. Rev. Accel. Beams, 24:021601, 2021. doi: 10.1103/PhysRevAccelBeams.24.021601.

[4] P. Calvo, I. Podadera, D. Gavela, C. Oliver, A. Adelmann, J. Snuverink, and A. Gsell. Beam stripping interactions in compact cyclotrons. Phys. Rev. Accel. Beams, 24:090101, 2021. doi: 10.1103/PhysRevAccelBeams.24.090101.

[5] Sriramkrishnan Muralikrishnan, Antoine J. Cerfon, Matthias Frey, Lee F. Ricketson, and Andreas Adelmann. Sparse grid-based adaptive noise reduction strategy for particle-in-cell schemes. Journal of Computational Physics: X, 11:100094, 2021. doi: 10.1016/j.jcpx.2021.100094.

[6] Andreas Adelmann. On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling.
290
+ page_content=' SIAM/ASA Journal on Uncertainty Quantification, 7(2):383–416, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
291
+ page_content=' [7] Renato Bellotti, Romana Boiger, and Andreas Adelmann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
292
+ page_content=' Fast, efficient and flexible particle accelerator optimisation using densely connected and invertible neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
293
+ page_content=' Information, 12(9), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
294
+ page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
295
+ page_content='3390/info12090351.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
296
+ page_content=' URL https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
297
+ page_content='mdpi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
298
+ page_content='com/2078-2489/12/9/351.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
299
+ page_content=' [8] Auralee Edelen, Nicole Neveu, Yannick Huber, Matthias Frey, and Andreas Adelmannn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
300
+ page_content=' Machine learning to enable orders of magnitude speedup in multi-objective optimization of particle accelerator systems’.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
301
+ page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
302
+ page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
303
+ page_content=' AB, 23:044601, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
304
+ page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
305
+ page_content='1103/PhysRevAccelBeams.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
306
+ page_content='23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
307
+ page_content='044601.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
308
+ page_content=' URL https://link.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
309
+ page_content='aps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
310
+ page_content='org/doi/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
311
+ page_content='1103/PhysRevAccelBeams.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
312
+ page_content='23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
313
+ page_content='044601.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
314
+ page_content=' [9] Matthias Frey and Andreas Adelmann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
315
+ page_content=' Global sensitivity analysis on numerical solver parameters of particle-in-cell models in particle accelerator systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
316
+ page_content=' Computer Physics Communications, 258: 107577, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
317
+ page_content=' ISSN 0010-4655.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
318
+ page_content=' doi: https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
319
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
320
+ page_content='1016/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
321
+ page_content='cpc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
322
+ page_content='2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
323
+ page_content='107577.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
324
+ page_content=' URL http://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
325
+ page_content='sciencedirect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
326
+ page_content='com/science/article/pii/S0010465520302770.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
327
+ page_content=' [10] Suzanne Sheehy et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
328
+ page_content=' Progress on Simulation of Fixed Field Alternating Gradient Accelerators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
329
+ page_content=' In 6th International Particle Accelerator Conference, page MOPJE077, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
330
+ page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
331
+ page_content='18429/JACoW-IPAC2015-MOPJE077.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
332
+ page_content=' [11] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
333
+ page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
334
+ page_content=' Symon, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
335
+ page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
336
+ page_content=' Kerst, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
337
+ page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
338
+ page_content=' Jones, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
339
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
340
+ page_content=' Laslett, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
341
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
342
+ page_content=' Terwilliger.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
343
+ page_content=' Fixed-field alternating-gradient particle accelerators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
344
+ page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
345
+ page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
346
+ page_content=', 103:1837–1859, Sep 1956.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
347
+ page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
348
+ page_content='1103/PhysRev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
349
+ page_content='103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
350
+ page_content='1837.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
351
+ page_content=' URL https://link.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
352
+ page_content='aps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
353
+ page_content='org/doi/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
354
+ page_content='1103/PhysRev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
355
+ page_content='103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
356
+ page_content='1837.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
357
+ page_content=' [12] Stephen Brooks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
358
+ page_content=' Vertical orbit excursion fixed field alternating gradient accelerators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
359
+ page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
360
+ page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
361
+ page_content=' ST Accel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
362
+ page_content=' Beams, 16:084001, Aug 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
363
+ page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
364
+ page_content='1103/PhysRevSTAB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
365
+ page_content='16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
366
+ page_content='084001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
367
+ page_content=' URL https://link.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
368
+ page_content='aps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
369
+ page_content='org/doi/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
370
+ page_content='1103/PhysRevSTAB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
371
+ page_content='16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
372
+ page_content='084001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
373
+ page_content=' [13] Daniel Winklehner, Andreas Adelmann, Achim Gsell, Tulin Kaman, and Daniela Campo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
374
+ page_content=' Realistic simulations of a cyclotron spiral inflector within a particle-in-cell framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
375
+ page_content=' Physical Review Accelerators and Beams, 20(12):124201, 12 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
376
+ page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
377
+ page_content='1103/PhysRevAccelBeams.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
378
+ page_content='20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
379
+ page_content='124201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
380
+ page_content=' URL https://link.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
381
+ page_content='aps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
382
+ page_content='org/doi/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
383
+ page_content='1103/PhysRevAccelBeams.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
384
+ page_content='20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
385
+ page_content='124201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
386
+ page_content=' [14] Daniel Winklehner, Andreas Adelmann, Achim Gsell, Tulin Kaman, and Daniela Campo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
387
+ page_content=' Realistic simulations of a cyclotron spiral inflector within a particle-in-cell framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
388
+ page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
389
+ page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
390
+ page_content=' Accel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
391
+ page_content=' Beams, 20:124201, Dec 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
392
+ page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
393
+ page_content='1103/PhysRevAccelBeams.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
394
+ page_content='20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
395
+ page_content='124201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
396
+ page_content=' URL https://link.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
397
+ page_content='aps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
398
+ page_content='org/doi/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
399
+ page_content='1103/PhysRevAccelBeams.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
400
+ page_content='20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
401
+ page_content='124201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
402
+ page_content=' [15] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
403
+ page_content=' Alonso, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
404
+ page_content=' Axani, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
405
+ page_content=' Calabretta, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
406
+ page_content=' Campo, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
407
+ page_content=' Celona, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
408
+ page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
409
+ page_content=' Conrad, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
410
+ page_content=' Day, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
411
+ page_content=' Castro, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
412
+ page_content=' Labrecque, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
413
+ page_content=' Winklehner.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
414
+ page_content=' The isodar high intensity h2+ transport and injection tests.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
415
+ page_content=' Journal of Instrumentation, 10(10):T10003, oct 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
416
+ page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
417
+ page_content='1088/1748-0221/10/10/T10003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
418
+ page_content=' URL https://dx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
419
+ page_content='doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
420
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
421
+ page_content='1088/1748-0221/10/10/T10003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
422
+ page_content=' [16] Matthias Frey, Andreas Adelmann, and Uldis Locans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
423
+ page_content=' On architecture and performance of adaptive mesh refinement in an electrostatics particle-in-cell code (vol 247, 106912, 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
424
+ page_content=' COMPUTER PHYSICS COMMUNICATIONS, 265, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
425
+ page_content=' [17] Sriramkrishnan Muralikrishnan, Matthias Frey, Alessandro Vinciguerra, Michael Ligotino, Antoine J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
426
+ page_content=' Cerfon, Miroslav Stoyanov, Rahulkumar Gayatri, and Andreas Adelmann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
427
+ page_content=' Alpine: A set of performance portable plasma physics particle-in-cell mini-apps for exascale computing, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
428
+ page_content=' URL arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
429
+ page_content='11052.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
430
+ page_content=' – 8 –' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf'}
5tE2T4oBgHgl3EQfkQe0/content/2301.03977v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f58c5f405159b3a97e30fcd2d18fa6afd1d5137dc179725fd1423ab7625b216
+ size 787689
5tE2T4oBgHgl3EQfkQe0/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04a7cc2a261d244696f666efdef2f790459f020adf657f6856952757f4b64e8c
+ size 3145773
7dAyT4oBgHgl3EQfQvYd/content/tmp_files/2301.00050v1.pdf.txt ADDED
@@ -0,0 +1,1826 @@
This is a post-peer-review, pre-copyedit version of an article published in Autonomous Robots. The final authenticated version is available online at: http://dx.doi.org/10.1007/s10514-017-9682-5

Long-Term Online Multi-Session Graph-Based SPLAM with Memory Management

Mathieu Labbé · François Michaud

Abstract For long-term simultaneous planning, localization and mapping (SPLAM), a robot should be able to continuously update its map according to the dynamic changes of the environment and the new areas explored. With limited onboard computation capabilities, a robot should also be able to limit the size of the map used for online localization and mapping. This paper addresses these challenges using a memory management mechanism, which distinguishes locations that should remain in a Working Memory (WM) for online processing from locations that should be transferred to a Long-Term Memory (LTM). When revisiting previously mapped areas that are in LTM, the mechanism can retrieve these locations and place them back in WM for online SPLAM. The approach is tested on a robot equipped with a short-range laser rangefinder and an RGB-D camera, autonomously patrolling 10.5 km in an indoor environment over 11 sessions while encountering 139 people.

Keywords SLAM · path planning · pose graph · multi-session · loop closure detection

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chair program and the Canadian Foundation for Innovation.

M. Labbé, E-mail: [email protected]
F. Michaud, E-mail: [email protected]
Interdisciplinary Institute for Technological Innovation (3IT), Université de Sherbrooke, Sherbrooke, Québec, Canada

1 Introduction

The ability to simultaneously map an environment, localize itself in it, and plan paths using this information is known as Simultaneous Planning, Localization And Mapping, or SPLAM (Stachniss, 2009). This task can be particularly complex when done online on a robot with limited computing resources in large, unstructured and dynamic environments. Since SPLAM can be seen as an extension of Simultaneous Localization And Mapping (SLAM), many approaches exist (Thrun et al., 2005). Our interest lies with graph-based SLAM approaches (Grisetti et al., 2010), for which combining a lightweight topological map with a detailed metrical map proves more suitable for large-scale mapping and navigation (Konolige et al., 2011).
53
+ are :
54
+ – Multi-session mapping, also known as the kidnapped
55
+ robot problem or the initial state problem: when
56
+ turned on, a robot does not know its relative po-
57
+ sition to a map previously created, making it im-
58
+ possible to plan a path to a previously visited loca-
59
+ tion. A solution is to have the robot localize itself
60
+ in a previously-built map before initiating mapping.
61
+ This solution has the advantage of always using the
62
+ same referential, resulting in only one map is created
63
+ across the sessions. However, the robot must start
64
+ in a portion already mapped of the environment.
65
+ Another approach is to initialize a new map with
66
+ its own referential on startup, and when a previ-
67
+ ously visited location is encountered, a transforma-
68
+ tion between the two maps can be computed. The
69
+ transformations between the maps can be saved ex-
70
+ plicitly with special nodes called anchor nodes (Mc-
71
+ Donald et al., 2012; Kim et al., 2010), or implicitly
72
+ with links added between each map (Konolige and
73
+ Bowman, 2009; Latif et al., 2013). This process is
74
+ referred to as loop closure detection. Loop closure
75
+ detection approaches that are independent of the
76
+ arXiv:2301.00050v1 [cs.RO] 30 Dec 2022
77
+
78
+ 2
79
+ Mathieu Labb´e, Fran¸cois Michaud
80
+ robot’s estimated position (Ho and Newman, 2006)
81
+ can intrinsically detect if the current location is a
82
+ new location or a previously visited one among all
83
+ the mapping sessions conducted in the past. Popular
84
+ loop closure detection approaches are appearance-
85
+ based (Garcia-Fidalgo and Ortiz, 2015), exploiting
86
+ the distinctiveness of images of the environment.
87
+ The underlying idea is that loop closure detection
88
+ is done by comparing all previous images with the
89
+ new one. When loop closures are found between the
90
+ maps, a global map can be created by combining
91
+ the maps from each session. In graph-based SLAM,
92
+ graph pose optimization approaches (Folkesson and
93
+ Christensen, 2007; Grisetti et al., 2007; Kummerle
94
+ et al., 2011; Johannsson et al., 2013) use these loop
95
+ closures to reduce odometry errors inside each map
96
+ and in between the maps.
97
– Long-term mapping in dynamic environments. Persistent (Milford and Wyeth, 2010), lifelong (Konolige and Bowman, 2009) and continuous (Pirker et al., 2011) are terms generally used to describe SLAM approaches working in such conditions. Continuously updating and adding new data to the map in unbounded or dynamic environments will inevitably increase the map size over time. Online simultaneous planning, localization and mapping requires that new incoming data be processed faster than the time needed to acquire them: for example, if data are acquired at 1 Hz, updating the map should take less than 1 s. As the map grows, the time required for loop closure detection and graph optimization increases, and eventually limits the size of the environment that can be mapped and used online.
To address these challenges, we introduce SPLAM-MM, a graph-based SPLAM approach with a memory management (MM) mechanism. As demonstrated in (Labbe and Michaud, 2013), memory management can be used to limit the size of the map so that loop closure detections are always processed under a fixed time limit, thus satisfying online requirements for long-term and large-scale environment mapping. The idea behind SPLAM-MM is to limit the number of nodes available for loop closure detection and graph optimization, keeping enough observations in the map for successful online localization and planning while still being able to generate a global representation of the environment that can adapt to changes over time.
The paper is organized as follows. Section 2 reviews graph-based SLAM approaches that reduce the size of the map when revisiting the same environment while continuously adapting to dynamic changes. Section 3 describes the implementation and the operating principles associated with the use of memory management in a graph-based SPLAM approach, which extends our previous metric-based SLAM approach (Labbe and Michaud, 2014) with a new planning capability. The implementation integrates four algorithms: loop closure detection (Labbe and Michaud, 2013), graph optimization (Grisetti et al., 2007), a metrical path planner (Marder-Eppstein et al., 2010) and a custom topological path planner. Section 4 presents experimental results of 11 SPLAM sessions using the AZIMUT-3 robot in an indoor environment over 10.5 km. Section 5 discusses strengths and limitations of SPLAM-MM, and Section 6 concludes the paper.
2 Related Work

Lifelong appearance-based SLAM requires dealing with dynamic environments. Glover et al. (2010) present an appearance-based SLAM approach that had to operate in different lighting conditions over three weeks. An interesting observation from their experiments is that even when revisiting the same locations, the map still grows: in dynamic environments, the loop closure detector is sometimes unable to detect loop closures, duplicating locations in the map. A map management approach is therefore required to limit map size. In highly dynamic environments, multiple views of the same location may also be required for proper localization. Churchill and Newman (2012) present a graph-based SLAM approach where visual experiences of the same locations are kept in the map, to increase localization robustness to dynamic changes caused, for instance, by outdoor illumination conditions. If localization fails when revisiting an area, new experiences are added to the map. Even if adding new visual experiences to the map happens less often over time (as the robot explores the same location), there is no mechanism to limit this growth. Pirker et al. (2011) present a continuous monocular SLAM approach where new key frames are added to the map only when the environment has changed, to keep its size proportional to the explored space. But if the environment changes very often, there is no mechanism to limit the number of key frames over the same physical location.
Long-Term Online Multi-Session Graph-Based SPLAM with Memory Management

Some SLAM approaches can handle dynamic changes of the environment while limiting the size of the map for long-term operation. Biber et al. (2005) present a sample-based representation for maps, to handle changes at different timescales, tracking both stationary and non-stationary elements of the environment. The idea is to refresh the samples stored for each timescale with new sensor measurements. Map growth is then indirectly limited, as older memories fade at different rates depending on the timescale. Walcott-Bryant et al. (2012) describe Dynamic Pose-Graph SLAM (DPG-SLAM), a long-term mapping approach that detects static and dynamic changes of the environment through time. To keep the graph consistent while reducing its size, nodes that are no longer observable are removed. Johannsson et al. (2013) also remove unobservable nodes to limit the size of the map over time when revisiting the same area. Similar nodes of the graph are merged together while keeping only the new loop closure detection. However, the graph size is not bounded when exploring new areas. Krajník et al. (2016) present an occupancy grid approach where each cell in the map estimates its occupancy value depending on periodical and cyclic changes occurring in the environment. This increases localization and navigation accuracy in dynamic environments compared to static maps, as the predicted map represents the correct state of the environment at that time of day (e.g., doors can change from open to closed). The maximum data kept for each cell is bounded by parameters depending on the smallest and longest cyclic periods that should be detected, thus keeping memory usage fixed. However, the approach assumes that the navigation phase always occurs in the same environment as the first mapping cycle, with no possibility to extend it afterward.
These problems of lifelong SLAM are also addressed in some SPLAM approaches. Milford and Wyeth (2010) present a solution to limit the size of the map (called an experience map) while revisiting the same area: close nodes are merged together up to a maximum density threshold. This approach has the advantage of making the map size independent of the operating time, but the diversity of the observations at each location is somewhat lost. Konolige et al. (2011) use a view-based graph SLAM approach (Konolige and Bowman, 2009) in a SPLAM context. The approach preserves the diversity of the images referring to the same location so that the map can handle dynamic changes over time, and forgetting images limits the size of the graph when revisiting the same area. However, the graph still grows when visiting new areas.
Overall, these approaches reduce map size when revisiting the same area, while continuously adapting to dynamic changes. This makes them independent or almost independent of the operation time of the robot in these conditions, but they are all limited to a maximum size of the environment that can be mapped online. The SPLAM-MM approach deals specifically with this limitation.
Fig. 1 The AZIMUT-3 robot equipped with a URG-04LX laser rangefinder and a Xtion PRO LIVE sensor.
3 Memory Management for SPLAM

The underlying representation of SPLAM-MM is a graph with nodes and links. The nodes contain the following information:

– ID: unique time index of the node.
– Weight: an indication of the importance of the node, used for memory management.
– Bag-of-words (BOW): visual words used for loop closure detection. They are SURF features (Bay et al., 2008) quantized to an incremental vocabulary based on KD-Trees.
– Sensor data: used to find similarities between nodes and to construct maps. For this paper, our implementation of SPLAM-MM uses the AZIMUT-3 robot (Ferland et al., 2010), equipped with a URG-04LX laser rangefinder and a Xtion PRO LIVE RGB-D camera, as shown in Fig. 1. The sensor data used are:
  – Pose: the position of the robot computed by its odometry system (e.g., the value given by wheel odometry), expressed in (x, y, θ) coordinates.
  – RGB image: used to extract visual words.
  – Depth image: used to find the 3D positions of the visual words. The depth image is registered with the RGB image, i.e., each depth pixel corresponds exactly to the same RGB pixel.
  – Laser scan: used for loop closure transformations and odometry refinements, and by the Proximity Detection module.

The links store rigid transformations (i.e., Euclidean transformations derived from odometry or loop closures) between nodes. There are four types of links:
Fig. 2 Memory management and control architecture of SPLAM-MM.
– Neighbor link: created between a new node and the previous one.
– Loop closure link: added when a loop closure is detected between the new node and one in the map.
– Proximity link: added when two close nodes are aligned together.
– Temporary link: used for path planning purposes; it keeps the planned path connected to the current map.
Figure 2 presents a high-level representation of SPLAM-MM. Basically, it consists of a graph-based SLAM module with memory management, to which path planners are added. Memory management involves the use of a Working Memory (WM) and a Long-Term Memory (LTM). WM is where maps, which are graphs of nodes and links, are processed. To satisfy online constraints, nodes can be transferred to and retrieved from LTM. More specifically, the WM size indirectly depends on a fixed time limit T: when the time required to update the map (i.e., the time required to execute the processes in the Graph-based SLAM-MM block) reaches T, some nodes of the map are transferred from WM to LTM, thus keeping the WM size nearly constant and the processing time around T. However, when a loop closure is detected, LTM nodes that are neighbors of the loop closure node can be retrieved from LTM to WM for further loop closure detections. In other words, when the robot revisits an area that was previously transferred to LTM, it can incrementally retrieve that area if at least one of its nodes is still in WM. When some LTM nodes are retrieved, nodes in WM from other areas of the map can be transferred to LTM, to limit the map size in WM and therefore keep the processing time around T.
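To make this transfer-retrieval cycle concrete, here is a minimal Python sketch of it, assuming simple dictionary-based node records. The class, method names and timing check are illustrative assumptions for exposition, not the authors' actual implementation.

```python
import time

class MemoryManager:
    """Minimal sketch of the WM/LTM transfer-retrieval cycle (illustrative)."""

    def __init__(self, time_limit_t):
        self.T = time_limit_t  # fixed map-update time limit T (seconds)
        self.wm = {}           # Working Memory: node id -> node
        self.ltm = {}          # Long-Term Memory: node id -> node

    def update(self, new_node, process_map):
        """Add a node, run the SLAM update, transfer a node if it took > T."""
        start = time.monotonic()
        self.wm[new_node["id"]] = new_node
        process_map(self.wm)  # loop closure detection, graph optimization, ...
        if time.monotonic() - start > self.T and len(self.wm) > 1:
            # Oldest, least-weighted node is transferred first (Heuristic 1).
            victim = min(self.wm.values(), key=lambda n: (n["weight"], n["id"]))
            self.ltm[victim["id"]] = self.wm.pop(victim["id"])

    def retrieve(self, loop_closure_id):
        """On a loop closure, bring LTM neighbors of that node back into WM."""
        for nid in list(self.ltm):
            if loop_closure_id in self.ltm[nid].get("neighbors", []):
                self.wm[nid] = self.ltm.pop(nid)
```

In practice the transfer loop would run until the update time falls back under T; the single-transfer version above only illustrates the direction of data flow between the two memories.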
Therefore, the choice of which nodes to keep in WM is key in SPLAM-MM. The objective is to have enough nodes in WM from each mapping session for loop closure detections, and to keep a maximum number of nodes in WM for generating a map usable to correctly follow a planned path, while still satisfying online processing. Two heuristics are used to establish the compromise between the selection of which nodes to keep in WM and online processing:
– Heuristic 1 is inspired by observations made by psychologists (Atkinson and Shiffrin, 1968; Baddeley, 1997) that people remember better the areas where they spent most of their time, compared to those where they spent less time. In terms of memory management, this means that the longer the robot stays at a particular location, the larger the weight of the corresponding node should be. The oldest and least-weighted nodes in WM are transferred to LTM before the others, thus keeping in WM only the nodes seen for longer periods of time. As demonstrated in (Labbe and Michaud, 2013), this heuristic proves quite efficient in establishing the compromise between search time and space, as driven by the environment and the experiences of the robot.
– Heuristic 2 is used to identify nodes that should stay in WM for autonomous navigation. Nodes on a planned path could have small weights and may be identified for transfer to LTM by Heuristic 1, eliminating the possibility of finding a loop closure link or a proximity link with these nodes and correctly following the path. Therefore, Heuristic 2 must supersede Heuristic 1 and allow upcoming nodes to remain in WM, even if they are old and have a small weight.

Fig. 3 Illustration of the local map (inner dashed area) and the global map (outer dotted area) in multi-session mapping. Red nodes are in LTM, while all other nodes are in WM. Loop closure links are shown using bidirectional green arrows.
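The interplay of the two heuristics can be sketched as follows, in hedged form: a hypothetical helper that orders WM nodes for transfer, exempting nodes on the planned path (the node dictionaries and function name are illustrative, not the authors' API).

```python
def pick_transfer_order(wm_nodes, planned_path_ids):
    """Order WM nodes for transfer to LTM under the two heuristics.

    Heuristic 1: oldest, least-weighted nodes go first (smaller id = older).
    Heuristic 2: nodes on the currently planned path are never transferred.
    wm_nodes is a list of {'id', 'weight'} dicts; names are illustrative.
    """
    transferable = [n for n in wm_nodes if n["id"] not in planned_path_ids]
    return sorted(transferable, key=lambda n: (n["weight"], n["id"]))
```

For example, an old, lightly weighted node that lies on the planned path is excluded from the transfer order entirely, even though Heuristic 1 alone would transfer it first.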
The Graph-based SLAM-MM block provides two types of maps derived from nodes in WM and LTM:

– Local map, i.e., the largest connected graph that can be created from the last node in WM using nodes available in WM only. The local map is used for online path planning.
– Global map, i.e., the largest connected graph that can be created from the last node in WM using nodes in WM and LTM. It is used for offline path planning.
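Extracting the local map amounts to a graph traversal from the last node, restricted to WM nodes. A minimal Python sketch, with illustrative data structures (node-ID sets and undirected edge pairs), not the authors' API:

```python
from collections import deque

def local_map(last_node_id, wm_ids, links):
    """Largest connected subgraph reachable from the last node via WM nodes.

    links is an iterable of (id_a, id_b) edges (neighbor, loop closure,
    proximity or temporary links); all names here are illustrative.
    """
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {last_node_id}, deque([last_node_id])
    while queue:
        cur = queue.popleft()
        for nxt in adj.get(cur, ()):
            if nxt in wm_ids and nxt not in seen:  # LTM nodes are skipped
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

Passing the union of WM and LTM node IDs instead of the WM set alone yields the global map in the same way.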
Figure 3 uses diamonds to represent the initial and end nodes of each mapping session. The nodes in LTM are shown in red and the others are those in WM. The local map is created using only the nodes in WM that are linked to the last node. The graph linking the last node with other nodes in WM and LTM represents the global map (outer dotted area). If loop closure detections are found between nodes of different maps, loop closure links can be generated, and the local map can span multiple mapping sessions. Other nodes in WM that are not included in the local map are unreachable from the last node, but they are still used for loop closure detections since all nodes in WM (including those in Map 2, for instance) are examined.

The modules presented in Fig. 2 are described as follows.
3.1 Short-Term Memory Module

Short-Term Memory (STM) is the entry point where sensor data are assembled into a node to be added to the map. Similarly to (Labbe and Michaud, 2013), the role of the STM module is to update node weights based on visual similarity. When a node is created, a unique time index ID is assigned and its weight is initialized to 0. The current pose, RGB image, depth image and laser scan readings are also memorized in the node. If two consecutive nodes have similar images, i.e., the ratio of corresponding visual words between the nodes is over a specified threshold Y, the weight of the previous node is increased by one. If the robot is not moving (i.e., odometry poses are the same), the new node is deleted. To reduce odometry errors on successive STM nodes, transformation refinement is done using 2D iterative-closest-point (ICP) optimization (Besl and McKay, 1992) on the rigid transformation of the neighbor link with the previous node and the corresponding laser scans. If the ratio of ICP point correspondences between the laser scans over the total laser scan size is greater than or equal to C, the neighbor link's transformation is updated with the correction.

When the STM size reaches a fixed limit of S nodes, the oldest node in STM is moved to WM. The STM size is determined based on the velocity of the robot and the rate at which nodes are added to the map. Because images are generally very similar to the newly added node, keeping S nodes in STM avoids using them for appearance-based loop closure detection once in WM. For example, at the same velocity, the STM size should be larger if the rate at which nodes are added to the map increases, in order to keep nodes with consecutive similar images in STM. Transferring nodes with images very similar to the current node from STM to WM too early limits the ability to detect loop closures with older nodes in WM.
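The STM weight-update and transfer policy can be sketched in a few lines of Python. The function signature and node dictionaries are illustrative assumptions; Y and S correspond to the thresholds named in the text.

```python
def update_stm(stm, new_node, sim_ratio, y_threshold, s_max):
    """Sketch of the STM weight-update and transfer policy (illustrative).

    sim_ratio is the ratio of corresponding visual words between the new
    node and the previous one; y_threshold and s_max stand for Y and S.
    Returns the node moved to WM, if any.
    """
    if stm and sim_ratio >= y_threshold:
        stm[-1]["weight"] += 1      # previous node observed a little longer
    stm.append(new_node)
    if len(stm) > s_max:            # oldest node leaves STM for WM
        return stm.pop(0)
    return None
```

The stationary-robot case (identical odometry poses, new node deleted) is omitted here for brevity.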
3.2 Appearance-based Loop Closure Detection Module

Appearance-based loop closure detection is based on the bag-of-words approach described in (Labbe and Michaud, 2013). Briefly, this approach uses a Bayesian filter to evaluate appearance-based loop closure hypotheses over all previous images in WM. When a loop closure hypothesis reaches a pre-defined threshold H, a loop closure is detected. Visual words of the nodes are used to compute the likelihood required by the filter. In this work, the Term Frequency-Inverse Document Frequency (TF-IDF) approach (Sivic and Zisserman, 2003) is used for fast likelihood estimation, and FLANN (Fast Library for Approximate Nearest Neighbors) incremental KD-Trees (Muja and Lowe, 2009) are used to avoid rebuilding the vocabulary at each iteration. To keep it balanced, the vocabulary is rebuilt only when it doubles in size.
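As a toy illustration of TF-IDF scoring in this context, the sketch below scores a query image's visual words against each node's words, in the spirit of Sivic and Zisserman (2003). It works on raw word lists rather than the incremental KD-tree vocabulary actually used, and the scoring variant (dot product of tf-idf weights) is an assumption for exposition.

```python
import math
from collections import Counter

def tfidf_likelihood(query_words, node_words_by_id):
    """TF-IDF similarity of a query image's visual words against each node."""
    n_docs = len(node_words_by_id)
    df = Counter()  # document frequency of each visual word
    for words in node_words_by_id.values():
        df.update(set(words))
    q = Counter(query_words)
    scores = {}
    for node_id, words in node_words_by_id.items():
        tf = Counter(words)
        score = 0.0
        for w, qc in q.items():
            if w in tf:
                idf = math.log(n_docs / df[w])  # rare words weigh more
                score += (qc / len(query_words)) * (tf[w] / len(words)) * idf * idf
        scores[node_id] = score
    return scores
```

Nodes sharing rare words with the query score higher than nodes sharing common words, and nodes with no shared words score zero, which is the property the likelihood estimation relies on.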
The RGB image, from which the visual words are extracted, is registered with a depth image. Using (1), for each 2D point (x, y) in the rectified RGB image, a 3D position Pxyz can be computed using the calibration matrix (focal lengths fx and fy, optical centres cx and cy) and the depth information d for the corresponding pixel in the depth image. The 3D positions of the visual words are then known. When a loop closure is detected, the rigid transformation between the matching images is computed using a RANSAC (RANdom SAmple Consensus) approach which exploits the 3D visual word correspondences (Rusu and Cousins, 2011). If a minimum of I inliers are found, the transformation is refined using the laser scans, in the same way as the odometry correction in STM using 2D ICP transformation refinement. If the transformation refinement is accepted, a loop closure link is added with the computed transformation between the corresponding nodes. The weight of the current node is updated by adding the weight of the loop closure hypothesis node, and the latter is reset to 0, so that only one node with a large weight represents the same location.

P_{xyz} = \left[ \frac{(x - c_x) \cdot d}{f_x}, \frac{(y - c_y) \cdot d}{f_y}, d \right]^T    (1)
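Equation (1) is the standard pinhole back-projection; as a small illustration (plain Python, hypothetical function name):

```python
def back_project(x, y, d, fx, fy, cx, cy):
    """Back-project a rectified RGB pixel (x, y) with registered depth d
    into a 3D camera-frame point, following Eq. (1)."""
    return ((x - cx) * d / fx, (y - cy) * d / fy, d)
```

A pixel at the optical centre maps onto the optical axis at range d, while off-centre pixels are displaced proportionally to depth and inversely to focal length.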
By doing appearance-based loop closure detection this way, setting H high means that there is less chance of detecting false positives, but at the cost of detecting fewer loop closures (Labbe and Michaud, 2013). For SPLAM-MM, H can be set relatively low to detect more loop closures, because false positives that are geometrically different will be rejected by the rigid transformation computation step (i.e., the 3D visual word correspondences and 2D ICP transformation refinement).
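The resulting acceptance chain can be summarized as three successive tests; the sketch below names them after the paper's H, I and C thresholds, with placeholder numeric defaults that are not the values used by the authors.

```python
def accept_loop_closure(hypothesis_p, ransac_inliers, icp_ratio,
                        h=0.15, i_min=20, c_min=0.3):
    """Sketch of the acceptance chain for a loop closure candidate.

    Geometrically inconsistent candidates fail the inlier or ICP tests
    even when the appearance-based probability exceeds H, which is why H
    can be set relatively low.
    """
    if hypothesis_p < h:
        return False              # appearance hypothesis too weak
    if ransac_inliers < i_min:
        return False              # not enough 3D visual-word inliers
    return icp_ratio >= c_min     # laser-scan refinement must also agree
```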
3.3 Proximity Detection Module

Appearance-based loop closure detection is limited by the perceptual range of the sensory data used. For instance, when the robot revisits an area in the opposite direction, the RGB-D camera on AZIMUT-3 is not pointing in the same direction as when the nodes were created, and thus no appearance-based loop closures can be detected. This also happens when there are not enough visual features within the depth range of the RGB-D camera (e.g., white walls or long halls). Simply relying on appearance-based loop closure detections for map corrections would then limit path planning capabilities and make navigation difficult in such conditions. Figure 4a illustrates a situation where the robot is in a hall, coming back to its starting position in the reverse direction. Setting a goal at the starting position would make the planner fail because no loop closures could be found to correct the odometry, resulting in a wall being placed directly on the starting position. One solution would be to have the robot visit the nodes of the graph backward so that loop closures could be detected to correct the map, and ultimately reach the starting position. However, this is inefficient and unsafe if the robot does not have sensors pointing backward. To deal with such situations, the Proximity Detection module uses laser rangefinder data to correct odometry drift in areas where the camera cannot detect loop closures. With a field of view of more than 180°, the laser scans can be aligned in the reverse direction, generating proximity links. As laser scans are not as discriminative as images, proximity detection is restricted to nodes of the local map located around the estimated position of the robot. Figure 4b illustrates the result.
Figure 5 illustrates how nodes located close to the robot are selected by the Proximity Detection module. Only nodes in the local map with their pose inside a radius R centered on the robot are used. Nodes in STM are not considered, to avoid adding useless links with nodes close by: this would increase graph optimization time without significantly improving the map. The nodes are then segmented into groups of nodes connected only by neighbor links. To be considered for proximity detection, a group must have its node nearest to the robot inside a fixed radius L defining close-by nodes (with L < R), to keep the length of the resulting proximity links small for path planning. Note that Appearance-based Loop Closure Detection is done before Proximity Detection, so if the nearest node already has a loop closure with the new node, the group is ignored. Proximity detection is then applied separately on each group of nodes through the following steps:

1. A rigid transformation between the nearest node of each group and the new node added to the map is computed as in Section 3.2. If it is accepted, a proximity link is added between the corresponding nodes, and the group is ignored for step 2. These links are referred to as visual proximity links because visual words are used in the transformation estimation.
2. To avoid having to compare multiple nodes with very similar laser scans (and thus to save computation), only the most recent node among those within the same fixed small radius L (centered on each node) is kept among the nodes of a remaining group. Then, for each group, the laser scans of the nodes are merged together using their respective poses. 2D ICP transformation refinement is done between the merged laser scans and that of the new node. If the transformation is accepted, a new proximity link with this transformation is added to the graph between the new node and the nearest node in the group.

Fig. 4 Illustration of the role of the Proximity Detection module. On the left are the raw laser scans, with the blue dot marking the starting position; on the right is the corresponding occupancy grid map at 0.05 m resolution (black, light gray and dark gray areas are occupied, empty and unknown space, respectively). In a), the yellow circle on the right locates the problematic situation: after the second traversal, the first nodes of the graph are located exactly over the wall, making it impossible to plan a path (red arrow on the right) to return to the starting position. In b), proximity links are detected using only the laser scans, and the local map can then be correctly optimized.
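The candidate selection and segmentation steps above can be sketched as follows, assuming 2D node poses and undirected neighbor links; the function name, data structures and union-find segmentation are illustrative choices, not the authors' implementation.

```python
def proximity_groups(local_nodes, robot_xy, neighbor_links, radius_r, radius_l):
    """Select and segment close-by local-map nodes for proximity detection.

    local_nodes: {id: (x, y)} poses with STM nodes already excluded;
    neighbor_links: (id_a, id_b) pairs; radius_r and radius_l stand for
    the paper's R and L radii (with L < R).
    """
    def dist(node_id):
        x, y = local_nodes[node_id]
        return ((x - robot_xy[0]) ** 2 + (y - robot_xy[1]) ** 2) ** 0.5

    inside = {i for i in local_nodes if dist(i) <= radius_r}

    # Segment selected nodes into groups connected by neighbor links only
    # (a tiny union-find).
    parent = {i: i for i in inside}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in neighbor_links:
        if a in inside and b in inside:
            parent[find(a)] = find(b)

    groups = {}
    for i in inside:
        groups.setdefault(find(i), []).append(i)

    # Keep only groups whose nearest node is within L.
    return [g for g in groups.values() if min(dist(i) for i in g) <= radius_l]
```

Each returned group would then go through the two-step visual and laser-scan alignment described above.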
Fig. 5 Illustration of how proximity detection works. In a), the larger dashed circle represents the radius R used to determine close-by nodes, and the smaller dashed circle defined by L is used to limit the length of the links to be created. The empty dots are nodes for which the laser scans are not used, either because they are outside the radius R, too close to each other, or in STM. In b) and c), nodes within the radius R from the two segmented groups of nodes are processed for proximity detection. In d), proximity links are added (yellow) and, after graph optimization, the groups of nodes are connected together and the respective laser scans are now aligned.

3.4 Graph Optimization Module

TORO (Tree-based netwORk Optimizer) (Grisetti et al., 2007) is used for graph optimization. When loop closure and proximity links are added, the errors derived from odometry can be propagated to all links, thus correcting the local map. This also guarantees that nodes belonging to different maps are transformed into the same referential when loop closures are found.

When only one map exists, it is relatively straightforward to use TORO to create a tree because there is only one root. However, for multi-session mapping, each map has its own root with its own reference frame. When loop closures occur between the maps, TORO cannot optimize the graph if there are multiple roots. It may also be difficult to find a unique root when some of the nodes have been transferred to LTM. As a solution, our approach takes the root of the tree to be the latest node added to the local map, which is always uniquely defined across intra-session and inter-session mapping. All other poses in the graph are then optimized using the last odometry pose as the referential.
3.5 Path Planning Modules

Memory management has a significant effect on how to do path planning online using graph-based SLAM, for which the map changes at almost every iteration and only the local map is accessible while executing the plan. This differs from approaches that assume that the map is static and/or that all previously visited locations always remain in the map. In this paper, SPLAM-MM uses two path planners: a Metrical Path Planner (MPP) and a Topological Path Planner (TPP).
+ MPP receives a pose expressed in (x, y, θ) coordinates,
685
+ and uses the local map to plan a trajectory and to make
686
+ the robot move toward the targeted pose while avoid-
687
+ ing obstacles. Our MPP implementation exploits the
688
+ ROS navigation stack (Marder-Eppstein et al., 2010) to
689
+ compute trajectories expressed as a sequence of veloc-
690
+ ity commands (expressed as twists) sent to the robot’s
691
+ Motion Controller module. A global Costmap is used
692
+ to plan a trajectory to a targeted pose. MPP creates
693
+ the global Costmap from an occupancy grid created us-
694
+ ing the assembled laser scans from the latest local map.
695
+ Each time the local map is updated, the occupancy grid
696
+ is re-assembled and the trajectory is re-planned. MPP
697
+ also uses a local Costmap for its Dynamic Window Ap-
698
+ proach (DWA) (Fox et al., 1997) to handle dynamic
699
+ obstacles for collision avoidance. The local Costmap is
700
+ created directly from sensor readings. To create the lo-
701
+ cal Costmap, only using the laser rangefinder for obsta-
702
+ cle detection revealed to be insufficient: while the laser
703
+ range finder can detect most of the obstacles (e.g., walls,
704
+ people, table legs), it is located 40 cm above the floor
705
+ and all obstacles under this height cannot be detected.
706
+ Therefore, the depth image from the RGB-D camera
707
+ is also used to detect these small obstacles and to add
708
+ them to the local Costmap. Figure 6 shows an example
709
+ where combining laser scans and RGB-D data creates a
710
+ more robust and a safer local Costmap for navigation.
711
+ Note that segmentation of the point cloud generated
712
+ from the depth image is required to be able to add or
713
+ clear small dynamic obstacles below the RGB-D cam-
714
+ era. To segment the ground, all points with normal par-
715
+ allel to z-axis (up to an angle Z) are labeled as ground.
716
+ Then, all other points under a maximum height U are
717
+ labeled as obstacles. This method would also make the
718
+ robot capable of operating on uneven terrain.
719
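As a rough sketch of this ground/obstacle segmentation step, the following assumes a point cloud with precomputed unit surface normals; the function name and array layout are hypothetical, with the parameters Z and U mapped to the `z_max_angle` and `max_obstacle_height` arguments:

```python
import numpy as np

def segment_ground_obstacles(points, normals, z_max_angle=0.1,
                             max_obstacle_height=0.4):
    """Label each 3D point as ground, obstacle, or ignored.

    points:  (N, 3) array of x, y, z coordinates (z up, in meters).
    normals: (N, 3) array of unit surface normals.
    z_max_angle: maximum angle (rad) between a normal and the z-axis
                 for a point to count as ground (parameter Z).
    max_obstacle_height: points above this height are ignored
                         (parameter U).
    """
    # Angle between each unit normal and the z-axis: cos(angle) = |n_z|.
    cos_angle = np.abs(normals[:, 2])
    is_ground = cos_angle >= np.cos(z_max_angle)

    labels = np.full(len(points), "ignored", dtype=object)
    labels[is_ground] = "ground"
    # Non-ground points below the height limit are obstacles.
    is_obstacle = ~is_ground & (points[:, 2] <= max_obstacle_height)
    labels[is_obstacle] = "obstacle"
    return labels
```

Points labeled as obstacles would then be inserted into the local Costmap, while ground points can be used to clear it.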
3.5.2 Topological Path Planning Module

When TPP receives a goal identified by a node ID from a user (or a high-level module like a task planner, or in this paper the Patrol module), the global map is provided by the graph-based SLAM-MM module, and a topological path is computed to reach this goal. The topological path is a sequence of poses, expressed by their respective node IDs, to reach the goal. This step must be done offline or when the robot is not moving, because all nodes linked to the current local map should be retrieved from LTM to build the global map.

To choose which nodes to use for navigation, TPP computes a path from the current node to the goal node using the Dijkstra algorithm (Dijkstra, 1959). Dijkstra is chosen over A* to avoid global graph optimization, which is time consuming and would be required to know the distance to the goal needed by A*. Dijkstra can also be computed directly when fetching the global map from LTM. Similar to (Valencia et al., 2013), to avoid losing track of the planned path, TPP prefers paths traversed in the same direction (e.g., where the camera is facing the same direction as on the nodes of the path) over shortest paths. This increases localization confidence: loop closure detection and visual proximity detection are more reliable than proximity detection using only laser scans because of their double verification (3D visual word correspondences and 2D ICP transformation refinement). To embed this preference in Dijkstra, the search cost is angular-based instead of distance-based, i.e., it finds the path with the fewest orientation changes when traversing it in the forward direction.
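To illustrate this angular-based search cost, here is a minimal Dijkstra sketch where the edge weight is the orientation change between consecutive node poses rather than their distance; the graph and pose representations and the function name are assumptions for illustration, not the paper's actual data structures:

```python
import heapq
import math

def dijkstra_angular(graph, poses, start, goal):
    """Topological path search preferring paths with fewer
    orientation changes.

    graph: dict node_id -> iterable of neighbor node_ids.
    poses: dict node_id -> (x, y, theta) pose of the node.
    Returns the list of node IDs from start to goal, or None.
    """
    def angular_cost(a, b):
        # Orientation change to traverse the edge a -> b in the
        # forward direction, wrapped to [0, pi].
        diff = poses[b][2] - poses[a][2]
        return abs(math.atan2(math.sin(diff), math.cos(diff)))

    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            if nxt not in visited:
                heapq.heappush(
                    queue,
                    (cost + angular_cost(node, nxt), nxt, path + [nxt]))
    return None
```

With a distance-based weight, the same search would return the geometrically shortest path; the angular weight instead favors routes traversed facing the same direction as when they were mapped.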
Then, TPP selects the farthest node on the path in the local map and sends its pose to MPP. While MPP makes the robot navigate to its targeted pose, TPP indicates to the graph-based SLAM-MM module which upcoming nodes on the topological path are needed, expressed as a list of node IDs from the latest node reached on the path to the farthest node inside the radius R (to limit the size of the list). The required nodes are identified by the graph-based SLAM-MM module with Heuristic 2, either to remain in WM or to be retrieved from LTM to extend the local map. The maximum number of retrieved nodes per map update is limited to M because this operation is time consuming, as it needs to load nodes from LTM. M is set based on the hardware on which LTM is saved and according to the maximum velocity of the robot: for instance, if the robot is moving at the same speed as or slower than when it first traversed the same area, M = 1 would suffice to retrieve nodes on the path without having to slow down to wait for nodes not yet retrieved.

Long-Term Online Multi-Session Graph-Based SPLAM with Memory Management

Fig. 6 Example of obstacle detection using the laser rangefinder and the RGB-D camera. The red dots on the chair show what is detected using the laser rangefinder data. The cyan area is derived from the obstacle projection on the ground plane up to the robot's footprint radius, delimiting where the center of the robot should not enter to avoid collisions. In a), only the laser rangefinder data are used and the chair's wheels are not detected, making it unsafe for the robot to plan a path around the chair. In b), the point cloud generated from the camera's depth image is used and the chair's wheels are detected (shown by the orange dots), increasing the cyan area (and consequently the area the robot must avoid to prevent colliding with the chair). Illustration c) presents a view from the RGB-D camera where the segmented ground is shown in green and the obstacles in orange.
Extending the local map with nodes of the topological path is important for the robot to localize itself using the Appearance-based Loop Closure Detection module or the Proximity Detection module, making it able to follow the topological path appropriately. As the robot moves and new local maps are created, TPP always looks for the farthest node of the topological path that can be reached in the local map, to update the current pose sent to the MPP module. If new nodes on the topological path are retrieved from LTM, the farthest pose is sent to MPP. TPP also detects changes in the local map after graph optimization (e.g., when new loop closures are detected): if so, the updated position of the current pose is sent to MPP. Up to a ratio O of the WM size, nodes identified by the planner and located within the radius R of the robot's current position are immunized against transfer, with R being the sensor range.

Figure 7 presents an example of the interaction between MPP and TPP to reach a goal G. While the robot is moving, TPP always sends the pose P of the farthest node on the topological path (purple links) in the local map. An occupancy grid is assembled with the laser scans contained in the nodes of the local map. MPP uses this occupancy grid to plan a trajectory (yellow arrow) to P. To keep the WM size constant, as nodes on the path are retrieved from LTM, older nodes are transferred to LTM. To follow the path appropriately, proximity links are detected to correct the map as the robot moves, otherwise the situation explained by Fig. 4a would happen.
TPP iterates by sending poses until the node of the goal is reached (within a goal radius D expressed in m). Finally, situations where the environment has changed too much for proper localization must be taken into consideration. If no loop closures and proximity detections occur when following a path, a temporary link is added between the current node and the closest one on the path, so that the topological path is always linked to the current node in the local map. Without this link, if previous nodes between the current node and those of the topological path are transferred to LTM, the local map would be divided and the nodes of the path would not be in the local map anymore. This temporary link is removed when a new link is added between the current node and the closest one on the path, or when the goal is reached. If the robot has not reached the current pose sent to MPP after F iterations of SPLAM-MM (e.g., MPP cannot plan to the requested pose because of the presence of a new obstacle, or because the robot cannot localize itself on the path), TPP chooses another pose among the upcoming nodes and sends it to MPP. If none of the upcoming nodes can be reached, TPP fails and sends a status message to its connected modules so that they can be notified that the goal cannot be reached.
Fig. 7 Interaction between TPP and MPP for path planning. The goal is identified by the purple G. The topological path is shown with purple links. The dashed yellow arrow is the trajectory computed by MPP to the targeted pose designated by the yellow P. Light gray, dark gray and black areas of the occupancy grid represent free, unknown and occupied cells, respectively. Blue nodes are in WM, and red nodes are in LTM. Yellow links are proximity links.
3.6 Patrol Module

We implemented the Patrol module to generate navigation goals, referred to as waypoints, so that the robot is programmed to continuously patrol an area. The Patrol module receives waypoints as inputs and sends them successively to TPP. By examining TPP's status messages, Patrol can know when a goal is reached or if TPP has failed. Whether the status indicates that the goal is reached or not, the Patrol module sends the next waypoint, restarting from the first one once the whole list has been processed.
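The Patrol logic above amounts to a simple cycle over the waypoint list; a minimal sketch follows, where the class and callback names are hypothetical and `send_goal` stands in for the TPP goal interface:

```python
import itertools

class Patrol:
    """Cycle through waypoints, sending the next one whenever TPP
    reports that the current goal was reached or that it failed."""

    def __init__(self, waypoints, send_goal):
        self._waypoints = itertools.cycle(waypoints)
        self._send_goal = send_goal

    def start(self):
        self._send_goal(next(self._waypoints))

    def on_tpp_status(self, status):
        # Whether the goal was reached or TPP failed, move on to the
        # next waypoint; the cycle restarts the list automatically.
        if status in ("reached", "failed"):
            self._send_goal(next(self._waypoints))
```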
4 Results

Table 1 shows the parameters used for the trials1. The acquisition time A used is 1 sec (i.e., the map update rate is 1 Hz), which sets the maximum online time allowed to process each node added to the map. For the trials, T is set to 200 ms to limit CPU usage for SPLAM-MM to around 20%, to make sure that higher frequency modules (Sensor Data acquisition and MPP) can run at their fixed frequency of 10 Hz. The robot moves at relatively the same velocity during the trials, and therefore M is fixed to 2 to make sure that nodes on a planned path are retrieved fast enough to avoid having the robot wait for nodes still in LTM. All computations are done onboard, on a robot equipped with a 2.66 GHz Intel Core i7-620M and a 128 GB SSD hard drive (on which the LTM is saved).

1 In comparison with (Labbe and Michaud, 2013), T = T_time, S = T_STM and Y = T_similarity.
To define the area over which the robot had to patrol, during session 1 we first teleoperated the robot and defined four waypoints (WP1 to WP4). There were no people in the environment during the teleoperation phase. After reaching WP4, the autonomous navigation phase was initiated by sending the waypoints to the Patrol module. Figure 8 illustrates the four waypoints on the global map and the first trajectory planned by TPP (purple path) from the current position of the robot (WP4) to WP1. To come back to WP1, the robot had to follow the path in the opposite direction from when these nodes were created. Proximity detection made it able to follow the path appropriately. To see more clearly the effect of proximity links, Fig. 9 shows the
Table 1 Parameters used for the trials

  Acquisition time                     A   1 sec
  ICP correspondence ratio             C   0.3
  Radius of the goal area              D   0.5 m
  TPP iterations before failure        F   10
  Loop closure hypothesis threshold    H   0.11
  Minimum RANSAC visual word inliers   I   5
  Close nodes radius                   L   0.5 m
  Maximum retrieved close nodes        M   2
  Heuristic 2 close-by nodes ratio     O   0.25
  Laser scan range                     R   4 m
  STM size                             S   20
  Time limit                           T   200 ms
  Maximum obstacle height              U   0.4 m
  Similarity threshold                 Y   0.3
  Ground segmentation maximum angle    Z   0.1 rad

Fig. 8 Waypoints WP1 to WP4 identified on the global map. The purple path is the first path planned by TPP from WP4 to WP1.
maps after reaching WP1, with and without graph optimization. Navigation would not have been possible without proximity links: the local map would have looked like the map in (b) without the yellow links, because no appearance-based similarities would have been found with nodes from the map on the planned path. When reaching WP1, the Patrol module sends the next waypoint (WP2), making the robot continue patrolling.
Every 45 minutes or so of operation, the robot was manually shut down and moved to the battery charger near WP1. Once recharged, a new session of SPLAM-MM was initiated, creating a new node in STM with odometry reset, while preserving the nodes in WM and LTM. As the robot was initialized in the area of WP1 for each session, loop closures were found, connecting and optimizing the new map with nodes created from previous sessions, and allowing the Patrol module to provide waypoints as navigation goals to patrol the area. Overall, 11 indoor mapping sessions were conducted, for a total distance of 10.5 km over 7.5 hours of operation spread over two weeks. The robot did 111 patrolling cycles (i.e., traversing from WP1 through WP2, WP3 and WP4, and coming back to WP1). The sessions were conducted during office hours, with people walking by. A total of 139 people were encountered by the robot while patrolling. Figure 10 illustrates the dynamic conditions and some of the obstacles that the robot had to deal with during the trials.

Fig. 9 Global maps, optimized and not optimized, after reaching WP1. Yellow and red links are proximity and loop closure links, respectively.
The main goal of the trials is to see how SPLAM is influenced by memory management over long-term operation, with only the local map available for online processing. This can be illustrated by looking at the influences of memory management on SPLAM, the interactions between TPP and MPP, and the influences of LTM on TPP. As the robot is continuously adding new nodes, the trials also demonstrate how SPLAM-MM works in an unbounded environment.
4.1 Influences of MM on SPLAM

Figure 11 shows a typical navigation result when reaching the time limit T, thus limiting the size of the local map used for online navigation. This example shows the path planned between WP4 and WP1 after 4.7 hours of operation. The local maps used for online planning, localization and mapping are shown for different time steps along the trajectory. At t = 17031 sec, the planned path had 67 nodes and was 33 m long. It took 1.3 sec for TPP to generate it and to send the first pose on the path to MPP. The laser scan range R delimits the upcoming nodes on the path provided by TPP. As the robot navigates in the environment, the farthest available pose in the local map on the path (end of the cyan line) is sent from TPP to MPP. Upcoming nodes, if they are not in WM, are retrieved to make the robot able to localize itself (through loop closures and proximity detections) on the path. Looking at how the local map changes in these snapshots, notice how, starting from t = 17075 sec, the initial portion of the path is transferred to LTM to keep the size of the WM relatively constant. At t = 17108 sec, the robot reached WP1.

Fig. 10 Events that occurred during the trials: a) open and closed doors between traversals; b) camera exposure that led to the extraction of different visual features, making it difficult to find loop closures; c) someone opening a door while the robot is navigating; d) people walking around or blocking the robot; e) featureless images on which loop closure detection cannot work.

Fig. 11 Example of the effect of memory management when travelling from WP4 to WP1 after 4.7 hours of operation. The planned path is shown in purple. The small colored icon represents the robot position at each time step (t = 17031 to 17108 sec). The dotted circle around the robot position illustrates the laser scan range R. The cyan lines represent the upcoming nodes on the planned path.
Figure 12 compares the images between each waypoint and the final position of the robot at the waypoints. The robot successfully reached the waypoints (within D as the goal radius) 445 out of 446 times. For WP2, WP3 and WP4, the robot always came from behind the waypoint, and as soon as the robot reached the waypoint within a D radius, TPP detected that the goal was reached. This explains why all the poses are behind the waypoints but inside the goal radius D. Similarly, for WP1, the robot came from behind from a slightly different direction. Spurious poses on the right part of the circle are those where an obstacle caused the robot to avoid it, making it reach the waypoint from a different direction. The one time the robot failed to reach a waypoint is because someone blocked the robot for a long time, making TPP fail after F attempts at reaching the upcoming nodes: a failure status message was then sent to the Patrol module to provide the next waypoint. The person left soon after the next waypoint was sent, and the robot reached the new waypoint provided.
Figure 13 illustrates the evolution of the number of nodes in WM and the online processing time over the 11 mapping sessions. Processing time includes all SPLAM-MM modules except MPP, which was running concurrently in a separate process (its processing time depends only on the local map size). As explained in Section 3.5.2, TPP runs offline and only when a new goal is received from the Patrol module, and is examined in Section 4.3. Fig. 13a illustrates that the number of nodes in WM and in the local map was identical until the time limit T was reached. After that, nodes were transferred to LTM to limit the WM size for online processing, which is satisfied as shown by Fig. 13b. Processing time also remained well under the acquisition time A.
4.2 TPP-MPP Interactions

To illustrate the situation described in Fig. 7 with a concrete example, Fig. 14 presents consecutive poses sent by TPP to MPP while nodes from LTM are retrieved for the planned path. The red arrow shows the pose of the farthest node on the path (the direction of the arrow shows the orientation of the pose). The red line represents the trajectory computed by MPP from the current position of the robot to its targeted pose, combined with obstacle avoidance. The blue lines represent the local map. In Fig. 14a, the targeted pose is on a node traversed backward (as shown by the arrow pointing backward). Between a) and b), the local map was updated with nodes of the topological path loaded from LTM. The targeted pose was updated farther along the path and, at the same time, the occupancy grid was extended to previously mapped areas and MPP recomputed its trajectory. The robot could then move farther toward its goal, and the retrieved nodes were used for proximity detection to correctly follow the planned path.

To also illustrate the importance of the obstacle detection described in Fig. 6, Fig. 15 presents an example where an unexpected obstacle was encountered: as the laser rangefinder is 0.4 m above the ground, the forklift could only be detected using the RGB-D camera. MPP planned a slightly different path (orange) than the one planned by TPP (pink) to avoid the obstacle.
4.3 Influences of LTM on TPP

Although Fig. 13 demonstrates that SPLAM-MM is able to satisfy online constraints on a map increasing linearly in size (i.e., not bounded to a maximum environment size), the memory used by LTM, and consequently TPP planning time, increase linearly. For example, at the end of the experiment, LTM contained 24002 nodes and 113368 links. All raw sensor data in the nodes were also saved in the LTM's database (for debugging and visualization purposes), including the RGB image (JPEG format) and depth image (PNG format) of each node. The final database took 6.7 GB of hard drive space. With that many links at the end of the experiment, TPP required 2.4 sec to compute a plan to the next waypoint. In terms of memory usage and planning time, LTM growth must therefore be somewhat limited over time when revisiting the same areas.

As a solution to limit LTM memory growth, nodes from STM can be merged when moved to WM if they have loop closure and/or visual proximity links. We studied this possibility by adding a graph reduction algorithm to STM, which removes the node from the graph and adds its neighbor links to the corresponding old node(s). Algorithm 1 summarizes the approach used to keep the graph at the same size (the same number of links and nodes removed as added) when there are many successive nodes with loop closure or visual proximity links. If two nodes of the same location do not have similar images (i.e., they do not have loop closure or visual proximity links), they are not merged, thus still keeping a variety of different images representing the same location. To make sure nodes to be merged are still in WM (to avoid modifying the LTM), nodes having a link to a node in STM are identified as nodes that must stay in WM (similarly to Heuristic 2). Figure 16 shows how links are merged between the node moved to WM and its corresponding node(s) linked by a loop closure link. In a), the purple node has two loop closure links. On graph reduction, its two neighbor links (blue) are merged with the loop closure links (red) by multiplying the corresponding transformations together, creating merged neighbor links (orange). In this case, the same number of links are added as are removed, but one node is removed. In b), the green node has only one neighbor link (with the cyan node), so the loop closure link is merged only with it, creating one link while four are removed. Merged neighbor links are not merged again, to limit the number of links. In c), the cyan node does not have any loop closure and no graph reduction is done.
To test this idea, data from the 11 sessions were processed again to evaluate the influence of the graph reduction approach using the real data acquired by the robot. Note that even though graph reduction was validated offline, we carefully monitored the experiment manually to make sure that the robot could still localize itself correctly on the planned paths.

Fig. 12 Comparison of the corresponding images between the waypoint (left image) and the last pose reached on one of the planned paths (right image) for each waypoint. The top view grid shows the laser scan readings and referentials of the waypoint's node (at the origin of the grid) and the final node. The zoomed portions represent the final poses of the robot (represented by blue dots) for all paths planned for each waypoint. The circle represents the goal radius D, and the grid's cells used for visualization have a width of 1 m.

Fig. 13 Memory size and total processing time over the 11 mapping sessions: (a) number of nodes in WM and in the local map; (b) processing time (the horizontal line represents T = 0.2 sec).

Fig. 14 Example of poses sent by TPP to MPP while nodes from LTM are retrieved for the planned path. The goal of the path is somewhere outside these images, in the direction shown by Goal. The bottom left image shows the actual RGB image from the RGB-D camera. The blue lines are the nodes and links of the local map. The red line is the trajectory computed by MPP using the local map's occupancy grid from its current pose (red arrow). The RGB point cloud and the occupancy grid are created using the RGB-D images and laser scans stored in nodes of the local map, respectively. In a), the robot is following the red trajectory. In b), some nodes are retrieved from LTM and a new trajectory is computed to move farther on the path toward the goal.

Algorithm 1 Graph Reduction
 1: o ← node moved to WM
 2: m ← loop closure and visual proximity links of o
 3: if m is not empty then
 4:   n ← neighbor links of o
 5:   for all m in m do
 6:     om ← node pointed by m
 7:     for all n in n do
 8:       on ← node pointed by n
 9:       t ← m^-1 · n
10:       Add t to om
11:       Add t^-1 to on
12:     end for
13:   end for
14:   Remove o from the graph
15: end if
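A minimal sketch of the link-merging step of Algorithm 1 follows, assuming links are stored as 4x4 homogeneous transforms; the `graph` layout and function name are assumptions for illustration, not the actual implementation:

```python
import numpy as np

def reduce_node(graph, node_id):
    """Merge a node's neighbor links through its loop closure /
    visual proximity links, then remove the node (Algorithm 1).

    graph: dict node_id -> {"neighbor": {other_id: T},
                            "closure":  {other_id: T}}
    where T is the 4x4 transform from node_id to other_id.
    Returns True if the node was merged, False if it was kept.
    """
    closures = graph[node_id]["closure"]
    if not closures:
        return False  # no loop closure or proximity link: keep node
    neighbors = graph[node_id]["neighbor"]
    for m_id, m in closures.items():
        for n_id, n in neighbors.items():
            # Merged neighbor link: transform from m_id to n_id
            # passing through node_id (t = m^-1 * n).
            t = np.linalg.inv(m) @ n
            graph[m_id]["neighbor"][n_id] = t
            graph[n_id]["neighbor"][m_id] = np.linalg.inv(t)
    # Remove the merged node and all links pointing to it.
    del graph[node_id]
    for entry in graph.values():
        entry["neighbor"].pop(node_id, None)
        entry["closure"].pop(node_id, None)
    return True
```

After the merge, a path that previously traversed the removed node can go directly through the merged neighbor links, keeping the graph size constant for revisited locations.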
+ Figure 17 shows a comparison of the final global
1415
+ map without and with graph reduction. The zones with
1416
+ Fig. 15 Example where MPP plans a slightly different path
1417
+ (orange) than the one provided by TPP (pink). The yellow
1418
+ dot is the current position of the robot and the lower right
1419
+ image is the corresponding RGB image.
1420
+ STM
1421
+ WM
1422
+ Graph Reduction
1423
+ STM to WM
1424
+ WM
1425
+ STM
1426
+ a)
1427
+ b)
1428
+ c)
1429
+ Fig. 16 Three examples illustrating how the graph reduc-
1430
+ tion algorithm works. Blue, red and orange links represent
1431
+ neighbor, loop closure and merged neighbor links, respec-
1432
+ tively. Black links and white nodes are those removed using
1433
+ graph reduction. The left column shows the rightmost node
1434
+ (the oldest) of STM moved to WM. Then on the right column,
1435
+ this node is removed if it has a loop closure link.
1436
+ less blue links indicate that there were many nodes
1437
+ merged. The zones with more blue links are where nodes
1438
+ were not merged, because of a lack of features or be-
1439
+ cause of obstacles: the robot was not able to localize
1440
+ itself perfectly on the paths every time, thus adding
1441
+ new nodes to the map.
1442
+ Figure 18 illustrates TPP planning time correspond-
1443
+ ing to LTM size with and without graph reduction. As
1444
+ the LTM became larger, TPP planning time increased:
1445
+ with graph reduction, TPP planning time was reduced
1446
+ by 89% for the last path planned (272 ms instead of 2.4
1447
+
1448
+ 16
1449
+ Mathieu Labb´e, Fran¸cois Michaud
1450
+ a)!
1451
+ b)!
1452
+ Fig. 17 Comparison between the global maps a) without
1453
+ graph reduction (24002 nodes and 113368 links); b) with
1454
+ graph reduction (6059 nodes and 18255 links).
1455
+ sec). Figure 19 illustrates hard drive usage with and
1456
+ without graph reduction. Extrapolating linearly mem-
1457
+ ory usage with a 100 Gb hard drive, the robot could
1458
+ navigate online approximately 110 hours without graph
1459
+ reduction before filling up the hard drive. When debug-
1460
+ ging data (not used for navigation) are not recorded in
1461
+ the database, this estimate would increase to approx-
1462
+ imately 33 days (800 hours). This means that if the
1463
+ robot is always visiting new locations at a mean velocity
1464
+ of 1.4 km/h (as in this experiment), it could travel up
1465
+ to 1120 km to map environments online. When graph
1466
+ reduction is used, debugging data are not saved and
1467
+ having the robot always revisiting the same areas like
1468
+ in this experiment, it could do SPLAM continuously for
1469
+ about 130 days before reaching the hard drive capacity.
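The storage extrapolations above can be checked with a quick back-of-envelope calculation (figures taken from the text; linear growth of hard drive usage is assumed):

```python
# Figures reported in the experiment; linear growth of disk usage is assumed.
disk_gb = 100.0                   # hard drive capacity
hours_without_debug = 800         # runtime until full, debug data discarded
speed_kmh = 1.4                   # mean velocity in this experiment

days = hours_without_debug / 24                    # ~33 days of continuous mapping
distance_km = hours_without_debug * speed_kmh      # new areas mapped before full
rate_gb_per_hour = disk_gb / hours_without_debug   # implied growth rate (GB/h)

print(round(days), round(distance_km), rate_gb_per_hour)
```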
1470
+ 5 Discussion
1471
+ In terms of processing time, results show that SPLAM-
1472
+ MM is able to satisfy online processing requirements in-
1473
+ dependently of the size of the environment, by transfer-
1474
+ ring to LTM portions of the map, which then cannot be
1475
+ used for loop closure detection, proximity detection and
1476
+ graph optimization. Results also show that path fol-
1477
+ lowing is still possible in such conditions by incremen-
1478
+ tally retrieving locations on the planned path. Thus, as
1479
+ shown in Section 4.3, the current hardware limitation
1480
+ of the system for long-term continuous SPLAM is hard
1481
+ drive capacity, not computation power.
1482
+ [Figure 18 plot: TPP planning time (ms) and graph size (nodes) vs. node indexes, with a zoomed inset]
1522
+ Fig. 18 Comparison of TPP planning time and LTM size,
1523
+ with (blue) and without (red) graph reduction. The peaks in
1524
+ the zoomed section show more precisely when planning is
1525
+ done (when a waypoint is reached).
1526
+ [Figure 19 plot: hard drive usage (MB) vs. time (h); dashed: raw data discarded]
1548
+ Fig. 19 Comparison of hard drive usage with (blue) and
1549
+ without (red) graph reduction. The dashed curves represent
1550
+ results without saving the debugging data in the database (i.e.,
1551
+ raw RGB and depth images).
1552
+ To successfully follow a path, results demonstrate
1553
+ the importance of adding loop closure and/or proxim-
1554
+ ity links with nodes on the planned path to localize the
1555
+ robot in the map. In our trials, the robot navigated in-
1556
+ doors, where static structures (e.g., walls) were most of
1557
+ the time visible using the laser rangefinder. However, in
1558
+ large empty spaces where the laser rangefinder would
1559
+ not be able to perceive nearby structures, it would be
1560
+ difficult for the robot to follow a path if appearance-
1561
+ based loop closure detection and visual proximity de-
1562
+ tection do not occur. A laser rangefinder with larger
1563
+ perceptual range or a 3D LIDAR sensor like the Velo-
1564
+ dyne could be used to increase perceptual range. For
1565
+ a lower cost solution, using a camera facing backward
1566
+ could be useful to allow the robot to detect similari-
1567
+ ties in images when traversing a path in the opposite direc-
1568
+ tion (Carrera et al., 2011). Without adding new sensors,
1569
+ TPP could also stop sending new poses when no loop
1570
+ closure links or proximity links occur for a while. If no
1571
+
1572
+ Long-Term Online Multi-Session Graph-Based SPLAM with Memory Management
1573
+ 17
1574
+ loop closures were found over the next few meters, it
1575
+ would be possible to wait for the robot to rotate at
1576
+ this location so that it can look backward, increasing
1577
+ its chance to detect a loop closure to correct its po-
1578
+ sition on the planned path and then generate a new
1579
+ pose. A similar recovery approach is presented in (Mil-
1580
+ ford and Wyeth, 2010), where an exploration phase is
1581
+ triggered to re-localize the robot when failing to follow
1582
+ the planned path. Also, to be more robust to dynamic
1583
+ environments where there are cyclic changes over time,
1584
+ TPP could select nodes that match better the current
1585
+ time of the day rather than the most recent ones, to in-
1586
+ crease localization success as in (Krajník et al., 2016).
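As a rough illustration of the first strategy (withholding new poses when localization links stop occurring), the planner-side logic could look like the sketch below. The function interface and the distance threshold are hypothetical, not part of the described system:

```python
# Hypothetical sketch: TPP withholds the next pose once no loop closure or
# proximity link has been added for more than `max_dist` metres, so the robot
# can rotate in place and try to re-localize before continuing on the path.
def next_pose(path, i, dist_since_last_link, max_dist=5.0):
    if dist_since_last_link > max_dist:
        return None                            # hold / rotate until re-localized
    return path[min(i + 1, len(path) - 1)]     # otherwise advance along the path
```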
1587
+ In comparison with large empty environments, those
1588
+ in which a lot of dynamic changes occur (e.g., navigat-
1589
+ ing through a crowd) would also make simultaneous
1590
+ planning and localization more difficult. For instance,
1591
+ mapping the area in session 1 without people walk-
1592
+ ing by helped the robot acquire the static structures
1593
+ of the environment since they were not hidden by peo-
1594
+ ple. These static structures facilitate localization when
1595
+ the robot comes back to these areas later on. If these
1596
+ static structures were previously occluded, they would
1597
+ be added to the map as the robot comes back to these
1598
+ areas (obviously if people are no longer in the robot’s
1599
+ field of view). If people partially occlude the robot’s
1600
+ sensors over a long distance, localization would still be
1601
+ possible but would occur less frequently.
1602
+ For online multi-session mapping with our memory
1603
+ management approach, the worst case is when all nodes
1604
+ of a previous map are transferred to LTM before a loop
1605
+ closure is detected (Labbe and Michaud, 2013). This
1606
+ results in permanently ignoring the previous map and dis-
1607
+ abling at the same time the ability to plan paths to
1608
+ a location in it. To avoid this problem, an additional
1609
+ heuristic could be to keep in WM at least one discrim-
1610
+ inative node for each map. However, if the number of
1611
+ mapping sessions becomes very high (e.g., thousands of
1612
+ sessions), these nodes would eventually have to be trans-
1613
+ ferred to LTM to satisfy online processing requirements.
1614
+ A strategy that makes the robot explore potential paths
1615
+ to link maps together would then be useful, and maps
1616
+ that could not be linked would eventually be unretriev-
1617
+ able.
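The additional heuristic suggested above could be sketched as follows. This is an illustrative sketch with hypothetical structures, not the actual memory management code:

```python
# Illustrative sketch: when selecting WM nodes to transfer to LTM, pin the
# most recent node of each mapping session so every session stays reachable
# for loop closure detection and path planning.
def select_transferable(wm_nodes, pin_per_session=True):
    """wm_nodes: list of (node_id, session_id) ordered from oldest to newest."""
    newest = {}
    for node_id, session in wm_nodes:
        newest[session] = node_id            # last write wins: newest per session
    pinned = set(newest.values()) if pin_per_session else set()
    return [n for n, _ in wm_nodes if n not in pinned]
```

As noted in the text, with thousands of sessions even the pinned nodes would eventually have to be transferred.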
1618
+ In the trials conducted, no invalid loop closures were
1619
+ detected, avoiding corruption of the map by erroneous
1620
+ loop closure links. If this happens, graph optimization
1621
+ approaches such as (Latif et al., 2013; Sunderhauf and
1622
+ Protzel, 2012; Lee et al., 2013) deal with possible invalid
1623
+ matches, and could be used to increase robustness of
1624
+ SPLAM-MM. However, these approaches assume that
1625
+ the whole global map is available online, which is not
1626
+ the case here. They could still be used offline at the end
1627
+ of a session.
1628
+ As shown by Fig. 15, MPP in SPLAM-MM allows
1629
+ the robot to find an alternative path to reach the tar-
1630
+ geted pose when possible. However, if the alternative
1631
+ path is outside the local map, re-planning with TPP is
1632
+ required. Some paths may also be blocked temporarily or
1633
+ permanently by some dynamic or new static obstacles.
1634
+ An approach similar to (Konolige et al., 2011) could be
1635
+ used to identify some links as blocked so that TPP can-
1636
+ not plan a path using them. The Patrol module could
1637
+ also manage waypoints that can and cannot be reached.
1638
+ Finally, the graph reduction approach can reduce
1639
+ significantly the number of nodes and links saved in
1640
+ LTM to reduce TPP planning time. However, because
1641
+ of dynamic events or the lack of features (e.g., Fig.
1642
+ 10e), new nodes and links will inevitably be added to
1643
+ LTM over time when revisiting the same areas. As an
1644
+ improvement, nodes with featureless images could be
1645
+ merged through a maximum density threshold like in
1646
+ (Milford and Wyeth, 2010), as they cannot be used for
1647
+ loop closure detection. After applying graph reduction
1648
+ on the experimental data, 3068 of the 6059 nodes in
1649
+ the global graph are still featureless; merging them would
1650
+ reduce the remaining graph by about 50%. However,
1651
+ even by limiting the rate at which the LTM grows, a
1652
+ continuous SLAM approach in unbounded dynamic en-
1653
+ vironments will always add new data over time. A com-
1654
+ plementary strategy would be to permanently forget some
1655
+ parts of the global map, at the cost of not being able
1656
+ to return to some locations.
1657
+ 6 Conclusion
1658
+ By limiting the nodes of the map available online in
1659
+ WM for loop closure detection, proximity detection and
1660
+ graph optimization, results presented in this paper sug-
1661
+ gest that the proposed graph-based SPLAM-MM ap-
1662
+ proach is able to meet online processing requirements
1663
+ needed for simultaneous mapping, localizing and plan-
1664
+ ning in multi-session conditions. SPLAM-MM is tightly
1665
+ based on appearance-based loop closure detection, al-
1666
+ lowing it to naturally deal with the initial state prob-
1667
+ lem of multi-session mapping. To successfully localize
1668
+ on a planned path through areas previously transferred
1669
+ in LTM, memory management allows SPLAM-MM to
1670
+ deal with the necessity of retrieving upcoming nodes on
1671
+ the path in WM. Our code is open source and available
1672
+ at http://introlab.github.io/rtabmap.
1673
+ In future works, more robust failure recovery ap-
1674
+ proaches will be examined to test SPLAM-MM in dy-
1675
+ namic environments where the paths could often be
1676
+ blocked (temporarily or permanently). We also plan to
1677
+
1678
1680
+ study the impact of autonomous coverage and explo-
1681
+ ration strategies, especially how they can actively direct
1682
+ exploration based on nodes available for online map-
1683
+ ping. This would also be useful for conducting longer ex-
1684
+ periments at larger scale.
1685
+ References
1686
+ Atkinson R, Shiffrin R (1968) Human memory: A pro-
1687
+ posed system and its control processes. In: Psychol-
1688
+ ogy of Learning and Motivation: Advances in Re-
1689
+ search and Theory, vol 2, Elsevier, pp 89–195
1690
+ Baddeley A (1997) Human Memory: Theory and Prac-
1691
+ tice. Psychology Press
1692
+ Bay H, Ess A, Tuytelaars T, Gool LV (2008) Speeded
1693
+ Up Robust Features (SURF). Computer Vision and
1694
+ Image Understanding 110(3):346–359
1695
+ Besl PJ, McKay ND (1992) Method for registration of
1696
+ 3-D shapes. In: Robotics-DL tentative, International
1697
+ Society for Optics and Photonics, pp 586–606
1698
+ Biber P, Duckett T, et al. (2005) Dynamic maps for
1699
+ long-term operation of mobile service robots. In:
1700
+ Robotics: Science and Systems, pp 17–24
1701
+ Carrera G, Angeli A, Davison AJ (2011) Lightweight
1702
+ SLAM and navigation with a multi-camera rig. In:
1703
+ European Conference on Mobile Robots, pp 77–82
1704
+ Churchill W, Newman P (2012) Practice makes per-
1705
+ fect? Managing and leveraging visual experiences for
1706
+ lifelong navigation. In: Proc. IEEE Int. Conf. on
1707
+ Robotics and Automation, pp 4525–4532
1708
+ Dijkstra EW (1959) A note on two problems in connex-
1709
+ ion with graphs. Numerische Mathematik 1(1):269–
1710
+ 271
1711
+ Ferland F, Clavien L, Frémy J, Letourneau D, Michaud
1712
+ F, Lauria M (2010) Teleoperation of AZIMUT-3, an
1713
+ omnidirectional non-holonomic platform with steer-
1714
+ able wheels. In: Proc. IEEE/RSJ Int. Conf. on Intel-
1715
+ ligent Robots and Systems, pp 2515–2516
1716
+ Folkesson J, Christensen HI (2007) Closing the loop
1717
+ with graphical SLAM. IEEE Trans on Robotics
1718
+ 23(4):731–41
1719
+ Fox D, Burgard W, Thrun S (1997) The dynamic win-
1720
+ dow approach to collision avoidance. IEEE Robotics
1721
+ & Automation Magazine 4(1):23–33
1722
+ Garcia-Fidalgo E, Ortiz A (2015) Vision-based topo-
1723
+ logical mapping and localization methods: A survey.
1724
+ Robotics and Autonomous Systems 64:1 – 20
1725
+ Glover AJ, Maddern WP, Milford MJ, Wyeth GF
1726
+ (2010) FAB-MAP + RatSLAM: Appearance-based
1727
+ SLAM for multiple times of day. In: Proc. IEEE Int.
1728
+ Conf. on Robotics and Automation, pp 3507–3512
1729
+ Grisetti G, Grzonka S, Stachniss C, Pfaff P, Burgard
1730
+ W (2007) Efficient estimation of accurate maximum
1731
+ likelihood maps in 3D. In: Proc. IEEE/RSJ Int. Conf.
1732
+ on Intelligent Robots and Systems, pp 3472–3478
1733
+ Grisetti G, Kümmerle R, Stachniss C, Burgard W
1734
+ (2010) A tutorial on graph-based SLAM. IEEE Intel-
1735
+ ligent Transportation Systems Magazine 2(4):31–43
1736
+ Ho KL, Newman P (2006) Loop closure detection in
1737
+ SLAM by combining visual and spatial appearance.
1738
+ Robotics and Autonomous Systems 54(9):740–749
1739
+ Johannsson H, Kaess M, Fallon M, Leonard J (2013)
1740
+ Temporally scalable visual SLAM using a reduced
1741
+ pose graph. In: Proc. IEEE Int. Conf. on Robotics
1742
+ and Automation, pp 54–61
1743
+ Kim B, Kaess M, Fletcher L, Leonard J, Bachrach A,
1744
+ Roy N, Teller S (2010) Multiple relative pose graphs
1745
+ for robust cooperative mapping. In: Proc. IEEE Int.
1746
+ Conf. on Robotics and Automation, pp 3185–3192
1747
+ Konolige K, Bowman J (2009) Towards lifelong visual
1748
+ maps. In: Proc. IEEE/RSJ Int. Conf. on Intelligent
1749
+ Robots and Systems, pp 1156–1163
1750
+ Konolige K, Marder-Eppstein E, Marthi B (2011) Nav-
1751
+ igation in hybrid metric-topological maps. In: Proc.
1752
+ IEEE Int. Conf. on Robotics and Automation, pp
1753
+ 3041–3047
1754
+ Krajník T, Fentanes JP, Hanheide M, Duckett T
1755
+ (2016) Persistent localization and life-long mapping
1756
+ in changing environments using the frequency map
1757
+ enhancement. In: Proc. IEEE/RSJ Int. Conf. on In-
1758
+ telligent Robots and Systems, pp 4558–4563
1759
+ Kummerle R, Grisetti G, Strasdat H, Konolige K, Bur-
1760
+ gard W (2011) g2o: A general framework for graph
1761
+ optimization. In: Proc. IEEE Int. Conf. on Robotics
1762
+ and Automation, pp 3607–3613
1763
+ Labbe M, Michaud F (2013) Appearance-based loop
1764
+ closure detection for online large-scale and long-term
1765
+ operation. IEEE Trans on Robotics 29(3):734–745
1766
+ Labbe M, Michaud F (2014) Online global loop closure
1767
+ detection for large-scale multi-session graph-based
1768
+ SLAM. In: Proc. IEEE/RSJ Int. Conf. on Intelligent
1769
+ Robots and Systems, pp 2661–2666
1770
+ Latif Y, Cadena C, Neira J (2013) Robust loop closing
1771
+ over time for pose graph SLAM. Int J of Robotics
1772
+ Research 32(14):1611–1626
1773
+ Lee GH, Fraundorfer F, Pollefeys M (2013) Ro-
1774
+ bust
1775
+ pose-graph
1776
+ loop-closures
1777
+ with
1778
+ expectation-
1779
+ maximization. In: Proc. IEEE/RSJ Int. Conf. on In-
1780
+ telligent Robots and Systems, pp 556–563
1781
+ Marder-Eppstein E, Berger E, Foote T, Gerkey B,
1782
+ Konolige K (2010) The Office Marathon: Robust nav-
1783
+ igation in an indoor office environment. In: Proc.
1784
+ IEEE Int. Conf. on Robotics and Automation, pp
1785
+ 300–307
1786
+ McDonald J, Kaess M, Cadena C, Neira J, Leonard J
1787
+ (2012) Real-time 6-DOF multi-session visual SLAM
1788
+
1789
1791
+ over large scale environments. Robotics and Au-
1792
+ tonomous Systems 61(10):1144–58
1793
+ Milford M, Wyeth G (2010) Persistent navigation and
1794
+ mapping using a biologically inspired SLAM system.
1795
+ Int J of Robotics Research 29(9):1131–53
1796
+ Muja M, Lowe DG (2009) Fast approximate nearest
1797
+ neighbors with automatic algorithm configuration.
1798
+ In: Proc. Int. Conf. on Computer Vision Theory and
1799
+ Application, pp 331–340
1800
+ Pirker K, Ruther M, Bischof H (2011) CD SLAM –
1801
+ Continuous localization and mapping in a dynamic
1802
+ world. In: Proc. IEEE/RSJ Int. Conf. on Intelligent
1803
+ Robots and Systems, pp 3990–3997
1804
+ Rusu RB, Cousins S (2011) 3D is here: Point Cloud
1805
+ Library (PCL). In: Proc. IEEE Int. Conf. on Robotics
1806
+ and Automation, Shanghai, China, pp 1–4
1807
+ Sivic J, Zisserman A (2003) Video Google: A text re-
1808
+ trieval approach to object matching in videos. In:
1809
+ Proc. 9th Int. Conf. on Computer Vision, Nice,
1810
+ France, pp 1470–1478
1811
+ Stachniss C (2009) Robotic Mapping and Exploration,
1812
+ vol 55. Springer Science & Business Media
1813
+ Sunderhauf N, Protzel P (2012) Towards a robust back-
1814
+ end for pose graph SLAM. In: Proc. IEEE Int. Conf.
1815
+ on Robotics and Automation, pp 1254–1261
1816
+ Thrun S, Burgard W, Fox D (2005) Probabilistic
1817
+ Robotics. The MIT Press
1818
+ Valencia R, Morta M, Andrade-Cetto J, Porta JM
1819
+ (2013) Planning reliable paths with Pose SLAM.
1820
+ IEEE Trans on Robotics 29(4):1050–1059
1821
+ Walcott-Bryant A, Kaess M, Johannsson H, Leonard
1822
+ JJ (2012) Dynamic pose graph SLAM: Long-term
1823
+ mapping in low dynamic environments. In: Proc.
1824
+ IEEE/RSJ Int. Conf. on Intelligent Robots and Sys-
1825
+ tems, pp 1871–1878
1826
+
7dAyT4oBgHgl3EQfQvYd/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
7dE2T4oBgHgl3EQfPQbU/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:766f82d1f62b9bb2a8c7062bad8b17511a58128f8c8d85be34c2f0023e798acb
3
+ size 5308461
89FLT4oBgHgl3EQfBi6R/content/tmp_files/2301.11971v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
89FLT4oBgHgl3EQfBi6R/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
8NFLT4oBgHgl3EQfsy_c/content/tmp_files/2301.12149v1.pdf.txt ADDED
@@ -0,0 +1,1800 @@
1
+ POSTER V2: A simpler and stronger facial expression recognition network
2
+ Jiawei Mao†
3
+ Rui Xu†
4
+ Xuesong Yin*
5
+ Yuanqi Chang
6
+ Binling Nie
7
+ Aibin Huang∗
8
+ School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
9
+ {jiaweima0,211330017,yinxs,yuanqichang,binlingnie,huangaibin}@hdu.edu.cn
10
+ Abstract
11
+ Facial expression recognition (FER) plays an impor-
12
+ tant role in a variety of real-world applications such as
13
+ human-computer interaction.
14
+ POSTER V1 achieves the
15
+ state-of-the-art (SOTA) performance in FER by effectively
16
+ combining facial landmark and image features through
17
+ two-stream pyramid cross-fusion design.
18
+ However, the
19
+ architecture of POSTER V1 is undoubtedly complex,
20
+ which
21
+ incurs expensive computational costs. In order to relieve
22
+ the computational pressure of POSTER V1, in this pa-
23
+ per, we propose POSTER V2.
24
+ It improves POSTER V1
25
+ in three directions: cross-fusion, two-stream, and multi-
26
+ scale feature extraction. In cross-fusion, we use a window-
27
+ based cross-attention mechanism to replace the vanilla cross-
28
+ attention mechanism. We remove the image-to-landmark
29
+ branch in the two-stream design. For multi-scale feature
30
+ extraction, POSTER V2 combines the multi-scale features of
31
+ images and landmarks to replace POSTER V1’s pyramid de-
32
+ sign. Extensive experiments on several standard datasets
33
+ show that our POSTER V2 achieves the SOTA FER perfor-
34
+ mance with the minimum computational cost. For exam-
35
+ ple, POSTER V2 reached 92.21% on RAF-DB, 67.49% on
36
+ AffectNet (7 cls) and 63.77% on AffectNet (8 cls), respec-
37
+ tively, using only 8.4G floating point operations (FLOPs)
38
+ and 43.7M parameters (Param). This demonstrates the ef-
39
+ fectiveness of our improvements. The code and models are
40
+ available at https://github.com/Talented-Q/
41
+ POSTER_V2.
42
+ 1. Introduction
43
+ With the continuous development of technology and
44
+ the continuous improvement of automation, the need
45
+ for human-computer interaction is becoming increasingly
46
+ strong.
47
+ Facial expression recognition (FER) helps ma-
48
+ chines to understand human emotions from facial expres-
49
+ sions. This makes it a core task for human-computer in-
50
+ teraction. Besides, with its powerful expression understand-
51
+ *Corresponding author.†Equal contribution.
52
+ Figure 1. POSTER V2 results on RAF-DB. We compare POSTER
53
+ V2 with three variants of POSTER V1 and other FER algorithms.
54
+ The results indicate that POSTER V2 weighs the number of pa-
55
+ rameters and accuracy better than other FER methods on RAF-
56
+ DB.
57
+ ing ability, FER has great potential applications in psychol-
58
+ ogy, intelligent robotics, intelligent surveillance, virtual re-
59
+ ality and synthetic animation. Therefore, research on FER
60
+ is very necessary.
61
+ Due to the increasing attention of FER, it has been
62
+ able to develop rapidly in recent years. Early FER works
63
+ [55, 59, 33, 20] used manual features [6, 34, 23] for the anal-
64
+ ysis of facial expressions. However, FER algorithms based
65
+ on manual features are often only applicable to specific FER
66
+ tasks. When applied to real world scenarios, it is difficult for
67
+ these algorithms to achieve the same results as in the experi-
68
+ mental setting. With the development of deep learning, con-
69
+ volutional neural networks (CNNs) are introduced to FER
70
+ for improving the robustness of the network. Savchenko et
71
+ al. [38] first verified the effectiveness of CNNs such as Mo-
72
+ bileNet [19], EfficientNet [41] and RexNet [15] for FER.
73
+ Zhao et al. proposed an efficient and robust FER network
74
+ EfficientFace [57] for the analysis of facial expressions in
75
+ the wild. Nevertheless, convolution-based FER algorithms
76
+ cannot consider the global information of the image due to
77
+ the limitation of convolutional local receptive field. Influ-
78
+ enced by the vision transformer, Xue et al. [51] designed the
79
+ first transformer-based FER network to model long-range
80
+ arXiv:2301.12149v1 [cs.CV] 28 Jan 2023
81
+ [Figure 1 plot: RAF-DB top-1 accuracy (87–93) vs. Param (40–80 M) for POSTER V2, POSTER V1, POSTER V1-S, POSTER V1-T, TransFER, DMUE and VTFF]
+ dependencies for FER. Kim et al. [24] improved the vision
109
+ transformer (ViT) to combine both global and local features
110
+ so that ViT can be adapted to the FER task.
111
+ Among many excellent FER works, POSTER V1
112
+ [58] stands out with state-of-the-art (SOTA) performance.
113
+ POSTER V1 mainly solves three key issues of FER at the
114
+ same time: inter-class similarity, intra-class discrepancy
115
+ and scale sensitivity. POSTER V1 cleverly combines facial
116
+ landmark with image features through a network design of
117
+ two-stream pyramidal cross-fusion transformer. With the
118
+ difference and sparsity of landmark, POSTER V1 success-
119
+ fully solves the issue of inter-class similarity and intra-class
120
+ discrepancy in FER. The network design of pyramid archi-
121
+ tecture introduces multi-scale features for POSTER V1 to
122
+ solve the scale sensitivity problem. Along with the solution
123
+ of the three main issues of FER, POSTER V1 shows
124
+ remarkable expression analysis ability.
125
+ Although POSTER V1 works so well on FER, the huge
126
+ number of parameters and expensive computational cost
127
+ brought by its network architecture affects the efficiency
128
+ of FER. To address this issue, we revisit the network de-
129
+ sign of POSTER V1 and improve it to obtain POSTER
130
+ V2. We mainly improve POSTER V1 in three directions:
131
+ two-stream, cross-fusion and multi-scale feature extrac-
132
+ tion.
133
+ POSTER V1 contains two main branches: image-
134
+ to-landmark and landmark-to-image. Landmark-to-image
135
+ branch is essential as the core of POSTER V1 to solve inter-
136
+ class similarity and intra-class discrepancy.
137
+ The image-
138
+ to-landmark branch is only used to provide information to
139
+ landmark that it fails to take into account. This does not
140
+ contribute to solving the three main issues of FER. There-
141
+ fore, in POSTER V2, we remove the image-to-landmark
142
+ branch from the two-stream design. This greatly reduces
143
+ the computational cost on POSTER V1. For cross-fusion,
144
+ we use a window-based cross-attention mechanism instead
145
+ of the vanilla cross-attention mechanism in POSTER V1.
146
+ The window-based cross-attention mechanism not only pro-
147
+ vides linear computational complexity for POSTER V2 but
148
+ also enhances the local modeling capability of the network.
149
+ In addition, POSTER V2 no longer uses an additional pyra-
150
+ mid architecture for multi-scale feature extraction. We per-
151
+ form multi-scale feature extraction directly from the image
152
+ backbone as well as from the facial landmark detector. For
153
+ the extracted multi-scale features, we use a vision trans-
154
+ former network consisting of only two layers of transformer
155
+ modules for integration. Based on the above designs, our
156
+ POSTER V2 becomes a simpler and more powerful facial
157
+ expression recognition network. It achieves SOTA perfor-
158
+ mance on several standard FER datasets with only 8.4G
159
+ floating point operations (FLOPs) and 43.7M parameters
160
+ (Param). Figure 1 demonstrates the superiority of POSTER
161
+ V2.
162
+ Specially, POSTER V2 reached 92.21% on RAF-DB
163
+ [29], 67.49% on AffectNet [32] (7 cls) and 63.77% on Af-
164
+ fecNet (8 cls), respectively. This is better than POSTER
165
+ V1 (RAF-DB with 92.05%, AffectNet (7 cls) with 67.31%
166
+ and AffectNet (8 cls) with 63.34%). And POSTER V2 of-
167
+ fers a smaller Param (43.7M vs. 71.8M) and FLOPs (8.4G
168
+ vs. 15.7G). We hope that our work could contribute to the
169
+ design of future FER models.
170
+ In general, we summarize the contributions of this paper
171
+ as follows:
172
+ 1) We design POSTER V2 by modifying POSTER V1
173
+ from three perspectives: two-stream, cross-fusion and
174
+ feature extraction.
175
+ Compared with POSTER V1,
176
+ POSTER V2 is simpler and stronger.
177
+ 2) POSTER V2 shows state-of-the-art performance on
178
+ several standard FER datasets such as RAF-DB, Affec-
179
+ Net and CAER-S. This shows the powerful expression
180
+ analysis capability of POSTER V2.
181
+ 3) POSTER V2 greatly reduces the FLOPs and Param
182
+ of POSTER V1. Specifically, POSTER V2 reduces
183
+ 28.1M of Param and 7.3G of FLOPs. This greatly im-
184
+ proves the computational efficiency of the model.
185
+ 2. Related Work
186
+ 2.1. Facial Expression Recognition
187
+ The study of FER has become very popular in re-
188
+ cent years as more and more researchers focus on human-
189
+ computer interaction. Zhao et al. [55] used the manual fea-
190
+ ture LBP [34] for the research of FER with good results.
191
+ Zhong et al. [59] proposed a two-stage multitask sparse
192
+ learning framework (MTSL) for the FER task by explor-
193
+ ing some common and specific information among differ-
194
+ ent expressions. Savchenko et al. [38] studied lightweight
195
+ convolutional neural networks for FER task learning and
196
+ verified the effectiveness of CNNs for FER. Sang et al. [37]
197
+ focused on reducing intra-class variation in facial expres-
198
+ sion depth features and introduced a dense convolutional
199
+ network [21] for the FER task. PSR [45] solves the prac-
200
+ tical issues associated with individual wild images in FER
201
+ in terms of pose, orientation and input resolution with its
202
+ super-resolution pyramidal network architecture. Zhang et
203
+ al. [54] proposed an erasing attention consistency method to
204
+ handle the noise-labeled facial expression recognition task
205
+ that is more challenging than the conventional FER.
206
With the rise of the transformer in computer vision, many FER methods combined with transformers have emerged. The vision transformer was first used for FER by Xue et al. [51] and achieved state-of-the-art performance. VTFF [31] excels at facial expression recognition in the wild by virtue of its feature fusion. Huang et al. designed the teacher-student model PIDViT [22] for modeling the probability distribution of frontal and multi-pose facial expressions, solving the problems of pose change and occlusion in FER. Zhao et al. [9] combined global and local attention to address the two key issues of occlusion and pose change in FER. POSTER V1 [58] solves the intra-class discrepancy, inter-class similarity, and scale sensitivity issues of FER at the same time by integrating image features with facial landmark features through its two-stream, cross-fusion, and pyramid design.

Figure 2. Pipeline of POSTER V1. POSTER V1 mainly contains a facial landmark detector, an image backbone, cross-fusion transformer encoders, and a pyramid network.
However, the huge computational cost of POSTER V1 has prevented researchers from investigating further improvements in FER. To address this issue, we improve the architecture of POSTER V1 and propose POSTER V2, which is simpler and more powerful for FER tasks.
2.2. Vision Transformer

Recently, the vision transformer has been widely used for computer vision tasks on large-scale datasets thanks to its excellent ability to model long-distance dependencies. Dosovitskiy et al. [8] pioneered the introduction of the transformer from natural language processing to computer vision. Touvron et al. [42] used a teacher-student strategy to accelerate transformer training by distilling tokens. Zhou et al. [60] found that the transformer quickly saturates at deeper levels because the attention maps become increasingly similar as the transformer goes deeper; based on this observation, they proposed the Re-attention module to regenerate the attention maps and enhance the diversity among layers at a small computational cost. Touvron et al. also designed CaiT [43], a deep vision transformer for optimal image classification. To address the issue that ViT is inferior to the traditional ResNet [17] on datasets without huge data size, Yuan et al. proposed T2T-ViT [52]. Besides, Hassani et al. proposed CCT [16], which uses convolution rather than a patch embedding layer before self-attention, introducing a convolutional inductive bias into the transformer. Chen et al. proposed CrossViT [4], which combines image patches of different sizes via dual branches to produce stronger image features. Heo et al. [18] verified whether pooling layers bring advantages to ViT as they do in convolutional neural networks (CNNs). Liu et al. [30] reduced the attention mechanism from quadratic to linear computational complexity through window attention and a shifted-window scheme. Graham et al. grafted CNNs with transformers to obtain LeViT [13], with higher accuracy and faster speed. Wu et al. designed a new architecture, the convolutional vision transformer CvT [50], which improves the performance and efficiency of ViT by introducing convolution into the vision transformer to combine the strengths of both designs. Chen et al. proposed RegionViT [3], a new architecture with a pyramidal structure and a novel regional-to-local attention. Wang et al. [48] introduced ViT into a CNN-like pyramid structure for dense prediction tasks such as object detection and semantic segmentation.

The architectural designs of these vision transformer efforts inspire our improvements to POSTER V1, leading to a better trade-off between accuracy and computational complexity in FER with our POSTER V2.
3. Method

In this section, we first review the POSTER V1 pipeline. We then describe the overall architecture of POSTER V2 and discuss its specific details in three directions: two-stream design, cross-fusion, and multi-scale feature extraction.
3.1. A brief review of POSTER V1

POSTER V1 contains four core designs: a facial landmark detector, an image backbone, cross-fusion transformer encoders, and a pyramid network. Given the input image X ∈ R^(H×W×3), POSTER V1 obtains the image features X_img and the landmark features X_lm from the image backbone and the facial landmark detector, respectively.

Figure 3. The overview of the POSTER V2 architecture. LMF_i and IMF_i denote the facial landmark features and the image features at the i-th level of POSTER V2, respectively.

In the cross-fusion transformer encoder, the image features X_img ∈ R^(N×D) and the landmark features X_lm ∈ R^(N×D) are each mapped into three matrices: the image query matrix Q_img, image key matrix K_img, and image value matrix V_img, and the landmark query matrix Q_lm, landmark key matrix K_lm, and landmark value matrix V_lm. Specifically:

    Q_img = X_img W_q1,   Q_lm = X_lm W_q2,
    K_img = X_img W_k1,   K_lm = X_lm W_k2,      (1)
    V_img = X_img W_v1,   V_lm = X_lm W_v2,

where W_q1, W_q2, W_k1, W_k2, W_v1, W_v2 ∈ R^(D×D) are the mapping matrices.

The cross-fusion transformer encoder uses the vanilla cross-attention mechanism to let the image features and the landmark features interact. It is defined as follows:

    Attention(img) = softmax(Q_lm K_img^T / √d) V_img,
    Attention(lm) = softmax(Q_img K_lm^T / √d) V_lm,      (2)

where softmax(·) is the softmax [1] activation function and 1/√d is an appropriately normalized scaling factor used to prevent the gradients from becoming too small.

In summary, the cross-fusion transformer encoder can be denoted as:

    X'_img = Attention(img) + X_img,
    X_img_o = MLP(Norm(X'_img)) + X'_img,
    X'_lm = Attention(lm) + X_lm,                (3)
    X_lm_o = MLP(Norm(X'_lm)) + X'_lm,

where MLP(·) is a multi-layer perceptron and Norm(·) denotes the normalization operation.

Finally, POSTER V1 extracts and integrates multi-scale features of images and landmarks through its pyramid network design. The specific details are shown in Figure 2.
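As a sketch of Eqns. 1-3, the vanilla cross-fusion encoder can be reproduced in a few lines of NumPy. This is a toy illustration, not the paper's implementation: the sizes N and D, the random weight matrices, and the simplified Norm/MLP are all stand-ins for learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

N, D = 49, 64  # token count and embedding dim are illustrative choices

# Toy image and landmark token sequences (inputs to Eqn. 1).
X_img = rng.standard_normal((N, D))
X_lm = rng.standard_normal((N, D))

# Six projections W_q1..W_v2 of Eqn. 1; random stand-ins for learned weights.
W_q1, W_q2, W_k1, W_k2, W_v1, W_v2 = (
    rng.standard_normal((D, D)) * D**-0.5 for _ in range(6)
)
Q_img, Q_lm = X_img @ W_q1, X_lm @ W_q2
K_img, K_lm = X_img @ W_k1, X_lm @ W_k2
V_img, V_lm = X_img @ W_v1, X_lm @ W_v2

# Vanilla cross-attention (Eqn. 2): landmark queries attend to image
# keys/values, and image queries attend to landmark keys/values.
attn_img = softmax(Q_lm @ K_img.T / np.sqrt(D)) @ V_img
attn_lm = softmax(Q_img @ K_lm.T / np.sqrt(D)) @ V_lm

# Encoder output with residuals (Eqn. 3); Norm and MLP are crude placeholders
# (the MLP here draws fresh random weights on each call, purely for shape).
norm = lambda x: (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-6)
mlp = lambda x: np.maximum(x @ (rng.standard_normal((D, D)) * D**-0.5), 0)
X_img_p = attn_img + X_img
X_img_o = mlp(norm(X_img_p)) + X_img_p
print(X_img_o.shape)  # (49, 64)
```

The landmark branch X_lm_o follows the same residual pattern with attn_lm in place of attn_img.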
3.2. Architecture

Figure 3 shows the pipeline of POSTER V2. POSTER V2 keeps the facial landmark detector and image backbone of POSTER V1. In contrast, we remove the pyramid architecture of POSTER V1 and the image-to-landmark branch of the two-stream design. Instead, we perform multi-scale feature extraction directly from the facial landmark detector and the image backbone, and we introduce a small vision transformer consisting of only two layers of vanilla transformer blocks into POSTER V2 to integrate the multi-scale features. Moreover, we design a new cross-fusion transformer encoder with a window-based cross-attention mechanism. Next, we discuss the detailed modifications in POSTER V2.
3.3. Two-stream

Methods                                  RAF-DB   AffectNet
Baseline                                 91       65.06
POSTER V1                                92.05    67.31
POSTER w/o image-to-landmark branch      91.82    65.96
POSTER w/o landmark-to-image branch      91.62    65.28

Table 1. Ablation study of the two branches in the cross-fusion of POSTER V1. The baseline in the table keeps the baseline setting of POSTER V1.
Although the two-stream design is central to POSTER V1, POSTER V1 does not explore which branch of the two-stream actually plays the major role. Thus, in this section, we first perform an ablation study of the two-stream design to learn the contribution of the two branches to FER. Table 1 shows the ablation results. We see that on the RAF-DB dataset, the accuracy of POSTER V1 slips by 0.23 when the image-to-landmark branch is missing. If the landmark-to-image branch is missing, the accuracy of POSTER V1 on RAF-DB is reduced by 0.43. Meanwhile, we observe a similar situation on the AffectNet dataset. This indicates that although the image-to-landmark branch contributes to the FER performance of POSTER V1, it is the landmark-to-image branch that plays the decisive role. Next, we analyze the above results at the theoretical level.
Discussion. The two-stream design in POSTER V1 is mainly used to solve the issues of intra-class discrepancy and inter-class similarity in FER. It includes the landmark-to-image and image-to-landmark branches. We revisit the influence of the two branches on POSTER V1. In the landmark-to-image branch, the landmark features interact with the image features as queries Q_lm in the cross-attention mechanism. Guided by the landmark features, the image features more easily represent the salient regions of facial expressions, which addresses the intra-class discrepancy issue. Also benefiting from the sparsity of the landmark features, image features guided by landmark features reduce the focus on regions that are common to all faces. This helps to reduce the impact of inter-class similarity in FER. The visualization of the landmark-to-image branch attention in Figure 4 also validates these statements. Therefore, the landmark-to-image branch in the two-stream design is essential and needs to be retained. In the image-to-landmark branch, the image features interact with the landmark features as queries Q_img to compensate for the deficiencies of the landmark features. Although this also benefits the FER task to some extent, it does not contribute to solving the issues of inter-class similarity and intra-class discrepancy, and it comes with a huge computational cost. This is consistent with the results we observed in the ablation experiments of Table 1. Thus, making a trade-off between computational cost and accuracy, we eventually remove the image-to-landmark branch from the two-stream design.

Figure 4. Input images (row 1), facial landmark images (row 2), and landmark-to-image branch attention visualization results (row 3). We visualize the attention map of the last layer of the landmark-to-image branch for high-level features in POSTER V1. We can observe that, with the help of landmark features, the attention map focuses more on the distinctive areas of the face and less on the areas common to all faces.
3.4. Cross-fusion

In POSTER V2, we use a window-based cross-attention mechanism instead of the vanilla cross-attention mechanism of POSTER V1 to achieve linear computation. Figure 5 illustrates the detailed differences between the two cross-attention mechanisms.

Figure 5. Window-based cross-attention mechanism and vanilla cross-attention mechanism.

For the image features X_img ∈ R^(N×D), we first divide them into several non-overlapping windows z_img ∈ R^(M×D), where each window z_img contains M tokens. For the landmark features X_lm ∈ R^(C×H×W), we first down-sample them to the window size z_lm ∈ R^(c×h×w), where c = D and M = h × w, and then reshape them to match the shape of z_img. At this point, the cross-attention with I heads in a local window can be formulated as:

    q = z_lm w_q,   k = z_img w_k,   v = z_img w_v,
    o^(i) = θ(q^(i) k^(i)T / √d + b) v^(i),   i = 1, ..., I,      (4)
    o = [o^(1), ..., o^(I)] w_o,

where w_q, w_k, w_v, w_o are the mapping matrices, θ(·) is the softmax function, [·] denotes the merge operation, and b ∈ R^(I×I) is the relative position bias.
We perform the above cross-attention calculation for all windows. We refer to this cross-attention mechanism as window-based multi-head cross-attention (W-MCSA). Thus, the cross-fusion transformer encoder in POSTER V2 can be expressed as follows:

    X'_img = W-MCSA(img) + X_img,
    X_img_o = MLP(Norm(X'_img)) + X'_img,      (5)
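The window-based cross-attention of Eqn. 4 can be sketched as follows. This is the single-head case with the relative position bias b omitted; the token counts, channel width, and random projection matrices are illustrative stand-ins, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

N, D, M = 64, 32, 16  # image tokens, channels, tokens per window (toy sizes)
X_img = rng.standard_normal((N, D))
z_lm = rng.standard_normal((M, D))  # landmark features down-sampled to one window's size

w_q, w_k, w_v = (rng.standard_normal((D, D)) * D**-0.5 for _ in range(3))

outs = []
for z_img in X_img.reshape(N // M, M, D):  # non-overlapping windows of M tokens
    # Queries come from the landmarks, keys/values from the image window (Eqn. 4).
    q, k, v = z_lm @ w_q, z_img @ w_k, z_img @ w_v
    outs.append(softmax(q @ k.T / np.sqrt(D)) @ v)  # attention stays M x M, never N x N
out = np.concatenate(outs, axis=0)  # one M-token output per window
print(out.shape)  # (64, 32)
```

Because each attention matrix is only M × M, the total cost grows with the number of windows N/M rather than with N².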
Computational Complexity Analysis. Since the query in both types of cross-attention keeps the same shape as the key and value, we can use the complexities of multi-head self-attention and window-based multi-head self-attention to represent their computational complexities. This can be indicated as follows:

    Ω(MCSA) = 4ND² + 2N²D,
    Ω(W-MCSA) = 4ND² + 2M²ND,      (6)

According to Eqn. 6, the window-based cross-attention mechanism we use successfully reduces the computational complexity of the cross-fusion in POSTER V1 from quadratic to linear. This further improves the computational efficiency of POSTER V2.
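Plugging representative sizes into Eqn. 6 makes the gap concrete. The values of N, D, and M below are illustrative choices, not measurements from the paper; M follows the Swin-style convention in which M is the window side length, so each window holds M² tokens.

```python
# Evaluate Eqn. 6 for a growing token count N with fixed channel dim D and
# window size M: the MCSA term grows as N^2, the W-MCSA term only as N.
D, M = 64, 7
for N in (49, 196, 784):
    mcsa = 4 * N * D**2 + 2 * N**2 * D
    w_mcsa = 4 * N * D**2 + 2 * M**2 * N * D
    print(N, mcsa, w_mcsa, round(mcsa / w_mcsa, 2))
```

At N = M² the two costs coincide; as N grows past the window size, the quadratic term makes MCSA several times more expensive than W-MCSA.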
3.5. Multi-scale feature extraction

As shown in Figure 3, POSTER V2 removes the pyramid design of POSTER V1. Instead, we extract multi-scale features directly from the facial landmark detector and the image backbone, and we add a small vision transformer network to POSTER V2 to integrate the multi-scale features. For the obtained multi-scale features o1, o2, o3, we directly merge them along the token dimension and use vanilla transformer blocks for processing. This process is described as:

    o = [o1, o2, o3],
    o' = MSA(o) + o,                      (7)
    o_out = MLP(Norm(o')) + o',

where MSA(·) represents the multi-head self-attention mechanism. We discuss this design as follows.

Discussion. POSTER V1 adopts the pyramid structure to solve the scale sensitivity problem in FER. However, we consider that the pyramid structure is only an up-sampling and down-sampling operation on top of feature maps of the same scale. Although it provides multi-scale information to some extent, we believe it is not as good as extracting multi-scale features directly from the network; the method analysis in Section 4.3 also supports this point. For the integration of the multi-scale features, we believe that vanilla transformer blocks are sufficient. We concatenate the tokens of all scale feature maps, and the attention mechanism can model long-range dependencies across all scale tokens. Thus, token information of different scales is exchanged within the transformer block.
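The token merging and vanilla transformer block of Eqn. 7 can be sketched as follows. Toy token counts, a single attention head, and random stand-in weights are used here; the real model uses two learned transformer blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

D = 32
# Three feature maps with different token counts from different stages (toy sizes).
o1, o2, o3 = (rng.standard_normal((n, D)) for n in (49, 25, 9))

o = np.concatenate([o1, o2, o3], axis=0)  # merge along the token dimension (Eqn. 7)

# One vanilla transformer block: self-attention over ALL scales' tokens at once,
# so attention can relate tokens across scales directly.
Wq, Wk, Wv = (rng.standard_normal((D, D)) * D**-0.5 for _ in range(3))
o_p = softmax((o @ Wq) @ (o @ Wk).T / np.sqrt(D)) @ (o @ Wv) + o

norm = lambda x: (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-6)
W1 = rng.standard_normal((D, D)) * D**-0.5
o_out = np.maximum(norm(o_p) @ W1, 0) + o_p  # simplified MLP + residual
print(o_out.shape)  # (83, 32)
```

Note that the 83 × 83 attention matrix spans all three scales jointly, which is what lets a single block propagate information between scales.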
4. Experiments

We verify the effectiveness of POSTER V2 on several standard FER datasets: RAF-DB [29], AffectNet [32], and CAER-S [27]. In the following, we first compare POSTER V2 with SOTA methods, and then conduct a series of method analyses and ablation studies on POSTER V2. A more detailed experimental setup, additional experimental results, and visualization results are given in the Appendix.
4.1. Experiment Setup

Datasets. We evaluate the FER performance of POSTER V2 on the widely used RAF-DB, AffectNet, and CAER-S datasets. The Real-world Affective Faces Database (RAF-DB) is a large-scale facial expression database annotated by 315 staff members (students and faculty of the university). For expression annotation, RAF-DB selected six basic emotions plus the neutral emotion from a range of expressions (e.g., smile, cackle, cry, anger, fear, dread, shock, surprise, disgust, and no expression), for a total of seven expression classes. It contains 12,271 training images and 3,068 test images. AffectNet is currently the largest publicly available dataset in the FER field. It contains about 1M face images associated with emotional words, covering 8 primary emotion categories (neutral, happy, angry, sad, fear, surprise, disgust, contempt). We mainly use the 7-class (excluding contempt) and 8-class AffectNet settings. AffectNet (7 cls) consists of 280K training images and 3,500 validation images (500 images per category). AffectNet (8 cls) consists of 283K training images and 4,000 validation images (500 images per category). The CAER-S dataset was obtained from the CAER dataset and contains 65,983 images, divided into 7 expression classes: neutral, happy, sad, surprise, fear, disgust, and anger. For the FER task, we used 44,996 images for training and 20,987 images for testing. The specific dataset configuration is shown in Table 2.

Dataset             Train size   Test size   Classes
RAF-DB              12271        3068        7
AffectNet (7 cls)   280401       3500        7
AffectNet (8 cls)   283501      4000        8
CAER-S              44996        20987       7

Table 2. Detailed sizes of the experimental datasets.
Methods              Year        RAF-DB   AffectNet (7 cls)   AffectNet (8 cls)
SCN [46]             CVPR 2020   87.03    -                   60.23
PSR [45]             CVPR 2020   88.98    63.77               60.68
LDL-ALSG [5]         CVPR 2020   85.53    59.35               -
RAN [47]             TIP 2020    86.9     -                   -
DACL [11]            WACV 2020   87.78    65.2                -
KTN [28]             TIP 2021    88.07    63.97               -
DMUE [39]            CVPR 2021   89.42    63.11               -
FDRL [36]            CVPR 2021   89.47    -                   -
VTFF [31]            TAC 2021    88.14    61.85               -
ARM [40]             2021        90.42    65.2                61.33
TransFER [51]        ICCV 2021   90.91    66.23               -
DAN [49]             2021        89.7     65.69               62.09
EfficientFace [57]   AAAI 2021   88.36    63.7                60.23
MA-Net [56]          TIP 2021    88.42    64.53               60.29
Meta-Face2Exp [53]   CVPR 2022   88.54    64.23               -
EAC [54]             ECCV 2022   90.35    65.32               -
POSTER V1 [58]       2022        92.05    67.31               63.34
POSTER V2            -           92.21    67.49               63.77

Table 3. Comparison results with SOTA FER algorithms on RAF-DB and AffectNet.
Settings. Similar to POSTER V1 [58], we also use the IR50 [7] network pre-trained on the Ms-Celeb-1M [14] dataset as the image backbone, and MobileFaceNet [2] with frozen weights as our facial landmark detector. We employ
Dataset             Method      Neutral   Happy   Sad     Surprise   Fear    Disgust   Anger   Contempt   mean Acc
RAF-DB              POSTER V1   92.35     96.96   91.21   90.27      67.57   75        88.89   -          86.04
RAF-DB              POSTER V2   92.06     97.22   92.89   90.58      68.92   71.88     88.27   -          85.97
AffectNet (7 cls)   POSTER V1   67.2      89      67      64         64.8    56        62.6    -          67.23
AffectNet (7 cls)   POSTER V2   65.4      89.4    68      66         64.2    54.4      65      -          67.45
AffectNet (8 cls)   POSTER V1   59.4      80.2    66.6    63.6       63.6    59.8      58.8    54.71      63.34
AffectNet (8 cls)   POSTER V2   60.6      76.4    66.8    65.6       63      58        60.2    59.52      63.76

Table 4. Class-wise accuracy of POSTER V1 and POSTER V2 on the RAF-DB, AffectNet (7 cls), and AffectNet (8 cls) datasets. Green, blue, and red mark the highest value of a single category on RAF-DB, AffectNet (7 cls), and AffectNet (8 cls), respectively.
the Adam [25] optimizer for 200 training epochs, with a batch size of 144, a learning rate of 3.5e-4, and a weight decay of 1e-4. We use random horizontal flipping and random erasing as our data augmentation methods. For the loss function, we choose the standard cross-entropy loss. We implement POSTER V2 in PyTorch on a single NVIDIA RTX 3090.
4.2. Comparison with SOTA FER Methods

Results on RAF-DB. In Table 3, we compare POSTER V2 with the SOTA FER algorithms of recent years on the RAF-DB dataset. The experimental results show that POSTER V2 exhibits SOTA performance on RAF-DB. Compared with POSTER V1 (92.05), POSTER V2 improves by 0.16; it gains +1.86 over EAC (90.35) and +1.3 over TransFER (90.91). This shows the superiority of POSTER V2 on RAF-DB. Table 4 compares POSTER V2 with POSTER V1 on individual RAF-DB classes and average accuracy. Although POSTER V2 outperforms POSTER V1 in several categories, its average accuracy is slightly inferior to POSTER V1.

Results on AffectNet. In Table 3, we also conduct FER experiments on AffectNet (7 cls) and AffectNet (8 cls). We observe that POSTER V2 exhibits SOTA FER performance on both. Compared with POSTER V1 (67.31, 63.34), POSTER V2 gains 0.18 on AffectNet (7 cls) and 0.43 on AffectNet (8 cls). On AffectNet (8 cls), POSTER V2 is higher than DAN (62.09) by 1.68; on AffectNet (7 cls), POSTER V2 exceeds TransFER (66.23) by 1.26. This demonstrates that POSTER V2 maintains excellent FER performance even on larger datasets. Table 4 shows that POSTER V2 exceeds POSTER V1 on the majority of individual class accuracies in both AffectNet (7 cls) and AffectNet (8 cls). As a result, POSTER V2 achieves better average accuracy than POSTER V1 on AffectNet.
Results on CAER-S. We compare POSTER V2 with SOTA FER methods of recent years on the CAER-S dataset. As Table 5 shows, POSTER V2 performs extremely well on CAER-S. Specifically, POSTER V2 scores 92.98 on CAER-S: +0.27 over POSTER V1 (92.73), +3.12 over GLAMOR-Net (89.88), +4.58 over MA-Net (88.42), and +7.13 over EfficientFace (85.87). The excellent results on CAER-S prove that the success of POSTER V2 is no accident and show the strong generalization ability of POSTER V2.

Methods              Year               CAER-S
DSN [10]             ICML 2018          75.19
CAER-Net-S [27]      ICCV 2019          73.51
GRERN [12]           IEEE Access 2020   81.31
EfficientFace [57]   AAAI 2021          85.87
MA-Net [56]          TIP 2021           88.42
GLAMOR-Net [26]      NCA 2021           89.88
POSTER V1 [58]       2022               92.73
POSTER V2            -                  93

Table 5. Comparison results with SOTA FER algorithms on CAER-S.
4.3. FLOPs and Param Comparison

Methods       #Param   #FLOPs   RAF-DB   AffectNet
POSTER V1-T   52.2M    13.6G    91.36    66.87
POSTER V1-S   62.0M    14.7G    91.54    67.13
POSTER V1     71.8M    15.7G    92.05    67.31
POSTER V2     43.7M    8.4G     92.21    67.49

Table 6. Comparison of Params and FLOPs with POSTER V1.

From Table 6, we can see that POSTER V2 achieves better FER results with fewer Params and FLOPs than POSTER V1. Compared to POSTER V1-T, POSTER V2 saves 8.5M Params and 5.2G FLOPs while gaining 0.85% on RAF-DB and 0.62% on AffectNet. Compared to POSTER V1-S, POSTER V2 saves 18.3M Params and 6.3G FLOPs while gaining 0.67% on RAF-DB and 0.36% on AffectNet. Compared to POSTER V1, POSTER V2 saves 28.1M Params and 7.3G FLOPs while gaining 0.16% on RAF-DB and 0.18% on AffectNet. Therefore, POSTER V2 is the better choice for the FER task.
4.4. Method Analysis

In this subsection, we present a method analysis of the small ViT model we use in POSTER V2 on RAF-DB.

Figure 6. Influence of ViT models of different depths on POSTER V2 for RAF-DB.

ViT depth. Here, we investigate the impact of different ViT depths on the FER performance of POSTER V2. In Figure 6, we show the influence of ViT models with depth {2, 4, 6, 8} on POSTER V2. We observe that for multi-scale integration we do not need to increase the depth of the ViT model: a ViT model with a depth of 2 is sufficient to handle the FER task, and a deeper ViT model instead hurts the performance of POSTER V2.

ViT pre-trained weights   RAF-DB   AffectNet
w/o                       92.21    67.49
w/                        91.49    60.2

Table 7. Impact of pre-trained ViT models on POSTER V2 for FER.

Pre-trained ViT. We study the influence of a pre-trained ViT model on POSTER V2. We use ViT weights pre-trained on ImageNet-21K [35] for POSTER V2. Table 7 shows that the FER performance of POSTER V2 drops after using the pre-trained ViT model. We argue that this is mainly because pre-trained ViT models act on feature extraction from image-level inputs, whereas in POSTER V2 the ViT performs multi-scale feature integration on feature-level inputs. This difference in input and task means the pre-trained ViT does not help POSTER V2.
4.5. Ablation Study

Methods                              RAF-DB   AffectNet
POSTER V2                            92.21    67.49
w/o multi-scale feature extraction   91.47    66.51
w/o ViT                              91.86    66.92
w/o W-MCSA                           91.56    67.24
w/o cross-fusion                     91.39    66.35

Table 8. Results of ablation experiments on the key components of POSTER V2.

We validate the effectiveness of each of our improvements over POSTER V1 on the RAF-DB and AffectNet datasets.

Multi-scale feature extraction. We first verify the effectiveness of extracting multi-scale features directly from the network. In this ablation experiment, we only use the last-layer feature maps of the image backbone and the facial landmark detector for cross-fusion. From Table 8, we observe that POSTER V2 degrades significantly on the RAF-DB and AffectNet datasets when multi-scale feature extraction is not performed. This shows that our method of directly extracting multi-scale features can also solve the scale sensitivity issue of FER, and it indicates the importance of multi-scale features for FER.

ViT. We next ablate the ViT used for multi-scale feature integration by directly summing the features of different scales for FER. According to the experimental results in Table 8, POSTER V2 decreases by 0.35 on RAF-DB and 0.57 on AffectNet when multi-scale feature integration is not performed by the ViT. This suggests that the ViT facilitates multi-scale feature integration.

W-MCSA. We validate the effectiveness of W-MCSA for cross-fusion by replacing our window-based cross-attention mechanism with the vanilla cross-attention mechanism. We observe that POSTER V2 degrades on both the RAF-DB and AffectNet datasets. This shows that the W-MCSA we use both improves FER accuracy and reduces the computational complexity of POSTER V1; thus, W-MCSA is essential for POSTER V2.

Cross-fusion. This experiment verifies the role of the landmark-to-image branch in POSTER V2. In the ablation experiments on cross-fusion, we merge the extracted image multi-scale features and landmark multi-scale features directly and integrate them with the ViT. Table 8 shows that the performance of POSTER V2 on RAF-DB and AffectNet drops sharply when cross-fusion is not applied. This shows that cross-fusion is the key for POSTER V2 to achieve SOTA FER, and it indicates that addressing inter-class similarity and intra-class discrepancy is particularly important for the FER task.
5. Conclusion

In this paper, we improve POSTER V1 in three directions (two-stream design, cross-fusion, and multi-scale feature extraction) to obtain POSTER V2, a simpler and stronger vision transformer for FER. Extensive FER experimental results show that POSTER V2 achieves state-of-the-art FER performance while greatly reducing the Params and FLOPs of POSTER V1. This shows that POSTER V2 achieves a better trade-off between accuracy and computational complexity, and it is therefore a better choice for the FER task.
Acknowledgments
1026
+ This work was supported by Public-welfare Technology
1027
+ Application Research of Zhejiang Province in China un-
1028
+ der Grant LGG22F020032, and Key Research and Devel-
1029
+ opment Project of Zhejiang Province in China under Grant
1030
+ 2021C03137.
1031
References

[1] Peter F Brown, Vincent J Della Pietra, Peter V Desouza, Jennifer C Lai, and Robert L Mercer. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–480, 1992.
[2] Cunjian Chen. PyTorch Face Landmark: A fast and accurate facial landmark detector, 2021.
[3] Chun-Fu Chen, Rameswar Panda, and Quanfu Fan. Regionvit: Regional-to-local attention for vision transformers. arXiv preprint arXiv:2106.02689, 2021.
[4] Chun-Fu Richard Chen, Quanfu Fan, and Rameswar Panda. Crossvit: Cross-attention multi-scale vision transformer for image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 357–366, 2021.
[5] Shikai Chen, Jianfeng Wang, Yuedong Chen, Zhongchao Shi, Xin Geng, and Yong Rui. Label distribution learning on auxiliary label space graphs for facial expression recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13984–13993, 2020.
[6] Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 886–893. IEEE, 2005.
[7] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4690–4699, 2019.
[8] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[9] Tongle Fan, Guanglei Wang, Yan Li, and Hongrui Wang. Ma-net: A multi-scale attention network for liver and tumor segmentation. IEEE Access, 8:179656–179665, 2020.
[10] Yingruo Fan, Jacqueline CK Lam, and Victor OK Li. Video-based emotion recognition using deeply-supervised neural networks. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, pages 584–588, 2018.
[11] Amir Hossein Farzaneh and Xiaojun Qi. Facial expression recognition in the wild via deep attentive center loss. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2402–2411, 2021.
[12] Qinquan Gao, Hanxin Zeng, Gen Li, and Tong Tong. Graph reasoning-based emotion recognition network. IEEE Access, 9:6488–6497, 2021.
[13] Benjamin Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, and Matthijs Douze. Levit: a vision transformer in convnet's clothing for faster inference. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12259–12269, 2021.
[14] Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, and Jianfeng Gao. Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. In European Conference on Computer Vision, pages 87–102. Springer, 2016.
[15] Dongyoon Han, Sangdoo Yun, Byeongho Heo, and YoungJoon Yoo. Rexnet: Diminishing representational bottleneck on convolutional neural network. arXiv preprint arXiv:2007.00992, 6, 2020.
[16] Ali Hassani, Steven Walton, Nikhil Shah, Abulikemu Abuduweili, Jiachen Li, and Humphrey Shi. Escaping the big data paradigm with compact transformers. arXiv preprint arXiv:2104.05704, 2021.
[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[18] Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, and Seong Joon Oh. Rethinking spatial dimensions of vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11936–11945, 2021.
[19] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[20] Yuxiao Hu, Zhihong Zeng, Lijun Yin, Xiaozhou Wei, Xi Zhou, and Thomas S Huang. Multi-view facial expression recognition. In 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition, pages 1–6. IEEE, 2008.
[21] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708, 2017.
[22] Yin-Fu Huang and Chia-Hsin Tsai. Pidvit: Pose-invariant distilled vision transformer for facial expression recognition in the wild. IEEE Transactions on Affective Computing, 2022.
[23] Taskeed Jabid, Md Hasanul Kabir, and Oksam Chae. Local directional pattern (LDP) for face recognition. In 2010 Digest of Technical Papers International Conference on Consumer Electronics (ICCE), pages 329–330. IEEE, 2010.
[24] Sangwon Kim, Jaeyeal Nam, and Byoung Chul Ko. Facial expression recognition based on squeeze vision transformer. Sensors, 22(10):3729, 2022.
[25] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[26] Nhat Le, Khanh Nguyen, Anh Nguyen, and Bac Le. Global-local attention for emotion recognition. Neural Computing and Applications, 34(24):21625–21639, 2022.
[27] Jiyoung Lee, Seungryong Kim, Sunok Kim, Jungin Park, and Kwanghoon Sohn. Context-aware emotion recognition networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10143–10152, 2019.
[28] Hangyu Li, Nannan Wang, Xinpeng Ding, Xi Yang, and Xinbo Gao. Adaptively learning facial expression representation via cf labels and distillation. IEEE Transactions on Image Processing, 30:2016–2028, 2021.
[29] Shan Li, Weihong Deng, and JunPing Du. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2584–2593. IEEE, 2017.
[30] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021.
[31] Fuyan Ma, Bin Sun, and Shutao Li. Facial expression recognition with visual transformers and attentional selective fusion. IEEE Transactions on Affective Computing, 2021.
[32] Ali Mollahosseini, Behzad Hasani, and Mohammad H Mahoor. Affectnet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing, 10(1):18–31, 2017.
[33] Stephen Moore and Richard Bowden. Local binary patterns for multi-view facial expression recognition. Computer Vision and Image Understanding, 115(4):541–558, 2011.
[34] Timo Ojala, Matti Pietikainen, and Topi Maenpaa. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971–987, 2002.
[35] Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. Imagenet-21k pretraining for the masses. arXiv preprint arXiv:2104.10972, 2021.
[36] Delian Ruan, Yan Yan, Shenqi Lai, Zhenhua Chai, Chunhua Shen, and Hanzi Wang. Feature decomposition and reconstruction learning for effective facial expression recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7660–7669, 2021.
[37] Dinh Viet Sang, Pham Thai Ha, et al. Discriminative deep feature learning for facial emotion recognition. In 2018 1st International Conference on Multimedia Analysis and Pattern Recognition (MAPR), pages 1–6. IEEE, 2018.
[38] Andrey V Savchenko. Facial expression and attributes recognition based on multi-task learning of lightweight neural networks. In 2021 IEEE 19th International Symposium on Intelligent Systems and Informatics (SISY), pages 119–124. IEEE, 2021.
[39] Jiahui She, Yibo Hu, Hailin Shi, Jun Wang, Qiu Shen, and Tao Mei. Dive into ambiguity: Latent distribution mining and pairwise uncertainty estimation for facial expression recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6248–6257, 2021.
[40] Jiawei Shi, Songhao Zhu, and Zhiwei Liang. Learning to amend facial expression representation via de-albino and affinity. arXiv preprint arXiv:2103.10189, 2021.
[41] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pages 6105–6114. PMLR, 2019.
[42] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, pages 10347–10357. PMLR, 2021.
[43] Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. Going deeper with image transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 32–42, 2021.
[44] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(11), 2008.
[45] Thanh-Hung Vo, Guee-Sang Lee, Hyung-Jeong Yang, and Soo-Hyung Kim. Pyramid with super resolution for in-the-wild facial expression recognition. IEEE Access, 8:131988–132001, 2020.
[46] Kai Wang, Xiaojiang Peng, Jianfei Yang, Shijian Lu, and Yu Qiao. Suppressing uncertainties for large-scale facial expression recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6897–6906, 2020.
[47] Kai Wang, Xiaojiang Peng, Jianfei Yang, Debin Meng, and Yu Qiao. Region attention networks for pose and occlusion robust facial expression recognition. IEEE Transactions on Image Processing, 29:4057–4069, 2020.
[48] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 568–578, 2021.
[49] Zhengyao Wen, Wenzhong Lin, Tao Wang, and Ge Xu. Distract your attention: Multi-head cross attention network for facial expression recognition. arXiv preprint arXiv:2109.07270, 2021.
[50] Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. Cvt: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22–31, 2021.
[51] Fanglei Xue, Qiangchang Wang, and Guodong Guo. Transfer: Learning relation-aware facial expression representations with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3601–3610, 2021.
[52] Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zi-Hang Jiang, Francis EH Tay, Jiashi Feng, and Shuicheng Yan. Tokens-to-token vit: Training vision transformers from scratch on imagenet. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 558–567, 2021.
[53] Dan Zeng, Zhiyuan Lin, Xiao Yan, Yuting Liu, Fei Wang, and Bo Tang. Face2exp: Combating data biases for facial expression recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20291–20300, 2022.
[54] Yuhang Zhang, Chengrui Wang, Xu Ling, and Weihong Deng. Learn from all: Erasing attention consistency for noisy label facial expression recognition. In European Conference on Computer Vision, pages 418–434. Springer, 2022.
[55] Guoying Zhao and Matti Pietikainen. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):915–928, 2007.
[56] Zengqun Zhao, Qingshan Liu, and Shanmin Wang. Learning deep global multi-scale and local attention features for facial expression recognition in the wild. IEEE Transactions on Image Processing, 30:6544–6556, 2021.
[57] Zengqun Zhao, Qingshan Liu, and Feng Zhou. Robust lightweight facial expression recognition network with label distribution training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 3510–3519, 2021.
[58] Ce Zheng, Matias Mendieta, and Chen Chen. Poster: A pyramid cross-fusion transformer network for facial expression recognition. arXiv preprint arXiv:2204.04083, 2022.
[59] Lin Zhong, Qingshan Liu, Peng Yang, Bo Liu, Junzhou Huang, and Dimitris N Metaxas. Learning active facial patches for expression analysis. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2562–2569. IEEE, 2012.
[60] Daquan Zhou, Bingyi Kang, Xiaojie Jin, Linjie Yang, Xiaochen Lian, Zihang Jiang, Qibin Hou, and Jiashi Feng. Deepvit: Towards deeper vision transformer. arXiv preprint arXiv:2103.11886, 2021.
Appendix

A. Implementation Details

For POSTER V2, we conduct FER experiments on the RAF-DB, AffectNet, and CAER-S datasets. For the different datasets, we adopt different detailed settings; in particular, we use different learning rates for training according to the settings of POSTER V1. Moreover, for AffectNet (8 cls), POSTER V2 uses a classification head with 8 categories for prediction. The rest of the settings are consistent with the experimental sections in the main text.
config                 | value
optimizer              | Adam
base learning rate     | 3.50E-05
weight decay           | 1.00E-04
batch size             | 144
training epochs        | 200
learning rate schedule | ExponentialLR (gamma=0.98)
augmentation           | RandomHorizontalFlip(), RandomErasing(scale=(0.02, 0.1))
drop path              | linspace(0, 0.5, 5)
num classes            | 7

Table 9. Supervised training POSTER V2 from scratch on RAF-DB.
RAF-DB Settings. We use the Adam optimizer with a learning rate of 3.5e-5 for 200 epochs of training. The batch size is maintained at 144 and the weight decay remains at 1e-4. The learning rate schedule uses an exponential decay with a gamma of 0.98. Data augmentation includes random horizontal flipping and random erasing. The specific settings are shown in Table 9.
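As a minimal sketch of this schedule (our own illustration, not the authors' training code), an ExponentialLR-style decay simply multiplies the learning rate by gamma once per epoch, so the rate at epoch t is base_lr * gamma**t:

```python
# Sketch of an ExponentialLR-style schedule: lr_t = base_lr * gamma**t.
# base_lr and gamma follow the RAF-DB settings in Table 9; the helper
# name exponential_lr is ours, not part of any library.
def exponential_lr(base_lr, gamma, epoch):
    """Return the learning rate used at the given (0-indexed) epoch."""
    return base_lr * gamma ** epoch

base_lr, gamma = 3.5e-5, 0.98
schedule = [exponential_lr(base_lr, gamma, t) for t in range(200)]
print(schedule[0])    # 3.5e-05 at the first epoch
print(schedule[199])  # smoothly decayed rate at the final epoch
```

Over 200 epochs this shrinks the rate by a factor of 0.98**199, i.e. roughly 55x, which matches the slow-decay behavior such schedules are typically chosen for.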
config                 | value
optimizer              | Adam
base learning rate     | 1.00E-06
weight decay           | 1.00E-04
batch size             | 144
training epochs        | 200
learning rate schedule | ExponentialLR (gamma=0.98)
augmentation           | RandomHorizontalFlip(), RandomErasing(p=1, scale=(0.05, 0.05))
drop path              | linspace(0, 0.5, 5)
num classes            | 7

Table 10. Supervised training POSTER V2 from scratch on AffectNet (7 cls).
AffectNet (7 cls) Settings. On the AffectNet (7 cls) dataset, we adjust the learning rate to 1e-6. The number of training epochs remains 200, the batch size is maintained at 144, and the weight decay remains at 1e-4. The learning rate schedule uses an exponential decay with a gamma of 0.98. Data augmentation includes random horizontal flipping and random erasing. The detailed settings are shown in Table 10.
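For illustration only, the random erasing listed in Table 10 (p=1, scale=(0.05, 0.05)) always blanks out a patch covering a fixed 5% of the image area. A simplified single-channel sketch of that behavior (our own code, mirroring the general semantics of torchvision's RandomErasing with a square patch):

```python
import random

def random_erase(img, scale=0.05, fill=0.0, rng=None):
    """Erase one square patch whose area is `scale` of the image area.

    `img` is an H x W nested list; a copy with the patch set to `fill`
    is returned. Simplified sketch: aspect ratio is fixed at 1.
    """
    rng = rng or random.Random()
    h, w = len(img), len(img[0])
    side = max(1, int(round((scale * h * w) ** 0.5)))  # square patch side
    top = rng.randrange(0, h - side + 1)
    left = rng.randrange(0, w - side + 1)
    out = [row[:] for row in img]
    for r in range(top, top + side):
        for c in range(left, left + side):
            out[r][c] = fill
    return out

img = [[1.0] * 10 for _ in range(10)]
erased = random_erase(img, scale=0.05, rng=random.Random(0))
print(sum(v == 0.0 for row in erased for v in row))  # a few pixels (~5%) erased
```

In the actual pipeline the erased region would typically be filled with random values rather than a constant, but the occlusion effect on the face crop is the same idea.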
config                 | value
optimizer              | Adam
base learning rate     | 1.00E-06
weight decay           | 1.00E-04
batch size             | 144
training epochs        | 200
learning rate schedule | ExponentialLR (gamma=0.98)
augmentation           | RandomHorizontalFlip(), RandomErasing(p=1, scale=(0.05, 0.05))
drop path              | linspace(0, 0.5, 5)
num classes            | 8

Table 11. Supervised training POSTER V2 from scratch on AffectNet (8 cls).
AffectNet (8 cls) Settings. We use the Adam optimizer with a learning rate of 1e-6 for 200 epochs of training. The batch size is maintained at 144 and the weight decay remains at 1e-4. The learning rate schedule uses an exponential decay with a gamma of 0.98. Data augmentation includes random horizontal flipping and random erasing. In addition, we set the number of categories to 8. Table 11 shows the specific experimental settings.
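The "drop path linspace(0, 0.5, 5)" entry in the tables suggests stochastic-depth drop rates that increase linearly across five depth levels. A small pure-Python stand-in for numpy.linspace / torch.linspace shows the resulting rates (this is our reading of the config line, not code from the paper):

```python
def linspace(start, stop, num):
    """Evenly spaced values from start to stop inclusive, like numpy.linspace."""
    if num == 1:
        return [start]
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]

# Drop-path probabilities for five depth levels: the shallowest level
# keeps every path, while the deepest drops paths with probability 0.5.
drop_rates = linspace(0.0, 0.5, 5)
print(drop_rates)  # [0.0, 0.125, 0.25, 0.375, 0.5]
```

Assigning larger drop rates to deeper levels is the usual stochastic-depth convention: early features are kept reliable while later blocks are regularized more aggressively.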
config                 | value
optimizer              | Adam
base learning rate     | 4.00E-05
weight decay           | 1.00E-04
batch size             | 144
training epochs        | 200
learning rate schedule | ExponentialLR (gamma=0.98)
augmentation           | RandomHorizontalFlip(), RandomErasing(p=1, scale=(0.05, 0.05))
drop path              | linspace(0, 0.5, 5)
num classes            | 7

Table 12. Supervised training POSTER V2 from scratch on CAER-S.
CAER-S Settings. On the CAER-S dataset, we employ the Adam optimizer with a learning rate of 4e-5 for 200 epochs of training. The batch size is maintained at 144 and the weight decay remains at 1e-4. The learning rate schedule uses an exponential decay with a gamma of 0.98. Data augmentation includes random horizontal flipping and random erasing. The specific settings are shown in Table 12.
B. Detailed Experimental Results

In this section, we show more detailed experimental results of POSTER V2 on each dataset. We also show the confusion matrix of POSTER V2 on each dataset in Figure 7.

Figure 7. The confusion matrix of POSTER V2 on each dataset.

RAF-DB Results. Figure 8 shows the specific training process of POSTER V2 on RAF-DB. We observe that the training loss and validation loss of POSTER V2 decrease until saturation during the training process. Furthermore, the training accuracy and validation accuracy of POSTER V2 continue to increase, up to small fluctuations.

Figure 8. The specific training process of POSTER V2 on RAF-DB.

AffectNet (7 cls) Results. We show in Figure 9 the detailed training of POSTER V2 on AffectNet (7 cls). POSTER V2 achieves its best training results on AffectNet (7 cls) at an early stage; at this point, it reaches its highest accuracy on both the training and test sets. Therefore, we stop training early to save training costs.

Figure 9. The detailed training process of POSTER V2 on AffectNet (7 cls).

AffectNet (8 cls) Results. Figure 10 shows the exact performance of POSTER V2 on AffectNet (8 cls). We observe a phenomenon on AffectNet (8 cls) similar to that on AffectNet (7 cls): POSTER V2 also reaches saturation in the early stages. The training loss continues to show a decreasing trend, yet there is a small increase in the validation loss. Nevertheless, the training accuracy of POSTER V2 on AffectNet (8 cls) continues to increase, and the validation accuracy has largely reached its optimum and remains constant. Therefore, we apply the same early stopping for POSTER V2 on AffectNet (8 cls) as we do for AffectNet (7 cls).

Figure 10. The detailed training process of POSTER V2 on AffectNet (8 cls).

CAER-S Results. We show the specific training performance of POSTER V2 on CAER-S in Figure 11. Compared with the other datasets, POSTER V2 has a relatively long saturation time on the CAER-S dataset. During the training process, the loss on the POSTER V2 training and validation sets decreases and saturates at a late stage. Meanwhile, the accuracy of POSTER V2 on both the training and validation sets keeps increasing.

Figure 11. The detailed training process of POSTER V2 on CAER-S.
[Figures 7–11: confusion matrices of POSTER V2 on RAF-DB, AffectNet (7 cls), AffectNet (8 cls), and CAER-S, and accuracy/loss curves of train/val versus training epoch (train-accuracy, valid-accuracy, train-loss-x30, valid-loss-x30).]

Figure 12. Comparison of POSTER V2 and POSTER V1 high-dimensional space t-SNE visualization results. POSTER V1 t-SNE visualization results (first row), POSTER V2 t-SNE visualization results (second row).
Figure 13. POSTER V2 cross-fusion stage attention visualization results. For each triplet, we show the input image (left), the landmark image (middle), and the attention map (right).

C. Visualization

T-SNE Visualization. We visualize the high-dimensional features of POSTER V1 and POSTER V2 using t-SNE [44]. As can be seen in Figure 12, both POSTER V2 and POSTER V1 present good t-SNE visualization results on the RAF-DB and CAER-S datasets. There is almost no significant difference between the t-SNE results of POSTER V1 and POSTER V2 on CAER-S, while POSTER V2 has closer intra-class distances than POSTER V1 on RAF-DB. Although both POSTER V1 and POSTER V2 yield poorer t-SNE visualization results on AffectNet (7 cls) and AffectNet (8 cls), the inter-class distance between clusters is larger for POSTER V2 than for POSTER V1. The above results indicate that POSTER V2 is better than POSTER V1 at alleviating the issues of inter-class similarity and intra-class discrepancy in FER.
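The intra-class and inter-class distances discussed above can be quantified directly on the embedded features. A toy pure-Python sketch (with made-up 2-D points, not the paper's features) compares mean intra-class to mean inter-class Euclidean distance, the quantities a t-SNE plot lets one judge visually:

```python
from itertools import combinations
from math import dist

# Toy 2-D embeddings for two expression classes (illustrative points only).
features = {
    "happy": [(0.0, 0.0), (0.3, 0.1), (0.1, 0.4)],
    "sad": [(3.0, 3.0), (3.2, 2.8), (2.9, 3.3)],
}

def mean_intra(points):
    """Mean pairwise distance within one class."""
    pairs = list(combinations(points, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def mean_inter(pts_a, pts_b):
    """Mean distance between points of two different classes."""
    return sum(dist(a, b) for a in pts_a for b in pts_b) / (len(pts_a) * len(pts_b))

intra = sum(mean_intra(p) for p in features.values()) / len(features)
inter = mean_inter(features["happy"], features["sad"])
# Well-separated clusters: inter-class distance far exceeds intra-class.
print(intra < inter)  # True
```

A smaller intra/inter ratio corresponds to the tighter, better-separated clusters claimed for POSTER V2 above.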
Attention Visualization. We visualize the attention map of the highest-level features of the POSTER V2 cross-fusion stage. From Figure 13, we observe that POSTER V2 successfully captures important facial expression features with the help of facial landmark features.
A9E1T4oBgHgl3EQf9QZb/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0488c6985f3e265f615ddad81ae65a442483701da47aab3eb2fbbd9377f9653e
3
+ size 372164
B9E1T4oBgHgl3EQfpgVg/content/2301.03332v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2cb89e6c77a3b378a1ebd199ad04740024ead608106293c925c8e1107be00ced
3
+ size 223382
B9E1T4oBgHgl3EQfpgVg/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4780de80d06c83be2a3c5e0f8137f23e42a474a9716b1eb62d942f808f2698c6
3
+ size 2293805
B9E1T4oBgHgl3EQfpgVg/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:91fb0cc36cbcef92dd7ca7ffd24c0462450a20fe6518a90bbfe81cba949c6043
3
+ size 85177
C9AyT4oBgHgl3EQfSPck/content/tmp_files/2301.00080v1.pdf.txt ADDED
@@ -0,0 +1,479 @@
Impact Invariant Trajectory Optimization of 5-Link Biped Robot Using Hybrid Optimization

Aref Amiri, Hasan Salarieh1
Department of Mechanical Engineering, Sharif University of Technology, Tehran, Iran

Abstract
Bipedal robots have received much attention because of the variety of motion maneuvers they can produce and their many applications in various areas, including rehabilitation. One of these motion maneuvers is walking. In this study, we present a framework for the trajectory optimization of a 5-link (planar) biped robot using hybrid optimization. Walking is modeled with two phases: a single-stance (support) phase and a collision phase. The dynamic equations of the robot in each phase are derived by the Lagrange method. It is assumed that the heel strike of the robot with the ground is fully plastic. The gait is optimized with a method called hybrid optimization. The objective function of this problem is the integral of torque squared along the trajectory, and various constraints, such as zero dynamics, are satisfied without any approximation. Furthermore, in a new framework, a constraint called impact invariance is presented, which ensures the periodicity of the time-varying trajectories. Other constraints provide better and more human-like movement.

Keywords: Trajectory optimization, bipedal robots, walking robots, zero dynamics
+
1. Introduction
The mechanism of movement and transfer of objects has always been one of the most important and active areas of human research. Because of the limitations of moving with wheels, replacing them with legs is an attractive but difficult option, so this field is a hot topic in today's robotics world. With the advancement of robotics science and the usefulness of this problem, much research has been done on the design, optimization, and control of legged robots [1-6]. As the science of bipedal robots has advanced in recent years, there have been significant efforts to improve the performance of these robots in important maneuvers such as walking and running, but research is still ongoing to find ideal answers [7,8]. Designing reference trajectories for human walking cycles is very important, and several techniques have been adopted to define such trajectories. So far, many researchers have studied low-energy (or low-input-torque) paths for bipedal robots [7,9]. We are looking for a periodic path that meets a specific goal in terms of speed and minimizes the torque required to produce the gait. In general, this open and non-trivial problem is solved by finding numerical answers. Various parameters can be considered in the optimization; for example, constraints on torques, Cartesian coordinates, or joint coordinates can be used [10-12]. Many authors have used polynomial functions for the Cartesian coordinates of the swing leg's foot, the hip, and the trunk angle [13,14]. Polynomial functions are used for the joint coordinates to limit the number of optimization parameters [15]. The optimal path for each joint coordinate is usually written in the form of a polynomial with unknown coefficients, which are obtained through the optimization process [15]. For all bipedal robots, it is important to define optimal periodic motions despite the number of actuators being less than the degrees of freedom of the system; in addition, there is a zero-dynamics problem that must be satisfied during optimization.
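To make this parameterization concrete, one common scheme (sketched here with invented numbers, not the paper's actual trajectories) writes each joint quantity as a polynomial in time and scores a candidate gait by the integral of torque squared over one step, approximated with a trapezoidal rule:

```python
def poly_eval(coeffs, t):
    """Evaluate a polynomial with coefficients [a0, a1, ...] at time t."""
    return sum(a * t ** k for k, a in enumerate(coeffs))

def torque_sq_integral(torque, t0, t1, n=200):
    """Trapezoidal approximation of the integral of torque(t)**2 over [t0, t1]."""
    h = (t1 - t0) / n
    total = 0.5 * (torque(t0) ** 2 + torque(t1) ** 2)
    total += sum(torque(t0 + i * h) ** 2 for i in range(1, n))
    return total * h

# Hypothetical torque profile for one joint over a 1-second step.
coeffs = [0.5, -1.0, 2.0]  # tau(t) = 0.5 - t + 2 t^2 (illustrative only)
cost = torque_sq_integral(lambda t: poly_eval(coeffs, t), 0.0, 1.0)
print(round(cost, 3))  # 0.55; the exact integral for this polynomial is 0.55
```

An optimizer would adjust the polynomial coefficients of all joints to minimize this cost while the periodicity, impact-invariance, and zero-dynamics constraints are enforced.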
38
In this paper, a new method is presented to produce a periodic path for the walking of bipedal robots that satisfies the impact invariance constraint. Also, in order to achieve a feasible trajectory, the zero dynamics constraint is satisfied without any approximation. In addition, by considering some other kinematic and dynamic constraints, and using the hybrid optimization method, an optimal reference trajectory for human-like walking of bipedal robots is presented.

1 P.O.B. 11155-9567, Tehran, Iran
Section 2 presents the dynamics and kinematics of the biped robot model. Section 3 is devoted to the formulation of the optimization variables. The constraints are defined in Section 4, and the optimization method is described in Section 5. Finally, Sections 6 and 7 give the results and the conclusion.
2. Dynamics and kinematics
Bipedal robots have different dynamics depending on their movement maneuvers. For example, a running robot with 5 links and without an ankle actuator has 7 degrees of freedom and only 4 actuators in the flight phase, so the system is under-actuated by 3 degrees. Here a bipedal walking robot is examined. We assume that the stance foot stays completely on the ground and does not slip while walking. During the single support phase, one leg is off the ground, and the phase ends when the swing leg hits the ground. In this phase, the model has 5 degrees of freedom and needs at least 5 generalized coordinates to describe the system. On the other hand, the robot has only 4 actuators, so the system has one degree of under-actuation. In under-actuated systems, the part of the dynamics that is not affected by the actuators is called the zero dynamics; here, the zero dynamics is driven only by gravity. The robot can be modeled with absolute or relative angles; if relative angles are used, the zero dynamics can easily be separated from the total dynamics. Figure 1 shows the absolute and relative coordinates of a 5-link robot with point feet.

Figure 1 Relative and absolute angles
The general hybrid walking gait model is obtained by combining the single support phase model and the impact model:

Σ: { ẋ = f(x) + g(x)u,   x⁻ ∉ Γ
     x⁺ = Δ(x⁻),         x⁻ ∈ Γ    (1)
where Δ is a mapping that transforms the states just before the contact to the states just after the contact. x := (qᵀ, q̇ᵀ)ᵀ is the state vector, in which q := (q₁, q₂, …, qₙ)ᵀ is the vector of joint coordinates and q̇ := (q̇₁, q̇₂, …, q̇ₙ)ᵀ is the vector of angular velocities; x⁺ denotes the state vector just after the impact, and x⁻ denotes it just before this event.
The switching set is defined as

Γ = {(q, q̇) ∣ P_v(q) = 0, P_h(q) > 0}    (2)
P_v(q) and P_h(q) denote the vertical and horizontal positions of the swing leg's foot, respectively. Modeling the single support phase alone gives:

M(q)q̈ + c(q, q̇)q̇ + G(q) = (0, Uᵀ)ᵀ    (3)
where M(q) ∈ ℜⁿˣⁿ (n = 5) is the inertia matrix, c(q, q̇) ∈ ℜⁿˣⁿ is the Coriolis matrix, and G(q) ∈ ℜⁿ is the gravity vector. As shown in Figure 2, the robot does not have any actuator at the feet, i.e., there is no ankle joint actuator, so the robot is under-actuated, which adds a zero dynamics constraint to the problem as mentioned in [16]. The vector U ∈ ℜⁿ⁻¹ is as follows:

U = [τ₂, τ₃, τ₄, τ₅]ᵀ    (4)

which represents the 4 actuators (torques) on the robot: 2 at the pelvis (hip) and one at each knee. By separating the equations of (3), the first equation, which produces the zero dynamics, is written as:
∑_{j=1}^{n} (M_{1,j} q̈_j + c_{1,j} q̇_j) + G_1 = 0    (5)

which is called the zero hybrid dynamics, and

∑_{j=1}^{n} (M_{i,j} q̈_j + c_{i,j} q̇_j) + G_i = τ_{i−1}    (6)

are the other rows of equation (3) (i = 2, …, 5).

Figure 2 Robot configuration and control torques
The trunk angle is assumed to be independent of the other links, with a separate actuator; in other words, one actuator is responsible for moving the trunk. So if we temporarily separate the trunk from the other components, we are faced with a 4-degree-of-freedom system. By specifying the position of the swing leg's foot (link number 5 in Figure 2), the system still has 2 degrees of freedom, so the inverse kinematics has infinitely many solutions. By also specifying the position of the hip, 2 more degrees of freedom are fixed; in this case, the inverse kinematics has 4 solutions. Among these 4 solutions, the only acceptable ones are those that satisfy the condition that the knees do not bend backward. It is important to note that, in order to find a suitable periodic solution, we assume that the initial configuration of each step is the same as the final one.
3. Optimization variables
One convenient way is to express the angle of each link as a polynomial function of time with a series of unknown coefficients. This choice gives a smooth function of time. Here it is assumed that each angle is a polynomial function of degree 4. It should be noted that the initial and final configurations of the system in each step determine two of the polynomial coefficients, and the impact invariance constraint fixes another coefficient. Therefore, in order to have at least 2 optimization parameters for each angle, we consider a fourth-order polynomial function with unknown coefficients for the trajectory of each angle.
q_k(t) = ∑_{i=0}^{4} α_{k,i} tⁱ    (k = 1, …, 5)    (7)
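As an illustration, the parameterization in Eq. (7) can be evaluated as follows. This is a minimal sketch; the coefficient values below are arbitrary placeholders (only the constant terms are taken from the initial configuration in Table 4), not the optimized coefficients:

```python
# Quartic joint trajectories, q_k(t) = sum_{i=0}^{4} alpha_{k,i} t^i  (Eq. 7).
# Coefficients are placeholders; in the paper they come from the optimizer.

def eval_joint(coeffs, t):
    """Evaluate one joint trajectory at time t via Horner's rule."""
    q = 0.0
    for a in reversed(coeffs):          # coeffs = [a0, a1, a2, a3, a4]
        q = q * t + a
    return q

def eval_gait(alpha, t):
    """alpha: 5 coefficient lists (one per joint); returns [q1..q5](t)."""
    return [eval_joint(c, t) for c in alpha]

alpha = [[-0.1681, 0.5, 0.2, -0.1, 0.05],   # q1 (placeholder coefficients)
         [0.3073, 0.0, 0.1, 0.0, -0.02],    # q2
         [-0.6499, 0.8, 0.0, 0.0, 0.0],     # q3
         [0.0064, 0.6, 0.1, 0.0, 0.0],      # q4
         [0.3073, 0.0, 0.0, 0.0, 0.0]]      # q5

q0 = eval_gait(alpha, 0.0)   # at t = 0 only the constant terms a_{k,0} remain
```

Note how the constant terms directly encode the initial configuration, which is one of the two coefficients per joint fixed by the boundary conditions.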
4. Definition of constraints
These constraints ensure a suitable walking trajectory: they keep the shapes of the joint trajectories, the link orientations, and the torques required for walking within reasonable ranges. The constraints are defined as follows:
1) Constraints on the initial and final configuration: The initial and final configurations of the robot must be specified. Since the robot moves in a periodic pattern, its initial and final configurations must coincide.

q(t=0) = q_initial ,   q(t=T) = q_final    (8)
2) Knee movement constraints: In order to have a human-like movement, the robot's knees should not open or close excessively (m₁ and m₂ are two pre-specified upper bounds in Eq. (9), and q_k1, q_k2 denote the knee joint angles).

m₁ ≥ q_k1(t) ≥ 0 ,   m₂ ≥ q_k2(t) ≥ 0    (9)
3) Swing leg's foot constraint: The swing leg's foot should not touch the ground except at the beginning and end of the phase.

p_v(0) = p_v(T) = 0 ,   p_v(t) > 0 for 0 < t < T    (10)
4) Limitation of torques: Owing to the physical limitations of the motors, the actuator torques have a certain limit.

|τ_{i−1}(t)| ≤ τ_max ,   i = 2, …, 5    (11)
5) Limitation of angular velocities: Owing to the physical limitations of the motors, the joint velocities also have a certain limit.

|q̇_i(t)| ≤ q̇_max ,   i = 1, …, 5    (12)
6) Limitation of the friction coefficient: The ratio of the horizontal to the vertical ground reaction at the stance foot, which results from the accelerations of the various members of the robot, must remain below the coefficient of friction between the foot and the ground.

|F_x / F_y| ≤ μ    (13)

In the above equation, μ is the coefficient of friction, and F_x and F_y are the horizontal and vertical ground reactions, respectively.
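The inequality constraints (9) and (11)-(13) can be checked pointwise along a candidate trajectory. The following is an illustrative sketch, not the paper's code: the velocity, torque, and friction bounds are the values from Table 3, while the knee bound and the sample states are arbitrary placeholders:

```python
def walking_constraints_ok(q_knees, qdot, torques, Fx, Fy,
                           knee_max=1.0, qdot_max=5.0, tau_max=150.0, mu=0.7):
    """Pointwise feasibility check for constraints (9), (11)-(13).
    q_knees: knee joint angles; qdot: joint rates; torques: actuator torques;
    Fx, Fy: horizontal/vertical ground reactions. knee_max is a placeholder."""
    if any(not (0.0 <= qk <= knee_max) for qk in q_knees):   # Eq. (9)
        return False
    if any(abs(t) > tau_max for t in torques):               # Eq. (11)
        return False
    if any(abs(w) > qdot_max for w in qdot):                 # Eq. (12)
        return False
    if Fy <= 0.0 or abs(Fx / Fy) > mu:                       # Eq. (13) + unilateral contact
        return False
    return True
```

A sample state within all bounds passes; violating any single bound (e.g. an over-bent knee or a friction ratio above 0.7) fails the check.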
7) Zero dynamics constraint: Satisfying this constraint is important in two ways. First, if it is not satisfied, the problem of optimizing the input torques is practically meaningless, because the resulting torques cannot actually be applied to the system. A trajectory that violates it may be kinematically feasible, but it is not feasible in terms of control; in other words, it is not dynamically possible.
8) Impact invariance constraint: This constraint means that, in order to produce a periodic motion, not only the configuration but also the velocities at the beginning of each cycle should be exactly the same as in the previous cycle. Since the velocities after the impact depend on the velocities before the impact, satisfying this constraint adjusts the pre-impact velocities so as to guarantee the periodicity of the motion. This purpose is achieved through the following formulation. First, the impact mapping is written as
q̇⁺ = Δ̃(q⁻) q̇⁻    (14)

Δ̃(q⁻) ∈ ℜⁿˣⁿ is the impact mapping, which maps the angular rates of the legs before contact to the angular rates after contact. The inverse of Δ̃ is denoted by

η̃(q⁻) = (Δ̃(q⁻))⁻¹    (15)

so q̇⁻ can be found as:

q̇⁻ = η̃(q⁻) q̇⁺    (16)
The mathematical formulation of this mapping is obtained from the governing differential equations of the system. After the swing leg's foot hits the ground, the positions do not change but the angular velocities do, which can be expressed as follows (see [17] for more information):

Δq̇ = M⁻¹ Jᵀ (J M⁻¹ Jᵀ)⁻¹ Δv_e    (17)

where v_e is the velocity vector of the end of the swing leg and M ∈ ℜⁿˣⁿ is the inertia matrix mentioned in (3); the matrix J ∈ ℜᵐˣⁿ (m = 2 for planar motions) is obtained as:

J = ∂p_e / ∂q    (18)

where p_e is the position of the end of the swing leg. Assuming that the swing leg sticks to the ground after impact, the velocity of the swing leg's foot after impact is zero, so

q̇⁺ = q̇⁻ + M⁻¹ Jᵀ (J M⁻¹ Jᵀ)⁻¹ (−v_e)    (19)
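A numerical sketch of the post-impact velocity map in Eq. (19) follows, assuming the contact point sticks after a rigid plastic impact so that v_e = J q̇⁻. The mass matrix and contact Jacobian below are placeholder values for a 3-coordinate toy system, not the robot's actual dynamics:

```python
import numpy as np

def post_impact_velocity(M, J, qdot_minus):
    """Eq. (19): qdot_plus = qdot_minus - M^-1 J^T (J M^-1 J^T)^-1 v_e,
    with v_e = J @ qdot_minus (contact point velocity is zeroed by the impulse)."""
    Minv = np.linalg.inv(M)
    v_e = J @ qdot_minus
    lam = np.linalg.solve(J @ Minv @ J.T, v_e)   # impulse magnitudes
    return qdot_minus - Minv @ J.T @ lam

# Placeholder example: unit inertia, contact constrains the first coordinate.
M = np.eye(3)
J = np.array([[1.0, 0.0, 0.0]])
qdot_minus = np.array([0.5, -0.2, 1.0])
qdot_plus = post_impact_velocity(M, J, qdot_minus)
# After the impact, the contact-point velocity J @ qdot_plus is zero, while
# the unconstrained coordinates are unchanged (here because M is diagonal).
```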
We know that, due to the placement of a leg on the ground, we can write:

v_e = α(q) q̇    (20)

where α(q) is:

α(q) = ∂v_e / ∂q̇    (21)

Finally, by substituting (20) into (19) and separating q̇⁻, the pre-impact angular velocity is obtained as follows:

q̇⁻ = (I − M⁻¹ Jᵀ (J M⁻¹ Jᵀ)⁻¹ α(q))⁻¹ q̇⁺    (22)
where I ∈ ℜⁿˣⁿ is the identity matrix. In the above relation, both velocity vectors must be written in the same coordinate system, which requires a coordinate transformation, because the coordinate system changes after the impact due to the exchange of the roles of the legs. For this purpose, consider the following mapping, which converts absolute angles and angular velocities to relative ones:

^{1,rel}κ = H ^{1,abs}κ    (23)

where κ ∈ ℜⁿ can be the vector of angles, angular velocities, or angular accelerations; the superscripts 1,rel and 1,abs represent the relative and absolute coordinates in which the vectors are defined, and H ∈ ℜⁿˣⁿ is a square matrix. On the other hand, we have a mapping that converts the old and new coordinates to each other. This mapping can only be defined for absolute angular coordinates. If we define the absolute coordinates in this way, we have:

¹ψ = Γ ²ψ    (24)

where the indices 1 and 2 indicate the coordinate systems before and after the impact, ψ ∈ ℜⁿ can be the velocity or angular acceleration vector, and Γ ∈ ℜⁿˣⁿ is the mapping matrix. Finally, with the above transformations, the coordinate systems can be connected suitably as:

¹q̇⁺ = H Γ H⁻¹ ²q̇⁺    (25)

So the impact invariance during walking is written as follows:

q̇⁻ = (I − M⁻¹ Jᵀ (J M⁻¹ Jᵀ)⁻¹ α(q))⁻¹ H Γ H⁻¹ q̇⁺    (26)

As a result, according to equation (26), the impact invariance constraint is obtained. By satisfying this equality constraint, the velocity after the impact will coincide with the initial velocity of the previous cycle.
5. Optimization
According to Figure 3, the optimization is performed using a hybrid method. First, with the penalty method, the constrained problem is converted into an unconstrained one. Then, using the genetic algorithm, the first level of optimization is applied. Finally, in the second level, the outputs of the first level are used as the input of a gradient-based method and the problem is solved. The objective function is the integral of the squared Euclidean norm of the input torques:
J(α) = ∫₀^{T(ζ⁻)} ‖U_α(t)‖² dt = ∫₀^{T(ζ⁻)} ⟨τ, τ⟩ dt    (27)
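The two-level scheme (a global search followed by a local refinement) can be illustrated in miniature. The sketch below deliberately uses a simple seeded random search in place of a full genetic algorithm and finite-difference gradient descent in place of the interior-point solver, purely to show the hand-off between the two levels; the toy objective stands in for J(α):

```python
import random

def stage1_random_search(F, dim, n_samples=2000, lo=-12.0, hi=12.0, seed=0):
    """Level 1 (GA stand-in): coarse global search over the initial range."""
    rng = random.Random(seed)
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    for _ in range(n_samples):
        cand = [rng.uniform(lo, hi) for _ in range(dim)]
        if F(cand) < F(best):
            best = cand
    return best

def stage2_gradient_descent(F, x, step=1e-2, eps=1e-6, iters=500):
    """Level 2 (interior-point stand-in): refine the level-1 output with
    finite-difference gradient descent."""
    x = list(x)
    for _ in range(iters):
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += eps
            grad.append((F(xp) - F(x)) / eps)
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x

# Toy objective with minimum at (1, -2), standing in for the penalized J(alpha).
F = lambda a: (a[0] - 1.0) ** 2 + (a[1] + 2.0) ** 2
x1 = stage1_random_search(F, dim=2)       # level-1 output
x2 = stage2_gradient_descent(F, x1)       # level-2 refinement of x1
```

The key design point mirrored here is that the second stage never restarts from scratch: it inherits the first stage's best candidate as its initial condition.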
where T(ζ⁻) corresponds to the step duration, and U_α(t) is the resulting torque obtained from (3) along the periodic solution of the hybrid zero dynamics. To solve the problem more easily and accurately, the configuration constraints are satisfied within the parameterization itself: 2 coefficients of each coordinate, i.e., a total of 10 parameters of equation (7), are determined by the configuration constraints.
Figure 3 Optimization diagram
1: Setting: type of optimization variables, desired velocity, initial and final configuration.
2: Initializing reduces the number of variables and simplifies the optimization.
3: Using penalty/barrier functions, the constrained problem becomes unconstrained:
F(x, r) = f(x) + P(h(x), g(x), r)
where f(x) is the cost function, h(x) is the vector of equality constraints, g(x) is the vector of inequality constraints, r is a vector of penalty parameters, and P is a real-valued function whose action of imposing the penalty on the cost function is controlled by r.

According to equation (7), the number of unknown coefficients of a fourth-order polynomial is 5. Due to the existence of 5 independent angles, the total number of unknown coefficients in the problem is 25. By determining the initial and final configurations of the robot, the number of optimization variables for this problem is reduced to 15 (by initializing).
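The penalty transformation in step 3 above can be sketched with a generic quadratic exterior penalty; the function names are illustrative, not from the paper's code:

```python
def penalized_cost(f, h_list, g_list, x, r):
    """F(x, r) = f(x) + r * (sum h_i(x)^2 + sum max(0, g_j(x))^2),
    with equality constraints h_i(x) = 0 and inequalities g_j(x) <= 0."""
    eq_viol = sum(h(x) ** 2 for h in h_list)
    ineq_viol = sum(max(0.0, g(x)) ** 2 for g in g_list)
    return f(x) + r * (eq_viol + ineq_viol)

# Toy example: minimize x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
f = lambda x: x ** 2
g = lambda x: 1.0 - x
F_feasible = penalized_cost(f, [], [g], 2.0, r=100.0)    # no penalty term active
F_infeasible = penalized_cost(f, [], [g], 0.0, r=100.0)  # violation is penalized
```

At a feasible point the penalty term vanishes and F reduces to f; at an infeasible point the squared violation, scaled by r, dominates the cost, which steers an unconstrained optimizer back toward the feasible set.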
6. Results
The simulation is based on the specifications of the RABBIT robot (Table 1). As a summary, the nonlinear constrained optimization problem is first converted into an unconstrained problem by the penalty method; then, with the values and parameters in Tables 2 and 3, the first-layer optimization problem is solved using the genetic algorithm. Next, the outputs of the first layer of optimization are taken as the starting point (initial condition) of the second layer of optimization. The maximum violation of the constraints is set to 0.01, and the maximum number of iterations of the interior-point algorithm is 20. The initial and final configurations of the system, as well as other specifications and constraint bounds, are given in Tables 3 and 4, respectively.
Table 1 RABBIT parameters [18]
Symbol   Value        Name
m1, m5   3.2 kg       mass of lower leg
m2, m4   6.8 kg       mass of upper leg
m3       20 kg        mass of trunk
I1, I5   0.93 kg·m²   rotational inertia of lower leg, about its center of mass
I2, I4   1.08 kg·m²   rotational inertia of upper leg, about its center of mass
I3       2.22 kg·m²   rotational inertia of trunk, about its center of mass
l1, l5   0.4 m        length of lower leg
l2, l4   0.4 m        length of femur
l3       0.625 m      length of trunk
d1, d5   0.128 m      distance from lower leg center of mass to knee
d2, d4   0.163 m      distance from upper leg center of mass to hip
d3       0.2 m        distance from trunk center of mass to hip
Table 2 Quantities and specifications of the genetic algorithm
Population size      300
Initial range        [-12, 12]
Elite count          15
Crossover fraction   0.8
Migration fraction   0.2
Stall generations    50
Function count       10401
Table 3 Problem physical parameters and constraints
Maximum angular rate           5 rad/s
Maximum actuator torque        150 N·m
Step length                    0.5 m
Velocity                       1 m/s
Maximum friction coefficient   0.7
Table 4 Initial and final configuration
Relative angle   Initial value (t=0)   Final value (t=T)
q1               -0.1681               0.4754
q2               0.3073                0.3073
q3               -0.6499               -0.0064
q4               0.0064                0.6499
q5               0.3073                0.3073
Figure 4 Phase plots of joint angles vs. joint angular rates
As can be seen in Figure 4, the simulation results show that optimization with the zero dynamics constraint produces an ideal limit cycle for the walking biped. The angular velocities, like the angles, are quite smooth, without jumps or discontinuities, and they remain far from their saturation limit (5 rad/s).
Figure 5 Ground reaction forces and friction coefficient
Figure 5 shows that the vertical ground reaction force remains positive, which ensures that the stance foot does not leave the ground, and that the required static friction coefficient between the foot and the ground takes desirable values that do not reach the upper bound [19].
Figure 6 Input torques
As can be seen in Figure 6, the torques are smooth and remain far from their saturation limits.
Figure 7 Walking motion

Figure 8 Position of the swing leg's foot
As shown in Figures 7 and 8, the swing leg's foot does not touch the ground except at the beginning and end of the phase.
7. Conclusion
This paper proposes a two-layer framework for generating optimal time-varying trajectories for bipedal robots. The novelties of the proposed work are formulating and satisfying the impact invariance constraint in a new way, which ensures the periodicity of the gait in each phase, while simultaneously satisfying the hybrid zero dynamics without any approximation. To find a better optimal solution, a hybrid optimization scheme is used; in addition, various constraints are imposed to obtain a better motion of the robot. The simulation results confirm the accuracy of the proposed method and the obtained optimal solution.
References
[1] Shi, F., Homberger, T., Lee, J., Miki, T., Zhao, M., Farshidian, F., ... & Hutter, M. (2020). Circus ANYmal: A quadruped learning dexterous manipulation with its limbs. arXiv preprint arXiv:2011.08811.
[2] Grizzle, J. W., Hurst, J., Morris, B., Park, H. W., & Sreenath, K. (2009). MABEL, a new robotic bipedal walker and runner. In 2009 American Control Conference (pp. 2030-2036). IEEE.
[3] Kakaei, M. M., & Salarieh, H. (2020). New robust control method applied to the locomotion of a 5-link biped robot. Robotica, 38(11), 2023-2038.
[4] Meghdari, A., et al. (2008). A novel method of gait synthesis for bipedal fast locomotion. Journal of Intelligent and Robotic Systems, 53(2), 101-118.
[5] Wright, J., & Jordanov, I. (2015). Intelligent approaches in locomotion - a review. Journal of Intelligent & Robotic Systems, 80(2), 255-277.
[6] Tzafestas, S. G., Krikochoritis, T. E., & Tzafestas, C. S. (1997). Robust sliding-mode control of nine-link biped robot walking. Journal of Intelligent and Robotic Systems, 20(2), 375-402.
[7] Khan, A. T., Li, S., & Zhou, X. (2021). Trajectory optimization of 5-link biped robot using beetle antennae search. IEEE Transactions on Circuits and Systems II: Express Briefs, 68(10), 3276-3280.
[8] Li, J., et al. (2022). Online robust gait generator of biped robots inspired by human anti-disturbance strategies. Journal of Intelligent & Robotic Systems, 105(1), 1-16.
[9] Beletskii, V. V., Berbyuk, V. E., & Samsonov, V. A. (1982). Parametric optimization of motions of a bipedal walking robot. Mechanics of Solids, 17(1), 24-35.
[10] Selim, E., Alcı, M., & Altıntas, M. (2021). Variable-time-interval trajectory optimization-based dynamic walking control of bipedal robot. Robotica, 1-21.
[11] Westervelt, E. R., Grizzle, J. W., & Koditschek, D. E. (2003). Hybrid zero dynamics of planar biped walkers. IEEE Transactions on Automatic Control, 48(1), 42-56.
[12] Wang, H., et al. (2020). Finite-time stabilization of periodic orbits for under-actuated biped walking with hybrid zero dynamics. Communications in Nonlinear Science and Numerical Simulation, 80, 104949.
[13] Sarkar, A., & Dutta, A. (2019). Optimal trajectory generation and design of an 8-dof compliant biped robot for walk on inclined ground. Journal of Intelligent & Robotic Systems, 94(3), 583-602.
[14] Tlalolini, D., Chevallereau, C., & Aoustin, Y. (2009). Comparison of different gaits with rotation of the feet for a planar biped. Robotics and Autonomous Systems, 57(4), 371-383.
[15] Chevallereau, C., & Aoustin, Y. (2001). Optimal reference trajectories for walking and running of a biped robot. Robotica, 19(5), 557-569.
[16] Kelly, M. (2017). An introduction to trajectory optimization: How to do your own direct collocation. SIAM Review, 59(4), 849-904.
[17] Zheng, Y.-F., & Hemami, H. (1985). Mathematical modeling of a robot collision with its environment. Journal of Robotic Systems, 2(3), 289-307.
[18] Chevallereau, C., et al. (2003). RABBIT: A testbed for advanced control theory. IEEE Control Systems Magazine, 23(5), 57-79.
[19] Channon, P. H., Hopkins, S. H., & Pham, D. T. (1992). Derivation of optimal walking motions for a bipedal walking robot. Robotica, 10(2), 165-172.
C9AyT4oBgHgl3EQfSPck/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,316 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf,len=315
2
+ page_content='Impact Invariant Trajectory Optimization of 5-Link Biped Robot Using Hybrid Optimization Aref Amiri, Hasan Salarieh1 Department of Mechanical Engineering, Sharif University of Technology, Tehran, Iran Abstract Bipedal robots have received much attention because of the variety of motion maneuvers that they can produce, and the many applications they have in various areas including rehabilitation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
3
+ page_content=' One of these motion maneuvers is walking.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
4
+ page_content=' In this study, we presented a framework for the trajectory optimization of a 5-link (planar) Biped Robot using hybrid optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
5
+ page_content=' The walking is modeled with two phases of single-stance (support) phase and the collision phase.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
6
+ page_content=' The dynamic equations of the robot in each phase are extracted by the Lagrange method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
7
+ page_content=' It is assumed that the robot heel strike to the ground is full plastic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
8
+ page_content=' The gait is optimized with a method called hybrid optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
9
+ page_content=' The objective function of this problem is considered to be the integral of torque-squared along the trajectory, and also various constraints such as zero dynamics are satisfied without any approximation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
10
+ page_content=' Furthermore, in a new framework, there is presented a constraint called impact invariance, which ensures the periodicity of the time-varying trajectories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
11
+ page_content=' On the other hand, other constraints provide better and more human-like movement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
12
+ page_content='. Keywords: Trajectory optimization, bipedal robots, walking robots, zero dynamics;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
13
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
14
+ page_content=' Introduction The mechanism of movement and transfer of objects has always been one of the most important and active areas of human research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
15
+ page_content=" Due to the limitations of moving with a wheel, replacing it with feet is an attractive but difficult option, so this field is a hot topic in today's robotic world." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
16
+ page_content=' With the advancement of robotics science and the usefulness of this issue, a lot of research has been done on the design, optimization, and control of legged robots [1-6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
17
+ page_content=' As the science of bipedal robots has advanced in recent years, there have been significant efforts to improve the performance of these robots in important maneuvers, such as walking and running, but research is still ongoing to find ideal answers [7,8].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
18
+ page_content=' Designing reference trajectories for human walking cycles is very important.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
19
+ page_content=' Several techniques have been adopted to define reference trajectories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
20
+ page_content=' So far, many researchers have studied low-energy (or low input torques) paths for bipedal robots [7,9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
21
+ page_content=' We are looking for a periodic path that meets a specific goal in terms of speed and minimizes the torque required to produce the gate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
22
+ page_content=' In general, this open and non-trivial problem is solved by finding numerical answers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
23
+ page_content=' Various parameters can be considered to optimize the problem, for example, torques, Cartesian coordinate or joint coordinates constraints can be used[10-12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
24
+ page_content=' Many authors have used polynomial functions for Cartesian coordinates of swing leg’s foot, hip, and trunk angle [13,14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
25
+ page_content=' Polynomial functions are used for the coordinates of the joints to limit the number of optimization parameters [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
26
+ page_content=' The optimal path for each coordinate of joints is usually written in the form of polynomials with unknown coefficients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
27
+ page_content=' The coefficients should be obtained through the optimization process [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
28
+ page_content=' For all bipedal robots, it is important to define optimal periodic motions despite the fact that the number of actuators is less than the degree of freedom of the system, and also zero dynamics problem there exists which should be satisfied during optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
29
+ page_content=' In this paper, a new method is presented to produce a periodic path for the walking of bipedal robots which satisfies the impact invariance constraint.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
30
+ page_content=' Also, in order to achieve the feasible trajectory, the zero dynamics constraint is satisfied without any approximation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
31
In addition, by considering several other kinematic and dynamic constraints and using a hybrid optimization method, an optimal reference trajectory for human-like walking of bipedal robots is presented.

1 P.O.B. 11155 9567, Tehran, Iran, salarieh@sharif.edu
Section 2 presents the dynamics and kinematics of the biped robot model. Section 3 is devoted to the formulation of the optimization variables. The constraints are defined in Section 4, and the optimization method is described in Section 5. Finally, Sections 6 and 7 give the results and the conclusion.
2. Dynamics and kinematics

Bipedal robots have different dynamics depending on their movement maneuvers. For example, a running robot with 5 links and no ankle actuator has 7 degrees of freedom in the flight phase and only 4 actuators, so the system is under-actuated by 3 degrees. Here a bipedal walking robot is examined. We assume that the robot remains completely on the ground and does not slip while walking.
When the swing leg hits the ground, the other leg rises from the ground, so the robot is always in a single support phase. During the single support phase, the model has 5 degrees of freedom and needs at least 5 generalized coordinates to describe the system. On the other hand, the robot has only 4 actuators, so the system has one degree of under-actuation. In under-actuated systems, the part of the dynamics that is not affected by the actuators is called the zero dynamics. Here, the zero dynamics is driven only by the earth's gravity. The robot can be modeled with absolute or relative angles; if relative angles are used, the zero dynamics can easily be separated from the total dynamics. Figure 1 shows the absolute and relative coordinates of a 5-link robot with point feet.
Figure 1: Relative and absolute angles (legend: relative coordinates; absolute coordinates; swing leg's foot; stance leg's foot).

The general hybrid walking gait model is obtained by combining the single support phase model and the impact model:

Σ: { ẋ = f(x) + g(x)u,   x⁻ ∉ Γ
    { x⁺ = Δ(x⁻),         x⁻ ∈ Γ     (1)

where Δ is a mapping that transforms the states just before the contact to the states just after the contact, x := (qᵀ, q̇ᵀ)ᵀ is the state vector, q := (q₁, q₂, …, q_n)ᵀ is the vector of joint coordinates, q̇ := (q̇₁, q̇₂, …, q̇_n)ᵀ is the vector of angular velocities, and x⁺ and x⁻ denote the state just after and just before this event, respectively. The switching set is

Γ = {(q, q̇) ∈ X | P_v(q) = 0, P_h(q) > 0}     (2)

where P_v(q) and P_h(q) indicate the vertical and horizontal positions of the swing leg's foot, respectively.
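The structure of Eq. (1) — continuous flow until the switching set Γ is reached, then a discrete reset Δ — can be sketched numerically. The following is a minimal toy sketch only: a falling point mass with a restitution-type reset stands in for the robot's impact map, and all function names and numbers are illustrative assumptions, not the paper's model.

```python
import numpy as np

def simulate_hybrid(f, reset, guard, x0, dt=1e-3, t_end=2.0):
    """Integrate x' = f(x) with forward Euler; when the guard crosses
    zero from above (the switching set is reached), apply the discrete
    reset map, i.e. x+ = Delta(x-)."""
    x, t = np.asarray(x0, float), 0.0
    impacts = 0
    while t < t_end:
        x_new = x + dt * f(x)
        if guard(x) > 0.0 and guard(x_new) <= 0.0:  # x- entered Gamma
            x_new = reset(x_new)                    # x+ = Delta(x-)
            impacts += 1
        x, t = x_new, t + dt
    return x, impacts

# Toy example: point mass falling under gravity; the "impact" reverses
# the velocity with restitution 0.8 (illustrative stand-in for Delta).
f = lambda x: np.array([x[1], -9.81])         # state = [height, velocity]
reset = lambda x: np.array([0.0, -0.8 * x[1]])
guard = lambda x: x[0]                        # plays the role of P_v(q)
xf, n = simulate_hybrid(f, reset, guard, [1.0, 0.0])
```

The point is only the control flow: continuous dynamics, guard detection, reset, repeat, exactly the alternation Eq. (1) expresses.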
Modeling the single support phase alone gives

M(q)q̈ + c(q, q̇)q̇ + G(q) = (0, Uᵀ)ᵀ     (3)

where M(q) ∈ ℜⁿˣⁿ (n = 5) is the inertia matrix, c(q, q̇) ∈ ℜⁿˣⁿ is the Coriolis matrix, and G(q) ∈ ℜⁿ is the gravity vector.
As shown in Figure 2, the robot has no actuator (torque) at the feet, i.e. no ankle joint actuator, so the robot is under-actuated, which adds a zero dynamics constraint to the problem as mentioned in [16]. The input vector U ∈ ℜⁿ⁻¹ is

U = [τ₁, τ₂, τ₃, τ₄]ᵀ     (4)

representing the 4 actuators (torques) on the robot: 2 at the hip joints and 2 at the knee joints, one per leg.
By separating the equations of (3), the first row, which produces the zero dynamics, is written as

∑_{j=1}^{n} (M_{1,j} q̈_j + c_{1,j} q̇_j) + G_1 = 0     (5)

which is called the hybrid zero dynamics, and the remaining rows (i = 2, …, 5) give

∑_{j=1}^{n} (M_{i,j} q̈_j + c_{i,j} q̇_j) + G_i = τ_{i-1}     (6)
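Since the first coordinate is unactuated, the first row of Eq. (3) must balance to zero along any feasible trajectory (Eq. (5)), while the remaining rows yield the required torques (Eq. (6)). A small numeric sketch of this split — the matrices below are made-up placeholders with n = 3 for brevity, not the RABBIT model:

```python
import numpy as np

def split_dynamics(M, C, G, qdd, qd):
    """Evaluate M(q)qdd + C(q,qd)qd + G(q) and split it into the
    unactuated first row (the zero-dynamics residual, Eq. (5)) and the
    actuated rows, whose values are the torques tau_1..tau_{n-1} (Eq. (6))."""
    lhs = M @ qdd + C @ qd + G
    return lhs[0], lhs[1:]

# Illustrative numbers only (symmetric positive-definite M):
M = np.array([[2.0, 0.3, 0.1], [0.3, 1.5, 0.2], [0.1, 0.2, 1.0]])
C = 0.1 * np.eye(3)
G = np.array([0.5, -0.2, 0.1])
qd = np.array([0.1, 0.2, -0.1])

# Choose the accelerations so that the unactuated first row balances:
qdd = np.linalg.solve(M, np.array([-(C @ qd)[0] - G[0], 0.4, -0.3]))
residual, tau = split_dynamics(M, C, G, qdd, qd)
```

On a trajectory produced by the optimizer, the zero-dynamics residual must vanish at every instant; the torques from the other rows are then the control inputs actually applied.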
Figure 2: Robot configuration and control torques.

The trunk angle is assumed independent of the other links, with a separate actuator; in other words, one actuator is responsible for moving the trunk. So if we temporarily set the trunk aside, we are faced with a 4-degree-of-freedom system. By fixing the swing leg's foot (link number 5 in Figure 2), the system still has 2 degrees of freedom, so the inverse kinematics has infinitely many solutions. By additionally fixing the position of the hip, 2 more degrees of freedom are determined; in this case the inverse kinematics has 4 solutions. Among these 4 solutions, the only acceptable ones are those that do not bend the knees backward. It is important to note that, in order to find a suitable periodic solution, we assume the initial configuration is the same as the final one.
3. Optimization variables

A convenient choice is to express the angle of each link as a polynomial function of time with unknown coefficients, which yields a smooth trajectory. Here each angle is assumed to be a polynomial of degree 4. It should be noted that the initial and final configurations of the system in each step determine two of the polynomial coefficients, and the impact invariance constraint fixes another. Therefore, in order to have at least 2 free optimization parameters per angle, we consider a fourth-order polynomial with unknown coefficients for each angle's trajectory.
q_k(t) = ∑_{i=0}^{4} α_{k,i} t^i,   k = 1, …, 5     (7)
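The parameterization of Eq. (7) is easy to evaluate and differentiate in closed form, which the later constraints on velocities and torques rely on. A minimal sketch (the coefficient values are arbitrary examples, not optimized ones):

```python
import numpy as np

def joint_traj(alpha, t):
    """q_k(t) = sum_{i=0}^{4} alpha[k, i] * t**i for each joint k (Eq. (7)).
    alpha has shape (5, 5): 5 joints, 5 polynomial coefficients each."""
    powers = np.vander(np.atleast_1d(t), N=5, increasing=True)  # [1, t, ..., t^4]
    return powers @ alpha.T                                     # (num_times, 5)

def joint_vel(alpha, t):
    """Joint rates from differentiating the polynomial once."""
    dalpha = alpha[:, 1:] * np.arange(1, 5)   # i * alpha_{k,i} -> degree-3 poly
    powers = np.vander(np.atleast_1d(t), N=4, increasing=True)
    return powers @ dalpha.T

# Arbitrary example coefficients: q_1(t) = t + 2 t^4, other joints zero.
alpha = np.zeros((5, 5))
alpha[0] = [0.0, 1.0, 0.0, 0.0, 2.0]
```

During optimization, the same coefficient array α is the decision variable; the optimizer only ever touches these 25 numbers (minus those fixed by the configuration and impact constraints).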
4. Definition of constraints

These constraints serve to find a proper walking trajectory: they keep the shapes of the joint trajectories, the link orientations, and the torques required for walking within reasonable ranges. The constraints are defined as follows:

1) Constraints on the initial and final configuration: the initial and final configurations of the robot must be specified, and since the robot moves in a periodic pattern they must coincide:
+ page_content=" 𝑞(@𝑡=0)=𝑞𝑖𝑛𝑖𝑡𝑖𝑎𝑙 , 𝑞(@𝑡=𝑇)=𝑞𝑓𝑖𝑛𝑎𝑙 , (8) 2) Knee movement constraints: In order to have human-like movement, the robot's knees should not be opened and closed excessively ( 𝑚 and 𝑚 are two pre-especified upper bounds in Eq." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
86
+ page_content=' (9)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
87
+ page_content=" 𝑚 ≥ 𝑞 (𝑡) ≥ 0 , 𝑚 ≥ 𝑞 (𝑡) ≥ 0, (9) 3) Swing leg's foot constraint: The swing leg's foot should not collide with the ground except at the beginning and end of the phase." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
88
+ page_content=' 𝑝(0) 𝑣 = 𝑝(𝑇) 𝑣 = 0 𝑝(𝑡) 𝑣 >0 for 0 < 𝑡 < 𝑇 (10) 4) Limitation of torques: In order to the physical limitations of the motors, the actuator torques have a certain limit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
89
|τ_{i-1}(t)| ≤ τ_max,   i = 2, …, 5     (11)

5) Limitation of angular velocities: owing to the physical limitations of the motors, the joint velocities are bounded:

|q̇_i(t)| ≤ q̇_max,   i = 1, …, 5     (12)

6) Limitation of friction coefficient: the ground reaction at the stance foot, which results from the accelerations of the robot's links, must respect a certain ratio; the ratio of horizontal to vertical reaction must be less than the coefficient of friction between the foot and the ground:

|F_x / F_y| ≤ μ     (13)

where μ is the coefficient of friction, and F_x and F_y are the horizontal and vertical ground reactions, respectively.
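Constraints (11)-(13) are simple inequality checks, which the penalty method of Section 5 turns into additive cost terms. A minimal sketch, assuming a quadratic exterior penalty with weight r; the bound values match Table 3, while the sample inputs are arbitrary:

```python
import numpy as np

def constraint_penalty(tau, qd, Fx, Fy, tau_max=150.0, qd_max=5.0, mu=0.7, r=1e3):
    """Quadratic exterior penalty for the torque, velocity, and friction
    inequality constraints (Eqs. (11)-(13)). Returns 0 when all hold."""
    viol = np.concatenate([
        np.maximum(np.abs(tau) - tau_max, 0.0).ravel(),   # Eq. (11)
        np.maximum(np.abs(qd) - qd_max, 0.0).ravel(),     # Eq. (12)
        np.maximum(np.abs(Fx / Fy) - mu, 0.0).ravel(),    # Eq. (13)
    ])
    return r * np.sum(viol ** 2)

# Feasible sample -> zero penalty; over-limit torque -> positive penalty.
ok = constraint_penalty(np.array([100.0, -20.0]), np.array([1.0]), 10.0, 100.0)
bad = constraint_penalty(np.array([200.0]), np.array([1.0]), 10.0, 100.0)
```

In the actual optimization these checks would be evaluated at many time samples along the candidate trajectory and summed into the penalized cost.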
7) Zero dynamics constraint: satisfying this constraint is important in two ways. First, if it is not satisfied, optimizing the input torques is practically meaningless, because those torques cannot actually be applied to the system. A trajectory may then be kinematically feasible (kinematically possible), yet infeasible in terms of control, or in other words, dynamically impossible.
8) Impact invariance constraint: to produce a periodic motion, not only the configuration but also the velocities at the beginning of each cycle must be exactly the same as in the previous cycle. Since the post-impact velocities depend on the pre-impact velocities, satisfying this constraint adjusts the pre-impact velocities so as to guarantee the periodicity of the motion. This is achieved through the following formulae. First, the impact mapping is written as

q̇⁺ = Δ̃(q⁻) q̇⁻     (14)

where Δ̃(q⁻) ∈ ℜⁿˣⁿ is the impact mapping, which maps the angular rates just before contact to the angular rates just after contact.
The inverse of Δ̃ is denoted by

η̃(q⁻) = (Δ̃(q⁻))⁻¹     (15)

so q̇⁻ can be found as

q̇⁻ = η̃(q⁻) q̇⁺     (16)

The mathematical form of this mapping is obtained from the governing differential equations of the system. After the swing leg's foot hits the ground, the positions do not change but the angular velocities do, according to (see [17] for more information)

Δq̇ = M⁻¹ Jᵀ (J M⁻¹ Jᵀ)⁻¹ Δv_e     (17)

where v_e is the velocity vector of the end of the swing leg, M ∈ ℜⁿˣⁿ is the inertia matrix of (3), and the matrix J ∈ ℜᵐˣⁿ (m = 2 for planar motions) is obtained as

J = ∂p_e / ∂q     (18)

with p_e the position of the end of the swing leg. Assuming that the swing leg sticks to the ground after impact, the velocity of the swing leg's foot after impact is zero, so

q̇⁺ = q̇⁻ + M⁻¹ Jᵀ (J M⁻¹ Jᵀ)⁻¹ (−v_e)     (19)

Because one leg is placed on the ground, we can write

v_e = α(q) q̇     (20)

where

α(q) = ∂v_e / ∂q̇     (21)

Finally, by substituting (20) into (19) and solving for q̇⁻, the pre-impact angular velocity is obtained as

q̇⁻ = (I − M⁻¹ Jᵀ (J M⁻¹ Jᵀ)⁻¹ α(q))⁻¹ q̇⁺     (22)

where I ∈ ℜⁿˣⁿ is the identity matrix.
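Eq. (19) has a built-in sanity check: after applying the impulse term, the contact-point velocity J q̇⁺ must vanish. The sketch below implements the map with a made-up 3-DOF inertia matrix and 2D contact Jacobian (illustrative values, not the robot's):

```python
import numpy as np

def post_impact_velocity(M, J, qd_minus):
    """Rigid plastic impact: map pre-impact joint rates to post-impact
    rates assuming the swing foot sticks (zero foot velocity after impact),
    following Eqs. (17)-(19):
    qd+ = qd- - M^{-1} J^T (J M^{-1} J^T)^{-1} J qd-."""
    Minv_JT = np.linalg.solve(M, J.T)                    # M^{-1} J^T
    Lam = np.linalg.solve(J @ Minv_JT, J @ qd_minus)     # impulse coefficients
    return qd_minus - Minv_JT @ Lam

# Illustrative 3-DOF example:
M = np.diag([2.0, 1.5, 1.0])
J = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.2]])
qd_plus = post_impact_velocity(M, J, np.array([0.3, -0.4, 0.1]))
```

By construction J q̇⁺ = 0, which is exactly the sticking-contact assumption used to derive Eq. (22).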
In the above relation, both velocity vectors must be written in the same coordinate system, which requires a coordinate conversion, because the coordinates change after the collision due to the exchange of the legs' roles. For this purpose, consider the mapping that converts absolute angles and angular rates to relative ones:

κ^rel = H κ^abs     (23)

where κ ∈ ℜⁿ can be the vector of angles, angular velocities, or angular accelerations, the superscripts 'rel' and 'abs' indicate the relative and absolute coordinates in which the vectors are defined, and H ∈ ℜⁿˣⁿ is a square matrix. On the other hand, there is a mapping that converts the old and new coordinates to each other; this mapping can only be defined for absolute angular coordinates. Defining the absolute coordinates in this way, we have

ψ⁽¹⁾ = Γ ψ⁽²⁾     (24)

where the indices 1 and 2 indicate the coordinate system before and after the impact, ψ ∈ ℜⁿ can be the angular velocity or acceleration vector, and Γ ∈ ℜⁿˣⁿ is the mapping matrix. Finally, with the above transformations, the two coordinate systems are connected as

q̇⁺⁽¹⁾ = H Γ H⁻¹ q̇⁺⁽²⁾     (25)

so the impact invariance condition during walking is written as

q̇⁻ = (I − M⁻¹ Jᵀ (J M⁻¹ Jᵀ)⁻¹ α(q))⁻¹ H Γ H⁻¹ q̇⁺     (26)

As a result, Equation (26) gives the impact invariance constraint. By satisfying this equality, the velocity after the impact equals the initial velocity of the previous cycle.
5. Optimization

According to Figure 3, optimization is performed with a hybrid method. This means that first, the constrained problem is converted into an unconstrained one by the penalty method. Then, the genetic algorithm performs the first level of optimization. Finally, in the second level, the outputs of the first level are used as the input of a gradient-based method and the problem is solved.
The objective function is the integrated squared Euclidean norm of the input torques:

J(α) = ∫₀^{T(ζ⁻)} ‖U_α(t)‖² dt = ∫₀^{T(ζ⁻)} ⟨τ, τ⟩ dt     (27)

where T(ζ⁻) corresponds to the step duration, and U_α(t) is the torque obtained from (3) along the periodic solution of the hybrid zero dynamics.
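Numerically, the cost (27) is evaluated by sampling the torques along each candidate trajectory and integrating. A trapezoidal-rule sketch; the sample data below is an arbitrary constant-torque example:

```python
import numpy as np

def torque_cost(tau_samples, T):
    """Approximate J = int_0^T <tau, tau> dt with the trapezoidal rule.
    tau_samples has shape (num_times, num_actuators), taken at uniform
    sample times on [0, T]."""
    integrand = np.sum(tau_samples ** 2, axis=1)   # <tau, tau> per sample
    t = np.linspace(0.0, T, len(integrand))
    return float(np.sum((integrand[:-1] + integrand[1:]) * np.diff(t)) / 2.0)

# Constant torque check: <tau, tau> = 1^2 + 2^2 = 5, over T = 2 -> J = 10.
tau = np.tile([1.0, 2.0], (11, 1))
```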
To solve the problem more easily and accurately, the configuration constraints are satisfied within the parameterization itself. Therefore, 2 coefficients of each coordinate, i.e. 10 parameters of Equation (7) in total, are determined directly by the configuration constraints.
According to Equation (7), the number of unknown coefficients of a polynomial of order 4 is 5; with 5 independent angles, the problem therefore has 25 unknown coefficients. By determining the initial and final configuration of the robot, the number of optimization variables is reduced to 15 (by initializing).

Figure 3: Optimization diagram (setting → initializing → penalty/barrier method → genetic algorithm → gradient-based method, linking the dynamic, kinematic, and physical constraints to the cost function).
1: Setting: type of optimization variables, desired velocity, initial and final configuration.
2: Initialization reduces the number of variables and simplifies the optimization.
3: Using penalty/barrier functions, the constrained problem becomes unconstrained: F(x, r) = f(x) + P(h(x), g(x), r), where f(x) is the cost function, h(x) is the vector of equality constraints, g(x) is the vector of inequality constraints, r is a vector of penalty parameters, and P is a real-valued function whose action of imposing the penalty on the cost function is controlled by r.
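The two-level scheme of Figure 3 can be sketched generically: a cheap global stage seeds a local gradient stage. The sketch below is an illustrative stand-in only — plain random sampling replaces the genetic algorithm, finite-difference descent replaces the interior-point step, and the toy objective is not the walking cost:

```python
import numpy as np

def two_stage_minimize(F, dim, seed=0, n_random=200, n_grad=100, lr=0.1):
    """Hybrid scheme in the spirit of Figure 3: a global stage (random
    sampling over the initial range [-12, 12], standing in for the GA)
    seeds a local stage (finite-difference gradient descent, standing in
    for the gradient-based interior-point refinement)."""
    rng = np.random.default_rng(seed)
    # Stage 1: global exploration.
    cand = rng.uniform(-12, 12, size=(n_random, dim))
    x = cand[np.argmin([F(c) for c in cand])]
    # Stage 2: local refinement by central-difference gradient descent.
    h = 1e-6
    for _ in range(n_grad):
        g = np.array([(F(x + h * e) - F(x - h * e)) / (2 * h)
                      for e in np.eye(dim)])
        x = x - lr * g
    return x

# Toy penalty-augmented objective with minimum at (1, 2):
F = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
x_opt = two_stage_minimize(F, 2)
```

The design motive is the same as in the paper: the global stage avoids the poor local minima a gradient method would fall into from an arbitrary start, and the gradient stage polishes the coarse global answer.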
6. Results

The simulation is based on the specifications of the RABBIT robot (Table 1). As a review: the nonlinear constrained optimization problem is first converted to an unconstrained one by the penalty method; then, with the values and parameters in Tables 2 and 3, the first-layer optimization problem is solved using the genetic algorithm. Next, the outputs of the first layer of optimization are taken as the starting point (initial condition) of the second layer. The maximum constraint violation is set to 0.01 and the maximum number of iterations of the interior-point algorithm to 20. The initial and final configurations of the system, as well as the other specifications and constraint bounds, are given in Tables 4 and 3, respectively.
Table 1: RABBIT parameters [18]

Symbol   Value        Name
m1, m5   3.2 kg       mass of lower leg
m2, m4   6.8 kg       mass of upper leg
m3       20 kg        mass of trunk
I1, I5   0.93 kg·m²   rotational inertia of lower leg, about its center of mass
I2, I4   1.08 kg·m²   rotational inertia of upper leg, about its center of mass
I3       2.22 kg·m²   rotational inertia of trunk, about its center of mass
l1, l5   0.4 m        length of lower leg
l2, l4   0.4 m        length of femur
l3       0.625 m      length of trunk
d1, d5   0.128 m      distance from lower leg center of mass to knee
d2, d4   0.163 m      distance from upper leg center of mass to hip
d3       0.2 m        distance from trunk center of mass to hip

Table 2: Quantities and specifications of the genetic algorithm

Population size      300
Initial range        [-12, 12]
Elite count          15
Crossover fraction   0.8
Migration fraction   0.2
Stall generations    50
Function count       10401

Table 3: Problem physical parameters and constraints

Maximum angular rate           5 rad/s
Maximum actuator torque        150 N·m
Step length                    0.5 m
Velocity                       1 m/s
Maximum friction coefficient   0.7

Table 4: Initial and final configuration

Relative angle   Initial value (t=0)   Final value (t=T)
q1               -0.1681                0.4754
q2                0.3073                0.3073
q3               -0.6499               -0.0064
q4                0.0064                0.6499
q5                0.3073                0.3073
+ page_content='3073 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
172
+ page_content='3073 Figure 4 The phase plots of joint angles vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
173
+ page_content=' Joint angular rates As the results in Figure 4 show, the simulations confirm that optimization subject to the zero-dynamics constraint produces an ideal limit cycle in the walking of the biped.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
174
+ page_content=' The angular velocities, like the angles, are smooth and free of fractures or discontinuities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
175
+ page_content=' They also remain far below their saturation limit (5 rad/s).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
176
+ page_content=' Figure 5 Force reactions and Friction coefficient Figure 5 shows that the ground reaction force remains positive, ensuring that the robot does not lift off the ground, and also shows the static friction coefficient required between the heel and the ground.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
177
+ page_content=' The friction coefficient remains at desirable values that do not reach the upper bound [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
178
+ page_content=' Figure 6 Input torques As Figure 6 shows, the torques are free of fractures and remain far from their saturation limits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
179
+ page_content=" Figure 7 Walking motion Figure 8 Position of the swing leg's foot As shown in Figures 7 and 8, the swing leg does not collide with the ground except at the beginning and end of the phase." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
180
+ page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
181
+ page_content=' Conclusion This paper proposes a two-layer framework for generating optimal time-varying trajectories for bipedal robots.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
182
+ page_content=' The novelties of the proposed work are a new way of presenting and satisfying the impact-invariance constraint, which ensures the periodicity of the gait in each phase, while simultaneously satisfying the hybrid zero dynamics without any approximation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
183
+ page_content=' A hybrid optimization is also used to find a better optimal solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
184
+ page_content=' In addition, various constraints were considered to achieve a better robot motion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
185
+ page_content=' The simulation results confirmed the accuracy of the proposed method and the optimality of the obtained solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
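The genetic-algorithm settings listed in Table 2 (population size 300, elite count 15, crossover fraction 0.8, initial range [-12, 12]) can be sketched as a minimal real-coded GA loop. The cost function, parent-selection scheme and mutation scale below are illustrative placeholders, not the cost or operators actually used in the paper:

```python
import random

def genetic_algorithm(cost, n_genes, pop_size=300, elite_count=15,
                      crossover_frac=0.8, init_range=(-12, 12),
                      generations=50, seed=0):
    """Minimal real-coded GA mirroring the Table 2 settings (illustrative)."""
    rng = random.Random(seed)
    lo, hi = init_range
    pop = [[rng.uniform(lo, hi) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                       # best (lowest cost) first
        elites = pop[:elite_count]               # carried over unchanged
        n_cross = int(crossover_frac * (pop_size - elite_count))
        children = []
        for _ in range(n_cross):                 # arithmetic crossover of two top-half parents
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            w = rng.random()
            children.append([w * a + (1 - w) * b for a, b in zip(p1, p2)])
        mutants = []
        for _ in range(pop_size - elite_count - n_cross):  # remaining slots: Gaussian mutation
            base = rng.choice(pop[:pop_size // 2])
            mutants.append([g + rng.gauss(0, 0.5) for g in base])
        pop = elites + children + mutants
    return min(pop, key=cost)

# Illustrative placeholder cost: squared distance of the gene vector from zero.
best = genetic_algorithm(lambda x: sum(g * g for g in x), n_genes=4)
```

Because the elites are copied into every new generation, the best cost found is non-increasing across generations.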
186
+ page_content=' References [1]Shi, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
187
+ page_content=', Homberger, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
188
+ page_content=', Lee, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
189
+ page_content=', Miki, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
190
+ page_content=', Zhao, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
191
+ page_content=', Farshidian, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
192
+ page_content=', .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
193
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
194
+ page_content=' & Hutter, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
195
+ page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
196
+ page_content=' Circus ANYmal: A Quadruped Learning Dexterous Manipulation with Its Limbs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
197
+ page_content=' arXiv preprint arXiv:2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
198
+ page_content='08811.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
208
+ page_content=' [2]Grizzle, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
209
+ page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
210
+ page_content=', Hurst, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
211
+ page_content=', Morris, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
212
+ page_content=', Park, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
213
+ page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
214
+ page_content=', & Sreenath, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
215
+ page_content=' (2009, June).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
216
+ page_content=' MABEL, a new robotic bipedal walker and runner.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
217
+ page_content=' In 2009 American Control Conference (pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
218
+ page_content=' 2030-2036).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
219
+ page_content=' IEEE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
220
+ page_content=' [3]Kakaei, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
221
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
222
+ page_content=', & Salarieh, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
223
+ page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
224
+ page_content=' New Robust Control Method Applied to the Locomotion of a 5-Link Biped Robot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
225
+ page_content=' Robotica, 38(11), 2023-2038.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
235
+ page_content=' [4]Meghdari, Ali, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
236
+ page_content=' "A novel method of gait synthesis for bipedal fast locomotion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
237
+ page_content='" Journal of Intelligent and Robotic Systems 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
238
+ page_content='2 (2008): 101-118.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
239
+ page_content=' [5]Wright, Joe, and Ivan Jordanov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
240
+ page_content=' "Intelligent approaches in locomotion-a review.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
241
+ page_content='" Journal of Intelligent & Robotic Systems 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
242
+ page_content='2 (2015): 255- 277.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
243
+ page_content=' [6]Tzafestas, Spyros G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
244
+ page_content=', Thanassis E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
245
+ page_content=' Krikochoritis, and Costas S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
246
+ page_content=' Tzafestas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
247
+ page_content=' "Robust sliding-mode control of nine-link biped robot walking.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
248
+ page_content='" Journal of Intelligent and Robotic Systems 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
249
+ page_content='2 (1997): 375-402.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
250
+ page_content=' [7]Khan, Ameer Tamoor, Shuai Li, and Xuefeng Zhou.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
251
+ page_content=' "Trajectory optimization of 5-link biped robot using beetle antennae search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
252
+ page_content='" IEEE Transactions on Circuits and Systems II: Express Briefs 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
253
+ page_content='10 (2021): 3276-3280.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
254
+ page_content=' [8]Li, Jingchao, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
255
+ page_content=' "Online Robust Gait Generator of Biped Robots Inspired by Human Anti-disturbance Strategies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
256
+ page_content='" Journal of Intelligent & Robotic Systems 105.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
257
+ page_content='1 (2022): 1-16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
258
+ page_content=' [9] Beletskii, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
259
+ page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
260
+ page_content=', Berbyuk, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
261
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
262
+ page_content=', & Samsonov, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
263
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
264
+ page_content=' (1982).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
265
+ page_content=' Parametric optimization of motions of a bipedal walking robot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
266
+ page_content=' Mechanics of solids, 17(1), 24-35.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
267
+ page_content=' [10] Selim, Erman, Musa Alcı, and Mert Altıntas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
268
+ page_content=' "Variable-time-interval trajectory optimization-based dynamic walking control of bipedal robot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
269
+ page_content='" Robotica (2021): 1-21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
270
+ page_content=' [11] Westervelt, Eric R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
271
+ page_content=', Jessy W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
272
+ page_content=' Grizzle, and Daniel E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
273
+ page_content=' Koditschek.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
274
+ page_content=' "Hybrid zero dynamics of planar biped walkers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
275
+ page_content='" IEEE transactions on automatic control 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
276
+ page_content='1 (2003): 42-56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
277
+ page_content=' [12] Wang, Helin, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
278
+ page_content=' "Finite-time stabilization of periodic orbits for under-actuated biped walking with hybrid zero dynamics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
279
+ page_content='" Communications in Nonlinear Science and Numerical Simulation 80 (2020): 104949.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
280
+ page_content=' [13]Sarkar, Abhishek, and Ashish Dutta.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
281
+ page_content=' "Optimal trajectory generation and design of an 8-dof compliant biped robot for walk on inclined ground.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
282
+ page_content='" Journal of Intelligent & Robotic Systems 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
283
+ page_content='3 (2019): 583-602.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
284
+ page_content=' [14]Tlalolini, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
285
+ page_content=', Chevallereau, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
286
+ page_content=', & Aoustin, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
287
+ page_content=' (2009).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
288
+ page_content=' Comparison of different gaits with rotation of the feet for a planar biped.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
289
+ page_content=' Robotics and Autonomous Systems, 57(4), 371-383.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
290
+ page_content=' [15] Chevallereau, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
291
+ page_content=', & Aoustin, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
292
+ page_content=' (2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
293
+ page_content=' Optimal reference trajectories for walking and running of a biped robot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
294
+ page_content=' Robotica, 19(5), 557-569.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
295
+ page_content=' [16] Kelly, Matthew.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
296
+ page_content=' "An introduction to trajectory optimization: How to do your own direct collocation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
297
+ page_content='" SIAM Review 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
298
+ page_content='4 (2017): 849-904.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
299
+ page_content=' [17] Zheng, Yuan‐Fang, and Hooshang Hemami.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
300
+ page_content=' "Mathematical modeling of a robot collision with its environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
301
+ page_content='" Journal of Robotic Systems 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
302
+ page_content='3 (1985): 289-307.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
303
+ page_content=' [18] Chevallereau, Christine, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
304
+ page_content=' "Rabbit: A testbed for advanced control theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
305
+ page_content='" IEEE Control Systems Magazine 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
306
+ page_content='5 (2003): 57-79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
307
+ page_content=' [19] Channon, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
308
+ page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
309
+ page_content=', S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
310
+ page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
311
+ page_content=' Hopkins, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
312
+ page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
313
+ page_content=' Pham.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
314
+ page_content=' "Derivation of optimal walking motions for a bipedal walking robot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
315
+ page_content='" Robotica 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
316
+ page_content='2 (1992): 165-172.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf'}
C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fdee53eaccff99203623229ad0b75c43ac45a02ec9231ee3856c1a4d2552472e
3
+ size 401031
C9E0T4oBgHgl3EQfyQLe/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0bafb184c371caaf2a9e971854fe5244975a3d806b0ea4eb2ed5782a2792f2cb
3
+ size 24057
CNFQT4oBgHgl3EQf-jfx/content/tmp_files/2301.13455v1.pdf.txt ADDED
@@ -0,0 +1,541 @@
1
+ ZhichunRoad at Amazon KDD Cup 2022: MultiTask Pre-Training
2
+ for E-Commerce Product Search
3
+ Xuange Cui
4
5
+ JD.com
6
+ Beijing, China
7
+ Wei Xiong
8
9
+ JD.com
10
+ Beijing, China
11
+ Songlin Wang
12
13
+ JD.com
14
+ Beijing, China
15
+ ABSTRACT
16
+ In this paper, we propose a robust multilingual model to improve the
17
+ quality of search results. Our model not only leverages the processed
18
+ class-balanced dataset, but also benefits from multitask pre-training
19
+ that leads to more general representations. In the pre-training stage,
20
+ we adopt an MLM task, a classification task and a contrastive learning
21
+ task to achieve considerable performance. In the fine-tuning stage, we use
22
+ confident learning, exponential moving average method (EMA), ad-
23
+ versarial training (FGM) and regularized dropout strategy (R-Drop)
24
+ to improve the model’s generalization and robustness. Moreover,
25
+ we use a multi-granular semantic unit to discover the queries and
26
+ products textual metadata for enhancing the representation of the
27
+ model. Our approach obtained competitive results and ranked top-8
28
+ in three tasks. We release the source code and pre-trained models
29
+ associated with this work1.
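The three pre-training objectives named above (MLM, classification, contrastive learning) are commonly combined as a weighted sum of per-task losses. The NumPy sketch below illustrates that combination on toy inputs; the equal loss weights, the InfoNCE temperature of 0.05 and the in-batch-negative setup are assumptions, not details reported by the authors:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, labels):
    """Mean negative log-likelihood of the true labels."""
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

def info_nce(query_emb, prod_emb, temperature=0.05):
    """Contrastive InfoNCE: the i-th query should match the i-th product,
    with the other products in the batch serving as negatives."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    p = prod_emb / np.linalg.norm(prod_emb, axis=1, keepdims=True)
    sim = q @ p.T / temperature
    return cross_entropy(sim, np.arange(len(sim)))

def multitask_loss(mlm_logits, mlm_labels, cls_logits, cls_labels,
                   q_emb, p_emb, weights=(1.0, 1.0, 1.0)):
    w_mlm, w_cls, w_con = weights   # equal weighting is an assumption
    return (w_mlm * cross_entropy(mlm_logits, mlm_labels)
            + w_cls * cross_entropy(cls_logits, cls_labels)
            + w_con * info_nce(q_emb, p_emb))
```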
30
+ CCS CONCEPTS
31
+ • Information systems → Retrieval models and ranking.
32
+ KEYWORDS
33
+ search relevance, e-commerce, semantic matching, multilingual
34
+ 1
35
+ INTRODUCTION
36
+ With the rapid growth of e-Commerce, online product search has
37
+ emerged as a popular and effective paradigm for customers to find
38
+ desired products and engage in online shopping [7, 9, 11]. It is very
39
+ challenging to accurately find and display relevant products. This
40
+ is because the customer queries are ambiguous and implicit [12].
41
+ For example, many users search for "iPhone" to find and purchase
42
+ an "iPhone charger". However, it is difficult for a traditional binary
43
+ classification model to clearly characterize this relationship. The Ama-
44
+ zon KDD Cup 2022 presents a novel multilingual dataset [17] across
45
+ English, Japanese and Spanish, and consists of three different sub-
46
+ tasks to evaluate the model’s abilities of ranking and classifying.
47
+ In this paper, our contributions can be summarized as follows:
48
+ 1) Data Augmentation. We use the translation model to convert
49
+ Spanish to English to expand the dataset. By splitting
50
+ the complement and irrelevant product text information, we obtain
51
+ a larger dataset with balanced labels. We use confident learn-
52
+ ing [14, 15] to find potential label errors and remove ∼4% of the data
53
+ from the training dataset. 2) MultiTask Pre-training. In the pre-training
54
+ stage, we use an MLM task, a classification task and a contrastive learn-
55
+ ing task to improve the model's performance. 3) In the fine-tuning
56
+ stage, we use a multi-granular semantic unit to discover the queries
57
+ and products textual metadata for enhancing the representation
58
+ 1https://github.com/cuixuage/KDDCup2022-ESCI
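The confident-learning filter described in contribution 1 flags an example as a likely label error when its out-of-sample predicted probability for the annotated class falls below that class's average self-confidence. A minimal NumPy sketch of this rule (a simplification of [14, 15], not the exact implementation used to remove the ∼4% of data):

```python
import numpy as np

def find_label_issues(pred_probs, labels):
    """Flag likely label errors: an example is suspect when its predicted
    probability for the annotated class is below that class's average
    self-confidence (a simplified form of confident learning)."""
    labels = np.asarray(labels)
    n_classes = pred_probs.shape[1]
    # Per-class threshold: mean predicted prob of class j over examples labeled j.
    thresholds = np.array([pred_probs[labels == j, j].mean()
                           for j in range(n_classes)])
    self_conf = pred_probs[np.arange(len(labels)), labels]
    return self_conf < thresholds[labels]

# Toy example: the last example is labeled 0 but the model favors class 1.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])
labels = [0, 0, 1, 0]
mask = find_label_issues(probs, labels)
```

In practice `pred_probs` would come from out-of-fold cross-validated predictions rather than in-sample ones.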
59
+ SubTask
60
+ Train Dataset
61
+ Test dataset
62
+ Languages
63
+ Task1
64
+ 781K
65
+ 48K
66
+ Spanish
67
+ Task2
68
+ 1834K
69
+ 277K
70
+ & English
71
+ Task3
72
+ 1834K
73
+ 277K
74
+ & Japanese
75
+ Table 1: The statistics of datasets.
76
+ of the model. We also observe that the exponential moving average
77
+ method (EMA) [6], adversarial training (FGM) [5] and the regularized
78
+ dropout strategy (R-Drop) [10] could improve the model's general-
79
+ ization and robustness.
80
+ Our team participated in all tasks, and achieved considerably
81
+ performance gain over the baseline solution. Specifically, our ap-
82
+ proach ranked 5th in task1, ranked 7th in task2 and ranked 8th in
83
+ task3.
84
2 BACKGROUND
The Amazon KDD Cup 2022 [17] provides three subtasks. Task1 is a query-product ranking task that asks participants to rank a list of results; the Normalized Discounted Cumulative Gain (nDCG) [18] is used to evaluate the model's ranking ability.

Task2 and task3 are classification tasks that require the model to classify query/product pairs into the correct categories, and micro-F1 [16] is used as the evaluation metric. Specifically, task2 is a multi-class product classification task that labels each product as an Exact, Substitute, Complement, or Irrelevant match for the query, while task3 measures the model's ability to identify the substitute products in the list of results for a given query.

The statistics of the corpus are shown in Table 1. In this challenge, the organizers provide two versions of the dataset: a version for task 1 that is reduced in the number of examples, and a larger one for tasks 2 and 3 [17]. Note that the reduced version contains more difficult samples. Our team participated in all subtasks, and the next section gives an overview of our system.
3 SYSTEM OVERVIEW
3.1 Multi-Task Pre-Training
We compare several pre-trained multilingual language models from the XTREME leaderboard2, and use "microsoft/infoxlm-large3" as the text encoder.

2 https://sites.research.google/xtreme
3 https://huggingface.co/microsoft/infoxlm-large

arXiv:2301.13455v1 [cs.CL] 31 Jan 2023

KDDCup '22, August 17, 2022, Washington, DC, USA
Xuange Cui, Wei Xiong, and Songlin Wang

The InfoXLM_large model [1] covers 94 languages, is pre-trained on the CCNet dataset, shares the configuration of XLM-R [2] and uses a shared vocabulary of 250,002 tokens. Figure 1 shows a high-level overview of our proposed pretext tasks.

Figure 1: A schematic overview of our novel pre-training tasks. These tasks encourage the encoded representations to be more general.
MLM Task is widely used for learning text representations [3]. MLM trains a model to predict a random sample of input tokens that have been replaced by a [MASK] placeholder, in a multi-class setting over the entire vocabulary [20]. We adopt the MLM task on the multilingual product-catalogue dataset.
Classification Task contains three classification subtasks. One of them is the Product2Query task, a binary classification task: a piece of text whose length follows a Poisson distribution is intercepted from the product text information and used as a faked query. The parameters passed to the Poisson distribution, and further details, can be found in appendix A.1. The Product2Brand and Product2Color tasks are multi-class classification tasks that use the product text information to predict the brand and the color of the current item.
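The faked-query construction above can be sketched as follows. This is a minimal illustration rather than the authors' code: `sample_poisson` (Knuth's method), the token-level interception and the sample product text are our own assumptions.

```python
import math
import random

def sample_poisson(mu, rng):
    # Knuth's algorithm: draw one sample from Poisson(mu).
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def fake_query(product_text, mu, rng):
    # Intercept a Poisson-length span of tokens from the product text
    # and treat it as a pseudo query (a positive pair for Product2Query).
    tokens = product_text.split()
    length = max(1, min(sample_poisson(mu, rng), len(tokens)))
    start = rng.randrange(len(tokens) - length + 1)
    return " ".join(tokens[start:start + length])

rng = random.Random(0)
text = "wireless phone charger fast charging usb cable for travel"
query = fake_query(text, mu=4, rng=rng)
```

A negative pair can be built the same way by intercepting the span from a different, unrelated product.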
Contrastive Learning Task is mainly inspired by SimCSE [4] and ESimCSE [19]. During training, each data point is trained to identify its counterpart among the (N − 1) in-batch negative samples and a queue of data samples; the samples in the queue are progressively replaced. The per-example loss is

    L_i = −log [ e^{sim(h_i, h_i^+)/τ} / ( Σ_{j=1}^{N} e^{sim(h_i, h_j^+)/τ} + Σ_{q=1}^{Q} e^{sim(h_i, h_q^+)/τ} ) ]    (1)

where h_* is a sentence representation and h_i and h_i^+ are semantically related. h_q^+ denotes a sentence embedding in the momentum-updated queue, Q is the size of the queue, sim(h_1, h_2) is the cosine similarity of the two sentence representations, and τ is a temperature hyperparameter. Finally, we average all N per-example losses L_i to obtain the contrastive loss L_con.
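Equation (1) can be computed directly from toy sentence vectors. A minimal plain-Python sketch (real implementations operate on batched tensors); the example embeddings are invented.

```python
import math

def cos(a, b):
    # cosine similarity sim(h1, h2)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(h, h_pos, queue, tau=0.05):
    # Eq. (1): for each h_i the positive is h_pos[i]; the other in-batch
    # positives and the momentum-updated queue act as negatives.
    n = len(h)
    total = 0.0
    for i in range(n):
        num = math.exp(cos(h[i], h_pos[i]) / tau)
        denom = sum(math.exp(cos(h[i], h_pos[j]) / tau) for j in range(n))
        denom += sum(math.exp(cos(h[i], q) / tau) for q in queue)
        total += -math.log(num / denom)
    return total / n  # average the N per-example losses into L_con

aligned = contrastive_loss([[1.0, 0.0]], [[1.0, 0.0]], queue=[[0.0, 1.0]])
misaligned = contrastive_loss([[1.0, 0.0]], [[0.0, 1.0]], queue=[[1.0, 0.0]])
```

A well-aligned positive pair yields a far smaller loss than a misaligned one, which is what drives the representations apart.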
Algorithm 1: Training a MultiTask model.
Input: DataSet D = {(x, y, z)_i}, i = 1, ..., |D|
1  Initialize model parameters Θ randomly;
2  Model trainer T takes batches of training data as input to optimize the model parameters Θ;
3  Set the max number of epochs: epoch_max;
4  for epoch in 1, 2, ..., epoch_max do
5      Shuffle D by mixing data from different tasks;
6      for B in D do
7          // B is a mini-batch of a pre-training task
8          Compute the loss L(Θ):
9              1. L(Θ) = Masked LM loss;
10             2. L(Θ) += classification loss;
11             3. L(Θ) += contrastive learning loss;
12         Optimize the model using L(Θ);
13     end
14 end
Output: Pre-trained model Θ
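The inner loop of Algorithm 1 (lines 8–12) simply sums the three task losses before each optimization step. A minimal sketch, with the three loss functions replaced by illustrative stand-ins:

```python
def multitask_loss(batch, mlm_loss, cls_loss, con_loss):
    # Algorithm 1, lines 9-11: accumulate the three pre-training losses.
    total = mlm_loss(batch)      # 1. masked LM loss
    total += cls_loss(batch)     # 2. classification loss
    total += con_loss(batch)     # 3. contrastive learning loss
    return total                 # line 12: optimize the model with L(theta)

loss = multitask_loss(
    batch=None,                  # stand-in mini-batch B
    mlm_loss=lambda b: 1.9,
    cls_loss=lambda b: 0.4,
    con_loss=lambda b: 0.2,
)
```

Back-propagating the summed loss lets one optimizer step update the shared encoder for all three tasks at once.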
3.2 Fine-Tuning Methods
After pre-training, we remove the classifiers of the pre-training multi-task heads and concatenate several embeddings, feeding them to an extra MLP classifier. The embeddings consist of three sets of representations. The first is obtained by concatenating the queries' 3-gram mean-pooling, the bullet points' 3-gram mean-pooling and the descriptions' 3-gram mean-pooling embeddings. The others are the country embedding, brand embedding and color embedding, as shown in Figure 2.
Exponential Moving Average. Our model uses EMA [6] to smooth the trained parameters. Evaluations that use averaged parameters sometimes produce significantly better results than the final trained values. Formally, we denote the smoothed and trained variables as θ_s and θ_t, and the EMA decay weight as η. After each training step, we update θ_s by:

    θ_s ← η θ_s + (1 − η) θ_t    (2)
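Equation (2), applied after every training step, can be sketched as below; the parameter name and the decay of 0.9 (the paper uses 0.999) are illustrative.

```python
class EMA:
    # Keep a smoothed copy theta_s of the trained parameters theta_t, Eq. (2).
    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.shadow = dict(params)  # theta_s starts from the current values

    def update(self, params):
        # theta_s <- eta * theta_s + (1 - eta) * theta_t
        for name, theta_t in params.items():
            self.shadow[name] = (self.decay * self.shadow[name]
                                 + (1 - self.decay) * theta_t)

ema = EMA({"w": 0.0}, decay=0.9)
for _ in range(3):
    ema.update({"w": 1.0})  # shadow: 0.1, then 0.19, then 0.271
```

At evaluation time, the shadow values are loaded into the model in place of the raw trained parameters.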
Adversarial Training. Recently, adversarial attacks have been widely applied in computer vision and natural language processing [5, 8, 13, 21]. Many works apply them during fine-tuning; we explore the influence of adversarial training strategies and compare the FGSM, PGD, FreeLB and SMART methods. An adversarial attack augments the input with a small perturbation that maximizes the adversarial loss:

    min_θ E_{(x,y)∼D} [ max_{Δx∈Ω} L(x + Δx, y; θ) ]    (3)

where D is the dataset, x is the input, y is the gold label, θ are the model parameters, L(x, y; θ) is the loss function and Δx is the perturbation. In our experiments, we adopt the FGM method in all tasks based on the actual performance.
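A toy FGM-style step under Eq. (3): take one gradient-ascent step of size ε on the input. The linear model, its analytic gradient and the numbers are illustrative stand-ins; in practice the perturbation is applied to the word embeddings.

```python
import math

def loss(x, w, y):
    # toy squared-error loss of a linear score
    pred = sum(wi * xi for wi, xi in zip(w, x))
    return (pred - y) ** 2

def grad_wrt_input(x, w, y):
    # analytic gradient of the toy loss with respect to the input x
    pred = sum(wi * xi for wi, xi in zip(w, x))
    return [2 * (pred - y) * wi for wi in w]

def fgm_perturb(x, g, eps=1.0):
    # FGM: delta_x = eps * g / ||g||, one step of the inner maximization
    norm = math.sqrt(sum(gi * gi for gi in g)) or 1.0
    return [xi + eps * gi / norm for xi, gi in zip(x, g)]

x, w, y = [1.0, 2.0], [0.5, -0.3], 1.0
x_adv = fgm_perturb(x, grad_wrt_input(x, w, y), eps=1.0)
```

The perturbed input does not decrease the loss, which is exactly what the outer minimization in Eq. (3) then trains against.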
R-Drop is proven to be an effective dropout-based regularization method: it minimizes the KL-divergence between the output distributions of every two sub-models generated via dropout during model training.

    L_KL = α · [D_KL(Logit_1 ∥ Logit_2) + D_KL(Logit_2 ∥ Logit_1)]    (4)

ZhichunRoad at Amazon KDD Cup 2022: MultiTask Pre-Training for E-Commerce Product Search

Figure 2: In the fine-tuning stage, we concatenate the multi-granular semantic units, the [CLS] embedding from the XLM encoder, and the IDs' embeddings.

We use the original logits of the model's output as Logit_1, and the logits after the adversarial attack as Logit_2.
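Equation (4) reduces to a symmetric KL term between the two softmaxed logit vectors. A minimal sketch; the example logits are illustrative.

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    # D_KL(p || q) for two discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def rdrop_loss(logit1, logit2, alpha=1.0):
    # Eq. (4): symmetric KL between the two sub-model output distributions
    p, q = softmax(logit1), softmax(logit2)
    return alpha * (kl(p, q) + kl(q, p))
```

The loss is zero when the two forward passes agree and grows as their predictive distributions diverge.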
Embedding Mixup is a widely used data augmentation method that linearly interpolates the inputs and targets of random sample pairs. We use the contextual embedding vector of [CLS] and the corresponding label to generate synthetic examples for training. Such training has been shown to act as an effective regularization strategy for text classification. In summary, we presented the self-supervised multitask pre-training tasks and several fine-tuning methods for improving the model's generalization and robustness.
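The interpolation step can be sketched as follows; the Beta(α, α) mixing coefficient is the usual mixup choice, and the vectors and labels below are invented.

```python
import random

def embedding_mixup(cls_a, label_a, cls_b, label_b, alpha=0.5, rng=None):
    # Linearly interpolate two [CLS] vectors and their one-hot labels.
    rng = rng or random.Random()
    lam = rng.betavariate(alpha, alpha)
    mixed_x = [lam * xa + (1 - lam) * xb for xa, xb in zip(cls_a, cls_b)]
    mixed_y = [lam * ya + (1 - lam) * yb for ya, yb in zip(label_a, label_b)]
    return mixed_x, mixed_y, lam

mx, my, lam = embedding_mixup(
    [1.0, 0.0], [1.0, 0.0],    # example A: [CLS] vector and one-hot label
    [0.0, 1.0], [0.0, 1.0],    # example B
    rng=random.Random(0),
)
```

The synthetic pair (mixed_x, mixed_y) is fed to the classifier alongside the real examples.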
4 EXPERIMENTS
4.1 Settings
We use InfoXLM_large as the text encoder, and the EMA decay weight is set to 0.999. The learning rate is set to 1e-5 with a warm-up ratio of 10%, the batch size is 32 and the gradient clipping norm threshold is set to 1. In the pre-training stage, the maximum number of epochs is set to 10; in the fine-tuning stage, it is set to 5. During adversarial training, we set ε to 1.0 in FGM, which means only one step of the adversarial attack is computed. Because the dataset has imbalanced labels, we apply several data processing steps: by splitting the complement and irrelevant product text information, we obtain more pairs with the same label and a more balanced dataset, and we use confident learning to find potential label errors and remove ∼4% of the data from the training dataset. As presented in appendix A.1, the median length of Spanish and English queries is 14, which matches a Poisson distribution with μ set to 4; the median length of Japanese queries is 31, which matches a Poisson distribution with μ set to 8.
4.2 Main Results
Our approach achieved a considerable performance gain over the baseline solution and ranked in the top 8 in all three tasks. The main results are shown in Table 2. In task1, we used the mean of all model outputs as the final ranking score. In task2 and task3, we used almost the same network structure, except for the number of neurons in the classifier. Overall, our approach ranked 5th, 7th and 8th, respectively.
SubTask   Model                Metric             Ranking
task1     6 large models       nDCG = 0.9025      5th
task2     only 1 large model   micro F1 = 0.8194  7th
task3     only 1 large model   micro F1 = 0.8686  8th
Table 2: Performance of our approach on the private leaderboard. In task1, we used six InfoXLM_large models fine-tuned on different datasets or with different methods. In task2 and task3, we used only one InfoXLM_large model with the same network structure, as shown in Figure 2.
Pre-Training Task          CV-MLM Loss   CV-Micro F1
Mask LM                    1.966         74.97
+Product2Query             1.969         75.05
++Product2Brand            1.978         75.08
+++Contrastive Learning    2.047         75.08
Table 3: The effect of different pre-training tasks, accumulated from top to bottom. We report the cross-validation MLM loss and Micro-F1 score × 100 in the task2 setting.
4.3 Ablation Studies
We investigate the impact of adopting different pre-training tasks in the task2 setting. In Table 3, we show the Mask-LM losses after 5 epochs of pre-training and the Micro-F1 scores after 2 epochs of fine-tuning. We find that the Product2Query task achieves a 0.008 improvement over the baseline, while the contrastive learning task does not yield a significant gain.

As shown in Table 4, we compare several loss functions and adopt the Poly1 loss in task2 and task3 based on the actual performance. We observe that the Focal loss and the GHM loss perform worse than the cross-entropy loss in the task2 setting.

We also explore several methods for further improving the model's performance in the fine-tuning stage. As presented
Classification Loss   CV-Micro F1
CE Loss               75.08
Focal Loss            74.73
GHM Loss              74.85
Poly1 Loss            75.21
Table 4: The effect of different losses in the task2 setting. We report the cross-validation Micro-F1 score × 100.
Methods                 CV-Micro F1
+EMA                    75.19
++FGM                   75.30
+++R-Drop               75.43
++++Embedding Mixup     75.43
Table 5: The effect of different strategies, accumulated from top to bottom. We report the cross-validation Micro-F1 score × 100 in the task2 setting.
Confident Learning   CV-Metric
with-in-task1        NDCG, +0.005
with-in-task2        Micro-F1, -0.003
with-in-task3        Micro-F1, -0.002
Table 6: The effect of removing ∼4% noisy labels.
in Table 5, we adopt all of these methods to improve the model's generalization and robustness. We observe that the exponential moving average method (EMA), adversarial training (FGM) and the regularized dropout strategy (R-Drop) improve the model's generalization and robustness, but the Embedding Mixup strategy does not yield a significant gain.

As shown in Table 6, we also consider using smaller datasets obtained by removing ∼4% noisy labels. The smaller dataset yields a 0.005 improvement in task1, but worse results in task2 and task3. A possible explanation is that task1 contains more difficult samples, so its manually annotated data contains more label errors.
5 CONCLUSION AND FUTURE WORK
In this work, we provide an overview of our combined approach to improving the quality of search results. We use data augmentation, a multitask pre-training strategy and several fine-tuning methods to achieve considerable performance. Specifically, we use an MLM task, a classification task and a contrastive learning task in the pre-training stage, and we use the exponential moving average method (EMA), adversarial training (FGM) and the regularized dropout strategy (R-Drop) to improve the model's generalization and robustness in the fine-tuning stage. Moreover, we use a multi-granular semantic unit over the queries' and products' textual metadata to enhance the representation of the model. Future work on our system includes: 1) comparing with other pre-trained language models, such as DeBERTa_large; 2) using other training strategies, such as self-distillation.
REFERENCES
[1] Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2020. InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training. CoRR abs/2007.07834 (2020). arXiv:2007.07834 https://arxiv.org/abs/2007.07834
[2] Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised Cross-lingual Representation Learning at Scale. CoRR abs/1911.02116 (2019). arXiv:1911.02116 http://arxiv.org/abs/1911.02116
[3] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR abs/1810.04805 (2018). arXiv:1810.04805 http://arxiv.org/abs/1810.04805
[4] Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple Contrastive Learning of Sentence Embeddings. In Empirical Methods in Natural Language Processing (EMNLP).
[5] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. arXiv:1412.6572 [stat.ML]
[6] Seng Hansun. 2013. A new approach of moving average method in time series analysis. In 2013 Conference on New Media Studies (CoNMedia). 1–4. https://doi.org/10.1109/CoNMedia.2013.6708545
[7] Rahul Radhakrishnan Iyer, Rohan Kohli, and Shrimai Prabhumoye. 2020. Modeling Product Search Relevance in e-Commerce. CoRR abs/2001.04980 (2020). arXiv:2001.04980 https://arxiv.org/abs/2001.04980
[8] Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 2177–2190. https://doi.org/10.18653/v1/2020.acl-main.197
[9] Sen Li, Fuyu Lv, Taiwei Jin, Guli Lin, Keping Yang, Xiaoyi Zeng, Xiao-Ming Wu, and Qianli Ma. 2021. Embedding-based Product Retrieval in Taobao Search. CoRR abs/2106.09297 (2021). arXiv:2106.09297 https://arxiv.org/abs/2106.09297
[10] Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, and Tie-Yan Liu. 2021. R-Drop: Regularized Dropout for Neural Networks. CoRR abs/2106.14448 (2021). arXiv:2106.14448 https://arxiv.org/abs/2106.14448
[11] Yiqun Liu, Kaushik Rangadurai, Yunzhong He, Siddarth Malreddy, Xunlong Gui, Xiaoyi Liu, and Fedor Borisyuk. 2021. Que2Search: Fast and Accurate Query and Document Understanding for Search at Facebook. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (2021).
[12] Hanqing Lu, Youna Hu, Tong Zhao, Tony Wu, Yiwei Song, and Bing Yin. 2021. Graph-based Multilingual Product Retrieval in E-commerce Search. CoRR abs/2105.02978 (2021). arXiv:2105.02978 https://arxiv.org/abs/2105.02978
[13] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2019. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv:1706.06083 [stat.ML]
[14] Curtis G. Northcutt, Lu Jiang, and Isaac L. Chuang. 2021. Confident Learning: Estimating Uncertainty in Dataset Labels. Journal of Artificial Intelligence Research (JAIR) 70 (2021), 1373–1411.
[15] Curtis G. Northcutt, Tailin Wu, and Isaac L. Chuang. 2017. Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence (Sydney, Australia) (UAI'17). AUAI Press, 10 pages. http://auai.org/uai2017/proceedings/papers/35.pdf
[16] Juri Opitz and Sebastian Burst. 2019. Macro F1 and Macro F1. CoRR abs/1911.03347 (2019). arXiv:1911.03347 http://arxiv.org/abs/1911.03347
[17] Chandan K. Reddy, Lluís Màrquez, Fran Valero, Nikhil Rao, Hugo Zaragoza, Sambaran Bandyopadhyay, Arnab Biswas, Anlu Xing, and Karthik Subbian. 2022. Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving Product Search. arXiv:2206.06588
[18] Yining Wang, Liwei Wang, Yuanzhi Li, Di He, Tie-Yan Liu, and Wei Chen. 2013. A Theoretical Analysis of NDCG Type Ranking Measures. CoRR abs/1304.6480 (2013). arXiv:1304.6480 http://arxiv.org/abs/1304.6480
[19] Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2021. ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding. CoRR abs/2109.04380 (2021). arXiv:2109.04380 https://arxiv.org/abs/2109.04380
[20] Atsuki Yamaguchi, George Chrysostomou, Katerina Margatina, and Nikolaos Aletras. 2021. Frustratingly Simple Pretraining Alternatives to Masked Language
Methods      CV-Micro F1
Random♦      -
Word2vec♣    85.33
Freeze♥      85.29
Table 7: The performance of different initialization methods of the multi-granular semantic unit. We report the cross-validation Micro-F1 score × 100 in the task3 setting.
Modeling. CoRR abs/2109.01819 (2021). arXiv:2109.01819 https://arxiv.org/abs/2109.01819
[21] Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. FreeLB: Enhanced Adversarial Training for Natural Language Understanding. In International Conference on Learning Representations. https://openreview.net/forum?id=BygzbyHFvB
A APPENDIX
A.1 Poisson Distribution
Figure 3: The length distribution of queries in different languages.

As presented in Figure 3, the median length of Spanish and English queries is 14, which matches a Poisson distribution with μ set to 4; the median length of Japanese queries is 31, which matches a Poisson distribution with μ set to 8.

A.2 EmbeddingBag Initialization
The multi-granular semantic unit is implemented with EmbeddingBag4. As presented in Table 7, random initialization converges slowly, so we do not record its final result. When the EmbeddingBag is initialized with Word2vec, our approach obtains the best performance.

4 https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html
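A rough plain-Python analogue of a mean-mode EmbeddingBag over hashed 3-grams; the bucket count, dimension and random initialization are illustrative (and, per Table 7, the real table is initialized from Word2vec rather than randomly).

```python
import random

class NGramEmbeddingBag:
    # Mean-pool hashed 3-gram embeddings, mimicking
    # torch.nn.EmbeddingBag(mode="mean") over an n-gram vocabulary.
    def __init__(self, num_buckets=1000, dim=4, seed=0):
        rng = random.Random(seed)
        self.dim = dim
        self.num_buckets = num_buckets
        self.table = [[rng.uniform(-0.1, 0.1) for _ in range(dim)]
                      for _ in range(num_buckets)]

    def __call__(self, text, n=3):
        tokens = text.split()
        grams = [" ".join(tokens[i:i + n])
                 for i in range(max(1, len(tokens) - n + 1))]
        vecs = [self.table[hash(g) % self.num_buckets] for g in grams]
        return [sum(v[d] for v in vecs) / len(vecs) for d in range(self.dim)]

bag = NGramEmbeddingBag()
vec = bag("wireless phone charger fast charging")  # one pooled 4-d vector
```

The pooled vectors for queries, bullet points and descriptions are what get concatenated with the [CLS] and ID embeddings in Figure 2.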
[Figure 3 plot: query-length distributions for es, us and jp, with geometric (p = 0.2) and Poisson (μ = 4) reference curves.]
CNFQT4oBgHgl3EQf-jfx/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,361 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf,len=360
2
+ page_content='ZhichunRoad at Amazon KDD Cup 2022: MultiTask Pre-Training for E-Commerce Product Search Xuange Cui cuixuange@jd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
3
+ page_content='com JD.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
4
+ page_content='com Beijing, China Wei Xiong xiongwei9@jd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
5
+ page_content='com JD.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
6
+ page_content='com Beijing, China Songlin Wang wangsonglin3@jd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
7
+ page_content='com JD.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
8
+ page_content='com Beijing, China ABSTRACT In this paper, we propose a robust multilingual model to improve the quality of search results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
9
+ page_content=' Our model not only leverage the processed class-balanced dataset, but also benefit from multitask pre-training that leads to more general representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
10
+ page_content=' In pre-training stage, we adopt mlm task, classification task and contrastive learning task to achieve considerably performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
11
+ page_content=' In fine-tuning stage, we use confident learning, exponential moving average method (EMA), ad- versarial training (FGM) and regularized dropout strategy (R-Drop) to improve the model’s generalization and robustness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
12
+ page_content=' Moreover, we use a multi-granular semantic unit to discover the queries and products textual metadata for enhancing the representation of the model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
13
+ page_content=' Our approach obtained competitive results and ranked top-8 in three tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
14
+ page_content=' We release the source code and pre-trained models associated with this work1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
15
+ page_content=' CCS CONCEPTS Information systems → Retrieval models and ranking.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
16
+ page_content=' KEYWORDS search relevance, e-commerce, semantic matching, multilingual 1 INTRODUCTION With the rapid growth of e-Commerce, online product search has emerged as a popular and effective paradigm for customers to find desired products and engage in online shopping [7, 9, 11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
17
+ page_content=' It is very challenging to accurately find and display relevant products.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
18
+ page_content=' This is because the customer queries are ambiguous and implicit [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
19
+ page_content=' For example, many users search for "iPhone" to find and purchase an "iPhone charger".' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
20
+ page_content=' However, the traditional binary classification model is difficult to clearly characterize this relationship.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
21
+ page_content=' The Ama- zon KDD Cup 2022 presents a novel multilingual dataset [17] across English, Japanese and Spanish, and consists of three different sub- tasks to evaluate the model’s abilities of ranking and classifying.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
22
+ page_content=' In this paper, our contributions can be summarized as follows: 1) Data Augmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
23
+ page_content=' We use the translation model to convert Spanish to English for expanding the dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
24
+ page_content=' Through splitting the complement and irrelevant product text information, we could get a bigger dataset with balanced labels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
25
+ page_content=' We use confident learn- ing [14, 15] to find the potential label errors and remove ∼4% data from the training dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
26
+ page_content=' 2) MultiTask Pre-training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
27
+ page_content=' In pre-training stage, we use MLM task, classification task and contrastive learn- ing task for improving the model’s performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
28
+ page_content=' 3) In fine-tuning stage, we use a multi-granular semantic unit to discover the queries and products textual metadata for enhancing the representation 1https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
29
+ page_content='com/cuixuage/KDDCup2022-ESCI SubTask Train Dataset Test dataset Languages Task1 781K 48K Spanish Task2 1834K 277K & English Task3 1834K 277K & Japanese Table 1: The statistics of datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
30
+ page_content=' of the model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
31
+ page_content=' And we observe that exponential moving average method(EMA) [6], adversarial training(FGM) [5] and regularized dropout strategy(R-Drop) [10] could improve the model’s general- ization and robustness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
32
+ page_content=' Our team participated in all tasks, and achieved considerably performance gain over the baseline solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
33
+ page_content=' Specifically, our ap- proach ranked 5th in task1, ranked 7th in task2 and ranked 8th in task3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
34
+ page_content=' 2 BACKGROUND The Amazon KDD Cup 2022 [17] provides three subtasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
35
+ page_content=' The task1 consists of a query-product ranking task aimed at ranking the results list.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
36
+ page_content=' The Normalized Discounted Cumulative Gain(nDCG) [18] will be used to evaluate the model’s abilities of ranking.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
37
+ page_content=' The task2 and task3 are classification tasks which require the model to classify the query/product pairs into correct categories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
38
+ page_content=' These tasks are designed to test the model’s ability of classifying.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
39
+ page_content=' The micro-F1 [16] will be used as an evaluation metric.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
40
+ page_content=' Moreover, the task2 consists of a multi-class product classification task aimed at classifying each product as being an Exact, Substitute, Comple- ment, or Irrelevant match for the query.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
41
+ page_content=' Task3 measures the model’s ability to identify substitute products in the list of results for a given query.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
42
+ page_content=' The statistics of the corpus are shown in Table 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
43
+ page_content=' In this challenge, the organizers provide two different versions of the data set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
44
+ page_content=' One is for task 1 and is reduced in terms of the number of examples, and the other, larger one is for tasks 2 and 3 [17].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
45
+ page_content=' It is noted that the reduced version of the data set has more difficult samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
46
+ page_content=' Our team participated in all subtasks, and the next section will introduce an overview of our system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
47
+ page_content=' 3 SYSTEM OVERVIEW 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
48
+ page_content='1 Multi-Task Pre-Training We compare several pre-trained multilingual language models from the XTREME Leaderboard2, and then use "microsoft/infoxlm-large3" as the text encoder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
49
+ page_content=' 2https://sites.research.google/xtreme 3https://huggingface.co/microsoft/infoxlm-large arXiv:2301.13455v1 [cs.CL] 31 Jan 2023 KDDCup ’22, August 17, 2022, Washington, DC, USA Xuange Cui, Wei Xiong, and Songlin Wang' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
+ page_content=' The InfoXLM𝑙𝑎𝑟𝑔𝑒 model [1] covers 94 languages, is pre-trained on the CCNet dataset, has the same configuration as XLM-R [2], and uses a shared vocabulary of size 250002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
55
+ page_content=' Figure 1 shows a high-level overview of our proposed pretext tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
56
+ page_content=' Figure 1: A schematic overview of our novel pre-training tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
57
+ page_content=' These tasks encourage the encoded representations to be more general.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
58
+ page_content=' The MLM task is widely used for learning text representations [3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
59
+ page_content=' MLM trains a model to predict a random sample of input tokens that have been replaced by a [MASK] placeholder in a multi-class setting over the entire vocabulary [20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
60
+ page_content=' We adopt MLM-Task on the multilingual product-catalogue dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
61
+ page_content=' The classification task contains three classification subtasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
62
+ page_content=' One of them is the Product2Query task, a binary classification task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
63
+ page_content=' Based on a Poisson distribution, a piece of text is cut from the product text information to serve as a fake query.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
64
+ page_content=' The parameters passed to the Poisson distribution and more details can be found in Appendix A.1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
66
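The Poisson-based fake-query construction described above can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' code: the function name `make_fake_query` and the whitespace tokenization are our assumptions; the paper only specifies that the span length follows a Poisson distribution (μ=4 for Spanish/English, μ=8 for Japanese, per its Appendix A.1).

```python
import numpy as np

def make_fake_query(product_text, mu=4, rng=None):
    # Sample a span length from Poisson(mu), then cut that many consecutive
    # tokens from a random position in the product text as a pseudo query.
    rng = rng if rng is not None else np.random.default_rng()
    tokens = product_text.split()
    span = max(1, min(int(rng.poisson(mu)), len(tokens)))
    start = int(rng.integers(0, len(tokens) - span + 1))
    return " ".join(tokens[start:start + span])
```

Positive pairs for the binary classifier would pair a product with a span cut from its own text; negatives would pair it with a span cut from another product.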
+ page_content=' Product2Brand-Task and Product2Color-Task are multi-class classification tasks that use the product text information to predict the brand and the color of the current item.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
67
+ page_content=' The contrastive learning task is mainly inspired by SimCSE [4] and ESimCSE [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
68
+ page_content=' During training, each data point is trained to find its counterpart among the (𝑁 − 1) in-batch negative samples and the queue of data samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
69
+ page_content=' The samples in the queue are progressively replaced.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
70
+ page_content=' \mathcal{L}_i = -\log \frac{e^{\mathrm{sim}(h_i, h_i^{+})/\tau}}{\sum_{j=1}^{N} e^{\mathrm{sim}(h_i, h_j^{+})/\tau} + \sum_{q=1}^{Q} e^{\mathrm{sim}(h_i, h_q^{+})/\tau}} \quad (1) The ℎ∗ denotes a sentence representation, where ℎ𝑖 and ℎ𝑖⁺ are semantically related.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
71
+ page_content=' The ℎ+𝑞 denotes a sentence embedding in the momentum-updated queue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
72
+ page_content=' 𝑄 is the size of the queue, 𝑠𝑖𝑚(ℎ1,ℎ2) is the cosine similarity of sentence representations, and 𝜏 is a temperature hyperparameter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
73
+ page_content=' In the end, we average all N losses L𝑖 to obtain the contrastive loss L𝑐𝑜𝑛.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
74
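Eq. (1), with in-batch negatives plus a momentum queue, can be written out directly. Below is a minimal numpy sketch under our own naming; in the real system the representations `h` would come from the InfoXLM encoder and the queue would be updated with momentum, which is omitted here.

```python
import numpy as np

def contrastive_loss(h, h_pos, queue, tau=0.05):
    """InfoNCE-style loss of Eq. (1): for each anchor h[i] the positive is
    h_pos[i]; the other (N-1) batch positives and all Q queue entries
    serve as negatives."""
    def cos(a, b):
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return a @ b.T  # pairwise cosine similarities

    n = h.shape[0]
    sim_batch = cos(h, h_pos) / tau   # (N, N), diagonal holds the positives
    sim_queue = cos(h, queue) / tau   # (N, Q) queue negatives
    logits = np.concatenate([sim_batch, sim_queue], axis=1)
    # row-wise log-softmax; the positive for row i sits at column i
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    losses = -np.diag(log_prob[:, :n])
    return losses.mean()              # L_con: average over all N anchors
```

With aligned positives the loss should be much smaller than with mismatched ones, which is a quick sanity check for an implementation.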
+ page_content=' Algorithm 1: Training a MultiTask model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
75
+ page_content=' Input: DataSet D = � (𝑥,𝑦,𝑧)𝑖 � |D | 𝑖=1 1 Initialize model parameters Θ randomly ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
76
+ page_content=' 2 Model trainer 𝑇 that takes batches of training data as input to optimize the model parameters Θ ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
77
+ page_content=' 3 Set the max number of epoch: 𝑒𝑝𝑜𝑐ℎmax ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
78
+ page_content=' 4 for epoch in 1, 2, ..., 𝑒𝑝𝑜𝑐ℎmax do 5 Shuffle D by mixing data from different tasks ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
81
+ page_content=' 6 for B in D do 7 // B is a mini-batch of pre-training task ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
82
+ page_content=' 8 Compute loss : 𝐿(Θ) ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
83
+ page_content=' 9 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
84
+ page_content=' 𝐿(Θ) = Mask LM Loss ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
85
+ page_content=' 10 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
86
+ page_content=' 𝐿(Θ) += Classification Loss ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
87
+ page_content=' 11 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
88
+ page_content=' 𝐿(Θ) += Contrastive Learning Loss ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
89
+ page_content=' 12 Optimize the model using 𝐿(Θ) ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
90
+ page_content=' 13 end 14 end Output: Pre-trained Model Θ 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
91
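The control flow of Algorithm 1 can be sketched as a short training loop. The loss functions and the update step below are toy stand-ins of our own (in the real system each loss comes from the shared InfoXLM encoder plus a task head, and an optimizer performs the update); only the loop structure mirrors the algorithm.

```python
import numpy as np

# Toy stand-ins for the three per-batch losses summed in Algorithm 1.
def mask_lm_loss(theta, batch):        return float(np.mean((theta - batch) ** 2))
def classification_loss(theta, batch): return float(np.mean(np.abs(theta - batch)))
def contrastive_loss(theta, batch):    return 0.1  # constant placeholder

def train_multitask(batches, theta, epoch_max=2, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(epoch_max):
        # Shuffle D by mixing mini-batches from different tasks
        for i in rng.permutation(len(batches)):
            b = batches[i]
            loss = mask_lm_loss(theta, b)          # 1. L(theta)  = Mask LM loss
            loss += classification_loss(theta, b)  # 2. L(theta) += classification loss
            loss += contrastive_loss(theta, b)     # 3. L(theta) += contrastive loss
            theta = theta - lr * (theta - b)       # toy step in place of the optimizer
    return theta
```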
+ page_content='2 Fine-Tuning Methods After pre-training, we remove the classifiers of the pre-training multitask and concatenate several embeddings with an extra MLP classifier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
92
+ page_content=' The embeddings consist of three sets of representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
93
+ page_content=' One of them is obtained by concatenating the queries’ 3-gram mean-pooling, bullet points’ 3-gram mean-pooling and descriptions’ 3-gram mean-pooling embeddings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
94
+ page_content=' The others consist of country embedding, brand embedding and color embedding, as shown in Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
95
+ page_content=' Exponential Moving Average Our model uses EMA [6] to smooth the trained parameters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
96
+ page_content=' Evaluations that use averaged parameters sometimes produce significantly better results than the final trained values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
97
+ page_content=' Formally, we define the smoothed variables and trained variables as 𝜃𝑠 and 𝜃𝑡, EMA decay weight as: 𝜂.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
98
+ page_content=' After each training step, we update 𝜃𝑠 by: 𝜃𝑠 ← 𝜂𝜃𝑠 + (1 − 𝜂)𝜃𝑡 (2) Adversarial Training Recently, adversarial attacks have been widely applied in computer vision and natural language processing [5, 8, 13, 21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
99
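The EMA update of Eq. (2) is a one-liner; the sketch below uses our own function name and shows that repeated updates make the smoothed parameters track the trained ones.

```python
import numpy as np

def ema_update(theta_s, theta_t, eta=0.999):
    # Eq. (2): smoothed params follow the trained params with decay weight eta
    return eta * theta_s + (1.0 - eta) * theta_t
```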
+ page_content=' Many works use it during fine-tuning; we explore the influence of adversarial training strategies and compare the FGSM, PGD, FreeLB and SMART methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
100
+ page_content=' The adversarial attack works by augmenting the input with a small perturbation that maximizes the adversarial loss: \min_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}} \left[ \max_{\Delta x \in \Omega} L(x + \Delta x, y; \theta) \right] \quad (3) where \mathcal{D} is the dataset, x is the input, y is the gold label, \theta denotes the model parameters, L(x, y; \theta) is the loss function, and \Delta x is the perturbation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
103
+ page_content=' In our experiments, we adopt the FGSM method in all tasks based on the actual performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
104
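The inner maximization of Eq. (3) is approximated in one step. Below is a sketch of an FGM-style perturbation (the paper's settings later mention FGM with ε=1.0): FGM moves ε along the L2-normalized gradient, whereas FGSM takes the sign of the gradient. The function name and the quadratic demo loss are ours.

```python
import numpy as np

def fgm_perturbation(grad, eps=1.0):
    # Single-step FGM: move eps along the L2-normalized gradient of the loss
    # w.r.t. the embeddings -- the direction that locally increases the loss.
    return eps * grad / (np.linalg.norm(grad) + 1e-12)
```

For a toy loss L(v) = ||v||²/2 the gradient at v is v itself, so adding the perturbation must increase the loss.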
+ page_content=' R-Drop is proven to be an effective dropout-based regularization method: it minimizes the KL-divergence between the output distributions of two sub-models generated via dropout during training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
105
+ page_content=' L𝐾𝐿 = 𝛼 · [D𝐾𝐿 (𝐿𝑜𝑔𝑖𝑡1, 𝐿𝑜𝑔𝑖𝑡2) + D𝐾𝐿 (𝐿𝑜𝑔𝑖𝑡2, 𝐿𝑜𝑔𝑖𝑡1)] (4) ZhichunRoad at Amazon KDD Cup 2022: MultiTask Pre-Training for E-Commerce Product Search Figure 2: In fine-tuning stage, we concatenate the multi-granular semantic units, the [CLS] embedding from XLM encoder and the IDs’ embeddings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
106
+ page_content=' We use the original logits of the model’s output as 𝐿𝑜𝑔𝑖𝑡1, and the logits after the adversarial attack as 𝐿𝑜𝑔𝑖𝑡2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
107
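The symmetric KL term of Eq. (4) computed from two logit vectors looks as follows. This is a numpy sketch; the function name `rdrop_kl_loss` and the placement of the weight α are our assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def rdrop_kl_loss(logit1, logit2, alpha=1.0):
    """Eq. (4): alpha * [KL(p1||p2) + KL(p2||p1)], from the two forward
    passes' logits (original and after the adversarial attack)."""
    p1, p2 = softmax(logit1), softmax(logit2)
    kl12 = np.sum(p1 * (np.log(p1) - np.log(p2)), axis=-1)
    kl21 = np.sum(p2 * (np.log(p2) - np.log(p1)), axis=-1)
    return alpha * np.mean(kl12 + kl21)
```

The term is zero when both passes agree and grows as their output distributions diverge, which is exactly the consistency pressure R-Drop applies.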
+ page_content=' Embedding Mixup is a widely used data augmentation method that linearly interpolates the inputs and modeling targets of random samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
108
+ page_content=' We use the contextual embedding vector of [CLS] and the corresponding label to generate synthetic examples for training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
109
+ page_content=' Such training has been shown to act as an effective model regularization strategy for text classification task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
110
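A minimal version of the [CLS]-level mixup described above is sketched below. The Beta(α, α) sampling of the mixing coefficient λ follows the common mixup recipe, which the paper does not spell out; the function name and signature are ours.

```python
import numpy as np

def embedding_mixup(h_a, y_a, h_b, y_b, alpha=0.2, rng=None):
    # Interpolate [CLS] embeddings and (one-hot) labels with lam ~ Beta(alpha, alpha)
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    h_mix = lam * h_a + (1 - lam) * h_b
    y_mix = lam * y_a + (1 - lam) * y_b
    return h_mix, y_mix
```

Since the mixed label is a convex combination of two one-hot vectors, the classifier is trained against a soft target, which is what makes mixup act as a regularizer.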
+ page_content=' In conclusion, we present the self-supervised multitask pre-training tasks and several fine-tuning methods for improving the model’s generalization and robustness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
111
+ page_content=' 4 EXPERIMENTS 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
112
+ page_content='1 Settings We use InfoXLM𝑙𝑎𝑟𝑔𝑒 as the text encoder, and the EMA decay weight is set to 0.999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
114
+ page_content=' The learning rate is set to 1e-5 with a warm-up ratio of 10%, the batch size is 32, and the gradient clipping norm threshold is set to 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
115
+ page_content=' In pre-training stage, the maximum number of epochs was set to 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
116
+ page_content=' And in the fine-tuning stage, the maximum number of epochs was set to 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
117
+ page_content=' During adversarial training, we set 𝜀 to 1.0 in FGM, which means only one step is computed in the adversarial attack.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
119
+ page_content=' We find that the dataset has imbalanced labels and apply several data processing steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
120
+ page_content=' By splitting the complement and irrelevant product text information, we obtain more pairs with the same label and a more balanced dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
121
+ page_content=' We use confident learning to find potential label errors and remove ∼4% of the data from the training dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
122
+ page_content=' As presented in Appendix A.1, the median length of Spanish and English queries is 14, which satisfies a Poisson distribution with 𝜇 set to 4. The median length of Japanese queries is 31, which satisfies a Poisson distribution with 𝜇 set to 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
125
+ page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
126
+ page_content='2 Main Results Our approach achieved a considerable performance gain over the baseline solution and ranked in the top 8 in all three tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
127
+ page_content=' The main results are shown in Table 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
128
+ page_content=' In task1, we calculated the mean of all model outputs as the final ranking results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
129
+ page_content=' In task2 and task3, we used almost the same network structure except for the number of neurons in the classifier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
130
+ page_content=' Finally, our approach ranked 5th, 7th and 8th, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
131
+ page_content=' SubTask | Model | Metric | Ranking: task1, 6 large models, nDCG=0.9025, 5th; task2, only 1 large model, micro-F1=0.8194, 7th; task3, only 1 large model, micro-F1=0.8686, 8th. Table 2: Performance of our approach on the private leaderboard.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
135
+ page_content=' In task1, we used six InfoXLM𝑙𝑎𝑟𝑔𝑒 models that were fine-tuned on different datasets or with different methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
136
+ page_content=' In task2 and task3, we used only one InfoXLM𝑙𝑎𝑟𝑔𝑒 model with the same network structure, as shown in Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
137
+ page_content=' Pre-Training Task | CV-MLM Loss | CV-Micro F1: Mask LM, 1.966, 74.97; +Product2Query, 1.969, 75.05; ++Product2Brand, 1.978, 75.08; +++Contrastive Learning, 2.047, 75.08. Table 3: The effect of different pre-training tasks, accumulating from top to bottom. We report the cross-validation MLM loss and Micro-F1 Score × 100 in the task2 setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
147
+ page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
148
+ page_content='3 Ablation Studies We investigate the impact of adopting different pre-training tasks in the task2 setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
149
+ page_content=' In Table 3, we show the Mask-LM losses after 5 epochs of pre-training and Micro-F1 scores after 2 epochs of fine-tuning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
150
+ page_content=' We find that the Product2Query task achieves a 0.008 improvement compared to the baseline, while the contrastive learning task does not yield a significant gain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
152
+ page_content=' As shown in Table 4, we compare several loss functions and adopt the Poly1 loss function in task2 and task3 based on the actual performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
153
+ page_content=' We observe that the Focal loss function and GHM loss function have worse performance than the cross-entropy loss function in the task2 setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
154
+ page_content=' In this subsection, we explore several methods for further improving the model’s performance in the fine-tuning stage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
155
+ page_content=' Classification Loss | CV-Micro F1: CE Loss, 75.08; Focal Loss, 74.73; GHM Loss, 74.85; Poly1 Loss, 75.21. Table 4: The effect of different losses in the task2 setting. We report the cross-validation Micro-F1 Score × 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
161
+ page_content=' Methods | CV-Micro F1: +EMA, 75.19; ++FGM, 75.30; +++R-Drop, 75.43; ++++Embedding Mixup, 75.43. Table 5: The effect of different strategies, accumulating from top to bottom. We report the cross-validation Micro-F1 Score × 100 in the task2 setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
167
+ page_content=' Confident Learning | CV-Metric: within-task1, NDCG +0.005; within-task2, Micro-F1 −0.003; within-task3, Micro-F1 −0.002. Table 6: The effect of removing 4% noisy labels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
171
+ page_content=' As presented in Table 5, we adopt all of these methods to improve the model’s generalization and robustness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
172
+ page_content=' We observe that the exponential moving average method (EMA), adversarial training (FGM) and the regularized dropout strategy (R-Drop) can improve the model’s generalization and robustness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
173
+ page_content=' However, the Embedding Mixup strategy does not yield a significant gain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
174
+ page_content=' As shown in Table 6, we consider using smaller datasets by removing ∼4% noisy labels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
175
+ page_content=' Using the smaller dataset, we achieve a 0.005 improvement in task1, but we get worse results in task2 and task3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
177
+ page_content=' This could be explained by the fact that task1 contains more difficult samples, so its manually annotated data contains more label errors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
178
+ page_content=' 5 CONCLUSION AND FUTURE WORK In this work, we provide an overview of the combined approach to improve the quality of search results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
179
+ page_content=' We use data augmentation, a multitask pre-training strategy and several fine-tuning methods to achieve considerable performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
180
+ page_content=' Specifically, we use the MLM task, a classification task and a contrastive learning task in the pre-training stage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
181
+ page_content=' And we use the exponential moving average method (EMA), adversarial training (FGM) and the regularized dropout strategy (R-Drop) to improve the model’s generalization and robustness in the fine-tuning stage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
182
+ page_content=' Moreover, we use multi-granular semantic units to exploit the queries’ and products’ textual metadata, enhancing the model’s representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
183
+ page_content=' Future work on our system includes: 1) comparing with other pre-trained language models, such as DeBERTa𝑙𝑎𝑟𝑔𝑒.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
184
+ page_content=' 2) Using other training strategies, such as self-distillation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf'}
185
REFERENCES
[1] Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2020. InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training. CoRR abs/2007.07834. https://arxiv.org/abs/2007.07834
[2] Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised Cross-lingual Representation Learning at Scale. CoRR abs/1911.02116. https://arxiv.org/abs/1911.02116
[3] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR abs/1810.04805. https://arxiv.org/abs/1810.04805
[4] Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple Contrastive Learning of Sentence Embeddings. In Empirical Methods in Natural Language Processing (EMNLP).
[5] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. arXiv:1412.6572 [stat.ML]
[6] Seng Hansun. 2013. A new approach of moving average method in time series analysis. In 2013 Conference on New Media Studies (CoNMedia). 1–4. https://doi.org/10.1109/CoNMedia.2013.6708545
[7] Rahul Radhakrishnan Iyer, Rohan Kohli, and Shrimai Prabhumoye. 2020. Modeling Product Search Relevance in e-Commerce. CoRR abs/2001.04980. https://arxiv.org/abs/2001.04980
[8] Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2177–2190. https://doi.org/10.18653/v1/2020.acl-main.197
[9] Sen Li, Fuyu Lv, Taiwei Jin, Guli Lin, Keping Yang, Xiaoyi Zeng, Xiao-Ming Wu, and Qianli Ma. 2021. Embedding-based Product Retrieval in Taobao Search. CoRR abs/2106.09297. https://arxiv.org/abs/2106.09297
[10] Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, and Tie-Yan Liu. 2021. R-Drop: Regularized Dropout for Neural Networks. CoRR abs/2106.14448. https://arxiv.org/abs/2106.14448
[11] Yiqun Liu, Kaushik Rangadurai, Yunzhong He, Siddarth Malreddy, Xunlong Gui, Xiaoyi Liu, and Fedor Borisyuk. 2021. Que2Search: Fast and Accurate Query and Document Understanding for Search at Facebook. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining.
[12] Hanqing Lu, Youna Hu, Tong Zhao, Tony Wu, Yiwei Song, and Bing Yin. 2021. Graph-based Multilingual Product Retrieval in E-commerce Search. CoRR abs/2105.02978. https://arxiv.org/abs/2105.02978
[13] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2019. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv:1706.06083 [stat.ML]
[14] Curtis G. Northcutt, Lu Jiang, and Isaac L. Chuang. 2021. Confident Learning: Estimating Uncertainty in Dataset Labels. Journal of Artificial Intelligence Research (JAIR) 70 (2021), 1373–1411.
[15] Curtis G. Northcutt, Tailin Wu, and Isaac L. Chuang. 2017. Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence (Sydney, Australia) (UAI'17). AUAI Press. http://auai.org/uai2017/proceedings/papers/35.pdf
[16] Juri Opitz and Sebastian Burst. 2019. Macro F1 and Macro F1. CoRR abs/1911.03347. https://arxiv.org/abs/1911.03347
[17] Chandan K. Reddy, Lluís Màrquez, Fran Valero, Nikhil Rao, Hugo Zaragoza, Sambaran Bandyopadhyay, Arnab Biswas, Anlu Xing, and Karthik Subbian. 2022. Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving Product Search. arXiv:2206.06588
[18] Yining Wang, Liwei Wang, Yuanzhi Li, Di He, Tie-Yan Liu, and Wei Chen. 2013. A Theoretical Analysis of NDCG Type Ranking Measures. CoRR abs/1304.6480. https://arxiv.org/abs/1304.6480
[19] Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2021. ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding. CoRR abs/2109.04380. https://arxiv.org/abs/2109.04380
[20] Atsuki Yamaguchi, George Chrysostomou, Katerina Margatina, and Nikolaos Aletras. 2021. Frustratingly Simple Pretraining Alternatives to Masked Language Modeling. CoRR abs/2109.01819. https://arxiv.org/abs/2109.01819
[21] Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. FreeLB: Enhanced Adversarial Training for Natural Language Understanding. In International Conference on Learning Representations. https://openreview.net/forum?id=BygzbyHFvB

ZhichunRoad at Amazon KDD Cup 2022: MultiTask Pre-Training for E-Commerce Product Search. KDDCup '22, August 17, 2022, Washington, DC, USA.

Methods      CV-Micro F1
Random♦      (not recorded)
Word2vec♣    85.33
Freeze♥      85.29

Table 7: The performance of different initialization methods of the multi-granular semantic unit. We report the cross-validation Micro-F1 score × 100 in the task 3 setting.

A APPENDIX

A.1 Poisson Distribution

Figure 3: The length distribution of queries in different languages.
As presented in Figure 3, the median length of Spanish and English queries is 14, which matches a Poisson distribution with μ set to 4, and the median length of Japanese queries is 31, which matches a Poisson distribution with μ set to 8.
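Drawing lengths from these fitted Poisson distributions (for example, to pick span lengths during masking in the MLM task) can be sketched as follows; clipping the samples to a minimum of 1 so every span is non-empty is our assumption.

```python
import numpy as np

def sample_span_lengths(mu, size, seed=0):
    """Draw span lengths from a Poisson distribution with mean `mu`,
    clipped to at least 1 so every sampled span is non-empty."""
    rng = np.random.default_rng(seed)
    return np.maximum(1, rng.poisson(lam=mu, size=size))

lens_es_en = sample_span_lengths(mu=4, size=10_000)  # Spanish/English setting
lens_jp = sample_span_lengths(mu=8, size=10_000)     # Japanese setting
```

The larger μ for Japanese reflects its longer queries; the sample means track μ closely because the clip at 1 only affects the rare zero draws.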
A.2 EmbeddingBag Initialization

The multi-granular semantic unit is implemented with EmbeddingBag4. As presented in Table 7, random initialization converges slowly, so we do not record its final result. When the EmbeddingBag is initialized with Word2vec, our approach obtains the best performance.

4 https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html
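torch.nn.EmbeddingBag with mode="mean" reduces each bag of unit indices to a single pooled vector. The numpy sketch below mirrors that pooling and shows what "initialized by Word2vec" amounts to: the rows of the embedding table are copied from pretrained vectors. The vocabulary and vectors here are made up for illustration.

```python
import numpy as np

# Hypothetical pretrained (Word2vec-style) vectors; in the real system
# these would be learned from the corpus rather than drawn at random.
rng = np.random.default_rng(0)
vocab = {"wireless": 0, "mouse": 1, "usb": 2}
pretrained = rng.normal(size=(len(vocab), 4))  # one row per unit

def embedding_bag_mean(weight, indices):
    """Mean-pool the embedding rows of a bag of indices,
    like torch.nn.EmbeddingBag(mode="mean")."""
    return weight[indices].mean(axis=0)

bag = embedding_bag_mean(pretrained, [vocab["wireless"], vocab["mouse"]])
```

Starting from pretrained rows gives the pooled vectors a meaningful geometry from the first step, which is consistent with the faster convergence reported in Table 7.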
[Figure 3 plot: per-language query-length distributions (es, en, jp) overlaid with a geometric distribution (p = 0.2) and a Poisson distribution (μ = 4); x-axis 0–30, y-axis density 0.00–0.35.]
D9E0T4oBgHgl3EQfywIT/content/2301.02662v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:42b30f06db5719886e24c4d9c8d47037a0e454872e053fd10e16bb9c15d82eb3
3
+ size 925010
D9E0T4oBgHgl3EQfywIT/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9cb021b9e94e141adaf8442fdd0fb8777135ee76c7fafbb656cc11af82b44a9c
3
+ size 6094893
D9E0T4oBgHgl3EQfywIT/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e67586f1d3faf6255173ebf82b70c5add67870033229e422db775fec54410005
3
+ size 221785
DNFQT4oBgHgl3EQf_zdP/content/2301.13459v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a9fe33940f7d679511bd0ed6715f27be53e084d9e47027a4917ec45ab940008c
3
+ size 625228
DNFQT4oBgHgl3EQf_zdP/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fc13bd0a4711b9427974bb5b33cd24d8aa0e436ce58926ec24c8c6bf1ad66cce
3
+ size 125795