jackkuo committed (verified)
Commit 457015e · 1 Parent(s): 67552bf

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes (see the raw diff for the full list).

Files changed (50):
  1. .gitattributes +43 -0
  2. 0NFKT4oBgHgl3EQfNi2Q/content/2301.11755v1.pdf +3 -0
  3. 0NFKT4oBgHgl3EQfNi2Q/vector_store/index.faiss +3 -0
  4. 0NFKT4oBgHgl3EQfNi2Q/vector_store/index.pkl +3 -0
  5. 0dE3T4oBgHgl3EQfnAoC/content/tmp_files/2301.04620v1.pdf.txt +1998 -0
  6. 0dE3T4oBgHgl3EQfnAoC/content/tmp_files/load_file.txt +0 -0
  7. 0dFST4oBgHgl3EQfVTiq/content/tmp_files/2301.13777v1.pdf.txt +1263 -0
  8. 0dFST4oBgHgl3EQfVTiq/content/tmp_files/load_file.txt +0 -0
  9. 19E0T4oBgHgl3EQf_wLw/content/tmp_files/2301.02832v1.pdf.txt +503 -0
  10. 19E0T4oBgHgl3EQf_wLw/content/tmp_files/load_file.txt +0 -0
  11. 1dE1T4oBgHgl3EQf5AXg/content/tmp_files/2301.03508v1.pdf.txt +2014 -0
  12. 1dE1T4oBgHgl3EQf5AXg/content/tmp_files/load_file.txt +0 -0
  13. 1dFAT4oBgHgl3EQfkB03/vector_store/index.pkl +3 -0
  14. 1tFST4oBgHgl3EQfWzjN/vector_store/index.pkl +3 -0
  15. 29FQT4oBgHgl3EQf2zan/vector_store/index.pkl +3 -0
  16. 4NE3T4oBgHgl3EQfogr8/content/2301.04635v1.pdf +3 -0
  17. 4NE3T4oBgHgl3EQfogr8/vector_store/index.pkl +3 -0
  18. 4tE0T4oBgHgl3EQfvQHn/content/tmp_files/2301.02617v1.pdf.txt +969 -0
  19. 4tE0T4oBgHgl3EQfvQHn/content/tmp_files/load_file.txt +0 -0
  20. 59E1T4oBgHgl3EQfBQIH/content/2301.02848v1.pdf +3 -0
  21. 59E1T4oBgHgl3EQfBQIH/vector_store/index.pkl +3 -0
  22. 5NA0T4oBgHgl3EQfNv96/content/tmp_files/2301.02151v1.pdf.txt +1970 -0
  23. 5NA0T4oBgHgl3EQfNv96/content/tmp_files/load_file.txt +0 -0
  24. 6dAyT4oBgHgl3EQf2fn3/content/tmp_files/2301.00754v1.pdf.txt +0 -0
  25. 6dAyT4oBgHgl3EQf2fn3/content/tmp_files/load_file.txt +0 -0
  26. 6tE1T4oBgHgl3EQfBgIF/vector_store/index.faiss +3 -0
  27. 6tE1T4oBgHgl3EQfBgIF/vector_store/index.pkl +3 -0
  28. 7dE1T4oBgHgl3EQfBgLt/content/tmp_files/2301.02854v1.pdf.txt +533 -0
  29. 7dE1T4oBgHgl3EQfBgLt/content/tmp_files/load_file.txt +0 -0
  30. 9NAyT4oBgHgl3EQfdPda/content/tmp_files/2301.00298v1.pdf.txt +1810 -0
  31. 9NAyT4oBgHgl3EQfdPda/content/tmp_files/load_file.txt +397 -0
  32. 9dAyT4oBgHgl3EQfQ_am/content/tmp_files/2301.00058v1.pdf.txt +1127 -0
  33. 9dAyT4oBgHgl3EQfQ_am/content/tmp_files/load_file.txt +0 -0
  34. 9tE1T4oBgHgl3EQfoARp/content/2301.03315v1.pdf +3 -0
  35. 9tE1T4oBgHgl3EQfoARp/vector_store/index.pkl +3 -0
  36. ANE1T4oBgHgl3EQfVQQy/content/tmp_files/2301.03099v1.pdf.txt +1769 -0
  37. ANE1T4oBgHgl3EQfVQQy/content/tmp_files/load_file.txt +0 -0
  38. B9E0T4oBgHgl3EQfQABU/content/tmp_files/2301.02186v1.pdf.txt +1989 -0
  39. B9E0T4oBgHgl3EQfQABU/content/tmp_files/load_file.txt +0 -0
  40. EdAyT4oBgHgl3EQfeviI/content/tmp_files/2301.00327v1.pdf.txt +0 -0
  41. EdAyT4oBgHgl3EQfeviI/content/tmp_files/load_file.txt +0 -0
  42. GdAzT4oBgHgl3EQfUvwS/content/tmp_files/2301.01270v1.pdf.txt +851 -0
  43. GdAzT4oBgHgl3EQfUvwS/content/tmp_files/load_file.txt +0 -0
  44. HtE1T4oBgHgl3EQfXwQA/content/2301.03129v1.pdf +3 -0
  45. HtFJT4oBgHgl3EQfFSwh/content/tmp_files/2301.11441v1.pdf.txt +0 -0
  46. HtFJT4oBgHgl3EQfFSwh/content/tmp_files/load_file.txt +0 -0
  47. MdE0T4oBgHgl3EQf0ALT/vector_store/index.pkl +3 -0
  48. O9AzT4oBgHgl3EQfzf7E/content/tmp_files/2301.01771v1.pdf.txt +951 -0
  49. O9AzT4oBgHgl3EQfzf7E/content/tmp_files/load_file.txt +0 -0
  50. ONFLT4oBgHgl3EQfOy8c/content/tmp_files/2301.12025v1.pdf.txt +1841 -0
.gitattributes CHANGED
@@ -6606,3 +6606,46 @@ qNFAT4oBgHgl3EQfex2-/content/2301.08578v1.pdf filter=lfs diff=lfs merge=lfs -tex
6606   H9FKT4oBgHgl3EQfdS4O/content/2301.11819v1.pdf filter=lfs diff=lfs merge=lfs -text
6607   X9AyT4oBgHgl3EQfvflc/content/2301.00631v1.pdf filter=lfs diff=lfs merge=lfs -text
6608   _dAzT4oBgHgl3EQfvf3k/content/2301.01709v1.pdf filter=lfs diff=lfs merge=lfs -text
6609 + ZdFRT4oBgHgl3EQfPzc_/content/2301.13518v1.pdf filter=lfs diff=lfs merge=lfs -text
6610 + lNE4T4oBgHgl3EQfTwzv/content/2301.05011v1.pdf filter=lfs diff=lfs merge=lfs -text
6611 + RtAzT4oBgHgl3EQf0P7C/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6612 + idE4T4oBgHgl3EQfSgzP/content/2301.05000v1.pdf filter=lfs diff=lfs merge=lfs -text
6613 + 9tE1T4oBgHgl3EQfoARp/content/2301.03315v1.pdf filter=lfs diff=lfs merge=lfs -text
6614 + dtE1T4oBgHgl3EQfLgMI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6615 + HtE1T4oBgHgl3EQfXwQA/content/2301.03129v1.pdf filter=lfs diff=lfs merge=lfs -text
6616 + 4NE3T4oBgHgl3EQfogr8/content/2301.04635v1.pdf filter=lfs diff=lfs merge=lfs -text
6617 + ztFAT4oBgHgl3EQfBhzw/content/2301.08405v1.pdf filter=lfs diff=lfs merge=lfs -text
6618 + _tAyT4oBgHgl3EQfdveI/content/2301.00308v1.pdf filter=lfs diff=lfs merge=lfs -text
6619 + jNA0T4oBgHgl3EQfIv8h/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6620 + bdFAT4oBgHgl3EQf5B6R/content/2301.08731v1.pdf filter=lfs diff=lfs merge=lfs -text
6621 + edE4T4oBgHgl3EQfQwxW/content/2301.04984v1.pdf filter=lfs diff=lfs merge=lfs -text
6622 + _dAzT4oBgHgl3EQfvf3k/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6623 + lNE4T4oBgHgl3EQfTwzv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6624 + X9AyT4oBgHgl3EQfvflc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6625 + s9AzT4oBgHgl3EQfrv3p/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6626 + 0NFKT4oBgHgl3EQfNi2Q/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6627 + xNAzT4oBgHgl3EQfdvyx/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6628 + pNFPT4oBgHgl3EQfLjS4/content/2301.13023v1.pdf filter=lfs diff=lfs merge=lfs -text
6629 + gNFMT4oBgHgl3EQf2jEq/content/2301.12444v1.pdf filter=lfs diff=lfs merge=lfs -text
6630 + r9E1T4oBgHgl3EQfjAT1/content/2301.03259v1.pdf filter=lfs diff=lfs merge=lfs -text
6631 + edE4T4oBgHgl3EQfQwxW/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6632 + 59E1T4oBgHgl3EQfBQIH/content/2301.02848v1.pdf filter=lfs diff=lfs merge=lfs -text
6633 + 0NFKT4oBgHgl3EQfNi2Q/content/2301.11755v1.pdf filter=lfs diff=lfs merge=lfs -text
6634 + _tAyT4oBgHgl3EQfdveI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6635 + ZdFRT4oBgHgl3EQfPzc_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6636 + ctFIT4oBgHgl3EQfnitG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6637 + 6tE1T4oBgHgl3EQfBgIF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6638 + jNAyT4oBgHgl3EQfX_d5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6639 + pNFPT4oBgHgl3EQfLjS4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6640 + ctFIT4oBgHgl3EQfnitG/content/2301.11314v1.pdf filter=lfs diff=lfs merge=lfs -text
6641 + qNFQT4oBgHgl3EQfszbV/content/2301.13389v1.pdf filter=lfs diff=lfs merge=lfs -text
6642 + n9FPT4oBgHgl3EQfKTRi/content/2301.13018v1.pdf filter=lfs diff=lfs merge=lfs -text
6643 + ldE1T4oBgHgl3EQfNwO7/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6644 + OtE1T4oBgHgl3EQfaASJ/content/2301.03157v1.pdf filter=lfs diff=lfs merge=lfs -text
6645 + ztFAT4oBgHgl3EQfBhzw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6646 + ddAzT4oBgHgl3EQf3f4p/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6647 + pNAyT4oBgHgl3EQfZPcw/content/2301.00218v1.pdf filter=lfs diff=lfs merge=lfs -text
6648 + bdA0T4oBgHgl3EQfGf9T/content/2301.02047v1.pdf filter=lfs diff=lfs merge=lfs -text
6649 + y9FAT4oBgHgl3EQfBBzs/content/2301.08402v1.pdf filter=lfs diff=lfs merge=lfs -text
6650 + yNFKT4oBgHgl3EQfLi3P/content/2301.11747v1.pdf filter=lfs diff=lfs merge=lfs -text
6651 + pNAyT4oBgHgl3EQfZPcw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
0NFKT4oBgHgl3EQfNi2Q/content/2301.11755v1.pdf ADDED
@@ -0,0 +1,3 @@
1 + version https://git-lfs.github.com/spec/v1
2 + oid sha256:04a38b5bad601459d534980f8e8607e56284cee87d4ca2858017ab62405abd16
3 + size 1773750
0NFKT4oBgHgl3EQfNi2Q/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
1 + version https://git-lfs.github.com/spec/v1
2 + oid sha256:51344577a7b8e4677b9deb77ff814b250958328f07025345afaae90ea6bda2cf
3 + size 4718637
0NFKT4oBgHgl3EQfNi2Q/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1 + version https://git-lfs.github.com/spec/v1
2 + oid sha256:f47fef613b87af855f0d496df0adef21fe0f7bb24e23d2c1069218b849b2ac37
3 + size 171549
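Each binary added in this commit is stored as a Git LFS pointer file (the three `version` / `oid` / `size` lines shown in the hunks above) rather than as raw content. As an illustration only (not part of the commit), a minimal Python parser for this pointer format might look like:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (version / oid / size lines) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # oid has the form "sha256:<hex digest>"; size is the byte count of the real file
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "hash_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:04a38b5bad601459d534980f8e8607e56284cee87d4ca2858017ab62405abd16
size 1773750"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 1773750
```

Git resolves such a pointer to the real object (here a 1.7 MB PDF) via the LFS store; the diff only ever shows the pointer.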
0dE3T4oBgHgl3EQfnAoC/content/tmp_files/2301.04620v1.pdf.txt ADDED
@@ -0,0 +1,1998 @@
arXiv:2301.04620v1 [eess.SY] 11 Jan 2023

AdaptSLAM: Edge-Assisted Adaptive SLAM with Resource Constraints via Uncertainty Minimization

Ying Chen∗, Hazer Inaltekin†, Maria Gorlatova∗
∗Duke University, Durham, NC, †Macquarie University, North Ryde, NSW, Australia
∗{ying.chen151, maria.gorlatova}@duke.edu, †[email protected]

Abstract—Edge computing is increasingly proposed as a solution for reducing resource consumption of mobile devices running simultaneous localization and mapping (SLAM) algorithms, with most edge-assisted SLAM systems assuming the communication resources between the mobile device and the edge server to be unlimited, or relying on heuristics to choose the information to be transmitted to the edge. This paper presents AdaptSLAM, an edge-assisted visual (V) and visual-inertial (VI) SLAM system that adapts to the available communication and computation resources, based on a theoretically grounded method we developed to select the subset of keyframes (the representative frames) for constructing the best local and global maps in the mobile device and the edge server under resource constraints. We implemented AdaptSLAM to work with the state-of-the-art open-source V- and VI-SLAM ORB-SLAM3 framework, and demonstrated that, under constrained network bandwidth, AdaptSLAM reduces the tracking error by 62% compared to the best baseline method.

Index Terms—Simultaneous localization and mapping, edge computing, uncertainty quantification and minimization
I. INTRODUCTION

Simultaneous localization and mapping (SLAM), the process of simultaneously constructing a map of the environment and tracking the mobile device's pose within it, is an essential capability for a wide range of applications, such as autonomous driving and robotic navigation [1]. In particular, visual (V) and visual-inertial (VI) SLAM, which use cameras either alone or in combination with inertial sensors, have demonstrated remarkable progress over the last three decades [2], and have become an indispensable component of emerging mobile applications such as drone-based surveillance [3], [4] and markerless augmented reality [5]–[8].

Due to the high computational demands placed by V- and VI-SLAM on mobile devices [9]–[12], offloading parts of the workload to edge servers has recently emerged as a promising solution for lessening the loads on the mobile devices and improving the overall performance [9]–[18]. However, such an approach experiences performance degradation under resource limitations and fluctuations. The existing edge-assisted SLAM solutions either assume wireless network resources to be sufficient for unrestricted offloading, or rely on heuristics in making offloading decisions. By contrast, in this paper we develop an edge computing-assisted SLAM framework, which we call AdaptSLAM, that intelligently adapts to both communication and computation resources to maintain high SLAM performance. Similar to prior work [11]–[17], AdaptSLAM runs a real-time tracking module and maintains a local map on the mobile device, while offloading non-time-critical and computationally expensive processes (global map optimization and loop closing) to the edge server. However, unlike prior designs, AdaptSLAM uses a theoretically grounded method to build local and global maps of limited size and minimize the uncertainty of the maps, laying the foundation for the optimal adaptive offloading of SLAM tasks under the communication and computation constraints.

First, we develop an uncertainty quantification model for the local and global maps in edge-assisted V-SLAM and VI-SLAM. Specifically, since these maps are built from the information contained in the keyframes (i.e., the most representative frames) [19]–[21], the developed model characterizes how the keyframes and the connections between them contribute to the uncertainty. To the best of our knowledge, this is the first uncertainty quantification model for V-SLAM and VI-SLAM in edge-assisted architectures.

Next, we apply the developed uncertainty quantification model to efficiently select subsets of keyframes to build local and global maps under the constraints of limited computation and communication resources. The local and global map construction is formulated as NP-hard cardinality-constrained combinatorial optimization problems [22]. We demonstrate that the map construction problems are 'close to' submodular problems under some conditions, propose a low-complexity greedy-based algorithm to obtain near-optimal solutions, and present a computation reuse method to speed up map construction. We implement AdaptSLAM in conjunction with the state-of-the-art open-source V- and VI-SLAM ORB-SLAM3 [20] framework, and evaluate the implementation with both simulated and real-world communication and computation conditions. Under constrained bandwidth, AdaptSLAM reduces the tracking error by 62% compared with the best baseline method.

To summarize, the main contributions of this paper are: (i) the first uncertainty quantification model of maps in V- and VI-SLAM under the edge-assisted architecture, (ii) an analytically grounded algorithm for efficiently selecting subsets of keyframes to build local and global maps under computation and communication resource budgets, and (iii) a comprehensive evaluation of AdaptSLAM on two configurations of mobile devices. We open-source AdaptSLAM via GitHub.¹

The rest of this paper is organized as follows. §II reviews the related work, §III provides the preliminaries, §IV and §V introduce the AdaptSLAM system architecture and model, §VI presents the problem formulation, and §VII presents the problem solutions. We present the evaluation in §VIII and conclude the paper in §IX.

¹https://github.com/i3tyc/AdaptSLAM
+ II. RELATED WORK
104
+ V- and VI-SLAM. Due to the affordability of cameras and
105
+ the richness of information provided by them, V-SLAM has
106
+ been widely studied in the past three decades [2]. It can be
107
+ classified into direct approaches (LSD-SLAM [23], DSO [24]),
108
+ which operate directly on pixel intensity values, and feature-
109
+ based approaches (PTAM [25], ORB-SLAM2 [19], Pair-
110
+ Navi [26]), which extract salient regions in each camera
111
+ frame. We focus on feature-based approaches since direct
112
+ approaches require high computing power for real-time per-
113
+ formance [2]. To provide robustness (to textureless areas,
114
+ motion blur, illumination changes), there is a growing trend
115
+ of employing VI-SLAM, that assists the cameras with an
116
+ inertial measurement unit (IMU) [20], [21], [27]; VI-SLAM
117
+ has become the de-facto standard SLAM method for modern
118
+ augmented reality platforms [5], [6]. In VI-SLAM, visual
119
+ information and IMU data can be loosely [27] or tightly [20],
120
+ [21] coupled. We implement AdaptSLAM based on ORB-
121
+ SLAM3 [20], a state-of-the-art open-source V- and VI-SLAM
122
+ system which tightly integrates visual and IMU information.
123
+ Edge-assisted SLAM. Recent studies [4], [9], [11], [13]–
124
+ [18], [28]–[30] have focused on offloading parts of SLAM
125
+ workloads from mobile devices to edge (or cloud) servers
126
+ to reduce mobile device resource consumption. A standard
127
+ approach is to offload computationally expensive tasks (global
128
+ map optimization, loop closing), while exploiting onboard
129
+ computation for running the tasks critical to the mobile
130
+ device’s autonomy (tracking, local map optimization) [11],
131
+ [13]–[18]. Most edge-assisted SLAM frameworks assume
132
+ wireless network resources to be sufficient for unconstrained
133
+ offloading [4], [13]–[16], [29]; some use heuristics to choose
134
+ the information to be offloaded under communication con-
135
+ straints [9], [11], [17], [18], [28], [30]. Some frameworks only
136
+ keep the newest keyframes in the local map to combat the
137
+ constrained computation resources on mobile devices [14],
138
+ [16]. Complementing this work, we propose a theoretical
139
+ framework to characterize how keyframes contribute to the
140
+ SLAM performance, laying the foundation for the adaptive
141
+ offloading of SLAM tasks under the communication and
142
+ computation constraints.
143
+ Uncertainty quantification and minimization. Recent
144
+ work [31]–[33] has focused on quantifying and minimizing
145
+ the pose estimate uncertainty in V-SLAM. Since the pose
146
+ estimate accuracy is difficult to obtain due to the lack of
147
+ ground-truth poses of mobile devices, the uncertainty can
148
+ guide the decision-making in SLAM systems. In [31], [32],
149
+ it is used for measurement selection (selecting measurements
150
+ between keyframes [31] and selecting extracted features of
151
+ keyframes [32]); in [33], it is used for anchor selection (se-
152
+ lecting keyframes to make their poses have ‘zero uncertainty’).
153
+ Complementing this work, we quantify the pose estimate
154
+ uncertainty of both V- and VI-SLAM under the edge-assisted
155
+ architecture. After the uncertainty quantification, we study
156
+ the problem of selecting a subset of keyframes to minimize
157
+ the uncertainty. This problem is largely overlooked in the
158
+ literature, but is of great importance for tackling computation
159
+ and communication constraints in edge-assisted SLAM.
III. PRELIMINARIES

A. Graph Preliminaries

A directed multigraph is defined by the tuple of sets G = (V, E, C), where V = {v_1, . . . , v_|V|} is the set of nodes, E is the set of edges, and C is the set of edge categories. Let e = ((v_i, v_j), c) ∈ E denote an edge, where the nodes v_i, v_j ∈ V are the head and tail of e, and c ∈ C is the category of e. We let w_e be the weight of edge e. We allow multiple edges from v_i to v_j to exist, and denote the set of edges from v_i to v_j by E_{i,j}. Note that the edges in E_{i,j} are differentiated from each other by their category labels. The total edge weight from node v_i to node v_j is given by w_{i,j} = \sum_{e \in E_{i,j}} w_e, the sum of all edge weights from v_i to v_j.

The weighted Laplacian matrix L of graph G is a |V| × |V| matrix whose (i, j)-th element L_{i,j} is given by

L_{i,j} = \begin{cases} -w_{i,j}, & i \neq j \\ \sum_{e \in E_i} w_e, & i = j, \end{cases}

where E_i ⊆ E is the set of all edges whose head is node v_i. The reduced Laplacian matrix L̃ is obtained by removing an arbitrary node (i.e., removing the row and column associated with the node) from L.
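To make these definitions concrete, the sketch below (our own illustration, not code from the paper) builds the weighted Laplacian and reduced Laplacian for a small undirected graph with scalar edge weights. By the matrix-tree theorem, the determinant of the reduced Laplacian of a unit-weight graph counts its spanning trees, which is one common way such graph-connectivity quantities are evaluated:

```python
def weighted_laplacian(n, edges):
    """Build the n x n weighted Laplacian: L[i][j] = -w_ij off-diagonal,
    and the sum of incident edge weights on the diagonal (undirected graph)."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, w in edges:
        L[i][j] -= w
        L[j][i] -= w
        L[i][i] += w
        L[j][j] += w
    return L

def reduced_laplacian(L, anchor=0):
    """Remove the row and column of the anchor node."""
    return [[L[i][j] for j in range(len(L)) if j != anchor]
            for i in range(len(L)) if i != anchor]

# Triangle graph with unit weights: 3 nodes, 3 edges.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
Lr = reduced_laplacian(weighted_laplacian(3, edges))
# det of the 2x2 reduced Laplacian = number of spanning trees (matrix-tree theorem)
det = Lr[0][0] * Lr[1][1] - Lr[0][1] * Lr[1][0]
print(det)  # 3.0
```

The triangle has exactly three spanning trees (drop any one edge), matching det(L̃) = 3.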
B. Set Function

We define a set function f for a finite set V as a mapping f : 2^V → R that assigns a value f(S) to each subset S ⊆ V.

Submodularity. A set function f is submodular if f(L) + f(S) ⩾ f(L ∪ S) + f(L ∩ S) for all L, S ⊆ V.

Submodularity ratio. The submodularity ratio of a set function f with respect to a parameter s is

γ = \min_{L \subseteq V,\, S \subseteq V,\, |S| \leqslant s,\, x \in V \setminus (S \cup L)} \frac{f(L \cup \{x\}) - f(L)}{f(L \cup S \cup \{x\}) - f(L \cup S)},   (1)

where we define 0/0 := 1.

The cardinality-fixed maximization problem is

\max_{S \subseteq V,\, |S| = s} f(S).   (2)

The keyframe selection optimization is closely related to the cardinality-fixed maximization problem introduced above, which is an NP-hard problem [34]. However, for submodular set functions, there is an efficient greedy approach that comes close to the optimal value of (2), with a provable optimality gap. This result is formally stated in Theorem 1.

Theorem 1 ([34], [35]). Given a non-negative and monotonically increasing set function f with a submodularity ratio γ, let S# be the solution produced by the greedy algorithm (Algorithm 1) and S⋆ be the solution of (2). Then, f(S#) ⩾ (1 − exp(−γ)) f(S⋆).
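The greedy rule referenced by Theorem 1 (Algorithm 1 in the paper) can be sketched in a few lines. The weighted-coverage objective below is our own toy example of a monotone submodular function, not the paper's uncertainty-based objective:

```python
def greedy_max(f, ground, s):
    """Greedy algorithm for max f(S) s.t. |S| = s: repeatedly add the
    element with the largest marginal gain f(S ∪ {x}) − f(S)."""
    S = set()
    while len(S) < s:
        x_best = max((x for x in ground if x not in S),
                     key=lambda x: f(S | {x}) - f(S))
        S.add(x_best)
    return S

# Toy coverage function (monotone and submodular): f(S) = number of
# items covered by the chosen sets.
sets = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6, 7},
    "d": {1, 7},
}
f = lambda S: len(set().union(*(sets[k] for k in S))) if S else 0
chosen = greedy_max(f, sets.keys(), 2)
print(sorted(chosen), f(chosen))  # ['a', 'c'] 7
```

For a submodular f, γ = 1 and Theorem 1 recovers the classical (1 − 1/e) guarantee; here the greedy pick {a, c} is in fact optimal.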
C. SLAM Preliminaries

The components of SLAM systems include [2], [20], [21]:

Tracking. The tracking module detects 2D feature points (e.g., by extracting SIFT, SURF, or ORB descriptors) in the current frame. Each feature point corresponds to a 3D map point (e.g., a distinguishable landmark) in the environment. The tracking module uses these feature points to find correspondences with a previous reference frame. It also processes the IMU measurements. Based on the correspondences in feature points and the IMU measurements, it calculates the relative pose change between the selected reference frame and the current frame. The module also determines if this frame should be a keyframe based on a set of criteria such as the similarity to the previous keyframes [20].

Local and global mapping. The mapping module finds correspondences (of feature points) between the new keyframe and the other keyframes in the map. It then performs map optimizations, i.e., estimates the keyframe poses given the common feature points between the keyframes and the IMU measurements. Map optimizations are computationally expensive. In edge-assisted SLAM, global mapping runs on the server [11], [13]–[17].

Loop closing. By comparing the new keyframe to all previous keyframes, the module checks if the new keyframe is revisiting a place. If so (i.e., if a loop is detected), it establishes connections between the keyframe and all related previous ones, and then performs global map optimizations. Loop closing is computationally expensive and can be offloaded to the edge server in edge-assisted SLAM [11], [13]–[17].
IV. ADAPTSLAM SYSTEM ARCHITECTURE

The design of AdaptSLAM is shown in Fig. 1. The mobile device, equipped with a camera and an IMU, can communicate with the edge server bidirectionally. The mobile device and the edge server cooperatively run SLAM algorithms to estimate the mobile device's pose and a map of the environment. AdaptSLAM optimizes the SLAM performance under computation resource limits of the mobile device and communication resource limits between the mobile device and the edge server.

We split the modules between the mobile device and the edge server similar to [11], [13]–[18]. The mobile device offloads the loop closing and global map optimization modules to the edge server, while running real-time tracking and local mapping onboard. Unlike existing edge-assisted SLAM systems [11], [13]–[18], AdaptSLAM aims to optimally construct the local and global maps under the computation and communication resource constraints. The design of AdaptSLAM is mainly focused on two added modules, local map construction and global map construction, highlighted in purple in Fig. 1.

Algorithm 1 Greedy algorithm to solve (2)
1: S# ← ∅
2: while |S#| < s do
3:    x⋆ ← arg max_x f(S# ∪ {x}) − f(S#); S# ← S# ∪ {x⋆}

[Fig. 1: Overview of the AdaptSLAM system architecture. The figure shows the Mobile Device (IMU and image input, Tracking, Local Map Construction, Local Map Optimization) exchanging candidate/selected keyframes, detected loops, and the local and global maps with the Edge Server (Global Map Construction, Global Map Optimization, Loop Closing).]

In local map construction, due to the computation resource limits, the mobile device selects a subset of keyframes from candidate keyframes to build a local map. In global map construction, to adapt to the constrained wireless connection for uplink transmission, the mobile device also selects a subset of keyframes to be transmitted to the edge server to build a global map. AdaptSLAM optimally selects the keyframes to build local and global maps, minimizing the pose estimate uncertainty under the resource constraints.

Similar to [11], the selected keyframes are transmitted from the mobile device to the server, and the map after the global map optimization is transmitted from the server to the mobile device. For the uplink transmission, instead of the whole keyframe, the 2D feature points extracted from the keyframes are sent. For the downlink communication, the poses of the keyframes obtained by the global map optimization and the feature points of the keyframes are transmitted.
322
+ V. ADAPTSLAM SYSTEM MODEL
323
+ A. The Pose Graph and the Map
324
+ We divide time into slots of equal size of ∆t. We introduce
325
+ the pose graph and the map at time slot t that lasts for ∆t
326
+ seconds. For clarity of notation, we will omit the time index
327
+ below.
328
+ Definition 1 (Pose graph). For a given index set K =
329
+ {1, . . ., |K|}
330
+ (indexing
331
+ camera
332
+ poses
333
+ and
334
+ representing
335
+ keyframes), the pose graph is defined as the undirected multi-
336
+ graph G = (K, E, C), where K is the node set, E is the edge
337
+ set, and C = {IMU, vis} is the category set. Here, IMU stands
338
+ for the IMU edges, and vis stands for the covisibility edges.
339
+ Given a pose graph G = (K, E, C), there is a camera pose
340
+ Pn = (x, y, z, wx, wy, wz) for all n ∈ K, where the first
341
+ three entries are the 3-D positions and the last three ones are
342
+ the Euler angles (yaw, pitch and roll) representing the camera
343
+ orientation. Edges in E are represented as e = ((n, m), c) for
344
+ n, m ∈ K and c ∈ C. Two keyframes in K are connected
345
+ by a covisibility edge if there are 3D map points observed
346
+ in both keyframes. Two consecutive keyframes are connected
347
+ by an IMU edge if there are accelerometer and gyroscope
348
+ readings from one keyframe to another. There may exist both
349
+ a covisibility edge and an IMU edge between two keyframes.
350
For each e = ((n, m), c) ∈ E, we observe a noisy relative pose measurement between n and m, written as ∆e = Pm − Pn + xe, where xe is the measurement noise on edge e. The map optimization problem is to find the maximum likelihood estimates {˜Pn}n∈K of the actual camera poses {Pn}n∈K. For Gaussian-distributed edge noise, the map optimization problem is

    min over {˜Pn}n∈K of  Σe∈E (˜xe)⊤ Ie ˜xe,    (3)

where ˜xe = ∆e − ˜Pm + ˜Pn and Ie is the information matrix (i.e., the inverse covariance matrix) of the measurement error on e [36]. (˜xe)⊤ Ie ˜xe is the Mahalanobis norm [20], [21] of the estimated measurement noise for e with respect to Ie.
Below, we assume that the measurement noise xe is Gaussian distributed with isotropic covariance (as in [31], [37], [38]). We assume that the information matrix Ie can be characterized by a weight assigned to e [37], [39]. Specifically, Ie = we I, where we ⩾ 1 is the weight for e and I is a matrix that is constant across all measurements. We note that the relative measurement between keyframes n and m introduces the same information for both of them. We assume all weights we to be independent of each other for edges between different pairs of keyframes, as in [20], [21], [39], [40].

The map optimization problem in (3) is solved by standard methods such as the Levenberg-Marquardt algorithm implemented in the g2o [41] and Ceres [42] solvers, as in [20], [21].
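As a concrete illustration of (3), the sketch below solves a toy 1-D instance of the map optimization by linear least squares. The edges, weights, and measurement values are synthetic numbers chosen for illustration only; they are not taken from the paper.

```python
import numpy as np

# Toy 1-D version of (3): poses are scalars, each edge e = (n, m) observes
# Delta_e = P_m - P_n + noise, and I_e reduces to the scalar weight w_e.
edges = [(0, 1, 1.1, 2.0),   # (n, m, measured Delta_e, weight w_e)
         (1, 2, 0.9, 1.0),
         (0, 2, 2.1, 3.0)]
K = 3  # number of keyframes; node 0 is the anchor (pose fixed to 0)

# Normal equations of min sum_e w_e * (Delta_e - (P_m - P_n))^2.
# The system matrix is exactly the weighted graph Laplacian.
A = np.zeros((K, K))
b = np.zeros(K)
for n, m, delta, w in edges:
    A[n, n] += w; A[m, m] += w
    A[n, m] -= w; A[m, n] -= w
    b[m] += w * delta
    b[n] -= w * delta

# Anchor node 0 by solving the reduced system (drop first row/column).
P = np.zeros(K)
P[1:] = np.linalg.solve(A[1:, 1:], b[1:])
print(P)  # maximum likelihood pose estimates, with P[0] = 0
```

A nonlinear solver such as g2o or Ceres performs the same computation iteratively on the full 6-DoF parameterization; in this linear toy case one solve suffices.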
Definition 2 (Anchor). We say that a node is the anchor of the pose graph if the pose of the node is known.

The map (local or global) consists of the pose graph (in Definition 1) and map points in the environment. In this paper, we use the terms map and pose graph interchangeably. Without loss of generality, we also assume that the global (or local) map is anchored on the first node, as in [37], [39]. This assumption is made because SLAM can only estimate relative pose changes from the covisibility and inertial measurements; it cannot provide an absolute pose estimate in the global coordinate system.
B. The Local Map

Local map construction. The candidate keyframes are selected from camera frames according to the selection strategy in ORB-SLAM3 [20], and these candidate keyframes form the set K. Due to constrained computation resources, the mobile device selects a fixed keyframe set Kfixed and a local keyframe set Kloc from the candidate keyframes, where |Kfixed| ⩽ lf and |Kloc| ⩽ lloc. The fixed keyframe set Kfixed ⊆ Kg,user is selected from the global map Kg,user that was last transmitted from the edge server. The poses of keyframes in Kfixed act as fixed priors in the local map optimization, because the poses of keyframes in Kg,user have already been optimized in the global map optimization and hence have low uncertainty. The poses of keyframes in the local keyframe set Kloc ⊆ K \ Kg,user will be optimized according to the map optimization problem introduced above. The edges between keyframes in Kloc form the set Eloc, and the edges with one endpoint in Kloc and the other in Kfixed form the set El,f.
Local map optimization. After selecting Kloc in the local map construction, the local map optimization optimizes the estimated poses {˜Pn}n∈Kloc to minimize the sum of Mahalanobis norms Σe∈Eloc∪El,f (˜xe)⊤ Ie ˜xe. Note that in the local pose graph optimization, the keyframes in Kfixed are included in the optimization with their poses fixed. The local map optimization to solve (3) is

    min over {˜Pn}n∈Kloc of  Σe∈Eloc∪El,f (˜xe)⊤ Ie ˜xe.    (4)
C. The Global Map

Global map construction. Due to the limited bandwidth between the mobile device and the edge server, only a subset of the candidate keyframes is offloaded to the edge server to build a global map. The selection of keyframes to be offloaded will be optimized to minimize the pose estimation uncertainty of the global map while accounting for the underlying wireless network constraints.

The edge server maintains the global map, denoted as Kg,edge, holding all keyframes uploaded by the mobile device. The edges between keyframes in the global map Kg,edge constitute the set Eglob. Note that Kg,edge may differ from Kg,user, because the global map is large and it takes time to transmit the most up-to-date global map from the edge server to the mobile device.
Global map optimization. After selecting Kg,edge in the global map construction, the edge server performs the global map optimization to estimate the poses ˜Pn in Kg,edge and minimize the sum of Mahalanobis norms Σe∈Eglob (˜xe)⊤ Ie ˜xe. Specifically, the edge server solves (3) with E = Eglob and K = Kg,edge, i.e., the global map optimization is to solve

    min over {˜Pn}n∈Kg,edge of  Σe∈Eglob (˜xe)⊤ Ie ˜xe.    (5)
VI. PROBLEM FORMULATION

AdaptSLAM aims to efficiently select keyframes to construct optimal local and global maps, i.e., we select keyframes in Kloc and Kfixed for the local map and Kg,edge for the global map. From §IV, after constructing the optimal local and global maps, the map optimization can be performed using standard algorithms [41], [42]. We construct optimal local and global maps by minimizing the uncertainty of the keyframes' estimated poses. Hence, we represent and quantify the uncertainty in §VI-A, and formulate the uncertainty minimization problems in §VI-B.
A. Uncertainty Quantification

Let pn = ˜Pn − Pn denote the pose estimate error of keyframe n. The estimated measurement noise can be rewritten as ˜xe = pn − pm + xe = pn,m + xe, where pn,m = pn − pm. We stack all pn, n ∈ K, and obtain the pose estimate error vector w = (p⊤1, p⊤2, · · · , p⊤|K|). We rewrite the objective function of the map optimization in (3) as

    Σe∈E (˜xe)⊤ Ie ˜xe = Σe=((n,m),c)∈E p⊤n,m Ie pn,m + 2 Σe=((n,m),c)∈E p⊤n,m Ie xe + Σe∈E x⊤e Ie xe.

If we can rewrite the quadratic term Σe=((n,m),c)∈E p⊤n,m Ie pn,m in the form w Iw w⊤, where Iw is called the information matrix of the pose graph, then the uncertainty of the pose graph is quantified by − log det (Iw) according to the D-optimality criterion [31]–[33].²
We denote the pose estimate error vectors for the global and local maps as wg = (p⊤u1, · · · , p⊤u|Kg,edge|) and wl = (p⊤r1, · · · , p⊤r|Kloc|), where u1, · · · , u|Kg,edge| are the keyframes in Kg,edge, and r1, · · · , r|Kloc| are the keyframes in Kloc. The first pose in the global and local pose graphs is known (pu1 = 0, pr1 = 0). We rewrite the quadratic terms of the objective functions of the global and local map optimizations in (5) and (4) as Σe=((n,m),c)∈Eglob p⊤n,m Ie pn,m = wg Iglob(Kg,edge) w⊤g (or Σe=((n,m),c)∈Eloc∪El,f p⊤n,m Ie pn,m = wl Iloc(Kloc, Kfixed) w⊤l), where Iglob(Kg,edge) and Iloc(Kloc, Kfixed) are called the information matrices of the global and local maps and will be derived later (in Definition 3 and Lemmas 1 and 2).

Definition 3 (Uncertainty). The uncertainty of the global (or local) pose graph is defined as − log det (˜Iglob(Kg,edge)) (or − log det (˜Iloc(Kloc, Kfixed))), where ˜Iglob(Kg,edge) and ˜Iloc(Kloc, Kfixed) are obtained by removing the first row and first column of the information matrices Iglob(Kg,edge) and Iloc(Kloc, Kfixed).
From Definition 3, the uncertainty quantification is based on the global and local map optimizations introduced in §V-C and §V-B. After quantifying the uncertainty, we will later (in §VI-B) optimize the local and global map construction, which in turn minimizes the uncertainty of the poses obtained from the local and global map optimizations.
Lemma 1 (Uncertainty of global pose graph). For the global map optimization, the uncertainty is calculated as − log det (˜Iglob(Kg,edge)), where ˜Iglob(Kg,edge) = ˜Lglob ⊗ I, with ˜Lglob being the matrix obtained by deleting the first row and column of the Laplacian matrix Lglob, and ⊗ being the Kronecker product. The (i, j)-th element of Lglob is given by

    [Lglob]i,j = { −Σe=((ui,uj),c)∈Eg,edge we,   i ≠ j;
                   Σe=((ui,q),c)∈Eg,edge, ui≠q we,   i = j }.    (6)

Proof. See Appendix A. Proof sketch: The proof follows from the global map optimization formulation in §V-C and the definition of ˜Iglob(Kg,edge).
² Common approaches to quantifying uncertainty in SLAM use real scalar functions of the maximum likelihood estimator covariance matrix [43]. Among them, D-optimality (the determinant of the covariance matrix) [37], [39] captures the uncertainty due to all the elements of a covariance matrix and has well-known geometrical and information-theoretic interpretations [44].
From Lemma 1, the uncertainty of the global pose graph can be calculated from the reduced Laplacian matrix ˜Lglob. By the relationship between the reduced Laplacian matrix and the tree structure [45], the uncertainty decreases with the logarithm of the weighted number of spanning trees in the global pose graph. Similar conclusions have been drawn for 2D pose graphs [31] and for 3D pose graphs with only covisibility edges [37], [39], where the device can move in the 2D plane and in 3D space, respectively. We extend these results to VI-SLAM, where the global pose graph is a multigraph that may contain both a covisibility edge and an IMU edge between two keyframes.
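The spanning-tree connection can be checked numerically. The sketch below builds the weighted Laplacian of (6) for a toy 3-keyframe multigraph (synthetic weights; 1-D poses, so the constant matrix I reduces to the scalar 1) and verifies, via the weighted matrix-tree theorem, that det(˜Lglob) equals the weighted number of spanning trees, so the uncertainty is −log of that count.

```python
import numpy as np
from itertools import combinations

# Multigraph on 3 keyframes: a parallel covisibility edge and IMU edge
# between nodes 0 and 1 (their weights simply add in the Laplacian).
edges = [((0, 1), 2.0), ((0, 1), 5.0),
         ((1, 2), 3.0), ((0, 2), 1.0)]
K = 3

# Weighted Laplacian as in (6).
L = np.zeros((K, K))
for (n, m), w in edges:
    L[n, n] += w; L[m, m] += w
    L[n, m] -= w; L[m, n] -= w

L_red = L[1:, 1:]           # delete first row/column (anchor on node 0)
dt = np.linalg.det(L_red)
uncertainty = -np.log(dt)   # D-optimality uncertainty for 1-D poses

# Weighted matrix-tree theorem: sum the weight products over all spanning
# trees, counting parallel edges separately.
tree_weight = 0.0
for tree in combinations(range(len(edges)), K - 1):
    nodes = set()
    for i in tree:
        nodes |= set(edges[i][0])
    if len(nodes) == K:  # K-1 edges touching all K nodes form a spanning tree
        p = 1.0
        for i in tree:
            p *= edges[i][1]
        tree_weight += p
print(dt, tree_weight)
```

Both quantities come out equal (31 in this example), which is why selecting keyframes that create many well-weighted loops lowers the map uncertainty.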
Lemma 2 (Uncertainty of local pose graph). For the local map, the uncertainty is − log det (˜Iloc(Kloc, Kfixed)), where ˜Iloc(Kloc, Kfixed) = ˜Lloc ⊗ I, with ˜Lloc being the matrix obtained by deleting the first row and the first column of Lloc. The (i, j)-th element of Lloc (of size |Kloc| × |Kloc|) is given by

    [Lloc]i,j = { −Σe=((ri,rj),c)∈Eloc we,   i ≠ j;
                   Σe=((ri,q),c)∈El,f∪Eloc, q≠ri we,   i = j }.    (7)

Proof. See Appendix B. Proof sketch: Setting pn = 0 for n ∈ Kfixed (the fixed keyframes have poses with 'zero uncertainty'), the proof follows from the local pose graph optimization formulation in §V-B and the definition of ˜Iloc(Kloc, Kfixed).
From Lemma 2, the uncertainty of the local map is proportional to the uncertainty of the pose graph G anchored on the first node in Kloc and on all nodes in Kfixed, where G's node set is Kfixed ∪ Kloc and its edge set includes all measurements between any two nodes in Kfixed ∪ Kloc. Note that the keyframe poses in Kfixed are optimized on the edge server and transmitted to the mobile device, and they are treated as constants in the local pose graph optimization. From the uncertainty's perspective, adding fixed keyframes in Kfixed is equivalent to anchoring these keyframe poses (i.e., deleting the rows and columns corresponding to the anchored nodes in the Laplacian matrix of graph G). In addition, from Lemma 2, although their poses are fixed, the anchored nodes still reduce the uncertainty of the pose graph. Hence, apart from Kloc, we also select the anchored keyframe set Kfixed to minimize the uncertainty.
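The effect described above — fixed keyframes contribute their El,f edge weights to the diagonal of Lloc via (7) and thereby lower the uncertainty — can be seen on a toy example with synthetic weights and 1-D poses:

```python
import numpy as np

def local_laplacian(w_f):
    # Per (7): off-diagonal entries come from E_loc only; the diagonal of
    # the local keyframe connected to the fixed set additionally counts
    # its E_{l,f} weight w_f.  Three local keyframes, unit covisibility
    # weights between them (synthetic values).
    return np.array([[2., -1., -1.],
                     [-1., 2., -1.],
                     [-1., -1., 2. + w_f]])

def uncertainty(L):
    # Anchor the first local keyframe: drop its row and column.
    return -np.log(np.linalg.det(L[1:, 1:]))

u_without = uncertainty(local_laplacian(0.0))  # no fixed keyframes
u_with = uncertainty(local_laplacian(4.0))     # one fixed keyframe, weight 4
print(u_without, u_with)
```

Adding the fixed keyframe raises the determinant of the reduced Laplacian (here from 3 to 11), so the uncertainty drops even though the fixed pose itself is never re-optimized.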
B. Uncertainty Minimization Problems

We now formulate optimization problems whose objectives are to minimize the uncertainty of the local and global maps. For the local map optimization, under the computation resource constraints, we solve Problem 1 for each keyframe k. For the global map optimization, under the communication resource constraints, we solve Problem 2 to adaptively offload keyframes to the edge server.
Problem 1 (Local map construction).

    max over Kloc, Kfixed of  log det (˜Iloc(Kloc ∪ {k}, Kfixed))    (8)
    s.t.  |Kloc| ⩽ lloc,  Kloc ⊆ K \ Kg,user    (9)
          |Kfixed| ⩽ lf,  Kfixed ⊆ Kg,user.    (10)

The objective of Problem 1 is equivalent to minimizing the uncertainty of the local map. Constraint (9) means that the size of Kloc is constrained to reduce the computational complexity of the local map optimization, and that the keyframes to be optimized in the local map are selected from keyframes that are not in Kg,user. Constraint (10) means that the size of Kfixed is constrained, and that the fixed keyframes are selected from Kg,user, i.e., from keyframes that were previously optimized on and transmitted from the edge server.
Problem 2 (Global map construction).

    max over K′ ⊆ K \ Kg,edge of  log det (˜Iglob(Kg,edge ∪ K′))    (11)
    s.t.  d |K′| ⩽ D.    (12)

The objective of Problem 2 is equivalent to minimizing the uncertainty of the global map. K \ Kg,edge is the set of keyframes that have not been offloaded to the server, and we select a subset of keyframes, K′, from K \ Kg,edge. Constraint (12) guarantees that keyframes cannot be offloaded from the device to the server at a higher bitrate than the available channel capacity, where D is the channel capacity constraint representing the maximum number of bits that can be transmitted in a given transmission window. We assume that the data size d of each keyframe is the same, based on the observation that the data size is relatively consistent across keyframes in popular public SLAM datasets [46], [47].
VII. LOCAL AND GLOBAL MAP CONSTRUCTION

We analyze the approximate submodularity properties of the map construction problems, and propose low-complexity algorithms to efficiently construct the local and global maps.
A. Local Map Construction

The keyframes in the local map include those in the two disjoint sets Kloc and Kfixed. To efficiently solve Problem 1, we decompose it into two problems aiming at minimizing the uncertainty: Problem 3, which selects the keyframes in Kloc, and Problem 4, which selects the keyframes in Kfixed. We obtain the optimal local keyframe set K⋆loc in Problem 3. Based on K⋆loc, we then obtain the optimal fixed keyframe set K⋆fixed in Problem 4. We will compare the solutions to Problems 3 and 4 with the optimal solution to Problem 1 in §VIII to show that the performance loss induced by the decomposition is small.
Problem 3.

    K⋆loc = arg max over Kloc of  log det (˜Iloc(Kloc ∪ {k}, ∅))   s.t. (9).

Problem 4.

    K⋆fixed = arg max over Kfixed of  log det (˜Iloc(K⋆loc ∪ {k}, Kfixed))   s.t. (10).
1) The Selection of Local Keyframe Set Kloc: We first solve Problem 3. It is a nonsubmodular optimization problem with constraints; such problems are NP-hard and generally difficult to solve with an approximation ratio [22]. Hence, we decompose Problem 3 into subproblems (Problems 5 and 6) that together are equivalent to the original Problem 3 and can be approximately solved with a low-complexity algorithm.

In Problem 5, assume that we have already selected a keyframe subset Kbase from K \ Kg,user (of size lb ≜ |Kbase| ⩽ lloc), and we aim to further select a keyframe set Kadd to be added to Kbase to minimize the local map uncertainty. Rewriting the objective as Unc(Kadd ∪ Kbase ∪ {k}) ≜ − log det (˜Iloc(Kadd ∪ Kbase ∪ {k}, ∅)), the problem is to obtain the optimal Kadd (denoted as OPTadd(Kbase)) given Kbase:

Problem 5.

    OPTadd(Kbase) = arg max over Kadd of  −Unc(Kadd ∪ Kbase ∪ {k})   s.t.  |Kadd| ⩽ lloc − lb.

After obtaining the solutions (i.e., OPTadd(Kbase)) to Problem 5 for all possible Kbase of size lb, we obtain the optimal Kbase (denoted as K⋆base) in Problem 6.

Problem 6.

    K⋆base = arg max over Kbase of  −Unc(OPTadd(Kbase) ∪ Kbase ∪ {k})   s.t.  |Kbase| = lb.
Lemma 3. Given lb, the solutions to Problems 5 and 6, i.e., K⋆base and OPTadd(K⋆base), give us the solution K⋆loc to Problem 3. Specifically, K⋆loc = K⋆base ∪ OPTadd(K⋆base).

Proof. The proof is straightforward and hence omitted.
We can thus obtain K⋆loc in Problem 3 by solving Problems 5 and 6. We will show that the objective function of Problem 5 is 'close to' a submodular function when the size of the keyframe set Kbase is large. In this case, Problem 5 can be efficiently solved using a greedy algorithm with an approximation ratio. When |Kbase| is small, we need to compare the objective function for different combinations of Kbase and Kadd.
Lemma 4. When wmax / (|Kbase| wmin) < 1, the submodularity ratio γ of the objective function in Problem 5 is lower bounded by

    γ ⩾ 1 + (1/ϑ) log (1 − 4|Kadd|² w²max / (|Kbase| wmin − wmax)),    (13)

where ϑ = min over m∈Kadd of Σn∈Kbase log wn,m, wmax = max over n,m∈Kbase∪Kadd of wn,m, and wmin = min over n,m∈Kbase∪Kadd of wn,m. γ is close to 1 when |Kbase| is significantly larger than |Kadd|.
Algorithm 2 Selecting local keyframe set Kloc in the local map (top-h greedy-based algorithm)
1: Θ ← {∅};
2: while |Λ| ⩽ lloc do
3:    if |Λ| ⩽ lthr then h ← H else h ← 1;
4:    Select the top-h highest-scoring combinations of Λ, Λ ∈ Θ, and n, n ∈ K \ Kg,user, that minimize Unc(Λ ∪ {n, k}). Unc(Λ ∪ {n, k}) is calculated using the computation reuse algorithm in Algorithm 3;
5:    Update Θ as the set of the h highest-scoring combinations of Λ and n. Each element of Θ is a set (i.e., Λ ∪ {n}) corresponding to one combination;
6: K⋆loc ← arg min over Λ∈Θ of Unc(Λ ∪ {k}).
Proof. See Appendix C. Proof sketch: Following the definition of γ in (1), we first prove that the denominator in (1), denoted as log det(Mden), is lower bounded by ϑ. Denoting the numerator in (1) as log det(Mnum), we show that log det(Mnum) ⩾ log det(Mden) + log (1 − 4|Kadd|² w²max / (|Kbase| wmin − wmax)), by proving that the absolute values of all elements of Mnum are bounded.
From Lemma 4, the objective function in Problem 5 is 'close to' a submodular function when the size of the existing keyframe set (i.e., |Kbase|) is much larger than |Kadd|. Hence, we can use the greedy algorithm to approximately solve Problem 5. According to Theorem 1, the solution obtained by the greedy algorithm for Problem 5, denoted by OPT#add(Kbase), has the approximation guarantee OPT#add(Kbase) ⩾ (1 − exp(−γ)) OPTadd(Kbase).

Based on this analysis of the properties of Problems 5 and 6, we solve Problem 3 to select the local keyframe set Kloc using Algorithm 2 (the top-h greedy-based algorithm). Θ is the set of candidate keyframe sets that minimize the local map uncertainty, and we maintain only h keyframe sets to save computation resources. Λ, Λ ∈ Θ, denotes an element of Θ and represents one candidate keyframe set. When the size of Λ is smaller than a threshold lthr (|Λ| ⩽ lthr), we select the top-H (H > 1) highest-scoring combinations of Λ and n, n ∈ K \ Kg,user, that minimize Unc(Λ ∪ {k, n}). When |Λ| gets larger, we select only the highest-scoring combination. The reasons are as follows. Λ can be seen as the existing keyframe set Kbase. According to Lemma 4, when the size of the existing keyframe set (here, |Λ|) is small, there is no guarantee that Unc(Kadd ∪ Kbase ∪ {k}) is close to a submodular function (i.e., the submodularity ratio may be much smaller than 1). Hence, we need to try different combinations of Λ and n to search for the combination that minimizes the uncertainty after each iteration. As |Λ| grows, the submodularity ratio approaches 1, and a greedy algorithm can achieve an η approximation (η = 1 − exp(−γ), γ → 1). In this case, we apply the greedy algorithm and keep only the combination that achieves the minimal uncertainty at each step.
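The top-h strategy is essentially a beam search that narrows to plain greedy once the partial sets grow large. The sketch below is a simplified, self-contained version: the uncertainty function, weight matrix, and parameter values are illustrative stand-ins, and the computation reuse of Algorithm 3 is omitted for clarity.

```python
import numpy as np

def top_h_greedy(candidates, unc, l_loc, H, l_thr):
    """Simplified sketch of Algorithm 2: keep the h best partial keyframe
    sets (h = H while the sets are small, then h = 1, i.e., plain greedy)."""
    beam = [frozenset()]                      # Theta: surviving partial sets
    for _ in range(l_loc):
        h = H if len(next(iter(beam))) <= l_thr else 1
        expansions = [(unc(s | {n}), s | {n})
                      for s in beam for n in candidates if n not in s]
        expansions.sort(key=lambda t: t[0])   # lowest uncertainty first
        seen, new_beam = set(), []
        for u, s in expansions:               # keep the h distinct best sets
            if s not in seen:
                seen.add(s)
                new_beam.append(s)
            if len(new_beam) == h:
                break
        beam = new_beam
    return min(beam, key=unc)

# Toy uncertainty: -log det of the reduced Laplacian of the subgraph
# induced by the selected nodes plus the anchor node 0 (synthetic weights).
W = np.array([[0.0, 3.0, 1.0, 0.5],
              [3.0, 0.0, 2.0, 0.2],
              [1.0, 2.0, 0.0, 4.0],
              [0.5, 0.2, 4.0, 0.0]])

def unc(sel):
    nodes = [0] + sorted(sel)
    Wsub = W[np.ix_(nodes, nodes)]
    L = np.diag(Wsub.sum(axis=1)) - Wsub
    return -np.log(np.linalg.det(L[1:, 1:])) if len(nodes) > 1 else 0.0

best = top_h_greedy(candidates={1, 2, 3}, unc=unc, l_loc=2, H=2, l_thr=1)
print(sorted(best))
```

On this toy graph the beam keeps the {node 1} and {node 2} singletons alive and ends up with the pair that maximizes the determinant, which a 1-wide greedy run could miss on less forgiving instances.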
Algorithm 3 Computation reuse algorithm
1: Input: det(A), A⁻¹;
2: B ← A⁻¹. Calculate BiB⊤i, i = 1, · · · , |Λ|;
3: Calculate (A′)⁻¹ using (15). Calculate det(A′) using (16). Calculate det(˜I(Λ ∪ {n, k})) using (14).
2) Computation Reuse Algorithm: We use the computation reuse algorithm (Algorithm 3) to speed up Algorithm 2. We observe that for different n, n ∈ K \ Kg,user, only a limited number (3|Λ|+1) of elements of the matrix ˜I(Λ ∪ {n, k}) differ. Calculating the log-determinant of a (|Λ|+1) × (|Λ|+1) matrix ˜I(Λ ∪ {n, k}) has a high computational complexity (of O((|Λ|+1)³)) [48]. Hence, instead of computing the objective function for each n from scratch, we reuse parts of the computation results across different n.

Letting A ≜ ˜I(Λ ∪ {k}) denote the information matrix of the local map in the |Λ|-th iteration (of Algorithm 2), the information matrix in the (|Λ|+1)-th iteration is

    ˜I(Λ ∪ {n, k}) = [ A + diag(a)   a⊤ ;  a   d ],

where a = (a1, a2, · · · , a|Λ|) with ai = wλi,n, λi is the i-th element of Λ, and d = wk,n + Σi=1..|Λ| ai.
+ ai.
951
+ We aim to calculate det(˜I (Λ ∪ {n, k})) using the calcula-
952
+ tion of det(A) and A−1 from the previous iteration. Letting
953
+ A′ ≜ A + diag(a), det(˜I (Λ ∪ {n, k})) is calculated by
954
+ det(˜I (Λ ∪ {n, k})) = (d − a(A′)−1a⊤) det(A′).
955
+ (14)
956
Next, we efficiently calculate (A′)⁻¹ and det(A′) to get det(˜I(Λ ∪ {n, k})). We can rewrite A′ as A′ = A + Σi=1..|Λ| β⊤i βi, where βi = (0, · · · , √ai, · · · , 0) with √ai in the i-th position. According to the Sherman–Morrison formula [49], (A′)⁻¹ is given by

    (A′)⁻¹ ≈ B − Σi=1..|Λ| (ai / (1 + ai Bi,i)) BiB⊤i,    (15)

where B = A⁻¹, Bi,i is the (i, i)-th element of B, and Bi is the i-th column vector of B. Both B and the outer products BiB⊤i in (15) are reused: they are computed only once and used for all n, n ∈ K \ Kg,user, which greatly reduces the computational cost. According to the rank-1 update of the determinant [49], det(A′) can be written as

    det(A′) = det(A) (1 + a1 B1,1) { 1(|Λ| = 1) + 1(|Λ| > 1) × Πi=2..|Λ| (1 + ai [ B − Σj=1..i−1 aj BjB⊤j / (1 + aj Bj,j) ]i,i) }.    (16)

The term B − Σj=1..i−1 aj BjB⊤j / (1 + aj Bj,j) is already calculated in (15), which reduces the computational complexity. Substituting (15) and (16) into (14), we get the final result for det(˜I(Λ ∪ {n, k})).
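The identities behind Algorithm 3 can be sanity-checked with a few lines of linear algebra. The sketch below verifies the block-determinant identity (14) on a random SPD matrix, and applies the Sherman–Morrison rank-1 updates sequentially; note that (15) as stated reuses the same B across all updates and is therefore an approximation, while the sequential form checked here is exact. All matrices and weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # synthetic SPD information matrix
a = rng.uniform(0.5, 2.0, size=n)    # edge weights to the new keyframe
d = 1.0 + a.sum()                    # new diagonal entry: w_{k,n} + sum a_i

# Block-determinant identity (14): det([[A + diag(a), a^T], [a, d]]).
A_prime = A + np.diag(a)
full = np.block([[A_prime, a[:, None]], [a[None, :], np.array([[d]])]])
lhs = np.linalg.det(full)
rhs = (d - a @ np.linalg.inv(A_prime) @ a) * np.linalg.det(A_prime)

# Sequential Sherman-Morrison rank-1 updates recover (A + diag(a))^{-1}
# and its determinant from B = A^{-1} and det(A), in the spirit of (15)-(16).
B = np.linalg.inv(A)
detA = np.linalg.det(A)
for i in range(n):
    Bi = B[:, i]                       # i-th column of the current inverse
    detA *= 1.0 + a[i] * B[i, i]       # rank-1 determinant update
    B = B - (a[i] / (1.0 + a[i] * B[i, i])) * np.outer(Bi, Bi)
print(lhs, rhs)
```

After the loop, B equals (A')⁻¹ and detA equals det(A'), so only the last row/column of the enlarged information matrix ever needs fresh computation per candidate n.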
+ into (14), we get the final results of det(˜I (Λ ∪ {n, k})).
1030
+ The computation complexity of different algorithms.
1031
+ If we select keyframes in Kloc using a brute-force algo-
1032
+ rithm based on exhaustive enumeration of combinations of
1033
+
1034
+ keyframes in Kloc, the complexity is O
1035
+ �� ρ
1036
+ lloc
1037
+
1038
+ l3
1039
+ loc
1040
+
1041
+ , where
1042
+ ρ = |K \ Kg,user| is the number of keyframes that have not
1043
+ been offloaded to the edge server. Without computation reuse,
1044
+ the computation complexity of the proposed top-h greedy-
1045
+ based algorithm is O(Hρl4
1046
+ loc). With computation reuse, it is
1047
+ reduced to O(Hl4
1048
+ loc) + O(Hρl3
1049
+ loc). Since we only keep lloc
1050
+ keyframes in Kloc of the local map and a small H in Algo-
1051
+ rithm 2 to save computation resources, i.e., ρ ≫ lloc > H,
1052
+ the proposed greedy-based algorithm with computation reuse
1053
+ significantly reduces the computational complexity.
1054
3) The Selection of Fixed Keyframe Set Kfixed: After selecting the local keyframe set Kloc by solving Problem 3, we solve Problem 4 to select the fixed keyframe set.

Lemma 5. Problem 4 is non-negative, monotone, and submodular with a cardinality constraint.

Proof sketch. Non-negativity and monotonicity are straightforward to prove. For submodularity, we can prove that

    det(˜Iloc(K⋆loc ∪ {k}, L)) det(˜Iloc(K⋆loc ∪ {k}, S)) / [ det(˜Iloc(K⋆loc ∪ {k}, L ∪ S)) det(˜Iloc(K⋆loc ∪ {k}, ∅)) ] ⩾ 1,

using the property that det(M) ⩾ det(N) holds for positive semidefinite matrices M, N when M − N is positive semidefinite.

Lemma 5 indicates that the problem can be approximately solved with the greedy method in Algorithm 1 [34]. In each iteration, the algorithm selects one keyframe from Kg,user to be added to the fixed keyframe set Kfixed. The approximation ratio η = 1 − exp(−1) guarantees that the worst-case performance of the greedy algorithm cannot be far from optimal.
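Algorithm 1 itself is not shown in this excerpt; the sketch below is a generic greedy routine of the kind Lemma 5 licenses, applied to a toy monotone submodular objective (log det of a regularized Gram submatrix) with synthetic data, and checked against the (1 − 1/e) guarantee.

```python
import numpy as np

def greedy_select(ground_set, f, budget):
    """Generic greedy maximization of a monotone set function f under a
    cardinality constraint; achieves the (1 - 1/e) approximation when f
    is submodular, as in Lemma 5."""
    selected = set()
    for _ in range(budget):
        best = max((n for n in ground_set if n not in selected),
                   key=lambda n: f(selected | {n}), default=None)
        if best is None:
            break
        selected.add(best)
    return selected

# Toy objective: log det(I + S_A) over subsets A of candidate keyframes,
# a classic monotone submodular function (S is a synthetic Gram matrix).
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
S = X @ X.T

def f(sel):
    idx = sorted(sel)
    return np.linalg.slogdet(np.eye(len(idx)) + S[np.ix_(idx, idx)])[1] if idx else 0.0

chosen = greedy_select(range(5), f, budget=2)
print(sorted(chosen))
```

The same skeleton covers both the Kfixed selection here and any other budgeted selection whose gain function is (approximately) submodular.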
B. Global Map Construction

We use a low-complexity algorithm to solve Problem 2 and construct the global map. The objective function of Problem 2 can be rewritten as −Unc(Kg,edge ∪ K′), which has the same structure as that of Problem 3. Problems 2 and 3 both add keyframes to existing keyframe sets to construct a pose graph and optimize the keyframe poses in that pose graph. Hence, Algorithms 2 and 3 can be used to solve Problem 2: in Algorithm 2, lloc is replaced by D/d, and K \ Kg,user is replaced by K \ Kg,edge. Calculating the uncertainty of a large global map is computationally intensive, and hence the proposed low-complexity algorithm is essential to reducing the computational load on the mobile device.
VIII. EVALUATION

We implement AdaptSLAM on the open-source ORB-SLAM3 [20] framework, which typically outperforms older SLAM methods [25], [26], with both V- and VI- configurations. The edge server modules run on a Dell XPS 8930 desktop with an Intel(R) Core(TM) i7-9700K CPU@3.60GHz and an NVIDIA GTX 1080 GPU under Ubuntu 18.04 LTS. In §VIII-A, the mobile device modules run on the same desktop under simulated computation and network constraints. In §VIII-B, the mobile device modules are implemented on a laptop (with an AMD Ryzen 7 4800H CPU and an NVIDIA GTX 1660 Ti GPU), using a virtual machine with a 4-core CPU and 8 GB of RAM. The weight we, e = ((n, m), c), is set as the number of common map features visible in keyframes n and m for covisibility edges, similar to [20], [50], and the IMU edge weight is set to a large value (i.e., 500), as the existence of IMU measurements greatly reduces the tracking error. We empirically set H = 5 and lthr = 30 in Algorithm 2 to ensure low complexity and good performance at the same time.
Metric. We use the root mean square (RMS) absolute trajectory error (ATE) as the SLAM performance metric, as is common in the literature [20], [51]. ATE is the absolute distance between the estimated and ground truth trajectories.
Baseline methods. We compare AdaptSLAM with 5 baselines. Random selects keyframes randomly. DropOldest drops the oldest keyframes when the number of keyframes is constrained. ORBBuf, proposed in [28], chooses the keyframes that maximize the minimal edge weight between adjacent selected keyframes. BruteForce examines all combinations of keyframes to search for the optimal one that minimizes the uncertainty (in Problems 1 and 2). BruteForce can achieve better SLAM performance than AdaptSLAM but has exponential computational complexity, as shown in §VII-A. In the original ORB-SLAM3, the local map includes all covisibility keyframes, and the global map includes all keyframes. The original ORB-SLAM3 thus also achieves better SLAM performance, and consumes more computation resources, than AdaptSLAM, as the numbers of keyframes in both the local and global maps are large.
Datasets. We evaluate AdaptSLAM on public SLAM datasets containing V and VI sequences, including TUM [47] and EuRoC [46]. The difficulty of a SLAM sequence depends on the extent of device mobility and scene illumination. We use EuRoC sequences V101 (easy), V102 (medium), and V103 (difficult), and the difficult TUM VI room1 and room6 sequences. We report results over 10 trials for each sequence.
A. Simulated Computation and Network Constraints

First, we limit the number of keyframes in the local map under computation constraints, and all keyframes are used to build the global map without communication constraints. Second, we maintain local maps as in the default settings of ORB-SLAM3, and limit the number of keyframes in the global map under constrained communications, where D in Problem 2 is set according to the available bandwidth.
+ Local map construction. We demonstrate the RMS ATE of
1145
+ different keyframe selection methods, for different V-SLAM
1146
+ (Fig. 2a) and VI-SLAM (Fig. 2b) sequences. The size of
1147
+ the local map is limited to 10 keyframes and 9 anchors
1148
+ in V-SLAM sequences, and 25 keyframes and 10 anchors
1149
+ in VI-SLAM sequences (to ensure successful tracking while
1150
+ keeping a small local map). AdaptSLAM reduces the RMS
1151
+ ATE compared with Random, DropOldest, and ORBBuf by
1152
+ more than 70%, 62%, and 42%, averaged over all sequences.
1153
+ The performance of AdaptSLAM is close to BruteForce, which
1154
+ demonstrates that our greedy-based algorithms yield near-
1155
+ optimal solutions, with substantially reduced computational
1156
+ complexity. Moreover, the performance of AdaptSLAM is close
1157
+ to the original ORB-SLAM3 (less than 0.05 m RMS ATE
1158
+
1159
+ V101
1160
+ V102
1161
+ V103
1162
+ room1
1163
+ room6
1164
+ Sequence
1165
+ 0.0
1166
+ 0.1
1167
+ 0.2
1168
+ 0.3
1169
+ 0.4
1170
+ 0.5
1171
+ 0.6
1172
+ RMS ATE (m)
1173
+ Random
1174
+ DropOldest
1175
+ ORBBuf
1176
+ BruteForce
1177
+ ORB-SLAM3
1178
+ AdaptSLAM
1179
+ (a) V-SLAM
1180
+ V101
1181
+ V102
1182
+ V103
1183
+ room1
1184
+ room6
1185
+ Sequence
1186
+ 0.0
1187
+ 0.1
1188
+ 0.2
1189
+ 0.3
1190
+ 0.4
1191
+ 0.5
1192
+ 0.6
1193
+ RMS ATE (m)
1194
+ Random
1195
+ DropOldest
1196
+ ORBBuf
1197
+ BruteForce
1198
+ ORB-SLAM3
1199
+ AdaptSLAM
1200
+ (b) VI-SLAM
1201
+ Fig. 2: RMS ATE for 6 keyframe selection methods in the local map construction for 5
1202
+ sequences in EuRoC and TUM.
1203
+ [Figure omitted in text extraction: bar chart of RMS ATE (m) for Random, DropOldest,
+ ORBBuf, and AdaptSLAM with lloc = 10, 20, and 30.]
+ Fig. 3: RMS ATE for different sizes of local keyframe set (for EuRoC V102).
1219
+ [Figure omitted in text extraction: grouped bar charts of RMS ATE (m) per sequence
+ (V101, V102, V103, room1, room6) for Random, DropOldest, ORBBuf, BruteForce,
+ ORB-SLAM3, and AdaptSLAM; panels (a) V-SLAM and (b) VI-SLAM.]
+ Fig. 4: RMS ATE for 6 keyframe selection methods in the global map construction for
+ 5 sequences in EuRoC and TUM.
1264
+ [Figure omitted in text extraction: bar chart of RMS ATE (m) for Random, DropOldest,
+ ORBBuf, and AdaptSLAM at 40 Mbps, 80 Mbps, and without bandwidth limitation.]
+ Fig. 5: RMS ATE for different available bandwidth for offloading
+ keyframes (for EuRoC V102).
1284
+ difference for all sequences) even though the size of the local
1285
+ map is reduced by more than 75%.
1286
+ The influence of the number lloc of keyframes in the local
1287
+ map on the RMS ATE for different methods is shown in
1288
+ Fig. 3. We present the results for EuRoC V102 (of medium
1289
+ difficulty), which are representative. When lloc is reduced
1290
+ from 30 to 10, AdaptSLAM increases the RMS ATE by only
1291
+ 6.7%, to 0.09 m, as compared to 0.37, 0.16, and 0.12 m
1292
+ for, correspondingly, Random, DropOldest, and ORBBuf. This
1293
+ indicates that AdaptSLAM achieves low tracking error under
1294
+ stringent computation resource constraints.
1295
+ Global map construction. First, we examine the case
1296
+ where only half of all keyframes are offloaded to build a
1297
+ global map, for V-SLAM (Fig. 4a) and VI-SLAM (Fig. 4b)
1298
+ sequences. AdaptSLAM reduces the RMS ATE compared with
1299
+ the closest baseline ORBBuf by 27% and 46% on average for
1300
+ V- and VI-SLAM, and has small performance loss compared
1301
+ with the original ORB-SLAM3, despite reducing the number
1302
+ of keyframes by half.
1303
+ Next, in Fig. 5, we examine four methods whose perfor-
1304
+ mance is impacted by the available bandwidth, under different
1305
+ levels of communication constraints. Without bandwidth lim-
1306
+ itations, all methods have the same performance as the global
1307
+ map holds all keyframes. When the bandwidth is limited,
1308
+ Random and DropOldest have the worst performance as they
1309
+ ignore the relations of keyframes in the pose graph. The
1310
+ ORBBuf performs better, but the tracking error is increased
1311
+ by 4.0× and 9.8× when the bandwidth is limited to 80
1312
+ and 40 Mbps. AdaptSLAM achieves the best performance,
1313
+ reducing the RMS ATE compared to ORBBuf by 62% and 78%
1314
+ when network bandwidth is 80 and 40 Mbps, correspondingly.
1315
+ This highlights the superiority of AdaptSLAM in achieving
1316
+ high tracking accuracy under communication constraints.
1317
+ [Figure omitted in text extraction: bar chart of RMS ATE (m) for Random, DropOldest,
+ ORBBuf, and AdaptSLAM over the network traces foot1, foot3, and foot5.]
+ Fig. 6: RMS ATE for different network traces.
+ 
+ TABLE I: The latency for local map construction and optimization.
+ Method      | Latency (ms)
+ ----------- | -------------
+ Random      | 133.0 ± 86.3
+ DropOldest  | 139.9 ± 53.7
+ ORBBuf      | 149.3 ± 75.6
+ BruteForce  | 863.4 ± 123.5
+ ORB-SLAM3   | 556.4 ± 113.7
+ AdaptSLAM   | 162.8 ± 68.9
1352
+ B. Real-World Computation and Network Constraints
1353
+ Following the approach of splitting modules between the
1354
+ edge server and the mobile device [11], we split the modules
1355
+ as shown in Fig. 1. The server and the device are connected
1356
+ via a network cable to minimize other factors. To ensure
1357
+ reproducibility, we replay the network traces collected from a
1358
+ 4G network [52]. Focusing on mobile devices carried by users,
1359
+ we choose network traces (foot1, foot3, and foot5) collected
1360
+ by pedestrians. We set D in Problem 2 according to the traces.
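How a sampled bandwidth translates into the keyframe budget D of Problem 2 is implementation-specific and not spelled out here; one simple possibility (the helper name, window length, and per-keyframe size below are all hypothetical, not taken from the paper) is to count how many keyframes fit through the link per replanning window:

```python
# Hypothetical helper: convert a sampled bandwidth into the keyframe
# budget D used as the communication constraint in Problem 2.
def keyframe_budget(bandwidth_mbps, window_s=1.0, keyframe_mbit=4.0):
    """Number of keyframes that fit through the link in one window (floored)."""
    return int(bandwidth_mbps * window_s / keyframe_mbit)
```

Under these invented numbers, the 40 Mbps and 80 Mbps settings of Fig. 5 would correspond to budgets of 10 and 20 keyframes per window.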
1361
+ We examine the RMS ATE under the network traces in
1362
+ Fig. 6 for the EuRoC V102 sequence. The results for only
1363
+ four methods are presented because the overall time taken for
1364
+ running the SLAM modules onboard is high for BruteForce
1365
+ and the original SLAM. AdaptSLAM reduces the RMS ATE by
1366
+ 65%, 61%, and 35% (averaged over all traces) compared with
1367
+ Random, DropOldest, and ORBBuf. AdaptSLAM achieves
1368
+ high tracking accuracy under real-world network traces.
1369
+ Table I shows the computation latency of mobile devices
1370
+ for all six methods. We compare the latency for running
1371
+ local map construction and optimization, which is the main
1372
+ source of latency for modules running onboard [11]. Compared
1373
+
1374
+ with AdaptSLAM, the original ORB-SLAM3 takes 3.7× as
1375
+ much time for optimizing the local map as all covisibility
1376
+ keyframes are included in the local map without keyframe
1377
+ selection. Without the edge-assisted architecture, the original
1378
+ ORB-SLAM3 also runs global mapping and loop closing
1379
+ onboard which have even higher latency [11]. BruteForce takes
1380
+ 5.3× as much time for examining all the combinations of
1381
+ keyframes to minimize the local map uncertainty. The latency
1382
+ for constructing and optimizing local maps using AdaptSLAM
1383
+ is close to that using Random and DropOldest (<12.3%
1384
+ difference). Low latency for local mapping shows that edge-
1385
+ assisted SLAM is appealing, as local mapping is the biggest
1386
+ source of delay for modules executing onboard after offloading
1387
+ the intensive tasks (loop closing and global mapping).
1388
+ IX. CONCLUSION
1389
+ We present AdaptSLAM, an edge-assisted SLAM system that
+ efficiently selects subsets of keyframes to build local and global
1391
+ maps, under constrained communication and computation re-
1392
+ sources. AdaptSLAM quantifies the pose estimate uncertainty
1393
+ of V- and VI-SLAM under the edge-assisted architecture, and
1394
+ minimizes the uncertainty by low-complexity algorithms based
1395
+ on the approximate submodularity properties and computation
1396
+ reuse. AdaptSLAM is demonstrated to reduce the size of the
1397
+ local keyframe set by 75% compared with the original ORB-
1398
+ SLAM3 with a small performance loss.
1399
+ ACKNOWLEDGMENTS
1400
+ This work was supported in part by NSF grants CSR-
1401
+ 1903136, CNS-1908051, and CNS-2112562, NSF CAREER
1402
+ Award IIS-2046072, by an IBM Faculty Award, and by the
1403
+ Australian Research Council under Grant DP200101627.
1404
+ REFERENCES
1405
+ [1] D. M. Rosen, K. J. Doherty, A. Terán Espinoza, and J. J. Leonard,
1406
+ “Advances in inference and representation for simultaneous localization
1407
+ and mapping,” Annu. Rev. Control Robot. Auton. Syst., vol. 4, pp. 215–
1408
+ 242, 2021.
1409
+ [2] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira,
1410
+ I. Reid, and J. J. Leonard, “Past, present, and future of simultaneous
1411
+ localization and mapping: Toward the robust-perception age,” IEEE
1412
+ Trans. Robot., vol. 32, no. 6, pp. 1309–1332, 2016.
1413
+ [3] C. Forster, S. Lynen, L. Kneip, and D. Scaramuzza, “Collabora-
1414
+ tive monocular SLAM with multiple micro aerial vehicles,” in Proc.
1415
+ IEEE/RSJ IROS, 2013.
1416
+ [4] R. Williams, B. Konev, and F. Coenen, “Scalable distributed collabora-
1417
+ tive tracking and mapping with micro aerial vehicles,” in Proc. IEEE/RSJ
1418
+ IROS, 2015.
1419
+ [5] Google. (2022) ARCore. https://developers.google.com/ar.
1420
+ [6] Apple. (2022) ARKit. https://developer.apple.com/augmented-reality/arkit/.
1421
+ [7] T. Scargill, G. Premsankar, J. Chen, and M. Gorlatova, “Here to stay: A
1422
+ quantitative comparison of virtual object stability in markerless mobile
1423
+ AR,” in Proc. IEEE/ACM Workshop on Cyber-Physical-Human System
1424
+ Design and Implementation, 2022.
1425
+ [8] Y.-J. Yeh and H.-Y. Lin, “3D reconstruction and visual SLAM of indoor
1426
+ scenes for augmented reality application,” in Proc. IEEE ICCA, 2018.
1427
+ [9] J. Xu, H. Cao, D. Li, K. Huang, C. Qian, L. Shangguan, and Z. Yang,
1428
+ “Edge assisted mobile semantic visual SLAM,” in Proc. IEEE INFO-
1429
+ COM, 2020.
1430
+ [10] H. Cao, J. Xu, D. Li, L. Shangguan, Y. Liu, and Z. Yang, “Edge assisted
1431
+ mobile semantic visual SLAM,” IEEE Trans. Mob. Comput., vol. 1,
1432
+ no. 1, pp. 1–15, 2022.
1433
+ [11] A. J. Ben Ali, Z. S. Hashemifar, and K. Dantu, “Edge-SLAM: Edge-
1434
+ assisted visual simultaneous localization and mapping,” in Proc. ACM
1435
+ MobiSys, 2020.
1436
+ [12] A. J. B. Ali, M. Kouroshli, S. Semenova, Z. S. Hashemifar, S. Y. Ko, and
1437
+ K. Dantu, “Edge-SLAM: edge-assisted visual simultaneous localization
1438
+ and mapping,” ACM Trans. Embed. Comput. Syst., vol. 22, no. 1, pp.
1439
+ 1–31, 2022.
1440
+ [13] I. Deutsch, M. Liu, and R. Siegwart, “A framework for multi-robot pose
1441
+ graph SLAM,” in Proc. IEEE RCAR, 2016.
1442
+ [14] M. Karrer, P. Schmuck, and M. Chli, “CVI-SLAM—collaborative visual-
1443
+ inertial SLAM,” IEEE Robot. Autom. Lett., vol. 3, no. 4, pp. 2762–2769,
1444
+ 2018.
1445
+ [15] F. Li, S. Yang, X. Yi, and X. Yang, “CORB-SLAM: a collaborative
1446
+ visual SLAM system for multiple robots,” in CollaborateCom. Springer,
1447
+ 2017.
1448
+ [16] P. Schmuck and M. Chli, “CCM-SLAM: Robust and efficient centralized
1449
+ collaborative monocular simultaneous localization and mapping for
1450
+ robotic teams,” J. Field Robot., vol. 36, no. 4, pp. 763–781, 2019.
1451
+ [17] K.-L. Wright, A. Sivakumar, P. Steenkiste, B. Yu, and F. Bai, “Cloud-
1452
+ SLAM: Edge offloading of stateful vehicular applications,” in Proc.
1453
+ IEEE/ACM SEC, 2020.
1454
+ [18] J. Xu, H. Cao, Z. Yang, L. Shangguan, J. Zhang, X. He, and Y. Liu,
1455
+ “SwarmMap: Scaling up real-time collaborative visual SLAM at the
1456
+ edge,” in Proc. USENIX NSDI, 2022.
1457
+ [19] R. Mur-Artal and J. D. Tardós, “ORB-SLAM2: An open-source SLAM
1458
+ system for monocular, stereo, and RGB-D cameras,” IEEE Trans. Robot.,
1459
+ vol. 33, no. 5, pp. 1255–1262, 2017.
1460
+ [20] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. Montiel, and J. D.
+ Tardós, “ORB-SLAM3: An accurate open-source library for visual,
1462
+ visual–inertial, and multimap SLAM,” IEEE Trans. Robot., 2021.
1463
+ [21] T. Qin, P. Li, and S. Shen, “VINS-Mono: A robust and versatile
1464
+ monocular visual-inertial state estimator,” IEEE Trans. Robot., vol. 34,
1465
+ no. 4, pp. 1004–1020, 2018.
1466
+ [22] A. A. Bian, J. M. Buhmann, A. Krause, and S. Tschiatschek, “Guar-
1467
+ antees for greedy maximization of non-submodular functions with
1468
+ applications,�� in Proc. PMLR ICML, 2017.
1469
+ [23] J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct
1470
+ monocular SLAM,” in Proc. Springer ECCV, 2014.
1471
+ [24] J. Engel, V. Koltun, and D. Cremers, “Direct sparse odometry,” IEEE
1472
+ Trans. Pattern Anal. Mach. Intell., vol. 40, no. 3, pp. 611–625, 2017.
1473
+ [25] G. Klein and D. Murray, “Parallel tracking and mapping for small AR
1474
+ workspaces,” in Proc. IEEE ISMAR, 2007.
1475
+ [26] E. Dong, J. Xu, C. Wu, Y. Liu, and Z. Yang, “Pair-Navi: Peer-to-peer
1476
+ indoor navigation with mobile visual SLAM,” in Proc. IEEE INFOCOM,
1477
+ 2019.
1478
+ [27] S. Weiss, M. W. Achtelik, S. Lynen, M. Chli, and R. Siegwart, “Real-
1479
+ time onboard visual-inertial state estimation and self-calibration of
1480
+ MAVs in unknown environments,” in Proc. IEEE ICRA, 2012.
1481
+ [28] Y.-P. Wang, Z.-X. Zou, C. Wang, Y.-J. Dong, L. Qiao, and D. Manocha,
1482
+ “ORBBuf: A robust buffering method for remote visual SLAM,” in Proc.
1483
+ IEEE/RSJ IROS, 2021.
1484
+ [29] L. Riazuelo, J. Civera, and J. M. Montiel, “C2TAM: A cloud framework
1485
+ for cooperative tracking and mapping,” Robot. Auton. Syst., vol. 62,
1486
+ no. 4, pp. 401–413, 2014.
1487
+ [30] P. Huang, L. Zeng, X. Chen, K. Luo, Z. Zhou, and S. Yu, “Edge robotics:
1488
+ Edge-computing-accelerated multi-robot simultaneous localization and
1489
+ mapping,” IEEE Internet Things J., 2022.
1490
+ [31] K. Khosoussi, M. Giamou, G. S. Sukhatme, S. Huang, G. Dissanayake,
1491
+ and J. P. How, “Reliable graphs for SLAM,” Int. J. Robot. Res., vol. 38,
1492
+ no. 2-3, pp. 260–298, 2019.
1493
+ [32] L. Carlone and S. Karaman, “Attention and anticipation in fast visual-
1494
+ inertial navigation,” IEEE Trans. Robot., vol. 35, no. 1, pp. 1–20, 2018.
1495
+ [33] Y. Chen, L. Zhao, Y. Zhang, S. Huang, and G. Dissanayake, “Anchor
1496
+ selection for SLAM based on graph topology and submodular optimiza-
1497
+ tion,” IEEE Trans. Robot., 2021.
1498
+ [34] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, “An analysis of
1499
+ approximations for maximizing submodular set functions—I,” Mathe-
1500
+ matical programming, vol. 14, no. 1, pp. 265–294, 1978.
1501
+ [35] A. Das and D. Kempe, “Approximate submodularity and its applications:
1502
+ Subset selection, sparse approximation and dictionary selection,” J.
1503
+ Mach. Learn. Res., vol. 19, no. 1, pp. 74–107, 2018.
1504
+ [36] L. Carlone, G. C. Calafiore, C. Tommolillo, and F. Dellaert, “Planar
1505
+ pose graph optimization: Duality, optimal solutions, and verification,”
1506
+ IEEE Trans. Robot., vol. 32, no. 3, pp. 545–565, 2016.
1507
+
1508
+ [37] J. A. Placed and J. A. Castellanos, “Fast autonomous robotic exploration
1509
+ using the underlying graph structure,” in Proc. IEEE/RSJ IROS, 2021.
1510
+ [38] K. Khosoussi, S. Huang, and G. Dissanayake, “Tree-connectivity: Eval-
1511
+ uating the graphical structure of SLAM,” in Proc. IEEE ICRA, 2016.
1512
+ [39] Y. Chen, S. Huang, L. Zhao, and G. Dissanayake, “Cramér–Rao bounds
1513
+ and optimal design metrics for pose-graph SLAM,” IEEE Trans. Robot.,
1514
+ vol. 37, no. 2, pp. 627–641, 2021.
1515
+ [40] N. Boumal, A. Singer, P.-A. Absil, and V. D. Blondel, “Cramér–Rao
1516
+ bounds for synchronization of rotations,” Information and Inference: A
1517
+ Journal of the IMA, vol. 3, no. 1, pp. 1–39, 2014.
1518
+ [41] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard,
1519
+ “g2o: A general framework for graph optimization,” in IEEE ICRA,
1520
+ 2011.
1521
+ [42] S. Agarwal, K. Mierle, and Others, “Ceres solver,” http://ceres-solver.org.
1530
+ [43] M. L. Rodríguez-Arévalo, J. Neira, and J. A. Castellanos, “On the
1531
+ importance of uncertainty representation in active SLAM,” IEEE Trans.
1532
+ Robot., vol. 34, no. 3, pp. 829–834, 2018.
1533
+ [44] F. Pukelsheim, Optimal design of experiments.
1534
+ SIAM, 2006.
1535
+ [45] K. Khosoussi, S. Huang, and G. Dissanayake, “Novel insights into the
1536
+ impact of graph structure on SLAM,” in IEEE/RSJ IROS, 2014.
1537
+ [46] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. W.
1538
+ Achtelik, and R. Siegwart, “The EuRoC micro aerial vehicle datasets,”
1539
+ Int. J. Rob. Res., vol. 35, no. 10, pp. 1157–1163, 2016.
1540
+ [47] D. Schubert, T. Goll, N. Demmel, V. Usenko, J. Stückler, and D. Cremers,
+ “The TUM VI benchmark for evaluating visual-inertial odometry,”
1542
+ in Proc. IEEE/RSJ IROS, 2018.
1543
+ [48] G. Strang, Linear algebra and its applications. Thomson, Brooks/Cole,
1544
+ 2006.
1545
+ [49] G. H. Golub and C. F. Van Loan, Matrix computations.
1546
+ JHU press,
1547
+ 2013.
1548
+ [50] Y. Chen, L. Zhao, K. M. B. Lee, C. Yoo, S. Huang, and R. Fitch,
1549
+ “Broadcast your weaknesses: Cooperative active pose-graph SLAM for
1550
+ multiple robots,” IEEE Robot. Autom. Lett, vol. 5, no. 2, pp. 2200–2207,
1551
+ 2020.
1552
+ [51] Z. Zhang and D. Scaramuzza, “A tutorial on quantitative trajectory
1553
+ evaluation for visual (-inertial) odometry,” in Proc. IEEE/RSJ IROS,
1554
+ 2018.
1555
+ [52] J. Van Der Hooft, S. Petrangeli, T. Wauters, R. Huysegems, P. R. Alface,
1556
+ T. Bostoen, and F. De Turck, “HTTP/2-based adaptive streaming of
1557
+ HEVC video over 4G/LTE networks,” IEEE Commun. Lett., vol. 20,
1558
+ no. 11, pp. 2177–2180, 2016.
1559
+ APPENDIX
1560
+ A. Proof of Lemma 1
1561
+ The quadratic term of the objective function in (5) is
+ \(\sum_{e=((n,m),c)\in \mathcal{E}_{\mathrm{glob}}} p_{n,m}^{\top} I_e\, p_{n,m}\), where \(p_{n,m}^{\top} I_e\, p_{n,m}\) can be
+ rewritten as
+ \[
+ p_{n,m}^{\top} I_e\, p_{n,m}
+ = w_e\,\big[\, p_n^{\top},\ p_m^{\top} \,\big]
+ \begin{bmatrix} I_6 & -I_6 \\ -I_6 & I_6 \end{bmatrix}
+ (p_n, p_m)
+ = w_g\, \Xi_e\, w_g^{\top}, \tag{17}
+ \]
+ where the \(i,j\)-th block of \(\Xi_e\), \([\Xi_e]_{i,j}\), is derived as
+ \[
+ [\Xi_e]_{i,j} =
+ \begin{cases}
+ -w_e I, & u_i = n,\ u_j = m, \\
+ \phantom{-}w_e I, & u_i = u_j = n, \\
+ \phantom{-}0, & \text{otherwise.}
+ \end{cases}
+ \]
+ From the definition of \(I_{\mathrm{glob}}(\mathcal{K}_{g,\mathrm{edge}})\) and the global pose
+ graph optimization formulation in §V-C, we can obtain that
+ \(I_{\mathrm{glob}}(\mathcal{K}_{g,\mathrm{edge}}) = \sum_{e\in \mathcal{E}_{g,\mathrm{edge}}} \Xi_e\). Hence, \(L_{\mathrm{glob}}\) is given by (6),
+ which concludes the proof.
1604
+ B. Proof of Lemma 2
1605
+ As introduced in §V-B, the local pose graph optimization is
+ to solve \(\min_{\{\tilde{P}_n\}_{n\in \mathcal{K}_{\mathrm{loc}}}} \sum_{e\in \mathcal{E}_{\mathrm{loc}}\cup \mathcal{E}_{l,f}} (x^{e})^{\top} I_e\, x^{e}\). In optimizing poses
+ of keyframes in \(\mathcal{K}_{\mathrm{loc}}\), the poses of keyframes in \(\mathcal{K}_{\mathrm{fixed}}\) are
+ fixed. Hence, the quadratic term of the objective function can
+ be rewritten as
+ \[
+ \sum_{e=((n,m),c)\in \mathcal{E}_{\mathrm{loc}}\cup \mathcal{E}_{l,f}} p_{n,m}^{\top} I_e\, p_{n,m}
+ = \sum_{e=((n,m),c)\in \mathcal{E}_{\mathrm{loc}}} p_{n,m}^{\top} I_e\, p_{n,m}
+ + \sum_{\substack{e=((n,m),c)\in \mathcal{E}_{l,f},\\ n\in \mathcal{K}_{\mathrm{loc}},\, m\in \mathcal{K}_{\mathrm{fixed}}}} p_n^{\top} I_e\, p_n
+ + \sum_{\substack{e=((n,m),c)\in \mathcal{E}_{l,f},\\ n\in \mathcal{K}_{\mathrm{fixed}},\, m\in \mathcal{K}_{\mathrm{loc}}}} \big(-p_m^{\top}\big)\, I_e\, \big(-p_m\big).
+ \]
+ According to the above analysis, we can reformulate (4) as
+ \[
+ \min_{\{\tilde{P}_n\}_{n\in \mathcal{K}_{\mathrm{loc}}}} w_l\, \Lambda_{\mathrm{loc}}(\mathcal{K}_{\mathrm{loc}}, \mathcal{K}_{\mathrm{fixed}})\, w_l^{\top},
+ \]
+ where \(\Lambda_{\mathrm{loc}}(\mathcal{K}_{\mathrm{loc}}, \mathcal{K}_{\mathrm{fixed}})\) is the \(|\mathcal{K}_{\mathrm{loc}}| \times |\mathcal{K}_{\mathrm{loc}}|\) block matrix
+ whose \(i,j\)-th block is
+ \[
+ [\Lambda_{\mathrm{loc}}(\mathcal{K}_{\mathrm{loc}}, \mathcal{K}_{\mathrm{fixed}})]_{i,j} =
+ \begin{cases}
+ \displaystyle \sum_{e=((r_i,r_j),c)\in \mathcal{E}_{\mathrm{loc}}} w_e\, I, & i \neq j, \\[10pt]
+ \displaystyle \Big( \sum_{\substack{e=((r_i,q),c)\in \mathcal{E}_{l,f},\\ q\in \mathcal{K}_{\mathrm{fixed}}}} w_e
+ + \sum_{\substack{e=((r_i,q),c)\in \mathcal{E}_{\mathrm{loc}},\\ q\in \mathcal{K}_{\mathrm{loc}},\, q\neq r_i}} w_e \Big)\, I, & i = j.
+ \end{cases}
+ \]
+ According to the uncertainty definition in Definition 3,
+ the uncertainty of the local pose graph is calculated as
+ \(-\log \det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{loc}}, \mathcal{K}_{\mathrm{fixed}})\big)\), where \(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{loc}}, \mathcal{K}_{\mathrm{fixed}})\) is
+ given by (7).
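To make the structure of this matrix concrete, the following is a toy numerical sketch (not from the paper's code): it uses scalar weights in place of the matrix blocks, standard graph-Laplacian signs, and an invented three-keyframe graph. Edges to fixed keyframes contribute only to the diagonal, since fixed poses are not optimized.

```python
import numpy as np

def local_info_matrix(n_loc, edges_loc, edges_fix):
    """Toy scalar analogue of the |K_loc| x |K_loc| local information matrix.

    edges_loc: (i, j, w) edges between local keyframes i and j.
    edges_fix: (i, w) edges from local keyframe i to some fixed keyframe.
    """
    L = np.zeros((n_loc, n_loc))
    for i, j, w in edges_loc:
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    for i, w in edges_fix:        # fixed endpoints only add to the diagonal
        L[i, i] += w
    return L

def uncertainty(L):
    """-log det of the information matrix; lower means less uncertain."""
    _, logdet = np.linalg.slogdet(L)
    return -logdet

# chain 0-1-2 of local keyframes, with keyframe 0 tied to a fixed pose
Lam = local_info_matrix(3, [(0, 1, 2.0), (1, 2, 1.0)], [(0, 4.0)])
```

Adding an extra edge (an additional constraint) increases the determinant and hence lowers the uncertainty, which is the monotone behavior the greedy keyframe selection relies on.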
1687
+ C. Proof of Lemma 4
1688
+ According to the definition of the submodularity ratio given
+ in (1), the submodularity ratio \(\gamma\) of the objective function in
+ Problem 5 can be calculated as
+ \[
+ \gamma \overset{(a)}{=}
+ \min_{\substack{L\subseteq \mathcal{K}_{\mathrm{add}},\, S\subseteq \mathcal{K}_{\mathrm{add}},\, |S|\leqslant l_{\mathrm{loc}}-l_b,\\ x\in \mathcal{K}_{\mathrm{add}},\, x\notin S\cup L}}
+ \frac{-\mathrm{Unc}\,(\mathcal{K}_{\mathrm{base}}\cup L\cup \{x\}) + \mathrm{Unc}\,(\mathcal{K}_{\mathrm{base}}\cup L)}
+ {-\mathrm{Unc}\,(\mathcal{K}_{\mathrm{base}}\cup L\cup S\cup \{x\}) + \mathrm{Unc}\,(\mathcal{K}_{\mathrm{base}}\cup L\cup S)}
+ =
+ \min_{\substack{L\subseteq \mathcal{K}_{\mathrm{add}},\, S\subseteq \mathcal{K}_{\mathrm{add}},\, |S|\leqslant l_{\mathrm{loc}}-l_b,\\ x\in \mathcal{K}_{\mathrm{add}},\, x\notin S\cup L}}
+ \frac{\log \dfrac{\det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{base}}\cup L\cup \{x,k\}, \emptyset)\big)}{\det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{base}}\cup L\cup \{k\}, \emptyset)\big)}}
+ {\log \dfrac{\det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{base}}\cup L\cup S\cup \{x,k\}, \emptyset)\big)}{\det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{base}}\cup L\cup S\cup \{k\}, \emptyset)\big)}}, \tag{18}
+ \]
+ where (a) follows from the definition of the submodularity ratio.
+ The denominator of (18), denoted as \(\varsigma\), is lower bounded by
+ \[
+ \varsigma = \log \frac{\det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{base}}\cup L\cup S\cup \{x,k\}, \emptyset)\big)}{\det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{base}}\cup L\cup S\cup \{k\}, \emptyset)\big)}
+ \geqslant \sum_{n\in \mathcal{K}_{\mathrm{base}}\cup L\cup S} \log w_{x,n}
+ \geqslant \min_{m\in \mathcal{K}_{\mathrm{add}}} \sum_{n\in \mathcal{K}_{\mathrm{base}}} \log w_{n,m} \triangleq \vartheta, \tag{20}
+ \]
+ where the first inequality is due to the fact that the determinant
+ of the reduced weighted Laplacian matrix is equal to the tree-connectivity
+ of its corresponding graph [31].
+ Substituting (20) into (18), \(\gamma\) can be further calculated by
+ \[
+ \gamma = 1 + \frac{1}{\varsigma}\,
+ \log \Bigg(
+ \min_{\substack{L\subseteq \mathcal{K}_{\mathrm{add}},\, S\subseteq \mathcal{K}_{\mathrm{add}},\, |S|\leqslant l_{\mathrm{loc}}-l_b,\\ x\in \mathcal{K}_{\mathrm{add}},\, x\notin S\cup L}}
+ \frac{\det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{base}}\cup L\cup \{x,k\}, \emptyset)\big)\, \det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{base}}\cup L\cup S\cup \{k\}, \emptyset)\big)}
+ {\det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{base}}\cup L\cup S\cup \{x,k\}, \emptyset)\big)\, \det\big(\tilde{I}_{\mathrm{loc}}(\mathcal{K}_{\mathrm{base}}\cup L\cup \{k\}, \emptyset)\big)}
+ \Bigg)
+ \overset{(a)}{\geqslant} 1 + \frac{1}{\vartheta}\, \log\big(1 - g_1 Q^{-1} g_1^{\top}\big), \tag{19}
+ \]
+ where \(I_i\) and \(0_i\) are the \(i\times i\) identity matrix and
+ zero matrix, and \(Q = Q_1 + Q_2 + Q_3 + Q_4\). \(Q_1\), \(Q_2\), \(Q_3\) and
+ \(Q_4\) are defined as follows. We express \(\mathcal{K}_{\mathrm{base}}\cup L\cup S\cup \{x,k\}\)
+ as \(\{s_i\}_{i=1,\cdots,z+|S|+2}\), where \(z = |\mathcal{K}_{\mathrm{base}}\cup L|\), \(s_i\in \mathcal{K}_{\mathrm{base}}\cup L\)
+ when \(i\leqslant z\), \(s_i\in S\) when \(z < i \leqslant z+|S|\), \(s_{z+|S|+1} = x\),
+ and \(s_{z+|S|+2} = k\). For each edge \(e\), we define a vector \(q_e\),
+ with each element \([q_e]_i = -[q_e]_j = w_e\) if vertexes \(s_i\) and
+ \(s_j\) are the head or tail of \(e\), and zero otherwise. We then get
+ \(\tilde{q}_e\) after removing the last element of \(q_e\). \(Q_1\), \(Q_2\), \(Q_3\) and
+ \(Q_4\) are defined as
+ \(Q_1 = \frac{1}{2} \sum_{e=((s_i,s_j),c),\, s_i,s_j\in \mathcal{K}_{\mathrm{base}}\cup L} \tilde{q}_e \tilde{q}_e^{\top}\)
+ (the factor \(\frac{1}{2}\) is used because the edges from \(s_i\) to \(s_j\) and from \(s_j\)
+ to \(s_i\) are both included), \(Q_2 = \sum_{e=((s_i,x),c),\, s_i\in \mathcal{K}_{\mathrm{base}}\cup L} \tilde{q}_e \tilde{q}_e^{\top}\),
+ \(Q_3 = \sum_{e=((s_i,s_j),c),\, s_i\in \mathcal{K}_{\mathrm{base}}\cup L,\, s_j\in S} \tilde{q}_e \tilde{q}_e^{\top}\), and
+ \(Q_4 = \sum_{e=((x,s_j),c),\, s_j\in S} \tilde{q}_e \tilde{q}_e^{\top}\). \(g_1\) is given by
+ \[
+ g_1 = \Big[\, \underbrace{0, \cdots, 0}_{z\ \text{zeros}},\ -w_{x,s_1}, \cdots, -w_{x,s_{|S|}},\ \sum_{i=1}^{|S|} w_{x,s_i} \,\Big],
+ \]
+ where \(w_{\max} = \max_{n,m\in \mathcal{K}_{\mathrm{base}}\cup \mathcal{K}_{\mathrm{add}}} w_{n,m}\), and \(w_{x,s_i} \leqslant w_{\max}\)
+ for \(s_i\in S\). (a) in (19) is because for invertible positive
+ semidefinite matrices \(M, N\), \(\det(M) \geqslant \det(N)\) holds when
+ \(M - N\) is positive semidefinite [48].
+ We will prove that \(\|Q^{-1}\| \leqslant \frac{1}{|\mathcal{K}_{\mathrm{base}}| w_{\min} - w_{\max}}\) when
+ \(|\mathcal{K}_{\mathrm{base}}|\) is significantly larger than \(|\mathcal{K}_{\mathrm{add}}|\), where \(\|M\|\) is the
+ \(l_\infty\) norm of \(M\) (defined as the largest magnitude among the
+ elements of \(M\)), and \(w_{\min} = \min_{n,m\in \mathcal{K}_{\mathrm{base}}\cup \mathcal{K}_{\mathrm{add}}} w_{n,m}\). Rewrite
+ \(Q\) as \(Q = D_Q - E_Q\), where \(D_Q\) is a diagonal matrix
+ with elements on the diagonal the same as those of \(Q\), and
+ \(E_Q = D_Q - Q\). \(Q^{-1}\) is calculated as
+ \[
+ Q^{-1} = \big( I_{z+|S|+1} - D_Q^{-1} E_Q \big)^{-1} D_Q^{-1}
+ = \Big( \sum_{i=0}^{\infty} \big( D_Q^{-1} E_Q \big)^{i} \Big) D_Q^{-1}.
+ \]
+ \(D_Q^{-1} E_Q\) has the properties that all elements in \(D_Q^{-1} E_Q\) are
+ positive and smaller than \(\frac{w_{\max}}{|\mathcal{K}_{\mathrm{base}}| w_{\min}}\), and all row vectors have
+ an \(l_\infty\) norm smaller than 1. Hence, we have
+ \(\big\| \big( D_Q^{-1} E_Q \big)^{i} \big\| \leqslant \frac{w_{\max}}{|\mathcal{K}_{\mathrm{base}}| w_{\min}}\), and \(\|Q^{-1}\|\) is bounded by
+ \[
+ \|Q^{-1}\| \leqslant \frac{1}{|\mathcal{K}_{\mathrm{base}}| w_{\min}} \Big\| \sum_{i=0}^{\infty} \big( D_Q^{-1} E_Q \big)^{i} \Big\|
+ \leqslant \frac{1}{|\mathcal{K}_{\mathrm{base}}| w_{\min}} \cdot \frac{1}{1 - \big\| D_Q^{-1} E_Q \big\|}
+ \leqslant \frac{1}{|\mathcal{K}_{\mathrm{base}}| w_{\min}} \cdot \frac{1}{1 - \frac{w_{\max}}{|\mathcal{K}_{\mathrm{base}}| w_{\min}}}
+ = \frac{1}{|\mathcal{K}_{\mathrm{base}}| w_{\min} - w_{\max}}. \tag{21}
+ \]
+ Substituting (21) into (19), we can derive that
+ \(\gamma \geqslant 1 + \frac{1}{\vartheta} \log\Big( 1 - \frac{4 |\mathcal{K}_{\mathrm{add}}|^2 w_{\max}^2}{|\mathcal{K}_{\mathrm{base}}| w_{\min} - w_{\max}} \Big)\).
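The series expansion of Q^{-1} used in (21) is easy to check numerically; the following is a small sketch (not from the paper's code) with an invented, diagonally dominant Q, for which the series converges:

```python
import numpy as np

def neumann_inverse(Q, terms=60):
    """Approximate Q^{-1} via Q = D_Q - E_Q and the Neumann series
    Q^{-1} = (sum_{i>=0} (D_Q^{-1} E_Q)^i) D_Q^{-1}, truncated after `terms`."""
    D = np.diag(np.diag(Q))
    E = D - Q
    Dinv = np.diag(1.0 / np.diag(Q))
    M = Dinv @ E                      # must have norm < 1 for convergence
    acc = np.eye(Q.shape[0])          # i = 0 term
    power = np.eye(Q.shape[0])
    for _ in range(terms):
        power = power @ M
        acc += power
    return acc @ Dinv

# invented diagonally dominant matrix, as in the |K_base| >> |K_add| regime
Q = np.array([[ 4.0, -1.0, -0.5],
              [-1.0,  5.0, -1.0],
              [-0.5, -1.0,  4.5]])
approx = neumann_inverse(Q)
```

The truncated series agrees with the exact inverse to machine precision here, since the row sums of |D_Q^{-1} E_Q| stay below 0.4.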
1998
+
0dE3T4oBgHgl3EQfnAoC/content/tmp_files/load_file.txt ADDED
 
0dFST4oBgHgl3EQfVTiq/content/tmp_files/2301.13777v1.pdf.txt ADDED
@@ -0,0 +1,1263 @@
1
+ Computer Algebra in R Bridges a Gap Between Mathematics and Data
2
+ in the Teaching of Statistics and Data Science
3
+ Mikkel Meyer Andersen and Søren Højsgaard
4
+ February 1, 2023
5
+ Mikkel Meyer Andersen
6
+ Department of Mathematical Sciences, Aalborg University, Denmark
7
+ Skjernvej 4A
8
+ 9220 Aalborg Ø, Denmark
9
+ ORCiD: 0000-0002-0234-0266
10
+ mikl@math.aau.dk
11
+ Søren Højsgaard
12
+ Department of Mathematical Sciences, Aalborg University, Denmark
13
+ Skjernvej 4A
14
+ 9220 Aalborg Ø, Denmark
15
+ ORCiD: 0000-0002-3269-9552
16
+ sorenh@math.aau.dk
17
+ Abstract
18
+ The capability of R to do symbolic mathematics is enhanced by the caracas package. This package
+ uses the Python computer algebra library SymPy as a back-end, but caracas is tightly integrated in
+ the R environment, thereby providing the R user with symbolic mathematics within R. We demonstrate
+ how mathematics and statistics can benefit from bridging computer algebra and data via R. This is done
+ through a number of examples, and we propose some topics for small student projects. The caracas
+ package integrates well with e.g. Rmarkdown, and as such the creation of scientific reports and teaching is
+ supported.
25
+ Introduction
26
+ The caracas package [Andersen and Højsgaard, 2021] and the Ryacas package [Andersen and Højsgaard,
27
+ 2019] enhance the capability of R [R Core Team, 2023] to handle symbolic mathematics. In this paper
28
+ we will illustrate the use of the caracas package in connection with teaching mathematics and statistics.
29
+ Focus is on 1) treating statistical models symbolically, 2) on bridging the gap between symbolic mathe-
30
+ matics and numerical computations and 3) on preparing teaching material in a reproducible framework
31
+ (provided by, e.g., rmarkdown [Allaire et al., 2021, Xie et al., 2018, 2020]). The caracas package is
+ available from CRAN [R Core Team, 2023]. The open-source development version of caracas is available
33
+ at https://github.com/r-cas/caracas and readers are recommended to study the online documenta-
34
+ tion at https://r-cas.github.io/caracas/. The caracas package provides an interface from R to the
35
+ Python package sympy [Meurer et al., 2017]. This means that SymPy is “running under the hood” of R
36
+ via the reticulate package [Ushey et al., 2020]. The sympy package is mature and robust with many
37
+ users and developers.
38
+ Neither caracas nor Ryacas is as powerful as some of the larger commercial computer algebra
+ systems (CAS). The virtue of caracas and Ryacas lies elsewhere: (1) Mathematical tools like equation
40
+ solving, summation, limits, symbolic linear algebra, outputting in tex format etc. are directly available
41
+ from within R. (2) The packages enable working with the same language and in the same environment
42
+ as the user does for statistical analyses. (3) Symbolic mathematics can easily be combined with data
43
+ which is helpful in e.g. numerical optimization. (4) The packages are open-source and therefore support
44
45
+ arXiv:2301.13777v1 [stat.AP] 31 Jan 2023
46
+
47
+ e.g. education, also for people with limited economic means, thus contributing to the United Nations
48
+ sustainable development goals [United Nations General Assembly, 2015].
49
+ The paper is organized in the following sections: The section Mathematics and documents containing
50
+ mathematics briefly introduces the caracas package and its syntax, including how caracas can be used in
51
+ connection with preparing texts, e.g. teaching material. More details are provided in the Section Important
52
+ technical aspects. Several vignettes illustrating caracas are provided and they are also available online,
53
+ see https://r-cas.github.io/caracas/. The section Statistics examples is the main section of the
54
+ paper and here we present a sample of statistical models where we believe that a symbolic treatment is
55
+ a valuable supplement to a numerical one in connection with teaching. The section Possible topics to study
56
+ contains suggestions about hands-on activities for students. Lastly, the section Discussion and future work
57
+ contains a discussion of the paper.
58
Mathematics and documents containing mathematics

We start by introducing the caracas syntax on familiar topics within calculus and linear algebra.

Calculus

First we define a caracas symbol x (more details will follow in the section Important technical aspects) and subsequently a caracas polynomial p in x (p becomes a symbol because x is):
R> library(caracas)
R> def_sym(x)  ## Declares 'x' as a symbol
R> p <- 1 - x^2 + x^3 + x^4/4 - 3 * x^5 / 5 + x^6 / 6
R> p
#> [c]: x^6/6 - 3*x^5/5 + x^4/4 + x^3 - x^2 + 1
The gradient of p is:
R> grad <- der(p, x)  ## 'der' is shorthand for derivative
R> grad
#> [c]: x^5 - 3*x^4 + x^3 + 3*x^2 - 2*x
Stationary points of p can be found by finding roots of the gradient. In this simple case we can factor the gradient:
R> factor_(grad)
#> [c]: x*(x - 2)*(x - 1)^2*(x + 1)
The factorization shows that the stationary points are −1, 0, 1 and 2. To investigate whether the stationary points are local minima, local maxima or saddle points, we compute the Hessian and evaluate it in the stationary points:
R> hess <- der2(p, x)
R> hess
#> [c]: 5*x^4 - 12*x^3 + 3*x^2 + 6*x - 2
R> hess_ <- as_func(hess)
R> hess_
#> function (x)
#> {
#>     5 * x^4 - 12 * x^3 + 3 * x^2 + 6 * x - 2
#> }
#> <environment: 0x55e4f8ea8890>
R> stationary_points <- c(-1, 0, 1, 2)
R> hess_(stationary_points)
#> [1] 12 -2  0  6
Alternatively, we can create an R expression and evaluate it:
R> eval(as_expr(hess), list(x = stationary_points))
#> [1] 12 -2  0  6
The sign of the Hessian in these points gives that x = −1 and x = 2 are local minima, x = 0 is a local maximum and x = 1 is a saddle point. In general we can find the stationary points symbolically and evaluate the Hessian as follows (output omitted):
R> sol <- solve_sys(lhs = grad, vars = x)  ## finds roots by default
R> subs(hess, sol[[1]])  ## the first solution
R> lapply(sol, function(s) subs(hess, s))  ## iterate over all solutions
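Since caracas delegates the symbolic work to SymPy, the same calculus workflow can be sketched directly in Python with SymPy; the names below are ours, not part of the caracas API.

```python
# Sketch of the calculus example in SymPy (the backend caracas wraps).
import sympy as sp

x = sp.symbols("x")
p = 1 - x**2 + x**3 + x**4/4 - 3*x**5/5 + x**6/6

grad = sp.diff(p, x)            # gradient of p
grad_factored = sp.factor(grad) # factors as x*(x - 2)*(x - 1)**2*(x + 1)

hess = sp.diff(p, x, 2)         # second derivative
vals = [hess.subs(x, s) for s in (-1, 0, 1, 2)]
print(vals)                     # [12, -2, 0, 6]
```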
Linear algebra

Next, we create a symbolic matrix and find its inverse:
R> M <- as_sym(toeplitz(c("a", "b", 0)))  ## as_sym() converts an R object to a caracas symbol
R> Minv <- inv(M) %>% simplify()
Default printing of M is (Minv is shown in the next section):
R> M
#> [c]: [a  b  0]
#>      [b  a  b]
#>      [0  b  a]
A vector is a one-column matrix, but it is printed as its transpose to save space:
R> v <- vector_sym(3, "v")
R> v
#> [c]: [v1  v2  v3]^T
Matrix products are computed using the %*% operator:
R> M %*% v
#> [c]: [a*v1 + b*v2  a*v2 + b*v1 + b*v3  a*v3 + b*v2]^T
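For readers who want to check the linear algebra independently, the same Toeplitz matrix can be inverted in SymPy (the Python library caracas builds on); the code below is our sketch, not caracas itself.

```python
# Symbolic inverse of the 3x3 Toeplitz matrix M in SymPy.
import sympy as sp

a, b = sp.symbols("a b")
M = sp.Matrix([[a, b, 0],
               [b, a, b],
               [0, b, a]])

Minv = sp.simplify(M.inv())
print(sp.factor(M.det()))     # a*(a**2 - 2*b**2)
print(sp.simplify(M * Minv))  # the identity matrix
```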
Preparing mathematical documents

The packages Sweave [Leisch, 2002] and Rmarkdown [Allaire et al., 2021] provide integration of LaTeX and other text formatting systems into R, helping to produce text documents with R content. In a similar vein, caracas provides an integration of computer algebra into R; in addition, caracas also facilitates creation of documents with mathematical content without e.g. typing tedious LaTeX instructions.

A LaTeX rendering of the caracas symbol p is obtained by typing $$p(x) = `r tex(p)`$$ which results in the following when the document is compiled:

$$p(x) = \frac{x^{6}}{6} - \frac{3 x^{5}}{5} + \frac{x^{4}}{4} + x^{3} - x^{2} + 1$$
Typing $$M^{-1} = `r tex(Minv)`$$ produces the result:

$$M^{-1} = \begin{bmatrix}
\frac{a^{2}-b^{2}}{a\,(a^{2}-2b^{2})} & -\frac{b}{a^{2}-2b^{2}} & \frac{b^{2}}{a\,(a^{2}-2b^{2})} \\
-\frac{b}{a^{2}-2b^{2}} & \frac{a}{a^{2}-2b^{2}} & -\frac{b}{a^{2}-2b^{2}} \\
\frac{b^{2}}{a\,(a^{2}-2b^{2})} & -\frac{b}{a^{2}-2b^{2}} & \frac{a^{2}-b^{2}}{a\,(a^{2}-2b^{2})}
\end{bmatrix}.$$
The determinant of M is det(M) = a^3 − 2ab^2, and this factor can be pulled out of the matrix by dividing each entry by the determinant and multiplying the resulting matrix by the determinant, which simplifies the appearance of the matrix:
R> Minv_fact <- as_factor_list(1 / factor_(det(M)), simplify(Minv * det(M)))
Typing $$M^{-1} = `r tex(Minv_fact)`$$ produces this:

$$M^{-1} = \frac{1}{a\,(a^{2} - 2b^{2})} \begin{bmatrix}
a^{2}-b^{2} & -ab & b^{2} \\
-ab & a^{2} & -ab \\
b^{2} & -ab & a^{2}-b^{2}
\end{bmatrix}.$$
Finally we illustrate creation of additional mathematical expressions:
R> def_sym(x, n)
R> y <- (1 + x/n)^n
R> lim(y, n, Inf)
#> [c]: exp(x)
Typing $$y = `r tex(y)`$$ etc. gives

$$y = \left(1 + \frac{x}{n}\right)^{n}, \qquad \lim_{n \to \infty} y = \exp(x).$$
We can also prepare unevaluated expressions using the doit argument. That helps make reproducible documents where changes in code appear automatically in the generated formulas. This is done as follows:
R> l <- lim(y, n, Inf, doit = FALSE)
R> l
#> [c]: lim_{n -> oo} (1 + x/n)^n
R> doit(l)
#> [c]: exp(x)
Typing $$`r tex(l)` = `r tex(doit(l))`$$ gives

$$\lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^{n} = e^{x}.$$

Several functions have the doit argument, e.g. lim(), int() and sum_().
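The doit mechanism mirrors SymPy's unevaluated Limit objects; a sketch of the same idea in Python (SymPy is the engine behind caracas):

```python
# Unevaluated limit (like doit = FALSE) and its evaluation in SymPy.
import sympy as sp

x, n = sp.symbols("x n", positive=True)
y = (1 + x/n)**n

l = sp.Limit(y, n, sp.oo)  # unevaluated limit object
print(sp.latex(l))         # LaTeX for the unevaluated limit
print(l.doit())            # exp(x)
```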
Important technical aspects

A caracas symbol is a list with a pyobj slot and the class caracas_symbol. The pyobj is an object in Python (often a sympy object). As such, a symbol (in R) provides a handle to a Python object. In the design of caracas we have tried to make this distinction something the user need not be concerned with, but it is worthwhile being aware of it.

The sections Calculus and Linear algebra illustrate that caracas symbols can be created with def_sym() and as_sym(). Both declare the symbol in R and in Python. A symbol can also be defined in terms of other symbols: define symbols s1 and s2 and define symbol s3_ in terms of s1 and s2:
R> def_sym(s1, s2)  ## Note: 's1' and 's2' exist in both R and Python
R> s1$pyobj
#> s1
R> s3_ <- s1 * s2  ## Note: 's3_' is a symbol in R; no corresponding object in Python
R> s3_$pyobj
#> s1*s2
The underscore in s3_ indicates that this expression is defined in terms of other symbols. This convention is used throughout the paper. Next express s1 and s2 in terms of symbols u and v (which are created on the fly):
R> s4_ <- subs(s3_, c("s1", "s2"), c("u+v", "u-v"))
R> s4_
#> [c]: (u - v)*(u + v)
Statistics examples

In this section we examine larger statistical examples and demonstrate how caracas can help improve understanding of the models.

Linear models

A matrix algebra approach to e.g. linear models is very clear and concise. On the other hand, it can also be argued that matrix algebra obscures what is being computed. Numerical examples are useful for some aspects of the computations but not for others. In this respect symbolic computations can be enlightening.
Consider a two-way analysis of variance (ANOVA) with one observation per group, see Table 1.

Table 1: Two-by-two layout of data.

    y11  y12
    y21  y22

R> nr <- 2
R> nc <- 2
R> y <- matrix_sym(nr, nc, "y")
R> dim(y) <- c(nr*nc, 1)
R> y
#> [c]: [y11  y21  y12  y22]^T
R> dat <- expand.grid(r=factor(1:nr), s=factor(1:nc))
R> X <- model.matrix(~r+s, data=dat) |> as_sym()
R> b <- vector_sym(ncol(X), "b")
R> mu <- X %*% b
For the specific model we have random variables y = (yij). All yij's are assumed independent and yij ∼ N(µij, v). The corresponding mean vector µ has the form given below:

$$y = \begin{bmatrix} y_{11} \\ y_{21} \\ y_{12} \\ y_{22} \end{bmatrix}, \quad
X = \begin{bmatrix} 1 & . & . \\ 1 & 1 & . \\ 1 & . & 1 \\ 1 & 1 & 1 \end{bmatrix}, \quad
b = \begin{bmatrix} b_{1} \\ b_{2} \\ b_{3} \end{bmatrix}, \quad
\mu = Xb = \begin{bmatrix} b_{1} \\ b_{1} + b_{2} \\ b_{1} + b_{3} \\ b_{1} + b_{2} + b_{3} \end{bmatrix}.$$
Above and elsewhere, dots represent zero. The least squares estimate of b is the vector b̂ that minimizes ||y − Xb||², which leads to the normal equations (X⊤X)b = X⊤y. If X has full rank, the unique solution to the normal equations is b̂ = (X⊤X)⁻¹X⊤y. Hence the estimated mean vector is µ̂ = Xb̂ = X(X⊤X)⁻¹X⊤y. Symbolic computations are not needed for quantities involving only the model matrix X, but when it comes to computations involving y, a symbolic treatment of y is useful:
R> XtX <- t(X) %*% X
R> XtXinv <- inv(XtX)
R> Xty <- t(X) %*% y
R> b_hat <- XtXinv %*% Xty

$$X^{\top}y = \begin{bmatrix} y_{11} + y_{12} + y_{21} + y_{22} \\ y_{21} + y_{22} \\ y_{12} + y_{22} \end{bmatrix}, \quad
\hat{b} = \frac{1}{4} \begin{bmatrix} 3y_{11} + y_{12} + y_{21} - y_{22} \\ -2y_{11} - 2y_{12} + 2y_{21} + 2y_{22} \\ -2y_{11} + 2y_{12} - 2y_{21} + 2y_{22} \end{bmatrix} \qquad (1)$$

Hence X⊤y (a sufficient reduction of data if the variance is known) consists of the sum of all observations, the sum of observations in the second row and the sum of observations in the second column. For b̂, the second component is, apart from a scaling, the sum of the second row minus the sum of the first row. Likewise, the third component is the sum of the second column minus the sum of the first column. It is hard to give an interpretation of the first component of b̂.
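Equation (1) can be verified independently in SymPy (caracas's backend); the matrix below is the same design matrix, and the names are ours.

```python
# Check the symbolic least squares estimate for the two-way ANOVA design.
import sympy as sp

y11, y21, y12, y22 = sp.symbols("y11 y21 y12 y22")
y = sp.Matrix([y11, y21, y12, y22])
X = sp.Matrix([[1, 0, 0],
               [1, 1, 0],
               [1, 0, 1],
               [1, 1, 1]])

b_hat = sp.simplify((X.T * X).inv() * X.T * y)
print(4 * b_hat)  # matches equation (1) up to the factor 1/4
```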
Logistic regression

In the following we go through details of a logistic regression model; see e.g. McCullagh and Nelder [1989] for a classical description of logistic regression. Observables are binomially distributed, yi ∼ bin(pi, ni). The probability pi is connected to a q-vector of covariates xi = (xi1, . . . , xiq) and a q-vector of regression coefficients b = (b1, . . . , bq) as follows: The term si = xi · b is denoted the linear predictor. The probability pi can be linked to si in different ways, but the most commonly employed is via the logit link function, logit(pi) = log(pi/(1 − pi)), so here logit(pi) = si.

As an example, consider the budworm data from the doBy package [Højsgaard and Halekoh, 2023]. The data shows the number of tobacco budworm moths (Heliothis virescens) killed. Batches of 20 moths of each sex were exposed for three days to the pyrethroid, and the number in each batch that were dead or knocked down was recorded:
R> data(budworm, package = "doBy")
R> bud <- subset(budworm, sex == "male")
R> bud
#>    sex dose ndead ntotal
#> 1 male    1     1     20
#> 2 male    2     4     20
#> 3 male    4     9     20
#> 4 male    8    13     20
#> 5 male   16    18     20
#> 6 male   32    20     20
Below we focus only on male budworms; the mortality is illustrated in Figure 1 (produced with ggplot2 [Wickham, 2016]). On the y-axis we have the empirical logits, i.e. log((ndead + 0.5)/(ntotal − ndead + 0.5)). The figure suggests that the logit grows linearly with log dose.
[Figure 1 here: empirical logits plotted against dose (left panel) and against log2(dose) (right panel).]

Figure 1: Insecticide mortality of the moth tobacco budworm.
Each component of the likelihood

The log-likelihood is log L = Σ_i y_i log(p_i) + (n_i − y_i) log(1 − p_i) = Σ_i log L_i, say. With log(p_i/(1 − p_i)) = s_i we have p_i = 1/(1 + exp(−s_i)) and (d/ds_i) p_i = exp(−s_i)/(1 + exp(−s_i))². With s_i = x_i · b, we have (d/db) s_i = x_i.

Consider the contribution to the total log-likelihood from the ith observation, which is l_i = y_i log(p_i) + (n_i − y_i) log(1 − p_i). Since we are focusing on one observation only, we shall ignore the subscript i in this section. First notice that with s = log(p/(1 − p)) we can find p as:
R> def_sym(s, p)
R> sol_ <- solve_sys(lhs = log(p / (1 - p)), rhs = s, vars = p)
R> sol_[[1]]$p
#> [c]: exp(s)/(exp(s) + 1)
Next, find the likelihood as a function of p, as a function of s and as a function of b. The underscore in logLb_ and elsewhere indicates that this expression is defined in terms of other symbols (in contrast to the free variables, e.g. y, p and n):
R> def_sym(y, n, p, x, s, b)
R> logLp_ <- y * log(p) + (n - y) * log(1 - p)
R> p_ <- exp(s) / (exp(s) + 1)
R> logLs_ <- subs(logLp_, p, p_)
R> s_ <- sum(x * b)
R> logLb_ <- subs(logLs_, s, s_)
R> logLb_
#> [c]: y*log(exp(b*x)/(exp(b*x) + 1)) + (n - y)*log(1 - exp(b*x)/(exp(b*x) + 1))
The log-likelihood can be maximized using e.g. Newton-Raphson (see e.g. Nocedal and Wright [2006]) and in this connection we need the score function, S, and the Hessian, H:
R> Sb_ <- score(logLb_, b) |> simplify()
R> Hb_ <- hessian(logLb_, b) |> simplify()
R> Sb_
#> [c]: [x*(y - (n - y)*exp(b*x))/(exp(b*x) + 1)]
R> Hb_
#> [c]: [-n*x^2*exp(b*x)/(exp(2*b*x) + 2*exp(b*x) + 1)]
Since x and b are vectors, the term b*x above should be read as the inner product x · b (or as x⊤b in matrix notation). Also, since x is a vector, the term x^2 above should be read as the outer product x ⊗ x (or as xx⊤ in matrix notation). More insight into the structure is obtained by letting b and x be 2-vectors (to save space, the Hessian matrix is omitted in the following):
R> b <- vector_sym(2, "b")
R> x <- vector_sym(2, "x")
R> s_ <- sum(x * b)
R> logLb_ <- subs(logLs_, s, s_)
R> Sb_ <- score(logLb_, b) |> simplify()

$$\text{logLb\_} = y \log\left(\frac{e^{b_1 x_1 + b_2 x_2}}{e^{b_1 x_1 + b_2 x_2} + 1}\right) + (n - y)\log\left(1 - \frac{e^{b_1 x_1 + b_2 x_2}}{e^{b_1 x_1 + b_2 x_2} + 1}\right), \qquad (2)$$

$$\text{Sb\_} = \begin{bmatrix} \frac{x_1 (-n e^{b_1 x_1 + b_2 x_2} + y e^{b_1 x_1 + b_2 x_2} + y)}{e^{b_1 x_1 + b_2 x_2} + 1} \\ \frac{x_2 (-n e^{b_1 x_1 + b_2 x_2} + y e^{b_1 x_1 + b_2 x_2} + y)}{e^{b_1 x_1 + b_2 x_2} + 1} \end{bmatrix}. \qquad (3)$$
Next, insert data, e.g. x1 = 1, x2 = 2, y = 9, n = 20, to obtain a function of the regression parameters only. Note how an expression depending on other symbols, e.g. Sb_, is renamed to Sb. to indicate that data has been inserted:
R> nms <- c("x1", "x2", "y", "n")
R> vls <- c(1, 2, 9, 20)
R> logLb. <- subs(logLb_, nms, vls)
R> Sb. <- subs(Sb_, nms, vls)
The total score for the entire dataset can be obtained as follows:
R> Sb_list <- lapply(seq_len(nrow(bud)), function(r){
+    vls <- c(1, log2(bud$dose[r]), bud$ndead[r], bud$ntotal[r])
+    subs(Sb_, nms, vls)
+ })
R> Sb_total <- Reduce(`+`, Sb_list)
This score can be used as part of an iterative algorithm for solving the score equations. If one wants to use Newton-Raphson, the total Hessian matrix must also be created along lines similar to those above. It is straightforward to implement a Newton-Raphson algorithm based on these quantities; one must only note the distinction between the two expressions below (and it is the latter one would use in an iterative algorithm):
R> subs(Sb_total, b, c(1, 2))
R> subs(Sb_total, b, c(1, 2)) |> as_expr()
An alternative is to construct the total log-likelihood for the entire dataset as a caracas object, convert this object to an R function and maximize this function using one of R's optimization methods:
R> logLb_list <- lapply(seq_len(nrow(bud)), function(r){
+    vls <- c(1, log2(bud$dose[r]), bud$ndead[r], bud$ntotal[r])
+    subs(logLb_, nms, vls)
+ })
R> logLb_total <- Reduce(`+`, logLb_list)
R> logLb_total_func <- as_func(logLb_total, vec_arg = TRUE)
The total likelihood symbolically

We conclude this section by illustrating that the log-likelihood for the entire dataset can be constructed in a few steps (output is omitted to save space):
R> X. <- as_sym(cbind(1, log2(bud$dose)))
R> n. <- as_sym(bud$ntotal)
R> y. <- as_sym(bud$ndead)
R> N <- nrow(X.)
R> q <- ncol(X.)
R> X <- matrix_sym(N, q, "x")
R> n <- vector_sym(N, "n")
R> y <- vector_sym(N, "y")
R> p <- vector_sym(N, "p")
R> s <- vector_sym(N, "s")
R> b <- vector_sym(q, "b")

$$X = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ x_{31} & x_{32} \\ x_{41} & x_{42} \\ x_{51} & x_{52} \\ x_{61} & x_{62} \end{bmatrix}, \quad
X. = \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \\ 1 & 5 \end{bmatrix}, \quad
n. = \begin{bmatrix} 20 \\ 20 \\ 20 \\ 20 \\ 20 \\ 20 \end{bmatrix}, \quad
n = \begin{bmatrix} n_{1} \\ n_{2} \\ n_{3} \\ n_{4} \\ n_{5} \\ n_{6} \end{bmatrix}, \quad
y. = \begin{bmatrix} 1 \\ 4 \\ 9 \\ 13 \\ 18 \\ 20 \end{bmatrix}.$$
The symbolic computations are as follows:
R> ## log-likelihood as function of p
R> logLp <- sum(y * log(p) + (n-y) * log(1-p))
R> ## log-likelihood as function of s
R> p_ <- exp(s) / (exp(s) + 1)
R> logLs <- subs(logLp, p, p_)
R> ## linear predictor as function of regression coefficients:
R> s_ <- X %*% b
R> ## log-likelihood as function of regression coefficients:
R> logLb <- subs(logLs, s, s_)
Next, numerical values can be inserted:
R> logLb <- subs(logLb, cbind(n, y, X), cbind(n., y., X.))
An alternative would have been to define logLp above in terms of n. and y. and similarly define s_ in terms of X.. If doing so, the last step where numerical values are inserted could have been avoided. From here, one may proceed by computing the score function and the Hessian matrix and solve the score equation using e.g. Newton-Raphson. Alternatively, one might create an R function based on the log-likelihood and maximize this function using one of R's optimization methods (see the example in the previous section):
R> logLb_func <- as_func(logLb, vec_arg = TRUE)
R> optim(c(0, 0), logLb_func, control = list(fnscale = -1), hessian = TRUE)
Maximum likelihood under constraints

In this section we illustrate constrained optimization using Lagrange multipliers. This is demonstrated for the independence model for a two-way contingency table. Consider a 2 × 2 contingency table with cell counts yij and cell probabilities pij for i = 1, 2 and j = 1, 2, where i refers to row and j to column as illustrated in Table 1.

Under multinomial sampling, the log-likelihood is

$$l = \log L = \sum_{ij} y_{ij} \log(p_{ij}).$$

Under the assumption of independence between rows and columns, the cell probabilities have the form (see e.g. Højsgaard et al. [2012], p. 32)

$$p_{ij} = u \cdot r_i \cdot s_j.$$

To make the parameters (u, ri, sj) identifiable, constraints must be imposed. One possibility is to require that r1 = s1 = 1. The task is then to estimate u, r2, s2 by maximizing the log-likelihood under the constraint that Σ_ij p_ij = 1. This can be achieved using a Lagrange multiplier where we instead solve the unconstrained optimization problem max_p Lag(p), where

$$Lag(p) = -l(p) + \lambda g(p) \qquad (4)$$

under the constraint that

$$g(p) = \sum_{ij} p_{ij} - 1 = 0, \qquad (5)$$

where λ is a Lagrange multiplier. In SymPy, lambda is a reserved symbol, hence the underscore as postfix below:
R> y_ <- c("y_11", "y_21", "y_12", "y_22")
R> y <- as_sym(y_)
R> def_sym(u, r2, s2, lambda_)
R> p <- as_sym(c("u", "u*r2", "u*s2", "u*r2*s2"))
R> logL <- sum(y * log(p))
R> Lag <- -logL + lambda_ * (sum(p) - 1)
R> vars <- list(u, r2, s2, lambda_)
R> gLag <- der(Lag, vars)
R> sol <- solve_sys(gLag, vars)
R> print(sol, method = "ascii")
#> Solution 1:
#>   lambda_ = y_11 + y_12 + y_21 + y_22
#>   r2      = (y_21 + y_22)/(y_11 + y_12)
#>   s2      = (y_12 + y_22)/(y_11 + y_21)
#>   u       = (y_11 + y_12)*(y_11 + y_21)/(y_11 + y_12 + y_21 + y_22)^2
R> sol <- sol[[1]]
There is only one critical point. Fitted cell probabilities p̂ij are:
R> p11 <- sol$u
R> p21 <- sol$u * sol$r2
R> p12 <- sol$u * sol$s2
R> p22 <- sol$u * sol$r2 * sol$s2
R> p.hat <- matrix_(c(p11, p21, p12, p22), nrow = 2)

$$\hat{p} = \frac{1}{(y_{11} + y_{12} + y_{21} + y_{22})^2} \begin{bmatrix} (y_{11} + y_{12})(y_{11} + y_{21}) & (y_{11} + y_{12})(y_{12} + y_{22}) \\ (y_{11} + y_{21})(y_{21} + y_{22}) & (y_{12} + y_{22})(y_{21} + y_{22}) \end{bmatrix}$$

To verify that the maximum likelihood estimate has been found, we compute the Hessian matrix, which is negative definite (the Hessian matrix is diagonal, so the eigenvalues are the diagonal entries and these are all negative); output omitted:
R> H <- hessian(logL, list(u, r2, s2)) |> simplify()
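The solution printed above can be verified in SymPy (which caracas calls under the hood): plugging the closed-form estimates into the gradient of the Lagrangian gives zero. The code below is our sketch, not caracas itself.

```python
# Verify that the stated solution is a critical point of the Lagrangian.
import sympy as sp

y11, y21, y12, y22 = sp.symbols("y_11 y_21 y_12 y_22", positive=True)
u, r2, s2, lam = sp.symbols("u r2 s2 lambda_", positive=True)

p = [u, u*r2, u*s2, u*r2*s2]
y = [y11, y21, y12, y22]
logL = sum(yv * sp.log(pv) for yv, pv in zip(y, p))
Lag = -logL + lam * (sum(p) - 1)

T = y11 + y12 + y21 + y22
hat = {u: (y11 + y12)*(y11 + y21)/T**2,
       r2: (y21 + y22)/(y11 + y12),
       s2: (y12 + y22)/(y11 + y21),
       lam: T}
grad = [sp.simplify(sp.diff(Lag, var).subs(hat)) for var in (u, r2, s2, lam)]
print(grad)  # [0, 0, 0, 0]
```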
An AR(1) model

Symbolic computations

In this section we study the autoregressive model of order 1 (an AR(1) model), see e.g. Shumway and Stoffer [2016], p. 75 ff. for details. Consider random variables x1, x2, . . . , xn following a stationary zero-mean AR(1) process:

$$x_i = a x_{i-1} + e_i; \qquad i = 2, \dots, n, \qquad (6)$$

where ei ∼ N(0, v) and all ei's are independent. Note that v denotes the variance. The marginal distribution of x1 is also assumed normal, and for the process to be stationary we must have that the variance is Var(x1) = v/(1 − a²). Hence we can write x1 = e1/√(1 − a²).
For simplicity of exposition, we set n = 4. All terms e1, . . . , e4 are independent and N(0, v) distributed. Let e = (e1, . . . , e4) and x = (x1, . . . , x4). Hence e ∼ N(0, vI). Isolating error terms in (6) gives

$$e = \begin{bmatrix} e_1 \\ e_2 \\ e_3 \\ e_4 \end{bmatrix} = \begin{bmatrix} \sqrt{1-a^2} & . & . & . \\ -a & 1 & . & . \\ . & -a & 1 & . \\ . & . & -a & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = Lx.$$

Since Var(e) = vI we have vI = L Var(x) L⊤, so the covariance matrix of x is V = Var(x) = vL⁻¹(L⁻¹)⊤ while the concentration matrix (the inverse covariance matrix) is K = v⁻¹L⊤L:
R> n <- 4
R> L <- diff_mat(n, "-a")
R> def_sym(a)
R> L[1, 1] <- sqrt(1-a^2)
R> def_sym(v)
R> Linv <- inv(L)
R> K <- crossprod_(L) / v
R> V <- tcrossprod_(Linv) * v
$$L^{-1} = \begin{bmatrix} \frac{1}{\sqrt{1-a^2}} & . & . & . \\ \frac{a}{\sqrt{1-a^2}} & 1 & . & . \\ \frac{a^2}{\sqrt{1-a^2}} & a & 1 & . \\ \frac{a^3}{\sqrt{1-a^2}} & a^2 & a & 1 \end{bmatrix}, \qquad (7)$$

$$K = \frac{1}{v} \begin{bmatrix} 1 & -a & . & . \\ -a & a^2+1 & -a & . \\ . & -a & a^2+1 & -a \\ . & . & -a & 1 \end{bmatrix}, \qquad (8)$$

$$V = v \begin{bmatrix} \frac{1}{1-a^2} & \frac{a}{1-a^2} & \frac{a^2}{1-a^2} & \frac{a^3}{1-a^2} \\ \frac{a}{1-a^2} & \frac{a^2}{1-a^2}+1 & \frac{a^3}{1-a^2}+a & \frac{a^4}{1-a^2}+a^2 \\ \frac{a^2}{1-a^2} & \frac{a^3}{1-a^2}+a & \frac{a^4}{1-a^2}+a^2+1 & \frac{a^5}{1-a^2}+a^3+a \\ \frac{a^3}{1-a^2} & \frac{a^4}{1-a^2}+a^2 & \frac{a^5}{1-a^2}+a^3+a & \frac{a^6}{1-a^2}+a^4+a^2+1 \end{bmatrix}. \qquad (9)$$

The zeros in the concentration matrix K imply conditional independence restrictions: if the ijth element of a concentration matrix is zero then xi and xj are conditionally independent given all other variables, see e.g. Højsgaard et al. [2012], p. 84 for details.
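The structure of K can be reproduced in a few lines of SymPy (caracas's backend); this sketch, with our own names, builds L for n = 4 and checks the zero pattern discussed above.

```python
# Build L and the concentration matrix K = L^T L / v for an AR(1), n = 4.
import sympy as sp

a, v = sp.symbols("a v", positive=True)
n = 4
L = sp.zeros(n, n)
for i in range(n):
    L[i, i] = 1
    if i > 0:
        L[i, i - 1] = -a     # subdiagonal of -a
L[0, 0] = sp.sqrt(1 - a**2)  # stationarity adjustment for x1

K = (L.T * L).expand() / v
# Entries (1,3), (1,4) and (2,4) (and their transposes) are zero,
# so e.g. x1 and x3 are conditionally independent given the rest.
print(K)
```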
Next, we take the step from symbolic computations to numerical evaluations. The joint distribution of x is multivariate normal, x ∼ N(0, K⁻¹). Let W = xx⊤ denote the matrix of (cross) products. The log-likelihood is therefore (ignoring additive constants)

$$\log L = \frac{n}{2}(\log \det(K) - x^{\top} K x) = \frac{n}{2}(\log \det(K) - \operatorname{tr}(KW)),$$

where we note that tr(KW) is the sum of the elementwise products of K and W since both matrices are symmetric. Ignoring the constant n/2, this can be written symbolically to obtain the expression in this particular case:
R> x <- vector_sym(n, "x")
R> logL <- log(det(K)) - sum(K * (x %*% t(x))) %>% simplify()

$$\log L = \log\left(\frac{1 - a^2}{v^4}\right) - \frac{-2 a x_1 x_2 - 2 a x_2 x_3 - 2 a x_3 x_4 + x_1^2 + x_2^2 (a^2 + 1) + x_3^2 (a^2 + 1) + x_4^2}{v}.$$
Numerical evaluation

Next we illustrate how to bridge the gap from symbolic computations to numerical computations based on a dataset. For a specific data vector we get:
R> xt <- c(0.1, -0.9, 0.4, .0)
R> logL. <- subs(logL, x, xt)

$$\log L = \log\left(\frac{1 - a^2}{v^4}\right) - \frac{0.97 a^2 + 0.9 a + 0.98}{v}.$$

We can use R for numerical maximization of the likelihood, and constraints on the parameter values can be imposed e.g. in the optim() function:
R> logL_wrap <- as_func(logL., vec_arg = TRUE)
R> eps <- 0.01
R> par <- optim(c(a=0, v=1), logL_wrap,
+    lower=c(-(1-eps), eps), upper=c((1-eps), 10),
+    method="L-BFGS-B", control=list(fnscale=-1))$par
R> par
#>      a      v
#> -0.376  0.195
The same model can be fitted e.g. using R's arima() function as follows (output omitted):
R> arima(xt, order = c(1, 0, 0), include.mean = FALSE, method = "ML")
It is less trivial to do the optimization in caracas by solving the score equations. There are some possibilities for putting assumptions on variables in caracas (see the "Reference" vignette), but it is not possible to restrict the parameter a to only take values in (−1, 1).
Variance of the average of correlated data

Consider random variables x1, . . . , xn where Var(xi) = v and Cov(xi, xj) = vr for i ≠ j, where 0 ≤ |r| ≤ 1. For n = 3, the covariance matrix of (x1, . . . , xn) is therefore

$$V = vR = v \begin{bmatrix} 1 & r & r \\ r & 1 & r \\ r & r & 1 \end{bmatrix}. \qquad (10)$$

Let x̄ = Σ_i x_i/n denote the average. Suppose interest is in the variance of the average, Var(x̄), when n goes to infinity. One approach is as follows: Let 1 denote an n-vector of 1's and let V be an n × n matrix with v on the diagonal and vr outside the diagonal. Then Var(x̄) = (1/n²) 1⊤V 1. The answer lies in studying the limiting behaviour of this expression when n → ∞. First, we must calculate the variance of the sum x. = Σ_i x_i, which is Var(x.) = Σ_i Var(x_i) + 2 Σ_{i<j} Cov(x_i, x_j) (i.e., the sum of the elements of the covariance matrix). We can do this in caracas as follows:
R> def_sym(v, r, n, j, i)
R> var_sum <- v*(n + 2*sum_(sum_(r, j, i+1, n), i, 1, n-1)) |> simplify()
R> var_avg <- var_sum / n^2

$$Var(x.) = n v (r(n-1) + 1), \qquad Var(\bar{x}) = \frac{v (r(n-1) + 1)}{n}.$$

From here, we can study the limiting behavior of the variance Var(x̄) in different situations:
R> l_1 <- lim(var_avg, n, Inf)          ## when sample size n goes to infinity
R> l_2 <- lim(var_avg, r, 0, dir='+')   ## when correlation r goes to zero
R> l_3 <- lim(var_avg, r, 1, dir='-')   ## when correlation r goes to one
For a given correlation r it is instructive to investigate how many independent variables k the n correlated variables correspond to (in the sense of the same variance of the average), because k can be seen as a measure of the amount of information in data. Moreover, one might study how k behaves as a function of n when n → ∞. That is, we must (1) solve v(1 + (n − 1)r)/n = v/k for k and (2) find lim_{n→∞} k:
R> def_sym(k)
R> k <- solve_sys(var_avg - v / k, k)[[1]]$k
R> l_k <- lim(k, n, Inf)
The findings above are:

$$l_1 = rv, \qquad l_2 = \frac{v}{n}, \qquad l_3 = v, \qquad k = \frac{n}{nr - r + 1}, \qquad l_k = \frac{1}{r}.$$
With respect to k, it is illustrative to supplement the symbolic computations above with numerical evaluations, which show that even a moderate correlation reduces the effective sample size substantially:
R> dat <- expand.grid(r=c(.1, .2, .5), n=c(10, 50))
R> k_fun <- as_func(k)
R> dat$k <- k_fun(r=dat$r, n=dat$n)
R> dat$ri <- 1/dat$r
R> dat
#>     r  n    k ri
#> 1 0.1 10 5.26 10
#> 2 0.2 10 3.57  5
#> 3 0.5 10 1.82  2
#> 4 0.1 50 8.47 10
#> 5 0.2 50 4.63  5
#> 6 0.5 50 1.96  2
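The same limits can be reproduced in SymPy (the backend of caracas); the sketch below uses our own names.

```python
# Limits for the variance of the average of equicorrelated variables.
import sympy as sp

v, r, n, k = sp.symbols("v r n k", positive=True)
var_avg = v * (r*(n - 1) + 1) / n   # Var of the average, from the text

l_1 = sp.limit(var_avg, n, sp.oo)        # r*v: the variance never vanishes
l_2 = sp.limit(var_avg, r, 0, dir="+")   # v/n: the independent case
k_sol = sp.solve(sp.Eq(var_avg, v / k), k)[0]
l_k = sp.limit(k_sol, n, sp.oo)          # 1/r: limiting effective sample size
print(l_1, l_2, l_k)
```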
+ Possible topics to study
1120
+ 1. Related to Section Linear models:
1121
+ a) The orthogonal projection matrix onto the span of the model matrix X is P = X(X⊤X)−1X⊤.
1122
+ The residuals are r = (I − P)y. From this one may verify that these are not all independent.
1123
+ b) If one of the factors is ignored, then the model becomes a one-way analysis of variance model,
1124
+ at it is illustrative to redo the computations in Section Linear models in this setting.
1125
+ c) Likewise if an interaction between the two factors is included in the model.
1126
+ What are the
1127
+ residuals in this case?
1128
+ 2. Related to Section Logistic regression:
1129
+ a) In Each component of the likelihood, Newton-Rapson can be implemented to solve the likelihood
1130
+ equations and compared to the output from glm().
1131
+ Note how sensitive Newton-Rapson is
1132
+ to starting point.
1133
+ This can be solved by another optimisation scheme, e.g.
1134
+ Nelder-Mead
1135
+ (optimising the log likelihood) or BFGS (finding extreme for the score function).
1136
+ b) The example is done as logistic regression with the logit link function. Try other link functions
1137
+ such as cloglog (complementary log-log).
3. Related to Section Maximum likelihood under constraints:

   a) Identifiability of the parameters was handled by not including r1 and s1 in the specification
      of pij. An alternative is to impose the restrictions r1 = 1 and s1 = 1, and this can also be
      handled via Lagrange multipliers. Another alternative is to regard the model as a log-linear
      model where log pij = log u + log ri + log sj = ũ + r̃i + s̃j. This model is similar in its
      structure to the two-way ANOVA of Section Linear models, and it can be fitted as a
      generalized linear model with a Poisson likelihood and log as link function. Hence, one may
      modify the results in Section Logistic regression to provide an alternative way of fitting
      the model.

   b) A simpler task is to consider a multinomial distribution with four categories, counts yi and
      cell probabilities pi, i = 1, 2, 3, 4, where Σi pi = 1. For this model, find the maximum
      likelihood estimate for pi (use the Hessian to verify that the critical point is a maximum).
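The constrained maximisation in 3(b) can be carried out symbolically with a Lagrange multiplier. The sketch below uses SymPy directly (the Python library that caracas wraps) and recovers the familiar estimate p̂i = yi/n:

```python
import sympy as sp

y1, y2, y3, y4 = sp.symbols('y1:5', positive=True)   # category counts
p1, p2, p3, p4 = sp.symbols('p1:5', positive=True)   # cell probabilities
lam = sp.Symbol('lambda', positive=True)

ys, ps = (y1, y2, y3, y4), (p1, p2, p3, p4)

# Lagrangian: multinomial log-likelihood plus a multiplier on sum(p) = 1.
L = sum(yi * sp.log(pi) for yi, pi in zip(ys, ps)) - lam * (sum(ps) - 1)

# Stationarity: dL/dp_i = y_i/p_i - lambda = 0  =>  p_i = y_i/lambda.
stationary = [sp.solve(sp.diff(L, pi), pi)[0] for pi in ps]

# The constraint sum(p) = 1 then gives lambda = y1 + ... + y4 = n.
lam_hat = sp.solve(sum(stationary) - 1, lam)[0]
p_hat = [sp.simplify(s.subs(lam, lam_hat)) for s in stationary]
```

The Hessian check is immediate: the second derivatives of the log-likelihood are −yi/pi², which are negative, so the critical point is indeed a maximum.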
4. Related to Section An AR(1) model:

   a) Compare the estimated parameter values with those obtained from the arima() function.

   b) Modify the model in Equation (6) by setting x1 = a xn + e1 ("wrapping around") and see what
      happens to the pattern of zeros in the concentration matrix.

   c) Extend the AR(1) model to an AR(2) model ("wrapping around") and investigate this model
      along the same lines. Specifically, where are the conditional independencies (try at least
      n = 6)?
5. Related to Section Variance of the average of correlated data: It is illustrative to study such
   behaviours for other covariance functions. Replicate the calculations for the covariance matrix
   of the form

       V = vR = v [ 1  r  0  0 ]
                  [ r  1  r  0 ]
                  [ 0  r  1  r ] ,        (11)
                  [ 0  0  r  1 ]

   i.e., a special case of a Toeplitz matrix. How many independent variables, k, do the n
   correlated variables correspond to?
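For the tridiagonal R in (11), summing the entries of V gives Var(ȳ) = v(n + 2(n − 1)r)/n², and one natural answer to the question — equating Var(ȳ) with v/k, as for the average of k independent variables — is k = n²/(n + 2(n − 1)r). A quick numeric check (Python/NumPy, with made-up values of n, v, r):

```python
import numpy as np

n, v, r = 4, 1.0, 0.3

# Tridiagonal Toeplitz correlation matrix: 1 on the diagonal, r on the
# first off-diagonals, as in Equation (11).
R = np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))
V = v * R

ones = np.ones(n)
var_mean = ones @ V @ ones / n**2    # variance of the average of the n variables
k = v / var_mean                     # equivalent number of independent variables

print(var_mean, k)
```

For r > 0 this gives k < n, quantifying the information lost to the correlation.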
Discussion and future work

We have presented the caracas package and argued that the package extends the functionality of R
significantly with respect to symbolic mathematics. One practical virtue of caracas is that the
package integrates nicely with Rmarkdown, Allaire et al. [2021] (e.g. with the tex() functionality),
and thus supports the creation of scientific documents and teaching material. As for the usability
in practice, we await feedback from users.

Another related package mentioned in the introduction is Ryacas. This package has existed for many
years and is still of relevance. Ryacas probably has fewer features than caracas. On the other
hand, Ryacas does not require Python (it is compiled) and is faster for some computations (such as
matrix inversion). Finally, the Yacas language [Pinkus and Winitzki, 2002, Pinkus et al., 2016] is
extendable (see e.g. the vignette "User-defined yacas rules" in the Ryacas package).

One possible future development could be an R package which is designed without a view towards the
underlying engine (SymPy or Yacas) and which then draws more freely from both SymPy and Yacas. In
this connection we mention that there are additional resources on CRAN, such as calculus [Guidotti,
2022].

Lastly, with respect to freely available resources in a CAS context, we would like to draw
attention to WolframAlpha (see https://www.wolframalpha.com/), which provides an online service
for answering (mathematical) queries.
Acknowledgements

We would like to thank the R Consortium for financial support for creating the caracas package,
the users for pinpointing aspects that can be improved in caracas, and Ege Rubak (Aalborg
University, Denmark), Malte Bødkergaard Nielsen (Aalborg University, Denmark), and Poul Svante
Eriksen (Aalborg University, Denmark) for comments on this manuscript.
References

J. Allaire, Y. Xie, J. McPherson, J. Luraschi, K. Ushey, A. Atkins, H. Wickham, J. Cheng, W. Chang,
and R. Iannone. rmarkdown: Dynamic Documents for R, 2021. URL https://github.com/rstudio/rmarkdown. R package version 2.7.

M. M. Andersen and S. Højsgaard. Ryacas: A computer algebra system in R. Journal of Open Source
Software, 4(42), 2019. URL https://doi.org/10.21105/joss.01763.

M. M. Andersen and S. Højsgaard. caracas: Computer algebra in R. Journal of Open Source Software,
6(63):3438, 2021. doi: 10.21105/joss.03438. URL https://doi.org/10.21105/joss.03438.

E. Guidotti. calculus: High-Dimensional Numerical and Symbolic Calculus in R. Journal of
Statistical Software, 104(1):1–37, 2022. doi: 10.18637/jss.v104.i05. URL https://www.jstatsoft.org/index.php/jss/article/view/v104i05.

S. Højsgaard, D. Edwards, and S. Lauritzen. Graphical Models with R. Springer, New York, 2012.
doi: 10.1007/978-1-4614-2299-0. ISBN 978-1-4614-2298-3.

S. Højsgaard and U. Halekoh. doBy: Groupwise Statistics, LSmeans, Linear Estimates, Utilities,
2023. URL https://github.com/hojsgaard/doby. R package version 4.6.16.

F. Leisch. Sweave: Dynamic generation of statistical reports using literate data analysis. In
W. Härdle and B. Rönz, editors, Compstat, pages 575–580, Heidelberg, 2002. Physica-Verlag HD.

P. McCullagh and J. A. Nelder. Generalized Linear Models. Chapman & Hall/CRC Monographs on
Statistics and Applied Probability. Chapman & Hall/CRC, Philadelphia, PA, 2nd edition, Aug. 1989.
ISBN 9780412317606.

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov,
J. K. Moore, S. Singh, T. Rathnayake, S. Vig, B. E. Granger, R. P. Muller, F. Bonazzi, H. Gupta,
S. Vats, F. Johansson, F. Pedregosa, M. J. Curry, A. R. Terrel, v. Roučka, A. Saboo, I. Fernando,
S. Kulal, R. Cimrman, and A. Scopatz. SymPy: symbolic computing in Python. PeerJ Computer
Science, 3:e103, Jan. 2017. ISSN 2376-5992. doi: 10.7717/peerj-cs.103. URL https://doi.org/10.7717/peerj-cs.103.

J. Nocedal and S. J. Wright. Numerical Optimization. Springer New York, 2006. doi:
10.1007/978-0-387-40065-5. URL https://doi.org/10.1007/978-0-387-40065-5.

A. Pinkus, S. Winnitzky, and G. Mazur. Yacas - yet another computer algebra system. Technical
report, 2016. URL https://yacas.readthedocs.io/en/latest/.

A. Z. Pinkus and S. Winitzki. YACAS: A Do-It-Yourself Symbolic Algebra Environment. In
Proceedings of the Joint International Conferences on Artificial Intelligence, Automated
Reasoning, and Symbolic Computation, AISC '02/Calculemus '02, pages 332–336, London, UK, 2002.
Springer-Verlag. ISBN 3-540-43865-3. doi: 10.1007/3-540-45470-5_29. URL http://doi.org/10.1007/3-540-45470-5_29.

R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for
Statistical Computing, Vienna, Austria, 2023. URL http://www.R-project.org/. ISBN 3-900051-07-0.

R. H. Shumway and D. S. Stoffer. Time Series Analysis and Its Applications. Springer, fourth
edition, 2016.

United Nations General Assembly. Sustainable development goals, 2015. https://sdgs.un.org/.

K. Ushey, J. Allaire, and Y. Tang. reticulate: Interface to 'Python', 2020. URL https://CRAN.R-project.org/package=reticulate. R package version 1.18.

H. Wickham. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York, 2016. ISBN
978-3-319-24277-4. URL https://ggplot2.tidyverse.org.

Y. Xie, J. Allaire, and G. Grolemund. R Markdown: The Definitive Guide. Chapman and Hall/CRC,
Boca Raton, Florida, 2018. URL https://bookdown.org/yihui/rmarkdown. ISBN 9781138359338.

Y. Xie, C. Dervieux, and E. Riederer. R Markdown Cookbook. Chapman and Hall/CRC, Boca Raton,
Florida, 2020. URL https://bookdown.org/yihui/rmarkdown-cookbook. ISBN 9780367563837.
Prepared for submission to JINST
Proc. of the 23rd International Workshop on Radiation Imaging Detectors

J-PET detection modules based on plastic scintillators for performing studies with positron and
positronium beams

S. Sharma,1,2,3,* J. Baran,1,2,3 R.S. Brusa,4,5 R. Caravita,5 N. Chug,1,2,3 A. Coussat,1,2,3
C. Curceanu,6 E. Czerwiński,1,2,3 M. Dadgar,1,2,3 K. Dulski,1,2,3 K. Eliyan,1,2,3 A. Gajos,1,2,3
B.C. Hiesmayr,7 K. Kacprzak,1,2,3 Ł. Kapłon,1,2,3 K. Klimaszewski,8 P. Konieczka,9 G. Korcyl,1,2
T. Kozik,1 W. Krzemień,9 D. Kumar,1,2,3 S. Mariazzi,4,5 S. Niedźwiecki,1,2,3 L. Panasa,4,5
S. Parzych,1,2,3 L. Povolo,4,5 E. Perez del Rio,1,2,3 L. Raczyński,9 Shivani,1,2,3 R.Y. Shopa,9
M. Skurzok,1,2,3 E.Ł. Stępień,1,2,3 F. Tayefi,1,2,3 K. Tayefi,1,2,3 W. Wiślicki,9 P. Moskal1,2,3

1 Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Krakow, Poland
2 Total-Body Jagiellonian-PET Laboratory, Jagiellonian University, Kraków, Poland
3 Center for Theranostics, Jagiellonian University, Cracow, Poland
4 Department of Physics, University of Trento, via Sommarive 14, 38123 Povo, Trento, Italy
5 TIFPA/INFN, via Sommarive 14, 38123 Povo, Trento, Italy
6 INFN, Laboratori Nazionali di Frascati, Frascati, Italy
7 Faculty of Physics, University of Vienna, Vienna, Austria
8 Department of Complex Systems, National Centre for Nuclear Research, Otwock-Świerk, Poland
9 High Energy Physics Division, National Centre for Nuclear Research, Otwock-Świerk, Poland

* Corresponding author. E-mail: [email protected]

Abstract: The J-PET detector, which consists of inexpensive plastic scintillators, has
demonstrated its potential in the study of fundamental physics. In recent years, a prototype with
192 plastic scintillators arranged in 3 layers has been optimized for the study of positronium
decays. This allows performing precision tests of discrete symmetries (C, P, T) in the decays of
positronium atoms. Moreover, thanks to the possibility of measuring the polarization direction of
a photon based on Compton scattering, the predicted entanglement between the linear polarizations
of annihilation photons in positronium decays can also be studied. Recently, a new J-PET prototype
was commissioned, based on a modular design of detection units. Each module consists of 13 plastic
scintillators and can be used as a stand-alone, compact and portable detection unit. In this paper,
the main features of the J-PET detector, the modular prototype and their applications for possible
studies with positron and positronium beams are discussed. Preliminary results of the first test
experiment, performed with two detection units at the continuous positron beam recently developed
at the Antimatter Laboratory (AML) of Trento, are also reported.

Keywords: J-PET, modular J-PET, positron and positronium beam, entanglement, inertial sensing

arXiv:2301.02832v1 [physics.ins-det] 7 Jan 2023
Contents

1  Introduction
2  J-PET tomograph as multi-photon detector
   2.1  Fundamental studies in decays of Ps using J-PET
3  Modular J-PET and its possible applications with positron and positronium beams
4  Performance study of two J-PET detection units with positron beam
5  Summary and perspectives

1  Introduction
The Jagiellonian Positron Emission Tomograph (J-PET) is the first tomograph based on the idea of
using plastic scintillators instead of the crystals currently used in commercial tomographs [1, 2].
Plastic scintillators have excellent time resolution and are therefore good candidates for building
TOF-PET tomographs [3]. The novelty that distinguishes J-PET from other tomographs is its
potential to conduct studies of fundamental physics problems [4] and positronium imaging [5–7].
Therefore, it can be used in a dual role, both as a PET scanner and as a multimodule detector. The
J-PET detector is optimised for studying the decay of positronium atoms (Ps, the bound state of an
electron (e−) and a positron (e+)) [8, 9]. In recent years, data have been collected with the
3-layer prototype of J-PET [10], which consists of 192 detection modules, demonstrating its
applicability not only in medical physics [6, 11, 12] but also in the study of fundamental physics
[13]. Following the J-PET technology, a new prototype was recently built based on a modular design
consisting of 24 individual detection units [14]. Thanks to the modular design, the detection
units can be conveniently transported to other research facilities to perform experiments. At the
University of Trento, a continuous positron beam has been put into operation in the Antimatter
Laboratory (AML). With the know-how to fabricate efficient positron/positronium converters
[15, 16] and to manipulate positronium atoms into a metastable state with increased lifetime [17],
the generation of Ps beams is envisaged [18]. Two detection modules with a supporting data
acquisition chain have recently been moved to the AML to perform studies with positron and
positronium beams, to investigate fundamental physics not yet explored. The outline of the paper
is as follows. In the next section, an introduction to the 3-layer prototype of J-PET is given,
followed by the studies that are currently being performed. Then, the modular detection units are
briefly described and their possible applications with e+ and Ps beams are discussed. Finally, the
preliminary results of the first experiment, using a continuous positron beam to reconstruct the
beam spot with two detection modules, are reported.
2  J-PET tomograph as multi-photon detector

The 3-layer prototype was developed to test the concept of constructing a cost-effective
total-body PET from plastic scintillators [10, 14, 19]. To obtain a longer axial field of view
(AFOV), strips of 500×19×7 mm3 plastic scintillator are used. In the design of J-PET, 192 plastic
scintillators (EJ-230) are arranged in 3 concentric cylinders with radii 42.5 cm, 46.75 cm, and
57.5 cm, respectively. Signals from each scintillator are read out with an R9800 Hamamatsu
photomultiplier on each side. Data are measured and stored in triggerless mode, which can handle
data streams of up to 8 Gbps [20, 21]. To take advantage of the excellent time resolution, Time
Over Threshold (TOT) is measured instead of the collected charge. The energy deposited in a given
photon interaction within the scintillator is estimated based on the established relationship
between TOT and energy deposition [22]. A special framework based on advanced C++ routines and
ROOT (the data analysis framework from CERN) was also developed to analyse the measured data [23].
The hit position and hit time of a photon interaction (hit) within the scintillator are calculated
from the light signals read at both ends of the scintillator [24]. The hit time is calculated as
the average of the arrival times of the light signals at the two ends, while the hit position is
calculated as the difference of the arrival times multiplied by half of the effective speed of
light in the scintillator [25, 26]. Fig. 1 shows pictures of the J-PET detector (left), the
installation of the hollow cylindrical chamber (centre), and the small aluminium chamber (right).

Figure 1. (A) An image of J-PET in the laboratory; the 3 layers (blue, yellow, red) represent the
angular arrangement of the scintillator strips and their cross-section in the plane. (B) The
installation of the hollow cylindrical chamber in the centre of J-PET. (C) The small aluminium
chamber.

Before turning to the modular prototype of J-PET, the next section briefly discusses the physics
aspects of the J-PET detector in studying the decays of positronium atoms (Ps) [4].
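The two-end readout reconstruction described above can be sketched as follows (illustrative Python, not the J-PET framework code; the effective-speed value and arrival times are made-up placeholders, not the calibrated values of [25, 26]):

```python
# Sketch of hit reconstruction from a two-end scintillator readout.
# t_a, t_b: light-signal arrival times [ns] at ends A and B of the strip;
# v_eff: effective speed of light in the strip [cm/ns] (placeholder value).

def reconstruct_hit(t_a, t_b, v_eff=12.0):
    hit_time = (t_a + t_b) / 2.0          # average of the two arrival times
    hit_pos = (t_b - t_a) * v_eff / 2.0   # time difference times half the
    return hit_time, hit_pos              # effective speed; 0 = strip centre

# A hit 10 cm from the centre (closer to end A) arrives earlier at A:
t, z = reconstruct_hit(t_a=1.0, t_b=1.0 + 20.0 / 12.0)
```

With this sign convention, positive positions correspond to hits closer to end A; the convention is arbitrary and only the calibration fixes it in practice.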
2.1  Fundamental studies in decays of Ps using J-PET

Ps, being a purely leptonic system of an electron and a positron, is an excellent probe to test the
bound-state sector of quantum electrodynamics [27]. Ps can be formed in one of two ground states:
the singlet state (1S0: para-positronium (p-Ps)) with a lifetime of 125 ps, or the triplet state
(3S1: ortho-positronium (o-Ps)) with a lifetime of 142 ns. Due to charge-conjugation symmetry,
p-Ps decays into an even number (2nγ; n = 1, 2, ...) of photons, while o-Ps decays into an odd
number ((2n+1)γ; n = 1, 2, ...), predominantly 2γ and 3γ, respectively. In studying the decays of
Ps atoms, several fundamental problems can be investigated, e.g., quantum entanglement and the
violation of discrete symmetries. We briefly discuss here the studies that are currently being
carried out with the J-PET detector.

Photon polarization and quantum entanglement: The measurement of entanglement of the annihilation
photons emitted in Ps decays is one of the important topics that can be studied with the J-PET
detector. It is predicted that the linear polarizations of the back-to-back photons emitted from
the singlet state (p-Ps) are orthogonally correlated [28, 29]. Measurement of the correlation
between the polarizations of the annihilation photons can be used to observe the entangled state
[30–33], a subject of fundamental importance that has direct application in medical imaging
[34, 35]. There are no mechanical polarizers to measure the polarization of 511 keV photons;
however, Compton scattering can be used as a polarizer for such measurements [36]. J-PET is
capable of registering both annihilation photons before and after their scattering, with an angular
resolution of about 1° [37]. Based on the fact that in Compton scattering a photon is most likely
scattered at right angles to the direction of linear polarization of the incident photon, the
polarization of the photon can be defined as ε = k × k′ [4, 38]. By measuring the polarization of
each photon, one can measure the azimuthal correlation between their polarizations and test the
theoretically predicted claims of entanglement [30–33]. Entanglement studies can be extended to
the o-Ps → 3γ case [33].
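The Compton-based polarization estimate ε = k × k′, and the azimuthal correlation built from it, can be sketched numerically (illustrative Python; the momentum and scattering directions below are made-up examples, not measured data):

```python
import numpy as np

def polarization_axis(k, k_prime):
    """Estimate the linear-polarization direction of a photon with momentum
    direction k that Compton-scattered into direction k_prime, via eps = k x k'."""
    eps = np.cross(k, k_prime)
    return eps / np.linalg.norm(eps)

# Back-to-back 511 keV photons along +z and -z (unit momentum directions):
k1, k2 = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])

# Made-up scattering directions for the two photons:
e1 = polarization_axis(k1, np.array([0.3, 0.1, 0.9]))
e2 = polarization_axis(k2, np.array([-0.1, 0.4, -0.9]))

# Azimuthal angle between the two estimated polarization planes:
delta_phi = np.degrees(np.arccos(np.clip(abs(e1 @ e2), 0.0, 1.0)))
```

Each estimated axis is perpendicular to its photon's momentum, as a polarization direction must be; accumulating delta_phi over many events yields the azimuthal correlation distribution to be compared with the entanglement predictions.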
Symmetry violation - tests of discrete symmetries in decays of Ps atoms: Ps exhibits interesting
properties that make it an exotic atom for performing tests of discrete symmetries. For example,
it is an eigenstate of the parity operator (P), since it is bound by a central potential. Moreover,
Ps is a system of a particle and its antiparticle that remains symmetric under their exchange, and
thus it is also an eigenstate of the charge-conjugation operator C. Therefore, positronium is also
an eigenstate of the CP operator. In conjunction with the CPT theorem, one can test T-violation
effects separately or in combination as a CPT test. Therefore, Ps can serve as an excellent
laboratory to perform tests of C, P, CP and CPT violation. In 1988, studying o-Ps → 3γ decays,
Bernreuther and coworkers suggested that, to test the discrete symmetries, odd-symmetry operators
can be constructed from the spin of the o-Ps atom and the momenta of the annihilation photons [39].
A non-vanishing expectation value of such an operator would be a confirmation of symmetry
violation. Table 1 lists the operators for the discrete-symmetry tests [4]. A minus sign marks the
odd-symmetric entries, which are sensitive to violation effects of the corresponding symmetry. In
addition, with the ability of the J-PET detector to measure the polarization direction of photons
[38], additional operators are constructed utilizing the photon polarization direction; these are
unique and currently possible only with J-PET [4].
Table 1. Odd-symmetry operators constructed from the photon momenta (k_i) and polarizations
(ε_i), and the spin of o-Ps (S) [4].

  Operator                       C    P    T    CP   CPT
  S · k1                         +    −    +    −    −
  k1 · ε2                        +    −    −    −    +
  S · (k1 × k2)                  +    +    −    +    −
  S · ε1                         +    +    −    +    −
  (S · k1)(S · (k1 × k2))        +    −    −    −    +
  S · (k2 × ε1)                  +    −    +    −    −
In Table 1, one can see that none of the operators is sensitive to a C-symmetry test. However,
tests of C-symmetry violation can be performed by examining C-forbidden Ps decays (e.g.
p-Ps → 3γ; o-Ps → 2γ, 4γ, ...). In the symmetry studies with J-PET, the estimate of the o-Ps spin
is based on the intrinsic polarization of the positrons emitted in β+ decays [13]. These are
longitudinally polarized due to parity violation, with a degree of polarization proportional to the
velocity of the positrons. Different types of chambers can be used depending on the specifics of
the operators. For the operators involving the spin of o-Ps, large hollow chambers (see Fig. 1 (B))
are used, the inner walls of which are coated with porous materials to increase the probability of
Ps formation. The annihilation points of the o-Ps in the chamber wall can be reconstructed using
the trilateration method [40], which gives the direction of the positrons with respect to the known
position of the source and thus allows the o-Ps spin to be estimated [4, 13]. Small chambers
(Fig. 1 (C)) are used for the other operators, in particular those involving photon polarization.
3  Modular J-PET and its possible applications with positron and positronium beams

The modular J-PET is a new prototype which consists of 24 independent detection modules. Each
module consists of 13 plastic scintillators of size 500×24×6 mm3, read out at each end by a SiPM
matrix, with the front-end electronics housed in the module itself. Thanks to their modular design
and FPGA-based compact data acquisition, the modules can be easily transported and used as
potential detectors in different laboratories. In this context, we have explored the possibility
of using the modular detection units for studies with positron and positronium beams at the AML
in Trento [41]. A continuous positron beam was recently commissioned at the AML. In the future,
the continuous beam will be injected into a Surko trap [42], where positrons will be trapped,
stored for fractions of a second, and then bunched to form pulses containing up to 10⁴ positrons.
Implantation of the positron pulses in efficient positron/positronium converters [15, 43, 44]
allows producing dense Ps clouds [45, 46]. In particular, in recent years the possibility to
populate the long-lived 2³S state of positronium via spontaneous [17] and stimulated [47] decay
from the 3³P level (previously reached via 1³S → 3³P laser excitation) has been demonstrated [48].
A monochromatic pulsed 2³S positronium beam with low angular divergence can then be produced by
placing an iris diaphragm in front of the target [18]. By employing properly polarized laser
pulses, the production of Ps in the 2³S state with fully controlled quantum numbers looks feasible.
In studying the annihilation of Ps into 3 photons, an interesting fundamental problem, the
experimental measurement of the quantum entanglement of the polarizations of the annihilation
photons, could be addressed for the first time. Theoretical studies predict that the type of
entanglement of the 3 photons depends on the quantum numbers of the annihilating positronium [33].
Ps in the 2³S state is also of interest for direct measurements of the gravitational interaction
on antimatter [49, 50]. Indeed, Ps excited to a long-lived state [51], together with antihydrogen
[52–54] and muonium [55], has been proposed as a probe for tests of the weak equivalence principle
on antimatter. A possible experimental scheme consists of a Ps beam in the metastable 2³S state
crossing a deflectometer or an interferometer to form a fringe pattern [49, 50]. In the presence
of an external force, the fringe pattern shows a displacement that is proportional to the
acceleration experienced by the Ps [50]. In order to detect such a fringe-pattern shift, the Ps
atom distribution on a plane could be scanned by using a slit or a material grating [50]. The Ps
annihilating on the obstacles and the ones crossing them can then be counted as a function of the
position of the slit/grating, and the Ps spatial distribution on the plane can be reconstructed.
A detector able to resolve the annihilation points of Ps along the beam direction (to distinguish
the annihilations on the obstacles from the ones occurring further forward) is needed. To verify
the applicability of the J-PET detection units for this purpose, two such units with complete
readout electronics were transported to the AML. A test run was performed to measure the spatial
resolution that can be achieved with only 2 detection units and the e+ beam. The details and first
results of the test can be found in the next section.
4  Performance study of two J-PET detection units with positron beam

To investigate the performance of the J-PET modules, 511 keV photons emitted by the annihilations
of e+ implanted with the AML beam into a stainless-steel flange have been recorded. Two modules
were placed 20 cm apart, around the e+ beam spot (red dot), as shown in the left panel of Fig. 2.
Binary data registered by the FPGA cards were processed using the framework software developed by
the J-PET collaboration [23]. Hit times and hit positions are reconstructed as described above.
Signals from each SiPM are sampled at two thresholds in the voltage domain (30 mV and 70 mV). TOT,
as a measure of the energy deposited by a photon interacting in a scintillator (hit), is calculated
as the sum of the TOTs at both thresholds of the connected SiPMs. In the right panel of Fig. 2, the
upper left inset shows the measured TOT spectrum.

Figure 2. The photo of the experimental setup (left). The two modules are 20 cm apart and centered
around the e+ annihilation points. The X, Y, Z directional frame (width (24 mm), thickness (6 mm),
length (500 mm)) of the modules is such that the Y-axis is along the direction of the beam, while
the X- and Z-directions are perpendicular and parallel to the plane of the J-PET module,
respectively. On the right are the preliminary measured TOT distribution (upper left inset) and the
3D (X, Y, Z) projections of the reconstructed vertices.

Since TOT is a measure of the energy deposition, higher TOT values are expected with increasing
energy deposition [22]. The structure with two peaks corresponds to the energy depositions by
511 keV photons and their scattered photons. The first peak results from the interactions of the
scattered photons, while the second enhancement, indicated in orange, represents the contribution
of the 511 keV photons. In the analysis of events with
2 hits by 511 keV photons, the annihilation points of e+ are reconstructed. For the selection of
511 keV interactions, the first criterion is based on the measured TOT values of both hits: only
those events for which both TOT values lie in the shaded (orange) region were selected. The second
criterion is based on their angular correlation, i.e., the photons that caused the 2 hits are
considered only if emitted in back-to-back directions. After the selection of the 511 keV photons,
the annihilation vertices are reconstructed. The projections of the reconstructed vertices on each
axis are shown in the upper right and lower insets of Fig. 2. Preliminary results of the analysis
performed over a set of data measured with a stainless-steel flange are presented. The obtained
spatial resolutions in the X, Y, and Z coordinates are 1.01 cm, 0.26 cm, and 1.33 cm, respectively
(right panel in Fig. 2). These results are very promising. The ability to resolve the annihilation
points along the beam (σ(Y) = 0.26 cm) justifies the use of J-PET modules for the inertial-sensing
measurements on 2³S Ps described in [50]. Analysis of the complete measured data with both flanges
is in progress. A detailed analysis, including the procedure for calibration of the detection
modules, a description of the analysis algorithm, and final results in terms of achievable spatial
resolutions and reconstruction performance of the detectors, will be reported in a separate
article.
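The two selection criteria and the vertex reconstruction described above can be sketched as follows (illustrative Python, not the J-PET analysis code; the TOT window, angular tolerance, and event values are made-up placeholders, and the midpoint estimate omits the TOF correction used in practice):

```python
import numpy as np

def select_and_reconstruct(hit1, hit2, tot1, tot2,
                           tot_window=(3.0e3, 5.0e3), back_to_back_tol_deg=5.0):
    """Keep a 2-hit event only if both TOTs fall in the 511 keV window and
    the hits are back-to-back with respect to the origin; return the
    annihilation vertex as the midpoint of the line of response, else None."""
    lo, hi = tot_window
    if not (lo <= tot1 <= hi and lo <= tot2 <= hi):
        return None                       # criterion 1: TOT window
    u1 = hit1 / np.linalg.norm(hit1)
    u2 = hit2 / np.linalg.norm(hit2)
    angle = np.degrees(np.arccos(np.clip(u1 @ u2, -1.0, 1.0)))
    if abs(angle - 180.0) > back_to_back_tol_deg:
        return None                       # criterion 2: back-to-back
    return (hit1 + hit2) / 2.0            # midpoint estimate (no TOF correction)

# Made-up event: two opposite hits [cm] with TOTs inside the window.
vertex = select_and_reconstruct(np.array([20.0, 0.0, 3.0]),
                                np.array([-20.0, 0.0, -3.0]),
                                tot1=4.0e3, tot2=4.2e3)
```

In the real analysis the time difference between the two hits shifts the vertex along the line of response, which is what yields the quoted sub-centimetre resolution along the beam axis.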
5  Summary and perspectives

In this article, we have discussed several research problems that can be studied with the modular
detection units of J-PET at the positron beam facility of the AML in Trento. The new experimental
system at the AML can deliver a velocity-moderated, continuous e+ beam. In the next phase, the
generation of a monochromatic 2³S Ps beam will be developed. In addition, it is expected that the
Ps beam can be produced in a defined quantum state. With the availability of the long-lived 2³S Ps
beam, it is planned to use atomic interferometry for inertial sensing, to measure the gravitational
acceleration on Ps. Moreover, the ability to produce Ps in a defined quantum state will enrich
studies of entanglement in Ps decays [33]. These studies will require the registration of the
multiple photons emitted in Ps decays with good angular and spatial resolution. Modular units
based on the J-PET technology can be used as potential detectors to perform such studies. A first
test with two such modules has already been performed. Preliminary results show that the
resolution in the spatial coordinates is promising for performing the planned studies. The modular
detector units used were developed primarily for tomographic purposes. In the future, a new design
with a shorter scintillator length could be considered, to meet the specific beam conditions for
performing studies with the positronium beam at the AML.
364
Acknowledgments

The authors gratefully acknowledge support from the Foundation for Polish Science through
the program TEAM/POIR.04.04.00-00-4204/17; the National Science Centre of Poland through
grant no. 2019/35/B/ST2/03562; the Ministry of Education and Science through grant no.
SPUB/SP/490528/2021; the SciMat and qLIFE Priority Research Areas budget under the
program Excellence Initiative - Research University at the Jagiellonian University; and
Jagiellonian University project no. CRP/0641.221.2020. The authors also gratefully acknowledge
the support of Q@TN, the joint laboratory of the University of Trento, FBK-Fondazione Bruno
Kessler, INFN-National Institute of Nuclear Physics, and CNR-National Research Council, as
well as support from the European Union's Horizon 2020 research and innovation programme
under the Marie Sklodowska-Curie Grant Agreement No. 754496 - FELLINI and the Canaletto
project of the Executive Programme for Scientific and Technological Cooperation between the
Italian Republic and the Republic of Poland 2019-2021.
References

[1] P. Moskal, P. Salabura, M. Silarski et al., Novel detector systems for the Positron Emission Tomography, Bio-Algorithms and Med-Systems 7(2) (2011) 73
[2] P. Moskal, T. Bednarski, P. Białas et al., Strip-PET: a novel detector concept for the TOF-PET scanner, Nuclear Med. Rev. 15C (2012) 68
[3] P. Moskal, T. Bednarski, P. Białas et al., TOF-PET detector concept based on organic scintillators, Nuclear Med. Rev. 15C (2012) 81
[4] P. Moskal, D. Alfs, T. Bednarski et al., Potential of the J-PET Detector for Studies of Discrete Symmetries in Decays of Positronium Atom - a Purely Leptonic System, Acta Phys. Pol. B 47 (2016) 509
[5] P. Moskal, D. Kisielewska, C. Curceanu et al., Feasibility study of the positronium imaging with the J-PET tomograph, Phys. Med. Biol. 64 (2019) 055017
[6] P. Moskal, K. Dulski, N. Chug et al., Positronium imaging with the novel multiphoton PET scanner, Science Advances 7 (2021) eabh4394
[7] P. Moskal, E.L. Stepien, Positronium as a biomarker of hypoxia, Bio-Algorithms and Med-Systems 17 (2021) 311
[8] P. Moskal, D. Kisielewska, R.Y. Shopa et al., Performance assessment of the 2γ positronium imaging with the total-body PET, EJNMMI Physics 7 (2020) 44
[9] K. Dulski, S.D. Bass, J. Chhokar et al., The J-PET detector - a tool for precision studies of ortho-positronium decays, Nucl. Instr. and Meth. A 1008 (2021) 165452
[10] S. Niedzwiecki, P. Bialas, C. Curceanu et al., J-PET: a new technology for the whole-body PET imaging, Acta Phys. Pol. B 48(10) (2017) 1567
[11] L. Raczynski, W. Wislicki, K. Klimaszewski et al., 3D TOF-PET image reconstruction using total variation regularization, Physica Medica 80 (2020) 230
[12] R. Shopa, K. Klimaszewski, P. Kopka et al., Optimisation of the event-based TOF filtered back-projection for online imaging in total-body J-PET, Medical Image Analysis 73 (2021) 102199
[13] P. Moskal, A. Gajos, M. Mohammed et al., Testing CPT symmetry in ortho-positronium decays with positronium annihilation tomography, Nature Communications 12 (2021) 5658
[14] P. Moskal, P. Kowalski, R.Y. Shopa et al., Simulating NEMA characteristics of the modular total-body J-PET scanner - an economic total-body PET from plastic scintillators, Phys. Med. Biol. 66 (2021) 175015
[15] S. Mariazzi, P. Bettotti, S. Larcheri et al., High positronium yield and emission into the vacuum from oxidized tunable nanochannels in silicon, Phys. Rev. B 81 (2010) 235418
[16] S. Mariazzi, R. Caravita, C. Zimmer et al., High-yield thermalized positronium at room temperature emitted by morphologically tuned nanochanneled silicon targets, J. Phys. B: At. Mol. Opt. Phys. 54 (2021) 085004
[17] C. Amsler, M. Antonello, A. Belov et al., Velocity-selected production of 2³S metastable positronium, Phys. Rev. A 99 (2019) 033405
[18] S. Mariazzi, R. Caravita, A. Vespertini et al., Techniques for Production and Detection of 2³S Positronium, Acta Phys. Pol. A 137 (2020) 91
[19] P. Moskal, O. Rundel, D. Alfs et al., Time resolution of the plastic scintillator strips with matrix photomultiplier readout for J-PET tomograph, Phys. Med. Biol. 61 (2016) 2025
[20] M. Palka, P. Strzempek, G. Korcyl et al., Multichannel FPGA based MVT system for high precision time (20 ps RMS) and charge measurement, JINST 14 (2019) P08001
[21] G. Korcyl, P. Bialas, C. Curceanu et al., Evaluation of Single-Chip, Real-Time Tomographic Data Processing on FPGA SoC devices, IEEE Transactions on Medical Imaging 37 (2018) 2526
[22] S. Sharma, J. Chhokar, C. Curceanu et al., Estimating relationship between the time over threshold and energy loss by photons in plastic scintillators used in the J-PET scanner, EJNMMI Physics 7 (2020) 39
[23] W. Krzemien, A. Gajos, K. Kacprzak et al., J-PET Framework: Software platform for PET tomography data reconstruction and analysis, SoftwareX 11 (2020) 100487
[24] P. Moskal, Sz. Niedzwiecki, T. Bednarski et al., Test of a single module of the J-PET scanner based on plastic scintillators, Nucl. Instr. and Meth. A 764 (2014) 317
[25] P. Moskal, N. Zon, T. Bednarski et al., A novel method for the line-of-response and time-of-flight reconstruction in TOF-PET detectors based on a library of synchronized model signals, Nucl. Instr. and Meth. A 775 (2015) 54
[26] L. Raczynski, W. Wislicki, W. Krzemień et al., Calculation of the time resolution of the J-PET tomograph using kernel density estimation, Phys. Med. Biol. 62 (2017) 5076
[27] S.D. Bass, QED and fundamental symmetries in positronium decays, Acta Phys. Pol. B 50 (2019) 1319
[28] J.A. Wheeler, Polyelectrons, Annals of the New York Academy of Sciences 48 (1946) 219
[29] M.L.H. Pryce and J.C. Ward, Angular correlation effects with annihilation radiation, Nature 160 (1947) 435
[30] D. Bohm and Y. Aharonov, Discussion of experimental proof for the paradox of Einstein, Rosen, and Podolsky, Phys. Rev. 108 (1957) 1070
[31] P. Caradonna, D. Reutens, T. Takahashi et al., Probing entanglement in Compton interactions, J. Phys. Commun. 3 (2019) 105005
[32] B.C. Hiesmayr and P. Moskal, Genuine Multipartite Entanglement in the 3-Photon Decay of Positronium, Sci. Rep. 7 (2017) 15349
[33] B. Hiesmayr and P. Moskal, Witnessing entanglement in Compton scattering processes via mutually unbiased bases, Sci. Rep. 9 (2019) 8166
[34] M. Toghyani, J.E. Gillam, A.L. McNamara et al., Polarization-based coincidence event discrimination: an in silico study towards a feasible scheme for Compton-PET, Phys. Med. Biol. 61 (2016) 5803
[35] P. Moskal, Positronium and Quantum Entanglement Imaging: A New Trend in Positron Emission Tomography, IEEE Nucl. Sci. Symp. and Medical Imag. Conf. (NSS/MIC) (2021) 1-3
[36] O. Klein, Y. Nishina, Z. Physik 52 (1929) 853
[37] D. Kaminska, A. Gajos, E. Czerwinski et al., A feasibility study of ortho-positronium decays measurement with the J-PET scanner based on plastic scintillators, Eur. Phys. J. C 76 (2016) 445
[38] P. Moskal, N. Krawczyk, B.C. Hiesmayr et al., Feasibility studies of the polarization of photons beyond the optical wavelength regime with the J-PET detector, Eur. Phys. J. C 78 (2018) 970
[39] W. Bernreuther, U. Low, J.P. Ma, O. Nachtmann, How to test CP, T, and CPT invariance in the three photon decay of polarized ³S₁ positronium, Z. Phys. C - Particles and Fields 41 (1988) 143
[40] A. Gajos, D. Kaminska, E. Czerwinski et al., Trilateration-based reconstruction of ortho-positronium decays into three photons with the J-PET detector, Nucl. Instr. and Meth. A 819 (2016) 54
[41] L. Povolo, S. Mariazzi, R.S. Brusa, in preparation
[42] R. Danielson, D.H.E. Dubin, R.G. Greaves et al., Plasma and trap-based techniques for science with positrons, Rev. Mod. Phys. 87 (2015) 247
[43] L. Liszkay, F. Guillemot, C. Corbel et al., Positron annihilation in latex-templated macroporous silica films: pore size and ortho-positronium escape, New J. Phys. 14 (2012) 065009
[44] S. Mariazzi, R. Caravita, C. Zimmer et al., High-yield thermalized positronium at room temperature emitted by morphologically tuned nanochanneled silicon targets, J. Phys. B: At. Mol. Opt. Phys. 54 (2021) 085004
[45] D.B. Cassidy and S.H.M. Deng, Accumulator for the production of intense positron pulses, Rev. Sci. Instrum. 77 (2006) 073106
[46] S. Aghion, C. Amsler, A. Ariga et al., Positron bunching and electrostatic transport system for the production and emission of dense positronium clouds into vacuum, Nucl. Instr. and Meth. B 362 (2015) 86
[47] M. Antonello, A. Belov, G. Bonomi et al., Efficient 2³S positronium production by stimulated decay from the 3³P level, Phys. Rev. A 100 (2019) 063414
[48] S. Aghion et al., Laser excitation of the n=3 level of positronium for antihydrogen production, Phys. Rev. A 94 (2016) 012507
[49] M.K. Oberthaler, Anti-matter wave interferometry with positronium, Nucl. Instr. and Meth. B 192 (2002) 129
[50] S. Mariazzi, R. Caravita, M. Doser et al., Toward inertial sensing with a 2³S positronium beam, Eur. Phys. J. D 74 (2020) 79
[51] A.P. Mills and M. Leventhal, Can we measure the gravitational free fall of cold Rydberg state positronium?, Nucl. Instr. and Meth. B 192 (2002) 102
[52] C. Amsler, M. Antonello, A. Belov et al., Pulsed production of antihydrogen, Commun. Phys. 4 (2021) 19
[53] ALPHA collaboration and A.E. Charman et al., Description and first application of a new technique to measure the gravitational mass of antihydrogen, Nat. Commun. 4 (2013) 1785
[54] P. Perez, D. Banerjee, F. Biraben et al., The GBAR antimatter gravity experiment, Hyperfine Interact. 233 (2015) 21
[55] A. Antognini, D.M. Kaplan, K. Kirch et al., Studying Antimatter Gravity with Muonium, Atoms 6 (2018) 17
19E0T4oBgHgl3EQf_wLw/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
1dE1T4oBgHgl3EQf5AXg/content/tmp_files/2301.03508v1.pdf.txt ADDED
@@ -0,0 +1,2014 @@
 
Nonlinear THz Control of the Lead Halide Perovskite Lattice

Maximilian Frenzel1,*, Marie Cherasse1,2,*, Joanna M. Urban1, Feifan Wang3,‡, Bo Xiang3,
Leona Nest1, Lucas Huber3,§, Luca Perfetti2, Martin Wolf1, Tobias Kampfrath1,4, X.-Y. Zhu3,
Sebastian F. Maehrlein1,†

1 Fritz Haber Institute of the Max Planck Society, Department of Physical Chemistry, Berlin, Germany
2 LSI, CEA/DRF/IRAMIS, CNRS, Ecole Polytechnique, Institut Polytechnique de Paris, Palaiseau, France
3 Columbia University, Department of Chemistry, New York City, USA
4 Freie Universität Berlin, Berlin, Germany
* These authors contributed equally
‡ Present address: Department of Materials, ETH Zurich, 8093 Zürich, Switzerland
§ Present address: Sensirion AG, Staefa, Switzerland
† Corresponding author. Email: [email protected]
Abstract

Lead halide perovskites (LHPs) have emerged as an excellent class of semiconductors for
next-generation solar cells and optoelectronic devices. Tailoring their physical properties by
fine-tuning the lattice structure has been explored in these materials via chemical composition
or morphology. Nevertheless, its dynamic counterpart, phonon-driven ultrafast material
control, as contemporarily harnessed for oxide perovskites, has not been established yet. Here
we employ intense THz electric fields to obtain direct lattice control via nonlinear excitation
of coherent octahedral twist modes in hybrid CH3NH3PbBr3 and all-inorganic CsPbBr3
perovskites. These Raman-active phonons at 0.9-1.3 THz are found to govern the ultrafast
THz-induced Kerr effect in the low-temperature orthorhombic phase and thus dominate the
phonon-modulated polarizability, with potential implications for dynamic charge carrier
screening beyond the Fröhlich polaron. Our work opens the door to selective control of the
LHP vibrational degrees of freedom governing phase transitions and dynamic disorder.
Introduction

During the last decade, lead halide perovskites (LHPs) have emerged as promising
semiconductors for efficient solar cells, light-emitting diodes, and other optoelectronic devices
(1-3). Key prerequisites for the high LHP device efficiencies are the long charge carrier
diffusion lengths and lifetimes (4, 5), often explained by the unusual defect physics (6, 7)
and/or dynamic charge carrier screening (8, 9). The latter relies on delicate electron-phonon
coupling, established by the dominant role of the static structure and dynamics of the
lead-halide framework (10, 11). However, the exact mechanisms of the carrier-lattice
interaction in the highly polarizable and anharmonic LHP lattices remain debated (12, 13). The
sensitivity of the physical properties to structural distortions is a common feature of the
extensive family of perovskites. In particular, for oxide perovskites, the control of specific
lattice modes has enabled ultrafast material control and nonlinear phononics (14, 15).
Successful examples include, among others, light-induced superconductivity (16),
magnetization switching (17), access to hidden quasi-equilibrium spin states (18),
ferroelectricity (19, 20), and insulator-metal transitions (21) in perovskite or similar garnet
structures.
The crystal structure of LHPs features a large A-site cation surrounded by PbX6 octahedra
consisting of lead (Pb) and halide (X) ions in the ABX3 crystal structure (see Fig. 1A). The
electronic band structure is mainly determined by the identities of the metal and halide but is
also highly sensitive to the Pb-X-Pb bond angle, which can be controlled through the steric
hindrance of the A-cation (22). Changing the Pb-X-Pb bond angle is equivalent to tilting the
PbX6 octahedra, which serves as an order parameter for the cubic → tetragonal → orthorhombic
phase transitions (23, 24). Octahedral tilting is also an important factor governing structural
stability (25), dynamic disorder (26, 27), and potential ferroelectricity (26, 27) in LHPs. A
recent study using resonant excitation of the ~1 THz octahedral twist mode (Pb-I-Pb bending)
revealed modulation of the bandgap of CH3NH3PbI3 at room temperature (28). A similar
observation of dynamic bandgap modulation due to twist modes was made at 80 K for
off-resonant impulsive Raman excitation (29). These twist modes are also believed to
contribute to the formation of a polaronic state (30). All of these findings indicate an intriguing
role of carrier coupling to Raman-active non-polar phonons in addition to the polar LO phonons
of the conventional Fröhlich polaron picture (11, 31). In addition, the applicability of the
Fröhlich polaron picture to LHPs has been questioned (9, 26) because of the limited validity of
the harmonic approximation in these soft lattices (13).
Accordingly, the dynamic screening picture in LHPs is incomplete and its microscopic
mechanism continues to be debated (32, 33). Furthermore, identifying and characterizing
polaronic behavior is experimentally difficult (31, 33-37). Optical Kerr effect (OKE) studies of
LHPs (38, 39) did not succeed in unveiling a lattice response, which can instead be explained
by an instantaneous electronic polarization (due to hyperpolarizability) (40). Moreover,
previous strong-field THz excitation could not directly detect the driven vibrational modes
(28, 31), and coherent control of the phonons remained elusive. Here, we turn to the
THz-induced Kerr effect (TKE) (41, 42) to investigate lattice-modulated polarization dynamics
in the electronic ground state. We employ intense THz electric fields (Fig. 1B) that broadly
cover most of the inorganic cage modes (Fig. 1C) and may nonlinearly probe the THz
polarizability. The rapidly changing single-cycle THz field macroscopically mimics the
sub-picosecond variation of local electric fields following electron-hole separation (43, 44) and
elucidates the isolated lattice response.
Experiment

Generally, the polarizability describes the tendency of matter to form an electric dipole moment
when subject to an electric field, such as the local field of a mobile charge carrier in a
semiconductor. In the presence of an electric field $\boldsymbol{E}$, the microscopic dipole moment is given
by $\boldsymbol{p}(\boldsymbol{E}) = \boldsymbol{\mu}_0 + \alpha\boldsymbol{E}$, where $\boldsymbol{\mu}_0$ is the static dipole moment and $\alpha$ is the polarizability tensor.
In LHPs, $\alpha$ originates from three contributions: the instantaneous electronic response ($\alpha_\mathrm{e}$),
lattice distortion ($\alpha_\mathrm{lat}$), and molecular A-cation reorientation ($\alpha_\mathrm{mol}$). For small perturbations
of the respective collective coordinate $Q$ (charge distribution, molecular orientation, or lattice
mode), a Taylor expansion yields

$$\boldsymbol{p}(\boldsymbol{E}, Q) = \boldsymbol{\mu}_0 + \frac{\partial\boldsymbol{\mu}_0}{\partial Q}\,Q + \frac{\partial\alpha}{\partial Q}\,Q\,\boldsymbol{E}\,, \qquad (1)$$

where the two partial derivatives correspond to the mode effective charge $Z^*$ and the Raman
tensor $R_{ij}$, respectively. Macroscopically, the two terms lead to a lattice polarization $Z^* Q_\mathrm{IR}$
and a phonon-modulated susceptibility $\chi_\mathrm{eq}^{(1)} + (\partial\chi_\mathrm{eq}^{(1)}/\partial Q_\mathrm{R})\,Q_\mathrm{R}$ for polar ($Q_\mathrm{IR}$) and
non-polar ($Q_\mathrm{R}$) modes, respectively. The latter relates $\partial\alpha/\partial Q$ to a transient dielectric function
and a change in the refractive index of the material. This relation thus enables studying the
microscopic polarizability through the observation of a macroscopic transient birefringence
induced by a pump pulse and experienced by a weak probe pulse (41, 45). Collective
polarization dynamics are induced by the driving force $F = -\partial W_\mathrm{int}/\partial Q$, where
$W_\mathrm{int} = -\boldsymbol{P}(\boldsymbol{E}, Q) \cdot \boldsymbol{E}$ is the potential energy of the macroscopic polarization
$\boldsymbol{P} = \sum_i \boldsymbol{p}_i$ interacting with an electric field $\boldsymbol{E}$ (from a local charge carrier or through
light-matter coupling in the electric dipole approximation). Thus, two $E_\mathrm{THz}$ interactions lead
to a THz polarizability-induced transient birefringence in the TKE (42), which is linearly
probed by a weak probe pulse $E_\mathrm{pr}$ in an effective third-order nonlinear process proportional to
$\chi^{(3)} E_\mathrm{THz} E_\mathrm{THz} E_\mathrm{pr}$ (see Methods) (41, 46).
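The effective third-order character of the TKE (two THz field interactions, one probe interaction) implies a quadratic dependence of the signal peak on the THz field amplitude. A purely schematic numerical check of this scaling, with an invented pulse shape and invented field values:

```python
import numpy as np

def tke_peak(E0, chi3=1.0, E_pr=1.0):
    """Peak of a schematic TKE signal ~ chi3 * E_THz(t)^2 * E_pr in the
    instantaneous-response limit (arbitrary units)."""
    t = np.linspace(-2.0, 2.0, 2001)                              # time (ps)
    E_thz = E0 * np.exp(-t**2 / 0.25) * np.cos(2 * np.pi * 1.0 * t)  # ~1 THz
    return np.max(chi3 * E_thz**2 * E_pr)

fields = np.array([0.25, 0.5, 1.0])      # peak fields (MV/cm, invented)
peaks = np.array([tke_peak(E) for E in fields])
print(peaks / peaks[-1])                 # quadratic scaling: ratios 1/16, 1/4, 1
```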
To induce polarization dynamics, we use intense single-cycle THz pulses with a 1.0 THz center
frequency (> 1.5 THz spectral width, see Fig. 1C), delivering sub-picosecond peak electric
fields exceeding 1.5 MV/cm, generated by optical rectification in LiNbO3 (47). We probe the
resulting transient birefringence, i.e., anisotropic four-wave-mixing signals, by stroboscopic
sampling with a synchronized 20 fs pulse (800 nm center wavelength) in a balanced detection
scheme, see Fig. 1A. We thereby effectively measure a third-order nonlinear signal field
heterodyned with the transmitted probe field. The probe pulses are linearly polarized at 45°
with respect to the vertically polarized THz pulses. As representative LHPs, we investigate
hybrid organic-inorganic CH3NH3PbBr3 (MAPbBr3) and fully inorganic CsPbBr3. The
freestanding single-crystal samples (200-500 µm thickness) were solution-grown by an
antisolvent diffusion method (48, 49) (see Methods). Complementary polycrystalline thin films
(~400 nm thickness) were spin-coated on 500 µm BK7 substrates; these are particularly
technologically relevant, as most state-of-the-art LHP solar cells are fabricated in a similar way
(50).
Results

Fig. 2A shows the THz-induced transient birefringence in MAPbBr3 single crystals at room
temperature. The signal (blue line) initially follows $E_\mathrm{THz}^2$ (grey area, measured via
electro-optic sampling), but then transitions into a nearly mono-exponential decay for time
delays $t > 500$ fs. The transient birefringence peak at $t = 0$ clearly scales quadratically with
the THz field amplitude, as found by the pump fluence dependence in Fig. 2B. As the
exponential decay dynamics also remain constant for different fluences (Fig. S2), we can infer
the Kerr-type origin of the full signal and thus conclude a strong THz polarizability.
Furthermore, the dependence of the peak amplitude (Fig. 2C) and of the exponential tail
(Fig. S3) on the azimuthal angle between the probe polarization direction and the crystal axes
perfectly obeys the expected four-fold rotational symmetry of the $\chi^{(3)}$ tensor and the TKE
dependence $\chi^{(3)}_{ijkl} E_j^\mathrm{THz} E_k^\mathrm{THz} E_l^\mathrm{pr}$. We quantify the THz polarizability of MAPbBr3 by a
nonlinear THz refractive index $n_2$ of about $2 \times 10^{-14}$ cm²/W (see details in the SI), which is
on the same order as in the optical region (51) and roughly 80 times larger than the $n_2$ of
diamond (52), a material known to be suitable for THz nonlinear optics (53).
The small oscillatory deviations from the exponential tail in MAPbBr3 (Fig. 2A) become more
pronounced and qualitatively different in CsPbBr3, taking the form of a bumpy, non-trivial
shape (Fig. 2D). This stark difference between MAPbBr3 and CsPbBr3 is reminiscent of
2D-OKE results (40), where the oscillatory signal of CsPbBr3 was found to be mainly due to
anisotropic light propagation, since CsPbBr3 is orthorhombic and thus birefringent at room
temperature. The fluence (Fig. 2E) and azimuthal (Fig. 2F) dependences are consistent with a
pure third-order nonlinearity of the signal. However, fits to the azimuthal angle dependences
in Figs. 2C,F (black lines) yield different ratios of the off-diagonal $\chi^{(3)}_{ijkl}$ to diagonal $\chi^{(3)}_{iiii}$
tensor elements for the two materials: 1.6 for MAPbBr3 and 1.0 for CsPbBr3. A similar
polarization dependence of static Raman spectra was recently attributed to additional isotropic
disorder from the rotational freedom of the polar MA⁺ cation in MAPbBr3 (54).
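The four-fold azimuthal dependence can be captured by the generic form a + b·cos(4θ), whose fitted modulation amplitude constrains the ratio of off-diagonal to diagonal tensor elements. A minimal sketch with invented data and a linear least-squares fit (illustrative only, not the authors' fit model):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.deg2rad(np.arange(0, 360, 10))   # azimuthal angle (rad)
a_true, b_true = 1.0, 0.35                  # invented offset / 4-fold amplitude
signal = (a_true + b_true * np.cos(4 * theta)
          + 0.01 * rng.normal(size=theta.size))   # synthetic TKE peak vs angle

# Linear least squares for S(theta) = a + b*cos(4*theta)
design = np.column_stack([np.ones_like(theta), np.cos(4 * theta)])
(a_fit, b_fit), *_ = np.linalg.lstsq(design, signal, rcond=None)
print(f"offset a = {a_fit:.3f}, four-fold amplitude b = {b_fit:.3f}")
```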
Figs. 3A,B show a comparison of the temperature-dependent TKE in MAPbBr3 single crystals
and polycrystalline thin films. At room temperature (both top traces), it stands out that the
thin-film TKE signal lacks the exponential decay seen in the single crystals, providing first
evidence that the tail stems from dispersion effects and is not due to intrinsic molecular
relaxation dynamics, as previously interpreted (55). Strong and complex THz dispersion, as
seen in Fig. 1C, is a general, but often overlooked, phenomenon in broadband high-field THz
pump-probe spectroscopy. In analogy to the OKE (40), the features of the room-temperature
TKE in both MAPbBr3 and CsPbBr3 might therefore be dominated by dispersive and
anisotropic light propagation. Hence, we assign the main contribution of the TKE response at
room temperature to the instantaneous electronic polarizability (hyperpolarizability), which
may overwhelm possible lattice contributions. This interpretation is further supported by the
modeling below.
From here on, we mainly focus on the TKE of MAPbBr3, especially at low temperatures, at
which increased phonon lifetimes should facilitate the observation of a coherent lattice
response (54, 56). For the single crystal (Fig. 3A), the TKE dynamics at 180 K differ from those
at room temperature, which might reflect the change of structural phase from cubic to
tetragonal. At 180 K, an oscillatory signal at short times (< 2 ps) appears, suggesting the
presence of a coherent phonon which was overdamped in the cubic phase at room temperature
(54). The coherent oscillations become much stronger for the single crystal at 80 K, where
MAPbBr3 is in the orthorhombic phase. Less pronounced, but clear, oscillations are also visible
in the thin-film sample at 80 K (Fig. 3A, lowest trace). We extract the oscillatory parts
(Fig. 3C) of both single-crystal and thin-film samples at 80 K by subtracting incoherent
backgrounds, using a convolution of the squared THz field with a bi-exponential function. The
respective Fourier transforms in Fig. 3D reveal the same oscillation frequency of
1.15 ± 0.05 THz for both samples. This clearly rules out anisotropic propagation effects as the
origin of these oscillations (40), as the 400 nm film is too thin for significant walk-off between
pump and probe (shown in simulations later) and the different thicknesses of the two samples
rule out a Fabry-Pérot resonance effect. Thus, we can clearly assign the signal to a lattice
modulation of the THz polarizability dominated by a single 1.15 THz phonon in MAPbBr3.
We now turn to THz-THz-VIS four-wave-mixing simulations to understand the origins of the
TKE in MAPbBr3.
VIS four-wave-mixing simulations to understand the origins of TKE from MAPbBr3.

Modelling

For dispersive and birefringent materials, the Kerr signal cannot be decomposed into an effective birefringence change observed by an independent probe beam (46). Instead, the Kerr-effect-induced nonlinear polarization $P^{(3)}$ needs to be captured in a full four-wave-mixing (FWM) picture. To separate the three polarizability contributions (instantaneous electronic, molecular, and lattice) and to take anisotropic light propagation across dispersive phonon resonances into account, we simulate the 3rd-order nonlinear polarization by

$$P_i^{(3)}(t, z) = \epsilon_0 \int_{-\infty}^{t} dt' \int_{-\infty}^{t'} dt'' \int_{-\infty}^{t''} dt''' \, \tilde{R}_{ijkl}\, R(t, t', t'', t''')\, E_j^{\mathrm{THz}}(t', z)\, E_k^{\mathrm{THz}}(t'', z)\, E_l^{\mathrm{pr}}(t''', z), \quad (2)$$

where $R$ is the time-domain $\chi^{(3)}$ response function (46) and $E^{\mathrm{THz}}$ and $E^{\mathrm{pr}}$ are the pump and probe electric fields, respectively. The time-independent $\tilde{R}_{ijkl}$ tensor constitutes the respective $\chi^{(3)}$ symmetry for the different crystalline phases, in agreement with the ratios of the tensor elements obtained from the azimuthal fits in Fig. 2C. For the instantaneous electronic polarizability (hyperpolarizability), we assume temporal Dirac delta functions $R_\mathrm{e}(t, t', t'', t''') = R_{\mathrm{e},0}\,\delta(t - t')\,\delta(t' - t'')\,\delta(t'' - t''')$. For a lattice response, we model the driven phonon response by a Lorentz oscillator

$$R_\mathrm{ph}(t, t', t'', t''') = R_{\mathrm{ph},0}\,\delta(t' - t'')\,\delta(t'' - t''')\, e^{-\Gamma (t - t')} \sin\!\left(\sqrt{\omega_\mathrm{ph}^2 - \Gamma^2}\,(t - t')\right), \quad (3)$$

where $\omega_\mathrm{ph}/2\pi$ is the frequency and $1/2\Gamma$ the lifetime of the phonon (46). The driving force for Raman-active phonons is hereby $E_j^{\mathrm{THz}} E_k^{\mathrm{THz}}$, which contains difference- and sum-frequency terms (57, 58). The latter is a unique distinction from the OKE. For $E^{\mathrm{THz}}$ we can directly use the experimental THz electric field, as measured in amplitude- and phase-resolved electro-optic sampling. After determining the complex refractive indices (Fig. 1C) and extrapolating the static birefringence (see Methods and SI), we calculate and propagate all involved fields from Eq. (2), including the signal fields $E_i^{\mathrm{s}}(t, z)$ emitted from $P_i^{(3)}(t, z)$, followed by our full detection scheme, including balanced detection, to obtain the pump-probe signal (see details in Methods).
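As an illustration (not the authors' code), the two response functions of Eqs. (2) and (3) can be sampled on a discrete time grid. The grid spacing matches the 16.6 fs step quoted in the Methods; the overall amplitudes are placeholders.

```python
import numpy as np

def phonon_response(t, w_ph, gamma, r0=1.0):
    """Impulse response of Eq. (3): a damped sine at the reduced
    frequency sqrt(w_ph^2 - gamma^2), zero for t < 0 (causality)."""
    w_red = np.sqrt(w_ph**2 - gamma**2)
    return np.where(t >= 0, r0 * np.exp(-gamma * t) * np.sin(w_red * t), 0.0)

# Parameters from the fits quoted in the text:
# omega_ph / 2*pi = 1.14 THz, lifetime 1/(2*Gamma) = 1.7 ps
w_ph = 2 * np.pi * 1.14          # rad/ps
gamma = 1.0 / (2 * 1.7)          # 1/ps
t = np.arange(0.0, 10.0, 0.0166) # ps; ~16.6 fs steps as in the simulation
R_ph = phonon_response(t, w_ph, gamma)
# The electronic response R_e is a Dirac delta: on this grid it would be
# a single nonzero sample at t = 0, scaled by 1/dt.
```

The damped-sine kernel is what turns the impulsive two-field driving force into the ringing oscillations seen in the low-temperature TKE traces.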
Fig. 4A shows the simulated TKE signal (grey) compared to the experimental data (blue) at room temperature for a 500 µm thick MAPbBr3 single crystal. It unveils the formation of a long exponential tail produced by walk-off, dispersion, and absorption effects, even for a purely instantaneous electronic response $R_\mathrm{e}$. This confirms that the electronic polarizability dominates the TKE signal at room temperature, in contrast to a previous interpretation of a TKE measurement in thick single-crystal MAPbBr3 that neglected propagation effects entirely (55). At 80 K, MAPbBr3 is orthorhombic; we therefore need to include additional static birefringence. Instantaneous hyperpolarizability alongside static birefringence and dispersion can cause the appearance of oscillatory features (40). Nevertheless, our modelling finds these features to be too short-lived (see Fig. S14) to explain our experimental observation at 80 K. Thus, we need to account for both the hyperpolarizability $R_\mathrm{e}$ and the lattice-modulated polarizability $R_\mathrm{ph}$ (fit parameters: $\omega_\mathrm{ph}/2\pi = 1.14$ THz, $\Gamma = (2 \cdot 1.7\,\mathrm{ps})^{-1}$, $R_{\mathrm{e},0}/R_{\mathrm{ph},0} = 2.4$) to describe the low-temperature TKE signals in the time and frequency domain (Figs. 4B,C). In contrast to the OKE at 80 K (40), the oscillations in the TKE are therefore due to coherent phonon modes, and we hence finally observe an ultrafast lattice response to a sub-picosecond electric field transient.

The simulation assuming only instantaneous hyperpolarizability for a 400 nm thin film agrees well with the experimental TKE at room temperature (see Fig. 4D). As expected, the simulation lacks the clear tail seen in the thick single crystals, additionally confirming that the tail is due to light-propagation effects. Here too, at 80 K, we need to include both instantaneous electronic and phonon contributions ($\omega_\mathrm{ph}/2\pi = 1.14$ THz, $\Gamma = (2 \cdot 1.7\,\mathrm{ps})^{-1}$, $R_{\mathrm{e},0}/R_{\mathrm{ph},0} = 24$) to describe the experimental signals for the thin films in Figs. 4E,F. Here, a purely instantaneous electronic contribution alongside static birefringence does not lead to oscillatory features (see Figs. S14A,C). This provides direct proof that the observed oscillations in Figs. 3C,D originate from a coherent phonon. Therefore, through comparison of single crystals with thin films and by rigorous FWM simulation, we prove that we witness a coherent lattice-driven dynamic polarization response.
Interpretation

Apart from potential rotational disorder, our rigorous modeling shows that we do not observe a TKE contribution that can be unambiguously related to an ultrafast cation reorientation in the form of a liquid-like exponential decay (41, 42). We rather find MAPbBr3's TKE tail at room temperature to be most likely overwhelmed by the instantaneous hyperpolarizability $R_\mathrm{e}$ in conjunction with dispersive light propagation. This might also be explained by the THz pump spectrum being far off the cation rotational resonances around the 100 GHz frequency range (59). The cation species nevertheless influences the static and dynamic properties of the inorganic lattice, highlighting the importance of the interplay between the organic and inorganic sub-lattices for the LHPs' equilibrium structure (56). This fact shows up, e.g., as a single dominating PbBr6 cage mode in MAPbBr3 but two dominating modes in CsPbBr3 (see Fig. S1), in agreement with static Raman spectra (54). The various templating mechanisms by which the cation influences these properties (60) are its steric size (22), lone-pair effects (27, 61), and hydrogen bonding (62).
For MAPbBr3, we find a single phonon mode dominating the Raman-active lattice dynamics in response to a sub-ps electric field spike. The observed phonon at 1.15 THz is consistent with static Raman spectra in the visible range, where this mode also exhibits the highest scattering amplitude (54, 63). Thus, we can assign it to a dynamic change in the Pb-Br-Pb bond angle corresponding to a twisting of the PbBr6 octahedra (twist mode) (64). Based on theory work for MAPbI3 (65), we assign this mode Ag symmetry, which matches the experimental observations that the mode is still present when we rotate the single crystal by 45° (see Fig. S7) and that we also observe the same mode in polycrystalline thin films (Figs. 3C,D). We suggest that at room temperature this mode also strongly modulates the THz dielectric response, even though its oscillations are potentially overdamped, as inferred from the broad Raman spectra (54, 56). To distinguish whether this twist mode only dominates the ultrafast lattice response in MAPbBr3 or is of wider relevance for other LHPs, we analyze the TKE response of CsPbBr3, where we observe two modes at 0.9 and 1.3 THz at 80 K (see Fig. S1), corresponding to the two octahedra twist modes observed in static Raman spectra (54). We thus conclude that the transient THz polarizability $(\partial \chi_\mathrm{eq}^{(1)}/\partial Q)\, Q$ is generally dominated by the octahedra twist modes in LHPs.
We now consider the excitation mechanism of the coherent phonon. Fig. 5A shows that the 1.15 THz oscillations at 80 K scale with the square of the THz electric field amplitude, suggesting nonlinear excitation with a Raman-type driving force. This is consistent with the Kerr effect also being a Raman-type probing mechanism. Generally, there are four types of Raman-active THz excitation mechanisms: difference- or sum-frequency excitation via Ionic Raman Scattering (IRS) or Stimulated Raman Scattering, corresponding to nonlinear ionic (= phononic) or nonlinear electronic (= photonic) pathways, respectively (58). Indeed, the Ag symmetry of the observed modes permits IRS, where a resonantly driven IR-active phonon couples anharmonically to a Raman-active mode (14, 58). However, this phononic pathway requires phonon anharmonicity, whereas the photonic pathway requires electronic THz polarizability. The sum-frequency (SF) and difference-frequency (DF) photonic force spectra in Fig. 5B indicate a comparable probability for both photonic mechanisms to drive the 1.15 THz mode (dashed line). For the phononic pathways in Fig. 5C, the DF excitation requires a primarily driven IR-active phonon with a bandwidth of ≳ 1 THz, which exists in our excitation range even at 80 K (66). On the other hand, there are also IR-active modes at roughly half the frequency of the Raman-active mode, $\Omega_\mathrm{IR} = \Omega_\mathrm{R}/2$, enabling phononic SF-IRS (58). Accordingly, none of the four nonlinear excitation pathways can be neglected, but the observed strong electronic THz polarizability, in conjunction with a longer penetration depth for lower THz frequencies, favors a SF nonlinear photonic mechanism. We leave the determination of the exact excitation pathway to further studies, e.g. by two-dimensional TKE (67) or more narrowband THz excitation (68).
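The quadratic field-strength scaling underlying this assignment can be illustrated by a log-log power-law fit of oscillation amplitude versus field amplitude. The data points below are synthetic stand-ins for the measured values in Fig. 5A, not the experimental data.

```python
import numpy as np

# Hypothetical relative field amplitudes and a signal obeying S ∝ E^2
E = np.array([0.25, 0.5, 0.75, 1.0])
S = 3.0 * E**2  # synthetic, noise-free stand-in for the 1.15 THz amplitude

# Fit S = a * E^n  ->  log S = log a + n * log E
n, log_a = np.polyfit(np.log(E), np.log(S), 1)
print(round(n, 3))  # prints 2.0: a two-field (Raman-type) driving force
```

An exponent of n ≈ 2 distinguishes the nonlinear, two-photon-like driving force from a linear (dipolar, n = 1) excitation mechanism.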
Discussion

Independent of the precise excitation pathway, and in contrast to optical Raman or transient absorption studies, we unambiguously observe strong electron-phonon coupling of the octahedral twist modes via a pure THz polarizability (electronic or ionic). This explains the mode's dominating influence on the electronic bandgap in MAPbI3 previously observed by Kim et al. (28). The twist mode's half-cycle period of ~0.5 ps is short enough to contribute to electron-phonon coupling within the estimated polaron formation time (69), even in the overdamped case at room temperature. We can understand carrier screening by non-polar modes as follows. As shown in Eq. (1), the THz polarizability contains two lattice contributions: from polar lattice modes, $P_\mathrm{IR}(\omega) \propto Z^* Q_\mathrm{IR}(\omega) \propto Z^* E_\mathrm{THz}(\omega)$, and from the non-resonant electron cloud moving at THz speeds (sub-ps time scales):

$$P_e(\omega) = \epsilon_0 \left[\, \chi_e^{(1)}(\omega) + \frac{\partial \chi_e^{(1)}}{\partial Q_\mathrm{R}}(\omega, \Omega)\, Q_\mathrm{R}(\Omega) \,\right] E_\mathrm{THz}(\omega), \quad (4)$$

where the latter is modulated in the presence of a Raman-active phonon $Q_\mathrm{R}$. Thus, excited Raman-active modes lead to a transient dielectric response $\epsilon(\omega) = \epsilon_\mathrm{eq}(\omega) + \Delta\epsilon(\omega, \Omega)$ at THz frequencies $\omega$, with $\Delta\epsilon = \frac{\partial \chi_e^{(1)}}{\partial Q_\mathrm{R}} Q_\mathrm{R}$, which constitutes an additional contribution of higher-order screening due to a fluctuating lattice. In the macroscopic incoherent case, $\Delta\epsilon$ averages out. On the time and length scales relevant to electron-hole separation and localization (< 1 nm and < 1 ps) (43, 44), collective octahedral tilting produces (70) an additional THz polarizability, which might add to the conventional Fröhlich picture of carrier screening. We speculate that a local non-zero twist angle $Q_\mathrm{R}$ could either already be present due to dynamic disorder (see discussion below) or might be nonlinearly excited by the transient local charge field $E_\mathrm{loc}^2$, easily exceeding 1 MV/cm (9) (analogous to the excitation pathways above). The latter scenario agrees with MAPbBr3's unusually large optical $\chi^{(3)}$, previously attributed to local confinement effects (51). The observed 1.15 THz mode is therefore a good candidate for contributing to strong electron-phonon coupling beyond the polar Fröhlich picture.
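The screening argument around Eq. (4) can be made concrete with a minimal numerical sketch: a finite, coherent Raman coordinate $Q_\mathrm{R}$ shifts the dielectric response by $\Delta\epsilon = (\partial\chi_e^{(1)}/\partial Q_\mathrm{R})\,Q_\mathrm{R}$, while an incoherent ensemble with random sign averages the modulation away. All numerical values are illustrative placeholders, not material parameters.

```python
import numpy as np

eps_eq = 25.0    # illustrative equilibrium THz dielectric constant
dchi_dQ = 0.8    # illustrative Raman polarizability derivative (arb. units)

def eps_transient(Q_R):
    """Transient dielectric response eps = eps_eq + (dchi/dQ) * Q_R per Eq. (4)."""
    return eps_eq + dchi_dQ * Q_R

# Coherent case: a finite twist amplitude gives a finite modulation
delta_eps = eps_transient(1.0) - eps_eq
print(round(delta_eps, 6))  # prints 0.8

# Incoherent case: Q_R with random sign averages to ~zero modulation
rng = np.random.default_rng(0)
Q_random = rng.standard_normal(100_000)
mean_mod = np.mean(eps_transient(Q_random) - eps_eq)
print(abs(mean_mod) < 0.05)  # prints True
```

This is the sense in which the macroscopic incoherent case averages out while locally, on sub-nm and sub-ps scales, the fluctuating $Q_\mathrm{R}$ still provides additional screening.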
The driven twist mode is similar to soft modes in oxide perovskites, where the tilting angle of adjacent oxygen octahedra is an order parameter for phase transitions (71). Recently, TKE was similarly employed to drive and detect ultrafast field-induced ferroelectricity in the quantum paraelectric SrTiO3 (19). In Eu- and Sr-doped La2CuO4, driving the tilt of oxygen octahedra was found to induce signatures of superconductivity persisting over a few ps above the critical temperature (16). Consistent with these observations in oxide perovskites, the tilting angle of the PbX6 octahedra (twist mode) was found to act as an order parameter for phase transitions in LHPs (23, 24) and in the double perovskite Cs2AgBiBr6 (72). Especially for MAPbBr3, the Raman scattering intensity of the 1.1 THz peak was recently shown to be a measure of the orthorhombic-tetragonal phase transition (63), and its spectral evolution in Raman (56) and neutron scattering (73) is indicative of a soft-mode phase transition. Yet, the LHP lattice properties were previously mainly tuned in a static and chemical manner, e.g. by acting on the octahedral tilting angle through the steric size of the A-site cation (22). The coherent lattice control demonstrated here allows dynamic tuning of the structure and thus ultrafast phonon-driven steering of LHPs' optoelectronic properties, e.g. for integrated photonic devices operating at GHz to THz clock rates (74).
In addition, imposing a coherence on the octahedral tilting should directly influence the dynamic disorder (75), which is considered one of the key components determining the optoelectronic properties of LHPs (12, 54, 76). Dynamic disorder means that the effective crystallographic structure (e.g. cubic at 300 K) only exists in spatial and temporal average. Specifically, in LHPs with a Goldschmidt tolerance factor below 1, such as MAPbBr3 and CsPbBr3, the disorder mainly arises from the lattice instability associated with octahedral tilting (61, 75, 77), evidenced by X-ray total scattering in CsPbBr3 (78), inelastic X-ray scattering in MAPbI3 (70), and Raman spectroscopy in MAPbBr3, CsPbBr3, and MAPbI3 (54, 77). The resulting fluctuating lattice potential and polar nanodomains have been suggested as underlying mechanisms for dynamic charge-carrier screening in the form of preferred current pathways (79, 80) and ferroelectric polarons (26, 81), respectively. All these phenomena might potentially be controlled or transiently lifted by THz control of the octahedral motion.
Overall, we find that the octahedral tilting motion, which serves as an order parameter for phase transitions (23, 24) and contributes significantly to dynamic disorder (54, 77), shows a strong nonlinear coupling to a rapidly varying electric field on the sub-ps timescales relevant to local electron-hole separation and polaron formation. Our results thus indicate that the TO octahedral twist mode contributes to strong electron-phonon coupling and dynamic carrier screening in LHPs, which may be inherently linked to a local and transient phase instability, as suggested by the ferroelectric polaron picture (26, 81).
Conclusion

By investigating 3rd-order nonlinear polarization dynamics in hybrid and all-inorganic LHPs, we reveal that the room-temperature TKE response stems predominantly from a strong THz hyperpolarizability, leading to a nonlinear THz refractive index on the order of 10⁻¹⁴ cm²/W. In analogy to previous OKE studies (40), we explain and model the appearance of retarded TKE dynamics by dispersion, absorption, walk-off, and anisotropy effects (46). These effects are of crucial relevance to contemporary THz pump-probe experiments, such as TKE or THz-MOKE studies (82, 83). For sufficiently long phonon lifetimes at lower temperatures, we can nonlinearly drive and observe a coherent lattice response of the ~1 THz octahedral twist mode(s). These phonons couple most strongly to the THz polarizability, which means they must be highly susceptible to transient local fields on the 100s-of-fs time scale relevant to electron-phonon coupling and carrier localization. We find this ultrafast non-polar lattice response to be mediated by anharmonic phonon-phonon coupling and/or by the strong nonlinear electronic THz polarizability. The same octahedral twist mode serving as a sensitive order parameter for structural phase transitions (63, 73) is likely the origin of significant intrinsic dynamic disorder in LHPs (54, 75). Thus, our findings suggest that the microscopic mechanism of the unique defect tolerance (39, 84) and long carrier diffusion lengths (4, 5) of LHPs might also rely on small phase instabilities accompanying the polaronic effects.

Our work demonstrates the possibility of coherent control over the twist modes via nonlinear THz excitation. Since the octahedral twist modes are the dynamic counterparts to steric engineering of the metal-halide-metal bond angle, our work paves the way to study charge carriers in defined modulated lattice potentials, to control dynamic lattice disorder, or to macroscopically switch polar nanodomains, leading to the emergence of transient ferroelectricity.
Materials and Methods

Sample Growth

The single-crystal samples were synthesized based on our previously published method (40). For MAPbBr3, the precursor solution (0.45 M) was prepared by dissolving an equal molar ratio of MABr (Dyesol, 98%) and PbBr2 (Aldrich, ≥98%) in dimethylformamide (DMF, Aldrich, anhydrous 99.8%). After filtration, the crystal was allowed to grow using a mixture of dichloromethane (Aldrich, ≥99.5%) and nitromethane (Aldrich, ≥96%) as the antisolvent (48). A similar method was used for CsPbBr3 crystal growth (49). The precursor solution (0.38 M) was formed by dissolving an equal molar ratio of CsBr (Aldrich, 99.999%) and PbBr2 in dimethyl sulfoxide (EMD Millipore Co., anhydrous ≥99.8%). The solution was titrated with methanol until yellow precipitates appeared and did not redissolve after stirring at 50 °C for a few hours. The yellow supernatant was filtered and used for the antisolvent growth. Methanol was used for the slow vapor diffusion. All solid reactants were dehydrated in a vacuum oven at 150 °C overnight, and all solvents were used without further purification.

Thin films. Before spin-coating, the substrate was rinsed with acetone, methanol, and isopropanol and treated under oxygen plasma for 10 min. The freshly prepared substrate was transferred to the spin coater within a short time. For MAPbBr3, a precursor DMSO (Aldrich, ≥99.9%) solution (2 M) containing an equimolar ratio of MABr and PbBr2 was used for the one-step coating method. The film was formed by spin-coating at 2000 rpm for 45 s and annealed at 110 °C for 10 min. For CsPbBr3, a two-step method was implemented. First, the PbBr2 layer was obtained by spin-coating the 1 M PbBr2/DMF precursor solution at 2000 rpm for 45 s and drying at 80 °C for 30 min. Subsequently, the PbBr2 film was immersed in a 70 mM CsBr/methanol solution for 20 min. Following rinsing with isopropanol, the film was annealed at 250 °C for 5 min to form the uniform perovskite phase.
THz-induced Kerr effect

THz pulses with 1.0 THz center frequency and field strengths exceeding 1.5 MV/cm (Fig. 1B,C) were generated by optical rectification in LiNbO3 with the tilted-pulse-front technique (47). To that end, LiNbO3 was driven by laser pulses from an amplified Ti:sapphire laser system (central wavelength 800 nm, pulse duration 35 fs FWHM, pulse energy 5 mJ, repetition rate 1 kHz). The probe pulses came from a synchronized Ti:sapphire oscillator (center wavelength 800 nm, repetition rate 80 MHz) and were collinearly aligned and temporally delayed with respect to the THz pulse. The probe polarization was set at 45 degrees with respect to the vertically polarized THz pulses. The THz pulses induced a change in birefringence (TKE) in the sample (41). This birefringence causes the probe field to acquire a phase difference between the polarization components parallel and perpendicular to the THz pulse polarization. The phase difference is detected via a half- and quarter-waveplate (HWP and QWP) followed by a Wollaston prism to spatially separate perpendicularly polarized probe beam components. The intensity of the two beams is detected by two photodiodes in a balanced detection configuration.
Four-wave-mixing simulation

The 3rd-order nonlinear polarization $P^{(3)}(t, z)$ is simulated using the general four-wave-mixing equation (Eq. (2)) and according to Ref. (46). To compute $P^{(3)}(t, z)$, all three contributing light fields, $E_j^{\mathrm{THz}}$, $E_k^{\mathrm{THz}}$, and $E_l^{\mathrm{pr}}$, are propagated through the crystal on a time-space grid. The three fields inside the sample are calculated at any location $z$ using

$$E_i(t, z) = \int_{-\infty}^{\infty} t_i(\omega)\, A_i(\omega)\, e^{-i(\omega t - k_i(\omega) z)} \left(1 - R_i(\omega, z)\right) d\omega, \quad (5)$$

with

$$R_i(\omega, z) = r_i \left(1 + e^{2 i z k_i(\omega)}\right) \frac{e^{2 i (d - z) k_i(\omega)}}{1 - r_i^2(\omega)\, e^{2 i d k_i(\omega)}}, \quad (6)$$

where $A_i(\omega)$ is the spectral amplitude of the field and $t_i$ and $r_i$ denote the Fresnel transmission and reflection coefficients, respectively. As the input pump field $E^{\mathrm{THz}}$ we use the full experimental THz electric field generated via optical rectification in LiNbO3, as measured using electro-optic sampling in quartz (85). For the probe field $E^{\mathrm{pr}}$ we assume a Fourier-limited Gaussian spectrum with center wavelength 800 nm and pulse duration 20 fs, experimentally measured by a spectrometer and a commercial SPIDER. For both non-birefringent and birefringent simulations, we use the THz refractive index for MAPbBr3 as calculated from its dielectric function based on the experimental work by Sendner et al. (10) (Fig. S11). In the optical region, the precise anisotropic refractive index of CsPbBr3 is used as measured using the 2D-OKE (46). For the birefringent lead halide perovskite simulation, the static birefringence of CsPbBr3 is used and interpolated to the THz region (Fig. S12). For the isotropic cubic perovskite, the static birefringence is set to zero. In the shown simulation results, the time grid had a finite element size of $\Delta t' = 16.6$ fs, and the spatial grid had a finite element size of $\Delta z = 10$ μm for the single-crystal and $\Delta z = 0.1$ μm for the thin-film simulations, respectively. These values were chosen for the sake of computational efficiency and did not qualitatively affect the simulation results. The pump-probe delay finite element size was chosen to be $\Delta t = 16.6$ fs.
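A stripped-down sketch of the per-frequency propagation step in Eq. (5), neglecting the internal-reflection term $R_i(\omega, z)$ of Eq. (6) for brevity: each frequency component enters through a normal-incidence Fresnel factor and acquires a phase $k(\omega) z$ inside the slab. The index model and spectrum below are illustrative toys, not the measured MAPbBr3 data.

```python
import numpy as np

c = 299.792458  # speed of light in um/ps

def propagate(A, omega, n_of_w, z):
    """Frequency-domain field at depth z per Eq. (5), without the
    reflection term: t(w) * A(w) * exp(i k(w) z), with k = n w / c."""
    n = n_of_w(omega)
    t_fresnel = 2.0 / (1.0 + n)   # normal-incidence Fresnel transmission
    k = n * omega / c             # wavevector inside the medium
    return t_fresnel * A * np.exp(1j * k * z)

# Illustrative: a flat THz spectrum through 500 um of a weakly dispersive slab
omega = 2 * np.pi * np.linspace(0.2, 2.0, 64)      # rad/ps (0.2 - 2 THz)
A = np.ones_like(omega)                            # flat input spectrum
n_of_w = lambda w: 2.2 + 0.05 * (w / (2 * np.pi))  # toy real dispersion
E_out = propagate(A, omega, n_of_w, 500.0)
```

With a complex $n(\omega)$ the same expression also produces absorption; evaluating it on a grid of depths $z$ and transforming to the time domain yields the walk-off and dispersion effects discussed for the thick crystals.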
We assume that the nonlinear polarization $P^{(3)}$ emits an electric field $E^{(4)}$ at every slice $z$ according to the inhomogeneous wave equation

$$\left[\nabla^2 + k_i^2(\omega)\right] E_i^{(4)}(\omega, t, z) = -\frac{\omega^2}{\epsilon_0 c^2}\, P_i^{(3)}(\omega, t, z), \quad (7)$$

which then co-propagates with the probe field $E^{\mathrm{pr}}$. The transmitted probe field $E^{\mathrm{pr}}$ and the emitted field $E^{(4)}$ are projected onto two orthogonal polarization components by propagating through a half-wave plate, quarter-wave plate, and Wollaston prism. The combined effect of these optical devices is captured by the Jones matrices $J_1$ and $J_2$ for the two separated polarization-component channels. A balanced detection scheme allows observation of $E^{(4)}$ through its interference with $E^{\mathrm{pr}}$. Under balanced conditions, the detected non-equilibrium signal is

$$S \propto \int \mathrm{Re}\!\left[\left(J_1 E^{\mathrm{pr}}\right) \cdot \left(J_1 E^{(4)}\right)^{*} - \left(J_2 E^{\mathrm{pr}}\right) \cdot \left(J_2 E^{(4)}\right)^{*}\right] d\omega. \quad (8)$$

Our simulation therefore mimics the balancing conditions of the experiment. A detailed description of this calculation is given in (46).
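The detection step of Eq. (8) can be sketched with Jones calculus: project the probe and signal fields onto the two Wollaston output channels and take the balanced difference of the interference terms. The two ±45° projectors below are an illustrative stand-in for the actual HWP/QWP/Wollaston combination.

```python
import numpy as np

def jones_projectors():
    """Two orthogonal Wollaston output channels at +45° and -45°
    (illustrative stand-in for the HWP/QWP/Wollaston combination)."""
    p = np.array([1.0, 1.0]) / np.sqrt(2)   # channel 1: +45 degrees
    m = np.array([1.0, -1.0]) / np.sqrt(2)  # channel 2: -45 degrees
    return np.outer(p, p), np.outer(m, m)

def balanced_signal(E_pr, E4):
    """Single-frequency version of Eq. (8):
    S ∝ Re[(J1 E_pr)·(J1 E4)* − (J2 E_pr)·(J2 E4)*]."""
    J1, J2 = jones_projectors()
    term1 = np.real(np.dot(J1 @ E_pr, np.conj(J1 @ E4)))
    term2 = np.real(np.dot(J2 @ E_pr, np.conj(J2 @ E4)))
    return term1 - term2

# Horizontally polarized probe splits equally into both channels:
# with no signal field the balanced difference is exactly zero.
E_pr = np.array([1.0, 0.0])
print(balanced_signal(E_pr, np.zeros(2)))  # prints 0.0

# A small cross-polarized signal field unbalances the two channels.
print(round(balanced_signal(E_pr, np.array([0.0, 0.01])), 6))  # prints 0.01
```

The balanced difference cancels the large probe background and leaves only the interference of $E^{(4)}$ with $E^{\mathrm{pr}}$, which is why the scheme is linear in the weak signal field.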
To model the response of the system, we assume the response function $R_\mathrm{e}(t, t', t'', t''')$ for an instantaneous electronic response and $R_\mathrm{ph}(t, t', t'', t''')$ for a phonon response. The expressions for $R_\mathrm{e}$ and $R_\mathrm{ph}$ are given in the main paper. In the frequency domain, $R(\omega) = \chi^{(3)}(\omega)$. For normal incidence on the (101) crystal surface, the orthorhombic space group Pnma allows Kerr signals from $\chi^{(3)}_{xxxx}$, $\chi^{(3)}_{yyyy}$, $\chi^{(3)}_{xxyy} = \chi^{(3)}_{xyyx} = \chi^{(3)}_{xyxy}$, and $\chi^{(3)}_{yyxx} = \chi^{(3)}_{yxxy} = \chi^{(3)}_{yxyx}$ (40), while the cubic space group Pm3m allows for $\chi^{(3)}_{xxxx} = \chi^{(3)}_{yyyy}$ and $\chi^{(3)}_{xxyy} = \chi^{(3)}_{xyyx} = \chi^{(3)}_{xyxy} = \chi^{(3)}_{yyxx} = \chi^{(3)}_{yxxy} = \chi^{(3)}_{yxyx}$. The Pnma space group applies to CsPbBr3 in its orthorhombic phase, which is the case for all temperatures considered in this work. The Pm3m space group applies to MAPbBr3 in its room-temperature cubic phase. All allowed tensor elements were assumed to have the same magnitude.
Simulations for an electronic response only and without optical anisotropy are shown for various thicknesses in Fig. S13. This applies to MAPbBr3 single crystals at room temperature, when this material is in the cubic phase. Simulations for an electronic response only and with optical anisotropy are shown for various thicknesses in Fig. S14. This applies to MAPbBr3 in its low-temperature orthorhombic phase and to CsPbBr3, which is orthorhombic at all temperatures considered in this work. The effect of optical anisotropy on the TKE depends strongly on the azimuthal angle of the crystal. Results are shown in Fig. S14 for two different azimuthal angles, 0° and 45°, between the crystal axis and the probe polarization.

Fig. S14 shows that the oscillatory features due to propagation effects and static birefringence cannot explain the oscillations observed in low-temperature MAPbBr3 single crystals and thin films. To simulate this oscillatory signal, we have to consider both electronic and phonon contributions to $R = R_\mathrm{e} + R_\mathrm{ph}$ alongside static birefringence. The simulation parameters are given in the main text and were chosen in accordance with the Lorentzian fits to our experimental data in Fig. S8. The instantaneous contribution to $R$ is $R_{\mathrm{e},0}/R_{\mathrm{ph},0}$ times larger than the phononic contribution when the respective spectral amplitudes of the responses are integrated over the 0-10 THz range.
References

1. A. Al-Ashouri et al., Monolithic perovskite/silicon tandem solar cell with >29% efficiency by enhanced hole extraction. Science 370, 1300+ (2020).
2. M. Lu et al., Metal Halide Perovskite Light-Emitting Devices: Promising Technology for Next-Generation Displays. Advanced Functional Materials 29 (2019).
3. Y. Wang, L. Song, Y. Chen, W. Huang, Emerging New-Generation Photodetectors Based on Low-Dimensional Halide Perovskites. ACS Photonics 7, 10-28 (2019).
4. S. D. Stranks et al., Electron-Hole Diffusion Lengths Exceeding 1 Micrometer in an Organometal Trihalide Perovskite Absorber. Science 342, 341-344 (2013).
5. M. B. Johnston, L. M. Herz, Hybrid Perovskites for Photovoltaics: Charge-Carrier Recombination, Diffusion, and Radiative Efficiencies. Acc Chem Res 49, 146-154 (2016).
6. W. J. Yin, T. T. Shi, Y. F. Yan, Unusual defect physics in CH3NH3PbI3 perovskite solar cell absorber. Applied Physics Letters 104 (2014).
7. W. B. Chu, Q. J. Zheng, O. V. Prezhdo, J. Zhao, W. A. Saidi, Low-frequency lattice phonons in halide perovskites explain high defect tolerance toward electron-hole recombination. Sci Adv 6 (2020).
8. P. P. Joshi, S. F. Maehrlein, X. Zhu, Dynamic Screening and Slow Cooling of Hot Carriers in Lead Halide Perovskites. Adv Mater 31, e1803054 (2019).
9. K. Miyata, X. Y. Zhu, Ferroelectric large polarons. Nature Materials 17, 379-381 (2018).
10. M. Sendner et al., Optical phonons in methylammonium lead halide perovskites and implications for charge transport. Materials Horizons 3, 613-620 (2016).
11. A. D. Wright et al., Electron-phonon coupling in hybrid lead halide perovskites. Nat Commun 7 (2016).
12. M. J. Schilcher et al., The Significance of Polarons and Dynamic Disorder in Halide Perovskites. ACS Energy Letters 6, 2162-2173 (2021).
13. K. T. Munson, J. R. Swartzfager, J. B. Asbury, Lattice Anharmonicity: A Double-Edged Sword for 3D Perovskite-Based Optoelectronics. ACS Energy Letters 4, 1888-1897 (2019).
14. M. Först et al., Nonlinear phononics as an ultrafast route to lattice control. Nature Physics 7, 854-856 (2011).
15. A. S. Disa, T. F. Nova, A. Cavalleri, Engineering crystal structures with light. Nature Physics 17, 1087-1092 (2021).
16. D. Fausti et al., Light-Induced Superconductivity in a Stripe-Ordered Cuprate. Science 331, 189-191 (2011).
17. A. Stupakiewicz et al., Ultrafast phononic switching of magnetization. Nature Physics 17, 489-492 (2021).
18. S. F. Maehrlein et al., Dissecting spin-phonon equilibration in ferrimagnetic insulators by ultrafast lattice excitation. Sci Adv 4 (2018).
19. X. Li et al., Terahertz field-induced ferroelectricity in quantum paraelectric SrTiO3. Science 364, 1079+ (2019).
20. T. F. Nova, A. S. Disa, M. Fechner, A. Cavalleri, Metastable ferroelectricity in optically strained SrTiO3. Science 364, 1075+ (2019).
21. M. Rini et al., Insulator-to-metal transition induced by mid-IR vibrational excitation in a magnetoresistive manganite. Springer Ser Chem Ph 88, 588+ (2007).
22. M. R. Filip, G. E. Eperon, H. J. Snaith, F. Giustino, Steric engineering of metal-halide perovskites with tunable optical band gaps. Nat Commun 5, 5757 (2014).
23. P. S. Whitfield et al., Structures, Phase Transitions and Tricritical Behavior of the Hybrid Perovskite Methyl Ammonium Lead Iodide. Sci Rep 6, 35685 (2016).
24. H. Mashiyama, Y. Kawamura, E. Magome, Y. Kubota, Displacive character of the cubic-tetragonal transition in CH3NH3PbX3. J Korean Phys Soc 42, S1026-S1029 (2003).
25. W. Xiang, S. Liu, W. Tress, A review on the stability of inorganic metal halide perovskites: challenges and opportunities for stable solar cells. Energy & Environmental Science 14, 2090-2113 (2021).
26. F. Wang et al., Solvated Electrons in Solids-Ferroelectric Large Polarons in Lead Halide Perovskites. J Am Chem Soc 143, 5-16 (2021).
27. Y. Fu, S. Jin, X. Y. Zhu, Stereochemical expression of ns2 electron pairs in metal halide perovskites. Nature Reviews Chemistry 5, 838-852 (2021).
28. H. Kim et al., Direct observation of mode-specific phonon-band gap coupling in methylammonium lead halide perovskites. Nat Commun 8, 687 (2017).
29. P. Guo et al., Direct Observation of Bandgap Oscillations Induced by Optical Phonons in Hybrid Lead Iodide Perovskites. Advanced Functional Materials 30 (2020).
30. M. Park et al., Excited-state vibrational dynamics toward the polaron in methylammonium lead iodide perovskite. Nat Commun 9, 2525 (2018).
31. Y. Lan et al., Ultrafast correlated charge and lattice motion in a hybrid metal halide perovskite. Sci Adv 5 (2019).
32. D. Meggiolaro, F. Ambrosio, E. Mosconi, A. Mahata, F. De Angelis, Polarons in Metal Halide Perovskites. Advanced Energy Materials 10 (2019).
33. D. Ghosh, E. Welch, A. J. Neukirch, A. Zakhidov, S. Tretiak, Polarons in Halide Perovskites: A Perspective. J Phys Chem Lett 11, 3271-3286 (2020).
34. L. R. V. Buizza, L. M. Herz, Polarons and Charge Localization in Metal-Halide Semiconductors for Photovoltaic and Light-Emitting Devices. Adv Mater 33, e2007057 (2021).
35. O. Cannelli et al., Quantifying Photoinduced Polaronic Distortions in Inorganic Lead Halide Perovskite Nanocrystals. J Am Chem Soc 143, 9048-9059 (2021).
36. M. Puppin et al., Evidence of Large Polarons in Photoemission Band Mapping of the Perovskite Semiconductor CsPbBr3. Phys Rev Lett 124, 206402 (2020).
37. H. Seiler et al., Direct observation of ultrafast lattice distortions during exciton-polaron formation in lead-halide perovskite nanocrystals. arXiv (2022).
38. H. M. Zhu et al., Screening in crystalline liquids protects energetic carriers in hybrid perovskites. Science 353, 1409-1413 (2016).
39. K. Miyata et al., Large polarons in lead halide perovskites. Sci Adv 3 (2017).
40. S. F. Maehrlein et al., Decoding ultrafast polarization responses in lead halide perovskites by the two-dimensional optical Kerr effect. Proc Natl Acad Sci U S A 118 (2021).
41. M. C. Hoffmann, N. C. Brandt, H. Y. Hwang, K.-L. Yeh, K. A. Nelson, Terahertz Kerr effect. Applied Physics Letters 95 (2009).
42. M. Sajadi, M. Wolf, T. Kampfrath, Transient birefringence of liquids induced by terahertz electric-field torque on permanent molecular dipoles. Nat Commun 8, 14963 (2017).
43. F. Ambrosio, J. Wiktor, F. De Angelis, A. Pasquarello, Origin of low electron-hole recombination rate in metal halide perovskites. Energy & Environmental Science 11, 101-105 (2018).
44. F. Ambrosio, D. Meggiolaro, E. Mosconi, F. De Angelis, Charge Localization, Stabilization, and Hopping in Lead Halide Perovskites: Competition between Polaron Stabilization and Cation Disorder. ACS Energy Letters 4, 2013-2020 (2019).
45. R. Righini, Ultrafast Optical Kerr Effect in Liquids and Solids. Science 262 (1993).
46. L. Huber, S. F. Maehrlein, F. Wang, Y. Liu, X. Y. Zhu, The ultrafast Kerr effect in anisotropic and dispersive media. J Chem Phys 154, 094202 (2021).
47. H. Hirori, A. Doi, F. Blanchard, K. Tanaka, Single-cycle terahertz pulses with amplitudes exceeding 1 MV/cm generated by optical rectification in LiNbO3. Applied Physics Letters 98 (2011).
48. D. Shi et al., Low trap-state density and long carrier diffusion in organolead trihalide perovskite single crystals. Science 347, 519-522 (2015).
49. Y. Rakita et al., Low-Temperature Solution-Grown CsPbBr3 Single Crystals and Their Characterization. Crystal Growth & Design 16, 5717-5725 (2016).
50. X. D. Wang, W. G. Li, J. F. Liao, D. B. Kuang, Recent Advances in Halide Perovskite Single-Crystal Thin Films: Fabrication Methods and Optoelectronic Applications. Solar RRL 3 (2019).
51. C. Kriso et al., Nonlinear refraction in CH3NH3PbBr3 single crystals. Opt Lett 45, 2431-2434 (2020).
52. M. Sajadi, M. Wolf, T. Kampfrath, Terahertz-field-induced optical birefringence in common window and substrate materials. Opt Express 23, 28985-28992 (2015).
53. M. Shalaby, C. Vicario, C. P. Hauri, Extreme nonlinear terahertz electro-optics in diamond for ultrafast pulse switching. APL Photonics 2 (2017).
54. O. Yaffe et al., Local Polar Fluctuations in Lead Halide Perovskite Crystals. Phys Rev Lett
741
+ 118, 136001 (2017).
742
+ 55.
743
+ A. A. Melnikov, V. E. Anikeeva, O. I. Semenova, S. V. Chekalin, Terahertz Kerr effect in a
744
+ methylammonium lead bromide perovskite crystal. Physical Review B 105, (2022).
745
+ 56.
746
+ Y. Guo et al., Interplay between organic cations and inorganic framework and
747
+ incommensurability in hybrid lead-halide perovskite CH3NH3PbBr3. Physical Review
748
+ Materials 1, (2017).
749
+ 57.
750
+ S. Maehrlein, A. Paarmann, M. Wolf, T. Kampfrath, Terahertz Sum-Frequency Excitation of a
751
+ Raman-Active Phonon. Phys Rev Lett 119, 127402 (2017).
752
+ 58.
753
+ D. M. Juraschek, S. F. Maehrlein, Sum-frequency ionic Raman scattering. Physical Review B
754
+ 97, (2018).
755
+ 59.
756
+ L. M. Herz, How Lattice Dynamics Moderate the Electronic Properties of Metal-Halide
757
+ Perovskites. J Phys Chem Lett 9, 6853-6863 (2018).
758
+ 60.
759
+ D. B. Mitzi, Templating and structural engineering in organic–inorganic perovskites. Journal
760
+ of the Chemical Society, Dalton Transactions, 1-12 (2001).
761
+ 61.
762
+ L. Gao et al., Metal cation s lone-pairs increase octahedral tilting instabilities in halide
763
+ perovskites. Materials Advances 2, 4610-4616 (2021).
764
+ 62.
765
+ P. R. Varadwaj, A. Varadwaj, H. M. Marques, K. Yamashita, Significance of hydrogen
766
+ bonding and other noncovalent interactions in determining octahedral tilting in the
767
+ CH3NH3PbI3 hybrid organic-inorganic halide perovskite solar cell semiconductor. Sci Rep 9,
768
+ 50 (2019).
769
+ 63.
770
+ F. Wang, L. Huber, S. F. Maehrlein, X. Y. Zhu, Optical Anisotropy and Phase Transitions in
771
+ Lead Halide Perovskites. J Phys Chem Lett 12, 5016-5022 (2021).
772
+ 64.
773
+ A. M. Leguy et al., Dynamic disorder, phonon lifetimes, and the assignment of modes to the
774
+ vibrational spectra of methylammonium lead halide perovskites. Phys Chem Chem Phys 18,
775
+ 27051-27066 (2016).
776
+ 65.
777
+ M. A. Pérez-Osorio et al., Raman Spectrum of the Organic–Inorganic Halide Perovskite
778
+ CH3NH3PbI3 from First Principles and High-Resolution Low-Temperature Raman
779
+ Measurements. The Journal of Physical Chemistry C 122, 21703-21717 (2018).
780
+ 66.
781
+ D. Zhao et al., Low-frequency optical phonon modes and carrier mobility in the halide
782
+ perovskite CH3NH3PbBr3 using terahertz time-domain spectroscopy. Applied Physics Letters
783
+ 111, (2017).
784
+ 67.
785
+ C. L. Johnson, B. E. Knighton, J. A. Johnson, Distinguishing Nonlinear Terahertz Excitation
786
+ Pathways with Two-Dimensional Spectroscopy. Phys Rev Lett 122, 073901 (2019).
787
+ 68.
788
+ Z. Zhang et al., Discovery of enhanced lattice dynamics in a single-layered hybrid perovskite.
789
+ Co-submitted, (2022).
790
+ 69.
791
+ S. A. Bretschneider et al., Quantifying Polaron Formation and Charge Carrier Cooling in
792
+ Lead-Iodide Perovskites. Adv Mater, e1707312 (2018).
793
+
794
+ 14
795
+ 70.
796
+ A. N. Beecher et al., Direct Observation of Dynamic Symmetry Breaking above Room
797
+ Temperature in Methylammonium Lead Iodide Perovskite. ACS Energy Letters 1, 880-887
798
+ (2016).
799
+ 71.
800
+ T. Kohmoto, M. Masui, M. Abe, T. Moriyasu, K. Tanaka, Ultrafast dynamics of soft phonon
801
+ modes in perovskite dielectrics observed by coherent phonon spectroscopy. Physical Review B
802
+ 83, (2011).
803
+ 72.
804
+ A. Cohen et al., Diverging Expressions of Anharmonicity in Halide Perovskites. Adv Mater
805
+ 34, e2107932 (2022).
806
+ 73.
807
+ I. P. Swainson et al., From soft harmonic phonons to fast relaxational dynamics
808
+ inCH3NH3PbBr3. Physical Review B 92, (2015).
809
+ 74.
810
+ A. Ferrando, J. P. Martinez Pastor, I. Suarez, Toward Metal Halide Perovskite Nonlinear
811
+ Photonics. J Phys Chem Lett 9, 5612-5623 (2018).
812
+ 75.
813
+ R. X. Yang, J. M. Skelton, E. L. da Silva, J. M. Frost, A. Walsh, Assessment of dynamic
814
+ structural instabilities across 24 cubic inorganic halide perovskites. J Chem Phys 152, 024703
815
+ (2020).
816
+ 76.
817
+ K. T. Munson, E. R. Kennehan, G. S. Doucette, J. B. Asbury, Dynamic Disorder Dominates
818
+ Delocalization, Transport, and Recombination in Halide Perovskites. Chem 4, 2826-2843
819
+ (2018).
820
+ 77.
821
+ R. Sharma et al., Elucidating the atomistic origin of anharmonicity in tetragonal
822
+ CH3NH3PbI3 with Raman scattering. Physical Review Materials 4, (2020).
823
+ 78.
824
+ F. Bertolotti et al., Coherent Nanotwins and Dynamic Disorder in Cesium Lead Halide
825
+ Perovskite Nanocrystals. ACS Nano 11, 3819-3831 (2017).
826
+ 79.
827
+ J. M. Frost et al., Atomistic origins of high-performance in hybrid halide perovskite solar
828
+ cells. Nano Lett 14, 2584-2590 (2014).
829
+ 80.
830
+ A. Pecchia, D. Gentilini, D. Rossi, M. Auf der Maur, A. Di Carlo, Role of Ferroelectric
831
+ Nanodomains in the Transport Properties of Perovskite Solar Cells. Nano Lett 16, 988-992
832
+ (2016).
833
+ 81.
834
+ F. Wang et al., Phonon signatures for polaron formation in an anharmonic semiconductor.
835
+ Proc Natl Acad Sci U S A 119, e2122436119 (2022).
836
+ 82.
837
+ M. Basini et al., Terahertz electric-field driven dynamical multiferroicity in SrTiO3. arXiv,
838
+ (2022).
839
+ 83.
840
+ M. Basini et al., Terahertz Ionic Kerr Effect. arXiv, (2022).
841
+ 84.
842
+ M. Cherasse et al., Electron Dynamics in Hybrid Perovskites Reveal the Role of Organic
843
+ Cations on the Screening of Local Charges. Nano Lett 22, 2065-2069 (2022).
844
+ 85.
845
+ V. Balos, M. Wolf, S. Kovalev, M. Sajadi, Optical Rectification and Electro-Optic Sampling
846
+ in Quartz. ArXiv, (2022).
847
+
848
+ 15
849
Figures

Fig. 1 | THz fields for nonlinear lattice control in lead halide perovskites. A. Sketch of the experimental pump-probe configuration. An intense THz electric field causes a transient change of birefringence, leading to an altered probe pulse polarization. This change in polarization is read out using a balanced detection scheme consisting of balancing optics (BO), a Wollaston prism (WP), and two photodiodes (PD1, PD2). B. The THz pump electric field employed, measured by electro-optic sampling. C. Complex refractive index of MAPbBr3 (blue curves), obtained from (10), and the Fourier transform of the THz field in B (red area).
858
887
Fig. 2 | THz-induced birefringence in MAPbBr3 and CsPbBr3 at room temperature. A. Room-temperature TKE in MAPbBr3 and D. CsPbBr3 single crystals. B. and E. THz fluence dependence of the transient birefringence peak amplitude with quadratic fit (black line), demonstrating the Kerr-effect nature of the signals. C. and F. Azimuthal-angle (between probe beam polarization and crystal facets) dependence of the main TKE peak with fits (black lines) to the expected 𝜒(3) tensor geometries in the cubic and orthorhombic phase, respectively.
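The quadratic field scaling tested in panels B and E can be illustrated by fitting a power law to peak-signal-versus-field data. The data below are synthetic (a quadratic with a small deterministic perturbation) and stand in for the measured values only to show the fitting procedure.

```python
import numpy as np

# Illustration of the Kerr scaling test in panels B/E: for a chi(3) response
# the peak birefringence S should grow as E^2. The "data" here are synthetic,
# not the measured fluence series.
E = np.linspace(0.3, 1.0, 10)                    # normalized THz field amplitude
S = 0.97 * E**2 * (1 + 0.02 * np.sin(5 * E))     # synthetic peak signal

# fit a power law S = a * E^b in log-log space
b, log_a = np.polyfit(np.log(E), np.log(S), 1)
print(f"fitted exponent b = {b:.2f}")            # expected close to 2
```

An exponent near 2 confirms a third-order (Kerr) nonlinearity, whereas a linear electro-optic response would give an exponent near 1.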
894
985
Fig. 3 | TKE temperature dependence of single-crystal vs. thin-film MAPbBr3. A. Temperature-dependent TKE for single-crystal and B. thin-film samples. C. Oscillatory signal components at 80 K, extracted by subtracting an exponential tail (dashed lines) and starting after the main peak (bottom black arrow) in A, B. D. Respective Fourier transforms (blue and red) of C and the incident THz pump spectrum (gray area).
991
1063
Fig. 4 | Four-wave mixing simulations vs. experimental results in MAPbBr3. Isotropic cubic phase (300 K): simulated TKE signals for A. single crystal (500 µm thickness) and D. thin film (400 nm thickness), assuming only an instantaneous electronic response 𝑅e(𝑡) (gray lines). Anisotropic orthorhombic phase (80 K): B. single-crystal and E. thin-film TKE vs. simulation for a model system with static birefringence, an instantaneous electronic response 𝑅e(𝑡), and a Lorentz-oscillator phonon response 𝑅ph(𝑡) (purple lines). C., F. Fourier transforms of the experimental data (blue and red) and simulation results (purple) from B., E., respectively, normalized to the phonon amplitude at 1.15 THz.
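As a rough illustration of the two response functions named in the caption, the sketch below convolves a synthetic 𝐸(𝑡)² driving term with an instantaneous response and a damped Lorentz oscillator at 1.15 THz. All pulse and damping parameters are invented for illustration; the full FWM model additionally treats propagation, absorption, and static birefringence.

```python
import numpy as np

# Toy version of the simulation inputs: an instantaneous electronic response
# R_e(t) ~ delta(t) plus a damped Lorentz oscillator R_ph(t) at 1.15 THz,
# both driven by E(t)^2. All parameters are illustrative placeholders.
dt = 0.01                                                  # time step (ps)
t = np.arange(-5.0, 15.0, dt)
E = np.exp(-(t / 0.3)**2) * np.cos(2 * np.pi * 1.0 * t)    # synthetic THz pulse
drive = E**2                                               # Kerr driving term

f_ph, tau_ph = 1.15, 1.7                   # phonon frequency (THz), lifetime (ps)
t_r = np.arange(0.0, 15.0, dt)             # causal response axis, t >= 0
R_ph = np.exp(-t_r / tau_ph) * np.sin(2 * np.pi * f_ph * t_r)

# total signal: instantaneous part + convolved phonon part
signal = drive + 0.3 * np.convolve(drive, R_ph)[: t.size] * dt

# the long-lived tail oscillates at the phonon frequency
tail = signal[t > 3.0]
freqs = np.fft.rfftfreq(tail.size, dt)
f_peak = freqs[np.argmax(np.abs(np.fft.rfft(tail)))]
print(f"tail oscillation peaks near {f_peak:.2f} THz")
```

The instantaneous term reproduces only the 𝑡 = 0 peak, while the convolved oscillator produces the long-lived oscillatory tail, mirroring the role of 𝑅e(𝑡) and 𝑅ph(𝑡) in panels B and E.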
1072
1137
Fig. 5 | Nonlinear excitation pathways for the 1.15 THz Raman-active twist mode. A. Time-domain coherent phonon oscillations (normalized to the 𝑡 = 0 TKE main peak) at 80 K for different THz field strengths (left panel) and the respective coherent phonon amplitude (right panel) obtained from the Fourier transform; both reveal an 𝐸THz² scaling law and thus demonstrate a nonlinear excitation. B. Possible nonlinear photonic excitation pathways for the 𝜔ph = 1.15 THz mode (dashed line), mediated via a THz electronic polarizability. The nonlinearly coupled 𝐸THz spectrum (gray area) leads to difference-frequency 𝐸THz𝐸THz* (DF, red area) and sum-frequency 𝐸THz𝐸THz (SF, blue area) driving forces. The octahedral twist mode is sketched schematically on the right-hand side. C. Possible phononic pathways via a directly driven IR-active phonon 𝑄IR, which couples nonlinearly to the Raman-active mode 𝑄R via anharmonic 𝑄R𝑄IR² coupling.
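The photonic pathways in panel B can be illustrated by squaring a synthetic single-cycle field: the spectrum of 𝐸(𝑡)² splits into a near-DC difference-frequency part and a sum-frequency part around twice the carrier. The pulse below is a generic 1 THz test pulse, not the experimental field.

```python
import numpy as np

# Sketch of the photonic driving force in panel B: the force ~ E(t)^2, whose
# spectrum contains difference-frequency (near-DC, "DF") and sum-frequency
# (~2 f0, "SF") components. The single-cycle pulse here is generic.
dt = 0.01                                  # ps
t = np.arange(-5.0, 5.0, dt)
f0 = 1.0                                   # carrier frequency (THz)
E = np.exp(-(t / 0.5)**2) * np.cos(2 * np.pi * f0 * t)

freqs = np.fft.rfftfreq(t.size, dt)        # THz
force = np.abs(np.fft.rfft(E**2))          # spectrum of the E^2 driving force

dc = force[0]                              # DF (rectified) weight at 0 THz
sf_mask = freqs > 1.2
f_sf = freqs[sf_mask][np.argmax(force[sf_mask])]
print(f"DF weight at 0 THz present, SF component near {f_sf:.2f} THz")
```

A Raman mode at 𝜔ph is driven whenever one of these components overlaps with it, which is the basis of the sum- and difference-frequency pathways sketched in panels B and C.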
1154
1208
Acknowledgments

We thank A. Paarmann, M. S. Spencer, M. Chergui, A. Mattoni, and H. Seiler for fruitful discussions.

Funding: S.F.M. acknowledges funding for his Emmy Noether group from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, Nr. 469405347). S.F.M. and L.P. acknowledge support of the 2D-HYPE project from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, Nr. 490867834) and the Agence Nationale de la Recherche (ANR, Nr. ANR-21-CE30-0059), respectively. X.-Y.Z. acknowledges support by the Vannevar Bush Faculty Fellowship through Office of Naval Research Grant # N00014-18-1-2080. M.C. was supported by DAAD Scholarship 57507869.

Author contributions: S.F.M. conceived the experimental idea; M.F., M.C., and S.F.M. designed the research; M.F., M.C., and L.N. performed the experiments; F.W. and B.X. prepared the samples; M.F. and M.C. analyzed the data; J.U., L.H., and S.F.M. contributed theory and analytic tools. L.H. developed the FWM model and M.F. carried out the FWM simulations. M.F., X.-Y.Z., and S.F.M. wrote the manuscript. All authors read, discussed, and commented on the manuscript. M.F. and M.C. contributed equally to this work.

Competing interests: The authors declare that they have no competing interests.

Data and materials availability: All data and simulation codes will be uploaded to a public repository after publication of the manuscript.
1229
Supplementary materials

Supplementary information

1.1 CsPbBr3 TKE temperature dependence

Fig. S1 | TKE temperature evolution in CsPbBr3. a. TKE in CsPbBr3 at RT and 80 K. In contrast to MAPbBr3, where the structural phase changes at lower temperatures (from cubic to tetragonal to orthorhombic), CsPbBr3 remains in the orthorhombic phase as the temperature is lowered. This is also reflected in the overall TKE shape. However, additional oscillations are visible on longer timescales at 80 K. b. Fourier transforming the oscillations after the time indicated by the arrow reveals two main frequency components at 0.9 and 1.3 THz. These frequencies agree well with the two dominant phonon modes in the static Raman spectra of CsPbBr3 (54). c, e. The THz fluence dependence reveals that both oscillation amplitudes (at 0.9 and 1.3 THz) scale quadratically with the THz electric field. d. Comparison between the simulation for an anisotropic material (100 µm thick, 22.5° azimuthal angle between crystal axis and probe polarization) considering an electronic response only and the experimental room-temperature CsPbBr3 TKE. This shows that the complex CsPbBr3 TKE signal may be understood in terms of an instantaneous electronic polarization response alongside anisotropic light propagation.
1246
1306
1.2 Estimating the THz nonlinear refractive index of MAPbBr3

Fig. S5 shows a comparison between the TKE in MAPbBr3 and diamond. The measured TKE signal strength 𝑆(𝑑) = Δ𝐼/𝐼0, where 𝐼0 is the total probe intensity measured by the photodiodes and Δ𝐼 is the intensity difference, is proportional to Δ𝑛𝜔pr𝑑/𝑐0 in diamond, where 𝑑 is the sample thickness and 𝜔pr is the probing frequency.

This simple relation holds because there is no significant THz dispersion in diamond. However, due to significant THz absorption and dispersion, it does not hold in MAPbBr3, as seen in Fig. S4b. For MAPbBr3, 𝑆(𝑑) may rather be approximated by (Δ𝑛𝜔pr/𝑐0)𝑓(𝑑), where

𝑓(𝑑) = ∫₀^𝑑 d𝑧 ∫₀^∞ d𝜔 𝐸THz²(𝜔) exp(−𝛼(𝜔)𝑧) / ∫₀^∞ d𝜔 𝐸THz²(𝜔).    (S1)

Here, 𝐸THz(𝜔) is the THz pump spectrum and 𝛼 is the absorption coefficient of MAPbBr3 as extracted from the complex refractive index data in Fig. S11.

Since Δ𝑛 = 𝑛2𝑐0𝜖0𝐸THz², we may estimate 𝑛2 of MAPbBr3 using

𝑛2MA = [𝑆MA(𝑑MA)𝑑D / (𝑆D(𝑑D)𝑓(𝑑MA))] 𝑛2D.    (S2)

𝑛2D of diamond has been measured to be 3 × 10⁻¹⁶ cm²/W for 1 THz pump and 800 nm optical probing (52). Based on 𝑆MA/𝑆D = 9.4 and 𝑓(𝑑MA = 500 µm) = 47 µm, we estimate 𝑛2MA to be 2 × 10⁻¹⁴ cm²/W, roughly 80 times higher than 𝑛2D for a 1 THz pump and 800 nm optical probing.

For comparison, 𝑛2MA has previously been measured in the near-infrared spectral region using the Z-scan technique (51), which found a similar order of magnitude, 𝑛2MA = 9.5 × 10⁻¹⁴ cm²/W at 1000 nm wavelength.
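Eq. S1 can be evaluated numerically. The sketch below uses a hypothetical Gaussian THz spectrum and a toy absorption curve in place of the measured 𝐸THz(𝜔) and 𝛼(𝜔), so the numbers it prints are purely illustrative of the procedure.

```python
import numpy as np

# Numerical sketch of Eq. S1: the effective interaction length f(d) of the
# TKE signal in an absorbing medium. The spectrum and absorption below are
# hypothetical placeholders for the measured E_THz(w) and alpha(w).
def effective_length(d_um, freq, E_spec, alpha):
    """f(d) = int_0^d dz int dw E^2(w) exp(-alpha(w) z) / int dw E^2(w)."""
    z = np.linspace(0.0, d_um, 2000)                       # depth grid (um)
    attenuated = np.trapz(E_spec[None, :]**2
                          * np.exp(-alpha[None, :] * z[:, None]),
                          freq, axis=1)                    # spectral weight vs depth
    return np.trapz(attenuated / np.trapz(E_spec**2, freq), z)

freq = np.linspace(0.1, 5.0, 400)                          # THz
E_spec = np.exp(-((freq - 1.0) / 0.6)**2)                  # toy pump spectrum
alpha = 1e-2 * freq**2                                     # toy absorption (1/um)

for d in (100.0, 500.0, 1000.0):
    print(f"f({d:4.0f} um) = {effective_length(d, freq, E_spec, alpha):6.1f} um")
```

With absorption switched off, 𝑓(𝑑) reduces to 𝑑; with absorption, it saturates once 𝑑 exceeds the THz penetration depth, which is why the measured TKE signal in Fig. S4b stops growing for thick samples.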
1361
Supplementary figures

Experimental figures

Fig. S2 | MAPbBr3 TKE temporal dependence on THz fluence. Normalised experimental TKE of MAPbBr3 at RT for various THz fluences, showing that the temporal evolution is not affected by the THz field strength. Fig. 2 in the main text already showed that the 𝑡 = 0 ps peak scales quadratically with the THz field amplitude.
1367
1393
Fig. S3 | MAPbBr3 TKE azimuthal angle dependence at RT. a. TKE signal showing the 4-fold rotational symmetry of the measured signal. b, c. The TKE signal is normalized to show that the time constant of the tail is independent of the azimuthal angle. This agrees with the simulations for an isotropic material in Fig. S13, where the exponential tail originates from high absorption, dispersion, and pump-probe walkoff, none of which depend on the crystal azimuthal angle. Note that the azimuthal angle is not calibrated with respect to the crystal axes in this figure.
1399
+
1400
+ b
1401
+ a
1402
+ c
1403
+ Trans.
1404
+ Transient
1405
+ birefringence (arb. u.)
1406
+ birefringence (norm.)
1407
+ 1.2
1408
+ 1.2
1409
+ MAPbBrs (RT)
1410
+ MAPbBr3 (RT)
1411
+ 0.9
1412
+ Azimuthangle(°)
1413
+ 50
1414
+ 50
1415
+ 10
1416
+ 130
1417
+ 250
1418
+ 0.8
1419
+ 20
1420
+ 140
1421
+ 260
1422
+ C
1423
+ 100
1424
+ C
1425
+ 100
1426
+ 30
1427
+ 150
1428
+ 270
1429
+ 0.7
1430
+ ngle
1431
+ ngle
1432
+ 40
1433
+ 160
1434
+ 280
1435
+ 150
1436
+ 0.8
1437
+ nce
1438
+ 0.6
1439
+ 50
1440
+ 170
1441
+ 290
1442
+ 150Azimuth
1443
+ 0.6
1444
+ Azimuth
1445
+ 0.5
1446
+ 70
1447
+ 190
1448
+ 310
1449
+ 200
1450
+ 200
1451
+ 80
1452
+ 200
1453
+ 320
1454
+ 0.4
1455
+ 0.4
1456
+ 90
1457
+ 210
1458
+ 330
1459
+ 250
1460
+ 0.4
1461
+ 250
1462
+ 0.3
1463
+ ns.
1464
+ 100
1465
+ 220
1466
+ 340
1467
+ 0.2
1468
+ 110
1469
+ 230
1470
+ 350
1471
+ 300
1472
+ 0.2
1473
+ 120
1474
+ 240
1475
+ 360
1476
+ 0.2
1477
+ 300
1478
+ 0.1
1479
+ 0
1480
+ 350
1481
+ 350
1482
+ -1
1483
+ 2
1484
+ 3
1485
+ 0
1486
+ 3
1487
+ -2
1488
+ 0
1489
+ 2
1490
+ 4
1491
+ -1
1492
+ 6
1493
+ 1
1494
+ 2
1495
+ Time (ps)
1496
+ Time (ps)
1497
+ Time (ps)25
1498
Fig. S4 | MAPbBr3 TKE dependence on sample thickness. a. Normalised experimental TKE thickness dependence of MAPbBr3 at RT. The results agree well with the simulations in Fig. S13. b. Measured TKE peak signal as a function of sample thickness. The black line shows the expected signal dependence when accounting for strong THz absorption and dispersion, using the formula 𝑆(𝑑) = ∫₀^𝑑 d𝑧 ∫₀^∞ d𝜔 𝐸THz²(𝜔) exp(−𝛼(𝜔)𝑧) / ∫₀^{1000 µm} d𝑧 ∫₀^∞ d𝜔 𝐸THz²(𝜔) exp(−𝛼(𝜔)𝑧), i.e. normalized to the signal at 𝑑 = 1000 µm, where 𝐸THz(𝜔) is the THz pump spectrum and 𝛼 is the absorption coefficient of MAPbBr3 as extracted from the complex refractive index data in Fig. S11.
1528
1559
Fig. S5 | Comparison between the TKE in MAPbBr3 and diamond for estimating the THz nonlinear refractive index n2. Diamond has been shown to exhibit a strong THz-induced Kerr nonlinearity and to be a good nonlinear material in the THz range (51). 𝑛2 of diamond has been measured to be 3 × 10⁻¹⁶ cm²/W for 1 THz pump and 800 nm optical probing (52). For a 500 µm thick MAPbBr3 single crystal, the TKE peak signal is about 10 times larger than for 400 µm thick diamond.
1565
1579
Fig. S6 | CsPbBr3 TKE azimuthal angle dependence at RT. Although the main peak exhibits a 4-fold symmetry, the temporal evolution as a function of azimuthal angle is more complex than for MAPbBr3. As CsPbBr3 is in the orthorhombic phase at room temperature, this extra complexity might be explained by additional static birefringence and the resulting anisotropic light propagation, as seen in Fig. S14. Note that the azimuthal angle is not calibrated with respect to the crystal axes in this figure.
1585
+
1586
+ Transient
1587
+ birefringence (arb. u.)
1588
+ 0
1589
+ 0.4
1590
+ 50
1591
+ CsPbBr3 (RT)
1592
+ 0.3
1593
+ Azimuth angle (°)
1594
+ 100
1595
+ 0.2
1596
+ 150
1597
+ 0.1
1598
+ 200
1599
+ 0
1600
+ 250
1601
+ -0.1
1602
+ -0.2
1603
+ 300
1604
+ -0.3
1605
+ 350
1606
+ -1
1607
+ 0
1608
+ 1
1609
+ 2
1610
+ 3
1611
+ Time (ps)28
1612
Fig. S7 | MAPbBr3 TKE at 45° azimuthal angle at 80 K. a. MAPbBr3 TKE at 80 K for 0° and about 45° azimuthal angle. MAPbBr3 is orthorhombic at 80 K, which might explain the different overall signal shapes for the two orientations. However, both TKEs show a strong oscillatory signal. b. By subtracting fits to the tails (dotted line in a) from the TKEs at 0° and 45°, we extract the oscillatory signals. c. Fourier transforming the oscillatory signals in b reveals that the same 1.1 THz mode dominates the oscillatory response at both 0° and 45° azimuthal angle.
1620
1665
Fig. S8 | Lorentzian fits to spectral peaks in MAPbBr3 at 180 K and 80 K. a, c. Oscillatory signals extracted from the MAPbBr3 TKE at 180 K and 80 K in Fig. 3A, respectively. The signals are extracted by subtracting exponential fits to the tails from the main TKE signals. b. The modulus squared of the Fourier transform of the oscillatory signal at 180 K shows a broad peak at 1.5 THz, which we fit with a Lorentzian. The FWHM of the Lorentzian amplitude is 𝛥𝜈FWHM = 0.58 THz. This corresponds to a phonon lifetime of 𝜏 = 1/(2𝜋𝛥𝜈FWHM) = 0.27 ps. d. The modulus squared of the Fourier transform of the oscillatory signal at 80 K shows two peaks, at 1.14 THz and 1.39 THz. Fitting Lorentzians yields phonon lifetimes of 1.7(4) ps and 1.5(1) ps for the two peaks, respectively.
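The lifetime extraction can be checked on synthetic data: a damped cosine has a Lorentzian power spectrum whose width returns the damping time. Note that the numerical prefactor depends on whether the width is read from the amplitude or the power spectrum; the sketch below uses the power-spectrum FWHM, for which 𝜏 = 1/(𝜋𝛥𝜈). Parameters mimic the 80 K fit (1.14 THz, 1.7 ps), but the signal is synthetic.

```python
import numpy as np

# Check of the lifetime relation on synthetic data: a damped cosine
# exp(-t/tau) cos(2 pi f0 t) has a Lorentzian power spectrum whose FWHM
# dnu returns tau = 1/(pi * dnu). Parameters mimic the 80 K fit.
tau, f0, dt = 1.7, 1.14, 0.005            # ps, THz, ps
t = np.arange(0.0, 200.0, dt)             # long window so the decay completes
x = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)

freqs = np.fft.rfftfreq(t.size, dt)
power = np.abs(np.fft.rfft(x))**2

above = freqs[power >= power.max() / 2]   # frequencies above half maximum
dnu = above.max() - above.min()           # FWHM of the power spectrum (THz)
tau_est = 1.0 / (np.pi * dnu)
print(f"recovered tau = {tau_est:.2f} ps (input {tau} ps)")
```

The recovered lifetime matches the input within the spectral resolution, confirming that a linewidth fit of this kind is a valid route to the phonon damping time.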
1674
+
1675
+ a
1676
+ b
1677
+ X10-3
1678
+ 0.2
1679
+ . units)
1680
+ (arb. units)
1681
+ MAPbBr3 Single Crystal (180K)
1682
+ Lorentzianfit
1683
+ Wp = 1.5THz,
1684
+ 0.8
1685
+ Oscillatory signal (arb.
1686
+ 1 OscillatorModel
1687
+ T = 0.27(5)ps
1688
+ spectrum
1689
+ 0.6
1690
+ 0.4
1691
+ Power
1692
+ 0.2
1693
+ 0
1694
+ 2
1695
+ 6
1696
+ 0
1697
+ 2
1698
+ 30.3
1699
+ Oscillatory signal (arb. units)
1700
+ units
1701
+ MAPbBr3SingleCrystal(80K)
1702
+ Lorentzian fit
1703
+ 0.06
1704
+ wp = 1.14THz,
1705
+ 2 Oscillator Model
1706
+ t = 1.7(4)ps
1707
+ (arb.
1708
+ spectrum
1709
+ 0.04
1710
+ 0
1711
+ Lorentzian fit
1712
+ Wp = 1.39THz,
1713
+ 0.02
1714
+ t = 1.5(1)ps
1715
+ 0.1
1716
+ Power
1717
+ 0
1718
+ 0
1719
+ 5
1720
+ 10
1721
+ 15
1722
+ 20
1723
+ 25
1724
+ 0
1725
+ 2
1726
+ 3
1727
+ Time (ps)
1728
+ Frequency (THz)30
1729
Fig. S9 | BK7 TKE at room temperature. a. TKE of a BK7 substrate with 0.5 mm thickness. b. BK7 TKE relative to the TKE of a MAPbBr3 thin film on top of a 0.5 mm thick BK7 substrate at room temperature.
1733
1759
Fig. S10 | BK7 TKE for various temperatures. a. TKE temperature dependence of a BK7 substrate with 0.5 mm thickness. b. Relative strength and shape compared to a MAPbBr3 thin film on top of a BK7 substrate at room temperature and 80 K.
1763
1792
Simulation figures

Fig. S11 | Dispersion of MAPbBr3 in the THz region. a. Refractive index 𝑛 and extinction coefficient 𝜅 of MAPbBr3, calculated using the dielectric function from Sendner et al. (10). b. The absorption coefficient is calculated using the relation 𝛼 = 4𝜋𝜅/𝜆. The penetration depth is equal to 1/𝛼.
1798
1844
Fig. S12 | Extrapolated static birefringence of MAPbBr3 for the simulations of the low-temperature orthorhombic phase. In the optical region, the refractive index of CsPbBr3 is used as measured with 2D-OKE (46). The static birefringence of CsPbBr3 is then extrapolated to the THz region. 𝑛f and 𝑛s denote the refractive indices along the fast and slow crystal axes, and the static birefringence is defined as the difference between 𝑛f and 𝑛s.
1850
1871
+ Fig. S13 | Four-wave-mixing simulation for cubic MAPbBr3 for various thicknesses
1872
+ assuming an instantaneous electronic hyperpolarizability response only. a. shows the
1873
+ normalised TKE signal for various thickness. For thicknesses larger than 100 µm, we can see
1874
+ an exponential tail with a decay time constant largely independent of thickness. b. The
1875
+ normalised TKE signals for various thicknesses are plotted on top of each other. On top of the
1876
+ exponential tail, there are small modulations, whose onset depends on the thickness. The onset
1877
+ time can be roughly estimated by the 𝑡1 time (𝑡1 = (𝑛𝑔,𝑓(𝜔𝑇𝐻𝑧) − 𝑛𝑔,𝑓(𝜔𝑝𝑟))𝑑/𝑐0), where 𝑑
1878
+ is the sample thickness and 𝑛𝑔 is the group velocity refractive index (40).
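The onset-time estimate can be sketched as follows (the group indices and thickness in the example are hypothetical placeholders, not the simulated material parameters):

```python
C0 = 2.99792458e8  # speed of light in m/s

def onset_time_ps(n_g_thz, n_g_probe, thickness_m):
    """t1 = (n_g(omega_THz) - n_g(omega_probe)) * d / c0, in picoseconds."""
    return (n_g_thz - n_g_probe) * thickness_m / C0 * 1e12

# Placeholder example: group-index mismatch of 2 over a 500 um sample
t1 = onset_time_ps(4.0, 2.0, 500e-6)  # about 3.3 ps
```

The onset scales linearly with thickness, consistent with the thickness-dependent modulations described above.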
1879
+
1880
+ [Figure S13: panels a and b. Vertical axis: Transient birefringence (norm.). Horizontal axis: Time (ps), −2 to 10. Thickness labels: 900 μm, 500 μm, 300 μm, 200 μm, 100 μm, 0.4 μm.]
1911
+ Fig. S14 | Four-wave-mixing simulation for orthorhombic CsPbBr3 assuming an
1912
+ instantaneous electronic hyperpolarizability response only. In contrast to the isotropic
1913
+ simulation in Fig. S13, the azimuthal angle of the model crystal matters for the temporal TKE
1914
+ shape. a-d. Results for 0° azimuthal angle of crystal with respect to probe pulse polarization are
1915
+ shown for various thicknesses. For this angle, the birefringence experienced by the probe is
1916
+ maximized. For 0° azimuthal angle and for all thicknesses larger than 200 µm, we can see the
1917
+ appearance of a short-lived oscillatory signal of around 1.4 THz in (a-d). These oscillations
1918
+ arise due to static birefringence, but are too short-lived to explain our experimental observation
1919
+ at 80K as shown in (b, d). For 0.4 µm thickness, these oscillations due to static birefringence
1920
+ disappear. e-f. Results for 45° azimuthal angle for various thicknesses. For this angle, the
1921
+ birefringence experienced by the pump is maximized. The peak at t = 0 ps is vanishingly small
1922
+ in comparison to the oscillatory features that happen at later times, which is due to the input
1923
+ tensor symmetry of 𝑅. The small oscillatory features correspond to internal reflections, similar
1924
+ to the small modulations on top of the tail in Fig. S13. The onset time for these oscillatory
1925
+ features can be roughly estimated by the 𝑡1 time.
1926
+
1927
+ [Figure S14: panels a and b (0° azimuth angle, time domain), panels c and d (0° azimuth angle, spectra), panels e and f (45° azimuth angle). Time-domain axes: Transient birefringence (norm.) vs. Time (ps); spectral axes: Spectrum (norm.) vs. Frequency (THz), 0 to 5. Thickness labels: 900 μm, 500 μm, 200 μm, 0.4 μm. Panels b and d include: Exp. MAPbBr3 Single Crystal (80 K), 500 μm Simulation, THz Field Squared.]
1dE1T4oBgHgl3EQf5AXg/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
1dFAT4oBgHgl3EQfkB03/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3154e415d860e302f58ea59144e90e7566bb292672f1c0f9583cff7409633f56
3
+ size 151388
1tFST4oBgHgl3EQfWzjN/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c9ae64ee9b176e591c488588cbca548b239884f05eaab5081253fa2b70e45257
3
+ size 217552
29FQT4oBgHgl3EQf2zan/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b2f07ea9369cfff1983bced1ec8a7a43cde399d974948073f02f743c01cadad8
3
+ size 87545
4NE3T4oBgHgl3EQfogr8/content/2301.04635v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f1d251edb3b00da1bcbd6db29853d16fea9f91ba4db1e119767dab91ec63e2f6
3
+ size 522670
4NE3T4oBgHgl3EQfogr8/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:571692e802dd9208f57a5e7827c288e3a268638a5c936d46c7b1918bf16b58f4
3
+ size 177878
4tE0T4oBgHgl3EQfvQHn/content/tmp_files/2301.02617v1.pdf.txt ADDED
@@ -0,0 +1,969 @@
1
+ arXiv:2301.02617v1 [math.FA] 6 Jan 2023
2
+ LINEAR TOPOLOGICAL INVARIANTS FOR KERNELS OF
3
+ DIFFERENTIAL OPERATORS BY SHIFTED FUNDAMENTAL
4
+ SOLUTIONS
5
+ A. DEBROUWERE1 AND T. KALMES2
6
+ Abstract. We characterize the condition (Ω) for smooth kernels of partial differen-
7
+ tial operators in terms of the existence of shifted fundamental solutions satisfying
8
+ certain properties.
9
+ The conditions (PΩ) and (PΩ̄) for distributional kernels are
+ characterized in a similar way. By lifting theorems for Fréchet spaces and (PLS)-
11
+ spaces, this provides characterizations of the problem of parameter dependence for
12
+ smooth and distributional solutions of differential equations by shifted fundamen-
13
+ tal solutions.
14
+ As an application, we give a new proof of the fact that the space
15
+ {f ∈ E (X) | P(D)f = 0} satisfies (Ω) for any differential operator P(D) and any
16
+ open convex set X ⊆ Rd.
17
+ Keywords: Partial differential operators; Fundamental solutions; Linear topological
18
+ invariants.
19
+ MSC 2020: 46A63, 35E20, 46M18
20
+ 1. Introduction
21
+ In their seminal work [16] Meise, Taylor and Vogt characterized the constant co-
22
+ efficient linear partial differential operators P(D) = P(−i ∂/∂x1, . . . , −i ∂/∂xd) that have a
25
+ continuous linear right inverse on E (X) and/or D′(X) (X ⊆ Rd open) in terms of
26
+ the existence of certain shifted fundamental solutions of P(D). Later on, Frerick and
27
+ Wengenroth [8,27] gave a similar characterization of the surjectivity of P(D) on E (X),
28
+ D′(X), and D′(X)/E (X) as well as of the existence of right inverses of P(D) on the
29
+ latter space. Roughly speaking, these results assert that P(D) satisfies some condition
30
+ (e.g. being surjective on E (X)) if and only if for each compact subset K of X and
31
+ ξ ∈ X far enough away from K there is a shifted fundamental solution E for δξ such
32
+ that E satisfies a certain property on K. Of course, this property depends on the
33
+ condition one wants to characterize. Results of the same type have also been shown for
34
+ spaces of non-quasianalytic ultradifferentiable functions and ultradistributions [13,14]
35
+ and for spaces of real analytic functions [15]. The aim of this paper is to complement
36
+ 1Department of Mathematics and Data Science, Vrije Universiteit Brussel, Plein-
37
+ laan 2, 1050 Brussels, Belgium
38
+ 2Faculty of Mathematics, Chemnitz University of Technology, 09107 Chemnitz,
39
+ Germany
40
+ E-mail addresses: [email protected], [email protected].
41
+ 1
42
+
43
+ 2
44
+ A. DEBROUWERE AND T. KALMES
45
+ the above results by characterizing several linear topological invariants for smooth and
46
+ distributional kernels of P(D) by means of shifted fundamental solutions.
47
+ The study of linear topological invariants for kernels of P(D) goes back to the work
48
+ of Petzsche [19] and Vogt [23] and was reinitiated by Bonet and Domański [1, 2, 5].
49
+ It is motivated by the question of surjectivity of P(D) on vector-valued function and
50
+ distribution spaces, as we now proceed to explain.
51
+ We assume that the reader is
52
+ familiar with the condition (Ω) for Fréchet spaces [18] and the conditions (PΩ) and
+ (PΩ̄) for (PLS)-spaces [1, 5] (see also the preliminary Section 2). Set EP(X) = {f ∈
+ E (X) | P(D)f = 0} and D′P(X) = {f ∈ D′(X) | P(D)f = 0}. Suppose that P(D) is
56
+ surjective on E (X), respectively, D′(X). Given a locally convex space E, it is natural to
57
+ ask whether P(D) : E (X; E) → E (X; E), respectively, P(D) : D′(X; E) → D′(X; E)
58
+ is still surjective.
59
+ If E is a space of functions or distributions, this question is a
60
+ reformulation of the well-studied problem of parameter dependence for solutions of
61
+ partial differential equations; see [1, 2, 5] and the references therein.
62
+ The splitting
63
+ theory for Fréchet spaces [24] implies that the mapping P(D) : E (X; E) → E (X; E)
64
+ for E = D′(Y ) (Y ⊆ Rn open) or S ′(Rn) is surjective if and only if EP(X) satisfies
65
+ (Ω). Similarly, as an application of their lifting results for (PLS)-spaces, Bonet and
66
+ Domański showed that the mapping P(D) : D′(X; E) → D′(X; E) for E = D′(Y ) or
+ S ′(Rn) is surjective if and only if D′P(X) satisfies (PΩ) [1], while it is surjective for
+ E = A (Y ) if and only if D′P(X) satisfies (PΩ̄) [5].
70
+ Petzsche [19] showed that EP(X) satisfies (Ω) for any convex open set X1, while
71
+ Vogt proved that this is the case for an arbitrary open set X if P(D) is elliptic [23].
72
+ Similarly, D′P(X) satisfies (PΩ) for any convex open set X [1] and for an arbitrary
+ open set X if P(D) is elliptic [1, 9, 23]. On the negative side, the second author [12]
+ constructed a differential operator P(D) and an open set X ⊆ Rd such that P(D) is
+ surjective on D′(X) (and thus also on E (X)) but EP(X) and D′P(X) do not satisfy (Ω),
+ respectively, (PΩ). Furthermore, D′P(X) does not satisfy (PΩ̄) for any convex open
+ set X if P(D) is hypoelliptic and for an arbitrary open set X if P(D) is elliptic [5,25].
+ We refer to [3,5] and the references therein for further results concerning (Ω) for EP(X)
+ and (PΩ) and (PΩ̄) for D′P(X).
83
+ Apart from this classical application to the problem of surjectivity of P(D) on spaces
84
+ of vector-valued smooth functions and distributions, in our recent article [4], the linear
85
+ topological invariant (Ω) for EP(X) played an important role in establishing quantitative
86
+ approximation results of Runge type for several classes of partial differential operators.
87
+ See [7,20,21] for other works on this topic.
88
+ In the present note, we characterize the condition (Ω) for EP(X) and the conditions
89
+ (PΩ) and (PΩ̄) for D′P(X) in terms of the existence of certain shifted fundamental
+ solutions for P(D). By the above-mentioned results from [1, 5], the latter provides
+ characterizations of the problem of distributional and real analytic parameter dependence
+ for distributional solutions of the equation P(D)f = g by shifted fundamental
+ solutions. This answers a question of Domański [6, Problem 7.5] for distributions.
95
+ 1Petzsche actually showed this result under the additional hypothesis that P(D) is hypoelliptic.
96
+ However, as observed in [3], a careful inspection of his proof shows that this hypothesis can be omitted.
97
+
98
+ LINEAR TOPOLOGICAL INVARIANTS BY SHIFTED FUNDAMENTAL SOLUTIONS
99
+ 3
100
+ We now state our main result. Set N = {0, 1, 2, . . .}. Let Y ⊆ Rd be relatively
101
+ compact and open. For N ∈ N we define
102
+ ∥f∥Y,N = max{ |f^(α)(x)| : x ∈ Y , |α| ≤ N },   f ∈ CN(Y ),
+ and
+ ∥f∥∗Y,N = sup{ |⟨f, ϕ⟩| : ϕ ∈ DY , ∥ϕ∥Y,N ≤ 1 },   f ∈ (DY )′,
110
+ where DY denotes the Fréchet space of smooth functions with support in Y .
111
+ Theorem 1.1. Let P ∈ C[ξ1, . . . , ξd], let X ⊆ Rd be open, and let (Xn)n∈N be an
112
+ exhaustion by relatively compact open subsets of X.
113
+ (a) P(D) : E (X) → E (X) is surjective and EP(X) satisfies (Ω) if and only if
114
+ ∀ n ∈ N ∃ m ≥ n, N ∈ N ∀ k ≥ m, ξ ∈ X\Xm ∃ K ∈ N, s, C > 0 ∀ ε ∈ (0, 1)
115
+ ∃ Eξ,ε ∈ D′(Rd) with P(D)Eξ,ε = δξ in Xk such that
116
+ ∥Eξ,ε∥∗Xn,N ≤ ε   and   ∥Eξ,ε∥∗Xk,K ≤ C/ε^s.   (1.1)
123
+ (b) P(D) : D′(X) → D′(X) is surjective and D′P(X) satisfies (PΩ) if and only if
125
+ ∀ n ∈ N ∃ m ≥ n ∀ k ≥ m, N ∈ N, ξ ∈ X\Xm ∃ K ∈ N, s, C > 0 ∀ ε ∈ (0, 1)
126
+ ∃ Eξ,ε ∈ D′(Rd) ∩ CN(Xn) with P(D)Eξ,ε = δξ in Xk such that
127
+ ∥Eξ,ε∥Xn,N ≤ ε   and   ∥Eξ,ε∥∗Xk,K ≤ C/ε^s.   (1.2)
133
+ (c) P(D) : D′(X) → D′(X) is surjective and D′P(X) satisfies (PΩ̄) if and only if
+ (1.2) with “∃ K ∈ N, s, C > 0” replaced by “∀ s > 0 ∃ K ∈ N, C > 0” holds.
136
+ The proof of Theorem 1.1 will be given in Section 3. Interestingly, Theorem 1.1 is
137
+ somewhat of a different nature than the above-mentioned results from [8,16,27] in the
138
+ sense that the characterizing properties on the shifted fundamental solutions Eξ,ε are
139
+ not only about the behavior of Eξ,ε on the Xn but also on the larger set Xk. In this
140
+ regard, we mention that P(D) is surjective on E (X), respectively, D′(X) if and only
141
+ if (1.1), respectively, (1.2) without the assumption ∥Eξ,ε∥∗Xk,K ≤ C/ε^s holds [27].
144
+ It would be interesting to evaluate the conditions in Theorem 1.1 in specific cases
145
+ in order to obtain concrete necessary and sufficient conditions on X and P for EP(X)
146
+ to satisfy (Ω) and for D′P(X) to satisfy (PΩ) and (PΩ̄) (cf. [14, 16]).
148
+ We plan to
149
+ study this in the future. As a first result in this direction, we show in Section 4 that
150
+ EP(X) satisfies (Ω) for any differential operator P(D) and any open convex set X by
151
+ combining Theorem 1.1(a) with a powerful method to construct fundamental solutions
152
+ due to Hörmander [10, Proof of Theorem 7.3.2]. As mentioned above, this result is
153
+ originally due to Petzsche [19], who proved it with the aid of the fundamental principle
154
+ of Ehrenpreis. A completely different proof was recently given by the authors in [3].
155
+ Finally, we would like to point out that Theorem 1.1 implies that surjectivity of
156
+ P(D) on E (X) and (Ω) for EP(X) as well as surjectivity of P(D) on D′(X) and (PΩ),
157
+ respectively, (PΩ̄), for D′P(X) are preserved under taking finite intersections of open
159
+
162
+ sets. For (PΩ) this also follows from [1, Proposition 8.3] and the fact that surjectivity
163
+ of P(D) is preserved under taking finite intersections. However, for (Ω) and (PΩ̄) we
164
+ do not see how this may be shown without Theorem 1.1.
165
+ 2. Linear topological invariants
166
+ In this preliminary section we introduce the linear topological invariants (Ω) for
167
+ Fréchet spaces and (PΩ) and (PΩ̄) for (PLS)-spaces. We refer to [1, 5, 18] for more
168
+ information about these conditions and examples of spaces satisfying them.
169
+ Throughout, we use standard notation from functional analysis [18] and distribution
170
+ theory [10,22]. In particular, given a locally convex space E, we denote by U0(E) the
171
+ filter basis of absolutely convex neighborhoods of 0 in E and by B(E) the family of
172
+ all absolutely convex bounded sets in E.
173
+ 2.1. Projective spectra. A projective spectrum (of locally convex spaces)
174
+ E = (En, ̺^n_{n+1})n∈N
+ consists of locally convex spaces En and continuous linear maps ̺^n_{n+1} : En+1 → En,
+ called the spectral maps. We define ̺^n_n = idEn and ̺^n_m = ̺^n_{n+1} ◦ · · · ◦ ̺^{m−1}_m : Em → En
184
+ for n, m ∈ N with m > n. The projective limit of E is defined as
185
+ Proj E = { (xn)n∈N ∈ ∏_{n∈N} En | xn = ̺^n_{n+1}(xn+1) for all n ∈ N }.
194
+ For n ∈ N we write ̺n : Proj E → En, (xj)j∈N ↦ xn. We always endow Proj E with
+ its natural projective limit topology. For a projective spectrum E = (En, ̺^n_{n+1})n∈N of
+ Fréchet spaces, the projective limit Proj E is again a Fréchet space. We will implicitly
198
+ make use of the derived projective limit Proj1 E. We refer to [26, Sections 2 and 3] for
199
+ more information. In particular, see [26, Theorem 3.1.4] for an explicit definition of
200
+ Proj1 E.
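As a purely illustrative aside (not part of the paper), the compatibility condition defining Proj E can be mimicked in finite dimensions with a toy spectrum of truncation maps; all names below are invented for the illustration:

```python
def rho(n, x):
    """Toy spectral map rho^n_{n+1}: truncate a length-(n+1) tuple to length n."""
    assert len(x) == n + 1
    return x[:n]

def is_compatible(family):
    """Check x_n = rho^n_{n+1}(x_{n+1}) for the finite initial part of a family,
    where family[n] is a tuple of length n."""
    return all(family[n] == rho(n, family[n + 1])
               for n in range(len(family) - 1))

# The truncations of a single sequence form a compatible thread, i.e. one
# point of the projective limit:
seq = [1.0 / (k + 1) for k in range(6)]
family = [tuple(seq[:n]) for n in range(6)]
compatible = is_compatible(family)  # True
```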
201
+ 2.2. The condition (Ω) for Fréchet spaces. A Fréchet space E is said to satisfy
202
+ the condition (Ω) [18] if
203
+ ∀U ∈ U0(E) ∃V ∈ U0(E) ∀W ∈ U0(E) ∃s, C > 0 ∀ε ∈ (0, 1) : V ⊆ εU + (C/ε^s)W.
205
+ The following result will play a key role in the proof of Theorem 1.1(a).
206
+ Lemma 2.1. [3, Lemma 2.4] Let E = (En, ̺^n_{n+1})n∈N be a projective spectrum of Fréchet
208
+ spaces. Then, Proj1 E = 0 and Proj E satisfies (Ω) if and only if
209
+ ∀n ∈ N, U ∈ U0(En) ∃m ≥ n, V ∈ U0(Em) ∀k ≥ m, W ∈ U0(Ek) ∃s, C > 0 ∀ε ∈ (0, 1) :
210
+ ̺^n_m(V ) ⊆ εU + (C/ε^s) ̺^n_k(W ).   (2.1)
215
+
218
+ 2.3. The conditions (PΩ) and (PΩ̄) for (PLS)-spaces. A locally convex space E
219
+ is called a (PLS)-space if it can be written as the projective limit of a spectrum of
220
+ (DFS)-spaces.
221
+ Let E = (En, ̺^n_{n+1})n∈N be a spectrum of (DFS)-spaces. We call E strongly reduced if
+ ∀n ∈ N ∃m ≥ n : ̺^n_m(Em) ⊆ ̺n(Proj E).
225
+ The spectrum E is said to satisfy (PΩ) if
226
+ ∀n ∈ N ∃m ≥ n ∀k ≥ m ∃B ∈ B(En) ∀M ∈ B(Em) ∃K ∈ B(Ek), s, C > 0 ∀ε ∈ (0, 1) :
227
+ ̺^n_m(M) ⊆ εB + (C/ε^s) ̺^n_k(K).   (2.2)
232
+ The spectrum E is said to satisfy (PΩ̄) if (2.2) with “∃K ∈ B(Ek), s, C > 0” replaced
233
+ by “∀s > 0 ∃K ∈ B(Ek), C > 0” holds.
234
+ A (PLS)-space E is said to satisfy (PΩ), respectively, (PΩ̄) if E = Proj E for some
+ strongly reduced spectrum E of (DFS)-spaces that satisfies (PΩ), respectively, (PΩ̄).
236
+ This notion is well-defined as [26, Proposition 3.3.8] yields that all strongly reduced
237
+ projective spectra E of (DFS)-spaces with E = Proj E are equivalent (in the sense
238
+ of [26, Definition 3.1.6]).
239
+ The bipolar theorem and [1, Lemma 4.5] imply that the
240
+ above definitions of (PΩ) and (PΩ̄) are equivalent to the original ones from [1].
241
+ 3. Proof of Theorem 1.1
242
+ This section is devoted to the proof of Theorem 1.1. We fix P ∈ C[ξ1, . . . , ξd]\{0},
243
+ an open set X ⊆ Rd, and an exhaustion by relatively compact open subsets (Xn)n∈N of
244
+ X. For n, N ∈ N we write ∥ · ∥n,N = ∥ · ∥Xn,N and ∥ · ∥∗n,N = ∥ · ∥∗Xn,N. For ξ ∈ Rd and
247
+ r > 0 we denote by B(ξ, r) the open ball in Rd with center ξ and radius r. Moreover,
248
+ for p ∈ {1, ∞} and N ∈ N we set
249
+ ∥ϕ∥Lp,N = max_{|α|≤N} ∥ϕ^(α)∥Lp,   ϕ ∈ D(Rd).
252
+ We fix χ ∈ D(B(0, 1)) with χ ≥ 0 and ∫_{Rd} χ(x) dx = 1, and set χε(x) = ε^{−d}χ(x/ε) for
255
+ ε > 0.
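As an illustrative aside (not from the paper), the normalisation and scaling of such a mollifier can be checked numerically in one dimension:

```python
import numpy as np

def chi_bump(x):
    """Standard smooth bump supported in (-1, 1) (not yet normalised)."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-1.5, 1.5, 20001)
dx = x[1] - x[0]
norm = chi_bump(x).sum() * dx  # normalisation constant so the integral is 1

def chi_eps(x, eps):
    """chi_eps(x) = eps^{-1} chi(x/eps) for d = 1; the integral stays 1."""
    return chi_bump(x / eps) / (norm * eps)

integral = chi_eps(x, 0.5).sum() * dx  # approximately 1 for every eps
```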
256
+ 3.1. Proof of Theorem 1.1(a). We write E (X) for the space of smooth functions in
257
+ X endowed with its natural Fréchet space topology. We set
258
+ EP(X) = {f ∈ E (X) | P(D)f = 0}
259
+ and endow it with the relative topology induced by E (X).
260
+ Let n ∈ N.
261
+ We write E (Xn) for the space of smooth functions in Xn endowed
262
+ with its natural Fréchet space topology, i.e., the one induced by the sequence of norms
263
+ (∥ · ∥n,N)N∈N. We define
264
+ EP(Xn) = {f ∈ E (Xn) | P(D)f = 0}
265
+
268
+ and endow it with the relative topology induced by E (Xn). Since EP(Xn) is closed in
269
+ E (Xn), it is a Fréchet space. For N ∈ N we set
270
+ Un,N = {f ∈ EP(Xn) | ∥f∥n,N ≤ 1}.
271
+ Note that ((1/(N + 1)) Un,N)N∈N is a decreasing fundamental sequence of absolutely convex
+ neighborhoods of 0 in EP(Xn).
278
+ Consider the projective spectrum (EP(Xn), ̺^n_{n+1})n∈N with ̺^n_{n+1} the restriction map
+ from EP(Xn+1) to EP(Xn). Then,
+ EP(X) = Proj(EP(Xn), ̺^n_{n+1})n∈N.
284
+ By [3, Lemma 3.1(i)] (see also [26, Section 3.4.4]), P(D) : E (X) → E (X) is surjective
285
+ if and only if
286
+ Proj1(EP(Xn), ̺^n_{n+1})n∈N = 0.
288
+ Hence, Lemma 2.1 and a simple rescaling argument yield that P(D) : E (X) → E (X)
289
+ is surjective and EP(X) satisfies (Ω) if and only if
290
+ ∀n, N ∈ N ∃m ≥ n, M ≥ N ∀k ≥ m, K ≥ M ∃s, C > 0 ∀ε ∈ (0, 1) :
+ Um,M ⊆ εUn,N + (C/ε^s) Uk,K,   (3.1)
294
+ where we did not write the restriction maps explicitly, as we shall not do in the sequel
295
+ either. We are ready to show Theorem 1.1(a).
296
+ Sufficiency of (1.1). It suffices to show (3.1). Let n, N ∈ N be arbitrary. Choose m̃, Ñ
+ according to (1.1) for n + 1. Set m = m̃ + 1 and M = N + Ñ + deg P + 1. Let
+ k ≥ m, K ≥ M be arbitrary. Choose ψ ∈ D(Xm) such that ψ = 1 in a neighborhood
+ of Xm̃. Pick ε0 ∈ (0, 1] such that ψ = 1 on Xm̃ + B(0, ε0), supp ψ + B(0, ε0) ⊆ Xm,
+ Xn + B(0, ε0) ⊆ Xn+1, Xk + B(0, ε0) ⊆ Xk+1. Cover the compact set Xk\Xm̃ by
+ finitely many balls B(ξj, ε0), j ∈ J, with ξj ∈ X\Xm̃, and choose ϕj ∈ D(B(ξj, ε0)),
+ j ∈ J, such that Σ_{j∈J} ϕj = 1 in a neighborhood of Xk\Xm̃. As J is finite, (1.1) for
+ k + 1 implies that there are K̃ ∈ N, s̃, C̃ > 0 such that for all ε ∈ (0, ε0) there exist
+ Eξj,ε ∈ D′(Rd), j ∈ J, with P(D)Eξj,ε = δξj in Xk+1 such that
307
+ ∥Eξj,ε∥∗_{n+1,Ñ} ≤ ε   and   ∥Eξj,ε∥∗_{k+1,K̃} ≤ C̃/ε^s̃.   (3.2)
317
+ Let f ∈ EP(Xm) be arbitrary. For ε ∈ (0, ε0) we define fε = (ψf) ∗ χε ∈ EP(Xm̃) and
+ hε = Σ_{j∈J} Eξj,ε ∗ δ−ξj ∗ (ϕjP(D)fε).
322
+ Since δ−ξj ∗ (ϕjP(D)fε) = (ϕjP(D)fε)(· + ξj) ∈ D(B(0, ε0)), j ∈ J, it holds that
+ P(D)hε = Σ_{j∈J} ϕjP(D)fε in a neighborhood of Xk. As Σ_{j∈J} ϕj = 1 in a neighborhood
+ of Xk\Xm̃ and fε ∈ EP(Xm̃), we obtain that P(D)hε = P(D)fε in a neighborhood of
+ Xk and thus hε ∈ EP(Xm̃) and fε − hε ∈ EP(Xk). We decompose f as follows
328
+ f = (f − fε + hε) + (fε − hε) ∈ EP(Xn) + EP(Xk).
329
+
332
+ We claim that there are s, Ci > 0, i = 1, 2, 3, 4, such that for all f ∈ EP(Xm) and
+ ε ∈ (0, ε0)
+ ∥f − fε∥n,N ≤ C1 ε ∥f∥m,M,   ∥hε∥n,N ≤ C2 ε ∥f∥m,M,
+ ∥fε∥k,K ≤ (C3/ε^s) ∥f∥m,M,   ∥hε∥k,K ≤ (C4/ε^s) ∥f∥m,M,
340
+ which implies (3.1). Let f ∈ EP(Xm) and ε ∈ (0, ε0) be arbitrary. By the mean value
341
+ theorem, we find that
342
+ ∥f − fε∥n,N ≤ ε√d ∥f∥n+1,N+1 ≤ ε√d ∥f∥m,M.
347
+ Furthermore, it holds that
348
+ ∥fε∥k,K ≤ (∥χ∥L1,K/ε^K) ∥ψf∥L∞ ≤ (∥χ∥L1,K ∥ψ∥L∞/ε^K) ∥f∥m,M.
353
+ By the first inequality in (3.2), we obtain that
354
+ ∥hε∥n,N ≤ Σ_{j∈J} ∥Eξj,ε∥∗_{n+1,Ñ} ∥(ϕjP(D)fε)(· + ξj)∥_{L∞,N+Ñ}
+ ≤ ε Σ_{j∈J} ∥ϕj((P(D)(ψf)) ∗ χε)∥_{L∞,N+Ñ}
+ ≤ C′2 ε ∥P(D)(ψf)∥_{L∞,N+Ñ}
+ ≤ C2 ε ∥f∥_{m,N+Ñ+deg P} ≤ C2 ε ∥f∥m,M,
+ for some C′2, C2 > 0. Similarly, by the second inequality in (3.2), we find that
377
+ ∥hε∥k,K ≤ Σ_{j∈J} ∥Eξj,ε∥∗_{k+1,K̃} ∥(ϕjP(D)fε)(· + ξj)∥_{L∞,K+K̃}
+ ≤ (C̃/ε^s̃) Σ_{j∈J} ∥ϕj((P(D)(ψf)) ∗ χε)∥_{L∞,K+K̃}
+ ≤ (C′4/ε^s̃) ∥P(D)(ψf)∥L∞ ∥χε∥_{L1,K+K̃}
+ ≤ (C4/ε^{s̃+K+K̃}) ∥f∥m,deg P ≤ (C4/ε^{s̃+K+K̃}) ∥f∥m,M,
+ for some C′4, C4 > 0. This proves the claim with s = s̃ + K + K̃.
406
+
407
+ Necessity of (1.1). As explained above, condition (3.1) holds.
408
+ Let F ∈ D′(Rd) be
+ a fundamental solution for P(D) of finite order q. Let n ∈ N be arbitrary. Choose
+ m, M̃ ∈ N according to (3.1) for n and 0. Set N = q + 1. Let k ≥ m and ξ ∈ X\Xm
+ be arbitrary. Set K = q + 1. (3.1) for k + 1 and 0 implies that there are C̃, s̃ > 0 such
+ that for all δ ∈ (0, 1)
414
+ Um,M̃ ⊆ δUn,0 + (C̃/δ^s̃) Uk+1,0.   (3.3)
419
+
422
+ Let ε0 ∈ (0, 1] be such that B(ξ, ε0) ⊆ X\Xm. Set Fξ = F ∗ δξ ∈ D′(Rd). For all
423
+ ε ∈ (0, ε0) it holds that Fξ ∗ χε ∈ EP(Xm) and
424
+ ∥Fξ ∗ χε∥m,M̃ ≤ C′/ε^{d+M̃+q}
+ with C′ = ∥Fξ∥∗_{Xm+B(0,ε0),q}. Hence, (3.3) with δ = ε^{d+M̃+q+1} implies that
432
+ Fξ ∗ χε ∈ (C′/ε^{d+M̃+q}) Um,M̃ ⊆ C′ε Un,0 + (C′C̃/ε^s) Uk+1,0,
+ with s = d + M̃ + q + s̃(d + M̃ + q + 1). Let fξ,ε ∈ C′ε Un,0 and hξ,ε ∈ C′C̃ ε^{−s} Uk+1,0 be
+ such that
+ Fξ ∗ χε = fξ,ε + hξ,ε.   (3.4)
444
+ Choose ψ ∈ D(Xk+1) such that ψ = 1 in a neighborhood of Xk and define Eξ,ε =
445
+ Fξ − ψhξ,ε ∈ D′(Rd). Then, P(D)Eξ,ε = δξ in Xk. Moreover, for all ε ∈ (0, ε0) it holds
446
+ that
447
+ ∥Eξ,ε∥∗_{n,q+1} ≤ ∥Fξ − Fξ ∗ χε∥∗_{n,q+1} + ∥Fξ ∗ χε − ψhξ,ε∥∗_{n,q+1}
+ ≤ ∥Fξ∥∗_{Xn+B(0,ε0),q} √d ε + ∥fξ,ε∥n,0
+ ≤ (∥Fξ∥∗_{Xn+B(0,ε0),q} √d + C′) ε,
463
+ where we used the mean value theorem, and
464
+ ∥Eξ,ε∥∗_{k,q+1} ≤ ∥Fξ∥∗_{k,q+1} + ∥ψhξ,ε∥∗_{k,q+1}
+ ≤ ∥Fξ∥∗_{k,q+1} + |Xk| ∥hξ,ε∥k,0
+ ≤ ∥Fξ∥∗_{k,q+1} + C′C̃ |Xk|/ε^s,
478
+ where |Xk| denotes the Lebesgue measure of Xk. This completes the proof.
479
+
480
+ 3.2. Proof of Theorem 1.1(b) and (c). We write D′(X) for the space of distribu-
481
+ tions in X endowed with its strong dual topology. We set
482
+ D′P(X) = {f ∈ D′(X) | P(D)f = 0}
484
+ and endow it with the relative topology induced by D′(X).
485
+ In [27, Theorem (5)] it is shown that the mapping P(D) : D′(X) → D′(X) is
486
+ surjective if and only if
487
+ ∀ n ∈ N ∃ m ≥ n ∀ k ≥ m, N ∈ N, ξ ∈ X\Xm, ε ∈ (0, 1)
488
+ ∃ Eξ,ε ∈ D′(Rd) ∩ CN(Xn) with P(D)Eξ,ε = δξ in Xk such that
489
+ ∥Eξ,ε∥Xn,N ≤ ε.   (3.5)
491
+ Let n ∈ N. We endow the space DXn of smooth functions with support in Xn with
492
+ the relative topology induced by E (Xn). We write D′(Xn) for the strong dual of DXn.
493
+
496
+ Then, D′(Xn) is a (DFS)-space. We define
+ D′P(Xn) = {f ∈ D′(Xn) | P(D)f = 0}
+ and endow it with the relative topology induced by D′(Xn). Since D′P(Xn) is closed
+ in D′(Xn), it is a (DFS)-space. For N ∈ N we set
+ Bn,N = {f ∈ D′P(Xn) | ∥f∥∗n,N ≤ 1}.
505
+ Note that (N Bn,N)N∈N is an increasing fundamental sequence of absolutely convex
+ bounded sets in D′P(Xn).
508
+ Consider the projective spectrum (D′P(Xn), ̺^n_{n+1})n∈N with ̺^n_{n+1} the restriction map
+ from D′P(Xn+1) to D′P(Xn). Then,
+ D′P(X) = Proj(D′P(Xn), ̺^n_{n+1})n∈N.
519
+ By [3, Lemma 3.1(ii)] (see also [26, Section 3.4.5]), P(D) : D′(X) → D′(X) is surjective
520
+ if and only if
521
+ Proj1(D′P(Xn), ̺^n_{n+1})n∈N = 0.
524
+ The latter condition implies that (D′P(Xn), ̺^n_{n+1})n∈N is strongly reduced [26, Theorem
+ 3.2.9]. Hence, if P(D) : D′(X) → D′(X) is surjective, D′P(X) satisfies (PΩ),
+ respectively, (PΩ̄) if and only if (D′P(Xn), ̺^n_{n+1})n∈N does so. A simple rescaling
+ argument yields that (D′P(Xn), ̺^n_{n+1})n∈N satisfies (PΩ) if and only if
535
+ ∀n ∈ N ∃m ≥ n ∀k ≥ m ∃N ∈ N ∀M ∈ N ∃K ∈ N, s, C > 0 ∀ε ∈ (0, 1) :
536
+ Bm,M ⊆ εBn,N + (C/ε^s) Bk,K,   (3.6)
539
+ where, as before, we did not write the restriction maps explicitly. Similarly, the
+ spectrum (D′P(Xn), ̺^n_{n+1})n∈N satisfies (PΩ̄) if and only if (3.6) with “∃K ∈ N, s, C > 0”
+ replaced by “∀s > 0 ∃K ∈ N, C > 0” holds. We now show Theorem 1.1(b).
544
+ Sufficiency of (1.2). Clearly, (1.2) implies (3.5) and thus that P(D) : D′(X) → D′(X)
545
+ is surjective. Hence, by the above discussion, it suffices to show (3.6). Let n ∈ N
546
+ be arbitrary. Choose m according to (1.2) for n + 1. Let k ≥ m be arbitrary. Set
547
+ N = 0. Let M ∈ N be arbitrary. Pick ε0 ∈ (0, 1] such that Xn + B(0, ε0) ⊆ Xn+1
548
+ and Xk + B(0, ε0) ⊆ Xk+1.
549
+ Cover the compact set Xk\Xm by finitely many balls
550
+ B(ξj, ε0), j ∈ J, with ξj ∈ X\Xm, and choose ϕj ∈ D(B(ξj, ε0)), j ∈ J, such that
+ Σ_{j∈J} ϕj = 1 in a neighborhood of Xk\Xm. As J is finite, (1.2) for k + 1 and M + deg P
+ implies that there are K̃ ∈ N, s̃, C̃ > 0 such that for all ε ∈ (0, ε0) there exist Eξj,ε ∈
555
+ (3.7)
556
+ ∥Eξj,ε∥Xn+1,M+deg P ≤ ε
557
+ and
558
+ ∥Eξj,ε∥∗
559
+ k+1, �
560
+ K ≤
561
+ �C
562
+ ε�s.
563
+
566
+ Pick ψ ∈ D(Xm) with ψ = 1 in a neighborhood of Xn.
567
+ Let f ∈ D′P(Xm) with ∥f∥∗m,M < ∞ be arbitrary. For ε ∈ (0, ε0) we define
+ hε = Σ_{j∈J} Eξj,ε ∗ δ−ξj ∗ (ϕjP(D)(ψf)).
575
+ By the same reasoning as in the proof of part (a) it follows that hε ∈ D′P(Xn) and
+ ψf − hε ∈ D′P(Xk). Furthermore, as Eξj,ε ∈ D′(Rd) ∩ CM+deg P(Xn+1) and the distributions
+ δ−ξj ∗ (ϕjP(D)(ψf)) = ϕjP(D)(ψf)(· + ξj), j ∈ J, have order at most M + deg P and
+ are supported in B(0, ε0), it holds that hε ∈ C(Xn). We decompose f as follows in Xn:
+ f = ψf = hε + (ψf − hε) ∈ (D′P(Xn) ∩ C(Xn)) + D′P(Xk).
584
+ We claim that there are K ∈ N, s, C1, C2 > 0 such that for all f ∈ D′P(Xm) with
+ ∥f∥∗m,M < ∞ and ε ∈ (0, ε0)
+ ∥hε∥∗n,0 ≤ C1 ε ∥f∥∗m,M,   ∥ψf − hε∥∗k,K ≤ (C2/ε^s) ∥f∥∗m,M,
595
+ which implies (3.6). Let f ∈ D′P(Xm) with ∥f∥∗m,M < ∞ and ε ∈ (0, ε0) be arbitrary.
598
+ Choose ρ ∈ D(Xm) with ρ = 1 in a neighborhood of supp ψ. The first inequality in
599
+ (3.7) implies that
600
+ ∥hε∥∗n,0 ≤ |Xn| ∥hε∥n,0
+ ≤ |Xn| Σ_{j∈J} ∥P(D)(ψf)∥∗_{m,M+deg P} sup_{x∈Xn} ∥(ϕjρ)Eξj,ε(x + ξj − ·)∥_{L∞,M+deg P}
+ ≤ C1 ∥f∥∗m,M ∥Eξj,ε∥_{n+1,M+deg P}
+ ≤ C1 ε ∥f∥∗m,M
618
+ for some C1 > 0. Next, by the second inequality in (3.7), we obtain that for all ϕ ∈ DXk
619
+ |⟨hε, ϕ⟩| ≤ Σ_{j∈J} |⟨Eξj,ε ∗ δ−ξj ∗ (ϕjP(D)(ψf)), ϕ⟩|
+ = Σ_{j∈J} |⟨Eξj,ε, (δ−ξj ∗ (ϕjP(D)(ψf)))∨ ∗ ϕ⟩|
+ ≤ Σ_{j∈J} ∥Eξj,ε∥∗_{k+1,K̃} ∥(δ−ξj ∗ (ϕjP(D)(ψf)))∨ ∗ ϕ∥_{L∞,K̃}
+ ≤ (C̃/ε^s̃) Σ_{j∈J} ∥P(D)(ψf)∥∗_{m,M+deg P} sup_{x∈Rd} ∥(ϕjρ)ϕ(· − ξj − x)∥_{L∞,K̃+M+deg P}
+ ≤ (C′2/ε^s̃) ∥f∥∗m,M ∥ϕ∥_{L∞,K̃+M+deg P},
+ for some C′2 > 0, whence
+ ∥hε∥∗_{k,K̃+M+deg P} ≤ (C′2/ε^s̃) ∥f∥∗m,M.
658
+
659
+ LINEAR TOPOLOGICAL INVARIANTS BY SHIFTED FUNDAMENTAL SOLUTIONS
660
+ 11
661
+ Furthermore,
+ $$\|\psi f\|^*_{k,\widetilde{K}+M+\deg P} \leq C_2'' \|f\|^*_{m,M},$$
+ for some $C_2'' > 0$. Therefore,
+ $$\|\psi f - h_\varepsilon\|^*_{k,\widetilde{K}+M+\deg P} \leq \Big( C_2'' + \frac{C_2'}{\varepsilon^{\widetilde{s}}} \Big) \|f\|^*_{m,M}.$$
+ This shows the claim with $K = \widetilde{K} + M + \deg P$ and $s = \widetilde{s}$.
678
+
679
+ Necessity of (1.2). As explained above, conditions (3.5) and (3.6) hold. Let $n \in \mathbb{N}$
+ be arbitrary. Choose $\widetilde{m}$ according to (3.6) for $n + 1$ and $m$ according to (3.5) for $\widetilde{m}$.
+ Let $k \geq m$, $N \in \mathbb{N}$, and $\xi \in X \setminus X_m$ be arbitrary. (3.5) for $k$ and $N + 1$ implies that
+ there exists $F_\xi \in \mathcal{D}'(\mathbb{R}^d) \cap C^{N+1}(X_{\widetilde{m}})$ with $P(D)F_\xi = \delta_\xi$ in $X_k$ such that
+ $\|F_\xi\|_{\widetilde{m},N+1} \leq \min\{1, 1/|X_{\widetilde{m}}|\}$. By (3.6) for $k + 1$, we obtain that there are
+ $\widetilde{N}, \widetilde{K} \in \mathbb{N}$ and $\widetilde{s}, \widetilde{C} > 0$ such that for all $\delta \in (0, 1)$
+ $$ (3.8) \qquad B_{\widetilde{m},0} \subseteq \delta B_{n+1,\widetilde{N}} + \frac{\widetilde{C}}{\delta^{\widetilde{s}}} B_{k+1,\widetilde{K}}. $$
+ Note that $F_\xi \in \mathcal{D}'_P(X_{\widetilde{m}})$ and $\|F_\xi\|^*_{\widetilde{m},0} \leq |X_{\widetilde{m}}| \|F_\xi\|_{\widetilde{m},0} \leq 1$, whence $F_\xi \in B_{\widetilde{m},0}$. Therefore,
+ (3.8) yields that for all $\delta \in (0, 1)$ there are $G_{\xi,\delta} \in \delta B_{n+1,\widetilde{N}}$ and $H_{\xi,\delta} \in \widetilde{C}\delta^{-\widetilde{s}} B_{k+1,\widetilde{K}}$
+ such that $F_\xi = G_{\xi,\delta} + H_{\xi,\delta}$ in $X_{n+1}$. Let $\psi \in \mathcal{D}(X_{k+1})$ be such that $\psi = 1$ on
+ a neighborhood of $X_k$. Choose $\varepsilon_0 \in (0, 1]$ such that $\psi = 1$ on $X_k + B(0, \varepsilon_0)$ and
+ $X_n + B(0, \varepsilon_0) \subseteq X_{n+1}$. For $\delta \in (0, 1)$ and $\varepsilon \in (0, \varepsilon_0)$ we define
+ $$E_{\xi,\varepsilon,\delta} = F_\xi - (\psi H_{\xi,\delta}) * \chi_\varepsilon \in \mathcal{D}'(\mathbb{R}^d).$$
+ Since $H_{\xi,\delta} \in \mathcal{D}'_P(X_{k+1})$, we have that $(\psi H_{\xi,\delta}) * \chi_\varepsilon \in \mathcal{E}_P(X_k)$. This implies that
+ $E_{\xi,\varepsilon,\delta} \in C^N(X_n)$ and $P(D)E_{\xi,\varepsilon,\delta} = \delta_\xi$ in $X_k$. As $\psi = 1$ on $X_{n+1}$, it holds that
+ $(\psi H_{\xi,\delta}) * \chi_\varepsilon = H_{\xi,\delta} * \chi_\varepsilon$ on $X_n$. Hence, we obtain that
+ $$\|E_{\xi,\varepsilon,\delta}\|_{n,N} \leq \|F_\xi - F_\xi * \chi_\varepsilon\|_{n,N} + \|F_\xi * \chi_\varepsilon - H_{\xi,\delta} * \chi_\varepsilon\|_{n,N} \leq d\varepsilon + \|G_{\xi,\delta} * \chi_\varepsilon\|_{n,N} \leq d\varepsilon + \|\chi\|_{L^\infty,N+\widetilde{N}} \frac{\delta}{\varepsilon^{N+\widetilde{N}+d}},$$
+ where we used the mean value theorem. Let $r \in \mathbb{N}$ be the order of $F_\xi$ in $X_k$ and set
+ $K = \max\{r, \widetilde{K}\}$. Then,
+ $$\|E_{\xi,\varepsilon,\delta}\|^*_{k,K} \leq \|F_\xi\|^*_{k,r} + \|(\psi H_{\xi,\delta}) * \chi_\varepsilon\|^*_{k,\widetilde{K}} \leq \|F_\xi\|^*_{k,r} + \frac{C'}{\delta^{\widetilde{s}}},$$
+ for some $C' > 0$. For $\varepsilon \in (0, \varepsilon_0)$ we set $\delta_\varepsilon = \varepsilon^{N+\widetilde{N}+d+1}$ and $E_{\xi,\varepsilon} = E_{\xi,\varepsilon,\delta_\varepsilon}$. We obtain
+ that $E_{\xi,\varepsilon} \in C^N(X_n)$ with $P(D)E_{\xi,\varepsilon} = \delta_\xi$ in $X_k$. Furthermore, there are $C_1, C_2 > 0$
+ such that for all $\varepsilon \in (0, \varepsilon_0)$
+ $$\|E_{\xi,\varepsilon}\|_{n,N} \leq C_1 \varepsilon \qquad \text{and} \qquad \|E_{\xi,\varepsilon}\|^*_{k,K} \leq \frac{C_2}{\varepsilon^s}$$
+ with $s = \widetilde{s}(N + \widetilde{N} + d + 1)$. This completes the proof.
741
+
742
+
743
745
+ Theorem 1.1(c) can be shown in the same way as Theorem 1.1(b) (see in particular
+ the values of $s$ in terms of $\widetilde{s}$ in the above proof). We leave the details to the reader.
+ 4. The condition (Ω) for $\mathcal{E}_P(X)$ if $X$ is convex
+ In this final section we use Theorem 1.1(a) to prove that $P(D) \colon \mathcal{E}(X) \to \mathcal{E}(X)$ is
+ surjective and $\mathcal{E}_P(X)$ satisfies (Ω) for any non-zero differential operator $P(D)$ and any
+ open convex set $X \subseteq \mathbb{R}^d$. To this end, we show that (1.1) holds for any exhaustion by
+ relatively compact open convex subsets $(X_n)_{n \in \mathbb{N}}$ of $X$. The latter is a consequence of
+ the following lemma.
753
+ Lemma 4.1. Let $P \in \mathbb{C}[\xi_1, \ldots, \xi_d] \setminus \{0\}$. Let $K \subseteq \mathbb{R}^d$ be compact and convex, and
+ let $\xi \in \mathbb{R}^d$ be such that $\xi \notin K$. For all $\varepsilon \in (0, 1)$ there exists $E_{\xi,\varepsilon} \in \mathcal{D}'(\mathbb{R}^d)$ with
+ $P(D)E_{\xi,\varepsilon} = \delta_\xi$ in $\mathbb{R}^d$ such that
+ $$\|E_{\xi,\varepsilon}\|^*_{K,d+1} \leq \varepsilon$$
+ and for every compact and convex $L \subseteq \mathbb{R}^d$ there are $s, C > 0$ such that
+ $$\|E_{\xi,\varepsilon}\|^*_{L,d+1} \leq \frac{C}{\varepsilon^s}.$$
762
+ The rest of this section is devoted to the proof of Lemma 4.1, which is based on a
+ construction of fundamental solutions due to Hörmander [10, proof of Theorem 7.3.10].
+ We need some preparation. For $Q \in \mathbb{C}[\xi_1, \ldots, \xi_d]$ we define
+ $$\widetilde{Q}(\zeta) = \Big( \sum_{\alpha \in \mathbb{N}^d} |Q^{(\alpha)}(\zeta)|^2 \Big)^{1/2}, \qquad \zeta \in \mathbb{C}^d.$$
772
+ Let $m \in \mathbb{N}$. We denote by $\operatorname{Pol}^\circ(m)$ the finite-dimensional vector space of polynomials
+ in $d$ variables of degree at most $m$, with the origin removed. By [10, Lemma
+ 7.3.11 and Lemma 7.3.12] there exists a non-negative $\Phi \in C^\infty(\operatorname{Pol}^\circ(m) \times \mathbb{C}^d)$ such that:
+ (i) For all $Q \in \operatorname{Pol}^\circ(m)$ it holds that $\Phi(Q, \zeta) = 0$ if $|\zeta| > 1$ and $\int_{\mathbb{C}^d} \Phi(Q, \zeta)\,d\zeta = 1$.
+ (ii) For all entire functions $F$ on $\mathbb{C}^d$ and $Q \in \operatorname{Pol}^\circ(m)$ it holds that $\int_{\mathbb{C}^d} F(\zeta)\Phi(Q, \zeta)\,d\zeta = F(0)$.
+ (iii) There is $A > 0$ such that for all $Q \in \operatorname{Pol}^\circ(m)$ and $\zeta \in \mathbb{C}^d$ with $\Phi(Q, \zeta) \neq 0$ it holds that $\widetilde{Q}(0) \leq A|Q(\zeta)|$.
784
+ Let $K \subseteq \mathbb{R}^d$ be compact and convex. As customary, we define the supporting
+ function of $K$ as
+ $$H_K(\eta) = \sup_{x \in K} \eta \cdot x, \qquad \eta \in \mathbb{R}^d.$$
791
794
+ Note that $H_K$ is subadditive and positively homogeneous of degree 1. Furthermore, it
+ holds that [10, Theorem 4.3.2]
+ $$ (4.1) \qquad K = \{x \in \mathbb{R}^d \mid \eta \cdot x \leq H_K(\eta) \ \forall \eta \in \mathbb{R}^d\}. $$
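+ As a concrete illustration (our addition, not part of the original text), the supporting function of a closed ball can be computed in closed form, and (4.1) recovers the ball:

```latex
% Illustration (not from the source): supporting function of a closed ball.
\[
  H_{\overline{B}(x_0, r)}(\eta)
  = \sup_{|u| \le r} \eta \cdot (x_0 + u)
  = \eta \cdot x_0 + r\,|\eta|, \qquad \eta \in \mathbb{R}^d.
\]
% Subadditivity and positive homogeneity are immediate, and (4.1) gives back the
% ball: \eta \cdot (x - x_0) \le r|\eta| for all \eta is equivalent to
% |x - x_0| \le r (take \eta = x - x_0).
```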
798
+ We define the Fourier transform of $\phi \in \mathcal{D}(\mathbb{R}^d)$ as
+ $$\widehat{\phi}(\zeta) = \int_{\mathbb{R}^d} \phi(x) e^{-i\zeta \cdot x}\,dx, \qquad \zeta \in \mathbb{C}^d.$$
+ Then, $\widehat{\phi}$ is an entire function on $\mathbb{C}^d$. For all $N \in \mathbb{N}$ there is $C > 0$ such that for all
+ $\phi \in \mathcal{D}(\mathbb{R}^d)$
+ $$ (4.2) \qquad |\widehat{\phi}(\zeta)| \leq C \|\phi\|_{L^1,N} \, \frac{e^{H_{\operatorname{ch}\operatorname{supp}\phi}(\operatorname{Im}\zeta)}}{(2 + |\zeta|)^N}, \qquad \zeta \in \mathbb{C}^d, $$
+ where $\operatorname{ch}\operatorname{supp}\phi$ denotes the convex hull of $\operatorname{supp}\phi$. We are ready to show Lemma 4.1.
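+ As a brief aside (our addition), the estimate (4.2) follows from a standard Paley–Wiener-type argument; the sketch below reads $\|\phi\|_{L^1,N}$ as the sum of the $L^1$ norms of the derivatives of $\phi$ up to order $N$, which is our reading of the notation:

```latex
% Sketch (our addition): for |\alpha| \le N, integration by parts gives
\[
  |\zeta^\alpha \widehat{\phi}(\zeta)|
  = \Big| \int_{\mathbb{R}^d} \partial^\alpha \phi(x)\, e^{-i\zeta \cdot x}\, dx \Big|
  \le \|\partial^\alpha \phi\|_{L^1}\,
      e^{\,\sup_{x \in \operatorname{supp}\phi} x \cdot \operatorname{Im}\zeta}
  = \|\partial^\alpha \phi\|_{L^1}\,
      e^{H_{\operatorname{ch}\operatorname{supp}\phi}(\operatorname{Im}\zeta)},
\]
% since |e^{-i\zeta\cdot x}| = e^{x \cdot \operatorname{Im}\zeta}. Summing over
% |\alpha| \le N bounds (2 + |\zeta|)^N |\widehat{\phi}(\zeta)| up to a
% dimensional constant, which is exactly (4.2).
```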
811
+ Proof. We may assume without loss of generality that $\xi = 0$. Since $0 \notin K$, (4.1)
+ implies that there is $\eta \in \mathbb{R}^d$ such that $H_K(-\eta) < 0$. For $t > 0$ and $\sigma \in \mathbb{R}^d$ we define
+ $P_{t,\sigma} = P(\sigma + it\eta + \cdot\,) \in \operatorname{Pol}^\circ(m)$. Note that there is $c > 0$ such that for all $t > 0$ and $\sigma \in \mathbb{R}^d$
+ $$ (4.3) \qquad \widetilde{P_{t,\sigma}}(0) = \widetilde{P}(\sigma + it\eta) \geq c. $$
+ Let $\Phi$ be as above. We define $F_t \in \mathcal{D}'(\mathbb{R}^d)$ via (cf. [10, proof of Theorem 7.3.10])
+ $$\langle F_t, \phi \rangle = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \int_{\mathbb{C}^d} \frac{\widehat{\phi}(-\sigma - it\eta - \zeta)}{P(\sigma + it\eta + \zeta)} \, \Phi(P_{t,\sigma}, \zeta)\,d\zeta\,d\sigma, \qquad \phi \in \mathcal{D}(\mathbb{R}^d).$$
829
+ Let $L$ be an arbitrary compact convex subset of $\mathbb{R}^d$. By properties (i) and (iii) of $\Phi$,
+ (4.2) (with $N = d + 1$) and (4.3), we have that for all $\phi \in \mathcal{D}_L$
+ $$|\langle F_t, \phi \rangle| \leq \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \int_{|\zeta| \leq 1} \frac{|\widehat{\phi}(-\sigma - it\eta - \zeta)|}{|P_{t,\sigma}(\zeta)|} \, \Phi(P_{t,\sigma}, \zeta)\,d\zeta\,d\sigma$$
+ $$\leq \frac{AC\|\phi\|_{L^1,d+1}}{(2\pi)^d} \int_{\mathbb{R}^d} \int_{|\zeta| \leq 1} \frac{e^{H_L(-t\eta - \operatorname{Im}\zeta)}}{(2 + |\sigma + it\eta + \zeta|)^{d+1}\,\widetilde{P_{t,\sigma}}(0)} \, \Phi(P_{t,\sigma}, \zeta)\,d\zeta\,d\sigma$$
+ $$\leq \frac{AC\|\phi\|_{L^1,d+1}}{(2\pi)^d c} \int_{\mathbb{R}^d} \int_{|\zeta| \leq 1} \frac{e^{tH_L(-\eta)}\, e^{H_L(-\operatorname{Im}\zeta)}}{(1 + |\sigma|)^{d+1}} \, \Phi(P_{t,\sigma}, \zeta)\,d\zeta\,d\sigma$$
+ $$ (4.4) \qquad \leq C'_L \|\phi\|_{L^\infty,d+1}\, e^{tH_L(-\eta)},$$
+ where
+ $$C'_L = \frac{AC|L|}{(2\pi)^d c} \max_{|\zeta| \leq 1} e^{H_L(-\operatorname{Im}\zeta)} \int_{\mathbb{R}^d} \frac{d\sigma}{(1 + |\sigma|)^{d+1}}.$$
876
+ In particular, $F_t$ is a well-defined distribution. Property (ii) of $\Phi$ and Cauchy's integral
+ formula yield that for all $\phi \in \mathcal{D}(\mathbb{R}^d)$
+ $$\langle P(D)F_t, \phi \rangle = \langle F_t, P(-D)\phi \rangle = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \int_{\mathbb{C}^d} \widehat{\phi}(-\sigma - it\eta - \zeta) \, \Phi(P_{t,\sigma}, \zeta)\,d\zeta\,d\sigma$$
+ $$= \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \widehat{\phi}(-\sigma - it\eta)\,d\sigma = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \widehat{\phi}(\sigma)\,d\sigma = \phi(0)$$
901
+ and thus $P(D)F_t = \delta$. For $\varepsilon \in (0, 1)$ we set $t_\varepsilon = \log \varepsilon / H_K(-\eta) > 0$ and $E_{0,\varepsilon} = F_{t_\varepsilon}$.
+ Then, $P(D)E_{0,\varepsilon} = \delta$. By (4.4), we obtain that for all $\varepsilon \in (0, 1)$
+ $$\|E_{0,\varepsilon}\|^*_{K,d+1} \leq C'_K\, \varepsilon$$
+ and, for any compact and convex $L \subseteq \mathbb{R}^d$,
+ $$\|E_{0,\varepsilon}\|^*_{L,d+1} \leq \frac{C'_L}{\varepsilon^s},$$
+ with $s = |H_L(-\eta)/H_K(-\eta)|$. This gives the desired result.
912
+
913
+ References
+ [1] J. Bonet, P. Domański, Parameter dependence of solutions of differential equations on spaces of distributions and the splitting of short exact sequences, J. Funct. Anal. 230 (2006), 329–381.
+ [2] J. Bonet, P. Domański, The splitting of exact sequences of PLS-spaces and smooth dependence of solutions of linear partial differential equations, Adv. Math. 217 (2008), 561–585.
+ [3] A. Debrouwere, T. Kalmes, Linear topological invariants for kernels of convolution and differential operators, arXiv preprint 2204.11733v1.
+ [4] A. Debrouwere, T. Kalmes, Quantitative Runge type approximation theorems for zero solutions of certain partial differential operators, arXiv preprint 2209.10794v1.
+ [5] P. Domański, Real analytic parameter dependence of solutions of differential equations, Rev. Mat. Iberoam. 26 (2010), 175–238.
+ [6] P. Domański, Real analytic parameter dependence of solutions of differential equations over Roumieu classes, Funct. et Approx. Comment. Math. 26 (2011), 79–109.
+ [7] A. Enciso, D. Peralta-Salas, Approximation theorems for the Schrödinger equation and quantum vortex reconnection, Comm. Math. Phys. 387 (2021), 1111–1149.
+ [8] L. Frerick, J. Wengenroth, Partial differential operators modulo smooth functions, Bull. Soc. Roy. Sci. Liège 73 (2004), 119–127.
+ [9] L. Frerick, T. Kalmes, Some results on surjectivity of augmented semi-elliptic differential operators, Math. Ann. 347 (2010), 81–94.
+ [10] L. Hörmander, The Analysis of Linear Partial Differential Operators I, Springer-Verlag, Berlin, 2003.
+ [11] L. Hörmander, The Analysis of Linear Partial Differential Operators II, Springer-Verlag, Berlin, 2005.
+ [12] T. Kalmes, The augmented operator of a surjective partial differential operator with constant coefficients need not be surjective, Bull. Lond. Math. Soc. 44 (2012), 610–614.
+ [13] M. Langenbruch, Surjective partial differential operators on spaces of ultradifferentiable functions of Roumieu type, Results Math. 29 (1996), 254–275.
+ [14] M. Langenbruch, Surjectivity of partial differential operators on Gevrey classes and extension of regularity, Math. Nachr. 196 (1998), 103–140.
+ [15] M. Langenbruch, Characterization of surjective partial differential operators on spaces of real analytic functions, Studia Math. 162 (2004), 53–96.
+ [16] R. Meise, B. A. Taylor, D. Vogt, Characterization of the linear partial differential operators with constant coefficients that admit a continuous linear right inverse, Ann. Inst. Fourier (Grenoble) 40 (1990), 619–655.
+ [17] R. Meise, B. A. Taylor, D. Vogt, Continuous linear right inverses for partial differential operators on non-quasianalytic classes and on ultradistributions, Math. Nachr. 196 (1998), 213–242.
+ [18] R. Meise, D. Vogt, Introduction to Functional Analysis, Clarendon Press, Oxford, 1997.
+ [19] H. J. Petzsche, Some results of Mittag-Leffler-type for vector valued functions and spaces of class A, in: K. D. Bierstedt, B. Fuchssteiner (Eds.), Functional Analysis: Surveys and Recent Results, North-Holland, Amsterdam, 1980, pp. 183–204.
+ [20] A. Rüland, M. Salo, Quantitative Runge approximation and inverse problems, Int. Math. Res. Not. IMRN 20 (2019), 6216–6234.
+ [21] A. Rüland, M. Salo, The fractional Calderón problem: low regularity and stability, Nonlinear Anal. 193 (2020), 111529.
+ [22] L. Schwartz, Théorie des Distributions, Hermann, Paris, 1966.
+ [23] D. Vogt, On the solvability of P(D)f = g for vector valued functions, RIMS Kôkyûroku 508 (1983), 168–181.
+ [24] D. Vogt, On the functors Ext¹(E, F) for Fréchet spaces, Studia Math. 85 (1987), 163–197.
+ [25] D. Vogt, Invariants and spaces of zero solutions of linear partial differential operators, Arch. Math. 87 (2006), 163–171.
+ [26] J. Wengenroth, Derived Functors in Functional Analysis, Springer-Verlag, Berlin, 2003.
+ [27] J. Wengenroth, Surjectivity of partial differential operators with good fundamental solutions, J. Math. Anal. Appl. 379 (2011), 719–723.
969
+
4tE0T4oBgHgl3EQfvQHn/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
59E1T4oBgHgl3EQfBQIH/content/2301.02848v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8b2c1eb9172512c24b654afa61756691ab4bd2997222e2fd1a3b6ea4645707f
3
+ size 526547
59E1T4oBgHgl3EQfBQIH/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4f7a00e62b7debbb2e8cbea763fbdb4bd0bb52dd5a9252437b8d36078775de0
3
+ size 297035
5NA0T4oBgHgl3EQfNv96/content/tmp_files/2301.02151v1.pdf.txt ADDED
@@ -0,0 +1,1970 @@
1
+ Beyond spectral gap (extended):
2
+ The role of the topology in decentralized learning
3
+ Thijs Vogels*
4
5
+ Hadrien Hendrikx*
6
7
+ Martin Jaggi
8
9
+ Machine Learning and Optimization Laboratory
10
+ EPFL
11
+ Lausanne, Switzerland
12
+ Abstract
13
+ In data-parallel optimization of machine learning models, workers collaborate to improve
14
+ their estimates of the model: more accurate gradients allow them to use larger learning
15
+ rates and optimize faster. In the decentralized setting, in which workers communicate over a
16
+ sparse graph, current theory fails to capture important aspects of real-world behavior. First,
17
+ the ‘spectral gap’ of the communication graph is not predictive of its empirical performance
18
+ in (deep) learning. Second, current theory does not explain that collaboration enables larger
19
+ learning rates than training alone. In fact, it prescribes smaller learning rates, which further
20
+ decrease as graphs become larger, failing to explain convergence dynamics in infinite graphs.
21
+ This paper aims to paint an accurate picture of sparsely-connected distributed optimization.
22
+ We quantify how the graph topology influences convergence in a quadratic toy problem and
23
+ provide theoretical results for general smooth and (strongly) convex objectives. Our theory
24
+ matches empirical observations in deep learning, and accurately describes the relative merits
25
+ of different graph topologies. This paper is an extension of the conference paper by Vogels
26
+ et al. (2022). Code: github.com/epfml/topology-in-decentralized-learning.
27
+ Keywords:
28
+ Decentralized Learning, Convex Optimization, Stochastic Gradient Descent,
29
+ Gossip Algorithms, Spectral Gap
30
+ 1. Introduction
31
+ Distributed data-parallel optimization algorithms help us tackle the increasing complexity of
32
+ machine learning models and of the data on which they are trained. We can classify those
33
+ training algorithms as either centralized or decentralized, and we often consider those settings
34
+ to have different benefits over training ‘alone’. In the centralized setting, workers compute
35
+ gradients on independent mini-batches of data, and they average those gradients between all
36
+ workers. The resulting lower variance in the updates enables larger learning rates and faster
37
+ training. In the decentralized setting, workers average their models with only a sparse set of
38
+ ‘neighbors’ in a graph instead of all-to-all, and they may have private datasets sampled from
39
+ different distributions. As the benefit of decentralized learning, we usually focus only on the
40
+ (indirect) access to other worker’s datasets, and not of faster training.
41
+ Homogeneous (i.i.d.) setting. While decentralized learning is typically studied with
42
+ heterogeneous datasets across workers, sparse (decentralized) averaging between them is also
43
+ useful when worker’s data is identically distributed (i.i.d.) (Lu and Sa, 2021). As an example,
44
+ ©2022 Thijs Vogels and Hadrien Hendrikx and Martin Jaggi. *: Equal contribution.
45
+ Preprint. Under Review. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/.
46
+ arXiv:2301.02151v1 [cs.LG] 5 Jan 2023
47
+
48
+ Vogels, Hendrikx, Jaggi
49
+ [Figure 1 — two panels; vertical axis: steps until loss < 0.01 (log scale, 10–1000);
+ horizontal axis: learning rate (0.001–1). Left panel: fully connected, ring, and alone
+ (disconnected), annotated "current theory uses lower learning rates, but decentralized
+ averaging enables higher learning rates". Right panel: 1-ring (spectral gap 1), 2-ring
+ (spectral gap 1), 4-ring (spectral gap 0.67), 8-ring (spectral gap 0.20), ∞-ring
+ (spectral gap 0), annotated "instead of a speedup, current theory predicts a slowdown
+ with ring size".]
77
+ Figure 1: ‘Time to target’ for D-SGD (Lian et al., 2017) with constant learning rates on
78
+ an i.i.d. isotropic quadratic dataset (Section 3.1). The noise disappears at the
79
+ optimum. Compared to optimizing alone, 32 workers in a ring (left) are faster for
80
+ any learning rate, but the largest improvement comes from being able to use a
81
+ large learning rate. This benefit is not captured by current theory, which prescribes
82
+ a smaller learning rate than training alone. On the right, we see that rings of
83
+ increasing size enable larger learning rates and faster optimization. Because a
84
+ ring’s spectral gap goes to zero with the size of the ring, this cannot be explained
85
+ by current theory.
86
+ sparse averaging is used in data centers to mitigate communication bottlenecks (Assran et al.,
87
+ 2019). When the data is i.i.d. (or heterogeneity is mild), the goal of sparse averaging is to
88
+ optimize faster, just like in centralized (all-to-all) graphs. Yet, current decentralized learning
89
+ theory poorly explains this speed-up. Analyses typically show that, for small enough learning
90
+ rates, training with sparse averaging behaves the same as with all-to-all averaging (Lian
91
+ et al., 2017; Koloskova et al., 2020) and so it reduces the gradient variance by the number of
92
+ workers compared to training alone with the same small learning rate. In practice, however,
93
+ such small learning rates would never be used. In fact, a reduction in variance should allow
94
+ us to use a larger learning rate than training alone, rather than imposing a smaller one.
95
+ Contrary to current theory, we show that (sparse) averaging lowers variance throughout all
96
+ phases of training (both initially and asymptotically), allowing us to take higher learning rates,
97
+ which directly speeds up convergence. We characterize how much averaging with various
98
+ communication graphs reduces the variance, and show that centralized performance (variance
99
+ divided by the number of workers) is not always achieved when using optimal large learning
100
+ rates. The behavior we explain is illustrated in Figure 1.
101
+ Heterogeneous (non-i.i.d.) setting. In standard analyses, heterogeneity affects convergence
102
+ in a very worst-case manner. Standard guarantees intuitively correspond to the pessimistic
103
+ case in which the most distant workers have the most different functions. These guarantees are
104
+ typically loose in the settings where workers have different finite datasets sampled i.i.d. from
105
+ the same distribution, or if each worker has a lot of diversity in its close neighbors. In this
106
+ work, we characterize the impact of heterogeneity together with the communication graph,
107
110
+ enabling non-trivial guarantees even for infinite graphs under non-adversarial heterogeneity
111
+ patterns.
112
+ Spectral gap. In both the homogeneous and heterogeneous settings, the graph topology
113
+ appears in current convergence rates through the spectral gap of its averaging (gossip) matrix.
114
+ The spectral gap poses a conservative lower bound on how much one averaging step brings
115
+ all worker’s models closer together. The larger, the better. If the spectral gap is small, a
116
+ significantly smaller learning rate is required to make the algorithm behave close to SGD
117
+ with all-to-all averaging with the same learning rate. Unfortunately, we experimentally
118
+ observe that, both in deep learning and in convex optimization, the spectral gap of the
119
+ communication graph is not predictive of its performance under tuned learning rates.
120
+ The problem with the spectral gap quantity is clearly illustrated in a simple example. Let
121
+ the communication graph be a ring of varying size. As the size of the ring increases to infinity,
122
+ its spectral gap goes to zero since it becomes harder and harder to achieve consensus between
123
+ all the workers. This causes the optimization progress predicted by current theory to go to
124
+ zero as well. In some cases, when the workers' objectives are adversarially heterogeneous
125
+ in a way that requires workers to obtain information from all others, this is indeed what
126
+ happens. In typical cases, however, this view is overly pessimistic. In particular, this view
127
+ does not match the empirical behavior with i.i.d. data. With i.i.d. data, as the size of the
128
+ ring increases, the convergence rate actually improves (Figure 1), until it saturates at a point
129
+ that depends on the problem.
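+ This ring example is easy to verify numerically. The sketch below is our illustration, not from the paper: it assumes the standard ring gossip matrix with uniform weight 1/3 on each worker and its two neighbors, which reproduces the spectral-gap values quoted in Figure 1 and vanishes as the ring grows.

```python
import numpy as np

def ring_gossip_matrix(n: int) -> np.ndarray:
    """Gossip matrix of an n-ring: each worker averages uniformly
    with itself and its two ring neighbors (weight 1/3 each)."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] += 1.0 / 3.0
    return W

def spectral_gap(W: np.ndarray) -> float:
    """1 minus the second-largest eigenvalue magnitude of W."""
    magnitudes = np.sort(np.abs(np.linalg.eigvalsh(W)))
    return 1.0 - magnitudes[-2]

for n in [4, 8, 64, 512]:
    # the gap shrinks toward 0 as the ring grows
    print(n, spectral_gap(ring_gossip_matrix(n)))
```

For the n-ring this matches the closed form 1 − (1 + 2 cos(2π/n))/3, e.g. 0.67 for n = 4 and about 0.20 for n = 8, as in Figure 1.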
130
+ In this work, we aim to accurately describe the behavior of distributed learning algorithms
131
+ with sparse averaging, both in theory and in practice. We aim to do so both in the high learning
132
+ rate regime, which was previously studied in the conference version of this paper Vogels et al.
133
+ (2022), as well as in the small learning rate regime, in which we characterize the interplay
134
+ between topology and data heterogeneity, as well as stochastic noise.
135
+ • We quantify the role of the graph in a quadratic toy problem designed to mimic the
136
+ initial phase of deep learning (Section 3.1), showing that averaging enables a larger
137
+ learning rate.
138
+ • From these insights, we derive a problem-independent notion of ‘effective number of
139
+ neighbors’ in a graph that is consistent with time-varying topologies and infinite graphs,
140
+ and is predictive of a graph’s empirical performance in both convex and deep learning.
141
+ • We provide convergence proofs for (strongly) convex objectives that do not depend on
142
+ the spectral gap of the graph (Section 4), and consider finer spectral quantities instead.
143
+ Our rates disentangle the homogeneous and heterogeneous settings, and highlight that
144
+ all problems behave as if they were homogeneous when the iterates are far from the
145
+ optimum.
146
+ At its core, our analysis does not enforce global consensus, but only between workers that are
147
+ close to each other in the graph. Our theory shows that sparse averaging provably enables
148
+ larger learning rates and thus speeds up optimization. These insights prove to be relevant in
149
+ deep learning, where we accurately describe the performance of a variety of topologies, while
150
+ their spectral gap does not (Section 5).
151
154
+ 2. Related work
155
+ Decentralized SGD.
156
+ This paper studies decentralized SGD. Koloskova et al. (2020)
157
+ obtain the tightest bounds for this algorithm in the general setting where workers optimize
158
+ heterogeneous objectives. They show that gossip averaging reduces the asymptotic variance
159
+ suffered by the algorithm at the cost of a degradation (depending on the spectral gap of
160
+ the gossip matrix) of the initial linear convergence term. This key term does not improve
161
+ through collaboration and gives rise to a smaller learning rate than training alone. Besides,
162
+ as discussed above, this implies that optimization is not possible in the limit of large graphs,
163
+ even in the absence of heterogeneity: for instance, the spectral gap of an infinite ring is zero,
164
+ which would lead to a learning rate of zero as well.
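+ For reference, the following minimal simulation sketches D-SGD in the i.i.d. setting discussed above. It is our illustration, not the paper's experimental setup: the quadratic objective, the multiplicative noise model (chosen so that the noise vanishes at the optimum), and all function names are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsgd_ring(num_workers=32, dim=10, steps=300, lr=0.5, noise=0.5):
    """Toy D-SGD on f(x) = |x|^2 / 2 per worker: a local stochastic
    gradient step followed by gossip averaging on a ring."""
    x = rng.normal(size=(num_workers, dim))  # one model per worker
    for _ in range(steps):
        # stochastic gradient of |x|^2/2; the noise vanishes at the optimum
        grads = (1.0 + noise * rng.normal(size=x.shape)) * x
        x = x - lr * grads
        # gossip step: uniform averaging with the two ring neighbors
        x = (np.roll(x, 1, axis=0) + x + np.roll(x, -1, axis=0)) / 3.0
    return float(np.mean(np.sum(x ** 2, axis=1)) / 2.0)  # average loss

print(dsgd_ring())                 # ring of 32 workers
print(dsgd_ring(num_workers=1))    # 'alone': the gossip step is a no-op
```

Varying `lr` in such a toy is one way to observe that the ring tolerates larger learning rates than a single worker before diverging, in the spirit of Figure 1.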
165
+ These rates suggest that decentralized averaging speeds up the last part of training
166
+ (dominated by variance), at the cost of slowing down the initial (linear convergence) phase.
167
+ Beyond the work of Koloskova et al. (2020), many papers focus on linear speedup (in the
168
+ variance phase) over optimizing alone, and prove similar results in a variety of settings (Lian
169
+ et al., 2017; Tang et al., 2018; Lian et al., 2018). All these results rely on the following insight:
170
+ while linear speedup is only achieved for small learning rates, SGD eventually requires such
171
+ small learning rates anyway (because of, e.g., stochastic noise, or non-smoothness). This
172
+ observation leads these works to argue that “topology does not matter”. This is the case
173
+ indeed, but only for very small learning rates, as shown in Figure 1. Besides, while linear
174
+ speedup might be achievable indeed for very small learning rates, some level of variance
175
+ reduction should be obtained by averaging for any learning rate. In practice, averaging
176
+ speeds up both the initial and last part of training and in a possibly non-linear way. This is
177
+ what we show in this work, both in theory and in practice.
178
+ Another line of work studies decentralized SGD under statistical assumptions on the local
179
+ data. In particular, Richards and Rebeschini (2020) show favorable properties for D-SGD
180
+ with graph-dependent implicit regularization and attain optimal statistical rates. Their
181
+ suggested learning rate does depend on the spectral gap of the communication network, and
182
+ it goes to zero when the spectral gap shrinks. Richards and Rebeschini (2019) also show that
183
+ larger (constant) learning rates can be used in decentralized GD, but their analysis focuses
184
+ on decentralized kernel regression. Their analysis relies on statistical concentration of local
185
+ objectives, whereas the analysis in this paper relies on the notion of local neighborhoods.
186
+ Gossiping in infinite graphs.
187
+ An important feature of our results is that they do not
188
+ depend on the spectral gap, and so they apply independently of the size of the graph. Instead,
189
+ our results rely on new quantities that involve a combination of the graph topology and
190
+ the heterogeneity pattern. These may depend on the spectral gap in extreme cases, but
191
+ are much better in general. Berthier et al. (2020) study acceleration of gossip averaging in
192
+ infinite graphs, and obtain the same conclusions as we do: although spectral gap is useful
193
+ for asymptotics (how long does information take to spread in the whole graph), it fails to
194
+ accurately describe the transient regime of gossip averaging, i.e., how quickly information
195
+ spreads over local neighborhoods in the first few gossip rounds. This is especially limiting
196
+ for optimization (compared to just averaging), as new local updates need to be averaged at
197
+ every step. The averaging of the latest gradient updates always starts in the transient regime,
198
+ implying that the transient regime of gossip averaging deeply affects the asymptotic regime
199
+ of decentralized SGD. In this work, we build on tools from Berthier et al. (2020) to show
200
203
+ how the effective number of neighbors, a key quantity we introduce, is related to the graph’s
204
+ spectral dimension.
205
+ The impact of the graph topology.
206
+ Lian et al. (2017) argue that the topology of
207
+ the graph does not matter. This is only true for asymptotic rates in specific settings, as
208
+ illustrated in Figure 1. Neglia et al. (2020) investigate the impact of the graph on decentralized
209
+ optimization, and contradict this claim. Similarly to us, they show that the graph has an
210
+ impact in the early phases of training. In the heterogeneous setting, their
211
+ analysis depends on how gradient heterogeneity spans the eigenspace of the Laplacian. Their
212
+ assumptions, however, differ from ours, and they retain an unavoidable dependence on the
213
+ spectral gap of the graph. Our results are different in nature, and show the benefits of
214
+ averaging and the impact of the graph through the choice of large learning rates, and a better
215
+ dependence on the noise and the heterogeneity for a given learning rate. Even et al. (2021)
216
+ also consider the impact of the graph on decentralized learning. They focus on non-worst-case
217
+ dependence on heterogeneous delays, and still obtain spectral-gap-like quantities but on a
218
+ reweighted gossip matrix.
219
Another line of work studies the interaction of topology with particular patterns of data heterogeneity (Le Bars et al., 2022; Dandi et al., 2022), and how to optimize graphs with this heterogeneity in mind. Our analysis highlights the role of heterogeneity through a different quantity than these works, one that we believe is tight. Besides, both works either try to reduce this heterogeneity all along the trajectory, or optimize for both the spectral gap of the graph and the heterogeneity term. Instead, we show that heterogeneity changes the fixed point of the algorithm but not the global dynamics.
226
Time-varying topologies.
Time-varying topologies are popular for decentralized deep learning in data centers due to their strong mixing (Assran et al., 2019; Wang et al., 2019). The benefit of varying the communication topology over time is not easily explained through standard theory, but requires dedicated analysis (Ying et al., 2021). While our proofs only cover static topologies, the quantities that appear in our analysis can be computed for time-varying schemes, too. With these quantities, we can empirically study static and time-varying schemes in the same framework.
234
Conference version.
This paper is an extension of Vogels et al. (2022), which focused on the homogeneous setting where all workers share the same global optimum. In this extension, we introduce a simpler analysis that strictly improves on and generalizes the previous one, extending the results to the important heterogeneous setting. In the conference version, it remained unclear whether larger learning rates could only be achieved thanks to homogeneity. We also connect the quantities we introduce to the spectral dimension of a graph, and use this connection to derive explicit formulas for the optimal learning rates based on the spectral dimension. This allows us to accurately compare with previous bounds (for instance Koloskova et al. (2020)) and show that we improve on them in all settings.
244
3. Measuring collaboration in decentralized learning
Both this paper's analysis of decentralized SGD for general convex objectives and its deep learning experiments revolve around a notion of 'effective number of neighbors' that we introduce in Section 3.2. The aim of this section is to motivate this quantity based on
248
250
+ Vogels, Hendrikx, Jaggi
251
a simple toy model for which we can exactly characterize the convergence (Section 3.1). We then connect this quantity to typical graph metrics such as the spectral gap and the spectral dimension in Section 3.3.
254
3.1 A toy problem: D-SGD on isotropic random quadratics
The aim of this section is to provide intuition while avoiding the complexities of the general analysis. To keep this section light, we omit any derivations. The appendix of Vogels et al. (2022) contains a longer version of this section that includes derivations and proofs.
258
We consider n workers that jointly optimize an isotropic quadratic E_{d∼N(0, I_d)} ½(d^⊤x)² = ½∥x∥² with a unique global minimum x⋆ = 0. The workers access the quadratic through
263
stochastic gradients of the form g(x) = dd^⊤x, with d ∼ N(0, I_d). This corresponds to a linear model with infinite data, where the model can fit the data perfectly, so that the stochastic noise goes to zero close to the optimum. We empirically find that this simple model is a meaningful proxy for the initial phase of (over-parameterized) deep learning (Section 5). A benefit of this model is that we can compute exact rates for it. These rates illustrate the behavior that we capture more generally in the theory of Section 4.
269
The stochasticity in this toy problem can be quantified by the noise level
ζ = sup_{x∈R^d} E_d ∥g(x)∥² / ∥x∥² = sup_{x∈R^d} E_d ∥dd^⊤x∥² / ∥x∥²,   (1)
which is equal to ζ = d + 2, due to the normal distribution of d.
281
The workers run the D-SGD algorithm (Lian et al., 2017). Each worker i has its own copy x_i ∈ R^d of the model, and they alternate between local model updates x_i ← x_i − ηg(x_i) and averaging their models with others: x_i ← Σ_{j=1}^n w_ij x_j. The averaging weights w_ij are summarized in the gossip matrix W ∈ R^{n×n}. A non-zero weight w_ij indicates that i and j are directly connected. In the following, we assume that W is symmetric and doubly stochastic: Σ_{j=1}^n w_ij = 1 ∀i.
289
On our objective, D-SGD either converges or diverges linearly. Whenever it converges, i.e., when the learning rate is small enough, there is a convergence rate r such that
E∥x_i^{(t)}∥² ≤ (1 − r)∥x_i^{(t−1)}∥²,
with equality as t → ∞. When the workers train alone (W = I), the convergence rate for a given learning rate η reads:
r_alone = 1 − (1 − η)² − (ζ − 1)η².   (2)
299
The optimal learning rate η⋆ = 1/ζ balances the optimization term (1 − η)² and the stochastic term (ζ − 1)η². In the centralized (fully connected) setting (w_ij = 1/n ∀i, j), the rate is simple as well:
r_centralized = 1 − (1 − η)² − (ζ − 1)η²/n.   (3)
308
Averaging between n workers reduces the impact of the gradient noise, and the optimal learning rate grows to η⋆ = n/(n + ζ − 1). We find that D-SGD with a general gossip matrix W interpolates between those results.
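These closed-form rates are easy to verify numerically. The following sketch (our own code, with arbitrary illustrative values of ζ and n) recovers the optimal learning rates η⋆ = 1/ζ and η⋆ = n/(n + ζ − 1) by grid search:

```python
import numpy as np

def r_alone(eta, zeta):
    # Convergence rate of a lone worker, Equation (2).
    return 1 - (1 - eta) ** 2 - (zeta - 1) * eta ** 2

def r_centralized(eta, zeta, n):
    # Convergence rate with full averaging over n workers, Equation (3).
    return 1 - (1 - eta) ** 2 - (zeta - 1) * eta ** 2 / n

zeta, n = 102.0, 16  # e.g. dimension d = 100 gives zeta = d + 2
etas = np.linspace(1e-4, 0.5, 10_000)

# Grid search over the learning rate recovers the closed-form optima.
eta_alone = etas[np.argmax(r_alone(etas, zeta))]
eta_central = etas[np.argmax(r_centralized(etas, zeta, n))]
```

As expected, averaging allows a much larger learning rate and a much faster rate in the noisy regime ζ ≫ 1.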
313
316
3.2 The effective number of neighbors
To quantify the reduction of the (ζ − 1)η² term in general, we introduce the problem-independent notion of the effective number of neighbors n_W(γ) of the gossip matrix W and decay parameter γ.
320
Definition 1 (Effective number of neighbors) The effective number of neighbors
n_W(γ) = lim_{t→∞} ( Σ_{i=1}^n Var[y_i^{(t)}] ) / ( Σ_{i=1}^n Var[z_i^{(t)}] )
measures the ratio of the asymptotic variances of the processes
y^{(t+1)} = √γ · y^{(t)} + ξ^{(t)},  where y^{(t)} ∈ R^n and ξ^{(t)} ∼ N(0, I_n),   (4)
and
z^{(t+1)} = W(√γ · z^{(t)} + ξ^{(t)}),  where z^{(t)} ∈ R^n and ξ^{(t)} ∼ N(0, I_n).   (5)
337
+ We call y and z random walks because workers repeatedly add noise to their state, somewhat
338
+ like SGD’s parameter updates. This should not be confused with a ‘random walk’ over nodes
339
+ in the graph.
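Definition 1 can be approximated by directly simulating the two processes. The sketch below (function and variable names are ours) estimates n_W(γ) by Monte Carlo:

```python
import numpy as np

def effective_neighbors_mc(W, gamma, steps=400, reps=2000, seed=0):
    # Estimate n_W(gamma) of Definition 1: ratio of the total stationary
    # variance of the solo process y to that of the gossip process z.
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    y = np.zeros((reps, n))
    z = np.zeros((reps, n))
    for _ in range(steps):
        xi = rng.standard_normal((reps, n))
        y = np.sqrt(gamma) * y + xi          # Equation (4)
        z = (np.sqrt(gamma) * z + xi) @ W.T  # Equation (5)
    return y.var(axis=0).sum() / z.var(axis=0).sum()

n = 8
W_full = np.full((n, n), 1 / n)  # fully connected: estimate should be near n
W_disc = np.eye(n)               # disconnected: estimate should be 1
```

For a fully connected W the estimate approaches n, and for W = I it equals 1, matching the two extremes discussed below.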
340
Since averaging with W decreases the variance of the random walk by a factor of at most n, the effective number of neighbors is a number between 1 and n. The decay γ modulates the sensitivity to communication delays. If γ = 0, workers only benefit from averaging with their direct neighbors. As γ increases, multi-hop connections play an increasingly important role. As γ approaches 1, delayed and undelayed noise contributions become equally weighted, and the reduction tends to n for any connected topology.
346
Proposition 2 For regular doubly-stochastic symmetric gossip matrices W with eigenvalues λ_1, . . . , λ_n, n_W(γ) has the closed-form expression
n_W(γ) = (1/(1 − γ)) / ( (1/n) Σ_{i=1}^n λ_i² / (1 − γλ_i²) ).   (6)
This follows from unrolling the recursions for y and z, using the eigendecomposition of W, and the limit lim_{t→∞} Σ_{k=1}^t x^k = x/(1 − x).
365
While this closed-form expression only covers a restricted set of gossip matrices, the notion of variance reduction in random walks naturally extends to infinite topologies and to time-varying averaging schemes. Figure 2 illustrates n_W for various topologies.
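The closed form of Proposition 2 is straightforward to evaluate. A small sketch (the ring construction and helper names are ours):

```python
import numpy as np

def effective_neighbors(W, gamma):
    # Closed-form n_W(gamma) of Proposition 2, from the eigenvalues of W.
    lam = np.linalg.eigvalsh(W)
    return (1 / (1 - gamma)) / np.mean(lam ** 2 / (1 - gamma * lam ** 2))

def ring(n):
    # Symmetric doubly-stochastic gossip matrix of a ring, uniform 1/3 weights.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    return W

n = 32
```

At γ = 0 the ring gives n_W(0) = 3: each worker effectively averages only with itself and its two direct neighbors, consistent with the "3 in a ring" behavior described in Figure 2.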
368
In our exact characterization of the convergence of D-SGD on the isotropic quadratic toy problem, we find that the effective number of neighbors appears in place of the number of workers n in the fully-connected rate of Equation 3. The rate r is the unique solution to
r = 1 − (1 − η)² − (ζ − 1)η² / n_W( (1 − η)² / (1 − r) ).   (7)
378
For fully-connected and disconnected W, n_W(γ) = n or 1 respectively, irrespective of γ, and Equation 7 recovers Equations 2 and 3. For other graphs, the effective number of workers depends on the learning rate. Current theory only considers the case where n_W ≈ n, but the small learning rates this requires can make the term (1 − η)² too large, defeating the purpose of collaboration.

Figure 2: The effective number of neighbors for several topologies (fully connected, two cliques, time-varying exponential, ring, and disconnected), measured by their variance reduction in (5), as a function of the decay γ of the 'random walk'. The point γ on the x-axis that matters depends on the learning rate and the task. Which topology is 'best' varies from problem to problem. For large decay rates γ (corresponding to small learning rates), all connected topologies achieve variance reduction close to a fully connected graph. For small decay rates (large learning rates), workers only benefit from their direct neighbors (e.g., 3 in a ring). These curves can be computed explicitly for constant topologies, and simulated efficiently for the time-varying exponential scheme (Assran et al., 2019).
415
+ Beyond this toy problem, we find that the proposed notion of effective number of neighbors
416
+ is also meaningful in the analysis of general objectives (Section 4) and in deep learning
417
+ (Section 5).
418
+ 3.3 Links between the effective number of neighbors and other graph quantities
419
+ In general, the effective number of neighbors function nW(γ) cannot be summarized by a
420
+ single scalar. Figure 2 demonstrates that the behavior of this function varies from graph to
421
+ graph. We can, however, bound the effective number of neighbors by known graph quantities
422
+ such as its spectral gap or spectral dimension.
423
We aim to create bounds for both finite and infinite graphs. To allow for this, we introduce a generalization of Proposition 2 as an integral over the spectral measure dσ of the gossip matrix, instead of a sum over its eigenvalues:
n_W(γ)^{-1} = (1 − γ) ∫_0^1 λ² / (1 − γλ²) dσ(λ).   (8)
For finite graphs, dσ is a sum of Dirac deltas of mass 1/n at each eigenvalue of the matrix W, recovering Equation (6).
435
438
+ 3.3.1 Upper and lower bounds
439
We can use the fact that all eigenvalues λ are ≤ 1, leading to:
n_W(γ)^{-1} ≤ (1 − γ) ∫_0^1 1/(1 − γ) dσ(λ) = 1.   (9)
This lower bound on the 'effective number of neighbors' corresponds to a disconnected graph.
447
On the other hand, for finite graphs, we can use the fact that σ(λ) contains a series of n Diracs. The peak at λ = 1, corresponding to the fully-averaged state, has mass 1/n, while the other peaks have mass ≥ 0. Using this bound, we obtain
n_W(γ)^{-1} ≥ (1 − γ) · (1/n) · 1/(1 − γ) = 1/n.   (10)
This upper bound on the 'effective number of neighbors' is tight for a fully-connected graph.
458
+ 3.3.2 Bounding by spectral gap
459
If the graph has a spectral gap α, then σ(λ) contains a Dirac delta with mass 1/n at λ = 1, corresponding to the fully-averaged state. The rest of σ(λ) has mass (n − 1)/n and is contained in the subdomain λ ∈ [0, 1 − α]. In this setting, we obtain
n_W(γ)^{-1} ≤ 1/n + ((n − 1)/n) · (1 − γ)(1 − α)² / (1 − γ(1 − α)²).   (11)
This lower bound on the 'effective number of neighbors' is typically pessimistic, but it is tight for the finite gossip matrix W = (1 − α)I + (α/n)11^⊤.
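The tightness claim for Equation (11) can be checked numerically (a sketch with our own helper names):

```python
import numpy as np

def effective_neighbors(W, gamma):
    lam = np.linalg.eigvalsh(W)  # Proposition 2
    return (1 / (1 - gamma)) / np.mean(lam ** 2 / (1 - gamma * lam ** 2))

def gap_lower_bound(n, alpha, gamma):
    # Lower bound on n_W(gamma) implied by a spectral gap alpha, Equation (11).
    inv = 1 / n + ((n - 1) / n) * (1 - gamma) * (1 - alpha) ** 2 / (1 - gamma * (1 - alpha) ** 2)
    return 1 / inv

n, alpha, gamma = 16, 0.3, 0.9
# This matrix has one eigenvalue 1 and n - 1 eigenvalues 1 - alpha,
# so the bound should be attained exactly.
W_tight = (1 - alpha) * np.eye(n) + (alpha / n) * np.ones((n, n))
```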
474
+ 3.3.3 Bounding by spectral dimension
475
+ Next, we will link the notion of ‘effective number of neighbors’ to the spectral dimension ds
476
+ of the graph (Berthier, 2021, e.g. Definition 1.9), which controls the decay of eigenvalues
477
+ near 1. This notion is usually linked with the spectral measure of the Laplacian of the
478
+ graph. However, to avoid introducing too many graph-related quantities, we define spectral
479
+ dimension with respect to the gossip matrix W. Standard definitions using the Laplacian
480
+ LW = I − W are equivalent. In the remainder of this paper, the ‘graph’ will always refer to
481
+ the communication graph implicitly induced by W of Laplacian LW.
482
Definition 3 (Spectral Dimension) A gossip matrix has spectral dimension at least d_s if there exists c_s > 0 such that for all λ ∈ [0, 1], the density of its eigenvalues is bounded as
σ((λ, 1)) ≤ c_s^{-1} (1 − λ)^{d_s/2}.   (12)
The notation σ((λ, 1)) here refers to the integral ∫_λ^1 dσ(l). The spectral dimension of a
492
graph has a natural geometric interpretation. For instance, the line (or the ring) is of spectral dimension d_s = 1, whereas 2-dimensional grids are of spectral dimension 2. More generally, a d-dimensional torus is of spectral dimension d. Besides, the spectral dimension describes macroscopic topological features and is robust to microscopic changes. For instance, random geometric graphs are of spectral dimension 2.
497
500
+ Note that since finite graphs have a spectral gap, σ((λ2(W), 1)) = 0 and so finite graphs
501
+ verify (12) for any spectral dimension ds. However, the notion of spectral dimension is still
502
+ relevant for finite graphs, since the constant cs blows up when ds is bigger than the actual
503
+ spectral dimension of an infinite graph with similar topology. Alternatively, it is sometimes
504
+ helpful to explicitly take the spectral gap into account in (12), as in Berthier et al. (2020,
505
+ Section 6).
506
We now proceed to bounding n_W(γ) using the spectral dimension. Since λ ↦ λ²(1 − γλ²)^{-1} is a non-negative non-decreasing function on [0, 1], we can use Berthier et al. (2020, Lemma C.1) to obtain that:
n_W(γ)^{-1} ≤ 1/n + c_s^{-1}(1 − γ) ∫_0^1 ( λ² / (1 − γλ²) ) (1 − λ)^{d_s/2 − 1} dλ.   (13)
The term 1/n comes from the fact that for finite graphs, the density dσ includes a Dirac delta with mass 1/n at eigenvalue 1. This Dirac is not affected by the spectral dimension, and is required for consistency, as it ensures that n_W(γ) ≤ n for any finite graph. To evaluate the integral, we then distinguish three cases.
525
Case d_s > 2.
Since γλ < 1, we have 1 − λ ≤ 1 − γλ². In particular, we use integration by parts to get:
n_W(γ)^{-1} − n^{-1} ≤ c_s^{-1}(1 − γ) ∫_0^1 λ²(1 − γλ²)^{d_s/2 − 2} dλ
≤ − ( (1 − γ)c_s^{-1} / (2γ(d_s/2 − 1)) ) ∫_0^1 −2γλ(d_s/2 − 1)(1 − γλ²)^{d_s/2 − 2} dλ
= ( (1 − γ)c_s^{-1} / (γ(d_s − 2)) ) ( 1 − (1 − γ)^{d_s/2 − 1} ).
This leads to a scaling of:
n_W(γ) ≥ ( 1/n + (1 − γ) / (γ(d_s − 2)c_s) )^{-1}.   (14)
For large enough n, we obtain the same scaling of (1 − γ)^{-1} as in the previous section, thus indicating that for networks that are well-enough connected (d_s > 2), the spectral dimension only affects the constants, and not the scaling in γ.
563
Case d_s = 2.
When d_s = 2, only the primitive of the integrand changes, leading to:
n_W(γ) ≥ ( 1/n − (1 − γ) ln(1 − γ) / (2γc_s) )^{-1}.   (15)
571
Case d_s < 2.
In this case, we start by splitting the integral as:
(1 − γ) ∫_0^1 ( λ²(1 − λ)^{d_s/2 − 1} / (1 − γλ²) ) dλ = (1 − γ) ∫_0^γ ( λ²(1 − λ)^{d_s/2 − 1} / (1 − γλ²) ) dλ + (1 − γ) ∫_γ^1 ( λ²(1 − λ)^{d_s/2 − 1} / (1 − γλ²) ) dλ.
598
For the first term, note that γλ ≤ 1, so (1 − γλ²)^{-1} ≤ (1 − λ)^{-1}, leading to:
(1 − γ) ∫_0^γ ( λ²(1 − λ)^{d_s/2 − 1} / (1 − γλ²) ) dλ ≤ (1 − γ) ∫_0^γ (1 − λ)^{d_s/2 − 2} dλ
= ( 2(1 − γ) / (2 − d_s) ) ( (1 − γ)^{d_s/2 − 1} − 1 )
≤ ( 2 / (2 − d_s) ) (1 − γ)^{d_s/2}.
625
For the second term, note that λ² ≤ 1, so (1 − γλ²)^{-1} ≤ (1 − γ)^{-1}, leading to:
(1 − γ) ∫_γ^1 ( λ²(1 − λ)^{d_s/2 − 1} / (1 − γλ²) ) dλ ≤ ∫_γ^1 (1 − λ)^{d_s/2 − 1} dλ = (2/d_s)(1 − γ)^{d_s/2}.   (16)
644
In the end, we obtain that n_W(γ)^{-1} − 1/n ≤ (2/c_s)( 1/(2 − d_s) + 1/d_s )(1 − γ)^{d_s/2}, and so:
n_W(γ) ≥ ( 1/n + 4(1 − γ)^{d_s/2} / (d_s(2 − d_s)c_s) )^{-1}.   (17)
665
+ In this case, scaling in γ is impacted by the spectral dimension. Better-connected graphs
666
+ benefit more from higher γ.
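The predicted scalings can be observed numerically. For a ring, whose spectral dimension is d_s = 1, n_W(γ) should grow like (1 − γ)^{−1/2}; the sketch below (our code) checks this using the ring's circulant eigenvalues:

```python
import numpy as np

def nw_ring(n, gamma):
    # n_W(gamma) of Proposition 2 for a ring with uniform 1/3 weights,
    # whose eigenvalues are 1/3 + (2/3) cos(2 pi k / n).
    lam = 1 / 3 + (2 / 3) * np.cos(2 * np.pi * np.arange(n) / n)
    return (1 / (1 - gamma)) / np.mean(lam ** 2 / (1 - gamma * lam ** 2))

n = 100_000  # large enough that the 1/n Dirac at eigenvalue 1 is negligible
ratio = nw_ring(n, 1 - 1e-4) / nw_ring(n, 1 - 1e-2)
# With d_s = 1, n_W ~ (1 - gamma)^(-1/2): shrinking 1 - gamma
# by a factor of 100 should multiply n_W by about 10.
```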
667
+ 4. Convergence analysis
668
+ 4.1 Notations and Definitions
669
+ In the previous section, we have derived exact rates for a specific function. Now we present
670
+ convergence rates for general (strongly) convex functions that are consistent with our
671
+ observations in the previous section. We obtain rates that depend on the level of noise, the
672
+ hardness of the objective, and the topology of the graph. More formally, we assume that we
673
+ would like to solve the following problem:
674
min_{θ∈R^d} Σ_{i=1}^n f_i(θ) = min_{x∈R^{nd}, x_i = x_j} Σ_{i=1}^n f_i(x_i).   (18)
687
In this case, x_i ∈ R^d represents the local variable of node i, and x ∈ R^{nd} the stacked variables of all nodes. We will assume the following iterations for D-SGD:
(D-SGD):  x_i^{(t+1)} = Σ_{j=1}^n w_ij x_j^{(t)} − η∇f_{ξ_i^{(t)}}(x_i^{(t)}),   (19)
where the f_{ξ_i^{(t)}} represent sampled data points and the gossip weights w_ij are elements of W. Denoting L_W = I − W, we rewrite this expression in matrix form as:
x^{(t+1)} = x^{(t)} − ( η∇F_{ξ^{(t)}}(x^{(t)}) + L_W x^{(t)} ),   (20)
where (∇F_{ξ^{(t)}}(x^{(t)}))_i = ∇f_{ξ_i^{(t)}}(x_i^{(t)}). We abuse notation in the sense that W ∈ R^{nd×nd} is now the Kronecker product of the standard n×n gossip matrix and the d×d identity matrix.
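The iteration (19) takes only a few lines to implement. The sketch below (our own code, with illustrative constants) runs it on the toy quadratic of Section 3.1:

```python
import numpy as np

def dsgd_step(x, W, eta, stoch_grads):
    # One D-SGD iteration, Equation (19): gossip with W and a local stochastic
    # gradient step, performed in parallel; x stacks the n local models as rows.
    return W @ x - eta * stoch_grads(x)

rng = np.random.default_rng(0)
n, d, eta = 8, 20, 0.05
W = np.full((n, n), 1 / n)  # fully connected, for a quick check

def stoch_grads(x):
    # Toy gradients g(x_i) = d_i d_i^T x_i from Section 3.1.
    D = rng.standard_normal((n, d))
    return np.sum(D * x, axis=1, keepdims=True) * D

x = rng.standard_normal((n, d))
start = float(np.sum(x ** 2))
for _ in range(300):
    x = dsgd_step(x, W, eta, stoch_grads)
```

Since eta is below the centralized optimum n/(n + ζ − 1), the iterates contract toward the global minimum x⋆ = 0.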
715
718
+ This definition is a slight departure from the conference version of this work (Vogels
719
+ et al., 2022), which alternated randomly between gossip steps and gradient updates instead
720
of in turns. The analysis of the randomized setting is still possible, but with heterogeneous objectives even the fixed points of D-SGD (19) satisfy x_i ≠ Σ_{j=1}^n w_ij x_j, and randomizing the updates adds undesirable variance. Similarly, it is also possible to analyze the popular variant
724
+ x(t+1) = W[x(t) − η∇Fξ(t)(x(t))], which locally averages the stochastic gradients before they
725
+ are applied. Yet, the D-SGD algorithm in (19) allows communications and computations to
726
+ be performed in parallel, and leads to a simpler analysis. We analyze this model under the
727
+ following assumptions, where Df(x, y) = f(x) − f(y) − ∇f(y)⊤(x − y) denotes the Bregman
728
+ divergence of f between points x and y.
729
Assumption 4 The stochastic gradients are such that: (i) the sampled data points ξ_i^{(t)} and ξ_j^{(ℓ)} are independent across times t, ℓ and nodes i ≠ j; (ii) stochastic gradients are locally unbiased: E[f_{ξ_i^{(t)}}] = f_i for all t, i; (iii) the objectives f_{ξ_i^{(t)}} are convex and ζ_ξ-smooth for all t, i, with E[ζ_ξ D_{f_ξ}(x, y)] ≤ ζ D_f(x, y) for all x, y; (iv) all local objectives f_i are µ-strongly-convex for µ ≥ 0 and L-smooth.
745
+ Large learning rates.
746
+ The smoothness constant ζ of the stochastic functions fξ defines
747
+ the level of noise in the problem (the lower, the better) in the transient regime. The ratio
748
+ ζ/L compares the difficulty of optimizing with stochastic gradients to the difficulty with the
749
+ true global gradient before reaching the ‘variance region’ in which the iterates of D-SGD with
750
+ a constant learning rate lie almost surely as t → ∞. This ratio is thus especially important
751
in interpolating settings when all f_{ξ_i^{(t)}} have the same minimum, so that the 'variance region'
754
+ is reduced to the optimum x⋆. Assuming better smoothness for the global average objective
755
+ than for the local functions is key to showing that averaging between workers allows for larger
756
+ learning rates. Without communication, convergence to the ‘variance region’ is ensured for
757
+ learning rates η ≤ 1/ζ. If ζ ≈ L, there is little noise and cooperation only helps to reduce
758
+ the final variance, and to get closer to the global minimum (instead of just your own). Yet,
759
+ in noisy regimes (ζ ≫ L), such as in Section 3.1 in which ζ = d + 2 ≫ 1 = L, averaging
760
+ enables larger learning rates up to min(1/L, n/ζ), greatly speeding up the initial training
761
+ phase. This is precisely what we will prove in Theorem 6.
762
If the workers always remain close (x_i ≈ (1/n)(x_1 + . . . + x_n) ∀i, or equivalently (1/n)11^⊤x ≈ x), D-SGD behaves the same as SGD on the average parameter (1/n) Σ_{i=1}^n x_i, and the learning rate depends on max(ζ/n, L), showing a reduction of the variance by n. Maintaining "(1/n)11^⊤x ≈ x", however, requires a small learning rate. This is a common starting point for the analysis of D-SGD, in particular for the proofs in Koloskova et al. (2020). On the other extreme, if we do not assume closeness between workers, "Ix ≈ x" always holds. In this case, there is no variance reduction, but no requirement for a small learning rate either. In Section 3.1, we found that, at the optimal learning rate, workers are not close to all other workers, but they are close to others that are not too far away in the graph.
777
+ We capture the concept of ‘local closeness’ by defining a neighborhood matrix M ∈ Rn×n.
778
+ It allows us to consider semi-local averaging beyond direct neighbors, but without fully
779
+ averaging with the whole graph. We ensure that “Mx ≈ x”, leading to an improvement in the
780
+ smoothness somewhere between ζ (achieved alone) and ζ/n (achieved when global consensus
781
784
+ is maintained). Each neighborhood matrix M implies a requirement on the learning rate, as
785
+ well as an improvement in smoothness.
786
+ While we can conduct our analysis with any M, those matrices that strike a good balance
787
+ between the learning rate requirement and improved smoothness are most interesting. Based
788
+ on Section 3.1, we therefore focus on a specific construction of matrices: We choose M as
789
the covariance of a decay-γ 'random walk process' on the graph, as in (5), meaning that
M = (1 − γ) Σ_{k=1}^∞ γ^{k−1} W^{2k} = (1 − γ) W²(I − γW²)^{-1}.   (21)
796
Varying γ induces a spectrum of averaging neighborhoods, from M = W² (γ = 0) to M = (1/n)11^⊤ (γ = 1). γ also implies an effective number of neighbors n_W(γ): the larger γ, the larger n_W(γ). We make the following assumption on the neighborhood matrix M:
800
+ Assumption 5 The neighborhood matrix M is of the form of (21), and all the diagonal
801
+ elements have the same value, i.e., Mii = Mjj for all i, j.
802
Assumption 5 implies that M_ii^{-1} = n_W(γ): the effective number of neighbors defined in (6) is equal to the inverse of the self-weights of M. This comes from the fact that the trace of M is equal to the sum of its eigenvalues. Otherwise, all results that require Assumption 5 hold by replacing n_W(γ) with min_i M_ii^{-1}. Besides this interesting relationship with the effective number of neighbors n_W(γ), we will be interested in another spectral property of M, namely the constant β(γ) (which only depends on γ through M, but we make this dependence explicit), which is such that:
L_M ≼ β(γ)^{-1} L_W W.   (22)
This constant can be interpreted as the strong convexity of the semi-norm defined by L_W W relative to the one defined by L_M. Due to the form of M, we have 1 − λ_2(W) ≤ β(γ) ≤ 1, and the lower bound is tight for γ → 1. However, the specific form of M (involving neighborhoods as defined by W) and the use of γ < 1 ensure a much larger constant β(γ) in general.
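The identity M_ii^{-1} = n_W(γ) implied by Assumption 5 can be checked numerically on a vertex-transitive graph such as the ring (helper names are ours):

```python
import numpy as np

def ring(n):
    # Ring gossip matrix with uniform 1/3 weights (vertex-transitive).
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    return W

def neighborhood_matrix(W, gamma):
    # Equation (21): M = (1 - gamma) W^2 (I - gamma W^2)^{-1}.
    W2 = W @ W
    return (1 - gamma) * W2 @ np.linalg.inv(np.eye(W.shape[0]) - gamma * W2)

def effective_neighbors(W, gamma):
    lam = np.linalg.eigvalsh(W)  # Proposition 2
    return (1 / (1 - gamma)) / np.mean(lam ** 2 / (1 - gamma * lam ** 2))

n, gamma = 12, 0.8
M = neighborhood_matrix(ring(n), gamma)
```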
817
Fixed points of D-(S)GD.
In Vogels et al. (2022), we consider a homogeneous setting, in which E[f_{ξ_i^{(t)}}] = f for all i. We now go beyond this analysis, and consider a setting in which the local functions f_i might be different. In this case, constant-learning-rate Decentralized Gradient Descent (the deterministic version of D-SGD) does not converge to the minimizer of the average function but to a different one. Let us now consider this fixed point x⋆_η, which verifies:
η∇F(x⋆_η) + L_W x⋆_η = 0.   (23)
Note that x⋆_η crucially depends on the learning rate η (which we emphasize in the notation) and that it is generally not at consensus (L_W x⋆_η ≠ 0). In the presence of stochastic noise, D-SGD will oscillate in a neighborhood (proportional to the gradients' variance) of this fixed point x⋆_η, and so from now on we will refer to x⋆_η as the fixed point of D-SGD.
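For quadratic local objectives the fixed-point equation (23) is a linear system, which makes x⋆_η easy to compute explicitly. The sketch below (with made-up local optima b_i) checks that x⋆_η is not at consensus, and that it tends to the average minimizer as η → 0:

```python
import numpy as np

def ring(n):
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    return W

def fixed_point(W, b, eta):
    # With f_i(x) = (x - b_i)^2 / 2 we have (grad F(x))_i = x_i - b_i,
    # so Equation (23) becomes (eta I + L_W) x* = eta b.
    L_W = np.eye(W.shape[0]) - W
    return np.linalg.solve(eta * np.eye(W.shape[0]) + L_W, eta * b)

n, eta = 8, 0.1
rng = np.random.default_rng(1)
b = rng.standard_normal(n)  # heterogeneous local optima
W = ring(n)
x_eta = fixed_point(W, b, eta)
```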
839
In the remainder of this section, we show that the results from Vogels et al. (2022) still hold as long as we replace the global minimizer x⋆ (the solution of Problem (18)) by this fixed point x⋆_η. More specifically, we measure convergence by ensuring the decrease of the following Lyapunov function:
847
L_t = ∥x^{(t)} − x⋆_η∥²_M + ω∥x^{(t)} − x⋆_η∥²_{L_M} = (1 − ω)∥x^{(t)} − x⋆_η∥²_M + ω∥x^{(t)} − x⋆_η∥²,   (24)
for some parameter ω ∈ [0, 1], and where L_M = I − M. Then, we will show how these results imply convergence to a neighborhood of x⋆_η, and that this neighborhood shrinks with smaller learning rates η. More specifically, the section unrolls as follows:
860
1. Theorem 6 first proves a general convergence result to x⋆_η, the fixed point of D-(S)GD.
2. Theorem 9 then bounds the distance to the true optimum for general learning rates.
3. Corollary 10 finally gives a full convergence result with optimized learning rates. Readers interested in quickly comparing our results with state-of-the-art ones can skip to this result.
866
+ 4.2 General convergence result
867
+ Theorem 6 provides convergence rates for any choice of the parameter γ that determines the
868
+ neighborhood matrix M, and for any Lyapunov parameter ω. The best rates are obtained
869
+ for specific γ and ω that balance the benefit of averaging with the constraint it imposes on
870
+ closeness between neighbors. We will discuss these choices more in depth in the next section.
871
Theorem 6 If Assumptions 4 and 5 hold and if η is such that
η ≤ min( β(γ)ω/L , 1 / ( 4[(n_W(γ)^{-1} + ω)ζ + L] ) ),   (25)
then the Lyapunov function defined in (24) verifies the following:
L^{(t+1)} ≤ (1 − ηµ)L^{(t)} + η²σ²_M,
where σ²_M = 2[(1 − ω)n_W(γ)^{-1} + ω] E[ ∥∇F_{ξt}(x⋆_η) − ∇F(x⋆_η)∥² ].
897
This theorem shows convergence (up to a variance region) to the fixed point x⋆_η of D-SGD, regardless of the 'true' minimizer x⋆. Although converging to x⋆_η might not be ideal depending on the use case (but do keep in mind that x⋆_η → x⋆ as η shrinks), this is what D-SGD does, and so we believe it is important to start by stating this clearly. The homogeneous case did not have this problem since x⋆_η = x⋆ for all η that implied convergence.
906
+ Parameter ω ∈ [0, 1] is free, and it is often convenient to choose it as ω = ηL/β(γ) to
907
+ get rid of the first condition on η. However, we present the result with a free parameter ω
908
+ since, as we will see in the remainder of this section, setting ω = nW(γ)−1 allows for simple
909
+ corollaries.
910
+ Proof
911
+ We now detail the proof, which is both a simplification and generalization of
912
+ Theorem IV from Vogels et al. (2022).
913
916
1 - General decomposition
We first analyze the first term in the Lyapunov function (24), and use the fixed-point condition (23) to write:
E[ ∥x^{(t+1)} − x⋆_η∥²_M ] = ∥x^{(t)} − x⋆_η∥²_M + E[ ∥η∇F_{ξt}(x^{(t)}) + L_W x^{(t)}∥²_M ]
− 2η ( ∇F(x^{(t)}) − ∇F(x⋆_η) )^⊤ M (x^{(t)} − x⋆_η) − 2∥x^{(t)} − x⋆_η∥²_{L_W M}.   (26)
The decomposition of the second term of the Lyapunov function is the same, with I in place of M.
940
2 - Error terms
We start by bounding the error terms, and use the optimality conditions to obtain:
E[ ∥η∇F_{ξt}(x^{(t)}) + L_W x^{(t)}∥²_M ]
= E[ ∥η( ∇F_{ξt}(x^{(t)}) − ∇F(x⋆_η) ) + L_W(x^{(t)} − x⋆_η)∥²_M ]
= E[ ∥η( ∇F_{ξt}(x^{(t)}) − ∇F_{ξt}(x⋆_η) ) + ( η( ∇F_{ξt}(x⋆_η) − ∇F(x⋆_η) ) + L_W(x^{(t)} − x⋆_η) )∥²_M ]
≤ 2η² E[ ∥∇F_{ξt}(x^{(t)}) − ∇F_{ξt}(x⋆_η)∥²_M ] + 2η² E[ ∥∇F_{ξt}(x⋆_η) − ∇F(x⋆_η)∥²_M ] + 2∥x^{(t)} − x⋆_η∥²_{L_W M L_W},
where the last inequality comes from the bias-variance decomposition. The second term corresponds to the variance, whereas the first and last ones will be canceled by descent terms.
991
+ M
992
+
993
+ + 2∥x(t) − x⋆
994
+ η∥2
995
+ LWMLW,
996
+ where the last inequality comes from the bias-variance decomposition. The second term
997
+ corresponds to variance, whereas the first and last one will be canceled by descent terms.
998
+ Stochastic gradient noise.
999
+ To bound the first term, we crucially use that stochastic
1000
+ noises are independent for two different nodes, so in particular:
1001
+ E
1002
+
1003
+ ∥∇Fξt(x(t)) − ∇Fξt(x⋆
1004
+ η)∥2
1005
+ M
1006
+
1007
+ = nW(γ)−1 E
1008
+
1009
+ ∥∇Fξt(x(t)) − ∇Fξt(x⋆
1010
+ η)∥2�
1011
+ + ∥∇F(x(t)) − ∇F(x⋆
1012
+ η)∥2
1013
+ M−nW(γ)−1I
1014
+ ≤ 2nW(γ)−1 E
1015
+
1016
+ ζξtDFξt(x⋆
1017
+ η, x(t))
1018
+
1019
+ + ∥∇F(x(t)) − ∇F(x⋆
1020
+ η)∥2
1021
+ ≤ 2
1022
+
1023
+ nW(γ)−1ζ + L
1024
+
1025
+ DF (x(t), x⋆
1026
+ η),
1027
+ where we used that M ≼ I, the L-cocoercivity of F, and the noise assumption, i.e.,
1028
+ E
1029
+
1030
+ ζξtDFξt
1031
+
1032
+ ≤ ζDF .
1033
+ The effective number of neighbors nW(γ) kicks in since Assump-
1034
+ tion 5 implies that the diagonal of M is equal to nW(γ)−1I. Using independence again, we
1035
+ obtain:
1036
+ E
1037
+
1038
+ ∥∇Fξt(x⋆
1039
+ η) − ∇F(x⋆
1040
+ η)∥2
1041
+ M
1042
+
1043
+ = nW(γ)−1 E
1044
+
1045
+ ∥∇Fξt(x⋆
1046
+ η) − ∇F(x⋆
1047
+ η)∥2�
1048
+ (27)
1049
+ Performing the same computations for the E
1050
+
1051
+ ∥∇Fξt(x(t)) − ∇F(x⋆
1052
+ η)∥2�
1053
+ term and adding
1054
+ consensus error leads to:
1055
+ E
1056
+
1057
+ ∥η∇Fξt(x(t)) + LWx(t)∥2
1058
+ (1−ω)M+ωI
1059
+
1060
+ ≤ 4
1061
+ ��
1062
+ (1 − ω)nW(γ)−1 + ω
1063
+
1064
+ ζ + (1 − ω)L
1065
+
1066
+ DF (x(t), x⋆
1067
+ η)
1068
+ + 2η2((1 − ω)nW(γ)−1 + ω) E
1069
+
1070
+ ∥∇Fξt(x⋆
1071
+ η) − ∇F(x⋆
1072
+ η)∥2�
1073
+ + 2∥x(t) − x⋆
1074
+ η∥2
1075
+ LW[M+ωLM]LW
1076
+ (28)
1077
+ Here, the first term will be controlled by the descent obtained through the gradient terms,
1078
+ and the second one through communication terms.
Vogels, Hendrikx, Jaggi

3 - Descent terms

Gradient terms. We first analyze the effect of all gradient terms. In particular, we use that (1 − ω)M + ωI = I − (1 − ω)LM. Then, we use that

(∇F(x(t)) − ∇F(x⋆η))⊤(x(t) − x⋆η) = DF(x(t), x⋆η) + DF(x⋆η, x(t)),

and:

2(∇F(x(t)) − ∇F(x⋆η))⊤ LM(x(t) − x⋆η)
  ≤ 2∥∇F(x(t)) − ∇F(x⋆η)∥ ∥LM(x(t) − x⋆η)∥
  ≤ (1/(2L))∥∇F(x(t)) − ∇F(x⋆η)∥² + 2L∥x(t) − x⋆η∥²_{LM²}
  ≤ DF(x(t), x⋆η) + 2L∥x(t) − x⋆η∥²_{LM²}.

Overall, the gradient terms sum to:

−2(∇F(x(t)) − ∇F(x⋆η))⊤(x(t) − x⋆η) + 2(1 − ω)(∇F(x(t)) − ∇F(x⋆η))⊤ LM(x(t) − x⋆η)
  ≤ −2DF(x⋆η, x(t)) − (1 + ω)DF(x(t), x⋆η) + 2(1 − ω)L∥x(t) − x⋆η∥²_{LM²}
  ≤ −µ∥x(t) − x⋆η∥² − DF(x(t), x⋆η) + 2L∥x(t) − x⋆η∥²_{LM²}
  ≤ −(1 − ω)µ∥x(t) − x⋆η∥²_M − ωµ∥x(t) − x⋆η∥² − DF(x(t), x⋆η) + 2β(γ)⁻¹L∥x(t) − x⋆η∥²_{LM LW W},    (29)

where we used that LM ≼ β(γ)⁻¹LWW.
Gossip terms. We simply recall the gossip terms we use for descent here, which write:

−2∥x(t) − x⋆η∥²_{LW M} − 2ω∥x(t) − x⋆η∥²_{LW LM}.    (30)
4 - Putting everything together. We now add all the descent and error terms together. More specifically, using Equations (28), (29) and (30) we obtain:

L(t+1) ≤ (1 − ηµ)L(t)
  − 2∥x(t) − x⋆η∥²_{LW M(I−LW)}
  − 2ω[1 − ηL/(ωβ(γ))] ∥x(t) − x⋆η∥²_{LW LM W}
  − η[1 − 4η(((1 − ω)nW(γ)⁻¹ + ω)ζ + (1 − ω)L)] DF(x(t), x⋆η)
  + 2η²((1 − ω)nW(γ)⁻¹ + ω) E[∥∇Fξt(x⋆η) − ∇F(x⋆η)∥²].

The conditions in the theorem are chosen so that the terms from lines 3 and 4 are positive (which is automatically true for line 2), and using that 1 − ω ≤ 1 (since ω is small anyway).
Beyond spectral gap

[Figure 3 plots: the maximum learning rate (L = 1.0, ζ = 2000) against the effective number of neighbors nW(γ), on the left for a 32-worker ring (with heatmaps of M), and on the right for the Ring, Torus (4x8) and Hypercube topologies, with the regimes "Restricted by noise" and "Restricted by consensus" marked.]
Figure 3: Maximum learning rates prescribed by Theorem 6, varying the parameter γ that implies an effective neighborhood size (x-axis) and an averaging matrix M (drawn as heatmaps). On the left, we show the details for a 32-worker ring topology, and on the right, we compare it to more connected topologies. Increasing γ (and with it nW(γ)) initially leads to larger learning rates thanks to noise reduction. At the optimum, the cost of consensus exceeds the benefit of further reduced noise.
4.3 Main corollaries

4.3.1 Large learning rate: speeding up convergence for large errors

We now investigate Theorem 6 in the case in which both the noise σ² and the heterogeneity ∥∇F(x⋆)∥²_{LW†} are small (compared to L(0)), and so we would like to have the highest possible learning rate in order to ensure a fast decrease of the objective (which is consistent with Figure 1). Using (25), we obtain a rate for each parameter γ that controls the local neighborhood size (remember that β(γ) depends on γ). The task that remains is to find the γ parameter that gives the best convergence guarantees (the largest learning rate). As explained before, one should never reduce the learning rate in order to be close to others, because the goal of collaboration (in this regime in which we are not affected by variance and heterogeneity) is to increase the learning rate.

We illustrate this in Figure 3, which we obtain by choosing ω = nW(γ)⁻¹ and evaluating the two terms of (25) for different values of γ. The expression for the linear part of the curve (before consensus dominates) is given in Corollary 7.
Corollary 7 Consider that Assumptions 4 and 5 hold. Then the largest (up to constants) learning rate is obtained as:

η = (8ζ/nW(γ) + 4L)⁻¹, for γ such that 4nW(γ)⁻¹β(γ)(2nW(γ)⁻¹ζ + L) ≥ L.    (31)
We see that the learning rate scales linearly with the number of effective neighbors in this case (which is equivalent to taking a mini-batch of size linear in nW(γ)) until a certain number of neighbors is reached (condition on the right), or centralized performance is achieved (ζ = nW(γ)L). The condition on γ always has a solution since when γ ≈ 0, both β(γ) and nW(γ)⁻¹ are close to 1, and they both decrease when γ grows. This corollary directly follows from taking ω = nW(γ)⁻¹ in Theorem 6. Note that a slightly tighter choice could be obtained by setting ω = ηL/β(γ).
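As a numerical illustration of this scaling, the learning-rate formula of Corollary 7 can be evaluated directly (a minimal sketch; the function just restates (31), and the values ζ = 2000, L = 1 are the ones used in Figure 3):

```python
def lr_large(n_eff: float, zeta: float, L: float) -> float:
    # Largest learning rate of Corollary 7: eta = (8*zeta/n_W(gamma) + 4*L)^-1.
    return 1.0 / (8.0 * zeta / n_eff + 4.0 * L)

zeta, L = 2000.0, 1.0
# Noise-dominated regime (zeta >> n_eff * L): doubling the effective number
# of neighbors roughly doubles the admissible learning rate.
print(lr_large(1.0, zeta, L), lr_large(2.0, zeta, L))
# Saturation: as n_eff grows, eta approaches the centralized value 1/(4L).
print(lr_large(1e9, zeta, L))
```

Once ζ ≈ nW(γ)L, the 4L term dominates and additional effective neighbors no longer increase the admissible learning rate, which is the saturation discussed above.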
Investigating β(γ). We now evaluate β(γ) in order to obtain more precise bounds. In particular, choosing M as in (21), the eigenvalues of LM are equal to:

λi^{LM} = (1 − λi²)/(1 − γλi²),    (32)

where the λi are the eigenvalues of W. In particular, β(γ)LM ≼ WLW translates into the fact that for all i such that λi ≠ 1 (automatically verified in this case), we want:

β(γ) ≤ (1 − γλi²)/(1 − λi²) · (1 − λi)λi = λi(1 − γλi²)/(1 + λi).    (33)

We now make the simplifying assumption that λmin(W) ≥ 1/2 (which we can always enforce by taking W′ = (I + W)/2), but note that the theory holds regardless. We motivate this simplifying assumption by the fact that for arbitrarily small spectral gaps, the right side of (33) will always be minimized for λ2(W) assuming γ is large enough, so the actual value of λmin(W) < 1 does not matter. In particular, in this case, neglecting the effect of the spectral gap, we can just take:

β(γ) = (1 − γλ2(W)²)/4 ≥ (1 − γ)/4.    (34)

Note that β(γ) allows for large γ when the spectral gap 1 − λ2(W) is large, but we allow non-trivial learning rates η > 0 even when λ2(W) = 1 (infinite graphs) as long as γ < 1.
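These bounds are easy to sanity-check numerically: for any concrete gossip matrix, one can compare the exact constraint (33), minimized over the eigenvalues, against the simplified choice (34). A minimal sketch (the ring weights 1/4, 1/2, 1/4 are an illustrative choice with eigenvalues (1 + cos(2πk/n))/2, shifted by (I + W)/2 to enforce λmin(W) ≥ 1/2):

```python
import math

def shifted_ring_eigenvalues(n: int) -> list:
    # Ring gossip with weights (1/4, 1/2, 1/4) has eigenvalues (1 + cos(2*pi*k/n))/2;
    # the shift W' = (I + W)/2 maps them into [1/2, 1].
    return [(3.0 + math.cos(2.0 * math.pi * k / n)) / 4.0 for k in range(n)]

def beta_exact(gamma: float, eigs) -> float:
    # Tightest beta(gamma) allowed by (33), over eigenvalues different from 1.
    return min(l * (1.0 - gamma * l * l) / (1.0 + l) for l in eigs if l < 1.0)

def beta_simplified(gamma: float, eigs) -> float:
    # The simplified choice (34), based on the second-largest eigenvalue.
    lam2 = max(l for l in eigs if l < 1.0)
    return (1.0 - gamma * lam2 * lam2) / 4.0

eigs = shifted_ring_eigenvalues(32)
for gamma in (0.0, 0.5, 0.9, 0.99):
    assert beta_simplified(gamma, eigs) <= beta_exact(gamma, eigs)
```

The gap between the two reflects the factor 1/4 versus λ/(1 + λ) ≥ 1/3 when λmin(W) ≥ 1/2; the simplified form is slightly conservative but depends only on γ and λ2(W).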
Optimal choice of nW(γ). Leveraging the spectral dimension results from Section 3.1, we obtain the following corollary:

Corollary 8 Under Assumptions 4 and 5, and assuming that λmin(W) ≥ 1/2, that the communication graph has spectral dimension ds > 2, and that ζ ≫ L, the highest possible learning rate is

η = (1/8)(cs(ds − 2)/(ζ²L))^{1/3}, obtained for nW(γ) = (cs(ds − 2)ζ/L)^{1/3}.    (35)
This result follows from Corollary 7, which, if ζ ≫ L, writes:

L/ζ ≤ 8nW(γ)⁻²β(γ) = nW(γ)⁻³cs(ds − 2),    (36)

where the right part is obtained by plugging the expression for β(γ) from (34) into nW(γ)⁻¹ ≤ 2(1 − γ)/(cs(ds − 2)) from (14) (assuming γ ≥ 1/2). Then, one can solve for 1 − γ. The assumptions besides Assumption 4 allow us to give a simple result in this specific case, but similar expressions can easily be obtained for ds ≤ 2 and ζ < LnW(γ).
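The optimum of Corollary 8 can also be recovered numerically by intersecting the two constraints on η: the noise constraint of Corollary 7 and the consensus constraint η ≤ β(γ)/(nW(γ)L), with β(γ) = nW(γ)⁻¹cs(ds − 2)/8. A sketch under these assumptions (the values ζ = 2000, L = 1, cs(ds − 2) = 2 are arbitrary illustrations):

```python
zeta, L, cs_ds = 2000.0, 1.0, 2.0  # cs_ds denotes the product cs * (ds - 2)

def admissible_lr(n_eff: float) -> float:
    noise_bound = 1.0 / (8.0 * zeta / n_eff + 4.0 * L)   # Corollary 7
    consensus_bound = cs_ds / (8.0 * n_eff ** 2 * L)     # beta(gamma)/(n_eff * L)
    return min(noise_bound, consensus_bound)

# The first bound increases with n_eff and the second decreases, so the best
# learning rate sits at their crossing point: n_eff^3 ~ cs*(ds-2)*zeta/L.
grid = [1.0 + 0.01 * i for i in range(10000)]
n_opt = max(grid, key=admissible_lr)
n_theory = (cs_ds * zeta / L) ** (1.0 / 3.0)
print(n_opt, n_theory)  # close, up to the 4L correction and grid resolution
```

The grid optimum matches the closed form of (35), mirroring the peak of the curves in Figure 3.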
4.3.2 Small learning rate: approaching the optimum arbitrarily closely

Theorem 6 gives a convergence result to x⋆η, the fixed point of D-SGD, and we have investigated in the previous section the behavior of D-SGD for large learning rates. In Theorem 9, we focus on small error levels, for which the variance and heterogeneity terms dominate, and we would like to take small learning rates η. In this setting, we bound the distance between the current iterate and the true minimizer x⋆ instead of x⋆η. We also provide a result that gets rid of all dependence on x⋆η, and only explicitly depends on the learning rate η.
Theorem 9 Under the same assumptions and conditions on the learning rate as Theorem 6 and Corollary 8, we have that:

∥x(t) − x⋆∥²_M ≤ 2(1 − ηµ)^t L(0) + 2ησ²_M/µ + 2η²(1 + κ)∥LW†∇F(x⋆η)∥².    (37)

We can further remove x⋆η from the bound, and obtain:

∥x(t) − x⋆∥²_M ≤ 2(1 − ηµ)^t L(0) + 6ησ²_{M,⋆}/µ + 6η²κp⁻¹∆²_W,

where σ²_{M,⋆} = (nW(γ)⁻¹ + ω) E[∥∇Fξ(x⋆) − ∇F(x⋆)∥²], p⁻¹ = max_η ∥LW†∇F(x⋆η)∥² / ∥∇F(x⋆η)∥²_{LW†}, so that p ≥ 1 − λ2(W), and ∆²_W = ∥∇F(x⋆)∥²_{LW†}.

The norm ∥x(t) − x⋆∥²_M considers convergence of locally averaged neighborhoods, but ∥x(t) − x⋆∥²_M ≥ ∥x̄(t) − x⋆∥² (where x̄(t) is the average parameter) since 1 is an eigenvector of M with eigenvalue 1. We now briefly discuss the various terms in this corollary, and then prove it.

Heterogeneity term. The term due to heterogeneity only depends on the distance between the true optimum x⋆ and the fixed point x⋆η, which we then transform into a condition on ∥∇F(x⋆)∥²_{LW†}. In particular, it is not influenced by the choice of M (and thus of γ).

Constant p. We introduce the constant p to get rid of the explicit dependence on x⋆η. Indeed, p⁻¹ intuitively denotes how large LW† is in the direction of ∇F(x⋆η). For instance, if ∇F(x⋆η) is an eigenvector of W associated with eigenvalue λ, then we have p = 1 − λ. In the worst case, we have that p = 1 − λ2(W), but p can be much better in general, when the heterogeneity is spread evenly, instead of having very different functions on distant nodes.
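The constant p can be computed explicitly in the eigenbasis of W: writing the heterogeneity direction g = ∇F(x⋆η) as g = Σj cj vj over the eigenvectors with λj < 1, we get p⁻¹ = (Σj cj²/(1 − λj)²) / (Σj cj²/(1 − λj)). A minimal sketch (the eigenvalues below are arbitrary illustrative values):

```python
def p_inverse(coeffs, eigs) -> float:
    # g = sum_j coeffs[j] * v_j in the eigenbasis of W (unit eigenvalue excluded).
    # L_W has eigenvalues 1 - lambda_j, hence:
    #   ||LW^+ g||^2    = sum_j c_j^2 / (1 - l_j)^2
    #   ||g||^2_{LW^+}  = sum_j c_j^2 / (1 - l_j)
    num = sum(c * c / (1.0 - l) ** 2 for c, l in zip(coeffs, eigs))
    den = sum(c * c / (1.0 - l) for c, l in zip(coeffs, eigs))
    return num / den

eigs = [0.9, 0.5, 0.1]
# If g is a single eigenvector with eigenvalue lam, then p = 1 - lam exactly:
assert abs(1.0 / p_inverse([1.0, 0.0, 0.0], eigs) - 0.1) < 1e-12
# Mixed heterogeneity: p lies between 1 - lambda_2(W) (worst case, heterogeneity
# aligned with the slowest direction) and 1 - lambda_min (best case).
p = 1.0 / p_inverse([1.0, 1.0, 1.0], eigs)
assert 0.1 <= p <= 0.9
```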
Variance term. In this case, the largest variance reduction (of order n) is obtained by taking ω and nW(γ)⁻¹ as small as possible. For learning rates that are too large to imply nW(γ)⁻¹ ≈ n⁻¹, decreasing the learning rate decreases the variance term in two ways: (i) directly, through the η factor, and (ii) indirectly, by allowing us to take smaller values of nW(γ)⁻¹.

For very large (infinite) graphs, we can take ω = nW(γ)⁻¹, and in this case Theorem 6 gives that the smallest nW(γ)⁻¹ is given by nW(γ)⁻¹β(γ) = ηL. Using spectral dimension results (for instance with ds > 2), we obtain (similarly to Corollary 8) that we can take β(γ) = nW(γ)⁻¹cs(ds − 2)/8, and so:
nW(γ)⁻¹ = √(8ηL/(cs(ds − 2))),    (38)

so the residual variance term for this choice of nW(γ)⁻¹ is of order:

O( (η^{3/2}/µ) √(L/(cs(ds − 2))) E[∥∇Fξ(x⋆) − ∇F(x⋆)∥²] ).    (39)

In particular, we obtain super-linear scaling when reducing the learning rate η, thanks to the added benefit of gaining more effective neighbors. Note that again, the cases ds ≤ 2 can be treated in the same way.
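This super-linear effect can be read off directly from (38) and (39) (a sketch, up to the absolute constant hidden in the O(·)):

```python
import math

def n_eff_inv(eta: float, L: float, cs_ds: float) -> float:
    # Equation (38): smallest admissible n_W(gamma)^-1 for a given learning rate.
    return math.sqrt(8.0 * eta * L / cs_ds)

def residual_variance(eta: float, mu: float, L: float, cs_ds: float, sigma2: float) -> float:
    # Order of equation (39): (eta / mu) * n_W(gamma)^-1 * sigma^2.
    return (eta / mu) * n_eff_inv(eta, L, cs_ds) * sigma2

# Halving the learning rate divides the residual variance by 2^(3/2), not 2:
# the smaller step also buys more effective neighbors.
v1 = residual_variance(0.01, 0.1, 1.0, 2.0, 1.0)
v2 = residual_variance(0.005, 0.1, 1.0, 2.0, 1.0)
assert abs(v2 / v1 - 0.5 ** 1.5) < 1e-9
```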
Proof [Theorem 9] We start by writing:

∥x(t) − x⋆∥²_M ≤ 2∥x(t) − x⋆η∥²_M + 2∥x⋆η − x⋆∥²_M ≤ 2L(t) + 2∥x⋆η − x⋆∥².    (40)

Theorem 6 ensures that L(t) becomes small, and so we are left with bounding the distance between x⋆η and x⋆.
1 - Distance to the global minimizer. We define x̄⋆η = 11⊤x⋆η/n. Using the fact that both x̄⋆η and x⋆ are at consensus, and 1⊤∇F(x⋆η) = 0 (immediate from (23)), we write:

DF(x⋆, x⋆η) = F(x⋆) − F(x⋆η) − ∇F(x⋆η)⊤(x⋆ − x⋆η)
  = F(x̄⋆η) − F(x⋆η) − ∇F(x⋆η)⊤(x̄⋆η − x⋆η) + F(x⋆) − F(x̄⋆η)
  ≤ DF(x̄⋆η, x⋆η),    (41)

where the last line comes from the fact that x⋆ is the minimizer of F on the consensus space. Therefore:

∥x⋆η − x⋆∥² = ∥x̄⋆η − x⋆∥² + ∥x⋆η − x̄⋆η∥²
  ≤ (1/µ)DF(x⋆, x⋆η) + ∥x⋆η − x̄⋆η∥²
  ≤ (1/µ)DF(x̄⋆η, x⋆η) + ∥x⋆η − x̄⋆η∥²
  ≤ (1 + L/µ)∥x̄⋆η − x⋆η∥² = η²(1 + L/µ)∥LW†∇F(x⋆η)∥².

Note that the result depends on the heterogeneity pattern of the gradients at the fixed point, and might be bounded (and even small) even when W has no spectral gap. However, this quantity is proportional to the squared inverse spectral gap in the worst case.

2 - Monotonicity in η. We now prove that ∥∇F(x⋆η)∥²_{LW†} decreases when η increases, and so is maximal for η = 0, corresponding to x⋆η = x⋆. More specifically:

d∥∇F(x⋆η)∥²_{LW†}/dη = d(η⁻²∥x⋆η∥²_{LW})/dη = −2∥x⋆η∥²_{LW}/η³ + 2η⁻²(x⋆η)⊤LW (dx⋆η/dη).

Differentiating the fixed-point conditions, we obtain that

η∇²F(x⋆η)(dx⋆η/dη) + ∇F(x⋆η) + LW(dx⋆η/dη) = 0,    (42)

so that:

dx⋆η/dη = −(η∇²F(x⋆η) + LW)⁻¹∇F(x⋆η) = η⁻¹(η∇²F(x⋆η) + LW)⁻¹LWx⋆η.    (43)

Plugging this into the previous expression and using that ∇²F(x⋆η) is positive semi-definite, we obtain:

d∥∇F(x⋆η)∥²_{LW†}/dη = −(2/η³)(x⋆η)⊤[LW − LW(LW + η∇²F(x⋆η))⁻¹LW]x⋆η
  ≤ −(2/η³)(x⋆η)⊤[LW − LWLW†LW]x⋆η = 0.

3 - Getting rid of x⋆η. By definition of p, we can write:

∥LW†∇F(x⋆η)∥² ≤ p⁻¹∥∇F(x⋆η)∥²_{LW†} ≤ p⁻¹∥∇F(x⋆)∥²_{LW†}.    (44)

Note that we have to bound this constant p in order to use the monotonicity in η of ∥∇F(x⋆η)∥²_{LW†}, since this result does not hold for ∥LW†∇F(x⋆η)∥². For the variance, we write that:

E[∥∇Fξt(x⋆η) − ∇F(x⋆η)∥²]
  ≤ 3E[∥∇Fξt(x⋆η) − ∇Fξt(x⋆)∥²] + 3E[∥∇Fξt(x⋆) − ∇F(x⋆)∥²] + 3∥∇F(x⋆η) − ∇F(x⋆)∥²
  ≤ 3σ²_{M,⋆} + 3(ζ + L)DF(x⋆, x⋆η).

From here, we use Equation (41) and obtain that:

E[∥∇Fξt(x⋆η) − ∇F(x⋆η)∥²] ≤ 3σ²_{M,⋆} + 3L(ζ + L)η²∥LW†∇F(x⋆η)∥².    (45)

To obtain the final result, we use that η(nW(γ)⁻¹ + ω)(ζ + L) ≤ 1/4 thanks to the conditions on the learning rate.
4.3.3 Comparison with existing work.

Expressed in the form of Koloskova et al. (2020), we can summarize the previous corollaries into the following result by taking η either as the largest possible constant (as indicated in Corollary 8) or as η = ˜O(1/(µT)). Here, ˜O denotes inequality up to logarithmic factors, and recall that ∥x(t) − x⋆∥²_M ≥ ∥x̄(t) − x⋆∥². We recall that L is the smoothness of the global objective f, ζ is the smoothness of the stochastic functions fξ, µ is the strong convexity parameter, ds is the spectral dimension of the gossip matrix W (and we assume ds > 2), and cs is the associated constant.

Corollary 10 (Final result.) Under the same assumptions as Corollary 8, there exists a choice of learning rate (and, equivalently, of decay parameters γ⋆large and γ⋆small) such that the expected squared distance to the global optimum after T steps of D-SGD, ∥x̄(t) − x⋆∥², is of order:

˜O( σ²/(µ²T nW(γ⋆small)) + L∆²_W/(µ³pT²) + exp(−nW(γ⋆large)µT/ζ) ),    (46)

where ∆²_W and p are defined in Theorem 9, and x̄(t) is the average parameter. The optimal effective numbers of neighbors in respectively the small and large learning rate settings are:

nW(γ⋆small) = min(√(csdsµT/L), n)  and  nW(γ⋆large) = min((csdsζ/L)^{1/3}, n).    (47)
This result can be contrasted with the result from Koloskova et al. (2020), which writes:

˜O( (σ²/(µ²T))(1/n + L/(µ(1 − λ2(W))T)) + L∆²/(µ³(1 − λ2(W))²T²) + exp(−µT/((1 − λ2(W))ζ)) ).    (48)

We can now make the following observations.
Scheduling the learning rate. Here, the learning rate is either chosen as ηlarge = nW(γ⋆large)/ζ, or as ηsmall = ˜O((µT)⁻¹). In practice, one would start with the large learning rate, and switch to ηsmall when training does not improve anymore (when the heterogeneity/variance terms dominate).

Exponential decrease term. We first show a significant improvement in the exponential decrease term. Indeed, nW(γ⋆large)/(1 − λ2(W)), the ratio between the largest learning rate permitted in our analysis versus existing ones, is always large since nW(γ⋆large) ≥ 1 and 1 − λ2(W) ≤ 1. Besides, the exponential decrease term is no longer affected by the spectral gap in our analysis, which only affects how big nW(γ) can be. This improvement holds even when ζ = L (in this case nW(γ) = 1 is enough), and is due to the fact that heterogeneity only affects lower-order terms, so that when cooperation brings nothing, it doesn't hurt convergence either.

Impact of heterogeneity. The improvement in the heterogeneous case does not depend on some γ, and relies on bounding heterogeneity in a non-worst-case fashion. Indeed, ∆²_W and p capture the interplay between how heterogeneity is distributed among nodes and the actual topology of the graph. Note that this does not contradict the lower bound from Koloskova et al. (2020), since ∆²_W/p = ∆²/(1 − λ2(W))² in the worst case. In the worst case, the heterogeneity pattern of ∇F(x⋆) is aligned with the smallest eigenvalue of LW, i.e., very distant nodes have very different objectives. The quantity p, however, gives more fine-grained bounds that depend on the actual heterogeneity pattern in general.

Variance term. One key difference between the analyses is in the variance term that involves σ². Both analyses depend on the variance of a single node, σ²/(µT), which is then multiplied by a 'variance reduction' term. In both cases, this term is of the form nW(γ)⁻¹ + ηLβ(γ)⁻¹. However, the standard analysis implicitly uses γ = 1, and so nW(γ) = n and β(γ) = 1 − λ2(W). Then, the form from (48) follows from taking η = ˜O(1/(µT)). Our analysis, on the other hand, relies on tuning γ such that nW(γ)⁻¹ + ηLβ(γ)⁻¹ is as small as possible, and is therefore strictly better than just considering γ = 1. Assuming a given spectral dimension ds > 2 for the graph leads to (46), but any assumption that precisely relates nW(γ) and γ would allow getting similar results.

While the ˜O(T⁻²) in the variance term of Koloskova et al. (2020) seems better than our ˜O(T⁻³/²) term, this is misleading because constants are very important in this case. Our rate is optimized over γ, which accounts for the fact that if the ˜O(T⁻²) term dominates, then it is better to just consider a smaller neighborhood. In that case, we would not benefit from the n⁻¹ variance reduction anyway. Our result optimally balances the two variance terms from (48) instead. Thanks to this balancing, we obtain that in graphs of spectral dimension ds > 2, the variance decreases as ˜O(T⁻³/²) with a learning rate of ˜O(T⁻¹), due to the combined effect of a smaller learning rate and adding more effective neighbors. In finite graphs, this effect caps at nW(γ) = n.

Finally, note that our analysis and the analysis of Koloskova et al. (2020) allow for different generalizations of the standard framework: our analysis applies to arbitrarily large (infinite) graphs, while Koloskova et al. (2020) can handle time-varying graphs with weak (multi-round) connectivity assumptions.
5. Empirical relevance in deep learning

While the theoretical results in this paper are for convex functions, the initial motivation for this work comes from observations in deep learning. First, it is crucial in deep learning to use a large learning rate in the initial phase of training (Li et al., 2019). Contrary to what current theory prescribes, we do not use smaller learning rates in decentralized optimization than when training alone (even when data is heterogeneous). And second, we find that the spectral gap of a topology is not predictive of the performance of that topology in deep learning experiments.

In this section, we experiment with a variety of 32-worker topologies on Cifar-10 (Krizhevsky et al.) with a VGG-11 model (Simonyan and Zisserman, 2015). Like other recent works (Lin et al., 2021; Vogels et al., 2021), we opt for this older model because it does not include BatchNorm (Ioffe and Szegedy, 2015), which forms an orthogonal challenge for decentralized SGD. Please refer to Appendix E of (Vogels et al., 2022) for full details on the experimental setup. Our set of topologies includes regular graphs like rings and toruses, but also irregular graphs such as a binary tree (Vogels et al., 2021) and a social network (Davis et al., 1930), and a time-varying exponential scheme (Assran et al., 2019). We focus on the initial phase of training, 25k steps in our case, where both train and test loss converge close to linearly. Using a large learning rate in this phase is found to be important for good generalization (Li et al., 2019).

Figure 4 shows the loss reached after the first 2.5k SGD steps for all topologies and for a dense grid of learning rates. The curves have the same global structure as those for isotropic quadratics in Figure 1: (sparse) averaging yields a small increase in speed for small learning rates, but a large gain over training alone comes from being able to increase the learning rate. The best schemes support almost the same learning rate as 32 fully-connected workers, and get close in performance.

[Figure 4 plot: Cifar-10 training loss after 2.5k steps (∼25 epochs) against the learning rate, for the topologies Binary tree, Fully connected, Hypercube, Ring, Social network, Solo, Star, Time-varying exponential, Torus (4x8), and Two cliques.]

Figure 4: Training loss reached after 2.5k SGD steps with a variety of graph topologies. In all cases, averaging yields a small increase in speed for small learning rates, but a large gain over training alone comes from being able to increase the learning rate. While the star has a better spectral gap (0.031) than the ring (0.013), it performs worse, and does not allow large learning rates. For reference, similar curves for fully-connected graphs of varying sizes are in the appendix of Vogels et al. (2022).

We also find that the random walks introduced in Section 3.1 are a good model for the variance between workers in deep learning. Figure 5 shows the empirical covariance between the workers after 100 SGD steps. Just like for isotropic quadratics, the covariance is accurately modeled by the covariance in the random walk process for a certain decay rate γ.

Finally, we observe that the effective number of neighbors computed by the variance reduction in a random walk (Section 3.1) accurately describes the relative performance under tuned learning rates of graph topologies on our task, including for irregular and time-varying topologies. This is in contrast to the topology's spectral gap, which we find to be not predictive. We fit a decay rate γ = 0.951 that seems to capture the specifics of our problem, and show the correlation in Figure 6.
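The 'effective number of neighbors' of a topology can be computed directly from its gossip matrix. A minimal sketch, assuming the decayed-series averaging matrix M = (1 − γ) Σ_{k≥0} γ^k W^{2(k+1)} (an assumption consistent with the eigenvalues of LM in (32)) and reading nW(γ) off the diagonal of M as in Assumption 5; the ring weights are an illustrative choice:

```python
import math

def effective_neighbors_ring(n: int, gamma: float) -> float:
    # Eigenvalues of a ring gossip matrix with weights (1/4, 1/2, 1/4).
    eigs = [(1.0 + math.cos(2.0 * math.pi * k / n)) / 2.0 for k in range(n)]
    # For a vertex-transitive graph, the diagonal of
    # M = (1 - gamma) * sum_k gamma^k W^(2k+2) is constant and equals
    # (1/n) * sum_j lambda_j^2 * (1 - gamma) / (1 - gamma * lambda_j^2).
    m_diag = sum(l * l * (1.0 - gamma) / (1.0 - gamma * l * l) for l in eigs) / n
    return 1.0 / m_diag

# Slower decay (gamma -> 1) averages over more rounds, so the effective
# number of neighbors grows from the immediate neighborhood...
assert effective_neighbors_ring(8, 0.0) < effective_neighbors_ring(8, 0.9)
# ...up to (almost) the full graph size.
assert abs(effective_neighbors_ring(8, 0.999) - 8.0) / 8.0 < 0.05
```

For irregular graphs the diagonal of M is no longer constant, but the same per-node quantity is what the fitted values in Figure 5 summarize.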
Appendix F of (Vogels et al., 2022) replicates the same experiments in a different setting. There, we use larger graphs (of 64 workers), a different model and data set (an MLP on Fashion MNIST (Xiao et al., 2017)), and no momentum or weight decay. The results in this setting are qualitatively comparable to the ones presented above.
[Figure 5 panels: for each topology, its gossip matrix (top row), the measured covariance on Cifar-10 (second row), and the covariance in the random walk (third row), with fitted values: Two cliques nW(γ := 0.948) = 17.8; Torus (4x8) nW(γ := 0.993) = 29.4; Star nW(γ := 0.986) = 5.1; Social network nW(γ := 0.992) = 27.3; Ring nW(γ := 0.983) = 13.9; Hypercube nW(γ := 0.997) = 31.3; Binary tree nW(γ := 0.984) = 12.3.]

Figure 5: Measured covariance in Cifar-10 (second row) between workers using various graphs (top row). After 10 epochs, we store a checkpoint of the model and train repeatedly for 100 SGD steps, yielding 100 models for 32 workers. We show normalized covariance matrices between the workers. These are very well approximated by the covariance in the random walk process of Section 3.1 (third row). We print the fitted decay parameters and corresponding 'effective number of neighbors'.

[Figure 6 plots: Cifar-10 training loss after 2.5k steps (∼25 epochs) against the effective number of neighbors (γ = 0.951, tuned) on the left, and against the spectral gap on the right; × marks fully-connected graphs.]

Figure 6: Cifar-10 training loss after 2.5k steps for all studied topologies with their optimal learning rates. Colors match Figure 4, and × indicates fully-connected graphs with a varying number of workers. After fitting a decay parameter γ = 0.951 that captures problem specifics, the effective number of neighbors (left), as measured by variance reduction in a random walk (like in Section 3.1), explains the relative performance of these graphs much better than the spectral gap of these topologies (right).

6. Conclusion

We have shown that the sparse averaging in decentralized learning allows larger learning rates to be used, and that it speeds up training. With the optimal large learning rate, the workers' models are not guaranteed to remain close to their global average. Enforcing global consensus is often unnecessary, and the small learning rates it requires can be counter-productive. Indeed, models do remain close to some local average in a weighted neighborhood around them, even with high learning rates. The workers benefit from a number of 'effective neighbors', potentially smaller than the whole graph, that allow them to use larger learning rates while retaining sufficient consensus within the 'local neighborhood'.
Similar insights apply when nodes have heterogeneous local functions: there is no need to enforce global averaging over the whole network when heterogeneity is small across local neighborhoods. Besides, there is no need to compensate for heterogeneity in the early phases of training, when models are all far from the global optimum.

Based on our insights, we encourage practitioners of sparse distributed learning algorithms to look beyond the spectral gap of graph topologies, and to investigate the actual 'effective number of neighbors' that is used. We also hope that our insights motivate theoreticians to be mindful of assumptions that artificially limit the learning rate, even though they are tight in worst cases. Indeed, the spectral gap is omnipresent in the decentralized literature, which sometimes hides subtle phenomena such as the superlinear decrease of the variance in the learning rate that we highlight.
We show experimentally that our conclusions hold in deep learning, but extending our theory to the non-convex setting is an important open direction that could reveal interesting new phenomena. Another interesting direction would be to better understand (beyond the worst case) the effective number of neighbors for irregular graphs.
Acknowledgments and Disclosure of Funding

This project was supported by SNSF grant 200020_200342.

We thank Lie He for valuable conversations and for identifying the discrepancy between a topology's spectral gap and its empirical performance. We also thank Raphaël Berthier for helpful discussions that allowed us to clarify the links between the effective number of neighbors and spectral dimension. We also thank Aditya Vardhan Varre, Yatin Dandi and Mathieu Even for their feedback on the manuscript.
1903
References

Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, and Michael G. Rabbat. Stochastic gradient push for distributed deep learning. In Proc. ICML, volume 97, pages 344–353, 2019.

Raphaël Berthier. Analysis and acceleration of gradient descents and gossip algorithms. PhD thesis, Université Paris Sciences & Lettres, 2021.

Raphaël Berthier, Francis R. Bach, and Pierre Gaillard. Accelerated gossip in networks of given dimension using Jacobi polynomial iterations. SIAM J. Math. Data Sci., 2(1):24–47, 2020.

Yatin Dandi, Anastasia Koloskova, Martin Jaggi, and Sebastian U. Stich. Data-heterogeneity-aware mixing for decentralized learning. CoRR, abs/2204.06477, 2022.

Allison Davis, Burleigh Bradford Gardner, and Mary R. Gardner. Deep South: A social anthropological study of caste and class. Univ of South Carolina Press, 1930.

Mathieu Even, Hadrien Hendrikx, and Laurent Massoulié. Decentralized optimization with heterogeneous delays: a continuous-time approach. arXiv preprint arXiv:2106.03585, 2021.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML, volume 37, pages 448–456, 2015.

Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian U. Stich. A unified theory of decentralized SGD with changing topology and local updates. In Proc. ICML, volume 119, pages 5381–5393, 2020.

Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-10 (Canadian Institute for Advanced Research).

B. Le Bars, Aurélien Bellet, Marc Tommasi, and Anne-Marie Kermarrec. Yes, topology matters in decentralized optimization: Refined convergence and topology learning under heterogeneous data. CoRR, abs/2204.04452, 2022.

Yuanzhi Li, Colin Wei, and Tengyu Ma. Towards explaining the regularization effect of initial large learning rate in training neural networks. In NeurIPS, pages 11669–11680, 2019.

Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. In NeurIPS, pages 5330–5340, 2017.

Xiangru Lian, Wei Zhang, Ce Zhang, and Ji Liu. Asynchronous decentralized parallel stochastic gradient descent. In Proc. ICML, volume 80, pages 3049–3058, 2018.

Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, and Martin Jaggi. Quasi-global momentum: Accelerating decentralized deep learning on heterogeneous data. In Proc. ICML, volume 139, pages 6654–6665, 2021.

Yucheng Lu and Christopher De Sa. Optimal complexity in decentralized training. In Proc. ICML, volume 139, pages 7111–7123, 2021.

Giovanni Neglia, Chuan Xu, Don Towsley, and Gianmarco Calbi. Decentralized gradient methods: does topology matter? In AISTATS, volume 108, pages 2348–2358, 2020.

Dominic Richards and Patrick Rebeschini. Optimal statistical rates for decentralised non-parametric regression with linear speed-up. In NeurIPS, pages 1214–1225, 2019.

Dominic Richards and Patrick Rebeschini. Graph-dependent implicit regularisation for distributed stochastic subgradient descent. J. Mach. Learn. Res., 21:34:1–34:44, 2020.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.

Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, and Ji Liu. D2: Decentralized training over decentralized data. In Proc. ICML, volume 80, pages 4855–4863, 2018.

Thijs Vogels, Lie He, Anastasia Koloskova, Sai Praneeth Karimireddy, Tao Lin, Sebastian U. Stich, and Martin Jaggi. RelaySum for decentralized deep learning on heterogeneous data. In NeurIPS, pages 28004–28015, 2021.

Thijs Vogels, Hadrien Hendrikx, and Martin Jaggi. Beyond spectral gap: the role of topology in decentralized learning. In NeurIPS, 2022.

Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, and Soummya Kar. MATCHA: speeding up decentralized SGD via matching decomposition sampling. CoRR, abs/1905.09435, 2019.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. CoRR, abs/1708.07747, 2017.

Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Pan Pan, and Wotao Yin. Exponential graph is provably efficient for decentralized deep training. In NeurIPS, pages 13975–13987, 2021.
5NA0T4oBgHgl3EQfNv96/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
6dAyT4oBgHgl3EQf2fn3/content/tmp_files/2301.00754v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
6dAyT4oBgHgl3EQf2fn3/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
6tE1T4oBgHgl3EQfBgIF/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16d76512f94e2defab7994c0e86c70c8d2b64997028dfcca6b53bc130f5a1139
+ size 3211309
6tE1T4oBgHgl3EQfBgIF/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:962894a696294b191db2efd539c85420c244f355f8248033aa0a90ca1524d806
+ size 116391
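The two hunks above are Git LFS pointer files: a `version` line followed by `oid` and `size` key–value pairs standing in for the large binary. As an illustrative sketch (the `parse_lfs_pointer` helper below is our own, not part of this repository or of Git LFS itself), such a pointer can be parsed like this:

```python
# Minimal, illustrative parser for a Git LFS pointer file (hypothetical helper,
# not part of this repository). A pointer is a tiny text file: a "version"
# line followed by key/value pairs such as "oid" and "size".
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # Basic sanity checks on the three fields shown in the hunks above.
    assert fields["version"].startswith("https://git-lfs.github.com/spec/")
    assert fields["oid"].startswith("sha256:")
    fields["size"] = int(fields["size"])
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:16d76512f94e2defab7994c0e86c70c8d2b64997028dfcca6b53bc130f5a1139
size 3211309
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 3211309
```

The `size` field is the byte count of the real object, which is what the commit viewer reports for each stored file.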
7dE1T4oBgHgl3EQfBgLt/content/tmp_files/2301.02854v1.pdf.txt ADDED
@@ -0,0 +1,533 @@
Absence of off-diagonal long-range order in hcp 4He dislocation cores

Maurice de Koning
Instituto de Física Gleb Wataghin, Universidade Estadual de Campinas, UNICAMP, 13083-859, Campinas, São Paulo, Brazil and
Center for Computing in Engineering & Sciences, Universidade Estadual de Campinas, UNICAMP, 13083-861, Campinas, São Paulo, Brazil

Wei Cai
Department of Mechanical Engineering, Stanford University, Stanford, CA 94305-4040

Claudio Cazorla and Jordi Boronat
Departament de Física, Universitat Politècnica de Catalunya, Campus Nord B4-B5, 08034 Barcelona, Spain

The mass transport properties along dislocation cores in hcp 4He are revisited by considering two types of edge dislocations as well as a screw dislocation, using a fully correlated quantum simulation approach. Specifically, we employ the zero-temperature path-integral ground state (PIGS) method together with ergodic sampling of the permutation space to investigate the fundamental dislocation core structures and their off-diagonal long-range order properties. It is found that the Bose-Einstein condensate fraction of such defective 4He systems is practically null (≤ 10⁻⁶), just as in the bulk defect-free crystal. These results provide compelling evidence for the absence of intrinsic superfluidity in dislocation cores in hcp 4He and challenge the superfluid dislocation-network interpretation of the mass-flux-experiment observations, calling for further experimental investigation.
Although torsional oscillator experiments on hcp 4He by Kim and Chan in 2004 [1, 2] initially pointed at the existence of superfluidity in a solid-phase system, also known as supersolidity [3, 4], subsequent examination unambiguously established that, instead, the observed phenomenology was a consequence of its anomalous mechanical behavior. Specifically, it was found to be caused by the obstructing influence of 3He impurities on the low-temperature mobility of lattice dislocations [5–10], the one-dimensional defects whose motion induces plastic deformation in crystalline solids [11, 12].

Still, the possibility of intrinsic supersolidity in hcp 4He has not been discarded, in particular due to a variety of mass flux experiments that report the flow of matter across solid 4He samples [13–22]. However, the interpretation of these observations remains controversial. On the one hand, it has been proposed that the matter flow is transmitted through a superfluid network of interconnected, one-dimensional dislocation cores [20–22]. This view relies fundamentally on the results of computational grand-canonical finite-temperature path-integral Monte Carlo (PIMC) studies of one group [23–25], which conclude that the cores of dislocations with Burgers vectors along the c-axis, b = [0001], are superfluid at ultralow temperatures of ∼ 0.1 K. In contrast, other authors argue that the mass flow is not dislocation-based but, rather, involves interfacial disorder effects within the samples, including at cell walls and grain boundaries [18, 19]. This account is supported by the fact that large amounts of 3He impurities, much larger than required to saturate typical dislocation networks and their intersections, are required to block the flow at low temperatures [19]. In either case, dislocations play a central role in this controversy and, in view of the scarce computational evidence, further theoretical scrutiny of their properties is pressingly needed.
In this Letter we do so, revisiting the basic properties of dislocations in hcp 4He using first-principles quantum simulations. However, the employed computational approach differs significantly from that applied in Refs. [23], [24] and [25]. First, instead of finite-temperature PIMC calculations, we resort to the zero-temperature path-integral ground state (PIGS) approach, a generalization of the PIMC method to zero temperature [26–28], which has been shown to converge to exact ground-state results regardless of the initially chosen wave function for condensed phases of 4He [27, 29]. Like in Refs. [23–25], permutation sampling is carried out using the worm algorithm [30, 31] to guarantee ergodicity in permutation space [28]. Second, we adopt different boundary conditions for the computational cells [32]. The results of the previous PIMC calculations [23, 24] are based on tube-like setups, in which only atoms within a cylindrical (or pencil-shaped in the case of Ref. [23]) region are treated explicitly while fixing a set of atoms outside of it to their classical positions, applying periodic boundary conditions (PBC) only along the dislocation line. Such an arrangement can give rise to lateral incompatibility stresses [33–37] that may result in incorrect dislocation core structures if these are not adequately relieved, e.g., by using Green’s function boundary conditions [36, 37]. Here, we employ different configurations, including a dislocation-dipole arrangement employing fully three-dimensional PBC [32, 38], as well as slab configurations containing a single dislocation subject to two-dimensional PBC [39]. Finally, we focus on the fundamental atomic lattice structure of the dislocation cores, without considering processes that require the addition or removal of material through a grand-canonical (GC) approach as used in Refs. [23], [24] and [25]. Indeed, if one does not adequately thermalize, changing particle numbers may introduce artificial disorder, possibly leading to the spurious appearance of long-winding permutation cycles [23]. By applying this computational scheme to edge dislocations with their Burgers vectors both perpendicular and parallel to the c axis, and to the screw dislocation with its Burgers vector along the c-axis, we find that, at zero temperature, the off-diagonal long-range order (ODLRO) is practically null (≤ 10⁻⁶), just as in the defect-free hcp crystal. This result contrasts with previous claims [23–25] and signals the absence of quantum mass transport through dislocation cores in hcp 4He, instead lending support to the interpretation that the mass-flow observations are due to interfacial disorder effects rather than dislocation-mediated superfluidity.

arXiv:2301.02854v1 [cond-mat.other] 7 Jan 2023
The integral Schrödinger equation for a system of N interacting particles can be expressed in imaginary time as

Ψ(R, τ) = ∫ dR′ G(R, R′; τ) Ψ(R′, 0),   (1)

where G(R, R′; τ) ≡ ⟨R| e^{−Hτ} |R′⟩ is the corresponding Green’s function, with H the system Hamiltonian, Ψ(R, τ) the system wave function at imaginary time τ, and |R⟩ = |r1, r2, . . . , rN⟩, with ri the particle positions.

In the path-integral ground state (PIGS) approach [26–28], one exploits the formal identity between G(R, R′; τ) and the thermal density matrix of the system at an inverse temperature of ϵ ≡ 1/T (we measure energy in units of Kelvin according to 1 K = 8.617 × 10⁻⁵ eV, such that ℏ²/2m = 6.059615 K Å²), namely, ρ(R, R′; ϵ). In this manner, the ground-state wave function of the system, Ψ0(R), can be asymptotically projected out of a trial wave function, ΨT(R), according to

Ψ0(RM) = ∫ ∏_{i=0}^{M−1} dRi ρ(Ri, Ri+1; ϵ) ΨT(R0).   (2)

Likewise, the ground-state average value of any physical observable can be written in terms of a multidimensional integral that can be calculated exactly, within statistical uncertainties, independently of whether the corresponding operator commutes or not with the Hamiltonian of the system. The only requirement on the trial wave function ΨT is that it satisfy the symmetry conditions imposed by the statistics of the simulated quantum many-body system. In this work, since we are dealing with bosons, we consider a symmetrized trial wave function of the Jastrow type that is typically employed in quantum Monte Carlo (QMC) simulations of quantum liquids [28].
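The projection in Eq. (2) says that repeated application of the short-imaginary-time propagator filters the ground state out of any trial state with nonzero overlap. The sketch below is our own toy illustration of this idea (a 1D harmonic oscillator on a grid, not the production PIGS code for 4He); it also checks the unit convention quoted above, ℏ²/2m ≈ 6.0596 K Å² for a ⁴He atom.

```python
import numpy as np

# Check of the unit convention: hbar^2 / (2 m_He4) in K * Angstrom^2
# should reproduce the quoted 6.059615 K A^2.
hbar = 1.054571817e-34                 # J s
m_he4 = 4.002602 * 1.66053906660e-27   # kg (4He atomic mass)
k_B = 1.380649e-23                     # J / K
lam = hbar**2 / (2.0 * m_he4) / k_B * 1e20   # K A^2
assert abs(lam - 6.0596) < 1e-3

# Toy version of Eq. (2): repeated application of exp(-H * eps) projects
# the ground state out of a poor trial state. Model system: 1D harmonic
# oscillator with hbar = m = omega = 1, so the exact E0 = 0.5.
x = np.linspace(-8.0, 8.0, 401)
dx = x[1] - x[0]
lap = (np.diag(np.ones(400), 1) + np.diag(np.ones(400), -1)
       - 2.0 * np.eye(401)) / dx**2    # finite-difference Laplacian
H = -0.5 * lap + np.diag(0.5 * x**2)

# Exact short-time propagator on the grid via diagonalization of H.
evals, evecs = np.linalg.eigh(H)
def propagate(psi, eps):
    return evecs @ (np.exp(-evals * eps) * (evecs.T @ psi))

psi = np.exp(-(x - 1.5) ** 2)          # deliberately displaced trial state
for _ in range(25):                    # M = 25 projection steps, as in the paper
    psi = propagate(psi, 0.2)
    psi /= np.sqrt(np.sum(psi**2) * dx)

E0 = np.sum(psi * (H @ psi)) * dx      # energy estimate after projection
assert abs(E0 - 0.5) < 1e-2
```

After a total projection time of 5 (in oscillator units), the admixture of the first excited state is suppressed by roughly e⁻⁵, so the energy estimate lands on the exact ground-state value to well within the tolerance.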
The central physical quantity in our PIGS study is the one-body density matrix (OBDM), which is defined as

ρ1(r1, r′1) = (1/Z) ∫ dr2 . . . drN ρ(R, R′),   (3)

where the two configurations |R⟩ = |r1, r2, . . . , rN⟩ and |R′⟩ = |r′1, r2, . . . , rN⟩ differ only in one particle coordinate, and Z represents the quantum partition function of the system. In PIGS, ρ1(r1, r′1) is computed by tracking the distances between the two extremities of one open chain (worm) during the QMC sampling [40]. Importantly, the condensate fraction of an N-boson system, n0, can be deduced from the long-range asymptotic behavior of the OBDM,

n0 = lim_{|r1−r′1|→∞} ρ1(r1, r′1).   (4)

FIG. 1. Computational cells employed in the zero-temperature PIGS simulations of edge and screw dislocations in hcp 4He, as visualized using the OVITO package [41]. Atoms shown in red and green are located in hcp and fcc surroundings, respectively. In all panels, the total Burgers vector b is indicated by the black arrow. a) Dipole arrangement for the edge dislocations with Burgers vector in the basal plane, with PBC applied in all directions, following Ref. [38]. Each dislocation is dissociated into Shockley partial dislocations separated by a ribbon of stacking fault. b) Setup for the single edge dislocation with Burgers vector oriented along the c-axis, dissociated into two Frank partials, with PBC applied along the dislocation line as well as the c-axis. The blue spheres in the upper and lower regions depict frozen atoms. c) Setup for the single screw dislocation with Burgers vector oriented along the c-axis, with PBC applied along the dislocation line as well as the [1010] directions. The blue spheres in the upper and lower regions depict frozen atoms. Blue atoms in the central region are close to the dislocation core.
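Eq. (4) identifies the condensate fraction with the large-separation plateau of the OBDM. The short sketch below (our own illustration on synthetic curves, not simulation output) shows the distinction the paper draws later: an OBDM that keeps decaying toward zero signals an insulating solid, while one with a finite plateau signals a condensate, as in the liquid with n0 ≈ 0.02.

```python
import numpy as np

def estimate_n0(r, rho1, tail_from=6.0):
    """Estimate the condensate fraction as the average of the OBDM tail,
    following Eq. (4): n0 = lim_{r -> infinity} rho1(r)."""
    tail = rho1[r >= tail_from]
    return float(tail.mean())

r = np.linspace(0.0, 9.0, 181)            # separations, in Angstrom
solid = np.exp(-r / 0.6)                  # keeps decaying: no ODLRO
liquid = 0.98 * np.exp(-r / 0.6) + 0.02   # finite plateau: n0 ~ 0.02

assert estimate_n0(r, solid) < 1e-4
assert abs(estimate_n0(r, liquid) - 0.02) < 1e-3
```

The decay length and plateau height here are made-up numbers chosen only to mimic the qualitative shapes in Fig. 2; in practice the tail average is taken over the largest separations the simulation cell allows.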
We carried out PIGS simulations of hcp 4He crystals containing edge dislocations with their lines in the basal plane and with Burgers vectors oriented in the basal plane and along the c-axis, respectively, as well as for the screw dislocation with Burgers vector parallel to the c axis. The interactions between He atoms were modeled using the pairwise Aziz potential [42]. The computational cells employed in the calculations are shown in Fig. 1. Depending on the type of edge dislocation, two different setups were employed. Fig. 1 a) displays the arrangement utilized for the basal edge (BE) dislocation. It is analogous to that used in Ref. [38], containing a pair of edge dislocations with opposite Burgers vectors of the type b = (1/3)[1210], dissociated into Shockley partials [11] with Burgers vectors of the kind b = (1/3)[1100], separated by a stacking-fault ribbon. PBC were applied in all three directions and the cell contained 1872 atoms. As shown in Fig. 1 b), a different approach was adopted for the c-axis edge (CE) dislocation with Burgers vector b = [0001]. While a dipole setup would also be possible, it would require simulating numbers of atoms that are prohibitively large for the excessively demanding PIGS calculations. Therefore, we employed a cell containing only a single CE dislocation, applying PBC along the dislocation-line direction and the c-axis while fixing the top and bottom two layers in the [1010] directions. This is a standard approach that has been routinely used in atomistic simulations of dislocations [39, 43, 44] and preserves translational symmetry along the glide direction. The cell contains a total of 2280 atoms, of which 2052 were treated explicitly, whereas the remaining 228 atoms were fixed in the top and bottom layers. The CE dislocation dissociates into two Frank partial dislocations with Burgers vectors of the type b = (1/6)[2023] (Ref. [11], pg. 361), separated by a ribbon of stacking fault. A similar single-dislocation setup was also employed for the c-axis screw (CS) dislocation, as shown in Fig. 1 c), with a cell containing 1920 atoms, of which 228 atoms in the surface layers were held fixed. For all dislocation cells the atomic number density was held fixed at ρ = 0.0287 Å⁻³, which corresponds to a lattice parameter of a = 3.67 Å. The number of time slices used in Eq. (2) was M = 25, with an imaginary-time step of τ = 0.0125 K⁻¹. We have verified that larger values of M and smaller values of τ do not modify our results within the statistical uncertainties (see Supplemental Material [45]). Finally, for comparison with the defect-cell results, we also carried out subsidiary calculations for defect-free hcp 4He at the same density, employing a fully periodic cell containing 180 atoms.
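As a back-of-the-envelope consistency check on the quoted setup numbers (our own sketch, which assumes the ideal hcp c/a ratio of 1.633 — the paper does not state the c/a it uses): a basal lattice parameter of a = 3.67 Å does give a number density very close to ρ = 0.0287 Å⁻³.

```python
import math

a = 3.67                 # basal lattice parameter, Angstrom (from the paper)
c = 1.633 * a            # assumed ideal hcp c/a ratio (not stated in the paper)
# hcp unit cell: volume sqrt(3)/2 * a^2 * c, containing 2 atoms.
v_per_atom = (math.sqrt(3) / 2.0) * a**2 * c / 2.0
rho = 1.0 / v_per_atom   # atoms per cubic Angstrom
assert abs(rho - 0.0287) < 3e-4   # paper: rho = 0.0287 A^-3
```

Under this assumption the computed density is about 0.0286 Å⁻³, i.e. the two quoted values are mutually consistent to well under one percent.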
The red circles in Fig. 2 a) and the red and grey circles in Fig. 2 b) show the PIGS results for the zero-temperature OBDM of hcp 4He crystals containing, respectively, the BE, CS and CE dislocations. In all cases, ρ1 clearly exhibits a generally decreasing tendency under increasing radial distance r ≡ |r1 − r′1| (note the logarithmic y-scale in the graphs). For the BE dislocation, the steady OBDM reduction is slightly smaller than for the CS and CE dislocations; for example, at a radial distance of ∼ 7 Å the one-body density matrix has reduced to ∼ 10⁻⁵ in the former case compared to ∼ 10⁻⁶ for the latter. Nevertheless, the slopes of all ρ1 asymptotes are manifestly negative. This is clear evidence that the Bose-Einstein condensate fraction (Eq. (4)) of bulk hcp 4He containing these types of dislocations is negligible in practice (≤ 10⁻⁶), as ρ1 tends to zero in the limit of long radial distances. For further comparison, the blue circles in Figs. 2 a) and b) display the PIGS OBDM calculations carried out for the defect-free hcp 4He cell at the same density.

FIG. 2. PIGS one-body density matrix results obtained at zero temperature for hcp 4He for the cells containing (a) a BE dislocation (red circles) and (b) CS (grey circles) and CE dislocations (red circles). The y-axis is in logarithmic scale. For comparison, PIGS results obtained for defect-free bulk hcp 4He at the same density (blue circles) as well as the liquid at a density of 0.0227 Å⁻³ (black circles) are also shown.

The results for these dislocation systems display the same general trend as seen for the defect-free crystal, providing further support for our conclusion of negligible n0 in the presence of these types of dislocations. As a final consistency check, we carried out an additional simulation starting from the CE dislocation cell, but reducing its density to 0.0227 Å⁻³ to induce a transition into the liquid phase. The corresponding ODLRO, obtained after reaching the equilibrated liquid, is shown as the black circles in Fig. 2 b). The Bose-Einstein condensate fraction obtained in this case, employing the same PIGS approach applied to the solid-phase systems, is found to be n0 ∼ 0.02. This is in agreement with the known value for bulk liquid 4He at that density at ultralow temperatures [46], attesting to the numerical reliability of our zero-temperature computational approach.
FIG. 3. Visualization of the 4He system containing the dissociated CE dislocation at the beginning and end of the PIGS simulations; the centroids of the quantum polymers are represented in both cases. The initial configuration was obtained after equilibrating the system at T = 1 K with the PIMC method. A few quantum polymers located at a similar distance within the dislocation core are represented in the inset of b); long chains of atomic exchanges involving several quantum polymers are absent.

The fact that the zero-temperature OBDM results in Fig. 2 display a practically null Bose-Einstein condensate fraction (i.e., ≲ 10⁻⁶) in both the defect-free as well as the defected 4He crystal is compelling evidence that the cores of the considered types of dislocations are in fact insulating in nature. The lack of quantum mass flux along the dislocation cores can be further verified by visual inspection of the quantum polymers during the simulation. A representative example is depicted in Fig. 3 for the case of the dissociated CE dislocation. Fig. 3 a) and the main panel of Fig. 3 b) display the centroids (i.e., the “centers of mass” of the quantum polymers) for the initial and final configurations of the PIGS simulation, respectively. Both pictures qualitatively demonstrate the prevalence of atomic order, including in the regions of the partial dislocation cores. Furthermore, when visualizing entire quantum polymers in the core region, as depicted in the expanded view, there are no evident traces of long-winding quantum exchanges [40], thus corroborating the absence of superfluidity in these dislocation cores.
While the absence of superfluidity for the BE dislocations is consistent with the PIMC calculations reported in Ref. [38] and the unpublished data referred to in Ref. [47], the present PIGS results for the CS and CE dislocations are at odds with the findings in Refs. [23] and [24], as well as with the proposed mechanism of “superclimb” of dislocations [24, 25]. Accordingly, our results are incompatible with the superfluid dislocation-network interpretation of the mass flux experiments, and lend support to the alternate view that effects related to disordered regions at internal interfaces, including vessel walls and grain boundaries, are responsible for the observations [18, 19].

A further issue with the superfluid-network interpretation is that, given the consensus that dislocations with Burgers vectors in the basal plane are insulating [38, 47], it relies fundamentally on the presence of a spanning network consisting entirely of dislocations with c-axis Burgers vectors. Such an arrangement of dislocations, however, is geometrically impossible due to the requirement of conservation of the Burgers vector at network nodes [11]. In contrast, there is abundant experimental evidence [6, 8, 48–51] for the existence of networks of nonsuperfluid basal-plane Burgers-vector dislocations, which drive the dominant mode of basal slip in hcp 4He [50, 51] and play a central role in the phenomenon of giant plasticity [8], as well as in the nonsupersolid explanation of the original torsion-oscillator observations by Kim and Chan [6]. This premise is also consistent with findings in other hcp-structured materials such as Zn [52] and Mg [53], in which observed dislocation networks display the characteristic hexagonal structure of basal-plane Burgers-vector dislocations. In this light, the present results further challenge the superfluid dislocation-network interpretation of the mass-flux-experiment observations and call for further experimental investigation.
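The geometric argument above can be made concrete with a brute-force check (our own illustrative sketch of the node-conservation rule, not taken from the paper): with all line senses chosen to point into a node, the Burgers vectors of the joining dislocations must sum to zero, and no set of three (or any odd number of) vectors of the form ±[0001] can do so.

```python
from itertools import product

# Represent a c-axis Burgers vector as +1 or -1, in units of c = [0001].
# Burgers-vector conservation requires the signed sum at a node to vanish;
# a sum of an odd number of +/-1 terms is always odd, hence never zero.
for n_lines in (3, 5):
    balanced = [s for s in product((+1, -1), repeat=n_lines) if sum(s) == 0]
    assert balanced == []   # no node of odd degree from c-axis vectors alone
```

Even-degree sign-balanced combinations correspond to dislocation lines merely crossing or continuing, not to genuine branching nodes, so a spanning network built from c-axis Burgers vectors alone cannot branch.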
M.K. acknowledges support from CNPq, Fapesp grant no. 2016/23891-6 and the Center for Computing in Engineering & Sciences - Fapesp/Cepid no. 2013/08293-7. W.C. acknowledges support from the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award No. DE-SC0010412. J.B. acknowledges financial support from the Secretaria d’Universitats i Recerca del Departament d’Empresa i Coneixement de la Generalitat de Catalunya, co-funded by the European Union Regional Development Fund within the ERDF Operational Program of Catalunya (project QuantumCat, Ref. 001-P-001644), and the MINECO (Spain) Grant PID2020-113565GB-C21. C.C. acknowledges financial support from the MINECO (Spain) under the “Ramón y Cajal” fellowship (RYC2018-024947-I).
[1] E. Kim and M. H. W. Chan, Nature 427, 225 (2004).
[2] E. Kim and M. H. W. Chan, Science 305, 1941 (2004).
[3] S. Balibar, Contemp. Phys. 48, 31 (2007).
[4] M. Boninsegni and N. V. Prokof’ev, Rev. Mod. Phys. 84, 759 (2012).
[5] D. Y. Kim and M. H. W. Chan, Phys. Rev. Lett. 109, 155301 (2012).
[6] J. Day and J. Beamish, Nature 450, 853 (2007).
[7] J. D. Reppy, Phys. Rev. Lett. 104, 255301 (2010).
[8] A. Haziot, X. Rojas, A. D. Fefferman, J. R. Beamish, and S. Balibar, Phys. Rev. Lett. 110, 035301 (2013).
[9] M. H. W. Chan, R. B. Hallock, and L. Reatto, J. Low Temp. Phys. 172, 317 (2013).
[10] J. Beamish and S. Balibar, Rev. Mod. Phys. 92, 045002 (2020).
[11] J. P. Hirth and J. Lothe, Theory of Dislocations, 2nd ed. (Krieger Publishing Company, 1992).
[12] D. Hull and D. Bacon, Introduction to Dislocations (Butterworth-Heinemann, 2001).
[13] M. W. Ray and R. B. Hallock, Phys. Rev. Lett. 100, 235301 (2008).
[14] M. W. Ray and R. B. Hallock, Phys. Rev. B 79, 224302 (2009).
[15] M. W. Ray and R. B. Hallock, Phys. Rev. B 84, 144512 (2011).
[16] Y. Vekhov, W. Mullin, and R. Hallock, Phys. Rev. Lett. 113, 035302 (2014).
[17] Y. Vekhov and R. B. Hallock, Phys. Rev. B 90, 134511 (2014).
[18] Z. G. Cheng, J. Beamish, A. D. Fefferman, F. Souris, S. Balibar, and V. Dauvois, Phys. Rev. Lett. 114, 165301 (2015).
[19] Z. G. Cheng and J. Beamish, Phys. Rev. Lett. 117, 025301 (2016).
[20] J. Shin, D. Y. Kim, A. Haziot, and M. H. W. Chan, Phys. Rev. Lett. 118, 235301 (2017).
[21] J. Shin and M. H. W. Chan, Phys. Rev. B 99, 140502 (2019).
[22] R. B. Hallock, J. Low Temp. Phys. 197, 167 (2019).
[23] M. Boninsegni, A. B. Kuklov, L. Pollet, N. V. Prokof’ev, B. V. Svistunov, and M. Troyer, Phys. Rev. Lett. 99, 035301 (2007).
[24] S. G. Söyler, A. B. Kuklov, L. Pollet, N. V. Prokof’ev, and B. V. Svistunov, Phys. Rev. Lett. 103, 175301 (2009).
[25] A. B. Kuklov, L. Pollet, N. V. Prokof’ev, and B. V. Svistunov, Phys. Rev. Lett. 128, 255301 (2022).
[26] A. Sarsa, K. E. Schmidt, and W. R. Magro, J. Chem. Phys. 113, 1366 (2000).
[27] M. Rossi, M. Nava, L. Reatto, and D. E. Galli, J. Chem. Phys. 131, 154108 (2009).
[28] C. Cazorla and J. Boronat, Rev. Mod. Phys. 89, 035003 (2017).
[29] R. Rota, J. Casulleras, F. Mazzanti, and J. Boronat, Phys. Rev. E 81, 016707 (2010).
[30] M. Boninsegni, N. Prokof’ev, and B. Svistunov, Phys. Rev. Lett. 96, 070601 (2006).
[31] M. Boninsegni, N. V. Prokof’ev, and B. V. Svistunov, Phys. Rev. E 74, 036701 (2006).
[32] V. V. Bulatov and W. Cai, Computer Simulations of Dislocations (Oxford University Press, 2006).
[33] P. C. Gehlen, J. P. Hirth, R. G. Hoagland, and M. F. Kanninen, J. Appl. Phys. 43, 3921 (1972).
[34] R. G. Hoagland, J. P. Hirth, and P. C. Gehlen, Phil. Mag. A 34, 413 (1976).
[35] J. E. Sinclair, P. C. Gehlen, R. G. Hoagland, and J. P. Hirth, J. Appl. Phys. 49, 3890 (1978).
[36] S. Rao, C. Hernandez, J. P. Simmons, T. A. Parthasarathy, and C. Woodward, Philos. Mag. A 77, 231 (1998).
[37] C. Woodward and S. I. Rao, Phys. Rev. Lett. 88, 216402 (2002).
[38] E. J. Landinez Borda, W. Cai, and M. de Koning, Phys. Rev. Lett. 117, 045301 (2016).
[39] R. Freitas, M. Asta, and V. V. Bulatov, npj Comput. Mater. 4, 55 (2018).
[40] D. M. Ceperley, Rev. Mod. Phys. 67, 279 (1995).
[41] A. Stukowski and K. Albe, Model. Simul. Mater. Sci. Eng. 18, 085001 (2010).
[42] R. A. Aziz, F. R. W. McCourt, and C. C. K. Wong, Mol. Phys. 61, 1487 (1987).
[43] A. Abu-Odeh, D. L. Olmsted, and M. Asta, Scr. Mater. 210, 114465 (2022).
[44] D. Rodney and G. Martin, Phys. Rev. B 61, 8714 (2000).
[45] For further details, see Supplemental Material.
[46] R. Rota and J. Boronat, J. Low Temp. Phys. 166, 21 (2012).
[47] L. Pollet, M. Boninsegni, A. B. Kuklov, N. V. Prokof’ev, B. V. Svistunov, and M. Troyer, Phys. Rev. Lett. 101, 097202 (2008).
[48] Y. Hiki and F. Tsuruoka, Phys. Lett. A 56, 484 (1976).
[49] Y. Hiki and F. Tsuruoka, Phys. Lett. A 62, 50 (1977).
[50] F. Tsuruoka and Y. Hiki, Phys. Rev. B 20, 2702 (1979).
[51] M. A. Paalanen, D. J. Bishop, and H. W. Dail, Phys. Rev. Lett. 46, 664 (1981).
[52] N. A. Tyapunina, T. N. Pashenko, and G. M. Zinenkova, Phys. Stat. Sol. (a) 31, 309 (1975).
[53] P. B. Hirsch and J. S. Lally, Phil. Mag. A 12, 595 (1965).
7dE1T4oBgHgl3EQfBgLt/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
9NAyT4oBgHgl3EQfdPda/content/tmp_files/2301.00298v1.pdf.txt ADDED
@@ -0,0 +1,1810 @@
arXiv:2301.00298v1 [math.NT] 31 Dec 2022

INFINITE MATRIX PRODUCTS AND HYPERGEOMETRIC ZETA SERIES

T. WAKHARE¹ AND C. VIGNAT²

Abstract. An unpublished identity of Gosper restates a hypergeometric identity for odd zeta values in terms of an infinite product of matrices. We show that this correspondence runs much deeper, and show that many examples of WZ-accelerated series for zeta values lift to infinite matrix products. We also introduce a new matrix subgroup, the Gosper group, which all of our matrix products fall into.

1. Introduction

In his famous book "Mathematical Constants" [3], Finch cites an unpublished result by Gosper [2]:

(1.1)
$$\prod_{k=1}^{\infty}\begin{pmatrix} -\frac{k}{2(2k+1)} & \frac{5}{4k^{2}}\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & \zeta(3)\\ 0 & 1 \end{pmatrix},$$

and its $(N+1)\times(N+1)$ extension, for $N\geqslant 2$,

(1.2)
$$\prod_{k=1}^{\infty}\begin{pmatrix} -\frac{k}{2(2k+1)} & \frac{1}{2k(2k+1)} & 0 & \cdots & 0 & \frac{1}{k^{2N}}\\ 0 & -\frac{k}{2(2k+1)} & \frac{1}{2k(2k+1)} & \cdots & 0 & \frac{1}{k^{2N-2}}\\ \vdots & & \ddots & \ddots & & \vdots\\ 0 & 0 & 0 & \cdots & \frac{1}{2k(2k+1)} & \frac{1}{k^{4}}\\ 0 & 0 & 0 & \cdots & -\frac{k}{2(2k+1)} & \frac{5}{4k^{2}}\\ 0 & 0 & 0 & \cdots & 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & \cdots & 0 & \zeta(2N+1)\\ 0 & \cdots & 0 & \zeta(2N-1)\\ \vdots & & & \vdots\\ 0 & \cdots & 0 & \zeta(5)\\ 0 & \cdots & 0 & \zeta(3)\\ 0 & \cdots & 0 & 1 \end{pmatrix}.$$

We will show that this formula is in fact equivalent to Koecher's identity [4, Eq. (3)]

(1.3)
$$\sum_{n=1}^{\infty}\frac{1}{n(n^{2}-x^{2})}=\frac{1}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{\binom{2k}{k}k^{3}}\,\frac{5k^{2}-x^{2}}{k^{2}-x^{2}}\prod_{m=1}^{k-1}\left(1-\frac{x^{2}}{m^{2}}\right).$$

By extracting coefficients of $1$ and $x^{2}$ in Koecher's identity, we recover Markov's series acceleration identity [5]

$$\zeta(3)=\frac{5}{2}\sum_{n\geqslant 1}\frac{(-1)^{n-1}}{n^{3}\binom{2n}{n}}$$

and its higher order counterpart

$$\zeta(5)=2\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^{5}\binom{2n}{n}}-\frac{5}{2}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}H^{(2)}_{n-1}}{n^{3}\binom{2n}{n}}.$$
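As a numerical sanity check (a sketch, not part of the paper), both accelerated series above can be summed in ordinary double precision; the central binomial coefficient makes the terms decay roughly like $4^{-n}$, so thirty terms already exhaust the precision of a float:

```python
import math

def markov_zeta3(terms=30):
    # zeta(3) = (5/2) * sum (-1)^(n-1) / (n^3 * C(2n, n))
    return 2.5 * sum((-1) ** (n - 1) / (n ** 3 * math.comb(2 * n, n))
                     for n in range(1, terms + 1))

def koecher_zeta5(terms=30):
    # zeta(5) = 2 * sum (-1)^(n-1) / (n^5 C(2n,n))
    #         - (5/2) * sum (-1)^(n-1) H^{(2)}_{n-1} / (n^3 C(2n,n))
    total, h2 = 0.0, 0.0          # h2 accumulates H^{(2)}_{n-1}
    for n in range(1, terms + 1):
        c = math.comb(2 * n, n)
        sign = (-1) ** (n - 1)
        total += 2 * sign / (n ** 5 * c) - 2.5 * sign * h2 / (n ** 3 * c)
        h2 += 1 / n ** 2          # now equals H^{(2)}_n
    return total

print(markov_zeta3())   # ~1.2020569031595943 (zeta(3))
print(koecher_zeta5())  # ~1.0369277551433699 (zeta(5))
```

Compare with the direct sums $\sum 1/n^3$ and $\sum 1/n^5$, which need millions of terms for comparable accuracy.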
These are efficiently encoded by the matrix product. By extracting other coefficients of $x^{n}$ in Koecher's identity, we recover counterparts for $\zeta(2n+1)$ which are again encoded by the matrix product.

This correspondence runs much deeper, and we will show that several hypergeometric-type series for the zeta function at small integers are equivalent to infinite products of $N\times N$ matrices. The fact that these identities support an expression in terms of matrix products is already interesting. The pattern of entries of some small matrices suggests the general form of the relevant $n\times n$ generalizations, which would then be equivalent to new accelerated series for zeta values.

2. Background

2.1. Special Functions. The Riemann zeta function, absolutely convergent for $s\in\mathbb{C}$, $\Re s>1$, is given by

(2.1)
$$\zeta(s):=\sum_{n=1}^{\infty}\frac{1}{n^{s}}.$$

This straightforwardly extends to the Hurwitz zeta function with the addition of a parameter $z\in\mathbb{C}$, $z\neq 0,-1,-2,\dots$:

(2.2)
$$\zeta(s|z):=\sum_{n=0}^{\infty}\frac{1}{(n+z)^{s}},$$

so that $\zeta(s)=\zeta(s|1)$.

The harmonic numbers are given by $H_{0}:=0$ and

(2.3)
$$H_{n}:=\sum_{k=1}^{n}\frac{1}{k},\qquad n\geqslant 1.$$

The generalized harmonic numbers $H^{(s)}_{n}:=\sum_{k=1}^{n}k^{-s}$ are defined similarly. We will also consider the elementary symmetric functions

(2.4)
$$e^{(s)}_{\ell}(k):=[t^{\ell}]\prod_{j=1}^{k-1}\left(1+\frac{t}{j^{s}}\right)=\sum_{1\leqslant j_{1}<j_{2}<\cdots<j_{\ell}\leqslant k-1}\frac{1}{(j_{1}\cdots j_{\ell})^{s}},$$

which reduce to the harmonic numbers via $e^{(1)}_{1}(n)=H_{n-1}$.
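The coefficient-extraction definition (2.4) is easy to realize with exact arithmetic. The sketch below (not from the paper) builds the coefficient list of $\prod_{j=1}^{k-1}(1+t/j^{s})$ one factor at a time, and checks the reduction $e^{(1)}_{1}(n)=H_{n-1}$ together with the Newton-type identity $e^{(2)}_{2}(k)=\bigl((H^{(2)}_{k-1})^{2}-H^{(4)}_{k-1}\bigr)/2$:

```python
from fractions import Fraction

def elementary_sym(s, k):
    """Coefficients [e_0^{(s)}(k), e_1^{(s)}(k), ...] of prod_{j=1}^{k-1} (1 + t/j^s)."""
    coeffs = [Fraction(1)]
    for j in range(1, k):
        term = Fraction(1, j ** s)
        # multiply the current polynomial in t by (1 + t/j^s)
        coeffs = [c + (coeffs[i - 1] * term if i > 0 else 0)
                  for i, c in enumerate(coeffs)] + [coeffs[-1] * term]
    return coeffs

# e_1^{(1)}(6) = H_5 = 137/60
print(elementary_sym(1, 6)[1])
# e_2^{(2)}(5) = ((H^{(2)}_4)^2 - H^{(4)}_4) / 2
h2 = sum(Fraction(1, j ** 2) for j in range(1, 5))
h4 = sum(Fraction(1, j ** 4) for j in range(1, 5))
print(elementary_sym(2, 5)[2] == (h2 ** 2 - h4) / 2)
```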
3. The Gosper Group

Each Gosper matrix in the product (1.2) has the form

$$M_{k}=\begin{pmatrix} A_{k} & u_{k}\\ 0 & 1 \end{pmatrix}$$

where $A_{k}$ is square ($N\times N$), $u_{k}$ is an $(N\times 1)$ vector and $0$ is the $(1\times N)$ vector of zeros. Matrices of this kind form a group, which we shall name the Gosper group. With $I_{N}$ the $(N\times N)$ identity matrix, the unit element of the group is $\begin{pmatrix} I_{N} & 0\\ 0 & 1 \end{pmatrix}$, and the inverse of an element $M=\begin{pmatrix} A & u\\ 0 & 1 \end{pmatrix}$ is $M^{-1}=\begin{pmatrix} A^{-1} & -A^{-1}u\\ 0 & 1 \end{pmatrix}$. Closure follows from

$$M_{1}M_{2}=\begin{pmatrix} A_{1}A_{2} & A_{1}u_{2}+u_{1}\\ 0 & 1 \end{pmatrix}.$$

We can inductively verify that

$$M_{1}M_{2}\cdots M_{n}=\begin{pmatrix} A_{1}A_{2}\cdots A_{n} & \sum_{k=1}^{n}A_{1}\cdots A_{k-1}u_{k}\\ 0 & 1 \end{pmatrix}.$$

3.1. Toeplitz Matrices. Moreover, each $A_{k}$ in Gosper's identity has the simple form

$$A_{k}=\alpha_{k}I+\beta_{k}J$$

where $J$ is the $(N\times N)$ matrix with a first superdiagonal of ones. Hence $J^{N}=0$ and, for $p\geqslant N$, we have

$$A_{1}A_{2}\cdots A_{p}=(\alpha_{1}I+\beta_{1}J)(\alpha_{2}I+\beta_{2}J)\cdots(\alpha_{p}I+\beta_{p}J)
=\left(\prod_{i=1}^{p}\alpha_{i}\right)\left[I+\left(\sum_{j=1}^{p}\frac{\beta_{j}}{\alpha_{j}}\right)J+\cdots+\left(\sum_{1\leqslant j_{1}<\cdots<j_{N-1}\leqslant p}\frac{\beta_{j_{1}}\cdots\beta_{j_{N-1}}}{\alpha_{j_{1}}\cdots\alpha_{j_{N-1}}}\right)J^{N-1}\right].$$

For $p<N$ the summation is instead truncated at $J^{p}$.

The general form of the components of the limiting infinite product can be deduced by induction.

Lemma 3.1. The components of

(3.1)
$$\prod_{k=1}^{\infty}\begin{pmatrix} A_{k} & u_{k}\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} \prod_{k=1}^{\infty}A_{k} & v_{\infty}\\ 0 & 1 \end{pmatrix},\qquad
\left(v^{(N)}_{\infty},\dots,v^{(1)}_{\infty}\right)^{T}:=v_{\infty}=\sum_{p=1}^{\infty}A_{1}\cdots A_{p-1}u_{p},$$

are

$$v^{(1)}_{\infty}=\sum_{p=1}^{\infty}(\alpha_{1}\cdots\alpha_{p-1})\,u^{(1)}_{p},$$
$$v^{(2)}_{\infty}=\sum_{p=1}^{\infty}(\alpha_{1}\cdots\alpha_{p-1})\left[u^{(2)}_{p}+\left(\sum_{j=1}^{p-1}\frac{\beta_{j}}{\alpha_{j}}\right)u^{(1)}_{p}\right],$$
$$\vdots$$
$$v^{(\ell)}_{\infty}=\sum_{p=1}^{\infty}(\alpha_{1}\cdots\alpha_{p-1})\left[u^{(\ell)}_{p}+\left(\sum_{j=1}^{p-1}\frac{\beta_{j}}{\alpha_{j}}\right)u^{(\ell-1)}_{p}+\cdots+\left(\sum_{1\leqslant j_{1}<\cdots<j_{\ell-1}\leqslant p-1}\frac{\beta_{j_{1}}\cdots\beta_{j_{\ell-1}}}{\alpha_{j_{1}}\cdots\alpha_{j_{\ell-1}}}\right)u^{(1)}_{p}\right],$$

with $1\leqslant\ell\leqslant N$.

Already the connection to zeta series and hyperharmonic numbers is clear: with the correct choice of $\alpha$ and $\beta$, the multiple sums will reduce to multiple zeta type functions.

These matrix products also exhibit a stability phenomenon, where increasing the dimension of the matrix does not impact any entries in $v_{\infty}$ except the top right one, since mapping $N\to N+1$ only changes the formula for $v^{(N+1)}_{\infty}$.

We will consistently refer to the $N=1$ and $N=2$ cases. Explicitly, when $N=1$, so that both $A_{k}$ (denoted $\alpha_{k}$ to avoid confusion) and $u_{k}$ are scalars, we have

Lemma 3.2. For $N=1$,

(3.2)
$$\prod_{k=1}^{n}\begin{pmatrix} \alpha_{k} & \beta_{k}\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} \prod_{k=1}^{n}\alpha_{k} & \sum_{k=1}^{n}\alpha_{1}\cdots\alpha_{k-1}\beta_{k}\\ 0 & 1 \end{pmatrix}.$$

Although we will only need the $n\to\infty$ limit, let us note that this identity holds for finite $n$.
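Lemma 3.2 and Gosper's product (1.1) are easy to check numerically. The sketch below (assumptions: plain floats, 40 factors) accumulates the $2\times 2$ product entrywise; the top-left entry is the running $\prod\alpha_{k}$ and the top-right entry is the running $\sum\alpha_{1}\cdots\alpha_{k-1}\beta_{k}$:

```python
def gosper_product_2x2(alpha, beta, terms):
    """Accumulate prod_k [[alpha(k), beta(k)], [0, 1]] for k = 1..terms."""
    a, v = 1.0, 0.0  # top-left and top-right entries of the running product
    for k in range(1, terms + 1):
        # [[a, v],[0,1]] @ [[alpha,beta],[0,1]] = [[a*alpha, a*beta + v],[0,1]]
        v += a * beta(k)
        a *= alpha(k)
    return a, v

# Gosper's identity (1.1): the product converges to [[0, zeta(3)], [0, 1]]
a, v = gosper_product_2x2(lambda k: -k / (2 * (2 * k + 1)),
                          lambda k: 5 / (4 * k ** 2), 40)
print(a, v)  # a -> 0, v -> 1.2020569... = zeta(3)
```

Since $|\alpha_{k}|<1/4$, the top-left entry decays like $4^{-n}$, which is also the rate at which the top-right entry converges to $\zeta(3)$.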
+ 4. Koecher’ Identity
439
+ Theorem 4.1. Identity (1.1) and Koecher’s identity are equivalent.
440
+ Proof. Begin with Koecher’s identity (1.3). By extracting coefficients of x2n, in general we
441
+ obtain
442
+ (4.1)
443
+ ζ(2n + 3) = 5
444
+ 2
445
+
446
+
447
+ k=1
448
+ (−1)k−1
449
+ k3�2k
450
+ k
451
+ � (−1)ne(2)
452
+ n (k) + 2
453
+ n
454
+
455
+ j=1
456
+
457
+
458
+ k=1
459
+ (−1)k−1
460
+ k2j+3�2k
461
+ k
462
+ �(−1)n−je(2)
463
+ n−j(k).
464
+ Take αk = −
465
+ k
466
+ 2(2k+1), βk =
467
+ 1
468
+ 2k(2k+1), u(1)
469
+ k
470
+ =
471
+ 5
472
+ 4k2, and u(ℓ)
473
+ k
474
+ =
475
+ 1
476
+ k2ℓ+2 for 2 ⩽ ℓ ⩽ N. This
477
+ corresponds to the Gosper matrix
478
+
479
+ Ak
480
+ uk
481
+ 0
482
+ 1
483
+
484
+ =
485
+
486
+ 
487
+
488
+ k
489
+ 2(2k+1)
490
+ 1
491
+ 2k(2k+1)
492
+ 0
493
+ . . .
494
+ 0
495
+ 1
496
+ k2N
497
+ 0
498
+
499
+ k
500
+ 2(2k+1)
501
+ 1
502
+ 2k(2k+1)
503
+ . . .
504
+ 1
505
+ k2N−2
506
+ ...
507
+ ...
508
+ ...
509
+ ...
510
+ ...
511
+ 0
512
+ 0
513
+ 0
514
+ . . .
515
+ 1
516
+ 2k(2k+1)
517
+ 1
518
+ k4
519
+ 0
520
+ 0
521
+ 0
522
+ . . .
523
+
524
+ k
525
+ 2(2k+1)
526
+ 5
527
+ 4k2
528
+ 0
529
+ 0
530
+ 0
531
+ . . .
532
+ 0
533
+ 1
534
+
535
+ 
536
+ .
537
+ Then
538
+ p
539
+
540
+ i=1
541
+ αi = (−1)p
542
+ p
543
+
544
+ i=1
545
+ i2
546
+ (2i)(2i + 1) = (−1)p
547
+ p!2
548
+ (2p + 1)!,
549
+ and (for 2 ⩽ ℓ ⩽ N)
550
+
551
+ j1<···<jℓ−1⩽p−1
552
+ βj1 . . . βjℓ−1
553
+ αj1 . . . αjℓ−1
554
+ = (−1)ℓ
555
+
556
+ j1<···<jℓ−1⩽p−1
557
+ 1
558
+ (j1 · · · jℓ−1)2 = (−1)ℓe(2)
559
+ ℓ−1(p).
560
+ We deduce
561
+ lim
562
+ p→∞ α1 · · · αp = 0,
563
+ while
564
+ lim
565
+ p→∞
566
+
567
+ j1<···<jk⩽p−1
568
+ 1
569
+ (j1 · · · jk)2 ⩽ lim
570
+ p→∞
571
+ p
572
+
573
+ j1=1
574
+ 1
575
+ j2
576
+ 1
577
+ = ζ(2).
578
+ Hence, applying Lemma 3.1, we deduce
579
+
580
+
581
+ i=1
582
+ Ai = 0.
583
+
584
+ INFINITE MATRIX PRODUCTS AND HYPERGEOMETRIC ZETA SERIES
585
+ 5
586
+ The components in the right column are then explicitly given as
587
+ v(ℓ)
588
+ ∞ =
589
+
590
+
591
+ p=1
592
+ (α1 · · · αp−1)
593
+
594
+ u(ℓ)
595
+ p +
596
+ �p−1
597
+
598
+ j=1
599
+ βj
600
+ αj
601
+
602
+ u(ℓ−1)
603
+ p
604
+ + · · · +
605
+
606
+
607
+
608
+ 1⩽j1<···<jℓ−1⩽p−1
609
+ βj1 . . . βjℓ−1
610
+ αj1 . . . αjℓ−1
611
+
612
+  u(1)
613
+ p
614
+
615
+
616
+ =
617
+
618
+
619
+ p=1
620
+ (−1)p−1(p − 1)!2
621
+ (2p − 1)!
622
+
623
+ 1
624
+ p2ℓ+2 − e(2)
625
+ 1 (p)
626
+ p2ℓ
627
+ + · · · + (−1)ℓ−15
628
+ 4
629
+ e(2)
630
+ ℓ−1(p)
631
+ p2
632
+
633
+ = 5
634
+ 2
635
+
636
+
637
+ p=1
638
+ (−1)p−1
639
+ p3�2p
640
+ p
641
+ � e(2)
642
+ ℓ−1(p) + 2
643
+ ℓ−1
644
+
645
+ j=1
646
+
647
+
648
+ p=1
649
+ (−1)p−1
650
+ p3+2j�2p
651
+ p
652
+ �e(2)
653
+ ℓ−1−j(p)(−1)ℓ−1−j.
654
+ We see that this is exactly the formula from Koecher’s identity, hence equals ζ(2ℓ + 1) for
655
+ 1 ⩽ ℓ ⩽ N.
656
+
657
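Formula (4.1) can be checked numerically (a sketch, not part of the paper); at $n=1$ it reproduces the $\zeta(5)$ acceleration from the introduction, and at $n=2$ it gives a three-sum acceleration of $\zeta(7)$. The $e^{(2)}_{\ell}(k)$ are accumulated incrementally via $e_{\ell}(k+1)=e_{\ell}(k)+e_{\ell-1}(k)/k^{2}$:

```python
import math

def zeta_odd_from_koecher(n, terms=30):
    """Evaluate the right-hand side of (4.1) for zeta(2n + 3) in floats."""
    total = 0.0
    e = [1.0] + [0.0] * n  # e_0^{(2)}(k), ..., e_n^{(2)}(k), starting at k = 1
    for k in range(1, terms + 1):
        sign = (-1) ** (k - 1)
        c = math.comb(2 * k, k)
        total += 2.5 * sign * (-1) ** n * e[n] / (k ** 3 * c)
        for j in range(1, n + 1):
            total += 2 * sign * (-1) ** (n - j) * e[n - j] / (k ** (2 * j + 3) * c)
        # advance the elementary symmetric functions from argument k to k + 1
        for l in range(n, 0, -1):
            e[l] += e[l - 1] / k ** 2
    return total

print(zeta_odd_from_koecher(1))  # ~zeta(5) = 1.0369277551...
print(zeta_odd_from_koecher(2))  # ~zeta(7) = 1.0083492774...
```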
+ 5. Leschiner’s identity
658
+ Begin with the Leschiner identity
659
+
660
+ n⩾1
661
+ (−1)n−1
662
+ n2 − z2 = 1
663
+ 2
664
+
665
+ k⩾1
666
+ 1
667
+ �2k
668
+ k
669
+
670
+ k2
671
+ 3k2 + z2
672
+ k2 − z2
673
+ k−1
674
+
675
+ j=1
676
+
677
+ 1 − z2
678
+ j2
679
+
680
+ ,
681
+ so that
682
+ ˜ζ (2) = 3
683
+ 2
684
+
685
+ k⩾1
686
+ 1
687
+ �2k
688
+ k
689
+
690
+ k2,
691
+ and
692
+ ¯ζ (4) = 3
693
+ 2
694
+
695
+ k⩾1
696
+ 1
697
+ �2k
698
+ k
699
+
700
+ k2
701
+ � 4
702
+ k2 − H(2)
703
+ k−1
704
+
705
+ ,
706
+ and in general (I think I made a mistake here)
707
+ ˜ζ(2n + 2) = 3
708
+ 2
709
+
710
+
711
+ k=1
712
+ 1
713
+ k2�2k
714
+ k
715
+ �(−1)ne(2)
716
+ n (k) + 6
717
+ n
718
+
719
+ j=1
720
+
721
+
722
+ k=1
723
+ 1
724
+ k2j+2�2k
725
+ k
726
+ �(−1)n−je(2)
727
+ n−j(k).
728
+ A Gosper representation for ¯ζ (2) and ¯ζ (4) is
729
+
730
+ n⩾1
731
+
732
+
733
+ n
734
+ 2(2n+1)
735
+ −1
736
+ 2n(2n+1)
737
+ 1
738
+ n3
739
+ 0
740
+ n
741
+ 2(2n+1)
742
+ 3
743
+ 4n
744
+ 0
745
+ 0
746
+ 1
747
+
748
+  =
749
+
750
+
751
+ 0
752
+ 0
753
+ ¯ζ (4)
754
+ 0
755
+ 0
756
+ ¯ζ (2)
757
+ 0
758
+ 0
759
+ 1
760
+
761
+  .
762
+ This will generalize using the same method as Koecher.
763
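The $3\times 3$ representation above is easy to check with floats (a sketch, not from the paper): sixty factors suffice, and the right column should converge to $(\bar{\zeta}(4),\bar{\zeta}(2),1)=(7\pi^{4}/720,\ \pi^{2}/12,\ 1)$:

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
for n in range(1, 61):
    d = n / (2 * (2 * n + 1))
    M = [[d, -1 / (2 * n * (2 * n + 1)), 1 / n ** 3],
         [0.0, d, 3 / (4 * n)],
         [0.0, 0.0, 1.0]]
    P = matmul(P, M)

print(P[0][2], 7 * math.pi ** 4 / 720)  # both ~0.9470328 = alternating zeta(4)
print(P[1][2], math.pi ** 2 / 12)       # both ~0.8224670 = alternating zeta(2)
```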
+ 6. Borwein’s Identity
764
+ 6.1. the infinite product case. Extracting coefficient of z2n from Borwein’s identity [1]
765
+ (6.1)
766
+
767
+ n⩾1
768
+ 1
769
+ n2 − z2 = 3
770
+
771
+ k⩾1
772
+ 1
773
+ �2k
774
+ k
775
+
776
+ 1
777
+ k2 − z2
778
+ k−1
779
+
780
+ j=1
781
+ j2 − 4z2
782
+ j2 − z2 .
783
+
784
+ 6
785
+ T. WAKHARE1 AND C. VIGNAT2
786
+ gives
787
+
788
+ k⩾1
789
+ 1
790
+ �2k
791
+ k
792
+
793
+ 1
794
+ k2 − z2
795
+ k−1
796
+
797
+ j=1
798
+ j2 − 4z2
799
+ j2 − z2 =
800
+
801
+ k⩾1
802
+ 1
803
+ k2�2k
804
+ k
805
+
806
+ k−1
807
+
808
+ j=1
809
+
810
+ 1 − 4z2
811
+ j2
812
+
813
+ k
814
+
815
+ j=1
816
+ 1
817
+ 1 − z2
818
+ j2
819
+ =
820
+
821
+ k⩾1
822
+ 1
823
+ k2�2k
824
+ k
825
+
826
+
827
+ ℓ⩾0
828
+ z2ℓ4ℓe(2)
829
+ ℓ (k)
830
+
831
+ m⩾0
832
+ z2mh(2)
833
+ m (k + 1),
834
+ where hm is the complete symmetric function. This gives us a formula for the coefficient
835
+ of z2n as a convolution over hm and em. How do we encode this in the matrix, in terms of
836
+ αk, βk, uk?
837
+ Theorem 6.1. A Gosper representation for ζ (2) is obtained as
838
+
839
+ n⩾1
840
+
841
+ n
842
+ 2(2n+1)
843
+ 3
844
+ 2n
845
+ 0
846
+ 1
847
+
848
+ =
849
+
850
+ 0
851
+ ζ (2)
852
+ 0
853
+ 1
854
+
855
+ .
856
+ Proof. Identifying the constant term produces
857
+ ζ (2) = 3
858
+
859
+ k⩾1
860
+ 1
861
+ �2k
862
+ k
863
+
864
+ k2.
865
+ With αk =
866
+ k
867
+ 2(2k+1) and βk =
868
+ 3
869
+ 2k, we have
870
+
871
+ n⩾1
872
+ �n−1
873
+
874
+ k=1
875
+ αk
876
+
877
+ βn = 3
878
+ 2
879
+
880
+ n⩾1
881
+ 2
882
+ n2�2n
883
+ n
884
+ � = ζ (2) .
885
+
886
+ Identifying the linear term in (6.1) produces
887
+ ζ (4) = 3
888
+
889
+ k⩾1
890
+ 1
891
+ �2k
892
+ k
893
+
894
+ k2
895
+ � 1
896
+ k2 − 3H(2)
897
+ k−1
898
+
899
+ .
900
+ This suggests the following result.
901
+ Theorem 6.2. A Gosper representation for ζ (2) and ζ (4) is obtained as
902
+
903
+ n⩾1
904
+
905
+
906
+ n
907
+ 2(2n+1)
908
+ −3
909
+ 2n(2n+1)
910
+ 3
911
+ 2n3
912
+ 0
913
+ n
914
+ 2(2n+1)
915
+ 3
916
+ 2n
917
+ 0
918
+ 0
919
+ 1
920
+
921
+  =
922
+
923
+
924
+ 0
925
+ 0
926
+ ζ (4)
927
+ 0
928
+ 0
929
+ ζ (2)
930
+ 0
931
+ 0
932
+ 1
933
+
934
+  .
935
+ Proof. Denote
936
+ Mn =
937
+
938
+
939
+ δn
940
+ γn
941
+ u(1)
942
+ n
943
+ 0
944
+ δn
945
+ u(2)
946
+ n
947
+ 0
948
+ 0
949
+ 1
950
+
951
+  =
952
+
953
+ An
954
+ un
955
+ 0
956
+ 1
957
+
958
+
959
+ INFINITE MATRIX PRODUCTS AND HYPERGEOMETRIC ZETA SERIES
960
+ 7
961
+ with An =
962
+
963
+ δn
964
+ γn
965
+ 0
966
+ δn
967
+
968
+ = δnI + γnJ and δn =
969
+ 2
970
+ n(2n+1) so that, with I =
971
+
972
+ 1
973
+ 0
974
+ 0
975
+ 1
976
+
977
+ , J =
978
+
979
+ 0
980
+ 1
981
+ 0
982
+ 0
983
+
984
+ , un =
985
+
986
+ u(1)
987
+ n
988
+ u(2)
989
+ n =
990
+ 3
991
+ 2n
992
+
993
+ ,
994
+ A1 . . . Ai−1 =
995
+ 2
996
+ i
997
+ �2i
998
+ i
999
+
1000
+
1001
+ I + J
1002
+ i−1
1003
+
1004
+ j=1
1005
+ γj
1006
+ δj
1007
+
1008
+ .
1009
+ We know that
1010
+ M1 . . . Mn =
1011
+
1012
+ A1 . . . An
1013
+ vn
1014
+ 0
1015
+ 1
1016
+
1017
+ with
1018
+ vn =
1019
+ n
1020
+
1021
+ i=1
1022
+ A1 . . . Ai−1ui
1023
+ so that
1024
+ vn =
1025
+ n
1026
+
1027
+ i=1
1028
+ 2
1029
+ i
1030
+ �2i
1031
+ i
1032
+
1033
+
1034
+ ui +
1035
+ i−1
1036
+
1037
+ j=1
1038
+ γj
1039
+ δj
1040
+
1041
+ 3
1042
+ 2i
1043
+ 0
1044
+ ��
1045
+ =
1046
+ n
1047
+
1048
+ i=1
1049
+ 2
1050
+ i
1051
+ �2i
1052
+ i
1053
+
1054
+ ��
1055
+ u(1)
1056
+ i3
1057
+ 2i
1058
+
1059
+ +
1060
+ i−1
1061
+
1062
+ j=1
1063
+ γj
1064
+ δj
1065
+
1066
+ 3
1067
+ 2i
1068
+ 0
1069
+ ��
1070
+ .
1071
+ =
1072
+
1073
+
1074
+ �n
1075
+ i=1
1076
+ 2
1077
+ i(2i
1078
+ i )u(1)
1079
+ i
1080
+ + �i−1
1081
+ j=1
1082
+ γj
1083
+ δj
1084
+ 3
1085
+ 2i
1086
+ �n
1087
+ i=1
1088
+ 2
1089
+ i(2i
1090
+ i)
1091
+ 3
1092
+ 2i
1093
+
1094
+
1095
+ This produces
1096
+ v(2)
1097
+ ∞ = ζ (2) =
1098
+
1099
+
1100
+ i=1
1101
+ 3
1102
+ i2�2i
1103
+ i
1104
+
1105
+ and
1106
+ v(1)
1107
+ ∞ = ζ (4) =
1108
+
1109
+
1110
+ i=1
1111
+ 2
1112
+ i
1113
+ �2i
1114
+ i
1115
+ �u(1)
1116
+ i
1117
+ +
1118
+
1119
+
1120
+ i=1
1121
+ 2
1122
+ i
1123
+ �2i
1124
+ i
1125
+ � 3
1126
+ 2i
1127
+ i−1
1128
+
1129
+ j=1
1130
+ γj
1131
+ δj
1132
+ .
1133
+ Identifying with
1134
+ ζ (4) = 3
1135
+
1136
+ k⩾1
1137
+ 1
1138
+ �2k
1139
+ k
1140
+
1141
+ k2
1142
+ � 1
1143
+ k2 − 3H(2)
1144
+ k−1
1145
+
1146
+ produces
1147
+ u(1)
1148
+ i
1149
+ = 3
1150
+ 2i3, γj =
1151
+ −3
1152
+ 2j (2j + 1).
1153
+
1154
Unfortunately, the case that includes $\zeta(6)$ is not as straightforward.

Theorem 6.3. A Gosper representation for $\zeta(2)$, $\zeta(4)$ and $\zeta(6)$ is obtained as

$$\prod_{n\geqslant 1}\begin{pmatrix} \frac{n}{2(2n+1)} & \frac{-3}{2n(2n+1)} & 0 & \frac{3}{2n^{5}}-\frac{9H^{(4)}_{n-1}}{2n}\\ 0 & \frac{n}{2(2n+1)} & \frac{-3}{2n(2n+1)} & \frac{3}{2n^{3}}\\ 0 & 0 & \frac{n}{2(2n+1)} & \frac{3}{2n}\\ 0 & 0 & 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & 0 & 0 & \zeta(6)\\ 0 & 0 & 0 & \zeta(4)\\ 0 & 0 & 0 & \zeta(2)\\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

For example, the truncated product from $n=1$ up to $n=200$ is

$$\begin{pmatrix} 2.4222\cdot 10^{-122} & -1.1917\cdot 10^{-121} & 1.7517\cdot 10^{-121} & 1.01734\\ 0 & 2.4222\cdot 10^{-122} & -1.1917\cdot 10^{-121} & 1.08232\\ 0 & 0 & 2.4222\cdot 10^{-122} & 1.64493\\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

Proof. Identifying the coefficient of $z^{4}$ in Borwein's identity (6.1) produces

$$\zeta(6)=3\sum_{k\geqslant 1}\frac{1}{\binom{2k}{k}k^{2}}\left(17H^{(2,2)}_{k-1}+H^{(4)}_{k-1}-4\left(H^{(2)}_{k-1}\right)^{2}-\frac{3H^{(2)}_{k-1}}{k^{2}}+\frac{1}{k^{4}}\right),$$

where $H^{(2,2)}_{k-1}=e^{(2)}_{2}(k)$. Moreover, the vector $v_{n}$ is computed as

$$v_{n}=\sum_{i=1}^{n}A_{1}\cdots A_{i-1}u_{i}=\sum_{i=1}^{n}\frac{2}{\binom{2i}{i}i}\left[u_{i}-3H^{(2)}_{i-1}\begin{pmatrix} u_{i}(2)\\ u_{i}(3)\\ 0 \end{pmatrix}+9H^{(2,2)}_{i-1}\begin{pmatrix} u_{i}(3)\\ 0\\ 0 \end{pmatrix}\right].$$

Hence

$$v^{(1)}_{\infty}=\sum_{i=1}^{\infty}\frac{2}{\binom{2i}{i}i}\left(u_{i}(1)-3H^{(2)}_{i-1}u_{i}(2)+9H^{(2,2)}_{i-1}u_{i}(3)\right)
=\sum_{i=1}^{\infty}\frac{2}{\binom{2i}{i}i}\left(u_{i}(1)-3H^{(2)}_{i-1}\frac{3}{2i^{3}}+9H^{(2,2)}_{i-1}\frac{3}{2i}\right).$$

Using

$$\frac{3}{n}\left(4H^{(2,2)}_{n-1}+\frac{1}{2}H^{(4)}_{n-1}-2\left(H^{(2)}_{n-1}\right)^{2}\right)=-\frac{9}{2n}H^{(4)}_{n-1}$$

and identifying $v^{(1)}_{\infty}=\zeta(6)$ produces the result. □
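The truncated product quoted above can be reproduced with a few lines of float arithmetic (a sketch, not the authors' code); after 200 factors the right column agrees with $(\zeta(6),\zeta(4),\zeta(2),1)$ to machine precision:

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
h4 = 0.0  # running H^{(4)}_{n-1}
for n in range(1, 201):
    d = n / (2 * (2 * n + 1))
    g = -3 / (2 * n * (2 * n + 1))
    M = [[d, g, 0.0, 3 / (2 * n ** 5) - 9 * h4 / (2 * n)],
         [0.0, d, g, 3 / (2 * n ** 3)],
         [0.0, 0.0, d, 3 / (2 * n)],
         [0.0, 0.0, 0.0, 1.0]]
    P = matmul(P, M)
    h4 += 1 / n ** 4

print(P[0][3], P[1][3], P[2][3])  # ~zeta(6), zeta(4), zeta(2)
```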
6.2. A finite matrix product for $H^{(3)}_{N}$. From the identity

$$H^{(3)}_{N}=\sum_{n=1}^{N}\frac{(-1)^{n-1}}{n^{3}\binom{2n}{n}}\left(\frac{5}{2}-\frac{1}{2\binom{N+n}{2n}}\right),$$

we deduce the following finite product representation

$$\prod_{n=1}^{N}\begin{pmatrix} -\frac{n}{2(2n+1)} & \frac{5}{4n^{2}}\left(1-\frac{1}{5\binom{N+n}{2n}}\right)\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} \frac{2(-1)^{N}}{(N+1)\binom{2N+2}{N+1}} & H^{(3)}_{N}\\ 0 & 1 \end{pmatrix}.$$
7. Gosper Representation of Markov's identity for $\zeta(2)$ and $\zeta(z+1,3)$

7.1. Markov's identity for $\zeta(z+1,3)$. Markov's identity reads

(7.1)
$$\zeta(z+1,3)=\sum_{n=1}^{\infty}\frac{1}{(n+z)^{3}}=\frac{1}{4}\sum_{k=1}^{\infty}(-1)^{k-1}\frac{(k-1)!^{6}}{(2k-1)!}\,\frac{5k^{2}+6kz+2z^{2}}{\left((z+1)(z+2)\cdots(z+k)\right)^{4}}.$$

Theorem 7.1. A Gosper representation for Markov's identity is

$$\prod_{n=1}^{\infty}\begin{pmatrix} -\frac{n^{6}}{2n(2n+1)(z+n+1)^{4}} & 5n^{2}+6nz+2z^{2}\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & 4(z+1)^{4}\zeta(z+1,3)\\ 0 & 1 \end{pmatrix}$$

or equivalently

$$\prod_{n=1}^{\infty}\begin{pmatrix} -\frac{n^{6}}{2n(2n+1)(z+n+1)^{4}} & \frac{5n^{2}+6nz+2z^{2}}{4(z+1)^{4}}\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & \zeta(z+1,3)\\ 0 & 1 \end{pmatrix}.$$

Proof. Rewrite Markov's identity as

$$4\zeta(z+1,3)=\sum_{k\geqslant 1}(-1)^{k-1}\frac{(k-1)!^{6}}{(2k-1)!}\,\frac{5k^{2}+6kz+2z^{2}}{\left((z+1)\cdots(z+k)\right)^{4}},$$

define

$$u_{k}=5k^{2}+6kz+2z^{2}$$

and notice that writing

$$4\zeta(z+1,3)=u_{1}+\alpha_{1}u_{2}+\alpha_{1}\alpha_{2}u_{3}+\dots$$

requires that the coefficient of $u_{1}$ should be equal to $1$; as it is equal to $\frac{1}{(z+1)^{4}}$, consider the variation

$$4(z+1)^{4}\zeta(z+1,3)=\sum_{k\geqslant 1}(-1)^{k-1}\frac{(k-1)!^{6}}{(2k-1)!}\,\frac{(z+1)^{4}}{\left((z+1)\cdots(z+k)\right)^{4}}\,u_{k},$$

which now satisfies this constraint. Then identifying

$$\alpha_{1}\cdots\alpha_{k-1}=(-1)^{k-1}\frac{(k-1)!^{6}}{(2k-1)!}\,\frac{(z+1)^{4}}{\left((z+1)\cdots(z+k)\right)^{4}}$$

provides

$$\alpha_{k}=\frac{-k^{6}}{2k(2k+1)(z+k+1)^{4}}.$$

Notice that the constant term $(z+1)^{4}$ disappears from $\alpha_{k}$. □
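A float check of the second form of Theorem 7.1 at the sample value $z=1/2$ (a sketch; the direct sum $\sum 1/(n+z)^{3}$ is truncated at $3\cdot 10^{5}$ terms for comparison):

```python
def markov_hurwitz(z, terms=60):
    a, v = 1.0, 0.0  # running top-left / top-right entries of the 2x2 product
    for n in range(1, terms + 1):
        u = (5 * n * n + 6 * n * z + 2 * z * z) / (4 * (z + 1) ** 4)
        v += a * u
        a *= -n ** 6 / (2 * n * (2 * n + 1) * (z + n + 1) ** 4)
    return v

z = 0.5
accelerated = markov_hurwitz(z)
direct = sum(1 / (n + z) ** 3 for n in range(1, 300001))
print(accelerated, direct)  # both ~ zeta(z+1, 3) = 7*zeta(3) - 8 ~ 0.414403...
```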
Another identity [6] due to Tauraso is

(7.2)
$$\sum_{n\geqslant 1}\frac{1}{n^{2}-an-b^{2}}=\sum_{k\geqslant 1}\frac{3k-a}{\binom{2k}{k}k}\,\frac{1}{k^{2}-ak-b^{2}}\prod_{j=1}^{k-1}\frac{j^{2}-a^{2}-4b^{2}}{j^{2}-aj-b^{2}}.$$

Theorem 7.2. A Gosper matrix representation for identity (7.2) is

$$\prod_{k=1}^{\infty}\begin{pmatrix} \frac{k}{2(2k+1)}\,\frac{k^{2}-a^{2}-4b^{2}}{k^{2}-ak-b^{2}} & \frac{3k-a}{k^{2}-ak-b^{2}}\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & \sum_{n\geqslant 1}\frac{2}{n^{2}-an-b^{2}}\\ 0 & 1 \end{pmatrix}.$$

Notice that

$$\sum_{n\geqslant 1}\frac{2}{n^{2}-an-b^{2}}=\frac{2}{\sqrt{a^{2}+4b^{2}}}\left[\psi\!\left(1-\frac{a}{2}+\frac{\sqrt{a^{2}+4b^{2}}}{2}\right)-\psi\!\left(1-\frac{a}{2}-\frac{\sqrt{a^{2}+4b^{2}}}{2}\right)\right].$$

Proof. Choose

$$u_{k}=\frac{3k-a}{k^{2}-ak-b^{2}}.$$

The first term in (7.2) is

$$\frac{3-a}{2}\,\frac{1}{1-a-b^{2}}=\frac{1}{2}u_{1},$$

so that we consider twice identity (7.2), and choose

$$\alpha_{1}\cdots\alpha_{k-1}=\frac{2}{\binom{2k}{k}k}\prod_{j=1}^{k-1}\frac{j^{2}-a^{2}-4b^{2}}{j^{2}-aj-b^{2}}$$

so that

$$\alpha_{k}=\frac{k^{2}-a^{2}-4b^{2}}{k^{2}-ak-b^{2}}\,\frac{k}{2(2k+1)}.\qquad\Box$$
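Theorem 7.2 can be checked with floats at sample values, say $a=0.3$, $b=0.4$ (a sketch, not from the paper); the accelerated product needs about forty factors where the direct sum is still off by its $\sim 1/N$ truncation error after a million terms:

```python
def tauraso_product(a, b, terms=40):
    prod_alpha, v = 1.0, 0.0
    for k in range(1, terms + 1):
        v += prod_alpha * (3 * k - a) / (k * k - a * k - b * b)
        prod_alpha *= (k / (2 * (2 * k + 1))) * \
                      (k * k - a * a - 4 * b * b) / (k * k - a * k - b * b)
    return v

a, b = 0.3, 0.4
fast = tauraso_product(a, b)
slow = sum(2 / (n * n - a * n - b * b) for n in range(1, 1000001))
print(fast, slow)  # agree up to the direct sum's ~2e-6 truncation error
```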
A quartic version reads

$$\sum_{n\geqslant 1}\frac{n}{n^{4}-a^{2}n^{2}-b^{4}}=\frac{1}{2}\sum_{k\geqslant 1}\frac{(-1)^{k-1}}{\binom{2k}{k}k}\,\frac{5k^{2}-a^{2}}{k^{4}-a^{2}k^{2}-b^{4}}\prod_{j=1}^{k-1}\frac{(j^{2}-a^{2})^{2}+4b^{4}}{j^{4}-a^{2}j^{2}-b^{4}}.$$

The same approach as above produces

$$\prod_{k=1}^{\infty}\begin{pmatrix} -\frac{k}{2(2k+1)}\,\frac{(k^{2}-a^{2})^{2}+4b^{4}}{k^{4}-a^{2}k^{2}-b^{4}} & \frac{5k^{2}-a^{2}}{k^{4}-a^{2}k^{2}-b^{4}}\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & \sum_{n=1}^{\infty}\frac{4n}{n^{4}-a^{2}n^{2}-b^{4}}\\ 0 & 1 \end{pmatrix}.$$

Amdeberhan and Zeilberger's ultra-fast series representation [7]

$$\zeta(3)=\sum_{n\geqslant 1}(-1)^{n-1}\frac{(n-1)!^{10}}{64\,(2n-1)!^{5}}\left(205n^{2}-160n+32\right)$$

can be realized as

$$\prod_{n=1}^{\infty}\begin{pmatrix} -\left(\frac{n}{2(2n+1)}\right)^{5} & 205n^{2}-160n+32\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & 64\,\zeta(3)\\ 0 & 1 \end{pmatrix}$$

or equivalently

$$\prod_{n=1}^{\infty}\begin{pmatrix} -\left(\frac{n}{2(2n+1)}\right)^{5} & \frac{205n^{2}-160n+32}{64}\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & \zeta(3)\\ 0 & 1 \end{pmatrix}.$$

The resemblance with (3.1) is interesting and suggests the generalization

$$\prod_{n=1}^{\infty}\begin{pmatrix} -\left(\frac{n}{2(2n+1)}\right)^{5} & \left(\frac{1}{2n(2n+1)}\right)^{5} & P(n)\\ 0 & -\left(\frac{n}{2(2n+1)}\right)^{5} & \frac{205n^{2}-160n+32}{64}\\ 0 & 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & 0 & \zeta(5)\\ 0 & 0 & \zeta(3)\\ 0 & 0 & 1 \end{pmatrix},$$

where $P(n)$ is to be determined. Another fast representation due to Amdeberhan [8] is

$$\zeta(3)=\frac{1}{4}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}\left(56n^{2}-32n+5\right)}{n^{3}(2n-1)^{2}\binom{3n}{n}\binom{2n}{n}}$$

and produces

$$\prod_{n=1}^{\infty}\begin{pmatrix} -\frac{n^{3}}{(3n+3)(3n+2)(3n+1)}\left(\frac{2n-1}{2n+1}\right)^{2} & \frac{56n^{2}-32n+5}{24}\\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 0 & \zeta(3)\\ 0 & 1 \end{pmatrix}.$$
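The convergence of the Amdeberhan and Zeilberger product is dramatic: each factor shrinks the tail by roughly $(1/4)^{5}\approx 10^{-3}$, so eight factors already give $\zeta(3)$ to essentially full double precision. A sketch (not the authors' code):

```python
def az_zeta3(terms=8):
    a, v = 1.0, 0.0  # running top-left / top-right entries of the 2x2 product
    for n in range(1, terms + 1):
        v += a * (205 * n * n - 160 * n + 32) / 64
        a *= -(n / (2 * (2 * n + 1))) ** 5
    return v

print(az_zeta3())  # ~1.2020569031595943 = zeta(3)
```

For comparison, the plain product (1.1) needs about forty factors for the same accuracy.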
References

[1] J. M. Borwein, D. M. Bradley and D. J. Broadhurst, Evaluations of k-fold Euler/Zagier sums: a compendium of results for arbitrary k, Electron. J. Combin. 4 (2), 1-21, 1997.
[2] R. W. Gosper, Analytic identities from path invariant matrix multiplication, unpublished manuscript, 1976.
[3] S. R. Finch, Mathematical Constants, Encyclopedia of Mathematics and its Applications 94, Cambridge University Press, 2003.
[4] M. Koecher, Letters, Math. Intelligencer 2 (1980), no. 2, 62-64.
[5] A. A. Markoff, Mémoire sur la transformation des séries peu convergentes en séries très convergentes, Mém. de l'Acad. Imp. Sci. de St. Pétersbourg, t. XXXVII, No. 9, 1890.
[6] R. Tauraso, A bivariate generating function for zeta values and related supercongruences, arXiv:1806.00846.
[7] T. Amdeberhan and D. Zeilberger, Hypergeometric Series Acceleration via the WZ Method, The Electronic Journal of Combinatorics 4 (2), The Wilf Festschrift Volume, 1997.
[8] T. Amdeberhan, Faster and faster convergent series for ζ(3), Electronic Journal of Combinatorics 3 (1), 1996.

¹ Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
Email address: [email protected]

² Department of Mathematics, Tulane University, New Orleans, Louisiana, USA
Email address: [email protected]
9NAyT4oBgHgl3EQfdPda/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,397 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf,len=396
+ arXiv:2301.00298v1 [math.NT] 31 Dec 2022
+ INFINITE MATRIX PRODUCTS AND HYPERGEOMETRIC ZETA SERIES
+ T. WAKHARE1 AND C. VIGNAT2
+ Abstract. An unpublished identity of Gosper restates a hypergeometric identity for odd zeta values in terms of an infinite product of matrices. We show that this correspondence runs much deeper, and show that many examples of WZ-accelerated series for zeta values lift to infinite matrix products. We also introduce a new matrix subgroup, the Gosper group, which all of our matrix products fall into.
+ 1. Introduction
+ In his famous book "Mathematical Constants" [3], Finch cites an unpublished result by Gosper [2]:
+ (1.1) \prod_{k=1}^{\infty} \begin{pmatrix} -\frac{k}{2(2k+1)} & \frac{5}{4k^2} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & \zeta(3) \\ 0 & 1 \end{pmatrix},
+ and its (N+1) \times (N+1) extension, for N \geqslant 2,
+ (1.2) \prod_{k=1}^{\infty} \begin{pmatrix} -\frac{k}{2(2k+1)} & \frac{1}{2k(2k+1)} & 0 & \cdots & 0 & \frac{1}{k^{2N}} \\ 0 & -\frac{k}{2(2k+1)} & \frac{1}{2k(2k+1)} & \cdots & 0 & \frac{1}{k^{2N-2}} \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & \cdots & -\frac{k}{2(2k+1)} & \frac{1}{2k(2k+1)} & \frac{1}{k^4} \\ 0 & 0 & \cdots & 0 & -\frac{k}{2(2k+1)} & \frac{5}{4k^2} \\ 0 & 0 & \cdots & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & \cdots & 0 & \zeta(2N+1) \\ 0 & \cdots & 0 & \zeta(2N-1) \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & \zeta(5) \\ 0 & \cdots & 0 & \zeta(3) \\ 0 & \cdots & 0 & 1 \end{pmatrix}.
+ We will show that this formula is in fact equivalent to Koecher's identity [4, Eq. (3)]
+ (1.3) \sum_{n=1}^{\infty} \frac{1}{n(n^2 - x^2)} = \frac{1}{2} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{\binom{2k}{k} k^3} \, \frac{5k^2 - x^2}{k^2 - x^2} \prod_{m=1}^{k-1} \left( 1 - \frac{x^2}{m^2} \right).
+ By extracting coefficients of 1 and x^2 in Koecher's identity, we recover Markov's series acceleration identity [5]
+ \zeta(3) = \frac{5}{2} \sum_{n \geqslant 1} \frac{(-1)^{n-1}}{n^3 \binom{2n}{n}}
+ and its higher order counterpart
+ \zeta(5) = 2 \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^5 \binom{2n}{n}} - \frac{5}{2} \sum_{n=1}^{\infty} \frac{(-1)^{n-1} H^{(2)}_{n-1}}{n^3 \binom{2n}{n}}.
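+ As a sanity check, Koecher's identity (1.3) can be verified numerically. The following Python sketch (an illustration, not from the paper) compares a direct truncation of the slowly convergent left side against the rapidly convergent binomial-sum right side at a sample value x = 0.3; the cutoffs 200000 and 40 are arbitrary choices.

```python
from math import comb

def koecher_lhs(x, N=200000):
    # Left side of (1.3): sum_{n>=1} 1/(n(n^2 - x^2)), truncated at N terms
    return sum(1.0 / (n * (n * n - x * x)) for n in range(1, N + 1))

def koecher_rhs(x, K=40):
    # Right side of (1.3): accelerated binomial sum
    total, prod = 0.0, 1.0  # prod tracks prod_{m=1}^{k-1} (1 - x^2/m^2)
    for k in range(1, K + 1):
        term = (-1) ** (k - 1) / (comb(2 * k, k) * k**3)
        term *= (5 * k * k - x * x) / (k * k - x * x) * prod
        total += term
        prod *= 1 - x * x / (k * k)
    return total / 2

x = 0.3
print(abs(koecher_lhs(x) - koecher_rhs(x)))  # small: the two sides agree
```

+ The right side reaches full double precision in a few dozen terms, while the left side's tail decays only polynomially; this is exactly the acceleration phenomenon the matrix products encode.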
+ These are efficiently encoded by the matrix product. By extracting other coefficients of x^n in Koecher's identity, we recover counterparts for \zeta(2n+1) which are again encoded by the matrix product. This correspondence runs much deeper, and we will show that several hypergeometric-type series for the zeta function at small integers are equivalent to infinite products of N \times N matrices. The fact that these identities support an expression in terms of matrix products is already interesting. The pattern of entries of some small matrices suggests the general form of the relevant n \times n generalizations, which would then be equivalent to new accelerated series for zeta values.
+ 2. Background
+ 2.1. Special Functions. The Riemann zeta function, absolutely convergent for s \in \mathbb{C}, \Re s > 1, is given by
+ (2.1) \zeta(s) := \sum_{n=1}^{\infty} \frac{1}{n^s}.
+ This straightforwardly extends to the Hurwitz zeta function with the addition of a parameter z \in \mathbb{C}, z \neq 0, -1, -2, \ldots:
+ (2.2) \zeta(s|z) := \sum_{n=0}^{\infty} \frac{1}{(n+z)^s},
+ so that \zeta(s) = \zeta(s|1). The harmonic numbers are given by H_0 := 0 and
+ (2.3) H_n := \sum_{k=1}^{n} \frac{1}{k}, \quad n \geqslant 1.
+ The hyperharmonic numbers are defined similarly. We will also consider the elementary symmetric functions
+ (2.4) e^{(s)}_{\ell}(k) := [t^{\ell}] \prod_{j=1}^{k-1} \left( 1 + \frac{t}{j^s} \right) = \sum_{1 \leqslant j_1 < j_2 < \cdots < j_{\ell} \leqslant k-1} \frac{1}{(j_1 \cdots j_{\ell})^s},
+ which reduce to the harmonic numbers as e^{(1)}_{1}(n) = H_{n-1}.
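+ The elementary symmetric functions of (2.4) are easy to compute from the product recurrence. The Python sketch below (illustrative, not from the paper; the helper names are mine) checks the stated reduction e^{(1)}_1(n) = H_{n-1}, and the analogous fact e^{(2)}_1(k) = H^{(2)}_{k-1} used later in the paper.

```python
from fractions import Fraction

def elem_sym(s, k, L):
    # Returns [e_0^{(s)}(k), ..., e_L^{(s)}(k)]: coefficients of t^0..t^L
    # in prod_{j=1}^{k-1} (1 + t/j^s), computed exactly with Fractions.
    e = [Fraction(1)] + [Fraction(0)] * L
    for j in range(1, k):
        w = Fraction(1, j**s)
        for l in range(L, 0, -1):  # update highest degree first, in place
            e[l] += w * e[l - 1]
    return e

def harmonic(n, s=1):
    # Generalized harmonic number H_n^{(s)}
    return sum(Fraction(1, k**s) for k in range(1, n + 1))

print(elem_sym(1, 7, 1)[1] == harmonic(6))     # e_1^{(1)}(7) = H_6
print(elem_sym(2, 7, 1)[1] == harmonic(6, 2))  # e_1^{(2)}(7) = H_6^{(2)}
```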
+ 3. The Gosper Group
+ Each Gosper matrix in the product (1.2) has the form
+ M_k = \begin{pmatrix} A_k & u_k \\ 0 & 1 \end{pmatrix},
+ where A_k is square (N \times N), u_k is an (N \times 1) vector, and 0 is the (1 \times N) vector of zeros. Matrices of this kind form a group, which we shall name the Gosper group. With I_N the (N \times N) identity matrix, the unit element of the group is \begin{pmatrix} I_N & 0 \\ 0 & 1 \end{pmatrix}, and the inverse of an element M = \begin{pmatrix} A & u \\ 0 & 1 \end{pmatrix} is M^{-1} = \begin{pmatrix} A^{-1} & -A^{-1}u \\ 0 & 1 \end{pmatrix}. Closure follows from
+ M_1 M_2 = \begin{pmatrix} A_1 A_2 & A_1 u_2 + u_1 \\ 0 & 1 \end{pmatrix}.
+ We can inductively verify that
+ M_1 M_2 \cdots M_n = \begin{pmatrix} A_1 A_2 \cdots A_n & \sum_{k=1}^{n} A_1 \cdots A_{k-1} u_k \\ 0 & 1 \end{pmatrix}.
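+ The closed form for the n-fold product is straightforward to check numerically. This NumPy sketch (illustrative only) multiplies random Gosper block matrices directly and compares against the formula with top-left block A_1...A_n and top-right column sum_k A_1...A_{k-1} u_k.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 3, 5
A = [rng.standard_normal((N, N)) for _ in range(n)]
u = [rng.standard_normal((N, 1)) for _ in range(n)]

def gosper(Ak, uk):
    # Embed (A_k, u_k) as the (N+1) x (N+1) block matrix [[A_k, u_k], [0, 1]]
    M = np.zeros((N + 1, N + 1))
    M[:N, :N] = Ak
    M[:N, N:] = uk
    M[N, N] = 1.0
    return M

# Direct product of the block matrices
P = np.eye(N + 1)
for Ak, uk in zip(A, u):
    P = P @ gosper(Ak, uk)

# Closed form: prod(A_k) and sum_k A_1 ... A_{k-1} u_k
prodA = np.eye(N)
v = np.zeros((N, 1))
for Ak, uk in zip(A, u):
    v = v + prodA @ uk   # at this point prodA = A_1 ... A_{k-1}
    prodA = prodA @ Ak
print(np.allclose(P[:N, :N], prodA), np.allclose(P[:N, N:], v))
```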
+ 3.1. Toeplitz Matrices. Moreover, each A_k in Gosper's identity has the simple form A_k = \alpha_k I + \beta_k J, where J is the (N \times N) matrix with a first superdiagonal of ones. Hence J^N = 0 and, for p \geqslant N, we have
+ A_1 A_2 \cdots A_p = (\alpha_1 I + \beta_1 J)(\alpha_2 I + \beta_2 J) \cdots (\alpha_p I + \beta_p J) = \left( \prod_{i=1}^{p} \alpha_i \right) \left( I + \sum_{j=1}^{p} \frac{\beta_j}{\alpha_j} J + \cdots + \sum_{1 \leqslant j_1 < \cdots < j_{N-1} \leqslant p} \frac{\beta_{j_1} \cdots \beta_{j_{N-1}}}{\alpha_{j_1} \cdots \alpha_{j_{N-1}}} J^{N-1} \right).
+ For p < N the summation is instead truncated at J^p. The general form of the components of the limiting infinite product can be deduced by induction.
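+ The Toeplitz expansion above can also be checked numerically. The following NumPy sketch (illustrative; parameter choices are arbitrary) builds the left-hand product directly and the right-hand side from elementary symmetric functions of the ratios beta_j/alpha_j.

```python
import numpy as np
from itertools import combinations
from math import prod

N, p = 4, 6
rng = np.random.default_rng(1)
alpha = rng.uniform(0.5, 2.0, size=p)
beta = rng.uniform(0.5, 2.0, size=p)
J = np.diag(np.ones(N - 1), 1)   # ones on the first superdiagonal; J**N = 0

# Direct product (alpha_1 I + beta_1 J) ... (alpha_p I + beta_p J)
direct = np.eye(N)
for a, b in zip(alpha, beta):
    direct = direct @ (a * np.eye(N) + b * J)

# Closed form: (prod alpha_i) * sum_{m=0}^{N-1} e_m(beta/alpha) J^m
r = beta / alpha
closed = np.zeros((N, N))
for m in range(N):
    em = sum(prod(r[list(c)]) for c in combinations(range(p), m))
    closed += em * np.linalg.matrix_power(J, m)
closed *= prod(alpha)
print(np.allclose(direct, closed))
```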
+ Lemma 3.1. The components of
+ (3.1) \prod_{k=1}^{\infty} \begin{pmatrix} A_k & u_k \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \prod_{k=1}^{\infty} A_k & v_{\infty} \\ 0 & 1 \end{pmatrix}, \quad \text{with} \quad \left( v^{(N)}_{\infty}, \ldots, v^{(1)}_{\infty} \right)^T := v_{\infty} = \sum_{p=1}^{\infty} A_1 \cdots A_{p-1} u_p,
+ are
+ v^{(1)}_{\infty} = \sum_{p=1}^{\infty} (\alpha_1 \cdots \alpha_{p-1}) \, u^{(1)}_p,
+ v^{(2)}_{\infty} = \sum_{p=1}^{\infty} (\alpha_1 \cdots \alpha_{p-1}) \left( u^{(2)}_p + \left( \sum_{j=1}^{p-1} \frac{\beta_j}{\alpha_j} \right) u^{(1)}_p \right),
+ \vdots
+ v^{(\ell)}_{\infty} = \sum_{p=1}^{\infty} (\alpha_1 \cdots \alpha_{p-1}) \left( u^{(\ell)}_p + \left( \sum_{j=1}^{p-1} \frac{\beta_j}{\alpha_j} \right) u^{(\ell-1)}_p + \cdots + \left( \sum_{1 \leqslant j_1 < \cdots < j_{\ell-1} \leqslant p-1} \frac{\beta_{j_1} \cdots \beta_{j_{\ell-1}}}{\alpha_{j_1} \cdots \alpha_{j_{\ell-1}}} \right) u^{(1)}_p \right),
+ with 1 \leqslant \ell \leqslant N.
+ Already the connection to zeta series and hyperharmonic numbers is clear: with the correct choice of \alpha and \beta, the multiple sums will reduce to multiple zeta type functions. These matrix products also exhibit a stability phenomenon, where increasing the dimension of the matrix does not impact any entries of v_{\infty} except the top right one, since mapping N \to N+1 only changes the formula for v^{(N+1)}_{\infty}.
+ We will consistently refer to the N = 1 and N = 2 cases. Explicitly, when N = 1, so that both A_k (denoted \alpha_k to avoid confusion) and u_k are scalars, we have
+ Lemma 3.2. For N = 1,
+ (3.2) \prod_{k=1}^{n} \begin{pmatrix} \alpha_k & \beta_k \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \prod_{k=1}^{n} \alpha_k & \sum_{k=1}^{n} \alpha_1 \cdots \alpha_{k-1} \beta_k \\ 0 & 1 \end{pmatrix}.
+ Although we will only need the n \to \infty limit, let us note that this identity holds for finite n.
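+ Lemma 3.2 applied to Gosper's 2x2 identity (1.1) says the top-right entry of the partial product up to K is sum_{k<=K} alpha_1...alpha_{k-1} beta_k, which should converge to zeta(3). A quick Python sketch (illustrative; the cutoff K = 30 is arbitrary) confirms this, using exact rationals to avoid round-off in the product:

```python
from fractions import Fraction

# Partial products of Gosper's identity (1.1), via Lemma 3.2:
# top-right entry of prod_{k<=K} [[-k/(2(2k+1)), 5/(4k^2)], [0, 1]]
a = Fraction(1)   # running product alpha_1 ... alpha_{k-1}
v = Fraction(0)   # running sum of alpha_1 ... alpha_{k-1} beta_k
for k in range(1, 31):
    v += a * Fraction(5, 4 * k * k)
    a *= Fraction(-k, 2 * (2 * k + 1))
print(float(v))   # approaches zeta(3) = 1.2020569...
```

+ Since |alpha_1 ... alpha_p| = p!^2/(2p+1)! decays roughly like 4^{-p}, already 30 factors give full double precision; the successive terms a * beta_k are exactly the terms of Markov's series for zeta(3).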
+ 4. Koecher's Identity
+ Theorem 4.1. Identity (1.1) and Koecher's identity are equivalent.
+ Proof. Begin with Koecher's identity (1.3). By extracting coefficients of x^{2n}, in general we obtain
+ (4.1) \zeta(2n+3) = \frac{5}{2} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k^3 \binom{2k}{k}} (-1)^n e^{(2)}_{n}(k) + 2 \sum_{j=1}^{n} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k^{2j+3} \binom{2k}{k}} (-1)^{n-j} e^{(2)}_{n-j}(k).
+ Take \alpha_k = -\frac{k}{2(2k+1)}, \beta_k = \frac{1}{2k(2k+1)}, u^{(1)}_k = \frac{5}{4k^2}, and u^{(\ell)}_k = \frac{1}{k^{2\ell+2}} for 2 \leqslant \ell \leqslant N. This corresponds to the Gosper matrix \begin{pmatrix} A_k & u_k \\ 0 & 1 \end{pmatrix} displayed in (1.2).
191
Then
\[
\prod_{i=1}^{p}\alpha_i=(-1)^{p}\prod_{i=1}^{p}\frac{i^{2}}{(2i)(2i+1)}=(-1)^{p}\frac{p!^{2}}{(2p+1)!},
\]
and (for $2\leqslant\ell\leqslant N$)
\[
\sum_{j_1<\cdots<j_{\ell-1}\leqslant p-1}\frac{\beta_{j_1}\cdots\beta_{j_{\ell-1}}}{\alpha_{j_1}\cdots\alpha_{j_{\ell-1}}}
=(-1)^{\ell}\sum_{j_1<\cdots<j_{\ell-1}\leqslant p-1}\frac{1}{(j_1\cdots j_{\ell-1})^{2}}
=(-1)^{\ell}e^{(2)}_{\ell-1}(p).
\]
We deduce $\lim_{p\to\infty}\alpha_1\cdots\alpha_p=0$, while
\[
\lim_{p\to\infty}\sum_{j_1<\cdots<j_{k}\leqslant p-1}\frac{1}{(j_1\cdots j_{k})^{2}}
\leqslant\lim_{p\to\infty}\sum_{j_1=1}^{p}\frac{1}{j_1^{2}}=\zeta(2).
\]
Hence, applying Lemma 3.1, we deduce $\prod_{i=1}^{\infty}A_i=0$.
INFINITE MATRIX PRODUCTS AND HYPERGEOMETRIC ZETA SERIES

The components in the right column are then explicitly given as
\[
v^{(\ell)}_{\infty}=\sum_{p=1}^{\infty}(\alpha_1\cdots\alpha_{p-1})\left(u^{(\ell)}_{p}+\Bigl(\sum_{j=1}^{p-1}\frac{\beta_j}{\alpha_j}\Bigr)u^{(\ell-1)}_{p}+\cdots+\Bigl(\sum_{1\leqslant j_1<\cdots<j_{\ell-1}\leqslant p-1}\frac{\beta_{j_1}\cdots\beta_{j_{\ell-1}}}{\alpha_{j_1}\cdots\alpha_{j_{\ell-1}}}\Bigr)u^{(1)}_{p}\right)
\]
\[
=\sum_{p=1}^{\infty}\frac{(-1)^{p-1}(p-1)!^{2}}{(2p-1)!}\left(\frac{1}{p^{2\ell+2}}-\frac{e^{(2)}_{1}(p)}{p^{2\ell}}+\cdots+(-1)^{\ell-1}\frac{5}{4}\,\frac{e^{(2)}_{\ell-1}(p)}{p^{2}}\right)
\]
\[
=\frac{5}{2}\sum_{p=1}^{\infty}\frac{(-1)^{p-1}}{p^{3}\binom{2p}{p}}\,e^{(2)}_{\ell-1}(p)
+2\sum_{j=1}^{\ell-1}\sum_{p=1}^{\infty}\frac{(-1)^{p-1}}{p^{3+2j}\binom{2p}{p}}\,(-1)^{\ell-1-j}e^{(2)}_{\ell-1-j}(p).
\]
We see that this is exactly the formula from Koecher's identity, hence equals $\zeta(2\ell+1)$ for $1\leqslant\ell\leqslant N$. $\blacksquare$
5. Leschiner's identity

Begin with the Leschiner identity
\[
\sum_{n\geqslant1}\frac{(-1)^{n-1}}{n^{2}-z^{2}}=\frac{1}{2}\sum_{k\geqslant1}\frac{1}{\binom{2k}{k}k^{2}}\,\frac{3k^{2}+z^{2}}{k^{2}-z^{2}}\prod_{j=1}^{k-1}\Bigl(1-\frac{z^{2}}{j^{2}}\Bigr),
\]
so that
\[
\tilde\zeta(2)=\frac{3}{2}\sum_{k\geqslant1}\frac{1}{\binom{2k}{k}k^{2}},\qquad
\bar\zeta(4)=\frac{3}{2}\sum_{k\geqslant1}\frac{1}{\binom{2k}{k}k^{2}}\Bigl(\frac{4}{k^{2}}-H^{(2)}_{k-1}\Bigr),
\]
and in general (I think I made a mistake here)
\[
\tilde\zeta(2n+2)=\frac{3}{2}\sum_{k=1}^{\infty}\frac{1}{k^{2}\binom{2k}{k}}(-1)^{n}e^{(2)}_{n}(k)
+6\sum_{j=1}^{n}\sum_{k=1}^{\infty}\frac{1}{k^{2j+2}\binom{2k}{k}}(-1)^{n-j}e^{(2)}_{n-j}(k).
\]
A Gosper representation for $\bar\zeta(2)$ and $\bar\zeta(4)$ is
\[
\prod_{n\geqslant1}\begin{pmatrix}\frac{n}{2(2n+1)} & \frac{-1}{2n(2n+1)} & \frac{1}{n^{3}}\\ 0 & \frac{n}{2(2n+1)} & \frac{3}{4n}\\ 0 & 0 & 1\end{pmatrix}
=\begin{pmatrix}0 & 0 & \bar\zeta(4)\\ 0 & 0 & \bar\zeta(2)\\ 0 & 0 & 1\end{pmatrix}.
\]
This will generalize using the same method as Koecher.
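As a quick numerical sanity check (a Python sketch, not part of the original argument), the first Leschiner evaluation should give $\tilde\zeta(2)=\sum_{n\geqslant1}(-1)^{n-1}/n^{2}=\pi^{2}/12$; the central binomial sum converges like $4^{-k}$, so a few dozen terms suffice:

```python
import math

# zeta~(2) = (3/2) * sum_{k>=1} 1 / (C(2k,k) k^2), expected value pi^2/12
s = 1.5 * sum(1 / (math.comb(2 * k, k) * k**2) for k in range(1, 40))
print(s, math.pi**2 / 12)
```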
6. Borwein's Identity

6.1. The infinite product case.

Extracting the coefficient of $z^{2n}$ from Borwein's identity [1]
\[
(6.1)\qquad \sum_{n\geqslant1}\frac{1}{n^{2}-z^{2}}=3\sum_{k\geqslant1}\frac{1}{\binom{2k}{k}}\,\frac{1}{k^{2}-z^{2}}\prod_{j=1}^{k-1}\frac{j^{2}-4z^{2}}{j^{2}-z^{2}}
\]
T. WAKHARE AND C. VIGNAT

gives
\[
\sum_{k\geqslant1}\frac{1}{\binom{2k}{k}}\,\frac{1}{k^{2}-z^{2}}\prod_{j=1}^{k-1}\frac{j^{2}-4z^{2}}{j^{2}-z^{2}}
=\sum_{k\geqslant1}\frac{1}{k^{2}\binom{2k}{k}}\prod_{j=1}^{k-1}\Bigl(1-\frac{4z^{2}}{j^{2}}\Bigr)\prod_{j=1}^{k}\frac{1}{1-\frac{z^{2}}{j^{2}}}
=\sum_{k\geqslant1}\frac{1}{k^{2}\binom{2k}{k}}\sum_{\ell\geqslant0}z^{2\ell}4^{\ell}e^{(2)}_{\ell}(k)\sum_{m\geqslant0}z^{2m}h^{(2)}_{m}(k+1),
\]
where $h_m$ is the complete symmetric function. This gives us a formula for the coefficient of $z^{2n}$ as a convolution over $h_m$ and $e_m$. How do we encode this in the matrix, in terms of $\alpha_k$, $\beta_k$, $u_k$?
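Before encoding anything in a matrix, identity (6.1) itself can be sanity-checked numerically at a sample point; the following Python sketch (illustrative only; the value $z=0.3$ is an arbitrary choice, and $1/N$ approximates the slowly convergent tail on the left) compares both sides:

```python
import math

z = 0.3  # arbitrary sample point for the check
N = 200_000
lhs = sum(1 / (n * n - z * z) for n in range(1, N + 1)) + 1 / N  # 1/N ~ tail of the sum

rhs, prod = 0.0, 1.0
for k in range(1, 60):
    rhs += 3 / math.comb(2 * k, k) * prod / (k * k - z * z)
    prod *= (k * k - 4 * z * z) / (k * k - z * z)  # extends the product to j = k
print(lhs, rhs)
```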
Theorem 6.1. A Gosper representation for $\zeta(2)$ is obtained as
\[
\prod_{n\geqslant1}\begin{pmatrix}\frac{n}{2(2n+1)} & \frac{3}{2n}\\ 0 & 1\end{pmatrix}=\begin{pmatrix}0 & \zeta(2)\\ 0 & 1\end{pmatrix}.
\]
Proof. Identifying the constant term produces
\[
\zeta(2)=3\sum_{k\geqslant1}\frac{1}{\binom{2k}{k}k^{2}}.
\]
With $\alpha_k=\frac{k}{2(2k+1)}$ and $\beta_k=\frac{3}{2k}$, we have
\[
\sum_{n\geqslant1}\Bigl(\prod_{k=1}^{n-1}\alpha_k\Bigr)\beta_n=\frac{3}{2}\sum_{n\geqslant1}\frac{2}{n^{2}\binom{2n}{n}}=\zeta(2).\qquad\blacksquare
\]
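The $2\times2$ product in Theorem 6.1 converges geometrically; a short Python check (an illustrative sketch, not part of the proof) multiplies the first 60 factors left to right and reads off the top-right entry:

```python
import math

# Running product [[a, b], [0, 1]] of the matrices [[n/(2(2n+1)), 3/(2n)], [0, 1]]
a, b = 1.0, 0.0
for n in range(1, 61):
    alpha, beta = n / (2 * (2 * n + 1)), 3 / (2 * n)
    a, b = a * alpha, a * beta + b
print(b, math.pi**2 / 6)  # top-right entry vs zeta(2)
```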
Identifying the linear term in (6.1) produces
\[
\zeta(4)=3\sum_{k\geqslant1}\frac{1}{\binom{2k}{k}k^{2}}\Bigl(\frac{1}{k^{2}}-3H^{(2)}_{k-1}\Bigr).
\]
This suggests the following result.
Theorem 6.2. A Gosper representation for $\zeta(2)$ and $\zeta(4)$ is obtained as
\[
\prod_{n\geqslant1}\begin{pmatrix}\frac{n}{2(2n+1)} & \frac{-3}{2n(2n+1)} & \frac{3}{2n^{3}}\\ 0 & \frac{n}{2(2n+1)} & \frac{3}{2n}\\ 0 & 0 & 1\end{pmatrix}
=\begin{pmatrix}0 & 0 & \zeta(4)\\ 0 & 0 & \zeta(2)\\ 0 & 0 & 1\end{pmatrix}.
\]
Proof. Denote
\[
M_n=\begin{pmatrix}\delta_n & \gamma_n & u^{(1)}_n\\ 0 & \delta_n & u^{(2)}_n\\ 0 & 0 & 1\end{pmatrix}=\begin{pmatrix}A_n & u_n\\ 0 & 1\end{pmatrix}
\quad\text{with}\quad
A_n=\begin{pmatrix}\delta_n & \gamma_n\\ 0 & \delta_n\end{pmatrix}=\delta_n I+\gamma_n J
\quad\text{and}\quad \delta_n=\frac{n}{2(2n+1)},
\]
so that, with
\[
I=\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix},\qquad J=\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix},\qquad
u_n=\begin{pmatrix}u^{(1)}_n\\ u^{(2)}_n\end{pmatrix}\ \text{with}\ u^{(2)}_n=\frac{3}{2n},
\]
\[
A_1\cdots A_{i-1}=\frac{2}{i\binom{2i}{i}}\Bigl(I+J\sum_{j=1}^{i-1}\frac{\gamma_j}{\delta_j}\Bigr).
\]
We know that
\[
M_1\cdots M_n=\begin{pmatrix}A_1\cdots A_n & v_n\\ 0 & 1\end{pmatrix}\qquad\text{with}\qquad v_n=\sum_{i=1}^{n}A_1\cdots A_{i-1}u_i,
\]
so that
\[
v_n=\sum_{i=1}^{n}\frac{2}{i\binom{2i}{i}}\left(u_i+\sum_{j=1}^{i-1}\frac{\gamma_j}{\delta_j}\begin{pmatrix}\frac{3}{2i}\\ 0\end{pmatrix}\right)
=\begin{pmatrix}\sum_{i=1}^{n}\frac{2}{i\binom{2i}{i}}\Bigl(u^{(1)}_i+\frac{3}{2i}\sum_{j=1}^{i-1}\frac{\gamma_j}{\delta_j}\Bigr)\\[2mm]
\sum_{i=1}^{n}\frac{2}{i\binom{2i}{i}}\,\frac{3}{2i}\end{pmatrix}.
\]
This produces
\[
v^{(2)}_{\infty}=\zeta(2)=\sum_{i=1}^{\infty}\frac{3}{i^{2}\binom{2i}{i}}
\qquad\text{and}\qquad
v^{(1)}_{\infty}=\zeta(4)=\sum_{i=1}^{\infty}\frac{2}{i\binom{2i}{i}}u^{(1)}_i+\sum_{i=1}^{\infty}\frac{2}{i\binom{2i}{i}}\,\frac{3}{2i}\sum_{j=1}^{i-1}\frac{\gamma_j}{\delta_j}.
\]
Identifying with
\[
\zeta(4)=3\sum_{k\geqslant1}\frac{1}{\binom{2k}{k}k^{2}}\Bigl(\frac{1}{k^{2}}-3H^{(2)}_{k-1}\Bigr)
\]
produces
\[
u^{(1)}_i=\frac{3}{2i^{3}},\qquad \gamma_j=\frac{-3}{2j(2j+1)}.\qquad\blacksquare
\]
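The $3\times3$ product from Theorem 6.2 can be checked numerically in the same way (a Python sketch, not part of the proof); both right-column entries converge rapidly:

```python
import math

# Left-to-right product of the 3x3 matrices from Theorem 6.2, n = 1..80
P = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
for n in range(1, 81):
    d, g = n / (2 * (2 * n + 1)), -3 / (2 * n * (2 * n + 1))
    M = [[d, g, 3 / (2 * n**3)], [0.0, d, 3 / (2 * n)], [0.0, 0.0, 1.0]]
    P = [[sum(P[i][k] * M[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(P[0][2], math.pi**4 / 90)  # top-right entry vs zeta(4)
print(P[1][2], math.pi**2 / 6)   # middle-right entry vs zeta(2)
```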
Unfortunately, the case that includes $\zeta(6)$ is not as straightforward.
Theorem 6.3. A Gosper representation for $\zeta(2)$, $\zeta(4)$ and $\zeta(6)$ is obtained as
\[
\prod_{n\geqslant1}\begin{pmatrix}\frac{n}{2(2n+1)} & -\frac{3}{2n(2n+1)} & 0 & \frac{3}{2n^{5}}-\frac{9H^{(4)}_{n-1}}{2n}\\[1mm]
0 & \frac{n}{2(2n+1)} & -\frac{3}{2n(2n+1)} & \frac{3}{2n^{3}}\\[1mm]
0 & 0 & \frac{n}{2(2n+1)} & \frac{3}{2n}\\[1mm]
0 & 0 & 0 & 1\end{pmatrix}
=\begin{pmatrix}0 & 0 & 0 & \zeta(6)\\ 0 & 0 & 0 & \zeta(4)\\ 0 & 0 & 0 & \zeta(2)\\ 0 & 0 & 0 & 1\end{pmatrix}.
\]
For example, the truncated product from $n=1$ up to $n=200$ is
\[
\begin{pmatrix}
2.4222\cdot10^{-122} & -1.1917\cdot10^{-121} & 1.7517\cdot10^{-121} & 1.01734\\
0. & 2.4222\cdot10^{-122} & -1.1917\cdot10^{-121} & 1.08232\\
0. & 0. & 2.4222\cdot10^{-122} & 1.64493\\
0. & 0. & 0. & 1.
\end{pmatrix}.
\]
Proof. Identifying the coefficient of $z^{2}$ in Borwein's identity (6.1) produces
\[
\zeta(6)=3\sum_{k\geqslant1}\frac{1}{\binom{2k}{k}k^{2}}\Bigl(17H^{(2,2)}_{k-1}+H^{(4)}_{k-1}-4\bigl(H^{(2)}_{k-1}\bigr)^{2}-\frac{3H^{(2)}_{k-1}}{k^{2}}+\frac{1}{k^{4}}\Bigr).
\]
Moreover, the vector $v_n$ is computed as
\[
v_n=\sum_{i=1}^{n}A_1\cdots A_{i-1}u_i
=\sum_{i=1}^{n}\frac{2}{\binom{2i}{i}i}\left(u_i-3H^{(2)}_{i-1}\begin{pmatrix}u_i(2)\\ u_i(3)\\ 0\end{pmatrix}+9H^{(2,2)}_{i-1}\begin{pmatrix}u_i(3)\\ 0\\ 0\end{pmatrix}\right).
\]
Hence
\[
v^{(1)}_{\infty}=\sum_{i=1}^{\infty}\frac{2}{\binom{2i}{i}i}\Bigl(u_i(1)-3H^{(2)}_{i-1}u_i(2)+9H^{(2,2)}_{i-1}u_i(3)\Bigr)
=\sum_{i=1}^{\infty}\frac{2}{\binom{2i}{i}i}\Bigl(u_i(1)-3H^{(2)}_{i-1}\frac{3}{2i^{3}}+9H^{(2,2)}_{i-1}\frac{3}{2i}\Bigr).
\]
Using
\[
\frac{3}{n}\Bigl(4H^{(2,2)}_{n-1}+\frac{1}{2}H^{(4)}_{n-1}-2\bigl(H^{(2)}_{n-1}\bigr)^{2}\Bigr)=-\frac{9}{2n}H^{(4)}_{n-1}
\]
and identifying $v^{(1)}_{\infty}=\zeta(6)$ produces the result. $\blacksquare$
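The truncated product displayed above can be reproduced in Python; the sketch below (illustrative, not part of the proof) carries the harmonic number $H^{(4)}_{n-1}$ along with the product and checks the right column against $\zeta(6)$, $\zeta(4)$, $\zeta(2)$:

```python
import math

# Left-to-right product of the 4x4 matrices from Theorem 6.3, n = 1..200
P = [[float(i == j) for j in range(4)] for i in range(4)]
H4 = 0.0  # running H^{(4)}_{n-1}
for n in range(1, 201):
    d, g = n / (2 * (2 * n + 1)), -3 / (2 * n * (2 * n + 1))
    M = [[d, g, 0.0, 3 / (2 * n**5) - 9 * H4 / (2 * n)],
         [0.0, d, g, 3 / (2 * n**3)],
         [0.0, 0.0, d, 3 / (2 * n)],
         [0.0, 0.0, 0.0, 1.0]]
    P = [[sum(P[i][k] * M[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
    H4 += 1 / n**4
print(P[0][3], math.pi**6 / 945)  # vs zeta(6)
print(P[1][3], math.pi**4 / 90)   # vs zeta(4)
print(P[2][3], math.pi**2 / 6)    # vs zeta(2)
```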
6.2. A finite matrix product for $H^{(3)}_N$.

From the identity
\[
H^{(3)}_N=\sum_{n=1}^{N}\frac{(-1)^{n-1}}{n^{3}\binom{2n}{n}}\left(\frac{5}{2}-\frac{1}{2\binom{N+n}{2n}}\right),
\]
we deduce the following finite product representation
\[
\prod_{n=1}^{N}\begin{pmatrix}-\frac{n}{2(2n+1)} & \frac{5}{4n^{2}}\Bigl(1-\frac{1}{5\binom{N+n}{2n}}\Bigr)\\ 0 & 1\end{pmatrix}
=\begin{pmatrix}\frac{2(-1)^{N}}{(N+1)\binom{2N+2}{N+1}} & H^{(3)}_N\\ 0 & 1\end{pmatrix}.
\]
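Since the identity is finite, it can be tested exactly for small $N$; the Python sketch below (illustrative only, using `math.comb` for the binomials) compares both sides for several $N$:

```python
import math

# H^{(3)}_N = sum_{n=1}^N (-1)^(n-1)/(n^3 C(2n,n)) * (5/2 - 1/(2 C(N+n,2n)))
results = {}
for N in (1, 2, 5, 10):
    lhs = sum(1 / n**3 for n in range(1, N + 1))
    rhs = sum((-1) ** (n - 1) / (n**3 * math.comb(2 * n, n))
              * (2.5 - 0.5 / math.comb(N + n, 2 * n)) for n in range(1, N + 1))
    results[N] = (lhs, rhs)
    print(N, lhs, rhs)
```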
7. Gosper Representation of Markov's identity for $\zeta(2)$ and $\zeta(z+1,3)$

7.1. Markov's identity for $\zeta(z+1,3)$.

Markov's identity reads
\[
(7.1)\qquad \zeta(z+1,3)=\sum_{n=1}^{\infty}\frac{1}{(n+z)^{3}}
=\frac{1}{4}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}(k-1)!^{6}}{(2k-1)!}\,\frac{5k^{2}+6kz+2z^{2}}{\bigl((z+1)(z+2)\cdots(z+k)\bigr)^{4}}.
\]
Theorem 7.1. A Gosper representation for Markov's identity is
\[
\prod_{n=1}^{\infty}\begin{pmatrix}-\frac{n^{6}}{2n(2n+1)(z+n+1)^{4}} & 5n^{2}+6nz+2z^{2}\\ 0 & 1\end{pmatrix}
=\begin{pmatrix}0 & 4(z+1)^{4}\,\zeta(z+1,3)\\ 0 & 1\end{pmatrix}
\]
or equivalently
\[
\prod_{n=1}^{\infty}\begin{pmatrix}-\frac{n^{6}}{2n(2n+1)(z+n+1)^{4}} & \frac{5n^{2}+6nz+2z^{2}}{4(z+1)^{4}}\\ 0 & 1\end{pmatrix}
=\begin{pmatrix}0 & \zeta(z+1,3)\\ 0 & 1\end{pmatrix}.
\]
Proof. Rewrite Markov's identity as
\[
4\zeta(z+1,3)=\sum_{k\geqslant1}\frac{(-1)^{k-1}(k-1)!^{6}}{(2k-1)!}\,\frac{5k^{2}+6kz+2z^{2}}{\bigl((z+1)\cdots(z+k)\bigr)^{4}},
\]
define $u_k=5k^{2}+6kz+2z^{2}$ and notice that writing $4\zeta(z+1,3)=u_1+\alpha_1u_2+\alpha_1\alpha_2u_3+\cdots$ requires that the coefficient of $u_1$ should be equal to $1$; as it is equal to $\frac{1}{(z+1)^{4}}$, consider the variation
\[
4(z+1)^{4}\,\zeta(z+1,3)=\sum_{k\geqslant1}\frac{(-1)^{k-1}(k-1)!^{6}}{(2k-1)!}\,\frac{(z+1)^{4}}{\bigl((z+1)\cdots(z+k)\bigr)^{4}}\,u_k,
\]
which now satisfies this constraint. Then identifying
\[
\alpha_1\cdots\alpha_{k-1}=\frac{(-1)^{k-1}(k-1)!^{6}}{(2k-1)!}\,\frac{(z+1)^{4}}{\bigl((z+1)\cdots(z+k)\bigr)^{4}}
\]
provides
\[
\alpha_k=-\frac{k^{6}}{2k(2k+1)(z+k+1)^{4}}.
\]
Notice that the constant term $(z+1)^{4}$ disappears from $\alpha_k$. $\blacksquare$
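Identity (7.1) converges like $4^{-k}$ and is easy to probe numerically; the Python sketch below (illustrative only; $z=0.5$ is an arbitrary sample value) builds the summand by the same ratio $\alpha_k$ derived in the proof:

```python
import math

z = 0.5  # arbitrary sample value for the check
N = 20_000
lhs = sum((n + z) ** -3 for n in range(1, N + 1))  # truncation error ~ 1/(2 N^2)

rhs, r = 0.0, 1 / (z + 1) ** 4  # r = (k-1)!^6 / ((2k-1)! ((z+1)...(z+k))^4) at k = 1
for k in range(1, 60):
    rhs += 0.25 * (-1) ** (k - 1) * r * (5 * k * k + 6 * k * z + 2 * z * z)
    r *= k**6 / ((2 * k) * (2 * k + 1) * (z + k + 1) ** 4)
print(lhs, rhs)
```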
Another identity [6] due to Tauraso is
\[
(7.2)\qquad \sum_{n\geqslant1}\frac{1}{n^{2}-an-b^{2}}
=\sum_{k\geqslant1}\frac{3k-a}{\binom{2k}{k}k}\,\frac{1}{k^{2}-ak-b^{2}}\prod_{j=1}^{k-1}\frac{j^{2}-a^{2}-4b^{2}}{j^{2}-aj-b^{2}}.
\]
Theorem 7.2. A Gosper matrix representation for identity (7.2) is
\[
\prod_{k\geqslant1}\begin{pmatrix}\frac{k}{2(2k+1)}\,\frac{k^{2}-a^{2}-4b^{2}}{k^{2}-ak-b^{2}} & \frac{3k-a}{k^{2}-ak-b^{2}}\\ 0 & 1\end{pmatrix}
=\begin{pmatrix}0 & \sum_{n\geqslant1}\frac{2}{n^{2}-an-b^{2}}\\ 0 & 1\end{pmatrix}.
\]
Notice that
\[
\sum_{n\geqslant1}\frac{2}{n^{2}-an-b^{2}}
=\frac{2}{\sqrt{a^{2}+4b^{2}}}\left(\psi\Bigl(1-\frac{a}{2}+\frac{\sqrt{a^{2}+4b^{2}}}{2}\Bigr)-\psi\Bigl(1-\frac{a}{2}-\frac{\sqrt{a^{2}+4b^{2}}}{2}\Bigr)\right).
\]
Proof. Choose
\[
u_k=\frac{3k-a}{k^{2}-ak-b^{2}}.
\]
The first term in (7.2) is
\[
\frac{3-a}{2}\,\frac{1}{1-a-b^{2}}=\frac{1}{2}u_1,
\]
so that we consider twice identity (7.2), and choose
\[
\alpha_1\cdots\alpha_{k-1}=\frac{1}{\binom{2k}{k}k}\prod_{j=1}^{k-1}\frac{j^{2}-a^{2}-4b^{2}}{j^{2}-aj-b^{2}}
\]
so that
\[
\alpha_k=\frac{k^{2}-a^{2}-4b^{2}}{k^{2}-ak-b^{2}}\,\frac{k}{2(2k+1)}.\qquad\blacksquare
\]
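A numerical probe of (7.2) at sample parameters (a Python sketch, not from the original; the values $a=0.1$, $b=0.2$ are arbitrary small choices keeping all denominators nonzero):

```python
import math

a, b = 0.1, 0.2  # arbitrary sample parameters for the check
N = 1_000_000
lhs = sum(1 / (n * n - a * n - b * b) for n in range(1, N + 1)) + 1 / N  # 1/N ~ tail

rhs, prod = 0.0, 1.0
for k in range(1, 60):
    rhs += (3 * k - a) / (math.comb(2 * k, k) * k) * prod / (k * k - a * k - b * b)
    prod *= (k * k - a * a - 4 * b * b) / (k * k - a * k - b * b)
print(lhs, rhs)
```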
A quartic version reads
\[
\sum_{n\geqslant1}\frac{n}{n^{4}-a^{2}n^{2}-b^{4}}
=\frac{1}{2}\sum_{k\geqslant1}\frac{(-1)^{k-1}}{\binom{2k}{k}k}\,\frac{5k^{2}-a^{2}}{k^{4}-a^{2}k^{2}-b^{4}}\prod_{j=1}^{k-1}\frac{(j^{2}-a^{2})^{2}+4b^{4}}{j^{4}-a^{2}j^{2}-b^{4}}.
\]
The same approach as above produces
\[
\prod_{k\geqslant1}\begin{pmatrix}-\frac{k}{2(2k+1)}\,\frac{(k^{2}-a^{2})^{2}+4b^{4}}{k^{4}-a^{2}k^{2}-b^{4}} & \frac{5k^{2}-a^{2}}{k^{4}-a^{2}k^{2}-b^{4}}\\ 0 & 1\end{pmatrix}
=\begin{pmatrix}0 & \sum_{n\geqslant1}\frac{4n}{n^{4}-a^{2}n^{2}-b^{4}}\\ 0 & 1\end{pmatrix}.
\]
Amdeberhan--Zeilberger's ultra-fast series representation [7]
\[
\zeta(3)=\sum_{n\geqslant1}\frac{(-1)^{n-1}(n-1)!^{10}}{64\,(2n-1)!^{5}}\bigl(205n^{2}-160n+32\bigr)
\]
can be realized as
\[
\prod_{n=1}^{\infty}\begin{pmatrix}-\Bigl(\frac{n}{2(2n+1)}\Bigr)^{5} & 205n^{2}-160n+32\\ 0 & 1\end{pmatrix}
=\begin{pmatrix}0 & 64\,\zeta(3)\\ 0 & 1\end{pmatrix}
\]
or equivalently
\[
\prod_{n=1}^{\infty}\begin{pmatrix}-\Bigl(\frac{n}{2(2n+1)}\Bigr)^{5} & \frac{205n^{2}-160n+32}{64}\\ 0 & 1\end{pmatrix}
=\begin{pmatrix}0 & \zeta(3)\\ 0 & 1\end{pmatrix}.
\]
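The "ultra-fast" convergence is easy to see in exact rational arithmetic; the Python sketch below (illustrative only) sums the first 15 terms with `fractions.Fraction`:

```python
from fractions import Fraction
from math import factorial

# zeta(3) = sum_{n>=1} (-1)^(n-1) (n-1)!^10 (205 n^2 - 160 n + 32) / (64 (2n-1)!^5)
s = Fraction(0)
for n in range(1, 16):
    s += Fraction((-1) ** (n - 1) * factorial(n - 1) ** 10 * (205 * n * n - 160 * n + 32),
                  64 * factorial(2 * n - 1) ** 5)
print(float(s))
```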
The resemblance with (3.1) is interesting and suggests the generalization
\[
\prod_{n=1}^{\infty}\begin{pmatrix}-\Bigl(\frac{n}{2(2n+1)}\Bigr)^{5} & \Bigl(\frac{1}{2n(2n+1)}\Bigr)^{5}P(n) & 0\\[1mm]
0 & -\Bigl(\frac{n}{2(2n+1)}\Bigr)^{5} & \frac{205n^{2}-160n+32}{64}\\[1mm]
0 & 0 & 1\end{pmatrix}
=\begin{pmatrix}0 & 0 & \zeta(5)\\ 0 & 0 & \zeta(3)\\ 0 & 0 & 1\end{pmatrix},
\]
where $P(n)$ is to be determined.
+ page_content=' Another fast representation due to Amdeberhan [8] is ζ (3) = 1 4 ∞ � n=1 (−1)n−1 (56n2 − 32n + 5) n3 (2n − 1)2 �3n n ��2n n � INFINITE MATRIX PRODUCTS AND HYPERGEOMETRIC ZETA SERIES 11 and produces ∞ � n=1 \uf8ee \uf8f0 − k3 (3k + 3) (3k + 2) (3k + 1) �2k − 1 2k + 1 �2 56k2 − 32k + 5 24 0 1 \uf8f9 \uf8fb = � 0 ζ (3) 0 1 � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
364
+ page_content=' References [1] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
365
+ page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
366
+ page_content=' Borwein, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
367
+ page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
368
+ page_content=' Bradley and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
369
+ page_content='J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
370
+ page_content=' Broadhurst, Evaluations of k-fold Euler/Zagier sums: a com- pendium of results for arbitrary k, Electron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
371
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
372
+ page_content=' Combin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
373
+ page_content=', 4-2, 1-21, 1997 [2] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
374
+ page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
375
+ page_content=' Gosper, Analytic identities from path invariant matrix multiplication, unpublished manuscript, 1976 [3] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
376
+ page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
377
+ page_content=' Finch, Mathematical Constants, Encyclopedia of Mathematics and its Applications 94, Cambridge University Press, 2003 [4] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
378
+ page_content=' Koecher, Letters, Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
379
+ page_content=' Intelligencer 2 (1980), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
380
+ page_content=' 2, 62-64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
381
+ page_content=' [5] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
382
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
383
+ page_content=' Markoff, M´emoire sur la transformation des s´eries peu convergentes en s´eries tr`es convergentes, M´em.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
384
+ page_content=' de l’Acad.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
385
+ page_content=' Imp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
386
+ page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
387
+ page_content=' de St.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
388
+ page_content=' P´etersbourg, t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
389
+ page_content=' XXXVII, No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
390
+ page_content=' 9 (1890) [6] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
391
+ page_content=' Tauraso, A bivariate generating function for zeta values and related supercongruences, arXiv:1806.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
392
+ page_content='00846 [7] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
393
+ page_content=' Amdeberhan and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
394
+ page_content=' Zeilberger, Hypergeometric Series Acceleration via the WZ Method, The Electronic Journal of Combinatorics, 4-2, The Wilf Festschrift Volume, 1997 [8] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
395
+ page_content=' Amdeberhan, Faster and faster convergent series for ζ(3), Electronic Journal of Combinatorics, Volume 3 Issue 1, 1996 1 Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA Email address: twakhare@mit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
396
+ page_content='edu 2 Department of Mathematics, Tulane University, New Orleans, Louisiana, USA Email address: cvignat@tulane.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
397
+ page_content='edu' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NAyT4oBgHgl3EQfdPda/content/2301.00298v1.pdf'}
9dAyT4oBgHgl3EQfQ_am/content/tmp_files/2301.00058v1.pdf.txt ADDED
@@ -0,0 +1,1127 @@
1
+ Detecting TCP Packet Reordering in the Data Plane
2
+ YUFEI ZHENG, Princeton University, USA
3
+ HUACHENG YU, Princeton University, USA
4
+ JENNIFER REXFORD, Princeton University, USA
5
+ Network administrators are often interested in detecting TCP-level packet reordering to diagnose performance
6
+ problems and neutralize attacks. However, packet reordering is expensive to measure, because each packet
7
+ must be processed relative to the TCP sequence number of its predecessor in the same flow. Due to the volume
8
+ of traffic, the detection of packet reordering should take place in the data plane of the network devices as
9
+ the packets fly by. However, restrictions on the memory size and the number of memory accesses per packet
10
+ make it impossible to design an efficient algorithm for pinpointing the flows with heavy packet reordering.
11
+ In practice, packet reordering is typically a property of a network path, due to a congested or flaky link.
12
+ Flows traversing the same path are correlated in their out-of-orderness, and aggregating out-of-order statistics
13
+ at the IP prefix level would provide useful diagnostic information. In this paper, we present efficient algorithms
14
+ for identifying IP prefixes with heavy packet reordering under memory restrictions. First, we analyze as much
15
+ of the traffic as possible by going after the largest flows. Next, we sample as many flows as possible, regardless
16
+ of their sizes. To achieve the best of both worlds, we also combine these two methods. In all algorithms, we
17
+ resolve the challenging interplay between measuring at the flow level and aggregating at the prefix level by
18
+ allocating memory using prefix information. Our simulation experiments using packet traces from a campus
19
+ network show that our algorithms are effective at identifying IP prefixes with heavy packet reordering using
20
+ moderate memory resources.
21
+ 1
22
+ INTRODUCTION
23
+ Transmission Control Protocol (TCP) performance problems are often associated with packet
24
+ reordering. Packet loss, commonly caused by congested links, triggers TCP senders to retransmit
25
+ packets, leading the retransmitted packets to appear out of order. Also, the network itself can cause
26
+ packet reordering, due to malfunctioning equipment or traffic splitting over multiple links [17].
27
+ TCP overreacts to inadvertent reordering by retransmitting packets that were not actually lost and
28
+ erroneously reducing the sending rate [5, 17]. In addition, reordering of acknowledgment packets
29
+ muddles TCP’s self-clocking property, and induces bursts of traffic [4]. Perhaps more strikingly,
30
+ reordering can be a form of denial-of-service (DoS) attack. In this scenario, an adversary persistently
31
+ reorders existing packets, or injects malicious reordering into the network, to make the goodput
32
+ low or even close to zero, despite delivering all of the packets [1, 7].
33
+ To diagnose performance problems and neutralize attacks, it is therefore crucial to detect packet
34
+ reordering quickly and efficiently, e.g., on the order of minutes. Due to the sheer volume of traffic,
35
+ the detection of packet reordering should take place in the data plane of network devices as the
36
+ packets fly by. This is because each packet must be processed in conjunction with its predecessor
37
+ in the same flow, which renders simple packet sampling insufficient. In many cases, it suffices to
38
+ report reordering at a coarser level, such as to identify the IP prefixes associated with performance
39
+ problems. In this paper, we focus on an edge network. Since routing is determined at the IP prefix
40
+ level, by identifying heavily reordered source prefixes in the incoming traffic, we can locate the
41
+ part of network experiencing trouble. However, this does not obviate the need to maintain state for
42
+ at least some flows, as packet reordering is still a flow-level phenomenon.
43
+ The emergence of programmable data planes makes it possible to keep simple reordering statistics
44
+ directly in the packet-processing pipeline. With flexible parsing, we can extract the header fields
45
+ we need to analyze the packets in a flow, including the TCP flow identifier (source and destination
46
+ IP addresses and port numbers) and the TCP sequence number for each packet. Using register
47
+ arXiv:2301.00058v1 [cs.NI] 30 Dec 2022
49
+
50
+ Zheng, Yu and Rexford
51
+ arrays, we can keep state across successive packets of the same flow. In addition, simple arithmetic
52
+ operations allow us to detect reordering and count the number of out-of-order packets in a flow.
53
+ However, the limited memory in the programmable data plane usually needs to be shared among
54
+ several monitoring tasks, and keeping per-flow state is taxing on the memory resources. To see that,
55
+ simply storing the flow signatures for a 5-minute traffic trace from a campus network could take
56
+ more than 2^28 bits of memory, which is already a significant fraction of the total register memory
57
+ in bits available on a Tofino switch, without even accounting for the memory necessary to keep
58
+ the statistics for each flow. Moreover, to keep up with line rate, we can only access memory a
59
+ small constant number of times for each packet, which limits our choice of data structures. As in
60
+ many previous works [3, 19], we turn to the hash-indexed array for our algorithms. Due to the
61
+ number of processing stages present in the hardware, it is also impossible to use more than a small
62
+ constant number of arrays. Furthermore, since the data-plane hardware has limited bandwidth for
63
+ communicating with the control-plane software, we cannot offload monitoring tasks to the control
64
+ plane. As such, we need to design compact data structures that work within these constraints.
65
+ In this paper, we present data structures that detect and report packet-reordering statistics to the
66
+ control plane. As packet reordering is typically a property of a network path, packets traversing the
67
+ same path at the same time are often correlated in their out-of-orderness. This hints that we can
68
+ identify prefixes with heavy packet reordering without needing to sieve through all of the flows
69
+ in that prefix. To work with the limited memory, we approach this problem from two different
70
+ directions:
71
+ • Study as much of the traffic as possible by going after the heavy flows, since it is memory
72
+ efficient to identify heavy hitters, even in the data plane [3, 19, 21]. However, with heavy
73
+ hitters not being the only flows of interest, this method is not robust if the traffic distribution
74
+ contains heavy-reordering prefixes with only non-heavy flows.
75
+ • Sample as many flows as possible, regardless of their sizes. The minor drawback is that the
76
+ amount of communication necessary from the data plane to the control plane, though not to
77
+ the extent of overwhelming hardware resources, could be significantly larger than that of
78
+ the first approach.
79
+ To achieve the best of both worlds, we also propose a combination of these two approaches.
80
+ The interplay between measuring at the flow level and acting at the prefix level lies at the heart
81
+ of this problem. To decide which set of flows to monitor, we need to incorporate prefix identity in
82
+ managing the data structures, which gives rise to the idea of allocating memory on the prefix level.
83
+ In what follows, § 2 formulates the reordering problem and shows hardness of identifying out-
84
+ of-order heavy flows. We elaborate on the two approaches for finding heavily reordered prefixes in
85
+ § 3, and briefly discuss a combination of the two. In § 4, we verify the correlation among flows from
86
+ the same prefix through measurement results, and demonstrate that our algorithms are extremely
87
+ memory-efficient. We discuss related work in §5 and then conclude our paper in § 6.
88
+ 2
89
+ PROBLEM FORMULATION: IDENTIFY HEAVY OUT-OF-ORDER IP PREFIXES
90
+ Consider a switch close to the receiving hosts, where we observe a stream of incoming packets
91
+ (Figure 1). Our goal is to identify the senders whose paths to the receivers are experiencing
92
+ performance problems, through counting out-of-order packets. In § 2.1, we first introduce notations
93
+ and definitions at the flow level, and show that identifying flows with heavy reordering is hard,
94
+ even with randomness and approximation. Later, in § 2.2, we extend the definitions to the prefix
95
+ level, then discuss possible directions to identify heavy out-of-order prefixes.
96
99
+ Fig. 1. Different source prefixes send packets over different paths. Packets on a path are colored differently
100
+ to show that traffic from a single prefix has a mix of packets from different flows. While flows from a single
101
+ prefix may split over parallel subpaths, they do share many portions of their network resources.
102
+ 2.1
103
+ Flow-level reordering statistics
104
+ 2.1.1
105
+ Definitions at the flow level. Consider a stream 𝑆 of TCP packets from different remote senders
106
+ to the local receivers. In practice, TCP packets may contain payloads, and sequence numbers advance
107
+ by the length of payload in bytes. But, to keep the discussions simple, we assume sequence numbers
108
+ advance by 1 at a time, and we ignore sequence number rollovers. We note that these assumptions
109
+ can be easily adjusted to reflect the more realistic scenarios. Then, a packet can be abstracted as a
110
+ 3-tuple (𝑓 ,𝑠,𝑡), with 𝑓 ∈ F being its flow ID, 𝑠 ∈ [𝐼] the sequence number and 𝑡 the timestamp. In
111
+ this case, a flow ID is a 4-tuple of source and destination IP addresses, and the source and destination
112
+ TCP port numbers.
113
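This 3-tuple abstraction maps directly onto code. A minimal sketch in Python (the class and field names are ours, not the paper's):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowID:
    """The 4-tuple flow identifier."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int

@dataclass(frozen=True)
class Packet:
    """A packet abstracted as the 3-tuple (f, s, t)."""
    flow: FlowID   # f: flow ID
    seq: int       # s: sequence number (assumed to advance by 1)
    ts: float      # t: arrival timestamp
```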
+ Let 𝑆𝑓 = {(𝑓 ,𝑠𝑖,𝑡𝑖) : 𝑖 ∈ [𝑁𝑓]} ⊆ 𝑆 be the set of packets corresponding to some flow 𝑓 , sorted by time 𝑡𝑖
116
+ in ascending order. We say the packets of flow 𝑓 are perfectly in-order if 𝑠𝑖+1 = 𝑠𝑖 + 1 for all 𝑖 in
117
+ [𝑁𝑓 − 1]. By commonly used definitions, the 𝑖th packet in flow 𝑓 is out-of-order if it has:
118
+ Definition 1 a lower sequence number than its predecessor in 𝑓 , 𝑠𝑖 < 𝑠𝑖−1.
119
+ Definition 2 a sequence number larger than that expected from its predecessor in 𝑓 , 𝑠𝑖 > 𝑠𝑖−1 + 1.
120
+ Definition 3 a smaller sequence number than the maximum sequence number seen in 𝑓 so far,
121
+ 𝑠𝑖 < max𝑗 ∈[𝑖−1] 𝑠𝑗.
122
+ When 𝑠𝑖 < 𝑠𝑖−1 in flow 𝑓 , we sometimes say an out-of-order event occurs at packet 𝑖 with respect
123
+ to Definition 1. Out-of-order events with respect to other definitions are similarly defined. Under
124
+ each definition, denote the number of out-of-order packets in flow 𝑓 as 𝑂𝑓 , a flow 𝑓 is said to be
125
+ out-of-order heavy if 𝑂𝑓 > 𝜀𝑁𝑓 for some small 𝜀 > 0.
126
+ In practice, none of these three definitions is a clear winner. Rather, different applications may
127
+ call for different metrics. From an algorithmic point of view, Definition 1 and Definition 2 are
128
+ essentially identical, in that detecting the out-of-order events only requires comparing adjacent
129
+ pairs of packets. An out-of-order event with respect to Definition 3, however, is far more difficult to
130
+ uncover, as looking at pairs of packets is no longer enough—the algorithm always has to record the
131
+ maximum sequence number (over a potentially large number of packets) in order to report such
132
+ events. In this paper, we focus on Definition 1 and show that easy modifications to the algorithms
133
+ can be effective for Definition 2.
134
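The algorithmic contrast between the definitions can be made concrete. The sketch below (Python, our illustration, not the paper's data-plane implementation) counts out-of-order events in one flow's arrival-ordered sequence numbers under all three definitions; note that only Definition 3 needs the running maximum.

```python
def count_out_of_order(seqs):
    """Count out-of-order events in one flow's sequence numbers,
    given in arrival order, under the three definitions."""
    d1 = d2 = d3 = 0
    max_seen = seqs[0] if seqs else None
    for prev, cur in zip(seqs, seqs[1:]):
        if cur < prev:          # Definition 1: lower than the predecessor
            d1 += 1
        if cur > prev + 1:      # Definition 2: beyond the expected successor
            d2 += 1
        if cur < max_seen:      # Definition 3: below the running maximum
            d3 += 1
        max_seen = max(max_seen, cur)
    return d1, d2, d3
```

For the arrival order 1, 2, 4, 3, 5 this returns (1, 2, 1): one dip below the predecessor, two gaps past the expected number, and one packet below the running maximum.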
+ 2.1.2
135
+ A strawman solution for identifying out-of-order heavy flows. A naive algorithm that identifies
136
+ out-of-order heavy flows would memorize, for every flow, the flow ID 𝑓 , the sequence number 𝑠 of
137
+ the latest arriving packet from 𝑓 when using Definition 1, and the number of out-of-order packets
138
+ 𝑜. When a new packet of 𝑓 arrives, we go to its flow record, and compare its sequence number 𝑠′
139
+ with 𝑠. If 𝑠′ < 𝑠, the new packet is out-of-order and we increment 𝑜 by 1.
140
147
+ For Definition 2, we simply save the expected sequence number 𝑠 + 1 of the next packet when
148
+ maintaining the flow record, and compare it to that of the new packet, according to Definition 2.
149
+ We see that different definitions only slightly alter the sequence numbers saved in memory, and
150
+ we always decide whether an out-of-order event has happened based on the comparison.
151
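A minimal sketch of this strawman in Python (ours; it keeps one record per flow, i.e., exactly the unbounded state that § 2.1.3 shows is unavoidable in the worst case):

```python
class StrawmanTracker:
    """Naive per-flow reordering tracker (Definition 1).
    One record per flow: [last sequence number, out-of-order count]."""
    def __init__(self):
        self.records = {}  # flow_id -> [last_seq, out_of_order_count]

    def process(self, flow_id, seq):
        """Process one packet; return True if it is out-of-order."""
        rec = self.records.get(flow_id)
        if rec is None:
            self.records[flow_id] = [seq, 0]
            return False
        out_of_order = seq < rec[0]   # Definition 1: lower than predecessor
        if out_of_order:
            rec[1] += 1
        rec[0] = seq                  # remember the latest arriving packet
        return out_of_order
```

For Definition 2, the record would instead store the expected sequence number 𝑠 + 1 and flag packets whose number exceeds it.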
+ 2.1.3
152
+ Memory lower bound for identifying out-of-order heavy flows. To show that identifying
153
+ out-of-order heavy flows is fundamentally expensive, we want to construct a worst-case packet
154
+ stream, for which detecting heavy reordering requires a lot of memory. For simplicity, we consider
155
+ the case where heavy reordering occurs in only one of the |F | flows, and let this flow be 𝑓 . If 𝑓
156
+ is also heavy in size, it suffices to use a heavy-hitter data structure to identify 𝑓 . Problems arise
157
+ when 𝑓 is not that heavy on any timescale, and yet is not small enough to be completely irrelevant.
158
+ A low-rate, long-lived flow fits such a profile. Unless given a lot of memory, a heavy-hitter data
159
+ structure is incapable of identifying 𝑓 . Moreover, since the packet inter-arrival times for a low-rate
160
+ flow are large, to see more than one packet from 𝑓 , the record of 𝑓 would need to remain in memory
161
+ for a longer duration, relative to other short-lived or high-rate flows.
162
+ Next we formalize this intuition, and show that given some flow 𝑓 , it is infeasible for a streaming
163
+ algorithm to always distinguish whether 𝑂𝑓 is large or not, with memory sublinear in the total
164
+ number of flows |F |, even with randomness and approximation.
165
+ Claim 1. Divide a stream with at most |F | flows into 𝑘 time-blocks 𝐵1, 𝐵2, . . . , 𝐵𝑘. It is guaranteed
166
+ that one of the following two cases holds:
167
+ (1) For any pair of blocks 𝐵𝑖 and 𝐵𝑗 with 𝑖 ≠ 𝑗, there does not exist a flow that appears in both 𝐵𝑖
168
+ and 𝐵𝑗.
169
+ (2) There exists a unique flow 𝑓 that appears in Θ(𝑘) blocks.
170
+ Then distinguishing between the two cases is hard for low-memory algorithms. Specifically, a
171
+ streaming algorithm needs Ω(min(|F |, (|F |/𝑘) log(1/𝛿))) bits of space to identify 𝑓 with probability at least
174
+ 1 − 𝛿, if 𝑓 exists.
175
+ Claim 1 follows from reducing the communication problem MostlyDisjoint stated in [10], by
176
+ treating elements of the sets as flow IDs in a packet stream.
177
+ Claim 1 implies the hardness of identifying out-of-order heavy flows, as the unique flow 𝑓 may
178
+ have many packets, but not be heavy enough for a heavy-hitter algorithm to detect it efficiently.
179
+ Deciding whether such a flow exists is already difficult, identifying it among other flows is at least
180
+ as difficult. Consequently, checking whether it has many out-of-order packets is difficult as well.
181
+ The same reduction also implies that detecting duplicated packets requires Ω(|F |) space. In
182
+ fact, Claim 1 corroborates the common perception that measuring performance metrics such as
183
+ round-trip delays, reordering, and retransmission in the data plane is generally challenging, as it is
184
+ hard to match tuples of packets that span a long period of time, with limited memory.
185
+ 2.2
186
+ Prefix-level reordering statistics
187
+ 2.2.1
188
+ Problem statement. Identifying out-of-order heavy flows is hard; fortunately, we do not
189
+ always need to report individual flows. Since reordering is typically a property of a network path,
190
+ and routing decisions are made at the prefix level, it is natural to focus on heavily reordered prefixes.
191
+ Throughout this paper, we consider 24-bit source IP prefixes, as they achieve a reasonable level of
192
+ granularity. The same methods apply if prefixes of a different length are more suitable in other
193
+ applications.
194
+ By common definitions of the flow ID, the prefix 𝑔 of a packet (𝑓 ,𝑠,𝑡) is encoded in 𝑓 . To simplify
195
+ notations, we think of a prefix 𝑔 as the set of flows with that prefix, and when context is clear, 𝑆
196
+ also refers to the set of all prefixes in the stream. Let 𝑂𝑔 = Σ𝑓 ∈𝑔 𝑂𝑓 be the number of out-of-order
198
201
+ packets in prefix 𝑔. A prefix 𝑔 is out-of-order heavy if 𝑂𝑔 > 𝜀𝑁𝑔 for some small 𝜀 > 0, where 𝑁𝑔 is
202
+ the number of packets in prefix 𝑔.
203
+ For localizing attacks and performance problems, it is not always sensible to catch prefixes with
204
+ the highest fraction of out-of-order packets. When a prefix is small, even a single out-of-order
205
+ packet would lead to a large fraction, but it might just be caused by a transient loss. In addition,
206
+ with the control plane being more computationally powerful yet less efficient in packet processing,
207
+ there is an apparent trade-off between processing speed and the amount of communication from
208
+ the data plane to the control plane. As a result, we also want to limit the communication overhead
209
+ incurred.
210
+ Therefore, for some 𝜀, 𝛼, 𝛽, our goals can be described as:
211
+ (1) Report prefixes 𝑔 with 𝑁𝑔 ≥ 𝛽 and 𝑂𝑔 > 𝜀𝑁𝑔.
212
+ (2) Avoid reports of prefixes with at most 𝛼 packets.
213
+ (3) Keep the communication overhead from the data plane to the control plane small.
214
+ 2.2.2
215
+ Bypassing memory lower bound. As a consequence of Claim 1, it is evidently infeasible
216
+ to study all flows from a prefix and aggregate all of that information to determine whether to
217
+ report the prefix. So why would reporting at the prefix level circumvent the lower bound? In
218
+ practice, packets are often reordered due to a congested or flaky link that causes lost, reordered,
219
+ or retransmitted packets at the TCP level. Therefore, flows traversing the same path at the same
220
+ time are positively correlated in their out-of-orderness. This effectively means that we only need
221
+ to study a few flows from a prefix to estimate the extent of reordering this prefix suffers. We state
222
+ the correlation assumption that all of our algorithms are based on as follows, and postpone its
223
+ verification to §4.1:
224
+ Assumption 1. Let 𝑓 be a flow chosen uniformly at random from all flows in prefix 𝑔. If 𝑁𝑔 > 𝛼,
225
+ and 𝑔 has at least two flows, (𝑂𝑔 − 𝑂𝑓)/(𝑁𝑔 − 𝑁𝑓) and 𝑂𝑓/𝑁𝑓 are positively correlated.
230
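To make the assumption concrete, both quantities can be computed directly from per-flow counts. A sketch (Python; the textbook Pearson estimate is our choice of illustration, not the paper's measurement methodology):

```python
def assumption_pairs(flows):
    """flows: list of (N_f, O_f) per-flow packet / out-of-order counts
    for one prefix. Returns, for each flow f, the pair
    ((O_g - O_f) / (N_g - N_f), O_f / N_f) whose positive correlation
    the assumption posits."""
    N_g = sum(n for n, _ in flows)
    O_g = sum(o for _, o in flows)
    return [((O_g - o) / (N_g - n), o / n) for n, o in flows]

def pearson(pairs):
    """Plain Pearson correlation coefficient of (x, y) pairs."""
    m = len(pairs)
    mx = sum(x for x, _ in pairs) / m
    my = sum(y for _, y in pairs) / m
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x, _ in pairs) ** 0.5
    sy = sum((y - my) ** 2 for _, y in pairs) ** 0.5
    return cov / (sx * sy)
```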
+ 3
231
+ DATA-PLANE DATA STRUCTURES FOR OUT-OF-ORDER MONITORING
232
+ At a high level, a data-plane algorithm generates reports of flows with potentially heavy packet
233
+ reordering on the fly, and a simple program that sits on the control plane parses through the reports
234
+ to return their prefixes. Each report includes the prefix, the number of packets monitored, and
235
+ the number of out-of-order packets of a suspicious flow. At the end of the time interval, we can
236
+ also scan the data-plane data structure to generate reports for highly-reordered flows remaining in
237
+ memory. On seeing reports, a control-plane program simply aggregates counts from reports of the
238
+ same prefix, and outputs a prefix when its count exceeds a threshold.
239
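The control-plane side is simple enough to sketch in full (Python; the report fields and the threshold rule follow the description above, with 𝜀 and 𝛽 as assumed parameter values):

```python
from collections import defaultdict

def heavy_reordering_prefixes(reports, eps=0.05, beta=1000):
    """reports: iterable of (prefix, packets_monitored, out_of_order)
    tuples emitted by the data plane. Returns prefixes whose aggregated
    counts satisfy N_g >= beta and O_g > eps * N_g."""
    packets = defaultdict(int)
    ooo = defaultdict(int)
    for prefix, n, o in reports:
        packets[prefix] += n
        ooo[prefix] += o
    return {g for g in packets
            if packets[g] >= beta and ooo[g] > eps * packets[g]}
```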
+ One of the challenges of designing the data-plane data structures is working with complex
240
+ real-world traffic. A huge number of small flows or prefixes only represent a small fraction of the
241
+ traffic, while a small number of very large flows or prefixes makes up a large fraction. Moreover,
242
+ the distribution of flows within each prefix is quite varied.
243
+ In the data plane, we keep state at the flow level, and consider prefix information in allocating
244
+ memory. Assuming a positive correlation between the out-of-orderness of a prefix and that of the
245
+ flows from that prefix, we do not have to monitor all flows to gain enough information about a
246
+ prefix, which leads us to two different threads of thought. In § 3.1, we mainly consider heavy flows,
247
+ and monitor them for long periods each time a flow enters the memory, while in § 3.2, we sample
248
+ as many flows as possible, regardless of their sizes, and for a constant number of packets at a time.
249
+ § 3.3 introduces a hybrid scheme that combines these approaches.
250
253
+ Fig. 2. A modification of PRECISION for tracking out-of-order packets.
254
+ 3.1
255
+ Track heavy flows over long periods
256
+ 3.1.1
257
+ Track reordering using a heavy-hitter data structure. To capture out-of-orderness in heavy
258
+ flows, we want a data structure that is capable of simultaneously tracking heaviness and reordering.
259
+ The SpaceSaving [14] data structure fits naturally for the task, as we can maintain extra state
260
+ for each flow record, while the data structure gradually identifies the flows with heavy volume
261
+ by keeping estimates of their traffic counts. However, when overwriting a flow record to admit
262
+ a new flow, SpaceSaving needs to go over all entries to locate the flow with the smallest traffic
263
+ count, which makes it infeasible for the data plane due to the constraint on the number of memory
264
+ accesses per packet.
265
+ Thus, we opt for PRECISION [3], the data-plane adaptation of SpaceSaving, which checks only a
266
+ small number of 𝑑 entries when overwriting a flow record. We emphasize that the specifics about
267
+ how PRECISION works are not, in fact, important in this context. It is enough to bear in mind that
268
+ with a suitable data-plane friendly heavy-hitter algorithm, tracking reordering is exactly the same
269
+ as in the strawman solution (§ 2.1.2), but applied only to heavy flows.
270
+ Figure 2 shows the modified PRECISION for tracking out-of-order packets using 𝑑 stages. To set
271
+ the stage for later discussions, throughout this paper, we refer to the unit of memory allocated to
272
+ keep one flow record as a bucket. Depending on the algorithm, a bucket might include different
273
+ information about a flow. Here a bucket stores the flow ID 𝑓 , the estimated size of 𝑓 , the sequence
274
+ number of the last arriving packet from 𝑓 , and the number of out-of-order packets of 𝑓 .
275
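For illustration, here is a simplified single-stage software analogue of such a bucket update (ours, not the 𝑑-stage PRECISION pipeline; the probability-1/(𝑐+1) eviction is a stand-in for PRECISION's actual replacement logic, which the text above deliberately abstracts away):

```python
import random

NUM_BUCKETS = 1024

# Each bucket: [flow_id, est_size, last_seq, out_of_order] or None.
buckets = [None] * NUM_BUCKETS

def process_packet(prefix, flow_id, seq):
    idx = hash(prefix) % NUM_BUCKETS   # hash the prefix, not the flow ID (§ 3.1.2)
    b = buckets[idx]
    if b is None:
        buckets[idx] = [flow_id, 1, seq, 0]
    elif b[0] == flow_id:
        b[1] += 1
        if seq < b[2]:                 # Definition 1 check against the last packet
            b[3] += 1
        b[2] = seq
    else:
        # Probabilistic replacement in the spirit of PRECISION: the larger
        # the incumbent's size estimate, the harder it is to evict.
        if random.random() < 1.0 / (b[1] + 1):
            buckets[idx] = [flow_id, b[1] + 1, seq, 0]
```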
+ 3.1.2 Allocate memory by prefix. On the one hand, we want to track the heavy flows; on the other hand, we do not want some very large prefix to have its many heavy flows dominate the data structure. To this end, we assign flows from the same prefix to the same set of buckets, by hashing prefixes instead of flow IDs, a technique we use in all our algorithms. In a PRECISION data structure with 𝑑 stages, at the end of the stream, at most the 𝑑 heaviest flows from each prefix 𝑔 would remain in memory. Doing so effectively frees up buckets that used to be taken by a few prefixes with many heavy flows, and allows more prefixes to have their heaviest flows measured.
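+ The prefix-based bucket assignment above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names and the use of SHA-256 as the hash are our own assumptions, chosen only to show that hashing the /24 prefix (rather than the full flow ID) sends all flows of a prefix to the same bucket in each stage.

```python
import hashlib

def prefix24(ip: str) -> str:
    # Take the 24-bit prefix of a dotted-quad IPv4 address.
    return ".".join(ip.split(".")[:3])

def bucket_index(ip: str, num_buckets: int, stage: int = 0) -> int:
    # Hash the prefix, not the full flow ID, so all flows from the
    # same prefix map to the same bucket within a given stage.
    # (SHA-256 is an illustrative stand-in for a data-plane hash.)
    h = hashlib.sha256(f"{stage}:{prefix24(ip)}".encode()).digest()
    return int.from_bytes(h[:8], "big") % num_buckets

# Two flows from the same /24 prefix share a bucket:
assert bucket_index("10.1.2.3", 256) == bucket_index("10.1.2.77", 256)
```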
+ 3.2 Sample flows over short periods
+ Previously, we worked with the limited memory essentially by having each bucket track one heavy flow. However, as we shall see in § 4.1.1, some prefixes do not have any large flow, and some of these prefixes experience heavy reordering. Unless the reordering is concentrated on large flows, the method of tracking reordering only for heavy hitters inevitably suffers from poor performance. Now that large flows are not representative enough in terms of out-of-orderness, we have to sieve through more flows regardless of their sizes, and with limited memory. Rather than one bucket per flow, the main idea is to use one bucket to check multiple flows in turn.
+ [Figure 2 residue: an example packet with srcIP prefix A, flow ID 10, timestamp 𝑡 = 100, and SEQ# = 5.]
+ Detecting TCP Packet Reordering in the Data Plane
+ 3.2.1 Flow sampling with buckets. Under the strict memory access constraints, we again opt for a hash-indexed array as a natural choice of data structure, where each row in the array corresponds to a bucket, and all buckets behave independently. Similar to § 3.1, we use the IP prefix as the hash key, so that all flows from the same prefix are assigned to the same bucket, which effectively prevents a prefix with a huge number of flows from consuming many buckets. Therefore, we fix a bucket 𝔟, and consider the substream of packets hashed to 𝔟. When a packet (𝑓 ,𝑠,𝑡) arrives at 𝔟, there are three cases:
+ (1) If 𝔟 is empty, we always admit the packet; that is, we save its flow ID 𝑓 , sequence number 𝑠, and timestamp 𝑡 in 𝔟, together with the number of packets 𝑛 and the number of out-of-order packets 𝑜, both initialized to 0.
+ (2) If flow 𝑓 ’s record is already in 𝔟, we update the record as in the strawman solution (§ 2.1.2), and update the timestamp in memory to 𝑡.
+ (3) If 𝔟 is occupied by another flow’s record (𝑓 ′,𝑠′,𝑡 ′,𝑛′,𝑜′), we only admit 𝑓 if 𝑓 ′ has been monitored in memory for a sufficient period specified by parameters 𝑇 and 𝐶, or the prefix of 𝑓 ′ could be potentially heavily reordered with respect to another parameter 𝑅. That is, 𝑓 overwrites 𝑓 ′ with record (𝑓 ,𝑠,𝑡,𝑛 = 0,𝑜 = 0) only if one of the following holds:
+ (a) 𝑓 ′ is stale: 𝑡 − 𝑡 ′ > 𝑇.
+ (b) 𝑓 ′ has been hogging 𝔟 for too long: 𝑛′ > 𝐶.
+ (c) 𝑓 ′ might belong to a prefix with heavy reordering: 𝑜′ > 𝑅.
+ In Case 3c, the algorithm sends a 3-tuple report (𝑔′,𝑛′,𝑜′) to the control plane, where 𝑔′ is the prefix of flow 𝑓 ′. On seeing reports from the data plane, a simple control-plane program keeps a tally for each reported prefix 𝑔. Let {(𝑔,𝑛𝑖,𝑜𝑖)}𝑖=1..𝑟 be the set of all reports corresponding to a prefix 𝑔. The control-plane program outputs 𝑔 if ∑𝑖=1..𝑟 𝑛𝑖 ≥ 𝛼, for the same 𝛼 in § 2.2.1. In the following sections, we refer to the data-plane component together with the simple control-plane program as the flow-sampling algorithm.
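+ The per-packet bucket update (cases 1–3 above) and the control-plane tally can be sketched as below. This is a simplified simulation, not the paper's data-plane code: the `Bucket` fields, the positional parameters 𝑇, 𝐶, 𝑅, and the use of `seq < b.seq` as a stand-in for the strawman reordering check (§ 2.1.2) are all our assumptions.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Bucket:
    fid: int = -1   # flow ID (-1 marks an empty bucket)
    seq: int = 0    # last sequence number seen
    ts: float = 0.0 # timestamp of the last packet
    n: int = 0      # packets observed for this flow after admission
    o: int = 0      # out-of-order packets observed

def update(b: Bucket, fid, prefix, seq, ts, T, C, R, reports):
    """One-bucket update of the flow-sampling algorithm (a sketch);
    `reports` stands in for the report channel to the control plane."""
    if b.fid == -1:                      # case (1): empty bucket
        b.fid, b.seq, b.ts, b.n, b.o = fid, seq, ts, 0, 0
    elif b.fid == fid:                   # case (2): same flow
        if seq < b.seq:                  # simplified reordering check
            b.o += 1
        else:
            b.seq = seq
        b.n += 1
        b.ts = ts
    else:                                # case (3): hash collision
        stale = ts - b.ts > T            # (a) inter-arrival timeout
        hogging = b.n > C                # (b) packet-count threshold
        reordered = b.o > R              # (c) possible heavy reordering
        if reordered:
            reports.append((prefix, b.n, b.o))
        if stale or hogging or reordered:
            b.fid, b.seq, b.ts, b.n, b.o = fid, seq, ts, 0, 0

def heavy_prefixes(reports, alpha):
    # Control plane: output prefixes whose reported packet counts sum to >= alpha.
    tally = defaultdict(int)
    for g, n, o in reports:
        tally[g] += n
    return {g for g, total in tally.items() if total >= alpha}
```

A possible run: admit flow 1, see one out-of-order packet, then a collision with flow 2 evicts flow 1 and reports its prefix.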
+ Lazy expiration of flow records in memory. Due to memory access constraints, many data-plane algorithms lazily expire records in memory on collisions with other flows, as opposed to actively searching for stale records in the data structure. We again adopt the same technique in the algorithm above, though here it is more nuanced. We could imagine a variant of the algorithm where a flow is monitored for up to 𝐶 + 1 packets at a time. That is, when the (𝐶 + 1)st packet arrives, we check whether to report this flow, and evict its record. Compared to this variant, lazy expiration helps in preventing a heavy flow from being admitted into the data structure consecutively, so that the heavy flow can be evicted before an integer multiple of (𝐶 + 1) packets, should another flow appear in the meantime.
+ Robustness of flow sampling. For the flow-sampling method to be effective, the data structure needs to sample as many flows as possible. Therefore, it is not desirable to keep a large flow in memory when we have already seen many of its packets, and learned enough information about its packet reordering. This means that the packet count threshold 𝐶 should not be too large. Neither do we want to keep a flow, regardless of its size, that has long been finished. We can eliminate such cases by setting a small inter-arrival timeout 𝑇.
+ The question, then, is how small these parameters can be. Real-world traffic can be bursty, meaning that sometimes there are packets from the same flow arriving back-to-back. In this case, even if we overwrite the existing flow record on every hash collision (𝑇 = 0 and 𝐶 = 1), the algorithm still generates meaningful samples. When the memory is not too small compared to the number of prefixes, and hash collisions are rare, the algorithm might even have good performance. However, setting small 𝑇 > 0 and 𝐶 > 1 makes the algorithm more robust against worst-case streams.
+ Zheng, Yu and Rexford
+ Zheng, Yu and Rexford
352
+ Consider a stream of packets where no adjacent pairs of packets come from the same flow. On
353
+ seeing such a stream, a flow-sampling algorithm that overwrites existing records on every hash
354
+ collision with another flow will no doubt collect negligible samples. In contrast, small 𝑇 > 0 and
355
+ 𝐶 > 1 allow a small period of time for a flow in memory to be monitored, and hence gives a better
356
+ chance of capturing packet reordering.
357
+ 3.2.2 Performance guarantee. In this section, we analyze the number of times a flow with a certain size is sampled. Consider a prefix 𝑔 when the hash function is fixed. Let 𝔟 be the bucket prefix 𝑔 is hashed to, and we know all the flows as well as the prefixes that are hashed to 𝔟. With a slight abuse of notation, we write 𝑔 ∈ 𝔟 when the bucket with index ℎ(𝑔) is 𝔟. We also write 𝑓 ∈ 𝔟 when 𝑓 ’s prefix is hashed to 𝔟. To capture the essence of the flow-sampling algorithm without excessive details, we make the following assumptions:
+ (1) Each packet in 𝑆 is sampled i.i.d. from distribution (𝑝𝑓 )𝑓 ∈F, that is, each packet belongs to some flow 𝑓 ∈ F independently with probability 𝑝𝑓 . Consequently, each packet belongs to some prefix 𝑔 independently with probability 𝑝𝑔 = ∑𝑓 ∈𝑔 𝑝𝑓 .
+ (2) Let 𝑝𝑓 |𝔟 = 𝑝𝑓 / ∑𝑓 ′∈𝔟 𝑝𝑓 ′; 𝑝𝑔|𝔟 can be similarly defined. Only a flow 𝑓 with 𝑝𝑓 |𝔟 greater than some 𝑝min will get checked, where we think of 𝑝min as a fixed threshold depending on the inter-arrival time threshold 𝑇 and distribution (𝑝𝑓 )𝑓 ∈F.
+ (3) A flow is checked for exactly 𝐶 + 1 packets at a time.
+ Note that Assumption (2) is a way to approximate the effect of 𝑇, where we assume a low-frequency flow would soon be overwritten by some other flow on hash collision. In contrast to Assumption (3), the flow-sampling algorithm does not immediately evict a flow record with 𝐶 + 1 packets, if there is no hash collision. In this way, though 𝑓 is monitored beyond its original 𝐶 + 1 packets, once a hash collision occurs, the collided flow would seize 𝑓 ’s bucket. By imposing Assumption (3), the heavier flows would likely benefit by getting more checks, while the smaller flows would likely suffer. Empirically, the eviction scheme of the flow-sampling algorithm (§ 3.2.1) achieves better performance in comparison to Assumption (3).
+ Lemma 3.1. Given the total length of stream |𝑆| and distribution (𝑝𝑓 )𝑓 ∈F, with the assumptions above, for a fixed hash function ℎ and any 𝜀, 𝛿 ∈ (0, 1), a prefix 𝑔 in bucket 𝔟 is checked at least (1 − 𝛿)·𝑡₁·𝑝𝑔|𝔟 times with probability at least 1 − exp(−𝑝min·𝑡₁·𝐶·𝐹𝔟·𝜀²/24) − exp(−𝜀²·|𝑆|·(∑𝑔∈𝔟 𝑝𝑔)/3) − exp(−𝛿²·𝑡₁·𝑝𝑔|𝔟/2), where 𝑡₁ = ⌊|𝑆|·(∑𝑔∈𝔟 𝑝𝑔) / ((1 + 𝜀/2)·𝐶·𝐹𝔟)⌋ and 𝑝𝑔|𝔟 = (∑𝑓 ∈𝑔:𝑝𝑓 |𝔟≥𝑝min 𝑝𝑓 ) / (∑𝑓 ′∈𝔟 𝑝𝑓 ′).
+ Proof. Let 𝑆𝔟 be the substream of 𝑆 that is hashed to 𝔟. Given |𝑆|, the length |𝑆𝔟| of substream 𝑆𝔟 is a random variable with E|𝑆𝔟| = |𝑆|·∑𝑔∈𝔟 𝑝𝑔; then by the Chernoff bound,
+ P[|𝑆𝔟| < (1 − 𝜀)·E|𝑆𝔟|] < exp(−𝜀²·E|𝑆𝔟|/3) = exp(−𝜀²·|𝑆|·(∑𝑔∈𝔟 𝑝𝑔)/3).   (1)
+ Let 𝑡 be a random variable denoting the number of checks in 𝔟. Let random variable 𝑋𝑖,𝑗 be the number of packets hashed to 𝔟 after seeing the 𝑗th packet till receiving the (𝑗 + 1)st packet from the currently monitored flow, where 𝑖 ∈ [𝑡] and 𝑗 ∈ [𝐶]. The 𝑋𝑖,𝑗 are independent geometric random variables, and 𝑋𝑖,𝑗 ∼ Geo(𝑝𝑓𝑖 |𝔟), where 𝑓𝑖 is the flow under scrutiny during the 𝑖th check; by Assumption (2), 𝑝𝑓𝑖 |𝔟 ≥ 𝑝min. Next we look at 𝑋 = ∑𝑖=1..𝑡 ∑𝑗=1..𝐶 𝑋𝑖,𝑗, the length of the substream in 𝔟 after 𝑡 checks:
+ E𝑋 = ∑𝑖=1..𝑡 ∑𝑗=1..𝐶 E𝑋𝑖,𝑗 = ∑𝑖=1..𝑡 ∑𝑓 ∈𝔟:𝑝𝑓 |𝔟≥𝑝min 𝑝𝑓 |𝔟 · (𝐶/𝑝𝑓 |𝔟) = 𝑡𝐶𝐹𝔟,   (2)
+ where 𝐹𝔟 = |{𝑓 ∈ 𝔟 | 𝑝𝑓 |𝔟 ≥ 𝑝min}|. By the Chernoff-type tail bound for independent geometric random variables (Theorem 2.1 in [8]), for any 𝜀 ∈ (0, 1),
+ P[𝑋 > (1 + 𝜀/2)·E𝑋] < exp(−𝑝min·E𝑋·(𝜀/2 − ln(1 + 𝜀/2))) ≤ exp(−𝑝min·𝑡𝐶𝐹𝔟·𝜀²/24).   (3)
+ Let 𝑡₁ be the largest 𝑡 such that (1 + 𝜀/2)·E𝑋 < E|𝑆𝔟|; we have 𝑡₁ = ⌊|𝑆|·(∑𝑔∈𝔟 𝑝𝑔) / ((1 + 𝜀/2)·𝐶·𝐹𝔟)⌋. Consider two events:
+ (i) The number of checks 𝑡 on seeing 𝑆𝔟 is less than 𝑡₁. Applying (3) to 𝑡₁, we have that with probability at most exp(−𝑝min·𝑡₁·𝐶·𝐹𝔟·𝜀²/24), after seeing (1 − 𝜀)·E|𝑆𝔟| packets, the number of checks is at most 𝑡₁. Together with (1), by the union bound,
+ P[𝑡 < 𝑡₁] < exp(−𝑝min·𝑡₁·𝐶·𝐹𝔟·𝜀²/24) + exp(−𝜀²·|𝑆|·(∑𝑔∈𝔟 𝑝𝑔)/3).   (4)
+ (ii) Prefix 𝑔 is checked fewer than (1 − 𝛿)·𝑡₁·𝑝𝑔|𝔟 times. By the Chernoff bound, this event holds with probability at most exp(−𝛿²·𝑡₁·𝑝𝑔|𝔟/2).
+ The lemma follows from applying the union bound over these two events. □
+ Counterintuitively, the proof of Lemma 3.1 suggests hash collisions are in fact harmless in the flow-sampling algorithm. To see that, suppose we add another heavy flow to bucket 𝔟; E|𝑆𝔟| would increase by some factor 𝑥, which means E𝑋 would increase by the same factor. Since 𝐹𝔟 would only increase by 1, if 𝐹𝔟 is large enough, by (2), 𝑡 would also increase by roughly a factor of 𝑥, while 𝑝𝑓 |𝔟 decreases by roughly a factor of 𝑥. Then 𝑡 · 𝑝𝑓 |𝔟 is about the same with or without the added heavy flow. Therefore, colliding with heavy flows does not decrease the number of checks of a smaller flow, as long as the total number of flows in a bucket is large enough, which is usually the case in practice.
+ 3.2.3 Decrease the number of false positives. Since the parameters of the flow-sampling algorithm are chosen so that many flows are sampled, and some might get sampled multiple times, it is possible for the algorithm to capture many out-of-order events, but not every one of them indicates that the prefix is out-of-order heavy. After all, there is only a weak correlation between the out-of-orderness of flows and that of their prefixes, not to mention that even if the correlation were stronger, we are inferring the extent of reordering on a scale much larger than the snippets of flows that we observe. In such cases, the algorithm could output many false positives.
+ To reduce the number of false positives, we could imagine feeding the control plane more information, so that the algorithm can make a more informed decision about whether the fraction of out-of-order packets exceeds 𝜀, for each reported prefix. To this end, we modify the flow-sampling algorithm to always report before eviction, even if the number of out-of-order packets is below threshold 𝑅. Again denoting {(𝑔,𝑛𝑖,𝑜𝑖)}𝑖=1..𝑟 as the set of all reports corresponding to a prefix 𝑔, the control plane outputs 𝑔 if ∑𝑖=1..𝑟 𝑛𝑖 ≥ 𝛼 and (∑𝑖=1..𝑟 𝑜𝑖)/(∑𝑖=1..𝑟 𝑛𝑖) > 𝑐 · 𝜀, for some tunable parameter 0 < 𝑐 ≤ 1. The parameter 𝑐 compensates for the fact that we only monitor a subset of the traffic, so the exact fraction of out-of-order packets we observe might not directly align with 𝜀.
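+ The "report all" control-plane decision above can be sketched as follows; a minimal sketch, assuming `reports` is a list of (prefix, 𝑛, 𝑜) tuples and that `alpha`, `eps`, and `c` play the roles of 𝛼, 𝜀, and 𝑐 from the text.

```python
from collections import defaultdict

def report_all_output(reports, alpha, eps, c=0.5):
    """Output a prefix g if its reported packets sum to >= alpha and
    the observed out-of-order fraction exceeds c * eps (see § 3.2.3)."""
    n_sum = defaultdict(int)
    o_sum = defaultdict(int)
    for g, n, o in reports:
        n_sum[g] += n
        o_sum[g] += o
    return {g for g in n_sum
            if n_sum[g] >= alpha and o_sum[g] / n_sum[g] > c * eps}
```

With an out-of-order fraction exactly at 𝜀, the prefix is kept for 𝑐 < 1 but dropped at 𝑐 = 1, which is the intended compensation for monitoring only a subset of the traffic.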
+ 3.3 Separate large flows
+ When heavy reordering is concentrated in heavy flows, PRECISION (§ 3.1) performs well. The flow-sampling algorithm (§ 3.2.1) generally has good performance, regardless of where the reordering occurs. However, compared to PRECISION, the flow-sampling algorithm generates more false positives, and sends more reports to the control plane. We can reduce the number of false positives (§ 3.2.3), but doing so leads to sending even more reports.
+ To combine the best of both approaches, we introduce a hybrid scheme, where the packets first go through a heavy-hitter data structure, and the array only admits flows whose prefixes are not being monitored in the heavy-hitter data structure. On the one hand, the array component makes the hybrid scheme robust when reordering is no longer concentrated in heavy flows. On the other hand, by keeping some heavy flows in the heavy-hitter data structure, the hybrid scheme avoids repeatedly admitting large flows into the array, hence potentially reducing the number of reports sent to the control plane. Moreover, by monitoring heavy flows in a more continuous manner, the hybrid scheme extracts more accurate reordering statistics for heavy flows, which reduces the number of false positives at the prefix level.
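+ The dispatch logic of the hybrid scheme can be sketched as below. This is a deliberately toy stand-in: the two components are stubbed as dictionaries keyed by prefix with a fixed capacity, whereas the real design uses PRECISION and the flow-sampling array; only the routing rule (packets whose prefix is already monitored by the heavy-hitter part bypass the array) reflects the text.

```python
class HybridScheme:
    """Sketch of the hybrid dispatch in § 3.3 (stubbed components)."""

    def __init__(self, heavy_capacity=2):
        self.heavy = {}    # prefix -> packet count (stand-in for PRECISION)
        self.array = {}    # prefix -> packet count (stand-in for the array)
        self.heavy_capacity = heavy_capacity

    def process(self, prefix):
        if prefix in self.heavy:
            # Prefix already monitored as heavy: never enters the array.
            self.heavy[prefix] += 1
        elif len(self.heavy) < self.heavy_capacity:
            # Simplification: admit-on-space; PRECISION instead evicts
            # probabilistically based on estimated counts.
            self.heavy[prefix] = 1
        else:
            # Fall through to the flow-sampling array.
            self.array[prefix] = self.array.get(prefix, 0) + 1
```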
+ In any practical setting, the correct memory allocation between the heavy-hitter data structure and the array in the hybrid scheme depends on the workload properties: the relationship of flows to prefixes, the heaviness of flows and prefixes, and where the reordering actually occurs. Next, we examine how these algorithms behave under real-world workloads.
+ 4 EVALUATION
+ This section presents both measurement results and performance evaluations. First, we show several traffic traits that drive our algorithm design (§ 4.1). Next, we evaluate the flow-sampling algorithm in § 4.2, and the hybrid scheme in § 4.3, by running a Python simulator on a 5-minute real-world traffic trace. We recognize that the optimal parameters for our algorithms are often workload dependent. Therefore, we do not attempt to always find the optimum; instead, we show in § 4.2 and § 4.3 that arbitrarily chosen parameters already give good performance. Finally, in § 4.4, we see that the parameters we used previously for evaluations are indeed representative, and that the algorithms are robust against small perturbations.
+ 4.1 Traffic workload characterization
+ For all of our measurement and evaluation, we use a 5-minute anonymized packet trace, collected ethically from a border router on a university campus network. This study has been conducted with the necessary approvals from our university, including its Institutional Review Board (IRB). Note that only packets with payloads are relevant for our application, as TCP sequence numbers must advance for our algorithms to detect reordering events. We therefore preprocess the trace to only contain flows from external servers to campus hosts, with the rationale that these senders are more likely to generate continuous streams of traffic. After preprocessing, the trace consists of 82,359,630 packets, which come from 546,126 flows and 17,097 24-bit source IP prefixes.
+ 4.1.1 Heavy-tailed traffic volume and out-of-orderness. It has long been observed that in real-world traffic, a small fraction of flows and prefixes account for a large fraction of the traffic (Figure 3a). Out-of-orderness in prefixes is similarly heavy-tailed; only a small fraction of prefixes are out-of-order heavy (Figure 3b). If heavy reordering happened to occur in heavy flows and prefixes, detecting heavy reordering would be easy, by solely focusing on large flows and prefixes using heavy-hitter data structures. However, what happens in reality is quite the opposite. Figure 4 shows the wide variation of flow sizes in prefixes with heavy reordering, and the sizes of such prefixes can be orders of magnitude apart. Thus, by zooming in on large flows and prefixes, we would inevitably miss out on many prefixes of interest without any large flow.
+ Fortunately, to report a prefix with a significant amount of reordering, we need not measure every flow in that prefix, as flows in the same prefix have some correlation in their out-of-orderness. As it turns out, the fraction of out-of-order packets in a prefix is positively correlated with that of a flow within the prefix, which we verify next.
+ [Figure 3 residue: three CDF panels. (a) Flow and prefix size distributions are heavy-tailed; a small fraction of flows and prefixes account for a large fraction of the traffic. (b) Out-of-order heavy prefixes are rare; here we consider prefixes with at least 𝛽 = 2⁷ packets, and 𝜀₁ = 0.01, 𝜀₂ = 0.02 (§ 2.2.1) for Definitions 1 and 2 respectively. (c) In-order packets tend to have smaller inter-arrival times, while out-of-order events defined by Definition 1 exhibit the highest inter-arrival times.]
+ Fig. 3. Heavy-tailed distributions in real-world workload.
+ [Figure 4 residue: violin plot; the x-axis is the rank in the number of packets in a 24-bit IP prefix (ranks 5 through 5110 shown), the y-axis is the number of packets in a flow, and color indicates the number of packets in a prefix.]
+ Fig. 4. A violin of rank 𝑟 shows the flow-size distribution of the 𝑟-th largest prefix in the trace, and each violin corresponds to a heavily reordered prefix with at least 𝛽 = 2⁷ packets, using Definition 2 with 𝜀 = 0.02. All violins are scaled to the same width, and colors indicate the prefix size. When a prefix consists of flow(s) with only one size, its violin degenerates into a horizontal segment. We see that many prefixes do not have any large flow. And the many prefixes beyond rank 22644 (not shown in plot) consist only of small flows.
+ 4.1.2 Correlation among flows with the same prefix. Let 𝑓 be a flow drawn uniformly at random from a set of flows. Let 𝑋 be the random variable representing the fraction of out-of-order packets in flow 𝑓 , i.e., 𝑋 = 𝑂𝑓 /𝑁𝑓 . Denote by 𝑔 the prefix of flow 𝑓 , and let 𝑌 be the random variable denoting the fraction of out-of-order packets among all flows in prefix 𝑔 excluding 𝑓 , that is, 𝑌 = (𝑂𝑔 − 𝑂𝑓 )/(𝑁𝑔 − 𝑁𝑓 ), where 𝑁𝑔 is the number of packets in prefix 𝑔. To ensure that 𝑁𝑔 > 𝑁𝑓 , the prefixes we sample from must have at least two flows. Since we are less interested in small prefixes, we focus only on prefixes of size greater than 𝛼. We use the Pearson correlation coefficient (PCC) to show that 𝑋 and 𝑌 are positively correlated, which implies that the out-of-orderness of a flow 𝑓 is statistically representative of other flows in the prefix of 𝑓 . Essentially a normalized version of Cov(𝑋,𝑌), the PCC always lies in the interval [−1, 1], and a positive PCC indicates a positive linear correlation. Lacking a better reason to believe the correlation between 𝑋 and 𝑌 is of higher order, we shall see that the PCC suffices for our analysis.
+ Let 𝑆 be a set of flows whose prefixes are of size greater than 𝛼, and have at least two flows. We compute the PCC as follows:
+ (1) Draw 𝑛 flows from 𝑆, independently and uniformly at random.
+ (2) For each of the 𝑛 flows 𝑓𝑖, let 𝑥𝑖 = 𝑂𝑓𝑖 /𝑁𝑓𝑖 and 𝑦𝑖 = (∑𝑓 ′∈𝑔,𝑓 ′≠𝑓𝑖 𝑂𝑓 ′) / (∑𝑓 ′∈𝑔,𝑓 ′≠𝑓𝑖 𝑁𝑓 ′).
+ (3) The PCC is 𝑟 = (∑𝑖=1..𝑛 (𝑥𝑖 − x̄)(𝑦𝑖 − ȳ)) / (√(∑𝑖=1..𝑛 (𝑥𝑖 − x̄)²) · √(∑𝑖=1..𝑛 (𝑦𝑖 − ȳ)²)), where x̄ = (1/𝑛)·∑𝑖=1..𝑛 𝑥𝑖 and ȳ = (1/𝑛)·∑𝑖=1..𝑛 𝑦𝑖.
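+ Steps (2)–(3) above amount to the standard sample PCC, which can be computed directly; a minimal sketch, with `xs` and `ys` standing for the samples {𝑥𝑖} and {𝑦𝑖}.

```python
import math

def pcc(xs, ys):
    """Pearson correlation coefficient of two equal-length samples,
    following steps (2)-(3): normalized covariance, always in [-1, 1]."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - xbar) ** 2 for x in xs))
    sy = math.sqrt(sum((y - ybar) ** 2 for y in ys))
    return cov / (sx * sy)
```

A perfectly linear positive relationship yields 1.0, a perfectly negative one yields −1.0.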
+ We perform 𝑚 = 100 tests on 𝑆, generating 𝑚 PCCs {𝑟𝑗}𝑗=1..𝑚 with 𝑛 = 5000 and 𝛼 = 2⁴. Using Definition 1, we obtain the average PCC 𝜇₁({𝑟𝑗}) = 0.2348, with variance 𝜎₁({𝑟𝑗}) = 0.0524. Definition 2 of reordering yields even larger PCCs, with 𝜇₂({𝑟𝑗}) = 0.4028 and 𝜎₂({𝑟𝑗}) = 0.0189.
+ We conclude that there is a positive correlation between 𝑋 and 𝑌. Moreover, for reasons not yet clear, the correlation is stronger when using Definition 2. Since we rely heavily on the correlation assumption in all of our algorithms, this suggests that the performance of the algorithms might be worse under Definition 1.
+ 4.1.3 Inter-arrival time of packets within a flow. We also study the inter-arrival time of packets within a flow to understand how efficient the flow-sampling algorithm can be. Due to TCP windowing dynamics, where the sender transmits a window of data and then waits for acknowledgments, in-order packets tend to have small inter-arrival times. Depending on the definition, reordering can be a result of gaps in the transmission of non-consecutive packets (Definition 2), or worse yet, the retransmissions of lost packets (Definition 1), which often lead to larger inter-arrival times.
+ Indeed, Figure 3c shows that the inter-arrival times of out-of-order packets under Definition 2 tend to be smaller than those of out-of-order packets under Definition 1, with the inter-arrival times of in-order packets being the smallest. This implies that, to detect the reordering events of Definition 2, the flow-sampling algorithm (§ 3.2) can afford to use a shorter waiting period 𝑇.
+ 4.2 Evaluate flow sampling
+ Following § 4.1.2 and § 4.1.3, capturing out-of-order heavy prefixes under Definition 1 appears to be more difficult. Next, we show that even when using this more difficult definition of reordering, the flow-sampling algorithm is extremely memory efficient. We start by introducing the three metrics we use throughout this section to evaluate our algorithms.
+ Let Ĝ denote the set of prefixes output by an algorithm A.
+ • Let 𝐺≥𝛽 = {𝑔∗ ∈ 𝑆 | 𝑁𝑔∗ ≥ 𝛽, 𝑂𝑔∗ > 𝜀 ∑𝑔∈𝑆 𝑂𝑔} be the ground-truth set of heavily reordered prefixes with at least 𝛽 packets. Define the accuracy 𝐴 of algorithm A to be the fraction of ground-truth prefixes output by A, that is, 𝐴(A) = |Ĝ ∩ 𝐺≥𝛽| / |𝐺≥𝛽|.
+ • Let 𝐺>𝛼 = {𝑔∗ ∈ 𝑆 | 𝑁𝑔∗ > 𝛼, 𝑂𝑔∗ > 𝜀 ∑𝑔∈𝑆 𝑂𝑔}; then the false-positive rate of A is defined as 𝐹𝑃(A) = |Ĝ \ 𝐺>𝛼| / |𝐺>𝛼|.
+ • The communication overhead from the data plane to the control plane is defined as the number of reports sent by A, divided by the length of stream 𝑆, where the number of reports also accounts for the flow records in the data structure that exceed the reporting thresholds.
+ Unless otherwise specified, each experiment is repeated five times with different seeds for the hash functions. Whenever using Definition 1, we are interested in identifying prefixes with at least 𝛽 = 2⁷ packets, with more than an 𝜀 = 0.01 fraction of their packets reordered. And we do not wish to output prefixes with at most 𝛼 = 2⁴ packets, irrespective of their out-of-orderness.
+ [Figure 5 residue: three panels plotting (1) accuracy, (2) false-positive rate, and (3) communication overhead against the number of buckets 𝐵, for the variants "Report reorder", "Report all 𝑐 = 0.5", and "Report all 𝑐 = 1".]
+ Fig. 5. Performance of the array sampling algorithm and its variant, with 𝑇 = 2⁻¹⁵, 𝐶 = 2⁴, and 𝑅 = 1.
+ 4.2.1 Performance evaluation. Figure 5 evaluates the performance of the “Report reorder” version of the flow-sampling algorithm (§ 3.2.1), and the “Report all” version (§ 3.2.3) using two different values of parameter 𝑐. Recall that in the “Report all” version, we output a prefix if more than a 𝑐𝜀 fraction of its observed packets is out-of-order.
+ Note that our trace (§ 4.1) contains more than 2¹⁹ flows and more than 2¹⁴ prefixes, and using only 2⁵ buckets, the original version of the flow-sampling algorithm is already capable of reporting half of the out-of-order prefixes. To put this into perspective, reordering happens at the flow level, and assigning even one bucket per prefix to detect reordering already requires a nontrivial solution, while the flow-sampling algorithm achieves good accuracy using memory that is orders of magnitude smaller.
+ If we are willing to generate reports for more than 10% of the traffic, with an increased communication overhead comes a reduced false-positive rate. Moreover, with a better-chosen parameter 𝑐, the extra information sent to the control plane helps in further improving the accuracy.
+ 4.3 Evaluate the hybrid scheme
+ To fairly compare the hybrid scheme with the flow-sampling algorithm, we need to determine the optimal memory allocation between PRECISION and the array. Lacking a better way to optimize the memory allocation, we turn to experiments with our packet trace. Given a total of 𝐵 buckets, we assign ⌊𝑥 · 𝐵⌋ buckets to PRECISION, 𝐵 − ⌊𝑥𝐵⌋ buckets to the array, and conduct a grid search on 𝑥 ∈ 𝐼 = {0.1, . . . , 0.9} to find the value of 𝑥 that maximizes the performance of the hybrid scheme.
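+ The grid search above can be sketched as follows; a minimal sketch in which `evaluate(precision_buckets, array_buckets)` is an assumed callback that runs the hybrid scheme on the trace and returns a score such as accuracy.

```python
def best_split(B, evaluate, grid=None):
    """Grid-search the PRECISION/array memory split: try each x in the
    grid, giving floor(x * B) buckets to PRECISION and the rest to the
    array, and keep the best-scoring split."""
    grid = grid or [i / 10 for i in range(1, 10)]   # x in {0.1, ..., 0.9}
    best_x, best_score = None, float("-inf")
    for x in grid:
        hh = int(x * B)                  # buckets for PRECISION
        score = evaluate(hh, B - hh)     # remaining buckets for the array
        if score > best_score:
            best_x, best_score = x, score
    return best_x, best_score
```

For example, with a toy objective that peaks when PRECISION gets 60 of 100 buckets, the search returns 𝑥 = 0.6.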
+ We evaluate the hybrid scheme using the optimal 𝑥 we found for each 𝐵, for both Definition 1 (Figure 6a) and Definition 2 (Figure 6b). Admittedly, the grid 𝐼 might not be fine enough to reveal the true optimal allocation; it nonetheless conveys the main idea. When the memory is small compared to the number of prefixes (2¹⁴), the performance of the flow-sampling algorithm significantly dominates that of the heavy-hitter data structure. The optimal hybrid scheme then only allocates a small fraction of the memory to the heavy-hitter data structure. However, compared to the flow-sampling algorithm, filtering even a small number of large flows helps in significantly reducing the communication overhead, while not deteriorating the accuracy. As we approach the memory range where there is roughly one bucket per prefix, the heavy-hitter data structure starts to perform well, and more memory is devoted to it in the optimal hybrid scheme. In this case, the hybrid scheme also reduces the false-positive rate, in comparison to the flow-sampling algorithm.
+ [Figure 6 residue: two rows of three panels each, plotting accuracy, false-positive rate, and communication overhead against the number of buckets 𝐵, for the Hybrid, Array, and PRECISION algorithms. (a) Performance using Definition 1, with 𝑅 = 0.01 for PRECISION. (b) Performance using Definition 2, with 𝑅 = 0.02 for PRECISION.]
+ Fig. 6. Performance of all proposed algorithms using two definitions of reordering, with 𝑑 = 2 for PRECISION.
+ 4.4 Parameter robustness
+ We started the evaluation using arbitrarily picked parameters. Now we verify that all parameters in our algorithms are either easily set, or robust to changes.
+ 4.4.1 Thresholds 𝑇, 𝐶, 𝑅 for flow sampling. To reveal how thresholds 𝑇 and 𝐶 individually affect the accuracy of the flow-sampling algorithm, ideally we want to fix one of them to infinity, and vary the other. In this way, only one of them governs the frequency of evictions. Applying this logic, when studying the effect of 𝑇 (Figure 7a), we fix 𝐶 to a number larger than the length of the entire trace. We see that as long as 𝑇 is small, the algorithm samples enough flows, and has high accuracy.
+ Evaluating the effects of a varying 𝐶 turns out to be less straightforward. If we make 𝑇 too large, the algorithm generally suffers from extremely poor performance, which makes it impossible to observe any difference that changing 𝐶 might bring. If 𝑇 is too small, the frequency of eviction would be primarily driven by 𝑇, and 𝐶 would not have any impact. And it is not as simple as setting 𝑇 larger than all inter-arrival times: since eviction only occurs on hash collisions, inter-arrival time alone paints only part of the picture. All evidence above points to the fact that 𝑇 is the more important parameter. Once we have a good choice of 𝑇, the boost from optimizing 𝐶 is secondary. Armed with this knowledge, we fix 𝑇 = 2⁵, an ad hoc choice that is by no means perfect. Yet it is enough to observe (Figure 7b) that having a small 𝐶 is slightly more beneficial.
+ However, 𝐶 cannot be too small, as inserting a new flow record into the array requires recirculation in the hardware implementation. Programmable switches generally support recirculating up to 3%–10% of packets without penalty. Here we set 𝐶 to 16, which allows us to achieve line rate. Given that each non-small flow is continuously monitored for roughly 𝐶 = 16 packets at a time, we report its prefix to the control plane when we encounter any out-of-order packet, that is, 𝑅 = 1.
+ 20
951
+ 2
952
+ 3
953
+ 2
954
+ 6
955
+ 2
956
+ 9
957
+ 2
958
+ 12
959
+ 2
960
+ 15
961
+ 2
962
+ 18
963
+ Inter-arraival timeout T (s)
964
+ 0.0
965
+ 0.2
966
+ 0.4
967
+ 0.6
968
+ Accuracy
969
+ (a) The accuracy of the flow-
970
+ sampling algorithm with varying
971
+ 𝑇, and fixed 𝐵 = 28, 𝑅 = 1 and
972
+ 𝐶 = 108.
973
+ 20
974
+ 22
975
+ 24
976
+ 26
977
+ 28
978
+ 210 212
979
+ Packet count threshold C
980
+ 0.00
981
+ 0.01
982
+ 0.02
983
+ 0.03
984
+ Accuracy
985
+ (b) The accuracy of the flow-
986
+ sampling algorithm with varying
987
+ 𝐶, with fixed 𝐵 = 28, 𝑅 = 1 and
988
+ 𝑇 = 25.
989
+ 212
990
+ 215
991
+ 218
992
+ Number of buckets B
993
+ 0.00
994
+ 0.25
995
+ 0.50
996
+ 0.75
997
+ 1.00
998
+ Accuracy
999
+ d = 2
1000
+ d = 3
1001
+ d = 4
1002
+ d = 5
1003
+ (c) The accuracy of the flow-
1004
+ sampling algorithm with varying
1005
+ 𝑑, with fixed 𝑅 = 0.01.
1006
+ Fig. 7. The effect of changing parameters on the accuracy of the flow-sampling algorithm and PRECISION.
1007
+ 4.4.2
1008
+ The number of stages 𝑑 in PRECISION. It is observed in [3] that a small constant 𝑑 > 1
1009
+ only incurs minimal accuracy loss in finding heavy flows. Increasing 𝑑 leads to diminishing gains
1010
+ in performance, and adds the number of pipeline stages when implemented on the hardware.
1011
+ Therefore, 𝑑 = 2 is preferable for striking a balance between accuracy and hardware resources.
1012
+ Building on [3], we evaluate PRECISION for 𝑑 = 2, 3, 4, 5, for reporting out-of-order heavy prefixes.
1013
+ The results in Figure 7c show that when the total memory is small, using fewer stages provides a
1014
+ slight benefit. The opposite holds when there is ample memory. However, as the performance gap
1015
+ using different 𝑑 is insignificant, we also suggest using 𝑑 = 2 for hardware implementations.
1016
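As a rough illustration of the 𝑑-stage structure, the following sketch is our own simplification in the spirit of PRECISION [3] (the class name and the exact admission probability are illustrative assumptions, not the authors' implementation):

```python
import random

class MultiStageTable:
    """Simplified d-stage hash-indexed array in the spirit of PRECISION."""

    def __init__(self, d, buckets_per_stage, seed=0):
        self.d = d
        self.width = buckets_per_stage
        self.stages = [dict() for _ in range(d)]  # bucket -> (key, count)
        self.rng = random.Random(seed)

    def update(self, key):
        min_entry = None  # (stage, bucket, count) of the smallest counter
        for s in range(self.d):
            b = hash((s, key)) % self.width  # per-stage hash function
            entry = self.stages[s].get(b)
            if entry is None:
                self.stages[s][b] = (key, 1)  # empty slot: admit
                return
            if entry[0] == key:
                self.stages[s][b] = (key, entry[1] + 1)  # match: increment
                return
            if min_entry is None or entry[1] < min_entry[2]:
                min_entry = (s, b, entry[1])
        # No match in any stage: probabilistically replace the minimum
        # counter (randomized admission keeps the many small flows out).
        s, b, c = min_entry
        if self.rng.random() < 1.0 / (c + 1):
            self.stages[s][b] = (key, c + 1)
```

A heavy flow that is admitted once keeps matching its slot on later packets, so its counter tracks its true frequency; increasing 𝑑 only adds more lookup stages per packet.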
+ 5
1017
+ RELATED WORK
1018
+ Characterization of out-of-orderness on the Internet. Packet reordering was first studied in the
1019
+ seminal work by Paxson [17]. It has since been well understood that packet reordering can be
1020
+ caused by parallel links, routing changes, and the presence of adversaries [4]. In typical network
1021
+ conditions, only a small fraction of packets are out-of-order [17, 20]. However, when the network
1022
+ reorders packets, TCP endpoints may wrongly infer that the network is congested, harming end-
1023
+ to-end performance by retransmitting packets and reducing the sending rate [4, 11, 12]. Metrics
1024
+ for characterizing reordering are intensively studied in [15] and [9], though many of the proposed
1025
+ metrics are more suitable for offline analysis. In addition to the network causing packet reordering,
1026
+ the stream of packets in the same TCP connection can appear out of order because congestion along
1027
+ the path leads to packet losses and subsequent retransmissions. Our techniques for identifying IP
1028
+ prefixes with heavy reordering of TCP packets are useful for pinpointing network paths suffering
1029
+ from both kinds of reordering—whether caused by the network devices themselves or induced by
1030
+ the TCP senders in response to network congestion.
1031
+ Data-plane efficient data structures for volume-based metrics. For heavy-hitter queries,
1032
+ HashPipe [19] adapts SpaceSaving [14] to work with the data-plane constraints, using a multi-
1033
+ stage hash-indexes array. PRECISION [3] further incorporates the idea of Randomized Admission
1034
+ Policy [2] to better deal with the massive number of small flows generally found in network traffic.
1035
+ We extend PRECISION to keep reordering statistics for large flows. However, such an extension
1036
+ cannot be used to detect flows with a large number of out-of-order packets with a reasonable
1037
+ amount of memory.
1038
+ Data-plane efficient data structures for performance metrics. Liu et al. [13] propose
1039
+ memory-efficient algorithms for identifying flows with high latency, or lost, reordered, and retrans-
1040
+ mitted packets. Several solutions for measuring round-trip delay in the data-plane [6, 18, 22] have
1041
1042
+
1043
+ Zheng, Yu and Rexford
1044
+ a similar flavor to identifying out-of-order heavy prefixes, as in both cases keeping at least some
1045
+ state is necessary, with the difference that for reordering we generally need to match more than a
1046
+ pair of packets.
1047
+ Detect heavy reordering in the data plane. Several existing systems can detect TCP packet
1048
+ reordering in the data plane. Marple is a general-purpose network telemetry platform with a
1049
+ database-like query language [16]. While Marple can analyze out-of-order packets, the compiler
1050
+ generates a data-plane implementation that requires per-flow state. Unfortunately, such methods
1051
+ consume more memory than the programmable switch can offer in practice. The algorithm proposed
1052
+ by Liu et al. [13] for detecting flows with a large number of out-of-order packets remains the work
1053
+ most related to ours. We note that our lower bound on memory consumption in § 2.1.3 is stronger
1054
+ than a similar lower bound (Lemma 10) in [13], as we also allow randomness and approximation. Liu
1055
+ et al. [13] consider out-of-order events specified by Definition 3, and work around the lower bound
1056
+ by assuming out-of-order packets always arrive within some fixed period of time. In contrast, we
1057
+ circumvent the lower bound using the more natural observation that out-of-orderness is correlated
1058
+ among flows within a prefix, and identify heavily reordered prefixes instead of flows.
1059
+ 6
1060
+ CONCLUSION
1061
+ In this paper, we introduce three algorithms for identifying out-of-order prefixes in the data plane.
1062
+ In particular, the flow-sampling algorithm achieves good accuracy empirically, even with memory
+ orders of magnitude smaller than the number of prefixes, let alone the number of flows. When given
1064
+ memory comparable to the number of prefixes, the hybrid scheme using both a heavy-hitter data
1065
+ structure and flow sampling gives similar accuracy, while significantly reducing the false-positive
1066
+ rate and the communication overhead.
1067
+ Next, we plan to build prototypes of the flow-sampling array and the hybrid scheme for the
1068
+ Intel Tofino high-speed programmable switch. Moreover, notice that measuring reordering is
1069
+ fundamentally memory-expensive, yet we leverage the correlation of out-of-orderness among flows
1070
+ in the same prefix so that compact data structures can be effective. In fact, there is nothing special
1071
+ about out-of-orderness. Other properties of a network path could very well lead to an analogous
1072
+ correlation. For many performance metrics that suffer from similar memory lower bounds, it would
1073
+ be intriguing to look into whether such correlation helps in squeezing good performance out of
1074
+ limited memory. We leave that for future work.
1075
+ REFERENCES
1076
+ [1] Imad Aad, Jean-Pierre Hubaux, and Edward W Knightly. 2008. Impact of denial of service attacks on ad hoc networks.
1077
+ IEEE/ACM Transactions on Networking 16, 4 (2008), 791–802.
1078
+ [2] Ran Ben Basat, Xiaoqi Chen, Gil Einziger, Roy Friedman, and Yaron Kassner. 2019. Randomized admission policy for
1079
+ efficient top-k, frequency, and volume estimation. IEEE/ACM Transactions on Networking 27, 4 (2019), 1432–1445.
1080
+ [3] Ran Ben Basat, Xiaoqi Chen, Gil Einziger, and Ori Rottenstreich. 2020. Designing heavy-hitter detection algorithms
1081
+ for programmable switches. IEEE/ACM Transactions on Networking 28, 3 (2020), 1172–1185.
1082
+ [4] Jon CR Bennett, Craig Partridge, and Nicholas Shectman. 1999. Packet reordering is not pathological network behavior.
1083
+ IEEE/ACM Transactions on Networking 7, 6 (1999), 789–798.
1084
+ [5] Ethan Blanton and Mark Allman. 2002. On making TCP more robust to packet reordering. ACM SIGCOMM Computer
1085
+ Communication Review 32, 1 (2002), 20–30.
1086
+ [6] Xiaoqi Chen, Hyojoon Kim, Javed M Aman, Willie Chang, Mack Lee, and Jennifer Rexford. 2020. Measuring TCP
1087
+ round-trip time in the data plane. In ACM SIGCOMM Workshop on Secure Programmable Network Infrastructure. 35–41.
1088
+ [7] Amir Herzberg and Haya Shulman. 2010. Stealth DoS Attacks on Secure Channels. In Network and Distributed System
1089
+ Symposium.
1090
+ [8] Svante Janson. 2018. Tail bounds for sums of geometric and exponential variables. Statistics & Probability Letters 135
1091
+ (2018), 1–6.
1092
+ [9] Anura Jayasumana, N Piratla, T Banka, A Bare, and R Whitner. 2008. Improved packet reordering metrics. RFC 5236.
1093
1096
+ [10] Akshay Kamath, Eric Price, and David P. Woodruff. 2021. A Simple Proof of a New Set Disjointness with Applications
1097
+ to Data Streams. In Computational Complexity Conference.
1098
+ [11] Michael Laor and Lior Gendel. 2002. The effect of packet reordering in a backbone link on application throughput.
1099
+ IEEE Network 16, 5 (2002), 28–36.
1100
+ [12] Ka-Cheong Leung, Victor OK Li, and Daiqin Yang. 2007. An overview of packet reordering in transmission control
1101
+ protocol (TCP): Problems, solutions, and challenges. IEEE Transactions on Parallel and Distributed Systems 18, 4 (2007),
1102
+ 522–535.
1103
+ [13] Zaoxing Liu, Samson Zhou, Ori Rottenstreich, Vladimir Braverman, and Jennifer Rexford. 2020. Memory-efficient
1104
+ performance monitoring on programmable switches with lean algorithms. In Symposium on Algorithmic Principles of
1105
+ Computer Systems. SIAM, 31–44.
1106
+ [14] Ahmed Metwally, Divyakant Agrawal, and Amr El Abbadi. 2005. Efficient computation of frequent and top-k elements
1107
+ in data streams. In International Conference on Database Theory. Springer, 398–412.
1108
+ [15] Al Morton, Len Ciavattone, Gomathi Ramachandran, Stanislav Shalunov, and Jerry Perser. 2006. Packet reordering
1109
+ metrics. RFC 4737.
1110
+ [16] Srinivas Narayana, Anirudh Sivaraman, Vikram Nathan, Prateesh Goyal, Venkat Arun, Mohammad Alizadeh, Vimalku-
1111
+ mar Jeyakumar, and Changhoon Kim. 2017. Language-directed hardware design for network performance monitoring.
1112
+ In ACM SIGCOMM. 85–98.
1113
+ [17] Vern Paxson. 1997. End-to-end Internet packet dynamics. IEEE/ACM Transactions on Networking 7, 3 (June 1997),
1114
+ 277–292.
1115
+ [18] Satadal Sengupta, Hyojoon Kim, and Jennifer Rexford. 2022. Continuous in-network round-trip time monitoring. In
1116
+ ACM SIGCOMM. 473–485.
1117
+ [19] Vibhaalakshmi Sivaraman, Srinivas Narayana, Ori Rottenstreich, Shan Muthukrishnan, and Jennifer Rexford. 2017.
1118
+ Heavy-hitter detection entirely in the data plane. In ACM SIGCOMM Symposium on SDN Research. 164–176.
1119
+ [20] Yi Wang, Guohan Lu, and Xing Li. 2004. A study of Internet packet reordering. In International Conference on Information
1120
+ Networking. Springer, 350–359.
1121
+ [21] Yinda Zhang, Zaoxing Liu, Ruixin Wang, Tong Yang, Jizhou Li, Ruijie Miao, Peng Liu, Ruwen Zhang, and Junchen
1122
+ Jiang. 2021. CocoSketch: High-performance sketch-based measurement over arbitrary partial key query. In ACM
1123
+ SIGCOMM. 207–222.
1124
+ [22] Yufei Zheng, Xiaoqi Chen, Mark Braverman, and Jennifer Rexford. 2022. Unbiased Delay Measurement in the Data
1125
+ Plane. In Symposium on Algorithmic Principles of Computer Systems (APOCS). SIAM, 15–30.
1126
9dAyT4oBgHgl3EQfQ_am/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
9tE1T4oBgHgl3EQfoARp/content/2301.03315v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3280e125576b2ce676c32a6d50875a0947dad547dc78e9a1dc768ea40cb4fffb
3
+ size 4091149
9tE1T4oBgHgl3EQfoARp/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:41256d8e871b57e20cfa7e4cb05b5e68be6ae2f7b9414806c7cc899f8729f6c5
3
+ size 304746
ANE1T4oBgHgl3EQfVQQy/content/tmp_files/2301.03099v1.pdf.txt ADDED
@@ -0,0 +1,1769 @@
1
+ arXiv:2301.03099v1 [cs.AI] 8 Jan 2023
2
+ Fully Dynamic Online Selection through Online Contention Resolution Schemes
3
+ Vashist Avadhanula*, Andrea Celli1, Riccardo Colini-Baldeschi2,
4
+ Stefano Leonardi3, Matteo Russo3
5
+ 1Department of Computing Sciences, Bocconi University, Milan, Italy
6
+ 2 Core Data Science, Meta, London, UK
7
+ 3Department of Computer, Control and Management Engineering, Sapienza University, Rome, Italy
8
9
+ Abstract
10
+ We study fully dynamic online selection problems in an ad-
11
+ versarial/stochastic setting that includes Bayesian online se-
12
+ lection, prophet inequalities, posted price mechanisms, and
13
+ stochastic probing problems subject to combinatorial con-
14
+ straints. In the classical “incremental” version of the problem,
15
+ selected elements remain active until the end of the input se-
16
+ quence. On the other hand, in the fully dynamic version of the
17
+ problem, elements stay active for a limited time interval, and
18
+ then leave. This models, for example, the online matching of
19
+ tasks to workers with task/worker-dependent working times,
20
+ and sequential posted pricing of perishable goods. A success-
21
+ ful approach to online selection problems in the adversarial
22
+ setting is given by the notion of Online Contention Resolution
23
+ Scheme (OCRS), that uses a priori information to formulate
24
+ a linear relaxation of the underlying optimization problem,
25
+ whose optimal fractional solution is rounded online for any
26
+ adversarial order of the input sequence. Our main contribu-
27
+ tion is providing a general method for constructing an OCRS
28
+ for fully dynamic online selection problems. Then, we show
29
+ how to employ such OCRS to construct no-regret algorithms
30
+ in a partial information model with semi-bandit feedback and
31
+ adversarial inputs.
32
+ 1
33
+ Introduction
34
+ Consider the case where a financial service provider receives
35
+ multiple operations every hour/day. These operations might
36
+ be malicious. The provider needs to assign them to human
37
+ reviewers for inspection. The time required by each reviewer
38
+ to file a reviewing task and the reward (weight) that is ob-
39
+ tained with the review follow some distributions. The distri-
40
+ butions can be estimated from historical data, as they depend
41
+ on the type of transaction that needs to be examined and on
42
+ the expertise of the employed reviewers. To efficiently solve
43
+ the problem, the platform needs to compute a matching be-
44
+ tween tasks and reviewers based on the a priori information
45
+ that is available. However, the time needed for a specific re-
46
+ view, and the realized reward (weight), is often known only
47
+ after the task/reviewer matching is decided.
48
+ A multitude of variations to this setting are possible. For
49
+ instance, if a cost is associated with each reviewing task, the
50
+ total cost for the reviewing process might be bounded by a
51
+ budget. Moreover, there might be various kinds of restric-
52
+ tions on the subset of reviewers that are assigned at each
53
+ *Research performed while the author was working at Meta.
54
+ time step. Finally, the objective function might not only be
55
+ the sum of the rewards (weights) we observe, if, for example,
56
+ the decision maker has a utility function with “diminishing
57
+ return” property.
58
+ To model the general class of sequential decision prob-
59
+ lems described above, we introduce fully dynamic online se-
60
+ lection problems. This model generalizes online selection
61
+ problems (Chekuri, Vondr´ak, and Zenklusen 2011), where
62
+ elements arrive online in an adversarial order and algorithms
63
+ can use a priori information to maximize the weight of the
64
+ selected subset of elements, subject to combinatorial con-
65
+ straints (such as matroid, matching, or knapsack).
66
+ In the classical version of the problem (Chekuri, Vondr´ak,
67
+ and Zenklusen 2011), once an element is selected, it will af-
68
+ fect the combinatorial constraints throughout the entire input
69
+ sequence. This is in sharp contrast with the fully dynamic
70
+ version, where an element will affect the combinatorial con-
71
+ straint only for a limited time interval, which we name ac-
72
+ tivity time of the element. For example, a new task can be
73
+ matched to a reviewer as soon as she is done with previ-
74
+ ously assigned tasks, or an agent can buy a new good as
75
+ soon as the previously bought goods are perished. A large
76
+ class of Bayesian online selection (Kleinberg and Weinberg
77
+ 2012), prophet inequality (Hajiaghayi, Kleinberg, and Sand-
78
+ holm 2007), posted price mechanism (Chawla et al. 2010),
79
+ and stochastic probing (Gupta and Nagarajan 2013) prob-
80
+ lems that have been studied in the classical version of on-
81
+ line selection can therefore be extended to the fully dynamic
82
+ setting. Note that in the dynamic algorithms literature, fully
83
+ dynamic algorithms are algorithms that deal with both adver-
84
+ sarial insertions and deletions (Demetrescu et al. 2010). We
85
+ could also interpret our model in a similar sense since ele-
86
+ ments arrive online (are inserted) according to an adversarial
87
+ order, and cease to exist (are deleted) according to adversar-
88
+ ially established activity times.
89
+ A successful approach to online selection problems is
90
+ based on Online Contention Resolution Schemes (OCRSs)
91
+ (Feldman, Svensson, and Zenklusen 2016). OCRSs use a
92
+ priori information on the values of the elements to formu-
93
+ late a linear relaxation whose optimal fractional solution up-
94
+ per bounds the performance of the integral offline optimum.
95
+ Then, an online rounding procedure is used to produce a so-
96
+ lution whose value is as close as possible to the fractional re-
97
+ laxation solution’s value, for any adversarial order of the in-
98
+
99
+ put sequence. The OCRS approach allows to obtain good ap-
100
+ proximations of the expected optimal solution for linear and
101
+ submodular objective functions. The existence of OCRSs for
102
+ fully dynamic online selection problems is therefore a natu-
103
+ ral research question that we address in this work.
104
+ The OCRS approach is based on the availability of a pri-
105
+ ori information on weights and activity times. However, in
106
+ real world scenarios, these might be missing or might be
107
+ expensive to collect. Therefore, in the second part of our
108
+ work, we study the fully dynamic online selection problem
109
+ with partial information, where the main research question
110
+ is whether the OCRS approach is still viable if a priori in-
111
+ formation on the weights is missing. In order to answer this
112
+ question, we study a repeated version of the fully dynamic
113
+ online selection problem, in which at each stage weights are
114
+ unknown to the decision maker (i.e., no a priori informa-
115
+ tion on weights is available) and chosen adversarially. The
116
+ goal in this setting is the design of an online algorithm with
117
+ performances (i.e., cumulative sum of weights of selected
118
+ elements) close to that of the best static selection strategy in
119
+ hindsight.
120
+ Our Contributions
121
+ First, we introduce the fully dynamic online selection prob-
122
+ lem, in which elements arrive following an adversarial or-
123
+ dering, and reveal one-by-one their weights and activ-
124
+ ity times at the time of arrival (i.e., prophet model), or
125
+ after the element has been selected (i.e., probing model).
126
+ Our model describes temporal packing constraints (i.e.,
127
+ downward-closed), where elements are active only within
128
+ their activity time interval. The objective is to maximize the
129
+ weight of the selected set of elements subject to temporal
130
+ packing constraints. We provide two black-box reductions
131
+ for adapting classical OCRS for online (non-dynamic) se-
132
+ lection problems to the fully dynamic setting under full and
133
+ partial information.
134
+ Blackbox reduction 1: from OCRS to temporal OCRS.
135
+ Starting from a (b, c)-selectable greedy OCRS in the clas-
136
+ sical setting, we use it as a subroutine to build a (b, c)-
137
+ selectable greedy OCRS in the more general temporal setting
138
+ (see Algorithm 1 and Theorem 1). This means that competi-
139
+ tive ratio guarantees in one setting determine the same guar-
140
+ antees in the other. Such a reduction implies the existence
141
+ of algorithms with constant competitive ratio for online opti-
142
+ mization problems with linear or submodular objective func-
143
+ tions subject to matroid, matching, and knapsack constraints,
144
+ for which we give explicit constructions. We also extend the
145
+ framework to elements arriving in batches, which can have
146
+ correlated weights or activity times within the batch, as de-
147
+ scribed in the appendix of the paper.
148
+ Blackbox reduction 2: from temporal OCRS to no-α-
149
+ regret algorithm. Following the recent work by Gergatsouli
150
+ and Tzamos (2022) in the context of Pandora’s box prob-
151
+ lems, we define the following extension of the problem to the
152
+ partial-information setting. For each of the T stages, the al-
153
+ gorithm is given in input a new instance of the fully dynamic
154
+ online selection problem. Activity times are fixed before-
155
+ hand and known to the algorithm, while weights are chosen
156
+ by an adversary, and revealed only after the selection at the
157
+ current stage has been completed. In such setting, we show
158
+ that an α-competitive temporal OCRS can be exploited in
159
+ the adversarial partial-information version of the problem, in
160
+ order to build no-α-regret algorithms with polynomial per-
161
+ iteration running time. Regret is measured with respect to the
162
+ cumulative weights collected by the best fixed selection pol-
163
+ icy in hindsight. We study three different settings: in the first
164
+ setting, we study the full-feedback model (i.e., the algorithm
165
+ observes the entire utility function at the end of each stage).
166
+ Then, we focus on the semi-bandit-feedback model, in which
167
+ the algorithm only receives information on the weights of the
168
+ elements it selects. In such setting, we provide a no-α-regret
169
+ framework with Õ(T^{1/2}) upper bound on cumulative regret
170
+ in the case in which we have a “white-box” OCRS (i.e., we
171
+ know the exact procedure run within the OCRS, and we are
172
+ able to simulate it ex-post). Moreover, we also provide a no-
173
+ α-regret algorithm with Õ(T^{2/3}) regret upper bound for the
174
+ case in which we only have oracle access to the OCRS (i.e.,
175
+ the OCRS is treated as a black-box, and the algorithm does
176
+ not require knowledge about its internal procedures).
177
+ Related Work
178
+ In the first part of the paper, we deal with a setting where
179
+ the algorithm has complete information over the input but is
180
+ unaware of the order in which elements arrive. In this con-
181
+ text, Contention resolution schemes (CRS) were introduced
182
+ by Chekuri, Vondr´ak, and Zenklusen (2011) as a powerful
183
+ rounding technique in the context of submodular maximiza-
184
+ tion. The CRS framework was extended to online contention
185
+ resolution schemes (OCRS) for online selection problems
186
+ by Feldman, Svensson, and Zenklusen (2016), who provided
187
+ constant competitive OCRSs for different problems, e.g. in-
188
+ tersections of matroids, matchings, and prophet inequalities.
189
+ We generalize the OCRS framework to a setting where ele-
190
+ ments are timed and cease to exist right after.
191
+ In the second part, we lift the complete knowledge as-
192
+ sumption and work in an adversarial bandit setting, where at
193
+ each stage the entire set of elements arrives, and we seek
194
+ to select the “best” feasible subset. This is similar to the
195
+ problem of combinatorial bandits (Cesa-Bianchi and Lugosi
196
+ 2012), but unlike it, we aim to deal with combinatorial se-
197
+ lection of timed elements. In this respect, blocking bandits
198
+ (Basu et al. 2019) model situations where played arms are
199
+ blocked for a specific number of stages. Despite their con-
200
+ textual (Basu et al. 2021), combinatorial (Atsidakou et al.
201
+ 2021), and adversarial (Bishop et al. 2020) extensions, re-
202
+ cent work on blocking bandits only addresses specific cases
203
+ of the fully dynamic online selection problem (Dickerson
204
+ et al. 2018), which we solve in entire generality, i.e. adver-
205
+ sarially and for all packing constraints.
206
+ Our problem is also related to sleeping bandits (Klein-
207
+ berg, Niculescu-Mizil, and Sharma 2010), in that the adver-
208
+ sary decides which actions the algorithm can perform at each
209
+ stage t. Nonetheless, a sleeping bandit adversary has to com-
210
+ municate all available actions to the algorithm before a stage
211
+ starts, whereas our adversary sets arbitrary activity times for
212
+ each element, choosing in what order elements arrive.
213
+
214
+ 2
215
+ Preliminaries
216
+ Given a finite set X ⊆ R^n and Y ⊆ 2^X, let 1_Y ∈ {0, 1}^{|X|}
+ be the characteristic vector of set Y, and co X be the convex
218
+ hull of X. We denote vectors by bold fonts. Given vector x,
219
+ we denote by xi its i-th component. The set {1, 2, . . . , n},
220
+ with n ∈ N>0, is compactly denoted as [n]. Given a set X
221
+ and a scalar α ∈ R, let αX := {αx : x ∈ X}. Finally, given
222
+ a discrete set X, we denote by ∆X the |X|-simplex.
223
+ We start by introducing a general selection problem in the
224
+ standard (i.e., non-dynamic) case as studied by Kleinberg
225
+ and Weinberg (2012) in the context of prophet inequalities.
226
+ Let E be the ground set and let m := |E|. Each element e ∈ E
227
+ is characterized by a collection of parameters ze. In gen-
228
+ eral, ze is a random variable drawn according to an element-
229
+ specific distribution ζe, supported over the joint set of pos-
230
+ sible parameters. In the standard (i.e., non-dynamic) setting,
231
+ ze just encodes the weight associated to element e, that is
232
+ ze = (we), for some we ∈ [0, 1].1 In such case distributions
233
+ ζe are supported over [0, 1]. Random variables {ze : e ∈ E}
234
+ are independent, and ze is distributed according to ζe. An in-
235
+ put sequence is an ordered sequence of elements and weights
236
+ such that every element in E occurs exactly once in the se-
237
+ quence. The order is specified by an arrival time se for each
238
+ element e. Arrival times are such that se ∈ [m] for all e ∈ E,
239
+ and for two distinct e, e′ we have se ≠ se′. The order of
240
+ arrival of the elements is a priori unknown to the algorithm,
241
+ and can be selected by an adversary. In the standard full-
242
+ information setting the distributions ζe can be chosen by an
243
+ adversary, but they are known to the algorithm a priori. We
244
+ consider problems characterized by a family of packing con-
245
+ straints.
246
+ Definition 1 (Packing Constraint). A family of constraints
247
+ F = (E, I), for ground set E and independence family I ⊆
248
+ 2^E, is said to be packing (i.e., downward-closed) if, for any
+ A ∈ I and B ⊆ A, we have B ∈ I.
250
+ Elements of I are called independent sets. Such family of
251
+ constraints is closed under intersection, and encompasses
252
+ matroid, knapsack, and matching constraints.
253
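A minimal, concrete check of Definition 1 (the family of all subsets of cardinality at most two is one packing family; the helper name `is_downward_closed` is ours):

```python
from itertools import combinations

E = {1, 2, 3, 4}
# A concrete packing family: all subsets of E with at most 2 elements
# (a 2-uniform matroid).
I = [frozenset(S) for k in range(3) for S in combinations(E, k)]

def is_downward_closed(family):
    """Check Definition 1: A in I and B a subset of A imply B in I."""
    fam = set(family)
    return all(frozenset(B) in fam
               for A in fam
               for k in range(len(A) + 1)
               for B in combinations(A, k))

assert is_downward_closed(I)
# Dropping the singletons breaks the property:
assert not is_downward_closed([frozenset(), frozenset({1, 2})])
```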
+ Fractional LP formulation. Even in the offline setting, in which
+ the ordering of the input sequence (s_e)_{e∈E} is known
+ beforehand, determining an independent set of maximum cumulative
+ weight may be NP-hard in the worst case (Feige 1998). We therefore
+ consider the relaxation of the problem in which we look for an
+ optimal fractional solution. The value of such a solution is an
+ upper bound on the value of the true offline optimum. Therefore,
+ any algorithm guaranteeing a constant approximation to the offline
+ fractional optimum immediately yields the same guarantee with
+ respect to the offline optimum. Given a family of packing
+ constraints F = (E, I), in order to formulate the problem of
+ computing the best fractional solution as a linear program (LP),
+ we introduce the packing constraint polytope P_F ⊆ [0, 1]^m,
+ defined as P_F := co({1_S : S ∈ I}). Given a non-negative
+ submodular function f : [0, 1]^m → R_{≥0} and a family of packing
+ constraints F, an optimal fractional solution can be computed via
+ the LP max_{x∈P_F} f(x). If the goal is maximizing the cumulative
+ sum of weights, the objective of the optimization problem is
+ ⟨x, w⟩, where w := (w_1, . . . , w_m) ∈ [0, 1]^m is a vector
+ specifying the weight of each element. If we assume access to a
+ polynomial-time separation oracle for P_F, such an LP yields an
+ optimal fractional solution in polynomial time.
+ ¹This is for notational convenience. In the dynamic case z_e will
+ contain other parameters in addition to weights.
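The separation oracle assumed above can be made concrete for a matching constraint family (one packing constraint per vertex of a graph). The sketch below is illustrative: the edge-list encoding, tolerance, and function name are our own assumptions, not notation from the paper.

```python
# Sketch: a separation oracle for the matching polytope
# P_F = { x in [0,1]^m : sum_{e in delta(u)} x_e <= 1 for all u }.

def separation_oracle(edges, x):
    """Return None if x lies in P_F, otherwise a violated constraint,
    reported as ('box', edge_index) or ('vertex', vertex)."""
    load = {}
    for j, (u, v) in enumerate(edges):
        if not 0.0 <= x[j] <= 1.0:
            return ('box', j)                  # box constraint violated
        load[u] = load.get(u, 0.0) + x[j]      # accumulate vertex loads
        load[v] = load.get(v, 0.0) + x[j]
    for u, total in load.items():
        if total > 1.0 + 1e-9:
            return ('vertex', u)               # packing constraint violated
    return None

edges = [(0, 1), (1, 2), (0, 2)]               # triangle graph, m = 3
print(separation_oracle(edges, [0.5, 0.5, 0.5]))  # in P_F -> None
print(separation_oracle(edges, [0.8, 0.8, 0.0]))  # vertex 1 overloaded
```

Equipped with such an oracle, the LP above can be solved in polynomial time by the ellipsoid method, as the text notes.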
+ Online selection problem. In the online version of the problem,
+ given a family of packing constraints F, the goal is to select an
+ independent set whose cumulative weight is as large as possible.
+ In this setting, the elements reveal their realized z_e one by
+ one, following a fixed prespecified order unknown to the
+ algorithm. Each time an element reveals z_e, the algorithm has to
+ choose whether to select it or discard it before the next element
+ is revealed. Such a decision is irrevocable. Computing the exact
+ optimal solution to such online selection problems is intractable
+ in general (Feige 1998), and the goal is usually to design
+ approximation algorithms with a good competitive ratio.² In the
+ remainder of the section we describe one well-known framework for
+ this objective.
+ Online contention resolution schemes. Contention resolution
+ schemes were originally proposed by Chekuri, Vondrák, and
+ Zenklusen (2011) in the context of submodular function
+ maximization, and later extended to online selection problems by
+ Feldman, Svensson, and Zenklusen (2016) under the name of online
+ contention resolution schemes (OCRS). Given a fractional solution
+ x ∈ P_F, an OCRS is an online rounding procedure yielding an
+ independent set in I whose value is guaranteed to be close to that
+ of x. Let R(x) be a random set containing each element e
+ independently with probability x_e. The set R(x) may not be
+ feasible according to the constraints F. An OCRS essentially
+ provides a procedure to construct a good feasible approximation
+ starting from the random set R(x). Formally,
+ Definition 2 (OCRS). Given a point x ∈ P_F and the set of elements
+ R(x), elements e ∈ E reveal one by one whether they belong to R(x)
+ or not. An OCRS chooses irrevocably whether to select an element
+ in R(x) before the next element is revealed. An OCRS for P_F is an
+ online algorithm that selects S ⊆ R(x) such that 1_S ∈ P_F.
+ We will focus on greedy OCRSs, which were defined by Feldman,
+ Svensson, and Zenklusen (2016) as follows.
+ Definition 3 (Greedy OCRS). Let P_F ⊆ [0, 1]^m be the feasibility
+ polytope for constraint family F. An OCRS π for P_F is called a
+ greedy OCRS if, for every ex-ante feasible solution x ∈ P_F, it
+ defines a packing subfamily of feasible sets F_{π,x} ⊆ F, and an
+ element e is selected upon arrival if, together with the set of
+ already selected elements, the resulting set is in F_{π,x}.
+ A greedy OCRS is randomized if, given x, the choice of F_{π,x} is
+ randomized, and deterministic otherwise. For b, c ∈ [0, 1], we say
+ that a greedy OCRS π is (b, c)-selectable if, for each e ∈ E, and
+ given x ∈ bP_F (i.e., belonging to a down-scaled version of P_F),
+
+     Pr_{π,R(x)}[ S ∪ {e} ∈ F_{π,x}  ∀S ⊆ R(x), S ∈ F_{π,x} ] ≥ c.
+
+ ²The competitive ratio is computed as the worst-case ratio between
+ the value of the solution found by the algorithm and the value of
+ an optimal solution.
+ Intuitively, this means that, with probability at least c, the
+ random set R(x) is such that element e can be selected no matter
+ which other elements I of R(x) have been selected so far, as long
+ as I ∈ F_{π,x}. This guarantees that an element is selected with
+ probability at least c against any adversary, which implies a bc
+ competitive ratio with respect to the offline optimum (see
+ Appendix A for further details). We now provide an example, due to
+ Feldman, Svensson, and Zenklusen (2016), of a feasibility
+ constraint family for which OCRSs guarantee a constant competitive
+ ratio against the offline optimum. We will build on this example
+ throughout the paper in order to provide intuition for the main
+ concepts.
+ Example 1 (Theorem 2.7 in (Feldman, Svensson, and Zenklusen
+ 2016)). Given a graph G = (V, E) with |E| = m edges, we consider
+ the matching feasibility polytope
+
+     P_F = { x ∈ [0, 1]^m : Σ_{e∈δ(u)} x_e ≤ 1, ∀u ∈ V },
+
+ where δ(u) denotes the set of edges adjacent to u ∈ V. Given
+ b ∈ [0, 1], the OCRS takes as input x ∈ bP_F and samples each edge
+ e with probability x_e to build R(x). Then, upon its arrival, it
+ selects each edge e ∈ R(x) with probability (1 − e^{−x_e})/x_e,
+ and only if it is feasible. The probability of selecting any edge
+ e = (u, v) (conditioned on its being sampled) is then
+
+     ((1 − e^{−x_e})/x_e) · Π_{e′∈δ(u)∪δ(v)\{e}} e^{−x_{e′}}
+         = ((1 − e^{−x_e})/x_e) · e^{−Σ_{e′∈δ(u)∪δ(v)\{e}} x_{e′}}
+         ≥ ((1 − e^{−x_e})/x_e) · e^{2x_e − 2b}
+         ≥ e^{−2b},
+
+ where the first inequality follows from x ∈ bP_F, i.e.,
+ Σ_{e′∈δ(u)\{e}} x_{e′} ≤ b − x_e, and similarly for δ(v). Note
+ that in order to obtain an unconditional probability, we need to
+ multiply the above by a factor x_e.
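The selection rule of Example 1 can be sketched in a few lines. The graph, the point x (which lies in b·P_F for b = 0.8 here, since every vertex load is at most 0.8), the arrival order, and the seeded RNG are illustrative assumptions for the sake of a runnable example.

```python
# Sketch of the greedy OCRS for matching constraints (Example 1):
# each sampled, still-feasible edge is accepted with probability
# (1 - e^{-x_e}) / x_e.
import math
import random

def matching_ocrs(edges, x, arrival_order, rng):
    in_R = [rng.random() < x[j] for j in range(len(edges))]  # R(x)
    matched = set()   # vertices blocked by already selected edges
    selected = []
    for j in arrival_order:
        u, v = edges[j]
        feasible = u not in matched and v not in matched
        if in_R[j] and feasible and rng.random() < (1 - math.exp(-x[j])) / x[j]:
            selected.append(j)
            matched.update((u, v))
    return selected

edges = [(0, 1), (1, 2), (0, 2)]   # triangle graph
print(matching_ocrs(edges, [0.4, 0.4, 0.2], [0, 1, 2], random.Random(0)))
```

By construction the output is always a matching: once an edge is accepted, both endpoints are blocked for the rest of the sequence, regardless of the adversarial arrival order.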
+ We remark that this example closely resembles our introductory
+ motivating application, where financial transactions need to be
+ assigned to reviewers upon their arrival. Moreover, Feldman,
+ Svensson, and Zenklusen (2016) give explicit constructions of
+ (b, c)-selectable greedy OCRSs for knapsack, matching, and
+ matroidal constraints, as well as their intersection. We include a
+ discussion of their feasibility polytopes in Appendix B. Ezra
+ et al. (2020) generalize the above online selection procedure to a
+ setting where elements arrive in batches rather than one at a
+ time; we provide a discussion of this setting in Appendix C.
+ 3 Fully Dynamic Online Selection
+ The fully dynamic online selection problem is characterized by the
+ definition of temporal packing constraints. We generalize the
+ online selection model (Section 2) by introducing an activity time
+ d_e ∈ [m] for each element. Element e arrives at time s_e and, if
+ it is selected by the algorithm, it remains active up to time
+ s_e + d_e and "blocks" other elements from being selected.
+ Elements arriving after that time can be selected by the
+ algorithm. In this setting, each element e ∈ E is characterized by
+ a tuple of attributes z_e := (w_e, d_e). Let F^d := (E, I^d) be
+ the family of temporal packing feasibility constraints in which
+ elements block other elements in the same independent set
+ according to the activity time vector d = (d_e)_{e∈E}. The goal of
+ fully dynamic online selection is to select an independent set in
+ I^d whose cumulative weight is as large as possible (i.e., as
+ close as possible to the offline optimum). We can naturally extend
+ the expression for packing polytopes in the standard setting to
+ the temporal one for every feasibility constraint family, by
+ exploiting the following notion of active elements.
+ Definition 4 (Active Elements). For an element e ∈ E and given
+ {z_e}_{e∈E}, we denote the set of active elements by
+ E_e := {e′ ∈ E : s_{e′} ≤ s_e ≤ s_{e′} + d_{e′}}.³
+ In this setting, we need not select an independent set S ∈ I;
+ less restrictively, we only require that, for each incoming
+ element, we select a feasible subset of the set of active
+ elements.
+ Definition 5 (Temporal packing constraint polytope). Given
+ F = (E, I), the temporal packing constraint polytope
+ P^d_F ⊆ [0, 1]^m is defined as
+ P^d_F := co({1_S : S ∩ E_e ∈ I, ∀e ∈ E}).
+ Observation 1. For a fixed element e, the temporal polytope is the
+ convex hull of the collection containing all the sets S such that
+ S ∩ E_e is feasible. This needs to be true for all e ∈ E, meaning
+ that we can rewrite the polytope and the feasibility set as
+ P^d_F = co( ∩_{e∈E} {1_S : S ∩ E_e ∈ I} ) and
+ I^d = ∩_{e∈E} {S : S ∩ E_e ∈ I}. Moreover, when d and d′ differ
+ in at least one element e, with d_e < d′_e, then E_e ⊆ E′_e, and
+ therefore P^d_F ⊇ P^{d′}_F and I^d ⊇ I^{d′}.
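The set of active elements in Definition 4 is simple to compute from the arrival and activity times. The following sketch uses illustrative element names and times of our own choosing.

```python
# Sketch: computing the set of active elements E_e of Definition 4.

def active_elements(e, s, d):
    """E_e = { e' : s_{e'} <= s_e <= s_{e'} + d_{e'} }."""
    return {ep for ep in s if s[ep] <= s[e] <= s[ep] + d[ep]}

# Three elements arriving at times 1, 2, 4 with activity times 2, 1, 3.
s = {'a': 1, 'b': 2, 'c': 4}
d = {'a': 2, 'b': 1, 'c': 3}
print(active_elements('b', s, d))  # {'a', 'b'}: 'a' is still active at time 2
print(active_elements('c', s, d))  # {'c'}: 'a' and 'b' have expired by time 4
```

Note that e always belongs to its own E_e, matching the equivalent definition in footnote 3.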
+ We now extend Example 1 to account for activity times. In
+ Appendix B we also work out the reduction from standard to
+ temporal packing constraints for a number of examples, including
+ rank-1 matroids (single-choice), knapsack, and general matroid
+ constraints.
+ Example 2. We consider the temporal extension of the matching
+ polytope presented in Example 1, that is,
+
+     P^d_F = { x ∈ [0, 1]^m : Σ_{e′∈δ(u)∩E_e} x_{e′} ≤ 1,
+               ∀u ∈ V, ∀e ∈ E }.
+
+ Let us use the same OCRS as in the previous example, but where
+ "feasibility" only concerns the subset of active edges in
+ δ(u) ∪ δ(v). The probability of selecting an edge e = (u, v) is
+
+     ((1 − e^{−x_e})/x_e) · Π_{e′∈(δ(u)∪δ(v))∩E_e\{e}} e^{−x_{e′}}
+         ≥ ((1 − e^{−x_e})/x_e) · e^{2x_e − 2b} ≥ e^{−2b},
+
+ which is obtained in a similar way to Example 1.
+ The above example suggests looking for a general reduction that
+ maps an OCRS for the standard setting to an OCRS for the temporal
+ setting, while achieving at least the same competitive ratio.
+ ³Note that, since s_{e′} ≠ s_e for distinct elements e, e′, we can
+ equivalently define the set of active elements as
+ E_e := {e′ ∈ E : s_{e′} < s_e ≤ s_{e′} + d_{e′}} ∪ {e}.
+
+ Algorithm 1: Greedy OCRS Black-box Reduction
+ Input: Feasibility families F and F^d, polytopes P_F and P^d_F,
+     OCRS π for F, a point x ∈ bP^d_F
+ Initialize S^d ← ∅
+ Sample R(x) such that Pr[e ∈ R(x)] = x_e
+ for e ∈ E do
+     Upon arrival of element e, compute the set of currently
+         active elements E_e
+     if (S^d ∩ E_e) ∪ {e} ∈ F_{π,x} then
+         Execute the original greedy OCRS π(x)
+         Update S^d accordingly
+     else
+         Discard element e
+ return set S^d
+ 4 OCRS for Fully Dynamic Online Selection
+ The first black-box reduction we provide shows that a
+ (b, c)-selectable greedy OCRS for standard packing constraints
+ implies the existence of a (b, c)-selectable greedy OCRS for
+ temporal constraints. In particular, we show that the original
+ greedy OCRS working for x ∈ bP_F can be used to construct another
+ greedy OCRS for y ∈ bP^d_F. To this end, Algorithm 1 provides a
+ way of exploiting the original OCRS π in order to manage temporal
+ constraints. For each element e, and given the induced subfamily
+ of packing feasible sets F_{π,y}, the algorithm checks whether the
+ set of previously selected elements S^d that are still active in
+ time, together with the new element e, is feasible with respect to
+ F_{π,y}. If that is the case, the algorithm calls the OCRS π.
+ Then, if the OCRS π with input y decides to select the current
+ element e, the algorithm adds it to S^d; otherwise the set remains
+ unaltered. We remark that such a procedure is agnostic to whether
+ the original greedy OCRS is deterministic or randomized. We
+ observe that, due to a larger feasibility constraint family, the
+ number of independent sets has increased with respect to the
+ standard setting. However, we show that this does not constitute a
+ problem, and an equivalence between the two settings can be
+ established through the use of Algorithm 1. The following result
+ shows that Algorithm 1 yields a (b, c)-selectable greedy OCRS for
+ temporal packing constraints.
+ Theorem 1. Let F and F^d be the standard and temporal packing
+ constraint families, respectively, with corresponding polytopes
+ P_F and P^d_F. Let x ∈ bP_F and y ∈ bP^d_F, and consider a
+ (b, c)-selectable greedy OCRS π for F_{π,x}. Then, Algorithm 1
+ equipped with π is a (b, c)-selectable greedy OCRS for F^d_{π,y}.
+ Proof. Let us denote by ˆπ the procedure described in Algorithm 1.
+ First, we show that ˆπ is a greedy OCRS for F^d.
+ Greediness. It is clear from the setting and the construction that
+ elements arrive one at a time, and that ˆπ irrevocably selects an
+ incoming element only if it is feasible, and before seeing the
+ next element. Indeed, in the if statement of Algorithm 1, we check
+ that the active subset of the elements selected so far, together
+ with the newly arriving element e, is feasible with respect to the
+ subfamily F_{π,x} ⊆ F. The constraint subfamily F_{π,x} is induced
+ by the original OCRS π, and the point x belongs to the polytope
+ bP^d_F. Note that we do not necessarily add element e to the
+ running set S^d, even when it is feasible, but act as the original
+ greedy OCRS would have acted. All that is left to show is that
+ such a procedure defines a subfamily of feasibility constraints
+ F^d_{π,x} ⊆ F^d. By construction, on the arrival of each element
+ e, we guarantee that S^d is a set whose subset of active elements
+ is feasible. This means that S^d ∩ E_e ∈ F_{π,x} ⊆ F. Then,
+
+     S^d ∈ F^d_{π,x} := ∩_{e∈E} { S : S ∩ E_e ∈ F_{π,x} }.
+
+ Finally, F_{π,x} ⊆ F implies that F^d_{π,x} ⊆ F^d, which shows
+ that ˆπ is greedy. With the above, we can now turn to
+ demonstrating (b, c)-selectability.
+ Selectability. Upon arrival of an element e ∈ E, let S and S^d be
+ the sets of elements already selected by π and ˆπ, respectively.
+ By the way the constraint families are defined, and by
+ construction of ˆπ, we can observe that, given x ∈ bP^d_F and
+ y ∈ bP_F, for all S ⊆ R(y) such that S ∪ {e} ∈ F_{π,y}, there
+ always exists a set S^d ⊆ R(x) such that
+ (S^d ∩ E_e) ∪ {e} ∈ F_{π,x}. This establishes an injection between
+ the selected set under standard constraints and its counterpart
+ under temporal constraints. We observe that, for all e ∈ E and
+ x ∈ bP^d_F,
+
+     Pr[ S^d ∪ {e} ∈ F^d_{π,x}  ∀S^d ⊆ R(x), S^d ∈ F^d_{π,x} ]
+       = Pr[ (S^d ∩ E_e) ∪ {e} ∈ F_{π,x}
+             ∀S^d ⊆ R(x), S^d ∩ E_e ∈ F_{π,x} ].
+
+ Hence, since for the greedy OCRS π and y ∈ bP_F we have
+ Pr[ S ∪ {e} ∈ F_{π,y}  ∀S ⊆ R(y), S ∈ F_{π,y} ] ≥ c, we can
+ conclude by the injection above that
+
+     Pr[ (S^d ∩ E_e) ∪ {e} ∈ F_{π,x}
+         ∀S^d ⊆ R(x), S^d ∩ E_e ∈ F_{π,x} ] ≥ c.
+
+ The theorem follows.
+ We remark that the above reduction is agnostic to the weight
+ scale, i.e., we need not assume that w_e ∈ [0, 1] for all e ∈ E.
+ In order to further motivate the significance of Algorithm 1 and
+ Theorem 1, in the Appendix we explicitly reduce the standard
+ setting to the fully dynamic one for single-choice constraints,
+ and provide a general recipe for all packing constraints.
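Algorithm 1 can be sketched concretely by specializing it to the temporal matching setting of Example 2, where an accepted edge blocks its endpoints only during its activity window. All concrete data and names below are illustrative assumptions; the acceptance probability mirrors Example 1.

```python
# Sketch of Algorithm 1 specialised to temporal matching constraints.
import math
import random

def temporal_matching_ocrs(edges, x, s, d, rng):
    """A selected edge j blocks its endpoints only during [s_j, s_j + d_j]."""
    order = sorted(range(len(edges)), key=lambda j: s[j])   # arrival order
    in_R = [rng.random() < x[j] for j in range(len(edges))]  # R(x)
    S_d = []  # indices of selected edges
    for j in order:
        u, v = edges[j]
        # E_e: previously selected edges still active at time s[j]
        active = [i for i in S_d if s[i] <= s[j] <= s[i] + d[i]]
        feasible = all(not ({u, v} & set(edges[i])) for i in active)
        if in_R[j] and feasible and rng.random() < (1 - math.exp(-x[j])) / x[j]:
            S_d.append(j)
    return S_d
```

Two edges sharing a vertex can now both be selected when their activity windows are disjoint, which is exactly the extra freedom of I^d over I; when the windows overlap, the original blocking behavior is recovered.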
+ 5 Fully Dynamic Online Selection under Partial Information
+ In this section, we study the case in which the decision-maker has
+ to act under partial information. In particular, we focus on the
+ following online sequential extension of the full-information
+ problem: at each stage t ∈ [T], a decision-maker faces a new
+ instance of the fully dynamic online selection problem. An unknown
+ vector of weights w_t ∈ [0, 1]^{|E|} is chosen by an adversary at
+ each stage t, while the feasibility set F^d is known and fixed
+ across all T stages. This setting is analogous to the one recently
+ studied by Gergatsouli and Tzamos (2022) in the context of
+ Pandora's box problems. A crucial difference with the online
+ selection problem with full information studied in Section 4 is
+ that, at each stage t, the decision-maker has to decide whether to
+ select or discard an element before observing its weight. In
+ particular, at each t, the decision-maker takes an action
+ a_t := 1_{S^d_t}, where S^d_t ∈ I^d is the feasible set selected
+ at stage t. The choice of a_t is made before observing w_t. The
+ objective of maximizing the cumulative sum of weights is encoded
+ in the reward function f : [0, 1]^{2m} ∋ (a, w) ↦ ⟨a, w⟩ ∈ [0, 1],
+ which is the reward obtained by playing a with weights
+ w = (w_e)_{e∈E}.⁴
+ In this setting, we can think of I^d as the set of super-arms in a
+ combinatorial online optimization problem. Our goal is to design
+ online algorithms whose performance is close to that of the best
+ fixed super-arm in hindsight.⁵ In the analysis, as is customary
+ when the online optimization problem has an NP-hard offline
+ counterpart, we resort to the notion of α-regret. In particular,
+ given a set of feasible actions X, we define an algorithm's
+ α-regret up to time T as
+
+     Regret_α(T) := α max_{x∈X} Σ_{t=1}^T f(x, w_t)
+                      − E[ Σ_{t=1}^T f(x_t, w_t) ],
+
+ where α ∈ (0, 1] and x_t is the strategy output by the online
+ algorithm at time t. We say that an algorithm has the no-α-regret
+ property if Regret_α(T)/T → 0 as T → ∞.
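As a sanity check, the α-regret of a finished run can be computed directly from this definition (with linear rewards, so f(x, w_t) = ⟨x, w_t⟩ and the best fixed action's cumulative reward is a simple maximum). The action set and numbers below are illustrative assumptions.

```python
# Sketch: empirical alpha-regret for a finite action set.

def alpha_regret(alpha, rewards_per_action, realized_rewards):
    """Regret_alpha(T) = alpha * max_x sum_t f(x, w_t) - sum_t f(x_t, w_t).

    rewards_per_action[i][t] is the reward fixed action i would earn at t;
    realized_rewards[t] is what the algorithm actually earned at t.
    """
    best_fixed = max(sum(r) for r in rewards_per_action)
    return alpha * best_fixed - sum(realized_rewards)

# Two fixed actions over T = 3 stages; the algorithm earned 1.4 in total.
print(alpha_regret(0.5, [[1.0, 0.2, 0.8], [0.1, 0.9, 0.3]], [0.5, 0.4, 0.5]))
```

A negative value simply means the algorithm beat the α-discounted benchmark on this sequence; the no-α-regret property only requires the time-averaged value to vanish as T grows.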
+ The main result of this section is a black-box reduction that
+ yields a no-α-regret algorithm for any fully dynamic online
+ selection problem admitting a temporal OCRS. We provide
+ no-α-regret frameworks for three scenarios:
+ • full-feedback model: after selecting a_t, the decision-maker
+ observes the exact reward function f(·, w_t).
+ • semi-bandit feedback with white-box OCRS: after taking a
+ decision at time t, the algorithm observes w_{t,e} for each
+ element e ∈ S^d_t (i.e., each element selected at t). Moreover,
+ the decision-maker has exact knowledge of the procedure employed
+ by the OCRS, which can be easily simulated.
+ • semi-bandit feedback with oracle access to the OCRS: the
+ decision-maker has semi-bandit feedback, and the OCRS is given as
+ a black box that can be queried once per stage t.
+ Full-feedback Setting
+ In this setting, after selecting a_t, the decision-maker gets to
+ observe the reward function f(·, w_t). In order to achieve
+ performance close to that of the best fixed super-arm in
+ hindsight, the idea is to employ the α-competitive OCRS designed
+ in Section 4, feeding it with a fractional solution
+ ⁴The analysis can be easily extended to arbitrary functions linear
+ in both terms.
+ ⁵As we argue in Appendix D, it is not possible to be competitive
+ with respect to more powerful benchmarks.
+ Algorithm 2: FULL-FEEDBACK ALGORITHM
+ Input: T, F^d, temporal OCRS ˆπ, subroutine RM
+ Initialize RM for strategy space P^d_F
+ for t ∈ [T] do
+     x_t ← RM.RECOMMEND()
+     a_t ← execute OCRS ˆπ with input x_t
+     Play a_t, and subsequently observe f(·, w_t)
+     RM.UPDATE(f(·, w_t))
+ x_t computed by considering the weights selected by the adversary
+ up to time t − 1.⁶
+ Let us assume we have at our disposal a no-α-regret algorithm for
+ the decision space P^d_F. We denote such a regret minimizer by RM,
+ and we assume it offers two basic operations: (i) RM.RECOMMEND()
+ returns a vector in P^d_F; (ii) RM.UPDATE(f(·, w)) updates the
+ internal state of the regret minimizer using the feedback received
+ from the environment in the form of a reward function f(·, w).
+ Notice that the availability of such a component is not enough to
+ solve our problem, since at each t we can only play a super-arm
+ a_t ∈ {0, 1}^m feasible for F^d, and not the strategy
+ x_t ∈ P^d_F ⊆ [0, 1]^m returned by RM. The decision-maker can
+ exploit the subroutine RM together with a temporal greedy OCRS ˆπ
+ by following Algorithm 2. We can show that, if the algorithm
+ employs a regret minimizer for P^d_F with a sublinear cumulative
+ regret upper bound R_T, the following result holds.
+ Theorem 2. Given a regret minimizer RM for decision space P^d_F
+ with cumulative regret upper bound R_T, and an α-competitive
+ temporal greedy OCRS, Algorithm 2 guarantees
+
+     α max_{S∈I^d} Σ_{t=1}^T f(1_S, w_t)
+         − E[ Σ_{t=1}^T f(a_t, w_t) ] ≤ R_T.
+ Since we are assuming the existence of a polynomial-time
+ separation oracle for the set P^d_F, the LP
+ arg max_{x∈P^d_F} f(x, w) can be solved in polynomial time for any
+ w. Therefore, we can instantiate a regret minimizer for P^d_F by
+ using, for example, follow-the-regularized-leader, which yields
+ R_T ≤ Õ(m√T) (Orabona 2019).
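The loop of Algorithm 2 can be sketched end to end. Here the regret minimizer is a toy follow-the-leader over a finite set of fractional points, standing in for FTRL over P^d_F, and the OCRS is any rounding routine with the interface shown; all names and data are illustrative assumptions.

```python
# Sketch of the full-feedback loop of Algorithm 2 with a toy RM.

class FollowTheLeader:
    def __init__(self, candidates):
        self.candidates = candidates          # finite subset of P^d_F
        self.cum = [0.0] * len(candidates)    # cumulative reward per point

    def recommend(self):
        best = max(range(len(self.candidates)), key=lambda i: self.cum[i])
        return self.candidates[best]

    def update(self, w):
        # full feedback: f(., w_t) = <., w_t> is known for every candidate
        for i, x in enumerate(self.candidates):
            self.cum[i] += sum(a * b for a, b in zip(x, w))

def run_full_feedback(rm, ocrs, weights_per_stage):
    total = 0.0
    for w in weights_per_stage:
        x_t = rm.recommend()                  # fractional strategy
        a_t = ocrs(x_t)                       # round x_t to a feasible set
        total += sum(a * b for a, b in zip(a_t, w))
        rm.update(w)                          # observe f(., w_t)
    return total

rm = FollowTheLeader([[1, 0], [0, 1]])
threshold_ocrs = lambda x: [1 if v >= 0.5 else 0 for v in x]
print(run_full_feedback(rm, threshold_ocrs, [[1.0, 0.0]] * 3))  # 3.0
```

The key structural point of Theorem 2 is visible in the interface: RM is updated with the full reward function, independently of what the OCRS actually rounded a_t to, so the OCRS only costs the factor α in the comparison, not in the learning.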
+ Semi-Bandit Feedback with White-Box OCRS
+ In this setting, given a temporal OCRS ˆπ, it is enough to show
+ that we can compute the probability that a certain super-arm a is
+ selected by ˆπ, given a certain order of arrivals at stage t and a
+ vector of weights w. If that is the case, we can build a
+ no-α-regret algorithm with a regret upper bound of Õ(m√T) by
+ employing Algorithm 2 and by instantiating the regret minimizer RM
+ as the online stochastic mirror descent (OSMD) framework of
+ Audibert, Bubeck, and Lugosi (2014). We observe that the regret
+ bound obtained in this way is tight in the semi-bandit setting
+ (Audibert, Bubeck, and Lugosi 2014). Let q_t(e) be the probability
+ ⁶We remark that a (b, c)-selectable OCRS yields a bc competitive
+ ratio. In the following, we let α := bc.
+
+ Algorithm 3: SEMI-BANDIT-FEEDBACK ALGORITHM WITH ORACLE ACCESS TO
+ OCRS
+ Input: T, F^d, temporal OCRS ˆπ, full-feedback algorithm RM for
+     decision space P^d_F
+ Let Z be initialized as in Theorem 3, and initialize RM
+     appropriately
+ for τ = 1, . . . , Z do
+     I_τ ← {(τ − 1)T/Z + 1, . . . , τT/Z}
+     Choose a random permutation p : [m] → E, and stages
+         t_1, . . . , t_m at random from I_τ
+     x_τ ← RM.RECOMMEND()
+     for t = (τ − 1)T/Z + 1, . . . , τT/Z do
+         if t = t_j for some j ∈ [m] then
+             x_t ← 1_{S^d} for a feasible set S^d containing p(j)
+         else
+             x_t ← x_τ
+         Play a_t obtained from the OCRS ˆπ executed with
+             fractional solution x_t
+     Compute estimators ˜f_τ(e) of
+         f_τ(e) := (1/|I_τ|) Σ_{t∈I_τ} f(1_e, w_t) for each e ∈ E
+     RM.UPDATE(˜f_τ(·))
+ with which our algorithm selects element e at time t. Then, we can
+ equip OSMD with the following unbiased estimator of the vector of
+ weights: ˆw_{t,e} := w_{t,e} a_{t,e}/q_t(e).⁷ In order to compute
+ q_t(·), we need to have observed the order of arrival at stage t
+ and the weights corresponding to super-arm a_t, and we need to be
+ able to compute the probability with which the OCRS selected e at
+ t. This is the reason why we speak of a "white-box" OCRS: we need
+ to simulate ex post the procedure followed by the OCRS in order to
+ compute q_t(·). When we know the procedure followed by the OCRS,
+ we can always compute q_t(e) for any element e selected at stage
+ t, since at the end of stage t we know the order of arrival, the
+ weights of the selected elements, and the initial fractional
+ solution x_t. We provide further intuition on how to compute such
+ probabilities through the running example of matching constraints.
+ Example 3. Consider Algorithm 2 initialized with the OCRS of
+ Example 1. Given stage t, we can safely limit our attention to
+ selected edges (i.e., elements e such that a_{t,e} = 1). Indeed,
+ all other edges are either infeasible (which implies that the
+ probability of selecting them is 0), or they were not selected
+ despite being feasible. Consider an arbitrary element e among
+ those selected. Conditioned on the past choices up to element e,
+ we know that e ∈ a_t is feasible with certainty, and thus the
+ (unconditional) probability that it is selected is simply
+ q_t(e) = 1 − e^{−x_{t,e}}.
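The importance-weighted estimator ˆw_{t,e} = w_{t,e} a_{t,e}/q_t(e) is a one-liner once the selection probabilities q_t(·) are known. The concrete numbers below are illustrative assumptions.

```python
# Sketch: the unbiased weight estimator of the white-box
# semi-bandit setting, hat w_{t,e} = w_{t,e} * a_{t,e} / q_t(e).

def estimate_weights(w, a, q):
    """w[e]: realized weight; a[e] = 1 iff e was selected at stage t;
    q[e]: probability with which the OCRS selects e at stage t."""
    return [w[e] * a[e] / q[e] if a[e] else 0.0 for e in range(len(w))]

w = [0.6, 0.3, 0.9]   # only w[0] and w[2] are actually observed
a = [1, 0, 1]
q = [0.5, 0.4, 0.9]
print(estimate_weights(w, a, q))  # [1.2, 0.0, 1.0]
```

Unbiasedness follows since E[ˆw_{t,e}] = q_t(e) · w_{t,e}/q_t(e) = w_{t,e}, and the estimator never needs the weights of unselected elements, which is exactly what semi-bandit feedback withholds.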
+ Semi-Bandit Feedback and Oracle Access to OCRS
+ As in the previous case, at each stage t the decision-maker can
+ only observe the weights associated with the edges selected by
+ a_t. Therefore, they have no counterfactual information on the
+ reward they would have obtained had they selected a different
+ feasible set. On top of that, we assume that the OCRS is given as
+ a black box, and therefore we cannot compute ex post the
+ probabilities q_t(e) for the selected elements. However, we show
+ that it is possible to tackle this setting by exploiting a
+ reduction from the semi-bandit feedback setting to the
+ full-information one. In doing so, we follow the approach first
+ proposed by Awerbuch and Kleinberg (2008). The idea is to split
+ the time horizon T into a given number of equally sized blocks.
+ Each block allows the decision-maker to simulate a single stage of
+ the full-information setting. We denote the number of blocks by Z,
+ and each block τ ∈ [Z] is composed of a sequence of consecutive
+ stages I_τ. Algorithm 3 describes the main steps of our procedure.
+ In particular, the algorithm employs a procedure RM, an algorithm
+ for the full-feedback setting such as the one described in the
+ previous section, which exposes an interface with the two
+ operations of a traditional regret minimizer. During each block τ,
+ the full-information subroutine is used to compute a vector x_τ.
+ Then, in most stages of the window I_τ, the decision a_t is
+ computed by feeding x_τ to the OCRS. A few stages are chosen
+ uniformly at random to estimate the utilities provided by other
+ feasible sets (i.e., an exploration phase). After the execution of
+ all the stages in the window I_τ, the algorithm computes estimated
+ reward functions and uses them to update the full-information
+ regret minimizer.
+ Let p : [m] → E be a random permutation of the elements of E.
+ Then, for each e ∈ E, letting j be the index such that p(j) = e in
+ the current block τ, an unbiased estimator ˜f_τ(e) of
+ f_τ(e) := (1/|I_τ|) Σ_{t∈I_τ} f(1_e, w_t) can be easily obtained
+ by setting ˜f_τ(e) := f(1_e, w_{t_j}). Then, it is possible to
+ show that our algorithm provides the following guarantees.
+ ⁷We observe that ˆw_{t,e} is equal to 0 when e has not been
+ selected at stage t because, in that case, a_{t,e} = 0.
+ Theorem 3. Given a temporal packing feasibility set F^d and an
+ α-competitive OCRS ˆπ, let Z = T^{2/3}, and let the full-feedback
+ subroutine RM be defined as per Theorem 2. Then Algorithm 3
+ guarantees that
+
+     α max_{S∈I^d} Σ_{t=1}^T f(1_S, w_t)
+         − E[ Σ_{t=1}^T f(a_t, w_t) ] ≤ Õ(T^{2/3}).
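The block structure behind Algorithm 3 and Theorem 3 can be sketched directly. For simplicity the sketch assumes Z divides T; the concrete horizon and the function names are illustrative assumptions.

```python
# Sketch: the blocking scheme of Algorithm 3 with Z = T^{2/3} blocks,
# each simulating one full-feedback stage.

def blocks(T, Z):
    """Partition [1..T] into Z consecutive blocks I_1, ..., I_Z
    (assuming Z divides T)."""
    size = T // Z
    return [list(range((tau - 1) * size + 1, tau * size + 1))
            for tau in range(1, Z + 1)]

def block_estimator(e_index, t_explore, weights):
    """tilde f_tau(e) := f(1_e, w_{t_j}), read off at the single
    exploration stage t_j reserved for element e in this block."""
    return weights[t_explore][e_index]

T = 27
Z = round(T ** (2 / 3))           # Z = T^{2/3} = 9 blocks of length 3
I = blocks(T, Z)
print(len(I), I[0], I[-1])        # 9 [1, 2, 3] [25, 26, 27]
```

Each block spends m of its stages on exploration and the rest exploiting x_τ, which is where the T^{2/3} rate comes from: more blocks mean better full-information simulation but more exploration stages overall.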
+ 6 Conclusion and Future Work
+ In this paper we introduce fully dynamic online selection
+ problems, in which selected items affect the combinatorial
+ constraints during their activity times. We present a
+ generalization of the OCRS approach that provides near-optimal
+ competitive ratios in the full-information model, and no-α-regret
+ algorithms with polynomial per-iteration running time under both
+ full and semi-bandit feedback. Our framework opens various future
+ research directions. For example, it would be particularly
+ interesting to understand whether a variation of Algorithms 2
+ and 3 can be extended to the case in which the adversary changes
+ the constraint family at each stage. Moreover, the study of the
+ bandit-feedback model remains open, and no regret bound is known
+ for that setting.
+
+ Acknowledgements
+ The authors from Sapienza are supported by the Meta Research grant
+ on "Fairness and Mechanism Design", the ERC Advanced Grant 788893
+ AMDROMA "Algorithmic and Mechanism Design Research in Online
+ Markets", and the MIUR PRIN project ALGADIMAR "Algorithms, Games,
+ and Digital Markets".
+ References
+ Abernethy, J. D.; Hazan, E.; and Rakhlin, A. 2009. Competing in
+ the dark: An efficient algorithm for bandit linear optimization.
+ COLT.
+ Atsidakou, A.; Papadigenopoulos, O.; Basu, S.; Caramanis, C.; and
+ Shakkottai, S. 2021. Combinatorial Blocking Bandits with
+ Stochastic Delays. In Proceedings of the 38th International
+ Conference on Machine Learning, ICML 2021, 18-24 July 2021,
+ Virtual Event, 404–413.
+ Audibert, J.-Y.; Bubeck, S.; and Lugosi, G. 2014. Regret in online
+ combinatorial optimization. Mathematics of Operations Research,
+ 39(1): 31–45.
+ Awerbuch, B.; and Kleinberg, R. 2008. Online linear optimization
+ and adaptive routing. Journal of Computer and System Sciences,
+ 74(1): 97–114.
+ Basu, S.; Papadigenopoulos, O.; Caramanis, C.; and Shakkottai, S.
+ 2021. Contextual Blocking Bandits. In The 24th International
+ Conference on Artificial Intelligence and Statistics, AISTATS
+ 2021, April 13-15, 2021, Virtual Event, 271–279.
+ Basu, S.; Sen, R.; Sanghavi, S.; and Shakkottai, S. 2019. Blocking
+ Bandits. In Wallach, H.; Larochelle, H.; Beygelzimer, A.;
+ d'Alché-Buc, F.; Fox, E.; and Garnett, R., eds., Advances in
+ Neural Information Processing Systems, volume 32. Curran
+ Associates, Inc.
+ Bishop, N.; Chan, H.; Mandal, D.; and Tran-Thanh, L. 2020.
+ Adversarial Blocking Bandits. In Advances in Neural Information
+ Processing Systems 33: Annual Conference on Neural Information
+ Processing Systems 2020, NeurIPS 2020, December 6-12, 2020,
+ virtual.
+ Cesa-Bianchi, N.; and Lugosi, G. 2012. Combinatorial bandits.
+ Journal of Computer and System Sciences, 78(5): 1404–1422.
+ Chawla, S.; Hartline, J. D.; Malec, D. L.; and Sivan, B. 2010.
+ Multi-Parameter Mechanism Design and Sequential Posted Pricing. In
+ Proceedings of the Forty-Second ACM Symposium on Theory of
+ Computing, STOC '10, 311–320. New York, NY, USA: Association for
+ Computing Machinery. ISBN 9781450300506.
+ Chekuri, C.; Vondrák, J.; and Zenklusen, R. 2011. Submodular
+ Function Maximization via the Multilinear Relaxation and
+ Contention Resolution Schemes. In Proceedings of the Forty-Third
+ Annual ACM Symposium on Theory of Computing, STOC '11, 783–792.
+ New York, NY, USA: Association for Computing Machinery. ISBN
+ 9781450306911.
+ Chen, W.; Wang, Y.; and Yuan, Y. 2013. Combinatorial multi-armed
+ bandit: General framework and applications. In International
+ Conference on Machine Learning, 151–159. PMLR.
+ Demetrescu, C.; Eppstein, D.; Galil, Z.; and Italiano, G. F. 2010.
+ Dynamic Graph Algorithms, chapter 9. Chapman & Hall/CRC, 2nd
+ edition. ISBN 9781584888222.
+ Dickerson, J.; Sankararaman, K.; Srinivasan, A.; and Xu, P. 2018.
+ Allocation problems in ride-sharing platforms: Online matching
+ with offline reusable resources. In Proceedings of the AAAI
+ Conference on Artificial Intelligence, volume 32.
+ Ezra, T.; Feldman, M.; Gravin, N.; and Tang, Z. G. 2020. Online
+ Stochastic Max-Weight Matching: Prophet Inequality for Vertex and
+ Edge Arrival Models. In EC '20, 769–787.
+ Feige, U. 1998. A Threshold of ln n for Approximating Set Cover.
+ J. ACM, 45(4): 634–652.
+ Feldman, M.; Svensson, O.; and Zenklusen, R. 2016. Online
+ Contention Resolution Schemes. In Krauthgamer, R., ed.,
+ Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on
+ Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10-12,
+ 2016, 1014–1033. SIAM.
+ Gergatsouli, E.; and Tzamos, C. 2022. Online Learning for Min Sum
+ Set Cover and Pandora's Box. In Proceedings of the 39th
+ International Conference on Machine Learning, volume 162 of
+ Proceedings of Machine Learning Research, 7382–7403.
+ Gupta, A.; and Nagarajan, V. 2013. A Stochastic Probing Problem
+ with Applications. In Proceedings of the 16th International
+ Conference on Integer Programming and Combinatorial Optimization,
+ IPCO '13, 205–216. Berlin, Heidelberg: Springer-Verlag. ISBN
+ 9783642366932.
+ György, A.; Linder, T.; Lugosi, G.; and Ottucsák, G. 2007. The
+ On-Line Shortest Path Problem Under Partial Monitoring. Journal of
+ Machine Learning Research, 8(10).
+ Hajiaghayi, M. T.; Kleinberg, R.; and Sandholm, T. 2007. Automated
+ Online Mechanism Design and Prophet Inequalities. In Proceedings
+ of the 22nd National Conference on Artificial Intelligence -
+ Volume 1, AAAI '07, 58–65. AAAI Press. ISBN 9781577353232.
+ Kesselheim, T.; and Mehlhorn, K. 2016. Lecture 2: Yao's
991
+ Principle and the Secretary Problem.
992
+ Randomized Al-
993
+ gorithms and Probabilistic Analysis of Algorithms, Max
994
+ Planck Institute for Informatics, Saarbr¨ucken, Germany.
995
+ Kleinberg, R.; Niculescu-Mizil, A.; and Sharma, Y. 2010.
996
+ Regret Bounds for Sleeping Experts and Bandits.
997
+ Mach.
998
+ Learn., 80(2–3): 245–272.
999
+ Kleinberg, R.; and Weinberg, S. M. 2012. Matroid prophet
1000
+ inequalities. In STOC’12, 123–136.
1001
+ Kveton, B.; Wen, Z.; Ashkan, A.; and Szepesvari, C.
1002
+ 2015. Tight regret bounds for stochastic combinatorial semi-
1003
+ bandits. In Artificial Intelligence and Statistics, 535–543.
1004
+ PMLR.
1005
+ Livanos, V. 2021. A Simple and Tight Greedy OCRS. CoRR,
1006
+ abs/2111.13253.
1007
+ McMahan, H. B.; and Blum, A. 2004. Online geometric
1008
+ optimization in the bandit setting against an adaptive adver-
1009
+ sary. In International Conference on Computational Learn-
1010
+ ing Theory, 109–123. Springer.
1011
+ Orabona, F. 2019. A modern introduction to online learning.
1012
+ arXiv preprint arXiv:1912.13213.
1013
+
1014
A
Contention Resolution Schemes and Online Contention Resolution Schemes
As explained at length in Section 2, our goal in general is to find the independent set of maximum weight for a given feasibility constraint family. However, doing this directly might be intractable in general, and we need to aim for a good approximation of the optimum. In particular, given a non-negative submodular function f : [0, 1]^m → R_{≥0} and a family of packing constraints F, we start from an ex ante feasible solution to the linear program max_{x∈P_F} f(x), which upper bounds the optimal value achievable. An ex ante feasible solution is simply a distribution over the independent sets of F, given by a vector x in the packing constraint polytope of F. A key observation is that we can interpret the ex ante optimal solution to the above linear program as a vector x* of fractional values, which induces a distribution over elements such that x*_e is the marginal probability that element e ∈ E is included in the optimum. Then, we use this solution to obtain a feasible solution that suitably approximates the optimum. The random set R(x*) constructed by ex ante selecting each element independently with probability x*_e can be infeasible. Contention Resolution Schemes (Chekuri, Vondrák, and Zenklusen 2011) are procedures that, starting from the random set of sampled elements R(x*), construct a feasible solution with good approximation guarantees with respect to the optimal solution of the original integer linear program.
Definition 6 (Contention Resolution Schemes (CRSs) (Chekuri, Vondrák, and Zenklusen 2011)). For b, c ∈ [0, 1], a (b, c)-balanced Contention Resolution Scheme (CRS) π for F = (E, I) is a procedure that, for every ex-ante feasible solution x ∈ bP_F (i.e., the down-scaled version of polytope P_F) and every subset S ⊆ E, returns a random set π(x, S) ⊆ S satisfying the following properties:
1. Feasibility: π(x, S) ∈ I.
2. c-balancedness: Pr_{π,R(x)} [e ∈ π(x, R(x)) | e ∈ R(x)] ≥ c, ∀e ∈ E.
When elements arrive in an online fashion, Feldman, Svensson, and Zenklusen (2016) extend CRSs to the notion of OCRSs, where R(x) is obtained in the same manner, but elements are revealed one by one in adversarial order. The procedure has to decide irrevocably whether or not to add the current element to the final solution set, which needs to be feasible and competitive against the offline optimum. The idea is that adding a sampled element e ∈ E to the set of already selected elements S ⊆ R(x) maintains feasibility with at least constant probability, regardless of the element and the set. This gives rise to Definition 3 and the subsequent discussion.
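As a concrete illustration of why contention resolution is needed at all, the following Python snippet (a toy instance made up for this illustration, not taken from the paper) estimates how often the independently sampled set R(x) violates a rank-1 matroid constraint, i.e., contains more than one element, even though x lies in the feasibility polytope:

```python
import random

# Toy rank-1 matroid over 3 elements: at most one element may be kept.
# The marginal vector x is ex-ante feasible since sum(x) <= 1.
x = [0.3, 0.3, 0.4]

random.seed(0)
TRIALS = 100_000
infeasible = 0
for _ in range(TRIALS):
    # Sample R(x): include each element independently with probability x_e.
    R = [e for e, xe in enumerate(x) if random.random() < xe]
    if len(R) > 1:  # violates the rank-1 constraint
        infeasible += 1

frac = infeasible / TRIALS
```

For x = (0.3, 0.3, 0.4), the exact violation probability is 1 − Pr[|R(x)| ≤ 1] = 1 − 0.294 − 0.448 = 0.258, so a scheme that simply returns R(x) is far from feasible; a CRS must drop contended elements while keeping each sampled element with conditional probability at least c.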
B
Examples
In this section, we provide some clarifying examples for the concepts introduced in Sections 2 and 3.
Polytopes
Example 4 provides the definition of the constraint polytopes of some standard problems, while Example 5 describes their temporal version. For a set S ⊆ E and x ∈ R^m, we define, with a slight abuse of notation, x(S) := Σ_{e∈S} x_e.
Example 4 (Standard Polytopes). Given a ground set E,
• Let K = (E, I) be a knapsack constraint. Then, given budget B > 0 and a vector of elements' sizes c ∈ R^m_{≥0}, its feasibility polytope is defined as
P_K = {x ∈ [0, 1]^m : ⟨c, x⟩ ≤ B}.
• Let G = (E, I) be a matching constraint. Then, its feasibility polytope is defined as
P_G = {x ∈ [0, 1]^m : x(δ(u)) ≤ 1, ∀u ∈ V},
where δ(u) denotes the set of all edges adjacent to u ∈ V. Note that the ground set in this case is the set of all edges of graph G = (V, E).
• Let M = (E, I) be a matroid constraint. Then, its feasibility polytope is defined as
P_M = {x ∈ [0, 1]^m : x(S) ≤ rank(S), ∀S ⊆ E}.
Here, rank(S) := max {|I| : I ⊆ S, I ∈ I}, i.e., the cardinality of the maximum independent set contained in S.
We can now rewrite the above polytopes under temporal packing constraints.
Example 5 (Temporal Polytopes). For ground set E,
• Let K = (E, I) be a knapsack constraint. Then, for B > 0 and cost vector c ∈ R^m_{≥0}, its feasibility polytope is defined as
P^d_K = {x ∈ [0, 1]^m : Σ_{e'∈E_e} c_{e'} x_{e'} ≤ B, ∀e ∈ E}.
• Let G = (E, I) be a matching constraint. Then, its feasibility polytope is defined as
P^d_G = {x ∈ [0, 1]^m : x(δ(u) ∩ E_e) ≤ 1, ∀u ∈ V, ∀e ∈ E}.
• Let M = (E, I) be a matroid constraint. Then, its feasibility polytope is defined as
P^d_M = {x ∈ [0, 1]^m : x(S ∩ E_e) ≤ rank(S), ∀S ⊆ E, ∀e ∈ E}.
We also note that, for general packing constraints, if d_e = ∞ for all e ∈ E, then E_e = E and P^∞_F = P_F, and similarly for the constraint family F^∞ = F.
From Standard OCRS to Temporal OCRS for Rank-1 Matroids, Matchings, Knapsacks, and General Matroids
In this section, we explicitly derive a (1, 1/e)-selectable (randomized) temporal greedy OCRS for the rank-1 matroid feasibility constraint from a (1, 1/e)-selectable (randomized) greedy OCRS in the standard setting (Livanos 2021), which is also tight. Let us denote this standard OCRS as π_M, where M is a rank-1 matroid.
Corollary 1. For the rank-1 matroid feasibility constraint family under temporal constraints, Algorithm 1 produces a (1, 1/e)-selectable (randomized) temporal greedy OCRS ˆπ_M from π_M.
Proof. Since it is clear from context, we drop the dependence on M and write π, ˆπ. We proceed by comparing side-by-side what happens in π and in ˆπ. Let us recall from Examples 4 and 5 that the polytopes can respectively be written as
P_M = {x ∈ [0, 1]^m : x(S) ≤ 1, ∀S ⊆ E},
P^d_M = {y ∈ [0, 1]^m : y(S ∩ E_e) ≤ 1, ∀S ⊆ E, ∀e ∈ E}.
The two OCRSs perform the following steps, on the basis of Algorithm 1. On one hand, π defines a subfamily of constraints F_{π,x} := {{e} : e ∈ H(x)}, where e ∈ E is included in the random subset H(x) ⊆ E with probability (1 − e^{−x_e})/x_e. Then, it selects the first sampled element e ∈ R(x) such that {e} ∈ F_{π,x}. On the other hand, ˆπ defines a subfamily of constraints F^d_{π,y} := {{e} : e ∈ H(y)}, where e ∈ E is included in the random subset H(y) ⊆ E with probability q_e(y) = (1 − e^{−y_e})/y_e. The feasibility family F^d_{π,y} induces, as per Observation 1, a sequence of feasibility families F_{π,y}(e) := {{e'} : e' ∈ H(y) ∩ E_e}, for each e ∈ E. Upon each arrival, the temporal OCRS selects the first sampled element e ∈ R(y) such that {e} ∈ F_{π,y}(e). In other words, the temporal OCRS selects a sampled element that is active only if no other element in its active-elements set has been selected earlier. It is clear that both are randomized greedy OCRSs.
We now show that each element e is selected with probability at least 1/e in both π and ˆπ. In π, element e is selected if it is sampled and no earlier element has been selected (i.e., its singleton set belongs to the subfamily F_{π,x}). An element e' is not selected with probability 1 − x_{e'} · (1 − e^{−x_{e'}})/x_{e'} = e^{−x_{e'}}. This means that the probability of e being selected is
(1 − e^{−x_e})/x_e · ∏_{s_{e'} < s_e} e^{−x_{e'}} = (1 − e^{−x_e})/x_e · e^{−Σ_{s_{e'} < s_e} x_{e'}} ≥ (1 − e^{−x_e}) · e^{x_e − 1}/x_e ≥ 1/e,
where the first inequality is justified by Σ_{s_{e'} < s_e} x_{e'} + x_e ≤ 1, and the second follows because the expression is minimized for x_e = 0. Similarly, in ˆπ, element e is selected if it is sampled and no earlier element that is still active has been selected (i.e., its singleton set belongs to the subfamily F_{π,y}(e)). The probability of e being selected is
(1 − e^{−y_e})/y_e · ∏_{s_{e'} < s_e : e'∈E_e} e^{−y_{e'}} = (1 − e^{−y_e})/y_e · e^{−Σ_{s_{e'} < s_e : e'∈E_e} y_{e'}} ≥ (1 − e^{−y_e}) · e^{y_e − 1}/y_e ≥ 1/e.
Again, the first inequality is justified by Σ_{s_{e'} < s_e : e'∈E_e} y_{e'} + y_e ≤ 1, which holds by the temporal feasibility constraints, and the second follows because the expression is minimized for y_e = 0. Selectability is thus shown.
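The selection probabilities computed in this proof can be checked numerically. The sketch below simulates the (1, 1/e) greedy OCRS for a rank-1 matroid on a made-up marginal vector (when all activity times are infinite, E_e = E and the temporal scheme coincides with this one); it estimates Pr[e selected | e sampled] for each element, which should be at least 1/e:

```python
import math
import random

# Made-up marginals in the rank-1 matroid polytope (sum <= 1), in arrival order.
x = [0.3, 0.3, 0.4]
# Element e enters H(x) with probability (1 - e^{-x_e}) / x_e.
q = [(1 - math.exp(-xe)) / xe for xe in x]

random.seed(1)
TRIALS = 200_000
sampled = [0] * len(x)
selected = [0] * len(x)
for _ in range(TRIALS):
    taken = False
    for e, xe in enumerate(x):
        is_sampled = random.random() < xe    # e in R(x)
        in_H = random.random() < q[e]        # e in H(x)
        if is_sampled:
            sampled[e] += 1
            # Greedy rule: accept the first sampled element whose singleton
            # lies in the subfamily, if nothing was accepted before.
            if in_H and not taken:
                selected[e] += 1
                taken = True

ratios = [selected[e] / sampled[e] for e in range(len(x))]
```

Each ratio estimates (1 − e^{−x_e})/x_e · ∏_{e' before e} e^{−x_{e'}}; for the vector above these are roughly 0.86, 0.64, and 0.45, all above 1/e ≈ 0.368, matching the displayed bound.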
Remark 1. Adapting the OCRSs in Theorem 1.8 of Feldman, Svensson, and Zenklusen (2016) for general matroids, matchings, and knapsacks by following Algorithm 1 step-by-step, we get the same selectability guarantees in the temporal settings as in the standard ones: respectively, (b, 1 − b), (b, e^{−2b}), and (b, (1 − 2b)/(2 − 2b)). There are two crucial steps to map a standard OCRS into a temporal one, as exemplified by Corollary 1:
1. We first need to define the temporal constraints based on the standard ones. This is done simply by enforcing the constraint of the standard setting only for the current set of active elements, i.e., transforming F_{π,x} into F_{π,y}(e) for all elements e ∈ E. Such a transformation is analogous to the one used to go from Example 4 to Example 5.
2. When proving selectability, the probability of feasibility is only calculated on elements e' belonging to the same independent set as e (which arrives later) that are still active. This means that the probability computation is confined to only e' ∈ E_e such that s_{e'} < s_e, rather than all e' ∈ E such that s_{e'} < s_e.
C
Batched Arrival: Matching Constraints
As mentioned in Section 1, Ezra et al. (2020) generalize the one-by-one online selection problem to a setting where elements arrive in batches. The existence of batched greedy OCRSs implies a number of results, for instance Prophet Inequalities under matching constraints where, rather than edges, vertices arrive one at a time together with all their adjacent edges. This can be viewed as an incoming batch of edges, for which Ezra et al. (2020) explicitly construct a (1, 1/2)-selectable batched greedy OCRS.
Indeed, we let the ground set E be partitioned into k disjoint subsets (batches) arriving in the order B_1, ..., B_k, where the elements in each batch appear at the same time. Such batches need to belong to a feasible family of batches B: for example, all batches could be required to be singletons, or they could be required to be all edges incident to a given vertex in a graph, and so on. Similarly to the traditional OCRS, we sample a random subset R_j(x) ⊆ B_j for all j ∈ [k], so as to form R(x) := ∪_{j∈[k]} R_j(x) ⊆ E, where the R_j's are mutually independent. The fundamental difference with greedy OCRSs is that, within a given batch, weights are allowed to be correlated.
Definition 7 (Batched Greedy OCRSs (Ezra et al. 2020)). For b, c ∈ [0, 1], let P_F ⊆ [0, 1]^m be F's feasibility polytope. An OCRS π for bP_F is called a batched greedy OCRS with respect to R if, for every ex-ante feasible solution x ∈ bP_F, π defines a packing subfamily of feasible sets F_{π,x} ⊆ F, and it selects a sampled element e ∈ B_j when, together with the set of already selected elements, the resulting set is in F_{π,x}. We say that a batched greedy OCRS π is (b, c)-selectable if Pr_{π,R(x)} [S_j ∪ {e} ∈ F_{π,x}, ∀S_j ⊆ R_j(x), S_j ∈ F_{π,x}] ≥ c, for each j ∈ [k] and e ∈ B_j. The output feasible set is S := ∪_{j∈[k]} S_j ∈ F_{π,x}.
Naturally, Theorem 1 extends to batched OCRSs.
Corollary 2. Let F, F^d be respectively the standard and temporal packing constraint families, with their corresponding polytopes P_F, P^d_F. Let x ∈ bP_F and y ∈ bP^d_F, and consider a (b, c)-selectable batched greedy OCRS π for F_{π,x}, with batches B_1, ..., B_k ∈ B. We can construct a batched greedy OCRS ˆπ that is also (b, c)-selectable for F^d_{π,y}, with batches B_1, ..., B_k ∈ B.
The proof of this corollary is identical to that of Theorem 1: we can indeed define a set of active elements E_j for each batch B_j, and ˆπ is essentially Algorithm 1 but with incoming batches rather than elements, and the necessary modifications in the sets. We demonstrate the use of batched greedy OCRSs in the graph matching setting, where vertices come one at a time together with their contiguous edges. This allows us to solve the problem of dynamically assigning tasks to reviewers for the reviewing time, and to eventually match new tasks to the same reviewers, so as to maximize the throughput of this procedure.
By Corollary 2 together with Theorem 4.1 of Ezra et al. (2020), which gives an explicit construction of a (1, 1/2)-selectable batched greedy OCRS under matching constraints, we immediately have that a (1, 1/2)-selectable batched greedy OCRS exists even under temporal constraints. For clarity, and in the spirit of Appendix B, we work out from scratch an online algorithm that is 1/2-competitive with respect to the offline optimal matching when the graph is bipartite and temporal constraints are imposed. We do not make use of Corollary 2, but we follow the proof of this general statement for the specific setting of bipartite graph matching. Batched OCRSs in the non-temporal case are not specific to bipartite matching but extend in principle to arbitrary packing constraints. Nevertheless, the only known constant-competitive batched OCRS is the one for general graph matching by Ezra et al. (2020). Finally, we note that our results closely resemble those of Dickerson et al. (2018), with the difference that their arrival order is assumed to be stochastic, whereas ours is adversarial.
This setting is motivated, for instance, by the following real-world scenario: there are |U| = m "offline" beds (machines) in a hospital, and |V| = n "online" patients (jobs) that arrive. Once a patient v ∈ V arrives, the hospital has to irrevocably assign them to one of the beds, say u ∈ U, and occupy it for a stochastic time equal to d_{uv} := d_v[u], for d_v ∼ D_v, i.e., the u-th component of random vector d_v. The sequence of arrivals is adversarial, but with known ex-ante distributions (W_v, D_v). Moreover, the patient's healing can be thought of as a positive reward/weight equal to w_{uv} := w_v[u], for w_v ∼ W_v, i.e., the u-th component of random vector w_v, whose distribution is known to the algorithm. The hospital's goal is to maximize the sum of the healing weights over time, i.e., over a discrete time period of length |V| = n. Across v's, both the w_v's and the d_v's are independent. However, within a vector itself, components d_{uv} and d_{u'v} could be correlated, and the same holds for the w_v's.
1205
+ Linear Programming Formulation
1206
+ First, we construct a suitable linear-programming formulation whose fractional solution yields an upper bound on the
1207
+ expected optimum offline algorithm. Then, we devise an online algorithm that achieves an α-competitive ratio with re-
1208
+ spect to the linear programming fractional solution. We follow the temporal LP Definition , and let f(x) := ⟨w, x⟩,
1209
+ for x
1210
+
1211
+ Pd
1212
+ G being a feasible fractional solution in the matching polytope. Since the matching polytope is Pd
1213
+ G
1214
+ =
1215
+ {x ∈ [0, 1]m : x(δ(u) ∩ Ee) ≤ 1, ∀u ∈ V, ∀e ∈ E}, we can equivalently write the temporal linear program as
1216
+
1217
+
1218
+
1219
+
1220
+
1221
+
1222
+
1223
+
1224
+
1225
+
1226
+
1227
+
1228
+
1229
+
1230
+
1231
+
1232
+
1233
+
1234
+
1235
+ max
1236
+ x∈[0,1]m
1237
+
1238
+ u∈U
1239
+
1240
+ v∈V
1241
+ wuv · xuv
1242
+ ⊳ Objective
1243
+ s.t.
1244
+
1245
+ u∈U
1246
+ xuv ≤ 1, ∀v ∈ V
1247
+ ⊳ Constr. 1
1248
+
1249
+ v′:sv′ <sv
1250
+ xuv′ · Pr [duv′ ≥ sv − sv′] + xuv ≤ 1, ∀u ∈ U, v ∈ V
1251
+ ⊳ Constr. 2
1252
+ xuv ≥ 0, ∀u ∈ U, v ∈ V
1253
+ ⊳ Constr. 3
1254
+ (1)
1255
+
1256
+ where wuv := Ewv∼Wv [wuv], when wuv is a random variable; when instead, it is deterministic, we simply have wuv = wuv.
1257
+ Furthermore, as we argued in Section 2, we can think of xuv to be the probability that edge uv is inserted in the offline (frac-
1258
+ tional) optimal matching. We now show why the above linear program yields an upper bound to the offline optimal matching.
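To make the constraints of LP (1) concrete, the snippet below instantiates them on a small made-up instance (2 machines, 3 jobs, deterministic activity times, so that Pr[d_{uv'} ≥ s_v − s_{v'}] is a 0/1 indicator) and verifies a hand-built candidate x; the objective of any such feasible x is a lower bound on the LP optimum, not the optimum itself:

```python
# Hypothetical instance: arrival times, expected weights, deterministic activity times.
U, V = range(2), range(3)
s = [0, 1, 2]                              # job arrival times
w = [[1.0, 2.0, 1.5], [0.5, 1.0, 2.0]]     # w[u][v]: expected weights (assumed)
d = [[2, 1, 1], [1, 1, 1]]                 # d[u][v]: activity times (assumed)

def feasible(x, eps=1e-9):
    """Check Constraints 1-3 of LP (1) for candidate x."""
    for v in V:                            # Constr. 1: each job matched <= once
        if sum(x[u][v] for u in U) > 1 + eps:
            return False
    for u in U:                            # Constr. 2: machine still blocked by earlier jobs
        for v in V:
            blocked = sum(x[u][vp] for vp in V
                          if s[vp] < s[v] and d[u][vp] >= s[v] - s[vp])
            if blocked + x[u][v] > 1 + eps:
                return False
    return all(x[u][v] >= 0 for u in U for v in V)   # Constr. 3

x = [[0.5, 0.3, 0.2], [0.5, 0.5, 0.5]]     # hand-built feasible candidate
ok = feasible(x)
obj = sum(w[u][v] * x[u][v] for u in U for v in V)
```

Here `ok` is `True` and `obj` = 3.15; solving the LP exactly (e.g., with an off-the-shelf solver) could only improve on this value.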
Lemma 1. Consider a solution x* to linear program (1). Then, x* is such that ⟨w̄, x*⟩ ≥ E_{w,d}[⟨w, 1_OPT⟩], where 1_OPT ∈ {0, 1}^m is the vector denoting which of the elements have been selected by the integral offline optimum.
Proof. The proof follows from analyzing the constraints. The meaning of Constraint 1 is that, upon the arrival of vertex v, v must be matched at most once in expectation. In fact, for each job v ∈ V, at most one machine u ∈ U can be selected by the optimum, which yields
Σ_{u∈U} x_{uv} ≤ 1.
This justifies Constraint 1. Constraint 2, on the other hand, has the following simple interpretation: machine u is unavailable when job v arrives if it has been matched earlier to a job v' whose activity time is longer than the difference of the arrival times of v and v'. Otherwise, u can in fact be matched to v, and this probability is of course lower than the probability of being available. This implies that for each machine u ∈ U and each job v ∈ V,
Σ_{v' : s_{v'} < s_v} x_{uv'} · Pr[d_{uv'} ≥ s_v − s_{v'}] + x_{uv} ≤ 1.
We have shown that all constraints are less restrictive for the linear program than they would be for the offline optimum. Since the objective function is the same for both, a solution for the integral optimum is also a solution for the linear program, while the converse does not necessarily hold. The statement follows.
A simple algorithm
Inspired by the algorithm of Dickerson et al. (2018) (which deals with stochastic rather than adversarial arrivals), we propose Algorithm 4. In the remainder, let N_avail(v) denote the set of available vertices u ∈ U when v ∈ V arrives.

Algorithm 4: Bipartite Matching Temporal OCRS
Data: Machine set U, job set V, and distributions W_v, D_v
Result: Matching M ⊆ U × V
Solve LP (1) and obtain fractional solution x*;
M ← ∅;
for v ∈ V do
  if N_avail(v) = ∅ then
    Reject v;
  else
    Select u ∈ N_avail(v) with probability α · x*_{uv} / Pr[u ∈ N_avail(v)];
    M ← M ∪ {uv};

Lemma 2. Algorithm 4 makes every vertex u ∈ U available with probability at least α. Moreover, this probability is maximized for α = 1/2.
Proof. We prove the claim by induction. For the first incoming job v = 1, Pr[u ∈ N_avail(v)] = 1 ≥ α for all machines u ∈ U, no matter what the values of w_{uv}, d_{uv} are. To complete the base case, we only need to check that the probability of selecting one machine is in fact no larger than one; for this purpose, let us denote the event that u is selected by Algorithm 4 when v arrives by u ∈ ALG(v). Then,
Pr[∃u ∈ N_avail(v) : u ∈ ALG(v)] = Σ_{u∈U} α · x*_{uv} / Pr[u ∈ N_avail(v)] ≤ α,
where the first equality follows from the fact that the events within the existential quantifier are disjoint, and recalling that N_avail(v) = U for the first job. Consider all vertices v' arriving before vertex v (s_{v'} < s_v), and assume that Pr[u ∈ N_avail(v')] ≥ α always. This means that the algorithm makes each u available with probability at least α for all vertex arrivals before v. This, in turn, implies that each u is selected with probability α · x*_{uv'}. Let us observe that a machine u ∈ U will not be available for the incoming job v ∈ V only if the algorithm has matched it to an earlier job v' with activity time larger than s_v − s_{v'}. Formally, the probability that u is available for v is
Pr[u ∈ N_avail(v)] = 1 − Pr[u ∉ N_avail(v)]
= 1 − Pr[∃v' ∈ V : s_{v'} < s_v, u ∈ ALG(v'), d_{uv'} > s_v − s_{v'}]
≥ 1 − α · Σ_{v' : s_{v'} < s_v} x*_{uv'} · Pr[d_{uv'} ≥ s_v − s_{v'}]
≥ α + α · x*_{uv}
≥ α.
The second-to-last inequality follows from Constraint 2 and from the following simple implication for all r, z ∈ R: if r + z ≤ 1, then 1 − αr ≥ α + αz, so long as α ≤ 1/2. Since we would like to choose α as large as possible, we choose α = 1/2. What is left to show is that the probability of selecting one machine is at most one:
Pr[∃u ∈ N_avail(v) : u ∈ ALG(v)] = Σ_{u∈U} α · x*_{uv} / Pr[u ∈ N_avail(v)] ≤ 1.
The statement, thus, follows.
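The induction above can be corroborated by simulation. The sketch below runs Algorithm 4 with α = 1/2 on a toy instance where every match blocks a machine forever (all activity times infinite); in that special case the events "u matched to v'" are disjoint, so the availability probability the algorithm divides by has the closed form p_u(v) = 1 − α · Σ_{v' < v} x*_{uv'}. The instance and the feasible x below are assumptions made for illustration:

```python
import random

random.seed(2)
ALPHA = 0.5
U, V = 2, 3
x = [[0.3] * V for _ in range(U)]          # hand-built feasible LP solution
# Closed-form availability probabilities under infinite activity times.
p = [[1 - ALPHA * sum(x[u][:v]) for v in range(V)] for u in range(U)]

TRIALS = 200_000
avail_count = [[0] * V for _ in range(U)]
for _ in range(TRIALS):
    busy = [False] * U                     # matched machines stay busy forever
    for v in range(V):
        r = random.random()
        acc = 0.0
        chosen = None
        for u in range(U):
            if not busy[u]:
                avail_count[u][v] += 1
                acc += ALPHA * x[u][v] / p[u][v]   # selection prob. of Algorithm 4
                if chosen is None and r < acc:
                    chosen = u
        if chosen is not None:
            busy[chosen] = True

emp = [[avail_count[u][v] / TRIALS for v in range(V)] for u in range(U)]
```

The empirical availability frequencies `emp` should track the closed form (1, 0.85, 0.70 for each machine) and, in line with Lemma 2, never drop below 1/2.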
A direct consequence of the above two lemmata is the following theorem. Indeed, if every u is available with probability at least 1/2, then the algorithm selects it with the prescribed probability regardless of its previous actions; in turn, the optimum is approximated within the same factor.
Theorem 4. Algorithm 4 is 1/2-competitive with respect to the expected optimum E_{w,d}[⟨w, 1_OPT⟩].
Various applications, such as prophet and probing inequalities for the batched temporal setting, can be derived from the above theorem. Solving them with a constant competitive ratio yields a solution for the review problem illustrated in the introduction, where multiple financial transactions arriving over time can be assigned to one of many potential reviewers, and these reviewers can be "reused" once they have completed their review time.
D
Benchmarks
The need for stages
We argue that, for the results in Section 5, stages are necessary in order for us to be able to compare our algorithm against any meaningful benchmark. Suppose, in contrast, that we chose to compare against the optimum (or an approximation of it) within a single stage where n jobs arrive to a single arm. A non-adaptive adversary could simply run the following procedure, with each job having weight 1: with probability 1/2, jobs with odd arrival order have activity time 1 and jobs with even arrival order have activity time ∞; with probability 1/2, the opposite holds. To be precise, let us recall that ∞ is just shorthand notation to mean that all future jobs would be blocked: indeed, the activity time of a job arriving at time s_e is not unbounded but can be at most n − s_e. As activity times are revealed after the algorithm has made a decision for the current job, the algorithm does not know whether taking the current job will leave it blocked for the entire future. The best the algorithm can do is to pick the first job with probability 1/2. Indeed, if the algorithm is lucky and the activity time is 1, then it knows it is in the first scenario and gets n. Otherwise, it only gets 1. Hence, the regret would be R_n = n − (n + 1)/2 ∈ Ω(n), which is linear. Note that n and T here represent two different concepts: the first is the number of elements sent within a stage; the second is the number of stages. In the case outlined above, T = 1, since it is a single-stage scenario. Thus, there is no hope that in a single stage we could do anything meaningful, and we turn to the framework where an entire instance of the problem is sent at each stage t ∈ [T].
Choosing the right benchmark
Now, we motivate why the Best-in-Hindsight policy introduced at the beginning of Section 5 is a strong and realistic benchmark for an algorithm that knows the feasibility polytopes a priori. In fact, when we want to measure regret, we need to find a benchmark to compare against which is neither too trivial nor unrealistically powerful compared to the information we have at hand. Below, we provide explicit lower bounds which show that the dynamic optimum is too powerful a benchmark even when the polytope is known. In particular, the next examples prove that it is impossible to achieve sublinear (α-)Regret against the dynamic optimum. In the remainder, we always assume full feedback and that the adversary is non-adaptive, and we denote by a_t^OPT and a_t^ALG the actions chosen at time t by the optimum and the algorithm, respectively.
Lemma 3. Every algorithm has R_T = Σ_{t∈[T]} E[f_t(a_t^OPT)] − Σ_{t∈[T]} E[f_t(a_t^ALG)] ∈ Ω(T) against the dynamic optimum.
Proof. Consider the case of a single arm and the arrival of 3 jobs at each stage (one at a time within the stage, revealed from top to bottom), with the constraint that at most 1 active job can be selected. The (non-adaptive) adversary simply tosses T fair coins independently, one per stage: if the t-th coin lands heads, then all 3 jobs at the t-th stage have activity times 1 and weights 1; otherwise, all jobs have activity time ∞, the first job has weight ε, and the last two have weight 1 (recall that ∞ is just shorthand notation to mean that all future jobs would be blocked). Figure 1 shows a possible realization of the T stages: at each stage, the expected reward of the optimal policy is 3/2, since the optimal value is 1 or 2 with equal probability. By linearity of expectation, Σ_{t∈[T]} E[f_t(a_t^OPT)] = T · E[f(a^OPT)] ≥ (3/2) · T.

Figure 1: Three jobs per stage: w.p. 1/2, either {(1, 1), (1, 1), (1, 1)} or {(ε, ∞), (1, ∞), (1, ∞)}.

On the other hand, the algorithm will discover which scenario it has landed in only after the value of the first job has been revealed. If it does not pick it and the job turns out to have weight 1, then the algorithm can get at most 1 from the remaining jobs. If instead it decides to pick it but the job turns out to have weight ε, it will only get ε. Even if the algorithm is aware of such a stochastic input beforehand, it knows that stages are independent and, hence, cannot adapt before a given stage begins. It can observe the first job's weight without taking it, but by then it may already be too late. Any algorithm in this setting can be described by deciding to accept the first job with probability p (and reject it with probability 1 − p), and then acting adaptively. Then, again by linearity of expectation,
Σ_{t∈[T]} E[f_t(a_t^ALG)] = T · E[f(a^ALG)] = T · (1/2 · (2p + (1 − p)) + 1/2 · (εp + (1 − p))) = ((2 + εp)/2) · T ≤ ((2 + ε)/2) · T.
Thus, R_T ≥ ((1 − ε)/2) · T ∈ Ω(T).
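The per-stage expectations used in this argument can be verified numerically. The snippet below evaluates E[f(a^OPT)] = 3/2 and E[f(a^ALG)] = (2 + εp)/2 for the two-scenario adversary over a grid of acceptance probabilities p, confirming a constant per-stage gap:

```python
EPS = 0.01  # the small first-job weight in the "tails" scenario

# Per-stage optimum: 2 in "heads" (take jobs 1 and 3), 1 in "tails".
opt = 0.5 * 2 + 0.5 * 1

def alg(p):
    """Expected per-stage reward when accepting the first job w.p. p."""
    heads = 2 * p + 1 * (1 - p)   # accept -> 2 total; reject -> at most 1
    tails = EPS * p + 1 * (1 - p) # accept -> stuck with eps; reject -> 1
    return 0.5 * heads + 0.5 * tails

best = max(alg(p / 100) for p in range(101))
gap = opt - best
```

The maximum over p is attained at p = 1 with value (2 + ε)/2, so the per-stage gap is (1 − ε)/2, and summing over T stages gives linear regret.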
1429
+ Now, we ask whether there exists a similar lower bound on approximate regret. Similarly to the previous lemma, we denote
1430
+ by aOCRS
1431
+ t
1432
+ the action chosen at time t by the OCRS.
1433
+ Lemma 4. Every algorithm has RT = α · �
1434
+ t∈[T ] E[ft(aOPT
1435
+ t
1436
+ )] − �
1437
+ t∈[T ] E[ft(aALG
1438
+ t
1439
+ )] ∈ Ω(T ) against an α-approximation of
1440
+ the dynamic optimum, for α ∈ (0, 1].
1441
+ Proof. Let all the activity times be infinite, and define (for a given stage) the constraint to be picking a single job irrevocably.
1442
+ We know that, for the single-choice problem, a tight OCRS achieves α = 1/2 competitive ratio. However, such OCRS is not
1443
+ greedy. Livanos (2021) constructs a tight greedy OCRS for single-choice, which is α = 1/e competitive. For our purposes,
1444
+ nonetheless, we only require the trivial inequality α ≤ 1. The non-adaptiveadversary could run the following a priori procedure,
1445
+ for each of the T stages: let δ = α−1/n
1446
+ 2
1447
+ be a constant, sample k ∼ [n] uniformly at random, and send jobs in order of weights
1448
+ δk, δk−1, . . . , δ, 0, . . . , 0 (ascending until δ and then all 0s).8 We know that, by Theorem 1,
1449
+
1450
+ t∈[T ]
1451
+ E[ft(aOCRS
1452
+ t
1453
+ )] ≥ α ·
1454
+
1455
+ t∈[T ]
1456
+ E[ft(aOPT
1457
+ t
1458
+ )].
This is possible because the greedy OCRS has a priori access to full information about the current stage (it knows δ and the sampled k at each stage), unlike the algorithm, which is unaware of how the T stages are going to be presented. It is easy to see that the best the algorithm can do within a given stage is to randomly guess which k has been drawn, i.e., where δ will land. We now divide the time horizon into T/n intervals, each composed of n stages. In each interval, since no stage is predictive of the next, we know that the algorithm cannot be adaptive across stages, nor can it be within a stage, since all possible sequences have the same prefix. By construction, we expect the algorithm to catch δ once per time interval, and otherwise get at most δ², optimistically, for all remaining n − 1 stages.
⁸ This construction is inspired by the notes of Kesselheim and Mehlhorn (2016).
In other words, let us index each interval by I ∈ [T/n] and rewrite the algorithm and the OCRS expected rewards as
Σ_{t∈[T]} E[f_t(a_t^ALG)] ≤ Σ_{I∈[T/n]} (δ + (n − 1)δ²) ≤ (δ/n + δ²) · T,
Σ_{t∈[T]} E[f_t(a_t^OCRS)] ≥ Σ_{I∈[T/n]} (αδ · n) = αδ · T.
Hence,

RT = Σ_{t∈[T]} E[f_t(a_t^OCRS)] − Σ_{t∈[T]} E[f_t(a_t^ALG)] ≥ (αδ − δ/n − δ²) · T = ((α − 1/n)²/4) · T ∈ Ω(T).

The last step follows from the fact that 1/n ∈ o(α), and α ∈ o(T).
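To sanity-check the algebra in the last step (an illustrative check, not part of the original proof), one can verify numerically that choosing δ = (α − 1/n)/2 makes the per-stage gap αδ − δ/n − δ² equal to (α − 1/n)²/4:

```python
# Illustrative check: with delta = (alpha - 1/n)/2, the per-stage gap
# alpha*delta - delta/n - delta**2 equals (alpha - 1/n)**2 / 4.
def per_stage_gap(alpha: float, n: int) -> float:
    delta = (alpha - 1.0 / n) / 2.0
    return alpha * delta - delta / n - delta ** 2

for alpha in (1.0, 0.5, 1.0 / 2.718281828):  # e.g. alpha = 1, 1/2, ~1/e
    for n in (10, 100, 1000):
        assert abs(per_stage_gap(alpha, n) - (alpha - 1.0 / n) ** 2 / 4.0) < 1e-12
```

The gap is positive whenever α > 1/n, which is what makes the Ω(T) lower bound go through.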
E Omitted proofs from Section 5

Theorem 2. Given a regret minimizer RM for decision space P_F^d with cumulative regret upper bound RT, and an α-competitive temporal greedy OCRS, Algorithm 2 provides

α max_{S∈I^d} Σ_{t=1}^T f(1_S, w_t) − E[ Σ_{t=1}^T f(a_t, w_t) ] ≤ RT.
Proof. We assume access to a regret minimizer for the set P_F^d guaranteeing an upper bound of RT on the cumulative regret up to time T. Then,

E[ Σ_{t=1}^T f(a_t, w_t) ] ≥ α Σ_{t=1}^T f(x_t, w_t) ≥ α ( max_{x∈P_F^d} Σ_{t=1}^T f(x, w_t) − RT ) = α ( max_a Σ_{t=1}^T f(a, w_t) − RT ),

where the first inequality follows from the fact that Algorithm 2 employs a suitable temporal OCRS π̂ to select a_t: for each e ∈ E, the probability with which the OCRS selects e is at least α · x_{t,e}, and since f is a linear mapping (in particular, it is defined as the scalar product between a vector of weights and the choice at time t), the above inequality holds. The second inequality is by the no-regret property of the regret minimizer for decision space P_F^d. This concludes the proof.
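The first inequality can be illustrated with a quick simulation (a sketch under simplifying assumptions: independent per-element selection with probability exactly α · x_{t,e} stands in for an actual temporal OCRS, and f is the scalar product):

```python
import random

# Sketch: if each element e is selected independently with probability
# alpha * x_e, then for the linear reward f(a, w) = <a, w> we get
# E[f(a, w)] = alpha * <x, w>, matching the first inequality in the proof.
def mean_reward(x, w, alpha, trials=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += sum(we for xe, we in zip(x, w) if rng.random() < alpha * xe)
    return total / trials

x = [0.2, 0.5, 0.3]   # fractional point suggested by the regret minimizer
w = [1.0, 2.0, 0.5]   # weights for one stage
alpha = 0.5
assert abs(mean_reward(x, w, alpha) - alpha * sum(a * b for a, b in zip(x, w))) < 0.01
```

A real OCRS must additionally respect the feasibility constraint; the linearity of f is what lets the per-element selection probabilities translate directly into the α factor.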
Theorem 3. Given a temporal packing feasibility set F^d, and an α-competitive OCRS π̂, let Z = T^(2/3), and let the full feedback subroutine RM be defined as per Theorem 2. Then Algorithm 3 guarantees that

α max_{S∈I^d} Σ_{t=1}^T f(1_S, w_t) − E[ Σ_{t=1}^T f(a_t, w_t) ] ≤ Õ(T^(2/3)).
Proof. We start by computing a lower bound on the average reward the algorithm gets. Algorithm 3 splits its decisions into Z blocks and, at each τ ∈ [Z], chooses the action x_τ suggested by the RM, unless the stage is one of the randomly sampled exploration steps. Then, we can write

(1/T) · E[ Σ_{t=1}^T f(a_t, w_t) ]
 ≥ (α/T) · Σ_{τ∈[Z]} Σ_{t∈I_τ} f(x_t, w_t)
 ≥ (α/T) · Σ_{τ∈[Z]} Σ_{t∈I_τ} f(x_τ, w_t) − αm²Z/T
 = (α/T) · Σ_{τ∈[Z]} Σ_{e∈E} x_{τ,e} Σ_{t∈I_τ} f(1_e, w_t) − αm²Z/T
 = (α/Z) · Σ_{τ∈[Z]} Σ_{e∈E} x_{τ,e} · E[f̃_τ(e)] − αm²Z/T,

where the first inequality is by the use of a temporal OCRS to select a_t, and the second inequality is obtained by subtracting the worst-case costs incurred during exploration; note that the m² factor in the second inequality is due to the fact that, at each of the m exploration stages, we can lose at most m. The last equality is by definition of the unbiased estimator, since the value of f is observed T/Z times (once for every block) in expectation.
We can now bound from below the rightmost expression we just obtained by using the guarantees of the regret minimizer:

(α/Z) · Σ_{τ∈[Z]} Σ_{e∈E} x_{τ,e} · E[f̃_τ(e)] − αm²Z/T
 ≥ (α/Z) · E[ max_{x∈P_F^d} Σ_{τ∈[Z]} Σ_{e∈E} x_e f̃_τ(e) − RZ ] − αm²Z/T
 = (α/Z) max_{x∈P_F^d} Σ_{τ∈[Z]} Σ_{e∈E} x_e · E[f̃_τ(e)] − (α/Z) RZ − αm²Z/T
 = (α/T) max_{a∈F^d} Σ_{τ∈[Z]} Σ_{e∈E} x_e Σ_{t∈I_τ} f(1_e, w_t) − (α/Z) RZ − αm²Z/T
 = (α/T) max_{a∈F^d} Σ_{t=1}^T f(a_t, w_t) − (α/Z) RZ − αm²Z/T,

where we used the unbiasedness of f̃_τ(e), and the fact that the value of the optimal fractional vector in the polytope is the same as the value provided by the best superarm (i.e., the best vertex of the polytope), by convexity. The third equality follows from expanding the expectation of the unbiased estimator (i.e., E[f̃_τ(e)] := (Z/T) · Σ_{t∈I_τ} f(1_e, w_t)). Let us now rearrange the last expression and compute the cumulative regret:
α max_{a∈F^d} Σ_{t=1}^T f(a_t, w_t) − E[ Σ_{t=1}^T f(a_t, w_t) ] ≤ (α/Z) · RZ · T + αm²Z ≤ Õ(T^(2/3)),

where RZ ≤ Õ(√Z), and in the last step we set Z = T^(2/3) to obtain the desired upper bound on regret (the term αm² is incorporated in the Õ notation). The theorem follows.
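The balancing step can be checked numerically (illustrative only, with constants and logarithmic factors dropped): with RZ of order √Z, the estimation term (RZ/Z) · T ≈ T/√Z and the exploration term ≈ Z both scale as T^(2/3) once Z = T^(2/3):

```python
# Illustrative check of the balancing step (constants and log factors
# dropped): with Z = T**(2/3), the exploration term ~ Z and the
# estimation term ~ (sqrt(Z)/Z) * T = T / sqrt(Z) both scale as T**(2/3).
for T in (10 ** 4, 10 ** 6, 10 ** 8):
    Z = T ** (2 / 3)
    exploration = Z
    estimation = T / Z ** 0.5
    target = T ** (2 / 3)
    assert abs(exploration / target - 1) < 1e-9
    assert abs(estimation / target - 1) < 1e-9
```

Any other choice of Z makes one of the two terms dominate, which is why Z = T^(2/3) is the right block count here.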
F Further Related Works

CRS and OCRS. Contention resolution schemes (CRS) were introduced by Chekuri, Vondrák, and Zenklusen (2011) as a powerful rounding technique in the context of submodular maximization. The CRS framework was extended to online contention resolution schemes (OCRS) for online selection problems by Feldman, Svensson, and Zenklusen (2016), who provided OCRSs for different problems, including intersections of matroids, matchings, and prophet inequalities. Ezra et al. (2020) recently extended OCRS to batched arrivals, providing a constant competitive ratio for stochastic max-weight matching in vertex and edge arrival models.

Combinatorial Bandits. The problem of combinatorial bandits was first studied in the context of online shortest paths (Awerbuch and Kleinberg 2008; György et al. 2007), and the general version of the problem is due to Cesa-Bianchi and Lugosi (2012). Improved regret bounds can be achieved in the case of combinatorial bandits with semi-bandit feedback (see, e.g., (Chen, Wang, and Yuan 2013; Kveton et al. 2015; Audibert, Bubeck, and Lugosi 2014)). A related problem is that of linear bandits (Awerbuch and Kleinberg 2008; McMahan and Blum 2004), which admit computationally efficient algorithms in the case in which the action set is convex (Abernethy, Hazan, and Rakhlin 2009).

Blocking bandits. In blocking bandits (Basu et al. 2019), the arm that is played is blocked for a specific number of stages. Blocking bandits have recently been studied in contextual (Basu et al. 2021), combinatorial (Atsidakou et al. 2021), and adversarial (Bishop et al. 2020) settings. Our bandit model differs from blocking bandits since we consider each instance of the problem confined within each stage. In addition, the online full-information problems that are solved in most blocking bandits papers (Atsidakou et al. 2021; Basu et al. 2021; Dickerson et al. 2018) only address specific cases of the fully dynamic online selection problem, which we solve in entire generality.

Sleeping bandits. As mentioned, our problem is similar to that of sleeping bandits (see (Kleinberg, Niculescu-Mizil, and Sharma 2010) and follow-up papers), but at the same time the two models differ in a number of ways. Just like the sleeping bandits case, the adversary in our setting decides which actions we can perform by setting arbitrary activity times at each t. The crucial difference between the two settings is that, in sleeping bandits, once an adversary has chosen the available actions for a given stage, they have to communicate them all at once to the algorithm. In our case, instead, the adversary can choose the available actions within a given stage as the elements arrive, so it is, in some sense, "more dynamic". In particular, in the temporal setting there are two levels of adaptivity for the adversary: on one hand, the adversary may or may not be adaptive across stages (this is the classic bandit notion of adaptivity); on the other hand, the adversary may or may not be adaptive within the same stage (which is the notion of adaptivity for online algorithms).
 
B9E0T4oBgHgl3EQfQABU/content/tmp_files/2301.02186v1.pdf.txt ADDED
MNRAS 000, 1–16 (0000)    Preprint 6 January 2023    Compiled using MNRAS LATEX style file v3.0

Inferring the impact of feedback on the matter distribution using the Sunyaev Zel'dovich effect: Insights from CAMELS simulations and ACT+DES data

Shivam Pandey,1,2 Kai Lehman,3,4 Eric J. Baxter,3 Yueying Ni,5 Daniel Anglés-Alcázar,6,7 Shy Genel,7,1 Francisco Villaescusa-Navarro,7 Ana Maria Delgado,5 Tiziana di Matteo9

1 Department of Physics, Columbia University, New York, NY, USA 10027
2 Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104, USA
3 Institute for Astronomy, University of Hawai‘i, 2680 Woodlawn Drive, Honolulu, HI 96822, USA
4 Universitäts-Sternwarte München, Fakultät für Physik, Ludwig-Maximilians-Universität, Scheinerstr. 1, 81679 München, Germany
5 Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA 02138, US
6 Department of Physics, University of Connecticut, 196 Auditorium Road, U-3046, Storrs, CT, 06269, USA
7 Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, NY, 10010, USA
9 McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213

6 January 2023
ABSTRACT
Feedback from active galactic nuclei and stellar processes changes the matter distribution on small scales, leading to significant systematic uncertainty in weak lensing constraints on cosmology. We investigate how the observable properties of group-scale halos can constrain feedback's impact on the matter distribution using Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS). Extending the results of previous work to smaller halo masses and higher wavenumber, k, we find that the baryon fraction in halos contains significant information about the impact of feedback on the matter power spectrum. We explore how the thermal Sunyaev Zel'dovich (tSZ) signal from group-scale halos contains similar information. Using recent Dark Energy Survey (DES) weak lensing and Atacama Cosmology Telescope (ACT) tSZ cross-correlation measurements and models trained on CAMELS, we obtain 10% constraints on feedback effects on the power spectrum at k ∼ 5 h/Mpc. We show that with future surveys, it will be possible to constrain baryonic effects on the power spectrum to O(< 1%) at k = 1 h/Mpc and O(3%) at k = 5 h/Mpc using the methods that we introduce here. Finally, we investigate the impact of feedback on the matter bispectrum, finding that tSZ observables are highly informative in this case.

Key words: large-scale structure of Universe – methods: statistical
1 INTRODUCTION

The statistics of the matter distribution on scales k ≳ 0.1 h Mpc⁻¹ are tightly constrained by current weak lensing surveys (e.g. Asgari et al. 2021; Abbott et al. 2022). However, modeling the matter distribution on these scales to extract cosmological information is complicated by the effects of baryonic feedback (Rudd et al. 2008). Energetic output from active galactic nuclei (AGN) and stellar processes (e.g. winds and supernovae) directly impacts the distribution of gas on small scales, thereby changing the total matter distribution (e.g. Chisari et al. 2019).¹ The coupling between these processes and the large-scale gas distribution is challenging to model theoretically and in simulations because of the large dynamic range involved, from the scales of individual stars to the scales of galaxy clusters. While it is generally agreed that feedback leads to a suppression of the matter power spectrum on scales 0.1 h Mpc⁻¹ ≲ k ≲ 20 h Mpc⁻¹, the amplitude of this suppression remains uncertain by tens of percent (van Daalen et al. 2020; Villaescusa-Navarro et al. 2021) (see also Fig. 1). This systematic uncertainty limits constraints on cosmological parameters from current weak lensing surveys (e.g. Abbott et al. 2022; Asgari et al. 2021). For future surveys, such as the Vera Rubin Observatory LSST (The LSST Dark Energy Science Collaboration et al. 2018) and Euclid (Euclid Collaboration et al. 2020), the problem will become even more severe given expected increases in statistical precision. In order to reduce the systematic uncertainties associated with feedback, we would like to identify observable quantities that carry information about the impact of feedback on the matter distribution and develop approaches to extract this information (e.g. Nicola et al. 2022).

¹ Changes to the gas distribution can also gravitationally influence the dark matter distribution, further modifying the total matter distribution.
© 0000 The Authors
arXiv:2301.02186v1 [astro-ph.CO] 5 Jan 2023
Recently, van Daalen et al. (2020) showed that the halo baryon fraction, fb, in halos with M ∼ 10¹⁴ M⊙ carries significant information about suppression of the matter power spectrum caused by baryonic feedback. They found that the relation between fb and matter power suppression was robust to at least some changes in the subgrid prescriptions for feedback physics. Note that fb as defined by van Daalen et al. (2020) counts baryons in both the intracluster medium as well as those in stars. The connection between fb and feedback is expected, since one of the main drivers of feedback's impact on the matter distribution is the ejection of gas from halos by AGN. Therefore, when feedback is strong, halos will be depleted of baryons and fb will be lower. The conversion of baryons into stars — which will not significantly impact the matter power spectrum on large scales — does not impact fb, since fb includes baryons in stars as well as the ICM. van Daalen et al. (2020) specifically consider the measurement of fb in halos with 6 × 10¹³ M⊙ ≲ M500c ≲ 10¹⁴ M⊙. In much more massive halos, the energy output of AGN is small compared to the binding energy of the halo, preventing gas from being expelled. In smaller halos, van Daalen et al. (2020) found that the correlation between power spectrum suppression and fb is less clear.
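To make the definition concrete, here is a minimal sketch (with hypothetical catalog arrays, not code from any of the papers cited here) of the baryon fraction, counting gas and stars together and normalizing by the cosmic fraction Ωb/Ωm:

```python
import numpy as np

# Sketch (hypothetical inputs): baryon fraction per halo, with gas and
# stellar mass counted together, normalized by the cosmic baryon fraction.
def normalized_baryon_fraction(m_gas, m_star, m_total, omega_b, omega_m):
    fb = (np.asarray(m_gas) + np.asarray(m_star)) / np.asarray(m_total)
    return fb / (omega_b / omega_m)

# Made-up halo masses in units of 1e13 Msun/h:
fb_norm = normalized_baryon_fraction(
    m_gas=[0.6, 0.9], m_star=[0.1, 0.15], m_total=[6.0, 9.0],
    omega_b=0.049, omega_m=0.3,
)
print(fb_norm)  # both toy halos retain ~71% of the cosmic baryon budget
```

Strong feedback pushes this ratio down by ejecting gas, while star formation merely moves mass between the two baryonic terms, which is why the sum appears in the numerator.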
Although fb carries information about feedback, it is somewhat unclear how one would measure fb in practice. Observables such as the kinematic Sunyaev Zel'dovich (kSZ) effect can be used to constrain the gas density; combined with some estimate of stellar mass, fb could then be inferred. However, measuring the kSZ is challenging, and current measurements have low signal-to-noise (Hand et al. 2012; Hill et al. 2016; Soergel et al. 2016). Moreover, van Daalen et al. (2020) consider a relatively limited range of feedback prescriptions. It is unclear whether a broader range of feedback models could lead to a greater spread in the relationship between fb and baryonic effects on the power spectrum. In any case, it is worthwhile to consider other potential observational probes of feedback.

Another potentially powerful probe of baryonic feedback is the thermal SZ (tSZ) effect. The tSZ effect is caused by inverse Compton scattering of CMB photons with a population of electrons at high temperature. This scattering process leads to a spectral distortion in the CMB that can be reconstructed from multi-frequency CMB observations. The amplitude of this distortion is sensitive to the line-of-sight integral of the electron pressure. Since feedback changes the distribution and thermodynamics of the gas, it stands to reason that it could impact the tSZ signal. Indeed, several works using both data (e.g. Pandey et al. 2019, 2022; Gatti et al. 2022a) and simulations (e.g. Scannapieco et al. 2008; Bhattacharya et al. 2008; Moser et al. 2022; Wadekar et al. 2022) have shown that the tSZ signal from low-mass (group scale) halos is sensitive to feedback. Excitingly, the sensitivity of tSZ measurements is expected to increase dramatically in the near future due to high-sensitivity CMB measurements from e.g. SPT-3G (Benson et al. 2014), Advanced ACTPol (Henderson et al. 2016), Simons Observatory (Ade et al. 2019), and CMB Stage 4 (Abazajian et al. 2016).

The goal of this work is to investigate what information the tSZ signals from low-mass halos contain about the impact of feedback on the small-scale matter distribution. The tSZ signal, which we denote with the Compton y parameter, carries different information from fb. For one, y is sensitive only to the gas and not to stellar mass. Moreover, y carries sensitivity to both the gas density and temperature, unlike fb, which depends only on the gas density. The y signal is also easier to measure than fb, since it can be estimated simply by cross-correlating halos with a tSZ map. The signal-to-noise of such cross-correlation measurements is already high with current data, on the order of 10s of σ (Vikram et al. 2017; Pandey et al. 2019, 2022; Sánchez et al. 2022).
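Schematically, y is the dimensionless line-of-sight integral of the electron pressure, y = (σ_T / m_e c²) ∫ P_e dl. The following toy calculation (made-up pressure profile; not the measurement pipeline used in this work) illustrates the definition:

```python
import numpy as np

# Toy illustration of the Compton-y definition:
#   y = (sigma_T / (m_e c^2)) * integral of P_e along the line of sight.
SIGMA_T = 6.6524587e-25   # Thomson cross-section [cm^2]
ME_C2 = 8.187106e-7       # electron rest energy m_e c^2 [erg]

def compton_y(l, P_e):
    """Trapezoidal line-of-sight integral; l in cm, P_e in erg/cm^3."""
    l, P_e = np.asarray(l), np.asarray(P_e)
    integral = np.sum(0.5 * (P_e[1:] + P_e[:-1]) * np.diff(l))
    return (SIGMA_T / ME_C2) * integral

MPC_CM = 3.0857e24
l = np.linspace(-5.0, 5.0, 2001) * MPC_CM            # +/- 5 Mpc path
P_e = 1e-13 / (1.0 + (l / (0.3 * MPC_CM)) ** 2)      # made-up profile
print(f"y = {compton_y(l, P_e):.2e}")
```

Because P_e is the product of electron density and temperature, any feedback process that heats or redistributes the gas moves this integral, which is the physical basis of the probe.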
In this paper, we investigate the information content of the tSZ signal from group-scale halos using the Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS). As we describe in more detail in §2, CAMELS is a suite of many hydrodynamical simulations run across a range of different feedback prescriptions and different cosmological parameters. The relatively small volume of the CAMELS simulations ((25/h)³ Mpc³) means that we are somewhat limited in the halo masses and scales that we can probe. We therefore view our analysis as an exploratory work that investigates the information content of low-mass halos for constraining feedback and how to extract this information; more accurate results over a wider range of halo mass and k may be obtained in the future using the same methods applied to larger volume simulations.

By training statistical models on the CAMELS simulations, we explore what information about feedback exists in tSZ observables, and how robust this information is to changes in subgrid feedback prescriptions. We consider three very different prescriptions for feedback based on the SIMBA (Davé et al. 2019), Illustris-TNG (Pillepich et al. 2018, henceforth TNG) and Astrid (Bird et al. 2022; Ni et al. 2022) models across a wide range of possible parameter values, including variations in cosmology. The flexibility of the statistical models we employ means that it is possible to uncover more complex relationships between e.g. fb, y, and the baryonic suppression of the power spectrum than considered in van Daalen et al. (2020). The work presented here is complementary to Delgado et al. (2023), which explores the information content in the baryon fraction of halos encompassing a broader mass range (M > 10¹⁰ M⊙/h), finding a broad correlation with the matter power suppression.

Finally, we apply our trained statistical models to recent measurements of the y signal from low-mass halos by Gatti et al. (2022a) and Pandey et al. (2022). These analyses inferred the halo-integrated y signal from the cross-correlation of galaxy lensing and the tSZ effect, using lensing data from the Dark Energy Survey (DES) (Amon et al. 2022; Secco et al. 2022) and tSZ measurements from the Atacama Cosmology Telescope (ACT) (Madhavacheril et al. 2020). In addition to providing interesting constraints on the impact of feedback, these results highlight the potential of future similar analyses with e.g. the Dark Energy Spectroscopic Instrument (DESI; DESI Collaboration et al. 2016), Simons Observatory (Ade et al. 2019), and CMB Stage 4 (Abazajian et al. 2016).
Two recent works — Moser et al. (2022) and Wadekar et al. (2022) — have used the CAMELS simulations to explore the information content of the tSZ signal for constraining feedback. These works focus on the ability of tSZ observations to constrain the parameters of subgrid feedback models in hydrodynamical simulations. Here, in contrast, we attempt to connect the observable quantities directly to the impact of feedback on the matter power spectrum and bispectrum. Additionally, unlike some of the results presented in Moser et al. (2022) and Wadekar et al. (2022), we consider the full parameter space explored by the CAMELS simulations rather than the small variations around a fiducial point that are relevant to calculation of the Fisher matrix. Finally, we only focus on the intra-halo gas profile of the halos in the mass range captured by the CAMELS simulations (c.f. Moser et al. 2022). We do not expect the inter-halo gas pressure to be captured by the small boxes used here, as it may be sensitive to higher halo masses (Pandey et al. 2020).

Nonlinear evolution of the matter distribution induces non-Gaussianity, and hence there is additional information to be recovered beyond the power spectrum. Recent measurements detect higher-order matter correlations at cosmological scales at O(10σ) (Secco et al. 2022; Gatti et al. 2022b), and the significance of these measurements is expected to rapidly increase with upcoming surveys (Pyne & Joachimi 2021). Jointly analyzing two-point and three-point correlations of the matter field can help with self-calibration of systematic parameters and improve cosmological constraints. As described in Foreman et al. (2020), the matter bispectrum is expected to be impacted by baryonic physics at O(10%) over the scales of interest. With these considerations in mind, we also investigate whether the SZ observations carry information about the impact of baryonic feedback on the matter bispectrum.
The plan of the paper is as follows. In §2 we discuss the CAMELS simulation and the data products that we use in this work. In §3, we present the results of our explorations with the CAMELS simulations, focusing on the information content of the tSZ signal for inferring the impact of feedback on the matter distribution. In §4, we apply our analysis to the DES and ACT measurements. We summarize our results and conclude in §5.

2 CAMELS SIMULATIONS AND OBSERVABLES

2.1 Overview of CAMELS simulations
We investigate the use of SZ signals for constraining the impact of feedback on the matter distribution using approximately 3000 cosmological simulations run by the CAMELS collaboration (Villaescusa-Navarro et al. 2021). One half of these are gravity-only N-body simulations and the other half are hydrodynamical simulations with matching initial conditions. The simulations are run using three different hydrodynamical sub-grid codes, TNG (Pillepich et al. 2018), SIMBA (Davé et al. 2019) and Astrid (Bird et al. 2022; Ni et al. 2022). As detailed in Villaescusa-Navarro et al. (2021), for each sub-grid implementation six parameters are varied: two cosmological parameters (Ωm and σ8) and four parameters dealing with baryonic astrophysics. Of these, two deal with supernovae feedback (ASN1 and ASN2) and two deal with AGN feedback (AAGN1 and AAGN2). The meanings of the feedback parameters for each subgrid model are summarized in Table 1.

Note that the astrophysical parameters have somewhat different physical meanings for the different subgrid prescriptions, and there is usually a complex interplay between the parameters and their impact on the properties of galaxies and gas. For example, the parameter ASN1 approximately corresponds to the pre-factor for the overall energy output in galactic wind feedback per unit star formation in both the TNG (Pillepich et al. 2018) and Astrid (Bird et al. 2022) simulations. However, in the SIMBA simulations it corresponds to the wind-driven mass outflow rate per unit star formation calibrated from the Feedback In Realistic Environments (FIRE) zoom-in simulations (Anglés-Alcázar et al. 2017b). Similarly, the AAGN2 parameter controls the burstiness and the temperature of the heated gas during the AGN bursts in the TNG simulations (Weinberger et al. 2017). In the SIMBA suite, it corresponds to the speed of the kinetic AGN jets with constant momentum flux (Anglés-Alcázar et al. 2017a; Davé et al. 2019). However, in the Astrid suite, it corresponds to the efficiency of the thermal mode of AGN feedback. As we describe in §3.2, this can result in a counterintuitive impact on the matter power spectrum in the Astrid simulation, relative to TNG and SIMBA.

For each of the sub-grid physics prescriptions, three varieties of simulations are provided. These include 27 sims for which the parameters are fixed and initial conditions are varied (cosmic variance, or CV, set), 66 simulations varying only one parameter at a time (1P set) and 1000 sims varying parameters in a six-dimensional latin hyper-cube (LH set). We use the CV simulations to estimate the variance expected in the matter power suppression due to stochasticity (see Fig. 1). We use the 1P sims to understand how the matter suppression responds to variation in each parameter individually. Finally, we use the full LH set to effectively marginalize over the full parameter space varying all six parameters. We use publicly available power spectrum and bispectrum measurements for these simulation boxes (Villaescusa-Navarro et al. 2021).² Where unavailable, we calculate the power spectrum and bispectrum using the publicly available code Pylians.³
302
+ 2.2 Baryonic effects on the power spectrum and bispectrum
305
+ The left panel of Fig. 1 shows the measurement of the
306
+ power spectrum suppression caused by baryonic effects in
307
+ the TNG, SIMBA, and Astrid simulations for their fiducial
308
+ feedback settings. The right two panels of the figure show the
309
+ impact of baryonic effects on the bispectrum for two different
310
+ 2 See also https://www.camel-simulations.org/data.
311
+ 3 https://github.com/franciscovillaescusa/Pylians3
312
+ MNRAS 000, 1–16 (0000)
313
+
314
+ 4
315
+ Pandey et al.
316
+ Simulation | Type / Code            | Astrophysical parameters varied & their meaning
+ TNG        | Magneto-hydrodynamic / | ASN1: (Energy of galactic winds)/SFR
+            | AREPO                  | ASN2: Speed of galactic winds
+            |                        | AAGN1: Energy/(BH accretion rate)
+            |                        | AAGN2: Jet ejection speed or burstiness
+ SIMBA      | Hydrodynamic / GIZMO   | ASN1: Mass loading of galactic winds
+            |                        | ASN2: Speed of galactic winds
+            |                        | AAGN1: Momentum flux in QSO and jet modes of feedback
+            |                        | AAGN2: Jet speed in kinetic mode of feedback
+ Astrid     | Hydrodynamic / pSPH    | ASN1: (Energy of galactic winds)/SFR
+            |                        | ASN2: Speed of galactic winds
+            |                        | AAGN1: Energy/(BH accretion rate)
+            |                        | AAGN2: Thermal feedback efficiency
339
+ Table 1. Summary of the three feedback models used in this analysis. For each model, four feedback parameters are varied: AAGN1,
340
+ AAGN2, ASN1, and ASN2. The meanings of these parameters are different for each model, and are summarized in the rightmost column.
341
+ In addition to these four astrophysical parameters, the cosmological parameters Ωm and σ8 were also varied.
342
+ [Figure 1: three panels plotting ∆P/PDMO vs k (h/Mpc), ∆Beq/Beq;DMO vs keq (h/Mpc), and ∆Bsq/Bsq;DMO vs ksq (h/Mpc), for Illustris-TNG, Illustris-TNG (LH suite), SIMBA and Astrid.]
382
+ Figure 1. Far left: Baryonic suppression of the matter power spectrum, ∆P/PDMO, in the CAMELS simulations. The dark-blue, red
383
+ and orange shaded regions correspond to the 1σ range of the cosmic variance (CV) suite of TNG, SIMBA and Astrid simulations,
384
+ respectively. The light-blue region corresponds to the 1σ range associated with the latin hypercube (LH) suite of TNG, illustrating the
385
+ range of feedback models explored across all parameter values. Middle and right panels: the impact of baryonic feedback on the matter
386
+ bispectrum for equilateral and squeezed triangle configurations, respectively.
387
+ triangle configurations (equilateral and squeezed). To com-
+ pute these quantities, we use the matter power spectra and
389
+ bispectra of the hydrodynamical simulations (hydro) and the
390
+ dark-matter only (DMO) simulations generated at varying
391
+ initial conditions (ICs). For each of the 27 unique IC runs,
392
+ we calculate the ratios ∆P/PDMO = (Phydro−PDMO)/PDMO
393
+ and ∆B/BDMO = (Bhydro − BDMO)/BDMO. As the hydro-
394
+ dynamical and the N-body simulations are run with the same
395
+ initial conditions, the ratios ∆P/PDMO and ∆B/BDMO are
396
+ roughly independent of sample variance.
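In code, this ratio computation for runs with matched initial conditions reduces to an element-wise operation followed by an average over realizations. The sketch below uses made-up stand-in numbers; only the arithmetic is meant to be illustrative:

```python
import numpy as np

def baryonic_suppression(spec_hydro, spec_dmo):
    """Fractional change (hydro - DMO) / DMO for a power spectrum or
    bispectrum measured from runs with matched initial conditions."""
    spec_hydro = np.asarray(spec_hydro, dtype=float)
    spec_dmo = np.asarray(spec_dmo, dtype=float)
    return (spec_hydro - spec_dmo) / spec_dmo

# Toy numbers standing in for P(k) from two matched-IC realizations;
# averaging over realizations beats down the stochastic feedback scatter.
ratios = np.array([baryonic_suppression([0.95, 0.80], [1.0, 1.0]),
                   baryonic_suppression([0.97, 0.82], [1.0, 1.0])])
mean_dP = ratios.mean(axis=0)  # per-k mean over the IC realizations
```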
397
+ It is clear that the amplitude of suppression of the
398
+ small-scale matter power spectrum can be significant: sup-
399
+ pression on the order of tens of percent is reached for all
400
+ three simulations. It is also clear that the impact is sig-
401
+ nificantly different between the three simulations. Even for
402
+ the simulations in closest agreement (TNG and Astrid), the
403
+ measurements of ∆P/PDMO disagree by more than a fac-
404
+ tor of two at k = 5 h/Mpc. The width of the curves in
405
+ Fig. 1 represents the standard deviation measured across
406
+ the cosmic variance simulations, which all have the same
407
+ parameter values but different initial conditions. For the bis-
408
+ pectrum, we show both the equilateral and squeezed trian-
409
+ gle configurations with cosine of angle between long sides
410
+ fixed to µ = 0.9. Interestingly, the spread in ∆P/PDMO
411
+ and ∆B/BDMO increases with increasing k over the range
412
+ 0.1 h/Mpc ≲ k ≲ 10 h/Mpc. This increase is driven by
413
+ stochasticity arising from baryonic feedback. The middle
414
+ and right panels show the impact of feedback on the bis-
415
+ pectrum for the equilateral and squeezed triangle configura-
416
+ tions, respectively.
417
+ Throughout this work, we will focus on the regime
418
+ 0.3 h/Mpc < k < 10 h/Mpc. Larger-scale modes are not
+ present in the (25 Mpc/h)³ CAMELS simulations, and in
420
+ any case, the impact of feedback on large scales is typically
421
+ small. Much smaller scales, on the other hand, are difficult to
422
+ model even in the absence of baryonic feedback (Schneider
423
+ et al. 2016). In Appendix A we show how the matter power
424
+ suppression changes when varying the resolution and volume
425
+ of the simulation boxes. When comparing with the original
426
+ TNG boxes, we find that while the box sizes do not change
427
+ the measured power suppression significantly, the resolution
432
+ of the boxes has a non-negligible impact. This is expected
433
+ since the physical effects of feedback mechanisms depend on
434
+ the resolution of the simulations. Note that the errorbars
435
+ presented in Fig. 1 will also depend on the default choice of
436
+ feedback values assumed.
437
+ 2.3 Measuring gas profiles around halos
439
+ We use 3D grids of various fields (e.g. gas density and pres-
440
+ sure) made available by the CAMELS team to extract the
441
+ profiles of these fields around dark matter halos. The grids
442
+ are generated with resolution of 0.05 Mpc/h. Following van
443
+ Daalen et al. (2020), we define fb as (Mgas + Mstars)/Mtotal,
444
+ where Mgas, Mstars and Mtotal are the mass in gas, stars and
445
+ all the components, respectively. The gas mass is computed
446
+ by integrating the gas number density profile around each
447
+ halo. We typically measure fb within the spherical overden-
448
+ sity radius r500c.4
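A minimal sketch of this measurement, assuming a spherically averaged gas density profile sampled on radial bins (all names and units below are illustrative, not the CAMELS grid machinery):

```python
import numpy as np

def baryon_fraction(r, rho_gas, M_stars, M_total, r500c):
    """f_b = (M_gas + M_stars) / M_total within r500c, with
    M_gas = integral of 4*pi*r^2 rho_gas(r) dr via the trapezoid rule.
    r and rho_gas sample the radial gas density profile; units must
    be mutually consistent."""
    mask = r <= r500c
    integrand = 4.0 * np.pi * r[mask]**2 * rho_gas[mask]
    M_gas = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r[mask]))
    return (M_gas + M_stars) / M_total

# Sanity check: a constant density 3/(4*pi) encloses unit mass at r = 1.
r = np.linspace(0.0, 1.0, 2001)
rho = np.full_like(r, 3.0 / (4.0 * np.pi))
fb = baryon_fraction(r, rho, M_stars=0.1, M_total=10.0, r500c=1.0)
```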
449
+ The SZ effect is sensitive to the electron pressure. We
450
+ compute the electron pressure profiles, Pe, using Pe =
451
+ 2(XH + 1)/(5XH + 3)Pth, where Pth is the total thermal
452
+ pressure, and XH = 0.76 is the primordial hydrogen frac-
453
+ tion. Given the electron pressure profile, we measure the
454
+ integrated SZ signal within r500c as:
455
+ Y500c = [σT/(me c²)] ∫₀^r500c 4π r² Pe(r) dr,   (1)
+ where σT is the Thomson scattering cross-section, me is the
463
+ electron mass and c is the speed of light.
464
+ We normalize the SZ observables by the self-similar ex-
465
+ pectation (Battaglia et al. 2012b),5
466
+ Y^SS = 131.7 h70⁻¹ [M500c/(10¹⁵ h70⁻¹ M⊙)]^(5/3) (Ωb/0.043) (0.25/Ωm) kpc²,   (2)
477
+ where M500c is the mass inside r500c and h70 = h/0.7. This cal-
+ culation, which scales as M^(5/3), assumes hydrostatic equilib-
+ rium and that the baryon fraction equals the cosmic baryon
+ fraction. Hence, deviations from this self-similar scaling
481
+ provide a probe of the effects of baryonic feedback. Our final
482
+ SZ observable is defined as Y500c/Y^SS. On the other hand,
+ the amplitude of the pressure profile approximately scales
+ as M^(2/3). Therefore, when considering the pressure profile
+ as the observable, we factor out the M^(2/3) scaling.
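Eq. (2) is straightforward to encode. In the sketch below the mass is assumed to be given in units of h70⁻¹ M⊙, and the function name is our own:

```python
def Y_self_similar(M500c, Omega_b, Omega_m, h=0.7):
    """Self-similar expectation of Eq. (2) (after Battaglia et al. 2012b):
    Y^SS = 131.7 h70^-1 (M500c / 1e15 h70^-1 Msun)^(5/3)
           * (Omega_b / 0.043) * (0.25 / Omega_m)  [kpc^2],
    with M500c given in units of h70^-1 Msun and h70 = h / 0.7."""
    h70 = h / 0.7
    return (131.7 / h70) * (M500c / 1e15) ** (5.0 / 3.0) \
        * (Omega_b / 0.043) * (0.25 / Omega_m)
```

At M500c = 10¹⁵ h70⁻¹ M⊙ with Ωb = 0.043 and Ωm = 0.25 this returns the 131.7 kpc² normalization, and doubling the mass scales Y^SS by 2^(5/3), as the M^(5/3) scaling requires.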
486
+ 3 RESULTS I: SIMULATIONS
+ 3.1 Inferring feedback parameters from fb and y
490
+ We first consider how the halo Y signal can be used to con-
491
+ strain the parameters describing the subgrid physics mod-
492
+ els. This question has been previously investigated using the
493
+ CAMELS simulations by Moser et al. (2022) and Wadekar
494
+ 4 We define the spherical overdensity radius r∆c (where ∆ = 200, 500)
+ and overdensity mass M∆c such that the mean density within r∆c is
+ ∆ times the critical density ρcrit: M∆c = ∆ (4/3) π r∆c³ ρcrit.
500
+ 5 Note that we use the spherical overdensity mass corresponding to
+ ∆ = 500 and hence adjust the coefficients accordingly, while keeping
+ the other approximations used in their derivation the same.
503
+ et al. (2022). The rest of our analysis will focus on constrain-
504
+ ing changes to the power spectrum and bispectrum, and our
505
+ intention here is mainly to provide a basis of comparison for
506
+ those results.
507
+ Similar to Wadekar et al. (2022), we treat the mean
508
+ log(Y500c/M^(5/3)) value of all the halos in two mass bins
+ (10¹² < M (M⊙/h) < 5 × 10¹² and 5 × 10¹² < M (M⊙/h) <
+ 10¹⁴) as our observable; we refer to this observable as ⃗d.
511
+ In this section, we restrict our analysis to only the TNG
512
+ simulations. Here and throughout our investigations with
513
+ CAMELS we ignore the contributions of measurement un-
514
+ certainty since our intention is mainly to assess the infor-
515
+ mation content of the SZ signals. We therefore use the CV
516
+ simulations to determine the covariance, C, of the ⃗d. Note
517
+ that the level of cosmic variance will depend on the volume
518
+ probed, and can be quite large for the CAMELS simulations.
519
+ Given this covariance, we use the Fisher matrix formalism
520
+ to forecast the precision with which the feedback and cos-
521
+ mological parameters can be constrained.
522
+ The Fisher matrix, Fij, is given by
523
+ Fij = (∂⃗dᵀ/∂θi) C⁻¹ (∂⃗d/∂θj),   (3)
527
+ where θi refers to the ith parameter value. Calculation of
528
+ the derivatives ∂ ⃗d/∂θi is complicated by the large amount
529
+ of stochasticity between the CAMELS simulations. To per-
530
+ form the derivative calculation, we use a radial basis function
531
+ interpolation method based on Moser et al. (2022); Cromer
532
+ et al. (2022). We show an example of the derivative calcu-
533
+ lation in Appendix B. We additionally assume a Gaussian
534
+ prior on parameter p with σ(ln p) = 1 for the feedback pa-
535
+ rameters and σ(p) = 1 for the cosmological parameters. The
536
+ forecast parameter covariance matrix, Cp, is then related to
537
+ the Fisher matrix by Cp = F−1.
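The forecast machinery of Eq. (3), together with the diagonal Gaussian priors and Cp = F⁻¹, can be sketched as follows. The array shapes and names are our own, and this does not reproduce the interpolation-based derivative estimate used in the text:

```python
import numpy as np

def fisher_parameter_cov(deriv, data_cov, prior_sigma):
    """Fisher forecast of Eq. (3): F_ij = (dd/dtheta_i)^T C^-1 (dd/dtheta_j),
    plus diagonal Gaussian prior information 1/sigma_p^2; returns the
    forecast parameter covariance C_p = F^-1.
    deriv: (n_params, n_data) array of derivatives of the observable."""
    D = np.atleast_2d(np.asarray(deriv, dtype=float))
    C_inv = np.linalg.inv(np.atleast_2d(np.asarray(data_cov, dtype=float)))
    F = D @ C_inv @ D.T
    F = F + np.diag(1.0 / np.asarray(prior_sigma, dtype=float) ** 2)
    return np.linalg.inv(F)

# Two toy parameters measured independently with unit data covariance
# and very weak priors: the parameter covariance is close to identity.
C_p = fisher_parameter_cov(np.eye(2), np.eye(2), [1e6, 1e6])
```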
538
+ The parameter constraints corresponding to our calcu-
539
+ lated Fisher matrix are shown in Fig. 2. We show results only
540
+ for Ωm, ASN1 and AAGN2, but additionally marginalize over
541
+ σ8, ASN2 and AAGN1. The degeneracy directions seen in our
542
+ results are consistent with those in Wadekar et al. (2022).
543
+ We find a weaker constraint on AAGN2, likely owing to
544
+ the large sample variance contribution to our calculation.
545
+ It is clear from Fig. 2 that the marginalized constraints
546
+ on the feedback parameters are weak. If information about
547
+ Ωm is not used, we effectively have no information about
548
+ the feedback parameters. Even when Ωm is fixed, the con-
549
+ straints on the feedback parameters are not very precise.
550
+ This finding is consistent with Wadekar et al. (2022), for
551
+ which measurement uncertainty was the main source of vari-
552
+ ance rather than sample variance. Part of the reason for the
553
+ poor constraints is the degeneracy between the AGN and SN
554
+ parameters. As we show below, SN and AGN feedback can
+ have opposite impacts on the Y signal; moreover, even
+ AAGN1 and AAGN2 can have opposite impacts on
557
+ Y . These degeneracies, as well as degeneracies with cosmo-
558
+ logical parameters like Ωm, make it difficult to extract tight
559
+ constraints on the feedback parameters from measurements
560
+ of Y . However, for the purposes of cosmology, we are ul-
561
+ timately most interested in the impact of feedback on the
562
+ matter distribution, and not the values of the feedback pa-
563
+ rameters themselves. These considerations motivate us to
564
+ instead explore direct inference of changes to the statistics
565
+ [Figure 2: corner plot of constraints on Ωm, log(ASN1) and log(AAGN2), with contours for "Free Ωm and σ8" and "Fixed Ωm and σ8".]
590
+ Figure 2. Forecast constraints on the feedback parameters when
591
+ log Y500c/Y SS in two halo mass bins is treated as the observable.
592
+ Even when the cosmological model is fixed (red contours), the
593
+ AGN parameters (e.g. AAGN2) remain effectively unconstrained
594
+ (note that we impose a Gaussian prior with σ(ln p) = 1 on all feed-
595
+ back parameters, p). When the cosmological model is free (blue
596
+ contours), all feedback parameters are unconstrained. We assume
597
+ that the only contribution to the variance of the observable is
598
+ sample variance coming from the finite volume of the CAMELS
599
+ simulations.
600
+ of the matter distribution from the Y observables. This will
601
+ be the focus of the rest of the paper.
602
+ 3.2 fb and y as probes of baryonic effects on the matter power spectrum
605
+ As discussed above, van Daalen et al. (2020) observed a tight
606
+ correlation between suppression of the matter power spec-
607
+ trum and the baryon fraction, fb, in halos with 6 × 10¹³ M⊙ ≲
+ M500c ≲ 10¹⁴ M⊙. That relation was found to hold regard-
609
+ less of the details of the feedback implementation, suggest-
610
+ ing that by measuring fb, one could robustly infer the im-
611
+ pact of baryonic feedback on the power spectrum. We be-
612
+ gin by investigating the connection between matter power
613
+ spectrum suppression and integrated tSZ parameter in low-
614
+ mass, M ∼ 10¹³ M⊙, halos to test whether a similar correlation ex-
+ ists (cf. Delgado et al. (2023) for a similar figure between fb
616
+ and ∆P/PDMO). We also consider a wider range of feedback
617
+ models than van Daalen et al. (2020), including the SIMBA
618
+ and Astrid models.
619
+ Fig. 3 shows the impact of cosmological and feed-
620
+ back parameters on the relationship between the power
621
+ spectrum suppression (∆P/PDMO) and the ratio Y500c/Y SS
622
+ for the SIMBA simulations. Each point corresponds to a
623
+ single simulation, taking the average over all halos with
624
+ 10¹³ < M (M⊙/h) < 10¹⁴ when computing Y500c/Y^SS. Note
625
+ that since the halo mass function rapidly declines at high
626
+ masses, the average will be dominated by the low mass ha-
627
+ los. We observe that the largest suppression (i.e. more nega-
628
+ tive ∆P/PDMO) occurs when AAGN2 is large. This is caused
629
+ by powerful AGN jet-mode feedback ejecting gas from halos,
630
+ [Figure 3: six panels of ∆P/PDMO (at k = 2 h/Mpc) vs Y500c/Y^SS, with points colored by Ωm, σ8, ASN1, AAGN1, ASN2 and AAGN2, respectively.]
688
+ Figure 3. We show the relation between matter power suppres-
689
+ sion at k = 2h/Mpc and the integrated tSZ signal, Y500c/Y SS,
690
+ of halos in the mass range 10¹³ < M (M⊙/h) < 10¹⁴ in the
+ SIMBA simulation suite. In each of the six panels, the points are col-
692
+ ored corresponding to the parameter value given in the associated
693
+ colorbar.
694
+ leading to a significant reduction in the matter power spec-
699
+ trum, as described by e.g. van Daalen et al. (2020); Borrow
700
+ et al. (2020); Gebhardt et al. (2023). For SIMBA, the pa-
701
+ rameter AAGN2 controls the velocity of the ejected gas, with
702
+ higher velocities (i.e. higher AAGN2) leading to gas ejected
703
+ to larger distances. On the other hand, when ASN2 is large,
704
+ ∆P/PDMO is small. This is because efficient supernova feed-
+ back prevents the formation of massive galaxies which host
+ AGN and hence reduces the strength of the AGN feedback.
707
+ The parameter AAGN1, on the other hand, controls the radia-
708
+ tive quasar mode of feedback, which has slower gas outflows
709
+ and thus a smaller impact on the matter distribution.
710
+ It is also clear from Fig. 3 that increasing Ωm reduces
711
+ |∆P/PDMO|, relatively independently of the other parame-
712
+ ters. By increasing Ωm, the ratio Ωb/Ωm decreases, meaning
713
+ that halos of a given mass have fewer baryons, and the im-
714
+ pact of feedback is therefore reduced. We propose a very
715
+ simple toy model for this effect in §3.3.
716
+ The impact of σ8 in Fig. 3 is less clear. For halos in
717
+ the mass range shown, we find that increasing σ8 leads to a
718
+ roughly monotonic decrease in Y500c (and fb), presumably
719
+ because higher σ8 means that there are more halos amongst
720
+ which the same amount of baryons must be distributed. This
721
+ effect would not occur for cluster-scale halos because these
722
+ are rare and large enough to gravitationally dominate their
723
+ local environments, giving them fb ∼ Ωb/Ωm, regardless of
724
+ σ8. In any case, no clear trend with σ8 is seen in Fig. 3
725
+ because σ8 does not correlate strongly with ∆P/PDMO.
726
+ Fig. 4 shows the relationship between ∆P/PDMO at
727
+ k = 2 h/Mpc and fb or Y500c in different halo mass bins
728
+ and for different amounts of feedback, colored by the value
729
+ of AAGN2. As in Fig. 3, each point represents an average
730
+ over all halos in the indicated mass range for a particular
731
+ CAMELS simulation (i.e. at fixed values of cosmological and
732
+ feedback parameters). Note that the meaning of AAGN2 is
733
+ not exactly the same across the different feedback models,
734
+ as noted in §2. For TNG and SIMBA we expect increasing
735
+ AAGN2 to lead to stronger AGN feedback driving more gas
736
+ out of the halos, leading to more power suppression with-
737
+ out strongly regulating the growth of black holes. However,
738
+ for Astrid, increasing the AAGN2 parameter more strongly
+ regulates and suppresses black hole growth, since it
+ controls the efficiency of the thermal mode of AGN feedback
+ (Ni et al. 2022). This drastically reduces the number of high-
+ mass black holes and hence effectively reduces the amount
743
+ of feedback that can push the gas out of the halos, leading
744
+ to less matter power suppression. We see this difference re-
745
+ flected in Fig. 4 where for the Astrid simulations the points
746
+ corresponding to high AAGN2, result in ∆P/PDMO ∼ 0, in
747
+ contrast to TNG and SIMBA suite of simulations.
748
+ For the highest mass bin (10¹³ < M (M⊙/h) < 10¹⁴,
749
+ rightmost column of Fig. 4) our results are in agreement with
750
+ van Daalen et al. (2020): we find that there is a robust corre-
+ lation between fb/(Ωb/Ωm) and the matter power
752
+ suppression (also see Delgado et al. (2023)). This relation is
753
+ roughly consistent across different feedback subgrid models,
754
+ although the different models appear to populate different
755
+ parts of this relation. Moreover, varying AAGN2 appears to
756
+ move points along this relation, rather than broadening the
757
+ relation. This is in contrast to Ωm, which as shown in Fig. 3,
758
+ tends to move simulations in the direction orthogonal to the
759
+ narrow Y500c-∆P/PDMO locus. For this reason, and given
760
+ current constraints on Ωm, we restrict Fig. 4 to simulations
761
+ with 0.2 < Ωm < 0.4. The dashed curves shown in Fig. 4
762
+ correspond to the toy model discussed in §3.3.
763
+ At low halo mass, the relation between fb/(Ωb/Ωm)
764
+ and ∆P/PDMO appears similar to that for the high-mass
765
+ bin, although it is somewhat flatter at high fb, and some-
766
+ what steeper at low fb. Again the results are fairly consistent
767
+ across the different feedback prescriptions, although points
768
+ with high fb/(Ωb/Ωm) are largely absent for SIMBA. This
769
+ is because the feedback mechanisms are highly efficient in
770
+ SIMBA, driving the gas out of their parent halos.
771
+ The relationships between Y and ∆P/PDMO appear
772
+ quite similar to those between ∆P/PDMO and fb/(Ωb/Ωm).
773
+ This is not too surprising because Y is sensitive to the gas
774
+ density, which dominates fb/(Ωb/Ωm). However, Y is also
775
+ sensitive to the gas temperature. Our results suggest that
776
+ variations in gas temperature are not significantly impact-
777
+ ing the Y500c-∆P/PDMO relation. The possibility of using
778
+ the tSZ signal to infer the impact of feedback on the matter
779
+ distribution rather than fb/(Ωb/Ωm) is therefore appealing.
780
+ This will be the focus of the remainder of the paper.
781
+ Fig. 5 shows the same quantities as Fig. 4, but now for
782
+ a fixed halo mass range (10¹³ < M (M⊙/h) < 10¹⁴), fixed
783
+ subgrid prescription (TNG), and varying values of k. We
784
+ find roughly similar results when using the different sub-
785
+ grid physics prescriptions. At low k, we find that there is
786
+ a regime at high fb/(Ωb/Ωm) for which ∆P/PDMO changes
787
+ negligibly. It is only when fb/(Ωb/Ωm) becomes very low
788
+ that ∆P/PDMO begins to change. On the other hand, at
789
+ high k, there is a near-linear relation between fb/(Ωb/Ωm)
790
+ and ∆P/PDMO.
791
+ 3.3 A toy model for power suppression
793
+ We now describe a simple model for the effects of feedback
794
+ on the relation between fb or Y and ∆P/PDMO that ex-
795
+ plains some of the features seen in Figs. 3, 4 and 5. We
796
+ assume in this model that it is removal of gas from halos by
797
+ AGN feedback that is responsible for changes to the matter
798
+ power spectrum. SN feedback, on the other hand, can pre-
799
+ vent gas from accreting onto the SMBH and therefore reduce
800
+ the impact of AGN feedback (Anglés-Alcázar et al. 2017c;
801
+ Habouzit et al. 2017). This scenario is consistent with the
802
+ fact that at high SN feedback, we see that ∆P/PDMO goes
803
+ to zero (second panel from the bottom in Fig. 3). Stellar
804
+ feedback can also prevent gas from accreting onto low-mass
805
+ halos (Pandya et al. 2020, 2021). In some sense, the dis-
806
+ tinction between gas that is ejected by AGN and gas that is
807
+ prevented from accreting onto halos by stellar feedback does
808
+ not matter for our simple model. Rather, all that matters
809
+ is that some amount of gas that would otherwise be in the
810
+ halo is instead outside of the halo as a result of feedback
811
+ effects, and it is this gas which is responsible for changes to
812
+ the matter power spectrum.
813
+ We identify three relevant scales: (1) the halo radius,
814
+ Rh, (2) the distance to which gas is ejected by the AGN,
815
+ Rej, and (3) the scale at which the power spectrum is mea-
816
+ sured, 2π/k. If Rej ≪ 2π/k, then there will be no impact
817
+ on ∆P at k: this corresponds to a rearrangement of the
818
+ matter distribution on scales well below where we measure
819
+ the power spectrum. If, on the other hand, Rej ≪ Rh, then
820
+ there will be no impact on fb or Y , since the gas is not
821
+ [Figure 4: panels of ∆P/PDMO vs fb/(Ωb/Ωm) and vs Y500c/Y^SS for Illustris-TNG, SIMBA and Astrid, in the mass bins 5 × 10¹² < M (M⊙/h) < 10¹³ and 10¹³ < M (M⊙/h) < 10¹⁴, colored by AAGN2.]
886
+ Figure 4. Impact of baryonic physics on the matter power spectrum at k = 2h/Mpc for the TNG, SIMBA and Astrid simulations
887
+ (top, middle, and bottom rows). Each point corresponds to an average across halos in the indicated mass ranges in a different CAMELS
888
+ simulation. We restrict the figure to simulations that have 0.2 < Ωm < 0.4. The dashed curves illustrate the behavior of the model
889
+ described in §3.3 when the gas ejection distance is large compared to the halo radius and 2π/k.
890
+ [Figure 5: ∆P/PDMO vs fb/(Ωb/Ωm) at k = 0.6, 1.0, 5.0 and 10.0 h/Mpc, colored by AAGN2.]
921
+ Figure 5. Similar to Fig. 4, but for different values of k. For simplicity, we show only the TNG simulations for halos in the mass range
922
+ 10¹³ < M (M⊙/h) < 10¹⁴. The dashed curves illustrate the behavior of the model described in §3.3 in the regime that the radius to
923
+ which gas is ejected by AGN is larger than the halo radius, and larger than 2π/k. As expected, this model performs best in the limit of
924
+ high k and large halo mass.
925
+ ejected out of the halo. We therefore consider four regimes
926
+ defined by the relative amplitudes of Rh, Rej, and 2π/k, as
927
+ described below. Note that there is not a one-to-one corre-
928
+ spondence between physical scale in configuration space and
929
+ 2π/k; therefore, the inequalities below should be considered
930
+ as approximate. The four regimes are:
931
+ • Regime 1: Rej < Rh and Rej < 2π/k. In this regime,
932
+ changes to the feedback parameters have no impact on fb or
933
+ ∆P.
934
+ • Regime 2: Rej > Rh and Rej < 2π/k. In this regime,
935
+ changes to the feedback parameters result in movement
936
+ along the fb or Y axis without changing ∆P. Gas is be-
937
+ ing removed from the halo, but the resultant changes to the
938
+ matter distribution are below the scale at which we measure
939
+ the power spectrum. Note that Regime 2 cannot occur when
940
+ Rh > 2π/k (i.e. high-mass halos at large k).
941
+ • Regime 3: Rej > Rh and Rej > 2π/k. In this regime, chang-
946
+ ing the feedback amplitude directly changes the amount of
947
+ gas ejected from halos as well as ∆P/PDMO.
948
+ • Regime 4: Rej < Rh and Rej > 2π/k. In this regime, gas is
949
+ not ejected out of the halo, so fb and Y should not change.
950
+ In principle, the redistribution of gas within the halo could
951
+ lead to changes in ∆P/PDMO. However, as we discuss below,
952
+ this does not seem to happen in practice.
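The four regimes above can be encoded as a small classifier. This is purely the toy model's bookkeeping, with the approximate inequalities taken at face value and the names chosen by us:

```python
import math

def feedback_regime(R_ej, R_h, k):
    """Classify the four toy-model regimes from the ejection radius R_ej,
    the halo radius R_h (both in Mpc/h) and the wavenumber k (h/Mpc).
    The configuration-space scale associated with k is taken as 2*pi/k;
    the inequalities are approximate, as noted in the text."""
    scale = 2.0 * math.pi / k
    if R_ej < R_h and R_ej < scale:
        return 1  # no impact on f_b, Y or dP
    if R_ej > R_h and R_ej < scale:
        return 2  # f_b and Y drop, dP at this k unchanged
    if R_ej > R_h and R_ej > scale:
        return 3  # gas leaves the halo and suppresses P(k)
    return 4      # gas rearranged inside the halo only
```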
953
+ Let us now consider the behavior of ∆P/PDMO and fb
954
+ or Y as the feedback parameters are varied in Regime 3. A
955
+ halo of mass M is associated with an overdensity δm in the
956
+ absence of feedback, which is changed to δ′
957
+ m due to ejec-
958
+ tion of baryons as a result of feedback. In Regime 3, some
959
+ amount of gas, Mej, is completely removed from the halo.
960
+ This changes the size of the overdensity associated with the
961
+ halo to
962
+ δ′m / δm = 1 − Mej/M.   (4)
969
+ The change to the power spectrum is then
970
+ ∆P/PDMO ≈ (δ′m/δm)² − 1 ≈ −2 Mej/M,   (5)
980
+ where we have assumed that Mej is small compared to M.
981
+ We have ignored the k dependence here, but in Regime 3,
982
+ the ejection radius is larger than the scale of interest, so the
983
+ calculated ∆P/PDMO should apply across a range of k in
984
+ this regime.
985
+ The ejected gas mass can be related to the gas mass
986
+ in the absence of feedback. We write the gas mass in the
987
+ absence of feedback as fc(Ωb/Ωm)M, where fc encapsulates
988
+ non-feedback processes that result in the halo having less
989
+ than the cosmic baryon fraction. We then have
990
+ Mej = fc(Ωb/Ωm)M − fbM − M0,   (6)
994
+ where M0 is the mass that has been removed from the
995
+ gaseous halo, but that does not change the power spectrum,
996
+ e.g. the conversion of gas to stars. Substituting into Eq. 5,
997
+ we have
998
+ ∆P/PDMO = −2 fc (Ωb/Ωm) [1 − fbΩm/(fcΩb) − ΩmM0/(fcΩbM)].   (7)
1008
+ In other words, for Regime 3, we find a linear relation be-
1009
+ tween ∆P/PDMO and fbΩm/Ωb. For high mass halos, we
1010
+ should have fc ≈ 1 and M0/M ≈ 0. In this limit, the rela-
1011
+ tionship between fb and ∆P/PDMO becomes
1012
+ ∆P/PDMO = −2 (Ωb/Ωm) [1 − fbΩm/Ωb],   (8)
1021
+ which is linear between the endpoints (∆P/PDMO, fbΩm/Ωb) = (−2Ωb/Ωm, 0)
+ and (∆P/PDMO, fbΩm/Ωb) = (0, 1). We show this relation
+ as the dashed line in the fb columns of Figs. 4 and 5.
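The linear Regime-3 relation of Eq. (8) is simple to encode and check at its endpoints; the default Ω values below are illustrative, not the CAMELS fiducial:

```python
def dP_over_P_regime3(fb_over_cosmic, Omega_b=0.049, Omega_m=0.3):
    """Eq. (8), valid with f_c ~ 1 and M0 ~ 0:
    dP/P_DMO = -2 (Omega_b/Omega_m) (1 - f_b Omega_m/Omega_b),
    where fb_over_cosmic = f_b / (Omega_b/Omega_m)."""
    return -2.0 * (Omega_b / Omega_m) * (1.0 - fb_over_cosmic)
```

A halo retaining the cosmic baryon fraction (fb_over_cosmic = 1) gives no suppression, while complete gas removal gives the maximal value −2Ωb/Ωm.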
1035
+ We can repeat the above argument for Y . Unlike the
1036
+ case with fb, processes other than the removal of gas may
1037
+ reduce Y ; these include, e.g., changes to the gas temperature
1038
+ in the absence of AGN feedback, or nonthermal pressure sup-
1039
+ port. We account for these with a term Y0, defined such that
1040
+ when Mej = M0 = 0, we have Y + Y0 = fc(Ωb/Ωm)MT/α,
1041
+ where we have assumed constant gas temperature, T, and
1042
+ α is a dimensionful constant of proportionality. We ignore
1043
+ detailed modeling of variations in the temperature of the gas
+ due to feedback and departures from hydrostatic equilib-
+ rium (Ostriker et al. 2005). We then have
1046
+ α(Y + Y0)/T = fc(Ωb/Ωm)M − Mej − M0.   (9)
1050
+ Substituting the above equation into Eq. 5 we have
1051
+ ∆P/PDMO = −2 fc (Ωb/Ωm) [1 − α(Y + Y0)Ωm/(fc T M Ωb) − ΩmM0/(fcΩbM)].   (10)
1064
+ Following Eq. 2, we define the self-similar value of Y , Y SS,
1065
+ via
1066
+ αY^SS/T = (Ωb/Ωm)M,   (11)
1068
+ leading to
1069
+ ∆P/PDMO = −2 fc (Ωb/Ωm) [1 − (Y + Y0)/(fc Y^SS) − ΩmM0/(fcΩbM)].   (12)
1081
+ Again taking the limit that fc ≈ 1 and M0/M ≈ 0, we have
1082
+ ∆P/PDMO = −2 (Ωb/Ωm) [1 − (Y + Y0)/Y^SS].   (13)
1092
+ (13)
1093
+ Thus, we see that in Regime 3, the relation between Y/Y SS
1094
+ and ∆P/PDMO is linear. The Y/Y^SS columns of Fig. 4 show
+ this relationship, assuming Y0 = 0.
1096
+ In summary, we interpret the results of Figs. 4 and 5 in
1097
+ the following way. Starting at low feedback amplitude, we
1098
+ are initially in Regime 1. In this regime, the simulations clus-
1099
+ ter around fbΩm/(fcΩb) ≈ 1 (or Y ≈ Y0) and ∆P/PDMO ≈ 0
1100
+ since changing the feedback parameters in this regime does
1101
+ not impact fb or ∆P/PDMO. For high mass halos, we have
1102
+ fc ≈ 1 and Y0 ≈ 0 (although SIMBA appears to have Y0 > 0,
1103
+ even at high mass); for low mass halos, fc < 1 and Y0 > 0.
1104
+ As we increase the AGN feedback amplitude, the behavior
1105
+ is different depending on halo mass and k:
1106
+ • For low halo masses or low k, increasing the AGN feed-
1107
+ back amplitude leads the simulations into Regime 2. Increas-
1108
+ ing the feedback amplitude in this regime moves points to
1109
+ lower Y/Y SS (or fbΩm/Ωb) without significantly impacting
1110
+ ∆P/PDMO. Eventually, when the feedback amplitude is suf-
1111
+ ficiently strong, these halos enter Regime 3, and we see a
1112
+ roughly linear decline in ∆P/PDMO with decreasing Y/Y SS
1113
+ (or fbΩm/Ωb), as discussed above.
1114
+ • For high mass halos and high k, we never enter Regime 2
1115
+ since it is not possible to have Rej > Rh and Rej < 2π/k
1116
+ when Rh is very large. In this case, we eventually enter
1117
+ Regime 3, leading to a linear trend of decreasing ∆P/PDMO
1118
+ with decreasing Y/Y SS or fbΩm/Ωb, as predicted by the
1119
+ above discussion. This behavior is especially clear in Fig. 5:
1120
+ at high k, the trend closely follows the predicted linear rela-
1121
+ tion. At low k, on the other hand, we see a more prominent
1122
+ Regime 2 region. The transition between these two regimes
1123
+ is expected to occur when k ∼ 2π/Rh, which is roughly
1124
+ 5 h−1Mpc for the halo mass regime shown in the figure. This
1125
+ expectation is roughly confirmed in the figure.
1126
+ Interestingly, we never see Regime 4 behavior: when the halo
1127
+ mass is large and k is large, we do not see rapid changes
1128
+ in ∆P/PDMO with little change to fb and Y . This could
1129
+ be because this regime corresponds to movement of the gas
1130
+ entirely within the halo. If the gas has time to re-equilibrate,
1131
+ it makes sense that we would see little change to ∆P/PDMO
1132
+ in this regime.
1133
MNRAS 000, 1–16 (0000)

10    Pandey et al.

3.4 Predicting the power spectrum suppression from the halo observables
While the toy model described above roughly captures the trends between Y (or fb) and ∆P/PDMO, it of course does not capture all of the physics associated with feedback. It is also clear that there is significant scatter in the relationships between observable quantities and ∆P. It is possible that this scatter is reduced in some higher-dimensional space that includes more observables. To address both of these issues, we now train statistical models to learn the relationships between observable quantities and ∆P/PDMO. We focus on results obtained with random forest regression (Breiman 2001). We have also tried using neural networks to infer these relationships, but have not found any significant improvement with respect to the random forest results, presumably because the space is low-dimensional (i.e. we consider at most about five observable quantities at a time). We leave a detailed comparison with other decision-tree-based approaches, such as gradient boosted trees (Friedman 2001), to a future study.
We train a random forest model to go from observable quantities (e.g. fb/(Ωb/Ωm) and Y500c/Y^SS) to a prediction for ∆P/PDMO at multiple k values. The random forest model uses 100 trees with a maximum depth of 10.⁶ In this section we analyze the halos in the mass bin 5 × 10¹² < Mhalo(M⊙/h) < 10¹⁴, but we also show the results for halos with lower masses in Appendix D. We also consider supplying the value of Ωm as input to the random forest, since it can be constrained precisely through other observations (e.g. primary CMB observations), and, as we showed in §3.2, the cosmological parameters can impact the observables.⁷
Ultimately, we are interested in making predictions for ∆P/PDMO using observable quantities. However, the sample variance in the CAMELS simulations limits the precision with which we can measure ∆P/PDMO. It is not possible to predict ∆P/PDMO to better than this precision. We therefore normalize the uncertainties in the RF predictions by the cosmic variance error. In order to obtain the uncertainty in the predictions, we randomly split the data into a 70% training set and a 30% test set. After training the RF regressor using the training set and a given observable, we compute the 16th and 84th percentiles of the distribution of prediction errors evaluated on the test set. This constitutes our assessment of prediction uncertainty.
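The train/test pipeline above can be sketched with scikit-learn, using the same settings quoted in the text (100 trees, maximum depth 10, a 70/30 split, and 16th/84th percentiles of the test-set errors). The mock mapping from (fb, Ωm) to ∆P/PDMO below is invented purely for demonstration and does not come from the simulations:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Mock inputs: stacked f_b/(Omega_b/Omega_m) and Omega_m per LH box,
# with an invented noisy monotonic mapping to Delta P / P_DMO.
n_boxes = 1000
fb_norm = rng.uniform(0.2, 1.0, n_boxes)
omega_m = rng.uniform(0.1, 0.5, n_boxes)
dP = -0.3 * (1.0 - fb_norm) * (0.3 / omega_m) + 0.01 * rng.normal(size=n_boxes)

X = np.column_stack([fb_norm, omega_m])
X_tr, X_te, y_tr, y_te = train_test_split(X, dP, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=100, max_depth=10, random_state=0)
rf.fit(X_tr, y_tr)

# Prediction-error distribution on the held-out 30%; the 16th and 84th
# percentiles play the role of the quoted uncertainty band.
err = rf.predict(X_te) - y_te
lo, hi = np.percentile(err, [16, 84])
```

In the paper these percentiles are additionally normalized by the cosmic variance error measured from the CV suite.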
Fig. 6 shows the accuracy of the RF predictions for ∆P/PDMO when trained on stacked fb (for halos in 5 × 10¹² < Mhalo(M⊙/h) < 10¹⁴) and Ωm, normalized to the sample variance error in ∆P/PDMO. As we will show later in this section, this combination of inputs results in precise constraints on the matter power suppression. Specifically, to obtain the constraints, after training the RF regressor on the training simulations, we predict ∆P/PDMO on the test simulation boxes at four scales. Thereafter, we create a histogram of the difference between the true and predicted ∆P/PDMO, normalized by the variance obtained from the CV set of simulations, for each respective suite of simulations (see Fig. 1). In Fig. 6, each errorbar corresponds to the 16th and 84th percentiles of this histogram and the marker corresponds to its peak. We show the results of training and testing on a single simulation suite, and also the results of training/testing across different simulation suites. It is clear that when training and testing on the same simulation suite, the RF learns a model that comes close to the best possible uncertainty on ∆P/PDMO (i.e. cosmic variance). When training on one or two simulation suites and testing on another, however, the predictions show bias at low k. This suggests that the model learned from one simulation does not generalize very well to another in this regime. This result is somewhat different from the findings of van Daalen et al. (2020), where it was found that the relationship between fb and ∆P/PDMO did generalize to different simulations. This difference may result from the fact that we are considering a wider range of feedback prescriptions than in van Daalen et al. (2020), as well as considering significant variations in cosmological parameters.

⁶ We use a publicly available code: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html. We also verified that our conclusions are robust to changing the settings of the random forest.
⁷ One might worry that using cosmological information to constrain ∆P/PDMO defeats the whole purpose of constraining ∆P/PDMO in order to improve cosmological constraints. However, observations, such as those of CMB primary anisotropies, already provide precise constraints on the matter density without using information in the small-scale matter distribution.
Fig. 6 also shows the results of testing and training on all three simulations (black points with errorbars). Encouragingly, we find that in this case, the predictions are of comparable accuracy to those obtained from training and predicting on the same simulation suite. This suggests that there is a general relationship across all feedback models that can be learned to go from Ωm and fb to ∆P/PDMO. Henceforth, we will show results trained on all simulation suites and tested on all simulation suites. Of course, this result does not imply that our results will generalize to some completely different feedback prescription.
In Fig. 7 we show the results of training the random forest on different combinations of fb, Y500c and Ωm. Consistent with the findings of van Daalen et al. (2020), we find that fb/(Ωb/Ωm) results in robust constraints on the matter power suppression (blue points with errors). These constraints come close to the cosmic variance limit across a wide range of k.

We additionally find that providing fb and Ωm as separate inputs to the RF improves the precision of the predictions for ∆P/PDMO relative to using just the combination fb/(Ωb/Ωm), with the largest improvement coming at small scales. This is not surprising given the predictions of our simple model, for which it is clear that ∆P/PDMO can be impacted by both Ωm and fb/(Ωb/Ωm) independently. Similarly, it is clear from Fig. 3 that changing Ωm changes the relationship between ∆P/PDMO and the halo gas-derived quantities (like Y and fb).
We next consider a model trained on Y500c/Y^SS (orange points in Fig. 7). This model yields reasonable predictions for ∆P/PDMO, although not quite as good as the model trained on fb/(Ωb/Ωm). The Y/Y^SS model yields somewhat larger errorbars, and the distribution of ∆P/PDMO predictions is highly asymmetric. When we train the RF model jointly on Y500c/Y^SS and Ωm (green points), we find that the predictions improve considerably, particularly at high k.
Figure 6. The random forest regressor predictions for the baryonic power suppression, ∆P/PDMO, across the LH suite of simulations at four different scales k, using the subgrid physics models of TNG, SIMBA, and Astrid. The model was trained using the average fb of halos with masses 5 × 10¹² < M(M⊙/h) < 10¹⁴ and the cosmological parameter Ωm. The errorbars indicate the uncertainty in the predictions normalized by the uncertainty in the CV suite at each scale, showing the 16th and 84th percentile error on the test set. The gray band represents the expected 1σ error from the CV suite. The model performs well when the training and test simulations are the same. When tested on an independent simulation, it remains robust at high k but becomes biased at low k. The results presented in the remainder of the paper are based on training the model on all three simulations. The data points at each scale are staggered for clarity.
Figure 7. Similar to Fig. 6, but showing results when training the RF model on different observables from all three simulations (TNG, SIMBA and Astrid) to predict ∆P/PDMO for a random subset of the three simulations not used in training. We find that jointly training on the deviation of the integrated SZ profile from the self-similar expectation, Y500c/Y^SS, and Ωm results in inference of the power suppression that is comparable to the cosmic variance errors, with small improvements when additionally adding the baryon fraction (fb) of halos in the above mass range.
In this case, the predictions are typically symmetric around the true ∆P/PDMO, have smaller uncertainty compared to the model trained on fb/(Ωb/Ωm), and comparable uncertainty to the model trained on {fb/(Ωb/Ωm), Ωm}. We thus conclude that when combined with matter density information, Y/Y^SS provides a powerful probe of baryonic effects on the matter power spectrum.
Above we have considered the integrated tSZ signal from halos, Y500c. Measurements in data, however, can potentially probe the tSZ profiles rather than only the integrated tSZ signal (although the instrumental resolution may limit the extent to which this is possible). In Fig. 8 we consider RF models trained on stacks of the full electron density and pressure profiles in the same halo mass range, instead of just the integrated quantities. The electron pressure and number density profiles are measured in eight logarithmically spaced bins between 0.1 < r/r200c < 1. We find that while the ratio Pe(r)/Pe^SS results in robust predictions for ∆P/PDMO, simultaneously providing Ωm makes the predictions more precise. Similar to the integrated profile case, we find that additionally providing the electron density profile information only marginally improves the constraints. We also show the results when jointly using the measured pressure profiles for both the low- and high-mass halos to infer the matter power suppression. We find that this leads to only marginal improvements in the constraints.

Figure 8. Same as Fig. 7, but showing results from using the full pressure profile, Pe(r), and electron number density profile, ne(r), instead of the integrated quantities. We again find that with pressure profile and Ωm information we can recover robust and precise constraints on the matter power suppression.
Note that we have input the 3D pressure and electron density profiles in this case. Even though observed SZ maps are projected quantities, we can infer the 3D pressure profiles from the model used to analyze the projected correlations.
3.5 Predicting baryonic effects on the bispectrum with fb and the electron pressure
In Fig. 9, we repeat our analysis from above to make predictions for baryonic effects on the matter bispectrum, ∆B(k)/B(k). Similar to the matter power spectrum, we train and test our model on a combination of the three simulations. We train and test on equilateral triangle bispectrum configurations with different scales k. We again see that information about the electron pressure and Ωm results in precise and unbiased constraints on the impact of baryonic physics on the bispectrum. The constraints improve as we go to small scales. In Appendix E we show a similar methodology applied to squeezed bispectrum configurations.
However, there are several important caveats to these results. The bispectrum is sensitive to high-mass (M > 5 × 10¹³ M⊙/h) halos (Foreman et al. 2020), which are missing from the CAMELS simulations. Consequently, our measurements of baryonic effects on the bispectrum can be biased when using CAMELS. The simulation resolution can also impact the bispectrum significantly. A future analysis with larger-volume simulations at high resolution could use the methodology introduced here to obtain more robust results. Finally, there is likely to be covariance between the power spectrum suppression and baryonic effects on the bispectrum, as they both stem from the same underlying physics. We defer a complete exploration of these effects to future work.
4 RESULTS II: ACTxDES MEASUREMENTS AND FORECAST

Our analysis above has resulted in a statistical model (i.e. a random forest regressor) that predicts the matter power suppression ∆P/PDMO given values of Y500c for low-mass halos. This model is robust to significant variations in the feedback prescription, at least across the SIMBA, TNG and Astrid models. We now apply this model to constraints on Y500c coming from the cross-correlation of galaxy lensing shear with tSZ maps measured using Dark Energy Survey (DES) and Atacama Cosmology Telescope (ACT) data.
Gatti et al. (2022a) and Pandey et al. (2022) measured the cross-correlations of DES galaxy lensing with Compton-y maps from a combination of Advanced ACT (Madhavacheril et al. 2020) and Planck data (Planck Collaboration et al. 2016) over an area of 400 sq. deg. They analyzed these cross-correlations using a halo model framework, in which the pressure profile in halos was parameterized using a generalized Navarro-Frenk-White profile (Navarro et al. 1996; Battaglia et al. 2012a). This pressure profile is described using four free parameters, allowing for scaling with mass, redshift and distance from the halo center. The constraints on the parameterized pressure profiles can be translated directly into constraints on Y500c for halos in the mass range relevant to our random forest models.
Figure 9. Same as Fig. 7, but for the impact of feedback on the bispectrum in equilateral triangle configurations. We find that the inclusion of pressure profile information results in unbiased constraints on feedback effects on the bispectrum.

Figure 10. Constraints on the impact of feedback on the matter power spectrum obtained by applying our trained random forest model to measurements of Y500c/Y^SS from the DESxACT analysis of Pandey et al. (2022) (black points with errorbars). We also show the expected improvements from future halo-y correlations from DESIxS4, using the constraints in Pandey et al. (2020). We compare these to the inferred constraints obtained using cosmic shear (Chen et al. 2023) and additionally including X-ray and kSZ data (Schneider et al. 2022). We also compare with the results from larger simulations: OWLS (Schaye et al. 2010), BAHAMAS (McCarthy et al. 2017) and TNG-300 (Springel et al. 2018).

We use the parameter constraints from Pandey et al. (2022) to generate 400 samples of the inferred 3D profiles of halos at z = 0 (i.e. the redshift at which the RF models are trained) in ten logarithmically spaced mass bins in the range 12.7 < log10(M/h⁻¹M⊙) < 14. We then perform the volume integral of these profiles to infer Y500c(M, z) (see Eq. 1). Next, we generate a halo-averaged value of Y500c/Y^SS for the jth sample by integrating over the halo mass distribution in CAMELS:

⟨Y500c/Y^SS⟩^j = (1/n̄^j) ∫ dM (dn/dM)^j_CAMELS [Y^j_500c(M)/Y^SS],    (14)

where n̄^j = ∫ dM (dn/dM)^j_CAMELS and (dn/dM)^j_CAMELS is a randomly chosen halo mass function from the CV set of boxes of TNG, SIMBA or Astrid. This procedure allows us to incorporate the impact and uncertainties of the CAMELS box size on the halo mass function. Note that due to the small box size of CAMELS, there is a deficit of high-mass halos and hence the functional form of the mass function
+ halos and hence the functional form of the mass function
1530
+ MNRAS 000, 1–16 (0000)
1531
+
1532
+ 14
1533
+ Pandey et al.
1534
+ differs somewhat from other fitting functions in literature,
1535
+ e.g. Tinker et al. (2008).
1536
+ Fig. 10 shows the results feeding the Y500c/Y SS values
1537
+ calculated above into our trained RF model to infer the im-
1538
+ pact of baryonic feedback on the matter power spectrum
1539
+ (black points with errorbars). The RF model used is that
1540
+ trained on the TNG, SIMBA and Astrid simulations. The
1541
+ errorbars represent the 16th and 84th percentile of the recov-
1542
+ ered ∆P/PDMO distribution using the 400 samples described
1543
+ above. Note that in this inference we fix the matter density
1544
+ parameter, Ωm = 0.3, same value as used by the CAMELS
1545
+ CV simulations as we use these to estimate the halo mass
1546
+ function.
1547
In the same figure, we also show the constraints from Chen et al. (2023) and Schneider et al. (2022) obtained from the analysis of complementary datasets. Chen et al. (2023) analyze the small-scale cosmic shear measurements from the DES Year 3 data release using a baryon correction model. Note that in this analysis, they only use a limited range of cosmologies, particularly restricting to high σ8 due to the requirements of emulator calibration. Moreover, they also impose cosmology constraints from the large-scale analysis of the DES data. Note that, unlike the procedure presented here, their modeling and constraints are sensitive to the priors on σ8. Schneider et al. (2022) analyze the X-ray data (as presented in Giri & Schneider 2021) and kSZ data from ACT and SDSS (Schaan et al. 2021) and the cosmic shear measurements from KiDS (Asgari et al. 2021), using another version of the baryon correction model. A joint analysis of these complementary datasets leads to crucial degeneracy breaking in the parameters. It would be interesting to include the tSZ observations presented here in the same framework, as they can potentially make the constraints more precise.
Several caveats about our analysis with data are in order. First, the lensing-SZ correlation is most sensitive to halos in the mass range Mhalo ≥ 10¹³ M⊙/h. However, our RF model operates on halos with masses in the range 5 × 10¹² ≤ Mhalo ≤ 10¹⁴ M⊙/h, with the limited volume of the simulations restricting the number of halos above 10¹³ M⊙/h. We have attempted to account for this selection effect by using the halo mass function from the CV sims of the CAMELS simulations when calculating the stacked profile. However, using a larger-volume simulation suite would be a better alternative (see also the discussion in Appendix A). Moreover, the CAMELS simulation suite also fixes the value of Ωb. There may be a non-trivial impact on the inference of ∆P/PDMO when varying that parameter. Note, though, that Ωb is tightly constrained by other cosmological observations. Lastly, the sensitivity of the lensing-SZ correlations using DES galaxies is between 0.1 < z < 0.6. However, in this study we extrapolate those constraints to z = 0 using the pressure profile model of Battaglia et al. (2012a). We note that inference obtained at the peak sensitivity redshift would be a better alternative, but we do not expect this to have a significant impact on the conclusions here.
In order to shift the sensitivity of the data correlations to lower halo masses, it would be preferable to analyze the galaxy-SZ and halo-SZ correlations. In Pandey et al. (2020) we forecast the constraints on the inferred 3D pressure profile from future halo-SZ correlations using DESI and CMB-S4 SZ maps for a wide range of halo masses. In Fig. 10 we also show the expected constraints on the matter power suppression using the halo-SZ correlations from halos in the range M500c > 5 × 10¹² M⊙/h. We again follow the same methodology as described above to create a stacked normalized integrated pressure (see Eq. 14). Moreover, we also fix Ωm = 0.3 to predict the matter power suppression. Note that we shift the mean value of ∆P/PDMO to the value recovered from the BAHAMAS high-AGN simulations (McCarthy et al. 2017). As we can see in Fig. 10, we can expect to obtain significantly more precise constraints from these future observations.
5 CONCLUSIONS

We have shown that the tSZ signals from low-mass halos contain significant information about the impacts of baryonic feedback on the small-scale matter distribution. Using models trained on hydrodynamical simulations with a wide range of feedback implementations, we demonstrate that information about baryonic effects on the power spectrum and bispectrum can be robustly extracted. By applying these same models to measurements with ACT and DES, we have shown that current tSZ measurements already constrain the impact of feedback on the matter distribution. Our results suggest that using simulations to learn the relationship between halo gas observables and baryonic effects on the matter distribution is a promising way forward for constraining these effects with data.

Our main findings from our explorations with the CAMELS simulations are the following:
• In agreement with van Daalen et al. (2020), we find that the baryon fraction in halos correlates with the power spectrum suppression. We find that the correlation is especially robust at small scales.
• We find (in agreement with Delgado et al. 2023) that there can be significant scatter in the relationship between baryon fraction and power spectrum suppression at low halo mass, and that the relationship varies to some degree with feedback implementation. However, the bulk trends appear to be consistent regardless of feedback implementation.
• We propose a simple model that qualitatively (and in some cases quantitatively) captures the broad features in the relationships between the impact of feedback on the power spectrum, ∆P/PDMO, at different values of k, and halo gas-related observables like fb and Y500c at different halo masses.
• Despite significant scatter in the relations between Y500c and ∆P/PDMO at low halo mass, we find that simple random forest models yield tight and robust constraints on ∆P/PDMO given information about Y500c in low-mass halos and Ωm.
• Using the pressure profile instead of just the integrated Y500c signal provides additional information about ∆P/PDMO, leading to 20-50% improvements when not using any cosmological information. When additionally providing the Ωm information, the improvements in constraints on baryonic changes to the power spectrum or bispectrum are modest when using the full pressure profile relative to integrated quantities like Y500c.
• The pressure profiles and baryon fractions also carry information about baryonic effects on the bispectrum.
Our main results from our analysis of constraints from the DESxACT shear-y correlation analysis are:

• We have used the DES-ACT measurement of the shear-tSZ correlation from Gatti et al. (2022a) and Pandey et al. (2022) to infer Y500c for halos in the mass range relevant to our random forest models. Feeding the measured Y500c into these models, we have inferred the impact of baryonic effects on the power spectrum, as shown in Fig. 10.
• We show that constraints on baryonic effects on the power spectrum will improve significantly in the future, particularly using halo catalogs from DESI and tSZ maps from CMB-S4.
With data from future galaxy and CMB surveys, we expect constraints on the tSZ signal from halos across a wide mass and redshift range to improve significantly. These improvements will come from both the galaxy side (e.g. halos detected over larger areas of the sky, down to lower halo masses, and out to higher redshifts) and the CMB side (more sensitive tSZ maps over larger areas of the sky). Our forecast for DESI and CMB Stage 4 in Fig. 10 suggests that very tight constraints can be obtained on the impact of baryonic feedback on the matter power spectrum. We expect that these constraints on the impact of baryonic feedback will enable the extraction of more cosmological information from the small-scale matter distribution.
6 ACKNOWLEDGEMENTS

DAA acknowledges support by NSF grants AST-2009687 and AST-2108944, CXO grant TM2-23006X, and Simons Foundation award CCA-1018464.
7 DATA AVAILABILITY

The TNG and SIMBA simulations used in this work are part of the CAMELS public data release (Villaescusa-Navarro et al. 2021) and are available at https://camels.readthedocs.io/en/latest/. The Astrid simulations used in this work will be made public before the end of the year 2023. The data used to make the plots presented in this paper are available upon request.
REFERENCES

Abazajian K. N., et al., 2016, arXiv e-prints, p. arXiv:1610.02743
Abbott T. M. C., et al., 2022, Phys. Rev. D, 105, 023520
Ade P., et al., 2019, J. Cosmology Astropart. Phys., 2019, 056
Amon A., et al., 2022, Phys. Rev. D, 105, 023514
Anglés-Alcázar D., Davé R., Faucher-Giguère C.-A., Özel F., Hopkins P. F., 2017a, MNRAS, 464, 2840
Anglés-Alcázar D., Faucher-Giguère C.-A., Kereš D., Hopkins P. F., Quataert E., Murray N., 2017b, MNRAS, 470, 4698
Anglés-Alcázar D., Faucher-Giguère C.-A., Quataert E., Hopkins P. F., Feldmann R., Torrey P., Wetzel A., Kereš D., 2017c, MNRAS, 472, L109
Asgari M., et al., 2021, A&A, 645, A104
Battaglia N., Bond J. R., Pfrommer C., Sievers J. L., 2012a, ApJ, 758, 74
Battaglia N., Bond J. R., Pfrommer C., Sievers J. L., 2012b, ApJ, 758, 75
Benson B. A., et al., 2014, in Holland W. S., Zmuidzinas J., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 9153, Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy VII. p. 91531P (arXiv:1407.2973), doi:10.1117/12.2057305
Bhattacharya S., Di Matteo T., Kosowsky A., 2008, MNRAS, 389, 34
Bird S., Ni Y., Di Matteo T., Croft R., Feng Y., Chen N., 2022, MNRAS, 512, 3703
Borrow J., Anglés-Alcázar D., Davé R., 2020, MNRAS, 491, 6102
Breiman L., 2001, Machine Learning, 45, 5
Chen A., et al., 2023, MNRAS, 518, 5340
Chisari N. E., et al., 2019, The Open Journal of Astrophysics, 2, 4
Cromer D., Battaglia N., Miyatake H., Simet M., 2022, Journal of Cosmology and Astroparticle Physics, 2022, 034
DESI Collaboration et al., 2016, arXiv e-prints, p. arXiv:1611.00036
Davé R., Anglés-Alcázar D., Narayanan D., Li Q., Rafieferantsoa M. H., Appleby S., 2019, MNRAS, 486, 2827
Delgado A. M., et al., 2023, in preparation
Euclid Collaboration et al., 2020, A&A, 642, A191
Foreman S., Coulton W., Villaescusa-Navarro F., Barreira A., 2020, MNRAS, 498, 2887
Friedman J. H., 2001, The Annals of Statistics, 29, 1189
Gatti M., et al., 2022a, Phys. Rev. D, 105, 123525
Gatti M., et al., 2022b, Phys. Rev. D, 106, 083509
Gebhardt M., et al., 2023, in preparation
Giri S. K., Schneider A., 2021, J. Cosmology Astropart. Phys., 2021, 046
Habouzit M., Volonteri M., Dubois Y., 2017, MNRAS, 468, 3935
Hand N., et al., 2012, Phys. Rev. Lett., 109, 041101
Henderson S. W., et al., 2016, Journal of Low Temperature Physics, 184, 772
Hill J. C., Ferraro S., Battaglia N., Liu J., Spergel D. N., 2016, Phys. Rev. Lett., 117, 051301
Madhavacheril M. S., et al., 2020, Phys. Rev. D, 102, 023534
McCarthy I. G., Schaye J., Bird S., Le Brun A. M. C., 2017, MNRAS, 465, 2936
Moser E., et al., 2022, The Astrophysical Journal, 933, 133
Navarro J. F., Frenk C. S., White S. D. M., 1996, ApJ, 462, 563
Ni Y., et al., 2022, MNRAS, 513, 670
Nicola A., et al., 2022, J. Cosmology Astropart. Phys., 2022, 046
Ostriker J. P., Bode P., Babul A., 2005, ApJ, 634, 964
Pandey S., et al., 2019, Phys. Rev. D, 100, 063519
Pandey S., Baxter E. J., Hill J. C., 2020, Phys. Rev. D, 101, 043525
Pandey S., et al., 2022, Phys. Rev. D, 105, 123526
Pandya V., et al., 2020, ApJ, 905, 4
Pandya V., et al., 2021, MNRAS, 508, 2979
Pillepich A., et al., 2018, MNRAS, 473, 4077
Planck Collaboration et al., 2016, A&A, 594, A22
Pyne S., Joachimi B., 2021, MNRAS, 503, 2300
Rudd D. H., Zentner A. R., Kravtsov A. V., 2008, ApJ, 672, 19
Sánchez J., et al., 2022, arXiv e-prints, p. arXiv:2210.08633
Scannapieco E., Thacker R. J., Couchman H. M. P., 2008, ApJ, 678, 674
Schaan E., et al., 2021, Phys. Rev. D, 103, 063513
Schaye J., et al., 2010, MNRAS, 402, 1536
Schneider A., et al., 2016, J. Cosmology Astropart. Phys., 2016, 047
Schneider A., Giri S. K., Amodeo S., Refregier A., 2022, MNRAS, 514, 3802
Secco L. F., et al., 2022, Phys. Rev. D, 105, 023515
Soergel B., et al., 2016, MNRAS, 461, 3172
Springel V., et al., 2018, MNRAS, 475, 676
The LSST Dark Energy Science Collaboration et al., 2018, arXiv e-prints, p. arXiv:1809.01669
Tinker J., Kravtsov A. V., Klypin A., Abazajian K., Warren M., Yepes G., Gottlöber S., Holz D. E., 2008, ApJ, 688, 709
Vikram V., Lidz A., Jain B., 2017, MNRAS, 467, 2315
Villaescusa-Navarro F., et al., 2021, ApJ, 915, 71
Wadekar D., et al., 2022, arXiv e-prints, p. arXiv:2209.02075
Weinberger R., et al., 2017, MNRAS, 465, 3291
van Daalen M. P., McCarthy I. G., Schaye J., 2020, MNRAS, 491, 2424

MNRAS 000, 1–16 (0000)
Pandey et al.
APPENDIX A: IMPACT OF LIMITED VOLUME OF CAMELS SIMULATIONS

In order to analyze the impact of varying box sizes and resolution on the matter power suppression, we use the TNG simulations as presented in Springel et al. (2018). In particular, we use their boxes with side lengths of 210 Mpc/h, 75 Mpc/h and 35 Mpc/h (which they refer to as TNG-300, TNG-100 and TNG-50, since the names correspond to the side lengths in units of Mpc). We then make the comparison to the 25 Mpc/h TNG boxes run from CAMELS. We use the CV set of simulations to infer the expected variance due to the stochasticity induced by changing initial conditions. Note that the hydrodynamical model is identical between the CAMELS CV runs and the bigger TNG boxes. In Fig. A1, we show the power suppression for these boxes, including the runs at varying resolution. We find that while changing box sizes gives relatively robust values of the power suppression, changing resolution can have a non-negligible impact. However, all the TNG boxes are consistent at the 2-3σ level relative to the CAMELS boxes.
APPENDIX B: EXAMPLE OF EMULATION

We present an example of the emulator constructed in §3.1 for the AAGN1 parameter in Fig. B1. This shows how we estimate the derivative of the observable (Y500c/M^5/3) in a way that is robust to stochasticity.
APPENDIX C: ROBUSTNESS OF RESULTS TO DIFFERENT TRAIN SIMULATIONS

In Fig. C1, we test the impact of changing the simulations used to train the random forest regressor. We then use these different trained models to infer the constraints on the matter power suppression from the same stacked ⟨Y500c/Y^SS⟩ as described in §4. We see that our inferred constraints remain consistent when changing the simulations.
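As an illustration of the procedure described above, the sketch below trains a random forest regressor to map SZ observables to the power suppression. This is our own illustrative reconstruction, not the authors' code: the feature set, the toy mapping from ⟨Y500c/Y^SS⟩ and Ωm to ΔP/P_DMO, and all numbers are invented for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy stand-in for a CAMELS-like training set: one row per simulation,
# features = (stacked Y500c/Y^SS ratio, Omega_m), target = Delta P / P_DMO
# at a single k bin.  The functional form below is invented.
n_sims = 500
y_ratio = rng.uniform(0.5, 1.5, n_sims)   # proxy for <Y500c / Y^SS>
omega_m = rng.uniform(0.1, 0.5, n_sims)   # matter density parameter
dp_over_p = -0.3 * (1.0 - y_ratio) ** 2 - 0.1 * omega_m

X = np.column_stack([y_ratio, omega_m])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, dp_over_p)

# Infer the suppression for a new "observed" (y_ratio, Omega_m) pair.
pred = rf.predict(np.array([[1.0, 0.3]]))
print(pred[0])
```

Retraining with different subsets of the rows plays the role of the TNG/SIMBA/Astrid train-set swap tested in this appendix.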
APPENDIX D: TEST WITH LOWER HALO MASSES

In Fig. D1, we show the constraints on the power suppression obtained by analyzing the observables from halos with lower masses, 1 × 10^12 < M(M⊙/h) < 5 × 10^12. We see that, remarkably, even these lower halo masses provide unbiased constraints on the matter power suppression, with robust inference especially at smaller scales. However, when compared to the results described in §3.2, we obtain less precise constraints. This is expected, as halos with lower masses are more susceptible to environmental effects, which induce a larger scatter in the relation between their observables (such as fb or Y500c) and their halo masses, which govern the feedback processes.

[Figure A1 shows ΔP/P_DMO as a function of k (h/Mpc): top panel for TNG-210 (N_DM = 2500³), TNG-70 (910³), TNG-35 (540³) and TNG-25 (256³); bottom panel for TNG-70 at N_DM = 1820³ and 455³.]
Figure A1. Comparison of the suppression of matter power in the CAMELS TNG simulation and simulations using the same sub-grid prescription but larger box sizes (Springel et al. 2018). We also show the 1σ and 2σ uncertainty due to cosmic variance. In the top panel we change the TNG box sizes while preserving the resolution, whereas in the bottom panel we preserve the TNG box size while changing the resolution.
APPENDIX E: TEST WITH OTHER BISPECTRUM CONFIGURATIONS

In Fig. E1, we show the constraints obtained on the suppression of the squeezed bispectrum configurations. We fix the angle between the long sides of the triangle to correspond to µ = 0.9. We again find robust inference of baryonic effects on the bispectrum when using either the integrated pressure profile or the full radial pressure profile.

This paper has been typeset from a TEX/LATEX file prepared by the author.

Probing feedback with the SZ
[Figure B1 plots log(Y500c/M^5/3) against log(AAGN1), showing the resulting emulator and the resulting derivative.]
Figure B1. The constructed emulator and resulting derivative for the AAGN1 parameter in the mass bin 10^12 < M(M⊙/h) < 5 × 10^12.
[Figure C1 plots ΔP/P_DMO against k for random forests trained on TNG+SIMBA+Astrid, TNG+SIMBA, SIMBA+Astrid and TNG+Astrid.]
Figure C1. In this figure we change the simulations used to train the RF when inferring the power suppression from the data measurements.
[Figure D1 plots the error in the ΔP/P_DMO prediction relative to CV (in σ) against k (h/Mpc) for the observable combinations CV; Y500c/Y^SS, Ωm; Pe(r)/Pe^SS; Pe(r)/Pe^SS, Ωm; and Pe(r)/Pe^SS, ne(r), Ωm.]
Figure D1. Same as Fig. 7 and Fig. 8, but obtained for lower halo masses, 1 × 10^12 < M(M⊙/h) < 5 × 10^12. We find that having pressure profile information results in unbiased constraints here as well, albeit with larger error bars.
[Figure E1 plots the error in the ΔB_sq/B_sq;DMO prediction relative to CV (in σ) against k_sq (h/Mpc) for the same observable combinations as Fig. D1.]
Figure E1. Same as Fig. 9, but for squeezed triangle configurations (µ = 0.9).
GdAzT4oBgHgl3EQfUvwS/content/tmp_files/2301.01270v1.pdf.txt ADDED
arXiv:2301.01270v1 [math.GM] 9 Dec 2022

Maurer-Cartan characterization, cohomology and deformations of equivariant Lie superalgebras

RB Yadav∗, Subir Mukhopadhyay
Sikkim University, Gangtok, Sikkim, 737102, INDIA
∗Corresponding author. Email addresses: [email protected] (RB Yadav), [email protected] (Subir Mukhopadhyay)

Abstract

In this article, we give Maurer-Cartan characterizations of equivariant Lie superalgebra structures. We introduce equivariant cohomology and equivariant formal deformation theory of Lie superalgebras. As an application of equivariant cohomology we study the equivariant formal deformation theory of Lie superalgebras. As another application we characterize equivariant central extensions of Lie superalgebras using the second equivariant cohomology. We give some examples of Lie superalgebras with an action of a group and of equivariant formal deformations of a classical Lie superalgebra.

Keywords: Lie superalgebra, cohomology, extension, formal deformations, Maurer-Cartan equation
2020 MSC: 17A70, 17B99, 16S80, 13D10, 13D03, 16E40

Preprint submitted to ...  January 4, 2023

1. Introduction

Graded Lie algebras have been a topic of interest in physics in the context of "supersymmetries" relating particles of differing statistics. In mathematics, graded Lie algebras have been studied in the context of deformation theory [1].

Lie superalgebras were studied, and a classification given, by Kac [2]. Leites [3] introduced a cohomology for Lie superalgebras. Lie superalgebras are also called Z2-graded Lie algebras by physicists.

Algebraic deformation theory was introduced by Gerstenhaber for rings and algebras [4], [5], [6], [7], [8], and the deformation theory of Lie superalgebras was introduced and studied by Binegar [9]. A Maurer-Cartan characterization was given for Lie algebra structures by Nijenhuis and Richardson in [10] and for associative algebra structures by Gerstenhaber in [11]; such a characterization for Lie superalgebra structures was given in [12].

The aim of the present paper is to give a Maurer-Cartan characterization, introduce equivariant cohomology, carry out some equivariant cohomology computations in low dimensions, introduce an equivariant formal deformation theory of Lie superalgebras, and give some examples. The paper is organized as follows. In Section 2, we recall the definition of a Lie superalgebra and give some examples. In Section 4, we give a Maurer-Cartan characterization of equivariant Lie superalgebra structures: we construct a Z × Z2-graded Lie algebra from a Z2-graded G-vector space V and show that the class of Maurer-Cartan elements of this Z × Z2-graded Lie algebra is the class of G-equivariant Lie superalgebra structures on V. In Section 5, we introduce the equivariant chain complex and equivariant cohomology of Lie superalgebras. In Section 6, we compute the cohomology of Lie superalgebras in degree 0 and dimensions 0, 1 and 2. In Section 7, we introduce the equivariant deformation theory of Lie superalgebras; there we see that infinitesimals of equivariant deformations are equivariant cocycles, and we give an example of an equivariant formal deformation of a Lie superalgebra. In Section 8, we study the equivalence of two equivariant formal deformations and prove that the infinitesimals of any two equivalent equivariant deformations are cohomologous.
2. Lie Superalgebras

In this section, we recall the definitions of Lie superalgebras and of modules over a Lie superalgebra, together with some examples of Lie superalgebras. Throughout the paper we denote a fixed field by K, and we denote the ring of formal power series with coefficients in K by K[[t]]. In any Z2-graded vector space V we use a notation in which we replace the degree deg(a) of an element a ∈ V by 'a' whenever deg(a) appears in an exponent; thus, for example, (−1)^{ab} = (−1)^{deg(a) deg(b)}.
Definition 2.1. Let V = V0 ⊕ V1 and W = W0 ⊕ W1 be Z2-graded vector spaces over a field K. A linear map f : V → W is said to be homogeneous of degree α if f(Vβ) ⊂ W_{α+β} for all β ∈ Z2 = {0, 1}. We write (−1)^{deg(f)} = (−1)^f. Elements of Vβ are called homogeneous of degree β.

Definition 2.2. A superalgebra is a Z2-graded vector space A = A0 ⊕ A1 together with a bilinear map m : A × A → A such that m(a, b) ∈ A_{α+β} for all a ∈ Aα, b ∈ Aβ.
Definition 2.3. A Lie superalgebra is a superalgebra L = L0 ⊕ L1 over a field K equipped with an operation [−, −] : L × L → L satisfying the following conditions:

1. [a, b] = −(−1)^{αβ}[b, a],
2. [a, [b, c]] = [[a, b], c] + (−1)^{αβ}[b, [a, c]] (Jacobi identity),

for all a ∈ Lα and b ∈ Lβ. Let L1 and L2 be two Lie superalgebras. A homomorphism f : L1 → L2 is a K-linear map such that f([a, b]) = [f(a), f(b)]. Given a Lie superalgebra L, [L, L] is the vector subspace of L spanned by the set {[x, y] : x, y ∈ L}. A Lie superalgebra L is called abelian if [L, L] = 0.
Example 2.1. Let V = V0̄ ⊕ V1̄ be a Z2-graded vector space, dim V0̄ = m, dim V1̄ = n. Consider the associative algebra End V of all endomorphisms of V. Define

End_i V = {a ∈ End V | aV_s ⊆ V_{i+s}}, i, s ∈ Z2.  (1)

One can easily verify that End V = End_0̄ V ⊕ End_1̄ V. The bracket [a, b] = ab − (−1)^{āb̄}ba makes End V into a Lie superalgebra, denoted by ℓ(V) or ℓ(m, n). In some (homogeneous) basis of V, ℓ(m, n) consists of block matrices of the form

\[
\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix},
\]

where α, β, γ, δ are matrices of order m × m, m × n, n × m and n × n, respectively.
Example 2.2. Define a linear function str : ℓ(V) → K by str([a, b]) = 0 for a, b ∈ ℓ(V), and str id_V = m − n; str(a) is called the supertrace of a ∈ ℓ(V). Consider the subspace

sℓ(m, n) = {a ∈ ℓ(m, n) | str a = 0}.

Clearly, sℓ(m, n) is an ideal of ℓ(m, n) of codimension 1; in particular, sℓ(m, n) is a subalgebra of ℓ(m, n). For any \(\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}\) in ℓ(m, n),

\[
\mathrm{str}\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} = \operatorname{tr}\alpha - \operatorname{tr}\delta.
\]

The algebra sℓ(n, n) contains the one-dimensional ideal {λI_{2n} : λ ∈ K}.
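The defining property str([a, b]) = 0 can be checked concretely. The sketch below is our own illustration (not part of the paper): for ℓ(2, 1) it splits random 3 × 3 matrices into their even (block-diagonal) and odd (block-off-diagonal) parts and verifies that the supertrace vanishes on all supercommutators.

```python
import numpy as np

m, n = 2, 1                      # dim V0 = 2, dim V1 = 1, so matrices are 3x3
rng = np.random.default_rng(1)

def supertrace(a):
    # str(a) = tr(alpha) - tr(delta) for the block decomposition of a
    return np.trace(a[:m, :m]) - np.trace(a[m:, m:])

def degree_parts(a):
    """Split a into its even part (diagonal blocks) and odd part (off-diagonal blocks)."""
    even = a.copy()
    even[:m, m:] = 0
    even[m:, :m] = 0
    return even, a - even

# Check str([a, b]) = 0 on homogeneous pieces of random matrices.
for _ in range(100):
    a, b = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
    for a_h, da in zip(degree_parts(a), (0, 1)):
        for b_h, db in zip(degree_parts(b), (0, 1)):
            comm = a_h @ b_h - (-1) ** (da * db) * (b_h @ a_h)
            assert abs(supertrace(comm)) < 1e-9
print("str vanishes on all supercommutators in l(2, 1).")
```

For odd-odd pairs the bracket is the anticommutator, and the cancellation tr(βγ′) = tr(γ′β) is what makes the supertrace (rather than the ordinary trace) vanish.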
Definition 2.4. [13] Let L = L0 ⊕ L1 be a Lie superalgebra. A Z2-graded vector space M = M0 ⊕ M1 over the field K is called a module over L if there exists a bilinear map [−, −] : L × M → M such that the following condition is satisfied:

[a, [b, m]] = [[a, b], m] + (−1)^{ab}[b, [a, m]],

for all a ∈ Lα, b ∈ Lβ, α, β ∈ {0, 1}, and m ∈ M.

Clearly, every Lie superalgebra is a module over itself.
3. Z2-graded Groups and their Actions on a Lie Superalgebra

Definition 3.1. We define a Z2-graded group as a group G having a subgroup G0̄ and a subset G1̄ such that for all x ∈ Gi, y ∈ Gj, we have xy ∈ G_{i+j}, where i, j, i + j ∈ Z2.

Example 3.1. Consider Z6 = {0̄, 1̄, 2̄, 3̄, 4̄, 5̄}. Take G = Z6, G0̄ = {0̄, 2̄, 4̄}, G1̄ = {1̄, 3̄, 5̄}. Clearly, with this choice of G0̄ and G1̄, G is a Z2-graded group.

Example 3.2. Every group G can be seen as a Z2-graded group with G0̄ = G and G1̄ = ∅.
Definition 3.2. A Z2-graded group G is said to act on a Lie superalgebra L = L0 ⊕ L1 if there exists a map

ψ : G × L → L, (g, x) ↦ ψ(g, x) = gx

satisfying the following conditions:

1. ex = x for all x ∈ L, where e ∈ G is the identity element of G.
2. For all g ∈ Gi, i ∈ Z2, the map ψg : L → L given by ψg(x) = ψ(g, x) = gx is a homogeneous linear map of degree i.
3. For all g1, g2 ∈ G, ψ(g1g2, x) = ψ(g1, ψ(g2, x)), that is, (g1g2)x = g1(g2x).
4. For x, y ∈ L and g ∈ G, [gx, gy] = g[x, y].

We denote such an action by (G, L).
Proposition 3.1. Let G be a finite Z2-graded group and L a Lie superalgebra. Then G acts on L if and only if there exists a group homomorphism of degree 0,

φ : G → Iso(L, L), g ↦ φ(g) = ψg,

from the group G to the group of homogeneous Lie superalgebra isomorphisms from L to L.

Proof. Given an action (G, L), we define a map φ : G → Iso(L, L) by φ(g) = ψg; one can verify easily that φ is a group homomorphism. Conversely, let φ : G → Iso(L, L) be a group homomorphism. Define a map G × L → L by (g, a) ↦ φ(g)(a). It can easily be seen that this is an action of G on L.

Note: In this article we consider actions of groups G, that is, of those Z2-graded groups G for which G0 = G and G1 = ∅. We call a Lie superalgebra L = L0 ⊕ L1 with an action of a group G a G-Lie superalgebra.
Example 3.3. Super-Poincaré algebra: The (N = 1) super-Poincaré algebra L = L0 ⊕ L1 is given by¹

\begin{align*}
i[J^{\mu\nu}, J^{\rho\sigma}] &= \eta^{\nu\rho}J^{\mu\sigma} - \eta^{\mu\rho}J^{\nu\sigma} - \eta^{\sigma\mu}J^{\rho\nu} + \eta^{\sigma\nu}J^{\rho\mu},\\
i[P^{\mu}, J^{\rho\sigma}] &= \eta^{\mu\rho}P^{\sigma} - \eta^{\mu\sigma}P^{\rho}, \qquad [P^{\mu}, P^{\rho}] = 0,\\
[Q_{\alpha}, J^{\mu\nu}] &= (\sigma^{\mu\nu})_{\alpha}{}^{\beta}\, Q_{\beta}, \qquad [\bar Q^{\dot\alpha}, J^{\mu\nu}] = (\bar\sigma^{\mu\nu})^{\dot\alpha}{}_{\dot\beta}\, \bar Q^{\dot\beta},\\
[Q_{\alpha}, P^{\mu}] &= 0, \qquad [\bar Q^{\dot\alpha}, P^{\mu}] = 0,\\
\{Q_{\alpha}, Q_{\beta}\} &= 0, \qquad \{\bar Q_{\dot\alpha}, \bar Q_{\dot\beta}\} = 0, \qquad \{Q_{\alpha}, \bar Q_{\dot\beta}\} = 2(\sigma^{\mu})_{\alpha\dot\beta}P_{\mu}.
\end{align*}

¹Here we have used the following notation: µ, ν, ρ, ... = 0, 1, 2, 3; σ^i, i = 1, 2, 3, denote the Pauli spin matrices, and one introduces σ^µ = (1, σ^i) and σ̄^µ = (1, −σ^i), with (σ^{µν})_α{}^β = −(i/4)(σ^µσ̄^ν − σ^νσ̄^µ)_α{}^β and (σ̄^{µν})^{α̇}{}_{β̇} = −(i/4)(σ̄^µσ^ν − σ̄^νσ^µ)^{α̇}{}_{β̇}. Spinor indices are denoted by α, β, α̇, β̇; they take values in {1, 2} and are raised and lowered by ǫ^{αβ} (ǫ^{α̇β̇}) and ǫ_{αβ} (ǫ_{α̇β̇}), which are antisymmetric; we have chosen ǫ^{12} = ǫ^{1̇2̇} = +1.

Here L0 is generated over C by the set {J^{µν} : µ, ν = 0, 1, 2, 3} ∪ {P^µ : µ = 0, 1, 2, 3}, and L1 is generated over C by the set {Q_α : α = 1, 2} ∪ {Q̄_{α̇} : α̇ = 1, 2}. Consider the group Zm = {g^n : n = 0, 1, ..., m − 1}, where g = e^{2πi/m}. There exists an action of Zm on the super-Poincaré algebra L = L0 ⊕ L1 given by

(g^n, J^{µν}) ↦ J^{µν}, (g^n, P^µ) ↦ P^µ, (g^n, Q_α) ↦ g^n Q_α, (g^n, Q̄_{α̇}) ↦ g^{m−n} Q̄_{α̇},

for every n = 0, 1, ..., m − 1.
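That this assignment respects the bracket relations can be seen from the phases alone: Qα picks up g^n, Q̄α̇ picks up g^{m−n}, so the anticommutator {Qα, Q̄β̇} picks up g^n · g^{m−n} = g^m = 1, consistent with the trivial action on P^µ. A quick numerical check of this phase bookkeeping (our own illustration, not part of the paper):

```python
import cmath

m = 5                       # order of the cyclic group Z_m
g = cmath.exp(2j * cmath.pi / m)

for n in range(m):
    phase_Q    = g ** n          # action of g^n on Q_alpha
    phase_Qbar = g ** (m - n)    # action of g^n on Qbar_alphadot
    # {Q, Qbar} picks up the product of the two phases; the right-hand side
    # 2(sigma^mu) P_mu is fixed by the action, so the product must be 1.
    product = phase_Q * phase_Qbar
    assert abs(product - 1) < 1e-12
print("Z_m action preserves {Q, Qbar} = 2 sigma^mu P_mu for m =", m)
```

The relations with only one Q (or none) are preserved trivially, since J^{µν} and P^µ carry phase 1.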
Example 3.4. Let eij denote the 2 × 2 matrix with (i, j)-th entry 1 and all other entries 0. Consider L0 = span{e11, e22}, L1 = span{e12, e21}. Then L = L0 ⊕ L1 is a Lie superalgebra with the bracket [ , ] defined by

[a, b] = ab − (−1)^{āb̄}ba.

Define a function ψ : Z2 × L → L by ψ(0, x) = x for all x ∈ L, and ψ(1, e11) = e22, ψ(1, e22) = e11, ψ(1, e12) = e21, ψ(1, e21) = e12. Obviously conditions 1–3 for (Z2, L) to be an action hold. To verify condition 4 it is enough to check it on basis elements of L0 and L1. We have

1. 1[eii, eii] = 0 = [1eii, 1eii] for all i = 1, 2.
2. 1[eii, ejj] = 0 = [ejj, eii] = [1eii, 1ejj] for all i, j = 1, 2, i ≠ j.
3. 1[eij, eji] = 1(eii − (−1)^1 ejj) = ejj + eii = [1eij, 1eji] for all i, j = 1, 2, i ≠ j.
4. 1[eij, eij] = 0 = [eji, eji] = [1eij, 1eij] for all i, j = 1, 2, i ≠ j.
5. 1[eii, eij] = 1(eij) = eji = [ejj, eji] = [1eii, 1eij] for all i, j = 1, 2, i ≠ j.
6. 1[ejj, eij] = 1(−eij) = −eji = [eii, eji] = [1ejj, 1eij] for all i, j = 1, 2, i ≠ j.

From the above it is clear that (Z2, L) is an action.
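The case-by-case verification above can also be automated. In the sketch below (our own illustration), the nontrivial element of Z2 acts by conjugation with the permutation matrix that swaps the two basis vectors, which reproduces ψ(1, e11) = e22, ψ(1, e12) = e21, etc.; condition 4 is then checked on all pairs of homogeneous basis elements.

```python
import numpy as np

def e(i, j):
    """2x2 matrix unit e_ij (1-indexed)."""
    M = np.zeros((2, 2))
    M[i - 1, j - 1] = 1.0
    return M

# Homogeneous basis with Z2-degrees: L0 = span{e11, e22}, L1 = span{e12, e21}.
basis = [(e(1, 1), 0), (e(2, 2), 0), (e(1, 2), 1), (e(2, 1), 1)]

def bracket(a, da, b, db):
    """Supercommutator [a, b] = ab - (-1)^{da db} ba."""
    return a @ b - (-1) ** (da * db) * (b @ a)

# Action of the nontrivial element of Z2: conjugation by the swap matrix.
P = np.array([[0.0, 1.0], [1.0, 0.0]])
act = lambda a: P @ a @ P

# Condition 4: g[x, y] = [gx, gy] on all pairs of basis elements.
for x, dx in basis:
    for y, dy in basis:
        lhs = act(bracket(x, dx, y, dy))
        rhs = bracket(act(x), dx, act(y), dy)
        assert np.allclose(lhs, rhs)
print("(Z2, L) action is compatible with the bracket.")
```

Note that conjugation by the swap matrix preserves the grading: diagonal matrix units map to diagonal ones and off-diagonal to off-diagonal, as required by condition 2.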
Definition 3.3. Let L = L0 ⊕ L1 be a Lie superalgebra and let G be a finite group which acts on L. A Z2-graded vector space M = M0 ⊕ M1 with an action of G is called a G-module over L if there exists a G-equivariant bilinear map [−, −] : L × M → M such that the following condition is satisfied:

[a, [b, m]] = [[a, b], m] + (−1)^{ab}[b, [a, m]],

for all a ∈ Lα, b ∈ Lβ, α, β ∈ {0, 1}.

Example 3.5. Every G-Lie superalgebra is a G-module over itself.

Example 3.6. Let L = L0 ⊕ L1 be the (N = 1) super-Poincaré algebra of Example 3.3. Let M0 be the span of {P^µ : µ = 0, 1, 2, 3} and M1 the span of the set {Q_α : α = 1, 2} ∪ {Q̄_{α̇} : α̇ = 1, 2}. Then clearly M = M0 ⊕ M1 is a Zm-module over L = L0 ⊕ L1.
4. Maurer-Cartan Characterization of Equivariant Lie Superalgebra Structures

Definition 4.1. A finite group G is said to act on a Z2-graded vector space V = V0 ⊕ V1 if there exists a map

ψ : G × V → V, (g, x) ↦ ψ(g, x) = gx

satisfying the following conditions:

1. ex = x for all x ∈ V, where e is the identity element of G.
2. For all g ∈ G, the map ψg : V → V given by ψg(x) = ψ(g, x) = gx is a homogeneous linear map of degree 0.
3. For all g1, g2 ∈ G, ψ(g1g2, x) = ψ(g1, ψ(g2, x)), that is, (g1g2)x = g1(g2x).

A Z2-graded vector space V = V0 ⊕ V1 with an action of a group G is called a G-vector space.
Let V = V0 ⊕ V1 and W = W0 ⊕ W1 be vector spaces over a field F. An n-linear map f : V × ··· × V (n times) → W is said to be homogeneous of degree α if f(x1, ..., xn) is homogeneous in W and deg(f(x1, ..., xn)) − Σ_{i=1}^{n} deg(xi) = α for homogeneous xi ∈ V, 1 ≤ i ≤ n. We denote the degree of a homogeneous f by deg(f) and write (−1)^{deg(f)} = (−1)^f.

Consider the permutation group Sn. For any X = (X1, ..., Xn) with Xi ∈ V_{xi} and σ ∈ Sn, define

K(σ, X) = card{(i, j) : i < j, X_{σ(i)} ∈ V1, X_{σ(j)} ∈ V1, σ(j) < σ(i)},
ǫ(σ, X) = ǫ(σ)(−1)^{K(σ,X)},

where card A denotes the cardinality of a set A and ǫ(σ) is the signature of σ. Also, define σ.X = (X_{σ⁻¹(1)}, ..., X_{σ⁻¹(n)}). We have the following lemma [12].

Lemma 4.1.
1. K(σσ′, X) = K(σ, X) + K(σ′, σ⁻¹X) (mod 2).
2. ǫ(σσ′, X) = ǫ(σ, X)ǫ(σ′, σ⁻¹X).
For each n ∈ N, define F^{n,α}(V, W) as the vector space of all homogeneous n-linear mappings f : V × ··· × V (n times) → W of degree α. Define F^n(V, W) = F^{n,0}(V, W) ⊕ F^{n,1}(V, W), F^0(V, W) = W and F^{−n}(V, W) = 0 for all n ∈ N. Take F(V, W) = ⊕_{n∈Z} F^n(V, W).

For F ∈ F^n(V, W), X ∈ V^n and σ ∈ Sn, define

(σ.F)(X) = ǫ(σ, X)F(σ⁻¹X).

By using Lemma 4.1, one concludes that this defines an action of Sn on the Z2-graded vector space F^n(V, W). Define E^n for n ∈ Z as follows: set E^n = {F ∈ F^{n+1}(V, V) : σ.F = F for all σ ∈ S_{n+1}} for n ≥ 0, and set E^{−1} = V and E^n = 0 for n < −1.

Write E = ⊕_{n∈Z} E^n. Define a product ◦ on E as follows: for F ∈ E^{n,f}, F′ ∈ E^{n′,f′}, set

F ◦ F′ = Σ_{σ∈S(n,n′+1)} σ.(F ∗ F′),

where

F ∗ F′(X1, ..., X_{n+n′+1}) = (−1)^{f′(x1+···+xn)} F(X1, ..., Xn, F′(X_{n+1}, ..., X_{n+n′+1}))

for Xi ∈ V_{xi}, and S(n,n′+1) consists of the permutations σ ∈ S_{n+n′+1} such that σ(1) < ··· < σ(n) and σ(n + 1) < ··· < σ(n + n′ + 1). Clearly, F ◦ F′ ∈ E^{n+n′, f+f′}. We have the following lemma [12].

Lemma 4.2. For F ∈ E^{n,f}, F′ ∈ E^{n′,f′}, F′′ ∈ E^{n′′,f′′},

(F ◦ F′) ◦ F′′ − F ◦ (F′ ◦ F′′) = (−1)^{n′n′′+f′f′′}{(F ◦ F′′) ◦ F′ − F ◦ (F′′ ◦ F′)}.

Using Lemma 4.2, we have the following theorem [14], [12].
Theorem 4.1. E is a Z × Z2-graded Lie algebra with the bracket [ , ] defined by

[F, F′] = F ◦ F′ − (−1)^{nn′+ff′} F′ ◦ F,

for F ∈ E^{n,f}, F′ ∈ E^{n′,f′}.

Let G be a finite group acting on the vector spaces V = V0 ⊕ V1 and W = W0 ⊕ W1. Denote by F^G_n(V, W) the vector space of G-equivariant elements of F^n(V, W), that is, F(gX1, ..., gXn) = gF(X1, ..., Xn) for each F ∈ F^G_n(V, W) and (X1, ..., Xn) ∈ V^n. Write F^G(V, W) = ⊕_{n∈Z} F^G_n(V, W). For σ ∈ Sn, g ∈ G and (X1, ..., Xn) ∈ V^n, we have

σ.(gX1, ..., gXn) = (gX_{σ⁻¹(1)}, ..., gX_{σ⁻¹(n)}) = g(X_{σ⁻¹(1)}, ..., X_{σ⁻¹(n)}) = g(σ.(X1, ..., Xn)).  (1)

Let F ∈ E^G_{n,f}, F′ ∈ E^G_{n′,f′}. Clearly, F ∗ F′ ∈ E^G_{n+n′,f+f′}. Using Equation 1, we conclude that F ◦ F′ ∈ E^G_{n+n′,f+f′}. This implies that [ , ] defines a product on E^G. Hence, using Theorem 4.1, we have the following theorem.

Theorem 4.2. E^G is a Z × Z2-graded Lie algebra with the bracket [ , ] defined by

[F, F′] = F ◦ F′ − (−1)^{nn′+ff′} F′ ◦ F,

for F ∈ E^G_{n,f}, F′ ∈ E^G_{n′,f′}.

Using [12], Proposition (3.1), we get the following theorem.
Theorem 4.3. Given F0 ∈ E^G_{(1,0)}, F0 defines on a Z2-graded G-vector space V a G-Lie superalgebra structure if and only if [F0, F0] = 0.

Remark 4.1. An element F0 ∈ E^G_{(1,0)} which satisfies the equation

[F0, F0] = 0  (2)

is called a Maurer-Cartan element, and Equation 2 is called the Maurer-Cartan equation. Thus the class of Maurer-Cartan elements is the class of G-Lie superalgebra structures on a Z2-graded G-vector space V.
5. Equivariant Cohomology of Lie Superalgebras

Let L = L0 ⊕ L1 be a Lie superalgebra and M = M0 ⊕ M1 a module over L. For each n ≥ 0, a K-vector space C^n(L; M) is defined as follows: C^0(L; M) = M, and for n ≥ 1, C^n(L; M) consists of those n-linear maps f from L^n to M which are homogeneous and satisfy

f(x1, ..., xi, x_{i+1}, ..., xn) = −(−1)^{xi x_{i+1}} f(x1, ..., x_{i+1}, xi, ..., xn).

Clearly, C^n(L; M) = C^n_0(L; M) ⊕ C^n_1(L; M), where C^n_0(L; M) and C^n_1(L; M) are the vector subspaces of C^n(L; M) consisting of elements of degree 0 and 1, respectively. A linear map δ^n : C^n(L; M) → C^{n+1}(L; M) is defined by ([9], [3])

\[
\begin{aligned}
\delta^n f(x_1,\dots,x_{n+1}) ={}& \sum_{i<j} (-1)^{i+j+(x_i+x_j)(x_1+\cdots+x_{i-1})+x_j(x_{i+1}+\cdots+x_{j-1})}\, f([x_i,x_j],x_1,\dots,\hat x_i,\dots,\hat x_j,\dots,x_{n+1})\\
&+ \sum_{i=1}^{n+1} (-1)^{i-1+x_i(f+x_1+\cdots+x_{i-1})}\,[x_i, f(x_1,\dots,\hat x_i,\dots,x_{n+1})],
\end{aligned}
\tag{3}
\]

for all f ∈ C^n(L; M), n ≥ 1, and δ^0 f(x1) = (−1)^{x1 f}[x1, f] for all f ∈ C^0(L; M) = M. Clearly, deg(δf) = deg(f) for each f ∈ C^n(L; M), n ≥ 0. From [9], [3], we have the following theorem.

Theorem 5.1. δ^{n+1} ◦ δ^n = 0; that is, (C^∗(L; M), δ), where C^∗(L; M) = ⊕_n C^n(L; M) and δ = ⊕_n δ^n, is a cochain complex.
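Theorem 5.1 can be spot-checked in the lowest degrees. For the Lie superalgebra of Example 3.4 (realized by 2 × 2 matrices with the supercommutator) and any even element m, we must have δ^1(δ^0 m) = 0. The script below is our own illustration: it specializes the coboundary formula to an even 1-cochain and verifies the identity on all homogeneous basis pairs.

```python
import numpy as np

def e(i, j):
    """2x2 matrix unit e_ij (1-indexed)."""
    M = np.zeros((2, 2))
    M[i - 1, j - 1] = 1.0
    return M

def br(a, da, b, db):
    """Supercommutator [a, b] = ab - (-1)^{da db} ba on gl(1|1)."""
    return a @ b - (-1) ** (da * db) * (b @ a)

basis = [(e(1, 1), 0), (e(2, 2), 0), (e(1, 2), 1), (e(2, 1), 1)]

m = 2.0 * e(1, 1) + 3.0 * e(2, 2)    # an arbitrary even element, deg(m) = 0
F = lambda x, dx: br(x, dx, m, 0)    # F = delta^0 m, an even 1-cochain

# For even F:  delta^1 F(x1, x2)
#   = -F([x1, x2]) + [x1, F(x2)] - (-1)^{x1 x2} [x2, F(x1)]
for x1, d1 in basis:
    for x2, d2 in basis:
        val = (-F(br(x1, d1, x2, d2), (d1 + d2) % 2)
               + br(x1, d1, F(x2, d2), d2)
               - (-1) ** (d1 * d2) * br(x2, d2, F(x1, d1), d1))
        assert np.allclose(val, 0.0)
print("delta^1(delta^0 m) = 0 on all basis pairs.")
```

The vanishing is exactly the super Jacobi identity rearranged, which is why inner derivations are always 1-cocycles.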
Let G be a finite group which acts on L, and let M be a G-module over L. For each n ≥ 0, we define a K-vector space C^n_G(L; M) as follows: C^0_G(L; M) = M, and for n ≥ 1, C^n_G(L; M) consists of those f ∈ C^n(L; M) which are G-equivariant, that is, f(ga1, ..., gan) = gf(a1, ..., an) for all (a1, ..., an) ∈ L^n, g ∈ G. Clearly, C^n_G(L; M) = (C^n_G)_0(L; M) ⊕ (C^n_G)_1(L; M), where (C^n_G)_0(L; M) and (C^n_G)_1(L; M) are the vector subspaces of C^n_G(L; M) consisting of elements of degree 0 and 1, respectively. We define a K-linear map δ^n_G : C^n_G(L; M) → C^{n+1}_G(L; M) by

δ^n_G f(x1, ..., x_{n+1}) = δ^n f(x1, ..., x_{n+1}).

Clearly, δ^n_G f(gx1, ..., gx_{n+1}) = g δ^n f(x1, ..., x_{n+1}) for each f ∈ C^n_G(L; M), g ∈ G; thus δ^n_G is well defined. Write C^∗_G(L; M) = ⊕_n C^n_G(L; M), δ_G = ⊕_n δ^n_G. Using Theorem 5.1 we have the following theorem.

Theorem 5.2. (C^∗_G(L; M), δ_G) is a cochain complex.
We denote ker(δ^n_G) by Z^n_G(L; M) and the image of δ^{n−1}_G by B^n_G(L; M). We call the n-th cohomology Z^n_G(L; M)/B^n_G(L; M) of the cochain complex {C^n_G(L; M), δ^n_G} the n-th equivariant cohomology of L with coefficients in M, and denote it by H^n_G(L; M). Since L is a module over itself, we can consider the cohomology groups H^n_G(L; L); we call H^n_G(L; L) the n-th equivariant cohomology group of L. We have

Z^n_G(L; M) = (Z^n_G)_0(L; M) ⊕ (Z^n_G)_1(L; M), B^n_G(L; M) = (B^n_G)_0(L; M) ⊕ (B^n_G)_1(L; M),

where (Z^n_G)_i(L; M) and (B^n_G)_i(L; M) are submodules of (C^n_G)_i(L; M), i = 0, 1. Since the boundary map δ^n_G : C^n_G(L; M) → C^{n+1}_G(L; M) is homogeneous of degree 0, we conclude that H^n_G(L; M) is Z2-graded and

H^n_G(L; M) = (H^n_G)_0(L; M) ⊕ (H^n_G)_1(L; M),

where (H^n_G)_i(L; M) = (Z^n_G)_i(L; M)/(B^n_G)_i(L; M), i = 0, 1.
+ 6. Equivariant Cohomology of Lie Superalgebras in Low Degrees
+ Let G be a finite group and let L = L_0 ⊕ L_1 be a Lie superalgebra with an action of G. Let M = M_0 ⊕ M_1 be a G-module over L. For m ∈ M_0 = (C^0_G)_0(L; M), f ∈ (C^1_G)_0(L; M) and g ∈ (C^2_G)_0(L; M),
+ δ^0_G m(x) = [x, m],   (5)
+ δ^1 f(x_1, x_2) = −f([x_1, x_2]) + [x_1, f(x_2)] − (−1)^{x_2 x_1}[x_2, f(x_1)],   (6)
+ δ^2 g(x_1, x_2, x_3) = −g([x_1, x_2], x_3) + (−1)^{x_3 x_2} g([x_1, x_3], x_2) − (−1)^{x_1(x_2+x_3)} g([x_2, x_3], x_1)
+ + [x_1, g(x_2, x_3)] − (−1)^{x_2 x_1}[x_2, g(x_1, x_3)] + (−1)^{x_3 x_1 + x_3 x_2}[x_3, g(x_1, x_2)].   (7)
463
+ The set {m ∈ M_0 | [x, m] = 0, ∀x ∈ L} is called the annihilator of L in M_0 and is denoted by ann_{M_0}L. We have
+ (H^0_G)_0(L; M) = {m ∈ M_0 | [x, m] = 0, for all x ∈ L} = ann_{M_0}L.
+ A G-equivariant homogeneous linear map f : L → M is called a derivation from L to M if f([x_1, x_2]) = (−1)^{f x_1}[x_1, f(x_2)] − (−1)^{f x_2 + x_2 x_1}[x_2, f(x_1)], that is, δ^1_G f = 0. For every m ∈ M_0 the map x ↦ [x, m] is called an inner derivation from L to M. We denote the vector spaces of equivariant derivations and equivariant inner derivations from L to M by Der^G(L; M) and Der^G_Inn(L; M), respectively. By using (5) and (6) we have
+ (H^1_G)_0(L; M) = Der^G(L; M)/Der^G_Inn(L; M).
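+ As a quick consistency check (not written out in the original), the graded Jacobi identity shows that every inner derivation x ↦ [x, m], m ∈ M_0, is indeed annihilated by δ^1, so the quotient above makes sense:

```latex
% For f(x) = [x, m] with m of degree 0, formula (6) gives
\delta^1 f(x_1, x_2)
  = -[[x_1, x_2], m] + [x_1, [x_2, m]] - (-1)^{x_1 x_2}[x_2, [x_1, m]] = 0,
% which vanishes by the graded Jacobi identity; hence inner derivations
% are derivations and (H^1_G)_0(L; M) = Der^G(L; M)/Der^G_{Inn}(L; M)
% is well defined.
```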
481
+ Let L be a Lie superalgebra with an action of a finite group G and let M be a G-module over L. We regard M as an abelian Lie superalgebra with an action of G. An extension of L by M is an exact sequence
+ 0 → M →^i E →^π L → 0   (*)
+ of Lie superalgebras such that
+ [x, i(m)] = [π(x), m].
+ The exact sequence (∗), regarded as a sequence of K-vector spaces, splits. Therefore, without any loss of generality, we may assume that E as a K-vector space coincides with the direct sum L ⊕ M and that i(m) = (0, m), π(x, m) = x. Thus we have E = E_0 ⊕ E_1, where E_0 = L_0 ⊕ M_0, E_1 = L_1 ⊕ M_1. The multiplication in E = L ⊕ M then necessarily has the form
+ [(0, m_1), (0, m_2)] = 0,   [(x_1, 0), (0, m_1)] = (0, [x_1, m_1]),
+ [(0, m_2), (x_2, 0)] = −(−1)^{m_2 x_2}(0, [x_2, m_2]),   [(x_1, 0), (x_2, 0)] = ([x_1, x_2], h(x_1, x_2)),
+ for some h ∈ (C^2_G)_0(L; M), for all homogeneous x_1, x_2 ∈ L, m_1, m_2 ∈ M. Thus, in general, we have
+ [(x, m), (y, n)] = ([x, y], [x, n] − (−1)^{m y}[y, m] + h(x, y)),   (8)
+ for all homogeneous (x, m), (y, n) in E = L ⊕ M.
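+ As an illustrative sketch (not part of the paper), the bracket (8) can be coded directly. Below L is the 2 × 2 matrix superalgebra that reappears in Example 9.1, M = L is taken as the adjoint module, and h ≡ 0 — all of these are assumptions made only for the demonstration. The script checks super skew-symmetry of (8) on homogeneous basis elements of E = L ⊕ M.

```python
import numpy as np
from itertools import product

def e(i, j):
    """Matrix unit e_ij of the 2x2 matrix algebra."""
    m = np.zeros((2, 2), dtype=int)
    m[i - 1, j - 1] = 1
    return m

# Homogeneous basis of L with parities: e11, e22 even; e12, e21 odd.
basis_L = [(e(1, 1), 0), (e(2, 2), 0), (e(1, 2), 1), (e(2, 1), 1)]

def br(A, p, B, q):
    # super-bracket on L (also its action on M = L): [a,b] = ab - (-1)^{|a||b|} ba
    return A @ B - (-1) ** (p * q) * B @ A

def br_E(xm, yn):
    # Equation (8) with h = 0; p, q are the common parities of the pairs.
    (x, m, p), (y, n, q) = xm, yn
    return (br(x, p, y, q), br(x, p, n, q) - (-1) ** (p * q) * br(y, q, m, p))

# Homogeneous elements of E = L + M have both components of the same degree.
E_basis = [(A, B, p) for (A, p), (B, q) in product(basis_L, basis_L) if p == q]

# Super skew-symmetry on E: [u, v] = -(-1)^{|u||v|}[v, u].
ok = all(
    np.array_equal(br_E(u, v)[0], -(-1) ** (u[2] * v[2]) * br_E(v, u)[0])
    and np.array_equal(br_E(u, v)[1], -(-1) ** (u[2] * v[2]) * br_E(v, u)[1])
    for u, v in product(E_basis, repeat=2))
print(ok)  # prints: True
```

+ The converse direction below — that (8) defines a Lie superalgebra exactly when δ^2_G h = 0 — is what Equations 9–11 verify symbolically.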
509
+ Conversely, let h : L × L → M be a bilinear G-equivariant homogeneous map of degree 0. For homogeneous (x, m), (y, n) in E we define a multiplication in E = L ⊕ M by Equation 8. For homogeneous (x, m), (y, n) and (z, p) in E we have
+ [[(x, m), (y, n)], (z, p)] = ([[x, y], z], [[x, y], p] − (−1)^{zx+zn}[z, [x, n]] + (−1)^{ym+zy+zm}[z, [y, m]] + [h(x, y), z] + h([x, y], z)),   (9)
+ [(x, m), [(y, n), (z, p)]] = ([x, [y, z]], [x, [y, p]] − (−1)^{nz}[x, [z, n]] − (−1)^{my+mz}[[y, z], m] + [x, h(y, z)] + h(x, [y, z])),   (10)
+ [(y, n), [(x, m), (z, p)]] = ([y, [x, z]], [y, [x, p]] − (−1)^{mz}[y, [z, m]] − (−1)^{nx+nz}[[x, z], n] + [y, h(x, z)] + h(y, [x, z])).   (11)
+ From Equations 9, 10 and 11 we conclude that E = L ⊕ M is a Lie superalgebra with the product given by Equation 8 if and only if δ^2_G h = 0. We denote the Lie superalgebra given by Equation 8 by E_h. Thus for every cocycle h ∈ (C^2_G)_0(L; M) there exists an extension
+ E_h : 0 → M →^i E_h →^π L → 0
+ of L by M, where i and π are the inclusion and projection maps, that is, i(m) = (0, m), π(x, m) = x. We say that two extensions
538
+ π(x, m) = x. We say that two extensions
539
+ 0
540
+ � M
541
+ � Ei
542
+ � L
543
+ � 0 (i = 1, 2)
544
+ of L by M are equivalent if there is a G-equivariant Lie superalgebra isomorphism
545
+ ψ : E1 → E2 such that following diagram commutes:
546
+ 0
547
+ � M
548
+ IdM
549
+
550
+ � E1
551
+ ψ
552
+
553
+ � L
554
+ IdL
555
+
556
+ � 0
557
+ 0
558
+ � M
559
+ � E2
560
+ � L
561
+ � 0
562
+ (**)
563
+ 13
564
+
565
+ We use F(L, M) to denote the set of all equivalence classes of extensions of L by M. Equation 8 defines a mapping of (Z^2_G)_0(L; M) onto F(L, M). If, for h, h′ ∈ (Z^2_G)_0(L; M), E_h is equivalent to E_{h′}, then the commutativity of diagram (∗∗) is equivalent to
+ ψ(x, m) = (x, m + f(x)),
+ for some f ∈ (C^1_G)_0(L; M). We have
+ ψ([(x_1, m_1), (x_2, m_2)]) = ψ([x_1, x_2], [x_1, m_2] + [m_1, x_2] + h(x_1, x_2)) = ([x_1, x_2], [x_1, m_2] + [m_1, x_2] + h(x_1, x_2) + f([x_1, x_2])),   (12)
+ [ψ(x_1, m_1), ψ(x_2, m_2)] = [(x_1, m_1 + f(x_1)), (x_2, m_2 + f(x_2))] = ([x_1, x_2], [x_1, m_2 + f(x_2)] + [m_1 + f(x_1), x_2] + h′(x_1, x_2)).   (13)
+ Since ψ([(x_1, m_1), (x_2, m_2)]) = [ψ(x_1, m_1), ψ(x_2, m_2)], we have
+ h(x_1, x_2) − h′(x_1, x_2) = −f([x_1, x_2]) + [x_1, f(x_2)] + [f(x_1), x_2] = −f([x_1, x_2]) + [x_1, f(x_2)] − (−1)^{x_1 x_2}[x_2, f(x_1)] = δ^1(f)(x_1, x_2).   (14)
+ Thus two extensions E_h and E_{h′} are equivalent if and only if there exists some f ∈ (C^1_G)_0(L; M) such that δ^1 f = h − h′. We thus have the following theorem:
+ Theorem 6.1. The set F(L, M) of all equivalence classes of extensions of L by M is in one-to-one correspondence with the cohomology group (H^2_G)_0(L; M). This correspondence ω : (H^2_G)_0(L; M) → F(L, M) is obtained by assigning to each cocycle h ∈ (Z^2_G)_0(L; M) the extension given by the multiplication 8.
605
+ 7. Equivariant Deformation of Lie Superalgebras
+ Let L = L_0 ⊕ L_1 be a Lie superalgebra. We denote the space of all formal power series with coefficients in L by L[[t]]. Clearly, L[[t]] = L_0[[t]] ⊕ L_1[[t]]. So every a_t ∈ L[[t]] is of the form a_t = (a_t)_0 ⊕ (a_t)_1, where (a_t)_0 ∈ L_0[[t]] and (a_t)_1 ∈ L_1[[t]].
+ Definition 7.1. Let L = L_0 ⊕ L_1 be a Lie superalgebra with an action of a finite group G. An equivariant formal one-parameter deformation of L is a K[[t]]-bilinear map
+ µ_t : L[[t]] × L[[t]] → L[[t]]
+ satisfying the following properties:
+ (a) µ_t(a, b) = Σ_{i=0}^{∞} µ_i(a, b)t^i, for all a, b ∈ L, where the µ_i : L × L → L, i ≥ 0, are G-equivariant bilinear homogeneous mappings of degree zero and µ_0(a, b) = [a, b] is the original product on L.
+ (b) µ_t(a, b) = −(−1)^{ab}µ_t(b, a), for all homogeneous a, b ∈ L.
+ (c) µ_t(a, µ_t(b, c)) = µ_t(µ_t(a, b), c) + (−1)^{ab}µ_t(b, µ_t(a, c)),   (15)
+ for all homogeneous a, b, c ∈ L.
+ Equation 15 is equivalent to the following equation:
+ Σ_{i+j=r} µ_i(a, µ_j(b, c)) = Σ_{i+j=r} {µ_i(µ_j(a, b), c) + (−1)^{ab}µ_i(b, µ_j(a, c))},   (16)
+ for all homogeneous a, b, c ∈ L.
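+ The passage from (15) to (16) is simply a matter of comparing coefficients of t^r; writing out one side (an expansion not displayed in the original):

```latex
\mu_t\bigl(a,\mu_t(b,c)\bigr)
  = \sum_{i\ge 0}\sum_{j\ge 0}\mu_i\bigl(a,\mu_j(b,c)\bigr)\,t^{\,i+j}
  = \sum_{r\ge 0}\Bigl(\sum_{i+j=r}\mu_i\bigl(a,\mu_j(b,c)\bigr)\Bigr)t^{\,r},
```

+ and similarly for the right-hand side of (15); equality of formal power series means equality of the coefficient of each power t^r, which is exactly (16).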
634
+ Now we define a formal deformation of finite order of a Lie superalgebra L.
+ Definition 7.2. Let L be a Lie superalgebra with an action of a group G. A formal one-parameter deformation of order n of L is a K[[t]]-bilinear map
+ µ_t : L[[t]] × L[[t]] → L[[t]]
+ satisfying the following properties:
+ (a) µ_t(a, b) = Σ_{i=0}^{n} µ_i(a, b)t^i, ∀a, b ∈ L, where the µ_i : L × L → L, 0 ≤ i ≤ n, are equivariant K-bilinear homogeneous maps of degree 0, and µ_0(a, b) = [a, b] is the original product on L.
+ (b) µ_i(a, b) = −(−1)^{ab}µ_i(b, a), for all homogeneous a, b ∈ L, i ≥ 0.
+ (c) µ_t(a, µ_t(b, c)) = µ_t(µ_t(a, b), c) + (−1)^{ab}µ_t(b, µ_t(a, c)),   (17)
+ for all homogeneous a, b, c ∈ L.
650
+ Remark 7.1.
+ • For r = 0, condition 16 is equivalent to the fact that L is a Lie superalgebra.
+ • For r = 1, condition 16 is equivalent to
+ 0 = −µ_1(a, [b, c]) − [a, µ_1(b, c)] + µ_1([a, b], c) + (−1)^{ab}µ_1(b, [a, c]) + [µ_1(a, b), c] + (−1)^{ab}[b, µ_1(a, c)] = δ^2µ_1(a, b, c), for all homogeneous a, b, c ∈ L.
+ Thus for r = 1, 16 is equivalent to saying that µ_1 ∈ C^2_0(L; L) is a cocycle. In general, for r ≥ 0, µ_r is just a 2-cochain, that is, µ_r ∈ C^2_0(L; L).
+ Definition 7.3. The cochain µ_1 ∈ C^2_0(L; L) is called the infinitesimal of the deformation µ_t. In general, if µ_i = 0 for 1 ≤ i ≤ n − 1 and µ_n is a nonzero cochain in C^2_0(L; L), then µ_n is called the n-infinitesimal of the deformation µ_t.
+ Proposition 7.1. The infinitesimal µ_1 ∈ C^2_0(L; L) of the deformation µ_t is a cocycle. In general, the n-infinitesimal µ_n is a cocycle in C^2_0(L; L).
+ Proof. For n = 1 the proof is immediate from Remark 7.1. For n > 1 the proof is similar.
674
+ 8. Equivalence of Equivariant Formal Deformations and Cohomology
+ Let µ_t and µ̃_t be two formal deformations of a Lie superalgebra L = L_0 ⊕ L_1. A formal isomorphism from the deformation µ_t to µ̃_t is a K[[t]]-linear automorphism Ψ_t : L[[t]] → L[[t]] of the form Ψ_t = Σ_{i=0}^{∞} ψ_i t^i, where each ψ_i is a homogeneous K-linear map L → L of degree 0, ψ_0(a) = a for all a ∈ L, and
+ µ̃_t(Ψ_t(a), Ψ_t(b)) = Ψ_t ◦ µ_t(a, b),
+ for all a, b ∈ L.
+ Definition 8.1. Two deformations µ_t and µ̃_t of a Lie superalgebra L are said to be equivalent if there exists a formal isomorphism Ψ_t from µ_t to µ̃_t.
+ Formal isomorphism defines an equivalence relation on the collection of all formal deformations of a Lie superalgebra L.
+ Definition 8.2. Any formal deformation of L that is equivalent to the deformation µ_0 is said to be a trivial deformation.
+ Theorem 8.1. The cohomology class of the infinitesimal of a deformation µ_t of a Lie superalgebra L is determined by the equivalence class of µ_t.
+ Proof. Let Ψ_t be a formal isomorphism from µ_t to µ̃_t. So we have, for all a, b ∈ L, µ̃_t(Ψ_t a, Ψ_t b) = Ψ_t ◦ µ_t(a, b). This implies that
+ (µ_1 − µ̃_1)(a, b) = [ψ_1 a, b] + [a, ψ_1 b] − ψ_1([a, b]) = δ^1ψ_1(a, b).
+ So we have µ_1 − µ̃_1 = δ^1ψ_1. This completes the proof.
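+ The implication in the proof comes from collecting the t^1 coefficient of both sides, an intermediate step the original leaves implicit:

```latex
% Order-t^1 terms of  \tilde\mu_t(\Psi_t a, \Psi_t b) = \Psi_t(\mu_t(a,b)):
\tilde\mu_1(a,b) + [\psi_1 a, b] + [a, \psi_1 b]
  = \psi_1([a,b]) + \mu_1(a,b),
% hence
(\mu_1 - \tilde\mu_1)(a,b)
  = [\psi_1 a, b] + [a, \psi_1 b] - \psi_1([a,b]) = \delta^1\psi_1(a,b).
```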
700
+ 9. Some Examples of Equivariant Deformations
+ In this section we discuss some examples of equivariant formal deformations of Lie superalgebras.
+ Example 9.1. Let e_ij denote the 2 × 2 matrix with (i, j)-th entry 1 and all other entries 0. Consider L_0 = span{e_11, e_22}, L_1 = span{e_12, e_21}. Then L = L_0 ⊕ L_1 is a Lie superalgebra with the bracket [ , ] defined by
+ [a, b] = ab − (−1)^{āb̄}ba.
+ Define a function ψ : Z_2 × L → L by ψ(0, x) = x, ∀x ∈ L, ψ(1, e_11) = e_22, ψ(1, e_22) = e_11, ψ(1, e_12) = e_21, ψ(1, e_21) = e_12. In Example 3.4 we have seen that this gives an action of Z_2 on L. Define a bilinear map ∗ : L × L → L by
+ e_ij ∗ e_kl = e_li if j = k, and e_ij ∗ e_kl = 0 otherwise.
+ Now define µ_1 : L × L → L by
+ µ_1(a, b) = a ∗ b − (−1)^{ab}b ∗ a,
+ for all homogeneous a, b in L. We have
+ 1. 1µ_1(e_ii, e_ii) = 0 = µ_1(1e_ii, 1e_ii), ∀ i = 1, 2.
+ 2. 1µ_1(e_ii, e_jj) = 0 = µ_1(e_jj, e_ii) = µ_1(1e_ii, 1e_jj), ∀ i, j = 1, 2, i ≠ j.
+ 3. 1µ_1(e_ij, e_ji) = 1(e_jj − (−1)^1 e_ii) = e_ii + e_jj = µ_1(1e_ij, 1e_ji), ∀ i, j = 1, 2, i ≠ j.
+ 4. 1µ_1(e_ij, e_ij) = 0 = µ_1(e_ji, e_ji) = µ_1(1e_ij, 1e_ij), ∀ i, j = 1, 2, i ≠ j.
+ 5. 1µ_1(e_ii, e_ij) = 1(e_ji) = e_ij = µ_1(e_jj, e_ji) = µ_1(1e_ii, 1e_ij), ∀ i, j = 1, 2, i ≠ j.
+ 6. 1µ_1(e_jj, e_ij) = 1(−e_ji) = −e_ij = µ_1(e_ii, e_ji) = µ_1(1e_jj, 1e_ij), ∀ i, j = 1, 2, i ≠ j.
+ Hence µ_1 is Z_2-equivariant. Define µ_t = µ_0 + µ_1 t, where µ_0(a, b) = [a, b]. We shall show that µ_t is an equivariant deformation of L of order 1. To conclude this, the only thing we need to show is that
+ δ^2µ_1(a, b, c) = −µ_1(a, [b, c]) − [a, µ_1(b, c)] + µ_1([a, b], c) + (−1)^{ab}µ_1(b, [a, c]) + [µ_1(a, b), c] + (−1)^{ab}[b, µ_1(a, c)] = 0,
+ for all homogeneous a, b, c ∈ L.
743
+ We have
+ δ^2µ_1(b, c, a)
+ = −µ_1(b, [c, a]) − [b, µ_1(c, a)] + µ_1([b, c], a) + (−1)^{bc}µ_1(c, [b, a]) + [µ_1(b, c), a] + (−1)^{bc}[c, µ_1(b, a)]
+ = (−1)^{ac}µ_1(b, [a, c]) + (−1)^{ac}[b, µ_1(a, c)] − (−1)^{ab+ac}µ_1(a, [b, c]) + (−1)^{ab+ac}µ_1([a, b], c) − (−1)^{ab+ac}[a, µ_1(b, c)] + (−1)^{ab+ac}[µ_1(a, b), c]
+ = (−1)^{ab+ac}{−µ_1(a, [b, c]) − [a, µ_1(b, c)] + µ_1([a, b], c) + (−1)^{ab}µ_1(b, [a, c]) + [µ_1(a, b), c] + (−1)^{ab}[b, µ_1(a, c)]}
+ = (−1)^{ab+ac}δ^2µ_1(a, b, c).   (18)
758
+ δ^2µ_1(e_11, e_12, e_21) = −µ_1(e_11, e_11 + e_22) − [e_11, e_22 + e_11] + µ_1(e_12, e_21) + µ_1(e_12, −e_21) + [e_21, e_21] + [e_12, −e_12] = 0,   (19)
+ δ^2µ_1(e_11, e_21, e_12) = −µ_1(e_11, e_11 + e_22) − [e_11, e_22 + e_11] + µ_1(−e_21, e_12) + µ_1(e_21, e_12) + [−e_12, e_12] + [e_21, e_21] = 0,   (20)
+ δ^2µ_1(e_11, e_12, e_22) = −µ_1(e_11, e_12) − [e_11, e_21] + µ_1(e_12, e_22) + µ_1(e_12, 0) + [e_21, e_22] + [e_12, 0] = 0,   (21)
+ δ^2µ_1(e_11, e_22, e_12) = −µ_1(e_11, −e_12) − [e_11, −e_21] + µ_1(0, e_12) + µ_1(e_22, −e_12) + [0, e_12] + [e_22, e_21] = 0,   (22)
+ δ^2µ_1(e_11, e_22, e_21) = −µ_1(e_11, e_21) − [e_11, e_12] + µ_1(0, e_21) + µ_1(e_22, −e_21) + [0, e_21] + [e_22, −e_12] = 0,   (23)
+ δ^2µ_1(e_11, e_21, e_22) = −µ_1(e_11, −e_21) − [e_11, −e_12] + µ_1(−e_21, e_22) + µ_1(e_21, 0) + [−e_12, e_22] + [e_21, 0] = 0,   (24)
+ δ^2µ_1(e_22, e_12, e_21) = −µ_1(e_22, e_11 + e_22) − [e_22, e_22 + e_11] + µ_1(−e_12, e_21) + µ_1(e_12, e_21) − [e_21, e_21] + [e_12, e_12] = 0,   (25)
+ δ^2µ_1(e_22, e_21, e_12) = −µ_1(e_22, e_11 + e_22) − [e_22, e_22 + e_11] + µ_1(e_21, e_12) + µ_1(e_21, −e_12) + [e_12, e_12] + [e_21, −e_21] = 0.   (26)
+ Using Equations 18, 19, 20, 21, 22, 23, 24, 25 and 26 we conclude that δ^2µ_1(a, b, c) = 0 for all homogeneous a, b, c ∈ L. Hence µ_t is an equivariant deformation of L of order 1.
820
+ References
+ [1] L. Corwin, Y. Ne’eman, S. Sternberg, Graded Lie algebras in mathematics and physics (Bose–Fermi symmetry), Rev. Modern Phys. 47 (1975) 573–603.
+ [2] V. G. Kac, Lie superalgebras, Advances in Math. 26 (1) (1977) 8–96.
+ [3] D. A. Leites, Cohomology of Lie superalgebras, Funkcional. Anal. i Priložen. 9 (4) (1975) 75–76.
+ [4] M. Gerstenhaber, On the deformation of rings and algebras, Ann. of Math. (2) 79 (1964) 59–103.
+ [5] M. Gerstenhaber, On the deformation of rings and algebras. II, Ann. of Math. (2) 84 (1966) 1–19.
+ [6] M. Gerstenhaber, On the deformation of rings and algebras. III, Ann. of Math. (2) 88 (1968) 1–34.
+ [7] M. Gerstenhaber, On the deformation of rings and algebras. IV, Ann. of Math. (2) 99 (1974) 257–276.
+ [8] M. Gerstenhaber, S. D. Schack, On the deformation of algebra morphisms and diagrams, Trans. Amer. Math. Soc. 279 (1) (1983) 1–50.
+ [9] B. Binegar, Cohomology and deformations of Lie superalgebras, Lett. Math. Phys. 12 (4) (1986) 301–308.
+ [10] A. Nijenhuis, R. W. Richardson, Jr., Deformations of Lie algebra structures, J. Math. Mech. 17 (1967) 89–105.
+ [11] M. Gerstenhaber, The cohomology structure of an associative ring, Ann. of Math. (2) 78 (1963) 267–288.
+ [12] H. Benamor, G. Pinczon, The graded Lie algebra structure of Lie superalgebra deformation theory, Lett. Math. Phys. 18 (4) (1989) 307–313.
+ [13] D. Liu, N. Hu, Leibniz superalgebras and central extensions, J. Algebra Appl. 5 (6) (2006) 765–780.
+ [14] A. Nijenhuis, R. W. Richardson, Jr., Cohomology and deformations in graded Lie algebras, Bull. Amer. Math. Soc. 72 (1966) 1–29.
GdAzT4oBgHgl3EQfUvwS/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
HtE1T4oBgHgl3EQfXwQA/content/2301.03129v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f5f4a99b3791b9e0cb7057141a5b6210ddbefbb25b3ee0215ced94739d782343
3
+ size 8024303
HtFJT4oBgHgl3EQfFSwh/content/tmp_files/2301.11441v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
HtFJT4oBgHgl3EQfFSwh/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
MdE0T4oBgHgl3EQf0ALT/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d09ac5fd91a60b5e90231fd6765107a27a9e3de9cc26acab654b8ada4b8ecb66
3
+ size 83177
O9AzT4oBgHgl3EQfzf7E/content/tmp_files/2301.01771v1.pdf.txt ADDED
@@ -0,0 +1,951 @@
+ Exploring Machine Learning Techniques to Identify Important Factors Leading to Injury in Curve Related Crashes
+ Mehdi Moeinaddini^a, Mozhgan Pourmoradnasseri^a, Amnir Hadachi^a and Mario Cools^{b,c,d}
+ ^a ITS Lab, Institute of Computer Science, University of Tartu, Narva mnt 18, 51009 Tartu, Estonia
+ ^b LEMA Research Group, Urban & Environmental Engineering Department, University of Liège, Liège, Belgium
+ ^c Department of Informatics, Simulation and Modelling, KULeuven Campus Brussels, Brussels, Belgium
+ ^d Faculty of Business Economics, Hasselt University, Diepenbeek, Belgium
+ A R T I C L E  I N F O
+ Keywords: Pre-Crash Events, Machine Learning, Number of Vehicles with or without Injury, Curve Related Crashes, Most Effective Variables
+ A B S T R A C T
+ Different factors affect traffic crashes and crash-related injuries. These factors include segment characteristics, crash-level characteristics, occupant-level characteristics, environment characteristics, and vehicle-level characteristics. There are several studies regarding these factors’ effects on crash injuries. However, limited studies have examined the effects of pre-crash events on injuries, especially for curve-related crashes. The majority of previous studies for curve-related crashes focused on the impact of geometric features or street design factors. The current study tries to eliminate the aforementioned shortcomings by considering important pre-crash-event-related factors as selected variables and the number of vehicles with or without injury as the predicted variable. This research used CRSS data from the National Highway Traffic Safety Administration (NHTSA), which includes traffic crash-related data for different states in the USA. The relationships are explored using different machine learning algorithms like random forest, C5.0, CHAID, Bayesian network, neural network, C&R Tree, Quest, etc. The random forest and SHAP values are used to identify the most effective variables. The C5.0 algorithm, which has the highest accuracy rate among the other algorithms, is used to develop the final model. Analysis results revealed that the extent of the damage, critical pre-crash event, pre-impact location, the trafficway description, roadway surface condition, the month of the crash, the first harmful event, number of motor vehicles, attempted avoidance maneuver, and roadway grade affect the number of vehicles with or without injury in curve-related crashes.
+ 1. Introduction
+ Globally, more than 1.25 million people die per year as a result of road traffic crashes, and 20–50 million people suffer minor and major injuries due to motor vehicle crashes (WHO, 2018). Crashes involve the potential loss of human life and damage to vehicles, in addition to causing extra travel costs as a result of delays in traffic (Alireza, 2002). Road traffic crashes impose considerable economic and social losses on vehicle manufacturers, society, and transportation agencies (Haghighi, Liu, Zhang and Porter, 2018). Although during the last decade policymakers and planners have tried to reduce these losses, more research is still needed to identify the factors that affect crash injuries in order to reduce these social and economic losses.
+ The number of injuries is twice as high on curves in comparison to straight roads (Chen, 2010). Some of these curve-related crashes occur because drivers cannot recognize the sharpness and presence of upcoming curves (Wang, Hallmark, Savolainen and Dong, 2017). The probability of a fatal crash at horizontal curves is significantly higher than in other segments (Wang et al., 2017). In 2008, around 27 percent of fatal crashes in the United States occurred at horizontal curves, and most of these curve-related fatalities (over 80 percent) were roadway departures (FHWA, 2018). So, annually, more than one-quarter of all motor-vehicle fatalities in the United States are related to curve-related crashes (Wang et al., 2017). Because of this huge number of fatalities and injuries, the interest in curve-related crashes is significantly high. Thus, there is a need to examine the relationship between injuries and the factors that have important effects on injuries in these crashes.
+ There are five categories of factors that affect traffic crash injuries. These include crash-level factors such as crash time, crash type, cause of crash and speed (Hao, Kamga and Wan, 2016; Qin, Ivan, Ravishanker, Liu and Tepas, 2006), vehicle-level factors such as vehicle age and type (Richter, Pape, Otte and Krettek, 2005; Bedard, Guyatt, Stones and Hirdes, 2002; Langley, Mullin, Jackson and Norton, 2000), occupant-level factors such as the number of occupants, driver attention and alcohol involvement (Movig, Mathijssen, Nagel, Van Egmond, De Gier, Leufkens and Egberts, 2004; Petridou and Moustaki, 2000), and roadway design and environmental-level factors such as the number of lanes, traffic control, road curvature, road grade and pavement surface (Moeinaddini, Asadi-Shekari and Shah, 2014; Moeinaddini, Asadi-Shekari, Sultan and Shah, 2015; Rengarasu, Hagiwara and Hirasawa, 2007; Aarts and Van Schagen, 2006; Karlaftis and Golias, 2002; Ahmed, Abdel-Aty and Yu, 2012; Brijs, Karlis and Wets, 2008; Golob and Recker, 2003).
+ ORCID(s): 0000-0002-0679-3537 (M. Moeinaddini); 0000-0002-2092-816X (M. Pourmoradnasseri); 0000-0001-9257-3858 (A. Hadachi)
+ Page 1 of 14
+ arXiv:2301.01771v1 [cs.LG] 4 Jan 2023
+ There are various studies regarding the effects of the aforementioned five categories on crash severities. Duddu, Penmetsa and Pulugurtha (2018) examined the effects of road characteristics, environmental conditions, and driver characteristics on driver injury severity (for both at-fault and not-at-fault drivers), using a partial proportional odds model. The results of this study show that the age of the driver, physical condition, gender, vehicle type, and the number and type of traffic rule violations have significantly higher impacts on injury severity in traffic crashes for not-at-fault drivers compared to at-fault drivers. In addition, road characteristics, weather conditions, and geometric characteristics were observed to have similar effects on injury severity for at-fault and not-at-fault drivers. Driving inattention and distracted driving behavior are two important causes of traffic crashes (Bakhit, Osman, Guo and Ishak, 2019). Guo and Fang (2013) predicted high-risk drivers using personality, demographic, and driving characteristic data. The results of their study show that the driver’s age, personality, and critical incident rate have significant effects on crash and near-crash risk. Three inter-related variables, including failures of driver attention, misperceptions of speed and curvature, and poor lane positioning, are important reasons for driver errors associated with horizontal curves (Charlton, 2007).
+ Charlton (2007) used simulation to find the level of driver attention by comparing advance warning, delineation, and road marking. The results of this study show that rumble strips can produce appreciable reductions in speed compared to advance warning signs. Haghighi et al. (2018) examined the effects of different roadway geometric features (e.g., curve rate, lane width, narrow shoulder, shoulder width, and driveway density) on the severity outcomes on two-lane highways in rural areas, using data from 2007 to 2009 in Illinois. In their research, the effects of environmental conditions and geometric features on crash severity were analyzed using a multilevel ordered logit model. The results showed that the presence of a 10-ft lane and/or narrow shoulders, a lower roadside hazard rate, higher driveway density, longer barrier length, and shorter barrier offset are associated with lower severe crash risk.
+ Wang et al. (2017) considered the effects of variables such as driver demographic and behavioral characteristics, traffic environment characteristics, and roadway design characteristics on the odds of a safety-critical event (e.g., using a cell phone, interaction with passengers, external distraction, talking or singing, reaching or moving objects, and drinking or eating) in curve-related crash and near-crash events using a logistic regression model. However, some important variables, such as variables that can explain the critical events and reactions or maneuvers of drivers during the crash, were missing in this study. Moreover, this study also did not consider the effects of these factors on injury severity. Some limited studies examine driver behavior and traffic safety when negotiating curves using simulators (e.g., Jeong and Liu (2017); Yotsutsuji, Kita, Xing and Hirai (2017); Abele and Møller (2011); Charlton (2007)), but these studies represent just a simplified real-life situation based on certain assumptions, and validation is a challenging process in these studies.
+ Various studies focused on curve characteristics and crash risk (Elvik, 2013). The majority of these studies investigated the effects of speed and speed limit (e.g., Yotsutsuji et al. (2017); Wang et al. (2017); Dong, Nambisan, Richards and Ma (2015); Vayalamkuzhi and Amirthalingam (2016)) and road design characteristics such as the radius of the curve, curve rate and curve length on traffic safety (e.g., Yotsutsuji et al. (2017); Haghighi et al. (2018); Wang et al. (2017); Khan, Bill, Chitturi and Noyce (2013); Schneider IV, Savolainen and Moore (2010)). In addition to speed and road design factors, some researchers investigated the effects of socio-demographic factors for drivers (age, gender, income, etc.) on traffic safety in curve-related crashes (e.g., Wang et al. (2017)). However, only a few efforts have been undertaken to quantify the effects of pre-crash events on traffic safety and crash injuries, specifically in curve crashes (Wang et al., 2017). To our knowledge, little is known in the related transportation literature regarding the impacts of pre-crash events on traffic injuries for curve-related crashes (Bärgman, Boda and Dozza, 2017). The current study tries to eliminate these shortcomings by considering pre-crash-event-related factors as selected variables and the number of vehicles with or without injury as the predicted variable in curve-related crashes.
+ 2. Data and Methodology
+ This research focuses on the relationship between pre-crash-event-related factors and the number of vehicles with or without injury in different states of the United States for curve-related crashes in 2020. In this study, the data are extracted from the Crash Report Sampling System (CRSS) in the National Highway Traffic Safety Administration (NHTSA) report. This database has data for 94,718 vehicles involved in crashes in different states of the United States. The data are extracted from all reliable police-reported motor vehicle traffic crashes. The database includes data for pedestrians, cyclists, and all types of motor vehicles and covers different types of crashes. In this study, vehicle-related data are used to explore the effects of pre-crash events on the number of vehicles with or without injuries for curve-related crashes. Around 8% of the involved vehicles in these crashes are related to crashes on curves (7,542 crashes out of 94,718). Table 1 shows the frequency of crashes based on roadway alignment for cases with available injury levels (90,269 crashes out of 94,718). This table indicates the proportion of involved vehicles in curve-related crashes for vehicles without or with injuries.
+ for vehicles without or with injuries.
122
+ Table 1
123
+ Frequency of crashes based on roadway alignment.
124
+ Injury
125
+ No
126
+ Yes
127
+ Total
128
+ Roadway Alignment
129
+ Row%
130
+ Col%
131
+ Cell%
132
+ Row%
133
+ Col%
134
+ Cell%
135
+ No.
136
+ 1,864
137
+ 81
138
+ 3
139
+ 2
140
+ 428
141
+ 19
142
+ 1
143
+ 0
144
+ 2,292
145
+ Non-Trafficway
146
+ 51,97
147
+ 67
148
+ 87
149
+ 58
150
+ 25,604
151
+ 33
152
+ 84
153
+ 28
154
+ 77,574
155
+ Straight
156
+ 1,758
157
+ 55
158
+ 3
159
+ 2
160
+ 1,436
161
+ 45
162
+ 5
163
+ 2
164
+ 3,194
165
+ Curve Right
166
+ 1,502
167
+ 49
168
+ 3
169
+ 2
170
+ 1,581
171
+ 51
172
+ 5
173
+ 2
174
+ 3,083
175
+ Curve Left
176
+ 563
177
+ 57
178
+ 1
179
+ 1
180
+ 429
181
+ 43
182
+ 1
183
+ 0
184
+ 992
185
+ Curve-Unknown Direction
186
+ 1,998
187
+ 65
188
+ 3
189
+ 2
190
+ 1,085
191
+ 35
192
+ 4
193
+ 1
194
+ 3,083
195
+ Not Reported
196
+ 35
197
+ 69
198
+ 0
199
+ 0
200
+ 16
201
+ 31
202
+ 0
203
+ 0
204
+ 51
205
+ Total
206
+ 59,69
207
+ 66
208
+ 100
209
+ 66
210
+ 30,579
211
+ 34
212
+ 100
213
+ 34
214
+ 90,269
215
Different pre-crash variables are considered predictors for the predicted variable, i.e., the number of vehicles with or without injury (0: vehicle without injury, 1: vehicle with injury) in the crashes that occurred on a curve. Table 2 defines these variables after decoding. The selected variables in Table 2 represent pre-crash events in addition to driver behavior and some crash-level data, such as the month of the crash. Since the crash rate may differ across seasons, the month of the crash indicates whether the crash occurred in the winter. In addition, since more occupants in a crash may lead to an increased chance of vehicle injury, the effect of the number of occupants in each vehicle is also tested in this study. Some driver-behavior-related factors may affect the number of vehicles with or without injury. These factors include driver errors (e.g., careless driving, aggressive driving, improper or erratic lane changing, and overcorrecting), driving too fast, and avoidance maneuvers. Some factors represent environment and vehicle conditions. For example, the vehicle's manufacturing year represents the vehicle's condition, assuming that newer and less damaged cars may lead to fewer injuries because of more advanced safety features. The driving environment condition is represented by the trafficway description (one way or two ways, and how the trafficways are separated), the speed limit, the roadway surface condition, and the presence of traffic controls.

The rest of the selected variables, such as the harmful events, the critical pre-crash events, the vehicle's stability after the critical event, the location of the vehicle after the critical event, and the crash type, represent pre-crash events. As expected, not all required data for the main variables are reported for all curve-related crashes in the NHTSA report, and there are many not-reported or unknown values for these variables. In addition, to focus on curve-related crashes, only the vehicles that were negotiating a curve prior to realizing an impending critical event or just prior to impact are included. Therefore, after data preparation (removing incomplete information for the main variables and including only the vehicles that were negotiating a curve), the total number of curve-related crashes retained in this study equals 740.
Table 2: Selected variables.

| Variable | Description | Coding |
|---|---|---|
| Urban or rural | The geographical area of the crash is essentially urban or rural | 1: urban; 2: rural |
| Number of motor vehicles | Number of motor vehicles involved in the crash | 0: 1; 1: >1 |
| Number of occupants | The number of occupants in each vehicle | 0: 1; 1: >1 |
| First harmful event | The first injury- or damage-producing event | 0: other events; 1: collision with motor vehicles in transport |
| Vehicle's model year | Manufacturer's model year of the vehicle | 0: <2010; 1: >=2010 |
| Initial contact point | The area on the vehicle that produced the first instance of injury or damage | 0: other areas; 1: front |
| Extent of damage | The amount of damage sustained by the vehicle | 0: not disabling damage; 1: disabling damage |
| Most harmful event | The event that resulted in the most severe injury or the greatest damage | 0: other events; 1: collision with motor vehicles in transport |
| Speeding-related | The driver's speed was related to the crash | 0: no; 1: yes |
| Driver error | Factors related to driver errors expressed by the investigating officer | 0: no error; 1: error (e.g., careless driving, aggressive driving/road rage, operating the vehicle in an erratic, reckless, or negligent manner, improper or erratic lane changing or improper lane usage, driving on the wrong side of a two-way trafficway, etc.) |
| Trafficway | Trafficway description | 0: divided two-way and others; 1: not divided two-way |
| Speed limit | The posted speed limit in miles per hour | 0: <46; 1: >=46 |
| Roadway alignment | The roadway alignment prior to the critical pre-crash event | 1: curve right; 2: curve left; 3: curve, unknown direction |
| Grade | Roadway grade prior to the critical pre-crash event | 0: not level; 1: level |
| Roadway surface condition | Roadway surface condition prior to the critical pre-crash event | 0: not dry; 1: dry |
| Traffic control device | The presence of traffic controls in the environment prior to the critical pre-crash event | 0: no; 1: yes |
| Critical pre-crash event | The critical event which made this crash imminent | 1: the vehicle itself (loss of control, traveling too fast, etc.); 2: other vehicles (traveling in the opposite direction, encroaching into the lane, etc.); 3: others (pedestrian in the road, animal approaching the road, etc.) |
| Attempted avoidance maneuver | Movements/actions taken by the driver within the crash | 0: no action; 1: braking; 2: others |
| Pre-impact stability | The stability of the vehicle after the critical event but before the impact | 0: no tracking (skidding, loss of control, etc.); 1: tracking |
| Pre-impact location | The location of the vehicle after the critical event but before the impact | 0: not departed roadway; 1: departed roadway |
| Crash type | The type of crash | 0: others; 1: single driver involved (roadside departure, collision with pedestrians, etc.) |
Different methods, such as multinomial logit models, general linear models, ordered probit models, and linear regression ((Clark and Cushing, 2004); (Levine, Kim and Nitz, 1995); (Abdel-Aty, 2003); (Yang, Zhibin, Pan and Liteng, 2011); (Fan, Kane and Haile, 2015)), negative binomial (NB) models ((Abdel-Aty and Radwan, 2000); (Hadayeghi, Shalaby and Persaud, 2003); (Hadayeghi, Shalaby and Persaud, 2007); (Wei and Lovegrove, 2013); (Moeinaddini et al., 2014); (Moeinaddini et al., 2015)), Poisson models ((Movig et al., 2004)), and zero-inflated Poisson and NB models ((Qin, Ivan and Ravishanker, 2004); (Shankar, Milton and Mannering, 1997)) have been used to analyze traffic fatality and injury data. In addition to these methods, some studies have used decision tree approaches (e.g., ID3, C4.5, C5.0, C&R, CHAID) to find the major factors contributing to collisions and the number of fatalities. For example, the Classification and Regression (C&R) Tree was used by (Tavakoli Kashani, Shariat-Mohaymany and Ranjbari, 2011). Decision trees are one of the most common approaches for representing classifiers (Maimon and Rokach, 2005). Researchers from different disciplines, such as machine learning, statistics, pattern recognition, and data mining, use decision trees to analyze data in a more comprehensive way (Maimon and Rokach, 2005).

(Zhang and Fan, 2013) used data mining models based on the ID3 and C4.5 decision tree algorithms to evaluate traffic collision data in Canada. (Chong, Abraham and Paprzycki, 2005) compared different machine learning paradigms, including neural networks trained using hybrid learning approaches, support vector machines, decision trees, and a concurrent hybrid model combining decision trees and neural networks, to model the injury severity of traffic crashes. The results of their study show that for the non-incapacitating injury, incapacitating injury, and fatal injury classes, the hybrid approach performed better than a neural network, decision trees, and support vector machines. (da Cruz Figueira, Pitombo, Larocca et al., 2017) used the C&R algorithm as a useful tool for identifying potential sites of crashes with victims. (Chang and Wang, 2006) used C&R to find the relationships between crash severity and factors such as driver and vehicle variables and road and environment characteristics. The results of their study show that vehicle type is one of the most important factors affecting the severity of a crash.

The majority of traditional and parametric analysis techniques rely on assumptions and pre-defined functions that describe the relationship between the selected and the predicted variables (Chang and Wang, 2006). If these assumptions are violated, the power of the model can be affected negatively (Griselda, Joaquín et al., 2012). Therefore, assumption-free models such as decision trees can be used to avoid this limitation (Griselda et al., 2012). (Kuhnert, Do and McClure, 2000) compared the results of different methods, such as logistic regression, multivariate adaptive regression splines (MARS), and C&R, in the analysis of data related to injury in motor vehicle crashes. The findings of their study show the usefulness of non-parametric techniques such as C&R and MARS in providing more attractive and informative models (Griselda et al., 2012). (Pandya and Pandya, 2015) compared the results of ID3, C4.5, and C5.0 with each other. They found that among all these classifiers, C5.0 gives more efficient, accurate, and fast results with low memory usage (fewer rules compared to the other techniques).

Finding the most accurate prediction models can help planners and designers develop better traffic safety control policies. In the current study, a variety of modeling techniques are envisaged as possible analysis methods, and the most appropriate model based on the accuracy rate is retained for further discussion of the results.
The first step in finding the most appropriate model is applying a random forest to identify the most influential variables among the selected variables. The identified effective variables are then used as predictors to explore their effects on the predicted variable. Random forest is a common method for selecting the most effective variables in studies with a high number of predictors ((Jahangiri, Rakha and Dingus, 2016); (Kitali, Alluri, Sando, Haule, Kidando and Lentz, 2018); (Zhu, Li and Wang, 2018); (Aghaabbasi, Shekari, Shah, Olakunle, Armaghani and Moeinaddini, 2020); (Lu and Ma, 2020)). The random forest aggregates many binary decision trees. Cross-validation (10-fold) is applied to estimate the accuracy, since it generally results in a less biased estimate than a single train-test split. Random search and grid search (Bergstra and Bengio, 2012) are used for hyper-parameter optimization. After applying random forests, the SHAP (SHapley Additive exPlanations) values (Lundberg and Lee, 2017) are used to select the most important variables. SHAP explains the contribution of each observation and provides local interpretability, whereas traditional importance values explain each predictor's effect over the entire population. After estimating the SHAP values, the least important variables are excluded one by one. The accuracy rates at each step of the exclusion are used to find the most effective variables.
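This backward-elimination loop can be sketched as follows. The sketch uses scikit-learn on synthetic data, with the forest's impurity-based importances standing in for the SHAP ranking (the study itself ranks variables with SHAP values); the dataset and all parameter values are illustrative assumptions, not the study's configuration.

```python
# Sketch of the feature-selection loop described above: rank features with a
# random forest, drop the least important one at a time, and track 10-fold
# cross-validated accuracy. Impurity importances stand in for SHAP values.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=8, n_informative=4,
                           random_state=0)  # synthetic stand-in for the CRSS data
features = list(range(X.shape[1]))
history = []  # (number of features kept, CV accuracy)

while len(features) > 1:
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    acc = cross_val_score(rf, X[:, features], y, cv=10).mean()
    history.append((len(features), acc))
    rf.fit(X[:, features], y)
    # Drop the feature with the smallest importance (SHAP ranking in the study).
    least = features[int(np.argmin(rf.feature_importances_))]
    features.remove(least)

best_k, best_acc = max(history, key=lambda t: t[1])
print(f"best subset size: {best_k}, CV accuracy: {best_acc:.3f}")
```

The subset size whose cross-validated accuracy peaks plays the role of the "10 most important predictors" retained in the Results section.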
2.1. Models Description
A couple of machine-learning algorithms were explored to identify the relationships between curve-related crashes and pre-crash events. To this end, identifying the significant features for training the models is an important step to ensure a good training process and better results.

2.1.1. Feature selection
Approximating the functional relationship between the input data and the output is one of the fundamental problems when applying machine learning methods. Selecting the significant features for training the machine learning models is crucial to avoid overfitting and high computational costs. Therefore, our approach uses the power of the random forest as a classifier, interpreted with Shapley values. Relying on SHAP values allows feature selection based on ranking: instead of using the embedded feature selection process of the random forest, we use the SHAP values to select the features with the highest Shapley values. The advantage of this approach is that it avoids any bias in the native tree-based feature importances built by the random forest.
2.1.2. C5.0 decision tree algorithm
Decision trees are built using recursive partitioning. The algorithm starts by creating the root node, which in our case holds the entire data set. Then, based on the most significant feature selected, the data are partitioned into groups corresponding to the distinct values of this feature, and this decision forms the first set of branches of the tree. The algorithm keeps dividing the nodes until a stopping criterion is reached (Algorithm 1). In practice, the C5.0 algorithm decides each split by using the concept of entropy to measure purity. For a segment of data S, an entropy close to 0 indicates that the data sample is homogeneous, while a value close to 1 indicates the opposite. Entropy is defined as follows:

Entropy(S) = − Σ_{i=1}^{m} p_i log2(p_i);    (1)
where m refers to the number of different class levels, and p_i is the proportion of values falling into class level i. However, even after this step, the algorithm still needs to decide how to split the set. To solve this, C5.0 uses the entropy to measure the change in homogeneity resulting from a split. This measure is called the information gain; it quantifies the gain obtained by splitting the data set S on an attribute A, and is defined using equation 2:

InfoGain(S, A) = Entropy(S) − Σ_{v ∈ V(A)} (|S_v| / |S|) Entropy(S_v);    (2)
where V(A) is the set of all possible values of attribute A, and S_v is the subset of S for which A has value v. Hence, a split on a specific feature creates more homogeneous groups when the information gain is high.
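Equations 1 and 2 can be verified with a short, self-contained sketch; the toy data, attribute names, and helper names below are invented for illustration and are not from the study:

```python
# Entropy (eq. 1) and information gain (eq. 2) for a labeled data set,
# computed exactly as defined above.
from collections import Counter
from math import log2

def entropy(labels):
    """Entropy(S) = -sum_i p_i * log2(p_i) over the class proportions."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """InfoGain(S, A): entropy reduction from splitting on attribute `attr`."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(row[attr] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy example: 'surface' perfectly separates the two classes, 'grade' does not.
rows = [{"surface": "dry", "grade": "level"},
        {"surface": "dry", "grade": "not level"},
        {"surface": "wet", "grade": "level"},
        {"surface": "wet", "grade": "not level"}]
labels = ["injury", "injury", "no injury", "no injury"]
print(info_gain(rows, labels, "surface"))  # 1.0: the split removes all impurity
print(info_gain(rows, labels, "grade"))    # 0.0: the split is uninformative
```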
Algorithm 1 C5.0 decision tree
Require: Data S = {(x_i, y_i), i ∈ {1, 2, ..., n}}
Require: Attributes A = {a_l, l ∈ {1, 2, ..., p}}
Create node N
if the examples in S all belong to the same class C then
    label N with class C → N(C)
    return N(C) as a leaf node
end if
if A = ∅, or a_i = a_j for all a_i, a_j ∈ A then
    label N with the majority class M in S → N(M)
    return N(M) as a leaf node
end if
select the best attribute a_i using InfoGain
label node N with the splitting criterion on a_i
for every value a_i^v of a_i do
    let S_v be the subset of S for which a_i = a_i^v
    if S_v = ∅ then
        attach a leaf labeled with the majority class M in S
    else
        attach the subtree built recursively from (S_v, A \ {a_i})
    end if
end for
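Algorithm 1 can be condensed into a small runnable sketch. This is an ID3-style builder using information gain only; C5.0's pruning, winnowing, and boosting are omitted, and the attribute names and data are invented for illustration:

```python
# Minimal recursive decision-tree builder following Algorithm 1
# (information-gain splitting; C5.0's pruning and boosting are omitted).
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def build_tree(rows, labels, attrs):
    # Leaf cases: pure node, or no attributes left to split on.
    if len(set(labels)) == 1:
        return labels[0]
    if not attrs:
        return Counter(labels).most_common(1)[0][0]
    # Pick the attribute with the highest information gain (eq. 2).
    def gain(a):
        g = entropy(labels)
        for v in set(r[a] for r in rows):
            sub = [lab for r, lab in zip(rows, labels) if r[a] == v]
            g -= len(sub) / len(labels) * entropy(sub)
        return g
    best = max(attrs, key=gain)
    node = {"attr": best, "children": {}}
    for v in set(r[best] for r in rows):
        sub_rows = [r for r in rows if r[best] == v]
        sub_labels = [lab for r, lab in zip(rows, labels) if r[best] == v]
        node["children"][v] = build_tree(sub_rows, sub_labels,
                                         [a for a in attrs if a != best])
    return node

def predict(node, row):
    while isinstance(node, dict):
        node = node["children"][row[node["attr"]]]
    return node

rows = [{"damage": "disabling", "surface": "dry"},
        {"damage": "disabling", "surface": "wet"},
        {"damage": "minor", "surface": "dry"},
        {"damage": "minor", "surface": "wet"}]
labels = ["injury", "injury", "injury", "no injury"]
tree = build_tree(rows, labels, ["damage", "surface"])
print(predict(tree, {"damage": "minor", "surface": "wet"}))  # no injury
```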
2.1.3. Chi-squared Automatic Interaction Detection
Chi-squared automatic interaction detection (CHAID) is a tree-based machine learning technique. The algorithm relies on multiway splits chosen using a chi-square or F-test. CHAID uses two separation criteria depending on the variable: if the variable is categorical, Pearson's chi-square is adequate; otherwise, the likelihood-ratio chi-square statistic is used. CHAID applies Pearson's chi-squared test of independence to test for an association between two categorical variables ("true" or "false"). The main steps in calculating the chi-square for a split are as follows:

1. Calculate the deviations for "true" and "false" in each node, which constitute the chi-square computation.
2. Obtain the split by summing the chi-square contributions of "true" and "false" over each split node.
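The two steps above amount to computing Pearson's chi-square statistic on the contingency table of a candidate split; a minimal sketch with invented counts:

```python
# Pearson chi-square statistic for a candidate split: observed counts of
# "true"/"false" outcomes in each child node vs. the counts expected under
# independence. Larger values indicate a stronger association with the split.
def chi_square(table):
    """table[i][j]: count of outcome j in child node i."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = row_tot[i] * col_tot[j] / total  # expected under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical split: child nodes (rows) vs. injury outcome (columns).
uninformative = [[50, 50], [50, 50]]  # same outcome mix in both children
informative = [[80, 20], [20, 80]]    # outcome mix differs sharply
print(chi_square(uninformative))  # 0.0
print(chi_square(informative))    # 72.0
```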
2.1.4. Classification and Regression Tree node
The Classification and Regression (C&R) Tree node algorithm is a classification algorithm based on a binary tree built by continually splitting nodes into two child nodes, similar to the C5.0 method. The algorithm follows three major steps:

1. Identify each feature's best split.
2. Identify the node's best split.
3. Based on the result of step 2, split the node and repeat the process from step 1 until the stopping criterion is met.

To perform the split, Gini's impurity index criterion is used; for a node n, it is defined as follows:
+ For performing the split, Gini’s impurity index criterion is used and it is defined as follows for a node 푡:
520
+ Gini (푡) =
521
+
522
+ 푖,푗
523
+ 퐶(푖 ∣ 푗)푃 (푖 ∣ 푛)푃(푗 ∣ 푛);
524
+ (3)
525
+ where,
526
+ • 퐶(푖 ∣ 푗) is the cost of classifying wrongly a class 푗 as a class 푖 and it is defined as follows:
527
+ 퐶(푖 ∣ 푗) =
528
+ {
529
+ 1
530
+ 푖 ≠ 푗
531
+ 0
532
+ 푖 = 푗
533
+ (4)
534
+ • 푃 (푖 ∣ 푛) is the probability of 푖 falls into node 푛
535
+ Page 8 of 14
536
+
537
+ • 푃(푗 ∣ 푛) is the probability of 푗 falls into node 푛
538
The splitting criterion is based on Gini's impurity criterion, which selects the split with the largest decrease in impurity, computed using the following formula:

ΔGini(s, n) = Gini(n) − P_R Gini(n_R) − P_L Gini(n_L);    (5)

where ΔGini(s, n) is the decrease in impurity at node n for a split s; P_R and P_L are, respectively, the probabilities of sending a case to the right or left child node n_R or n_L; and Gini(n_R) and Gini(n_L) are, respectively, the Gini impurity indices of the right and left child nodes.
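With the unit misclassification costs of equation 4, equation 3 reduces to the familiar form 1 − Σ_i P(i | n)², and equation 5 follows directly; a small sketch with invented class counts:

```python
# Gini impurity (eq. 3 with the unit costs of eq. 4) and the impurity decrease
# of a binary split (eq. 5), computed from class-count lists.
def gini(counts):
    """Gini(n) = sum_{i != j} p_i p_j = 1 - sum_i p_i^2."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_decrease(parent, left, right):
    """Delta Gini(s, n) = Gini(n) - P_L Gini(n_L) - P_R Gini(n_R)."""
    n = sum(parent)
    p_l, p_r = sum(left) / n, sum(right) / n
    return gini(parent) - p_l * gini(left) - p_r * gini(right)

parent = [50, 50]               # 50 "no injury", 50 "injury" cases at the node
left, right = [45, 5], [5, 45]  # a split that nearly separates the classes
print(gini(parent))                                  # 0.5
print(round(gini_decrease(parent, left, right), 4))  # 0.32
```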
2.1.5. Bayesian network
A Bayesian network is a compact graphical representation of the causal relationships between the variables of a dataset. The structure is represented by a directed acyclic graph (DAG), and the parameters are expressed as conditional probabilities. In order to learn the network, both the structure and the conditional probabilities must be determined. The structure is learned by DAG search algorithms and by assigning prior probabilities; the parameters are then determined by maximum likelihood estimation. Including prior knowledge of the causal structure of the DAG is a crucial step in learning the parameters in this method.
2.1.6. Logistic regression
Logistic regression is a statistical classification algorithm that maps the result of a linear function onto the regression function

P(X) = e^{β_0 + βX} / (1 + e^{β_0 + βX}).    (6)

Based on the maximum likelihood method, the coefficients β_0 and β are estimated in the training phase. This algorithm is a suitable classifier when the variables can be categorized into two or a few classes. In contrast to linear regression, the logistic function associates probabilities with each possible output class by producing an S-shaped curve with outputs ranging between 0 and 1.
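Equation 6 in code, with hypothetical coefficients standing in for fitted values:

```python
# The logistic response of eq. 6 for a single predictor vector x.
from math import exp

def logistic(x, beta0, beta):
    """P(X) = e^(b0 + b.x) / (1 + e^(b0 + b.x))."""
    z = beta0 + sum(b * xi for b, xi in zip(beta, x))
    return exp(z) / (1.0 + exp(z))

beta0, beta = -1.0, [0.8, 1.5]  # hypothetical fitted coefficients
print(logistic([0, 0], beta0, beta))  # ~0.269: low predicted injury probability
print(logistic([1, 1], beta0, beta))  # ~0.786: high predicted injury probability
```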
2.1.7. Neural Network
Our case study adopted a neural network architecture with five hidden layers, based on a multilayer perceptron model. The multilayer perceptron is the most straightforward feed-forward network; when the number of layers is increased, it can provide strong performance in learning and precision. The units are arranged into a set of layers, each containing a collection of units. The first layer is the input layer, populated by the values of the input features. The input is then connected to the hidden layers in a fully connected fashion. The last layer is the output layer, which has one unit for each network output; the weights are trained with a stopping rule on the generated error.
2.1.8. QUEST algorithm
Quick Unbiased Efficient Statistical Tree (QUEST) is a cost-effective classification method for building binary decision trees for categorical and quantitative predictors with a large number of variables. Instead of examining all possible splits, QUEST uses statistical analysis and a multiway chi-square test to select the variable at each node. This leads to a significant reduction in time complexity compared to methods like the C&R Tree by avoiding inefficient splits. Moreover, the split point at each node is selected based on a quadratic discriminant analysis of the potential categories.
2.1.9. Decision List
Decision lists are a representation of Boolean functions that work as a collection of rule-based classifiers. Rules are learned sequentially with a greedy approach, by identifying the rule that covers the maximum number of instances in the input space X. Rules are appended to the decision list one at a time, and the data covered by each rule are removed from the data set. A new instance is classified by examining the rules in order; if no rule is satisfied, the default rule is applied.

Features are defined as Boolean functions f_i that map the input space X onto {0, 1}. For a given set of features F = {f_i(x)}, with x ∈ X, and the training set T, the learning algorithm returns a selection of features F' ⊂ F. Once the effective features are determined, for an arbitrary input x, the output of the decision list is calculated according to a set of conditions to be satisfied.
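A decision list can be sketched as an ordered sequence of condition-class rules with a fallback default; the rules and attribute names below are hypothetical, purely for illustration:

```python
# A decision list: the first rule whose condition is satisfied decides the
# class; if no rule fires, the default rule applies. Rules are invented here.
rules = [
    (lambda x: x["damage"] == "disabling", "injury"),
    (lambda x: x["departed_roadway"] and x["surface"] == "dry", "injury"),
]
DEFAULT = "no injury"

def classify(x, rules, default=DEFAULT):
    for cond, label in rules:
        if cond(x):       # rules are examined in order
            return label
    return default        # no rule fired: fall back to the default rule

case = {"damage": "minor", "departed_roadway": True, "surface": "dry"}
print(classify(case, rules))  # injury (the second rule fires)
```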
3. Results
The overall accuracy of the applied random forest model with all predictors is 0.67. However, the overall accuracy of the random forest model with the 10 most important predictors based on SHAP values reaches 0.68. Therefore, these 10 predictors are selected as the most effective variables among the selected variables (the extent of the damage, critical pre-crash event, pre-impact location, the trafficway description, roadway surface condition, the month of the crash, the first harmful event, number of motor vehicles, attempted avoidance maneuver, and roadway grade). The identified effective variables are used as predictors for a variety of possible modeling methods in order to find the most appropriate model based on the accuracy rate. The accuracy of the traditional logistic regression model is lower than that of non-parametric models such as C5.0, CHAID, C&R Tree, and the Bayesian network. In addition, potentially high correlations between some of the selected crash-related variables in this study may raise a multicollinearity concern. Therefore, it is better to use modeling techniques that can handle multicollinearity issues in order to consider the effects of these variables. Since non-parametric models handle multicollinearity in crash-related data better than traditional and parametric models, and based on the overall accuracy achieved by each model (see Table 3), the C5.0 model with the highest accuracy score is used for modeling the most important predictors. To develop this C5.0 model, the minimum number of records per child branch is set to 2 and the pruning severity to 75. To collapse weak subtrees, the trees are pruned in local and global pruning stages. Cross-validation is used to estimate the accuracy of the model; this technique builds a set of models on subsets of the data to estimate the accuracy. C5.0 is an improved version of C4.5, which is an extension of the ID3 algorithm ((Quinlan, 1993); (Witten, 2011); (Kotsiantis, Zaharakis, Pintelas et al., 2007); (Quinlan, 1996)).
Table 3
The overall accuracy of the possible analysis methods.

| Applied model | Overall accuracy (%) |
|---|---|
| C5.0 | 71.757 |
| CHAID | 70.135 |
| C&R Tree | 68.784 |
| Bayesian Network | 67.973 |
| Logistic Regression | 66.486 |
| Neural Network | 65.270 |
| QUEST | 63.514 |
| Decision List | 63.108 |
The applied C5.0 model is shown in Figure 1. This figure shows the total percentage and the classification of the predicted variable for each node. The overall accuracy based on the results is more than 71%. Sixteen terminal nodes (the bottom nodes of the decision tree) are shown in Figure 1, and the model has 8 splitters, i.e., the extent of the damage, first harmful event, the month of the crash, critical pre-crash event, pre-impact location, roadway surface condition, the trafficway description, and roadway grade. The most important variable for data segmentation is the extent of the damage. The probability of having vehicles without injury in curve-related crashes is high in node 1. Node 1 shows that non-disabling damage results in a higher rate of vehicles without injury. In contrast, node 23 shows that disabling damage results in a higher rate of vehicles with injury. The findings show that all environmental and pre-crash events that lead to driving with extra caution are related to a lower chance of vehicle injury in curve-related crashes. For example, for vehicles that have disabling damage (node 23), the model prediction is "with injury" if the month of the crash is not in the winter. The same prediction can be expected for the winter months if the surface is dry (node 27) and the vehicle departed the roadway before impact (node 29). However, the prediction for the winter months can be "without injury" if the surface is not dry (node 26) or, for dry surfaces, if the vehicle did not depart the roadway (node 28). The effects of driving with extra caution can also be noticed for vehicles that do not have collisions with motor vehicles in transport. Node 1 is divided into node 2 and node 20, which are related to the first harmful event. For vehicles that have no disabling damage (node 1) and no collision with motor vehicles in transport (node 2), the model prediction is "without injury" if the critical pre-crash event is related to the vehicle itself (node 3), the surface is not dry (node 4), and the month of the crash is in the winter (node 8). The same prediction can be expected for months that are not in the winter if the roadway is not a divided two-way road (node 7). The same prediction can also be expected when the critical pre-crash event is related to other vehicles (node 10) and the vehicle departed the roadway (node 12), and for the other critical pre-crash events when the vehicle did not depart the roadway (node 14). The prediction is also "without injury" for departed-roadway cases (node 15) if the roadway is a divided two-way road (node 16), or if it is a not-level road (node 18) on a not divided two-way road (node 17). For vehicles that have no disabling damage (node 1) and no collision with motor vehicles in transport (node 2), the model prediction is "with injury" if the critical pre-crash event is related to the vehicle itself (node 3) and the surface is dry (node 9). The same prediction can be expected when the critical pre-crash event is related to other vehicles (node 10) and the vehicle did not depart the roadway (node 11). For vehicles that have no disabling damage (node 1) and a collision with motor vehicles in transport (node 20), the model prediction is "without injury" if the vehicle did not depart the roadway (node 21) and "with injury" if it did (node 22). This finding shows that departing the roadway is a very important factor for collisions with motor vehicles in transport in curve-related crashes. The results confirm that, out of all input variables, eight main variables play an important role in vehicles with or without injury in curve-related crashes. Table 4 shows the importance of these main predictors based on the proposed C5.0 algorithm. Higher importance scores mean a greater contribution of the variable in predicting the number of vehicles with or without injury. A breakdown of prediction accuracy is also estimated (see Table 5).
Table 4
Importance of the predictors based on the proposed C5.0 algorithm.

| Predictor | Importance |
|---|---|
| Extent of damage | 0.3401 |
| Pre-impact location | 0.2303 |
| The first harmful event | 0.2056 |
| Month of crash | 0.1021 |
| Roadway surface condition | 0.0433 |
| Trafficway description | 0.0430 |
| Roadway grade | 0.0351 |
| Critical pre-crash event | 0.0006 |
Table 5
Coincidence matrix for the predicted values.

| Observed | Predicted 0 | Predicted 1 | Correct (%) |
|---|---|---|---|
| 0 | 206 | 140 | 60 |
| 1 | 69 | 325 | 82 |
+ 4. Discussion and Conclusion
700
+ To reduce the number of crash injuries and have better planning decisions and strategies, it is important to have
701
+ deep knowledge about factors influencing crash injuries. The proposed C5.0 algorithm (with a higher overall accuracy
702
+ rate compared to the other analysis methods) can help to identify the variables that have the most important impacts
703
+ on the number of vehicles with or without injury in curve-related crashes. This study used the 2020 NHTSA data
704
+ for different states in the USA to find the key variables that affect the number of vehicles with or without injury in
705
+ curve-related crashes. The results show that the extent of the damage, critical pre-crash event, pre-impact location,
706
+ the trafficway description, roadway surface condition, the month of the crash, the first harmful event, number of motor
707
+ vehicles, attempted avoidance maneuver, and roadway grade affect the number of vehicles with or without injury the
708
+ most. The C5.0 model shows that most of the important predictors are related to environmental and pre-crash events
709
+ that lead to driving with extra caution. Analysis results also revealed that departing the roadway is a very important
710
+ factor for collisions with motor vehicles in transport in curve-related crashes. This is in line with previous studies like
711
+ (Wang et al., 2017) that identified traveling too fast on curves as one of the most important factors that contribute to
712
+ Page 11 of 14
+ Figure 1: The proposed C5.0 model. [Decision-tree figure: the target is "with or without injury"; the root splits on
+ the extent of damage, with subsequent nodes splitting on the first harmful event, month of the crash, critical pre-crash
+ event, pre-impact location, roadway surface condition, trafficway description, and roadway grade.]
+ crash fatalities. (Wang et al., 2017) considered the effects of driver behavior factors such as speeding in curve-related
+ crashes. Still, this study did not consider factors such as critical events and pre-critical-event factors in addition to the
+ reaction or maneuvers of the driver during the crash.
+ In (Wang et al., 2017), an icy or snowy road surface was another important factor associated with curve-related
+ crashes; however, in our study, a non-dry surface was a significant factor for vehicles with injury in the months that
+ are not in winter. This is in line with the (Eisenberg and Warner, 2005) study that evaluated the impacts of
+ snowy surfaces on traffic crash rates in the USA (1975-2000). They found that snow days are associated with fewer
+ severe crashes, whereas more non-severe crashes and property-damage crashes are reported on snow days. Therefore,
+ although icy and snowy surfaces can be an important factor in the crash rate, they do not have a high association with
+ severe crashes. Crash type is another important variable in similar research such as (da Cruz Figueira et al., 2017)
+ and (Griselda et al., 2012). However, the proposed models in the current study show that crash type is not significant
+ when critical pre-crash events are considered. Based on the proposed final model, the extent of damage, the pre-impact
+ location, the first harmful event, and the critical pre-crash event are among the significant pre-crash events that can
+ affect the number of vehicles with or without injury, in addition to environmental factors like the month of the crash,
+ roadway surface condition, trafficway description, and roadway grade. There are limited studies about the impacts
+ of these important pre-crash events and environmental factors on traffic injuries (Bärgman et al., 2017), and although
+ curve-related crashes are associated with a high proportion of severe crashes, there is no study about the effects of
+ these important factors on the number of vehicles with or without injury in curve-related crashes. Applying
+ non-parametric tree-based models like C5.0 has some advantages compared to traditional regression and other
+ parametric models. (Chang and Wang, 2006) highlighted that C5.0 analysis does not require the specification of a
+ functional form and that it can handle multi-collinearity problems, which often occur due to the high correlations
+ between selected variables in traffic injury data (e.g., collision type and driver/vehicle action; weather condition and
+ pavement condition). The proposed C5.0 model can be presented graphically, which is intuitively easy to interpret
+ without complicated statistics. It also provides useful results by focusing on a limited set of the most influential
+ factors (Chang and Wang, 2006). However, non-parametric models have some disadvantages, such as a lack of formal
+ statistical inference procedures (Chang and Wang, 2006). These models also do not provide confidence intervals for
+ the risk factors (splitters) and predictions (Chang and Wang, 2006). The structure and accuracy can change significantly
+ if different partitioning and sampling strategies (e.g., stratified random sampling) are applied for model testing. It is
+ not recommended to generalize based on the results of non-parametric techniques. Therefore, tree models are often
+ applied to identify important variables, and other modeling techniques are needed to develop final models. Since
+ sampling and different partitioning strategies are not applied to the proposed models in this study, this disadvantage
+ is not a great concern for the current research.
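Since the discussion above turns on how C5.0 selects and ranks splitters, a concrete illustration may help. At each node, C5.0 chooses the candidate variable with the highest gain ratio, i.e., information gain normalized by the split's intrinsic information; this criterion is what surfaces variables such as the extent of damage or roadway grade as important. The NumPy sketch below illustrates only the criterion; the toy labels and splitter values are invented for demonstration, and this is not the implementation used in the study.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a vector of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(labels, split):
    """C5.0-style gain ratio for one categorical splitter.

    labels: class labels (e.g., vehicle with/without injury)
    split:  candidate splitter values (e.g., roadway surface condition)
    """
    parent = entropy(labels)
    values, counts = np.unique(split, return_counts=True)
    weights = counts / counts.sum()
    # Weighted entropy of the child nodes produced by the split.
    children = sum(w * entropy(labels[split == v])
                   for v, w in zip(values, weights))
    info_gain = parent - children
    # Intrinsic information of the split itself (penalizes
    # splitters with many small branches).
    split_info = -np.sum(weights * np.log2(weights))
    return info_gain / split_info if split_info > 0 else 0.0

# A splitter that separates the two classes perfectly in a
# balanced binary split has gain ratio 1.0.
y = np.array([0, 0, 1, 1])
x = np.array(['dry', 'dry', 'wet', 'wet'])
print(round(gain_ratio(y, x), 3))  # 1.0
```

In a full C5.0 run, this score is evaluated for every candidate splitter at every node, and the winner becomes that node's test; the resulting tree can then be read off graphically, as in Figure 1.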
+ Acknowledgments
+ This work was supported by the European Social Fund via the IT Academy programme and the Estonian Centre of
+ Excellence in IT (EXCITE).
+ References
+ Aarts, L., Van Schagen, I., 2006. Driving speed and the risk of road crashes: A review. Accident Analysis & Prevention 38, 215–224.
+ Abdel-Aty, M., 2003. Analysis of driver injury severity levels at multiple locations using ordered probit models. Journal of Safety Research 34, 597–603.
+ Abdel-Aty, M.A., Radwan, A.E., 2000. Modeling traffic accident occurrence and involvement. Accident Analysis & Prevention 32, 633–642.
+ Abele, L., Møller, M., 2011. The relationship between road design and driving behavior: A simulator study, in: 3rd International Conference on Road Safety and Simulation, pp. 26–27.
+ Aghaabbasi, M., Shekari, Z.A., Shah, M.Z., Olakunle, O., Armaghani, D.J., Moeinaddini, M., 2020. Predicting the use frequency of ride-sourcing by off-campus university students through random forest and Bayesian network techniques. Transportation Research Part A: Policy and Practice 136, 262–281.
+ Ahmed, M., Abdel-Aty, M., Yu, R., 2012. Assessment of the interaction between crash occurrence, mountainous freeway geometry, real-time weather and AVI traffic data.
+ Alireza, H., 2002. Accident prediction models for safety evaluation of urban transportation network. MASc Thesis, University of Toronto.
+ Bakhit, P.R., Osman, O.A., Guo, B., Ishak, S., 2019. A distraction index for quantification of driver eye glance behavior: A study using SHRP2 NEST database. Safety Science 119, 106–111.
+ Bärgman, J., Boda, C.N., Dozza, M., 2017. Counterfactual simulations applied to SHRP2 crashes: The effect of driver behavior models on safety benefit estimations of intelligent safety systems. Accident Analysis & Prevention 102, 165–180.
+ Bedard, M., Guyatt, G.H., Stones, M.J., Hirdes, J.P., 2002. The independent contribution of driver, crash, and vehicle characteristics to driver fatalities. Accident Analysis & Prevention 34, 717–727.
+ Bergstra, J., Bengio, Y., 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research 13.
+ Brijs, T., Karlis, D., Wets, G., 2008. Studying the effect of weather conditions on daily crash counts using a discrete time-series model. Accident Analysis & Prevention 40, 1180–1190.
+ Chang, L.Y., Wang, H.W., 2006. Analysis of traffic injury severity: An application of non-parametric classification tree techniques. Accident Analysis & Prevention 38, 1019–1027.
+ Charlton, S.G., 2007. The role of attention in horizontal curves: A comparison of advance warning, delineation, and road marking treatments. Accident Analysis & Prevention 39, 873–885.
+ Chen, S.H., 2010. Mining patterns and factors contributing to crash severity on road curves. Ph.D. thesis. Queensland University of Technology.
+ Chong, M., Abraham, A., Paprzycki, M., 2005. Traffic accident analysis using machine learning paradigms. Informatica 29.
+ Clark, D.E., Cushing, B.M., 2004. Rural and urban traffic fatalities, vehicle miles, and population density. Accident Analysis & Prevention 36, 967–972.
+ da Cruz Figueira, A., Pitombo, C.S., Larocca, A.P.C., et al., 2017. Identification of rules induced through decision tree algorithm for detection of traffic accidents with victims: A study case from Brazil. Case Studies on Transport Policy 5, 200–207.
+ Dong, C., Nambisan, S.S., Richards, S.H., Ma, Z., 2015. Assessment of the effects of highway geometric design features on the frequency of truck involved crashes using bivariate regression. Transportation Research Part A: Policy and Practice 75, 30–41.
+ Duddu, V.R., Penmetsa, P., Pulugurtha, S.S., 2018. Modeling and comparing injury severity of at-fault and not at-fault drivers in crashes. Accident Analysis & Prevention 120, 55–63.
+ Eisenberg, D., Warner, K.E., 2005. Effects of snowfalls on motor vehicle collisions, injuries, and fatalities. American Journal of Public Health 95, 120–124.
+ Elvik, R., 2013. International transferability of accident modification functions for horizontal curves. Accident Analysis & Prevention 59, 487–496.
+ Fan, W., Kane, M.R., Haile, E., 2015. Analyzing severity of vehicle crashes at highway-rail grade crossings: multinomial logit modeling, in: Journal of the Transportation Research Forum, pp. 39–56.
+ FHWA, 2018. Horizontal curve safety. URL: https://safety.fhwa.dot.gov/roadway_dept/horicurves/cmhoricurves/.
+ Golob, T.F., Recker, W.W., 2003. Relationships among urban freeway accidents, traffic flow, weather, and lighting conditions. Journal of Transportation Engineering 129, 342–353.
+ Griselda, L., Joaquín, A., et al., 2012. Using decision trees to extract decision rules from police reports on road accidents. Procedia - Social and Behavioral Sciences 53, 106–114.
+ Guo, F., Fang, Y., 2013. Individual driver risk assessment using naturalistic driving data. Accident Analysis & Prevention 61, 3–9.
+ Hadayeghi, A., Shalaby, A.S., Persaud, B., 2003. Macrolevel accident prediction models for evaluating safety of urban transportation systems. Transportation Research Record 1840, 87–95.
+ Hadayeghi, A., Shalaby, A.S., Persaud, B.N., 2007. Safety prediction models: proactive tool for safety evaluation in urban transportation planning applications. Transportation Research Record 2019, 225–236.
+ Haghighi, N., Liu, X.C., Zhang, G., Porter, R.J., 2018. Impact of roadway geometric features on crash severity on rural two-lane highways. Accident Analysis & Prevention 111, 34–42.
+ Hao, W., Kamga, C., Wan, D., 2016. The effect of time of day on driver's injury severity at highway-rail grade crossings in the United States. Journal of Traffic and Transportation Engineering (English Edition) 3, 37–50.
+ Jahangiri, A., Rakha, H., Dingus, T.A., 2016. Red-light running violation prediction using observational and simulator data. Accident Analysis & Prevention 96, 316–328.
+ Jeong, H., Liu, Y., 2017. Horizontal curve driving performance and safety affected by road geometry and lead vehicle, in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, SAGE Publications Sage CA: Los Angeles, CA. pp. 1629–1633.
+ Karlaftis, M.G., Golias, I., 2002. Effects of road geometry and traffic volumes on rural roadway accident rates. Accident Analysis & Prevention 34, 357–365.
+ Khan, G., Bill, A.R., Chitturi, M.V., Noyce, D.A., 2013. Safety evaluation of horizontal curves on rural undivided roads. Transportation Research Record 2386, 147–157.
+ Kitali, A.E., Alluri, P., Sando, T., Haule, H., Kidando, E., Lentz, R., 2018. Likelihood estimation of secondary crashes using Bayesian complementary log-log model. Accident Analysis & Prevention 119, 58–67.
+ Kotsiantis, S.B., Zaharakis, I., Pintelas, P., et al., 2007. Supervised machine learning: A review of classification techniques. Emerging Artificial Intelligence Applications in Computer Engineering 160, 3–24.
+ Kuhnert, P.M., Do, K.A., McClure, R., 2000. Combining non-parametric models with logistic regression: an application to motor vehicle injury data. Computational Statistics & Data Analysis 34, 371–386.
+ Langley, J., Mullin, B., Jackson, R., Norton, R., 2000. Motorcycle engine size and risk of moderate to fatal injury from a motorcycle crash. Accident Analysis & Prevention 32, 659–663.
+ Levine, N., Kim, K.E., Nitz, L.H., 1995. Spatial analysis of Honolulu motor vehicle crashes: I. Spatial patterns. Accident Analysis & Prevention 27, 663–674.
+ Lu, H., Ma, X., 2020. Hybrid decision tree-based machine learning models for short-term water quality prediction. Chemosphere 249, 126169.
+ Lundberg, S.M., Lee, S.I., 2017. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30.
+ Maimon, O., Rokach, L., 2005. Data Mining and Knowledge Discovery Handbook.
+ Moeinaddini, M., Asadi-Shekari, Z., Shah, M.Z., 2014. The relationship between urban street networks and the number of transport fatalities at the city level. Safety Science 62, 114–120.
+ Moeinaddini, M., Asadi-Shekari, Z., Sultan, Z., Shah, M.Z., 2015. Analyzing the relationships between the number of deaths in road accidents and the work travel mode choice at the city level. Safety Science 72, 249–254.
+ Movig, K.L., Mathijssen, M., Nagel, P., Van Egmond, T., De Gier, J.J., Leufkens, H., Egberts, A.C., 2004. Psychoactive substance use and the risk of motor vehicle accidents. Accident Analysis & Prevention 36, 631–636.
+ Pandya, R., Pandya, J., 2015. C5.0 algorithm to improved decision tree with feature selection and reduced error pruning. International Journal of Computer Applications 117, 18–21.
+ Petridou, E., Moustaki, M., 2000. Human factors in the causation of road traffic crashes. European Journal of Epidemiology 16, 819–826.
+ Qin, X., Ivan, J.N., Ravishanker, N., 2004. Selecting exposure measures in crash rate prediction for two-lane highway segments. Accident Analysis & Prevention 36, 183–191.
+ Qin, X., Ivan, J.N., Ravishanker, N., Liu, J., Tepas, D., 2006. Bayesian estimation of hourly exposure functions by crash type and time of day. Accident Analysis & Prevention 38, 1071–1080.
+ Quinlan, J.R., 1996. Improved use of continuous attributes in C4.5. Journal of Artificial Intelligence Research 4, 77–90.
+ Quinlan, J.R., 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers Inc., San Francisco, USA.
+ Rengarasu, T.M., Hagiwara, T., Hirasawa, M., 2007. Effects of road geometry and season on head-on and single-vehicle collisions on rural two-lane roads in Hokkaido, Japan. Journal of the Eastern Asia Society for Transportation Studies 7, 2860–2872.
+ Richter, M., Pape, H.C., Otte, D., Krettek, C., 2005. Improvements in passive car safety led to decreased injury severity: a comparison between the 1970s and 1990s. Injury 36, 484–488.
+ Schneider IV, W.H., Savolainen, P.T., Moore, D.N., 2010. Effects of horizontal curvature on single-vehicle motorcycle crashes along rural two-lane highways. Transportation Research Record 2194, 91–98.
+ Shankar, V., Milton, J., Mannering, F., 1997. Modeling accident frequencies as zero-altered probability processes: an empirical inquiry. Accident Analysis & Prevention 29, 829–837.
+ Tavakoli Kashani, A., Shariat-Mohaymany, A., Ranjbari, A., 2011. A data mining approach to identify key factors of traffic injury severity. PROMET - Traffic & Transportation 23, 11–17.
+ Vayalamkuzhi, P., Amirthalingam, V., 2016. Influence of geometric design characteristics on safety under heterogeneous traffic flow. Journal of Traffic and Transportation Engineering (English Edition) 3, 559–570.
+ Wang, B., Hallmark, S., Savolainen, P., Dong, J., 2017. Crashes and near-crashes on horizontal curves along rural two-lane highways: Analysis of naturalistic driving data. Journal of Safety Research 63, 163–169.
+ Wei, F., Lovegrove, G., 2013. An empirical tool to evaluate the safety of cyclists: Community based, macro-level collision prediction models using negative binomial regression. Accident Analysis & Prevention 61, 129–137.
+ WHO, 2018. Road traffic injuries. URL: http://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries.
+ Witten, I.H., Frank, E., Hall, M.A., 2011. Data Mining: Practical Machine Learning Tools and Techniques. Elsevier Inc., United States.
+ Yang, Z., Zhibin, L., Pan, L., Liteng, Z., 2011. Exploring contributing factors to crash injury severity at freeway diverge areas using ordered probit model. Procedia Engineering 21, 178–185.
+ Yotsutsuji, H., Kita, H., Xing, J., Hirai, S., 2017. A car-accident rate index for curved roads: A speed choice-based approach. Transportation Research Procedia 25, 2108–2118.
+ Zhang, X.F., Fan, L., 2013. A decision tree approach for traffic accident analysis of Saskatchewan highways, in: 2013 26th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), IEEE. pp. 1–4.
+ Zhu, M., Li, Y., Wang, Y., 2018. Design and experiment verification of a novel analysis framework for recognition of driver injury patterns: From a multi-class classification perspective. Accident Analysis & Prevention 120, 152–164.
+
O9AzT4oBgHgl3EQfzf7E/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
ONFLT4oBgHgl3EQfOy8c/content/tmp_files/2301.12025v1.pdf.txt ADDED
@@ -0,0 +1,1841 @@
+ Cross-Architectural Positive Pairs improve the effectiveness of Self-Supervised Learning
+ Pranav Singh 1 Jacopo Cirrone 2
+ Abstract
+ Existing self-supervised techniques have extreme computational requirements and suffer a substantial drop in
+ performance with a reduction in batch size or pretraining epochs. This paper presents Cross Architectural -
+ Self Supervision (CASS), a novel self-supervised learning approach that leverages Transformer and CNN
+ simultaneously. Compared to the existing state-of-the-art self-supervised learning approaches, we empirically
+ show that CASS-trained CNNs and Transformers across four diverse datasets gained an average of 3.8% with
+ 1% labeled data, 5.9% with 10% labeled data, and 10.13% with 100% labeled data while taking 69% less time.
+ We also show that CASS is much more robust to changes in batch size and training epochs than existing
+ state-of-the-art self-supervised learning approaches. We have open-sourced our code at
+ https://github.com/pranavsinghps1/CASS.
24
+ 1. Introduction
25
+ Self-supervised learning has emerged as a powerful
26
+ paradigm for learning representations that can be used
27
+ for various downstream tasks like classification, object de-
28
+ tection, and image segmentation. Pretraining with self-
29
+ supervised techniques is label-free, allowing us to train
30
+ even on unlabeled images. This is especially useful in fields
31
+ with limited labeled data availability or if the cost and effort
32
+ required to provide annotations are high. Medical Imaging
33
+ is one field that can benefit from applying self-supervised
34
+ techniques. Medical imaging is a field characterized by min-
35
+ imal data availability. First, data labeling typically requires
36
+ domain-specific knowledge. Therefore, the requirement of
37
+ 1Department of Computer Science, Tandon School of En-
38
+ gineering, New York University, New York, NY 11202, USA
39
+ 2Center for Data Science, New York University, and Colton
40
+ Center for Autoimmunity, NYU Grossman School of Medicine,
41
+ New York, NY 10011, USA. Correspondence to: Pranav Singh
42
43
+ Preliminary work. Under review. Copyright 2023 by the author(s).
44
+ large-scale clinical supervision may be cost and time pro-
45
+ hibitive. Second, due to patient privacy, disease prevalence,
46
+ and other limitations, it is often difficult to release imag-
47
+ ing datasets for secondary analysis, research, and diagnosis.
48
+ Third, due to an incomplete understanding of diseases. This
49
+ could be either because the disease is emerging or because
50
+ no mechanism is in place to systematically collect data about
51
+ the prevalence and incidence of the disease. An example
52
+ of the former is COVID-19 when despite collecting chest
53
+ X-ray data spanning decades, the samples lacked data for
54
+ COVID-19 (Sriram et al., 2021). An example of the latter
55
+ is autoimmune diseases. Statistically, autoimmune diseases
56
+ affect 3% of the US population or 9.9 million US citizens.
57
+ There are still major outstanding research questions for au-
58
+ toimmune diseases regarding the presence of different cell
59
+ types and their role in inflammation at the tissue level. The
60
+ study of autoimmune diseases is critical because autoim-
61
+ mune diseases affect a large part of society and because
62
+ these conditions have been on the rise recently (Galeotti &
63
+ Bayry, 2020; Lerner et al., 2015; Ehrenfeld et al., 2020).
64
+ Other fields like cancer and MRI image analysis have bene-
65
+ fited from the application of artificial intelligence (AI). But
66
+ for autoimmune diseases, the application of AI is partic-
67
+ ularly challenging due to minimal data availability, with
68
+ the median dataset size for autoimmune diseases between
69
+ 99-540 samples (Tsakalidou et al., 2022; Stafford et al.,
70
+ 2020).
71
+ To overcome the limited availability of annotations, we turn
72
+ to self-supervised learning. Models extract representations
73
+ that can be fine-tuned even with a small amount of labeled
74
+ data for various downstream tasks (Sriram et al., 2021).
75
+ As a result, this learning approach avoids the relatively ex-
76
+ pensive and human-intensive task of data annotation. But
77
+ self-supervised learning techniques suffer when limited data
78
+ is available, especially in cases where the entire dataset size
79
+ is smaller than the peak performing batch size for some
80
+ of the leading self-supervised techniques. This calls for a
81
+ reduction in the batch size; this again causes existing self-
82
+ supervised techniques to drop performance; for example,
83
+ state-of-the-art DINO (Caron et al., 2021) drops classifi-
84
+ cation performance by 25% when trained with batch size
85
+ 8. Furthermore, existing self-supervised techniques are
86
+ compute-intensive and trained using multiple GPU servers
87
+ arXiv:2301.12025v1 [cs.CV] 27 Jan 2023
88
+ over multiple days. This makes them inaccessible to general practitioners.
+ Existing approaches in the field of self-supervised learning rely purely on Convolutional Neural Networks
+ (CNNs) or Transformers as the feature extraction backbone and learn feature representations by teaching the
+ network to compare the extracted representations. Instead, we propose to combine a CNN and Transformer in
+ a response-based contrastive method. In CASS, the extracted representations of each input image are compared
+ across two branches representing each architecture (see Figure 1). By transferring features sensitive to
+ translation equivariance and locality from CNN to Transformer, our proposed approach, CASS, learns more
+ predictive data representations in limited data scenarios where a Transformer-only model cannot find them.
+ We studied this quantitatively and qualitatively in Section 5. Our contributions are as follows:
+ • We introduce Cross Architectural - Self Supervision (CASS), a hybrid CNN-Transformer approach for
+ learning improved data representations in a self-supervised setting for limited data availability problems in the
+ medical image analysis domain. 1
+ • We propose the use of CASS for analysis of autoimmune diseases such as dermatomyositis and demonstrate
+ an improvement of 2.55% compared to the existing state-of-the-art self-supervised approaches. To our
+ knowledge, the autoimmune dataset contains 198 images and is the smallest known dataset for self-supervised
+ learning.
+ • Since our focus is to study self-supervised techniques in the context of medical imaging, we evaluate CASS
+ on three challenging medical image analysis problems (autoimmune disease cell classification, brain tumor
+ classification, and skin lesion classification) on three public datasets (Dermofit Project Dataset (Fisher & Rees,
+ 2017), brain tumor MRI Dataset (Cheng, 2017; Kang et al., 2021), and ISIC 2019 (Tschandl et al., 2018;
+ Gutman et al., 2018; Combalia et al., 2019)) and find that CASS improves classification performance (F1 score
+ and recall value) over the existing state-of-the-art self-supervised techniques by an average of 3.8% using 1%
+ label fractions, 5.9% with 10% label fractions, and 10.13% with 100% label fractions.
+ • Existing methods also suffer a severe drop in performance when trained for a reduced number of epochs or a
+ reduced batch size (Caron et al., 2021; Grill et al., 2020b; Chen et al., 2020a). We show that CASS is robust to
+ these changes in Sections 5.3.2 and 5.3.1.
+ • New state-of-the-art self-supervised techniques often have significant computational requirements. This is a
+ major hurdle, as these methods can take around 20 GPU days to train (Azizi et al., 2021b). This makes them
+ inaccessible in limited computational resource settings. CASS, on average, takes 69% less time than the
+ existing state-of-the-art methods. We further expand on this result in Section 5.2.
+ 1 We have open-sourced our code at https://github.com/pranavsinghps1/CASS
+ 2. Background
+ 2.1. Neural Network Architectures for Image Analysis
+ CNNs are a popular architecture of choice for many image analysis applications (Khan et al., 2020). CNNs
+ learn more abstract visual concepts with a gradually increasing receptive field. They have two favorable
+ inductive biases: (i) translation equivariance, resulting in the ability to learn equally well with shifted object
+ positions, and (ii) locality, resulting in the ability to capture pixel-level closeness in the input data. CNNs have
+ been used for many medical image analysis applications, such as disease diagnosis (Yadav & Jadhav, 2019) or
+ semantic segmentation (Ronneberger et al., 2015). To address the requirement of additional context for a more
+ holistic image understanding, the Vision Transformer (ViT) architecture (Dosovitskiy et al., 2020) has been
+ adapted to images from language-related tasks and has recently gained popularity (Liu et al., 2021b; 2022a;
+ Touvron et al., 2021). In a ViT, the input image is split into patches that are treated as tokens in a self-attention
+ mechanism. Compared to CNNs, ViTs can capture additional image context but lack ingrained inductive biases
+ of translation and location. As a result, ViTs typically outperform CNNs on larger datasets (d'Ascoli et al., 2021).
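The patch-tokenization step of a ViT described above can be sketched in a few lines. This is a simplified illustration in plain NumPy, omitting the learned linear projection and position embeddings that a real ViT adds; the 224×224 input and 16×16 patches are common ViT defaults, assumed here for concreteness.

```python
import numpy as np

def image_to_patch_tokens(img, patch=16):
    """Split an (H, W, C) image into N flattened patch tokens,
    mimicking the tokenization step of a Vision Transformer."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    # (H/P, P, W/P, P, C) -> (H/P, W/P, P, P, C) -> (N, P*P*C)
    x = img.reshape(H // patch, patch, W // patch, patch, C)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, patch * patch * C)

tokens = image_to_patch_tokens(np.zeros((224, 224, 3)))
print(tokens.shape)  # (196, 768): 14x14 tokens, each of dimension 16*16*3
```

In a real ViT, each 768-dimensional token would then pass through a learned linear embedding before entering the self-attention layers.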
2.1.1. CROSS-ARCHITECTURE TECHNIQUES

Cross-architecture techniques aim to combine the features of CNNs and Transformers; they can be classified into two categories: (i) hybrid cross-architecture techniques and (ii) pure cross-architecture techniques. Hybrid cross-architecture techniques combine parts of CNNs and Transformers in some capacity, allowing architectures to learn unique representations. ConViT (d'Ascoli et al., 2021) combines CNNs and ViTs using gated positional self-attention (GPSA) to create a soft convolutional inductive bias and improve upon the capabilities of Transformers alone. More recently, the training regimes and inferences from ViTs have been used to design a new family of convolutional architectures, ConvNeXt (Liu et al., 2022b), outperforming benchmarks set by ViTs in classification tasks. (Li et al., 2021) further simplified the procedure to create an optimal CNN-Transformer using their self-supervised Neural Architecture Search (NAS) approach.
Cross-Architectural Positive Pairs improve the effectiveness of Self-Supervised Learning
On the other hand, pure cross-architecture techniques combine CNNs and Transformers without any changes to their architecture to help both of them learn better representations. (Gong et al., 2022) used CNN and Transformer pairs in a consistent-teaching knowledge distillation format for audio classification and showed that cross-architecture distillation makes distilled models less prone to overfitting and also improves robustness. Compared with CNN-attention hybrid models, cross-architecture knowledge distillation is more effective and does not require any model architecture change. Similarly, (Guo et al., 2022) used a 3D CNN and a Transformer to learn strong representations and proposed a self-supervised learning module to predict the edit distance between two video sequences in temporal order. Although their approach showed encouraging results on two datasets, it relies on both positive and negative pairs. Furthermore, their proposed approach depends on batch statistics.
2.2. Self-Supervised Learning

Most existing self-supervised techniques can be classified into contrastive and reconstruction-based techniques. Traditionally, contrastive self-supervised techniques have been trained by reducing the distance between representations of different augmented views of the same image ('positive pairs') and increasing the distance between representations of augmented views from different images ('negative pairs') (He et al., 2020; Chen et al., 2020b; Caron et al., 2020b). But this is highly memory intensive, as we need to track positive and negative pairs. Recently, Bootstrap Your Own Latent (BYOL) (Grill et al., 2020b) and DINO (Caron et al., 2021) have improved upon this approach by eliminating the memory banks. The premise of using negative pairs is to avoid collapse, and several strategies have been developed to avoid collapse without them: BYOL uses a momentum encoder, Simple Siamese (SimSiam) (Chen & He, 2021) a stop-gradient, and DINO the counterbalancing effects of sharpening and centering. Techniques relying only on positive pairs are much more efficient than those using both positive and negative pairs. Recently, there has been a surge in reconstruction-based self-supervised pretraining methods with the introduction of MSN (Assran et al., 2022b) and MAE (He et al., 2021). These methods learn semantic knowledge of the image by masking a part of it and then predicting the masked portion.
2.2.1. SELF-SUPERVISED LEARNING AND MEDICAL IMAGE ANALYSIS

ImageNet is most commonly used for benchmarking and comparing self-supervised techniques. ImageNet is a balanced dataset that is not representative of real-world data, especially in the field of medical imaging, which is characterized by class imbalance. Self-supervised methods that use batch-level statistics have been found to drop a significant amount of performance in image classification tasks when trained on ImageNet with artificially induced class imbalance (Assran et al., 2022a). This prior of some self-supervised techniques like MSN (Assran et al., 2022b), SimCLR (Chen et al., 2020a), and VICReg (Bardes et al., 2021) limits their applicability on imbalanced datasets, especially in the case of medical imaging.
Existing self-supervised techniques typically require large batch sizes and datasets. When these conditions are not met, a marked reduction in performance is observed (Caron et al., 2021; Chen et al., 2020a; Caron et al., 2020a; Grill et al., 2020b). Self-supervised learning approaches are practical in big-data medical applications (Ghesu et al., 2022; Azizi et al., 2021a), such as analysis of dermatology and radiology imaging. In more limited data scenarios (3,662 to 25,333 images), Matsoukas et al. (2021) reported that ViTs outperform their CNN counterparts when self-supervised pre-training is followed by supervised fine-tuning, and that transfer learning favors ViTs under standard training protocols and settings. Their study included running the DINO (Caron et al., 2021) self-supervised method over 300 epochs with a batch size of 256. However, questions remain about the accuracy and efficiency of applying existing self-supervised techniques to datasets whose entire size is smaller than their peak-performance batch size. Also, viewing this from the perspective of a general practitioner with limited computational power raises the question of how we can make practical self-supervised approaches more accessible. Adoption and faster development of self-supervised paradigms will only be possible when they become easy to plug and play with limited computational power.
In this work, we explore these questions by designing CASS, a novel self-supervised approach developed with the core values of efficiency and effectiveness. In simple terms, we combine a CNN and a Transformer in a response-based contrastive method, reducing the distance between their output representations to combine the abilities of the two architectures. This approach was initially designed for a 198-image dataset of muscle biopsies of inflammatory lesions from patients with dermatomyositis, an autoimmune disease. The benefits of this approach are illustrated by the challenges in diagnosing autoimmune diseases due to their rarity, limited data availability, and heterogeneous features. Consequently, misdiagnoses are common, and the resulting diagnostic delay plays a significant role in their high mortality rate. Autoimmune diseases share commonalities with COVID-19 regarding clinical manifestations, immune responses, and pathogenic mechanisms. Moreover, some patients have developed autoimmune diseases after COVID-19 infection (Liu et al., 2020). Despite this increasing prevalence, the representation of autoimmune diseases in medical imaging and deep learning is limited.
3. Methodology

We start by motivating our method before explaining it in detail (Section 3.1). Self-supervised methods have used different augmentations of the same image to create positive pairs, which are then passed through the same architecture but with a different set of parameters (Grill et al., 2020b). In (Caron et al., 2021), the authors introduced image crops of different sizes to add local and global information. They also used different operators and techniques to avoid collapse, as described in Section 2.2.

But there is another way to create positive pairs: through architectural differences. (Raghu et al., 2021) suggested in their study that, for the same input, Transformers and CNNs extract different representations. They conducted their study by analyzing the CKA (Centered Kernel Alignment) for CNNs and Transformers, using the ResNet (He et al., 2016) and ViT (Vision Transformer) (Dosovitskiy et al., 2020) families of encoders, respectively. They found that Transformers have a more uniform representation across all layers compared to CNNs. Transformers also have self-attention, enabling global information aggregation from shallow layers, and skip connections that connect lower layers to higher layers, promising information transfer. Hence, lower and higher layers in Transformers show much more similarity than in CNNs. The receptive field of lower layers is more extensive for Transformers than for CNNs; while this receptive field grows gradually for CNNs, it becomes global for Transformers around the midway point. Transformers do not attend locally in their earlier layers, while CNNs do, and using local information early is essential for strong performance. CNNs have a more centered receptive field as opposed to the more globally spread receptive field of Transformers. Hence, representations drawn from the same input will differ between Transformers and CNNs. Until now, self-supervised techniques have used only one kind of architecture at a time, either a CNN or a Transformer. But the differences in the representations learned by CNNs and Transformers inspired us to create positive pairs from different architectures, or feature extractors, rather than from a different set of augmentations. This, by design, avoids collapse, as the two architectures will never give exactly the same representation as output. By contrasting their extracted features at the end, we hope to help the Transformer learn representations from the CNN and vice versa. This should help both architectures learn better representations and pick up patterns they would otherwise miss. We verify this by studying attention maps and feature maps from supervised and CASS-trained CNNs and Transformers in Appendix C.4 and Section 5.3.3. We observed that the CASS-trained CNN and Transformer retained much more detail about the input image than pure CNNs and Transformers.
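As background, the linear variant of CKA used in such layer-similarity analyses can be sketched as follows. This is a minimal illustration, not the exact minibatch estimator used by Raghu et al.:

```python
import numpy as np

def linear_cka(x, y):
    """Linear Centered Kernel Alignment between two sets of representations.

    x: (n, d1), y: (n, d2) -- activations of two layers on the same n inputs.
    Returns a similarity in [0, 1]; 1 means identical up to rotation/scale.
    """
    x = x - x.mean(axis=0)  # center each feature dimension
    y = y - y.mean(axis=0)
    num = np.linalg.norm(y.T @ x, "fro") ** 2
    den = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return float(num / den)
```

Linear CKA is invariant to isotropic scaling and orthogonal rotation of the features, which is why it is suited to comparing layers with different widths.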
3.1. Description of CASS

CASS' goal is to extract and learn representations in a self-supervised way. To achieve this, an image is passed through a common set of augmentations. The augmented image is then simultaneously passed through a CNN and a Transformer to create a positive pair. The output logits from the CNN and Transformer are then used to compute a cosine-similarity loss (Equation 1). This is the same loss function as used in BYOL (Grill et al., 2020a). Furthermore, the intuition of CASS is very similar to that of BYOL. In BYOL, to avoid collapse to a trivial solution, the target and online arms are parameterized differently, and an additional predictor is used with the online arm. The authors compared this setup to that of GANs, where joint optimization of both arms to a common value is impossible due to the differences between the arms. Analogously, in CASS, instead of adding an MLP on top of one of the arms and parameterizing them differently, we use two fundamentally different architectures. Since the two architectures give different output representations, as noted in (Raghu et al., 2021), the model does not collapse. Additionally, to avoid collapse we introduce a condition: if the outputs of the CNN and Transformer are identical, artificial noise sampled from a Gaussian distribution is added to the model outputs, making the loss non-zero. We also report results for CASS using different sets of CNNs and Transformers in Appendix B.6 and Section 5, and not a single case of model collapse was registered.
$$\mathrm{loss} = 2 - 2 \times F(R) \cdot F(T) \qquad (1)$$

$$\text{where}\quad F(x) = \sum_{i=1}^{N} \frac{x}{\max\left(\lVert x \rVert_2,\ \epsilon\right)}$$
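A minimal sketch of this loss and the tie-breaking Gaussian noise, written with NumPy over hypothetical logit arrays (the actual implementation operates on framework tensors):

```python
import numpy as np

def cass_loss(cnn_logits, vit_logits, eps=1e-8, noise_std=1e-4):
    """Cosine-similarity loss between CNN and Transformer logits (Equation 1).

    If the two outputs coincide exactly, Gaussian noise is added so the
    loss stays non-zero, as described in Section 3.1. Shapes: (N, D).
    """
    if np.array_equal(cnn_logits, vit_logits):
        vit_logits = vit_logits + np.random.normal(0.0, noise_std, vit_logits.shape)
    # F(x): L2-normalize each row, guarding against division by zero with eps
    r = cnn_logits / np.maximum(np.linalg.norm(cnn_logits, axis=1, keepdims=True), eps)
    t = vit_logits / np.maximum(np.linalg.norm(vit_logits, axis=1, keepdims=True), eps)
    # loss = 2 - 2 * cosine similarity, averaged over the batch
    return float(np.mean(2.0 - 2.0 * np.sum(r * t, axis=1)))
```

The loss is 0 for perfectly aligned outputs and reaches its maximum of 4 for anti-aligned ones, so minimizing it pulls the two architectures' responses together.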
We use the same parameters for the optimizer and learning schedule for both architectures. We use stochastic weight averaging (SWA) (Izmailov et al., 2018) with the Adam optimizer and a learning rate of 1e-3. For the learning rate, we use a cosine schedule with a maximum of 16 iterations and a minimum value of 1e-6. ResNets are typically trained with stochastic gradient descent (SGD), and our use of the Adam optimizer is quite unconventional. Furthermore, unlike existing self-supervised techniques, there is no parameter sharing between the two architectures.
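The stated schedule corresponds to the standard cosine-annealing formula (as in PyTorch's CosineAnnealingLR); a stand-alone sketch, assuming the usual closed form with T_max = 16 and the bounds above:

```python
import math

def cosine_lr(step, lr_max=1e-3, lr_min=1e-6, t_max=16):
    # Standard cosine annealing: starts at lr_max, decays smoothly to lr_min at t_max
    step = min(step, t_max)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / t_max))
```

At step 0 this gives the full learning rate 1e-3, at the halfway point roughly the midpoint of the two bounds, and at step 16 the floor of 1e-6.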
We compare CASS against the state-of-the-art self-supervised technique DINO (DIstillation with NO labels). This choice was made based on two conditions: (i) as explained in Section 2.2.1, some self-supervised techniques use batch-level statistics that make them less suitable for imbalanced datasets, and imbalanced datasets are a feature of medical imaging; (ii) the self-supervised technique should be benchmarked for both CNNs and Transformers, as both architectures have interesting properties and, a priori, it is difficult to predict which architecture will perform better.
In Figure 1, we show CASS on top and DINO (Caron et al., 2021) at the bottom. Comparing the two, CASS does not use any of the extra mathematical treatment used in DINO to avoid collapse, such as centering and applying the softmax function to the output of its student and teacher networks. We also provide an ablation study using a softmax and a sigmoid layer for CASS in Appendix B. After training for one cycle, DINO yields only one kind of trained architecture, whereas CASS provides two trained architectures (one CNN and one Transformer). CASS-pre-trained architectures perform better than DINO-pre-trained architectures in most cases, as further elaborated in Section 5.
Figure 1. (Top) In our proposed self-supervised architecture, CASS, R represents ResNet-50, a CNN, and T in the other box represents the Transformer used (ViT); X is the input image, which becomes X' after applying augmentations. Note that CASS applies only one set of augmentations to create X'. X' is passed through both arms to compute the loss, as in Equation 1. This differs from DINO, which passes different augmentations of the same image through networks with the same architecture but different parameters. The output of the teacher network is centered on a mean computed over the batch. Another key difference is that in CASS the loss is computed over logits, while in DINO it is computed over softmax outputs.
4. Experimental Details

4.1. Datasets

We split the datasets into three splits, training, validation, and testing, following the 70/10/20 split strategy unless specified otherwise. We further expand upon our thought process for choosing datasets in Appendix C.4.5.
• Autoimmune diseases biopsy slides (Singh & Cirrone, 2022; Van Buren et al., 2022) consists of slides cut from muscle biopsies of dermatomyositis patients, stained with different proteins and imaged to generate a dataset of 198 TIFF images from 7 patients. The presence or absence of these cells helps to diagnose dermatomyositis. Multiple cell classes can be present per image; therefore, this is a multi-label classification problem. Our task here was to classify cells based on their protein staining into TFH-1, TFH-217, TFH-Like, B cells, and others. We used the F1 score as our evaluation metric, as employed in previous works (Singh & Cirrone, 2022; Van Buren et al., 2022). These RGB images have a consistent size of 352 by 469.
• Dermofit dataset (Fisher & Rees, 2017) contains normal RGB images captured with an SLR camera indoors with ring lighting. There are 1,300 image samples, classified into 10 classes: Actinic Keratosis (AK), Basal Cell Carcinoma (BCC), Melanocytic Nevus / Mole (ML), Squamous Cell Carcinoma (SCC), Seborrhoeic Keratosis (SK), Intraepithelial Carcinoma (IEC), Pyogenic Granuloma (PYO), Haemangioma (VASC), Dermatofibroma (DF), and Melanoma (MEL). This dataset comprises images of different sizes, and no two images are of the same size; they range from 205×205 to 1020×1020. The task is multi-class classification, and we use the F1 score as our evaluation metric on this dataset.
• Brain tumor MRI dataset (Cheng, 2017; Amin et al., 2022) contains 7,022 images of human brain MRI, classified into four classes: glioma, meningioma, no tumor, and pituitary. This dataset combines the Br35H: Brain Tumor Detection 2020 dataset, used in "Retrieval of Brain Tumors by Adaptive Spatial Pooling and Fisher Vector Representation", and a brain tumor classification dataset curated by Navoneel Chakrabarty and Swati Kanchan. The dataset curators created the training and testing splits, and we followed them: 5,712 images for training and 1,310 for testing. Since this is a combination of multiple datasets, image size varies throughout, from 512×512 to 219×234. The task is multi-class classification, and we used the F1 score as the metric.
• ISIC 2019 (Tschandl et al., 2018; Gutman et al., 2018; Combalia et al., 2019) consists of 25,331 images across eight different categories: melanoma (MEL), melanocytic nevus (NV), basal cell carcinoma (BCC), actinic keratosis (AK), benign keratosis (BKL), dermatofibroma (DF), vascular lesion (VASC), and squamous cell carcinoma (SCC). This dataset contains images of size 600×450 and 1024×1024. The distribution of these labels is unbalanced across different
Techniques | Backbone | Testing F1 score (10%) | Testing F1 score (100%)
DINO | ResNet-50 | 0.8237±0.001 | 0.84252±0.008
CASS | ResNet-50 | 0.8158±0.0055 | 0.8650±0.0001
Supervised | ResNet-50 | 0.819±0.0216 | 0.83895±0.007
DINO | ViT B/16 | 0.8445±0.0008 | 0.8639±0.002
CASS | ViT B/16 | 0.8717±0.005 | 0.8894±0.005
Supervised | ViT B/16 | 0.8356±0.007 | 0.8420±0.009

Table 1. Results for the autoimmune biopsy slides dataset. In this table, we compare the F1 score on the test set. We observed that CASS outperformed the existing state-of-the-art self-supervised method using 100% of labels, for the CNN as well as for the Transformer, although DINO outperforms CASS for the CNN with the 10% labeled fraction. Overall, CASS outperforms DINO by 2.2% for 100% labeled training for both CNN and Transformer. For Transformers with 10% labeled training, CASS' performance was 2.7% better than DINO's.
Dataset | DINO | CASS
Autoimmune | 1 H 13 M | 21 M
Dermofit | 3 H 9 M | 1 H 11 M
Brain MRI | 26 H 21 M | 7 H 11 M
ISIC-2019 | 109 H 21 M | 29 H 58 M

Table 2. Self-supervised pretraining time comparison for 100 epochs on a single RTX8000 GPU. In this table, H represents hour(s) and M represents minute(s).
classes. For evaluation, we followed the metric used in the official competition, i.e., balanced multi-class accuracy, which is semantically equal to recall.
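For reference, balanced multi-class accuracy is the unweighted mean of per-class recall; a minimal sketch:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall: each class contributes equally regardless of its size."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = []
    for c in np.unique(y_true):
        mask = y_true == c
        recalls.append(float(np.mean(y_pred[mask] == c)))  # recall for class c
    return float(np.mean(recalls))
```

Because every class is weighted equally, a classifier that only predicts the majority class scores poorly, which is why this metric suits imbalanced datasets like ISIC 2019.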
4.2. Self-supervised learning

We studied and compared results between DINO- and CASS-pre-trained self-supervised CNNs and Transformers. For both, we trained from ImageNet initialization (Matsoukas et al., 2021) for 100 epochs with a batch size of 16. We ran these experiments on an internal cluster with a single GPU unit (NVIDIA RTX8000) with 48 GB of video RAM, 2 CPU cores, and 64 GB of system RAM.

For DINO, we used the hyperparameters and augmentations mentioned in the original implementation. For CASS, we describe the experimental details in Appendix C.5.
4.3. End-to-end fine-tuning

To evaluate the utility of the learned representations, we use the self-supervised pre-trained weights for downstream classification tasks. During downstream fine-tuning, we fine-tune the entire model (E2E fine-tuning). The test-set metrics were used as proxies for representation quality. We trained the entire model for a maximum of 50 epochs with an early-stopping patience of 5 epochs. For supervised fine-tuning, we used the Adam optimizer with a cosine-annealing learning rate starting at 3e-4. Since almost all medical datasets have some class imbalance, we applied class-distribution-normalized focal loss (Lin et al., 2017) to navigate class imbalance.
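A rough sketch of one plausible reading of a class-distribution-normalized focal loss, where the per-class weight alpha is taken as normalized inverse class frequency; that particular normalization is our assumption, not a detail stated in the text:

```python
import numpy as np

def focal_loss(probs, labels, class_counts, gamma=2.0, eps=1e-12):
    """Focal loss with per-class weights normalized by training-set class frequency.

    probs: (N, C) predicted class probabilities; labels: (N,) integer targets;
    class_counts: per-class sample counts from the training set (assumption).
    """
    alpha = 1.0 / np.asarray(class_counts, dtype=float)  # rarer class -> larger weight
    alpha = alpha / alpha.sum()                          # normalize weights to sum to 1
    p_t = probs[np.arange(len(labels)), labels]          # probability of the true class
    loss = -alpha[labels] * (1.0 - p_t) ** gamma * np.log(np.maximum(p_t, eps))
    return float(loss.mean())
```

With gamma = 0 and uniform class counts this reduces to a weighted cross-entropy; larger gamma progressively down-weights well-classified samples.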
We fine-tune the models using different label fractions during E2E fine-tuning, i.e., 1%, 10%, and 100% label fractions. For example, if a model is trained with a 10% label fraction, then that model has access to only 10% of the training dataset samples and their corresponding labels during E2E fine-tuning, after initializing weights using the CASS or DINO pretraining.
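One way to realize such a label fraction is a stratified per-class subsample of the training indices, preserving the class balance; a sketch under that assumption (the exact sampling procedure is not specified in the text):

```python
import random
from collections import defaultdict

def label_fraction_indices(labels, fraction, seed=0):
    """Return indices of a stratified subset covering `fraction` of the data."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    chosen = []
    for y, idxs in by_class.items():
        rng.shuffle(idxs)
        k = max(1, round(len(idxs) * fraction))  # keep at least one sample per class
        chosen.extend(idxs[:k])
    return sorted(chosen)
```

Fixing the seed makes the subset reproducible across the CASS and DINO fine-tuning runs being compared.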
5. Results and Discussion

5.1. Compute and Time Analysis

We ran all experiments on a single NVIDIA RTX8000 GPU with 48 GB of video memory. In Table 2, we compare the cumulative training times for self-supervised training of a CNN and a Transformer with DINO and CASS. We observed that CASS took, on average, 69% less time than DINO. Another point to note is that CASS trains the two architectures simultaneously, in a single pass, while training a CNN and a Transformer with DINO takes two separate passes.

5.2. Results on the four medical imaging datasets

We did not perform 1% fine-tuning for the autoimmune diseases biopsy slides dataset of 198 images, because 1% of the images would be too small a number to learn anything meaningful, and the results would be highly randomized. Similarly, we did not perform 1% fine-tuning for the Dermofit dataset, as a training set of just 10 samples is too small to draw meaningful results from. We present the results on the four medical imaging datasets in Tables 1, 3, 4, and 5. From these tables, we observe that CASS improves upon the classification performance of the existing state-of-the-art self-supervised method DINO by 3.8% with 1% labeled data, 5.9% with 10% labeled data, and 10.13% with 100% labeled data.
Techniques | Testing F1 score (10%) | Testing F1 score (100%)
DINO (ResNet-50) | 0.3749±0.0011 | 0.6775±0.0005
CASS (ResNet-50) | 0.4367±0.0002 | 0.7132±0.0003
Supervised (ResNet-50) | 0.33±0.0001 | 0.6341±0.0077
DINO (ViT B/16) | 0.332±0.0002 | 0.4810±0.0012
CASS (ViT B/16) | 0.3896±0.0013 | 0.6667±0.0002
Supervised (ViT B/16) | 0.299±0.002 | 0.456±0.0077

Table 3. Results for the Dermofit dataset, comparing the F1 score on the test set. The parenthesis next to each technique gives the architecture used; for example, DINO (ViT B/16) represents ViT B/16 trained with DINO. We observe that CASS outperforms both the supervised baseline and the existing state-of-the-art self-supervised method for all label fractions and for both architectures.
Techniques | Backbone | Testing F1 score (1%) | Testing F1 score (10%) | Testing F1 score (100%)
DINO | ResNet-50 | 0.63405±0.09 | 0.92325±0.02819 | 0.9900±0.0058
CASS | ResNet-50 | 0.40816±0.13 | 0.8925±0.0254 | 0.9909±0.0032
Supervised | ResNet-50 | 0.52±0.018 | 0.9022±0.011 | 0.9899±0.003
DINO | ViT B/16 | 0.3211±0.071 | 0.7529±0.044 | 0.8841±0.0052
CASS | ViT B/16 | 0.3345±0.11 | 0.7833±0.0259 | 0.9279±0.0213
Supervised | ViT B/16 | 0.3017±0.077 | 0.747±0.0245 | 0.8719±0.017

Table 4. Results on the brain tumor MRI classification dataset. While DINO outperformed CASS for 1% and 10% labeled training with the CNN, CASS maintained its superiority for 100% labeled training, albeit by just 0.09%. CASS outperformed DINO for all data regimes with the Transformer: by 1.34% for 1%, 3.04% for 10%, and 4.38% for 100% labeled training. We observe that this margin is more significant than for the biopsy images. Such results could be ascribed to the increase in dataset size and in learnable information.
Techniques | Backbone | Balanced multi-class accuracy (1%) | (10%) | (100%)
DINO | ResNet-50 | 0.328±0.0016 | 0.3797±0.0027 | 0.493±3.9e-05
CASS | ResNet-50 | 0.3617±0.0047 | 0.41±0.0019 | 0.543±2.85e-05
Supervised | ResNet-50 | 0.2640±0.031 | 0.3070±0.0121 | 0.35±0.006
DINO | ViT B/16 | 0.3676±0.012 | 0.3998±0.056 | 0.5408±0.001
CASS | ViT B/16 | 0.3973±0.0465 | 0.4395±0.0179 | 0.5819±0.0015
Supervised | ViT B/16 | 0.3074±0.0005 | 0.3586±0.0314 | 0.42±0.007

Table 5. Results for the ISIC-2019 dataset, comparable to the official metrics used in the challenge (https://challenge.isic-archive.com/landing/2019/). The ISIC-2019 dataset is incredibly challenging, not only because of the class imbalance issue but also because it is made of partially processed and inconsistent images with hard-to-classify classes. We use balanced multi-class accuracy as our metric, which is semantically equal to recall. We observed that CASS consistently outperforms DINO by approximately 4% for all label fractions, with both the CNN and the Transformer.
5.3. Ablation Studies

As mentioned in Section 2.2.1, existing self-supervised methods experience a drop in classification performance when trained for a reduced number of pretraining epochs or with a reduced batch size. We performed ablation studies on the autoimmune dataset to study this effect for CASS- and DINO-pre-trained ResNet-50 and ViT B/16. Additional ablation studies are provided in the Appendix.

5.3.1. CHANGE IN EPOCHS

In this section, we compare the change in performance of models pretrained with CASS and DINO and then E2E-finetuned with 100% of labels on the autoimmune dataset. To study robustness, we compare the mean variance over the CNN and Transformer trained with the two techniques. The recorded mean variance in performance for ResNet-50 and ViT B/16 with change in the number of pretraining epochs is 0.0001791 for CASS and 0.0002265 for DINO. Based on these results, we observed that CASS-trained models have less variance, i.e., they are more robust to changes in the number of pretraining epochs.

Figure 2. In Figure (a), we report the change in performance with respect to the number of pretraining epochs for DINO and CASS, for ResNet-50 and ViT B/16 respectively. In Figure (b), we report the change in performance with respect to the pretraining batch size for DINO and CASS, for ResNet-50 and ViT B/16 respectively. These ablation studies were conducted on the autoimmune dataset, keeping the other hyperparameters the same during pretraining and downstream finetuning.
5.3.2. CHANGE IN BATCH SIZE

Similar to Section 5.3.1, in this section we study the change in performance with respect to batch size. As previously mentioned, existing self-supervised techniques suffer a drop in performance when trained with small batch sizes; we studied the change in performance for batch sizes 8, 16, and 32 on the autoimmune dataset with CASS and DINO. We report these results in Figure 2. We observe that the mean variance in performance for ResNet-50 and ViT B/16 with change in batch size is 5.8432e-5 for CASS and 0.00015003 for DINO. Hence, CASS is much more robust to changes in pretraining batch size than DINO.
5.3.3. ATTENTION MAPS

To study the effect qualitatively, we examine the attention maps of a supervised and a CASS-pre-trained Transformer. From Figure 3, we observe that the attention map of the CASS-pre-trained Transformer is much more connected than that of the supervised Transformer, due to the transfer of locality information from the CNN. We expand on this further in Appendix C.4.

Figure 3. This figure shows the attention maps over a single test sample image from the autoimmune dataset. The left image is the overall attention map over a single test sample for the supervised Transformer, while the one on the right is for the CASS-trained Transformer.
6. Conclusion

Based on our experimentation on four diverse medical imaging datasets, we concluded that CASS improves upon the classification performance of the existing state-of-the-art self-supervised method DINO by 3.8% with 1% labeled data, 5.9% with 10% labeled data, and 10.13% with 100% labeled data, while training in 69% less time. Furthermore, we saw that CASS is robust to batch-size changes and reductions in training epochs. To conclude, for medical image analysis, CASS is computationally efficient, performs better, and overcomes some of the shortcomings of existing self-supervised techniques. This ease of accessibility and better performance will catalyze medical imaging research, helping us improve healthcare solutions and propagate these advancements in state-of-the-art techniques to practical deep
learning in developing countries and to practitioners with limited resources, to develop new solutions for underrepresented and emerging diseases.
Acknowledgements
We would like to thank Prof. Elena Sizikova (Moore Sloan Faculty Fellow, Center for Data Science (CDS), New York University (NYU)) for her valuable feedback, and the NYU HPC team for assisting us with our computational needs.
References
Amin, J., Anjum, M. A., Sharif, M., Jabeen, S., Kadry, S., and Ger, P. M. A new model for brain tumor detection using ensemble transfer learning and quantum variational classifier. Computational Intelligence and Neuroscience, 2022, 2022.
Assran, M., Balestriero, R., Duval, Q., Bordes, F., Misra, I., Bojanowski, P., Vincent, P., Rabbat, M., and Ballas, N. The hidden uniform cluster prior in self-supervised learning. arXiv preprint arXiv:2210.07277, 2022a.
Assran, M., Caron, M., Misra, I., Bojanowski, P., Bordes, F., Vincent, P., Joulin, A., Rabbat, M., and Ballas, N. Masked siamese networks for label-efficient learning. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXI, pp. 456–473. Springer, 2022b.
Azizi, S., Mustafa, B., Ryan, F., Beaver, Z., Freyberg, J., Deaton, J., Loh, A., Karthikesalingam, A., Kornblith, S., Chen, T., et al. Big self-supervised models advance medical image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3478–3488, 2021a.
Azizi, S., Mustafa, B., Ryan, F., Beaver, Z., von Freyberg, J., Deaton, J., Loh, A., Karthikesalingam, A., Kornblith, S., Chen, T., Natarajan, V., and Norouzi, M. Big self-supervised models advance medical image classification. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3458–3468, 2021b.
Bardes, A., Ponce, J., and LeCun, Y. VICReg: Variance-invariance-covariance regularization for self-supervised learning. arXiv preprint arXiv:2105.04906, 2021.
Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., and Joulin, A. Unsupervised learning of visual features by contrasting cluster assignments. ArXiv, abs/2006.09882, 2020a.
Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., and Joulin, A. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33:9912–9924, 2020b.
Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., and Joulin, A. Emerging properties in self-supervised vision transformers. In Proceedings of the International Conference on Computer Vision (ICCV), 2021.
Cassidy, B., Kendrick, C., Brodzicki, A., Jaworek-Korjakowska, J., and Yap, M. H. Analysis of the ISIC image datasets: usage, benchmarks and recommendations. Medical Image Analysis, 75:102305, 2022.
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020a.
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. E. A simple framework for contrastive learning of visual representations. ArXiv, abs/2002.05709, 2020b.
Chen, X. and He, K. Exploring simple siamese representation learning. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15745–15753, 2021.
Cheng, J. Brain tumor dataset. 4 2017. doi: 10.6084/m9.figshare.1512427.v5. URL https://figshare.com/articles/dataset/brain_tumor_dataset/1512427.
Combalia, M., Codella, N. C. F., Rotemberg, V. M., Helba, B., Vilaplana, V., Reiter, O., Halpern, A. C., Puig, S., and Malvehy, J. BCN20000: Dermoscopic lesions in the wild. ArXiv, abs/1908.02288, 2019.
d'Ascoli, S., Touvron, H., Leavitt, M., Morcos, A., Biroli, G., and Sagun, L. ConViT: Improving vision transformers with soft convolutional inductive biases. arXiv preprint arXiv:2103.10697, 2021.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
Ehrenfeld, M., Tincani, A., Andreoli, L., Cattalini, M., Greenbaum, A., Kanduc, D., Alijotas-Reig, J., Zinserling, V., Semenova, N., Amital, H., et al. Covid-19 and autoimmunity. Autoimmunity Reviews, 19(8):102597, 2020.
Fisher, R. and Rees, J. Dermofit project datasets. 2017. URL https://homepages.inf.ed.ac.uk/rbf/DERMOFIT/datasets.htm.
Galeotti, C. and Bayry, J. Autoimmune and inflammatory diseases following covid-19. Nature Reviews Rheumatology, 16(8):413–414, 2020.
Gessert, N., Nielsen, M., Shaikh, M., Werner, R., and Schlaefer, A. Skin lesion classification using ensembles of multi-resolution efficientnets with meta data. MethodsX, 7, 2020.
Ghesu, F. C., Georgescu, B., Mansoor, A., Yoo, Y., Neumann, D., Patel, P., Vishwanath, R., Balter, J. M., Cao, Y., Grbic, S., et al. Self-supervised learning from 100 million medical images. arXiv preprint arXiv:2201.01283, 2022.
Gong, Y., Khurana, S., Rouditchenko, A., and Glass, J. R. CMKD: CNN/Transformer-based cross-model knowledge distillation for audio classification. ArXiv, abs/2203.06760, 2022.
Gou, J., Yu, B., Maybank, S. J., and Tao, D. Knowledge distillation: A survey. International Journal of Computer Vision, 129(6):1789–1819, 2021.
Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al. Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271–21284, 2020a.
Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P. H., Buchatskaya, E., Doersch, C., Pires, B. Á., Guo, Z. D., Azar, M. G., Piot, B., Kavukcuoglu, K., Munos, R., and Valko, M. Bootstrap your own latent: A new approach to self-supervised learning. ArXiv, abs/2006.07733, 2020b.
Guo, S., Xiong, Z., Zhong, Y., Wang, L., Guo, X., Han, B., and Huang, W. Cross-architecture self-supervised video representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19270–19279, 2022.
Gutman, D. A., Codella, N. C. F., Celebi, M. E., Helba, B., Marchetti, M. A., Mishra, N. K., and Halpern, A. C. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC). 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 168–172, 2018.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. B. Momentum contrast for unsupervised visual representation learning. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9726–9735, 2020.
He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R. B. Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377, 2021.
Izmailov, P., Podoprikhin, D., Garipov, T., Vetrov, D. P., and Wilson, A. G. Averaging weights leads to wider optima and better generalization. ArXiv, abs/1803.05407, 2018.
Kang, J., Ullah, Z., and Gwak, J. MRI-based brain tumor classification using ensemble of deep features and machine learning classifiers. Sensors, 21(6), 2021. ISSN 1424-8220. doi: 10.3390/s21062222. URL https://www.mdpi.com/1424-8220/21/6/2222.
Khan, A., Sohail, A., Zahoora, U., and Qureshi, A. S. A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Review, 53(8):5455–5516, 2020.
Lerner, A., Jeremias, P., and Matthias, T. The world incidence and prevalence of autoimmune diseases is increasing. Int J Celiac Dis, 3(4):151–5, 2015.
Li, C., Tang, T., Wang, G., Peng, J., Wang, B., Liang, X., and Chang, X. BossNAS: Exploring hybrid CNN-transformers with block-wisely self-supervised neural architecture search. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12281–12291, 2021.
Lin, T.-Y., Goyal, P., Girshick, R. B., He, K., and Dollár, P. Focal loss for dense object detection. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2999–3007, 2017.
Liu, Y., Sawalha, A. H., and Lu, Q. Covid-19 and autoimmune diseases. Current Opinion in Rheumatology, 33:155–162, 2020.
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10012–10022, October 2021a.
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021b.
Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., Wei, F., and Guo, B. Swin transformer v2: Scaling up capacity and resolution. In International Conference on Computer Vision and Pattern Recognition (CVPR), 2022a.
Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., and Xie, S. A convnet for the 2020s. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022b.
Matsoukas, C., Haslum, J. F., Soderberg, M. P., and Smith, K. Is it time to replace CNNs with transformers for medical images? ArXiv, abs/2108.09038, 2021.
Picard, D. Torch.manual_seed(3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision. arXiv preprint arXiv:2109.08203, 2021.
Raghu, M., Zhang, C., Kleinberg, J., and Bengio, S. Transfusion: Understanding transfer learning for medical imaging. Advances in Neural Information Processing Systems, 32, 2019.
Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., and Dosovitskiy, A. Do vision transformers see like convolutional neural networks? In NeurIPS, 2021.
Ronneberger, O., Fischer, P., and Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241. Springer, 2015.
Singh, P. and Cirrone, J. A data-efficient deep learning framework for segmentation and classification of histopathology images. arXiv preprint arXiv:2207.06489, 2022.
Sriram, A., Muckley, M., Sinha, K., Shamout, F., Pineau, J., Geras, K. J., Azour, L., Aphinyanaphongs, Y., Yakubova, N., and Moore, W. Covid-19 prognosis via self-supervised representation learning and multi-image prediction. arXiv preprint arXiv:2101.04909, 2021.
Stafford, I. S., Kellermann, M., Mossotto, E., Beattie, R. M., MacArthur, B. D., and Ennis, S. A systematic review of the applications of artificial intelligence and machine learning in autoimmune diseases. NPJ Digital Medicine, 3, 2020.
Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jégou, H. Training data-efficient image transformers & distillation through attention. arXiv preprint arXiv:2012.12877, 2020.
Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jégou, H. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, volume 139, pp. 10347–10357, July 2021.
Tsakalidou, V. N., Mitsou, P., and Papakostas, G. A. Computer vision in autoimmune diseases diagnosis—current status and perspectives. In Computational Vision and Bio-Inspired Computing, pp. 571–586. Springer, 2022.
Tschandl, P., Rosendahl, C., and Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data, 5, 2018.
Van Buren, K., Li, Y., Zhong, F., Ding, Y., Puranik, A., Loomis, C. A., Razavian, N., and Niewold, T. B. Artificial intelligence and deep learning to map immune cell types in inflamed human tissue. Journal of Immunological Methods, 505:113233, 2022. ISSN 0022-1759. doi: 10.1016/j.jim.2022.113233. URL https://www.sciencedirect.com/science/article/pii/S0022175922000205.
Wightman, R. PyTorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
Yadav, S. S. and Jadhav, S. M. Deep convolutional neural network based medical image classification for disease diagnosis. Journal of Big Data, 6(1):1–18, 2019.
Algorithm 1 CASS self-supervised pretraining algorithm
Input: Identically augmented unlabeled images x′ from the training set
for epoch in range(num_epochs) do
for x′ in train_loader do
R = cnn(x′) (logits output from the CNN)
T = vit(x′) (logits output from the ViT)
if R == T then
Rnoise ∼ N(10e−6, 10e−9)
Tnoise ∼ N(10e−10, 10e−15)
R = R + Rnoise
T = T + Tnoise
end if
Calculate loss using Equation 1
end for
end for
A. CASS Pretraining Algorithm
The core self-supervised algorithm used to train CASS with a CNN (R) and a Transformer (T) is described in Algorithm 1. Here, num_epochs is the number of self-supervised pretraining epochs, and cnn and vit denote the respective architectures; for example, the CNN could be a ResNet-50 and the Transformer a ViT Base/16. The loss is described in Equation 1. Finally, after pretraining, we save the CNN and the Transformer for downstream finetuning.
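As an illustration of the loop above, here is a minimal NumPy sketch of a single CASS step. It is not the authors' implementation: `cnn_logits` and `vit_logits` stand in for the outputs of the two networks, and since Equation 1 is not reproduced in this appendix, a simple mean-squared difference between the two logit vectors is used as a placeholder loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def cass_step(cnn_logits: np.ndarray, vit_logits: np.ndarray) -> float:
    """One CASS-style pretraining step on the logits of the two arms.

    If the two outputs coincide exactly, tiny Gaussian noise is added to
    each (as in Algorithm 1) so the loss is never computed on identical
    vectors.
    """
    R = cnn_logits.astype(float)
    T = vit_logits.astype(float)
    if np.array_equal(R, T):
        R = R + rng.normal(10e-6, 10e-9, size=R.shape)
        T = T + rng.normal(10e-10, 10e-15, size=T.shape)
    # Placeholder for Equation 1: any loss comparing the two logit sets.
    return float(np.mean((R - T) ** 2))

# Identical logits trigger the noise branch, so the loss is tiny but non-zero.
loss = cass_step(np.ones(4), np.ones(4))
```

In the actual pipeline, this loss would be backpropagated through both networks before moving to the next batch.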
B. Additional Ablation Studies
B.1. Batch size
We studied the effect of changing the batch size on the autoimmune dataset in Section 5.3.2. In this section, we similarly study the effect of varying the batch size on the brain MRI classification dataset. In the standard implementation of CASS, we used a batch size of 16; here, we show results for batch sizes 8 and 32. The largest batch size we could fit on a single GPU with 48 GB of video memory was 34, hence 32 is the largest batch size we report. We present these results in Table 6. As in Section 5.3.2, performance decreases as we reduce the batch size and increases slightly as we increase it, for both the CNN and the Transformer.
Batch Size | CNN F1 Score | Transformer F1 Score
8 | 0.9895±0.0025 | 0.9198±0.0109
16 | 0.9909±0.0032 | 0.9279±0.0213
32 | 0.991±0.011 | 0.9316±0.006
Table 6. Results for different batch sizes on the brain MRI classification dataset. We keep the downstream batch size constant in all three cases, following the standard experimental setup in Appendix C.5 and C.6. These results are on the test set after end-to-end fine-tuning with 100% labels.
B.2. Change in pretraining epochs
As standard, we pretrained CASS for 100 epochs in all cases. However, existing self-supervised techniques suffer a loss in performance when the number of pretraining epochs is reduced; we studied this effect for CASS in Section 5.3.1. Additionally, in this section, we report results for pretraining CASS for up to 300 epochs on the autoimmune and brain tumor MRI datasets, in Tables 7 and 8, respectively. We observed a slight gain in performance when we increased the pretraining from 100 to 200 epochs, but minimal gain beyond that.
Epochs | CNN F1 Score | Transformer F1 Score
50 | 0.8521±0.0007 | 0.8765±0.0021
100 | 0.8650±0.0001 | 0.8894±0.005
200 | 0.8766±0.001 | 0.9053±0.008
300 | 0.8777±0.004 | 0.9091±8.2e-5
Table 7. Performance comparison over a varied number of pretraining epochs, from 50 to 300, on the autoimmune dataset. The downstream training procedure and the CNN-Transformer combination are kept constant across all four experiments; only the number of self-supervised pretraining epochs is changed.
Epochs | CNN F1 Score | Transformer F1 Score
50 | 0.9795±0.0109 | 0.9262±0.0181
100 | 0.9909±0.0032 | 0.9279±0.0213
200 | 0.9864±0.008 | 0.9476±0.0012
300 | 0.9920±0.001 | 0.9484±0.017
Table 8. Performance comparison over a varied number of pretraining epochs, from 50 to 300, on the brain tumor MRI classification dataset. The downstream training procedure and the CNN-Transformer combination are kept constant across all four experiments; only the number of self-supervised pretraining epochs is changed.
B.3. Augmentations
Contrastive learning techniques are known to be highly dependent on augmentations. Recently, most self-supervised techniques have adopted a BYOL-like (Grill et al., 2020b) set of augmentations; DINO (Caron et al., 2021) uses the same set as BYOL, along with local-global cropping. For CASS, we use a reduced set of BYOL augmentations with a few changes: we do not use solarization or Gaussian blur, and instead use affine transformations and random perspectives. In this section, we study the effect of adding BYOL-like augmentations back to CASS and report these results in Table 9. We observed that the CASS-trained CNN is robust to changes in augmentations, while the Transformer loses performance when the augmentations change. A possible way to recover this loss for the Transformer is to use Gaussian blur, which makes the results of the CNN and the Transformer converge.
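As a concrete, deliberately simplified illustration of the reduced augmentation set, the sketch below applies a random horizontal flip and a random translation (a special case of an affine transform) to an image array in pure NumPy. It is not the pipeline used in the paper; the flip probability and shift range are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray, p_flip: float = 0.5, max_shift: int = 4) -> np.ndarray:
    """Toy CASS-style augmentation: horizontal flip plus a random shift."""
    out = img.copy()
    if rng.random() < p_flip:
        out = out[:, ::-1]  # horizontal flip
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    # Wrap-around translation as a stand-in for a general affine warp.
    out = np.roll(out, shift=(int(dy), int(dx)), axis=(0, 1))
    return out

view = augment(np.arange(64, dtype=float).reshape(8, 8))
```

A real pipeline would also include random perspectives, crops, and color jitter, and would operate on H×W×C image tensors rather than a single-channel toy array.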
Augmentation Set | CNN F1 Score | Transformer F1 Score
CASS only | 0.8650±0.0001 | 0.8894±0.005
CASS + Solarize | 0.8551±0.0004 | 0.81455±0.002
CASS + Gaussian blur | 0.864±4.2e-05 | 0.8604±0.0029
CASS + Gaussian blur + Solarize | 0.8573±2.59e-05 | 0.8513±0.0066
Table 9. F1 metric of CASS trained with different sets of augmentations for 100 epochs. While the CASS-trained CNN fluctuates within a percent of its peak performance, the CASS-trained Transformer loses performance with the addition of solarization and Gaussian blur. Interestingly, the two arms converge with the use of Gaussian blur.
B.4. Optimization
In CASS, we use the Adam optimizer for both the CNN and the Transformer. This is a shift from the common practice of using SGD (stochastic gradient descent) for CNNs. In Table 10, we report the performance of the CASS-trained CNN and Transformer with the CNN using either SGD or Adam. We observed that while the performance of the CNN remained almost constant, the performance of the Transformer dropped by almost 6% when the CNN used SGD.
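To make the comparison concrete, the sketch below implements the two update rules on a toy quadratic objective. This is illustrative only, not the training code; the learning rate and Adam hyperparameters are assumed values.

```python
import numpy as np

def sgd_step(w: float, grad: float, lr: float = 0.1) -> float:
    """Plain stochastic-gradient-descent update."""
    return w - lr * grad

def adam_step(w: float, grad: float, state: dict, lr: float = 0.1,
              b1: float = 0.9, b2: float = 0.999, eps: float = 1e-8) -> float:
    """One Adam update; `state` carries the running moment estimates."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])   # bias-corrected first moment
    v_hat = state["v"] / (1 - b2 ** state["t"])   # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Minimize f(w) = w^2 (gradient 2w) with both optimizers.
w_sgd = w_adam = 5.0
state = {"t": 0, "m": 0.0, "v": 0.0}
for _ in range(20):
    w_sgd = sgd_step(w_sgd, 2 * w_sgd)
    w_adam = adam_step(w_adam, 2 * w_adam, state)
```

Note how Adam normalizes each step by a running estimate of the squared gradient, while SGD scales directly with the raw gradient.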
Optimizer for CNN | CNN F1 Score | Transformer F1 Score
Adam | 0.8650±0.0001 | 0.8894±0.005
SGD | 0.8648±0.0005 | 0.82355±0.0064
Table 10. F1 metric of CASS trained for 100 epochs with different optimizers for the CNN arm. While there is no change in the CNN's performance, the Transformer's performance drops by around 6% with SGD.
B.5. Using softmax and sigmoid layers in CASS
As noted in Figure 1, CASS does not use a softmax layer like DINO (Caron et al., 2021) before computing the loss. The output logits of the two networks are used to combine the two architectures in a response-based knowledge distillation (Gou et al., 2021) manner, instead of using soft labels from a softmax layer. In this section, we study the effect of adding a softmax layer to CASS. Furthermore, we also study the effect of adding a sigmoid layer instead, and compare both with a CASS model that uses neither. We present these results in Table 11. We observed that using neither a sigmoid nor a softmax layer in CASS yields the best result for both the CNN and the Transformer.
Techniques | CNN F1 Score | Transformer F1 Score
Without Sigmoid or Softmax | 0.8650±0.0001 | 0.8894±0.005
With Sigmoid Layer | 0.8296±0.00024 | 0.8322±0.004
With Softmax Layer | 0.8188±0.0001 | 0.8093±0.00011
Table 11. We observe that performance reduces when we introduce a sigmoid or softmax layer.
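To make the distinction concrete, here is a small NumPy sketch (illustrative only, not the paper's loss) showing how a softmax layer compresses the differences between two logit vectors before any distillation-style comparison, whereas the response-based comparison operates on the raw logits directly:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

cnn_logits = np.array([2.0, 1.0, 0.5])
vit_logits = np.array([2.5, 0.5, 0.0])

# Response-based comparison on raw logits (as in CASS).
raw_gap = np.abs(cnn_logits - vit_logits).mean()

# Comparison on softmax outputs: differences are squashed into [0, 1].
soft_gap = np.abs(softmax(cnn_logits) - softmax(vit_logits)).mean()
```

The squashed probabilities carry less information about how far apart the two arms' responses are, which is one plausible reading of the drop in Table 11.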
B.6. Change in architecture
B.6.1. CHANGING THE TRANSFORMER AND KEEPING THE CNN THE SAME
From Tables 12 and 13, we observed that a CASS-trained ViT paired with the same CNN consistently gained approximately 4.7% over its supervised counterpart. Furthermore, from Table 13, we observed that although ViT L/16 performs better than ViT B/16 on ImageNet ((Wightman, 2019)'s results), the trend is the opposite on the autoimmune dataset. Hence, the supervised performance of an architecture must be considered before pairing it with CASS.
Transformer | CNN F1 Score | Transformer F1 Score
ViT Base/16 | 0.8650±0.001 | 0.8894±0.005
ViT Large/16 | 0.8481±0.001 | 0.853±0.004
Table 12. Performance of CASS for ViT Large/16 with ResNet-50 and ViT Base/16 with ResNet-50. We observed that CASS-trained Transformers, on average, performed 4.7% better than their supervised counterparts.
Architecture | Testing F1 Score
ResNet-50 | 0.83895±0.007
ViT Base/16 | 0.8420±0.009
ViT Large/16 | 0.80495±0.0077
Table 13. Supervised performance of the ViT family on the autoimmune dataset. We observed that, as opposed to ImageNet performance, ViT Large/16 performs worse than ViT Base/16 on the autoimmune dataset.
For this experiment, we keep the CNN constant and study the effect of changing the Transformer. We use ResNet-50 as our CNN, and ViT Base and Large Transformers with 16 patches. Additionally, we also report performance for DeiT-B (Touvron et al., 2020) with ResNet-50. We report these results in Table 14. Similar to Table 12, we observe that changing the Transformer from ViT Base to Large, while keeping the patch size fixed at 16, drops performance. Additionally, for approximately the same size, DeiT Base performs much better than ViT Base.
B.6.2. CHANGING THE CNN AND KEEPING THE TRANSFORMER THE SAME
From Tables 15 and 16, we observed that, similar to changing the Transformer while keeping the CNN the same, CASS-trained CNNs gained an average of 3% over their supervised counterparts. ResNet-200 (Wightman, 2019) does not have an ImageNet initialization available, so we use random initialization for it.
For this experiment, we use the ResNet family of CNNs and ViT Base/16 as our Transformer. We use ImageNet initialization for ResNet-18 and ResNet-50, and random initialization for ResNet-200 (as Timm's library does not provide an ImageNet initialization for it). We present these results in Table 17. We observed that an increase in the performance of the ResNet correlates with an increase in the performance of the Transformer, implying that there is information transfer between the two.
CNN | Transformer | CNN F1 Score | Transformer F1 Score
ResNet-50 (25.56M) | DeiT Base/16 (86.86M) | 0.9902±0.0025 | 0.9844±0.0048
ResNet-50 (25.56M) | ViT Base/16 (86.86M) | 0.9909±0.0032 | 0.9279±0.0213
ResNet-50 (25.56M) | ViT Large/16 (304.72M) | 0.98945±2.45e-5 | 0.8896±0.0009
Table 14. For the same number of Transformer parameters, DeiT Base with ResNet-50 performed much better than ViT Base with ResNet-50; the difference in their CNN arms is 0.10%. On ImageNet, DeiT Base has a top-1 accuracy of 83.106 while ViT Base has 86.006 (ResNet-50 has 80.374). We use both Transformers with 16 patches.
CNN | Transformer | CNN F1 Score (100% labels) | Transformer F1 Score (100% labels)
ResNet-18 (11.69M) | ViT Base/16 (86.86M) | 0.8674±4.8e-5 | 0.8773±5.29e-5
ResNet-50 (25.56M) | ViT Base/16 (86.86M) | 0.8680±0.001 | 0.8894±0.0005
ResNet-200 (64.69M) | ViT Base/16 (86.86M) | 0.8517±0.0009 | 0.874±0.0006
Table 15. F1 metric comparison between the two arms of CASS trained for 100 epochs, following the protocols and procedures listed in Appendix E. The numbers in parentheses are the parameters learned by each network. We use the (Wightman, 2019) implementations of the CNNs and Transformers, with ImageNet initialization except for ResNet-200.
Architecture | Testing F1 Score
ResNet-18 | 0.8499±0.0004
ResNet-50 | 0.83895±0.007
ResNet-200 | 0.833±0.0005
Table 16. Supervised performance of the ResNet CNN family on the autoimmune dataset.
CNN | Transformer | CNN F1 Score (100% labels) | Transformer F1 Score (100% labels)
ResNet-18 (11.69M) | ViT Base/16 (86.86M) | 0.9913±0.002 | 0.9801±0.007
ResNet-50 (25.56M) | ViT Base/16 (86.86M) | 0.9909±0.0032 | 0.9279±0.0213
ResNet-200 (64.69M) | ViT Base/16 (86.86M) | 0.9898±0.005 | 0.9276±0.017
Table 17. F1 metric comparison between the two arms of CASS trained for 100 epochs, following the protocols and procedures listed in Appendix C.5 and C.6. The numbers in parentheses are the parameters learned by each network. We use the (Wightman, 2019) implementations of the CNNs and Transformers, with ImageNet initialization except for ResNet-200.
B.6.3. USING A CNN IN BOTH ARMS
So far, we have experimented with a CNN and a Transformer in CASS on the brain tumor MRI classification dataset. In this section, we present results for using two CNNs in CASS, pairing ResNet-50 with DenseNet-161. We observe that both CNNs fail to reach the benchmark set by the ResNet-50-ViT-B/16 combination, although training the ResNet-50-DenseNet-161 pair takes 5 hours 24 minutes, less than the 7 hours 11 minutes taken by the ResNet-50-ViT-B/16 combination. We compare these results in Table 18.
Architecture in arm 1 | Architecture in arm 2 | F1 Score of ResNet-50 arm | F1 Score of arm 2
ResNet-50 | ViT Base/16 | 0.9909±0.0032 | 0.9279±0.0213
ResNet-50 | DenseNet-161 | 0.9743±8.8e-5 | 0.98365±9.63e-5
Table 18. For the ResNet-50-DenseNet-161 pair, we train two CNNs instead of one in our standard setup of CASS. Furthermore, neither of these CNNs could match the performance of the ResNet-50 trained in the ResNet-50-ViT Base/16 combination. Hence, with a CNN-Transformer combination, we transfer information between the two architectures that would otherwise have been missed.
B.6.4. USING A TRANSFORMER IN BOTH ARMS
Similar to the above section, we use a Transformer-Transformer combination instead of a CNN-Transformer combination: a Swin Transformer, patch-4/window-12 (Liu et al., 2021a), alongside a ViT-B/16. We observe that the performance of the ViT-B/16 improves by around 1.3% with the Swin Transformer. However, this comes at a computational cost: the Swin-ViT combination took 10 hours to train, as opposed to the 7 hours and 11 minutes taken by the ResNet-50-ViT-B/16 combination. Even with this increased training time, the Swin-ViT combination still takes almost 50% less time than DINO. We present these results in Table 19.
Architecture in arm 1 | Architecture in arm 2 | F1 Score of arm 1 | F1 Score of ViT-B/16 arm
ResNet-50 | ViT Base/16 | 0.9909±0.0032 | 0.9279±0.0213
Swin Transformer | ViT Base/16 | 0.9883±1.26e-5 | 0.94±8.12e-5
Table 19. Results for using Transformers in both arms, compared with the CNN-Transformer combination.
B.7. Effect of Initialization
Although the aim of self-supervised pretraining is to provide better initialization, we use ImageNet-initialized CNNs and Transformers for CASS and DINO pretraining as well as for supervised training, similar to (Matsoukas et al., 2021). We use Timm's library for these initializations (Wightman, 2019). ImageNet initialization is preferred not because of feature reuse but because ImageNet weights allow for faster convergence through better weight scaling (Raghu et al., 2019). However, pre-trained weights can sometimes be hard to find, so in this section we study CASS' performance with random and ImageNet initialization. We observed that performance remained almost the same, with minor gains when the initialization was altered for the two networks. Table 20 presents the results of this experimentation.
Initialisation | CNN F1 Score | Transformer F1 Score
Random | 0.9907±0.009 | 0.9116±0.027
ImageNet | 0.9909±0.0032 | 0.9279±0.0213
Table 20. We observe that the Transformer gains some performance with random initialization, although performance has more variance when random initialization is used.
Initialisation | CNN F1 Score | Transformer F1 Score
Random | 0.8437±0.0047 | 0.8815±0.048
ImageNet | 0.8650±0.0001 | 0.8894±0.005
Table 21. We observe that the Transformer gains some performance with random initialization, although performance has more variance when random initialization is used.
C. Result Analysis
C.1. Time complexity analysis
In Section 5.1, we observed that CASS takes 69% less time than DINO. This reduction in time could be attributed to the following reasons:
1. In DINO, augmentations are applied twice, as opposed to just once in CASS. Furthermore, per application, CASS uses fewer augmentations than DINO.
2. Since the two architectures are different, there is no scope for parameter sharing between them. A major chunk of time is saved by updating the two architectures directly after each epoch instead of re-initializing an architecture with lagging parameters.
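The second point can be illustrated with a small sketch. This is plain-Python pseudocode of the two update rules, not the authors' code, and the momentum value 0.996 is an assumption based on DINO's published defaults:

```python
# Illustrative contrast between DINO's lagging teacher and CASS updates.
# Parameters are represented as plain lists of floats for clarity.

def dino_teacher_update(teacher, student, m=0.996):
    """DINO maintains a lagging teacher: an exponential moving average
    (EMA) of the student's parameters; the teacher is never back-propagated."""
    return [m * t + (1.0 - m) * s for t, s in zip(teacher, student)]

def cass_update(cnn_params, vit_params, cnn_grads, vit_grads, lr=1e-3):
    """CASS keeps no lagging copy: the CNN and the Transformer are both
    ordinary models, each updated directly from its own gradients."""
    new_cnn = [p - lr * g for p, g in zip(cnn_params, cnn_grads)]
    new_vit = [p - lr * g for p, g in zip(vit_params, vit_grads)]
    return new_cnn, new_vit

# The EMA teacher moves only a small fraction toward the student each step,
# so DINO must maintain and refresh this extra set of parameters.
teacher = dino_teacher_update([0.0], [1.0])
assert 0.0 < teacher[0] < 0.01
```

In CASS there is no such extra parameter set to maintain; both arms are updated in the ordinary way by their own optimizers.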
Figure 4. Sample image used from the test set of the autoimmune dataset.
C.2. Qualitative analysis
To qualitatively expand our study, in this section we examine the feature maps of CNNs and the attention maps of Transformers trained using CASS and supervised techniques. To restate: based on the study by (Raghu et al., 2021), since CNNs and Transformers extract different kinds of features from the same input, combining the two helps us create positive pairs for self-supervised learning. In doing so, we transfer information between the two architectures that is not innate to either. We have already seen that this yields better performance in most cases over four different datasets and with three different label fractions. In this section, we study this gain qualitatively with the help of feature maps and class attention maps. We also briefly discussed attention maps in Section 5.3.3, where we observed that CASS-trained Transformers have a more local understanding of the image and hence a more connected attention map than a purely supervised Transformer.
C.3. Feature maps
In this section, we study the feature maps from the first five layers of the ResNet-50 model trained with CASS and with supervision. We extracted feature maps after the Conv2d layer of ResNet-50 and present them in Figure 6. We observed that the CASS-trained CNN retains much more detail about the input image than the supervised CNN.
C.4. Class attention maps
We have already studied the class attention maps over a single image in Section 5.3.3. In this section, we explore the average class attention maps for all four datasets. We studied attention maps averaged over 30 random samples for the autoimmune, dermofit, and brain MRI datasets. Since the ISIC 2019 dataset is highly unbalanced, we averaged the attention maps over 100 samples so that each class is represented in our sample. We maintained the same distribution as the test set, which has the same class distribution as the overall training set. We observed that CASS-trained Transformers were better able to map global and local connections: they retain the Transformer's innate ability to map global dependencies while learning features sensitive to translation equivariance and locality from the CNN. This helps the Transformer learn features and local patterns that it would otherwise have missed.
C.4.1. AUTOIMMUNE DATASET
We study the class attention maps averaged over 30 test samples for the autoimmune dataset in Figure 7. We observed that the CASS-trained Transformer has much more attention in the center than the supervised Transformer. This extra attention could be attributed to the information transfer from the CNN, which captures regions the Transformer is unable to map on its own. Another feature to observe is that the attention map of the CASS-trained Transformer is much more connected than that of the supervised Transformer.
Figure 5. This figure shows the feature map extracted after the first Conv2d layer of ResNet-50 for CASS (on the left) and the supervised CNN (on the right). The color bar shows the intensity of the retained pixels. From the four circles, it is clear that the CASS-trained CNN retains the intricate details of the input image (Figure 4) more intensely, so that they can be propagated through the architecture and help the model learn better representations than the supervised CNN. We study the same feature map in detail for the first five layers after Conv2d in Figure 6.
C.4.2. DERMOFIT DATASET
We present the average attention maps for the dermofit dataset in Figure 8. We observed that the CASS-trained Transformer pays much more attention to the center part of the image. Furthermore, its attention map is much more connected than that of the supervised Transformer. So, overall, with CASS the Transformer is not only able to map long-range dependencies, which is innate to Transformers, but is also able to make more local connections with the help of features sensitive to translation equivariance and locality learned from the CNN.
C.4.3. BRAIN TUMOR MRI CLASSIFICATION DATASET
We present the average class attention map results in Figure 9. We observed that a CASS-trained Transformer captures long- and short-range dependencies better than a supervised Transformer. Furthermore, a CASS-trained Transformer's attention map is much more centered than a supervised Transformer's. From Figure 13, we can observe that most MRI images are center-localized, so having a more centered attention map is advantageous in this case.
C.4.4. ISIC 2019 DATASET
The ISIC-2019 dataset is one of the most challenging of the four datasets. ISIC 2019 consists of images from the HAM10000 and BCN 20000 datasets (Cassidy et al., 2022; Gessert et al., 2020). In the HAM10000 dataset, two pairs of classes are hard to distinguish: melanoma versus melanocytic nevus, and actinic keratosis versus benign keratosis. The HAM10000 dataset contains images of size 600×450, centered and cropped around the lesion; histogram corrections have been applied to only a few images. The BCN 20000 dataset contains images of size 1024×1024. This dataset is particularly challenging as many images are uncropped and lesions appear in difficult and uncommon locations. Hence, in this case, having a more spread-out attention map is advantageous rather than a more centered one. From Figure 10, we observed that a CASS-trained Transformer has a much more spread-out attention map than a supervised Transformer. Furthermore, a CASS-trained Transformer attends to the corners far better than a supervised Transformer.
From Figures 7, 8, 9 and 10, we observed that in most cases the supervised Transformer had spread-out attention, while the CASS-trained Transformer has a more "connected" attention map. This is primarily because of local-level information transfer from the CNN. Hence, with the help of the CNN, we could add some more image-level intuition to the Transformer that it
Figure 6. At the top, we have features extracted from the top 5 layers of the supervised ResNet-50, while at the bottom, we have features extracted from the top 5 layers of the CASS-trained ResNet-50. We supplied both networks with the same input (shown in Figure 4).
Figure 7. To ensure the consistency of our study, we studied average attention maps over 30 sample images from the autoimmune dataset. The left image is the overall attention map averaged over 30 samples for the supervised Transformer, while the one on the right is for the CASS-pretrained Transformer (both after finetuning with 100% labels).
would have rather missed on its own.
C.4.5. CHOICE OF DATASETS
We chose four medical imaging datasets with diverse sample sizes, ranging from 198 to 25,336, and diverse modalities to study the performance of existing self-supervised techniques and CASS. Most existing self-supervised techniques have been studied on million-image datasets, but medical imaging datasets are, on average, much smaller than a million images. Furthermore, they are usually imbalanced, and some existing self-supervised techniques rely on batch statistics, which makes them learn skewed representations. We also include a dataset of an emerging and underrepresented disease with only a few hundred samples, the autoimmune dataset in our case (198 samples). To the best of our knowledge, no existing literature studies the effect of self-supervised learning on such a small dataset. Furthermore, we chose the dermofit dataset because all its images are taken using an SLR camera and no two images are the same size; image size in dermofit varies from 205×205 to 1020×1020. Since MRI images constitute a large part of medical imaging, we included the brain tumor MRI classification dataset; it is also our study's only black-and-white dataset, while the other three datasets are RGB. The ISIC 2019 is a unique dataset as it contains multiple pairs
of hard-to-classify classes (melanoma versus melanocytic nevus, and actinic keratosis versus benign keratosis) and different image sizes, out of which only a few have been preprocessed. It is a highly imbalanced dataset containing samples with lesions in difficult and uncommon locations. To give an idea of the images used in our experiments, we provide sample images from the four datasets in Figures 11, 12, 13 and 14.
C.5. Self-supervised pretraining
C.5.1. PROTOCOLS
• Self-supervised learning was done only on the training data, not on the validation data. We used https://github.com/PyTorchLightning/pytorch-lightning to set the pseudo-random number generators in PyTorch, NumPy, and python.random.
• We ran training over five seed values and report mean results with variance in each table. We did not perform a seed-value sweep to extract any more performance (Picard, 2021).
• For the DINO implementation, we use Phil Wang's implementation: https://github.com/lucidrains/vit-pytorch.
• For the implementation of CNNs and Transformers, we use Timm's library (Wightman, 2019).
• For all experiments, ImageNet-initialized (Deng et al., 2009) CNNs and Transformers were used.
• After pretraining, end-to-end finetuning of the pre-trained model was done using x% labeled data, where x was 1, 10, or 100; that is, the pre-trained model was fine-tuned only on x% of the data points, with their corresponding labels.
C.5.2. AUGMENTATIONS
• Resizing: resize input images to 384×384 with bilinear interpolation.
• Color jittering: change the brightness, contrast, saturation, and hue of an image, or apply a random perspective with a given probability. We set the degree of distortion to 0.2 (between 0 and 1) and use bilinear interpolation, with an application probability of 0.3.
• Color jittering or applying a random affine transformation of the image, keeping the center invariant, with degree 10 and an application probability of 0.3.
Figure 8. Class attention maps averaged over 30 samples of the dermofit dataset for the supervised Transformer (on the left) and the CASS-pretrained Transformer (on the right), both after finetuning with 100% labels.
Figure 9. Class attention maps averaged over 30 samples of the brain tumor MRI classification dataset for the supervised Transformer (on the left) and the CASS-pretrained Transformer (on the right), both after finetuning with 100% labels.
Figure 10. Class attention maps averaged over 100 samples from the ISIC-2019 dataset for the supervised Transformer (on the left) and the CASS-trained Transformer (on the right), both after finetuning with 100% labels.
• Horizontal and vertical flips, each with an application probability of 0.3.
• Channel normalization with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
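The channel normalization above applies (pixel - mean) / std per RGB channel, using the standard ImageNet statistics. A minimal NumPy sketch (illustrative only, not the authors' pipeline, which would typically use a torchvision-style transform):

```python
import numpy as np

# Standard ImageNet per-channel statistics, as listed above.
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def normalize(image: np.ndarray) -> np.ndarray:
    """Per-channel normalization of an (H, W, 3) float image in [0, 1]."""
    return (image - MEAN) / STD

# A pixel equal to the channel means maps to zero in every channel.
pixel = np.zeros((1, 1, 3)) + MEAN
assert np.allclose(normalize(pixel), 0.0)
```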
C.5.3. HYPER-PARAMETERS
• Optimization: we use stochastic weight averaging over the Adam optimizer, with the learning rate (LR) set to 1e-3 for both the CNN and the vision transformer (ViT). This is a shift from SGD, which is usually used for CNNs.
• Learning rate: a cosine annealing learning rate is used with 16 iterations and a minimum learning rate of 1e-6. Unless mentioned otherwise, this setup was trained over 100 epochs. These weights were then used as initialization for the downstream supervised learning. The standard batch size is 16.
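The cosine annealing schedule above follows the standard formula lr(t) = lr_min + 0.5 (lr_max - lr_min)(1 + cos(pi t / T)). A minimal sketch, assuming PyTorch's CosineAnnealingLR formulation with lr_max = 1e-3, lr_min = 1e-6, and T = 16 as stated:

```python
import math

def cosine_annealing_lr(step: int, total_steps: int,
                        lr_max: float = 1e-3, lr_min: float = 1e-6) -> float:
    """Cosine annealing: decays from lr_max at step 0 to lr_min at
    step == total_steps, following a half-cosine curve."""
    cos_term = math.cos(math.pi * step / total_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + cos_term)

# With T = 16 iterations, the LR decays monotonically from the base LR
# toward the minimum LR over the schedule.
lrs = [cosine_annealing_lr(t, 16) for t in range(17)]
assert lrs[0] > lrs[8] > lrs[16]
```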
C.6. Supervised training
C.6.1. AUGMENTATIONS
We use the same set of augmentations as in self-supervised pretraining.
Figure 11. Sample autofluorescence slide images from muscle biopsies of patients with dermatomyositis, a type of autoimmune disease.
Figure 12. Sample images from the Dermofit dataset.
Figure 13. Sample images from the brain tumor MRI dataset. Each image corresponds to a prediction class in the dataset: glioma (left), meningioma (center), and no tumor (right).
C.6.2. HYPER-PARAMETERS
• We use the Adam optimizer with the learning rate set to 3e-4 and a cosine annealing learning schedule.
Figure 14. Sample images from the ISIC-2019 challenge dataset.
• Since all the medical datasets have class imbalance, we address it by using focal loss (Lin et al., 2017) as our loss function, with the alpha value set to 1 and the gamma value set to 2. In our case, it uses the min-max-normalized class distribution as class weights for the focal loss.
• We train for 50 epochs, with a five-epoch patience on validation loss for early stopping. This downstream supervised learning setup is kept the same for CNNs and Transformers.
We repeat all experiments five times with different seed values and report the average results in all tables.
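For a single binary prediction, focal loss reduces to FL(p_t) = -alpha (1 - p_t)^gamma log(p_t), where p_t is the predicted probability of the true class. A minimal single-example sketch (illustrative only; the authors' setup additionally weights classes by the min-max-normalized class distribution):

```python
import math

def focal_loss(p: float, y: int, alpha: float = 1.0, gamma: float = 2.0) -> float:
    """Binary focal loss (Lin et al., 2017) for one prediction.
    p: predicted probability of the positive class; y: true label (0 or 1)."""
    p_t = p if y == 1 else 1.0 - p
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# Well-classified (easy) examples are down-weighted by (1 - p_t)**gamma,
# so hard examples dominate the loss -- useful under class imbalance.
easy = focal_loss(0.9, 1)   # confident, correct prediction
hard = focal_loss(0.1, 1)   # confident, wrong prediction
assert easy < hard
```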
D. Miscellaneous
D.1. Description of Metrics
After performing downstream fine-tuning on the four datasets under consideration, we analyze the CASS, DINO, and supervised approaches on a specific metric for each dataset. The choice of metric follows either previous work or the dataset provider. For the autoimmune, dermofit, and brain MRI classification datasets, following prior works, we use the F1 score as our metric for comparing performance, which is defined as
F1 = (2 × Precision × Recall) / (Precision + Recall) = 2TP / (2TP + FP + FN)
For the ISIC-2019 dataset, as specified by the competition organizers, we use the recall score as our comparison metric, which is defined as
Recall = TP / (TP + FN)
In the above two equations, TP: true positive, TN: true negative, FP: false positive, and FN: false negative.
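Both metrics can be computed directly from the confusion-matrix counts; a minimal sketch:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = 2*TP / (2*TP + FP + FN), the harmonic mean
    of precision and recall."""
    return 2 * tp / (2 * tp + fp + fn)

def recall_score(tp: int, fn: int) -> float:
    """Recall = TP / (TP + FN)."""
    return tp / (tp + fn)

# e.g. 8 true positives, 2 false positives, 2 false negatives:
print(f1_score(8, 2, 2))   # 0.8
print(recall_score(8, 2))  # 0.8
```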
D.2. Limitations
Although CASS' performance on larger and non-biological data can be hypothesized from these inferences, a complete study on large natural-image datasets has not been conducted. In this study, we focused extensively on the effects and performance of our proposed method for small dataset sizes and in the context of limited computational resources. Furthermore, all the datasets used in our experimentation are restricted to academic and research use only. Finally, although CASS performs better than existing self-supervised and supervised techniques, it is impossible to determine at inference time (without ground-truth labels) whether to pick the CNN or the Transformer arm of CASS.
D.3. Potential negative societal impact
The autoimmune dataset is limited to a single geographic institution; hence the study is specific to one disease variant, and the inferences drawn may or may not hold for other variants. Also, the results produced depend on a set of markers. Medical practitioners often require multiple tests before finalizing a diagnosis; medical history and existing health conditions also play an essential role. We have not incorporated such meta-data into CASS. Finally, application on a broader scale in real-life scenarios should only be trusted after clearance from the concerned health and safety governing bodies.