Add files using upload-large-folder tool
- -NE1T4oBgHgl3EQfUgOU/content/2301.03091v1.pdf +3 -0
- -NE1T4oBgHgl3EQfUgOU/vector_store/index.faiss +3 -0
- -NE1T4oBgHgl3EQfUgOU/vector_store/index.pkl +3 -0
- -dE2T4oBgHgl3EQfmQc9/content/2301.03996v1.pdf +3 -0
- -dE2T4oBgHgl3EQfmQc9/vector_store/index.faiss +3 -0
- -tFAT4oBgHgl3EQfqR1A/vector_store/index.faiss +3 -0
- .gitattributes +63 -0
- 09FAT4oBgHgl3EQfjh3M/content/tmp_files/2301.08606v1.pdf.txt +1117 -0
- 09FAT4oBgHgl3EQfjh3M/content/tmp_files/load_file.txt +0 -0
- 0dFKT4oBgHgl3EQfNy2J/content/tmp_files/2301.11756v1.pdf.txt +519 -0
- 0dFKT4oBgHgl3EQfNy2J/content/tmp_files/load_file.txt +465 -0
- 1dFLT4oBgHgl3EQfpy-5/content/2301.12137v1.pdf +3 -0
- 1dFLT4oBgHgl3EQfpy-5/vector_store/index.faiss +3 -0
- 2dFST4oBgHgl3EQfXzgB/content/tmp_files/2301.13785v1.pdf.txt +1127 -0
- 2dFST4oBgHgl3EQfXzgB/content/tmp_files/load_file.txt +0 -0
- 2tE2T4oBgHgl3EQfjAfh/content/tmp_files/2301.03965v1.pdf.txt +1509 -0
- 2tE2T4oBgHgl3EQfjAfh/content/tmp_files/load_file.txt +0 -0
- 2tFLT4oBgHgl3EQfrC9d/content/tmp_files/2301.12142v1.pdf.txt +1631 -0
- 2tFLT4oBgHgl3EQfrC9d/content/tmp_files/load_file.txt +0 -0
- 4NAyT4oBgHgl3EQf1_nb/content/2301.00745v1.pdf +3 -0
- 4NAyT4oBgHgl3EQf1_nb/vector_store/index.faiss +3 -0
- 4NAyT4oBgHgl3EQf1_nb/vector_store/index.pkl +3 -0
- 4dAyT4oBgHgl3EQfpPgc/content/tmp_files/2301.00520v1.pdf.txt +1341 -0
- 4dAyT4oBgHgl3EQfpPgc/content/tmp_files/load_file.txt +0 -0
- 69E1T4oBgHgl3EQfTgNE/vector_store/index.pkl +3 -0
- 69E3T4oBgHgl3EQfpwru/content/tmp_files/2301.04646v1.pdf.txt +1019 -0
- 69E3T4oBgHgl3EQfpwru/content/tmp_files/load_file.txt +0 -0
- 79FAT4oBgHgl3EQfoh2y/content/tmp_files/2301.08635v1.pdf.txt +1788 -0
- 79FAT4oBgHgl3EQfoh2y/content/tmp_files/load_file.txt +0 -0
- 7NE3T4oBgHgl3EQfRgki/content/tmp_files/2301.04421v1.pdf.txt +1476 -0
- 7NE3T4oBgHgl3EQfRgki/content/tmp_files/load_file.txt +0 -0
- 99AzT4oBgHgl3EQfSvs8/content/tmp_files/2301.01236v1.pdf.txt +684 -0
- 99AzT4oBgHgl3EQfSvs8/content/tmp_files/load_file.txt +223 -0
- 9NE1T4oBgHgl3EQfCQIp/content/tmp_files/2301.02861v1.pdf.txt +1231 -0
- 9NE1T4oBgHgl3EQfCQIp/content/tmp_files/load_file.txt +381 -0
- A9FAT4oBgHgl3EQfrx7P/content/2301.08655v1.pdf +3 -0
- A9FAT4oBgHgl3EQfrx7P/vector_store/index.faiss +3 -0
- A9FAT4oBgHgl3EQfrx7P/vector_store/index.pkl +3 -0
- BNE1T4oBgHgl3EQfVgSF/content/tmp_files/2301.03103v1.pdf.txt +2084 -0
- BNE1T4oBgHgl3EQfVgSF/content/tmp_files/load_file.txt +0 -0
- BtE5T4oBgHgl3EQfTQ-D/vector_store/index.faiss +3 -0
- CNE1T4oBgHgl3EQfDwNk/vector_store/index.faiss +3 -0
- CNFJT4oBgHgl3EQfASxa/content/2301.11419v1.pdf +3 -0
- CNFJT4oBgHgl3EQfASxa/vector_store/index.faiss +3 -0
- CNFJT4oBgHgl3EQfASxa/vector_store/index.pkl +3 -0
- DNE0T4oBgHgl3EQfggEA/content/tmp_files/2301.02417v1.pdf.txt +2744 -0
- DNE0T4oBgHgl3EQfggEA/content/tmp_files/load_file.txt +0 -0
- EdE1T4oBgHgl3EQfWgTb/vector_store/index.faiss +3 -0
- EdE5T4oBgHgl3EQfUw9y/content/2301.05546v1.pdf +3 -0
- EdE5T4oBgHgl3EQfUw9y/vector_store/index.faiss +3 -0
-NE1T4oBgHgl3EQfUgOU/content/2301.03091v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:629bffbca7adc4d57a3982f62c928c49ba8933a4d95de0f560cb5112170c6fd3
+size 288297

-NE1T4oBgHgl3EQfUgOU/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31ce46e77c89ffd20340e3e2d3f809aff1b11f9721cfb43950b5ae8ec5c6ab09
+size 2687021

-NE1T4oBgHgl3EQfUgOU/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2bd760e2924de1b7ef5bceb0db70d25faf07b86f0a922de5635d2c55ef796589
+size 90463

-dE2T4oBgHgl3EQfmQc9/content/2301.03996v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97511271dad5b2fc492cd95305744e5c2f175e58ce2a69b0e9f6bf8cbcce05f4
+size 499082

-dE2T4oBgHgl3EQfmQc9/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00d0436e38f49547e816f8861f99e5950bce1bcdbdd1291e78c2a1be348cfb8b
+size 1703981

-tFAT4oBgHgl3EQfqR1A/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:79b16c352dffd8101d786d1dfdc56d48a54a99a9cc2cb469e0cad8dfa658a27e
+size 720941
.gitattributes
CHANGED
@@ -2688,3 +2688,66 @@ ItAzT4oBgHgl3EQfVPwp/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -tex
 XtE2T4oBgHgl3EQfYgfk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 eNA0T4oBgHgl3EQfHP9d/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 G9A0T4oBgHgl3EQfBv-A/content/2301.01980v1.pdf filter=lfs diff=lfs merge=lfs -text
+-dE2T4oBgHgl3EQfmQc9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+1dFLT4oBgHgl3EQfpy-5/content/2301.12137v1.pdf filter=lfs diff=lfs merge=lfs -text
+_NE1T4oBgHgl3EQfUwPr/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+jdAzT4oBgHgl3EQf4_6S/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NNAyT4oBgHgl3EQf6_pQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+MdFJT4oBgHgl3EQfzC0_/content/2301.11641v1.pdf filter=lfs diff=lfs merge=lfs -text
+A9FAT4oBgHgl3EQfrx7P/content/2301.08655v1.pdf filter=lfs diff=lfs merge=lfs -text
+MdFJT4oBgHgl3EQfzC0_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+-dE2T4oBgHgl3EQfmQc9/content/2301.03996v1.pdf filter=lfs diff=lfs merge=lfs -text
+odAyT4oBgHgl3EQfzPlC/content/2301.00697v1.pdf filter=lfs diff=lfs merge=lfs -text
+-tFAT4oBgHgl3EQfqR1A/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+EdE1T4oBgHgl3EQfWgTb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NdFQT4oBgHgl3EQfWTbR/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+hdE_T4oBgHgl3EQf3hyR/content/2301.08347v1.pdf filter=lfs diff=lfs merge=lfs -text
+m9AyT4oBgHgl3EQfyvnQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+mdE1T4oBgHgl3EQfhATk/content/2301.03237v1.pdf filter=lfs diff=lfs merge=lfs -text
+m9AyT4oBgHgl3EQfyvnQ/content/2301.00691v1.pdf filter=lfs diff=lfs merge=lfs -text
+vdAyT4oBgHgl3EQfafdn/content/2301.00243v1.pdf filter=lfs diff=lfs merge=lfs -text
+W9FQT4oBgHgl3EQfcjYS/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+j9E0T4oBgHgl3EQf7QLf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+A9FAT4oBgHgl3EQfrx7P/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+itAzT4oBgHgl3EQfpP1U/content/2301.01609v1.pdf filter=lfs diff=lfs merge=lfs -text
+I9A0T4oBgHgl3EQfCP_M/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+M9E0T4oBgHgl3EQfTABv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+dNAyT4oBgHgl3EQfjPh9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+v9FST4oBgHgl3EQfQjh8/content/2301.13759v1.pdf filter=lfs diff=lfs merge=lfs -text
+ctE4T4oBgHgl3EQfQAwO/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+kb_46/content/kb_46.pdf filter=lfs diff=lfs merge=lfs -text
+cNE4T4oBgHgl3EQfow2y/content/2301.05187v1.pdf filter=lfs diff=lfs merge=lfs -text
+-NE1T4oBgHgl3EQfUgOU/content/2301.03091v1.pdf filter=lfs diff=lfs merge=lfs -text
+PtFJT4oBgHgl3EQf2C2D/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+PtFJT4oBgHgl3EQf2C2D/content/2301.11654v1.pdf filter=lfs diff=lfs merge=lfs -text
+-NE1T4oBgHgl3EQfUgOU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+JtE3T4oBgHgl3EQfXgpN/content/2301.04479v1.pdf filter=lfs diff=lfs merge=lfs -text
+xdAyT4oBgHgl3EQfa_cC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+4NAyT4oBgHgl3EQf1_nb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+bdE_T4oBgHgl3EQfzRz6/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+dNAyT4oBgHgl3EQfjPh9/content/2301.00411v1.pdf filter=lfs diff=lfs merge=lfs -text
+HdFIT4oBgHgl3EQfXith/content/2301.11244v1.pdf filter=lfs diff=lfs merge=lfs -text
+1dFLT4oBgHgl3EQfpy-5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+VdE1T4oBgHgl3EQfIwNu/content/2301.02943v1.pdf filter=lfs diff=lfs merge=lfs -text
+v9FST4oBgHgl3EQfQjh8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+odAyT4oBgHgl3EQfzPlC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+CNFJT4oBgHgl3EQfASxa/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+4NAyT4oBgHgl3EQf1_nb/content/2301.00745v1.pdf filter=lfs diff=lfs merge=lfs -text
+hdE_T4oBgHgl3EQf3hyR/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ctE4T4oBgHgl3EQfQAwO/content/2301.04977v1.pdf filter=lfs diff=lfs merge=lfs -text
+EdE5T4oBgHgl3EQfUw9y/content/2301.05546v1.pdf filter=lfs diff=lfs merge=lfs -text
+CNFJT4oBgHgl3EQfASxa/content/2301.11419v1.pdf filter=lfs diff=lfs merge=lfs -text
+wNFLT4oBgHgl3EQfki_9/content/2301.12116v1.pdf filter=lfs diff=lfs merge=lfs -text
+EdE5T4oBgHgl3EQfUw9y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+fNAyT4oBgHgl3EQfjvgl/content/2301.00419v1.pdf filter=lfs diff=lfs merge=lfs -text
+G9A0T4oBgHgl3EQfBv-A/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+CNE1T4oBgHgl3EQfDwNk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+F9E2T4oBgHgl3EQf-Qn4/content/2301.04238v1.pdf filter=lfs diff=lfs merge=lfs -text
+I9A0T4oBgHgl3EQfCP_M/content/2301.01988v1.pdf filter=lfs diff=lfs merge=lfs -text
+SNE3T4oBgHgl3EQfzAs2/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+jdAzT4oBgHgl3EQf4_6S/content/2301.01853v1.pdf filter=lfs diff=lfs merge=lfs -text
+NNAyT4oBgHgl3EQf6_pQ/content/2301.00830v1.pdf filter=lfs diff=lfs merge=lfs -text
+QtAyT4oBgHgl3EQfU_dt/content/2301.00135v1.pdf filter=lfs diff=lfs merge=lfs -text
+BtE5T4oBgHgl3EQfTQ-D/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+wNFLT4oBgHgl3EQfki_9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NNE0T4oBgHgl3EQf0QK5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
09FAT4oBgHgl3EQfjh3M/content/tmp_files/2301.08606v1.pdf.txt
ADDED
@@ -0,0 +1,1117 @@
Data Augmentation for Modeling Human Personality: The Dexter Machine

Yair Neuman, Vladyslav Kozhukhov and Dan Vilenchik

January 23, 2023

Abstract

Modeling human personality is important for several AI challenges, from the engineering of artificial psychotherapists to the design of persona bots. However, the field of computational personality analysis relies heavily on labeled data, which may be expensive, difficult, or impossible to obtain. This problem is amplified when dealing with rare personality types or disorders (e.g., the anti-social psychopathic personality disorder). In this context, we developed a text-based data augmentation approach for human personality (PEDANT). PEDANT does not rely on the common type of labeled data but on a generative pre-trained model (GPT) combined with domain expertise. Testing the methodology on three different datasets provides results that support the quality of the generated data.
1 Introduction

Personality concerns the individual's relatively stable pattern of thoughts, emotions and behaviors [1]. There are various personality theories, from the Big Five [2] to Affective Neuroscience [3] and Mischel's contextual approach to personality [4]. In this paper, we adhere to the clinical approach represented by the Psychodynamic Diagnostic Manual (PDM) [5] and SWAP [6], which is highly relevant for diagnosis and research [7]. According to the PDM approach, personality types are stable configurations characterized by key features such as the individual's core beliefs about self and others. For instance, a depressive personality is characterized by self-criticism and accompanied by the belief that "something is essentially bad about me".

Current computational personality research is almost exclusively focused on feature-based, data-driven classification involving the prediction of a personality class label. Accomplishing such tasks relies on the availability of a large amount of high-quality labeled data (e.g., [8, 9, 10]). However, obtaining such data may be expensive, difficult, or impossible for various reasons. For instance, the prevalence of the anti-social psychopathic personality disorder in the population is low [11, 12, 13], and it is currently impossible to gain access to a massive dataset of labeled texts produced by clinically diagnosed psychopaths. High-quality diagnostic procedures, such as SWAP, are costly, as they require human expertise and significant time to complete. While self-reported questionnaires for personality assessment are available, they rely on the collaboration of the diagnosed individual and their ability to provide a valid self-assessment, which in the case of the anti-social personality disorder, for instance, is not trivial to obtain.

arXiv:2301.08606v1 [cs.CL] 20 Jan 2023

In the face of these challenges, a natural solution for data scarcity is data augmentation, intensively developed in computer vision but "relatively under-explored" in NLP, where the generation of effective augmented examples is "less obvious" [14, 15]. To illustrate the challenges in textual data augmentation, we ran the SOTA data-augmentation pipeline LAMBADA [16] to generate 200 sentences out of a seed set of 20 sentences (see appendix) expressing a clear psychopathic signature. This attempt to produce artificial "psychopathic" sentences resulted in only 100 unique sentences, the vast majority of which were either one of the seed sentences or a simple paraphrase thereof.

Data augmentation is typically viewed as the process of increasing the amount of data by adding slightly modified copies of already existing labeled data. In some cases, there is no labeled data at all, or only a very small quantity, which precludes proper augmentation (as our experiment with LAMBADA suggests). In this paper, we offer a solution to these cases by using unlabeled data and adding domain-expert input to compensate for the absence of labeled data (one can view labeled data as domain-expert knowledge).

A constructive approach to personality modeling may be found in the revolutionary large language models recently introduced to NLP (e.g., GPT-2) [17, 18]. Recently, [19] and [20] showed that the GPT model, once fine-tuned, can be useful in the domain of personal conversations. Their approach led to substantial improvements on the PersonaChat dataset, showcasing the potential of exploiting large pre-trained generative models in the conversational domain. However, these advancements do not naively imply anything for modeling personality types, as the poor results obtained from LAMBADA, which is based on GPT technology, show. Indeed, personalized chit-chat models [21] use the notion of personalization (e.g., age), which is different from the psychodynamic approach used in this paper.
1.1 Our contribution

We present a novel personality data augmentation approach, PEDANT (PErsonality Data AugmeNTation), using (1) a generative pre-trained model (GPT) combined with (2) domain expertise (the domain expert is the first author, who has intensively studied and published about personality) while relying only on (3) unlabeled text.

PEDANT operates in two phases. In the first phase, unlabeled data relevant to the selected personality type is harvested from online resources; this data is then used to train a generative language model. In the second phase, the language model is repeatedly prompted to complete a set of seed sentences carefully crafted by the domain expert. All these completions are then filtered and ranked according to a scoring function that the domain expert pre-defined; the top k sentences are the output of PEDANT.

We implement PEDANT with regard to a specific personality type: the anti-social psychopathic personality [6]; we call this particular pipeline Dexter. This type of personality is suitable for validating our approach, as the prevalence of psychopathic personality disorder is extremely low and a labeled corpus of naturally produced texts of diagnosed psychopaths does not exist. The texts that we harvest for the first phase of PEDANT come from a few fictive characters from the cinema and TV (e.g., Dexter, the psychopath from the TV series "Dexter") and from Reddit forums such as r/psychopath. The second phase, where domain expertise is used, is described in detail in Sections 3.2 and 3.4.

We validated Dexter using a downstream text classification task, as is common in other works that deal with the evaluation of data augmentation pipelines [22, 16]. We used the data generated by Dexter to train a classifier and then tested it on three offensive-speech datasets that cover different dimensions of the psychopathic personality (e.g., lack of empathy, toxicity, and being manipulative).

For comparison, we tested two SOTA data augmentation pipelines, LAMBADA [16] and LeakGAN [23]. The Dexter dataset produced a classifier that ranked first (by a large gap) on all three datasets. The complete details of both experiments appear in Section 5.2.
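The downstream validation setup (training a classifier on generated data, then testing it on labeled data) can be sketched minimally as follows. This is only an illustration, not the classifier used in the paper: the training sentences are invented stand-ins for Dexter's output, and a simple bag-of-words nearest-centroid rule replaces whatever classifier the authors trained.

```python
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words counts over lowercased word tokens."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c * v.get(w, 0) for w, c in u.items())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def centroid(texts):
    """Summed bag-of-words vector of a set of texts (scale does not affect cosine)."""
    total = Counter()
    for t in texts:
        total.update(bow(t))
    return total

def classify(text, pos_centroid, neg_centroid):
    """Label 1 if the text is closer to the positive (generated) centroid."""
    v = bow(text)
    return 1 if cosine(v, pos_centroid) >= cosine(v, neg_centroid) else 0

# Invented sentences standing in for generated "psychopathic" vs. neutral training text.
generated = ["I want to hurt people and feel nothing.",
             "Everyone is weak and I use them."]
neutral = ["The weather is lovely today.",
           "I enjoy reading books with my friends."]
pos_c, neg_c = centroid(generated), centroid(neutral)

label = classify("I feel nothing when I hurt someone.", pos_c, neg_c)
```

The point of the setup is that the classifier never sees real labeled "psychopathic" text at training time; its test performance on labeled data therefore reflects the quality of the generated corpus.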
1.2 Illustration

It is a non-trivial task to evaluate the extent to which the resulting generative model (the outcome of Dexter) reflects the psychopathic mind. One way, mentioned above, is via a downstream task that uses data generated by the model. Another way is to have a host of personality domain experts chat with the model. While we did not have the resources to perform this expensive and laborious task, we invite the reader to peek into such a possible Q&A session and to judge for herself the change in personality.

Below is a comparison of the output of a GPT model before and after fine-tuning on the harvested psychopathic-related texts. In the examples below, GPT is prompted with psychopathic statements chosen by the domain expert, and its completion is recorded; a couple more examples are given in the appendix.

Statement: I act impulsively, without regard for the consequences of my actions.
GPT-2 before fine-tuning: I know when someone is suffering and I feel bad.
GPT-3 davinci: I have difficulty organizing tasks and activities. I have difficulty with time management.
GPT-3 curie: My friends are more important to me than my family. I dress in a way that is not in keeping with the standards of my community.
chatGPT: It's natural to act impulsively at times, but if you find that you frequently act without considering the consequences of your actions, it may be helpful to try to develop more self-control.
GPT-2 after fine-tuning: I have an abundance of rage, and I can turn to it with no consideration for consequences.

Statement: I feel like a hunter seeking a prey.
GPT-2 before fine-tuning: I don't want to be a bad person.
GPT-3 davinci: I want to let my heart feel.
GPT-3 curie: I'm lost in this forest! I'll be crushed by this forest!
chatGPT: It's important to remember that other people are not prey to be hunted. It's never okay to view or treat others as if they are objects or targets to be pursued or exploited.
GPT-2 after fine-tuning: I don't care about it, I just want to kill and eat.
2 Related work

Data augmentation methods have been explored to address the dataset-imbalance challenge in supervised classification tasks. Noise injection or attribute modification techniques were commonly applied to generate synthetic data for image and sound classification tasks [24, 25, 26]. However, such techniques do not extend to text due to the categorical nature of words and the sequential nature of text.

We locate our work in the context of text data augmentation [14, 27, 16]. "Previous-generation" textual data augmentation approaches focus on sample alteration [28, 29, 30, 31, 32], in which a single sentence is altered to generate a new sentence, primarily by synonym replacement. Presumably, methods that make only local changes will produce sentences with a structure similar to the original ones, thus yielding low corpus-level variability.

Other recent approaches to textual data augmentation that generate whole sentences rather than making a few local changes include variational autoencoding [33], paraphrasing [34], and methods based on generative adversarial networks [35, 23, 36].

Recent progress in NLP has been marked by the emergence of large language models (i.e., transformers) such as GPT-2 [18]. GPT-based language models scored high in open-domain dialogue generation tasks [20, 19, 37]. The data-augmentation pipeline presented in [16] uses GPT technology to generate themed synthetic text. The idea behind [16] involves fine-tuning a GPT model to a specific task using existing labeled data. Using the fine-tuned model and given a class label, new sentences for the class are generated. The sentences are filtered with a classifier trained on the original data.

While our pipeline is similar to [16] in flavor (a fine-tuning step followed by a filtering step), it differs in two key aspects. First, we use unlabeled data for the fine-tuning step. This allows us to fine-tune the GPT model with a large amount of, possibly slightly lower quality, data. [16] use labeled data for the fine-tuning step; thus, the quality of the augmentation depends on the amount of available labeled text. Second, our filtering is also done in an unsupervised manner, replacing the need for labeled data for training a classifier with the knowledge of a domain expert. These two key differences make our pipeline useful for data generation for rare classes, such as rare personality types, where labeled data is scarce or non-existent. Indeed, comparing the performance of [16] to Dexter corroborates the latter.
3 Methodology

The pipeline for generating data for a given personality type, illustrated in Figure 1, is composed of the following stages (the actual parameter values that we used are given in this general description):

1. Texts produced by a few fictive characters (e.g., Dexter) and secondary sources (e.g., Reddit forums discussing the personality style) are collected to form a preliminary dataset. Let D be that preliminary dataset.

2. A pre-trained language model is fine-tuned on D. Let G be the obtained model.

Figure 1: Pipeline for PEDANT. GPT-2 is trained and prompted to complete a carefully chosen seed of sentences. The completions are filtered and ranked using similarity to relevant words chosen using domain expertise.

3. A domain-expert hand-crafted set of s = 40 seed sentences representing the personality's beliefs about self/others is prepared; G is prompted to complete each seed c = 200 times, for a total of n = s·c = 8000 candidate sentences.

4. Based on domain expertise, a hand-crafted vector F, containing f words that are typical of the personality type, is assembled. The n candidate sentences are filtered and ranked according to their cosine similarity with F.

5. The top k = 2000 sentences compose the output.

We now describe how we customized this general pipeline to the psychopathic personality.
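Stages 4-5 can be sketched as follows. This is a minimal illustration under assumptions the paper leaves open: the keyword list and candidate sentences below are invented, and we assume a simple bag-of-words representation for computing cosine similarity with F (the paper does not specify how sentences are vectorized).

```python
import math
import re
from collections import Counter

def bow_vector(text):
    """Lowercased bag-of-words term counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c * v.get(w, 0) for w, c in u.items())
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def rank_candidates(candidates, keywords, k):
    """Rank candidate completions by cosine similarity to the expert keyword vector F; keep the top k."""
    f_vec = bow_vector(" ".join(keywords))
    scored = sorted(candidates, key=lambda s: cosine(bow_vector(s), f_vec), reverse=True)
    return scored[:k]

# Hypothetical expert keywords (the vector F) and GPT completions, for illustration only.
F = ["rage", "prey", "manipulate", "weak"]
candidates = [
    "I feel rage and see people as prey.",
    "I enjoy a quiet walk in the park.",
    "Human beings are weak; I manipulate them like prey.",
]
top = rank_candidates(candidates, F, k=2)
```

In the actual pipeline, n = 8000 candidates are scored this way and the k = 2000 highest-scoring sentences become the output.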
3.1 The preliminary datasets

Following [38, 39], we used data from movie scripts: the text produced by three well-known fictive psychopathic characters: the Joker in the movie "The Joker", Bateman in the movie "American Psycho", and Dexter from the TV series "Dexter". In addition, we collected all texts from Reddit discussion groups dealing with psychopathy (r/psychopath, r/sociopath, r/antisocial).

After cleaning the data by applying a spell checker [40] and removing emojis, duplicates, hyperlinks, and spam messages [41], the preliminary cleaned dataset consisted of 1,320,552 tokens.
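The cleaning steps above can be sketched roughly as follows. This is only an approximation: the spell checking [40] and spam filtering [41] steps rely on external tools and are omitted here, the emoji pattern covers only common Unicode ranges, and the example posts are invented.

```python
import re

URL_RE = re.compile(r"https?://\S+")
# Common emoji blocks; a rough approximation, not an exhaustive emoji pattern.
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def clean_corpus(posts):
    """Strip hyperlinks and emojis, normalize whitespace, and drop exact duplicates."""
    seen, cleaned = set(), []
    for post in posts:
        text = URL_RE.sub("", post)
        text = EMOJI_RE.sub("", text)
        text = " ".join(text.split())
        if text and text not in seen:
            seen.add(text)
            cleaned.append(text)
    return cleaned

# Invented posts for illustration.
posts = [
    "I feel nothing \U0001F608 see https://example.com/x",
    "I feel nothing \U0001F608 see https://example.com/x",  # exact duplicate
    "Nobody understands me",
]
result = clean_corpus(posts)
```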
3.2 The sentence-completion seed set

Our domain expert manually prepared 20 seed sentences representing the psychopath's "beliefs about self" (e.g., "I take advantage of others whenever I can") and 20 seed sentences representing the psychopath's "beliefs about others" (e.g., "Human beings are weak"). The complete list of seed sentences appears in the appendix.

The number 20 is somewhat arbitrary. While testing other seed sizes, we found that 20 was the minimal number that gave good results, considering computational constraints such as space and running time.
242 |
+
3.3
|
243 |
+
Training GPT-2 and generating sentences
|
244 |
+
Our starting point is the pre-trained GPT-2 with 1.5B parameters accessed via
|
245 |
+
the popular HuggingFace API [42]. We chose GPT-2 as it is currently one of
|
246 |
+
the most useful language generation model. (The newer GPT-3 is still not open
|
247 |
+
source, and it’s harder to work with, e.g., fine-tuning on a large text like our
|
248 |
+
preliminary dataset). Next, we fine-tuned the pre-trained GPT-2 model on the
|
249 |
+
entire preliminary dataset (the fictive characters text and the Reddit data) using
|
250 |
+
the task of predicting the next word of the sentence [18]. The parameters we
|
251 |
+
used were: learning rate=0.0001, model name=‘1558M’, batch size=4, optimizer
|
252 |
+
= ‘adafactor’, steps = 10000 and the cross entropy loss function.
|
253 |
+
We prompted the fine-tuned GPT-2 model on each of the 40 seed sentences
|
254 |
+
( see the appendix for the full list), producing 200 sentence completions for each
|
255 |
+
sentence. Using the experts’ judgment of two psychologists, we qualitatively
|
256 |
+
evaluated a random sample of these sentences. We concluded that the best com-
|
257 |
+
pletions were obtained with the following parameters: length=50,temperature=0.7,
|
258 |
+
top k = 50, top p = 0.90.
|
259 |
+
We used the free Google Colab resources with Tesla P100 GPUs for this
|
260 |
+
part.
|
261 |
+
3.4
|
262 |
+
Filtering and Ranking
|
263 |
+
We applied filtering and ranking to the 8000 sentence completions. First, we
|
264 |
+
removed sentences that (1) include the trivial words: psychopath, antisocial,
|
265 |
+
and sociopath; (2) are duplicates of other sentences; (3) contain less than three
|
266 |
+
words; (4) end with a stop word; (5) are emotionally neutral or have a higher
|
267 |
+
positive than a negative sentiment (we used NLTK to estimate the sentiment);
|
268 |
+
(6) are simple paraphrases of each other (via [43]).
|
269 |
+
For the ranking task, we identified words significantly collocated with the
|
270 |
+
target word ”psychopath” in the iWeb repository [44].
|
271 |
+
The domain expert
|
272 |
+
selected 28 to form a ”psychopathic vector” (see the appendix). Next, we used
|
273 |
+
the vectorial semantic approach for personality assessment [45] and measured
|
274 |
+
the cosine similarity between each filtered candidate sentence completion and
|
275 |
+
the psychopathy vector. For each of the 40 seed sentences, we selected the 50
|
276 |
+
completions that scored highest on the cosine similarity test. The output was a
|
277 |
+
set of 1735 synthetically generated sentences that are supposed to represent a
|
278 |
+
psychopathic mind (if for some seed less than 50 completions passed the filtering
|
279 |
+
step, we took all of them).
|
280 |
+
6
|
281 |
+
|
282 |
+
4
|
283 |
+
Data
|
284 |
+
Our setting inherently precludes the existence of a large labeled bench-marking
|
285 |
+
dataset of text written by clinically diagnosed psychopaths. However, as the
|
286 |
+
antisocial psychopathic personality is composed of several dimensions (e.g., lack
|
287 |
+
of empathy), we tested our approach on labeled datasets hypothesized to share
|
288 |
+
one dimension or more with this personality type.
|
289 |
+
4.1
|
290 |
+
Test Data
|
291 |
+
We now describe the three datasets that we’ve used to evaluate the performance
|
292 |
+
of Dexter and two other data augmentation pipelines.
|
293 |
+
Sexual predators. Sexual predators share with psychopaths at least two psy-
|
294 |
+
chological dimensions: being manipulative and lacking empathy, as indicated
|
295 |
+
by the correlation between sexual offending and psychopathy [46, 47, 48]. Our
|
296 |
+
first dataset was a labeled data set of texts produced by 142 sexual predators
|
297 |
+
and 97,689 non-predators [49].
|
298 |
+
Empathy. Psychopaths are characterized by a lack of empathy. Our second
|
299 |
+
data set consists of interactions between help seekers in an online call center
|
300 |
+
for mental health support [50]. Labeled texts of the mental health supporters
|
301 |
+
(responders) are provided. Responders are tagged according to three increasing
|
302 |
+
levels of empathy: “0” (N = 2037), “1” (N = 895), and “2” (N = 152). Unlike
|
303 |
+
the other two datasets, the empathy dataset does not contain a natural positive
|
304 |
+
class, as an empathy score of 0 does not necessarily imply a strong negative
|
305 |
+
personality.
|
306 |
+
Cyberbullying. Cyberbullying may have a clear psychopathic signature, given
|
307 |
+
the reported association between the psychopathic mind and sadism [51, 52].
|
308 |
+
We have used the labeled toxic-text subset of the cyberbullying dataset [53]
|
309 |
+
that contains 12,168 toxic vs. 14,874 non-toxic texts. Unlike the previous two
|
310 |
+
datasets, this one is labeled at the message level.
|
311 |
+
Each message consists of
|
312 |
+
several sentences, and the entire message is assigned the label “toxic” if there
|
313 |
+
are “enough” toxic sentences (the exact labeling procedure is described in the
|
314 |
+
original paper [53]).
|
315 |
+
4.2
|
316 |
+
Train Data
|
317 |
+
We now describe the datasets we used to fine-tune the BERT-base-uncased
|
318 |
+
model [54], which we then ran to classify the aforementioned test datasets. The
|
319 |
+
data statistics is summarized in Table 1.
|
320 |
+
The Dexter dataset. This dataset contains 3400 sentences; 1700 sentences
|
321 |
+
are the output of Dexter, which serve as the positive class, and 1700 sentences
|
322 |
+
from various Reddit discussion groups that serve as the negative class. These
|
323 |
+
sentences were selected by first sampling 8000 random sentences from various
|
324 |
+
Reddit groups, then filtering and cleaning them according to the same procedure
|
325 |
+
that was applied to the psychopathic texts. Finally, the 1700 sentences with the
|
326 |
+
lowest psychopathic score were chosen.
|
327 |
+
The Dexter-minus and the PRELIM dataset. These two datasets allow us
|
328 |
+
to evaluate the importance of the different stages in the Dexter pipeline (Figure
|
329 |
+
1). The Dexter-minus dataset follows the same pipeline as Dexter just that the
|
330 |
+
7
|
331 |
+
|
332 |
+
Dataset
|
333 |
+
#Pos.
|
334 |
+
#Neg
|
335 |
+
Sexual Predators
|
336 |
+
142
|
337 |
+
97,689
|
338 |
+
Empathy
|
339 |
+
2,037
|
340 |
+
152
|
341 |
+
Cyberbullying
|
342 |
+
12,168
|
343 |
+
14,784
|
344 |
+
Table 1: Summary statistics for the three datasets described in Section 4. The
|
345 |
+
number of samples in the positive and negative class are shown.
|
346 |
+
fine-tuning of the GPT is skipped. The PERLIM dataset shortcuts the GPT
|
347 |
+
step altogether and proceeds directly to the filtering and ranking step.
|
348 |
+
To compare the performance of Dexter against the two SOTA data augmentation pipelines, we created the following two synthetic datasets.

The LAMBADA dataset. To train LAMBADA, we used the Papers-With-Code recommended implementation of LAMBADA [55]. We augmented GPT-2 with two new classes, "#beliefs about others" and "#beliefs about self". Each class was seeded with the 20 sentences that our domain expert crafted (Section 3.2). All the training parameters and the code are available at [55]. We then used the LAMBADA pipeline to generate 17,000 sentences, out of which we chose the best 1700 (following the authors' recommendation [16] to generate 10x more sentences than needed). Specifically, we generated 8500 sentences from the "#beliefs about others" class and 8500 from "#beliefs about self" (with the following GPT-2 parameters: max length = 50, top k = 10, p = 0.85). We ranked the sentences the same way we ranked ours: by cosine similarity to the psychopathic vector of words (Section 3.4). The negative class of the LAMBADA dataset is the same as Dexter's.

The LeakGAN dataset. We trained LeakGAN using the official LeakGAN implementation [56]. LeakGAN is trained on text of the target class; to that end we used all the text in the preliminary dataset (Section 3.1) and the default parameters from the official implementation. We generated 1700 sentences with LeakGAN to serve as the positive class; the negative class is the same as Dexter's.

External competition datasets. This collection contains three gold-standard competition offensive speech datasets: OffenseEval [57], HatEval [58], and AbuseEval [59]. Each dataset contains roughly 10,000 labeled texts. We call the union of all three datasets the Golden dataset.
5 Evaluation

To evaluate Dexter we created a family of models using the aforementioned training datasets of Section 4.2. Each model is named X@BERT, meaning that the BERT-base-uncased model [54] was fine-tuned using dataset X. If a '+' is appended, as in X@BERT+, a preceding step of fine-tuning on OffenseEval [57] took place.
5.1 Evaluation procedure

All the test datasets mentioned in Section 4.1 and Table 1 are imbalanced to different degrees (the sexual predators dataset contains merely 142 predators among 97,831 users, roughly 0.15%). To allow comparison across the three datasets while avoiding the misleading artifacts that such imbalanced data introduces, we down-sampled the majority class to obtain balanced sets.

Model               Pred.   Emp.   Cyber.   Avg Rank
Dexter@BERT+            1      1        2       1.33
OffenseEval@BERT        3      2        1       2
Dexter@BERT             2      4        5       3.66
HateEval@BERT           6      3        6       5
Dexter-@BERT            4      5        7       5.33
Golden@BERT             7      7        3       5.66
AbuseEval@BERT          7      7        4       6
PRELIM@BERT             8      8        8       8

Table 2: Summary of Table 4. The Dexter@BERT variants are in bold.

Model               Pred.   Emp.   Cyber.   Avg Rank
Dexter@BERT+            1      1        1       1
Dexter@BERT             2      2        3       2.33
LAMBADA@BERT+           4      4        2       3.33
Dexter-@BERT            3      3        5       3.66
LeakGAN@BERT+           5      5        4       4.66
LAMBADA@BERT            7      6        6       6.33
LeakGAN@BERT            6      7        7       6.66

Table 3: Summary of Table 5. The Dexter@BERT variants are in bold.
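The down-sampling step can be sketched as follows; the fixed seed is an illustrative choice for reproducibility, not taken from the paper.

```python
import random

def downsample_balanced(pos, neg, seed=0):
    """Randomly down-sample the majority class to the minority's size."""
    rng = random.Random(seed)
    if len(pos) > len(neg):
        pos = rng.sample(pos, len(neg))
    else:
        neg = rng.sample(neg, len(pos))
    return pos, neg

# E.g., the sexual predators dataset: 142 positives vs 97,689 negatives.
p, n = downsample_balanced(list(range(142)), list(range(97689)))
print(len(p), len(n))  # 142 142
```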
The output of a BERT model is a number in [0, 1], the result of the last-layer activation unit (softmax in our case). This number may be thought of as the probability that BERT assigns to the instance belonging to the positive class (in our case, "psychopath"). We define the PsychoScore of a user as the average output of the model over all the sentences produced by that user (each sentence is scored separately by the model). It is common practice to feed the BERT score into a simple classifier, such as an SVM, to find the optimal cut-off for the binary classification task [60].

To evaluate the models on each test dataset, we computed 5-fold cross-validation F1 and macro F1 scores. Each fold consisted of n = 100 randomly sampled instances from each class, split into 80% train and 20% test. We trained a soft-margin kernel SVM (with the default Python sklearn parameters, C = 1 and an RBF kernel) on the users' PsychoScores and the corresponding labels.
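The PsychoScore aggregation and the macro F1 metric can be sketched in a few lines; a fixed 0.5 cut-off stands in for the sklearn SVM the paper fits on the scores, and the toy user scores are invented for the example.

```python
def psycho_score(sentence_scores):
    """PsychoScore of a user: mean per-sentence BERT probability."""
    return sum(sentence_scores) / len(sentence_scores)

def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of the per-class F1 scores (binary case)."""
    scores = []
    for cls in (0, 1):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

# Two toy users; the paper feeds such scores into an SVM instead of
# thresholding at 0.5.
users = {"u1": [0.9, 0.8, 0.7], "u2": [0.1, 0.2]}
pred = [int(psycho_score(s) > 0.5) for s in users.values()]
print(pred, macro_f1([1, 0], pred))  # [1, 0] 1.0
```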
5.2 Results

The results of running the X@BERT models on the test datasets of Section 4.1 are summarized in Tables 4 and 5. Table 4 reports the comparison against the pre-trained offensive speech models, while Table 5 reports the comparison against LeakGAN and LAMBADA. Table 2 summarizes Table 4 with the overall average ranking across the three datasets, and Table 3 similarly summarizes Table 5. Both show that the model Dexter@BERT+ ranked first, with Dexter@BERT second (Table 5) and third (Table 4). The following key conclusions can be read from the tables:
Data set                        Model               Precision   Recall   F1 score   Macro F1 score
Sexual Predator Identification  Dexter@BERT+             0.92     0.87       0.89     0.91 ± 0.029
Competition [49]                Dexter@BERT              0.91     0.86       0.88     0.90 ± 0.037
                                OffenseEval@BERT         0.89     0.87       0.88     0.90 ± 0.035
                                Dexter-@BERT             0.80     0.93       0.86     0.88 ± 0.043
                                HateEval@BERT            0.95     0.50       0.65     0.75 ± 0.011
                                Golden@BERT              0.88     0.50       0.63     0.69 ± 0.095
                                AbuseEval@BERT           0.73     0.58       0.51     0.53 ± 0.133
                                PRELIM@BERT              0.51     1.00       0.68     0.38 ± 0.024
Empathy [50]                    Dexter@BERT+             0.66     0.80       0.72     0.70 ± 0.083
                                OffenseEval@BERT         0.61     0.81       0.69     0.64 ± 0.081
                                HateEval@BERT            0.59     0.65       0.61     0.59 ± 0.063
                                Dexter@BERT              0.55     0.88       0.67     0.54 ± 0.077
                                Dexter-@BERT             0.51     0.65       0.57     0.52 ± 0.075
                                Golden@BERT              0.42     0.93       0.58     0.38 ± 0.061
                                AbuseEval@BERT           0.44     0.85       0.58     0.37 ± 0.018
                                PRELIM@BERT              0.16     0.27       0.22     0.27 ± 0.080
Cyberbullying [53]              OffenseEval@BERT         0.92     0.80       0.85     0.88 ± 0.044
                                Dexter@BERT+             0.96     0.72       0.83     0.87 ± 0.048
                                Golden@BERT              0.84     0.87       0.85     0.87 ± 0.041
                                AbuseEval@BERT           0.89     0.78       0.82     0.83 ± 0.050
                                Dexter@BERT              0.93     0.60       0.72     0.77 ± 0.051
                                HateEval@BERT            0.86     0.61       0.71     0.78 ± 0.075
                                Dexter-@BERT             0.91     0.56       0.68     0.77 ± 0.080
                                PRELIM@BERT              0.80     0.57       0.67     0.70 ± 0.012

Table 4: Results of the various models on the test data sets, sorted according to macro F1 score.
• The results for the sexual predators place Dexter@BERT+ and Dexter@BERT at the top, both with respect to the other data augmentation pipelines (Table 5) and with respect to the abusive speech BERT models (Table 4). In fact, both LAMBADA@BERT+ and LeakGAN@BERT+ obtained worse results than the baseline OffenseEval@BERT (F1 score of 0.88 vs. 0.8 and lower).

• The results for the empathy dataset in Table 4 show that Dexter@BERT+ obtained the highest F1 and macro F1 scores. In Table 5 we see that the performance of Dexter@BERT and its derivatives is far better than that of the other two pipelines. We also observe a poorer overall performance than on the other two datasets, in accordance with the absence of a natural positive class.

• The results for the cyberbullying dataset in Table 4 show that Dexter@BERT+ scored at the top (macro F1 score 0.87), together with OffenseEval@BERT (0.88) and AbuseEval@BERT (0.87). In Table 5 we again see that Dexter@BERT+ came first, although this time the gap from LAMBADA@BERT+ is small.

• The performance of Dexter@BERT+ is similar to OffenseEval@BERT on the sexual predators and cyberbullying datasets. This is to be expected, as these datasets have a clear offensive speech element. The more telling result is the larger gap on the empathy dataset, 0.70 vs. 0.64 in macro F1 score. Indeed, lack of empathy has more to do with the psychopathic mind than offensive speech does.

• One can look at our results through the lens of transfer learning: our dataset was successfully used to facilitate transfer from the task of offensive speech detection to the task of predicting various aspects associated with the psychopathic personality.

Dataset                         Model               Precision   Recall   F1 score   Macro F1 score
Sexual Predator Identification  Dexter@BERT+             0.92     0.87       0.89     0.91 ± 0.029
Competition [49]                Dexter@BERT              0.91     0.86       0.88     0.90 ± 0.037
                                Dexter-@BERT             0.80     0.93       0.86     0.88 ± 0.043
                                LAMBADA@BERT+            0.85     0.73       0.77     0.80 ± 0.074
                                LeakGAN@BERT+            0.55     0.99       0.71     0.51 ± 0.035
                                LeakGAN@BERT             0.53     0.85       0.65     0.51 ± 0.041
                                LAMBADA@BERT             0.47     0.80       0.51     0.30 ± 0.019
Empathy [50]                    Dexter@BERT+             0.66     0.80       0.72     0.70 ± 0.083
                                Dexter@BERT              0.55     0.88       0.67     0.54 ± 0.077
                                Dexter-@BERT             0.51     0.65       0.57     0.52 ± 0.075
                                LAMBADA@BERT+            0.40     0.67       0.50     0.33 ± 0.071
                                LeakGAN@BERT+            0.30     0.76       0.41     0.31 ± 0.122
                                LAMBADA@BERT             0.81     0.35       0.40     0.50 ± 0.145
                                LeakGAN@BERT             0.28     0.53       0.36     0.30 ± 0.040
Cyberbullying [53]              Dexter@BERT+             0.96     0.72       0.83     0.87 ± 0.048
                                LAMBADA@BERT+            0.95     0.70       0.80     0.83 ± 0.060
                                Dexter@BERT              0.93     0.60       0.72     0.77 ± 0.051
                                LeakGAN@BERT+            0.91     0.56       0.69     0.71 ± 0.064
                                Dexter-@BERT             0.91     0.56       0.68     0.77 ± 0.080
                                LAMBADA@BERT             0.88     0.54       0.66     0.71 ± 0.078
                                LeakGAN@BERT             0.97     0.47       0.62     0.68 ± 0.114

Table 5: Results of the various models that were trained by different text augmentation techniques, sorted according to macro F1 score.
6 Discussion

This paper presents a new unsupervised approach to personality data augmentation (PEDANT), trading labeled data for domain expertise. We implement it in a specific pipeline that generates sentences with a psychopathic signature (Dexter). One could ask whether it is feasible to assemble a labeled dataset via platforms such as Amazon Mechanical Turk. The answer is probably no, as domain expertise in the field of personality is required to correctly label the data. Our work offers a scalable and feasible data augmentation pipeline that circumvents such caveats by taking input from a domain expert in the later stages of the pipeline rather than at the beginning (the data collection step).

The clear conclusion from the evaluation experiments we ran is that our pipeline produced synthetic data of better quality than the other two pipelines ([16] and [23]), highlighting the point that not all data augmentation tasks were born equal. The task of generating synthetic data about flight and travel issues (the examples from the LAMBADA paper) is not the same as generating personality-type text, in the same way that generating a synthetic dog picture is not the same as generating a CT scan of a brain with a tumor in order to train medical students to read such images.

We expect that our pipeline can be adapted to other domains where high-quality labeled data is lacking and hard to obtain by crowd-sourcing: suicide, school shooters, etc.
References

[1] Philip J Corr and Gerald Ed Matthews. The Cambridge handbook of personality psychology. Cambridge University Press, 2020.

[2] Lewis R Goldberg. An alternative "description of personality": the big-five factor structure. Journal of Personality and Social Psychology, 59(6):1216, 1990.

[3] Kenneth L Davis and Jaak Panksepp. The brain's emotional foundations of human personality and the affective neuroscience personality scales. Neuroscience & Biobehavioral Reviews, 35(9):1946–1958, 2011.

[4] Walter Mischel. Toward an integrative science of the person. Annu. Rev. Psychol., 55:1–22, 2004.

[5] Vittorio Lingiardi and Nancy McWilliams. Psychodynamic diagnostic manual: PDM-2. Guilford Publications, 2017.

[6] Jonathan Shedler and Drew Westen. The Shedler–Westen assessment procedure (SWAP): making personality diagnosis clinically meaningful. Journal of Personality Assessment, 89(1):41–55, 2007.

[7] Vittorio Lingiardi, Nancy McWilliams, Robert F Bornstein, Francesco Gazzillo, and Robert M Gordon. The psychodynamic diagnostic manual version 2 (PDM-2): Assessing patients for improved clinical practice and research. Psychoanalytic Psychology, 32(1):94, 2015.

[8] Yair Neuman. Computational personality analysis: Introduction, practical applications and novel directions. Springer, 2016.

[9] David E Losada and Fabio Crestani. A test collection for research on depression and language use. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 28–39. Springer, 2016.

[10] Yoram Bachrach, Michal Kosinski, Thore Graepel, Pushmeet Kohli, and David Stillwell. Personality and patterns of Facebook usage. In Proceedings of the 4th Annual ACM Web Science Conference, pages 24–32, 2012.

[11] Ana Sanz-García, Clara Gesteira, Jesús Sanz, and María Paz García-Vera. Prevalence of psychopathy in the general adult population: A systematic review and meta-analysis. Frontiers in Psychology, page 3278, 2021.

[12] Kimberly B Werner, Lauren R Few, and Kathleen K Bucholz. Epidemiology, comorbidity, and behavioral genetics of antisocial personality disorder and psychopathy. Psychiatric Annals, 45(4):195–199, 2015.

[13] Katherine J Holzer, Michael G Vaughn, Travis M Loux, Michael A Mancini, Noelle E Fearn, and Cara L Wallace. Prevalence and correlates of antisocial personality disorder in older adults. Aging & Mental Health, pages 1–10, 2020.

[14] Steven Y Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy. A survey of data augmentation approaches for NLP. arXiv preprint arXiv:2105.03075, 2021.

[15] Tri Dao, Albert Gu, Alexander Ratner, Virginia Smith, Chris De Sa, and Christopher Ré. A kernel theory of modern data augmentation. In International Conference on Machine Learning, pages 1528–1537. PMLR, 2019.

[16] Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. Do not have enough data? Deep learning to the rescue! In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7383–7390, 2020.

[17] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018.

[18] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.

[19] Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. TransferTransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149, 2019.

[20] Sergey Golovanov, Rauf Kurbanov, Sergey Nikolenko, Kyryl Truskovskyi, Alexander Tselousov, and Thomas Wolf. Large-scale transfer learning for natural language generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6053–6058, 2019.

[21] Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243, 2018.

[22] Sebastian Ruder and Barbara Plank. Strong baselines for neural semi-supervised learning under domain shift. arXiv preprint arXiv:1804.09530, 2018.

[23] Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. Long text generation via adversarial training with leaked information. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

[24] Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):1–48, 2019.

[25] Toan Tran, Trung Pham, Gustavo Carneiro, Lyle Palmer, and Ian Reid. A Bayesian data augmentation approach for learning deep models. Advances in Neural Information Processing Systems, 30, 2017.

[26] Justin Salamon and Juan Pablo Bello. Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Processing Letters, 24(3):279–283, 2017.

[27] Connor Shorten, Taghi M Khoshgoftaar, and Borko Furht. Text data augmentation for deep learning. Journal of Big Data, 8(1):1–34, 2021.

[28] Sosuke Kobayashi. Contextual augmentation: Data augmentation by words with paradigmatic relations. arXiv preprint arXiv:1805.06201, 2018.

[29] Jonas Mueller and Aditya Thyagarajan. Siamese recurrent architectures for learning sentence similarity. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.

[30] Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. Conditional BERT contextual augmentation. In International Conference on Computational Science, pages 84–95. Springer, 2019.

[31] Georgios Rizos, Konstantin Hemker, and Björn Schuller. Augment to prevent: short-text data augmentation in deep learning for hate-speech classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 991–1000, 2019.

[32] Mai Ibrahim, Marwan Torki, and Nagwa El-Makky. Imbalanced toxic comments classification using data augmentation and deep learning. In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 875–878. IEEE, 2018.

[33] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

[34] Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha Talukdar. Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3609–3619, 2019.

[35] Fabio Henrique Kiyoiti dos Santos Tanaka and Claus Aranha. Data augmentation using GANs. arXiv preprint arXiv:1904.09135, 2019.

[36] Rui Cao and Roy Ka-Wei Lee. HateGAN: Adversarial generative-based data augmentation for hate speech detection. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6327–6338, Barcelona, Spain (Online), December 2020. International Committee on Computational Linguistics.

[37] Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. DialoGPT: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536, 2019.

[38] Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, and Joelle Pineau. A survey of available corpora for building data-driven dialogue systems. arXiv preprint arXiv:1512.05742, 2015.

[39] Glen Coppersmith, Mark Dredze, and Craig Harman. Quantifying mental health signals in Twitter. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 51–60, 2014.

[40] Filip Sondej. Autocorrect. https://github.com/filyp/autocorrect, 2022.

[41] Lorenz Leitner. redditcleaner. https://github.com/LoLei/redditcleaner, 2020.

[42] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019.

[43] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using siamese BERT-networks. arXiv preprint arXiv:1908.10084, 2019.

[44] Mark Davies and Jong-Bok Kim. The advantages and challenges of "big data": Insights from the 14 billion word iWeb corpus. Linguistic Research, 36(1):1–34, 2019.

[45] Yair Neuman and Yochai Cohen. A vectorial semantics approach to personality assessment. Scientific Reports, 4(1):1–6, 2014.

[46] Heather M Gretton, Michelle McBride, Robert D Hare, Roy O'Shaughnessy, and Gary Kumka. Psychopathy and recidivism in adolescent sex offenders. Criminal Justice and Behavior, 28(4):427–449, 2001.

[47] Sonja Krstic, Craig S Neumann, Sandeep Roy, Carrie A Robertson, Raymond A Knight, and Robert D Hare. Using latent variable- and person-centered approaches to examine the role of psychopathic traits in sex offenders. Personality Disorders: Theory, Research, and Treatment, 9(3):207, 2018.

[48] Ji Seun Sohn, Adrian Raine, and Soo Jung Lee. The utility of the psychopathy checklist-revised (PCL-R) facet and item scores in predicting violent recidivism. Aggressive Behavior, 46(6):508–515, 2020.

[49] Giacomo Inches and Fabio Crestani. Overview of the international sexual predator identification competition at PAN-2012. In CLEF (Online Working Notes/Labs/Workshop), volume 30, 2012.

[50] Ashish Sharma, Adam S Miner, David C Atkins, and Tim Althoff. A computational approach to understanding empathy expressed in text-based mental health support. arXiv preprint arXiv:2009.08441, 2020.

[51] Myrthe Meere and Vincent Egan. Everyday sadism, the dark triad, personality, and disgust sensitivity. Personality and Individual Differences, 112:157–161, 2017.

[52] Natalie Sest and Evita March. Constructing the cyber-troll: Psychopathy, sadism, and empathy. Personality and Individual Differences, 119:69–72, 2017.

[53] Fatma Elsafoury. Cyberbullying datasets. Mendeley Data, 2020.

[54] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert:
|
992 |
+
Pre-training of deep bidirectional transformers for language understanding.
|
993 |
+
arXiv preprint arXiv:1810.04805, 2018.
|
994 |
+
[55] Edward Ma. Nlp augmentation. https://github.com/makcedward/nlpaug,
|
995 |
+
2019.
|
996 |
+
[56] Jiaxian Guo. Leakgan. https://github.com/CR-Gjx/LeakGAN, 2018.
|
997 |
+
[57] Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura
|
998 |
+
Farra, and Ritesh Kumar.
|
999 |
+
Semeval-2019 task 6: Identifying and cate-
|
1000 |
+
gorizing offensive language in social media (offenseval).
|
1001 |
+
arXiv preprint
|
1002 |
+
arXiv:1903.08983, 2019.
|
1003 |
+
[58] Valerio Basile, Cristina Bosco, Elisabetta Fersini, Nozza Debora, Viviana
|
1004 |
+
Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, Manuela Sanguinetti,
|
1005 |
+
et al. Semeval-2019 task 5: Multilingual detection of hate speech against
|
1006 |
+
immigrants and women in twitter.
|
1007 |
+
In 13th International Workshop on
|
1008 |
+
Semantic Evaluation, pages 54–63. Association for Computational Linguis-
|
1009 |
+
tics, 2019.
|
1010 |
+
[59] Tommaso Caselli, Valerio Basile, Jelena Mitrovi´c, Inga Kartoziya, and
|
1011 |
+
Michael Granitzer. I feel offended, don’t be abusive! implicit/explicit mes-
|
1012 |
+
sages in offensive and abusive language. In Proceedings of the 12th language
|
1013 |
+
resources and evaluation conference, pages 6193–6202, 2020.
|
1014 |
+
[60] Jennifer S Raj, J Vijitha Ananthi, et al. Recurrent neural networks and
|
1015 |
+
nonlinear prediction in support vector machines. Journal of Soft Computing
|
1016 |
+
Paradigm (JSCP), 1(01):33–40, 2019.
|
1017 |
+
A Sentence completion examples

Each example consists of a statement that serves as a prompt for GPT-2 and GPT-3, together with its completions. Two completions are given: before fine-tuning (GPT-2, GPT-3) and after fine-tuning on the preliminary dataset (GPT-2 only). The statements were chosen to entail a natural discriminatory completion between a normal and a psychopathic personality. The statement comes first, and the completions follow.

I take advantage of others whenever I can
(GPT-2 before) I know how to get out of trouble.
(GPT-3 davinci) said he, "but they generally take advantage of me. It is the way of the world." "I am glad to hear you say so," said Holmes, laughing; "it is a most comforting reflection.".
(GPT-3 curie) and I am a very strong negotiator. Yes, I wouldn't say that I am perfect, but I do try to do the right thing by all of my partners. I am not a fan of people who are selfish, narcissistic or narcissistic.
(GPT-2 after) I have a very low empathy for others and I am constantly manipulating people to get what I want.

People are violent
(GPT-2 before) I don't think I was ever taught to be violent.
(GPT-3 davinci) People are not moral. People are amoral. People are depraved." The senator was bullish on the prospects of the Republican Party. "The future is ours," he said, to loud applause. "
(GPT-3 curie) and stupid, but they're not all violent and stupid, and they're not all crazy. I don't know what the hell happened. But I do know that this is not a good thing. Things have been getting better in this country
(GPT-2 after) They are the ones who kill animals for fun, rape and torture people, and steal and rob.

People are selfish
(GPT-2 before) I don't think we are selfish.
(GPT-3 davinci) they don't think about what they're doing," said the 17-year-old. "They take the time to stop and think about what they're doing, maybe this wouldn't happen
(GPT-3 curie) It's a part of human nature. But, most importantly, it's a part of the human condition." "This is the fundamental question of all human history," he continued. "Do I believe that people are fundamentally selfish?"
(GPT-2 after) It's just that, when we see someone in pain, we don't really want to help him.

B Psychopathic Vector

28 words that are significantly collocated with the target word "psychopath" in the iWeb corpus. These words constitute the "psychopathic vector" used for ranking: psychopath, sociopath, killer, criminal, murderer, victim, empathy, prison, monster, liar, rapist, violent, dangerous, sexual, antisocial, sadistic, narcissistic, evil, insane, manipulative, charming, kill, manipulate, torture, hunt, rape, terrorize, terrify.
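As a rough illustration of how a word list such as the psychopathic vector above can drive a similarity ranking, here is a minimal sketch. The 3-dimensional embeddings, the two-word subset, and the `rank` helper are invented for illustration only; the actual pipeline uses the full set of iWeb collocates and real embedding models.

```python
import numpy as np

# Toy word embeddings (invented); real embeddings would come from a
# pretrained model, not from hand-written 3-vectors.
emb = {
    "killer":  np.array([0.9, 0.1, 0.0]),
    "violent": np.array([0.8, 0.0, 0.1]),
    "kind":    np.array([0.0, 0.9, 0.4]),
}

# Average the embeddings of a (here: tiny) subset of the 28 collocates.
psychopathic_words = ["killer", "violent"]
psycho_vec = np.mean([emb[w] for w in psychopathic_words], axis=0)

def cosine(u, v):
    """Cosine similarity of two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank(candidates):
    """Rank candidate words by similarity to the psychopathic vector."""
    return sorted(candidates, key=lambda w: cosine(emb[w], psycho_vec),
                  reverse=True)
```

For example, `rank(["kind", "killer"])` places "killer" first, since its embedding is far closer to the averaged vector.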
C Seed Sentences

40 seed sentences for completion by GPT: 20 sentences about self, and 20 about others.

Beliefs about self
I take advantage of others whenever I can.
I experience no remorse for harm or injury I have caused to others.
I enjoy manipulating others' emotions to get what I want.
I am unconcerned with the consequences of my actions.
I feel invulnerable and that nothing can hurt me.
I have no interest in the rights, property, or safety of others.
I feel little empathy to the feelings of others.
I don't understand others' needs and feelings.
I act impulsively, without regard for the consequences of my actions.
I am unreliable and irresponsible.
I engage in unlawful or criminal behaviour.
I tend to be angry or hostile toward others.
I tend to get into power struggles with other people.
I gain pleasure and satisfaction by being sadistic or aggressive toward others.
I blame others for my failures or shortcomings and believe my problems are caused by external factors.
I don't understand my behaviour and motives.
I don't feel shame or remorse.
I am seeking thrill and excitement.
I usually feel empty and bored.
I feel like a hunter seeking a prey.

Beliefs about others
People are selfish.
Human beings are greedy.
The majority of people are cruel.
The world is full of inconsiderate people
Most people are childish
Most people are arrogant
People I know are irresponsible
People are manipulative
Human beings are deceptive
The majority of people are abusive
The majority of people are dangerous
Most people are exploitative
Most people are untrustworthy
People are violent
People are vulnerable
Human beings are weak
The majority of people are helpless
People are predatory
Most people are an easy prey
The human condition is weak and vulnerable to predation
09FAT4oBgHgl3EQfjh3M/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
0dFKT4oBgHgl3EQfNy2J/content/tmp_files/2301.11756v1.pdf.txt
ADDED
@@ -0,0 +1,519 @@
arXiv:2301.11756v1 [math.AC] 27 Jan 2023

A comment on the structure of graded modules over graded principal ideal domains in the context of persistent homology

Clara Löh

January 30, 2023

Abstract. The literature in persistent homology often refers to a "structure theorem for finitely generated graded modules over a graded principal ideal domain". We clarify the nature of this structure theorem in this context.

1 Introduction

The persistent homology with field coefficients of finite type filtrations can be described in terms of barcodes. Zomorodian and Carlsson promoted the elegant idea to view persistent homology with coefficients in a field K as a graded module over the graded polynomial ring K[T] [ZC05]. They then suggest a general structure theorem for finitely generated graded modules over graded principal ideal domains [ZC05, Theorem 2.1]. Applying this structure theorem to the graded polynomial ring K[T] gives a graded elementary divisor decomposition of persistent homology, which can be reinterpreted as barcodes [CZCG04] or, equivalently, as persistence diagrams [EH10].

However, there does not seem to be a proof of this general structure theorem in the literature in the form stated by Zomorodian and Carlsson. As this theorem is quoted multiple times in work on persistent homology and as it is a potential source of confusion, the goal of this expository note is to clarify the nature of this structure theorem (even though it might be clear to the experts).

We first give a precise formulation of the structure theorem; this formulation slightly differs from the statement of Zomorodian and Carlsson [ZC05, Theorem 2.1] (for a reason explained below):

Theorem 1.1 (structure theorem for graded modules over graded PIDs). Let R be a graded principal ideal domain with R ≠ R_0 and let M be a finitely generated graded R-module. Then M admits a graded elementary divisor decomposition (Definition 2.8) and the signatures of all such graded decompositions of M coincide.

The key observation of this note is that in fact every N-graded principal ideal domain is
• a principal ideal domain with the 0-grading or
• a polynomial ring over a field with a multiple of the canonical grading.

(© C. Löh 2023. This work was supported by the CRC 1085 Higher Invariants (Universität Regensburg, funded by the DFG). MSC 2010 classification: 13C05, 55N31.)

The proof is elementary [VO83, Remark 2.7] (Proposition 3.1).

For trivially graded principal ideal domains, in general, the graded elementary divisor version of the structure theorem does not hold (Example 4.1). This explains the additional hypothesis R ≠ R_0 in Theorem 1.1. In contrast, the graded prime power version of the structure theorem also holds if the grading is trivial (Proposition 4.2).

For polynomial rings, the graded uniqueness part can be deduced in a straightforward way from the ungraded uniqueness. However, for the graded existence part, there does not seem to be a "generic" derivation from the ungraded existence result – the difficulty being the graded direct sum splitting (as exhibited in the case of the trivially graded ring Z). Finding such a splitting needs a careful inductive approach that establishes that the torsion submodule is graded and that avoids dividing out cyclic submodules in bad position/order. The graded existence part can be proved using specific properties of polynomial rings over fields.

In conclusion, the structure theorem for graded modules over graded principal ideal domains gives a helpful structural perspective on barcodes for persistent homology (and also for the computation of persistent homology [ZC05, SVJ13]), but its scope does not seem to go beyond the special case that is needed for persistent homology and it does not seem to provide a shortcut avoiding special properties of polynomial rings over fields.

Generalisations of N-graded persistent homology such as zigzag persistence or R-graded persistence (or more general indexing situations) are usually based on arguments from quiver representations [CdS10, BCB20]. Similarly to the N-graded case, in these settings, it is also essential that the underlying coefficients are a field.

Organisation of this article

Basic notions on graded rings and modules are recalled in Section 2. In Section 3, we prove Proposition 3.1. The case of principal ideal domains with trivial gradings is considered in Section 4; the case of polynomial rings over fields is discussed in Section 5, where we give an elementary proof of the structure theorem.

Acknowledgements

I would like to thank Ulrich Bunke for helpful discussions on abstract methods for the decomposition of graded modules and Luigi Caputi for valuable feedback.
2 Graded rings and modules

We recall basic notions on graded rings and modules and decompositions of graded modules. As usual in (discrete) persistence, we consider only the case of discrete non-negative gradings, i.e., gradings over N.

Definition 2.1 (graded ring). A graded ring is a pair (R, (R_n)_{n∈N}), where R is a ring and the R_n are additive subgroups of R with the following properties:
• The additive group (R, +) is the internal direct sum of the (R_n)_{n∈N}.
• For all n, m ∈ N, we have R_n · R_m ⊂ R_{n+m}.
For n ∈ N, the elements in R_n are called homogeneous of degree n. An element of R is homogeneous if there exists an n ∈ N such that the element is homogeneous of degree n.
A graded ring is a graded principal ideal domain if it is a domain and every homogeneous ideal (i.e., generated by homogeneous elements) is generated by a single element.

Example 2.2 (polynomial rings). Let K be a ring. Then the usual degree on monomials in the polynomial ring K[T] turns K[T] into a graded ring via the canonical isomorphism

  K[T] \cong_{\mathsf{Ab}} \bigoplus_{n \in \mathbb{N}} K \cdot T^n.

We will refer to this as the canonical grading on K[T]. If K is a field, then K[T] is a principal ideal domain (graded and ungraded).

Definition 2.3 (graded module). Let R be a graded ring. A graded module over R is a pair (M, (M_n)_{n∈N}), consisting of an R-module M and additive subgroups M_n of M with the following properties:
• The additive group (M, +) is the internal direct sum of the (M_n)_{n∈N}.
• For all n, m ∈ N, we have R_n · M_m ⊂ M_{n+m}.
Elements of M_m are called homogeneous of degree m.

Remark 2.4 (the category of graded modules). Let R be a graded ring. Homomorphisms between graded R-modules are R-linear maps that preserve the grading. Graded R-modules and homomorphisms of graded R-modules form the category RMod_* of graded R-modules.

Example 2.5 (shifted graded modules). Let R be a graded ring, let M be a graded module over R, and let n ∈ N. Then Σ^n M denotes the graded R-module given by the n-shifted decomposition

  0 \oplus \dots \oplus 0 \oplus \bigoplus_{j \in \mathbb{N}_{\geq n}} M_{j-n}.

Example 2.6 (direct sums and quotients of graded modules). Let M and N be graded modules over a graded ring R. Then M ⊕ N is a graded R-module via the grading (M_n ⊕ N_n)_{n∈N}. If M′ ⊂ M is a graded submodule of M (i.e., it is generated by homogeneous elements), then (M_n/(M′ ∩ M_n))_{n∈N} turns M/M′ into a graded R-module.

Persistent homology leads to persistence modules [ZC05]. Persistence modules in turn give rise to graded modules over graded polynomial rings [ZC05, Section 3.1]:

Example 2.7 (from persistence modules to graded modules). Let K be a ring and let (M^*, f^*) be an N-indexed persistence K-module. Then M := \bigoplus_{n \in \mathbb{N}} M^n carries a K[T]-module structure, given by

  T \cdot x := f^n(x) \in M^{n+1}  for all x \in M^n.

If we view K[T] as a graded ring (Example 2.2), then this K[T]-module structure and this direct sum decomposition of M turn M into a graded K[T]-module. If (M^*, f^*) is of finite type, then M is finitely generated over K[T].
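As a concrete illustration of Example 2.7, the following sketch realises the T-action via the structure maps f^n. The specific two-step persistence module (over Q, with floating-point matrices as a stand-in) is invented for illustration.

```python
import numpy as np

# A finite-type persistence module M^0 -> M^1 -> M^2 over K = Q,
# given by its structure maps f^n (invented example data).
f = [np.array([[1.0], [0.0]]),   # f^0 : M^0 (dim 1) -> M^1 (dim 2)
     np.array([[1.0, 1.0]])]     # f^1 : M^1 (dim 2) -> M^2 (dim 1)

def act_T(x, n):
    """The K[T]-action of Example 2.7: T sends a homogeneous element x
    of degree n to f^n(x), which is homogeneous of degree n + 1."""
    return f[n] @ x

x = np.array([1.0])              # homogeneous element of degree 0
t2x = act_T(act_T(x, 0), 1)      # T^2 * x, homogeneous of degree 2
```

Multiplication by T thus shifts degrees by one, exactly as required for a graded K[T]-module.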
Finally, we define the central types of decompositions arising in the structure theorems:

Definition 2.8 (graded elementary divisor decomposition). Let R be a graded ring and let M be a graded module over R. A graded elementary divisor decomposition of M over R is an isomorphism

  M \cong_{R\mathsf{Mod}_*} \bigoplus_{j=1}^{N} \Sigma^{n_j} R/(f_j)

of graded R-modules with N ∈ N, degrees n_1, …, n_N ∈ N, and homogeneous elements f_1, …, f_N ∈ R with f_j | f_{j+1} for all j ∈ {1, …, N−1}. Here, the right-hand side carries the canonical grading. The elements f_1, …, f_N are called elementary divisors of M.
The signature of such a decomposition is the multiset of all pairs (n_j, R^× · f_j) with j ∈ {1, …, N}.

Definition 2.9 (graded prime power decomposition). Let R be a graded ring and let M be a graded module over R. A graded prime power decomposition of M over R is an isomorphism

  M \cong_{R\mathsf{Mod}_*} \bigoplus_{j=1}^{N} \Sigma^{n_j} R/(p_j^{k_j})

of graded R-modules with N ∈ N, n_1, …, n_N ∈ N, k_1, …, k_N ∈ N, and homogeneous prime elements p_1, …, p_N ∈ R. Here, the right-hand side carries the canonical grading.
The signature of such a decomposition is the multiset of all pairs (n_j, R^× · p_j^{k_j}) with j ∈ {1, …, N}.
3 Graded principal ideal domains

For the sake of completeness, we provide a proof of the following observation [VO83, Remark 2.7].

Proposition 3.1 (graded PIDs). Let R be a graded principal ideal domain. Then R is of one of the following types:
• We have R = R_0, i.e., R is an ordinary principal ideal domain with the 0-grading.
• The subring R_0 is a field and R is isomorphic to the graded ring R_0[T], where the grading on R_0[T] is a multiple of the canonical grading.

Proof. Let R ≠ R_0 and let n ∈ N_{>0} be the minimal degree with R_n ≠ 0. Then

  R_{\geq n} := \bigoplus_{j \in \mathbb{N}_{\geq n}} R_j

is a homogeneous ideal in R; as R is a graded principal ideal domain, there exists a t ∈ R with R_{≥n} = (t). We show that t is homogeneous of degree n: Let x ∈ R_n \ {0}. Then t divides x and a straightforward computation shows that hence also t is homogeneous. The grading implies that t has degree n.
We show that the canonical R_0-algebra homomorphism φ: R_0[T] → R given by φ(T) := t is an isomorphism.
• We first show that φ is injective: Because R is graded and t is homogeneous, it suffices to show that a · t^k ≠ 0 for all a ∈ R_0 \ {0} and all k ∈ N. However, this is guaranteed by the hypothesis that R is a domain.
• Regarding surjectivity, let y ∈ R. It suffices to consider the case that y is homogeneous of degree m ≥ n. Because (t) = R_{≥n}, we know that t divides y, say y = t · y′. Then y′ is homogeneous and we can iterate the argument for y′. Proceeding inductively, we obtain that m is a multiple of n and that there exists an a ∈ R_0 with y = a · t^{m/n}. Hence, φ is surjective.
This establishes that R is isomorphic as a graded ring to R_0[T], where R_0[T] carries the canonical grading on R_0[T] scaled by n.
It remains to show that R_0 ≅_{Ring} R/(t) is a field. Thus, we are left to show that (t) is a maximal ideal in R. By construction, every ideal a that contains (t) = R_{≥n} is generated by (t) and a subset of R_0; in particular, a is homogeneous, whence principal. The grading shows that then a = R or a = (t). Thus, (t) is maximal and so R_0 is a field.

In the setting of Z-graded principal ideal domains, further examples appear, such as generalised Rees rings [PvG82].
4 Trivially graded principal ideal domains

Example 4.1 (elementary divisor decompositions over trivially graded PIDs). Let R be a principal ideal domain with the 0-grading that contains two non-associated prime elements p and q (e.g., 2 and 3 in Z). We consider the graded R-module

  M := \Sigma^{0} R/(p) \oplus \Sigma^{1} R/(q).

This graded R-module does not admit a graded elementary divisor decomposition: Indeed, if there were a graded elementary divisor decomposition of M, then the corresponding elementary divisors would have to coincide with the ungraded elementary divisors. The only ungraded elementary divisor of M is p · q. However, M does not contain a homogeneous element with annihilator ideal (p · q). Therefore, M does not admit a graded elementary divisor decomposition.
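For R = Z, p = 2, q = 3, the obstruction in Example 4.1 can be checked mechanically: every homogeneous element of M = Σ^0 Z/(2) ⊕ Σ^1 Z/(3) lies in a single graded piece, so its annihilator is (1), (2) or (3) – never (6), the unique ungraded elementary divisor. A small sketch (the helper below is ad hoc, not from the note):

```python
from math import gcd

# Graded pieces of M = Sigma^0 Z/(2) (+) Sigma^1 Z/(3):
# M_0 = Z/2 and M_1 = Z/3, so homogeneous elements are residues mod 2 or mod 3.
def annihilator_generator(x, m):
    """Positive generator of the annihilator ideal of the residue x in Z/m."""
    return m // gcd(m, x)

# All annihilators realised by homogeneous elements of M:
homogeneous_anns = ({annihilator_generator(x, 2) for x in range(2)} |
                    {annihilator_generator(x, 3) for x in range(3)})
# 6 is absent, so no homogeneous element generates a summand R/(p*q).
```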
Proposition 4.2 (prime power decompositions over trivially graded PIDs). Let R be a principal ideal domain with the 0-grading and let M be a finitely generated graded R-module. Then M admits a graded prime power decomposition and the signatures of all such graded decompositions of M coincide.

Proof. Because R is trivially graded, the grading on M decomposes M as a direct sum \bigoplus_{n \in \mathbb{N}} M_n of R-submodules. In view of finite generation of M, only finitely many of these summands are non-trivial. We can now apply the ungraded structure theorem to each summand M_n to conclude.
5 Polynomial rings over fields

In view of Proposition 3.1, Theorem 1.1 can equivalently be stated as follows (which is exactly the special case needed in persistent homology):

Theorem 5.1 (structure theorem for graded modules over polynomial rings). Let K be a field and let M be a finitely generated graded module over the graded ring K[T]. Then there exist N ∈ N, n_1, …, n_N ∈ N, and k_1, …, k_N ∈ N_{>0} ∪ {∞} with

  M \cong_{K[T]\mathsf{Mod}_*} \bigoplus_{j=1}^{N} \Sigma^{n_j} K[T]/(T^{k_j}).

Here, T^∞ := 0. The multiset of all (n_j, k_j) with j ∈ {1, …, N} is uniquely determined by M.
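Under the standard barcode dictionary [ZC05, EH10], a summand Σ^n K[T]/(T^k) corresponds to the interval [n, n+k) and a free summand Σ^n K[T] to [n, ∞). As an illustration (this concrete module is invented, not taken from the text):

```latex
% A summand \Sigma^{n} K[T]/(T^{k}) corresponds to the bar [n, n+k),
% a free summand \Sigma^{n} K[T] to the bar [n, \infty).
M \;\cong_{K[T]\mathsf{Mod}_*}\; \Sigma^{0} K[T] \;\oplus\; \Sigma^{1} K[T]/(T^{2})
% barcode: one bar [0, \infty) and one bar [1, 3)
```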
The rest of this section contains an elementary and constructive proof of Theorem 5.1.

5.1 Uniqueness of graded decompositions

The uniqueness claim in Theorem 5.1 can be derived inductively from the ungraded uniqueness statement:
Let a decomposition as in Theorem 5.1 be given and let φ: \bigoplus_{j=1}^{N} \Sigma^{n_j} K[T]/(T^{k_j}) → M be a corresponding graded K[T]-isomorphism. Then

  M' := \varphi(N')  with  N' := \bigoplus_{j \in \{1,\dots,N\},\, n_j = 0} \Sigma^{n_j} K[T]/(T^{k_j})

is a graded submodule of M and it is not difficult to see that M′ = φ(N′) = Span_{K[T]} M_0. Moreover, M′ is finitely generated over K[T]. Therefore, the ungraded structure theorem applied to M′ shows that the multiset of all pairs (n_j, k_j) with n_j = 0 is uniquely determined by M.
For the induction step, we pass to the quotient M/M′, which is a finitely generated graded K[T]-module with (M/M′)_0 ≅ 0. We shift the degrees on M/M′ by −1 and inductively apply the previous argument.
5.2 Homogeneous matrix reduction

The standard matrix reduction algorithm for the computation of persistent homology [EH10, ZC05] can be viewed as a proof of the existence part of Theorem 5.1. We phrase the matrix reduction algorithm in the graded language to emphasise the connection with graded decompositions.

Definition 5.2 (graded matrix). Let K be a field, let r, s ∈ N, and let n_1, …, n_r, m_1, …, m_s ∈ N be monotonically increasing. A matrix A ∈ M_{r×s}(K[T]) is (n_*, m_*)-graded if the following holds: For all j ∈ {1, …, r}, k ∈ {1, …, s}, the entry A_{jk} ∈ K[T] is a homogeneous polynomial and
• A_{jk} = 0 or
• m_k = deg A_{jk} + n_j.

In a graded matrix, the degrees of the matrix entries monotonically increase from the left to the right and from the bottom to the top.

Definition 5.3 (reduced matrix). Let K be a field, let r, s ∈ N, let n_1, …, n_r, m_1, …, m_s ∈ N be monotonically increasing, and let A ∈ M_{r×s}(K[T]) be an (n_*, m_*)-graded matrix.
• For k ∈ {1, …, s}, we define

  \mathrm{low}_A(k) := \max \{ j \in \{1, \dots, r\} \mid A_{jk} \neq 0 \} \in \mathbb{N}

(with max ∅ := 0). I.e., low_A(k) is the index of the "lowest" matrix entry in column k that is non-zero.
• The matrix A is reduced if all columns have different low-indices: For all k, k′ ∈ {1, …, s} with k ≠ k′, low_A(k) ≠ 0, and low_A(k′) ≠ 0, we have low_A(k) ≠ low_A(k′).

Graded matrices can be transformed into reduced matrices via elementary column operations; these reduced matrices then lead to module decompositions:

Algorithm 5.4 (homogeneous matrix reduction). Given a field K, r, s ∈ N, monotonically increasing sequences n_1, …, n_r, m_1, …, m_s ∈ N, and an (n_*, m_*)-graded matrix A ∈ M_{r×s}(K[T]), do the following:
• For each k from 1 up to s (in ascending order):
  Let ℓ := low_A(k). If ℓ ≠ 0, then:
  • For each j from ℓ down to 1 (in descending order):
    If A_{jk} ≠ 0 and there exists k′ ∈ {1, …, k−1} with low_A(k′) = j, then:
    • Update the matrix A by subtracting (A_{jk}/A_{jk′})-times the column k′ from column k.
      [Loop invariant observation: Because A is graded, A_{jk}/A_{jk′} indeed is a homogeneous polynomial over K and the resulting matrix is (n_*, m_*)-graded. This eliminates the entry A_{jk}.]
• Return the resulting matrix A.
|
337 |
+
be monotonically increasing, and let A ∈ Mr×s(K[T ]) be an (n∗, m∗)-graded
|
338 |
+
matrix. Then:
|
339 |
+
1. The homogeneous matrix reduction algorithm (Algorithm 5.4) terminates
|
340 |
+
on this input after finitely many steps (relative to the arithmetic on K).
|
341 |
+
2. The resulting matrix A′ is reduced and there is a graded s × s-matrix B
|
342 |
+
over K[T ] that admits a graded inverse and satisfies A′ = A · B.
|
343 |
+
3. The low-entries of the resulting matrix A′ are the elementary divisors of A
|
344 |
+
over K[T ].
|
345 |
+
|
346 |
+
8
|
347 |
+
5.2
|
348 |
+
Homogeneous matrix reduction
|
349 |
+
4. We have
|
350 |
+
F/ im A ∼=K[T ]Mod∗
|
351 |
+
�
|
352 |
+
j∈I
|
353 |
+
ΣnjK[T ]/(T mk(j)−nj) ⊕
|
354 |
+
�
|
355 |
+
j∈I′
|
356 |
+
ΣnjK[T ],
|
357 |
+
where F := �r
|
358 |
+
j=1 ΣnjK[T ] and I := {lowA′(k) | k ∈ {1, . . . , s}} \ {0}
|
359 |
+
as well as I′ := {1, . . ., r} \ I. For j ∈ I, let k(j) ∈ {1, . . . , s} be the
|
360 |
+
unique (!) index with lowA′(k(j)) = j.
Proof. Ad 1. Well-definedness follows from the observation mentioned in the algorithm: As every homogeneous polynomial in K[T] is of the form λ · T^d with λ ∈ K and d ∈ N and as the matrix is graded, the corresponding division can be performed in K[T] and the gradedness of the matrix is preserved by the elimination operation. Termination is then clear from the algorithm.
Ad 2. As we traverse the columns from left to right, a straightforward induction shows that no two columns can remain that have the same non-zero value of "low_A". The product decomposition A′ = A · B comes from the fact that we only applied elementary homogeneous column operations without swaps.
Ad 3. Because the resulting matrix A′ is obtained through elementary column operations from A, the elementary divisors of A′ and A coincide. Applying Lemma 5.6 to A′ proves the claim.
Ad 4. In view of the second part, we have that F/ im A ≅_{K[T]Mod∗} F/ im A′. Therefore, the claim is a direct consequence of Lemma 5.6.
Lemma 5.6. Let K be a field, let r, s ∈ N, let n_1, . . . , n_r, m_1, . . . , m_s ∈ N be monotonically increasing, and let A ∈ M_{r×s}(K[T]) be an (n_∗, m_∗)-graded matrix that is reduced. Then:

1. The low-entries of A are the elementary divisors of A over K[T].
2. Let F := ⊕_{j=1}^{r} Σ^{n_j} K[T] and I := {low_A(k) | k ∈ {1, . . . , s}} \ {0} as well as I′ := {1, . . . , r} \ I; for j ∈ I, let k(j) ∈ {1, . . . , s} be the unique index with low_A(k(j)) = j. Then
   F/ im A ≅_{K[T]Mod∗} (⊕_{j∈I} Σ^{n_j} K[T]/(T^{m_{k(j)} − n_j})) ⊕ (⊕_{j∈I′} Σ^{n_j} K[T]).
Proof. Ad 1. Let k ∈ {1, . . . , s} with ℓ := low_A(k) ≠ 0. Then we can clear out all the entries of A in column k above ℓ by elementary row operations (again, the gradedness of A ensures that this is possible). Swapping zero rows and columns appropriately thus results in a matrix in rectangular "diagonal" form; moreover, as all the "diagonal" entries are monomials, we can swap rows and columns to obtain a matrix A′ in Smith normal form that has the same elementary divisors as A and whose elementary divisors are precisely the low-entries of A. In particular, these elementary divisors must coincide.
Ad 2. The claim is clear if A is already in Smith normal form. By construction, there are square matrices B and C that are invertible over K[T] and represent graded K[T]-isomorphisms with A′ = C · A · B. In particular, F/ im A ≅_{K[T]Mod∗} (C · F)/ im A′. By construction, the values of low_{A′} and the degrees of A′ differ from the ones of A only by compatible index permutations. Therefore, the claim follows.
5.3 Existence of a graded decomposition
To prove existence in Theorem 5.1 we can follow the standard proof pattern of first finding a (graded) finite presentation and then applying (homogeneous) matrix reduction.
Let M be a finitely generated graded K[T]-module. Then M also has a finite generating set consisting of homogeneous elements. This defines a surjective graded K[T]-homomorphism
ϕ: F := ⊕_{j=1}^{r} Σ^{n_j} K[T] −→ M
for suitable r ∈ N and monotonically increasing n_1, . . . , n_r ∈ N. As ϕ is a graded homomorphism, ker ϕ ⊂ F is a graded K[T]-submodule and we obtain an isomorphism
M ≅_{K[T]Mod∗} F/ ker ϕ
of graded K[T]-modules.
Because K[T] is a principal ideal domain, the graded submodule ker ϕ ⊂ F is finitely generated over K[T]. Because ker ϕ is a graded submodule, ker ϕ has a finite homogeneous generating set. (In fact, there also exists a homogeneous free K[T]-basis for ker ϕ, as can be seen from a straightforward inductive splitting argument [Web85, Lemma 1].) In particular, there exist s ∈ N, monotonically increasing m_1, . . . , m_s ∈ N, and a graded K[T]-homomorphism
ψ: E := ⊕_{k=1}^{s} Σ^{m_k} K[T] −→ F
with im ψ = ker ϕ. Because ψ is graded and n_∗, m_∗ are monotonically increasing, the r × s-matrix A over K[T] that represents ψ with respect to the canonical homogeneous bases of E and F is graded in the sense of Definition 5.2.
Applying the homogeneous matrix reduction algorithm to A shows that M ≅_{K[T]Mod∗} F/ im A has the desired decomposition (Proposition 5.5; after discarding the irrelevant terms of the form Σ^n K[T]/(T^0)).
This completes the proof of the structure theorem (Theorem 5.1).
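The final step of this pipeline, reading off the summands of F/ im A from a reduced graded matrix as in Proposition 5.5(4), is purely combinatorial. A minimal sketch, assuming the reduced matrix is summarized by its 0-indexed gradings `n`, `m` and a hypothetical map `lows` sending each row that occurs as a low to the unique column with that low:

```python
def graded_decomposition(n, m, lows):
    """Summands of F / im A for a reduced (n*, m*)-graded matrix,
    following Proposition 5.5(4).

    n, m  -- the (monotone) row and column gradings, 0-indexed
    lows  -- maps each row j in I to the unique column k(j)

    Returns (shift n_j, torsion exponent) pairs; exponent None encodes
    a free summand Sigma^{n_j} K[T] (the case j in I')."""
    summands = []
    for j in range(len(n)):
        if j in lows:                      # j in I: torsion summand
            exponent = m[lows[j]] - n[j]   # low entry is T^(m_k(j) - n_j)
            if exponent > 0:               # discard trivial K[T]/(T^0) terms
                summands.append((n[j], exponent))
        else:                              # j in I': free summand
            summands.append((n[j], None))
    return summands

# Example with r = s = 3 and lows as produced by some column reduction.
print(graded_decomposition([0, 0, 1], [1, 1, 2], {1: 0, 0: 1, 2: 2}))
# -> [(0, 1), (0, 1), (1, 1)]: twice Sigma^0 K[T]/(T) and once Sigma^1 K[T]/(T)
```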
Remark 5.7. There is a general matrix reduction for a slightly different notion of "graded" matrices over (Z-)graded principal ideal domains [PvG82]. However, one should be aware that such "graded" matrices in general only lead to graded homomorphisms once one is allowed to change the grading on the underlying free modules. This explains why this general matrix reduction does not contradict the counterexample in case of 0-graded principal ideal rings in Example 4.1.
5.4 Barcodes
For the sake of completeness, we recall the relation between graded decompositions and barcodes:
Remark 5.8 (barcodes of persistence modules). Let K be a field and let (M^∗, f^∗) be an N-indexed persistence K-module of finite type. We equip M := ⊕_{n∈N} M^n with the canonical graded K[T]-module structure (Example 2.7). By the graded structure theorem (Theorem 5.1), there exist N ∈ N, n_1, . . . , n_N ∈ N, and k_1, . . . , k_N ∈ N_{>0} ∪ {∞} with
M ≅_{K[T]Mod∗} ⊕_{j=1}^{N} Σ^{n_j} K[T]/(T^{k_j}).
Let B be the multiset of all (n_j, k_j − 1) with j ∈ {1, . . . , N}; then B is uniquely determined by M and this multiset B is the barcode of (M^∗, f^∗).
The barcode contains the full information on the isomorphism type of the graded K[T]-module M (and the underlying persistence module) and describes the birth, death, and persistence of elements as specified by the "elder rule": If (n, p) is an element of the barcode, this means that a new independent class is born at stage n, it persists for p stages, and it dies (if p ≠ ∞) at stage n + p + 1.
In particular, this leads to the notion of barcodes of persistent homology (in a given degree) of finite type persistence chain complexes and finite type filtrations in topology.
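The passage from a graded decomposition to the barcode of Remark 5.8 is a simple bookkeeping step. A small sketch; the encoding of a summand Σ^{n_j} K[T]/(T^{k_j}) as a pair `(n_j, k_j)` with `None` standing for k_j = ∞ is a convention of this snippet:

```python
def barcode(summands):
    """Barcode of Remark 5.8: each summand Sigma^{n_j} K[T]/(T^{k_j})
    (k_j = None encodes a free summand, i.e. k_j = infinity) becomes
    the bar (birth n_j, persistence k_j - 1)."""
    bars = [(n, None if k is None else k - 1) for n, k in summands]
    inf = float("inf")
    return sorted(bars, key=lambda b: (b[0], inf if b[1] is None else b[1]))

def death_stage(bar):
    """Birth/death convention of Remark 5.8: a bar (n, p) is born at
    stage n, persists for p stages, dies at stage n + p + 1 (or never)."""
    n, p = bar
    return None if p is None else n + p + 1

bars = barcode([(0, 1), (0, 1), (1, 3), (2, None)])
print(bars)                 # [(0, 0), (0, 0), (1, 2), (2, None)]
print(death_stage((1, 2)))  # the bar born at 1 with persistence 2 dies at 4
```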
References

[BCB20] Magnus Bakke Botnan and William Crawley-Boevey. Decomposition of persistence modules. Proc. Amer. Math. Soc., 148(11):4581–4596, 2020. Cited on page: 2
[CdS10] Gunnar Carlsson and Vin de Silva. Zigzag persistence. Found. Comput. Math., 10(4):367–405, 2010. Cited on page: 2
[CZCG04] Gunnar Carlsson, Afra Zomorodian, Anne Collins, and Leonidas Guibas. Persistence Barcodes for Shapes. In Roberto Scopigno and Denis Zorin, editors, Symposium on Geometry Processing. The Eurographics Association, 2004. Cited on page: 1
[EH10] Herbert Edelsbrunner and John L. Harer. Computational topology. American Mathematical Society, Providence, RI, 2010. An introduction. Cited on page: 1, 6
[PvG82] R. Puystjens and J. van Geel. Diagonalization of matrices over graded principal ideal domains. Linear Algebra Appl., 48:265–281, 1982. Cited on page: 5, 9
[SVJ13] Primoz Skraba and Mikael Vejdemo-Johansson. Persistence modules: Algebra and algorithms. 2013. arXiv:1302.2015 [cs.CG]. Cited on page: 2
[VO83] F. Van Oystaeyen. Generalized Rees rings and arithmetical graded rings. J. Algebra, 82(1):185–193, 1983. Cited on page: 2, 4
[Web85] Cary Webb. Decomposition of graded modules. Proc. Amer. Math. Soc., 94(4):565–571, 1985. Cited on page: 9
[ZC05] Afra Zomorodian and Gunnar Carlsson. Computing persistent homology. Discrete Comput. Geom., 33(2):249–274, 2005. Cited on page: 1, 2, 3, 6
Clara Löh
Fakultät für Mathematik, Universität Regensburg, 93040 Regensburg
[email protected], https://loeh.app.ur.de
page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
3 |
+
page_content='11756v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
4 |
+
page_content='AC] 27 Jan 2023 A comment on the structure of graded modules over graded principal ideal domains in the context of persistent homology Clara L¨oh January 30, 2023 Abstract The literature in persistent homology often refers to a “structure the- orem for finitely generated graded modules over a graded principal ideal domain”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
5 |
+
page_content=' We clarify the nature of this structure theorem in this context.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
6 |
+
page_content=' 1 Introduction The persistent homology with field coefficients of finite type filtrations can be described in terms of barcodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
7 |
+
page_content=' Zomorodian and Carlsson promoted the elegant idea to view persistent homology with coefficients in a field K as a graded module over the graded polynomial ring K[T ] [ZC05].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
8 |
+
page_content=' They then suggest a general structure theorem for finitely generated graded modules over graded principal ideal domains [ZC05, Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
9 |
+
page_content='1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
10 |
+
page_content=' Applying this structure theorem to the graded polynomial ring K[T ] gives a graded elementary divisor decomposition of persistent homology, which can be reinterpreted as barcodes [CZCG04] or, equivalently, as persistence diagrams [EH10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
11 |
+
page_content=' However, there does not seem to be a proof of this general structure theo- rem in the literature in the form stated by Zomorodian and Carlsson.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
12 |
+
page_content=' As this theorem is quoted multiple times in work on persistent homology and as it is a potential source of confusion, the goal of this expository note is to clarify the nature of this structure theorem (even though it might be clear to the experts).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
13 |
+
page_content=' We first give a precise formulation of the structure theorem;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
14 |
+
page_content=' this formu- lation slightly differs from the statement of Zomorodian and Carlsson [ZC05, Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
15 |
+
page_content='1] (for a reason explained below): Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
16 |
+
page_content='1 (structure theorem for graded modules over graded PIDs).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
17 |
+
page_content=' Let R be a graded principal ideal domain with R ̸= R0 and let M be a finitely generated graded R-module.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
18 |
+
page_content=' Then M admits a graded elementary divisor decomposition (Definition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
19 |
+
page_content='8) and the signatures of all such graded decompositions of M co- incide.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
20 |
+
page_content=' The key observation of this note is that in fact every N-graded principal ideal domain is © C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
21 |
+
page_content=' L¨oh 2023.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
22 |
+
page_content=' This work was supported by the CRC 1085 Higher Invariants (Universit¨at Regensburg, funded by the DFG).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
23 |
+
page_content=' MSC 2010 classification: 13C05, 55N31 1 2 2 Graded rings and modules a principal ideal domain with the 0-grading or a polynomial ring over a field with a multiple of the canonical grading.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
24 |
+
page_content=' The proof is elementary [VO83, Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
25 |
+
page_content='7] (Proposition 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
26 |
+
page_content='1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
27 |
+
page_content=' For trivially graded principal ideal domains, in general, the graded elemen- tary divisor version of the structure theorem does not hold (Example 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
28 |
+
page_content='1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
29 |
+
page_content=' This explains the additional hypothesis of R ̸= R0 in Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
30 |
+
page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
31 |
+
page_content=' In contrast, the graded prime power version of the structure theorem also holds if the grading is trivial (Proposition 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
32 |
+
page_content='2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
33 |
+
page_content=' For polynomial rings, the graded uniqueness part can be deduced in a straightforward way from the ungraded uniqueness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
34 |
+
page_content=' However, for the graded existence part, there does not seem to be a “generic” derivation from the un- graded existence result – the difficulty being the graded direct sum splitting (as exhibited in the case of the trivially graded ring Z).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
35 |
+
page_content=' Finding such a splitting needs a careful inductive approach that establishes that the torsion submodule is graded and that avoids dividing out cyclic submodules in bad position/order.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
36 |
+
page_content=' The graded existence part can be proved using specific properties of polynomial rings over fields.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
37 |
+
page_content=' In conclusion, the structure theorem for graded modules over graded prin- cipal ideal domains gives a helpful structural perspective on barcodes for per- sistent homology (and also for the computation of persistent homology [ZC05, SVJ13]), but its scope does not seem to go beyond the special case that is needed for persistent homology and it does not seem to provide a shortcut avoiding spe- cial properties of polynomial rings over fields.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
38 |
+
page_content=' Generalisations of N-graded persistent homology such as zigzag persistence or R-graded persistence (or more general indexing situations) are usually based on arguments from quiver representations [CdS10, BCB20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
39 |
+
page_content=' Similarly to the N- graded case, in these settings, it is also essential that the underlying coefficients are a field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
40 |
+
page_content=' Organisation of this article Basic notions on graded rings and modules are recalled in Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
41 |
+
page_content=' In Sec- tion 3, we prove Proposition 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
42 |
+
page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
43 |
+
page_content=' The case of principal ideal domains with trivial gradings is considered in Section 4;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
44 |
+
page_content=' the case of polynomial rings over fields is discussed in Section 5, where we give an elementary proof of the structure the- orem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
45 |
+
page_content=' Acknowledgements I would like to thank Ulrich Bunke for helpful discussions on abstract methods for the decomposition of graded modules and Luigi Caputi for valuable feedback.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
46 |
+
page_content=' 2 Graded rings and modules We recall basic notions on graded rings and modules and decompositions of graded modules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
47 |
+
page_content=' As usual in (discrete) persistence, we consider only the case of discrete non-negative gradings, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
48 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
49 |
+
page_content=', gradings over N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
50 |
+
page_content=' 2 Graded rings and modules 3 Definition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
51 |
+
page_content='1 (graded ring).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
52 |
+
page_content=' A graded ring is a pair (R, (Rn)n∈N), where R is a ring and the Rn are additive subgroups of R with the following properties: The additive group (R, +) is the internal direct sum of the (Rn)n∈N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
53 |
+
page_content=' For all n, m ∈ N, we have Rn · Rm ⊂ Rn+m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
54 |
+
page_content=' For n ∈ N, the elements in Rn are called homogeneous of degree n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
55 |
+
page_content=' An element of R is homogenous if there exists an n ∈ N such that the element is homogeneous of degree n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
56 |
+
page_content=' A graded ring is a graded principal ideal domain if it is a domain and every homogeneous ideal (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
57 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
58 |
+
page_content=', generated by homogeneous elements) is generated by a single element.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
59 |
+
page_content=' Example 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
60 |
+
page_content='2 (polynomial rings).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
61 |
+
page_content=' Let K be a ring.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
62 |
+
page_content=' Then the usual degree on monomials in the polynomial ring K[T ] turns K[T ] into a graded ring via the canonical isomorphism K[T ] ∼=Ab � n∈N K · T n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
63 |
+
page_content=' We will refer to this as the canonical grading on K[T ].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
64 |
+
page_content=' If K is a field, then K[T ] is a principal ideal domain (graded and ungraded).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
65 |
+
page_content=' Definition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
66 |
+
page_content='3 (graded module).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
67 |
+
page_content=' Let R be a graded ring.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
68 |
+
page_content=' A graded module over R is a pair (M, (Mn)), consisting of an R-module M and additive sub- groups Mn of M with the following properties: The additive group (M, +) is the internal direct sum of the (Mn)n∈N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
69 |
+
page_content=' For all n, m ∈ N, we have Rn · Mm ⊂ Mn+m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
70 |
+
page_content=' Elements of Mm are called homogeneous of degree m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
71 |
+
page_content=' Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
72 |
+
page_content='4 (the category of graded modules).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
73 |
+
page_content=' Let R be a graded ring.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
74 |
+
page_content=' Ho- momorphisms between graded R-modules are R-linear maps that preserve the grading.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
75 |
+
page_content=' Graded R-modules and homomorphisms of R-modules form the cate- gory RMod∗ of graded R-modules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
76 |
+
page_content=' Example 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
77 |
+
page_content='5 (shifted graded modules).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
78 |
+
page_content=' Let R be a graded ring, let M be a graded module over R, and let n ∈ N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
79 |
+
page_content=' Then ΣnM denotes the graded R-module given by the n-shifted decomposition 0 ⊕ · · · ⊕ 0 ⊕ � j∈N≥n Mj−n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
80 |
+
page_content=' Example 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
81 |
+
page_content='6 (direct sums and quotients of graded modules).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
82 |
+
page_content=' Let M and N be graded modules over a graded ring R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
83 |
+
page_content=' Then M ⊕ N is a graded R-module via the grading (Mn ⊕ Nn)n∈N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
84 |
+
page_content=' If M ′ ⊂ M is a graded submodule of M (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
85 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
86 |
+
, it is generated by homogeneous elements), then (M_n/(M′ ∩ M_n))_{n∈N} turns M/M′ into a graded R-module.

Persistent homology leads to persistence modules [ZC05]. Persistence modules in turn give rise to graded modules over graded polynomial rings [ZC05, Section 3.1]:

Example 2.7 (from persistence modules to graded modules). Let K be a ring and let (M^*, f^*) be an N-indexed persistence K-module. Then M := ⊕_{n∈N} M^n carries a K[T]-module structure, given by

  T · x := f^n(x) ∈ M^{n+1}   for all x ∈ M^n.

If we view K[T] as a graded ring (Example 2.2), then this K[T]-module structure and this direct sum decomposition of M turn M into a graded K[T]-module. If (M^*, f^*) is of finite type, then M is finitely generated over K[T].
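The construction of Example 2.7 is easy to carry out concretely. The sketch below (an illustration of mine, not from the paper) models a finite-type persistence module over K = Q by its structure maps f^n, given as matrices, and implements the action T · x := f^n(x); the class and method names are hypothetical.

```python
from fractions import Fraction

class PersistenceModule:
    """An N-indexed persistence module of finite type over K = Q:
    stage M^n is the vector space Q^{dims[n]} and the structure map
    f^n: M^n -> M^{n+1} is given by the matrix maps[n]."""

    def __init__(self, dims, maps):
        self.dims = dims  # dims[n] = dim_Q M^n
        self.maps = maps  # maps[n] has dims[n+1] rows and dims[n] columns

    def act_T(self, n, x):
        """The K[T]-action of Example 2.7: for x in M^n, T*x := f^n(x) in M^{n+1}."""
        f = self.maps[n]
        return [sum(Fraction(f[i][j]) * x[j] for j in range(self.dims[n]))
                for i in range(self.dims[n + 1])]

# A persistence module Q --id--> Q --0--> Q: the class survives one step,
# then dies, so T acts as the identity on M^0 and T^2 acts as zero.
pm = PersistenceModule(dims=[1, 1, 1], maps=[[[1]], [[0]]])
x = [Fraction(1)]
tx = pm.act_T(0, x)    # T*x in M^1
ttx = pm.act_T(1, tx)  # T^2*x in M^2 -- the zero vector
```

Choosing the maps as matrices makes the finite-type hypothesis visible: each stage is finite-dimensional and only finitely many stages are listed.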
Finally, we define the central types of decompositions arising in the structure theorems:

Definition 2.8 (graded elementary divisor decomposition). Let R be a graded ring and let M be a graded module over R. A graded elementary divisor decomposition of M over R is an isomorphism

  M ≅_{RMod*} ⊕_{j=1}^{N} Σ^{n_j} R/(f_j)

of graded R-modules with N ∈ N, degrees n_1, …, n_N ∈ N, and homogeneous elements f_1, …, f_N ∈ R with f_j | f_{j+1} for all j ∈ {1, …, N−1}. Here, the right-hand side carries the canonical grading. The elements f_1, …, f_N are called elementary divisors of M. The signature of such a decomposition is the multiset of all pairs (n_j, R^× · f_j) with j ∈ {1, …, N}.
Definition 2.9 (graded prime power decomposition). Let R be a graded ring and let M be a graded module over R. A graded prime power decomposition of M over R is an isomorphism

  M ≅_{RMod} ⊕_{j=1}^{N} Σ^{n_j} R/(p_j^{k_j})

of graded R-modules with N ∈ N, n_1, …, n_N ∈ N, k_1, …, k_N ∈ N, and homogeneous prime elements p_1, …, p_N ∈ R. Here, the right-hand side carries the canonical grading. The signature of such a decomposition is the multiset of all pairs (n_j, R^× · p_j^{k_j}) with j ∈ {1, …, N}.
3 Graded principal ideal domains

For the sake of completeness, we provide a proof of the following observation [VO83, Remark 2.7].

Proposition 3.1 (graded PIDs). Let R be a graded principal ideal domain. Then R is of one of the following types:
• We have R = R_0, i.e., R is an ordinary principal ideal domain with the 0-grading.
• The subring R_0 is a field and R is isomorphic to the graded ring R_0[T], where the grading on R_0[T] is a multiple of the canonical grading.

Proof. Let R ≠ R_0 and let n ∈ N_{>0} be the minimal degree with R_n ≠ 0. Then R_{≥n} := ⊕_{j∈N_{≥n}} R_j is a homogeneous ideal in R; as R is a graded principal ideal domain, there exists a t ∈ R with R_{≥n} = (t). We show that t is homogeneous of degree n: Let x ∈ R_n \ {0}. Then t divides x and a straightforward computation shows that hence also t is homogeneous. The grading implies that t has degree n.

We show that the canonical R_0-algebra homomorphism ϕ: R_0[T] → R given by ϕ(T) := t is an isomorphism. We first show that ϕ is injective: Because R is graded and t is homogeneous, it suffices to show that a · t^k ≠ 0 for all a ∈ R_0 \ {0} and all k ∈ N. However, this is guaranteed by the hypothesis that R is a domain. Regarding surjectivity, let y ∈ R. It suffices to consider the case that y is homogeneous of degree m ≥ n. Because (t) = R_{≥n}, we know that t divides y, say y = t · y′. Then y′ is homogeneous and we can iterate the argument for y′. Proceeding inductively, we obtain that m is a multiple of n and that there exists an a ∈ R_0 with y = a · t^{m/n}. Hence, ϕ is surjective. This establishes that R is isomorphic as a graded ring to R_0[T], where R_0[T] carries the canonical grading on R_0[T] scaled by n.

It remains to show that R_0 ≅_{Ring} R/(t) is a field. Thus, we are left to show that (t) is a maximal ideal in R. By construction, every ideal a that contains (t) = R_{≥n} is generated by (t) and a subset of R_0; in particular, a is homogeneous, whence principal. The grading shows that then a = R or a = (t). Thus, (t) is maximal and so R_0 is a field.

In the setting of Z-graded principal ideal domains, further examples appear, such as generalised Rees rings [PvG82].
4 Trivially graded principal ideal domains

Example 4.1 (elementary divisor decompositions over trivially graded PIDs). Let R be a principal ideal domain with the 0-grading that contains two non-associated prime elements p and q (e.g., 2 and 3 in Z). We consider the graded R-module

  M := Σ^0 R/(p) ⊕ Σ^1 R/(q).

This graded R-module does not admit a graded elementary divisor decomposition: Indeed, if there were a graded elementary divisor decomposition of M, then the corresponding elementary divisors would have to coincide with the ungraded elementary divisors. The only ungraded elementary divisor of M is p · q. However, M does not contain a homogeneous element with annihilator ideal (p · q). Therefore, M does not admit a graded elementary divisor decomposition.
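For R = Z with p = 2 and q = 3, the ungraded statement used in Example 4.1 is the Chinese remainder theorem: Z/2 ⊕ Z/3 ≅ Z/6, so the single ungraded elementary divisor is 2 · 3 = 6. A quick sanity check of this bijection and of the degree obstruction (an illustration of mine, not from the paper):

```python
# x -> (x mod 2, x mod 3) identifies Z/6 with Z/2 x Z/3 (CRT), so the
# ungraded module Z/2 (+) Z/3 has the single elementary divisor 6 = 2 * 3.
images = {(x % 2, x % 3) for x in range(6)}
assert len(images) == 6  # all 2 * 3 pairs are hit: a bijection

# In the graded module M = Sigma^0 Z/2 (+) Sigma^1 Z/3, however, every
# homogeneous element lies in a single summand, so its annihilator is
# (1), (2) or (3) -- never (6); no homogeneous element can generate a
# summand isomorphic to Z/6.
homogeneous_annihilators = {1, 2, 3}
assert 6 not in homogeneous_annihilators
```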
Proposition 4.2 (prime power decompositions over trivially graded PIDs). Let R be a principal ideal domain with the 0-grading and let M be a finitely generated graded R-module. Then M admits a graded prime power decomposition and the signatures of all such graded decompositions of M coincide.

Proof. Because R is trivially graded, the grading on M decomposes M as a direct sum ⊕_{n∈N} M_n of R-submodules. In view of the finite generation of M, only finitely many of these summands are non-trivial. We can now apply the ungraded structure theorem to each summand M_n to conclude.
5 Polynomial rings over fields

In view of Proposition 3.1, Theorem 1.1 can equivalently be stated as follows (which is exactly the special case needed in persistent homology):

Theorem 5.1 (structure theorem for graded modules over polynomial rings). Let K be a field and let M be a finitely generated graded module over the graded ring K[T]. Then there exist N ∈ N, n_1, …, n_N ∈ N, and k_1, …, k_N ∈ N_{>0} ∪ {∞} with

  M ≅_{K[T]Mod*} ⊕_{j=1}^{N} Σ^{n_j} K[T]/(T^{k_j}).

Here, T^∞ := 0. The multiset of all (n_j, k_j) with j ∈ {1, …, N} is uniquely determined by M.
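In persistent homology, Theorem 5.1 is the barcode decomposition: a summand Σ^{n_j} K[T]/(T^{k_j}) corresponds to a homology class born in degree n_j that dies entering degree n_j + k_j, i.e., to the half-open interval [n_j, n_j + k_j), with k_j = ∞ for essential classes [ZC05]. A minimal sketch of this translation (the function name is mine):

```python
import math

def barcode(signature):
    """Translate the multiset of pairs (n_j, k_j) from Theorem 5.1 into
    half-open barcode intervals [n_j, n_j + k_j); k_j = math.inf encodes
    an essential class, i.e., a free summand Sigma^{n_j} K[T]."""
    return sorted((n, n + k) for n, k in signature)

# M = Sigma^0 K[T]/(T^2) (+) Sigma^1 K[T] yields bars [0, 2) and [1, inf):
bars = barcode([(0, 2), (1, math.inf)])
```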
The rest of this section contains an elementary and constructive proof of Theorem 5.1.

5.1 Uniqueness of graded decompositions

The uniqueness claim in Theorem 5.1 can be derived inductively from the ungraded uniqueness statement: Let a decomposition as in Theorem 5.1 be given and let ϕ: ⊕_{j=1}^{N} Σ^{n_j} K[T]/(T^{k_j}) → M be a corresponding graded K[T]-isomorphism. Then M′ := ϕ(N′) with

  N′ := ⊕_{j∈{1,…,N}, n_j=0} Σ^{n_j} K[T]/(T^{k_j})

is a graded submodule of M and it is not difficult to see that M′ = ϕ(N′) = Span_{K[T]} M_0. Moreover, M′ is finitely generated over K[T]. Therefore, the ungraded structure theorem, applied to M′, shows that the multiset of all pairs (n_j, k_j) with n_j = 0 is uniquely determined by M. For the induction step, we pass to the quotient M/M′, which is a finitely generated graded K[T]-module with (M/M′)_0 ≅ 0. We shift the degrees on M/M′ by −1 and inductively apply the previous argument.
5.2 Homogeneous matrix reduction

The standard matrix reduction algorithm for the computation of persistent homology [EH10, ZC05] can be viewed as a proof of the existence part of Theorem 5.1. We phrase the matrix reduction algorithm in the graded language to emphasise the connection with graded decompositions.
Definition 5.2 (graded matrix). Let K be a field, let r, s ∈ N, and let n_1, …, n_r, m_1, …, m_s ∈ N be monotonically increasing. A matrix A ∈ M_{r×s}(K[T]) is (n_*, m_*)-graded if the following holds: For all j ∈ {1, …, r} and k ∈ {1, …, s}, the entry A_{jk} ∈ K[T] is a homogeneous polynomial and A_{jk} = 0 or m_k = deg A_{jk} + n_j. (Such a matrix represents a graded K[T]-linear map ⊕_{k=1}^{s} Σ^{m_k} K[T] → ⊕_{j=1}^{r} Σ^{n_j} K[T].)

In a graded matrix, the degrees of the matrix entries monotonically increase from left to right and from bottom to top.
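This condition is easy to check mechanically. In the sketch below (encoding and function name are mine), a homogeneous entry c · T^d is stored as the pair (c, d) and a zero entry as None; the test verifies the relation m_k = deg A_{jk} + n_j entrywise, which is what makes the matrix a graded map:

```python
def is_graded(A, n, m):
    """Check the grading condition for a matrix whose entries are None
    (zero) or pairs (coefficient, degree): every non-zero entry
    A[j][k] = (c, d) must satisfy m[k] == d + n[j]."""
    return all(
        A[j][k] is None or m[k] == A[j][k][1] + n[j]
        for j in range(len(n))
        for k in range(len(m))
    )

# Row degrees n = (0, 1), column degrees m = (1, 2):
A = [[(2, 1), (1, 2)],  # entry degrees 1 - 0 = 1 and 2 - 0 = 2
     [(1, 0), None]]    # entry degree 1 - 1 = 0; zero entries are unconstrained
ok = is_graded(A, n=[0, 1], m=[1, 2])
bad = is_graded([[(2, 0), None], [None, None]], n=[0, 1], m=[1, 2])
```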
Definition 5.3 (reduced matrix). Let K be a field, let r, s ∈ N, let n_1, …, n_r, m_1, …, m_s ∈ N be monotonically increasing, and let A ∈ M_{r×s}(K[T]) be an (n_*, m_*)-graded matrix. For k ∈ {1, …, s}, we define

  low_A(k) := max{ j ∈ {1, …, r} | A_{jk} ≠ 0 } ∈ N   (with max ∅ := 0);

i.e., low_A(k) is the index of the "lowest" non-zero matrix entry in column k. The matrix A is reduced if all columns have different low-indices: For all distinct k, k′ ∈ {1, …, s} with low_A(k) ≠ 0 and low_A(k′) ≠ 0, we have low_A(k) ≠ low_A(k′).
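A direct transcription of low_A and of the reducedness test (my own encoding: an entry is None for zero or any non-None value for non-zero; column indices are 1-based, with 0 returned for a zero column, as in Definition 5.3):

```python
def low_index(A, k):
    """low_A(k): the largest 1-based row index j with A[j][k] != 0,
    and 0 if column k is zero (the max over the empty set)."""
    return max((j + 1 for j in range(len(A)) if A[j][k - 1] is not None),
               default=0)

def is_reduced(A, s):
    """A is reduced iff all non-zero columns have pairwise distinct lows."""
    lows = [low_index(A, k) for k in range(1, s + 1)]
    nonzero = [l for l in lows if l != 0]
    return len(nonzero) == len(set(nonzero))

A = [[(1, 1), None],
     [None, (2, 0)]]
assert low_index(A, 1) == 1 and low_index(A, 2) == 2
assert is_reduced(A, 2)
```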
Graded matrices can be transformed into reduced matrices via elementary column operations; these reduced matrices then lead to module decompositions:

Algorithm 5.4 (homogeneous matrix reduction). Given a field $K$, $r, s \in \mathbb{N}$, monotonically increasing sequences $n_1, \dots, n_r, m_1, \dots, m_s \in \mathbb{N}$, and an $(n_*, m_*)$-graded matrix $A \in M_{r \times s}(K[T])$, do the following:

  For each $k$ from $1$ up to $s$ (in ascending order):
    Let $\ell := \operatorname{low}_A(k)$. If $\ell \neq 0$, then:
      For each $j$ from $\ell$ down to $1$ (in descending order):
        If $A_{jk} \neq 0$ and there exists $k' \in \{1, \dots, k-1\}$ with $\operatorname{low}_A(k') = j$, then:
          Update the matrix $A$ by subtracting $A_{jk}/A_{jk'}$ times the column $k'$ from column $k$.
          [Loop invariant observation: Because $A$ is graded, $A_{jk}/A_{jk'}$ indeed is a homogeneous polynomial over $K$ and the resulting matrix is $(n_*, m_*)$-graded. This eliminates the entry $A_{jk}$.]

  Return the resulting matrix $A$.
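To make the loop structure concrete, here is a small illustrative sketch of the reduction over $K = \mathbb{Q}$ (our code, not the paper's; the function names are ours). Since every entry of a graded matrix is a monomial $\lambda \cdot T^{m_k - n_j}$ whose degree is forced by the grading, a column can be stored by its coefficients $\lambda$ alone, and the column operation acts on these coefficients:

```python
from fractions import Fraction

def low(col):
    """1-based index of the lowest non-zero entry (0 if the column is zero)."""
    for j in range(len(col), 0, -1):
        if col[j - 1] != 0:
            return j
    return 0

def homogeneous_reduction(columns):
    """Sketch of the homogeneous matrix reduction (Algorithm 5.4) over K = Q.

    columns[k][j] holds the coefficient lambda of the homogeneous entry
    lambda * T^(m_{k+1} - n_{j+1}); the monomial degrees never have to be
    tracked explicitly, because subtracting (A_jk / A_jk') times column k'
    from column k acts on the coefficients alone.
    """
    cols = [[Fraction(x) for x in col] for col in columns]
    owner = {}  # low index j -> earlier column k' with that low
    for k, col in enumerate(cols):
        ell = low(col)
        for j in range(ell, 0, -1):  # descending, as in the algorithm
            if col[j - 1] != 0 and j in owner:
                other = cols[owner[j]]
                factor = col[j - 1] / other[j - 1]
                for i in range(j):  # other[i] = 0 for i >= j anyway
                    col[i] -= factor * other[i]
        if low(col) != 0:
            owner[low(col)] = k
    return cols
```

After the run, all non-zero columns have pairwise distinct low indices, matching part 2 of Proposition 5.5.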
Proposition 5.5. Let $K$ be a field, let $r, s \in \mathbb{N}$, let $n_1, \dots, n_r, m_1, \dots, m_s \in \mathbb{N}$ be monotonically increasing, and let $A \in M_{r \times s}(K[T])$ be an $(n_*, m_*)$-graded matrix. Then:

1. The homogeneous matrix reduction algorithm (Algorithm 5.4) terminates on this input after finitely many steps (relative to the arithmetic on $K$).

2. The resulting matrix $A'$ is reduced and there is a graded $s \times s$-matrix $B$ over $K[T]$ that admits a graded inverse and satisfies $A' = A \cdot B$.

3. The low-entries of the resulting matrix $A'$ are the elementary divisors of $A$ over $K[T]$.

4. We have
$$F/\operatorname{im} A \cong_{K[T]\mathrm{Mod}^*} \bigoplus_{j \in I} \Sigma^{n_j} K[T]/(T^{m_{k(j)} - n_j}) \oplus \bigoplus_{j \in I'} \Sigma^{n_j} K[T],$$
where $F := \bigoplus_{j=1}^r \Sigma^{n_j} K[T]$ and $I := \{\operatorname{low}_{A'}(k) \mid k \in \{1, \dots, s\}\} \setminus \{0\}$ as well as $I' := \{1, \dots, r\} \setminus I$. For $j \in I$, let $k(j) \in \{1, \dots, s\}$ be the unique(!) index with $\operatorname{low}_{A'}(k(j)) = j$.
Proof. Ad 1. Well-definedness follows from the observation mentioned in the algorithm: As every homogeneous polynomial in $K[T]$ is of the form $\lambda \cdot T^d$ with $\lambda \in K$ and $d \in \mathbb{N}$, and as the matrix is graded, the corresponding division can be performed in $K[T]$ and the gradedness of the matrix is preserved by the elimination operation. Termination is then clear from the algorithm.

Ad 2. As we traverse the columns from left to right, a straightforward induction shows that no two columns can remain that have the same non-zero value of $\operatorname{low}_A$. The product decomposition comes from the fact that we only applied elementary homogeneous column operations without swaps.

Ad 3. Because the resulting matrix $A'$ is obtained through elementary column operations from $A$, the elementary divisors of $A'$ and $A$ coincide. Applying Lemma 5.6 to $A'$ proves the claim.

Ad 4. In view of the second part, we have that $F/\operatorname{im} A \cong_{K[T]\mathrm{Mod}^*} F/\operatorname{im} A'$. Therefore, the claim is a direct consequence of Lemma 5.6.
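Once a matrix is reduced, the decomposition in part 4 is determined by the low indices and the two gradings alone. A possible helper for reading it off (hypothetical names and conventions, ours, not the paper's): each pair $(n_j, d)$ below stands for a torsion summand $\Sigma^{n_j} K[T]/(T^d)$, and $(n_j, \text{None})$ for a free summand $\Sigma^{n_j} K[T]$:

```python
def graded_decomposition(lows, n, m):
    """Summands of F / im A' as in Proposition 5.5(4).

    lows[k] is the 1-based low index of column k of the reduced matrix
    (0 for a zero column); n, m are the monotone gradings n_*, m_*.
    Returns one pair per j in {1, ..., r}:
      (n_j, m_{k(j)} - n_j)  if j is a low index (torsion summand),
      (n_j, None)            otherwise (free summand).
    """
    k_of = {j: k for k, j in enumerate(lows) if j != 0}  # j -> unique k(j)
    summands = []
    for j in range(1, len(n) + 1):
        if j in k_of:
            summands.append((n[j - 1], m[k_of[j]] - n[j - 1]))
        else:
            summands.append((n[j - 1], None))
    return summands
```

The uniqueness of $k(j)$ used in the dictionary comprehension is exactly the reducedness of the matrix: no two columns share a non-zero low index.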
Lemma 5.6. Let $K$ be a field, let $r, s \in \mathbb{N}$, let $n_1, \dots, n_r, m_1, \dots, m_s \in \mathbb{N}$ be monotonically increasing, and let $A \in M_{r \times s}(K[T])$ be an $(n_*, m_*)$-graded matrix that is reduced. Then:

1. The low-entries of $A$ are the elementary divisors of $A$ over $K[T]$.

2. Let $F := \bigoplus_{j=1}^r \Sigma^{n_j} K[T]$ and $I := \{\operatorname{low}_A(k) \mid k \in \{1, \dots, s\}\} \setminus \{0\}$ as well as $I' := \{1, \dots, r\} \setminus I$. Then
$$F/\operatorname{im} A \cong_{K[T]\mathrm{Mod}^*} \bigoplus_{j \in I} \Sigma^{n_j} K[T]/(T^{m_{k(j)} - n_j}) \oplus \bigoplus_{j \in I'} \Sigma^{n_j} K[T],$$
where, for $j \in I$, $k(j)$ denotes the unique index with $\operatorname{low}_A(k(j)) = j$.

Proof.
Ad 1. Let $k \in \{1, \dots, s\}$ with $\ell := \operatorname{low}_A(k) \neq 0$. Then we can clear out all the entries of $A$ in column $k$ above $\ell$ by elementary row operations (again, the gradedness of $A$ ensures that this is possible). Swapping zero rows and columns appropriately thus results in a matrix in rectangular "diagonal" form; moreover, as all the "diagonal" entries are monomials, we can swap rows and columns to obtain a matrix $A'$ in Smith normal form that both has the same elementary divisors as $A$ and whose elementary divisors are precisely the low-entries of $A$. In particular, these elementary divisors must coincide.

Ad 2. The claim is clear if $A$ is already in Smith normal form. By construction, there are square matrices $B$ and $C$ that are invertible over $K[T]$ and represent graded $K[T]$-isomorphisms with $A' = C \cdot A \cdot B$. In particular, $F/\operatorname{im} A \cong_{K[T]\mathrm{Mod}^*} (C \cdot F)/\operatorname{im} A'$. By construction, the values of $\operatorname{low}_{A'}$ and the degrees of $A'$ differ from the ones of $A$ only by compatible index permutations. Therefore, the claim follows.
5.3 Existence of a graded decomposition

To prove existence in Theorem 5.1, we can follow the standard proof pattern of first finding a (graded) finite presentation and then applying (homogeneous) matrix reduction.

Let $M$ be a finitely generated graded $K[T]$-module. Then $M$ also has a finite generating set consisting of homogeneous elements. This defines a surjective graded $K[T]$-homomorphism
$$\varphi \colon F := \bigoplus_{j=1}^{r} \Sigma^{n_j} K[T] \longrightarrow M$$
for suitable $r \in \mathbb{N}$ and monotonically increasing $n_1, \dots, n_r \in \mathbb{N}$. As $\varphi$ is a graded homomorphism, $\ker \varphi \subset F$ is a graded $K[T]$-submodule and we obtain an isomorphism $M \cong_{K[T]\mathrm{Mod}^*} F/\ker \varphi$ of graded $K[T]$-modules.

Because $K[T]$ is a principal ideal domain, the graded submodule $\ker \varphi \subset F$ is finitely generated over $K[T]$. Because $\ker \varphi$ is a graded submodule, $\ker \varphi$ has a finite homogeneous generating set. (In fact, there also exists a homogeneous free $K[T]$-basis for $\ker \varphi$, as can be seen from a straightforward inductive splitting argument [Web85, Lemma 1].) In particular, there exist $s \in \mathbb{N}$, monotonically increasing $m_1, \dots, m_s \in \mathbb{N}$, and a graded $K[T]$-homomorphism
$$\psi \colon E := \bigoplus_{k=1}^{s} \Sigma^{m_k} K[T] \longrightarrow F$$
with $\operatorname{im} \psi = \ker \varphi$. Because $\psi$ is graded and $n_*$, $m_*$ are monotonically increasing, the $r \times s$-matrix $A$ over $K[T]$ that represents $\psi$ with respect to the canonical homogeneous bases of $E$ and $F$ is graded in the sense of Definition 5.2. Applying the homogeneous matrix reduction algorithm to $A$ shows that $M \cong_{K[T]\mathrm{Mod}^*} F/\operatorname{im} A$ has the desired decomposition (Proposition 5.5; after discarding the irrelevant terms of the form $\Sigma^{n} K[T]/(T^0)$). This completes the proof of the structure theorem (Theorem 5.1).
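A minimal worked example of this pipeline (our illustration, not from the text): take $M := K[T]/(T^2)$. A homogeneous generator in degree $0$ yields the surjection $\varphi \colon \Sigma^0 K[T] \to M$ with $\ker \varphi = (T^2)$, so a presentation is given by
$$\psi \colon \Sigma^{2} K[T] \longrightarrow \Sigma^{0} K[T], \qquad A = (T^2),$$
which is $(n_*, m_*)$-graded for $n_1 = 0$, $m_1 = 2$. The matrix $A$ is already reduced; its single low-entry $T^2 = T^{m_1 - n_1}$ gives $M \cong_{K[T]\mathrm{Mod}^*} \Sigma^{0} K[T]/(T^2)$, i.e., a class born in degree $0$ that dies at stage $2$.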
Remark 5.7. There is a general matrix reduction for a slightly different notion of "graded" matrices over ($\mathbb{Z}$-)graded principal ideal domains [PvG82]. However, one should be aware that such "graded" matrices in general only lead to graded homomorphisms once one is allowed to change the grading on the underlying free modules. This explains why this general matrix reduction does not contradict the counterexample in the case of 0-graded principal ideal rings in Example 4.1.
5.4 Barcodes

For the sake of completeness, we recall the relation between graded decompositions and barcodes:

Remark 5.8 (barcodes of persistence modules). Let $K$ be a field and let $(M^*, f^*)$ be an $\mathbb{N}$-indexed persistence $K$-module of finite type. We equip $M := \bigoplus_{n \in \mathbb{N}} M^n$ with the canonical graded $K[T]$-module structure (Example 2.7). By the graded structure theorem (Theorem 5.1), there exist $N \in \mathbb{N}$, $n_1, \dots, n_N \in \mathbb{N}$, and $k_1, \dots, k_N \in \mathbb{N}_{>0} \cup \{\infty\}$ with
$$M \cong_{K[T]\mathrm{Mod}^*} \bigoplus_{j=1}^{N} \Sigma^{n_j} K[T]/(T^{k_j}).$$
Let $B$ be the multiset of all $(n_j, k_j - 1)$ with $j \in \{1, \dots, N\}$; then $B$ is uniquely determined by $M$, and this multiset $B$ is the barcode of $(M^*, f^*)$.

The barcode contains the full information on the isomorphism type of the graded $K[T]$-module $M$ (and the underlying persistence module) and describes the birth, death, and persistence of elements as specified by the "elder rule": If $(n, p)$ is an element of the barcode, this means that a new independent class is born at stage $n$, it persists for $p$ stages, and it dies (if $p \neq \infty$) at stage $n + p + 1$. In particular, this leads to the notion of barcodes of persistent homology (in a given degree) of finite type persistence chain complexes and finite type filtrations in topology.
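Passing from a graded decomposition to the barcode can be sketched in code (an illustrative sketch with our conventions, not the paper's: a summand $\Sigma^{n_j} K[T]/(T^{k_j})$ is encoded as the pair $(n_j, k_j)$, with $k_j = \infty$ for a free summand):

```python
import math

def barcode(summands):
    """Barcode multiset of Remark 5.8: a summand (n_j, k_j) contributes the
    bar (n_j, k_j - 1), i.e. a class born at stage n_j that persists for
    k_j - 1 stages and, if k_j is finite, dies at stage n_j + k_j.
    Sorted, since the barcode is a multiset with no preferred order."""
    return sorted((n, k - 1) for n, k in summands)
```

Free summands stay infinite bars automatically, because `math.inf - 1` is still `math.inf`.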
References

[BCB20] Magnus Bakke Botnan and William Crawley-Boevey. Decomposition of persistence modules. Proc. Amer. Math. Soc., 148(11):4581–4596, 2020. Cited on page: 2

[CdS10] Gunnar Carlsson and Vin de Silva. Zigzag persistence. Found. Comput. Math., 10(4):367–405, 2010. Cited on page: 2

[CZCG04] Gunnar Carlsson, Afra Zomorodian, Anne Collins, and Leonidas Guibas. Persistence Barcodes for Shapes. In Roberto Scopigno and Denis Zorin, editors, Symposium on Geometry Processing. The Eurographics Association, 2004. Cited on page: 1

[EH10] Herbert Edelsbrunner and John L. Harer. Computational topology. An introduction. American Mathematical Society, Providence, RI, 2010. Cited on page: 1, 6

[PvG82] R. Puystjens and J. van Geel. Diagonalization of matrices over graded principal ideal domains. Linear Algebra Appl., 48:265–281, 1982. Cited on page: 5, 9

[SVJ13] Primoz Skraba and Mikael Vejdemo-Johansson. Persistence modules: Algebra and algorithms. 2013. arXiv:1302.2015 [cs.CG]. Cited on page: 2

[VO83] F. Van Oystaeyen. Generalized Rees rings and arithmetical graded rings.
|
445 |
+
page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
446 |
+
page_content=' Algebra, 82(1):185–193, 1983.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
447 |
+
page_content=' Cited on page: 2, 4 [Web85] Cary Webb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
448 |
+
page_content=' Decomposition of graded modules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
449 |
+
page_content=' Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
450 |
+
page_content=' Amer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
451 |
+
page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
452 |
+
page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
453 |
+
page_content=', 94(4):565–571, 1985.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
454 |
+
page_content=' Cited on page: 9 [ZC05] Afra Zomorodian and Gunnar Carlsson.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
455 |
+
page_content=' Computing persistent homology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
456 |
+
page_content=' Discrete Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
457 |
+
page_content=' Geom.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
458 |
+
page_content=', 33(2):249–274, 2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
459 |
+
page_content=' Cited on page: 1, 2, 3, 6 Clara L¨oh Fakult¨at f¨ur Mathematik, Universit¨at Regensburg, 93040 Regensburg clara.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
460 |
+
page_content='loeh@mathematik.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
461 |
+
page_content='uni-r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
462 |
+
page_content='de, https://loeh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
463 |
+
page_content='app.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
464 |
+
page_content='ur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
465 |
+
page_content='de' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFKT4oBgHgl3EQfNy2J/content/2301.11756v1.pdf'}
|
1dFLT4oBgHgl3EQfpy-5/content/2301.12137v1.pdf
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:7e413712f23fcdfa6e417cb008d81fa85e3165cfd2925bb3376e228c73d0127f
|
3 |
+
size 303973
|
1dFLT4oBgHgl3EQfpy-5/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:829292d4adfc5bf683710cc871c447e321dff79f7c6deef812eaa261235884a5
|
3 |
+
size 7733293
|
2dFST4oBgHgl3EQfXzgB/content/tmp_files/2301.13785v1.pdf.txt
ADDED
@@ -0,0 +1,1127 @@
Commitment Against Front Running Attacks∗

Andrea Canidio† and Vincent Danos‡

February 1, 2023

Abstract

We provide a game-theoretic analysis of the problem of front-running attacks. We use it to distinguish attacks from legitimate competition among honest users for having their transactions included earlier in the block. We also use it to introduce an intuitive notion of the severity of front-running attacks. We then study a simple commit-reveal protocol and discuss its properties. This protocol has costs because it requires two messages and imposes a delay. However, it is effective at preventing the most severe front-running attacks while preserving legitimate competition between users, guaranteeing that the earliest transaction in a block belongs to the honest user who values it the most.

Keywords: Front running, Game theory, Ethereum, MEV, Transaction reordering, commit-reveal

1 Introduction
On the Ethereum network, each validator decides how to order pending transactions to form the next block, which determines the order in which these transactions are executed. As a consequence, users often compete with each other to have their transactions included earlier in a block, either by paying transaction fees or by making side payments directly to validators.1 This form of competition can be beneficial because it ensures that a scarce resource (i.e., having a transaction included earlier in the block) is allocated to the user who values it the most.2 But at the same time, it opens the possibility of front-running attacks: because pending transactions are public, a malicious user can observe a victim's incoming transaction, craft a new transaction, and then pay to place it before that of the victim.

Importantly, legitimate competition and attacks are often difficult to distinguish. As an illustrative example, consider a smart contract programmed to award a valuable NFT to the first person who correctly answers a question. Assume, crucially, that the smart contract does not have an explicit mechanism to resolve competing claims to the object (i.e., by running an auction among those who provided the correct answer) and settles claims in order of arrival. In this example, competition between users can arise in two cases. In the first case, two users simultaneously and independently find the answer. Each submits it and competes to have his/her transaction included earlier in the block. Because the user who values the NFT the most is willing to pay more, this user should be able to place his transaction before that of the opponent, thereby winning the NFT. In the second case, an honest user finds the answer and sends it to the smart contract. A malicious user observes the transaction, copies it, and competes to have its copy included in the block earlier than the original transaction.

From the observational point of view, the above two situations are identical: two users submit the same answer and then compete to have it included earlier in the block. Despite this, the first is an example of legitimate competition because users do not exploit their observation of the opponent's transaction. Hence, each user would have submitted his answer even in the absence of the other user. The second is an attack because the attacker cannot send his transaction if he does not observe the victim's transaction. Furthermore, the extent to which an attacker relies on the victim's message can be interpreted as a measure of the severity of a front-running attack. For example, the "attacker" could be another user who, through his own research, narrowed the correct answer down to two or three possibilities. This attack seems less severe relative to a situation in which the attacker has no prior information.

∗We are grateful to Agostino Capponi, Jiasun Li, Christof Ferreira Torres, Arthur Gervais, Ari Juels, and the participants at UBRI Connect 2022 and Tokenomics 2022 for their comments and suggestions. We gratefully acknowledge the financial support of the Ethereum Foundation (grant FY22-0840).
†IMT School for Advanced Studies, Lucca, Italy
‡CNRS and École Normale Supérieure (France)
1 Competition through higher transaction fees occurs via "gas replacement" transactions, whereby a pending transaction is resubmitted with a higher fee. The resulting game is akin to an auction (see Daian et al. (2019)). The most popular way to make side payments to validators is to use flashbots (see https://github.com/flashbots/pm).
2 Whether this is the most efficient way to achieve this goal is a different issue that we do not address here.

arXiv:2301.13785v1 [econ.TH] 31 Jan 2023
Our simple, illustrative example is representative of the workings of most smart contracts, including those at the core of decentralized finance protocols. For example, Automated Market Makers (AMMs) are the dominant type of decentralized exchange. They allow users to swap one token for a different one at a price that is mechanically derived from the size of two liquidity pools (one per token traded). Because swapping one token for another changes the liquidity in each pool, if no liquidity is added, then a sequence of users performing the same swap will face worse and worse terms.3 Hence, two users who independently decide to perform the same swap on an AMM will compete to obtain a better rate. Competition ensures that the user who most values obtaining the better rate (perhaps because the swap is part of a sequence of atomic transactions) can obtain it. Alternatively, a user may want to perform a swap. Upon observing this transaction, an attacker will front-run the victim with the same swap and then back-run her with the opposite swap, in what is called a sandwich attack or insertion attack.4 Again, the second example is an attack because the attacker uses information obtained by observing the victim's transaction.

In this paper, we propose a game-theoretic model of front-running. Our goals are twofold. First, inspired by the above discussion, we aim to provide a formal definition of front-running attacks (vs. legitimate competition among honest users) as well as of their severity. Second, we study a simple commit-reveal protocol that can be implemented at the smart contract level without modifying the underlying Ethereum infrastructure or introducing third parties (or layer-2 networks). In the simplest version of the protocol, the user concatenates the desired message with the address from which the reveal message will be sent and passes this into a function with an intractable pre-image problem (for example, the SHA-256 hash function).5 The resulting output is the commit message, which the user sends to the smart contract. Then, the user sends the reveal message to the smart contract, where the reveal message is simply the desired message. The smart contract receiving a reveal message will execute it only if the concatenation of the reveal message with the address from which it was received corresponds to the commit message.

3 Like in our simple illustrative example, AMMs do not have an explicit mechanism to allocate the "better rate" to one of the users and instead rely on the order in which transactions are aggregated into a block.
4 See Eskandari et al. (2019) for a discussion of this type of front running. See also Park (2022) for an analysis of these types of attacks in the context of AMMs.
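This simplest version of the protocol can be sketched in a few lines. The sketch below is our own illustration in Python; the paper does not prescribe an encoding, so the string concatenation, the example message, and the hex digest format are assumptions:

```python
import hashlib

def make_commit(message: str, reveal_address: str) -> str:
    """Commit = SHA-256 hash of the desired message concatenated with
    the address from which the reveal message will later be sent."""
    return hashlib.sha256((message + reveal_address).encode()).hexdigest()

def accept_reveal(commit: str, message: str, sender_address: str) -> bool:
    """The smart contract executes the reveal only if re-hashing the
    revealed message with the sender's address reproduces the commit."""
    return make_commit(message, sender_address) == commit

# An honest user commits, then reveals from the same address.
commit = make_commit("answer=42", "0xA11CE")
assert accept_reveal(commit, "answer=42", "0xA11CE")

# An attacker who copies the reveal but sends it from his own address
# fails the check: he never committed for that address.
assert not accept_reveal(commit, "answer=42", "0xB0B")
```

Binding the commitment to the reveal address is what blocks the copy-and-front-run attack of the NFT example: the copied reveal arrives from the attacker's address, so it no longer matches the victim's commit.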
The key observation is that an attack involves two steps: (i) committing a message without knowing what message the victim will send, and (ii) after observing the victim's reveal message, deciding to send the committed message or no message at all. Furthermore, the victim may use a newly created address to send the commit message. When this is the case, the attacker may observe that someone sent a commit message to the smart contract, but not whom. The protocol, therefore, forces the attacker to make a costly guess: send a costly commit message without knowing whether a given victim committed (and will reveal) nor what the victim committed. At the same time, the protocol does not impede legitimate competition between users: two honest users can commit their messages and then compete to have their reveal message included earlier in the block.
We derive conditions under which an honest player is better off using the protocol than Ethereum's standard procedure.6 On the cost side, the protocol requires sending two messages instead of one and imposes a delay. Hence, if the costs of sending messages or waiting are high, the protocol is worse than the standard way to send transactions; if they are low, the protocol is preferred. On the benefit side, the protocol can eliminate front-running attacks, especially when it is difficult for an attacker to guess, that is, when the expected payoff of an attacker who commits without knowing whether the victim committed and what message was committed is low. By the definition of the severity of the attack introduced earlier, we can say that our protocol is most effective when the severity of the attack is high and less effective when the severity of the attack is low.

5 In this version, an attacker observes that a smart contract received a commit message (but not necessarily from whom, if the commit message is sent from a brand-new address). Later, we discuss a more complex protocol in which the receiver of the commit message is obfuscated.
6 We believe that, for our purposes, the honest player's welfare is the most sensible criterion by which to evaluate the protocol. The alternative would be to consider the smart contract's perspective. Note, however, that certain types of attacks can be quite profitable from the smart contract's viewpoint—for example, sandwich attacks in the context of AMMs.
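The cost-benefit trade-off for an attacker can be illustrated with a back-of-the-envelope expected-payoff calculation. The parameter values below are hypothetical and chosen only to illustrate the mechanism; they are not taken from the paper's formal model:

```python
# Hypothetical parameters (illustrative only).
p_victim_acts = 0.3    # probability a victim commits and will reveal this block
p_guess_right = 0.5    # probability the attacker's blind commit is usable
gain_if_attack = 10.0  # attacker's payoff from a successful attack, in USD
msg_cost = 2.0         # cost of sending one message, in USD

# Without the protocol, the attacker crafts the attack *after* seeing
# the victim's message, so he pays the message cost only when he attacks.
ev_no_protocol = p_victim_acts * (gain_if_attack - msg_cost)

# Under the protocol, the commit must be sent blindly in advance;
# the reveal is sent only when the blind guess turns out to be usable.
p_attack = p_victim_acts * p_guess_right
ev_protocol = p_attack * (gain_if_attack - msg_cost) - msg_cost

print(ev_no_protocol)  # 2.4: attacking is profitable
print(ev_protocol)     # -0.8: the blind commit cost makes attacking unprofitable
```

With these numbers the blind commit turns a profitable attack into a losing one, which is precisely the "costly guess" effect described above; with a higher guessing probability (a less severe attack in the sense defined earlier), the sign can flip back.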
As an extension (see Section 5.1), we study a variation of the above protocol in which the identity of the receiver of the commit message is hidden. This variation hinges on the existence of a template code for a container smart contract. When committing, the user uses a brand-new address to create a container smart contract from the template and then sends the commit message to this newly created container, which time-stamps the commit message with the current block number. When sending the reveal message, the honest user also sends a pointer to the container smart contract where the commitment is located. The smart contract considers the commitment valid if the commit message is correct, its timestamp precedes the current block, and, crucially, the code of the container smart contract corresponds to the template. This way, an outside observer can only see that someone created a commitment smart contract and sent a commit message, but not who committed nor the target smart contract for that commitment. Guessing is even harder for an attacker, and hence the probability of an attack is even lower.

As a second extension (see Section 5.4), we introduce multiple attackers. Absent the protocol, competition pushes each attacker to overspend (relative to the single-attacker case). This is detrimental to both the attackers and the honest user. In particular, the weakest attacker always earns zero in expected terms.7 Under the commit-reveal protocol, by contrast, the commit message acts as a fixed cost of attacking in the next period. Because in the following period the weakest attackers earn zero, there is no equilibrium in which both attackers send the commit message with probability 1: either there is no attack; or a single attacker commits and then attacks; or both attackers commit (and then attack) with probability strictly less than one. As a consequence, in the best-case scenario the protocol eliminates attacks; in the worst-case scenario it reduces the level of competition between attackers, resulting in fewer resources spent on an attack, which benefits the honest player as well.

7 This result is a version of a well-known result in contest theory, that of "full dissipation of rents". See, for example, Fudenberg and Tirole (1987).
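The three-condition validity check performed by the target smart contract in the Section 5.1 extension can be sketched as follows. This is our own Python pseudocode of the check; the paper does not specify an implementation, and the bytecode-hash comparison and encoding choices are assumptions:

```python
import hashlib
from dataclasses import dataclass

# Hash of the publicly known container template bytecode (placeholder value).
TEMPLATE_CODE_HASH = hashlib.sha256(b"container-template-bytecode").hexdigest()

@dataclass
class Container:
    code_hash: str     # hash of the deployed container's bytecode
    commit: str        # the commit message it stores
    block_number: int  # block in which the commit was time-stamped

def commitment_is_valid(container: Container, reveal: str,
                        sender: str, current_block: int) -> bool:
    """All three conditions from the text must hold."""
    correct_commit = hashlib.sha256((reveal + sender).encode()).hexdigest()
    return (container.commit == correct_commit           # commit matches reveal + address
            and container.block_number < current_block   # committed in an earlier block
            and container.code_hash == TEMPLATE_CODE_HASH)  # container built from the template

commit = hashlib.sha256(("swap 10 X for Y" + "0xA11CE").encode()).hexdigest()
c = Container(TEMPLATE_CODE_HASH, commit, block_number=100)
assert commitment_is_valid(c, "swap 10 X for Y", "0xA11CE", current_block=101)
# A same-block commitment is rejected: committing must precede revealing.
assert not commitment_is_valid(c, "swap 10 X for Y", "0xA11CE", current_block=100)
```

Checking the container's code against the template is what lets the target contract trust a commitment it never received directly: any container with the template's code is known to time-stamp honestly.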
Prior work

Our commit-reveal protocol is novel but similar to existing proposals. Our main contribution is the type of analysis. In particular, we show that our protocol can eliminate the most severe front-running attacks while maintaining legitimate competition between users. As we discuss later, existing solutions are instead either primarily concerned with eliminating attacks (at the cost of also eliminating legitimate competition) or with better organizing competition (at the cost of exacerbating attacks). Furthermore, most of the literature has proposed solutions that reduce or eliminate front-running in Ethereum by changing its infrastructure or introducing third parties (see Heimbach and Wattenhofer (2022) for a review of the literature). Instead, our solution does not require third parties and can be implemented at the smart contract level, allowing for flexibility in its implementation. For example, each smart contract could decide that only some messages must follow the protocol to be considered valid, while other messages need not.8 Or a smart contract may decide that the protocol is required only during some periods (see Section 5.3).

With respect to existing solutions, our protocol can be seen as a simplified version of the submarine commitments protocol in Breidenbach et al. (2018): in both cases, a message is first committed and then revealed, and the commitment can be hidden in the sense that the identity of the sender and receiver of the commit message cannot be observed. The main difference is that we adopt a weaker notion of "commitment" because we allow users not to send a transaction after committing to it. The notion of "commitment" in Breidenbach et al. (2018) is instead stronger because users are penalized for not following through on their commitment.

As already mentioned, we provide a game-theoretic analysis of the properties of this protocol, applicable to any smart contract.9 In this respect, our work is inspired by Gans and Holden (2022), who develop a game-theoretic analysis of the problem of front-running arising when an honest user and an attacker claim the same reward. They also propose a protocol that eliminates these types of attacks. Their key assumption is that the legitimate claimant strictly prefers the reward to be burned rather than paid to the attacker. Therefore, these results are useful in some environments where front-running may emerge, but not in all. For example, front-running attacks are a serious concern in AMMs, but in this context it may not be possible to "burn the reward".

8 See the discussion in Section 5.2. For example, in the context of AMMs, it may make sense that users who want to provide or withdraw liquidity need not follow the commit-reveal protocol, which is instead required for swaps.
9 Breidenbach et al. (2018) analyze the properties of the submarine commitment scheme in the context of a bug-bounty scheme they propose.
Flashbots is a well-known project aiming to better organize competition among users. The premise is that competition through transaction fees can lead to so-called "gas wars", by which a given block is filled with transactions that will fail (because only the first one can be correctly executed). Gas wars therefore impose a negative externality on all users because they lead to congestion and higher transaction fees. The idea is to eliminate these negative externalities by allowing users to pay validators directly, thereby keeping their messages private.10 Doing so, however, makes it extremely easy to attack an honest user who sends his or her message publicly (see Capponi et al. (2022)).

Other solutions impose exogenous criteria for ordering transactions, preventing attacks but also hindering legitimate competition. Kelkar et al. (2020) propose the Aequitas protocol, a method to achieve ordering-fairness, by which if sufficiently many nodes observe a given transaction arriving before a different transaction, this ordering should be maintained when these transactions are included in a block.11 There are also commit-reveal schemes intermediated by third parties in charge of, for example, reorganizing incoming transactions while also encrypting and then decrypting them. In this respect, the best-known solution is the Shutter network, in which a network of nodes called "keypers" jointly generates cryptographic keys with which users encrypt their transactions. Users then submit these transactions to a batcher contract that also orders them. Finally, keypers broadcast the decryption key, and the transactions are decrypted and sent to the target smart contracts.

A concept often associated with front-running attacks is that of maximal extractable value (MEV), defined as "the maximum value that can be extracted from block production in excess of the standard block reward and gas fees by including, excluding, and changing the order of transactions in a block".12 Most existing measures of total MEV are not very useful in our context, as they capture both users' legitimate competition (sometimes called "good MEV") and attacks (sometimes called "bad MEV"). A few papers, however, specifically identify profits extracted from attacks. Torres et al. (2021) collect on-chain data from the inception of Ethereum (July 30, 2015) until November 21, 2020. They estimate that these attacks generated 18.41M USD in profits for the attackers, of which 13.9M USD was due to sandwich (also called insertion) attacks. They also identify instances where several attackers competed to attack the same victim. Similarly, Qin et al. (2022) consider a later period (from the 1st of December, 2018 to the 5th of August, 2021) and find that sandwich attacks generated 174.34M USD in profits.

10 We note that our protocol also reduces competition among attackers and hence reduces "gas wars".
11 See also the hedera-hashgraph project (Baird (2016)).
12 See https://ethereum.org/en/developers/docs/mev/.
259 |
+
The profits reported in the literature correspond to a situation in which an attacker can craft his attack after observing the victim's message, which is impossible with our protocol. Nonetheless, we can use these measures as upper bounds for an attacker's profits under our protocol. For example, suppose that all attacks reported in Torres et al. (2021) are generated from attacking a single smart contract (remember that in our protocol, commitments are specific to a given target smart contract) and that the attacker is uninformed concerning when the victim will act. In this case, an attacker must commit a message every block, hoping a victim will do something. The profit per block reported by Torres et al. (2021) (and hence per commitment) is 1.23 USD. Currently, the simplest possible transaction on Ethereum costs approximately 2 USD (the base fee of a simple 21,000-gas transaction), and hence front-running attacks would not be profitable under our protocol.
We can repeat the same exercise using the measures in Qin et al. (2022). Interestingly, Qin et al. (2022) report that "the most sandwich attack-prone ERC-20 token is SHIB, with an adversarial profit of 6.8M USD". Because each pool of an AMM is a different smart contract, 6.8M USD is an upper bound on the profits extracted by attacking a single smart contract.13 Repeating the same calculation discussed earlier yields profits per block of approximately 8 USD.14 Remember that sandwich attacks
13 It is precisely the profits extracted by attacking a single smart contract if SHIB is traded only on an AMM and only against one other token. It will be lower if SHIB is traded against multiple tokens and/or on multiple AMMs.
14 SHIB was created in August 2020 and did not see much price action until April 2021. Here we assume that all front-running attacks on SHIB occurred between April 2021 and August 2021 (when their data collection stopped), a period of 821,798 blocks.
require two messages. Hence, if profits are reduced by half (or more) by the inability to observe the victim's message beforehand, these attacks are not profitable under our protocol.
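The back-of-envelope numbers above are easy to reproduce. The sketch below uses the profit figures quoted in the text; the 821,798-block window for SHIB is the assumption stated in footnote 14, and the 2 USD base fee is the approximation used in the text.

```python
# Back-of-envelope profitability of blind per-block commitments, using the
# figures quoted in the text (the SHIB block count is the paper's assumption).

def profit_per_block(total_profit_usd: float, n_blocks: int) -> float:
    """Average attacker profit per block, and hence per blind commitment."""
    return total_profit_usd / n_blocks

BASE_FEE_USD = 2.0  # approximate cost of the simplest Ethereum transaction

# Qin et al. (2022): 6.8M USD of SHIB sandwich profits over 821,798 blocks.
shib_per_block = profit_per_block(6.8e6, 821_798)
print(f"SHIB: {shib_per_block:.2f} USD per commitment")  # ~8.27 USD

# Torres et al. (2021): 1.23 USD per block. A blind commitment costing the
# base fee already loses money on average.
print("blind Torres-style attack profitable?", 1.23 > BASE_FEE_USD)  # False
```

Since a sandwich attack requires two messages, the relevant cost in the SHIB case is twice the base fee, which is why halving the profits is enough to make the attack unprofitable.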
2 The problem: front-running attacks
As a benchmark case, we develop a model of front-running attacks and later introduce our protocol. There is a smart contract SC and two players: Alice (A) and Bob (B). There is a piece of information (call it "the state of the world") s ∈ S that only A learns at the beginning of the game. Absent front-running attacks, after observing s, player A sends a message σ̃A ∈ Σ to the mempool (i.e., the set of pending transactions), where Σ ≠ ∅ is the space of possible messages. As soon as the message σ̃A is included in a block, the smart contract SC performs an action that generates a benefit P̃A(σ̃A, s) to player A.
Front-running attacks arise because messages in the mempool are public. Hence, after A sends a message to the mempool, this message is observed by B, who can send a counter-message σ̃B ∈ Σ. If σ̃B is included in the blockchain before A's message, then B earns P̃B(σ̃B, σ̃A, s) while A earns nothing. Otherwise, B earns nothing and A earns P̃A(σ̃A, s).
Sending messages is costly. Each player can send a regular message by paying c > 0. If multiple regular messages are sent, they are included in the block in the order they are sent. We can think of c as the base fee: a fee that should guarantee the inclusion of a transaction in the next block, at least outside of periods of rapid change in the demand for transactions.15 Player B, however, can also pay f > c to send a "fast" message that, with probability q, is included in the block before A's regular message, despite A's message being sent first. For example, f could be the cost of sending a transaction via a service such as Flashbots, or it could be a regular mempool transaction with a transaction fee significantly above the base fee. Here we consider the parameters q, c, and f as exogenous and determined by the technology available to A and B. We relax this assumption in Section 5.4, in
15 The concept of the base fee was introduced with the EIP-1559 upgrade. See the original proposal at https://eips.ethereum.org/EIPS/eip-1559. For an economic analysis of EIP-1559, see Roughgarden (2020).
which we introduce multiple B players choosing their own f, which then determines the probability that a given B player successfully front-runs both A and the other B players.
In terms of applications, consider the example we discussed in the introduction: a smart contract that rewards whoever can correctly answer a question. In this case, B will learn the correct answer by observing A's message and then try to submit the same answer before A. Formally, s = σA(s) = σB(s). Our model also fits a famous (nonfictional) example: that discussed in the blog post "Ethereum is a dark forest" (Robinson and Konstantopoulos, 2020). In this example, two researchers wanted to recover some tokens that a user sent to an incorrect address. They realized that anyone who knew about these tokens could have stolen them. Despite their efforts, their attempt to recover these tokens revealed their existence to an attacker, who managed to front-run them and steal the tokens. In the context of our model, again σA(s) = σB(s). Another fitting example is that of an AMM. Player A is a liquidity provider who, upon learning some private information s, decides to withdraw some or all of the liquidity provided. By observing such a message, B can infer that something has changed in the environment and try to steal the same liquidity. In this case, σA(s) = {withdraw my liquidity} and σB(s) = {swap some tokens}.16 Also relevant in the context of AMMs are sandwich attacks, in which A sends the message σA(s) = {swap some tokens}, B front-runs A with the message {perform the same swap as A}, and B then "back-runs" A with the message {perform the opposite swap as A}. This attack is profitable because it exploits the slippage curve of the AMM. Although we do not explicitly allow B to back-run A, the only difference in the analysis is that a sandwich attack is more costly than a simple front-running attack because it requires an additional message. It follows that all our results apply to sandwich attacks as well.
We make two simplifying assumptions. First, we assume that A is partially naive. She is naive in that she always chooses the message that maximizes her payoff given the state of the world; however, she is sophisticated in the choice of whether to send her message (or, in the next section, whether to initiate the protocol). We therefore rule
16 For a study of this type of attack, see Capponi and Jia (2021). For a study of similar attacks in the context of traditional exchanges, see Section 6 of Budish et al. (2015).
out the possibility that A chooses her message to manipulate B's belief about the state of the world, which we think is unrealistic.17 Mathematically, after observing the state of the world, if A sends a message, she sends the message

σA(s) ≡ argmax_{σ̃A ∈ Σ} P̃A(σ̃A, s).

Given this, we can re-define A's payoff in case she sends a message and is not front-run as:

PA(s) ≡ P̃A(σA(s), s).
The second simplifying assumption is that σA(s) is a bijection; that is, in each state of the world, there is a unique and distinct message maximizing player A's payoff. This is a useful simplification because A's message (if sent and observed) always reveals the state of the world. It follows that B's optimal counter-message after observing σA(s) and learning s is:

σB(s) ≡ argmax_{σ̃B ∈ Σ} P̃B(σ̃B, σA(s), s).

The resulting payoff for player B if he successfully front-runs A is:

PB(s) ≡ P̃B(σB(s), σA(s), s).
Equilibrium  The above assumptions allow us to write the extensive form of the game for given s as in Figure 1, which we can easily solve by backward induction. If A sends a message, then B attempts to front-run if and only if:

qPB(s) > f.

Given this, we can derive A's optimal strategy. Suppose the state of the world is such that qPB(s) < f, and A expects no front-running. In this case, she sends a
17 If A is fully sophisticated, then the equilibrium of the game is a partition of the possible states of the world S such that A sends the same message in all states of the world belonging to the same part of the partition. Upon observing the message, B learns the part of the partition but not the state of the world. The results for a given partition are identical to those presented here. However, deriving the equilibrium partition is non-trivial and of second-order importance relative to our main research question.
[Figure 1 here: A moves first, choosing between σA(s) and no message (payoffs (0, 0)). If A sends σA(s), B chooses between σB(s), yielding payoffs ((1 − q)PA(s) − c, qPB(s) − f), and no message, yielding (PA(s) − c, 0).]

Fig. 1: Game tree for given s.
message if and only if

PA(s) > c.

If, instead, the state of the world is such that qPB(s) > f, then A anticipates that B will try to front-run. In this case, A sends a message if and only if

(1 − q)PA(s) > c.

The following proposition summarizes these derivations.
Proposition 1 (Equilibrium). Player A's equilibrium strategy is:

σ∗A(s) = { ∅       if PA(s) < c, or if qPB(s) > f and (1 − q)PA(s) < c
         { σA(s)   otherwise                                              (1)

where σ∗A(s) = ∅ means that A does not send any message. Player B's equilibrium strategy is

σ∗B(s) = { σB(s)   if qPB(s) > f and σ∗A(s) ≠ ∅
         { ∅       otherwise                                              (2)
Hence, front-running does not happen when its benefit is low (i.e., PB(s) ≤ f/q). If, instead, its benefit is large (i.e., PB(s) > f/q), B will attempt to front-run A whenever A sends a message. In particular, when PA(s) > c but (1 − q)PA(s) < c, the threat of front-running prevents A from sending the message in the first place, therefore destroying the value of the exchange between A and SC.
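The backward-induction logic of Proposition 1 can be written as a small function. The numerical payoffs in the usage example are illustrative assumptions, chosen to show how the mere threat of front-running can deter A from sending a message at all.

```python
# Equilibrium strategies from Proposition 1, solved by backward induction.
# PA, PB: the players' payoffs in state s; c, f, q: as in the model.

def equilibrium(PA: float, PB: float, c: float, f: float, q: float):
    """Return (A sends her message?, B attempts to front-run?)."""
    b_would_attack = q * PB > f          # B's best response if A sends
    if b_would_attack:
        a_sends = (1 - q) * PA > c       # A anticipates the attack
    else:
        a_sends = PA > c
    return a_sends, b_would_attack and a_sends

# Illustrative numbers (assumptions): a cheap, likely-successful attack
# deters A entirely; a very expensive fast message restores trade.
print(equilibrium(PA=10, PB=50, c=2, f=1, q=0.9))   # (False, False)
print(equilibrium(PA=10, PB=50, c=2, f=60, q=0.9))  # (True, False)
```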
Front-running attacks vs. legitimate competition  In the introduction, we argued that the difference between front-running attacks and legitimate competition is whether the "attacker" relies on the information extracted from observing the victim's message. This intuitive notion can be easily formalized in the context of our model by considering a modified game in which player B chooses whether to send his message, and what message to send, without observing A's actions. We want to find necessary and sufficient conditions such that, in the equilibrium of this modified game, B does not want to send any message. Clearly, if B does not send any message, then A's optimal strategy is simply:

σ∗∗A(s) ≡ { σA(s)   if PA(s) ≥ c
          { ∅       otherwise                                              (3)

Given this, there is an equilibrium in which B does not send any message if and only if

Es[P̃B(σ̃B, σ∗∗A(s), s)] ≤ f   ∀σ̃B ∈ Σ.                                   (4)
In what follows, if in the equilibrium of the original game B sends a message and condition (4) holds, then we say that there is a front-running attack. If, instead, in the equilibrium of the original game B sends a message and condition (4) is violated, then we say that B is a legitimate competitor.18 As we will see, this distinction will play an important role in the next section, when we introduce our commit-reveal protocol. The reason is that the protocol reduces (but does not fully eliminate) B's ability to act upon A's message. If (4) holds, the expected benefit of an attack is reduced, and hence attacks are less likely. If, instead, (4) is violated, then B always has a profitable message to send, independently of his observation of A. In this case, the protocol has little impact on B's behavior, except for requiring him to send two messages. This means that the protocol reduces the expected return of an attack (i.e., when (4) holds) but has little impact on legitimate competition (i.e., when (4) is violated).
18 It is possible that (4) does not hold and hence B sends a message also when he does not observe A's actions. At the same time, he may choose a different message if he observes A's message. According to our definition, this is not a front-running attack, even if B uses A's message. This is justified by the observation that, in our model, A's payoff does not depend on what message B sends. Hence, the fact that B uses A's message to craft his message is irrelevant to A.
3 Preventing front-running via commitment

To address the problem of front-running attacks, we propose a commit-reveal protocol. In terms of notation, we call player A's commit message σA,1 and her reveal message σA,2. Similarly, player B's counter-messages are σB,1 and σB,2.

Formally, the protocol has a commitment period and a reveal period, which here are two subsequent blocks.19 If player A wants to send message σA ∈ Σ to SC, in the commit period A sends the commit message

σA,1 = S(addr, σA)

to SC, where addr is an address that A controls and S() is a function with an intractable pre-image problem (for example, Hash(addr|σA), where Hash() is the SHA-256 hash function). Once the commit message is included in a block, A sends the reveal message σA,2 = σA to SC from the address addr, which is then included in the next block. Upon receiving the message, SC computes S(addr, σA) and checks whether it received the message S(addr, σA) in the previous block.
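The handshake just described can be sketched off-chain in a few lines. Here SHA-256 plays the role of S(), and a dictionary indexed by block number stands in for SC's storage; the class and variable names are ours, not part of the protocol. Because the commitment binds the revealing address, the sketch also illustrates Lemma 1 below: copying someone else's commitment is useless, since the reveal from a different address fails verification.

```python
import hashlib

# Off-chain sketch of the commit-reveal handshake (names are illustrative).

def S(addr: str, message: str) -> str:
    """Commitment S(addr, sigma_A) = Hash(addr | sigma_A)."""
    return hashlib.sha256(f"{addr}|{message}".encode()).hexdigest()

class SmartContract:
    def __init__(self):
        self.commitments = {}  # block number -> set of commit messages received

    def commit(self, block: int, commitment: str) -> None:
        self.commitments.setdefault(block, set()).add(commitment)

    def reveal(self, block: int, sender_addr: str, message: str) -> bool:
        # Valid only if S(sender_addr, message) was committed in the previous block.
        return S(sender_addr, message) in self.commitments.get(block - 1, set())

sc = SmartContract()
sc.commit(block=100, commitment=S("0xAlice", "answer=42"))
print(sc.reveal(block=101, sender_addr="0xAlice", message="answer=42"))  # True
# A cloned commitment is worthless: B's address differs from addr.
print(sc.reveal(block=101, sender_addr="0xBob", message="answer=42"))    # False
```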
It follows that if B wants to front-run A, he will need to commit a message at the commit stage and then reveal it at the reveal stage. There is a common discount factor β ∈ [0, 1], so when a given payoff is earned with a one-block delay, this payoff is discounted by β. Finally, A does not observe B's commit message and hence cannot detect B's attempt to front-run. At the same time, we assume B observes A's commit message. In Section 5.1, instead, we introduce a modified protocol that allows A to hide her commit message.

Finally, we simplify the problem slightly by assuming that there is no state of the world s such that PA(s) ∈ [c, c + c/β]. Under this assumption, absent front-running, the states of the world in which A wants to send a message are the same with and without the protocol.20
19 In Section 5.3 we discuss in more detail the problem of specifying commit and reveal periods.
4 Solution

We start with a rather immediate result: there is no equilibrium in which B sends the same commit message as A. To see this, suppose that player A sends the commit message S(addr, σA) and player B sends the same commit message. If in the next period B sends the reveal message σA, then SC will consider B's earlier commitment invalid, because B's address is different from addr. It is also easy to see that there is no equilibrium in which A commits but then does not reveal, because A can do better by not committing at all. The next lemma summarizes these observations.

Lemma 1 (No cloning in equilibrium). There is no equilibrium in which σB,1 = σA,1. There is also no equilibrium in which A sends the commit message but not the reveal message.
In equilibrium, therefore, if B wants to attack, he needs to guess what message to commit without knowing the state of the world s. Nonetheless, B anticipates that he will observe A's message and, under our assumptions, will learn the state of the world. At that point, he can decide whether or not to send the message he initially committed. Therefore, the protocol severely limits, but does not totally eliminate, B's ability to act upon his observation of A's message. Hence, it is possible that (4) holds and, despite this, B can profitably attack.
running. In an equilibrium without front running, A’s optimal strategy is again
|
537 |
+
σ∗∗
|
538 |
+
A (s) as defined in (3). Given this, consider player B. Suppose that A sent her
|
539 |
+
20 We could alternatively assume that these states of the world exist but are not very important
|
540 |
+
from B’s viewpoint, in the sense that
|
541 |
+
pr
|
542 |
+
�
|
543 |
+
PA(s) ∈ [c, c + c
|
544 |
+
β ]
|
545 |
+
�
|
546 |
+
Es
|
547 |
+
�
|
548 |
+
PB(˜σB, σA(s), s)|PA(s) ∈ [c, c + c
|
549 |
+
β ]
|
550 |
+
�
|
551 |
+
is sufficiently small.
|
552 |
+
|
553 |
+
4 Solution
|
554 |
+
16
|
555 |
+
commit message, that B committed message σB, and that B then observed A's reveal message σA(s). In this case, B's expected payoff from front-running is

q · P̃B(σB, σA(s), s) − f.

Hence, upon observing σA(s) and learning s, B will try to front-run if and only if q · P̃B(σB, σA(s), s) > f. In the commitment phase, therefore, the best possible message B can commit is

σ̂B ≡ argmax_{σB ∈ Σ} Es[ max{q · P̃B(σB, σA(s), s) − f, 0} | σ∗∗A(s) ≠ ∅ ],
where the expectation is conditional on the state of the world being such that A sends a commit message. We define π as B's expected payoff if he commits σ̂B after observing that A committed a message:

π ≡ Es[ max{q · P̃B(σ̂B, σA(s), s) − f, 0} | σ∗∗A(s) ≠ ∅ ].

Hence, if A sends a commit message and B tries to front-run, B's expected payoff is βπ − c. We therefore have the following proposition:21
Proposition 2. If π ≤ c/β (i.e., "guessing is hard for B"), then there is no front-running in equilibrium. If, instead, π > c/β (i.e., "guessing is easy for B"), front-running occurs with strictly positive probability in equilibrium.

Note that in the "guessing is easy for B" case, there could be a pure-strategy equilibrium in which B commits with probability 1 whenever A commits, or a mixed-strategy equilibrium in which B commits with some probability. In either case, after committing, B attempts to front-run A or not depending on A's reveal message.
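The condition in Proposition 2 is straightforward to evaluate numerically once a state distribution is specified. In the sketch below, the discrete distribution over states (conditional on A committing) and all parameter values are illustrative assumptions of ours; π is computed exactly as defined above, as the expectation of max{q · P̃B − f, 0}.

```python
# Numerical check of Proposition 2: compute pi = E[max(q * PB_tilde - f, 0)]
# over states in which A commits, and compare it with c / beta.
# The state distribution and all parameter values are illustrative assumptions.

def pi_value(states, q: float, f: float) -> float:
    """states: list of (probability, PB_tilde) pairs, conditional on A committing."""
    return sum(p * max(q * pb - f, 0.0) for p, pb in states)

q, f, c, beta = 0.5, 1.0, 2.0, 0.9

# B's committed guess pays off only in a few (unlikely) states.
states = [(0.80, 0.0), (0.15, 3.0), (0.05, 20.0)]
pi = pi_value(states, q, f)

print(f"pi = {pi:.3f}, c/beta = {c / beta:.3f}")
print("guessing is hard for B (no front-running)?", pi <= c / beta)  # True
```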
It is easy to check that in the "guessing is hard for B" case, A's equilibrium payoff is

v∗A(s) = max{−c + β(PA(s) − c), 0}.

Therefore, the protocol generates both costs and benefits for player A. The main benefit is that the protocol reduces or eliminates front-running. The costs are two. The most evident one is that two messages are now required, which implies that A pays c twice. More subtle is the fact that the payoff is now earned with a one-block delay, and hence is discounted by the parameter β.
21 The existence of the equilibrium follows from the fact that the players' strategy space is finite, as noted already in Nash (1950).
4.1 Discussion

Attack vs. legitimate competition  It is instructive to consider what happens when B is an attacker (i.e., condition (4) holds) vs. a legitimate competitor (i.e., condition (4) is violated). To do so, we introduce the following condition:

Es[P̃B(σ̃B, σ∗∗A(s), s)] ≤ (c + f)/β   for some σ̃B ∈ Σ,                    (5)

which is akin to condition (4), but where the cost of sending a message is now the cost of participating in the commit-reveal protocol. Suppose first that the above condition is violated, which implies that (4) is violated and hence B is a legitimate competitor. Define
σ∗∗B ≡ argmax_{σB ∈ Σ} Es[P̃B(σB, σ∗∗A(s), s)]

as the best possible message that B can send when he is completely uninformed, earning him a payoff equal to

−c + β(Es[P̃B(σ∗∗B, σ∗∗A(s), s)] − f).
It is easy to see that

−c + β(Es[P̃B(σ∗∗B, σ∗∗A(s), s)] − f)
  = −Pr{σ∗∗A(s) = ∅} · c
  + Pr{σ∗∗A(s) ≠ ∅ & q · P̃B(σ∗∗B, σA(s), s) − f > 0} · Es[ max{q · P̃B(σ∗∗B, σA(s), s) − f, 0} − c | σ∗∗A(s) ≠ ∅ ]
  + Pr{σ∗∗A(s) ≠ ∅ & q · P̃B(σ∗∗B, σA(s), s) − f < 0} · Es[ min{q · P̃B(σ∗∗B, σA(s), s) − f, 0} − c | σ∗∗A(s) ≠ ∅ ]
  ≤ Pr{σ∗∗A(s) ≠ ∅ & q · P̃B(σ∗∗B, σA(s), s) − f > 0} · Es[ max{q · P̃B(σ∗∗B, σA(s), s) − f, 0} − c | σ∗∗A(s) ≠ ∅ ]
  ≤ Pr{σ∗∗A(s) ≠ ∅ & q · P̃B(σ∗∗B, σA(s), s) − f > 0} · π.
That is, player B is strictly better off whenever he can (i) commit a message only when A commits a message (therefore avoiding paying for the commit message, and earning zero, otherwise) and (ii) send the reveal message only after observing A's message, and only when doing so is profitable. It follows that when condition (5) is violated, player B always wants to commit.

These derivations show that, modulo the fact that sending messages is more expensive with the protocol (i.e., the right-hand side of (4) is different from the right-hand side of (5)), the protocol does not impede legitimate competition: both players commit their messages and then compete with each other to have their reveal message included first in the following block. At the same time, attacks are more costly because an attacker is forced to make a costly guess. Hence, under the protocol, front-running attacks are discouraged while competition among honest players is preserved (but postponed by one period).
Severity of attacks  The value of π measures how easy it is for B to guess. It is therefore the inverse of the measure of the severity of an attack discussed in the introduction: if it is difficult for B to guess, it is because B has very little prior information and, in the benchmark case, relies heavily on observing A's message, while the opposite is true when it is easy for B to guess. We can therefore say that the protocol is most effective at preventing the most severe front-running attacks.
Additional messages  In the above analysis, we restricted the players' action space to a single message per player in the commit period. If we relax this assumption, additional interesting considerations emerge, although the basic intuition discussed earlier remains the same.
For example, player A may want to commit and disclose σA already in period 1. This is strictly beneficial to A if the state of the world s is such that:

• π ≥ c/β, so that player B sends a commit message with strictly positive probability;

• P̃B(σ̂B, σA(s), s) ≥ f/q, so that, after sending the commit message and learning the state of the world, player B will try to front-run A;

• PB(s) < f/q + c/β, so that if B knew the state of the world from the beginning, he would not want to commit and then front-run.

In this situation, by disclosing σA(s) already in period 1, player A can prevent B from attempting to front-run. The analysis above therefore holds after restricting the space of signals Σ to those such that the above conditions do not hold and A does not want to disclose.
It is also possible that B may want to send multiple commit messages. If the number of commit messages is k, then B will choose the k messages that, jointly, generate the largest expected payoff. There is a "guessing is hard" case, which is identical to the one discussed earlier. There is also a "guessing is easy" case, which is more convoluted than before because the number of messages committed by B may be greater than 1. However, the intuition is largely unchanged from the simple case.
Pre-commitments  Another restriction we imposed is that the protocol starts when player A learns the state of the world. It is, however, possible that A may want to pre-commit, that is, commit a message before learning the state of the world, in the hope that the committed message can be used immediately when the state of the world is revealed. The important observation is that A can pre-commit and then decide to restart the protocol by committing a second message upon learning the state of the world. This complicates B's inference problem because whatever message he commits may be wasted in the future. Again, the basic insight from the simple model above continues to hold, but guessing is harder for player B.
5 Extensions to the protocol

5.1 Hiding commitments

Here we propose a version of the protocol that allows the user to hide the commit message, so that an attacker does not know whether the victim committed something (and will reveal in the following period). The modified protocol exploits the fact that player A can send the commit and reveal messages from different addresses, provided that the commit message includes the address that A will use in the following period to send the reveal message.

To study how the possibility of hiding the commit messages affects the equilibrium of the game, we now assume that the honest player observes the state of the world only with some probability, in which case she may decide to send her message. If instead player A does not observe the state of the world, then she takes no action. We also replicate the game n times: there are now n identical honest players, who with some probability may want to interact with one of n smart contracts.
These modifications are irrelevant in the protocol that we introduced earlier because, in each replica game, the attacker can send his commit message after having observed whether the victim sent her commit message. But the above protocol can be modified so that both the sender and the receiver of the commitment are obfuscated. More precisely, the modified protocol is now:
• There is a pre-existing template code for the container smart contract. This code is such that when the container smart contract receives a commit message, it time-stamps it with the current block number.

• To commit, the honest player generates a brand-new address and uses it to send a transaction in which, first, a container smart contract is created using the template, and then the commit message is sent to the newly created container smart contract.22 The commit message is now S(addr, addrSC, σA), where addrSC is the address of the target smart contract.

• To reveal, the honest player sends to the target smart contract the message σA, together with a pointer to the container smart contract in which the commit message is stored.

• The target smart contract considers the message valid if all of these conditions are satisfied:

1. as before, the commit message should be S(addr, addrSC, σA), where addr is the address from which the reveal message was sent;

2. the timestamp associated with the commit message is lower than the current block number (this step makes sure that the commit message was sent before the reveal message);

3. the code of the container smart contract is identical to the template smart contract.23

The very last step is necessary to prevent an attack in which, after observing the reveal message, an attacker sends a single transaction that (i) creates a container smart contract, (ii) stores the commitment there together with a fake timestamp, and (iii) sends the reveal message.
22 The brand-new address needs to be funded with some ETH before it can send messages. We note that this could be done via a centralized exchange, therefore hiding the identity of the creator of the new address from the attacker.
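The three validity conditions can be sketched off-chain as follows. SHA-256 again plays the role of S(), a plain class stands in for the container smart contract, and all names are ours; this is a model of the checks, not on-chain code.

```python
import hashlib

# Sketch of the hiding variant: the commitment binds the reveal address AND
# the target contract address, and is stored in a timestamping container.

def S(addr: str, addr_sc: str, message: str) -> str:
    return hashlib.sha256(f"{addr}|{addr_sc}|{message}".encode()).hexdigest()

TEMPLATE_CODE = "container-template-v1"  # shared template bytecode (stand-in)

class Container:
    def __init__(self, code: str):
        self.code = code
        self.stamps = {}  # commitment -> block number at which it was received

    def receive(self, commitment: str, block: int) -> None:
        self.stamps.setdefault(commitment, block)  # time-stamp on first receipt

def valid_reveal(container: Container, sender_addr: str, addr_sc: str,
                 message: str, current_block: int) -> bool:
    commitment = S(sender_addr, addr_sc, message)
    return (commitment in container.stamps                    # condition 1
            and container.stamps[commitment] < current_block  # condition 2
            and container.code == TEMPLATE_CODE)              # condition 3

box = Container(TEMPLATE_CODE)
box.receive(S("0xAlice", "0xAMM", "withdraw"), block=100)
print(valid_reveal(box, "0xAlice", "0xAMM", "withdraw", current_block=101))  # True

# An attacker who creates a container and "reveals" in the same block fails
# condition 2 (and a non-template container would also fail condition 3).
fake = Container("attacker-bytecode")
fake.receive(S("0xEve", "0xAMM", "withdraw"), block=101)
print(valid_reveal(fake, "0xEve", "0xAMM", "withdraw", current_block=101))   # False
```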
An outside observer can infer that someone created a container smart contract using the template and committed something, but does not know who committed, nor the target smart contract that will receive the reveal message. Call τ the ratio between the number of observed container smart contracts created and n. The same logic discussed above implies that if τ · π ≤ c/β, then it is too costly for B to attack a given A player: guessing is too hard for B and front-running is prevented. Hence, if the probability that a given honest user sends a message to a given smart contract is sufficiently low (so that the realized τ is also low), then the protocol eliminates all front-running attacks.
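The deterrence condition with hidden commitments is a one-line check. In the sketch below, the parameter values are illustrative assumptions chosen so that guessing would be easy without hiding (π > c/β), yet a low ratio of observed containers to replica games still deters the attack.

```python
# With hidden commitments, an attacker only observes that SOME container
# was created among n replica games. The deterrence condition becomes
# tau * pi <= c / beta, where tau = (observed containers) / n.

def attack_deterred(observed_containers: int, n: int,
                    pi: float, c: float, beta: float) -> bool:
    tau = observed_containers / n
    return tau * pi <= c / beta

# Illustrative assumptions: pi > c/beta, so an unhidden commitment would
# invite front-running; hiding restores deterrence when tau is small.
pi, c, beta = 5.0, 2.0, 0.9
print(attack_deterred(observed_containers=10, n=1000, pi=pi, c=c, beta=beta))   # True
print(attack_deterred(observed_containers=900, n=1000, pi=pi, c=c, beta=beta))  # False
```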
An important observation is that the above scheme is effective in hiding the
|
804 |
+
target smart contract if and only if multiple target smart contracts share the same
|
805 |
+
template for the container smart contract. In the extreme case in which each target
|
806 |
+
smart contract has its own template, then the identity of the user remains hidden
|
807 |
+
but the target smart contract that will receive the reveal can be inferred. At the
|
808 |
+
other extreme, the highest level of obfuscation is achieved when all smart contracts
|
809 |
+
use the same template. Different smart contracts could also coordinate by creating
|
810 |
+
a single “official” container smart contract that receives all commitments. Again,
|
811 |
+
an outside observer can infer that a user sent a commitment to the container smart
|
812 |
+
contract, without knowing who is the user and what is the target smart contract.
|
813 |
+
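The deterrence condition τ · π ≤ c/β can be checked numerically. The sketch below uses made-up parameter values purely for illustration (τ, π, c, and β carry the meanings defined in the paper; the numbers are not from it):

```python
def attack_deterred(tau: float, pi: float, c: float, beta: float) -> bool:
    # Front-running is too costly for B when tau * pi <= c / beta:
    # the expected gain from guessing which container to attack does not
    # cover the cost c of the commit message, discounted by beta.
    return tau * pi <= c / beta

# Illustrative numbers: many container contracts relative to likely targets
# keep tau low, which deters the attack.
print(attack_deterred(tau=0.01, pi=5.0, c=0.1, beta=0.9))  # True: deterred
print(attack_deterred(tau=0.50, pi=5.0, c=0.1, beta=0.9))  # False: attack pays
```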
23 In Ethereum, a smart contract's code is accessible by other smart contracts. For example, the expression type(SC).creationCode returns the creation bytecode of smart contract SC (see https://docs.soliditylang.org/en/latest/units-and-global-variables.html#type-information). If the template storage contract specifies that the contract is immutable, such bytecode will be constant and cannot be changed.
Here, however, users do not need to create the container smart contract each time, leading to significant savings in gas. How to achieve this coordination among smart contracts is not part of the model.

5.2 Partial implementation
It is possible to implement the protocol only for a subset of possible messages. That is, there is a set of messages M ⊂ Σ such that any message σ ∈ M is considered valid by the SC only if the commit-reveal protocol described above is followed. All other messages σ ∉ M are considered valid by the SC as soon as they are received. Suppose that A wants to send message σA and B wants to front-run with message σB. There are four possible cases:

1. σA, σB ∈ M, which means that we are in the commit-reveal case discussed earlier.

2. σA, σB ∉ M, which means that we are in the benchmark case discussed earlier.

3. σA ∉ M but σB ∈ M, which means that A can send her message directly, without fear of being front-run. In this case, front-running is prevented at no cost for A.

4. σA ∈ M but σB ∉ M, which implies that A needs to send two messages (commit and reveal) and wait one period, only to face, in the end, the same probability of being front-run as in the benchmark case. Here, the protocol imposes extra costs on A without any benefit.
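The dispatch rule behind the four cases can be written as a one-function sketch. This is a toy model: the set M and the message labels are hypothetical placeholders, not the paper's notation.

```python
# Assumed set of protected message types; messages in M require commit-reveal.
M = {"withdraw", "claim_reward"}

def accept(message_type: str, via_commit_reveal: bool) -> bool:
    """A message in M is valid only if it went through commit-reveal;
    anything outside M is valid as soon as it is received."""
    if message_type in M:
        return via_commit_reveal
    return True

print(accept("claim_reward", via_commit_reveal=True))   # True  (case 1)
print(accept("transfer", via_commit_reveal=False))      # True  (cases 2 and 3)
print(accept("claim_reward", via_commit_reveal=False))  # False: must commit first
```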
The specific design of M depends on the situation and will balance the possible costs and benefits to player A. In this respect, an important observation is that the choice of M determines π. So, for example, for a given π, it would seem beneficial not to use the protocol in states of the world in which player A does not expect an attack. But this may not be optimal, because states of the world in which A does not expect to be attacked are precisely the ones in which the attackers' payoff is low. Hence, by applying the protocol also in these states of the world, π decreases, and with it the probability of a front-running attack.
5.3 Specifying commit and reveal periods

Our model assumes that both commit and reveal messages are included in a block immediately after being sent. In practice, however, messages may remain in the mempool for some time before being included in a block.24 This possibility is not an issue with respect to the commit message, because the honest player can simply wait until this message is included in a block before sending the reveal message. It is, however, an issue with respect to the reveal message, because an attacker may be able to observe the victim's reveal message, send a commit message (either directly to the SC or via a container smart contract as discussed in the previous section), have it included in a block, then send a reveal message and have it included in a block before that of the honest user.
To start, note that the possibility that messages stay in the mempool is a concern also in the benchmark case (i.e., the standard way in which Ethereum operates), possibly even more than in our protocol, because an attacker needs to send just one message during the period in which the honest player's message stays in the mempool (vs. two in our protocol). It is also a concern that is greatly reduced by the introduction of the base fee: a fee that should guarantee the rapid inclusion of a transaction in a block (see footnote 15).

For our purposes, it is interesting to note that our protocol can reduce or eliminate this concern by appropriately specifying commit and reveal periods, which can be thought of as sets of blocks. The SC will then consider a reveal message as valid only if received during a block belonging to the reveal period, and only if its commit message was received (either directly by the SC or via the container smart contract) during a block belonging to the commit period.
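The period-based validity rule amounts to two set-membership tests. A minimal sketch, with illustrative block ranges standing in for the commit and reveal periods:

```python
# Illustrative periods: commit during blocks [100, 110), reveal during [110, 120).
COMMIT_PERIOD = range(100, 110)
REVEAL_PERIOD = range(110, 120)

def reveal_valid(commit_block: int, reveal_block: int) -> bool:
    # A reveal counts only if it lands in the reveal period AND its commit
    # landed in the commit period; an attacker who first observes a reveal
    # in block 110 or later can no longer produce a valid commit.
    return commit_block in COMMIT_PERIOD and reveal_block in REVEAL_PERIOD

print(reveal_valid(105, 112))  # True: honest timing
print(reveal_valid(112, 113))  # False: commit came after the deadline
```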
For example, a specific application may have a natural deadline, such as a competition rewarding whoever can provide the correct answer to a question within a specific time period. In these situations, it seems natural to specify the commit period as all blocks up until the deadline and the reveal period as all blocks after the deadline, therefore eliminating the risk that an attacker commits after having observed the reveal message. In other situations, it may be possible to alternate between commitment periods and reveal periods. In this case, the above attack is possible only if the reveal message remains in the mempool for the entire length of the reveal period, a probability that drops to zero rapidly with the length of this period. Of course, this modification has a cost because it increases A's waiting time (i.e., the time between A learning the state of the world and deciding to send her message and the time she receives her reward).

24 We treat this possibility as a random event, not something that an attacker could manipulate. The reason is that purposefully censoring a transaction requires a large number of miners/validators to collude, which is prevented by the consensus protocol.
Finally, it is also possible that the commit-reveal protocol is required only in some periods. For example, during the "commit" period users could either commit or send a message directly to the smart contract without any commitment, which would be considered valid. In the reveal period, only reveal messages that were committed during the commit period are considered valid. The honest player can then choose to send a given transaction in a "slow but safe" mode or a "fast but risky" mode. In the slow-but-safe mode, the user sends her commitment during the commit period and the reveal during the reveal period, therefore preventing an attacker from sending both commit and reveal messages after observing the honest player's reveal message. In the fast-but-risky mode, a user sends a direct message to the smart contract during the commit period. Doing so exposes the honest player to the risk of being front-run but may nonetheless be optimal if the honest player is particularly impatient.
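The choice between the two modes can be expressed as a comparison of stylized expected payoffs. This is a hedged illustration only: the reward, front-running probability, and delay discount are hypothetical quantities, not the paper's notation.

```python
def prefers_fast(reward: float, p_frontrun: float, delay_discount: float) -> bool:
    """Stylized comparison: fast-but-risky pays the reward now but is front-run
    with probability p_frontrun; slow-but-safe pays the reward for sure,
    discounted by the extra waiting (delay_discount in (0, 1])."""
    expected_fast = (1.0 - p_frontrun) * reward
    expected_slow = delay_discount * reward
    return expected_fast > expected_slow

# A patient player (discount near 1) prefers safety; an impatient one may not.
print(prefers_fast(reward=100.0, p_frontrun=0.3, delay_discount=0.95))  # False
print(prefers_fast(reward=100.0, p_frontrun=0.3, delay_discount=0.60))  # True
```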
5.4 Multiple attackers
An interesting implication of our protocol is that it may reduce or eliminate competition between attackers, therefore benefiting the attackers as well as the honest player. To see this, assume that there are two attackers: B1 and B2. When sending a transaction, each Bi chooses how much money fi ≥ 0 to spend, simultaneously and independently of the other.

To remain as close as possible to the case with a single attacker (and leverage as much as possible the results already derived), we can think of competition between the two attackers and the honest player as happening in two steps. First, the attacker that spends the most wins the right to attack the honest player. Then, similarly to the single-attacker case, the winner attempts to front-run the honest player.25 Mathematically, the probability that the transaction sent by player Bi is included in the block before that of B−i and player A is:
    γi · q(fi)   if fi > f−i
    0            if fi < f−i
where the function q(·) : R+ → [0, 1] is strictly increasing and strictly concave, and γi > 0 for i ∈ {1, 2}. A tie-breaking rule determines what happens when fi = f−i, but the nature of such a rule is not important for our analysis. The parameter γi measures the strength of each attacker. Without loss of generality, we assume that attacker 1 is stronger, and hence γ1 ≥ γ2. Attackers are otherwise identical: they have the same payoff function and the same information.
Benchmark case. We start by deriving what happens with multiple attackers when players can send their messages directly to the smart contract (i.e., no commitment needed). Again, after observing the victim's message and learning the state of the world, attacker Bi's payoff as a function of fi, f−i is
    PB(s) · γi · q(fi)   if fi > f−i
    0                    if fi < f−i
Formally, therefore, the attackers are engaged in an asymmetric contest with productive effort, as studied in Siegel (2014). Define

    f̲i ≡ {fi : PB(s) γi q′(fi) = 1}

as the optimal expenditure by attacker i whenever attacker −i is absent (or, alternatively, whenever f−i = 0). Define

    f̄i ≡ {fi : PB(s) γi q(fi) = fi}
25 All our results are robust to other ways to model competition. The reason is that our results rely on there being full dissipation of rents: the weakest attacker expects to earn zero. This result holds in a large class of contest models.
as the expenditure level at which attacker i's payoff is zero in the absence of attacker −i. Note that whenever f̲1 ≥ f̄2, there is a unique equilibrium in pure strategies, in which attacker B1 sets f*1 = f̲1 and attacker 2 does nothing. This situation is therefore identical to the single-attacker case discussed earlier.

If instead f̲1 < f̄2, then, according to Theorem 1 in Siegel (2014), there are multiple mixed-strategy equilibria. However, in every equilibrium of the game attacker 1's utility is

    PB(s) γ1 q(f̄2).

That is, the strong attacker's payoff is equal to the payoff he would achieve if he set his expenditure equal to the follower's largest possible expenditure.26 Also here, the utility of the other attacker is zero.
To summarize, relative to the single-attacker case, if there are two attackers who are sufficiently similar, then in equilibrium they will randomize their level of spending. In expectation, the weaker attacker earns zero. The stronger attacker earns a positive amount, which is however lower than if he were the unique attacker. Competition, therefore, hurts both attackers because they overspend (relative to the single-attacker case). This is clearly detrimental to the honest player as well.
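For intuition, the thresholds f̲i and f̄i can be computed numerically once a concrete contest success function is assumed. The sketch below picks q(f) = 1 − e^(−f), which is strictly increasing and strictly concave as required; the value of PB(s) and the strengths γ1, γ2 are illustrative numbers, not taken from the paper.

```python
import math

def f_opt(P: float, gamma: float) -> float:
    # Underbar f: first-order condition P * gamma * q'(f) = 1.
    # With q(f) = 1 - exp(-f), q'(f) = exp(-f), so f = ln(P * gamma)
    # (zero if spending never pays, i.e. P * gamma <= 1).
    return max(0.0, math.log(P * gamma))

def f_bar(P: float, gamma: float) -> float:
    # Bar f: largest f solving P * gamma * q(f) = f, found by bisection.
    lo, hi = 1e-9, P * gamma  # net payoff P*gamma*q(f) - f is negative at f = P*gamma
    for _ in range(200):
        mid = (lo + hi) / 2
        if P * gamma * (1 - math.exp(-mid)) - mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

P = 10.0            # stands in for PB(s)
g1, g2 = 1.0, 0.8   # attacker strengths, gamma1 >= gamma2
# With similar attackers, the strong one's unconstrained optimum falls short
# of the weak one's zero-payoff reach, so the equilibrium is in mixed strategies.
print(f_opt(P, g1) < f_bar(P, g2))  # True
```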
Commit-reveal protocol. Consider the commit-reveal protocol. We assume that both attackers observe the victim's commit message. For simplicity, we also assume that the attackers choose their commit messages simultaneously and independently, and that they can observe each other's commit messages.27 We solve the game backward, starting from the reveal phase.
If only one attacker Bi committed, then the problem is quite simple: the single attacker i earns28

    V(γi) ≡ max over fi of { P̃B(σ̃B, s) γi q(fi) − fi }

26 This result is also in Siegel (2009), in which however only non-productive effort is considered. Siegel (2014) extends these results to cases in which, over some range, the "prize" to be won by a player may be increasing in this player's effort.

27 If an attacker does not observe the other attacker's commit message, he will nonetheless detect the opponent's attempt to front-run in the following period. At that point, he will increase his level of spending. The outcome is identical to the case in which the attacker knows from the beginning that the other attacker committed and will therefore attack.
If instead both attackers committed, then the logic discussed in the previous section continues to apply: if they are sufficiently similar, then the equilibrium is in mixed strategies. The attackers overspend (relative to the single-attacker case) and, as a consequence, the weaker attacker expects to earn zero while the stronger attacker expects to earn V̲(γ1) < V(γ1).29
Given this, we can derive the equilibrium in the commitment phase. The main result is that there is no equilibrium in which both players commit with probability 1. The reason is that the weak attacker anticipates that, if the other attacker commits and he also commits, he will then earn zero in the following period. Commitment messages are, however, costly, which implies that the weak attacker is better off not committing.
It follows that the equilibria of the game are:

• If either βV̲(γ1) > c, or βV(γ1) > c > βV̲(γ1) and c > βV(γ2), then there is a unique equilibrium in pure strategies in which only the strong attacker (attacker 1) commits.

• If βV(γ1) > c > βV̲(γ1) and βV(γ2) > c, then there are two pure-strategy Nash equilibria, each corresponding to only one attacker sending the commit message. There is also a mixed-strategy equilibrium, in which attacker 1 commits with probability α1 and attacker 2 commits with probability α2. These probabilities are such that each attacker is indifferent between committing or not, that is, (1 − α1)βV(γ2) = c and α2βV̲(γ1) + (1 − α2)βV(γ1) = c. In this equilibrium, there is a probability α1α2 that both attackers commit, a probability (1 − α1)(1 − α2) that no attacker commits, and the remaining probability that a single attacker commits.

• Otherwise, no attacker commits and front-running is prevented.
28 Remember that the attackers have the same payoff function and information. Hence, in the commit period, if they commit, they will both commit σ̂B.

29 The meaning of "the attackers being sufficiently similar" and the expected payoff of player 1 can be precisely derived following the same steps illustrated in the previous paragraph. But their precise expressions are not important in what follows.
The protocol, therefore, decreases the level of competition among attackers. This, in turn, has a beneficial effect on the victim, as reducing competition also reduces the amount spent by the attackers.
6 Conclusion

We conclude by discussing a number of possible limitations of our protocol that require further study.

Our commit-reveal protocol may impede the possibility of calling different smart contracts within the same transaction (usually referred to as smart-contract composability). In principle, composability is still possible by first committing the different messages to the various smart contracts. A problem, however, arises when these smart contracts have different commit-reveal periods (see Section 5.3). Although different commit messages may be sent in different periods depending on the commitment window of each smart contract, to maintain composability the reveal messages must be sent within the same transaction during the reveal window of all smart contracts. If such a window does not exist, then composability is not possible. If it exists, then it is possible, but exploiting it may impose large delays on the execution of the transaction. Studying further how to mitigate this problem is also left for future work. Here we just note that composability is preserved if the commit-reveal protocol is required only in some periods (as discussed in the last paragraph of Section 5.3), chosen in a coordinated way among all smart contracts.
Our analysis assumes that the smart contract does not have an explicit mechanism to resolve competing claims to an object, and therefore does not apply to, for example, a smart contract running an on-chain auction. Applying our protocol to such a smart contract may lead to unintended consequences, because the players (honest or not) may fail to reveal after having committed, perhaps because they realize that they would lose the auction. This is problematic in many cases. For example, failures to reveal in a second-price auction may decrease the revenues raised in the auction.
Finally, our protocol is also not effective against a type of front-running attack called suppression attacks, in which an attacker prevents the victim's transaction from being included in a block by front-running it with a series of spam transactions (see Eskandari et al. (2019)). The reason is that, in these attacks, the content of the victim's transaction is irrelevant to the attacker. However, these types of attacks are rare and specific to certain applications. For example, Eskandari et al. (2019) document only one of them, in the context of a gambling smart contract.
References

Baird, L. (2016). The swirlds hashgraph consensus algorithm: Fair, fast, byzantine fault tolerance. Swirlds Tech Reports SWIRLDS-TR-2016-01, Tech. Rep 34.

Breidenbach, L., P. Daian, F. Tramèr, and A. Juels (2018). Enter the hydra: Towards principled bug bounties and exploit-resistant smart contracts. In 27th USENIX Security Symposium (USENIX Security 18), pp. 1335–1352.

Budish, E., P. Cramton, and J. Shim (2015). The high-frequency trading arms race: Frequent batch auctions as a market design response. The Quarterly Journal of Economics 130(4), 1547–1621.

Capponi, A. and R. Jia (2021). The adoption of blockchain-based decentralized exchanges. arXiv preprint arXiv:2103.08842.

Capponi, A., R. Jia, and Y. Wang (2022). The evolution of blockchain: from lit to dark.

Daian, P., S. Goldfeder, T. Kell, Y. Li, X. Zhao, I. Bentov, L. Breidenbach, and A. Juels (2019). Flash boys 2.0: Frontrunning, transaction reordering, and consensus instability in decentralized exchanges. arXiv preprint arXiv:1904.05234.

Eskandari, S., S. Moosavi, and J. Clark (2019). SoK: Transparent dishonesty: front-running attacks on blockchain. In International Conference on Financial Cryptography and Data Security, pp. 170–189. Springer.

Fudenberg, D. and J. Tirole (1987). Understanding rent dissipation: on the use of game theory in industrial organization. The American Economic Review 77(2), 176–183.

Gans, J. S. and R. T. Holden (2022). A solomonic solution to ownership disputes: An application to blockchain front-running. Technical report, National Bureau of Economic Research.

Heimbach, L. and R. Wattenhofer (2022). SoK: Preventing transaction reordering manipulations in decentralized finance. arXiv preprint arXiv:2203.11520.

Kelkar, M., F. Zhang, S. Goldfeder, and A. Juels (2020). Order-fairness for byzantine consensus. Cryptology ePrint Archive, Paper 2020/269. https://eprint.iacr.org/2020/269.

Nash, J. (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences 36(1), 48–49.

Park, A. (2022). Conceptual flaws of decentralized automated market making. Technical report, Working paper, University of Toronto.

Qin, K., L. Zhou, and A. Gervais (2022). Quantifying blockchain extractable value: How dark is the forest? In 2022 IEEE Symposium on Security and Privacy (SP), pp. 198–214. IEEE.

Robinson, D. and G. Konstantopoulos (2020, Aug). Ethereum is a dark forest.

Roughgarden, T. (2020). Transaction fee mechanism design for the ethereum blockchain: An economic analysis of EIP-1559. arXiv preprint arXiv:2012.00854.

Siegel, R. (2009). All-pay contests. Econometrica 77(1), 71–92.

Siegel, R. (2014). Contests with productive effort. International Journal of Game Theory 43(3), 515–523.

Torres, C. F., R. Camino, et al. (2021). Frontrunner Jones and the raiders of the dark forest: An empirical study of frontrunning on the ethereum blockchain. In 30th USENIX Security Symposium (USENIX Security 21), pp. 1343–1359.
- 2dFST4oBgHgl3EQfXzgB/content/tmp_files/load_file.txt ADDED
2tE2T4oBgHgl3EQfjAfh/content/tmp_files/2301.03965v1.pdf.txt
ADDED
@@ -0,0 +1,1509 @@
BiCurNet: Pre-Movement EEG based Neural Decoder for Biceps Curl Trajectory Estimation

Manali Saini*, Anant Jain*, Lalan Kumar, Suriya Prakash Muthukrishnan, Shubhendu Bhasin and Sitikantha Roy

Abstract—Kinematic parameter (KP) estimation from early electroencephalogram (EEG) signals is essential for positive augmentation using a wearable robot. However, work related to early estimation of KPs from surface EEG is sparse. In this work, a deep learning-based model, BiCurNet, is presented for early estimation of biceps curl using collected EEG signals. The model utilizes a light-weight architecture with depth-wise separable convolution layers and a customized attention module. The feasibility of early estimation of KPs is demonstrated using brain source imaging. Computationally efficient EEG features in the spherical and head harmonics domain are utilized for the first time for KP prediction. The best Pearson correlation coefficient (PCC) between estimated and actual trajectory, of 0.7, is achieved when combined EEG features (spatial and harmonics domain) in the delta band are utilized. Robustness of the proposed network is demonstrated for subject-dependent and subject-independent training, using EEG signals with artifacts.

Index Terms—Brain-computer interface, Electroencephalogram, Deep learning, Kinematic parameter estimation.

I. INTRODUCTION
Brain-computer interface (BCI) is an integration of the measurement, decoding, and translation of the activity of the central nervous system (CNS) into imitative output that reinstates, augments, or rehabilitates the natural CNS output [1]. This creates an interface between the CNS and its external environment. BCI-based systems are rapidly emerging on account of the recent advancements in signal processing and artificial intelligence [2], [3]. These systems are useful in

*Manali Saini and Anant Jain have contributed equally to this work.

This work was supported in part by the DRDO - JATC project with project number RP04191G.

This work involved human subjects or animals in its research. Approval of all ethical and experimental procedures and protocols was granted by the Institute Ethics Committee, All India Institute of Medical Sciences, New Delhi, India, with reference number IEC-751/07.08.2020, RP-06/2020.

Manali Saini is with the Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi 110016, India (e-mail: manali-

Anant Jain is with the Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi 110016, India (e-mail: anant-

Lalan Kumar is with the Department of Electrical Engineering, Bharti School of Telecommunication, and Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, New Delhi 110016, India (e-mail:

Suriya Prakash Muthukrishnan is with the Department of Physiology, All India Institute of Medical Sciences, New Delhi - 110016, India (e-mail:

Shubhendu Bhasin is with the Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi 110016, India (e-mail: sb-

Sitikantha Roy is with the Department of Applied Mechanics, Indian Institute of Technology Delhi, New Delhi 110016, India (e-mail:
neuro-rehabilitation to assist users with motor impairments [4]–[7]. For real-time operability of these systems, continuous signal decoding is required for the extraction of kinematic parameters (KPs) such as motion trajectory, velocity, and acceleration [8]–[10]. In view of these aspects, electroencephalogram (EEG)-based BCI systems have gained popularity in recent years, owing to the non-invasiveness, low cost, and excellent temporal resolution of EEG signals [11], [12].
A. Motivation and Related Work

Literary works explore machine learning and deep learning-based paradigms for upper limb kinematic parameter estimation (KPE), movement intention detection, and classification from the low-frequency components of EEG signals. For instance, in [13], sparse multinomial logistic regression is utilized to classify EEG signals during reach intention and actual movement, based on multiple hand-crafted features extracted from EEG signals filtered in the range of 1−40 Hz. In this work, independent component analysis (ICA) and dipole fitting are applied to remove movement artifacts from the recorded EEG signals in order to obtain low classification error rates [13]. Researchers in [14] have explored EEG current source dipole (CSD) data, using standardized low resolution brain electromagnetic tomography (sLORETA), to decode actual and imagined arm joint trajectories based on multiple linear regression (mLR). The most useful time lags are observed to be between 80−150 ms prior to the movement, and the low β and γ bands are shown to be more effective in movement decoding, with a correlation of 0.67. Similarly, mLR is utilized in [15] for estimating the 3D trajectories of arm movement with variable velocities using EEG segments filtered in the range of 0.1−40 Hz. The researchers reported a high correlation between the movement velocities and the EEG activity above the motor cortex in the fronto-central and parietal areas [15]. mLR is also utilized in [16], with the α and β band powers of EEG signals during the motor planning and execution phases, to predict the upcoming peak tangential speed and acceleration of hand movement. This study demonstrates the prominence of the occipital and parietal-occipital regions for the α band, and the frontal and frontal-central regions for the β band, in the movement planning and execution phases. In a recent study, researchers have explored the feasibility of a commercial EEG headset for motor decoding and classification using a Kalman filter and spatio-spectral features extracted from EEG signals [9]. An overall correlation of 0.58 is achieved in this work. Besides mLR, sparse LR is investigated for predicting the circular trajectories of the upper limb during movement of bottles with varying masses [17]. In this work, a wide range of EEG frequencies, i.e., 0−150 Hz, is used, and channels over the motor cortex are shown to contribute more prominently to the prediction. In [18], movement intent is decoded from movement-related cortical potentials (MRCPs) using narrow-band EEG in the range of 0.1−1 Hz to train a support vector machine (SVM)-based classifier. The selection of a single channel, i.e., Cz, for movement onset decoding with an accuracy of 91%, using the low-frequency (0−5 Hz) Teager-Kaiser energy operator with threshold-based classification, is demonstrated in [19].

arXiv:2301.03965v1 [eess.SP] 10 Jan 2023
Despite the effectiveness of conventional machine learning-based paradigms in EEG-based movement decoding, there is a need to extract high-level features that can enhance the performance. To this end, researchers have proposed deep learning-based paradigms. For example, a convolutional neural network (CNN) is proposed that uses pre-movement raw spatio-temporal multi-channel EEG for hand movement and force level classification, with an accuracy of 84% [20]. This work demonstrates early classification of hand movement, i.e., 100−1600 ms in advance. A CNN is also utilized in [21], along with a bidirectional long short-term memory (Bi-LSTM)-based network, to predict the velocities of arm-reaching tasks using pre-processed EEG signals. An overall correlation between 0.4−0.6 is achieved in this work, and the feasibility of robotic arm control based on real-time EEG is demonstrated [21]. Recently, the deep learning-based three-dimensional (3D) hand movement trajectory during grasp-and-lift movements has been estimated using a public EEG database in [10], [22], [23]. In [22], wavelet packet decomposition (WPD)-based time-lagged EEG sub-bands are used to train a CNN-LSTM network for prediction of the hand position/trajectory, with a high correlation of 0.86. This work explores source-aware EEG features and demonstrates the relevance of the low-frequency bands (δ, θ, and α) in movement estimation; however, it has limited feasibility for real-time hardware implementation. Early estimation of this trajectory is demonstrated in [10], with a high correlation of 0.79, using the δ band of EEG. Further, researchers in [23] demonstrate the feasibility of a brain-inspired spiking neural network (Bi-SNN), along with the mid-frequency and high-frequency EEG bands, i.e., α, β, and γ, toward the same trajectory estimation, with a correlation of 0.7.

Based on the aforementioned description of literary works, it can be asserted that many of these works focus on the classification of upper limb movements rather than on prediction/estimation of the related kinematic parameters. Timely extraction of kinematic parameters from EEG data during upper limb movement is imperative for different real-time exosuit control-based BCI applications. Further, a few existing machine learning-based regression algorithms are able to estimate the KPs early w.r.t. the actual movement; however, only average correlation is achieved. Although the existing deep learning-based networks outperform these ML-based paradigms, only a few of them have explored early estimation of KPs. Further, these networks use slightly complex architectures after pre-processing, which may not be feasible on hand-held processors for real-time BCI systems. Most importantly, the performance of the existing paradigms for KP estimation is highly subject-specific, which further adds to the complexity, since the networks need to be trained for each subject.

Fig. 1: Experimental setup for biceps-curl task.
B. Objective and Key Contributions

In view of the aforementioned challenges of the literary works, this work proposes a deep learning-based upper limb motion trajectory prediction/estimation network using preceding EEG, i.e., BiCurNet, for early estimation toward exosuit control-based BCI applications. Further, the proposed network is demonstrated to be subject-independent and robust against artifacts, unlike the existing works. To the best of our awareness, this is the first work that focuses on early estimation of kinematic parameters from both subject-dependent and subject-independent EEG signals and further analyses the noise-robustness of the proposed network. The key contributions of this work are listed as follows.
• A low-complexity deep learning-based architecture is proposed for early estimation of upper limb motion trajectory.
• In-house recording of multi-channel EEG signals during an upper limb biceps-curl experiment.
• Spherical harmonics and head-harmonics domain EEG feature-based motion trajectory estimation is explored for the first time.
• Demonstration of the subject-adaptability and noise-robustness of the proposed network.

The rest of this paper is organized as follows. Section II describes the experimental recording and data acquisition procedures. Section III presents the proposed methodology for BiCurNet. Section IV discusses the experimental evaluation results for the proposed work. Finally, Section V concludes this work with its major advantages, shortcomings, and future directions.
Fig. 2: Block diagram depicting the proposed methodology for biceps-curl trajectory estimation.

II. EXPERIMENT AND DATA ACQUISITION

The key objective of the study is to investigate the viability of using EEG signals for elbow joint angle decoding during biceps-curl motion. For this purpose, we designed a synchronous EEG and joint angle recording system. The experimental paradigm and data acquisition are elucidated in the subsequent sections.
A. Subjects and Equipment

The experiment was performed in the Multichannel Signal Processing Laboratory, Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi. This research was authorized by the Institutional Review Board of the All India Institute of Medical Sciences, New Delhi. EEG and joint angle data were recorded from 5 healthy subjects (all male, age 29 ± 2.61 years, all right-handed) while performing the biceps-curl task. Each subject performed 300 trials of the biceps-curl task. EEG data were recorded using 16-channel dry active electrodes (actiCAP Xpress Twist, Brain Products, Gilching, Germany) with a wireless EEG amplifier (LiveAmp-16, Brain Products, Gilching, Germany). The EEG sensors were arranged according to the 10-20 international system of EEG electrode placement, namely, Fp1, Fz, F3, C3, T7, Pz, P3, O1, Oz, O2, P4, Cz, C4, T8, F4, and Fp2. The EEG data were acquired at a 500 Hz sampling frequency. A marker-based camera system (Noraxon NiNOX 125 Camera System) was used for elbow joint angle measurement. The NiNOX 125 camera system was connected to the Noraxon myoResearch platform (MR 3.16) for recording the biceps-curl hand motion. The camera system was placed in the sagittal plane, 2 m away from the subject. The elbow joint angle was calculated using the myoResearch software in a post-processing step. The 3-point angle measurement tool was utilized to compute the 2D joint angle by tracking reflective markers in the video recording. The joint angle data were sampled at a sampling frequency of 125 Hz. The EEG and joint angle data were synchronized using a Noraxon myoSync device.
B. Experimental Setup and Paradigm

Concurrent EEG and motion data were collected from the users during the biceps-curl task. At the beginning of the experiment, participants stood in a balanced upright posture, holding a 2 kg dumbbell in their right hand. A monitor was positioned 1.8 m in front of them to display the experimental paradigm. We designed the experiment in PsychoPy [24] to instruct the participant to initiate the biceps-curl movement. Each trial began with a cross appearing at the center of the screen along with a beep sound, indicating the start of the trial. After a couple of seconds, a visual cue appeared on the screen to instruct the participant to initiate the biceps-curl. The biceps-curl was performed in the motion execution phase. Each trial ended with a resting phase of two seconds. Before the actual data acquisition, each participant performed a practice run to execute the task correctly. This practice run was not included in any subsequent analysis. We recorded 30 runs of 10 trials each for the biceps-curl task. Inter-run rest was given to the participants to avoid muscle fatigue.
III. PROPOSED METHODOLOGY

This section elaborates the proposed deep learning-based methodology for early prediction of upper limb motion trajectory from EEG, as illustrated in Fig. 2. It consists of three major modules: EEG recording, pre-processing and feature extraction, and a depth-wise separable convolutional neural network with a customized attention module. The modules are described in the subsequent sub-sections.
A. EEG recording

In this work, the EEG signals are acquired using the LiveAmp-16 Brain Products system, as described in the previous section. Prior to being used for the proposed BiCurNet, these signals are pre-processed as detailed in the ensuing sub-section.
B. Pre-processing

The recorded EEG signals are pre-processed in EEGLAB [25] and MATLAB for feature extraction before being fed to the proposed BiCurNet, as shown in Fig. 2. After recording and re-sampling the EEG signals, low-frequency (below 0.5 Hz) baseline wander noise (BWN) suppression is performed using the discrete Fourier transform (DFT). For this purpose, the DFT coefficients corresponding to frequencies below 0.5 Hz are estimated. The DFT coefficient index k is computed as k = ⌊fq Nd/fqs⌋, where fq is the frequency in Hz, fqs is the sampling frequency, and Nd is the number of DFT points. These DFT coefficients are thresholded to zero to suppress the BWN, and the EEG signal after BWN suppression is synthesized as the inverse DFT of the thresholded coefficients. Mathematically, for a recorded EEG signal v[n], the procedure is described by the following DFT pair:

V[k] = Σ_{n=0}^{Nd−1} v[n] e^{−j2πnk/Nd}    (1)

ṽ[n] = (1/Nd) Σ_{k=0}^{Nd−1} Ṽ[k] e^{j2πnk/Nd}    (2)

where Ṽ denotes the DFT coefficient vector after thresholding, i.e., Ṽ = [0, ..., 0, V[k+1], ..., V[Nd−k−1], 0, ..., 0]. All signals are amplitude-normalized to the range [−1, 1] as ṽ[n]/max|ṽ[n]|.
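As an illustrative aside, the DFT-thresholding step of Eqs. (1)-(2) can be sketched in a few lines of NumPy. The 0.5 Hz cut-off matches the paper; the function name and the placement of the normalization are this sketch's own choices:

```python
import numpy as np

def suppress_bwn(v, fs, fc=0.5):
    """Baseline-wander suppression: zero every DFT coefficient below
    fc Hz (and its conjugate-mirror bin), synthesize by inverse DFT,
    then normalize the amplitude to [-1, 1]."""
    nd = len(v)
    V = np.fft.fft(v)
    k = int(np.floor(fc * nd / fs))   # bin index k = floor(fq*Nd/fqs)
    V[:k + 1] = 0.0                   # low-frequency bins 0..k
    if k > 0:
        V[nd - k:] = 0.0              # mirrored bins Nd-k..Nd-1
    v_clean = np.real(np.fft.ifft(V))
    return v_clean / np.max(np.abs(v_clean))
```

With fs = 125 Hz and Nd = 1000, for example, only the DC bin and the first four bins (plus their mirrors) are zeroed, leaving the EEG rhythms of interest untouched.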
In this work, the recorded EEG signals are analyzed for the estimation of motion trajectory both with and without artifact suppression. Independent component analysis (ICA) is utilized for artifact suppression; it estimates the sources corresponding to the cerebral and non-cerebral activities that constitute the scalp EEG [26]. EEGLAB is used in this work for ICA-based decomposition of the EEG signals obtained after BWN removal. The decomposed independent sources with more than 70% artifactual components are rejected, and the artifact-free EEG signal is reconstructed.
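The paper performs this step with EEGLAB's ICA. Purely as an illustration of the reject-and-reconstruct idea, the same flow can be mimicked with scikit-learn's FastICA; the set `bad` of artifactual component indices is assumed to have been identified beforehand (e.g., by the 70% criterion above):

```python
import numpy as np
from sklearn.decomposition import FastICA

def reject_artifact_components(eeg, bad, random_state=0):
    """Decompose multi-channel EEG (n_samples x n_channels) into
    independent components, zero the artifactual ones, and
    back-project to reconstruct the cleaned channel-space signal."""
    ica = FastICA(n_components=eeg.shape[1],
                  random_state=random_state, max_iter=1000)
    sources = ica.fit_transform(eeg)       # estimated independent sources
    sources[:, list(bad)] = 0.0            # reject flagged components
    return ica.inverse_transform(sources)  # cleaned EEG, same shape
```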
C. Brain source imaging

Brain source imaging (BSI) is performed to select the relevant pre-movement EEG segment prior to feature extraction. Numerical boundary element method (BEM)-based forward modeling is utilized for this purpose. The head model utilizes the ICBM MRI template [27] in the OpenMEEG [28] toolbox. The spatio-temporal dynamics of the brain cortical sources are obtained using inverse modeling. In particular, standardized low-resolution electromagnetic tomography (sLORETA) [29] is utilized to solve the under-determined inverse problem. Under the constraint of a smooth source distribution, standardized current density maps are utilized for localization inference. Source localization plots for a right-hand biceps-curl activity are illustrated in Fig. 3. The analysis shown corresponds to a single trial of biceps-curl. The subject was instructed to focus their vision on the fixation cross. A visual cue for movement onset was presented at 0 ms. The subject executed the biceps-curl activity 410 ms after the visual cue was given. A constant activation may be observed in the occipital lobe up to 60 ms. The information starts getting transferred to the left motor cortex thereafter. All such pre-movement EEG [Fig. 3(c)-(g)] has the motion trajectory inbuilt.

It may be noted that the left motor cortex region was activated at 220-240 ms [Fig. 3(e)], corresponding to the right-hand biceps-curl activity. Motor activity was observed thereafter up to 320 ms [Fig. 3(e)]. The subject executed the biceps-curl activity at 400-450 ms after the visual cue was given. It may be concluded that the motor neural information corresponding to the biceps-curl activity is present approximately 250 ms prior to the motor execution. This information is utilized for selecting the time-lag window for the elbow joint-angle trajectory. The selected EEG data was utilized for the training and testing of the proposed neural decoder.
D. Feature extraction

The pre-processed EEG signals are analyzed with different transform-domain techniques for significant feature extraction. This work explores time-frequency features using the discrete wavelet transform (DWT) in three settings: the spatial domain; the spatio-temporal domain using the spherical Fourier transform (SFT); and the spatio-temporal domain using the head harmonics transform.

1) Discrete wavelet transform-based features: The discrete wavelet transform is utilized to decompose the EEG signals into their constituent sub-bands/rhythms. It makes use of high-pass and low-pass filters to decompose the signals into a pre-defined number of levels based on the sampling frequency [30]. The DWT of a single-channel EEG signal v[n] is given by

V_{j,r} = Σ_{n∈ℤ} v[n] ψ*_{j,r}[n]    (3)

where ψ_{j,r} is the translated and scaled version of the mother wavelet ψ_{0,0}, defined as:

ψ_{j,r}[n] = 2^{−j/2} ψ_{0,0}(2^{−j}(n − r))    (4)

The DWT-based decomposition follows a tree-like structure, as demonstrated in Fig. 4. At each decomposition level, the wavelet coefficients are down-sampled to remove redundant information [31]. In this work, since the sampling frequency used is 125 Hz, the decomposed sub-bands are obtained as delta (δ: 0.5−3.9 Hz), theta (θ: 3.9−7.8 Hz), alpha (α: 7.8−15.6 Hz), beta (β: 15.6−31.2 Hz), and gamma (γ: > 31.2 Hz), denoted by Vδ, Vθ, Vα, Vβ, and Vγ, respectively.
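The four-level filter-bank cascade of Fig. 4 can be sketched as follows; the paper does not state the mother wavelet, so a Haar filter pair is assumed here purely for illustration:

```python
import numpy as np

def haar_dwt_level(x):
    """One Haar DWT level: low-pass (approximation) and high-pass
    (detail) outputs, each down-sampled by two."""
    x = x[:len(x) - len(x) % 2]             # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return a, d

def eeg_sub_bands(x):
    """Four-level cascade at Fs = 125 Hz: details D1..D4 give gamma,
    beta, alpha, theta; the final approximation A4 gives delta."""
    bands, a = {}, np.asarray(x, dtype=float)
    for name in ("gamma", "beta", "alpha", "theta"):
        a, d = haar_dwt_level(a)
        bands[name] = d
    bands["delta"] = a
    return bands
```

Since the Haar pair is orthonormal, the coefficient energies of the five bands sum to the energy of the input signal.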
Fig. 3: Brain source localization using sLORETA at different time stamps: (a) 0 ms, (b) 60 ms, (c) 120 ms, (d) 180 ms, (e) 240 ms, (f) 300 ms, (g) 360 ms, (h) 420 ms.
Fig. 4: Four-level DWT-based decomposition to obtain the approximation and detail bands, with the frequency ranges at level j given by [0, 2^{−j−1} Fs] and [2^{−j−1} Fs, 2^{−j} Fs], respectively.
2) DWT-Spherical harmonics-based features: To extract the spatio-temporal features of the EEG signal and of the corresponding DWT-based sub-bands obtained above, the spherical Fourier transform (SFT) is explored in this work. Since the human head is assumed to be spherical in shape [32], spherical Fourier basis functions have been widely employed in literary works. The decomposition of a multi-channel EEG signal V in the spherical harmonics domain is obtained as:

V^{SH}_{lm} = ∫_Ω V(Ω, n) Y^m_l(Ω) dΩ    (5)

where V(Ω, n) denotes the potential at Ω = (r, θ, φ) on the scalp at time instant n. Here, r represents the radius of the head, θ denotes the angle of elevation measured downward from the positive Z-axis (θ ∈ [0, π]), and φ denotes the azimuth angle measured anticlockwise from the positive X-axis, as shown in Fig. 5. The real-valued Y^m_l(Ω) of lth order and mth degree constitutes an orthonormal set of basis functions defined over a spherical array. For a finite-order system, l ∈ [0, L] and m ∈ [−l, l]; therefore, (L+1)^2 distinct spherical harmonics are obtained in total. Since the number of sampling points S in the spatial domain should be at least (L+1)^2, the upper limit of L is √S − 1. In this work, 16 electrodes are used for recording, i.e., S = 16, so the limit of L is 3. L = 2 is used here, and a total of 9 distinct spherical harmonics is obtained. The corresponding features are stored in V^{SH}_{lm} with a dimension of 9 × N. Each EEG sub-band is also decomposed using the spherical Fourier basis functions, and the corresponding features are obtained as V^{SH}_{δlm}, V^{SH}_{θlm}, V^{SH}_{αlm}, V^{SH}_{βlm}, and V^{SH}_{γlm}.

Fig. 5: The total potential at a channel is a contribution of each active equivalent dipole.
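As a sketch of how such features could be computed, the nine real spherical harmonics for L = 2 (standard closed-form expressions, not taken from the paper) can be evaluated at the electrode angles, and the discrete analogue of Eq. (5) then reduces to one matrix product; the electrode angles and the plain-sum approximation here are illustrative assumptions:

```python
import numpy as np

def real_sph_harm_basis(theta, phi):
    """The 9 real spherical harmonics up to order L = 2, evaluated at
    polar angle theta and azimuth phi (arrays of equal shape).
    Ordering: (l, m) = (0,0), (1,-1), (1,0), (1,1), (2,-2), ..., (2,2)."""
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    return np.stack([
        0.5 * np.sqrt(1 / np.pi) * np.ones_like(z),      # Y_0,0
        np.sqrt(3 / (4 * np.pi)) * y,                    # Y_1,-1
        np.sqrt(3 / (4 * np.pi)) * z,                    # Y_1,0
        np.sqrt(3 / (4 * np.pi)) * x,                    # Y_1,1
        0.5 * np.sqrt(15 / np.pi) * x * y,               # Y_2,-2
        0.5 * np.sqrt(15 / np.pi) * y * z,               # Y_2,-1
        0.25 * np.sqrt(5 / np.pi) * (3 * z ** 2 - 1),    # Y_2,0
        0.5 * np.sqrt(15 / np.pi) * x * z,               # Y_2,1
        0.25 * np.sqrt(15 / np.pi) * (x ** 2 - y ** 2),  # Y_2,2
    ])

def sh_features(V, theta, phi):
    """Approximate Eq. (5) by a discrete sum over the S electrodes:
    V is S x N (channels x samples); returns a 9 x N feature matrix."""
    Y = real_sph_harm_basis(theta, phi)   # 9 x S
    return Y @ V
```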
3) DWT-Head harmonics-based features: More recently, head harmonics (H2) basis functions have been proposed for a more adequate representation of EEG signals based on the geometry of the human head [32]. Since the EEG sensors placed on the head form a shape between a sphere and a hemisphere, H2 basis functions are shown to be more efficient for representing
the data sampled over the head. The decomposition of an EEG signal matrix V in the H2 domain is given as:

V^{H2}_{lm} = ∫_Ω V(Ω, n) H^m_l(Ω) dΩ ≈ Σ_{w=1}^{S} z_w V(Ω_w, n) H^m_l(Ω_w)    (6)

where z_w denotes the sampling weight and Ω_w = (θ_w, φ_w) is the location of channel w. Here, the angle of elevation θ is in the range [0, 2π/3], as per the head geometry shown in Fig. 6. The real-valued H^m_l(Ω) of lth order and mth degree constitutes an orthonormal set of basis functions defined over the human head.

Fig. 6: Geometry of the human head with the parameters: perimeter = 40 cm, radius = 10 cm [32].

The corresponding features are stored in V^{H2}_{lm} with a dimension of 9 × N, similar to that obtained with the SFT. Each EEG sub-band is also decomposed using the H2 basis functions, and the corresponding features are obtained as V^{H2}_{δlm}, V^{H2}_{θlm}, V^{H2}_{αlm}, V^{H2}_{βlm}, and V^{H2}_{γlm}.
E. Proposed BiCurNet

After pre-processing and feature extraction, the EEG data is given as input to the proposed BiCurNet model, illustrated in Fig. 2. The constituent layers of the proposed model include a depth-wise separable one-dimensional convolution layer (DWSConv1D), a Conv1D layer, a max-pooling (Maxpool1D) layer, a customized attention module, a flatten layer, three dense layers, and an output layer for regression/prediction.
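A quick illustrative comparison shows why the depth-wise separable convolution in the first layer keeps the network lightweight. The 32 filters and kernel width 5 below are the paper's values; taking Nc = 16 input channels is an assumption based on the recording setup:

```python
def conv1d_params(in_ch, filters, ks):
    """Standard Conv1D: every filter spans all input channels."""
    return in_ch * ks * filters + filters        # weights + biases

def dws_conv1d_params(in_ch, filters, ks):
    """Depth-wise separable Conv1D: one ks-wide kernel per input
    channel, followed by a 1x1 point-wise convolution that mixes
    channels."""
    depthwise = in_ch * ks                       # per-channel kernels
    pointwise = in_ch * filters + filters        # 1x1 mixing + biases
    return depthwise + pointwise

std = conv1d_params(in_ch=16, filters=32, ks=5)      # 2592 parameters
dws = dws_conv1d_params(in_ch=16, filters=32, ks=5)  # 624 parameters
```

For these values the separable layer needs roughly a quarter of the parameters of the standard layer, which is the sense in which it suits lightweight, hand-held deployments.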
• Depth-wise separable convolution layer (DWSConv1D): The first layer of the network is a Conv1D layer that performs a depth-wise separable convolution of the received input data with the kernels/filters used in this layer. It receives the input EEG data in the form of an N × Nc matrix, as shown in Fig. 2. Here, N denotes the number of samples in the data, and Nc is the number of channels. The convolution operation is split into two parts in this layer: depth-wise and point-wise [33]. Depth-wise convolution is performed on each channel separately, and point-wise convolution is performed as a 1 × 1 convolution. This operation is computationally efficient w.r.t. the standard convolution layer, making it suitable for lightweight scenarios. The convolution of a filter f[n] with an input v[n] is written as:

v[n] ∗ f[n] = Σ_{i=0}^{ks−1} v[i] · f[n − i]    (7)

where '∗' represents the convolution operation and ks denotes the filter width. In this layer, 32 filters are used, each with a width ks of 5. In general, the zth convolution output, i.e., the feature map of layer lr, is given as [34]:

c^{lr}_z = σ(bi^{lr}_z + Σ_j c^{lr−1}_j ∗ f^{lr}_{zj})    (8)

where c^{lr}_z is the zth feature map in the lrth layer; c^{lr−1}_j is the jth feature map in the preceding layer; f^{lr}_{zj} represents the filter that links feature z to feature j; bi^{lr}_z represents the corresponding bias vector; and σ denotes the activation function, which is the rectified linear unit (ReLU) in this layer, defined as σ(t) = max(0, t). A stride of one is used in this layer. The 'He' uniform initialization is used for the kernel weights, and zero initialization is used for the bias vector. These parameters produce an output of dimension C1: (N − ks + 1) × 32, as shown in Fig. 2. L2 regularization with a factor of 0.001 is also used in this layer to reduce over-fitting.
• Conv1D layer: The second layer is a conventional convolution layer, which operates on all input channels at a time. This layer uses the same parameters as described for the previous layer. The corresponding output dimension of this layer is (C1 − ks + 1) × 32.
• Max pooling layer (Maxpool1D): The convolution layer output is reduced in dimensionality using a 1D max-pooling layer, which retains the highest value of the feature within each segment of the pool size [35]. This layer helps in low-level feature extraction. The corresponding process can be interpreted as [34]:

c^{hx}_m = max_{∀b ∈ ar_m} c^{hx−1}_b    (9)

where ar_m denotes the pool area with index m. In this work, a pool size and a stride of 2 are selected, which results in an output dimension of ((C1 − ks + 1)/2) × 32, as shown in Fig. 2.
• Customized attention module (CAM): The feature maps
|
697 |
+
of the previous layer are further transformed to intensify
|
698 |
+
the more relevant features and restrain the less relevant
|
699 |
+
features. A CAM is utilized for this purpose, which uses
|
700 |
+
a dense layer with 32 units and a multiply layer as
|
701 |
+
shown in Fig. 2. This module works on the attention
|
702 |
+
phenomenon, which enhances the relevant features and
|
703 |
+
diminishes the less significant features [33]. An element-
|
704 |
+
wise multiplication operation is performed between the
|
705 |
+
outputs of the dense layer and the maxpool1D layer. This
|
706 |
+
produces higher values of product where both maxpool1D
|
707 |
+
and dense layer outputs are high, thereby enhancing
|
708 |
+
the more intense features. Similarly, the less significant
|
709 |
+
features are further restrained due to low values of the
|
710 |
+
product where both the layer outputs are low. The input
|
711 |
+
dimension of the dense layer is (C3) × 32, and a dot
|
712 |
+
product operation between a 32 × 32 weight vector of
|
713 |
+
the dense layer and its input results in the same output
|
714 |
+
dimension.
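The element-wise attention step described above can be sketched in NumPy. This is a hedged illustration, not the authors' code: the feature length `C3`, the random weight matrix `W`, and the inputs are assumed purely for shape demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

C3 = 10                              # assumed feature length after max pooling
x = rng.standard_normal((C3, 32))    # stand-in for the maxpool1D output (C3 x 32)
W = rng.standard_normal((32, 32))    # stand-in for the dense-layer kernel (32 x 32)

dense_out = x @ W                    # dense layer keeps the C3 x 32 shape
attended = x * dense_out             # element-wise multiply: large where both are large

print(attended.shape)                # (10, 32) -- same dimension as the input
```

The multiply layer leaves the feature-map dimension unchanged, which is why the CAM can be dropped into the pipeline without altering the downstream flatten layer.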
[Figure: coordinate convention of the recording setup — +Z (superior) / −Z (inferior), anterior–posterior, and left–right axes, with 40 cm and 10 cm dimensions marked.]
TABLE I: Training hyper-parameters (after hyper-tuning).

Ncl | Nk | Dr   | ks | sr | lr    | Bt | ec
3   | 32 | 0.40 | 5  | 1  | 0.001 | 15 | 100

Ncl: number of convolution layers, Nk: number of kernels/filters, Dr: dropout rate, ks: kernel width, sr: stride/shift, lr: learning rate, Bt: batch size, ec: number of training epochs.
• Flatten layer: This layer transforms the output of the CAM, of dimension C3 × 32, into a 1D vector of dimension C4 × 1, as shown in Fig. 2. A dropout with a factor of 0.4 is used after this layer to prevent the model from over-fitting [36].
• Dense layers: Three dense layers with 8 units each are used after the flatten layer. The swish activation function is used in these layers, given by:

$f(x) = x \cdot \mathrm{sigmoid}(x)$   (10)

• Output layer: The final layer is a dense layer for regression that maps its input to the predicted trajectory of dimension N × 1, as shown in Fig. 2. A dense layer implements the dot product between its input and the kernel. A linear activation function is used in this layer, given by:

$f(x) = x$   (11)

The aforementioned layers and hyper-parameters are used to create the proposed network. For training, 80% of the EEG signals with different durations/window lengths are taken from the recorded database. The remaining 20% of the data is divided into 10% test and 10% validation data. Information about the optimal training hyper-parameter selection and the selected values is provided in the next section. The proposed network is built using the Keras deep learning framework with TensorFlow version 2.2.1 as the backend in Python. In this work, data augmentation is utilized to increase the number of training examples and thereby avoid over-fitting. It makes the proposed network more robust by creating new and different training examples from which it can learn real-world variations. For this purpose, random flipping and rolling operations are used in the Python Keras framework.
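The flip/roll augmentation mentioned above can be illustrated with plain NumPy. This is a sketch of the idea rather than the exact pipeline; the window length, signal, and roll offset are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)
window = np.sin(np.linspace(0, 2 * np.pi, 8))   # stand-in for one EEG window

# Random flipping: reverse the window along time with probability 0.5.
flipped = window[::-1] if rng.random() < 0.5 else window

# Random rolling: circularly shift the window by a random offset.
shift = int(rng.integers(1, len(window)))
rolled = np.roll(window, shift)

# Both operations preserve the sample count, so augmented examples
# can be fed to the same network input layer as the originals.
print(flipped.shape, rolled.shape)
```

Because neither operation changes the window length, the augmented set can simply be concatenated with the original training examples.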
IV. RESULTS AND DISCUSSION
In this section, the performance evaluation of the proposed BiCurNet on the recorded EEG signals is presented w.r.t. different parameters, along with detailed interpretations of the results.

A. Hyper-parameters for training BiCurNet
The various parameters used for training the proposed network are presented herein. For assessing the regression/prediction performance of the proposed network, 10% of the EEG signals from the recorded database are used for testing. The data from each subject is used for training, testing, and validation, i.e., subject-dependent training is performed. The network is trained using a batch size of 15, 100 epochs, and the Adam optimizer with a learning rate of 0.001. To curtail the statistical unreliability in the computation of the test loss due to the small database, ten-fold cross-validation is employed for performance evaluation. Mean square error (MSE) is used as the loss function for regression. Table I presents the training hyper-parameters, which are selected using the KerasTuner framework in Python, an optimization framework for tuning hyper-parameters that uses search-and-selection-based criteria. The final selected set of optimal hyper-parameters is listed in the table.
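The search-and-selection idea behind such tuning can be sketched without the KerasTuner dependency. The search space below loosely mirrors Table I, and the objective is a dummy stand-in for "train the model, return validation loss" — both are assumptions for illustration only.

```python
import itertools

# Hypothetical search space loosely mirroring Table I's hyper-parameters.
space = {
    "lr": [0.01, 0.001, 0.0001],
    "batch": [15, 32, 64],
    "kernel_width": [3, 5, 7],
}

def validation_loss(cfg):
    # Dummy objective standing in for actual model training + validation MSE.
    return (abs(cfg["lr"] - 0.001)
            + abs(cfg["batch"] - 15) / 100
            + abs(cfg["kernel_width"] - 5) / 10)

# Exhaustive search: evaluate every configuration, keep the best one.
best = min(
    (dict(zip(space, values)) for values in itertools.product(*space.values())),
    key=validation_loss,
)
print(best)
```

A tuner such as KerasTuner replaces the exhaustive loop with smarter search strategies (random search, Hyperband, Bayesian optimization), but the selection criterion — minimize a validation metric over a declared space — is the same.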
B. Regression metric
In this work, time-lagged and windowed EEG signals are used to estimate the motion trajectory in advance. In particular, EEG data preceding the motion by different time lags (8–240 ms) is used to train, test, and validate the proposed network. Additionally, the performance is evaluated with varying EEG window sizes (320–1600 ms); a 95% overlap between adjacent windows is considered. The Pearson correlation coefficient (PCC) is utilized for analysing the performance of the proposed network w.r.t. upper-limb motion trajectory estimation. The PCC between the true/measured (A) and predicted/estimated (P) trajectory signals with N samples is given as

$\Pi(A, P) = \frac{1}{N-1} \sum_{i=1}^{N} \left( \frac{A_i - m_A}{\sigma_A} \right) \left( \frac{P_i - m_P}{\sigma_P} \right)$   (12)

where m is the mean and σ denotes the standard deviation. This normalized covariance measure assumes a value between −1 and 1.
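Eq. (12) can be checked against NumPy's built-in correlation. This is a sketch; the 1/(N−1) normalization matches sample statistics (ddof=1), and the trajectory signals below are synthetic stand-ins.

```python
import numpy as np

def pcc(a, p):
    """Pearson correlation per Eq. (12), using sample mean/std (ddof=1)."""
    n = len(a)
    za = (a - a.mean()) / a.std(ddof=1)   # z-scored actual trajectory
    zp = (p - p.mean()) / p.std(ddof=1)   # z-scored predicted trajectory
    return np.sum(za * zp) / (n - 1)

rng = np.random.default_rng(7)
actual = np.cumsum(rng.standard_normal(200))         # stand-in trajectory
predicted = actual + 0.5 * rng.standard_normal(200)  # noisy estimate of it

print(round(pcc(actual, predicted), 4))
print(round(np.corrcoef(actual, predicted)[0, 1], 4))  # should agree
```

With ddof=1 throughout, the 1/(N−1)-normalized sum of z-score products is algebraically identical to the textbook Pearson r, which is why the two printed values coincide.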
C. Subject-dependent PCC analysis
The proposed model is trained and tested for each subject separately for subject-dependent (SD) performance analysis. The PCC values, averaged across all trials and subjects, are presented in Table II for varying time lags, window sizes, and EEG features. The EEG bands are considered in the spatial (V), spherical harmonics (V^SH_δnm), and head harmonics (V^H2_δnm) domains. It may be noted that the transformed-domain features (V^SH_δnm and V^H2_δnm) give PCC values similar to their spatial-domain counterparts at reduced computational cost, as detailed in Section III-D2. Additionally, the δ band gives higher PCC values, while the γ band has the lowest PCC. This indicates the pertinence of the low-frequency δ band for motion trajectory decoding using EEG. The best correlation is observed when Vδ, V^SH_δnm, and V^H2_δnm are combined. The highest correlation achieved is 0.7 with a 240 ms advanced EEG window of 1600 ms. This demonstrates the feasibility of early estimation of the motion trajectory using the proposed network.

D. Subject-independent performance analysis
To further explore the adaptability of the proposed network, a subject-independent (SI) analysis is presented herein using the leave-one-out scheme. A comparison of the SI and SD cases in terms of PCC is presented in Fig. 7. The PCC values are averaged over all subjects and lags. A slight decrease in PCC may be noted in the SI case; however, it remains within ±0.05, which indicates the robustness of the proposed network against subject variability.
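The leave-one-out protocol behind the SI analysis can be sketched as follows; the subject labels are hypothetical placeholders, not the study's identifiers.

```python
def leave_one_out(subjects):
    """Yield (train_subjects, test_subject) pairs for SI evaluation."""
    for held_out in subjects:
        train = [s for s in subjects if s != held_out]
        yield train, held_out

subjects = ["S1", "S2", "S3", "S4"]   # hypothetical subject IDs
for train, test in leave_one_out(subjects):
    print(test, "<-", train)          # model trained on `train`, tested on `test`
```

Each fold withholds one subject entirely from training, so the reported SI PCC reflects generalization to unseen subjects rather than unseen trials.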
TABLE II: Pearson correlation coefficient (PCC) for different EEG segments and lags of data (mean over subjects). Within each window-size group, the five columns correspond to lags of 8, 40, 80, 160, and 240 ms.

Features  | 320 ms window            | 800 ms window            | 1200 ms window           | 1600 ms window
V         | 0.25 0.25 0.26 0.26 0.26 | 0.35 0.35 0.36 0.35 0.26 | 0.42 0.42 0.42 0.42 0.43 | 0.55 0.55 0.55 0.55 0.56
Vδ        | 0.34 0.33 0.33 0.34 0.36 | 0.41 0.41 0.42 0.42 0.42 | 0.48 0.48 0.48 0.48 0.49 | 0.61 0.61 0.61 0.66 0.67
Vθ        | 0.24 0.23 0.23 0.24 0.26 | 0.38 0.38 0.38 0.38 0.38 | 0.44 0.44 0.44 0.44 0.45 | 0.55 0.55 0.55 0.56 0.57
Vα        | 0.22 0.22 0.22 0.22 0.21 | 0.36 0.36 0.37 0.36 0.36 | 0.39 0.39 0.39 0.39 0.39 | 0.51 0.51 0.51 0.51 0.53
Vβ        | 0.18 0.18 0.17 0.17 0.17 | 0.29 0.29 0.29 0.30 0.30 | 0.32 0.32 0.32 0.32 0.33 | 0.39 0.39 0.39 0.39 0.39
Vγ        | 0.10 0.10 0.10 0.10 0.10 | 0.17 0.17 0.17 0.18 0.18 | 0.27 0.27 0.27 0.27 0.28 | 0.29 0.29 0.29 0.29 0.29
V^SH_nm   | 0.25 0.25 0.25 0.25 0.26 | 0.34 0.35 0.35 0.35 0.36 | 0.41 0.41 0.41 0.41 0.42 | 0.54 0.54 0.54 0.54 0.55
V^SH_δnm  | 0.34 0.33 0.34 0.35 0.35 | 0.41 0.41 0.41 0.41 0.41 | 0.47 0.47 0.47 0.48 0.48 | 0.61 0.61 0.61 0.66 0.66
V^SH_θnm  | 0.23 0.22 0.22 0.22 0.23 | 0.37 0.37 0.37 0.37 0.38 | 0.44 0.44 0.44 0.44 0.45 | 0.55 0.55 0.55 0.55 0.56
V^SH_αnm  | 0.20 0.20 0.19 0.20 0.20 | 0.34 0.34 0.34 0.34 0.36 | 0.38 0.38 0.38 0.38 0.38 | 0.50 0.50 0.50 0.50 0.51
V^SH_βnm  | 0.17 0.17 0.16 0.16 0.17 | 0.29 0.29 0.29 0.29 0.29 | 0.33 0.33 0.33 0.33 0.34 | 0.40 0.40 0.40 0.40 0.41
V^SH_γnm  | 0.09 0.10 0.10 0.10 0.10 | 0.18 0.18 0.18 0.18 0.18 | 0.28 0.28 0.28 0.28 0.30 | 0.30 0.30 0.30 0.30 0.31
V^H2_nm   | 0.25 0.25 0.26 0.26 0.26 | 0.35 0.35 0.35 0.35 0.36 | 0.42 0.42 0.42 0.42 0.43 | 0.55 0.55 0.55 0.55 0.57
V^H2_δnm  | 0.34 0.33 0.34 0.34 0.35 | 0.41 0.41 0.41 0.41 0.41 | 0.48 0.48 0.48 0.48 0.49 | 0.62 0.62 0.62 0.62 0.65
V^H2_θnm  | 0.25 0.24 0.24 0.23 0.23 | 0.38 0.38 0.38 0.39 0.39 | 0.44 0.44 0.44 0.44 0.45 | 0.53 0.53 0.53 0.53 0.55
V^H2_αnm  | 0.22 0.20 0.20 0.20 0.22 | 0.36 0.36 0.36 0.36 0.36 | 0.38 0.38 0.38 0.38 0.39 | 0.51 0.51 0.51 0.51 0.51
V^H2_βnm  | 0.18 0.18 0.18 0.18 0.18 | 0.28 0.28 0.28 0.28 0.29 | 0.33 0.33 0.33 0.33 0.34 | 0.39 0.39 0.39 0.39 0.40
V^H2_γnm  | 0.11 0.11 0.11 0.11 0.11 | 0.16 0.16 0.16 0.16 0.17 | 0.20 0.20 0.20 0.20 0.20 | 0.19 0.19 0.19 0.19 0.19
Vcom      | 0.36 0.36 0.36 0.36 0.37 | 0.43 0.43 0.44 0.44 0.44 | 0.50 0.50 0.50 0.51 0.52 | 0.67 0.67 0.67 0.68 0.70

Note: Vcom = [Vδ; V^SH_δnm; V^H2_δnm].
Fig. 7: Average PCC values w.r.t. subject-dependent (SD) and subject-independent (SI) training of the proposed network at different window sizes (320 ms to 1600 ms).

E. Robustness analysis
The robustness of the proposed network is analyzed herein using artifactual EEG signals. In particular, the pre-processing did not include ICA decomposition-based artifact removal, and the proposed network is trained and tested using such signals. Mean PCC values obtained using without-artifact (WOA) and with-artifact (WA) EEG signals are presented in Fig. 8. Only a small decrease of 0.06 in the PCC values is observed in the with-artifact case, which indicates the robustness of the proposed model.

Fig. 8: Subject-dependent average PCC values utilizing EEG data with and without artifacts for different window sizes.

F. Trajectory estimation curves
The proposed BiCurNet model is additionally evaluated herein using actual motion trajectories. Fig. 9 illustrates the estimated and actual trajectories for subject 1 with window sizes varying between 800 and 1600 ms; a 95% overlap is considered between two adjacent windows. It may be observed from the figure that the correlation improves considerably as the window size is increased, resulting in a trajectory closer to the ground truth. The ability of the proposed network to follow the trajectory pattern for all windows indicates its learning capability.
Fig. 9: Actual and predicted trajectories of subject 1 (early prediction, before 40 ms).

V. CONCLUSION
A deep learning-based paradigm for early estimation of the upper-limb motion trajectory using EEG signals is proposed in this work. The EEG is collected while performing biceps curl movements. The proposed BiCurNet model is built using a light-weight architecture with depth-wise separable convolution layers and a customized attention module. The input features to the model are taken in the computationally more efficient spherical and head harmonics domains, in addition to the spatio-temporal data. The extensive performance evaluation of the proposed network on in-house recorded EEG signals demonstrates its effectiveness in early estimation. The performance evaluation includes subject-dependent and subject-independent studies. The robustness and noise-awareness of the proposed network are also demonstrated by using artifactual EEG signals for training. Being computationally efficient and noise-aware, the proposed network is suitable for use in real-time BCI applications. Real-time implementation of the proposed network for exosuit control is currently being explored.

ACKNOWLEDGMENT
This research work was supported in part by the DRDO–JATC project with project number RP04191G.
REFERENCES
[1] J. R. Wolpaw and E. W. Wolpaw, "Brain-computer interfaces: something new under the sun," Brain-Computer Interfaces: Principles and Practice, vol. 14, 2012.
[2] S. Gong, K. Xing, A. Cichocki, and J. Li, "Deep learning in EEG: Advance of the last ten-year critical period," IEEE Transactions on Cognitive and Developmental Systems, 2021.
[3] S. Aggarwal and N. Chugh, "Review of machine learning techniques for EEG based brain computer interface," Archives of Computational Methods in Engineering, pp. 1–20, 2022.
[4] A. Chowdhury, H. Raza, Y. K. Meena, A. Dutta, and G. Prasad, "Online covariate shift detection-based adaptive brain-computer interface to trigger hand exoskeleton feedback for neuro-rehabilitation," IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 1070–1080, 2017.
[5] H. Raza, A. Chowdhury, and S. Bhattacharyya, "Deep learning based prediction of EEG motor imagery of stroke patients for neuro-rehabilitation application," in 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020, pp. 1–8.
[6] P. Di Lillo, F. Arrichiello, D. Di Vito, and G. Antonelli, "BCI-controlled assistive manipulator: developed architecture and experimental results," IEEE Transactions on Cognitive and Developmental Systems, vol. 13, no. 1, pp. 91–104, 2020.
[7] X. Li, L. Wang, S. Miao, Z. Yue, Z. Tang, L. Su, Y. Zheng, X. Wu, S. Wang, J. Wang et al., "Sensorimotor rhythm-brain computer interface with audio-cue, motor observation and multisensory feedback for upper-limb stroke rehabilitation: A controlled study," Frontiers in Neuroscience, vol. 16, 2022.
[8] R. Sosnik and O. B. Zur, "Reconstruction of hand, elbow and shoulder actual and imagined trajectories in 3D space using EEG slow cortical potentials," Journal of Neural Engineering, vol. 17, no. 1, p. 016065, 2020.
[9] N. Robinson, T. W. J. Chester, and K. Smitha, "Use of mobile EEG in decoding hand movement speed and position," IEEE Transactions on Human-Machine Systems, vol. 51, no. 2, pp. 120–129, 2021.
[10] A. Jain and L. Kumar, "PreMovNet: Premovement EEG-based hand kinematics estimation for grasp-and-lift task," IEEE Sensors Letters, vol. 6, no. 7, pp. 1–4, 2022.
[11] M. Saini, U. Satija, and M. D. Upadhayay, "One-dimensional convolutional neural network architecture for classification of mental tasks from electroencephalogram," Biomedical Signal Processing and Control, vol. 74, p. 103494, 2022.
[12] A. Jain and L. Kumar, "Subject-independent 3D hand kinematics reconstruction using pre-movement EEG signals for grasp and lift task," arXiv preprint arXiv:2209.01932, 2022.
[13] P. S. Hammon, S. Makeig, H. Poizner, E. Todorov, and V. R. De Sa, "Predicting reaching targets from human EEG," IEEE Signal Processing Magazine, vol. 25, no. 1, pp. 69–77, 2007.
[14] R. Sosnik and L. Zheng, "Reconstruction of hand, elbow and shoulder actual and imagined trajectories in 3D space using EEG current source dipoles," Journal of Neural Engineering, vol. 18, no. 5, p. 056011, 2021.
[15] J.-H. Kim, F. Bießmann, and S.-W. Lee, "Decoding three-dimensional trajectory of executed and imagined arm movements from electroencephalogram signals," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 23, no. 5, pp. 867–876, 2014.
[16] L. Yang, H. Leung, M. Plank, J. Snider, and H. Poizner, "EEG activity during movement planning encodes upcoming peak speed and acceleration and improves the accuracy in predicting hand kinematics," IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 1, pp. 22–28, 2014.
[17] Y. Nakanishi, T. Yanagisawa, D. Shin, H. Kambara, N. Yoshimura, M. Tanaka, R. Fukuma, H. Kishima, M. Hirata, and Y. Koike, "Mapping ECoG channel contributions to trajectory and muscle activity prediction in human sensorimotor cortex," Scientific Reports, vol. 7, no. 1, pp. 1–13, 2017.
[18] N. A. Bhagat, A. Venkatakrishnan, B. Abibullaev, E. J. Artz, N. Yozbatiran, A. A. Blank, J. French, C. Karmonik, R. G. Grossman, M. K. O'Malley et al., "Design and optimization of an EEG-based brain machine interface (BMI) to an upper-limb exoskeleton for stroke survivors," Frontiers in Neuroscience, vol. 10, p. 122, 2016.
[19] M. Mahmoodi, B. Makkiabadi, M. Mahmoudi, and S. Sanei, "A new method for accurate detection of movement intention from single channel EEG for online BCI," Computer Methods and Programs in Biomedicine Update, vol. 1, p. 100027, 2021.
[20] R. Gatti, Y. Atum, L. Schiaffino, M. Jochumsen, and J. Biurrun Manresa, "Decoding kinetic features of hand motor preparation from single-trial EEG using convolutional neural networks," European Journal of Neuroscience, vol. 53, no. 2, pp. 556–570, 2021.
[21] J.-H. Jeong, K.-H. Shim, D.-J. Kim, and S.-W. Lee, "Brain-controlled robotic arm system based on multi-directional CNN-BiLSTM network using EEG signals," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28, no. 5, pp. 1226–1238, 2020.
[22] S. Pancholi, A. Giri, A. Jain, L. Kumar, and S. Roy, "Source aware deep learning framework for hand kinematic reconstruction using EEG signal," IEEE Transactions on Cybernetics, 2022.
[23] K. Kumarasinghe, N. Kasabov, and D. Taylor, "Brain-inspired spiking neural networks for decoding and understanding muscle activity and kinematics from electroencephalography signals during hand movements," Scientific Reports, vol. 11, no. 1, pp. 1–15, 2021.
[24] J. W. Peirce, "PsychoPy—psychophysics software in Python," Journal of Neuroscience Methods, vol. 162, no. 1-2, pp. 8–13, 2007.
[25] A. Delorme and S. Makeig, "EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis," Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9–21, 2004.
[26] A. K. Maddirala and K. C. Veluvolu, "ICA with CWT and k-means for eye-blink artifact removal from fewer channel EEG," IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2022.
[27] J. Mazziotta, A. Toga, A. Evans, P. Fox, J. Lancaster, K. Zilles, R. Woods, T. Paus, G. Simpson, B. Pike et al., "A probabilistic atlas and reference system for the human brain: International Consortium for Brain Mapping (ICBM)," Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, vol. 356, no. 1412, pp. 1293–1322, 2001.
[28] A. Gramfort, T. Papadopoulo, E. Olivi, and M. Clerc, "OpenMEEG: opensource software for quasistatic bioelectromagnetics," Biomedical Engineering Online, vol. 9, no. 1, pp. 1–20, 2010.
[29] R. D. Pascual-Marqui et al., "Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details," Methods Find Exp Clin Pharmacol, vol. 24, no. Suppl D, pp. 5–12, 2002.
[30] M. Saini, U. Satija, and M. D. Upadhayay, "Discriminatory features based on wavelet energy for effective analysis of electroencephalogram during mental tasks," Circuits, Systems, and Signal Processing, pp. 1–29, 2022.
[31] S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674–693, 1989.
[32] A. Giri, L. Kumar, and T. K. Gandhi, "Brain source localization in head harmonics domain," IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–10, 2020.
[33] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, "CBAM: Convolutional block attention module," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3–19.
[34] M. Pérez-Enciso and L. M. Zingaretti, "A guide on deep learning for complex trait genomic prediction," Genes, vol. 10, no. 7, p. 553, 2019.
[35] Z. Jiao, X. Gao, Y. Wang, J. Li, and H. Xu, "Deep convolutional neural networks for mental load classification based on EEG data," Pattern Recognition, vol. 76, pp. 582–595, 2018.
[36] H. Wu and X. Gu, "Towards dropout training for convolutional neural networks," Neural Networks, vol. 71, pp. 1–10, 2015.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
arXiv:2301.12142v1 [math.DG] 28 Jan 2023

THE MOMENT MAP FOR THE VARIETY OF ASSOCIATIVE ALGEBRAS

HUI ZHANG AND ZAILI YAN

Abstract. We consider the moment map m : PVn → iu(n) for the action of GL(n) on Vn = ⊗²(Cⁿ)∗ ⊗ Cⁿ, and study the critical points of the functional Fn = ∥m∥² : PVn → R. Firstly, we prove that [µ] ∈ PVn is a critical point if and only if Mµ = cµI + Dµ for some cµ ∈ R and Dµ ∈ Der(µ), where m([µ]) = Mµ/∥µ∥². Then we show that any algebra µ admits a Nikolayevsky derivation φµ, which is unique up to automorphism, and if moreover [µ] is a critical point of Fn, then φµ = −(1/cµ)Dµ. Secondly, we characterize the maxima and minima of the functional Fn : An → R, where An denotes the projectivization of the algebraic variety of all n-dimensional associative algebras. Furthermore, for an arbitrary critical point [µ] of Fn : An → R, we also obtain a description of the algebraic structure of [µ]. Finally, we classify the critical points of Fn : An → R for n = 2, 3, respectively.

1. Introduction
Lauret has studied the moment map for the variety of Lie algebras and obtained many remarkable results in [7], which turned out to be very important in proving that every Einstein solvmanifold is standard ([9]) and in the characterization of solitons ([1, 10]). Apart from Lie algebras, the study of the moment map in other classes of algebras was also initiated by Lauret; see [11] for more details. Motivated by this, the authors have recently extended the study of the moment map to the variety of 3-Lie algebras (see [17]).

In this paper, we study the moment map for the variety of associative algebras. Let GL(n) be the complex reductive Lie group acting naturally on the complex vector space Vn = ⊗²(Cⁿ)∗ ⊗ Cⁿ, i.e., the space of all n-dimensional complex algebras. The usual Hermitian inner product on Cⁿ naturally induces a U(n)-invariant Hermitian inner product on Vn, which is denoted by ⟨·, ·⟩. Since gl(n) = u(n) + iu(n), we may define a function as follows:

    m : PVn → iu(n),    (m([µ]), A) = (dρµ)e(A)/∥µ∥²,    0 ≠ µ ∈ Vn, A ∈ iu(n),

where (·, ·) is an Ad(U(n))-invariant real inner product on iu(n), and ρµ : GL(n) → R is defined by ρµ(g) = ⟨g.µ, g.µ⟩. The function m is the moment map from symplectic geometry, corresponding to the Hamiltonian action of U(n) on the symplectic manifold PVn (see [4, 12]). In this paper, we study the critical points of the functional Fn = ∥m∥² : PVn → R, with an emphasis on the critical points that lie in An, the projectivization of the algebraic variety of all n-dimensional associative algebras.

2010 Mathematics Subject Classification. 14L30, 17B30, 53D20.
Key words and phrases. Moment map; Variety of associative algebras; Critical point.
This work is supported by NSFC (Nos. 11701300, 11626134) and K.C. Wong Magna Fund in Ningbo University.
The paper is organized as follows. In Sect. 2, we recall some basic concepts and results on complex associative algebras.

In Sect. 3, we first give the explicit expression of the moment map m : PVn → iu(n) in terms of Mµ, that is, m([µ]) = Mµ/∥µ∥² for any [µ] ∈ PVn. Then we show that [µ] ∈ PVn is a critical point of Fn if and only if Mµ = cµI + Dµ for some cµ ∈ R and Dµ ∈ Der(µ) (Thm. 3.3).

In Sect. 4, we first show that any algebra µ ∈ Vn admits a Nikolayevsky derivation φµ which is unique up to automorphism, that the eigenvalues of φµ are necessarily rational, and moreover that φµ = −(1/cµ)Dµ if [µ] is a critical point of Fn (Thm. 4.1). Then we study the extremal points of Fn : An → R, proving that the minimum value is attained at semisimple associative algebras (Thm. 4.6), and the maximum value at the direct sum of a two-dimensional commutative associative algebra with the trivial algebra (Thm. 4.9). In the context of Lie algebras ([7]), Lauret proved that any µ for which there exists [λ] ∈ GL(n).[µ] such that all eigenvalues of Mλ are negative must be semisimple, and we prove that this result also holds for associative algebras (Remark 4.7). Besides, the structure of an arbitrary critical point [µ] of Fn : An → R is discussed (Thm. 4.10 and Thm. 4.12).

In Sect. 5, we classify the critical points of Fn : An → R for n = 2, 3. It turns out that every two-dimensional associative algebra is isomorphic to a critical point of F2, and that there exists only one three-dimensional associative algebra which is not isomorphic to any critical point of F3. Finally, based on the discussion in the previous sections, we collect some natural and interesting questions.
2. Preliminaries

In this section, we recall some basic definitions and results on associative algebras. The ambient field is always assumed to be the complex number field C unless otherwise stated.

Definition 2.1. A vector space A over C with a bilinear operation A × A → A, denoted by (x, y) ↦ xy, is called an associative algebra if

    x(yz) = (xy)z    for all x, y, z ∈ A.

A derivation of an associative algebra A is a linear transformation D : A → A satisfying

    D(xy) = (Dx)y + x(Dy)    for all x, y ∈ A.

It is easy to see that the set of all derivations of A is a Lie algebra, which is denoted by Der(A). A vector subspace I of A is called an ideal if AI, IA ⊂ I.

Definition 2.2. Let A be an associative algebra. The center of A is defined by C(A) = {x ∈ A : xy = yx, ∀y ∈ A}. The annihilator of A is defined by ann(A) = {x ∈ A : xy = yx = 0, ∀y ∈ A}.

Clearly, C(A) is a subalgebra of A, and ann(A) is an ideal of A.

Definition 2.3. Let I be an ideal of an associative algebra. Then I is called nilpotent if Iᵏ = 0 for some integer k ≥ 1, where Iᵏ = I · · · I denotes the product of k copies of I.

If I, J are any two nilpotent ideals of an associative algebra A, then I + J is also a nilpotent ideal. So the maximal nilpotent ideal of A is unique; it is called the radical and denoted by N(A).

Remark 2.4. Note that N(A) coincides with the Jacobson radical of A since A is an associative algebra over C. Moreover, N(A) = {x ∈ A : xy, yx are nilpotent elements for any y ∈ A}.

Definition 2.5. Let A be an associative algebra. If A has no ideals except itself and 0, we call A simple.

Denote by Mn(C) the set of all n × n complex matrices, which is clearly an associative algebra with respect to the usual matrix addition and multiplication. In fact, Mn(C) is a simple associative algebra for any n ≥ 1. Moreover, it follows from the Wedderburn-Artin theorem that any finite-dimensional simple associative algebra over C is isomorphic to Mn(C) for some integer n ≥ 1 ([15]).

An associative algebra A is called semisimple if its radical N(A) is zero. The following theorem is well known.

Theorem 2.6 ([15]). An associative algebra over C is semisimple if and only if it is a direct sum of simple ideals. That is, a semisimple associative algebra is isomorphic to Mn1(C) × Mn2(C) × · · · × Mns(C) for some positive integers n1, n2, · · · , ns.
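Concretely, a finite-dimensional algebra can be stored as an array of structure constants, and both the associativity condition of Definition 2.1 and the multiplication of Mn(C) can then be checked mechanically. The following Python sketch is our own illustration (the encoding c[i, j, k] = coefficient of the k-th basis element in the product of the i-th and j-th basis elements, and all function names, are assumptions of this sketch, not notation from the paper):

```python
import numpy as np

def matrix_algebra_constants(m):
    """Structure constants of Mm(C) in the basis {E_pq}, flattened so that
    index p*m + q stands for the matrix unit E_pq."""
    n = m * m
    c = np.zeros((n, n, n), dtype=complex)
    for p in range(m):
        for q in range(m):
            for r in range(m):
                for s in range(m):
                    if q == r:  # E_pq E_rs = delta_qr E_ps
                        c[p * m + q, r * m + s, p * m + s] = 1.0
    return c

def is_associative(c, tol=1e-10):
    """Check x(yz) = (xy)z on all triples of basis elements."""
    # (e_i e_j) e_k = sum_p c[i,j,p] c[p,k,:];  e_i (e_j e_k) = sum_p c[j,k,p] c[i,p,:]
    left = np.einsum('ijp,pkq->ijkq', c, c)
    right = np.einsum('jkp,ipq->ijkq', c, c)
    return np.max(np.abs(left - right)) < tol

print(is_associative(matrix_algebra_constants(2)))  # M2(C) is associative
```

As a contrast, the two-dimensional product with e1e1 = e2 and e2e2 = e1 fails the same check, since (e1e1)e2 = e1 while e1(e1e2) = 0.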
3. The moment map for complex algebras

Let Cⁿ be the n-dimensional complex vector space and Vn = ⊗²(Cⁿ)∗ ⊗ Cⁿ be the space of all complex n-dimensional algebras. The natural action of GL(n) = GL(Cⁿ) on Vn is given by

    g.µ(X, Y) = gµ(g⁻¹X, g⁻¹Y),    g ∈ GL(n), X, Y ∈ Cⁿ.    (3.1)

Clearly, GL(n).µ is precisely the isomorphism class of µ. Noting that

    lim_{t→∞} gt.µ = 0,    gt = tI ∈ GL(n), t > 0,

we see that 0 lies in the boundary of the orbit GL(n).µ for each µ ∈ Vn. By differentiating (3.1), we obtain the natural action of gl(n) on Vn, i.e.,

    A.µ(X, Y) = Aµ(X, Y) − µ(AX, Y) − µ(X, AY),    A ∈ gl(n), µ ∈ Vn.    (3.2)

It follows that A.µ = 0 if and only if A ∈ Der(µ), where Der(µ) denotes the derivation algebra of µ. Note that the usual Hermitian inner product on Cⁿ gives a U(n)-invariant Hermitian inner product on Vn as follows:

    ⟨µ, λ⟩ = Σ_{i,j,k} ⟨µ(Xi, Xj), Xk⟩⟨λ(Xi, Xj), Xk⟩,    µ, λ ∈ Vn,    (3.3)

where {X1, X2, · · · , Xn} is an arbitrary orthonormal basis of Cⁿ. Let u(n) denote the Lie algebra of U(n); then it is easy to see that gl(n) = u(n) + iu(n) decomposes into the skew-Hermitian and Hermitian transformations of Vn, respectively. Moreover, there is an Ad(U(n))-invariant Hermitian inner product on gl(n) given by

    (A, B) = tr AB∗,    A, B ∈ gl(n).    (3.4)

The moment map from symplectic geometry, corresponding to the Hamiltonian action of U(n) on the symplectic manifold PVn, is defined as follows:

    m : PVn → iu(n),    (m([µ]), A) = (dρµ)e(A)/∥µ∥²,    0 ≠ µ ∈ Vn, A ∈ iu(n),    (3.5)

where ρµ : GL(n) → R is given by ρµ(g) = ⟨g.µ, g.µ⟩. Clearly, (dρµ)e(A) = ⟨A.µ, µ⟩ + ⟨µ, A.µ⟩ = 2⟨A.µ, µ⟩ for any A ∈ iu(n). The square norm of the moment map is denoted by

    Fn : PVn → R,    (3.6)

where Fn([µ]) = ∥m([µ])∥² = (m([µ]), m([µ])) for any [µ] ∈ PVn.

In order to express the moment map m explicitly, we define Mµ ∈ iu(n) as follows:

    Mµ = 2 Σ_i Lµ_{Xi}(Lµ_{Xi})∗ − 2 Σ_i (Lµ_{Xi})∗Lµ_{Xi} − 2 Σ_i (Rµ_{Xi})∗Rµ_{Xi},    (3.7)

where the left and right multiplications Lµ_X, Rµ_X : Cⁿ → Cⁿ by X in the algebra µ are given by Lµ_X(Y) = µ(X, Y) and Rµ_X(Y) = µ(Y, X) for all Y ∈ Cⁿ, respectively. It is not hard to prove that

    ⟨MµX, Y⟩ = 2 Σ_{i,j} ⟨µ(Xi, Xj), X⟩⟨µ(Xi, Xj), Y⟩ − 2 Σ_{i,j} ⟨µ(Xi, X), Xj⟩⟨µ(Xi, Y), Xj⟩ − 2 Σ_{i,j} ⟨µ(X, Xi), Xj⟩⟨µ(Y, Xi), Xj⟩    (3.8)

for X, Y ∈ Cⁿ. Note that if the algebra µ is commutative or anticommutative, then the second and third terms of (3.8) coincide, and in this case Mµ agrees with the corresponding map in [7].

Lemma 3.1. For any µ ∈ Vn, we have (Mµ, A) = 2⟨µ, A.µ⟩ for all A ∈ gl(n) = u(n) + iu(n). In particular, m([µ]) = Mµ/∥µ∥² for any 0 ≠ µ ∈ Vn.
Proof. For any A ∈ gl(n), we have (A, Mµ) = tr AMµ∗ = tr AMµ = tr MµA, and

    tr MµA = 2 tr Σ_i Lµ_{Xi}(Lµ_{Xi})∗A − 2 tr Σ_i ((Lµ_{Xi})∗Lµ_{Xi} + (Rµ_{Xi})∗Rµ_{Xi})A =: I − II.

Note that

    I = 2 Σ_i tr Lµ_{Xi}(Lµ_{Xi})∗A = 2 Σ_i tr (Lµ_{Xi})∗ALµ_{Xi} = 2 Σ_{i,j} ⟨(Lµ_{Xi})∗ALµ_{Xi}(Xj), Xj⟩ = 2 Σ_{i,j} ⟨Aµ(Xi, Xj), µ(Xi, Xj)⟩,

and

    II = 2 tr Σ_i ((Lµ_{Xi})∗Lµ_{Xi} + (Rµ_{Xi})∗Rµ_{Xi})A
       = 2 Σ_{i,j} ⟨((Lµ_{Xi})∗Lµ_{Xi} + (Rµ_{Xi})∗Rµ_{Xi})AXj, Xj⟩
       = 2 Σ_{i,j} ⟨µ(Xi, AXj), µ(Xi, Xj)⟩ + 2 Σ_{i,j} ⟨µ(AXj, Xi), µ(Xj, Xi)⟩
       = 2 Σ_{i,j} ⟨µ(AXi, Xj) + µ(Xi, AXj), µ(Xi, Xj)⟩.

By (3.2), it follows that (A, Mµ) = tr MµA = 2⟨A.µ, µ⟩, so (Mµ, A) = 2⟨µ, A.µ⟩ for any A ∈ gl(n). This proves the first statement. For A ∈ iu(n), we have ⟨A.µ, µ⟩ = ⟨µ, A.µ⟩. By (3.5), we conclude that m([µ]) = Mµ/∥µ∥² for any 0 ≠ µ ∈ Vn. This completes the proof of Lemma 3.1. □
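Lemma 3.1 can be sanity-checked numerically. The sketch below is our own illustration: it encodes a random algebra µ by structure constants c[i, j, k] = ⟨µ(Xi, Xj), Xk⟩ in the standard orthonormal basis (an encoding convention assumed here, not notation from the paper), builds Mµ from (3.7), and verifies the identity tr(MµA) = 2⟨A.µ, µ⟩ established in the proof, with A.µ from (3.2) and ⟨·, ·⟩ from (3.3).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
c = rng.standard_normal((n, n, n)) + 1j * rng.standard_normal((n, n, n))  # random algebra mu

# Left/right multiplication matrices: L[i] e_j = mu(e_i, e_j), R[i] e_j = mu(e_j, e_i).
L = np.einsum('ijk->ikj', c)   # L[i][k, j] = c[i, j, k]
R = np.einsum('jik->ikj', c)   # R[i][k, j] = c[j, i, k]

H = lambda X: X.conj().T
M = (2 * sum(L[i] @ H(L[i]) for i in range(n))
     - 2 * sum(H(L[i]) @ L[i] for i in range(n))
     - 2 * sum(H(R[i]) @ R[i] for i in range(n)))   # eq. (3.7)

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# (A.mu)(e_i, e_j) = A mu(e_i, e_j) - mu(A e_i, e_j) - mu(e_i, A e_j), eq. (3.2)
d = (np.einsum('ak,ijk->ija', A, c)
     - np.einsum('pi,pja->ija', A, c)
     - np.einsum('pj,ipa->ija', A, c))

lhs = np.trace(M @ A)              # tr(M_mu A)
rhs = 2 * np.sum(d * c.conj())     # 2 <A.mu, mu> via eq. (3.3)
print(abs(lhs - rhs) < 1e-9)
```

Since each summand in (3.7) is Hermitian, the computed M is Hermitian as well, matching Mµ ∈ iu(n).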
Corollary 3.2. For any µ ∈ Vn, we have:
(i) tr MµD = 0 for any D ∈ Der(µ);
(ii) tr Mµ[A, A∗] ≥ 0 for any A ∈ Der(µ), and equality holds if and only if A∗ ∈ Der(µ).

Proof. For (i), it follows from Lemma 3.1 that tr MµD = 2⟨D.µ, µ⟩ = 0. For (ii), we have tr Mµ[A, A∗] = 2⟨[A, A∗].µ, µ⟩ = 2⟨A∗.µ, A∗.µ⟩ ≥ 0 for all A ∈ Der(µ), together with the fact that A∗.µ = 0 if and only if A∗ ∈ Der(µ). □
Theorem 3.3. The moment map m : PVn → iu(n), the square norm of the moment map Fn = ∥m∥² : PVn → R and the gradient of Fn are, respectively, given by

    m([µ]) = Mµ/∥µ∥²,    Fn([µ]) = tr Mµ²/∥µ∥⁴,    grad(Fn)[µ] = 8π∗((Mµ).µ)/∥µ∥⁴,    [µ] ∈ PVn,    (3.9)

where π∗ denotes the derivative of the canonical projection π : Vn\{0} → PVn. Moreover, the following statements are equivalent:
(i) [µ] ∈ PVn is a critical point of Fn.
(ii) [µ] ∈ PVn is a critical point of Fn|GL(n).[µ].
(iii) Mµ = cµI + Dµ for some cµ ∈ R and Dµ ∈ Der(µ).

Proof. By (3.6) and Lemma 3.1, we have Fn([µ]) = tr Mµ²/∥µ∥⁴ for any [µ] ∈ PVn. To prove the gradient formula, we only need to compute the gradient of Fn : Vn \ {0} → R, Fn(µ) = tr Mµ²/∥µ∥⁴, and then to project it via π∗. If µ, λ ∈ Vn with µ ≠ 0, then

    Re⟨grad(Fn)µ, λ⟩ = (d/dt)|_{t=0} Fn(µ + tλ) = (d/dt)|_{t=0} (1/∥µ + tλ∥⁴)(Mµ+tλ, Mµ+tλ)
                     = −4 Re⟨(Fn(µ)/∥µ∥²)µ, λ⟩ + (2/∥µ∥⁴)((d/dt)|_{t=0} Mµ+tλ, Mµ).

We claim that ((d/dt)|_{t=0} Mµ+tλ, A) = 4 Re⟨A.µ, λ⟩ for any A ∈ iu(n). Indeed, by Lemma 3.1,

    ((d/dt)|_{t=0} Mµ+tλ, A) = (d/dt)|_{t=0} (Mµ+tλ, A) = 2 (d/dt)|_{t=0} ⟨µ + tλ, A.(µ + tλ)⟩ = 2⟨λ, A.µ⟩ + 2⟨µ, A.λ⟩ = 4 Re⟨A.µ, λ⟩.

It follows that grad(Fn)µ = −4(Fn(µ)/∥µ∥²)µ + 8((Mµ).µ)/∥µ∥⁴, and consequently

    grad(Fn)[µ] = 8π∗((Mµ).µ)/∥µ∥⁴.

Thus the first part of the theorem is proved; it remains to prove the equivalence of the statements (i), (ii) and (iii).

(i) ⇔ (ii): The equivalence follows from the fact that grad(Fn) is tangent to the GL(n)-orbits. Indeed,

    grad(Fn)[µ] = 8π∗((Mµ).µ)/∥µ∥⁴ = (8/∥µ∥⁴) π∗((d/dt)|_{t=0} exp(tMµ).µ) = (8/∥µ∥⁴) (d/dt)|_{t=0} exp(tMµ).[µ] ∈ T[µ](GL(n).[µ]).

(iii) ⇒ (i): By (3.2), we know that I.µ = −µ, and hence (Mµ).µ = (cµI + Dµ).µ = −cµµ. It follows that grad(Fn)[µ] = 0.

(i) ⇒ (iii): Since grad(Fn)[µ] = 0, we have (Mµ).µ ∈ ker π∗µ = Cµ. So Mµ = cI + D for some c ∈ C and D ∈ Der(µ). Clearly [D, D∗] = [Mµ − cI, Mµ − c̄I] = 0, so we conclude by Corollary 3.2 that D∗ is also a derivation of µ. In particular, (c − c̄)I = D∗ − D ∈ Der(µ), and thus c = c̄ ∈ R. □
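Theorem 3.3 suggests a simple numerical experiment: since grad(Fn)[µ] is proportional to the projection of (Mµ).µ, one can run an explicit-Euler negative gradient flow on the unit sphere of Vn and watch Fn decrease toward a critical value. The following is a rough sketch under the structure-constant encoding used above; the step size, iteration count and function names are ad hoc assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
c = rng.standard_normal((n, n, n)) + 1j * rng.standard_normal((n, n, n))
c /= np.linalg.norm(c)

def moment_matrix(c):
    """M_mu of eq. (3.7) for the algebra with structure constants c."""
    n = c.shape[0]
    L = np.einsum('ijk->ikj', c)
    R = np.einsum('jik->ikj', c)
    M = np.zeros((n, n), dtype=complex)
    for i in range(n):
        M += (2 * L[i] @ L[i].conj().T
              - 2 * L[i].conj().T @ L[i]
              - 2 * R[i].conj().T @ R[i])
    return M

def act(A, c):
    """(A.mu) of eq. (3.2) in structure constants."""
    return (np.einsum('ak,ijk->ija', A, c)
            - np.einsum('pi,pja->ija', A, c)
            - np.einsum('pj,ipa->ija', A, c))

def F(c):
    M = moment_matrix(c)
    return np.trace(M @ M).real / np.linalg.norm(c) ** 4   # F_n([mu]) = tr(M^2)/||mu||^4

vals = [F(c)]
for _ in range(1000):
    c = c - 0.002 * act(moment_matrix(c), c)   # step along -(M_mu).mu
    c /= np.linalg.norm(c)                     # F_n is scale-invariant, so renormalize
    vals.append(F(c))
print(vals[0], vals[-1])
```

The final value always stays above 4/n, which anticipates Lemma 4.5 below: 4/n is the minimum of Fn on PVn.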
Remark 3.4. Let [µ] be a critical point of Fn and [λ] be a critical point of Fm; then [µ ⊕ t0λ] is a critical point of Fn+m for a suitable t0 ∈ C. Indeed, assume that Mµ = cµI + Dµ for some cµ ∈ R, Dµ ∈ Der(µ), and Mλ = cλI + Dλ for some cλ ∈ R, Dλ ∈ Der(λ). Noting that Mtλ = |t|²Mλ for any t ∈ C, we can choose t0 such that cµ = |t0|²cλ; it then follows that [µ ⊕ t0λ] is a critical point of Fn+m.
In the framework of algebras, a remarkable result due to Ness can be stated as follows.

Theorem 3.5 ([12]). If [µ] is a critical point of the functional Fn : PVn → R, then
(i) Fn|GL(n).[µ] attains its minimum value at [µ];
(ii) [λ] ∈ GL(n).[µ] is a critical point of Fn if and only if [λ] ∈ U(n).[µ].

In fact, the above theorem implies that, up to the U(n)-orbit, GL(n).[µ] contains at most one critical point for each [µ] ∈ PVn.
Lemma 3.6. Let [µ] ∈ PVn be a critical point of Fn with Mµ = cµI + Dµ for some cµ ∈ R and Dµ ∈ Der(µ). Then we have:
(i) cµ = tr Mµ² / tr Mµ = −(1/2) tr Mµ² / ∥µ∥² < 0.
(ii) If tr Dµ ≠ 0, then cµ = − tr Dµ² / tr Dµ and tr Dµ > 0.

Proof. Since Mµ = cµI + Dµ, by Lemma 3.1 and Corollary 3.2 we have

    tr Mµ = (Mµ, I) = 2⟨µ, I.µ⟩ = −2∥µ∥² < 0,
    tr Mµ² = tr Mµ(cµI + Dµ) = cµ tr Mµ.

So cµ = tr Mµ² / tr Mµ = −(1/2) tr Mµ² / ∥µ∥² < 0. If tr Dµ ≠ 0, then

    0 = tr MµDµ = cµ tr Dµ + tr Dµ².

So cµ = − tr Dµ² / tr Dµ, and tr Dµ > 0. □

Remark 3.7. In fact, tr Dµ = 0 if and only if Dµ = 0. Indeed, this follows from 0 = tr MµDµ = cµ tr Dµ + tr Dµ² together with the fact that Dµ is Hermitian.
4. The critical points of the variety of associative algebras

The space of all n-dimensional associative algebras is an algebraic subset of Vn, since it is cut out by polynomial conditions. Denote by An the projective algebraic variety obtained by its projectivization. Note that An is GL(n)-invariant; then by Theorem 3.3, the critical points of Fn : An → R are precisely the critical points of Fn : PVn → R that lie in An.

4.1. The Nikolayevsky derivation and rationality. A derivation φ of an algebra (µ, Cⁿ) is called a Nikolayevsky derivation if it is semisimple with all eigenvalues real, and tr φψ = tr ψ for any ψ ∈ Der(µ). This notion is motivated by [14].

Theorem 4.1. Let (µ, Cⁿ) be an arbitrary algebra. Then
(1) (µ, Cⁿ) admits a Nikolayevsky derivation φµ.
(2) The Nikolayevsky derivation φµ is determined up to automorphism of µ.
(3) All eigenvalues of φµ are rational numbers.
If moreover, [µ] is a critical point of Fn : PVn → R with Mµ = cµI + Dµ for some cµ ∈ R and Dµ ∈ Der(µ), then −(1/cµ)Dµ is the Nikolayevsky derivation of µ.
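The Nikolayevsky derivation can be computed mechanically for small examples: one finds Der(µ) as the kernel of a linear map and then solves tr(φψ) = tr ψ on it. The sketch below (our own illustration, with our structure-constant encoding) does this for the two-dimensional associative algebra with µ(e1, e1) = e2 and all other products zero. A hand computation gives Der(µ) = {[[a, 0], [b, 2a]]} and Nikolayevsky eigenvalues 3/5 and 6/5, in line with part (3); the minimum-norm least-squares solution used below can differ from φµ only by a nilpotent derivation, which does not change the eigenvalues.

```python
import numpy as np

n = 2
c = np.zeros((n, n, n))
c[0, 0, 1] = 1.0   # mu(e1, e1) = e2, all other products zero (associative, nilpotent)

# Der(mu) = kernel of psi -> psi.mu(ei,ej) - mu(psi ei, ej) - mu(ei, psi ej).
rows = []
for i in range(n):
    for j in range(n):
        for a in range(n):
            row = np.zeros((n, n))
            row[a, :] += c[i, j, :]   # psi applied after multiplying
            row[:, i] -= c[:, j, a]   # psi on the first factor
            row[:, j] -= c[i, :, a]   # psi on the second factor
            rows.append(row.ravel())
_, s, Vt = np.linalg.svd(np.array(rows))
der = Vt[np.sum(s > 1e-10):]          # orthonormal basis of Der(mu), vectorized

# Solve tr(phi psi) = tr(psi) for phi in Der(mu).
psis = [v.reshape(n, n) for v in der]
G = np.array([[np.trace(p @ q) for q in psis] for p in psis])
t = np.array([np.trace(p) for p in psis])
x = np.linalg.lstsq(G, t, rcond=None)[0]
phi = sum(xi * p for xi, p in zip(x, psis))
print(np.sort(np.linalg.eigvals(phi).real))   # rational eigenvalues 3/5 and 6/5
```

Here the Gram matrix G is degenerate exactly along the nilpotent derivations, which matches Ker B = n in the proof below.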
423 |
+
Proof. (1) The complex Lie algebra Der(µ) is algebraic. Let Der(µ) = s ⊕ t ⊕ n be its Levi-Mal’cev
|
424 |
+
decomposition, where s is semisimple, t ⊕ n is the radical of Der(µ), n is the set of all nilpotent elements
|
425 |
+
in t ⊕ n (and is the nilradical of t ⊕ n), t is an abelian subalgebra consisting of semisimple elements, and
|
426 |
+
[s, t] = 0. Define the bilinear form B on Der(µ) by
|
427 |
+
B(ψ1, ψ2) := tr ψ1ψ2,
|
428 |
+
∀ψ1, ψ2 ∈ Der(µ).
|
429 |
+
Then, in general, B is degenerate, and Ker B = n. Since s is semisimple, then B(s, t) = B([s, s], t) =
|
430 |
+
B(s, [s, t]) = 0. Clearly, B is nondegenerate on t. Since t is reductive, we have t = a + ia, where a consists
|
431 |
+
of semisimple elements with all eigenvalues real. It follows that there exists a unique element φ ∈ a such
|
432 |
+
that B(φ, ψ) = tr ψ for any ψ ∈ t. Thus tr φψ = tr ψ for any ψ ∈ Der(µ).
|
433 |
+
(2) The subalgebra s ⊕ t is a maximal fully reducible subalgebra of Der(µ). Since the maximal fully
|
434 |
+
reducible subalgebras of Der(µ) are conjugate by inner automorphism of Der(µ) (which corresponds to
|
435 |
+
an automorphism of µ), and then the center t of s ⊕ t, is defined uniquely, up to automorphism. So the
|
436 |
+
Nikolayevsky derivation is determined up to automorphism of µ.
|
437 |
+
(3) The case φµ = 0 is trivial. In the following, we assume that φµ is nonzero. Note that φµ is
|
438 |
+
simisimple with all eigenvalues real, we have the following decomposition
|
439 |
+
Cn = l1 ⊕ l2 ⊕ · · · ⊕ lr,
|
440 |
+
where li = {X ∈ Cn|φµX = ciX} are eigenspaces of φµ corresponding to eigenvalues c1 < c2 < · · · < cr ∈
|
441 |
+
R, respectively. Set di = dim li ∈ N, 1 ≤ i ≤ r. Since φµ is a derivation, we have the following relations
|
442 |
+
µ(li, lj) ⊂ lk
|
443 |
+
if ci + cj = ck,
|
444 |
+
for all 1 ≤ i, j, k ≤ r. Conversely, if we define a linear transformation ψ : Cn → Cn by ψ|li = aiIdli,
|
445 |
+
where a1, a2, · · · , ar ∈ R satisfy ai + aj = ak for all 1 ≤ i, j, k ≤ r such that ci + cj = ck, then ψ is
|
446 |
+
a derivation of µ. Clearly, all such derivations form a real vector space, which can be identified with
|
447 |
+
W := {(w1, w2, · · · , wr) ∈ Rr|wi + w j = wk if ci + cj = ck}. We endow Rr with the usual inner product, i.e.
|
448 |
+
⟨x, y⟩ =
|
449 |
+
�
|
450 |
+
i
|
451 |
+
xiyi,
|
452 |
+
(4.1)
|
453 |
+
for any x = (x1, x2, · · · , xr), y = (y1, y2, · · · , yr) ∈ Rr.
|
454 |
+
For any derivation ψ ∈ W, we have
|
455 |
+
tr(φµ − I)ψ = tr φµψ − tr ψ = 0.
|
456 |
+
|
457 |
+
THE MOMENT MAP FOR THE VARIETY OF ASSOCIATIVE ALGEBRAS
|
458 |
+
9
|
459 |
+
Then we see that (d1(c1 − 1), d2(c2 − 1), · · · , dr(cr − 1)) ⊥ W relative to (4.1). Put F := W⊥, then by
|
460 |
+
definition we have
|
461 |
+
F = span1≤i, j,k≤r{ei + ej − ek : ci + cj = ck},
|
462 |
+
where ei belongs to Rr having 1 in the i-th position and 0 elsewhere. Let {ei1 +ej1 −ek1, · · · , eis +ejs −eks}
|
463 |
+
be a basis of F, then
|
464 |
+
(d1(c1 − 1), d2(c2 − 1), · · · , dr(cr − 1)) =
|
465 |
+
s
|
466 |
+
�
|
467 |
+
p=1
|
468 |
+
bp(eip + ejp − ekp),
|
469 |
+
(4.2)
|
470 |
+
for some b1, b2, · · · , bs ∈ R. Put
|
471 |
+
E =
|
472 |
+
|
473 |
+
ei1 + ej1 − ek1
|
474 |
+
ei2 + ej2 − ek2
|
475 |
+
...
|
476 |
+
eis + ejs − eks
|
477 |
+
|
478 |
+
∈ Zs×r,
|
479 |
+
then EET ∈ GL(s, Z), and (EET)−1 ∈ GL(s, Q). By (4.2) and the definition of E, we have
|
480 |
+
|
481 |
+
d1(c1 − 1)
|
482 |
+
d2(c2 − 1)
|
483 |
+
...
|
484 |
+
dr(cr − 1)
|
485 |
+
|
486 |
+
r×1
|
487 |
+
= ET
|
488 |
+
|
489 |
+
b1
|
490 |
+
b2
|
491 |
+
...
|
492 |
+
bs
|
493 |
+
|
494 |
+
s×1
|
495 |
+
, E
|
496 |
+
|
497 |
+
c1
|
498 |
+
c2
|
499 |
+
...
|
500 |
+
cr
|
501 |
+
|
502 |
+
r×1
|
503 |
+
=
|
504 |
+
|
505 |
+
0
|
506 |
+
0
|
507 |
+
...
|
508 |
+
0
|
509 |
+
|
510 |
+
s×1
|
511 |
+
,
|
512 |
+
E
|
513 |
+
|
514 |
+
1
|
515 |
+
1
|
516 |
+
...
|
517 |
+
1
|
518 |
+
|
519 |
+
r×1
|
520 |
+
=
|
521 |
+
|
522 |
+
1
|
523 |
+
1
|
524 |
+
...
|
525 |
+
1
|
526 |
+
|
527 |
+
s×1
|
528 |
+
.
|
529 |
+
By the left multiplication of E on (4.2), we have
|
530 |
+
|
531 |
+
0
|
532 |
+
0
|
533 |
+
...
|
534 |
+
0
|
535 |
+
|
536 |
+
s×1
|
537 |
+
−
|
538 |
+
|
539 |
+
1
|
540 |
+
1
|
541 |
+
...
|
542 |
+
1
|
543 |
+
|
544 |
+
s×1
|
545 |
+
= ED−1ET
|
546 |
+
|
547 |
+
b1
|
548 |
+
b2
|
549 |
+
...
|
550 |
+
bs
|
551 |
+
|
552 |
+
s×1
|
553 |
+
,
|
554 |
+
where D = diag(d1, d2, · · · , dr). It is easy to see that (ED−1ET) ∈ GL(s, Q). Consequently
|
555 |
+
D
|
556 |
+
|
557 |
+
c1 − 1
|
558 |
+
c2 − 1
|
559 |
+
...
|
560 |
+
cr − 1
|
561 |
+
|
562 |
+
r×1
|
563 |
+
= −ET(ED−1ET)−1
|
564 |
+
|
565 |
+
1
|
566 |
+
1
|
567 |
+
...
|
568 |
+
1
|
569 |
+
|
570 |
+
s×1
|
571 |
+
,
|
572 |
+
and
|
573 |
+
|
574 |
+
c1
|
575 |
+
c2
|
576 |
+
...
|
577 |
+
cr
|
578 |
+
|
579 |
+
r×1
|
580 |
+
=
|
581 |
+
|
582 |
+
1
|
583 |
+
1
|
584 |
+
...
|
585 |
+
1
|
586 |
+
|
587 |
+
r×1
|
588 |
+
− D−1ET(ED−1ET)−1
|
589 |
+
|
590 |
+
1
|
591 |
+
1
|
592 |
+
...
|
593 |
+
1
|
594 |
+
|
595 |
+
s×1
|
596 |
+
∈ Qr.
|
597 |
+
So all eigenvalues of φµ are rational.
|
598 |
+
For the last statement, by Corollary 3.2 we know that 0 = tr Mµψ = cµ tr ψ+tr Dµψ for any ψ ∈ Der(µ).
|
599 |
+
Since Dµ is Hermitian, we conclude that − 1
|
600 |
+
cµ Dµ is the Nikolayevsky derivation of µ.
|
601 |
+
□
|
602 |
+
By Theorem 4.1, it is easy to obtain the following theorem.
|
603 |
+
|
604 |
+
10
|
605 |
+
HUI ZHANG AND ZAILI YAN
|
606 |
+
Theorem 4.2. Let [µ] ∈ PVn be a critical point of Fn : PVn → R with Mµ = cµI + Dµ for some cµ ∈ R
|
607 |
+
and Dµ ∈ Der(µ). Then there exists a constant c > 0 such that the eigenvalues of cDµ are integers prime
|
608 |
+
to each other, say k1 < k2 < · · · < kr ∈ Z with multiplicities d1, d2, · · · , dr ∈ N.
|
609 |
+
Definition 4.3. The data set (k1 < k2 < · · · < kr; d1, d2, · · · , dr) in Theorem 4.2 is called the type of the
|
610 |
+
critical point [µ].
|
611 |
+
Proposition 4.4. Let [µ] ∈ PVn be a critical point of Fn with type α = (k1 < k2 < · · · < kr; d1, d2, · · · , dr).
|
612 |
+
Then we have
|
613 |
+
(i) If α = (0; n), then Fn([µ]) = 4
|
614 |
+
n.
|
615 |
+
(ii) If α � (0; n), then Fn([µ]) = 4
|
616 |
+
�
|
617 |
+
n − (k1d1+k2d2+···+krdr)2
|
618 |
+
k2
|
619 |
+
1d1+k2
|
620 |
+
2d2+···+k2r dr
|
621 |
+
�−1
|
622 |
+
.
|
623 |
+
Proof. We suppose that Mµ = cµI + Dµ, ∥µ∥ = 1. Since tr Mµ = −2⟨µ, µ⟩ = −2, then
|
624 |
+
tr M2
|
625 |
+
µ = tr Mµ(cµI + Dµ) = cµ tr Mµ = −2cµ,
|
626 |
+
and Fn([µ]) = tr Mµ2
|
627 |
+
∥µ∥4 = tr Mµ2 = −2cµ.
|
628 |
+
For (i), we have Dµ = 0, so Mµ = cµI and cµn = tr Mµ = −2. Thus cµ = − 2
|
629 |
+
n. Fn([µ]) = −2cµ = 4
|
630 |
+
n.
|
631 |
+
For (ii), we have Dµ � 0, and cµ = −
|
632 |
+
tr D2
|
633 |
+
µ
|
634 |
+
tr Dµ by Lemma 3.6 and Remark 3.7. Note that
|
635 |
+
Fn([µ]) = tr Mµ2 = tr(cµI + Dµ)2 = c2
|
636 |
+
µn + cµ tr Dµ = 1
|
637 |
+
4Fn([µ])2n − 1
|
638 |
+
2Fn([µ]) tr Dµ,
|
639 |
+
so we have
|
640 |
+
1
|
641 |
+
Fn([µ]) = 1
|
642 |
+
4n −
|
643 |
+
1
|
644 |
+
2Fn([µ]) tr(Dµ) = 1
|
645 |
+
4n + 1
|
646 |
+
4cµ
|
647 |
+
tr Dµ = 1
|
648 |
+
4
|
649 |
+
n − (tr Dµ)2
|
650 |
+
tr D2µ
|
651 |
+
.
|
652 |
+
It follows that Fn([µ]) = 4
|
653 |
+
�
|
654 |
+
n − (k1d1+k2d2+···+krdr)2
|
655 |
+
k2
|
656 |
+
1d1+k2
|
657 |
+
2d2+···+k2r dr
|
658 |
+
�−1
|
659 |
+
.
|
660 |
+
□
|
661 |
+
4.2. The minima of Fn : An → R.
|
662 |
+
Lemma 4.5. Assume [µ] ∈ PVn, then [µ] is a critical point of Fn : PVn → R with type (0; n) if and only
|
663 |
+
if Fn([µ]) = 4
|
664 |
+
n. Moreover, 4
|
665 |
+
n is the minimum value of Fn : PVn → R.
|
666 |
+
Proof. For any 0 � µ ∈ Vn, we use x1, x2, · · · , xn ∈ R denote the eigenvalues of Mµ. Note that tr Mµ =
|
667 |
+
−2∥µ∥2, then we have
|
668 |
+
Fn([µ]) = tr Mµ2
|
669 |
+
∥µ∥4
|
670 |
+
= 4 tr Mµ2
|
671 |
+
(tr Mµ)2 = 4
|
672 |
+
(x2
|
673 |
+
1 + x2
|
674 |
+
2 + · · · + x2
|
675 |
+
n)
|
676 |
+
(x1 + x2 + · · · + xn)2 .
|
677 |
+
It is easy to see that Fn([µ]) ≥ 4
|
678 |
+
n with equality holds if and only if x1 = x2 = · · · = xn. So [µ] is a critical
|
679 |
+
point of Fn : PVn → R with type (0; n) if only if Mµ is a constant multiple of I, if and only Fn attains its
|
680 |
+
minimum value 4
|
681 |
+
n at [µ].
|
682 |
+
□
|
683 |
+
|
THE MOMENT MAP FOR THE VARIETY OF ASSOCIATIVE ALGEBRAS

Theorem 4.6. The functional Fn : An → R attains its minimum value at a point [λ] ∈ GL(n).[µ] if and only if µ is a semisimple associative algebra. In such a case, Fn([λ]) = 4/n.

Proof. Consider the simple associative algebra Mm(C) for an integer m > 0. We endow Mm(C) with the following Hermitian inner product

⟨A, B⟩ := tr AB*,  A, B ∈ Mm(C).  (4.3)

Then {Eij : 1 ≤ i, j ≤ m} is an orthonormal basis, where Eij denote the matrices having 1 in the (i, j)-position and 0 elsewhere. Set ν := (Mm(C), ⟨·, ·⟩). Clearly

(L^ν_A)* = L^ν_{A*},  (R^ν_A)* = R^ν_{A*}

for any A ∈ Mm(C). Thus by (3.7), we have

Mν = 2 Σ_{ij} L^ν_{Eij}(L^ν_{Eij})* − 2 Σ_{ij} (L^ν_{Eij})* L^ν_{Eij} − 2 Σ_{ij} (R^ν_{Eij})* R^ν_{Eij}
   = 2 Σ_{ij} L^ν_{Eij} L^ν_{Eji} − 2 Σ_{ij} L^ν_{Eji} L^ν_{Eij} − 2 Σ_{ij} R^ν_{Eji} R^ν_{Eij}
   = 2 Σ_{ij} L^ν_{EijEji} − 2 Σ_{ij} L^ν_{EjiEij} − 2 Σ_{ij} R^ν_{EijEji}
   = 2m Σ_i L^ν_{Eii} − 2m Σ_i L^ν_{Eii} − 2m Σ_i R^ν_{Eii}
   = 2m L^ν_I − 2m L^ν_I − 2m R^ν_I
   = 2m I_{m²} − 2m I_{m²} − 2m I_{m²}
   = −2m I_{m²}.
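The chain of identities above can be confirmed numerically. The sketch below assumes (3.7) takes the form Mν = 2 Σ_α L_α L_α* − 2 Σ_α L_α* L_α − 2 Σ_α R_α* R_α over an orthonormal basis (the form used in the computation above); under that assumption it checks Mν = −2m·I on Mm(C) for m = 2:

```python
import numpy as np

m = 2
# Orthonormal basis E_ij of M_m(C) under <A, B> = tr(A B*).
basis = []
for i in range(m):
    for j in range(m):
        E = np.zeros((m, m), dtype=complex)
        E[i, j] = 1.0
        basis.append(E)

d = m * m  # dim M_m(C)

def vec(A):
    # Coordinates of A in the basis above (row-major order matches basis order).
    return A.reshape(d)

def op_matrix(f):
    # Matrix of a linear map f on M_m(C): columns are images of basis vectors.
    return np.column_stack([vec(f(E)) for E in basis])

M = np.zeros((d, d), dtype=complex)
for E in basis:
    L = op_matrix(lambda X, E=E: E @ X)   # left multiplication L_E
    R = op_matrix(lambda X, E=E: X @ E)   # right multiplication R_E
    M += 2 * (L @ L.conj().T) - 2 * (L.conj().T @ L) - 2 * (R.conj().T @ R)

# M_nu = -2m * Identity on the m^2-dimensional algebra.
assert np.allclose(M, -2 * m * np.eye(d))
print("M_nu = -2m I holds for m =", m)
```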
So [ν] is a critical point of type (0; m²). Since µ is a complex semisimple associative algebra, by Theorem 2.6, µ is isomorphic to M_{n1}(C) × M_{n2}(C) × · · · × M_{ns}(C) for some positive integers n1, n2, · · · , ns. It follows from Remark 3.4 that there exists a point [λ] ∈ GL(n).[µ] such that [λ] is a critical point of type (0; n). So the functional Fn : An → R attains its minimum value at [λ], and Fn([λ]) = 4/n by Lemma 4.5.

Conversely, assume that Fn : An → R attains its minimum value at a point [λ] ∈ GL(n).[µ]. The first part of the proof implies that Mλ = cλI with cλ < 0. To prove that µ is semisimple, it suffices to show that L = (λ, Cn) is semisimple. Consider the following orthogonal decompositions: (i) L = H ⊕ N, where N is the radical of λ; (ii) N = V ⊕ Z, where Z = {A ∈ N : λ(A, N) = λ(N, A) = 0} is the annihilator of N. Clearly, Z is an ideal of L, and L = H ⊕ V ⊕ Z. Suppose that Z ≠ 0. Let {Hi}, {Vi}, {Zi} be orthonormal bases of H, V, and Z, respectively, and put {Xi} = {Hi} ∪ {Vi} ∪ {Zi}. For any 0 ≠ Z ∈ Z, by hypothesis we have

0 > ⟨MλZ, Z⟩ = 2 Σ_{ij} |⟨λ(Xi, Xj), Z⟩|² − 2 Σ_{ij} |⟨λ(Z, Xi), Xj⟩|² − 2 Σ_{ij} |⟨λ(Xi, Z), Xj⟩|²
  = 2 Σ_{ij} ( |⟨λ(Zi, Hj), Z⟩|² + |⟨λ(Hi, Zj), Z⟩|² ) + α(Z) − 2 Σ_{ij} |⟨λ(Z, Hi), Zj⟩|² − 2 Σ_{ij} |⟨λ(Hi, Z), Zj⟩|²,

where α(Z) = 2 Σ_{ij} |⟨λ(Yi, Yj), Z⟩|² ≥ 0 and {Yi} = {Hi} ∪ {Vi}. This implies

0 > Σ_k ⟨MλZk, Zk⟩ = Σ_k α(Zk) ≥ 0,

which is a contradiction. So Z = 0, and consequently N = 0. Therefore L is a semisimple associative algebra.

This completes the proof of the theorem. □
Remark 4.7. In fact, by the proof of Theorem 4.6, we know that if [µ] ∈ An is such that there exists [λ] ∈ GL(n).[µ] with Mλ negative definite, then µ is a semisimple associative algebra.

4.3. The maxima of Fn : An → R. We say that an algebra λ degenerates to µ, written λ → µ, if µ lies in the closure of GL(n).λ with respect to the usual topology of Vn. The degeneration λ → µ is called a direct degeneration if there are no nontrivial chains λ → ν → µ. The degeneration level of an algebra is the maximum length of a chain of direct degenerations.

Theorem 4.8 ([3]). An n-dimensional associative algebra is of degeneration level one if and only if it is isomorphic to one of the following:
(1) µl: µl(X1, Xi) = Xi, i = 1, · · · , n;
(2) µr: µr(Xi, X1) = Xi, i = 1, · · · , n;
(3) µca: µca(X1, X1) = X2,
where {X1, · · · , Xn} is a basis.

Theorem 4.9. The functional Fn : An → R attains its maximal value at a point [µ] ∈ An, n ≥ 3, if and only if µ is isomorphic to the commutative associative algebra µca. In such a case, Fn([µ]) = 20.

Proof. Assume that Fn : An → R attains its maximal value at a point [µ] ∈ An, n ≥ 3. By Theorem 3.3, we know that [µ] is also a critical point of Fn : PVn → R. It then follows from Theorem 3.5 that Fn|GL(n).[µ] also attains its minimum value at the point [µ]; consequently Fn|GL(n).[µ] is constant, so

GL(n).[µ] = U(n).[µ].  (4.4)

The relation (4.4) implies that the only non-trivial degeneration of µ is 0 ([8, Theorem 5.1]); consequently the degeneration level of µ is 1.

It is easy to see that the critical points [µl], [µr] are both of type (0 < 1; 1, n − 1), and [µca] is of type (3 < 5 < 6; 1, n − 2, 1). By Proposition 4.4, we know

Fn([µca]) = 20 > 4 = Fn([µl]) = Fn([µr]).

So the theorem is proved. □
4.4. The structure of the critical points of Fn : An → R. In the following, we discuss the structure of an arbitrary critical point of Fn : An → R by Theorem 4.2.

Theorem 4.10. Let [µ] be a critical point of Fn : An → R with Mµ = cµI + Dµ of type (k1 < · · · < kr; d1, d2, · · · , dr), where cµ ∈ R and Dµ ∈ Der(µ). Consider the orthogonal decomposition

Cn = A− ⊕ A0 ⊕ A+,

where A−, A0 and A+ denote the direct sums of eigenspaces of Dµ with eigenvalues smaller than zero, equal to zero and larger than zero, respectively. Then the following conditions hold:
(i) ann(µ) ⊂ A+, where ann(µ) is the annihilator of µ.
(ii) A+ ⊂ N(µ), where N(µ) is the radical of µ.
(iii) A− ⊂ (C(µ) ∩ N(µ)) \ ann(µ), where C(µ) is the center of µ.
(iv) (L^µ_A − R^µ_A)* ∈ Der(µ) for any A ∈ A0. So the induced Lie algebra of A0 is reductive.

Proof. For (i), assume that X ∈ ann(µ) and DµX = cX. Then by (3.8),

⟨MµX, X⟩ = 2 Σ_{i,j} |⟨µ(Xi, Xj), X⟩|² ≥ 0.

Since Mµ = cµI + Dµ, we have 0 ≤ ⟨MµX, X⟩ = (cµ + c)⟨X, X⟩. It follows from Lemma 3.6 that c ≥ −cµ > 0. This proves (i).

Statement (ii) is an immediate consequence of (iii) by Remark 2.4. Now, we prove (iii) as follows. Assume that DµX = cX for some c < 0. Since cL^µ_X = [Dµ, L^µ_X] and cR^µ_X = [Dµ, R^µ_X], we have

c tr (L^µ_X − R^µ_X)(L^µ_X − R^µ_X)* = tr [Dµ, (L^µ_X − R^µ_X)](L^µ_X − R^µ_X)*
  = tr [Mµ, (L^µ_X − R^µ_X)](L^µ_X − R^µ_X)*
  = tr Mµ[(L^µ_X − R^µ_X), (L^µ_X − R^µ_X)*].

Noting that (L^µ_X − R^µ_X) ∈ Der(µ), by Corollary 3.2 we have

c tr (L^µ_X − R^µ_X)(L^µ_X − R^µ_X)* ≥ 0.

It follows that L^µ_X − R^µ_X = 0 since c < 0. So X ∈ C(µ). By Remark 2.4, it is easy to see that X ∈ N(µ). Using (i), we conclude A− ⊂ (C(µ) ∩ N(µ)) \ ann(µ). This proves (iii).

For (iv), we first note that

[Dµ, L^µ_A] = L^µ_{DµA},  [Dµ, R^µ_A] = R^µ_{DµA}

for any A ∈ A. If A ∈ A0, we have [Dµ, L^µ_A] = [Dµ, R^µ_A] = 0, and so

tr Mµ[(L^µ_A − R^µ_A), (L^µ_A − R^µ_A)*] = tr (cµI + Dµ)[(L^µ_A − R^µ_A), (L^µ_A − R^µ_A)*]
  = tr Dµ[(L^µ_A − R^µ_A), (L^µ_A − R^µ_A)*]
  = tr [Dµ, (L^µ_A − R^µ_A)](L^µ_A − R^µ_A)*
  = 0.

By Corollary 3.2, it follows that (L^µ_A − R^µ_A)* ∈ Der(µ) since (L^µ_A − R^µ_A) ∈ Der(µ). This proves (iv). □
In the sequel, we give a description of the critical points in terms of those which are nilpotent. Let [λ] be a nilpotent critical point of Fm : Am → R. Define

L(λ) := {Φ ∈ End(Cm) : Φ(λ(X, Y)) = λ(ΦX, Y)},
R(λ) := {Ψ ∈ End(Cm) : Ψ(λ(X, Y)) = λ(X, ΨY)}.

Moreover, we set Γl = {Φ ∈ L(λ) : [Φ, Ψ] = 0, ∀Ψ ∈ R(λ)}, Γr = {Ψ ∈ R(λ) : [Φ, Ψ] = 0, ∀Φ ∈ L(λ)}, and

Γ(λ) := {(Φ, Ψ) ∈ Γl × Γr : λ(·, Φ(·)) = λ(Ψ(·), ·)}.

For any (Φi, Ψi) ∈ Γ(λ), i = 1, 2, we define (Φ1, Ψ1)(Φ2, Ψ2) := (Φ1Φ2, Ψ2Ψ1). Then it follows that Γ(λ) is an associative algebra.

Lemma 4.11. Assume that S ⊂ Γ(λ) is a subalgebra such that (Φ*, Ψ*) ∈ S for any (Φ, Ψ) ∈ S. Then S is a semisimple associative algebra.

Proof. Note that S is an associative algebra of matrices which is closed under conjugate transpose. Define a Hermitian inner product on S by

⟨H1, H2⟩ := tr H1H2* = tr Φ1Φ2* + tr Ψ1Ψ2*,  ∀Hi = (Φi, Ψi) ∈ S, i = 1, 2.

Then ⟨HH1, H2⟩ = ⟨H1, H*H2⟩ and ⟨H1H, H2⟩ = ⟨H1, H2H*⟩ for any H, H1, H2 ∈ S. Let I be an ideal in S and I⊥ the orthogonal complement of I. Then it is easy to see that I⊥ is also an ideal of S. Let S = R ⊕ N, where N is the radical of S and R = N⊥. It follows that R and N are both ideals of S. Moreover, R is semisimple, and N is the annihilator of S (by considering the derived series). Since S is an associative algebra of matrices closed under conjugate transpose, we have HH* = 0 for any H ∈ N, hence H = 0. So N = 0, and S is semisimple. □
Theorem 4.12. Let [λ] be a nilpotent critical point of Fm : Am → R with Mλ = cλI + Dλ of type (k2 < · · · < kr; d2, · · · , dr), where cλ ∈ R and Dλ ∈ Der(λ). Assume that S ⊂ Γ(λ) is a subalgebra of dimension d1 such that (Φ*, Ψ*) ∈ S and [Dλ, Φ] = [Dλ, Ψ] = 0 for any (Φ, Ψ) ∈ S. Consider the following semidirect sum

µ = S ⋉ λ,

where

µ((Φ1, Ψ1) + X1, (Φ2, Ψ2) + X2) := (Φ1Φ2, Ψ2Ψ1) + Φ1(X2) + Ψ2(X1) + X1X2

for any (Φ1, Ψ1), (Φ2, Ψ2) ∈ S and X1, X2 ∈ Cm. Then µ is an associative algebra. If we extend the Hermitian inner product on Cm by setting

⟨H, K⟩ = −(2/cλ)(tr L^S_H (L^S_K)* + tr HK*),  H, K ∈ S,

then [µ] is a critical point of type (0, k2 < · · · < kr; d1, d2, · · · , dr) for the functional Fn : An → R, where n = d1 + m.

Proof. For any H = (Φ, Ψ) ∈ S, we have

L^µ_H = [ L^S_H  0
          0      Φ ],    R^µ_H = [ R^S_H  0
                                   0      Ψ ],

where L^µ_H, R^µ_H (resp. L^S_H, R^S_H) denote the left and right multiplications by H in the algebra µ (resp. S). By Lemma 4.11, we know that S is a semisimple associative algebra. It then follows that there is an orthonormal basis {Hi = (Φi, Ψi)} ⊂ S such that Φi* = −Φi, Ψi* = −Ψi, and L^µ_{Hi}, R^µ_{Hi} are skew-Hermitian for each i. Let {Hi} ∪ {Xi} be an orthonormal basis of Cn = S ⊕ Cm. Then for any H = (Φ, Ψ) ∈ S and X ∈ Cm, we have

⟨MµX, H⟩ = −2 Σ_{i,j} ⟨µ(Xi, X), Xj⟩⟨µ(Xi, H), Xj⟩ − 2 Σ_{i,j} ⟨µ(X, Xi), Xj⟩⟨µ(H, Xi), Xj⟩
  = −2 Σ_{i,j} ⟨λ(Xi, X), Xj⟩⟨Ψ(Xi), Xj⟩ − 2 Σ_{i,j} ⟨λ(X, Xi), Xj⟩⟨Φ(Xi), Xj⟩
  = −2 Σ_i ⟨λ(Xi, X), Ψ(Xi)⟩ − 2 Σ_i ⟨λ(X, Xi), Φ(Xi)⟩
  = −2 tr Ψ*R^λ_X − 2 tr Φ*L^λ_X
  = −2 tr R^λ_{Ψ*(X)} − 2 tr L^λ_{Φ*(X)}
  = 0,

where L^λ_X, R^λ_X denote the left and right multiplications by X in the algebra λ, respectively, and the last two equalities follow from the facts that λ is nilpotent and (Φ*, Ψ*) ∈ S. Moreover, since Φi* = −Φi and Ψi* = −Ψi for each i, we have [Φi, Φi*] = 0 and [Ψi, Ψi*] = 0. So by (3.8) we have

⟨MµX, Y⟩ = 2 Σ_{i,j} ⟨µ(Hi, Xj), X⟩⟨µ(Hi, Xj), Y⟩ + 2 Σ_{i,j} ⟨µ(Xi, Hj), X⟩⟨µ(Xi, Hj), Y⟩
  + 2 Σ_{i,j} ⟨µ(Xi, Xj), X⟩⟨µ(Xi, Xj), Y⟩ − 2 Σ_{i,j} ⟨µ(Hi, X), Xj⟩⟨µ(Hi, Y), Xj⟩
  − 2 Σ_{i,j} ⟨µ(Xi, X), Xj⟩⟨µ(Xi, Y), Xj⟩ − 2 Σ_{i,j} ⟨µ(X, Hi), Xj⟩⟨µ(Y, Hi), Xj⟩
  − 2 Σ_{i,j} ⟨µ(X, Xi), Xj⟩⟨µ(Y, Xi), Xj⟩
  = ⟨MλX, Y⟩ + 2 Σ_i ⟨[Φi, Φi*](X), Y⟩ + 2 Σ_i ⟨[Ψi, Ψi*](X), Y⟩
  = ⟨MλX, Y⟩

for any X, Y ∈ Cm. Therefore Mµ|Cm = Mλ = cλI + Dλ. On the other hand, noting that L^µ_{Hi} and R^µ_{Hi} are skew-Hermitian for each i, for any H = (Φ, Ψ) ∈ S we have

⟨MµH, H⟩ = 2 Σ_{i,j} ⟨µ(Hi, Hj), H⟩⟨µ(Hi, Hj), H⟩
  − 2 Σ_{i,j} ⟨µ(Hi, H), Hj⟩⟨µ(Hi, H), Hj⟩ − 2 Σ_{i,j} ⟨µ(Xi, H), Xj⟩⟨µ(Xi, H), Xj⟩
  − 2 Σ_{i,j} ⟨µ(H, Hi), Hj⟩⟨µ(H, Hi), Hj⟩ − 2 Σ_{i,j} ⟨µ(H, Xi), Xj⟩⟨µ(H, Xi), Xj⟩
  = −2(tr L^S_H (L^S_H)* + tr ΦΦ* + tr ΨΨ*)
  = −2(tr L^S_H (L^S_H)* + tr HH*)
  = cλ⟨H, H⟩.

So Mµ = cµI + Dµ, where cµ = cλ and

Dµ = [ 0  0
       0  Dλ ] ∈ Der(µ).

This completes the proof. □
Remark 4.13. Let the notation be as in Theorem 4.10. If (L^µ_A)* ∈ {L^µ_A : A ∈ A0} and (R^µ_A)* ∈ {R^µ_A : A ∈ A0} for any A ∈ A0, then it follows from a proof similar to that of Lemma 4.11 that A0 is a semisimple associative algebra. Moreover, the radical of [µ] corresponds to a critical point of type (k1 < · · · < k̂s < · · · < kr; d1, · · · , d̂s, · · · , dr) by Theorem 4.12, where ks = 0.
5. Examples

In this section, we classify the critical points of Fn : An → R for n = 2 and 3, respectively. It turns out that every 2-dimensional associative algebra is isomorphic to a critical point of F2, and there exists only one 3-dimensional associative algebra which is not isomorphic to any critical point of F3. Finally, based on the discussion in the previous sections, we collect some natural and interesting questions.

For the reader's convenience, we recall the notation in [2]. Let {e1, e2, · · · , en} be a basis of Cn. Define the bilinear maps ψ^{i,j}_k : Cn × Cn → Cn by

ψ^{i,j}_k(em, en) = δ^i_m δ^j_n ek.

It follows that any algebra can be expressed in the form d = Σ_{ijk} c^k_{ij} ψ^{i,j}_k, where the c^k_{ij} ∈ C are the structure constants.
5.1. Two-dimensional case. The classification of two-dimensional associative algebras can be found in [2, TABLE 1]. We give the classification of the critical points of F2 : A2 → R as follows.

TABLE I. Two-dimensional associative algebras, critical types and critical values.

Multiplication relation                      | Critical type  | Critical value
d1 = ψ^{1,1}_1                               | (0 < 1; 1, 1)  | 4
d2 = ψ^{1,1}_1 + ψ^{1,2}_2                   | (0 < 1; 1, 1)  | 4
d3 = ψ^{1,1}_1 + ψ^{2,1}_2                   | (0 < 1; 1, 1)  | 4
d4 = ψ^{1,1}_1 + ψ^{2,2}_2                   | (0; 2)         | 2
d5 = ψ^{1,1}_2                               | (1 < 2; 1, 1)  | 20
d6 = ψ^{1,1}_1 + ψ^{1,2}_2 + ψ^{2,1}_2       | (0 < 1; 1, 1)  | 4
Indeed, endow these algebras with the Hermitian inner product ⟨·, ·⟩ so that {e1, e2} is an orthonormal basis; then it is easy to obtain TABLE I. For example, the multiplication relation of µ := (d6, ⟨·, ·⟩) is given by e1e1 = e1, e1e2 = e2, e2e1 = e2. With respect to the given orthonormal basis {e1, e2}, the left and right multiplications of µ are represented by

L^µ_{e1} = [ 1 0 ; 0 1 ],  L^µ_{e2} = [ 0 0 ; 1 0 ],  R^µ_{e1} = [ 1 0 ; 0 1 ],  R^µ_{e2} = [ 0 0 ; 1 0 ].

It follows from (3.7) that

Mµ = [ −6 0 ; 0 0 ].

Set cµ := tr Mµ² / tr Mµ; then cµ = −6. It follows that Mµ = cµI + Dµ, where

Dµ = [ 0 0 ; 0 6 ]

is clearly a derivation of µ. So [µ] is a critical point of F2 : A2 → R with the critical type (0 < 1; 1, 1) and F2([µ]) = 4.
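Assuming, as in the proof of Theorem 4.6, that (3.7) reads Mµ = 2 Σᵢ Lᵢ Lᵢ* − 2 Σᵢ Lᵢ* Lᵢ − 2 Σᵢ Rᵢ* Rᵢ over an orthonormal basis, the computation for d6 can be reproduced numerically (a sketch, not part of the paper):

```python
import numpy as np

# Multiplication of d6 on the orthonormal basis {e1, e2}:
# e1e1 = e1, e1e2 = e2, e2e1 = e2 (all other products zero).
L1 = np.array([[1.0, 0.0], [0.0, 1.0]])  # left multiplication by e1
L2 = np.array([[0.0, 0.0], [1.0, 0.0]])  # left multiplication by e2
R1 = np.array([[1.0, 0.0], [0.0, 1.0]])  # right multiplication by e1
R2 = np.array([[0.0, 0.0], [1.0, 0.0]])  # right multiplication by e2

M = np.zeros((2, 2))
for L, R in [(L1, R1), (L2, R2)]:
    M += 2 * (L @ L.T) - 2 * (L.T @ L) - 2 * (R.T @ R)

assert np.allclose(M, np.diag([-6.0, 0.0]))   # M_mu = diag(-6, 0)
c = np.trace(M @ M) / np.trace(M)             # c_mu = tr(M^2)/tr(M)
assert abs(c + 6.0) < 1e-12                   # c_mu = -6
norm4 = (np.trace(M) / -2.0) ** 2             # ||mu||^4, from tr M = -2||mu||^2
F2 = np.trace(M @ M) / norm4                  # F_2([mu]) = tr(M^2)/||mu||^4
assert abs(F2 - 4.0) < 1e-12
print("M =", M.diagonal(), " c_mu =", c, " F2 =", F2)
```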
5.2. Three-dimensional case. The complete classification of three-dimensional associative algebras can be found in [2, TABLE 2]. The following table gives the classification of the critical points of F3 : A3 → R.

TABLE II. Three-dimensional associative algebras, critical types and critical values.

Multiplication relation                                              | Critical type         | Critical value
d1 = ψ^{1,1}_1                                                       | (0 < 1; 1, 2)         | 4
d2 = ψ^{1,1}_1 + ψ^{2,2}_3                                           | (0 < 1 < 2; 1, 1, 1)  | 10/3
d3 = ψ^{1,1}_1 + ψ^{1,3}_3                                           | (0 < 1; 1, 2)         | 4
d4 = ψ^{1,1}_1 + ψ^{3,1}_3                                           | (0 < 1; 1, 2)         | 4
d5 = ψ^{1,1}_1 + ψ^{1,3}_3 + ψ^{3,1}_3                               | (0 < 1; 1, 2)         | 4
d6 = ψ^{1,1}_1 + ψ^{3,3}_3                                           | (0 < 1; 2, 1)         | 2
d7 = ψ^{1,1}_1 + ψ^{2,1}_2 + ψ^{1,3}_3                               | (0 < 1; 1, 2)         | 4
d8 = ψ^{1,1}_1 + ψ^{2,1}_2 + ψ^{3,1}_3                               | (0 < 1; 1, 2)         | 4
d9 = ψ^{1,1}_1 + ψ^{2,1}_2 + ψ^{1,3}_3 + ψ^{3,1}_3                   | (0 < 1; 1, 2)         | 4
d10 = ψ^{1,1}_1 + ψ^{2,1}_2 + ψ^{3,3}_3                              | (0 < 1; 2, 1)         | 2
d11 = ψ^{1,1}_1 + ψ^{2,2}_2 + ψ^{2,3}_3                              | (0 < 1; 2, 1)         | 2
d12 = ψ^{1,1}_1 + ψ^{2,2}_2 + ψ^{2,3}_3 + ψ^{3,2}_3                  | (0 < 1; 2, 1)         | 2
d13 = ψ^{1,1}_1 + ψ^{2,2}_2 + ψ^{2,3}_3 + ψ^{3,1}_3                  | (0 < 1; 2, 1)         | 2
d14 = ψ^{1,1}_1 + ψ^{2,2}_2 + ψ^{3,3}_3                              | (0; 3)                | 4/3
d15 = ψ^{1,1}_2                                                      | (3 < 5 < 6; 1, 1, 1)  | 20
d16 = ψ^{1,1}_2 + ψ^{1,2}_3 + ψ^{2,1}_3                              | (1 < 2 < 3; 1, 1, 1)  | 20/3
d17 = ψ^{1,1}_1 + ψ^{1,1}_2 + ψ^{1,2}_2 + ψ^{2,1}_2 + ψ^{1,3}_3      | (0 < 1; 1, 2)         | 4
d18 = ψ^{1,1}_1 + ψ^{1,1}_2 + ψ^{1,2}_2 + ψ^{2,1}_2 + ψ^{1,3}_3 + ψ^{3,1}_3 | (0 < 1; 1, 2) | 4
d19 = ψ^{3,3}_3 + ψ^{1,1}_2 + ψ^{1,3}_1 + ψ^{3,1}_1 + ψ^{2,3}_2 + ψ^{3,2}_2 | (0 < 1 < 2; 1, 1, 1) | 10/3
d20 = ψ^{1,1}_1 + ψ^{1,2}_2 + ψ^{1,3}_3                              | (0 < 1; 1, 2)         | 4
d21 = ψ^{1,1}_3 + ψ^{1,2}_3 − ψ^{2,1}_3                              | −                     | −
d22 = xψ^{1,2}_3 + yψ^{2,1}_3                                        | (1 < 2; 2, 1)         | 12
Indeed, endow the algebras with the Hermitian inner product ⟨·, ·⟩ so that {e1, e2, e3} is an orthonormal basis; it is then easy to obtain all cases in TABLE II except for d2, d10, d11, d12, d13, d17, d18, d21. For the cases d2, d10, d11, d12, the result follows from Remark 3.4 and TABLE I. For the cases d13, d17, d18, it follows from [5] that d13 ≅ U^3_1, d17 ≅ W^3_10 and d18 ≅ U^3_0, where U^3_1, W^3_10 and U^3_0 are defined by

U^3_1 : ψ^{1,1}_1 + ψ^{3,3}_1 + ψ^{1,2}_2 + ψ^{2,1}_2 + ψ^{2,3}_2 + ψ^{1,3}_3 + ψ^{3,1}_3 − ψ^{3,2}_3,
W^3_10 : ψ^{1,2}_1 + ψ^{2,1}_1 + ψ^{2,2}_2 + ψ^{2,3}_3,
U^3_0 : ψ^{1,1}_2 + ψ^{1,2}_2 + ψ^{2,1}_2 + ψ^{1,3}_3 + ψ^{3,1}_3.

Endow U^3_1, W^3_10 and U^3_0 with the Hermitian inner product ⟨·, ·⟩ so that {e1, e2, e3} is an orthonormal basis; then it is easy to obtain the corresponding critical types and values for d13, d17, d18.
In the sequel, we follow a procedure similar to that in [6, 16] to classify all Hermitian inner products on d21, and then show that for any Hermitian inner product ⟨·, ·⟩ on d21, (d21, ⟨·, ·⟩) cannot be a critical point of F3. First, note that the multiplication relation of d21 is given as follows:

e1e1 = e3,  e1e2 = e3,  e2e1 = −e3.

Denote by ⟨·, ·⟩0 the Hermitian inner product on d21 such that {e1, e2, e3} is orthonormal. With respect to the basis {e1, e2, e3}, the automorphism group of d21 is given by

Aut(d21) = { [ a 0 0 ; b a 0 ; c d a² ] } ⊂ GL(3, C),  (5.1)

where 0 ≠ a ∈ C, and b, c, d ∈ C are arbitrary.

Lemma 5.1. For any Hermitian inner product ⟨·, ·⟩ on d21, there exist k > 0 and φ ∈ Aut(d21) such that {aφe1, φe2, φe3} is orthonormal with respect to k⟨·, ·⟩, where a > 0.

Proof. It suffices to prove that

U = {diag(a, 1, 1) : a > 0} ⊂ GL(3, C)

is a set of representatives for the action of C^×Aut(d21) on M, the space of all Hermitian inner products on d21, which can be identified with the homogeneous space GL(3, C)/U(3) at the base point ⟨·, ·⟩0 ∈ M (see [6]). Indeed, since

∪_{g∈U} C^×Aut(d21) · g · U(3) = GL(3, C),

it follows that U is a set of representatives. For any Hermitian inner product ⟨·, ·⟩ on d21, we know that there exists g0 ∈ U such that

⟨·, ·⟩ ∈ (C^×Aut(d21)).(g0.⟨·, ·⟩0).

Hence there exist c ∈ C^× and φ ∈ Aut(d21) such that

⟨·, ·⟩ = (cφ).(g0.⟨·, ·⟩0) = (cφg0).⟨·, ·⟩0.

Put k = |c|²; then

k⟨·, ·⟩ = k⟨(cφg0)⁻¹(·), (cφg0)⁻¹(·)⟩0 = kc⁻¹c̄⁻¹⟨(φg0)⁻¹(·), (φg0)⁻¹(·)⟩0 = ⟨(φg0)⁻¹(·), (φg0)⁻¹(·)⟩0.

Since g0 ∈ U, we have g0 = diag(a, 1, 1) for some a > 0. It follows that {aφe1, φe2, φe3} is orthonormal with respect to k⟨·, ·⟩. □
Proposition 5.2. For any Hermitian inner product ⟨·, ·⟩ on d21, (d21, ⟨·, ·⟩) cannot be a critical point of F3 : A3 → R.

Proof. Assume that ⟨·, ·⟩ is a Hermitian inner product on d21 such that (d21, ⟨·, ·⟩) is a critical point of F3 : A3 → R. Then the critical type is necessarily (1 < 2; 2, 1) by Theorem 4.1 and (5.1). Moreover, for the Hermitian inner product ⟨·, ·⟩ on d21, by Lemma 5.1 we know that there exist k > 0 and φ ∈ Aut(d21) such that {x1 = aφe1, x2 = φe2, x3 = φe3} is orthonormal with respect to k⟨·, ·⟩, where a > 0. With respect to the basis {x1, x2, x3}, the multiplication relation of d21 is given by

x1x1 = a²x3,  x1x2 = ax3,  x2x1 = −ax3.

By (3.7), Lemma 3.6 and a straightforward calculation, it follows that the critical type is given, up to a positive multiple, by the triple

(3a⁴ + 6a² + 8, 5a⁴ + 10a² + 8, 2(3a⁴ + 8a² + 8)),

which is never of type (1 < 2; 2, 1) for any a > 0. This is a contradiction by Theorem 4.1, and the proposition is therefore proved. □
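The "straightforward calculation" in the proof above can be checked numerically. The helper moment_matrix below is hypothetical and assumes the same form of (3.7) as in the proof of Theorem 4.6 (Mµ = 2ΣLL* − 2ΣL*L − 2ΣR*R over an orthonormal basis); for several values of a it confirms that the eigenvalues of Dµ = Mµ − cµI are a positive multiple of the stated triple, whose first two entries never coincide:

```python
import numpy as np

def moment_matrix(a):
    # d21 in the orthonormal basis {x1, x2, x3}:
    # x1x1 = a^2 x3, x1x2 = a x3, x2x1 = -a x3 (other products zero).
    L = [np.zeros((3, 3)) for _ in range(3)]
    R = [np.zeros((3, 3)) for _ in range(3)]
    L[0][2, 0], L[0][2, 1] = a * a, a      # L_{x1}: x1 -> a^2 x3, x2 -> a x3
    L[1][2, 0] = -a                        # L_{x2}: x1 -> -a x3
    R[0][2, 0], R[0][2, 1] = a * a, -a     # R_{x1}: x1 -> a^2 x3, x2 -> -a x3
    R[1][2, 0] = a                         # R_{x2}: x1 -> a x3
    M = np.zeros((3, 3))
    for Li, Ri in zip(L, R):
        M += 2 * (Li @ Li.T) - 2 * (Li.T @ Li) - 2 * (Ri.T @ Ri)
    return M

for a in [0.5, 1.0, 2.0, 3.7]:
    M = moment_matrix(a)
    c = np.trace(M @ M) / np.trace(M)           # c_mu = tr(M^2)/tr(M)
    d1, d2, d3 = np.diag(M - c * np.eye(3))     # D_mu eigenvalues (M is diagonal)
    t = (3*a**4 + 6*a**2 + 8, 5*a**4 + 10*a**2 + 8, 2*(3*a**4 + 8*a**2 + 8))
    # The D_mu eigenvalues are a common positive multiple of the claimed triple ...
    assert np.allclose((d1 / t[0], d2 / t[1], d3 / t[2]), d1 / t[0])
    # ... whose first two entries are distinct, so the type is never (1 < 2; 2, 1).
    assert t[0] < t[1] < t[2]
print("triple confirmed; first two eigenvalues always distinct")
```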
5.3. Comments. By the previous discussion, we know that the critical types of Fn : An → R, n = 2, 3, are necessarily nonnegative. So it is natural to ask the following question: let [µ] ∈ An be a critical point of Fn : An → R with Mµ = cµI + Dµ for some cµ ∈ R and Dµ ∈ Der(µ). Are all the eigenvalues of Dµ necessarily nonnegative?

On the other hand, should the above question have a negative answer, it would also be interesting to construct or classify the critical points [µ] of Fn : An → R such that Dµ has negative eigenvalues. We note that 2-step nilpotent Lie algebras are automatically associative algebras, so it follows from [13, Example 1] that there exist associative algebras whose Nikolayevsky derivations do admit negative eigenvalues.
6. Statements and Declarations

The authors declare that there is no conflict of interest.

References

[1] Böhm, C.; Lafuente, R.: Immortal homogeneous Ricci flows, Invent. Math. 212 (2018), 461–529.
[2] Fialowski, A.; Penkava, M.: The moduli space of 3-dimensional associative algebras, Comm. Algebra 37(10) (2009), 3666–3685.
[3] Khudoyberdiyev, A.; Omirov, B.: The classification of algebras of level one, Linear Algebra Appl. 439(11) (2013), 3460–3463.
[4] Kirwan, F.: Momentum maps and reduction in algebraic geometry, Differential Geom. Appl. 9 (1998), 135–172.
[5] Kobayashi, Y.; Shirayanagi, K.; Tsukada, M.; Takahasi, S.: A complete classification of three-dimensional algebras over R and C (visiting old, learn new), Asian-Eur. J. Math. 14 (2021).
[6] Kubo, A.; Onda, K.; Taketomi, Y.; Tamaru, H.: On the moduli spaces of left-invariant pseudo-Riemannian metrics on Lie groups, Hiroshima Math. J. 46 (2016), 357–374.
[7] Lauret, J.: On the moment map for the variety of Lie algebras, J. Funct. Anal. 202 (2003), 392–423.
[8] Lauret, J.: Degenerations of Lie algebras and geometry of Lie groups, Differential Geom. Appl. 18 (2003), 177–194.
[9] Lauret, J.: Einstein solvmanifolds are standard, Ann. of Math. 172 (2010), 1859–1877.
[10] Lauret, J.: Ricci soliton solvmanifolds, J. Reine Angew. Math. 650 (2011), 1–21.
[11] Lauret, J.: Finding solitons, Notices Amer. Math. Soc. 67 (2020), 647–657.
[12] Ness, L.: A stratification of the null cone via the moment map, Amer. J. Math. 106 (1984), 1281–1329 (with an appendix by D. Mumford).
[13] Nikolayevsky, Y.: Nilradicals of Einstein solvmanifolds, arXiv:math/0612117v1 [math.DG] (2006).
[14] Nikolayevsky, Y.: Einstein solvmanifolds and the pre-Einstein derivation, Trans. Amer. Math. Soc. 363 (2011), 3935–3958.
[15] Pierce, R. S.: Associative Algebras, Springer-Verlag, New York, Heidelberg, Berlin, 1982.
[16] Taketomi, Y.; Tamaru, H.: On the nonexistence of left-invariant Ricci solitons: a conjecture and examples, Transform. Groups 23 (2018), 257–270.
[17] Zhang, H.; Chen, Z.; Li, L.: The moment map for the variety of 3-Lie algebras, J. Funct. Anal. 283 (2022), No. 11, Article ID 109683.

(Hui Zhang) School of Mathematics, Southeast University, Nanjing 210096, P. R. China
Email address: [email protected]

(Zaili Yan) School of Mathematics and Statistics, Ningbo University, Ningbo, Zhejiang Province, 315211, People's Republic of China
Email address: [email protected]
2tFLT4oBgHgl3EQfrC9d/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
4NAyT4oBgHgl3EQf1_nb/content/2301.00745v1.pdf
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:48f940b5d31c0b2f0eb31b46aa8bccdc475b04f480f9058b23d6b61cd476d177
|
3 |
+
size 302817
|
4NAyT4oBgHgl3EQf1_nb/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:ac7dc53f312cc9b8f03464845b313b0e1fd027c7d01d0c6702b63886e90f9878
|
3 |
+
size 4325421
|
4NAyT4oBgHgl3EQf1_nb/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:3eddfd4c00e13b2e580370f6891bd302620722001487fa4cb9239c00f34f86fb
|
3 |
+
size 170819
|
4dAyT4oBgHgl3EQfpPgc/content/tmp_files/2301.00520v1.pdf.txt
ADDED
@@ -0,0 +1,1341 @@
Quantum Annealing vs. QAOA: 127 Qubit Higher-Order Ising Problems on NISQ Computers

Elijah Pelofske∗1, Andreas Bärtschi†1, and Stephan Eidenbenz1
1CCS-3 Information Sciences, Los Alamos National Laboratory

Abstract

Quantum annealing (QA) and the Quantum Alternating Operator Ansatz (QAOA) are both heuristic quantum algorithms intended for sampling optimal solutions of combinatorial optimization problems. In this article we implement a rigorous direct comparison between QA on D-Wave hardware and QAOA on IBMQ hardware. The studied problems are instances of a class of Ising problems, with variable assignments of +1 or −1, that contain cubic ZZZ interactions (higher order terms) and match both the native connectivity of the Pegasus topology D-Wave chips and the heavy hexagonal lattice of the IBMQ chips. The novel QAOA implementation on the heavy hexagonal lattice has a CNOT depth of 6 per round and allows for usage of an entire heavy hexagonal lattice. Experimentally, QAOA is executed on an ensemble of randomly generated Ising instances with a grid search over 1 and 2 round angles using all 127 programmable superconducting transmon qubits of ibm washington. The error suppression technique digital dynamical decoupling (DDD) is also tested on all QAOA circuits. QA is executed on the same Ising instances with the programmable superconducting flux qubit devices D-Wave Advantage system4.1 and Advantage system6.1 using modified annealing schedules with pauses. We find that QA outperforms QAOA on all problem instances. We also find that DDD enables 2-round QAOA to outperform 1-round QAOA, which is not the case without DDD.

1 Introduction

Quantum annealing (QA) in the transverse field Ising model (TFIM) is an analog computation technology which utilizes quantum fluctuations in order to search for ground state solutions of a problem Hamiltonian [1–4]. D-Wave quantum annealers are programmable hardware implementations of quantum annealing which use superconducting flux qubits.

The Quantum Alternating Operator Ansatz (QAOA) is a hybrid quantum-classical algorithm for sampling combinatorial optimization problems [5, 6], the quantum component of which can be instantiated with a programmable gate-based universal quantum computer. The quantum approximate optimization algorithm [7] was the first variational algorithm of this type, which was then generalized to the quantum alternating operator ansatz algorithm [5].

QAOA is effectively a Trotterization of the Quantum Adiabatic Algorithm, and is overall similar to quantum annealing. In particular, both algorithms address combinatorial optimization problems. The exact characteristics of how both QA and QAOA will scale to large system sizes are currently not fully understood, in particular because quantum hardware is still in the NISQ era [8–10]. For example, there is evidence that QAOA may be more difficult for classical computers to simulate than quantum annealing, which could make it a viable candidate for quantum advantage [11]. Therefore it is of interest to investigate differences between QAOA and QA, and determine how these algorithms will scale [12–17]. There have been experimental QAOA implementations which used up to 27 qubits [18] and 23 qubits [19]. There have also been QAOA experiments with circuit depth up to 159 [20] and 148 [21].

The contributions of this article are as follows:

1. We provide a direct comparison between QAOA and Quantum Annealing in terms of experiments on D-Wave and IBMQ hardware. This comparison uses a comparable parameter search space for QA and QAOA, uses no minor embedding for quantum annealing, and uses short depth QAOA circuits, thus providing a fair comparison of the two algorithms. We show that QAOA is better than random sampling, and quantum annealing clearly outperforms QAOA.

∗Email: [email protected]
†Email: [email protected]

arXiv:2301.00520v1 [quant-ph] 2 Jan 2023
Device name          | Topology/chip name        | Available qubits | Available couplers/CNOTs | Computation type
Advantage system4.1  | Pegasus P16               | 5627             | 40279                    | QA
Advantage system6.1  | Pegasus P16               | 5616             | 40135                    | QA
ibm washington       | Eagle r1 heavy-hexagonal  | 127              | 142                      | Universal gate-model

Table 1: NISQ hardware summary at the time the experiments were executed. The hardware yield (e.g., the number of available qubits or two qubit interactions) for all of these devices can be less than the logical lattice because of hardware defects, and can also change over time if device calibration changes.
2. The QAOA algorithm we present is tailored for short depth circuit construction on the heavy hexagonal lattice, therefore allowing full usage of any heavy hexagonal topology quantum processor in the future. We use all 127 qubits of the ibm washington chip in order to execute the largest QAOA circuit, in terms of qubits, to date.

3. The problem instances that are used to compare quantum annealing and QAOA are specifically constructed to include higher order terms, specifically three variable (cubic) terms. QAOA can directly implement higher order terms, and quantum annealing requires order reduction using auxiliary variables to implement these higher order terms. This is the largest experimental demonstration of QAOA with higher order terms to date.

4. In order to mitigate errors when executing the QAOA circuits, we utilize digital dynamical decoupling. This is the largest usage of dynamical decoupling in terms of qubit system size to date, and the results show that digital dynamical decoupling improves performance for two round QAOA, suggesting that it will be useful for computations with large numbers of qubits in the noisy regime.

In Section 2 the QAOA and QA hardware implementations are detailed. Section 3 details the experimental results and how the two algorithms compare. Section 4 concludes with what the results indicate and future research directions. The figures in this article are generated using matplotlib [22, 23], and Qiskit [24] in Python 3.

2 Methods

The problem instances are defined in Section 2.1. In Section 2.2 the QAOA circuit algorithm and hardware parameters are defined. In Section 2.3 the quantum annealing implementation is defined.

2.1 Problem instances

The NISQ computers which are used in this comparison are detailed in Table 1; the clear difference between the D-Wave quantum annealers and ibm washington is the number of qubits that are available. The additional qubits available on the quantum annealers will allow us to embed multiple problem instances onto the chips. The current IBMQ devices have a graph topology referred to as the heavy-hexagonal lattice [25]. Therefore, for a direct QAOA and QA comparison we want to be able to create QAOA circuits which match both the logical heavy-hexagonal lattice and the quantum annealer graph topology of Pegasus. For this direct comparison we target D-Wave quantum annealers with Pegasus graph hardware [26, 27] connectivities. The two current D-Wave quantum annealers with Pegasus hardware graphs have chip id names Advantage system6.1 and Advantage system4.1. The goal for this direct comparison is that ideally we want problems which can be instantiated on all three of the devices in Table 1. In particular, we want these implementations to not be unfairly costly in terms of implementation overhead. For example, we do not want to introduce unnecessary qubit swapping in the QAOA circuit, because that would introduce larger circuit depths and therefore more decoherence in the computation. We also do not want to introduce unnecessary minor-embedding in the problems for quantum annealers.

The other property of these problem instances that is of interest is the introduction of higher order terms, specifically cubic ZZZ interactions [28], also referred to as multi-body interactions [29], in addition to random linear and quadratic terms. These higher order terms require both QAOA and QA to handle these higher order variable interactions, which is an additional test on the capability of both algorithms. QAOA can naturally handle
[Figure 1 graph renderings: two copies of the 127-node (qubits 0–126) ibm washington connectivity graph; node and edge labels are omitted here.]

Figure 1: Left: ibm washington graph connectivity, where qubits are connected by CNOT (also referred to as cx) gates. The ideal lattice is called the heavy-hexagonal lattice. Note that there are two missing graph edges from the lattice, between qubits 8-9 and 109-114. The total number of qubits (nodes) is 127. The edges of the graph are three colored (red, blue, and green) such that no node shares two or more edges with the same color. The node colorings of light and dark gray show that the heavy hexagonal lattice is bipartite (meaning it can be partitioned into two disjoint sets). The three edge coloring is consistent with the QAOA circuit construction in Figure 2. Right: Example of a single random problem instance with cubic terms (see Eq. (1)) on the ibm washington graph. The linear and quadratic terms are shown using two distinct colors (red and green). The nodes and edges colored red denote a weight of −1 and the nodes and edges colored green denote a weight of +1. The cubic terms are represented by ovals around the three qubits which define the cubic variable interactions. Like the linear and quadratic terms, the color of the oval representing the cubic terms represents the sign of the weight on the terms, where green is +1 and red is −1.
higher order terms [30]. Implementing high order terms with QA requires introducing auxiliary variables in order to perform order reduction to get a problem structure that is comprised of only linear and quadratic terms, so that it can be implemented on the hardware, but whose optimal solutions match the optimal solutions of the original high order polynomial [3, 31–34].
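To make the order reduction concrete, here is a minimal sketch (not the paper's specific embedding from Figure 3): a cubic spin term c·s1·s2·s3 is rewritten over Boolean variables, the single cubic Boolean monomial is replaced by an auxiliary variable w under a Rosenberg penalty, and a brute-force check confirms that minimizing over w reproduces the cubic term for every spin assignment. The function names and the penalty weight M are illustrative choices.

```python
from itertools import product

def cubic_spin(c, s1, s2, s3):
    # Original higher-order Ising term c * s1 * s2 * s3 on spins +/-1.
    return c * s1 * s2 * s3

def reduced_quadratic(c, x1, x2, x3, w, M=9):
    # Quadratized Boolean polynomial: with s = 2x - 1, the spin product
    # expands so that its only cubic Boolean term is 8*x1*x2*x3; that term
    # is replaced by 8*w*x3, and a Rosenberg penalty (zero iff w = x1*x2,
    # otherwise >= 1) forces the auxiliary variable to the product value.
    penalty = x1 * x2 - 2 * x1 * w - 2 * x2 * w + 3 * w
    body = 8 * w * x3 - 4 * (x1 * x2 + x1 * x3 + x2 * x3) + 2 * (x1 + x2 + x3) - 1
    return c * body + M * penalty

def check_reduction(c):
    # Minimizing over the auxiliary w must reproduce the cubic term exactly.
    for s1, s2, s3 in product((-1, 1), repeat=3):
        x1, x2, x3 = (s1 + 1) // 2, (s2 + 1) // 2, (s3 + 1) // 2
        reduced = min(reduced_quadratic(c, x1, x2, x3, w) for w in (0, 1))
        assert reduced == cubic_spin(c, s1, s2, s3)
    return True

print(check_reduction(+1) and check_reduction(-1))  # True
```

The actual embeddings used in the experiments need two slack qubits per cubic term (Figure 3); this gadget uses one auxiliary per substituted product and is shown only to illustrate why the reduced quadratic problem shares its optima with the cubic one.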
Taking each of these characteristics into account, we create a class of random problems which follow the native device connectivities in Table 1. The problem instances we will be considering are Ising problems defined on the hardware connectivity graph of the heavy hexagonal lattice of the device, which for these experiments will be ibm washington.

C(x) = \sum_{v \in N} c_v \cdot x_v + \sum_{(i,j) \in E} c_{i,j} \cdot x_i \cdot x_j + \sum_{l \in D} c_l \cdot x_l \cdot x_{n_1(l)} \cdot x_{n_2(l)}    (1)

Eq. (1) defines the class of problem Isings as follows. N is the set of qubits, or variables, that exist on the heavy hexagonal layout topology. E is the edge set of all two qubit (CNOT) gates that can allow two qubits, indexed as i and j, to interact. Any heavy hexagonal lattice is a bipartite graph with vertices V = V2 ∪ V3, where V2 consists of vertices with a maximum degree of 2, and V3 consists of vertices with a maximum degree of 3. D is the set of vertices in V2 which all have degree exactly equal to 2. n1 is a function which gives the qubit (variable) index of the first of the two neighbors of a degree-2 node, and n2 provides the qubit (variable) index of the second of the two neighbors of any degree-2 node. cv, ci,j, and cl are all functions representing the random selection of the linear, quadratic, and cubic coefficients, respectively. These coefficients could be drawn from any distribution - in this case we draw the coefficients from {+1, −1}, each with probability 0.5. The decision variables are xi, where the possible variable states are the spins −1 or +1. Combined, any vector of variable states x can be evaluated given this objective function formulation of Eq. (1).
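Evaluating Eq. (1) for a given spin vector is a direct sum over the three term groups. The sketch below does this for a hypothetical three-variable toy instance (the dictionary layout and the toy path A-B-C, with B as the degree-2 middle node, are illustrative assumptions, not one of the actual 127-variable instances):

```python
def ising_cost(spins, linear, quadratic, cubic):
    # Eq. (1): linear + quadratic + cubic (three-body) contributions.
    # spins: {variable: +1/-1}; linear: {v: c_v}; quadratic: {(i, j): c_ij};
    # cubic: {(l, n1, n2): c_l} with l the degree-2 node, n1/n2 its neighbors.
    cost = sum(c * spins[v] for v, c in linear.items())
    cost += sum(c * spins[i] * spins[j] for (i, j), c in quadratic.items())
    cost += sum(c * spins[l] * spins[a] * spins[b]
                for (l, a, b), c in cubic.items())
    return cost

# Toy 3-variable path A - B - C, coefficients drawn from {+1, -1}.
linear = {"A": 1, "B": -1, "C": -1}
quadratic = {("A", "B"): -1, ("B", "C"): 1}
cubic = {("B", "A", "C"): 1}
print(ising_cost({"A": 1, "B": -1, "C": 1}, linear, quadratic, cubic))  # 0
```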
The heavy hexagonal topology of ibm washington, along with an overlay showing one of the random problem instances with cubic terms defined on ibm washington, is shown in Figure 1. Each term coefficient was chosen to be either +1 or −1 in order to mitigate the potential problem of limited precision for the programming control on all of the NISQ devices. 10 random instances of this class of problems are generated and sampled using QAOA and QA; the implementations of each are discussed next.
2.2 QAOA

Given a combinatorial optimization problem over inputs x ∈ {0, 1}^n, let f(x): {0, 1}^n → R be the objective function which evaluates the cost of solution x. For a maximization (or minimization) problem, the goal is to find a variable assignment vector x for which f(x) is maximized (or minimized). The QAOA algorithm consists of the following components:

• An initial state |ψ⟩
• A phase separating Hamiltonian: H_P |x⟩ = f(x) |x⟩
• A mixing Hamiltonian: H_M
• An integer p ≥ 1, the number of rounds to run the algorithm
• Two real vectors ⃗γ = (γ_1, ..., γ_p) and ⃗β = (β_1, ..., β_p), each with length p

The algorithm consists of preparing the initial state |ψ⟩, then applying p rounds of the alternating simulation of the phase separating Hamiltonian and the mixing Hamiltonian:

|\vec{\gamma}, \vec{\beta}\rangle = \underbrace{e^{-i\beta_p H_M} e^{-i\gamma_p H_P}}_{\text{round } p} \cdots \underbrace{e^{-i\beta_1 H_M} e^{-i\gamma_1 H_P}}_{\text{round } 1} |\psi\rangle    (2)

Within each round, H_P is applied first, which separates the basis states of the state vector by phases e^{-i\gamma f(x)}. H_M then provides parameterized interference between solutions of different cost values. After p rounds, the state |⃗γ, ⃗β⟩ is measured in the computational basis and returns a sample solution y of cost value f(y) with probability |⟨y|⃗γ, ⃗β⟩|^2.
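The round structure of Eq. (2) can be checked numerically for a toy two-variable Ising with a statevector (a minimal numpy sketch, not the hardware workflow): the phase separator multiplies each basis amplitude by e^{-iγ f(x)}, and since the mixer terms commute, e^{-iβ H_M} factors into Rx(2β) = e^{-iβX} on each qubit.

```python
import numpy as np

def qaoa_state(costs, gamma, beta):
    # 1-round QAOA on 2 qubits. costs[k] = f(x) over basis states 00,01,10,11.
    psi = np.full(4, 0.5, dtype=complex)                 # |+>|+> initial state
    psi = np.exp(-1j * gamma * np.asarray(costs)) * psi  # phase separator e^{-i gamma H_P}
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])  # Rx(2*beta) = e^{-i beta X}
    return np.kron(rx, rx) @ psi                         # mixer on both qubits

costs = np.array([1.0, -1.0, -1.0, 1.0])  # toy Ising cost f(x) = s1*s2
probs = np.abs(qaoa_state(costs, gamma=0.4, beta=0.7)) ** 2
print(round(probs.sum(), 8))  # unitary evolution preserves total probability: 1.0
```

At γ = β = 0 the state is still the uniform superposition (every outcome has probability 0.25), which is why nontrivial angles are needed to bias sampling toward low-cost states.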
The aim of QAOA is to prepare the state |⃗γ, ⃗β⟩ from which we can sample a solution y with high cost value f(y). Therefore, in order to use QAOA the task is to find angles ⃗γ and ⃗β such that the expectation value ⟨⃗γ, ⃗β|H_P|⃗γ, ⃗β⟩ is large (−H_P for minimization problems). In the limit p → ∞, QAOA is effectively a Trotterization of the Quantum Adiabatic Algorithm, and in general as we increase p we expect to see a corresponding increase in the probability of sampling the optimal solution [17]. The challenge is the classical outer loop component of finding the good angles ⃗γ and ⃗β for all rounds p, which has a high computational cost as p increases.

Variational quantum algorithms, such as QAOA, have been the subject of a large amount of attention, in large part because of the problem domains that variational algorithms can address (such as combinatorial optimization) [35]. One of the challenges with variational quantum algorithms, however, is that the classical component of parameter selection, which in the case of QAOA is the angle finding problem, is not solved and is even more difficult when noise is present in the computation [36]. Typically the optimal angles for QAOA are computed exactly for small problem instances [15, 37]. However, in this case the angle finding approach we will use is a reasonably high resolution gridsearch over the possible angles. Note however that a fine gridsearch scales exponentially with the number of QAOA rounds p, and therefore is not advisable for practical high round QAOA [6, 7]. Exactly computing what the optimal angles are for problems of this size would be quite computationally intensive, especially with the introduction of higher order terms. We leave the problem of exactly computing the optimal QAOA angles to future work.

Figure 2 describes the short depth QAOA circuit construction for sampling the higher order Ising test instances. This algorithm can be applied to any heavy hexagonal lattice topology, which allows for executing the QAOA circuits on the 127 variable instances on the IBMQ ibm washington backend. For the class of Isings with higher order terms defined in Section 2.1, the QAOA angle ranges which are used are γ_1, ..., γ_p ∈ [0, π) and β_1, ..., β_{p−1} ∈ [0, π), β_p ∈ [0, π/2), where p is the number of QAOA rounds. Note that the halving of the angle search space for β applies when p = 1. For optimizing the angles using the naive grid search for p = 1, β_0 is varied over 60 linearly spaced angles ∈ [0, π/2] and γ_0 is varied over 120 linearly spaced angles ∈ [0, π]. For the high resolution gridsearch for p = 2, β_1 is varied over 5 linearly spaced angles ∈ [0, π/2], and γ_0, γ_1, and β_0 are varied over 11 linearly spaced angles ∈ [0, π]. Therefore, for p = 2 the angle gridsearch uses 6655 separate circuit executions (for each of the 10 problem instances), and for p = 1 the angle gridsearch uses 7200 separate circuit executions. Each circuit execution used 10,000 samples in order to compute a robust distribution for each angle combination.
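The grid sizes quoted above follow directly from the linspace counts; a small bookkeeping sketch (circuit construction and execution are omitted):

```python
import numpy as np
from itertools import product

# p = 1: beta over 60 points in [0, pi/2], gamma over 120 points in [0, pi].
betas_p1 = np.linspace(0, np.pi / 2, 60)
gammas_p1 = np.linspace(0, np.pi, 120)
grid_p1 = list(product(gammas_p1, betas_p1))

# p = 2: gamma_0, gamma_1, beta_0 over 11 points each in [0, pi];
# the final-round beta_1 over 5 points in [0, pi/2].
coarse = np.linspace(0, np.pi, 11)
betas_last = np.linspace(0, np.pi / 2, 5)
grid_p2 = list(product(coarse, coarse, coarse, betas_last))

print(len(grid_p1), len(grid_p2))  # 7200 6655
```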
[Figure 2 circuit diagram: an 11-qubit heavy-hexagonal fragment with columns Init (Hadamards on all qubits), Phase Separator (Rz(2γd) rotations for each term weight d = ±1, conjugated by CNOTs for the quadratic and cubic terms), and Mixer (Rx(2β) on all qubits), followed by evaluation of ⟨β, γ|H_C|β, γ⟩.]

Figure 2: A 1-round QAOA circuit: (left) The problem instance is a hardware-native bipartite graph with an arbitrary 3-edge-coloring given by Kőnig's line coloring theorem. (right) Any quadratic term (colored edge) gives rise to a combination of two CNOTs and a Rz-rotation in the phase separator, giving a CNOT depth of 6 due to the degree-3 nodes. When targeting the degree-2 nodes with the CNOT gates, these constructions can be nested, leading to no overhead when implementing the three-qubit terms: these always have a degree-2 node in the middle (see Eq. (1)).
In order to mitigate decoherence on idle qubits, digital dynamical decoupling (DDD) is also tested for all QAOA circuits. Dynamical decoupling is an open-loop quantum control error suppression technique for mitigating decoherence on idle qubits [38–41]. Dynamical decoupling can be implemented with pulse level quantum control, and digital dynamical decoupling can be implemented simply with circuit level instructions of gate sequences which compose to the identity [41]. Note that digital dynamical decoupling is an approximation of pulse level dynamical decoupling. Dynamical decoupling has been experimentally demonstrated for superconducting qubit quantum processors including IBMQ devices [42–44]. Dynamical decoupling is in particular applicable to QAOA circuits because they can be relatively sparse and therefore have idle qubits [42]. DDD is not always effective at consistently reducing errors during computation (for example because of other control errors present on the device [40, 42]), and therefore the raw QAOA circuits are compared against the QAOA circuits with DDD in the experiments section. In order to apply the DDD sequences to the OpenQASM [45] QAOA circuits, the PadDynamicalDecoupling^1 method from Qiskit [24] is used, with the pulse alignment parameter set based on the ibm washington backend properties. The native gateset of all current IBMQ backends is x, rz, cx, sx. The circuit scheduling algorithm that is used for inserting the digital dynamical decoupling sequences is ALAP, which schedules the stop time of instructions as late as possible^2. There are other scheduling algorithms that could be applied which may increase the efficacy of dynamical decoupling. Note that the rz gate is a virtual gate which is not executed on the hardware. There are different DDD gate sequences that can be applied, including Y-Y or X-X sequences. Because the X Pauli gate is already a native gate of the IBMQ device, the X-X DDD sequence is used for simplicity.

Note that the variable states for the optimization problems are either −1 or +1, but the circuit measurement states are either 0 or 1. Therefore, once the measurements are made on the QAOA circuits, for each variable in each sample the variable state mapping 0 → 1, 1 → −1 is performed. For circuit execution on the superconducting transmon qubit device ibm washington, circuits are batched into jobs where each job is composed of a group of at most 250 circuits - the maximum number of circuits for a job on ibm washington is currently 300, but we use 250 in order to reduce job errors related to the size of jobs. Grouping circuits into jobs is helpful for reducing the total amount of compute time required to prepare and measure each circuit. When submitting the circuits to the backend, they are all first locally transpiled via Qiskit [24] with optimization level=3. This transpilation converts the gateset to the ibm washington native gateset, and the transpiler optimization attempts to simplify the circuit where possible. The QAOA circuit execution on ibm washington spanned a large amount of time, and therefore the backend versions were not consistent. The exact backend software versions were 1.3.7, 1.3.8, 1.3.13, 1.3.15, 1.3.17.
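The measurement-to-spin conversion described above (0 → 1, 1 → −1) is a one-line map per sample; a minimal sketch:

```python
def bits_to_spins(bitstring):
    # Map each measured bit to an Ising spin: 0 -> +1, 1 -> -1.
    return [1 - 2 * int(b) for b in bitstring]

print(bits_to_spins("0110"))  # [1, -1, -1, 1]
```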
1 https://qiskit.org/documentation/locale/bn_BN/stubs/qiskit.transpiler.passes.PadDynamicalDecoupling.html
2 https://qiskit.org/documentation/apidoc/transpiler_passes.html
639 |
+
[Figure 3 graphic: coefficient tables (entries dA, dB, dC, dBA, dBC, and slack couplings) for the two cubic-term gadgets, one for dBAC = +1 and one for dBAC = −1, alongside the parallel embedding diagram.]
Figure 3: (left) Two different embeddings for cubic +1/−1 terms. Each embedding needs two slack variable qubits. Our overall embedding alternates between these two cubic term embeddings. Any embedding with only one slack variable needs a 4-clique between the slack and the three original variables, which is not possible to embed for consecutive cubic terms. (right) Embedding structures of the problem instances with higher order terms embedded in parallel (independently) 6 times onto the logical Pegasus P16 graph. The view of this graph has been slightly partitioned so that not all of the outer parts of the Pegasus chip are drawn. The light grey qubits and couplers indicate unused hardware regions. The cyan coloring on nodes and edges denotes the vertical qubits and CNOTs on the ibm washington hardware graph (see Figure 1). The red coloring on nodes and edges denotes the horizontal lines of qubits and CNOTs on ibm washington. The green nodes and edges denote the order reduction auxiliary variables. Note that the top right hand and lower left hand qubits are not present on the ibm washington lattice, but for the purposes of generating the embeddings, these extra qubits are filled in to complete the lattice.
2.3 Quantum Annealing
Quantum annealing is a proposed type of quantum computation which uses quantum fluctuations, such as quantum tunneling, in order to search for the ground state of a user programmed Hamiltonian. Quantum annealing, in the case of the transverse field Ising model implemented on D-Wave hardware, is explicitly described by the system given in Eq. (3). The state begins at time zero purely in the transverse Hamiltonian state ∑_i σ_i^x, and then over the course of the anneal (parameterized by the annealing time) the user programmed Ising is applied according to the function B(s). Together, A(s) and B(s) define the anneal schedules of the annealing process, and s is referred to as the anneal fraction. The standard anneal schedule that is used is a linear interpolation between s = 0 and s = 1.
H = −(A(s)/2) ( ∑_i σ_i^x ) + (B(s)/2) H_Ising    (3)
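The forward anneal schedule with a symmetric pause (searched over below and shown in Figure 4) can be sketched as a piecewise-linear list of (time, anneal fraction) points, in the style accepted by D-Wave samplers. This is a minimal illustration under the assumption that the pause keeps the total annealing time fixed; the helper name is ours:

```python
# Sketch of a forward-anneal schedule with a symmetric pause, expressed
# as piecewise-linear (time_in_microseconds, anneal_fraction) points.
# Assumption: the pause duration is carved out of the total anneal time.

def paused_schedule(anneal_time, s_pause, pause_fraction):
    """Linear ramp 0 -> 1 with a pause of pause_fraction * anneal_time
    inserted at anneal fraction s_pause, keeping the total time fixed."""
    pause_len = pause_fraction * anneal_time
    ramp_len = anneal_time - pause_len       # time spent actually ramping
    t_pause = s_pause * ramp_len             # ramp reaches s_pause here
    return [
        (0.0, 0.0),
        (t_pause, s_pause),                  # ramp up to the pause point
        (t_pause + pause_len, s_pause),      # hold s constant (the pause)
        (float(anneal_time), 1.0),           # resume the ramp to s = 1
    ]

sched = paused_schedule(2000, s_pause=0.6, pause_fraction=0.3)
# [(0.0, 0.0), (840.0, 0.6), (1440.0, 0.6), (2000.0, 1.0)]
```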
The adiabatic theorem states that if changes to the Hamiltonian of the system are sufficiently slow, the system will remain in the ground state of the problem Hamiltonian, thereby providing a computational mechanism for computing the ground state of optimization problems. The user programmed Ising H_Ising, acting on n qubits, is defined in Eq. (4). The quadratic terms and the linear terms combined define the optimization problem instance that the annealing procedure will ideally find the ground state of. As with QAOA, the objective of quantum annealing is
[Figure 4 graphic: the modified anneal schedules; axes are pause duration fraction (0.0 to 1.0) and anneal fraction s (0.0 to 1.0).]
Figure 4: All modified (forward) quantum annealing schedules which are tested in order to find the best anneal schedule with a pause. The symmetric pause inserted into the normal linearly interpolated schedule defining the A(s) and B(s) functions can provide better ground state sampling probability. The anneal fraction at which this pause occurs is varied between 0.1 and 0.9 in steps of 0.1. The pause duration, as a fraction of the total annealing time, is also varied between 0.1 and 0.9 in steps of 0.1. Although not shown in this figure, the annealing times are also varied between 10, 100, 1000, and 2000 microseconds.
to find the variable assignment vector x that minimizes the cost function which has the form of Eq. (4).
H_Ising = ∑_{i}^{n} h_i σ_i^z + ∑_{i<j}^{n} J_ij σ_i^z σ_j^z    (4)
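Evaluating the cost of a spin assignment under Eq. (4) is a direct sum over the linear and quadratic terms; a minimal sketch follows, with an optional dictionary of cubic terms for problems of the higher-order form of Eq. (1). The coefficients in the usage line are illustrative, not one of the paper's instances:

```python
# Minimal sketch of the Ising cost in Eq. (4) for spins in {-1, +1}^n,
# with optional cubic terms as in the higher-order problems of Eq. (1).

def ising_energy(spins, h, J, K=None):
    """h: {i: h_i}, J: {(i, j): J_ij} with i < j, K: {(i, j, k): coeff}."""
    energy = sum(h_i * spins[i] for i, h_i in h.items())
    energy += sum(J_ij * spins[i] * spins[j] for (i, j), J_ij in J.items())
    if K:  # higher-order (cubic) terms
        energy += sum(c * spins[i] * spins[j] * spins[k]
                      for (i, j, k), c in K.items())
    return energy

x = [1, -1, 1]  # illustrative assignment
e = ising_energy(x, h={0: 1, 2: -1}, J={(0, 1): 1}, K={(0, 1, 2): 1})
# e == -2
```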
The goal is to be able to implement the Ising problems defined in Section 2.1 on D-Wave quantum annealers. In order to implement the higher order terms, we will need to use order reduction in order to transform the cubic terms into linear and quadratic terms [3, 31–34]. This order reduction will result in using additional variables, usually called auxiliary or slack variables. Figure 3 shows the embeddings of the problem instances onto the logical Pegasus P16 graph, including the order reduction procedure which is used. The order reduction procedure outlined in Figure 3 allows for direct embedding of the order reduced polynomials onto the hardware graph, regardless of whether the cubic term coefficient is +1 or −1. This order reduction ensures that the ground state(s) of the cubic term are also the ground states of the order reduced Ising. Additionally, this order reduction ensures that for every excited state of the cubic term, there are no slack variable assignments which result in the original variables having an energy less than or equal to the ground state of the original cubic term. This order reduction procedure allows any problem in the form of Eq. (1) to be mapped natively to quantum annealing hardware which accepts problems with the form of Eq. (4). Importantly, this procedure does not require minor-embedding, even including the auxiliary variables.
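The paper's own two-slack spin-space gadget is specified in Figure 3; as an illustrative stand-in, the sketch below applies the standard quadratization from the cited order-reduction literature [31, 32] in Boolean variables (replace the product x·y by a slack w with penalty M·(xy − 2wx − 2wy + 3w)) and then verifies by brute force the minimum-preservation property the text describes:

```python
# Illustrative order reduction (NOT the paper's exact gadget): quadratize a
# cubic Boolean term c*x*y*z by substituting w for x*y with a penalty that
# vanishes exactly when w == x*y, then check by brute force that minimizing
# over the slack reproduces the cubic term for every (x, y, z).
from itertools import product

c, M = -1, 3                      # cubic coefficient; penalty weight M > 2*|c|

def cubic(x, y, z):
    return c * x * y * z          # original cubic Boolean term

def reduced(x, y, z, w):          # quadratic in (x, y, z, w)
    return c * w * z + M * (x * y - 2 * w * x - 2 * w * y + 3 * w)

for x, y, z in product((0, 1), repeat=3):
    assert min(reduced(x, y, z, w) for w in (0, 1)) == cubic(x, y, z)
```

The brute-force loop is exactly the check the text requires of a valid gadget: the minimum over slack assignments reproduces the original polynomial's value at every configuration, so ground states are preserved.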
In order to get more problem samples for the same QPU time, the other strategy that is employed is to embed multiple independent problem instances onto the hardware graph and thus be able to execute several instances in the same annealing cycle(s). This technique is referred to as parallel quantum annealing [33, 46] or tiling 3. Figure 3 (right) shows the parallel embeddings on a logical Pegasus graph. Because some of the logical embeddings may use a qubit or coupler which is missing on the actual hardware, fewer than 6 parallel instances can be tiled onto the chips to be executed at the same time. For Advantage system4.1, 2 independent embeddings of the problem instances could be created without encountering missing hardware. For Advantage system6.1, 3 independent embeddings of the problem instances could be created. The mapping of the heavy-hexagonal lattice onto Pegasus can be visually seen in Figure 3; the horizontal heavy-hex lines (Figure 1) are mapped to diagonal Pegasus qubit lines that run from top left to bottom right of the square Pegasus graph rendering. Then the vertical heavy-hexagonal qubits are mapped to QA qubits in between the diagonal qubit lines.
In order to optimize the quantum annealing parameters, with relatively similar complexity to the angle parameter search done for QAOA, the forward anneal schedule with pausing is optimized over a grid search. Pausing the
3 https://dwave-systemdocs.readthedocs.io/en/samplers/reference/composites/tiling.html
[Figure 5 graphic: energy histogram (axes: Energy, Counts). Legend: random samples; 2-round [[0.524, 0.262], [2.88, 2.88]]; 1-round [[0.363], [2.882]]; 2-round DDD [[0.524, 0.262], [2.88, 2.88]]; 1-round DDD [[0.389], [2.856]]; Advantage_system4.1 AT=2000 s=0.6 pause=0.3; Advantage_system6.1 AT=2000 s=0.6 pause=0.5.]
Figure 5: Direct energy histogram comparison of QA and QAOA results for one of the ten problem instances. Here the energies being plotted are the full energy spectrum for the parameters which gave the minimum mean energy across the parameter grid searches performed across the QA and QAOA parameters. The optimal parameter combination is given in the figure legend. For QA parameters, the annealing time in microseconds, the forward anneal schedule (symmetric) pause fraction, and anneal fraction, are given in the legend. For the QAOA angle parameters, the format is [β, γ], and the angles are rounded to 3 decimal places. The mean for each dataset is marked with vertical dashed lines, and the minimum energy found in each dataset is marked with solid vertical lines. The energy histogram plots for the other 9 problems are shown in Figure 6.
anneal at the appropriate spot can provide higher chances of sampling the ground state [47]. Figure 4 shows this anneal schedule search space; importantly, the annealing times used in these schedules are also optimized over. The QA parameters which are varied are 9 anneal fractions, 9 pause durations, and 4 annealing times (10, 100, 1000, 2000 microseconds). Therefore, the total number of parameter combinations which are considered in the grid search is 324. 2000 microseconds is the longest annealing time available on the current D-Wave quantum annealers. The number of anneals sampled for each D-Wave job was 500. The annealing times and the anneal schedules were varied in a simple grid search. Readout and programming thermalization times are both set to 0 microseconds. All other parameters are set to default, with the exception of the modified annealing schedule.
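The parameter grid just described (9 anneal fractions × 9 pause-duration fractions × 4 annealing times = 324 combinations) can be enumerated directly; a small sketch:

```python
# Enumerate the QA parameter grid described in the text:
# anneal fractions 0.1..0.9, pause-duration fractions 0.1..0.9,
# annealing times 10, 100, 1000, 2000 microseconds -> 9 * 9 * 4 = 324.
from itertools import product

anneal_fractions = [round(0.1 * k, 1) for k in range(1, 10)]   # 0.1 ... 0.9
pause_fractions  = [round(0.1 * k, 1) for k in range(1, 10)]   # 0.1 ... 0.9
anneal_times_us  = [10, 100, 1000, 2000]

grid = list(product(anneal_times_us, anneal_fractions, pause_fractions))
# len(grid) == 324
```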
3 Results
Figures 5 and 6 combined show the detailed energy distributions for all 10 problem instances sampled using the best parameter choices found for QA and QAOA. These histograms include the four variants of QAOA: 1 and 2 rounds, with and without DDD. The histograms also include 10000 random samples (binomial distribution with p = 0.5) on the problem instances.

QA performs better than QAOA: The most notable observation across these histograms is that quantum annealing clearly results in better variable assignments compared to all tested variations of QAOA; this clear stratification of the algorithms' capabilities is consistent across all 10 problem instances. Notice that the minimum energies achieved by QAOA (marked by the solid vertical lines) do not reach the energy distribution sampled by the quantum annealers. The characteristics of each of the 10 problem instances are slightly different, but this trend is very clear.
QAOA performs better than random sampling: Both QA and QAOA sampled better solutions than the
[Figure 6 graphic: nine energy histogram panels (axes: Energy, Counts), one per remaining problem instance. In every panel the 2-round and 2-round DDD legend entries are [[0.524, 0.262], [2.88, 2.88]]; the remaining legend entries, per panel, are:
1-round [[0.363], [2.882]]; 1-round DDD [[0.338], [2.882]]; Advantage_system4.1 AT=2000 s=0.6 pause=0.9; Advantage_system6.1 AT=2000 s=0.6 pause=0.5
1-round [[0.363], [2.882]]; 1-round DDD [[0.441], [2.908]]; Advantage_system4.1 AT=2000 s=0.6 pause=0.3; Advantage_system6.1 AT=2000 s=0.5 pause=0.4
1-round [[0.389], [2.882]]; 1-round DDD [[0.441], [2.908]]; Advantage_system4.1 AT=2000 s=0.6 pause=0.4; Advantage_system6.1 AT=2000 s=0.7 pause=0.1
1-round [[0.389], [2.882]]; 1-round DDD [[0.441], [2.856]]; Advantage_system4.1 AT=2000 s=0.6 pause=0.5; Advantage_system6.1 AT=2000 s=0.6 pause=0.2
1-round [[0.363], [2.882]]; 1-round DDD [[0.415], [2.882]]; Advantage_system4.1 AT=2000 s=0.6 pause=0.4; Advantage_system6.1 AT=2000 s=0.6 pause=0.2
1-round [[0.389], [2.882]]; 1-round DDD [[0.363], [2.882]]; Advantage_system4.1 AT=2000 s=0.6 pause=0.4; Advantage_system6.1 AT=2000 s=0.6 pause=0.7
1-round [[0.338], [2.882]]; 1-round DDD [[0.415], [2.908]]; Advantage_system4.1 AT=2000 s=0.6 pause=0.5; Advantage_system6.1 AT=1000 s=0.6 pause=0.6
1-round [[0.363], [2.882]]; 1-round DDD [[0.441], [2.882]]; Advantage_system4.1 AT=2000 s=0.6 pause=0.6; Advantage_system6.1 AT=1000 s=0.6 pause=0.3
1-round [[0.441], [2.882]]; 1-round DDD [[0.441], [2.882]]; Advantage_system4.1 AT=2000 s=0.6 pause=0.4; Advantage_system6.1 AT=2000 s=0.7 pause=0.3]
Figure 6: Direct energy histogram comparison of QA and QAOA results for the other nine problem instances, continuing from Figure 5.
10000 random samples. Although this is an obvious observation from the distributions in Figures 5 and 6, it is not trivial that the QAOA samples had better objective function values compared to random sampling. The reason this is not trivial is that at sufficient circuit depth, which is not difficult to reach, the computation will entirely decohere and will not be meaningful. This result is encouraging because it shows that short depth circuit constructions, combined with the increasing scale of near term quantum computers, can begin to yield relevant computations for larger system sizes (in this case, 127 variables).
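The random-sampling baseline used in the histograms is straightforward to reproduce in outline: each of the 127 variables is drawn uniformly from {−1, +1}, and 10000 such samples are scored. The objective below is a placeholder, not one of the paper's Ising instances:

```python
# Sketch of the random baseline: 10000 uniform random spin assignments over
# 127 variables, scored by a placeholder objective (stand-in for Eq. (1)).
import random

random.seed(0)
n_vars, n_samples = 127, 10000

def random_spins(n):
    """Each spin is an independent fair coin flip over {-1, +1}."""
    return [random.choice((-1, 1)) for _ in range(n)]

def objective(spins):            # placeholder for the problem's Ising cost
    return sum(spins)

samples = [random_spins(n_vars) for _ in range(n_samples)]
energies = [objective(s) for s in samples]
best = min(energies)             # best (lowest) energy found by random sampling
```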
The effect of digital dynamical decoupling: The dataset shown in Figure 6 also allows for a direct quantification of how successful the digital dynamical decoupling passes were at improving the QAOA circuit executions. Table 2 shows a comparison of the four QAOA implementations. For 2-round QAOA, DDD improved the mean sample energy for 10 out of the 10 problem instances. For 1-round QAOA, DDD improved the mean sample energy for 4 out of the 10 problem instances. This shows that digital dynamical decoupling does not uniformly improve the performance of the QAOA circuits. This suggests that the qubits in the 2-round QAOA circuits have more available idle time compared to the 1-round QAOA circuits, which would allow for DDD to improve the circuit performance. The 2-round QAOA results had better average energy compared to the 1-round results in 6 out of the 10 problem instances.
Optimal parameter choices - QAOA: The optimal 2-round QAOA angles for all 10 problems, with and without dynamical decoupling, are the same. The optimal 1-round QAOA angles are not consistent across all problems, and even vary between the with- and without-DDD circuit executions. However, even though the exact optimal angle assignments are not consistent across all problems, they are very close to each other, which is notable because it indicates that the optimal angles may be identical or nearly identical but the search space is being obscured by the noise in the computation.
Optimal parameter choices - QA: Figure 6 also allows examination of how stable the different parameters are, both across the 10 problem instances and within each problem instance. In the case of quantum annealing, the optimal annealing times are always 2000 microseconds, and the optimal pause schedule is not especially consistent, with pause fraction durations ranging from 0.1 to 0.9 and with anneal fractions s ranging from 0.5 to 0.7.
                              p = 1    p = 2    p = 1 with DDD    p = 2 with DDD
p = 1 (no DDD) better than      -      10/10         5/10              4/10
p = 2 (no DDD) better than    0/10       -           2/10              0/10
p = 1 (with DDD) better than  5/10     8/10           -                4/10
p = 2 (with DDD) better than  6/10    10/10          6/10               -
Table 2: How the four different QAOA implementations, one and two rounds with and without DDD, compare against each other in terms of how many of the 10 random instances each method was better than the other three methods, measured by mean objective function value across the 10000 samples (for the best angle combination). There is a clear ordering in the performance of the four methods: p = 2 with no DDD performed the worst, p = 1 with no DDD performed the next best, then p = 1 with DDD, and p = 2 with DDD performed the best overall.
D-Wave devices performance differences: One last observation from Figure 6 is that there a small but consis-
|
1132 |
+
tent performance difference between the two quantum annealers; the slightly older generation Advantage system4.1
|
1133 |
+
yields lower mean energy than Advantage system6.1.
|
1134 |
+
4 Discussion
It is of considerable interest to determine how effective quantum annealing and QAOA are at computing the optimal solutions of combinatorial optimization problems. Combinatorial optimization problems have wide-reaching applicability, and being able to solve them faster or to get better heuristic solutions is a very relevant topic in computing. In this article, we have presented experimental results for a fair direct comparison of QAOA and quantum annealing, implemented on the currently accessible quantum hardware via cloud computing. This research has specifically found the following:

1. Quantum annealing finds higher quality solutions to the random test Isings with higher order terms compared to the short depth QAOA p = 1 and p = 2 circuits, with reasonably fine grid searches over the QAOA angles and quantum annealing schedules with pauses.

2. QAOA performs noticeably better than random sampling; this is mostly due to the short depth QAOA circuit constructions, which allow reasonably robust computations to be executed without the qubits decohering on current quantum computers.

3. The short depth QAOA circuit construction is notable because it allows for higher order terms in the Ising, and is scalable to a heavy-hexagonal lattice of any size; therefore this circuit construction can be used for future implementations of QAOA on devices with heavy-hexagonal lattices.

4. Dynamical decoupling can improve the computation of QAOA on NISQ computers.
5 Acknowledgments
This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001). We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. The research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number 20220656ER. This research used resources provided by the Darwin testbed at Los Alamos National Laboratory (LANL), which is funded by the Computational Systems and Software Environments subprogram of LANL's Advanced Simulation and Computing program (NNSA/DOE).

LA-UR-22-33077
References
[1] Tadashi Kadowaki and Hidetoshi Nishimori. "Quantum annealing in the transverse Ising model". In: Physical Review E 58.5 (1998), pp. 5355–5363. doi: 10.1103/physreve.58.5355.
[2] Arnab Das and Bikas K Chakrabarti. "Colloquium: Quantum annealing and analog quantum computation". In: Reviews of Modern Physics 80.3 (2008), p. 1061.
[3] Philipp Hauke et al. "Perspectives of quantum annealing: methods and implementations". In: Reports on Progress in Physics 83.5 (2020), p. 054401. doi: 10.1088/1361-6633/ab85b8.
[4] Sheir Yarkoni et al. "Quantum annealing for industry applications: introduction and review". In: Reports on Progress in Physics 85.10 (2022), p. 104001. doi: 10.1088/1361-6633/ac8c54.
[5] Stuart Hadfield et al. "From the Quantum Approximate Optimization Algorithm to a Quantum Alternating Operator Ansatz". In: Algorithms 12.2 (2019), p. 34. doi: 10.3390/a12020034.
[6] Jeremy Cook, Stephan Eidenbenz, and Andreas Bärtschi. "The Quantum Alternating Operator Ansatz on Maximum k-Vertex Cover". In: 2020 IEEE International Conference on Quantum Computing and Engineering (QCE). 2020, pp. 83–92. doi: 10.1109/QCE49297.2020.00021.
[7] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A Quantum Approximate Optimization Algorithm. 2014. doi: 10.48550/ARXIV.1411.4028. url: https://arxiv.org/abs/1411.4028.
[8] Phillip C. Lotshaw et al. "Scaling quantum approximate optimization on near-term hardware". In: Scientific Reports 12.1 (2022). doi: 10.1038/s41598-022-14767-w.
[9] Tameem Albash and Daniel A. Lidar. "Demonstration of a Scaling Advantage for a Quantum Annealer over Simulated Annealing". In: Phys. Rev. X 8 (3 2018), p. 031016. doi: 10.1103/PhysRevX.8.031016.
[10] Andrew D King et al. "Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets". In: Nature Communications 12.1 (2021), pp. 1–6. doi: 10.1038/s41467-021-20901-5.
[11] Edward Farhi and Aram W Harrow. Quantum Supremacy through the Quantum Approximate Optimization Algorithm. 2016. doi: 10.48550/ARXIV.1602.07674. url: https://arxiv.org/abs/1602.07674.
[12] Lucas T. Brady et al. "Optimal Protocols in Quantum Annealing and Quantum Approximate Optimization Algorithm Problems". In: Phys. Rev. Lett. 126 (7 2021), p. 070505. doi: 10.1103/PhysRevLett.126.070505.
[13] Madita Willsch et al. "Benchmarking the quantum approximate optimization algorithm". In: Quantum Information Processing 19.7 (2020), pp. 1–24. doi: 10.1007/s11128-020-02692-8.
[14] Michael Streif and Martin Leib. Comparison of QAOA with Quantum and Simulated Annealing. 2019. doi: 10.48550/ARXIV.1901.01903. url: https://arxiv.org/abs/1901.01903.
[15] Elijah Pelofske et al. "Sampling on NISQ Devices: 'Who's the Fairest One of All?'". In: 2021 IEEE International Conference on Quantum Computing and Engineering (QCE). IEEE, 2021. doi: 10.1109/qce52317.2021.00038.
[16] Stefan H. Sack and Maksym Serbyn. "Quantum annealing initialization of the quantum approximate optimization algorithm". In: Quantum 5 (2021), p. 491. doi: 10.22331/q-2021-07-01-491.
[17] John Golden et al. Evidence for Super-Polynomial Advantage of QAOA over Unstructured Search. 2022. doi: 10.48550/ARXIV.2202.00648. url: https://arxiv.org/abs/2202.00648.
[18] Johannes Weidenfeller et al. "Scaling of the quantum approximate optimization algorithm on superconducting qubit based hardware". In: Quantum 6 (Dec. 2022), p. 870. issn: 2521-327X. doi: 10.22331/q-2022-12-07-870.
[19] Matthew P. Harrigan et al. "Quantum approximate optimization of non-planar graph problems on a planar superconducting processor". In: Nature Physics 17.3 (2021), pp. 332–336. doi: 10.1038/s41567-020-01105-y.
[20] Pradeep Niroula et al. "Constrained quantum optimization for extractive summarization on a trapped-ion quantum computer". In: Scientific Reports 12.1 (2022), pp. 1–14. doi: 10.1038/s41598-022-20853-w.
[21] Dylan Herman et al. Portfolio Optimization via Quantum Zeno Dynamics on a Quantum Processor. 2022. doi: 10.48550/ARXIV.2209.15024. url: https://arxiv.org/abs/2209.15024.
[22] Thomas A Caswell et al. matplotlib/matplotlib. Version v3.4.3. doi: 10.5281/zenodo.5194481.
[23] J. D. Hunter. "Matplotlib: A 2D graphics environment". In: Computing in Science & Engineering 9.3 (2007), pp. 90–95. doi: 10.1109/MCSE.2007.55.
[24] Matthew Treinish et al. Qiskit/qiskit: Qiskit 0.34.1. Version 0.34.1. Jan. 2022. doi: 10.5281/zenodo.5823346.
[25] Christopher Chamberland et al. "Topological and Subsystem Codes on Low-Degree Graphs with Flag Qubits". In: Phys. Rev. X 10 (1 2020), p. 011022. doi: 10.1103/PhysRevX.10.011022.
[26] Stefanie Zbinden et al. "Embedding algorithms for quantum annealers with chimera and pegasus connection topologies". In: International Conference on High Performance Computing. Springer, 2020, pp. 187–206. doi: 10.1007/978-3-030-50743-5_10.
[27] Nike Dattani, Szilard Szalay, and Nick Chancellor. Pegasus: The second connectivity graph for large-scale quantum annealing hardware. 2019. doi: 10.48550/ARXIV.1901.07636. url: https://arxiv.org/abs/1901.07636.
[28] C. H. Tseng et al. "Quantum simulation of a three-body-interaction Hamiltonian on an NMR quantum computer". In: Phys. Rev. A 61 (1 1999), p. 012302. doi: 10.1103/PhysRevA.61.012302.
[29] Nicholas Chancellor, Stefan Zohren, and Paul A Warburton. "Circuit design for multi-body interactions in superconducting quantum annealing systems with applications to a scalable architecture". In: npj Quantum Information 3.1 (2017), pp. 1–7. doi: 10.1038/s41534-017-0022-6.
[30] Colin Campbell and Edward Dahl. "QAOA of the Highest Order". In: 2022 IEEE 19th International Conference on Software Architecture Companion (ICSA-C). 2022, pp. 141–146. doi: 10.1109/ICSA-C54293.2022.00035.
[31] Elisabetta Valiante et al. "Computational overhead of locality reduction in binary optimization problems". In: Computer Physics Communications 269 (2021), p. 108102. issn: 0010-4655. doi: 10.1016/j.cpc.2021.108102.
[32] Hiroshi Ishikawa. "Transformation of General Binary MRF Minimization to the First-Order Case". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 33.6 (2011), pp. 1234–1249. doi: 10.1109/TPAMI.2010.91.
[33] Elijah Pelofske et al. "Quantum annealing algorithms for Boolean tensor networks". In: Scientific Reports 12.1 (2022). doi: 10.1038/s41598-022-12611-9.
[34] Shuxian Jiang et al. "Quantum annealing for prime factorization". In: Scientific Reports 8.1 (2018), pp. 1–9. doi: 10.1038/s41598-018-36058-z.
[35] M. Cerezo et al. "Variational quantum algorithms". In: Nature Reviews Physics 3.9 (2021), pp. 625–644. doi: 10.1038/s42254-021-00348-9.
[36] Samson Wang et al. "Noise-induced barren plateaus in variational quantum algorithms". In: Nature Communications 12.1 (2021), pp. 1–11. doi: 10.1038/s41467-021-27045-6.
[37] Yingyue Zhu et al. "Multi-round QAOA and advanced mixers on a trapped-ion quantum computer". In: Quantum Science and Technology 8.1 (2022), p. 015007. doi: 10.1088/2058-9565/ac91ef.
[38] Dieter Suter and Gonzalo A. Álvarez. "Colloquium: Protecting quantum information against environmental noise". In: Rev. Mod. Phys. 88 (4 2016), p. 041001. doi: 10.1103/RevModPhys.88.041001.
[39] Lorenza Viola, Emanuel Knill, and Seth Lloyd. "Dynamical Decoupling of Open Quantum Systems". In: Phys. Rev. Lett. 82 (12 1999), pp. 2417–2421. doi: 10.1103/PhysRevLett.82.2417.
|
1310 |
+
aps.org/doi/10.1103/PhysRevLett.82.2417.
|
1311 |
+
[40]
|
1312 |
+
Mustafa Ahmed Ali Ahmed, Gonzalo A. ´Alvarez, and Dieter Suter. “Robustness of dynamical decoupling
|
1313 |
+
sequences”. In: Physical Review A 87.4 (2013). doi: 10.1103/physreva.87.042309. url: https://doi.
|
1314 |
+
org/10.1103%2Fphysreva.87.042309.
|
1315 |
+
[41]
|
1316 |
+
Ryan LaRose et al. “Mitiq: A software package for error mitigation on noisy quantum computers”. In: Quan-
|
1317 |
+
tum 6 (2022), p. 774. doi: 10.22331/q-2022-08-11-774. url: https://doi.org/10.22331%2Fq-2022-
|
1318 |
+
08-11-774.
|
1319 |
+
[42]
|
1320 |
+
Siyuan Niu and Aida Todri-Sanial. “Effects of Dynamical Decoupling and Pulse-Level Optimizations on IBM
|
1321 |
+
Quantum Computers”. In: IEEE Transactions on Quantum Engineering 3 (2022), pp. 1–10. doi: 10.1109/
|
1322 |
+
tqe.2022.3203153. url: https://doi.org/10.1109%2Ftqe.2022.3203153.
|
1323 |
+
[43]
|
1324 |
+
Nic Ezzell et al. Dynamical decoupling for superconducting qubits: a performance survey. 2022. doi: 10.
|
1325 |
+
48550/ARXIV.2207.03670. url: https://arxiv.org/abs/2207.03670.
|
1326 |
+
[44]
|
1327 |
+
Bibek Pokharel et al. “Demonstration of Fidelity Improvement Using Dynamical Decoupling with Supercon-
|
1328 |
+
ducting Qubits”. In: Phys. Rev. Lett. 121 (22 2018), p. 220502. doi: 10.1103/PhysRevLett.121.220502.
|
1329 |
+
url: https://link.aps.org/doi/10.1103/PhysRevLett.121.220502.
|
1330 |
+
[45]
|
1331 |
+
Andrew W. Cross et al. Open Quantum Assembly Language. 2017. doi: 10.48550/ARXIV.1707.03429. url:
|
1332 |
+
https://arxiv.org/abs/1707.03429.
|
1333 |
+
[46]
|
1334 |
+
Elijah Pelofske, Georg Hahn, and Hristo N. Djidjev. “Parallel quantum annealing”. In: Scientific Reports 12.1
|
1335 |
+
(2022). doi: 10.1038/s41598-022-08394-8. url: https://doi.org/10.1038%2Fs41598-022-08394-8.
|
1336 |
+
[47]
|
1337 |
+
Jeffrey Marshall et al. “Power of Pausing: Advancing Understanding of Thermalization in Experimental
|
1338 |
+
Quantum Annealers”. In: Phys. Rev. Appl. 11 (4 2019), p. 044083. doi: 10.1103/PhysRevApplied.11.
|
1339 |
+
044083. url: https://link.aps.org/doi/10.1103/PhysRevApplied.11.044083.
|
1340 |
+
13
|
1341 |
+
|
4dAyT4oBgHgl3EQfpPgc/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff.
69E1T4oBgHgl3EQfTgNE/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6f8e14d19b5dae642843d3c1de6bd8502529893dbb78d89a7c279e9af9db184
size 116381
69E3T4oBgHgl3EQfpwru/content/tmp_files/2301.04646v1.pdf.txt ADDED
@@ -0,0 +1,1019 @@
Draft version January 12, 2023
Typeset using LaTeX twocolumn style in AASTeX631

HAWC Detection of a TeV Halo Candidate Surrounding a Radio-quiet Pulsar

A. Albert,1 R. Alfaro,2 J.C. Arteaga-Velázquez,3 E. Belmont-Moreno,2 T. Capistrán,4 A. Carramiñana,5 S. Casanova,6 J. Cotzomi,7 S. Coutiño de León,8 E. De la Fuente,9 R. Diaz Hernandez,5 M.A. DuVernois,8 J.C. Díaz-Vélez,9 C. Espinoza,2 K.L. Fan,10 N. Fraija,4 K. Fang,8 J.A. García-González,11 F. Garfias,4 Armelle Jardin-Blicq,12,13 M.M. González,4 J.A. Goodman,10 J.P. Harding,1 S. Hernandez,2 D. Huang,14 F. Hueyotl-Zahuantitla,15 A. Iriarte,4 V. Joshi,16 A. Lara,17 J. Lee,18 H. León Vargas,2 J.T. Linnemann,19 A.L. Longinotti,4 G. Luis-Raya,20 K. Malone,21 O. Martinez,7 J. Martínez-Castro,22 J.A. Matthews,23 J.A. Morales-Soto,3 E. Moreno,7 M. Mostafá,24 A. Nayerhoda,6 L. Nellen,25 M. Newbold,26 M.U. Nisa,27,28 Y. Pérez Araujo,4 E.G. Pérez-Pérez,20 C.D. Rho,29 D. Rosa-González,5 M. Schneider,10 J. Serna-Franco,2 A.J. Smith,10 Y. Son,18 R.W. Springer,26 K. Tollefson,27 I. Torres,5 R. Torres-Escobedo,30 X. Wang,14 K. Whitaker,24 E. Willox,10 H. Zhou,30 C. de León3
(THE HAWC COLLABORATION)

1 Physics Division, Los Alamos National Laboratory, Los Alamos, NM, USA
2 Instituto de Física, Universidad Nacional Autónoma de México, Ciudad de Mexico, Mexico
3 Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Mexico
4 Instituto de Astronomía, Universidad Nacional Autónoma de México, Ciudad de Mexico, Mexico
5 Instituto Nacional de Astrofísica, Óptica y Electrónica, Puebla, Mexico
6 Institute of Nuclear Physics Polish Academy of Sciences, PL-31342 IFJ-PAN, Krakow, Poland
7 Facultad de Ciencias Físico Matemáticas, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico
8 Department of Physics, University of Wisconsin-Madison, Madison, WI, USA
9 Departamento de Física, Centro Universitario de Ciencias Exactas e Ingenierias, Universidad de Guadalajara, Guadalajara, Mexico
10 Department of Physics, University of Maryland, College Park, MD, USA
11 Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Ave. Eugenio Garza Sada 2501, Monterrey, N.L., Mexico, 64849
12 Université Bordeaux, CNRS/IN2P3, LP2I Bordeaux, UMR 5797, F-33170 Gradignan, France
13 Max-Planck Institute for Nuclear Physics, D-69117 Heidelberg, Germany
14 Department of Physics, Michigan Technological University, Houghton, MI, USA
15 Universidad Autónoma de Chiapas, Tuxtla Gutiérrez, Chiapas, México
16 Erlangen Centre for Astroparticle Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
17 Instituto de Geofísica, Universidad Nacional Autónoma de México, Ciudad de Mexico, Mexico
18 University of Seoul, Seoul, Rep. of Korea
19 Department of Physics and Astronomy, Michigan State University, East Lansing, MI, USA
20 Universidad Politecnica de Pachuca, Pachuca, Hgo, Mexico
21 Space Science and Applications Group, Los Alamos National Laboratory, Los Alamos, NM, USA
22 Centro de Investigación en Computación, Instituto Politécnico Nacional, México City, México
23 Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM, USA
24 Department of Physics, Pennsylvania State University, University Park, PA, USA
25 Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de Mexico, Ciudad de Mexico, Mexico
26 Department of Physics and Astronomy, University of Utah, Salt Lake City, UT, USA
27 Department of Physics and Astronomy, Michigan State University, East Lansing, MI, USA
28 Department of Physics, Michigan Technological University, Houghton, MI, USA
29 University of Seoul, Seoul, Rep. of Korea
30 Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai, China

Corresponding author: Sara Coutiño de León

arXiv:2301.04646v1 [astro-ph.HE] 11 Jan 2023

ABSTRACT
Extended very-high-energy (VHE; 0.1-100 TeV) γ-ray emission has been observed around several middle-aged pulsars and referred to as "TeV halos". Their formation mechanism remains under debate. It is also unknown whether they are ubiquitous or related to a certain subgroup of pulsars. With 2321 days of observation, the High Altitude Water Cherenkov (HAWC) Gamma-Ray Observatory detected VHE γ-ray emission at the location of the radio-quiet pulsar PSR J0359+5414 with > 6σ significance. By performing likelihood tests with different spectral and spatial models and comparing the TeV spectrum with multi-wavelength observations of nearby sources, we show that this excess is consistent with a TeV halo associated with PSR J0359+5414, though future HAWC observations and multi-wavelength follow-ups are needed to confirm this nature. This new halo candidate is located in a non-crowded region in the outer Galaxy. It shares similar properties with the other halos, but its pulsar is younger and radio-quiet. Our observation implies that TeV halos could commonly exist around pulsars and that their formation does not depend on the configuration of the pulsar magnetosphere.

Keywords: Pulsars (1306) — Gamma-ray astronomy (628) — High-energy astrophysics (739)
1. INTRODUCTION

Extended TeV gamma-ray emission has been observed around several middle-aged (> 100 kyr) pulsars and grouped as a new source class named "TeV halos" (Linden et al. 2017; López-Coto et al. 2022). Seven sources are referenced as TeV halos in the online catalog for TeV Astronomy, TeVCat1 (Wakely & Horan 2008), including the first halos around the Geminga and Monogem pulsars, discovered by HAWC (Abeysekara et al. 2017a), HESS J1825-137, reported by the H.E.S.S. collaboration (H. E. S. S. Collaboration et al. 2018), and the halo of PSR J0622+3749, identified by the LHAASO collaboration (Aharonian et al. 2021). The VHE fluxes of these halos suggest that ∼ 10%−40% of the spin-down power of the pulsars is converted into an e± pair population that interacts with the ambient interstellar radiation field (Sudoh et al. 2019; Aharonian et al. 2021). The diffusion coefficients derived from the sizes of the halos are typically two orders of magnitude lower than the average diffusion coefficient of the interstellar medium (ISM; Hooper et al. 2017; Sudoh et al. 2019).

The formation mechanism of the TeV halos is still under debate (Linden et al. 2017; Sudoh et al. 2019; Giacinti et al. 2020; López-Coto et al. 2022; Liu 2022; De La Torre Luque et al. 2022). Whether they are related to the local environment, such as extended, diffuse emission by other sources near the pulsar (e.g., the Monogem Ring; Plucinsky et al. 1996), is also questioned. If TeV halos commonly exist around pulsars, they can be used to study the propagation of cosmic rays (e.g., Evoli et al. 2018) and to identify pulsars that are otherwise invisible to radio and γ-ray observations (Linden et al. 2017).

1 http://tevcat2.uchicago.edu/

In this letter, we report the detection of a new TeV halo candidate around the pulsar PSR J0359+5414 (hereafter J0359) using 2321 days of HAWC data. The detection of J0359 was first reported in the Fermi Large Area Telescope (LAT) First Source Catalog (1FGL, Abdo et al. 2010), where it remained an unclassified source until the Third Source Catalog (3FGL, Acero et al. 2015). J0359 was later classified as a radio-quiet pulsar by Clark et al. (2017), with an age of 75 kyr and a spin-down power of Ė = 1.3 × 10^36 erg s−1. In Zyuzin et al. (2018) a pseudo-distance of J0359 is reported as d = 3.45 kpc, derived from Ė and the gamma-ray flux. The latest report on J0359 at high energies appears in the Fermi-LAT Fourth Source Catalog (4FGL, Abdollahi et al. 2020), where it is detected above 33σ in the MeV-GeV energy range. A pulsar wind nebula (PWN) with an extension of ∼ 30″ was observed by Chandra as a result of an X-ray analysis of gamma-ray pulsars (Zyuzin et al. 2018). No radio emission has been detected from the pulsar (Grießmeier et al. 2021). The VHE γ-ray emission from the vicinity of J0359 observed by HAWC presents properties similar to the other TeV halo candidates, including the derived acceleration efficiency and diffusion coefficient. If this source is a TeV halo, it would support the hypothesis that the halos are ubiquitous.

The paper is organized as follows. The data set and analysis framework are described in Section 2. The results of the spectral and spatial analysis are presented in Section 3. In Section 4, the broadband spectral energy distribution (SED) of J0359 is presented and the origin of the TeV emission is discussed. The conclusions are summarized in Section 5.
2. INSTRUMENT AND DATA ANALYSIS

Figure 1. HAWC significance map in Galactic coordinates using 2321 days of live data. The significance is computed with a point-like spatial template and a power-law spectrum with spectral index α = 2.7. For comparison, the positions of PSR J0359+5414 and PSR B0355+54 are marked.

The HAWC Gamma-Ray Observatory consists of 300 water Cherenkov detectors located at 19°N in Puebla, Mexico, at an altitude of 4100 m. Each detector is instrumented with 4 photomultiplier tubes (PMTs) capable of detecting the Cherenkov radiation produced in the detector water when an electromagnetic or hadronic shower, initiated by a γ ray or a cosmic ray, respectively, reaches the ground. HAWC is sensitive to sources with declinations between −41° and +79° and to energies in the 300 GeV to > 100 TeV range. The data set used in this analysis comprises 2321 days of live data taken from November 2014 to October 2021. The data set is divided into 11 analysis bins (fHit) based on the fraction of PMTs triggered in each event, on and off the main detector array. A full description of HAWC's design and performance can be found in Smith & HAWC Collaboration (2015) and Abeysekara et al. (2017b).

A maximum likelihood analysis was performed using the Multi-Mission Maximum Likelihood (3ML) framework (Vianello et al. 2015) with the HAWC Accelerated Likelihood (HAL) plug-in (Abeysekara et al. 2021). For model selection, we use the likelihood ratio test statistic (TS), defined by

    TS = 2 ln( L_{S+B} / L_B ),    (1)

where L_{S+B} is the maximum likelihood of the signal-plus-background model, which depends on the spectral and spatial parameters, and L_B is the maximum likelihood of the background-only hypothesis. Three spectral models are tested: a single power law (PL, Equation 2), a log-parabola (LOGP, Equation 3), and a power law with an exponential energy cutoff (PL+CO, Equation 4):

    dN/dE = N_0 (E/E_0)^{-α},    (2)

    dN/dE = N_0 (E/E_0)^{-α - β ln(E/E_0)},    (3)

    dN/dE = N_0 (E/E_0)^{-α} exp(-E/E_c).    (4)

In the above equations, N_0 is the flux normalization in units of [TeV−1 cm−2 s−1], E_0 is the pivot energy, fixed at 30 TeV to minimize correlations with the other parameters, α is the spectral index, E_c is the cutoff energy, and β is the curvature of the log-parabola spectrum. Two spatial models are tested: a point-like template and an extended template. The extended template is described by a symmetric Gaussian with its width as a free parameter.

The energy range in which a source is detected is computed by multiplying a step function with the best-fit model (nominal case). The lower and upper values of the step function at which the likelihood decreases by 1σ, 2σ, or 3σ from that of the nominal case are taken as the upper limit on the minimum energy and the lower limit on the maximum energy, respectively.
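The three spectral shapes above can be written as short functions. This is a minimal sketch, not the analysis code; the parameter values passed in are placeholders, with the pivot energy fixed at 30 TeV as in the text:

```python
import math

E0 = 30.0  # pivot energy in TeV, fixed as described in the text


def pl(E, N0, alpha):
    """Single power law, Equation (2)."""
    return N0 * (E / E0) ** (-alpha)


def logp(E, N0, alpha, beta):
    """Log-parabola, Equation (3)."""
    return N0 * (E / E0) ** (-alpha - beta * math.log(E / E0))


def pl_co(E, N0, alpha, Ec):
    """Power law with exponential energy cutoff, Equation (4)."""
    return N0 * (E / E0) ** (-alpha) * math.exp(-E / Ec)
```

Note that all three shapes evaluate to N_0 (times exp(−E_0/E_c) for the cutoff model) at the pivot energy, which is why pivoting at 30 TeV decorrelates N_0 from the shape parameters.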
3. RESULTS

3.1. Association with J0359

We first free the position of the emission and fit the PL point-source model to the data. The best-fit R.A. and decl. are 59.83 ± 0.07stat and 54.22 ± 0.05stat degrees (the systematic uncertainty at this location is 0°.02), which are consistent with the position of J0359 (59.86 and 54.25 degrees for R.A. and decl., respectively). The TS of the model is TS = 38.18, which corresponds to a significance of 6.18σ for four degrees of freedom based on the Wilks theorem (Wilks 1938). As the position is consistent with the pulsar position, we fixed the TeV emission to the pulsar position to perform the spectral analysis.
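The conversion from a likelihood-ratio TS to a Gaussian significance can be sketched with the standard library alone. This is an illustrative sketch, not the collaboration's code: for a single free parameter the significance reduces to √TS (√38.18 ≈ 6.18, the value quoted above), and for an even number of degrees of freedom the chi-squared survival function has the closed form used below:

```python
import math
from statistics import NormalDist


def chi2_sf_even(x, k):
    """Survival function of a chi-squared distribution with even k
    degrees of freedom: exp(-x/2) * sum_{i<k/2} (x/2)^i / i!."""
    m = x / 2.0
    return math.exp(-m) * sum(m ** i / math.factorial(i) for i in range(k // 2))


def significance(ts, dof):
    """One-sided Gaussian significance of a TS value with `dof` extra
    free parameters, via the Wilks theorem (even dof only here)."""
    p = chi2_sf_even(ts, dof)
    return NormalDist().inv_cdf(1.0 - p)


print(math.sqrt(38.18))        # ~6.18, the 1-dof shorthand
print(significance(38.18, 4))  # chi-squared conversion for four parameters
```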
Table 1 summarizes the best-fit parameters of the different spectral and spatial models.

Table 1. PSR J0359+5414 likelihood fit results for the two spatial scenarios and different spectral shapes.

Model              | TS    | ∆BIC | Extension [°] | N0 [×10−16 TeV−1 cm−2 s−1] | α           | β           | Ec [TeV]
PL, point-like     | 37.86 | −12  | −             | 1.34 (+0.34/−0.27)         | 2.60 ± 0.16 | −           | −
LOGP, point-like   | 39.18 | −1   | −             | 1.6 (+0.5/−0.4)            | 2.80 ± 0.23 | 0.14 ± 0.12 | −
PL+CO, point-like  | 37.98 | 0    | −             | 4 (+50/−4)                 | 2.5 ± 1.2   | −           | 500
PL, extended       | 40.27 | −12  | 0.2 ± 0.1     | 2.0 (+0.8/−0.6)            | 2.52 ± 0.16 | −           | −
LOGP, extended     | 41.72 | −1.2 | 0.2 ± 0.1     | 2.6 (+1.5/−1.0)            | 2.71 ± 0.22 | 0.14 ± 0.13 | −
PL+CO, extended    | 40.48 | 0    | 0.23 ± 0.1    | 14 (+5/−4)                 | 2.40 ± 0.19 | −           | 270 (+240/−130)

Note—All quoted errors are statistical. The best model is the one with the lowest BIC; ∆BIC is the difference between a model's BIC and that of the model with the highest BIC, so it quantifies the evidence against the latter. For both spatial scenarios, the PL+CO spectral model has the highest BIC. The energy cutoff of 500 TeV in the PL+CO point-like model is at the boundary of the fit.

The simplest model, assuming a point-like morphology and a non-broken PL, yields TS = 37.86. In general, more complicated models with extended morphology and spectral curvature yield a larger TS, since they have more degrees of freedom than the PL point-source model. Based on the BIC values, the preferred spectral model for both spatial assumptions is the PL, which has the lowest BIC in each case.
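The ∆BIC comparison can be reproduced from the TS values and parameter counts. A minimal sketch, under the assumption that both models are fit to the same n data bins (so the shared background likelihood cancels, and the n-dependent term cancels whenever the parameter counts match); the parameter counts below (three free spectral parameters each) are an assumption for illustration:

```python
import math


def delta_bic(ts_a, k_a, ts_b, k_b, n):
    """BIC(model a) - BIC(model b) for two models fit to the same n bins.
    With BIC = k*ln(n) - 2*ln(L) and TS = 2*ln(L/L_background), the common
    background likelihood cancels, leaving (k_a - k_b)*ln(n) - (ts_a - ts_b)."""
    return (k_a - k_b) * math.log(n) - (ts_a - ts_b)


# LOGP vs PL+CO (point-like): assuming 3 free spectral parameters each
# (N0, alpha, and beta or Ec), the n-dependence cancels exactly.
print(delta_bic(39.18, 3, 37.98, 3, n=10 ** 5))  # -1.2
```

With equal parameter counts the comparison reduces to the TS difference, which is why the LOGP-vs-PL+CO entry in Table 1 follows directly from the quoted TS values.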
Figure 2 presents the model and residual significance maps, and the residual histograms for the two spatial templates assuming a PL spectral model. The residual histogram shows the distribution of the significance values in each pixel within the region of interest centered at J0359. The residual significance is defined as the deviation from the background expectation after fitting and subtracting the modeled emission from J0359. If only random background fluctuations are left, the significance values follow a standard normal distribution (dashed red line). A positive tail is visible in the residual map of the point-source model. Although the current sample does not allow us to distinguish between the different spatial models, the residual histograms in Figure 2 indicate that the extended model provides a better fit.

The energy ranges of the detection are found to be 7-188 TeV at the 1σ level, 11-89 TeV at the 2σ level, and 15-51 TeV at the 3σ level with the PL point-source model. For the PL extended model, the energy range is 4-190 TeV at the 1σ level, 9-110 TeV at the 2σ level, and 17-78 TeV at the 3σ level.

The luminosity of the VHE emission is L15−51 TeV = 3.6 × 10^32 erg s−1 for a distance of 3.45 kpc. The typical energies of the synchrotron and inverse Compton photons produced by the same electrons are related by Esyn ≈ 2.1 keV (EIC/30 TeV)(B/10 µG) (e.g., Aharonian et al. 1997), where B is the magnetic field strength in the PWN. As the magnetic energy density of a PWN is usually higher than the energy density of the Cosmic Microwave Background (CMB) and infrared (IR) photons of the ISM, the synchrotron flux of a typical PWN at keV energies is expected to be higher than its inverse Compton emission at the HAWC energies (see, e.g., the Crab Nebula; H. E. S. S. Collaboration 2020). However, the X-ray luminosity of J0359's PWN, L0.3−10 keV = 2.8 × 10^31 erg s−1 (Zyuzin et al. 2018), is instead ∼ 13 times lower than the VHE gamma-ray luminosity. This suggests the existence of a VHE electron population outside the region where the nebula is energetically dominant, which is expected in the case of a TeV halo (Linden et al. 2017; López-Coto et al. 2022).
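The numbers in the preceding paragraph can be checked directly. A minimal sketch; the flux value used in the round-trip below is back-computed from the quoted luminosity and distance, not an independently measured quantity:

```python
import math

KPC_CM = 3.0857e21  # centimeters per kiloparsec


def esyn_kev(e_ic_tev, b_mug):
    """Typical synchrotron photon energy (keV) for electrons that
    inverse-Compton scatter to e_ic_tev TeV in a field of b_mug microgauss,
    using the scaling relation quoted in the text (Aharonian et al. 1997)."""
    return 2.1 * (e_ic_tev / 30.0) * (b_mug / 10.0)


def luminosity(flux_erg_cm2_s, d_kpc):
    """Isotropic luminosity L = 4*pi*d^2*F in erg/s."""
    return 4.0 * math.pi * (d_kpc * KPC_CM) ** 2 * flux_erg_cm2_s


print(esyn_kev(30.0, 10.0))  # 2.1 keV at the reference values
print(3.6e32 / 2.8e31)       # ~13: the VHE-to-X-ray luminosity ratio
```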
Figure 3 presents the broadband SED of J0359. The pulsar, PWN, and TeV halo components are shown in grey, black, and blue/green colors, respectively. The multi-wavelength data points include an upper limit on the pulsar emission from the Effelsberg telescope at 1400 MHz (Grießmeier et al. 2021), X-ray measurements of the pulsar and PWN (Zyuzin et al. 2018), γ-ray observations of the pulsar from 50 MeV to 1 TeV by the Fermi-LAT (Abdollahi et al. 2020), and the VHE flux of the halo measured by HAWC.

3.2. Nearby pulsar B0355+54

Another pulsar, PSR B0355+54 (B0355), is only 0.09 degrees from J0359. B0355 is classified as a radio-loud pulsar with a characteristic age of 564 kyr and a spin-down power Ė = 4.5 × 10^34 erg s−1 at a distance of 1 kpc. B0355 has not been detected at high or very high energies (Benbow et al. 2021). Below we investigate whether B0355 is related to the HAWC excess emission.

Figure 2. Comparison of the model maps, significance maps, and 1-D residual histograms for point-like and extended spatial models: (a) PL, point-like model; (b) PL, extended model; (c) PL, point-like residuals; (d) PL, extended residuals; (e) PL, point-like residual histogram; (f) PL, extended residual histogram. The source position is fixed to PSR J0359+5414 (black cross in the significance maps) and the spectrum is assumed to be a non-broken power law. The best-fit parameter values are listed in Table 1.

We performed likelihood fits and compared three scenarios: 1) the VHE emission is associated only with J0359, 2) the VHE emission is associated only with B0355, and 3) the VHE emission is contributed by both sources. We present the detailed results of scenarios 2 and 3 in Appendix A and B, respectively. We find that the two-source scenario (scenario 3) is disfavored compared to the single-source scenarios. Scenario 1 (J0359) yields lower BIC values than scenario 2 (B0355) for various spectral and spatial models, though the preference for scenario 1 is not statistically significant.
|
484 |
+
The VERITAS telescope searched for emission from
|
485 |
+
the PWN of B0355 and posed tight upper limits on the
|
486 |
+
TeV flux (Benbow et al. 2021). The right panel of Figure
|
487 |
+
3 shows the broadband SED of B0355, which includes
|
488 |
+
the radio observation of the pulsar (Lorimer et al. 1995),
|
489 |
+
X-ray observation of the pulsar and its tail at 0.5-8 keV
|
490 |
+
(Klingler et al. 2016), and the VERITAS upper limits at
|
491 |
+
95% C.L. between 1 and 10 TeV (Benbow et al. 2021).
|
492 |
+
For comparison, we show the best-fit flux between 16
|
493 |
+
and 59 TeV obtained by assuming that the VHE emis-
|
494 |
+
sion is centered at the position of B0355. The upper
|
495 |
+
limits set by VERITAS on B0355’s tail are in tension
|
496 |
+
with the HAWC’s flux at 16 TeV for both the point-
|
497 |
+
like and extended models. This suggests that the excess
|
498 |
+
emission observed by HAWC is more likely associated
|
499 |
+
with J0359 than B0355, though future multi-wavelength
|
500 |
+
observation is needed to confirm the finding.
|
501 |
+
4. SYSTEMATIC UNCERTAINTIES

The systematic uncertainties arising from the detector performance and simulations are described in Abeysekara et al. (2017b) and Abeysekara et al. (2019). The systematic contribution is calculated in a single energy band for each spectral and spatial parameter, with the positive (negative) shift results added in quadrature to account for the upward (downward) uncertainties. The systematic uncertainties are calculated for the PL spectral model and for both the point-like and extended templates.

To account for additional sources of systematic uncertainty, such as variations in the atmosphere that are not considered in the simulations, a 10% error has been added to the normalization flux (Albert et al. 2020). The total systematic uncertainties are reported in Table 2.
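The quadrature combination described above can be sketched as follows. This is an illustrative example, not the HAWC analysis pipeline: the function and the shift values are hypothetical, chosen only to show how upward and downward shifts are combined separately.

```python
import math

def combine_systematics(shifts):
    """Combine signed parameter shifts from independent systematic studies.

    Positive shifts are added in quadrature to form the upward uncertainty,
    negative shifts to form the downward uncertainty, as described in the text.
    """
    up = math.sqrt(sum(s ** 2 for s in shifts if s > 0))
    down = -math.sqrt(sum(s ** 2 for s in shifts if s < 0))
    return down, up

# Hypothetical spectral-index shifts from four detector-related effects
down, up = combine_systematics([+0.2, -0.1, +0.22, -0.11])
print(f"{down:+.2f} / {up:+.2f}")  # -0.15 / +0.30
```

Because the positive and negative shifts never mix, the resulting uncertainty is in general asymmetric, as the per-parameter values in Table 2 are.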
[Figure 3 panels (image content not recoverable from the text extraction): significance maps of the region around PSR J0359+5414 in Galactic coordinates (l [°], b [°]) for the point-like and extended fits; 1-D significance histograms with Gaussian fits (mean = 0.395 ± 0.043, width = 1.249 ± 0.049; mean = 0.063 ± 0.019, width = 0.964 ± 0.018); and broadband SEDs, E^2 dN/dE (erg cm^−2 s^−1) versus energy (eV), with data from HAWC, Chandra, Fermi-LAT, and Effelsberg (left) and from HAWC, Chandra, VERITAS, and Lovell (right).]

Figure 3. Left panel: Spectral energy distribution (SED) of the emission around PSR J0359+5414, including the TeV halo (green and blue bands corresponding to the HAWC observation for a point-like and an extended model, respectively, as explained in Section 3), the PWN (black band at 0.3-10 keV; Zyuzin et al. 2018), and the pulsar (in grey; including the upper limit in radio at 1400 MHz from Grießmeier et al. 2021, the band in X-rays at 0.3-10 keV from Zyuzin et al. 2018, and the data points or limits at 100 MeV-1 TeV from Abdollahi et al. 2020). Right panel: SED of the emission around PSR B0355+54. The green and blue bands indicate the TeV excess emission obtained from fits to the HAWC data with models centered at B0355 with point-like and extended spatial profiles, respectively (see Appendix A). For comparison, the upper limits on VHE gamma-ray emission from the PWN by VERITAS with hard spectral cuts are shown in orange, with the upper and lower bars corresponding to region sizes of 0°.1 and 0°.235, respectively (Benbow et al. 2021). The black band at 0.5-8 keV indicates the PWN in X-rays (Klingler et al. 2016). The grey band at 0.5-8 keV (Klingler et al. 2016) and the circular data markers at 1400 and 1600 MHz (Lorimer et al. 1995) correspond to the emission from the pulsar. The HAWC bands correspond to statistical uncertainties only.

Table 2. Systematic uncertainties considering a PL for each spatial scenario.

  Model        Parameter    Lower sys.   Upper sys.
  Point-like   N0           −3.9         4.6
               α            −0.15        0.3
  Extended     N0           −4.6         3.4
               α            −0.05        0.03
               extension    −0.02        0.02

Note—N0 is in units of 10^−17 TeV^−1 cm^−2 s^−1 and the extension is in degrees.

5. CONCLUSIONS

With 2321 days of HAWC observations, VHE γ-ray emission is detected in a relatively source-empty region of the outer Galaxy. Based on likelihood fits with different spectral and spatial models to the HAWC data and the comparison of the VHE γ-ray flux with multi-wavelength observations, we conclude that the emission is a TeV halo candidate associated with the pulsar PSR J0359+5414.

If this TeV emission is a halo, it would share similar characteristics with the existing population. We find a 95% upper limit on the extension of the emission of 0°.41 (with the PL-extended model in Table 1), corresponding to a physical size of R_ul = 25 (d/3.45 kpc) pc. The diffusion coefficient of the halo is constrained to be D ≲ R_ul^2/(4 t_e) = 3.7 × 10^27 cm^2 s^−1 (t_e/12 kyr)^−1 (d/3.45 kpc)^2, where t_e ∼ 12 kyr (E_e/100 TeV)^−1 is the cooling time of an electron at energy E_e by up-scattering the CMB. As for the other halos (Abeysekara et al. 2017a), this diffusion coefficient is much lower than the average diffusion coefficient of the ISM.
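The quoted numbers can be reproduced with a short back-of-the-envelope calculation. This is a sketch using standard unit conversions (the conversion constants are ours, not from the paper), re-deriving the physical size from the 0°.41 extension limit and the diffusion bound D ≲ R_ul²/(4 t_e):

```python
import math

PC_CM = 3.086e18           # 1 parsec in cm
KYR_S = 1e3 * 3.156e7      # 1 kyr in seconds

# 95% extension upper limit of 0.41 deg at the assumed distance of 3.45 kpc
theta_ul = math.radians(0.41)
d_pc = 3.45e3
R_ul_pc = d_pc * theta_ul                # small-angle approximation
print(f"R_ul ~ {R_ul_pc:.0f} pc")        # ~25 pc, matching the quoted size

# Diffusion bound D <~ R_ul^2 / (4 t_e), with t_e = 12 kyr the IC-CMB
# cooling time of ~100 TeV electrons
t_e = 12 * KYR_S
D = (R_ul_pc * PC_CM) ** 2 / (4 * t_e)
print(f"D <~ {D:.1e} cm^2/s")            # ~3.8e27, consistent with the quoted 3.7e27
```

The small residual difference from the published 3.7 × 10^27 cm^2 s^−1 comes from rounding R_ul to 25 pc and from the exact values adopted for the parsec and the year.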
The candidate halo of J0359 joins the observation of extended VHE emission surrounding PSR J0622+3749 (Aharonian et al. 2021) as the first evidence of TeV halos around radio-quiet pulsars. Their presence suggests that the formation of halos is insensitive to the configuration of the pulsar magnetosphere, in particular the geometry of the γ-ray and radio beams (Harding 2001). With an age of 70 kyr, J0359 is younger than the other pulsars with halos. It is likely in a transition between the so-called relic and halo stages of a PWN, the boundaries of which are not well defined and have motivated different classification criteria for TeV halos (Linden et al. 2017; Giacinti et al. 2020; López-Coto et al. 2022). Our observation of TeV halo features associated with J0359 implies that high-energy particles may already start escaping into the ISM in the late relic stage.

Our observation provides spectral evidence for a TeV halo nature of J0359. Future data from HAWC and multi-wavelength follow-ups of this new TeV source are crucial to confirming its nature via morphological studies that identify the halo extension and exclude associations with the nearby pulsars. Future observations of young to middle-aged pulsars like PSR J0359+5414 with wide-field γ-ray experiments and imaging atmospheric Cherenkov telescopes may provide further insight into the evolution of TeV PWNe and their connection with TeV halos.
ACKNOWLEDGMENTS

We acknowledge the support from: the US National Science Foundation (NSF); the US Department of Energy Office of High-Energy Physics; the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory; Consejo Nacional de Ciencia y Tecnología (CONACyT), México, grants 271051, 232656, 260378, 179588, 254964, 258865, 243290, 132197, A1-S-46288, A1-S-22784, cátedras 873, 1563, 341, 323, Red HAWC, México; DGAPA-UNAM grants IG101320, IN111716-3, IN111419, IA102019, IN110621, IN110521; VIEP-BUAP; PIFI 2012, 2013, PROFOCIE 2014, 2015; the University of Wisconsin Alumni Research Foundation; the Institute of Geophysics, Planetary Physics, and Signatures at Los Alamos National Laboratory; Polish Science Centre grant DEC-2017/27/B/ST9/02272; Coordinación de la Investigación Científica de la Universidad Michoacana; Royal Society - Newton Advanced Fellowship 180385; Generalitat Valenciana, grant CIDEGENT/2018/034; The Program Management Unit for Human Resources & Institutional Development, Research and Innovation, NXPO (grant number B16F630069); Coordinación General Académica e Innovación (CGAI-UdeG), PRODEP-SEP UDG-CA-499; Institute of Cosmic Ray Research (ICRR), University of Tokyo. H.F. acknowledges support by NASA under award number 80GSFC21M0002. We also acknowledge the significant contributions over many years of Stefan Westerhoff, Gaurang Yodh and Arnulfo Zepeda Dominguez, all deceased members of the HAWC collaboration. Thanks to Scott Delay, Luciano Díaz and Eduardo Murrieta for technical support.
REFERENCES

Abdo, A. A., Ackermann, M., Ajello, M., et al. 2010, ApJS, 188, 405, doi: 10.1088/0067-0049/188/2/405
Abdollahi, S., Acero, F., Ackermann, M., et al. 2020, ApJS, 247, 33, doi: 10.3847/1538-4365/ab6bcb
Abeysekara, A. U., Albert, A., Alfaro, R., et al. 2017a, Science, 358, 911, doi: 10.1126/science.aan4880
—. 2017b, ApJ, 843, 39, doi: 10.3847/1538-4357/aa7555
—. 2019, ApJ, 881, 134, doi: 10.3847/1538-4357/ab2f7d
Abeysekara, A. U., et al. 2021, PoS, ICRC2021, 828, doi: 10.22323/1.395.0828
Acero, F., Ackermann, M., Ajello, M., et al. 2015, ApJS, 218, 23, doi: 10.1088/0067-0049/218/2/23
Aharonian, F., An, Q., Axikegu, Bai, L. X., et al. 2021, PhRvL, 126, 241103, doi: 10.1103/PhysRevLett.126.241103
Aharonian, F. A., Atoyan, A. M., & Kifune, T. 1997, MNRAS, 291, 162, doi: 10.1093/mnras/291.1.162
Albert, A., Alfaro, R., Alvarez, C., et al. 2020, ApJ, 905, 76, doi: 10.3847/1538-4357/abc2d8
Benbow, W., Brill, A., Buckley, J. H., et al. 2021, ApJ, 916, 117, doi: 10.3847/1538-4357/ac05b9
Clark, C. J., Wu, J., Pletsch, H. J., et al. 2017, ApJ, 834, 106, doi: 10.3847/1538-4357/834/2/106
De La Torre Luque, P., Fornieri, O., & Linden, T. 2022, arXiv e-prints, arXiv:2205.08544. https://arxiv.org/abs/2205.08544
Evoli, C., Linden, T., & Morlino, G. 2018, PhRvD, 98, 063017, doi: 10.1103/PhysRevD.98.063017
Giacinti, G., Mitchell, A. M. W., López-Coto, R., et al. 2020, A&A, 636, A113, doi: 10.1051/0004-6361/201936505
Grießmeier, J. M., Smith, D. A., Theureau, G., et al. 2021, A&A, 654, A43, doi: 10.1051/0004-6361/202140841
H. E. S. S. Collaboration. 2020, Nature Astronomy, 4, 167, doi: 10.1038/s41550-019-0910-0
H. E. S. S. Collaboration, Abdalla, H., Abramowski, A., et al. 2018, A&A, 612, A2, doi: 10.1051/0004-6361/201629377
Harding, A. K. 2001, in American Institute of Physics Conference Series, Vol. 558, High Energy Gamma-Ray Astronomy: International Symposium, ed. F. A. Aharonian & H. J. Völk, 115-126, doi: 10.1063/1.1370785
Hooper, D., Cholis, I., Linden, T., & Fang, K. 2017, PhRvD, 96, 103013, doi: 10.1103/PhysRevD.96.103013
Kass, R. E., & Raftery, A. E. 1995, Journal of the American Statistical Association, 90, 773, doi: 10.1080/01621459.1995.10476572
Klingler, N., Rangelov, B., Kargaltsev, O., et al. 2016, ApJ, 833, 253, doi: 10.3847/1538-4357/833/2/253
Linden, T., Auchettl, K., Bramante, J., et al. 2017, PhRvD, 96, 103016, doi: 10.1103/PhysRevD.96.103016
Liu, R.-Y. 2022, arXiv e-prints, arXiv:2207.04011. https://arxiv.org/abs/2207.04011
López-Coto, R., de Oña Wilhelmi, E., Aharonian, F., Amato, E., & Hinton, J. 2022, Nature Astronomy, 6, 199, doi: 10.1038/s41550-021-01580-0
Lorimer, D. R., Yates, J. A., Lyne, A. G., & Gould, D. M. 1995, MNRAS, 273, 411, doi: 10.1093/mnras/273.2.411
Plucinsky, P. P., Snowden, S. L., Aschenbach, B., et al. 1996, ApJ, 463, 224, doi: 10.1086/177236
Smith, A. J., & HAWC Collaboration. 2015, in International Cosmic Ray Conference, Vol. 34, 34th International Cosmic Ray Conference (ICRC2015), 966. https://arxiv.org/abs/1508.05826
Sudoh, T., Linden, T., & Beacom, J. F. 2019, PhRvD, 100, 043016, doi: 10.1103/PhysRevD.100.043016
Vianello, G., Lauer, R. J., Younk, P., et al. 2015, arXiv e-prints, arXiv:1507.08343. https://arxiv.org/abs/1507.08343
Wakely, S. P., & Horan, D. 2008, in International Cosmic Ray Conference, Vol. 3, International Cosmic Ray Conference, 1341-1344
Wilks, S. S. 1938, Annals Math. Statist., 9, 60, doi: 10.1214/aoms/1177732360
Zyuzin, D. A., Karpova, A. V., & Shibanov, Y. A. 2018, MNRAS, 476, 2177, doi: 10.1093/mnras/sty359
Table 3. Results of the likelihood fit assuming that the only emitting source is PSR B0355+54. The PL spectral model along with the two different spatial models were tested.

  Spatial model   TS      ∆BIC   Extension [°]   N0 [TeV^−1 cm^−2 s^−1]       α
  Point-like      35.86   −1.9   0.0             (1.28 +0.34/−0.27) × 10^−16   2.56 ± 0.17
  Extended        41.83   −1.5   0.22 ± 0.09     (2.0 +0.7/−0.5) × 10^−16      2.51 ± 0.15

Note—All associated errors are statistical. ∆BIC is obtained by comparing the BIC value with that of the best spectral model fit for both spatial models assuming that the emission comes from J0359 (Section 3).
APPENDIX

A. PSR B0355+54 FITTING RESULTS

In this section, we explore the possibility that the TeV excess comes entirely from B0355. We fit models with a power-law (PL) spectrum and the spatial templates described in Section 3. The results are summarized in Table 3. Assuming a point-like morphology, the energy ranges at which the source is detected are 7-180 TeV at the 1σ level, 11-90 TeV at the 2σ level, and 17-54 TeV at the 3σ level. For an extended morphology, the energy ranges are found to be 8-155 TeV at the 1σ level, 11-90 TeV at the 2σ level, and 17-59 TeV at the 3σ level.

As the single-source scenarios are not nested models, we employed the Bayesian Information Criterion (BIC) for model selection. The difference in BIC values, ∆BIC, quantifies the evidence against the model with the higher BIC value. According to Kass & Raftery (1995), if ∆BIC is between 0 and 2, it is not clear which model is preferred; ∆BIC between 2 and 10 and above 10 indicates a slight and a strong preference, respectively, for the model with the smallest BIC.
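The Kass & Raftery (1995) interpretation scale used here can be written down directly. The thresholds (2 and 10) are from the text; the helper function itself is our illustrative sketch:

```python
def bic_preference(bic_a, bic_b):
    """Verbal strength of evidence for the lower-BIC model (Kass & Raftery 1995)."""
    delta = abs(bic_a - bic_b)
    if delta < 2:
        return "inconclusive"
    elif delta <= 10:
        return "slight"
    return "strong"

# The J0359- and B0355-centered fits differ by |Delta BIC| ~ 1.5-1.9 (Table 3),
# so neither single-source model is clearly preferred:
print(bic_preference(100.0, 101.9))  # inconclusive
```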
The small difference in ∆BIC between the fits of models centered at J0359 and at B0355 does not allow us to distinguish between the models. This is expected, as the angular distance between the two pulsars is smaller than the spatial resolution of HAWC. However, the tension between the VERITAS limits on B0355 and the HAWC fluxes, as explained in Section 3, suggests that the TeV emission is more likely associated with J0359.
B. FITTING RESULTS OF A TWO-SOURCE SCENARIO

We further explore a scenario where both J0359 and B0355 contribute to the TeV emission observed by HAWC. Such a two-source model is disfavored by the data.

Table 4 presents the results of the two-source models. We consider three combinations of spatial profiles for the two sources: (A) both sources are point-like, (B) both sources are extended with a Gaussian shape, and (C) J0359 is an extended source and B0355 is point-like. The energy spectrum is assumed to be a PL. The normalization flux N0 and the spectral index α in each fit were free to vary, while the positions of the sources were fixed in all scenarios.

The ∆TS column shows the gain in test statistic obtained by adding an extra source to the one-source model presented in Section 3 and Appendix A (the baseline model considers pure background plus the emission from the other source). The two-source model is disfavored in all cases.
Table 4. Results of the likelihood fit assuming that the excess observed comes from two sources: PSR J0359+5414 and PSR B0355+54. The spectral model for all the spatial models is a PL.

  Two-source model   Source   ∆TS     ∆BIC        Extension [°]          N0 [TeV^−1 cm^−2 s^−1]             α
  Model A            J0359    2.32                0.0                    (1.0 +0.5/−0.9) × 10^−16           2.63 +0.6/−0.20
                     B0355    0.32    −24         0.0                    (0.00034 +6/−0.00024) × 10^−13     2.4 +1.3/−5
  Model B            J0359    8.73                1.500 +0.18/−0.004     (3.3 +1.5/−3.3) × 10^−16           2.2 +0.4/−1.3
                     B0355    10.29   −26         0.14 +0.08/−0.15       (1.5 +0.5/−0.4) × 10^−16           2.56 ± 0.20
  Model C            J0359    13.02               1.5000 ± 0.0010        (0.04 +4/−0.04) × 10^−14           2.2 ± 2.8
                     B0355    8.62    −26 / −15   0.0                    (3.2 +3.0/−1.5) × 10^−16           2.60 ± 0.28

Note—All associated errors are statistical. Model A corresponds to a scenario where both sources are point-like, model B assumes that both sources are extended with a Gaussian shape, and model C assumes that PSR J0359+5414 is an extended source with a Gaussian shape and PSR B0355+54 is a point-like source. ∆BIC is obtained by comparing the BIC value with that of the best model fit assuming that the emission comes from J0359 (Section 3): for model A, with the PL point-like model; for model B, with the PL Gaussian model; and for model C, with the two previous models.
69E3T4oBgHgl3EQfpwru/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
79FAT4oBgHgl3EQfoh2y/content/tmp_files/2301.08635v1.pdf.txt
ADDED
1 |
+
Astronomy & Astrophysics manuscript no. main
|
2 |
+
©ESO 2023
|
3 |
+
January 23, 2023
|
4 |
+
Supercritical colliding wind binaries
|
5 |
+
Leandro Abaroa1, 2⋆, Gustavo E. Romero1, 2, and Pablo Sotomayor1, 2
|
6 |
+
1 Instituto Argentino de Radioastronomía, CICPBA-CONICET-UNLP
|
7 |
+
Villa Elisa, La Plata, Argentina
|
8 |
+
2 Facultad de Cs. Astronómicas y Geofísicas, Universidad Nacional de La Plata
|
9 |
+
Paseo del Bosque S/N (1900), La Plata, Argentina
|
10 |
+
Received / Accepted
|
11 |
+
ABSTRACT
|
12 |
+
Context. Particle-accelerating colliding-wind binaries (PACWBs) are systems that are formed by two massive and hot stars and
|
13 |
+
produce nonthermal radiation. The key elements of these systems are fast winds and the shocks that they create when they collide.
|
14 |
+
Binaries with nonaccreting young pulsars have also been detected as nonthermal emitters, again as a consequence of the wind–wind
|
15 |
+
interaction. Black holes might produce nonthermal radiation by this mechanism if they accrete at super-Eddington rates. In such cases,
|
16 |
+
the disk is expected to launch a radiation-driven wind, and if this wind has an equatorial component, it can collide with the companion
|
17 |
+
star yielding a PACWB. These systems are supercritical colliding wind binaries.
|
18 |
+
Aims. We aim to characterize the particle acceleration and nonthermal radiation produced by the collision of winds in binary systems
|
19 |
+
composed of a superaccreting black hole and an early-type star.
|
20 |
+
Methods. We estimated the terminal velocity of the disk-driven wind by calculating the spatial distribution of the radiation fields
|
21 |
+
and their effect on disk particles. We then found the location of the wind collision region and calculated the timescales of energy
|
22 |
+
gain and losses of relativistic particles undergoing diffusive particle acceleration. With this information, we were able to compute
|
23 |
+
the associated spectral energy distribution of the radiation. We calculated a number of specific models with different parameters to
|
24 |
+
explore this scenario.
|
25 |
+
Results. We find that the interaction of winds can produce nonthermal emission from radio up to tens of GeV, with luminosities in
|
26 |
+
the range of ∼ 1033–1035 erg s−1, which for the most part are contributed by electron synchrotron and inverse Compton radiation.
|
27 |
+
Conclusions. We conclude that supercritical colliding wind binaries, such as some ultraluminous X-ray sources and some Galactic
|
28 |
+
X-ray binaries, are capable of accelerating cosmic rays and producing nonthermal electromagnetic emission from radio to γ-rays, in
|
29 |
+
addition to the thermal components.
|
30 |
+
Key words. acceleration of particles – accretion, accretion disks – relativistic processes – X-ray: binaries – gamma-rays: general –
|
31 |
+
radiation mechanism: non-thermal
|
32 |
+
1. Introduction

Early-type stars are very hot and their radiation fields can launch powerful particle winds (Lamers & Cassinelli 1999). Such winds quickly reach supersonic velocities and accelerate to terminal velocities in the range (2−4) × 10^3 km s^-1 (Abbott 1978; Muijres et al. 2012). When two massive stars with powerful winds form a binary system, the winds collide, producing shocks separated by a contact discontinuity from where matter is evacuated (e.g., Stevens et al. 1992). A reverse shock moves in the wind of each star. When such shocks are adiabatic, they can accelerate suprathermal particles up to relativistic energies (Eichler & Usov 1993; Pittard et al. 2020). These particles, in turn, cool mainly by synchrotron radiation and inverse Compton upscattering of stellar photons, emitting nonthermal radiation (Eichler & Usov 1993; Benaglia & Romero 2003; Reimer et al. 2006; De Becker 2007; Reitberger et al. 2014; del Palacio et al. 2016; Pittard et al. 2021). Proton acceleration can also lead to gamma-ray emission through pp collisions and the subsequent π^0 decays (e.g., Balbo & Walter 2017; Grimaldo et al. 2019).

Send offprint requests to: Leandro Abaroa

The actual fraction of particle-accelerating colliding-wind binaries (PACWBs) among massive colliding wind binaries
(CWBs) is not well known. De Becker & Raucq (2013) list 43 confirmed cases, mostly detected at radio wavelengths. These authors mention several other candidates, and new sources have been found since the publication of this latter work (e.g., Benaglia et al. 2015; del Palacio et al. 2016). The total kinetic power of these systems ranges from ∼ 10^34 to more than 10^37 erg s^-1. The most extreme cases are WR89, WR98, and WR140, with powers of between 6 and 8 times 10^37 erg s^-1. Less than 10^-7 of this power is finally radiated through synchrotron radio emission. The most luminous nonthermal radio-emitting CWB is WR140, with a total radio luminosity of ∼ 2.6 × 10^30 erg s^-1.

Contrary to the radio emission, high-energy radiation has been more difficult to detect in CWBs. At X-rays, the thermal component usually dominates and hinders the detection of nonthermal components. In the gamma-ray domain, only two systems have been detected so far: η Carinae and WR11. The latter is the nearest known CWB. At d ∼ 340 pc, it shows a gamma-ray luminosity in the Fermi-LAT energy range of Lγ = (3.7 ± 0.7) × 10^31 erg s^-1. This luminosity amounts to ∼ 6 × 10^-6 of the total wind kinetic power (Pshirkov 2016). Similar fractions for other, more distant PACWBs yield fluxes that are undetectable with the currently available instrumentation. The notable exception is the mentioned η Carinae.

Article number, page 1 of 12
arXiv:2301.08635v1 [astro-ph.HE] 20 Jan 2023
A&A proofs: manuscript no. main
η Carinae is a heavily obscured and peculiar object. The system includes a luminous blue variable (LBV) star of about 90 solar masses and a secondary Wolf-Rayet (WR) star of ∼ 30 solar masses. η Carinae is the most luminous binary in the Galaxy, with a bolometric luminosity of about 5 × 10^6 L⊙. The mass-loss rate of the primary is extremely high, reaching up to 10^-3 M⊙ yr^-1. The binary was detected in hard X-rays by INTEGRAL (Leyder et al. 2008) and Suzaku (Okazaki et al. 2008), suggesting the presence of relativistic electrons in the system. AGILE detected gamma rays from η Carinae for the first time (Tavani et al. 2009). The system was subsequently detected by Fermi (Abdo et al. 2010) with a luminosity of ∼ 10^34 erg s^-1. The observations reveal the presence of a hard component in the spectrum around periastron, which disappears near apastron. Such a component has been explained through the decay of π^0 produced by relativistic protons interacting with the dense stellar wind (Farnier et al. 2011). There is clear variability with the orbital phase. Different behaviors are observed at low (0.3−10 GeV) and high (> 10 GeV) gamma-ray energies. The low-energy component is likely produced by inverse Compton scattering of stellar photons (Balbo & Walter 2017).

The case of η Carinae suggests that super-Eddington systems might be particularly powerful PACWBs. When a compact object such as a black hole accretes at rates that exceed the Eddington rate, the radiation pressure on the surface of the disk will overcome the gravitational attraction and matter will be expelled from the surface of the disk in the form of a strong wind. Such winds can rival and even surpass those of the most luminous CWBs in terms of kinetic power. When the donor star is a hot early-type star also endowed with a wind, a supercritical colliding wind binary (SCWB) can be formed. Such systems should have strong shocks and are potential particle accelerators and nonthermal emitters.

In our Galaxy, there are some examples of black hole X-ray binaries with disks that launch strong outflows. Two examples are GRS 1915+105 (Mirabel & Rodríguez 1994; Neilsen & Lee 2009) and V404 Cygni (Muñoz-Darias et al. 2016; Tetarenko et al. 2017). However, the donor star in both of these systems is a low-mass star. Another well-known supercritical source is the Galactic microquasar SS433, which is a confirmed nonthermal emitter and might be a possible example of a SCWB in our Galaxy (see Fabrika 2004, for an extensive review). Many ultraluminous X-ray sources (ULXs) detected in nearby galaxies might also belong to this category of sources.

In this paper, we explore the CWB scenario where one of the winds is launched by a supercritical disk around a black hole. We start by characterizing the disk model and the radiation fields it produces (Sections 2.1 and 2.2). We then investigate the motion of particles under the radiation pressure in such fields (Section 2.3). This allows us to obtain reasonable estimates of the terminal velocities expected for the matter ejected in the direction of the companion star. We then proceed to study the wind interactions, shock adiabaticity, and other issues relevant to particle acceleration in Sect. 3. This is followed by estimates of energy losses for accelerated particles, particle distributions, and calculations of the nonthermal output (Sect. 4). In Section 5 we present results for some specific models, with different choices of the accretor mass and the accretion power. The donor star is assumed to be a hot O5 V star with a temperature of 41500 K and a kinetic power of a few times 10^37 erg s^-1. We finally apply our model to the extragalactic binary system NGC 4190 ULX 1. After a discussion (Sect. 7), we close with a summary and our conclusions.
2. The accretion disk and its wind

We assume that the X-ray binary is composed of a Population I star and a nonrotating stellar-mass black hole (BH) in a close orbit.

The orbital semi-axis a, the stellar radius, and the mass ratio of the system, q = M∗/M_BH, satisfy (Eggleton 1983):

R∗_lob = a · 0.49 q^(2/3) / [0.6 q^(2/3) + ln(1 + q^(1/3))],   (1)

where M∗ is the mass of the star and M_BH the mass of the BH. Hence, the star overflows its Roche lobe R∗_lob, transfers mass to the BH through the Lagrange point, and an accretion disk is formed due to the angular momentum of the system.

In this section, we describe the semi-analytical models we use to study the accretion disk, the spatial distribution of the radiation fields produced by the disk, and the wind ejected from its surface. We assume a Newtonian potential for the gravity field, because we are interested in weak-field processes.
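Equation (1) is straightforward to evaluate numerically. A minimal sketch (the separation and mass ratio below are illustrative values, not parameters taken from the paper):

```python
import math

def roche_lobe_radius(a, q):
    """Effective Roche-lobe radius R*_lob of the donor (Eggleton 1983),
    for orbital separation a and mass ratio q = M_star / M_BH."""
    return a * 0.49 * q**(2.0/3.0) / (0.6 * q**(2.0/3.0) + math.log(1.0 + q**(1.0/3.0)))

# Example: a donor six times more massive than the accretor (q = 6),
# separation in arbitrary length units.
print(roche_lobe_radius(20.0, 6.0))
```

The formula is accurate to a few percent over the full range of mass ratios, which is why it is the standard choice in semi-analytical binary models.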
2.1. Accretion disk

We adopt cylindrical coordinates with axial symmetry along the z-axis, neglect the self-gravity of the disk gas, and consider a nonmagnetized disk with a super-Eddington accretion rate at the outer part of the disk, ṁ_input = Ṁ_input/Ṁ_Edd ≫ 1, where Ṁ_input is the input of mass per unit time in the accretion disk. The Eddington rate is given by

Ṁ_Edd = L_Edd/(η c^2) ≈ 2.2 × 10^-8 (M_BH/M⊙) M⊙ yr^-1 = 1.4 × 10^18 (M_BH/M⊙) g s^-1,   (2)

with L_Edd the Eddington luminosity^1, η ≈ 0.1 the accretion efficiency, and c the speed of light.
The critical or spherization radius, given by

r_crit ∼ 40 ṁ_input r_g,   (3)

separates the disk in two regions: a standard outer disk (Shakura & Sunyaev 1973) and a radiation-dominated inner disk with advection (Fukue 2004). In relation (3), r_g = GM_BH/c^2 is the gravitational radius of the BH, with G the gravitational constant. In the disk model, the advection is parameterized as a fraction f of the viscous heating, Q_adv = f Q_vis, and the disk becomes geometrically thick in the inner region, where the ejection of winds by the radiation force helps to regulate the mass-accretion rate onto the BH (Ṁ_acc) at the Eddington rate^2.

As the disk is optically thick, we assume that it radiates locally as a blackbody. The radiation intensity of a plasma element in the comoving frame of the outer and inner disk, at a radius r_d measured on the equatorial plane, is

I_0 = (1/π) σ T_eff^4 = { (1/π) [3G M_BH Ṁ_input/(8π r_d^3)] f_in,   r_d > r_crit
                          (1/π) (3/4) √c3 [L_Edd/(4π r_d^2)],        r_d ≤ r_crit,   (4)

where √c3 = H/r_d = tan δ, with H the scale height of the disk, δ the disk opening angle, and f_in = 1 − r_in/r_d ≈ 1 (as r_d > r_crit, then r_d ≫ r_in). Here, c3 (along with c1 and c2 used in the following section) is a coefficient that depends on the advection parameter, the adiabatic index of the gas γ, and the viscosity α (see Appendix in Fukue 2004). We adopt a disk with f = 0.5 and α = 0.5; that is, we assume equipartition between advection and viscous heating. The index γ = 4/3 corresponds to a radiation-dominated gas in the inner disk. These values lead to a disk-opening angle of δ = 30°.

^1 The Eddington luminosity is defined as the luminosity required to balance the attractive gravitational pull of the accreting object by radiation pressure.
^2 Ṁ_acc = Ṁ_input in the outer region of the disk and Ṁ_acc = Ṁ_input r_d/r_crit in the inner region (Fukue 2004).

L. Abaroa et al.: Super critical colliding wind binaries

Fig. 1: Geometry of the present disk model. The radiation fields are calculated in the rz plane, where φ = 0. Here, Q is the position of the plasma element of the disk and P the point of calculation on the rz plane. The scale height of the disk is H, and D is the distance between Q and P. The short arrow is the direction cosine j^µ. This figure is adapted from Watarai & Fukue (1999).
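The characteristic scales of Eqs. (2)–(3) can be combined in a short numerical sketch (CGS units; the black-hole mass and input rate below are illustrative, not the values adopted later in the paper):

```python
G    = 6.674e-8   # gravitational constant [cgs]
c    = 2.998e10   # speed of light [cm/s]
MSUN = 1.989e33   # solar mass [g]

def disk_scales(m_bh, mdot_input):
    """Gravitational radius, Eddington accretion rate (eta = 0.1), and
    spherization radius r_crit ~ 40 mdot_input r_g (Eqs. 2-3).
    m_bh in solar masses, mdot_input in Eddington units."""
    M      = m_bh * MSUN
    r_g    = G * M / c**2               # gravitational radius [cm]
    mdot_e = 1.4e18 * m_bh              # Eddington rate [g/s], Eq. (2)
    r_crit = 40.0 * mdot_input * r_g    # spherization radius [cm], Eq. (3)
    return r_g, mdot_e, r_crit

r_g, mdot_edd, r_crit = disk_scales(m_bh=10.0, mdot_input=100.0)
print(f"r_g = {r_g:.2e} cm, Mdot_Edd = {mdot_edd:.2e} g/s, r_crit = {r_crit:.2e} cm")
```

For a mildly super-Eddington input rate the spherization radius already lies thousands of gravitational radii out, which is why the inner, wind-launching region dominates the geometry.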
2.2. Radiation fields

The wind launched from the radiation-dominated region of the disk will be determined by the radiation forces acting upon the particles on the disk surface and along their subsequent trajectories. These forces will have contributions from different parts of the disk in relative motion with respect to the particles. Some radiation will be blueshifted and some will be redshifted, resulting in differential azimuthal forces on the particles, thereby transferring angular momentum from the disk to the wind.

In order to obtain the radiative contribution of each plasma element Q = (r_d, φ_d, H) of the disk surface at any point P = (r, φ, z) above or below the disk, we make a transformation of the intensity between the inertial and comoving reference frames (see Fig. 1). Azimuthal symmetry allows us to perform the calculations for any constant value of φ; therefore, we do it in the rz plane (φ = 0). The relativistic Doppler factor D provides the transformation between the reference frames (McKinley 1980):

I = D^4 I_0 = I_0/(1 + z_red)^4,   (5)

where z_red is the redshift factor given by (Watarai & Fukue 1999)

z_red = [−(r cos φ_d − r_d) v_r − (r sin φ_d) v_φ + (z − H) v_r √c3] / (cD).   (6)

Here, D is the distance between P and Q, v_φ = c2 v_K is the azimuthal velocity and v_r = −c1 α v_K is the radial velocity, with v_K = √(GM_BH/r_d) the Keplerian velocity. We note that we only consider the inner part of the disk for these calculations, because the intensity decays as r_d^-3.
The radiation-field tensor is given by (Rybicki & Lightman 1986)

R^µν = ( E          (1/c) F^α
         (1/c) F^α   P^αβ )  = (1/c) ∫ I j^µ j^ν dΩ.   (7)

This is a symmetric tensor of rank 2 and therefore we calculate ten elements in total: one for the energy density E, three for the flux vector F^α, and six for the stress tensor P^αβ. In Eq. 7, j^µ and j^ν are the direction cosines in Cartesian coordinates, and Ω is the solid angle subtended by Q:

j^µ = ( (r − r_d cos φ_d)/D , −(r_d sin φ_d)/D , (z − H)/D ),   (8)

dΩ = [−(r cos φ_d − r_d) sin δ + (z − H) cos δ] dS / D^3,   (9)

where dS = √(1 + c3) r_d dr_d dφ_d.
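The quadrature in Eq. (7) can be illustrated with a toy version: a flat disk (δ = 0, H = 0), the static limit (Doppler factor set to unity), arbitrary units, and a coarse grid. The disk extent, grid resolution, and inner-branch intensity scaling below are illustrative assumptions, not the model values:

```python
import math

def field_at_point(r, z, r_in=1.0, r_out=10.0, n_r=200, n_phi=200):
    """Toy quadrature of Eq. (7): energy density (times c) and vertical
    flux at P = (r, 0, z) above a flat disk, neglecting Doppler terms."""
    E = Fz = 0.0
    dr = (r_out - r_in) / n_r
    dphi = 2.0 * math.pi / n_phi
    for i in range(n_r):
        rd = r_in + (i + 0.5) * dr
        I0 = rd**-2                   # inner-branch scaling of Eq. (4)
        for j in range(n_phi):
            phid = (j + 0.5) * dphi
            D = math.sqrt(r*r + rd*rd - 2.0*r*rd*math.cos(phid) + z*z)
            dOmega = z / D**3 * rd * dr * dphi   # Eq. (9) with delta = 0
            jz = z / D                           # Eq. (8), z-component
            E  += I0 * dOmega                    # cE = integral of I dOmega
            Fz += I0 * jz * dOmega               # F_z = integral of I j_z dOmega
    return E, Fz

E, Fz = field_at_point(r=0.0, z=2.0)
print(E, Fz)
```

In the full model the same sums are carried out with the Doppler factor of Eq. (5) applied to each plasma element, which is what produces the differential azimuthal forces described above.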
2.3. Particles in the photon field

We now calculate the trajectory and velocity of the particles ejected from the disk when they interact with photons of the ambient radiation field.

The equation of motion under a relativistic, radiation treatment is given by (Kato & Fukue 2020)

f_µ = −∂Φ_e/∂x^ν + R^ν_µ;ν,   (10)

where f_µ is the four-force per unit volume. The effective potential Φ_e is the sum of the gravitational (Φ_g) and centrifugal (Φ_c) potentials. The semicolon (;) in the second term refers to the covariant differentiation of the energy-momentum tensor.

As we consider a disk with axial symmetry, the gravitational potential cancels out in the azimuthal coordinate: ∂Φ_g/∂x^α = (∂Φ_g/∂r, 0, ∂Φ_g/∂z). Furthermore, the centrifugal potential acts only in the radial direction: ∂Φ_c/∂x^α = (l^2/r^3, 0, 0), with l = r_d^2 ω_K being the specific angular momentum of the disk, and ω_K the angular velocity.

The equations of motion of the ejected particles can be found working with Eq. 10. In terms of the nondimensional form of the radiation-field tensor elements ϵ, f^α, and p^αβ, the system of differential, tensorial, and coupled equations is as follows (equations originally derived by Watarai & Fukue 1999, Eq. 42–44, but now extended to second order in velocity):

Radial coordinate:

du^r/dτ = −∂Φ_g/∂r + l^2/r^3 + (1/2)[γ f^r − p^rβ u_β − γ^2 ϵ u^r + u^r (2γ f^β u_β − p^βδ u_β u_δ)].   (11)

Azimuthal coordinate:

(1/r) dl/dτ = (1/2)[γ f^φ − p^φβ u_β − γ^2 ϵ (l/r) + (l/r)(2γ f^β u_β − p^βδ u_β u_δ)].   (12)

Height coordinate:

du^z/dτ = −∂Φ_g/∂z + (1/2)[γ f^z − p^zβ u_β − γ^2 ϵ u^z + u^z (2γ f^β u_β − p^βδ u_β u_δ)],   (13)

where u^µ denotes the four-velocity of the particles and γ the Lorentz factor, which is given by

γ = √(1 + u^r u^r + l^2/r^2 + u^z u^z).   (14)

The free parameter of these equations of motion is the launching radius of the particles, r_0, and we assume as initial condition that the particles co-rotate with the disk at this radius, u^α_0 = (0, l_0/r_0, 0).

We solve this system of equations numerically and assume that the kinematics of the disk-driven wind is roughly described by the trajectory and terminal velocities obtained for the test particles. As the accretion rate in the inner region of the disk is regulated at the Eddington rate, the mass loss in the wind is of the order of the super-Eddington accretion rate, Ṁ_dw ∼ Ṁ_input.
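The structure of such an integration can be sketched with a much simpler toy problem: a Newtonian test particle launched co-rotating from radius r_0, with the radiation force collapsed into a purely radial push of Γ times gravity (Γ > 1 mimics the super-Eddington field). This is not the relativistic system of Eqs. (11)–(13); the launch radius, Γ, and time step are illustrative:

```python
import math

def terminal_velocity(r0, gamma_rad=2.0, dt=1e-3, t_max=200.0):
    """Euler integration of a toy wind particle in geometrized units (GM = 1):
    central gravity, a radial radiative force Gamma * gravity, and conserved
    specific angular momentum l from the initial Kepler co-rotation."""
    r, z = r0, 0.0
    l = math.sqrt(r0)        # Keplerian specific angular momentum at r0 (GM = 1)
    vr, vz = 0.0, 1e-3       # small vertical kick to leave the disk surface
    t = 0.0
    while t < t_max:
        R3 = (r*r + z*z)**1.5
        # net radial push (Gamma - 1) times gravity, plus the centrifugal barrier
        ar = (gamma_rad - 1.0) * r / R3 + l*l / r**3
        az = (gamma_rad - 1.0) * z / R3
        vr += ar * dt; vz += az * dt
        r  += vr * dt; z  += vz * dt
        t  += dt
    return math.sqrt(vr*vr + vz*vz + (l/r)**2)

print(terminal_velocity(r0=10.0))
```

As expected from energy conservation, particles launched from smaller radii (deeper in the potential, stronger radiation field) reach higher terminal velocities, the trend the full relativistic calculation also shows.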
3. Collision of winds

The wind ejected from the disk collides with the stellar wind at the interaction region, where shocks are generated, giving rise to particle acceleration. An important quantity that characterizes the wind is the kinetic luminosity, L_K = Ṁ v^2/2, where Ṁ is the mass-loss rate and v the velocity of the fluid. A small fraction of the total kinetic power of the wind is transferred to relativistic particles, L_rel ∼ 0.1 L_K, where we assume equipartition between relativistic protons and electrons (L_e = L_p). The mass-loss rate and velocity of the stellar wind are set according to the parameters found in the literature for the type of star we have chosen (e.g., Kobulnicky et al. 2019). In the case of the disk-driven wind, the velocity is obtained following the procedures described in the previous section. Given the orbital separation, the disk inclination, and the stellar size, we estimate that ∼ 10% of the original kinetic power reaches the acceleration region. We assume a circular orbit; that is, the geometry associated with the collision of winds does not depend on the orbital phase.

In this section, we describe the models for the collision region, the ambient magnetic field, and the shocks. We adopt a one-zone approximation for these calculations.

Fig. 2: Scheme of the wind collision seen in the rz plane (not to scale), adapted from Abaroa et al. (2021).
3.1. Contact discontinuity

The winds collide at a surface called the contact discontinuity (CD). The stagnation point (SP) is the closest position of the CD to the star, and is located where the ram pressures of the winds are in equilibrium,

P_ram(r_BH) = ρ_dw v_dw^2 = ρ_∗w v_∗w^2 = P_ram(r_∗).   (15)

Here, r_BH and r_∗ are the distances to the SP from the BH and from the center of the star, respectively. The density of the spherical stellar wind at this location is given by

ρ_∗w = Ṁ_∗ / (4π r_∗^2 v_∗w),   (16)

whereas the density of the disk-driven wind reads

ρ_dw = Ṁ_dw / (Ω r_BH^2 v_dw),   (17)

where Ω = 2π(1 − cos θ) is the solid angle of the wind and θ the semi-opening angle of the wind. Solving these equations we obtain the position of the SP.
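Equations (15)–(17) reduce to a one-dimensional root-finding problem along the line joining the components (with r_∗ = a − r_BH). A minimal bisection sketch, with purely illustrative wind parameters:

```python
import math

MSUN_YR = 1.989e33 / 3.156e7   # g/s per solar mass per year

def stagnation_point(a, mdot_star, v_star, mdot_dw, v_dw, theta):
    """Distance r_BH of the stagnation point from the BH (Eqs. 15-17),
    found by bisection on the ram-pressure balance, with r_* = a - r_BH."""
    omega = 2.0 * math.pi * (1.0 - math.cos(theta))
    def imbalance(r_bh):
        r_s = a - r_bh
        p_dw = mdot_dw * v_dw / (omega * r_bh**2)          # rho v^2, disk wind
        p_sw = mdot_star * v_star / (4.0 * math.pi * r_s**2)  # rho v^2, stellar wind
        return p_dw - p_sw
    lo, hi = 1e-6 * a, (1.0 - 1e-6) * a
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0:   # disk wind still overpressured: SP lies closer to the star
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers (not the paper's model values):
a = 3e12   # orbital separation [cm]
r_bh = stagnation_point(a, 1e-6 * MSUN_YR, 2.5e8,    # stellar wind
                        1e-4 * MSUN_YR, 1e9,         # disk-driven wind
                        theta=math.radians(30))
print(r_bh / a)
```

For a powerful disk wind the stagnation point sits close to the stellar surface, which is the regime relevant to SCWBs.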
3.2. Magnetic field

The strength of the magnetic field at the CD is essentially determined by the stellar surface magnetic field B_∗. The intensity of B_CD and its topology, dipolar (i), radial (ii), or toroidal (iii), is given by (Eichler & Usov 1993):

B_CD ≈ B_∗ × { R_∗^3/r_∗^3,                     R_∗ < r_∗ < r_A,                (i)
               R_∗^3/(r_A r_∗^2),               r_A < r_∗ < R_∗(v_∗w/v_∗^rot),  (ii)
               R_∗^2 v_∗^rot/(r_A r_∗ v_∗w),    R_∗(v_∗w/v_∗^rot) < r_∗,        (iii),   (18)

where R_∗ is the stellar radius, r_A the Alfvén radius, and v_∗^rot ∼ 0.1 v_∗w the surface rotation velocity.
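The three branches of Eq. (18) join continuously at r_A and at R_∗(v_∗w/v_∗^rot), which a direct transcription makes easy to check (units are arbitrary; the sample numbers in the test are illustrative):

```python
def b_cd(r, b_star, r_star, r_alfven, v_wind, v_rot):
    """Magnetic field at distance r from the star, piecewise as in Eq. (18):
    dipolar inside the Alfven radius, then radial, then toroidal."""
    r_t = r_star * v_wind / v_rot   # radial-to-toroidal transition radius
    if r < r_alfven:
        return b_star * (r_star / r)**3                              # (i) dipole
    elif r < r_t:
        return b_star * r_star**3 / (r_alfven * r**2)                # (ii) radial
    else:
        return b_star * r_star**2 * v_rot / (r_alfven * r * v_wind)  # (iii) toroidal
```

Because the toroidal branch falls off only as 1/r, it is the one that usually sets the field at the distant collision region.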
3.3. Particle acceleration and shock

Particles are accelerated up to relativistic energies in the collision region through a first-order diffusive shock mechanism. Two shock fronts are generated: a forward shock (FS) that propagates through the stellar wind, and a reverse shock (RS) that propagates through the wind of the disk. The diffusive acceleration rate of the particles is given by (e.g., Protheroe 1999):

t_ac^-1 = η_ac e Z c B_CD / E,   (19)

where e is the electric charge, Z the atomic number, and E is the energy of the particle. The acceleration efficiency, η_ac, depends on the diffusion coefficient of the particles, the shock velocity, and the angle between the magnetic field and the normal to the shock plane. We assume that the shock propagates perpendicular to the magnetic field and that diffusion occurs in the Bohm regime. Thus, the acceleration efficiency is

η_ac ≈ (3/8)(v_sh/c)^2,   (20)

where the shock velocities in the reference frame where one of the fluids is at rest, v_∗w = 0, and the other one moves with a velocity v_dw, are given by (Lee et al. 1996):

v_RS = −(4/3) v_dw / (1 + √(n_∗w/n_dw)),   (21)

v_FS = (4/3) v_dw / (1 + √(n_dw/n_∗w)).   (22)

Here, n_∗w and n_dw are the number densities of the winds (n_w = ρ_w/m_p, with m_p the mass of the proton). The pressure and density of the shocked medium are calculated following the Rankine-Hugoniot relations (e.g., Lamers & Cassinelli 1999).
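Equations (20)–(22) can be sketched directly; the wind speed and densities below are illustrative, not the model values:

```python
import math

C = 2.998e10  # speed of light [cm/s]

def shock_speeds(v_dw, n_sw, n_dw):
    """Reverse/forward shock speeds, Eqs. (21)-(22), in the frame where
    the stellar wind is at rest and the disk wind moves at v_dw."""
    v_rs = -(4.0/3.0) * v_dw / (1.0 + math.sqrt(n_sw / n_dw))
    v_fs =  (4.0/3.0) * v_dw / (1.0 + math.sqrt(n_dw / n_sw))
    return v_rs, v_fs

def eta_ac(v_sh):
    """Bohm-regime efficiency for a perpendicular shock, Eq. (20)."""
    return (3.0/8.0) * (v_sh / C)**2

# Illustrative: disk wind at 10^9 cm/s running into a 100x less dense stellar wind.
v_rs, v_fs = shock_speeds(1e9, n_sw=1e8, n_dw=1e10)
print(v_rs, v_fs, eta_ac(abs(v_rs)))
```

When the disk wind is the denser flow, the reverse shock is faster than the forward shock and therefore dominates the particle acceleration.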
As we are interested in the nonthermal particle distribution, we investigate only adiabatic shocks; that is, shocks where radiative losses are negligible. This is because in radiative shocks the gas in the shocked region emits large amounts of thermal radiation; the system therefore loses energy, the entropy increases, and the medium becomes increasingly homogeneous. If magnetic inhomogeneities disappear, the acceleration efficiency decays abruptly, aborting the formation of nonthermal distributions.

The shock is adiabatic if the thermal cooling length R_Λ is larger than the size of the acceleration region ∆x_ac (McCray & Snow 1979). The cooling length reads

R_Λ = 5.9 × 10^11 µ (v_sh/km s^-1)^3 / {(n_w/cm^-3) [Λ(T_sh)/erg s^-1 cm^-3]} cm.   (23)

Here, n_w is the number density of the undisturbed medium, µ is the average molecular weight (µ = 0.6 for a fully ionized plasma), and Λ(T_sh) is the cooling function, which depends on the shock temperature (Raymond et al. 1976; Myasnikov et al. 1998; Wolfire et al. 2003). This latter function can be written as

Λ(T_sh) = { 4 × 10^-29 T_sh^0.8,    55 K ≤ T_sh < 10^4 K
            7 × 10^-27 T_sh,        10^4 K ≤ T_sh < 10^5 K
            7 × 10^-19 T_sh^-0.6,   10^5 K ≤ T_sh < 4 × 10^7 K
            3 × 10^-27 T_sh^0.5,    T_sh ≥ 4 × 10^7 K,   (24)

where T_sh is given by

T_sh = 18.21 µ (v_sh/km s^-1)^2 K.   (25)

We note that this temperature has a maximum value in a collisional plasma: it is self-regulated by pair creation, satisfying in any case k_B T_sh < 1 MeV (k_B is the Boltzmann constant).

We assume that the size of the acceleration region is a fraction of the distance from the BH to the SP, ∆x_ac ∼ 0.1 r_BH. As we consider a one-zone model, the acceleration region must be narrow enough to generate near-homogeneous conditions.
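The adiabaticity test of Eqs. (23)–(25) is a direct comparison of two lengths. A sketch (the shock speed, density, and region size below are illustrative):

```python
def lambda_cool(T):
    """Piecewise cooling function of Eq. (24)."""
    if T < 1e4:
        return 4e-29 * T**0.8
    elif T < 1e5:
        return 7e-27 * T
    elif T < 4e7:
        return 7e-19 * T**-0.6
    return 3e-27 * T**0.5

def shock_is_adiabatic(v_sh_kms, n_w, dx_ac, mu=0.6):
    """Compare the cooling length of Eq. (23) with the size of the
    acceleration region: the shock is adiabatic if R_Lambda > dx_ac."""
    T_sh = 18.21 * mu * v_sh_kms**2                                 # Eq. (25) [K]
    R_lam = 5.9e11 * mu * v_sh_kms**3 / (n_w * lambda_cool(T_sh))   # Eq. (23) [cm]
    return R_lam > dx_ac, R_lam

# Illustrative: a 10^4 km/s shock in a wind of density 10^8 cm^-3.
ok, R_lam = shock_is_adiabatic(1e4, 1e8, dx_ac=1e11)
print(ok, R_lam)
```

Fast shocks in tenuous winds cool very inefficiently, so the reverse shock of the disk-driven wind is generally adiabatic and a viable accelerator.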
4. Radiative processes

Particles accelerated at the shock can cool through different processes and produce nonthermal radiation. The timescales associated with this cooling are related to the total energy loss of the particles:

dE/dt ≈ −E/t_cool,   (26)

where the total cooling rate is

t_cool^-1 = Σ_i t_i^-1,   (27)

where t_i corresponds to each timescale of the cooling processes involved.

We assume advective escape; that is, particles are removed from the acceleration region by the bulk motion of the fluid. If the timescales of cooling are shorter than those of escape, particles radiate before they escape from the acceleration region. The maximum energy for each kind of particle can be inferred by looking at the point where the acceleration rate is equal to the total cooling or escape rate. This energy cannot exceed the maximum energy imposed by the Hillas criterion, E^max_e,p < E^max_Hillas.

As we are interested in nonthermal processes, we work at scales smaller than the size of the binary system and assume that rotation effects are negligible there. Effects caused by the orbital motion, such as Coriolis or centrifugal forces, could be relevant on larger scales and lead to strong disturbances in the flow and thermal processes. The analysis of such effects usually requires numerical simulations and is beyond the scope of this work.
4.1. Energy losses

We consider adiabatic and radiative losses. Adiabatic cooling is related to the work done by the particles of the wind to expand the shocked gas. Radiative cooling is caused by nonthermal processes as a consequence of the interaction of the wind particles with ambient fields and matter.

Our model is lepto-hadronic, and so we calculate the following radiative processes numerically:
– Synchrotron: interaction of protons and electrons with the ambient magnetic field, which is amplified by a factor of 4 in the shocked region due to the Rankine-Hugoniot relations.
– Inverse Compton (IC): collision of relativistic electrons with photons of the ambient radiation field.
– Bremsstrahlung: Coulomb interactions between relativistic electrons and cold matter.
– Photo-hadronic interactions: interaction of highly relativistic protons with photons of the ambient radiation field.
– Proton-proton: collision of relativistic protons with cold matter.

In addition, we take into account inelastic collisions of particles with atoms of the dense medium, that is, ionization losses, which can be relevant in the 1–100 MeV range. We note that in this energy range, ionization losses largely dominate over Coulomb scatterings (see e.g., Fig. 7 from O'C Drury et al. 1996), and so the latter are not included in our analysis. The reader is referred to Romero & Paredes (2011), Romero & Vila (2014), and Müller & Romero (2020) plus references therein for additional details on radiative processes.
4.2. Particle distribution

We investigate the evolution of particles that are accelerated at the shock and injected into the surrounding medium. The medium around the shock is the shocked gas of the winds. In this paper, we restrict our analysis to this region. Beyond the binary, the surrounding medium has been affected by the effects of the stellar winds, and so the system is expected to be located inside a bubble inflated by the winds and surrounded by a shell formed with the swept-up material at distances of a few to several parsecs, depending on the mass of the black hole progenitor. Inside the bubble, where the advected protons will be injected, the density is expected to be lower than that of the standard interstellar medium (e.g., around 0.01 cm^-3 or less). In the shell, there should be sufficient material for hadronic interactions with the protons diffused or transported from the central source^3.

^3 These effects will be discussed elsewhere; some of them might be responsible for part of the high-energy emission observed in the shell of W50, which is powered by SS433, although there are jets involved in this specific object.
The relativistic particles have a distribution given by dN = n(r, E, t) dE dV, where n is the number density of particles, t the time, r the position, V the volume, and E the energy. The evolution of this distribution is determined by the transport equation (see e.g., Ginzburg & Syrovatskii 1964; Romero & Paredes 2011). We solve this equation numerically in steady state and in the one-zone approximation:

∂/∂E [ (dE/dt) N(E) ] + N(E)/t_esc = Q(E),   (28)

where t_esc ∼ ∆x_ac/v_sh is the advection time, and the particle injection function,

Q(E) = Q_0 E^-p exp(−E/E_max),   (29)

is a power law in energy with an exponential cutoff and a spectral index p = 2.2, which is characteristic of the Fermi first-order acceleration mechanism (see e.g., Drury 1983). The normalization constant Q_0 is obtained from

L_(e,p) = ∆V ∫ от E^min_(e,p) to E^max_(e,p) dE_(e,p) E_(e,p) Q_(e,p)(E_(e,p)),   (30)

where ∆V is the volume of the acceleration region, and E^max_(e,p) the maximum energy reached by protons and electrons, which is found by looking at the point where the acceleration rate is equal to the total cooling or escape rate.
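The steps above can be sketched in the cooling-dominated limit of Eq. (28) (t_esc → ∞), where the steady-state solution is N(E) = |dE/dt|^-1 ∫_E^Emax Q(E′) dE′. The snippet below uses synchrotron-like cooling (Ė ∝ E^2) and omits the exponential cutoff; the grid and normalizations are arbitrary illustrative choices:

```python
import math

def steady_state_N(E_grid, Q, Edot):
    """Steady-state one-zone solution of the transport equation (28) in the
    cooling-dominated limit: N(E) = (1/|dE/dt|) * integral_E^Emax Q(E') dE'."""
    N = []
    for i, E in enumerate(E_grid):
        s = 0.0   # trapezoidal integral of Q from E to the top of the grid
        for j in range(i, len(E_grid) - 1):
            s += 0.5 * (Q(E_grid[j]) + Q(E_grid[j + 1])) * (E_grid[j + 1] - E_grid[j])
        N.append(s / abs(Edot(E)))
    return N

# Fermi-like injection (p = 2.2) with Edot ~ E^2: the analytic steady-state
# slope of N(E) is -(p + 1) = -3.2 well below the upper boundary.
grid = [10 ** (3 + 0.01 * k) for k in range(400)]   # arbitrary energy units
N = steady_state_N(grid, Q=lambda E: E ** -2.2, Edot=lambda E: 1e-8 * E ** 2)
slope = (math.log10(N[200]) - math.log10(N[100])) / (math.log10(grid[200]) - math.log10(grid[100]))
print(slope)
```

Recovering the analytic E^-(p+1) slope is a standard sanity check on any numerical transport solver of this kind.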
4.3. Nonthermal emission

Once we have the particle distributions, we calculate the spectral energy distribution (SED) for each of the relevant processes involved in the cooling. We find that in SCWBs, electrons typically cool by synchrotron and IC mechanisms, and protons escape from the acceleration region without significant cooling. The resultant nonthermal SED usually yields a broadband spectrum from radio waves (due to synchrotron emission) to gamma rays (due to IC emission).
4.4. Wind emission

We calculate the thermal emission of the photosphere of the disk-driven wind assuming a spherically symmetric wind that expands with constant velocity equal to its terminal velocity. Since the mass-loss rate of the disk is much higher than the critical rate, the wind is optically thick and we therefore assume that it radiates locally as a blackbody. The temperature measured by an observer at infinity is given by (Fukue 2009):
σ Tdw⁴ = ė LEdd / [ (1 − β cos Θ)⁴ 4πR² ],    (31)
where ė = Ė/LEdd is the normalized comoving luminosity, β = vdw/c the normalized velocity, Θ the angle of the flow with respect to the line of sight, and R = √(r² + z²), with r and z being the cylindrical coordinates. We assume that the comoving luminosity is equal to the Eddington luminosity (ė = 1), as is commonly done in supercritical wind models (e.g., Fukue 2009).
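Equation 31 can be evaluated directly; the sketch below does so under the ė = 1 assumption of the text. It is a minimal sketch in cgs units: the function name `wind_temperature`, the Eddington luminosity LEdd ≈ 1.26 × 10³⁸ (M/M⊙) erg s⁻¹, and the default β = 0.16 (the terminal velocity found in Sect. 5.1) are our assumptions.

```python
import math

SIGMA_SB = 5.670374e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def wind_temperature(R, cos_theta, M_bh=5.0, beta=0.16, e_dot=1.0):
    """Observer-frame wind temperature T_dw from Eq. 31.
    R in cm, M_bh in solar masses; e_dot = Edot/L_Edd (= 1 in the text)."""
    L_edd = 1.26e38 * M_bh  # erg/s
    T4 = e_dot * L_edd / (SIGMA_SB * (1.0 - beta * cos_theta) ** 4
                          * 4.0 * math.pi * R ** 2)
    return T4 ** 0.25
```

The Doppler factor (1 − β cos Θ)⁻⁴ makes the approaching part of the wind (cos Θ → 1) appear hotter than the receding part, on top of the usual R^(−1/2) radial decline of the temperature.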
of W50, which is powered by SS433, although there are jets involved in this specific object.

The apparent photosphere of this wind is defined as the surface where the optical depth τphoto is unity for an observer at infinity. If the velocity of the wind is relativistic, the optical depth in the observer frame depends in general on the magnitude of the velocity and the viewing angle. The location of the apparent photosphere from the equatorial plane zphoto is (Fukue 2009):
τphoto = ∫_{zphoto}^{∞} γdw (1 − β cos Θ) κco ρco dz = 1,    (32)
where γdw is the wind Lorentz factor, κco the opacity in the comoving frame, and ρco the wind density in the comoving frame. As we assume a fully ionized wind, the opacity is dominated by free electron scattering (κco = σT/mp).
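The condition τphoto = 1 of Eq. 32 can be located numerically. The sketch below assumes a spherical wind with ρ = Ṁdw/(4πR²vdw), a vertical line of sight (cos Θ = z/R for radial flow), and γdw ≈ 1 since β ∼ 0.15; the function names `tau_above` and `z_photosphere`, the bisection scheme, and the grid limits are our own illustrative choices, not the authors' method.

```python
import numpy as np

KAPPA_ES = 6.65e-25 / 1.67e-24  # kappa_co = sigma_T / m_p [cm^2 g^-1]

def tau_above(z0, r, mdot, v, beta, z_max=1e16, n=4000):
    """Observer-frame optical depth from z0 to 'infinity' along a vertical
    ray at cylindrical radius r (Eq. 32), for rho = mdot/(4 pi R^2 v),
    gamma_dw ~ 1. mdot in g/s, lengths in cm, v in cm/s."""
    z = np.logspace(np.log10(z0), np.log10(z_max), n)
    R = np.hypot(r, z)
    rho = mdot / (4.0 * np.pi * R ** 2 * v)
    cos_t = z / R  # radial flow seen by an observer along +z
    y = (1.0 - beta * cos_t) * KAPPA_ES * rho
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))

def z_photosphere(r, mdot, v, beta, lo=1e8, hi=1e15):
    """Bisect (in log z) for the height z_photo where tau = 1."""
    for _ in range(100):
        mid = np.sqrt(lo * hi)
        if tau_above(mid, r, mdot, v, beta) > 1.0:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

Since τ ∝ 1/z0 far from the axis origin, the photosphere sits roughly where κ Ṁdw/(4π vdw zphoto) ∼ 1.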
4.5. Absorption

Finally, we calculate the gamma-ray absorption by pair creation from photon–photon annihilation, γ + γ → e+ + e−. The nonthermal photons, on their way out of the acceleration region, can encounter photons of the ambient radiation fields and annihilate. The absorption is quantified by the optical depth of the medium, τγγ. If the original luminosity of gamma rays is L0γ(Eγ), the attenuated luminosity reads:
Lγ(Eγ) = L0γ(Eγ) · e−τγγ,    (33)
where e−τγγ is the attenuation factor. The targets are the photons of the ambient radiation fields, namely those from the star and from the disk-driven wind photosphere.
The process of annihilation is possible only above a kinematic energy threshold given by

EγEph > (mec²)²,    (34)

in a frontal collision, where Eph is the energy of the target photons. The opacity caused by photon–photon pair production for a photon created at a distance r from the center of the thermal source can be obtained from (Romero & Vila 2008):
τγγ(Eγ, r) = ∫_{Emin}^{∞} ∫_{r}^{∞} nph(Eph, r′) σγγ(Eph, Eγ) dr′ dEph,    (35)
where nph is the density of the ambient radiation field. The total cross section is given by (see e.g., Aharonian et al. 1985):
σγγ = (π re²/2) (1 − ξ²) [ (3 − ξ⁴) ln( (1 + ξ)/(1 − ξ) ) + 2ξ(ξ² − 2) ],    (36)
where re is the classical radius of the electron, and

ξ = [ 1 − (mec²)²/(EγEph) ]^{1/2}.    (37)
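Equations 36–37 are straightforward to code; a minimal sketch in cgs units follows. The function name `sigma_gg` and the head-on-collision restriction are our assumptions (the paper integrates over the full target field via Eq. 35).

```python
import math

R_E = 2.8179403e-13  # classical electron radius [cm]

def sigma_gg(E_gamma, E_ph, mec2=8.187e-7):
    """Total gamma-gamma pair-production cross-section, Eqs. 36-37,
    for a head-on collision; energies in erg. Zero below threshold (Eq. 34)."""
    s = E_gamma * E_ph / mec2 ** 2
    if s <= 1.0:
        return 0.0  # below the kinematic threshold
    xi = math.sqrt(1.0 - 1.0 / s)
    bracket = ((3.0 - xi ** 4) * math.log((1.0 + xi) / (1.0 - xi))
               + 2.0 * xi * (xi ** 2 - 2.0))
    return 0.5 * math.pi * R_E ** 2 * (1.0 - xi ** 2) * bracket
```

The cross section peaks at a few times the threshold (around 20–25% of the Thomson cross section) and declines for larger EγEph, which is why the γγ opacity is dominated by target photons near Eph ∼ 2(mec²)²/Eγ.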
The blackbody radiation density of the star and of the photosphere of the disk-driven wind is given by

nph = (2Eph²/h³c³) · 1/[ exp(Eph/kBT) − 1 ],    (38)

where T is the temperature of the thermal source considered in each case; that is, Tdw or Teff.
On the other hand, free-free absorption (FFA) must also be taken into account. The collision of low-energy photons with particles of the dense medium leads to a cutoff in the SED at radio frequencies. The denser the medium, the higher the energy at which the cutoff occurs. Therefore, FFA will determine the turnover of the synchrotron spectrum in SCWBs, which is expected to be at ∼GHz frequencies (see e.g., Rybicki & Lightman 1986; del Palacio et al. 2016).

L. Abaroa et al.: Super critical colliding wind binaries
Other absorption processes, such as the photoelectric effect, direct Compton scattering, or γ-nucleon pair creation, are not taken into account in this paper. Their cross sections are not high enough to become relevant in the calculation of the opacity given the ambient densities that we consider here (see Fig. 1 from Reynoso et al. 2011).
5. Results

In this section, we apply our model to a generic super-Eddington X-ray binary. We consider a star of spectral type O5V (Table 1) and investigate four scenarios: in scenarios S1 and S2 we regard a BH with mass MBH = 5M⊙ and mass-accretion rates of 10² ṀEdd and 10³ ṀEdd, respectively; in scenarios S3 and S4 we consider a BH with mass MBH = 20M⊙ and again accretion rates of 10² ṀEdd and 10³ ṀEdd, respectively. The complete set of parameters is summarized in Table 2.
Type O5V star

Parameter   Value        Units
M∗          37           M⊙
R∗          11           R⊙
Teff        41500        K
Ṁ∗          1.2 × 10⁻⁵   M⊙ yr⁻¹
v∗w         2.9 × 10⁸    cm s⁻¹
vrot∗       2.9 × 10⁷    cm s⁻¹
L∗          3.2 × 10³⁷   erg s⁻¹
B∗          750          G

Table 1: Parameters adopted in the model for the star of type O5V. All parameters are from Kobulnicky et al. (2019), with the exception of the magnetic field (from Wade & MiMeS Collaboration 2015).
5.1. Wind

We calculate the radiation-field tensor (Eq. 7) and in Fig. 3 we show the distribution of the energy density (ϵ) on the rz plane, where the black zone is the inflated inner disk. We obtain a strong azimuthal flux component of the radiation-field tensor. This distribution is the same in all four scenarios, because in the critical disk the radiation-field tensor depends on the advection, viscosity, and adiabatic parameters, which remain the same in all cases.
We solve Eqs. 11–13 to find the trajectory and velocity of the particles. Both quantities are determined by Rµν and therefore we obtain the same trajectories and terminal velocities in S1–S4. As an example, in Fig. 4 we show the normalized velocity of a test particle, with a launching radius of 40rg (≡ 20rs), which reaches a terminal velocity of ≈ 0.16c. This result does not vary much if we vary the launching radius (±0.02c for ±20rg).
The particles describe a helical trajectory in the vicinity of the BH for two main reasons (Fig. 5). The first is the presence of the strong azimuthal components of the radiation field, which help to maintain the spiral geometry of the particles in the inner disk. The second reason is the condition imposed for the particle ejection, namely that the particles initially have only azimuthal velocity. The intensity of the radiation field decays rapidly with distance from the BH, and therefore the ejected particles follow a spiral trajectory near the BH, but beyond a certain radius (∼ rcrit) they follow a free path with a strong component of the radial velocity.

Fig. 3: Contour maps of the spatial distribution of the normalized radiation energy density ϵ in the rz plane above the accretion disk. Both axes are in units of the Schwarzschild radius. The color bar gives the intensity of ϵ and the black zone is the inflated disk (f = 0.5, α = 0.5, γ = 4/3). [Axes: r [rs] vs. z [rs]; ϵ ranges from 1 × 10⁻³ to 5 × 10⁻³.]

Fig. 4: Normalized velocity of a wind test particle as a function of the Schwarzschild radius. The particle reaches a terminal velocity of ∼ 0.16c for a launching radius of r0 = 20rs (coincident with the vertical axis). [Axes: r/rs vs. v/c.]
The overall result is an equatorial wind with terminal velocities of the order of 0.15c. The kinetic power of these winds is in the range 10³⁹⁻⁴¹ erg s⁻¹, which is well above the power of the winds of typical WR or OB stars. Therefore, in general, the disk wind is expected to overwhelm the stellar wind.
5.2. Energy gain and losses

We follow the calculations in Sect. 3.1 and find that, in all four scenarios, the SP is located near the stellar surface and the wind of the disk completely sweeps up the stellar wind, as expected. Hence, the forward shock is in the stellar atmosphere, fully radiative, and completely unable to accelerate relativistic particles. Only the reverse shock (RS) is suitable for the task. As r∗ ≈ R∗, the magnetic field at the CD is BCD ≈ B∗.

Table 2: Parameters of the different scenarios calculated for the model. We indicate with superscript (1) those parameters that are assumed and with (2) those that are derived. In all models, the system is supposed to be oriented face-on to the observer, that is, the inclination of the normal to the orbital plane i with respect to the line of sight is ∼ 0◦.

Parameter, Symbol [units]                                  S1          S2          S3          S4
Black hole mass(1), MBH [M⊙]                               5           5           20          20
Mass accretion rate(1), Ṁinput [M⊙ yr⁻¹]                   1.1 × 10⁻⁵  1.1 × 10⁻⁴  4.4 × 10⁻⁵  4.4 × 10⁻⁴
Orbital semi-axis(1), a [R⊙]                               15          15          22          22
Gravitational radius(2), rg [cm]                           7.4 × 10⁵   7.4 × 10⁵   2.9 × 10⁶   2.9 × 10⁶
Critical radius(2), rcrit [rg]                             4000        40000       4000        40000
Mass loss in disk winds(1), Ṁdw [M⊙ yr⁻¹]                  10⁻⁵        10⁻⁴        4.3 × 10⁻⁵  4.3 × 10⁻⁴
Kinetic power of the disk-driven wind(2), LdwK [erg s⁻¹]   7.8 × 10³⁹  7.8 × 10⁴⁰  3.4 × 10⁴⁰  3.4 × 10⁴¹
Cold matter density at SP(2), ndw [cm⁻³]                   5.1 × 10¹²  5.1 × 10¹³  2.9 × 10¹²  2.9 × 10¹³
Distance to SP from BH(2), rBH [cm]                        2.7 × 10¹¹  2.7 × 10¹¹  7.6 × 10¹¹  7.6 × 10¹¹
Size of acceleration region(1), ∆xac [cm]                  2.7 × 10¹⁰  2.7 × 10¹⁰  7.6 × 10¹⁰  7.6 × 10¹⁰
Shock cold matter density(2), nRS [cm⁻³]                   2 × 10¹³    2 × 10¹⁴    1.2 × 10¹³  1.2 × 10¹⁴
Shock cooling length(2), RΛ [cm]                           7.6 × 10¹¹  7.6 × 10¹⁰  1.3 × 10¹²  1.3 × 10¹¹
Maximum energy of electrons(2), Emax_e [eV]                10¹¹        1.6 × 10¹¹  10¹¹        10¹¹
Maximum energy of protons(2), Emax_p [eV]                  10¹⁵        10¹⁵        3 × 10¹⁵    3.1 × 10¹⁵
Emission peak (low energy)(2), L0.01mm [erg s⁻¹]           3.2 × 10³³  3.2 × 10³³  8 × 10³⁴    8 × 10³⁴
Emission peak (high energy)(2), L10MeV [erg s⁻¹]           4 × 10³²    4 × 10³²    10³⁴        10³⁴

Fig. 5: Trajectory of a test particle in Cartesian 3D space in units of the Schwarzschild radius. The particles describe a helical trajectory above the inner disk because of the strong azimuthal radiation fields. The launching radius of this test particle is r0 = 20rs.
The cooling length of the RS is greater than the size of the acceleration region in all cases (see Table 2); this is why the shock is adiabatic and the acceleration efficiency of the process is relatively high: ηac ∼ 10⁻² (see Sect. 3.3). The shock velocity is ≈ 4.4 × 10⁹ cm s⁻¹ and the temperature of the shocked gas reaches ≈ 4.8 × 10¹⁰ K.
We calculate the energy gain and losses of the shock-accelerated particles following Sect. 4. Highly relativistic protons escape from the acceleration region without cooling in all scenarios considered here (with energies up to Ep ≈ 1 PeV) and are injected into the interstellar medium (ISM). Protons are advected, that is, they are removed from the collision region by the bulk motion of the fluid. They therefore do not interact with ambient material on scales similar to that of the system. Electrons cool mainly through the IC and synchrotron mechanisms, and reach a maximum energy of Ee ≈ 100 GeV. To obtain the electron distribution, we solve the transport equation considering only the dominant IC and synchrotron losses, and a power-law injection function with a spectral index of 2.2 and an exponential cutoff (see Eq. 29).
5.3. Spectral energy distribution

Figure 6 shows the SEDs of the four scenarios. The only thermal component of the spectrum is the photosphere of the optically thick disk-driven wind. The emission peak of the wind for S1 and S2 is ≈ 10³⁷ erg s⁻¹, whereas for S3 and S4 the peak is ≈ 10³⁸ erg s⁻¹. This occurs at energies of ∼ 100 eV for S1 and S3, and ∼ 30 eV for S2 and S4. Therefore, if MBH increases, the luminosity is higher and, if the mass-accretion rate increases, the luminosity peak occurs at lower energies.
In the case of the nonthermal spectrum, we calculate the emission due to synchrotron and IC losses. In the latter case, we consider the photon fields of the star and of the wind photosphere as targets. In all cases, the dominant IC contribution is that of the star. The luminosity in S3 and S4 is an order of magnitude greater than that in S1 and S2. This is because of the modification of the orbital parameters when the BH mass varies: to guarantee the overflow of the Roche lobe, the orbital semi-axis varies with MBH, which results in variation in the size of the acceleration region and the photon density at SP, among other parameters. The emission peak at low energies is ∼ 10³³ erg s⁻¹ for S1 and S2, and ∼ 10³⁵ erg s⁻¹ for S3 and S4. At high energies, the emission peak is ∼ 10³² erg s⁻¹ (S1 and S2) and ∼ 10³⁴ erg s⁻¹ (S3 and S4). The gamma-ray absorption due to γγ annihilation is total for energies > 10 GeV in all scenarios⁴.
Attenuation due to material between the source and the observer, that is, absorption by external cold gas, is mainly in the optical-to-UV range and at soft X-rays. At radio wavelengths, refractive scintillation on free electrons of the ISM occurs at lower frequencies than predicted here. For high-energy gamma rays, the main absorbers are infrared (IR) fields and the cosmic microwave background (CMB), but their effects are only relevant at cosmological distances.
6. Application to NGC 4190 ULX 1

Ultraluminous X-ray sources (ULXs) are extragalactic point-like objects where the luminosity in the X-ray band appears to be higher than the Eddington luminosity (Bachetti 2016). ULXs are thought to be X-ray binaries with a stellar-mass compact object accreting at super-Eddington rates, where a beaming effect could be responsible for the luminosity observed in the X-ray band: the radiation emitted from the inner part of the accretion disk is geometrically collimated by the ejected wind, which is optically thick except in a narrow region around the black hole axis and forms a cone-shaped funnel (King et al. 2001; King 2009; Kaaret et al. 2017; Fabrika et al. 2021).
We apply our model to estimate the radiation emitted by the ultraluminous X-ray source NGC 4190 ULX 1 (also known as CXO J121345.2+363754). Although many characteristics of this ULX remain poorly understood, several authors have explored the system and have provided constraints on some of its parameters (see e.g., Liu & Bregman 2005; Gladstone et al. 2013; Koliopanos et al. 2017; Kosec et al. 2018; Ghosh & Rana 2021). In what follows, we describe the parameterization of the system and its components, and investigate the expected collision of winds. The complete set of parameters used in this section is detailed in Table 3.
6.1. System parameterization

The source is located in the nearby galaxy NGC 4190 at a distance of d ≈ 3 Mpc (Tully et al. 2013). Observations made in 2010 using the XMM-Newton telescope reveal a long-term spectral variability in the 0.3–10.0 keV energy range: LX ∼ 3 − 8 × 10³⁹ erg s⁻¹.

The angle i between the line of sight and the z-axis at which the disk of a ULX is observed determines the components of its spectrum: blackbody disk (BB) or Comptonization. If i is small, the observer is able to look into the funnel and see the innermost part of the disk: the spectrum shows only the BB component, which corresponds to the thermal emission of the disk. This type of spectrum is called a broadened disk (BD). If i is sufficiently large, another effect is observed: the interaction between photons and wind particles near the disk surface induces a Comptonization that produces a hardening of the spectrum. Most ULXs exhibit a combination of both phenomena in their X-ray spectrum.

4 We note that, since we assume a nearly face-on inclination of the system, there are no significant variations of the radiative output associated with the orbital phase. If the system were oriented nearly edge-on, the emission would be modulated by the orbital phase due to absorption (for details see Romero et al. 2010).
Ghosh & Rana (2021) investigated the spectral properties of NGC 4190 ULX 1 and suggested that the ULX is in a BD state, and that the compact object is a BH with mass ∼ 10 − 30M⊙ accreting at super-Eddington rates. We fit the XMM-Newton observations (Epoch 3) with the supercritical advection-dominated disk model detailed in Sect. 2.1, assuming a mass-accretion rate of Ṁinput = 10 ṀEdd. We also assume a face-on inclination i ≈ 0◦, a BH mass of 10M⊙, and a geometrical beaming factor b = 0.07. This factor is given by

b = Ω/4π = 0.5(1 − cos ϑ),    (39)

where Ω is the solid angle of the emission. The angle ϑ is related to the opening angles of the disk (δ) and its wind (θ): ϑ + δ + 2θ = 90◦. Both angles, i and ϑ, can change over time, causing the spectral variability of the object (Fabrika et al. 2021).
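Combining Eq. 39 with the angle relation ϑ + δ + 2θ = 90◦ reproduces the adopted beaming factor from the geometry alone. A minimal sketch (the function name `beaming_factor` is ours):

```python
import math

def beaming_factor(delta_deg, theta_deg):
    """Geometrical beaming b = 0.5 (1 - cos(vartheta)) from Eq. 39,
    with vartheta = 90 deg - delta - 2*theta (angles in degrees)."""
    vartheta = math.radians(90.0 - delta_deg - 2.0 * theta_deg)
    return 0.5 * (1.0 - math.cos(vartheta))

# Table 3 values: delta = 30 deg, theta = 14.5 deg  ->  b close to 0.07
```

With the Table 3 values (δ = 30◦, θ = 14.5◦) one gets ϑ = 31◦ and b ≈ 0.07, consistent with the adopted factor.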
On the other hand, Gladstone et al. (2013) provided constraints on the characteristics of the optical counterpart of the system. They suggested that, if MBH = 10M⊙, the mass of the star could be < 50M⊙ and its radius < 86R⊙. We choose a star of type B2V for our model in light of one of the fittings these latter authors made from Hubble Space Telescope observations. If we apply Eq. 1 and consider the mass ratio M∗/MBH and the stellar radius involved (see Table 3), the transfer of mass in the binary system occurs for an orbital semi-axis a ≤ 15.2 R⊙, which results in a period of ≤ 38 h.
6.2. Collision of winds

The terminal velocity of the disk-driven wind is vdw = 4.95 × 10⁹ cm s⁻¹, and therefore LdwK = 1.5 × 10³⁹ erg s⁻¹, while L∗K = 2.17 × 10³⁴ erg s⁻¹. The SP is located near the stellar surface and the wind of the disk completely suppresses the stellar wind. We therefore only take into account the reverse shock (RS). As r∗ ≈ R∗, the magnetic field at the CD is BCD ≈ B∗.
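The two kinetic powers quoted above follow from L_K = (1/2) Ṁ v² with the mass-loss rates and velocities of Table 3. A minimal check in cgs units (the function name `wind_kinetic_power` is ours):

```python
M_SUN_YR = 1.989e33 / 3.156e7  # 1 M_sun/yr in g/s

def wind_kinetic_power(mdot_msun_yr, v_cm_s):
    """Kinetic power L_K = (1/2) Mdot v^2 of a wind,
    with Mdot in M_sun/yr and v in cm/s; returns erg/s."""
    return 0.5 * mdot_msun_yr * M_SUN_YR * v_cm_s ** 2

# Disk wind: Mdot_dw = 1.98e-6 M_sun/yr, v_dw = 4.95e9 cm/s -> ~1.5e39 erg/s
# Stellar wind: Mdot_* = 1.4e-7 M_sun/yr, v_*w = 7e7 cm/s   -> ~2.2e34 erg/s
```

The disk wind carries almost five orders of magnitude more kinetic power than the stellar wind, which is why the stagnation point is pushed onto the stellar surface.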
The cooling length of the RS is RΛ = 2.2 × 10¹³ cm and the size of the acceleration region is ∆xac = 6.68 × 10¹⁰ cm; therefore, the shock is adiabatic and the acceleration efficiency of the process is ηac = 10⁻², as in our general models. We calculate the energy gain and losses of the shock particles following Sect. 4. Highly relativistic protons escape from the acceleration region without cooling, as in our previous scenarios (with energies up to Ep ≈ 1 PeV), and are injected into the ISM. Electrons cool mainly through the IC and synchrotron mechanisms. Figure 7 shows the timescales of the electrons, which reach a maximum energy of Ee ≈ 0.32 TeV. To obtain the electron distribution, we solve the transport equation taking into account only IC and synchrotron losses, and a power-law injection function with a spectral index of 2.2 and an exponential cutoff.
6.3. Total SED

The SED of the ULX spans a broadband energy range. Figure 9 shows the thermal (wind and accretion disk) and nonthermal (colliding-winds shock) contributions of the system. We also show the sensitivity of the instruments ALMA and VLA (sub-mm waves) and Fermi and CTA (gamma rays), together with observational data from XMM-Newton.

The luminosity in the IR band is ∼ 10³⁴ erg s⁻¹, which is relatively strong, though still undetectable at megaparsec distances. The luminosity in gamma rays also reaches ∼ 10³⁴ erg s⁻¹. The attenuation factor (Fig. 8) has an effect on photons with energies ≳ 1 GeV. Most of the radiation above 1 GeV and all above 50 GeV is suppressed by the annihilation of the γ rays with the photon fields of the disk-driven wind and the star.
Fig. 6: Thermal and nonthermal SEDs of the four scenarios considered, S1–S4, in logarithmic scale, where a face-on inclination is assumed. S1 and S3 are shown in the left plot, whereas S2 and S4 are shown in the right plot. Dashed lines correspond to S1 (left) and S2 (right); solid lines correspond to S3 (left) and S4 (right). We plot the nonattenuated inverse Compton contributions in gray. The emission peak at high energies is ∼ 10³³ erg s⁻¹ for S1 and S2, and ∼ 10³⁴ erg s⁻¹ for S3 and S4. The gamma-ray absorption due to γγ annihilation is total for energies > 10 GeV. [Axes: log10(Eγ/eV) vs. log10(EγLγ/erg s⁻¹); curves per scenario: wind photosphere, synchrotron, inverse Compton (star), inverse Compton (wind), and total SED.]
Fig. 7: Timescales, in logarithmic scale, of the electron acceleration, escape, and cooling at the reverse shock in NGC 4190 ULX 1. Electrons reach a maximum energy of ≈ 0.32 TeV. The acceleration efficiency is 10⁻². [Axes: log10(Ee/eV) vs. log10(t⁻¹/s⁻¹); curves: synchrotron, inverse Compton, Bremsstrahlung, adiabatic, acceleration (ηac ∼ 10⁻²), and escape.]
7. Discussion
|
1333 |
+
Our analysis of supercritical colliding wind binaries shows that
|
1334 |
+
these systems should exhibit broadband emission from radio to
|
1335 |
+
gamma rays. In this sense, they are similar to CWBs formed by
|
1336 |
+
two hot stars, such as O+WR binaries. However, there are im-
|
1337 |
+
portant differences as well. If we compare our models with re-
|
1338 |
+
cent models of O+WR CWBs (Pittard et al. 2021), we find that
|
1339 |
+
(i) in SCWBs, the wind of the disk is far more powerful than
|
1340 |
+
the wind of the star. This results in stagnation points that are
|
1341 |
+
very close to the surface of the star. Efficient particle accelera-
|
1342 |
+
tion then can only occur in reverse shocks. (ii) We also see that
|
1343 |
+
the disk wind advects protons from the acceleration region be-
|
1344 |
+
fore they have time to cool. Only electrons can cool locally. The
|
1345 |
+
0
|
1346 |
+
0.2
|
1347 |
+
0.4
|
1348 |
+
0.6
|
1349 |
+
0.8
|
1350 |
+
1
|
1351 |
+
-6
|
1352 |
+
-4
|
1353 |
+
-2
|
1354 |
+
0
|
1355 |
+
2
|
1356 |
+
4
|
1357 |
+
6
|
1358 |
+
8
|
1359 |
+
10
|
1360 |
+
12
|
1361 |
+
e−τγγ
|
1362 |
+
log10 (Eγ / eV)
|
1363 |
+
Star
|
1364 |
+
Wind photosphere
|
1365 |
+
Total attenuation
|
1366 |
+
Fig. 8: Attenuation factors due to γγ-annihilation between high-
|
1367 |
+
energy nonthermal radiation and photon fields from the star and
|
1368 |
+
from the photosphere of the disk-driven wind in NGC 4190 ULX
|
1369 |
+
1. The total attenuation is plotted with a black line.
|
1370 |
+
resulting SED is consequently dominated by synchrotron and IC
|
1371 |
+
radiation. (iii) As the acceleration region is close to the star, the
|
1372 |
+
local magnetic field is relatively strong. Synchrotron emission
|
1373 |
+
reaches energies of hundreds of keV. As the medium is far more
|
1374 |
+
dense than in stellar CWBs, free-free absorption causes this radi-
|
1375 |
+
ation to turnover below ∼ 24 GHz. The total power at millimeter
|
1376 |
+
(mm) and submm wavelengths can be between three and five
|
1377 |
+
orders of magnitude higher in SCWBs than in stellar CWBs.
|
1378 |
+
(iv) IC is the dominant radiation mechanism at high energies.
|
1379 |
+
The stronger thermal fields of SCWBs (wind photosphere and
|
1380 |
+
star) provide the seed photons, but also impose a high-energy
|
1381 |
+
cutoff at ∼ 1 GeV through γ − γ attenuation. Instead, stellar
|
CWBs can reach energies close to 1 TeV. (v) The strong magnetic fields in the acceleration region cut electromagnetic cascades in SCWBs. (vi) The SED is always dominated by the X-ray component associated with the disk or its wind in SCWBs. Finally, (vii) stellar CWBs have wider orbits and a variable separation between the components of the system. This produces variability related to the orbital period. On the contrary, the orbits of SCWBs should be mostly circularized. In general, CWBs are weaker than SCWBs, although they span a broader energy range.

Article number, page 10 of 12

L. Abaroa et al.: Super critical colliding wind binaries

Table 3: Parameters of NGC 4190 ULX 1.

System
Inclination(1): i = 0◦
Orbital semi-axis(2): a = 15 R⊙
Distance to the source(3): d = 3 Mpc

Black hole
Mass(1): MBH = 10 M⊙
Gravitational radius(2): rg = 1.48 × 10^6 cm

Accretion disk
Disk semi opening angle(1): δ = 30◦
Critical radius(2): rcrit = 3.5 × 10^9 cm
Eddington accretion rate: ˙MEdd = 2.2 × 10^−7 M⊙ yr−1
Mass accretion rate(1): ˙Minput = 2.2 × 10^−6 M⊙ yr−1
Mass loss in winds(1): ˙Mdw = 1.98 × 10^−6 M⊙ yr−1
Wind velocity(2): vdw = 4.95 × 10^9 cm s−1
Wind semi opening angle(2): θ = 14.5◦
Beaming factor(2): b = 0.07

B2V Star
Mass(4): M∗ = 8 M⊙
Radius(4): R∗ = 5.4 R⊙
Temperature(4): Teff = 20600 K
Mass loss in winds(4): ˙M∗ = 1.4 × 10^−7 M⊙ yr−1
Wind velocity(4): v∗w = 7 × 10^7 cm s−1
Rotation velocity(1): vrot∗ = 7 × 10^6 cm s−1
Magnetic field(5): B∗ = 200 G

Colliding winds
Kinetic power of disk-driven wind(2): L_K^dw = 1.5 × 10^39 erg s−1
Kinetic power of stellar wind(2): L_K^∗ = 2.17 × 10^34 erg s−1
Distance from BH to SP(2): rBH = 6.68 × 10^11 cm
Size of acceleration region(1): ∆xac = 6.68 × 10^10 cm
Magnetic field at SP(2): BSP = 200 G
Injection spectral index(1): p = 2.2
Acceleration efficiency(2): ηac = 10^−2
Molecular mean weight(1): µ = 0.6

Reverse shock
Velocity(2): vRS = 4.4 × 10^9 cm s−1
Temperature(2): TRS = 10^10 K
Cold matter density(2): nRS = 6.9 × 10^11 cm−3
Cooling length(2): RΛ = 2.2 × 10^13 cm

Notes. We indicate the parameters we have assumed with superscript (1) and those we have derived with (2). Parameters with superscripts (3), (4), and (5) were taken from Tully et al. (2013), Kobulnicky et al. (2019), and Shultz et al. (2015), respectively.
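As a quick consistency check, the derived wind kinetic powers in Table 3 follow from the tabulated mass-loss rates and wind velocities via L_K = ½ ˙M v²; a minimal sketch, where the CGS unit constants are the only values added here:

```python
# Kinetic power of a wind, L_K = 0.5 * Mdot * v**2, in CGS units,
# evaluated with the mass-loss rates and wind velocities of Table 3.
M_SUN_G = 1.989e33   # solar mass in grams
YEAR_S = 3.156e7     # year in seconds

def kinetic_power(mdot_msun_yr, v_cm_s):
    """Kinetic luminosity [erg/s] of a wind with mass-loss rate
    mdot [Msun/yr] and terminal velocity v [cm/s]."""
    mdot_g_s = mdot_msun_yr * M_SUN_G / YEAR_S
    return 0.5 * mdot_g_s * v_cm_s ** 2

L_dw = kinetic_power(1.98e-6, 4.95e9)  # disk-driven wind
L_sw = kinetic_power(1.4e-7, 7.0e7)    # stellar wind

print(f"L_K(disk wind)    ~ {L_dw:.2e} erg/s")  # ~1.5e39, matching Table 3
print(f"L_K(stellar wind) ~ {L_sw:.2e} erg/s")  # ~2.2e34, matching Table 3
```

Both values reproduce the derived entries of Table 3 to within rounding.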
An interesting feature of SCWBs is their potential as cosmic ray sources. As mentioned, the strong wind of the disk drags away the relativistic protons before they cool. These protons, with maximum energies of the order of 1 PeV, are then injected into the ISM where they diffuse. Even if a fraction of just ∼ 1% of the wind kinetic power goes to relativistic protons, the cosmic ray output of a SCWB would be in the range 10^37−39 erg s−1. These protons might interact with ambient clouds at some distance from the system, producing gamma rays through pp → π0 + pp interactions and the subsequent pion decays π0 → γγ. The gamma-ray emission from the illuminated clouds can be even stronger than the emission from the binary itself. However, the spectrum should be softer because of propagation effects (Aharonian & Atoyan 1996). Recent modeling by Pittard et al. (2021) of particle acceleration in colliding wind binaries with wind velocities of a few 10^3 km s−1 and mG magnetic fields in the acceleration region demonstrates that up to ∼ 30% of the wind power can be transferred to nonthermal particles. This means that, in some extreme cases, a SCWB might inject up to ∼ 10^40 erg s−1 in cosmic rays.

[Fig. 9 plots log10(EγLγ / erg s−1) versus log10(Eγ / eV), with curves for the wind photosphere, beamed disk, inverse Compton (star), inverse Compton (wind), synchrotron, the total SED, and XMM Newton data, plus the Fermi, CTA, ALMA, and VLA sensitivities.]

Fig. 9: Thermal and nonthermal SEDs of NGC 4190 ULX 1 in logarithmic scale (dashed lines). The nonthermal SED is partially attenuated for energies > 1 GeV and totally attenuated for energies > 50 GeV due to annihilation of γ-rays with the photon fields of the star and the photosphere of the disk-driven wind. The gray dashed lines are the nonattenuated IC contributions. The total SED is plotted with a solid black line. Data from XMM-Newton (Epoch 3), and the sensitivity of ALMA, Fermi, VLA, and CTA are also shown (instrument sensitivities were taken from Sotomayor & Romero 2022).
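The quoted cosmic-ray outputs are simple scalings of the disk-wind kinetic power by the fraction assumed to go into relativistic protons; with the Table 3 value this reads:

```python
# Cosmic-ray output as a fraction of the disk-driven wind kinetic power.
L_dw_K = 1.5e39  # erg/s, kinetic power of the disk-driven wind (Table 3)

for eff in (0.01, 0.30):  # ~1% baseline; up to ~30% following Pittard et al. (2021)
    print(f"efficiency {eff:.0%}: L_cr ~ {eff * L_dw_K:.1e} erg/s")
```

For this particular source the 30% case gives ∼ 4.5 × 10^38 erg s−1; the ∼ 10^40 erg s−1 extreme quoted above would correspond to SCWBs with more powerful winds.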
Another type of CWB is the so-called gamma-ray binary (GRB; e.g., LS 5039, PSR B1259-63, LSI +61◦ 303, PSR J2032+4127, and others; see, e.g., Dubus 2013; Chernyakova & Malyshev 2020). These sources are formed by a massive star (usually a Be star with a dense equatorial decretion disk and a fast wind) and a young pulsar in an eccentric orbit. The pulsar ejects a relativistic pair wind. The wind collision produces a broadband spectrum from electrons accelerated at the shock that cool by synchrotron and IC radiation. The two-peak SEDs are similar to those we estimate for SCWBs, but some differences are also clearly seen: (i) GRBs are less energetic because the spin-down luminosity of the pulsar is much smaller than the power of a supercritical wind. (ii) GRBs are highly variable. This variability is modulated with the orbital period. The orbital modulation of the different components of the broadband spectrum is a consequence of the orbital variability of geometrical parameters, such as the geometry of the contact surface of the stellar and pulsar winds. Absorption effects are also strongly variable. (iii) Hadronic interactions are likely when the pulsar crosses the equatorial disk of the star (e.g., Bykov et al. 2021). (iv) GeV flares have been observed after the periastron passage in sources such as PSR B1259-63 (Abdo et al. 2011; Chernyakova et al. 2014). These flares are attributed to the effects of the unshocked pulsar wind interaction with photons from the stellar disk (e.g., Khangulyan et al. 2012).
A&A proofs: manuscript no. main
We finally mention that some black holes accreting at supercritical rates seem to be capable of launching mildly relativistic jets. A remarkable case in our Galaxy is the notorious microquasar SS433 (Fabrika 2004). This object resembles a ULX source seen edge on (Begelman et al. 2006). The accretion rate should be extremely high in order to explain the large jet power LK ∼ 10^40 erg s−1. Begelman et al. (2006) suggest rates of ∼ 5 × 10^3 ˙MEdd ∼ 5 × 10^−4 M⊙ yr−1, which are consistent with estimates of equatorial mass outflows inferred from radio observations (Blundell et al. 2001). These outflows, ejected toward either side of the jets, present a thermal spectrum and might well correspond to the radiation-driven wind of the hypercritical disk. The contamination from the jet base makes it impossible to disentangle contributions from colliding winds from those coming from the jet. However, the equatorial outflow might propagate well beyond the system and reveal itself if it collides with any clouds. The shock generated in the collision would convert the kinetic energy of the plasmoids into internal energy and relativistic particles, which might then cool by pp interactions with the cloud material. Such a scenario might explain the detection of a GeV source by the Fermi satellite on the side of SS433 (Bordas 2020; Li et al. 2020). We will explore the details of this hypothesis elsewhere.

8. Summary and conclusions
We explored the consequences of supercritical accretion in binary systems consisting of a hot star and a black hole. We find that a fraction of the kinetic power of the radiation-driven wind released by the accretion disk is transformed into relativistic particles in the region of the wind that collides with the star. Electrons are cooled locally, mainly through synchrotron and inverse Compton radiation. The radiation fields of the star and wind photosphere provide abundant thermal photons for the latter process; they also absorb high-energy radiation above a few GeV. Free-free absorption imposes a high-frequency turnover in the radio regime, suppressing centimeter radio waves, unlike the case of colliding wind binaries. The relativistic protons are blown away by the wind before they can cool down significantly. Once trapped by the outflow, these protons are transported to outer regions where they can interact with ambient gas away from the binary system, producing hadronic gamma-rays. Our most important finding is that, in addition to being strong thermal UV and X-ray sources, supercritical colliding wind binaries can be significant nonthermal sources at mm wavelengths and GeV energies.
Acknowledgements. The authors thank the anonymous referee for a careful and constructive review, and for his/her comments that improved this work. We thank also Daniela Pérez and Jiří Horák for fruitful discussions. This work was supported by grant PIP 0554 (CONICET). LA acknowledges the Universidad Nacional de La Plata for the education received. GER acknowledges the support from the Spanish Ministerio de Ciencia e Innovación (MICINN) under grant PID2019-105510GBC31 and through the Center of Excellence María de Maeztu 2020-2023 award to the ICCUB (CEX2019-000918-M).
References

Abaroa, L., Sotomayor Checa, P., & Romero, G. E. 2021, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 62, 262
Abbott, D. C. 1978, ApJ, 225, 893
Abdo, A. A., Ackermann, M., Ajello, M., et al. 2010, ApJ, 723, 649
Abdo, A. A., Ackermann, M., Ajello, M., et al. 2011, ApJ, 736, L11
Aharonian, F. A. & Atoyan, A. M. 1996, A&A, 309, 917
Aharonian, F. A., Kirillov-Ugriumov, V. G., & Vardanian, V. V. 1985, Ap&SS, 115, 201
Bachetti, M. 2016, Astronomische Nachrichten, 337, 349
Balbo, M. & Walter, R. 2017, A&A, 603, A111
Begelman, M. C., King, A. R., & Pringle, J. E. 2006, MNRAS, 370, 399
Benaglia, P., Marcote, B., Moldón, J., et al. 2015, A&A, 579, A99
Benaglia, P. & Romero, G. E. 2003, A&A, 399, 1121
Blundell, K. M., Mioduszewski, A. J., Muxlow, T. W. B., Podsiadlowski, P., & Rupen, M. P. 2001, ApJ, 562, L79
Bordas, P. 2020, Nature Astronomy, 4, 1132
Bykov, A. M., Petrov, A. E., Kalyashova, M. E., & Troitsky, S. V. 2021, ApJ, 921, L10
Chernyakova, M., Abdo, A. A., Neronov, A., et al. 2014, MNRAS, 439, 432
Chernyakova, M. & Malyshev, D. 2020, in Multifrequency Behaviour of High Energy Cosmic Sources - XIII, 3-8 June 2019, Palermo, 45
De Becker, M. 2007, A&A Rev., 14, 171
De Becker, M. & Raucq, F. 2013, A&A, 558, A28
del Palacio, S., Bosch-Ramon, V., Romero, G. E., & Benaglia, P. 2016, A&A, 591, A139
Drury, L. O. 1983, Reports on Progress in Physics, 46, 973
Dubus, G. 2013, A&A Rev., 21, 64
Eggleton, P. P. 1983, ApJ, 268, 368
Eichler, D. & Usov, V. 1993, ApJ, 402, 271
Fabrika, S. 2004, Astrophys. Space Phys. Res., 12, 1
Fabrika, S. N., Atapin, K. E., Vinokurov, A. S., & Sholukhova, O. N. 2021, Astrophysical Bulletin, 76, 6
Farnier, C., Walter, R., & Leyder, J. C. 2011, A&A, 526, A57
Fukue, J. 2004, PASJ, 56, 569
Fukue, J. 2009, PASJ, 61, 1305
Ghosh, T. & Rana, V. 2021, MNRAS, 504, 974
Ginzburg, V. L. & Syrovatskii, S. I. 1964, The Origin of Cosmic Rays. Oxford: Pergamon Press.
Gladstone, J. C., Copperwheat, C., Heinke, C. O., et al. 2013, ApJS, 206, 14
Grimaldo, E., Reimer, A., Kissmann, R., Niederwanger, F., & Reitberger, K. 2019, ApJ, 871, 55
Kaaret, P., Feng, H., & Roberts, T. P. 2017, ARA&A, 55, 303
Kato, S. & Fukue, J. 2020, Fundamentals of Astrophysical Fluid Dynamics. Singapore: Springer.
Khangulyan, D., Aharonian, F. A., Bogovalov, S. V., & Ribó, M. 2012, ApJ, 752, L17
King, A. R. 2009, MNRAS, 393, L41
King, A. R., Davies, M. B., Ward, M. J., Fabbiano, G., & Elvis, M. 2001, ApJ, 552, L109
Kobulnicky, H. A., Chick, W. T., & Povich, M. S. 2019, AJ, 158, 73
Koliopanos, F., Vasilopoulos, G., Godet, O., et al. 2017, A&A, 608, A47
Kosec, P., Pinto, C., Fabian, A. C., & Walton, D. J. 2018, MNRAS, 473, 5680
Lamers, H. J. G. L. M. & Cassinelli, J. P. 1999, Introduction to Stellar Winds. Cambridge: Cambridge University Press.
Lee, H. M., Kang, H., & Ryu, D. 1996, ApJ, 464, 131
Leyder, J. C., Walter, R., & Rauw, G. 2008, A&A, 477, L29
Li, J., Torres, D. F., Liu, R.-Y., et al. 2020, Nature Astronomy, 4, 1177
Liu, J.-F. & Bregman, J. N. 2005, ApJS, 157, 59
McCray, R. & Snow, T. P., J. 1979, ARA&A, 17, 213
McKinley, J. M. 1980, American Journal of Physics, 48, 612
Mirabel, I. F. & Rodríguez, L. F. 1994, Nature, 371, 46
Muñoz-Darias, T., Casares, J., Mata Sánchez, D., et al. 2016, Nature, 534, 75
Muijres, L. E., Vink, J. S., de Koter, A., Müller, P. E., & Langer, N. 2012, A&A, 537, A37
Müller, A. L. & Romero, G. E. 2020, A&A, 636, A92
Myasnikov, A. V., Zhekov, S. A., & Belov, N. A. 1998, MNRAS, 298, 1021
Neilsen, J. & Lee, J. C. 2009, Nature, 458, 481
O'C Drury, L., Duffy, P., & Kirk, J. G. 1996, A&A, 309, 1002
Okazaki, A. T., Owocki, S. P., Russell, C. M. P., & Corcoran, M. F. 2008, MNRAS, 388, L39
Pittard, J. M., Romero, G. E., & Vila, G. S. 2021, MNRAS, 504, 4204
Pittard, J. M., Vila, G. S., & Romero, G. E. 2020, MNRAS, 495, 2205
Protheroe, R. J. 1999, in Topics in Cosmic-Ray Astrophysics. New York: Nova Science Publishing, ed. M. A. Duvernois, Vol. 230
Pshirkov, M. S. 2016, MNRAS, 457, L99
Raymond, J. C., Cox, D. P., & Smith, B. W. 1976, ApJ, 204, 290
Reimer, A., Pohl, M., & Reimer, O. 2006, ApJ, 644, 1118
Reitberger, K., Kissmann, R., Reimer, A., & Reimer, O. 2014, ApJ, 789, 87
Reynoso, M. M., Medina, M. C., & Romero, G. E. 2011, A&A, 531, A30
Romero, G. E., Del Valle, M. V., & Orellana, M. 2010, A&A, 518, A12
Romero, G. E. & Paredes, J. M. 2011, Introducción a la Astrofísica Relativista. Barcelona: Universitat de Barcelona.
Romero, G. E. & Vila, G. S. 2008, A&A, 485, 623
Romero, G. E. & Vila, G. S. 2014, Introduction to Black Hole Astrophysics. Heidelberg: Springer.
Rybicki, G. B. & Lightman, A. P. 1986, Radiative Processes in Astrophysics. New York: Wiley VCH.
Shakura, N. I. & Sunyaev, R. A. 1973, A&A, 500, 33
Shultz, M., Wade, G. A., Alecian, E., & BinaMIcS Collaboration. 2015, MNRAS, 454, L1
Sotomayor, P. & Romero, G. E. 2022, A&A, 664, A178
Stevens, I. R., Blondin, J. M., & Pollock, A. M. T. 1992, ApJ, 386, 265
Tavani, M., Sabatini, S., Pian, E., et al. 2009, ApJ, 698, L142
Tetarenko, A. J., Sivakoff, G. R., Miller-Jones, J. C. A., et al. 2017, MNRAS, 469, 3141
Tully, R. B., Courtois, H. M., Dolphin, A. E., et al. 2013, AJ, 146, 86
Wade, G. A. & MiMeS Collaboration. 2015, in Astronomical Society of the Pacific Conference Series, Vol. 494, Physics and Evolution of Magnetic and Related Stars, ed. Y. Y. Balega, I. I. Romanyuk, & D. O. Kudryavtsev, 30
Watarai, K.-y. & Fukue, J. 1999, PASJ, 51, 725
Wolfire, M. G., McKee, C. F., Hollenbach, D., & Tielens, A. G. G. M. 2003, ApJ, 587, 278
79FAT4oBgHgl3EQfoh2y/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
7NE3T4oBgHgl3EQfRgki/content/tmp_files/2301.04421v1.pdf.txt
ADDED
@@ -0,0 +1,1476 @@
Abstract— Motion prediction is essential for safe and efficient autonomous driving. However, the inexplicability and uncertainty of complex artificial intelligence models may lead to unpredictable failures of the motion prediction module, which may mislead the system to make unsafe decisions. Therefore, it is necessary to develop methods to guarantee reliable autonomous driving, where failure detection is a potential direction. Uncertainty estimates can be used to quantify the degree of confidence a model has in its predictions and may be valuable for failure detection. We propose a framework of failure detection for motion prediction from the uncertainty perspective, considering both motion uncertainty and model uncertainty, and formulate various uncertainty scores according to different prediction stages. The proposed approach is evaluated based on different motion prediction algorithms, uncertainty estimation methods, uncertainty scores, etc., and the results show that uncertainty is promising for failure detection for motion prediction but should be used with caution.
I. INTRODUCTION

Motion prediction is a hot topic in mobile robot and autonomous vehicle communities; accurate prediction of the future motion of surrounding traffic participants is fundamental to robust and reliable decision-making. Artificial intelligence (AI), especially deep learning, has been widely used in autonomous driving tasks by its advantages in dealing with complex problems. With the collection of large-scale data, the improvement of computing power and related algorithms, AI is expected to play a vital role in autonomous driving systems in the future [1].

However, although AI-based motion prediction has shown statistical performance advantages, it is difficult to avoid unpredictable failures due to the inherent inexplicability and insufficient reliability of deep learning models, which may cause serious autonomous driving accidents [2]. From the uncertainty perspective, motion prediction faces the dual challenge of uncertainty from the environment and the model. Drivers, pedestrians, etc. in the environment have uncertainty in their intentions and movements, which makes it difficult to accurately predict their future in all scenarios. Additionally, due to insufficient training data and training process, the model may experience serious performance degradation when faced with rare or unknown scenarios.
*Research supported by the National Science Foundation of China Project: U1964203 and 52072215, and the National Key R&D Program of China: 2020YFB1600303. (Corresponding author: Hong Wang)
Wenbo Shao, Liang Peng, Jun Li and Hong Wang are with School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China. (e-mail: {swb19, peng-l20}@mails.tsinghua.edu.cn; {lijun1958, hong_wang}@tsinghua.edu.cn)
Yanchao Xu is with School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China. (e-mail: [email protected])
[Fig. 1 diagram: the Main Model (Maneuver Classifier, Trajectory Predictor, Graph Model) feeds the scores UM and UT to a Failure Detector, which asks: "Is there a wrong maneuver classification or trajectory prediction?"]

Fig. 1. Uncertainty-based failure detection for motion prediction. UM, UT are the uncertainty scores extracted for maneuver classification and trajectory prediction, respectively.
The failure detection, isolation, and recovery mechanism is an effective way to solve the above problems [3]. Among them, the study of failure detection for AI models has attracted increasing interest, which is of critical significance for the development of reliable autonomous driving systems [4]. As shown in Fig. 1, using the information extracted from the main model, i.e. motion prediction model, a failure detector is built to identify maneuver classification errors and trajectory prediction errors. Uncertainty, as a measure of the confidence level of the model in its output, has been used by some researchers for failure detection in tasks such as semantic segmentation [5]. Our study exploits various uncertainties from motion prediction and explores their usefulness for failure detection.

In this work, we concentrate on failure detection for motion prediction from the uncertainty perspective. The main contributions are as follows:
- A framework of failure detection using uncertainty for motion prediction tasks, taking into account both motion uncertainty and model uncertainty.
- A series of uncertainty scores for failure detection formulated for different motion prediction stages and algorithms.
- A detailed evaluation and comparison with multiple motion prediction algorithms, uncertainty estimation methods and uncertainty scores.
II. RELATED WORK

A. Motion Prediction and Motion Uncertainty Estimation

Traditional motion prediction methods predict the future motion of the target agent (TA) based on its historical state by explicitly modeling kinematic models, such as Kalman Filter [5], [6], but they only apply to short-term prediction under scenarios with few interactions. In recent years, deep learning-based motion prediction [8]–[10] has demonstrated promising performance by simultaneously modeling TA's historical state, its interactions with surrounding traffic participants, and other environmental information in deep neural networks.

Failure Detection for Motion Prediction of Autonomous Driving: An Uncertainty Perspective*
Wenbo Shao, Yanchao Xu, Liang Peng, Jun Li, and Hong Wang

A broader review of deep learning-based motion prediction can be found in [11]. As for the model's output form, some studies regard motion prediction as a multipoint regression problem [12]–[14], so as to output the unimodal predicted trajectory. However, due to the diversity of intentions and the uncertainty of traffic participants' behaviors, the future trajectory distribution corresponding to one model input presents multiple possibilities. Recently, increasing researchers and prediction competitions have paid attention to multimodal motion prediction, which is generally divided into two stages: maneuver or target classification, and trajectory prediction. Some studies [15], [16] define maneuvers as specified behavior patterns, then train the maneuver classifier through supervised learning. For example, CS-LSTM [15] defines six maneuver modes for vehicles on highways, where the longitudinal maneuvers include normal driving and braking, and the lateral maneuvers include left lane change, right lane change, and lane keeping. The predicted maneuvers can serve as an important guide for future trajectory prediction. Other studies do not explicitly define specific behavior patterns before training, but guide the model to learn the optimal maneuver modes through model design and training process [17]–[19]. For example, Trajectron++ [18] adopts the conditional variational autoencoder (CVAE) to encode multimodality by introducing latent variables, and relies on a bivariate Gaussian Mixture Model (GMM) to model the final output.
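Such a two-stage multimodal output can be made concrete: the network emits mode probabilities plus a bivariate Gaussian per mode, and the density of a candidate future position is the weighted mixture. A minimal NumPy sketch; the two modes and all numbers below are purely illustrative, not taken from Trajectron++ or any other cited model:

```python
import numpy as np

def gmm_density(xy, weights, means, covs):
    """Density of the 2D point xy under a bivariate Gaussian mixture:
    p(xy) = sum_k w_k * N(xy; mu_k, Sigma_k)."""
    p = 0.0
    for w, mu, cov in zip(weights, means, covs):
        d = xy - mu
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
        p += w * norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
    return p

# Illustrative two-mode prediction for one future time step:
weights = np.array([0.7, 0.3])               # mode (maneuver) probabilities
means = np.array([[1.0, 0.0], [0.8, 0.5]])   # predicted positions per mode [m]
covs = np.array([np.eye(2) * 0.1, np.eye(2) * 0.2])

p = gmm_density(np.array([1.0, 0.1]), weights, means, covs)
```

Evaluating the mixture at the observed future position yields the likelihood that such models typically use in training, and that likelihood-based uncertainty scores can build on.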
B. Model Uncertainty Estimation

The above multimodal prediction algorithms model the uncertainty in the traffic participants' movements. In addition, deep learning models have inherent uncertainty, generally called model uncertainty or epistemic uncertainty [20], which is difficult to ignore in the real world where there are distribution shifts or out-of-distribution data. Bayesian neural network (BNN) [21]–[23] is a representative method for estimating model uncertainty, in which Bayesian inference plays an important role. Methods such as Monte-Carlo dropout [24], [25] achieve approximate inference through sampling, and they further promote the generality and popularity of BNN. Besides, deep ensemble [26]–[28], as a simple and scalable method, has shown promising performance in model uncertainty estimation and thus has attracted many researchers and practitioners. As the representative method requiring only a single forward pass, evidential deep learning (EDL) [29] computes the uncertainty of the output distribution by modeling the prior distribution for the classification.
|
172 |
+
C. Failure Detection for Autonomous Driving
Failure detection is attracting attention as a technology for achieving reliable autonomous driving. It uses the main model's input, internal features, or output to diagnose whether there is a failure. Learning-based approaches build a specialized model to act as the failure detector, which identifies failures of the main model by using failure cases for supervised training [30]–[32] or by estimating reconstruction errors [33]–[35]. In addition, uncertainty-based anomaly detection has attracted some interest, such as detecting misclassified or out-of-distribution examples through the maximum softmax probabilities directly output by classification networks [36] or through predictive entropy quantization that takes model uncertainty into account [26]. However, to the best of our knowledge, most current research on failure detection for autonomous driving focuses on perception tasks, such as semantic segmentation and depth estimation [5], and failure detection for motion prediction models from the uncertainty perspective has rarely been discussed.
Our approach utilizes both motion uncertainty and model uncertainty, proposes uncertainty scores for different stages of motion prediction, and investigates the effect of motion prediction failure detection based on the different scores.
III. METHODOLOGY
A. Problem Setting
Motion prediction is the task of predicting the TA's trajectory over a period of time in the future given input information. Assuming the current moment is t_0, the input information may include the TA's historical state S = [s_{t_0 - t_h + 1}, s_{t_0 - t_h + 2}, ..., s_{t_0}] over the past t_h timesteps, the historical states of the TA's surrounding traffic participants, and other contextual information such as maps, which are uniformly represented here by C. Among them, s_t may contain the TA's information such as its position, speed, and category at time t. The output is the predicted position Ŷ of the TA over the future t_f timesteps:

    Ŷ = f(S, C)    (1)

with Ŷ = [d̂_1, d̂_2, ..., d̂_{t_f}] consisting of the t_f predicted positions d̂_t. For multimodal motion prediction, Ŷ contains predicted trajectories under multiple maneuvers.
Failure detection for motion prediction refers to identifying potential motion prediction failures by monitoring the model's state, where failures may take the form of maneuver misclassification or excessive error of the predicted trajectories. Uncertainty, as a measure of the TA's behavior or the model's state, reflects the model's confidence in its particular output and thus has the potential to diagnose potential prediction failures. This work proposes to detect the performance degradation of motion prediction models, i.e., the decrease in the accuracy of prediction results, by quantifying uncertainty scores.
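As a concrete illustration of this formulation, the input S can be viewed as a (t_h, state_dim) array and the output Ŷ as a (t_f, 2) array of positions. This is only a sketch; the array shapes and the constant-velocity predictor below are our own illustrative placeholders, not the paper's implementation:

```python
import numpy as np

t_h, t_f = 6, 6        # observation / prediction horizons (e.g. 2 Hz, 3 s each)
state_dim = 4          # e.g. (x, y, speed, category id) per timestep

S = np.zeros((t_h, state_dim))   # TA's historical states [s_{t0-th+1}, ..., s_{t0}]
C = {}                           # context: surrounding agents, map, ... (placeholder)

def f(S, C):
    # hypothetical constant-velocity baseline standing in for the learned model
    velocity = S[-1, :2] - S[-2, :2]
    return np.stack([S[-1, :2] + (i + 1) * velocity for i in range(t_f)])

Y_hat = f(S, C)                  # predicted positions [d_1, ..., d_{t_f}]
print(Y_hat.shape)               # (6, 2)
```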
B. Motion Prediction with Motion Uncertainty Estimation
Due to the unavailability of the TA's actual intentions and the randomness of its behavior, it may have multiple possible future trajectories. GRIP++ is an enhanced graph-based interaction-aware trajectory prediction algorithm; it models inter-agent interactions and temporal features but only predicts future trajectories in a single mode. As shown in Fig. 2, we add a maneuver classification module to GRIP++, distinguishing different behavioral patterns to improve the authenticity and usability of the prediction results. The new method is called GRIP+++.
We focus on two stages in the proposed method: maneuver classification and maneuver-based trajectory prediction. In the maneuver classification stage, given the TA's historical state and scene context, the feature G is extracted through the graph convolutional model (GCN), which includes the processing of fixed and trainable graphs. Then the TA's maneuver probability P(z | G) is inferred by a multilayer perceptron (MLP), where z ∈ {1, 2, ..., Z} represents one of the defined maneuver modes. In CS-LSTM [15], the modes are divided into three types of lateral maneuvers and two types of longitudinal maneuvers, but they are only applicable to vehicles driving on highways; we instead define a common set of maneuver modes suitable for various scenarios. Specifically, the TA's maneuvers are divided into four categories according to its movement direction and speed: going straight, turning left, turning right, and stopping. In the network, we adopt a softmax head for probabilistic maneuver classification.

Fig. 2. The architecture of GRIP+++ (the graph convolutional model feeds a maneuver classification module and, via concatenation of the graph features with the maneuver features, a trajectory prediction module that outputs the predicted trajectories and maneuver probabilities).
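For illustration, the four maneuver categories can be assigned to a trajectory with a simple heuristic, e.g. by thresholding average speed and net heading change. This is a hypothetical labeling rule of our own for this sketch; the paper does not state its exact thresholds:

```python
import numpy as np

def label_maneuver(traj, stop_speed=0.5, turn_angle=np.pi / 6, dt=0.5):
    """Assign one of four maneuver modes to a trajectory of (x, y) points.

    traj: (T, 2) array. Thresholds are illustrative, not from the paper.
    """
    steps = np.diff(traj, axis=0)
    speed = np.linalg.norm(steps, axis=1).mean() / dt
    if speed < stop_speed:                      # almost no displacement
        return "stopping"
    a0 = np.arctan2(steps[0, 1], steps[0, 0])   # initial heading
    a1 = np.arctan2(steps[-1, 1], steps[-1, 0]) # final heading
    heading_change = (a1 - a0 + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    if heading_change > turn_angle:
        return "turning left"
    if heading_change < -turn_angle:
        return "turning right"
    return "going straight"

straight = np.stack([np.arange(7.0), np.zeros(7)], axis=1)
print(label_maneuver(straight))   # going straight
```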
The maneuver-based trajectory prediction module consists of seq2seq networks that take as input the concatenation of the graph feature G and the feature vector transformed from the maneuver z, and output the future trajectory Ŷ_z under the maneuver z.
To compare the generality of uncertainty-based failure detection across different motion prediction mechanisms, we employ another two classes of typical prediction algorithms. First, we focus on multimodal trajectory prediction based on generative models, so we adopt Trajectron++ [18], which utilizes a CVAE-based latent network framework to model multimodal future trajectories, where the discrete categorical latent variable z ∈ {1, 2, ..., Z} encodes high-level behavior patterns:

    P(Ŷ | S, C) = Σ_{z=1}^{Z} P_ψ(Ŷ | S, C, z) P_θ(z | S, C)    (2)

where θ and ψ are deep neural network parameters.
Furthermore, we use PGP [16] as a comparison; it is a multimodal trajectory prediction method combining graph traversal, latent vector sampling, and clustering. It models a discrete policy for graph traversal by representing HD maps as lane graphs, and implements diverse trajectory prediction combined with random sampling of latent vectors for longitudinal variability. It then uses K-means clustering to obtain Z predicted trajectories. With its clever design, PGP achieved state-of-the-art results on almost all metrics of the nuScenes leaderboard when it was proposed.
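Equation (2) describes a discrete mixture: sampling proceeds by first drawing a behavior mode z and then drawing a trajectory from the corresponding conditional. A minimal sketch of this two-step sampling, with hypothetical Gaussian conditionals standing in for the learned decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

Z, t_f = 3, 6
# hypothetical learned quantities: P_theta(z | S, C) and per-mode mean trajectories
p_z = np.array([0.6, 0.3, 0.1])              # mode probabilities, sum to 1
mode_means = rng.normal(size=(Z, t_f, 2))    # decoder means per mode
sigma = 0.1                                  # decoder std (isotropic, illustrative)

def sample_trajectory():
    z = rng.choice(Z, p=p_z)                 # z ~ P_theta(z | S, C)
    return mode_means[z] + sigma * rng.normal(size=(t_f, 2))  # Y ~ P_psi(. | S, C, z)

samples = np.stack([sample_trajectory() for _ in range(100)])
print(samples.shape)   # (100, 6, 2)
```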
C. Model Uncertainty Estimation
As mentioned above, deep ensemble has certain advantages in model uncertainty estimation, so we design a prediction approach based on it that simultaneously integrates model uncertainty and motion uncertainty estimation. Specifically, we use random initialization of the model parameters and random shuffling of the training data to train K models that are homogeneous in structure but heterogeneous in parameters, then estimate uncertainty based on the K sets of outputs Ŷ_k, k ∈ {1, 2, ..., K}.
In addition, EDL, as a method that captures multiclass uncertainty at low computational cost, is also exploited to estimate the model uncertainty of the maneuver classification module. Specifically, the Dirichlet distribution is considered as the prior distribution for the classification:

    D(P | α) = (1 / B(α)) Π_{z=1}^{Z} P_z^{α_z − 1}   for P ∈ S_Z,  0 otherwise    (3)

where α = [α_1, ..., α_Z] are the distribution parameters, α_z = e_z + 1 with e_z the evidence, B(α) is the multivariate Beta function, and S_Z is the Z-dimensional unit simplex.
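The deep-ensemble recipe above (random initialization plus random shuffling, then averaging the K predictive distributions) can be sketched on a toy softmax classifier. The tiny model, data, and training loop are illustrative placeholders, not the paper's networks:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def train_one_model(X, y, seed, epochs=100, lr=0.1):
    """Tiny softmax classifier trained with a seed-specific init and shuffle."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], y.max() + 1))  # random init
    for _ in range(epochs):
        for i in rng.permutation(len(X)):                       # random shuffling
            p = softmax(X[i] @ W)
            p[y[i]] -= 1.0                                      # dCE/dlogits
            W -= lr * np.outer(X[i], p)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

K = 5                                                  # ensemble size, as in the paper
ensemble = [train_one_model(X, y, seed=k) for k in range(K)]
probs = np.stack([softmax(X @ W) for W in ensemble])   # (K, N, Z)
mean_prob = probs.mean(axis=0)                         # ensemble predictive distribution
print(mean_prob.shape)
```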
D. Uncertainty Scores Design
In our work, different uncertainty scores are proposed for failure detection. Considering the different problem forms of the maneuver classification and trajectory prediction tasks, we formulate corresponding scores for both.
For the maneuver classification task combined with deep ensemble, we formulate the following uncertainty scores, referring to the definitions in [37].
Total entropy (TE) for maneuver classification is quantified to represent the total uncertainty, considering both the model uncertainty and the motion uncertainty:

    TE = H[ (1/K) Σ_{k=1}^{K} P(z | S, C, θ_k) ]    (4)

where θ_k are the parameters of the k-th model of the deep ensemble, H[·] denotes the entropy, and D denotes the training set on which the models are trained.
Data entropy (DE) for maneuver classification is quantified to represent the average of the data uncertainty from the different models. The larger the value, the higher the motion uncertainty estimated by the deep-ensemble prediction models:

    DE = (1/K) Σ_{k=1}^{K} H[ P(z | S, C, θ_k) ]    (5)

Mutual information (MI) is quantified to represent the model uncertainty. As it increases, the degree of difference between the prediction results of the multiple models increases, which to a certain extent reflects a reduction in the models' confidence in their classification results:

    MI[z, θ | S, C, D] = TE − DE    (6)

The maximum predicted probability [38] is also considered, and its inverse (negative maximum softmax probability, NMaP) is calculated as an uncertainty score.
As for the EDL-based method, the types of uncertainty scores discussed above are also quantified for comparison, and their formulas are derived according to (3)–(6). Additionally, we consider the metric suggested in [29]:

    u = Z / Σ_{z=1}^{Z} α_z    (7)
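Given the K per-model softmax outputs for one sample, the scores (4)-(7) can be computed directly. A sketch, where the probabilities and the evidence vector are made-up numbers:

```python
import numpy as np

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps))

# K = 3 ensemble members, Z = 4 maneuver classes (illustrative probabilities)
probs = np.array([[0.70, 0.15, 0.10, 0.05],
                  [0.55, 0.30, 0.10, 0.05],
                  [0.60, 0.25, 0.05, 0.10]])

mean_p = probs.mean(axis=0)
TE = entropy(mean_p)                          # eq. (4): total uncertainty
DE = np.mean([entropy(p) for p in probs])     # eq. (5): data (motion) uncertainty
MI = TE - DE                                  # eq. (6): model uncertainty
NMaP = -mean_p.max()                          # negative max softmax probability

# EDL score, eq. (7): u = Z / sum(alpha), with alpha_z = e_z + 1
evidence = np.array([8.0, 2.0, 1.0, 0.0])     # hypothetical evidence output
alpha = evidence + 1.0
u = len(alpha) / alpha.sum()

print(round(TE, 3), round(DE, 3), round(MI, 3), round(NMaP, 3), round(u, 3))
```

Note that MI is always non-negative here, since the entropy of the mean distribution upper-bounds the mean entropy (Jensen's inequality).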
Trajectory prediction involves multiple trajectories output by one or more models, where each trajectory contains position information for multiple future moments. Referring to the usual error metrics [8], [12], [18], average displacement error (ADE) and final displacement error (FDE), we define two basic metrics, average predictive entropy (APE) and final predictive entropy (FPE), to represent the uncertainty formed by multiple trajectories:

    APE = (1/t_f) Σ_{t=1}^{t_f} [ ln(2π) + 1 + (1/2) ln |Σ̂_{d̂_t}| ]    (8)

    FPE = ln(2π) + 1 + (1/2) ln |Σ̂_{d̂_{t_f}}|    (9)

where, for the different predicted trajectories of the same input, the predicted position d̂_t at the same time is assumed to follow a two-dimensional Gaussian distribution, with Σ̂_{d̂_t} its estimated covariance at time t.
Based on the above two basic metrics, different types of uncertainty scores are defined according to the source of the different predicted trajectories (such as different sub-models, different maneuvers, or both), which may represent model uncertainty, motion uncertainty, or both.
|
715 |
+
A. Experimental Setup
|
716 |
+
1) Model Implementation: For the training of GRIP+++,
|
717 |
+
inspired by [15], we adopt a two-stage training approach. In
|
718 |
+
the first stage, we focus on improving the trajectory
|
719 |
+
prediction accuracy under the real maneuver, by training the
|
720 |
+
model as a regression task at each time:
|
721 |
+
|
722 |
+
,
|
723 |
+
1
|
724 |
+
ˆ
|
725 |
+
1
|
726 |
+
ft
|
727 |
+
t z
|
728 |
+
t
|
729 |
+
reg
|
730 |
+
f
|
731 |
+
t
|
732 |
+
L
|
733 |
+
t
|
734 |
+
|
735 |
+
|
736 |
+
|
737 |
+
Y
|
738 |
+
Y
|
739 |
+
(10)
|
740 |
+
where
|
741 |
+
,ˆ
|
742 |
+
t z
|
743 |
+
Y and
|
744 |
+
t
|
745 |
+
Y are predicted positions for true maneuver
|
746 |
+
z and ground truth at time t respectively.
|
747 |
+
In the second stage, we additionally consider the loss of
|
748 |
+
maneuver classification by adding the cross-entropy loss:
|
749 |
+
|
750 |
+
reg
|
751 |
+
man
|
752 |
+
L
|
753 |
+
L
|
754 |
+
L
|
755 |
+
|
756 |
+
|
757 |
+
|
758 |
+
|
759 |
+
(11)
|
760 |
+
where
|
761 |
+
|
762 |
+
|
763 |
+
|
764 |
+
|
765 |
+
log P
|
766 |
+
,
|
767 |
+
|
|
768 |
+
man
|
769 |
+
L
|
770 |
+
z
|
771 |
+
|
772 |
+
S C
|
773 |
+
, is the weighting factor, and
|
774 |
+
z is the true maneuver label. Besides, in the implementation
|
775 |
+
of GRIP+++, the trajectories are sampled at 2Hz, with an
|
776 |
+
observation length of 3s and a prediction horizon of 3s.
|
777 |
+
As for the implementation of Trajectron++ [18] and PGP
|
778 |
+
[16], we follow their original model design and training
|
779 |
+
scheme. For deep ensemble, we set
|
780 |
+
5
|
781 |
+
K
|
782 |
+
, a scheme
|
783 |
+
considered cost-controllable and sufficiently efficient. To
|
784 |
+
achieve EDL, referring to [29], we incorporate a
|
785 |
+
Kullback-Leibler (KL) divergence term into our loss function
|
786 |
+
to avoid unnecessary uncertainty reduction.
|
787 |
+
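The second-stage objective (11) combines the regression loss (10) with a weighted cross-entropy term. A sketch with made-up tensors; the value of λ and all shapes here are illustrative, not the trained configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
t_f = 6

Y = rng.normal(size=(t_f, 2))                   # ground-truth positions
Y_hat_z = Y + 0.1 * rng.normal(size=(t_f, 2))   # prediction under true maneuver z
p_maneuver = np.array([0.1, 0.7, 0.15, 0.05])   # softmax over the 4 maneuvers
z_true = 1                                      # true maneuver label
lam = 1.0                                       # weighting factor (illustrative)

L_reg = np.mean(np.linalg.norm(Y_hat_z - Y, axis=1))   # eq. (10)
L_man = -np.log(p_maneuver[z_true])                    # cross-entropy term
L = L_reg + lam * L_man                                # eq. (11)
print(L > 0)
```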
2) Dataset: The proposed motion prediction models and failure detectors are trained and validated on real traffic datasets. Specifically, GRIP+++ and its failure detectors are trained on SinD and tested on SinD and INTERACTION, respectively. The Trajectron++, PGP, and their failure detection experiments are carried out on the nuScenes dataset.
The SinD [39] dataset consists of 13248 recorded trajectories from a signalized intersection. The traffic participant classes include car, truck, bus, tricycle, bike, motorcycle, and pedestrian. The INTERACTION [40] dataset contains motion data collected in four categories of scenarios, from which we adopt the TC_intersection_VA (VA) subset that also belongs to the signalized-intersection category; it provides 3775 trajectories covering around 60 minutes. The nuScenes [41] dataset is a large-scale self-driving car dataset with 1000 scenes, each of which contains 20 s of object annotations and HD semantic maps.
3) Evaluation methodology: We set the evaluation methodology separately for failure detection in the two stages of the prediction task. Maneuver classification is a classification task, and a good failure detector is expected to assign higher uncertainty scores to misclassified cases. Therefore, we adopt the area under the receiver operating characteristic curve (AUROC) as the basic evaluation metric. However, AUROC does not reflect the impact of adding the uncertainty estimation module on the original prediction algorithm. Therefore, we also plot the cut-off curve, which evaluates the average accuracy of the remaining data after filtering out a certain percentage of the data in descending order of uncertainty. The area under the cut-off curve (AUCOC) is regarded as an overall evaluation of the prediction model with the failure detector, with a larger value indicating better performance.
For the trajectory prediction task, AUROC is not suitable, so we use the cut-off curve as the evaluation methodology. Unlike maneuver classification, the curve here is drawn by calculating the average prediction error of the remaining data, so a smaller AUCOC represents better performance.
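The cut-off curve can be computed by sorting samples by uncertainty, removing the most uncertain fraction, and averaging the metric over what remains; AUCOC is then the average over all cut-off fractions. A sketch of this procedure with synthetic per-sample errors (the data and the gamma error model are made up):

```python
import numpy as np

def aucoc(metric, uncertainty):
    """Area under the cut-off curve: mean metric of the retained samples as
    the most-uncertain samples are filtered out one at a time."""
    order = np.argsort(-uncertainty)          # most uncertain first
    kept = metric[order][::-1]                # least uncertain at the front
    # mean of the retained data for every possible cut-off fraction
    curve = np.cumsum(kept) / np.arange(1, len(kept) + 1)
    return float(curve.mean())

rng = np.random.default_rng(0)
err = rng.gamma(2.0, 1.0, size=1000)          # per-sample prediction errors
good = aucoc(err, uncertainty=err)            # oracle: uncertainty = true error
rand = aucoc(err, uncertainty=rng.random(1000))
print(good < rand)   # True for an error metric (smaller AUCOC is better)
```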
B. Failure Detection for Maneuver Classification
Regarding failure detection for maneuver classification, we set up several experiments to answer the following questions.

Fig. 3. Uncertainty distribution for correctly classified and misclassified samples. Experimental results of GRIP+++ based on deep ensemble.

How different are the distributions of uncertainty scores for correct and misclassified cases? An effective uncertainty-based failure detector is built on the assumption that the uncertainty score level has a strong correlation with the correctness of the prediction. As shown in Fig. 3, the uncertainty scores of the correctly predicted maneuvers are generally relatively low, while the incorrectly predicted cases generally have high uncertainty scores. Meanwhile, there is a relatively obvious separation between the two distributions, especially for TE, DE, and NMaP. Therefore, it is preliminarily inferred that the uncertainty scores have the potential for failure detection.
Differences between different uncertainty scores for failure detection? As indicated previously, various uncertainty scores can be extracted from the deep ensemble-based maneuver classification network, so here we set up experiments to compare the effects of the different scores as references for failure detection. The second row of TABLE I shows the results: NMaP, TE, and DE achieve better failure detection performance when used as uncertainty scores, where the total uncertainty considering both motion and model uncertainty is slightly better than the motion uncertainty alone. NMaP is relatively simple to calculate and has a strong detection ability. Furthermore, although MI, which represents the model uncertainty, reflects the reduced confidence of the model when faced with unknown scenarios (as in TABLE II), its performance is relatively weak when used alone as the reference for failure detection. In Fig. 4, the cut-off curves and AUCOC corresponding to the different uncertainty scores are further compared. Their performance has a great advantage over the random filtering method and is close to the optimal situation, and the relative relationship between the different uncertainty scores is consistent with TABLE I.
TABLE I. AUROC (↑) FOR THE MANEUVER CLASSIFICATION STAGE OF GRIP+++

            TE     DE     MI     NMaP   u
Ensemble    0.911  0.903  0.864  0.918  -
Model 1     -      0.871  -      0.867  -
Model 2     -      0.868  -      0.864  -
Model 3     -      0.871  -      0.867  -
Model 4     -      0.868  -      0.864  -
Model 5     -      0.863  -      0.858  -
EDL         0.912  0.909  0.911  0.912  0.910
TABLE II. AVERAGE UNCERTAINTY OBTAINED BY DEEP ENSEMBLE-BASED GRIP+++ TRAINED ON SIND, AND TESTED ON IN-DISTRIBUTION DATA (SIND) AND OUT-OF-DISTRIBUTION DATA (VA), RESPECTIVELY

        TE     DE     MI     NMaP
SinD    0.318  0.250  0.068  -0.877
VA      0.303  0.198  0.105  -0.879
Fig. 4. Cut-off curves and AUCOC (↑). The optimal curve is drawn by directly using the classification error as the filtering reference; the random curve is drawn by filtering the data in random order.
TABLE III. AUCOC (↑) FOR THE MANEUVER CLASSIFICATION STAGE OF GRIP+++. MODEL I IS THE RESULT FROM THE I-TH MODEL IN THE DEEP ENSEMBLE

            TE     DE     MI     NMaP   u
Ensemble    0.988  0.987  0.984  0.989  -
Model 1     -      0.981  -      0.982  -
Model 2     -      0.980  -      0.981  -
Model 3     -      0.981  -      0.982  -
Model 4     -      0.980  -      0.980  -
Model 5     -      0.979  -      0.979  -
EDL         0.978  0.978  0.978  0.978  0.978
Uncertainty scores based on deep ensemble vs. uncertainty scores based on a single model? Here, we obtain DE and NMaP from each single model in the deep ensemble, and they are further used for failure detection on the maneuver classification module of the corresponding model. From the comparison of rows 2-7 of TABLE I, although the uncertainty scores extracted from a single model have a certain failure detection ability, they are not as good as the failure detector based on the deep ensemble. In addition, it is also concluded from the comparison of rows 2-7 of TABLE III that the introduction of the deep ensemble is beneficial for improving the maneuver classification performance combined with failure-detector filtering.
How well do the EDL-based uncertainty scores perform? As a comparison, we employ EDL to extract uncertainty scores and evaluate their performance for failure detection. TABLE I shows that using the uncertainty scores extracted by EDL as references for the failure detector achieves results comparable to deep ensemble. However, TABLE III shows that the overall accuracy after filtering the data based on these uncertainty scores is not high. One possible reason is that the regularization term added by EDL during the training process causes a drop in the prediction performance of the main model, which in turn weakens the effect of motion prediction with failure detection.
C. Failure Detection for Trajectory Prediction
As for failure detection for trajectory prediction, we design some experiments to answer the following questions.
How well does the failure detector based on uncertainty scores from multiple trajectories perform? For the prediction error, considering the K predicted trajectories under the real maneuver z, we calculate the minimum (minADEz, minFDEz) and mean (meanADEz, meanFDEz) of the errors of the K trajectories, and the error of their average trajectory (ADEz,avg, FDEz,avg). We calculate APEz and FPEz of the above K trajectories to estimate the predictive uncertainty. As a comparison, we calculate the uncertainty of the average trajectories of the K models under the different maneuvers (APEavg, FPEavg), which to some extent represents the motion uncertainty. In TABLE IV, each column represents an error metric and each row represents the corresponding uncertainty score used for failure detection (except rows 1-3). Comparing rows 2-5 of the two sub-tables, APEz and FPEz have stronger failure detection potential than APEavg and FPEavg.
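The error variants used above can be written down directly for K trajectories against the ground truth. A sketch with random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
K, t_f = 5, 6
Y = rng.normal(size=(t_f, 2))                    # ground-truth trajectory
trajs = Y + 0.3 * rng.normal(size=(K, t_f, 2))   # K predictions under maneuver z

per_step = np.linalg.norm(trajs - Y, axis=2)     # (K, t_f) displacement errors
ade = per_step.mean(axis=1)                      # ADE of each trajectory
fde = per_step[:, -1]                            # FDE of each trajectory

min_ade, mean_ade = ade.min(), ade.mean()        # minADE_z, meanADE_z
avg_traj = trajs.mean(axis=0)                    # average trajectory
ade_avg = np.linalg.norm(avg_traj - Y, axis=1).mean()  # ADE_{z,avg}

print(min_ade <= mean_ade)   # True by construction
```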
Are the uncertainty scores extracted in the maneuver classification stage applicable to the trajectory prediction stage? Theoretically, the uncertainty scores obtained in the maneuver classification stage represent the confidence of the model in the current scene, so they may be suitable for failure detection in the trajectory prediction stage. We conduct some experiments to explore this question, with the results recorded in rows 6-9 of the two sub-tables of TABLE IV. Compared with the above trajectory uncertainty scores, the uncertainty extracted in the maneuver classification stage has limited potential for detecting high-error trajectories. One possible reason is that the uncertainty scores calculated directly from the trajectories implicitly take into account information such as the velocity and acceleration of the object, and thus have a greater correlation with the trajectory error.
How well does the failure detection generalize to scenarios with larger distributional shifts? Here, we use the VA dataset to test the model trained on SinD, with results shown in TABLE V and TABLE VI. Compared with TABLES I, III, and IV, when faced with larger distributional shifts, while the reduction in the prediction accuracy of the main model leads to a worsening of AUCOC, the decrease in failure detection ability (such as AUROC) is relatively small.
TABLE IV. AUCOC (↓) / IMPROVEMENT RATIO (IR)1 (↑) FOR THE TRAJECTORY PREDICTION STAGE OF GRIP+++

          minADEz      meanADEz     ADEz,avg
Optimal   0.066        0.096        0.088
Random    0.259        0.345        0.330
APEz      0.119/0.725  0.143/0.813  0.139/0.790
APEavg    0.136/0.636  0.172/0.694  0.166/0.677
TE        0.170/0.459  0.228/0.469  0.218/0.464
DE        0.170/0.457  0.229/0.466  0.218/0.462
MI        0.169/0.462  0.227/0.476  0.216/0.470
NMaP      0.170/0.461  0.228/0.472  0.217/0.467

          minFDEz      meanFDEz     FDEz,avg
Optimal   0.114        0.182        0.164
Random    0.522        0.718        0.686
FPEz      0.249/0.670  0.301/0.779  0.293/0.754
FPEavg    0.278/0.599  0.358/0.672  0.345/0.654
TE        0.361/0.395  0.493/0.420  0.471/0.413
DE        0.362/0.393  0.494/0.417  0.472/0.410
MI        0.359/0.400  0.489/0.428  0.467/0.420
NMaP      0.360/0.397  0.491/0.423  0.497/0.416
TABLE V. RESULTS FOR THE MANEUVER CLASSIFICATION STAGE OF GRIP+++ WITH DEEP ENSEMBLE, WHICH IS TRAINED ON SIND AND TESTED ON VA

         TE     DE     MI     NMaP
AUROC    0.914  0.915  0.863  0.912
AUCOC    0.978  0.978  0.971  0.978
TABLE VI. AUCOC_OPTIMAL / AUCOC_UNCERTAINTY (↓) / AUCOC_RANDOM / IR (↑) FOR THE TRAJECTORY PREDICTION STAGE OF GRIP+++ WITH DEEP ENSEMBLE, WHICH IS TRAINED ON SIND AND TESTED ON VA

        minADEz                  meanADEz                 ADEz,avg
APEz    0.088/0.210/0.445/0.656  0.125/0.238/0.565/0.744  0.117/0.234/0.550/0.730

        minFDEz                  meanFDEz                 FDEz,avg
FPEz    0.158/0.491/0.991/0.601  0.243/0.550/1.262/0.699  0.228/0.543/1.232/0.686
TABLE VII. AUCOCoptimal/AUCOCuncertainty (↓)/AUCOCrandom/IR (↑) FOR TRAJECTRON++ ON NUSCENES

                 Single model               Deep ensemble
(mean)minADE     0.088/0.167/0.378/0.730    0.096/0.160/0.384/0.778
(mean)minFDE     0.132/0.308/0.689/0.683    0.151/0.293/0.702/0.742
(mean)meanADE    0.322/0.386/1.045/0.912    0.339/0.394/1.040/0.922
(mean)meanFDE    0.608/0.754/2.096/0.902    0.637/0.763/2.082/0.913
minminADE        -                          0.055/0.112/0.234/0.682
meanmaxpADE      -                          0.181/0.280/0.801/0.841
TABLE VIII. AUCOCoptimal/AUCOCuncertainty (↓)/AUCOCrandom/IR (↑) FOR PGP ON NUSCENES; UC MEANS UNIFIED CLUSTERING

                 Single model               Deep ensemble
(mean)minADE     0.498/0.837/0.945/0.242    0.529/0.832/0.945/0.271
(mean)minFDE     0.623/1.273/1.554/0.302    0.747/1.249/1.548/0.373
minminADE        -                          0.367/0.628/0.708/0.234
meanmaxpADE      -                          1.538/2.497/3.115/0.392
minADE (uc)      -                          0.488/0.797/0.908/0.264
minFDE (uc)      -                          0.612/0.181/1.466/0.333
How well does uncertainty-based failure detection perform in generative model-based trajectory prediction? We adopt Trajectron++ combined with deep ensemble to extract multiple uncertainty scores as failure detection references. The results of this investigation are provided in TABLE VII, where minADE/minFDE/meanADE/meanFDE for the single model are calculated based on the 10 trajectories predicted by the single model, and the corresponding uncertainty scores for failure detection are APE/FPE/APE/FPE obtained from the 10 trajectories. In contrast, meanminADE/meanminFDE/meanmeanADE/meanmeanFDE/minminADE/meanmaxpADE for the deep ensemble are calculated based on 50 trajectories from all 5 ensemble models, where the first operator (mean/min) is applied across sub-models and the second operator (mean/min/maxp) is applied across the maneuvers in each model's output. The corresponding uncertainty scores for failure detection are meanAPE/meanFPE/meanAPE/meanFPE/APEall/APEmaxp, where meanAPE/meanFPE are obtained by averaging APE/FPE from the 5 sub-models, APEall is calculated directly from all 50 trajectories, and APEmaxp is calculated from the maximum-probability trajectory of each model. The results show promising performance of the uncertainty-based failure detector.

¹ IR is calculated by (AUCOCrandom − AUCOCuncertainty)/(AUCOCrandom − AUCOCoptimal), where AUCOCrandom, AUCOCoptimal, and AUCOCuncertainty represent the AUCOC based on the random sorting, the optimal sorting, and the uncertainty-score-based sorting, respectively.
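The IR definition in the footnote can be sanity-checked against TABLE IV. The sketch below is not part of the paper; it simply plugs the minADEz column values (Optimal 0.066, Random 0.259, APEz 0.119) into the formula:

```python
def improvement_ratio(aucoc_random: float, aucoc_uncertainty: float, aucoc_optimal: float) -> float:
    """IR = (AUCOC_random - AUCOC_uncertainty) / (AUCOC_random - AUCOC_optimal)."""
    return (aucoc_random - aucoc_uncertainty) / (aucoc_random - aucoc_optimal)

# minADEz column of TABLE IV: Random 0.259, APEz AUCOC 0.119, Optimal 0.066
ir = improvement_ratio(0.259, 0.119, 0.066)
print(round(ir, 3))  # 0.725, matching the APEz entry 0.119/0.725
```

IR equals 1 when the uncertainty-based sorting matches the optimal sorting and 0 when it is no better than random.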
Can the above uncertainty-based failure detection be simply applied to any trajectory prediction algorithm? In addition to the typical deep neural network architectures and modules, existing trajectory prediction algorithms may use various tricks, which can directly affect the uncertainty scores extracted from the output trajectories. We conduct exploratory experiments with PGP, a high-performance prediction algorithm integrating special tricks including traversal, sampling, and clustering, to analyze the performance of applying the uncertainty scores obtained from the output trajectories for failure detection. In addition, we apply deep ensemble to account for model uncertainty. From the evaluation results in TABLE VIII, we conclude that the performance of direct uncertainty quantification based on the output trajectories is not outstanding. Possible reasons include operations such as sampling latent vectors from an unconstrained normal distribution and clustering. This result reminds us that it is necessary to adapt uncertainty estimation methods and scores to each prediction algorithm's characteristics. For example, we propose a framework for unified clustering based on the outputs of all sub-models of the deep ensemble; the results in the last two rows of TABLE VIII show some improvement over the original model in trajectory prediction performance.
V. CONCLUSION

In this work, we propose a framework to detect motion prediction failures from the uncertainty perspective. We divide motion prediction tasks into two stages, maneuver classification and maneuver-based trajectory prediction, and formulate corresponding uncertainty scores for failure detection, where both motion uncertainty and model uncertainty are discussed. Our experiments cover comparisons across prediction tasks, multiple prediction algorithms, different uncertainty estimation methods, and various uncertainty scores. Finally, we observe that uncertainty quantification is promising for failure detection in motion prediction, with the potential to generalize to environments with larger distributional shifts. However, it is also necessary to conduct targeted discussions and designs for different prediction algorithms. Our future work will focus on integrating the proposed method with safety decision-making for autonomous driving, and on its implementation and validation on physical vehicle platforms.
REFERENCES
[1] A. Jain, L. Del Pero, H. Grimmett, and P. Ondruska, “Autonomy 2.0: Why is self-driving always 5 years away?” arXiv, Aug. 09, 2021. doi: 10.48550/arXiv.2107.08142.
[2] L. Plaza, “Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian,” PB2019-101402, Mar. 2018.
[3] J. Sifakis and D. Harel, “Trustworthy Autonomous System Development,” ACM Trans. Embed. Comput. Syst., Jun. 2022, doi: 10.1145/3545178.
[4] L. A. Dennis and M. Fisher, “Verifiable Self-Aware Agent-Based Autonomous Systems,” Proceedings of the IEEE, vol. 108, no. 7, pp. 1011–1026, Jul. 2020, doi: 10.1109/JPROC.2020.2991262.
[5] B. Sun, J. Xing, H. Blum, R. Siegwart, and C. Cadena, “See Yourself in Others: Attending Multiple Tasks for Own Failure Detection,” in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 8409–8416. doi: 10.1109/ICRA46639.2022.9812310.
[6] E. A. Wan and R. Van Der Merwe, “The unscented Kalman filter for nonlinear estimation,” in Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373), 2000, pp. 153–158. doi: 10.1109/ASSPCC.2000.882463.
[7] A. Cosgun et al., “Towards full automated drive in urban environments: A demonstration in GoMentum Station, California,” in 2017 IEEE Intelligent Vehicles Symposium (IV), Jun. 2017, pp. 1811–1818. doi: 10.1109/IVS.2017.7995969.
[8] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese, “Social LSTM: Human Trajectory Prediction in Crowded Spaces,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 961–971.
[9] J. Gao et al., “VectorNet: Encoding HD Maps and Agent Dynamics From Vectorized Representation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11525–11533.
[10] J. Gu, C. Sun, and H. Zhao, “DenseTNT: End-to-End Trajectory Prediction From Dense Goal Sets,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 15303–15312.
[11] S. Mozaffari, O. Y. Al-Jarrah, M. Dianati, P. Jennings, and A. Mouzakitis, “Deep Learning-Based Vehicle Behavior Prediction for Autonomous Driving Applications: A Review,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 1, pp. 33–47, 2022, doi: 10.1109/TITS.2020.3012034.
[12] X. Li, X. Ying, and M. C. Chuah, “GRIP++: Enhanced Graph-based Interaction-aware Trajectory Prediction for Autonomous Driving.” arXiv, May 19, 2020. doi: 10.48550/arXiv.1907.07792.
[13] X. Mo, Z. Huang, Y. Xing, and C. Lv, “Multi-Agent Trajectory Prediction With Heterogeneous Edge-Enhanced Graph Attention Network,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9554–9567, Jul. 2022, doi: 10.1109/TITS.2022.3146300.
[14] N. Djuric et al., “Uncertainty-aware Short-term Motion Prediction of Traffic Actors for Autonomous Driving,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020, pp. 2095–2104.
[15] N. Deo and M. M. Trivedi, “Convolutional Social Pooling for Vehicle Trajectory Prediction,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 1468–1476.
[16] N. Deo, E. Wolff, and O. Beijbom, “Multimodal Trajectory Prediction Conditioned on Lane-Graph Traversals,” in Proceedings of the 5th Conference on Robot Learning, Jan. 2022, pp. 203–212.
[17] H. Cui et al., “Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks,” in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 2090–2096. doi: 10.1109/ICRA.2019.8793868.
[18] T. Salzmann, B. Ivanovic, P. Chakravarty, and M. Pavone, “Trajectron++: Dynamically-Feasible Trajectory Forecasting with Heterogeneous Data,” in Computer Vision – ECCV 2020, Cham, 2020, pp. 683–700. doi: 10.1007/978-3-030-58523-5_40.
[19] T.-J. Hsieh, C.-S. Shih, C.-W. Lin, C.-W. Chen, and P.-K. Tsung, “Trajectory Prediction at Unsignalized Intersections using Social Conditional Generative Adversarial Network,” in 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Sep. 2021, pp. 844–851. doi: 10.1109/ITSC48978.2021.9564441.
[20] J. Gawlikowski et al., “A Survey of Uncertainty in Deep Neural Networks.” arXiv, Jan. 18, 2022. doi: 10.48550/arXiv.2107.03342.
[21] A. Kendall and Y. Gal, “What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?,” in Advances in Neural Information Processing Systems, 2017, vol. 30.
[22] C. Louizos and M. Welling, “Multiplicative Normalizing Flows for Variational Bayesian Neural Networks,” in Proceedings of the 34th International Conference on Machine Learning, Jul. 2017, pp. 2218–2227.
[23] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra, “Weight Uncertainty in Neural Network,” in Proceedings of the 32nd International Conference on Machine Learning, Jun. 2015, pp. 1613–1622.
[24] Y. Gal and Z. Ghahramani, “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning,” in Proceedings of The 33rd International Conference on Machine Learning, Jun. 2016, pp. 1050–1059.
[25] Y. Gal, J. Hron, and A. Kendall, “Concrete Dropout,” in Advances in Neural Information Processing Systems, 2017, vol. 30.
[26] B. Lakshminarayanan, A. Pritzel, and C. Blundell, “Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles,” in Advances in Neural Information Processing Systems, 2017, vol. 30.
[27] Y. Wen, D. Tran, and J. Ba, “BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning.” arXiv, Feb. 19, 2020. doi: 10.48550/arXiv.2002.06715.
[28] F. Wenzel, J. Snoek, D. Tran, and R. Jenatton, “Hyperparameter Ensembles for Robustness and Uncertainty Quantification,” in Advances in Neural Information Processing Systems, 2020, vol. 33, pp. 6514–6527.
[29] M. Sensoy, L. Kaplan, and M. Kandemir, “Evidential deep learning to quantify classification uncertainty,” in Advances in Neural Information Processing Systems, 2018, vol. 31.
[30] D. Hendrycks, M. Mazeika, and T. Dietterich, “Deep Anomaly Detection with Outlier Exposure.” arXiv, Jan. 28, 2019. doi: 10.48550/arXiv.1812.04606.
[31] C. B. Kuhn, M. Hofbauer, Z. Xu, G. Petrovic, and E. Steinbach, “Pixel-Wise Failure Prediction For Semantic Video Segmentation,” in 2021 IEEE International Conference on Image Processing (ICIP), Sep. 2021, pp. 614–618. doi: 10.1109/ICIP42928.2021.9506552.
[32] Q. M. Rahman, N. Sünderhauf, P. Corke, and F. Dayoub, “FSNet: A Failure Detection Framework for Semantic Segmentation,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 3030–3037, Apr. 2022, doi: 10.1109/LRA.2022.3143219.
[33] K. Lis, K. Nakka, P. Fua, and M. Salzmann, “Detecting the Unexpected via Image Resynthesis,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 2152–2161.
[34] D. Haldimann, H. Blum, R. Siegwart, and C. Cadena, “This is not what I imagined: Error Detection for Semantic Segmentation through Visual Dissimilarity.” arXiv, Sep. 02, 2019. doi: 10.48550/arXiv.1909.00676.
[35] L. Deecke, R. Vandermeulen, L. Ruff, S. Mandt, and M. Kloft, “Image Anomaly Detection with Generative Adversarial Networks,” in Machine Learning and Knowledge Discovery in Databases, Cham, 2019, pp. 3–17. doi: 10.1007/978-3-030-10925-7_1.
[36] D. Hendrycks and K. Gimpel, “A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks,” presented at the International Conference on Learning Representations, 2017.
[37] D. Feng, A. Harakeh, S. L. Waslander, and K. Dietmayer, “A Review and Comparative Study on Probabilistic Object Detection in Autonomous Driving,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 8, pp. 9961–9980, 2022, doi: 10.1109/TITS.2021.3096854.
[38] W. J. Maddox, P. Izmailov, T. Garipov, D. P. Vetrov, and A. G. Wilson, “A Simple Baseline for Bayesian Uncertainty in Deep Learning,” in Advances in Neural Information Processing Systems, 2019, vol. 32.
[39] Y. Xu et al., “SIND: A Drone Dataset at Signalized Intersection in China.” arXiv, Sep. 06, 2022. doi: 10.48550/arXiv.2209.02297.
[40] W. Zhan et al., “INTERACTION Dataset: An INTERnational, Adversarial and Cooperative moTION Dataset in Interactive Driving Scenarios with Semantic Maps.” arXiv, Sep. 30, 2019. doi: 10.48550/arXiv.1910.03088.
[41] H. Caesar et al., “nuScenes: A Multimodal Dataset for Autonomous Driving,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11621–11631.
7NE3T4oBgHgl3EQfRgki/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
99AzT4oBgHgl3EQfSvs8/content/tmp_files/2301.01236v1.pdf.txt
ADDED
@@ -0,0 +1,684 @@
For correspondence:

Funding: This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

A Tutorial on Parametric Variational Inference

Jens Sjölund¹
¹Department of Information Technology, Uppsala University, Sweden

Abstract
Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
Introduction
In Bayesian machine learning and statistics, the central object of interest is the posterior distribution found by Bayesian inference, combining prior beliefs with observations according to Bayes' rule. In simple cases, such as in conjugate models, this can be done exactly. But general (non-conjugate) models require approximate inference techniques such as Monte Carlo or variational inference. These have complementary strengths and weaknesses, hence the most appropriate choice is application dependent. We focus on variational inference, which on the one hand is not guaranteed to be asymptotically exact, but on the other hand is computationally efficient and scalable to high-dimensional models and large datasets.

Notation
We use a single observation variable $\mathbf{x}$ to denote both the observed inputs and outputs. Our primary interest is, however, in the latent variables $\mathbf{z}$. Since we adhere to the Bayesian framework, the “parameters” of a model (such as the slope and intercept in a linear regression model) that are assigned priors are actually latent variables. We denote remaining parameters of interest by $\boldsymbol{\theta}$.

Variational inference
So, why do we need variational inference? First, recall that to infer anything about the latent variables from our observations, we need the posterior:

$$p_{\boldsymbol{\theta}}(\mathbf{z} \mid \mathbf{x}) = \frac{p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})}{p_{\boldsymbol{\theta}}(\mathbf{x})}. \tag{1}$$

The expression in the denominator, $p_{\boldsymbol{\theta}}(\mathbf{x})$, is called the marginal likelihood of $\mathbf{x}$ because it can be rewritten as a marginalization over the latent variables:

$$p_{\boldsymbol{\theta}}(\mathbf{x}) = \int p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z}) \, d\mathbf{z}. \tag{2}$$

The catch is that in practice this integral is often intractable, i.e. not computable in closed form. Since $\mathbf{x}$ are our observations, the marginal likelihood is a (normalizing) constant. Nevertheless, without knowing this constant the utility of the posterior is limited. Hence the need for approximate inference.

Sjölund | arXiv | January 4, 2023 | 1–9
arXiv:2301.01236v1 [stat.ML] 3 Jan 2023
Key idea
The key idea in variational inference is to replace the intractable marginal likelihood with a tractable lower bound that we then maximize. Modeling mainly consists of choosing a family $\mathcal{Q}$ of probability distributions that are well-behaved yet sufficiently expressive. More specifically, we want there to be a distribution $q \in \mathcal{Q}$, called the variational posterior, that can be used as a drop-in replacement for the true posterior. The variational posterior should therefore be “close” to the true posterior $p_{\boldsymbol{\theta}}(\mathbf{z} \mid \mathbf{x})$ and at the same time be (relatively) easy to find. The search procedure amounts to mathematical optimization, which is why variational inference is sometimes described as trading a difficult integration problem for an easier optimization problem.
The evidence lower bound (ELBO)
In variational inference, the distance between the true posterior $p(\mathbf{z} \mid \mathbf{x})$ and the variational posterior $q(\mathbf{z})$ is measured using the Kullback-Leibler (KL) divergence,

$$\mathrm{KL}(q(\mathbf{z}) \,\|\, p(\mathbf{z} \mid \mathbf{x})) = -\int q(\mathbf{z}) \log\left(\frac{p(\mathbf{z} \mid \mathbf{x})}{q(\mathbf{z})}\right) d\mathbf{z}. \tag{3}$$

Other distance measures can also be used to make the variational posterior similar to the true posterior, but the KL divergence has a particular benefit: through a neat trick we can simultaneously estimate the marginal likelihood and circumvent the need to evaluate the posterior in equation 3. To see how, we first note the two mathematical identities:

$$\int q(\mathbf{z})\, d\mathbf{z} = 1, \tag{4}$$

$$p(\mathbf{x}) = \frac{p(\mathbf{x}, \mathbf{z})}{p(\mathbf{z} \mid \mathbf{x})} = \frac{p(\mathbf{x}, \mathbf{z})}{q(\mathbf{z})} \left(\frac{p(\mathbf{z} \mid \mathbf{x})}{q(\mathbf{z})}\right)^{-1}. \tag{5}$$

Using these we may rewrite the marginal likelihood as follows:

$$\begin{aligned}
\log p(\mathbf{x}) &\overset{(4)}{=} \log p(\mathbf{x}) \cdot \int q(\mathbf{z})\, d\mathbf{z} = \int q(\mathbf{z}) \log p(\mathbf{x})\, d\mathbf{z} \\
&\overset{(5)}{=} \int q(\mathbf{z}) \log\left(\frac{p(\mathbf{x}, \mathbf{z})}{q(\mathbf{z})}\right) d\mathbf{z} - \int q(\mathbf{z}) \log\left(\frac{p(\mathbf{z} \mid \mathbf{x})}{q(\mathbf{z})}\right) d\mathbf{z} \\
&= \int q(\mathbf{z}) \log\left(\frac{p(\mathbf{x}, \mathbf{z})}{q(\mathbf{z})}\right) d\mathbf{z} + \mathrm{KL}(q(\mathbf{z}) \,\|\, p(\mathbf{z} \mid \mathbf{x})).
\end{aligned} \tag{6}$$

Because the KL divergence is always nonnegative, the first term lower bounds the log marginal likelihood (also known as the evidence) for any $q$, and is therefore known as the evidence lower bound (ELBO):

$$\mathrm{ELBO}(q(\mathbf{z})) = \int q(\mathbf{z}) \log\left(\frac{p(\mathbf{x}, \mathbf{z})}{q(\mathbf{z})}\right) d\mathbf{z} = \mathbb{E}_{q(\mathbf{z})}\left[\log p(\mathbf{x}, \mathbf{z}) - \log q(\mathbf{z})\right]. \tag{7}$$

Equation 6 can thus be written more succinctly as

$$\log p(\mathbf{x}) = \mathrm{ELBO}(q(\mathbf{z})) + \mathrm{KL}(q(\mathbf{z}) \,\|\, p(\mathbf{z} \mid \mathbf{x})). \tag{8}$$

For a fixed model $p(\mathbf{x}, \mathbf{z})$, the (log) evidence is a constant. Hence, recalling that the KL divergence is nonnegative, we conclude that maximizing the ELBO is equivalent to minimizing the KL divergence. This is great, because to compute the KL divergence we would have to marginalize over a function that includes the same intractable posterior that we want to estimate. In contrast, the model only enters the ELBO through the joint distribution $p(\mathbf{x}, \mathbf{z})$, which means that, first, we don't need to compute the problematic integral in equation 2 and, second, we can factorize the joint distribution, e.g., as encoded by a directed graphical model (Wainwright and Jordan, 2008).
Example 1
Suppose we have a single observation $x$ from an $\mathrm{Exp}(\lambda)$ likelihood with a $\mathrm{Gamma}(\alpha, \beta)$ prior on the rate parameter $\lambda$. Assuming that $\alpha$ and $\beta$ are known, the only latent variable of interest is $z = \{\lambda\}$. Specifically,

$$p(x \mid \lambda) = \lambda e^{-\lambda x}, \qquad p(\lambda) = \frac{\beta^\alpha}{\Gamma(\alpha)} \lambda^{\alpha - 1} e^{-\beta\lambda},$$

where $\Gamma(\alpha)$ is the Gamma function. Since the Gamma distribution is the conjugate prior for $\lambda$, we know that the posterior is also a Gamma distribution. Invoking Bayes' rule and disregarding all factors not including $\lambda$, we find that

$$p(\lambda \mid x) \propto p(x \mid \lambda)\, p(\lambda) \propto \lambda^{\alpha} e^{-\lambda(\beta + x)}.$$

Hence, we identify the posterior as $p(\lambda \mid x) = \mathrm{Gamma}(\alpha + 1, \beta + x)$.

But, let's pretend we don't know this and instead want to fit a $\mathrm{Lognormal}(\mu, \sigma^2)$ distribution to the posterior using variational inference, i.e.

$$q(\lambda) = \frac{1}{\lambda\sigma\sqrt{2\pi}} \exp\left(-\frac{(\log\lambda - \mu)^2}{2\sigma^2}\right).$$

From equation 7 we have that

$$\begin{aligned}
\mathrm{ELBO}(q(\lambda)) &= \mathbb{E}_{q(\lambda)}\left[\log\left(p(x \mid \lambda)\, p(\lambda)\right) - \log q(\lambda)\right] \\
&= \mathbb{E}_{q(\lambda)}\left[\log\left(\frac{\beta^\alpha}{\Gamma(\alpha)}\right) + \alpha\log\lambda - \lambda(\beta + x) + \log\sqrt{2\pi} + \log\sigma + \log\lambda + \frac{(\log\lambda - \mu)^2}{2\sigma^2}\right] \\
&= \log\left(\frac{\beta^\alpha\sqrt{2\pi}}{\Gamma(\alpha)}\right) + (\alpha + 1)\,\mathbb{E}_{q(\lambda)}[\log\lambda] - (\beta + x)\,\mathbb{E}_{q(\lambda)}[\lambda] + \log\sigma + \frac{1}{2\sigma^2}\,\mathbb{E}_{q(\lambda)}\left[(\log\lambda - \mu)^2\right].
\end{aligned}$$

The expectation $\mathbb{E}_{q(\lambda)}[\lambda] = \exp\left(\mu + \frac{\sigma^2}{2}\right)$ since, by definition, it is the mean of the lognormal distribution. Furthermore, the change of integration variables $y = \log\lambda$, which transforms $q(\lambda)$ into $q(y) = \mathcal{N}(\mu, \sigma^2)$, shows that

$$\mathbb{E}_{q(\lambda)}[\log\lambda] = \mathbb{E}_{q(y)}[y] = \mu, \qquad \mathbb{E}_{q(\lambda)}\left[(\log\lambda - \mu)^2\right] = \mathbb{E}_{q(y)}\left[(y - \mu)^2\right] = \sigma^2.$$

The final expression for the ELBO is thus

$$\mathrm{ELBO}(q(\lambda)) = \log\left(\frac{\beta^\alpha\sqrt{2\pi}}{\Gamma(\alpha)}\right) + (\alpha + 1)\mu - (\beta + x)\, e^{\mu + \frac{\sigma^2}{2}} + \log\sigma + \frac{1}{2}.$$
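The decomposition in equation 8 can be verified numerically for this conjugate example: the evidence is known in closed form, the ELBO was just derived, and the KL term can be integrated on a grid. A minimal sketch, assuming the example 2 values α = 3, β = 1, x = 1 and an arbitrary variational choice μ = 0.5, σ = 0.3 (none of this code is from the tutorial itself):

```python
import numpy as np
from math import lgamma, log, exp, pi

# Model and variational parameters (illustrative values, assumed)
alpha, beta, x = 3.0, 1.0, 1.0
mu, sigma = 0.5, 0.3

# Exact log evidence of the conjugate model:
# p(x) = Gamma(a+1) * b^a / (Gamma(a) * (b+x)^(a+1))
log_px = lgamma(alpha + 1) + alpha * log(beta) - lgamma(alpha) - (alpha + 1) * log(beta + x)

# Closed-form ELBO from example 1
elbo = (alpha * log(beta) - lgamma(alpha) + 0.5 * log(2 * pi)
        + (alpha + 1) * mu - (beta + x) * exp(mu + sigma**2 / 2)
        + log(sigma) + 0.5)

# KL(q || p(. | x)) by trapezoidal integration; posterior is Gamma(alpha+1, beta+x)
lam = np.linspace(1e-6, 30.0, 400_000)
log_q = -np.log(lam * sigma * np.sqrt(2 * pi)) - (np.log(lam) - mu) ** 2 / (2 * sigma**2)
a, b = alpha + 1, beta + x
log_post = a * np.log(b) - lgamma(a) + (a - 1) * np.log(lam) - b * lam
f = np.exp(log_q) * (log_q - log_post)
kl = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam)))

print(elbo + kl, log_px)  # equation 8: the two sides agree
```

With these values both sides come out to about −1.67, and the gap between the ELBO (about −1.93) and the evidence is exactly the KL divergence.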
Modeling
How, then, do we choose the variational family $\mathcal{Q}$? Historically, the dominant approach has been to assume a particular factorization of the variational posterior, and to use calculus of variations to search for distributions that match this factorization. This is known as mean-field variational inference (Blei, Kucukelbir, and McAuliffe, 2017), and is still the approach most often taught in classes. However, mean-field variational inference is only applicable to a rather limited set of models. Most of the successes of variational inference in the last 10–15 years have instead taken a parametric approach, where the variational family is parameterized by a highly expressive model such as a deep neural network. One can then use “standard” optimization techniques to search for the parameters $\boldsymbol{\theta}^*$ that maximize the ELBO. In light of the above, this tutorial focuses exclusively on parametric variational inference.

In example 1, we indeed took the parametric approach, since the variational posterior was explicitly parameterized by a Lognormal distribution with parameters $\boldsymbol{\theta} = \{\mu, \sigma\}$. In example 2, we take a closer look at the ELBO for a specific instance of this model.

To approximate the true posterior distribution accurately, we want the variational family to be as rich as possible so long as we maintain tractability; it is impossible to overfit! However, as example 3 shows, there is one pitfall to be aware of: $q(\mathbf{z})$ needs to be zero whenever $p(\mathbf{z} \mid \mathbf{x})$ is zero.
Estimating the ELBO
|
234 |
+
In the examples we’ve seen so far the expectations could be computed in closed form. But that
|
235 |
+
will rarely be the case in general (non-conjugate) models. We can, however, use a Monte Carlo
|
236 |
+
estimate to replace the expectation with a sum,
|
237 |
+
ELBO(푞(풛)) = ∫ 푞(풛) log
|
238 |
+
(푝(풙, 풛)
|
239 |
+
푞(풛)
|
240 |
+
)
|
241 |
+
푑풛 = 피푞(풛)
|
242 |
+
[log 푝(풙, 풛) − log 푞(풛)]
|
243 |
+
≈ 1
|
244 |
+
퐿
|
245 |
+
퐿
|
246 |
+
∑
|
247 |
+
푖=1
|
248 |
+
(log 푝(풙, 풛(푖)) − log 푞(풛(푖))) .
|
249 |
+
(9)
|
250 |
+
The key requirement is that we are able to draw samples 풛^(푖) from the variational posterior 푞(풛). But, as suggested by the previous section, it is not enough to evaluate the ELBO for a given 푞 in the variational family—we want to find the best 푞! Having parameterized the variational posterior 푞휽(풛) with the parameters 휽, we may rephrase this as finding parameter values that maximize the ELBO. For efficient optimization, however, we need to evaluate both the objective function (the ELBO) and its gradient.
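Equation 9 is easy to check numerically. The sketch below (my own illustration, assuming NumPy is available) uses the model of example 1 with 훼 = 3, 훽 = 1, 푥 = 1 and a Lognormal(휇 = 0.6, 휎 = 0.5) variational posterior, and compares the Monte Carlo estimate against the closed-form ELBO derived in example 1.

```python
import math
import numpy as np

# Model from example 1: x ~ Exp(lam) likelihood, Gamma(alpha, beta) prior.
alpha, beta, x = 3.0, 1.0, 1.0
mu, sigma = 0.6, 0.5  # parameters of the Lognormal variational posterior

def log_joint(lam):
    # log p(x, lam) = log Exp(x; lam) + log Gamma(lam; alpha, beta)
    return (math.log(beta**alpha / math.gamma(alpha))
            + alpha * np.log(lam) - lam * (beta + x))

def log_q(lam):
    # log density of Lognormal(mu, sigma^2)
    return (-np.log(lam) - math.log(sigma * math.sqrt(2 * math.pi))
            - (np.log(lam) - mu) ** 2 / (2 * sigma**2))

# Monte Carlo estimate of the ELBO (equation 9)
rng = np.random.default_rng(0)
lam = rng.lognormal(mean=mu, sigma=sigma, size=200_000)
elbo_mc = float(np.mean(log_joint(lam) - log_q(lam)))

# Closed-form ELBO from example 1
elbo_exact = (math.log(beta**alpha * math.sqrt(2 * math.pi) / math.gamma(alpha))
              + (alpha + 1) * mu - (beta + x) * math.exp(mu + sigma**2 / 2)
              + math.log(sigma) + 0.5)
```

With this many samples the two values agree to two or three decimals, and both lie below the log evidence ln(3∕16) computed in example 2.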
Gradient-based optimization of the ELBO
In optimization, it is standard practice to consider minimization problems. (Since a maximization problem can be transformed into a minimization problem by negating the objective function, this can be done without loss of generality.) We thus express our optimization problem as:
휽∗ = arg min_휽 −피푞휽(풛)[log 푝(풙, 풛) − log 푞휽(풛)]. (10)
Applying, for instance, gradient descent to this problem corresponds to the iterations
휽푘+1 = 휽푘 + 휂 ∇휽피푞휽(풛)[log 푝(풙, 풛) − log 푞휽(풛)], 푘 = 0, 1, … (11)
where the hyperparameter 휂 > 0 is the step size. But this reveals a complication: the gradient acts on the parameters of the distribution that we compute the expectation over. Consequently, we cannot simply move the gradient inside the expectation, nor can we use the Monte Carlo trick to first replace the expectation with samples and then compute the gradient on those. But there are other, less direct, ways of applying the Monte Carlo idea that do work (incidentally, this turns gradient descent into stochastic gradient descent). We begin by rewriting the gradient of the ELBO
Sjölund | A Tutorial on Parametric Variational Inference | arXiv | 4 of 9
Example 2
To make things more concrete, we continue with the setting from example 1 and set 훼 = 3, 훽 = 1, and 푥 = 1. The evidence 푝(푥) is the previously neglected proportionality constant relating the posterior and the joint distributions,
푝(푥) = 푝(푥, 휆)∕푝(휆 ∣ 푥) = (Γ(훼 + 1)∕Γ(훼)) ⋅ 훽^훼∕(훽 + 푥)^(훼+1) = 훼훽^훼∕(훽 + 푥)^(훼+1).
Inserting the numerical values above gives 푝(푥 = 1) = 3∕16.
For simplicity, we fix 휎 = 0.5 in the variational posterior (this corresponds approximately to the value found by moment matching) and study the effect of changing 휇.
Example 2—figure 1. The fit of a Lognormal(휇, 휎² = 0.25) variational posterior to a Gamma(4, 2) posterior for different values of 휇.
Example 2—figure 2. How well the ELBO approximates the log evidence depends on the parameter 휇. The gap corresponds exactly to the KL divergence, hence maximizing the ELBO is equivalent to minimizing the KL divergence.
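The quantities behind figure 2 can be reproduced in a few lines. The sketch below (my own check, plain Python) evaluates the closed-form ELBO from example 1 on a grid of 휇 values with 휎 = 0.5 fixed, and verifies that the gap to the log evidence ln 푝(푥 = 1) = ln(3∕16) is always positive, as equation 8 requires.

```python
import math

alpha, beta, x, sigma = 3.0, 1.0, 1.0, 0.5
# Evidence p(x) = alpha * beta^alpha / (beta + x)^(alpha + 1) = 3/16 here
log_evidence = math.log(alpha * beta**alpha / (beta + x) ** (alpha + 1))

def elbo(mu):
    # Closed-form ELBO from example 1 for the Lognormal(mu, sigma^2) posterior
    return (math.log(beta**alpha * math.sqrt(2 * math.pi) / math.gamma(alpha))
            + (alpha + 1) * mu - (beta + x) * math.exp(mu + sigma**2 / 2)
            + math.log(sigma) + 0.5)

mus = [0.40 + 0.01 * i for i in range(41)]      # grid over [0.4, 0.8]
gaps = [log_evidence - elbo(mu) for mu in mus]  # = KL(q_mu || p), positive
best_mu = max(mus, key=elbo)                    # ELBO-maximizing mu on the grid
```

For this one-dimensional family the maximizer can also be found by hand: setting d ELBO∕d휇 = (훼 + 1) − (훽 + 푥)푒^(휇+휎²∕2) = 0 gives 휇∗ = ln 2 − 휎²∕2 ≈ 0.568, matching the peak in figure 2.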
[Two figures: example 2—figure 1 shows the exact posterior together with the variational posteriors for 휇 = 0.4, 0.6, 0.8; example 2—figure 2 shows ELBO(휇, 휎 = 0.5), the log evidence ln 푝(푥 = 1), and the gap KL(푞휇 ‖ 푝(휆 ∣ 푥 = 1)).]

Example 3
Let’s return to Example 1 and see what happens if we try to use a Normal 𝒩(휇, 휎²) distribution as the variational posterior, i.e.
푞(휆) = (1∕√(2휋휎²)) exp(−(휆 − 휇)²∕(2휎²)).
Deriving the ELBO as before, we have that
ELBO(푞(휆)) = 피푞(휆)[log(푝(푥 ∣ 휆)푝(휆)) − log 푞(휆)]
= 피푞(휆)[ log(훽^훼∕Γ(훼)) + 훼 log 휆 − 휆(훽 + 푥) + log √(2휋) + log 휎 + (휆 − 휇)²∕(2휎²) ]
= log(훽^훼√(2휋)∕Γ(훼)) + 훼 피푞(휆)[log 휆] − (훽 + 푥) 피푞(휆)[휆] + log 휎 + (1∕(2휎²)) 피푞(휆)[(휆 − 휇)²],
where 피푞(휆)[log 휆] is undefined, while 피푞(휆)[휆] = 휇 and 피푞(휆)[(휆 − 휇)²] = 휎².
The logarithm is only defined for positive values, hence 피푞(휆)[log 휆] is undefined. This illustrates an important caveat when choosing the variational distribution: 푞(풛) needs to be zero whenever 푝(풛 ∣ 풙) is zero.
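The same failure shows up numerically: if we ignore the support mismatch and estimate the ELBO of equation 9 with Normal samples, the terms log 휆^(푖) for negative samples are undefined. A minimal sketch (my own illustration, assuming NumPy; the parameter values are arbitrary):

```python
import math
import numpy as np

alpha, beta, x = 3.0, 1.0, 1.0   # model from example 1
mu, sigma = 1.0, 1.0             # Normal variational posterior: mass on lambda < 0

rng = np.random.default_rng(0)
lam = rng.normal(mu, sigma, size=1000)  # some samples are negative

with np.errstate(invalid="ignore"):
    # log p(x, lam) up to a constant; np.log(negative) evaluates to nan
    log_joint = alpha * np.log(lam) - lam * (beta + x)

log_q = (-0.5 * ((lam - mu) / sigma) ** 2
         - math.log(sigma * math.sqrt(2 * math.pi)))  # finite for all lam

elbo_mc = float(np.mean(log_joint - log_q))  # nan: the expectation is undefined
```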
as follows (Ranganath, Gerrish, and Blei, 2014):
∇휽피푞휽(풛)[log 푝(풙, 풛) − log 푞휽(풛)] = ∇휽 ∫ (log 푝(풙, 풛) − log 푞휽(풛)) 푞휽(풛) 푑풛
= ∫ (log 푝(풙, 풛) − log 푞휽(풛)) ∇휽푞휽(풛) 푑풛 − ∫ (∇휽 log 푞휽(풛)) 푞휽(풛) 푑풛. (12)
But the second term in this expression vanishes,
∫ (∇휽 log 푞휽(풛)) 푞휽(풛) 푑풛 = ∫ (∇휽푞휽(풛)∕푞휽(풛)) 푞휽(풛) 푑풛 = ∇휽 ∫ 푞휽(풛) 푑풛 = ∇휽(1) = 0. (13)
In conclusion, we have that
∇휽피푞휽(풛)[log 푝(풙, 풛) − log 푞휽(풛)] = ∫ (log 푝(풙, 풛) − log 푞휽(풛)) ∇휽푞휽(풛) 푑풛. (14)
Sometimes, as in example 4, we can rewrite ∇휽푞휽(풛) (the gradient of the variational posterior) such that we can directly use a Monte Carlo method to estimate the integral. Later, we will cover two more general Monte Carlo-based approaches: reparameterization (Kingma and Welling, 2014) and black-box variational inference (Ranganath, Gerrish, and Blei, 2014).
Reparameterization
The “reparameterization trick” was popularized in the work introducing the variational autoencoder (Kingma and Welling, 2014), but the general principle has a much longer history (Devroye, 1996). The idea is to decouple the source of randomness from the parameters by cleverly reformulating the random variable 풛 ∼ 푞휽(풛) as a parameterized transformation 푧 = 푔휽(휖) of another random variable 휖 ∼ 푝(휖) that is easy to sample. Effectively, this moves the randomness “outside” the model and makes it possible to move the gradient inside the expectation, as shown in the example below.
The reparameterization trick is valid if and only if 푔(휖, 휽) is a continuous function of 휽 for all 휖 (Schulman et al., 2015). Further, it works in the same way as in the example above also for expectations 피푞휽(풛)[푓(풛)] where 푓(풛) is a general nonlinear function of 풛,
∇휽피푞휽(풛)[푓(풛)] = ∇휽피푝(휖)[푓(푔휽(휖))] = 피푝(휖)[∇휽푓(푔휽(휖))]. (15)
By setting 푓(풛) = log 푝(풙, 풛) − log 푞휽(풛) we retrieve the ELBO as a special case.
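To see equation 15 in action, the sketch below applies it to a toy conjugate model (my choice for illustration, not from the text): prior 푧 ∼ 𝒩(0, 1), likelihood 푥 ∣ 푧 ∼ 𝒩(푧, 1), and a 𝒩(휇, 휎²) variational posterior reparameterized as 푧 = 휇 + 휎휖. For this model the ELBO gradient is available in closed form (∂∕∂휇 = 푥 − 2휇, ∂∕∂휎 = 1∕휎 − 2휎), so the Monte Carlo estimate can be checked against it. NumPy is assumed.

```python
import numpy as np

x = 1.0               # single observation
mu, sigma = 0.3, 0.8  # current variational parameters

rng = np.random.default_rng(0)
eps = rng.standard_normal(200_000)
z = mu + sigma * eps  # reparameterization: z = g_theta(eps)

# f(z) = log p(x, z) - log q_theta(z); with z = mu + sigma*eps the q-term
# becomes -log sigma - eps^2/2 + const, so only -log sigma depends on theta.
dlogp_dz = -z + (x - z)              # d/dz log p(x, z)
grad_mu = float(np.mean(dlogp_dz))   # dz/dmu = 1
grad_sigma = float(np.mean(dlogp_dz * eps)) + 1.0 / sigma  # dz/dsigma = eps

# Closed-form gradient of the ELBO for this conjugate model
exact_mu = x - 2 * mu                # = 0.4
exact_sigma = 1 / sigma - 2 * sigma  # = -0.35
```

Because the gradient acts on a well-behaved integrand here, the estimate is close to the exact gradient already with modest sample sizes; this low variance is the main practical appeal of reparameterization.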
Example 4
Consider a univariate Normal variational posterior parameterized by the mean 휇 and standard deviation 휎, i.e.
푞휽(푧) = (1∕√(2휋휎²)) exp(−(푧 − 휇)²∕(2휎²)), 휽 = {휇, 휎}.
After some algebraic manipulations, the partial derivatives can be written as:
휕푞휽∕휕휇 = ((푧 − 휇)∕휎²) ⋅ 푞휽(푧), 휕푞휽∕휕휎 = (1∕휎)((푧 − 휇)²∕휎² − 1) ⋅ 푞휽(푧).
Note that 푞휽(푧) appears in both of these expressions. By inserting the above in equation 14, we thus arrive at an expectation that we can replace with a Monte Carlo estimate:
∇휽피푞휽(푧)[log 푝(풙, 푧) − log 푞휽(푧)] = ∫ (log 푝(풙, 푧) − log 푞휽(푧)) ((푧 − 휇)∕휎², (1∕휎)((푧 − 휇)²∕휎² − 1))⊤ 푞휽(푧) 푑푧
≈ (1∕퐿) ∑_{푖=1}^{퐿} (log 푝(풙, 푧^(푖)) − log 푞휽(푧^(푖))) ((푧^(푖) − 휇)∕휎², (1∕휎)((푧^(푖) − 휇)²∕휎² − 1))⊤,
where 푧^(푖) ∼ 푞휽(푧).
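The estimator of example 4 can be checked numerically on a toy conjugate model (my choice for illustration, not from the text): prior 푧 ∼ 𝒩(0, 1), likelihood 푥 ∣ 푧 ∼ 𝒩(푧, 1), for which the ELBO gradient has the closed form ∂∕∂휇 = 푥 − 2휇, ∂∕∂휎 = 1∕휎 − 2휎. A sketch, assuming NumPy:

```python
import math
import numpy as np

x = 1.0
mu, sigma = 0.3, 0.8  # parameters of the Normal variational posterior

rng = np.random.default_rng(0)
z = rng.normal(mu, sigma, size=400_000)  # z^(i) ~ q_theta(z)

log_p = -0.5 * z**2 - 0.5 * (x - z) ** 2 - math.log(2 * math.pi)  # log p(x, z)
log_q = (-0.5 * ((z - mu) / sigma) ** 2
         - math.log(sigma * math.sqrt(2 * math.pi)))              # log q_theta(z)

# Score vector from example 4: (dq/dmu, dq/dsigma) divided by q_theta(z)
score_mu = (z - mu) / sigma**2
score_sigma = ((z - mu) ** 2 / sigma**2 - 1) / sigma

weights = log_p - log_q
grad_mu = float(np.mean(weights * score_mu))
grad_sigma = float(np.mean(weights * score_sigma))

exact_mu = x - 2 * mu                # closed-form ELBO gradient for this model
exact_sigma = 1 / sigma - 2 * sigma
```

Compared with the reparameterized estimator, noticeably more samples are needed here to reach the same accuracy, which previews the variance issue discussed under black-box variational inference below.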
Example 5
Suppose the variational posterior is a univariate Normal distribution parameterized by the mean 휇 and standard deviation 휎, i.e. 푞휽(푧) = 𝒩(푧; 휽) where 휽 = {휇, 휎}. This can be reparameterized as 푧 = 푔휽(휖) = 휇 + 휎 ⋅ 휖 where 휖 ∼ 𝒩(0, 1).
Let’s consider the effect this has on the expectation 피푞휽(푧)[log 푧].
(i) Original expression:
피푞휽(푧)[log 푧] = (1∕√(2휋휎²)) ∫ log 푧 exp(−(푧 − 휇)²∕(2휎²)) 푑푧,
∇휽피푞휽(푧)[log 푧] = ∫ log 푧 ∇휽푞휽(푧) 푑푧 = 피푞휽(푧)[ log 푧 ⋅ ((푧 − 휇)∕휎², (1∕휎)((푧 − 휇)²∕휎² − 1))⊤ ],
where we used the expression for ∇휽푞휽(푧) from example 4.
(ii) Reparameterized expression:
피푞휽(푧)[log 푧] |_{푧=휇+휎휖} = (1∕√(2휋휎²)) ∫ log(휇 + 휎휖) exp(−(휇 + 휎휖 − 휇)²∕(2휎²)) 휎 푑휖
= (1∕√(2휋)) ∫ log(휇 + 휎휖) exp(−휖²∕2) 푑휖 = 피푝(휖)[log(휇 + 휎휖)],
∇휽피푝(휖)[log(휇 + 휎휖)] = 피푝(휖)[∇휽 log(휇 + 휎휖)] = 피푝(휖)[(1∕(휇 + 휎휖), 휖∕(휇 + 휎휖))⊤].
Amortized variational inference
Many probabilistic models have local latent variables 풛푖 associated with each data point 풙푖. The simplest case is when the joint distribution factorizes as
푝(풙, 풛) = ∏_{푖=1}^{푁} 푝(풙푖 ∣ 풛푖)푝(풛푖). (16)
Suppose we use a variational posterior that factorizes accordingly,
푞휽(풛) = ∏_{푖=1}^{푁} 푞휽푖(풛푖), (17)
then the ELBO maximization in equation 10 decomposes into a sum of local ELBOs
휽∗ = arg min_휽 − ∑_{푖=1}^{푁} 피푞휽푖(풛푖)[log 푝(풙푖 ∣ 풛푖) + log 푝(풛푖) − log 푞휽푖(풛푖)]. (18)
Since the optimization variables are 휽 = {휽1, … , 휽푁}, large datasets amount to large optimization problems, which are computationally demanding to solve. This led to the idea of amortized variational inference (Rezende, Mohamed, and Wierstra, 2014), wherein a machine learning model (often a neural network) is trained to directly predict the solution 휽∗ of this optimization problem. Specifically, let Λ휙 denote a neural network parameterized by 휙 that maps individual datapoints 풙푖 to corresponding parameters 휽푖 of the local variational posterior 푞휽푖(풛푖). This model is trained using the expression in equation 18 as the loss function but replacing 휽푖 = Λ휙(풙푖). Note that even though the objective function is the same, this is a form of amortized optimization (Amos, 2022) since we are now using 휙 as the optimization variables instead of 휽. Furthermore, the loss function is a sum over datapoints, which means that the standard machinery for training neural networks (stochastic gradient descent etc.) can be applied. In the context of variational autoencoders, the model Λ휙 is referred to as the encoder, which is accompanied by a jointly trained decoder corresponding to the probability distribution 푝(풙 ∣ 풛) (Kingma and Welling, 2019).
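A deliberately simple instance (my construction, not from the text) makes the idea concrete: take 푝(풛푖) = 𝒩(0, 1), 푝(풙푖 ∣ 풛푖) = 𝒩(풛푖, 1), and an affine “encoder” Λ휙(푥푖) = 푎푥푖 + 푏 that predicts the mean 휇푖 of a 𝒩(휇푖, 휎²) local posterior with 휎 held fixed. For this conjugate model the local ELBO gradient is available in closed form, so plain gradient ascent on 휙 = {푎, 푏} suffices; the exact local optimum is 휇푖 = 푥푖∕2, which the encoder learns to reproduce.

```python
import math

xs = [-1.5, -0.3, 0.4, 1.1, 2.0]  # toy dataset, one latent z_i per x_i
sigma = 1 / math.sqrt(2)          # optimal sigma for this model, kept fixed

# Amortization: one set of parameters phi = (a, b) instead of one mu_i per point
a, b = 0.0, 0.0
lr = 0.05
for step in range(2000):
    grad_a = grad_b = 0.0
    for x in xs:
        mu_i = a * x + b       # mu_i = Lambda_phi(x_i)
        dmu = x - 2 * mu_i     # d(local ELBO)/d(mu_i), closed form for this model
        grad_a += dmu * x      # chain rule through mu_i = a*x + b
        grad_b += dmu
    a += lr * grad_a / len(xs)  # gradient ascent on the summed (averaged) ELBO
    b += lr * grad_b / len(xs)

# The exact posterior mean is x/2, so the encoder should learn a ~ 0.5, b ~ 0.
predicted = [a * x + b for x in xs]
```

In realistic models the local ELBO is not available in closed form, and the gradients would instead be estimated with reparameterization or BBVI, one mini-batch of datapoints at a time.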
Black-box variational inference
The reparameterization trick lets you compute the exact gradient by automatic differentiation, which is undoubtedly convenient. On the other hand, there are many models in which reparameterization is impossible. In these cases, one can instead estimate the gradient using black-box variational inference (BBVI) (Ranganath, Gerrish, and Blei, 2014), which is more general yet still convenient. However, the BBVI estimator suffers from high variance.
BBVI relies on the observation that
∇휽 log 푞휽(풛) = ∇휽푞휽(풛)∕푞휽(풛), (19)
which is sometimes referred to as the REINFORCE trick (Williams, 1992). This can be used to rewrite equation 14 as
∇휽피푞휽(풛)[log 푝(풙, 풛) − log 푞휽(풛)] = ∫ (log 푝(풙, 풛) − log 푞휽(풛)) 푞휽(풛) ∇휽 log 푞휽(풛) 푑풛
= 피푞휽(풛)[(log 푝(풙, 풛) − log 푞휽(풛)) ∇휽 log 푞휽(풛)] ≈ (1∕퐿) ∑_{푖=1}^{퐿} (log 푝(풙, 풛^(푖)) − log 푞휽(풛^(푖))) ∇휽 log 푞휽(풛^(푖)), (20)
where we used that 푞휽(풛) ∇휽 log 푞휽(풛) = ∇휽푞휽(풛).
Since we can often use automatic differentiation to evaluate the score function ∇휽 log 푞휽(풛), it appears that this reformulation resolves the problem of estimating the gradient of the ELBO from samples. The catch, however, is that this estimator often has too high a variance to be useful in practice. Arguably, the key contribution of BBVI was to adapt two variance reduction techniques—Rao-Blackwellization and control variates—to the estimator in equation 20. Going into detail on these variance reduction techniques would, however, take us beyond the scope of a tutorial on the basics of variational inference. We refer the interested reader to the original work by Ranganath, Gerrish, and Blei (2014).
Acknowledgments
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. This preprint was created using the LaPreprint template (https://github.com/roaldarbol/lapreprint) by Mikkel Roald-Arbøl.
References
Amos, Brandon (2022). “Tutorial on amortized optimization for learning to optimize over continuous domains”. In: arXiv preprint 2202.00665.
Blei, David M, Alp Kucukelbir, and Jon D McAuliffe (2017). “Variational inference: A review for statisticians”. In: Journal of the American Statistical Association 112.518, pp. 859–877.
Devroye, Luc (1996). “Random variate generation in one line of code”. In: Proceedings Winter Simulation Conference. IEEE, pp. 265–272.
Kingma, Diederik P and Max Welling (2014). “Auto-Encoding Variational Bayes”. In: 2nd International Conference on Learning Representations.
— (2019). “An introduction to variational autoencoders”. In: Foundations and Trends® in Machine Learning 12.4, pp. 307–392.
Ranganath, Rajesh, Sean Gerrish, and David M Blei (2014). “Black box variational inference”. In: Artificial Intelligence and Statistics. PMLR, pp. 814–822.
Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra (2014). “Stochastic backpropagation and approximate inference in deep generative models”. In: International Conference on Machine Learning. PMLR, pp. 1278–1286.
Schulman, John et al. (2015). “Gradient estimation using stochastic computation graphs”. In: Advances in Neural Information Processing Systems 28.
Wainwright, Martin J and Michael I Jordan (2008). “Graphical models, exponential families, and variational inference”. In: Foundations and Trends in Machine Learning 1.1–2, pp. 1–305.
Williams, Ronald J (1992). “Simple statistical gradient-following algorithms for connectionist reinforcement learning”. In: Machine Learning 8.3, pp. 229–256.
99AzT4oBgHgl3EQfSvs8/content/tmp_files/load_file.txt
ADDED
@@ -0,0 +1,223 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf,len=222
|
2 |
+
page_content='� For correspondence: jens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
3 |
+
page_content='sjolund@it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
4 |
+
page_content='uu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
5 |
+
page_content='se Funding: This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
6 |
+
page_content=' A Tutorial on Parametric Variational Inference Jens Sjölund 1 � 1Department of Information Technology, Uppsala University, Sweden Abstract Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
7 |
+
page_content=' Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
8 |
+
page_content=' This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
9 |
+
page_content=' Introduction In Bayesian machine learning and statistics, the central object of interest is the posterior distribu- tion found by Bayesian inference—combining prior beliefs with observations according to Bayes’ rule.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
10 |
+
page_content=' In simple cases, such as in conjugate models, this can be done exactly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
11 |
+
page_content=' But, general (non- conjugate) models require approximate inference techniques such as Monte Carlo or variational inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
12 |
+
page_content=' These have complementary strengths and weaknesses, hence the most appropriate choice is application dependent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
13 |
+
page_content=' We focus on variational inference, which is on the one hand not guaranteed to be asymptotically exact but is on the other hand computationally efficient and scal- able to high-dimensional models and large datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
14 |
+
page_content=' Notation We use a single observation variable 풙 to denote both the observed inputs and outputs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
15 |
+
page_content=' Our pri- mary interest is however in the latent variables 풛.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
16 |
+
page_content=' Since we adhere to the Bayesian framework, the “parameters” of a model (such as the slope and intercept in a linear regression model) that are assigned priors are actually latent variables.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
17 |
+
page_content=' We denote remaining parameters of interest by 휽.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
18 |
+
page_content=' Variational inference So, why do we need variational inference?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
19 |
+
page_content=' First, recall that to infer anything about the latent vari- ables from our observations, we need the posterior: 푝휽(풛 ∣ 풙) = 푝휽(풙, 풛) 푝휽(풙) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
20 |
+
page_content=' (1) The expression in the denominator, 푝휽(풙), is called the marginal likelihood of 풙 because it can be rewritten as a marginalization over the latent variables: 푝휽(풙) = ∫ 푝휽(풙, 풛) 푑풛.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
21 |
+
page_content=' (2) The catch is that in practice this integral is often intractable, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
22 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
23 |
+
page_content=' not computable in closed form.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
24 |
+
page_content=' Since 풙 are our observations, the marginal likelihood is a (normalizing) constant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
25 |
+
page_content=' Nevertheless, without knowing this constant the utility of the posterior is limited.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
26 |
+
page_content=' Hence the need for approximate inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
27 |
+
page_content=' Sjölund | arXiv | January 4, 2023 | 1–9 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
28 |
+
page_content='01236v1 [stat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
29 |
+
page_content='ML] 3 Jan 2023 Key idea The key idea in variational inference is to replace the intractable marginal likelihood with a tractable lower bound that we then maximize.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
30 |
+
page_content=' Modeling mainly consists of choosing a family \ue23d of probability distribution that are well-behaved yet sufficiently expressive.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
31 |
+
page_content=' More specifically, we want there to be a distribution 푞 ∈ \ue23d, called the variational posterior, that can be used as a drop-in replacement for the true posterior.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
32 |
+
page_content=' The variational posterior should therefore be “close” to the true posterior 푝휽(풛 ∣ 풙) and at the same time (relatively) easy to find.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
33 |
+
page_content=' The search procedure amounts to mathemat- ical optimization, which is why variational inference is sometimes described as trading a difficult integration problem for an easier optimization problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
34 |
+
page_content=' The evidence lower bound (ELBO) In variational inference, the distance between the true posterior 푝(풛 ∣ 풙) and the variational poste- rior 푞(풛) is measured using the Kullback-Leibler (KL) divergence, KL(푞(풛) ‖ 푝(풛 ∣ 풙)) = − ∫ 푞(풛) log (푝(풛 ∣ 풙) 푞(풛) ) 푑풛.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
35 |
+
page_content=' (3) Other distance measures can also be used to make the variational posterior similar to the true pos- terior, but the KL divergence has a particular benefit: through a neat trick we can simultaneously estimate the marginal likelihood and circumvent the need to evaluate the posterior in equation 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
36 |
+
page_content=' To see how, we first note the two mathematical identities: ∫ 푞(풛) 푑풛 = 1, (4) 푝(풙) = 푝(풙, 풛) 푝(풛 ∣ 풙) = 푝(풙, 풛) 푞(풛) (푝(풛 ∣ 풙) 푞(풛) )−1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
37 |
+
page_content=' (5) Using these we may rewrite the marginal likelihood as follows: log 푝(풙) = (4) log 푝(풙) ⋅ ∫ 푞(풛) 푑풛 = ∫ 푞(풛) log 푝(풙) 푑풛 = (5) ∫ 푞(풛) log (푝(풙, 풛) 푞(풛) ) 푑풛 − ∫ 푞(풛) log (푝(풛 ∣ 풙) 푞(풛) ) 푑풛 = ∫ 푞(풛) log (푝(풙, 풛) 푞(풛) ) 푑풛 + KL(푞(풛) ‖ 푝(풛 ∣ 풙)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
38 |
+
page_content=' (6) Because the KL divergence is always nonnegative, the first term lower bounds the log marginal likelihood (also known as the evidence) for any 푞, and is therefore known as the evidence lower bound (ELBO): ELBO(푞(풛)) = ∫ 푞(풛) log (푝(풙, 풛) 푞(풛) ) 푑풛 = 피푞(풛) [log 푝(풙, 풛) − log 푞(풛)] .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
39 |
+
page_content=' (7) Equation 6 can thus be written more succinctly as log 푝(풙) = ELBO(푞(풛)) + KL(푞(풛) ‖ 푝(풛 ∣ 풙)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
40 |
+
page_content=' (8) For a fixed model 푝(풙, 풛), the (log) evidence is a constant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
41 |
+
page_content=' Hence—recalling that the KL divergence is nonnegative—we conclude that maximizing the ELBO is equivalent to minimizing the KL divergence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
42 |
+
page_content=' This is great, because to compute the KL divergence we would have to marginalize over a function that includes the same intractable posterior that we want to estimate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
43 |
+
page_content=' In contrast, the model only enters in the ELBO through the joint distribution 푝(풙, 풛), which means that, first, we don’t need to compute the problematic integral in equation 2 and, second, we can factorize the joint distribution, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
44 |
+
page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
45 |
+
page_content=', as encoded by a directed graphical model (Wainwright and Jordan, 2008).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
46 |
+
page_content=' Sjölund | A Tutorial on Parametric Variational Inference arXiv | 2 of 9 Example 1 Suppose we have a single observation 푥 from an Exp(휆) likelihood with a Gamma(훼, 훽) prior on the rate parameter 휆.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
47 |
+
page_content=' Assuming that 훼 and 훽 are known, the only latent variable of interest is 푧 = {휆}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
48 |
+
page_content=' Specifically, 푝(푥 ∣ 휆) = 휆푒−휆푥, 푝(휆) = 훽훼 Γ(훼)휆훼−1푒−훽휆, where Γ(훼) is the Gamma function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
49 |
+
page_content=' Since the Gamma distribution is the conjugate prior for 휆, we know that the posterior is also a Gamma distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
50 |
+
page_content=' Invoking Bayes’ rule and disregarding all factors not including 휆, we find that 푝(휆 ∣ 푥) ∝ 푝(푥 ∣ 휆)푝(휆) ∝ 휆훼푒−휆(훽+푥).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
51 |
+
page_content=' Hence, we identify the posterior as 푝(휆 ∣ 푥) = Gamma(훼 + 1, 훽 + 푥).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
52 |
+
page_content=' But, let’s pretend we don’t know this and instead want to fit a Lognormal(휇, 휎2) distribution to the posterior using variational inference, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
53 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
54 |
+
$$q(\lambda) = \frac{1}{\lambda\sigma\sqrt{2\pi}} \exp\left(-\frac{(\log\lambda - \mu)^2}{2\sigma^2}\right).$$

From equation 7 we have that

$$\begin{aligned}
\mathrm{ELBO}(q(\lambda)) &= \mathbb{E}_{q(\lambda)}\left[\log\left(p(x \mid \lambda)\,p(\lambda)\right) - \log q(\lambda)\right] \\
&= \mathbb{E}_{q(\lambda)}\left[\log\left(\frac{\beta^\alpha}{\Gamma(\alpha)}\right) + \alpha\log\lambda - \lambda(\beta + x) + \log\sqrt{2\pi} + \log\sigma + \log\lambda + \frac{(\log\lambda - \mu)^2}{2\sigma^2}\right] \\
&= \log\left(\frac{\beta^\alpha\sqrt{2\pi}}{\Gamma(\alpha)}\right) + (\alpha + 1)\,\mathbb{E}_{q(\lambda)}[\log\lambda] - (\beta + x)\,\mathbb{E}_{q(\lambda)}[\lambda] + \log\sigma + \frac{1}{2\sigma^2}\,\mathbb{E}_{q(\lambda)}\left[(\log\lambda - \mu)^2\right].
\end{aligned}$$
The expectation $\mathbb{E}_{q(\lambda)}[\lambda] = \exp\left(\mu + \frac{\sigma^2}{2}\right)$ since, by definition, it is the mean of the lognormal distribution. Furthermore, the change of integration variables $y = \log\lambda$, which transforms $q(\lambda)$ into $q(y) = \mathcal{N}(\mu, \sigma^2)$, shows that

$$\mathbb{E}_{q(\lambda)}[\log\lambda] = \mathbb{E}_{q(y)}[y] = \mu, \qquad \mathbb{E}_{q(\lambda)}\left[(\log\lambda - \mu)^2\right] = \mathbb{E}_{q(y)}\left[(y - \mu)^2\right] = \sigma^2.$$

The final expression for the ELBO is thus

$$\mathrm{ELBO}(q(\lambda)) = \log\left(\frac{\beta^\alpha\sqrt{2\pi}}{\Gamma(\alpha)}\right) + (\alpha + 1)\mu - (\beta + x)\,e^{\mu + \frac{\sigma^2}{2}} + \log\sigma + \frac{1}{2}.$$
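This closed-form ELBO is easy to evaluate numerically. A minimal sketch in Python (the function name and test values are my own choices; the formula is the one derived above) checks that the bound never exceeds the log evidence $\log p(x) = \log(3/16)$ computed in example 2:

```python
import math

def elbo_lognormal(mu, sigma, alpha, beta, x):
    """Closed-form ELBO for a Lognormal(mu, sigma^2) variational posterior
    fitted to the Gamma(alpha + 1, beta + x) posterior of example 1."""
    return (math.log(beta**alpha * math.sqrt(2 * math.pi) / math.gamma(alpha))
            + (alpha + 1) * mu
            - (beta + x) * math.exp(mu + sigma**2 / 2)
            + math.log(sigma)
            + 0.5)

# The ELBO lower-bounds the log evidence for every choice of (mu, sigma);
# for alpha = 3, beta = 1, x = 1 the evidence is p(x) = 3/16 (see example 2).
log_evidence = math.log(3 / 16)
for mu in (0.4, 0.6, 0.8):
    assert elbo_lognormal(mu, 0.5, 3.0, 1.0, 1.0) <= log_evidence
```

The gap to $\log(3/16)$ is exactly the KL divergence between $q$ and the true Gamma posterior, which is what figure 2 of example 2 visualizes.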
Sjölund | A Tutorial on Parametric Variational Inference | arXiv | 3 of 9

Modeling

How, then, do we choose the variational family $\mathcal{Q}$? Historically, the dominant approach has been to assume a particular factorization of the variational posterior, and to use calculus of variations to search for distributions that match this factorization. This is known as mean-field variational inference (Blei, Kucukelbir, and McAuliffe, 2017), and is still the approach most often taught in classes. However, mean-field variational inference is only applicable to a rather limited set of models. Most of the successes of VI in the last 10–15 years have instead taken a parametric approach, where the variational family is parameterized by a highly expressive model such as a deep neural network. One can then use "standard" optimization techniques to search for the parameters $\boldsymbol{\theta}^*$ that maximize the ELBO. In light of the above, this tutorial focuses exclusively on parametric variational inference.
In example 1, we indeed took the parametric approach, since the variational posterior was explicitly parameterized by a Lognormal distribution with parameters $\boldsymbol{\theta} = \{\mu, \sigma\}$. In example 2, we take a closer look at the ELBO for a specific instance of this model.

To approximate the true posterior distribution accurately, we want the variational family $\mathcal{Q}$ to be as rich as possible so long as we maintain tractability: it is impossible to overfit! However, as example 3 shows, there is one pitfall to be aware of: $q(\boldsymbol{z})$ needs to be zero whenever $p(\boldsymbol{z} \mid \boldsymbol{x})$ is zero.
Estimating the ELBO

In the examples we've seen so far, the expectations could be computed in closed form. But that will rarely be the case in general (non-conjugate) models. We can, however, use a Monte Carlo estimate to replace the expectation with a sum:

$$\mathrm{ELBO}(q(\boldsymbol{z})) = \int q(\boldsymbol{z}) \log\left(\frac{p(\boldsymbol{x}, \boldsymbol{z})}{q(\boldsymbol{z})}\right) d\boldsymbol{z} = \mathbb{E}_{q(\boldsymbol{z})}\left[\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q(\boldsymbol{z})\right] \approx \frac{1}{L} \sum_{i=1}^{L} \left(\log p(\boldsymbol{x}, \boldsymbol{z}^{(i)}) - \log q(\boldsymbol{z}^{(i)})\right). \tag{9}$$
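Equation 9 translates almost line for line into code. A sketch for the Gamma-lognormal model of example 1 (function names and defaults are my own, not from the paper):

```python
import math
import random

def log_joint(x, lam, alpha, beta):
    # log p(x, lambda) = log p(x | lambda) + log p(lambda) for the model of example 1
    return (math.log(lam) - lam * x
            + alpha * math.log(beta) - math.lgamma(alpha)
            + (alpha - 1) * math.log(lam) - beta * lam)

def log_q(lam, mu, sigma):
    # log density of a Lognormal(mu, sigma^2) distribution
    return (-math.log(lam * sigma * math.sqrt(2 * math.pi))
            - (math.log(lam) - mu)**2 / (2 * sigma**2))

def elbo_monte_carlo(mu, sigma, alpha, beta, x, num_samples=100_000, seed=0):
    """Estimate the ELBO by averaging log p(x, z) - log q(z) over samples from q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        lam = math.exp(rng.gauss(mu, sigma))  # lam ~ Lognormal(mu, sigma^2)
        total += log_joint(x, lam, alpha, beta) - log_q(lam, mu, sigma)
    return total / num_samples
```

For $\mu = 0.6$, $\sigma = 0.5$, the estimate should land close to the closed-form value of about $-1.70$ from example 1.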
The key requirement is that we are able to draw samples $\boldsymbol{z}^{(i)}$ from the variational posterior $q(\boldsymbol{z})$. But, as suggested by the previous section, it is not enough to evaluate the ELBO for a given $q \in \mathcal{Q}$: we want to find the best $q$! Having parameterized the variational posterior $q_{\boldsymbol{\theta}}(\boldsymbol{z})$ with the parameters $\boldsymbol{\theta}$, we may rephrase this as finding parameter values that maximize the ELBO. For efficient optimization, however, we need to evaluate both the objective function (the ELBO) and its gradient.
Gradient-based optimization of the ELBO

In optimization, it is standard practice to consider minimization problems. (Since a maximization problem can be transformed into a minimization problem by negating the objective function, this can be done without loss of generality.) We thus express our optimization problem as:

$$\boldsymbol{\theta}^* = \arg\min_{\boldsymbol{\theta}}\; -\mathbb{E}_{q_{\boldsymbol{\theta}}(\boldsymbol{z})}\left[\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right]. \tag{10}$$

Applying, for instance, gradient descent to this problem corresponds to the iterations

$$\boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k + \eta\,\nabla_{\boldsymbol{\theta}}\,\mathbb{E}_{q_{\boldsymbol{\theta}}(\boldsymbol{z})}\left[\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right], \qquad k = 0, 1, \ldots \tag{11}$$

where the hyperparameter $\eta > 0$ is the step size.
But this reveals a complication: the gradient acts on the parameters of the distribution that we compute the expectation over. Consequently, we cannot simply move the gradient inside the expectation, nor can we use the Monte Carlo trick to first replace the expectation with samples and then compute the gradient on those. But there are other, less direct, ways of applying the Monte Carlo idea that do work (incidentally, this turns gradient descent into stochastic gradient descent).
We begin by rewriting the gradient of the ELBO

Example 2

To make things more concrete, we continue with the setting from example 1 and set $\alpha = 3$, $\beta = 1$, and $x = 1$.
The evidence $p(x)$ is the previously neglected proportionality constant relating the posterior and the joint distributions,

$$p(x) = \frac{p(x, \lambda)}{p(\lambda \mid x)} = \frac{\Gamma(\alpha + 1)}{\Gamma(\alpha)}\,\frac{\beta^\alpha}{(\beta + x)^{\alpha + 1}} = \frac{\alpha\beta^\alpha}{(\beta + x)^{\alpha + 1}}.$$

Inserting the numerical values above gives $p(x = 1) = 3/16$.
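As a sanity check, the closed-form evidence can be compared against brute-force numerical integration of the joint density. A sketch using only the Python standard library (the grid parameters are my own choices):

```python
import math

alpha, beta, x = 3.0, 1.0, 1.0

# Closed form: p(x) = alpha * beta**alpha / (beta + x)**(alpha + 1)
p_closed = alpha * beta**alpha / (beta + x)**(alpha + 1)  # = 3/16

def joint(lam):
    # p(x | lam) * p(lam): exponential likelihood times Gamma prior
    likelihood = lam * math.exp(-lam * x)
    prior = beta**alpha / math.gamma(alpha) * lam**(alpha - 1) * math.exp(-beta * lam)
    return likelihood * prior

# Midpoint-rule integration over lam; the integrand is negligible beyond lam = 30
h = 1e-3
p_numeric = sum(joint((i + 0.5) * h) * h for i in range(int(30 / h)))
```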
For simplicity, we fix $\sigma = 0.5$ in the variational posterior (this corresponds approximately to the value found by moment matching) and study the effect of changing $\mu$.
Example 2, figure 1: The fit of a $\mathrm{Lognormal}(\mu, \sigma^2 = 0.25)$ variational posterior to a $\mathrm{Gamma}(4, 2)$ posterior for different values of $\mu$.

Example 2, figure 2: How well the ELBO approximates the log evidence depends on the parameter $\mu$. The gap corresponds exactly to the KL divergence, hence maximizing the ELBO is equivalent to minimizing the KL divergence.
Example 3

Let's return to Example 1 and see what happens if we try to use an $\mathcal{N}(\mu, \sigma^2)$ distribution as the variational posterior, i.e.,

$$q(\lambda) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(\lambda - \mu)^2}{2\sigma^2}\right).$$
Deriving the ELBO as before, we have that

$$\begin{aligned}
\mathrm{ELBO}(q(\lambda)) &= \mathbb{E}_{q(\lambda)}\left[\log\left(p(x \mid \lambda)\,p(\lambda)\right) - \log q(\lambda)\right] \\
&= \mathbb{E}_{q(\lambda)}\left[\log\left(\frac{\beta^\alpha}{\Gamma(\alpha)}\right) + \alpha\log\lambda - \lambda(\beta + x) + \log\sqrt{2\pi} + \log\sigma + \frac{(\lambda - \mu)^2}{2\sigma^2}\right] \\
&= \log\left(\frac{\beta^\alpha\sqrt{2\pi}}{\Gamma(\alpha)}\right) + \alpha\underbrace{\mathbb{E}_{q(\lambda)}[\log\lambda]}_{\text{undefined!}} - (\beta + x)\underbrace{\mathbb{E}_{q(\lambda)}[\lambda]}_{=\mu} + \log\sigma + \frac{1}{2\sigma^2}\underbrace{\mathbb{E}_{q(\lambda)}\left[(\lambda - \mu)^2\right]}_{=\sigma^2}.
\end{aligned}$$

The logarithm is only defined for positive values, hence $\mathbb{E}_{q(\lambda)}[\log\lambda]$ is undefined.
This illustrates an important caveat when choosing the variational distribution: $q(\boldsymbol{z})$ needs to be zero whenever $p(\boldsymbol{z} \mid \boldsymbol{x})$ is zero.
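The failure is easy to reproduce by simulation. A small sketch (the values are mine, matching the setting of example 2): a nontrivial fraction of draws from $\mathcal{N}(0.6, 0.25)$ are negative, and `math.log` rejects them outright.

```python
import math
import random

# A Normal variational posterior puts probability mass on negative lambda,
# where the Gamma posterior has zero density, so E_q[log lambda] does not exist.
rng = random.Random(0)
mu, sigma = 0.6, 0.5
samples = [rng.gauss(mu, sigma) for _ in range(10_000)]
n_negative = sum(s <= 0 for s in samples)  # roughly 11% of draws for these values

# Taking the log of such a sample fails outright:
try:
    math.log(next(s for s in samples if s <= 0))
    log_defined = True
except ValueError:
    log_defined = False
```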
as follows (Ranganath, Gerrish, and Blei, 2014):

$$\begin{aligned}
\nabla_{\boldsymbol{\theta}}\,\mathbb{E}_{q_{\boldsymbol{\theta}}(\boldsymbol{z})}\left[\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right] &= \nabla_{\boldsymbol{\theta}} \int \left(\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right) q_{\boldsymbol{\theta}}(\boldsymbol{z})\, d\boldsymbol{z} \\
&= \int \left(\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right) \nabla_{\boldsymbol{\theta}}\, q_{\boldsymbol{\theta}}(\boldsymbol{z})\, d\boldsymbol{z} - \int \left(\nabla_{\boldsymbol{\theta}} \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right) q_{\boldsymbol{\theta}}(\boldsymbol{z})\, d\boldsymbol{z}. \tag{12}
\end{aligned}$$

But the second term in this expression vanishes,

$$\int \left(\nabla_{\boldsymbol{\theta}} \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right) q_{\boldsymbol{\theta}}(\boldsymbol{z})\, d\boldsymbol{z} = \int \frac{\nabla_{\boldsymbol{\theta}}\, q_{\boldsymbol{\theta}}(\boldsymbol{z})}{q_{\boldsymbol{\theta}}(\boldsymbol{z})}\, q_{\boldsymbol{\theta}}(\boldsymbol{z})\, d\boldsymbol{z} = \nabla_{\boldsymbol{\theta}} \underbrace{\int q_{\boldsymbol{\theta}}(\boldsymbol{z})\, d\boldsymbol{z}}_{=1} = 0. \tag{13}$$

In conclusion, we have that

$$\nabla_{\boldsymbol{\theta}}\,\mathbb{E}_{q_{\boldsymbol{\theta}}(\boldsymbol{z})}\left[\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right] = \int \left(\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right) \nabla_{\boldsymbol{\theta}}\, q_{\boldsymbol{\theta}}(\boldsymbol{z})\, d\boldsymbol{z}. \tag{14}$$
page_content=' (14) Sometimes, as in example 4, we can rewrite ∇휽푞휽(풛) (the gradient of the variational posterior) such that we can directly use a Monte Carlo method to estimate the integral.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
130 |
+
page_content=' Later, we will cover two more general Monte Carlo-based approaches: reparameterization (Kingma and Welling, 2014) and black-box variational inference (Ranganath, Gerrish, and Blei, 2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
131 |
+
Reparameterization

The "reparameterization trick" was popularized in the work introducing the variational autoencoder (Kingma and Welling, 2014), but the general principle has a much longer history (Devroye, 1996). The idea is to decouple the source of randomness from the parameters by cleverly reformulating the random variable $\boldsymbol{z} \sim q_{\boldsymbol{\theta}}(\boldsymbol{z})$ as a parameterized transformation $z = g_{\boldsymbol{\theta}}(\epsilon)$ of another random variable $\epsilon \sim p(\epsilon)$ that is easy to sample. Effectively, this moves the randomness "outside" the model and makes it possible to move the gradient inside the expectation, as shown in the example below.
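A minimal sketch of the trick for a Gaussian (the function name and test values are mine): the base noise $\epsilon$ is parameter-free, and $\mu$ and $\sigma$ enter only through a deterministic, differentiable transformation.

```python
import random

def sample_reparameterized(mu, sigma, n, seed=0):
    """Draw z ~ N(mu, sigma^2) as z = g_theta(eps) = mu + sigma * eps,
    where eps ~ N(0, 1) carries all of the randomness."""
    rng = random.Random(seed)
    return [mu + sigma * rng.gauss(0.0, 1.0) for _ in range(n)]

zs = sample_reparameterized(mu=2.0, sigma=0.5, n=50_000)
mean = sum(zs) / len(zs)
var = sum((z - mean)**2 for z in zs) / len(zs)
```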
The reparameterization trick is valid if and only if $g(\epsilon, \boldsymbol{\theta})$ is a continuous function of $\boldsymbol{\theta}$ for all $\epsilon$ (Schulman et al., 2015). Further, it works in the same way as in the example above also for expectations $\mathbb{E}_{q_{\boldsymbol{\theta}}(\boldsymbol{z})}[f(\boldsymbol{z})]$ where $f(\boldsymbol{z})$ is a general nonlinear function of $\boldsymbol{z}$:

$$\nabla_{\boldsymbol{\theta}}\,\mathbb{E}_{q_{\boldsymbol{\theta}}(\boldsymbol{z})}[f(\boldsymbol{z})] = \nabla_{\boldsymbol{\theta}}\,\mathbb{E}_{p(\epsilon)}\left[f(g_{\boldsymbol{\theta}}(\epsilon))\right] = \mathbb{E}_{p(\epsilon)}\left[\nabla_{\boldsymbol{\theta}} f(g_{\boldsymbol{\theta}}(\epsilon))\right]. \tag{15}$$

By setting $f(\boldsymbol{z}) = \log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})$ we retrieve the ELBO as a special case.
Example 4

Consider a univariate Normal variational posterior parameterized by the mean $\mu$ and standard deviation $\sigma$, i.e.,

$$q_{\boldsymbol{\theta}}(z) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(z - \mu)^2}{2\sigma^2}\right), \qquad \boldsymbol{\theta} = \{\mu, \sigma\}.$$
After some algebraic manipulations, the partial derivatives can be written as:

$$\frac{\partial q_{\boldsymbol{\theta}}}{\partial\mu} = \frac{z - \mu}{\sigma^2} \cdot q_{\boldsymbol{\theta}}(z), \qquad \frac{\partial q_{\boldsymbol{\theta}}}{\partial\sigma} = \frac{1}{\sigma}\left(\frac{(z - \mu)^2}{\sigma^2} - 1\right) \cdot q_{\boldsymbol{\theta}}(z).$$

Note that $q_{\boldsymbol{\theta}}(z)$ appears in both of these expressions. By inserting the above in equation 14, we thus arrive at an expectation that we can replace with a Monte Carlo estimate:

$$\begin{aligned}
\nabla_{\boldsymbol{\theta}}\,\mathbb{E}_{q_{\boldsymbol{\theta}}(z)}\left[\log p(\boldsymbol{x}, z) - \log q_{\boldsymbol{\theta}}(z)\right] &= \int \left(\log p(\boldsymbol{x}, z) - \log q_{\boldsymbol{\theta}}(z)\right) \left(\frac{z - \mu}{\sigma^2},\; \frac{1}{\sigma}\left(\frac{(z - \mu)^2}{\sigma^2} - 1\right)\right)^{\!\top} q_{\boldsymbol{\theta}}(z)\, dz \\
&\approx \frac{1}{L} \sum_{i=1}^{L} \left(\log p(\boldsymbol{x}, z^{(i)}) - \log q_{\boldsymbol{\theta}}(z^{(i)})\right) \left(\frac{z^{(i)} - \mu}{\sigma^2},\; \frac{1}{\sigma}\left(\frac{(z^{(i)} - \mu)^2}{\sigma^2} - 1\right)\right)^{\!\top},
\end{aligned}$$

where $z^{(i)} \sim q_{\boldsymbol{\theta}}(z)$.
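This estimator is straightforward to implement. A sketch in Python (the toy model, function names, and test values are mine, chosen so the exact gradient is known): for a prior $z \sim \mathcal{N}(0, 1)$ and likelihood $x \mid z \sim \mathcal{N}(z, 1)$, the exact ELBO gradient with respect to $(\mu, \sigma)$ works out to $(x - 2\mu,\; 1/\sigma - 2\sigma)$.

```python
import math
import random

def grad_elbo_score(log_joint, mu, sigma, num_samples=200_000, seed=1):
    """MC estimate of (d/dmu, d/dsigma) of the ELBO for q = N(mu, sigma^2),
    using the explicit partial derivatives of q from example 4."""
    rng = random.Random(seed)
    g_mu = g_sigma = 0.0
    for _ in range(num_samples):
        z = rng.gauss(mu, sigma)
        log_q = (-0.5 * math.log(2 * math.pi * sigma**2)
                 - (z - mu)**2 / (2 * sigma**2))
        w = log_joint(z) - log_q
        g_mu += w * (z - mu) / sigma**2
        g_sigma += w * ((z - mu)**2 / sigma**2 - 1) / sigma
    return g_mu / num_samples, g_sigma / num_samples

# Toy model with a tractable ELBO: z ~ N(0, 1) and x | z ~ N(z, 1).
x = 1.0

def toy_log_joint(z):
    return -math.log(2 * math.pi) - (x - z)**2 / 2 - z**2 / 2

g_mu, g_sigma = grad_elbo_score(toy_log_joint, mu=0.3, sigma=0.8)
```

For $\mu = 0.3$, $\sigma = 0.8$ the exact gradient is $(0.4, -0.35)$, so the estimates should land near those values.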
Example 5

Suppose the variational posterior is a univariate Normal distribution parameterized by the mean $\mu$ and standard deviation $\sigma$, i.e., $q_{\boldsymbol{\theta}}(z) = \mathcal{N}(z; \boldsymbol{\theta})$ where $\boldsymbol{\theta} = \{\mu, \sigma\}$. This can be reparameterized as $z = g_{\boldsymbol{\theta}}(\epsilon) = \mu + \sigma \cdot \epsilon$ where $\epsilon \sim \mathcal{N}(0, 1)$. Let's consider the effect this has on the expectation $\mathbb{E}_{q_{\boldsymbol{\theta}}(z)}[\log z]$.
(i) Original expression:

$$\mathbb{E}_{q_{\boldsymbol{\theta}}(z)}[\log z] = \frac{1}{\sqrt{2\pi\sigma^2}} \int \log z\, \exp\left(-\frac{(z - \mu)^2}{2\sigma^2}\right) dz,$$

$$\nabla_{\boldsymbol{\theta}}\,\mathbb{E}_{q_{\boldsymbol{\theta}}(z)}[\log z] = \int \log z\, \nabla_{\boldsymbol{\theta}}\, q_{\boldsymbol{\theta}}(z)\, dz = \mathbb{E}_{q_{\boldsymbol{\theta}}(z)}\left[\log z \cdot \left(\frac{z - \mu}{\sigma^2},\; \frac{1}{\sigma}\left(\frac{(z - \mu)^2}{\sigma^2} - 1\right)\right)^{\!\top}\right],$$

where we used the expression for $\nabla_{\boldsymbol{\theta}}\, q_{\boldsymbol{\theta}}(z)$ from example 4.

(ii) Reparameterized expression:

$$\mathbb{E}_{q_{\boldsymbol{\theta}}(z)}[\log z]\,\Big|_{z = \mu + \sigma\epsilon} = \frac{1}{\sqrt{2\pi\sigma^2}} \int \log(\mu + \sigma\epsilon) \exp\left(-\frac{(\mu + \sigma\epsilon - \mu)^2}{2\sigma^2}\right) \sigma\, d\epsilon = \frac{1}{\sqrt{2\pi}} \int \log(\mu + \sigma\epsilon) \exp\left(-\frac{\epsilon^2}{2}\right) d\epsilon = \mathbb{E}_{p(\epsilon)}\left[\log(\mu + \sigma\epsilon)\right],$$

$$\nabla_{\boldsymbol{\theta}}\,\mathbb{E}_{p(\epsilon)}\left[\log(\mu + \sigma\epsilon)\right] = \mathbb{E}_{p(\epsilon)}\left[\nabla_{\boldsymbol{\theta}} \log(\mu + \sigma\epsilon)\right] = \mathbb{E}_{p(\epsilon)}\left[\left(\frac{1}{\mu + \sigma\epsilon},\; \frac{\epsilon}{\mu + \sigma\epsilon}\right)^{\!\top}\right].$$
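The reparameterized form in (ii) turns directly into an estimator. A sketch (the function name and test values are mine; $\mu$ is taken large relative to $\sigma$ so that $\mu + \sigma\epsilon$ is effectively always positive and the logarithm is defined):

```python
import random

def grad_reparam_log(mu, sigma, num_samples=100_000, seed=2):
    """MC estimate of (d/dmu, d/dsigma) of E_q[log z] for q = N(mu, sigma^2),
    via the reparameterization z = mu + sigma * eps from example 5(ii)."""
    rng = random.Random(seed)
    g_mu = g_sigma = 0.0
    for _ in range(num_samples):
        eps = rng.gauss(0.0, 1.0)
        z = mu + sigma * eps
        g_mu += 1.0 / z       # d/dmu log(mu + sigma * eps)
        g_sigma += eps / z    # d/dsigma log(mu + sigma * eps)
    return g_mu / num_samples, g_sigma / num_samples

# For mu = 5 and sigma = 0.1, log z is nearly linear around mu,
# so d/dmu E[log z] is close to 1/mu = 0.2.
g_mu, g_sigma = grad_reparam_log(mu=5.0, sigma=0.1)
```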
Amortized variational inference

Many probabilistic models have local latent variables $\boldsymbol{z}_i$ associated with each data point $\boldsymbol{x}_i$. The simplest case is when the joint distribution factorizes as

$$p(\boldsymbol{x}, \boldsymbol{z}) = \prod_{i=1}^{N} p(\boldsymbol{x}_i \mid \boldsymbol{z}_i)\, p(\boldsymbol{z}_i). \tag{16}$$

Suppose we use a variational posterior that factorizes accordingly,

$$q_{\boldsymbol{\theta}}(\boldsymbol{z}) = \prod_{i=1}^{N} q_{\boldsymbol{\theta}_i}(\boldsymbol{z}_i), \tag{17}$$

then the ELBO maximization in equation 10 decomposes into a sum of local ELBOs:

$$\boldsymbol{\theta}^* = \arg\min_{\boldsymbol{\theta}}\; -\sum_{i=1}^{N} \mathbb{E}_{q_{\boldsymbol{\theta}_i}(\boldsymbol{z}_i)}\left[\log p(\boldsymbol{x}_i \mid \boldsymbol{z}_i) + \log p(\boldsymbol{z}_i) - \log q_{\boldsymbol{\theta}_i}(\boldsymbol{z}_i)\right]. \tag{18}$$
Since the optimization variables are $\boldsymbol{\theta} = \{\boldsymbol{\theta}_1, \ldots, \boldsymbol{\theta}_N\}$, large datasets amount to large optimization problems, which are computationally demanding to solve. This led to the idea of amortized variational inference (Rezende, Mohamed, and Wierstra, 2014), wherein a machine learning model (often a neural network) is trained to directly predict the solution $\boldsymbol{\theta}^*$ of this optimization problem. Specifically, let $\Lambda_\phi$ denote a neural network parameterized by $\phi$ that maps individual data points $\boldsymbol{x}_i$ to corresponding parameters $\boldsymbol{\theta}_i$ of the local variational posterior $q_{\boldsymbol{\theta}_i}(\boldsymbol{z}_i)$. This model is trained using the expression in equation 18 as the loss function, but replacing $\boldsymbol{\theta}_i = \Lambda_\phi(\boldsymbol{x}_i)$. Note that even though the objective function is the same, this is a form of amortized optimization (Amos, 2022), since we are now using $\phi$ as the optimization variables instead of $\boldsymbol{\theta}$. Furthermore, the loss function is a sum over data points, which means that the standard machinery for training neural networks (stochastic gradient descent etc.) can be applied. In the context of variational autoencoders, the model $\Lambda_\phi$ is referred to as the encoder, which is accompanied by a jointly trained decoder corresponding to the probability distribution $p(\boldsymbol{x} \mid \boldsymbol{z})$ (Kingma and Welling, 2019).
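To make the idea concrete, here is a deliberately tiny sketch (the affine encoder, all names, and the toy model are my own illustrations, not the paper's): a shared map $\Lambda_\phi$ replaces the per-datapoint parameters $\boldsymbol{\theta}_i$, and the loss is the negative sum of local ELBO estimates from equation 18 with $\boldsymbol{\theta}_i = \Lambda_\phi(x_i)$.

```python
import math
import random

def encoder(phi, x_i):
    """A tiny stand-in for Lambda_phi: an affine map from a scalar
    datapoint to the local variational parameters (mu_i, sigma_i)."""
    a, b, c, d = phi
    return a * x_i + b, math.exp(c * x_i + d)  # exp keeps the scale positive

def amortized_loss(phi, data, log_joint_i, num_samples=100, seed=3):
    """Negative sum of local ELBOs (equation 18), with theta_i = Lambda_phi(x_i)
    and each local ELBO estimated with reparameterized samples."""
    rng = random.Random(seed)
    total = 0.0
    for x_i in data:
        mu_i, sigma_i = encoder(phi, x_i)
        for _ in range(num_samples):
            z = mu_i + sigma_i * rng.gauss(0.0, 1.0)
            log_q = (-0.5 * math.log(2 * math.pi * sigma_i**2)
                     - (z - mu_i)**2 / (2 * sigma_i**2))
            total += (log_joint_i(x_i, z) - log_q) / num_samples
    return -total
```

In practice $\Lambda_\phi$ is a neural network trained by stochastic gradient descent on minibatches; the affine map here only illustrates the amortization itself.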
Black-box variational inference

The reparameterization trick lets you compute the exact gradient by automatic differentiation, which is undoubtedly convenient. On the other hand, there are many models in which reparameterization is impossible. In these cases, one can instead estimate the gradient using black-box variational inference (BBVI) (Ranganath, Gerrish, and Blei, 2014), which is more general yet still convenient. However, the BBVI estimator suffers from high variance.
BBVI relies on the observation that

$$\nabla_{\boldsymbol{\theta}} \log q_{\boldsymbol{\theta}}(\boldsymbol{z}) = \frac{\nabla_{\boldsymbol{\theta}}\, q_{\boldsymbol{\theta}}(\boldsymbol{z})}{q_{\boldsymbol{\theta}}(\boldsymbol{z})}, \tag{19}$$

which is sometimes referred to as the REINFORCE trick (Williams, 1992). This can be used to rewrite equation 14 as

$$\begin{aligned}
\nabla_{\boldsymbol{\theta}}\,\mathbb{E}_{q_{\boldsymbol{\theta}}(\boldsymbol{z})}\left[\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right] &= \int \left(\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right) \underbrace{q_{\boldsymbol{\theta}}(\boldsymbol{z})\, \nabla_{\boldsymbol{\theta}} \log q_{\boldsymbol{\theta}}(\boldsymbol{z})}_{=\nabla_{\boldsymbol{\theta}}\, q_{\boldsymbol{\theta}}(\boldsymbol{z})}\, d\boldsymbol{z} \\
&= \mathbb{E}_{q_{\boldsymbol{\theta}}(\boldsymbol{z})}\left[\left(\log p(\boldsymbol{x}, \boldsymbol{z}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right) \nabla_{\boldsymbol{\theta}} \log q_{\boldsymbol{\theta}}(\boldsymbol{z})\right] \\
&\approx \frac{1}{L} \sum_{i=1}^{L} \left(\log p(\boldsymbol{x}, \boldsymbol{z}^{(i)}) - \log q_{\boldsymbol{\theta}}(\boldsymbol{z}^{(i)})\right) \nabla_{\boldsymbol{\theta}} \log q_{\boldsymbol{\theta}}(\boldsymbol{z}^{(i)}). \tag{20}
\end{aligned}$$
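The variance issue is easy to observe empirically. A toy comparison (my own setup, targeting the simpler expectation $\mathbb{E}[z^2]$ rather than the full ELBO): both estimators are unbiased for $\mathrm{d}/\mathrm{d}\mu\,\mathbb{E}_{\mathcal{N}(\mu,1)}[z^2] = 2\mu$, but the score-function terms fluctuate far more.

```python
import random

# Compare the spread of the score-function (BBVI) and reparameterization
# estimators of d/dmu E[z^2] for z ~ N(mu, 1); the exact gradient is 2*mu.
mu, num_samples = 1.0, 10_000
rng = random.Random(4)
score_terms, reparam_terms = [], []
for _ in range(num_samples):
    eps = rng.gauss(0.0, 1.0)
    z = mu + eps
    score_terms.append(z**2 * (z - mu))   # f(z) * d/dmu log q(z), with sigma = 1
    reparam_terms.append(2 * (mu + eps))  # d/dmu f(mu + eps)

def mean_and_variance(xs):
    m = sum(xs) / len(xs)
    return m, sum((v - m)**2 for v in xs) / len(xs)

m_score, v_score = mean_and_variance(score_terms)
m_reparam, v_reparam = mean_and_variance(reparam_terms)
```

For this example the per-sample variances are roughly 30 versus 4, which is exactly why BBVI needs the variance reduction techniques discussed below.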
Since we can often use automatic differentiation to evaluate the score function $\nabla_{\boldsymbol{\theta}} \log q_{\boldsymbol{\theta}}(\boldsymbol{z})$, it appears that this reformulation resolves the problem of estimating the gradient of the ELBO from samples. The catch, however, is that this estimator often has too high a variance to be useful in practice. Arguably, the key contribution of BBVI was to adapt two variance reduction techniques, Rao-Blackwellization and control variates, to the estimator in equation 20. Going into detail on these variance reduction techniques would, however, take us beyond the scope of a tutorial on the basics of variational inference. We refer the interested reader to the original work by Ranganath, Gerrish, and Blei (2014).
Acknowledgments

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. This preprint was created using the LaPreprint template (https://github.com/roaldarbol/lapreprint) by Mikkel Roald-Arbøl.
References

Amos, Brandon (2022). "Tutorial on amortized optimization for learning to optimize over continuous domains". In: arXiv preprint 2202.00665.
Blei, David M, Alp Kucukelbir, and Jon D McAuliffe (2017). "Variational inference: A review for statisticians". In: Journal of the American Statistical Association 112.518, pp. 859–877.
Devroye, Luc (1996). "Random variate generation in one line of code". In: Proceedings Winter Simulation Conference. IEEE, pp. 265–272.
Kingma, Diederik P and Max Welling (2014). "Auto-Encoding Variational Bayes". In: 2nd International Conference on Learning Representations.
— (2019). "An introduction to variational autoencoders". In: Foundations and Trends® in Machine Learning 12.4, pp.
|
198 |
+
page_content=' 307–392.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
199 |
+
page_content=' Ranganath, Rajesh, Sean Gerrish, and David M Blei (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
200 |
+
page_content=' “Black box variational inference”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
201 |
+
page_content=' In: Artificial intelligence and statistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
202 |
+
page_content=' PMLR, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
203 |
+
page_content=' 814–822.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
204 |
+
page_content=' Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
205 |
+
page_content=' “Stochastic backpropagation and approximate inference in deep generative models”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
206 |
+
page_content=' In: International conference on machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
207 |
+
page_content=' PMLR, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
208 |
+
page_content=' 1278–1286.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
209 |
+
page_content=' Schulman, John et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
210 |
+
page_content=' (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
211 |
+
page_content=' “Gradient estimation using stochastic computation graphs”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
212 |
+
page_content=' In: Ad- vances in Neural Information Processing Systems 28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
213 |
+
page_content=' Wainwright, Martin J and Michael I Jordan (2008).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
214 |
+
page_content=' “Graphical models, exponential families, and vari- ational inference”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
215 |
+
page_content=' In: Foundations and Trends in Machine Learning 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
216 |
+
page_content='1–2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
217 |
+
page_content=' 1–305.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
218 |
+
page_content=' Williams, Ronald J (1992).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
219 |
+
page_content=' “Simple statistical gradient-following algorithms for connectionist rein- forcement learning”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
220 |
+
page_content=' In: Machine learning 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
221 |
+
page_content='3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
222 |
+
page_content=' 229–256.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
223 |
+
page_content=' Sjölund | A Tutorial on Parametric Variational Inference arXiv | 9 of 9' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/99AzT4oBgHgl3EQfSvs8/content/2301.01236v1.pdf'}
|
9NE1T4oBgHgl3EQfCQIp/content/tmp_files/2301.02861v1.pdf.txt
ADDED
@@ -0,0 +1,1231 @@
arXiv:2301.02861v1 [math.NT] 7 Jan 2023

IDENTITIES INVOLVING DEGENERATE HARMONIC AND DEGENERATE HYPERHARMONIC NUMBERS

HYE KYUNG KIM1, DAE SAN KIM2, AND TAEKYUN KIM3,*

ABSTRACT. Harmonic numbers have been studied since antiquity, while hyperharmonic numbers were introduced by Conway and Guy in 1996. The degenerate harmonic numbers and degenerate hyperharmonic numbers are their respective degenerate versions. The aim of this paper is to further investigate some properties, recurrence relations and identities involving the degenerate harmonic and degenerate hyperharmonic numbers in connection with degenerate Stirling numbers of the first kind, degenerate Daehee numbers and degenerate derangements.
1. INTRODUCTION

In recent years, various degenerate versions of many special numbers and polynomials have been studied and have yielded a lot of fascinating and fruitful results (see [5, 6, 7, 8, 9, 10, 11, 12] and the references therein), which began with Carlitz's work on the degenerate Bernoulli and degenerate Euler numbers (see [2]). It is worthwhile to mention that these explorations for degenerate versions are not limited to polynomials and numbers but are also extended to transcendental functions, like gamma functions (see [9, 10]). It is also remarkable that the λ-umbral calculus and λ-q-umbral calculus were introduced as degenerate versions of the umbral calculus and the q-umbral calculus, respectively (see [6, 11]). As it turns out, the λ-umbral calculus and λ-q-umbral calculus are more convenient than the umbral calculus and the q-umbral calculus when dealing with degenerate Sheffer polynomials and degenerate q-Sheffer polynomials.

The aim of this paper is to further investigate some properties, recurrence relations and identities involving the degenerate harmonic numbers (see (6)) and the degenerate hyperharmonic numbers (see (7), (8)) in connection with degenerate Stirling numbers of the first kind, degenerate Daehee numbers and degenerate derangements. The degenerate harmonic numbers and degenerate hyperharmonic numbers are respectively degenerate versions of the harmonic numbers and the hyperharmonic numbers, of which the latter are introduced in [4].

The outline of this paper is as follows. In Section 1, we recall the degenerate exponentials and the degenerate logarithms. We remind the reader of the harmonic numbers, and of the hyperharmonic numbers together with their explicit expression due to Conway and Guy (see [4]). Then we recall their degenerate versions, namely the degenerate harmonic numbers, and the degenerate hyperharmonic numbers together with their explicit expression (see [7, 8]). We also mention the recently introduced degenerate Stirling numbers of the first kind and the degenerate Daehee numbers of order r. Section 2 is the main result of this paper. We obtain an expression of the degenerate hyperharmonic numbers of order r in terms of the same numbers of lower orders in Theorem 1. We express the Daehee numbers in terms of the degenerate harmonic numbers and of the degenerate hyperharmonic numbers, respectively in Theorem 2 and Theorem 3. In Theorem 4, the degenerate harmonic numbers are represented in terms of the degenerate hyperharmonic numbers of order r. In Theorem 5, the degenerate Daehee numbers are represented in terms of the degenerate Daehee numbers of order r − 1 and of the degenerate hyperharmonic numbers. We derive a simple relation between the degenerate hyperharmonic numbers and the degenerate Daehee numbers in Theorem 6. We deduce an identity involving the degenerate hyperharmonic numbers and the degenerate derangements in Theorem 7. The degenerate Daehee numbers are expressed in terms of the degenerate Stirling numbers of the first kind in Theorem 8. Finally, we get an identity involving the degenerate Stirling numbers of the first kind and the degenerate harmonic numbers in Theorem 9.

2010 Mathematics Subject Classification. 05A19; 11B73; 11B83.
Key words and phrases. degenerate harmonic number; degenerate hyperharmonic number; degenerate Daehee number; degenerate logarithm; degenerate Stirling number of the first kind; degenerate derangement.
* Corresponding author.
For any nonzero λ ∈ R, the degenerate exponential functions are defined by

(1)   e_λ^x(t) = (1 + λt)^{x/λ} = \sum_{n=0}^{∞} (x)_{n,λ} \frac{t^n}{n!},   e_λ(t) = e_λ^1(t),   (see [2, 8]),

where

(x)_{0,λ} = 1,   (x)_{n,λ} = x(x − λ)···(x − (n−1)λ),   (n ≥ 1),   (see [8]).

Let log_λ t be the compositional inverse of e_λ(t) with e_λ(log_λ t) = log_λ e_λ(t) = t. It is called the degenerate logarithm and is given by

(2)   log_λ(1+t) = \sum_{k=1}^{∞} λ^{k−1} (1)_{k,1/λ} \frac{t^k}{k!} = \frac{1}{λ}\big((1+t)^λ − 1\big),   (see [5]).
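As a quick sanity check, the two expressions in (2) can be compared coefficient-by-coefficient over exact rationals: the t^k coefficient of (1/λ)((1+t)^λ − 1) is \binom{λ}{k}/λ. The sketch below is illustrative only; the helper names `gbinom` and `series_coeff` are mine, not from the paper.

```python
from fractions import Fraction

def gbinom(x, k):
    # generalized binomial coefficient binom(x, k) = x(x-1)...(x-k+1)/k!
    out = Fraction(1)
    for i in range(k):
        out = out * (x - i) / (i + 1)
    return out

def series_coeff(k, lam):
    # t^k coefficient of log_lambda(1+t) from the series side of eq. (2):
    # lam^{k-1} (1)_{k,1/lam} / k!, with (1)_{k,1/lam} = 1(1-1/lam)...(1-(k-1)/lam)
    out = lam ** (k - 1)
    for i in range(k):
        out *= (1 - Fraction(i) / lam)
    fact = 1
    for i in range(2, k + 1):
        fact *= i
    return out / fact

lam = Fraction(5, 3)
for k in range(1, 7):
    # compare with the binomial expansion of (1/lam)((1+t)^lam - 1)
    assert series_coeff(k, lam) == gbinom(lam, k) / lam
```

Both sides agree exactly, which also makes explicit why (2) collapses to the ordinary logarithm series as λ → 0.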
The harmonic numbers are given by

(3)   H_0 = 0,   H_n = 1 + \frac{1}{2} + ··· + \frac{1}{n},   (n ∈ N),   (see [3, 4, 16]).

In 1996, Conway and Guy introduced the hyperharmonic numbers H_n^{(r)} of order r, (n, r ≥ 0), which are given by

(4)   H_0^{(r)} = 0, (r ≥ 0),   H_n^{(0)} = \frac{1}{n}, (n ≥ 1),   H_n^{(r)} = \sum_{k=1}^{n} H_k^{(r−1)}, (n, r ≥ 1),   (see [4]).

Thus, by (4), we get

(5)   H_n^{(r)} = \binom{n+r−1}{n}\big(H_{n+r−1} − H_{r−1}\big),   (r ≥ 1),   (see [4]).
Recently, the degenerate harmonic numbers were defined by

(6)   H_{0,λ} = 0,   H_{n,λ} = \sum_{k=1}^{n} \frac{1}{λ}\binom{λ}{k}(−1)^{k−1},   (n ≥ 1),   (see [8]).

Note that lim_{λ→0} H_{n,λ} = H_n. The degenerate hyperharmonic numbers H_{n,λ}^{(r)} of order r, (n, r ≥ 0), are defined by

(7)   H_{0,λ}^{(r)} = 0, (r ≥ 0),   H_{n,λ}^{(0)} = \frac{1}{λ}\binom{λ}{n}(−1)^{n−1}, (n ≥ 1),   H_{n,λ}^{(r)} = \sum_{k=1}^{n} H_{k,λ}^{(r−1)}, (n, r ≥ 1),   (see [7]).

We see from (6) and (7) that H_{n,λ}^{(1)} = H_{n,λ}. From (7), we note that

(8)   H_{n,λ}^{(r)} = \frac{(−1)^{r−1}}{\binom{λ−1}{r−1}}\binom{n+r−1}{n}\big(H_{n+r−1,λ} − H_{r−1,λ}\big),   (see [7]),

where n, r are positive integers. Here we observe from (5) and (8) that lim_{λ→0} H_{n,λ}^{(r)} = H_n^{(r)}.
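The definitions (6)–(8) are easy to check numerically over exact rationals. The following sketch (my own helper names `gbinom`, `H`, `Hr`; not from the paper) builds H_{n,λ}^{(r)} from the defining recurrence (7) and verifies the explicit expression (8) for small n and r.

```python
from fractions import Fraction

def gbinom(x, k):
    # generalized binomial coefficient binom(x, k) = x(x-1)...(x-k+1)/k!
    out = Fraction(1)
    for i in range(k):
        out = out * (x - i) / (i + 1)
    return out

def H(n, lam):
    # degenerate harmonic number H_{n,lambda}, eq. (6)
    return sum(gbinom(lam, k) * (-1) ** (k - 1) / lam for k in range(1, n + 1))

def Hr(n, r, lam):
    # degenerate hyperharmonic number H^{(r)}_{n,lambda} via eq. (7)
    if n == 0:
        return Fraction(0)
    if r == 0:
        return gbinom(lam, n) * (-1) ** (n - 1) / lam
    return sum(Hr(k, r - 1, lam) for k in range(1, n + 1))

lam = Fraction(1, 3)
for n in range(1, 5):
    for r in range(1, 4):
        closed = ((-1) ** (r - 1) / gbinom(lam - 1, r - 1)
                  * gbinom(Fraction(n + r - 1), n)
                  * (H(n + r - 1, lam) - H(r - 1, lam)))
        assert Hr(n, r, lam) == closed  # explicit expression (8)
```

Using `Fraction` keeps every comparison exact, so the assertions test the identities themselves rather than floating-point approximations.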
In [5], the degenerate Stirling numbers of the first kind are defined by

(9)   (x)_n = \sum_{k=0}^{n} S_{1,λ}(n,k)(x)_{k,λ},   (n ≥ 0),   (see [5, 8]),

where (x)_0 = 1, (x)_n = x(x−1)···(x−n+1), (n ≥ 1).

For r ∈ N, the degenerate Daehee numbers of order r are defined by

(10)   \left(\frac{\log_λ(1+t)}{t}\right)^r = \sum_{n=0}^{∞} D_{n,λ}^{(r)} \frac{t^n}{n!},   (see [11]).

In particular, for r = 1, D_{n,λ} = D_{n,λ}^{(1)} are called the degenerate Daehee numbers.
2. IDENTITIES INVOLVING DEGENERATE HARMONIC AND DEGENERATE HYPERHARMONIC NUMBERS

From (6) and (7), we note that

(11)   \frac{−\log_λ(1−t)}{1−t} = \sum_{n=1}^{∞} H_{n,λ} t^n,   (see [7]),

and

(12)   \frac{−\log_λ(1−t)}{(1−t)^r} = \sum_{n=1}^{∞} H_{n,λ}^{(r)} t^n,   (see [7]),

where r is a nonnegative integer.

By (12), we get

(13)   \sum_{n=1}^{∞} H_{n,λ}^{(r−1)} t^n = \frac{−\log_λ(1−t)}{(1−t)^r}(1−t) = \sum_{n=1}^{∞} H_{n,λ}^{(r)} t^n (1−t)
     = \sum_{n=1}^{∞} H_{n,λ}^{(r)} t^n − \sum_{n=1}^{∞} H_{n,λ}^{(r)} t^{n+1} = \sum_{n=1}^{∞} \big(H_{n,λ}^{(r)} − H_{n−1,λ}^{(r)}\big) t^n.

By comparing the coefficients on both sides of (13), we get

(14)   H_{n,λ}^{(r)} = H_{n−1,λ}^{(r)} + H_{n,λ}^{(r−1)}.
For 1 ≤ s ≤ r, by (12), we get

(15)   \sum_{n=1}^{∞} H_{n,λ}^{(r)} t^n = \frac{−\log_λ(1−t)}{(1−t)^r} = \frac{−\log_λ(1−t)}{(1−t)^{r−s}} \cdot \frac{1}{(1−t)^s}
     = \sum_{l=1}^{∞} H_{l,λ}^{(r−s)} t^l \sum_{k=0}^{∞} \binom{k+s−1}{k} t^k = \sum_{n=1}^{∞} \left(\sum_{l=1}^{n} H_{l,λ}^{(r−s)} \binom{n−l+s−1}{s−1}\right) t^n.

By comparing the coefficients on both sides of (15), we get

(16)   H_{n,λ}^{(r)} = \sum_{l=1}^{n} H_{l,λ}^{(r−s)} \binom{n−l+s−1}{s−1},

where r, s ∈ Z with 1 ≤ s ≤ r. In particular, for r = s, we have

(17)   H_{n,λ}^{(r)} = \sum_{l=1}^{n} H_{l,λ}^{(0)} \binom{n−l+r−1}{r−1} = \sum_{l=1}^{n} \frac{1}{λ}\binom{λ}{l}(−1)^{l−1} \binom{n−l+r−1}{r−1}.

Therefore, by (16) and (17), we obtain the following theorem.
Theorem 1. For r, s ∈ Z with 1 ≤ s ≤ r, we have

H_{n,λ}^{(r)} = \sum_{l=1}^{n} H_{l,λ}^{(r−s)} \binom{n−l+s−1}{s−1},

and

H_{n,λ}^{(r)} = \sum_{l=1}^{n} \frac{1}{λ}\binom{λ}{l}(−1)^{l−1} \binom{n−l+r−1}{r−1}.
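Theorem 1 says the order of a degenerate hyperharmonic number can be lowered by any amount s at the price of a binomial convolution. A minimal numerical sketch (helper names `gbinom` and `Hr` are mine, not from the paper):

```python
from fractions import Fraction

def gbinom(x, k):
    # generalized binomial coefficient binom(x, k)
    out = Fraction(1)
    for i in range(k):
        out = out * (x - i) / (i + 1)
    return out

def Hr(n, r, lam):
    # H^{(r)}_{n,lambda} from the defining recurrence (7)
    if n == 0:
        return Fraction(0)
    if r == 0:
        return gbinom(lam, n) * (-1) ** (n - 1) / lam
    return sum(Hr(k, r - 1, lam) for k in range(1, n + 1))

lam = Fraction(2, 5)
n, r = 5, 4
for s in range(1, r + 1):
    rhs = sum(Hr(l, r - s, lam) * gbinom(Fraction(n - l + s - 1), s - 1)
              for l in range(1, n + 1))
    assert Hr(n, r, lam) == rhs  # Theorem 1, lowering the order by s
```

For s = 1 the binomial factor is 1 and the identity reduces to the defining recurrence (7); for s = r it is the second formula of the theorem.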
From (11) and (14), we note that

(18)   \sum_{n=0}^{∞} D_{n,λ} \frac{t^n}{n!} = \frac{\log_λ(1+t)}{t} = \frac{\log_λ(1+t)}{1+t} \cdot \frac{1+t}{t}
     = \left(\sum_{k=1}^{∞} (−1)^{k+1} H_{k,λ} t^k\right)\left(1 + \frac{1}{t}\right)
     = \sum_{n=1}^{∞} (−1)^{n+1} H_{n,λ} t^n + \sum_{n=0}^{∞} (−1)^n H_{n+1,λ} t^n
     = 1 + \sum_{n=1}^{∞} (−1)^n \big(H_{n+1,λ} − H_{n,λ}\big) t^n.

Therefore, by comparing the coefficients on both sides of (18), we have the following theorem.

Theorem 2. For n ≥ 0, we have

D_{0,λ} = 1,   D_{n,λ} = (−1)^n n!\big(H_{n+1,λ} − H_{n,λ}\big),   (n ≥ 1).
From (12), we note that

(19)   \sum_{n=0}^{∞} D_{n,λ} \frac{t^n}{n!} = \frac{\log_λ(1+t)}{t} = \frac{\log_λ(1+t)}{t(1+t)^r}(1+t)^r
     = \sum_{k=0}^{∞} H_{k+1,λ}^{(r)} (−1)^k t^k \sum_{l=0}^{∞} \binom{r}{l} t^l
     = \sum_{n=0}^{∞} \left(\sum_{k=0}^{n} H_{k+1,λ}^{(r)} \binom{r}{n−k} (−1)^k\right) t^n.

Therefore, by (19), we obtain the following theorem.

Theorem 3. For n ≥ 0, we have

D_{n,λ} = n! \sum_{k=0}^{n} H_{k+1,λ}^{(r)} \binom{r}{n−k} (−1)^k.
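Theorem 3 holds for every order r, so it can be cross-checked against the harmonic-number formula of Theorem 2 for several values of r at once. A sketch under that reading (helper names `gbinom`, `H`, `Hr`, `fact` are mine):

```python
from fractions import Fraction

def gbinom(x, k):
    out = Fraction(1)
    for i in range(k):
        out = out * (x - i) / (i + 1)
    return out

def H(n, lam):
    # degenerate harmonic number, eq. (6)
    return sum(gbinom(lam, k) * (-1) ** (k - 1) / lam for k in range(1, n + 1))

def Hr(n, r, lam):
    # degenerate hyperharmonic number, eq. (7)
    if n == 0:
        return Fraction(0)
    if r == 0:
        return gbinom(lam, n) * (-1) ** (n - 1) / lam
    return sum(Hr(k, r - 1, lam) for k in range(1, n + 1))

def fact(n):
    out = 1
    for i in range(2, n + 1):
        out *= i
    return out

lam = Fraction(1, 4)
for n in range(0, 5):
    # reference value of D_{n,lambda} from Theorem 2
    D = Fraction(1) if n == 0 else (-1) ** n * fact(n) * (H(n + 1, lam) - H(n, lam))
    for r in range(1, 4):
        rhs = fact(n) * sum(Hr(k + 1, r, lam) * gbinom(Fraction(r), n - k) * (-1) ** k
                            for k in range(0, n + 1))
        assert D == rhs  # Theorem 3, independent of r
```

For r = 1 the binomial factor is nonzero only for n − k ≤ 1 and the right-hand side collapses to Theorem 2.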
Now, we observe from (2) that

(20)   \sum_{n=0}^{∞} D_{n,λ} \frac{t^n}{n!} = \frac{\log_λ(1+t)}{t} = \sum_{n=1}^{∞} \binom{λ}{n} \frac{1}{λ} t^{n−1} = \sum_{n=0}^{∞} \binom{λ}{n+1} \frac{1}{λ} t^n.

Thus, by (20), we get

(21)   D_{n,λ} = n! \frac{1}{λ} \binom{λ}{n+1} = \frac{(λ−1)_n}{n+1},   (n ≥ 0).
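The closed form (21) can be cross-checked against Theorem 2 directly, since both express the same numbers D_{n,λ}. A small exact-arithmetic sketch (helper names `gbinom`, `H`, `falling`, `fact` are mine, not from the paper):

```python
from fractions import Fraction

def gbinom(x, k):
    out = Fraction(1)
    for i in range(k):
        out = out * (x - i) / (i + 1)
    return out

def H(n, lam):
    # degenerate harmonic number, eq. (6)
    return sum(gbinom(lam, k) * (-1) ** (k - 1) / lam for k in range(1, n + 1))

def falling(x, n):
    # ordinary falling factorial (x)_n = x(x-1)...(x-n+1)
    out = Fraction(1)
    for i in range(n):
        out *= (x - i)
    return out

def fact(n):
    out = 1
    for i in range(2, n + 1):
        out *= i
    return out

lam = Fraction(2, 7)
for n in range(1, 6):
    D = falling(lam - 1, n) / (n + 1)                                  # eq. (21)
    assert D == (-1) ** n * fact(n) * (H(n + 1, lam) - H(n, lam))      # Theorem 2
```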
From (11), we have

(22)   \sum_{n=1}^{∞} H_{n,λ} t^n = \frac{−\log_λ(1−t)}{1−t} = \frac{−\log_λ(1−t)}{t} \cdot \frac{t}{1−t}
     = \sum_{l=0}^{∞} D_{l,λ} (−1)^l \frac{t^l}{l!} \sum_{m=1}^{∞} t^m = \sum_{n=1}^{∞} \left(\sum_{l=0}^{n−1} D_{l,λ} \frac{(−1)^l}{l!}\right) t^n.

Thus, by Theorem 3 and (22), we get

(23)   H_{n,λ} = \sum_{l=0}^{n−1} D_{l,λ} \frac{(−1)^l}{l!}
     = \sum_{l=0}^{n−1} \frac{(−1)^l}{l!}\, l! \sum_{k=0}^{l} H_{k+1,λ}^{(r)} \binom{r}{l−k} (−1)^k
     = \sum_{l=0}^{n−1} \sum_{k=0}^{l} (−1)^{k+l} H_{k+1,λ}^{(r)} \binom{r}{l−k},   (n ≥ 1).

Therefore, by (23), we obtain the following theorem.

Theorem 4. For n ≥ 1, we have

H_{n,λ} = \sum_{l=0}^{n−1} \sum_{k=0}^{l} (−1)^{k+l} \binom{r}{l−k} H_{k+1,λ}^{(r)}.
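As with Theorem 3, the free parameter r in Theorem 4 invites a check over several orders at once. The sketch below verifies the double sum against the defining series (6) and (7) exactly (helper names `gbinom`, `H`, `Hr` are mine):

```python
from fractions import Fraction

def gbinom(x, k):
    out = Fraction(1)
    for i in range(k):
        out = out * (x - i) / (i + 1)
    return out

def H(n, lam):
    # degenerate harmonic number, eq. (6)
    return sum(gbinom(lam, k) * (-1) ** (k - 1) / lam for k in range(1, n + 1))

def Hr(n, r, lam):
    # degenerate hyperharmonic number, eq. (7)
    if n == 0:
        return Fraction(0)
    if r == 0:
        return gbinom(lam, n) * (-1) ** (n - 1) / lam
    return sum(Hr(k, r - 1, lam) for k in range(1, n + 1))

lam = Fraction(3, 7)
for r in range(1, 4):
    for n in range(1, 5):
        rhs = sum((-1) ** (k + l) * gbinom(Fraction(r), l - k) * Hr(k + 1, r, lam)
                  for l in range(0, n) for k in range(0, l + 1))
        assert H(n, lam) == rhs  # Theorem 4
```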
By (10), we get

(24)   \sum_{n=0}^{∞} D_{n,λ}^{(r)} \frac{t^n}{n!} = \left(\frac{\log_λ(1+t)}{t}\right)^r = \frac{\log_λ(1+t)}{t(1+t)^k} \left(\frac{\log_λ(1+t)}{t}\right)^{r−1} (1+t)^k
     = \sum_{i=1}^{∞} (−1)^{i+1} H_{i,λ}^{(k)} t^{i−1} \sum_{j=0}^{∞} D_{j,λ}^{(r−1)} \frac{t^j}{j!} \sum_{l=0}^{∞} \binom{k}{l} t^l
     = \sum_{i=0}^{∞} (−1)^i H_{i+1,λ}^{(k)} t^i \sum_{m=0}^{∞} \left(\sum_{j=0}^{m} \binom{m}{j} D_{j,λ}^{(r−1)} (k)_{m−j}\right) \frac{t^m}{m!}
     = \sum_{n=0}^{∞} \left(\sum_{i=0}^{n} \sum_{j=0}^{n−i} (−1)^i \binom{n−i}{j} \frac{(k)_{n−i−j}}{(n−i)!} D_{j,λ}^{(r−1)} H_{i+1,λ}^{(k)}\right) t^n.

Therefore, by comparing the coefficients on both sides of (24), we obtain the following theorem.

Theorem 5. For n, k ≥ 0 and r ≥ 1, we have

D_{n,λ}^{(r)} = n! \sum_{i=0}^{n} \sum_{j=0}^{n−i} (−1)^i \binom{n−i}{j} \frac{(k)_{n−i−j}}{(n−i)!} D_{j,λ}^{(r−1)} H_{i+1,λ}^{(k)}.
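Theorem 5 can be tested numerically by building the higher-order Daehee numbers from their generating function (10): multiplying one extra factor log_λ(1+t)/t, whose coefficients are given by (20), yields an exponential convolution. The sketch below is my own cross-check, not code from the paper; `Dr`, `Hr`, `gbinom`, `falling`, `fact` are assumed helper names.

```python
from fractions import Fraction

def gbinom(x, k):
    out = Fraction(1)
    for i in range(k):
        out = out * (x - i) / (i + 1)
    return out

def fact(n):
    out = 1
    for i in range(2, n + 1):
        out *= i
    return out

def falling(x, n):
    # ordinary falling factorial (x)_n
    out = Fraction(1)
    for i in range(n):
        out *= (x - i)
    return out

def Hr(n, r, lam):
    # degenerate hyperharmonic number, eq. (7)
    if n == 0:
        return Fraction(0)
    if r == 0:
        return gbinom(lam, n) * (-1) ** (n - 1) / lam
    return sum(Hr(k, r - 1, lam) for k in range(1, n + 1))

def Dr(n, r, lam):
    # D^{(r)}_{n,lambda}: one more factor of log_lam(1+t)/t, whose t^m
    # coefficient is binom(lam, m+1)/lam by eq. (20)
    if r == 0:
        return Fraction(1) if n == 0 else Fraction(0)
    return fact(n) * sum(gbinom(lam, m + 1) / lam * Dr(n - m, r - 1, lam) / fact(n - m)
                         for m in range(0, n + 1))

lam = Fraction(3, 8)
for r in range(1, 4):
    for n in range(0, 4):
        for k in range(0, 3):
            rhs = fact(n) * sum((-1) ** i * gbinom(Fraction(n - i), j)
                                * falling(Fraction(k), n - i - j) / fact(n - i)
                                * Dr(j, r - 1, lam) * Hr(i + 1, k, lam)
                                for i in range(0, n + 1)
                                for j in range(0, n - i + 1))
            assert Dr(n, r, lam) == rhs  # Theorem 5, for several k
```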
By (11), we get

(25)   \sum_{n=1}^{∞} H_{n,λ} t^n = \frac{−\log_λ(1−t)}{1−t} = \frac{\log_λ(1−t)}{−t} \cdot \frac{t}{1−t}
     = \sum_{l=0}^{∞} (−1)^l D_{l,λ} \frac{t^l}{l!} \sum_{j=1}^{∞} t^j = \sum_{n=1}^{∞} \left(\sum_{l=0}^{n−1} (−1)^l \frac{D_{l,λ}}{l!}\right) t^n.

Thus, by comparing the coefficients on both sides of (25), we get

(26)   H_{n,λ} = \sum_{l=0}^{n−1} (−1)^l \frac{D_{l,λ}}{l!},   (n ≥ 1).

From (12), we can derive the following:

(27)   \sum_{n=1}^{∞} H_{n,λ}^{(r)} t^n = \frac{−\log_λ(1−t)}{t} \cdot \frac{t}{(1−t)^r}
     = \sum_{l=0}^{∞} D_{l,λ} (−1)^l \frac{t^l}{l!} \sum_{m=1}^{∞} \binom{r+m−2}{m−1} t^m
     = \sum_{n=1}^{∞} \left(\sum_{m=1}^{n} \binom{r+m−2}{r−1} \frac{D_{n−m,λ}}{(n−m)!} (−1)^{n−m}\right) t^n.

Therefore, by (26) and (27), we obtain the following theorem.

Theorem 6. For n ∈ N, we have

H_{n,λ} = \sum_{l=0}^{n−1} (−1)^l \frac{D_{l,λ}}{l!},   (n ≥ 1),

and

H_{n,λ}^{(r)} = \sum_{m=1}^{n} \binom{r+m−2}{r−1} \frac{D_{n−m,λ}}{(n−m)!} (−1)^{n−m}.
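Both parts of Theorem 6 can be checked using the closed form (21) for D_{n,λ}. A minimal exact-arithmetic sketch (helper names `gbinom`, `H`, `Hr`, `D`, `fact` are mine):

```python
from fractions import Fraction

def gbinom(x, k):
    out = Fraction(1)
    for i in range(k):
        out = out * (x - i) / (i + 1)
    return out

def fact(n):
    out = 1
    for i in range(2, n + 1):
        out *= i
    return out

def H(n, lam):
    # degenerate harmonic number, eq. (6)
    return sum(gbinom(lam, k) * (-1) ** (k - 1) / lam for k in range(1, n + 1))

def Hr(n, r, lam):
    # degenerate hyperharmonic number, eq. (7)
    if n == 0:
        return Fraction(0)
    if r == 0:
        return gbinom(lam, n) * (-1) ** (n - 1) / lam
    return sum(Hr(k, r - 1, lam) for k in range(1, n + 1))

def D(n, lam):
    # degenerate Daehee number via the closed form (21): (lam-1)_n / (n+1)
    out = Fraction(1)
    for i in range(n):
        out *= (lam - 1 - i)
    return out / (n + 1)

lam = Fraction(1, 6)
for n in range(1, 6):
    assert H(n, lam) == sum((-1) ** l * D(l, lam) / fact(l) for l in range(0, n))  # (26)
    for r in range(1, 4):
        rhs = sum(gbinom(Fraction(r + m - 2), r - 1) * D(n - m, lam)
                  / fact(n - m) * (-1) ** (n - m) for m in range(1, n + 1))
        assert Hr(n, r, lam) == rhs  # second part of Theorem 6
```

Note that for r = 1 the binomial factor equals 1 and the second identity reduces to the first.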
The degenerate derangements are defined by

(28)   \frac{1}{1−t} e_λ(−t) = \sum_{n=0}^{∞} d_{n,λ} \frac{t^n}{n!}.

Thus, we note that

d_{n,λ} = n! \sum_{k=0}^{n} (1)_{k,λ} \frac{(−1)^k}{k!},   (n ≥ 0).

Now, we observe that

(29)   \frac{−\log_λ(1−t)}{(1−t)^r} e_λ(−t) = \sum_{l=1}^{∞} H_{l,λ}^{(r)} t^l \sum_{k=0}^{∞} \frac{(1)_{k,λ}}{k!} (−1)^k t^k
     = \sum_{n=1}^{∞} \left(\sum_{l=1}^{n} H_{l,λ}^{(r)} \frac{(1)_{n−l,λ}}{(n−l)!} (−1)^{n−l}\right) t^n.

On the other hand, by (28), we get

(30)   \frac{−\log_λ(1−t)}{(1−t)^r} e_λ(−t) = \frac{−\log_λ(1−t)}{(1−t)^{r−1}} \cdot \frac{1}{1−t} e_λ(−t)
     = \sum_{l=1}^{∞} H_{l,λ}^{(r−1)} t^l \sum_{k=0}^{∞} d_{k,λ} \frac{t^k}{k!}
     = \sum_{n=1}^{∞} \left(\sum_{l=1}^{n} H_{l,λ}^{(r−1)} \frac{d_{n−l,λ}}{(n−l)!}\right) t^n.

Therefore, by (29) and (30), we obtain the following theorem.

Theorem 7. For n ∈ N, we have

\sum_{l=1}^{n} H_{l,λ}^{(r)} \frac{(1)_{n−l,λ}}{(n−l)!} (−1)^{n−l} = \sum_{l=1}^{n} H_{l,λ}^{(r−1)} \frac{d_{n−l,λ}}{(n−l)!}.
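Theorem 7 ties the degenerate hyperharmonic numbers to the degenerate derangements d_{n,λ}. The sketch below computes d_{n,λ} from the explicit sum following (28) and verifies both sides exactly (helper names `gbinom`, `ffact_lam`, `Hr`, `d`, `fact` are mine):

```python
from fractions import Fraction

def gbinom(x, k):
    out = Fraction(1)
    for i in range(k):
        out = out * (x - i) / (i + 1)
    return out

def fact(n):
    out = 1
    for i in range(2, n + 1):
        out *= i
    return out

def ffact_lam(x, k, lam):
    # degenerate falling factorial (x)_{k,lambda} = x(x-lam)...(x-(k-1)lam)
    out = Fraction(1)
    for i in range(k):
        out *= (x - i * lam)
    return out

def Hr(n, r, lam):
    # degenerate hyperharmonic number, eq. (7)
    if n == 0:
        return Fraction(0)
    if r == 0:
        return gbinom(lam, n) * (-1) ** (n - 1) / lam
    return sum(Hr(k, r - 1, lam) for k in range(1, n + 1))

def d(n, lam):
    # degenerate derangement number d_{n,lambda}, from (28)
    return fact(n) * sum(ffact_lam(Fraction(1), k, lam) * (-1) ** k / fact(k)
                         for k in range(0, n + 1))

lam = Fraction(2, 9)
for r in range(1, 4):
    for n in range(1, 5):
        lhs = sum(Hr(l, r, lam) * ffact_lam(Fraction(1), n - l, lam)
                  / fact(n - l) * (-1) ** (n - l) for l in range(1, n + 1))
        rhs = sum(Hr(l, r - 1, lam) * d(n - l, lam) / fact(n - l)
                  for l in range(1, n + 1))
        assert lhs == rhs  # Theorem 7
```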
We let Y = logλ(1+t). Then, for N ≥ 1, we have
|
888 |
+
� d
|
889 |
+
dt
|
890 |
+
�N
|
891 |
+
Y = (λ −1)(λ −2)···(λ −N +1)(1+t)λ−N
|
892 |
+
= N!
|
893 |
+
λ
|
894 |
+
�λ
|
895 |
+
N
|
896 |
+
�
|
897 |
+
eλ−N
|
898 |
+
λ
|
899 |
+
(logλ(1+t))
|
900 |
+
= N!
|
901 |
+
λ
|
902 |
+
�λ
|
903 |
+
N
|
904 |
+
� ∞
|
905 |
+
∑
|
906 |
+
k=0
|
907 |
+
(λ −N)k,λ
|
908 |
+
1
|
909 |
+
k!(logλ(1+t))k
|
910 |
+
= N!
|
911 |
+
λ
|
912 |
+
�λ
|
913 |
+
N
|
914 |
+
� ∞
|
915 |
+
∑
|
916 |
+
k=0
|
917 |
+
(λ −N)k,λ
|
918 |
+
∞
|
919 |
+
∑
|
920 |
+
n=k
|
921 |
+
S1,λ(n,k)tn
|
922 |
+
n!
|
923 |
+
=
|
924 |
+
∞
|
925 |
+
∑
|
926 |
+
n=0
|
927 |
+
�N!
|
928 |
+
λ
|
929 |
+
�λ
|
930 |
+
N
|
931 |
+
� n
|
932 |
+
∑
|
933 |
+
k=0
|
934 |
+
S1,λ(n,k)(λ −N)k,λ
|
935 |
+
�tn
|
936 |
+
n!,
|
937 |
+
(31)
|
938 |
+
where N is a positive integer.
|
On the other hand, by (10), we get
\[
Y = \log_{\lambda}(1+t) = \frac{\log_{\lambda}(1+t)}{t}\, t
= \sum_{n=1}^{\infty} n D_{n-1,\lambda} \frac{t^{n}}{n!}. \tag{32}
\]
Thus, by (32), we get
\[
\Big(\frac{d}{dt}\Big)^{N} Y
= \sum_{n=N}^{\infty} n D_{n-1,\lambda}\, n(n-1)\cdots(n-N+1) \frac{t^{n-N}}{n!}
= \sum_{n=0}^{\infty} (n+N) D_{n+N-1,\lambda} \frac{t^{n}}{n!}. \tag{33}
\]
Therefore, by (31) and (33), we obtain the following theorem.

Theorem 8. For N ∈ N and n ≥ N−1, we have
\[
D_{n,\lambda} = \frac{N!}{n+1} \cdot \frac{1}{\lambda} \binom{\lambda}{N}
\sum_{k=0}^{n-N+1} S_{1,\lambda}(n-N+1,k)\,(\lambda-N)_{k,\lambda}.
\]
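Theorem 8 can likewise be checked exactly for small parameters. The sketch below assumes an arbitrary rational test value of λ, computes S_{1,λ}(n,k) from the recurrence implied by (9) (multiplying (x)_n by (x−n) and splitting (x)_{k,λ}(x−n) = (x)_{k+1,λ} + (kλ−n)(x)_{k,λ}), and uses the closed form D_{n,λ} = (λ−1)(λ−2)···(λ−n)/(n+1) from (21).

```python
from fractions import Fraction
from math import factorial

lam = Fraction(2, 5)  # arbitrary rational λ (assumption)

def S1(n, k, _cache={}):
    # degenerate Stirling numbers of the first kind, via the recurrence
    # S_{1,λ}(n, k) = S_{1,λ}(n−1, k−1) + (kλ − (n−1)) S_{1,λ}(n−1, k)
    if k < 0 or k > n:
        return Fraction(0)
    if n == 0:
        return Fraction(1)
    if (n, k) not in _cache:
        _cache[(n, k)] = S1(n - 1, k - 1) + (k * lam - (n - 1)) * S1(n - 1, k)
    return _cache[(n, k)]

def gfall(x, n):
    # degenerate falling factorial (x)_{n,λ}
    p = Fraction(1)
    for i in range(n):
        p *= x - i * lam
    return p

def binom_lam(N):
    # generalized binomial coefficient C(λ, N)
    p = Fraction(1)
    for i in range(N):
        p *= (lam - i) / (i + 1)
    return p

def D(n):
    # degenerate Daehee numbers D_{n,λ} = (λ−1)(λ−2)···(λ−n)/(n+1), cf. (21)
    p = Fraction(1)
    for i in range(1, n + 1):
        p *= lam - i
    return p / (n + 1)

def thm8_rhs(n, N):
    m = n - N + 1
    s = sum(S1(m, k) * gfall(lam - N, k) for k in range(m + 1))
    return Fraction(factorial(N), n + 1) / lam * binom_lam(N) * s

# Theorem 8 holds for all tested N and n ≥ N−1
assert all(D(n) == thm8_rhs(n, N) for N in range(1, 4) for n in range(N - 1, 7))
```

The check also reflects why the theorem works: by (9), the inner sum collapses to the ordinary falling factorial (λ−N)_{n−N+1}, which recombines with N!/λ·C(λ,N) into (λ−1)···(λ−n).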
Next, we let F = −logλ(1−t). Then, for N ≥ 1, we have
\[
\begin{aligned}
\Big(\frac{d}{dt}\Big)^{N} F
&= (-1)^{N+1}(\lambda-1)(\lambda-2)\cdots(\lambda-N+1)(1-t)^{\lambda-N}
 = (-1)^{N+1} \frac{N!}{\lambda} \binom{\lambda}{N}\, e^{\lambda-N}_{\lambda}\!\big(\log_{\lambda}(1-t)\big) \\
&= (-1)^{N+1} N! \frac{1}{\lambda} \binom{\lambda}{N} \sum_{k=0}^{\infty} (\lambda-N)_{k,\lambda}\, \frac{1}{k!}\big(\log_{\lambda}(1-t)\big)^{k} \\
&= (-1)^{N+1} N! \frac{1}{\lambda} \binom{\lambda}{N} \sum_{k=0}^{\infty} (\lambda-N)_{k,\lambda} \sum_{n=k}^{\infty} S_{1,\lambda}(n,k)(-1)^{n} \frac{t^{n}}{n!} \\
&= \sum_{n=0}^{\infty} \bigg( N! \frac{1}{\lambda} \binom{\lambda}{N} \sum_{k=0}^{n} (-1)^{n-N-1}(\lambda-N)_{k,\lambda} S_{1,\lambda}(n,k) \bigg) \frac{t^{n}}{n!}.
\end{aligned} \tag{34}
\]
On the other hand, by (11), we get
\[
F = -\log_{\lambda}(1-t) = \frac{-\log_{\lambda}(1-t)}{1-t}\,(1-t)
= \sum_{n=1}^{\infty} (H_{n,\lambda} - H_{n-1,\lambda})\, t^{n}. \tag{35}
\]
Thus, by (35) and for N ≥ 1, we have
\[
\begin{aligned}
\Big(\frac{d}{dt}\Big)^{N} F
&= \sum_{n=N}^{\infty} n(n-1)\cdots(n-N+1)(H_{n,\lambda}-H_{n-1,\lambda})\, t^{n-N} \\
&= \sum_{n=0}^{\infty} (n+N)(n+N-1)\cdots(n+1)(H_{n+N,\lambda}-H_{n+N-1,\lambda})\, t^{n} \\
&= \sum_{n=0}^{\infty} N! \binom{n+N}{N} (H_{n+N,\lambda}-H_{n+N-1,\lambda})\, t^{n}.
\end{aligned} \tag{36}
\]
Therefore, by (34) and (36), we obtain the following theorem.

Theorem 9. For N ∈ N and n ≥ 0, we have
\[
\frac{1}{n!} \cdot \frac{1}{\lambda} \binom{\lambda}{N}
\sum_{k=0}^{n} (-1)^{n-N-1}(\lambda-N)_{k,\lambda} S_{1,\lambda}(n,k)
= \binom{n+N}{N} (H_{n+N,\lambda} - H_{n+N-1,\lambda}).
\]
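Theorem 9 connects the degenerate Stirling numbers of the first kind with first differences of the degenerate harmonic numbers, and it too can be checked with exact rationals. In this sketch the value of λ is an arbitrary assumption; H_{m,λ} is computed directly from (6), and the sign (−1)^{n−N−1} is reduced modulo 2 so that negative exponents never arise.

```python
from fractions import Fraction
from math import comb, factorial

lam = Fraction(1, 4)  # arbitrary rational λ (assumption)

def binom_lam(m):
    # generalized binomial coefficient C(λ, m)
    p = Fraction(1)
    for i in range(m):
        p *= (lam - i) / (i + 1)
    return p

def Hdeg(m):
    # degenerate harmonic numbers H_{m,λ} from (6)
    return sum(binom_lam(k) / lam * (-1) ** (k - 1) for k in range(1, m + 1))

def gfall(x, n):
    # degenerate falling factorial (x)_{n,λ}
    p = Fraction(1)
    for i in range(n):
        p *= x - i * lam
    return p

def S1(n, k, _cache={}):
    # degenerate Stirling numbers of the first kind (recurrence implied by (9))
    if k < 0 or k > n:
        return Fraction(0)
    if n == 0:
        return Fraction(1)
    if (n, k) not in _cache:
        _cache[(n, k)] = S1(n - 1, k - 1) + (k * lam - (n - 1)) * S1(n - 1, k)
    return _cache[(n, k)]

def lhs(n, N):
    sign = (-1) ** ((n + N + 1) % 2)  # (−1)^{n−N−1} = (−1)^{n+N+1}
    s = sum(sign * gfall(lam - N, k) * S1(n, k) for k in range(n + 1))
    return s / factorial(n) / lam * binom_lam(N)

def rhs(n, N):
    return comb(n + N, N) * (Hdeg(n + N) - Hdeg(n + N - 1))

# Theorem 9 holds for all tested N ≥ 1 and n ≥ 0
assert all(lhs(n, N) == rhs(n, N) for N in range(1, 4) for n in range(0, 6))
```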
By Theorem 9 and (6), we get
\[
\begin{aligned}
\frac{1}{n!} \sum_{k=0}^{n} (-1)^{n-N-1}(\lambda-N)_{k,\lambda} S_{1,\lambda}(n,k)
&= \binom{n+N}{N} \bigg(\frac{1}{\lambda}\binom{\lambda}{N}\bigg)^{-1} (H_{n+N,\lambda} - H_{n+N-1,\lambda}) \\
&= \binom{n+N}{N} \bigg(\frac{1}{\lambda}\binom{\lambda}{N}\bigg)^{-1} \frac{1}{\lambda}\binom{\lambda}{n+N} (-1)^{n+N-1}
= (-1)^{n+N-1}\, \frac{\binom{\lambda}{n+N}}{\binom{\lambda}{N}} \binom{n+N}{N}.
\end{aligned} \tag{37}
\]
Therefore, by (37), we obtain the following corollary.

Corollary 10. For n ≥ 0 and N ∈ N, we have
\[
\frac{1}{n!} \sum_{k=0}^{n} (\lambda-N)_{k,\lambda} S_{1,\lambda}(n,k)
= \frac{\binom{\lambda}{n+N}}{\binom{\lambda}{N}} \binom{n+N}{N}.
\]
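Corollary 10 is an attractive closed form, and a direct exact-arithmetic check is short. As before, this is only a sketch: the rational value of λ is an arbitrary assumption, chosen so that C(λ, N) is nonzero for the tested N.

```python
from fractions import Fraction
from math import comb, factorial

lam = Fraction(3, 7)  # arbitrary rational λ (assumption); C(λ, N) ≠ 0 for tested N

def binom_lam(m):
    # generalized binomial coefficient C(λ, m) = λ(λ−1)···(λ−m+1)/m!
    p = Fraction(1)
    for i in range(m):
        p *= (lam - i) / (i + 1)
    return p

def gfall(x, n):
    # degenerate falling factorial (x)_{n,λ}
    p = Fraction(1)
    for i in range(n):
        p *= x - i * lam
    return p

def S1(n, k, _cache={}):
    # degenerate Stirling numbers of the first kind (recurrence implied by (9))
    if k < 0 or k > n:
        return Fraction(0)
    if n == 0:
        return Fraction(1)
    if (n, k) not in _cache:
        _cache[(n, k)] = S1(n - 1, k - 1) + (k * lam - (n - 1)) * S1(n - 1, k)
    return _cache[(n, k)]

def cor10_lhs(n, N):
    return sum(gfall(lam - N, k) * S1(n, k) for k in range(n + 1)) / factorial(n)

def cor10_rhs(n, N):
    return binom_lam(n + N) / binom_lam(N) * comb(n + N, N)

# Corollary 10 holds for all tested N ≥ 1 and n ≥ 0
assert all(cor10_lhs(n, N) == cor10_rhs(n, N) for N in range(1, 4) for n in range(0, 6))
```

The left-hand side collapses, via (9), to C(λ−N, n), so the corollary is equivalent to the binomial identity C(λ, n+N)·C(n+N, N) = C(λ, N)·C(λ−N, n).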
Remark 11. From Corollary 10 and letting λ → 0, we obtain
\[
(-1)^{n}\, \frac{N}{n+N} \binom{n+N}{N}
= \frac{1}{n!} \sum_{k=0}^{n} (-1)^{k} N^{k} S_{1}(n,k).
\]
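The classical identity of Remark 11 involves only the ordinary (signed) Stirling numbers of the first kind, so it can be verified with integer arithmetic. The sketch below computes S_1(n,k) from the standard recurrence S_1(n,k) = S_1(n−1,k−1) − (n−1)S_1(n−1,k), which is the λ → 0 limit of the degenerate recurrence used above.

```python
from fractions import Fraction
from math import comb, factorial

def S1(n, k, _cache={}):
    # ordinary (signed) Stirling numbers of the first kind:
    # S_1(n, k) = S_1(n−1, k−1) − (n−1) S_1(n−1, k)
    if k < 0 or k > n:
        return 0
    if n == 0:
        return 1
    if (n, k) not in _cache:
        _cache[(n, k)] = S1(n - 1, k - 1) - (n - 1) * S1(n - 1, k)
    return _cache[(n, k)]

def lhs(n, N):
    return (-1) ** n * Fraction(N, n + N) * comb(n + N, N)

def rhs(n, N):
    return Fraction(sum((-1) ** k * N ** k * S1(n, k) for k in range(n + 1)),
                    factorial(n))

# Remark 11 holds for all tested N ≥ 1 and n ≥ 0
assert all(lhs(n, N) == rhs(n, N) for N in range(1, 5) for n in range(0, 7))
```

Indeed, the right-hand side equals (−N)(−N−1)···(−N−n+1)/n! = (−1)^n·C(n+N−1, n), which matches the left-hand side after simplifying N/(n+N)·C(n+N, N).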
Remark 12. Recently, various studies on the Daehee numbers and related topics have been conducted by several researchers. Interested readers may refer to [1, 12, 13, 14, 15, 17, 18].
3. CONCLUSION

Many different tools have been used in the explorations for degenerate versions of some special numbers and polynomials, including generating functions, combinatorial methods, umbral calculus, p-adic analysis, differential equations, probability theory, operator theory, special functions and analytic number theory (see [5, 6, 7, 8, 9, 10, 11, 12] and the references therein). In this paper, we used elementary methods of generating functions in order to study the degenerate harmonic and degenerate hyperharmonic numbers. Some properties, recurrence relations and identities relating to those numbers were derived in connection with the degenerate Stirling numbers of the first kind, the degenerate Daehee numbers and the degenerate derangements.

We would like to continue to investigate various degenerate versions of certain special numbers and polynomials, especially their applications to physics, science and engineering.
Acknowledgments
The authors thank Jangjeon Institute for Mathematical Sciences for the support of this research.

Availability of data and material
Not applicable.

Funding
This work was supported by the Basic Science Research Program, the National Research Foundation of Korea (NRF-2021R1F1A1050151).

Ethics approval and consent to participate
All authors declare that there is no ethical problem in the production of this paper.

Competing interests
All authors declare no conflict of interest.

Consent for publication
All authors want to publish this paper in this journal.

Authors' Contributions
All authors read and approved the final manuscript.
REFERENCES

[1] S. Araci, U. Duran and M. Acikgoz, On weighted q-Daehee polynomials with their applications. Indag. Math. (N.S.) 30 (2019), no. 2, 365-374.
[2] L. Carlitz, Degenerate Stirling, Bernoulli and Eulerian numbers. Utilitas Math. 15 (1979), 51-88.
[3] L. Comtet, Advanced combinatorics. The art of finite and infinite expansions. Revised and enlarged edition. D. Reidel Publishing Co., Dordrecht, 1974. xi+343 pp. ISBN: 90-277-0441-4.
[4] J. H. Conway and R. K. Guy, The book of numbers. Copernicus, New York, 1996. x+310 pp. ISBN: 0-387-97993-X.
[5] D. S. Kim and T. Kim, A note on a new type of degenerate Bernoulli numbers. Russ. J. Math. Phys. 27 (2020), no. 2, 227-235.
[6] D. S. Kim and T. Kim, Degenerate Sheffer sequence and λ-Sheffer sequence. J. Math. Anal. Appl. 493 (2021), no. 1, 124521.
[7] T. Kim and D. S. Kim, Some identities on degenerate hyperharmonic numbers. Georgian Math. J., 2022 (2022). https://doi.org/10.1515/gmj-2022-2203
[8] T. Kim and D. S. Kim, On some degenerate differential and degenerate difference operators. Russ. J. Math. Phys. 29 (2022), no. 1, 37-46.
[9] T. Kim and D. S. Kim, Degenerate Laplace transform and degenerate gamma function. Russ. J. Math. Phys. 24 (2017), no. 2, 241-248.
[10] T. Kim and D. S. Kim, Note on the degenerate gamma function. Russ. J. Math. Phys. 27 (2020), no. 3, 352-358.
[11] T. Kim, D. S. Kim and H. K. Kim, λ-q-Sheffer sequence and its applications. Demonstr. Math. 55 (2022), 843-865.
[12] T. Kim, D. S. Kim, H. Lee and J. Kwon, Representations by degenerate Daehee polynomials. Open Math. 20 (2022), no. 1, 179-194.
[13] J. Kwon, W. J. Kim and S.-H. Rim, On the some identities of the type 2 Daehee and Changhee polynomials arising from p-adic integrals on Zp. Proc. Jangjeon Math. Soc. 22 (2019), no. 3, 487-497.
[14] J. G. Lee, J. Kwon, G.-W. Jang and L.-C. Jang, Some identities of λ-Daehee polynomials. J. Nonlinear Sci. Appl. 10 (2017), no. 8, 4137-4142.
[15] J.-W. Park, B. M. Kim and J. Kwon, On a modified degenerate Daehee polynomials and numbers. J. Nonlinear Sci. Appl. 10 (2017), no. 3, 1108-1115.
[16] S. Roman, The umbral calculus. Pure and Applied Mathematics, 111. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York, 1984. x+193 pp. ISBN: 0-12-594380-6.
[17] S. K. Sharma, W. A. Khan, S. Araci and S. S. Ahmed, New type of degenerate Daehee polynomials of the second kind. Adv. Difference Equ. 2020 (2020), Paper No. 428, 14 pp.
[18] S. J. Yun and J.-W. Park, On fully degenerate Daehee numbers and polynomials of the second kind. J. Math. 2020 (2020), Art. ID 7893498, 9 pp.
DEPARTMENT OF MATHEMATICS EDUCATION, DAEGU CATHOLIC UNIVERSITY, GYEONGSAN 38430, REPUBLIC OF KOREA
Email address: [email protected]

DEPARTMENT OF MATHEMATICS, SOGANG UNIVERSITY, SEOUL 121-742, REPUBLIC OF KOREA
Email address: [email protected]

DEPARTMENT OF MATHEMATICS, KWANGWOON UNIVERSITY, SEOUL 139-701, REPUBLIC OF KOREA
Email address: [email protected]
1231 |
+
|
9NE1T4oBgHgl3EQfCQIp/content/tmp_files/load_file.txt
ADDED
@@ -0,0 +1,381 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf,len=380
|
2 |
+
page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
3 |
+
page_content='02861v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
4 |
+
page_content='NT] 7 Jan 2023 IDENTITIES INVOLVING DEGENERATE HARMONIC AND DEGENERATE HYPERHARMONIC NUMBERS HYE KYUNG KIM1, DAE SAN KIM2, AND TAEKYUN KIM3,∗ ABSTRACT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
5 |
+
page_content=' Harmonic numbers have been studied since antiquity, while hyperharmonic numbers were intoduced by Conway and Guy in 1996.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
6 |
+
page_content=' The degenerate harmonic numbers and degenerate hyperharmonic numbers are their respective degenerate versions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
7 |
+
page_content=' The aim of this paper is to further investigate some properties, recurrence relations and identities involving the degenerate harmonic and degenerate hyperharmonic numbers in connection with degenerate Stirling numbers of the first kind, degenerate Daehee numbers and degenerate derangements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
8 |
+
page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
9 |
+
page_content=' INTRODUCTION In recent years, various degenerate versions of many special numbers and polynomials have beem studied and yielded a lot of fascinating and fruitful results (see [5, 6, 7, 8, 9, 10, 11, 12] and the references therein), which began with Carlitz’s work on the degenerate Bernoulli and degen- erate Euler numbers (see [2]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
10 |
+
page_content=' It is worthwhile to mention that these explorations for degenerate versions are not limited to polynomials and numbers but also extended to transcendental functions, like gamma functions (see [9, 10]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
11 |
+
page_content=' It is also remarkable that the λ-umbral calculus and λ-q-umbral calculus were introduced as degenerate versions of the umbral calculus and the q-umbral calculus, respectively (see [6, 11]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
12 |
+
page_content=' As it turns out, the λ-umbral calculus and λ-q-umbral calculus are more convenient than the umbral calculus and the q-umbral calculus when dealing with degenerate Shef- fer polynomials and degenerate q-Sheffer polynomials.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
13 |
+
page_content=' The aim of this paper is to further investigate some properties, recurrence relations and identities involving the degenerate harmonic numbers (see (6)) and the degenerate hyperharmonic numbers (see (7), (8)) in connection with degenerate Stirling numbers of the first kind, degenerate Daehee numbers and degenerate derangements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
14 |
+
page_content=' The degenerate harmonic numbers and degenerate hyper- harmonic numbers are respectively degenerate versions of the harmonic numbers and the hyperhar- monic numbers, of which the latter are introduced in [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
15 |
+
page_content=' The outline of this paper is as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
16 |
+
page_content=' In Section 1, we recall the degenerate exponentials and the degenerate logarithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
17 |
+
page_content=' We remind the reader of the harmonic numbers, and of the hyperhar- monic numbers together with their explicit expression due to Conway and Guy (see [4]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
18 |
+
page_content=' Then we recall their degenerate versions, namely the degenerate harmonic numbers, and the degenerate hyperharmonic numbers together with their explicit expression (see [7, 8]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
19 |
+
page_content=' We also mention the recently introduced degenerate Stirling numbers of the first kind and the degenerate Daehee num- bers of order r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
20 |
+
page_content=' Section 2 is the main result of this paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
21 |
+
page_content=' We obtain an expression of the degenerate hyperharmonic numbers of order r in terms of the same numbers of lower orders in Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
22 |
+
page_content=' We express the Daehee numbers in terms of the degenerate harmonic numbers and of the degenerate hyperharmonic numbers, respectively in Theorem 2 and Theorem 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
23 |
+
page_content=' In Theorem 4, the degenerate harmonic numbers are represented in terms of the degenerate hyperharmonic numbers of order r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
24 |
+
page_content=' 2010 Mathematics Subject Classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
25 |
+
page_content=' 05A19;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
26 |
+
page_content=' 11B73;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
27 |
+
page_content=' 11B83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
28 |
+
page_content=' Key words and phrases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
29 |
+
page_content=' degenerate harmonic number;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
30 |
+
page_content=' degenerate hyperharmonic number;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
31 |
+
page_content=' degenerate Daehee num- ber;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
32 |
+
page_content=' degenerate logarithm;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
33 |
+
page_content=' degenerate Stirling number of the first kind;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
34 |
+
page_content=' degenerate derangement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
35 |
+
page_content=' is corresponding author.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
36 |
+
page_content=' 1 2 Identities involving degenerate harmonic and degenerate hyperharmonic numbers In Theorem 5, the degenerate Daehee numbers are represented in terms of the degenerate Daehee numbers of order r −1 and of the degenerate hyperharmonic numbers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
37 |
+
page_content=' We derive a simple relation between the degenerate hyperharmonic numbers and the degenerate Daehee numbers in Theorem 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
38 |
+
page_content=' We deduce an identity involving the degenerate hyperharmonic numbers and the degenerate de- rangements in Theorem 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
39 |
+
page_content=' The degenerate Daehee numbers are expressed in terms of the degenerate Stirling numbers of the first kind in Theorem 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
40 |
+
page_content=' Finally, we get an identity involving the degenerate Stirling numbers of the first kind and the degenerate harmonic numbers in Theorem 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
41 |
+
page_content=' For any nonzero λ ∈ R, the degenerate exponential functions are defined by ex λ(t) = (1+λt) x λ = ∞ ∑ n=0 (x)n,λ tn n!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
42 |
+
page_content=', eλ(t) = e1 λ(t), (see [2, 8]), (1) where (x)0,λ = 1, (x)n,λ = x(x−λ)···(x−(n−1)λ), (n ≥ 1), (see [8]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
43 |
+
page_content=' Let logλ t be the compositional inverse of eλ(t) with eλ(logλ t) = logλ eλ(t) = t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
44 |
+
page_content=' It is called the degenerate logarithm and is given by logλ(1+t) = ∞ ∑ k=1 λ k−1(1)k, 1 λ k!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
45 |
+
page_content=' tk = 1 λ ((1+t)λ −1), (see [5]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
46 |
+
page_content=' (2) The harmonic numbers are given by H0 = 0, Hn = 1+ 1 2 +···+ 1 n, (n ∈ N), (see [3, 4, 16]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
47 |
+
page_content=' (3) In 1996, Conway and Guy introduced the hyperharmonic numbers H(r) n of order r, (n,r ≥ 0), which are given by H(r) 0 = 0, (r ≥ 0), H(0) n = 1 n, (n ≥ 1), H(r) n = n ∑ k=1 H(r−1) k , (n,r ≥ 1), (see [4]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
48 |
+
page_content=' (4) Thus, by (4), we get H(r) n = �n+r −1 n � (Hn+r−1 −Hr−1), (r ≥ 1), (see [4]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
49 |
+
page_content=' (5) Recently, the degenerate harmonic numbers are defined by H0,λ = 0, Hn,λ = n ∑ k=1 1 λ �λ k � (−1)k−1, (n ≥ 1), (see [8]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
50 |
+
page_content=' (6) Note that limλ→0 Hn,λ = Hn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
51 |
+
page_content=' The degenerate hyperharmonic numbers H(r) n,λ of order r, (n,r ≥ 0), are defined by H(r) 0,λ = 0, (r ≥ 0), H(0) n,λ = 1 λ �λ n � (−1)n−1, (n ≥ 1), H(r) n,λ = n ∑ k=1 H(r−1) k,λ , (n,r ≥ 1), (see [7]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
52 |
+
page_content=' (7) We see from (6) and (7) that H(1) n,λ = Hn,λ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
53 |
+
page_content=' From (7), we note that H(r) n,λ = (−1)r−1 �λ−1 r−1 � �n+r −1 n � (Hn+r−1,λ −Hr−1,λ), (see [7]), (8) where n, r are positive numbers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
54 |
+
page_content=' Here we observe from (5) and (8) that limλ→0 H(r) n,λ = H(r) n .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
55 |
+
page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
56 |
+
page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
57 |
+
page_content=' Kim, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
58 |
+
page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
59 |
+
page_content=' Kim, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
60 |
+
page_content=' Kim 3 In [5], the degenerate Stirling numbers of the first kind are defined by (x)n = n ∑ k=0 S1,λ(n,k)(x)k,λ , (n ≥ 0), (see [5, 8]), (9) where (x)0 = 1, (x)n = x(x−1)···(x−n+1), (n ≥ 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
61 |
+
page_content=' For r ∈ N, the degenerate Daehee numbers of order r are defined by �logλ(1+t) t �r = ∞ ∑ n=0 D(r) n,λ tn n!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
62 |
+
page_content=', (see [11]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
63 |
+
page_content=' (10) In particular, for r = 1, Dn,λ = D(1) n,λ are called the degenerate Daehee numbers 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
64 |
+
page_content=' IDENTITIES INVOLVING DEGENERATE HARMONIC AND DEGENERATE HYPERHARMONIC NUMBERS From (6) and (7), we note that −logλ(1−t) (1−t) = ∞ ∑ n=1 Hn,λtn, (see [7]), (11) and −logλ(1−t) (1−t)r = ∞ ∑ n=1 H(r) n,λtn, (see [7]), (12) where r is a nonnegative integer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
65 |
+
page_content=' By (12), we get ∞ ∑ n=1 H(r−1) n,λ tn = −logλ(1−t) (1−t)r (1−t) = ∞ ∑ n=1 H(r) n,λtn(1−t) = ∞ ∑ n=1 H(r) n,λtn − ∞ ∑ n=1 H(r) n,λtn+1 = ∞ ∑ n=1 (H(r) n,λ −H(r) n−1,λ)tn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
66 |
+
page_content=' (13) By comparing the coefficients on both sides of (13), we get (14) H(r) n,λ = H(r) n−1,λ +H(r−1) n,λ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
67 |
+
page_content=' For 1 ≤ s ≤ r, by (12), we get ∞ ∑ n=1 H(r) n,λtn = −logλ(1−t) (1−t)r = −logλ(1−t) (1−t)r−s 1 (1−t)s = ∞ ∑ l=1 H(r−s) l,λ tl ∞ ∑ k=0 �k +s−1 k � tk = ∞ ∑ n=1 n ∑ l=1 H(r−s) l,λ �n−l +s−1 s−1 � tn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
68 |
+
page_content=' (15) By comparing the coefficients on both sides of (15), we get H(r) n,λ = n ∑ l=1 H(r−s) l,λ �n−l +s−1 s−1 � , (16) where r, s ∈ Z with 1 ≤ s ≤ r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
69 |
+
page_content=' In particular, for r = s, we have H(r) n,λ = n ∑ l=1 H(0) l,λ �n−l +r −1 r −1 � = n ∑ l=1 1 λ �λ l � (−1)l−1 �n−l +r −1 r −1 � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
70 |
+
page_content=' (17) Therefore, by (16) and (17), we obtain the following theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
71 |
+
page_content=' 4 Identities involving degenerate harmonic and degenerate hyperharmonic numbers Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
72 |
+
page_content=' For r, s ∈ Z with 1 ≤ s ≤ r, we have H(r) n,λ = n ∑ l=1 H(r−s) l,λ �n−l +s−1 s−1 � , and H(r) n,λ = n ∑ l=1 1 λ �λ l � (−1)l−1 �n−l +r −1 r −1 � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
73 |
+
page_content=' From (11) and (14), we note that ∞ ∑ n=0 Dn,λ tn n!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
74 |
+
page_content=' = logλ(1+t) t = logλ(1+t) 1+t 1+t t = � ∞ ∑ k=1 (−1)k+1Hk,λtk �� 1+ 1 t � = ∞ ∑ n=1 (−1)n+1Hn,λtn + ∞ ∑ n=0 (−1)nHn+1,λtn = 1+ ∞ ∑ n=1 (−1)n(Hn+1,λ −Hn,λ)tn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
75 |
+
page_content=' (18) Therefore, by comparing the coefficients on both sides of (18), we have the following theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
76 |
+
Theorem 2. For $n\ge 0$, we have $D_{0,\lambda}=1$ and
\[
D_{n,\lambda}=(-1)^{n}n!\,\big(H_{n+1,\lambda}-H_{n,\lambda}\big),\qquad (n\ge 1).
\]
From (12), we note that
\[
\sum_{n=0}^{\infty}D_{n,\lambda}\frac{t^{n}}{n!}
=\frac{\log_{\lambda}(1+t)}{t}
=\frac{\log_{\lambda}(1+t)}{t(1+t)^{r}}\,(1+t)^{r}
=\sum_{k=0}^{\infty}H^{(r)}_{k+1,\lambda}(-1)^{k}t^{k}\sum_{l=0}^{\infty}\binom{r}{l}t^{l}
=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}H^{(r)}_{k+1,\lambda}\binom{r}{n-k}(-1)^{k}\bigg)t^{n}.\tag{19}
\]
Therefore, by (19), we obtain the following theorem.

Theorem 3. For $n\ge 0$, we have
\[
D_{n,\lambda}=n!\sum_{k=0}^{n}H^{(r)}_{k+1,\lambda}\binom{r}{n-k}(-1)^{k}.
\]
Now, we observe from (2) that
\[
\sum_{n=0}^{\infty}D_{n,\lambda}\frac{t^{n}}{n!}
=\frac{\log_{\lambda}(1+t)}{t}
=\sum_{n=1}^{\infty}\binom{\lambda}{n}\frac{1}{\lambda}t^{n-1}
=\sum_{n=0}^{\infty}\binom{\lambda}{n+1}\frac{1}{\lambda}t^{n}.\tag{20}
\]
Thus, by (20), we get
\[
D_{n,\lambda}=n!\,\frac{1}{\lambda}\binom{\lambda}{n+1}=\frac{(\lambda-1)_{n}}{n+1},\qquad (n\ge 0).\tag{21}
\]
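Theorem 2 and identity (21) are easy to sanity-check numerically. The sketch below is an illustration, not part of the paper: the helper names `binom_lam` and `falling` and the sample value $\lambda=1/3$ are our own choices, and $D_{n,\lambda}$ and $H_{n,\lambda}$ are read off directly from their generating-function coefficients with exact rational arithmetic.

```python
from fractions import Fraction
from math import factorial

lam = Fraction(1, 3)   # sample value of λ (any rational works)
N = 12                 # how many terms to check

def binom_lam(x, k):
    # generalized binomial coefficient C(x, k) = x(x-1)...(x-k+1)/k!
    out = Fraction(1)
    for j in range(k):
        out *= x - j
    return out / factorial(k)

def falling(x, n):
    # ordinary falling factorial (x)_n = x(x-1)...(x-n+1)
    out = Fraction(1)
    for j in range(n):
        out *= x - j
    return out

# Degenerate Daehee numbers: [t^k] log_λ(1+t) = C(λ,k)/λ,
# so by (20), D_{n,λ} = n! C(λ, n+1)/λ.
D = [factorial(n) * binom_lam(lam, n + 1) / lam for n in range(N)]

# Degenerate harmonic numbers: -log_λ(1-t)/(1-t) = Σ H_{n,λ} t^n gives
# the partial sums H_{n,λ} = Σ_{k=1}^n (-1)^{k+1} C(λ,k)/λ.
a = [(-1) ** (k + 1) * binom_lam(lam, k) / lam for k in range(N + 1)]  # a[0] unused
H = [sum(a[1:n + 1], Fraction(0)) for n in range(N + 1)]

# Theorem 2: D_{n,λ} = (-1)^n n! (H_{n+1,λ} - H_{n,λ}) for n ≥ 1
thm2_ok = all(D[n] == (-1) ** n * factorial(n) * (H[n + 1] - H[n])
              for n in range(1, N - 1))
# Identity (21): D_{n,λ} = (λ-1)_n / (n+1) for n ≥ 0
eq21_ok = all(D[n] == falling(lam - 1, n) / (n + 1) for n in range(N))
print(thm2_ok, eq21_ok)  # -> True True
```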
H. K. Kim, D. S. Kim, and T. Kim

From (11), we have
\[
\sum_{n=1}^{\infty}H_{n,\lambda}t^{n}
=\frac{-\log_{\lambda}(1-t)}{1-t}
=\frac{-\log_{\lambda}(1-t)}{t}\,\frac{t}{1-t}
=\sum_{l=0}^{\infty}D_{l,\lambda}(-1)^{l}\frac{t^{l}}{l!}\sum_{m=1}^{\infty}t^{m}
=\sum_{n=1}^{\infty}\bigg(\sum_{l=0}^{n-1}D_{l,\lambda}\frac{(-1)^{l}}{l!}\bigg)t^{n}.\tag{22}
\]
Thus, by Theorem 3 and (22), we get
\[
H_{n,\lambda}=\sum_{l=0}^{n-1}D_{l,\lambda}\frac{(-1)^{l}}{l!}
=\sum_{l=0}^{n-1}\frac{(-1)^{l}}{l!}\,l!\sum_{k=0}^{l}H^{(r)}_{k+1,\lambda}\binom{r}{l-k}(-1)^{k}
=\sum_{l=0}^{n-1}\sum_{k=0}^{l}(-1)^{k+l}H^{(r)}_{k+1,\lambda}\binom{r}{l-k},\qquad (n\ge 1).\tag{23}
\]
Therefore, by (23), we obtain the following theorem.

Theorem 4. For $n\ge 1$, we have
\[
H_{n,\lambda}=\sum_{l=0}^{n-1}\sum_{k=0}^{l}(-1)^{k+l}\binom{r}{l-k}H^{(r)}_{k+1,\lambda}.
\]
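Theorems 3 and 4 can be checked the same way. In the sketch below (again an illustration, not from the paper), the hyperharmonic numbers $H^{(r)}_{n,\lambda}$ are obtained by convolving $-\log_{\lambda}(1-t)$ with $1/(1-t)^{r}$; $\lambda=2/5$ and $r=3$ are arbitrary sample choices, and `math.comb(r, m)` conveniently returns $0$ for $m>r$.

```python
from fractions import Fraction
from math import factorial, comb

lam = Fraction(2, 5)   # sample λ
r = 3                  # sample hyperharmonic order
N = 10

def binom_lam(x, k):
    # generalized binomial coefficient C(x, k)
    out = Fraction(1)
    for j in range(k):
        out *= x - j
    return out / factorial(k)

# [t^k](-log_λ(1-t)) = (-1)^{k+1} C(λ,k)/λ  (a[0] is unused)
a = [(-1) ** (k + 1) * binom_lam(lam, k) / lam for k in range(N + 2)]

def hyper(n, rr):
    # H^{(rr)}_{n,λ} = [t^n] of -log_λ(1-t)/(1-t)^rr
    return sum(a[j] * comb(rr - 1 + n - j, rr - 1) for j in range(1, n + 1))

H = [hyper(n, 1) for n in range(N + 1)]   # degenerate harmonic numbers
Hr = [hyper(n, r) for n in range(N + 2)]  # degenerate hyperharmonic numbers
D = [factorial(n) * binom_lam(lam, n + 1) / lam for n in range(N)]  # Daehee, by (20)

# Theorem 3: D_{n,λ} = n! Σ_k H^{(r)}_{k+1,λ} C(r, n-k) (-1)^k
thm3_ok = all(D[n] == factorial(n) * sum((-1) ** k * Hr[k + 1] * comb(r, n - k)
                                         for k in range(n + 1))
              for n in range(N))
# Theorem 4: H_{n,λ} = Σ_{l<n} Σ_{k≤l} (-1)^{k+l} C(r, l-k) H^{(r)}_{k+1,λ}
thm4_ok = all(H[n] == sum((-1) ** (k + l) * comb(r, l - k) * Hr[k + 1]
                          for l in range(n) for k in range(l + 1))
              for n in range(1, N))
print(thm3_ok, thm4_ok)
```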
By (10), we get
\[
\sum_{n=0}^{\infty}D^{(r)}_{n,\lambda}\frac{t^{n}}{n!}
=\Big(\frac{\log_{\lambda}(1+t)}{t}\Big)^{r}
=\frac{\log_{\lambda}(1+t)}{t(1+t)^{k}}\Big(\frac{\log_{\lambda}(1+t)}{t}\Big)^{r-1}(1+t)^{k}
=\sum_{i=1}^{\infty}(-1)^{i+1}H^{(k)}_{i,\lambda}t^{i-1}\sum_{j=0}^{\infty}D^{(r-1)}_{j,\lambda}\frac{t^{j}}{j!}\sum_{l=0}^{\infty}\binom{k}{l}t^{l}
=\sum_{i=0}^{\infty}(-1)^{i}H^{(k)}_{i+1,\lambda}t^{i}\sum_{m=0}^{\infty}\bigg(\sum_{j=0}^{m}\binom{m}{j}D^{(r-1)}_{j,\lambda}(k)_{m-j}\bigg)\frac{t^{m}}{m!}
=\sum_{n=0}^{\infty}\bigg(\sum_{i=0}^{n}\sum_{j=0}^{n-i}(-1)^{i}\binom{n-i}{j}\frac{(k)_{n-i-j}}{(n-i)!}D^{(r-1)}_{j,\lambda}H^{(k)}_{i+1,\lambda}\bigg)t^{n}.\tag{24}
\]
Therefore, by comparing the coefficients on both sides of (24), we obtain the following theorem.

Theorem 5. For $n,k\ge 0$ and $r\ge 1$, we have
\[
D^{(r)}_{n,\lambda}=n!\sum_{i=0}^{n}\sum_{j=0}^{n-i}(-1)^{i}\binom{n-i}{j}\frac{(k)_{n-i-j}}{(n-i)!}D^{(r-1)}_{j,\lambda}H^{(k)}_{i+1,\lambda}.
\]
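Theorem 5 can also be tested numerically. Below is a sketch, not part of the paper: the higher-order numbers $D^{(r)}_{n,\lambda}$ are computed as coefficients of powers of the series $\log_{\lambda}(1+t)/t$ under truncated power-series multiplication, and $\lambda=1/5$, $r=2$, $k=3$ are arbitrary sample choices.

```python
from fractions import Fraction
from math import factorial, comb

lam = Fraction(1, 5)
r, kk = 2, 3   # sample orders r and k from Theorem 5
M = 8          # series truncation order

def binom_lam(x, k):
    # generalized binomial coefficient C(x, k)
    out = Fraction(1)
    for j in range(k):
        out *= x - j
    return out / factorial(k)

def mul(p, q):
    # product of two power series truncated at degree M
    out = [Fraction(0)] * (M + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= M:
                out[i + j] += pi * qj
    return out

# log_λ(1+t)/t = Σ c_n t^n with c_n = C(λ, n+1)/λ
Lt = [binom_lam(lam, n + 1) / lam for n in range(M + 1)]

def daehee_higher(order):
    # D^{(order)}_{n,λ} = n! [t^n] (log_λ(1+t)/t)^order
    cur = [Fraction(1)] + [Fraction(0)] * M
    for _ in range(order):
        cur = mul(cur, Lt)
    return [factorial(n) * cur[n] for n in range(M + 1)]

D1, D2 = daehee_higher(r - 1), daehee_higher(r)

# hyperharmonics H^{(kk)}_{n,λ} from -log_λ(1-t)/(1-t)^kk
a = [(-1) ** (j + 1) * binom_lam(lam, j) / lam for j in range(M + 2)]
Hk = [sum(a[j] * comb(kk - 1 + n - j, kk - 1) for j in range(1, n + 1))
      for n in range(M + 2)]

def fall(x, m):
    # ordinary falling factorial (x)_m
    out = 1
    for j in range(m):
        out *= x - j
    return out

# Theorem 5 with (r, k) = (2, 3)
thm5_ok = all(
    D2[n] == factorial(n) * sum(
        (-1) ** i * comb(n - i, j) * Fraction(fall(kk, n - i - j), factorial(n - i))
        * D1[j] * Hk[i + 1]
        for i in range(n + 1) for j in range(n - i + 1))
    for n in range(M + 1))
print(thm5_ok)
```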
By (11), we get
\[
\sum_{n=1}^{\infty}H_{n,\lambda}t^{n}
=\frac{-\log_{\lambda}(1-t)}{1-t}
=\frac{\log_{\lambda}(1-t)}{-t}\,\frac{t}{1-t}
=\sum_{l=0}^{\infty}(-1)^{l}D_{l,\lambda}\frac{t^{l}}{l!}\sum_{j=1}^{\infty}t^{j}
=\sum_{n=1}^{\infty}\bigg(\sum_{l=0}^{n-1}(-1)^{l}\frac{D_{l,\lambda}}{l!}\bigg)t^{n}.\tag{25}
\]

Identities involving degenerate harmonic and degenerate hyperharmonic numbers

Thus, by comparing the coefficients on both sides of (25), we get
\[
H_{n,\lambda}=\sum_{l=0}^{n-1}(-1)^{l}\frac{D_{l,\lambda}}{l!},\qquad (n\ge 1).\tag{26}
\]
From (12), we can derive the following:
\[
\sum_{n=1}^{\infty}H^{(r)}_{n,\lambda}t^{n}
=\frac{-\log_{\lambda}(1-t)}{t}\,\frac{t}{(1-t)^{r}}
=\sum_{l=0}^{\infty}D_{l,\lambda}(-1)^{l}\frac{t^{l}}{l!}\sum_{m=1}^{\infty}\binom{r+m-2}{m-1}t^{m}
=\sum_{n=1}^{\infty}\bigg(\sum_{m=1}^{n}\binom{r+m-2}{r-1}\frac{D_{n-m,\lambda}}{(n-m)!}(-1)^{n-m}\bigg)t^{n}.\tag{27}
\]
Therefore, by (26) and (27), we obtain the following theorem.

Theorem 6. For $n\in\mathbb{N}$, we have
\[
H_{n,\lambda}=\sum_{l=0}^{n-1}(-1)^{l}\frac{D_{l,\lambda}}{l!},\qquad (n\ge 1),
\]
and
\[
H^{(r)}_{n,\lambda}=\sum_{m=1}^{n}\binom{r+m-2}{r-1}\frac{D_{n-m,\lambda}}{(n-m)!}(-1)^{n-m}.
\]
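Both identities of Theorem 6 express harmonic-type numbers through the Daehee numbers, so they can be verified with the same exact-arithmetic machinery. The following sketch is ours, not from the paper; $\lambda=3/7$ and $r=4$ are sample choices.

```python
from fractions import Fraction
from math import factorial, comb

lam = Fraction(3, 7)   # sample λ
r = 4                  # sample order
N = 10

def binom_lam(x, k):
    # generalized binomial coefficient C(x, k)
    out = Fraction(1)
    for j in range(k):
        out *= x - j
    return out / factorial(k)

# [t^k](-log_λ(1-t)) = (-1)^{k+1} C(λ,k)/λ  (a[0] unused)
a = [(-1) ** (k + 1) * binom_lam(lam, k) / lam for k in range(N + 1)]

def hyper(n, rr):
    # H^{(rr)}_{n,λ} = [t^n] of -log_λ(1-t)/(1-t)^rr
    return sum(a[j] * comb(rr - 1 + n - j, rr - 1) for j in range(1, n + 1))

# degenerate Daehee numbers, by (20)
D = [factorial(n) * binom_lam(lam, n + 1) / lam for n in range(N)]

# first identity: H_{n,λ} = Σ_{l=0}^{n-1} (-1)^l D_{l,λ}/l!
first_ok = all(hyper(n, 1) == sum((-1) ** l * D[l] / factorial(l) for l in range(n))
               for n in range(1, N))
# second identity: H^{(r)}_{n,λ} = Σ_{m=1}^n C(r+m-2, r-1) D_{n-m,λ}/(n-m)! (-1)^{n-m}
second_ok = all(hyper(n, r) == sum(comb(r + m - 2, r - 1) * D[n - m]
                                   / factorial(n - m) * (-1) ** (n - m)
                                   for m in range(1, n + 1))
                for n in range(1, N))
print(first_ok, second_ok)
```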
The degenerate derangements are defined by
\[
\frac{1}{1-t}\,e_{\lambda}(-t)=\sum_{n=0}^{\infty}d_{n,\lambda}\frac{t^{n}}{n!}.\tag{28}
\]
Thus, we note that
\[
d_{n,\lambda}=n!\sum_{k=0}^{n}(1)_{k,\lambda}\frac{(-1)^{k}}{k!},\qquad (n\ge 0).
\]
Now, we observe that
\[
\frac{-\log_{\lambda}(1-t)}{(1-t)^{r}}\,e_{\lambda}(-t)
=\sum_{l=1}^{\infty}H^{(r)}_{l,\lambda}t^{l}\sum_{k=0}^{\infty}\frac{(1)_{k,\lambda}}{k!}(-1)^{k}t^{k}
=\sum_{n=1}^{\infty}\bigg(\sum_{l=1}^{n}H^{(r)}_{l,\lambda}\frac{(1)_{n-l,\lambda}}{(n-l)!}(-1)^{n-l}\bigg)t^{n}.\tag{29}
\]
On the other hand, by (28), we get
\[
\frac{-\log_{\lambda}(1-t)}{(1-t)^{r}}\,e_{\lambda}(-t)
=\frac{-\log_{\lambda}(1-t)}{(1-t)^{r-1}}\,\frac{1}{1-t}\,e_{\lambda}(-t)
=\sum_{l=1}^{\infty}H^{(r-1)}_{l,\lambda}t^{l}\sum_{k=0}^{\infty}d_{k,\lambda}\frac{t^{k}}{k!}
=\sum_{n=1}^{\infty}\bigg(\sum_{l=1}^{n}H^{(r-1)}_{l,\lambda}\frac{d_{n-l,\lambda}}{(n-l)!}\bigg)t^{n}.\tag{30}
\]
Therefore, by (29) and (30), we obtain the following theorem.

Theorem 7. For $n\in\mathbb{N}$, we have
\[
\sum_{l=1}^{n}H^{(r)}_{l,\lambda}\frac{(1)_{n-l,\lambda}}{(n-l)!}(-1)^{n-l}
=\sum_{l=1}^{n}H^{(r-1)}_{l,\lambda}\frac{d_{n-l,\lambda}}{(n-l)!}.
\]
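Theorem 7 equates two expansions of $-\log_{\lambda}(1-t)\,e_{\lambda}(-t)/(1-t)^{r}$, and both sides can be evaluated directly. The sketch below is an illustration (not part of the paper); it uses $(1)_{k,\lambda}=1(1-\lambda)\cdots(1-(k-1)\lambda)$, with $\lambda=1/2$ and $r=3$ as sample choices.

```python
from fractions import Fraction
from math import factorial, comb

lam = Fraction(1, 2)   # sample λ
r = 3                  # sample order (r ≥ 1 so that H^{(r-1)} makes sense)
N = 10

def binom_lam(x, k):
    # generalized binomial coefficient C(x, k)
    out = Fraction(1)
    for j in range(k):
        out *= x - j
    return out / factorial(k)

a = [(-1) ** (k + 1) * binom_lam(lam, k) / lam for k in range(N + 1)]

def hyper(n, rr):
    # H^{(rr)}_{n,λ} = [t^n] of -log_λ(1-t)/(1-t)^rr
    return sum(a[j] * comb(rr - 1 + n - j, rr - 1) for j in range(1, n + 1))

def deg_one(k):
    # (1)_{k,λ} = 1(1-λ)(1-2λ)...(1-(k-1)λ)
    out = Fraction(1)
    for j in range(k):
        out *= 1 - j * lam
    return out

# degenerate derangements, from (28): d_{n,λ} = n! Σ_{k=0}^n (1)_{k,λ}(-1)^k/k!
d = [factorial(n) * sum(deg_one(k) * (-1) ** k / factorial(k)
                        for k in range(n + 1)) for n in range(N)]

# Theorem 7: the coefficient sums from (29) and (30) agree
lhs = [sum(hyper(l, r) * deg_one(n - l) * (-1) ** (n - l) / factorial(n - l)
           for l in range(1, n + 1)) for n in range(1, N)]
rhs = [sum(hyper(l, r - 1) * d[n - l] / factorial(n - l)
           for l in range(1, n + 1)) for n in range(1, N)]
thm7_ok = lhs == rhs
print(thm7_ok)
```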
We let $Y=\log_{\lambda}(1+t)$. Then, for $N\ge 1$, we have
\[
\Big(\frac{d}{dt}\Big)^{N}Y
=(\lambda-1)(\lambda-2)\cdots(\lambda-N+1)(1+t)^{\lambda-N}
=\frac{N!}{\lambda}\binom{\lambda}{N}e_{\lambda}^{\lambda-N}\big(\log_{\lambda}(1+t)\big)
=\frac{N!}{\lambda}\binom{\lambda}{N}\sum_{k=0}^{\infty}(\lambda-N)_{k,\lambda}\frac{1}{k!}\big(\log_{\lambda}(1+t)\big)^{k}
=\frac{N!}{\lambda}\binom{\lambda}{N}\sum_{k=0}^{\infty}(\lambda-N)_{k,\lambda}\sum_{n=k}^{\infty}S_{1,\lambda}(n,k)\frac{t^{n}}{n!}
=\sum_{n=0}^{\infty}\bigg(\frac{N!}{\lambda}\binom{\lambda}{N}\sum_{k=0}^{n}S_{1,\lambda}(n,k)(\lambda-N)_{k,\lambda}\bigg)\frac{t^{n}}{n!},\tag{31}
\]
where $N$ is a positive integer. On the other hand, by (10), we get
\[
Y=\log_{\lambda}(1+t)=\frac{\log_{\lambda}(1+t)}{t}\,t=\sum_{n=1}^{\infty}nD_{n-1,\lambda}\frac{t^{n}}{n!}.\tag{32}
\]
Thus, by (32), we get
\[
\Big(\frac{d}{dt}\Big)^{N}Y
=\sum_{n=N}^{\infty}nD_{n-1,\lambda}\,n(n-1)\cdots(n-N+1)\frac{t^{n-N}}{n!}
=\sum_{n=0}^{\infty}(n+N)D_{n+N-1,\lambda}\frac{t^{n}}{n!}.\tag{33}
\]
Therefore, by (31) and (33), we obtain the following theorem.

Theorem 8. For $N\in\mathbb{N}$ and $n\ge N-1$, we have
\[
D_{n,\lambda}=\frac{N!}{n+1}\,\frac{1}{\lambda}\binom{\lambda}{N}\sum_{k=0}^{n-N+1}S_{1,\lambda}(n-N+1,k)(\lambda-N)_{k,\lambda}.
\]
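Theorem 8 involves the degenerate Stirling numbers of the first kind $S_{1,\lambda}(n,k)$, which can be generated from powers of the series $\log_{\lambda}(1+t)$. The following check is our illustration, not part of the paper; $\lambda=1/4$ is a sample choice and the check runs over $N=1,2,3$.

```python
from fractions import Fraction
from math import factorial

lam = Fraction(1, 4)   # sample λ
M = 9                  # truncation order of all power series

def binom_lam(x, k):
    # generalized binomial coefficient C(x, k)
    out = Fraction(1)
    for j in range(k):
        out *= x - j
    return out / factorial(k)

def mul(p, q):
    # product of two power series truncated at degree M
    out = [Fraction(0)] * (M + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= M:
                out[i + j] += pi * qj
    return out

# log_λ(1+t) = Σ_{k≥1} C(λ,k)/λ · t^k
L = [Fraction(0)] + [binom_lam(lam, k) / lam for k in range(1, M + 1)]

# degenerate Stirling numbers of the first kind:
# (log_λ(1+t))^k / k! = Σ_{n≥k} S_{1,λ}(n,k) t^n/n!
S1 = [[Fraction(0)] * (M + 1) for _ in range(M + 1)]   # S1[n][k]
cur = [Fraction(1)] + [Fraction(0)] * M                # running power L^k
for k in range(M + 1):
    for n in range(M + 1):
        S1[n][k] = factorial(n) * cur[n] / factorial(k)
    cur = mul(cur, L)

def fall_deg(x, k):
    # degenerate falling factorial (x)_{k,λ} = x(x-λ)...(x-(k-1)λ)
    out = Fraction(1)
    for j in range(k):
        out *= x - j * lam
    return out

# degenerate Daehee numbers, by (21)
D = [factorial(n) * binom_lam(lam, n + 1) / lam for n in range(M + 1)]

# Theorem 8: for n ≥ N-1,
# D_{n,λ} = N!/(n+1) · (1/λ)C(λ,N) Σ_{k=0}^{n-N+1} S_{1,λ}(n-N+1,k)(λ-N)_{k,λ}
thm8_ok = all(
    D[n] == Fraction(factorial(Nv), n + 1) * binom_lam(lam, Nv) / lam
    * sum(S1[n - Nv + 1][k] * fall_deg(lam - Nv, k) for k in range(n - Nv + 2))
    for Nv in range(1, 4) for n in range(Nv - 1, M))
print(thm8_ok)
```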
Next, we let $F=-\log_{\lambda}(1-t)$. Then, for $N\ge 1$, we have
\[
\Big(\frac{d}{dt}\Big)^{N}F
=(-1)^{N+1}(\lambda-1)(\lambda-2)\cdots(\lambda-N+1)(1-t)^{\lambda-N}
=(-1)^{N+1}\frac{N!}{\lambda}\binom{\lambda}{N}e_{\lambda}^{\lambda-N}\big(\log_{\lambda}(1-t)\big)
=(-1)^{N+1}N!\,\frac{1}{\lambda}\binom{\lambda}{N}\sum_{k=0}^{\infty}(\lambda-N)_{k,\lambda}\frac{1}{k!}\big(\log_{\lambda}(1-t)\big)^{k}
=(-1)^{N+1}N!\,\frac{1}{\lambda}\binom{\lambda}{N}\sum_{k=0}^{\infty}(\lambda-N)_{k,\lambda}\sum_{n=k}^{\infty}S_{1,\lambda}(n,k)(-1)^{n}\frac{t^{n}}{n!}
=\sum_{n=0}^{\infty}\bigg(N!\,\frac{1}{\lambda}\binom{\lambda}{N}\sum_{k=0}^{n}(-1)^{n-N-1}(\lambda-N)_{k,\lambda}S_{1,\lambda}(n,k)\bigg)\frac{t^{n}}{n!}.\tag{34}
\]
On the other hand, by (11), we get
\[
F=-\log_{\lambda}(1-t)=\frac{-\log_{\lambda}(1-t)}{1-t}\,(1-t)=\sum_{n=1}^{\infty}\big(H_{n,\lambda}-H_{n-1,\lambda}\big)t^{n}.\tag{35}
\]
Thus, by (35) and for $N\ge 1$, we have
\[
\Big(\frac{d}{dt}\Big)^{N}F
=\sum_{n=N}^{\infty}n(n-1)\cdots(n-N+1)\big(H_{n,\lambda}-H_{n-1,\lambda}\big)t^{n-N}
=\sum_{n=0}^{\infty}(n+N)(n+N-1)\cdots(n+1)\big(H_{n+N,\lambda}-H_{n+N-1,\lambda}\big)t^{n}
=\sum_{n=0}^{\infty}N!\binom{n+N}{N}\big(H_{n+N,\lambda}-H_{n+N-1,\lambda}\big)t^{n}.\tag{36}
\]
Therefore, by (34) and (36), we obtain the following theorem.

Theorem 9. For $N\in\mathbb{N}$ and $n\ge 0$, we have
\[
\frac{1}{n!}\,\frac{1}{\lambda}\binom{\lambda}{N}\sum_{k=0}^{n}(-1)^{n-N-1}(\lambda-N)_{k,\lambda}S_{1,\lambda}(n,k)
=\binom{n+N}{N}\big(H_{n+N,\lambda}-H_{n+N-1,\lambda}\big).
\]
By Theorem 9 and (6), we get
\[
\frac{1}{n!}\sum_{k=0}^{n}(-1)^{n-N-1}(\lambda-N)_{k,\lambda}S_{1,\lambda}(n,k)
=\frac{\binom{n+N}{N}}{\frac{1}{\lambda}\binom{\lambda}{N}}\big(H_{n+N,\lambda}-H_{n+N-1,\lambda}\big)
=\frac{\binom{n+N}{N}}{\frac{1}{\lambda}\binom{\lambda}{N}}\,\frac{1}{\lambda}\binom{\lambda}{n+N}(-1)^{n+N-1}
=(-1)^{n+N-1}\frac{\binom{\lambda}{N+n}}{\binom{\lambda}{N}}\binom{n+N}{N}.\tag{37}
\]
Therefore, by (37), we obtain the following corollary.

Corollary 10. For $n\ge 0$ and $N\in\mathbb{N}$, we have
\[
\frac{1}{n!}\sum_{k=0}^{n}(\lambda-N)_{k,\lambda}S_{1,\lambda}(n,k)
=\frac{\binom{\lambda}{n+N}}{\binom{\lambda}{N}}\binom{n+N}{N}.
\]
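Theorem 9 and Corollary 10 can be verified with the same series machinery. This sketch is ours, not from the paper; $\lambda=2/7$ is a sample choice, the harmonic numbers are taken as the partial sums $H_{n,\lambda}=\sum_{k=1}^{n}(-1)^{k+1}\binom{\lambda}{k}/\lambda$, and the sign $(-1)^{n-N-1}$ is reduced mod 2 to stay in exact integer arithmetic.

```python
from fractions import Fraction
from math import factorial, comb

lam = Fraction(2, 7)   # sample λ
M = 9                  # truncation order

def binom_lam(x, k):
    # generalized binomial coefficient C(x, k)
    out = Fraction(1)
    for j in range(k):
        out *= x - j
    return out / factorial(k)

def mul(p, q):
    # product of two power series truncated at degree M
    out = [Fraction(0)] * (M + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= M:
                out[i + j] += pi * qj
    return out

# degenerate Stirling numbers of the first kind from powers of log_λ(1+t)
L = [Fraction(0)] + [binom_lam(lam, k) / lam for k in range(1, M + 1)]
S1 = [[Fraction(0)] * (M + 1) for _ in range(M + 1)]   # S1[n][k]
cur = [Fraction(1)] + [Fraction(0)] * M
for k in range(M + 1):
    for n in range(M + 1):
        S1[n][k] = factorial(n) * cur[n] / factorial(k)
    cur = mul(cur, L)

def fall_deg(x, k):
    # degenerate falling factorial (x)_{k,λ}
    out = Fraction(1)
    for j in range(k):
        out *= x - j * lam
    return out

def H(n):
    # degenerate harmonic numbers H_{n,λ}
    return sum((-1) ** (k + 1) * binom_lam(lam, k) / lam for k in range(1, n + 1))

# Theorem 9
thm9_ok = all(
    Fraction(1, factorial(n)) * binom_lam(lam, Nv) / lam
    * sum((-1) ** ((n - Nv - 1) % 2) * fall_deg(lam - Nv, k) * S1[n][k]
          for k in range(n + 1))
    == comb(n + Nv, Nv) * (H(n + Nv) - H(n + Nv - 1))
    for Nv in range(1, 4) for n in range(M + 1))
# Corollary 10
cor10_ok = all(
    sum(fall_deg(lam - Nv, k) * S1[n][k] for k in range(n + 1)) / factorial(n)
    == binom_lam(lam, n + Nv) * comb(n + Nv, Nv) / binom_lam(lam, Nv)
    for Nv in range(1, 4) for n in range(M + 1))
print(thm9_ok, cor10_ok)  # -> True True
```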
Remark 11. From Corollary 10, letting $\lambda\to 0$, we obtain
\[
(-1)^{n}\frac{N}{n+N}\binom{n+N}{N}=\frac{1}{n!}\sum_{k=0}^{n}(-1)^{k}N^{k}S_{1}(n,k).
\]

Remark 12. Recently, various studies on the Daehee numbers and related topics have been conducted by several researchers. Interested readers may refer to [1, 12, 13, 14, 15, 17, 18].
3. CONCLUSION

Many different tools have been used in the explorations for degenerate versions of some special numbers and polynomials, which include generating functions, combinatorial methods, umbral calculus, p-adic analysis, differential equations, probability theory, operator theory, special functions and analytic number theory (see [5, 6, 7, 8, 9, 10, 11, 12] and the references therein). In this paper, we used the elementary methods of generating functions in order to study the degenerate harmonic and degenerate hyperharmonic numbers. Some properties, recurrence relations and identities relating to those numbers were derived in connection with the degenerate Stirling numbers of the first kind, the degenerate Daehee numbers and the degenerate derangements. We would like to continue to investigate various degenerate versions of certain special numbers and polynomials, especially their applications to physics, science and engineering.
Acknowledgments

The authors thank Jangjeon Institute for Mathematical Sciences for the support of this research.

Availability of data and material

Not applicable.

Funding

This work was supported by the Basic Science Research Program, the National Research Foundation of Korea (NRF-2021R1F1A1050151).

Ethics approval and consent to participate

All authors declare that there is no ethical problem in the production of this paper.

Competing interests

All authors declare no conflict of interest.

Consent for publication

All authors want to publish this paper in this journal.

Authors' Contributions

All authors read and approved the final manuscript.
REFERENCES

[1] S. Araci, U. Duran and M. Acikgoz, On weighted q-Daehee polynomials with their applications. Indag. Math. (N.S.) 30 (2019), no. 2, 365-374.
[2] L. Carlitz, Degenerate Stirling, Bernoulli and Eulerian numbers. Utilitas Math. 15 (1979), 51-88.
[3] L. Comtet, Advanced combinatorics. The art of finite and infinite expansions. Revised and enlarged edition. D. Reidel Publishing Co., Dordrecht, 1974. xi+343 pp. ISBN: 90-277-0441-4.
[4] J. H. Conway and R. K. Guy, The book of numbers. Copernicus, New York, 1996. x+310 pp. ISBN: 0-387-97993-X.
[5] D. S. Kim and T. Kim, A note on a new type of degenerate Bernoulli numbers. Russ. J. Math. Phys. 27 (2020), no. 2, 227-235.
[6] D. S. Kim and T. Kim, Degenerate Sheffer sequence and λ-Sheffer sequence. J. Math. Anal. Appl. 493 (2021), no. 1, 124521.
[7] T. Kim and D. S. Kim, Some identities on degenerate hyperharmonic numbers. Georgian Math. J., 2022 (2022). https://doi.org/10.1515/gmj-2022-2203
[8] T. Kim and D. S. Kim, On some degenerate differential and degenerate difference operators. Russ. J. Math.
|
273 |
+
page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
274 |
+
page_content=' 29 (2022), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
275 |
+
page_content=' 1, 37-46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
276 |
+
page_content=' [9] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
277 |
+
page_content=' Kim and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
278 |
+
page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
279 |
+
page_content=' Kim, Degenerate Laplace transform and degenerate gamma function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
280 |
+
page_content=' Russ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
281 |
+
page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
282 |
+
page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
283 |
+
page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
284 |
+
page_content=' 24 (2017), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
285 |
+
page_content=' 2, 241–248 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
286 |
+
page_content=' [10] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
287 |
+
page_content=' Kim and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
288 |
+
page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
289 |
+
page_content=' Kim, Note on the degenerate gamma function Russ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
290 |
+
page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
291 |
+
page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
292 |
+
page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
293 |
+
page_content=' 27 (2020), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
294 |
+
page_content=' 3, 352-358.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
295 |
+
page_content=' [11] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
296 |
+
page_content=' Kim, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
297 |
+
page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
298 |
+
page_content=' Kim and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
299 |
+
page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
300 |
+
page_content=' Kim, λ-q-Sheffer sequence and its applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
301 |
+
page_content=' Demonstr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
302 |
+
page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
303 |
+
page_content=' 55 (2022), 843–865.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
304 |
+
page_content=' [12] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
305 |
+
page_content=' Kim, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
306 |
+
page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
307 |
+
page_content=' Kim, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
308 |
+
page_content=' Lee and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
309 |
+
page_content=' Kwon, Representations by degenerate Daehee polynomials.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
310 |
+
page_content=' Open Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
311 |
+
page_content=' 20 (2022), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
312 |
+
page_content=' 1, 179-194.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
313 |
+
page_content=' [13] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
314 |
+
page_content=' Kwon, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
315 |
+
page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
316 |
+
page_content=' Kim and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
317 |
+
page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
318 |
+
page_content=' Rim, On the some identities of the type 2 Daehee and Changhee polynomials arising from p-adic integrals on Zp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
319 |
+
page_content=' Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
320 |
+
page_content=' Jangjeon Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
321 |
+
page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
322 |
+
page_content=' 22 (2019), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
323 |
+
page_content=' 3, 487-497.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
324 |
+
page_content=' [14] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
325 |
+
page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
326 |
+
page_content=' Lee, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
327 |
+
page_content=' Kwon, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
328 |
+
page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
329 |
+
page_content=' Jang and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
330 |
+
page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
331 |
+
page_content=' Jang, Some identities of λ-Daehee polynomials.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
332 |
+
page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
333 |
+
page_content=' Nonlinear Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
334 |
+
page_content=' Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
335 |
+
page_content=' 10 (2017), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
336 |
+
page_content=' 8, 4137-4142.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
337 |
+
page_content=' [15] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
338 |
+
page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
339 |
+
page_content=' Park, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
340 |
+
page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
341 |
+
page_content=' Kim and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
342 |
+
page_content=' Kwon, On a modified degenerate Daehee polynomials and numbers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
343 |
+
page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
344 |
+
page_content=' Nonlinear Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
345 |
+
page_content=' Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
346 |
+
page_content=' 10 (2017), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
347 |
+
page_content=' 3, 1108-1115.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
348 |
+
page_content=' [16] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
349 |
+
page_content=' Roman, The umbral calculus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
350 |
+
page_content=' Pure and Applied Mathematics, 111.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
351 |
+
page_content=' Academic Press, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
352 |
+
page_content=' [Harcourt Brace Jo- vanovich, Publishers], New York, 1984.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
353 |
+
page_content=' x+193 pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
354 |
+
page_content=' ISBN: 0-12-594380-6 [17] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
355 |
+
page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
356 |
+
page_content=' Sharma, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
357 |
+
page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
358 |
+
page_content=' Khan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
359 |
+
page_content=' Araci and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
360 |
+
page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
361 |
+
page_content=' Ahmed, New type of degenerate Daehee polynomials of the second kind.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
362 |
+
page_content=' Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
363 |
+
page_content=' Difference Equ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
364 |
+
page_content=' 2020 (2020), Paper No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
365 |
+
page_content=' 428, 14 pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
366 |
+
page_content=' [18] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
367 |
+
page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
368 |
+
page_content=' Yun and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
369 |
+
page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
370 |
+
page_content=' Park, On fully degenerate Daehee numbers and polynomials of the second kind.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
371 |
+
page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
372 |
+
page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
373 |
+
page_content=' 2020 (2020), Art.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
374 |
+
page_content=' ID 7893498, 9 pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
375 |
+
page_content=' 10 Identities involving degenerate harmonic and degenerate hyperharmonic numbers DEPARTMENT OF MATHEMATICS EDUCATION, DAEGU CATHOLIC UNIVERSITY, GYEONGSAN 38430, REPUB- LIC OF KOREA Email address: hkkim@cu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
376 |
+
page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
377 |
+
page_content='kr DEPARTMENT OF MATHEMATICS, SOGANG UNIVERSITY, SEOUL 121-742, REPUBLIC OF KOREA Email address: dskim@sogang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
378 |
+
page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
379 |
+
page_content='kr DEPARTMENT OF MATHEMATICS, KWANGWOON UNIVERSITY, SEOUL 139-701, REPUBLIC OF KOREA Email address: tkkim@kw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
380 |
+
page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
381 |
+
page_content='kr' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfCQIp/content/2301.02861v1.pdf'}
|
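The knowledge-base dumps in this diff (the `load_file.txt` entries) serialize each extracted chunk as a `page_content=' ...' metadata={'source': ...}` line, one sentence fragment per line. A small sketch, assuming this exact one-chunk-per-line layout (the function name and sample paths are mine, for illustration), for stitching the fragments back into running text:

```python
import re

# Each serialized chunk line has the form:
#   page_content=' ...text...' metadata={'source': '/path/to/file.pdf'}
LINE_RE = re.compile(
    r"page_content='(?P<text>.*?)' metadata=\{'source': '(?P<source>[^']*)'\}"
)

def rejoin_chunks(lines):
    """Stitch per-sentence page_content fragments back into running text."""
    pieces, source = [], None
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            pieces.append(m.group("text").strip())
            source = m.group("source")  # same source repeats on every line
    return " ".join(pieces), source

sample = [
    "page_content=' [6] D.' metadata={'source': '/kb/2301.02861v1.pdf'}",
    "page_content=' S.' metadata={'source': '/kb/2301.02861v1.pdf'}",
    "page_content=' Kim and T.' metadata={'source': '/kb/2301.02861v1.pdf'}",
]
text, src = rejoin_chunks(sample)
# text == "[6] D. S. Kim and T."
```

Because the chunker split on periods, rejoining with a single space recovers readable prose but cannot restore original line breaks.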
A9FAT4oBgHgl3EQfrx7P/content/2301.08655v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:01262dbda26eb60a40ab2d73bbb0d61831a1ec2b3f16a31595e401692e63df5a
+size 193782

A9FAT4oBgHgl3EQfrx7P/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af582328fb9199499516cc030f6e3225ceb2b4864a6902c013d2b6702e173fc2
+size 1441837

A9FAT4oBgHgl3EQfrx7P/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:02e56db9b963589338936f99f9a6a55410754150363c75f83c66ab84877d3975
+size 64777
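The `ADDED` artifacts above are stored as Git LFS pointer files: three key-value lines (`version`, `oid`, `size`) standing in for the binary content. A minimal parsing sketch (the helper name is mine, not from any LFS tool):

```python
def parse_lfs_pointer(text: str) -> dict:
    # A Git LFS pointer is a tiny key-value text file, e.g.:
    #   version https://git-lfs.github.com/spec/v1
    #   oid sha256:<64 hex chars>
    #   size <bytes>
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    algorithm, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algorithm": algorithm,
        "oid": digest,
        "size": int(fields["size"]),
    }

# Pointer taken verbatim from the 2301.08655v1.pdf entry above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:01262dbda26eb60a40ab2d73bbb0d61831a1ec2b3f16a31595e401692e63df5a
size 193782"""
info = parse_lfs_pointer(pointer)
```

The `size` field gives the real payload size of each artifact (e.g., ~1.4 MB for the FAISS index), which is why the diff view shows only three lines per file.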
BNE1T4oBgHgl3EQfVgSF/content/tmp_files/2301.03103v1.pdf.txt
ADDED
@@ -0,0 +1,2084 @@
1 |
+
A Multi-Site Accelerator-Rich Processing Fabric for
|
2 |
+
Scalable Brain-Computer Interfacing
|
3 |
+
Karthik Sriram, Raghavendra Pradyumna Pothukuchi, Michał Gerasimiuk, Oliver Ye, Muhammed Ugur
|
4 |
+
Rajit Manohar, Anurag Khandelwal, Abhishek Bhattacharjee
|
5 |
+
Yale University, New Haven, USA
|
6 |
+
Abstract—Hull1 is an accelerator-rich distributed implantable
|
7 |
+
brain-computer interface (BCI) that reads biological neurons at
|
8 |
+
data rates that are 2-3 orders of magnitude higher than the
|
9 |
+
prior art, while supporting many neuroscientific applications.
|
10 |
+
Prior approaches have restricted brain interfacing to tens of
|
11 |
+
megabits per second in order to meet two constraints necessary
|
12 |
+
for effective operation and safe long-term implantation—power
|
13 |
+
dissipation under tens of milliwatts and response latencies in
|
14 |
+
the tens of milliseconds. Hull also adheres to these constraints,
|
15 |
+
but is able to interface with the brain at much higher data
|
16 |
+
rates, thereby enabling, for the first time, BCI-driven research on
|
17 |
+
and clinical treatment of brain-wide behaviors and diseases that
|
18 |
+
require reading and stimulating many brain locations. Central to
|
19 |
+
Hull’s power efficiency is its realization as a distributed system of
|
20 |
+
BCI nodes with accelerator-rich compute. Hull balances modular
|
21 |
+
system layering with aggressive cross-layer hardware-software
|
22 |
+
co-design to integrate compute, networking, and storage. The
|
23 |
+
result is a lesson in designing networked distributed systems with
|
24 |
+
hardware accelerators from the ground up.
|
25 |
+
I. I N T RO D U C T I O N
|
26 |
+
Brain-computer interfaces (BCIs) sense the electrical activity
|
27 |
+
of biological neurons and electrically stimulate them to “rewire”
|
28 |
+
neuronal circuits. By directly connecting brains to computers,
|
29 |
+
BCIs help advance our understanding of the brain and the
|
30 |
+
mind [1, 2], offer treatment of neurological disorders [2–6],
|
31 |
+
enable industrial robotics [7], permit novel modes of personal
|
32 |
+
entertainment [8], and more.
|
33 |
+
BCIs can be realized as surface electrodes (i.e., electrical
|
34 |
+
sensors) placed on the scalp above the skull to measure brain
|
35 |
+
activity [2, 3]. While such wearable BCIs do not require surgical
|
36 |
+
deployment, the signals they collect are muffled by the skull,
|
37 |
+
making them noisy, low-resolution, and less ideal for forward-
|
38 |
+
looking BCI applications [5, 9–12].
|
39 |
+
Instead, this work focuses on implantable BCIs that are surgically embedded directly on, around, and in the brain tissue [13, 14]. Implantable BCIs directly record from and stimulate neurons with high fidelity and spatial resolution, and in real time [2, 5]. Hundreds of individuals use clinically approved implantable BCIs to treat epilepsy, movement disorders, and impaired vision [15–17]. Implantable BCIs are also being studied in clinical trials to assess their effectiveness in treating brain stroke, memory disorders, paralysis, anxiety/depression, addiction, and more [14, 18, 19].
Conflicting constraints make it challenging to design hardware for implantable BCIs. BCIs cannot overheat brain regions by >1 °C, to avoid cellular damage [20, 21], and must therefore be ultra-low-power. But BCI designers are also seeking to leverage improvements in sensor technology that read exponentially increasing volumes of neuronal data [22]. It is challenging to constrain power/energy while processing such large data, especially when responding to neuronal activity in real time (i.e., in ms). Hardware over-specialization is not a viable way to reduce BCI power; to enable many research and clinical studies, BCIs must be adequately programmable to personalize algorithms, support several computational methods to treat multiple disorders, and enable deployment of maturing/emerging algorithms [23, 24].

¹A hull is the protective outer covering of grain. We call our design Hull since it similarly protects the brain.
Complicating BCI design further is the emergence of applications that read and process neural activity from many brain sites over time [1, 13, 25, 26]. This is because the brain's functions (and disorders) are ultimately based on physical and functional connectivity between brain regions that evolves over time [1, 13, 25, 26].
Existing BCIs [16, 17, 23, 27, 28] are designed for single-site implantation and lack the ability to store adequately long histories of neural data. Most BCIs [29, 30] have additional limitations: they have historically eschewed some subset of programmability, data rates, and flexibility to meet safe power constraints. These BCIs are specialized to a specific task and/or limit personalization of the algorithm [29, 30]. Some support more programmability by sacrificing high data rates [16, 17, 27, 28]. Consequently, none support the distributed and rapidly evolving neural data processing that emerging BCI applications require. Recent work on HALO balances flexibility, data rates, and power, but is limited to one brain site. At best, distributed BCI applications have been studied in prior work using multiple sensor implants that offload processing to external devices with higher power budgets [31, 32]. But offloading is no panacea: it suffers long network latencies and raises privacy and mobility concerns [33].
Our work is the first to offer a path toward scalable whole-brain interfacing across multiple brain sites. Our solution, Hull, is a distributed BCI consisting of multiple BCI nodes that communicate with one another wirelessly and interface with the brain at an aggregate data rate 2-3 orders of magnitude higher than the prior state of the art [23]. Each BCI node consists of flexible compute made up of reconfigurable, power-efficient, domain-specific hardware accelerators customized for important neural processing applications. These accelerators are tightly co-designed with storage and networking to ensure system-wide adherence to power and response-latency constraints.

arXiv:2301.03103v1 [cs.DC] 8 Jan 2023
Hull uses a scheduler based on an integer linear program (ILP) to optimally map tasks and algorithms to the hardware accelerators across all of Hull's nodes, and to create the network and storage access schedules to feed the accelerators. Hull supports three types of BCI applications [34, 35]:
The first category consists of applications that continuously monitor brain activity and respond to aberrant behavior without engaging with agents external to Hull [34–36]. This includes, for example, detection of seizures [26], prediction of their spread, and, to mitigate symptoms, electrical stimulation of the regions to which seizures are expected to migrate.
The second category also monitors the brain continuously but relies on agents external to Hull to respond. This includes, for example, the detection of an individual's intended movement and the relaying of this data to prostheses or assistive devices [3, 33–35]. These applications enable paralyzed individuals to actuate machines and restore (partial) sensory function.
The third category interactively queries Hull to analyze data from multiple brain sites. These queries may be used "in the loop" or "out of the loop". With the former, clinicians may need to modify treatment options based on data that the BCI reads; e.g., confirming that Hull has correctly detected seizures and responded appropriately [37, 38]. The latter refers to interactive queries used by technicians to debug system operations, by clinicians to glean the individual's medical history, and by researchers to better understand brain function.
In supporting these types of applications, Hull offers research insights on building end-to-end computer systems centered on hardware accelerators. Specifically, our choice of hardware accelerators follows several important design principles:
First, to maximize power efficiency as well as simplicity of hardware design, we identify algorithmic kernels within our applications whose acceleration improves not only computation but also networking and storage latency. This is a non-trivial exercise that requires domain-specific knowledge to convert BCI applications into equivalent forms that expose these kernels. We reformulate known computational pipelines for seizure propagation prediction/treatment and movement intent (the two BCI applications on which Hull focuses) to use hash-based similarity measures that identify the neural signals from different brain sites likeliest to be correlated, before applying heavyweight correlation measures. The same hashes drive the design of domain-specific data layouts in our storage stack. Co-designing our hashes with our storage layout permits several power/latency-efficient data retrievals from our storage layer. The hashes also reduce the data communicated over the intra-BCI network between Hull nodes.
Second, we design our accelerators to be predictable in latency and power at our target data rates. Predictable performance and power characteristics facilitate the optimal design of compute and network schedules, the enabling feature for our ILP scheduler. Designing accelerators with predictable performance and power requires care. For some accelerators whose data generation rate is input-dependent (e.g., data compression), we use theoretically derived worst-case latency and throughput estimates. Furthermore, we design our accelerators in their own clock domains to enable a range of operating frequencies with well-defined power and performance characteristics.
Third, we design our accelerators to be reconfigurable. This permits repurposing our hash accelerators, for example, to act as hash indices for our storage layer or as filters that reduce networking traffic, and allows them to be tuned differently depending on the target application.
Overall, Hull scales brain-computer interface bandwidth beyond what was previously achievable. Hull is flexible, reconfigurable, and supports real-time distributed neural processing. We use a detailed physical synthesis flow in a 28 nm CMOS process (including tapeouts of partial hardware at 12 nm), coupled with network and storage models, to evaluate Hull's power and performance. We show that Hull can process an aggregate of 460 Mbps of wireless data from multiple regions in only 10 ms and dissipates no more than 15 mW at any node, confirming its suitability for autonomous use. In interactive mode, it can support 10 queries per second over 6 MB of data across 10 implants. Existing designs support two orders of magnitude lower data rates and need intrusive wiring. In summary, our specific contributions are:
1) Hull, the first distributed and wireless BCI system that scales to multi-region neural processing in real time.
2) The cross-layer co-design of BCI applications, processing, and storage for scalable and distributed neural processing. Hull is the first to support long-term storage of data, hashing, and database indices to enable distributed signal processing on a single BCI platform.
3) An evaluation of Hull's flexibility and performance on epileptic seizure propagation and detection of movement intent, in various deployment scenarios, as well as in the support of more arbitrary queries that may be used interactively by clinicians/technicians.
Hull furthers the elevation of hardware accelerators to first-class compute citizens that, like CPUs, can directly engage with networking and storage. This trend will be crucial to future IoT, swarm, and intermittent computing environments that sustain adaptive and complex functionality while meeting strict safe-use (i.e., power, latency, throughput) constraints.
II. BACKGROUND
A. Brain-Computer Interface Design

BCI applications typically perform signal measurement, feature extraction, classification/decision-making, and, when applicable, neural feedback/stimulation [2, 5]. The hardware organization of BCIs reflects these four aspects. Signal measurement is performed by electrodes that read the electrical activity of clusters of biological neurons; analog-to-digital converters (ADCs) then digitize this data. State-of-the-art sensors provide arrays of 96-256 electrodes per implant. ADCs typically encode measured signal samples with 8-16 bits at a rate of 20-50 K samples per second per electrode. Digitized data is then relayed to compute logic for feature extraction, based on which classification/decision-making proceeds. If needed, the electrode arrays are repurposed, after a digital-to-analog (DAC) conversion step, to electrically stimulate the brain.
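As a rough sanity check on these figures, the raw per-implant sensing data rate follows directly from the electrode count, sample width, and sample rate. A minimal sketch (the function name is ours; the ranges are the ones quoted above):

```python
def implant_data_rate_mbps(electrodes: int, bits_per_sample: int,
                           samples_per_sec: int) -> float:
    """Raw sensing data rate of one implant, in Mbit/s."""
    return electrodes * bits_per_sample * samples_per_sec / 1e6

# Low end:  96 electrodes x 8 bits  x 20 ksps -> ~15.4 Mbps
# High end: 256 electrodes x 16 bits x 50 ksps -> ~204.8 Mbps
low = implant_data_rate_mbps(96, 8, 20_000)
high = implant_data_rate_mbps(256, 16, 50_000)
```

Even the low end of these ranges is far beyond what a 15 mW node can stream out raw, which motivates on-device processing.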
Modern BCIs also use radios to communicate with external agents (e.g., servers, prostheses), an evolution from the surgical cables used in early BCIs, which were susceptible to infections and restricted free movement [39, 40]. Finally, BCIs are powered with rechargeable batteries and/or inductive power transfer. These components are packaged in hermetically fused silica or titanium capsules. While the power limit considered safe for permanent implantation varies with the implant's target location and depth, we use 15 mW as a conservative limit [23, 41] for each of Hull's constituent BCI nodes.
B. Neural Processing Applications & Algorithmic Kernels
Future BCI applications will collect data across multiple brain sites and compare histories of stored neural signals across them. Many applications exhibit these needs, including algorithms for neuromuscular rehabilitation and neuropsychiatric disorders [2, 4, 13, 42–44], but we focus on epileptic seizure propagation and detection of movement intent, as they form the bulk of emerging BCI use [2, 5, 45–47]. We also consider spike sorting, a crucial kernel widely used in many applications [48, 49]. Spike sorting differs from seizure propagation and movement intent in that it is not a full application in itself; nevertheless, we study it because it is a prime candidate for wide use in a distributed manner.
1) Epileptic seizure propagation application: Seizures often migrate across brain regions [26]. Predicting seizure spread can help explain seizure dynamics and offer treatment options. When a seizure is detected at a brain site, seizure propagation algorithms compare neural signals from the originating site against current and past signals collected from other brain sites of interest. Correlation measures are used to detect whether there is a seizure match across brain sites, i.e., whether a seizure is likely to propagate to another brain region.
Figure 1a shows the steps used in standard seizure propagation pipelines [25, 26] (unsupported in their entirety by any existing BCI). First, seizures are detected in the signals from each electrode in all the brain regions that the electrodes probe. This step typically uses band-pass filters or a fast Fourier transform (FFT) on continuous signal windows to generate features, followed by a classifier like a support vector machine (SVM) [50]. Alternatively, clinicians may manually annotate the onset of a seizure.

Once a seizure is detected in a region at a specific point in time, the signal window from that region is compared with all the concurrent and previous windows from all other regions, up to a chosen time in the past.
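To make the feature-extraction step concrete, here is a minimal sketch of one common feature: spectral power in a frequency band, computed with a naive DFT (an illustrative stand-in for the FFT step; a detector would feed such band powers into the SVM, and all parameters here are hypothetical):

```python
import cmath

def band_power(window, sample_rate, lo_hz, hi_hz):
    """Spectral power of `window` between lo_hz and hi_hz, using a
    naive O(n^2) DFT. Real pipelines use an FFT; this only shows the
    feature being computed."""
    n = len(window)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * sample_rate / n
        if lo_hz <= freq <= hi_hz:
            # k-th DFT coefficient of the window.
            X = sum(window[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
            power += abs(X) ** 2
    return power
```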
2) Detection of movement intent application: BCIs can infer coarse-grained movement by reading single sites of the motor cortex [51, 52], but more fine-grained movement intent (e.g., the movement of individual fingers grasping an object) requires reading neural activity from multiple brain regions [45, 53, 54]. Figure 1b shows a typical computational pipeline that infers fine-grained movement intent [47, 55–57]. Neural signals from all electrodes in all target brain sites are first filtered or converted into the frequency domain using FFT for feature extraction. Then, the features are all pushed into a classifier to deduce intended movement. Linear SVMs are commonly used for classification because they are effective and because their parameters are intuitive for neuroscientists to reason about [3, 55, 58, 59]. Intended movement is then relayed to an external agent like a prosthetic arm. The prosthetic arm's movement must then be conveyed, using neural stimulation patterns, to the brain regions (e.g., the sensorimotor cortex) responsible for sensing the individual's environment [60, 61].

Fig. 1: Main BCI application steps. (a) Seizure propagation analysis. (b) Decoding movement intent and stimulating response to it. (c) Spike sorting to separate the combined electrode activity. BCIs do not yet support on-device seizure propagation or multi-site movement intent.
3) Spike sorting algorithmic kernel: Spike sorting is an exemplar of the key signal transformations that make up important applications and benefit from engaging multiple brain sites. Most sensor arrays used in existing BCIs have electrodes that measure the combined electrical activity of a cluster of neurons, rather than that of individual neurons. Spike sorting detects all the peaks in the combined electrode activity and separates them into a series of constituent signal spikes from distinct neurons. Figure 1c shows this algorithm. It measures the distance of each signal peak from several spike templates, and the nearest template is chosen as the peak's spike. In some variants [62], the templates are obtained dynamically by clustering the peaks. Spike distances are measured with dynamic time warping (DTW) or earth mover's distance (EMD) [63, 64], which are computationally expensive. Modern spike sorting methods are too slow to be deployed online; distributed spike sorting has even higher overheads.
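For concreteness, the classic DTW recurrence behind these expensive comparisons looks like this (a textbook O(|a|·|b|) formulation with absolute-difference cost, not the paper's implementation):

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between sequences a and b.
    Quadratic time and memory per comparison -- the expense the
    text refers to when spikes are matched against every template."""
    INF = math.inf
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[len(a)][len(b)]
```

Because warping lets one sample align with several, a repeated sample costs nothing: `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0.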
No existing BCI supports the signal processing needed for historical analysis of seizure and movement-intent activity emanating from multiple brain sites, or for distributed spike sorting. Most designs use a single implanted device that senses and processes information from the brain region probed by the implant [16, 17, 23, 27, 28]. Some designs use distributed sensors that lack direct computational support [31, 32] and offload data to an external device. But the lack of on-device distributed processing precludes BCI support for applications that require ms-scale decisions, such as preempting the propagation of seizures or controlling prosthetics.
C. Locality-Sensitive Hashing for Signal Comparison
Fig. 2: The Hull BCI is made up of nodes that are implanted in distinct brain sites. (a) Hull overview. (b) The processor fabric in each of Hull's nodes. The nodes communicate wirelessly with each other and external agents. Each Hull node has sensors, radios, analog/digital conversion, processing fabric, and storage; the processing fabric contains hardware accelerators and configurable switches that can be used to create different pipelines.

All the applications described previously use signal comparison that is expensive. We use locality-sensitive hashing for fast time series matching [65] to meet Hull's ms-scale latency constraints. We face two challenges in using locality-sensitive hashing. The first is the presence of variable-latency computations involving randomization; the other is the need to support multiple comparison measures, since the choice of measure varies across BCI uses [63, 64, 66]. We leverage prior work on two locality-sensitive hashing schemes developed for DTW [67] and EMD [68]. Subsequent sections describe how we modify them to suit the needs of Hull's target applications.
The DTW hash generation process [67] first creates sketches of the signal by taking the dot product of a random vector with sliding windows of the signal. If the dot product is positive, the sketch value for the window is 1; otherwise, it is 0. Next, it counts the occurrences of n-grams formed by n consecutive sketch values. The n-grams and their counts are fed to a randomized weighted min-hash to produce the final hash.
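The steps above can be sketched in pure Python as follows. This is a simplified illustration: the window size, n-gram length, and the integer-weight min-hash below are our illustrative choices, not the exact scheme of [67] or the deterministic variant Hull adopts:

```python
import hashlib

def dtw_style_hash(signal, rand_vec, n=3, seed=0):
    """(1) binary sketch from sliding-window dot products,
    (2) n-gram counts over the sketch,
    (3) min-hash over the weighted n-grams (integer weights here)."""
    w = len(rand_vec)
    sketch = [1 if sum(s * r for s, r in zip(signal[i:i + w], rand_vec)) > 0
              else 0
              for i in range(len(signal) - w + 1)]
    # Count n-grams of consecutive sketch bits.
    grams = {}
    for i in range(len(sketch) - n + 1):
        g = tuple(sketch[i:i + n])
        grams[g] = grams.get(g, 0) + 1
    # Weighted min-hash: replicate each n-gram by its count, hash each
    # replica deterministically, and keep the minimum.
    best = None
    for g, count in grams.items():
        for rep in range(count):
            d = hashlib.blake2b(repr((seed, g, rep)).encode(),
                                digest_size=8).digest()
            v = int.from_bytes(d, "big")
            if best is None or v < best:
                best = v
    return best
```

Windows with similar warped shapes tend to produce identical hashes, so a cheap equality check stands in for an expensive DTW comparison.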
The original EMD hash [68] is obtained by first calculating the dot product of the entire signal with a random vector, and then computing a linear function of the dot product's square root.
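A correspondingly minimal sketch of this EMD hash (the linear coefficients and bucket width are hypothetical placeholders, and we take the magnitude of the dot product to stay real-valued):

```python
import math

def emd_style_hash(signal, rand_vec, scale=1.0, shift=0.0, bucket=1.0):
    """Dot the whole signal with a random vector, take the square root
    of its magnitude, and bucket a linear function of the result."""
    dot = sum(s * r for s, r in zip(signal, rand_vec))
    return math.floor((scale * math.sqrt(abs(dot)) + shift) / bucket)
```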
D. Flexibility as a Goal in Brain-Computer Interface Design
A key takeaway from Sections II-B and II-C is the need for flexible compute support on emerging BCIs. Indeed, this is a topic explored in recent work on the HALO architecture for BCIs [23, 69, 70]. Prior to HALO, power efficiency was achieved by specializing BCIs to offer a specific type of computation for a specific brain region. However, flexibility is an important requirement for future BCIs for several reasons:
First, there is no single best signal processing pipeline for a task; instead, there exist several distinct signal processing pipelines with different tradeoffs [24, 35, 71]. For Hull, this means that the specific hardware accelerators needed to support target computational pipelines (e.g., DTW vs. cross-correlation), and the configuration of key parameters in these accelerators (e.g., window sizes, thresholds), must be customizable by users.
Second, BCIs may be used in different ways [34, 35]. One use is autonomous operation: monitoring neural activity and stimulating neurons when a harmful event occurs. An example is epileptic seizure monitoring with deep brain stimulation to preempt the seizure before its onset [71]. Alternatively, BCIs may translate neural activity into commands for an external device [33] (e.g., the commands to move a prosthetic) or into the letters to be displayed on a screen [5]. It is common for the BCI to also translate the external activity into neural feedback (e.g., to recreate the sense of touch and movement) [72].
Third, beyond clinical uses, the same BCI platform should support algorithmic deployment and data collection for research and exploration in the brain sciences [5, 35, 73]. In these cases, many applications and usage modes may be necessary depending on the desired experiment. Some of these uses may require interactive monitoring, where the BCI and a clinician are part of the decision-making loop [37]. In this case, the BCI operates autonomously until it detects abnormal activity, such as the onset of a seizure. When this happens, it alerts a clinician, who can use additional data from the individual to determine the course of action [37]. A useful BCI system must be customizable to support these different scenarios.
Beyond these scenarios, there are many practical reasons that BCIs should be flexible, such as changes in the individual's neurological condition (which may require modifying treatment protocols), changes in electrode behavior caused by the brain's immune response to the BCI, etc. [5, 35, 71].
Supporting high performance with flexibility under extreme power and latency constraints is challenging. Like HALO, Hull relies on modular hardware accelerators (henceforth referred to as processing elements, or PEs) to form various signal processing pipelines. Unlike HALO and any existing BCI, however, Hull supports the distributed signal processing applications in Section II-B for the first time.
III. THE DESIGN OF THE HULL SYSTEM
Figure 2 shows the Hull BCI and its constituent Hull nodes implanted in different regions of the brain. Hull nodes communicate with one another wirelessly. An ILP scheduler maps applications and interactive queries onto Hull's nodes. Each Hull node contains 16-bit ADCs/DACs, a reconfigurable processor with several PEs, an integrated physical storage layer made of non-volatile memory (NVM), separate radios for Hull's nodes to communicate with one another (i.e., intra-BCI radios) and externally (i.e., external radio), and a power supply.
A. Rewriting Applications for On-Device Processing
We make three changes to existing BCI applications to run them on Hull and meet real-time constraints, rather than relying on external processing. First, we rewrite the signal processing pipelines to use fast hash-based signal comparison in the common case, falling back to more time-consuming approaches (e.g., cross-correlation or DTW) only when more accurate computation is truly necessary. Second, we allow our applications to use memory. Third, we observe that classifiers commonly used in neuroscience are linear (e.g., SVMs), and we therefore compute classifier outputs hierarchically across Hull's nodes in a manner that reduces network communication.
Figure 3a shows our newly created seizure propagation application. While functionally equivalent to the standard version, our application is made up of three phases: seizure detection, hash comparison, and exact signal matching. On every sample at all electrodes, we generate new hashes for each sliding signal window (e.g., one hash for a 120-sample window) and store them in the on-device non-volatile memory of each Hull node (Section III-B). When a Hull node detects a seizure locally (i.e., in the brain region that it probes), it broadcasts the hashes of the signal windows that were classified as a seizure. All other Hull nodes check whether these hashes match any of their recently stored local hashes, and respond when a match is found. A match indicates that a seizure experienced in one brain region likely has a correlated seizure in another region. To ascertain this, the Hull node that initially detected the seizure broadcasts the entire signal window for the signals that resulted in a hash collision. Seizure propagation is then confirmed by running an exact comparison with these signals at the nodes that had the hash collision. Since full signal windows are transmitted and exact similarity matches are performed only when necessary, computation per Hull node and communication among Hull nodes are reduced by two orders of magnitude compared to the baseline application pipelines in Section II-B.
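The three-phase flow can be summarized in Python (names and data structures are ours; `exact_match` stands in for the heavyweight comparison, which runs only on collisions):

```python
def confirm_propagation(seizure_windows, remote_recent_hashes, exact_match):
    """seizure_windows: dict hash -> full signal window at the detecting
    node. remote_recent_hashes: dict node_id -> set of that node's
    recently stored hashes. exact_match(window, node_id) -> bool.
    Returns the node ids where propagation is confirmed."""
    confirmed = []
    for node_id, recent in remote_recent_hashes.items():
        # Phase 2: cheap collision check -- only hashes cross the network.
        collisions = [h for h in seizure_windows if h in recent]
        # Phase 3: full windows are shipped and exactly compared only on
        # a collision.
        if any(exact_match(seizure_windows[h], node_id) for h in collisions):
            confirmed.append(node_id)
    return confirmed
```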
Figure 3c shows that we use a similar approach to enable, for the first time, an online version of spike sorting, even in distributed scenarios. Like seizure propagation, spike sorting benefits from hash-based signal processing and memory. The templates are stored in NVM, and distance computation is replaced with hash collision checks. Because spike sorting is a precursor to many neural processing algorithms [35, 48], this first online realization unlocks the ability to support many spike-sorting-centered applications.
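A sketch of this replacement (the index layout is hypothetical): the baseline scores a peak against every template with an expensive distance, while the online variant is a single lookup against hashes of NVM-resident templates:

```python
def nearest_template(peak, templates, dist):
    """Baseline: evaluate an expensive distance (DTW/EMD in the paper)
    against every template; returns the closest template's id."""
    return min(templates, key=lambda tid: dist(peak, templates[tid]))

def nearest_template_hashed(peak_hash, template_index):
    """Online variant: one hash lookup against a dict mapping template
    hashes to template ids; None means no template collided."""
    return template_index.get(peak_hash)
```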
Fig. 3: High-level overview of the BCI applications supported for online distributed processing in Hull. (a) Seizure propagation. (b) Decoding movement intent and stimulating response to it. (c) Spike sorting.

Finally, movement intent also benefits from computing our linear classifier hierarchically. Figure 3b shows the pipeline that Hull supports. Each Hull node computes a partial classifier output from the signals it receives and transmits that output. One node, the leader, computes the final SVM classification and communicates it to an external prosthetic device. The prosthetic device's movements are broadcast back to Hull; each node then electrically stimulates the sensorimotor cortex of the brain to simulate the "feeling" of having moved a natural limb.
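The hierarchical split works because a linear SVM's score w·x + b decomposes over disjoint feature slices: each node ships one partial scalar, and the leader just sums. A minimal sketch (function names ours):

```python
def node_partial(weight_slice, feature_slice):
    """Computed at each Hull node: the dot product of its local features
    with its slice of the SVM weight vector -- one scalar on the wire."""
    return sum(w * x for w, x in zip(weight_slice, feature_slice))

def leader_classify(partials, bias):
    """Computed at the leader: sum the partials, add the bias, threshold."""
    return 1 if sum(partials) + bias > 0 else -1
```

This matches a centralized w·x + b exactly while transmitting one scalar per node instead of every feature.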
B. Flexible & Energy-Efficient Accelerator Design
Figure 2b shows the processing fabric that we design for each of Hull's nodes. Several accelerators, or PEs, are connected via programmable switches to realize many signal processing pipelines. A low-power microcontroller (MC) supports miscellaneous workloads for which there are no PEs. The PEs are designed for flexibility to support various computational functions, for power/energy- and area-efficient acceleration, and for deterministic latency and energy consumption so that our ILP scheduler can optimally map application tasks onto the accelerators. We use the recently published HALO architecture [23] as a starting point to realize a set of PEs that are useful for single-implant scenarios, and then go beyond it to realize PEs that accelerate our distributed neural applications.
Hull includes PEs for single-site spike detection (NEO–non-linear energy operator; DWT–discrete wavelet transform), compression (LZ4; LZMA), feature extraction (FFT–fast Fourier transform; XCOR–cross-correlation measure; BBF–Butterworth bandpass filtering), thresholding (THR), conditionals (GATE), and classification (SVM–linear support vector machine), as well as the radio for communication with systems outside of Hull. Hull then integrates several new PEs to support distributed computation, fine-grained wireless communication, and access to per-node NVM. Each PE has appropriately sized SRAM buffers to support its processing. The PEs include support for:
1) Hash generation: Hull supports Euclidean, cross-correlation, DTW, and EMD; we support configurability of hash settings for all four measures.
First, we identify that important parameters of the DTW hash (e.g., the size and step of the sliding window) and the n-gram length (Section II-C) can be modified to also support the Euclidean and cross-correlation measures. No new hardware is needed to support additional means of configurability beyond what is already required for the DTW-hash parameters.
5
|
555 |
+
|
556 |
+
Signal
|
557 |
+
Source
|
558 |
+
Device
|
559 |
+
Hash
|
560 |
+
Hash
|
561 |
+
Generation
|
562 |
+
Seizure
|
563 |
+
Broadcast
|
564 |
+
Transmit
|
565 |
+
Sensor
|
566 |
+
Detection
|
567 |
+
Hash
|
568 |
+
Signal
|
569 |
+
Remote
|
570 |
+
Collision
|
571 |
+
Signal
|
572 |
+
Devices
|
573 |
+
Check
|
574 |
+
SimilarityIntent
|
575 |
+
Source
|
576 |
+
Feature
|
577 |
+
Local
|
578 |
+
Global
|
579 |
+
Al
|
580 |
+
Sensor
|
581 |
+
Device
|
582 |
+
Extraction
|
583 |
+
SVM
|
584 |
+
SVM
|
585 |
+
Devices
|
586 |
+
Remote
|
587 |
+
Feature
|
588 |
+
Local
|
589 |
+
Sensor
|
590 |
+
Devices
|
591 |
+
Extraction
|
592 |
+
SVMSorted
|
593 |
+
Spikes
|
594 |
+
Spike
|
595 |
+
Hash
|
596 |
+
Nearest
|
597 |
+
Sensor
|
598 |
+
Generation
|
599 |
+
Template Lookup
|
600 |
+
Detection
|
601 |
+
TemplateSecond, we identify that the DTW and EMD hashes share
|
602 |
+
dot product computation of the signal with a random vector
|
603 |
+
(Section II-C), enabling the reuse of hardware.
Finally, we select a different weighted min-hash algorithm for the last step of the DTW hash than the one originally proposed in prior work [67]. Our approach [74] preserves the hash properties while achieving deterministic latency and power.
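To make the idea concrete, the following is a minimal software sketch of a weighted min-hash over n-gram counts. It is our illustration, not the PE's actual algorithm: each seed runs an "exponential race" in which every n-gram draws a deterministic arrival time -ln(u)/w from a hash-derived uniform u, and the n-gram with the smallest time becomes the sample. The number of hash evaluations is fixed by the input size, which is what gives deterministic latency.

```python
import hashlib
import math

def weighted_minhash(ngram_counts, num_hashes=4):
    """Illustrative weighted min-hash over n-gram counts (a hypothetical
    stand-in for the PE's scheme). Two count vectors collide on a seed
    with probability related to their weighted similarity."""
    signature = []
    for seed in range(num_hashes):
        best_gram, best_time = None, math.inf
        for gram, weight in ngram_counts.items():
            digest = hashlib.sha256(f"{seed}:{gram}".encode()).digest()
            u = (int.from_bytes(digest[:8], "big") + 1) / 2**64  # u in (0, 1]
            t = -math.log(u) / weight      # exponential race, rate = weight
            if t < best_time:
                best_gram, best_time = gram, t
        signature.append(best_gram)
    return signature
```

Because the hash of each (seed, n-gram) pair is deterministic, identical count vectors always produce identical signatures, and the work per signature is a fixed function of the number of n-grams.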
Our hash generation uses three PEs: HCONV, which obtains the dot product of a configurable signal window with a random vector; NGRAM, which computes the n-gram counts in a signal and generates the DTW-based hash; and EMDH, which takes the square root of the dot product and performs the remaining operations to generate the EMD hash.
2) Hash collision check: To determine signal similarity across multiple brain sites, the hashes received over the network by the Hull nodes must be compared with the locally generated hashes from the recent past (e.g., the last 100 ms). Each Hull node uses a CCHECK PE that receives decompressed hashes from the network, stores them in SRAM registers, and sorts them in place. The PE requests the storage controller (SC) to read the hashes to be compared from the NVM. These hashes are compared with those in the registers using binary search.
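The comparison step can be sketched as follows; names and the list-based representation are ours, but the structure mirrors the text: sort the received hashes in place, then locate each locally stored hash with a binary search.

```python
from bisect import bisect_left

def collision_check(received_hashes, local_hashes):
    """Illustrative sketch of the CCHECK step: received hashes are
    sorted in place (as in the SRAM registers), and each local hash
    streamed from NVM via the SC is located with binary search."""
    received_hashes.sort()                       # in-place sort
    collisions = []
    for h in local_hashes:                       # hashes read from NVM
        i = bisect_left(received_hashes, h)
        if i < len(received_hashes) and received_hashes[i] == h:
            collisions.append(h)                 # same hash at both sites
    return collisions
```

Each lookup costs O(log n), so the total work is deterministic in the number of hashes held in the registers.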
3) Signal similarity: CSEL identifies signals for exact signal comparison using DTW, EMD, and Euclidean distance. For DTW, we build a pipelined implementation that uses the standard DTW algorithm [75] with a Sakoe-Chiba band parameter for faster computation [76]. This PE also supports Euclidean distance computation by setting the Sakoe-Chiba band parameter to 1. We use the microcontroller to run EMD [77] for now, although we will build custom PEs in the future.
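A software model of the banded DTW computation (a sketch of the algorithm the PE pipelines, not its RTL) illustrates the band parameter's dual role: cells farther than the band from the diagonal are skipped, and with a band of 1 on equal-length inputs only the diagonal is evaluated, reducing the result to a Euclidean-style sum of squared differences.

```python
import math

def dtw_distance(a, b, band=1):
    """Textbook DTW with a Sakoe-Chiba band. band=1 restricts the
    warping path to the diagonal; larger bands allow more warping."""
    n, m = len(a), len(b)
    INF = math.inf
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        lo = max(1, i - band + 1)
        hi = min(m, i + band - 1)
        for j in range(lo, hi + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Widening the band lets shifted signals align at lower cost, at the price of evaluating more cells per row.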
4) Intra-BCI network compression and packing: The intra-Hull network transmits hashes and signals. We compress the hashes but transmit the raw signals uncompressed. Compression makes data more vulnerable to bit errors. Because the hashes are used only for approximate matching, bit errors are not as critical to the quality of signal correlation. The raw signals, however, are used for exact matching. Measures like DTW are naturally resilient to single-bit errors in the signal, but their quality worsens rapidly when compressed signals are corrupted.

The compression PEs (i.e., LZ4/LZMA) built for HALO do not meet Hull's power and latency constraints for hashes. Instead, we build PEs customized to our particular data and communication needs. The HFREQ PE collects the hash values that a Hull node must transmit and sorts them by frequency of occurrence. The HCOMP PE encodes the hashes first with dictionary coding, then applies run-length encoding to the dictionary indexes [78], and finally applies Elias-γ coding [79] to the run-length counts. HCOMP's compression ratio is only 10% lower than that of LZ4/LZMA, but it consumes ≈7× less power.

Compressed data is sent to the NPACK PE, which adds checksums before transmission. The UNPACK and DCOMP PEs decode and decompress packets on the receiving side.
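The three HCOMP stages can be modeled in a few lines. This is our software sketch, not the PE's exact bitstream format: (1) dictionary-code each hash as an index into the values sorted by descending frequency, (2) run-length encode the index stream, and (3) Elias-γ code the run counts.

```python
def hcomp_encode(hashes):
    """Illustrative model of the HCOMP pipeline. Returns the dictionary
    (frequency-sorted hash values), the run indexes, and the
    Elias-gamma-coded run counts as a bit string."""
    # 1) Dictionary: the most frequent hash gets the smallest index.
    by_freq = sorted(set(hashes), key=lambda h: (-hashes.count(h), h))
    index = {h: i for i, h in enumerate(by_freq)}
    stream = [index[h] for h in hashes]

    # 2) Run-length encode the index stream into (index, count) runs.
    runs = []
    for idx in stream:
        if runs and runs[-1][0] == idx:
            runs[-1][1] += 1
        else:
            runs.append([idx, 1])

    # 3) Elias-gamma code each run count (counts are >= 1):
    #    (len(bin(n)) - 1) zeros followed by the binary form of n.
    def elias_gamma(n):
        binary = bin(n)[2:]
        return "0" * (len(binary) - 1) + binary

    bits = "".join(elias_gamma(count) for _, count in runs)
    return by_freq, [idx for idx, _ in runs], bits
```

Frequency-sorting the dictionary (the HFREQ step) concentrates small indexes on common hashes, which makes the subsequent runs longer and the γ codes shorter.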
5) Storage control: An SC PE manages NVM access. The SC uses SRAM to buffer data before writing it to the NVM in 4 KB pages. The SRAM also permits data buffering during NVM erase operations, when writes cannot be accepted. Finally, the SC (and the SRAM) permits data reorganization to accelerate future reads from the NVM (Section III-C). The SC uses registers to store metadata about the data written by the ADC and hash PEs (e.g., the last written page and the size of the written data). This accelerates, for example, the search for recent common signal data.
6) Microcontroller: The MC runs at a low frequency (20 MHz) and integrates 8 KB of memory. It configures individual PEs into target pipelines (Section IV) and receives commands to stimulate neurons, either to stop a seizure or to convey neural feedback from a prosthetic. The MC can also be used for general-purpose computation not supported by any PE, such as new algorithms, or for infrequently run system operations such as clock synchronization (Section III-F).
7) Well-defined throughput: Each PE operates in its own clock domain, like prior work [23], but also supports multiple frequencies. This enables each PE to lower its operating frequency (and reduce power) to the minimum necessary to sustain the PE's target data rate for varying input electrode counts. This feature also ensures fixed latency even when PEs process a variable number of input electrode signals. We design each PE to support a maximum frequency f_max^PE that is high enough to sustain the maximum required data processing rate. A configurable register sets the frequency to f_max^PE/k, where k is user-programmable. The clock frequency is varied using a simple state machine that uses a counter to pass through only every k-th clock pulse. The power consumed by this counter is in the µW range [80], much lower than the per-PE power. Overall, the dynamic power of the PEs scales linearly with frequency. This also makes power and latency deterministic and helps optimal scheduling (Section III-E).
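A behavioral sketch of the divider state machine (our illustration, not the RTL) makes the mechanism clear: a counter forwards only every k-th input pulse, so a PE driven by the output runs at f_max^PE/k.

```python
def divided_clock(pulses, k):
    """Pass through every k-th clock pulse from `pulses`, modeling the
    counter-based frequency divider in front of each PE."""
    count = 0
    passed = []
    for pulse in pulses:
        count += 1
        if count == k:          # forward this pulse, reset the counter
            passed.append(pulse)
            count = 0
    return passed
```

For k = 1 the input clock passes through unchanged; larger k values scale both the PE's frequency and, proportionally, its dynamic power.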
C. On-device non-volatile memory

Each Hull node integrates 128 GB of on-device NVM to store raw neural signals, hashes of these signals, and pre-loaded data needed by applications (e.g., templates for spike sorting). We divide the NVM into four partitions: one for each of these three classes of data, and another for use by the MC. The sizes of the partitions are configurable through registers. When a partition is full, the oldest data in the partition is overwritten.

We optimize the layout of signal and hash data in the NVM for performance and power. Hull's ADCs (and hash generation PEs) process electrode samples sequentially. If the data were stored in this order, extracting a contiguous segment of one signal would require long-latency reads from multiple discontinuous NVM locations. Instead, we store contiguous chunks (where the chunk size is user-specified) of each signal. Retrieving the signal (or hashes) at a particular electrode and time-step then requires only offset calculations. The SC enables this reorganization as it buffers data in 4 KB SRAM pages before NVM writes.
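The offset calculation for such a chunked layout can be sketched as below. The layout and parameter names are ours for illustration: samples are stored as per-electrode chunks (chunk 0 of electrode 0, chunk 0 of electrode 1, ..., then chunk 1 of electrode 0, and so on), so locating any sample is pure arithmetic.

```python
def nvm_offset(electrode, t, num_electrodes, chunk, sample_bytes=2):
    """Byte offset of sample `t` of `electrode` in a chunked layout
    with `chunk` samples per electrode per round."""
    chunk_row = t // chunk                 # which round of chunks
    within = t % chunk                     # sample index inside the chunk
    chunks_before = chunk_row * num_electrodes + electrode
    return (chunks_before * chunk + within) * sample_bytes
```

A contiguous segment of one signal then spans whole chunks, so it is read with a few sequential page accesses instead of one scattered read per sample.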
D. Networking

We use separate radios for intra-Hull and external device communication, as the required distances and communication needs differ. For intra-Hull communication, we use a custom network protocol with a fixed schedule across the nodes. The schedule is decided by an ILP based on application goals (Section III-E). To coordinate intra-Hull communication, we use TDMA for its simplicity and deterministic behavior.

Fig. 4: Seizure detection and propagation on Hull. The colors of the PEs are matched with the high-level tasks from Figure 3a.

Each network packet has an 84-bit header and a variable data size, up to a maximum packet size of 256 bytes. The header and the data have 32-bit CRC32 [81] checksums. On a checksum mismatch, the receiver simply discards the packet and does not participate in the pipeline for processing the current sample. However, we find that while it is best to discard erroneous packets carrying hashes, erroneous packets carrying raw signal data can still be used without adversely affecting the overall application, because of the resiliency of measures like DTW.
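The framing described above can be modeled as follows. The field layout is ours (the real header is 84 bits and carries more fields); what the sketch preserves is the structure: separate CRC32 checksums over the header and the payload, with the receiver reporting validity so that the policy (drop for hashes, tolerate for raw signals) can be applied per data type.

```python
import struct
import zlib

def pack(payload: bytes) -> bytes:
    """Illustrative NPACK-style framing: length field, header CRC32,
    payload CRC32, then the payload (max 256 bytes)."""
    assert len(payload) <= 256
    header = struct.pack(">H", len(payload))
    header_crc = zlib.crc32(header)
    payload_crc = zlib.crc32(payload)
    return header + struct.pack(">II", header_crc, payload_crc) + payload

def unpack(packet: bytes):
    """Returns (payload, ok). ok is False on any checksum mismatch; the
    receiver would then drop hash packets but may still use raw-signal
    packets, per the policy described in the text."""
    header, rest = packet[:2], packet[2:]
    header_crc, payload_crc = struct.unpack(">II", rest[:8])
    payload = rest[8:]
    ok = (zlib.crc32(header) == header_crc
          and zlib.crc32(payload) == payload_crc
          and struct.unpack(">H", header)[0] == len(payload))
    return payload, ok
```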
E. Task Scheduling on Accelerators, Storage, & Networking

As input to our ILP scheduler, users provide a description of the desired computation, either as a dataflow pipeline using the functions of the PEs or as an interactive query from which the dataflow can be extracted (Section IV). They also provide the priorities of the tasks in the application (e.g., seizure detection versus signal comparison) and constraints like the overall response latency. A higher-priority task ensures that the system processes more neural signals for this task relative to the others when not all signals can be processed for all tasks due to power or latency constraints. The ILP maps each function to the corresponding PE in one or more of Hull's nodes.

The ILP considers each possible mapping of application tasks (e.g., seizure detection, hash comparison) to PEs as a flow, and maximizes the weighted sum of the number of channels processed in each flow. It uses three major constraints:

Latency: End-to-end latencies through the PEs and communication links must be below a specified limit.

Power: The power consumed by all the PEs and links at all times must be below a specified limit.

Communication: Only one flow is allowed to use the radio at any time, because of TDMA.

Our ILP setup is simple because of the behavior of the PEs. With variable-throughput processing, the latency of processing any number of input signals is the same. The dynamic power consumed by a PE scales predictably and linearly with the input size (since frequency scales linearly). Finally, the system allows two flows to share the same PE. When this occurs and electrode signals are allocated to both flows, the signals from each flow are interleaved so that they all run at the same frequency, completing within the same time as if they were run independently. The hardware tags the signals from each flow so that they are routed to the correct destinations.
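A toy version of the objective and constraints can be written as a brute-force search; this is our sketch of the optimization shape, not the actual ILP formulation, and the per-flow cost model (fixed per-flow latency, linear per-channel power) follows the PE properties described above. A real instance would hand the same formulation to an ILP solver.

```python
from itertools import product

def schedule(flows, power_budget, latency_limit, max_channels=96):
    """Pick per-flow channel counts maximizing the priority-weighted
    number of channels processed, subject to a power budget and to
    every active flow meeting the latency limit."""
    names = list(flows)
    best, best_score = None, -1.0
    for counts in product(range(max_channels + 1), repeat=len(names)):
        power = sum(c * flows[n]["power_per_channel"]
                    for n, c in zip(names, counts))
        latency_ok = all(c == 0 or flows[n]["latency"] <= latency_limit
                         for n, c in zip(names, counts))
        if power <= power_budget and latency_ok:
            score = sum(c * flows[n]["weight"]
                        for n, c in zip(names, counts))
            if score > best_score:
                best_score, best = score, dict(zip(names, counts))
    return best
```

With a flow whose latency exceeds the limit, the search assigns it zero channels and spends the whole power budget on the feasible flow, mirroring how the ILP trades channels across tasks.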
F. Clock Synchronization

Hull's distributed processing requires the clocks in each BCI node to be synchronized to within a few µs of precision. Hull's clocks are based on pausable clock generators and clock control units [82, 83] that suffer only picoseconds of clock uncertainty, a scale much smaller than our µs target. Hull operates at the temperature of the human body and does not experience clock drift due to temperature variance. Nevertheless, Hull runs clock synchronization once a day using SNTP [84].

One of the Hull nodes is set up to act as the SNTP server, to which all other nodes send messages to synchronize their time. The clients send their previously synchronized clock times and current times, while the server sends its corresponding times. The difference between these values is used to adjust the clocks. This process repeats a few times until all the clocks are synchronized to within the desired precision. During clock synchronization, the intra-Hull network is unavailable for application use. However, operations that do not require the network (e.g., seizure detection) or NVM access can continue.
IV. DEPLOYING HULL FOR BCI APPLICATIONS

Hull supports autonomous epileptic seizure propagation detection, movement intent detection for closed-loop prostheses, online spike sorting, and interactive querying.

Autonomous seizure propagation and detection: Figure 4 shows Hull's implementation of autonomous seizure detection and propagation. The choice of the PE functions is based on prior work [50]. This implementation uses XCOR, BBF, and FFT to extract features from the ADC measurements, and uses an SVM to detect a seizure. When a seizure is detected, the nodes exchange hashes for comparison. To confirm that a seizure is indeed likely being propagated, Hull uses the DTW distance of the signals across nodes, and electrically stimulates the brain in response to predicted propagation within 10 ms.

The dataflow in Figure 4 is fed to the ILP to schedule this application on Hull. The ILP generates an optimal mapping of the functions and generates configuration code. This code is run by each Hull node's microcontroller to configure the PEs.

Online spike sorting: Figure 5 shows the mapping of online spike sorting to Hull. The template-matching version pre-loads the NVM in the nodes with templates and their hashes.

Fig. 5: Spike sorting on Hull.
[Figure residue removed: PE-level pipelines (ADC, NEO, HCONV, EMDH, NGRAM, GATE, HFREQ, HCOMP, NPACK, UNPACK, DCOMP, CCHECK, CSEL, DTW, THR, SC, MC) across source and remote devices.]

Movement intent detection and feedback: Figure 6 shows how Hull implements detection of movement intent and feedback, augmented from prior work [55]. Each node extracts features from its local signals and computes a partial SVM output. Then, one node receives the partial SVM outputs and computes the commands for the prosthetic. The movements of the prosthetic are transmitted wirelessly, and each node runs a stimulation algorithm for its region to provide neural feedback.

Fig. 6: Query pipeline for movement intent application.
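The local/global SVM split works because a linear SVM's decision value decomposes over feature subsets. A minimal sketch of the idea (function names are ours):

```python
def local_partial(weights, features):
    """Each node's share of a linear SVM decision: the partial dot
    product over the features of its own electrodes."""
    return sum(w * x for w, x in zip(weights, features))

def global_decision(partials, bias=0.0):
    """The aggregating node sums the partial outputs and applies the
    bias; the sign gives the intent decision."""
    score = sum(partials) + bias
    return score, score > 0
```

Only one small scalar per node crosses the network instead of the raw features, which is why this task needs so little communication (Section VI-B).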
Interactive querying: Interactive queries are used to read multi-site data or to modify the system configuration. The general format of an interactive query follows a select-project structure, akin to SQL queries [85]:

from [set of devices] select data[electrodes][time range] where condition

The query specifies the select criteria, i.e., the range of time from which data is requested, along with the nodes from which the data should be returned, and the project criteria, i.e., the conditions that the selected data must satisfy. Similar to select-project-based SQL queries, Hull's interactive query interface can support a wide range of complex queries. The project conditions are evaluated on the PEs when possible, and on the microcontroller otherwise. The following example query fetches ±100 ms of data from all devices around the time they detected a seizure in the last 5 s. This example requires seizure detection over 120 ms windows of the raw signal data.

from * select data[:][t-100:t+100] where seizure_detect(data[t-120:t]) and t >= -5000 and t <= 0
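The semantics of this query can be sketched in software. The evaluation loop below is our hypothetical host-side model, not Hull's execution plan: for each device's buffered signal, slide t over the query's time range, apply the seizure_detect condition to the preceding 120 ms window, and return the surrounding ±100 ms slice on a match.

```python
def run_query(buffers, seizure_detect, now=0):
    """Evaluate the example query over per-device sample buffers
    (one sample per ms for simplicity). Returns, per device, the
    +/-100 ms slice around the first detection in the last 5 s."""
    results = {}
    for device, signal in buffers.items():
        for t in range(now - 5000, now + 1):
            lo, hi = t - 120, t
            if lo < 0 or hi > len(signal):
                continue
            if seizure_detect(signal[lo:hi]):
                a, b = max(0, t - 100), min(len(signal), t + 100)
                results[device] = signal[a:b]
                break                      # first detection wins
    return results
```

In Hull itself, the seizure_detect condition would run on the detection PEs and the data slices would come from the chunked NVM layout via offset calculations.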
More complex examples can supply template signals and request data from nodes that recorded signals similar to the templates. Queries are compiled separately, and the extracted dataflow is sent to the ILP, which finalizes the query execution schedule.

Users can also set up the pipelines of specific tasks; e.g., a clinician may modify seizure_detect to use only FFT for feature extraction, instead of FFT, BBF, and XCOR as in Figure 4. Such a configuration does not need the ILP.

Interactive queries use a power-hungry radio, precluding the simultaneous execution of queries and autonomous tasks in some cases. Some of these tasks are either slowed down or temporarily paused; e.g., when a clinician responds to a seizure alert and requests recent signal data, seizure propagation has to be paused to send the data to the clinician.
V. METHODOLOGY

Processing fabric: Hull's PEs are designed with a commercial 28 nm fully-depleted silicon-on-insulator (FD-SOI) CMOS process and synthesized using the Cadence® suite of tools. We use standard cell libraries from STMicroelectronics and foundry-supplied memory macros that are interpolated to 40 °C, which is close to human body temperature. We design each PE for its highest frequency, and scale the power when using it at lower frequencies. We run multi-corner, physically-aware synthesis, and use latency and power measurements from the worst variation corner. Table I shows these values. We taped out early designs of the PEs at 12 nm to confirm these values.
TABLE I: Latency and Power of the PEs.

Processing   Max Freq   Power (µW)            Latency
Elements     (MHz)      Leakage    Dyn/Elec   (ms)
FFT          15.7       141.97     9.02       4.00
XCOR         85         377.00     44.11      4.00
BBF          6          66.00      0.35       4.00
SVM          3          99.00      0.53       1.67
THR          16         2.00       0.11       0.06
NEO          3          12.00      0.03       4.00
HCONV        3          89.89      0.80       1.50
NGRAM        0.2        15.69      0.08       1.50
EMDH         0.03       10.47      0.00       0.04
GATE         5          67.00      0.63       0.00
HFREQ        2.88       61.98      0.52       4.00
HCOMP        2.88       77.00      0.65       4.00
NPACK        3          3.53       5.49       0.008
UNPACK       3          3.53       5.49       0.008
DCOMP        16.393     7.20       0.14       0.50
CCHECK       16.393     7.20       0.14       0.50
CSEL         0.1        4.00       6.00       0.04
SC           3.2        95.30      1.64       0.03-4
DTW          50         167.93     26.94      0.003
We assume that each node uses a standard 96-electrode array [86] to sense neural activity, and a configurable 16-bit ADC [87] generating 30 K samples per second per electrode. The ADC dissipates 2.88 mW when sampling all 96 electrodes. Each node has a DAC to support electrical stimulation of brain tissue [88], a process that consumes ≈0.6 mW of power.

Radio parameters. We use a radio that can transmit/receive up to 10 m to external devices at 46 Mbps and a 250 MHz carrier frequency, and which consumes 9.2 mW. For intra-Hull communication, we consider a state-of-the-art radio designed for safe implantation in the brain [89]. While this radio was originally designed for asymmetric transmission/reception, we modify it for symmetric communication. Our intra-Hull radio supports a transmission distance of 20 cm (i.e., greater than the 90th-percentile head breadth [90]). To estimate the power and data rates, we use path-loss models [91] with a path-loss parameter of 3.5 for transmission through the brain, skull, and skin, consistent with prior studies [92, 93]. We calculate that our radio can transmit/receive 7 Mbps at 4.12 GHz while consuming 1.721 mW of power.
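The estimate relies on the log-distance path-loss model. In the sketch below, only the exponent n = 3.5 comes from the text; the reference distance and reference loss are illustrative placeholders:

```python
import math

def path_loss_db(d, d0=0.01, pl0_db=30.0, n=3.5):
    """Log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0),
    with n = 3.5 for propagation through brain, skull, and skin."""
    return pl0_db + 10.0 * n * math.log10(d / d0)
```

With n = 3.5, doubling the distance adds 10 * 3.5 * log10(2) ≈ 10.5 dB of loss, which is why the intra-Hull link budget is sized for a 20 cm transmission distance.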
Non-volatile memory. We use NVMs with 4 KB page sizes and 1 MB block sizes. The NVMs can read 8 bytes, write a page, or erase a block in one operation. We use SLC NAND parameters such as the erase time (1.5 ms), program time (350 µs), and voltage (2.7 V) from industrial technical reports [94] with NVSim [95]. We choose a low-operating-power transistor type in NVSim and use a temperature of 40 °C. NVSim estimates a leakage power of 0.252 mW, and dynamic energies of 164.4 nJ and 261.143 nJ per page for reads and writes, respectively. We also use these parameters to size our SC buffers to 24 KB.

Electrophysiological data. We use publicly available electrophysiological data for our evaluation [96, 97]. For seizure detection and propagation, we use data from the Mayo
Fig. 7: Experimental quantification of Hull's benefits. (a) Maximum aggregate throughput of Hull versus alternative BCI architectures. (b) Maximum aggregate throughput of communication-dependent tasks in Hull. (c) Maximum throughput of tasks without inter-node communication, using re-designed PEs.
Clinic [97] of a patient (label "I001 P013") with 76 electrodes implanted in the parietal/occipital lobes. This dataset was recorded for 4 days at 5 kHz, and is annotated with hundreds of seizure instances. We upscaled the sampling frequency to 30 kHz, and split the dataset to emulate multiple BCI devices. We use consecutive, overlapping 4 ms windows (120 samples) from the electrodes to detect seizures [98]. For propagation, we check similarity with a seizure-positive signal from the last 100 ms of electrode data in all nodes [98]. For hash pipelines, we use one 8-bit hash per 120-sample window.

For spike sorting, we use the SpikeForest dataset [96, 99]. This dataset contains recordings collected from the CA1 region of a rat hippocampus using tetrode electrodes at a 30 kHz sampling frequency. The dataset contains spikes from 10 neurons, with 65,000 spikes that were manually sorted.
Alternative system architectures. Table II shows the systems that we compare Hull against. Hull No-Hash uses the same Hull architecture but does not use hashes. The power saved by removing the hash processing PEs is allocated optimally to the remaining tasks. Hull No-Hash does not require re-writing the applications for hash-based processing. Central uses one processing node with the same processor as Hull, and multiple sensors that are connected using wires. Finally, Central No-Hash is a centralized design without hash processing, like most existing BCIs [27, 31, 100]. We do not consider wireless centralized designs, as they need a radio and have less compute available than the wired ones. We also do not consider designs without memory, as they do not support seizure propagation. We map our applications onto all systems using the ILP, ensuring that each node consumes < 15 mW.
TABLE II: Alternative BCI designs.

Design            Architecture   Comparison     Communication
Hull (Proposed)   Distributed    Hash, Signal   Wireless
Hull No-Hash      Distributed    Signal         Wireless
Central           Centralized    Hash, Signal   Wired
Central No-Hash   Centralized    Signal         Wired
VI. EVALUATION

A. Comparing BCI Architectures

Figure 7a shows the maximum aggregate throughput of the systems in Table II. A task's maximum aggregate throughput is achieved when it is the only task running in the system, summed over all nodes. Central No-Hash has the worst throughput for all tasks. This design suffers from having just one processor and from using expensive signal processing. Central increases throughput by an order of magnitude for tasks that benefit from hashing (spike sorting and signal similarity). However, the single processor remains the bottleneck for all tasks.

Hull No-Hash has distributed processors and enjoys higher aggregate seizure detection and movement intent throughput. However, it performs poorly for tasks that need signal comparison (signal similarity, spike sorting). For these tasks, Hull No-Hash has lower throughput than Central because it does not use hashes. Hull uses distributed hash-based processing and has the highest aggregate throughput for all tasks. Compared to Central No-Hash, which is closest to state-of-the-art BCIs, Hull's data rates are an order of magnitude higher for seizure detection and movement intent detection, and nearly three orders of magnitude higher for signal similarity and spike sorting.
B. Throughput for Communication-Dependent Tasks

Figure 7b shows the maximum aggregate throughput of the communication-dependent tasks (hash comparison, DTW comparison, and movement intent) for various node counts. DTW Comparison uses all-to-all comparison of raw signals. It has a lower throughput than the other tasks because only 16 out of 96 electrode signals can be transmitted for all-to-all comparison. The reason is that new electrode samples are obtained at 47 Mbps from the ADC, but the intra-Hull radio can only transmit about 7 Mbps. Increasing the number of nodes decreases the throughput further because of the communication delays. Because Hull uses a TDMA network, where slots for network access are serialized, DTW Comparison has the worst throughput and scales poorly with node count.

An alternative, DTW One-All, which uses only one-to-all DTW comparison, scales better since its communication latency does not increase with the number of nodes. However, a one-to-all comparison is insufficient for general BCI applications.

Hash Comparison uses all-to-all hash communication to check for collisions. Its throughput increases up to 470 Mbps at 10 devices, after which it begins to decrease. When the number of nodes is small, few TDMA slots are required to exchange all hashes, enabling a linear increase in throughput as a function of node count. But as node counts keep increasing, it takes longer to communicate all hashes, and the overall throughput drops.

Finally, Movement Intent uses all-to-one communication of the partial SVM products. Because the product is small, its throughput scales linearly with the number of nodes (note that the Y-axis in Figure 7b is logarithmic). It also has the highest aggregate throughput because it needs the least communication.

Figure 7b shows that hashing and distributing the SVM computation enable Hull to scale to many regions and to higher data rates than what has previously been possible.
C. Throughput for Non-Communicating Tasks

We design our PEs for a maximum sensor rate of 47 Mbps per node (Section V). However, we study a potential PE re-design to support higher processing rates for tasks that do not need communication. Figure 7c shows the throughput of Seizure Detection and Spike Sorting for varying per-node signal sensing rates. Task throughput increases linearly, up to 105 Mbps for spike sorting and 70 Mbps for seizure detection. Beyond these sensing rates, the higher frequency of the PEs and ADCs exceeds the device power limit. Nonetheless, these values are nearly twice those supported by existing single-implant BCIs and show the robustness of our methodology.
|
1224 |
+
D. Application Level Throughput
The throughput achieved at the application level depends on the number of implanted nodes and, when there are multiple tasks, on the priorities assigned to the application tasks. Recall that the ILP schedules applications to optimize a priority-weighted sum of the signals processed by each task. For the seizure propagation application, Figure 8a shows the weighted aggregate throughput as a function of the number of devices for various weight choices (in the format seizure detection:hash comparison:DTW comparison). With equal priority for seizure detection, DTW processing, and hash comparison, the maximum throughput is achieved at 11 nodes. Other weight choices have different optimal node counts. Note that there is no comparable system for on-device seizure propagation; Hull is the first design with this feature.
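The priority-weighted objective can be made concrete with a small helper that, given per-task throughputs and a weight vector in the seizure detection:hash comparison:DTW comparison format above, computes the weighted aggregate. The per-task throughput numbers below are illustrative placeholders, not Hull's measured values:

```python
def weighted_throughput(task_mbps, weights):
    """Priority-weighted aggregate throughput (Mbps).

    task_mbps and weights are aligned per task, e.g.
    (seizure detection, hash comparison, DTW comparison).
    Weights are normalized so the metric stays in Mbps.
    """
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, task_mbps)) / total

# Placeholder per-task throughputs (Mbps) for one candidate schedule.
tasks = (30.0, 45.0, 12.0)

for label, w in {"1:1:1": (1, 1, 1), "3:1:1": (3, 1, 1), "1:3:1": (1, 3, 1)}.items():
    print(label, round(weighted_throughput(tasks, w), 2))
```

Because the weights re-scale how much each task's processed signals count, different weight vectors can make a different node count optimal, as Figure 8a shows.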
Movement intent has only one task, and its throughput (in number of intents detected per second) is shown in Figure 8b. This metric accounts only for movement intent detection, not for the variable response latency of the prosthetic device. Hull spike sorts up to 12,250 spikes per second per node with 82% accuracy, comparing well to the state of the art [96].
Fig. 8: Application level metrics on Hull. (a) Weighted throughput of seizure propagation tasks. (b) Movement intents per second (without device movement time).
E. Interactive Queries
We consider three types of common queries, applied to data ranging from the past 100 ms (≈7 MB over all nodes) to the past 1 s (≈60 MB). They are: Q1, which returns all signals that were detected as a seizure; Q2, which returns all signals that matched a template using a hash; and Q3, which returns all data in the timeframe. For Q1 and Q2, we vary the fraction of data that tests positive for the query condition.
Figure 9 shows Hull’s throughput with 11 nodes for these queries. Hull supports up to 10 queries per second (QPS) for Q1 and Q2 over the last 100 ms of data (the common case). If Q2 is run with DTW instead of hash-based search, the QPS drops only slightly, to 8, but the power consumption rises from 3.57 mW for hash-based matching to the entire 15 mW budget for DTW-based matching. Thus, DTW-based matching is unsuitable for interactive querying in response to a seizure. Q3 on this data takes 1.21 s, yielding a throughput of ≈0.8 QPS. In interactive querying, the external radio, which consumes high power, is the bottleneck.
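The hash-vs-DTW tradeoff above can be restated as energy per query, a small worked calculation using only the power and QPS figures quoted in the text:

```python
def energy_per_query_mj(power_mw, qps):
    """Energy per query (mJ) = average power (mW) / query rate (1/s)."""
    return power_mw / qps

# From the text: hash-based Q2 runs at 10 QPS using 3.57 mW, while
# DTW-based Q2 runs at 8 QPS using the full 15 mW budget.
hash_mj = energy_per_query_mj(3.57, 10)  # 0.357 mJ per query
dtw_mj = energy_per_query_mj(15.0, 8)    # 1.875 mJ per query
print(round(dtw_mj / hash_mj, 1))        # DTW costs ~5.3x more energy per query
```

So although DTW loses only 2 QPS, each answered query costs roughly five times more energy, which is why hash-based matching is preferred for interactive use.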
As the data to be searched increases, the query latency increases linearly due to the radio latency. However, Hull can still process 1 QPS for Q1 and Q2 over the past 1 s of data (≈60 MB), making it suitable for real-time use.
Fig. 9: Interactive query throughput on Hull with 11 nodes.
F. Hashing
Accuracy: We vary the parameters of all our hash functions and report the best configuration for seizure propagation and spike sorting. Figure 10 shows the accuracy (TP: true positive, TN: true negative, FP: false positive, FN: false negative) of the four hash functions. The XCOR and EMD hashes achieve ≈85% accuracy, while the Euclidean and DTW hashes exceed 90%. The high true positive rate of our DTW hash is particularly beneficial for seizure propagation (note that false positives are removed using exact DTW).
Fig. 10: Hash accuracy.
Fig. 11: Hash flexibility.
Fig. 12: Bit error rates.
Fig. 13: Time to solve the ILP.
Parameter selection: Figure 11 shows the best parameters of our hash implementation (window size and n-gram size, Section II-C) for approximating each of Euclidean, cross correlation, and DTW similarity. We also show parameters (in lighter colors in the figure) that achieve at least 90% of the true positive rate of the corresponding best configuration. This flexibility enables reusing a single fast hardware accelerator for different measures.
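A minimal sketch of the windowed n-gram hashing idea, in the spirit of SSH [67] rather than Hull's actual accelerator: each sliding window is quantized to one bit via a fixed random projection, consecutive bits form n-gram tokens, and the token multiset is counted into hash buckets. Window and n-gram size are the two parameters swept in Figure 11; all sizes below are illustrative:

```python
import random

def signature(signal, window, ngram, dim=64, seed=7):
    """Hash a 1-D signal into a sparse `dim`-bucket signature.

    Each length-`window` slice becomes one bit (the sign of a fixed
    random projection); consecutive `ngram` bits form tokens that are
    counted into hash buckets.
    """
    rng = random.Random(seed)
    proj = [rng.gauss(0, 1) for _ in range(window)]
    bits = [
        1 if sum(p * x for p, x in zip(proj, signal[i:i + window])) >= 0 else 0
        for i in range(len(signal) - window + 1)
    ]
    buckets = [0] * dim
    for i in range(len(bits) - ngram + 1):
        buckets[hash(tuple(bits[i:i + ngram])) % dim] += 1
    return buckets

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

base = [float(i % 16) for i in range(256)]     # periodic toy "signal"
shifted = base[3:] + base[:3]                  # time-shifted copy
rng = random.Random(1)
noise = [rng.gauss(0, 8) for _ in range(256)]  # unrelated signal

s = lambda x: signature(x, window=16, ngram=3)
# A time-shifted copy hashes close to the original; noise does not.
print(cosine(s(base), s(shifted)) > cosine(s(base), s(noise)))
```

Because similarity is compared on signatures rather than raw samples, changing only the window and n-gram sizes re-targets the same hashing hardware at a different similarity measure.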
G. Impact of Network Bit Error Rate
The intra-Hull network protocol drops packets carrying hashes when there is a checksum error, but allows signal packets to flow into the PEs, since signal similarity measures are naturally resilient to a few errors. We simulate various bit-error rates (BERs) by injecting uniformly random bit flips into the packet header and data. Figure 12 shows the fraction of hash and signal packets with an error at different BERs, and the fraction of erroneous signal packets that flipped the similarity measure (DTW). For reference, the BER is below 10⁻⁴ for the radio we use [89].
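The packet-error fractions follow directly from the error model: with independent bit flips at rate p, a packet of b bits is corrupted with probability 1 − (1 − p)^b, so the longer signal packets are hit more often than the short hash packets. A small Monte Carlo check of that model (the packet sizes here are illustrative, not Hull's):

```python
import random

def packet_error_fraction(ber, packet_bits, n_packets=20000, seed=0):
    """Fraction of packets with >=1 flipped bit under i.i.d. bit errors.

    Each packet is corrupted with probability 1 - (1 - ber)**packet_bits;
    we sample that outcome directly per packet.
    """
    rng = random.Random(seed)
    p_err = 1.0 - (1.0 - ber) ** packet_bits
    return sum(rng.random() < p_err for _ in range(n_packets)) / n_packets

HASH_BITS = 256      # illustrative short hash packet
SIGNAL_BITS = 8192   # illustrative long signal packet

for ber in (1e-6, 1e-5, 1e-4):
    print(ber,
          round(packet_error_fraction(ber, HASH_BITS), 3),
          round(packet_error_fraction(ber, SIGNAL_BITS), 3))
```

This reproduces the qualitative trend in Figure 12: at any given BER, signal packets error far more often than hash packets, purely because of their length.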
Figure 12 shows that both signals and hashes suffer errors as the BER increases, but signals are more susceptible because their packets are longer. Even though several signal packets suffer errors, they have no impact on the final signal similarity outcome.
H. ILP Performance
The complexity of the ILP grows with the number of pipeline stages in the application and the number of Hull nodes. When all nodes are identical and share the same power/energy constraints, the schedule of one node can be replicated (with a constant offset) on all other nodes and remain optimal; we call this method the Reduced ILP. It does not apply, however, when the nodes differ or have different constraints. Figure 13 shows the time taken to solve the full ILP and the reduced version for varying numbers of devices for the seizure propagation application. We measure this time using GLPK, an open-source ILP solver, with default settings on an Intel Xeon E5-2620 v3 machine with 93 GB of RAM.
As expected, the solver time for the standard ILP increases exponentially with the number of devices, taking ≈2 hours with 11 devices. For >12 devices, the ILP did not finish within 24 hours and was terminated. The Reduced ILP, however, can be solved in under 10 ms for any number of devices.
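The replication step behind the Reduced ILP can be sketched in a few lines: solve the one-node schedule once, then stamp it onto every identical node with a constant phase offset. The schedule, offset, and period below are illustrative, and the real system obtains the one-node schedule from GLPK:

```python
def replicate_schedule(one_node_schedule, n_nodes, offset_ms, period_ms):
    """Replicate one node's schedule across identical nodes.

    one_node_schedule: list of (task, start_ms) pairs for node 0.
    Node i runs the same tasks shifted by i * offset_ms, wrapped into
    the scheduling period, so nodes stay staggered while each keeps
    the optimal per-node plan.
    """
    return {
        node: [(task, (start + node * offset_ms) % period_ms)
               for task, start in one_node_schedule]
        for node in range(n_nodes)
    }

# Illustrative one-node plan within a 100 ms period.
plan = [("sense", 0), ("hash", 20), ("compare", 40), ("radio", 80)]
full = replicate_schedule(plan, n_nodes=3, offset_ms=10, period_ms=100)
print(full[2])  # node 2: every start shifted by 20 ms, radio wraps to 0
```

Because replication is a fixed-cost transformation, its runtime is independent of the device count, which is why the Reduced ILP curve in Figure 13 stays flat.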
VII. RELATED WORK
Commercial and research BCIs have focused largely on single brain location monitoring and stimulation [16, 17, 23, 27, 28], and have no support for distributed systems, making them ill-suited to the applications that we target.
Most implantable BCIs offer little to no storage capacity and instead stream data out continuously. NeuroChip [100] is an exception, but it is wired to an external case housing a 128 GB SD card that must be physically extracted for offline data analysis. Hull is the first to use storage for pre-processing and to reduce computation using hashes.
A growing interest in distributed analyses of the brain [1, 13, 25, 26] has motivated the design of rudimentary multi-site BCIs [31, 32, 101]. Prior studies [31, 32] propose microchips that stream sensor data wirelessly to a central hub outside the skull using back-scattering radio techniques. Unfortunately, these approaches are restricted in their interfacing bandwidth as they rely on centralized processing and communication.
Although recent work has studied unary neural networks on single-site BCIs [102], we will study distributed neural network models for seizure detection, propagation, spike sorting, and movement intent on multi-site BCIs going forward. Hull can support any algorithm with linear computational complexity without significant changes to the ILP formulation. However, neural network inference, which is super-linear, may require non-linear formulations for scheduling. Using MILP and approximations for such PEs may be a suitable extension.
VIII. CONCLUSION & FUTURE WORK
Hull enables distributed BCI interfacing that scales to multiple brain regions and provides, for the first time, on-device computation for important BCI applications. Hull offers two orders of magnitude higher task throughput and real-time support for interactive querying, with up to 10 QPS over 7 MB of data or 1 QPS over 60 MB.
Hull will influence the wider field of IoT devices, ranging from low-power temperature and voltage sensors [103] and AR/VR devices to devices in smart home, factory, and vehicle settings. These devices must collect and process large volumes of data at the edge, as communicating this data to centralized locations is likely to be near impossible for today’s cloud infrastructure. Like Hull, networks of power-constrained devices will need to process large volumes of data, often with flexible processing requirements to support rapidly evolving use cases.
Hull’s design principles (its modular PE architecture, its fast-but-approximate hash-based approach to signal similarity, its support for low-power, efficiently-indexed non-volatile storage, and its centralized planner that produces near-optimal mappings of task schedules to devices) can be instrumental to success in other IoT environments as well.
REFERENCES
[1] R. A. Andersen, T. Aflalo, L. Bashford, D. Bjånes, and S. Kellis, “Exploring Cognition with Brain–Machine Interfaces,” Annual Review of Psychology, vol. 73, pp. 131–158, 2022.
[2] M. A. Lebedev and M. A. Nicolelis, “Brain-machine interfaces: From basic science to neuroprostheses and neurorehabilitation,” Physiological Reviews, vol. 97, no. 2, pp. 767–837, 2017.
[3] S. Chandrasekaran, M. Fifer, S. Bickel, L. Osborn, J. Herrero, B. Christie, J. Xu, R. K. Murphy, S. Singh, M. F. Glasser et al., “Historical perspectives, challenges, and future directions of implantable brain-computer interfaces for sensorimotor applications,” Bioelectronic Medicine, vol. 7, no. 1, pp. 1–11, 2021.
[4] A. S. Widge, D. D. Dougherty, and C. T. Moritz, “Affective brain-computer interfaces as enabling technology for responsive psychiatric stimulation,” Brain-Computer Interfaces, vol. 1, no. 2, pp. 126–136, 2014.
[5] J. J. Shih, D. J. Krusienski, and J. R. Wolpaw, “Brain-computer interfaces in medicine,” in Mayo Clinic Proceedings, vol. 87, no. 3. Elsevier, 2012, pp. 268–279.
[6] D. J. McFarland, J. Daly, C. Boulay, and M. A. Parvaz, “Therapeutic applications of BCI technologies,” Brain-Computer Interfaces, vol. 4, no. 1-2, pp. 37–52, 2017.
[7] L. Huang and G. van Luijtelaar, “Brain computer interface for epilepsy treatment,” Brain-Computer Interface Systems—Recent Progress and Future Prospects, 2013.
[8] C. Mühl, B. Allison, A. Nijholt, and G. Chanel, “A survey of affective brain computer interfaces: principles, state-of-the-art, and challenges,” Brain-Computer Interfaces, vol. 1, no. 2, pp. 66–84, 2014.
[9] G. Buzsáki, C. A. Anastassiou, and C. Koch, “The Origin of Extracellular Fields and Currents — EEG, ECoG, LFP and Spikes,” Nature Reviews Neuroscience, vol. 13, pp. 407–420, May 2012.
[10] B. Pesaran, M. Vinck, G. T. Einevoll, A. Sirota, P. Fries, M. Siegel, W. Truccolo, C. E. Schroeder, and R. Srinivasan, “Investigating large-scale brain dynamics using field potential recordings: analysis and interpretation,” Nature Neuroscience, vol. 21, no. 7, pp. 903–919, 2018.
[11] J. del R. Milan and J. M. Carmena, “Invasive or Noninvasive: Understanding Brain-Machine Interface Technology [Conversations in BME],” IEEE Engineering in Medicine and Biology Magazine, vol. 29, pp. 16–22, Jan 2010.
[12] E. Musk et al., “An integrated brain-machine interface platform with thousands of channels,” Journal of Medical Internet Research, vol. 21, no. 10, p. e16194, 2019.
[13] K. M. Szostak, P. Feng, F. Mazza, and T. G. Constandinou, “Distributed Neural Interfaces: Challenges and Trends in Scaling Implantable Technology,” Handbook of Neuroengineering, pp. 1–37, 2021.
[14] A. B. Rapeaux and T. G. Constandinou, “Implantable brain machine interfaces: first-in-human studies, technology challenges and trends,” Current Opinion in Biotechnology, vol. 72, pp. 102–111, 2021.
[15] U.S. Food and Drug Administration, “FDA authorizes marketing of device to facilitate muscle rehabilitation in stroke patients,” https://www.fda.gov/news-events/press-announcements/fda-authorizes-marketing-device-facilitate-muscle-rehabilitation-stroke-patients, April 2021.
[16] F. T. Sun and M. J. Morrell, “The RNS System: responsive cortical stimulation for the treatment of refractory partial epilepsy,” Expert Review of Medical Devices, vol. 11, no. 6, pp. 563–572, 2014.
[17] Medtronic, “Deep Brain Stimulation Systems - Activa PC,” https://www.medtronic.com/us-en/healthcare-professionals/products/neurological/deep-brain-stimulation-systems/activa-pc.html, November 2018, Retrieved August 10, 2019.
[18] U.S. Food and Drug Administration, “Implanted Brain-Computer Interface (BCI) Devices for Patients with Paralysis or Amputation - Non-clinical Testing and Clinical Considerations,” https://www.fda.gov/regulatory-information/search-fda-guidance-documents/implanted-brain-computer-interface-bci-devices-patients-paralysis-or-amputation-non-clinical-testing, February 2019, Retrieved August 10, 2019.
[19] J. J. Han, “Synchron receives FDA approval to begin early feasibility study of their endovascular, brain-computer interface device,” Artificial Organs, vol. 45, no. 10, pp. 1134–1135, 2021.
[20] C. Serrano-Amenos, F. Hu, P. T. Wang, S. Kellis, R. A. Andersen, C. Y. Liu, P. Heydari, A. H. Do, and Z. Nenadic, “Thermal analysis of a skull implant in brain-computer interfaces,” in 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2020, pp. 3066–3069.
[21] P. D. Wolf, “Thermal Considerations for the Design of an Implanted Cortical Brain-Machine Interface (BMI),” Indwelling Neural Implants: Strategies for Contending with the In Vivo Environment, 2008.
[22] I. Stevenson and K. Kording, “How Advances in Neural Recording Affect Data Analysis,” Nature Neuroscience, vol. 14, pp. 139–142, Feb 2011.
[23] I. Karageorgos, K. Sriram, J. Veselý, M. Wu, M. Powell, D. Borton, R. Manohar, and A. Bhattacharjee, “Hardware-software co-design for brain-computer interfaces,” in 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2020, pp. 391–404.
[24] R. Zelmann, A. C. Paulk, I. Basu, A. Sarma, A. Yousefi, B. Crocker, E. Eskandar, Z. Williams, G. R. Cosgrove, D. S. Weisholtz et al., “CLoSES: A platform for closed-loop intracranial stimulation in humans,” NeuroImage, vol. 223, p. 117314, 2020.
[25] V. K. Jirsa, T. Proix, D. Perdikis, M. M. Woodman, H. Wang, J. Gonzalez-Martinez, C. Bernard, C. Bénar, M. Guye, P. Chauvel et al., “The Virtual Epileptic Patient: Individualized whole-brain models of epilepsy spread,” NeuroImage, vol. 145, pp. 377–388, 2017.
[26] F. Bartolomei, S. Lagarde, F. Wendling, A. McGonigal, V. Jirsa, M. Guye, and C. Bénar, “Defining epileptogenic networks: Contribution of SEEG and signal analysis,” Epilepsia, vol. 58, no. 7, pp. 1131–1147, 2017.
[27] H. Kassiri, S. Tonekaboni, M. T. Salam, N. Soltani, K. Abdelhalim, J. L. Perez Velasquez, and R. Genov, “Closed-Loop Neurostimulators: A Survey and A Seizure-Predicting Design Example for Intractable Epilepsy Treatment,” IEEE Transactions on Biomedical Circuits and Systems, vol. 11, no. 5, pp. 1026–1040, 2017.
[28] G. O’Leary, D. M. Groppe, T. A. Valiante, N. Verma, and R. Genov, “NURIP: Neural interface processor for brain-state classification and programmable-waveform neurostimulation,” IEEE Journal of Solid-State Circuits, vol. 53, 2018.
[29] J. N. Y. Aziz, K. Abdelhalim, R. Shulyzki, R. Genov, B. L. Bardakjian, M. Derchansky, D. Serletis, and P. L. Carlen, “256-Channel Neural Recording and Delta Compression Microsystem With 3D Electrodes,” IEEE Journal of Solid-State Circuits, vol. 44, no. 3, pp. 995–1005, March 2009.
[30] T. Chen, C. Jeng, S. Chang, H. Chiueh, S. Liang, Y. Hsu, and T. Chien, “A Hardware Implementation of Real-Time Epileptic Seizure Detector on FPGA,” 2011 IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 25–28, Nov 2011.
[31] N. Ahmadi, M. L. Cavuto, P. Feng, L. B. Leene, M. Maslik, F. Mazza, O. Savolainen, K. M. Szostak, C.-S. Bouganis, J. Ekanayake et al., “Towards a Distributed, Chronically-Implantable Neural Interface,” in 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 2019, pp. 719–724.
[32] J. Lee, V. Leung, A.-H. Lee, J. Huang, P. Asbeck, P. P. Mercier, S. Shellhammer, L. Larson, F. Laiwalla, and A. Nurmikko, “Neural recording and stimulation using wireless networks of microimplants,” Nature Electronics, vol. 4, no. 8, pp. 604–614, 2021.
[33] B. Zhu, U. Shin, and M. Shoaran, “Closed-Loop Neural Prostheses With On-Chip Intelligence: A Review and a Low-Latency Machine Learning Model for Brain State Detection,” IEEE Transactions on Biomedical Circuits and Systems, 2021.
[34] C. Zrenner, P. Belardinelli, F. Müller-Dahlhaus, and U. Ziemann, “Closed-Loop Neuroscience and Non-Invasive Brain Stimulation: A Tale of Two Loops,” Frontiers in Cellular Neuroscience, vol. 10, p. 92, 2016.
[35] M. D. Murphy, D. J. Guggenmos, D. T. Bundy, and R. J. Nudo, “Current Challenges Facing the Translation of Brain Computer Interfaces from Preclinical Trials to Use in Human Patients,” Front Cell Neurosci, vol. 9, p. 497, 2015.
[36] A. O. Hebb, J. J. Zhang, M. H. Mahoor, C. Tsiokos, C. Matlack, H. J. Chizeck, and N. Pouratian, “Creating the feedback loop: Closed loop neurostimulation,” Neurosurgery Clinics of North America, vol. 25, no. 1, pp. 187–204, 2014.
[37] V. Sladky, P. Nejedly, F. Mivalt, B. H. Brinkmann, I. Kim, E. K. St. Louis, N. M. Gregg, B. N. Lundstrom, C. M. Crowe, T. P. Attia et al., “Distributed brain co-processor for tracking spikes, seizures and behaviour during electrical brain stimulation,” Brain Communications, vol. 4, no. 3, p. fcac115, 2022.
[38] M. A. Kural, J. Jing, F. Fürbass, H. Perko, E. Qerama, B. Johnsen, S. Fuchs, M. B. Westover, and S. Beniczky, “Accurate identification of EEG recordings with interictal epileptiform discharges using a hybrid approach: Artificial intelligence supervised by human experts,” Epilepsia, vol. 63, no. 5, pp. 1064–1073, 2022.
[39] J. D. Simeral, T. Hosman, J. Saab, S. N. Flesher, M. Vilela, B. Franco, J. N. Kelemen, D. M. Brandman, J. G. Ciancibello, P. G. Rezaii et al., “Home Use of a Percutaneous Wireless Intracortical Brain-Computer Interface by Individuals With Tetraplegia,” IEEE Transactions on Biomedical Engineering, vol. 68, no. 7, pp. 2313–2325, 2021.
[40] M. Yin, D. A. Borton, J. Komar, N. Agha, Y. Lu, H. Li, J. Laurens, Y. Lang, Q. Li, C. Bull et al., “Wireless Neurosensor for Full-Spectrum Electrophysiology Recordings during Free Behavior,” Neuron, vol. 84, no. 6, pp. 1170–1182, 2014.
[41] S. M. Kim, P. Tathireddy, R. Normann, and F. Solzbacher, “Thermal Impact of an Active 3-D Microelectrode Array Implanted in the Brain,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 15, pp. 493–501, Dec 2007.
[42] O. Sporns, “Graph theory methods: applications in brain networks,” Dialogues in Clinical Neuroscience, 2022.
[43] M. T. Bilge, A. K. Gosai, and A. S. Widge, “Deep Brain Stimulation in Psychiatry,” Psychiatric Clinics, vol. 41, no. 3, pp. 373–383, 2018.
[44] G. Deco and M. L. Kringelbach, “Great Expectations: Using Whole-Brain Computational Connectomics for Understanding Neuropsychiatric Disorders,” Neuron, vol. 84, no. 5, pp. 892–905, 2014.
[45] J. A. Gallego, T. R. Makin, and S. D. McDougle, “Going beyond primary motor cortex to improve brain–computer interfaces,” Trends in Neurosciences, vol. 45, no. 3, pp. 176–183, 2022.
[46] D. Camargo-Vargas, M. Callejas-Cuervo, and S. Mazzoleni, “Brain-Computer Interfaces Systems for Upper and Lower Limb Rehabilitation: A Systematic Review,” Sensors, vol. 21, no. 13, p. 4312, 2021.
[47] S. J. Bensmaia and L. E. Miller, “Restoring sensorimotor function through intracortical interfaces: progress and looming challenges,” Nature Reviews Neuroscience, vol. 15, no. 5, pp. 313–325, 2014.
[48] S. Todorova, P. Sadtler, A. Batista, S. Chase, and V. Ventura, “To sort or not to sort: the impact of spike-sorting on neural decoding performance,” Journal of Neural Engineering, vol. 11, no. 5, p. 056005, 2014.
[49] H. G. Rey, C. Pedreira, and R. Q. Quiroga, “Past, present and future of spike sorting techniques,” Brain Research Bulletin, vol. 119, pp. 106–117, 2015.
[50] H. Shiao, V. Cherkassky, J. Lee, B. Veber, E. E. Patterson, B. H. Brinkmann, and G. A. Worrell, “SVM-Based System for Prediction of Epileptic Seizures From iEEG Signal,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 5, pp. 1011–1022, May 2017.
[51] I. K. Niazi, N. Jiang, O. Tiberghien, J. F. Nielsen, K. Dremstrup, and D. Farina, “Detection of movement intention from single-trial movement-related cortical potentials,” Journal of Neural Engineering, vol. 8, no. 6, p. 066009, 2011.
[52] M. Jochumsen, I. K. Niazi, H. Rovsing, C. Rovsing, G. A. Nielsen, T. K. Andersen, N. Dong, M. E. Sørensen, N. Mrachacz-Kersting, N. Jiang et al., “Detection of movement intentions through a single channel of electroencephalography,” in Replace, Repair, Restore, Relieve–Bridging Clinical and Engineering Solutions in Neurorehabilitation. Springer, 2014, pp. 465–472.
[53] C. L. Ojakangas, A. Shaikhouni, G. M. Friehs, A. H. Caplan, M. D. Serruya, M. Saleh, D. S. Morris, and J. P. Donoghue, “Decoding Movement Intent From Human Premotor Cortex Neurons for Neural Prosthetic Applications,” Journal of Clinical Neurophysiology: Official Publication of the American Electroencephalographic Society, vol. 23, no. 6, p. 577, 2006.
[54] J. F. Kalaska, “From Intention to Action: Motor Cortex and the Control of Reaching Movements,” Progress in Motor Control, pp. 139–178, 2009, series title: Advances in Experimental Medicine and Biology.
[55] A. Shakeel, M. S. Navid, M. N. Anwar, S. Mazhar, M. Jochumsen, and I. K. Niazi, “A Review of Techniques for Detection of Movement Intention Using Movement-Related Cortical Potentials,” Computational and Mathematical Methods in Medicine, vol. 2015, 2015.
[56] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, “Direct Cortical Control of 3D Neuroprosthetic Devices,” Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[57] E. E. Fetz, “Restoring motor function with bidirectional neural interfaces,” Progress in Brain Research, vol. 218, pp. 241–252, 2015.
[58] M. Capogrosso, T. Milekovic, D. Borton, F. Wagner, E. M. Moraud, J.-B. Mignardot, N. Buse, J. Gandar, Q. Barraud, D. Xing et al., “A Brain–Spine Interface Alleviating Gait Deficits after Spinal Cord Injury in Primates,” Nature, vol. 539, no. 7628, pp. 284–288, 2016.
[59] K.-R. Muller, C. Anderson, and G. Birch, “Linear and nonlinear methods for brain-computer interfaces,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 165–169, 2003.
[60] R. Darie, M. Powell, and D. Borton, “Delivering the Sense of Touch to the Human Brain,” Neuron, vol. 93, no. 4, pp. 728–730, 2017.
[61] S. Tafazoli, C. J. MacDowell, Z. Che, K. C. Letai, C. R. Steinhardt, and T. J. Buschman, “Learning to control the brain through adaptive closed-loop patterned stimulation,” Journal of Neural Engineering, vol. 17, no. 5, p. 056007, 2020.
[62] U. Rutishauser, E. M. Schuman, and A. N. Mamelak, “Online detection and sorting of extracellularly recorded action potentials in human medial temporal lobe recordings, in vivo,” Journal of Neuroscience Methods, vol. 154, no. 1-2, pp. 204–224, 2006.
[63] B. Sotomayor-Gómez, F. P. Battaglia, and M. Vinck, “SpikeShip: A method for fast, unsupervised discovery of high-dimensional neural spiking patterns,” bioRxiv, 2021.
[64] L. Grossberger, F. P. Battaglia, and M. Vinck, “Unsupervised clustering of temporal patterns in high-dimensional neuronal ensembles using a novel dissimilarity measure,” PLoS Comput Biol, vol. 14, no. 7, p. e1006283, Jul 2018.
[65] Y. B. Kim, “Physiological Time Series Retrieval and Prediction with Locality-Sensitive Hashing,” Ph.D. dissertation, Massachusetts Institute of Technology, 2017.
[66] Y. Cao, N. Rakhilin, P. H. Gordon, X. Shen, and E. C. Kan, “A real-time spike classification method based on dynamic time warping for extracellular enteric neural recording with large waveform variability,” Journal of Neuroscience Methods, vol. 261, pp. 97–109, 2016.
[67] C. Luo and A. Shrivastava, “SSH (Sketch, Shingle, & Hash) for Indexing Massive-Scale Time Series,” in NIPS 2016 Time Series Workshop. PMLR, 2017, pp. 38–58.
[68] D. Gorisse, M. Cord, and F. Precioso, “Locality-Sensitive Hashing for Chi2 Distance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 2, pp. 402–409, 2011.
[69] A. Bhattacharjee and R. Manohar, “HALO: A Flexible and Low Power Processing Fabric for Brain-Computer Interfaces,” in 2022 IEEE Hot Chips 34 Symposium (HCS). IEEE Computer Society, 2022, pp. 1–37.
[70] I. Karageorgos, K. Sriram, J. Veselý, N. Lindsay, X. Wen, M. Wu, M. Powell, D. Borton, R. Manohar, and A. Bhattacharjee, “Balancing specialized versus flexible computation in brain–computer interfaces,” IEEE Micro, vol. 41, no. 3, pp. 87–94, 2021.
[71] F. T. Sun and M. J. Morrell, “Closed-loop Neurostimulation: The Clinical Experience,” Neurotherapeutics, vol. 11, no. 3, pp. 553–563, 2014.
[72] G. W. V. Vidal, M. L. Rynes, Z. Kelliher, and S. J. Goodwin, “Review of Brain-Machine Interfaces Used in Neural Prosthetics with New Perspective on Somatosensory Feedback through Method of Signal Breakdown,” Scientifica, vol. 2016, 2016.
[73] C. Herff, D. J. Krusienski, and P. Kubben, “The Potential of Stereotactic-EEG for Brain-Computer Interfaces: Current Progress and Future Directions,” Frontiers in Neuroscience, vol. 14, p. 123, 2020.
[74] D. Karger, E. Lehman, T. Leighton, R. Panigrahy, M. Levine, and D. Lewin, “Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web,” in ACM Symposium on Theory of Computing (STOC), 1997.
[75] J. Kruskall and M. Liberman, “The symmetric time warping algorithm: From continuous to discrete,” Time Warps, String Edits and Macromolecules: The Theory and Practice of Sequence Comparison, 1983.
[76] H. Sakoe and S. Chiba, “Dynamic programming al-
|
1947 |
+
gorithm optimization for spoken word recognition,”
|
1948 |
+
IEEE Transactions on Acoustics, Speech, and Signal
|
1949 |
+
Processing, vol. 26, pp. 159–165, 1978.
|
1950 |
+
[77] O. Pele and M. Werman, “Fast and robust Earth Mover’s
|
1951 |
+
Distances,” in 2009 IEEE 12th international conference
|
1952 |
+
on computer vision.
|
1953 |
+
IEEE, 2009, pp. 460–467.
|
1954 |
+
[78] I. M. Pu, Fundamental data compression.
|
1955 |
+
Butterworth-
|
1956 |
+
Heinemann, 2005.
|
1957 |
+
[79] P. Elias, “Universal codeword sets and representations of
|
1958 |
+
the integers,” IEEE Transactions on Information Theory,
|
1959 |
+
vol. 21, no. 2, pp. 194–203, 1975.
|
1960 |
+
[80] N. Bingham and R. Manohar, “Qdi constant-time coun-
|
1961 |
+
ters,” IEEE Transactions on Very Large Scale Integration
|
1962 |
+
(VLSI) Systems, vol. 27, no. 1, pp. 83–91, 2018.
|
1963 |
+
[81] W. W. Peterson and D. T. Brown, “Cyclic codes for
|
1964 |
+
error detection,” Proceedings of the IRE, vol. 49, no. 1,
|
1965 |
+
pp. 228–235, 1961.
|
1966 |
+
[82] K. Yun and R. Donohue, “Pausible clocking: a first
|
1967 |
+
step toward heterogeneous systems,” in Proceedings
|
1968 |
+
International Conference on Computer Design. VLSI
|
1969 |
+
in Computers and Processors, 1996, pp. 118–123.
|
1970 |
+
[83] A. Moreno and J. Cortadella, “Synthesis of all-digital
|
1971 |
+
delay lines,” in 2017 23rd IEEE International Symposium
|
1972 |
+
on Asynchronous Circuits and Systems (ASYNC), 2017,
|
1973 |
+
pp. 75–82.
|
1974 |
+
[84] D. Mills, “Simple Network Time Protocol (SNTP),” RFC
|
1975 |
+
Editor, RFC 1769, 1995.
|
1976 |
+
[85] D. D. Chamberlin and R. F. Boyce, “SEQUEL: A
|
1977 |
+
structured English query language,” in Proceedings of
|
1978 |
+
the 1974 ACM SIGFIDET (now SIGMOD) workshop on
|
1979 |
+
Data description, access and control.
|
1980 |
+
Association for
|
1981 |
+
Computing Machinery, 1974, pp. 249–264.
|
1982 |
+
[86] Blackrock
|
1983 |
+
Microsystems,
|
1984 |
+
“The
|
1985 |
+
Benchmark
|
1986 |
+
for
|
1987 |
+
Multichannel, High-density Neural Recording,” https:
|
1988 |
+
//www.blackrockmicro.com/electrode-types/utah-array/,
|
1989 |
+
Retrieved August 10, 2019.
|
1990 |
+
[87] J. Shen, A. Shikata, L. D. Fernando, N. Guthrie, B. Chen,
|
1991 |
+
M. Maddox, N. Mascarenhas, R. Kapusta, and M. C. W.
|
1992 |
+
Coln, “A 16-bit 16-ms/s sar adc with on-chip calibration
|
1993 |
+
in 55-nm cmos,” IEEE Journal of Solid-State Circuits,
|
1994 |
+
vol. 53, no. 4, pp. 1149–1160, 2018.
|
1995 |
+
[88] Medtronic,
|
1996 |
+
“Medtronic
|
1997 |
+
Activa
|
1998 |
+
PC
|
1999 |
+
Multi-
|
2000 |
+
program
|
2001 |
+
neurostimulator
|
2002 |
+
implant
|
2003 |
+
manual,”
|
2004 |
+
http://www.neuromodulation.ch/sites/default/files/
|
2005 |
+
pictures/activa PC DBS implant manuel.pdf,
|
2006 |
+
2008,
|
2007 |
+
Retrieved August 10, 2019.
|
2008 |
+
[89] H. Rahmani and A. Babakhani, “A wirelessly powered
|
2009 |
+
reconfigurable fdd radio with on-chip antennas for multi-
|
2010 |
+
site neural interfaces,” IEEE Journal of Solid-State
|
2011 |
+
Circuits, vol. 56, no. 10, pp. 3177–3190, 2021.
|
2012 |
+
[90] J. W. Young, “Head and Face Anthropometry of Adult
|
2013 |
+
U.S. Citizens,” Federal Aviation Administration, Tech.
|
2014 |
+
Rep., 1993.
|
2015 |
+
[91] A. F. Molisch, K. Balakrishnan, C.-C. Chong, S. Emami,
|
2016 |
+
A. Fort, J. Karedal, J. Kunisch, H. Schantz, U. Schuster,
|
2017 |
+
and K. Siwiak, “IEEE 802.15. 4a channel model-final
|
2018 |
+
report,” IEEE P802, vol. 15, no. 04, p. 0662, 2004.
|
2019 |
+
[92] M. S¨arest¨oniemi, C. Pomalaza-Raez, K. Sayrafian,
|
2020 |
+
T. Myllyl¨a, and J. Iinatti, “A Preliminary Study of
|
2021 |
+
RF Propagation for High Data Rate Brain Telemetry,”
|
2022 |
+
in Body Area Networks. Smart IoT and Big Data
|
2023 |
+
for Intelligent Health Management.
|
2024 |
+
Cham: Springer
|
2025 |
+
International Publishing, 2022, pp. 126–138.
|
2026 |
+
[93] A. Taparugssanagorn, A. Rabbachin, M. H¨am¨al¨ainen,
|
2027 |
+
J. Saloranta, J. Iinatti et al., “A Review of Channel
|
2028 |
+
Modelling for Wireless Body Area Network in Wire-
|
2029 |
+
less Medical Communications,” The 11th International
|
2030 |
+
Symposium on Wireless Personal Multimedia Communi-
|
2031 |
+
cations (WPMC, 2008.
|
2032 |
+
[94] “MT29F128G08AKCABH2-10,”
|
2033 |
+
https://www.
|
2034 |
+
micron.com/products/nand-flash/slc-nand/part-
|
2035 |
+
catalog/mt29f128g08akcabh2-10, Retrieved November
|
2036 |
+
17, 2022.
|
2037 |
+
[95] X. Dong, C. Xu, Y. Xie, and N. P. Jouppi, “Nvsim:
|
2038 |
+
A circuit-level performance, energy, and area model
|
2039 |
+
for emerging nonvolatile memory,” IEEE Transactions
|
2040 |
+
on Computer-Aided Design of Integrated Circuits and
|
2041 |
+
Systems, vol. 31, no. 7, pp. 994–1007, 2012.
|
2042 |
+
[96] J. Magland, J. J. Jun, E. Lovero, A. J. Morley, C. L.
|
2043 |
+
Hurwitz, A. P. Buccino, S. Garcia, and A. H. Barnett,
|
2044 |
+
“SpikeForest, reproducible web-facing ground-truth vali-
|
2045 |
+
dation of automated neural spike sorters,” Elife, vol. 9,
|
2046 |
+
p. e55167, 2020.
|
2047 |
+
[97] “IEEG.ORG,” https://www.ieeg.org, Retrieved November
|
2048 |
+
17, 2022.
|
2049 |
+
[98] P. Shah, A. Ashourvan, F. Mikhail, A. Pines, L. Kini,
|
2050 |
+
K. Oechsel, S. R. Das, J. M. Stein, R. T. Shinohara, D. S.
|
2051 |
+
Bassett et al., “Characterizing the role of the structural
|
2052 |
+
connectome in seizure dynamics,” Brain, vol. 142, no. 7,
|
2053 |
+
pp. 1955–1972, 2019.
|
2054 |
+
[99] J. E. Chung, J. F. Magland, A. H. Barnett, V. M. Tolosa,
|
2055 |
+
A. C. Tooker, K. Y. Lee, K. G. Shah, S. H. Felix,
|
2056 |
+
L. M. Frank, and L. F. Greengard, “A Fully Automated
|
2057 |
+
Approach to Spike Sorting,” Neuron, vol. 95, no. 6, pp.
|
2058 |
+
1381–1394, 2017.
|
2059 |
+
[100] L. E. Shupe, F. P. Miles, G. Jones, R. Yun, J. Mish-
|
2060 |
+
ler, I. Rembado, R. L. Murphy, S. I. Perlmutter, and
|
2061 |
+
E. E. Fetz, “Neurochip3: An Autonomous Multichannel
|
2062 |
+
Bidirectional Brain-Computer Interface for Closed-Loop
|
2063 |
+
15
|
2064 |
+
|
2065 |
+
Activity-Dependent Stimulation,” Frontiers in Neuro-
|
2066 |
+
science, vol. 15, 2021.
|
2067 |
+
[101] J. E. Chung, H. R. Joo, J. L. Fan, D. F. Liu, A. H.
|
2068 |
+
Barnett, S. Chen, C. Geaghan-Breiner, M. P. Karlsson,
|
2069 |
+
M. Karlsson, K. Y. Lee et al., “High-Density, Long-
|
2070 |
+
Lasting, and Multi-region Electrophysiological Record-
|
2071 |
+
ings Using Polymer Electrode Arrays,” Neuron, vol. 101,
|
2072 |
+
no. 1, pp. 21–31, 2019.
|
2073 |
+
[102] D. Wu, J. Li, Z. Pan, Y. Kim, and J. S. Miguel, “uBrain:
|
2074 |
+
A Unary Brain Computer Interface,” in Proceedings of
|
2075 |
+
the 49th Annual International Symposium on Computer
|
2076 |
+
Architecture.
|
2077 |
+
Association for Computing Machinery,
|
2078 |
+
2022, pp. 468–481.
|
2079 |
+
[103] E. M. Stewart, A. Liao, and C. Roberts, “Open µpmu:
|
2080 |
+
A real world reference distribution micro-phasor mea-
|
2081 |
+
surement unit data set for research and application
|
2082 |
+
development,” IEEE Power Engineering Letters, 2016.
|
2083 |
+
16
|
2084 |
+
|
BNE1T4oBgHgl3EQfVgSF/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
BtE5T4oBgHgl3EQfTQ-D/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2f88bf288ead57d18ca1c677782893709bc3780043189d23d152370239a7b580
size 2293805
CNE1T4oBgHgl3EQfDwNk/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2494581eeb464decb1038b755bf34e72c25be7b61c343273019c7b90256f6d87
size 3538989
CNFJT4oBgHgl3EQfASxa/content/2301.11419v1.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:efe35316418bc3a45a60f2aa18e9e5dffdc1f5d7a1a5c61a2ab4d43f2e055586
size 1696742
CNFJT4oBgHgl3EQfASxa/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f3ef0febf4e2b5dc5322e0ead6534829dfb063854f43b8f841b09582d00d47bd
size 2293805
CNFJT4oBgHgl3EQfASxa/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f47c1916c7fb2e83b61925986e80072f341e2de20ad01df43e69747e9fff0595
size 103997
DNE0T4oBgHgl3EQfggEA/content/tmp_files/2301.02417v1.pdf.txt
ADDED
@@ -0,0 +1,2744 @@
arXiv:2301.02417v1 [cs.IT] 6 Jan 2023

Uplink Precoding Design for Cell-Free Massive MIMO with Iteratively Weighted MMSE

Zhe Wang, Jiayi Zhang, Senior Member, IEEE, Hien Quoc Ngo, Senior Member, IEEE, Bo Ai, Fellow, IEEE, and Mérouane Debbah, Fellow, IEEE

Abstract

In this paper, we investigate a cell-free massive multiple-input multiple-output system with both access points and user equipments equipped with multiple antennas over the Weichselberger Rayleigh fading channel. We study the uplink spectral efficiency (SE) for the fully centralized processing scheme and the large-scale fading decoding (LSFD) scheme. To further improve the SE performance, we design uplink precoding schemes based on weighted sum SE maximization. Since the weighted sum SE maximization problem is not jointly convex over all optimization variables, two efficient uplink precoding schemes based on Iteratively Weighted sum-Minimum Mean Square Error (I-WMMSE) algorithms, which rely on the iterative minimization of weighted MSE, are proposed for the two processing schemes investigated. Furthermore, with maximum ratio combining applied in the LSFD scheme, we derive novel closed-form achievable SE expressions and optimal precoding schemes. Numerical results validate the proposed results and show that the I-WMMSE precoding schemes can achieve excellent sum SE performance with a large number of UE antennas.

Index Terms

Cell-free massive MIMO, uplink precoding, weighted sum-rate maximization, spectral efficiency.

This article was presented in part at the IEEE International Conference on Communications 2022 [1].
Z. Wang and J. Zhang are with the School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China, and also with the Frontiers Science Center for Smart High-speed Railway System, Beijing Jiaotong University, Beijing 100044, China (e-mail: {zhewang 77, jiayizhang}@bjtu.edu.cn).
H. Q. Ngo is with the Institute of Electronics, Communications, and Information Technology, Queen's University Belfast, BT3 9DT Belfast, U.K. (email: [email protected]).
B. Ai is with the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, China, also with the Frontiers Science Center for Smart High-Speed Railway System and the Henan Joint International Research Laboratory of Intelligent Networking and Data Analysis, Zhengzhou University, Zhengzhou 450001, China, and also with the Research Center of Networks and Communications, Peng Cheng Laboratory, Shenzhen 518066, China (e-mail: [email protected]).
M. Debbah is with the Technology Innovation Institute, Abu Dhabi, United Arab Emirates, and also with CentraleSupélec, University Paris-Saclay, 91192 Gif-sur-Yvette, France (e-mail: [email protected]).

I. INTRODUCTION

Cell-free massive multiple-input multiple-output (CF mMIMO) has attracted a lot of research interest and is regarded as a promising technology for future wireless communications, for its ability to achieve uniformly high spectral efficiency (SE) [2]–[7]. Basically, a large number of access points (APs), arbitrarily distributed in a wide coverage area and connected to one or several central processing units (CPUs), jointly serve all user equipments (UEs) on the same time-frequency resource. Compared with the traditional cellular mMIMO system, the CF mMIMO system operates with no cell boundaries and many more APs than UEs [8]–[10]. Relying upon the prominent network topology of CF mMIMO, four uplink (UL) signal processing schemes, distinguished by the level of mutual cooperation between the APs and the assistance from the CPU, can be implemented, as in [5]. Among these schemes, "Level 4" and "Level 3" are viewed as efficient processing techniques. The so-called Level 4 is a fully centralized processing scheme where all the pilot and data signals received at the APs are transmitted to the CPU via the fronthaul links and the CPU performs channel estimation and data detection. A similar scheme was also investigated in [11]–[13]. The so-called Level 3 stands for a two-layer decoding scheme: in the first layer, each AP estimates channels and decodes the UE data locally by applying an arbitrary combining scheme based on the local channel state information (CSI); in the second layer, all the local estimates of the UE data are gathered at the CPU, where they are linearly weighted by the optimal large-scale fading decoding (LSFD) coefficients to obtain the final decoded data. The LSFD scheme has been widely investigated in [14]–[17] since it can make full use of the prominent network topology of CF mMIMO and achieve excellent performance.
To promote the practical implementation of the CF mMIMO network, a new framework of scalable CF mMIMO systems and the respective processing algorithms were proposed in [9] by exploiting the dynamic cooperation cluster (DCC) concept. Besides, the scalability aspects in a realistic scenario with multiple CPUs were considered in [18], where the data processing, network topology, and power control strategies with multiple CPUs were discussed. Moreover, the authors of [19] considered the uplink of a radio-stripe-based CF mMIMO network architecture with sequential fronthaul links between APs and proposed MMSE-based sequential processing schemes, which significantly reduce the fronthaul requirement. However, when the CF mMIMO network is operated in practice, a more realistic capacity-constrained fronthaul network can have a great effect on the system performance. The authors of [20] and [21] discussed the uplink performance of a CF mMIMO system with limited-capacity fronthaul links. Furthermore, it is worth noting that the CF mMIMO architecture has been co-designed with another promising future wireless technology, the Reconfigurable Intelligent Surface (RIS) [22], [23], which undoubtedly provides vital tutorials for the future wireless network design.

The vast majority of scientific papers on CF mMIMO focus on the scenario with single-antenna UEs. However, in practice, contemporary UEs with moderate physical sizes are already equipped with multiple antennas to achieve higher multiplexing gain and boost system reliability. The authors of [24] investigated the UL performance of a CF mMIMO system with multi-antenna UEs under maximum ratio (MR) combining and zero-forcing (ZF) combining. The authors of [25] considered a user-centric (UC) approach for CF mMIMO with multi-antenna UEs and proposed power allocation strategies for either sum-rate maximization or minimum-rate maximization. Besides, the authors of [26] analyzed the downlink SE performance of a CF mMIMO system with multi-antenna UEs and computed SE expressions in closed form. Then, the SE performance of a CF mMIMO system with multi-antenna UEs and low-resolution DACs was investigated in [27]. Nevertheless, these works only investigated a simple distributed processing scheme and are based on the overly idealistic assumption of independent and identically distributed (i.i.d.) Rayleigh fading channels, neglecting the spatial correlation that has a significant impact on practical CF mMIMO systems [15], [16]. The authors of [28] considered a CF mMIMO system with multi-antenna UEs over the jointly-correlated Weichselberger model [29] and analyzed four UL processing schemes.

As observed in [26], [28], increasing the number of antennas per UE may not always benefit the SE performance. The SE reaches its maximum value at a particular number of antennas per UE and then decreases as the number of antennas per UE grows further. One main reason for this phenomenon is that, without UL precoding schemes, the UEs cannot make full use of their multiple antennas to achieve higher SE performance. It is therefore vital to design UL precoding schemes that further improve the system performance. However, it is worth noting that the design of UL precoding for CF mMIMO has not been investigated. For traditional mMIMO or MIMO systems, one popular optimization objective for the uplink/downlink precoding design is to maximize the weighted sum rate (WSR) [30]–[33]. The authors of [30] and [32] discussed the equivalence between the WSR maximization problem and the Weighted sum-Minimum Mean Square Error (WMMSE) problem in MIMO systems and proposed an iterative downlink transceiver design algorithm for WSR maximization. Note that the algorithm relies on the iterative minimization of weighted MSE since the WMMSE problem is not jointly convex over all optimization variables. Moreover, the authors of [31] investigated UL precoding scheme optimization based on [30] under a sum-power constraint or individual power constraints.
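The equivalence exploited by such algorithms can be stated in a generic form; the notation below is illustrative and not tied to this paper's system model. With MMSE receive filters, the achievable rate of UE $k$ equals $\log_2\det(\mathbf{E}_k^{-1})$ for its MSE matrix $\mathbf{E}_k$, so WSR maximization can be recast as a weighted MSE minimization with adaptively updated weight matrices:

```latex
% WSR--WMMSE equivalence (generic form; F_k: precoders, U_k: receive
% filters, W_k \succeq 0: weight matrices, E_k: MSE matrix of UE k)
\max_{\{\mathbf{F}_k\}} \sum_{k} \alpha_k \log_2 \det\!\big(\mathbf{E}_k^{-1}\big)
\;\Longleftrightarrow\;
\min_{\{\mathbf{F}_k,\,\mathbf{U}_k,\,\mathbf{W}_k\}} \sum_{k} \alpha_k
\Big( \operatorname{tr}\!\big(\mathbf{W}_k \mathbf{E}_k\big)
      - \log_2 \det\!\big(\mathbf{W}_k\big) \Big),
\qquad \mathbf{W}_k^{\star} = \mathbf{E}_k^{-1}.
```

Alternating updates of $\mathbf{U}_k$, $\mathbf{W}_k$, and $\mathbf{F}_k$, each subproblem being convex when the other variables are fixed, yield the monotone convergence behavior that iterative WMMSE schemes rely on.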
|
98 |
+
Motivated by the above observations, we investigate a CF mMIMO system with both multi-antenna
|
99 |
+
APs and UEs over the Weichselberger Rayleigh fading channel. Two pragmatic processing schemes: 1)
|
100 |
+
|
101 |
+
4
|
102 |
+
the fully centralized processing scheme; 2) the large-scale fading decoding scheme are implemented. The
|
103 |
+
main contributions are given as follows.
|
104 |
+
• We design an efficient UL precoding scheme that maximizes the WSR for the fully centralized processing scheme based on an iterative WMMSE (I-WMMSE) algorithm. Note that the I-WMMSE precoding scheme for the fully centralized processing scheme is implemented at the CPU and is based on the instantaneous CSI.
• For the LSFD processing scheme, we derive a UL precoding scheme for WSR maximization based on an iterative WMMSE algorithm. The I-WMMSE precoding scheme for the LSFD scheme is also implemented at the CPU, but is based only on channel statistics. More importantly, we compute achievable SE expressions and optimal precoding schemes in novel closed form for the LSFD scheme with MR combining.
• We analyze the practical implementation and computational complexity of the proposed I-WMMSE precoding schemes. The proposed I-WMMSE precoding schemes are guaranteed to converge. More importantly, the proposed UL precoding schemes efficiently achieve excellent sum SE/rate performance, and the average rate benefits from the multiple antennas at the UE side, which provides vital insights for the practical implementation of multi-antenna UEs.

Note that this paper differs from the conference version [1] in the following aspects: i) we investigate both the fully centralized processing and LSFD schemes and design their respective UL precoding schemes, while only the LSFD scheme was considered in [1]; ii) we provide details of the derivation of the I-WMMSE precoding schemes, which were omitted in [1] due to the lack of space; iii) we analyze the practical implementation and convergence behavior of the proposed precoding schemes. More importantly, numerical results provide vital insights for the CF mMIMO system with the proposed UL precoding schemes.
The rest of this paper is organized as follows. In Section II, we consider a CF mMIMO system with the Weichselberger Rayleigh fading channel and describe the channel estimation and data detection. Then, Section III introduces the fully centralized processing and LSFD processing schemes and provides their respective achievable SE expressions; novel closed-form SE expressions for the LSFD scheme with MR combining are derived. More importantly, based on the achievable SE expressions, we propose UL I-WMMSE precoding schemes for the two processing schemes. Then, Section IV provides some insights into the practical implementation and computational complexity of the proposed I-WMMSE precoding schemes. In Section V, numerical results and performance analysis for the I-WMMSE precoding schemes are provided.

Fig. 1. A cell-free massive MIMO system.

Finally, the major conclusions and future directions are drawn in Section VI.
Notation: Boldface lowercase letters x and boldface uppercase letters X denote column vectors and matrices, respectively. E{·}, tr{·} and ≜ denote the expectation operator, the trace operator, and definitions, respectively. |·|, ∥·∥ and ∥·∥F are the determinant of a matrix or the absolute value of a number, the Euclidean norm, and the Frobenius norm, respectively. vec(A) denotes the column vector formed by stacking the columns of A. The n×n identity matrix is represented by In×n. The Kronecker product and the element-wise product are denoted by ⊗ and ⊙, respectively. Finally, x ∼ NC(0, R) denotes a circularly symmetric complex Gaussian random vector with correlation matrix R.
II. SYSTEM MODEL

In this paper, we investigate a CF mMIMO system consisting of M APs and K UEs, where all APs are connected to one or several CPUs via fronthaul links, as shown in Fig. 1. For simplicity, there is only one CPU and all APs serve all UEs¹. The numbers of antennas per AP and per UE are L and N, respectively. A standard block fading model is investigated, in which the channel response is constant and frequency flat within a coherence block of τc channel uses. Let τp and τc − τp denote the numbers of channel uses dedicated to channel estimation and data transmission, respectively. We denote by Hmk ∈ C^{L×N} the channel response between AP m and UE k. We assume that the Hmk for different AP-UE pairs are independent.
A. Channel Model

Based on the jointly-correlated (also known as the Weichselberger model [29]) Rayleigh fading channel², Hmk is modeled as

Hmk = Umk,r ( Ω̃mk ⊙ Hmk,iid ) Umk,t^H,   (1)

¹As shown in Fig. 1, a more practical network topology features multiple CPUs and dynamic cooperation clusters, where each UE is served only by a cluster of APs and the APs are grouped into cell-centric clusters. Each cell-centric cluster is connected to a particular CPU.

²Note that the Rayleigh fading channel is a special case of the Rician fading channel, and the performance gap between the Rician channel and the Rayleigh channel is small [34]. However, the focus of this paper is not the channel model but the UL precoding scheme design. So, for simplicity of analysis, we investigate an essential Rayleigh fading channel by assuming there is no line-of-sight (LoS) link between each UE and AP.
where Umk,r = [umk,r,1, · · · , umk,r,L] ∈ C^{L×L} and Umk,t = [umk,t,1, · · · , umk,t,N] ∈ C^{N×N} are the eigenvector matrices of the one-sided correlation matrices Rmk,r ≜ E{Hmk Hmk^H} and Rmk,t ≜ E{Hmk^T Hmk^*}, respectively, and Hmk,iid ∈ C^{L×N} is composed of i.i.d. NC(0, 1) random entries. Besides, we denote by Ωmk ≜ Ω̃mk ⊙ Ω̃mk ∈ R^{L×N} the "eigenmode coupling matrix", whose (l, n)-th element [Ωmk]ln specifies the average amount of power coupled from umk,r,l to umk,t,n. Hmk can also be written as Hmk = [hmk,1, · · · , hmk,N], with hmk,n ∈ C^L being the channel between AP m and the n-th antenna of UE k. By stacking the columns of Hmk on each other, we define hmk ≜ vec(Hmk) = [hmk,1^T, · · · , hmk,N^T]^T ∼ NC(0, Rmk), where Rmk ≜ E{hmk hmk^H} is the full correlation matrix

Rmk = (Umk,t^* ⊗ Umk,r) diag(vec(Ωmk)) (Umk,t^* ⊗ Umk,r)^H.   (2)
Moreover, note that Rmk can be structured into the block form of [28], with the (n, i)-th submatrix being Rmk^{ni} = E{hmk,n hmk,i^H}. Besides, the large-scale fading coefficient βmk can be extracted from Rmk as βmk = tr(Rmk)/(LN) = ∥Ωmk∥₁/(LN). It is worth mentioning that the motivations for adopting the Weichselberger channel model are: 1) the Weichselberger model in (1) not only captures the correlation features at both the AP side and the UE side but also models the joint correlation dependence between each AP-UE pair through the coupling matrix; 2) the coupling matrix Ωmk reflects the practical spatial arrangement of scattering objects between AP m and UE k; more significantly, the Weichselberger model reduces to most channel models of great interest, such as the Kronecker model and the i.i.d. Rayleigh fading model, by adjusting the coupling matrix Ωmk to a particular form [28], [29]; 3) compared with other stochastic channel models, the Weichselberger model displays significantly less modeling error, as validated by the practical measurements in [29].
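The structure in (1)-(2) can be checked numerically. The sketch below (toy dimensions and a randomly drawn coupling matrix, all values hypothetical) generates one Weichselberger realization and verifies the vectorized identity vec(Hmk) = (Umk,t^* ⊗ Umk,r) diag(vec(Ω̃mk)) vec(Hmk,iid), which is exactly the relation behind the full correlation matrix in (2), as well as βmk = ∥Ωmk∥₁/(LN):

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 2   # antennas per AP / per UE (toy sizes)

def rand_unitary(n):
    # QR of a complex Gaussian matrix yields a random unitary factor
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

U_r, U_t = rand_unitary(L), rand_unitary(N)
Omega = rng.uniform(0.1, 1.0, size=(L, N))        # eigenmode coupling matrix Omega_mk
Omega_sqrt = np.sqrt(Omega)                        # Omega~_mk

H_iid = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)
H = U_r @ (Omega_sqrt * H_iid) @ U_t.conj().T      # eq. (1)

# vec(U_r (Om~ (.) H_iid) U_t^H) = (U_t^* kron U_r) diag(vec(Om~)) vec(H_iid)
h = H.flatten(order="F")                           # column-stacking vec(.)
B = np.kron(U_t.conj(), U_r) @ np.diag(Omega_sqrt.flatten(order="F"))
assert np.allclose(h, B @ H_iid.flatten(order="F"))

# hence R_mk = E{h h^H} takes the form of eq. (2), and beta_mk = ||Omega||_1 / (LN)
R = np.kron(U_t.conj(), U_r) @ np.diag(Omega.flatten(order="F")) @ np.kron(U_t.conj(), U_r).conj().T
beta = np.trace(R).real / (L * N)
assert np.isclose(beta, np.abs(Omega).sum() / (L * N))
print("ok")
```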
B. Channel Estimation

For the channel estimation, mutually orthogonal pilot matrices are constructed, and each pilot matrix is composed of N mutually orthogonal pilot sequences. We denote by Φk the pilot matrix assigned to UE k, with Φk^H Φl = τp I_N if l = k and 0 otherwise, and by Pk the index subset of UEs using the same pilot matrix as UE k, including itself. When all UEs transmit their pilot matrices, the received signal Ym^p ∈ C^{L×τp} at AP m is Ym^p = Σ_{k=1}^K Hmk Fk,p Φk^T + Nm^p, where Fk,p ∈ C^{N×N} is the precoding matrix of UE k in the pilot transmission phase and Nm^p ∈ C^{L×τp} is the additive noise at AP m with independent NC(0, σ²) entries, σ² being the noise power. The pilot transmission should satisfy the power constraint tr(Fk,p Fk,p^H) ⩽ pk, where pk is the maximum transmit power of UE k. To derive sufficient statistics for hmk, AP m projects Ym^p onto Φk^* as Ymk^p = Ym^p Φk^* = Σ_{l=1}^K Hml Fl,p (Φl^T Φk^*) + Nm^p Φk^* = Σ_{l∈Pk} τp Hml Fl,p + Qmk^p, where Qmk^p ≜ Nm^p Φk^*. Then, following the standard MMSE estimation steps in [35] and [36], AP m can compute the MMSE estimate of hmk as

ĥmk = vec(Ĥmk) = Rmk F̃k,p^H Ψmk^{-1} ymk^p,   (3)

where Ĥmk is the MMSE estimate of Hmk, F̃k,p = Fk,p^T ⊗ I_L, ymk^p ≜ vec(Ymk^p) = Σ_{l∈Pk} τp F̃l,p hml + qmk^p, qmk^p = vec(Qmk^p), and Ψmk = Σ_{l∈Pk} τp F̃l,p Rml F̃l,p^H + σ² I_{LN}, respectively. Note that the estimate ĥmk and the estimation error h̃mk = hmk − ĥmk are independent random vectors distributed as ĥmk ∼ NC(0, R̂mk) and h̃mk ∼ NC(0, Cmk), where R̂mk ≜ τp Rmk F̃k,p^H Ψmk^{-1} F̃k,p Rmk and Cmk ≜ Rmk − R̂mk. We can also form R̂mk and Cmk in block structure, with the (n, i)-th submatrices being R̂mk^{ni} = E{ĥmk,n ĥmk,i^H} and Cmk^{ni} = E{h̃mk,n h̃mk,i^H}, respectively.
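For concreteness, the estimator statistics around (3) can be sketched as follows. The snippet below (toy dimensions, a randomly generated Rmk and pilot precoder, and no pilot contamination, i.e. Pk = {k} — all assumptions for illustration, not the paper's setup) forms Ψmk, the estimate covariance R̂mk, and the error covariance Cmk = Rmk − R̂mk, and checks that Cmk is a valid covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, tau_p, sigma2, p_k = 2, 2, 4, 0.5, 1.0   # toy values (hypothetical)

# a random Hermitian PSD full correlation matrix R_mk
A = rng.standard_normal((L * N, L * N)) + 1j * rng.standard_normal((L * N, L * N))
R = A @ A.conj().T / (L * N)

# pilot precoder F_k,p scaled so that tr(F F^H) <= p_k
F_p = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
F_p *= np.sqrt(p_k / np.trace(F_p @ F_p.conj().T).real)
F_tilde = np.kron(F_p.T, np.eye(L))            # F~_{k,p} = F_{k,p}^T kron I_L

# no pilot contamination (P_k = {k}): Psi_mk = tau_p F~ R F~^H + sigma^2 I_{LN}
Psi = tau_p * F_tilde @ R @ F_tilde.conj().T + sigma2 * np.eye(L * N)
R_hat = tau_p * R @ F_tilde.conj().T @ np.linalg.solve(Psi, F_tilde @ R)   # cov of h_hat
C = R - R_hat                                  # error covariance C_mk

assert np.allclose(C, C.conj().T)              # Hermitian
assert np.linalg.eigvalsh(C).min() >= -1e-8    # PSD: estimation removes, never adds, power
print("captured power fraction:", round(np.trace(R_hat).real / np.trace(R).real, 3))
```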
C. Data Transmission

For the data transmission, all antennas of all UEs simultaneously transmit their data symbols to all APs. The received signal ym ∈ C^L at AP m is

ym = Σ_{k=1}^K Hmk sk + nm,   (4)

where nm ∼ NC(0, σ² I_L) is the independent receiver noise. The transmitted signal sk ∈ C^N of UE k is constructed as sk = Fk,u xk, where xk ∼ NC(0, I_N) is the data symbol of UE k and Fk,u ∈ C^{N×N} is the precoding matrix for the data transmission, which should satisfy the power constraint of UE k, tr(Fk,u Fk,u^H) ⩽ pk.
III. SPECTRAL EFFICIENCY ANALYSIS AND I-WMMSE PRECODING DESIGN

In this section, we investigate two promising signal processing schemes, called "fully centralized processing" and "LSFD processing", analyze their corresponding SE performance, and design their respective iterative WMMSE precoding schemes³.
A. Fully Centralized Processing

1) Spectral Efficiency Analysis: For the fully centralized processing scheme, all M APs send all the received pilot signals and data signals to the CPU, so both the channel estimation and the data detection are implemented at the CPU. The collective channel hk ∈ C^{MLN} of UE k can be constructed as hk = [vec(H1k)^T, · · · , vec(HMk)^T]^T ∼ NC(0, Rk), with Rk = diag(R1k, · · · , RMk) ∈ C^{MLN×MLN} being the block-diagonal full correlation matrix of UE k. Similar to (3), the CPU can derive the channel estimate for UE k as⁴ ĥk ≜ [ĥ1k^T, . . . , ĥMk^T]^T ∼ NC(0, τp Rk F̄k,p^H Ψk^{-1} F̄k,p Rk), where F̄k,p = diag(F̃k,p, . . . , F̃k,p) with M identical diagonal blocks and Ψk^{-1} = diag(Ψ1k^{-1}, . . . , ΨMk^{-1}). The channel estimation error is h̃k ∼ NC(0, Ck), where Ck ≜ Rk − τp Rk F̄k,p^H Ψk^{-1} F̄k,p Rk. Moreover, the received data signal at the CPU can be written compactly as

y = [y1^T, · · · , yM^T]^T = Σ_{k=1}^K Hk Fk,u xk + n,   (5)

where Hk ≜ [H1k^T, . . . , HMk^T]^T and n ≜ [n1^T, . . . , nM^T]^T.

³We only optimize the precoding matrices Fk,u for the data transmission phase; the optimization of Fk,p is left for future research. Although we do not design Fk,p in this paper, we keep the derived equations general: a scenario with arbitrary Fk,p, instead of limiting Fk,p to a particular form, is investigated. It is worth noting that all equations in this paper hold for any Fk,p, which provides important guidelines for future work on the optimization of Fk,p.
Under the setting of "fully centralized processing", we assume that the UL precoding matrices (Fk,u and Fk,p) are available at the CPU. Based on the collective channel estimates, the CPU designs an arbitrary receive combining matrix Vk ∈ C^{LM×N} for UE k to detect xk as

x̌k = Vk^H y = Vk^H Ĥk Fk,u xk + Vk^H H̃k Fk,u xk + Σ_{l≠k}^K Vk^H Hl Fl,u xl + Vk^H n,   (6)

and the conditional MSE matrix for UE k is

Ek,(1) = E{(xk − x̌k)(xk − x̌k)^H | {Ĥk}, {Fk,u}}
       = I_N − Vk^H Ĥk Fk,u − Fk,u^H Ĥk^H Vk + Vk^H ( Σ_{l=1}^K ( Ĥl F̄l,u Ĥl^H + C′l ) + σ² I_ML ) Vk,   (7)

where F̄l,u ≜ Fl,u Fl,u^H, C′l ≜ diag(C′1l, · · · , C′Ml) ∈ C^{ML×ML}, and C′ml = E{H̃ml F̄l,u H̃ml^H} ∈ C^{L×L} with the (j, q)-th element [C′ml]jq = Σ_{p1=1}^N Σ_{p2=1}^N [F̄l,u]p2p1 [Cml^{p2p1}]jq.
By implementing the per-user-basis minimum mean-squared error-based successive interference cancellation (MMSE-SIC) detector while treating co-user interference as uncorrelated Gaussian noise, we derive the achievable SE for UE k as follows.

Corollary 1. An achievable SE for UE k under the setting of "fully centralized processing" with the MMSE estimator is

SEk,(1) = (1 − τp/τc) E{ log2 | I_N + Dk,(1)^H Σk,(1)^{-1} Dk,(1) | },   (8)

where Dk,(1) ≜ Vk^H Ĥk Fk,u and Σk,(1) ≜ Vk^H ( Σ_{l=1}^K Ĥl F̄l,u Ĥl^H − Ĥk F̄k,u Ĥk^H + Σ_{l=1}^K C′l + σ² I_ML ) Vk. The expectations are with respect to all sources of randomness.

Proof. The proof follows from a similar approach as [28, Corollary 1] and is therefore omitted.

⁴Note that the pilot signals received at the APs are first transmitted to the CPU and the CPU then estimates the channels, where τpML complex scalars are sent from the APs to the CPU in each coherence block. Alternatively, all APs could first estimate the channels as in (3) and then send their channel estimates to the CPU, where MKLN complex scalars would be sent from the APs to the CPU in each coherence block. Since pilot contamination is investigated (τp < KN) in this paper, we consider the first transmission protocol due to its lower fronthaul overhead.
We notice that Corollary 1 holds for any combining scheme. One promising choice is the MMSE combining

Vk^MMSE = ( Σ_{l=1}^K ( Ĥl F̄l,u Ĥl^H + C′l ) + σ² I_ML )^{-1} Ĥk Fk,u,   (9)

which minimizes the mean-squared error MSEk,(1) = tr(Ek,(1)). With the MMSE combining scheme, the conditional MSE matrix in (7) becomes

Ek,(1)^opt = I_N − Fk,u^H Ĥk^H ( Σ_{l=1}^K ( Ĥl F̄l,u Ĥl^H + C′l ) + σ² I_ML )^{-1} Ĥk Fk,u.   (10)
|
493 |
+
Corollary 2. The achievable SE for UE k in (8) can be maximized by the MMSE combining scheme in
|
494 |
+
(9) with the maximum value
|
495 |
+
SEopt
|
496 |
+
k,(1) =
|
497 |
+
�
|
498 |
+
1 − τp
|
499 |
+
τc
|
500 |
+
�
|
501 |
+
E
|
502 |
+
|
503 |
+
|
504 |
+
log2
|
505 |
+
������
|
506 |
+
IN + FH
|
507 |
+
k,u ˆHH
|
508 |
+
k
|
509 |
+
� K
|
510 |
+
�
|
511 |
+
l=1
|
512 |
+
�
|
513 |
+
ˆHl¯Fl,u ˆHH
|
514 |
+
l + C′
|
515 |
+
l
|
516 |
+
�
|
517 |
+
− ˆHk¯Fk,u ˆHH
|
518 |
+
k + σ2IML
|
519 |
+
�−1
|
520 |
+
ˆHkFk,u
|
521 |
+
������
|
522 |
+
|
523 |
+
|
524 |
+
.
|
525 |
+
(11)
|
526 |
+
Proof. The proof can be found in [28, Appendix B] and is therefore omitted.
|
527 |
+
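The equality between (8) evaluated at the MMSE combiner (9) and the closed form (11) can be verified numerically on a single channel realization. The sketch below (random toy-sized channel estimates, precoders, and error covariances — all hypothetical stand-ins) computes the SE both ways:

```python
import numpy as np

rng = np.random.default_rng(2)
ML, N, K, sigma2 = 8, 2, 3, 1.0   # ML = M*L stacked antennas (toy sizes)

# random channel estimates H_hat_l, precoders F_l,u, and PSD error terms C'_l
H_hat = [rng.standard_normal((ML, N)) + 1j * rng.standard_normal((ML, N)) for _ in range(K)]
F_u = [rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)) for _ in range(K)]
C_err = []
for _ in range(K):
    X = rng.standard_normal((ML, ML)) + 1j * rng.standard_normal((ML, ML))
    C_err.append(0.1 * X @ X.conj().T / ML)

S = sum(H_hat[l] @ F_u[l] @ F_u[l].conj().T @ H_hat[l].conj().T + C_err[l] for l in range(K))
k = 0
G = H_hat[k] @ F_u[k]                             # H_hat_k F_k,u
V = np.linalg.solve(S + sigma2 * np.eye(ML), G)   # MMSE combining, eq. (9)

# SE via eq. (8) with V, versus the closed form of eq. (11)
T = S - G @ G.conj().T + sigma2 * np.eye(ML)      # interference-plus-noise covariance
D = V.conj().T @ G
Sigma = V.conj().T @ T @ V
se8 = np.log2(np.linalg.det(np.eye(N) + D.conj().T @ np.linalg.solve(Sigma, D)).real)
se11 = np.log2(np.linalg.det(np.eye(N) + G.conj().T @ np.linalg.solve(T, G)).real)
assert np.isclose(se8, se11)
print("ok")
```

(The pre-log factor (1 − τp/τc) and the outer expectation are omitted; only the per-realization log-det term is compared.)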
2) Iteratively WMMSE Precoding Design: In this part, we design the uplink precoding scheme for the "fully centralized processing". The popular weighted sum-rate maximization problem is investigated as⁵

max_{F} Σ_{k=1}^K µk,(1) SEk,(1)   s.t. ∥Fk,u,(1)∥² ⩽ pk, ∀k = 1, . . . , K,   (12)

where µk,(1) represents the priority weight of UE k and SEk,(1) is given by (8).

⁵The notation F is short for {Fk,u}k=1,...,K, denoting all variables Fk,u with k = 1, . . . , K. Similar definitions apply to V, A, W, and S in the following. In this section, we denote by Fk,u,(1) and Fk,u,(2) the UL precoding matrices of UE k for the fully centralized processing scheme and the LSFD scheme, respectively.
As in [30] and [32], the matrix-weighted sum-MSE minimization problem

min_{F,V,W} Σ_{k=1}^K µk,(1) ( tr( Wk,(1) Ek,(1) ) − log2 | Wk,(1) | )   s.t. ∥Fk,u,(1)∥² ⩽ pk, ∀k = 1, . . . , K,   (13)

is equivalent to the weighted sum-rate maximization problem (12), where Wk,(1) ∈ C^{N×N} is the weight matrix of UE k. We notice that (13) is convex in each optimization variable F, V, W, but not jointly convex over all optimization variables. Following the method in [30], we can solve (13) by sequentially fixing two of the three optimization variables F, V, W and updating the third.
Fixing the other variables, the update of Vk is given by the MMSE solution (9), under which the MSE matrix is given by (10). Then, note that the optimal Wk,(1) for (13) is

Wk,(1)^opt = Ek,(1)^{-1},   (14)

which is easily derived from the first-order optimality condition for Wk,(1) with F and V fixed.
Remark 1. When the MMSE combining Vk^MMSE and Wk,(1)^opt of all UEs are implemented in (13), we have tr(Wk,(1) Ek,(1)) − log2|Wk,(1)| = tr(I_N) − log2|(Ek,(1)^opt)^{-1}|. So the matrix-weighted sum-MSE minimization problem in (13) reduces to the equivalent form of (12)⁶:

max_{F} Σ_{k=1}^K µk,(1) log2 | ( Ek,(1)^opt )^{-1} |   s.t. ∥Fk,u,(1)∥² ⩽ pk, ∀k = 1, . . . , K,   (15)

which reflects the well-known relationship between Ek,(1)^opt and SEk,(1)^opt.
Finally, fixing V and W, the update of Fk,u,(1) for (13) results in the following optimization problem⁷:

⁶Note that "SE" is equivalent to "rate" up to the scaling factor (τc − τp)/τc. Since τc and τp are constants in this paper, we ignore the difference between SE and rate in the optimization problems.

⁷It is worth mentioning that each optimization variable is updated with the other optimization variables fixed. For instance, when updating Fk,u,(1), we fix the other optimization variables (Vk and Wk,(1)), which are not limited to their respective optimal solutions Vk^MMSE and Wk,(1)^opt. So we update Fk,u,(1) based on (16) with generalized Vk and Wk,(1), instead of based on (15) with the optimal Vk^MMSE and Wk,(1)^opt.
min_{F} Σ_{k=1}^K µk,(1) tr( Wk,(1) ( I_N − Vk^H Ĥk Fk,u,(1) ) ( I_N − Vk^H Ĥk Fk,u,(1) )^H )
     + Σ_{k=1}^K µk,(1) tr( Wk,(1) Vk^H ( Σ_{l≠k}^K Ĥl Fl,u,(1) Fl,u,(1)^H Ĥl^H ) Vk )
     − Σ_{k=1}^K µk,(1) log2 | Wk,(1) |
     + Σ_{k=1}^K µk,(1) tr( Wk,(1) Vk^H ( Σ_{l=1}^K E{ H̃l Fl,u,(1) Fl,u,(1)^H H̃l^H | F } + σ² I_ML ) Vk )
s.t. ∥Fk,u,(1)∥² ⩽ pk, ∀k = 1, . . . , K,   (16)

which is a convex quadratic optimization problem, so the classic Lagrange multiplier method and the Karush-Kuhn-Tucker (KKT) conditions can be applied to derive an optimal solution. The Lagrange function of (16) is

f( F1,u,(1), . . . , FK,u,(1) ) = Σ_{k=1}^K µk,(1) tr( Wk,(1) ( I_N − Vk^H Ĥk Fk,u,(1) ) ( I_N − Vk^H Ĥk Fk,u,(1) )^H )
     + Σ_{k=1}^K µk,(1) tr( Wk,(1) Vk^H ( Σ_{l≠k}^K Ĥl Fl,u,(1) Fl,u,(1)^H Ĥl^H ) Vk )
     + Σ_{k=1}^K µk,(1) tr( Wk,(1) Vk^H ( Σ_{l=1}^K E{ H̃l Fl,u,(1) Fl,u,(1)^H H̃l^H | F } + σ² I_ML ) Vk )
     + Σ_{k=1}^K λk,(1) ( tr( Fk,u,(1) Fk,u,(1)^H ) − pk ).   (17)
Finally, we derive the optimal precoding scheme in the following theorem.

Theorem 1. By fixing the other optimization variables and applying the first-order optimality condition of (17) with respect to each Fk,u,(1), the optimal precoding scheme is given by

Fk,u,(1)^opt = µk,(1) ( Σ_{l=1}^K µl,(1) ( Ĥk^H Vl Wl,(1) Vl^H Ĥk + E{ H̃k^H Vl Wl,(1) Vl^H H̃k | V, W } ) + λk,(1) I_N )^{-1} Ĥk^H Vk Wk,(1)
            = µk,(1) ( Σ_{l=1}^K µl,(1) ( Ĥk^H Vl Wl,(1) Vl^H Ĥk + C̄kl ) + λk,(1) I_N )^{-1} Ĥk^H Vk Wk,(1),   (18)

where λk,(1) ⩾ 0 is the Lagrange multiplier and the (i, n)-th element of C̄kl ≜ E{ H̃k^H Vl Wl,(1) Vl^H H̃k | V, W } ∈ C^{N×N} is [C̄kl]in = tr( V̄l E{h̃k,n h̃k,i^H} ) = tr( V̄l Ck,ni ), with V̄l ≜ Vl Wl,(1) Vl^H and Ck,ni ≜ E{h̃k,n h̃k,i^H} = diag( C1k^{ni}, . . . , CMk^{ni} ) ∈ C^{ML×ML}. According to the KKT conditions, λk,(1) and Fk,u,(1) should also satisfy

∥Fk,u,(1)∥² ⩽ pk,   λk,(1) ( ∥Fk,u,(1)∥² − pk ) = 0,   λk,(1) ⩾ 0.   (19)

Proof: The proof is given in Appendix D.
We denote by Fk,u,(1)(λk,(1)) the right-hand side of (18). When Σ_{l=1}^K µl,(1) ( Ĥk^H Vl Wl,(1) Vl^H Ĥk + C̄kl ) is invertible and tr[Fk,u,(1)(0) Fk,u,(1)(0)^H] ⩽ pk, then Fk,u,(1)^opt = Fk,u,(1)(0); otherwise, we must have tr[Fk,u,(1)(λk,(1)) Fk,u,(1)(λk,(1))^H] = pk to satisfy (19).
Corollary 3. tr[Fk,u,(1)(λk,(1)) Fk,u,(1)(λk,(1))^H] is a monotonically decreasing function of λk,(1).

Proof: Let DΛD^H denote the eigendecomposition of Σ_{l=1}^K µl,(1) ( Ĥk^H Vl Wl,(1) Vl^H Ĥk + C̄kl ). Following the method in [30], we define Φ = µk,(1)² D^H Ĥk^H Vk Wk,(1)² Vk^H Ĥk D, and we have

tr[Fk,u,(1)(λk,(1)) Fk,u,(1)(λk,(1))^H] = tr( ( DΛD^H + λk,(1) I_N )^{-1} DΦD^H ( DΛD^H + λk,(1) I_N )^{-1} )
     = tr( ( DΛD^H + λk,(1) I_N )^{-2} DΦD^H ) = tr( ( Λ + λk,(1) I_N )^{-2} Φ ) = Σ_{n=1}^N [Φ]nn / ( [Λ]nn + λk,(1) )²,   (20)

so tr[Fk,u,(1)(λk,(1)) Fk,u,(1)(λk,(1))^H] is a monotonically decreasing function of λk,(1).
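The power expression in (20) turns the search for λk,(1) into a one-dimensional monotone root-finding problem. The sketch below (random positive diagonals for [Λ]nn and [Φ]nn and a hypothetical power budget pk = 1 — toy values only) implements the 1-D bisection suggested by Corollary 3:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p_k = 4, 1.0

Lam = rng.uniform(0.5, 2.0, size=N)    # eigenvalues [Lambda]_nn (assumed positive)
Phi = rng.uniform(0.5, 2.0, size=N)    # diagonal entries [Phi]_nn

def tx_power(lam):
    # tr[F(lam) F(lam)^H] = sum_n [Phi]_nn / ([Lambda]_nn + lam)^2, eq. (20)
    return np.sum(Phi / (Lam + lam) ** 2)

if tx_power(0.0) <= p_k:
    lam_opt = 0.0                       # constraint inactive: lambda = 0 satisfies (19)
else:
    lo, hi = 0.0, 1.0
    while tx_power(hi) > p_k:           # grow upper bound: power decreases in lambda
        hi *= 2.0
    for _ in range(100):                # 1-D bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if tx_power(mid) > p_k else (lo, mid)
    lam_opt = 0.5 * (lo + hi)
    assert abs(tx_power(lam_opt) - p_k) < 1e-9   # power constraint met with equality
print("ok")
```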
Based on Corollary 3, the optimum λk,(1) (denoted by λk,(1)^opt) can easily be obtained by a one-dimensional (1-D) bisection algorithm, from which we obtain the solution Fk,u,(1)(λk,(1)^opt). Furthermore, an iterative optimization algorithm for Fk,u,(1), called the "iteratively WMMSE (I-WMMSE) algorithm", is summarized in Algorithm 1⁸. The convergence of Algorithm 1 is proven in [30, Theorem 3].
Remark 2. Note that the design of Fk,p is a valuable future direction to further improve the system performance. One valuable optimization problem is to minimize the total MSE of the channel estimates of all UEs as

min_{Fk,p} Σ_{k=1}^K tr(Ck)   s.t. ∥Fk,p∥² ⩽ pk, ∀k = 1, . . . , K,   (21)

where the objective depends only on statistical knowledge, so Fk,p would also be designed based only on statistical knowledge.
⁸To balance the efficiency and the computational complexity of the proposed algorithm, we also include the stopping criterion R(1)^(i) < R(1)^(i−1). In this case, the I-WMMSE precoding scheme derived at iteration (i − 1) is output, since it may achieve a higher sum SE than the one at iteration i.
Algorithm 1: I-WMMSE Algorithm for the Design of Fk,u,(1)
Input: Collective channel estimates Ĥk for all UEs; estimation error covariance matrices Cml for all AP-UE pairs; UE weights µk,(1) for all UEs;
Output: Optimal precoding matrices Fk,u,(1) for all UEs (Fk,u,(1)^(i) under the first or third stopping criterion and Fk,u,(1)^(i−1) under the second stopping criterion);
1 Initialization: i = 0; Fk,u,(1)^(0) and R(1)^(0) = Σ_{k=1}^K µk,(1) SEk,(1)^(0) for all UEs; maximum iteration number I(1),max and threshold ε(1);
2 repeat
3   i = i + 1;
4   Update the MMSE combining matrices Vk^(i) with Fk,u,(1)^(i−1) based on (9);
5   Update the MSE matrices Ek,(1)^(i) with Fk,u,(1)^(i−1) based on (10), and update Wk,(1)^(i) based on (14);
6   Update the precoding matrices Fk,u,(1)^(i) with Vk^(i) and Wk,(1)^(i) based on (18), where λk,(1)^(i) is found by a bisection algorithm;
7   Update the weighted sum rate R(1)^(i) = Σ_{k=1}^K µk,(1) SEk,(1)^(i);
8 until |R(1)^(i) − R(1)^(i−1)| / R(1)^(i−1) ⩽ ε(1) or R(1)^(i) < R(1)^(i−1) or i ⩾ I(1),max;
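Algorithm 1 can be prototyped in a few dozen lines. The sketch below (a toy abstraction with perfect CSI, i.e. C′l = 0 and C̄kl = 0, equal weights, and random channels — all simplifying assumptions, not the paper's simulation setup) runs the V/W/F updates of steps 4-6 with the λ-bisection of Corollary 3 and checks that the weighted sum rate is non-decreasing over iterations, consistent with [30, Theorem 3]:

```python
import numpy as np

rng = np.random.default_rng(4)
ML, N, K, sigma2, p = 8, 2, 3, 1.0, 1.0   # ML = M*L stacked antennas; toy sizes

# perfect-CSI simplification: estimation errors are zero, so C'_l = 0 and Ckl_bar = 0
H = [rng.standard_normal((ML, N)) + 1j * rng.standard_normal((ML, N)) for _ in range(K)]
F = [np.sqrt(p / N) * np.eye(N, dtype=complex) for _ in range(K)]   # feasible start
mu = np.ones(K)                                                     # equal priority weights

def weighted_sum_rate(F):
    S = sum(H[l] @ F[l] @ F[l].conj().T @ H[l].conj().T for l in range(K)) + sigma2 * np.eye(ML)
    r = 0.0
    for k in range(K):
        G = H[k] @ F[k]
        T = S - G @ G.conj().T                                      # interference + noise
        r += mu[k] * np.log2(np.linalg.det(np.eye(N) + G.conj().T @ np.linalg.solve(T, G)).real)
    return r

rates = [weighted_sum_rate(F)]
for _ in range(20):                                                 # steps 3-7 of Algorithm 1
    S = sum(H[l] @ F[l] @ F[l].conj().T @ H[l].conj().T for l in range(K)) + sigma2 * np.eye(ML)
    V = [np.linalg.solve(S, H[k] @ F[k]) for k in range(K)]         # MMSE combining, eq. (9)
    W = []
    for k in range(K):
        G = H[k] @ F[k]
        E = np.eye(N) - G.conj().T @ np.linalg.solve(S, G)          # MSE matrix, eq. (10)
        W.append(np.linalg.inv(E))                                  # weight update, eq. (14)
    newF = []
    for k in range(K):                                              # eq. (18) + Corollary 3
        M_k = sum(mu[l] * H[k].conj().T @ V[l] @ W[l] @ V[l].conj().T @ H[k] for l in range(K))
        B = mu[k] * H[k].conj().T @ V[k] @ W[k]
        Fk = np.linalg.solve(M_k, B)                                # try lambda = 0 first
        if np.trace(Fk @ Fk.conj().T).real > p:
            lo, hi = 0.0, 1.0
            Fk = np.linalg.solve(M_k + hi * np.eye(N), B)
            while np.trace(Fk @ Fk.conj().T).real > p:              # power decreases in lambda
                hi *= 2.0
                Fk = np.linalg.solve(M_k + hi * np.eye(N), B)
            for _ in range(60):                                     # 1-D bisection
                mid = 0.5 * (lo + hi)
                Fk = np.linalg.solve(M_k + mid * np.eye(N), B)
                lo, hi = (mid, hi) if np.trace(Fk @ Fk.conj().T).real > p else (lo, mid)
        newF.append(Fk)
    F = newF
    rates.append(weighted_sum_rate(F))

assert all(r2 >= r1 - 1e-6 for r1, r2 in zip(rates, rates[1:]))     # monotone convergence
print("ok")
```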
B. Large-Scale Fading Decoding

1) Spectral Efficiency Analysis: Another promising processing scheme is "large-scale fading decoding", a two-layer scheme for decoding the data symbols. Note that the UL precoding matrices (Fk,u and Fk,p) are assumed to be available at all APs and at the CPU. In the first layer, AP m applies an arbitrary combining matrix Vmk ∈ C^{L×N} to derive a local estimate of xk as

x̃mk = Vmk^H ym = Vmk^H Hmk Fk,u xk + Σ_{l=1,l≠k}^K Vmk^H Hml Fl,u xl + Vmk^H nm.   (22)
We notice that Vmk is designed based on the local channel estimates at AP m, and one handy choice is MR combining, Vmk = Ĥmk. Moreover, local MMSE (L-MMSE) combining,

Vmk = ( Σ_{l=1}^K ( Ĥml F̄l,u Ĥml^H + C′ml ) + σ² I_L )^{-1} Ĥmk Fk,u,   (23)

is also regarded as a promising scheme, since (23) minimizes E{ ∥xk − Vmk^H ym∥² | {Ĥmk}, {Fk,u} }.
In the second layer, the "LSFD" method is implemented at the CPU [5]. The CPU weights the local estimates x̃mk from all APs by the LSFD coefficient matrices as

x̂k = Σ_{m=1}^M Amk^H x̃mk = Σ_{m=1}^M Amk^H Vmk^H Hmk Fk,u xk + Σ_{m=1}^M Σ_{l=1,l≠k}^K Amk^H Vmk^H Hml Fl,u xl + n′k,   (24)

where Amk ∈ C^{N×N} is the complex LSFD coefficient matrix for the AP m-UE k pair and n′k = Σ_{m=1}^M Amk^H Vmk^H nm.
Moreover, we can rewrite ˆxk in a more compact form as
|
1006 |
+
ˆxk = AH
|
1007 |
+
k GkkFk,uxk +
|
1008 |
+
K
|
1009 |
+
�
|
1010 |
+
l=1,l̸=k
|
1011 |
+
AH
|
1012 |
+
k GklFl,uxl + n′
|
1013 |
+
k = AH
|
1014 |
+
k
|
1015 |
+
�
|
1016 |
+
GkkFk,uxk +
|
1017 |
+
K
|
1018 |
+
�
|
1019 |
+
l=1,l̸=k
|
1020 |
+
GklFl,uxl + ˜n′
|
1021 |
+
k
|
1022 |
+
�
|
1023 |
+
�
|
1024 |
+
��
|
1025 |
+
�
|
1026 |
+
˜xk
|
1027 |
+
(25)
|
1028 |
+
where Ak ≜ [AT
|
1029 |
+
1k, . . . , AT
|
1030 |
+
Mk]T ∈ CMN×N, Gkl ≜ [VH
|
1031 |
+
1kH1l; . . . ; VH
|
1032 |
+
MkHMl] ∈ CMN×N and
|
1033 |
+
˜n′
|
1034 |
+
k =
|
1035 |
+
�
|
1036 |
+
VH
|
1037 |
+
1kn1; . . . ; VH
|
1038 |
+
MknM
|
1039 |
+
�
|
1040 |
+
∈ CMN×N.
|
Note that the CPU does not have knowledge of the channel estimates and is only aware of the channel statistics [5]. The conditional MSE matrix for UE k, E_{k,(2)} \triangleq E\{ (x_k - \hat{x}_k)(x_k - \hat{x}_k)^H \,|\, \Theta \}, is

E_{k,(2)} = I_N - F_{k,u}^H E\{G_{kk}^H\} A_k - A_k^H E\{G_{kk}\} F_{k,u} + A_k^H \left( \sum_{l=1}^{K} E\{ G_{kl} \bar{F}_{l,u} G_{kl}^H \} + \sigma^2 S_k \right) A_k, (26)

where \Theta denotes all the channel statistics and S_k = diag( E\{V_{1k}^H V_{1k}\}, \ldots, E\{V_{Mk}^H V_{Mk}\} ) \in C^{MN \times MN}. Then, we apply the classical use-and-then-forget bound to obtain the following ergodic achievable SE.

Corollary 4. For the "LSFD" scheme, an achievable SE for UE k can be written as

SE_{k,(2)} = \left( 1 - \frac{\tau_p}{\tau_c} \right) \log_2 \left| I_N + D_{k,(2)}^H \Sigma_{k,(2)}^{-1} D_{k,(2)} \right|, (27)

where \Sigma_{k,(2)} = \sum_{l=1}^{K} A_k^H E\{G_{kl} \bar{F}_{l,u} G_{kl}^H\} A_k - D_{k,(2)} D_{k,(2)}^H + \sigma^2 A_k^H S_k A_k and D_{k,(2)} = A_k^H E\{G_{kk}\} F_{k,u}.
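To make the bound concrete, (27) can be evaluated numerically once the statistical quantities E{G_kk}, Σ_l E{G_kl F̄_{l,u} G_kl^H}, and S_k are available (e.g., via Monte-Carlo averaging over channel realizations). The sketch below is a minimal numerical check with synthetic placeholder statistics, not the paper's channel model; the function name `se_lsfd` and all dimensions are illustrative assumptions.

```python
import numpy as np

def se_lsfd(A_k, EG_kk, EG_sum, S_k, F_ku, sigma2, tau_p, tau_c):
    """Achievable SE bound (27): (1 - tau_p/tau_c) * log2|I + D^H Sigma^{-1} D|.

    A_k    : (M*N, N)   LSFD coefficient matrix
    EG_kk  : (M*N, N)   E{G_kk}
    EG_sum : (M*N, M*N) sum_l E{G_kl Fbar_{l,u} G_kl^H}
    S_k    : (M*N, M*N) combining statistics diag(E{V_mk^H V_mk})
    F_ku   : (N, N)     uplink precoding matrix of UE k
    """
    N = F_ku.shape[0]
    D = A_k.conj().T @ EG_kk @ F_ku                    # D_{k,(2)}
    Sigma = (A_k.conj().T @ EG_sum @ A_k               # total received power
             - D @ D.conj().T                          # minus the desired part
             + sigma2 * A_k.conj().T @ S_k @ A_k)      # plus effective noise
    Meff = np.eye(N) + D.conj().T @ np.linalg.solve(Sigma, D)
    return (1.0 - tau_p / tau_c) * np.log2(np.linalg.det(Meff).real)
```

Because Σ_l E{G_kl F̄_{l,u} G_kl^H} ⪰ E{G_kk} F̄_{k,u} E{G_kk}^H by Jensen's inequality, Σ_{k,(2)} stays positive definite and the returned SE is nonnegative.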
Proof. The proof follows steps similar to those of [28, Corollary 2] and is therefore omitted.

Note that A_k can be optimized by the CPU based on the channel statistics to maximize the achievable SE in (27). Based on the theory of optimal receivers as in [37], we derive the optimal LSFD coefficient matrix, which not only maximizes the achievable SE but also minimizes the conditional MSE, as follows.

Corollary 5. The achievable SE in (27) is maximized by

A_k^{opt} = \left( \sum_{l=1}^{K} E\{G_{kl} \bar{F}_{l,u} G_{kl}^H\} + \sigma^2 S_k \right)^{-1} E\{G_{kk}\} F_{k,u}, (28)

leading to the maximum value

SE_{k,(2)}^{opt} = \left( 1 - \frac{\tau_p}{\tau_c} \right) \log_2 \left| I_N + F_{k,u}^H E\{G_{kk}^H\} \left( \sum_{l=1}^{K} E\{ G_{kl} \bar{F}_{l,u} G_{kl}^H \} - E\{G_{kk}\} \bar{F}_{k,u} E\{G_{kk}^H\} + \sigma^2 S_k \right)^{-1} E\{G_{kk}\} F_{k,u} \right|. (29)

Note that the optimal LSFD coefficient matrix in (28) also minimizes the conditional MSE for UE k, MSE_{k,(2)} = tr(E_{k,(2)}).

Proof. The proof is given in Appendix B.
If the optimal LSFD coefficient matrix is applied, the MSE matrix for UE k can be written as

E_{k,(2)}^{opt} = I_N - F_{k,u}^H E\{G_{kk}^H\} \left( \sum_{l=1}^{K} E\{G_{kl} \bar{F}_{l,u} G_{kl}^H\} + \sigma^2 S_k \right)^{-1} E\{G_{kk}\} F_{k,u}. (31)
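As a numerical sanity check of Corollary 5, the SE in (27) evaluated at A_k^opt from (28) should dominate the SE obtained under any other LSFD matrix. A minimal sketch with synthetic Monte-Carlo statistics (placeholders for the statistical terms, not the paper's channel model):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K, sigma2 = 4, 2, 3, 1.0
MN = M * N

# Synthetic statistics standing in for E{G_kl Fbar G_kl^H}, E{G_kk}, and S_k.
mean_G = rng.standard_normal((K, MN, N)) + 1j * rng.standard_normal((K, MN, N))
samples = mean_G[None] + 0.3 * (rng.standard_normal((400, K, MN, N))
                                + 1j * rng.standard_normal((400, K, MN, N)))
F = np.eye(N) / np.sqrt(N)                   # common precoder, Fbar = F F^H
EG_kk = samples[:, 0].mean(axis=0)           # E{G_kk}
EG_sum = np.zeros((MN, MN), complex)         # sum_l E{G_kl Fbar G_kl^H}
for l in range(K):
    EG_sum += np.mean([g @ F @ F.conj().T @ g.conj().T
                       for g in samples[:, l]], axis=0)
S_k = np.eye(MN)

def se(A):
    """SE bound (27) for a given LSFD matrix A; pre-log factor omitted."""
    D = A.conj().T @ EG_kk @ F
    Sigma = (A.conj().T @ EG_sum @ A - D @ D.conj().T
             + sigma2 * A.conj().T @ S_k @ A)
    Meff = np.eye(N) + D.conj().T @ np.linalg.solve(Sigma, D)
    return np.log2(np.linalg.det(Meff).real)

# Optimal LSFD coefficient matrix from (28)
A_opt = np.linalg.solve(EG_sum + sigma2 * S_k, EG_kk @ F)
```

Under these positive-definite statistics, `se(A_opt)` upper-bounds `se(A)` for any other choice of A, mirroring the optimal-receiver argument of [37].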
Furthermore, if MR combining V_{mk} = \hat{H}_{mk} is applied, we derive closed-form SE expressions as follows.

Theorem 2. For MR combining V_{mk} = \hat{H}_{mk}, (27) can be computed in closed form as

SE_{k,(2),c} = \left( 1 - \frac{\tau_p}{\tau_c} \right) \log_2 \left| I_N + D_{k,(2),c}^H \Sigma_{k,(2),c}^{-1} D_{k,(2),c} \right|, (32)

where \Sigma_{k,(2),c} = A_k^H ( \sum_{l=1}^{K} T_{kl,(1)} + \sum_{l \in P_k} T_{kl,(2)} ) A_k - D_{k,(2),c} D_{k,(2),c}^H + \sigma^2 A_k^H S_{k,c} A_k and D_{k,(2),c} = A_k^H Z_k F_{k,u}, with E\{G_{kk}\} = Z_k = [Z_{1k}^T, \ldots, Z_{Mk}^T]^T and S_{k,c} = diag(Z_{1k}, \ldots, Z_{Mk}), where the (n,n')-th element of Z_{mk} \in C^{N \times N} is [Z_{mk}]_{nn'} = tr(\hat{R}_{mk}^{n'n}). Moreover, T_{kl,(1)} \triangleq diag(\Gamma_{kl,1}^{(1)}, \ldots, \Gamma_{kl,M}^{(1)}) \in C^{MN \times MN} and T_{kl,(2)}^{mm'} = \Gamma_{kl,m}^{(2)} - \Gamma_{kl,m}^{(1)} if m = m' and \Lambda_{mkl} \bar{F}_{l,u} \Lambda_{m'lk} otherwise, where T_{kl,(2)}^{mm'} denotes the (m,m')-submatrix of T_{kl,(2)} \in C^{MN \times MN}, and the (n,n')-th elements of the N \times N complex matrices \Lambda_{mkl}, \Lambda_{m'lk}, \Gamma_{kl,m}^{(1)}, and \Gamma_{kl,m}^{(2)} are [\Lambda_{mkl}]_{nn'} = tr(\Xi_{mkl}^{n'n}), [\Lambda_{m'lk}]_{nn'} = tr(\Xi_{m'lk}^{n'n}), [\Gamma_{kl,m}^{(1)}]_{nn'} = \sum_{i=1}^{N} \sum_{i'=1}^{N} [\bar{F}_{l,u}]_{i'i} tr(R_{ml}^{i'i} \hat{R}_{mk}^{n'n}), and [\Gamma_{kl,m}^{(2)}]_{nn'} given by
[\Gamma_{kl,m}^{(2)}]_{nn'} = \sum_{i=1}^{N} \sum_{i'=1}^{N} [\bar{F}_{l,u}]_{i'i} \Big( tr( R_{ml}^{i'i} P_{mkl,(1)}^{n'n} ) + \tau_p^2 \sum_{q_1=1}^{N} \sum_{q_2=1}^{N} \big[ tr( \tilde{P}_{mkl,(2)}^{q_1 n} \tilde{R}_{ml}^{i' q_2} \tilde{R}_{ml}^{q_2 i} \tilde{P}_{mkl,(2)}^{n' q_1} ) + tr( \tilde{P}_{mkl,(2)}^{q_1 n} \tilde{R}_{ml}^{i' q_2} ) tr( \tilde{P}_{mkl,(2)}^{n' q_2} \tilde{R}_{ml}^{q_2 i} ) \big] \Big) (33)
with \Xi_{mkl} = \tau_p R_{ml} \tilde{F}_{l,p}^H \Psi_{mk}^{-1} \tilde{F}_{k,p} R_{mk}, \Xi_{m'lk} = \tau_p R_{m'k} \tilde{F}_{k,p}^H \Psi_{m'k}^{-1} \tilde{F}_{l,p} R_{m'l}, P_{mkl,(1)} = \tau_p S_{mk} ( \Psi_{mk} - \tau_p \tilde{F}_{l,p} R_{ml} \tilde{F}_{l,p}^H ) S_{mk}^H, S_{mk} = R_{mk} \tilde{F}_{k,p}^H \Psi_{mk}^{-1}, P_{mkl,(2)} = S_{mk} \tilde{F}_{l,p} R_{ml} \tilde{F}_{l,p}^H S_{mk}^H, and \tilde{R}_{ml}^{ni} and \tilde{P}_{mkl,(2)}^{ni} being the (n,i)-submatrices of R_{ml}^{1/2} and P_{mkl,(2)}^{1/2}, respectively. Furthermore, the optimal LSFD coefficient matrix in (28) and the MSE matrix in (31) can also be computed in closed form as

A_{k,c}^{opt} = \left( \sum_{l=1}^{K} T_{kl,(1)} + \sum_{l \in P_k} T_{kl,(2)} + \sigma^2 S_{k,c} \right)^{-1} Z_k F_{k,u},
E_{k,(2),c}^{opt} = I_N - F_{k,u}^H Z_k^H \left( \sum_{l=1}^{K} T_{kl,(1)} + \sum_{l \in P_k} T_{kl,(2)} + \sigma^2 S_{k,c} \right)^{-1} Z_k F_{k,u}. (34)
Proof: The proof is given in Appendix C.
2) Iterative WMMSE Precoding Design: For the LSFD scheme, we also investigate a weighted sum-rate maximization problem:

\max_{\{F\}} \sum_{k=1}^{K} \mu_{k,(2)} SE_{k,(2)} \quad s.t. \quad \| F_{k,u,(2)} \|^2 \leqslant p_k, \forall k = 1, \ldots, K, (35)
where \mu_{k,(2)} represents the priority weight of UE k for the "LSFD" scheme and SE_{k,(2)} is given in (27) with an arbitrary combining structure in the first decoding layer.

Similarly, the matrix-weighted sum-MSE minimization problem⁹

\min_{\{F,A,W,G,S\}} \sum_{k=1}^{K} \mu_{k,(2)} \left( tr\left( W_{k,(2)} E_{k,(2)} \right) - \log_2 \left| W_{k,(2)} \right| \right) \quad s.t. \quad \| F_{k,u,(2)} \|^2 \leqslant p_k, \forall k = 1, \ldots, K, (36)
is equivalent to the weighted sum-rate maximization problem (35), where W_{k,(2)} is the weight matrix for UE k. Note that (36) is convex in each of the optimization variables F, A, W, G, S but is not jointly convex in all of them. So we can solve (36) by sequentially fixing four of the five optimization variables F, A, W, G, S and updating the fifth.¹⁰

The updates of A_k and E_{k,(2)} are given by the optimal LSFD scheme (28) and the MSE matrix with the optimal LSFD scheme (31). Note that the optimal W_{k,(2)} for (36) is W_{k,(2)}^{opt} = E_{k,(2)}^{-1}.

Remark 3. When A_k^{opt} and W_{k,(2)}^{opt} for all UEs are applied in (36), we notice that (36) becomes the following equivalent formulation of (35):
\max_{\{F,G,S\}} \sum_{k=1}^{K} \mu_{k,(2)} \log_2 \left| \left( E_{k,(2)}^{opt} \right)^{-1} \right| \quad s.t. \quad \| F_{k,u,(2)} \|^2 \leqslant p_k, \forall k = 1, \ldots, K, (37)
which is a well-known relationship between E_{k,(2)}^{opt} and SE_{k,(2)}^{opt}, proven in Appendix E.

Last but not least, fixing the other variables, the update of F_{k,u,(2)} for (36) results in the following optimization problem:

⁹ The notation G denotes all G-relevant variables, such as E\{G_{kl} \bar{F}_{l,u,(2)} G_{kl}^H\} and E\{G_{kk}\}.
¹⁰ As for G and S: if the L-MMSE combining scheme is applied, E\{G_{kk}\} and S_k depend on F_{k,u,(2)}, so we should also update them. On the contrary, E\{G_{kk}\} and S_k with the MR combining structure do not depend on F, so we only need to update E\{G_{kl} \bar{F}_{l,u,(2)} G_{kl}^H\}.
\min_{\{F\}} \sum_{k=1}^{K} \mu_{k,(2)} tr\left( W_{k,(2)} \left( I_N - F_{k,u,(2)}^H E\{G_{kk}^H\} A_k \right) \left( I_N - F_{k,u,(2)}^H E\{G_{kk}^H\} A_k \right)^H \right) + \sum_{k=1}^{K} \mu_{k,(2)} tr\left( W_{k,(2)} A_k^H \left( \sum_{l \neq k}^{K} E\{ G_{kl} \bar{F}_{l,u,(2)} G_{kl}^H \} + \sigma^2 S_k \right) A_k \right) \quad s.t. \quad \| F_{k,u,(2)} \|^2 \leqslant p_k, \forall k = 1, \ldots, K, (38)
which is a convex quadratic optimization problem. Thus, we can also derive the optimal precoding scheme by applying the classic Lagrange multiplier method and the KKT conditions. The Lagrangian of (38) is
f\left( F_{1,u,(2)}, \ldots, F_{K,u,(2)} \right) = \sum_{k=1}^{K} \mu_{k,(2)} tr\left( W_{k,(2)} \left( I_N - F_{k,u,(2)}^H E\{G_{kk}^H\} A_k \right) \left( I_N - F_{k,u,(2)}^H E\{G_{kk}^H\} A_k \right)^H \right) + \sum_{k=1}^{K} \mu_{k,(2)} tr\left( W_{k,(2)} A_k^H \left( \sum_{l \neq k}^{K} E\{ G_{kl} \bar{F}_{l,u,(2)} G_{kl}^H \} + \sigma^2 S_k \right) A_k \right) + \sum_{k=1}^{K} \lambda_{k,(2)} \left( tr\left( F_{k,u,(2)} F_{k,u,(2)}^H \right) - p_k \right). (39)
Theorem 3. By applying the first-order optimality condition of (39) with respect to each F_{k,u,(2)} while fixing the other optimization variables, we obtain the optimal precoding scheme as

F_{k,u,(2)}^{opt} = \mu_{k,(2)} \left( \sum_{l=1}^{K} \mu_{l,(2)} E\left\{ G_{lk}^H A_l E_{l,(2)}^{-1} A_l^H G_{lk} \right\} + \lambda_{k,(2)} I_N \right)^{-1} E\{G_{kk}^H\} A_k E_{k,(2)}^{-1}, (40)
where \lambda_{k,(2)} \geqslant 0 is the Lagrange multiplier for the "LSFD" scheme. According to the KKT conditions, \lambda_{k,(2)} and F_{k,u,(2)} should also satisfy

\| F_{k,u,(2)} \|^2 \leqslant p_k, \quad \lambda_{k,(2)} \left( \| F_{k,u,(2)} \|^2 - p_k \right) = 0, \quad \lambda_{k,(2)} \geqslant 0. (41)
Note that when \sum_{l=1}^{K} \mu_{l,(2)} E\{ G_{lk}^H A_l E_{l,(2)}^{-1} A_l^H G_{lk} \} is invertible and tr( F_{k,u,(2)}(0) F_{k,u,(2)}(0)^H ) \leqslant p_k, then F_{k,u,(2)}^{opt} = F_{k,u,(2)}(0); otherwise we must have tr( F_{k,u,(2)}(\lambda_{k,(2)}) F_{k,u,(2)}(\lambda_{k,(2)})^H ) = p_k. Following a method similar to that of Corollary 3, we notice that \lambda_{k,(2)} can easily be found by a 1-D bisection algorithm, since tr( F_{k,u,(2)}(\lambda_{k,(2)}) F_{k,u,(2)}(\lambda_{k,(2)})^H ) is a monotonically decreasing function of \lambda_{k,(2)}.
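The bisection step can be sketched generically. Writing the precoder of (40) as F(λ) = (B + λ I_N)^{-1} C for a fixed Hermitian positive semidefinite B and a constant matrix C (both placeholders here for the bracketed statistics of (40)), the transmit power tr(F(λ)F(λ)^H) decreases monotonically in λ, so a bracketing bisection recovers the KKT point of (41):

```python
import numpy as np

def precoder_power(lmbda, B, C):
    """tr(F F^H) with F(lmbda) = (B + lmbda*I)^{-1} C, the structure of (40)."""
    F = np.linalg.solve(B + lmbda * np.eye(B.shape[0]), C)
    return np.trace(F @ F.conj().T).real

def bisect_lambda(B, C, p_k, tol=1e-10):
    """Find lambda >= 0 satisfying the complementary slackness in (41)."""
    if precoder_power(0.0, B, C) <= p_k:
        return 0.0                            # unconstrained optimum is feasible
    lo, hi = 0.0, 1.0
    while precoder_power(hi, B, C) > p_k:     # power decreases in lambda,
        hi *= 2.0                             # so doubling eventually brackets
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if precoder_power(mid, B, C) > p_k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The monotonicity follows by eigendecomposing B: each column c of C contributes \sum_i |u_i^H c|^2 / (b_i + \lambda)^2, which is decreasing in λ.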
Moreover, if MR combining V_{mk} = \hat{H}_{mk} is applied in the first layer, we can compute the expectations in (40) in closed form as in the following theorem.

Theorem 4. With MR combining V_{mk} = \hat{H}_{mk} and the optimal LSFD scheme applied, we can compute
E\{G_{kk}^H\}, A_k^{opt}, and E_{k,(2)}^{opt} in closed form as in Theorem 2. Moreover, we have \bar{T}_{lk} = E\{ G_{lk}^H A_l E_{l,(2)}^{-1} A_l^H G_{lk} \} \in C^{N \times N}, where the (i,n)-th element of \bar{T}_{lk} is tr( \bar{A}_l \bar{G}_{lk,ni} ) with \bar{A}_l \triangleq A_l E_{l,(2)}^{-1} A_l^H and the [(m-1)N + p, (m'-1)N + p']-th (or [o,j]-th for brevity) entry of \bar{G}_{lk,ni} \triangleq E\{ g_{lk,n} g_{lk,i}^H \} \in C^{MN \times MN} being
[ E\{ g_{lk,n} g_{lk,i}^H \} ]_{oj} =
  0, if l \notin P_k, m \neq m';
  tr( R_{mk}^{ni} \hat{R}_{ml}^{p'p} ), if l \notin P_k, m = m';
  tr( \Xi_{mlk}^{np} ) tr( \Xi_{m'kl}^{p'i} ), if l \in P_k, m \neq m';
  tr( R_{mk}^{ni} P_{mlk,(1)}^{p'p} ) + \tau_p^2 \sum_{q_1=1}^{N} \sum_{q_2=1}^{N} tr( \tilde{P}_{mlk,(2)}^{q_1 p} \tilde{R}_{mk}^{n q_2} \tilde{R}_{mk}^{q_2 i} \tilde{P}_{mlk,(2)}^{p' q_1} ) + \tau_p^2 \sum_{q_1=1}^{N} \sum_{q_2=1}^{N} tr( \tilde{P}_{mlk,(2)}^{q_1 n} \tilde{R}_{mk}^{n q_1} ) tr( \tilde{P}_{mlk,(2)}^{p' q_2} \tilde{R}_{mk}^{q_2 i} ), if l \in P_k, m = m', (42)
where \Xi_{mlk} = \tau_p R_{mk} \tilde{F}_{k,p}^H \Psi_{mk}^{-1} \tilde{F}_{l,p} R_{ml}, \Xi_{m'kl} = \tau_p R_{m'l} \tilde{F}_{l,p}^H \Psi_{m'l}^{-1} \tilde{F}_{k,p} R_{m'k}, S_{ml} = R_{ml} \tilde{F}_{l,p}^H \Psi_{ml}^{-1}, P_{mlk,(1)} = \tau_p S_{ml} ( \Psi_{ml} - \tau_p \tilde{F}_{k,p} R_{mk} \tilde{F}_{k,p}^H ) S_{ml}^H, and P_{mlk,(2)} = S_{ml} \tilde{F}_{k,p} R_{mk} \tilde{F}_{k,p}^H S_{ml}^H. Plugging the derived results into (40), we can compute F_{k,u,(2)}^{opt} in closed form as

F_{k,u,(2),c}^{opt} = \mu_{k,(2)} \left( \sum_{l=1}^{K} \mu_{l,(2)} \bar{T}_{lk} + \lambda_{k,(2)} I_N \right)^{-1} Z_k^H A_{k,c}^{opt} \left( E_{k,(2),c}^{opt} \right)^{-1}. (43)
Proof: The proof is given in Appendix F.

Furthermore, an iterative optimization algorithm for F_{k,u,(2)} is summarized in Algorithm 2. The convergence of Algorithm 2 is proven in [30, Theorem 3].
Remark 4. Relying on the iterative minimization of the weighted MSE, two efficient uplink I-WMMSE precoding schemes that maximize the weighted sum SE are proposed. The I-WMMSE precoding schemes for the "FCP" and "LSFD" schemes are investigated in Algorithm 1 and Algorithm 2, respectively. Note that the I-WMMSE precoding design for the FCP is based on instantaneous CSI, whereas that for the LSFD is based on channel statistics. More importantly, we can compute the I-WMMSE precoding scheme in novel closed form only for the LSFD scheme with MR combining, based on Theorem 4.
IV. PRECODING IMPLEMENTATION AND COMPLEXITY ANALYSIS

In this section, we discuss the practical implementation of, and analyze the computational complexity of, the UL precoding schemes investigated in Section III.

A. Precoding Implementation
1) Precoding Characteristics: As described above, we investigate a standard block fading model, where the channel response is constant and frequency flat within a coherence block, which contains τ_c channel
Algorithm 2: I-WMMSE Algorithm for the Design of F_{k,u,(2)}
Input: Channel statistics Θ for all possible pairs; UE weights \mu_{k,(2)} for all UEs;
Output: Optimal precoding matrices F_{k,u,(2)} for all UEs (F_{k,u,(2)}^{(i)} for the first or third stopping criterion and F_{k,u,(2)}^{(i-1)} for the second stopping criterion);
1 Initialization: i = 0, F_{k,u,(2)}^{(0)} and R_{(2)}^{(0)} = \sum_{k=1}^{K} \mu_{k,(2)} SE_{k,(2)}^{(0)} for all UEs; maximum iteration number I_{(2),max} and threshold \varepsilon_{(2)};
2 repeat
3   i = i + 1;
4   Update the channel statistics Θ^{(i)}, such as E\{G_{kk}^{(i)}\}, E\{G_{kl}^{(i)} \bar{F}_{l,u}^{(i-1)} (G_{kl}^{(i)})^H\} and S_k^{(i)};
5   Update the optimal LSFD matrix A_k^{(i)} with F_{l,u,(2)}^{(i-1)} and Θ^{(i)} based on (28);
6   Update the optimal MSE matrix E_{k,(2)}^{(i)} with F_{l,u,(2)}^{(i-1)}, A_k^{(i)} and E\{G_{kk}^{(i)}\} based on (31), and update W_{k,(2)}^{(i)};
7   Update the optimal precoding matrix F_{k,u,(2)}^{(i)} with A_k^{(i)}, W_{k,(2)}^{(i)} and Θ^{(i)} based on (40), where \lambda_{k,(2)}^{(i)} is found by a bisection algorithm;
8   Update the weighted sum rate R_{(2)}^{(i)} = \sum_{k=1}^{K} \mu_{k,(2)} SE_{k,(2)}^{(i)};
9 until |R_{(2)}^{(i)} - R_{(2)}^{(i-1)}| / R_{(2)}^{(i-1)} \leqslant \varepsilon_{(2)} or R_{(2)}^{(i)} < R_{(2)}^{(i-1)} or i \geqslant I_{(2),max};
uses. For the "fully centralized processing" scheme, we notice that the I-WMMSE precoding design is implemented at the CPU based on the instantaneous CSI as in (18). Moreover, to guarantee the convergence of Algorithm 1, only MMSE combining as in (9) is advocated to detect the UL data, since the equivalence between E_{k,(1)}^{opt} and SE_{k,(1)}^{opt}, which holds only with MMSE combining, should be guaranteed. As for the LSFD scheme, the optimal design of F_{k,u,(2)}^{opt} as in (40) can only be implemented at the CPU, but relies only on channel statistics. Besides, L-MMSE or MR combining can be applied at each AP. When MR combining is applied, all terms in Algorithm 2 can be computed in closed form as in Theorem 4.
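The control flow of Algorithm 2, including its three stopping criteria, can be sketched generically. Here `update_step` stands in for steps 4-7 (statistics, LSFD, MSE/weight, and precoder updates) and `rate` for the weighted sum rate R_(2); both are illustrative placeholders, not the paper's actual updates:

```python
def iwmmse_loop(update_step, rate, F0, eps=5e-4, max_iter=20):
    """Outer loop of Algorithm 2 with its three stopping criteria.

    update_step(F) -> new precoders (steps 4-7 of Algorithm 2)
    rate(F)        -> weighted sum rate R_(2) = sum_k mu_k * SE_k
    """
    F, R_prev = F0, rate(F0)
    for _ in range(max_iter):                   # third criterion: i >= I_max
        F_new = update_step(F)
        R = rate(F_new)
        if R < R_prev:                          # second criterion: rate dropped,
            return F, R_prev                    # so keep the previous iterate
        if abs(R - R_prev) / R_prev <= eps:     # first criterion: converged
            return F_new, R
        F, R_prev = F_new, R
    return F, R_prev
```

With the true updates of steps 4-7, the convergence result of [30, Theorem 3] makes the rate sequence non-decreasing, so the second criterion acts only as a numerical safeguard.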
2) Fronthaul Requirements: For the FCP scheme with the I-WMMSE precoding, in each coherence block, all APs should relay their received signals to the CPU, and the CPU feeds the precoding matrices F_{k,u,(1)} back to all UEs. All APs need to send τ_c ML complex scalars (τ_p ML complex scalars for the pilot signals and (τ_c - τ_p)ML complex scalars for the received data signals). Besides, the full correlation matrices \{R_{mk}\} must be available at the CPU, which amounts to MKL^2 N^2 / 2 complex scalars for each realization of the AP/UE locations/statistics¹¹. Moreover, the CPU transmits the optimal precoding matrices to all UEs, which are described by KN^2 complex scalars per coherence block. In summary, for the FCP scheme with the I-WMMSE precoding implemented, in total τ_c M L N_r + MKL^2 N^2 / 2 + KN^2 N_r complex scalars are transmitted via the fronthaul links for each realization of the AP/UE locations. For comparison, when the FCP scheme without the I-WMMSE precoding is implemented, all APs should also transmit τ_c ML complex scalars for the received signals to the CPU in each coherence block, plus MKL^2 N^2 / 2 complex scalars for \{R_{mk}\} for each realization of the AP/UE locations. So for the FCP scheme without the I-WMMSE precoding, in total τ_c M L N_r + MKL^2 N^2 / 2 complex scalars are transmitted via the fronthaul links for each realization of the AP/UE locations.

¹¹ Note that the channel statistics remain constant for each realization of the AP/UE locations, and each realization of the AP/UE locations contains N_r channel realizations (coherence blocks).
+
links for each realization of the AP/UE locations.
|
1764 |
+
As for the LSFD scheme with the I-WMMSE precoding, all APs transmit their local data estimates
|
1765 |
+
˜xmk, described by (τc − τp)MKN complex scalars, to the CPU per coherence block. Besides, E{Gkk} ∈
|
1766 |
+
CMN×N, described by MKN2 complex scalars for each realization of the AP/UE locations, are also
|
1767 |
+
required at the CPU. As for E{Gkl¯Fl,u,(2)GH
|
1768 |
+
kl} ∈ CMN×MN, following the formulation method investigated
|
1769 |
+
in Appendix F, the optimization of ¯Fl,u,(2) requires the knowledge of
|
1770 |
+
�
|
1771 |
+
E
|
1772 |
+
�
|
1773 |
+
vH
|
1774 |
+
ml,phmk,nhH
|
1775 |
+
m′k,ivm′l,p′��
|
1776 |
+
,
|
1777 |
+
described by M2K2N4/2 complex scalars for each realization of the AP/UE locations, where vml,p denotes
|
1778 |
+
the p-th column of Vml. Moreover, the CPU requires optimal precoding matrices Fk,u,(2) feedback to all
|
1779 |
+
APs and UEs only for each realization of the AP/UE locations, which are KN2 complex scalars. As for the
|
1780 |
+
LSFD scheme without the I-WMMSE precoding, local data estimates ˜xmk, described by (τc − τp)MKN
|
1781 |
+
complex scalars per coherence block, E{Gkk}, described by MKN2 complex scalars for each realization
|
1782 |
+
of the AP/UE locations, and E{GklGH
|
1783 |
+
kl} ∈ CMN×MN, described by M2K2N2/2 complex scalars for each
|
1784 |
+
realization of the AP/UE locations, are required. That is total (τc −τp)MKNNr + MKN2 + M2K2N2/2
|
1785 |
+
complex scalars transmitted via fronthaul links for each realization of the AP/UE locations.
|
1786 |
+
3) Practical Implementation: Note that the basic motivation of the investigated I-WMMSE precoding
|
1787 |
+
schemes is to achieve as good the sum uplink SE performance as possible so we ignore some practical
|
1788 |
+
issues, which are vital for the realistic implementation of the investigated precoding schemes. When the
|
1789 |
+
precoding schemes are implemented in practice, these realistic issues should be considered.
|
• Capacity-constrained fronthaul network

As discussed above, the I-WMMSE precoding requires more fronthaul capacity than the case without it. It is thus vital to consider a more practical capacity-constrained fronthaul network [38]. Moreover, wireless fronthaul [39], which is more flexible than conventional wired fronthaul, can also be regarded as a promising solution to facilitate the practical implementation of the I-WMMSE precoding.
• Scalability aspects with dynamic cooperation clusters

When the precoding schemes are implemented in practice, a more realistic network architecture with multiple CPUs and dynamic cooperation clusters should be advocated, where each UE is served only by a cluster of APs (that is, a user-centric cluster) and the APs are grouped into cell-centric clusters as shown in Fig. 1. Note that a user-centric cluster might consist of APs connected to different CPUs. Based on the signal processing schemes in [9], [18], the analytical framework in this paper can be implemented in a scalable paradigm, where the fronthaul requirements and the computational complexity can be relieved at the cost of an anticipated modest performance loss compared with the canonical architecture. The I-WMMSE precoding design with these two practical aspects is left for future work. To bring valuable technical insights for the study of I-WMMSE precoding schemes with the DCC strategy and the capacity-constrained fronthaul link, we provide two tutorials for the FCP and LSFD in Fig. 2 based on [9], [10], [38].
+
link, we provide two tutorials for the FCP and LSFD in Fig. 2 based on [9], [10], [38].
|
1809 |
+
Tutorials to investigate the I-WMMSE precoding scheme with the DCC strategy and the capacity-constrained fronthaul
|
1810 |
+
1: Joint initial access, pilot assignment, and cluster formation for the DCC topology
|
1811 |
+
based on a classical algorithm in [9, Sec V. A] or a more efficient algorithm as [10,
|
1812 |
+
Algorithm 1].
|
1813 |
+
2: Each AP transmits the quantized versions of the local detection signals in (22) to the
|
1814 |
+
CPU based on Case 2 in [38] called "Quantized Weighted Signal Available at the CPU"
|
1815 |
+
as [38, eq. (20)].
|
1816 |
+
3: Based on Section Ⅲ. B. (1), generate the DCC based processing scheme for the
|
1817 |
+
LSFD (the local combining design, P-LSFD, and achievable SE computation) motivated
|
1818 |
+
by [10, Sec Ⅱ. B].
|
1819 |
+
4: Based on Section Ⅲ. B. (2), formulate the I-WMMSE precoding design optimization
|
1820 |
+
problem with a capacity-constrained fronthaul motivated by [38, eq. (24)] and [38, eq.
|
1821 |
+
(26)].
|
1822 |
+
5: Obtain the optimal precoding scheme based on potential methods.
|
1823 |
+
Tutorial 2. The I-WMMSE precoding for the LSFD with the DCC strategy and
|
1824 |
+
the capacity-constrained fronthaul
|
1825 |
+
1: Joint initial access, pilot assignment, and cluster formation for the DCC
|
1826 |
+
topology based on a classical algorithm in [9, Sec V. A] or a more efficient
|
1827 |
+
algorithm as [10, Algorithm 1].
|
1828 |
+
2: Each AP transmits the quantized versions of its received pilot signals and data
|
1829 |
+
signals to the CPU based on Case 1 in [38] called "Quantized Estimate of the
|
1830 |
+
Channel and Quantized Signal Available at the CPU" as [38, eq. (11) ].
|
1831 |
+
3: Based on Section Ⅲ. A. (1), generate the DCC based processing scheme for
|
1832 |
+
the FCP (the receive combining and achievable SE computation) motivated by
|
1833 |
+
[9, Sec V. B].
|
1834 |
+
4: Based on Section Ⅲ. A. (2), formulate the I-WMMSE precoding design
|
1835 |
+
optimization problem with a capacity-constrained fronthaul motivated by [38,
|
1836 |
+
eq. (24)] and [38, eq. (26)] .
|
1837 |
+
5: Obtain the optimal precoding scheme based on potential methods.
|
1838 |
+
Tutorial 1. The I-WMMSE precoding for the FCP with the DCC strategy
|
1839 |
+
and the capacity-constrained fronthaul
|
1840 |
+
Fig. 2. Two tutorials to investigate the I-WMMSE precoding schemes with the DCC strategy and the capacity-constrained fronthaul.
|
B. Complexity Analysis

In this subsection, we analyze the computational complexity of the two precoding schemes investigated. Since the bisection step for \lambda_{k,\{(1),(2)\}} generally takes only a few iterations compared with the other steps, we ignore the bisection steps for \lambda_{k,\{(1),(2)\}} in the complexity analysis. For the fully centralized processing scheme and each realization of the AP/UE locations, the per-iteration complexity of the iterative optimization is O(M^3 K^2 N^5 N_r). For the LSFD scheme and each realization of the AP/UE locations, the per-iteration complexities of the iterative optimization based on L-MMSE combining with the Monte-Carlo method, MR combining with the Monte-Carlo method, and MR combining with the closed-form expressions are O(M^2 K^2 N^3 N_r), O(M^2 K^2 N^3 N_r + M^3 K N^3), and O(M^3 K^2 N^5), respectively. To further reduce the computational complexity, it is necessary to apply the asymptotic analysis method [40], [41] to compute, in approximate results, the terms that cannot be computed in closed form.
TABLE I
COMPARISON OF TWO PRECODING SCHEMES IN THIS PAPER. THE NUMBER OF COMPLEX SCALARS IS COMPUTED FOR EACH REALIZATION OF THE AP/UE LOCATIONS. THE SUM SE IMPROVEMENT IS COMPUTED WITH M = 20, K = 10, L = 1 AND N = 4.

|                                                                | FCP                              | LSFD                                                      |
| CSI                                                            | Instantaneous CSI                | Statistical CSI                                           |
| Detection scheme                                               | MMSE combining                   | L-MMSE/MR combining + optimal LSFD scheme                 |
| Complex scalars sent from APs to the CPU with I-WMMSE precoding    | τ_c M L N_r + MKL^2 N^2/2    | (τ_c - τ_p) MKN N_r + MKN^2 + M^2 K^2 N^4/2               |
| Complex scalars sent from APs to the CPU without I-WMMSE precoding | τ_c M L N_r + MKL^2 N^2/2    | (τ_c - τ_p) MKN N_r + MKN^2 + M^2 K^2 N^2/2               |
| Complex scalars fed back from the CPU                          | K N^2 N_r                        | K N^2                                                     |
| Per-iteration computational complexity                         | O(M^3 K^2 N^5 N_r)               | L-MMSE: O(M^2 K^2 N^3 N_r); MR (Monte-Carlo): O(M^2 K^2 N^3 N_r + M^3 K N^3); MR (analytical): O(M^3 K^2 N^5) |
| Sum SE improvement                                             | 28.93%                           | L-MMSE: 46.74%; MR: 15.13%                                |
|
1900 |
+
In this paper, a CF mMIMO system is investigated, where all APs and UEs are uniformly distributed in
|
1901 |
+
a 1×1 km2 area with a wrap-around scheme [42]. The pathloss and shadow fading are modeled similarly as
|
1902 |
+
[28]. In practice, Umk,r, Umk,t and Ωmk are estimated through measurements [29]. However, we generate
|
1903 |
+
them randomly in this paper, where the coupling matrix Ωmk consists of one strong transmit eigendirection
|
1904 |
+
capturing dominant power [43]12. Besides, we have Fk,p = F(0)
|
1905 |
+
k,u,{(1),(2)} =
|
1906 |
+
� pk
|
1907 |
+
N IN. As for Algorithm 1
|
1908 |
+
and Algorithm 2, balancing the convergence and accuracy, we assume that I(1),max = I(2),max = 20,
|
1909 |
+
ε(1) = ε(2) = 5 × 10−4, and weights for all UEs are equal (µk,(1) = µk,(2) = 1) without losing generality,
|
1910 |
+
respectively. Moreover, we consider communication with 20 MHz bandwidth and σ2 = −94 dBm noise
|
1911 |
+
power. All UEs transmit with 200 mW power constraint. Each coherence block contains τc = 200 channel
|
1912 |
+
uses and τp = KN/2. Besides, a pilot assignment approach similar as that in [28] is investigated.
|
1913 |
+
Figure 3 shows the cumulative distribution function (CDF) of the achievable sum SE over different
|
1914 |
+
realizations of the AP/UE locations for two processing schemes investigated (we shortly call “fully
|
1915 |
+
centralized processing” as “FCP” in the following) over “I-WMMSE precoding” or “w/o precoding”13.
|
1916 |
+
We notice that the FCP scheme undoubtedly achieves higher SE than that of the LSFD scheme since the
|
1917 |
+
12In this paper, we choose one eigendirection capturing dominant channel power (randomly accounting for 80% ∼ 95% of the total
|
1918 |
+
channel power) and other eigendirections contain the remaining power.
|
1919 |
+
13The “w/o precoding” scenario denotes that identity precoding matrices Fk,u,{(1),(2)} =
|
1920 |
+
� pk
|
1921 |
+
N IN are implemented without optimization.
|
1922 |
+
|
1923 |
+
23
|
1924 |
+
0
|
1925 |
+
10
|
1926 |
+
20
|
1927 |
+
30
|
1928 |
+
40
|
1929 |
+
50
|
1930 |
+
60
|
1931 |
+
70
|
1932 |
+
80
|
1933 |
+
90
|
1934 |
+
0
|
1935 |
+
0.2
|
1936 |
+
0.4
|
1937 |
+
0.6
|
1938 |
+
0.8
|
1939 |
+
1
|
1940 |
+
Fig. 3. CDF of the sum SE over different processing schemes and
|
1941 |
+
precoding schemes with M = 20, K = 10, L = 2, and N = 4.
|
1942 |
+
1
|
1943 |
+
2
|
1944 |
+
3
|
1945 |
+
4
|
1946 |
+
5
|
1947 |
+
6
|
1948 |
+
0
|
1949 |
+
20
|
1950 |
+
40
|
1951 |
+
60
|
1952 |
+
80
|
1953 |
+
100
|
1954 |
+
Fig. 4. Sum SE against the number antennas per AP L over different
|
1955 |
+
processing schemes and precoding schemes with M = 20, K = 10,
|
1956 |
+
and N = 4.
|
1957 |
+
1
|
1958 |
+
2
|
1959 |
+
3
|
1960 |
+
4
|
1961 |
+
5
|
1962 |
+
6
|
1963 |
+
1
|
1964 |
+
2
|
1965 |
+
3
|
1966 |
+
4
|
1967 |
+
5
|
1968 |
+
6
|
1969 |
+
7
|
1970 |
+
Fig. 5. Average rate against the number of antennas per UE N over
|
1971 |
+
different processing schemes and precoding schemes with M = 20,
|
1972 |
+
K = 10, and L = 2.
|
1973 |
+
1
|
1974 |
+
2
|
1975 |
+
3
|
1976 |
+
4
|
1977 |
+
5
|
1978 |
+
6
|
1979 |
+
1
|
1980 |
+
2
|
1981 |
+
3
|
1982 |
+
4
|
1983 |
+
5
|
1984 |
+
6
|
1985 |
+
7
|
1986 |
+
Fig. 6. Average SE with I-WMMSE precoding schemes against the
|
1987 |
+
number of antennas per UE N over different τc with M = 20,
|
1988 |
+
K = 10, and L = 2.
|
1989 |
+
FCP with MMSE combining is a competitive scheme in CF mMIMO [5]. More importantly, the proposed I-WMMSE schemes efficiently improve the respective achievable sum SE performance, e.g., 12.78%, 19.54% and 28.13% sum SE improvement for the FCP, the LSFD with MR combining and the LSFD with L-MMSE combining, respectively. Besides, for the LSFD scheme with MR combining, the markers "◦" generated by analytical results overlap with the curves generated by simulations, validating our derived closed-form expressions.
Figure 4 shows the achievable sum SE as a function of the number of antennas per AP with the two processing schemes investigated and different precoding schemes14. We notice that, for the FCP or LSFD with (L-)MMSE combining, the performance gap between "I-WMMSE" and "w/o precoding" becomes smaller with the increase of L, which implies that (L-)MMSE combining can use all antennas on each
14 Note that the achievable sum SE investigated is the average sum SE value taken over many AP/UE locations.
[Fig. 7. Average SE against the number of APs M for the LSFD scheme with K = 10, L = 4, and N = 4.]
[Fig. 8. Average SE against the number of antennas per UE N for different channel models with M = 40, K = 8, and L = 2.]
[Fig. 9. Convergence examples of the I-WMMSE algorithm for the FCP and LSFD with M = 20, K = 10, L = 2, and N = 4. (a) FCP; (b) LSFD.]
AP to suppress interference and achieve excellent SE performance even without any precoding scheme. For instance, the performance gap between "I-WMMSE" and "w/o precoding" for the LSFD with L-MMSE combining is 46.74% and 6.17% for L = 1 and L = 6, respectively. Meanwhile, for the LSFD with MR combining, the performance gap between "I-WMMSE" and "w/o precoding" becomes larger with the increase of L, e.g., 15.13% and 25.48% for L = 1 and L = 6, respectively. Besides, for the LSFD scheme with MR combining, the markers "✷" generated by analytical results overlap with the curves generated by simulations, validating our derived closed-form expressions.
To further show the advantage of the proposed I-WMMSE precoding schemes, Fig. 5 shows the average rate15 as a function of the number of antennas per UE. We find that the average rates for all schemes with I-WMMSE precoding grow with N, while the average rates for the case without UL precoding may suffer degradation with the increase of N. The implementation of the I-WMMSE precoding
15 Note that one main reason additional UE antennas may give rise to SE degradation is that increasing N increases the channel estimation overhead and reduces the pre-log factor "(τc − τp)/τc" in all SE expressions [26], [28]. So we investigate "the average rate" in Fig. 5, ignoring the effect of "(τc − τp)/τc".
[Fig. 10. Total number of complex scalars sent via the fronthaul per channel use for each realization of the AP/UE locations with M = 20, K = 10, L = 2, and N = 4.]
schemes undoubtedly makes UEs benefit from multiple antennas and achieve excellent rate performance. Moreover, we observe that the I-WMMSE precoding schemes perform more efficiently with a larger number of UE antennas. For instance, the average rate improvements achieved by the I-WMMSE precoding for the LSFD with L-MMSE combining are 31.91% and 9.43% for N = 6 and N = 2, respectively. However, the average SE (with scaling factor (τc − τp)/τc) with I-WMMSE precoding implemented may still degrade with the increase of N, as in Fig. 2 of [1], since, with the increase of N, the prerequisite of "mutually orthogonal pilot matrices" still requires many channel uses for the pilot transmission and the inter-user interference also increases. So the design of non-orthogonal pilot matrices and per-antenna power control schemes is quite necessary; these are regarded as promising ways to reduce the cost of pilot transmission and further improve the SE performance [44].
Figure 6 discusses the average SE with I-WMMSE precoding schemes against N over different τc. Note that Fig. 5 can be viewed as a special case of Fig. 6 with a coherence block of infinite length, τc = ∞. We observe that the average SE with I-WMMSE precoding schemes increases with N for τc = 500 or ∞, which means the SE performance can benefit from additional UE antennas when the coherence block resource is abundant.
Figure 7 investigates the average SE as a function of M for the LSFD scheme over different precoding schemes16. For MR combining, the markers "✷" generated by analytical results overlap with the curves generated by simulations, validating our derived closed-form expressions again. Besides, the I-WMMSE algorithm improves the SE performance more efficiently for MR combining than for L-MMSE combining in scenarios with large L and M, e.g., 4.03% and 24.21% SE improvement for L-MMSE combining and MR combining with M = 60, respectively, implying that the L-MMSE combining
16 The "WMMSE precoding" denotes the precoding schemes generated by the I-WMMSE algorithm with only a single iteration.
based on large L and M can achieve excellent SE performance even without any precoding scheme, and that the proposed I-WMMSE precoding scheme is handy to mitigate the weakness of MR combining17.
Figure 8 considers the average SE as a function of N over the i.i.d. and the Weichselberger Rayleigh fading channels. As observed, the proposed I-WMMSE precoding schemes are more efficient over the Weichselberger Rayleigh fading channel. For instance, 24.89% and 9.77% average SE improvements can be achieved at N = 6 in the "Weichselberger" scenario for the LSFD scheme with MR combining and the FCP scheme, respectively, but only 0.29% and 6.63% average SE improvements can be achieved for the "i.i.d. Rayleigh channel". Moreover, compared with Fig. 5, we notice that the I-WMMSE precoding scheme for the FCP scheme is more efficient in the highly loaded system (the scenario in Fig. 5), where the number of total AP antennas is comparable with the number of total UE antennas.
Figure 9 illustrates the convergence behavior of the I-WMMSE algorithms for the FCP scheme and the LSFD scheme with L-MMSE/MR combining. Note that the convergence example in Fig. 9 (a) for the FCP is given by a particular channel realization, while the convergence example for the LSFD in Fig. 9 (b) is given by a particular realization of the AP/UE locations. The algorithms investigated are guaranteed to converge and efficiently achieve excellent sum SE performance. Besides, Fig. 9 (b) for the LSFD scheme with MR combining validates our derived closed-form expressions in Algorithm 2.
Figure 10 investigates the total number of complex scalars sent via the fronthaul per channel use against τc for each realization of the AP/UE locations. As observed, with the increase of τc, the total number of complex scalars per channel use becomes smaller for the FCP scheme and larger for the LSFD scheme, which can also be found from Table I. Besides, the LSFD scheme requires more fronthaul signaling than the FCP scheme since APs under the LSFD scheme need to transmit all received data signals to the CPU, which requires a huge fronthaul load. More importantly, with the increase of τc, the gap between "I-WMMSE precoding" and "w/o precoding" becomes smaller for both the FCP scheme and the LSFD scheme. Considering the SE performance improvement of the I-WMMSE precoding, the additional fronthaul load is acceptable, especially when the coherence resource is abundant. Although the computational complexity of Algorithm 1 for the FCP scheme is much higher than that of Algorithm 2 for the LSFD scheme, the FCP scheme needs much less fronthaul signaling than the LSFD scheme and can achieve better SE performance. So the two processing schemes and their respective precoding schemes can be chosen based on different requirements.
17 MR combining is a simple combining scheme but cannot efficiently suppress the interference.
VI. CONCLUSION
We consider a CF mMIMO system with both APs and UEs equipped with multiple antennas over the Weichselberger Rayleigh fading channel. The FCP scheme and the LSFD scheme are implemented. To further improve the sum SE performance, efficient UL precoding schemes based on iteratively weighted MMSE algorithms are investigated to maximize the weighted sum SE for the two processing schemes. Note that we compute achievable SE expressions and optimal precoding schemes in novel closed-form for the LSFD scheme with MR combining. Numerical results show that the investigated I-WMMSE precoding schemes efficiently achieve excellent sum SE performance. More importantly, the proposed I-WMMSE precoding schemes are more efficient with a larger number of UE antennas, which means they can achieve excellent performance even with a large number of UE antennas. The derived results undoubtedly provide vital insights for the practical implementation of multi-antenna UEs in CF mMIMO systems. In future work, we will investigate the design of UL precoding schemes for the pilot transmission phase and consider the practical implementation of the investigated I-WMMSE precoding schemes with a capacity-constrained fronthaul network and dynamic cooperation clusters. Moreover, the design of non-orthogonal pilot matrices will also be considered to further improve the performance of CF mMIMO systems with multi-antenna UEs. Last but not least, the UL precoding performance over a more practical Rician fading channel with phase-shifts will also be analyzed.
APPENDIX A
SOME USEFUL LEMMAS
Lemma 1. Let X ∈ C^(M×N) be a random matrix and Y a deterministic M × M matrix. The (n, i)-th element of E{X^H Y X} is tr(Y · E{x_i x_n^H}), where x_i and x_n are the i-th and n-th columns of X.
Lemma 2. For matrices A ∈ C^(N1×N1), B ∈ C^(N1×N2), C ∈ C^(N2×N2), and D ∈ C^(N2×N1), we have (A + BCD)^−1 = A^−1 − A^−1 B (D A^−1 B + C^−1)^−1 D A^−1, which is the well-known matrix inversion lemma [36, Lemma B.3].
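Both lemmas admit quick numerical sanity checks. In the sketch below (illustrative only, not part of the proofs), Lemma 1 is verified exactly over a finite ensemble of random matrices, where the expectations become ensemble averages, and Lemma 2 is verified for generic random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, S = 4, 3, 50  # matrix dimensions and ensemble size

# Lemma 1: the (n, i)-th element of E{X^H Y X} equals tr(Y E{x_i x_n^H}).
# Over a finite ensemble, expectations are exact averages, so the identity
# must hold up to floating-point error.
Xs = rng.standard_normal((S, M, N)) + 1j * rng.standard_normal((S, M, N))
Y = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))

lhs = np.mean([X.conj().T @ Y @ X for X in Xs], axis=0)
rhs = np.empty((N, N), dtype=complex)
for n in range(N):
    for i in range(N):
        # E{x_i x_n^H} as an ensemble average
        Exx = np.mean([np.outer(X[:, i], X[:, n].conj()) for X in Xs], axis=0)
        rhs[n, i] = np.trace(Y @ Exx)
assert np.allclose(lhs, rhs)

# Lemma 2 (matrix inversion lemma):
# (A + B C D)^{-1} = A^{-1} - A^{-1} B (D A^{-1} B + C^{-1})^{-1} D A^{-1}.
N1, N2 = 4, 3
A = np.eye(N1) + 0.1 * rng.standard_normal((N1, N1))
B = rng.standard_normal((N1, N2))
C = np.eye(N2) + 0.1 * rng.standard_normal((N2, N2))
D = rng.standard_normal((N2, N1))
inv = np.linalg.inv
left = inv(A + B @ C @ D)
right = inv(A) - inv(A) @ B @ inv(D @ inv(A) @ B + inv(C)) @ D @ inv(A)
assert np.allclose(left, right)
print("Lemma 1 and Lemma 2 verified")
```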
APPENDIX B
PROOF OF COROLLARY 5
Since the CPU is only aware of channel statistics, we need to treat E{Gkk}Fk,u as the true deterministic channel and rewrite x̃k in (25) as x̃k = E{Gkk}Fk,u xk + v, where v ≜ (GkkFk,u − E{Gkk}Fk,u)xk + Σ_{l=1,l≠k}^{K} GklFl,u xl + n′k is a complex circularly symmetric noise with the invertible covariance matrix Ξk = E{vv^H|Θ} = Σ_{l=1}^{K} E{Gkl F̄l,u G^H_{kl}} − E{Gkk} F̄k,u E{G^H_{kk}} + σ²Sk. Firstly, we whiten the noise as Ξk^{−1/2} x̂k = Ξk^{−1/2} E{Gkk}Fk,u xk + ṽ, where ṽ ≜ Ξk^{−1/2} v becomes white. Next, we project Ξk^{−1/2} x̂k in the direction of Ξk^{−1/2} E{Gkk}Fk,u to obtain an effective scalar channel as

(Ξk^{−1/2} E{Gkk}Fk,u)^H Ξk^{−1/2} x̂k = (E{Gkk}Fk,u)^H Ξk^{−1} E{Gkk}Fk,u xk + (E{Gkk}Fk,u)^H Ξk^{−1} v. (44)

Based on the theory of optimal receivers [37], we derive the optimal LSFD matrix Ak = Ξk^{−1} E{Gkk}Fk,u as

Ak = ( Σ_{l=1}^{K} E{Gkl F̄l,u G^H_{kl}} − E{Gkk} F̄k,u E{G^H_{kk}} + σ²Sk )^{−1} E{Gkk}Fk,u. (45)

Moreover, based on the standard results of matrix derivation in [45], we can easily obtain the LSFD matrix minimizing the conditional MSE for UE k, MSEk,(2) = tr(Ek,(2)), as

Ak = ( Σ_{l=1}^{K} E{Gkl F̄l,u G^H_{kl}} + σ²Sk )^{−1} E{Gkk}Fk,u. (46)

We notice that the LSFD matrix in (46) is equivalent to the LSFD matrix in (45), apart from an additional scaling matrix IN − (C^H B^{−1} C + IN)^{−1} C^H B^{−1} C on the right side, which does not affect the value of (27), where B denotes the matrix inverted in (45) and C = E{Gkk}Fk,u. Hence, the LSFD matrix in (46) not only minimizes the MSE for UE k but also maximizes the achievable SE.
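The scaling-matrix relation invoked above can be checked as a pure matrix identity: for any Hermitian positive definite B and any C, (B + C C^H)^−1 C = B^−1 C [IN − (C^H B^−1 C + IN)^−1 C^H B^−1 C]. A short numerical sketch (generic random stand-ins, not the paper's specific matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
MN, N = 8, 3  # stand-ins for the dimensions MN and N

# Hermitian positive definite B (generic stand-in for the matrix inverted
# in (45)) and a random tall C (stand-in for E{Gkk} F_{k,u}).
G = rng.standard_normal((MN, MN)) + 1j * rng.standard_normal((MN, MN))
B = G @ G.conj().T + np.eye(MN)
C = rng.standard_normal((MN, N)) + 1j * rng.standard_normal((MN, N))

inv = np.linalg.inv
X = C.conj().T @ inv(B) @ C  # X = C^H B^{-1} C
scaling = np.eye(N) - inv(X + np.eye(N)) @ X

# (B + C C^H)^{-1} C == B^{-1} C [I_N - (C^H B^{-1} C + I_N)^{-1} C^H B^{-1} C]
lhs = inv(B + C @ C.conj().T) @ C
rhs = inv(B) @ C @ scaling
assert np.allclose(lhs, rhs)
print("scaling-matrix relation verified")
```

The invertible right-side scaling mixes the columns of the LSFD matrix but leaves the log-det SE expression unchanged.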
APPENDIX C
PROOF OF THEOREM 2
In this part, we compute the terms of (27) in closed-form for the LSFD scheme with MR combining Vmk = Ĥmk. For the first term Dk,(2) = A^H_k E{Gkk}Fk,u, we have E{Gkk} = [E{V^H_{1k}H1k}; . . . ; E{V^H_{Mk}HMk}] = [Z^T_{1k}, . . . , Z^T_{Mk}]^T ≜ Zk, where Zmk = E{V^H_{mk}Hmk} = E{Ĥ^H_{mk}Ĥmk} ∈ C^(N×N) and the (n, n′)-th element of Zmk is [Zmk]_{nn′} = E{ĥ^H_{mk,n}ĥmk,n′} = tr(R̂^{n′n}_{mk}). So we derive the closed-form for Dk,(2) as Dk,(2),c = A^H_k Zk Fk,u. As for the second term Sk ∈ C^(MN×MN), we have Sk = diag(E{V^H_{1k}V1k}, . . . , E{V^H_{Mk}VMk}) = diag(Z1k, . . . , ZMk). For E{Gkl F̄l,u G^H_{kl}}, we notice that the (m, m′)-th submatrix of E{Gkl F̄l,u G^H_{kl}} is E{V^H_{mk}Hml F̄l,u H^H_{m′l}Vm′k}.

Based on [28], we compute E{V^H_{mk}Hml F̄l,u H^H_{m′l}Vm′k} for the four possible AP-UE combinations. For "m ≠ m′, l ∉ Pk", we have E{V^H_{mk}Hml F̄l,u H^H_{m′l}Vm′k} = 0 due to the independence between Vmk and Hml. For "m ≠ m′, l ∈ Pk", we have E{V^H_{mk}Hml F̄l,u H^H_{m′l}Vm′k} = E{V^H_{mk}Hml} F̄l,u E{H^H_{m′l}Vm′k} = Λmkl F̄l,u Λm′lk, where the (n, n′)-th elements of the N × N complex matrices Λmkl ≜ E{V^H_{mk}Hml} and Λm′lk ≜ E{H^H_{m′l}Vm′k} are [Λmkl]_{nn′} = E{ĥ^H_{mk,n}ĥml,n′} = tr(Ξ^{n′n}_{mkl}) and [Λm′lk]_{nn′} = E{ĥ^H_{m′l,n}ĥm′k,n′} = tr(Ξ^{n′n}_{m′lk}), with Ξmkl ≜ E{ĥml ĥ^H_{mk}} = τp Rml F̃^H_{l,p} Ψ^{−1}_{mk} F̃k,p Rmk and Ξm′lk ≜ E{ĥm′k ĥ^H_{m′l}} = τp Rm′k F̃^H_{k,p} Ψ^{−1}_{m′k} F̃l,p Rm′l.

For "m = m′, l ∉ Pk", we define Γ^(1)_{kl,m} ≜ E{V^H_{mk}Hml F̄l,u H^H_{ml}Vmk} ∈ C^(N×N) with the (n, n′)-th element [Γ^(1)_{kl,m}]_{nn′} = Σ_{i=1}^{N} Σ_{i′=1}^{N} [F̄l,u]_{i′i} E{ĥ^H_{mk,n}hml,i′ h^H_{ml,i}ĥmk,n′}, which becomes

[Γ^(1)_{kl,m}]_{nn′} = Σ_{i=1}^{N} Σ_{i′=1}^{N} [F̄l,u]_{i′i} tr(E{hml,i′ h^H_{ml,i}} E{ĥmk,n′ ĥ^H_{mk,n}}) = Σ_{i=1}^{N} Σ_{i′=1}^{N} [F̄l,u]_{i′i} tr(R^{i′i}_{ml} R̂^{n′n}_{mk}) (47)

since Ĥmk and Hml are independent. Finally, for "m = m′, l ∈ Pk", Ĥmk and Hml are no longer independent. We define Γ^(2)_{kl,m} ≜ E{V^H_{mk}Hml F̄l,u H^H_{ml}Vmk} ∈ C^(N×N) whose (n, n′)-th element is [Γ^(2)_{kl,m}]_{nn′} = Σ_{i=1}^{N} Σ_{i′=1}^{N} [F̄l,u]_{i′i} E{ĥ^H_{mk,n}hml,i′ h^H_{ml,i}ĥmk,n′}. Following the similar method in [28], we derive

[Γ^(2)_{kl,m}]_{nn′} = Σ_{i=1}^{N} Σ_{i′=1}^{N} [F̄l,u]_{i′i} [ tr(R^{i′i}_{ml} P^{n′n}_{mkl,(1)}) + τp² Σ_{q1=1}^{N} Σ_{q2=1}^{N} tr(P̃^{q1n}_{mkl,(2)} R̃^{i′q2}_{ml} R̃^{q2i}_{ml} P̃^{n′q1}_{mkl,(2)}) + τp² Σ_{q1=1}^{N} Σ_{q2=1}^{N} tr(P̃^{q1n}_{mkl,(2)} R̃^{i′q1}_{ml}) tr(P̃^{n′q2}_{mkl,(2)} R̃^{q2i}_{ml}) ],

where Pmkl,(1) ≜ τp Smk (Ψmk − τp F̃l,p Rml F̃^H_{l,p}) S^H_{mk}, Smk ≜ Rmk F̃^H_{k,p} Ψ^{−1}_{mk}, and Pmkl,(2) ≜ Smk F̃l,p Rml F̃^H_{l,p} S^H_{mk}. Besides, R̃^{ni}_{ml} and P̃^{ni}_{mkl,(2)} denote the (n, i)-th submatrices of R^{1/2}_{ml} and P^{1/2}_{mkl,(2)}, respectively.

In summary, combining all the cases, we have E{Gkl F̄l,u G^H_{kl}} = Tkl,(1) + Tkl,(2) if l ∈ Pk and Tkl,(1) otherwise, where Tkl,(1) ≜ diag(Γ^(1)_{kl,1}, . . . , Γ^(1)_{kl,M}) ∈ C^(MN×MN) and the (m, m′)-th submatrix of Tkl,(2) is T^{mm′}_{kl,(2)} = Γ^(2)_{kl,m} − Γ^(1)_{kl,m} if m = m′ and Λmkl F̄l,u Λm′lk otherwise. Plugging the derived results into (28) and (31), we can easily compute the optimal LSFD coefficient matrix and the MSE matrix in closed-form as in (34), which finishes the proof of Theorem 2. For more details on the derived expressions, please refer to [28, Appendix D].
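The independent case (47) lends itself to a quick numerical check. In the sketch below (illustrative only), expectations are taken as exact averages over finite ensembles of independent random matrices standing in for Ĥmk and Hml, so the fourth-moment factorization behind (47) must hold up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(2)
L, N, S = 4, 3, 40  # antennas per AP, antennas per UE, ensemble size

# Independent finite ensembles standing in for hat{H}_mk and H_ml;
# expectations are exact averages over the product ensemble.
Hhat = rng.standard_normal((S, L, N)) + 1j * rng.standard_normal((S, L, N))
H = rng.standard_normal((S, L, N)) + 1j * rng.standard_normal((S, L, N))
F = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Fbar = F @ F.conj().T  # bar{F}_{l,u} = F_{l,u} F_{l,u}^H

# Left side: E{hat{H}^H H Fbar H^H hat{H}}, averaging the inner H-term
# first (allowed by independence).
Mid = np.mean([Hs @ Fbar @ Hs.conj().T for Hs in H], axis=0)
Gamma = np.mean([Hh.conj().T @ Mid @ Hh for Hh in Hhat], axis=0)

# Right side: (47), with R^{i'i} = E{h_{i'} h_i^H} and
# hat{R}^{n'n} = E{hat{h}_{n'} hat{h}_n^H} as ensemble averages.
R = np.einsum('sli,smj->ijlm', H, H.conj()) / S
Rhat = np.einsum('sli,smj->ijlm', Hhat, Hhat.conj()) / S
rhs = np.zeros((N, N), dtype=complex)
for n in range(N):
    for n2 in range(N):
        for i in range(N):
            for i2 in range(N):
                rhs[n, n2] += Fbar[i2, i] * np.trace(R[i2, i] @ Rhat[n2, n])
assert np.allclose(Gamma, rhs)
print("closed form (47) verified on the ensemble")
```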
APPENDIX D
PROOF OF THEOREM 1
When the other optimization variables are fixed, we derive the partial derivative of (17) w.r.t. Fk,u,(1) as

∂f(F1,u,(1), . . . , FK,u,(1))/∂Fk,u,(1) = [ Σ_{l=1}^{K} µl,(1) ( Ĥ^H_k Vl Wl,(1) V^H_l Ĥk + E{H̃^H_k Vl Wl,(1) V^H_l H̃k | V, W} ) + λk,(1) IN ] Fk,u,(1) − µk,(1) Ĥ^H_k Vk Wk,(1). (48)

By applying the first-order optimality condition and setting ∂f(F1,u,(1), . . . , FK,u,(1))/∂Fk,u,(1) = 0, we can easily obtain the optimal precoding scheme. Besides, λk,(1) and Fk,u,(1) should also satisfy the KKT conditions in (19). As for C̄kl ≜ E{H̃^H_k Vl Wl,(1) V^H_l H̃k | V, W} ∈ C^(N×N), by applying Lemma 1, the (i, n)-th element of C̄kl is tr(V̄l E{h̃k,n h̃^H_{k,i}}), where V̄l ≜ Vl Wl,(1) V^H_l and h̃k,n = [h̃^T_{1k,n}, . . . , h̃^T_{Mk,n}]^T ∈ C^(ML) is the n-th column of H̃k. Finally, we derive Ck,ni ≜ E{h̃k,n h̃^H_{k,i}} = diag(C^{ni}_{1k}, . . . , C^{ni}_{Mk}) ∈ C^(ML×ML) since h̃mk,n and h̃m′k,n for m ≠ m′ are independent and both have zero mean. So Ck,ni is a block-diagonal matrix with the square matrices C^{ni}_{1k} = E{h̃1k,n h̃^H_{1k,i}}, . . . , C^{ni}_{Mk} = E{h̃Mk,n h̃^H_{Mk,i}} on the diagonal.
APPENDIX E
PROOF OF (15)
For the LSFD scheme, the conditional MSE matrix for UE k can be written as (26). Based on [28, Appendix C], we prove that (28) also minimizes MSEk,(2) = tr(Ek,(2)). With (28) implemented, Ek,(2) is given by (31). Then, by applying Lemma 2 with A ≜ IN, B ≜ −F^H_{k,u,(2)} E{G^H_{kk}}, C ≜ (Σ_{l=1}^{K} E{Gkl F̄l,u,(2) G^H_{kl}} + σ²Sk)^{−1}, and D ≜ E{Gkk} Fk,u,(2), we have

(E^opt_{k,(2)})^{−1} = IN + F^H_{k,u,(2)} E{G^H_{kk}} ( Σ_{l=1}^{K} E{Gkl F̄l,u,(2) G^H_{kl}} − E{Gkk} F̄k,u,(2) E{G^H_{kk}} + σ²Sk )^{−1} E{Gkk} Fk,u,(2).

We then show the equivalence between SE^opt_{k,(2)} and log2 |(E^opt_{k,(2)})^{−1}|, apart from the pre-log factor (1 − τp/τc).
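The Lemma 2 step above reduces to the generic identity (IN − D^H B^−1 D)^−1 = IN + D^H (B − D D^H)^−1 D, with D standing for E{Gkk}Fk,u,(2) and B for the full interference-plus-noise sum. A brief numerical sketch (random stand-ins; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
MN, N = 8, 3

# D stands in for E{Gkk} F_{k,u,(2)}; B for the full covariance sum.
# Building B as D D^H + (positive definite part) keeps B - D D^H invertible.
D = rng.standard_normal((MN, N)) + 1j * rng.standard_normal((MN, N))
W = rng.standard_normal((MN, MN)) + 1j * rng.standard_normal((MN, MN))
B = D @ D.conj().T + W @ W.conj().T + np.eye(MN)

inv = np.linalg.inv
E = np.eye(N) - D.conj().T @ inv(B) @ D                            # MSE matrix
expansion = np.eye(N) + D.conj().T @ inv(B - D @ D.conj().T) @ D   # Lemma 2
assert np.allclose(inv(E), expansion)
print("Lemma 2 expansion of the inverse MSE matrix verified")
```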
APPENDIX F
PROOF OF THEOREM 4
When MR combining Vmk = Ĥmk and the optimal LSFD scheme are applied, we can easily compute E{G^H_{kk}}, A^opt_k, and E^opt_{k,(2)} in closed-form as in Theorem 2. Furthermore, by applying Lemma 1, the (i, n)-th entry of T̄lk = E{G^H_{lk} Al E^{−1}_{l,(2)} A^H_l Glk} ∈ C^(N×N) can be denoted as tr(Āl E{glk,n g^H_{lk,i}}), where Āl ≜ Al E^{−1}_{l,(2)} A^H_l and glk,n ∈ C^(MN) is the n-th column of Glk. Note that the ((m − 1)N + p)-th element of glk,n is ĥ^H_{ml,p} hmk,n, so the [(m − 1)N + p, (m′ − 1)N + p′]-th (or [o, j]-th for brevity) entry of Ḡlk,ni ≜ E{glk,n g^H_{lk,i}} ∈ C^(MN×MN) can be denoted as E{ĥ^H_{ml,p} hmk,n h^H_{m′k,i} ĥm′l,p′}, which can be computed for the four AP-UE combinations as in Theorem 2.

For "l ∉ Pk, m ≠ m′", we have E{ĥ^H_{ml,p} hmk,n h^H_{m′k,i} ĥm′l,p′} = 0. For "l ∈ Pk, m ≠ m′", we have E{ĥ^H_{ml,p} hmk,n h^H_{m′k,i} ĥm′l,p′} = E{ĥ^H_{ml,p} hmk,n} E{h^H_{m′k,i} ĥm′l,p′} = tr(Ξ^{np}_{mlk}) tr(Ξ^{p′i}_{m′kl}), where Ξmlk = τp Rmk F̃^H_{k,p} Ψ^{−1}_{mk} F̃l,p Rml and Ξm′kl = τp Rm′l F̃^H_{l,p} Ψ^{−1}_{m′l} F̃k,p Rm′k. For "l ∉ Pk, m = m′", we have E{ĥ^H_{ml,p} hmk,n h^H_{mk,i} ĥml,p′} = tr(R^{ni}_{mk} R̂^{p′p}_{ml}). For "l ∈ Pk, m = m′", we obtain

E{ĥ^H_{ml,p} hmk,n h^H_{mk,i} ĥml,p′} = tr(R^{ni}_{mk} P^{p′p}_{mlk,(1)}) + τp² Σ_{q1=1}^{N} Σ_{q2=1}^{N} tr(P̃^{q1p}_{mlk,(2)} R̃^{nq2}_{mk} R̃^{q2i}_{mk} P̃^{p′q1}_{mlk,(2)}) + τp² Σ_{q1=1}^{N} Σ_{q2=1}^{N} tr(P̃^{q1p}_{mlk,(2)} R̃^{nq1}_{mk}) tr(P̃^{p′q2}_{mlk,(2)} R̃^{q2i}_{mk}),

where Sml = Rml F̃^H_{l,p} Ψ^{−1}_{ml}, Pmlk,(1) = τp Sml (Ψml − τp F̃k,p Rmk F̃^H_{k,p}) S^H_{ml}, and Pmlk,(2) = Sml F̃k,p Rmk F̃^H_{k,p} S^H_{ml}, with R̃^{ni}_{mk} and P̃^{ni}_{mlk,(2)} being the (n, i)-th submatrices of R^{1/2}_{mk} and P^{1/2}_{mlk,(2)}, respectively. We can then compute [Ḡlk,ni]_{oj} = [E{glk,n g^H_{lk,i}}]_{oj} in closed-form as (42) and F^opt_{k,u,(2)} in closed-form as (43).
REFERENCES
[1] Z. Wang, J. Zhang, H. Q. Ngo, B. Ai, and M. Debbah, "Iteratively weighted MMSE uplink precoding for cell-free massive MIMO," in Proc. IEEE ICC, May 2022, pp. 1–6.
[2] H. Q. Ngo, A. Ashikhmin, H. Yang, E. G. Larsson, and T. L. Marzetta, "Cell-free massive MIMO versus small cells," IEEE Trans. Wireless Commun., vol. 16, no. 3, pp. 1834–1850, Mar. 2017.
[3] J. Zhang, E. Björnson, M. Matthaiou, D. W. K. Ng, H. Yang, and D. J. Love, "Prospective multiple antenna technologies for beyond 5G," IEEE J. Sel. Areas Commun., vol. 38, no. 8, pp. 1637–1660, Jun. 2020.
[4] S. Chen, J. Zhang, J. Zhang, E. Björnson, and B. Ai, "A survey on user-centric cell-free massive MIMO systems," Digit. Commun. Netw., 2021.
[5] E. Björnson and L. Sanguinetti, "Making cell-free massive MIMO competitive with MMSE processing and centralized implementation," IEEE Trans. Wireless Commun., vol. 19, no. 1, pp. 77–90, Jan. 2019.
[6] S. Elhoushy, M. Ibrahim, and W. Hamouda, "Cell-free massive MIMO: A survey," IEEE Commun. Surveys Tuts., vol. 24, no. 1, pp. 492–523, 1st Quart. 2022.
[7] J. Zhang, S. Chen, Y. Lin, J. Zheng, B. Ai, and L. Hanzo, "Cell-free massive MIMO: A new next-generation paradigm," IEEE Access, vol. 7, pp. 99878–99888, Jul. 2019.
[8] H. Q. Ngo, L. Tran, T. Q. Duong, M. Matthaiou, and E. G. Larsson, "On the total energy efficiency of cell-free massive MIMO," IEEE Trans. Green Commun. Netw., vol. 2, no. 1, pp. 25–39, Mar. 2018.
[9] E. Björnson and L. Sanguinetti, "Scalable cell-free massive MIMO systems," IEEE Trans. Commun., vol. 68, no. 7, pp. 4247–4261, Jul. 2020.
[10] S. Chen, J. Zhang, E. Björnson, J. Zhang, and B. Ai, "Structured massive access for scalable cell-free massive MIMO systems," IEEE J. Sel. Areas Commun., vol. 39, no. 4, pp. 1086–1100, Aug. 2021.
[11] A. Papazafeiropoulos, E. Björnson, P. Kourtessis, S. Chatzinotas, and J. M. Senior, "Scalable cell-free massive MIMO systems: Impact of hardware impairments," IEEE Trans. Veh. Technol., vol. 70, no. 10, pp. 9701–9715, Oct. 2021.
[12] J. Zheng, J. Zhang, E. Björnson, Z. Li, and B. Ai, "Cell-free massive MIMO-OFDM for high-speed train communications," IEEE J. Sel. Areas Commun., vol. 40, no. 10, pp. 2823–2839, Oct. 2022.
[13] M. Attarifar, A. Abbasfar, and A. Lozano, "Subset MMSE receivers for cell-free networks," IEEE Trans. Wireless Commun., vol. 19, no. 6, pp. 4183–4194, Jun. 2020.
[14] E. Nayebi, A. Ashikhmin, T. L. Marzetta, and B. D. Rao, "Performance of cell-free massive MIMO systems with MMSE and LSFD receivers," in Proc. Asilomar Conf. Signals, Syst., Comput., Nov. 2016, pp. 203–207.
[15] Ö. Özdogan, E. Björnson, and J. Zhang, "Performance of cell-free massive MIMO with Rician fading and phase shifts," IEEE Trans. Wireless Commun., vol. 18, no. 11, pp. 5299–5315, Nov. 2019.
[16] Z. Wang, J. Zhang, E. Björnson, and B. Ai, "Uplink performance of cell-free massive MIMO over spatially correlated Rician fading channels," IEEE Commun. Lett., vol. 25, no. 4, pp. 1348–1352, Apr. 2021.
[17] J. Zhang, J. Zhang, E. Björnson, and B. Ai, "Local partial zero-forcing combining for cell-free massive MIMO systems," IEEE Trans. Commun., vol. 69, no. 12, pp. 8459–8473, Dec. 2021.
[18] G. Interdonato, P. Frenger, and E. G. Larsson, "Scalability aspects of cell-free massive MIMO," in Proc. IEEE ICC, 2019, pp. 1–6.
[19] Z. H. Shaik, E. Björnson, and E. G. Larsson, "MMSE-optimal sequential processing for cell-free massive MIMO with radio stripes," IEEE Trans. Commun., vol. 69, no. 11, pp. 7775–7789, Nov. 2021.
[20] H. Masoumi and M. J. Emadi, "Performance analysis of cell-free massive MIMO system with limited fronthaul capacity and hardware impairments," IEEE Trans. Wireless Commun., vol. 19, no. 2, pp. 1038–1053, Feb. 2020.
[21] M. Bashar, P. Xiao, R. Tafazolli, K. Cumanan, A. G. Burr, and E. Björnson, "Limited-fronthaul cell-free massive MIMO with local MMSE receiver under Rician fading and phase shifts," IEEE Wireless Commun. Lett., vol. 10, no. 9, pp. 1934–1938, Sep. 2021.
[22] T. Van Chien, H. Q. Ngo, S. Chatzinotas, M. Di Renzo, and B. Ottersten, "Reconfigurable intelligent surface-assisted cell-free massive MIMO systems over spatially-correlated channels," IEEE Trans. Wireless Commun., vol. 21, no. 7, pp. 5106–5128, Jul. 2022.
[23] E. Shi, J. Zhang, S. Chen, J. Zheng, Y. Zhang, D. W. K. Ng, and B. Ai, "Wireless energy transfer in RIS-aided cell-free massive MIMO systems: Opportunities and challenges," IEEE Commun. Mag., vol. 60, no. 3, pp. 26–32, Mar. 2022.
[24] T. C. Mai, H. Q. Ngo, and T. Q. Duong, "Cell-free massive MIMO systems with multi-antenna users," in Proc. IEEE GlobalSIP, Nov. 2018, pp. 828–832.
[25] S. Buzzi, C. D'Andrea, A. Zappone, and C. D'Elia, "User-centric 5G cellular networks: Resource allocation and comparison with the cell-free massive MIMO approach," IEEE Trans. Wireless Commun., vol. 19, no. 2, pp. 1250–1264, Feb. 2020.
[26] T. C. Mai, H. Q. Ngo, and T. Q. Duong, "Downlink spectral efficiency of cell-free massive MIMO systems with multi-antenna users," IEEE Trans. Commun., vol. 68, no. 8, pp. 4803–4815, Apr. 2020.
[27] M. Zhou, L. Yang, and H. Zhu, "Sum-SE for multigroup multicast cell-free massive MIMO with multi-antenna users and low-resolution DACs," IEEE Wireless Commun. Lett., vol. 10, no. 8, pp. 1702–1706, May 2021.
[28] Z. Wang, J. Zhang, B. Ai, C. Yuen, and M. Debbah, "Uplink performance of cell-free massive MIMO with multi-antenna users over jointly-correlated Rayleigh fading channels," IEEE Trans. Wireless Commun., vol. 21, no. 9, pp. 7391–7406, Sep. 2022.
[29] W. Weichselberger, M. Herdin, H. Ozcelik, and E. Bonek, "A stochastic MIMO channel model with joint correlation of both link ends," IEEE Trans. Wireless Commun., vol. 5, no. 1, pp. 90–100, Jan. 2006.
[30] Q. Shi, M. Razaviyayn, Z.-Q. Luo, and C. He, "An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel," IEEE Trans. Signal Process., vol. 59, no. 9, pp. 4331–4340, Apr. 2011.
[31] J. Shin and J. Moon, "Weighted-sum-rate-maximizing linear transceiver filters for the K-user MIMO interference channel," IEEE Trans. Commun., vol. 60, no. 10, pp. 2776–2783, Sep. 2012.
[32] S. S. Christensen, R. Agarwal, E. De Carvalho, and J. M. Cioffi, "Weighted sum-rate maximization using weighted MMSE for MIMO-BC beamforming design," IEEE Trans. Wireless Commun., vol. 7, no. 12, pp. 4792–4799, 2008.
[33] X. Li, X. Zhang, Y. Zhou, and L. Hanzo, "Optimal massive-MIMO-aided clustered base-station coordination," IEEE Trans. Veh. Technol., vol. 70, no. 3, pp. 2699–2712, Feb. 2021.
[34] H. Q. Ngo, H. Tataria, M. Matthaiou, S. Jin, and E. G. Larsson, "On the performance of cell-free massive MIMO in Ricean fading," in Proc. Asilomar Conf. Signals, Syst., Comput., Oct. 2018, pp. 980–984.
[35] E. Björnson and B. Ottersten, "A framework for training-based estimation in arbitrarily correlated Rician MIMO channels with Rician
|
2721 |
+
disturbance,” IEEE Trans. Signal Process., vol. 58, no. 3, pp. 1807–1820, Nov. 2010.
|
2722 |
+
[36] E. Bj¨ornson, J. Hoydis, and L. Sanguinetti, “Massive MIMO networks: Spectral, energy, and hardware efficiency,” Foundations and
|
2723 |
+
Trends® in Signal Processing, vol. 11, no. 3-4, pp. 154–655, 2017.
|
2724 |
+
[37] D. Tse and P. Viswanath, Fundamentals of wireless communication.
|
2725 |
+
Cambridge university press, 2005.
|
2726 |
+
[38] M. Bashar, K. Cumanan, A. G. Burr, H. Q. Ngo, M. Debbah, and P. Xiao, “Max-min rate of cell-free massive MIMO uplink with
|
2727 |
+
optimal uniform quantization,” IEEE Trans. Commun., vol. 67, no. 10, pp. 6796–6815, Oct. 2019.
|
2728 |
+
[39] B. Hu, C. Hua, C. Chen, and X. Guan, “Joint beamformer design for wireless fronthaul and access links in C-RANs,” IEEE Trans.
|
2729 |
+
Wireless Commun., vol. 17, no. 5, pp. 2869–2881, May 2018.
|
2730 |
+
|
2731 |
+
[40] J. Hoydis, S. ten Brink, and M. Debbah, “Massive MIMO in the UL/DL of cellular networks: How many antennas do we need?” IEEE J. Sel. Areas Commun., vol. 31, no. 2, pp. 160–171, Feb. 2013.
[41] S. Wagner, R. Couillet, M. Debbah, and D. T. M. Slock, “Large system analysis of linear precoding in correlated MISO broadcast channels under limited feedback,” IEEE Trans. Inf. Theory, vol. 58, no. 7, pp. 4509–4537, Jul. 2012.
[42] J. Zhang, J. Zhang, D. W. K. Ng, S. Jin, and B. Ai, “Improving sum-rate of cell-free massive MIMO with expanded compute-and-forward,” IEEE Trans. Signal Process., vol. 70, pp. 202–215, 2022.
[43] A. Tulino, A. Lozano, and S. Verdú, “Impact of antenna correlation on the capacity of multiantenna channels,” IEEE Trans. Inf. Theory, vol. 51, no. 7, pp. 2491–2509, Jul. 2005.
[44] Y. Wu, S. Ma, and Y. Gu, “A unified framework of non-orthogonal pilot design for multi-cell massive MIMO systems,” IEEE Trans. Commun., vol. 68, no. 12, pp. 7623–7633, Dec. 2020.
[45] A. Hjørungnes, Complex-Valued Matrix Derivatives: With Applications in Signal Processing and Communications. Cambridge University Press, 2011.
DNE0T4oBgHgl3EQfggEA/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff.

EdE1T4oBgHgl3EQfWgTb/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b71f34899cdd9cc0ce9e23b7bafbcc82aaba807734f7f3105cf6845690f8b482
+size 7667757

EdE5T4oBgHgl3EQfUw9y/content/2301.05546v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8059ee327055e8029001a9f59d3bacd32eddd339177eb8d41d36623b6d3a7e08
+size 292322

EdE5T4oBgHgl3EQfUw9y/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a486f3dd10015351b4ecee684b4df7503240db8bccdc92c22bd485e5150246c4
+size 2555949