Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- -NFQT4oBgHgl3EQfKDVk/vector_store/index.pkl +3 -0
- .gitattributes +59 -0
- 0tE1T4oBgHgl3EQflAQk/vector_store/index.pkl +3 -0
- 19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf +3 -0
- 19AzT4oBgHgl3EQfDfqo/vector_store/index.pkl +3 -0
- 1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf +3 -0
- 1tE0T4oBgHgl3EQf_wKi/vector_store/index.faiss +3 -0
- 1tE0T4oBgHgl3EQf_wKi/vector_store/index.pkl +3 -0
- 2dAzT4oBgHgl3EQfRvud/content/tmp_files/2301.01221v1.pdf.txt +0 -0
- 2dAzT4oBgHgl3EQfRvud/content/tmp_files/load_file.txt +0 -0
- 49FAT4oBgHgl3EQfFRw-/vector_store/index.faiss +3 -0
- 4NAzT4oBgHgl3EQfuv1g/content/tmp_files/2301.01695v1.pdf.txt +0 -0
- 4NAzT4oBgHgl3EQfuv1g/content/tmp_files/load_file.txt +0 -0
- 4NE2T4oBgHgl3EQf6Qhs/content/tmp_files/2301.04198v1.pdf.txt +1829 -0
- 4NE2T4oBgHgl3EQf6Qhs/content/tmp_files/load_file.txt +0 -0
- 4tAzT4oBgHgl3EQf9v5K/content/tmp_files/2301.01923v1.pdf.txt +696 -0
- 4tAzT4oBgHgl3EQf9v5K/content/tmp_files/load_file.txt +0 -0
- 5dAyT4oBgHgl3EQfQPaq/content/2301.00042v1.pdf +3 -0
- 5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf +3 -0
- 5tAzT4oBgHgl3EQff_wz/vector_store/index.faiss +3 -0
- 5tAzT4oBgHgl3EQff_wz/vector_store/index.pkl +3 -0
- 7dAyT4oBgHgl3EQfQvYd/vector_store/index.pkl +3 -0
- 89FLT4oBgHgl3EQfBi6R/content/2301.11971v1.pdf +3 -0
- 89FLT4oBgHgl3EQfBi6R/vector_store/index.pkl +3 -0
- 9tAyT4oBgHgl3EQf3Pnc/content/2301.00767v1.pdf +3 -0
- A9E4T4oBgHgl3EQfEwz_/content/tmp_files/2301.04881v1.pdf.txt +751 -0
- A9E4T4oBgHgl3EQfEwz_/content/tmp_files/load_file.txt +0 -0
- AdFJT4oBgHgl3EQfrS3C/content/tmp_files/2301.11608v1.pdf.txt +1275 -0
- AdFJT4oBgHgl3EQfrS3C/content/tmp_files/load_file.txt +0 -0
- C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf +3 -0
- C9AyT4oBgHgl3EQfSPck/vector_store/index.faiss +3 -0
- C9AyT4oBgHgl3EQfSPck/vector_store/index.pkl +3 -0
- CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf +3 -0
- CNFQT4oBgHgl3EQf-jfx/vector_store/index.faiss +3 -0
- CNFQT4oBgHgl3EQf-jfx/vector_store/index.pkl +3 -0
- DtE0T4oBgHgl3EQfQgBB/content/tmp_files/2301.02193v1.pdf.txt +2495 -0
- DtE0T4oBgHgl3EQfQgBB/content/tmp_files/load_file.txt +0 -0
- DtE1T4oBgHgl3EQfWQTV/vector_store/index.faiss +3 -0
- FdAyT4oBgHgl3EQfrPl7/content/tmp_files/2301.00557v1.pdf.txt +2010 -0
- FdAyT4oBgHgl3EQfrPl7/content/tmp_files/load_file.txt +0 -0
- GNE2T4oBgHgl3EQf-glT/content/tmp_files/2301.04239v1.pdf.txt +1672 -0
- GNE2T4oBgHgl3EQf-glT/content/tmp_files/load_file.txt +0 -0
- GNE3T4oBgHgl3EQfWAqd/content/tmp_files/2301.04465v1.pdf.txt +992 -0
- GNE3T4oBgHgl3EQfWAqd/content/tmp_files/load_file.txt +0 -0
- HNE5T4oBgHgl3EQfWQ9u/content/tmp_files/2301.05557v1.pdf.txt +2132 -0
- HNE5T4oBgHgl3EQfWQ9u/content/tmp_files/load_file.txt +0 -0
- HdE3T4oBgHgl3EQfuAus/content/2301.04681v1.pdf +3 -0
- HdE3T4oBgHgl3EQfuAus/vector_store/index.faiss +3 -0
- HdE3T4oBgHgl3EQfuAus/vector_store/index.pkl +3 -0
- I9E3T4oBgHgl3EQfuwtu/content/2301.04687v1.pdf +3 -0
-NFQT4oBgHgl3EQfKDVk/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa4c85a5925b39b7a27b47c2d728b5bd36a668551215419d42e12f5e1c091429
+size 315468
.gitattributes CHANGED
@@ -5390,3 +5390,62 @@ KdFAT4oBgHgl3EQfvx7k/content/2301.08678v1.pdf filter=lfs diff=lfs merge=lfs -text
 5dAyT4oBgHgl3EQfQPaq/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 RtE4T4oBgHgl3EQf_g5X/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 5NAzT4oBgHgl3EQfEPrI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+HdE3T4oBgHgl3EQfuAus/content/2301.04681v1.pdf filter=lfs diff=lfs merge=lfs -text
+hdE1T4oBgHgl3EQffQTi/content/2301.03217v1.pdf filter=lfs diff=lfs merge=lfs -text
+rNAyT4oBgHgl3EQfzvn1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+R9E4T4oBgHgl3EQflg05/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+JdE4T4oBgHgl3EQfIwx_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+PdE2T4oBgHgl3EQfrgiA/content/2301.04050v1.pdf filter=lfs diff=lfs merge=lfs -text
+hNAyT4oBgHgl3EQfj_j5/content/2301.00427v1.pdf filter=lfs diff=lfs merge=lfs -text
+qNE2T4oBgHgl3EQfKga-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+UtAzT4oBgHgl3EQflv2x/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+I9E3T4oBgHgl3EQfuwtu/content/2301.04687v1.pdf filter=lfs diff=lfs merge=lfs -text
+9tAyT4oBgHgl3EQf3Pnc/content/2301.00767v1.pdf filter=lfs diff=lfs merge=lfs -text
+NdE3T4oBgHgl3EQfZAra/content/2301.04494v1.pdf filter=lfs diff=lfs merge=lfs -text
+jNE1T4oBgHgl3EQfNQMV/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+V9AzT4oBgHgl3EQfYPyg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+P9FKT4oBgHgl3EQfhC52/content/2301.11836v1.pdf filter=lfs diff=lfs merge=lfs -text
+HdE3T4oBgHgl3EQfuAus/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+mtE2T4oBgHgl3EQfzQgZ/content/2301.04128v1.pdf filter=lfs diff=lfs merge=lfs -text
+hdE1T4oBgHgl3EQffQTi/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+cdE0T4oBgHgl3EQfnwHq/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf filter=lfs diff=lfs merge=lfs -text
+utE_T4oBgHgl3EQf-Rwc/content/2301.08385v1.pdf filter=lfs diff=lfs merge=lfs -text
+CNFQT4oBgHgl3EQf-jfx/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+RtE4T4oBgHgl3EQf_g5X/content/2301.05371v1.pdf filter=lfs diff=lfs merge=lfs -text
+qNE2T4oBgHgl3EQfKga-/content/2301.03704v1.pdf filter=lfs diff=lfs merge=lfs -text
+1tE0T4oBgHgl3EQf_wKi/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+LtAyT4oBgHgl3EQf6frG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf filter=lfs diff=lfs merge=lfs -text
+adAzT4oBgHgl3EQf2v4D/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+fNAzT4oBgHgl3EQfoP3V/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+KdFAT4oBgHgl3EQfvx7k/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+Q9E0T4oBgHgl3EQf1gIp/content/2301.02699v1.pdf filter=lfs diff=lfs merge=lfs -text
+C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf filter=lfs diff=lfs merge=lfs -text
+I9E3T4oBgHgl3EQfuwtu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+fNAzT4oBgHgl3EQfoP3V/content/2301.01595v1.pdf filter=lfs diff=lfs merge=lfs -text
+zNE0T4oBgHgl3EQftgHb/content/2301.02594v1.pdf filter=lfs diff=lfs merge=lfs -text
+5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf filter=lfs diff=lfs merge=lfs -text
+utE_T4oBgHgl3EQf-Rwc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+XtE1T4oBgHgl3EQfJgNL/content/2301.02952v1.pdf filter=lfs diff=lfs merge=lfs -text
+XtE1T4oBgHgl3EQfJgNL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+mtE2T4oBgHgl3EQfzQgZ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+o9E5T4oBgHgl3EQfkQ_G/content/2301.05662v1.pdf filter=lfs diff=lfs merge=lfs -text
+_tFRT4oBgHgl3EQftDeF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+J9FLT4oBgHgl3EQfKi8f/content/2301.12008v1.pdf filter=lfs diff=lfs merge=lfs -text
+C9AyT4oBgHgl3EQfSPck/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+89FLT4oBgHgl3EQfBi6R/content/2301.11971v1.pdf filter=lfs diff=lfs merge=lfs -text
+DtE1T4oBgHgl3EQfWQTV/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+o9E5T4oBgHgl3EQfkQ_G/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+x9E2T4oBgHgl3EQfhQeR/content/2301.03946v1.pdf filter=lfs diff=lfs merge=lfs -text
+Q9E0T4oBgHgl3EQf1gIp/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+LtAyT4oBgHgl3EQf6frG/content/2301.00824v1.pdf filter=lfs diff=lfs merge=lfs -text
+zNE0T4oBgHgl3EQftgHb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+fNFST4oBgHgl3EQfGjh7/content/2301.13722v1.pdf filter=lfs diff=lfs merge=lfs -text
+5dAyT4oBgHgl3EQfQPaq/content/2301.00042v1.pdf filter=lfs diff=lfs merge=lfs -text
+5tAzT4oBgHgl3EQff_wz/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+49FAT4oBgHgl3EQfFRw-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+j9E3T4oBgHgl3EQf5Quy/content/2301.04780v1.pdf filter=lfs diff=lfs merge=lfs -text
+SdA0T4oBgHgl3EQfD_9b/content/2301.02011v1.pdf filter=lfs diff=lfs merge=lfs -text
+19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf filter=lfs diff=lfs merge=lfs -text
+gNE5T4oBgHgl3EQfhg_G/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
0tE1T4oBgHgl3EQflAQk/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d2da0f5619d302889f5f973bb10ec9e4ecd075945e4486e1c38600f7fea1963
+size 147677
19AzT4oBgHgl3EQfDfqo/content/2301.00978v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9925bff81be3ad818a9e86b8e6bc1dd3fd8859935ed71cf1cd5f8aebbb0bce59
+size 197336
19AzT4oBgHgl3EQfDfqo/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:727e595b3f25fecf40f8405214bbb4cd09e8bad6aa673ddc6ad5920f7380645c
+size 63368
1tE0T4oBgHgl3EQf_wKi/content/2301.02831v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23e78ee8f3fb66ca690713b6415304eb8b40cd96ba581780a7baea2296ce0738
+size 985639
1tE0T4oBgHgl3EQf_wKi/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:01c587a9a627c318e44d752df5a95eadb6159ee42591296d58b8c73f25b574d9
+size 1507373
1tE0T4oBgHgl3EQf_wKi/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61aa1451a38119f0869b0bd7e11c5ce8b6c2e72923670038bafb2964e1638b95
+size 60152
2dAzT4oBgHgl3EQfRvud/content/tmp_files/2301.01221v1.pdf.txt ADDED
The diff for this file is too large to render.

2dAzT4oBgHgl3EQfRvud/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render.
49FAT4oBgHgl3EQfFRw-/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:817a5447c9155c51156f8a225bdb3d11be81508ca5365af3f042ef1fc6e6dc54
+size 6225965
4NAzT4oBgHgl3EQfuv1g/content/tmp_files/2301.01695v1.pdf.txt ADDED
The diff for this file is too large to render.

4NAzT4oBgHgl3EQfuv1g/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render.
4NE2T4oBgHgl3EQf6Qhs/content/tmp_files/2301.04198v1.pdf.txt ADDED
@@ -0,0 +1,1829 @@
arXiv:2301.04198v1 [math.CO] 10 Jan 2023

Sharp thresholds for spanning regular graphs

Maksim Zhukovskii∗

Abstract

Let d ≥ 3 be a constant and let F be a d-regular graph on [n] with not too many symmetries. The expectation threshold for the existence of a spanning subgraph in G(n, p) isomorphic to F is p^*(n) = (1 + o(1))(e/n)^{2/d}. We give a tight bound on the edge expansion of F guaranteeing that the probability threshold for the appearance of a copy of F has the same order of magnitude as p^*. We also prove that, under a slight strengthening of this bound, the probability threshold is asymptotically equal to p^*. In particular, this proves the conjecture of Kahn, Narayanan and Park on a sharp threshold for the containment of a square of a Hamilton cycle. It also implies that, for d ≥ 4 and (asymptotically) almost all d-regular graphs F on [n], p(n) = (e/n)^{2/d} is a sharp threshold for F-containment.
1 Introduction

Let d ≥ 3 be a fixed constant. Given a d-regular graph F_n on the vertex set [n] := {1, . . . , n}, what is the threshold probability for the binomial random graph G(n, p) to contain an isomorphic copy of F_n (i.e. the unique p = p(n) such that the probability that G(n, p) contains an isomorphic copy of F_n equals 1/2)? Note that the threshold probability exists since the considered property is monotone [7, Chapter 1.5].

If F_n has a small enough automorphism group, then, by the union bound, the threshold probability is at least (1 + o(1))(e/n)^{2/d}. Indeed, let ℱ_n be the set of all isomorphic copies of F_n on [n], and let the number of automorphisms of F_n be e^{o(n)}. Clearly |ℱ_n| = n!/e^{o(n)}. Let X be the number of graphs from ℱ_n that are subgraphs of G(n, p). We get

EX = |ℱ_n| p^{dn/2} = (n!/e^{o(n)}) p^{dn/2} → 0 as n → ∞

if p < (1 − ε)(e/n)^{2/d}, implying that with high probability (whp for brevity) G(n, p) does not contain any graph from ℱ_n. Let us denote by p^*(n) = (1 + o(1))(e/n)^{2/d} the expectation threshold for the existence of a spanning subgraph in G(n, p) isomorphic to F_n (i.e. p^*(n) is the unique solution of the equation EX = 1).

∗The University of Sheffield; [email protected]
+
On the other hand, from the recently resolved “expectation–threshold” conjec-
|
45 |
+
ture of Kahn and Kalai [16] it follows that the threshold does not exceed Cp∗(n) log n
|
46 |
+
for some constant C > 0. For some specific Fn it is known that the logarithmic fac-
|
47 |
+
tor can be removed, and the threshold probability equals Θ(n−2/d): it is true for
|
48 |
+
example for powers of a Hamilton cycle [15] and for the square tori T√n×√n [15]. On
|
49 |
+
the other hand, if Fn has many small subgraphs with a small edge boundary, this
|
50 |
+
is no longer true. More precisely, assume that, for some constant v, every vertex
|
51 |
+
of Fn belongs to a subgraph on v vertices with the edge boundary at most d (the
|
52 |
+
edge boundary of a subgraph ˜F is the number of edges between ˜F and its vertex
|
53 |
+
complement) or, equivalently, with at least dv
|
54 |
+
2 − d
|
55 |
+
2 edges. Then, a polylogarithmic
|
56 |
+
factor arises since in order to contain a copy of Fn, the random graph should have
|
57 |
+
every vertex inside a graph with v vertices and at least dv
|
58 |
+
2 − d
|
59 |
+
2 edges — see [17].
|
60 |
+
We prove that, when the number of automorphisms of Fn is small enough, this
|
61 |
+
condition on the edge boundary is the only obstacle.
|
62 |
+
Theorem 1. Let d ≥ 3 and let Fn be a sequence of d-regular graphs on [n], n ∈ N,
|
63 |
+
such that
|
64 |
+
• for every ε > 0 and all large enough n the number of automorphisms of Fn is
|
65 |
+
less then eεn2/d;
|
66 |
+
• for every ˜F ⊂ Fn with 3 ≤ |V ( ˜F)| ≤ n − 3, the edge boundary of ˜F is at least
|
67 |
+
d + 1.
|
68 |
+
Let ε > 0. If p > (1 + ε)dp∗, then whp (assuming that dn is even) G(n, p) contains
|
69 |
+
a copy of Fn.
|
70 |
+
It immediately implies that the threshold probability for containing a copy of Fn
|
71 |
+
equals p(n) = Θ(n−2/d). As we mentioned above, the restriction on edge boundaries
|
72 |
+
is tight — if we allow subgraphs with edge boundary d instead of d + 1, then the
|
73 |
+
assertion becomes false.
|
74 |
+
Note that a bound on the number of symmetries can not be omitted — as soon
|
75 |
+
as the number of automorphisms of Fn becomes larger, the expectation threshold
|
76 |
+
p∗ becomes larger as well. In particular, p(n) = (d! log n)
|
77 |
+
2
|
78 |
+
d(d+1) n−2/(d+1) is a sharp
|
79 |
+
threshold for the existence of a Kd+1-factor [14].
|
80 |
+
In [15] Riordan proved a general result that for d-regular graphs can be stated
|
81 |
+
as follows: p(n) = Θ(n−2/d) is the threshold probability for containing a copy of Fn
|
82 |
+
if the d-regular graph Fn (the automorphism group should be at most exponential
|
83 |
+
2
|
84 |
+
|
85 |
+
in n) satisfies a stronger condition on the edge boundary: for every ˜F ⊂ Fn with
|
86 |
+
3 ≤ |V ( ˜F)| ≤ n−3, the edge boundary of ˜F is at least 2d. For powers of a Hamilton
|
87 |
+
cycle, this result implies the following: for every k ≥ 3, the threshold probability
|
88 |
+
for containing the kth power of a Hamilton cycle equals Θ(n−1/k). However, the
|
89 |
+
proof of Riordan does not work for k = 2. In [10], K¨uhn and Osthus proved that
|
90 |
+
n−1/2+o(1) is the threshold probability for containing the second power of a Hamilton
|
91 |
+
cycle and conjectured that the threshold is actually Θ(n−1/2). In [13], Nenadov and
|
92 |
+
ˇSkori´c proved the upper bound n−1/2(log n)4, which was improved to n−1/2(log n)3 by
|
93 |
+
Fischer, ˇSkori´c, Steger and Truji´c in [4], and to n−1/2(log n)2 in an unpublished work
|
94 |
+
of Montgomery (see [6]). Eventually, the conjecture was solved by Kahn, Narayanan
|
95 |
+
and Park in [8]. However, they did non settle a right constant in front of n−1/2 and
|
96 |
+
conjectured that the right constant is √e and that the threshold p(n) =
|
97 |
+
�
|
98 |
+
e/n(1 +
|
99 |
+
o(1)) is sharp (i.e., if p > (1 + ε)
|
100 |
+
�
|
101 |
+
e/n, then whp G(n, p) contains the second power
|
102 |
+
of a Hamilton cycle). In this paper, we prove this conjecture and even more: for
|
103 |
+
d ≤ 4 the requirement from Theorem 1 guarantees that p(n) = (1 + o(1))
|
104 |
+
�
|
105 |
+
e/n
|
106 |
+
is even sharp; however, for d ≥ 5 we need to strengthen the bound on the edge
|
107 |
+
boundary to 2d − 2 (note that this is still better than the condition of Riordan).
|
108 |
+
Theorem 2. Let d ≥ 3 and let Fn be a sequence of d-regular graphs on [n], n ∈ N,
|
109 |
+
such that
|
110 |
+
• for every ε > 0 and all large enough n the number of automorphisms of Fn is
|
111 |
+
less then eεn2/d;
|
112 |
+
• either d ∈ {3, 4} and, for every ˜F ⊂ Fn with 3 ≤ |V ( ˜F)| ≤ n − 3, the edge
|
113 |
+
boundary of ˜F is at least d + 1,
|
114 |
+
or d ≥ 5 and, for every ˜F ⊂ Fn with 3 ≤ |V ( ˜F)| ≤ n − 3, the edge boundary
|
115 |
+
of ˜F is at least 2d − 2.
|
116 |
+
Let ε > 0. If p > (1+ε)
|
117 |
+
� e
|
118 |
+
n
|
119 |
+
�2/d, then whp (assuming that dn is even) G(n, p) contains
|
120 |
+
a copy of Fn.
|
121 |
+
Kahn, Narayanan and Park in [8] noted that the crucial fact that can be used
|
122 |
+
to prove that the threshold for appearance of the second power of a Hamilton cycle
|
123 |
+
equals Θ(n−1/2) is that the hypergraph of all copies of the second power of a cycle
|
124 |
+
on [n] is (1 + o(1))
|
125 |
+
�
|
126 |
+
e/n-spread. Actually, they refined the notion of spreadness by
|
127 |
+
incorporating the count of the number of components in a subhyperedge. This re-
|
128 |
+
fined notion was distilled by D´ıaz and Person in [3], named superspreadness and used
|
129 |
+
to generalise the result of Kahn, Narayanan and Park to a wider class of spanning
|
130 |
+
subgraphs in G(n, p). In particular, they answered a question of Frieze asked in [5]
|
131 |
+
— they showed that the threshold for appearance of spanning 2-overlapping 4-cycles
|
132 |
+
(i.e. the copies of C4 are ordered cyclically, two consecutive C4 overlap in exactly
|
133 |
+
3
|
134 |
+
|
135 |
+
one edge, whereby each C4 overlaps with two copies of C4 in opposite edges) equals
|
136 |
+
Θ(n−2/3). Clearly, Theorem 2 implies that p(n) = (e/n)2/3 is a sharp threshold for
|
137 |
+
appearance of spanning 2-overlapping 4-cycles.
|
138 |
+
Let us call a sequence of d-regular graphs on [n] satisfying the conditions of
|
139 |
+
Theorem 2 good. Note that, for every d ≥ 4, almost all d-regular graphs are good
|
140 |
+
(see [2, 9, 11]). In particular, if d ≥ 5, then whp in a random d-regular graph on
|
141 |
+
[n] there are no subgraphs with 3 ≤ v ≤ n − 3 vertices and the edge boundary at
|
142 |
+
most 2d − 2. If d = 4, then whp there are no subgraphs with 3 ≤ v ≤ n − 3 vertices
|
143 |
+
and the edge boundary at most d + 1. If d = 3, then whp there are no subgraphs
|
144 |
+
with 3 ≤ v ≤ n − 3 vertices and the edge boundary at most d + 1 other than C3, C4
|
145 |
+
and their vertex-complements. Since the edge boundary of C4 is exactly d + 1 = 4,
|
146 |
+
a random 3-regular graph is good whp under the condition that it does not contain
|
147 |
+
triangles.
|
148 |
+
Corollary 1. For every good sequence Fn, p(n) =
|
149 |
+
� e
|
150 |
+
n
|
151 |
+
�2/d is a sharp threshold for
|
152 |
+
containing a copy of Fn. In particular,
|
153 |
+
• for every ℓ ≥ 2, p(n) = (e/n)ℓ is a sharp threshold for containing the ℓth power
|
154 |
+
of a Hamilton cycle;
|
155 |
+
• p(n) = (e/n)2/3 is a sharp threshold for containing a spanning 2-overlapping
|
156 |
+
4-cycle;
|
157 |
+
• for every 3 ≤ m ≤ √n, p(n) =
|
158 |
+
�
|
159 |
+
e/n is a sharp threshold for containing
|
160 |
+
rectangular tori Tm×n/m (assuming that n is divisible by m);
|
161 |
+
• for every d ≥ 4 and (asymptotically) almost all d-regular graphs Fn on [n], as-
|
162 |
+
suming that dn is even, p(n) = (e/n)2/d is a sharp threshold for Fn-containment;
|
163 |
+
• for (asymptotically) almost all triangle-free 3-regular graphs Fn on [n], assum-
|
164 |
+
ing that n is odd, p(n) = (e/n)2/3 is a sharp threshold for Fn-containment.
|
165 |
+
Actually we are able to establish the same sharp threshold for almost all 3-regular
|
166 |
+
graphs — the condition of the absence of triangles is redundant, since the number
|
167 |
+
of triangles converges in probability to a Poisson random variable [19], and so it is
|
168 |
+
bounded in probability. In other words, we may allow Fn to have a bounded number
|
169 |
+
of subgraphs with a smaller edge boundary. However, we do not want to overload
|
170 |
+
the proof with technical details, and so we formulate Theorem 1 and Theorem 2 as
|
171 |
+
well as Corollary 1 in their current laconic forms.
|
172 |
+
We prove Theorem 1 using the “planted trick” that in different forms appears
|
173 |
+
in many applications — one of them is the well-known and very useful “spread
|
174 |
+
4
|
175 |
+
|
176 |
+
lemma” [1] which in particular gives good sunflower bounds [18]; in probabilistic
|
177 |
+
terms the application of the trick for the “spread lemma” is described in [12]. Kahn,
|
178 |
+
Narayanan and Park [8] and further D´ıaz and Person [3] used the “planted trick” to
|
179 |
+
prove their results on threshold probabilities as well. In essence, the key idea is to
|
180 |
+
“plant” a graph F from the family Fn and to combine it with the noise produced
|
181 |
+
by G(n, p). Then, it turns out that whp there exists a graph F ′ ∈ Fn which is
|
182 |
+
entirely inside the perturbed planted hyperedge F ∪ G(n, p) such that the size of
|
183 |
+
F ′ \ G(n, p) is quite small. This allows to replace Fn with the set of fragments of
|
184 |
+
F ∈ Fn equal to F ′ \ G(n, p), to draw independently edges of another G(n, p) and
|
185 |
+
to apply the same argument once again. If the number of steps in this procedure
|
186 |
+
is bounded by a constant, then we get that the threshold probability has the same
|
187 |
+
order of magnitude as p∗.
|
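The reason a bounded number of such rounds costs only a constant factor is the standard sprinkling identity (our remark, not the paper's notation): the union of k independent copies of G(n, p) is distributed as G(n, q) with q = 1 − (1 − p)^k ≤ kp. A minimal check of the inequality:

```python
def union_probability(p: float, k: int) -> float:
    """Probability that a fixed pair is an edge in the union of k
    independent copies of G(n, p): q = 1 - (1 - p)^k."""
    return 1.0 - (1.0 - p) ** k

p, k = 0.001, 7
q = union_probability(p, k)
print(p <= q <= k * p)  # True: k rounds at density p stay within density k*p
```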
188 |
+
In the proof of Theorem 2 we show that it is sufficient to apply this trick only once. Actually, the usual second moment method (but for the uniform model instead of the binomial) works as well. However, we give the proof of Theorem 2 in terms of the planted hyperedge for the sake of convenience and coherence. In particular, we want to explicitly show the borders between the following three phenomena: 1) it is sufficient to apply the "planted trick" once, 2) it is sufficient to apply the "planted trick" constantly many times, 3) the number of applications of the trick is unbounded. We claim that our analysis is optimal, and the method in its current form cannot be used to weaken the bound on edge boundaries in Theorem 2 for d ≥ 5. Our main achievement is that we make a step beyond the usage of the notions of spreadness and superspreadness. We obtain optimal bounds on the number of hyperedges containing a given set of edges I (commonly denoted by |Fn ∩ ⟨I⟩|) and on the number of subgraphs of Fn with a fixed number of vertices, edges and components (see Section 5 and Claim 6). The main ingredient of the proof of Claim 6 is a very nice property of d-regular graphs satisfying the requirements of Theorem 1: for every v, there are not too many subgraphs on v vertices with the maximum possible number of edges dv/2 − ⌈(d+1)/2⌉ (see Section 2).

We describe the "planted trick" in Section 3. Then we prove both theorems in Section 4. Sections 6 and 7 are devoted to the proof of Claim 6 and of the key lemma (Lemma 3 from Section 3) that validates the application of the planted trick, respectively.
2 Linearly many closed subgraphs

Let us call a graph Fn with the second property from the requirement (on the edge boundary) in Theorem 1 locally sparse. Note that this (local sparsity) property is that the edge boundary of every subgraph F̃ with 3 ≤ |V(F̃)| ≤ n − 3 is at least d + 1. Clearly d + 1 can be replaced with d + 2 for even d, since in this case the edge boundary δ(F̃) cannot be odd. Let ∆ = d + 1 for odd d and ∆ = d + 2 for even d. It is easy to see that the condition |δ(F̃)| ≥ ∆ holds for all F̃ with 2 ≤ |V(F̃)| ≤ d − 1 just due to the d-regularity of Fn. Let us call a subgraph F̃ with edge boundary exactly ∆ closed (note that a closed subgraph is always connected). For j < d, let us call a vertex w of a connected subgraph F̃ ⊂ Fn j-free if its degree in F̃ equals j; w is simply free if it is j-free for some j < d.

Let F be a locally sparse d-regular graph on [n].
Claim 1. Every closed subgraph of F with at least 3 vertices has minimum degree at least d/2.

Proof. Assume that F̃ is a closed subgraph of F with a vertex w having degree d′ < d/2. If we remove the vertex w from F̃, then we get the graph F̃ \ w with edge boundary δ(F̃) + 2d′ − d < δ(F̃) = ∆. This contradicts the local sparsity of F when |V(F̃)| ≥ 4. Otherwise it contradicts the fact that a subgraph on 2 vertices has edge boundary at least 2d − 2 ≥ ∆.
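The degree count in this proof can be checked mechanically: when w leaves F̃, its d − d′ boundary edges disappear from the boundary, while its d′ edges into F̃ become boundary edges, a net change of 2d′ − d. A minimal sketch verifying this identity on a toy example (our own illustration, not from the paper; the graph K4 and the helper `boundary` are ours):

```python
# Sanity check (our own illustration, not from the paper): removing a vertex w
# of degree d' from a subgraph H of a d-regular graph F changes the edge
# boundary by 2d' - d: the d - d' boundary edges at w vanish, while the d'
# edges from w into H become boundary edges. We verify this on K4 (3-regular),
# with H a triangle.
from itertools import combinations

def boundary(edges, H):
    """Number of edges with exactly one endpoint in the vertex set H."""
    return sum(1 for u, v in edges if (u in H) != (v in H))

d = 3
F_edges = list(combinations(range(4), 2))   # K4 is 3-regular on 4 vertices

H = {0, 1, 2}           # a triangle inside K4; delta(H) = 3
w, dw = 2, 2            # w has degree d' = 2 inside H

assert boundary(F_edges, H) == 3
assert boundary(F_edges, H - {w}) == boundary(F_edges, H) + 2 * dw - d
```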
Claim 2. For any pair of adjacent vertices x, y in F and for every 3 ≤ v ≤ n − 3, there are at most two closed subgraphs in F on v vertices containing x and not containing y.

Proof. Fix adjacent vertices x, y and 3 ≤ v ≤ n − 3.

A closed graph F̃ ⊂ F sends exactly ∆ edges to F \ F̃, implying that F \ F̃ is also closed. Assume that v ≥ n/2, and that there are at least 3 closed graphs on v vertices that share x and do not contain y. Then their complements are closed graphs on n − v ≤ n/2 vertices that share y and do not share x. Therefore, it is sufficient to prove the claim for v ≤ n/2.

Let H1, H2 be different closed subgraphs of F on v vertices that contain x and do not contain y. Note that H1, H2 should have at least one other common vertex, since otherwise the degree of x is bigger than d due to Claim 1. Then |V(H1) ∪ V(H2)| ≤ n − 2.

Let H0 = H1 ∩ H2. Note that |E(H0)| ≤ (d/2)|V(H0)| − ∆/2, implying that |E(Hj) \ E(H0)| ≥ (d/2)|V(Hj \ H0)| for both j = 1 and j = 2, since H1, H2 are closed. On the other hand, if, say, |E(H2) \ E(H0)| > (d/2)|V(H2 \ H0)|, then |E(H1 ∪ H2)| > (d/2)|V(H1 ∪ H2)| − ∆/2, which contradicts the local sparsity of F since |V(H1 ∪ H2)| ≤ n − 2. Therefore, |E(Hj) \ E(H0)| = (d/2)|V(Hj \ H0)| for both j = 1 and j = 2, but then H0 is closed.

Then, there are exactly ∆ edges between H0 and F \ H0, and one of them is the edge between x and y. It means that Hj \ H0, j ∈ {1, 2}, send at most ∆ − 1 edges (in total) to H0. This may happen only if |V(Hj \ H0)| = 1 for both j = 1 and j = 2. Indeed, |V(H1 \ H0)| = |V(H2 \ H0)|. Moreover, the number of edges that Hj \ H0 sends to H0 equals

|E(Hj) \ E(H0)| − |E(Hj \ H0)| ≥ (d/2)|V(Hj \ H0)| − ((d/2)|V(Hj \ H0)| − ∆/2) = ∆/2

whenever |V(Hj \ H0)| ≥ 2.
Assume that there exists a closed graph H3 ̸⊂ H1 ∪ H2 on v vertices that contains x and does not contain y. From the above it follows that H3 ∩ H1 = H3 ∩ H2 = H0. Each vertex of Hj \ H0 sends at least d/2 edges to H0 due to Claim 1. But then the vertices from Hj \ H0 send at least 3d/2 ≥ ∆ edges to H0 — a contradiction (since there is one additional edge {x, y} in the edge boundary of H0). Therefore, any other closed graph that contains x and does not contain y should be entirely inside H1 ∪ H2. Assume that such a graph H3 exists. Let w1 ∈ H1 \ H0, w2 ∈ H2 \ H0. Clearly, H3 contains w1, w2 and all but 1 vertex of H0. In the same way as above we get that H1 ∩ H2 = H0, H1 ∩ H3 and H2 ∩ H3 are three closed graphs on v − 1 vertices that contain x and do not contain y. These three closed graphs on v − 1 vertices have the property that none of them is inside the union of the other two — this is only possible when v − 1 = 2, i.e. v = 3. The only possible closed graph on 3 vertices is a triangle. Moreover, a triangle is closed only when d = 4. So, H1, H2 are triangles sharing an edge, but then H3 adds another edge to the union H1 ∪ H2, implying that H1 ∪ H2 ∪ H3 is a 4-clique. We get a contradiction with the local sparsity since the edge boundary of a 4-clique is 4 < ∆ = 6.

From this it immediately follows that, for every v, there are at most Cn closed subgraphs on v vertices in F for a certain universal constant C. More precisely, the following is true.
Claim 3. Let k ∈ N, and let F′ be the induced subgraph of F on [k]. For every 3 ≤ v ≤ n − 3, the number of closed subgraphs of F′ with v vertices is at most 2dk/3.

Proof. Fix a vertex w in F′ and let us bound the number (denoted by µ(w)) of closed subgraphs of F′ on v vertices containing w such that the vertex w is free in these graphs. Due to Claim 2, µ(w) ≤ 2d. On the other hand, Claim 1 implies that every closed subgraph contains at least 3 free vertices. Letting f be the number of closed subgraphs in F′ on v vertices, by double counting we get that 3f ≤ Σ_{w∈V(F′)} µ(w) ≤ 2dk, as needed.
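The double-counting step above can be sketched numerically. The following illustration is ours, not the paper's: it builds the largest incidence structure compatible with the two constraints (every closed subgraph has at least 3 free vertices; every vertex is free in at most 2d closed subgraphs) and confirms that the number f of subgraphs indeed cannot exceed 2dk/3. The parameter values are arbitrary.

```python
# Illustration (ours, not from the paper) of the double-counting step:
# if every closed subgraph contains at least 3 free vertices and each vertex
# is free in at most mu_max = 2d closed subgraphs, then counting the
# (subgraph, free vertex) incidences in two ways gives 3f <= sum mu(w) <= 2dk.

d, k = 5, 40
mu_max = 2 * d                      # bound from Claim 2

# Largest possible incidence structure: each subgraph takes exactly 3 free
# vertices, filling the vertex capacities greedily (round-robin).
slots = k * mu_max                  # total incidence capacity
f = slots // 3                      # each subgraph consumes 3 incidences
mu = [0] * k
for s in range(f):
    for j in range(3):
        mu[(3 * s + j) % k] += 1

assert all(m <= mu_max for m in mu)     # vertex capacities respected
assert 3 * f == sum(mu) <= 2 * d * k    # the two counts of incidences
assert f <= 2 * d * k // 3              # the bound of Claim 3
```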
For d = 3, 4, we need sharper bounds. Let us start from d = 3.

Claim 4. Let d = 3, k ∈ N. Let F′ be the induced subgraph of F on [k]. Then for every 3 ≤ v ≤ n − 3, there are at most (3/4)k closed subgraphs in F′ on v vertices.

Proof. Fix a vertex w in F′ and let us compute the number µ(w) of closed subgraphs of F′ on v vertices containing w such that the vertex w is free in these graphs. Reviewing the proof of Claim 2, we may see that in the case d = 3, every vertex x may be inside only a single closed subgraph on v vertices that does not contain another vertex y — otherwise H1 \ H0 sends at least 2 edges to H0, and the same for H2 \ H0, implying that H0 cannot be closed. Then, for every w, µ(w) ≤ 3. On the other hand, Claim 1 implies that every closed subgraph contains at least 4 free vertices. Letting f be the number of closed subgraphs in F′ on v vertices, by double counting we get that 4f ≤ Σ_{w∈V(F′)} µ(w) ≤ 3k, as needed.

For d = 4, we get the following.

Claim 5. Let d = 4, k ∈ N. Let F′ be the induced subgraph of F on [k]. Then for every 3 ≤ v ≤ n − 3, there are at most (4/3)k closed subgraphs in F′ on v vertices.

Proof. Fix a vertex w in F′ and let us compute the number of closed subgraphs of F′ on v vertices containing w such that the vertex w is free in these graphs. Let µj(w) be the number of closed subgraphs on v vertices such that w is j-free in these graphs. Due to Claim 1, µ1(w) = 0. Due to Claim 2, µ3(w) + 2µ2(w) ≤ 8. On the other hand, letting f be the number of closed subgraphs in F′ on v vertices, since every closed graph has edge boundary equal to 6, we get that 6f is exactly the number of pairs (a closed graph F̃, an edge from the boundary of F̃). Therefore, 6f = Σ_w (µ3(w) + 2µ2(w)) ≤ 8k, implying that the number of closed graphs on v vertices is at most (4/3)k, as needed.
3 Planted hyperedge

Let dn be even, let Fn be a d-regular graph on [n] satisfying the requirements of Theorem 1, and let F be a uniformly chosen random element of Fn. Let ε > 0, let w = (1 + ε + o(1))(e/n)^{2/d} \binom{n}{2} be a sequence of integers, and let W be a random graph on [n] with w edges chosen uniformly at random. In this section, we review the constructions and follow the terminology from [8]. For the sake of convenience, we give the argument in full. Our achievement is Lemma 3, which we state at the end of the section.

Fix a non-negative integer ℓ0. Let us call a pair (F ∈ Fn, W ⊂ \binom{[n]}{2}) ℓ0-bad if, for every ℓ0-subset L ⊂ F (we hereinafter assume that F is the set of edges), we have that L ⊔ [W \ F] does not contain a graph from Fn.
Fix t ∈ {0, 1, . . . , dn/2} and let w′ = w − t. Note that, for F ∈ Fn and W ⊂ \binom{[n]}{2} such that |F ∩ W| = t, we have

|F ∪ W| = dn/2 + w − t = w′ + dn/2.

Call Z ∈ \binom{\binom{[n]}{2}}{w′ + dn/2} ℓ0-pathological if

|{F ⊂ Z : F ∈ Fn, (F, Z) is ℓ0-bad}| > (1/n) |Fn| \binom{\binom{n}{2} − dn/2}{w′} / \binom{\binom{n}{2}}{w′ + dn/2} =: M.
Note that

P(|F ∩ W| = t) = \binom{dn/2}{t} \binom{\binom{n}{2} − dn/2}{w′} / \binom{\binom{n}{2}}{w}.
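This is the hypergeometric distribution: W draws w of the \binom{n}{2} possible edges uniformly at random, and t counts the draws that land among the dn/2 edges of F. A quick check (our own illustration; the small parameter values are arbitrary and not from the paper) that these probabilities sum to 1:

```python
# The displayed probability is hypergeometric: out of N = C(n,2) possible
# edges, F occupies K = dn/2 of them, W draws w of them uniformly, and t is
# the number of draws inside F. Small arbitrary parameters for illustration.
from math import comb

n, d, w = 10, 4, 12
N = comb(n, 2)          # 45 possible edges
K = d * n // 2          # 20 edges of F

def p_intersection(t):
    return comb(K, t) * comb(N - K, w - t) / comb(N, w)

total = sum(p_intersection(t) for t in range(min(K, w) + 1))
assert abs(total - 1.0) < 1e-12   # Vandermonde: the probabilities sum to 1
```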
We have therefore

P((F, W) is ℓ0-bad, F ∪ W is not ℓ0-pathological | |F ∩ W| = t)
= P((F, W) is ℓ0-bad, F ∪ W is not ℓ0-pathological, |F ∩ W| = t) / P(|F ∩ W| = t)
≤ [ \binom{\binom{n}{2}}{w′ + dn/2} |{F ⊂ Z : F ∈ Fn, (F, Z) is ℓ0-bad}| \binom{dn/2}{t} ] / [ |Fn| \binom{\binom{n}{2}}{w} · \binom{dn/2}{t} \binom{\binom{n}{2} − dn/2}{w′} / \binom{\binom{n}{2}}{w} ] ≤ 1/n,

where Z is a not ℓ0-pathological hyperedge from \binom{\binom{[n]}{2}}{w′ + dn/2} with maximum possible value of |{F ⊂ Z : F ∈ Fn, (F, Z) is ℓ0-bad}|.
Fix F ∈ Fn and a set B ⊂ F of size t. Let W′ be chosen uniformly at random from \binom{\binom{[n]}{2} \ F}{w′}. Note that if F ∪ W′ is ℓ0-pathological, then there are at least M subgraphs from Fn in F ∪ W′. On the other hand, if (F, W′) is ℓ0-bad, then, for every F′ ∈ Fn such that F′ ⊂ F ∪ W′,

|F′ ∩ F| = |F′ \ W′| ≥ ℓ0 + 1.

Let X count the number of F′ ∈ Fn such that F′ ⊂ F ∪ W′ and |F′ ∩ F| ≥ ℓ0 + 1. We get that the event "(F, W′) is ℓ0-bad and F ∪ W′ is ℓ0-pathological" implies X ≥ M. Then

P((F, W) is ℓ0-bad, F ∪ W is ℓ0-pathological | |F ∩ W| = t)
= P((F, W) is ℓ0-bad, F ∪ W is ℓ0-pathological | F = F, F ∩ W = B)
= P((F, W′) is ℓ0-bad, F ∪ W′ is ℓ0-pathological) ≤ P(X ≥ M) ≤ EX/M.
For ℓ ≥ ℓ0 + 1, let πℓ := P(|F ∩ F| = ℓ). Then

EX = Σ_{ℓ ≥ ℓ0+1} |Fn| πℓ \binom{w′}{dn/2 − ℓ} / \binom{\binom{n}{2} − dn/2}{dn/2 − ℓ}.   (1)
We have

EX/M = Σ_{ℓ ≥ ℓ0+1} |Fn| (πℓ/M) \binom{w′}{dn/2 − ℓ} / \binom{\binom{n}{2} − dn/2}{dn/2 − ℓ}
= n Σ_{ℓ ≥ ℓ0+1} πℓ [ \binom{w′}{dn/2 − ℓ} / \binom{\binom{n}{2} − dn/2}{dn/2 − ℓ} ] / [ \binom{\binom{n}{2} − dn/2}{w′} / \binom{\binom{n}{2}}{w′ + dn/2} ].   (2)
Note that

[ \binom{w′}{dn/2 − ℓ} / \binom{\binom{n}{2} − dn/2}{dn/2 − ℓ} ] / [ \binom{\binom{n}{2} − dn/2}{w′} / \binom{\binom{n}{2}}{w′ + dn/2} ]
∼ [ w′^{2w′} (\binom{n}{2} − dn + ℓ)^{\binom{n}{2} − dn + ℓ} \binom{n}{2}^{\binom{n}{2}} ] / [ (w′ − dn/2 + ℓ)^{w′ − dn/2 + ℓ} (w′ + dn/2)^{w′ + dn/2} (\binom{n}{2} − dn/2)^{2\binom{n}{2} − dn} ]
< e^{−(dn/2 − ℓ)²/(2w) − (dn/2)²/(2w) + O(1)} ( (n/((1 + ε)e))^{2/d} )^{ℓ}.   (3)
In Section 7, we prove the following.

Lemma 3. If one of the following two conditions holds:

• ℓ0 = ⌈ (d²/(2 ln(1+ε/2))) n^{1−(∆−d)/d} ⌉, or
• Fn is good and ℓ0 = 0,

then

EX/M ≤ n Σ_{ℓ ≥ ℓ0+1} πℓ e^{−(dn/2 − ℓ)²/(2w) − (dn/2)²/(2w)} ( (n/((1 + ε)e))^{2/d} )^{ℓ} = o(1/n).   (4)
4 Proofs of Theorems 1, 2

Lemma 3 implies Theorem 2 immediately.

It remains to prove Theorem 1 for d ≥ 5. Let p > (1 + ε)d (e/n)^{2/d}. We use the first assertion of Lemma 3 for that. Consider d independent copies G1, . . . , Gd of G(n, p′), p′ = (1 + ε)(e/n)^{2/d}. For every F ∈ Fn, consider a minimum possible R = R(F) ⊂ F such that R ∪ G1 contains a graph from Fn. By Lemma 3, whp

|R| ≤ (d²/(2 ln(1+ε/2))) n^{1−(∆−d)/d}.

Let Σ be the set of all F ∈ Fn such that |R(F)| ≤ (d²/(2 ln(1+ε/2))) n^{1−(∆−d)/d}. We have that E|Σ| = (1 − o(1))|Fn|. By Markov's inequality,

P(|Σ| ≤ |Fn|/2) = P(|Fn| − |Σ| ≥ |Fn|/2) ≤ 2(|Fn| − E|Σ|)/|Fn| → 0.
Let R = {R(F) : F ∈ Σ} be a multiset, i.e. |R| = |Σ|. We may assume that all sets R ∈ R have equal cardinality exactly ℓ1 := ⌈ (d²/(2 ln(1+ε/2))) n^{1−(∆−d)/d} ⌉. We then apply the same proof (as in Section 3) but for R instead of Fn.

Let ℓ0 = (1/2) (d²/ln(1+ε/2))² n^{1−2(∆−d)/d}. Let us call a pair (R ∈ R, W ∈ \binom{\binom{[n]}{2}}{w}) bad if, for every ℓ0-subset L ⊂ R, we have that L ⊔ [W \ R] does not contain a graph from R. For a fixed size of intersection t, a set Z ∈ \binom{\binom{[n]}{2}}{w − t + ℓ1} is pathological if

|{R ⊂ Z : R ∈ R, (R, Z) is bad}| > (1/n) |R| \binom{\binom{n}{2} − ℓ1}{w − t} / \binom{\binom{n}{2}}{w − t + ℓ1} =: M.
In order to show that, for a fixed R ∈ R, whp in R ∪ G2 there exists a subset R′ ∈ R such that |R′ \ G2| ≤ ℓ0, it is sufficient to prove an analogue of the first assertion of Lemma 3: let

• W′ be chosen uniformly at random from \binom{\binom{[n]}{2} \ R}{w − t};
• X be the number of R′ ∈ R such that R′ ⊂ R ∪ W′ and |R′ ∩ R| ≥ ℓ0 + 1;

then EX/M = o(1/n). Note that an analogue of the first inequality in (4) holds true with dn/2 replaced by ℓ1. Note that R has at most 2ℓ1 vertices. Defining p(ℓ, x, c) in the same way as in Section 5 and applying Claim 6, we get

p(ℓ, x, c) ≤ max_a α_{2ℓ1}(a, ℓ, x, c) β(a, ℓ, x, c) / |R|
≤ [ \binom{2ℓ1}{c} [(n − x + c)! + O(1)] / |R| ] × max_a \binom{x − 2c}{c − a} \binom{c}{a} \binom{σ̃ + c}{c − a} (d/2)^a (4d/3)^{c−a} 2^{2d(d+5)σ̃} max_{o ≤ (d+4)σ̃} \binom{x}{o}².
Note that this bound differs from (8) only in the first binomial coefficient, with n replaced by 2ℓ1. Therefore, applying the same arguments as in Section 7.1, we get that, for every ℓ > ℓ0,

πℓ = Σ_{x,c} p(x, ℓ, c) ≤ n² (e/n)^{2ℓ/d} (1 + δ)^ℓ e^{(d²/2)(d²/(2 ln(1+ε/2))) n^{1−2(∆−d)/d}}.
Therefore the analogues of (1), (2) and (3) imply that

EX/M ≤ Σ_{ℓ > ℓ0} n³ ((1 + δ)/(1 + ε))^ℓ e^{(d²/2)(d²/(2 ln(1+ε/2))) n^{1−2(∆−d)/d}} = o(1/n).
Applying repeatedly the whole argument d/(∆ − d) − 1 ≤ d − 1 times, we arrive at fragments of graphs from Fn of sizes at most

ℓ_{d−1} = ⌈ (1/2) (d²/ln(1+ε/2))^{d−1} n^{(∆−d)/d} ⌉.
Defining R (whp |R| ≥ |Fn|/2^{d−1}) in a usual way as the multiset of fragments of size exactly ℓ_{d−1}, letting

M := (1/n) |R| \binom{\binom{n}{2} − ℓ_{d−1}}{w − t} / \binom{\binom{n}{2}}{w − t + ℓ_{d−1}},

considering a fixed fragment R, a uniformly chosen W′ ∈ \binom{\binom{[n]}{2} \ R}{w − t}, and defining X as the number of R′ ∈ R such that R′ ⊂ R ∪ W′ and |R′ ∩ R| ≥ 1, it remains to show that EX/M = o(1/n). We then consider p(ℓ, x, c) and apply Claim 6:
p(ℓ, x, c) ≤ max_a α_{2ℓ_{d−1}}(a, ℓ, x, c) β(a, ℓ, x, c) / |R|
≤ [ \binom{2ℓ_{d−1}}{c} [(n − x + c)! + O(1)] / |R| ] × max_a \binom{x − 2c}{c − a} \binom{c}{a} \binom{σ̃ + c}{c − a} (d/2)^a (4d/3)^{c−a} 2^{2d(d+5)σ̃} max_{o ≤ (d+4)σ̃} \binom{x}{o}².
In the same way as in Section 7.1, the only non-trivial case is σ < ε′x, x < ε′n, where 0 < ε′ ≪ δ is small enough (otherwise, p(ℓ, x, c) is even smaller). In this case, for a large enough constant C = C(d),

p(ℓ, x, c) ≤ 2^{d−1} \binom{2ℓ_{d−1}}{c} [ e^{2ℓ/d + (x−c)²/n + o(n^{2/d})} / n^{2ℓ/d + (∆−d)c/d} ] (1 + δ/2)^ℓ ( (d²/2) e^{2/d − 5/6} )^c
≤ 2^{d−1} ( (d²/ln(1+ε/2))^{d−1} e/c )^c [ e^{2ℓ/d + (x−c)²/n + o(n^{2/d})} / n^{2ℓ/d} ] (1 + δ/2)^ℓ ( (d²/2) e^{2/d − 5/6} )^c
≤ C [ e^{2ℓ/d + o(n^{2/d})} / n^{2ℓ/d} ] (1 + δ)^ℓ.
Finally, for every ℓ ≥ 1,

πℓ = Σ_{x,c} p(x, ℓ, c) ≤ n² C [ e^{2ℓ/d + o(n^{2/d})} / n^{2ℓ/d} ] (1 + δ)^ℓ.

Therefore the analogues of (1), (2) and (3) imply that

EX/M ≤ Σ_{ℓ > ℓ0} n³ ((1 + δ)/(1 + ε))^ℓ e^{−Θ(n^{2/d})} = o(1/n).
5 Spread

We here follow the notations of Section 3: F ∈ Fn is fixed, F ∈ Fn is chosen uniformly at random, and πℓ = P(|F ∩ F| = ℓ).
Fix c ∈ [ℓ] and x ∈ [ 2ℓ/d + (∆/d)c, ℓ + c ]. Denote

σ := (d/2)x − (ℓ + (∆/2)c).

Let p(ℓ, x, c) be the probability that the intersection of F with F is a graph on x vertices with ℓ edges and c connected components (we think about graphs as sets of their edges, so there are no isolated vertices in F ∩ F). Let integers ℓ1, . . . , ℓc and x1, . . . , xc be chosen in a way such that

• 2ℓi/d + ∆/d ≤ xi ≤ ℓi + 1 for all i ∈ [c];
• Σ_{i=1}^{c} ℓi = ℓ and Σ_{i=1}^{c} xi = x.
Let p(ℓ1, . . . , ℓc, x1, . . . , xc) be the probability that the intersection of F with F consists of c connected components R1, . . . , Rc such that |V(Ri)| = xi, |E(Ri)| = ℓi. Clearly,

p(ℓ, x, c) = Σ_{ℓi,xi} p(ℓi, xi, i ∈ [c]),   (5)

where the summation is over all unordered choices of ℓ1, . . . , ℓc, x1, . . . , xc. Note that, in the case of the ordered choice, the number of ways to choose the values of xi ≥ 2 is at most \binom{x − c}{c}. The number of ways to choose the respective ℓi is at most \binom{σ + c}{c}.

We will use the following claim. Let F̃ be a subgraph of F on k vertices. We assume that either k = n and F̃ = F, or k ≪ n.
Claim 6. The number of ways to choose a subgraph R1 ⊔ . . . ⊔ Rc from F̃ without isolated vertices with x vertices, ℓ edges and c components, such that a is the number of Ri that are either an edge or a full vertex-complement to an edge (that comprises n − 2 vertices and dn/2 − (2d − 1) edges), is

α_k(a, ℓ, x, c) ≤ \binom{k}{c} \binom{x − 2c}{c − a} \binom{c}{a} \binom{σ̃ + c}{c − a} (d/2)^a γ^{c−a} max_{o ≤ (d+4)σ̃} \binom{x}{o} 2^{d(d+5)σ̃},   (6)

where

γ = (2d/3) I(d ≥ 5) + (4/3) I(d = 4) + (3/4) I(d = 3),   σ̃ = σ − a(d − 1 − ∆/2).

Given disjoint non-trivial connected R1, . . . , Rc ⊂ F̃ such that their union has x vertices and ℓ edges, and there are exactly a graphs Ri that are either an edge or a full vertex-complement to an edge, the number of ways to extend R1 ⊔ . . . ⊔ Rc to a graph from Fn is

β(a, ℓ, x, c) ≤ (n − x + c)! (d − 1)^a 2^{c−a} max_{o ≤ (d+4)σ̃} \binom{x}{o} 2^{d(d+4)σ̃} + O(1).   (7)
6 Proof of Claim 6

Fix ℓ1, . . . , ℓc and x1, . . . , xc. Let us compute the number of ways to choose connected vertex-disjoint subgraphs R1, . . . , Rc from F̃ with the respective numbers of edges and vertices. Let us call Ri dense if one of the following holds: 1) xi = 2 and ℓi = 1, 2) Ri is closed, 3) xi = n − 2 and ℓi = (d/2)n − (2d − 1), 4) xi = n − 1 and ℓi = (d/2)n − d, 5) xi = n and ℓi = (d/2)n.
For i ∈ [c], set σi = (d/2)xi − ℓi − ∆/2. Note that a is the number of i such that xi = 2 and ℓi = 1, or xi = n − 2 and ℓi = (d/2)n − (2d − 1). Let us call the respective Ri edge-components. The number of ways to choose i ∈ [c] such that Ri is an edge-component equals \binom{c}{a}, while the number of ways to choose the values of the remaining xi is at most \binom{x − 2c}{c − a}. The number of ways to choose the respective ℓi is at most \binom{σ̃ + c}{c − a}.
We first choose dense graphs. If c = 1, x1 = n − 1 and ℓ1 = (d/2)n − d, then there are exactly n ways to choose R1. If c = 1, x1 = n and ℓ1 = (d/2)n, there is only one way to choose R1. Otherwise, we first choose edge-components Ri: for j = 1, 2, . . . , a, the jth edge is chosen out of the set of kj remaining vertices in at most dkj/2 ways, and then k_{j+1} ≤ kj − 2. After that, we choose all the remaining dense graphs from R1, . . . , Rc. Note that the remaining dense graphs from R1, . . . , Rc are closed. Assume that we want to choose a closed Rj, and kj is the number of remaining vertices. Then by Claims 3, 4, and 5, the number of ways to choose Rj is at most γkj. After that, kj − |V(Rj)| ≤ kj − 3 vertices remain. Assuming that c − c̃ is the number of dense Rj, we get that there are at most (d/2)^a γ^{c−c̃−a} n!/(n − (c − c̃))! ways to choose dense subgraphs.
Without loss of generality, we assume that it remains to choose R1, . . . , R_c̃. Note that the component Ri might have at most ∆ + 2σi free vertices. For every i = 1, 2, . . . , c̃, we expose Ri in F̃ in the following way. Assume that F′ is the current graph (obtained by removing all the chosen subgraphs R1, . . . , R_{i−1} and R_{c̃+1}, . . . , Rc) on k′ vertices:

1. choose the number of free vertices oi ≤ d + 2 + 2σi;
2. choose the iterations of the below algorithm (out of the total number of iterations xi) that produce a free vertex of Ri in \binom{xi}{oi} ways;
3. choose a vertex w in F′ which is minimum in Ri (here we mean the ordering of the vertices of Ri induced by the ordering of the vertices of F′) in at most k′ ways, and activate it;
4. at every step, choose the minimum vertex (in the ordering of the vertices from F′) in the set of active vertices:
   • if it should be free (in accordance with the above choice), then add to Ri some of its neighbours (in at most 2^d ways), deactivate it and activate all its chosen neighbours,
   • if it should not be free, then add to Ri all its neighbours, deactivate the vertex and activate all its neighbours.
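Steps 1–4 describe a canonical frontier exposure of a connected subgraph: activate a root, always process the minimum active vertex, and branch over which of its neighbours enter the subgraph. A minimal sketch of this exploration pattern (our own simplification, not the paper's exact procedure: it merely enumerates connected vertex sets containing a root, without the free/non-free bookkeeping; `connected_sets` and the K4 example are ours):

```python
# Sketch (ours) of the frontier exposure behind steps 1-4: grow a connected
# subgraph from an activated root; at each step take the minimum active
# (frontier) vertex and branch on whether it joins the subgraph, activating
# its fresh neighbours when it does.

def connected_sets(adj, root, size):
    """All connected vertex sets of the given size containing `root`."""
    results = []

    def grow(current, frontier, excluded):
        if len(current) == size:
            results.append(frozenset(current))
            return
        if not frontier:
            return
        v = min(frontier)                     # minimum active vertex (step 4)
        rest = frontier - {v}
        grow(current, rest, excluded | {v})   # branch: v is never added
        grow(current | {v},                   # branch: v is added; activate
             rest | (adj[v] - current - excluded - {v}),  # fresh neighbours
             excluded)

    grow({root}, set(adj[root]), set())
    return results

# K4: every vertex adjacent to every other; the connected 3-sets containing 0
# are exactly {0,1,2}, {0,1,3}, {0,2,3}.
adj = {v: set(range(4)) - {v} for v in range(4)}
sets3 = connected_sets(adj, 0, 3)
assert len(sets3) == 3
```

The `excluded` set makes each vertex set appear exactly once, mirroring how the deterministic minimum-vertex rule in step 4 turns the choices in steps 1–2 into an upper bound on the number of exposed subgraphs.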
We get that the number of ways to choose Ri is at most

k′ (d + 2 + 2σi) max_{oi ≤ d+2+2σi} \binom{xi}{oi} 2^{d·oi}.

Eventually we get that the number of ordered choices of components with parameters ℓi, xi, i ∈ [c], in F̃ is at most

c! \binom{k}{c} (d/2)^a γ^{c−c̃−a} Π_{i=1}^{c̃} (d + 2 + 2σi) max_{oi ≤ d+2+2σi} \binom{xi}{oi} 2^{d·oi}
≤ c! \binom{k}{c} (d/2)^a γ^{c−c̃−a} max_{o ≤ (d+4)σ̃} \binom{x}{o} 2^{d(d+5)σ̃}.

Note that this bound does not depend on the order of the choice of all these components R1, . . . , Rc, thus it implies (6).
Let us now fix connected subgraphs R1, . . . , Rc of F as above. Let us bound the number of ways to extend R1 ⊔ . . . ⊔ Rc to an F′ ∈ Fn. We construct such an extension in the following way.

First, let us consider the two special cases: 1) c = 1, x1 ≥ n − 2 and R1 is dense; 2) c = 2, x1 = 2, x2 = n − 2 and R1, R2 are dense. In the first case, if x1 = n − 2, then there are at most \binom{2(d−1)}{d−1} ways to construct F′ (the remaining 2 vertices should be adjacent — so we should only draw missing edges in constantly many ways); if x1 ≥ n − 1, then there is a unique way to construct F′. In the second case, there are also at most \binom{2(d−1)}{d−1} ways to draw F′.
We then forget the labels of the vertices from R1, . . . , Rc and assume (without loss of generality) that the desired F′ ∈ Fn is defined on [n] in a way such that every i ∈ {2, . . . , n} has a neighbour in [i − 1]; denote one such neighbour (chosen randomly) by ν(i) (note that the graph F′ is connected due to the restriction on edge boundaries). Let H be obtained from F by deleting all the edges that do not belong to R1 ⊔ . . . ⊔ Rc. Then let Z be the set of all components in H (together with the isolated vertices), i.e. |Z| = n − x + c. We should compute the number of ways to embed the elements of Z in F′ disjointly.

Let z1, . . . , z_{n−x+c} be an ordering of Z. At every step i = 1, . . . , n − x + c, consider the minimum vertex κi of F′ such that none of the embedded elements of Z in F′ contain this vertex. If zi ∉ {R1, . . . , Rc}, then we assign κi with zi and proceed with the next step. Otherwise, we distinguish between the following cases. We let zi = R1 without loss of generality.
First, we assume that R1 is dense. If |V(R1)| = 2, then there are at most d − 1 ways to choose the image of the edge R1, since the edge {κi, ν(κi)} is already 'occupied'. If 2 < |V(R1)| < n − 2, then due to Claim 2, there are at most 2 ways to choose a copy of R1 in F′ containing κi and not containing ν(κi), as desired.

Second, let R1 be not dense. Choose the iterations of the below algorithm (out of the total number of iterations xi) that produce a free vertex of Ri in \binom{xi}{oi} ways. Activate κi. At every step, choose the minimum vertex (in the ordering of the vertices from F′) in the set of active vertices:

• if it should be free (in accordance with the above choice), then add to the image of R1 under construction some of its neighbours (in at most 2^d ways), deactivate it and activate all its neighbours,
• if it should not be free, then add all its neighbours, deactivate the vertex and activate all its neighbours.
We get that the number of ways to construct the image of R1 is at most
|
1146 |
+
�xi
|
1147 |
+
oi
|
1148 |
+
�
|
1149 |
+
2doi.
Eventually we get that there are at most
$$(n-x+c)!\,(d-1)^{a}\,2^{c-\tilde c-a}\prod_{i=1}^{\tilde c}\max_{o_i \le d+2+2\sigma_i}\binom{x_i}{o_i}2^{d o_i} + O(1) \le (n-x+c)!\,(d-1)^{a}\,2^{c-\tilde c-a}\max_{o \le (d+4)\tilde\sigma}\binom{x}{o}2^{d(d+4)\tilde\sigma} + O(1)$$
ways to expose $F'$ as needed.
7 Proof of Lemma 3

Summing up, from (5), (6) and (7), we get that
$$p(\ell,x,c) \le \max_a \frac{\alpha_n(a,\ell,x,c)\,\beta(a,\ell,x,c)}{|F_n|} \le \frac{\binom{n}{c}\,[(n-x+c)!+O(1)]}{|F_n|} \times \max_a \binom{x-2c}{c-a}\binom{c}{a}\binom{\tilde\sigma+c}{c-a}\binom{d}{2}^{a}(2\gamma)^{c-a}\,2^{2d(d+5)\tilde\sigma}\max_{o\le(d+4)\tilde\sigma}\binom{x}{o}^{2}. \quad (8)$$
We let
$$\Upsilon := \max_a \binom{\tilde\sigma+c}{c-a}\,2^{2d(d+5)\tilde\sigma}\max_{o\le(d+4)\tilde\sigma}\binom{x}{o}^{2}.$$
Let us find the maximum value of $\varphi(x) = \binom{x-2c}{c-a} e^{\Delta c/d - x}$ as a function of $x$. Since
$$\frac{\varphi(x+1)}{\varphi(x)} = \frac{1}{e}\left(1 + \frac{c-a}{x+1-3c+a}\right),$$
we get that the maximum value is achieved at $x = 2c + (c-a)\frac{e}{e-1} + O(1)$. Therefore,
$$p(\ell,x,c) \le \binom{n}{c}\frac{e^{2\sigma/d + 2\ell/d + (x-c)^2/n + o(n^{2/d})}}{n^{2\ell/d + (\Delta-d)c/d + 2\sigma/d}}\,\Upsilon \times (2\gamma)^{c}\max_a \binom{c}{a}\left(\frac{d(d-1)}{4\gamma}\right)^{a}\binom{(c-a)\frac{e}{e-1}}{c-a}\,e^{-(2-\Delta/d)c - (c-a)\frac{e}{e-1}}.$$
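As a quick sanity check of the maximizer of $\varphi$ used above (this short computation is ours, not part of the source), setting the ratio $\varphi(x+1)/\varphi(x)$ equal to $1$ gives

```latex
\frac{1}{e}\left(1+\frac{c-a}{x+1-3c+a}\right)=1
\;\Longrightarrow\; x+1-3c+a=\frac{c-a}{e-1}
\;\Longrightarrow\; x = 2c+(c-a)\left(1+\frac{1}{e-1}\right)+O(1)
= 2c+(c-a)\,\frac{e}{e-1}+O(1),
```

in agreement with the stated location of the maximum.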
Since
$$\binom{c}{a}\left(\frac{d(d-1)}{4\gamma}\right)^{a}\binom{(c-a)\frac{e}{e-1}}{c-a}\,e^{-(c-a)\frac{e}{e-1}} \le \binom{c}{a}\left(\frac{d(d-1)}{4\gamma}\right)^{a}\left(\frac{2}{e}\right)^{\frac{e}{e-1}(c-a)} \le \left(\frac{d(d-1)}{4\gamma} + \left(\frac{2}{e}\right)^{e/(e-1)}\right)^{c},$$
we eventually get
$$p(\ell,x,c) \le \binom{n}{c}\frac{e^{2\sigma/d+2\ell/d+(x-c)^2/n+o(n^{2/d})}}{n^{2\ell/d+(\Delta-d)c/d+2\sigma/d}}\,\Upsilon\left(\frac{4d}{3}e^{-(2-\Delta/d)}\left(\frac{3(d-1)}{8} + \left(\frac{2}{e}\right)^{e/(e-1)}\right)\right)^{c} \le \binom{n}{c}\frac{e^{2\sigma/d+2\ell/d+(x-c)^2/n+o(n^{2/d})}}{n^{2\ell/d+(\Delta-d)c/d+2\sigma/d}}\,\Upsilon\left(\frac{d^{2}}{2}e^{2/d-5/6}\right)^{c} \quad \text{for all } d \ge 5;$$
$$p(\ell,x,c) \le \binom{n}{c}\frac{e^{\sigma/2+\ell/2+(x-c)^2/n+o(\sqrt{n})}}{n^{\ell/2+c/2+\sigma/2}}\,\Upsilon\left(\frac{8}{3\sqrt{e}}\left(\frac{9}{4} + \left(\frac{2}{e}\right)^{e/(e-1)}\right)\right)^{c} \le \binom{n}{c}\frac{e^{\sigma/2+\ell/2+(x-c)^2/n+o(\sqrt{n})}}{n^{\ell/2+c/2+\sigma/2}}\,\Upsilon\left(\frac{7.8}{\sqrt{e}}\right)^{c} \quad \text{for } d = 4;$$
$$p(\ell,x,c) \le \binom{n}{c}\frac{e^{2\sigma/3+2\ell/3+(x-c)^2/n+o(n^{2/3})}}{n^{2\ell/3+c/3+2\sigma/3}}\,\Upsilon\left(\frac{3}{2e^{2/3}}\left(2 + \left(\frac{2}{e}\right)^{e/(e-1)}\right)\right)^{c} \le \binom{n}{c}\frac{e^{2\sigma/3+2\ell/3+(x-c)^2/n+o(n^{2/3})}}{n^{2\ell/3+c/3+2\sigma/3}}\,\Upsilon\left(\frac{4}{e^{2/3}}\right)^{c} \quad \text{for } d = 3.$$
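The numerical constants $7.8$ and $4$ in the last two displays can be verified directly; the following short script is our own numeric check, not part of the proof:

```python
import math

# t = (2/e)^(e/(e-1)), the constant appearing in all three displays
t = (2 / math.e) ** (math.e / (math.e - 1))

# d = 4: the bracket (8/3) * (9/4 + t) should be at most 7.8
d4 = (8 / 3) * (9 / 4 + t)
print(d4)  # about 7.64, so the stated bound 7.8 holds with room to spare

# d = 3: the bracket (3/2) * (2 + t) should be at most 4
d3 = (3 / 2) * (2 + t)
print(d3)  # about 3.92, so the stated bound 4 holds
```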
From now on, we separately prove the three assertions of Lemma 3.

7.1 d ≥ 5: existence of a fragment

First we assume that $d \ge 5$, $\ell_0 = \frac{d^2}{2\ln(1+\varepsilon/2)}n^{1-(\Delta-d)/d}$, $\ell > \ell_0$. Let $\delta > 0$ be small enough. Choose $\varepsilon' = \varepsilon'(\delta) > 0$ small enough in a way such that $\sigma < \varepsilon' x$ implies $\Upsilon \le (1+\delta/3)^{\ell}$ and $c < \varepsilon' x$ implies $\binom{x-2c}{c-a}\binom{c}{a}\binom{d}{2}^{a}(2\gamma)^{c-a} \le (1+\delta/3)^{\ell}$ for all $a$ as well.
Assume that $\sigma < \varepsilon' x$. In this case,
$$p(\ell,x,c) \le \binom{n}{c}\frac{e^{2\ell/d+(x-c)^2/n+o(n^{2/d})}}{n^{2\ell/d+(\Delta-d)c/d}}\,(1+\delta)^{\ell}\left(\frac{d^{2}}{2}e^{2/d-5/6}\right)^{c}.$$
If $x > \varepsilon' n$ and $c > \varepsilon' x$, then $p(\ell,x,c) = \left(\frac{e}{n}\right)^{2\ell/d}\exp(-\Theta(n\ln n))$. If $x > \varepsilon' n$ and $c \le \varepsilon' x$, then
$$p(\ell,x,c) \le \frac{\binom{n}{c}}{n^{(\Delta-d)c/d}}\left(\frac{e}{n}\right)^{2\ell/d}(1+\delta)^{\ell} \le e^{n^{1-(\Delta-d)/d}}\left(\frac{e}{n}\right)^{2\ell/d}(1+\delta)^{\ell}.$$
Finally, let $x \le \varepsilon' n$. The maximum value of $\binom{n}{c}(d^{2}e^{2/d-5/6}/2)^{c}\,n^{-(\Delta-d)c/d}$ is achieved when $c = \frac{d^{2}}{2}e^{2/d-5/6}\,n^{1-(\Delta-d)/d}+O(1)$ and is at most
$$\left(\frac{en}{c}\right)^{c}(d^{2}e^{2/d-5/6}/2)^{c}\,n^{-(\Delta-d)c/d} \le \exp\left(\frac{d^{2}}{2}e^{2/d-5/6}\,n^{1-(\Delta-d)/d}+O(1)\right),$$
implying that
$$p(\ell,x,c) \le \left(\frac{e}{n}\right)^{2\ell/d}(1+\delta)^{\ell}\,e^{\frac{d^{2}}{2}n^{1-(\Delta-d)/d}}.$$
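The maximization step used here, and again below for $d = 4$ and $d = 3$, is the elementary fact that $(K/c)^{c}$ is maximized at $c = K/e$, with maximum $e^{K/e}$. A small numeric illustration (ours, with an arbitrary value of $K$):

```python
import math

def log_f(c, K):
    # log of (K/c)^c, i.e. c * ln(K/c)
    return c * math.log(K / c)

K = 20.0
grid = [i / 1000 for i in range(1, 20000)]  # c ranging over (0, 20)
best_c = max(grid, key=lambda c: log_f(c, K))
# best_c is close to K/e, and the maximal log-value equals K/e up to grid error
print(best_c, log_f(best_c, K))
```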
Let $\sigma \ge \varepsilon' x$. Then, for some large enough $C > 0$,
$$p(\ell,x,c) \le \frac{\binom{n}{c}\,[(n-x+c)!+O(1)]}{|F_n|}\,2^{3x+\sigma+2d(d+5)\sigma}\binom{d}{2}^{a}(2\gamma)^{c-a} \le n^{-2\ell/d}\,C^{x}\,\frac{2^{2d(d+5)\sigma}}{n^{2\sigma/d}} = o(n^{-2\ell/d}).$$
Summing up, for every $\ell > \ell_0$,
$$\pi_\ell = \sum_{x,c} p(x,\ell,c) \le n^{2}\left(\frac{e}{n}\right)^{2\ell/d}(1+\delta)^{\ell}\,e^{\frac{d^{2}}{2}n^{1-(\Delta-d)/d}}.$$
Therefore (1), (2) and (3) imply that
$$\mathbb{E}X_M \le \sum_{\ell>\ell_0} n^{3}\left(\frac{1+\delta}{1+\varepsilon}\right)^{\ell} e^{\frac{d^{2}}{2}n^{1-(\Delta-d)/d}} = o\left(\frac{1}{n}\right).$$

7.2 d ≥ 5: sharp threshold for a good sequence
Now, we assume that $d \ge 5$, $F_n$ is good, $\ell_0 = 0$ and $\ell \ge 1$. In this case $\tilde\sigma \ge (c-a)(d-\Delta/2)$. Therefore,
$$\frac{(\Delta-d)c}{d} + \frac{2\sigma}{d} \ge c\left(1-\frac{2}{d}\right) + \frac{\tilde\sigma}{d-\Delta/2}.$$
In the same way as above, if $x > \varepsilon' n$ and $c > \varepsilon' x$, then $p(\ell,x,c) = \left(\frac{e}{n}\right)^{2\ell/d}\exp(-\Theta(n\ln n))$. If $x > \varepsilon' n$ and $c \le \varepsilon' x$, then $p(\ell,x,c) \le e^{n^{1-(\Delta-d)/d}}\left(\frac{e}{n}\right)^{2\ell/d}(1+\delta)^{\ell}$.
Finally, let $x \le \varepsilon' n$. Assume first that $c - a \le \varepsilon' x$. Then
$$p(\ell,x,c) \le \frac{\binom{n}{c}\,[(n-x+c)!+O(1)]}{|F_n|}\binom{\tilde\sigma+c}{c-a}\binom{d}{2}^{c}(1+\delta/2)^{\ell}\,2^{2d(d+5)\tilde\sigma}\max_{o\le(d+4)\tilde\sigma}\binom{x}{o}^{2}$$
$$\le \binom{n}{c}\frac{e^{(x-c)^2/n+o(n^{2/d})}}{n^{2\ell/d+(\Delta-d)c/d+2\sigma/d}}\binom{\tilde\sigma+c}{c-a}\binom{d}{2}^{c}(1+\delta/2)^{\ell}\,2^{2d(d+5)\tilde\sigma}\max_{o\le(d+4)\tilde\sigma}\binom{x}{o}^{2}$$
$$\le \left(\frac{e}{n}\right)^{2\ell/d} e^{\binom{d}{2}(n/e)^{2/d}}\,\frac{e^{(x-c)^2/n+o(n^{2/d})}}{n^{\tilde\sigma/(d-\Delta/2)}}\binom{\tilde\sigma+c}{c-a}(1+\delta/2)^{\ell}\,2^{2d(d+5)\tilde\sigma}\max_{o\le(d+4)\tilde\sigma}\binom{x}{o}^{2}$$
$$\le \left(\frac{e}{n}\right)^{2\ell/d} e^{\binom{d}{2}(n/e)^{2/d}+o(n^{2/d})}(1+\delta)^{\ell}.$$
Let $c - a > \varepsilon' x$. Then $\tilde\sigma > \varepsilon' x(d - \Delta/2)$. Therefore,
$$p(\ell,x,c) \le \frac{\binom{n}{c}\,[(n-x+c)!+O(1)]}{|F_n|}\,2^{x+\tilde\sigma}\binom{d}{2}^{c}(2\gamma)^{c-a}\,2^{2d(d+5)\tilde\sigma}\max_{o\le(d+4)\tilde\sigma}\binom{x}{o}^{2}$$
$$\le \binom{n}{c}\frac{e^{(x-c)^2/n+o(n^{2/d})}}{n^{2\ell/d+c(1-2/d)+\tilde\sigma/(d-\Delta/2)}}\,2^{3x+\tilde\sigma}\binom{d}{2}^{c}(2\gamma)^{c-a}\,2^{2d(d+5)\tilde\sigma}$$
$$\le \left(\frac{1}{n}\right)^{2\ell/d}\frac{\binom{n}{c}}{n^{c(1-2/d)}}\,(1+\delta)^{\ell} \le \left(\frac{1}{n}\right)^{2\ell/d} e^{n^{2/d}}(1+\delta)^{\ell}.$$
Summing up, for every $\ell > 0$,
$$\pi_\ell = \sum_{x,c} p(x,\ell,c) \le n^{2}\left(\frac{e}{n}\right)^{2\ell/d} e^{\binom{d}{2}(n/e)^{2/d}+o(n^{2/d})}(1+\delta)^{\ell}.$$
Therefore (1), (2) and (3) imply that (assuming that $\varepsilon < 1/d$)
$$\mathbb{E}X_M \le \sum_{\ell>0} n^{3}\left(\frac{1+\delta}{1+\varepsilon}\right)^{\ell} e^{-\frac{d(1-d\varepsilon)}{2}(n/e)^{2/d}} = o\left(\frac{1}{n}\right). \quad (9)$$

7.3 d = 4
We now consider $d = 4$. The maximum value of $\binom{n}{c}(7.8/\sqrt{e})^{c} n^{-c/2}$ is achieved when $c = \frac{7.8}{\sqrt{e}}\sqrt{n} + O(1)$ and is at most
$$\left(\frac{en}{c}\right)^{c}(7.8/\sqrt{e})^{c} n^{-c/2} \le \left(\frac{7.8\sqrt{e}\sqrt{n}}{c}\right)^{c} \le e^{\frac{7.8}{\sqrt{e}}\sqrt{n}+O(1)},$$
implying that
$$p(\ell,x,c) \le \frac{e^{(x-c)^2/n+o(\sqrt{n})}}{n^{\ell/2+\sigma/2}}\,\Upsilon\,e^{\ell/2+\sigma/2}\,e^{\frac{7.8}{\sqrt{e}}\sqrt{n}}.$$
Let us assume that $\sigma < \varepsilon'\ell$. If $1 \le \ell \le \varepsilon' n$, then
$$p(\ell,c,x) \le (1+\delta)^{\ell}\,e^{\left(\frac{7.8}{\sqrt{e}}+o(1)\right)\sqrt{n}}\,(e/n)^{\ell/2}.$$
If $\ell > \varepsilon' n$ and $c > \varepsilon' x$, then $p(\ell,c,x) \le n^{-\ell/2}\exp(-\Omega(n\ln n))$. If $c \le \varepsilon' x$, then $\binom{x-2c}{c-a} \le (1+\delta/3)^{\ell}$. Therefore,
$$p(\ell,x,c) \le \frac{\binom{n}{c}\,[(n-x+c)!+O(1)]}{|F_n|}\,(1+2\delta/3)^{\ell}\,32^{c} \le \binom{n}{c}\frac{e^{(x-c)^2/n+o(\sqrt{n})}}{n^{\ell/2+c/2+\sigma/2}}\,(1+2\delta/3)^{\ell}\,32^{c}.$$
Since $\binom{n}{c}\,n^{-c/2}\,32^{c} \le e^{32\sqrt{n}+O(1)}$, we get
$$p(\ell,x,c) \le e^{32\sqrt{n}+o(\sqrt{n})}\,e^{(x-c)^2/n}\,(1+2\delta/3)^{\ell}\,n^{-\ell/2} \le (1+\delta)^{\ell}(e/n)^{\ell/2}.$$
Finally, let $\sigma \ge \varepsilon'\ell$. Note that this is possible only when $\ell$ is large enough. Since
$$\binom{c}{a}\binom{x-2c}{c-a}\binom{\sigma+c-a}{c-a}\,6^{a}\left(\frac{8}{3}\right)^{c-a}\binom{x}{o}^{2} \le 2^{3x+\sigma}\,3^{a}\left(\frac{8}{3}\right)^{c-a} \le 2^{3x+\sigma}\,3^{c},$$
we get that
$$p(\ell,x,c) \le \frac{\binom{n}{c}\,[(n-x+c)!+O(1)]}{|F_n|}\,2^{3x+73\sigma}\,3^{c} \le \binom{n}{c}\frac{e^{o(\sqrt{n})}}{n^{\ell/2+c/2+\sigma/2}}\,e^{(x-c)^2/n}\,2^{3x+73\sigma}\,3^{c} \le e^{\sqrt{n}+o(\sqrt{n})}\,n^{-\ell/2-\sigma/2}\,e^{x-c}\,2^{3x+73\sigma}\,3^{c} \le e^{\sqrt{n}+o(\sqrt{n})}\,n^{-\ell/2}.$$
Therefore,
$$\pi_\ell = \sum_{x,c} p(x,\ell,c) \le n^{2}(1+\delta)^{\ell}\,e^{\left(\frac{7.8}{\sqrt{e}}+o(1)\right)\sqrt{n}}\,(e/n)^{\ell/2}.$$
Then (1), (2) and (3) imply (9) as needed.

7.4 d = 3
It remains to consider $d = 3$. We only need to consider the case $\sigma < \varepsilon'\ell$, $1 \le \ell \le \varepsilon' n$. For all the other values of the parameters, the proof is exactly the same as for $d = 4$. The maximum value of $\binom{n}{c}(4/e^{2/3})^{c} n^{-c/3}$ is achieved when $c = \frac{4}{e^{2/3}}n^{2/3}+O(1)$ and is at most
$$\left(\frac{en}{c}\right)^{c}(4/e^{2/3})^{c} n^{-c/3} \le \left(\frac{4e^{1/3}n^{2/3}}{c}\right)^{c} \le e^{4(n/e)^{2/3}+O(1)},$$
implying that
$$p(\ell,x,c) \le \frac{e^{(x-c)^2/n+o(n^{2/3})}}{n^{2\ell/3+2\sigma/3}}\,\Upsilon\,e^{2\ell/3+2\sigma/3}\,e^{4(n/e)^{2/3}} \le (1+\delta)^{\ell}\,e^{(4/e^{2/3}+o(1))n^{2/3}}\,(e/n)^{2\ell/3}.$$
Therefore,
$$\pi_\ell = \sum_{x,c} p(x,\ell,c) \le n^{2}(1+\delta)^{\ell}\,e^{(4/e^{2/3}+o(1))n^{2/3}}\,(e/n)^{2\ell/3}.$$
Then (1), (2) and (3) imply (9) as needed.
Acknowledgements

This work originated when the author was a visitor at Tel Aviv University. The author is grateful to Wojciech Samotij for his kind hospitality during the visit and for helpful discussions. The author would like to thank Michael Krivelevich for helpful remarks and valuable comments on the paper.
References

[1] R. Alweiss, S. Lovett, K. Wu, J. Zhang, Improved bounds for the sunflower lemma, Ann. of Math. (2), 194:3 (2021) 795–815.
[2] B. Bollobás, The isoperimetric number of random regular graphs, European Journal of Combinatorics 9 (1988) 241–244.
[3] A. E. Díaz, Y. Person, Spanning F-cycles in random graphs, Preprint (2021) arXiv:2106.10023.
[4] M. Fischer, N. Škorić, A. Steger, M. Trujić, Triangle resilience of the square of a Hamilton cycle in random graphs, J. Comb. Theory, Ser. B 152 (2022) 171–220.
[5] A. Frieze, A note on spanning K_r-cycles in random graphs, AIMS Mathematics 5:5 (2020) 4849–4852.
[6] A. Frieze, Hamilton cycles in random graphs: a bibliography, Preprint, arXiv:1901.07139.
[7] S. Janson, T. Łuczak, A. Ruciński, Random graphs, J. Wiley & Sons Inc., 2000.
[8] J. Kahn, B. Narayanan, J. Park, The threshold for the square of a Hamilton cycle, Proc. Amer. Math. Soc. 149 (2021) 3201–3208.
[9] J. H. Kim, B. Sudakov, V. Vu, Small subgraphs of random regular graphs, Discrete Mathematics 307 (2007) 1961–1967.
[10] D. Kühn, D. Osthus, On Pósa's conjecture for random graphs, SIAM J. Discrete Math. 26:3 (2012) 1440–1457.
[11] B. D. McKay, N. C. Wormald, Automorphisms of random graphs with specified degrees, Combinatorica 4 (1984) 325–338.
[12] E. Mossel, J. Niles-Weed, N. Sun, I. Zadik, A second moment proof of the spread lemma, Preprint (2022) arXiv:2209.11347.
[13] R. Nenadov, N. Škorić, Powers of Hamilton cycles in random graphs and tight Hamilton cycles in random hypergraphs, Random Structures & Algorithms 54 (2019) 187–208.
[14] O. Riordan, Random cliques in random graphs and sharp thresholds for F-factors, Random Structures & Algorithms 61:4 (2022) 619–637.
[15] O. Riordan, Spanning subgraphs of random graphs, Combinatorics, Probability & Computing 9:2 (2000) 125–148.
[16] J. Park, H. T. Pham, A proof of the Kahn–Kalai conjecture, 2022, arXiv:2203.17207.
[17] J. Spencer, Threshold functions for extension statements, Journal of Combinatorial Theory Ser. A 53 (1990) 286–305.
[18] T. Tao, The sunflower lemma via Shannon entropy, Online post, 2020.
[19] N. C. Wormald, The asymptotic distribution of short cycles in random regular graphs, Journal of Combinatorial Theory Ser. B 31:2 (1981) 168–182.
4NE2T4oBgHgl3EQf6Qhs/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
4tAzT4oBgHgl3EQf9v5K/content/tmp_files/2301.01923v1.pdf.txt
ADDED
@@ -0,0 +1,696 @@
Two-dimensional Heisenberg models with materials-dependent superexchange interactions

Jia-Wen Li,1 Zhen Zhang,2 Jing-Yang You,3 Bo Gu,1,4,5,∗ and Gang Su1,4,5,6,†
1 Kavli Institute for Theoretical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
2 Key Laboratory of Multifunctional Nanomaterials and Smart Systems, Division of Advanced Materials, Suzhou Institute of Nano-Tech and Nano-Bionics, Chinese Academy of Sciences, Suzhou 215123, China
3 Department of Physics, National University of Singapore, Science Drive, Singapore 117551
4 CAS Center for Excellence in Topological Quantum Computation, University of Chinese Academy of Sciences, Beijing 100190, China
5 Physical Science Laboratory, Huairou National Comprehensive Science Center, Beijing 101400, China
6 School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China

The two-dimensional (2D) van der Waals ferromagnetic semiconductors, such as CrI3 and Cr2Ge2Te6, and the 2D ferromagnetic metals, such as Fe3GeTe2 and MnSe2, have been obtained in recent experiments and have attracted considerable attention. The superexchange interaction has been suggested to dominate the magnetic interactions in these 2D magnetic systems. In the usual theoretical studies, the expression of the 2D Heisenberg model is fixed by hand based on experience. Here, we propose a method to determine the expression of the 2D Heisenberg model by counting the possible superexchange paths with density functional theory (DFT) and Wannier function calculations. With this method, we obtain a 2D Heisenberg model with six different nearest-neighbor exchange coupling constants for the 2D ferromagnetic metal Cr3Te6, consistent with the distinct Cr sites in the crystal structure of Cr3Te6. The calculated Curie temperature Tc = 328 K is close to the Tc = 344 K of 2D Cr3Te6 reported in a recent experiment. In addition, we predict two stable 2D ferromagnetic semiconductors, Cr3O6 and Mn3O6, sharing the same crystal structure as Cr3Te6. Similar Heisenberg models are obtained for 2D Cr3O6 and Mn3O6, where the calculated Tc is 218 K and 208 K, respectively. Our method offers a general approach to determine the expression of Heisenberg models for these 2D magnetic semiconductors and metals, and builds a solid basis for further studies.
I. INTRODUCTION

Recently, the successful synthesis of two-dimensional (2D) van der Waals ferromagnetic semiconductors in experiments, such as CrI3 [1] and Cr2Ge2Te6 [2], has attracted extensive attention to 2D ferromagnetic materials. According to the Mermin-Wagner theorem [3], magnetic anisotropy is essential to produce long-range magnetic order in 2D systems. For the 2D magnetic semiconductors obtained in experiments, the Curie temperature Tc is still much lower than room temperature. For example, Tc = 45 K in CrI3 [1], 30 K in Cr2Ge2Te6 [2], 34 K in CrBr3 [4], 17 K in CrCl3 [5], 75 K in Cr2S3 [6, 7], etc. For applications, ferromagnetic semiconductors with Tc higher than room temperature are highly desired [8–11]. On the other hand, 2D van der Waals ferromagnetic metals with high Tc have been obtained in recent experiments. For example, Tc = 140 K in CrTe [12], 300 K in CrTe2 [13, 14], 344 K in Cr3Te6 [15], 160 K in Cr3Te4 [16], 280 K in CrSe [17], 300 K in Fe3GeTe2 [18, 19], 270 K in Fe4GeTe2 [20], 229 K in Fe5GeTe2 [21, 22], 300 K in MnSe2 [23], etc.

In these 2D van der Waals ferromagnetic materials, the superexchange interaction has been suggested to dominate the magnetic interactions. The superexchange interaction describes the indirect magnetic interaction between two magnetic cations mediated by the neighboring non-magnetic anions [24–26]. The superexchange interaction has been discussed in the 2D magnetic semiconductors. Based on the superexchange interaction, the strain-enhanced Tc in the 2D ferromagnetic semiconductor Cr2Ge2Se6 can be understood by the decreased energy difference between the d electrons of cation Cr atoms and the p electrons of anion Se atoms [27]. A similar superexchange picture was obtained in several 2D ferromagnetic semiconductors, including the great enhancement of Tc in bilayer heterostructures Cr2Ge2Te6/PtSe2 [28], the high Tc in the technetium-based semiconductors TcSiTe3, TcGeSe3 and TcGeTe3 [29], and the electric-field-enhanced Tc in monolayer MnBi2Te4 [30]. The superexchange interaction has also been discussed in the semiconductor heterostructure CrI3/MoTe2 [31], and in the 2D semiconductor Cr2Ge2Te6 with molecular adsorption [32].
In addition, the superexchange interaction has also been obtained in the 2D van der Waals ferromagnetic metals. By adding vacancies, the angles of the superexchange interaction paths of the 2D metals VSe2 and MnSe2 will change, thereby tuning the superexchange coupling strength [33]. It is found that biaxial strain changes the angle of superexchange paths in the 2D metal Fe3GeTe2, and affects Tc [34]. Under tensile strain, the ferromagnetism of the 2D magnetic metal CoB6 is enhanced, due to the competition between superexchange and direct exchange interactions [35].

arXiv:2301.01923v1 [cond-mat.mtrl-sci] 5 Jan 2023
It is important to determine the spin Hamiltonian for a magnetic material in order to theoretically study its magnetic properties, such as Tc. In the usual theoretical studies, the expression of the spin Hamiltonian needs to be fixed by hand according to experience. By the four-state method and density functional theory (DFT) calculations [36–38], the exchange coupling parameters of the spin Hamiltonian, such as the nearest neighbor, the next nearest neighbor, inter-layer, etc., can be obtained. Then the Tc can be estimated through Monte Carlo simulations [38]. With different spin Hamiltonians chosen by hand, sometimes different results are obtained in calculations. Is it possible to determine the spin Hamiltonian with the help of calculations rather than by experience?
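The Monte Carlo step mentioned above can be illustrated with a toy Metropolis simulation. The sketch below uses a 2D Ising model as a stand-in for the anisotropic Heisenberg models used in such studies, with an assumed coupling J = 1; it is not the actual simulation of this paper:

```python
import math
import random

def metropolis_ising(L=8, T=1.0, sweeps=300, seed=0):
    # 2D Ising model on an L x L torus with J = 1, Metropolis single-spin updates.
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]  # fully ordered start
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1
    return sum(map(sum, spins)) / (L * L)  # magnetization per site

# Well below the Ising critical temperature (about 2.27 J), the magnetization
# stays close to 1; scanning T and locating where |m| collapses gives a crude
# estimate of the critical temperature.
print(metropolis_ising(T=1.0))
```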
In this paper, we propose a method to establish the 2D Heisenberg models for the 2D van der Waals magnetic materials, when the superexchange interactions dominate. Through the DFT and Wannier function calculations, we can calculate the exchange coupling between any two magnetic cations by counting the possible superexchange paths. By this method, we obtain a 2D Heisenberg model with six different nearest-neighbor exchange coupling constants for the 2D van der Waals ferromagnetic metal Cr3Te6 [15], where the calculated Tc = 328 K is close to the Tc = 344 K reported in the experiment. In addition, based on the crystal structure of 2D Cr3Te6, we predict two 2D magnetic semiconductors Cr3O6 and Mn3O6 with Tc of 218 K and 208 K, and energy gaps of 0.99 eV and 0.75 eV, respectively.
II. COMPUTATIONAL METHODS

Our calculations were based on the DFT as implemented in the Vienna ab initio simulation package (VASP) [39]. The exchange-correlation potential is described with the Perdew-Burke-Ernzerhof (PBE) form of the generalized gradient approximation (GGA) [40]. The electron-ion potential is described by the projector-augmented wave (PAW) method [41]. We carried out the calculation of GGA + U with U = 3.2 eV, a reasonable U value for the 3d electrons of Cr in Cr3Te6 [15]. The band structures of 2D Cr3O6 and Mn3O6 were calculated with the HSE06 hybrid functional [42]. The plane-wave cutoff energy is set to 500 eV. Spin polarization is taken into account in structure optimization. To prevent interlayer interaction in the supercell of the 2D systems, a vacuum layer of 16 Å is included. The 5×9×1, 5×9×1 and 7×11×1 Monkhorst-Pack k-point meshes were used for the Brillouin zone (BZ) sampling for 2D Cr3O6, Cr3Te6 and Mn3O6, respectively [43]. The structures of 2D Cr3O6 and Mn3O6 were fully relaxed, where the convergence precision of energy and force were 10^-6 eV and 10^-3 eV/Å, respectively. The phonon spectra were obtained in a 3×3×1 supercell with the PHONOPY package [44]. The Wannier90 code was used to construct a tight-binding Hamiltonian [45, 46] to calculate the magnetic coupling constants. In the calculation of molecular dynamics, a 3×4×1 supercell (108 atoms) was built, and we took the NVT ensemble (constant-temperature, constant-volume ensemble) and maintained a temperature of 250 K with a step size of 3 fs and a total duration of 6 ps.
III. Method to determine the 2D Heisenberg model: an example of 2D Cr3Te6

A. Calculate exchange coupling J from superexchange paths

The crystal structure of 2D Cr3Te6 is shown in Fig. 1, where the space group is Pm (No. 6). In experiment, it is a ferromagnetic metal with a high Tc = 344 K [15]. To theoretically study its magnetic properties, we considered seven different magnetic configurations, including a ferromagnetic (FM), a ferrimagnetic (FIM), and five antiferromagnetic (AFM) configurations, as discussed in the Supplemental Materials [47]. The calculation results show that the magnetic ground state is ferromagnetic, consistent with the experimental results. Since the superexchange interaction has been suggested to dominate the magnetic interactions in these 2D van der Waals ferromagnetic semiconductors and metals, we study the superexchange interactions in 2D Cr3Te6.
The superexchange interaction can be reasonably described by a simple Cr-Te-Cr model [48], as shown in Fig. 2. There are two Cr atoms at sites i and j, and one Te atom at site k between the two Cr atoms. By the perturbation calculation, the superexchange coupling $J_{ij}$ between the two Cr atoms can be obtained as [48],
$$J_{ij} = \left(\frac{1}{E_{\uparrow\downarrow}^{2}} - \frac{1}{E_{\uparrow\uparrow}^{2}}\right)\sum_{k,p,d}|V_{ik}|^{2}J^{pd}_{kj} = \frac{1}{A}\sum_{k,p,d}|V_{ik}|^{2}J^{pd}_{kj}. \quad (1)$$
|
202 |
+
The indirect exchange coupling Jij consists of two processes. One is the direct exchange process between the d electrons of the Cr atom at site j and the p electrons of the Te atom at site k, represented by J^{pd}_{kj}. The other is the electron hopping process between the p electrons of the Te atom at site k and the d electrons of the Cr atom at site i, represented by |Vik|²/A. Vik is the hopping parameter between the d electrons of the Cr atom at site i and the p electrons of the Te atom at site k. Here, A = 1/(1/E²↑↓ − 1/E²↑↑), and it is treated as a parameter to be determined. E↑↑ and E↑↓ are the energies of two d electrons at the Cr atom at site i with parallel and antiparallel spins, respectively. The direct exchange coupling J^{pd}_{kj} can be expressed as [27–30]:
FIG. 1. Crystal structure of Cr3Te6. (a) Top view. (b) Side view.
$$J^{pd}_{kj} = \frac{2|V_{kj}|^{2}}{|E^{p}_{k} - E^{d}_{j}|}. \tag{2}$$
Vkj is the hopping parameter between the p electrons of the Te atom at site k and the d electrons of the Cr atom at site j. E^p_k is the energy of the p electrons of the Te atom at site k, and E^d_j is the energy of the d electrons of the Cr atom at site j.
FIG. 2. Schematic picture of the superexchange interaction in a Cr-Te-Cr model. There are two processes: one is the direct exchange process between Crj and Tek, denoted as J^{pd}_{kj}, and the other is the electron hopping between Tek and Cri, denoted as |Vik|²/A. See text for details.
By the DFT and Wannier function calculations, the parameters Vik, Vkj, E^p_k, and E^d_j in Eqs. (1) and (2) can be calculated. The JijA can then be obtained by summing over all the possible Te sites k, p orbitals of the Te atoms, and d orbitals of the Cr atoms.
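The accumulation described above (summing |Vik|² J^{pd}_{kj} over all Te sites and orbital pairs, with J^{pd}_{kj} from Eq. (2)) can be sketched as follows. This is a minimal illustration only: the numerical hopping and energy values below are placeholders, not the Wannier parameters of the paper, which come from the actual DFT calculation.

```python
# Sketch of accumulating J_ij * A from Eqs. (1) and (2).
# The hopping/energy values used in the example are hypothetical placeholders.

def jij_times_a(hoppings):
    """Sum |V_ik|^2 * J^{pd}_{kj} over all (k, p, d) combinations.

    `hoppings` is a list of tuples (V_ik, V_kj, E_p_k, E_d_j), one tuple per
    combination of Te site k, Te p orbital, and Cr d orbital (all in meV).
    Returns J_ij * A in meV^3.
    """
    total = 0.0
    for v_ik, v_kj, e_p, e_d in hoppings:
        j_pd_kj = 2.0 * abs(v_kj) ** 2 / abs(e_p - e_d)  # Eq. (2)
        total += abs(v_ik) ** 2 * j_pd_kj                # summand of Eq. (1)
    return total

# Hypothetical two-path example (placeholder values, not from the paper):
print(jij_times_a([(100.0, 120.0, -2000.0, -500.0),
                   (80.0, 90.0, -1800.0, -500.0)]))
```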
From the calculated results in Table I, it is suggested that there are six different nearest-neighbor couplings, denoted as J11, J22, J33, J12, J13, and J23, as shown in Fig. 3(b). Accordingly, there are three kinds of Cr atoms, denoted as Cr1, Cr2, and Cr3. Based on the results in Table I, the effective spin Hamiltonian can be written as
$$\begin{aligned}
H = &\, J_{11}\sum_{n}\vec{S}_{1n}\cdot\vec{S}_{1n} + J_{22}\sum_{n}\vec{S}_{2n}\cdot\vec{S}_{2n} + J_{33}\sum_{n}\vec{S}_{3n}\cdot\vec{S}_{3n} \\
&+ J_{12}\sum_{n}\vec{S}_{1n}\cdot\vec{S}_{2n} + J_{13}\sum_{n}\vec{S}_{1n}\cdot\vec{S}_{3n} + J_{23}\sum_{n}\vec{S}_{2n}\cdot\vec{S}_{3n} \\
&+ D\sum_{n}\left(S^{2}_{1nz} + S^{2}_{2nz} + S^{2}_{3nz}\right),
\end{aligned} \tag{3}$$
where Jij means the magnetic coupling between Cri and Crj, as indicated in Fig. 3(b). D represents the magnetic anisotropy energy (MAE) of Cr3Te6.
B. Determining the parameters D and A
The single-ion magnetic anisotropy parameter DS² can be obtained by DS² = (E⊥ − E∥)/6, where E⊥ and E∥ are the energies of Cr3Te6 with out-of-plane and in-plane spin polarizations in the FM state, respectively. We obtain DS² = −0.14 meV/Cr for 2D Cr3Te6, in agreement with the value of −0.13 meV/Cr reported in the previous study of Cr3Te6 [15].
The parameter A can be calculated in the following way. Considering one FM and one AFM configuration, the total energy of Eq. (3) without the MAE term can be respectively expressed as [47]:
$$\begin{aligned}
E_{\mathrm{FM}} &= 2J_{11}S_{1}^{2} + 2J_{22}S_{2}^{2} + 2J_{33}S_{3}^{2} + 8J_{12}S_{1}S_{2} + 2J_{23}S_{2}S_{3} + 8J_{13}S_{1}S_{3} + E_{0} = 11838/A + E_{0}, \\
E_{\mathrm{AFM1}} &= 2J_{11}S_{1}^{2} + 2J_{22}S_{2}^{2} - 2J_{33}S_{3}^{2} - 8J_{12}S_{1}S_{2} + E_{0} = -2502/A + E_{0}.
\end{aligned} \tag{4}$$
The results in Table I are used to obtain the final expressions in Eq. (4). Since two parameters, A and E0, are kept, two spin configurations, FM and AFM1, are considered here. Discussion on the choice of spin configurations
FIG. 3. (a) The crystal structure of Cr atoms in 2D Cr3Te6. (b) The magnetic structure of Cr atoms in 2D Cr3Te6, calculated by Eqs. (1) and (2).
TABLE I. For 2D Cr3Te6, the calculated exchange coupling parameters JijA in Eqs. (1) and (2), by the density functional theory and Wannier function calculations. A is a parameter to be determined. The unit of JijA is meV³.

J11A   J22A   J33A   J12A   J13A   J23A
 40     26     53     29     44     83
is given in Supplemental Materials [47]. For the FM spin configuration, the ground state of Cr3Te6, the total energy is taken as EFM = 0 as the energy reference. The total energy of AFM1, EAFM1 = 535 meV, is obtained by the DFT calculation. The parameters A and E0 are obtained by solving Eq. (4), and the six exchange coupling parameters Jij can then be obtained from Table I. The results are given in Table II.
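The solve step above is a 2×2 linear system in 1/A and E0, and can be checked directly. A minimal sketch, using only the numbers stated in the text (EFM = 0 as the reference, EAFM1 = 535 meV, and the coefficients 11838 and −2502 from Eq. (4)):

```python
# Solving Eq. (4) for A and E0 by elimination:
#   E_FM   = 11838/A + E0 = 0
#   E_AFM1 = -2502/A + E0 = 535
def solve_a_e0(e_fm, e_afm1, c_fm=11838.0, c_afm1=-2502.0):
    # Subtracting the two equations eliminates E0:
    #   e_fm - e_afm1 = (c_fm - c_afm1) * x, with x = 1/A
    x = (e_fm - e_afm1) / (c_fm - c_afm1)
    a = 1.0 / x
    e0 = e_fm - c_fm * x
    return a, e0

a, e0 = solve_a_e0(0.0, 535.0)
print(round(a, 1))  # about -26.8 meV^-2, consistent with A = -27 in Table II
```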
C. Estimating Tc by Monte Carlo simulation
To calculate the Curie temperature, we used a Monte Carlo program for the Heisenberg-type Hamiltonian in Eq. (3) with the parameters in Table II. The Monte Carlo simulation was performed on a 30√3 × 30√3 lattice with more than 1×10⁶ steps for each temperature. The first two-thirds of the steps were discarded, and the last one-third of the steps was used to calculate the temperature-dependent physical quantities. As shown in Table II and Fig. 4(d), the calculated Tc = 328 K for 2D Cr3Te6, close to the Tc = 344 K of 2D Cr3Te6 in the experiment [15]. Discussion on the choice of spin configurations and the estimation of the exchange couplings Jij and Tc is given in Supplemental Materials [47].
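The kind of simulation described above can be sketched with a minimal Metropolis loop for classical Heisenberg spins. This is a toy stand-in only: it uses a single coupling J on a small square lattice rather than the six-coupling Hamiltonian of Eq. (3), and the lattice size and step counts are placeholder values far smaller than the 30√3 × 30√3 runs in the text.

```python
# Minimal Metropolis sketch for a classical Heisenberg model (toy parameters).
import math, random

def run_metropolis(L=6, J=-17.0, T=300.0, sweeps=200, seed=1):
    kB = 0.08617  # Boltzmann constant in meV/K
    rng = random.Random(seed)

    def rand_spin():  # uniform random unit vector
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        return (r * math.cos(phi), r * math.sin(phi), z)

    spins = [[rand_spin() for _ in range(L)] for _ in range(L)]

    def local_energy(i, j, s):  # H = J * sum S_i . S_j over nearest neighbours
        e = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            t = spins[(i + di) % L][(j + dj) % L]
            e += J * (s[0] * t[0] + s[1] * t[1] + s[2] * t[2])
        return e

    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old, new = spins[i][j], rand_spin()
        dE = local_energy(i, j, new) - local_energy(i, j, old)
        if dE < 0.0 or rng.random() < math.exp(-dE / (kB * T)):
            spins[i][j] = new

    mx = sum(s[0] for row in spins for s in row)
    my = sum(s[1] for row in spins for s in row)
    mz = sum(s[2] for row in spins for s in row)
    return math.sqrt(mx * mx + my * my + mz * mz) / (L * L)  # |M| per spin

print(run_metropolis())
```

In a full Tc estimate, this loop would be repeated over a grid of temperatures, discarding the equilibration steps and averaging the magnetization over the remainder, as described in the text.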
IV. Prediction of Two High Curie Temperature Magnetic Semiconductors Cr3O6 and Mn3O6
Inspired by the high Tc in the 2D magnetic metal Cr3Te6, we explore possible high-Tc magnetic semiconductors with the same crystal structure as Cr3Te6 by
FIG. 4. (a) Band structure of Cr3O6 with a band gap of 0.99 eV. (b) Band structure of Mn3O6 with a band gap of 0.75 eV. (c) Energy gap of Cr3O6 and Mn3O6 under an out-of-plane external electric field. (d) The magnetic moments of Cr3Te6, Cr3O6, and Mn3O6 as functions of temperature.
the DFT calculations. We obtain two stable ferromagnetic semiconductors, Cr3O6 and Mn3O6. In order to study the stability of 2D Cr3O6 and Mn3O6, we calculate the phonon spectra. As shown in Supplemental Materials [47], there is no imaginary frequency, indicating dynamical stability. In addition, we performed molecular dynamics simulations of Cr3O6 and Mn3O6 at 250 K, taking the NVT ensemble (constant temperature and volume) and running for 6 ps. The results show that 2D Cr3O6 and Mn3O6 are thermodynamically stable [47]. These calculation results suggest that 2D Cr3O6 and Mn3O6 may be feasible in experiment.

TABLE II. For the 2D magnetic metal Cr3Te6 and the semiconductors Cr3O6 and Mn3O6, the parameter A (in units of meV⁻²) in Eq. (1), the exchange coupling parameters JijS² and the magnetic anisotropy parameter DS² (in units of meV) in the Hamiltonian in Eq. (3), and the estimated Curie temperature Tc. See text for details.

Materials   A      J11S²   J22S²   J33S²   J12S²   J13S²   J23S²   DS²     Tc (K)
Cr3Te6     -27     -17.1   -11.5   -24.4   -12.6   -19.6   -37.4   -0.14   328
Cr3O6      -36     -18.9   -14.6   -10.1   -18.7   -1.8    -3.1     0.04   218
Mn3O6      -465    -11.9   -7.6    -50.4   -15.9   -5.2    -10.7   -0.09   208
The band structures of 2D Cr3O6 and Mn3O6 are shown in Figs. 4(a) and 4(b), respectively, where the band gap is 0.99 eV for Cr3O6 and 0.75 eV for Mn3O6. When applying an out-of-plane electric field in the range of ±0.3 V/Å, which is possible in experiment [49], the band gap of Cr3O6 (Mn3O6) increases (decreases) with increasing electric field, as shown in Fig. 4(c). By the same calculation method as above, the parameter A and similar Heisenberg models of Eq. (3) with six nearest-neighbor exchange couplings Jij are obtained for 2D Cr3O6 and Mn3O6. The parameters A, Jij, and D are shown in Table II. The spin polarization of Cr3O6 and Mn3O6 is in-plane (DS² = 0.04 meV) and out-of-plane (DS² = −0.09 meV), respectively. Fig. 4(d) shows the magnetization as a function of temperature for 2D Cr3Te6, Cr3O6, and Mn3O6. The calculated Curie temperatures are Tc = 218 K for 2D Cr3O6 and Tc = 208 K for 2D Mn3O6.
V. CONCLUSION

Based on the DFT and Wannier function calculations, we propose a method for constructing the 2D Heisenberg model with superexchange interactions. By this method, we obtain a 2D Heisenberg model with six different nearest-neighbor exchange couplings for the 2D ferromagnetic metal Cr3Te6. The calculated Curie temperature Tc = 328 K is close to the Tc = 344 K of Cr3Te6 in the experiment. In addition, we predict two 2D magnetic semiconductors: Cr3O6 with a band gap of 0.99 eV and Tc = 218 K, and Mn3O6 with a band gap of 0.75 eV and Tc = 208 K, for which similar 2D Heisenberg models are obtained. The complex Heisenberg model developed from the simple crystal structure shows the power of our method for studying the magnetic properties of these 2D magnetic metals and semiconductors.
ACKNOWLEDGEMENTS

This work is supported in part by the National Natural Science Foundation of China (Grants No. 12074378 and No. 11834014), the Beijing Natural Science Foundation (Grant No. Z190011), the National Key R&D Program of China (Grant No. 2018YFA0305800), the Beijing Municipal Science and Technology Commission (Grant No. Z191100007219013), the Chinese Academy of Sciences (Grants No. YSBR-030 and No. Y929013EA2), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grants No. XDB28000000 and No. XDB33000000).
[1] B. Huang, G. Clark, E. Navarro-Moratalla, D. R. Klein, R. Cheng, K. L. Seyler, D. Zhong, E. Schmidgall, M. A. McGuire, D. H. Cobden, W. Yao, D. Xiao, P. Jarillo-Herrero, and X. Xu, Nature 546, 270 (2017).
[2] C. Gong, L. Li, Z. Li, H. Ji, A. Stern, Y. Xia, T. Cao, W. Bao, C. Wang, Y. Wang, Z. Q. Qiu, R. J. Cava, S. G. Louie, J. Xia, and X. Zhang, Nature 546, 265 (2017).
[3] N. Mermin and H. Wagner, Phys. Rev. Lett. 17, 1307 (1966).
[4] Z. Zhang, J. Shang, C. Jiang, A. Rasmita, W. Gao, and T. Yu, Nano Lett. 19, 3138 (2019).
[5] X. Cai, T. Song, N. P. Wilson, G. Clark, M. He, X. Zhang, T. Taniguchi, K. Watanabe, W. Yao, D. Xiao, M. A. McGuire, D. H. Cobden, and X. Xu, Nano Lett. 19, 3993 (2019).
[6] F. Cui, X. Zhao, J. Xu, B. Tang, Q. Shang, J. Shi, Y. Huan, J. Liao, Q. Chen, Y. Hou, Q. Zhang, S. J. Pennycook, and Y. Zhang, Adv. Mater. 32, 1905896 (2019).
[7] J. Chu, Y. Zhang, Y. Wen, R. Qiao, C. Wu, P. He, L. Yin, R. Cheng, F. Wang, Z. Wang, J. Xiong, Y. Li, and J. He, Nano Lett. 19, 2154 (2019).
[8] X. Zhao, J. Dong, L. Fu, Y. Gu, R. Zhang, Q. Yang, L. Xie, Y. Tang, and F. Ning, J. Semicond. 43, 112501 (2022).
[9] J. Zhao, Y. Li, and P. Xiong, J. Semicond. 42, 010302 (2021).
[10] W. Huang, R. Lin, W. Chen, Y. Wang, and H. Zhang, J. Semicond. 42, 072501 (2021).
[11] Z. Sun, B. Cai, X. Chen, W. Wei, X. Li, D. Yang, C. Meng, Y. Wu, and H. Zeng, J. Semicond. 41, 122501 (2020).
[12] M. Wang, L. Kang, J. Su, L. Zhang, H. Dai, H. Cheng, X. Han, T. Zhai, Z. Liu, and J. Han, Nanoscale 12, 16427 (2020).
[13] L. Meng, Z. Zhou, M. Xu, S. Yang, K. Si, L. Liu, X. Wang, H. Jiang, B. Li, P. Qin, P. Zhang, J. Wang, Z. Liu, P. Tang, Y. Ye, W. Zhou, L. Bao, H.-J. Gao, and Y. Gong, Nat. Commun. 12, 94 (2021).
[14] X. Zhang, Q. Lu, W. Liu, W. Niu, J. Sun, J. Cook, M. Vaninger, P. F. Miceli, D. J. Singh, S.-W. Lian, T.-R. Chang, X. He, J. Du, L. He, R. Zhang, G. Bian, and Y. Xu, Nat. Commun. 12, 1 (2021).
[15] R. Chua, J. Zhou, X. Yu, W. Yu, J. Gou, R. Zhu, L. Zhang, M. Liu, M. B. H. Breese, W. Chen, K. P. Loh, Y. P. Feng, M. Yang, Y. L. Huang, and A. T. S. Wee, Adv. Mater. 33, 2103360 (2021).
[16] B. Li, X. Deng, W. Shu, X. Cheng, Q. Qian, Z. Wan, B. Zhao, X. Shen, R. Wu, S. Shi, H. Zhang, Z. Zhang, X. Yang, J. Zhang, M. Zhong, Q. Xia, J. Li, Y. Liu, L. Liao, Y. Ye, L. Dai, Y. Peng, B. Li, and X. Duan, Mater. Today 57, 66 (2022).
[17] Y. Zhang, J. Chu, L. Yin, T. Shifa, Z. Cheng, R. Cheng, F. Wang, Y. Wen, X. Zhan, Z. Wang, and J. He, Adv. Mater. 31, 1900056 (2019).
[18] Z. Fei, B. Huang, P. Malinowski, W. Wang, T. Song, J. Sanchez, W. Yao, D. Xiao, X. Zhu, A. F. May, W. Wu, D. H. Cobden, J.-H. Chu, and X. Xu, Nat. Mater. 17, 778 (2018).
[19] Y. Deng, Y. Yu, Y. Song, J. Zhang, N. Z. Wang, Z. Sun, Y. Yi, Y. Z. Wu, S. Wu, J. Zhu, J. Wang, X. H. Chen, and Y. Zhang, Nature 563, 94 (2018).
[20] J. Seo, D. Kim, E. An, K. Kim, G.-Y. Kim, S.-Y. Hwang, D. Kim, B. Jang, H. Kim, G. Eom, S. Seo, R. Stania, M. Muntwiler, J. Lee, K. Watanabe, T. Taniguchi, Y. Jo, J. Lee, B. Min, M. Jo, H. Yeom, S.-Y. Choi, J. Shim, and J. Kim, Sci. Adv. 6, 1 (2020).
[21] X. Chen, Y.-T. Shao, R. Chen, S. Susarla, T. Hogan, Y. He, H. Zhang, S. Wang, J. Yao, P. Ercius, D. Muller, R. Ramesh, and R. Birgeneau, Phys. Rev. Lett. 128, 1 (2022).
[22] A. May, D. Ovchinnikov, Q. Zheng, R. Hermann, S. Calder, B. Huang, Z. Fei, Y. Liu, X. Xu, and M. McGuire, ACS Nano 13, 4436 (2019).
[23] D. J. O'Hara, T. Zhu, A. H. Trout, A. S. Ahmed, Y. K. Luo, C. H. Lee, M. R. Brenner, S. Rajan, J. A. Gupta, D. W. McComb, and R. K. Kawakami, Nano Lett. 18, 3125 (2018).
[24] P. W. Anderson, Phys. Rev. 79, 350 (1950).
[25] J. B. Goodenough and A. L. Loeb, Phys. Rev. 98, 391 (1955).
[26] J. Kanamori, Prog. Theor. Phys. 17, 177 (1957).
[27] X.-J. Dong, J.-Y. You, B. Gu, and G. Su, Phys. Rev. Appl. 12, 014020 (2019).
[28] X.-J. Dong, J.-Y. You, Z. Zhang, B. Gu, and G. Su, Phys. Rev. B 102, 144443 (2020).
[29] J.-Y. You, Z. Zhang, X.-J. Dong, B. Gu, and G. Su, Phys. Rev. Res. 2, 013002 (2020).
[30] J.-Y. You, X.-J. Dong, B. Gu, and G. Su, Phys. Rev. B 103, 104403 (2021).
[31] S. Chen, C. Huang, H. Sun, J. Ding, P. Jena, and E. Kan, J. Phys. Chem. C 123, 17987 (2019).
[32] J. He, G. Ding, C. Zhong, S. Li, D. Li, and G. Zhang, J. Mater. Chem. C 7, 5084 (2019).
[33] Y. Li, D. Legut, X. Liu, C. Lin, X. Feng, Z. Li, and Q. Zhang, J. Phys. Chem. C 126, 8817 (2022).
[34] X. Hu, Y. Zhao, X. Shen, A. V. Krasheninnikov, Z. Chen, and L. Sun, ACS Appl. Mater. Interfaces 12, 26367 (2020).
[35] X. Tang, W. Sun, Y. Gu, C. Lu, L. Kou, and C. Chen, Phys. Rev. B 99, 045445 (2019).
[36] H. Xiang, C. Lee, H.-J. Koo, X. Gong, and M.-H. Whangbo, Dalton Trans. 42, 823 (2013).
[37] H. Xiang, E. Kan, S.-H. Wei, M.-H. Whangbo, and X. Gong, Phys. Rev. B 84, 224429 (2011).
[38] X. Li, H. Yu, F. Lou, J. Feng, M.-H. Whangbo, and H. Xiang, Molecules 26, 803 (2021).
[39] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
[40] J. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
[41] P. Blöchl, Phys. Rev. B 50, 17953 (1994).
[42] J. Heyd, G. Scuseria, and M. Ernzerhof, J. Chem. Phys. 118, 8207 (2003).
[43] H. Monkhorst and J. Pack, Phys. Rev. B 13, 5188 (1976).
[44] A. Togo and I. Tanaka, Scripta Mater. 108, 1 (2015).
[45] A. Mostofi, J. Yates, G. Pizzi, Y.-S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, Comput. Phys. Commun. 185, 2309 (2014).
[46] A. A. Mostofi, J. R. Yates, Y.-S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, Comput. Phys. Commun. 178, 685 (2008).
[47] See details in Supplemental Materials.
[48] D. S. Dai and K. M. Qian, Ferromagnetism, Vol. 1 (Science Press, Beijing, 2017).
[49] D. Domaretskiy, M. Philippi, M. Gibertini, N. Ubrig, I. Gutiérrez-Lezama, and A. F. Morpurgo, Nat. Nanotechnol. 17, 1078 (2022).
4tAzT4oBgHgl3EQf9v5K/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
5dAyT4oBgHgl3EQfQPaq/content/2301.00042v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ddfd418b71af2cec6225324b3e3c2c1eb0cb9939a3ec4e7b7fd0c8f3fa649af0
+size 8255979

5tAzT4oBgHgl3EQff_wz/content/2301.01460v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7b316af04f3db48fa5fe896f5b3c2155d48e63159bbba99a5ce2f977f277cb2
+size 1968587

5tAzT4oBgHgl3EQff_wz/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2401a9262025294defa1ffbba0dbe90a28e17ee7348141d0f44e17dfb9ce2f64
+size 1703981

5tAzT4oBgHgl3EQff_wz/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0aac4b377e8f018aa0d1fa8dbd30bb83a09efa391a2c744af116d63a4d2f0036
+size 60839

7dAyT4oBgHgl3EQfQvYd/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d7661eebb1a6dfecf9ddc3df935bb202dd5bb52812b66472d73fbff6eff40cb
+size 157288

89FLT4oBgHgl3EQfBi6R/content/2301.11971v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:149df3c06a23fbcbeecb71923a7f59ee60766a55de7664e5318afdce5680a7a3
+size 2781371

89FLT4oBgHgl3EQfBi6R/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3bb9f5f12c70cc131061122e74299a326e3d2e4a5055f7b85d1422b705169de5
+size 272546

9tAyT4oBgHgl3EQf3Pnc/content/2301.00767v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:270b87f857b09bd8447e571c55b1441ca7e049bef3b0bdbbc584af3eca58bb51
+size 3404689
A9E4T4oBgHgl3EQfEwz_/content/tmp_files/2301.04881v1.pdf.txt ADDED
@@ -0,0 +1,751 @@
arXiv:2301.04881v1 [math.CO] 12 Jan 2023

Strengthening the Directed Brooks' Theorem for oriented graphs and consequences on digraph redicolouring*

Lucas Picasarri-Arrieta
Université Côte d'Azur, CNRS, I3S, INRIA, Sophia Antipolis, France

Abstract
Let D = (V, A) be a digraph. We define ∆max(D) as the maximum of {max(d+(v), d−(v)) | v ∈ V} and ∆min(D) as the maximum of {min(d+(v), d−(v)) | v ∈ V}. It is known that the dichromatic number of D is at most ∆min(D) + 1. In this work, we prove that every digraph D which has dichromatic number exactly ∆min(D) + 1 must contain the directed join of ←→Kr and ←→Ks for some r, s such that r + s = ∆min(D) + 1. In particular, every oriented graph ⃗G with ∆min(⃗G) ≥ 2 has dichromatic number at most ∆min(⃗G).

Let ⃗G be an oriented graph of order n such that ∆min(⃗G) ≤ 1. Given two 2-dicolourings of ⃗G, we show that we can transform one into the other in at most n steps, by recolouring one vertex at each step while maintaining a dicolouring at any step. Furthermore, we prove that, for every oriented graph ⃗G on n vertices, the distance between two k-dicolourings is at most 2∆min(⃗G)n when k ≥ ∆min(⃗G) + 1.

We then extend a theorem of Feghali to digraphs. We prove that, for every digraph D with ∆max(D) = ∆ ≥ 3 and every k ≥ ∆ + 1, the k-dicolouring graph of D consists of isolated vertices and at most one further component that has diameter at most c∆n², where c∆ = O(∆²) is a constant depending only on ∆.
1 Introduction

1.1 Graph (re)colouring
Given a graph G = (V, E), a k-colouring of G is a function c : V → {1, . . . , k} such that, for every edge xy ∈ E, we have c(x) ≠ c(y). So for every i ∈ {1, . . . , k}, c⁻¹(i) induces an independent set of G. The chromatic number of G, denoted by χ(G), is the smallest k such that G admits a k-colouring. The maximum degree of G, denoted by ∆(G), is the degree of the vertex with the greatest number of edges incident to it. A simple greedy procedure shows that, for any graph G, χ(G) ≤ ∆(G) + 1. The celebrated theorem of Brooks [6] characterizes the graphs for which equality holds.

Theorem 1 (Brooks, [6]). A connected graph G satisfies χ(G) = ∆(G) + 1 if and only if G is an odd cycle or a complete graph.
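The greedy procedure behind the bound χ(G) ≤ ∆(G) + 1 can be sketched directly: processing vertices in any order, each vertex has at most ∆ already-coloured neighbours, so at least one colour in a palette of ∆ + 1 is always free. The 5-cycle example below is illustrative and not taken from the paper.

```python
# Greedy colouring sketch: witnesses chi(G) <= Delta(G) + 1.
def greedy_colouring(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    colour = {}
    for v in adj:  # any vertex order works for the Delta + 1 bound
        used = {colour[u] for u in adj[v] if u in colour}
        # smallest colour not used by an already-coloured neighbour
        colour[v] = next(c for c in range(len(adj)) if c not in used)
    return colour

# Odd cycle C5: Delta = 2, and greedy uses at most Delta + 1 = 3 colours.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
col = greedy_colouring(adj)
print(max(col.values()) + 1)  # number of colours used
```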
For any k ≥ χ(G), the k-colouring graph of G, denoted by Ck(G), is the graph whose vertices are the k-colourings of G and in which two k-colourings are adjacent if they differ by the colour of exactly one vertex. A path between two given colourings in Ck(G) corresponds to a recolouring sequence, that is, a sequence of pairs composed of a vertex of G, which is going to receive a new colour, and a new colour for this vertex. If Ck(G) is connected, we say that G is k-mixing. A k-colouring of G is k-frozen if it is an isolated vertex in Ck(G). The graph G is k-freezable if it admits a k-frozen colouring. In the last fifteen years, since the papers of Cereceda, van den Heuvel and Johnson [8, 7], graph recolouring has been studied by many researchers in graph theory. We refer the reader to the PhD thesis of Bartier [2] for a complete overview of graph recolouring and to the surveys of van den Heuvel [12] and Nishimura [15] for reconfiguration problems in general. Feghali [9] proved the following analogue of Brooks' Theorem for graph recolouring.

*Research supported by research grant DIGRAPHS ANR-19-CE48-0013 and by the French government, through the EUR DS4H Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-17-EURE-0004.
Theorem 2 (Feghali, [9]). Let G = (V, E) be a connected graph with ∆(G) = ∆ ≥ 3, k ≥ ∆ + 1, and α, β two k-colourings of G. Then at least one of the following holds:
• α is k-frozen, or
• β is k-frozen, or
• there is a recolouring sequence of length at most c∆|V|² between α and β, where c∆ = O(∆) is a constant depending on ∆.
1.2 Digraph (re)dicolouring
In this paper, we are looking for extensions of the previous results on graph colouring and recolouring to digraphs.

Let D be a digraph. A digon is a pair of arcs in opposite directions between the same vertices. A simple arc is an arc which is not in a digon. For any two vertices x, y ∈ V(D), the digon {xy, yx} is denoted by [x, y]. The digon graph of D is the undirected graph with vertex set V(D) in which uv is an edge if and only if [u, v] is a digon of D. An oriented graph is a digraph with no digon. The bidirected graph associated to a graph G, denoted by ←→G, is the digraph obtained from G by replacing every edge by a digon. The underlying graph of D, denoted by UG(D), is the undirected graph G with vertex set V(D) in which uv is an edge if and only if uv or vu is an arc of D.

Let v be a vertex of a digraph D. The out-degree (resp. in-degree) of v, denoted by d+(v) (resp. d−(v)), is the number of arcs leaving (resp. entering) v. We define the maximum degree of v as dmax(v) = max{d+(v), d−(v)}, and the minimum degree of v as dmin(v) = min{d+(v), d−(v)}. We can then define the corresponding maximum degrees of D: ∆max(D) = max over v ∈ V(D) of dmax(v), and ∆min(D) = max over v ∈ V(D) of dmin(v). A digraph D is ∆-diregular if, for every vertex v ∈ V(D), d−(v) = d+(v) = ∆.
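The degree parameters ∆max and ∆min defined above are straightforward to compute; a minimal sketch, assuming a digraph given as a vertex list and an arc list (the example digraph is made up for illustration):

```python
# Compute Delta_max(D) and Delta_min(D) from an arc list.
from collections import defaultdict

def delta_max_min(vertices, arcs):
    outdeg = defaultdict(int)
    indeg = defaultdict(int)
    for u, v in arcs:
        outdeg[u] += 1
        indeg[v] += 1
    # Delta_max: max over v of max(d+(v), d-(v));
    # Delta_min: max over v of min(d+(v), d-(v)).
    dmax = max(max(outdeg[v], indeg[v]) for v in vertices)
    dmin = max(min(outdeg[v], indeg[v]) for v in vertices)
    return dmax, dmin

# Directed triangle plus one extra simple arc: 1->2, 2->3, 3->1, 1->3.
print(delta_max_min([1, 2, 3], [(1, 2), (2, 3), (3, 1), (1, 3)]))  # (2, 1)
```

Note that ∆min(D) ≤ ∆max(D) always holds, and the two coincide exactly on diregular digraphs.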
In 1982, Neumann-Lara [14] introduced the notions of dicolouring and dichromatic number, which generalize the ones of colouring and chromatic number. A k-dicolouring of D is a function c : V(D) → {1, . . . , k} such that c⁻¹(i) induces an acyclic subdigraph of D for each i ∈ {1, . . . , k}. The dichromatic number of D, denoted by ⃗χ(D), is the smallest k such that D admits a k-dicolouring. There is a one-to-one correspondence between the k-colourings of a graph G and the k-dicolourings of the associated bidirected graph ←→G, and in particular χ(G) = ⃗χ(←→G). Hence every result on graph colourings can be seen as a result on dicolourings of bidirected graphs, and it is natural to study whether the result can be extended to all digraphs.

The directed version of Brooks' Theorem was first proved by Mohar in [13], but a flaw was later discovered in the proof. Harutyunyan and Mohar then gave a stronger result in [10]. Finally, Aboulker and Aubian gave four new proofs of the following theorem in [1].
Theorem 3 (DIRECTED BROOKS' THEOREM). Let D be a connected digraph. Then ⃗χ(D) ≤ ∆max(D) + 1, and equality holds if and only if one of the following occurs:
• D is a directed cycle, or
• D is a bidirected odd cycle, or
• D is a bidirected complete graph (of order at least 4).
It is easy to prove, by induction on |V(D)|, that every digraph D can be dicoloured with ∆min(D) + 1 colours. Hence, one can wonder whether Brooks' Theorem can be extended to digraphs using ∆min(D) instead of ∆max(D). Unfortunately, Aboulker and Aubian [1] proved that, given a digraph D, deciding whether D is ∆min(D)-dicolourable is NP-complete. Thus, unless P = NP, we cannot expect an easy characterization of digraphs satisfying ⃗χ(D) = ∆min(D) + 1.
Let the maximum geometric mean of a digraph D be ˜∆(D) = max{√(d+(v)d−(v)) | v ∈ V (D)}. By definition we have ∆min(D) ≤ ˜∆(D) ≤ ∆max(D). Restricted to oriented graphs, Harutyunyan and Mohar [11] have strengthened Theorem 3 by proving the following.
Theorem 4 (Harutyunyan and Mohar [11]). There is an absolute constant ∆1 such that every oriented graph ⃗G with ˜∆(⃗G) ≥ ∆1 has ⃗χ(⃗G) ≤ (1 − e^−13) ˜∆(⃗G).
In Section 2, we give another strengthening of Theorem 3 on a large class of digraphs which contains oriented graphs. The directed join of H1 and H2, denoted by H1 ⇒ H2, is the digraph obtained from disjoint copies of H1 and H2 by adding all arcs from the copy of H1 to the copy of H2 (H1 or H2 may be empty).
Theorem 5. Let D be a digraph. If D is not ∆min(D)-dicolourable, then one of the following holds:
• ∆min(D) ≤ 1, or
• ∆min(D) = 2 and D contains ←→K2, or
• ∆min(D) ≥ 3 and D contains ←→Kr ⇒ ←→Ks, for some r, s such that r + s = ∆min(D) + 1.
In particular, the following is a direct consequence of Theorem 5.
Corollary 6. Let D be a digraph. If ⃗χ(D) = ∆min(D) + 1, then D contains the complete bidirected graph on ⌈(∆min + 1)/2⌉ vertices as a subdigraph.
Moreover, since an oriented graph does not contain any digon, Corollary 6 implies the following:
Corollary 7. Let ⃗G be an oriented graph. If ∆min(⃗G) ≥ 2, then ⃗χ(⃗G) ≤ ∆min(⃗G).
Corollary 6 is best possible: if we restrict D to not contain the complete bidirected graph on ⌈(∆min + 1)/2⌉ + 1 vertices, then we show that deciding whether ⃗χ(D) ≤ ∆min(D) remains NP-complete (Theorem 11).
For any k ≥ ⃗χ(D), the k-dicolouring graph of D, denoted by Dk(D), is the graph whose vertices are the k-dicolourings of D and in which two k-dicolourings are adjacent if they differ by the colour of exactly one vertex. Observe that Ck(G) = Dk(←→G) for any bidirected graph ←→G. A redicolouring sequence between two dicolourings is a path between these dicolourings in Dk(D). The digraph D is k-mixing if Dk(D) is connected. A k-dicolouring of D is k-frozen if it is an isolated vertex in Dk(D). The digraph D is k-freezable if it admits a k-frozen dicolouring. A vertex v is blocked to its colour in a dicolouring α if, for every colour c ̸= α(v), recolouring v to c in α creates a monochromatic directed cycle.
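The adjacency relation of Dk(D) can be stated operationally: two valid k-dicolourings are adjacent exactly when they differ on a single vertex. The sketch below (an illustration, not the paper's code, using Kahn's algorithm for the acyclicity test) makes this concrete.

```python
def colour_class_acyclic(arcs, cls):
    """Kahn's algorithm on the subdigraph induced by the vertex set `cls`."""
    cls = set(cls)
    succ = {v: [] for v in cls}
    indeg = {v: 0 for v in cls}
    for u, v in arcs:
        if u in cls and v in cls:
            succ[u].append(v)
            indeg[v] += 1
    queue = [v for v in cls if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(cls)  # every vertex removed iff the class is acyclic

def is_dicolouring(vertices, arcs, col):
    return all(
        colour_class_acyclic(arcs, [v for v in vertices if col[v] == c])
        for c in set(col.values())
    )

def adjacent(vertices, arcs, alpha, beta):
    """Adjacency in D_k(D): both valid, and they differ on exactly one vertex."""
    if not (is_dicolouring(vertices, arcs, alpha) and is_dicolouring(vertices, arcs, beta)):
        return False
    return sum(alpha[v] != beta[v] for v in vertices) == 1

V = ["a", "b", "c"]
A = [("a", "b"), ("b", "c"), ("c", "a")]  # a directed triangle
print(adjacent(V, A, {"a": 1, "b": 1, "c": 2}, {"a": 1, "b": 2, "c": 2}))  # True
```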
Digraph redicolouring was first introduced in [5], where the authors generalized different results on graph recolouring to digraphs, and proved some specific results on oriented graph redicolouring. In particular, they studied the k-dicolouring graph of digraphs with bounded degeneracy or bounded maximum average degree, and they showed that finding a redicolouring sequence between two given k-dicolourings of a digraph is PSPACE-complete. Dealing with the maximum degree of a digraph, they proved that, given an orientation ⃗G of a subcubic graph on n vertices, its 2-dicolouring graph D2(⃗G) is connected and has diameter at most 2n, and they asked whether this bound can be improved. We answer this question in Section 3 by proving the following theorem.
Theorem 8. Let ⃗G be an oriented graph of order n such that ∆min(⃗G) ≤ 1. Then D2(⃗G) is connected and has diameter exactly n.
In particular, if ⃗G is an orientation of a subcubic graph, then ∆min(⃗G) ≤ 1 (because d+(v) + d−(v) ≤ 3 for every vertex v), and so D2(⃗G) has diameter exactly n. Furthermore, we prove the following as a consequence of Corollary 7 and Theorem 8.
Corollary 9. Let ⃗G be an oriented graph of order n with ∆min(⃗G) = ∆ ≥ 1, and let k ≥ ∆ + 1. Then Dk(⃗G) is connected and has diameter at most 2∆n.
Corollary 9 does not hold for digraphs in general: indeed, ←→Pn, the bidirected path on n vertices, satisfies ∆min(←→Pn) = 2 and D3(←→Pn) = C3(Pn) has diameter Ω(n^2), as proved in [4].
In Section 4, we extend Theorem 2 to digraphs.
Theorem 10. Let D = (V, A) be a connected digraph with ∆max(D) = ∆ ≥ 3, k ≥ ∆ + 1, and α, β two k-dicolourings of D. Then at least one of the following holds:
• α is k-frozen, or
• β is k-frozen, or
• there is a redicolouring sequence of length at most c∆|V |^2 between α and β, where c∆ = O(∆^2) is a constant depending only on ∆.
Furthermore, we prove that a digraph D is k-freezable only if D is bidirected and its underlying graph is k-freezable. Thus, an obstruction in Theorem 10 is exactly the bidirected graph of an obstruction in Theorem 2.
2 Strengthening of Directed Brooks' Theorem for oriented graphs
A digraph D is k-dicritical if ⃗χ(D) = k and for every vertex v ∈ V (D), ⃗χ(D − v) < k. Observe that every digraph with dichromatic number at least k contains a k-dicritical subdigraph.
Let F2 be {←→K2}, and for each ∆ ≥ 3, we define F∆ = {←→Kr ⇒ ←→Ks | r, s ≥ 0 and r + s = ∆ + 1}. A digraph D is F∆-free if it does not contain F as a subdigraph, for any F ∈ F∆. Theorem 5 can then be reformulated as follows:
Theorem 5. Let D be a digraph with ∆min(D) = ∆ ≥ 2. If D is F∆-free, then ⃗χ(D) ≤ ∆.
Proof. Let D be a digraph such that ∆min(D) = ∆ ≥ 2 and ⃗χ(D) = ∆ + 1. We will show that D contains some F ∈ F∆ as a subdigraph.
Let (X, Y ) be a partition of V (D) such that for each x ∈ X, d+(x) ≤ ∆, and for each y ∈ Y , d−(y) ≤ ∆. (Such a partition exists since dmin(v) ≤ ∆ for every vertex v.) We define the digraph ˜D as follows:
• V ( ˜D) = V (D),
• A( ˜D) = A(D⟨X⟩) ∪ A(D⟨Y ⟩) ∪ {xy, yx | xy ∈ A(D), x ∈ X, y ∈ Y }.
Claim 5.1: ⃗χ( ˜D) ≥ ∆ + 1.
Proof of claim. Assume for a contradiction that there exists a ∆-dicolouring c of ˜D. Then D, coloured with c, must contain a monochromatic directed cycle C. Now C is not contained in X nor in Y , for otherwise C would be a monochromatic directed cycle of D⟨X⟩ or D⟨Y ⟩ and so a monochromatic directed cycle of ˜D. Thus C contains an arc xy from X to Y . But then, [x, y] is a monochromatic digon in ˜D, a contradiction. ♦
Since ⃗χ( ˜D) ≥ ∆ + 1, there is a (∆ + 1)-dicritical subdigraph H of ˜D. By dicriticality of H, for every vertex v ∈ V (H), d+H(v) ≥ ∆ and d−H(v) ≥ ∆, for otherwise a ∆-dicolouring of H − v could be extended to H by choosing for v a colour which does not appear in its out-neighbourhood or in its in-neighbourhood. We define XH as X ∩ V (H) and YH as Y ∩ V (H). Note that both H⟨XH⟩ and H⟨YH⟩ are subdigraphs of D.
Claim 5.2: H is ∆-diregular.
Proof of claim. Let ℓ be the number of digons between XH and YH in H. Observe that, by definition of X and H, for each vertex x ∈ XH, d+H(x) = ∆. Note also that, in H, ℓ is exactly the number of arcs leaving XH and exactly the number of arcs entering XH. We get:

∆|XH| = Σx∈XH d+H(x) = ℓ + |A(H⟨XH⟩)| = Σx∈XH d−H(x)

which implies, since H is dicritical, d+H(x) = d−H(x) = ∆ for every vertex x ∈ XH. Using a symmetric argument, we prove that ∆|YH| = Σy∈YH d+H(y), implying d+H(y) = d−H(y) = ∆ for every vertex y ∈ YH. ♦
Since H is ∆-diregular, then in particular ∆max(H) = ∆. Hence, because ⃗χ(H) = ∆ + 1, by Theorem 3, either ∆ = 2 and H is a bidirected odd cycle, or ∆ ≥ 3 and H is the bidirected complete graph on ∆ + 1 vertices.
• If ∆ = 2 and H is a bidirected odd cycle, then at least one digon of H belongs to H⟨XH⟩ or H⟨YH⟩, for otherwise H would be bipartite (with bipartition (XH, YH)). Since both H⟨XH⟩ and H⟨YH⟩ are subdigraphs of D, this shows, as desired, that D contains a copy of ←→K2.
• If ∆ ≥ 3 and H is the bidirected complete graph on ∆ + 1 vertices, let AH be all the arcs from YH to XH. Then D⟨V (H)⟩ \ AH is a subdigraph of D which belongs to F∆.
Now we will justify that Corollary 6 is best possible. To do so, we prove that, given a digraph D which does not contain the bidirected complete graph on ⌈(∆min(D) + 1)/2⌉ + 1 vertices, deciding if it is ∆min(D)-dicolourable is NP-complete. We shall use a reduction from k-DICOLOURABILITY, which is defined as follows:
k-DICOLOURABILITY
Input: A digraph D.
Question: Is D k-dicolourable?
k-DICOLOURABILITY is NP-complete for every fixed k ≥ 2 [3]. It remains NP-complete when we restrict to digraphs D with ∆min(D) = k [1].
Theorem 11. For all k ≥ 2, k-DICOLOURABILITY remains NP-complete when restricted to digraphs D satisfying ∆min(D) = k and not containing the bidirected complete graph on ⌈(k + 1)/2⌉ + 1 vertices.
Proof. Let D = (V, A) be an instance of k-DICOLOURABILITY for some fixed k ≥ 2. Then we build D′ = (V ′, A′) as follows:
• For each vertex x ∈ V , we associate a copy of S−x ⇒ S+x, where S−x is the bidirected complete graph on ⌈(k + 1)/2⌉ vertices, and S+x is the bidirected complete graph on ⌊(k + 1)/2⌋ vertices.
• For each arc xy ∈ A, we associate all possible arcs x+y− in A′, such that x+ ∈ S+x and y− ∈ S−y.
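The gadget construction above can be sketched in code. This is an illustration only: the vertex names (x, "-", i) and (x, "+", i) are a hypothetical encoding, and the split of the k + 1 gadget vertices into ⌈(k + 1)/2⌉ and ⌊(k + 1)/2⌋ is the reconstruction used in the text (so that |S−x ∪ S+x| = k + 1).

```python
from math import ceil
from itertools import permutations, product

def build_D_prime(vertices, arcs, k):
    """Sketch of the reduction digraph D' from the proof of Theorem 11."""
    minus = ceil((k + 1) / 2)   # |S^-_x| (assumed split)
    plus = (k + 1) - minus      # |S^+_x|, so the gadget has k + 1 vertices
    V2, A2 = [], []
    S = {}
    for x in vertices:
        Sm = [(x, "-", i) for i in range(minus)]
        Sp = [(x, "+", i) for i in range(plus)]
        S[x] = (Sm, Sp)
        V2 += Sm + Sp
        # S^-_x and S^+_x are bidirected complete graphs.
        A2 += [(u, v) for u, v in permutations(Sm, 2)]
        A2 += [(u, v) for u, v in permutations(Sp, 2)]
        # The directed join S^-_x => S^+_x.
        A2 += list(product(Sm, Sp))
    for x, y in arcs:
        # Each arc xy of D becomes all arcs from S^+_x to S^-_y.
        A2 += list(product(S[x][1], S[y][0]))
    return V2, A2

V2, A2 = build_D_prime(["a", "b"], [("a", "b")], 3)
print(len(V2))  # 2 * (k + 1) = 8 vertices
```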
First observe that ∆min(D′) = k: let v be a vertex of D′; if v belongs to some S+x, then d−(v) = k; otherwise it belongs to some S−x and then d+(v) = k. Then observe that D′ does not contain the bidirected complete graph on ⌈(k + 1)/2⌉ + 1 vertices, since every digon in D′ is contained in some S+x or S−x. Thus we only have to prove that ⃗χ(D) ≤ k if and only if ⃗χ(D′) ≤ k to get the result.
• Let us first prove that ⃗χ(D) ≤ k implies ⃗χ(D′) ≤ k.
Assume that ⃗χ(D) ≤ k. Let φ : V −→ {1, . . . , k} be a k-dicolouring of D. Let φ′ be the k-dicolouring of D′ defined as follows: for each vertex x ∈ V , choose arbitrarily x− ∈ S−x, x+ ∈ S+x, and set φ′(x−) = φ′(x+) = φ(x). Then choose a distinct colour for every other vertex v in S−x ∪ S+x, and set φ′(v) to this colour. We get that φ′ must be a k-dicolouring of D′: for each x ∈ V , every vertex but x− in S−x must be a sink in its colour class, and every vertex but x+ in S+x must be a source in its colour class. Thus if D′, coloured with φ′, contains a monochromatic directed cycle C′, then C′ must be of the form x−1 x+1 x−2 x+2 · · · x−ℓ x+ℓ x−1. But then C = x1x2 · · · xℓx1 is a monochromatic directed cycle in D coloured with φ: a contradiction.
• Reciprocally, let us prove that ⃗χ(D′) ≤ k implies ⃗χ(D) ≤ k.
Assume that ⃗χ(D′) ≤ k. Let φ′ : V ′ −→ {1, . . . , k} be a k-dicolouring of D′. Let φ be the k-dicolouring of D defined as follows. For each vertex x ∈ V , we know that |S+x ∪ S−x| = k + 1, thus there must be two vertices x+ and x− in S+x ∪ S−x such that φ′(x+) = φ′(x−). Moreover, since both S+x and S−x are bidirected, one of these two vertices belongs to S+x and the other one belongs to S−x. We assume without loss of generality x+ ∈ S+x and x− ∈ S−x. Then we set φ(x) = φ′(x+). We get that φ must be a k-dicolouring of D. If D, coloured with φ, contains a monochromatic directed cycle C = x1x2 · · · xℓx1, then C′ = x−1 x+1 x−2 x+2 · · · x−ℓ x+ℓ x−1 is a monochromatic directed cycle in D′ coloured with φ′, a contradiction.
3 Redicolouring oriented graphs
In this section, we restrict to oriented graphs. We first prove Theorem 8; let us restate it.
Theorem 8. Let ⃗G be an oriented graph of order n such that ∆min(⃗G) ≤ 1. Then D2(⃗G) is connected and has diameter exactly n.
Observe that, if D2(⃗G) is connected, then its diameter must be at least n: for any 2-dicolouring α, we can define its mirror ¯α where, for every vertex v ∈ V (⃗G), α(v) ̸= ¯α(v); then every redicolouring sequence between α and ¯α has length at least n.
Lemma 12. Let C be a directed cycle on n ≥ 3 vertices. Then D2(C) is connected and has diameter exactly n.
Proof. Let α and β be any two 2-dicolourings of C. Let x = diff(α, β) = |{v ∈ V (C) | α(v) ̸= β(v)}|. By induction on x ≥ 0, let us show that there exists a path of length at most x from α to β in D2(C). This clearly holds for x = 0 (i.e., α = β). Assume x > 0 and that the result holds for x − 1. Let v ∈ V (C) be such that α(v) ̸= β(v). If v can be recoloured to β(v), then we recolour it and reach a new 2-dicolouring α′ such that diff(α′, β) = x − 1, and the result holds by induction. Otherwise, if v cannot be recoloured, then recolouring v must create a monochromatic directed cycle, which must be C itself. Then there must be a vertex v′, different from v, such that β(v) = α(v′) ̸= β(v′), and v′ can be recoloured. We recolour it and reach a new 2-dicolouring α′ such that diff(α′, β) = x − 1, and the result holds by induction.
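Lemma 12 can be checked exhaustively for small cycles. The sketch below (an illustration, not from the paper) builds D2(Cn) explicitly, using the fact that a 2-colouring of a directed cycle is a dicolouring if and only if it is not constant, and computes the diameter by BFS.

```python
from itertools import product
from collections import deque

def dicolouring_graph_diameter(n):
    """Diameter of D_2(C_n) for the directed cycle C_n, by explicit BFS."""
    # A 2-colouring of C_n fails to be a dicolouring only when the whole
    # cycle is monochromatic, i.e. when the colouring is constant.
    cols = [c for c in product((1, 2), repeat=n) if len(set(c)) == 2]
    index = set(cols)

    def neighbours(c):
        # Recolour exactly one vertex; keep only valid dicolourings.
        for v in range(n):
            d = list(c)
            d[v] = 3 - d[v]
            d = tuple(d)
            if d in index:
                yield d

    diameter = 0
    for src in cols:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            c = queue.popleft()
            for d in neighbours(c):
                if d not in dist:
                    dist[d] = dist[c] + 1
                    queue.append(d)
        assert len(dist) == len(cols)  # D_2(C_n) is connected
        diameter = max(diameter, max(dist.values()))
    return diameter

print(dicolouring_graph_diameter(3))  # 3, matching Lemma 12
```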
We are now ready to prove Theorem 8.
Proof of Theorem 8. Let α and β be any two 2-dicolourings of ⃗G. We will show that there exists a redicolouring sequence of length at most n between α and β. We may assume that ⃗G is strongly connected, otherwise we consider each strongly connected component independently. This implies in particular that ⃗G does not contain any sink nor source. Let (X, Y ) be a partition of V (⃗G) such that, for every x ∈ X, d+(x) = 1, and for every y ∈ Y , d−(y) = 1.
Assume first that ⃗G⟨X⟩ contains a directed cycle C. Since every vertex in X has exactly one out-neighbour, there is no arc leaving C. Thus, since ⃗G is strongly connected, ⃗G must be exactly C, and the result holds by Lemma 12. Using a symmetric argument, we get the result when ⃗G⟨Y ⟩ contains a directed cycle.
Assume now that both ⃗G⟨X⟩ and ⃗G⟨Y ⟩ are acyclic. Thus, since every vertex in X has exactly one out-neighbour, ⃗G⟨X⟩ is the union of disjoint and independent in-trees, that are oriented trees in which all arcs are directed towards the root. We denote by Xr the set of roots of these in-trees. Symmetrically, ⃗G⟨Y ⟩ is the union of disjoint and independent out-trees (oriented trees in which all arcs are directed away from the root), and we denote by Yr the set of roots of these out-trees. Set Xℓ = X \ Xr and Yℓ = Y \ Yr. Observe that the arcs from X to Y form a perfect matching directed from Xr to Yr. We denote by Mr this perfect matching. Observe also that the arcs from Y to X are unconstrained. Now we define X1r and Y1r, two subsets of Xr and Yr respectively, depending on the two 2-dicolourings α and β, as follows:

X1r = {x | xy ∈ Mr, α(x) = β(y) ̸= α(y) = β(x)}
Y1r = {y | xy ∈ Mr, α(x) = β(y) ̸= α(y) = β(x)}
Set X2r = Xr \ X1r and Y2r = Yr \ Y1r. We denote by M1r (respectively M2r) the perfect matching from X1r to Y1r (respectively from X2r to Y2r). Figure 1 shows a partitioning of V (⃗G) into X1r, X2r, Xℓ, Y1r, Y2r, Yℓ.
Claim 8.1: There exists a redicolouring sequence of length sα from α to some 2-dicolouring α′ and a redicolouring sequence of length sβ from β to some 2-dicolouring β′ such that each of the following holds:
(i) For any arc xy ∈ Mr, α′(x) ̸= α′(y) and β′(x) ̸= β′(y),
(ii) For any arc xy ∈ M2r, α′(x) = β′(x) (and so α′(y) = β′(y) by (i)), and
(iii) sα + sβ ≤ |X2r| + |Y2r|.
Figure 1: The partitioning of V (⃗G) into X1r, X2r, Xℓ, Y1r, Y2r, Yℓ (left: ⃗G dicoloured with α; right: ⃗G dicoloured with β).
Proof of claim. We consider the arcs xy of M2r one after another and do the following recolourings, depending on the colours of x and y in both α and β, to get α′ and β′.
• If α(x) = α(y) = β(x) = β(y), then we recolour x in both α and β;
• Else if α(x) = α(y) ̸= β(x) = β(y), then we recolour x in α and we recolour y in β;
• Else if α(x) = β(x) ̸= α(y) = β(y), then we do nothing;
• Else if α(x) ̸= α(y) = β(x) = β(y), then we recolour x in β;
• Finally if α(y) ̸= α(x) = β(x) = β(y), then we recolour y in β.
Each of these recolourings is valid because, when a vertex in X2r (respectively Y2r) is recoloured, it gets a colour different from its only out-neighbour (respectively in-neighbour). Let α′ and β′ be the two resulting 2-dicolourings. By construction, α′ and β′ agree on X2r ∪ Y2r. For each arc xy ∈ M2r, either α(x) = α′(x) or α(y) = α′(y), and the same holds for β and β′. This implies that sα + sβ ≤ 2|M2r| = |X2r| + |Y2r|. ♦
Claim 8.2: There exists a redicolouring sequence from α′ to some 2-dicolouring ˜α of length s′α and a redicolouring sequence from β′ to some 2-dicolouring ˜β of length s′β such that each of the following holds:
(i) ˜α and ˜β agree on V (⃗G) \ (X1r ∪ Y1r),
(ii) α′ and ˜α agree on Xr ∪ Yr,
(iii) β′ and ˜β agree on Xr ∪ Yr,
(iv) Xℓ ∪ Yℓ is monochromatic in ˜α (and in ˜β by (i)), and
(v) s′α + s′β ≤ |Xℓ| + |Yℓ|.
Proof of claim. Observe that in both 2-dicolourings α′ and β′, we are free to recolour any vertex of Xℓ ∪ Yℓ since there is no monochromatic arc from X to Y and both ⃗G⟨X⟩ and ⃗G⟨Y ⟩ are acyclic. Let n1 (respectively n2) be the number of vertices in Xℓ ∪ Yℓ that are coloured 1 (respectively 2) in both α′ and β′. Without loss of generality, assume that n1 ≤ n2. Then we set each vertex of Xℓ ∪ Yℓ to colour 2 in both α′ and β′. Let ˜α and ˜β be the resulting 2-dicolourings. Then s′α + s′β is exactly |Xℓ| + |Yℓ| + n1 − n2 ≤ |Xℓ| + |Yℓ|. ♦
Claim 8.3: There is a redicolouring sequence between ˜α and ˜β of length |X1r| + |Y1r|.
Proof of claim. By construction of ˜α and ˜β, we only have to exchange the colours of x and y for each arc xy ∈ M1r. Without loss of generality, we may assume that the colour of all vertices in Xℓ ∪ Yℓ in ˜α and ˜β is 1.
We first prove that, by construction, we can recolour any vertex of X1r ∪ Y1r from 1 to 2. Assume not; then there is such a vertex x ∈ X1r ∪ Y1r such that recolouring x from 1 to 2 creates a monochromatic directed cycle C. Since both ⃗G⟨X⟩ and ⃗G⟨Y ⟩ are acyclic, C must contain an arc of Mr. Since Mr does not contain any monochromatic arc in ˜α, this arc must be incident to x. Now observe that colour 2, in ˜α, induces an independent set on both ⃗G⟨X⟩ and ⃗G⟨Y ⟩. This implies that C must contain at least 2 arcs in Mr. This is a contradiction, since recolouring x creates exactly one monochromatic arc in Mr.
Then, for each arc xy ∈ M1r, we can first recolour the vertex coloured 1 and then the vertex coloured 2. Note that we maintain the invariant that colour 2 induces an independent set on both ⃗G⟨X⟩ and ⃗G⟨Y ⟩. We get a redicolouring sequence from ˜α to ˜β in exactly 2|M1r| = |X1r| + |Y1r| steps. ♦
Combining the three claims, we conclude that there exists a redicolouring sequence between α and β of length at most n.
In the following, when α is a dicolouring of a digraph D and H is a subdigraph of D, we denote by α|H the restriction of α to H. We now prove Corollary 9; let us restate it.
Corollary 9. Let ⃗G be an oriented graph of order n with ∆min(⃗G) = ∆ ≥ 1, and let k ≥ ∆ + 1. Then Dk(⃗G) is connected and has diameter at most 2∆n.
Proof. We will show the result by induction on ∆.
Assume first that ∆ = 1 and let k ≥ 2. Let α be any k-dicolouring of ⃗G and γ be any 2-dicolouring of ⃗G. To ensure that Dk(⃗G) is connected and has diameter at most 2n, it is sufficient to prove that there is a redicolouring sequence between α and γ of length at most n. Let H be the subdigraph induced by the set of vertices coloured 1 or 2 in α, and let J be V (⃗G) \ V (H). By Theorem 8, since ∆min(H) ≤ ∆min(⃗G) ≤ 1, we know that there exists a redicolouring sequence, in H, from α|H to γ|H of length at most |V (H)|. This redicolouring sequence extends to ⃗G because it only uses colours 1 and 2. Let α′ be the obtained dicolouring of ⃗G. Since α′(v) = γ(v) for every v ∈ V (H), we can recolour each vertex in J to its colour in γ. This shows that there is a redicolouring sequence between α and γ of length at most |V (H)| + |J| = |V (⃗G)|. This ends the case ∆ = 1.
Assume now that ∆ ≥ 2 and let k ≥ ∆ + 1. Let α and β be two k-dicolourings of ⃗G. By Corollary 7, we know that ⃗χ(⃗G) ≤ ∆ ≤ k − 1. We first show that there is a redicolouring sequence of length at most 2n from α to some (k − 1)-dicolouring γ of ⃗G. From α, whenever it is possible we recolour each vertex coloured 1, 2 or k with a colour of {3, . . . , k − 1} (when k = 3 we do nothing). Let ˜α be the obtained dicolouring, and let M be the set of vertices coloured in {3, . . . , k − 1} by ˜α (when k = 3, M is empty). We get that H = ⃗G − M satisfies ∆min(H) ≤ 2, since every vertex in H has at least one in-neighbour and one out-neighbour coloured c for every c ∈ {3, . . . , k − 1}. By Corollary 7, there exists a 2-dicolouring γ|H of H. From ˜α|H, whenever it is possible, we recolour a vertex coloured 1 or 2 to colour k. Let ˆα be the resulting dicolouring, and ˆH be the subdigraph of H induced by the vertices coloured 1 or 2 in ˆα. We get that ∆min( ˆH) ≤ 1, since every vertex in ˆH has, in ⃗G, at least one in-neighbour and one out-neighbour coloured c for every c ∈ {3, . . . , k}. In at most |V ( ˆH)| steps, using Theorem 8, we can recolour the vertices of V ( ˆH) to their colour in γ|H (using only colours 1 and 2). Then we can recolour each vertex coloured k to its colour in γ|H. This results in a redicolouring sequence of length at most 2n from α to some (k − 1)-dicolouring γ of ⃗G, since colour k is not used in the resulting dicolouring (recall that M is coloured with {3, . . . , k − 1}).
Now, from β, whenever it is possible we recolour each vertex to colour k. Let ˜β be the obtained k-dicolouring, and let N be the set of vertices coloured k in ˜β. We get that J = ⃗G − N satisfies ∆min(J) ≤ ∆ − 1. Thus, by induction, there exists a redicolouring sequence from ˜β|J to γ|J in at most 2(∆ − 1)|V (J)| steps (using only colours {1, . . . , k − 1}). Since N is coloured k in ˜β, this extends to a redicolouring sequence in ⃗G. Now, since γ does not use colour k, we can recolour each vertex in N to its colour in γ. We finally get a redicolouring sequence from β to γ of length at most 2(∆ − 1)n. Concatenating the redicolouring sequence from α to γ and the one from γ to β, we get a redicolouring sequence from α to β in at most 2∆n steps.
4 An analogue of Brooks' theorem for digraph redicolouring
Let us restate Theorem 10.
Theorem 10. Let D be a connected digraph with ∆max(D) = ∆ ≥ 3, k ≥ ∆ + 1, and α, β two k-dicolourings of D. Then at least one of the following holds:
• α is k-frozen, or
• β is k-frozen, or
• there is a redicolouring sequence of length at most c∆|V |^2 between α and β, where c∆ = O(∆^2) is a constant depending only on ∆.
A list-assignment of a digraph D is a function L which associates to every vertex a list of colours. An L-dicolouring of D is a dicolouring α where, for every vertex v of D, α(v) ∈ L(v). An L-redicolouring sequence is a redicolouring sequence γ1, . . . , γr such that, for every i ∈ {1, . . . , r}, γi is an L-dicolouring of D.
Lemma 13. Let D = (V, A) be a digraph and L be a list-assignment of D such that, for every vertex v ∈ V , |L(v)| ≥ dmax(v) + 1. Let α be an L-dicolouring of D. If u ∈ V is blocked in α, then for each colour c ∈ L(u) different from α(u), u has exactly one out-neighbour u+c and one in-neighbour u−c coloured c. Moreover, if u+c ̸= u−c, there must be a monochromatic directed path from u+c to u−c. In particular, u is not incident to a monochromatic arc.
Proof. Since u is blocked to its colour in α, for each colour c ∈ L(u) different from α(u), recolouring u to c must create a monochromatic directed cycle C. Let v be the out-neighbour of u in C and w be the in-neighbour of u in C. Then α(v) = α(w) = c, and there is a monochromatic directed path (in C) from v to w.
This implies that, for each colour c ∈ L(u) different from α(u), u has at least one out-neighbour and at least one in-neighbour coloured c. Since |L(u)| ≥ dmax(u) + 1, then |L(u)| = dmax(u) + 1, and u must have exactly one out-neighbour and exactly one in-neighbour coloured c. In particular, u cannot be incident to a monochromatic arc.
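The notion of a blocked vertex is also easy to test directly. The sketch below (illustrative only, not from the paper) checks, for each alternative colour, whether recolouring would create a monochromatic directed cycle; on a properly 3-dicoloured bidirected triangle every vertex is blocked, so that dicolouring is 3-frozen.

```python
def induced_has_cycle(arcs, cls):
    """Detect a directed cycle in the subdigraph induced by `cls` (DFS)."""
    cls = set(cls)
    succ = {v: [u for (w, u) in arcs if w == v and u in cls] for v in cls}
    state = {v: 0 for v in cls}  # 0 unvisited, 1 on stack, 2 done

    def dfs(v):
        state[v] = 1
        for w in succ[v]:
            if state[w] == 1 or (state[w] == 0 and dfs(w)):
                return True
        state[v] = 2
        return False

    return any(state[v] == 0 and dfs(v) for v in cls)

def is_blocked(vertices, arcs, col, u, k):
    """Is u blocked in `col`, i.e. does every recolouring of u among
    {1, ..., k} create a monochromatic directed cycle?"""
    for c in range(1, k + 1):
        if c == col[u]:
            continue
        trial = dict(col)
        trial[u] = c
        cls = [v for v in vertices if trial[v] == c]
        if not induced_has_cycle(arcs, cls):
            return False  # u can be validly recoloured to c
    return True

# Bidirected triangle, properly 3-dicoloured: every vertex is blocked,
# so this dicolouring is 3-frozen.
V = [0, 1, 2]
A = [(u, v) for u in V for v in V if u != v]
col = {0: 1, 1: 2, 2: 3}
print(all(is_blocked(V, A, col, u, 3) for u in V))  # True
```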
Lemma 14. Let D = (V, A) be a digraph such that, for every vertex v ∈ V , N +(v) \ N −(v) ̸= ∅ and N −(v) \ N +(v) ̸= ∅. Let L be a list-assignment of D such that, for every vertex v ∈ V , |L(v)| ≥ dmax(v) + 1. Then for any pair of L-dicolourings α, β of D, there is an L-redicolouring sequence of length at most (|V | + 3)|V |.
Proof. Let x = diff(α, β) = |{v ∈ V | α(v) ̸= β(v)}|. We will show by induction on x that there is an L-redicolouring sequence from α to β of length at most (|V | + 3)x. The result clearly holds for x = 0 (i.e. α = β). Let v ∈ V be such that α(v) ̸= β(v). We denote α(v) by c and β(v) by c′. If v can be recoloured to c′, then we recolour it and we get the result by induction.
Assume now that v cannot be recoloured to c′. Whenever v is contained in a directed cycle C of length at least 3 such that every vertex of C but v is coloured c′, we do the following: we choose a vertex w of C, different from v, such that β(w) ̸= c′. We know that such a w exists, for otherwise C would be a monochromatic directed cycle in β. Now, since w is incident to a monochromatic arc in C, and because |L(w)| ≥ dmax(w) + 1, by Lemma 13, we know that w can be recoloured to some colour different from c′. Thus we recolour w to this colour. Observe that this does not increase x.
After repeating this process, v may still not be recolourable to c′ because it is adjacent by a digon to some vertices coloured c′. We know that these vertices are not coloured c′ in β. Thus, whenever such a vertex can be recoloured, we recolour it. After this, let η be the obtained dicolouring. If v can be recoloured to c′ in η, we are done. Otherwise, there must be some vertices, blocked to colour c′ in η, adjacent to v by a digon. Let S be the set of such vertices. Observe that, by Lemma 13, for every vertex s ∈ S, c belongs to L(s), for otherwise s would not be blocked in η. We distinguish two cases, depending on the size of S.
• If |S| ≥ 2, then by Lemma 13, v can be recoloured to a colour c′′ different from both c and c′, because v is adjacent by digons to two neighbours coloured c′. Hence we can successively recolour v to c′′, and every vertex of S to c. This does not create any monochromatic directed cycle because, for each s ∈ S, since s is blocked in η, by Lemma 13 v must be the only neighbour of s coloured c in η. We can finally recolour v to c′.
• If |S| = 1, let w be the only vertex in S. If v can be recoloured to any colour (necessarily different from c′ since w is coloured c′), then we first recolour v, allowing us to recolour w to c, because v is the single neighbour of w coloured c in η by Lemma 13. We can finally recolour v to c′.
Assume then that v is blocked to colour c in η. Let us fix w+ ∈ N +(w) \ N −(w). Since w is blocked to c′ in η, by Lemma 13, there exists exactly one vertex w− ∈ N −(w) \ N +(w) such that η(w+) = η(w−) = c′′, and there must be a monochromatic directed path from w+ to w−.
Since v is blocked to colour c in η, either vw− /∈ A or w+v /∈ A: otherwise, by Lemma 13, there must be a monochromatic directed path from w− to w+ (which is blocking v in its colour); but since there is also a monochromatic directed path from w+ to w− (blocking w), there would be a monochromatic directed cycle, a contradiction (see Figure 2).
Figure 2: The vertices v, w, w+ and w−.
We distinguish the two possible cases:
– if vw− /∈ A, then we start by recolouring w− with a colour that does not appear in its in-neighbourhood. This is possible because w− has a monochromatic entering arc and because |L(w−)| ≥ dmax(w−) + 1. We then recolour w with c′′, since c′′ does not appear in its in-neighbourhood anymore (w− was the only such in-neighbour by Lemma 13). Next we recolour v with c′: this is possible because v does not have any out-neighbour coloured c′, since w was the only one by Lemma 13 and w− is not an out-neighbour of v. We can finally recolour w to colour c and w− to c′′. After all these operations, we have exchanged the colours of v and w.
– if w+v /∈ A, then we use a symmetric argument.
Observe that we found an L-redicolouring sequence from α to a dicolouring α′, in at most |V | + 3 steps, such that diff(α′, β) < diff(α, β). Thus by induction, we get an L-redicolouring sequence of length at most (|V | + 3)x between α and β.
We are now able to prove Theorem 10. The idea of the proof is to divide the digraph D into two parts. One of
them is bidirected, and we will use Theorem 2 as a black box on it. In the other part, we know that each vertex is
incident to at least two simple arcs, one leaving and one entering, and we will use Lemma 14 on it.

Proof of Theorem 10. Let D = (V, A) be a connected digraph with ∆max(D) = ∆, and let k ≥ ∆ + 1. Let α and β
be two k-dicolourings of D. Assume that neither α nor β is k-frozen.

We first make a simple observation. For any simple arc xy ∈ A, we may assume that N+(y) \ N−(y) ̸= ∅
and N−(x) \ N+(x) ̸= ∅. If this is not the case, then every directed cycle containing xy must contain a digon,
implying that the k-dicolouring graph of D is also the k-dicolouring graph of D \ {xy}. Then we may look for a
redicolouring sequence in D \ {xy}.

Let X = {v ∈ V | N+(v) = N−(v)} and Y = V \ X. Observe that D⟨X⟩ is bidirected, and thus the
dicolourings of D⟨X⟩ are exactly the colourings of UG(D⟨X⟩). We first show that α|D⟨X⟩ and β|D⟨X⟩ are not
frozen k-colourings of D⟨X⟩. If Y is empty, then D⟨X⟩ = D, and α|D⟨X⟩ and β|D⟨X⟩ are not k-frozen by
assumption. Otherwise, since D is connected, there exists x ∈ X such that, in D⟨X⟩, d+(x) = d−(x) ≤ ∆ − 1,
implying that x is not blocked in any dicolouring of D⟨X⟩. Thus, by Theorem 2, there is a redicolouring sequence
γ′1, . . . , γ′r in D⟨X⟩ from α|D⟨X⟩ to β|D⟨X⟩, where r ≤ c∆|X|², and c∆ = O(∆) is a constant depending on ∆.

We will show that, for each i ∈ {1, . . . , r − 1}, if γi is a k-dicolouring of D which agrees with γ′i on X, then
there exists a k-dicolouring γi+1 of D that agrees with γ′i+1 on X and a redicolouring sequence from γi to γi+1 of
length at most ∆ + 2.

Observe that α agrees with γ′1 on X. Now assume that there is such a γi, which agrees with γ′i on X, and
let vi ∈ X be the vertex for which γ′i(vi) ̸= γ′i+1(vi). We denote by c (respectively c′) the colour of vi in γ′i
(respectively γ′i+1). If recolouring vi to c′ in γi is valid, then we have the desired γi+1. Otherwise, we know that
vi is adjacent by a digon (since vi is only adjacent to digons) to some vertices (at most ∆) coloured c′ in Y.
Whenever such a vertex can be recoloured to a colour different from c′, we recolour it. Let ηi be the reached
k-dicolouring after these operations. If vi can be recoloured to c′ in ηi, we are done. If not, then the neighbours of
vi coloured c′ in Y are blocked to colour c′ in ηi. We denote by S the set of these neighbours. We distinguish two
cases:

• If |S| ≥ 2, then by Lemma 13, vi can be recoloured to a colour c′′, different from both c and c′, because vi
has two neighbours with the same colour. Then we successively recolour vi to c′′, and every vertex of S to
c. This does not create any monochromatic directed cycle because, by Lemma 13, for each s ∈ S, vi is the
only neighbour of s coloured c in ηi. We can finally recolour vi to c′ to reach the desired γi+1.

• If |S| = 1, let y be the only vertex in S. Since y belongs to Y and is blocked to its colour in ηi, by Lemma 13,
we know that y has an out-neighbour y+ ∈ N+(y) \ N−(y) and an in-neighbour y− ∈ N−(y) \ N+(y) such
that there is a monochromatic directed path from y+ to y−. Observe that both y+ and y− are recolourable
in ηi by Lemma 13, because they are incident to a monochromatic arc.

– If vi is not adjacent to y+, then we recolour y+ to any possible colour, and we recolour y to ηi(y+).
We can finally recolour vi to c′ to reach the desired γi+1.

– If vi is not adjacent to y−, then we recolour y− to any possible colour, and we recolour y to ηi(y−).
We can finally recolour vi to c′ to reach the desired γi+1.

– Finally, if vi is adjacent to both y+ and y−, then since ηi(y+) = ηi(y−), vi can be recoloured to a
colour c′′ different from c and c′. This allows us to recolour y to c, and we finally can recolour vi to c′
to reach the desired γi+1.

We have shown that there is a redicolouring sequence of length at most (∆ + 2)c∆n² from α to some α′ that
agrees with β on X. Now we define the list-assignment L: for each y ∈ Y,

L(y) = {1, . . . , k} \ {β(x) | x ∈ N(y) ∩ X}.

Observe that, for every y ∈ Y,

|L(y)| ≥ k − |N+(y) ∩ X| ≥ ∆ + 1 − (∆ − d+Y(y)) ≥ d+Y(y) + 1.

Symmetrically, we get |L(y)| ≥ d−Y(y) + 1. This implies, in D⟨Y⟩, |L(y)| ≥ dmax(y) + 1. Note also that
both α′|D⟨Y⟩ and β|D⟨Y⟩ are L-dicolourings of D⟨Y⟩. Note finally that, for each y ∈ Y, N+(y) \ N−(y) ̸= ∅
and N−(y) \ N+(y) ̸= ∅ by choice of X and Y and by the initial observation. By Lemma 14, there is an
L-redicolouring sequence in D⟨Y⟩ between α′|D⟨Y⟩ and β|D⟨Y⟩, with length at most (|Y| + 3)|Y|. By choice of L,
this extends directly to a redicolouring sequence from α′ to β on D of the same length.

The concatenation of the redicolouring sequence from α to α′ and the one from α′ to β leads to a redicolouring
sequence from α to β of length at most c′∆|V|², where c′∆ = O(∆²) is a constant depending on ∆.
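The elementary step used throughout this proof is recolouring a single vertex and checking that no monochromatic directed cycle appears. The following sketch (illustrative only, not part of the paper; the `out_adj` adjacency dictionary and the dictionary-based colouring are assumptions of this sketch) tests that step. It relies on the fact that recolouring only v can create a new monochromatic cycle only through v, so it suffices to search for a directed path from v back to v inside the colour class of the new colour.

```python
def creates_mono_cycle(out_adj, colouring, v, c):
    """Would recolouring vertex v to colour c create a monochromatic
    directed cycle?  out_adj maps each vertex to its out-neighbours,
    colouring maps each vertex to its current colour.  Any new
    monochromatic cycle must pass through v, so we look for a directed
    walk from v back to v using only vertices coloured c."""
    col = dict(colouring)
    col[v] = c
    stack, seen = [v], set()
    while stack:
        u = stack.pop()
        for w in out_adj.get(u, ()):
            if col.get(w) != c:
                continue          # arc leaves the colour class of c
            if w == v:
                return True       # closed monochromatic walk through v
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False
```

A redicolouring step from one k-dicolouring to the next is valid exactly when this check returns False for the recoloured vertex.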
Remark 15. If α is a k-frozen dicolouring of a digraph D, with k ≥ ∆max(D) + 1, then D must be bidirected.
Indeed, if D is not bidirected, then we choose a vertex v incident to a simple arc. If v cannot be recoloured in α,
by Lemma 13, since v is incident to a simple arc, there exists a colour c for which v has an out-neighbour w and
an in-neighbour u, both coloured c, such that u ̸= w and there is a monochromatic directed path from w to u. But
then, every vertex on this path is incident to a monochromatic arc, and it can be recoloured by Lemma 13. Thus,
α is not k-frozen. This shows that an obstruction of Theorem 10 is exactly the bidirected graph of an obstruction
of Theorem 2.
5 Further research

In this paper, we established some analogues of Brooks' Theorem for the dichromatic number of oriented graphs
and for digraph redicolouring. Many open questions arise; we detail a few of them.

Restricted to oriented graphs, McDiarmid and Mohar (see [11]) conjectured that the Directed Brooks' Theorem
can be improved to the following.

Conjecture 16 (McDiarmid and Mohar). Every oriented graph ⃗G has ⃗χ(⃗G) = O(∆max / log(∆max)).

Concerning digraph redicolouring, we believe that Corollary 9 and Theorem 10 can be improved. We pose the
following two conjectures.

Conjecture 17. There is an absolute constant c such that, for every integer k and every oriented graph ⃗G with
k ≥ ∆min(⃗G) + 1, the diameter of Dk(⃗G) is bounded by cn.

Conjecture 18. There is an absolute constant d such that, for every integer k and every digraph D with
k ≥ ∆max(D) + 1, the diameter of Dk(D) is bounded by dn².

Given an orientation ⃗G of a planar graph, a celebrated conjecture of Neumann-Lara [14] states that the
dichromatic number of ⃗G is at most 2. It is known that ⃗G must be 4-mixing because planar graphs are
5-degenerate [5]. It is also known that there exist 2-freezable orientations of planar graphs [5]. Thus the following
question, stated in [5], remains open:

Question 19. Is every oriented planar graph 3-mixing?

Acknowledgement

I am grateful to Frédéric Havet and Nicolas Nisse for stimulating discussions.
References

[1] Pierre Aboulker and Guillaume Aubian. Four proofs of the directed Brooks' Theorem. arXiv preprint
arXiv:2109.01600, 2021.

[2] Valentin Bartier. Combinatorial and Algorithmic Aspects of Reconfiguration. PhD thesis, Université Grenoble
Alpes, 2021.

[3] D. Bokal, G. Fijavz, M. Juvan, P.M. Kayll, and B. Mohar. The circular chromatic number of a digraph. J.
Graph Theory, 46(3):227–240, 2004.

[4] Marthe Bonamy, Matthew Johnson, Ioannis Lignos, Viresh Patel, and Daniël Paulusma. Reconfiguration
graphs for vertex colourings of chordal and chordal bipartite graphs. Journal of Combinatorial Optimization,
27(1):132–143, 2014.

[5] Nicolas Bousquet, Frédéric Havet, Nicolas Nisse, Lucas Picasarri-Arrieta, and Amadeus Reinald. Digraph
redicolouring. arXiv preprint arXiv:2301.03417, 2023.

[6] R. L. Brooks. On colouring the nodes of a network. Mathematical Proceedings of the Cambridge
Philosophical Society, 37(2):194–197, 1941.

[7] Luis Cereceda, Jan van den Heuvel, and Matthew Johnson. Mixing 3-colourings in bipartite graphs. European
Journal of Combinatorics, 30(7):1593–1606, 2009.

[8] Luis Cereceda, Jan van den Heuvel, and Matthew Johnson. Finding paths between 3-colorings. Journal of
Graph Theory, 67(1):69–82, 2011.

[9] Carl Feghali, Matthew Johnson, and Daniël Paulusma. A reconfigurations analogue of Brooks' Theorem and
its consequences. Journal of Graph Theory, 83(4):340–358, 2016.

[10] Ararat Harutyunyan and Bojan Mohar. Gallai's theorem for list coloring of digraphs. SIAM Journal on
Discrete Mathematics, 25(1):170–180, 2011.

[11] Ararat Harutyunyan and Bojan Mohar. Strengthened Brooks' theorem for digraphs of girth at least three.
The Electronic Journal of Combinatorics, 18(1), October 2011.

[12] Jan van den Heuvel. The complexity of change, pages 127–160. London Mathematical Society Lecture Note
Series. Cambridge University Press, 2013.

[13] Bojan Mohar. Eigenvalues and colorings of digraphs. Linear Algebra and its Applications,
432(9):2273–2277, 2010.

[14] Victor Neumann-Lara. The dichromatic number of a digraph. J. Combin. Theory Ser. B, 33:265–270, 1982.

[15] Naomi Nishimura. Introduction to reconfiguration. Algorithms, 11(4), 2018.
A9E4T4oBgHgl3EQfEwz_/content/tmp_files/load_file.txt ADDED

AdFJT4oBgHgl3EQfrS3C/content/tmp_files/2301.11608v1.pdf.txt ADDED
@@ -0,0 +1,1275 @@
A Multi-View Joint Learning Framework for Embedding Clinical Codes and Text
Using Graph Neural Networks

Lecheng Kong, Christopher King, Bradley Fritz, Yixin Chen
Washington University in St. Louis
One Brookings Drive
St. Louis, Missouri 63130, USA
{jerry.kong, christopherking, bafritz, ychen25}@wustl.edu

Abstract

Learning to represent free text is a core task in many clinical machine learning (ML) applications, as clinical text
contains observations and plans not otherwise available for inference. State-of-the-art methods use large language
models developed with immense computational resources and training data; however, applying these models is
challenging because of the highly varying syntax and vocabulary in clinical free text. Structured information such
as International Classification of Disease (ICD) codes often succinctly abstracts the most important facts of a
clinical encounter and yields good performance, but is often not as available as clinical text in real-world
scenarios. We propose a multi-view learning framework that jointly learns from codes and text to combine the
availability and forward-looking nature of text and the better performance of ICD codes. The learned text
embeddings can be used as inputs to predictive algorithms independent of the ICD codes during inference. Our
approach uses a Graph Neural Network (GNN) to process ICD codes, and a Bi-LSTM to process text. We apply
Deep Canonical Correlation Analysis (DCCA) to enforce the two views to learn a similar representation of each
patient. In experiments using planned surgical procedure text, our model outperforms BERT models fine-tuned to
clinical data, and in experiments using diverse text in MIMIC-III, our model is competitive with a fine-tuned
BERT at a tiny fraction of its computational effort.

We also find that the multi-view approach is beneficial for stabilizing inferences on codes that were unseen
during training, which is a real problem within highly detailed coding systems. We propose a labeling training
scheme in which we block part of the training codes during DCCA to improve the generalizability of the GNN to
unseen codes. In experiments with unseen codes, the proposed scheme consistently achieves superior
performance on code inference tasks.
1 Introduction

An electronic health record (EHR) stores a patient's comprehensive information within a healthcare system. It
provides rich contexts for evaluating the patient's status and future clinical plans. The information in an EHR can
be classified as structured or unstructured. Over the past decade, ML techniques have been widely applied to
uncover patterns behind structured information such as lab results (Yu, Beam, and Kohane 2018; Shickel et al.
2017; Goldstein et al. 2017). Recently, the surge of deep learning and large-scale pre-trained networks has
allowed unstructured data, mainly clinical notes, to be effectively used for learning (Huang, Altosaar, and
Ranganath 2019; Lee et al. 2020; Si et al. 2019). However, most methods focus on either structured or
unstructured data only.

Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

A particularly informative type of structured data is the International Classification of Diseases (ICD) codes.
ICD is an expert-identified hierarchical medical concept ontology used to systematically organize medical
concepts into categories and encode valuable domain knowledge about a patient's diseases and procedures.

Because ICD codes are highly specific and unambiguous, ML models that use ICD codes to predict procedure
outcomes often yield more accurate results than those that do not (Deschepper et al. 2019; Liu et al. 2020a).
However, the availability of ICD codes is not always guaranteed. For example, billing ICD codes are generated
after the clinical encounter, meaning that we cannot use the ICD codes to predict post-operative outcomes before
the surgery. A more subtle but crucial drawback of using ICD codes is that there might be unseen codes during
inference. When a future procedure is associated with a code outside the trained subset, most existing models
using procedure codes cannot accurately represent the case. Shifts in coding practices can also cause data during
inference to not overlap the trained set.

On the other hand, unstructured text data are readily and consistently available. Clinical notes are generated as
free text and potentially carry a doctor's complete insight about a patient's condition, including possible but not
known diagnoses and planned procedures. Unfortunately, clinical text is a challenging natural language source,
containing ambiguous abbreviations, input errors, and words and phrases rarely seen in pre-training sources. It is
consequently difficult to train a robust model that predicts surgery outcomes from the large volume of free text.
Most current models rely on large-scale pre-trained models (Huang, Altosaar, and Ranganath 2019; Lee et al.
2020). Such methods require a considerable corpus of relevant texts to fine-tune, which might not be available at
a particular facility. Hence, models that only consider clinical texts suffer from poor performance and incur huge
computation costs.

arXiv:2301.11608v1 [cs.CL] 27 Jan 2023

To overcome the problems of models using only text or codes, we propose to learn from the ICD codes and
clinical text in a multi-view joint learning framework. We observe that despite having different formats, the text
and code data are complementary and broadly describe the same underlying facts about the patient. This enables
each learner (view) to use the other view's representation as a regularization function where less information is
present. Under our framework, even when one view is missing, the other view can perform inference
independently and maintain the effective data representation learned from the different perspectives, which
allows us to train reliable text models without the vast corpus and computation cost required by other text-only
models.

Specifically, we make the following contributions in this paper. (1) We propose a multi-view learning framework
using Deep Canonical Correlation Analysis (DCCA) for ICD codes and clinical notes. (2) We propose a novel
tree-like structure to encode ICD codes as a relational graph and apply a Relational Graph Convolution Network
(RGCN) to embed ICD codes. (3) We use a two-stage Bi-LSTM to encode lengthy clinical texts. (4) To solve the
unseen code prediction problem, we propose a labeling training scheme in which we simulate unseen node
prediction during training. Combined with the DCCA optimization process, the training scheme teaches the
RGCN to discriminate between unseen and seen codes during inference and achieves better performance than
plain RGCN.
2 Related Works

Deep learning on clinical notes. Many works focus on applying deep learning to learn representations of clinical
texts for downstream tasks. Early work (Boag et al. 2018) compared the performance of classic NLP methods,
including bag-of-words (Zhang, Jin, and Zhou 2010), Word2Vec (Mikolov et al. 2013), and
Long-Short-Term-Memory (LSTM) (Hochreiter and Schmidhuber 1997), on clinical prediction tasks. These
methods solely learn from the training text, but as clinical texts are very noisy, they either tend to overfit the data
or fail to uncover valuable patterns behind the text. Inspired by large-scale pre-trained language models such as
BERT (Devlin et al. 2018), a series of works developed transformer models pre-trained on medical notes,
including ClinicalBERT (Huang, Altosaar, and Ranganath 2019), BioBERT (Lee et al. 2020), and PubBERT
(Alsentzer et al. 2019). These models fine-tune general language models on a large corpus of clinical texts and
achieve superior performance. Despite the general nature of these models, the fine-tuning portion may not
translate well to new settings. For example, PubBERT is trained on the clinical texts of a single tertiary hospital,
and the colloquial terms used and procedures typically performed may not map to different hospitals. BioBERT
is trained on PubMed abstracts and articles, which is also likely poorly representative of the topics and terms
used to, for example, describe a planned surgery.

Some other models propose joint learning from clinical text and structured data (e.g., measured blood pressure
and procedure codes) (Wei et al. 2016; Zhang et al. 2020a). Since the structured data are less noisy, these models
can produce better and more stable results. However, most assume the co-existence of text and structured data at
inference time, while procedure codes for a patient are frequently incomplete until much later.

Machine learning and procedure codes. Procedure codes are a handy resource for EHR data mining. Most works
focus on automatic coding, using machine learning models to predict a patient's diagnostic codes from clinical
notes (Pascual, Luck, and Wattenhofer 2021; Li and Yu 2020). Some other works directly use the billing codes to
predict clinical outcomes (Liu et al. 2020a; Deschepper et al. 2019), whereas our work focuses on using the high
correlation of code and text data to augment the performance of each. Most of these works exploit the code
hierarchies by human-defined logic based on domain knowledge. In contrast, our proposed framework uses a
GNN and can encode arbitrary relations between codes.

Graph neural networks. A series of works (Xu et al. 2018; Gilmer et al. 2017) summarize GNN structures in
which each node iteratively aggregates neighbor nodes' embeddings and summarizes information in a
neighborhood. The resulting node embeddings can be used for downstream tasks. RGCN (Schlichtkrull et al.
2018) generalizes GNN to heterogeneous graphs where nodes and edges can have different types. Our model
utilizes such heterogeneous properties on our proposed hierarchy graph encoding. Some works (Liu et al. 2020b;
Choi et al. 2020) applied GNN to model interaction between EHRs, whereas our model uses GNN on the code
hierarchy.

Privileged information. Our approach is related to the Learning Under Privileged Information (LUPI) (Vapnik
and Vashist 2009) paradigm, where the privileged information is only accessible during training (in this case,
billing code data). Many works have applied LUPI to other fields like computer vision (Lambert, Sener, and
Savarese 2018) and metric learning (Fouad et al. 2013).
3 Methods

Admissions with ICD codes and clinical text can be represented as D = {(C1, A1, y1), ..., (Cn, An, yn)}, where
Ci is a set of ICD codes for admission i, Ai is a set of clinical texts, and yi is the desired task label (e.g.,
mortality, re-admission, etc.). The ultimate goal is to minimize task-appropriate losses L defined as:

    min_{fC, gC} Σ_i L(fC(gC(Ci)), yi)    (1)

and

    min_{fA, gA} Σ_i L(fA(gA(Ai)), yi),    (2)

where gC and gA embed codes and texts to vector representations respectively, and fC and fA map
representations to the task labels. Note that (gC, fC) and (gA, fA) should operate independently during inference,
meaning that even when one type of data is missing, we can still make accurate predictions.
In this section, we first propose a novel ICD ontology graph encoding method and describe how we use a Graph
Neural Network (GNN) to parameterize gC. We then describe the two-stage Bi-LSTM (gA) used to embed
lengthy clinical texts. We then describe how to use DCCA on the representations from gC and gA to generate
representations that are less noisy and more informative, so that the downstream models fC and fA are able to
make accurate predictions. Figure 1 shows the overall architecture of our multi-view joint learning framework.

Figure 1: Overall multi-view joint learning framework. Blue boxes/arrows represent the text prediction pipeline,
and green represents the code prediction pipeline. Dashed boxes and arrows denote processes only happening
during training. By removing the dashed parts, the text and code pipelines can predict tasks independently.
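As a rough intuition for the DCCA coupling between the two views: the quantity being maximized is a deep, multi-dimensional generalization of ordinary correlation between the code-view and text-view representations of the same admissions. A minimal one-dimensional sketch (illustrative only, not the paper's actual DCCA objective):

```python
def correlation(u, v):
    """Pearson correlation between two equal-length sequences of scalars.

    DCCA maximizes a deep, multi-dimensional analogue of this quantity
    between code-view embeddings gC(Ci) and text-view embeddings gA(Ai)
    computed for the same training admissions."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var_u = sum((a - mu) ** 2 for a in u)
    var_v = sum((b - mv) ** 2 for b in v)
    return cov / (var_u * var_v) ** 0.5
```

Two views that encode the same underlying facts about a patient should produce highly correlated embeddings; this shared structure is what lets either pipeline run alone at inference time.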
3.1
|
239 |
+
ICD Ontology as Graphs
|
240 |
+
The ICD ontology has a hierarchical scheme. We can rep-
|
241 |
+
resent it as a tree graph as shown in Figure 2, where each
|
242 |
+
node is a medical concept and a node’s children are finer di-
|
243 |
+
visions of the concept. All top-level nodes are connected to a
|
244 |
+
root node. In this tree graph, only the leaf nodes correspond
|
245 |
+
to observable codes in the coding system, all other nodes are
|
246 |
+
the hierarchy of the ontology. This representation is widely
|
247 |
+
adopted by many machine learning systems (Zhang et al.
|
248 |
+
2020b; Li, Ma, and Gao 2021) as a refinement of the earlier
|
249 |
+
approach of grouping together all codes at the top level of the
|
250 |
+
hierarchy. A tree graph is ideal for algorithms based on mes-
|
251 |
+
sage passing. It allows pooling of information within disjoint
|
252 |
+
groups, and encodes a compact set of neighbors. However,
|
253 |
+
it (1) ignores the granularity of different levels of classifica-
|
254 |
+
tion, and (2) cannot encode similarities of nodes that are dis-
|
255 |
+
tant from each other. This latter point comes about because
|
256 |
+
a tree system may split on factors that are not the most rele-
|
257 |
+
Figure 2: Top: Conventional encoding of ICD ontology. Bot-
|
258 |
+
tom Left: ICD ontology encoded with relations. Relation
|
259 |
+
types for different levels are denoted by different colors.
|
260 |
+
Bottom Right: Jump connection creates additional edges to
|
261 |
+
leaf nodes’ predecessors, denoted by dashed color lines.
|
262 |
+
vant for a given task, such as the same procedure in an arm
|
263 |
+
versus a leg, or because cross-system concepts are empiri-
|
264 |
+
cally very correlated in medical syndromes, such as kidney
|
265 |
+
failure and certain endocrine disorders.
|
To overcome the aforementioned problems, we propose to augment the tree graph with edge types and jump connections. Unlike conventional tree graphs, where all edges have the same edge type, we use different edge types for connections between different levels of the tree graph, as shown in the bottom left of Figure 2. For example, ICD-10 codes have seven characters and hence eight levels in the graph (including the root level). The edges between the root node and its children have edge Type 1, and the edges between the seventh level and the last level (the actual code level) have edge Type 7. Different edge types not only encode whether two procedures are related but also encode the level of similarity between codes.

With multiple edge types introduced to the graph, we are able to further extend the graph structure with jump connections. For each leaf node, we add one additional edge between the node and each of its predecessors up to the root node, as shown in the bottom right of Figure 2. The edge type depends on the level at which the predecessor resides. For example, in the ICD-10 tree graph, a leaf node will have seven additional connections to its predecessors. Its edge to the root node will have Type 8 (the first seven types are used to represent connections between levels), and its edge to the third-level node will have Type 10. Jump connections significantly increase the connectivity of the graph. Meanwhile, we still maintain the hierarchical information of the original tree graph because the jump connections are represented by a different set of edge types. Using jump connections helps uncover relationships between codes that are not present in the ontology. For example, the relationship between anemia and renal failure can be learned using jump connections even though these codes diverge at the root node in ICD-9 and ICD-10. Moreover, GNNs suffer from over-smoothing, where all node representations converge to the same value when the GNN has too many layers (Li, Han, and Wu 2018). Without jump connections, the maximal distance between one leaf node and another is twice the number of levels in the graph. To capture the connection between such nodes, we would need a GNN with that many layers, which is computationally expensive and prone to over-smoothing. Jump connections reduce the distance between any two leaf nodes to two, which ensures that the GNN is able to embed any correlation between two nodes. We discuss this in more detail in Section 3.2.
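As a concrete illustration of the construction above, the following sketch builds the level-typed tree edges and the jump connections for a toy three-level ontology. The node names, dictionary layout, and type numbering beyond the tree levels are our own illustrative assumptions, not the authors' code.

```python
# Sketch: relation-augmented ICD tree with jump connections (toy hierarchy).

def build_edges(parent, depth, edges):
    """Recursively add tree edges; the edge between level `depth` and
    `depth + 1` receives relation type `depth + 1` (root is level 0)."""
    for child in parent.get("children", []):
        edges.append((parent["name"], child["name"], depth + 1))
        build_edges(child, depth + 1, edges)

def add_jump_connections(node, ancestors, num_levels, edges):
    """Connect every leaf to each of its predecessors; jump-edge types
    start after the tree-edge types, one per predecessor level."""
    if not node.get("children"):
        for level, anc in enumerate(ancestors):
            edges.append((node["name"], anc["name"], num_levels + level + 1))
        return
    for child in node["children"]:
        add_jump_connections(child, ancestors + [node], num_levels, edges)

# A toy 3-level ontology: root -> chapter -> code.
root = {"name": "root", "children": [
    {"name": "A", "children": [{"name": "A1"}, {"name": "A2"}]},
    {"name": "B", "children": [{"name": "B1"}]},
]}

edges = []
build_edges(root, 0, edges)               # tree edges, types 1..2
add_jump_connections(root, [], 2, edges)  # jump edges, types 3..4
```

With two tree levels, each leaf gains a type-3 jump edge to the root and a type-4 jump edge to its parent, mirroring the Type-8/Type-10 numbering described for ICD-10.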
3.2 Embedding ICD Codes using GNN

We use a GNN to embed the medical concepts in the ICD ontology. Let G = {V, E, R} be a graph, where V is its set of vertices (medical concepts in the ICD graph), E ⊆ {V × V} is its set of edges (connecting each medical concept to its sub-classes), and R is the set of edge types in the graph (edges at different levels and jump connections). As each ICD code corresponds to one node in the graph, we use "code" and "node" interchangeably.

We adopt RGCN (Schlichtkrull et al. 2018), which iteratively updates a node's embedding from its neighbor nodes. Specifically, the kth layer of RGCN on node u ∈ V is:

    h_u^{(k+1)} = \sigma\left( \sum_{r \in R} \sum_{v \in N_u^r} \frac{1}{c_{u,r}} W_r^{(k)} h_v^{(k)} + W^{(k)} h_u^{(k)} \right)    (3)

where N_u^r is the set of neighbors of u that connect to u by relation r, h_u^{(k)} is the embedding of node u after k GNN layers, h_u^{(0)} is a randomly initialized trainable embedding, W_r^{(k)} is a linear transformation on the embeddings of nodes in N_u^r, W^{(k)} updates the embedding of u, and \sigma is a nonlinear activation function. The normalization factor is c_{u,r} = |N_u^r|.

After T iterations, h_u^{(T)} can be used to learn downstream tasks. Since a patient can have a set of codes, C_i = {v_{i1}, v_{i2}, v_{i3}, ...} ⊆ V, we use sum and max pooling to summarize C_i in an embedding function g_C:

    g_C(C_i) = \sum_{v \in C_i} h_v^{(T)} \oplus \max(\{ h_v^{(T)} \mid v \in C_i \}),    (4)

where max is the element-wise maximum and ⊕ represents vector concatenation. Summation more accurately summarizes the codes' information, while maximization provides regularization and stability in DCCA, which we will discuss in Section 3.4.
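Equations (3) and (4) can be sketched in a few lines of NumPy. The toy graph, tensor shapes, and the choice of ReLU for \sigma are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rgcn_layer(h, neighbors, W_rel, W_self):
    """One RGCN layer (Eq. 3). h: (n, d) node embeddings;
    neighbors[r][u] lists u's neighbors via relation r."""
    out = h @ W_self  # self term W^{(k)} h_u^{(k)}
    for r, W_r in enumerate(W_rel):
        for u, nbrs in neighbors[r].items():
            if nbrs:  # normalize by c_{u,r} = |N_u^r|
                out[u] += (h[list(nbrs)].sum(axis=0) @ W_r) / len(nbrs)
    return np.maximum(out, 0.0)  # ReLU as the nonlinearity sigma

def pool_codes(h, code_ids):
    """g_C (Eq. 4): concatenate sum pooling and element-wise max pooling."""
    hs = h[list(code_ids)]
    return np.concatenate([hs.sum(axis=0), hs.max(axis=0)])

rng = np.random.default_rng(0)
n, d = 5, 4
h0 = rng.normal(size=(n, d))                     # trainable h_u^{(0)}
W_rel = [rng.normal(size=(d, d)) for _ in range(2)]  # two relation types
W_self = rng.normal(size=(d, d))
neighbors = [{0: [1, 2]}, {3: [4]}]              # relation -> node -> neighbors

h1 = rgcn_layer(h0, neighbors, W_rel, W_self)
z = pool_codes(h1, [0, 3])                       # embedding of code set {0, 3}
```

The pooled vector has dimension 2d, one half from the sum term and one half from the max term.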
Training the RGCN embeds the ICD codes into vectors based on the defined ontology. Nodes that are close together in the graph will be assigned similar embeddings because of their similar neighborhoods. Moreover, distant nodes that appear together frequently in the health record can also be assigned correlated embeddings, because the jump connections keep the maximal distance between two nodes at two. Consider a set of codes C = {u, v}: because of the summation in the code pooling, using a 2-layer RGCN, we will have non-zero gradients of h_u^{(T)} and h_v^{(T)} with respect to h_v^{(0)} and h_u^{(0)}, respectively, which connects the embeddings of u and v. In contrast, applying RGCN on a graph without jump connections will result in zero gradients when the distance between u and v is greater than two.
3.3 Embedding Clinical Notes using Bi-LSTM

Patients can have different numbers of clinical texts in each encounter. Where applicable, we sort the texts in an encounter in ascending order by time and obtain a sequence of texts A_i = (a_{i1}, a_{i2}, ..., a_{in}). In our examples, we concatenate the texts into a single document H_i, i.e., H_i = \mathrm{CAT}(A_i) = \bigoplus_{j=1}^{n} a_{ij}. We leave to future work the possibility of further modeling the collection.

The concatenated text can be very lengthy, with over ten thousand word tokens, and RNNs suffer from vanishing gradients even with LSTM-type modifications. While attention mechanisms are effective for arbitrarily long-range dependencies, they require large sample sizes and expensive computational resources. Hence, following a previously successful approach (Huang et al. 2019), we adopt a two-stage model that stacks a low-frequency RNN on a local RNN. Given H_i, we first split it into blocks of equal size b, H_i = {H_{i1}, H_{i2}, ..., H_{iK}}. The last block H_{iK} is padded to length b. The two-stage model first generates block-wise text embeddings by

    l_{H_{ik}} = \mathrm{LSTM}(\{ w(H_{ik1}), w(H_{ik2}), ..., w(H_{ikb}) \}),    (5)

where w(·) is a Word2Vec (Mikolov et al. 2013) trainable embedding function. The representation of A_i is then given by

    g_A(A_i) = \mathrm{LSTM}(\{ l_{H_{i1}}, ..., l_{H_{iK}} \}).    (6)

The two-stage learning scheme minimizes the effect of vanishing gradients while maintaining the temporal order of the text.
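The block-splitting step that feeds the two-stage model can be sketched as follows. The whitespace tokenizer and pad symbol are placeholders of our own; the two LSTM stages themselves are omitted.

```python
# Sketch: concatenate a patient's notes, split into fixed-size blocks of
# length b, and pad the last block (Section 3.3). Tokens/PAD are placeholders.

def make_blocks(texts, b, pad="<pad>"):
    tokens = [tok for t in texts for tok in t.split()]  # CAT over all notes
    blocks = [tokens[i:i + b] for i in range(0, len(tokens), b)]
    if blocks and len(blocks[-1]) < b:
        blocks[-1] = blocks[-1] + [pad] * (b - len(blocks[-1]))  # pad H_iK
    return blocks

blocks = make_blocks(
    ["posterior cervical decompression", "thoracic laminectomy"], b=4)
# First-stage LSTM would embed each block; the second-stage LSTM then runs
# over the sequence of block embeddings.
```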
3.4 DCCA between Graph and Text Data

As previously mentioned, ICD codes may not be available at the time when models would be most useful, but they are structured and easier to analyze, while the clinical text is readily available but very noisy. Despite the different data formats, they usually describe the same information: the main diagnoses and treatments of an encounter. Borrowing ideas from multi-view learning, we can use them to supplement each other. Many existing multi-view learning methods require the presence of both views during inference and cannot adapt to the applications we envision. Specifically, we use DCCA (Andrew et al. 2013; Wang et al. 2015) on g_A(A_i) and g_C(C_i) to learn a joint representation. DCCA solves the following optimization problem:

    \max_{g_C, g_A, U, V} \frac{1}{N} \mathrm{tr}(U^T M_C^T M_A V)
    \text{s.t.} \quad U^T \left( \tfrac{1}{N} M_C^T M_C + r_C I \right) U = I,
    \quad V^T \left( \tfrac{1}{N} M_A^T M_A + r_A I \right) V = I,
    \quad u_i^T M_C^T M_A v_j = 0, \quad \forall i \neq j, \; 1 \leq i, j \leq L,
    \quad M_C = \mathrm{stack}\{ g_C(C_i) \mid \forall i \}, \quad M_A = \mathrm{stack}\{ g_A(A_i) \mid \forall i \},    (7)

where M_C and M_A are the matrices stacked from the vector representations of codes and texts, and (r_C, r_A) > 0 are regularization parameters. U = [u_1, ..., u_L] and V = [v_1, ..., v_L] map the GNN and Bi-LSTM outputs to maximally correlated embeddings, and L is a hyper-parameter controlling the number of correlated dimensions. We use g_C(C_i)U and g_A(A_i)V as the final embeddings of codes and texts. By maximizing their correlation, we force the weak learner (usually the LSTM) to learn a representation similar to that of the strong learner (usually the GNN) and to filter out inputs unrelated to the structured data. Hence, when a record's codes can yield correct results, its text embedding is highly correlated with that of the codes, and the text should also be likely to produce correct predictions.
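For intuition, the objective in (7) in its classical linear form (CCA) has a closed-form solution via per-view whitening and an SVD of the cross-covariance; DCCA applies the same objective to the GNN/Bi-LSTM outputs. The sketch below is this linear illustration on synthetic data, not the deep variant used in the paper; the regularizers and data sizes are arbitrary.

```python
import numpy as np

def cca_projections(MC, MA, L, rC=1e-3, rA=1e-3):
    """Linear CCA mirroring Eq. 7: returns projections U, V and the
    top-L canonical correlations."""
    N = MC.shape[0]
    MC = MC - MC.mean(axis=0)  # center each view
    MA = MA - MA.mean(axis=0)
    SCC = MC.T @ MC / N + rC * np.eye(MC.shape[1])
    SAA = MA.T @ MA / N + rA * np.eye(MA.shape[1])
    SCA = MC.T @ MA / N

    def inv_sqrt(S):  # symmetric inverse square root via eigendecomposition
        w, Q = np.linalg.eigh(S)
        return Q @ np.diag(w ** -0.5) @ Q.T

    T = inv_sqrt(SCC) @ SCA @ inv_sqrt(SAA)  # whitened cross-covariance
    Uw, s, Vwt = np.linalg.svd(T)
    U = inv_sqrt(SCC) @ Uw[:, :L]  # code-view projection matrix
    V = inv_sqrt(SAA) @ Vwt[:L].T  # text-view projection matrix
    return U, V, s[:L]             # s holds the canonical correlations

# Two synthetic views sharing a 3-dim latent signal plus small noise.
rng = np.random.default_rng(1)
z = rng.normal(size=(200, 3))
MC = z @ rng.normal(size=(3, 6)) + 0.1 * rng.normal(size=(200, 6))
MA = z @ rng.normal(size=(3, 5)) + 0.1 * rng.normal(size=(200, 5))
U, V, corr = cca_projections(MC, MA, L=3)
```

Because the two views share a strong latent signal, the leading canonical correlations are close to one; with uncorrelated views they would be near zero.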
During development, we found that a batch of ICD data often contains many repeated codes with the same embedding, and that sum pooling alone tended to yield rank-deficient embedding matrices M_C and M_A, which causes instability in solving the optimization problem. The nonlinear max pooling term helps prevent this.

The above optimization problem suggests full-batch training. However, the computation graph would be too large for the text and code data. Following (Wang et al. 2015), we use large mini-batches to train the model; our experimental results indicate that they sufficiently represent the overall distribution. After training, we stack M_C and M_A again from the outputs on all data and obtain U and V from equation (7) as fixed projection matrices.

After obtaining the projection matrices and embedding models, we attach two MLPs (f_A and f_C) to the embedding models as classifiers and train/fine-tune f_A (f_C) and g_A (g_C) together in an end-to-end fashion on the learning task using the loss functions in (1) and (2).
4 Predicting Unseen Codes

In the previous section, we discussed the formulation of the ICD ontology and how we can use DCCA to generate embeddings that share representations across views. In this section, we demonstrate another use case for DCCA-regularized embeddings. In real-world settings, the set of codes that researchers observe in training is usually a small subset of the entire ICD ontology. In part, this is due to the extreme specificity of some ontologies, with ICD-10-PCS having 87,000 distinct procedures and ICD-10-CM 68,000 diagnostic possibilities, before considering that some codes represent a further modification of another entity. Even in large training samples, some codes will likely be seen zero times or only a small number of times. Traditional models using independent code embeddings are expected to function poorly on rare codes and to produce arbitrary output on previously unseen codes, even if similar entities are contained in the training data.

Our proposed model and the graph-embedded hierarchy can naturally address the above challenge. Two of its features enable predictions on novel codes at inference:

• Relational embedding. By embedding the novel code in the ontology graph, we are able to exploit the representations of its neighbors. For example, a rare diagnostic procedure's embedding is highly influenced by other procedures that are nearby in the ontology.

• Jump connection. While other methods also exploit the proximity defined by the hierarchy, as we suggested above, codes can be highly correlated yet remain distant in the graph. Jump connections increase the graph connectivity; hence, our model can search the whole hierarchy for potential connections to the missing code. Because the connections across different levels are assigned different relation types, our GNN can also differentiate the likelihood of connections across different levels and distances.

Meanwhile, during inference, a potential problem is that the model does not automatically differentiate between novel and previously seen codes. Because the model never uses novel codes to generate any g_C(C_i), the embeddings of the seen and novel nodes undergo different gradient update processes and hence come from different distributions. Nevertheless, during inference, the model will treat them as if they were from the same distribution. However, such transferability and credibility of novel node embeddings are not guaranteed, and applying them homogeneously may result in untrustworthy predictions.

Hence, we propose a labeling training scheme to teach the model how to handle novel nodes during inference. Let G = {V, E, R} be the ICD graph and U be the set of unique nodes in the training set, U ⊆ V. We select a random subset U_s of U to form the seen nodes during training, and treat U_u = V \ U_s as unseen nodes. We augment the initial node embeddings with 1-0 labels; formally,

    h_u^{0+} = h_u^0 \oplus 1 \quad \forall u \in U_s,
    h_v^{0+} = h_v^0 \oplus 0 \quad \forall v \in V \setminus U_s.    (8)

Note that we still use h_u^0 as the trainable node embedding, while the input to the RGCN is augmented to h_u^{0+}. We further extract the data that contain only seen nodes to form the seen data: D_s = {(C_i, A_i, y_i) | c ∈ U_s ∀c ∈ C_i}.
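The augmentation in equation (8) is a simple concatenation of an indicator flag onto each node's trainable embedding. A minimal sketch follows; the sizes and the seen-node set are invented for illustration.

```python
import numpy as np

def augment(h0, seen):
    """Eq. 8: append 1 to embeddings of seen nodes, 0 otherwise."""
    flags = np.array([[1.0] if u in seen else [0.0] for u in range(len(h0))])
    return np.concatenate([h0, flags], axis=1)  # h0+ = h0 (+) 1 or (+) 0

h0 = np.zeros((4, 3))            # trainable embeddings for 4 nodes
h0_plus = augment(h0, seen={0, 2})  # input fed to the RGCN
```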
We again use DCCA on D_s to maximize the correlation between the text representation and the code representation. After obtaining the projection matrix, we train on the entire dataset D to minimize the prediction loss. Note that D contains nodes that do not appear in the DCCA process and are labeled differently from the seen nodes. The different labels allow the RGCN to tell whether a node was unseen during the DCCA process. If unseen nodes hurt the prediction, this will be reflected in the prediction loss. Intuitively, if unseen nodes are less credible, data with more 0-labeled nodes will
                       ---------------- Local Data ----------------   ------ MIMIC-III ------
Method                 DEL          DIA          TH           D30          MORT         R30
Corr.                  17.3 ± 1.3   16.8 ± 2.6   16.8 ± 2.6   16.8 ± 2.6   10.4 ± 1.7   12.7 ± 2.3
BERT                   65.2 ± 0.6   76.3 ± 1.2   62.1 ± 1.1   74.6 ± 1.8   88.4 ± 1.8   69.2 ± 1.9
ClinicalBERT           66.3 ± 0.5   77.0 ± 0.9   62.7 ± 0.8   74.9 ± 1.5   90.5 ± 1.3   71.4 ± 1.8
Bi-LSTM                64.6 ± 0.2   76.8 ± 1.8   61.3 ± 1.2   73.9 ± 1.9   87.3 ± 1.7   68.4 ± 2.6
DCCA+Bi-LSTM           66.9 ± 0.8   78.9 ± 1.1   61.6 ± 1.1   76.5 ± 1.3   87.2 ± 1.6   71.1 ± 1.4
RGCN                   76.4 ± 1.2   97.2 ± 1.1   75.9 ± 3.0   91.5 ± 1.0   90.4 ± 1.0   68.6 ± 1.4
DCCA+RGCN              78.9 ± 1.3   98.4 ± 0.9   77.6 ± 1.2   91.5 ± 1.3   90.5 ± 1.5   67.2 ± 2.5
RGCN+Bi-LSTM           79.5 ± 1.7   97.1 ± 1.4   75.6 ± 0.8   90.8 ± 0.8   91.3 ± 1.2   69.5 ± 1.2
DCCA+RGCN+Bi-LSTM      78.7 ± 2.3   98.2 ± 1.3   77.1 ± 2.9   91.0 ± 0.9   90.1 ± 1.3   71.2 ± 1.0

Table 1: DCCA Joint Learning and baseline AUROC (%). Top 4 lines use clinical notes only during inference, middle 2 ICD codes only, and bottom 2 both. Corr = sum of correlation of latent representations over 20 dimensions.
                       ---------------- Local Data ----------------   ------ MIMIC-III ------
Method                 DEL          DIA           TH           D30          MORT         R30
RGCN                   74.6 ± 1.2   87.3 ± 13.1   67.4 ± 6.9   82.8 ± 3.7   84.5 ± 3.6   60.4 ± 2.8
RGCN+Labeling          73.2 ± 0.6   87.4 ± 14.9   68.5 ± 3.4   83.8 ± 2.1   85.7 ± 3.6   61.3 ± 2.3
DCCA+RGCN              74.9 ± 1.0   89.1 ± 12.5   70.8 ± 0.9   83.5 ± 1.9   85.1 ± 4.1   61.7 ± 2.6
DCCA+RGCN+Labeling     75.3 ± 1.1   95.4 ± 0.7    70.6 ± 3.2   84.4 ± 1.4   86.4 ± 4.2   63.4 ± 2.8

Table 2: Ablation Study of the Labeling Training Scheme under the Unseen Code Setting in AUROC (%).
have poor prediction results; the GNN can capture this characteristic and reflect it in the prediction by assigning less positive/negative scores to queries with more 0-labeled nodes. The labeling training scheme essentially blocks a part of the training codes during DCCA and thus obtains embeddings for U_s and U_u from different distributions. We then train on the entire training dataset so that the model learns to handle seen and unseen codes heterogeneously. This setup mimics the actual inference scenario. Note that, despite being different, the distributions of seen and unseen node embeddings can be similar and overlapping; thus, the additional 1-0 labeling is necessary to differentiate them.
5 Experimental Results

Datasets. We use two datasets to evaluate the performance of our framework. Proprietary dataset: This dataset contains medical records of 38,551 admissions at the local hospital from 2018 to 2021. Each entry is associated with a free-text procedural description and a set of ICD-10 procedure codes. We aim to use our framework to predict a set of post-operative outcomes, including delirium (DEL), dialysis (DIA), troponin high (TH), and death in 30 days (D30). MIMIC-III dataset (Johnson et al. 2016): This dataset contains medical records of 58,976 unique ICU hospital admissions from 38,597 patients at the Beth Israel Deaconess Medical Center between 2001 and 2012. Each admission record is associated with a set of ICD-9 diagnosis codes and multiple clinical notes from different sources, including case management, consult, ECG, discharge summary, general nursing, etc. We aim to predict two outcomes from the codes and texts: (1) In-hospital mortality (MORT). We use admissions with hospital_expire_flag=1 in the MIMIC-III dataset as the positive data and sample the same number of negative data to form the final dataset. All clinical notes generated on the last day of admission are filtered out to avoid directly mentioning the outcome. We use all clinical notes ordered by time and take the first 2,500 word tokens as the input text. (2) 30-day readmission (R30). Following (Huang, Altosaar, and Ranganath 2019), we label admissions where a patient is readmitted within 30 days as positive and sample an equal number of negative admissions. Newborn and death admissions are filtered out. We only use clinical notes of type Discharge Summary and take the first 2,500 word tokens as the input text. Sample sizes can be found in Table 3.
Effectiveness of DCCA training. We split the dataset with a train/validation/test ratio of 8:1:1 and use 5-fold cross-validation to evaluate our model. The GNN and Bi-LSTM are optimized in the DCCA process using the training set. The checkpoint model with the best validation correlation is picked to compute the projection matrix, using only the training dataset. Then we attach an MLP head to the target prediction model (either the GNN or the Bi-LSTM) and fine-tune the model in an end-to-end fashion to minimize the prediction loss.
780 |
+
trained models ClinicalBERT and BERT. We also compare
|
781 |
+
it to the base GNN and Bi-LSTM to show the effective-
|
782 |
+
ness of our proposed framework. We additionally provide
|
783 |
+
experimental results where both text and code embedding
|
784 |
+
|
785 |
+
are used to make predictions. We compare our model with a
|
786 |
+
vanilla multi-view model without DCCA. For all baselines,
|
787 |
+
we report their Area Under Receiver Operating Characteris-
|
788 |
+
tic (AUROC) as evaluation metrics, and Average Precision
|
789 |
+
(AP) can be found in Appendix A. For all datasets, we set
|
790 |
+
L, the number of correlated dimensions to 20, and report the
|
791 |
+
total amount of correlation obtained (Corr).
|
792 |
+
Table 1 shows the main results. For clinical notes predic-
|
793 |
+
tion, we can see that the codes augmented model can con-
|
794 |
+
sistently outperform the base Bi-LSTM, with an average rel-
|
795 |
+
ative performance increase of 2.4% on the proprietary data
|
796 |
+
and 1.6% on the MIMIC-III data. Our proposed method out-
|
797 |
+
performs BERT on most tasks and achieves very competi-
|
798 |
+
tive performance compared to that of ClinicalBERT. Note
|
799 |
+
that our model only trains on the labeled EHR data without
|
800 |
+
unsupervised training on extra data like BERT and Clini-
|
801 |
+
calBERT do. ClinicalBERT has been previously trained and
|
802 |
+
fine-tuned on the entire MIMIC dataset, including the dis-
|
803 |
+
charge summaries, and therefore these results may overesti-
|
804 |
+
mate its performance.
|
805 |
+
For ICD code prediction, we see that DCCA brings a 1.5%
|
806 |
+
performance increase on the proprietary data. Since the
|
807 |
+
codes model significantly outperforms the language model
|
808 |
+
on all tasks, the RGCN is a much stronger learner and has
|
809 |
+
less information to learn from the text model. Comparing the
|
810 |
+
results of the proprietary and the MIMIC datasets, we can
|
811 |
+
see that DCCA brings a more significant performance boost
|
812 |
+
to the proprietary dataset, presumably because of the larger
|
813 |
+
amount of correlation obtained in the proprietary dataset
|
814 |
+
(85% versus 58%). Moreover, an important difference in
|
815 |
+
these datasets is the ontology used: MIMIC-III uses ICD-9
|
816 |
+
and the proprietary dataset uses ICD-10. The ICD-9 ontol-
|
817 |
+
ogy tree has a height of four, which is much smaller than that
|
818 |
+
of ICD-10 and is more coarsely classified. This may also ex-
|
819 |
+
plain the smaller performance gains in MIMIC-III.
|
820 |
+
The combined model with DCCA only brings a slight per-
|
821 |
+
formance boost compared to the one without because the
|
822 |
+
amount of information for the models to learn is equiva-
|
823 |
+
lent. Nevertheless, the DCCA model encourages the two
|
824 |
+
views’ embeddings to agree and allows independent predic-
|
825 |
+
tion. In contrast, a vanilla multi-view model does not help
|
826 |
+
the weaker learner learn from the stronger learner.
|
Unseen Codes Experiments. We identify the set of unique codes in the dataset, split the codes into k folds, and run k experiments, one per split. For each experiment, we pick one fold as the unseen code set. Data that contain at least one unseen code are used as the evaluation set. The evaluation set is split into two halves to serve as the validation and test sets. The rest of the data forms the training set. We pick another fold from the code split as the DCCA unseen code set. Training set data that do not contain any DCCA unseen code form the DCCA training set. Then, the entire training set is used for task fine-tuning. Because the distribution of codes is not uniform, the amount of data in each split is not equal across folds. We use k=10 for the proprietary dataset and k=20 for the MIMIC-III dataset to generate a reasonable data division. We provide average split sizes in Appendix C.
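The splitting procedure above might be sketched as follows. The record layout and field names are invented, and the further halving of the evaluation set into validation and test sets is omitted.

```python
import random

def unseen_split(records, k, seed=0):
    """Fold the unique codes, hold out fold 0 as the unseen set, and route
    records accordingly; fold 1 is additionally blocked during DCCA."""
    codes = sorted({c for r in records for c in r["codes"]})
    random.Random(seed).shuffle(codes)
    folds = [set(codes[i::k]) for i in range(k)]
    unseen = folds[0]                 # evaluation unseen-code set
    eval_recs = [r for r in records if set(r["codes"]) & unseen]
    train_recs = [r for r in records if not set(r["codes"]) & unseen]
    dcca_unseen = folds[1]            # DCCA unseen-code set
    dcca_recs = [r for r in train_recs
                 if not set(r["codes"]) & dcca_unseen]
    return train_recs, eval_recs, dcca_recs

records = [{"codes": ["a", "b"]}, {"codes": ["c"]}, {"codes": ["b", "d"]}]
train_recs, eval_recs, dcca_recs = unseen_split(records, k=4)
```

Task fine-tuning then runs on the full training set, while only the DCCA subset feeds the correlation objective.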
For this task, we compare our method with the base GNN, the base GNN augmented with the same labeling training strategy, and the DCCA-optimized GNN to demonstrate the outstanding performance of our framework. Similarly, we report AUROC and include AP in Appendix A.

Task    # Admissions   # Pos. Samples   # Unique Codes
DEL     11,064         5,367            5,637
DIA     38,551         1,387            9,320
TH      38,551         1,235            9,320
D30     38,551         1,444            9,320
MORT    5,926          2,963            4,448
R30     10,998         5,499            3,645

Table 3: Statistics of different datasets and tasks.
Table 2 summarizes the results of the unseen codes experiments. Note that all test data contain at least one code that never appears in the training process. In this more difficult inference scenario, comparing the plain RGCN with the DCCA-augmented RGCN, we see a 2.2% average relative performance increase on the proprietary dataset. With the labeling learning method, we can further improve the performance gain to 4.2%. On the MIMIC-III dataset, the performance boost of our model over the plain RGCN is 3.6%, demonstrating our method's ability to differentiate seen and unseen codes. We also notice that DCCA alone only slightly improves performance on the MIMIC-III dataset (1.4%). We suspect that while the labeling training scheme helps distinguish seen and unseen codes, the amount of data used in the DCCA process is also reduced. As the MORT and R30 datasets are smaller, and a small DCCA training set may not faithfully represent the actual data distribution, the regularization effect of DCCA diminishes.
6 Conclusions

Predicting patient outcomes from EHR data is an essential task in clinical ML. Conventional methods that solely learn from clinical texts suffer from poor performance, and those that learn from codes have limited application in real-world clinical settings. In this paper, we propose a multi-view framework that jointly learns from the clinical notes and ICD codes of EHR data using a Bi-LSTM and a GNN. We use DCCA to create shared information while maintaining each view's independence during inference. This allows accurate prediction using clinical notes when the ICD codes are missing, which is commonly the case in pre-operative analysis. We also propose a label augmentation method for our framework, which allows the GNN model to make effective inferences on codes that are not seen during training, enhancing generalizability. Experiments conducted on two different datasets show that our methods are consistently effective across tasks. In the future, we plan to incorporate more data types from the EHR and combine them with other multi-view learning methods to make more accurate predictions.
References

Alsentzer, E.; Murphy, J. R.; Boag, W.; Weng, W.-H.; Jin, D.; Naumann, T.; and McDermott, M. 2019. Publicly available clinical BERT embeddings. arXiv preprint arXiv:1904.03323.

Andrew, G.; Arora, R.; Bilmes, J.; and Livescu, K. 2013. Deep canonical correlation analysis. In International Conference on Machine Learning, 1247–1255. PMLR.

Boag, W.; Doss, D.; Naumann, T.; and Szolovits, P. 2018. What's in a note? Unpacking predictive value in clinical note representations. AMIA Summits on Translational Science Proceedings, 2018: 26.

Choi, E.; Xu, Z.; Li, Y.; Dusenberry, M.; Flores, G.; Xue, E.; and Dai, A. 2020. Learning the graphical structure of electronic health records with graph convolutional transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 606–613.

Deschepper, M.; Eeckloo, K.; Vogelaers, D.; and Waegeman, W. 2019. A hospital wide predictive model for unplanned readmission using hierarchical ICD data. Computer Methods and Programs in Biomedicine, 173: 177–183.

Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR, abs/1810.04805.

Fouad, S.; Tino, P.; Raychaudhury, S.; and Schneider, P. 2013. Incorporating Privileged Information Through Metric Learning. IEEE Transactions on Neural Networks and Learning Systems, 24(7): 1086–1098.

Gilmer, J.; Schoenholz, S. S.; Riley, P. F.; Vinyals, O.; and Dahl, G. E. 2017. Neural message passing for quantum chemistry. In International Conference on Machine Learning, 1263–1272. PMLR.

Goldstein, B. A.; Navar, A. M.; Pencina, M. J.; and Ioannidis, J. 2017. Opportunities and challenges in developing risk prediction models with electronic health records data: a systematic review. Journal of the American Medical Infor-
|
955 |
+
matics Association, 24(1): 198–208.
|
956 |
+
Hochreiter, S.; and Schmidhuber, J. 1997. Long short-term
|
957 |
+
memory. Neural computation, 9(8): 1735–1780.
|
958 |
+
Huang, K.; Altosaar, J.; and Ranganath, R. 2019. Clinical-
|
959 |
+
bert: Modeling clinical notes and predicting hospital read-
|
960 |
+
mission. arXiv preprint arXiv:1904.05342.
|
961 |
+
Huang, K.; Singh, A.; Chen, S.; Moseley, E. T.; Deng, C.-y.;
|
962 |
+
George, N.; and Lindvall, C. 2019. Clinical XLNet: Mod-
|
963 |
+
eling sequential clinical notes and predicting prolonged me-
|
964 |
+
chanical ventilation. arXiv preprint arXiv:1912.11975.
|
965 |
+
Johnson, A. E.; Pollard, T. J.; Shen, L.; Lehman, L.-w. H.;
|
966 |
+
Feng, M.; Ghassemi, M.; Moody, B.; Szolovits, P.; An-
|
967 |
+
thony Celi, L.; and Mark, R. G. 2016. MIMIC-III, a freely
|
968 |
+
accessible critical care database. Scientific data, 3(1): 1–9.
|
969 |
+
Lambert, J.; Sener, O.; and Savarese, S. 2018. Deep learning
|
970 |
+
under privileged information using heteroscedastic dropout.
|
971 |
+
In Proceedings of the IEEE Conference on Computer Vision
|
972 |
+
and Pattern Recognition, 8886–8895.
|
973 |
+
Lee, J.; Yoon, W.; Kim, S.; Kim, D.; Kim, S.; So, C. H.; and
|
974 |
+
Kang, J. 2020. BioBERT: a pre-trained biomedical language
|
975 |
+
representation model for biomedical text mining. Bioinfor-
|
976 |
+
matics, 36(4): 1234–1240.
|
977 |
+
Li, F.; and Yu, H. 2020. ICD coding from clinical text using
|
978 |
+
multi-filter residual convolutional neural network. In Pro-
|
979 |
+
ceedings of the AAAI Conference on Artificial Intelligence,
|
980 |
+
volume 34, 8180–8187.
|
981 |
+
Li, Q.; Han, Z.; and Wu, X.-M. 2018. Deeper insights into
|
982 |
+
graph convolutional networks for semi-supervised learning.
|
983 |
+
In Thirty-Second AAAI conference on artificial intelligence.
|
984 |
+
Li, R.; Ma, F.; and Gao, J. 2021. Integrating Multimodal
|
985 |
+
Electronic Health Records for Diagnosis Prediction.
|
986 |
+
In
|
987 |
+
AMIA Annual Symposium Proceedings, volume 2021, 726.
|
988 |
+
American Medical Informatics Association.
|
989 |
+
Liu, W.; Stansbury, C.; Singh, K.; Ryan, A. M.; Sukul, D.;
|
990 |
+
Mahmoudi, E.; Waljee, A.; Zhu, J.; and Nallamothu, B. K.
|
991 |
+
2020a. Predicting 30-day hospital readmissions using arti-
|
992 |
+
ficial neural networks with medical code embedding. PloS
|
993 |
+
one, 15(4): e0221606.
|
994 |
+
Liu, Z.; Li, X.; Peng, H.; He, L.; and Yu, P. S. 2020b. Het-
|
995 |
+
erogeneous Similarity Graph Neural Network on Electronic
|
996 |
+
Health Records. In 2020 IEEE International Conference on
|
997 |
+
Big Data (Big Data), 1196–1205.
|
998 |
+
Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and
|
999 |
+
Dean, J. 2013.
|
1000 |
+
Distributed representations of words and
|
1001 |
+
phrases and their compositionality. Advances in neural in-
|
1002 |
+
formation processing systems, 26.
|
1003 |
+
Pascual, D.; Luck, S.; and Wattenhofer, R. 2021. Towards
|
1004 |
+
BERT-based automatic ICD coding: Limitations and oppor-
|
1005 |
+
tunities. arXiv preprint arXiv:2104.06709.
|
1006 |
+
Schlichtkrull, M.; Kipf, T. N.; Bloem, P.; Berg, R. v. d.;
|
1007 |
+
Titov, I.; and Welling, M. 2018. Modeling relational data
|
1008 |
+
with graph convolutional networks. In European semantic
|
1009 |
+
web conference, 593–607. Springer.
|
1010 |
+
Shickel, B.; Tighe, P. J.; Bihorac, A.; and Rashidi, P. 2017.
|
1011 |
+
Deep EHR: a survey of recent advances in deep learn-
|
1012 |
+
ing techniques for electronic health record (EHR) analysis.
|
1013 |
+
IEEE journal of biomedical and health informatics, 22(5):
|
1014 |
+
1589–1604.
|
1015 |
+
Si, Y.; Wang, J.; Xu, H.; and Roberts, K. 2019. Enhanc-
|
1016 |
+
ing clinical concept extraction with contextual embeddings.
|
1017 |
+
Journal of the American Medical Informatics Association,
|
1018 |
+
26(11): 1297–1304.
|
1019 |
+
Vapnik, V.; and Vashist, A. 2009. A new learning paradigm:
|
1020 |
+
Learning using privileged information.
|
1021 |
+
Neural networks,
|
1022 |
+
22(5-6): 544–557.
|
1023 |
+
Wang, W.; Arora, R.; Livescu, K.; and Bilmes, J. 2015. On
|
1024 |
+
deep multi-view representation learning.
|
1025 |
+
In International
|
1026 |
+
conference on machine learning, 1083–1092. PMLR.
|
1027 |
+
Wei, W.-Q.; Teixeira, P. L.; Mo, H.; Cronin, R. M.; Warner,
|
1028 |
+
J. L.; and Denny, J. C. 2016. Combining billing codes, clin-
|
1029 |
+
ical notes, and medications from electronic health records
|
1030 |
+
provides superior phenotyping performance. Journal of the
|
1031 |
+
American Medical Informatics Association, 23(e1): e20–
|
1032 |
+
e27.
|
1033 |
+
|
1034 |
+
Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2018.
|
1035 |
+
How powerful are graph neural networks? arXiv preprint
|
1036 |
+
arXiv:1810.00826.
|
1037 |
+
Yu, K.-H.; Beam, A. L.; and Kohane, I. S. 2018. Artificial
|
1038 |
+
intelligence in healthcare. Nature biomedical engineering,
|
1039 |
+
2(10): 719–731.
|
1040 |
+
Zhang, D.; Yin, C.; Zeng, J.; Yuan, X.; and Zhang, P. 2020a.
|
1041 |
+
Combining structured and unstructured data for predictive
|
1042 |
+
models: a deep learning approach. BMC medical informatics
|
1043 |
+
and decision making, 20(1): 1–11.
|
1044 |
+
Zhang, M.; King, C. R.; Avidan, M.; and Chen, Y. 2020b.
|
1045 |
+
Hierarchical attention propagation for healthcare represen-
|
1046 |
+
tation learning. In Proceedings of the 26th ACM SIGKDD
|
1047 |
+
International Conference on Knowledge Discovery & Data
|
1048 |
+
Mining, 249–256.
|
1049 |
+
Zhang, Y.; Jin, R.; and Zhou, Z.-H. 2010. Understanding
|
1050 |
+
bag-of-words model: a statistical framework. International
|
1051 |
+
journal of machine learning and cybernetics, 1(1): 43–52.
|
1052 |
+
|
A Average Precision Score Results
AP results demonstrate a similar pattern to the AUROC results: the DCCA-augmented model consistently outperforms the base model while achieving very competitive results compared to ClinicalBERT on the text data, as shown in Table 5. The proposed labeling training scheme also consistently improves our model's performance in the unseen-code experiments, as shown in Table 6.
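For reference, average precision summarizes the precision-recall curve as the mean precision at the rank of each positive example. A minimal NumPy sketch on toy labels and scores (not the paper's data):

```python
import numpy as np

def average_precision(y_true, y_score):
    """AP: average of precision@k taken at the rank of each positive example."""
    order = np.argsort(-np.asarray(y_score))       # sort by descending score
    y = np.asarray(y_true)[order]
    cum_tp = np.cumsum(y)                          # true positives among top-k
    precision_at_k = cum_tp / (np.arange(len(y)) + 1)
    return precision_at_k[y == 1].sum() / y.sum()

y_true = [0, 0, 1, 1, 0, 1]                        # hypothetical binary outcomes
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]          # model probabilities
print(round(average_precision(y_true, y_score), 3))
```

This matches scikit-learn's `average_precision_score` on the same inputs, since both compute the step-wise area under the precision-recall curve.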
B Hyperparameters
We use grid search for hyperparameter tuning. For missing-view experiments on text, we fix the number of RGCN layers to 3. We use 32 for all hidden dimensions, as we found that varying the hidden size has minimal impact on performance. Text and Code denote the hyperparameters used for the text and code inference tasks, respectively. Table 7 summarizes the set of hyperparameters used for tuning.
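A grid search over task-level settings like those in Table 7 can be sketched with `itertools.product`; the `evaluate` function below is a hypothetical placeholder for a train-and-validate run, not the paper's training code:

```python
from itertools import product

# Search space mirroring the tuned knobs in Table 7.
grid = {
    "num_gnn_layers": [2, 3, 4],
    "dropout": [0.0, 0.2, 0.4],
    "learning_rate": [1e-3, 1e-4, 1e-5],
}

def evaluate(config):
    """Placeholder: train with `config` and return a validation score."""
    return -abs(config["dropout"] - 0.2) - abs(config["learning_rate"] - 1e-4)

best_score, best_config = float("-inf"), None
for values in product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    score = evaluate(config)
    if score > best_score:                 # keep the first best on ties
        best_score, best_config = score, config

print(best_config)
```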
C Unseen Code Sample Size
We use a 10-fold code split for the local data and a 20-fold code split for the MIMIC-III data so that the split sizes are reasonable for training. We report the average number of samples for all tasks in Table 4.
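The unseen-code splits can be sketched as partitioning the ICD code vocabulary itself into folds, so that held-out codes never appear during training (the code names below are a hypothetical toy vocabulary):

```python
import random

def code_folds(codes, n_folds, seed=0):
    """Partition the ICD code vocabulary into n_folds disjoint groups."""
    codes = sorted(codes)
    random.Random(seed).shuffle(codes)
    return [codes[i::n_folds] for i in range(n_folds)]

vocab = [f"ICD{i:03d}" for i in range(100)]   # toy code vocabulary
folds = code_folds(vocab, n_folds=10)         # 10-fold split, as for the local data

held_out = set(folds[0])                      # codes unseen during training
train_codes = set(vocab) - held_out
print(len(train_codes), len(held_out))
```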
Task     DCCA Train    Full Train    Test
DEL        3,458.9       4,624.5     3,219.4
DIA       19,305.4      23,717.8     7,416.3
TH        19,305.4      23,717.8     7,416.3
D30       19,305.4      23,717.8     7,416.3
MORT       4,603.1       6,148.2     2,424.7
R30        2,528.1       3,264.0     1,330.8

Table 4: Average Split Size in Unseen Codes Experiment.
D Data And Implementation
We adopted the local dataset because it is the only dataset we have access to that uses both clinical free text and ICD-10 codes. The implementation details of the MIMIC-III experiments can be found in the supplementary material (code).
Method                 |                      Local Data                    |        MIMIC-III
                       | DEL         DIA         TH          D30            | MORT        R30
Corr.                  | 17.3 ± 1.3  16.8 ± 2.6  16.8 ± 2.6  16.8 ± 2.6     | 10.4 ± 1.7  12.7 ± 2.3
BERT                   | 65.4 ± 0.7  23.6 ± 1.2   6.5 ± 1.2  15.1 ± 1.6     | 85.9 ± 1.9  67.7 ± 1.6
ClinicalBERT           | 66.0 ± 0.6  23.5 ± 0.7   7.0 ± 0.8  15.8 ± 2.1     | 88.6 ± 1.2  70.1 ± 2.2
LSTM                   | 64.4 ± 0.5  22.1 ± 1.6   6.3 ± 0.9  14.3 ± 0.7     | 85.4 ± 1.4  66.2 ± 2.8
DCCA+LSTM              | 65.4 ± 0.4  24.6 ± 0.8   6.3 ± 1.4  15.9 ± 1.0     | 84.9 ± 1.6  70.4 ± 2.1
RGCN                   | 74.2 ± 1.7  91.9 ± 2.1  11.7 ± 0.3  60.4 ± 4.8     | 90.1 ± 1.7  67.4 ± 2.2
DCCA+RGCN              | 76.6 ± 1.7  90.6 ± 1.6  14.8 ± 0.5  60.8 ± 4.9     | 90.2 ± 1.2  68.4 ± 1.9
RGCN+Bi-LSTM           | 78.6 ± 2.6  90.3 ± 1.6  12.6 ± 0.7  62.1 ± 1.5     | 90.2 ± 1.4  66.8 ± 1.5
DCCA+RGCN+Bi-LSTM      | 77.2 ± 2.1  91.6 ± 2.0  15.8 ± 0.6  61.7 ± 1.2     | 89.6 ± 1.7  67.6 ± 1.6

Table 5: Effect of DCCA Joint Learning Compared to Different Baselines in AP (%).
Method                 |                      Local Data                    |        MIMIC-III
                       | DEL         DIA         TH          D30            | MORT        R30
RGCN                   | 72.6 ± 1.5  82.1 ± 9.4   9.2 ± 4.1  53.4 ± 7.1     | 87.6 ± 3.6  63.7 ± 3.1
RGCN+Labeling          | 73.2 ± 0.9  83.6 ± 9.6   9.1 ± 4.7  54.2 ± 9.1     | 88.5 ± 3.9  65.4 ± 3.6
DCCA+RGCN              | 73.8 ± 1.2  85.3 ± 8.1  12.6 ± 1.3  53.6 ± 8.2     | 88.7 ± 3.0  65.0 ± 2.9
DCCA+RGCN+Labeling     | 74.5 ± 1.1  89.4 ± 1.3  12.7 ± 3.0  53.5 ± 6.9     | 89.9 ± 3.1  65.1 ± 3.4

Table 6: Ablation Study of the Labeling Training Scheme under Unseen Code Setting in AP (%).
       Hyperparameter    Local-Text          Local-Code          MIMIC-III-Text       MIMIC-III-Code
GNN    #layers           3                   {2,3,4}             3                    {2,3,4}
LSTM   block size (b)    -                   -                   30                   30
MLP    #layers           2                   2                   1                    1
       dropout           {0,0.2,0.4}         {0,0.2,0.4}         {0.2,0.4,0.6,0.8}    {0.2,0.4,0.6,0.8}
DCCA   learning rate     0.001               0.001               0.001                0.001
       batch size        1024                1024                400                  400
Task   learning rate     {1e-3,1e-4,1e-5}    {1e-3,1e-4,1e-5}    {1e-3,1e-4,1e-5}     {1e-3,1e-4,1e-5}
       batch size        256                 256                 32                   32

Table 7: Hyperparameters used for tuning.
AdFJT4oBgHgl3EQfrS3C/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
C9AyT4oBgHgl3EQfSPck/content/2301.00080v1.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1881b3b5bc4619c7dba7467de32ea62b5f12e0550a1f1b24675c5a8944771362
size 1545665

C9AyT4oBgHgl3EQfSPck/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0f4738935d9d09435855bc34d4a4148b5ae8888f75b6d1f58e891fa09cd358fd
size 1245229

C9AyT4oBgHgl3EQfSPck/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:45dd4319e38bc7ab70d46780da4eaf61a5b0f82306ab917277647964f3fe9294
size 57421

CNFQT4oBgHgl3EQf-jfx/content/2301.13455v1.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9bbf28610c80625a77856dcce2409dcfac4985035f07d0eaf04b875f5400bf0d
size 712985

CNFQT4oBgHgl3EQf-jfx/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c4cf0dabf0396efccf6fcd6cb63fe9c633f53638ec3f28b5ea10b21e1b1f3124
size 1441837

CNFQT4oBgHgl3EQf-jfx/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f5389ca76f2c87d43b35c1f08315651d21cf2130e455dbbeb6f34b832d8d2550
size 55889
DtE0T4oBgHgl3EQfQgBB/content/tmp_files/2301.02193v1.pdf.txt
ADDED
@@ -0,0 +1,2495 @@
arXiv:2301.02193v1 [physics.app-ph] 5 Jan 2023

Universal scaling between wave speed and size enables nanoscale high-performance reservoir computing based on propagating spin-waves

Satoshi Iihama,1,2 Yuya Koike,2,3,5 Shigemi Mizukami,2,4 Natsuhiko Yoshinaga2,5∗

1Frontier Research Institute for Interdisciplinary Sciences (FRIS), Tohoku University, Sendai, 980-8578, Japan
2WPI Advanced Institute for Materials Research (AIMR), Tohoku University, Katahira 2-1-1, Sendai, 980-8577, Japan
3Department of Applied Physics, Tohoku University, Sendai, 980-8579, Japan
4Center for Science and Innovation in Spintronics (CSIS), Tohoku University, Sendai, 980-8577, Japan
5MathAM-OIL, AIST, Sendai, 980-8577, Japan

∗To whom correspondence should be addressed; E-mail: [email protected].
Neuromorphic computing using spin waves is promising for high-speed nanoscale devices, but the realization of high performance has not yet been achieved. Here we show, using micromagnetic simulations and a simplified theory with response functions, that spin-wave physical reservoir computing can achieve miniaturization down to nanoscales while keeping computational power comparable with other state-of-the-art systems. We also show that the scaling of system sizes with the propagation speed of spin waves plays a key role in achieving high performance at nanoscales.
Introduction

Non-local magnetization dynamics in a nanomagnet, spin waves, can be used for processing information in an energy-efficient manner, since spin waves carry information in a magnetic material without Ohmic losses (1). The wavelength of a spin wave can be down to the nanometer scale, and the spin-wave frequency reaches several GHz to THz, which are promising properties for nanoscale, high-speed operation devices. Recently, neuromorphic computing using spintronics technology has attracted great attention for the development of future low-power-consumption artificial intelligence (2). Spin waves can be created by various means, such as a magnetic field, spin-transfer torque, spin-orbit torque, or a voltage-induced change in magnetic anisotropy, and can be detected by the magnetoresistance effect (3). Therefore, neuromorphic computing using spin waves has the potential for realisable devices.

Reservoir computing (RC) is a promising neuromorphic computation framework. RC is a variant of recurrent neural networks (RNNs) and has a single layer, referred to as a reservoir, to transform an input signal into an output (4). In contrast with conventional RNNs, RC does not update the weights in the reservoir. Therefore, by replacing the reservoir of an artificial neural network with a physical system, for example magnetization dynamics, we may realize a neural network device that performs various tasks, such as time-series prediction (4, 5), short-term memory (6, 7), pattern recognition, and pattern generation. Several physical RC systems have been proposed: spintronic oscillators (8, 9), optics (10), photonics (11, 12), fluids, soft robots, and others (see reviews (13–15)). Among these systems, spintronic RC has the advantage of its potential realization of nanoscale devices operating at GHz frequencies with low power consumption, which may outperform conventional electronic computers in the future. So far, spintronic RC has been considered using spin-torque oscillators (8, 9), magnetic skyrmions (16), and spin waves in garnet thin films (17–19). However, the current performance of spintronic RC still remains poor compared with the Echo State Network (ESN) (6, 7), an idealized RC system. The biggest issue is a lack of understanding of how to achieve high performance in RC systems.

To achieve high performance, the reservoir has to have a large degree of freedom, N. However, in practice, it is difficult to increase the number of physical nodes, Np, because it requires more wiring for multiple inputs. In this respect, wave-based computation in continuum media has attractive features. The dynamics in continuum media have large, possibly infinite, degrees of freedom. In fact, several wave-based computations have been proposed (20, 21). The challenge is to use the advantages of both wave-based computation and RC to achieve high-performance computing of time-series data. For spin wave-based RC, so far, the large degrees of freedom are extracted only by using a large number of input and/or output nodes (19, 22). Here, to propose a realisable spin wave RC, we use an alternative route; we extract the information from the continuum media using a small number of physical nodes.

Along this direction, using Nv virtual nodes for dynamics with delay was proposed to increase N in (23). This idea was applied in optical fibres with a long delay line (11) and in a network of oscillators with delay (24). Nevertheless, the mechanism of high performance remains elusive, and no unified understanding has been reached. The increase of N = NpNv with Nv does not necessarily improve performance. In fact, RC based on spin-torque oscillators struggles with insufficient performance both in experiments (9) and in simulations (25). Photonic RC requires a large device size due to the long delay line (11, 12).

In this work, we show that nanoscale and high-speed RC based on spin wave propagation with a small number of inputs can achieve performance comparable with the ESN and other state-of-the-art RC systems. More importantly, by using a simple theoretical model, we clarify the mechanism of the high performance of spin wave RC. We show the scaling between wave speed and system size that makes virtual nodes effective.
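The virtual-node idea (23) can be pictured as time multiplexing: one physical output is sampled at Nv sub-steps per input interval, so the effective state dimension becomes N = Np × Nv. A minimal bookkeeping sketch on a hypothetical sampled trace:

```python
import numpy as np

def virtual_nodes(trace, n_steps, Nv):
    """Reshape one physical output sampled Nv times per input step into an
    (n_steps, Nv) state matrix, i.e. Nv virtual nodes per physical node."""
    return np.asarray(trace, dtype=float).reshape(n_steps, Nv)

n_steps, Nv = 4, 3
trace = np.arange(n_steps * Nv)          # toy sampled output of one physical node
states = virtual_nodes(trace, n_steps, Nv)
print(states.shape)                      # one Nv-dimensional state per input step
```

With Np physical nodes, stacking the Np such matrices side by side yields the full N = Np * Nv reservoir state used by the readout.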
78 |
+
Results
|
79 |
+
Reservoir computing using wave propagation
The basic task of RC is to transform an input signal Un into an output Yn for the discrete steps n = 1, 2, . . . , T at times tn. For example, for speech recognition, the input is an acoustic wave, and the output is a word corresponding to the sound. Each word is determined not only by the instantaneous input but also by the past history. Therefore, the output is, in general, a function of all the past inputs, Yn = g({Um}_{m=1}^{n}), as in Fig. 1(a). RC can also be used for time-series prediction by setting the output as Yn = Un+1 (4). In this case, the state at the next time step is predicted from all the past data; namely, the effect of delay is included. The performance of the input-output transformation g can be characterized by how much past information g retains and how much nonlinear transformation g performs. We will show that the former is expressed by the memory capacity (MC) (26), whereas the latter is measured by the information processing capacity (IPC) (27).
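The memory capacity used here has a standard operational definition: MC = Σ_k r_k², where r_k is the correlation between the delayed input U_{n−k} and its best linear reconstruction from the reservoir state. A minimal sketch of this measurement, using a generic echo-state reservoir with illustrative parameters (not the spin-wave system), is:

```python
import numpy as np

def memory_capacity(X, u, k_max=30, washout=100):
    """Linear memory capacity MC = sum_k r_k^2, where r_k is the correlation
    between the delayed input u[n-k] and its best linear reconstruction
    from the reservoir state X[n] (Jaeger's definition)."""
    mc = 0.0
    for k in range(1, k_max + 1):
        target = u[washout - k:-k]          # u[n-k]
        states = X[washout:]                # X[n]
        w, *_ = np.linalg.lstsq(states, target, rcond=None)
        r = np.corrcoef(states @ w, target)[0, 1]
        mc += r ** 2
    return mc

rng = np.random.default_rng(0)
T, N = 3000, 50
u = rng.uniform(-1, 1, T)
# toy echo-state reservoir (hypothetical parameters, for illustration only)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, N)
X = np.zeros((T, N))
for n in range(1, T):
    X[n] = np.tanh(W @ X[n - 1] + w_in * u[n])
print(memory_capacity(X, u))   # bounded above by min(N, k_max)
```

IPC generalizes this by replacing the delayed inputs with orthogonal polynomials of the input history, so that nonlinear as well as linear memory is counted.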
We propose physical computing based on a propagating wave (see Fig. 1(b,c)). A time series of an input signal Un is transformed into an output signal Yn (Fig. 1(a)). As we will discuss below, this transformation requires large linear and nonlinear memories; for example, to predict Yn, we need to memorize the information of Un−2 and Un−1. The input signal is injected at the first input node and propagates in the device to the output node, taking a time τ1, as in Fig. 1(b). The output may then carry past information at tn − τ1, corresponding to the step n − m1. The output may also receive information from another input at a different time tn − τ2. The sum of the two pieces of information is mixed and transformed into Un−m1Un−m2 either by the nonlinear readout or by the nonlinear dynamics of the reservoir (see also Sec. B in Supplementary Information). We will demonstrate that wave propagation can indeed enhance the memory capacity and learning performance of the input-output relationship.
Figure 1: Illustration of physical reservoir computing and a reservoir based on a propagating spin-wave network. (a) Schematic illustration of output function prediction using time-series data. The output signal Y is transformed from past information of the input signal U. (b) Schematic illustration of reservoir computing with multiple physical nodes. The output signal at physical node A contains past input signals from other physical nodes, which are memorized by the reservoir. (c) Schematic illustration of reservoir computing based on a propagating spin wave. A propagating spin wave in a ferromagnetic thin film (m ∥ ez) is excited by spin-transfer torque at multiple physical nodes with a reference magnetic layer (m ∥ ex). The x-component of magnetization is detected by the magnetoresistance effect at each physical node.
Before explaining our learning strategy, we discuss how to achieve accurate learning of the input-output relationship Yn = g({Um}_{m=1}^{n}) from the data. Here, the output may depend on the whole sequence of inputs {Um}_{m=1}^{n} = (U1, . . . , Un). Even when both Un and Yn are one-variable time-series data, the input-output relationship g(·) may be a T-variable polynomial, where T is the length of the time series. Formally, g(·) can be expanded in a polynomial series (Volterra series) such that g({Um}_{m=1}^{n}) = Σ_{k1,k2,...,kn} β_{k1,k2,...,kn} U1^{k1} U2^{k2} · · · Un^{kn} with coefficients β_{k1,k2,...,kn}. Therefore, even for a linear input-output relationship, we need T coefficients in g(·), and as the degree of the polynomials increases, the number of coefficients increases exponentially. This observation implies that a large amount of data is required to estimate the input-output relationship. Nevertheless, we may expect a dimensional reduction of g(·) due to its possible dependence only on times close to t and on the lower powers. Still, our physical computers should have degrees of freedom N ≫ 1, if not exponentially large.
The reservoir computing framework is used to handle time-series data of the input U and the output Y (6). In this framework, the input-output relationship is learned through the reservoir dynamics X(t), which in our case is the magnetization at the detectors. The reservoir state at a time tn is driven by the input at the nth step corresponding to tn as

X(tn+1) = f(X(tn), Un)   (1)

with a nonlinear (or possibly linear) function f(·). The output is approximated by the readout operator ψ(·) as

Ŷn = ψ(X(tn)).   (2)

Our study uses the nonlinear readout ψ(X(t)) = W1X(t) + W2X²(t) (5, 28). The weight matrices W1 and W2 are estimated from the data of the reservoir dynamics X(t) and the true output Yn, where X(t) is obtained by (1). With the nonlinear readout, RC with linear dynamics can achieve a nonlinear transformation, as in Fig. 1(b). We stress that the system also works with a linear readout when the RC has nonlinear dynamics. We discuss this case in Sec. B.
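As a sketch of Eqs. (1) and (2), the toy example below uses a purely linear reservoir map and the quadratic readout ψ(X) = W1X + W2X², fitted by least squares, to learn a target that needs both delay and nonlinearity. All parameters (reservoir size, spectral radius, input scaling, target) are illustrative assumptions, not the paper's spin-wave model:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 4000, 120
u = rng.uniform(0, 0.5, T)
# target needing both memory and nonlinearity: Y_n = U_{n-1} U_{n-2}
Y = np.zeros(T)
Y[2:] = u[1:-1] * u[:-2]

# Eq. (1) with a purely *linear* reservoir map f
A = rng.normal(0, 1, (N, N))
A *= 0.5 / np.abs(np.linalg.eigvals(A)).max()   # spectral radius 0.5
b = rng.uniform(-1, 1, N)
X = np.zeros((T, N))
for n in range(1, T):
    X[n] = A @ X[n - 1] + b * u[n - 1]

# Eq. (2) with the nonlinear readout psi(X) = W1 X + W2 X^2
Phi = np.hstack([X, X ** 2])
wash = 200
W, *_ = np.linalg.lstsq(Phi[wash:], Y[wash:], rcond=None)
err = Phi[wash:] @ W - Y[wash:]
nrmse = np.sqrt(np.mean(err ** 2)) / np.std(Y[wash:])
print(nrmse)   # well below 1: the quadratic readout supplies the nonlinearity
```

The squared states X² contain products of delayed inputs, which is why a linear reservoir with this readout can represent terms like U_{n−1}U_{n−2}.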
Spin wave reservoir computing

We consider a magnetic device: a thin rectangular film with cylindrical injectors (see Fig. 1(c)). The size of the device is L × L × D. Under a uniform external magnetic field, the magnetization is along the z direction. Electric current is injected at the Np injectors, each with radius a and the same height as the device. The spin torque exerted by the current drives the magnetization m(x, t) and excites propagating spin waves, as schematically shown in Fig. 1(c). The actual demonstration of the spin-wave reservoir computing is shown in Fig. 2. We demonstrate the spin-wave RC using two methods: micromagnetic simulations and a theoretical model using a response function.

In the micromagnetic simulations, we analyze the Landau-Lifshitz-Gilbert (LLG) equation with the effective magnetic field Heff = Hext + Hdemag + Hexch, consisting of the external field, demagnetization, and the exchange interaction (see Theoretical analysis using response function in Methods). The spin waves are driven by the Slonczewski spin-transfer torque (29). The driving term is proportional to the DC current j(t) at the nanocontact. We inject the DC current proportional to the input time series U with a pre-processing filter. From the resulting spatially inhomogeneous magnetization m(x, t), we measure the averaged magnetization mi(t) at the ith nanocontact. We use the method of time multiplexing with Nv virtual nodes (23). We choose the x-component of magnetization mx,i as a reservoir state, namely, Xn = {mx,i(tn,k)} with i ∈ [1, Np] and k ∈ [1, Nv] (see (14) in Methods for its concrete form). For the output transformation, we use ψ(mi,x) = W1,i mi,x + W2,i m²i,x. Therefore, the dimension of our reservoir is 2NpNv. The nonlinear output transformation can enhance the nonlinear transformation in the reservoir (5), and it was shown that even under linear reservoir dynamics, RC can learn any nonlinearity (28, 30). In Sec. B of the
Figure 2: Dimension of the spin-wave reservoir and prediction of the NARMA10 task. (a) Input signals U are multiplied by a binary mask Bi(t) and transformed into the injected current j(t) = 2jc Ũi(t) for the ith physical node. Current is injected into each physical node within a cylindrical region to apply spin-transfer torque and excite spin waves. Higher damping regions at the edges of the rectangle are set to avoid reflection of spin waves. (b) Prediction of the NARMA10 task. The x-component of magnetization at each physical and virtual node is collected, and the output weights are trained by linear regression.
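The masking step of panel (a) can be sketched as follows. Each input value U_n is held for Nv slots of width θ and multiplied by a node-specific binary mask; the mask values, the hold scheme, and the current unit j_c below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
Np, Nv = 8, 8          # physical and virtual nodes (values used in the paper)
theta = 0.4            # virtual-node spacing in ns
j_c = 1.0              # arbitrary current unit; j(t) = 2 j_c * U_tilde

U = rng.uniform(0, 1, 20)                  # input time series U_n
B = rng.choice([0.0, 1.0], size=(Np, Nv))  # fixed binary mask B_i per node

def masked_current(U, B):
    """Hold each U_n for Nv slots and multiply by the node's binary mask,
    giving the piecewise-constant drive at each physical node."""
    # shape (Np, len(U), Nv): value during the k-th slot of step n
    return 2 * j_c * np.repeat(U[None, :], Np, axis=0)[:, :, None] * B[:, None, :]

j = masked_current(U, B).reshape(Np, -1)
print(j.shape)          # (8, 160): Np channels, T*Nv time slots of width theta
```

The reservoir state is then read out once per slot, which is what turns one physical contact into Nv virtual nodes.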
Supplementary Information, we also discuss the linear readout, but including the z-component
of magnetization, X = (mx, mz). In this case, mz plays a role similar to m²x. The performance of the RC is measured by three tasks: MC, IPC, and NARMA10. The weights in the readout are trained from the reservoir variable X and the output Y (Fig. 2(b); see also Methods).
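For reference, the NARMA10 target used as the third task is conventionally generated by the recurrence below (standard benchmark form, with the customary input range U ∈ [0, 0.5]):

```python
import numpy as np

def narma10(u):
    """Standard NARMA10 benchmark target driven by an input u ~ U[0, 0.5].
    (Some implementations wrap the update in tanh to guarantee stability.)"""
    y = np.zeros(len(u))
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return y

rng = np.random.default_rng(3)
u = rng.uniform(0, 0.5, 1000)
y = narma10(u)
print(y.shape)   # (1000,)
```

Because y_{n+1} couples the current input to the input ten steps back, both through a product term, the task probes exactly the delayed second-order memory discussed below.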
To understand the mechanism of the high performance of learning by spin wave propagation, we also consider a simplified model using the response function of the spin wave dynamics. By linearizing the magnetization around m = (0, 0, 1) without inputs, we may express the linear response of the magnetization mi = mx,i + i my,i at the ith readout to the input as (see Methods)

mi(t) = Σ_{j=1}^{Np} ∫ dt′ Gij(t, t′) U^(j)(t′).   (3)

Here, U^(j)(t) is the input time series at the jth nanocontact. The response function has a self part Gii, for which the input and readout nanocontacts are the same, and a propagation part Gij, where the distance between the input and readout nanocontacts is |Ri − Rj|. We use the quadratic nonlinear readout, which has the structure

m²i(t) = Σ_{j1=1}^{Np} Σ_{j2=1}^{Np} ∫ dt1 ∫ dt2 G^(2)_{i j1 j2}(t, t1, t2) U^(j1)(t1) U^(j2)(t2).   (4)

The response function of the nonlinear readout is G^(2)_{i j1 j2}(t, t1, t2) ∝ G_{i j1}(t, t1) G_{i j2}(t, t2). The same structure as (4) appears when we use a second-order perturbation for the input (see Methods). In general, we may include the cubic and higher-order terms of the input. This expansion leads to the Volterra series of the output in terms of the input time series, and suggests how the spin wave RC works (see Sec. A.1 in Supplementary Information for more details). Once the magnetization at each nanocontact is computed, we may estimate MC and IPC.
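Discretizing Eq. (3), the readout magnetization is a sum of causal convolutions of the inputs with the response functions. The sketch below uses hypothetical damped, delayed kernels in place of the paper's dipole-interaction G_ij; the node positions, wave speed, and kernel shape are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
Np, T, K = 4, 500, 60        # nodes, time steps, kernel length
dt = 0.1                      # time step (arbitrary units)

# hypothetical damped-oscillatory response kernels G_ij(t - t'),
# delayed by the travel time between nodes i and j
pos = rng.uniform(0, 1, (Np, 2))
v = 0.05                      # illustrative wave speed
t = np.arange(K) * dt
G = np.zeros((Np, Np, K))
for i in range(Np):
    for j in range(Np):
        delay = np.linalg.norm(pos[i] - pos[j]) / v
        G[i, j] = (np.exp(-0.1 * t) * np.cos(2 * np.pi * t)
                   * np.exp(-0.5 * ((t - delay) / (5 * dt)) ** 2))

U = rng.uniform(0, 1, (Np, T))   # input series U^(j) at each contact

# Eq. (3), discretized: m_i(t_n) = sum_j sum_k G_ij(t_k) U^(j)(t_{n-k}) dt
m = np.zeros((Np, T))
for i in range(Np):
    for j in range(Np):
        m[i] += dt * np.convolve(U[j], G[i, j])[:T]
print(m.shape)   # (4, 500)
```

Squaring these mi(t), as in the quadratic readout, produces exactly the double-convolution structure of Eq. (4).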
Figure 3 shows the results of the three tasks. When the time scale of the virtual node θ is small and the damping is small, the performance of the spin wave RC is high. As Fig. 3(a) shows, we achieve MC ≈ 60 and IPC ≈ 60. Accordingly, we achieve a small error in the NARMA10 task, NRMSE ≈ 0.2 (Fig. 3(c)). These performances are comparable with a state-of-the-art ESN with ∼ 100 nodes. When the damping is stronger, both MC and IPC become smaller. Because the NARMA10 task requires memory with delay steps ≈ 10 and second-order nonlinearity with delay steps ≈ 10 (see Sec. A in Supplementary Information), the NRMSE becomes larger when MC ≲ 10 and IPC ≲ 10²/2.

The results of the micromagnetic simulations are semi-quantitatively reproduced by the theoretical model using the response function, as shown in Fig. 3(b). This result suggests that the linear response function G(t, t′) captures the essential feature of the delay t − t′ due to wave propagation.
Figure 3: Effect of virtual node distance on the performance of spin-wave reservoir computing, obtained with 8 physical nodes and 8 virtual nodes. Memory capacity MC and information processing capacity IPC obtained by (a) micromagnetic simulation and (b) the response function method, plotted as a function of virtual node distance θ with different damping parameters α. (c) Normalized root mean square error (NRMSE) for the NARMA10 task, plotted as a function of θ with different α.
To confirm that the high MC and IPC are due to spin-wave propagation, we perform micromagnetic simulations with damping layers between nodes (Fig. 4(a)). The damping layers inhibit spin wave propagation. The result of Fig. 4(b) shows that the memory capacity is substantially lower than that without damping, particularly when θ is small. The NARMA10 task shows a larger error (Fig. 4(d)). When θ is small, the suppression is less effective. This may be due to incomplete suppression of wave propagation.
We also analyze the theoretical model with the response function by neglecting the interaction between two physical nodes, namely, Gij = 0 for i ≠ j. In this case, information transmission between two physical nodes is not allowed. We obtain smaller MC and IPC than in the system with wave propagation, supporting our claim (see Fig. 4(c)).
Our spin wave RC also works for the prediction of time-series data. In the study of (5), the functional relationship between the state at t + ∆t and the states before t is learned by the ESN. The trained ESN can estimate the state at t + ∆t from the past states, and therefore, it can predict the dynamics without the data. In (5), the prediction of chaotic time-series data was demonstrated. Figure 5 shows the prediction using our spin wave RC for the Lorenz model. We demonstrate that the RC shows short-time prediction and, more importantly, reconstructs the chaotic attractor.
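The training data for such a task can be generated by integrating the Lorenz system with standard parameters (σ = 10, ρ = 28, β = 8/3); the initial condition and step size below are illustrative:

```python
import numpy as np

def lorenz(T=5000, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """Integrate the Lorenz system with 4th-order Runge-Kutta and return
    the trajectory (A1, A2, A3) used as training data for the reservoir."""
    def f(a):
        x, y, z = a
        return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])
    a = np.array([1.0, 1.0, 1.0])
    out = np.empty((T, 3))
    for n in range(T):
        k1 = f(a); k2 = f(a + 0.5 * dt * k1)
        k3 = f(a + 0.5 * dt * k2); k4 = f(a + dt * k3)
        a = a + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        out[n] = a
    return out

A = lorenz()
print(A.shape)   # (5000, 3)
```

During prediction, the readout trained on Yn = Un+1 is fed back as the next input, so the reservoir runs autonomously; that closed loop is what allows attractor reconstruction rather than one-step prediction only.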
Scaling of system size and wave speed

To clarify the mechanism of the high performance of our spin wave RC, we investigate the MC and IPC of systems with different characteristic length scales L and different wave propagation speeds v. The characteristic length scale is controlled by the radius of the circle on which the inputs are located (see Fig. 2(a)). We use our theoretical model with the response function to compute MC and IPC in the parameter space (v, R). This calculation is feasible because the computational cost of our model is much cheaper than that of micromagnetic simulations.

Figure 6(a,b) shows that both MC and IPC have a maximum when L ∝ v. To obtain a deeper understanding of this result, we perform the same analyses for a further simplified model, in
Figure 4: Effect of the network connection on the performance of reservoir computing. (a) Schematic illustration of the network of physical nodes connected through propagating spin waves [left] and physical nodes with no connection [right]. Memory capacity MC and information processing capacity IPC obtained using a connected network with 8 physical nodes [top] and physical nodes with no connection [bottom], calculated by (b) micromagnetic simulation and (c) the response function method, plotted as a function of virtual node distance θ. 8 virtual nodes are used. (d) Normalized root mean square error (NRMSE) for the NARMA10 task obtained by micromagnetic simulation, plotted as a function of θ with a connected network [top] and physical nodes with no connection [bottom].
Figure 5: Prediction of time-series data for the Lorenz system using the RC with micromagnetic simulations. The parameters are θ = 0.4 ns and α = 5.0 × 10⁻⁴. (a) The ground truth (A1(t), A2(t), A3(t)) and the estimated time series (Â1(t), Â2(t), Â3(t)) are shown in blue and red, respectively. The training steps are during t < 0, whereas the prediction steps are during t > 0. (b) The attractor in the A1A3 plane for the ground truth and during the prediction steps.
which the response function is replaced by the Gaussian function

Gij(t) = exp[ −(1/(2w²)) (t − Rij/v)² ],   (5)

where Rij is the distance between the ith and jth physical nodes, and w is the width of the function. Even in this simplified model, we obtain MC ≈ 40 and IPC ≈ 60, and also the maximum when L ∝ v (Fig. 6(c,d)). From this result, the origin of the optimal ratio between the length and the speed becomes clearer: when L ≪ v, the response functions for different Rij overlap, so that different physical nodes cannot carry the information of different delay times. On the other hand, when L ≫ v, the characteristic delay time L/v exceeds the maximum delay time used to compute MC and IPC, or exceeds the total length of the time series. Note that we set the maximum delay time to 100, which is much longer than the value necessary for the NARMA10 task.
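The dense/sparse argument can be reproduced directly from Eq. (5): each physical node contributes the input convolved with a Gaussian kernel delayed by R_i/v, and the memory capacity of the resulting linear features peaks when the delays are distinct but still within the probed window. The distances, speeds, and widths below are illustrative, not the paper's values:

```python
import numpy as np

def reservoir_states(u, R, v, w=0.5, K=200):
    """Each physical node i contributes the input convolved with the Gaussian
    response function of Eq. (5), i.e. a packet arriving after delay R_i / v."""
    t = np.arange(K)
    cols = []
    for Ri in R:
        G = np.exp(-0.5 * ((t - Ri / v) / w) ** 2)
        cols.append(np.convolve(u, G)[:len(u)])
    return np.array(cols).T

def memory_capacity(X, u, k_max=40, washout=300):
    """MC = sum_k r_k^2 over linear reconstructions of u[n-k] from X[n]."""
    mc = 0.0
    for k in range(1, k_max + 1):
        tgt, S = u[washout - k:-k], X[washout:]
        wts, *_ = np.linalg.lstsq(S, tgt, rcond=None)
        r = np.corrcoef(S @ wts, tgt)[0, 1]
        mc += 0.0 if np.isnan(r) else r ** 2
    return mc

rng = np.random.default_rng(5)
u = rng.uniform(-1, 1, 3000)
R = np.arange(1, 9) * 5.0              # node distances (8 physical nodes)
for v in (50.0, 2.5, 0.25):            # too fast (dense), matched, too slow (sparse)
    X = reservoir_states(u, R, v)
    print(v, memory_capacity(X, u))
```

With a fast wave all kernels overlap near zero delay (dense regime) and MC collapses; with a slow wave the delays fall outside the probed window (sparse regime); the matched speed spreads the delays evenly and maximizes MC.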
The result suggests a universal scaling between the size of the system and the wave speed for RC based on wave propagation. Our spin wave system has a characteristic length L ∼ 500 nm and a speed of v ∼ 200 m s⁻¹. In fact, the reported photonic RCs have characteristic length scales of optical fibres close to the scaling in Fig. 7.
Discussion

Figure 7 shows reports of reservoir computing in the literature with multiple nodes, plotted as a function of the distance between nodes L and the product of wave speed and delay time vτ0, for both photonic and spintronic RC. For the spintronic RC, the dipole interaction is considered for wave propagation, in which the speed is proportional to both the saturation magnetization and the thickness of the film (31) (see Supplementary Information Sec. C). For the photonic RC, the characteristic speed is the speed of light, v ∼ 10⁸ m s⁻¹. Symbol size corresponds to MC taken from the literature [see details of the plots in Supplementary Information Sec. D]. The plots lie roughly on a broad oblique
Figure 6: Scaling between characteristic size and propagating wave speed obtained by the response function method. MC (a,c) and IPC (b,d) as a function of the characteristic length scale between physical nodes R and the speed of wave propagation v. The results with the response function for the dipole interaction (a,b) and for the Gaussian function (5) (c,d) are shown. (e) Schematic illustration of the response function and its relation to wave propagation between physical nodes. When the speed of the wave is too fast, all the response functions overlap (dense regime), while the response functions cannot cover the time window when the speed of the wave is too slow (sparse regime).
line with a ratio L/(vτ0) ∼ 1. Therefore, the photonic RC requires a larger system size, as long as the delay time of the input τ0 = Nvθ is of the same order (τ0 = 0.3 − 3 ns in our spin wave RC). As can be seen in Fig. 6, if one wants to reduce the distance between physical nodes, one must reduce the wave speed or the delay time; otherwise the information is dense, and the reservoir cannot memorize many degrees of freedom (see Fig. 6(e)). Reducing the delay time is challenging, since experimental demonstrations of photonic reservoirs have already used short delays close to the instrumental limit. Reducing the wave speed in photonic systems is also challenging. On the other hand, the wave speed of propagating spin waves is much lower than the speed of light and can be tuned by the configuration, thickness, and material parameters. If one reduces the wave speed or delay time beyond the broad line in Fig. 7, the information becomes sparse and cannot be used efficiently (see Fig. 6(e)). Therefore, there is an optimal condition for high-performance RC.
The performance is comparable with other state-of-the-art techniques, which are summarized in Fig. 8. For example, for the spintronic RC, MC ≈ 30 (19) and NRMSE ≈ 0.2 (22) in the NARMA10 task are obtained using Np ≈ 100 physical nodes. The spintronic RC with one physical node but with 10¹ − 10² virtual nodes does not show high performance; MC is less than 10 (the bottom left points in Fig. 8). This fact suggests that the spintronic RC so far cannot use virtual nodes effectively. On the other hand, for the photonic RC, comparable performances are achieved using Nv ≈ 50 virtual nodes but only one physical node. As we discussed, however, the photonic RC requires mm system sizes. Our system achieves comparable performances using ≲ 10 physical nodes, and its size is down to the nanoscale while keeping the 2 − 50 GHz computational speed. We also demonstrate that the spin wave RC can perform time-series prediction and attractor reconstruction for chaotic data. To our knowledge, this has not been done in nanoscale systems.

Our results of micromagnetic simulations suggest that our system can be physically implemented.
Figure 7: Reports of reservoir computing using multiple nodes, plotted as a function of the distance between nodes and the characteristic wave speed (v) times the delay time (τ0), for photonic systems (open symbols) and spintronic systems (solid symbols). The symbol size corresponds to the memory capacity, taken from the literature (12, 19, 22, 32–35) and this work. The gray scale represents the memory capacity evaluated using the response function method [Eq. (5)].
Figure 8: Reservoir computing performance compared across different systems. (a) Reported memory capacity MC plotted as a function of the number of physical nodes Np. (b) Normalized root mean square error (NRMSE) for the NARMA10 task plotted as a function of Np. Open blue symbols are values reported for photonic RC, while solid red symbols are values reported for spintronic RC. MC and NRMSE for the NARMA10 task are taken from Refs. (9, 19, 22, 36, 37) for spintronic RC and Refs. (32–34, 38, 39) for photonic RC.
All the parameters in this study are feasible using realistic materials (40–43). Nanoscale propagating spin waves in a ferromagnetic thin film excited by spin-transfer torque through nanometer electrical contacts have been observed (44–46). Patterning of multiple electrical nanocontacts into magnetic thin films was demonstrated in mutually synchronized spin-torque oscillators (46). In addition to the excitation of propagating spin waves in a magnetic thin film, the non-local magnetization dynamics can be detected by the tunnel magnetoresistance effect at each electrical contact, as schematically shown in Fig. 1(c); such contacts are widely used in the development of spintronic memories and spin-torque oscillators. Moreover, virtual nodes are used effectively in our system by matching the speed of the propagating spin wave to the distance between physical nodes; thus, high-performance reservoir computing can be achieved with a small number of physical nodes, in contrast to the many physical nodes used in previous reports. This work provides a way to realize nanoscale high-performance reservoir computing based on propagating spin waves in a ferromagnetic thin film.
|
There is an interesting connection between our study and the recently proposed next-generation RC (28, 47), in which the linear ESN is identified with the NVAR (nonlinear vector autoregression) method for estimating a dynamical equation from data. Our formula for the response function (3) results in the linear input-output relationship with delay, $Y_{n+1} = a_n U_n + a_{n-1} U_{n-1} + \cdots$ (see Sec. A in the Supplementary Information). More generally, with a nonlinear readout or with higher-order response functions, we obtain an input-output relationship with delay and nonlinearity, $Y_{n+1} = a_n U_n + a_{n-1} U_{n-1} + \cdots + a_{n,n} U_n Y_n + a_{n,n-1} U_n U_{n-1} + \cdots$ (see Sec. B in the Supplementary Information). These input-output relations are nothing but the Volterra series of the output as a function of the delayed input (48). The coefficients of the expansion are associated with the response function. Therefore, the performance of the RC is determined by the independent components of the matrix of response functions, which can be evaluated by how much delay the response functions between two nodes cover without overlap. These results should be helpful for designing the network of physical nodes.
We should note that the polynomial basis of the input-output relation in this study originates from spin-wave excitation around the stationary state mz = 1. When the input data has a hierarchical structure, another basis may be more efficient than the polynomial expansion, and a different setup of the magnetic system may lead to a different basis. We believe that our study provides a simple but clear intuition for the mechanism of high-performance RC, which can guide the exploration of other setups for more practical applications of physical RC.
Materials and Methods

Micromagnetic simulations

We analyze the LLG equation using the micromagnetic simulator mumax3 (49). The LLG equation for the magnetization M(x, t) reads
$$\partial_t \mathbf{M}(\mathbf{x}, t) = -\frac{\gamma \mu_0}{1+\alpha^2}\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}} - \frac{\alpha\gamma\mu_0}{M_s(1+\alpha^2)}\,\mathbf{M}\times(\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}) + \frac{\hbar P \gamma}{4 M_s^2 e D}\,J(\mathbf{x}, t)\,\mathbf{M}\times(\mathbf{M}\times\mathbf{m}_f). \tag{6}$$
We consider the effective magnetic field as
$$\mathbf{H}_{\mathrm{eff}} = \mathbf{H}_{\mathrm{ext}} + \mathbf{H}_{\mathrm{ms}} + \mathbf{H}_{\mathrm{exch}}, \tag{7}$$
$$\mathbf{H}_{\mathrm{ext}} = H_0\, \mathbf{e}_z, \tag{8}$$
$$\mathbf{H}_{\mathrm{ms}} = -\frac{1}{4\pi} \int \nabla\nabla \frac{1}{|\mathbf{r}-\mathbf{r}'|}\,\mathbf{M}(\mathbf{r}')\, d\mathbf{r}', \tag{9}$$
$$\mathbf{H}_{\mathrm{exch}} = \frac{2 A_{\mathrm{ex}}}{\mu_0 M_s^2}\, \Delta \mathbf{M}, \tag{10}$$
where $\mathbf{H}_{\mathrm{ext}}$ is the external magnetic field, $\mathbf{H}_{\mathrm{ms}}$ is the magnetostatic (demagnetizing) field, and $\mathbf{H}_{\mathrm{exch}}$ is the exchange field with the exchange stiffness $A_{\mathrm{ex}}$.
The size of our system is L = 1000 nm and D = 4 nm. The number of mesh points is 200 in the x and y directions and 1 in the z direction. We consider the Co2MnSi Heusler-alloy ferromagnet, which has low Gilbert damping and high spin polarization, with the parameters Aex = 23.5 pJ/m, Ms = 1000 kA/m, and α = 5 × 10⁻⁴ (40–43). An out-of-plane magnetic field µ0H0 = 1.5 T is applied so that the magnetization points out of plane. The spin-polarized current is included through the Slonczewski model (29) with polarization parameter P = 1 and spin-torque asymmetry parameter λ = 1, where ℏ is the reduced Planck constant and e is the elementary charge. The fixed-layer magnetization is uniform, mf = ex. We use absorbing boundary layers for spin waves to ensure that the magnetization dynamics vanishes at the boundary of the system (50). We set the initial magnetization as m = ez.
The reference time scale in this system is τ0 = 1/(γµ0Ms) ≈ 5 ps, where γ is the gyromagnetic ratio, µ0 is the vacuum permeability, and Ms is the saturation magnetization. The reference length scale is the exchange length l0 ≈ 5 nm. The relevant parameters are the Gilbert damping α, the time scale of the input time series θ, and the characteristic distance between the input nodes R0.
The injectors and detectors of spin are cylindrical nanocontacts of radius a and height D embedded in the film. We set a = 20 nm unless otherwise stated. The input time series is uniform random noise Un ∈ U(0, 0.5). The injected current density is set as j(tn) = 2jcUn with jc = 2 × 10⁻⁴/(πa²) A/m². Given an input time series of length T, we apply each current during the time θ and then update the current at the next step. The same input current with different filters is injected at different virtual nodes (see Learning with reservoir computing). The total simulation time is therefore TθNv.
Learning with reservoir computing

Our RC architecture consists of the reservoir state variables
$$\mathbf{X}(t + \Delta t) = f\left(\mathbf{X}(t), U(t)\right) \tag{11}$$
and the readout
$$Y_n = \mathbf{W} \cdot \tilde{\tilde{\mathbf{X}}}(t_n). \tag{12}$$
In our spin-wave RC, the reservoir state is chosen as the x-component of the magnetization,
$$\mathbf{X} = \left( m_{x,1}(t_n), \ldots, m_{x,i}(t_n), \ldots, m_{x,N_p}(t_n) \right)^{\mathsf T}, \tag{13}$$
with the physical-node index i = 1, 2, . . . , Np. Here, Np is the number of physical nodes, and each mx,i(tn) is a T-dimensional row vector with n = 1, 2, . . . , T. We use a time-multiplexed network of virtual nodes in RC (23), with Nv virtual nodes at time interval θ.
The expanded reservoir state is expressed by the NpNv × T matrix $\tilde{\mathbf{X}}$ as (see Fig. 2(b))
$$\tilde{\mathbf{X}} = \left( m_{x,1}(t_{n,1}), \ldots, m_{x,1}(t_{n,N_v}), \ldots, m_{x,i}(t_{n,1}), \ldots, m_{x,i}(t_{n,N_v}), \ldots, m_{x,N_p}(t_{n,1}), \ldots, m_{x,N_p}(t_{n,N_v}) \right)^{\mathsf T}, \tag{14}$$
where $t_{n,k} = ((n-1)N_v - (k-1))\theta$ for the virtual-node indices k = 1, 2, . . . , Nv. The total number of rows is N = NpNv. We use a nonlinear readout by augmenting the reservoir state as
$$\tilde{\tilde{\mathbf{X}}} = \begin{pmatrix} \tilde{\mathbf{X}} \\ \tilde{\mathbf{X}} \circ \tilde{\mathbf{X}} \end{pmatrix}, \tag{15}$$
where $\tilde{\mathbf{X}} \circ \tilde{\mathbf{X}}$ is the Hadamard (component-wise) product of $\tilde{\mathbf{X}}$ with itself. The readout weight W is trained on the data of the output Y(t) as
$$\mathbf{W} = \mathbf{Y} \cdot \tilde{\tilde{\mathbf{X}}}^{\dagger}, \tag{16}$$
where $\tilde{\tilde{\mathbf{X}}}^{\dagger}$ is the pseudo-inverse of $\tilde{\tilde{\mathbf{X}}}$.
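As a minimal numerical sketch of the readout training in Eqs. (15) and (16) (illustrative NumPy code, not the authors' implementation; a random matrix stands in for the simulated magnetization, and all array names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 16, 200                      # rows of the reservoir matrix and time steps

X_tilde = rng.standard_normal((N, T))          # expanded reservoir state, Eq. (14)
XX = np.vstack([X_tilde, X_tilde * X_tilde])   # augmented state with Hadamard square, Eq. (15)

Y = rng.standard_normal(T)                     # target output time series

# Train the readout weights with the Moore-Penrose pseudo-inverse, Eq. (16)
W = Y @ np.linalg.pinv(XX)

Y_hat = W @ XX                                 # estimated output
```

The pseudo-inverse yields the least-squares readout; in practice a ridge (Tikhonov) term is often added for numerical stability.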
In the time-multiplexing approach, the input time series U = (U1, U2, . . . , UT) ∈ Rᵀ is translated into the piecewise-constant time series ˜U(t) = Un with t = (n − 1)Nvθ + s, where n = 1, . . . , T and s ∈ [0, Nvθ) (see Fig. 2(a)). This means that the same input is held during the period Nvθ. To exploit the advantage of physical and virtual nodes, the actual input Ji(t) at the ith physical node is ˜U(t) multiplied by an Nvθ-periodic random binary filter Bi(t). Here, Bi(t) ∈ {0, 1} is piecewise constant over each interval θ. At each physical node, we use a different realization of the binary filter, as in Fig. 2(a).
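The time-multiplexed, mask-filtered input described above can be sketched as follows (illustrative NumPy code with hypothetical array names; the parameter values are placeholders, not the simulation settings):

```python
import numpy as np

rng = np.random.default_rng(1)
T, Nv, Np = 5, 4, 3          # input steps, virtual nodes, physical nodes

U = rng.uniform(0.0, 0.5, size=T)         # input time series U_n ~ U(0, 0.5)
U_tilde = np.repeat(U, Nv)                # piecewise constant over each period Nv*theta

# One (Nv*theta)-periodic binary mask per physical node, piecewise constant over theta
B = rng.integers(0, 2, size=(Np, Nv))
masks = np.tile(B, (1, T))                # repeat the same mask every input period

J = masks * U_tilde[None, :]              # masked input J_i(t) on the theta grid
```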
Unless otherwise stated, we use 1000 steps of the input time series as burn-in. After these steps, we use 5000 steps for training and 5000 steps for testing in the MC, IPC, and NARMA10 tasks.
NARMA task

The NARMA10 task is based on the discrete difference equation
$$Y_{n+1} = \alpha Y_n + \beta Y_n \sum_{p=0}^{9} Y_{n-p} + \gamma U_n U_{n-9} + \delta. \tag{17}$$
Here, Un is an input drawn from the uniform random distribution U(0, 0.5), and Yn is the output. We choose the parameters as α = 0.3, β = 0.05, γ = 1.5, and δ = 0.1. In RC, the input is U = (U1, U2, . . . , UT) and the output is Y = (Y1, Y2, . . . , YT). The goal of the NARMA10 task is to estimate the output time series Y from the given input U. The RC is trained by tuning the weights W so that the estimated output ˆY(tn) is close to the true output Yn in terms of the squared norm |ˆY(tn) − Yn|².
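The recurrence (17) can be generated directly; the following sketch (our own illustrative code, not from the paper) produces a NARMA10 target sequence from a random input:

```python
import numpy as np

def narma10(U, alpha=0.3, beta=0.05, gamma=1.5, delta=0.1):
    """Generate the NARMA10 output of Eq. (17) for an input sequence U."""
    T = len(U)
    Y = np.zeros(T)
    for n in range(9, T - 1):
        Y[n + 1] = (alpha * Y[n]
                    + beta * Y[n] * np.sum(Y[n - 9:n + 1])
                    + gamma * U[n] * U[n - 9]
                    + delta)
    return Y

rng = np.random.default_rng(2)
U = rng.uniform(0.0, 0.5, size=500)
Y = narma10(U)
```

With inputs restricted to [0, 0.5] and these standard parameters, the recurrence stays bounded.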
The performance of the NARMA10 task is measured by the deviation of the estimated time series ˆY = W · ˜˜X from the true output Y. The normalized root-mean-square error (NRMSE) is
$$\mathrm{NRMSE} \equiv \sqrt{\frac{\sum_n \left(\hat{Y}(t_n) - Y_n\right)^2}{\sum_n Y_n^2}}. \tag{18}$$
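A direct transcription of Eq. (18) (illustrative helper, assuming NumPy arrays):

```python
import numpy as np

def nrmse(y_hat, y):
    """Normalized root-mean-square error of Eq. (18)."""
    return np.sqrt(np.sum((y_hat - y) ** 2) / np.sum(y ** 2))

y = np.array([1.0, 2.0, 2.0])
perfect = nrmse(y, y)            # 0.0 for a perfect estimate
trivial = nrmse(np.zeros(3), y)  # 1.0 when predicting zero
```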
The task performance is high when NRMSE ≈ 0. For the ESN, it was reported that NRMSE ≈ 0.4 for N = 50 and NRMSE ≈ 0.2 for N = 200 (51). The number of nodes N = 200 was used for speech recognition with a word error rate of ≈ 0.02 (51) and for time-series prediction of spatio-temporal chaos (5). Therefore, NRMSE ≈ 0.2 is considered reasonably high performance for practical applications. We stress that we use the same order of nodes (virtual and physical combined), N = 128, to achieve NRMSE ≈ 0.2.
|
Memory capacity and information processing capacity

Memory capacity (MC) is a measure of the short-term memory of RC, introduced in (6). For an input Un given by a random time series drawn from the uniform distribution, the network is trained for the output Yn = Un−k. The MC is computed from
$$\mathrm{MC}_k = \frac{\left\langle U_{n-k},\, \mathbf{W} \cdot \mathbf{X}(t_n)\right\rangle^2}{\left\langle U_n^2\right\rangle \left\langle \left(\mathbf{W} \cdot \mathbf{X}(t_n)\right)^2\right\rangle}. \tag{19}$$
This quantity decays as the delay k increases, and the MC is defined as
$$\mathrm{MC} = \sum_{k=1}^{k_{\max}} \mathrm{MC}_k. \tag{20}$$
Here, kmax is the maximum delay, set to kmax = 100 in this study. The advantage of MC is that when the input is independent and identically distributed (i.i.d.) and the output function is linear, MC is bounded by N, the number of internal nodes.
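The MC evaluation of Eqs. (19) and (20) can be sketched as below (illustrative code; we use the sample correlation as the normalized inner product, and an idealized delay-line reservoir as a sanity check; none of this is the authors' implementation):

```python
import numpy as np

def memory_capacity(X, U, k_max=20):
    """Sketch of Eqs. (19)-(20): sum over delays k of the squared correlation
    between the delayed input U_{n-k} and a trained linear readout of X."""
    mc = 0.0
    for k in range(1, k_max + 1):
        target = U[:-k]                        # U_{n-k}
        states = X[:, k:]                      # X(t_n)
        W = target @ np.linalg.pinv(states)    # least-squares readout
        y = W @ states
        mc += np.corrcoef(target, y)[0, 1] ** 2
    return mc

# Toy reservoir: an 8-tap delay line remembers exactly 8 past inputs
rng = np.random.default_rng(3)
U = rng.uniform(0.0, 0.5, size=400)
X = np.stack([np.concatenate([np.zeros(d), U[:-d]]) for d in range(1, 9)])
mc = memory_capacity(X, U)
```

For this toy delay line, MC_k ≈ 1 for k ≤ 8 and ≈ 0 beyond, so MC is close to 8, illustrating the bound MC ≤ N.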
Information processing capacity (IPC) is a nonlinear version of MC (27). In this task, the output is set as
$$Y_n = \prod_k P_{d_k}(U_{n-k}), \tag{21}$$
where each dk is a non-negative integer and $P_{d_k}(x)$ is the Legendre polynomial of order dk. We may define
$$\mathrm{IPC}_{d_0, d_1, \ldots, d_{T-1}} = \frac{\left\langle Y_n,\, \mathbf{W}\cdot\mathbf{X}(t_n)\right\rangle^2}{\left\langle Y_n^2\right\rangle \left\langle \left(\mathbf{W}\cdot\mathbf{X}(t_n)\right)^2\right\rangle}, \tag{22}$$
and then compute the jth-order IPC as
$$\mathrm{IPC}_j = \sum_{\{d_k\}\ \mathrm{s.t.}\ j = \sum_k d_k} \mathrm{IPC}_{d_0, d_1, \ldots, d_{T-1}}. \tag{23}$$
When j = 1, the IPC is in fact equivalent to MC, because P0(x) = 1 and P1(x) = x. In this case, Yn = Un−k for dᵢ = 1 when i = k and dᵢ = 0 otherwise, and (23) takes the sum over all possible delays k, which is nothing but MC. When j > 1, the IPC captures all nonlinear transformations and delays up to the jth polynomial order. For example, when j = 2, the output can be $Y_n = U_{n-k_1} U_{n-k_2}$ or $Y_n = U_{n-k}^2 + \mathrm{const}$. In this study, we focus on j = 2 because the second-order nonlinearity is essential for the NARMA10 task (see Sec. A in the Supplementary Information).
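A sketch of the IPC targets of Eqs. (21) and (22) (illustrative code; we rescale the input to [−1, 1], where the Legendre polynomials are orthogonal, and compare a purely linear feature set against one containing the second-order product; all names are our own):

```python
import numpy as np

def legendre_P(d, x):
    """Legendre polynomial P_d(x) via NumPy's Legendre-series evaluation."""
    c = np.zeros(d + 1)
    c[d] = 1.0
    return np.polynomial.legendre.legval(x, c)

def capacity(target, states):
    """Normalized squared inner product of Eq. (22) for a trained linear readout."""
    W = target @ np.linalg.pinv(states)
    y = W @ states
    return (target @ y) ** 2 / ((target @ target) * (y @ y))

# Second-order target (j = 2): Y_n = P_1(U_{n-1}) P_1(U_{n-2})
rng = np.random.default_rng(4)
U = rng.uniform(-1.0, 1.0, size=300)
Y = legendre_P(1, U[1:-1]) * legendre_P(1, U[:-2])

lin_states = np.vstack([U[1:-1], U[:-2]])                 # linear features only
full_states = np.vstack([lin_states, U[1:-1] * U[:-2]])   # plus the product term
```

Only the feature set containing the second-order product can capture this j = 2 target, mirroring the role of nonlinearity discussed above.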
The relevance of MC and IPC becomes clear by considering the Volterra series of the input-output relation,
$$Y_n = \sum_{k_1, k_2, \cdots, k_n} \beta_{k_1, k_2, \cdots, k_n}\, U_1^{k_1} U_2^{k_2} \cdots U_n^{k_n}. \tag{24}$$
Instead of the polynomial basis, we may use an orthonormal basis such as the Legendre polynomials,
$$Y_n = \sum_{k_1, k_2, \cdots, k_n} \beta_{k_1, k_2, \cdots, k_n}\, P_{k_1}(U_1) P_{k_2}(U_2) \cdots P_{k_n}(U_n). \tag{25}$$
Each term in (25) is characterized by the non-negative indices (k1, k2, . . . , kn). Therefore, the terms corresponding to $j = \sum_i k_i = 1$ in Yn carry information on linear terms with time delay. Similarly, the terms corresponding to $j = \sum_i k_i = 2$ carry information on second-order nonlinearity with time delay. In this view, the estimation of the output Y(t) is nothing but the estimation of the coefficients $\beta_{k_1, k_2, \ldots, k_n}$. In RC, the readout of the reservoir state at the ith node (either physical or virtual) can also be expanded as the Volterra series
$$\tilde{\tilde{X}}^{(i)}(t_n) = \sum_{k_1, k_2, \cdots, k_n} \tilde{\tilde{\beta}}^{(i)}_{k_1, k_2, \cdots, k_n}\, U_1^{k_1} U_2^{k_2} \cdots U_n^{k_n}. \tag{26}$$
Therefore, MC and IPC are essentially a reconstruction of $\beta_{k_1, k_2, \cdots, k_n}$ from $\tilde{\tilde{\beta}}^{(i)}_{k_1, k_2, \cdots, k_n}$ with i ∈ [1, N]. This can be done by regarding $\beta_{k_1, k_2, \cdots, k_n}$ as a $(T + T(T-1)/2 + \cdots)$-dimensional vector and using the matrix M associated with the readout weights as
$$\beta_{k_1, k_2, \cdots, k_n} = \mathbf{M} \cdot \begin{pmatrix} \tilde{\tilde{\beta}}^{(1)}_{k_1, k_2, \cdots, k_n} \\ \tilde{\tilde{\beta}}^{(2)}_{k_1, k_2, \cdots, k_n} \\ \vdots \\ \tilde{\tilde{\beta}}^{(N)}_{k_1, k_2, \cdots, k_n} \end{pmatrix}. \tag{27}$$
MC corresponds to the reconstruction of $\beta_{k_1, k_2, \cdots, k_n}$ for $\sum_i k_i = 1$, whereas the second-order IPC is the reconstruction of $\beta_{k_1, k_2, \cdots, k_n}$ for $\sum_i k_i = 2$. If all of the reservoir states were independent, we could reconstruct N components of $\beta_{k_1, k_2, \cdots, k_n}$. In realistic cases, the reservoir states are not independent, and therefore we can estimate only fewer than N components of $\beta_{k_1, k_2, \cdots, k_n}$.
Prediction of chaotic time-series data

Following (5), we perform the prediction of time-series data from the Lorenz model. The model is a three-variable system (A1(t), A2(t), A3(t)) obeying the equations
$$\frac{dA_1}{dt} = 10\left(A_2 - A_1\right), \tag{28}$$
$$\frac{dA_2}{dt} = A_1\left(28 - A_3\right) - A_2, \tag{29}$$
$$\frac{dA_3}{dt} = A_1 A_2 - \frac{8}{3} A_3. \tag{30}$$
The parameters are chosen such that the model exhibits chaotic dynamics. As in the other tasks, we apply different binary-noise masks $B^{(l)}_i(t) \in \{-1, 1\}$ to different physical nodes. Because the input time series is three-dimensional, we use three independent masks for A1, A2, and A3, so l ∈ {1, 2, 3}. The masked input for the ith physical node is given as $B_i(t)\tilde{U}_i(t) = B^{(1)}_i(t) A_1(t) + B^{(2)}_i(t) A_2(t) + B^{(3)}_i(t) A_3(t)$. The input is then normalized so that its range becomes [0, 0.5] and applied as an input current. Once the input is prepared, we compute the magnetization dynamics for each physical and virtual node, as in the NARMA10 task. Note that here we use the binary mask {−1, 1} instead of the {0, 1} mask used for the other tasks; we found that {0, 1} does not work for the prediction of the Lorenz model, possibly because of the symmetry of the model.
The ground-truth data of the Lorenz time series is prepared using the Runge-Kutta method with time step ∆t = 0.025. The time series covers t ∈ [−60, 75]: t ∈ [−60, −50] is used for relaxation, t ∈ (−50, 0] for training, and t ∈ (0, 75] for prediction. During the training steps, we compute the output weights by taking the output as Y = (A1(t + ∆t), A2(t + ∆t), A3(t + ∆t)). Through training, the RC learns the mapping (A1(t), A2(t), A3(t)) → (A1(t + ∆t), A2(t + ∆t), A3(t + ∆t)). In the prediction steps, we no longer use the ground-truth input but instead feed back the estimated data ($\hat{A}_1(t), \hat{A}_2(t), \hat{A}_3(t)$). Using the fixed output weights computed in the training steps, the time evolution of the estimated time series ($\hat{A}_1(t), \hat{A}_2(t), \hat{A}_3(t)$) is computed by the RC.
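The ground-truth generation described above can be sketched with a classical fourth-order Runge-Kutta integrator (illustrative code; the step count and initial condition are placeholders, not the paper's settings):

```python
import numpy as np

def lorenz_rhs(A):
    """Right-hand side of Eqs. (28)-(30)."""
    A1, A2, A3 = A
    return np.array([10.0 * (A2 - A1),
                     A1 * (28.0 - A3) - A2,
                     A1 * A2 - (8.0 / 3.0) * A3])

def integrate_rk4(A0, dt=0.025, n_steps=1000):
    """Classical RK4 integration of the Lorenz model."""
    traj = np.empty((n_steps + 1, 3))
    traj[0] = A0
    A = np.array(A0, dtype=float)
    for i in range(n_steps):
        k1 = lorenz_rhs(A)
        k2 = lorenz_rhs(A + 0.5 * dt * k1)
        k3 = lorenz_rhs(A + 0.5 * dt * k2)
        k4 = lorenz_rhs(A + dt * k3)
        A = A + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i + 1] = A
    return traj

traj = integrate_rk4([1.0, 1.0, 1.0])
```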
Theoretical analysis using response function

We consider the Landau-Lifshitz-Gilbert equation for the magnetization field m(x, t),
$$\partial_t \mathbf{m}(\mathbf{x}, t) = -\mathbf{m}\times\mathbf{h}_{\mathrm{eff}} - \alpha\,\mathbf{m}\times\left(\mathbf{m}\times\mathbf{h}_{\mathrm{eff}}\right) + \sigma(\mathbf{x}, t)\,\mathbf{m}\times\left(\mathbf{m}\times\mathbf{m}_f\right). \tag{31}$$
We normalize both the magnetization and the effective field by the saturation magnetization, m = M/Ms and heff = Heff/Ms; this normalization applies to all the fields, including the external and anisotropy fields. We also normalize the current density as σ(x, t) = J(x, t)/j0 with the unit current density $j_0 = 4 M_s^2 e \pi a^2 D \mu_0 / (\hbar P)$. We apply the current density at the nanocontacts as
$$J(\mathbf{x}, t) = 2 j_c \tilde{U}(t) \sum_{i=1}^{N_p} \chi_a\left(|\mathbf{x} - \mathbf{R}_i|\right). \tag{32}$$
Here χa(x) is the characteristic function: χa(x) = 1 when x ≤ a and χa(x) = 0 otherwise.
We expand the solution of (31) around the uniform magnetization m(x, t) = (0, 0, 1) without current injection as
$$\mathbf{m}(\mathbf{x}, t) = \mathbf{m}_0(\mathbf{x}, t) + \epsilon\, \mathbf{m}^{(1)}(\mathbf{x}, t) + O(\epsilon^2). \tag{33}$$
Here, m0(x, t) = (0, 0, 1) and ε ≪ 1 is a small parameter corresponding to the magnitude of the input σ(x, t). The first-order term corresponds to the linear response of the magnetization to the input σ, whereas the higher-order terms describe nonlinear responses, for example, $\mathbf{m}^{(2)}(\mathbf{x}, t) \sim \sigma(\mathbf{x}_1, t_1)\sigma(\mathbf{x}_2, t_2)$. Because our input is driven by the spin torque with the fixed-layer magnetization in the x-direction, mf = ex, only mx and my appear in the first-order term O(ε). The deviation of mz from mz = 1 appears at O(ε²). Therefore, for the first-order term m(1), we may define the complex magnetization
$$m = m_x + i m_y. \tag{34}$$
Here, we show that the magnetization is expressed by the response function Gij(t). The input at the jth physical node affects the magnetization at the ith physical node as
$$m_i(t) = \int d\tau\, G_{ii}(t-\tau)\,\sigma_i(\tau) + \sum_{j \neq i} \int d\tau\, G_{ij}(t-\tau)\,\sigma_j(\tau). \tag{35}$$
The input at the jth physical node is expressed by σj(t) = 2jcBj(t)Ũj(t); different physical nodes have different masks, as discussed in Learning with reservoir computing in Methods. When the wave propagation is dominated by the exchange interaction, the response function for the same node is
$$G_{ii}(t-\tau) = \frac{1}{2\pi}\, e^{-\tilde{h}(\alpha+i)(t-\tau)} \left( 1 - e^{-\frac{a^2}{4(\alpha+i)(t-\tau)}} \right), \tag{36}$$
and for different nodes it becomes
$$G_{ij}(t-\tau) = \frac{a^2}{2\pi}\, e^{-\tilde{h}(\alpha+i)(t-\tau)}\, e^{-\frac{|\mathbf{R}_i-\mathbf{R}_j|^2}{4(\alpha+i)(t-\tau)}}\, \frac{1}{2(\alpha+i)(t-\tau)}. \tag{37}$$
When the wave propagation is dominated by the dipole interaction, the response function for the same node is
$$G_{ii}(t-\tau) = \frac{1}{2\pi}\, e^{-\tilde{h}(\alpha+i)(t-\tau)}\, \frac{-1 + \sqrt{1 + \frac{a^2}{(d/4)^2(\alpha+i)^2(t-\tau)^2}}}{\sqrt{1 + \frac{a^2}{(d/4)^2(\alpha+i)^2(t-\tau)^2}}}, \tag{38}$$
and for different nodes it becomes
$$G_{ij}(t-\tau) = \frac{a^2}{2\pi}\, e^{-\tilde{h}(\alpha+i)(t-\tau)} \times \frac{1}{(d/4)^2(\alpha+i)^2(t-\tau)^2 \left( 1 + \frac{|\mathbf{R}_i-\mathbf{R}_j|^2}{(d/4)^2(\alpha+i)^2(t-\tau)^2} \right)^{3/2}}. \tag{39}$$
Clearly, Gii(0) → 1 and Gij(0) → 0, while Gii(∞) → 0 and Gij(∞) → 0.
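The exchange-type response functions (36) and (37) can be evaluated numerically as below (illustrative code in arbitrary units, with a and h̃ set to one rather than the physical values; the limiting behaviors quoted in the text can be checked by evaluating at small and large arguments):

```python
import numpy as np

def G_same(t, a=1.0, h=1.0, alpha=5e-4):
    """Same-node response function of Eq. (36), exchange-dominated case."""
    z = (alpha + 1j) * t
    return np.exp(-h * z) * (1.0 - np.exp(-a**2 / (4.0 * z))) / (2.0 * np.pi)

def G_cross(t, R, a=1.0, h=1.0, alpha=5e-4):
    """Cross-node response function of Eq. (37)."""
    z = (alpha + 1j) * t
    return (a**2 / (2.0 * np.pi)) * np.exp(-h * z) * np.exp(-R**2 / (4.0 * z)) / (2.0 * z)

# |G_ii| starts near 1/(2*pi) and decays; |G_ij| starts at ~0, rises, then decays
t = np.linspace(0.01, 50.0, 500)
gii = np.abs(G_same(t))
gij = np.abs(G_cross(t, R=5.0))
```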
Once the magnetization is expressed in the form of (35), we can compute the reservoir state X under the input U. We then use the same method as in Learning with reservoir computing and estimate the output Ŷ. As for the micromagnetic simulations, we evaluate the performance with the MC, IPC, and NARMA10 tasks.

We may extend the analysis to the higher-order terms in the expansion of (33). In Sec. B of the Supplementary Materials, we show that the second-order term m(2)(x, t) has only a z-component and, moreover, depends only on the first-order terms. As a result, the second-order term is expressed as
$$m_z^{(2)}(\mathbf{x}, t) = -\frac{1}{2}\left( \left(m_x^{(1)}\right)^2 + \left(m_y^{(1)}\right)^2 \right). \tag{40}$$
To compute the response functions, we linearize (31) for the complex magnetization m(x, t) as
$$\partial_t m(\mathbf{x}, t) = \mathcal{L} m + \sigma(\mathbf{x}, t), \tag{41}$$
where the linear operator is expressed as
$$\mathcal{L} = \left( -\tilde{h} + \Delta \right)(\alpha + i). \tag{42}$$
In Fourier space, the linearized equation becomes
$$\partial_t m_{\mathbf{k}}(t) = \mathcal{L}_k m_{\mathbf{k}} + \sigma_{\mathbf{k}}(t), \tag{43}$$
with
$$\mathcal{L}_k = -\left( \tilde{h} + k^2 \right)(\alpha + i). \tag{44}$$
The solution of (43) is obtained as
$$m_{\mathbf{k}}(t) = \int d\tau\, e^{\mathcal{L}_k(t-\tau)}\, \sigma_{\mathbf{k}}(\tau). \tag{45}$$
We have Np cylindrical inputs with radius a, the ith of which is located at Ri. The input function is expressed as
$$\sigma(\mathbf{x}) = \sum_{i=1}^{N_p} \chi_a\left(|\mathbf{x} - \mathbf{R}_i|\right). \tag{46}$$
We are interested in the magnetization at the inputs, mi(t) = m(x = Ri, t), which is
$$\begin{aligned} m_i &= \frac{1}{(2\pi)^2} \sum_j \int d\tau\, e^{-\tilde{h}(\alpha+i)(t-\tau)} \int d\mathbf{k}\, e^{-k^2(\alpha+i)(t-\tau)}\, e^{i\mathbf{k}\cdot(\mathbf{R}_i-\mathbf{R}_j)}\, 2\pi a\, J_1(ka)\, \sigma_j(\tau) \\ &= \frac{a}{2\pi} \sum_j \int d\tau\, e^{-\tilde{h}(\alpha+i)(t-\tau)} \int dk\, e^{-k^2(\alpha+i)(t-\tau)}\, J_0\left(k|\mathbf{R}_i-\mathbf{R}_j|\right) J_1(ka)\, \sigma_j(\tau). \end{aligned} \tag{47}$$
For the same node, |Ri − Rj| = 0, and we may compute the integral explicitly, obtaining (36). When ka ≪ 1, we may assume J1(ka) ≈ ka/2 and finally arrive at (37).
When the thickness d of the material is small, the dispersion relation becomes
$$\mathcal{L}_k = -\tilde{h}(\alpha + i) \sqrt{\left(1 + \frac{k^2}{\tilde{h}}\right)\left(1 + \frac{k^2}{\tilde{h}} + \frac{\beta k}{\tilde{h}}\right)}, \tag{48}$$
where
$$\beta = \frac{d}{2}. \tag{49}$$
Assuming $k \ll \beta, \sqrt{\tilde{h}}$, the linearized operator becomes
$$\mathcal{L}_k = -(\alpha + i)\left( \tilde{h} + \frac{k d}{4} \right), \tag{50}$$
leading to (38) and (39).
Acknowledgements:
S.M. thanks CSRN at Tohoku University. Numerical simulations in this work were carried out in part on the AI Bridging Cloud Infrastructure (ABCI) at the National Institute of Advanced Industrial Science and Technology (AIST), and on the supercomputer system at the Information Initiative Center, Hokkaido University, Sapporo, Japan.

Funding:
This work is supported by JSPS KAKENHI Grant Numbers 21H04648 and 21H05000 to S.M., by JST PRESTO Grant Number JPMJPR22B2 to S.I., by X-NICS, MEXT Grant Number JPJ011438 to S.M., and by JST FOREST Program Grant Number JPMJFR2140 to N.Y.

Author Contributions
S.M., N.Y., and S.I. conceived the research. S.I., Y.K., and N.Y. carried out the simulations. N.Y. and S.I. analyzed the results. N.Y., S.I., and S.M. wrote the manuscript. All authors discussed the results and analysis.

Competing Interests
The authors declare that they have no competing financial interests.

Data and materials availability:
All data are available in the main text or the supplementary materials.
A Connection between the NARMA10 task and MC/IPC

In this section, we discuss the properties of reservoir computing necessary to achieve high performance on the NARMA10 task. In short, the NARMA10 task is dominated by memory of the data nine steps in the past and by second-order nonlinearity. We discuss these properties using two methods. The first method is based on the extended dynamic mode decomposition (DMD) (52) and the higher-order DMD (53); the second is a regression of the input-output relationship. Our results are consistent with previous studies: the requirement of memory was discussed in (54), and that of second-order nonlinear terms with a time delay in (55).
The NARMA10 task is based on the discrete difference equation
$$Y_{n+1} = \alpha Y_n + \beta Y_n \sum_{i=0}^{9} Y_{n-i} + \gamma U_n U_{n-9} + \delta. \tag{51}$$
Here, Un is the input at time step n, drawn from the uniform random distribution U(0, 0.5), and Yn is the output. We choose the parameters as α = 0.3, β = 0.05, γ = 1.5, and δ = 0.1.
In the first method, we estimate the transition matrix A from the state variable $\mathbf{Y}_n = (Y_1, Y_2, \ldots, Y_n)$ to $\mathbf{Y}_{n+1} = (Y_2, Y_3, \ldots, Y_{n+1})$, yielding
$$\mathbf{Y}_{n+1} = \mathbf{A} \cdot \mathbf{Y}_n. \tag{52}$$
We may extend the notion of the state variable to contain delayed data and polynomials of the output with time delay as
$$\mathbf{Y}_n = \left( Y_n, Y_{n-1}, \ldots, Y_1, Y_n Y_n, Y_n Y_{n-1}, \ldots, Y_1 Y_1 \right). \tag{53}$$
Including the delay terms follows the higher-order DMD (53), while the polynomial nonlinear terms are used as a polynomial dictionary as in the extended DMD (52). Here, (53) contains all combinations of the second-order terms with time delay, $Y_{n-i_1} Y_{n-i_2}$, with integers i1 and i2 satisfying 0 ≤ i1 ≤ i2 ≤ n − 1. We may straightforwardly include higher powers in (53). In the NARMA10 task, the output Yn+1 is also affected by the input Un.
Therefore, the extended DMD is generalized to include the control input as (56)
$$\mathbf{Y}_{n+1} = (\mathbf{A}\ \mathbf{B}) \cdot \begin{pmatrix} \mathbf{Y}_n \\ \mathbf{U}_n \end{pmatrix}, \tag{54}$$
where the state variable corresponding to the input includes time delay and nonlinearity and is described as
$$\mathbf{U}_n = \left( U_n, U_{n-1}, \ldots, U_1, U_n U_n, U_n U_{n-1}, \ldots, U_1 U_1 \right). \tag{55}$$
We denote the generalized transition matrix as
$$\Xi = (\mathbf{A}\ \mathbf{B}). \tag{56}$$
The idea of DMD is to estimate the transition matrix from the data. This is done by taking the pseudo-inverse of the state variables as
$$\hat{\Xi} = \mathbf{Y}_{k+1} \cdot \begin{pmatrix} \mathbf{Y}_k \\ \mathbf{U}_k \end{pmatrix}^{\dagger}. \tag{57}$$
Here, M† denotes the pseudo-inverse of a matrix M. This is nothing but a least-squares estimation with the cost function given by the difference between the left- and right-hand sides of (54). We may also include a Tikhonov regularization term.
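The estimation (57), including the optional Tikhonov term, can be sketched on a small synthetic linear system (illustrative code; the system matrices and sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical linear system Y_{n+1} = A Y_n + B U_n, to be re-identified from data
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.3]])
B_true = np.array([1.0, 0.5])

T = 500
U = rng.standard_normal(T)
Y = np.zeros((2, T))
for n in range(T - 1):
    Y[:, n + 1] = A_true @ Y[:, n] + B_true * U[n]

Z = np.vstack([Y[:, :-1], U[None, :-1]])   # stacked regressors (Y_n; U_n)
target = Y[:, 1:]                          # Y_{n+1}

lam = 1e-8                                 # Tikhonov regularization strength
Xi = target @ Z.T @ np.linalg.inv(Z @ Z.T + lam * np.eye(Z.shape[0]))
A_est, B_est = Xi[:, :2], Xi[:, 2]
```

On noiseless data the regularized least-squares solution recovers A and B essentially exactly; λ matters only when Z ZT is ill-conditioned.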
Note that in the extended DMD (52) and the higher-order DMD (53), the transition matrix Ξ is further decomposed into characteristic modes associated with its eigenvalues; this decomposition provides a dimensional reduction of the system. The estimation of the transition matrix is also called nonlinear system identification, in particular nonlinear autoregression with exogenous inputs (NARX). In this work, we focus on the estimation of the input-output relationship and do not discuss dimensional reduction. For time-series prediction, we estimate the function Yn+1 = f(Yn, Yn−1, . . . , Y1) and do not need the input Un in (54). Even in this case, we may consider a similar estimation of Ξ (in fact, of A); this estimation is the method used in the next-generation RC (47).
The second method is based on the Volterra series of the state variable Yn in terms of the input Un. In this method, we assume that the state variable is independent of its initial condition. We may then express the state variable as
$$\mathbf{Y}_n = \mathbf{G} \cdot \mathbf{U}_n. \tag{58}$$
Note that Un includes the input and its polynomials with time delay, as in (55). As in the first method, we estimate G by
$$\hat{\mathbf{G}} = \mathbf{Y}_t \cdot \mathbf{U}_t^{\dagger}. \tag{59}$$
The estimated Ĝ tells us which time delays and nonlinearities dominate the state variable.
[Figure 9; six panels of NRMSE versus delay (0 to 30), with curves for training and test data in each panel.]

Figure 9: (A-C) Estimation based on the extended DMD; (D-F) estimation based on the Volterra series. The dictionary in each case contains (A, D) first-order (linear) delay terms, (B, E) up to second-order delay terms, and (C, F) up to third-order delay terms.
The results of the two estimation methods are shown in Fig. 9. Both approaches suggest that a memory of ≈ 10 steps is enough to achieve high performance, and further memory does not improve the error. The second-order nonlinear term shows a reasonably small NRMSE of ≈ 0.01. Including the third-order nonlinearity improves the error, but there is a sign of overfitting at longer delays because the number of state variables is too large. It should also be noted that even with only the linear terms, the NRMSE becomes ≈ 0.35. This result implies that although NRMSE ≈ 0.35 is often considered good performance, the nonlinearity of the data is not learned at an error of this order.
A.1 The MC and IPC tasks as Volterra series for linear and nonlinear readout
In (3) and (4) in the main text, we show that the magnetization at the input region is expressed by the response function. The magnetization at time tn corresponding to the input Un at the nth step is expressed as

m(tn) = anUn + an−1Un−1 + · · · ,  (60)

where the coefficients an can be computed from the response function. We first consider the linear case, but we will later generalize the expression to the nonlinear case. Because we use virtual nodes, the input Un at step n is held during the time period t ∈ [tn, tn+1), which is discretized into Nv steps (tn,1, tn,2, . . . , tn,Nv), and is multiplied by the binary-noise filter (see Fig. 2 and Methods in the main text). Therefore, the magnetization is formally expressed in terms of the response function G(t − t′) as

m(tn) = Σ_{i}^{Np} [(G(0) + G(θ) + · · · + G(θ(Nv − 1))) σi(tn) + (G(θNv) + G(θ(Nv + 1)) + · · · + G(θ(2Nv − 1))) σi(tn−1) + · · ·] ,  (61)

where σi(tn) ∝ Un is the non-dimensionalized current injection at time tn at the ith physical node. Therefore, (61) reduces to the form of (60). Our input is taken from a uniform random distribution. Therefore, the inner product of the reservoir state, which is nothing but the magnetization, and the (delayed) input used to learn MC is

⟨m(tn), Un⟩ = Σ_{n=1}^{T} m(tn)Un = an⟨Un²⟩ + O(1/T).  (62)

Similarly, the variance of the magnetization is equal to the variance of the input with the coefficient associated with m(tn).
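The averaging argument behind (62) is easy to verify numerically. In the sketch below, the response coefficients a_k are made-up illustrative values, not those of the magnetic system; the point is only that the inner product with the d-step delayed input isolates a_d⟨U²⟩ up to statistical fluctuations.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000
U = rng.uniform(-1.0, 1.0, T)

# Hypothetical linear response coefficients a_k (assumed, for illustration):
# m(t_n) = sum_k a_k * U_{n-k}
a = np.array([0.8, 0.5, 0.3, 0.1])
m = sum(a[k] * np.roll(U, k) for k in range(len(a)))

# Inner product with the d-step delayed input isolates a_d * <U^2>,
# with an O(1/sqrt(T)) sampling error
for d in range(len(a)):
    est = np.mean(m * np.roll(U, d)) / np.mean(U**2)
    print(d, round(est, 3))
```

Each printed estimate converges to the corresponding a_d as T grows, because the cross terms average to zero for i.i.d. input.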
We may express the MC and IPC tasks in matrix form as

S̃ ≈ W · G · (S ◦ Win) .  (63)

Here, S is the matrix associated with the original input, and S̃ is the delayed one. The output weight is denoted by W, and Win is the matrix associated with the binary-noise mask. The goal of the MC and IPC tasks is to approximate the delayed input S̃ by the reservoir states G · S, where the reservoir states are expressed through the response function G and the input denoted by S. We define the delayed input S̃ ∈ R^{K×T} as

S̃ = [ Un    Un+1  Un+2  · · ·
      Un−1  Un    Un+1  · · ·
      Un−2  Un−1  Un    · · ·
      ...   ...   ...   ...  ] .  (64)

Here, T is the length of the time series, and K is the total length of the delay that we consider. The ith row is the (i−1)-step delayed time series. The input S ∈ R^{TNv×T} used to compute the reservoir states is expressed as
S = [ σ(tn)    σ(tn+1)  σ(tn+2)  · · ·
      ...      ...      ...      ...
      σ(tn)    σ(tn+1)  σ(tn+2)  · · ·
      σ(tn−1)  σ(tn)    σ(tn+1)  · · ·
      ...      ...      ...      ...
      σ(tn−2)  σ(tn−1)  σ(tn)    · · ·
      ...      ...      ...      ...  ] .  (65)

Note that σ(tn) ∝ Un up to a constant. Due to time multiplexing, each row is repeated Nv times, and then the time series is delayed in the next row. After multiplication by the input filter Win, the input is fed into the response function. The input filter Win ∈ R^{TNv×T} is a stack of constant row vectors of length T. The Nv different realizations of the row vectors are taken from binary noise, and the resulting Nv × T matrix is repeated T times in the row direction. This input is multiplied by the coefficients of the Volterra series G ∈ R^{N×TNv},
G = [ G(1)(0) · · · G(1)(θ(Nv − 1))   G(1)(θNv) · · · G(1)(θ(2Nv − 1))   · · ·
      G(2)(0) · · · G(2)(θ(Nv − 1))   G(2)(θNv) · · · G(2)(θ(2Nv − 1))   · · ·
      ...           ...               ...             ...
      G(N)(0) · · · G(N)(θ(Nv − 1))   G(N)(θNv) · · · G(N)(θ(2Nv − 1))   · · · ] .  (66)
Equation (63) implies that by choosing an appropriate W, we can obtain a canonical form of G. If the canonical form has the N × N identity matrix in the left part of W · G, then the reservoir reproduces the time series up to a delay of N − 1 steps. This means that the rank of the matrix G, i.e., the number of independent rows, sets the maximum number of delay steps. This is consistent with the known fact that MC is bounded by the number of independent components of the reservoir variables (6).
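This rank bound can be illustrated with a toy linear delay reservoir. The random filter matrix below is a stand-in for the rows of G (an assumption for illustration only, not the physical response function); summing squared correlations over delays recovers a linear MC close to rank(G) = N.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear reservoir: N state variables, each a random linear filter
# over the last K input steps (stand-in for the response-function rows)
N, K, T = 6, 12, 20_000
G = rng.normal(size=(N, K))          # rank N almost surely, since N < K
U = rng.uniform(-1.0, 1.0, T)
X = np.vstack([sum(G[i, k] * np.roll(U, k) for k in range(K))
               for i in range(N)])   # reservoir states, shape (N, T)

# Memory capacity: squared correlation between the best linear readout
# of X and the d-step delayed input, summed over delays
def capacity(d):
    y = np.roll(U, d)
    w = np.linalg.lstsq(X.T, y, rcond=None)[0]
    c = np.corrcoef(X.T @ w, y)[0, 1]
    return c * c

MC = sum(capacity(d) for d in range(2 * K))
print(round(MC, 2))  # close to rank(G) = N = 6
```

Even though the filters span K = 12 delay steps, the total capacity saturates at the number of independent state variables, as argued above.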
Next, we extend the Volterra series of the magnetization to include nonlinear terms. The magnetization is expressed as

m(tn) = anσ(tn) + an−1σ(tn−1) + · · · + an,nσ(tn)σ(tn) + an,n−1σ(tn)σ(tn−1) + · · · .  (67)
The delayed input S̃ is rewritten as

S̃ = [ Un       Un+1      Un+2      · · ·
      Un−1     Un        Un+1      · · ·
      ...      ...       ...       ...
      UnUn     Un+1Un+1  Un+2Un+2  · · ·
      UnUn−1   Un+1Un    Un+2Un+1  · · ·
      ...      ...       ...       ...  ] .  (68)
The matrix S̃ contains all the nonlinear combinations of the input series (Un, Un+1, · · ·). Accordingly, we should modify S, and also G, to include the nonlinear response functions. Note that to guarantee orthogonality, Legendre polynomials (or other orthogonal polynomials) should be used instead of simple power polynomials. Nevertheless, up to second-order nonlinearity, which is what is relevant for the performance on NARMA10 (see Sec. A), the difference lies only in the constant terms (P2(x) = x² − 1/2). Because we subtract the mean value of the time series of all the input, output, and reservoir states, these constant terms do not change our conclusion. With nonlinear terms, (66) is extended as G = (Glin, Gnonl). Still, the rank of the matrix remains at most N. This is the reason why the total sum of IPC, including all the linear and nonlinear delays, is bounded by the number of independent reservoir variables. When Gnonl = 0, the reservoir can memorize only the linear delay terms, and MC can reach its maximum value N. On the other hand, when Gnonl ̸= 0, MC may be less than N, but the reservoir may have finite IPC.
When the readout is nonlinear, we use the reservoir state variable

X = ( M
      M ◦ M ) ,  (69)

where ◦ is the Hadamard product. If M is linear in the input, G has the block structure

G = ( Glin  0
      0     Gnonlin ) .  (70)

In this case, rank(G) = rank(Glin) + rank(Gnonlin).
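The rank additivity follows directly from the block-diagonal structure of (70). A quick numerical check with generic blocks (randomly generated here, hence almost surely full-rank; the block sizes are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(3)
G_lin = rng.normal(size=(4, 7))      # generic block, rank 4
G_nonlin = rng.normal(size=(3, 9))   # generic block, rank 3

# Block-diagonal G as in Eq. (70)
G = np.zeros((7, 16))
G[:4, :7] = G_lin
G[4:, 7:] = G_nonlin

r_lin = np.linalg.matrix_rank(G_lin)
r_nonlin = np.linalg.matrix_rank(G_nonlin)
r = np.linalg.matrix_rank(G)
print(r_lin, r_nonlin, r)  # 4 3 7
```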
B Learning with multiple variables

In the main text, we use only mx for the readout, as in (13)-(15). That readout is nonlinear and carries the information of both mx and mx². In this section, we consider a linear readout but use both mx and mz for the output in micromagnetic simulations. We begin with the linear readout with mx only. The results of the MC and IPC tasks are shown in Fig. 10(a,b). We obtain a performance for the MC task similar to the result in the main text (Fig. 3). On the other hand, the performance for the IPC task in Fig. 10(a) is significantly poorer than the result in Fig. 3(a). This result demonstrates that the linear readout with mx only does not learn the nonlinearity effectively. Note that in the theoretical model with the response function, the IPC is exactly zero when we use the linear readout with mx only. The discrepancy arises from the expansion (33) around m0 = (0, 0, 1) in the main text. Strictly speaking, the expansion should be made around m0 under the constant input ⟨σ⟩ averaged over time at the input nanocontact. This reference state is inhomogeneous in space and is hard to compute analytically. Due to this effect, mx in the micromagnetic simulations contains a small nonlinearity.
Next, we consider the linear readout with mx and mz. As seen in Fig. 10(c,d), mz carries nonlinear information and enhances the IPC and the learning performance for NARMA10 compared with the linear readout with mx only (Fig. 10(a,b)). The performance is IPC ≈ 60 under α = 5 × 10−4, which is comparable to the results in the main text (Fig. 3(a,c)), where the readout is (mx, mx²). Also, high performance for the NARMA10 task, NRMSE ≈ 0.2, can be obtained using the variables (mx, mz). These results show that adding mz to the readout has a similar effect to adding mx².
The similarity between mx² and mz can be understood by using the theoretical formula with the response function in the main text. We continue the expansion (33) to second order and obtain

∂t m(2)(x, t) = −m(1) × ∆m(1) − αm(1) × [(h̃m(1) − ∆m(1)) × ez] + σ(x, t)m(1) × ey.  (71)
This result suggests that m(2) contains only the z component and is slaved by m(1), which has no z component. Therefore, m(2)z can be computed as

m(2)z(x, t) = −(1/2) [ (m(1)x)² + (m(1)y)² ] .  (72)

Because mx and my carry similar information, mz in the readout has a similar effect to mx² in the readout.
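Equation (72) can also be cross-checked against the unit-norm constraint |m| = 1: writing the transverse components at O(ε) and expanding mz to O(ε²) reproduces the same coefficient. The sketch below is only this consistency check, not the full derivation from (71); the values of the first-order components are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Unit-norm check: m = (eps*mx1, eps*my1, mz) with |m| = 1 implies
# mz = sqrt(1 - eps^2*(mx1^2 + my1^2)) ~ 1 - (eps^2/2)*(mx1^2 + my1^2),
# i.e. the O(eps^2) coefficient is mz2 = -(mx1^2 + my1^2)/2 as in (72)
mx1, my1 = rng.normal(size=2)
for eps in (1e-2, 1e-3):
    mz = np.sqrt(1.0 - eps**2 * (mx1**2 + my1**2))
    mz2_numeric = (mz - 1.0) / eps**2        # coefficient of eps^2 in mz
    mz2_formula = -0.5 * (mx1**2 + my1**2)   # right-hand side of (72)
    print(eps, mz2_numeric, mz2_formula)
```

As ε shrinks, the numerically extracted coefficient converges to the closed form in (72).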
C Speed of propagating spin wave using dipole interaction

A spin wave propagating when the magnetization points along the film normal is called the magneto-static forward volume mode, and its dispersion relation is described by the following equation (31):

ω(k) = γµ0 √( (H0 − Ms) ( H0 − Ms (1 − e^{−kd})/(kd) ) ) .  (73)

Then, one can obtain the group velocity at k ∼ 0 as

vg = dω/dk (k = 0) = γµ0 Ms d / 4 .  (74)

For the magneto-static spin wave driven by the dipole interaction, the group velocity is proportional to both Ms and d. vg ∼ 200 m/s is obtained with the following parameters: µ0H = 1.5 T, Ms = 1.0 × 10^6 A/m, d = 4 nm. The same estimate is used to calculate the speed of information propagation for the spin reservoirs in Refs. (19) and (22), which are used to plot Fig. 7 in the main text.
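Equation (74) and the quoted vg ∼ 200 m/s can be checked numerically with the stated parameters; the gyromagnetic ratio below is the free-electron value, which is an assumption since it is not stated in this section.

```python
import numpy as np

# Check of Eq. (74): v_g = gamma * mu0 * Ms * d / 4
gamma = 1.76e11        # gyromagnetic ratio (rad s^-1 T^-1), assumed value
mu0 = 4e-7 * np.pi     # vacuum permeability (T m / A)
Ms = 1.0e6             # saturation magnetization (A/m)
d = 4e-9               # film thickness (m)

vg = gamma * mu0 * Ms * d / 4
print(round(vg))       # ~ 221 m/s, consistent with vg ~ 200 m/s

# Cross-check against the dispersion (73) by a finite difference at small k
H0 = 1.5 / mu0         # field such that mu0*H0 = 1.5 T
def omega(k):
    return gamma * mu0 * np.sqrt(
        (H0 - Ms) * (H0 - Ms * (1 - np.exp(-k * d)) / (k * d)))
k1, k2 = 1e3, 2e3      # small wavenumbers (1/m)
print(round((omega(k2) - omega(k1)) / (k2 - k1)))
```

Both the closed form and the finite-difference slope of (73) give roughly 220 m/s, matching the order-of-magnitude estimate used in the text.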
D Details of reservoir computing scaling compared with literature

In this section, we describe the details of Fig. 7 in the main text. MC and NRMSE for the NARMA10 task using photonic and spintronic RC are reported in Refs. (12,32–35,38,39) for photonic RC and Refs. (9,19,22,25,36,37,57,58) for spintronic RC. Tables 1 and 2 show the reports of MC for photonic and spintronic RC with different length scales, which are plotted in Fig. 7 in the main text.
Table 1: Reports of photonic RC with different length scales used in Fig. 7 in the main text

Reports                  | Length, L | Time interval, τ0 | vτ0    | N   | MC
Duport et al. (32)       | 1.6 km    | 8 µs              | 2.4 km | 50  | 21
Dejonckheere et al. (33) | 1.6 km    | 8 µs              | 2.4 km | 50  | 37
Vinckier et al. (34)     | 230 m     | 1.1 µs            | 340 m  | 50  | 21
Takano et al. (12)       | 11 mm     | 200 ps            | 60 mm  | 31  | 1.5
Sugano et al. (35)       | 10 mm     | 240 ps            | 72 mm  | 240 | 10

Note: the speed of light, v = 3 × 10^8 m/s, is used.
Table 2: Reports of spin reservoirs with different length scales used in Fig. 7 in the main text

Reports            | L      | τ0     | v        | vτ0    | N   | MC
Nakane et al. (19) | 5 µm   | 2 ns   | 2.4 km/s | 4.8 µm | 72  | 21
Dale et al. (22)   | 50 nm  | 10 ps  | 200 m/s  | 2 nm   | 100 | 35
This work          | 500 nm | 1.6 ns | 200 m/s  | 320 nm | 64  | 26

Note: v is calculated for the magneto-static spin wave using Eq. (74).
E Other data

E.1 Nv and Np dependence of performance

Fig. 11 shows the Nv and Np dependences of MC, IPC, and NRMSE for the NARMA10 task. As Nv and Np are increased, MC and IPC increase. Accordingly, the NARMA10 prediction task becomes better with increasing Nv and Np. MC and NRMSE for NARMA10 with different Np at fixed Nv = 8 are compared with the other reservoirs shown in Fig. 8 in the main text.
E.2 Exchange interaction

In the main text, we use the dipole interaction to compute the response function, as in (38) and (39). In this section, we show the results obtained using the exchange interaction given in (36) and (37). Figure 12 shows the results.
References

1. A. V. Chumak, V. I. Vasyuchka, A. A. Serga, B. Hillebrands, Magnon spintronics. Nature Physics 11, 453–461 (2015).
2. J. Grollier, D. Querlioz, K. Camsari, K. Everschor-Sitte, S. Fukami, M. D. Stiles, Neuromorphic spintronics. Nature Electronics 3, 360–370 (2020).
3. A. Barman, G. Gubbiotti, S. Ladak, A. O. Adeyeye, M. Krawczyk, J. Gräfe, C. Adelmann, S. Cotofana, A. Naeemi, V. I. Vasyuchka, et al., The 2021 magnonics roadmap. Journal of Physics: Condensed Matter 33, 413001 (2021).
4. H. Jaeger, H. Haas, Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 304, 78–80 (2004).
5. J. Pathak, B. Hunt, M. Girvan, Z. Lu, E. Ott, Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach. Phys. Rev. Lett. 120, 024102 (2018).
6. H. Jaeger, Short term memory in echo state networks. Tech. Rep. GMD Report 152, German National Research Center for Information Technology (2002).
7. W. Maass, T. Natschläger, H. Markram, Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation 14, 2531–2560 (2002).
8. J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. D. Stiles, J. Grollier, Neuromorphic computing with nanoscale spintronic oscillators. Nature 547, 428 (2017).
9. S. Tsunegi, T. Taniguchi, K. Nakajima, S. Miwa, K. Yakushiji, A. Fukushima, S. Yuasa, H. Kubota, Physical reservoir computing based on spin torque oscillator with forced synchronization. Applied Physics Letters 114, 164101 (2019).
10. M. Rafayelyan, J. Dong, Y. Tan, F. Krzakala, S. Gigan, Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction. Phys. Rev. X 10, 041037 (2020).
11. L. Larger, M. C. Soriano, D. Brunner, L. Appeltant, J. M. Gutierrez, L. Pesquera, C. R. Mirasso, I. Fischer, Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing. Opt. Express 20, 3241–3249 (2012).
12. K. Takano, C. Sugano, M. Inubushi, K. Yoshimura, S. Sunada, K. Kanno, A. Uchida, Compact reservoir computing with a photonic integrated circuit. Opt. Express 26, 29424–29439 (2018).
13. M. Lukoševičius, H. Jaeger, Reservoir computing approaches to recurrent neural network training. Computer Science Review 3, 127–149 (2009).
14. G. Van der Sande, D. Brunner, M. C. Soriano, Advances in photonic reservoir computing. Nanophotonics 6, 561–576 (2017).
15. G. Tanaka, T. Yamane, J. B. Héroux, R. Nakane, N. Kanazawa, S. Takeda, H. Numata, D. Nakano, A. Hirose, Recent advances in physical reservoir computing: A review. Neural Networks 115, 100–123 (2019).
16. D. Prychynenko, M. Sitte, K. Litzius, B. Krüger, G. Bourianoff, M. Kläui, J. Sinova, K. Everschor-Sitte, Magnetic skyrmion as a nonlinear resistive element: A potential building block for reservoir computing. Phys. Rev. Applied 9, 014034 (2018).
17. R. Nakane, G. Tanaka, A. Hirose, Reservoir computing with spin waves excited in a garnet film. IEEE Access 6, 4462–4469 (2018).
18. T. Ichimura, R. Nakane, G. Tanaka, A. Hirose, A numerical exploration of signal detector arrangement in a spin-wave reservoir computing device. IEEE Access 9, 72637–72646 (2021).
19. R. Nakane, A. Hirose, G. Tanaka, Spin waves propagating through a stripe magnetic domain structure and their applications to reservoir computing. Phys. Rev. Research 3, 033243 (2021).
20. T. W. Hughes, I. A. D. Williamson, M. Minkov, S. Fan, Wave physics as an analog recurrent neural network. Science Advances 5, eaay6946 (2019).
21. G. Marcucci, D. Pierangeli, C. Conti, Theory of neuromorphic computing by waves: Machine learning by rogue waves, dispersive shocks, and solitons. Phys. Rev. Lett. 125, 093901 (2020).
22. M. Dale, R. F. L. Evans, S. Jenkins, S. O'Keefe, A. Sebald, S. Stepney, F. Torre, M. Trefzer, Reservoir computing with thin-film ferromagnetic devices. arXiv:2101.12700 (2021).
23. L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, I. Fischer, Information processing using a single dynamical node as complex system. Nature Communications 2, 468 (2011).
24. A. Röhm, K. Lüdge, Multiplexed networks: reservoir computing with virtual and real nodes. Journal of Physics Communications 2, 085007 (2018).
25. T. Furuta, K. Fujii, K. Nakajima, S. Tsunegi, H. Kubota, Y. Suzuki, S. Miwa, Macromagnetic simulation for reservoir computing utilizing spin dynamics in magnetic tunnel junctions. Phys. Rev. Applied 10, 034063 (2018).
26. F. Stelzer, A. Röhm, K. Lüdge, S. Yanchuk, Performance boost of time-delay reservoir computing by non-resonant clock cycle. Neural Networks 124, 158–169 (2020).
27. J. Dambre, D. Verstraeten, B. Schrauwen, S. Massar, Information processing capacity of dynamical systems. Scientific Reports 2, 514 (2012).
28. E. Bollt, On explaining the surprising success of reservoir computing forecaster of chaos? The universal machine learning dynamical system with contrast to VAR and DMD. Chaos: An Interdisciplinary Journal of Nonlinear Science 31, 013108 (2021).
29. J. Slonczewski, Excitation of spin waves by an electric current. Journal of Magnetism and Magnetic Materials 195, L261–L268 (1999).
30. L. Gonon, J.-P. Ortega, Reservoir computing universality with stochastic inputs. IEEE Transactions on Neural Networks and Learning Systems 31, 100–112 (2019).
31. B. Hillebrands, J. Hamrle, Investigation of Spin Waves and Spin Dynamics by Optical Techniques (John Wiley & Sons, Ltd, 2007).
32. F. Duport, B. Schneider, A. Smerieri, M. Haelterman, S. Massar, All-optical reservoir computing. Opt. Express 20, 1958–1964 (2012).
33. A. Dejonckheere, F. Duport, A. Smerieri, L. Fang, J.-L. Oudar, M. Haelterman, S. Massar, All-optical reservoir computer based on saturation of absorption. Opt. Express 22, 10868 (2014).
34. Q. Vinckier, F. Duport, A. Smerieri, K. Vandoorne, P. Bienstman, M. Haelterman, S. Massar, High-performance photonic reservoir computer based on a coherently driven passive cavity. Optica 2, 438 (2015).
35. C. Sugano, K. Kanno, A. Uchida, Reservoir computing using multiple lasers with feedback on a photonic integrated circuit. IEEE Journal of Selected Topics in Quantum Electronics 26, 1500409 (2020).
36. T. Kanao, H. Suto, K. Mizushima, H. Goto, T. Tanamoto, T. Nagasawa, Reservoir computing on spin-torque oscillator array. Phys. Rev. Applied 12, 042052 (2019).
37. S. Watt, M. Kostylev, A. B. Ustinov, B. A. Kalinikos, Implementing a magnonic reservoir computer model based on time-delay multiplexing. Phys. Rev. Applied 15, 064060 (2021).
38. F. Duport, A. Smerieri, A. Akrout, M. Haelterman, S. Massar, Fully analogue photonic reservoir computer. Scientific Reports 6, 22381 (2016).
39. Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, S. Massar, Optoelectronic reservoir computing. Scientific Reports 2, 287 (2012).
40. J. Hamrle, O. Gaier, S. G. Min, B. Hillebrands, Y. Sakuraba, Y. Ando, Determination of exchange constants of Heusler compounds by Brillouin light scattering spectroscopy: Application to Co2MnSi. Journal of Physics D: Applied Physics 42, 084005 (2009).
41. T. Kubota, J. Hamrle, Y. Sakuraba, O. Gaier, M. Oogane, A. Sakuma, B. Hillebrands, K. Takanashi, Y. Ando, Structure, exchange stiffness, and magnetic anisotropy of Co2MnAlxSi1−x Heusler compounds. Journal of Applied Physics 106, 113907 (2009).
42. C. Guillemard, S. Petit-Watelot, L. Pasquier, D. Pierre, J. Ghanbaja, J. C. Rojas-Sánchez, A. Bataille, J. Rault, P. Le Fèvre, F. Bertran, S. Andrieu, Ultralow magnetic damping in Co2Mn-based Heusler compounds: Promising materials for spintronics. Phys. Rev. Applied 11, 064009 (2019).
43. C. Guillemard, W. Zhang, G. Malinowski, C. de Melo, J. Gorchon, S. Petit-Watelot, J. Ghanbaja, S. Mangin, P. Le Fèvre, F. Bertran, S. Andrieu, Engineering Co2MnAlxSi1−x Heusler compounds as a model system to correlate spin polarization, intrinsic Gilbert damping, and ultrafast demagnetization. Advanced Materials 32, 1908357 (2020).
44. V. E. Demidov, S. Urazhdin, S. O. Demokritov, Direct observation and mapping of spin waves emitted by spin-torque nano-oscillators. Nature Materials 9, 984–988 (2010).
45. M. Madami, S. Bonetti, G. Consolo, S. Tacchi, G. Carlotti, G. Gubbiotti, F. Mancoff, M. A. Yar, J. Åkerman, Direct observation of a propagating spin wave induced by spin-transfer torque. Nature Nanotechnology 6, 635–638 (2011).
46. S. Sani, J. Persson, S. M. Mohseni, Y. Pogoryelov, P. Muduli, A. Eklund, G. Malm, M. Käll, A. Dmitriev, J. Åkerman, Mutually synchronized bottom-up multi-nanocontact spin-torque oscillators. Nature Communications 4, 2731 (2013).
47. D. J. Gauthier, E. Bollt, A. Griffith, W. A. Barbosa, Next generation reservoir computing. Nature Communications 12, 1–8 (2021).
48. S. A. Billings, Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains (Wiley, 2013).
49. A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez, B. Van Waeyenberge, The design and verification of MuMax3. AIP Advances 4, 107133 (2014).
50. G. Venkat, H. Fangohr, A. Prabhakar, Absorbing boundary layers for spin wave micromagnetics. Journal of Magnetism and Magnetic Materials 450, 34–39 (2018).
51. A. Rodan, P. Tino, Minimum complexity echo state network. IEEE Transactions on Neural Networks 22, 131–144 (2011).
52. Q. Li, F. Dietrich, E. M. Bollt, I. G. Kevrekidis, Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator. Chaos: An Interdisciplinary Journal of Nonlinear Science 27, 103111 (2017).
53. S. Le Clainche, J. Vega, Higher order dynamic mode decomposition. SIAM Journal on Applied Dynamical Systems 16, 882–925 (2017).
54. T. L. Carroll, Optimizing memory in reservoir computers. Chaos: An Interdisciplinary Journal of Nonlinear Science 32, 023123 (2022).
55. T. Kubota, H. Takahashi, K. Nakajima, Unifying framework for information processing in stochastically driven dynamical systems. Phys. Rev. Research 3, 043135 (2021).
56. S. L. Brunton, J. N. Kutz, Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control (Cambridge University Press, 2019).
57. N. Akashi, T. Yamaguchi, S. Tsunegi, T. Taniguchi, M. Nishida, R. Sakurai, Y. Wakao, K. Nakajima, Input-driven bifurcations and information processing capacity in spintronics reservoirs. Phys. Rev. Research 2, 043303 (2020).
58. M. K. Lee, M. Mochizuki, Reservoir computing with spin waves in a skyrmion crystal. Phys. Rev. Applied 18, 014074 (2022).
Figure 10: Reservoir computing with various parameter combinations obtained using micromagnetic MuMax3 simulations. Linear memory capacity, MC, and nonlinear memory capacity, IPC, plotted as a function of θ, obtained using the linear mx output only (a) and using mx, mz (c). Normalized root mean square error, NRMSE, for the NARMA10 task plotted as a function of θ, obtained using the linear mx output only (b) and using mx, mz (d).
Figure 11: (a) Memory capacity, MC, (b) nonlinear memory capacity, IPC, and (c) normalized root mean square error, NRMSE, for the NARMA10 task, plotted as a function of the number of virtual and physical nodes. The parameters used in the simulation are α = 5 × 10−4 and θ = 0.2 ns.
|
2431 |
+
(a)
|
2432 |
+
(b)
|
2433 |
+
(c)
|
2434 |
+
wave speed (log m/s)
|
2435 |
+
characteristic size (log nm)
|
2436 |
+
3.0
|
2437 |
+
2.0
|
2438 |
+
2.0
|
2439 |
+
3.0
|
2440 |
+
4.0
|
2441 |
+
MC
|
2442 |
+
20
|
2443 |
+
30
|
2444 |
+
40
|
2445 |
+
50
|
2446 |
+
damping time
|
2447 |
+
1.0
|
2448 |
+
5.0
|
2449 |
+
4.0
|
2450 |
+
wave speed (log m/s)
|
2451 |
+
characteristic size (log nm)
|
2452 |
+
2.0
|
2453 |
+
4.0
|
2454 |
+
3.0
|
2455 |
+
2.0
|
2456 |
+
3.0
|
2457 |
+
4.0
|
2458 |
+
IPC
|
2459 |
+
20
|
2460 |
+
30
|
2461 |
+
40
|
2462 |
+
50
|
2463 |
+
damping time
|
2464 |
+
1.0
|
2465 |
+
5.0
|
2466 |
+
0
|
2467 |
+
20
|
2468 |
+
40
|
2469 |
+
60
|
2470 |
+
80
|
2471 |
+
5
|
2472 |
+
2.5
|
2473 |
+
� = 5×10-4
|
2474 |
+
Frequency, 1/� (GHz)
|
2475 |
+
0
|
2476 |
+
20
|
2477 |
+
40
|
2478 |
+
60
|
2479 |
+
80
|
2480 |
+
� = 5×10-3
|
2481 |
+
Linear and non-linear memory capacity
|
2482 |
+
0.2
|
2483 |
+
0.4
|
2484 |
+
0
|
2485 |
+
20
|
2486 |
+
40
|
2487 |
+
60
|
2488 |
+
80
|
2489 |
+
� = 5×10-2
|
2490 |
+
Distance of virtual nodes, � (ns)
|
2491 |
+
Figure 12: (a) Memory capacity, MC (solid symbols) and nonlinear memory capacity, IPC
|
2492 |
+
(open symbols) obtained using the response function method for exchange interaction plotted
|
2493 |
+
as a function of θ with different damping parameters α. (b) MC and (c) IPC plotted as a function
|
2494 |
+
of characteristic size and wave speed.
|
2495 |
+
50
|
2496 |
+
|
DtE0T4oBgHgl3EQfQgBB/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
DtE1T4oBgHgl3EQfWQTV/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:368b98a89d5fa2006a6172b0c52dad1c573023570afe04cf497f268ac010fa1a
+size 3342381
FdAyT4oBgHgl3EQfrPl7/content/tmp_files/2301.00557v1.pdf.txt
ADDED
@@ -0,0 +1,2010 @@
Learning to Maximize Mutual Information for Dynamic Feature Selection

Ian Covert 1, Wei Qiu 1, Mingyu Lu 1, Nayoon Kim 1, Nathan White 2, Su-In Lee 1

Abstract

Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.

1. Introduction

A machine learning model's inputs can be costly to obtain, and feature selection is often used to reduce data acquisition costs. In applications where information is gathered sequentially, a natural option is to select features adaptively based on the currently available information rather than using a fixed feature set. This setup is known as dynamic feature selection (DFS)¹, and the problem has been considered by several works in the last decade (Saar-Tsechansky et al., 2009; Dulac-Arnold et al., 2011; Chen et al., 2015b; Early et al., 2016a; He et al., 2016a; Kachuee et al., 2018).

Compared to static feature selection with a fixed feature set (Li et al., 2017; Cai et al., 2018), DFS can offer better performance given a fixed budget. This is easy to see, because selecting the same features for all instances (e.g., all patients visiting a hospital's emergency room) is suboptimal when the most informative features vary across individuals. Although it should in theory offer better performance, DFS also presents a more challenging learning problem, because it requires learning both a feature selection policy and how to make predictions given variable feature sets.

1 Paul G. Allen School of Computer Science & Engineering, University of Washington. 2 Department of Emergency Medicine, University of Washington. Correspondence to: Ian Covert <[email protected]>.

¹ The problem is also sometimes referred to as sequential feature selection or active feature acquisition.
Prior work has approached DFS in several ways, though often using reinforcement learning (RL) (Dulac-Arnold et al., 2011; Shim et al., 2018; Kachuee et al., 2018; Janisch et al., 2019; Li & Oliva, 2021). RL is a natural approach for sequential decision-making problems, but current methods are difficult to train and do not reliably outperform static feature selection methods (Henderson et al., 2018; Erion et al., 2021). Our work therefore explores a simpler approach: greedily selecting features based on their conditional mutual information (CMI) with the response variable.

The greedy approach is known from prior work (Fleuret, 2004; Chen et al., 2015b; Ma et al., 2019) but is difficult to use in practice, because calculating CMI requires oracle access to the data distribution (Cover & Thomas, 2012). Our focus is therefore developing a practical approximation. Whereas previous work makes strong assumptions about the data (e.g., binary features in Fleuret 2004) or approximates the data distribution with generative modeling (Ma et al., 2019), we develop a flexible approach that directly predicts the optimal selection at each step. Our method is based on a variational perspective on the greedy CMI policy, and it uses a technique known as amortized optimization (Amos, 2022) to enable training using only a standard labeled dataset. Notably, the model is trained with an objective that recovers the greedy policy when it is trained to optimality.

Our contributions in this work are the following:

1. We derive a variational, or optimization-based, perspective on the greedy CMI policy, which shows it to be equivalent to minimizing the one-step-ahead prediction loss given an optimal classifier.

2. We develop a learning approach based on amortized optimization, where a policy network is trained to directly predict the greedy selection at each step. Rather than requiring a dataset that indicates the correct selections, our training approach is based on a standard labeled dataset and an objective function whose global optimizer is the greedy CMI policy.

3. We propose a continuous relaxation for the inherently discrete learning objective, which enables efficient and architecture-agnostic training with stochastic gradient descent.

Our experiments evaluate the proposed method on numerous datasets, and the results show that it outperforms many recent dynamic and static feature selection methods. Overall, our work shows that when learned properly, the greedy CMI policy is a simple and powerful method for DFS.

arXiv:2301.00557v1 [cs.LG] 2 Jan 2023
2. Problem formulation

In this section, we describe the DFS problem and introduce notation used throughout the paper.

2.1. Notation

Let x denote a vector of input features and y a response variable for a supervised learning task. The input consists of d distinct features, or x = (x_1, ..., x_d). We use the notation s ⊆ [d] ≡ {1, ..., d} to denote a subset of indices and x_s = {x_i : i ∈ s} a subset of features. Bold symbols x, y represent random variables, the symbols x, y are possible values, and p(x, y) denotes the data distribution.

Our goal is to design a policy that controls which features are selected given the currently available information. The selection policy can be viewed as a function π(x_s) ∈ [d], meaning that it receives a subset of features as its input and outputs the next feature index to query. The policy is accompanied by a predictor f(x_s) that can make predictions given the set of available features; for example, if y is discrete then predictions lie in the probability simplex, or f(x_s) ∈ ∆^{K−1} for K classes. The notation f(x_s ∪ x_i) represents the prediction given the combined features. We initially consider policy and predictor functions that operate on feature subsets, and Section 4 proposes an implementation using a mask variable m ∈ [0, 1]^d where the functions operate on x ⊙ m.

2.2. Dynamic feature selection

The goal of DFS is to select features with minimal budget that achieve maximum predictive accuracy. Having access to more features generally makes prediction easier, so the challenge is selecting a small number of informative features. There are multiple formulations for this problem, including non-uniform feature costs and different budgets for each sample (Kachuee et al., 2018), but we focus on the setting with a fixed budget and uniform costs. Our goal is to handle data samples at test time by beginning with no features, sequentially selecting features x_s such that |s| = k for a fixed budget k < d, and finally making accurate predictions for the response variable y.

Given a loss function ℓ(ŷ, y) that measures the discrepancy between predictions and labels, a natural scoring criterion is the expected loss after selecting k features. The scoring is applied to a policy-predictor pair (π, f), and we define the score for a fixed budget k as follows,

v_k(π, f) = E_{p(x,y)} [ ℓ( f({x_{i_t}}_{t=1}^{k}), y ) ],    (1)

where feature indices are chosen sequentially for each (x, y) according to i_n = π({x_{i_t}}_{t=1}^{n−1}). The goal is to minimize v_k(π, f), or equivalently, to maximize our final predictive accuracy.

One approach is to frame this as a Markov decision process (MDP) and solve it using standard RL techniques, so that π and f are trained to optimize a reward function based on eq. (1). Several recent works have designed such formulations (Shim et al., 2018; Kachuee et al., 2018; Janisch et al., 2019; Li & Oliva, 2021). However, these approaches are difficult to train effectively, so our work focuses on a greedy approach that is easier to learn and simpler to interpret.
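The scoring criterion in eq. (1) can be sketched as a simple sequential loop. This is a minimal illustration rather than the paper's implementation: `policy`, `predictor`, and `loss` are hypothetical callables, where a policy maps the observed features and a binary mask to the next index, and a predictor maps them to a prediction.

```python
import numpy as np

def evaluate_policy(policy, predictor, X, Y, budget, loss):
    """Monte Carlo estimate of v_k(pi, f) in eq. (1): run the sequential
    selection loop for each sample, then average the final prediction loss."""
    n, d = X.shape
    total = 0.0
    for x, y in zip(X, Y):
        mask = np.zeros(d)               # no features observed initially
        for _ in range(budget):
            i = policy(x * mask, mask)   # i_n chosen from observed info only
            mask[i] = 1.0                # query feature i
        total += loss(predictor(x * mask, mask), y)
    return total / n
```

For instance, a baseline policy that always reveals the lowest-index unobserved feature can be scored with `evaluate_policy(lambda xs, m: int(np.argmin(m)), ...)`.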
3. Greedy information maximization

This section first defines the greedy CMI policy, and then describes an existing approximation strategy based on generative modeling.

3.1. The greedy selection policy

As an idealized approach to DFS, we are interested in the greedy algorithm that selects the most informative feature at each step. This feature can be defined in multiple ways, but we focus on the information-theoretic perspective that the most useful feature has maximum CMI with the response variable (Cover & Thomas, 2012). CMI, denoted as I(x_i; y | x_s), quantifies how much information an unknown feature x_i provides about the response y when accounting for the current features x_s, and it is defined as the KL divergence between the joint and factorized distributions:

I(x_i; y | x_s) = D_KL( p(x_i, y | x_s) || p(x_i | x_s) p(y | x_s) ).

Based on this, we define the greedy CMI policy as π*(x_s) ≡ arg max_i I(x_i; y | x_s), so that features are sequentially selected to maximize our information about the response variable. We can alternatively understand the policy as performing greedy uncertainty minimization, because this is equivalent to minimizing y's conditional entropy at each step, or π*(x_s) = arg min_i H(y | x_i, x_s) (Cover & Thomas, 2012). For a complete characterization of this idealized approach to DFS, we also consider that the policy is paired with the Bayes classifier as a predictor, or f*(x_s) = p(y | x_s).
Maximizing the information about y at each step is intuitive and should be effective in many problems. Prior work has considered the same idea, but from two perspectives that differ from ours. First, Chen et al. (2015b) take a theoretical perspective and prove that the greedy algorithm has bounded suboptimality relative to the optimal policy-predictor pair; the proof requires specific distributional assumptions, but we find that the greedy algorithm performs well with many real datasets (Section 6). Second, from an implementation perspective, two works aim to provide practical approximations; however, these suffer from several limitations, so our work aims to develop a simple and flexible alternative (Section 4). Of these works, Fleuret (2004) requires binary features, and Ma et al. (2019) requires a conditional generative model of the data distribution, which we discuss next.
3.2. Estimating conditional mutual information

The greedy policy is trivial to implement if we can directly calculate CMI, but this is rarely the case in practice. Instead, one option is to estimate it. We now describe a procedure to do so iteratively for each feature, assuming for now that we have oracle access to the response distributions p(y | x_s) for all s ⊆ [d] and the feature distributions p(x_i | x_s) for all s ⊆ [d] and i ∈ [d].

At any point in the selection procedure, given the current features x_s, we can estimate the CMI for a feature x_i where i ∉ s as follows. First, we can sample multiple values for x_i from its conditional distribution, or x_i^j ∼ p(x_i | x_s) for j ∈ [n]. Next, we can generate Bayes optimal predictions for each sampled value, or p(y | x_s, x_i^j). Finally, we can calculate the mean prediction and the mean KL divergence relative to the mean prediction, which yields the following CMI estimator:

I_i^n = (1/n) Σ_{j=1}^{n} D_KL( p(y | x_s, x_i^j) || (1/n) Σ_{l=1}^{n} p(y | x_s, x_i^l) ).    (2)

This score measures the variability among predictions and captures whether different x_i values significantly affect y's conditional distribution. The estimator can be used to select features, or we can set π(x_s) = arg max_i I_i^n, due to the following limiting result (see Appendix A):

lim_{n→∞} I_i^n = I(y; x_i | x_s).    (3)

This procedure thus provides a way to identify the correct greedy selections by estimating the CMI. Prior work has explored similar ideas for scoring features based on sampled predictions (Saar-Tsechansky et al., 2009; Chen et al., 2015a; Early et al., 2016a;b), but the implementation choices in these works prevent them from performing greedy information maximization. In eq. (2), it is important that our estimator uses the Bayes classifier, that we sample features from the conditional distribution p(x_i | x_s), and that we use the KL divergence as a measure of prediction variability. However, this estimator is impractical because we typically lack access to both p(y | x_s) and p(x_i | x_s).
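On a small discrete problem, the estimator in eq. (2) can be computed in its exact n → ∞ limit, i.e., the true CMI of eq. (3), by enumerating x_i under p(x_i | x_s) instead of sampling. The sketch below assumes oracle access to a tabulated joint distribution; the toy distribution (y copies x1 while x2 is independent noise) and the axis layout are invented for illustration.

```python
import numpy as np

# Toy joint p(x1, x2, y) over binary variables, axes = (x1, x2, y):
# y copies x1 deterministically, and x2 is independent noise.
p = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        p[a, b, a] = 0.25

def cmi(p, i, observed):
    """Exact I(x_i; y | x_s), the n -> infinity limit of eq. (2), for a
    tabulated joint whose last axis is y. `observed` maps feature axis -> value."""
    q = p.astype(float)
    for j, v in observed.items():
        q = np.take(q, [v], axis=j)         # condition on x_j = v (axis kept as size 1)
    q = q / q.sum()                          # p(remaining features, y | x_s)
    feat_axes = tuple(range(q.ndim - 1))
    p_y = q.sum(axis=feat_axes)              # mean prediction p(y | x_s)
    others = tuple(ax for ax in feat_axes if ax != i) + (q.ndim - 1,)
    p_xi = q.sum(axis=others)                # p(x_i | x_s)
    score = 0.0
    for v in range(q.shape[i]):
        if p_xi[v] == 0:
            continue
        qv = np.take(q, [v], axis=i)
        post = qv.sum(axis=feat_axes)        # proportional to p(y | x_s, x_i = v)
        post = post / post.sum()
        nz = post > 0                        # skip zero-probability outcomes in the KL
        score += p_xi[v] * np.sum(post[nz] * np.log(post[nz] / p_y[nz]))
    return score
```

Here `cmi(p, 0, {})` equals H(y) = log 2 nats while `cmi(p, 1, {})` is zero, so the greedy CMI policy would query x1 first.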
In practice, we would instead require learned substitutes for each distribution. For example, we can use a classifier that approximates f(x_s) ≈ p(y | x_s) and a generative model that approximates samples from p(x_i | x_s). Similarly, Ma et al. (2019) propose jointly modeling (x, y) with a conditional generative model, which is implemented via a modified VAE (Kingma et al., 2015). This approach is limited for several reasons, including (i) the difficulty of training an accurate conditional generative model, (ii) the challenge of modeling mixed continuous/categorical features (Ma et al., 2020; Nazabal et al., 2020), and (iii) the slow CMI estimation process. In our approach, which we discuss next, we bypass all three of these challenges by directly predicting the best selection at each step.
4. Proposed method

We now introduce our approach, a practical approximation of the greedy policy trained using amortized optimization. Unlike prior work that estimates CMI as an intermediate step, we develop a variational perspective on the greedy policy, which we then leverage to train a policy network that directly predicts the optimal selection given the current features.

4.1. A variational perspective on CMI

For our purpose, it is helpful to recognize that the greedy policy can be viewed as the solution to an optimization problem. Section 3 provides a conventional definition of CMI as a KL divergence, but this is difficult to integrate into an end-to-end learning approach. Instead, we now consider the one-step-ahead prediction achieved by a policy π and predictor f, and we determine the behavior that minimizes their loss. Given the current features x_s and a selection i = π(x_s), the expected one-step-ahead loss is:

E_{y, x_i | x_s} [ ℓ( f(x_s ∪ x_i), y ) ].    (4)

The variational perspective we develop here consists of two main results regarding this expected loss. The first result concerns the predictor, and we show that the loss-minimizing predictor can be defined independently of the policy π. We formalize this in the following proposition for classification tasks, and our results can also be generalized to regression tasks (see proofs in Appendix A).

Proposition 1. When y is discrete and ℓ is cross-entropy loss, eq. (4) is minimized for any policy π by the Bayes classifier, or f*(x_s) = p(y | x_s).

The above property requires that features are selected without knowledge of the remaining features or response variable, which is a valid assumption for DFS, but not in scenarios where selections are based on the full feature set (Chen et al., 2018; Yoon et al., 2018; Jethani et al., 2021). Now, assuming that we use the Bayes classifier f* as a predictor, our second result concerns the selection policy. As we show next, the loss-minimizing policy is equivalent to making selections based on CMI.

Proposition 2. When y is discrete, ℓ is cross-entropy loss and the predictor is the Bayes classifier f*, eq. (4) is minimized by the greedy CMI policy, or π*(x_s) = arg max_i I(y; x_i | x_s).

With this, we can see that the greedy CMI policy defined in Section 3 is equivalent to minimizing the one-step-ahead prediction loss. Next, we exploit this variational perspective to develop a joint learning procedure for a policy and predictor network.
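The equivalence stated by Propositions 1 and 2 can be checked numerically on a small example: with the Bayes classifier as the predictor, the feature minimizing the expected one-step-ahead cross-entropy in eq. (4) is exactly the max-CMI feature. Below is a minimal sketch under a binary toy distribution (invented for illustration, with y a copy of x1 and x2 pure noise), starting from an empty subset s.

```python
import numpy as np

# Toy joint p(x1, x2, y), axes = (x1, x2, y): y copies x1, x2 is pure noise.
p = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        p[a, b, a] = 0.25

def one_step_ce(i):
    """Expected one-step-ahead loss, eq. (4), with s empty: E[-log p(y | x_i)],
    where the predictor is the Bayes classifier p(y | x_i)."""
    loss = 0.0
    for v in (0, 1):
        joint = np.take(p, [v], axis=i).sum(axis=(0, 1))  # p(x_i = v, y)
        post = joint / joint.sum()                        # Bayes classifier p(y | x_i = v)
        nz = post > 0
        loss += np.sum(joint[nz] * -np.log(post[nz]))
    return loss
```

Querying x1 resolves y completely (zero expected loss), while querying x2 leaves the full entropy H(y) = log 2, so the loss-minimizing choice coincides with the CMI-maximizing one.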
4.2. An amortized optimization approach
|
347 |
+
Instead of estimating each feature’s CMI to identify the next
|
348 |
+
selection, we now develop an approach that directly pre-
|
349 |
+
dicts the best selection at step. The greedy policy implicitly
|
350 |
+
requires solving an optimization problem at each step, or
|
351 |
+
arg maxi I(y, xi; xs), but since we lack access to this ob-
|
352 |
+
jective, we now formulate an approach that directly predicts
|
353 |
+
the solution. Following a technique known as amortized
|
354 |
+
optimization (Amos, 2022), we do so by casting our varia-
|
355 |
+
tional perspective on CMI from Section 4.1 as an objective
|
356 |
+
function to be optimized by a learnable network.
|
357 |
+
First, because it facilitates gradient-based optimization, we
|
358 |
+
now consider that the policy outputs a distribution over fea-
|
359 |
+
ture indices. With slight abuse of notation, this section lets
|
360 |
+
the policy be a function π(xs) ∈ ∆d−1, which generalizes
|
361 |
+
the previous definition π(xs) ∈ [d]. Using this stochas-
|
362 |
+
tic policy, we can now formulate our objective function as
|
363 |
+
follows.
|
364 |
+
Let the selection policy be parameterized by a neural
|
365 |
+
network π(xs; φ) and the predictor by a neural network
|
366 |
+
f(xs; θ). Let p(s) represent a distribution over subsets with
|
367 |
+
p(s) > 0 for all |s| < d. Then, our objective function
|
368 |
+
L(θ, φ) is defined as
|
369 |
+
L(θ, φ) = Ep(x,y)Ep(s)
|
370 |
+
�
|
371 |
+
Ei∼π(xs;φ)
|
372 |
+
�
|
373 |
+
ℓ
|
374 |
+
�
|
375 |
+
f(xs ∪ xi; θ), y
|
376 |
+
���
|
377 |
+
.
|
378 |
+
(5)
|
379 |
+
Intuitively, eq. (5) represents generating a random feature
|
380 |
+
set xs, sampling a feature index according to i ∼ π(xs; φ),
|
381 |
+
and then measuring the loss of the prediction f(xs ∪ xi; θ).
|
382 |
+
Our objective thus optimizes for individual selections and
|
383 |
+
predictions rather than the entire trajectory, which lets us
|
384 |
+
build on Proposition 1-2. We describe this as an implemen-
|
385 |
+
tation of the greedy approach because it recovers the greedy
|
386 |
+
CMI selections when it is trained to optimality. In the clas-
|
387 |
+
sification case, we show the following result under a mild
|
388 |
+
assumption that there is a unique optimal selection.
|
389 |
+
Theorem 1. When y is discrete and ℓ is cross-entropy loss,
|
390 |
+
the global optimum of eq. (5) is a predictor that satisfies
|
391 |
+
f(xs; θ∗) = p(y | xs) and a policy π(xs; φ∗) that puts all
|
392 |
+
probability mass on i∗ = arg maxi I(y; xi | xs).
If we relax the assumption of a unique optimal selection, the optimal policy π(xs; φ∗) will simply split probability mass among the best indices. A similar result holds in the regression case, where we can interpret the greedy policy as performing conditional variance minimization.

Theorem 2. When y is continuous and ℓ is squared error loss, the global optimum of eq. (5) is a predictor that satisfies f(xs; θ∗) = E[y | xs] and a policy π(xs; φ∗) that puts all probability mass on i∗ = arg mini Exi|xs[Var(y | xi, xs)].
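Theorem 2's selection rule can be sanity-checked empirically. On synthetic data (a toy setup, not from the paper), the relevant feature yields a smaller expected conditional variance than an irrelevant one; here the conditional variance is approximated by binning:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x1 = rng.normal(size=n)               # relevant feature
x2 = rng.normal(size=n)               # irrelevant feature
y = x1 + 0.1 * rng.normal(size=n)

def expected_cond_var(xi):
    # Estimate E_{xi}[Var(y | xi)] with s = empty set by binning xi into 20
    # quantile bins and averaging the within-bin variance of y.
    bins = np.quantile(xi, np.linspace(0, 1, 21))
    idx = np.clip(np.digitize(xi, bins[1:-1]), 0, 19)
    return float(np.mean([y[idx == b].var() for b in range(20)]))
```

Since y depends on x1 but not x2, expected_cond_var(x1) is far smaller than expected_cond_var(x2), so the greedy policy of Theorem 2 would select x1 first.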
Proofs for these results are in Appendix A. This approach has two key advantages over the CMI estimation procedure from Section 3.2. First, we avoid modeling the feature conditional distributions p(xi | xs) for all (s, i). Modeling these distributions is a difficult intermediate step, and our approach instead aims to directly output the optimal index. Second, our approach is faster because each selection is made in a single forward pass: selecting k features using the Ma et al. (2019) procedure requires O(dk) scoring steps, but our approach requires only k forward passes through the policy π(xs; φ).
Furthermore, compared to a policy trained by RL, the greedy approach is easier to learn. Our training procedure can be viewed as a form of reward shaping (Sutton et al., 1998; Randløv & Alstrøm, 1998), where the reward accounts for the loss after each step and provides a strong signal about whether each selection is helpful. In comparison, observing the reward only after selecting k features provides a comparably weak signal to the policy network (see eq. (1)). RL methods generally face a challenging exploration-exploitation trade-off, but learning the greedy policy is simpler because it only requires finding the locally optimal choice at each step.
4.3. Training with a continuous relaxation
Our objective in eq. (5) yields the correct greedy policy when it is perfectly optimized, but L(θ, φ) is difficult to optimize by gradient descent. In particular, gradients are difficult to propagate through the policy network given a sampled index i ∼ π(xs; φ). The REINFORCE trick (Williams, 1992) is one way to get stochastic gradients, but high gradient variance can make it ineffective in many problems. There is a robust literature on reducing gradient variance in this setting (Tucker et al., 2017; Grathwohl et al., 2018), but we propose a simple alternative: the Concrete distribution (Maddison et al., 2016).

Figure 1. Diagram of our training approach. Left: features are selected by making repeated calls to the policy network using masked inputs. Right: predictions are made after each selection using the predictor network. Only solid lines are backpropagated through when performing gradient descent.
An index sampled according to i ∼ π(xs; φ) can be represented by a one-hot vector m ∈ {0, 1}d indicating the chosen index, and with the Concrete distribution we instead sample an approximately one-hot vector in the probability simplex, or m ∈ ∆d−1. This continuous relaxation lets us calculate gradients using the reparameterization trick (Maddison et al., 2016; Jang et al., 2016). Relaxing the subset s ⊆ [d] to a continuous vector also requires relaxing the policy and predictor functions, so we let these operate on a masked input x, or the element-wise product x ⊙ m. To avoid ambiguity about whether features are zero or masked, we can also pass the mask as an input.
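For illustration, the relaxed sampling step and the resulting masked input can be sketched in a few lines (a minimal numpy version of a Concrete/Gumbel-softmax sample; the logits here are arbitrary placeholders for the policy's output):

```python
import numpy as np

rng = np.random.default_rng(0)

def concrete_sample(logits, tau):
    # Reparameterized sample m = softmax((logits + G) / tau) with G ~ Gumbel(0, 1).
    # Lower temperatures tau give samples closer to one-hot vectors.
    g = -np.log(-np.log(rng.random(logits.shape)))
    z = (logits + g) / tau
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.log(np.array([0.7, 0.2, 0.1]))  # e.g. probabilities from pi(x_s; phi)
m = concrete_sample(logits, tau=0.5)        # approximately one-hot, m in the simplex
x = np.array([3.0, 5.0, 7.0])
masked_input = x * m                        # relaxed version of selecting one feature
```

Because m is a deterministic function of the logits given the Gumbel noise, gradients can flow from the loss back into the policy parameters.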
Training with the Concrete distribution requires specifying a temperature parameter τ > 0 to control how discrete the samples are. Previous works have typically trained with a fixed temperature or annealed it over a pre-determined number of epochs (Chang et al., 2017; Chen et al., 2018; Balın et al., 2019), but we instead train with a sequence of τ values and perform early stopping at each step. This removes the temperature and number of epochs as important hyperparameters to tune. Our training procedure is summarized in Figure 1, and in more detail by Algorithm 1.
Figure 1, and in more detail by Algorithm 1.
|
482 |
+
There are also several optional steps that we found can
|
483 |
+
improve optimization:
|
484 |
+
• Parameters can be shared between the predictor and pol-
|
485 |
+
icy networks f(x; θ), π(x, φ). This does not complicate
|
486 |
+
their joint optimization, and learning a shared represen-
|
487 |
+
tation in the early layers can in some cases help the
|
488 |
+
networks optimize faster.
|
489 |
+
• Rather than training with a random subset distribution
|
490 |
+
p(s), we generate subsets using features selected by
|
491 |
+
the policy π(x; φ). This allows the models to focus on
|
492 |
+
subsets likely to be encountered at inference time, and
|
493 |
+
it does not affect the globally optimal policy/predictor:
|
494 |
+
gradients are not propagated between selections, so both
|
495 |
+
eq. (5) and this sampling approach treat each feature
|
496 |
+
set as an independent optimization problem, only with
|
497 |
+
different weights (see Appendix D).
|
498 |
+
• We pre-train the predictor f(x; θ) using random subsets
|
499 |
+
before jointly training the policy-predictor pair. This
|
500 |
+
works better than optimizing L(θ, φ) from a random ini-
|
501 |
+
tialization, because a random predictor f(x; θ) provides
|
502 |
+
no signal to π(x; φ) about which features are useful.
|
503 |
+
5. Related work
|
504 |
+
Prior work has frequently addressed DFS using RL. For
|
505 |
+
example, Dulac-Arnold et al. (2011); Shim et al. (2018);
|
506 |
+
Janisch et al. (2019); Li & Oliva (2021) optimize a reward
|
507 |
+
based on the final prediction accuracy, and Kachuee et al.
|
508 |
+
(2018) use a reward that accounts for prediction uncertainty.
|
509 |
+
RL is a natural approach for sequential decision-making
|
510 |
+
problems, but it can be difficult to optimize in practice:
|
511 |
+
RL requires complex architectures and training routines, is
|
512 |
+
slow to converge, and is highly sensitive to its initialization
|
513 |
+
(Henderson et al., 2018). As a result, RL-based DFS does
|
514 |
+
not reliably outperform static feature selection, as shown by
|
515 |
+
Erion et al. (2021) and confirmed in our experiments.
|
516 |
+
Several other approaches include imitation learning (He
|
517 |
+
et al., 2012; 2016a) and iterative feature scoring methods
|
518 |
+
(Melville et al., 2004; Saar-Tsechansky et al., 2009; Chen
|
519 |
+
et al., 2015a; Early et al., 2016b;a). Imitation learning casts
|
520 |
+
DFS as supervised classification, whereas our training ap-
|
521 |
+
proach bypasses the need for an oracle policy. Most existing
|
522 |
+
feature scoring techniques are greedy methods, like ours,
|
523 |
+
but they use scoring heuristics unrelated to maximizing
|
524 |
+
CMI (see Section 3.2). Two feature scoring methods are
|
525 |
+
specifically designed to calculate CMI, but they suffer from
|
526 |
+
practical limitations: Fleuret (2004) requires binary features,
|
527 |
+
and Ma et al. (2019) relies on difficult-to-train generative
|
528 |
+
models. Our approach is simpler, faster and more flexi-
|
529 |
+
ble, because the selection logic is contained within a policy
|
530 |
+
network that avoids the need for generative modeling.
Figure 2. Evaluating the greedy approach on six tabular datasets (AUROC vs. number of selected features for the bleeding, respiratory, fluid, spam, MiniBooNE and diabetes tasks; methods: IntGrad, DeepLift, SAGE, Perm Test, CAE, Opportunistic (OL), CMI (Marginal), CMI (PVAE), Greedy (Ours)). The results for each method are the average across five runs.
Static feature selection is a long-standing problem (Guyon & Elisseeff, 2003; Cai et al., 2018). There are no default approaches for neural networks, but one option is ranking features by local or global importance scores (Breiman, 2001; Shrikumar et al., 2017; Sundararajan et al., 2017; Covert et al., 2020). In addition, several prior works have leveraged continuous relaxations to learn feature selection strategies by gradient descent: for example, Chang et al. (2017); Balın et al. (2019); Yamada et al. (2020); Lee et al. (2021) perform static feature selection, and Chen et al. (2018); Jethani et al. (2021) perform instance-wise feature selection given all the features. Our work uses a similar continuous relaxation for optimization but in the DFS context, where our method learns a selection policy rather than a static selection layer.

Finally, several works have examined greedy feature selection algorithms from a theoretical perspective. For example, Das & Kempe (2011); Elenberg et al. (2018) show that weak submodularity implies near-optimal performance in the static feature selection setting. Chen et al. (2015b) find that the related notion of adaptive submodularity (Golovin & Krause, 2011) does not apply to DFS when evaluated via mutual information, but manage to provide performance guarantees under specific distributional assumptions.
6. Experiments

We now demonstrate the use of our greedy approach on several datasets. We first explore tabular datasets of various sizes, including four medical diagnosis tasks, and we then consider two image classification datasets. Several of the tasks are natural candidates for DFS, and the remaining ones serve as useful tasks to test the effectiveness of our approach. Code for reproducing our experiments is available online: https://github.com/iancovert/dynamic-selection.

We evaluate our method by comparing to both dynamic and static feature selection methods. We also ensure consistent comparisons by only using methods applicable to neural networks. As static baselines, we use permutation tests (Breiman, 2001) and SAGE (Covert et al., 2020) to rank features by their importance to model accuracy, as well as per-prediction DeepLift (Shrikumar et al., 2017) and IntGrad (Sundararajan et al., 2017) scores aggregated across the dataset. We then use a supervised version of the Concrete Autoencoder (CAE, Balın et al. 2019), a state-of-the-art static feature selection method. As dynamic baselines, we use two versions of the CMI estimation procedure described in Section 3.2. First, we use the PVAE generative model from Ma et al. (2019) to sample unknown features, and second, we instead sample unknown features from their marginal distribution; in both cases, we use a classifier trained with random feature subsets to make predictions. Finally, we also use the RL-based Opportunistic Learning (OL) approach (Kachuee et al., 2018). Appendix C provides more information about each of the baselines.
6.1. Tabular datasets
We first applied our method to three medical diagnosis tasks derived from an emergency medicine setting. The tasks involve predicting a patient's bleeding risk via a low fibrinogen concentration (bleeding), whether the patient requires endotracheal intubation for respiratory support (respiratory), and whether the patient will be responsive to fluid resuscitation (fluid). See Appendix B for more details about the datasets. In each scenario, gathering all possible inputs at test-time is challenging due to time and resource constraints, thus making DFS a natural solution.

Table 1. AUROC averaged across budgets of 1-10 features (with 95% confidence intervals).

                               Spam           MiniBooNE      Diabetes       Bleeding       Respiratory    Fluid
Static   IntGrad               82.84 ± 0.68   89.10 ± 0.33   88.91 ± 0.24   66.70 ± 0.27   81.10 ± 0.04   79.94 ± 0.94
         DeepLift              90.16 ± 1.24   88.62 ± 0.30   95.42 ± 0.13   67.75 ± 0.49   76.05 ± 0.35   76.96 ± 0.56
         SAGE                  89.70 ± 1.10   92.64 ± 0.03   95.43 ± 0.01   71.34 ± 0.19   82.92 ± 0.26   83.27 ± 0.53
         Perm Test             85.64 ± 3.58   92.19 ± 0.15   95.46 ± 0.02   68.89 ± 1.06   81.56 ± 0.28   81.35 ± 1.04
         CAE                   92.28 ± 0.27   92.76 ± 0.41   95.91 ± 0.07   70.69 ± 0.57   83.10 ± 0.45   79.40 ± 0.86
Dynamic  Opportunistic (OL)    85.94 ± 0.20   69.23 ± 0.64   83.07 ± 0.82   60.63 ± 0.55   74.44 ± 0.42   78.13 ± 0.31
         CMI (Marginal)        86.57 ± 1.54   92.21 ± 0.40   95.48 ± 0.05   70.57 ± 0.46   79.62 ± 0.62   81.97 ± 0.93
         CMI (PVAE)            89.01 ± 1.40   88.94 ± 1.25   90.50 ± 5.16   70.17 ± 0.74   74.12 ± 3.50   80.27 ± 1.02
         Greedy (Ours)         93.91 ± 0.17   94.46 ± 0.12   96.03 ± 0.02   72.64 ± 0.31   84.48 ± 0.08   86.59 ± 0.25
We use fully connected networks for all methods, and we use dropout to reduce overfitting (Srivastava et al., 2014). Figure 2 (top) shows the results of applying each method with various feature budgets. The classification accuracy is measured via AUROC, and the greedy method achieves the best results for nearly all feature budgets on all three tasks. Among the baselines, several static methods are sometimes close, but the CMI estimation method is rarely competitive. Additionally, OL provides unstable and weak results. The greedy method's advantage is often largest when selecting a small number of features, and it usually becomes narrower once the accuracy saturates.

Next, we conducted experiments using three publicly available tabular datasets: spam classification (Dua & Graff, 2017), particle identification (MiniBooNE) (Roe et al., 2005) and diabetes diagnosis (Miller, 1973). The diabetes task is a natural application for DFS and was used in prior work (Kachuee et al., 2018). We again tested various numbers of features, and Figure 2 (bottom) shows plots of the AUROC for each feature budget. On these tasks, the greedy method is once again most accurate for nearly all numbers of features. Table 1 summarizes the results via the mean AUROC across k = 1, . . . , 10 features, further emphasizing the benefits of the greedy method across all six datasets. Appendix E shows larger versions of the AUROC curves (Figure 4 and Figure 5), as well as plots demonstrating the variability of selections within each dataset.

The results with these datasets reveal that, perhaps surprisingly, dynamic methods can be outperformed by static methods. Interestingly, this point was not highlighted in prior work where strong static baselines were not used (Kachuee et al., 2018; Janisch et al., 2019). For example, OL is never competitive on these datasets, and the two versions of the CMI estimation method are not consistently among the top baselines. Dynamic methods are in principle capable of performing better, so the sub-par results from these methods underscore the difficulty of learning both a selection policy and a prediction function that works for multiple feature sets. In these experiments, our approach is the only dynamic method to do both successfully.
6.2. Image classification datasets
Next, we considered two standard image classification datasets: MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky et al., 2009). Our goal is to begin with a blank image, sequentially reveal multiple pixels or patches, and ultimately make a classification using a small portion of the image. Although this is not an obvious use case for DFS, it represents a challenging problem for our method, and similar tasks were considered in several earlier works (Karayev et al., 2012; Mnih et al., 2014; Early et al., 2016a; Janisch et al., 2019).
For MNIST, we use fully connected architectures for both the policy and predictor, and we treat pixels as individual features, where d = 784. For CIFAR-10, we use a shared ResNet backbone (He et al., 2016b) for the policy and predictor networks, and each network uses its own output head. The 32 × 32 images are coarsened into d = 64 patches of size 4 × 4, so the selector head generates logits corresponding to each patch, and the predictor head generates probabilities for each class.
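The patch-level selection geometry can be illustrated with a small helper that expands a d = 64 patch mask to pixel resolution (a sketch of the masking step, not the paper's code):

```python
import numpy as np

def patch_mask_to_pixels(mask_patches, patch=4, size=32):
    # Expand a per-patch mask with d = (size // patch)**2 entries (here 64)
    # into a per-pixel mask, so selecting one "feature" reveals a 4x4 patch.
    g = size // patch
    m = mask_patches.reshape(g, g)
    return np.kron(m, np.ones((patch, patch)))

d = (32 // 4) ** 2            # 64 patch features
mask = np.zeros(d)
mask[10] = 1.0                # reveal a single patch
pixel_mask = patch_mask_to_pixels(mask)
```

The masked image is then the element-wise product of the image with pixel_mask, matching the x ⊙ m formulation used for the tabular case.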
Figure 3 shows our method's accuracy for different feature budgets. For MNIST, we use the previous baselines but exclude the CMI estimation method due to its computational cost. We observe a large benefit for our method, particularly when making a small number of selections: our greedy method reaches nearly 90% accuracy with just 10 pixels, which is roughly 10% higher than the best baseline and considerably higher than prior work (Balın et al., 2019; Yamada et al., 2020; Covert et al., 2020). OL yields the worst results, and it also trains slowly due to the large number of states. For CIFAR-10, we use two simple baselines: center crops and random masks of various sizes. For each method, we plot the mean and 95% confidence intervals determined from five trials. Our greedy approach is slightly less accurate with 1-2 patches, but it reaches significantly higher accuracy when using 5-20 patches. Figure 3 (bottom) also shows qualitative examples of our method's predictions after selecting 10 out of 64 patches, and Appendix E shows similar plots with different numbers of patches.

Figure 3. Greedy feature selection for image classification. Top left: accuracy comparison on MNIST with results averaged across five runs. Top right: accuracy comparison on CIFAR-10 with 95% confidence intervals. Bottom: example selections and predictions for the greedy method with 10 out of 64 patches for CIFAR-10 images.
7. Conclusion

In this work, we explored a greedy algorithm for DFS that selects features based on their CMI with the response variable. We proposed an approach to approximate this policy by directly predicting the optimal selection at each step, and we conducted experiments that show our method outperforms a variety of existing feature selection methods, including both dynamic and static baselines. Future work on this topic may include incorporating non-uniform feature costs, determining the feature budget on a per-sample basis, and further characterizing the greedy suboptimality gap: some progress has been made in analyzing the greedy algorithm's suboptimality in the dynamic setting (Chen et al., 2015b), but more general characterizations remain an open topic for future work.
Acknowledgements

We thank Samuel Ainsworth, Kevin Jamieson, Mukund Sudarshan and the Lee Lab for helpful discussions. This work was funded by NSF DBI-1552309 and DBI-1759487, NIH R35-GM-128638 and R01-NIA-AG-061132.
References

National health and nutrition examination survey, 2018. URL https://www.cdc.gov/nchs/nhanes.

Amos, B. Tutorial on amortized optimization for learning to optimize over continuous domains. arXiv preprint arXiv:2202.00665, 2022.

Balın, M. F., Abid, A., and Zou, J. Concrete autoencoders: Differentiable feature selection and reconstruction. In International Conference on Machine Learning, pp. 444–453. PMLR, 2019.

Breiman, L. Random forests. Machine Learning, 45(1):5–32, 2001.

Cai, J., Luo, J., Wang, S., and Yang, S. Feature selection in machine learning: A new perspective. Neurocomputing, 300:70–79, 2018.

Chang, C.-H., Rampasek, L., and Goldenberg, A. Dropout feature ranking for deep learning models. arXiv preprint arXiv:1712.08645, 2017.

Chen, J., Song, L., Wainwright, M., and Jordan, M. Learning to explain: An information-theoretic perspective on model interpretation. In International Conference on Machine Learning, pp. 883–892. PMLR, 2018.

Chen, S., Choi, A., and Darwiche, A. Value of information based on decision robustness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015a.

Chen, Y., Hassani, S. H., Karbasi, A., and Krause, A. Sequential information maximization: When is greedy near-optimal? In Conference on Learning Theory, pp. 338–363. PMLR, 2015b.

Cover, T. and Thomas, J. Elements of Information Theory. Wiley, 2012. ISBN 9781118585771.

Covert, I., Lundberg, S. M., and Lee, S.-I. Understanding global feature contributions with additive importance measures. Advances in Neural Information Processing Systems, 33:17212–17223, 2020.

Covert, I., Lundberg, S. M., and Lee, S.-I. Explaining by removing: A unified framework for model explanation. Journal of Machine Learning Research, 22(209):1–90, 2021.

Das, A. and Kempe, D. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. arXiv preprint arXiv:1102.3975, 2011.

Dua, D. and Graff, C. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.

Dulac-Arnold, G., Denoyer, L., Preux, P., and Gallinari, P. Datum-wise classification: a sequential approach to sparsity. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 375–390. Springer, 2011.

Early, K., Fienberg, S. E., and Mankoff, J. Test time feature ordering with FOCUS: Interactive predictions with minimal user burden. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 992–1003, 2016a.

Early, K., Mankoff, J., and Fienberg, S. E. Dynamic question ordering in online surveys. arXiv preprint arXiv:1607.04209, 2016b.

Elenberg, E. R., Khanna, R., Dimakis, A. G., and Negahban, S. Restricted strong convexity implies weak submodularity. The Annals of Statistics, 46(6B):3539–3568, 2018.

Erion, G., Janizek, J. D., Hudelson, C., Utarnachitt, R. B., McCoy, A. M., Sayre, M. R., White, N. J., and Lee, S.-I. CoAI: Cost-aware artificial intelligence for health care. medRxiv, 2021.

Fleuret, F. Fast binary feature selection with conditional mutual information. Journal of Machine Learning Research, 5(9), 2004.

Gal, Y. and Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059. PMLR, 2016.

Golovin, D. and Krause, A. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42:427–486, 2011.

Grathwohl, W., Choi, D., Wu, Y., Roeder, G., and Duvenaud, D. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. In International Conference on Learning Representations, 2018.

Guyon, I. and Elisseeff, A. An introduction to variable and feature selection. Journal of Machine Learning Research, 3(Mar):1157–1182, 2003.

He, H., Daumé III, H., and Eisner, J. Cost-sensitive dynamic feature selection. In ICML Inferning Workshop, 2012.

He, H., Mineiro, P., and Karampatziakis, N. Active information acquisition. arXiv preprint arXiv:1602.02181, 2016a.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016b.

Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., and Meger, D. Deep reinforcement learning that matters. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

Jang, E., Gu, S., and Poole, B. Categorical reparameterization with Gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.

Janisch, J., Pevný, T., and Lisý, V. Classification with costly features using deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 3959–3966, 2019.

Jethani, N., Sudarshan, M., Aphinyanaphongs, Y., and Ranganath, R. Have we learned to explain?: How interpretability methods can learn to encode predictions in their interpretations. In International Conference on Artificial Intelligence and Statistics, pp. 1459–1467. PMLR, 2021.

Kachuee, M., Goldstein, O., Kärkkäinen, K., Darabi, S., and Sarrafzadeh, M. Opportunistic learning: Budgeted cost-sensitive learning from data streams. In International Conference on Learning Representations, 2018.

Kachuee, M., Karkkainen, K., Goldstein, O., Zamanzadeh, D., and Sarrafzadeh, M. Cost-sensitive diagnosis and learning leveraging public health data. arXiv preprint arXiv:1902.07102, 2019.

Karayev, S., Baumgartner, T., Fritz, M., and Darrell, T. Timely object recognition. Advances in Neural Information Processing Systems, 25, 2012.

Kingma, D. P., Salimans, T., and Welling, M. Variational dropout and the local reparameterization trick. Advances in Neural Information Processing Systems, 28, 2015.

Kokhlikyan, N., Miglani, V., Martin, M., Wang, E., Alsallakh, B., Reynolds, J., Melnikov, A., Kliushkina, N., Araya, C., Yan, S., et al. Captum: A unified and generic model interpretability library for PyTorch. arXiv preprint arXiv:2009.07896, 2020.

Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Lee, C., Imrie, F., and van der Schaar, M. Self-supervision enhanced feature selection with correlated gates. In International Conference on Learning Representations, 2021.

Li, J., Cheng, K., Wang, S., Morstatter, F., Trevino, R. P., Tang, J., and Liu, H. Feature selection: A data perspective. ACM Computing Surveys (CSUR), 50(6):1–45, 2017.

Li, Y. and Oliva, J. Active feature acquisition with generative surrogate models. In International Conference on Machine Learning, pp. 6450–6459. PMLR, 2021.

Ma, C., Tschiatschek, S., Palla, K., Hernandez-Lobato, J. M., Nowozin, S., and Zhang, C. EDDI: Efficient dynamic discovery of high-value information with partial VAE. In International Conference on Machine Learning, pp. 4234–4243. PMLR, 2019.

Ma, C., Tschiatschek, S., Turner, R., Hernández-Lobato, J. M., and Zhang, C. VAEM: a deep generative model for heterogeneous mixed type data. Advances in Neural Information Processing Systems, 33:11237–11247, 2020.

Maddison, C. J., Mnih, A., and Teh, Y. W. The Concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.

Melville, P., Saar-Tsechansky, M., Provost, F., and Mooney, R. Active feature-value acquisition for classifier induction
|
1131 |
+
tion. In Fourth IEEE International Conference on Data
|
1132 |
+
Mining (ICDM’04), pp. 483–486. IEEE, 2004.
|
1133 |
+
Miller, H. W. Plan and operation of the health and nutrition
|
1134 |
+
examination survey, United States, 1971-1973. DHEW
|
1135 |
+
publication no. (PHS)-Dept. of Health, Education, and
|
1136 |
+
Welfare (USA), 1973.
|
1137 |
+
Mnih, V., Heess, N., Graves, A., et al. Recurrent mod-
|
1138 |
+
els of visual attention. Advances in Neural Information
|
1139 |
+
Processing Systems, 27, 2014.
|
1140 |
+
Mosesson, M. W. Fibrinogen and fibrin structure and func-
|
1141 |
+
tions. Journal of Thrombosis and Haemostasis, 3(8):
|
1142 |
+
1894–1904, 2005.
|
1143 |
+
Nazabal, A., Olmos, P. M., Ghahramani, Z., and Valera,
|
1144 |
+
I. Handling incomplete heterogeneous data using VAEs.
|
1145 |
+
Pattern Recognition, 107:107501, 2020.
|
1146 |
+
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E.,
|
1147 |
+
DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer,
|
1148 |
+
A. Automatic differentiation in PyTorch. 2017.
|
1149 |
+
Randløv, J. and Alstrøm, P. Learning to drive a bicycle using
|
1150 |
+
reinforcement learning and shaping. In ICML, volume 98,
|
1151 |
+
pp. 463–471. Citeseer, 1998.
|
1152 |
+
Roe, B., Yand, H., Zhu, J., Lui, Y., Stancu, I., et al. Boosted
|
1153 |
+
decision trees, an alternative to artificial neural networks.
|
1154 |
+
Nucl. Instrm. Meth. A, 543:577–584, 2005.
|
1155 |
+
Saar-Tsechansky, M., Melville, P., and Provost, F. Active
|
1156 |
+
feature-value acquisition. Management Science, 55(4):
|
1157 |
+
664–684, 2009.
|
1158 |
+
Shim, H., Hwang, S. J., and Yang, E. Joint active feature
|
1159 |
+
acquisition and classification with variable-size set encod-
|
1160 |
+
ing. Advances in Neural Information Processing Systems,
|
1161 |
+
31, 2018.
|
1162 |
+
Shrikumar, A., Greenside, P., and Kundaje, A. Learning
|
1163 |
+
important features through propagating activation differ-
|
1164 |
+
ences. In International Conference on Machine Learning,
|
1165 |
+
pp. 3145–3153. PMLR, 2017.
|
1166 |
+
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I.,
|
1167 |
+
and Salakhutdinov, R. Dropout: a simple way to prevent
|
1168 |
+
neural networks from overfitting. The Journal of Machine
|
1169 |
+
Learning Research, 15(1):1929–1958, 2014.
|
1170 |
+
Subcommittee, A., Group, I. A. W., et al. Advanced trauma
|
1171 |
+
life support (ATLS®): the ninth edition. The Journal of
|
1172 |
+
Trauma and Acute Care Surgery, 74(5):1363–1366, 2013.
|
1173 |
+
Sundararajan, M., Taly, A., and Yan, Q. Axiomatic attribu-
|
1174 |
+
tion for deep networks. In International Conference on
|
1175 |
+
Machine Learning, pp. 3319–3328. PMLR, 2017.
|
1176 |
+
Sutton, R. S., Barto, A. G., et al. Introduction to reinforce-
|
1177 |
+
ment learning. 1998.
|
1178 |
+
Tucker, G., Mnih, A., Maddison, C. J., Lawson, J., and Sohl-
|
1179 |
+
Dickstein, J. Rebar: Low-variance, unbiased gradient
|
1180 |
+
estimates for discrete latent variable models. Advances
|
1181 |
+
in Neural Information Processing Systems, 30, 2017.
|
1182 |
+
|
1183 |
+
Learning to Maximize Mutual Information for Dynamic Feature Selection
|
1184 |
+
Williams, R. J. Simple statistical gradient-following algo-
|
1185 |
+
rithms for connectionist reinforcement learning. Machine
|
1186 |
+
Learning, 8(3):229–256, 1992.
|
1187 |
+
Yamada, Y., Lindenbaum, O., Negahban, S., and Kluger, Y.
|
1188 |
+
Feature selection using stochastic gates. In International
|
1189 |
+
Conference on Machine Learning. PMLR, 2020.
|
1190 |
+
Yoon, J., Jordon, J., and van der Schaar, M.
|
1191 |
+
INVASE:
|
1192 |
+
Instance-wise variable selection using neural networks.
|
1193 |
+
In International Conference on Learning Representations,
|
1194 |
+
2018.
A. Proofs

In this section, we re-state and prove our main theoretical results. We begin with our proposition regarding the optimal predictor for an arbitrary policy π.

Proposition 1. When y is discrete and ℓ is cross-entropy loss, eq. (4) is minimized for any policy π by the Bayes classifier, or f*(x_s) = p(y | x_s).
Proof. Given the predictor inputs x_s, our goal is to determine the prediction that minimizes the expected loss. Because features are selected sequentially by π with no knowledge of the non-selected values, there is no other information to condition on; for the predictor, we do not even need to distinguish the order in which features were selected. We can therefore derive the optimal prediction ŷ ∈ Δ^{K-1} for a discrete response y ∈ [K] as follows:

\begin{align*}
f^*(x_s) &= \arg\min_{\hat y} \; \mathbb{E}_{y | x_s}\big[\ell(\hat y, y)\big] \\
&= \arg\min_{\hat y} \; -\sum_{i \in [K]} p(y = i \mid x_s) \log \hat y_i \\
&= \arg\min_{\hat y} \; D_{\mathrm{KL}}\big(p(y \mid x_s) \,\|\, \hat y\big) + H(y \mid x_s) \\
&= p(y \mid x_s).
\end{align*}

In the case of a continuous response y ∈ ℝ with squared error loss, we have a similar result involving the response's conditional expectation:

\begin{align*}
f^*(x_s) &= \arg\min_{\hat y} \; \mathbb{E}_{y | x_s}\big[(\hat y - y)^2\big] \\
&= \arg\min_{\hat y} \; (\hat y - \mathbb{E}[y \mid x_s])^2 + \mathrm{Var}(y \mid x_s) \\
&= \mathbb{E}[y \mid x_s].
\end{align*}
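Proposition 1 can be checked numerically: over random candidate distributions ŷ, the expected cross-entropy is never below the value attained at ŷ = p(y | x_s). A minimal sketch, where the three-class distribution is an arbitrary illustration rather than anything from the paper:

```python
import numpy as np

def expected_ce(p, q):
    # Expected cross-entropy: E_{y ~ p}[-log q_y]
    return -np.sum(p * np.log(q))

p = np.array([0.6, 0.3, 0.1])  # an arbitrary p(y | x_s)

# Score random candidate predictions drawn uniformly from the simplex
rng = np.random.default_rng(0)
candidates = rng.dirichlet(np.ones(3), size=50_000)
losses = np.array([expected_ce(p, q) for q in candidates])
best = candidates[losses.argmin()]

# No candidate beats p itself, whose loss is the entropy H(y | x_s)
assert expected_ce(p, p) <= losses.min()
```

The minimizing candidate `best` lands arbitrarily close to p as the number of candidates grows, matching f*(x_s) = p(y | x_s).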
Proposition 2. When y is discrete, ℓ is cross-entropy loss and the predictor is the Bayes classifier f*, eq. (4) is minimized by the greedy CMI policy, or π*(x_s) = arg max_i I(y; x_i | x_s).

Proof. Following eq. (4), the policy network's selection i = π(x_s) incurs the following expected loss with the distribution p(y, x_i | x_s):

\begin{align*}
\mathbb{E}_{y, x_i | x_s}\big[\ell(f^*(x_s \cup x_i), y)\big]
&= \mathbb{E}_{y, x_i | x_s}\big[\ell(p(y \mid x_i, x_s), y)\big] \\
&= \mathbb{E}_{x_i | x_s}\Big[\mathbb{E}_{y | x_i, x_s}\big[\ell(p(y \mid x_i, x_s), y)\big]\Big] \\
&= \mathbb{E}_{x_i | x_s}\big[H(y \mid x_i, x_s)\big] \\
&= H(y \mid x_s) - I(y; x_i \mid x_s).
\end{align*}

Note that H(y | x_s) is a constant that does not depend on i. When identifying the index that minimizes the expected loss, we thus have the following result:

\[
\arg\min_i \; \mathbb{E}_{y, x_i | x_s}\big[\ell(f^*(x_s \cup x_i), y)\big] = \arg\max_i \; I(y; x_i \mid x_s).
\]

In the case of a continuous response with squared error loss and an optimal predictor given by f*(x_s) = E[y | x_s], we have a similar result:

\begin{align*}
\mathbb{E}_{y, x_i | x_s}\big[(f^*(x_s \cup x_i) - y)^2\big]
&= \mathbb{E}_{y, x_i | x_s}\big[(\mathbb{E}[y \mid x_i, x_s] - y)^2\big] \\
&= \mathbb{E}_{x_i | x_s}\Big[\mathbb{E}_{y | x_i, x_s}\big[(\mathbb{E}[y \mid x_i, x_s] - y)^2\big]\Big] \\
&= \mathbb{E}_{x_i | x_s}\big[\mathrm{Var}(y \mid x_i, x_s)\big].
\end{align*}

When we aim to minimize the expected loss, our selection is thus the index that yields the lowest expected conditional variance:

\[
\arg\min_i \; \mathbb{E}_{x_i | x_s}\big[\mathrm{Var}(y \mid x_i, x_s)\big].
\]
We also prove the limiting result presented in eq. (3), which states that I_i^n → I(y; x_i | x_s).

Proof. Conditional mutual information I(y; x_i | x_s) is defined as follows (Cover & Thomas, 2012):

\begin{align*}
I(y; x_i \mid x_s) &= D_{\mathrm{KL}}\big(p(x_i, y \mid x_s) \,\|\, p(x_i \mid x_s) p(y \mid x_s)\big) \\
&= \mathbb{E}_{y, x_i | x_s}\left[\log \frac{p(y, x_i \mid x_s)}{p(x_i \mid x_s) p(y \mid x_s)}\right].
\end{align*}

Rearranging terms, we can write this as an expected KL divergence with respect to x_i:

\begin{align*}
I(y; x_i \mid x_s) &= \mathbb{E}_{x_i | x_s} \mathbb{E}_{y | x_s, x_i}\left[\log \frac{p(y, x_i \mid x_s)}{p(x_i \mid x_s) p(y \mid x_s)}\right] \\
&= \mathbb{E}_{x_i | x_s} \mathbb{E}_{y | x_s, x_i}\left[\log \frac{p(y \mid x_i, x_s)}{p(y \mid x_s)}\right] \\
&= \mathbb{E}_{x_i | x_s}\Big[D_{\mathrm{KL}}\big(p(y \mid x_i, x_s) \,\|\, p(y \mid x_s)\big)\Big].
\end{align*}

Now, when we sample multiple values x_i^1, ..., x_i^n ∼ p(x_i | x_s) and make predictions using the Bayes classifier, we have the following mean prediction as n becomes large:

\[
\lim_{n \to \infty} \frac{1}{n} \sum_{j=1}^n p(y \mid x_s, x_i^j) = \mathbb{E}_{x_i | x_s}\big[p(y \mid x_i, x_s)\big] = p(y \mid x_s).
\]

Calculating the mean KL divergence across the predictions, we arrive at the following result:

\[
\lim_{n \to \infty} I_i^n = \mathbb{E}_{x_i | x_s}\Big[D_{\mathrm{KL}}\big(p(y \mid x_i, x_s) \,\|\, p(y \mid x_s)\big)\Big] = I(y; x_i \mid x_s).
\]
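The estimator I_i^n can be exercised on a small discrete example where the exact CMI is available in closed form; the toy distributions below are arbitrary illustrations, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete setup: x_i takes 4 values, y takes 3 classes.
p_xi = np.array([0.1, 0.2, 0.3, 0.4])        # p(x_i | x_s)
p_y_xi = rng.dirichlet(np.ones(3), size=4)   # rows are p(y | x_i, x_s)

# Monte Carlo estimator I_i^n: sample x_i, then average the KL divergence
# between each sampled prediction and the mean prediction.
n = 100_000
idx = rng.choice(4, size=n, p=p_xi)
preds = p_y_xi[idx]                          # p(y | x_s, x_i^j)
mean_pred = preds.mean(axis=0)               # -> p(y | x_s) as n grows
I_n = np.mean(np.sum(preds * np.log(preds / mean_pred), axis=1))

# Exact conditional mutual information for comparison
p_y = p_xi @ p_y_xi
I_exact = np.sum(p_xi * np.sum(p_y_xi * np.log(p_y_xi / p_y), axis=1))
```

For large n the two quantities nearly coincide, mirroring the limit proved above.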
Theorem 1. When y is discrete and ℓ is cross-entropy loss, the global optimum of eq. (5) is a predictor that satisfies f(x_s; θ*) = p(y | x_s) and a policy π(x_s; φ*) that puts all probability mass on i* = arg max_i I(y; x_i | x_s).

Proof. We first consider the predictor network f(x_s; θ). When the predictor is given the feature values x_s, it means that one index i ∈ s was chosen by the policy according to π(x_{s \ i}; φ) and the remaining indices s \ i were sampled from p(s). Because s is sampled independently from (x, y), and because π(x_{s \ i}; φ) is not given access to (x_{[d] \ s}, x_i, y), the predictor's expected loss must be considered with respect to the distribution y | x_s. The globally optimal predictor f(x_s; θ*) is thus defined as follows, regardless of the selection policy π(x_s; φ) and which index i was selected last:

\[
f(x_s; \theta^*) = \arg\min_{\hat y} \; \mathbb{E}_{y | x_s}\big[\ell(\hat y, y)\big] = p(y \mid x_s).
\]

The above result follows from our proof for Proposition 1. Now, given the optimal predictor f(x_s; θ*), we can define the globally optimal policy by minimizing the expected loss for a fixed input x_s. Denoting the probability mass placed on each index i ∈ [d] as π_i(x_s; φ), where π(x_s; φ) ∈ Δ^{d-1}, the expected loss is the following:

\begin{align*}
\mathbb{E}_{i \sim \pi(x_s; \phi)} \mathbb{E}_{y, x_i | x_s}\big[\ell(f(x_s \cup x_i; \theta^*), y)\big]
&= \sum_{i \in [d]} \pi_i(x_s; \phi) \, \mathbb{E}_{y, x_i | x_s}\big[\ell\big(f(x_s \cup x_i; \theta^*), y\big)\big] \\
&= \sum_{i \in [d]} \pi_i(x_s; \phi) \, \mathbb{E}_{x_i | x_s}\big[H(y \mid x_i, x_s)\big].
\end{align*}

The above result follows from our proof for Proposition 2. If there exists a single index i* ∈ [d] that yields the lowest expected conditional entropy, or

\[
\mathbb{E}_{x_{i^*} | x_s}\big[H(y \mid x_{i^*}, x_s)\big] < \mathbb{E}_{x_i | x_s}\big[H(y \mid x_i, x_s)\big] \quad \forall \, i \neq i^*,
\]

then the optimal policy must put all its probability mass on i*, or π_{i*}(x_s; φ*) = 1. Note that the corresponding feature x_{i*} has maximum conditional mutual information with y, because we have

\[
I(y; x_{i^*} \mid x_s) = \underbrace{H(y \mid x_s)}_{\text{Constant}} - \mathbb{E}_{x_{i^*} | x_s}\big[H(y \mid x_{i^*}, x_s)\big].
\]

To summarize, we derived the global optimum to our objective L(θ, φ) by first considering the optimal predictor f(x_s; θ*), and then considering the optimal policy π(x_s; φ*) when we assume that we use the optimal predictor.
Theorem 2. When y is continuous and ℓ is squared error loss, the global optimum of eq. (5) is a predictor that satisfies f(x_s; θ*) = E[y | x_s] and a policy π(x_s; φ*) that puts all probability mass on i* = arg min_i E_{x_i | x_s}[Var(y | x_i, x_s)].

Proof. Our proof follows the same logic as our proof for Theorem 1. For the optimal predictor given an arbitrary policy, we have:

\[
f(x_s; \theta^*) = \arg\min_{\hat y} \; \mathbb{E}_{y | x_s}\big[(\hat y - y)^2\big] = \mathbb{E}[y \mid x_s].
\]

Then, for the policy's expected loss, we have:

\[
\mathbb{E}_{i \sim \pi(x_s; \phi)} \mathbb{E}_{y, x_i | x_s}\Big[\big(f(x_s \cup x_i; \theta^*) - y\big)^2\Big]
= \sum_{i \in [d]} \pi_i(x_s; \phi) \, \mathbb{E}_{x_i | x_s}\big[\mathrm{Var}(y \mid x_i, x_s)\big].
\]

If there exists an index i* ∈ [d] that yields the lowest expected conditional variance, then the optimal policy must put all its probability mass on i*, or π_{i*}(x_s; φ*) = 1.
B. Datasets

The datasets used in our experiments are summarized in Table 2. Three of the tabular datasets and the two image classification datasets are publicly available, and the three emergency medicine tasks were privately curated from the Harborview Medical Center Trauma Registry.

Table 2. Summary of datasets used in our experiments.

Dataset       # Features   # Feature Groups   # Classes   # Samples
Fluid         224          162                2           2,770
Respiratory   112          35                 2           65,515
Bleeding      121          44                 2           6,496
Spam          58           –                  2           4,601
MiniBooNE     51           –                  2           130,064
Diabetes      45           –                  3           92,062
MNIST         784          –                  10          60,000
CIFAR-10      1,024        64                 10          60,000
B.1. MiniBooNE and spam classification

The spam dataset includes features extracted from e-mail messages to predict whether or not a message is spam. Three features describe the usage of capital letters in the e-mail, and the remaining 54 features describe the frequency with which certain key words or characters are used. The MiniBooNE particle identification dataset involves distinguishing electron neutrinos from muon neutrinos based on various continuous features (Roe et al., 2005). Both datasets were obtained from the UCI repository (Dua & Graff, 2017).
B.2. Diabetes classification

The diabetes dataset was obtained from the National Health and Nutrition Examination Survey (NHANES) (NHA, 2018), an ongoing survey designed to assess the well-being of adults and children in the United States. We used a version of the data pre-processed by Kachuee et al. (2018; 2019) that includes data collected from 1999 through 2016. The input features include demographic information (age, gender, ethnicity, etc.), lab results (total cholesterol, triglyceride, etc.), examination data (weight, height, etc.), and questionnaire answers (smoking, alcohol, sleep habits, etc.). An expert was also asked to suggest costs for each feature based on the financial burden, patient privacy, and patient inconvenience, but we assume uniform feature costs in our experiments. Finally, the fasting glucose values were used to define three classes based on standard threshold values: normal, pre-diabetes, and diabetes.
B.3. Image classification datasets

The MNIST and CIFAR-10 datasets were downloaded using PyTorch (Paszke et al., 2017). We used the standard train-test splits, and we split the train set to obtain a validation set with the same size as the test set (10,000 examples).

B.4. Emergency medicine datasets

The emergency medicine datasets used in this study were gathered over a 13-year period (2007-2020) and encompass 14,463 emergency department admissions. We excluded patients under the age of 18, and we curated 3 clinical cohorts commonly seen in pre-hospitalization settings. These include 1) pre-hospital fluid resuscitation, 2) emergency department respiratory support, and 3) bleeding after injury. These datasets are not publicly available due to patient privacy concerns.

Pre-hospital fluid resuscitation. We selected 224 variables that were available in the pre-hospital setting, including dispatch information (injury date, time, cause, and location), demographic information (age, sex), and pre-hospital vital signs (blood pressure, heart rate, respiratory rate). The outcome was each patient's response to fluid resuscitation, following the Advanced Trauma Life Support (ATLS) definition (Subcommittee et al., 2013).

Emergency department respiratory support. In this cohort, our goal is to predict which patients require respiratory support upon arrival in the emergency department. Similar to the previous dataset, we selected 112 pre-hospital clinical features including dispatch information (injury date, time, cause, and location), demographic information (age, sex), and pre-hospital vital signs (blood pressure, heart rate, respiratory rate). The outcome is defined based on whether a patient received respiratory support, including both invasive (intubation) and non-invasive (BiPap) approaches.

Bleeding. In this cohort, we only included patients whose fibrinogen levels were measured, as this provides an indicator for bleeding or fibrinolysis (Mosesson, 2005). As with the previous datasets, demographic information, dispatch information, and pre-hospital observations were used as input features. The outcome, based on experts' opinion, was defined by whether an individual's fibrinogen level is below 200 mg/dL, which represents higher risk of bleeding after injury.
C. Baselines

This section provides more details on the baseline methods used in our experiments (Section 6).

C.1. Global feature importance methods

Two of our static feature selection baselines, permutation tests and SAGE, are global feature importance methods that rank features based on their role in improving model accuracy (Covert et al., 2021). In our experiments, we ran each method using a single classifier trained on the entire dataset, and we then selected the top k features depending on the budget.

When running the permutation test, we calculated the validation AUROC while replacing values in the corresponding feature column with random draws from the training set. When running SAGE, we used the authors' implementation with automatic convergence detection (Covert et al., 2020). To handle held-out features, we averaged across 128 sampled values for the six tabular datasets, and for MNIST we used a zeros baseline to achieve faster convergence.
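The permutation test can be sketched as follows; `score_fn` stands in for the validation-AUROC computation, and the function is an illustrative reimplementation under those assumptions, not the exact code used in the experiments:

```python
import numpy as np

def permutation_importance(score_fn, X_val, X_train, n_repeats=5, seed=0):
    """Measure the score drop when one feature column is replaced by
    random draws from the training set (larger drop = more important)."""
    rng = np.random.default_rng(seed)
    base = score_fn(X_val)
    importances = np.zeros(X_val.shape[1])
    for j in range(X_val.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X_val.copy()
            # Replace column j with random draws from the training set
            X_perm[:, j] = rng.choice(X_train[:, j], size=len(X_val))
            drops.append(base - score_fn(X_perm))
        importances[j] = np.mean(drops)
    return importances
```

Features are then ranked by their importance scores and the top k retained for the given budget.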
C.2. Local feature importance methods

Two of our static feature selection baselines, DeepLift and Integrated Gradients, are local feature importance methods that rank features based on their importance to a single prediction. In our experiments, we generated feature importance scores for the true class using all examples in the validation set. We then selected the top k features based on their mean absolute importance. We used a mean baseline for Integrated Gradients (Sundararajan et al., 2017), and both methods were run using the Captum package (Kokhlikyan et al., 2020).
C.3. CMI estimation

Our experiments use two versions of the CMI estimation approach described in Section 3.2. Both are inspired by the EDDI method introduced by Ma et al. (2019), but a key difference is that we do not jointly model (x, y) within the same conditional generative model: we instead separately model the response with a classifier f(x_s) ≈ p(y | x_s) and the features with a generative model of p(x_i | x_s). This partially mitigates one challenge with this approach, which is working with mixed continuous/categorical data (i.e., we do not need to jointly model categorical response variables).

For the first version of this approach, we train a PVAE as a generative model (Ma et al., 2019). The encoder and decoder both have two hidden layers, the latent dimension is set to 16, and we use 128 samples from the latent posterior to approximate p(x_i | x_s) = ∫ p(x_i | z) p(z | x_s) dz. We use Gaussian distributions for both the latent and decoder spaces, and we generate samples using the decoder mean, similar to the original approach (Ma et al., 2019). In the second version, we bypass the need for a generative model with a simple approximation: we sample features from their marginal distribution, which is equivalent to assuming feature independence.
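The marginal-sampling variant can be sketched directly: candidate values for feature i are drawn from the training set's empirical marginal, the classifier is re-queried for each imputation, and the mean KL divergence to the averaged prediction gives the CMI estimate. Here `predict_proba` is a hypothetical placeholder for the classifier f(x_s):

```python
import numpy as np

def cmi_marginal(predict_proba, x, mask, i, X_train, n=64, seed=0):
    """Estimate I(y; x_i | x_s) by drawing candidate values for feature i
    from its empirical marginal and re-querying the classifier."""
    rng = np.random.default_rng(seed)
    preds = []
    for v in rng.choice(X_train[:, i], size=n):
        x_imp, m_imp = x.copy(), mask.copy()
        x_imp[i], m_imp[i] = v, 1.0
        preds.append(predict_proba(x_imp * m_imp))
    preds = np.stack(preds)
    mean_pred = preds.mean(axis=0)   # approximates p(y | x_s)
    return np.mean(np.sum(preds * np.log(preds / mean_pred), axis=1))
```

Because sampling ignores correlations with the observed features x_s, this estimate is only exact under feature independence, as noted above.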
C.4. Opportunistic learning

Kachuee et al. (2018) proposed Opportunistic Learning (OL), an approach to solve DFS using RL. The model consists of two networks analogous to our policy and predictor: a Q-network that estimates the value associated with each action, where actions correspond to features, and a P-network responsible for making predictions. When using OL, we use the same architectures as our approach, and OL shares network parameters between the P- and Q-networks.

The authors introduce a utility function for their reward, shown in eq. (6), which calculates the difference in prediction uncertainty as approximated by MC dropout (Gal & Ghahramani, 2016). The reward also accounts for feature costs, but we set all feature costs to c_i = 1:

\[
r_i = \frac{\|\mathrm{Cert}(x_s) - \mathrm{Cert}(x_s \cup x_i)\|}{c_i}. \tag{6}
\]

To provide a fair comparison with the remaining methods, we made several modifications to the authors' implementation. These include 1) preventing the prediction action until the pre-specified budget is met, 2) setting all feature costs to be identical, and 3) supporting pre-defined feature groups as described in Appendix D.3. When training, we update the P-, Q-, and target Q-networks every 1 + d/100 experiences, where d is the number of features in a dataset. In addition, the replay buffer is set to store the 1000d most recent experiences, and the random exploration probability is decayed so that it eventually reaches a value of 0.1.
eventually reaches a value of 0.1.
|
1565 |
+
D. Training approach and hyperparameters
|
1566 |
+
This section provides more details on our training approach and hyperparameter choices.
|
1567 |
+
D.1. Training pseudocode
|
1568 |
+
Algorithm 1 summarizes our training approach. Briefly, we select features by drawing a Concrete sample using policy
|
1569 |
+
network’s logits, we calculate the loss based on the subsequent prediction, and we then update the mask for the next step
|
1570 |
+
using a discrete sample from the policy’s distribution. We implemented this approach using PyTorch (Paszke et al., 2017)
|
1571 |
+
and PyTorch Lightning2.
|
1572 |
+
Algorithm 1: Training pseudocode
|
1573 |
+
Input: Data distribution p(x, y), budget k > 0, learning rate γ > 0, temperature τ > 0
|
1574 |
+
Output: Predictor model f(x; θ), policy model π(x; φ)
|
1575 |
+
initialize f(x; θ), π(x; φ)
|
1576 |
+
while not converged do
|
1577 |
+
sample x, y ∼ p(x, y)
|
1578 |
+
initialize L = 0, m = [0, . . . , 0]
|
1579 |
+
for j = 1 to k do
|
1580 |
+
calculate logits α = π(x ⊙ m; φ), sample Gi ∼ Gumbel for i ∈ [d]
|
1581 |
+
set ˜m = max
|
1582 |
+
�
|
1583 |
+
m, softmax(G + α, τ)
|
1584 |
+
�
|
1585 |
+
// update with Concrete
|
1586 |
+
set m = max
|
1587 |
+
�
|
1588 |
+
m, softmax(G + α, 0)
|
1589 |
+
�
|
1590 |
+
// update with one-hot
|
1591 |
+
update L ← L + ℓ
|
1592 |
+
�
|
1593 |
+
f(x ⊙ ˜m; θ), y
|
1594 |
+
�
|
1595 |
+
end
|
1596 |
+
update θ ← θ − γ∇θL, φ ← φ − γ∇φL
|
1597 |
+
end
|
1598 |
+
return f(x; θ), π(x; φ)
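The Concrete sampling step inside the inner loop can be sketched in isolation. This minimal numpy version (a real implementation would use differentiable tensors, e.g. in PyTorch, so that gradients flow through the relaxed mask) shows how the relaxed mask m̃ and the discrete mask m are produced from the same Gumbel noise; the logits below are arbitrary:

```python
import numpy as np

def concrete_sample(logits, tau, rng):
    """Draw a relaxed one-hot sample via the Gumbel-softmax trick;
    the zero-temperature limit recovers a discrete one-hot sample."""
    g = rng.gumbel(size=logits.shape)          # G_i ~ Gumbel(0, 1)
    z = (logits + g) / tau
    z = z - z.max()                            # numerical stability
    soft = np.exp(z) / np.exp(z).sum()         # Concrete (relaxed) sample
    hard = np.eye(len(logits))[soft.argmax()]  # one-hot (temperature -> 0)
    return soft, hard

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0, 0.0])       # hypothetical policy logits
m = np.zeros(4)                                # current mask
soft, hard = concrete_sample(logits, tau=0.5, rng=rng)
m_tilde = np.maximum(m, soft)   # mask used when computing the loss
m = np.maximum(m, hard)         # discrete mask carried to the next step
```

Both samples share the same argmax, so the feature revealed to the predictor at the next step is the one the relaxed sample concentrated on.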
One notable difference between Algorithm 1 and our objective L(θ, φ) in the main text is the use of the policy π(x; φ) for generating feature subsets. This differs from eq. (5), which generates feature subsets using a subset distribution p(s). The key shared factor between both approaches is that there are separate optimization problems over each feature set that are effectively treated independently. For each feature set x_s, the problem is the one-step-ahead loss, and it incorporates both the policy and predictor as follows:

\[
\mathbb{E}_{i \sim \pi(x_s; \phi)}\Big[\ell\big(f(x_s \cup x_i; \theta), y\big)\Big]. \tag{7}
\]

The problems for each subset do not interact: during optimization, the selection given x_s is based only on the immediate change in the loss, and gradients are not propagated through multiple selections as they would be for an RL-based solution. In solving these multiple problems, the difference is simply that eq. (5) weights them according to p(s), whereas Algorithm 1 weights them according to the current policy π(x; φ).

² https://www.pytorchlightning.ai
D.2. Hyperparameters

Our experiments with the six tabular datasets all used fully connected architectures with dropout in all layers (Srivastava et al., 2014). The dropout probability is set to 0.3, the networks have two hidden layers of width 128, and we performed early stopping using the validation loss. For our method, the predictor and policy were separate networks with identical architectures. When training models with the features selected by static methods, we reported results using the best model from multiple training runs based on the validation loss. We did not perform any additional hyperparameter tuning due to the large number of models being trained.

For MNIST, we used fully connected architectures with two layers of width 512 and the dropout probability set to 0.3. Again, our method used separate networks with identical architectures. For CIFAR-10, we used a shared ResNet backbone (He et al., 2016b) consisting of several residually connected convolutional layers. The classification head consists of global average pooling and a linear layer, and the selection head consists of a transposed convolution layer followed by a 1 × 1 convolution, which outputs a grid of logits with size 8 × 8. Our CIFAR-10 networks are trained using random crops and random horizontal flips as augmentations.
D.3. Feature grouping

All of the methods used in our experiments were designed to select individual features, but this is undesirable when using categorical features with one-hot encodings. Each of our three emergency medicine tasks involves such features, so we extended each method to support feature grouping.

SAGE and permutation tests are trivial to extend to feature groups: we simply removed groups of features rather than individual features when calculating importance scores. For DeepLift and Integrated Gradients, we used the summed importance within each group, which preserves each method's additivity property. For the method based on Concrete Autoencoders, we implemented a generalized version of the selection layer that operates on feature groups. We also extended OL to operate on feature groups by having actions map to groups rather than individual features.

Finally, for our method, we parameterized the policy network π(x; φ) so that the number of outputs is the number of groups g rather than the total number of features d (where g < d). When applying masking, we first generate a binary mask m ∈ [0, 1]^g, and we then project the mask into [0, 1]^d using a binary group matrix G ∈ {0, 1}^{d×g}, where G_ij = 1 if feature i is in group j and G_ij = 0 otherwise. Thus, our masked input vector is given by x ⊙ (Gm).
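The group-mask projection x ⊙ (Gm) can be sketched directly; the group matrix below is a hypothetical example with d = 5 features in g = 3 groups:

```python
import numpy as np

# G[i, j] = 1 if feature i belongs to group j
G = np.array([
    [1, 0, 0],
    [1, 0, 0],   # features 0-1: group 0 (e.g., a one-hot encoded category)
    [0, 1, 0],   # feature 2: group 1
    [0, 0, 1],
    [0, 0, 1],   # features 3-4: group 2
])

x = np.array([2.0, 3.0, 5.0, 7.0, 11.0])
m = np.array([1.0, 1.0, 0.0])   # group-level mask from the policy

masked_x = x * (G @ m)          # project to feature level, then mask
```

Selecting a group therefore reveals all of its member features at once, so a one-hot encoded category is always acquired as a unit.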
E. Additional results

This section provides several additional experimental results. First, Figure 4 and Figure 5 show the same results as Figure 2, but larger for improved visibility. Next, Figure 6 through Figure 11 display the feature selection frequency for each of the tabular datasets when using the greedy method. The heatmaps in each plot show the portion of the time that a feature (or feature group) is selected under a specific feature budget. These plots reveal that our method is indeed selecting different features for different samples.

Finally, Figure 12 displays examples of CIFAR-10 predictions given different numbers of revealed patches. The predictions generally become relatively accurate after revealing only a small number of patches, reflecting a similar result as Figure 3. Qualitatively, we can see that the policy network learns to select vertical stripes, but the order in which it fills out each stripe depends on where it predicts important information may be located.
[Figure 4: panels "Bleeding AUROC Comparison", "Respiratory AUROC Comparison", and "Fluid AUROC Comparison", plotting AUROC against the number of selected features for IntGrad, DeepLift, SAGE, Perm Test, CAE, Opportunistic (OL), CMI (Marginal), CMI (PVAE), and Greedy (Ours).]

Figure 4. AUROC comparison on the three emergency medicine diagnosis tasks.
|
1723 |
+
[Figure 5: AUROC vs. number of selected features on the Spam, MiniBooNE and Diabetes datasets; same methods as in Figure 4.]

Figure 5. AUROC comparison on the three public tabular datasets.
[Figure 6: selection-frequency heatmap over feature index (0–40) and number of selections (0–25).]

Figure 6. Feature selection frequency for our greedy approach on the bleeding dataset.
[Figure 7: selection-frequency heatmap over feature index (0–30) and number of selections (0–25).]

Figure 7. Feature selection frequency for our greedy approach on the respiratory dataset.
[Figure 8: selection-frequency heatmap over feature index (0–160) and number of selections (0–25).]

Figure 8. Feature selection frequency for our greedy approach on the fluid dataset.
[Figure 9: selection-frequency heatmap over feature index (0–50) and number of selections (0–25).]

Figure 9. Feature selection frequency for our greedy approach on the spam dataset.
[Figure 10: selection-frequency heatmap over feature index (0–40) and number of selections (0–25).]

Figure 10. Feature selection frequency for our greedy approach on the MiniBooNE dataset.
[Figure 11: selection-frequency heatmap over feature index (0–40) and number of selections (0–25).]

Figure 11. Feature selection frequency for our greedy approach on the diabetes dataset.
[Figure 12: full images and predicted class probabilities for seven CIFAR-10 examples (Horse, Automobile, Truck, Cat, Dog, Frog, Ship) as 1, 2, 5, 10, 15, 20, 25 and 30 patches are revealed.]

Figure 12. CIFAR-10 predictions with different numbers of patches revealed by our approach.
Magnetic anisotropy and low-energy spin dynamics in magnetic van der Waals compounds Mn2P2S6 and MnNiP2S6

J. J. Abraham,1,2,* Y. Senyk,1,2,* Y. Shemerliuk,1 S. Selter,1,2 S. Aswartham,1 B. Büchner,1,3 V. Kataev,1 and A. Alfonsov1,3

1Leibniz IFW Dresden, D-01069 Dresden, Germany
2Institute for Solid State and Materials Physics, TU Dresden, D-01062 Dresden, Germany
3Institute for Solid State and Materials Physics and Würzburg-Dresden Cluster of Excellence ct.qmat, TU Dresden, D-01062 Dresden, Germany

(Dated: January 12, 2023)
We report a detailed high-field and high-frequency electron spin resonance (HF-ESR) spectroscopic study of the single-crystalline van der Waals compounds Mn2P2S6 and MnNiP2S6. Analysis of magnetic excitations shows that, in comparison to Mn2P2S6, increasing the Ni content yields a larger magnon gap in the ordered state and a larger g-factor value and anisotropy in the paramagnetic state. The studied compounds are found to be strongly anisotropic, each having a unique ground state and type of magnetic order. The stronger deviation of the g-factor from the free-electron value in the samples containing Ni suggests that the anisotropy of the exchange is an important contributor to the stabilization of a certain type of magnetic order with particular anisotropy. At temperatures above the magnetic order, we have analyzed the spin-spin correlations resulting in the development of a slowly fluctuating short-range order. They are much more pronounced in MnNiP2S6 compared to Mn2P2S6. The enhanced spin fluctuations in MnNiP2S6 are attributed to the competition of different types of magnetic order. Finally, the analysis of the temperature-dependent critical behavior of the magnon gaps below the ordering temperature in Mn2P2S6 suggests that the character of the spin wave excitations in this compound undergoes a field-induced crossover from a 3D-like towards a 2D XY regime.
I. INTRODUCTION
In recent years, magnetic van der Waals (vdW) materials have become increasingly attractive for fundamental investigations since they provide an immense possibility to study intrinsic magnetism in the low-dimensional limit [1–3]. The weak vdW forces hold together the atomic monolayers in vdW crystals, which results in a poor interlayer coupling and therefore renders these materials intrinsically two dimensional. In addition to the fundamental research, these materials are very promising as potential candidates for next-generation spintronics devices [4–7].

Among the variety of magnetic vdW materials, a particularly interesting subclass is represented by the antiferromagnetic (TM)2P2S6 thiophosphates (TM stands for a transition metal ion). Here the transition metal ions are arranged in a graphene-like layered honeycomb lattice [8]. The high flexibility of the choice of the TM ion enables control of the properties. Among the thiophosphates there are examples of superconductors [9], photodetectors and field effect transistors [10, 11]. They can also be used for ion-exchange applications [12], catalytic activity [13], etc. Therefore, a proper choice of TM, or of a mixture of magnetically inequivalent ions on the same crystallographic position, could lead to the possibility of engineering a material with a desired magnetic ground state, excitations and correlations.
* These authors contributed equally to this work.
In order to establish the connection between the choice of magnetic ion and the resulting ground state and correlations, we performed a detailed high-field and high-frequency electron spin resonance (HF-ESR) spectroscopic study on single crystals of the van der Waals compounds Mn2P2S6 and MnNiP2S6 in a broad range of microwave frequencies and temperatures below and above the magnetic order. ESR spectroscopy is a powerful tool that can provide insights into spin-spin correlations, magnetic anisotropy and spin dynamics. This technique has been shown to be very effective for the exploration of the magnetic properties of vdW systems [14–25]. Although resonance studies on Mn2P2S6 were made by Okuda et al. [14], Joy and Vasudevan [15] and Kobets et al. [18], a high-frequency ESR study exploring a broad range of temperatures below and above the magnetic order was not yet performed. The MnNiP2S6 compound is barely explored from the point of view of spin excitations from the magnetic ground state below the ordering temperature, and from the point of view of spin-spin correlations in the high-temperature regime.

Investigating Mn2P2S6 and MnNiP2S6, we have found differences in the types of magnetic order and anisotropies below the ordering temperature TN, as well as in the g-factors and their anisotropy above TN in these compounds. In fact, increasing the Ni content yields a larger magnon gap in the ordered state (T << TN) and a larger g-factor value and anisotropy in the paramagnetic state (T >> TN). At temperatures above the magnetic order, we have analyzed the spin-spin correlations resulting in the development of a slowly fluctuating short-range order. They are much more pronounced in MnNiP2S6 compared to Mn2P2S6, which in our previous study has shown clear-cut signatures of 2D correlated spin dynamics [25]. Therefore, the enhanced spin fluctuations in MnNiP2S6 are attributed to the competition of different types of magnetic order. Finally, the analysis of the temperature-dependent critical behavior of the magnon gaps below the ordering temperature in Mn2P2S6 suggests that the character of the spin wave excitations in this compound undergoes a field-induced crossover from a 3D-like towards a 2D XY regime.

arXiv:2301.04239v1 [cond-mat.str-el] 10 Jan 2023
II. EXPERIMENTAL DETAILS
Crystal growth of the Mn2P2S6 and MnNiP2S6 samples investigated in this work was done using the chemical vapor transport technique with iodine as the transport agent. Details of their growth, crystallographic, compositional and static magnetic characterization are described in Refs. [26, 27]. Note that the experimental value x_exp in (Mn1−xNix)2P2S6 for the nominal MnNiP2S6 compound is found to be x_exp = 0.45, considering an uncertainty of approximately 5% [27]. Both materials exhibit a monoclinic crystal lattice system with a C2/m space group [27, 28]. Each unit cell contains a [P2S6]4− cluster with S atoms occupying the edges of the TM octahedra and P-P dumbbells occupying the void of each metal honeycomb sublattice. The crystallographic c-axis makes an angle of 17° with the normal to the ab-plane [29], which is known to be one of the magnetic axes and is hereafter called c* [18].

The ordering temperature of Mn2P2S6 is found to be TN = 77 K [27]. In contrast, the transition temperature of MnNiP2S6 is rather uncertain and might depend on the direction of the applied magnetic field. Various studies have reported different values of TN; for instance, it amounts to 12 K in [30], 38 K in [27], 41 K in [31] and 42 K in [32]. For the samples used in this study, the ordering temperatures were extracted from the temperature dependence of the susceptibility χ measured at H = 1000 Oe (see Appendix, Fig. 10 (a)). The calculation of the maximum value of the derivative d(χ · T)/dT yields TN ∼ 57 K for H ∥ c* and TN ∼ 76 K for H ⊥ c* (hereafter called TN*).

The antiferromagnetic resonance (AFMR) and ESR measurements (hereafter called HF-ESR) were performed on several single crystalline samples of Mn2P2S6 and MnNiP2S6 using a homemade HF-ESR spectrometer. A superconducting magnet from Oxford Instruments with a variable temperature insert (VTI) was used to generate magnetic fields up to 16 T, allowing a continuous field sweep. The sample was mounted on a probe head which was then inserted into the VTI immersed in a 4He cryostat. A piezoelectric step-motor based sample holder was used for angular dependent measurements. Continuous He gas flow was utilized to attain stable temperatures in the range of 3 to 300 K. Generation and detection of microwaves was performed using a vector network analyzer (PNA-X) from Keysight Technologies. Equipped with the frequency extensions from Virginia Diodes, Inc., the PNA-X can generate frequencies in the range from 75 to 330 GHz. The measurements were performed in transmission mode, where the microwaves are directed to the sample using oversized waveguides. All measurements were made by sweeping the field from 0 to 16 T and back to 0 T at constant temperature and frequency.

[FIG. 1: HF-ESR signal SD (arb. u.) vs. magnetic field (T) at temperatures from 3 to ~300 K for (a) Mn2P2S6 at ν = 147 GHz, showing modes B4 and B5, and (b) MnNiP2S6 at ν = 326 GHz.]

FIG. 1. Temperature dependence of HF-ESR spectra of (a) Mn2P2S6 at fixed excitation frequency ν ≈ 147 GHz and (b) MnNiP2S6 at ν ≈ 326 GHz in the H ∥ c* configuration. Spectra are normalized and vertically shifted for clarity. The temperature independent peaks from the impurity in the probehead occurring at low frequencies are marked with asterisks.
HF-ESR signals generally have a Lorentzian line profile with absorption and dispersion components. For such a case, the resonance field (Hres) and linewidth (full width at half maximum, ∆H) can be extracted by fitting the signal using the function:

    SD(H) = (2 Amp / π) × (L1 sin α + L2 cos α) + C_offset + C_slope H    (1)

where SD(H) is the signal at the detector and Amp is the amplitude. C_offset represents the offset and C_slope H is the linear background of the spectra. L1 is the Lorentzian absorption, which is defined in terms of Hres and ∆H. L2 is the Lorentzian dispersion, which is obtained by applying the Kramers-Kronig transformation to L1. α is a parameter used to define the degree of instrumental mixing of the absorption and dispersion components, which is unavoidable in the used setup. Some of the HF-ESR signals of Mn2P2S6 could not be fitted using the above equation due to the development of shoulders or the splitting of peaks [33]. ∆H, Hres and, therefore, δH = Hres − Hres(300 K) were then obtained by picking the position of the peak value and by calculating the full width at half maximum.

[FIG. 2: δH = Hres(T) − Hres(300 K) vs. temperature (main panel) for Mn2P2S6 at 88, 147 and 329 GHz in the H ∥ c* and H ⊥ c* configurations, with AFMR modes B3, B4 and B5 marked; inset: linewidth ∆H vs. temperature at ν = 329 GHz.]

FIG. 2. Shift of the resonance field position δH (main panel) and linewidth ∆H (inset) as a function of temperature. The horizontal dashed line represents zero shift from the room temperature value and the vertical dashed line (also for inset) represents the Néel temperature of the material.
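To make Eq. (1) concrete, the mixed Lorentzian line shape can be sketched numerically as below. This is a minimal illustration with hypothetical parameter values, not the authors' fitting code; the absorption component here is normalized to unit peak height:

```python
import numpy as np

def esr_signal(H, Hres, dH, Amp, alpha, C_offset, C_slope):
    """Mixed Lorentzian absorption/dispersion line in the spirit of Eq. (1).

    dH is the full width at half maximum; alpha sets the degree of
    instrumental mixing between absorption (L1) and dispersion (L2).
    """
    x = (H - Hres) / (dH / 2.0)
    L1 = 1.0 / (1.0 + x**2)   # Lorentzian absorption (unit peak height)
    L2 = x / (1.0 + x**2)     # its Kramers-Kronig partner (dispersion)
    return (2.0 * Amp / np.pi) * (L1 * np.sin(alpha) + L2 * np.cos(alpha)) \
        + C_offset + C_slope * H

# Hypothetical spectrum: resonance at 5 T, 0.3 T linewidth, pure absorption.
H = np.linspace(0.0, 16.0, 1601)
SD = esr_signal(H, Hres=5.0, dH=0.3, Amp=1.0, alpha=np.pi / 2,
                C_offset=0.0, C_slope=0.0)
print(H[np.argmax(SD)])  # peak sits at the resonance field, 5.0 T
```

In practice such a model would be passed to a least-squares fitter (e.g. `scipy.optimize.curve_fit`) to extract Hres and ∆H from a measured field sweep.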
III. RESULTS

A. Temperature dependence of HF-ESR response
To study the temperature evolution of the spin dynam-
|
318 |
+
ics, the HF-ESR spectra were measured at several tem-
|
319 |
+
peratures in the range of 3 - 300 K and at few selected
|
320 |
+
microwave excitation frequencies ν.
|
321 |
+
Such dependences
|
322 |
+
measured in the H ∥ c* configuration at ν = 147 GHz
|
323 |
+
for Mn2P2S6 and at ν = 326 GHz for MnNiP2S6 are pre-
|
324 |
+
sented in Fig. 1. As can be seen, in the case of Mn2P2S6
|
325 |
+
upon entering the ordered state with lowering tempera-
|
326 |
+
ture, the single ESR line transforms into two modes B4
|
327 |
+
and B5 (see below) at ν = 147 GHz. The temperature
|
328 |
+
dependence of the spectra for other frequencies can be
|
329 |
+
found in Appendix in Fig. 9.
|
330 |
+
The shift of the obtained values of Hres from the
|
331 |
+
resonance field position at T = 300 K, δH = Hres -
|
332 |
+
Hres(300 K) is plotted as a function of temperature for
|
333 |
+
Mn2P2S6 and MnNiP2S6 in Fig. 2 and Fig. 3, respec-
|
334 |
+
tively. Hres(300 K) was calculated using the equation
|
335 |
+
hν = gµBµ0Hres, where the g-factor is obtained from
|
336 |
+
the frequency dependence of the resonance field at 300 K
|
337 |
+
(see Sec. III B). In the case of Mn2P2S6, δH stays practi-
|
338 |
+
cally constant down to T ∼ 130 − 150 K for both config-
|
339 |
+
urations H ∥ c* and H ⊥ c*. Below this temperature it
|
340 |
+
starts to slightly deviate (lower inset in Fig. 2), suggest-
|
341 |
+
ing a development of the static on the ESR time scale
|
342 |
+
0
|
343 |
+
50
|
344 |
+
100
|
345 |
+
150
|
346 |
+
200
|
347 |
+
250
|
348 |
+
300
|
349 |
+
-4
|
350 |
+
-3
|
351 |
+
-2
|
352 |
+
-1
|
353 |
+
0
|
354 |
+
H || c*
|
355 |
+
H ⊥ c*
|
356 |
+
� H = Hres (T) - Hres (300 K) (T)
|
357 |
+
Temperature (K)
|
358 |
+
MnNiP2S6, � = 326 GHz
|
359 |
+
TN TN*
|
360 |
+
0
|
361 |
+
100
|
362 |
+
200
|
363 |
+
300
|
364 |
+
0
|
365 |
+
1
|
366 |
+
2
|
367 |
+
3
|
368 |
+
� H = Linewidth (T)
|
369 |
+
Temperature (K)
|
370 |
+
TN*
|
371 |
+
TN
|
372 |
+
FIG. 3. Temperature dependence of δH (main panel) and ∆H
|
373 |
+
(inset) measured at ν = 325.67 GHz. The horizontal dashed
|
374 |
+
line represents zero shift from room temperature value and
|
375 |
+
the vertical dashed line (also for inset) represents the N´eel
|
376 |
+
temperature of the material.
|
internal fields. In contrast, the deviations of δH from zero in MnNiP2S6 are larger and are observed at a higher temperature, T ∼ 200 K. In the vicinity of the ordering temperature TN* there is a strong shift of the ESR line, observed for both compounds. In the Mn2P2S6 case the sign of δH below the ordering temperature depends on the particular AFMR mode probed at the specific frequency. This is detailed in the following Sec. III C.

Insets of Fig. 2 and Fig. 3 show the evolution of the linewidth ∆H as a function of temperature for the Mn2P2S6 and MnNiP2S6 compounds, respectively. At T > TN, ∆H remains practically temperature independent for both compounds. A small broadening of the line is observed in the vicinity of the phase-transition temperature, and there is a drastic increase of ∆H in the ordered state. Note that ∆H of MnNiP2S6 is larger than that of Mn2P2S6 over the whole temperature range. Moreover, for MnNiP2S6, ∆H increases at low temperatures by almost one order of magnitude, from 0.3 to 3 T (inset in Fig. 3). Such extensive line broadening at low temperatures hampers the accurate determination of the linewidth and resonance field, which is accounted for in the error bars.
B. Frequency dependence at 300 K
The frequency dependence of the resonance field ν(Hres) of the Mn2P2S6 and MnNiP2S6 compounds measured in the paramagnetic state at T = 300 K is shown in Fig. 4. Both plots show a linear dependence which can be fitted with the conventional paramagnetic resonance condition for a gapless excitation, hν = gµBµ0Hres. Here, h is the Planck constant, µB is the Bohr magneton, µ0 is the permeability of free space and g is the g-factor of the resonating spins. For Mn2P2S6, we obtain almost isotropic values of the g-factor: g∥ = 1.992 ± 0.001 (H ∥ c*) and g⊥ = 1.999 ± 0.001 (H ⊥ c*), as expected for the Mn2+ ion [34]. In contrast, MnNiP2S6 shows a small anisotropy of the g-factors, with g∥ = 2.026 ± 0.002 and g⊥ = 2.047 ± 0.004. For Ni2+ ions (3d8, S = 1), g-factors are expected to be appreciably greater than the free-spin value, as revealed in HF-ESR studies on Ni2P2S6 [24].

FIG. 4. ν(Hres) dependence measured at 300 K for (a) Mn2P2S6 and (b) MnNiP2S6. Blue squares represent the H ∥ c* configuration and red circles the H ⊥ c* configuration. Solid lines show the results of the fit according to the resonance condition of a conventional paramagnet, hν = gµBµ0Hres. Right vertical axis: Representative spectra normalized for clarity. The color of each spectrum corresponds to the color of the data points in the ν(Hres) plot with the same Hres.
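The linear fit used to extract the g-factors can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis script: the (ν, Hres) pairs below are generated from g = 1.995 rather than taken from the measured data.

```python
# Extracting g from the paramagnetic resonance condition hν = g·μB·μ0·Hres
# by a least-squares fit of ν versus Hres through the origin.
import numpy as np

h    = 6.62607e-34   # Planck constant (J s)
mu_B = 9.27401e-24   # Bohr magneton (J/T)

nu = np.array([80e9, 160e9, 240e9, 320e9])     # excitation frequencies (Hz)
Hres = h * nu / (1.995 * mu_B)                 # illustrative resonance fields, μ0·H in T

slope = np.sum(nu * Hres) / np.sum(Hres**2)    # best-fit slope through the origin (Hz/T)
g = h * slope / mu_B
print(f"g = {g:.3f}")
```

With real data one would fit both field orientations separately to obtain g∥ and g⊥.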
C. Frequency dependence at 3 K

1. Mn2P2S6
The low-temperature resonance modes of Mn2P2S6 obtained at T = 3 K are plotted in Fig. 5. The measurements in the H ∥ c* configuration (Fig. 5) yield three branches, B3, B4 and B5, two of which (B3 and B4) are observed below the spin-flop field, HSF = 3.62 T. Branches B1 and B2 are assigned to the measurements along the a- and b-axis, respectively [35]. Additionally, at the spin-flop field a non-resonance absorption peak (full circles) was observed at high frequencies.
The exact gap values are calculated by fitting the in-plane resonance branches B1 and B2 using the analytical expression for easy-axis AFMs [36]:

hν = [(g⊥µBµ0Hres)² + ∆1,2²]^(1/2).    (2)
FIG. 5. ν(Hres) dependence of the HF-ESR signals of Mn2P2S6 measured at T = 3 K (symbols). Solid lines are fits to the phenomenological equations as explained in the text. The dashed gray lines correspond to the frequencies at which temperature-dependent measurements were performed. The dashed line in magenta represents the paramagnetic branch. Right vertical scale: Normalized ESR spectra for selected frequencies; for clarity the spectra are shifted vertically. Error bars in Hres are smaller than the symbol size.

Here ∆1 corresponds to the magnon excitation gap for branch B2 (also B3), and ∆2 corresponds to B1 (also B4). The obtained values are ∆1 = ∆1^Mn2P2S6 = 101.3 ± 0.6 GHz and ∆2 = ∆2^Mn2P2S6 = 116 ± 2 GHz. These values, which agree well with previous measurements by Okuda et al. [14] and Kobets et al. [18], are then used in the theoretical description for a rhombic biaxial two-lattice AFM [18, 37] to match the field dependence of B3 and B4 [38]:
hν = (1/√2) × [∆1² + ∆2² + 2(gµBµ0Hres)² ± {8(gµBµ0Hres)²(∆1² + ∆2²) + (∆1² − ∆2²)²}^(1/2)]^(1/2).    (3)
Above the spin-flop field this model can no longer be used to describe the system. Therefore branch B5 [38] was simulated with the resonance condition of a conventional easy-axis AFM [36]:

hν = [(g∥µBµ0Hres)² − ∆1²]^(1/2).    (4)
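The branch structure encoded in Eqs. (2)–(4) can be evaluated numerically. The sketch below uses the fitted zero-field gaps (101.3 and 116 GHz) and g = 1.995; it assumes the standard rhombic-AFM form of Eq. (3), with (∆1² − ∆2²)² under the inner square root, and is an illustration rather than the authors' fitting code.

```python
# AFMR branches of a biaxial two-sublattice AFM, with all quantities in GHz.
import numpy as np

h, mu_B = 6.62607e-34, 9.27401e-24
g = 1.995
gamma = g * mu_B / h / 1e9        # GHz per tesla (~27.9 GHz/T)
d1, d2 = 101.3, 116.0             # zero-field gaps (GHz)

def nu_hard(H):
    """Eq. (2): field along a hard direction."""
    return np.sqrt((gamma * H)**2 + d1**2)

def nu_biaxial(H, sign=+1):
    """Eq. (3): H along the easy axis, below the spin flop (two branches)."""
    s = d1**2 + d2**2
    root = np.sqrt(8 * (gamma * H)**2 * s + (d1**2 - d2**2)**2)
    return np.sqrt((s + 2 * (gamma * H)**2 + sign * root) / 2)

def nu_spin_flop(H):
    """Eq. (4): H along the easy axis, above the spin flop."""
    return np.sqrt((gamma * H)**2 - d1**2)

# At zero field the two Eq. (3) branches reduce to the gaps d2 and d1:
print(nu_biaxial(0, +1), nu_biaxial(0, -1))
```

The zero-field check confirms that the ± branches of Eq. (3) open at ∆2 and ∆1, as required by the assignment of B3 and B4 in the text.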
The presence of a second easy axis within the ab-plane is further confirmed by the angular dependence of Hres(θ) in the H ⊥ c* configuration (Fig. 6). It follows an A + B sin²(θ) law, i.e., a 180° periodicity of Hres(θ); here θ denotes the angle between the applied field and the a-axis. For a honeycomb spin system with a Néel-type arrangement, a six-fold periodicity of the angular dependence in the layer plane could be expected. However, it is absent in the case of the Mn2P2S6 sample due to the dominating effect of the two-fold in-plane anisotropy.
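The A + B sin²(θ + C) fit reported in Fig. 6 can be performed as a purely linear least-squares problem after expanding the sin² term into cos 2θ and sin 2θ components. The sketch below synthesizes noise-free data from the fit values quoted in the figure (A ≈ 3.96 T, B ≈ 0.44 T, C ≈ 0.03 rad); it is not the measured data set.

```python
# Linear-least-squares fit of Hres(θ) = A + B·sin²(θ + C), using the identity
# A + B sin²(θ+C) = c0 + c1·cos 2θ + c2·sin 2θ with
# c0 = A + B/2, c1 = −(B/2)cos 2C, c2 = (B/2)sin 2C.
import numpy as np

theta = np.radians(np.linspace(-120, 240, 37))
A0, B0, C0 = 3.96, 0.44, 0.03              # values from the fit table in Fig. 6
H = A0 + B0 * np.sin(theta + C0) ** 2      # synthetic resonance fields (T)

M = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
c0, c1, c2 = np.linalg.lstsq(M, H, rcond=None)[0]

B = 2 * np.hypot(c1, c2)
A = c0 - B / 2
C = 0.5 * np.arctan2(c2, -c1)
print(f"A = {A:.2f} T, B = {B:.2f} T, C = {C:.2f} rad")
```

Because the model is linear in (c0, c1, c2), this avoids any dependence on starting values, which is convenient when scanning many frequencies or temperatures.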
FIG. 6. Resonance field as a function of the angle θ at T = 3 K and ν = 160 GHz for Mn2P2S6. θ denotes the angle between the direction of the field applied within the ab-plane and the a-axis. The red dashed line represents the result of the fit to A + B sin²(θ + C), yielding A ≈ 3.96 T and B ≈ 0.44 T, as explained in the text.
To further analyze the measured ν(Hres) dependence of the AFMR modes in the magnetically ordered state of Mn2P2S6, which correspond to collective excitations of the spin lattice (spin waves), we employed linear spin-wave theory (LSWT) in the second-quantization formalism [36, 39]. The details of our model are provided in Ref. [40]. The phenomenological Hamiltonian for the two-sublattice spin system, used to calculate the spin-wave energies, has the following form:
H = A (M1·M2)/M0² + Kuniax (M1z² + M2z²)/M0² + (Kbiax/2) [(M1x² − M1y²) + (M2x² − M2y²)]/M0² − (H·M1) − (H·M2).    (5)
Here the first term represents the exchange interaction between the magnetic sublattices with respective magnetizations M1 and M2, such that M1² = M2² = M0² = (Ms/2)², where Ms is the saturation magnetization. A is the mean-field antiferromagnetic exchange constant. The second term in Eq. (5) is the uniaxial part of the magnetocrystalline anisotropy, given by the anisotropy constant Kuniax. The third term describes an additional anisotropy in the xy-plane with the respective constant Kbiax. The fourth and fifth terms are the Zeeman interactions for both sublattice magnetizations.
The results of the calculation match the measured data well. In the calculation we assumed a full Mn saturation moment of ∼ 5µB, yielding Ms = 446 erg/(G·cm³) = 446 · 10³ J/(T·m³), considering 4 Mn ions in the unit cell. The average g-factor value of 1.995 was taken from the frequency-dependence measurements at T = 300 K (Fig. 4). As a result we obtain the exchange constant A = 2.53 · 10⁸ erg/cm³ = 2.53 · 10⁷ J/m³, the uniaxial anisotropy constant Kuniax = −7.2 · 10⁴ erg/cm³ = −7.2 · 10³ J/m³, and an in-plane anisotropy constant Kbiax = 1.9 · 10⁴ erg/cm³ = 1.9 · 10³ J/m³. Within mean-field theory A is related to the Weiss constant via Θ = A · C/M0², where C is the Curie constant. Θ, which provides an average energy scale for the exchange interaction in the system, therefore amounts to at least ΘMn2P2S6 ≈ 350 K.

FIG. 7. ν(Hres) dependence of MnNiP2S6 measured at 3 K for both configurations of the magnetic field. Right vertical scale: Exemplary spectra positioned above the resonance points. The horizontal dashed gray line represents the frequency at which the temperature dependence was measured. The dashed line in magenta depicts the paramagnetic resonance branch at 300 K.
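The mean-field relation Θ = A·C/M0² can be checked numerically with the constants quoted above. The sketch below works in cgs-Gaussian units and assumes the textbook volume Curie constant C = n g²µB²S(S + 1)/(3kB) for the Mn sublattices; it is a consistency check, not the authors' calculation.

```python
# Mean-field Weiss constant for Mn2P2S6 from Θ = A·C/M0² (cgs-Gaussian units).
mu_B = 9.274e-21   # Bohr magneton (erg/G)
k_B  = 1.3807e-16  # Boltzmann constant (erg/K)

g, S = 1.995, 2.5
Ms = 446.0              # erg/(G cm^3), full saturation magnetization
M0 = Ms / 2             # sublattice magnetization
A  = 2.53e8             # erg/cm^3, mean-field exchange constant from the LSWT fit

n = Ms / (g * S * mu_B)                               # Mn ions per cm^3
C = n * g**2 * mu_B**2 * S * (S + 1) / (3 * k_B)      # volume Curie constant (K)
Theta = A * C / M0**2
print(f"Theta = {Theta:.0f} K")
```

The result comes out close to 355 K, consistent with the ΘMn2P2S6 ≈ 350 K quoted in the text.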
2. MnNiP2S6
In the case of MnNiP2S6 we observe one branch for the H ∥ c* and one for the H ⊥ c* configuration, as shown in Fig. 7. HF-ESR spectra were also recorded at various angles for the in-plane orientation. Within the experimental error bars of ∼ 300 mT, no signatures of an in-plane anisotropy were observed (see Fig. 10 in the Appendix). Both branches follow the resonance condition for a hard direction of an AFM given by Eq. (2), which reveals that neither the c*-axis nor the ab-plane is energetically favorable. The magnitudes of the gaps were obtained from the fits as ∆1^MnNiP2S6 = 115 ± 9 GHz for the H ∥ c* and ∆2^MnNiP2S6 = 215 ± 1 GHz for the H ⊥ c* configuration, respectively.
Unfortunately, we could not find a good match of the calculated frequency dependence to the one measured at low temperature (Fig. 7) with the AFM Hamiltonian for a two-sublattice model. Inclusion of terms describing cubic, hexagonal and symmetric exchange anisotropies in addition to those given in Eq. (5) did not yield a good result either. This could be explained by the complicated type of order of the two magnetically inequivalent ions Mn2+ (S = 5/2, g = 1.955) and Ni2+ (S = 1, g = 2.17), which possibly requires a more sophisticated model than the one used in this study. The analysis might be complicated further by potential disorder in the system due to the stochastic distribution of these ions on the 4g Wyckoff sites. Therefore the full description of this system remains an open question. However, one can draw some conclusions by analyzing how the magnetization measured at low T depends on the Mn/Ni ratio [27]. The reduction of the magnetization measured at low T can be explained by the reduction of the total moment per formula unit of MnNiP2S6, which can be found as an average of the Mn and Ni saturation magnetizations and amounts to ∼ 7.2 µB, compared to the saturation moment of ∼ 10 µB for Mn2P2S6. Additionally, the almost isotropic behavior of the magnetization as a function of magnetic field (inset of Fig. 10 (a)) suggests that the isotropic exchange energy is by orders of magnitude the strongest term defining the static magnetic properties of MnNiP2S6. In this case, the magnetization measured with the magnetic field applied along some hard direction should be inversely proportional to the mean-field isotropic exchange constant, M ∼ H/A. The reduced magnetization in MnNiP2S6 therefore suggests that Θ ∼ A should be at least as large in MnNiP2S6 (ΘMnNiP2S6 ≳ 350 K) as in Mn2P2S6 (ΘMn2P2S6 ≈ 350 K, see Sec. III C 1).
IV. DISCUSSION

A. Spin-Spin correlations in (Mn1−xNix)2P2S6 (T > TN*)
As shown in our previous work, both the resonance field and the linewidth of the HF-ESR signal in Ni2P2S6 remain temperature independent upon cooling the sample down to temperatures close to TN [24]. Usually, in quasi-2D spin systems the ESR line broadening and shift occur at T > TN due to the growth of in-plane spin-spin correlations, resulting in the development of slowly fluctuating short-range order [41]. Specifically, the slowly fluctuating spins produce a field that is static on the ESR timescale, causing a shift of the resonance line, while the distribution of these local fields and the shortening of the spin-spin relaxation time due to the slowing down of the spin fluctuations increase the ESR linewidth. In the Mn2P2S6 compound these features are not very pronounced; only in the resonance field of the HF-ESR response can one detect, within error bars, small deviations starting at T ∼ 130 − 150 K. In the MnNiP2S6 compound, in turn, the critical broadening and the shift of the resonance line are observed at a temperature T ∼ 200 K, which is much higher than TN.

Even though the critical broadening and the line shift above TN are much more pronounced in MnNiP2S6, our previous low-frequency ESR study shows that clear-cut signatures of 2D correlated spin dynamics are present above TN only in the Mn2P2S6 compound [25]. Interestingly, these signatures, seen in the characteristic angular dependence of the ESR linewidth, develop only at elevated temperatures, where the effect of the strong isotropic AFM coupling (ΘMn2P2S6 ≈ 350 K) on the spin fluctuations becomes gradually suppressed. The critical broadening and the shift of the ESR line in MnNiP2S6 above TN could therefore be due to the stochastic distribution of Mn and Ni ions on the 4g Wyckoff sites of the crystal structure, causing a competition of different order types with contrasting magnetic anisotropies. Our conclusion on the drastic difference in the ground states is supported by the strong distinction in the energy gaps and magnetic-field dependences of the low-T spin-wave excitations in Mn2P2S6, MnNiP2S6 and Ni2P2S6, respectively. The competing types of magnetic order might enhance the spin fluctuations seen in the HF-ESR response at elevated temperatures. Strong fluctuations suppress, in turn, the ordering temperature of MnNiP2S6, which is evident in recent studies on the (Mn1−xNix)2P2S6 series [27, 30, 32]. Moreover, in this scenario of a stochastic distribution of Mn and Ni, small deviations of the stoichiometry from sample to sample of the same nominal composition could vary the ordering temperature, which explains the broad range of TN measured in MnNiP2S6 samples [27, 30, 32].
|
855 |
+
B.
|
856 |
+
Ground state and anisotropy of
|
857 |
+
(Mn1−xNix)2P2S6 (T << TN*)
|
At the lowest measurement temperature Mn2P2S6 has an antiferromagnetic ground state with a biaxial type of anisotropy, and the spin-wave excitations can be successfully modeled using LSWT. As a result we obtain an estimate of the exchange interaction, ΘMn2P2S6 ≈ 350 K, and the anisotropy parameters Kuniax = −7.2 · 10⁴ erg/cm³ = −7.2 · 10³ J/m³ and Kbiax = 1.9 · 10⁴ erg/cm³ = 1.9 · 10³ J/m³. There is only about a factor of four difference between Kuniax and Kbiax, which suggests that the anisotropy in the ab-plane makes a significant contribution to the properties of the ground state of Mn2P2S6. Interestingly, the value of Kbiax = 1.9 · 10³ J/m³ ≈ 2 · 10⁻²⁵ J/spin is very close to the estimate of the anisotropy within the ab-plane made by Goossens [42], suggesting a possible dipolar nature of this anisotropy. In the MnNiP2S6 case we could not find an appropriate Hamiltonian within a two-sublattice model which would fully describe the system, calling for a more sophisticated theoretical study. Interestingly, the characteristic feature of the MnNiP2S6 compound is the almost isotropic dependence of the magnetization on the magnetic field, measured at temperatures well below TN [27]. The isothermal magnetization measurements made on the sample used in this study confirm the presence of this almost isotropic static magnetic response (see Appendix, Fig. 10 (a)). Such an isotropic behavior of the static magnetization is related to the strong isotropic AFM exchange interaction (ΘMnNiP2S6 ≳ 350 K), which is larger than the applied magnetic field and the observed magnetic anisotropy in this system. However, the HF-ESR data reveal a substantial anisotropy in the magnetic-field dependence of the spin waves. This seeming contradiction is actually not surprising: the magnetization at a magnetic field applied along some hard direction is mostly given by the mean-field exchange constant, M ∼ H/A, whereas the magnon gap measured in the ESR experiment is roughly proportional to the square root of the product of the exchange and magnetic anisotropy constants [36].
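The square-root scaling of the magnon gap can be checked numerically for Mn2P2S6 with the constants obtained in Sec. III C 1. The sketch below uses the textbook two-sublattice estimates ∆ ≈ γ√(2·BE·BA) and Bsf ≈ √(2·BE·BA − BA²), with effective fields BE = A/M0 and BA = 2|Kuniax|/M0; these simplified formulas are assumptions of this sketch, not the authors' full LSWT.

```python
# Order-of-magnitude check: magnon gap ~ sqrt(exchange * anisotropy) for Mn2P2S6
# (SI units; A and K_uni from the LSWT fit quoted in the text).
from math import sqrt

mu_B = 9.27401e-24    # J/T
h    = 6.62607e-34    # J s
g    = 1.995
Ms   = 446e3          # A/m, full saturation magnetization
M0   = Ms / 2         # sublattice magnetization
A     = 2.53e7        # J/m^3, exchange constant
K_uni = 7.2e3         # J/m^3, |uniaxial anisotropy constant|

B_E = A / M0          # effective exchange field (T), ~114 T
B_A = 2 * K_uni / M0  # effective anisotropy field (T), ~0.065 T

gap_GHz = g * mu_B / h * sqrt(2 * B_E * B_A) / 1e9
B_sf = sqrt(2 * B_E * B_A - B_A**2)
print(f"gap ≈ {gap_GHz:.0f} GHz, B_sf ≈ {B_sf:.1f} T")
```

The estimate lands near 107 GHz, between the measured gaps of 101.3 and 116 GHz, and gives a spin-flop field of the same scale as the observed 3.62 T.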
Qualitatively, the evolution of the type of magnetic anisotropy with x in (Mn1−xNix)2P2S6 is also evident from our study, where, e.g., MnNiP2S6 reveals no easy axis within or normal to the ab-plane. In order to quantify the change of the magnetic anisotropic properties with the Mn/Ni content, the excitation energy gaps can be used. A single gap of about 260 GHz was found in our previous study on Ni2P2S6 [24]. Both Mn-containing compounds have two gaps: ∆1^MnNiP2S6 = 115 ± 9 GHz and ∆2^MnNiP2S6 = 215 ± 1 GHz in the case of MnNiP2S6, and ∆1^Mn2P2S6 = 101.3 ± 0.6 GHz and ∆2^Mn2P2S6 = 116 ± 2 GHz in the case of Mn2P2S6. As can be seen, there is a noticeable increase of the zero-field AFM gaps in the samples with higher Ni content, suggesting an increase of the magnetic anisotropy and exchange interaction. Indeed, the estimated energy scale of the exchange interaction is about ∼ 350 K in Mn2P2S6, more than ∼ 350 K in MnNiP2S6, and even larger in Ni2P2S6, as follows from the observation of the larger TN and as suggested by previous investigations [25, 43–45]. Mn2+, with its half-filled 3d electronic shell and a small admixture of the excited state 4P5/2 into the ground state 6S5/2, is an ion with rather isotropic magnetic properties. In contrast, the ground state of the Ni2+ ion in the octahedral environment [8] is a spin triplet with higher-lying orbital multiplets admixed through the spin-orbit coupling [34], which makes the Ni spin (S = 1) sensitive to the local crystal field. This, first, could increase the contribution of the local (single-ion) magnetic anisotropy term in the Hamiltonian describing the system in the ordered and in the paramagnetic state, as discussed for the case of Ni2P2S6 in [24]. Second, it could yield a deviation of the g-factor from the free-electron value and also induce an effective g-factor anisotropy. The effective g-factor value and its anisotropy, as found in our study, increase with the Ni content. The deviation of the g-factor from the free-electron value (∆g) and the anisotropy of the exchange both originate from the spin-orbit coupling and are therefore interrelated. In the case of symmetric anisotropic exchange the elements of the anisotropic exchange tensor are A ∝ (∆g/g)²J [46–48], where J is the isotropic exchange interaction constant. The observation of increased ∆g at higher Ni content suggests that in the Ni-containing (Mn1−xNix)2P2S6 the exchange anisotropy is likely an important contributor to the anisotropic properties of the ground state at low temperatures < TN, such as, e.g., the increased magnon gaps.

FIG. 8. Main panel: Temperature dependence of the normalized energy gap ∆(T)/∆(3 K) = [1 − (T/TN)]^b for Mn2P2S6 at different field regimes, with exponents b ≈ 0.55 and 0.6 for mode B4 (collinear phase, H ∥ c*) and b ≈ 0.29 and 0.26 for mode B5 (spin-flop phase). Symbol shapes and colors correspond to those in Fig. 2. Inset: Resonance branches at T = 3 K (solid lines) as in Fig. 5. Symbols (same as in the main panel) indicate the positions of the resonance modes B4 at 147 GHz and B5 at 147 and 329 GHz. The position of mode B4 at 88 GHz is not shown here since it can be detected at T ≥ 50 K only. The temperature dependence of these modes shown in Fig. 2 was used to estimate that of ∆(T) (see the text).
C. Critical behavior of Mn2P2S6 (T ≲ TN*)
In the following we discuss the temperature dependence of the excitation energy gap ∆ at finite magnetic fields in the collinear and the spin-flop AFM ordered phases of Mn2P2S6, at H < Hsf and H > Hsf, respectively. This should provide useful insights into the type of critical behavior of the Mn spin lattice at T < TN. Such a dependence can be obtained by analyzing the temperature dependence of the shift of the resonance field positions Hres(T) of the excitation modes B4 and B5 for H ∥ c* (Fig. 2), with the aid of the simplified relations ∆ ≈ hν − g∥µBµ0Hres for mode B4 and ∆ ≈ [(g∥µBµ0Hres)² − (hν)²]^(1/2) for mode B5, derived from Eqs. (3) and (4), respectively.
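The gap-extraction and power-law-fit procedure described above can be sketched as follows. The data below are synthetic: Hres(T) is generated from an assumed exponent b = 0.29, TN = 78 K and illustrative gap and frequency values (not the measured set), and the fit is shown to recover the exponent.

```python
# Extract Δ(T) from the mode-B5 relation Δ = [(g∥·μB·μ0·Hres)² − (hν)²]^(1/2)
# and fit the power law Δ(T) ∝ [1 − T/TN]^b by linear regression in log space.
import numpy as np

h, mu_B = 6.62607e-34, 9.27401e-24
g_par, nu = 1.99, 147e9            # g-factor and excitation frequency (Hz)
TN, b_true = 78.0, 0.29            # assumed Néel temperature and exponent

gap3K = h * 90e9                   # illustrative gap amplitude (Δ/h = 90 GHz)
T = np.linspace(5, 70, 14)
gap = gap3K * (1 - T / TN) ** b_true

# Synthetic resonance fields from the inverted mode-B5 relation (μ0·H in T):
Hres = np.sqrt((h * nu) ** 2 + gap ** 2) / (g_par * mu_B)

# Re-extract Δ(T) from Hres(T), as done for the experimental data:
gap_ext = np.sqrt((g_par * mu_B * Hres) ** 2 - (h * nu) ** 2)
b_fit = np.polyfit(np.log(1 - T / TN), np.log(gap_ext), 1)[0]
print(f"b = {b_fit:.2f}")
```

With real data the regression would be restricted to the temperature window where the power law holds, since deviations appear at lower T.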
The result of this analysis is shown in Fig. 8. The ∆(T) dependence can be well fitted to the power law ∆(T) ∝ [1 − (T/TN)]^b in a broad temperature range below TN, with some deviations from it at lower T. The exponents b indicated in this figure appear to be very different for modes B4 and B5. Notably, the resonance field of mode B4 is always smaller than the spin-flop field, Hres^B4|88 GHz < Hres^B4|147 GHz < Hsf, whereas mode B5 occurs at larger fields, with Hsf < Hres^B5|147 GHz < Hres^B5|329 GHz [Fig. 8 (inset)]. This suggests a significant difference in the temperature dependence of the excitation gap in the collinear and spin-flop AFM ordered phases of Mn2P2S6.
Usually, the magnetic anisotropy gap ∆(T) observed in quasi-2D antiferromagnets scales with the sublattice magnetization Msl(T) [49–51], so that the exponent b of the temperature dependence of ∆ can be treated as a critical exponent β of the AFM order parameter Msl. If that were the case for Mn2P2S6, the value of b in the collinear phase would indicate mean-field behavior of Msl(T), for which β = 0.5 (Fig. 8). In contrast, the strong reduction of b in the spin-flop phase, as seen in Fig. 8, would correspond to the critical behavior of Msl(T) in the 2D XY model, for which β = 0.231 [52]. However, measurements of the temperature dependence of Msl by elastic and of ∆ by inelastic neutron scattering in zero magnetic field reveal a more complex scaling between these two parameters, with b ≈ 3β/2 and β = 0.32 in the vicinity of TN, and b ≈ β with β = 0.25 at lower temperatures [53–55]. This finding was tentatively ascribed to the different temperature dependences of the competing single-ion and dipolar anisotropies, which are both responsible for a finite value of ∆ in the AFM ordered state of Mn2P2S6 [53]. The theoretical analysis in Ref. [42] shows that, besides the dipolar anisotropy responsible for the out-of-plane order of the Mn spins, there is a competing, presumably single-ion, anisotropy turning the spins into the ab plane. As argued in Ref. [54], the presence of the latter contribution gives rise to the 2D XY critical behavior.
1084 |
+
a characteristics of a 3D antiferromagnet, as it follows
|
1085 |
+
from the theories of AFM resonance [56–58] and was con-
|
1086 |
+
firmed experimentally (see, e.g., [59, 60]). Thus, a field-
|
1087 |
+
dependent change of b indicates a kind of field-driven di-
|
1088 |
+
mensional crossover of the spin wave excitations at inter-
|
1089 |
+
mediate temperatures below TN while ramping the mag-
|
1090 |
+
netic field across the spin-flop transition. Magnetic fields
|
1091 |
+
H > Hsf push the spins into the plane, boosting the
|
1092 |
+
effective XY anisotropy, which changes the character of
|
1093 |
+
spin wave excitations observed by ESR towards the 2D
|
1094 |
+
XY scaling regime.
|
V. CONCLUSION
In summary, we have performed a detailed ESR spectroscopic study of single-crystalline samples of the van der Waals compounds Mn2P2S6 and MnNiP2S6. The measurements were carried out in a broad range of excitation frequencies and temperatures, and at different orientations of the magnetic field with respect to the sample. Our study suggests a strong sensitivity of the type of magnetic order and anisotropy below TN, as well as of the g-factor and its anisotropy above TN, to the Ni concentration. The stronger deviation of the g-factor from the free-electron value in the Ni-containing samples suggests that the anisotropy of the exchange can be an important contributor to the stabilization of a certain type of magnetic order with a particular anisotropy. Analysis of the spin excitations at T ≪ TN has shown that both Mn2P2S6 and MnNiP2S6 are strongly anisotropic. In fact, increasing the Ni content yields a larger magnon gap in the ordered state (T ≪ TN). In the Mn2P2S6 compound we could fully describe the magnetic excitations using a two-sublattice AFM Hamiltonian, which yielded an estimate of the uniaxial anisotropy energy, the anisotropy energy within the ab-plane, and the average exchange interaction ΘMn2P2S6 ≈ 350 K. On the contrary, in the MnNiP2S6 compound the ground state and the excitations appear too complex to be described using a two-sublattice AFM model. This could be due to the stochastic mixing of two magnetically inequivalent ions, Mn and Ni, on the 4g Wyckoff crystallographic sites. However, the analysis of the magnetization measured at low T suggests that the exchange coupling in this compound should be comparable to or stronger than that in Mn2P2S6.

We have analyzed the spin-spin correlations resulting in the development of slowly fluctuating short-range order, which, in quasi-2D spin systems, manifests in the ESR line broadening and shift at T > TN. The line broadening and shift are much more pronounced in MnNiP2S6 compared to Mn2P2S6, suggesting that the critical broadening and the shift of the ESR line in MnNiP2S6 could be due to enhanced spin fluctuations at elevated temperatures caused by the competition of different types of magnetic order. Moreover, these strong spin fluctuations in the mixed Mn/Ni compounds could additionally lower the ordering temperature.

Finally, the analysis of the temperature dependence of the spin excitation gap in Mn2P2S6 at different applied fields suggests a kind of field-driven dimensional crossover of the spin-wave excitations at intermediate temperatures below TN. Strong magnetic fields push the spins into the plane, boosting the effective XY anisotropy, which changes the character of the spin-wave excitations observed by ESR from a 3D-like towards the 2D XY scaling regime.
1151 |
+
ACKNOWLEDGMENTS
J.J.A. acknowledges valuable discussions with Kranthi Kumar Bestha. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through Grants No. KA 1694/12-1, AL 1771/8-1, and AS 523/4-1, within the Collaborative Research Center SFB 1143 "Correlated Magnetism – From Frustration to Topology" (project-id 247310070) and the Dresden-Würzburg Cluster of Excellence (EXC 2147) "ct.qmat - Complexity and Topology in Quantum Matter" (project-id 390858490), as well as by the UKRATOP project (funded by the BMBF with Grant No. 01DK18002).
Appendix
[Figure 9: stacked HF-ESR spectra (SD, arb. u.) as a function of magnetic field (T) at temperatures from 3 K to 300 K; panels (a) Mn2P2S6 at ν = 326 GHz, (b) Mn2P2S6 at ν = 88 GHz (impurity peaks marked with asterisks), (c) Mn2P2S6 at ν = 329 GHz, and (d) MnNiP2S6 at ν = 326 GHz.]
FIG. 9. Temperature dependence of the HF-ESR spectra of (a) Mn2P2S6 at the excitation frequency ν ≈ 326 GHz in the H ∥ c* configuration, and (b) Mn2P2S6 at ν ≈ 88 GHz for H ∥ c*. The temperature-independent peaks from the impurity in the probehead, occurring only at low frequencies, are marked with asterisks. (c) Mn2P2S6 at ν ≈ 329 GHz for H ⊥ c* and (d) MnNiP2S6 at ν ≈ 326 GHz for H ⊥ c*. Spectra are normalized and vertically shifted for clarity.
[Figure 10: (a) molar susceptibility χm (10−2 emu/mol Oe) vs. temperature (0–300 K) for H ∥ c* and H ⊥ c* at H = 100 mT, with transitions marked at 56.7 K and 75.5 K; inset: magnetization M (µB/f.u.) vs. H (0–6 T) at T = 1.8 K. (b) Resonance field (2.4–3.4 T) vs. in-plane angle θ (0–180°) for MnNiP2S6 at T = 3 K and ν = 226 GHz.]
FIG. 10. (a) Molar susceptibility at an applied field of 1000 Oe as a function of temperature, measured on the MnNiP2S6 sample that was used for the ESR investigations. The gray broken lines mark the magnetic phase transition temperature in both configurations. Inset: isothermal magnetization per formula unit as a function of applied field, measured at 1.8 K for MnNiP2S6 and depicting the almost isotropic field dependence of the magnetic response. (b) In-plane angular dependence of the resonance field at T = 3 K and ν = 226 GHz for MnNiP2S6, showing no systematic angular dependence within the average error bar of 0.16 T. The large linewidths ∆H of the peaks in the ESR spectra are accounted for in the enlarged error bars.
[1] K. S. Burch, D. Mandrus, and J.-G. Park, Magnetism in two-dimensional van der Waals materials, Nature 563, 47 (2018).
[2] B. Huang, G. Clark, E. Navarro-Moratalla, D. R. Klein, R. Cheng, K. L. Seyler, D. Zhong, E. Schmidgall, M. A. McGuire, D. H. Cobden, W. Yao, D. Xiao, P. Jarillo-Herrero, and X. Xu, Layer-dependent ferromagnetism in a van der Waals crystal down to the monolayer limit, Nature 546, 270 (2017).
[3] X. Cai, T. Song, N. P. Wilson, G. Clark, M. He, X. Zhang, T. Taniguchi, K. Watanabe, W. Yao, D. Xiao, M. A. McGuire, D. H. Cobden, and X. Xu, Atomically Thin CrCl3: An In-Plane Layered Antiferromagnetic Insulator, Nano Letters 19, 3993 (2019).
[4] Y. Khan, S. M. Obaidulla, M. R. Habib, A. Gayen, T. Liang, X. Wang, and M. Xu, Recent breakthroughs in two-dimensional van der Waals magnetic materials and emerging applications, Nano Today 34, 100902 (2020).
[5] H. Li, S. Ruan, and Y.-J. Zeng, Intrinsic Van Der Waals Magnetic Materials from Bulk to the 2D Limit: New Frontiers of Spintronics, Advanced Materials 31, 1900065 (2019).
[6] G. Scheunert, O. Heinonen, R. Hardeman, A. Lapicki, M. Gubbins, and R. M. Bowman, A review of high magnetic moment thin films for microscale and nanotechnology applications, Applied Physics Reviews 3, 011301 (2016).
[7] V. O. Jimenez, V. Kalappattil, T. Eggers, M. Bonilla, S. Kolekar, P. T. Huy, M. Batzill, and M.-H. Phan, A magnetic sensor using a 2D van der Waals ferromagnetic material, Scientific Reports 10, 4789 (2020).
[8] F. Wang, T. A. Shifa, P. Yu, P. He, Y. Liu, F. Wang, Z. Wang, X. Zhan, X. Lou, F. Xia, and J. He, New Frontiers on van der Waals Layered Metal Phosphorous Trichalcogenides, Advanced Functional Materials 28, 1802151 (2018).
[9] Y. Wang, J. Ying, Z. Zhou, J. Sun, T. Wen, Y. Zhou, N. Li, Q. Zhang, F. Han, Y. Xiao, P. Chow, W. Yang, V. V. Struzhkin, Y. Zhao, and H.-k. Mao, Emergent superconductivity in an iron-based honeycomb lattice initiated by pressure-driven spin-crossover, Nature Communications 9, 1914 (2018).
[10] R. N. Jenjeti, R. Kumar, M. P. Austeria, and S. Sampath, Field Effect Transistor Based on Layered NiPS3, Scientific Reports 8, 8586 (2018).
[11] J. Chu, F. Wang, L. Yin, L. Lei, C. Yan, F. Wang, Y. Wen, Z. Wang, C. Jiang, L. Feng, J. Xiong, Y. Li, and J. He, High-Performance Ultraviolet Photodetector Based on a Few-Layered 2D NiPS3 Nanosheet, Advanced Functional Materials 27, 1701342 (2017).
[12] P. A. Joy and S. Vasudevan, The intercalation reaction of pyridine with manganese thiophosphate, MnPS3, Journal of the American Chemical Society 114, 7792 (1992).
[13] W. Zhu, W. Gan, Z. Muhammad, C. Wang, C. Wu, H. Liu, D. Liu, K. Zhang, Q. He, H. Jiang, X. Zheng, Z. Sun, S. Chen, and L. Song, Exfoliation of ultrathin FePS3 layers as a promising electrocatalyst for the oxygen evolution reaction, Chem. Commun. 54, 4481 (2018).
[14] K. Okuda, K. Kurosawa, S. Saito, M. Honda, Z. Yu, and M. Date, Magnetic Properties of Layered Compound MnPS3, Journal of the Physical Society of Japan 55, 4456 (1986).
[15] P. A. Joy and S. Vasudevan, Magnetism and spin dynamics in MnPS3 and pyridine intercalated MnPS3: An electron paramagnetic resonance study, The Journal of Chemical Physics 99, 4411 (1993).
[16] E. Lifshitz and A. H. Francis, Analysis of the ESR spectrum of manganese(II) impurity centers in the layered compound cadmium phosphide sulfide (CdPS3), The Journal of Physical Chemistry 86, 4714 (1982).
[17] S. Sibley, A. Francis, E. Lifshitz, and R. Clément, Magnetic resonance studies of intercalated, two-dimensional transition metal chalcogenophosphate lattices, Colloids and Surfaces A: Physicochemical and Engineering Aspects 82, 205 (1994).
[18] M. I. Kobets, K. G. Dergachev, S. L. Gnatchenko, E. N. Khats'ko, Y. M. Vysochanskii, and M. I. Gurzan, Antiferromagnetic resonance in Mn2P2S6, Low Temperature Physics 35, 930 (2009).
[19] J. Zeisner, A. Alfonsov, S. Selter, S. Aswartham, M. P. Ghimire, M. Richter, J. van den Brink, B. Büchner, and V. Kataev, Magnetic anisotropy and spin-polarized two-dimensional electron gas in the van der Waals ferromagnet Cr2Ge2Te6, Phys. Rev. B 99, 165109 (2019).
[20] C. Wellm, J. Zeisner, A. Alfonsov, A. U. B. Wolter, M. Roslova, A. Isaeva, T. Doert, M. Vojta, B. Büchner, and V. Kataev, Signatures of low-energy fractionalized excitations in α-RuCl3 from field-dependent microwave absorption, Phys. Rev. B 98, 184408 (2018).
[21] J. Zeisner, K. Mehlawat, A. Alfonsov, M. Roslova, T. Doert, A. Isaeva, B. Büchner, and V. Kataev, Electron spin resonance and ferromagnetic resonance spectroscopy in the high-field phase of the van der Waals magnet CrCl3, Phys. Rev. Materials 4, 064406 (2020).
[22] L. Alahmed, B. Nepal, J. Macy, W. Zheng, B. Casas, A. Sapkota, N. Jones, A. R. Mazza, M. Brahlek, W. Jin, M. Mahjouri-Samani, S. S.-L. Zhang, C. Mewes, L. Balicas, T. Mewes, and P. Li, Magnetism and spin dynamics in room-temperature van der Waals magnet Fe5GeTe2, 2D Materials 8, 045030 (2021).
[23] X. Shen, H. Chen, Y. Li, H. Xia, F. Zeng, J. Xu, H. Y. Kwon, Y. Ji, C. Won, W. Zhang, and Y. Wu, Multidomain ferromagnetic resonance in magnetic van der Waals crystals CrI3 and CrBr3, Journal of Magnetism and Magnetic Materials 528, 167772 (2021).
[24] K. Mehlawat, A. Alfonsov, S. Selter, Y. Shemerliuk, S. Aswartham, B. Büchner, and V. Kataev, Low-energy excitations and magnetic anisotropy of the layered van der Waals antiferromagnet Ni2P2S6, Phys. Rev. B 105, 214427 (2022).
[25] Y. Senyk, J. J. Abraham, Y. Shemerliuk, S. Selter, S. Aswartham, B. Büchner, V. Kataev, and A. Alfonsov, Evolution of the spin dynamics in the van der Waals system M2P2S6 (M2 = Mn2, MnNi, Ni2) series probed by electron spin resonance spectroscopy, arXiv:2211.00521 (2022).
[26] S. Selter, Y. Shemerliuk, M.-I. Sturza, A. U. B. Wolter, B. Büchner, and S. Aswartham, Crystal growth and anisotropic magnetic properties of quasi-two-dimensional (Fe1−xNix)2P2S6, Phys. Rev. Materials 5, 073401 (2021).
[27] Y. Shemerliuk, Y. Zhou, Z. Yang, G. Cao, A. U. B. Wolter, B. Büchner, and S. Aswartham, Tuning magnetic and transport properties in quasi-2D (Mn1−xNix)2P2S6 single crystals, Electronic Materials 2, 284 (2021).
[28] D. G. Chica, A. K. Iyer, M. Cheng, K. M. Ryan, P. Krantz, C. Laing, R. Dos Reis, V. Chandrasekhar, V. P. Dravid, and M. G. Kanatzidis, P2S5 Reactive Flux Method for the Rapid Synthesis of Mono- and Bimetallic 2D Thiophosphates M2−xM'xP2S6, Inorg. Chem. 60, 3502 (2021).
[29] W. Klingen, G. Eulenberger, and H. Hahn, Über Hexathio- und Hexaselenohypodiphosphate vom Typ MII2P2X6, Die Naturwissenschaften 55, 229 (1968).
[30] R. Basnet, A. Wegner, K. Pandey, S. Storment, and J. Hu, Highly sensitive spin-flop transition in antiferromagnetic van der Waals material MPS3 (M = Ni and Mn), Phys. Rev. Materials 5, 064413 (2021).
[31] Y. Xiao-Bing, C. Xing-Guo, and Q. Jin-Gui, Synthesis and Magnetic Properties of New Layered NixMn1−xPS3 and Their Intercalation Compounds, Acta Chimica Sinica 69, 1017 (2011).
[32] Z. Lu, X. Yang, L. Huang, X. Chen, M. Liu, J. Peng, S. Dong, and J.-M. Liu, Evolution of magnetic phase in two-dimensional van der Waals Mn1−xNixPS3 single crystals, Journal of Physics: Condensed Matter 34, 354005 (2022).
[33] The splitting of the signal was mostly observed at high microwave frequencies, at which the wavelength becomes comparable to the size of the measured sample flake. In such a regime one could expect some instrumental effects distorting the line shape of the ESR signal.
[34] A. Abragam and B. Bleaney, Electron paramagnetic resonance of transition ions (Oxford University Press, Oxford, 2012).
[35] As suggested in Ref. [42], the more energetically favorable in-plane direction is b. In the case of an antiferromagnet, a given magnetic field applied along the b axis yields a lower magnetization value, and, therefore, the internal field in such a configuration is smaller compared to H ∥ a*. This further implies that a larger external field is required to reach the resonance condition in the H ∥ b* configuration. In the experiment, the magnetic field is applied at various angles by rotating the crystal in the H ⊥ c* configuration. The orientation which yields the maximum Hres is therefore H ∥ b*. The frequency dependence was measured in that orientation to record mode B2. Similarly, branch B1 was recorded when Hres had the minimum value (see Fig. 6).
[36] E. A. Turov, Physical Properties of Magnetically Ordered Crystals, edited by A. Tybulewicz and S. Chomet (Academic Press, New York, 1965).
[37] T. Nagamiya, K. Yosida, and R. Kubo, Antiferromagnetism, Advances in Physics 4, 1 (1955).
[38] Considering the two-sublattice AFM model, M1 and M2 are the sublattice magnetization vectors coupled antiparallel. When the field is applied along the easy axis, these vectors start precessing around H clockwise or anticlockwise, giving rise to two low-field branches, B3 and B4. At the spin-flop field, the antiferromagnetic phase is no longer stable and the spins flop to the b direction in the ab plane, making an equal angle with the field. The mutual precession of the flopped M1 and M2 vectors gives rise to the third branch B5, which increases with field [61].
[39] T. Holstein and H. Primakoff, Field Dependence of the Intrinsic Domain Magnetization of a Ferromagnet, Phys. Rev. 58, 1098 (1940).
[40] A. Alfonsov, K. Mehlawat, A. Zeugner, A. Isaeva, B. Büchner, and V. Kataev, Magnetic-field tuning of the spin dynamics in the magnetic topological insulators (MnBi2Te4)(Bi2Te3)n, Phys. Rev. B 104, 195139 (2021).
[41] H. Benner and J. P. Boucher, Spin Dynamics in the Paramagnetic Regime: NMR and EPR in Two-Dimensional Magnets, in Magnetic Properties of Layered Transition Metal Compounds, edited by L. J. de Jongh (Springer Netherlands, Dordrecht, 1990) pp. 323–378.
[42] D. J. Goossens, Dipolar anisotropy in quasi-2D honeycomb antiferromagnet MnPS3, The European Physical Journal B 78, 305 (2010).
[43] A. R. Wildes, J. R. Stewart, M. D. Le, R. A. Ewings, K. C. Rule, G. Deng, and K. Anand, Magnetic dynamics of NiPS3, Phys. Rev. B 106, 174422 (2022).
[44] D. Lançon, R. A. Ewings, T. Guidi, F. Formisano, and A. R. Wildes, Magnetic exchange parameters and anisotropy of the quasi-two-dimensional antiferromagnet NiPS3, Phys. Rev. B 98, 134414 (2018).
[45] K. Kim, S. Y. Lim, J.-U. Lee, S. Lee, T. Y. Kim, K. Park, G. S. Jeon, C.-H. Park, J.-G. Park, and H. Cheong, Suppression of magnetic ordering in XXZ-type antiferromagnetic monolayer NiPS3, Nature Communications 10, 345 (2019).
[46] R. Kubo and K. Tomita, A General Theory of Magnetic Resonance Absorption, Journal of the Physical Society of Japan 9, 888 (1954).
[47] V. Kataev, K.-Y. Choi, M. Grüninger, U. Ammerahl, B. Büchner, A. Freimuth, and A. Revcolevschi, Strong Anisotropy of Superexchange in the Copper-Oxygen Chains of La14−xCaxCu24O41, Phys. Rev. Lett. 86, 2882 (2001).
[48] T. Moriya, Anisotropic Superexchange Interaction and Weak Ferromagnetism, Phys. Rev. 120, 91 (1960).
[49] K. Nagata and Y. Tomono, Antiferromagnetic Resonance Frequency in Quadratic Layer Antiferromagnets, Journal of the Physical Society of Japan 36, 78 (1974).
[50] C. M. J. van Uijen and H. W. de Wijn, Dipolar anisotropy in quadratic-layer antiferromagnets, Phys. Rev. B 30, 5265 (1984).
[51] A. F. M. Arts and H. W. de Wijn, Spin Waves in Two-Dimensional Magnetic Systems: Theory And Applications, in Magnetic Properties of Layered Transition Metal Compounds, edited by L. J. de Jongh (Springer Netherlands, Dordrecht, 1990) pp. 191–229.
[52] S. T. Bramwell and P. C. W. Holdsworth, Magnetization and universal sub-critical behaviour in two-dimensional XY magnets, Journal of Physics: Condensed Matter 5, L53 (1993).
[53] A. R. Wildes, B. Roessli, B. Lebech, and K. W. Godfrey, Spin waves and the critical behaviour of the magnetization in MnPS3, J. Phys.: Condensed Matter 10, 6417 (1998).
[54] A. R. Wildes, H. M. Rønnow, B. Roessli, M. J. Harris, and K. W. Godfrey, Static and dynamic critical properties of the quasi-two-dimensional antiferromagnet MnPS3, Phys. Rev. B 74, 094422 (2006).
[55] A. Wildes, H. Rønnow, B. Roessli, M. Harris, and K. Godfrey, Anisotropy and the critical behaviour of the quasi-2D antiferromagnet, MnPS3, J. Magn. Magn. Mater. 310, 1221 (2007).
[56] T. Nagamiya, Theory of Antiferromagnetism and Antiferromagnetic Resonance Absorption, II, Prog. Theor. Phys. 6, 350 (1951).
[57] F. Keffer and C. Kittel, Theory of Antiferromagnetic Resonance, Phys. Rev. 85, 329 (1952).
[58] J. Kanamori and M. Tachiki, Collective Motion of Spins in Ferro- and Antiferromagnets, J. Phys. Soc. Jpn. 17, 1384 (1962).
[59] F. M. Johnson and A. H. Nethercot, Antiferromagnetic Resonance in MnF2, Phys. Rev. 114, 705 (1959).
[60] P. L. Richards, Far-Infrared Magnetic Resonance in NiF2, Phys. Rev. 138, A1769 (1965).
[61] S. M. Rezende, A. Azevedo, and R. L. Rodríguez-Suárez, Introduction to antiferromagnetic magnons, Journal of Applied Physics 126, 151101 (2019).
arXiv:2301.04465v1 [cs.CV] 11 Jan 2023
CO-TRAINING WITH HIGH-CONFIDENCE PSEUDO LABELS FOR SEMI-SUPERVISED MEDICAL IMAGE SEGMENTATION
Zhiqiang Shen1,2, Peng Cao1,2∗, Hua Yang3, Xiaoli Liu4, Jinzhu Yang1,2, Osmar R. Zaiane5
1College of Computer Science and Engineering, Northeastern University, Shenyang, China
2Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
3College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou, China
4DAMO Academy, Alibaba Group, China
5Alberta Machine Intelligence Institute, University of Alberta, Edmonton, Alberta, Canada
ABSTRACT
High-quality pseudo labels are essential for semi-supervised semantic segmentation. Consistency regularization and pseudo labeling-based semi-supervised methods perform co-training using the pseudo labels from multi-view inputs. However, such co-training models tend to converge early to a consensus during training, so that the models degenerate to self-training ones. Besides, the multi-view inputs are generated by perturbing or augmenting the original images, which inevitably introduces noise into the input, leading to low-confidence pseudo labels. To address these issues, we propose an Uncertainty-guided Collaborative Mean-Teacher (UCMT) for semi-supervised semantic segmentation with high-confidence pseudo labels. Concretely, UCMT consists of two main components: 1) collaborative mean-teacher (CMT) for encouraging model disagreement and performing co-training between the sub-networks, and 2) uncertainty-guided region mix (UMIX) for manipulating the input images according to the uncertainty maps of CMT and facilitating CMT to produce high-confidence pseudo labels. Combining the strengths of UMIX with CMT, UCMT can retain model disagreement and enhance the quality of pseudo labels for the co-training segmentation. Extensive experiments on four public medical image datasets including 2D and 3D modalities demonstrate the superiority of UCMT over the state-of-the-art. Code is available at: https://github.com/Senyh/UCMT.
1 Introduction
Semantic segmentation is critical for medical image analysis. Great progress has been made by deep learning-based segmentation models relying on a large amount of labeled data [1, 2]. However, labeling such pixel-level annotations is laborious and requires expert knowledge, especially for medical images, so that labeled data are expensive or simply unavailable. Unlabeled data, on the contrary, are cheap and relatively easy to obtain. Under this condition, semi-supervised learning (SSL) has become the dominant data-efficient strategy, exploiting information from a limited amount of labeled data and an arbitrary amount of unlabeled data so as to alleviate the label scarcity problem [3].
42 |
+
Consistency regularization [4] and pseudo labeling [6] are the two main methods for semi-supervised semantic seg-
|
43 |
+
mentation. Currently, combining consistency regularization and pseudo labeling via cross supervision between the
|
44 |
+
sub-networks, has shown promising performance for semi-supervised segmentation [6, 7, 8, 5, 9]. One critical limita-
|
45 |
+
tion of these approaches is that the sub-networks tend to converge early to a consensus situation causing the co-training
|
46 |
+
model degenerating to the self-training [10]. Disagreement between the sub-networks is crucial for co-training, where
|
47 |
+
the sub-networks initialized with different parameters or trained with different views have different biases (i.e., dis-
|
48 |
+
agreement) ensuring that the information they provide is complementary to each other. Another key factor affecting
|
49 |
+
∗corresponding author
|
50 |
+
|
51 |
+
UCMT
|
52 |
+
(d) Co-training disagreement
|
53 |
+
(e) Pseudo labels uncertainty
|
54 |
+
�
|
55 |
+
� ��
|
56 |
+
� �
|
57 |
+
��
|
58 |
+
�
|
59 |
+
(a) MT
|
60 |
+
�
|
61 |
+
� ��
|
62 |
+
� ��
|
63 |
+
��
|
64 |
+
��
|
65 |
+
(b) CPS
|
66 |
+
� �
|
67 |
+
�
|
68 |
+
�
|
69 |
+
� ��
|
70 |
+
� ��
|
71 |
+
��
|
72 |
+
��
|
73 |
+
UMIX
|
74 |
+
(c) UCMT
|
75 |
+
(f) Semi-supervised Segmentation
|
76 |
+
EMA
|
77 |
+
EMA
|
78 |
+
EMA
|
79 |
+
Figure 1: Illustration of the architectures and curves for co-training based semi-supervised semantic segmentation. (a) Mean-teacher [4], (b) cross pseudo supervision [5], (c) uncertainty-guided collaborative mean-teacher, (d) the disagreement between the pseudo labels in terms of the dice loss of two branches (Y^1 and Y in MT; Y^1 and Y^2 in CPS; Y^1 and Y^2 in UCMT) from the co-training sub-networks (w.r.t. number of iterations), (e) the uncertainty variation of the pseudo labels in terms of entropy w.r.t. number of iterations, and (f) the performance of MT, CPS, and UCMT on semi-supervised skin lesion segmentation under different proportions of labeled data.
the performance of these approaches is the quality of pseudo labels. More importantly, these two factors influence each other. Intuitively, high-quality pseudo labels should have low uncertainty [11]. However, increasing the degree of disagreement between the co-training sub-networks through different perturbations or augmentations could drive their training in opposite directions, thus increasing the uncertainty of the pseudo labels. To investigate the effect of the disagreement and the quality of pseudo labels for co-training based semi-supervised segmentation, which has not been studied in the literature, we conduct a pilot experiment to illustrate these correlations. As shown in Figure 1, compared with mean-teacher (MT) [4] [Figure 1 (a)], cross pseudo supervision (CPS) [5] [Figure 1 (b)], with higher model disagreement [Figure 1 (d)] and lower uncertainty [Figure 1 (e)], produces higher performance [Figure 1 (f)] on semi-supervised segmentation. Note that the dice loss between the two branches is calculated to measure the disagreement. The question that comes to mind is: how can we effectively improve the disagreement between the co-training sub-networks and the quality of the pseudo labels jointly in a unified network for SSL?
In this paper, we focus on two major goals: maintaining model disagreement and high-confidence pseudo labels at the same time. To this end, we propose the Uncertainty-guided Collaborative Mean Teacher (UCMT) framework, which is capable of retaining higher disagreement between the co-training segmentation sub-networks [Figure 1 (d)] based on higher-confidence pseudo labels [Figure 1 (e)], thus achieving better semi-supervised segmentation performance under the same backbone network and task settings [Figure 1 (f)]. Specifically, UCMT involves two major components: 1) collaborative mean-teacher (CMT), and 2) uncertainty-guided region mix (UMIX), where UMIX operates on the input images according to the uncertainty maps of CMT, while CMT performs co-training under the supervision of the pseudo labels derived from the UMIX images. Inspired by co-teaching [12, 10, 5] for struggling against early convergence to a consensus and degradation into self-training, we introduce a third component, the teacher model, into the co-training framework as a regularizer to construct CMT for more effective SSL. The teacher model acts as a self-ensemble by averaging the student models, serving as a third party to guide the training of the two student models. Further, we develop UMIX to construct high-confidence pseudo labels and perform regional dropout for learning robust semi-supervised semantic segmentation models. Instead of random region erasing or swapping [13, 14], UMIX manipulates the original image and its corresponding pseudo labels according to the epistemic uncertainty of the segmentation models, which not only reduces the uncertainty of the pseudo labels but also enlarges the training data distribution. Finally, by combining the strengths of UMIX with CMT, the proposed approach UCMT significantly improves the state-of-the-art (SOTA) results in semi-supervised segmentation on multiple benchmark datasets. For example, UCMT and UCMT(U-Net) achieve 88.22% and 82.14% Dice Similarity Coefficient (DSC) on the ISIC dataset under 5% labeled data, outperforming our baseline model CPS [5] and the state-of-the-art UGCL [15] by 1.41% and 9.47%, respectively.
In a nutshell, our contributions mainly include:
• We pinpoint a key problem in existing co-training based semi-supervised segmentation methods: insufficient disagreement among the sub-networks and low-confidence pseudo labels. To address this problem, we design an uncertainty-guided collaborative mean-teacher to maintain co-training with high-confidence pseudo labels, where we incorporate CMT and UMIX into a holistic framework for semi-supervised medical image segmentation.
• To avoid introducing noise into the new samples, we propose an uncertainty-guided regional mix algorithm, UMIX, which encourages the segmentation model to yield high-confidence pseudo labels and enlarges the training data distribution.
• We conduct extensive experiments on four public medical image segmentation datasets, covering 2D and 3D scenarios, to investigate the effectiveness of our method. Comprehensive results demonstrate the effectiveness of each component of our method and the superiority of UCMT over the state-of-the-art.
2 Related work

2.1 Semi-supervised learning
Semi-supervised learning aims to improve performance in supervised learning by utilizing information generally associated with unsupervised learning, and vice versa [3]. A common form of SSL introduces a regularization term into the objective function of supervised learning to leverage unlabeled data. From this perspective, SSL-based methods can be divided into two main lines, i.e., pseudo labeling and consistency regularization. Pseudo labeling attempts to generate pseudo labels similar to the ground truth, on which models are trained as in supervised learning [6]. Consistency regularization enforces the model's outputs to be consistent for inputs under different perturbations [4]. Current state-of-the-art approaches have incorporated these two strategies and shown superior performance for semi-supervised image classification [16, 17]. Based on this line of research, we explore more effective consistency learning algorithms for semi-supervised semantic segmentation.
2.2 Semi-supervised semantic segmentation
Compared with image classification, semantic segmentation requires much more intensive and costly pixel-level annotation. Semi-supervised semantic segmentation inherits the main ideas of semi-supervised image classification. The combination of consistency regularization and pseudo labeling, mainly conducting cross supervision between sub-networks using pseudo labels, has become the mainstream strategy for semi-supervised semantic segmentation in both natural images [7, 5] and medical images [18, 19, 20, 21]. Specifically, these combined approaches enforce the consistency of the predictions under different perturbations, such as input perturbations [22, 23], feature perturbations [7], and network perturbations [4, 5, 20, 21]. In addition, adversarial learning-based methods, which align the distribution of model predictions on labeled data with those on unlabeled data, can also be regarded as a special form of consistency regularization [24, 25]. However, such cross supervision models may converge early to a consensus, thus degenerating into self-training. We hypothesize that enlarging the disagreement between the co-training models based on high-confidence pseudo labels can improve the performance of SSL. Therefore, we propose a novel SSL framework, i.e., UCMT, to generate more accurate pseudo labels and maintain co-training for semi-supervised medical image segmentation.
2.3 Uncertainty-guided semi-supervised semantic segmentation
Model uncertainty (epistemic uncertainty) can guide SSL models to capture information from the pseudo labels. Two critical problems in leveraging model uncertainty are how to obtain it and how to exploit it. Recently, two main strategies have been used to estimate model uncertainty: 1) using Monte Carlo dropout [26], and 2) calculating the variance among different predictions [27]. For semi-supervised semantic segmentation, previous works exploit model uncertainty to re-weight the training loss [18] or to select contrastive samples [15]. However, these methods require manually setting a threshold to discard low-confidence pseudo labels, and the fixed threshold is hard to determine. In this paper, we obtain the epistemic uncertainty via the entropy of the predictions of CMT for the same input and exploit the uncertainty to guide the region mix for gradually exploring information from the unlabeled data.
3 Methodology
Figure 2: Overview of the proposed UCMT. CMT includes three sub-networks, i.e., the teacher sub-network f(·; θ) and the two student sub-networks f(·; θ1) and f(·; θ2). UMIX constructs each new sample X′ by replacing the top k most uncertain regions (red grids in V^1 and V^2) with the top k most certain regions (green grids in V^2 and V^1) in the original image X. Note that the three sub-networks learn collaboratively during the training stage, while only the teacher model is needed in the testing stage.
3.1 Problem definition
Before introducing our method, we first define the semi-supervised segmentation problem with some notations used in this work. The training set D = {DL, DU} contains a labeled set DL = {(Xi, Yi)}_{i=1}^{N} and an unlabeled set DU = {Xj}_{j=N+1}^{M}, where Xi/Xj denotes the i-th/j-th labeled/unlabeled image, Yi is the ground truth of the labeled image, and N and M − N are the numbers of labeled and unlabeled samples, respectively. Given the training data D, the goal of semi-supervised semantic segmentation is to learn a model f(·; θ) that performs well on unseen test sets.
3.2 Overview
To avoid the co-training degrading into self-training, we propose to encourage model disagreement during training while ensuring pseudo labels with low uncertainty. With this motivation, we propose the uncertainty-guided collaborative mean-teacher for semi-supervised image segmentation, which includes 1) collaborative mean-teacher, and 2) uncertainty-guided region mix. As shown in Figure 1 (d), CMT and UCMT gradually enlarge the disagreement between the co-training sub-networks. Meanwhile, CMT equipped with UMIX guarantees low uncertainty for the pseudo labels. Under these conditions, we can safely maintain the co-training status to improve the effectiveness of SSL in exploring unlabeled data. Details of CMT and UMIX are presented in Section 3.3 and Section 3.4, respectively. Figure 2 illustrates the schematic diagram of the proposed UCMT. Generally, there are two steps in the training phase of UCMT. In the first step, we train CMT using the original labeled and unlabeled data to obtain the uncertainty maps; then, we perform UMIX to generate new samples based on the uncertainty maps. In the second step, we re-train CMT using the UMIX samples. Details of the training process of UCMT are shown in Algorithm 1. Although UCMT includes three models, i.e., one teacher model and two student models, only the teacher model is required in the testing stage.
3.3 Collaborative mean-teacher
Current consistency learning-based SSL algorithms, e.g., mean-teacher [4] and CPS [5], suggest performing consistency regularization among the pseudo labels in a multi-model architecture rather than in a single model. However, during the training process, the two-network SSL framework may converge early to a consensus, and the co-training degenerates into self-training [10]. To tackle this issue, we design the collaborative mean teacher (CMT) framework by introducing an "arbitrator", i.e., the teacher model, into the co-training architecture [5] to guide the training of the
Algorithm 1 UCMT algorithm
Input: DL = {(Xi, Yi)}_{i=1}^{N}, DU = {Xj}_{j=N+1}^{M}
Parameter: θ, θ1, θ2
Output: f(·; θ)
1: for T ∈ [1, numepochs] do
2:   for each minibatch B do
3:     // i/j is the index for labeled/unlabeled data
4:     step 1: uncertainty estimation
5:     Ŷ^0_i ← f(Xi ∈ B; θ),  Ŷ^0_j ← f(Xj ∈ B; θ)
6:     Ŷ^1_i ← f(Xi ∈ B; θ1), Ŷ^1_j ← f(Xj ∈ B; θ1)
7:     Ŷ^2_i ← f(Xi ∈ B; θ2), Ŷ^2_j ← f(Xj ∈ B; θ2)
8:     L ← Ls(Ŷ^0_i, Ŷ^1_i, Ŷ^2_i, Yi) + λ(T) Lu(Ŷ^0_j, Ŷ^1_j, Ŷ^2_j)
9:     Update f(·; θ), f(·; θ1), f(·; θ2) using the optimizer
10:    U^1_i ← Uncertain(f(Xi ∈ B; θ1), f(Xi ∈ B; θ))
11:    U^2_i ← Uncertain(f(Xi ∈ B; θ2), f(Xi ∈ B; θ))
12:    U^1_j ← Uncertain(f(Xj ∈ B; θ1), f(Xj ∈ B; θ))
13:    U^2_j ← Uncertain(f(Xj ∈ B; θ2), f(Xj ∈ B; θ))
14:    step 2: training with UMIX
15:    X′_i/Y′_i ← UMIX(Xi/Yi, U^1_i, U^2_i; k, 1/r)
16:    X′_j/Ŷ′^0_j ← UMIX(Xj/Ŷ^0_j, U^1_j, U^2_j; k, 1/r)
17:    Repeat 3-7 using X′_i, X′_j, Y′_i, and Ŷ′^0_j
18:  end for
19: end for
20: return f(·; θ)
two student models. As shown in Figure 2, CMT consists of one teacher model and two student models, where the teacher model is the self-ensemble of the average of the student models. For labeled data, all of these models are optimized by supervised learning. For unlabeled data, there are two critical factors: 1) co-training between the two student models, and 2) direct supervision from the teacher to the student models. Formally, the data flow of CMT can be illustrated as²

        ր f(·; θ1) → Ŷ^1
X → f(·; θ) → Ŷ^0                                  (1)
        ց f(·; θ2) → Ŷ^2,

where X is an input image from the labeled or unlabeled data, Ŷ^0/Ŷ^1/Ŷ^2 is the predicted segmentation map, and f(·; θ)/f(·; θ1)/f(·; θ2) with parameters θ, θ1, and θ2 denote the teacher model and the two student models, respectively. These models share the same architecture but are initialized with different weights for network perturbations.
To explore both the labeled and unlabeled data, the total loss L for training UCMT involves two parts, i.e., the supervised loss Ls and the unsupervised loss Lu:

L = Ls + λLu,                                  (2)

where λ is a regularization parameter that balances the supervised and unsupervised losses. We adopt a Gaussian ramp-up function to gradually increase the coefficient, i.e., λ(t) = λm × exp[−5(1 − t/tm)²], where λm scales the maximum value of the weighting function, t denotes the current iteration, and tm is the maximum iteration in training.
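The ramp-up schedule above can be sketched as follows (a minimal illustration; the function name and the λm = 1.0 default are our own):

```python
import math

def gaussian_rampup(t: float, t_max: float, lambda_max: float = 1.0) -> float:
    """Gaussian ramp-up weight: lambda(t) = lambda_max * exp(-5 * (1 - t/t_max)^2).

    Starts near zero (lambda_max * exp(-5) at t = 0) and reaches lambda_max at t = t_max.
    """
    t = min(max(t, 0.0), t_max)  # clamp the iteration to [0, t_max]
    return lambda_max * math.exp(-5.0 * (1.0 - t / t_max) ** 2)
```

The weight increases monotonically, so the unsupervised term has little influence early in training, when pseudo labels are still unreliable.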
Supervised learning path. For the labeled data, the supervised loss is formulated as

Ls = (1/N) Σ_{i=1}^{N} [ Lseg(f(Xi; θ), Yi) + Lseg(f(Xi; θ1), Yi) + Lseg(f(Xi; θ2), Yi) ],          (3)

where Lseg can be any supervised semantic segmentation loss, such as the cross entropy loss or the dice loss. Note that we choose the dice loss in our experiments given its compelling performance in medical image segmentation.
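As a concrete reference for Lseg, a soft dice loss for binary masks can be written as below (a generic sketch, not the authors' exact implementation; the smoothing constant eps is our choice):

```python
import numpy as np

def dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft dice loss: 1 - 2|P ∩ Y| / (|P| + |Y|), with pred in [0, 1] and
    target in {0, 1}; eps avoids division by zero on empty masks."""
    inter = float((pred * target).sum())
    return 1.0 - (2.0 * inter + eps) / (float(pred.sum()) + float(target.sum()) + eps)
```

The loss is 0 for a perfect overlap and approaches 1 for disjoint prediction and target.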
²We omit the image index to indicate that X can be labeled or unlabeled data.
Unsupervised learning path. The unsupervised loss Lu acts as a regularization term to explore potential knowledge in the labeled and unlabeled data. Lu includes the cross pseudo supervision loss Lcps between the two student models and the mean-teacher supervision loss Lmts for guiding the student models from the teacher, as follows:

Lu = Lcps + Lmts.                                  (4)
1) Cross pseudo supervision. The purpose of Lcps is to encourage the two students to learn from each other and to enforce consistency between them. Lcps = Lcps1 + Lcps2 encourages bidirectional interaction between the two student sub-networks f(·; θ1) and f(·; θ2) as follows:

Lcps1 = (1/(M − N)) Σ_{j=1}^{M−N} Lseg(f(Xj; θ1), Ŷ^2_j),
Lcps2 = (1/(M − N)) Σ_{j=1}^{M−N} Lseg(f(Xj; θ2), Ŷ^1_j),          (5)

where Ŷ^1_j and Ŷ^2_j are the pseudo labels (segmentation maps) for Xj predicted by f(·; θ1) and f(·; θ2), respectively.
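A minimal sketch of this bidirectional supervision (for brevity we use a pixel-wise cross entropy on hard pseudo labels, whereas the paper instantiates Lseg with the dice loss):

```python
import numpy as np

def cps_loss(p1: np.ndarray, p2: np.ndarray) -> float:
    """Cross pseudo supervision sketch: each student's hard pseudo label
    (argmax, treated as fixed, i.e., no gradient would flow through it)
    supervises the other student. p1, p2: softmax outputs of shape (C, N)."""
    y1, y2 = p1.argmax(axis=0), p2.argmax(axis=0)  # hard pseudo labels
    idx = np.arange(p1.shape[1])
    ce = lambda p, y: float(-np.log(p[y, idx] + 1e-12).mean())
    return ce(p1, y2) + ce(p2, y1)  # L_cps1 + L_cps2
```

When the two students agree and are confident, the loss is near zero; disagreement drives both terms up, which is what makes the supervision signal informative.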
2) Mean-teacher supervision. To prevent the two students from cross-supervising each other in the wrong direction, we introduce a teacher model to guide the optimization of the student models. Specifically, the teacher model is updated by the exponential moving average (EMA) of the average of the student models:

θ_t = α θ_{t−1} + (1 − α) (θ1_t + θ2_t) / 2,          (6)

where t represents the current training iteration and α is the EMA decay that controls the parameters' updating rate; we set α = 0.999 in our experiments.
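Update (6) amounts to the following element-wise rule (a plain-Python sketch over flat parameter lists; a real implementation would iterate over the networks' tensors):

```python
def ema_update(theta, theta1, theta2, alpha=0.999):
    """EMA of the average of the two students:
    theta <- alpha * theta + (1 - alpha) * (theta1 + theta2) / 2."""
    return [alpha * t + (1.0 - alpha) * 0.5 * (s1 + s2)
            for t, s1, s2 in zip(theta, theta1, theta2)]
```

With α close to 1, the teacher changes slowly and smooths out the students' iteration-to-iteration noise.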
The loss of mean-teacher supervision Lmts = Lmts1 + Lmts2 is calculated from the two branches:

Lmts1 = (1/(M − N)) Σ_{j=1}^{M−N} Lseg(f(Xj; θ1), Ŷ^0_j),
Lmts2 = (1/(M − N)) Σ_{j=1}^{M−N} Lseg(f(Xj; θ2), Ŷ^0_j),          (7)

where Ŷ^0_j refers to the predicted segmentation map derived from f(Xj; θ).
3.4 Uncertainty-guided Mix
Although CMT can promote model disagreement for co-training, it also slightly increases the uncertainty of the pseudo labels, as depicted in Figure 1. On the other hand, random regional dropout can expand the training distribution and improve the generalization capability of models [13, 14]. However, such random perturbations of the input images inevitably introduce noise into the new samples, thus deteriorating the quality of pseudo labels for SSL: one sub-network may provide incorrect pseudo labels to the other sub-networks, degrading their performance. To overcome these limitations, we propose UMIX to manipulate image patches under the guidance of the uncertainty maps produced by CMT. The main idea of UMIX is to construct a new sample by replacing the top k most uncertain (low-confidence) regions with the top k most certain (high-confidence) regions in the input image. As illustrated in Figure 2, for example, we obtain the most uncertain regions (the red grids) and the most certain regions (the green grids) from an uncertainty map U. Then, we replace the red regions with the green regions in the input image X to construct a new sample X′. Formally, UMIX constructs a new sample X′ = UMIX(X, U^1, U^2; k, 1/r) by replacing the top k most uncertain regions (red grids in V^1 and V^2) with the top k most certain regions (green grids in V^2 and V^1) in X, where each region has size 1/r of the image size. To ensure the reliability of the uncertainty evaluation, we obtain the uncertainty maps by integrating the outputs of the teacher and the student models instead of performing T stochastic forward passes as in Monte Carlo dropout-based estimation [26, 18], which is equivalent to sampling predictions from the previous and current iterations. This process can be formulated as:

U^m = Uncertain(f(X; θm), f(X; θ)) = −Σ_c Pc log(Pc),
Pc = (1/2) (Softmax(f(X; θm)) + Softmax(f(X; θ))),          (8)

where m = 1, 2 denotes the index of the student models and c refers to the class index.
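The two UMIX ingredients, the entropy map of Eq. (8) and the uncertainty-guided region swap, can be sketched as follows (our own minimal NumPy illustration; in particular, the square r-cell grid layout and the per-cell mean uncertainty score are assumptions about details the text leaves open):

```python
import numpy as np

def uncertainty_map(logits_student, logits_teacher):
    """Eq. (8): entropy of the averaged softmax of student and teacher.
    logits_*: (C, H, W) arrays; returns an (H, W) uncertainty map."""
    def softmax(z):
        e = np.exp(z - z.max(axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)
    p = 0.5 * (softmax(logits_student) + softmax(logits_teacher))
    return -(p * np.log(p + 1e-12)).sum(axis=0)

def umix(image, uncertainty, k=2, r=16):
    """Replace the k most uncertain grid cells with the k most certain ones.
    Assumes a square grid of r cells, each covering 1/r of the image area."""
    out = image.copy()
    g = int(round(r ** 0.5))                  # g x g grid of cells
    h, w = image.shape[0] // g, image.shape[1] // g
    # mean uncertainty per grid cell
    scores = np.array([[uncertainty[i*h:(i+1)*h, j*w:(j+1)*w].mean()
                        for j in range(g)] for i in range(g)])
    order = scores.ravel().argsort()          # ascending uncertainty
    for src, dst in zip(order[:k], order[-k:][::-1]):
        si, sj = divmod(int(src), g)          # most certain source cell
        di, dj = divmod(int(dst), g)          # most uncertain destination cell
        out[di*h:(di+1)*h, dj*w:(dj+1)*w] = image[si*h:(si+1)*h, sj*w:(sj+1)*w]
    return out
```

Applying the same swap to the pseudo-label map (as in Algorithm 1, lines 15-16) keeps image and label consistent in the mixed sample.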
Table 1: Comparison with state-of-the-art methods on the ISIC dataset. 5% DL and 10% DL of the labeled data are used for training, respectively. Results are measured by DSC.

Method                 5% DL    10% DL
MT [4]                 86.67    87.42
CCT [7]                83.97    86.43
CPS [5]                86.81    87.70
UGCL(U-Net) [15]       72.67    79.48
UCMT(U-Net) (ours)     82.14    83.33
CMT (ours)             87.86    88.10
UCMT (ours)            88.22    88.46
4 Experiments and results

4.1 Experimental settings
Datasets. We conduct extensive experiments on different medical image segmentation tasks to evaluate the proposed method, including skin lesion segmentation from dermoscopy images, polyp segmentation from colonoscopy images, and 3D left atrium segmentation from cardiac MRI images.
Dermoscopy. We validate our method on the ISIC dataset [28], which includes 2594 dermoscopy images and corresponding annotations. Following [15], we adopt 1815 images for training and 779 images for validation.
Colonoscopy. We evaluate the proposed method on two public colonoscopy datasets, Kvasir-SEG [29] and CVC-ClinicDB [30]. Kvasir-SEG and CVC-ClinicDB contain 1000 and 612 colonoscopy images with corresponding annotations, respectively.
Cardiac MRI. We evaluate our method on the 3D left atrial (LA) segmentation challenge dataset, which consists of 100 3D gadolinium-enhanced magnetic resonance images and LA segmentation masks for training and validation. Following [18], we split the 100 scans into 80 samples for training and 20 samples for evaluation.
4.1.1 Implementation details
We use DeepLabv3+ [1] equipped with ResNet50 as the baseline architecture for 2D image segmentation, whereas we adopt V-Net [31] as the baseline in the 3D scenario. In the 2D scenario, all images are resized to 256×256 for inference, and the outputs are recovered to the original size for evaluation. For 3D image segmentation, we randomly crop 80 × 112 × 112 (depth × height × width) patches for training and iteratively crop patches using a sliding window strategy to obtain the final segmentation mask for testing. We implement our method in the PyTorch framework on an NVIDIA Quadro RTX 6000 GPU. We adopt AdamW as the optimizer with a fixed learning rate of 1e-4. The batch size is set to 16, including 8 labeled samples and 8 unlabeled samples. All 2D models are trained for 50 epochs, while the 3D models are trained for 1000 epochs³. We empirically set k = 2 and r = 16 for our method in the experiments.
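For the sliding-window testing mentioned above, the window start positions along one axis can be computed as in this generic sketch (the function name and the boundary-clamping behavior are our own; the paper does not specify its stride):

```python
def window_starts(size: int, patch: int, stride: int):
    """Start indices of sliding windows along one axis, with the last window
    clamped to the boundary so the whole extent is covered."""
    if size <= patch:
        return [0]  # a single window covers the axis
    starts = list(range(0, size - patch + 1, stride))
    if starts[-1] != size - patch:
        starts.append(size - patch)  # clamp the final window to the edge
    return starts
```

Taking the Cartesian product of the per-axis start lists yields the 3D patch origins; overlapping predictions are typically averaged.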
4.2 Comparison with the state of the art
We compare the proposed method with the state of the art on the four public medical image segmentation datasets. We re-implement MT [4], CCT [7], and CPS [5] by adopting the implementations from [5]. For the other approaches, we directly use the results reported in their original papers.
Results on Dermoscopy. In Table 1, we report the results of our methods on ISIC and compare them with other state-of-the-art approaches. UCMT substantially outperforms all previous methods and sets a new state of the art of 88.22% DSC and 88.46% DSC under 5% and 10% labeled data. For a fair comparison with UGCL [15], we replace the backbone of UCMT with U-Net. The results indicate that our UCMT(U-Net) exceeds UGCL by a large margin. Moreover, our CMT variant also outperforms the other approaches under both labeled data rates. For example, CMT surpasses MT and CPS by 1.19% and 1.08% with 5% labeled data, showing the superiority of the collaborative mean-teacher over current consistency learning frameworks. By introducing UMIX, UCMT consistently increases the performance under different labeled data rates, which implies that promoting model disagreement and guaranteeing high-confidence pseudo labels are beneficial for semi-supervised segmentation.
³Since UCMT performs the two-step training within one iteration, it is trained for half of the epochs.
Table 2: Comparison with state-of-the-art methods on the Kvasir-SEG and CVC-ClinicDB datasets. 15% DL and 30% DL of the labeled data are individually used for training. Results are measured by DSC.

                     Kvasir-SEG           CVC-ClinicDB
Method               15% DL   30% DL     15% DL   30% DL
AdvSemSeg [24]       56.88    76.09      68.39    75.93
ColAdv [32]          76.76    80.95      82.18    89.29
MT [4]               87.44    88.72      84.19    84.40
CCT [7]              81.14    84.67      74.20    78.46
CPS [5]              86.44    88.71      85.34    86.69
CMT (ours)           88.08    88.61      85.88    86.83
UCMT (ours)          88.68    89.06      87.30    87.51
Results on Colonoscopy. We further conduct a comparative experiment on the polyp segmentation task from colonoscopy images. Table 2 reports the quantitative results on the Kvasir-SEG and CVC-ClinicDB datasets. Compared with the adversarial learning-based [24, 32] and consistency learning-based [4, 7, 5] algorithms, the proposed methods achieve state-of-the-art performance. For example, both CMT and UCMT outperform AdvSemSeg [24] and ColAdv [32] by large margins on Kvasir-SEG and CVC-ClinicDB, except that ColAdv shows a better performance of 89.29% on CVC-ClinicDB under 30% labeled data. These results demonstrate that our uncertainty-guided collaborative mean-teacher scheme is superior to the adversarial learning and consistency learning schemes used in the compared approaches. In addition, CMT and UCMT show better performance in the low-data regime, i.e., 15% DL, and the performance gap between 15% DL and 30% DL labeled data is small. This reflects the capacity of our method to produce high-quality pseudo labels from unlabeled data for semi-supervised learning, even with less labeled data.
Results on Cardiac MRI. We further evaluate the proposed method on a 3D medical image segmentation task. Table 3 shows the comparison results on 3D left atrium segmentation from cardiac MRI. The compared approaches are based on consistency learning and pseudo labeling, including uncertainty-aware [18, 33], shape-aware [25], structure-aware [34], dual-task [19], and mutual training [20] consistency. It can be observed that UCMT achieves the best performance under 10% and 20% DL in terms of DSC and Jaccard over the state-of-the-art methods. For example, compared with UA-MT [18] and MC-Net [20], UCMT shows improvements of 3.88% and 0.43% DSC with 10% labeled data. The results demonstrate the superiority of our UCMT for 3D medical image segmentation.
Table 3: Comparison with state-of-the-art methods on the 3D left atrial segmentation challenge dataset. 10% DL and 20% DL of the labeled data are used for training.

                   10% DL                            20% DL
Method             DSC    Jaccard  95HD   ASD        DSC    Jaccard  95HD   ASD
UA-MT [18]         84.25  73.48    13.84  3.36       88.88  80.21    7.32   2.26
SASSNet [25]       87.32  77.72    9.62   2.55       89.54  81.24    8.24   2.20
LG-ER-MT [34]      85.54  75.12    13.29  3.77       89.62  81.31    7.16   2.06
DUWM [33]          85.91  75.75    12.67  3.31       89.65  81.35    7.04   2.03
DTC [19]           86.57  76.55    14.47  3.74       89.42  80.98    7.32   2.10
MC-Net [20]        87.71  78.31    9.36   2.18       90.34  82.48    6.00   1.77
MT [4]             86.15  76.16    11.37  3.60       89.81  81.85    6.08   1.96
CPS [5]            86.23  76.22    11.68  3.65       88.72  80.01    7.49   1.91
CMT (ours)         87.23  77.83    7.83   2.23       89.88  81.74    6.07   1.94
UCMT (ours)        88.13  79.18    9.14   3.06       90.41  82.54    6.31   1.70
4.3 Ablation study

We conduct an ablation study in terms of network architectures, loss functions, and region mix to investigate the effectiveness of each component and to analyze the hyperparameters of the proposed method. There are three types of network architectures: 1) teacher-student (TS) as in MT [4], 2) student-student (SS) as in CPS [5], and 3) student-teacher-student (STS) in the proposed CMT.
8
|
723 |
+
|
724 |
+
UCMT
|
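The TS branch (and hence the teacher in STS) relies on a teacher whose weights are an exponential moving average (EMA) of the student weights [4]. A minimal sketch of this update (illustrative NumPy code, not the authors' implementation):

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    # teacher <- alpha * teacher + (1 - alpha) * student, per parameter tensor
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher_w, student_w)]

# Toy demo: the teacher drifts smoothly towards the student.
teacher = [np.zeros(3)]
student = [np.ones(3)]
for _ in range(10):
    teacher = ema_update(teacher, student, alpha=0.9)
# After 10 steps each teacher weight equals 1 - 0.9**10
```

In the SS and STS architectures this EMA teacher is combined with, or replaced by, a second student that provides cross supervision, as studied in the ablation below.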
Effectiveness of each component. Table 4 reports the performance improvements over the baseline. It shows that segmentation performance improves as the components, including the STS (student-teacher-student) architecture, Lcps, Lmts, and UMIX, are introduced into the baseline, and again confirms the necessity of encouraging model disagreement and enhancing the quality of pseudo labels for semi-supervised segmentation. The semi-supervised segmentation model is boosted for two reasons: 1) Lcps, Lmts, and the STS architecture force model disagreement in CMT for co-training, and 2) UMIX facilitates the model to produce high-confidence pseudo labels. Together, all components enable UCMT to achieve 88.22% DSC. These results demonstrate their effectiveness and complementarity for semi-supervised medical image segmentation. On the other hand, the two comparisons between "TS (teacher-student) + Lmts" (i.e., MT) and "STS + Lmts" (i.e., CMTv1), and between "SS (student-student) + Lcps" (i.e., CPS) and "STS + Lcps" (i.e., CMTv2), show that the STS-based approaches yield improvements of 0.17% and 0.64%, which verifies the effectiveness of our STS for SSL. The performance gaps are not large because the STS architecture increases the co-training disagreement but decreases the confidence of the pseudo labels. However, the results improve to 87.86% with "STS + Lcps + Lmts" (i.e., CMTv3), and relative improvements of 1.55% and 1.41% DSC are achieved by "STS + Lcps + Lmts + UMIX" (i.e., UCMT) compared with MT and CPS, respectively. These results support our hypothesis that maintaining co-training with high-confidence pseudo labels can improve the performance of semi-supervised learning.
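The cross pseudo supervision loss Lcps can be illustrated with a toy example. Below is a hedged sketch in which each network is supervised by the hardened (argmax) prediction of its peer; the exact form and weighting in the paper's implementation may differ:

```python
import numpy as np

def cross_entropy(p, onehot):
    # Mean cross-entropy between predicted probabilities p and a
    # one-hot (pseudo) label map; shapes (N, C).
    return float(-(onehot * np.log(p + 1e-8)).sum(axis=-1).mean())

def cps_loss(p1, p2):
    # Cross pseudo supervision: each network is supervised by the
    # argmax pseudo label of the other one.
    y1 = np.eye(p1.shape[-1])[p1.argmax(-1)]
    y2 = np.eye(p2.shape[-1])[p2.argmax(-1)]
    return cross_entropy(p1, y2) + cross_entropy(p2, y1)
```

When the two networks agree confidently the loss is small; when their pseudo labels disagree the loss grows, which is the mechanism driving the mutual supervision discussed above.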
Table 4: Ablation study of different component combinations on the ISIC dataset. All models are trained with 5% labeled data. TS: teacher-student; SS: student-student; STS: student-teacher-student; Lcps: cross pseudo supervision; Lmts: mean-teacher supervision; U: UMIX.

Method   | TS | SS | STS | Lcps | Lmts | U | DSC
Baseline |    |    |     |      |      |   | 83.31
MT       | √  |    |     |      | √    |   | 86.67
CPS      |    | √  |     | √    |      |   | 86.81
CMTv1    |    |    | √   |      | √    |   | 86.84
CMTv2    |    |    | √   | √    |      |   | 87.48
CMTv3    |    |    | √   | √    | √    |   | 87.86
UCMT     |    |    | √   | √    | √    | √ | 88.22
Comparison with CutMix. We further compare the proposed UMIX, a component of our UCMT, with CutMix [14] on the ISIC and LA datasets with different amounts of labeled data, to investigate their effects in semi-supervised medical image segmentation. As illustrated in Figure 3, UMIX outperforms CutMix, especially in the low-data regime, i.e., 2% labeled data. The reason for this is that CutMix performs random region mix, which inevitably introduces noise into the new samples and reduces the quality of the pseudo labels, while UMIX processes the image regions according to the uncertainty of the model, which facilitates the model to generate more confident pseudo labels from the new samples.
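One plausible reading of the uncertainty-guided mixing described above can be sketched as follows (an illustration, not the authors' code; the choice of which regions replace which, and the patch layout, are assumptions):

```python
import numpy as np

def umix(img_a, img_b, unc_a, k=2, grid=4):
    # Split img_a into a grid x grid layout of patches, roughly
    # corresponding to the paper's region-size ratio 1/r. Rank patches
    # by mean uncertainty and replace the top-k most uncertain ones
    # with the corresponding patches of img_b.
    h, w = img_a.shape
    ph, pw = h // grid, w // grid
    scores = {}
    for i in range(grid):
        for j in range(grid):
            scores[(i, j)] = float(unc_a[i*ph:(i+1)*ph, j*pw:(j+1)*pw].mean())
    mixed = img_a.copy()
    for (i, j) in sorted(scores, key=scores.get, reverse=True)[:k]:
        mixed[i*ph:(i+1)*ph, j*pw:(j+1)*pw] = img_b[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
    return mixed
```

In contrast, CutMix picks the pasted region uniformly at random, with no reference to model uncertainty.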
Figure 3: Comparison of UCMT (UMIX) with CutMix on the ISIC (a) and LA (b) datasets under 2%, 5%, 10%, and 20% DL. (Curves: CMT+CutMix, CMT+UMIX, CMT.)
Parameter sensitivity analysis. UMIX has two hyperparameters: the number k of top regions to mix and the ratio 1/r of the region (patch) size to the image size. We study the influence of these factors on UCMT on the ISIC dataset with 5% DL. As shown in Table 5, reducing the patch size leads to a slight increase in performance. Moreover, varying k does not bring any notable change: any value of k suffices to eliminate outliers and thus yields high-confidence pseudo labels for semi-supervised learning, indicating the robustness of UMIX.
Table 5: Investigation of how the top k and the region size affect the capacity of UMIX. All results are evaluated on the ISIC dataset with 5% labeled data.

1/r   | k=1    k=2    k=3    k=4    k=5
1/16  | 87.95  88.22  88.12  87.96  88.08
1/4   | 87.65  88.15  87.80  87.54  87.90
1/8   | 87.86  87.92  88.03  88.01  87.87
4.4 Qualitative results

Figure 4 visualizes example results of polyp segmentation, skin lesion segmentation, and left atrial segmentation. As shown in Figure 4 (b), the supervised baseline insufficiently segments some lesion regions, mainly due to the limited number of labeled data. Moreover, MT [Figure 4 (c)] and CPS [Figure 4 (d)] typically under-segment certain objects, which can be attributed to their limited generalization capability. On the contrary, our CMT [Figure 4 (e)] corrects these errors and produces smoother segmentation boundaries by gaining more effective supervision from unlabeled data. Besides, our complete method UCMT [Figure 4 (f)] further generates more accurate results by recovering finer segmentation details through more efficient training. These examples qualitatively verify the robustness of the proposed UCMT. In addition, to give insight into the procedure of pseudo-label generation and utilization in the co-training SSL method, we illustrate the uncertainty maps for two samples during training in Figure 5. As shown, UCMT generates uncertainty maps with high uncertainty [Figure 5 (a)/(c)] in the early training stage, whereas our model produces relatively higher-confidence maps [Figure 5 (b)/(d)] from the UMIX images. During training, UCMT gradually improves the confidence for the input images. These results show that UMIX can facilitate SSL models to generate high-confidence pseudo labels during training, guaranteeing that UCMT is able to maintain co-training more effectively.
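Uncertainty maps like those in Figure 5 are typically computed from stochastic forward passes (e.g. Monte Carlo dropout [26]); the exact estimator used here is not restated in this section, so the following is a generic sketch using predictive entropy:

```python
import numpy as np

def entropy_uncertainty(prob_maps):
    # prob_maps: array of shape (T, H, W, C) holding T stochastic
    # forward passes (e.g. with dropout enabled at test time).
    # The per-pixel entropy of the mean softmax serves as an
    # uncertainty map: high entropy means low confidence.
    mean_p = prob_maps.mean(axis=0)
    return -(mean_p * np.log(mean_p + 1e-8)).sum(axis=-1)
```

Bright regions in such a map mark pixels where the model is undecided, which is exactly the information UMIX exploits when choosing regions to mix.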
5 Conclusion

We present an uncertainty-guided collaborative mean-teacher for semi-supervised medical image segmentation. Our main idea lies in maintaining co-training with high-confidence pseudo labels to improve the capability of SSL models to explore information from unlabeled data. Extensive experiments on four public datasets demonstrate the effectiveness of this idea and show that the proposed UCMT achieves state-of-the-art performance. In the future, we will investigate more deeply the underlying mechanisms of co-training for more effective semi-supervised image segmentation.
Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 62076059 and the Natural Science Foundation of Liaoning Province under Grant 2021-MS-105.
References

[1] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 801–818, 2018.
[2] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
[3] Jesper E Van Engelen and Holger H Hoos. A survey on semi-supervised learning. Machine Learning, 109(2):373–440, 2020.
[4] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in Neural Information Processing Systems, 30, 2017.
Figure 4: Qualitative examples of polyp segmentation, skin lesion segmentation, and left atrial segmentation. (a) images, (b) supervised baseline, (c) MT, (d) CPS, (e) CMT, (f) UCMT, and (g) ground truth.
[5] Xiaokang Chen, Yuhui Yuan, Gang Zeng, and Jingdong Wang. Semi-supervised semantic segmentation with cross pseudo supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2613–2622, 2021.
[6] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, page 896, 2013.
[7] Yassine Ouali, Céline Hudelot, and Myriam Tami. Semi-supervised semantic segmentation with cross-consistency training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12674–12684, 2020.
[8] Zhanghan Ke, Di Qiu, Kaican Li, Qiong Yan, and Rynson WH Lau. Guided collaborative training for pixel-wise semi-supervised learning. In European Conference on Computer Vision, pages 429–445. Springer, 2020.
[9] Yuyuan Liu, Yu Tian, Yuanhong Chen, Fengbei Liu, Vasileios Belagiannis, and Gustavo Carneiro. Perturbed and strict mean teachers for semi-supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4258–4267, 2022.
[10] Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Tsang, and Masashi Sugiyama. How does disagreement help generalization against label corruption? In International Conference on Machine Learning, pages 7164–7173. PMLR, 2019.
[11] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. Advances in Neural Information Processing Systems, 17, 2004.
[12] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in Neural Information Processing Systems, 31, 2018.
[13] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
Figure 5: Illustration of the uncertainty maps for two sets of examples. (a) and (b) are the uncertainty maps of the original images and the UMIX images for the first example, while (c) and (d) are for the second example. (Panels show training epochs 1, 5, 10, 15, and 20 for Case 1 and Case 2.)
[14] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6023–6032, 2019.
[15] Tao Wang, Jianglin Lu, Zhihui Lai, Jiajun Wen, and Heng Kong. Uncertainty-guided pixel contrastive learning for semi-supervised medical image segmentation. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI, pages 1444–1450, 2022.
[16] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. FixMatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33:596–608, 2020.
[17] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. FlexMatch: Boosting semi-supervised learning with curriculum pseudo labeling. Advances in Neural Information Processing Systems, 34:18408–18419, 2021.
[18] Lequan Yu, Shujun Wang, Xiaomeng Li, Chi-Wing Fu, and Pheng-Ann Heng. Uncertainty-aware self-ensembling model for semi-supervised 3D left atrium segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 605–613. Springer, 2019.
[19] Xiangde Luo, Jieneng Chen, Tao Song, and Guotai Wang. Semi-supervised medical image segmentation through dual-task consistency. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 8801–8809, 2021.
[20] Yicheng Wu, Minfeng Xu, Zongyuan Ge, Jianfei Cai, and Lei Zhang. Semi-supervised left atrium segmentation with mutual consistency training. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 297–306. Springer, 2021.
[21] Yicheng Wu, Zongyuan Ge, Donghao Zhang, Minfeng Xu, Lei Zhang, Yong Xia, and Jianfei Cai. Mutual consistency learning for semi-supervised medical image segmentation. Medical Image Analysis, 81:102530, 2022.
[22] Xiaomeng Li, Lequan Yu, Hao Chen, Chi-Wing Fu, Lei Xing, and Pheng-Ann Heng. Transformation-consistent self-ensembling model for semi-supervised medical image segmentation. IEEE Transactions on Neural Networks and Learning Systems, 32(2):523–534, 2020.
[23] Peng Tu, Yawen Huang, Feng Zheng, Zhenyu He, Liujuan Cao, and Ling Shao. GuidedMix-Net: Semi-supervised semantic segmentation by using labeled images as reference. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2379–2387, 2022.
[24] Wei Chih Hung, Yi Hsuan Tsai, Yan Ting Liou, Yen-Yu Lin, and Ming Hsuan Yang. Adversarial learning for semi-supervised semantic segmentation. In 29th British Machine Vision Conference, BMVC 2018, 2018.
[25] Shuailin Li, Chuyu Zhang, and Xuming He. Shape-aware semi-supervised 3D semantic segmentation for medical images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 552–561. Springer, 2020.
[26] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059. PMLR, 2016.
[27] Zhedong Zheng and Yi Yang. Rectifying pseudo label learning via uncertainty estimation for domain adaptive semantic segmentation. International Journal of Computer Vision, 129(4):1106–1120, 2021.
[28] Noel CF Codella, David Gutman, M Emre Celebi, Brian Helba, Michael A Marchetti, Stephen W Dusza, Aadi Kalloo, Konstantinos Liopyris, Nabin Mishra, Harald Kittler, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pages 168–172. IEEE, 2018.
[29] Debesh Jha, Pia H Smedsrud, Michael A Riegler, Pål Halvorsen, Thomas de Lange, Dag Johansen, and Håvard D Johansen. Kvasir-SEG: A segmented polyp dataset. In International Conference on Multimedia Modeling, pages 451–462. Springer, 2020.
[30] Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Debora Gil, Cristina Rodríguez, and Fernando Vilariño. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Computerized Medical Imaging and Graphics, 43:99–111, 2015.
[31] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pages 565–571. IEEE, 2016.
[32] Huisi Wu, Guilian Chen, Zhenkun Wen, and Jing Qin. Collaborative and adversarial learning of focused and dispersive representations for semi-supervised polyp segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3489–3498, 2021.
[33] Yixin Wang, Yao Zhang, Jiang Tian, Cheng Zhong, Zhongchao Shi, Yang Zhang, and Zhiqiang He. Double-uncertainty weighted method for semi-supervised learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 542–551. Springer, 2020.
[34] Wenlong Hang, Wei Feng, Shuang Liang, Lequan Yu, Qiong Wang, Kup-Sze Choi, and Jing Qin. Local and global structure-aware entropy regularized mean teacher model for 3D left atrium segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 562–571. Springer, 2020.
GNE3T4oBgHgl3EQfWAqd/content/tmp_files/load_file.txt ADDED
HNE5T4oBgHgl3EQfWQ9u/content/tmp_files/2301.05557v1.pdf.txt ADDED
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Vibronic Effects on the Quantum Tunnelling of Magnetisation in Single-Molecule Magnets

Andrea Mattioni,1,∗ Jakob K. Staab,1 William J. A. Blackmore,1 Daniel Reta,1,2 Jake Iles-Smith,3,4 Ahsan Nazir,3 and Nicholas F. Chilton1,†

1 Department of Chemistry, School of Natural Sciences, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK
2 Faculty of Chemistry, UPV/EHU & Donostia International Physics Center DIPC, Ikerbasque, Basque Foundation for Science, Bilbao, Spain
3 Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
4 Department of Electrical and Electronic Engineering, School of Engineering, The University of Manchester, Sackville Street Building, Manchester M1 3BB, UK
Single-molecule magnets are among the most promising platforms for achieving molecular-scale data storage and processing. Their magnetisation dynamics are determined by the interplay between electronic and vibrational degrees of freedom, which can couple coherently, leading to complex vibronic dynamics. Building on an ab initio description of the electronic and vibrational Hamiltonians, we formulate a non-perturbative vibronic model of the low-energy magnetic degrees of freedom in a single-molecule magnet, which we benchmark against field-dependent magnetisation measurements. Describing the low-temperature magnetism of the complex in terms of magnetic polarons, we are able to quantify the vibronic contribution to the quantum tunnelling of the magnetisation. Despite collectively enhancing magnetic relaxation, we observe that specific vibrations suppress quantum tunnelling by enhancing the magnetic axiality of the complex. Finally, we discuss how this observation might impact the current paradigm of chemical design of new high-performance single-molecule magnets, promoting vibrations to an active role rather than regarding them as mere sources of noise and decoherence.
I. INTRODUCTION
Single-molecule magnets (SMMs) hold the potential for realising high-density data storage and quantum information processing [1–4]. These molecules exhibit a doubly-degenerate ground state, comprising two states supporting a large magnetic moment with opposite orientation, which represents an ideal platform for storing digital data. Slow reorientation of this magnetic moment results in magnetic hysteresis at the single-molecule level at sufficiently low temperatures [5]. The main obstacle to extending this behaviour to room temperature is the coupling of the magnetic degrees of freedom to molecular and lattice vibrations, often referred to as spin-phonon coupling. Thermal excitation of the molecular vibrations causes transitions between different magnetic states, ultimately leading to a complete loss of magnetisation. Advances in design, synthesis and characterisation of SMMs have shed light on the microscopic mechanisms underlying their desirable magnetic properties, extending this behaviour to increasingly higher temperatures [6–8].

The mechanism responsible for magnetic relaxation in SMMs strongly depends on temperature. At higher temperatures, relaxation is driven by one-phonon (Orbach) and two-phonon (Raman) transitions between magnetic sublevels [9]. When temperatures approach absolute zero, all vibrations are predominantly found in their ground state. Thus, both Orbach and Raman transitions become negligible and the dominant mechanism is quantum tunnelling of the magnetisation (QTM) between the two degenerate ground states [10, 11]. This process relies on the presence of a coherent coupling mixing the two otherwise degenerate ground states, opening a tunnelling gap, and allowing population to redistribute between them, thus leading to facile magnetic reorientation.
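The tunnelling scenario described above can be summarised by a minimal two-level model (a standard textbook sketch, not the ab initio vibronic Hamiltonian developed in this work). Writing the two ground states as a pseudospin basis, a bias $\epsilon$ (e.g. from a magnetic field) and a coherent coupling $\Delta$ give

```latex
H_{\mathrm{eff}} = \frac{\epsilon}{2}\,\sigma_z + \frac{\Delta}{2}\,\sigma_x,
\qquad
E_{\pm} = \pm\frac{1}{2}\sqrt{\epsilon^{2} + \Delta^{2}},
```

so the two eigenstates are split by $\sqrt{\epsilon^{2}+\Delta^{2}}$, reducing to the tunnelling gap $\Delta$ at zero bias, and population can coherently redistribute between the two magnetisation directions.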
While the role of vibrations in high-temperature magnetic relaxation is well understood in terms of weak-coupling rate equations for the electronic populations [12–15], the connection between QTM and spin-phonon coupling is still unclear. Some analyses have looked at the influence of vibrations on QTM in integer-spin SMMs, where a model spin system was used to show that spin-phonon coupling could open a tunnelling gap [16, 17]. However, QTM remains more elusive in half-integer-spin complexes, such as monometallic Dy(III) SMMs, since it is observed experimentally despite being forbidden by Kramers theorem [18]. In this case, a magnetic field is needed to break the time-reversal symmetry of the molecular Hamiltonian and lift the degeneracy of the ground doublet. This magnetic field can be provided by hyperfine interaction with nuclear spins or by dipolar coupling to other SMMs; both of these effects have been shown to affect tunnelling behaviour [19–25]. Once the tunnelling gap is opened by a magnetic field, molecular vibrations can in principle affect its magnitude in a nontrivial way. In a recent work, Ortu et al. analysed the magnetic hysteresis of a series of Dy(III) SMMs, suggesting that QTM efficiency correlates with molecular flexibility [22]. In another work, hyperfine coupling was proposed to assist QTM by facilitating the interaction between molecular vibrations and spin sublevels [26]. However, a clear and unambiguous demonstration of the influence of spin-phonon coupling on QTM beyond toy-model approaches is still lacking to date.
90 |
+
In this work we present a theoretical analysis of the effect of
|
91 |
+
arXiv:2301.05557v1 [quant-ph] 13 Jan 2023
|
92 |
+
|
93 |
+
2
|
94 |
+
molecular vibrations on the tunnelling dynamics in a Dy(III)
|
95 |
+
SMM. In contrast to previous treatments, our approach is
|
96 |
+
based on a fully ab initio description of the SMM vibrational
|
97 |
+
environment and accounts for the spin-phonon coupling in a
|
98 |
+
non perturbative way, overcoming the standard weak-coupling
|
99 |
+
master equation approach commonly used to determine the
|
100 |
+
high-temperature magnetisation dynamics. After deriving an
|
101 |
+
effective low-energy model for the relevant vibronic degrees
|
102 |
+
of freedom based on a polaron approach [27], we demon-
|
103 |
+
strate that vibrations can either enhance or reduce the quantum
|
104 |
+
tunnelling gap, depending on the orientation of the magnetic
|
105 |
+
field relative to the main anisotropy axis of the SMM. More-
|
106 |
+
over, we validate our vibronic model against frozen solution,
|
107 |
+
field-dependent magnetisation measurements and show that
|
108 |
+
vibronic effects on QTM survive the orientational averaging
|
109 |
+
imposed by amorphous samples, leading, on average, to a sig-
|
110 |
+
nificant enhancement of the tunnelling probability. Lastly, we
|
111 |
+
argue that not all vibrations lead to faster QTM; depending on
|
112 |
+
how strongly vibrations impact the axiality of the lowest en-
|
113 |
+
ergy magnetic doublet, we show that they can play a benign
|
114 |
+
role by suppressing tunnelling, and discuss first steps in that
|
115 |
+
direction.
II. MODEL

The compound investigated in this work is [Dy(Cpttt)2]+, shown in Fig. 1a [6]. The complex consists of a dysprosium ion Dy(III) enclosed between two negatively charged cyclopentadienyl rings with tert-butyl groups at positions 1, 2 and 4 (Cpttt). The crystal field generated by the axial ligands makes the states with larger angular momentum energetically favourable, resulting in the energy level diagram sketched in Fig. 1b. The energy barrier separating the two degenerate ground states results in magnetic hysteresis, which was observed up to T = 60 K [6]. Magnetic hysteresis is hindered by QTM, which leads to a characteristic sudden drop of the magnetisation at zero magnetic field.

To single out the contribution of molecular vibrations, we focus on a magnetically diluted sample in a frozen solution of dichloromethane (DCM). Thus, our computational model consists of a solvated [Dy(Cpttt)2]+ cation (see Section S1 for details; Fig. 1a), which provides a realistic description of the low-frequency vibrational environment, comprised of pseudo-acoustic vibrational modes (Fig. 1c). These constitute the basis to consider further contributions of dipolar and hyperfine interactions to QTM (Fig. 1b).

Once the equilibrium geometry and vibrational modes of the solvated SMM (which are in general combinations of molecular and solvent vibrations) are obtained at the density-functional level of theory (see Section S1), we proceed to determine the equilibrium electronic structure via complete active space self-consistent field spin-orbit (CASSCF-SO) calculations. The electronic structure is projected onto an effective crystal-field Hamiltonian, parametrised in terms of crystal-field parameters. The spin-phonon couplings are obtained from a single CASSCF calculation, by computing the analytic derivatives of the molecular Hamiltonian with respect to the nuclear coordinates [14] (see Section S1 for more details).
The lowest-energy angular momentum multiplet of [Dy(Cpttt)2]+ (J = 15/2) can thus be described by the ab initio vibronic Hamiltonian

\[ \hat{H} = \sum_m E_m\,|m\rangle\langle m| + \sum_j \hat{V}_j \otimes \big(\hat{b}_j + \hat{b}_j^\dagger\big) + \sum_j \omega_j\, \hat{b}_j^\dagger \hat{b}_j, \tag{1} \]

where Em denotes the energy associated with the electronic state |m⟩ and V̂j represent the spin-phonon coupling operators. The harmonic vibrational modes of the DCM-solvated [Dy(Cpttt)2]+ are described in terms of their bosonic annihilation (creation) operators b̂j (b̂†j) and frequencies ωj.
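To make the structure of the vibronic Hamiltonian concrete, the following minimal sketch (not from the original work; all matrices, frequencies and couplings are hypothetical toy values) builds Eq. (1) numerically by truncating each bosonic mode to a finite Fock space:

```python
import numpy as np

def vibronic_hamiltonian(E, V_ops, omegas, n_fock):
    """Numerical sketch of Eq. (1): H = sum_m E_m |m><m|
    + sum_j V_j (b_j + b_j^dag) + sum_j omega_j b_j^dag b_j,
    with each mode truncated to n_fock Fock states."""
    b = np.diag(np.sqrt(np.arange(1, n_fock)), k=1)  # annihilation operator
    x = b + b.T                                      # b_j + b_j^dag
    n_op = b.T @ b                                   # number operator
    dims = [len(E)] + [n_fock] * len(omegas)

    def embed(op, slot):
        # place `op` at tensor slot `slot`, identities elsewhere
        mats = [np.eye(d) for d in dims]
        mats[slot] = op
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = embed(np.diag(E), 0)
    for j, (V, w) in enumerate(zip(V_ops, omegas)):
        H += embed(V, 0) @ embed(x, j + 1) + w * embed(n_op, j + 1)
    return H

# two electronic levels coupled to one 20 cm^-1 mode (toy numbers)
H = vibronic_hamiltonian(E=[0.0, 0.0],
                         V_ops=[np.diag([0.5, -0.5])],
                         omegas=[20.0], n_fock=5)
evals = np.linalg.eigvalsh(H)
```

With a coupling diagonal in the electronic basis, each oscillator is simply displaced and the ground state is lowered by roughly ⟨1|V̂j|1⟩²/ωj; this is the physics that the polaron transformation discussed later exploits.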
In the absence of magnetic fields, the Hamiltonian (1) is symmetric under time reversal. This symmetry results in a two-fold degeneracy of the energy levels Em, whose corresponding eigenstates |m⟩ and |m̄⟩ form a time-reversal conjugate Kramers doublet. The degeneracy is lifted by introducing a magnetic field B, which couples to the electronic degrees of freedom via the Zeeman interaction ĤZee = μB gJ B·Ĵ, where gJ is the Landé g-factor and Ĵ is the total angular momentum operator. To linear order in the magnetic field, each Kramers doublet splits into two energy levels Em ± Δm/2 corresponding to the states

\[ |m_+\rangle = \cos\tfrac{\theta_m}{2}\,|m\rangle + e^{i\varphi_m}\sin\tfrac{\theta_m}{2}\,|\bar{m}\rangle \tag{2} \]
\[ |m_-\rangle = -\sin\tfrac{\theta_m}{2}\,|m\rangle + e^{i\varphi_m}\cos\tfrac{\theta_m}{2}\,|\bar{m}\rangle \tag{3} \]
where the energy splitting Δm and the mixing angles θm and φm are determined by the matrix elements of the Zeeman Hamiltonian on the subspace {|m⟩, |m̄⟩}. In addition to the intra-doublet mixing described by Eqs. (2) and (3), the Zeeman interaction also mixes Kramers doublets at different energies. The ground doublet acquires contributions from higher-lying states

\[ |1'_\pm\rangle = |1_\pm\rangle + \sum_{m \neq 1,\bar{1}} \frac{|m\rangle\langle m|\hat{H}_\mathrm{Zee}|1_\pm\rangle}{E_1 - E_m} + O(B^2). \tag{4} \]

These states no longer form a time-reversal conjugate doublet, meaning that the spin-phonon coupling can now contribute to transitions between them.
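The splitting and mixing angles entering Eqs. (2) and (3) follow from diagonalising the 2×2 Zeeman block on a single Kramers doublet. A hedged sketch (hypothetical matrix elements; the phase convention below is one of several equivalent choices):

```python
import numpy as np

def doublet_splitting(h11, h12):
    """Splitting and mixing angles of a Kramers doublet under a Zeeman
    perturbation, cf. Eqs. (2)-(3). h11 = <m|H_Zee|m> (real; equals
    -<mbar|H_Zee|mbar> by time-reversal), h12 = <m|H_Zee|mbar> (complex).
    Convention: |m+> = cos(theta/2)|m> + e^{i phi} sin(theta/2)|mbar>."""
    delta = 2.0 * np.hypot(h11, abs(h12))  # splitting Delta_m
    theta = np.arctan2(abs(h12), h11)      # mixing angle theta_m
    phi = -np.angle(h12)                   # phase phi_m (convention-dependent)
    return delta, theta, phi

# hypothetical matrix elements (cm^-1)
delta, theta, phi = doublet_splitting(0.3, 0.1 + 0.2j)
```

The returned angles reproduce the eigenvectors of the block [[h11, h12], [h12*, −h11]], whose eigenvalues are ±Δm/2.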
Since QTM is typically observed at much lower temperatures than the energy gap between the lowest and first excited doublets (which here is ∼ 660 K [6]), we focus on the perturbed ground doublet |1′±⟩. Within this subspace, the Hamiltonian Ĥ + ĤZee takes the form

\[ \begin{aligned} \hat{H}_\mathrm{eff} = {} & E_1 + \frac{\Delta_1}{2}\sigma'_z + \sum_j \omega_j\, \hat{b}_j^\dagger \hat{b}_j \\ & + \sum_j \Big( \langle 1|\hat{V}_j|1\rangle - w^z_j \sigma'_z \Big) \big( \hat{b}_j + \hat{b}_j^\dagger \big) \\ & - \sum_j \Big( w^x_j \sigma'_x + w^y_j \sigma'_y \Big) \big( \hat{b}_j + \hat{b}_j^\dagger \big). \end{aligned} \tag{5} \]

This Hamiltonian describes the interaction between vibrational modes and an effective spin one-half represented by the Pauli matrices σ′ = (σ′x, σ′y, σ′z),
FIG. 1. Quantum tunnelling in single-molecule magnets. (a) Molecular structure of a Dy(III) single-molecule magnet surrounded by a dichloromethane bath. (b) Equilibrium energy level diagram of the lowest-energy angular momentum multiplet with J = 15/2. The second-lowest doublet at E2 is 524 cm−1 higher than the ground doublet at E1, while the highest doublet is 1523 cm−1 above E1. Dipolar and hyperfine magnetic fields (Bint) can lift the degeneracy of the doublets and cause quantum tunnelling, which results in avoided crossings when sweeping an external magnetic field Bext. Molecular vibrations can influence the magnitude of the avoided crossing. (c) Spin-phonon coupling for the solvated complex shown above, as a function of the vibrational frequency (vibrations with ωj > 1500 cm−1 not shown), calculated as the Frobenius norm of the operator V̂j. The grey dashed line represents the vibrational density of states, obtained by assigning to each molecular vibration a (anti-symmetrised) Lorentzian lineshape with full width at half-maximum 10 cm−1 (corresponding to a typical timescale of ∼ 1 ps). (d) Idea behind the polaron transformation of Eq. (6). Each spin state |1′±⟩ is accompanied by a vibrational distortion (greatly exaggerated for visualisation), thus forming a magnetic polaron. Vibrational states |ν⟩ are now described in terms of harmonic displacements around the deformed structure, which depends on the state of the spin. Polarons provide an accurate physical picture when the spin-phonon coupling is strong and mostly modulates the energy of different spin states but not the coupling between them.
where σ′z = |1′+⟩⟨1′+| − |1′−⟩⟨1′−|. The vector wj = (ℜ⟨1−|Ŵj|1+⟩, ℑ⟨1−|Ŵj|1+⟩, ⟨1+|Ŵj|1+⟩) is defined in terms of the operator Ŵj = Σ_{m≠1,1̄} V̂j|m⟩⟨m|ĤZee/(Em − E1) + h.c., describing the effect of the Zeeman interaction on the spin-phonon coupling. Due to the strong magnetic axiality of the complex considered here, the longitudinal component of the spin-phonon coupling w^z_j dominates over the transverse part w^x_j, w^y_j (see Section S3). In this case, we can get a better physical picture of the system by transforming the Hamiltonian (5) to the polaron frame defined by the unitary operator

\[ \hat{S} = \exp\Big[ \sum_{s=\pm} |1'_s\rangle\langle 1'_s| \sum_j \xi^s_j \big( \hat{b}_j^\dagger - \hat{b}_j \big) \Big], \tag{6} \]

which mixes electronic and vibrational degrees of freedom by displacing the mode operators by ξ^±_j = (⟨1|V̂j|1⟩ ∓ w^z_j)/ωj, depending on the state of the effective spin one-half [27]. The idea behind this transformation is to allow nuclei to relax around a new equilibrium geometry, which may be different for every spin state. This lowers the energy of the system and provides a good description of the vibronic eigenstates when the spin-phonon coupling is approximately diagonal in the spin basis (Fig. 1d). In the polaron frame, the longitudinal spin-phonon coupling is fully absorbed into the purely electronic part of the Hamiltonian, while the transverse components can be approximated by their thermal average over vibrations, neglecting their vanishingly small quantum fluctuations (see Section S2). After transforming back to the original frame, we are left with an effective spin one-half Hamiltonian with no residual spin-phonon coupling, Ĥeff ≈ Ĥ(pol)eff + Σj ωj b̂†j b̂j, where

\[ \hat{H}^\mathrm{(pol)}_\mathrm{eff} = E_1 + \frac{\Delta_1}{2}\sigma''_z + 2\sum_j \frac{\langle 1|\hat{V}_j|1\rangle}{\omega_j}\, \mathbf{w}_j \cdot \boldsymbol{\sigma}''. \tag{7} \]

The set of Pauli matrices σ′′ = Ŝ†(σ′ ⊗ 1vib)Ŝ describes the two-level system formed by the magnetic polarons of the form Ŝ†|1′±⟩|{νj}⟩vib, where {νj} is a set of occupation numbers for the vibrational modes of the solvent-SMM system. These magnetic polarons can be thought of as magnetic electronic states strongly coupled to a distortion of the molecular geometry. They inherit the magnetic properties of the corresponding electronic states, and can be seen as the molecular equivalent of the magnetic polarons observed in a range of magnetic materials [28–30]. Polaron representations of vibronic systems have been employed in a wide variety of settings, ranging from spin-boson models [27, 31] to photosynthetic complexes [32–34] and quantum dots [35–37], providing a convenient basis to describe the dynamics of quantum systems strongly coupled to a vibrational environment. These methods are particularly well suited for condensed-matter systems where the electron-phonon coupling is strong but causes very slow transitions between different electronic states, allowing exact treatment of the pure-dephasing part of the electron-phonon coupling while renormalising the electronic parameters. For this reason, the polaron transformation is especially effective for describing our system (as detailed in Section S3). The most striking advantage of this approach is that the average effect of the spin-phonon coupling is included non-perturbatively in the electronic part of the Hamiltonian, leaving behind a vanishingly small residual spin-phonon coupling.
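The polaron transformation of Eq. (6) and the resulting correction in Eq. (7) reduce, for each mode, to simple algebra on the quantities ⟨1|V̂j|1⟩, wj and ωj. A toy sketch (all parameter values below are hypothetical):

```python
import numpy as np

def polaron_parameters(v_diag, w, omegas):
    """Evaluate the mode displacements xi_j^± of Eq. (6) and the vibronic
    correction vector 2 * sum_j (<1|V_j|1>/omega_j) w_j of Eq. (7).
    v_diag[j] = <1|V_j|1>; w[j] = (w_j^x, w_j^y, w_j^z); omegas[j] = mode
    frequency (all in the same energy units)."""
    v_diag, w, omegas = map(np.asarray, (v_diag, w, omegas))
    xi_plus = (v_diag - w[:, 2]) / omegas    # displacement for |1'+>
    xi_minus = (v_diag + w[:, 2]) / omegas   # displacement for |1'->
    correction = 2.0 * np.sum((v_diag / omegas)[:, None] * w, axis=0)
    return xi_plus, xi_minus, correction

# two toy modes: low-frequency modes contribute more via the 1/omega factor
xi_p, xi_m, corr = polaron_parameters(
    v_diag=[0.4, -0.2],
    w=[(0.01, 0.0, 0.05), (0.0, 0.02, -0.03)],
    omegas=[10.0, 25.0])
```

The 1/ωj weighting makes explicit why low-frequency, strongly coupled modes dominate the vibronic correction.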
As a last step, we bring the Hamiltonian in Eq. (7) into a more familiar form by expressing it in terms of an effective g-matrix. We recall that the quantities Δ1 and wj depend linearly on the magnetic field B via the Zeeman Hamiltonian ĤZee. An additional dependence on the orientation of the magnetic field comes from the mixing angles θ1 and φ1 introduced in Eqs. (2) and (3), appearing in the states |1±⟩ used in the definition of wj. This further dependence is removed by transforming the Pauli operators back to the basis {|1⟩, |1̄⟩} via a three-dimensional rotation σ = Rθ1,φ1 · σ′′. Finally, we obtain

\[ \hat{H}^\mathrm{(pol)}_\mathrm{eff} = E_1 + \mu_B \mathbf{B} \cdot \Big( g^\mathrm{el} + \sum_j g^\mathrm{vib}_j \Big) \cdot \frac{\boldsymbol{\sigma}}{2}, \tag{8} \]

for appropriately defined electronic and single-mode vibronic g-matrices gel and gvib_j. These are directly related to the electronic splitting term Δ1 and to the vibronic corrections described by wj in Eq. (7), respectively (see Section S2 for a thorough derivation). The main advantage of representing the ground Kramers doublet with an effective spin one-half Hamiltonian is that it provides a conceptually simple foundation for studying the low-temperature magnetic behaviour of the complex, confining all microscopic details, including vibronic effects, to an effective g-matrix.
III. RESULTS

We begin by considering the influence of vibrations on the Zeeman splitting of the lowest doublet. In the absence of vibrations, the Zeeman splitting is simply given by Δ1 = μB|B · gel|. In the presence of vibrations, the electronic g-matrix gel is modified by adding the vibronic correction Σj gvib_j, resulting in the Zeeman splitting Δvib1. In Fig. 2 we show the Zeeman splittings as a function of the orientation of the magnetic field B, parametrised in terms of the polar angles (θ, φ). Depending on the field orientation, vibrations can lead to either an increase or a decrease of the Zeeman splitting. These changes seem rather small when compared to the largest electronic splitting, obtained when B is oriented along the z-axis (Fig. 1a), as expected for a complex with easy-axis anisotropy. However, they become quite significant for field orientations close to the xy-plane, where the purely electronic splitting Δ1 becomes vanishingly small and Δvib1 can be dominated by the vibronic contribution. This is clearly shown in Figs. 2b and 2c, where we decompose the total field B = Bint + Bext into a fixed internal component Bint, originating from dipolar and hyperfine interactions and responsible for opening a tunnelling gap, and an external part Bext, which we sweep along a fixed direction across zero. We note that this effect is specific to states with easy-axis magnetic anisotropy; however, this is the defining feature of SMMs, such that our results should be generally applicable to all Kramers SMMs. A more in-depth discussion of the origin and magnitude of the internal field can be found in Section S5. When these fields lie in the plane perpendicular to the purely electronic easy axis, i.e. the hard plane, the vibronic splitting can be four orders of magnitude larger than the electronic one (Fig. 2b). The situation is reversed when the fields lie in the hard plane of the vibronic g-matrix (Fig. 2c).
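This comparison can be reproduced with a few lines; the g-matrices below are illustrative placeholders (not the computed values of this work), chosen only to mimic a strongly axial electronic part plus a small transverse vibronic correction:

```python
import numpy as np

MU_B = 0.4668644735  # Bohr magneton in cm^-1 T^-1

def zeeman_splitting(B, g):
    """Ground-doublet Zeeman splitting Delta = mu_B |B . g| (cm^-1)
    for a field B in tesla and a 3x3 g-matrix."""
    return MU_B * np.linalg.norm(B @ g)

# illustrative g-matrices (hypothetical numbers)
g_el = np.diag([1e-4, 1e-4, 19.6])    # near-perfect easy-axis electronic part
g_vib = np.diag([5e-3, 5e-3, -0.1])   # small vibronic correction

B_hard = np.array([1.0, 0.0, 0.0])    # field in the electronic hard plane
d_el = zeeman_splitting(B_hard, g_el)
d_vib = zeeman_splitting(B_hard, g_el + g_vib)
# with the field in the hard plane, the vibronic contribution dominates
```

Along the easy axis the correction is negligible relative to the electronic splitting, while in the hard plane it can dominate by orders of magnitude, mirroring the behaviour described above.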
So far we have seen that spin-phonon coupling can either enhance or reduce the tunnelling gap in the presence of a magnetic field, depending on its orientation. For this reason, it is not immediately clear whether its effects survive ensemble averaging in a collection of randomly oriented SMMs, such as the frozen solutions considered in magnetometry experiments. In order to check this, let us consider an ideal field-dependent magnetisation measurement. When sweeping a magnetic field Bext at a constant rate from positive to negative values along a given direction, QTM is typically observed as a sharp step in the magnetisation of the sample when crossing the region around Bext = 0 [10]. This sudden change of the magnetisation is due to a non-adiabatic spin-flip transition between the two lowest-energy spin states, which occurs when traversing an avoided crossing (see diagram in Fig. 1b, right). The spin-flip probability is given by the celebrated Landau-Zener expression [38–43], which in our case takes the form

\[ P_\mathrm{LZ} = 1 - \exp\left( -\frac{\pi |\Delta_\perp|^2}{2|\mathbf{v}|} \right), \tag{9} \]
where we have defined v = μB dBext/dt · g, and Δ⊥ is the component of Δ = μB Bint · g perpendicular to v, while g denotes the total electronic-vibrational g-matrix appearing in Eq. (8) (see Section S2 for a derivation of Eq. (9)). We account for orientational disorder by averaging Eq. (9) over all possible orientations of internal and external magnetic fields, yielding the ensemble average ⟨PLZ⟩.
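The orientational average ⟨PLZ⟩ can be estimated by Monte Carlo sampling of field directions. The sketch below works in natural units (ħ = μB = 1) with hypothetical parameter values; it is not the authors' numerical procedure, only an illustration of the averaging step:

```python
import numpy as np

def lz_probability(B_int, sweep_dir, sweep_rate, g):
    """Landau-Zener spin-flip probability of Eq. (9) for one orientation.
    B_int: internal field vector; sweep_dir: unit vector along B_ext;
    sweep_rate: |dB_ext/dt|; g: 3x3 g-matrix. Natural units (hbar = mu_B = 1)."""
    v = sweep_rate * (sweep_dir @ g)
    delta = B_int @ g
    d_perp = delta - (delta @ v) / (v @ v) * v  # component of Delta perp. to v
    return 1.0 - np.exp(-np.pi * (d_perp @ d_perp) / (2.0 * np.linalg.norm(v)))

def ensemble_average(B_int_mag, sweep_rate, g, n=2000, seed=1):
    """Monte Carlo estimate of <P_LZ> over independent random orientations
    of the internal field and of the sweep direction."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(2, n, 3))
    u /= np.linalg.norm(u, axis=2, keepdims=True)  # uniform on the sphere
    return float(np.mean([lz_probability(B_int_mag * bi, sd, sweep_rate, g)
                          for bi, sd in zip(u[0], u[1])]))
```

Because each single-orientation probability grows monotonically with |Δ⊥|², a larger internal field always yields a larger ensemble average, as in Fig. 3.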
The effect of spin-phonon coupling on the spin-flip dynamics of an ensemble of SMMs can be clearly seen in Fig. 3. Including the vibronic correction to the ground-doublet g-matrix leads to enhanced spin-flip probabilities across a wide range of internal field strengths and field sweep rates. This is in line with previous results suggesting that molecular flexibility correlates with QTM [22]. To further corroborate our model, we test its predictions against experimental data. We extracted the average spin-flip probability from published hysteresis data of [Dy(Cpttt)2][B(C6F5)4] in DCM with sweep rates ranging between 10–20 Oe/s [6], yielding a value of ⟨PLZ⟩ = 0.27, indicated by the pink line in Fig. 3. We then checked what strength of the internal field Bint is required to reproduce such a spin-flip probability based on Eq. (9). In Fig. 3, we observe that the values of Bint required by the vibronic model to reproduce the observed spin-flip probability are perfectly consistent with the dipolar fields naturally occurring in the sample, whereas the purely electronic model necessitates internal fields that are one order of magnitude larger. These results clearly demonstrate the significance of spin-phonon coupling for QTM in a disordered ensemble of SMMs. A detailed discussion on the estimation of spin-flip probabilities and internal fields from magnetisation measurements is presented in Sections S4 and S5.

FIG. 2. Zeeman splitting of the ground Kramers doublet. (a) Electronic ground doublet splitting (Δ1, top) and vibronic correction (Δvib1 − Δ1, bottom) as a function of the orientation of the magnetic field B = (sinθ cosφ, sinθ sinφ, cosθ), with magnitude fixed to 1 T. The dashed (solid) line corresponds to the electronic (vibronic) hard plane. (b–c) Electronic (dashed) and vibronic (solid) Zeeman splitting of the ground doublet as a function of the external field magnitude Bext in the presence of a transverse internal field Bint = 1 mT. External and internal fields are perpendicular to each other and were both chosen to lie in the hard plane of either the electronic (b) or vibronic (c) g-matrix. The orientation of the external (internal) field is shown for both cases as circles (crosses) in the inset in (a), with colors matching the ones in (b) and (c).

FIG. 3. Landau-Zener spin-flip probability. Ensemble-averaged spin-flip probability as a function of the internal field strength Bint causing tunnelling within the ground Kramers doublet, shown for different sweep rates dBext/dt. Results for the vibronic model of Eq. (8) are shown as orange solid lines, together with the spin-flip probabilities predicted by a purely electronic model obtained by setting the spin-phonon coupling to zero, shown as blue dashed lines. The horizontal pink line indicates ⟨PLZ⟩ = 0.27, extracted from hysteresis data from Ref. [6] (Section S4). The green shaded area indicates the range of values for typical dipolar fields in the corresponding sample (Section S5).
IV. DISCUSSION

As shown above, the combined effect of all vibrations in a randomly oriented ensemble of solvated SMMs is to enhance QTM. However, not all vibrations contribute to the same extent. Based on the polaron model introduced above, vibrations with large spin-phonon coupling and low frequency have a larger impact on the magnetic properties of the ground Kramers doublet. This can be seen from Eq. (7), where the vibronic correction to the effective ground Kramers Hamiltonian is weighted by the factor ⟨1|V̂j|1⟩/ωj. Another property of vibrations that can influence QTM is their symmetry. In monometallic SMMs, QTM has generally been correlated with a reduction of the axial symmetry of the complex, either by the presence of flexible ligands or by transverse magnetic fields. Since we are interested in symmetry only insofar as it influences magnetism, it is useful to introduce a measure of axiality of the g-matrix, such as

\[ \mathcal{A}(g) = \frac{\left\| g - \tfrac{1}{3}\operatorname{Tr}g\, \mathbb{1} \right\|}{\sqrt{\tfrac{2}{3}}\operatorname{Tr}g}, \tag{10} \]

where ∥·∥ denotes the Frobenius norm. This measure yields 1 for a perfect easy-axis complex, 1/2 for an easy-plane system, and 0 for the perfectly isotropic case. The axiality of an individual vibrational mode can be quantified as Aj = A(gel + gvib_j), by building a single-mode vibronic g-matrix analogous to the multi-mode one introduced in Eq. (8).

FIG. 4. Single-mode contributions to tunnelling of the magnetisation. (a) Single-mode vibronic Landau-Zener probabilities plotted for each vibrational mode, shown as a function of the mode axiality relative to the axiality of the purely electronic g-matrix (ΔAj = Aj − Ael). The magnitude of the internal field is fixed to Bint = 1 mT and the external field sweep rate is 10 Oe/s. The color coding represents the spin-phonon coupling strength ∥V̂j∥. Grey dashed lines correspond to the purely electronic model. (b) Visual representation of the displacements induced by the vibrational modes indicated by arrows in (a). Solvent motion is only shown for modes 2 and 6, which have negligible amplitude on the SMM.
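Eq. (10) is straightforward to implement and check against the three limiting cases quoted in the text (the diagonal g-matrices below are toy values):

```python
import numpy as np

def axiality(g):
    """Axiality measure A(g) of Eq. (10): Frobenius norm of the traceless
    part of g, normalised so that a perfect easy-axis g-matrix yields 1,
    an easy-plane one 1/2 and an isotropic one 0."""
    g = np.asarray(g, dtype=float)
    traceless = g - np.trace(g) / 3.0 * np.eye(3)
    return np.linalg.norm(traceless) / (np.sqrt(2.0 / 3.0) * np.trace(g))

a_axis = axiality(np.diag([0.0, 0.0, 20.0]))    # easy axis  -> 1.0
a_plane = axiality(np.diag([10.0, 10.0, 0.0]))  # easy plane -> 0.5
a_iso = axiality(np.diag([5.0, 5.0, 5.0]))      # isotropic  -> 0.0
```

The single-mode axiality Aj then follows by passing gel + gvib_j to the same function.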
We might be tempted to conclude that vibrational motion always decreases the axiality with respect to its electronic value Ael = A(gel), given that the collective effect of vibrations is to enhance QTM. However, when considered individually, some vibrations can have the opposite effect, effectively increasing the magnetic axiality.

In order to see how axiality correlates with QTM, we calculate the single-mode Landau-Zener probabilities ⟨Pj⟩. These are obtained by replacing the multi-mode vibronic g-matrix in Eq. (8) with the single-mode one, gel + gvib_j, and following the same procedure detailed in Section S2. The single-mode contribution to the spin-flip probability unambiguously correlates with mode axiality, as shown in Fig. 4a. Vibrational modes that lead to a larger QTM probability are likely to reduce the magnetic axiality of the complex (top-left sector). Vice versa, those vibrational modes that enhance axiality also suppress QTM (bottom-right sector).

As a first step towards uncovering the microscopic basis of this unexpected behaviour, we single out the three vibrational modes that have the largest impact on axiality and spin-flip probability in both directions. These vibrational modes, labelled 1–6, represent a range of qualitatively distinct vibrations, as can be observed in Fig. 4b. Modes 4 and 5 are among the ones exhibiting the strongest spin-phonon coupling. Both of them are mainly localised on one of the Cpttt ligands and involve atomic displacements along the easy axis and, to a lesser extent, rotations of the methyl groups. Modes 1 and 3 are among the ones with the largest amplitude on the Dy ion, which in both cases mainly moves in the hard plane, disrupting axial symmetry and enhancing tunnelling. Lastly, modes 2 and 6 predominantly correspond to solvent vibrations; they are thus very low in energy and give a large contribution via the small denominator in Eq. (7).

This analysis shows that the effect of vibrational modes on QTM is more nuanced than what both intuition and previous work would suggest. Despite leading to an overall increase of the spin-flip probability on average, coupling the spin to specific vibrations can increase the magnetic axiality of the complex and suppress QTM. This opens a new avenue for the improvement of magnetic relaxation times in SMMs, shifting the role of vibrations from purely antagonistic to potentially beneficial.

According to the results shown above, the ideal candidates for observing vibronic suppression of QTM are systems exhibiting strongly axial, low-frequency vibrations that are strongly coupled to the electronic effective spin. Strong spin-phonon coupling and low frequency ensure a significant change in magnetic properties according to Eq. (7), but may not be enough to hinder tunnelling. In order to be beneficial, vibrations also need to enhance the axiality of the ground doublet g-matrix. The relation between magnetic axiality and vibrational symmetry remains to be explored, and might lead to new insights regarding the rational design of ideal ligands.
|
702 |
+
|
703 |
+
17
|
704 |
+
V.
|
705 |
+
CONCLUSIONS
In conclusion, we have presented a detailed description of the effect of molecular and solvent vibrations on the quantum tunnelling between low-energy spin states in a single-ion Dy(III) SMM. Our theoretical results, based on an ab initio approach, are complemented by a polaron treatment of the relevant vibronic degrees of freedom, which does not rely on any weak spin-phonon coupling assumption and is therefore well-suited to other strong-coupling scenarios. We have been able to derive a non-perturbative vibronic correction to the effective g-matrix of the lowest-energy Kramers doublet, which we have used as a basis to determine the tunnelling dynamics in a magnetic field sweep experiment. This has allowed us to formulate the key observation that, although vibrations collectively enhance QTM, some particular vibrational modes unexpectedly suppress it. This behaviour correlates with the axiality of each mode, which can be used as a proxy for determining whether a specific vibration enhances or hinders tunnelling.

The observation that individual vibrational modes can suppress QTM challenges the paradigm that dismisses vibrations as detrimental, a mere obstacle to achieving long-lasting information storage in SMMs, and forces us instead to reconsider them in a new light, as tools that can be actively engineered to our advantage to keep tunnelling at bay and extend relaxation timescales in molecular magnets. This idea suggests parallels with other seemingly unrelated chemical systems where electron-phonon coupling plays an important role. For example, the study of electronic energy transfer across photosynthetic complexes was radically transformed by the simple observation that vibrations could play an active role, maintaining quantum coherence in noisy room-temperature environments, rather than just passively causing decoherence between electronic states [44]. Identifying these beneficial vibrations and amplifying their effect via chemical design of new SMMs remains an open question, whose solution we believe could greatly benefit from the results and methods introduced in this work.
ACKNOWLEDGEMENTS

This work was made possible thanks to the ERC grant 2019-STG-851504 and Royal Society fellowship URF191320. The authors also acknowledge support from the Computational Shared Facility at the University of Manchester.
DATA AVAILABILITY

The data that support the findings of this study are available at http://doi.org/10.48420/21892887.
[1] M. N. Leuenberger and D. Loss, Quantum computing in molecular magnets, Nature 410, 789 (2001).
[2] R. Sessoli, Magnetic molecules back in the race, Nature 548, 400 (2017).
[3] E. Coronado, Molecular magnetism: from chemical design to spin control in molecules, materials and devices, Nature Reviews Materials 5, 87 (2020).
[4] N. F. Chilton, Molecular magnetism, Annual Review of Materials Research 52, 79 (2022).
[5] R. Sessoli, D. Gatteschi, A. Caneschi, and M. A. Novak, Magnetic bistability in a metal-ion cluster, Nature 365, 141 (1993).
[6] C. A. P. Goodwin, F. Ortu, D. Reta, N. F. Chilton, and D. P. Mills, Molecular magnetic hysteresis at 60 kelvin in dysprosocenium, Nature 548, 439 (2017).
[7] F.-S. Guo, B. M. Day, Y.-C. Chen, M.-L. Tong, A. Mansikkamäki, and R. A. Layfield, Magnetic hysteresis up to 80 kelvin in a dysprosium metallocene single-molecule magnet, Science 362, 1400 (2018).
[8] C. A. Gould, K. R. McClain, D. Reta, J. G. C. Kragskow, D. A. Marchiori, E. Lachman, E.-S. Choi, J. G. Analytis, R. D. Britt, N. F. Chilton, B. G. Harvey, and J. R. Long, Ultrahard magnetism from mixed-valence dilanthanide complexes with metal-metal bonding, Science 375, 198 (2022).
[9] D. Gatteschi, R. Sessoli, and J. Villain, Molecular Nanomagnets (Oxford University Press, 2006).
[10] L. Thomas, F. Lionti, R. Ballou, D. Gatteschi, R. Sessoli, and B. Barbara, Macroscopic quantum tunnelling of magnetization in a single crystal of nanomagnets, Nature 383, 145 (1996).
[11] D. A. Garanin and E. M. Chudnovsky, Thermally activated resonant magnetization tunneling in molecular magnets: Mn12ac and others, Phys. Rev. B 56, 11102 (1997).
[12] D. Reta, J. G. C. Kragskow, and N. F. Chilton, Ab initio prediction of high-temperature magnetic relaxation rates in single-molecule magnets, Journal of the American Chemical Society 143, 5943 (2021).
[13] M. Briganti, F. Santanni, L. Tesi, F. Totti, R. Sessoli, and A. Lunghi, A complete ab initio view of Orbach and Raman spin-lattice relaxation in a dysprosium coordination compound, Journal of the American Chemical Society 143, 13633 (2021).
[14] J. K. Staab and N. F. Chilton, Analytic linear vibronic coupling method for first-principles spin-dynamics calculations in single-molecule magnets, Journal of Chemical Theory and Computation (2022), 10.1021/acs.jctc.2c00611.
[15] A. Lunghi, Toward exact predictions of spin-phonon relaxation times: An ab initio implementation of open quantum systems theory, Science Advances 8, eabn7880 (2022).
[16] K. Irländer and J. Schnack, Spin-phonon interaction induces tunnel splitting in single-molecule magnets, Phys. Rev. B 102, 054407 (2020).
[17] K. Irländer, H.-J. Schmidt, and J. Schnack, Supersymmetric spin-phonon coupling prevents odd integer spins from quantum tunneling, The European Physical Journal B 94, 68 (2021).
[18] H. A. Kramers, Théorie générale de la rotation paramagnétique dans les cristaux, Proceedings Royal Acad. Amsterdam 33, 959 (1930).
[19] N. Ishikawa, M. Sugita, and W. Wernsdorfer, Quantum tunneling of magnetization in lanthanide single-molecule magnets: Bis(phthalocyaninato)terbium and bis(phthalocyaninato)dysprosium anions, Angewandte Chemie International Edition 44, 2931 (2005).
[20] E. Moreno-Pineda, M. Damjanović, O. Fuhr, W. Wernsdorfer, and M. Ruben, Nuclear spin isomers: Engineering a Et4N[DyPc2] spin qudit, Angewandte Chemie International Edition 56, 9915 (2017).
[21] N. F. Chilton, S. K. Langley, B. Moubaraki, A. Soncini, S. R. Batten, and K. S. Murray, Single molecule magnetism in a family of mononuclear β-diketonate lanthanide(III) complexes: rationalization of magnetic anisotropy in complexes of low symmetry, Chem. Sci. 4, 1719 (2013).
[22] F. Ortu, D. Reta, Y.-S. Ding, C. A. P. Goodwin, M. P. Gregson, E. J. L. McInnes, R. E. P. Winpenny, Y.-Z. Zheng, S. T. Liddle, D. P. Mills, and N. F. Chilton, Studies of hysteresis and quantum tunnelling of the magnetisation in dysprosium(III) single molecule magnets, Dalton Trans. 48, 8541 (2019).
[23] F. Pointillart, K. Bernot, S. Golhen, B. Le Guennic, T. Guizouarn, L. Ouahab, and O. Cador, Magnetic memory in an isotopically enriched and magnetically isolated mononuclear dysprosium complex, Angewandte Chemie International Edition 54, 1504 (2015).
[24] Y. Kishi, F. Pointillart, B. Lefeuvre, F. Riobé, B. Le Guennic, S. Golhen, O. Cador, O. Maury, H. Fujiwara, and L. Ouahab, Isotopically enriched polymorphs of dysprosium single molecule magnets, Chem. Commun. 53, 3575 (2017).
[25] J. Flores Gonzalez, F. Pointillart, and O. Cador, Hyperfine coupling and slow magnetic relaxation in isotopically enriched DyIII mononuclear single-molecule magnets, Inorg. Chem. Front. 6, 1081 (2019).
[26] E. Moreno-Pineda, G. Taran, W. Wernsdorfer, and M. Ruben, Quantum tunnelling of the magnetisation in single-molecule magnet isotopologue dimers, Chem. Sci. 10, 5138 (2019).
[27] R. Silbey and R. A. Harris, Variational calculation of the dynamics of a two level system interacting with a bath, The Journal of Chemical Physics 80, 2615 (1984).
[28] D. R. Yakovlev and W. Ossau, Magnetic polarons, in Introduction to the Physics of Diluted Magnetic Semiconductors, edited by J. A. Gaj and J. Kossut (Springer Berlin Heidelberg, Berlin, Heidelberg, 2010) pp. 221–262.
[29] S. Schott, U. Chopra, V. Lemaur, A. Melnyk, Y. Olivier, R. Di Pietro, I. Romanov, R. L. Carey, X. Jiao, C. Jellett, M. Little, A. Marks, C. R. McNeill, I. McCulloch, E. R. McNellis, D. Andrienko, D. Beljonne, J. Sinova, and H. Sirringhaus, Polaron spin dynamics in high-mobility polymeric semiconductors, Nature Physics 15, 814 (2019).
[30] F. Godejohann, A. V. Scherbakov, S. M. Kukhtaruk, A. N. Poddubny, D. D. Yaremkevich, M. Wang, A. Nadzeyka, D. R. Yakovlev, A. W. Rushforth, A. V. Akimov, and M. Bayer, Magnon polaron formed by selectively coupled coherent magnon and phonon modes of a surface patterned ferromagnet, Phys. Rev. B 102, 144438 (2020).
[31] A. W. Chin, J. Prior, S. F. Huelga, and M. B. Plenio, Generalized polaron ansatz for the ground state of the sub-ohmic spin-boson model: An analytic theory of the localization transition, Phys. Rev. Lett. 107, 160601 (2011).
[32] L. Yang, M. Devi, and S. Jang, Polaronic quantum master equation theory of inelastic and coherent resonance energy transfer for soft systems, The Journal of Chemical Physics 137, 024101 (2012).
[33] A. Kolli, A. Nazir, and A. Olaya-Castro, Electronic excitation dynamics in multichromophoric systems described via a polaron-representation master equation, The Journal of Chemical Physics 135, 154112 (2011).
[34] F. A. Pollock, D. P. S. McCutcheon, B. W. Lovett, E. M. Gauger, and A. Nazir, A multi-site variational master equation approach to dissipative energy transfer, New Journal of Physics 15, 075018 (2013).
[35] I. Wilson-Rae and A. Imamoğlu, Quantum dot cavity-QED in the presence of strong electron-phonon interactions, Phys. Rev. B 65, 235311 (2002).
[36] D. P. S. McCutcheon and A. Nazir, Quantum dot Rabi rotations beyond the weak exciton-phonon coupling regime, New Journal of Physics 12, 113042 (2010).
[37] A. Nazir and D. P. S. McCutcheon, Modelling exciton–phonon interactions in optically driven quantum dots, Journal of Physics: Condensed Matter 28, 103002 (2016).
[38] L. D. Landau, Zur Theorie der Energieübertragung, Phys. Z. Sowjetunion 1, 88 (1932).
[39] L. D. Landau, Zur Theorie der Energieübertragung II, Phys. Z. Sowjetunion 2, 46 (1932).
[40] C. Zener and R. H. Fowler, Non-adiabatic crossing of energy levels, Proceedings of the Royal Society of London. Series A 137, 696 (1932).
[41] E. C. G. Stückelberg, Theorie der unelastischen Stösse zwischen Atomen, Helv. Phys. Acta 5, 369 (1932).
[42] E. Majorana, Atomi orientati in campo magnetico variabile, Il Nuovo Cimento 9, 43 (1932).
[43] O. V. Ivakhnenko, S. N. Shevchenko, and F. Nori, Nonadiabatic Landau-Zener-Stückelberg-Majorana transitions, dynamics, and interference, Physics Reports 995, 1 (2023).
[44] G. D. Scholes, G. R. Fleming, L. X. Chen, A. Aspuru-Guzik, A. Buchleitner, D. F. Coker, G. S. Engel, R. van Grondelle, A. Ishizaki, D. M. Jonas, J. S. Lundeen, J. K. McCusker, S. Mukamel, J. P. Ogilvie, A. Olaya-Castro, M. A. Ratner, F. C. Spano, K. B. Whaley, and X. Zhu, Using coherence to enhance function in chemical and biophysical systems, Nature 543, 647 (2017).
Supplementary Information:
Vibronic Effects on the Quantum Tunnelling of Magnetisation in Single-Molecule Magnets

Andrea Mattioni,1,∗ Jakob K. Staab,1 William J. A. Blackmore,1 Daniel Reta,1,2 Jake Iles-Smith,3,4 Ahsan Nazir,3 and Nicholas F. Chilton1,†

1 Department of Chemistry, School of Natural Sciences, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK
2 Faculty of Chemistry, UPV/EHU & Donostia International Physics Center DIPC, Ikerbasque, Basque Foundation for Science, Bilbao, Spain
3 Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
4 Department of Electrical and Electronic Engineering, School of Engineering, The University of Manchester, Sackville Street Building, Manchester M1 3BB, UK
S1. AB INITIO CALCULATIONS
The ab initio model of the DCM-solvated [Dy(Cpttt)2]+ molecule is constructed using a multi-layer approach. During geometry optimisation and frequency calculations the system is partitioned into two layers following the ONIOM scheme [1]. The high-level layer, consisting of the SMM itself and the first solvation shell of 26 DCM molecules, is described by Density Functional Theory (DFT), while the outer bulk of the DCM ball constitutes the low-level layer, modelled by the semi-empirical PM6 method. All DFT calculations are carried out using the pure PBE exchange-correlation functional [2] with Grimme's D3 dispersion correction. Dysprosium is replaced by its diamagnetic analogue yttrium, for which the Stuttgart RSC 1997 ECP basis is employed [3]. Cp ring carbons directly coordinated to the central ion are equipped with Dunning's correlation-consistent triple-zeta polarised cc-pVTZ basis set, and all remaining atoms with its double-zeta analogue cc-pVDZ [4]. Subsequently, the electronic spin states and spin-phonon coupling parameters are calculated at the CASSCF-SO level, explicitly accounting for the strong static correlation present in the f-shell of Dy(III) ions. At this level, environmental effects are treated using an electrostatic point-charge representation of all DCM atoms. All DFT/PM6 calculations are carried out with GAUSSIAN 09 revision D.01 [5] and the CASSCF calculations with OPENMOLCAS version 21.06 [6].

The starting [Dy(Cpttt)2]+ solvated system was obtained using the solvate program from the AmberTools suite of packages, with box as method and CHCL3BOX as solvent model. Chloroform molecules were subsequently converted to DCM. From this large system, only molecules falling within 9 Å of the central metal atom are considered from now on. The initial disordered system of 160 DCM molecules packed around the [Dy(Cpttt)2]+ crystal structure [7] is pre-optimised in steps, starting by optimising only the high-level layer atoms and freezing the rest of the system. The low-level layer atoms are pre-optimised along the same lines, starting with the DCM molecules closest to the SMM and working in shells towards the outside. Subsequently, the whole system is geometry optimised until maximum (RMS) values in force and displacement of 0.00045 au (0.0003 au) and 0.0018 au (0.0012 au) are reached, respectively. After adjusting the isotopic mass of yttrium to that of dysprosium, mDy = 162.5 u, vibrational normal modes and frequencies of the entire molecular aggregate are computed within the harmonic approximation.
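The four-fold convergence test described above can be sketched as a small helper; this is a minimal illustration assuming Gaussian-style criteria where all four thresholds must be satisfied simultaneously (the function and dictionary names are ours, not part of any quantum-chemistry package):

```python
import numpy as np

# Geometry-optimisation convergence thresholds, in atomic units
THRESHOLDS = {"max_force": 0.00045, "rms_force": 0.0003,
              "max_disp": 0.0018, "rms_disp": 0.0012}

def converged(forces, displacements):
    """Return True when all four criteria are met on the latest step."""
    f = np.abs(np.asarray(forces, dtype=float))
    d = np.abs(np.asarray(displacements, dtype=float))
    return bool(f.max() <= THRESHOLDS["max_force"]
                and np.sqrt(np.mean(f**2)) <= THRESHOLDS["rms_force"]
                and d.max() <= THRESHOLDS["max_disp"]
                and np.sqrt(np.mean(d**2)) <= THRESHOLDS["rms_disp"])
```

A step with tiny residual forces and displacements passes, while a single large force component fails the whole test.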
Electrostatic atomic point-charge representations of the environment DCM molecules are evaluated for each isolated solvent molecule independently at the DFT level of theory, employing the CHarges from ELectrostatic Potentials using a Grid-based method (ChelpG) [8]; these serve as a classical model of environmental effects in the subsequent CASSCF calculations.

The evaluation of equilibrium electronic states and spin-phonon coupling parameters is carried out at the CASSCF level, including scalar relativistic effects using the second-order Douglas-Kroll Hamiltonian and spin-orbit coupling through the atomic mean-field approximation implemented in the restricted active space state interaction approach [9, 10]. The dysprosium atom is equipped with the ANO-RCC-VTZP basis set, the Cp ring carbons with ANO-RCC-VDZP and the remaining atoms with ANO-RCC-VDZ [11]. The resolution-of-the-identity approximation with an on-the-fly acCD auxiliary basis is employed to handle the two-electron integrals [12]. An active space of 9 electrons in 7 orbitals, spanned by the 4f atomic orbitals, is employed in a state-average CASSCF calculation including the 18 lowest-lying sextet roots, which span the $^6$H and $^6$F atomic terms.
We use our own implementation of spin Hamiltonian parameter projection to obtain the crystal field parameters $B_k^q$ entering the Hamiltonian

$$\hat{H}_\mathrm{CF} = \sum_{k=2,4,6} \sum_{q=-k}^{k} \theta_k B_k^q \hat{O}_k^q(\hat{\mathbf{J}}), \tag{S1}$$

describing the $^6H_{15/2}$ ground state multiplet. Operator equivalent factors and Stevens operators are denoted by $\theta_k$ and $\hat{O}_k^q(\hat{\mathbf{J}})$, where $\hat{\mathbf{J}} = (\hat{J}_x, \hat{J}_y, \hat{J}_z)$ are the angular momentum components. Spin-phonon coupling arises from changes to the Hamiltonian (S1) due to slight distortions of the molecular geometry, parametrised as

$$B_k^q(\{X_j\}) = B_k^q + \sum_{j=1}^{M} \frac{\partial B_k^q}{\partial X_j} X_j + \ldots, \tag{S2}$$

where $X_j$ denotes the dimensionless $j$-th normal coordinate of the complex under consideration. The derivatives $\partial B_k^q/\partial X_j$ are calculated using the Linear Vibronic Coupling (LVC) approach described in Ref. [13], based on the state-average CASSCF density-fitting gradients and non-adiabatic couplings involving all 18 sextet roots.

The final step leading to Eq. (1) in the main text is to quantise the normal modes and express them in terms of bosonic annihilation and creation operators satisfying $[\hat{b}_i, \hat{b}_j^\dagger] = \delta_{ij}$ as

$$\hat{X}_j = \frac{\hat{b}_j + \hat{b}_j^\dagger}{\sqrt{2}}. \tag{S3}$$

Defining the spin-phonon coupling operators

$$\hat{V}_j = \frac{1}{\sqrt{2}} \sum_{k,q} \theta_k \frac{\partial B_k^q}{\partial X_j} \hat{O}_k^q(\hat{\mathbf{J}}), \tag{S4}$$

we can finally write down the crystal field Hamiltonian including linear spin-phonon coupling as

$$\hat{H} = \hat{H}_\mathrm{CF} + \sum_j \hat{V}_j \otimes (\hat{b}_j + \hat{b}_j^\dagger) + \sum_j \omega_j \hat{b}_j^\dagger \hat{b}_j. \tag{S5}$$
S2. DERIVATION OF THE EFFECTIVE VIBRONIC DOUBLET HAMILTONIAN

A. Electronic perturbation theory
The starting point for our analysis of vibronic effects on QTM is the vibronic Hamiltonian

$$\hat{H} = \sum_{m>0} E_m \left(|m\rangle\langle m| + |\bar{m}\rangle\langle\bar{m}|\right) + \hat{H}_\mathrm{Zee} + \sum_j \hat{V}_j \otimes (\hat{b}_j + \hat{b}_j^\dagger) + \sum_j \omega_j \hat{b}_j^\dagger \hat{b}_j, \tag{S6}$$

where $\hat{H}_\mathrm{Zee} = \mu_B g_J \mathbf{B}\cdot\hat{\mathbf{J}}$ is the Zeeman interaction with a magnetic field $\mathbf{B}$. The doubly degenerate eigenstates of the crystal field Hamiltonian $\hat{H}_\mathrm{CF} = \sum_{m>0} E_m (|m\rangle\langle m| + |\bar{m}\rangle\langle\bar{m}|)$ are related by time-reversal symmetry, i.e. $\hat{\Theta}|m\rangle \propto |\bar{m}\rangle$ with $\hat{\Theta}^2|m\rangle = -|m\rangle$, where $\hat{\Theta}$ is the time-reversal operator. In the case of [Dy(Cpttt)2]+, the total electronic angular momentum is $J = 15/2$, leading to $2J+1 = 16$ electronic states. We label these states in ascending energy with integers $m = \pm 1, \ldots, \pm 8$, using the compact notation $|-m\rangle = |\bar{m}\rangle$.
We momentarily neglect the spin-phonon coupling and focus on the purely electronic Hamiltonian $\hat{H}_\mathrm{el} = \hat{H}_\mathrm{CF} + \hat{H}_\mathrm{Zee}$. Within each degenerate subspace, the Zeeman term selects a specific electronic basis and lifts its degeneracy. This can be seen by projecting the electronic Hamiltonian onto the $m$-th subspace and diagonalising the $2\times 2$ matrix

$$H_\mathrm{el}^{(m)} = E_m + \mu_B g_J \begin{pmatrix} \langle m|\mathbf{B}\cdot\hat{\mathbf{J}}|m\rangle & \langle m|\mathbf{B}\cdot\hat{\mathbf{J}}|\bar{m}\rangle \\ \langle\bar{m}|\mathbf{B}\cdot\hat{\mathbf{J}}|m\rangle & \langle\bar{m}|\mathbf{B}\cdot\hat{\mathbf{J}}|\bar{m}\rangle \end{pmatrix}. \tag{S7}$$

For each individual cartesian component of the angular momentum, we decompose the corresponding $2\times 2$ matrix in terms of Pauli spin operators, which allows us to rewrite the Hamiltonian of the $m$-th doublet as $H_\mathrm{el}^{(m)} = E_m + \mu_B \mathbf{B}\cdot g_\mathrm{el}^{(m)}\cdot\boldsymbol{\sigma}^{(m)}/2$, where

$$g_\mathrm{el}^{(m)} = 2 g_J \begin{pmatrix} \Re\langle\bar{m}|\hat{J}_x|m\rangle & \Im\langle\bar{m}|\hat{J}_x|m\rangle & \langle m|\hat{J}_x|m\rangle \\ \Re\langle\bar{m}|\hat{J}_y|m\rangle & \Im\langle\bar{m}|\hat{J}_y|m\rangle & \langle m|\hat{J}_y|m\rangle \\ \Re\langle\bar{m}|\hat{J}_z|m\rangle & \Im\langle\bar{m}|\hat{J}_z|m\rangle & \langle m|\hat{J}_z|m\rangle \end{pmatrix} \tag{S8}$$
is the g-matrix for an effective spin 1/2 and $\boldsymbol{\sigma}^{(m)} = (\sigma_x^{(m)}, \sigma_y^{(m)}, \sigma_z^{(m)})$, with $\sigma_z^{(m)} = |m\rangle\langle m| - |\bar{m}\rangle\langle\bar{m}|$. We note that in general the g-matrix in Eq. (S8) is not hermitian, but it can be brought to such a form by transforming the spin operators $\boldsymbol{\sigma}^{(m)}$ to an appropriate basis [14]. An easier prescription to find the hermitian form of any g-matrix $g$ is to redefine it as $\sqrt{g g^\dagger}$.
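The prescription $\sqrt{g g^\dagger}$ is easy to realise numerically, since $g g^\dagger$ is hermitian and positive semidefinite; the eigenvalues of the resulting hermitian matrix are the singular values of $g$, i.e. the principal g-values. A minimal sketch (the example matrix is purely illustrative, not taken from the ab initio data):

```python
import numpy as np

def hermitian_g(g):
    """Hermitian equivalent sqrt(g g^dagger) of a (generally non-hermitian) g-matrix."""
    # g g^dagger is hermitian positive semidefinite, so eigh applies
    w, v = np.linalg.eigh(g @ g.conj().T)
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

# Illustrative non-hermitian, strongly axial g-matrix (hypothetical values)
g = np.array([[0.10, 0.02, 0.00],
              [-0.02, 0.10, 0.00],
              [0.00, 0.00, 19.8]])
gh = hermitian_g(g)
```

By construction `gh` is hermitian and shares its principal values with `g`.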
To lowest order in the magnetic field, the Zeeman interaction lifts the two-fold degeneracy by selecting the basis

$$|m_+\rangle = \cos\tfrac{\theta_m}{2}\,|m\rangle + e^{i\varphi_m}\sin\tfrac{\theta_m}{2}\,|\bar{m}\rangle \tag{S9}$$

$$|m_-\rangle = -\sin\tfrac{\theta_m}{2}\,|m\rangle + e^{i\varphi_m}\cos\tfrac{\theta_m}{2}\,|\bar{m}\rangle \tag{S10}$$

and shifting the energies according to $E_{m,\pm} = E_m \pm \Delta_m/2$, where the gap

$$\Delta_m = \langle m_+|\hat{H}_\mathrm{Zee}|m_+\rangle - \langle m_-|\hat{H}_\mathrm{Zee}|m_-\rangle = 2\mu_B g_J \sqrt{\langle m|\mathbf{B}\cdot\hat{\mathbf{J}}|m\rangle^2 + |\langle m|\mathbf{B}\cdot\hat{\mathbf{J}}|\bar{m}\rangle|^2} \tag{S11}$$

can be obtained as the norm of the vector $\mathbf{j}_m = \mu_B \mathbf{B}\cdot g_\mathrm{el}^{(m)}$, and the phase and mixing angles are defined as

$$e^{i\varphi_m} = \frac{\langle\bar{m}|\mathbf{B}\cdot\hat{\mathbf{J}}|m\rangle}{|\langle\bar{m}|\mathbf{B}\cdot\hat{\mathbf{J}}|m\rangle|}, \qquad \tan\theta_m = \frac{|\langle\bar{m}|\mathbf{B}\cdot\hat{\mathbf{J}}|m\rangle|}{\langle m|\mathbf{B}\cdot\hat{\mathbf{J}}|m\rangle}, \tag{S12}$$

or equivalently as the azimuthal and polar angles determining the direction of $\mathbf{j}_m$.
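Eqs. (S11)-(S12) require only the four matrix elements of $\mathbf{B}\cdot\hat{\mathbf{J}}$ within the doublet. A minimal numerical sketch (constants in cm$^{-1}$/T; the perfectly axial matrix elements used in the example are hypothetical, chosen to mimic a $|\pm 15/2\rangle$ doublet):

```python
import numpy as np

MU_B = 0.46686    # Bohr magneton in cm^-1 / T
G_J = 4.0 / 3.0   # Lande g-factor of the 6H15/2 term of Dy(III)

def doublet_gap(B, J_diag, J_offdiag):
    """Zeeman gap (Eq. S11) and mixing angles (Eq. S12) of a Kramers doublet.

    J_diag[k]    = <m|J_k|m>     for k = x, y, z (real)
    J_offdiag[k] = <mbar|J_k|m>  (complex in general)
    """
    B = np.asarray(B, dtype=float)
    d = float(np.dot(B, J_diag))        # <m|B.J|m>
    o = complex(np.dot(B, J_offdiag))   # <mbar|B.J|m>
    gap = 2 * MU_B * G_J * np.hypot(d, abs(o))
    theta = np.arctan2(abs(o), d)
    phi = np.angle(o)
    return gap, theta, phi

# Perfectly axial doublet, field of 0.1 T along z (illustrative values)
gap, theta, phi = doublet_gap([0.0, 0.0, 0.1], [0.0, 0.0, 7.5], [0.0, 0.0, 0.0])
```

For a vanishing off-diagonal element the mixing angle is zero and the gap reduces to the familiar $2\mu_B g_J B_z \langle J_z\rangle$.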
Besides selecting a preferred basis and lifting the degeneracy of each doublet, the Zeeman interaction also causes mixing between different doublets. In particular, the lowest doublet will change according to

$$|1'_\pm\rangle = |1_\pm\rangle + \sum_{m\neq 1,\bar{1}} \frac{|m\rangle\langle m|\hat{H}_\mathrm{Zee}|1_\pm\rangle}{E_1 - E_m} + O(B^2) \approx \left(1 - \hat{Q}_1 \hat{H}_\mathrm{Zee}\right)|1_\pm\rangle, \tag{S13}$$

with

$$\hat{Q}_1 = \sum_{m\neq 1,\bar{1}} |m\rangle \frac{1}{E_m - E_1} \langle m|. \tag{S14}$$
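In the crystal-field eigenbasis, Eqs. (S13)-(S14) amount to a single matrix-vector operation. A small self-contained sketch (the four-state example, with one excited doublet at energy D coupled by a real Zeeman matrix element, is hypothetical):

```python
import numpy as np

def perturbed_ground_doublet(E, H_zee):
    """First-order admixture (1 - Q1 Hzee)|1+-> of Eqs. (S13)-(S14).

    E: energies of the crystal-field states (states 0 and 1 span the ground doublet);
    H_zee: Zeeman matrix in the same basis.
    """
    n = len(E)
    Q1 = np.zeros((n, n))
    for m in range(2, n):
        Q1[m, m] = 1.0 / (E[m] - E[0])   # Eq. (S14)
    P = np.eye(n) - Q1 @ H_zee           # acts as (1 - Q1 Hzee)
    return P[:, 0], P[:, 1]              # perturbed |1+>, |1->

# One excited doublet at D = 100 cm^-1, coupled to |1+> with strength 0.1 (illustrative)
D = 100.0
H_zee = np.zeros((4, 4))
H_zee[2, 0] = H_zee[0, 2] = 0.1
s_plus, s_minus = perturbed_ground_doublet([0.0, 0.0, D, D], H_zee)
```

The admixed amplitude on the excited state is $-0.1/D$, as expected from first-order perturbation theory.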
B. Spin-boson Hamiltonian for the ground doublet
Now that we have an approximate expression for the relevant electronic states, we reintroduce the spin-phonon coupling into the picture. First, we project the vibronic Hamiltonian (S6) onto the subspace spanned by $|1'_\pm\rangle$, yielding

$$\hat{H}_\mathrm{eff} = E_1 + \begin{pmatrix} \frac{\Delta_1}{2} & 0 \\ 0 & -\frac{\Delta_1}{2} \end{pmatrix} + \sum_j \begin{pmatrix} \langle 1'_+|\hat{V}_j|1'_+\rangle & \langle 1'_+|\hat{V}_j|1'_-\rangle \\ \langle 1'_-|\hat{V}_j|1'_+\rangle & \langle 1'_-|\hat{V}_j|1'_-\rangle \end{pmatrix} \otimes (\hat{b}_j + \hat{b}_j^\dagger) + \sum_j \omega_j \hat{b}_j^\dagger \hat{b}_j. \tag{S15}$$
On this basis, the purely electronic part $\hat{H}_\mathrm{CF} + \hat{H}_\mathrm{Zee}$ is diagonal with eigenvalues $E_1 \pm \Delta_1/2$, and the purely vibrational part is trivially unaffected. On the other hand, the spin-phonon couplings can be calculated to lowest order in the magnetic field strength $B$ as

$$\begin{aligned} \langle 1'_\pm|\hat{V}_j|1'_\pm\rangle &= \langle 1_\pm|\left(1 - \hat{H}_\mathrm{Zee}\hat{Q}_1\right)\hat{V}_j\left(1 - \hat{Q}_1\hat{H}_\mathrm{Zee}\right)|1_\pm\rangle + O(B^2) \\ &= \langle 1_\pm|\hat{V}_j|1_\pm\rangle - \langle 1_\pm|\left(\hat{V}_j\hat{Q}_1\hat{H}_\mathrm{Zee} + \hat{H}_\mathrm{Zee}\hat{Q}_1\hat{V}_j\right)|1_\pm\rangle + O(B^2) \\ &= \langle 1|\hat{V}_j|1\rangle - \langle 1_\pm|\hat{W}_j|1_\pm\rangle + O(B^2), \end{aligned} \tag{S16}$$

$$\begin{aligned} \langle 1'_\mp|\hat{V}_j|1'_\pm\rangle &= \langle 1_\mp|\left(1 - \hat{H}_\mathrm{Zee}\hat{Q}_1\right)\hat{V}_j\left(1 - \hat{Q}_1\hat{H}_\mathrm{Zee}\right)|1_\pm\rangle + O(B^2) \\ &= \langle 1_\mp|\hat{V}_j|1_\pm\rangle - \langle 1_\mp|\left(\hat{V}_j\hat{Q}_1\hat{H}_\mathrm{Zee} + \hat{H}_\mathrm{Zee}\hat{Q}_1\hat{V}_j\right)|1_\pm\rangle + O(B^2) \\ &= -\langle 1_\mp|\hat{W}_j|1_\pm\rangle + O(B^2), \end{aligned} \tag{S17}$$

where we have defined

$$\hat{W}_j = \hat{V}_j\hat{Q}_1\hat{H}_\mathrm{Zee} + \hat{H}_\mathrm{Zee}\hat{Q}_1\hat{V}_j \tag{S18}$$

and used the time-reversal invariance of the spin-phonon coupling operators to obtain $\langle 1_\pm|\hat{V}_j|1_\pm\rangle = \langle 1|\hat{V}_j|1\rangle$ and $\langle 1_\mp|\hat{V}_j|1_\pm\rangle = 0$. The two states $|1_\pm\rangle$ form a conjugate pair under time reversal, meaning that $\hat{\Theta}|1_\pm\rangle = \mp e^{i\alpha}|1_\mp\rangle$ for some $\alpha \in \mathbb{R}$. Using the fact that for any two states $\psi$, $\phi$, and for any operator $\hat{O}$, we have $\langle\psi|\hat{O}|\phi\rangle = \langle\hat{\Theta}\phi|\hat{\Theta}\hat{O}^\dagger\hat{\Theta}^{-1}|\hat{\Theta}\psi\rangle$, and recalling that the angular momentum operator is odd under time reversal, i.e. $\hat{\Theta}\hat{\mathbf{J}}\hat{\Theta}^{-1} = -\hat{\mathbf{J}}$, we can show that

$$\langle 1_-|\hat{W}_j|1_-\rangle = \langle\hat{\Theta}1_-|\hat{\Theta}\hat{W}_j\hat{\Theta}^{-1}|\hat{\Theta}1_-\rangle = -\langle 1_+|\hat{W}_j|1_+\rangle.$$
Keeping in mind these observations, and defining the vector

$$\mathbf{w}_j = \begin{pmatrix} w_j^x \\ w_j^y \\ w_j^z \end{pmatrix} = \begin{pmatrix} \Re\,\langle 1_-|\hat{W}_j|1_+\rangle \\ \Im\,\langle 1_-|\hat{W}_j|1_+\rangle \\ \langle 1_+|\hat{W}_j|1_+\rangle \end{pmatrix}, \tag{S19}$$

we can rewrite the spin-phonon coupling operators in Eq. (S15) as

$$\begin{pmatrix} \langle 1'_+|\hat{V}_j|1'_+\rangle & \langle 1'_+|\hat{V}_j|1'_-\rangle \\ \langle 1'_-|\hat{V}_j|1'_+\rangle & \langle 1'_-|\hat{V}_j|1'_-\rangle \end{pmatrix} = \langle 1|\hat{V}_j|1\rangle - \begin{pmatrix} \langle 1_+|\hat{W}_j|1_+\rangle & \langle 1_-|\hat{W}_j|1_+\rangle^* \\ \langle 1_-|\hat{W}_j|1_+\rangle & -\langle 1_+|\hat{W}_j|1_+\rangle \end{pmatrix} = \langle 1|\hat{V}_j|1\rangle - \mathbf{w}_j\cdot\boldsymbol{\sigma}', \tag{S20}$$
where $\boldsymbol{\sigma}'$ is a vector whose entries are the Pauli matrices in the basis $|1'_\pm\rangle$, i.e. $\sigma'_z = |1'_+\rangle\langle 1'_+| - |1'_-\rangle\langle 1'_-|$. Plugging this back into Eq. (S15) and explicitly singling out the diagonal components of $\hat{H}_\mathrm{eff}$ in the basis $|1'_\pm\rangle$, we obtain

$$\begin{aligned} \hat{H}_\mathrm{eff} &= |1'_+\rangle\langle 1'_+|\left[ E_1 + \frac{\Delta_1}{2} + \sum_j \left(\langle 1|\hat{V}_j|1\rangle - w_j^z\right)\left(\hat{b}_j + \hat{b}_j^\dagger\right) + \sum_j \omega_j \hat{b}_j^\dagger \hat{b}_j \right] \\ &\quad + |1'_-\rangle\langle 1'_-|\left[ E_1 - \frac{\Delta_1}{2} + \sum_j \left(\langle 1|\hat{V}_j|1\rangle + w_j^z\right)\left(\hat{b}_j + \hat{b}_j^\dagger\right) + \sum_j \omega_j \hat{b}_j^\dagger \hat{b}_j \right] \\ &\quad - \sum_j \left(w_j^x \sigma'_x + w_j^y \sigma'_y\right)\left(\hat{b}_j + \hat{b}_j^\dagger\right). \end{aligned} \tag{S21}$$
At this point, we apply a unitary polaron transformation to the Hamiltonian (S21),

$$\hat{S} = \exp\left[\sum_{s=\pm} |1'_s\rangle\langle 1'_s| \sum_j \frac{1}{\omega_j}\left(\langle 1|\hat{V}_j|1\rangle - s\, w_j^z\right)\left(\hat{b}_j^\dagger - \hat{b}_j\right)\right] = \sum_{s=\pm} |1'_s\rangle\langle 1'_s| \prod_j \hat{D}_j(\xi_j^s), \tag{S22}$$

where $\xi_j^s = \left(\langle 1|\hat{V}_j|1\rangle - s\, w_j^z\right)/\omega_j$ and

$$\hat{D}_j(\xi_j^s) = e^{\xi_j^s\left(\hat{b}_j^\dagger - \hat{b}_j\right)} \tag{S23}$$

is the bosonic displacement operator acting on mode $j$, i.e. $\hat{D}_j(\xi)\hat{b}_j\hat{D}_j^\dagger(\xi) = \hat{b}_j - \xi$. The Hamiltonian thus becomes
$$\hat{S}\hat{H}_\mathrm{eff}\hat{S}^\dagger = \sum_{s=\pm} |1'_s\rangle\langle 1'_s|\left[ E_1 + s\frac{\Delta_1}{2} - \sum_j \omega_j |\xi_j^s|^2 \right] + \sum_j \omega_j \hat{b}_j^\dagger \hat{b}_j - \sum_j \hat{S}\left(w_j^x \sigma'_x + w_j^y \sigma'_y\right)\left(\hat{b}_j + \hat{b}_j^\dagger\right)\hat{S}^\dagger. \tag{S24}$$
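The displacement identity $\hat{D}(\xi)\hat{b}\hat{D}^\dagger(\xi) = \hat{b} - \xi$ underlying this transformation can be checked numerically in a truncated Fock space; the truncation size and displacement below are arbitrary choices, and the identity only holds away from the truncation edge:

```python
import numpy as np

N = 40  # Fock-space truncation
b = np.diag(np.sqrt(np.arange(1.0, N)), k=1)  # annihilation operator
bd = b.conj().T

def displacement(xi):
    """D(xi) = exp(xi (b^dag - b)), via the hermitian generator i(b^dag - b)."""
    w, v = np.linalg.eigh(1j * (bd - b))
    return (v * np.exp(-1j * xi * w)) @ v.conj().T

xi = 0.3
D = displacement(xi)
shifted = D @ b @ D.conj().T
target = b - xi * np.eye(N)
# compare only the low-lying block, far from the truncation edge
err = np.max(np.abs((shifted - target)[:N // 2, :N // 2]))
```

Within the low-lying block the identity is satisfied to numerical precision.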
The polaron transformation reabsorbs the diagonal component of the spin-phonon coupling (S20), proportional to $w_j^z$, into the energy shifts $\omega_j|\xi_j^\pm|^2$, leaving a residual off-diagonal spin-phonon coupling proportional to $w_j^x$ and $w_j^y$. Note that the polaron transformation exactly diagonalises the Hamiltonian (S15) if $w_j^x = w_j^y = 0$. In Section S3, we argue in detail that in our case $|w_j^x|, |w_j^y| \ll |w_j^z|$ to a very good approximation. Based on this argument, we could decide to neglect the residual spin-phonon coupling in the polaron frame. The energies of the states belonging to the lowest doublet are shifted by a vibronic correction

$$E_{1'_\pm} = E_1 \pm \frac{\Delta_1}{2} - \sum_j \frac{1}{\omega_j}\left(\langle 1|\hat{V}_j|1\rangle \mp w_j^z\right)^2 \tag{S25}$$

$$= E_1 \pm \frac{\Delta_1}{2} - \sum_j \frac{1}{\omega_j}\left(\langle 1|\hat{V}_j|1\rangle^2 \mp 2\langle 1|\hat{V}_j|1\rangle w_j^z + O(B^2)\right), \tag{S26}$$
leading to a redefinition of the energy gap

$$E_{1'_+} - E_{1'_-} = \Delta_1 + 4\sum_j \frac{\langle 1|\hat{V}_j|1\rangle}{\omega_j}\, w_j^z. \tag{S27}$$
1457 |
+
Although the off-diagonal components of the spin-phonon coupling wx
|
1458 |
+
j and wy
|
1459 |
+
j are several orders of magnitude smaller than
|
1460 |
+
the diagonal one wz
|
1461 |
+
j (see Section S3), the sheer number of vibrational modes could still lead to an observable effect on the
|
1462 |
+
electronic degrees of freedom. We can estimate this effect by averaging the residual spin-phonon coupling over a thermal
|
1463 |
+
phonon distribution in the polaron frame. Making use of Eq. (S22), the off-diagonal coupling in Eq. (S24) can be written as
|
1464 |
+
ˆH(pol)
|
1465 |
+
sp-ph = −∑
|
1466 |
+
j
|
1467 |
+
ˆS
|
1468 |
+
�
|
1469 |
+
wx
|
1470 |
+
jσ′
|
1471 |
+
x +wy
|
1472 |
+
jσ′
|
1473 |
+
y
|
1474 |
+
��
|
1475 |
+
ˆbj + ˆb†
|
1476 |
+
j
|
1477 |
+
�
|
1478 |
+
ˆS†
|
1479 |
+
(S28)
|
1480 |
+
= −∑
|
1481 |
+
j
|
1482 |
+
|1′
|
1483 |
+
−⟩⟨1−| ˆWj|1+⟩⟨1′
|
1484 |
+
+| ˆD j(ξ −
|
1485 |
+
j )
|
1486 |
+
�
|
1487 |
+
ˆbj + ˆb†
|
1488 |
+
j
|
1489 |
+
�
|
1490 |
+
ˆD†
|
1491 |
+
j(ξ +
|
1492 |
+
j )+h.c.
Assuming the vibrations to be in a thermal state at temperature $T$ in the polaron frame,

$$\rho_\mathrm{ph}^\mathrm{(th)} = \prod_j \rho_j^\mathrm{(th)} = \prod_j \frac{e^{-\omega_j \hat{b}_j^\dagger \hat{b}_j / k_B T}}{\mathrm{Tr}\left[e^{-\omega_j \hat{b}_j^\dagger \hat{b}_j / k_B T}\right]}, \tag{S29}$$
|
1508 |
+
obtaining the average of Eq. (S28) reduces to calculating the dimensionless quantity

κ_j = −Tr[ D̂_j(ξ^−_j) ( b̂_j + b̂†_j ) D̂†_j(ξ^+_j) ρ^(th)_j ]   (S30)
    = ( ξ^+_j + ξ^−_j ) exp[ −(1/2) ( ξ^+_j − ξ^−_j )^2 coth( ω_j / 2k_B T ) ]
    = ( 2⟨1|V̂_j|1⟩/ω_j ) exp[ −2 (w^z_j)^2/ω_j^2 coth( ω_j / 2k_B T ) ]
    = ( 2⟨1|V̂_j|1⟩/ω_j ) ( 1 + O(B^2) ),

which appears as a multiplicative rescaling factor for the off-diagonal couplings ⟨1∓|Ŵ_j|1±⟩. Note that, when neglecting second and higher order terms in the magnetic field, κ_j does not show any dependence on temperature or on the magnetic field orientation via θ_1 and φ_1.
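The closed form of Eq. (S30) can be checked numerically by evaluating the trace in a truncated Fock space. The sketch below assumes the standard displacement operator D̂(ξ) = exp[ξ(b̂† − b̂)] for real ξ; the mode frequency, temperature and displacements are illustrative values, not quantities from the ab initio model.

```python
# Numerical check of the closed form for kappa_j in Eq. (S30), assuming
# D(xi) = exp(xi (b^dag - b)) for real xi. All parameters are illustrative.
import numpy as np
from scipy.linalg import expm

N = 60                                        # Fock-space truncation
b = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
bd = b.T                                      # creation operator (b is real)

def displacement(xi):
    return expm(xi * (bd - b))

omega, kBT = 1.0, 0.5     # toy mode frequency and temperature (same units)
xi_p, xi_m = 0.3, 0.1     # toy xi_j^+ and xi_j^-

# thermal state of the mode in the polaron frame, Eq. (S29)
rho = expm(-omega * (bd @ b) / kBT)
rho /= np.trace(rho)

# kappa_j = -Tr[ D(xi^-) (b + b^dag) D^dag(xi^+) rho ], Eq. (S30)
kappa_num = -np.trace(displacement(xi_m) @ (b + bd)
                      @ displacement(xi_p).conj().T @ rho).real

# closed form: (xi^+ + xi^-) exp(-(xi^+ - xi^-)^2 coth(omega/2kBT) / 2)
kappa_ana = (xi_p + xi_m) * np.exp(-0.5 * (xi_p - xi_m) ** 2
                                   / np.tanh(omega / (2 * kBT)))
print(kappa_num, kappa_ana)
```

The two values agree to numerical precision, confirming both the (ξ^+_j + ξ^−_j) prefactor and the thermal coth factor.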
After thermal averaging, the effective electronic Hamiltonian for the lowest energy doublet becomes

Ĥ_el = Tr_ph[ Ŝ Ĥ_eff Ŝ† ρ^(th)_ph ]
     = E_1 + δE_1 + ( 2∑_j (⟨1|V̂_j|1⟩/ω_j) w^x_j ,  2∑_j (⟨1|V̂_j|1⟩/ω_j) w^y_j ,  Δ_1/2 + 2∑_j (⟨1|V̂_j|1⟩/ω_j) w^z_j ) · ( σ'_x , σ'_y , σ'_z ),   (S31)
where the energy of the lowest doublet is shifted by

δE_1 = −∑_j ⟨1|V̂_j|1⟩^2 / ω_j + ∑_j ω_j / ( e^{ω_j/k_B T} − 1 )   (S32)

due to the spin-phonon coupling and to the thermal phonon energy. Eq. (S31) thus represents a refined description of the lowest effective spin-1/2 doublet in the presence of spin-phonon coupling.
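As an illustration, Eq. (S32) can be evaluated for a toy set of modes; all frequencies and couplings below are made-up numbers (only k_B T ≈ 1.39 cm⁻¹ at T = 2 K is a physical conversion).

```python
# Illustrative evaluation of the energy shift in Eq. (S32) for a toy set of
# modes; frequencies and couplings are made-up numbers.
import numpy as np

omega = np.array([10.0, 25.0, 40.0])   # toy mode frequencies (cm^-1)
V11 = np.array([0.5, -0.2, 0.1])       # toy <1|V_j|1> values (cm^-1)
kBT = 1.39                             # k_B T at T = 2 K in cm^-1

polaron_shift = -np.sum(V11**2 / omega)                 # spin-phonon term
thermal_energy = np.sum(omega / np.expm1(omega / kBT))  # Bose-Einstein term
dE1 = polaron_shift + thermal_energy
print(dE1)
```

The first term is always negative (polaron stabilisation), while the second is the positive thermal phonon energy.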
We can finally recast the Hamiltonian (S31) in terms of a g-matrix for an effective spin 1/2, similarly to what we did earlier in the case of no spin-phonon coupling. In order to do so, we first recall from Eqs. (S11) and (S19) that the quantities Δ_1 and (w^x_j, w^y_j, w^z_j) appearing in Eq. (S31) depend on the magnetic field orientation via the states |1±⟩, and on both orientation and intensity via Ĥ_Zee. We can get rid of the first dependence by expressing the Zeeman eigenstates |1±⟩ in terms of the original crystal field eigenstates |1⟩, |1̄⟩. For the spin-phonon coupling vector w_j, we obtain
      ( ℜ⟨1_−|Ŵ_j|1_+⟩ )   ( cosθ_1 cosφ_1   cosθ_1 sinφ_1   −sinθ_1 ) ( ℜ⟨1̄|Ŵ_j|1⟩ )
w_j = ( ℑ⟨1_−|Ŵ_j|1_+⟩ ) = ( −sinφ_1         cosφ_1           0      ) ( ℑ⟨1̄|Ŵ_j|1⟩ ) = R(θ_1,φ_1) · w̃_j,   (S33)
      ( ⟨1_+|Ŵ_j|1_+⟩  )   ( sinθ_1 cosφ_1   sinθ_1 sinφ_1    cosθ_1 ) ( ⟨1|Ŵ_j|1⟩  )
where R(θ_1,φ_1) is a rotation matrix. Similarly, the electronic contribution Δ_1 transforms as

(0, 0, Δ_1) = j_1 · R(θ_1,φ_1)^T = µ_B B · g^(1)_el · R(θ_1,φ_1)^T.   (S34)

The Pauli spin operators need to be changed accordingly to σ̃ = R(θ_1,φ_1)^T · σ'. Lastly, we single out explicitly the magnetic field dependence of Ŵ_j, defined in Eq. (S18), by introducing a three-component operator K̂_j = (K̂^x_j, K̂^y_j, K̂^z_j), such that

Ŵ_j = µ_B g_J B · ( V̂_j Q̂_1 Ĵ + Ĵ Q̂_1 V̂_j ) = µ_B g_J B · K̂_j.   (S35)
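As a quick sanity check, the matrix R(θ_1, φ_1) of Eq. (S33) can be verified to be a proper rotation (orthogonal, with unit determinant) for arbitrary angles; the angles in the sketch below are made-up values.

```python
# Check that R(theta_1, phi_1) of Eq. (S33) is a proper rotation matrix.
import numpy as np

def R(theta, phi):
    return np.array([
        [np.cos(theta)*np.cos(phi), np.cos(theta)*np.sin(phi), -np.sin(theta)],
        [-np.sin(phi),              np.cos(phi),                0.0],
        [np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi),  np.cos(theta)],
    ])

theta1, phi1 = 0.7, 1.9        # arbitrary field orientation (radians)
Rm = R(theta1, phi1)
print(np.allclose(Rm @ Rm.T, np.eye(3)), np.isclose(np.linalg.det(Rm), 1.0))
```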
Thus, the effective electronic Hamiltonian in Eq. (S31) can be finally rewritten as

Ĥ_el = E_1 + δE_1 + µ_B B · ( g^(1)_el + g_vib ) · σ̃/2,   (S36)

where g^(1)_el is the electronic g-matrix defined in Eq. (S8), and
                                   ( ℜ⟨1̄|K̂^x_j|1⟩   ℑ⟨1̄|K̂^x_j|1⟩   ⟨1|K̂^x_j|1⟩ )
g_vib = 4 g_J ∑_j (⟨1|V̂_j|1⟩/ω_j) ( ℜ⟨1̄|K̂^y_j|1⟩   ℑ⟨1̄|K̂^y_j|1⟩   ⟨1|K̂^y_j|1⟩ )   (S37)
                                   ( ℜ⟨1̄|K̂^z_j|1⟩   ℑ⟨1̄|K̂^z_j|1⟩   ⟨1|K̂^z_j|1⟩ )

is a vibronic correction.
Note that this correction is non-perturbative in the spin-phonon coupling, despite only containing quadratic terms in V̂_j (recall that K̂_j depends linearly on V̂_j). The only approximations leading to Eq. (S36) are a linear perturbative expansion in the magnetic field B and neglecting quantum fluctuations of the off-diagonal spin-phonon coupling in the polaron frame, which is accounted for only via its thermal expectation value. This approximation relies on the fact that the off-diagonal couplings are much smaller than the diagonal spin-phonon coupling that is treated exactly by the polaron transformation (see Section S3).
C. Landau-Zener probability
Let us consider a situation in which the magnetic field comprises a time-independent contribution arising from internal dipolar or hyperfine fields B_int and a time-dependent external field B_ext(t). Let us fix the orientation of the external field and vary its magnitude at a constant rate, such that the field switches direction at t = 0. Under these circumstances, the Hamiltonian of Eq. (S36) becomes
Ĥ_el(t) = E_1 + δE_1 + µ_B ( B_int + (dB_ext/dt) t ) · g · σ̃/2,   (S38)
where g = g^(1)_el + g_vib. Neglecting the constant energy shift and introducing the vectors

Δ = µ_B B_int · g,   (S39)
v = µ_B (dB_ext/dt) · g,   (S40)
the Hamiltonian then becomes

Ĥ_el(t) = (Δ/2) · σ̃ + (v t/2) · σ̃ = (Δ_⊥/2) · σ̃ + ((v t + Δ_∥)/2) · σ̃.   (S41)
In the second equality, we have split the vector Δ = Δ_⊥ + Δ_∥ into a perpendicular and a parallel component to v. Choosing an appropriate reference frame, we can write
Ĥ_el(t') = (Δ_⊥/2) σ̃_x + (v t'/2) σ̃_z,   (S42)
in terms of the new time variable t' = t + Δ_∥/v. Assuming that the spin is initialised in its ground state at t' → −∞, the probability of observing a spin flip at t' → +∞ is given by the Landau-Zener formula [15–20]
P_LZ = 1 − exp( −π Δ_⊥^2 / 2v ).   (S43)
We remark that tunnelling is only made possible by the presence of Δ_⊥, which stems from internal fields that have a perpendicular component to the externally applied field. We also observe that a perfectly axial system would not exhibit tunnelling behaviour, since in that case the direction of B · g would always point along the easy axis (i.e. along the only eigenvector of g with a non-vanishing eigenvalue), and therefore v and Δ would always be parallel. Thus, deviations from axiality and the presence of transverse fields are both required for QTM to occur.
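A minimal sketch of Eq. (S43), in units where ħ = 1; the gap and sweep velocity below are illustrative numbers, not values fitted to the experiment. The probability vanishes for Δ_⊥ = 0 and approaches one in the adiabatic limit Δ_⊥² ≫ v.

```python
# Landau-Zener spin-flip probability of Eq. (S43) (hbar = 1 units).
import numpy as np

def p_lz(gap_perp, v):
    """P_LZ = 1 - exp(-pi * gap_perp^2 / (2 v))."""
    return 1.0 - np.exp(-np.pi * gap_perp**2 / (2.0 * v))

print(p_lz(0.0, 1.0))    # no transverse gap: no tunnelling
print(p_lz(10.0, 1.0))   # large gap / slow sweep: adiabatic limit
```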
S3. DISTRIBUTION OF SPIN-PHONON COUPLING VECTORS
The effective polaron Hamiltonian presented in Eq. (7) and derived in the previous section provides a good description of the ground doublet only if the spin-phonon coupling operators are approximately diagonal in the electronic eigenbasis. This is equivalent to requiring that the components of the vectors w_j defined in Eq. (S19) satisfy

|w^x_j|, |w^y_j| ≪ |w^z_j|.   (S44)
Fig. S1a shows the distribution of points {w_j, j = 1,...,M} (where M is the number of vibrational modes) in 3D space for different orientations of the magnetic field. As a consequence of the strong magnetic axiality of the complex under consideration, we see that these points are mainly distributed along the z-axis, therefore satisfying the criterion expressed in Eq. (S44) (note the different scale on the xy-plane).
FIG. S1. Distribution of spin-phonon coupling vectors w_j. (a) The points w_j distribute along a straight line in 3D space (units: cm⁻¹) when the magnetic field is oriented along x, y, z. The magnitude is fixed to 1 T. Note that, owing to the definition of w_j, a different magnitude would yield a uniformly rescaled distribution of points, leaving the shape unchanged. (b) Variance of the points w_j in the xy-plane in units of the total variance, as a function of magnetic field orientation.
In order to confirm that the points w_j maintain a similar distribution regardless of the magnetic field orientation, we calculate their variances along different directions of the 3D space they inhabit. We define
σ^2_α = var(w^α_j) = ( 1/(M − 1) ) ∑_{j=1}^{M} ( w^α_j − µ_α )^2,   (S45)
where α = x, y, z and µ_α = (1/M) ∑_{j=1}^{M} w^α_j. The dependence of these variances on the field orientation is made evident by recalling that the points w_j are related via a rotation R(θ_1,φ_1) to the set of points w̃_j, which only depend linearly on the field B, as shown in Eqs. (S33) and (S35). If the points are mainly distributed along z for any field orientation, we expect the combined variance in the xy-plane to be much smaller than the total variance of the dataset, i.e.
σ^2_x + σ^2_y ≪ σ^2_x + σ^2_y + σ^2_z.   (S46)
Fig. S1b provides a direct confirmation of this hypothesis, showing that the variance in the xy-plane is at most 6 × 10⁻⁴ times the total variance. Therefore, we conclude that the approach followed in Section S2 is fully justified.
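The criterion of Eqs. (S45)–(S46) can be illustrated on synthetic data: the point cloud below is a made-up z-elongated Gaussian distribution, not the ab initio coupling vectors.

```python
# Illustration of Eqs. (S45)-(S46) on a synthetic cloud of coupling vectors
# w_j elongated along z (made-up data).
import numpy as np

rng = np.random.default_rng(0)
M = 500
w = rng.normal(scale=[1e-4, 1e-4, 5e-2], size=(M, 3))  # sigma_z >> sigma_xy

var = w.var(axis=0, ddof=1)            # unbiased variances, Eq. (S45)
ratio = (var[0] + var[1]) / var.sum()  # xy variance over total, cf. Eq. (S46)
print(ratio)                           # << 1 for a z-elongated distribution
```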
S4. EXPERIMENTAL ESTIMATE OF THE SPIN-FLIP PROBABILITY
In order to provide experimental support for our vibronic model of QTM, we compare the calculated spin-flip probabilities with values extracted from previously reported measurements of magnetic hysteresis. We use data from field-dependent magnetisation measurements reported in Ref. [7] (Fig. S35, sample 4), reproduced here in Fig. S2. The sample consisted of an 83 µL volume of a 170 mM solution of [Dy(Cpttt)2][B(C6F5)4] in dichloromethane (DCM). The field-dependent magnetisation was measured at T = 2 K while sweeping an external magnetic field Bext from +7 T to −7 T and back again to +7 T. The resulting hysteresis loop is shown in Fig. S2a. The sweep rate dBext/dt is not constant throughout the hysteresis loop, as shown in Fig. S2b. In particular, it takes values between 10 Oe/s and 20 Oe/s across the zero field region where QTM takes place.
QTM results in a characteristic step around the zero field region in magnetic hysteresis curves (Fig. S2a). The spin-flip probability across the tunnelling transition can be easily related to the height of this step via the expression [21]
P_↑→↓ = (1/2) ( M/M_sat − M'/M_sat ).   (S47)
The value of the magnetisation before (M) and after (M') the QTM drop is estimated by performing a linear fit of the field-dependent magnetisation close to the zero field region, for both Bext > 0 and Bext < 0, and extrapolating the magnetisation at Bext = 0 (Fig. S2a, inset). The saturation value of the magnetisation M_sat is obtained by measuring the magnetisation at low temperature in a strong external magnetic field (T = 2 K, Bext = 7 T). Following this method, we obtain a spin-flip probability P_↑→↓ = 0.27, which is shown as a purple horizontal line in Fig. 4 in the main text.
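A minimal sketch of Eq. (S47); the magnetisation values below are hypothetical, chosen so that the result matches the reported P_↑→↓ = 0.27, and are not the digitised data of Fig. S2a.

```python
# Spin-flip probability from the QTM step height, Eq. (S47).
def spin_flip_probability(M, M_prime, M_sat):
    # P = (M/M_sat - M'/M_sat) / 2; illustrative inputs, not measured data
    return 0.5 * (M / M_sat - M_prime / M_sat)

print(spin_flip_probability(M=0.62, M_prime=0.08, M_sat=1.0))
```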
FIG. S2. Magnetic hysteresis of [Dy(Cpttt)2]⁺ from Ref. [7]. (a) Field-dependent magnetisation measured on a 170 mM frozen solution of [Dy(Cpttt)2]⁺ (counter ion [B(C6F5)4]⁻) in DCM at T = 2 K. Data presented in [7] (Fig. S35, sample 4). The loop is traversed in the direction indicated by the blue arrows. The sudden drop of the magnetisation from M to M' around Bext = 0 is a characteristic signature of QTM. The slow magnetisation decay around the QTM step can be ascribed to other magnetic relaxation mechanisms (Raman). (b) Time dependence of the magnetic field Bext (top) and instantaneous sweep rate (bottom). Note that the sweep rate is not constant around the avoided crossing at Bext = 0, but assumes values in the range 10–20 Oe/s.
S5. ESTIMATE OF THE INTERNAL FIELDS IN A FROZEN SOLUTION

A. Dipolar fields
In this section we provide an estimate of the internal fields B_int in a disordered ensemble of SMMs, based on the field-dependent magnetisation data introduced in Section S4.

When a SMM with strongly axial magnetic anisotropy is placed in a strong external magnetic field Bext, it gains a non-zero magnetic dipole moment along its easy axis. Once the external field is removed, the SMM partially retains its magnetisation µ = µ µ̂, which produces a microscopic dipolar field
B_dip(r) = ( µ_0 µ / 4πr^3 ) [ 3 r̂ (µ̂ · r̂) − µ̂ ]   (S48)
at a point r = r r̂ in space. This field can then cause a tunnelling gap to open in neighbouring SMMs, depending on their relative distance and orientation.

In order to estimate the strength of typical dipolar fields, we need to determine the average distance between SMMs in the sample, and the magnetic dipole moment associated with a single SMM. Since we know both the volume V and the concentration of Dy centres in the sample (see previous section), we can easily obtain the number N of SMMs in solution. The average distance between SMMs can then be obtained simply by taking the cube root of the volume per particle, as
r = ( V/N )^{1/3} ≈ 21.4 Å.   (S49)
The magnetic moment can be obtained from the hysteresis curve shown in Fig. S2a, by reading the value of the magnetisation M right before the QTM step. This amounts to an average magnetic moment per molecule

⟨µ_∥⟩ = M/N ≈ 4.07 µ_B   (S50)
along the direction of the external field Bext, where ⟨·⟩ denotes the average over the ensemble of SMMs. Since the orientation of SMMs in a frozen solution is random, the component of the magnetisation µ perpendicular to the applied field averages to zero, i.e. ⟨µ_⊥⟩ = 0. However, it still contributes to the formation of the microscopic dipolar field (S48), which depends on µ = µ_∥ + µ_⊥. Since the sample consists of many randomly oriented SMMs, the average magnetisation in Eq. (S50) can also be expressed in terms of µ = |µ| via the orientational average

⟨µ_∥⟩ = ∫_0^{π/2} dθ sinθ µ_∥(θ) = µ/2,   (S51)
where µ_∥(θ) = µ cosθ is the component of the magnetisation of a SMM along the direction of the external field Bext. Thus, the magnetic moment responsible for the microscopic dipolar field is twice as large as the measured value (S50).

Based on these estimates, the magnitude of dipolar fields experienced by a Dy atom in the sample is

B_dip = 0.77 mT × √( 3 (µ̂ · r̂)^2 + 1 ).   (S52)

The square root averages to 1.38 for randomly oriented µ̂ and r̂ and can take values between 1 and 2, represented by the green shaded area in Fig. 4 in the main text.
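The numerical estimates of Eqs. (S49)–(S52) can be reproduced from the sample parameters quoted above (83 µL of a 170 mM solution, ⟨µ_∥⟩ ≈ 4.07 µ_B); a sketch:

```python
# Reproducing the order-of-magnitude estimates of Eqs. (S49)-(S52) from the
# sample parameters quoted in the text.
import numpy as np

N_A = 6.02214076e23        # Avogadro constant (1/mol)
mu_0 = 4e-7 * np.pi        # vacuum permeability (T m / A)
mu_B = 9.2740100783e-24    # Bohr magneton (J/T)

V = 83e-6 * 1e-3           # sample volume: 83 uL in m^3
c = 170.0                  # concentration: 170 mM = 170 mol/m^3
N = c * V * N_A            # number of SMMs in solution

r = (V / N) ** (1 / 3)                   # mean distance, Eq. (S49)
mu = 2 * 4.07 * mu_B                     # full moment: twice <mu_par>, Eq. (S51)
B_pref = mu_0 * mu / (4 * np.pi * r**3)  # prefactor of Eq. (S52)

# orientational average of sqrt(3 cos^2(gamma) + 1), cos(gamma) uniform on [0,1]
x = (np.arange(200000) + 0.5) / 200000   # midpoint rule
avg = np.sqrt(3.0 * x**2 + 1.0).mean()

print(r * 1e10, B_pref * 1e3, avg)       # ~21.4 Angstrom, ~0.77 mT, ~1.38
```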
B. Hyperfine coupling
Another possible source of microscopic magnetic fields are nuclear spins. Among the different isotopes of dysprosium, only ¹⁶¹Dy and ¹⁶³Dy have non-zero nuclear spin (I = 5/2), making up approximately 44% of naturally occurring dysprosium. The nuclear spin degrees of freedom are described by the Hamiltonian

Ĥ_nuc = Ĥ_Q + Ĥ_HF = Î · P · Î + Î · A · Ĵ,   (S53)
where the first term is the quadrupole Hamiltonian Ĥ_Q = Î · P · Î, accounting for the zero-field splitting of the nuclear spin states, and the second term Ĥ_HF = Î · A · Ĵ accounts for the hyperfine coupling between the nuclear spin Î and electronic angular momentum Ĵ operators. In analogy with the electronic Zeeman Hamiltonian Ĥ_Zee = µ_B g_J B · Ĵ, we define the effective nuclear magnetic field operator

µ_B g_J B̂_nuc = A^T · Î,   (S54)
so that the hyperfine coupling Hamiltonian takes the form of a Zeeman interaction Ĥ_HF = µ_B g_J B̂†_nuc · Ĵ. If we consider the nuclear spin to be in a thermal state at temperature T with respect to the quadrupole Hamiltonian Ĥ_Q, the resulting expectation value of the nuclear magnetic field vanishes, since the nuclear spin is completely unpolarised. However, the external field Bext will tend to polarise the nuclear spin via the nuclear Zeeman Hamiltonian

Ĥ_nuc,Zee = µ_N g Bext · Î,   (S55)
where µ_N is the nuclear magneton and g is the nuclear g-factor of a Dy nucleus. In this case, the nuclear spin is described by the thermal state

ρ^(th)_nuc = e^{−(Ĥ_Q + Ĥ_nuc,Zee)/k_B T} / Tr[ e^{−(Ĥ_Q + Ĥ_nuc,Zee)/k_B T} ]   (S56)
and the effective nuclear magnetic field can be calculated as

B_nuc = Tr[ B̂_nuc ρ^(th)_nuc ].   (S57)
To the best of our knowledge, quadrupole and hyperfine coupling tensors for Dy in [Dy(Cpttt)2]⁺ have not been reported in the literature. However, ab initio calculations of hyperfine coupling tensors have been performed on DyPc2 [22]. Although the dysprosium atom in DyPc2 and [Dy(Cpttt)2]⁺ interacts with different ligands, the crystal field is qualitatively similar for these two complexes; we therefore expect the nuclear spin Hamiltonian to be sufficiently close to the one for [Dy(Cpttt)2]⁺, at least for the purpose of obtaining an approximate estimate. Using the quadrupolar and hyperfine tensors determined for DyPc2 [22] and the nuclear g-factors measured for ¹⁶¹Dy and ¹⁶³Dy [23], we can compute B_nuc = |B_nuc| from Eq. (S57) for different orientations of the external magnetic field. As shown in Table S1, the effective nuclear magnetic fields at T = 2 K are at least one order of magnitude smaller than the dipolar fields calculated in the previous section, regardless of the orientation of the external field.
             ¹⁶¹Dy        ¹⁶³Dy
Bext // x̂   2.82×10⁻⁸    5.34×10⁻⁸
Bext // ŷ   1.77×10⁻⁸    3.38×10⁻⁸
Bext // ẑ   5.51×10⁻⁵    1.08×10⁻⁴

TABLE S1. Effective Dy nuclear magnetic field B_nuc (T) at T = 2 K.
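The structure of the calculation in Eqs. (S54)–(S57) can be sketched for an I = 5/2 nucleus; the quadrupole scale, hyperfine tensor and nuclear g-factor below are made-up toy numbers, not the DyPc2 values of Ref. [22].

```python
# Toy illustration of Eqs. (S54)-(S57) for an I = 5/2 nucleus; P0, A and
# g_nuc are made-up numbers, not the ab initio DyPc2 parameters.
import numpy as np
from scipy.linalg import expm

I = 2.5
m = np.arange(I, -I - 1, -1)                  # m = 5/2 ... -5/2
Iz = np.diag(m)
Ip = np.diag(np.sqrt(I*(I + 1) - m[1:]*(m[1:] + 1)), k=1)  # raising operator
Ix, Iy = (Ip + Ip.T) / 2, (Ip - Ip.T) / 2j

kB = 1.380649e-23        # Boltzmann constant (J/K)
muN = 5.0507837e-27      # nuclear magneton (J/T)
muB = 9.2740101e-24      # Bohr magneton (J/T)
gJ = 4/3                 # Lande factor of the Dy(III) 6H15/2 ground term
g_nuc = -0.19            # toy nuclear g-factor
P0 = 1e-26               # toy quadrupole energy scale (J)
A = 1e-27 * np.eye(3)    # toy diagonal hyperfine tensor (J)
T, Bext = 2.0, np.array([0.0, 0.0, 1.0])   # temperature (K), field (T)

# H = I.P.I + muN g Bext.I, Eqs. (S53) and (S55), with P ~ P0 along z
H = P0 * (Iz @ Iz) + muN * g_nuc * (Bext[0]*Ix + Bext[1]*Iy + Bext[2]*Iz)
rho = expm(-H / (kB * T))
rho /= np.trace(rho)                         # thermal state, Eq. (S56)

I_avg = np.array([np.trace(op @ rho).real for op in (Ix, Iy, Iz)])
B_nuc = A.T @ I_avg / (muB * gJ)             # Eqs. (S54) and (S57)
print(np.linalg.norm(B_nuc))
```

As in the text, the field vanishes for Bext = 0 (unpolarised nuclear spin) and is non-zero, but tiny, once the external field polarises the nucleus.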
S6. RESULTS FOR A DIFFERENT SOLVENT CONFIGURATION
In this section we show that the results presented in the main text are qualitatively robust against variations of the solvent environment. In order to show this, we consider a smaller and rounder solvent ball consisting of 111 DCM molecules, and reproduce the results shown in the main text, as shown in Fig. S3. It is worth noting that the vibronic spin-flip probabilities are significantly smaller for the smaller solvent ball, confirming the importance of the low-frequency vibrational modes associated with the solvent for determining QTM behaviour. The general tendency of vibrations to enhance QTM, however, is correctly reproduced.
FIG. S3. Results for a different solvent configuration. (a) Alternative arrangement of 111 DCM molecules around [Dy(Cpttt)2]⁺. (b) Spin-phonon coupling strength and vibrational density of states (see Fig. 1c). (c) Vibronic correction to the energy splitting of the ground Kramers doublet (Δ_1^vib − Δ_1) for different orientations of the magnetic field (see Fig. 2a). (d) Ensemble-averaged spin-flip probability for different field sweep rates as a function of the internal field strength (see Fig. 3). (e) Orientationally averaged single-mode spin-flip probability ⟨P_j⟩ vs change in magnetic axiality ΔA_j/A_el (see Fig. 4).
The most evident difference between these results and the ones presented in the main text is the shape of the single-mode axiality distribution (Fig. S3e). In this case, the single-mode spin-flip probability ⟨P_j⟩ still correlates with the relative single-mode axiality ΔA_j/A_el. However, instead of taking values on a continuous range, the relative axiality seems to cluster around discrete values. In an attempt to clarify the origin of this behaviour, we looked at the composition of the vibrational modes belonging to the different clusters. Vibrational modes belonging to the same cluster were not found to share any evident common feature. Rather than in the structure of the vibrational modes, this behaviour seems to originate from the equilibrium electronic g-matrix g_el. This can be seen by computing the single-mode axiality A_j = A(g_el + g_j^vib) for slightly different choices of g_el. In particular, we checked how the axiality of the electronic g-matrix affects the mode axiality. In order to do that, we considered the singular value decomposition of the electronic g-matrix

g_el = U · diag(g_1, g_2, g_3) · V†,   (S58)
where the matrices U and V contain its left and right singular vectors. The singular values are g_1 = 19.99, g_2 = 3.40 × 10⁻⁶, g_3 = 2.98 × 10⁻⁶, and the axiality is very close to one, i.e. 1 − A_el = 4.79 × 10⁻⁷. We artificially change the axiality of g_el by rescaling the hard-plane g-values by a factor α and redefining the electronic g-matrix as

g_el^α = U · diag(g_1, αg_2, αg_3) · V†.   (S59)

The results are shown in Fig. S4. The three different colours distinguish the vibrational modes belonging to the three clusters visible in Fig. S3e (corresponding to α = 1). When α = 0, the g-matrix has perfect easy-axis anisotropy. In this case, the vibronic correction to the g-matrix is too small to cause significant changes in the magnetic axiality, and all the vibrational modes align around A_j ≈ A_el. Increasing α to 0.9, clusters begin to appear. For α = 1.3, the single-mode axiality distribution begins to look like the one shown in Fig. 4a in the main text. The electronic g-matrix obtained for the solvent ball considered in the main text has a lower axiality than the one used throughout this section, i.e. 1 − A_el = 1.12 × 10⁻⁶. Therefore, it makes sense that for α sufficiently larger than 1 we recover the same type of distribution as in the main text, since increasing α corresponds to lowering the electronic axiality A(g_el^α).
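The rescaling of Eq. (S59) can be sketched numerically; the singular values below mimic the ones quoted above, while U and V are random orthogonal matrices rather than the ab initio ones.

```python
# Hard-plane rescaling of Eq. (S59) on a toy near-axial g-matrix.
import numpy as np

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal U
Vm, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal V
g1, g2, g3 = 19.99, 3.40e-6, 2.98e-6           # singular values as in the text
g_el = U @ np.diag([g1, g2, g3]) @ Vm.T

def rescale(g, alpha):
    """g_alpha = U diag(g1, alpha*g2, alpha*g3) V^dag, Eq. (S59)."""
    u, s, vh = np.linalg.svd(g)                # s is sorted descending
    return u @ np.diag([s[0], alpha * s[1], alpha * s[2]]) @ vh

s_new = np.linalg.svd(rescale(g_el, 1.3), compute_uv=False)
print(s_new)   # easy-axis value unchanged, hard-plane values scaled by 1.3
```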
FIG. S4. Impact of electronic axiality on single-mode axiality. Distribution of single-mode spin-flip probability ⟨P_j⟩ and g-matrix axiality A_j = A(g_el^α + g_j^vib) relative to the axiality of the modified electronic g-matrix A(g_el^α) defined in Eq. (S59). Vibrational modes belonging to different clusters in Fig. S3e (α = 1) are labelled with different colours.
[1] M. Svensson, S. Humbel, R. D. J. Froese, T. Matsubara, S. Sieber, and K. Morokuma, ONIOM: a multilayered integrated MO + MM method for geometry optimizations and single point energy predictions. A test for Diels-Alder reactions and Pt(P(t-Bu)3)2 + H2 oxidative addition, The Journal of Physical Chemistry 100, 19357 (1996).
[2] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized gradient approximation made simple, Physical Review Letters 77, 3865 (1996).
[3] D. Andrae, U. Häußermann, M. Dolg, H. Stoll, and H. Preuß, Energy-adjusted ab initio pseudopotentials for the second and third row transition elements, Theor. Chim. Acta 77, 123 (1990).
[4] T. H. Dunning, Gaussian basis sets for use in correlated molecular calculations. I. The atoms boron through neon and hydrogen, The Journal of Chemical Physics 90, 1007 (1989).
[5] M. J. Frisch, G. W. Trucks, H. B. Schlegel, G. E. Scuseria, M. A. Robb, J. R. Cheeseman, G. Scalmani, V. Barone, B. Mennucci, G. A. Petersson, H. Nakatsuji, M. Caricato, X. Li, H. P. Hratchian, A. F. Izmaylov, J. Bloino, G. Zheng, J. L. Sonnenberg, M. Hada, M. Ehara, K. Toyota, R. Fukuda, J. Hasegawa, M. Ishida, T. Nakajima, Y. Honda, O. Kitao, H. Nakai, T. Vreven, J. A. Montgomery, Jr., J. E. Peralta, F. Ogliaro, M. Bearpark, J. J. Heyd, E. Brothers, K. N. Kudin, V. N. Staroverov, R. Kobayashi, J. Normand, K. Raghavachari, A. Rendell, J. C. Burant, S. S. Iyengar, J. Tomasi, M. Cossi, N. Rega, J. M. Millam, M. Klene, J. E. Knox, J. B. Cross, V. Bakken, C. Adamo, J. Jaramillo, R. Gomperts, R. E. Stratmann, O. Yazyev, A. J. Austin, R. Cammi, C. Pomelli, J. W. Ochterski, R. L. Martin, K. Morokuma, V. G. Zakrzewski, G. A. Voth, P. Salvador, J. J. Dannenberg, S. Dapprich, A. D. Daniels, O. Farkas, J. B. Foresman, J. V. Ortiz, J. Cioslowski, and D. J. Fox, Gaussian 09 Revision D.01 (2009), Gaussian Inc., Wallingford CT.
[6] I. Fdez. Galván, M. Vacher, A. Alavi, C. Angeli, F. Aquilante, J. Autschbach, J. J. Bao, S. I. Bokarev, N. A. Bogdanov, R. K. Carlson, L. F. Chibotaru, J. Creutzberg, N. Dattani, M. G. Delcey, S. S. Dong, A. Dreuw, L. Freitag, L. M. Frutos, L. Gagliardi, F. Gendron, A. Giussani, L. González, G. Grell, M. Guo, C. E. Hoyer, M. Johansson, S. Keller, S. Knecht, G. Kovačević, E. Källman, G. Li Manni, M. Lundberg, Y. Ma, S. Mai, J. P. Malhado, P. Å. Malmqvist, P. Marquetand, S. A. Mewes, J. Norell, M. Olivucci, M. Oppel, Q. M. Phung, K. Pierloot, F. Plasser, M. Reiher, A. M. Sand, I. Schapiro, P. Sharma, C. J. Stein, L. K. Sørensen, D. G. Truhlar, M. Ugandi, L. Ungur, A. Valentini, S. Vancoillie, V. Veryazov, O. Weser, T. A. Wesołowski, P.-O. Widmark, S. Wouters, A. Zech, J. P. Zobel, and R. Lindh, OpenMolcas: From source code to insight, Journal of Chemical Theory and Computation 15, 5925 (2019).
[7] C. A. P. Goodwin, F. Ortu, D. Reta, N. F. Chilton, and D. P. Mills, Molecular magnetic hysteresis at 60 kelvin in dysprosocenium, Nature 548, 439 (2017).
[8] C. M. Breneman and K. B. Wiberg, Determining atom-centered monopoles from molecular electrostatic potentials. The need for high sampling density in formamide conformational analysis, Journal of Computational Chemistry 11, 361 (1990).
[9] P.-Å. Malmqvist and B. O. Roos, The CASSCF state interaction method, Chemical Physics Letters 155, 189 (1989).
[10] P.-Å. Malmqvist, B. O. Roos, and B. Schimmelpfennig, The restricted active space (RAS) state interaction approach with spin-orbit coupling, Chemical Physics Letters 357, 230 (2002).
[11] P.-O. Widmark, P.-Å. Malmqvist, and B. O. Roos, Density matrix averaged atomic natural orbital (ANO) basis sets for correlated molecular wave functions, Theoretica Chimica Acta 77, 291 (1990).
[12] F. Aquilante, R. Lindh, and T. B. Pedersen, Unbiased auxiliary basis sets for accurate two-electron integral approximations, The Journal of Chemical Physics 127, 114107 (2007).
[13] J. K. Staab and N. F. Chilton, Analytic linear vibronic coupling method for first-principles spin-dynamics calculations in single-molecule magnets, Journal of Chemical Theory and Computation (2022), doi:10.1021/acs.jctc.2c00611.
[14] L. F. Chibotaru, A. Ceulemans, and H. Bolvin, Unique definition of the Zeeman-splitting g tensor of a Kramers doublet, Phys. Rev. Lett. 101, 033003 (2008).
[15] L. D. Landau, Zur Theorie der Energieübertragung, Phys. Z. Sowjetunion 1, 88 (1932).
[16] L. D. Landau, Zur Theorie der Energieübertragung II, Phys. Z. Sowjetunion 2, 46 (1932).
[17] C. Zener and R. H. Fowler, Non-adiabatic crossing of energy levels, Proceedings of the Royal Society of London. Series A 137, 696 (1932).
[18] E. C. G. Stückelberg, Theorie der unelastischen Stösse zwischen Atomen, Helv. Phys. Acta 5, 369 (1932).
[19] E. Majorana, Atomi orientati in campo magnetico variabile, Il Nuovo Cimento 9, 43 (1932).
[20] O. V. Ivakhnenko, S. N. Shevchenko, and F. Nori, Nonadiabatic Landau-Zener-Stückelberg-Majorana transitions, dynamics, and interference, Physics Reports 995, 1 (2023).
[21] G. Taran, E. Bonet, and W. Wernsdorfer, Decoherence measurements in crystals of molecular magnets, Phys. Rev. B 99, 180408 (2019).
[22] A. L. Wysocki and K. Park, Hyperfine and quadrupole interactions for Dy isotopes in DyPc2 molecules, Journal of Physics: Condensed Matter 32, 274002 (2020).
[23] J. Ferch, W. Dankwort, and H. Gebauer, Hyperfine structure investigations in DyI with the atomic beam magnetic resonance method, Physics Letters A 49, 287 (1974).