Add files using upload-large-folder tool
- -dE1T4oBgHgl3EQfoQRv/content/2301.03318v1.pdf +3 -0
- -dE1T4oBgHgl3EQfoQRv/vector_store/index.faiss +3 -0
- -dE1T4oBgHgl3EQfoQRv/vector_store/index.pkl +3 -0
- -tE1T4oBgHgl3EQfoQTL/content/2301.03319v1.pdf +3 -0
- -tE1T4oBgHgl3EQfoQTL/vector_store/index.faiss +3 -0
- -tE1T4oBgHgl3EQfoQTL/vector_store/index.pkl +3 -0
- .gitattributes +63 -0
- 0NE2T4oBgHgl3EQfMwbp/vector_store/index.faiss +3 -0
- 0NE2T4oBgHgl3EQfMwbp/vector_store/index.pkl +3 -0
- 0NFLT4oBgHgl3EQfoi-S/content/tmp_files/2301.12132v1.pdf.txt +2062 -0
- 0NFLT4oBgHgl3EQfoi-S/content/tmp_files/load_file.txt +0 -0
- 19E1T4oBgHgl3EQfRwPe/content/2301.03058v1.pdf +3 -0
- 19E1T4oBgHgl3EQfRwPe/vector_store/index.faiss +3 -0
- 19E1T4oBgHgl3EQfRwPe/vector_store/index.pkl +3 -0
- 1tE1T4oBgHgl3EQflQSM/content/tmp_files/2301.03283v1.pdf.txt +2423 -0
- 1tE1T4oBgHgl3EQflQSM/content/tmp_files/load_file.txt +0 -0
- 39FQT4oBgHgl3EQfHTWW/vector_store/index.pkl +3 -0
- 49FIT4oBgHgl3EQf7St_/content/2301.11397v1.pdf +3 -0
- 4NFKT4oBgHgl3EQf9C5Z/content/tmp_files/2301.11952v1.pdf.txt +859 -0
- 4NFKT4oBgHgl3EQf9C5Z/content/tmp_files/load_file.txt +439 -0
- 4dFQT4oBgHgl3EQf4Ta7/content/2301.13431v1.pdf +3 -0
- 5tE1T4oBgHgl3EQfBAK1/content/tmp_files/2301.02847v1.pdf.txt +1976 -0
- 5tE1T4oBgHgl3EQfBAK1/content/tmp_files/load_file.txt +0 -0
- 6tFAT4oBgHgl3EQfnx0W/content/tmp_files/2301.08630v1.pdf.txt +1508 -0
- 6tFAT4oBgHgl3EQfnx0W/content/tmp_files/load_file.txt +0 -0
- 8NAzT4oBgHgl3EQfgfyv/vector_store/index.pkl +3 -0
- 8NFLT4oBgHgl3EQfsy_c/content/tmp_files/load_file.txt +0 -0
- 9dFRT4oBgHgl3EQfqjdh/content/2301.13617v1.pdf +3 -0
- 9dFRT4oBgHgl3EQfqjdh/vector_store/index.pkl +3 -0
- A9AzT4oBgHgl3EQfhv18/content/2301.01489v1.pdf +3 -0
- A9AzT4oBgHgl3EQfhv18/vector_store/index.pkl +3 -0
- A9E1T4oBgHgl3EQf9QZb/content/2301.03554v1.pdf +3 -0
- A9E1T4oBgHgl3EQf9QZb/vector_store/index.faiss +3 -0
- ANFQT4oBgHgl3EQf8jdP/content/tmp_files/2301.13447v1.pdf.txt +1150 -0
- ANFQT4oBgHgl3EQf8jdP/content/tmp_files/load_file.txt +0 -0
- AdAzT4oBgHgl3EQfF_tg/content/2301.01020v1.pdf +3 -0
- AdAzT4oBgHgl3EQfF_tg/vector_store/index.faiss +3 -0
- AdAzT4oBgHgl3EQfF_tg/vector_store/index.pkl +3 -0
- AtE2T4oBgHgl3EQf8QmS/content/tmp_files/2301.04217v1.pdf.txt +418 -0
- AtE2T4oBgHgl3EQf8QmS/content/tmp_files/load_file.txt +385 -0
- BtE1T4oBgHgl3EQfpgUG/content/tmp_files/2301.03331v1.pdf.txt +890 -0
- BtE1T4oBgHgl3EQfpgUG/content/tmp_files/load_file.txt +0 -0
- DNFQT4oBgHgl3EQf_zdP/vector_store/index.faiss +3 -0
- DtE1T4oBgHgl3EQfEQNL/content/2301.02887v1.pdf +3 -0
- DtE1T4oBgHgl3EQfEQNL/vector_store/index.faiss +3 -0
- DtE1T4oBgHgl3EQfEQNL/vector_store/index.pkl +3 -0
- E9E1T4oBgHgl3EQfEgPe/content/2301.02892v1.pdf +3 -0
- GNAzT4oBgHgl3EQfxP7V/content/2301.01736v1.pdf +3 -0
- GNAzT4oBgHgl3EQfxP7V/vector_store/index.pkl +3 -0
- HNE1T4oBgHgl3EQfrQW4/content/tmp_files/2301.03353v1.pdf.txt +1029 -0
-dE1T4oBgHgl3EQfoQRv/content/2301.03318v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa0b7d6e9789f62713c833ad82d7bbedd6058639e2d1e48b4344909391511670
+size 195814
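Each binary file added in this commit is stored as a Git LFS pointer like the one above, i.e., a three-line text stub (`version`, `oid`, `size`) committed in place of the raw bytes. A minimal sketch of reading such a pointer in Python (a hypothetical helper, not part of Git LFS itself):

```python
# Minimal sketch: read a Git LFS pointer stub into a dict
# (hypothetical helper, not part of Git LFS itself).
def parse_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path, "r", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:                      # one "key value" pair per line
                key, _, value = line.partition(" ")
                fields[key] = value
    return fields

# On a checkout without `git lfs pull`, the .pdf path above would yield:
# {"version": "https://git-lfs.github.com/spec/v1",
#  "oid": "sha256:aa0b7d6e...", "size": "195814"}
```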
-dE1T4oBgHgl3EQfoQRv/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83e6914d4600ce14dcf984376c2a4e1abb818b589ed1079b3226e957cf92557b
+size 2424877
-dE1T4oBgHgl3EQfoQRv/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1ac21d8cdf5aacde3ec69b06dac11cb96028b1c5a59927998a2e6cc95531033
+size 93415
-tE1T4oBgHgl3EQfoQTL/content/2301.03319v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea155fe4f886aab5e8c8661f3221460a8dac8e04f5a538673f6d4355067aa8cc
+size 543706
-tE1T4oBgHgl3EQfoQTL/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a5ebdcbf23074e45cfcfed109a508e34f15a813fd64b1ea005e1d83a7703401
+size 4587565
-tE1T4oBgHgl3EQfoQTL/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5dcac91dca0e67fb83ab576394b489a8e769651454b44705039f61803f1c3c4a
+size 159596
.gitattributes
CHANGED
@@ -2193,3 +2193,66 @@ R9E4T4oBgHgl3EQf_g7i/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 L9AyT4oBgHgl3EQf6voI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 g9E0T4oBgHgl3EQf6gID/content/2301.02763v1.pdf filter=lfs diff=lfs merge=lfs -text
 YNE2T4oBgHgl3EQfvAgF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+i9E0T4oBgHgl3EQf7AKY/content/2301.02771v1.pdf filter=lfs diff=lfs merge=lfs -text
+49FIT4oBgHgl3EQf7St_/content/2301.11397v1.pdf filter=lfs diff=lfs merge=lfs -text
+i9E0T4oBgHgl3EQf7AKY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ldFLT4oBgHgl3EQfdy92/content/2301.12088v1.pdf filter=lfs diff=lfs merge=lfs -text
+-tE1T4oBgHgl3EQfoQTL/content/2301.03319v1.pdf filter=lfs diff=lfs merge=lfs -text
+U9E3T4oBgHgl3EQf0QsH/content/2301.04735v1.pdf filter=lfs diff=lfs merge=lfs -text
+E9E1T4oBgHgl3EQfEgPe/content/2301.02892v1.pdf filter=lfs diff=lfs merge=lfs -text
+itE2T4oBgHgl3EQfIAZX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+A9E1T4oBgHgl3EQf9QZb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+vNE1T4oBgHgl3EQfkQSg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+itE4T4oBgHgl3EQfSwxB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+itE4T4oBgHgl3EQfSwxB/content/2301.05001v1.pdf filter=lfs diff=lfs merge=lfs -text
+g9E0T4oBgHgl3EQf6gID/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+f9E3T4oBgHgl3EQfHwmY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+zNE4T4oBgHgl3EQfyQ1D/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+-dE1T4oBgHgl3EQfoQRv/content/2301.03318v1.pdf filter=lfs diff=lfs merge=lfs -text
+_NE1T4oBgHgl3EQfogR8/content/2301.03321v1.pdf filter=lfs diff=lfs merge=lfs -text
+RNFAT4oBgHgl3EQf0x4X/content/2301.08705v1.pdf filter=lfs diff=lfs merge=lfs -text
+ctAzT4oBgHgl3EQfZ_wC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ldFLT4oBgHgl3EQfdy92/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+DNFQT4oBgHgl3EQf_zdP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+_NE1T4oBgHgl3EQfogR8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+zNE4T4oBgHgl3EQfyQ1D/content/2301.05264v1.pdf filter=lfs diff=lfs merge=lfs -text
+yNAyT4oBgHgl3EQfa_ed/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+-tE1T4oBgHgl3EQfoQTL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+xNAzT4oBgHgl3EQfdvyx/content/2301.01426v1.pdf filter=lfs diff=lfs merge=lfs -text
+YNE2T4oBgHgl3EQfvAgF/content/2301.04085v1.pdf filter=lfs diff=lfs merge=lfs -text
+4dFQT4oBgHgl3EQf4Ta7/content/2301.13431v1.pdf filter=lfs diff=lfs merge=lfs -text
+U9E3T4oBgHgl3EQf0QsH/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+hdE4T4oBgHgl3EQfrg0j/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+RNFAT4oBgHgl3EQf0x4X/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+TtFJT4oBgHgl3EQfMiyz/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+W9FKT4oBgHgl3EQfoC5E/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+0NE2T4oBgHgl3EQfMwbp/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+WNFJT4oBgHgl3EQfOyzK/content/2301.11484v1.pdf filter=lfs diff=lfs merge=lfs -text
+DtE1T4oBgHgl3EQfEQNL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+AdAzT4oBgHgl3EQfF_tg/content/2301.01020v1.pdf filter=lfs diff=lfs merge=lfs -text
+DtE1T4oBgHgl3EQfEQNL/content/2301.02887v1.pdf filter=lfs diff=lfs merge=lfs -text
+MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf filter=lfs diff=lfs merge=lfs -text
+19E1T4oBgHgl3EQfRwPe/content/2301.03058v1.pdf filter=lfs diff=lfs merge=lfs -text
+9dFRT4oBgHgl3EQfqjdh/content/2301.13617v1.pdf filter=lfs diff=lfs merge=lfs -text
+O9AzT4oBgHgl3EQflP2h/content/2301.01545v1.pdf filter=lfs diff=lfs merge=lfs -text
+A9AzT4oBgHgl3EQfhv18/content/2301.01489v1.pdf filter=lfs diff=lfs merge=lfs -text
+QNE2T4oBgHgl3EQfrwi3/content/2301.04053v1.pdf filter=lfs diff=lfs merge=lfs -text
+w9E5T4oBgHgl3EQfMw4O/content/2301.05483v1.pdf filter=lfs diff=lfs merge=lfs -text
+L9FQT4oBgHgl3EQfUzaG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+A9E1T4oBgHgl3EQf9QZb/content/2301.03554v1.pdf filter=lfs diff=lfs merge=lfs -text
+-dE1T4oBgHgl3EQfoQRv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+GNAzT4oBgHgl3EQfxP7V/content/2301.01736v1.pdf filter=lfs diff=lfs merge=lfs -text
+rdAzT4oBgHgl3EQfO_vG/content/2301.01177v1.pdf filter=lfs diff=lfs merge=lfs -text
+w9E5T4oBgHgl3EQfMw4O/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+19E1T4oBgHgl3EQfRwPe/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+O9AzT4oBgHgl3EQflP2h/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+m9E5T4oBgHgl3EQfjQ9S/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+yNAyT4oBgHgl3EQfa_ed/content/2301.00254v1.pdf filter=lfs diff=lfs merge=lfs -text
+m9E5T4oBgHgl3EQfjQ9S/content/2301.05654v1.pdf filter=lfs diff=lfs merge=lfs -text
+PtFRT4oBgHgl3EQf6Djm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+PtFRT4oBgHgl3EQf6Djm/content/2301.13675v1.pdf filter=lfs diff=lfs merge=lfs -text
+I9E2T4oBgHgl3EQfpAhk/content/2301.04024v1.pdf filter=lfs diff=lfs merge=lfs -text
+AdAzT4oBgHgl3EQfF_tg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+QNE2T4oBgHgl3EQfrwi3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+StAzT4oBgHgl3EQf0f7W/content/2301.01786v1.pdf filter=lfs diff=lfs merge=lfs -text
+L9FQT4oBgHgl3EQfUzaG/content/2301.13298v1.pdf filter=lfs diff=lfs merge=lfs -text
0NE2T4oBgHgl3EQfMwbp/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16e5b7c04649be627c60a247b6fde521c89c2d7271ec230268d1f0ccd8c2b1bd
+size 3080237
0NE2T4oBgHgl3EQfMwbp/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a2a3e8033704efc8a814bfed52f49336fa7c42e44a3a7a51d7c641d533401603
+size 123655
0NFLT4oBgHgl3EQfoi-S/content/tmp_files/2301.12132v1.pdf.txt
ADDED
@@ -0,0 +1,2062 @@
AUTOPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning

Han Zhou1,*   Xingchen Wan2,*   Ivan Vulić1   Anna Korhonen1
1Language Technology Lab, University of Cambridge
2Machine Learning Research Group, University of Oxford
{hz416, iv250, alk23}@cam.ac.uk

arXiv:2301.12132v1 [cs.CL] 28 Jan 2023
Abstract

Large pretrained language models have been widely used in downstream NLP tasks via task-specific fine-tuning. Recently, an array of Parameter-Efficient Fine-Tuning (PEFT) methods have also achieved strong task performance while updating a much smaller number of parameters compared to full model tuning. However, it is non-trivial to make informed per-task design choices (i.e., to create PEFT configurations) concerning the selection of PEFT architectures and modules, the number of tunable parameters, and even the layers in which the PEFT modules are inserted. Consequently, it is highly likely that the current, manually set PEFT configurations are suboptimal for many tasks from the perspective of the performance-to-efficiency trade-off. To address the core question of PEFT configuration selection, which aims to control and maximise the balance between performance and parameter efficiency, we first define a rich configuration search space spanning multiple representative PEFT modules along with finer-grained configuration decisions over the modules (e.g., parameter budget, insertion layer). We then propose AUTOPEFT, a novel framework to traverse this configuration space: it automatically configures multiple PEFT modules via high-dimensional Bayesian optimisation. We show the resource scalability and task transferability of AUTOPEFT-found configurations, outperforming existing PEFT methods on average on the standard GLUE benchmark while conducting the configuration search on a single task. The per-task AUTOPEFT-based configuration search even outperforms full-model tuning.
1   Introduction and Motivation

*Equal contribution. Code is available at https://github.com/cambridgeltl/autopeft

[Figure 1: The performance of AUTOPEFT-found PEFT configurations compared to other standard PEFT methods and full model FT on the GLUE benchmark (Wang et al., 2018), plotted as Average Score against Fine-tuned Parameters (%) for Pfeiffer, UniPELT, MAM, AdaMix, Prefix, LoRA, Parallel, and AutoPEFT. We report the average score for each method by taking the mean of metrics for 8 GLUE tasks. The dashed horizontal bar (Full Model FT) indicates the full-model FT that updates 100% of parameters, and our approach aims to learn the best trade-off configuration between task performance and parameter efficiency.]

Pretrained language models (PLM) are used in downstream tasks via the standard transfer learning paradigm, where they get fine-tuned for particular tasks (Devlin et al., 2019; Liu et al., 2019b). This achieves state-of-the-art results in a wide spectrum of NLP tasks, becoming a prevalent modelling paradigm in NLP (Raffel et al., 2020). Fine-tuning the PLMs typically requires a full update of their original parameters (i.e., the so-called full-model fine-tuning (FT)); however, this is (i) computationally expensive and also (ii) storage-wise expensive, as it requires saving a separate full model copy for each task-tuned model. With the ever-growing size of the PLMs (Brown et al., 2020; Sanh et al., 2022), the cost of full model FT becomes a major bottleneck, due to its increasing demands as well as computational (time and space) non-efficiency.

Parameter-Efficient Fine-Tuning (PEFT) delivers a solution for alleviating the issues with full-model FT (Houlsby et al., 2019). By freezing the majority of pretrained weights of PLMs, PEFT approaches only update a small portion of parameters for efficiently adapting the PLM to a new downstream task. Recent studies have shown that PEFT can achieve competitive task performance while being modular, adaptable, and preventing catastrophic forgetting, in comparison to traditional FT (Wang et al., 2022).

Recent developments have created diverse PEFT modules with distinctive characteristics (Pfeiffer et al., 2020b; Li and Liang, 2021), with one of two main aims in focus: 1) improve task performance over other PEFT approaches while maintaining the same parameter budget as the competitor PEFT methods; or 2) maintain task performance while reducing the parameter budget needed. Existing PEFT modules, optimising for one of the two aims, have been successfully applied to transfer learning tasks (Chen et al., 2022b; Pfeiffer et al., 2022). However, different tasks, with different complexity, show distinct sensitivity to the allocated parameter budget and even to the chosen PEFT approach (He et al., 2022). At the same time, most PEFT applications are limited to a single PEFT architecture (e.g., serial adapters, prefix-tuning) with fixed decisions on its components (e.g., hidden size dimensionality, insertion layers), resulting in potentially suboptimal PEFT configurations across many tasks. Therefore, in this work, we propose a new, versatile and unified framework that automatically searches for improved and task-adapted PEFT configurations, aiming to effectively balance between the two (often colliding) goals of (i) improving performance and (ii) keeping the desired low parameter budget for PEFT.

While recent research has started exploring more dynamic PEFT configurations, the prior studies remain limited across several dimensions, including how they define the configuration search space. Namely, they typically focus only on a single PEFT architecture (e.g., adapters) or their simple combinations, or a single property (e.g., insertion layers – where to insert the module); see a short overview later in §2. Here, we propose a unified and more comprehensive framework for improved configuration search. It covers multiple standard PEFT modules (1. serial adapters, 2. parallel adapters, 3. prefix-tuning), combined with the critical parameter budget-related decisions: the size of each constituent module and the insertion layers for the modules.

Our defined comprehensive search space is huge; as a consequence, traversing it effectively and efficiently is extremely challenging. To enable search over the large configuration space, we thus propose the AUTOPEFT framework. It automatically configures multiple PEFT modules along with their efficiency-oriented design decisions, relying on a high-dimensional Bayesian optimisation (BO) approach. Crucially, within the search space, we propose a multi-objective optimisation which learns to simultaneously balance between maximising the searched configurations' task performance and parameter efficiency.

We conduct extensive experiments on the standard GLUE benchmark (Wang et al., 2018). We first study the transferability of the AUTOPEFT-searched architecture by running AUTOPEFT on a single task, followed by transferring the found architecture to other tasks. Experimental results show that this architecture can outperform existing PEFT baselines while achieving on-par performance with the standard full-model FT, relying only on 1.4% of the original trainable parameters. Further slight gains can be achieved via a computationally more expensive approach, where we run AUTOPEFT on each single task to find a task-adapted PEFT configuration. As demonstrated in Figure 1, AUTOPEFT is able to find configurations that offer a solid trade-off between task performance and parameter efficiency, even outperforming full-model FT. We also provide ablation studies over the search space, validating that the AUTOPEFT framework is versatile and portable to different search spaces.

Contributions. 1) We propose a large and comprehensive search space of PEFT configurations, which integrates three representative PEFT modules, the tunable number of parameters of each module, and the binary decisions concerning the Transformer layers in which these modules are inserted. 2) We propose a novel AUTOPEFT framework with high-dimensional Bayesian optimisation that can automatically and feasibly search for effective PEFT configurations in terms of both task performance and parameter efficiency. 3) We demonstrate that the AUTOPEFT-found configurations can not only reduce the parameter budget but also outperform existing PEFT modules while being transferable across tasks. The AUTOPEFT framework can also be easily extended to other and new PEFT modules.
2   Related Work

Parameter-Efficient Fine-Tuning. Standard PEFT methods can be divided into two main groups. 1) Some methods fine-tune a small portion of the pretrained parameters (Zhao et al., 2020; Guo et al., 2021). For instance, Ben Zaken et al. (2022) propose to fine-tune the PLM's bias terms, while Sung et al. (2021) and Ansell et al. (2022) fine-tune sparse subnetworks within the original PLM for a particular task. 2) Other methods fine-tune an additional set of parameters (Liu et al., 2022). Since there is no interference with the pretrained parameters, this class of PEFT modules, besides offering strong task performance, is arguably more modular; we thus focus on this class of PEFT methods in this work. The original adapter modules (Houlsby et al., 2019; Pfeiffer et al., 2020b) have a bottleneck serial architecture which can be inserted into every Transformer layer, see Figure 2. LoRA (Hu et al., 2022a) assumes a low-rank intrinsic dimensionality of the target task and performs low-rank updates (Mahabadi et al., 2021). Li and Liang (2021) propose the Prefix-Tuning method, which appends a learnable vector to the attention heads at each Transformer layer. Similarly, prompt-tuning (Lester et al., 2021) only appends this vector to the input embedding. UniPELT (Mao et al., 2022) integrates multiple PEFT modules with a dynamic gating mechanism. He et al. (2022) provide a unified formulation of existing PEFT modules and propose a parallel adapter module, along with a combined 'Mix-and-Match Adapter (MAM)' architecture that blends parallel adapters and prefix-tuning. Wang et al. (2022) propose the mixture-of-adaptations (AdaMix) architecture, which leverages weight averaging over a mixture of adapters.

Optimising Parameter Efficiency in PEFT. Recent work further aims to optimise the parameter efficiency of existing PEFT modules while maintaining task performance. The standard approach is to insert (typically serial) adapters into all Transformer layers, which still requires a sizeable parameter budget. Rücklé et al. (2021) address this question by performing random dropout of adapters from lower-level layers, displaying only a small decrease in task performance. Adaptable Adapters (AA) (Moosavi et al., 2022) generalise this idea by learning gates that switch adapters on or off in particular Transformer layers. Neural Architecture Search (NAS) methods aim to automate the design of neural net architectures themselves, and NAS has seen great advances recently, with performance often surpassing human expert-designed architectures in various tasks (Zoph and Le, 2017; Ren et al., 2021; Elsken et al., 2019). Concerning NLP tasks and PEFT, Hu et al. (2022b) propose S3PET, which adapts Differentiable Architecture Search (DARTS) (Liu et al., 2019a) to learn the positions for inserting the PEFT modules. This work is closest in spirit to ours.

Our method, discussed in detail in §3, offers a spectrum of advantages over S3PET and other related PEFT work. Relying on multi-objective optimisation, unlike S3PET, we can automatically discover a family of configurations at different parameter efficiency levels in a single search run, effectively balancing between task performance and parameter efficiency, without the need to set the 'parameter budget' in advance; similarly, we enable an automatic search over multiple constituent modules over the desirable range of parameter budgets and effective layers, whereas previous work can only support one architecture per search run. Further, previous work indicated that weight-sharing NAS such as DARTS may suffer from poor prediction reliability (White et al., 2021b), and that its success often hinges heavily on the design of the actual search space (Li and Talwalkar, 2019; Ru et al., 2020; Dong and Yang, 2020; Yang et al., 2020). We mitigate those issues with our design of AUTOPEFT. Finally, while weight-sharing NAS is arguably more computationally efficient, through combining the use of low-fidelity performance predictors and the strong transferability of the configurations found across tasks, AUTOPEFT can also be made very computationally efficient in discovering effective PEFT configurations. We further discuss this in §3 and demonstrate it empirically in §5.
+
3
|
301 |
+
AUTOPEFT Framework
|
302 |
+
We start by designing a large configuration space,
|
303 |
+
providing the motivation behind each decision to
|
304 |
+
include a particular module and its components
|
305 |
+
into the configuration space, along with a mathe-
|
306 |
+
matical formulation. We then propose AUTOPEFT,
|
307 |
+
a novel framework to search over this challenging
|
308 |
+
configuration space. It automatically configures
|
309 |
+
(components of) multiple PEFT modules via high-
|
310 |
+
dimensional Bayesian optimisation.
|
311 |
+
PEFT Configuration Search Space. The search
|
312 |
+
space is an influential factor in the performance
|
313 |
+
of any search algorithm. In order to simultane-
|
314 |
+
ously maximise task performance along with pa-
|
315 |
+
rameter efficiency, it is necessary to first define a
|
316 |
+
|
317 |
+
‘parameter-reducible’ search space, where each di-
|
318 |
+
mension within the space potentially contributes
|
319 |
+
to reducing the parameter budget. Similarly, each
|
320 |
+
dimension might potentially bring positive impact
|
321 |
+
to the task performance without introducing redun-
|
322 |
+
dancy in the space (Wan et al., 2022). Therefore,
|
323 |
+
we propose the search space with representative
|
324 |
+
PEFT modules, as follows, spanning a plethora of
|
325 |
+
(non-redundant) configurations, as also shown in
|
326 |
+
Figure 2.
|
327 |
+
PEFT Modules. We include three distinctive PEFT
|
328 |
+
designs to efficiently adapt different forwarding
|
329 |
+
stages of hidden states in the PLM layers. We
|
330 |
+
combine Serial Adapters (SA), Parallel Adapters
|
331 |
+
(PA), and Prefix-Tuning (PT) as the three represen-
|
332 |
+
tative modules in the search space, where the PT
|
333 |
+
module adapts the multi-head attention layer, and
|
334 |
+
SA and PA interact with the FFN layer (Figure 2).
|
335 |
+
Each configuration makes a decision on the PEFT
|
336 |
+
modules in the insertion layer: all of them can be
|
337 |
+
‘turned’ on or off. We combine this binary decision
|
338 |
+
with the actual non-binary decision on the module
|
339 |
+
size (see next), so that the value of 0 in fact denotes
|
340 |
+
the absence of the modules in the layer(s).
|
341 |
+
Size. Previous studies show that PEFT methods
|
342 |
+
are highly sensitive to the number of tunable pa-
|
343 |
+
rameters: adaptively setting their capacity in ac-
|
344 |
+
cordance with the target task is then essential for
|
345 |
+
achieving good performance (Chen et al., 2022a).
|
346 |
+
The number of tunable parameters is dependent on
|
347 |
+
each particular module. The additional parameters
|
348 |
+
introduced by both SA and PA are dominated by
|
349 |
+
their bottleneck dimension D. Similarly, the size
|
350 |
+
of the PT module is defined by its prefix length
|
351 |
+
LPT. Thus, we define a binary logarithmic search
|
352 |
+
scale for the respective discrete sets DSA, DPA,
|
353 |
+
and LPT, spanning the values from 0 (absence of
|
354 |
+
the module) to Dh where Dh is the dimensionality
|
355 |
+
of the PLM (e.g., Dh=768 for BERTbase).
|
356 |
+
Insertion Layers. Prior work has also shown that
|
357 |
+
different layers in the PLMs store different se-
|
358 |
+
mantic information (Vuli´c et al., 2020), where the
|
359 |
+
higher layers produce more task-specific and con-
|
360 |
+
textualized representations (Tenney et al., 2019).
|
361 |
+
Therefore, as another configuration dimension, we
|
362 |
+
aim to search for the minimal number and the ac-
|
363 |
+
tual position of layers in which to insert the PEFT
|
364 |
+
modules. We define a binary ‘insertion’ decision at
|
365 |
+
each layer li.
|
366 |
+
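To make these three groups of decisions concrete, here is a minimal sketch of how one candidate configuration could be encoded, assuming 12 Transformer layers (BERTbase); the class name and the exact value grid are illustrative assumptions, not the paper's released code:

```python
# Illustrative encoding of one AUTOPEFT candidate configuration
# (hypothetical names; the size grid is an assumption, see the text).
from dataclasses import dataclass
from typing import List

# Binary-logarithmic size scale from 0 (module absent) towards D_h.
SIZES = [0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512]

@dataclass
class PEFTConfig:
    insert_layer: List[bool]  # binary insertion decision per Transformer layer
    d_sa: int                 # serial-adapter bottleneck dimension D_SA
    d_pa: int                 # parallel-adapter bottleneck dimension D_PA
    l_pt: int                 # prefix length L_PT

# One point in the space for 12-layer BERT_base: adapt only layers 9-12.
cfg = PEFTConfig(
    insert_layer=[False] * 8 + [True] * 4, d_sa=64, d_pa=64, l_pt=16
)
assert all(s in SIZES for s in (cfg.d_sa, cfg.d_pa, cfg.l_pt))
```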
[Figure 2: Illustration of the main components of our configuration search space, traversed via AUTOPEFT. Within each frozen Transformer layer (multi-head attention + LayerNorm, feed-forward + LayerNorm), trainable Prefix-Tuning, Serial, and Parallel PEFT modules are searched over. AUTOPEFT configures the selected Transformer layers with PEFT modules, where the activation of each submodule is controlled by the learned size of each submodule. See also Table 4 in the appendix.]

Combining PEFT Modules. The SA module and the PA module share a bottleneck architecture. The SA receives hidden states from the FFN output as its inputs, adapting them with a down-projection matrix $W^{\text{down}}_{\text{SA}} \in \mathbb{R}^{D_h \times D_{\text{SA}}}$, followed by a non-linear activation function, and then an up-projection matrix $W^{\text{up}}_{\text{SA}} \in \mathbb{R}^{D_{\text{SA}} \times D_h}$:

$$f_{\text{SA}}(h) = \mathrm{ReLU}(h\,W^{\text{down}}_{\text{SA}})\,W^{\text{up}}_{\text{SA}}. \tag{1}$$

PA, on the other hand, receives its inputs from the hidden states before the FFN layer, with the same formulation:

$$f_{\text{PA}}(x) = \mathrm{ReLU}(x\,W^{\text{down}}_{\text{PA}})\,W^{\text{up}}_{\text{PA}}. \tag{2}$$

Therefore, it is able to act in parallel with the SA without interference. Note that the FFN hidden states $h = F(x)$ contain the task-specific bias learned in its pretrained weights. Therefore, by combining SA with PA, the following composition of functions is achieved:

$$f_{\text{SAPA}}(x) = \mathrm{ReLU}(F(x)\,W^{\text{down}}_{\text{SA}})\,W^{\text{up}}_{\text{SA}} + \mathrm{ReLU}(x\,W^{\text{down}}_{\text{PA}})\,W^{\text{up}}_{\text{PA}}. \tag{3}$$

The final composition should provide an effective adaptation to both the bias-influenced hidden states and the original inputs before the pretrained FFN layer.¹ Further, applying PEFT modules to interact with both FFNs and multi-head attention should have a positive impact on task performance (Mao et al., 2022; He et al., 2022). PT learns two prefix vectors, $P_k$ and $P_v \in \mathbb{R}^{L_{\text{PT}} \times D_h}$, that are concatenated with the original multi-head attention's key and value vectors, which efficiently adapts the multi-head attention layer to fit the target task. We thus finally combine the SA and the PA (i.e., SAPA from above) with PT.

¹The PA module also acts as a low-rank reparametrization of the learned SA together with the frozen FFN layer, to further match the intrinsic dimensionality of the target task.

[Figure 3: Illustration of the AUTOPEFT framework: to search for optimal architectures in the defined configuration space, AUTOPEFT uses a multi-objective BO agent, which trains on previous observations of the PEFT configuration vector (serial, parallel, prefix, and layer configuration) and its performance (e.g., accuracy – obtained by fine-tuning the language model with the PEFT configuration) and cost (e.g., number of parameters). The BO agent then suggests new configurations by maximising the acquisition function over a GP surrogate, and the algorithm continues iteratively until convergence.]
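A minimal PyTorch sketch of the SA/PA composition in Eqs. (1)-(3); the class names are illustrative, the residual connections and layer norms of the full Transformer block are omitted, and biases are dropped to match the equations exactly:

```python
# PyTorch sketch of the SA/PA composition in Eqs. (1)-(3); residuals and
# layer norms of the full Transformer block are intentionally omitted.
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """ReLU bottleneck adapter: x -> ReLU(x W_down) W_up (Eqs. 1-2)."""
    def __init__(self, d_model: int, d_bottleneck: int):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck, bias=False)
        self.up = nn.Linear(d_bottleneck, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(torch.relu(self.down(x)))

class SAPA(nn.Module):
    """f_SAPA(x) = SA(F(x)) + PA(x), with the pretrained FFN F frozen (Eq. 3)."""
    def __init__(self, ffn: nn.Module, d_model: int, d_sa: int, d_pa: int):
        super().__init__()
        self.ffn = ffn
        for p in self.ffn.parameters():       # freeze pretrained weights
            p.requires_grad = False
        self.sa = Bottleneck(d_model, d_sa)   # serial adapter on h = F(x)
        self.pa = Bottleneck(d_model, d_pa)   # parallel adapter on x

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ffn(x)                       # bias-influenced hidden states
        return self.sa(h) + self.pa(x)

# Smoke test with a stand-in "pretrained" FFN:
layer = SAPA(ffn=nn.Linear(768, 768), d_model=768, d_sa=64, d_pa=64)
out = layer(torch.randn(2, 16, 768))          # (batch, seq, d_model)
```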
In sum, the overview of the dimensions spanning the final configuration space is provided in Figure 2 and Table 4. The combination of the different 'configuration dimensions' outlined above gives rise to a total of, e.g., 5,451,776 possible configurations with BERTbase and ~3×10^10 configurations with RoBERTalarge (i.e., the number of configurations is 2^|l| × |D_SA| × |D_PA| × |L_PT|). While a large search space is crucial for expressiveness and for ensuring that good-performing configurations are contained, it also increases the difficulty for search strategies to navigate the space well while remaining sample- and thus computationally efficient. Furthermore, in the PEFT setting, we are also often interested in discovering a family of configurations that trade off between performance and efficiency for general application in various scenarios with different resource constraints, thus giving rise to a multi-objective optimisation problem where we simultaneously aim to maximise performance while minimising costs. In what follows, we propose a search framework that satisfies all those criteria.
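As a sanity check of the quoted counts: the BERTbase total is consistent with 11 candidate values per size dimension over 12 layers, and the RoBERTalarge figure with 12 values over 24 layers; the exact grids are inferred here from these totals (and Table 4), not stated in this passage:

```python
# Sanity check of the quoted search-space sizes:
# |A| = 2^{|l|} * |D_SA| * |D_PA| * |L_PT|.
def space_size(n_layers: int, sizes_per_dim: int) -> int:
    return (2 ** n_layers) * sizes_per_dim ** 3

print(space_size(12, 11))  # 5451776      -- the BERT_base count
print(space_size(24, 12))  # 28991029248  -- ~3e10, the RoBERTa_large figure
```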
search framework that satisfies all those criteria.
|
490 |
+
AUTOPEFT via Multi-Objective Bayesian Opti-
|
491 |
+
misation. Formally, denoting the full AUTOPEFT
|
492 |
+
search space as A and a single configuration a ∈ A
|
493 |
+
with trainable weights W, without loss of gener-
|
494 |
+
ality, assuming our objective is to maximise (i) a
|
495 |
+
performance metric f(a, W) (e.g., the accuracy
|
496 |
+
on the dev set) and to (ii) minimise a cost metric
|
497 |
+
g(a) (e.g., the number of parameters in a), a search
|
498 |
+
method aims to solve the bi-level, bi-objective op-
|
499 |
+
timisation problem:
|
500 |
+
max
|
501 |
+
a∈A
|
502 |
+
�
|
503 |
+
f(a, W ∗), −g(a)
|
504 |
+
�
|
505 |
+
;
|
506 |
+
s.t.W ∗ = arg min
|
507 |
+
W Ltrain(a, W),
|
508 |
+
(4)
|
509 |
+
where the inner loop optimisation problem is the op-
|
510 |
+
timisation of the configuration weights achieved by
|
511 |
+
fine-tuning the configuration a itself over the train
|
512 |
+
loss Ltrain. Given the bi-objective nature of the
|
513 |
+
problem, there is in general no single maximiser of
|
514 |
+
Eq. (4) but a set of non-dominated Pareto-optimal
|
515 |
+
configurations A∗ = {a∗
|
516 |
+
1, ..., a∗
|
517 |
+
|A∗|}.
|
518 |
+
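Extracting the non-dominated set $\mathcal{A}^*$ from a pool of evaluated configurations is a standard Pareto filter; a small self-contained sketch over (performance, -cost) pairs, both to be maximised:

```python
# Sketch: boolean mask of the non-dominated (Pareto-optimal) rows.
import numpy as np

def pareto_mask(objectives: np.ndarray) -> np.ndarray:
    """objectives: (n, 2) array of (f(a, W*), -g(a)); larger is better."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # row j dominates row i if j >= i everywhere and j > i somewhere
        dominated_by = (
            np.all(objectives >= objectives[i], axis=1)
            & np.any(objectives > objectives[i], axis=1)
        )
        keep[i] = not dominated_by.any()
    return keep

# e.g. (accuracy, -% params) for three hypothetical configurations:
scores = np.array([[0.72, -1.42], [0.70, -0.06], [0.66, -6.46]])
print(pareto_mask(scores))  # -> [ True  True False]
```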
To address these challenges, in this work we adopt a Bayesian optimisation (BO) approach, illustrated in Figure 3. BO is a sample-efficient, zeroth-order, model-based sequential optimisation algorithm (Garnett, 2023) with proven successes in NAS and automated machine learning in general (Snoek et al., 2012; White et al., 2021a; Ru et al., 2021; Kandasamy et al., 2018). BO is particularly popular in multi-objective setups where one is interested in recovering a Pareto front, where it is less straightforward to apply methods such as differentiable / one-shot architecture search methods that are typically used to discover a single best-performing configuration (Eriksson et al., 2021; Izquierdo et al., 2021). BO consists of a surrogate model, usually a Gaussian Process (GP), that sequentially approximates the objective function based on the observations so far, and an acquisition function, which balances between exploitation (i.e., regions in the search space with high perceived value) and exploration (i.e., regions that have not been visited before). The acquisition function is optimised at each iteration to actively select the next configuration to evaluate. For a detailed overview of BO, we refer the readers to Frazier (2018).

[Uncaptioned illustration: samples from a Matérn-kernel GP prior and posterior, with the posterior mean, ±1 std. dev., and observations.]

While vanilla BO methods are better suited to modestly-dimensioned and continuous problems, our current setup instead features a high-dimensional and combinatorial search space. Here, the performance of non-parametric methods such as GP-based BO tends to suffer due to the exponentially exploding volume of space the surrogate needs to model as dimensionality increases. Fortunately, recent advances in search methods have allowed us to address these challenges effectively. Specifically, we adopt the SAAS-GP (Eriksson and Jankowiak, 2021) model as the surrogate function: on a high level, SAAS-GP (1) places a relatively strong regularising half-Cauchy prior on the model lengthscales (which dictate the perceived importance of search dimensions to the objective function value) to induce sparsity, and (2) approximately marginalises over model hyperparameters via a No-U-Turn Monte Carlo sampler (Hoffman et al., 2014) to reduce overfitting in high dimensions. We argue that both are appealing in our setup: the benefit of (2) is self-evident, and (1) effectively places a prior encoding our belief that, in spite of the nominally high-dimensional search space, the effective dimensionality of the problem should be much lower. This is appropriate in our setup since, consistent with previous findings in NAS (Wan et al., 2022), we expect a few disproportionately influential key dimensions (although we do not know which ones a priori – this is meant to be discovered by the BO algorithm).
For the acquisition function, we use the noisy expected hypervolume improvement (NEHVI) (Daulton et al., 2021), which is suitable for the setup described in Eq. (4).
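The following is a compressed sketch of how one such search iteration could look with BoTorch's SAAS-GP and qNEHVI implementations (an assumed tooling choice on our part; exact keyword requirements vary across BoTorch versions, and the paper's own code may differ):

```python
# Sketch of one AUTOPEFT-style BO iteration with BoTorch (assumed tooling).
import torch
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP
from botorch.models.model_list_gp_regression import ModelListGP
from botorch.fit import fit_fully_bayesian_model_nuts
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)

def suggest_next(train_x, train_y, candidates, ref_point):
    """train_x: (n, d) configs encoded in [0, 1]^d; train_y: (n, 2) with
    columns (dev-set score, -log #params), both to be maximised."""
    gps = []
    for i in range(train_y.shape[-1]):          # one SAAS GP per objective
        gp = SaasFullyBayesianSingleTaskGP(train_x, train_y[:, i:i + 1])
        fit_fully_bayesian_model_nuts(gp)       # NUTS marginalisation
        gps.append(gp)
    acqf = qNoisyExpectedHypervolumeImprovement(
        model=ModelListGP(*gps),
        ref_point=ref_point,                    # worst acceptable objectives
        X_baseline=train_x,
    )
    with torch.no_grad():                       # score a discrete pool
        values = acqf(candidates.unsqueeze(1))  # shape (m, 1, d) -> (m,)
    return candidates[values.argmax()]
```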
Lastly, while BO is sample-efficient, it may still require 100-200 evaluations of different configurations to sufficiently explore the search space. To make sure the search remains cost-efficient, during search we also adopt low-fidelity approximations commonly employed in NAS: at the search stage, for a configuration a, instead of evaluating the objective f(a, W) defined in Eq. (4) in full, we only fine-tune a using a smaller computational budget. For example, if a complete fine-tuning takes 100% of the training data, at search time we may fine-tune with only 1% of the training data and use the resulting accuracy as a lower-cost proxy for the accuracy after full-length FT, the latter of which is significantly more expensive to obtain. Therefore, for high-resource tasks, fine-tuning on the full training set is only performed once, at evaluation time, after the Pareto-optimal configurations are finalised. Other low-cost proxies, such as training for fewer epochs than full FT, are also compatible but not used in the present work.
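The 1% low-fidelity proxy amounts to subsampling the training split before each search-stage fine-tuning run; a sketch with the Hugging Face datasets library (the library choice and the RTE example are our assumptions):

```python
# Sketch: build a ~1% low-fidelity training split for the search stage
# (Hugging Face `datasets`; RTE is used here purely as an example task).
from datasets import load_dataset

full_train = load_dataset("glue", "rte")["train"]
n_proxy = max(1, int(0.01 * len(full_train)))
proxy_train = full_train.shuffle(seed=42).select(range(n_proxy))
# Candidate configurations are fine-tuned on `proxy_train` during search;
# only the finalised Pareto-optimal configurations see `full_train`.
print(len(full_train), len(proxy_train))
```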
4   Experimental Setup

Evaluation Data. We follow prior PEFT research and base our evaluation on the standard GLUE benchmark. We include four types of text classification tasks: linguistic acceptability (CoLA); similarity and paraphrase (STS-B, MRPC, QQP); sentiment analysis (SST-2); and natural language inference (RTE, QNLI, MNLI). We exclude WNLI following previous work (Houlsby et al., 2019; Mao et al., 2022).

Baselines. We compare the performance of the AUTOPEFT-found configurations to standard full model FT and to each individual PEFT module (SA, PA, PT) from the search space, used in its default setup from the respective original work. We also compare with the LoRA module, to provide a comparison to low-rank decomposition methods. In order to provide comparisons with recently proposed methods that also integrate multiple PEFT modules (see §2), we further include UniPELT and the MAM adapter in their default settings. We reproduce AdaMix for a comparison to a mixture of homogeneous adaptations. In ablations on insertion layers, we also include the Adaptable Adapter (AA) as a baseline, which proposes a differentiable gate-learning method to select the insertion layers for PEFT modules (i.e., serial adapters originally).

Implementation Details. Following previous work on the GLUE benchmark, we report the best GLUE dev set performance (Ben Zaken et al., 2022) and use 20 training epochs with an early stopping scheme of 10 epochs for all tasks. We use AdapterHub (Pfeiffer et al., 2020a) as the codebase and conduct extensive experiments with the uncased BERTbase (Devlin et al., 2019) as the main backbone model. We report main experiments with the mean and standard deviation over 5 different random seeds. Experimental results using RoBERTalarge (Liu et al., 2019b) show findings that are consistent with those for BERTbase and are included in Table 3 in the appendix. We report the setup for each PEFT module and the detailed training scheme in §A.
| Method            | #Param.      | RTE        | MRPC       | STS-B      | CoLA        | SST-2       | QNLI       | QQP        | MNLI        | Avg.  |
|-------------------|--------------|------------|------------|------------|-------------|-------------|------------|------------|-------------|-------|
| Fine-tune         | 100%         | 71.12±1.46 | 85.74±1.75 | 89.00±0.45 | 59.32±0.62  | 92.57±0.24  | 91.50±0.08 | 91.52±0.04 | 84.43±0.22  | 83.15 |
| Prefix            | 0.17%        | 70.54±0.49 | 85.93±0.89 | 88.76±0.15 | 58.88±1.15  | 91.93±0.45  | 90.76±0.14 | 89.12±0.07 | 82.78±0.16  | 82.33 |
| LoRA              | 0.27%        | 65.85±1.49 | 84.46±1.04 | 88.73±0.08 | 57.58±0.78  | 92.06±0.38  | 90.62±0.22 | 89.41±0.04 | 83.00±0.07  | 81.46 |
| Serial            | 0.81%        | 68.01±1.34 | 84.75±0.45 | 88.61±0.11 | 59.73±0.62  | 91.93±0.33  | 91.06±0.12 | 90.52±0.05 | 84.18±0.22  | 82.35 |
| AdaMix            | 0.81%        | 70.11±0.62 | 86.86±1.12 | 89.12±0.11 | 59.11±1.00  | 92.06±0.22  | 91.52±0.15 | 90.22±0.04 | 84.25±0.14  | 82.91 |
| UniPELT           | 1.25%        | 67.07±1.82 | 84.22±0.78 | 88.84±0.11 | 60.13±0.46  | 92.52±0.24  | 91.09±0.13 | 90.69±0.11 | 84.28±0.18  | 82.35 |
| Parallel          | 6.46%        | 68.52±3.44 | 86.52±0.96 | 88.90±0.28 | 58.72±1.69  | 92.13±0.35  | 90.83±0.22 | 90.74±0.08 | 73.93±19.24 | 81.29 |
| MAM               | 6.97%        | 69.10±1.76 | 87.16±0.74 | 89.01±0.48 | 47.87±23.97 | 83.94±16.52 | 90.85±0.22 | 90.76±0.05 | 83.31±0.17  | 80.25 |
| AUTOPEFT_S^RTE    | 0.06%        | 69.68±0.76 | 85.54±0.78 | 88.78±0.18 | 56.83±0.54  | 91.93±0.34  | 90.81±0.18 | 88.51±0.05 | 82.26±0.11  | 81.79 |
| AUTOPEFT_S^MNLI   | 0.30%        | 69.77±0.47 | 85.73±0.61 | 88.78±0.17 | 57.50±1.79  | 91.88±0.32  | 91.12±0.13 | 89.90±0.05 | 83.92±0.10  | 82.32 |
| AUTOPEFT_M^RTE    | 1.42%        | 72.35±0.84 | 86.13±0.62 | 89.06±0.09 | 60.23±1.00  | 92.11±0.23  | 91.00±0.09 | 90.64±0.07 | 84.01±0.21  | 83.19 |
| AUTOPEFT_L^RTE    | 6.60%        | 71.70±1.18 | 86.62±0.65 | 89.19±0.13 | 59.44±0.75  | 92.41±0.28  | 91.09±0.12 | 90.79±0.06 | 83.91±0.14  | 83.14 |
| AUTOPEFT_task     | 1.40% (avg.) | 72.35±0.94 | 87.45±0.87 | 89.17±0.00 | 60.92±1.47  | 92.11±0.25  | 91.12±0.13 | 90.64±0.05 | 84.01±0.10  | 83.47 |

Table 1: Results on the GLUE benchmark with BERTbase, where tasks are ordered in ascending order of training resources. We conduct three groups of task transferability experiments on RTE and one resource scalability experiment on MNLI. We report the average number of fine-tuned parameters for per-task AUTOPEFT (AUTOPEFT_task), where we conduct additional per-task searches on MRPC, STS-B, and CoLA, and take the best-found configurations for the remaining tasks. We report Spearman's correlation for STS-B, Matthews correlation for CoLA, and accuracy for all other tasks (matched accuracy for MNLI). The percentage of parameters is computed as the ratio of the number of additional parameters to the pretrained parameters. We reproduce all baselines and report the mean and standard deviation of all results over 5 random seeds. The best, second-best, and third-best results are marked in bold fonts and ranked by colour.
5 Results and Discussion

Transferability of Configurations across Tasks. The main results are summarized in Table 1. First, we analyze the task transferability of AUTOPEFT-found configurations by running AUTOPEFT on the lowest-resource and most challenging task, RTE, followed by transferring the three best AUTOPEFT-found configurations to other tasks. We note that the parameter budget of the configuration AUTOPEFT^RTE_M is only 1.42%, while it shows considerable average gains over all the PEFT baselines on the RTE task, by a margin of at least 2%. The AUTOPEFT-found configuration also outperforms the full-model FT baseline on the RTE task by more than 1%. These results indicate the effectiveness of the AUTOPEFT framework in optimising both task performance and parameter efficiency. Transferring the RTE-based configurations to other tasks, we find that strong performance is maintained across the target tasks, with more benefits on the medium-resource tasks (MRPC, STS-B, CoLA), but the configuration remains competitive also for higher-resource tasks (e.g., QQP, MNLI).
[Figure 4: The Pareto front of AUTOPEFT on the tasks RTE and MRPC compared to baselines (Serial, Parallel, Prefix, LoRA) with BERT-base in various settings of parameter budgets; the x-axis shows fine-tuned parameters (%) on a log scale and the y-axis the task score. We report the single-seed task score for each task following the settings in Table 1. The plots for STS-B and CoLA, showing the same trends, are in Appendix §B.]
When we assign a large parameter budget to the potential configurations, AUTOPEFT^RTE_L also shows stronger transfer performance on high-resource tasks. This indicates that, as expected, the parameter capacity of the configuration is an important factor in transfer learning (Chen et al., 2022a). On average, the AUTOPEFT^RTE_M configuration shows fine-tuning performance (83.19) comparable to full-model FT (83.15) while updating only 1.42% of the parameters. With strong transferability across similar tasks, AUTOPEFT provides distinct advantages in parameter efficiency; the search algorithm itself, coupled with transfer, becomes more sample-efficient within limited training resources.
Resource Scalability and Efficiency. We next 'stress-test' the ability of AUTOPEFT in a more challenging scenario with limited task training data, carrying out an experiment on the highest-resource task, MNLI, using only a small subset of its training data. We randomly sample 1% of the original MNLI training data to train AUTOPEFT, and retain the original dev set for evaluation.[2] We report AUTOPEFT^MNLI_S in Table 1 as the best-found configuration in this low-resource setting. It requires only 0.30% of fine-tuned parameters and achieves a strong MNLI performance of 83.92%. In another efficiency-oriented test, we conduct configuration transfer in a radically parameter-efficient setup (training on the full RTE training set but with a reduced parameter budget, and then transferring to other tasks; AUTOPEFT^RTE_S in Table 1). The main finding is that, while performance decreases slightly as expected, strong task performance can still be achieved even with a parameter budget of 0.06% within this very efficient setup.
Per-Task Configuration Search. Finally, we conduct full-resource per-task AUTOPEFT searches, which naturally come with increased computational costs, for RTE, MRPC, STS-B, and CoLA, and then, for efficiency reasons, port the small set of best configurations to the remaining high-resource tasks: SST-2, QNLI, QQP, MNLI. In addition to the peak score on RTE, we observe gains on MRPC (87.16% to 87.45%) and CoLA (60.13% to 60.92%) over the best-performing PEFT baselines. We also observe gains over the transferred configuration AUTOPEFT^RTE_M. One interpretation of the results is that AUTOPEFT is strong at matching the intrinsic dimensionality of the low-resource downstream task to the capacity (i.e., parameter budget) of the PEFT modules, whereas full-model FT performs better in high-resource scenarios, as it has the largest capacity to capture the information in high-resource tasks.[3] However, the per-task AUTOPEFT^task variant outperforms even full-model FT by 0.3% while its parameter budget is only 1.4% of the full model per task.
Analysing the 'Behaviour' of Bayesian Optimisation. Figure 5 shows the distribution of AUTOPEFT-found configurations when we conduct the search experiment on RTE. Due to the greedy nature of our predefined acquisition function, we enforce the initialisation of our algorithm with a wide exploration of potential configurations. In the subsequent AUTOPEFT runs, it starts exploiting the best-found configurations while optimising towards the region with improved parameter efficiency, whereas the random search baseline keeps obtaining inefficient configurations in a lottery-ticket manner in the expensive region of parameters. We observe that AUTOPEFT exploits the region with roughly 1.4% of parameters and finds configurations with further enhanced task performance, from 74.4% to 75.1% accuracy; this is also the architecture AUTOPEFT^RTE_M with the strongest transferability across tasks. We also include the best-found architecture from the initialisation stage as AUTOPEFT^RTE_L, and our transferability experiments show that the AUTOPEFT-found architecture is more robust to the random initialisation of the neural network, outperforming the best random search baseline on the searched task by 0.7% at 5.2% lower parameter cost.

[Figure 5: The distribution of AUTOPEFT-found configurations compared to random search on RTE with a single random seed (accuracy (%) vs. fine-tuned parameters (%), with points marked as Initialisation, Random Search, and AutoPEFT). We initialise the AUTOPEFT search with 100 runs of random sampling for initial exploration of the search space. We then conduct 100 runs of AUTOPEFT with Bayesian optimisation.]

[2] With this setup, we effectively save 99% of training resources, and the search framework becomes extremely fast even for high-resource datasets.

[3] Due to the richness of training resources in high-resource datasets, the results on these tasks are mostly saturated. Previous work shows that PEFT methods can only reach on-par performance with full-model FT on those tasks.
Ablation of the Configuration Space. To provide a finer-grained analysis of the factors that bring a positive impact to AUTOPEFT, we ablate the AUTOPEFT search space from the full configuration space: 1) to the basic enumeration of the bottleneck size D_SA of the SA only (the 'SA' space). We then include the Transformer layer and the SA size together in the search space (the 'SA-Layer' space) to validate the usefulness of using layer selection as one configuration dimension. We can then also expand the search space by adding another module (e.g., PA yields the 'SA-PA-Layer' space). Figure 6 plots the performance over the 'ablated' configuration spaces and over different parameter budgets. Several key findings emerge. First, combining multiple single PEFT modules has a positive impact on AUTOPEFT in general (cf. full AUTOPEFT versus 'SA-PA-Layer' versus 'SA-Layer'). Relying on layer selection also brings benefits (cf. 'SA' versus 'SA-Layer'). The comparison also indicates that leaving out some Transformer layers while increasing the capacity of the PEFT module is a straightforward method to improve the parameter efficiency and task performance of the PEFT module within a fixed parameter budget. Figure 6 suggests that AUTOPEFT can effectively operate over configuration spaces of different 'granularity'. We analyse the impact of each single PEFT module in more detail in Appendix §B.

| Method | #Layers | Size D_SA | RTE Accuracy (%) |
|---|---|---|---|
| Serial Adapter | 24 | 64 | 72.56±0.76 |
| Adaptable Adapter | 13 | 128 | 73.36±0.80 |
| AdapterDrop | 13 | 128 | 73.50±1.40 |
| AUTOPEFT (SA-Layer) | 10 | 128 | 73.86±0.94 |

Table 2: The results of AUTOPEFT compared to layer selection baselines with the same parameter budget on BERT-large. We report the Pfeiffer adapter for all 24 layers. We include the specialised AdapterDrop (Rücklé et al., 2021) that inserts SA in the last 13 layers. We report the AA_uni architecture (Moosavi et al., 2022) without its rational activation function with 13 selected layers. We run our AUTOPEFT with the comparable search space of 24 layers and the size of the Pfeiffer adapter.
Layer Selection. To further compare different layer selection approaches, we conduct a controlled experiment with the SA module on BERT-large (24 Transformer layers) under a predefined parameter budget. In Table 2, the AdapterDrop approach simply drops the adapters in the first 11 layers while doubling the bottleneck sizes of the rest, improving the RTE result by roughly 1%. Within the same architecture, we include the Adaptable Adapter with layers selected by switch learning, which has 3 and 10 selected layers from the first 12 and the last 12 layers, respectively. We show that AUTOPEFT outperforms all existing layer selection baselines by learning fewer activated adapter layers, leading to better parameter efficiency (12.5% fewer parameters in relative terms) and higher task performance. This indicates that selecting the best insertion layers is non-trivial, and AUTOPEFT can learn the correlation between layers.

[Figure 6: The performance of AUTOPEFT with ablations of the search space (SA, SA-Layer, SA-PA-Layer, PA-PT-Layer, AutoPEFT; accuracy (%) vs. fine-tuned parameters (%)) on RTE with a single random seed on BERT-base. The SA results refer to the Pfeiffer adapter (Pfeiffer et al., 2020b) with an enumeration of its bottleneck size. For other search spaces, we report the Pareto front of AUTOPEFT-found configurations, where SA-PA-PT-Layer forms the search space of AUTOPEFT.]
6 Conclusion

We proposed AUTOPEFT, a novel search framework for automatically configuring various PEFT modules in selected layers of pretrained language models. AUTOPEFT searches for the optimal architecture via Bayesian optimisation, iteratively evaluating and predicting desirable architectures within the configuration search space. The proposed multi-objective optimisation can produce a Pareto front of candidate architectures by simultaneously maximising model performance and parameter efficiency. We demonstrated that AUTOPEFT-found architectures offer an effective trade-off between task performance and parameter efficiency, outperforming a variety of PEFT baselines.
Limitations

The proposed AUTOPEFT method is relatively expensive since it requires iterative optimisation, training the model on each explored configuration. While all intermediate configurations can be discarded without burdening the final storage space, the intermediate computation cost becomes the main bottleneck of this approach. In this work, we alleviated this problem by (i) conducting the search with 1% of training resources for large datasets, and (ii) transferring configurations from low-resource tasks. The search itself can be seen as a one-time cost yielding a 'permanent' well-performing and shareable configuration for particular tasks. We plan to delve deeper into the related efficiency and computational tractability aspects in future work.

We have conducted extensive experiments on a search space that contains three representative PEFT modules. The AUTOPEFT framework is decoupled from the actual single PEFT modules: as further PEFT developments and new PEFT approaches emerge, they can also be integrated into the AUTOPEFT framework in future work.
Acknowledgements

Xingchen Wan is supported by the Clarendon Scholarship at University of Oxford. The work has been supported in part by a personal Royal Society University Research Fellowship (no 221137; 2022-) awarded to Ivan Vulić.
References

Alan Ansell, Edoardo Ponti, Anna Korhonen, and Ivan Vulić. 2022. Composable sparse fine-tuning for cross-lingual transfer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1778-1796, Dublin, Ireland. Association for Computational Linguistics.

Maximilian Balandat, Brian Karrer, Daniel Jiang, Samuel Daulton, Ben Letham, Andrew G. Wilson, and Eytan Bakshy. 2020. BoTorch: A framework for efficient Monte-Carlo Bayesian optimization. Advances in Neural Information Processing Systems, 33:21524-21538.

Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1-9, Dublin, Ireland. Association for Computational Linguistics.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, and Shangsong Liang. 2022a. Revisiting parameter-efficient tuning: Are we really there yet? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2612-2626, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, and Ping Luo. 2022b. AdaptFormer: Adapting vision transformers for scalable visual recognition. In Advances in Neural Information Processing Systems.

Samuel Daulton, Maximilian Balandat, and Eytan Bakshy. 2021. Parallel Bayesian optimization of multiple noisy objectives with expected hypervolume improvement. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 2187-2200.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Xuanyi Dong and Yi Yang. 2020. NAS-Bench-201: Extending the scope of reproducible neural architecture search. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.

Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. 2019. Neural architecture search: A survey. The Journal of Machine Learning Research, 20(1):1997-2017.

David Eriksson, Pierce I-Jen Chuang, Samuel Daulton, Peng Xia, Akshat Shrivastava, Arun Babu, Shicong Zhao, Ahmed A. Aly, Ganesh Venkatesh, and Maximilian Balandat. 2021. Latency-aware neural architecture search with multi-objective Bayesian optimization. In 8th ICML Workshop on Automated Machine Learning (AutoML).

David Eriksson and Martin Jankowiak. 2021. High-dimensional Bayesian optimization with sparse axis-aligned subspaces. In Uncertainty in Artificial Intelligence, pages 493-503. PMLR.

Peter I. Frazier. 2018. A tutorial on Bayesian optimization. CoRR, abs/1807.02811.

Roman Garnett. 2023. Bayesian Optimization. Cambridge University Press.

Demi Guo, Alexander Rush, and Yoon Kim. 2021. Parameter-efficient transfer learning with diff pruning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4884-4896, Online. Association for Computational Linguistics.

Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022.

Matthew D. Hoffman, Andrew Gelman, et al. 2014. The No-U-Turn Sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. J. Mach. Learn. Res., 15(1):1593-1623.

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 2790-2799.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022a. LoRA: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022.

Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, and Maosong Sun. 2022b. Sparse structure search for delta tuning. In Advances in Neural Information Processing Systems.

Sergio Izquierdo, Julia Guerrero-Viu, Sven Hauns, Guilherme Miotto, Simon Schrodi, André Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, and Frank Hutter. 2021. Bag of baselines for multi-objective joint neural architecture search and hyperparameter optimization. In 8th ICML Workshop on Automated Machine Learning (AutoML).

Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabás Póczos, and Eric P. Xing. 2018. Neural architecture search with Bayesian optimisation and optimal transport. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 2020-2029.

Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Liam Li and Ameet Talwalkar. 2019. Random search and reproducibility for neural architecture search. In Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI 2019, Tel Aviv, Israel, July 22-25, 2019, pages 367-377.

Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597, Online. Association for Computational Linguistics.

Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2019a. DARTS: Differentiable architecture search. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.

Haokun Liu, Derek Tam, Muqeeth Mohammed, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. 2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. In Advances in Neural Information Processing Systems.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.

Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient low-rank hypercomplex adapter layers. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 1022-1035.

Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Scott Yih, and Madian Khabsa. 2022. UniPELT: A unified framework for parameter-efficient language model tuning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6253-6264, Dublin, Ireland. Association for Computational Linguistics.

Nafise Moosavi, Quentin Delfosse, Kristian Kersting, and Iryna Gurevych. 2022. Adaptable adapters. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3742-3753, Seattle, United States. Association for Computational Linguistics.

Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. 2022. Lifting the curse of multilinguality by pre-training modular transformers. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3479-3495, Seattle, United States. Association for Computational Linguistics.

Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020a. AdapterHub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46-54, Online. Association for Computational Linguistics.

Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020b. MAD-X: An adapter-based framework for multi-task cross-lingual transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654-7673, Online. Association for Computational Linguistics.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67.

Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang. 2021. A comprehensive survey of neural architecture search: Challenges and solutions. ACM Computing Surveys (CSUR), 54(4):1-34.

Bin Xin Ru, Xingchen Wan, Xiaowen Dong, and Michael A. Osborne. 2021. Interpretable neural architecture search via Bayesian optimisation with Weisfeiler-Lehman kernels. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.

Robin Ru, Pedro M. Esperança, and Fabio Maria Carlucci. 2020. Neural architecture generator optimization. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2021. AdapterDrop: On the efficiency of adapters in transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7930-7946, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022.

Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. 2012. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, December 3-6, 2012, Lake Tahoe, Nevada, United States, pages 2960-2968.

Yi-Lin Sung, Varun Nair, and Colin Raffel. 2021. Training neural networks with fixed sparse masks. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 24193-24205.

Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.

Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7222-7240, Online. Association for Computational Linguistics.

Xingchen Wan, Vu Nguyen, Huong Ha, Binxin Ru, Cong Lu, and Michael A. Osborne. 2021. Think global and act local: Bayesian optimisation over high-dimensional categorical and mixed search spaces. In International Conference on Machine Learning, pages 10663-10674. PMLR.

Xingchen Wan, Binxin Ru, Pedro M. Esperança, and Zhenguo Li. 2022. On redundancy and diversity in cell-based neural architecture search. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.

Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, and Jianfeng Gao. 2022. AdaMix: Mixture-of-adaptations for parameter-efficient model tuning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5744-5760, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Colin White, Willie Neiswanger, and Yash Savani. 2021a. BANANAS: Bayesian optimization with neural architectures for neural architecture search. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Virtual Event, February 2-9, 2021, pages 10293-10301.

Colin White, Arber Zela, Robin Ru, Yang Liu, and Frank Hutter. 2021b. How powerful are performance predictors in neural architecture search? In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 28454-28469.

Antoine Yang, Pedro M. Esperança, and Fabio Maria Carlucci. 2020. NAS evaluation is frustratingly hard. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.

Mengjie Zhao, Tao Lin, Fei Mi, Martin Jaggi, and Hinrich Schütze. 2020. Masking as an efficient alternative to finetuning for pretrained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2226-2241, Online. Association for Computational Linguistics.

Barret Zoph and Quoc V. Le. 2017. Neural architecture search with reinforcement learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
A Supplemental Material: Technical Details

PEFT Modules: Architectures and Setup. We implement the serial adapter (SA) architecture following the setup of Pfeiffer et al. (2020b). The parallel adapter (PA) architecture is the same as the one proposed by He et al. (2022), where a scaling factor of 4 is used in all PA experiments. The prefix-tuning (PT) architecture has an intermediate MLP with a bottleneck size of 800, which is trained in the same way as in the original work (Li and Liang, 2021). We also use the default setting for LoRA with a scaling of 8 and a rank of 8. We reproduce the experimental results with the reported setups of the MAM adapter (He et al., 2022) and UniPELT (Mao et al., 2022). We reproduce the AdaMix results with the reported hyperparameter setup from the original work (Wang et al., 2022) in 20 epochs. In the experiments of Figure 4, we control the bottleneck sizes D_SA and D_PA for the SA and PA baselines, respectively, while keeping other setups unchanged, to trace their performance across the parameter budget. Similarly, we control the prefix length L_PT for prefix-tuning and the rank r of LoRA without changing other setups.
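To make the two adapter variants above concrete, the following is a minimal PyTorch sketch of a bottleneck adapter applied serially and in parallel around a Transformer sub-layer. The names (BottleneckAdapter, serial_adapter_step, parallel_adapter_step, scaling) are our own illustrative choices under the setups described above, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project, the shape shared by SA and PA."""
    def __init__(self, d_hidden: int, d_bottleneck: int, scaling: float = 1.0):
        super().__init__()
        self.down = nn.Linear(d_hidden, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_hidden)
        self.act = nn.ReLU()
        self.scaling = scaling  # the text reports a scaling factor of 4 for PA

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scaling * self.up(self.act(self.down(x)))

def serial_adapter_step(sublayer_out: torch.Tensor, adapter: BottleneckAdapter):
    # SA: inserted after the Transformer sub-layer, with a residual connection.
    return sublayer_out + adapter(sublayer_out)

def parallel_adapter_step(h: torch.Tensor, sublayer, adapter: BottleneckAdapter):
    # PA: runs on the sub-layer *input* and is added to the sub-layer output.
    return sublayer(h) + adapter(h)
```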
Training Details. Following previous work (Pfeiffer et al., 2020b), we use the recommended learning rate of 1e-4 for all PEFT experiments. In the RoBERTa-large experiments, we report results with a learning rate of 2e-5 for AUTOPEFT^MRPC and AUTOPEFT^CoLA, and 1e-4 for AUTOPEFT^RTE. We use the learning rate of 2e-5 for full-model FT following Mao et al. (2022). We use a batch size of 32 and 16 for all BERT and RoBERTa experiments, respectively. The optimiser settings for each PEFT module follow the default settings in AdapterHub (Pfeiffer et al., 2020a).
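For reference, the hyperparameters above can be collected into a single mapping; this is purely illustrative bookkeeping with names of our own choosing, not a configuration file from the paper's codebase.

```python
# Illustrative summary of the training hyperparameters listed above.
TRAIN_CFG = {
    "peft_lr": 1e-4,               # all PEFT experiments (Pfeiffer et al., 2020b)
    "full_ft_lr": 2e-5,            # full-model fine-tuning (Mao et al., 2022)
    "batch_size": {"bert": 32, "roberta": 16},
    "roberta_large_lr_overrides": {  # per-configuration learning rates
        "AUTOPEFT^MRPC": 2e-5,
        "AUTOPEFT^CoLA": 2e-5,
        "AUTOPEFT^RTE": 1e-4,
    },
}
```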
AUTOPEFT Search Setup. We implement the BO algorithm in BoTorch (Balandat et al., 2020). We use the Matern 5/2 kernel as the covariance function, and for the Monte Carlo sampling settings of SAAS-BO (Eriksson and Jankowiak, 2021), we use 256 warm-up steps, retain 128 samples, and use a thinning factor of 16. For the optimisation of the acquisition function, to adapt to the discrete setup, we use a local search method similar to previous literature with a similar setup (Wan et al., 2021; Eriksson et al., 2021): at each search iteration (after the initial randomly sampled points), we collect the Pareto-optimal architectures found so far. From this collection of Pareto-optimal architectures, we perform a local search by evaluating the acquisition function values of their neighbours and moving the current point to a neighbour with a higher acquisition function value; this process is repeated until convergence (a local optimum in terms of the acquisition function) or until 100 acquisition function evaluations are reached. At each search iteration, we restart this process 10 times and select the top candidate for the query (in this case, fine-tuning) in the next iteration. For all BO experiments, we use 200 total evaluations; given the noisy nature of the problem, we use a relatively large number of random initialisation points (100) to ensure that the search results are not overly sensitive to initialisation. We use the same hyperparameter settings as described above for all experiments conducted in this paper.

[Figure 7: The Pareto front of the AUTOPEFT framework on the tasks STS-B and CoLA compared to baselines (Serial, Parallel, Prefix, LoRA) with BERT-base in various settings of parameter budgets; the x-axis shows fine-tuned parameters (%) on a log scale and the y-axis the task score. We report the single-seed task score for each task following the settings in Table 1.]
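To make the local search procedure described above concrete, here is a minimal, self-contained Python sketch of it under stated assumptions: configurations are encoded as tuples of discrete choices, acq is a placeholder acquisition function, and neighbours flips one dimension at a time. None of these names come from the paper's codebase; in the actual setup, the acquisition value would come from a multi-objective criterion such as expected hypervolume improvement over the SAAS GP posterior, which the sketch abstracts away behind acq.

```python
import random

def neighbours(config, choices):
    """All configs differing from `config` in exactly one dimension."""
    for i, options in enumerate(choices):
        for option in options:
            if option != config[i]:
                yield config[:i] + (option,) + config[i + 1:]

def local_search(start, choices, acq, max_evals=100):
    """Hill-climb on the acquisition value, as described in the text."""
    current, current_val = start, acq(start)
    evals, improved = 1, True
    while improved and evals < max_evals:
        improved = False
        for cand in neighbours(current, choices):
            val = acq(cand)
            evals += 1
            if val > current_val:   # move to a better neighbour...
                current, current_val, improved = cand, val, True
                break               # ...and rescan from the new point
            if evals >= max_evals:
                break
    return current, current_val

def propose_next(pareto_configs, choices, acq, restarts=10):
    """Restart local search from Pareto-optimal configs; keep the best."""
    starts = random.sample(pareto_configs, min(restarts, len(pareto_configs)))
    results = [local_search(s, choices, acq) for s in starts]
    return max(results, key=lambda r: r[1])[0]
```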
Calculation of Fine-tuned Parameters. The uncased BERT-base model (109M) has 12 Transformer layers with a hidden dimension size of 768. The uncased BERT-large model (335M) and RoBERTa-large (355M) both have 24 layers with a hidden dimension size of 1,024. For both SA and PA, the fine-tuned parameters are computed as 2 × D_adapter × D_h × |l|, where D_h is the hidden dimension of the selected model and |l| is the total number of selected insertion layers. Similarly, we calculate the fine-tuned parameters of PT as 2 × L_PT × D_h × |l|. Thus, the number of fine-tuned parameters of an AUTOPEFT-found configuration is the sum of the individual PEFT modules' parameters. We report the default fine-tuned parameters for the remaining PEFT modules as defined in their original papers.
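A minimal Python sketch of this calculation is shown below; the function name and arguments are illustrative. Plugging in the AUTOPEFT^RTE_M configuration from Table 5 (7 layers, D_SA = 96, D_PA = 48, L_PT = 1) on BERT-base yields roughly 1.43%, matching the reported 1.42% budget up to the exact pretrained parameter count used.

```python
def peft_param_count(n_layers: int, d_hidden: int,
                     d_sa: int = 0, d_pa: int = 0, l_pt: int = 0) -> int:
    """Sum of 2 * size * D_h * |l| over the SA, PA, and PT modules."""
    per_layer = 2 * d_hidden * (d_sa + d_pa + l_pt)
    return per_layer * n_layers

# AUTOPEFT^RTE_M on BERT-base: layers {2,5,6,7,8,9,10}, D_SA=96, D_PA=48, L_PT=1
params = peft_param_count(n_layers=7, d_hidden=768, d_sa=96, d_pa=48, l_pt=1)
print(params)                      # 1559040 added parameters
print(100 * params / 109_000_000)  # ~1.43% of BERT-base's 109M parameters
```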
B Search Space and Discovered Architectures

Impact of Single PEFT Modules within AUTOPEFT and Other Side Analyses. We provide a more detailed analysis of the behaviour of AUTOPEFT by inspecting the Pareto front of AUTOPEFT-found configurations as we ablate each PEFT module in the search space, as plotted in Figure 6. After combining the serial adapter with the parallel adapter, the upper bound of performance is improved by more than 1%. We consider that this gain leverages the capacity of multiple heterogeneous PEFT modules as a mixture-of-experts, while providing a more efficient adaptation by updating both the bias-influenced hidden states and the original states according to Eq. 3. We recall that prefix-tuning stabilises its learning with an intermediate reparametrization network, which is dropped at the inference stage. Therefore, at the cost of increased training parameters, prefix-tuning is one of the most parameter-efficient approaches at inference. Consequently, we notice that incorporating prefix-tuning into the search space further improves the overall parameter efficiency (from 4% to 1.4%) of the AUTOPEFT-found configuration. The parameter efficiency of each single PEFT module also explains the distribution of the parameter budget across the PEFT modules in the learned configurations. We also analyse the learned configurations in terms of the selected layers over different parameter scales in Table 5. They show a common trend of selecting the higher Transformer layers for inserting the PEFT modules, which coincides with previous findings that the higher layers contain richer task-specific representations, and introducing PEFT modules at these layers is more efficient than at other layers. With the AUTOPEFT-found configurations reported in Table 5, we hope future PEFT research and applications can benefit from architecture designs similar to AUTOPEFT^RTE_M, which we find the most transferable across tasks.
| Method | #Param. | RTE | MRPC | STS-B | CoLA | SST-2 | QNLI | Avg. |
|---|---|---|---|---|---|---|---|---|
| Fine-tune† | 100% | 86.6 | 90.9 | 92.4 | 68.0 | 96.4 | 94.7 | 88.2 |
| LoRA‡ | 0.22% | 85.2 | 90.2 | 92.3 | 68.2 | 96.2 | 94.8 | 87.8 |
| Serial | 0.89% | 84.8 | 90.2 | 92.0 | 66.8 | 96.3 | 94.7 | 87.5 |
| AUTOPEFT^RTE_S | 0.03% | 88.1 | 89.5 | 92.3 | 62.7 | 96.0 | 94.6 | 87.2 |
| AUTOPEFT^MRPC_S | 0.25% | 86.6 | 92.2 | 92.2 | 66.6 | 96.2 | 94.6 | 88.1 |
| AUTOPEFT^CoLA_M | 2.36% | 85.9 | 90.0 | 91.8 | 70.6 | 96.8 | 94.6 | 88.3 |
| AUTOPEFT^RTE_L | 9.41% | 89.5 | 88.5 | 91.6 | 65.6 | 95.9 | 94.6 | 87.6 |
| AUTOPEFT^task_Avg. | 0.88% | 88.1 | 92.2 | 92.4 | 70.6 | 96.8 | 94.6 | 89.1 |

Table 3: Experimental results on the GLUE benchmark with RoBERTa-large. We report the full-model fine-tuning† results from Liu et al. (2019b) with Pearson correlation for STS-B and Matthew's correlation for CoLA. We include the LoRA‡ module performance from Hu et al. (2022a). We report single-seed results for the experiments and exclude the QQP and MNLI tasks due to the large computation cost of RoBERTa-large. Similar to Table 1, we conduct per-task search experiments on RTE, MRPC, STS-B, and CoLA, transferring best-found configurations to the remaining tasks. In addition to the transfer experiment from RTE, we also report transfer performance from the MRPC and CoLA tasks with significantly different parameter budgets. All reported results are from the configurations listed in Table 7. The best, second-best, and third-best results are marked in bold fonts and ranked by colour.
| Model | Insertion Layer {l_i} | Module | Size |
|---|---|---|---|
| BERT-base | 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 | Serial Adapter D_SA | 0, 1, 3, 6, 12, 24, 48, 96, 192, 384, 768 |
| | | Parallel Adapter D_PA | 0, 1, 3, 6, 12, 24, 48, 96, 192, 384, 768 |
| | | Prefix-Tuning L_PT | 0, 1, 3, 6, 12, 24, 48, 96, 192, 384, 768 |
| BERT/RoBERTa-large | 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24 | Serial Adapter D_SA | 0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 |
| | | Parallel Adapter D_PA | 0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 |
| | | Prefix-Tuning L_PT | 0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 |

Table 4: The search space of AUTOPEFT. Each insertion layer has a Boolean decision for inserting the PEFT modules. A submodule size of 0 indicates that we exclude the corresponding submodule from the configuration. The total number of configurations for BERT-base is 2^12 × 11 × 11 × 11 ≈ 5 × 10^6, and for BERT/RoBERTa-large it is 2^24 × 12 × 12 × 12 ≈ 3 × 10^10.
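A minimal sketch of how such a configuration space can be encoded and sampled is shown below; the dataclass and field names are our own illustration, not the paper's code. The space-size check reproduces the counts given in the caption of Table 4, and uniform sampling like this corresponds to the random initialisation stage described in Appendix A.

```python
import random
from dataclasses import dataclass

SIZES_BASE = [0, 1, 3, 6, 12, 24, 48, 96, 192, 384, 768]  # BERT-base choices

@dataclass(frozen=True)
class PEFTConfig:
    layer_gates: tuple   # one Boolean per Transformer layer
    d_sa: int            # serial adapter bottleneck size
    d_pa: int            # parallel adapter bottleneck size
    l_pt: int            # prefix length

def sample_config(n_layers: int = 12, sizes=SIZES_BASE) -> PEFTConfig:
    """Draw one configuration uniformly at random from the search space."""
    return PEFTConfig(
        layer_gates=tuple(random.random() < 0.5 for _ in range(n_layers)),
        d_sa=random.choice(sizes),
        d_pa=random.choice(sizes),
        l_pt=random.choice(sizes),
    )

# Space size: 2^12 layer subsets x 11 sizes per module (BERT-base), etc.
print(2**12 * 11**3)   # 5451776      ~ 5 x 10^6
print(2**24 * 12**3)   # 28991029248  ~ 3 x 10^10 (BERT/RoBERTa-large)
```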
| Task | #Param. | Insertion Layers {l_i} | D_SA | D_PA | L_PT |
|---|---|---|---|---|---|
| RTE | 0.06% | 3, 4, 6, 8, 9, 11 | 3 | 1 | 3 |
| RTE | 1.42% | 2, 5, 6, 7, 8, 9, 10 | 96 | 48 | 1 |
| RTE | 6.60% | 3, 4, 6, 7, 8, 9, 10 | 384 | 192 | 96 |
| MRPC | 3.86% | 2, 3, 6, 7, 9, 10, 11 | 6 | 384 | 3 |
| STS-B | 1.06% | 2, 5, 7, 8, 9, 11 | 96 | 6 | 24 |
| CoLA | 0.29% | 3, 4, 8, 9, 10 | 12 | 24 | 6 |
| MNLI | 0.30% | 3, 6, 7, 8, 9, 11, 12 | 24 | 6 | 1 |

Table 5: The AUTOPEFT-found configurations reported in Table 1 using BERT-base. The average fine-tuned parameters (%) of AUTOPEFT^task_Avg. is calculated as (1.42 + 3.86 + 1.06 + 0.29 + 1.42 + 0.30 + 1.42 + 1.42)/8 = 1.40, where we transfer the best-found AUTOPEFT^RTE_M to SST-2, QQP, and MNLI as their best per-task configurations for achieving the best trade-off between task performance and efficiency.
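For instance, the most transferable configuration, AUTOPEFT^RTE_M, can be written down directly in the illustrative PEFTConfig encoding sketched after Table 4; the gate tuple below simply marks layers 2, 5, 6, 7, 8, 9, and 10 of BERT-base as active.

```python
# AUTOPEFT^RTE_M from Table 5, in the illustrative PEFTConfig encoding:
# active layers {2, 5, 6, 7, 8, 9, 10} of 12, D_SA=96, D_PA=48, L_PT=1.
ACTIVE = {2, 5, 6, 7, 8, 9, 10}
autopeft_rte_m = PEFTConfig(
    layer_gates=tuple(layer in ACTIVE for layer in range(1, 13)),
    d_sa=96,
    d_pa=48,
    l_pt=1,
)
```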
| Task | #Param. | Insertion Layers {l_i} | D_SA |
|---|---|---|---|
| RTE | 0.78% | 2, 6, 8, 11, 14, 15, 16, 17, 21, 23 | 128 |

Table 6: The AUTOPEFT-found configuration reported in Table 2 using BERT-large.
| Task | #Param. | Insertion Layers {l_i} | D_SA | D_PA | L_PT |
|---|---|---|---|---|---|
| RTE | 0.03% | 6, 10, 14, 15, 18, 19, 21, 23 | 2 | 4 | 1 |
| RTE | 9.41% | 1, 2, 3, 4, 5, 7, 11, 12, 14, 15, 17, 19, 20, 21, 23 | 64 | 1 | 1024 |
| MRPC | 0.25% | 1, 2, 4, 5, 6, 8, 9, 10, 11, 13, 14, 16, 17, 21, 22, 23, 24 | 8 | 2 | 16 |
| STS-B | 0.25% | 1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 16, 17, 21, 22, 24 | 8 | 2 | 16 |
| CoLA | 2.36% | 1, 5, 6, 8, 9, 10, 13, 14, 15, 19, 21, 22, 23, 24 | 256 | 32 | 4 |

Table 7: The AUTOPEFT-found configurations reported in Table 3 using RoBERTa-large. The average fine-tuned parameters (%) of AUTOPEFT^task_Avg. is calculated as (0.03 + 0.25 + 0.25 + 2.36 + 2.36 + 0.03)/6 = 0.88, where we transfer the best-found AUTOPEFT^CoLA_M to SST-2 and AUTOPEFT^RTE_S to QNLI as their best per-task configurations for achieving the best trade-off between performance and efficiency.
0NFLT4oBgHgl3EQfoi-S/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
19E1T4oBgHgl3EQfRwPe/content/2301.03058v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4a67dc0b7b6da109d7cabb57e01f43669ec51fd94053b277c996996c48aa462
+size 129277
19E1T4oBgHgl3EQfRwPe/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a94e790231c933cfc1ba463097fa7936b581e6865b45780040e661b578ea2df2
+size 1441837
19E1T4oBgHgl3EQfRwPe/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63482b8823368ef198499bd3c1b9fbc137bf3ad92b3bef59ef292d15018f10bb
+size 47488
1tE1T4oBgHgl3EQflQSM/content/tmp_files/2301.03283v1.pdf.txt ADDED
@@ -0,0 +1,2423 @@
A Robust Multilabel Method Integrating Rule-based Transparent Model, Soft Label Correlation Learning and Label Noise Resistance

Qiongdan Lou, Zhaohong Deng, Senior Member, IEEE, Kup-Sze Choi, Shitong Wang

Abstract—Model transparency, label correlation learning and the robustness to label noise are crucial for multilabel learning. However, few existing methods study these three characteristics simultaneously. To address this challenge, we propose the robust multilabel Takagi-Sugeno-Kang fuzzy system (R-MLTSK-FS) with three mechanisms. First, we design a soft label learning mechanism to reduce the effect of label noise by explicitly measuring the interactions between labels, which is also the basis of the other two mechanisms. Second, the rule-based TSK FS is used as the base model to efficiently model the inference relationship between features and soft labels in a more transparent way than many existing multilabel models. Third, to further improve the performance of multilabel learning, we build a correlation enhancement learning mechanism based on the soft label space and the fuzzy feature space. Extensive experiments are conducted to demonstrate the superiority of the proposed method.

Index Terms—Multilabel classification, label correlation, model transparency, label noise.

This work was supported in part by the National Key R&D Plan under Grant 2022YFE0112400, the NSFC under Grant 62176105, the Six Talent Peaks Project in Jiangsu Province under Grant XYDXX-056, the Hong Kong Research Grants Council (PolyU 152006/19E), the Project of Strategic Importance of the Hong Kong Polytechnic University (1-ZE1V), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant KYCX22_2313. (Corresponding author: Zhaohong Deng.) Q. Lou and S. Wang are with the School of Artificial Intelligence and Computer Science, Jiangnan University, and the Jiangsu Key Laboratory of Digital Design and Software Technology, Wuxi 214122, China; Q. Lou is also with the Centre for Smart Health and the School of Nursing, The Hong Kong Polytechnic University, Hong Kong (e-mail: [email protected]; [email protected]). Z. Deng is with the School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China, and the Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (LCNBI) and ZJLab, Shanghai 200433, China (e-mail: [email protected]). K.S. Choi is with the Centre for Smart Health, The Hong Kong Polytechnic University (e-mail: [email protected]).
I. INTRODUCTION

MULTILABEL learning concerns instances that can be associated with more than one label. For example, an article can be labeled as being related to "politics", "culture" and "religion" at the same time, and a travel photo can be given the labels "beach", "sunrise", "sail" and "tourist" simultaneously because of the presence of the corresponding objects. For multilabel learning, label correlation learning, model transparency and robustness against label noise are essential. Constructing the correlation between labels is the basic work to improve the performance of multilabel learning [1, 2]. A transparent structure is important to enhance the interpretability of multilabel learning [3]. And robustness against label noise enhances the effectiveness in practical applications under noisy environments [4].

For label correlation learning, existing multilabel methods are mainly based on first-order [5], second-order [6] and high-order [7] strategies to consider the correlation between labels. First-order methods ignore label correlation and adopt a label-by-label approach for multilabel learning. For example, sparse weighted instance-based multilabel (SWIM) realizes multilabel learning only based on the association between instances [8]. Second-order methods build the pairwise relationship between labels. For example, labels related to a sample are ranked before labels unrelated to the sample [9]. Multilabel learning with global and local label correlation (GLOCAL) decomposes the Laplacian matrix to indirectly learn the correlation between any two labels [10]. High-order methods construct the correlation between multiple labels simultaneously. For example, cross-coupling aggregation (COCOA) first models the correlation between random label pairs and then aggregates their learning effects [11]. Multilabel classification with label-specific features and label-specific classifiers (MLC-LFLC) introduces sparse learning to analyze the dependency between a single label and the other labels [12].

For model transparency in multilabel learning, existing work is mainly based on rules or logical inference to achieve transparency [13]. For example, hierarchical multilabel classification with a genetic algorithm (HMC-GA) [14] utilizes the genetic algorithm to induce classification rules for protein function prediction, which belongs to hierarchical multilabel learning. The gradient-weighted class activation mapping (Grad-CAM) is used in [15] to realize the inferential interpretation of predicted label results. Causal discovery is exploited in [16] to analyze the specific features of a label. The multilabel Takagi-Sugeno-Kang fuzzy system (TSK FS), i.e., ML-TSK FS [17], offers good transparency through its fuzzy rule-based structure and fuzzy inference. Among the above existing multilabel methods, ML-TSK FS has shown more promising performance because it realizes the complete inference process from feature to label.

For robustness against label noise, much work has been done because of the urgent need in practical applications [18, 19]. For example, class-conditional multilabel noise (CCMN) [20] designs two unbiased estimators with error bounds to reduce the influence of label noise. Multilabel noise robust collaborative learning (RCML) [21] employs the group lasso to detect noisy labels.
Partial multilabel learning with noisy label identification (PML-NI) [22] builds a feature-induced noise term to identify noisy labels. Multilabel iterated learning (MILe) [23] strengthens the learning bottleneck for successive generations of teacher and student networks to improve the robustness against label noise. Different from removing noisy labels directly, noisy label tolerated partial multilabel learning (NATAL) [24] reduces the impact of noisy labels by assuming that the label information is precise and the feature information is inadequate.

The above related work indicates that the importance of label correlation, model transparency and robustness against noisy labels has received extensive attention. However, these desirable characteristics are still rarely studied simultaneously in multilabel learning. Therefore, it is necessary to further study multilabel methods with transparency, label correlation learning ability and robustness to noisy labels.

Based on the above analysis, we aim to develop a multilabel learning method with strong fuzzy inference ability and label correlation learning ability, even under the influence of noisy labels. To achieve this goal, a robust multilabel learning classifier, called the robust multilabel Takagi-Sugeno-Kang fuzzy system (R-MLTSK-FS), is proposed by developing three enabling mechanisms. The first mechanism concerns soft label learning. R-MLTSK-FS maps the original label matrix to the soft label space, where each soft label is affected by all the original labels. This mechanism thus reduces the influence of label noise in the original label space, and it is the basis of the other two mechanisms. The second mechanism concerns the construction of the soft multilabel loss function. In R-MLTSK-FS, the "IF-THEN" rule-based TSK FS is used to model the inference between the inputs and outputs. Specifically, a multi-output TSK FS is employed in this paper. The IF-part of the multi-output TSK FS is leveraged to transform the original feature matrix into the fuzzy feature space; the THEN-part is used to implement the inference between inputs and outputs; and the regression loss is constructed based on the TSK FS and soft label learning. The adoption of TSK FS is advantageous in that the rule-based structure makes the proposed R-MLTSK-FS more transparent than traditional models. The third mechanism concerns correlation enhancement learning. This mechanism establishes associations between any two soft labels and their corresponding fuzzy discriminative features, which can effectively improve the performance of R-MLTSK-FS.

The main contributions of this paper are summarized as follows:
(1) A soft label learning mechanism is constructed to explicitly measure the interaction between the labels and reduce the influence of label noise.
(2) A soft multilabel loss function is constructed based on soft labels and TSK FS to improve the efficiency and transparency of the learning process of R-MLTSK-FS.
(3) A correlation enhancement learning mechanism based on the soft label space and the fuzzy feature space is built to further enhance the learning ability of R-MLTSK-FS.
(4) Extensive experiments are conducted using 10 benchmark multilabel datasets and 3 synthetic multilabel datasets to compare with 8 methods. Comprehensive evaluations are carried out by conducting classification performance evaluation, robustness analysis, effectiveness analysis of soft label learning and correlation enhancement learning, parameter analysis, convergence analysis, and statistical analysis.

The rest of this paper is organized as follows. Section II reviews the concepts of multilabel learning and the traditional TSK FS. Section III gives details of the proposed method. Extensive experimental analyses are presented and discussed in Section IV. Finally, Section V summarizes the paper.
II. BACKGROUND KNOWLEDGE

In this section, the problem statement of the multilabel learning research concerned in this study is given, followed by a review of the traditional TSK FS.

A. Problem Statement

Let 𝒳 ∈ ℛ^D and 𝒴 ∈ ℛ^L be a D-dimensional feature space and an L-dimensional label space, respectively. 𝒟 = {(𝒙_i, 𝒚_i)}_{i=1}^{N} is the training set with N samples. 𝑿 = [𝒙_1, 𝒙_2, …, 𝒙_N] ∈ ℛ^{D×N} is the input matrix, and 𝒀 = [𝒚_1, 𝒚_2, …, 𝒚_N] ∈ ℛ^{L×N} is the output matrix. In multilabel learning, the label of an instance 𝒙_i = [x_{i1}, x_{i2}, …, x_{iD}]^T is given by a vector 𝒚_i = [y_{i1}, y_{i2}, …, y_{iL}]^T. If 𝒙_i is related to the jth label, then y_{ij} = 1; otherwise, y_{ij} = 0. The aim of this study is to find a robust mapping function f: 𝒳 → 𝒴 that can reduce the influence of label noise and effectively predict the label vector for a new instance on the basis of transparent inference rules.
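
As a concrete illustration of this data layout, the minimal sketch below builds a small synthetic multilabel problem in the column-oriented convention used above (instances are columns of X and Y) and flips the labels of a fraction of the training instances to simulate label noise, as done in the robustness experiments of Section IV. The sizes and the noise rate are arbitrary illustrative choices, not values prescribed by the method.

import numpy as np

rng = np.random.default_rng(0)

D, L, N = 20, 5, 1000                          # feature dim, label dim, sample count (illustrative)
X = rng.random((D, N))                         # input matrix, one instance per column
Y = (rng.random((L, N)) < 0.3).astype(float)   # binary label matrix, one label vector per column

noise_rate = 0.2                               # illustrative value
noisy_cols = rng.choice(N, size=int(noise_rate * N), replace=False)
Y_noisy = Y.copy()
Y_noisy[:, noisy_cols] = 1.0 - Y_noisy[:, noisy_cols]  # flip related/unrelated labels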
B. TSK Fuzzy System

TSK FS is a classical inference model based on fuzzy rules with superior interpretability (transparency) and learning ability. It has been successfully applied in different areas, e.g., transfer learning [25, 26], multiview learning [27], multitask learning [28] and others [29, 30, 31, 32]. For a classical TSK FS with K rules, the kth rule can be expressed as follows:

IF: x_1 is A_1^k ∧ x_2 is A_2^k ∧ … ∧ x_D is A_D^k,
THEN: f^k(𝒙) = c_0^k + c_1^k x_1 + ⋯ + c_D^k x_D,  k = 1, 2, …, K   (1)

where D is the feature dimension, and f^k(𝒙) is the output of instance 𝒙 on the kth rule. A_d^k (d = 1, 2, …, D) in the IF-part represents the antecedent fuzzy set, which can be described by membership functions. c_d^k in the THEN-part is the consequent parameter.

Depending on the application scenario, different membership functions can be chosen for the antecedent fuzzy sets. The Gaussian function, which is commonly used, is adopted in this paper, and the corresponding membership function associated with A_d^k can be expressed as follows:

μ_{A_d^k}(x_d) = exp{ −(1/2) ((x_d − m_d^k)/δ_d^k)² }   (2)

where m_d^k and δ_d^k can be obtained using different methods. In the absence of domain knowledge, data-driven methods are usually utilized to estimate m_d^k and δ_d^k. For example, Var-Part clustering has been used for this purpose [33]. It is insensitive to the parameters and is therefore beneficial in terms of stability and practicability. Hence, Var-Part clustering is used in this study.

For TSK FS, the firing strength of instance 𝒙 on the kth rule can be computed as follows:

μ^k(𝒙) = ∏_{d=1}^{D} μ_{A_d^k}(x_d)   (3)

μ̃^k(𝒙) = μ^k(𝒙) / ∑_{k'=1}^{K} μ^{k'}(𝒙)   (4)

where Eq. (4) is the normalized form of Eq. (3).

Finally, the output of the TSK FS for instance 𝒙 can be expressed as

y = f(𝒙) = ∑_{k=1}^{K} μ̃^k(𝒙) f^k(𝒙)   (5)

In fact, Eq. (5) can also be expressed as a linear model in a new fuzzy feature space, that is,

y = f(𝒙) = 𝒄^T 𝒙_g   (6)

where

𝒙_e = [1, 𝒙^T]^T ∈ ℛ^{(D+1)×1}   (7)
𝒙̃^k = μ̃^k(𝒙) 𝒙_e ∈ ℛ^{(D+1)×1}   (8)
𝒙_g = [(𝒙̃^1)^T, (𝒙̃^2)^T, …, (𝒙̃^K)^T]^T ∈ ℛ^{K(D+1)×1}   (9)
𝒄^k = [c_0^k, c_1^k, …, c_D^k]^T ∈ ℛ^{(D+1)×1}   (10)
𝒄 = [(𝒄^1)^T, (𝒄^2)^T, …, (𝒄^K)^T]^T ∈ ℛ^{K(D+1)×1}   (11)

Here, 𝒙_g is the fuzzy representation of instance 𝒙 in a new feature space generated by the fuzzy rules. 𝒄 is the consequent parameter vector of all the rules, which can be optimized by solving the linear model in Eq. (6).
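
To make Eqs. (2)-(4) and (7)-(9) concrete, the following sketch computes the fuzzy feature representation 𝒙_g for a batch of instances under Gaussian antecedents. The rule centers and widths are drawn randomly here purely for illustration; the paper estimates them with Var-Part clustering, which this sketch does not reproduce.

import numpy as np

def fuzzy_features(X, m, delta):
    # Map X (D x N) to the fuzzy feature matrix X_g (K(D+1) x N).
    # m, delta: (K x D) Gaussian centers and widths of the K rules' antecedents.
    # Implements Eqs. (2)-(4) and (7)-(9).
    D, N = X.shape
    K = m.shape[0]
    # Eq. (2): membership of each feature under each rule, shape (K, D, N)
    mu_dk = np.exp(-0.5 * ((X[None, :, :] - m[:, :, None]) / delta[:, :, None]) ** 2)
    # Eq. (3): firing strength per rule, product over features -> (K, N)
    mu = np.prod(mu_dk, axis=1)
    # Eq. (4): normalized firing strength (small constant guards against division by zero)
    mu_tilde = mu / (mu.sum(axis=0, keepdims=True) + 1e-12)
    # Eq. (7): extended input with bias term -> (D+1, N)
    X_e = np.vstack([np.ones((1, N)), X])
    # Eqs. (8)-(9): stack the K weighted copies -> (K(D+1), N)
    return np.vstack([mu_tilde[k][None, :] * X_e for k in range(K)])

# Illustrative usage with random rule parameters (Var-Part clustering would supply these):
rng = np.random.default_rng(0)
D, N, K = 20, 100, 3
X = rng.random((D, N))
m, delta = rng.random((K, D)), 0.5 * np.ones((K, D))
X_g = fuzzy_features(X, m, delta)   # shape (K*(D+1), N), as in Eq. (15) below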
III. PROPOSED METHOD: R-MLTSK-FS

A. System Architecture

The architecture of the R-MLTSK-FS proposed in this study is shown in Fig. 1. It aims to provide a robust multilabel model with fuzzy inference ability, label correlation learning ability and resistance against noisy labels. R-MLTSK-FS contains three mechanisms for soft label learning, soft multilabel loss function construction and correlation enhancement learning, respectively.

[Fig. 1: The architecture of the proposed R-MLTSK-FS, connecting the TSK fuzzy system, the soft label space and soft label learning, soft multilabel loss function construction, and correlation enhancement learning.]

The first mechanism, soft label learning, maps the original labels to the soft label space by a linear transformation. Each soft label in the soft label space is associated with all the original labels, which reduces the influence of label noise in the original label space. It is the basis of the other two mechanisms. The second mechanism, i.e., soft multilabel loss function construction, leverages the IF-part of the TSK FS to transform the original features into the fuzzy feature space, uses the THEN-part of the TSK FS to complete the inference between inputs and outputs, and then constructs the regression function between the fuzzy feature space and the soft label space. The rule-based TSK FS makes R-MLTSK-FS transparent in modeling the inference relationship between features and labels. The third mechanism, correlation enhancement learning, implements label correlation learning by establishing associations between any two soft labels and their corresponding fuzzy discriminative features. This mechanism further enhances the learning ability of R-MLTSK-FS.

The details of R-MLTSK-FS are expanded in the following three sections. The learning criteria of R-MLTSK-FS are introduced in Section III-B. The optimization process and the algorithm description are given in Section III-C, and the computational complexity is analyzed in Section III-D.

B. Learning Criteria of R-MLTSK-FS

According to the analysis in Section III-A, the multilabel learning problem in this paper can be expressed as the following optimization objective:

min_{φ1,φ2} β·Sof_lab(𝒀|φ1) + Sof_los(𝒀, 𝑿|φ1, φ2) + γ·Cor_enh(𝒀, 𝑿|φ1, φ2)   (12)

The first term represents soft label learning, where φ1 transforms the original labels to the soft labels. The second term represents soft multilabel loss function construction, where φ2 is used to predict the labels from the original feature space to the soft label space. The third term represents correlation enhancement learning, which is used to measure the association between any two soft labels and their corresponding fuzzy discriminative features. The hyperparameters β and γ are used to balance the influences of the different terms in Eq. (12). The solutions of φ1 and φ2 can be obtained by optimizing Eq. (12). The implementation of the three terms is described below.

1) Soft Label Learning based on Original Label Space and Soft Label Space

For the lth label 𝒀_l ∈ ℛ^{1×N} (1 ≤ l ≤ L) (i.e., the lth row in 𝒀), the interference of its label noise can be reduced by considering the influence of all labels on 𝒀_l comprehensively. Based on this, for soft label learning, we assume that each label is associated with all the other original labels to some extent. The learning process involves two steps. First, we construct the label transformation φ1 to effectively measure the interaction between the labels. φ1 maps the output matrix 𝒀 explicitly from the original label space to the soft label space. In the soft label space, each soft label is associated with all the original labels. The transformation function of φ1 is defined as:

φ1(𝒀) = 𝑺𝒀   (13)

where 𝑺 = [𝒔_1, 𝒔_2, …, 𝒔_L]^T ∈ ℛ^{L×L}, and 𝒔_l ∈ ℛ^{L×1} (1 ≤ l ≤ L) represents the influence weights of all the original labels on the lth soft label.

Second, we preserve the expression consistency between the soft labels and the original labels to ensure the classification performance. Therefore, the overall soft label learning is defined as:

min_{φ1} Sof_lab(𝒀|φ1) = min_{𝑺} ‖(𝒀 − 𝑺𝒀)^T‖_{2,1}   (14)

Although different regularization norms can be used in Eq. (14), we choose the L2,1 norm for two reasons: (1) since the L2,1 norm has the characteristic of row sparsity, we can screen out the original label subsets which have a significant impact on the corresponding soft label; (2) the L2,1 norm is well known for its ability in robust group selection [34, 35, 36], which is helpful to reduce the impact of label noise on soft label learning.
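
As a minimal numeric sketch of Eq. (14), assuming the same column-oriented layout as before, the soft-label term can be evaluated as follows; l21_norm is a helper name introduced here for illustration, not from the paper.

def l21_norm(M):
    # L2,1 norm: sum of the Euclidean norms of the rows of M.
    return np.linalg.norm(M, axis=1).sum()

def soft_label_term(S, Y):
    # Eq. (14): ||(Y - S Y)^T||_{2,1}. The rows of (Y - S Y)^T index the N samples,
    # so the L2,1 norm sums per-sample residual norms, which is what gives the
    # term its sample-wise robustness to noisy label vectors.
    return l21_norm((S @ Y - Y).T * -1.0) if False else l21_norm((Y - S @ Y).T)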
2) Soft Multilabel Loss Function Construction based on TSK FS

A multilabel loss function can be constructed by employing an evaluation metric as the multilabel objective function [37, 38], or by using linear regression to derive the multilabel loss function [39, 40, 41]. Unlike these methods, we construct the loss function using soft label learning and the TSK FS, which essentially constructs a rule-based transparent model that maps the original feature space to the soft label space. The construction of the soft multilabel loss function is divided into three steps. First, the original feature matrix is transformed into the fuzzy feature space through the IF-part of the fuzzy rules. Second, the inference between inputs and outputs is completed through the THEN-part of the fuzzy rules. Third, the regression loss function is constructed based on the fuzzy rules and the soft labels. The details are as follows.

• IF-part implementation of fuzzy rules. In the multi-output TSK FS with K rules, the fuzzy feature matrix obtained from 𝑿 using the fuzzy rules is given by

𝑿_g = [𝒙_{g,1}, 𝒙_{g,2}, …, 𝒙_{g,N}] ∈ ℛ^{K(D+1)×N}   (15)

where 𝒙_{g,i} (1 ≤ i ≤ N) is mapped from the instance 𝒙_i through the IF-part of the fuzzy rules, and it can be obtained by Eqs. (2)-(4) and (7)-(9). Compared with the original features, the rule-based fuzzy features empower R-MLTSK-FS to analyze the implicit inference relationship between features and labels [42], thereby strengthening the learning ability.

• THEN-part adaptation of fuzzy rules. Based on Eq. (6), the THEN-part of the multi-output TSK FS is used to complete the inference, i.e.,

φ2(𝑿) = 𝑪𝑿_g   (16)

where

𝑪 = [𝒄^1, 𝒄^2, …, 𝒄^L]^T ∈ ℛ^{L×K(D+1)}   (17)

is composed of the L consequent parameter vectors of the THEN-part. As defined in Eq. (11), 𝒄^l ∈ ℛ^{K(D+1)×1} (1 ≤ l ≤ L) is the consequent parameter vector corresponding to the lth output of the multi-output TSK FS and the lth soft label. The main difference between the multi-output TSK FS and the single-output TSK FS is that the consequent parameters of the single-output TSK FS are represented by a vector, whereas the consequent parameters of the multi-output TSK FS are represented by a matrix composed of multiple vectors.

• Construction of the regression loss. The loss function is a fundamental part of the optimization objective for multilabel classification. In this paper, it is built based on soft label learning and the TSK FS. Combining Eqs. (13) and (16), we construct the soft multilabel loss function as follows:

min_{φ1,φ2} Sof_los(𝒀, 𝑿|φ1, φ2) = min_{𝑺,𝑪} ‖(𝑺𝒀 − 𝑪𝑿_g)^T‖_{2,1} + α‖𝑪‖_F²   (18)

where α is a hyperparameter to balance the influence of the soft multilabel loss function and the regularization term. Taking the Frobenius norm ‖·‖_F as the regularization term can not only reduce the risk of overfitting, but also facilitate the solution of the consequent parameter matrix 𝑪.
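
Continuing the sketch, with the l21_norm helper assumed above, the loss in Eq. (18) combines the L2,1 regression residual in the soft label space with Frobenius regularization:

def soft_multilabel_loss(S, C, Y, X_g, alpha):
    # Eq. (18): L2,1 residual between soft labels SY and TSK FS outputs C X_g,
    # plus Frobenius-norm regularization on the consequent parameter matrix C.
    return l21_norm((S @ Y - C @ X_g).T) + alpha * np.linalg.norm(C, 'fro') ** 2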
3) Correlation Enhancement Learning based on Soft Label Space and Fuzzy Feature Space

Section I has clarified that mining the correlation information between labels can effectively improve the performance of the model. In this paper, we analyze the label correlation based on the fact that the correlation between two labels is consistent with the correlation between their discriminative features. For example, there is an intersection between the labels "Cat" and "Animal", and then their discriminative features should partially overlap.

Based on the above analysis, we utilize the correlation information on the basis of soft label learning and fuzzy features as follows:

min_{φ1,φ2} Cor_enh(𝒀, 𝑿|φ1, φ2) = min_{𝑺,𝑪} ∑_{i=1}^{L} ∑_{j=1}^{L} ‖(𝒔_i^T𝒀 − 𝒔_j^T𝒀)^T‖² 𝒄_i^T𝒄_j   (19)

where 𝒔_l^T𝒀 ∈ ℛ^{1×N} (1 ≤ l ≤ L) represents the lth soft label vector over the N samples, 𝒔_l ∈ ℛ^{L×1} represents the influence weights of all original labels on the lth soft label, and 𝒄_l ∈ ℛ^{K(D+1)×1} (1 ≤ l ≤ L) is used to learn the discriminative features from the fuzzy feature space for the lth soft label. The larger the difference between the ith and jth soft labels, the more significant the difference between their fuzzy discriminative features, and hence the smaller the value of 𝒄_i^T𝒄_j. Further, Eq. (19) can be expressed as:

min_{φ1,φ2} Cor_enh(𝒀, 𝑿|φ1, φ2) = min_{𝑺,𝑪} 2Tr(𝒀^T𝑺^T𝑳𝑺𝒀)   (20)

where 𝑳 = 𝑫 − 𝑹, 𝑹 = 𝑪𝑪^T ∈ ℛ^{L×L}, 𝑫 ∈ ℛ^{L×L} is a diagonal matrix, and D_{ii} = ∑_{j=1}^{L} R_{ij}.
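
The step from Eq. (19) to Eq. (20) is the standard graph-Laplacian identity ∑_{i,j} ‖a_i − a_j‖² R_{ij} = 2 Tr(AᵀLA), applied with the rows of A = 𝑺𝒀 as the soft label vectors and the symmetric affinity R = 𝑪𝑪ᵀ. The sketch below checks the identity numerically on random data; the shapes are arbitrary.

rng = np.random.default_rng(1)
L_dim, N, F = 5, 40, 12
SY = rng.standard_normal((L_dim, N))   # soft labels, one row per soft label
C = rng.standard_normal((L_dim, F))    # consequent matrix, one row per label

R = C @ C.T                            # affinity between labels, R_ij = c_i^T c_j
Lap = np.diag(R.sum(axis=1)) - R       # graph Laplacian L = D - R

lhs = sum(np.linalg.norm(SY[i] - SY[j]) ** 2 * R[i, j]
          for i in range(L_dim) for j in range(L_dim))
rhs = 2.0 * np.trace(SY.T @ Lap @ SY)  # Eq. (20)
assert np.isclose(lhs, rhs)            # the identity behind Eq. (19) -> Eq. (20)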
C. Complete Objective Function and its Optimization

By integrating Eqs. (14), (18) and (20), the multilabel learning problem in Eq. (12) is defined, and the complete objective function of R-MLTSK-FS is expressed as:

min_{𝑺,𝑪} ‖(𝑺𝒀 − 𝑪𝑿_g)^T‖_{2,1} + α‖𝑪‖_F² + β‖(𝒀 − 𝑺𝒀)^T‖_{2,1} + 2γTr(𝒀^T𝑺^T𝑳𝑺𝒀)   (21)

To optimize 𝑺 and 𝑪, we adopt the alternating direction minimization strategy, where Eq. (21) is divided into two subproblems, namely, the 𝑺-subproblem and the 𝑪-subproblem. The optimization processes are as follows.

1) 𝑺-Subproblem

By fixing 𝑪, the 𝑺-subproblem can be expressed as:

𝑺* = argmin_{𝑺} ‖(𝑺𝒀 − 𝑪𝑿_g)^T‖_{2,1} + β‖(𝒀 − 𝑺𝒀)^T‖_{2,1} + 2γTr(𝒀^T𝑺^T𝑳𝑺𝒀)   (22)

In Eq. (22), the Lagrange function for 𝑺 is

L(𝑺) = ‖(𝑺𝒀 − 𝑪𝑿_g)^T‖_{2,1} + β‖(𝒀 − 𝑺𝒀)^T‖_{2,1} + 2γTr(𝒀^T𝑺^T𝑳𝑺𝒀)   (23)

Setting the derivative of Eq. (23) with respect to 𝑺 to 0 gives

∂L(𝑺)/∂𝑺 = 2𝑺𝒀𝑫_{S1}𝒀^T − 2𝑪𝑿_g𝑫_{S1}𝒀^T + 2β𝑺𝒀𝑫_{S2}𝒀^T − 2β𝒀𝑫_{S2}𝒀^T + 4γ𝑳𝑺𝒀𝒀^T = 0   (24)

where 𝑫_{S1} ∈ ℛ^{N×N} and 𝑫_{S2} ∈ ℛ^{N×N} are diagonal matrices, with D_{S1,ii} = 1/(2‖(𝑺𝒀 − 𝑪𝑿_g)_i^T‖) and D_{S2,ii} = 1/(2‖(𝒀 − 𝑺𝒀)_i^T‖). (𝑨_i^T represents the ith row of 𝑨^T.)

Then, Eq. (24) can be re-expressed as

(2γ𝑳)𝑺 + 𝑺(𝒀𝑫_{S1}𝒀^T(𝒀𝒀^T)⁻¹ + β𝒀𝑫_{S2}𝒀^T(𝒀𝒀^T)⁻¹) = 𝑪𝑿_g𝑫_{S1}𝒀^T(𝒀𝒀^T)⁻¹ + β𝒀𝑫_{S2}𝒀^T(𝒀𝒀^T)⁻¹   (25)

Eq. (25) is a classical optimization problem, i.e., the Sylvester equation, which has been thoroughly studied [43, 44, 45].

In general, for the Sylvester equation 𝑨𝑾 + 𝑾𝑩 = 𝒁 (𝑨 ∈ ℛ^{m×m}, 𝑩 ∈ ℛ^{n×n}, 𝒁 ∈ ℛ^{m×n}, 𝑾 ∈ ℛ^{m×n}), the matrix 𝑾 is the variable to be solved. The specific solution formula of 𝑾 is as follows:

𝑾(:) = (𝑰_1 ⊗ 𝑨 + 𝑩^T ⊗ 𝑰_2)⁻¹ 𝒁(:)   (26)

where 𝑰_1 ∈ ℛ^{n×n} and 𝑰_2 ∈ ℛ^{m×m} are identity matrices, ⊗ is the Kronecker tensor product, and 𝒁(:) ∈ ℛ^{mn×1} and 𝑾(:) ∈ ℛ^{mn×1} denote the matrices 𝒁 and 𝑾 stacked into single column vectors. 𝑾(:) can be reshaped to 𝑾* ∈ ℛ^{m×n}, which is the solution of 𝑨𝑾 + 𝑾𝑩 = 𝒁. For simplicity, the solution 𝑾* is denoted as 𝑾* = sylvester(𝑨, 𝑩, 𝒁).
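
As a sketch, Eq. (26) can be implemented directly via the Kronecker product and cross-checked against SciPy's dedicated solver (scipy.linalg.solve_sylvester solves AW + WB = Z). Column-major ('F') ordering matches the W(:) column-stacking convention; the dense Kronecker system is only practical for small matrices, which is why a dedicated solver is preferable in practice.

import numpy as np
from scipy.linalg import solve_sylvester

def sylvester(A, B, Z):
    # Eq. (26): vec(W) = (I_n (x) A + B^T (x) I_m)^{-1} vec(Z), column-major stacking.
    m, n = Z.shape
    lhs = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
    w = np.linalg.solve(lhs, Z.flatten(order='F'))
    return w.reshape((m, n), order='F')

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
Z = rng.standard_normal((4, 3))
W = sylvester(A, B, Z)
assert np.allclose(A @ W + W @ B, Z)
assert np.allclose(W, solve_sylvester(A, B, Z))   # cross-check against SciPy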
Therefore, the solution of Eq. (25) is

𝑺* = sylvester(2γ𝑳, 𝒀(𝑫_{S1} + β𝑫_{S2})𝒀^T(𝒀𝒀^T)⁻¹, (𝑪𝑿_g𝑫_{S1} + β𝒀𝑫_{S2})𝒀^T(𝒀𝒀^T)⁻¹)   (27)

2) 𝑪-Subproblem

By fixing 𝑺, the 𝑪-subproblem can be expressed as:

𝑪* = argmin_{𝑪} ‖(𝑺𝒀 − 𝑪𝑿_g)^T‖_{2,1} + α‖𝑪‖_F² + 2γTr(𝒀^T𝑺^T𝑳𝑺𝒀)   (28)

In Eq. (28), the Lagrange function for 𝑪 is

L(𝑪) = ‖(𝑺𝒀 − 𝑪𝑿_g)^T‖_{2,1} + α‖𝑪‖_F² + 2γTr(𝒀^T𝑺^T𝑳𝑺𝒀)
     = ‖(𝑺𝒀 − 𝑪𝑿_g)^T‖_{2,1} + α‖𝑪‖_F² + 2γTr(𝒀^T𝑺^T(𝑫 − 𝑹)𝑺𝒀)
     = ‖(𝑺𝒀 − 𝑪𝑿_g)^T‖_{2,1} + α‖𝑪‖_F² + 2γTr(𝒀^T𝑺^T(𝑪𝑪^T𝟏𝟏^T ∘ 𝑰_3 − 𝑪𝑪^T)𝑺𝒀)   (29)

where 𝟏 ∈ ℛ^{L×1} is a column vector with all elements equal to one, the symbol ∘ represents the Hadamard product, and 𝑰_3 ∈ ℛ^{L×L} is the identity matrix.

Setting the derivative of Eq. (29) with respect to 𝑪 to 0 gives

∂L(𝑪)/∂𝑪 = 2𝑪𝑿_g𝑫_C𝑿_g^T − 2𝑺𝒀𝑫_C𝑿_g^T + 2α𝑪 + 2γ(((𝑺𝒀𝒀^T𝑺^T) ∘ 𝑰_3)^T𝟏𝟏^T𝑪 + 𝟏𝟏^T((𝑺𝒀𝒀^T𝑺^T) ∘ 𝑰_3)𝑪 − 2𝑺𝒀𝒀^T𝑺^T𝑪) = 0   (30)

where 𝑫_C ∈ ℛ^{N×N} is a diagonal matrix, and D_{C,ii} = 1/(2‖(𝑺𝒀 − 𝑪𝑿_g)_i^T‖).

Eq. (30) is also a Sylvester equation. Therefore, we can solve 𝑪 as follows:

𝑪* = sylvester(α𝑰_3 + γ((𝑺𝒀𝒀^T𝑺^T) ∘ 𝑰_3)^T𝟏𝟏^T + γ𝟏𝟏^T((𝑺𝒀𝒀^T𝑺^T) ∘ 𝑰_3) − 2γ𝑺𝒀𝒀^T𝑺^T, 𝑿_g𝑫_C𝑿_g^T, 𝑺𝒀𝑫_C𝑿_g^T)   (31)

When the optimal 𝑺* and 𝑪* are obtained, the prediction output of a test instance 𝒙′ (i.e., 𝒚′ = [y_1′, …, y_L′]^T) can be formulated as follows:

𝒚′ = φ_τ(𝑪*𝒙_g′)   (32)

where 𝒙_g′ is the fuzzy feature representation of 𝒙′ through the fuzzy rules. It can be obtained from Eqs. (2)-(4) and (7)-(9). φ_τ(·) is a threshold function to convert the continuous output to the discrete output, and τ is the threshold. Therefore, for the lth label y_l′ (1 ≤ l ≤ L) in 𝒚′, its definition is

y_l′ = 1 if (𝑪*𝒙_g′)_l > τ, and 0 otherwise   (33)

where (𝑪*𝒙_g′)_l is the lth element of 𝑪*𝒙_g′. The value of τ can be optimized by cross-validation. In this paper, we set it to the fixed value of 0.5.
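
Prediction via Eqs. (32)-(33) then reduces to one matrix product and a threshold. A minimal sketch, reusing the fuzzy_features helper assumed earlier:

def predict(C_opt, X_new, m, delta, tau=0.5):
    # Eqs. (32)-(33): map new instances to fuzzy features, apply the learned
    # consequent matrix, and threshold the continuous outputs at tau.
    X_g_new = fuzzy_features(X_new, m, delta)   # (K(D+1), N')
    scores = C_opt @ X_g_new                    # (L, N') continuous outputs
    return (scores > tau).astype(int)           # binary label matrix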
Based on the above analysis, the procedure of the proposed R-MLTSK-FS is described in Algorithm I.

Algorithm I: R-MLTSK-FS
Input: Input matrix 𝑿 ∈ ℛ^{D×N}, output matrix 𝒀 ∈ ℛ^{L×N}, rule number K, trade-off parameters α, β and γ.
Procedure:
1: Initialize: 𝑺 = 𝟏_{L×L}, 𝑪 = (1/L)𝟏_{L×K(D+1)}, 𝑫 = 𝟎_{L×L}, 𝑫_C = 𝟎_{N×N}, 𝑫_{S1} = 𝟎_{N×N}, 𝑫_{S2} = 𝟎_{N×N}.
2: Transform 𝑿 into 𝑿_g using Eqs. (2)-(4) and (7)-(9).
3: While not converged do
4:   D_{C,ii} = 1/(2‖(𝑺𝒀 − 𝑪𝑿_g)_i^T‖);
5:   𝑻_1 ← α𝑰_3 + γ((𝑺𝒀𝒀^T𝑺^T) ∘ 𝑰_3)^T𝟏𝟏^T + γ𝟏𝟏^T((𝑺𝒀𝒀^T𝑺^T) ∘ 𝑰_3) − 2γ𝑺𝒀𝒀^T𝑺^T;
6:   𝑻_2 ← 𝑿_g𝑫_C𝑿_g^T;
7:   𝑻_3 ← 𝑺𝒀𝑫_C𝑿_g^T;
8:   𝑪 ← sylvester(𝑻_1, 𝑻_2, 𝑻_3);
9:   D_{S1,ii} = 1/(2‖(𝑺𝒀 − 𝑪𝑿_g)_i^T‖);
10:  D_{S2,ii} = 1/(2‖(𝒀 − 𝑺𝒀)_i^T‖);
11:  𝑹 ← 𝑪𝑪^T;
12:  D_{ii} ← ∑_{j=1}^{L} R_{ij};
13:  𝑳 = 𝑫 − 𝑹;
14:  𝑻_4 ← 2γ𝑳;
15:  𝑻_5 ← 𝒀(𝑫_{S1} + β𝑫_{S2})𝒀^T(𝒀𝒀^T)⁻¹;
16:  𝑻_6 ← (𝑪𝑿_g𝑫_{S1} + β𝒀𝑫_{S2})𝒀^T(𝒀𝒀^T)⁻¹;
17:  𝑺 ← sylvester(𝑻_4, 𝑻_5, 𝑻_6);
18:  Check the convergence conditions;
19: End
Output: 𝑺, 𝑪.
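
A compact sketch of Algorithm I's alternating loop follows, under the assumptions and helpers introduced above (fuzzy_features, sylvester). The diagonal reweighting matrices are kept as vectors for efficiency, a pseudoinverse stands in for (𝒀𝒀^T)⁻¹ for numerical safety, and the convergence test is a simple parameter-change criterion chosen here for illustration only.

def fit_r_mltsk_fs(X, Y, m, delta, alpha, beta, gamma, n_iter=50, eps=1e-8):
    L_dim, N = Y.shape
    X_g = fuzzy_features(X, m, delta)                      # step 2
    S = np.ones((L_dim, L_dim))                            # step 1
    C = np.ones((L_dim, X_g.shape[0])) / L_dim
    ones = np.ones((L_dim, L_dim))
    I3 = np.eye(L_dim)
    YYt_inv = np.linalg.pinv(Y @ Y.T)
    row_w = lambda M: 1.0 / (2.0 * np.linalg.norm(M.T, axis=1) + eps)  # per-sample weights
    for _ in range(n_iter):
        C_prev, S_prev = C.copy(), S.copy()
        d_C = row_w(S @ Y - C @ X_g)                       # step 4 (diagonal of D_C)
        G = (S @ Y) @ (S @ Y).T
        T1 = alpha * I3 + gamma * ((G * I3).T @ ones + ones @ (G * I3)) - 2 * gamma * G  # step 5
        T2 = (X_g * d_C) @ X_g.T                           # step 6
        T3 = ((S @ Y) * d_C) @ X_g.T                       # step 7
        C = sylvester(T1, T2, T3)                          # step 8
        d_S1 = row_w(S @ Y - C @ X_g)                      # step 9
        d_S2 = row_w(Y - S @ Y)                            # step 10
        R = C @ C.T                                        # steps 11-13
        Lap = np.diag(R.sum(axis=1)) - R
        T4 = 2 * gamma * Lap                               # step 14
        T5 = (Y * (d_S1 + beta * d_S2)) @ Y.T @ YYt_inv    # step 15
        T6 = ((C @ X_g) * d_S1 + beta * (Y * d_S2)) @ Y.T @ YYt_inv  # step 16
        S = sylvester(T4, T5, T6)                          # step 17
        if max(np.abs(C - C_prev).max(), np.abs(S - S_prev).max()) < 1e-4:
            break                                          # step 18 (illustrative test)
    return S, C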
D. Computational Complexity Analysis

The computational complexity of R-MLTSK-FS is analyzed according to the steps in Algorithm I and expressed using big-O notation. For step 1, the complexity of initialization is O(1). For step 2, the computational complexity of transforming 𝑿 into 𝑿_g is O(2NKD + 2NK). The computational complexity of step 4 is O(L²N + LNK(D+1)). For step 5, the computational complexity of 𝑻_1 is O(2L²N + L³ + 2L²). For step 6, the computational complexity of 𝑻_2 is O(N²K(D+1) + NK²(D+1)²). For step 7, the computational complexity of calculating 𝑻_3 is O(L²N + LN² + LNK(D+1)). The computational complexity of step 8 is O(3L⁴). For step 9, the complexity of calculating 𝑫_{S1} is O(L²N + LNK(D+1)). For step 10, the complexity of 𝑫_{S2} is O(L²N). The complexity of step 11 is O(L²K(D+1)). The complexity of steps 12-14 is O(1). For step 15, the complexity of 𝑻_5 is O(LN² + L²N + L³). The complexity of step 16 is O(LNK(D+1) + LN² + L²N + L³). For step 17, the complexity is O(3L²K²(D+1)²). Hence, the overall complexity of the whole algorithm is dominated by steps 6 and 16. Let a = max(L, D, K) and b = max(N, K(D+1)). In general, a ≪ b. Therefore, the maximum computational complexity of R-MLTSK-FS is O(a³ + b(2ab + a² + 2b²)).

TABLE I
STATISTICS OF DATASETS

Dataset     #Instance  #Feature  #Label
Arts        5000       462       26
Birds       645        260       19
CAL500      502        68        174
Corel5k     5000       499       374
Flags       194        19        7
Genbase     662        1185      27
Medical     978        1449      45
Mirflickr   25000      150       24
Recreation  5000       606       22
Science     5000       743       40

IV. EXPERIMENTAL ANALYSIS

Extensive experiments are conducted to fully assess the effectiveness of R-MLTSK-FS, including classification performance evaluation, robustness analysis, effectiveness analysis of soft label learning and correlation enhancement learning, parameter analysis, convergence analysis, and statistical analysis. The datasets, evaluation metrics and the settings used in the experiments are described below.

A. Datasets

We adopt 10 benchmark multilabel datasets to evaluate the performance of R-MLTSK-FS. Table I shows the details of these datasets, where #Instance, #Feature and #Label denote the instance number, the feature dimension and the label space dimension, respectively. These datasets are available from GitHub¹.

¹ https://github.com/ZesenChen/multi-label-dataset and https://github.com/KKimura360/MLC_toolbox/tree/master/dataset/matfile

B. Evaluation Metrics

Let {(𝒙̃_i, 𝒚̃_i) | 1 ≤ i ≤ N_t} be a test set with N_t samples, 𝒚̂_i be the predicted labels of 𝒙̃_i, and f(𝒙̃_i, l) be the continuous output predicted by the multilabel method for the instance 𝒙̃_i on the lth label. The ranking function rank(𝒙̃_i, l) is obtained according to f(𝒙̃_i, l): if f(𝒙̃_i, l) > f(𝒙̃_i, l′), then rank(𝒙̃_i, l) < rank(𝒙̃_i, l′). Let L_{𝒙_i} be the label set related to 𝒙̃_i, and L̄_{𝒙_i} be the complement of L_{𝒙_i}. Based on these settings, the four metrics below, commonly used in multilabel learning, are employed in the experiments [46].

(1) Average Precision (AP): the average proportion of the related labels of an instance that are ranked lower than a given label l. The larger the value of AP, the better the classification performance.

AP = (1/N_t) ∑_{i=1}^{N_t} (1/|L_{𝒙_i}|) ∑_{l∈L_{𝒙_i}} |{l′ ∈ L_{𝒙_i} | f(𝒙̃_i, l′) ≥ f(𝒙̃_i, l)}| / rank(𝒙̃_i, l)   (34)

(2) Hamming Loss (HL): the average proportion of labels of an instance that are predicted incorrectly. The smaller the value of HL, the better the classification performance.

HL = (1/N_t) ∑_{i=1}^{N_t} |𝒚̃_i ⊕ 𝒚̂_i| / L   (35)

where ⊕ is the XOR operation.

(3) Ranking Loss (RL): the proportion of label pairs (l, l′) ∈ L_{𝒙_i} × L̄_{𝒙_i} for which the related label l is not ranked before the unrelated label l′. The smaller the value of RL, the better the classification performance.

RL = (1/N_t) ∑_{i=1}^{N_t} |{(l, l′) | f(𝒙̃_i, l) ≤ f(𝒙̃_i, l′), (l, l′) ∈ L_{𝒙_i} × L̄_{𝒙_i}}| / (|L_{𝒙_i}||L̄_{𝒙_i}|)   (36)

(4) Coverage (CV): it measures how far, on average, one needs to go down the ranked label list to cover all the related labels of an instance. The smaller the value of CV, the better the classification performance.

CV = (1/N_t) ∑_{i=1}^{N_t} (max_{l∈L_{𝒙_i}} rank(𝒙̃_i, l) − 1)   (37)

C. Experimental Settings

In this paper, we employ eight methods for comparison, including binary relevance (BR) [47], multilabel k-nearest neighbor (MLkNN) [48], meta-label-specific features (MLSF) [49], ML-TSK FS [17], classifier chains (CC) [50], random k-labelsets (RAkEL) [51], correlated logistic models (CorrLog) [52] and hybrid noise-oriented multilabel learning (HNOML) [53]. These methods and the settings of the parameters for grid search are described in Table II. We adopt the 5-fold cross-validation strategy to evaluate the performance.

TABLE II
DESCRIPTION OF METHODS

BR: A first-order method. To improve the robustness, it introduces ε-insensitive learning (a fuzzy method) by solving a system of linear inequalities (LSSLI) [54] as the binary classifier. Parameter setting: C = 2.^(−5:1:5), M = {2, 3, 4, 5, 6, 7, 8, 9}.

MLkNN: A first-order method that predicts a new instance by maximizing the posterior probability of each label. The number of nearest neighbors affects the robustness of the model to some extent. Parameter setting: K = {1, 3, 5, 7, 9, 11, 13}, s = {0.01, 0.03, 0.05, 0.07, 0.09}.

MLSF: A second-order method. It improves the performance through meta-label learning and specific feature selection. Parameter setting: k = {2, 4, 6, 8}, ε = {0.1, 1, 10}, α = {0.1, 0.5, 0.9}, γ = {0.1, 1, 10}.

ML-TSK FS: A second-order method that uses the correlation between any two labels to improve performance. To realize transparency, it uses fuzzy rules to model the inference relationship between features and labels. This method does not consider the influence of label noise. Parameter setting: K = {2, 3, 4, 5}, α = {0.01, 0.1, 1, 10, 100}, β = {0.01, 0.1, 1, 10, 100}.

CC: A high-order method which adds the prediction result of the previous label to the feature space to participate in the prediction of the next label. The ε-insensitive learning (a fuzzy method) by solving a system of linear inequalities (LSSLI) [54] is used as the binary classifier to improve the robustness. Parameter setting: C = 2.^(−5:1:5), M = {2, 3, 4, 5, 6, 7, 8, 9}.

RAkEL: A high-order method. The label space is randomly divided into multiple label subspaces, and the prediction result of a label is associated with the other labels in the subspace. Parameter setting: k = N./(12:−2:2) (N is the instance number), α = {0.1, 0.3, 0.5, 0.7, 0.9}.

CorrLog: A high-order method. It achieves robustness by constructing the association between a label and all other labels. Parameter setting: rho1 = {0.001, 0.003, 0.005, 0.007, 0.009, 0.01, 0.03, 0.05, 0.07, 0.09, 0.1, 0.3, 0.5, 0.7, 0.9}, rho2 = {0.001, 0.005, 0.01, 0.05, 0.1, 0.5}.

HNOML: A high-order method. It designs a label enrichment matrix to improve the robustness. Parameter setting: α = {0.01, 0.1, 1, 10}, β = {0.01, 0.1, 1, 10, 100}, γ = {0.01, 0.1, 1, 10}.

R-MLTSK-FS (ours): The method proposed in this paper. It is a second-order method and achieves transparency and robustness against label noise through fuzzy rules, correlation enhancement learning, soft multilabel loss function construction and soft label learning. Parameter setting: α = β = γ = {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100}, k = {2, 3}.

D. Performance Analysis

1) Classification Performance Evaluation

To verify the effectiveness of R-MLTSK-FS, we compare R-MLTSK-FS with the eight methods on the 10 datasets. The experimental results, expressed in terms of the mean and standard deviation (in brackets) of the four metrics, are shown in Table III. For each dataset, the best value of each metric is bold-faced. We can see that, compared to the eight methods, the overall performance of R-MLTSK-FS is the best on all the metrics. This is attributable to the three mechanisms introduced.

2) Robustness Analysis

In order to verify the robustness of R-MLTSK-FS against label noise, we introduce label noise to the data and evaluate the performance. Specifically, we randomly select 0%, 10%, 20%, 30% and 40% of the samples from the training set, and then create noise by changing their related (unrelated) labels to unrelated (related) ones. The 5-fold cross-validation strategy is adopted in the experiment. Fig. 2 shows the experimental results, from which the following findings are obtained:

(1) Despite the increase in the amount of noise in the experiments, the proposed R-MLTSK-FS maintains outstanding classification performance, indicating the effectiveness of the three mechanisms in reducing the influence of label noise.

(2) Label noise has different effects on the comparison methods. For example, the performance of MLkNN in the presence of label noise is unstable because the robustness of MLkNN against noisy labels is affected by the number of nearest neighbors. For RAkEL and CorrLog, the performance is unsatisfactory since they ignore label noise in modeling the correlation between labels. For ML-TSK FS, its overall robustness is inferior to the proposed method as it also ignores the influence of label noise in model training.

3) Effectiveness Analysis of Soft Label Learning

To evaluate the effectiveness of R-MLTSK-FS in soft label learning, we study the influence weights 𝑺 with three synthetic multilabel datasets, namely the Independence dataset, the Equality dataset and the Union dataset [55], each containing 1000 samples. For each sample, the feature dimension is 20 and the label dimension is 5. Each feature in the synthetic datasets is normalized in [0, 1].

Each synthetic dataset has five labels, 𝒴_1, …, 𝒴_5. For the first four labels, the logical relationships are designed as follows:

Independence dataset: the first four labels 𝒴_1, 𝒴_2, 𝒴_3 and 𝒴_4 are independent of each other.

Equality dataset: 𝒴_1 = 𝒴_2 and 𝒴_3 = 𝒴_4. That is, for a sample (𝒙_i, 𝒚_i) (1 ≤ i ≤ 1000), y_{i1} = y_{i2} and y_{i3} = y_{i4}.

Union dataset: 𝒴_1 = 𝒴_2 ∨ 𝒴_3 ∨ 𝒴_4. That is, for a sample (𝒙_i, 𝒚_i) (1 ≤ i ≤ 1000), if y_{i2} = 1 or y_{i3} = 1 or y_{i4} = 1, then y_{i1} = 1; otherwise, y_{i1} = 0.

TABLE III
MEAN (SD) OF THE METRICS OF THE MULTILABEL CLASSIFICATION METHODS

Dataset | Metric | BR | MLkNN | MLSF | ML-TSK FS | CC | RAkEL | CorrLog | HNOML | R-MLTSK-FS
|
1026 |
+
AP
|
1027 |
+
0.6270
|
1028 |
+
(0.0076)
|
1029 |
+
0.5454
|
1030 |
+
(0.0082)
|
1031 |
+
0.4977
|
1032 |
+
(0.0859)
|
1033 |
+
0.6207
|
1034 |
+
(0.0141)
|
1035 |
+
0.6164
|
1036 |
+
(0.0084)
|
1037 |
+
0.2682
|
1038 |
+
(0.0285)
|
1039 |
+
0.3646
|
1040 |
+
(0.0482)
|
1041 |
+
0.6090
|
1042 |
+
(0.0082)
|
1043 |
+
0.6289
|
1044 |
+
(0.0130)
|
1045 |
+
HL
|
1046 |
+
0.0902
|
1047 |
+
(0.0050)
|
1048 |
+
0.0629
|
1049 |
+
(0.0007)
|
1050 |
+
0.0604
|
1051 |
+
(0.0022)
|
1052 |
+
0.0529
|
1053 |
+
(0.0019)
|
1054 |
+
0.1025
|
1055 |
+
(0.0011)
|
1056 |
+
0.1950
|
1057 |
+
(0.0092)
|
1058 |
+
0.0597
|
1059 |
+
(0.0018)
|
1060 |
+
0.0573
|
1061 |
+
(0.0009)
|
1062 |
+
0.0546
|
1063 |
+
(0.0017)
|
1064 |
+
RL
|
1065 |
+
0.1266
|
1066 |
+
(0.0042)
|
1067 |
+
0.1396
|
1068 |
+
(0.0028)
|
1069 |
+
0.1257
|
1070 |
+
(0.0309)
|
1071 |
+
0.1161
|
1072 |
+
(0.0039)
|
1073 |
+
0.1300
|
1074 |
+
(0.0069)
|
1075 |
+
0.4123
|
1076 |
+
(0.0325)
|
1077 |
+
0.3865
|
1078 |
+
(0.0878)
|
1079 |
+
0.1509
|
1080 |
+
(0.0052)
|
1081 |
+
0.1118
|
1082 |
+
(0.0075)
|
1083 |
+
CV
|
1084 |
+
0.1965
|
1085 |
+
(0.0053)
|
1086 |
+
0.1981
|
1087 |
+
(0.0036)
|
1088 |
+
0.3047
|
1089 |
+
(0.0663)
|
1090 |
+
0.1807
|
1091 |
+
(0.0083)
|
1092 |
+
0.2054
|
1093 |
+
(0.0082)
|
1094 |
+
0.8363
|
1095 |
+
(0.0369)
|
1096 |
+
0.4724
|
1097 |
+
(0.0694)
|
1098 |
+
0.2371
|
1099 |
+
(0.0045)
|
1100 |
+
0.1720
|
1101 |
+
(0.0073)
|
1102 |
+
Birds
|
1103 |
+
AP
|
1104 |
+
0.3422
|
1105 |
+
(0.0340)
|
1106 |
+
0.2303
|
1107 |
+
(0.0185)
|
1108 |
+
0.2712
|
1109 |
+
(0.0203)
|
1110 |
+
0.3438
|
1111 |
+
(0.0347)
|
1112 |
+
0.3360
|
1113 |
+
(0.0174)
|
1114 |
+
0.3591
|
1115 |
+
(0.0319)
|
1116 |
+
0.2124
|
1117 |
+
(0.0230)
|
1118 |
+
0.3352
|
1119 |
+
(0.0325)
|
1120 |
+
0.3694
|
1121 |
+
(0.0354)
|
1122 |
+
HL
|
1123 |
+
0.0556
|
1124 |
+
(0.0022)
|
1125 |
+
0.0551
|
1126 |
+
(0.0058)
|
1127 |
+
0.0648
|
1128 |
+
(0.0027)
|
1129 |
+
0.0514
|
1130 |
+
(0.0038)
|
1131 |
+
0.0545
|
1132 |
+
(0.0033)
|
1133 |
+
0.0446
|
1134 |
+
(0.0032)
|
1135 |
+
0.0451
|
1136 |
+
(0.0027)
|
1137 |
+
0.0515
|
1138 |
+
(0.0065)
|
1139 |
+
0.0430
|
1140 |
+
(0.0063)
|
1141 |
+
RL
|
1142 |
+
0.0983
|
1143 |
+
(0.0230)
|
1144 |
+
0.1565
|
1145 |
+
(0.0127)
|
1146 |
+
0.0807
|
1147 |
+
(0.0205)
|
1148 |
+
0.0863
|
1149 |
+
(0.0221)
|
1150 |
+
0.1097
|
1151 |
+
(0.0055)
|
1152 |
+
0.6509
|
1153 |
+
(0.0634)
|
1154 |
+
0.1611
|
1155 |
+
(0.0067)
|
1156 |
+
0.0968
|
1157 |
+
(0.0215)
|
1158 |
+
0.0710
|
1159 |
+
(0.0124)
|
1160 |
+
CV
|
1161 |
+
0.1311
|
1162 |
+
(0.0151)
|
1163 |
+
0.1887
|
1164 |
+
(0.0203)
|
1165 |
+
0.1699
|
1166 |
+
(0.0495)
|
1167 |
+
0.1132
|
1168 |
+
(0.0315)
|
1169 |
+
0.1445
|
1170 |
+
(0.0094)
|
1171 |
+
0.7032
|
1172 |
+
(0.0364)
|
1173 |
+
0.1939
|
1174 |
+
(0.0141)
|
1175 |
+
0.1179
|
1176 |
+
(0.0188)
|
1177 |
+
0.0957
|
1178 |
+
(0.0193)
|
1179 |
+
CAL500
|
1180 |
+
AP
|
1181 |
+
0.5048
|
1182 |
+
(0.0055)
|
1183 |
+
0.4965
|
1184 |
+
(0.0037)
|
1185 |
+
0.4906
|
1186 |
+
(0.0119)
|
1187 |
+
0.5075
|
1188 |
+
(0.0104)
|
1189 |
+
0.4541
|
1190 |
+
(0.0088)
|
1191 |
+
0.2150
|
1192 |
+
(0.0047)
|
1193 |
+
0.3108
|
1194 |
+
(0.0171)
|
1195 |
+
0.4314
|
1196 |
+
(0.1844)
|
1197 |
+
0.5153
|
1198 |
+
(0.0152)
|
1199 |
+
HL
|
1200 |
+
0.1447
|
1201 |
+
(0.0034)
|
1202 |
+
0.1371
|
1203 |
+
(0.0031)
|
1204 |
+
0.1368
|
1205 |
+
(0.0027)
|
1206 |
+
0.1368
|
1207 |
+
(0.0027)
|
1208 |
+
0.1442
|
1209 |
+
(0.0026)
|
1210 |
+
0.1363
|
1211 |
+
(0.0036)
|
1212 |
+
0.1371
|
1213 |
+
(0.0046)
|
1214 |
+
0.1411
|
1215 |
+
(0.0072)
|
1216 |
+
0.1358
|
1217 |
+
(0.0034)
|
1218 |
+
RL
|
1219 |
+
0.1879
|
1220 |
+
(0.0058)
|
1221 |
+
0.1822
|
1222 |
+
(0.0043)
|
1223 |
+
0.1780
|
1224 |
+
(0.0053)
|
1225 |
+
0.1763
|
1226 |
+
(0.0035)
|
1227 |
+
0.2515
|
1228 |
+
(0.0085)
|
1229 |
+
0.6145
|
1230 |
+
(0.0161)
|
1231 |
+
0.6750
|
1232 |
+
(0.1145)
|
1233 |
+
0.1423
|
1234 |
+
(0.0797)
|
1235 |
+
0.1744
|
1236 |
+
(0.0012)
|
1237 |
+
CV
|
1238 |
+
0.7656
|
1239 |
+
(0.0132)
|
1240 |
+
0.7583
|
1241 |
+
(0.0122)
|
1242 |
+
0.7600
|
1243 |
+
(0.0132)
|
1244 |
+
0.7380
|
1245 |
+
(0.0091)
|
1246 |
+
0.9085
|
1247 |
+
(0.0105)
|
1248 |
+
0.7835
|
1249 |
+
(0.0264)
|
1250 |
+
0.8722
|
1251 |
+
(0.0119)
|
1252 |
+
0.7669
|
1253 |
+
(0.0579)
|
1254 |
+
0.7348
|
1255 |
+
(0.0278)
|
1256 |
+
Corel5k
|
1257 |
+
AP
|
1258 |
+
0.3044
|
1259 |
+
(0.0068)
|
1260 |
+
0.2561
|
1261 |
+
(0.0077)
|
1262 |
+
0.2134
|
1263 |
+
(0.0178)
|
1264 |
+
0.3064
|
1265 |
+
(0.0003)
|
1266 |
+
0.2639
|
1267 |
+
(0.0061)
|
1268 |
+
0.0652
|
1269 |
+
(0.0032)
|
1270 |
+
0.2079
|
1271 |
+
(0.0085)
|
1272 |
+
0.2884
|
1273 |
+
(0.0105)
|
1274 |
+
0.3070
|
1275 |
+
(0.0070)
|
1276 |
+
HL
|
1277 |
+
0.0094
|
1278 |
+
(0.0001)
|
1279 |
+
0.0094
|
1280 |
+
(0.0001)
|
1281 |
+
0.0094
|
1282 |
+
(0.0001)
|
1283 |
+
0.0094
|
1284 |
+
(0.0003)
|
1285 |
+
0.0094
|
1286 |
+
(0.0001)
|
1287 |
+
0.0197
|
1288 |
+
(0.0002)
|
1289 |
+
0.0094
|
1290 |
+
(0.0003)
|
1291 |
+
0.0111
|
1292 |
+
(0.0006)
|
1293 |
+
0.0094
|
1294 |
+
(0.0001)
|
1295 |
+
RL
|
1296 |
+
0.1649
|
1297 |
+
(0.0044)
|
1298 |
+
0.1313
|
1299 |
+
(0.0040)
|
1300 |
+
0.2591
|
1301 |
+
(0.0290)
|
1302 |
+
0.1294
|
1303 |
+
(0.0047)
|
1304 |
+
0.1784
|
1305 |
+
(0.0068)
|
1306 |
+
0.5564
|
1307 |
+
(0.0279)
|
1308 |
+
0.1432
|
1309 |
+
(0.0032)
|
1310 |
+
0.1119
|
1311 |
+
(0.2279)
|
1312 |
+
0.1092
|
1313 |
+
(0.0028)
|
1314 |
+
CV
|
1315 |
+
0.3852
|
1316 |
+
(0.0045)
|
1317 |
+
0.3023
|
1318 |
+
(0.0059)
|
1319 |
+
0.6994
|
1320 |
+
(0.0983)
|
1321 |
+
0.3018
|
1322 |
+
(0.0108)
|
1323 |
+
0.4288
|
1324 |
+
(0.0108)
|
1325 |
+
0.5552
|
1326 |
+
(0.0167)
|
1327 |
+
0.3207
|
1328 |
+
(0.0101)
|
1329 |
+
0.3678
|
1330 |
+
(0.0092)
|
1331 |
+
0.2600
|
1332 |
+
(0.0090)
|
1333 |
+
Flags
|
1334 |
+
AP
|
1335 |
+
0.8101
|
1336 |
+
(0.0316)
|
1337 |
+
0.8020
|
1338 |
+
(0.0415)
|
1339 |
+
0.8163
|
1340 |
+
(0.0226)
|
1341 |
+
0.8176
|
1342 |
+
(0.0118)
|
1343 |
+
0.8076
|
1344 |
+
(0.0413)
|
1345 |
+
0.6581
|
1346 |
+
(0.0544)
|
1347 |
+
0.7704
|
1348 |
+
(0.0180)
|
1349 |
+
0.8080
|
1350 |
+
(0.0110)
|
1351 |
+
0.8209
|
1352 |
+
(0.0391)
|
1353 |
+
HL
|
1354 |
+
0.2796
|
1355 |
+
(0.0216)
|
1356 |
+
0.3275
|
1357 |
+
(0.0272)
|
1358 |
+
0.2768
|
1359 |
+
(0.0155)
|
1360 |
+
0.2649
|
1361 |
+
(0.0254)
|
1362 |
+
0.2711
|
1363 |
+
(0.0307)
|
1364 |
+
0.2755
|
1365 |
+
(0.0323)
|
1366 |
+
0.2856
|
1367 |
+
(0.0258)
|
1368 |
+
0.2711
|
1369 |
+
(0.0124)
|
1370 |
+
0.2647
|
1371 |
+
(0.0438)
|
1372 |
+
RL
|
1373 |
+
0.2155
|
1374 |
+
(0.0341)
|
1375 |
+
0.2443
|
1376 |
+
(0.0374)
|
1377 |
+
0.1374
|
1378 |
+
(0.0066)
|
1379 |
+
0.2132
|
1380 |
+
(0.0173)
|
1381 |
+
0.2340
|
1382 |
+
(0.0495)
|
1383 |
+
0.6030
|
1384 |
+
(0.0419)
|
1385 |
+
0.3566
|
1386 |
+
(0.0408)
|
1387 |
+
0.2178
|
1388 |
+
(0.0159)
|
1389 |
+
0.2054
|
1390 |
+
(0.0345)
|
1391 |
+
CV
|
1392 |
+
0.5523
|
1393 |
+
(0.0159)
|
1394 |
+
0.5626
|
1395 |
+
(0.0198)
|
1396 |
+
0.5524
|
1397 |
+
(0.0206)
|
1398 |
+
0.5232
|
1399 |
+
(0.0127)
|
1400 |
+
0.5553
|
1401 |
+
(0.0123)
|
1402 |
+
0.8903
|
1403 |
+
(0.0252)
|
1404 |
+
0.5486
|
1405 |
+
(0.0150)
|
1406 |
+
0.5431
|
1407 |
+
(0.0341)
|
1408 |
+
0.5318
|
1409 |
+
(0.0276)
|
1410 |
+
Genbase
|
1411 |
+
AP
|
1412 |
+
0.9922
|
1413 |
+
(0.0067)
|
1414 |
+
0.9910
|
1415 |
+
(0.0043)
|
1416 |
+
0.9913
|
1417 |
+
(0.0051)
|
1418 |
+
0.9968
|
1419 |
+
(0.0027)
|
1420 |
+
0.9802
|
1421 |
+
(0.0181)
|
1422 |
+
0.7784
|
1423 |
+
(0.0697)
|
1424 |
+
0.9717
|
1425 |
+
(0.0097)
|
1426 |
+
0.9941
|
1427 |
+
(0.0050)
|
1428 |
+
0.9977
|
1429 |
+
(0.0031)
|
1430 |
+
HL
|
1431 |
+
0.0011
|
1432 |
+
(0.0006)
|
1433 |
+
0.0016
|
1434 |
+
(0.0005)
|
1435 |
+
0.0044
|
1436 |
+
(0.0016)
|
1437 |
+
0.0015
|
1438 |
+
(0.0017)
|
1439 |
+
0.0095
|
1440 |
+
(0.0033)
|
1441 |
+
0.0022
|
1442 |
+
(0.0012)
|
1443 |
+
0.0022
|
1444 |
+
(0.0007)
|
1445 |
+
0.0020
|
1446 |
+
(0.0015)
|
1447 |
+
0.0010
|
1448 |
+
(0.0012)
|
1449 |
+
RL
|
1450 |
+
0.0035
|
1451 |
+
(0.0049)
|
1452 |
+
0.0061
|
1453 |
+
(0.0040)
|
1454 |
+
0.0038
|
1455 |
+
(0.0026)
|
1456 |
+
0.0011
|
1457 |
+
(0.0009)
|
1458 |
+
0.0087
|
1459 |
+
(0.0081)
|
1460 |
+
0.0242
|
1461 |
+
(0.0184)
|
1462 |
+
0.0355
|
1463 |
+
(0.0095)
|
1464 |
+
0.0006
|
1465 |
+
(0.0007)
|
1466 |
+
0.0006
|
1467 |
+
(0.0005)
|
1468 |
+
CV
|
1469 |
+
0.0150
|
1470 |
+
(0.0061)
|
1471 |
+
0.0192
|
1472 |
+
(0.0073)
|
1473 |
+
0.0195
|
1474 |
+
(0.0073)
|
1475 |
+
0.0105
|
1476 |
+
(0.0042)
|
1477 |
+
0.0244
|
1478 |
+
(0.0154)
|
1479 |
+
0.0588
|
1480 |
+
(0.0159)
|
1481 |
+
0.0407
|
1482 |
+
(0.0063)
|
1483 |
+
0.0126
|
1484 |
+
(0.0046)
|
1485 |
+
0.0102
|
1486 |
+
(0.0021)
|
1487 |
+
Medical
|
1488 |
+
AP
|
1489 |
+
0.8755
|
1490 |
+
(0.0266)
|
1491 |
+
0.8067
|
1492 |
+
(0.0128)
|
1493 |
+
0.8272
|
1494 |
+
(0.0250)
|
1495 |
+
0.8959
|
1496 |
+
(0.0143)
|
1497 |
+
0.8765
|
1498 |
+
(0.0307)
|
1499 |
+
0.4443
|
1500 |
+
(0.0219)
|
1501 |
+
0.7562
|
1502 |
+
(0.0181)
|
1503 |
+
0.8761
|
1504 |
+
(0.0495)
|
1505 |
+
0.8822
|
1506 |
+
(0.0150)
|
1507 |
+
HL
|
1508 |
+
0.0142
|
1509 |
+
(0.0018)
|
1510 |
+
0.0156
|
1511 |
+
(0.0004)
|
1512 |
+
0.0131
|
1513 |
+
(0.0012)
|
1514 |
+
0.0107
|
1515 |
+
(0.0006)
|
1516 |
+
0.0125
|
1517 |
+
(0.0014)
|
1518 |
+
0.0109
|
1519 |
+
(0.0008)
|
1520 |
+
0.0113
|
1521 |
+
(0.0007)
|
1522 |
+
0.0213
|
1523 |
+
(0.0085)
|
1524 |
+
0.0105
|
1525 |
+
(0.0019)
|
1526 |
+
RL
|
1527 |
+
0.0274
|
1528 |
+
(0.0147)
|
1529 |
+
0.0430
|
1530 |
+
(0.0061)
|
1531 |
+
0.0273
|
1532 |
+
(0.0038)
|
1533 |
+
0.0371
|
1534 |
+
(0.0136)
|
1535 |
+
0.0311
|
1536 |
+
(0.0175)
|
1537 |
+
0.1079
|
1538 |
+
(0.0250)
|
1539 |
+
0.2742
|
1540 |
+
(0.0258)
|
1541 |
+
0.0232
|
1542 |
+
(0.0320)
|
1543 |
+
0.0197
|
1544 |
+
(0.0039)
|
1545 |
+
CV
|
1546 |
+
0.0415
|
1547 |
+
(0.0186)
|
1548 |
+
0.0629
|
1549 |
+
(0.0056)
|
1550 |
+
0.0717
|
1551 |
+
(0.0082)
|
1552 |
+
0.0363
|
1553 |
+
(0.0068)
|
1554 |
+
0.0453
|
1555 |
+
(0.0226)
|
1556 |
+
0.1394
|
1557 |
+
(0.0304)
|
1558 |
+
0.1969
|
1559 |
+
(0.0280)
|
1560 |
+
0.0357
|
1561 |
+
(0.0217)
|
1562 |
+
0.0308
|
1563 |
+
(0.0105)
|
1564 |
+
Mirflickr
|
1565 |
+
AP
|
1566 |
+
0.4540
|
1567 |
+
(0.0421)
|
1568 |
+
0.5096
|
1569 |
+
(0.0028)
|
1570 |
+
0.2906
|
1571 |
+
(0.0156)
|
1572 |
+
0.5239
|
1573 |
+
(0.0045)
|
1574 |
+
0.4703
|
1575 |
+
(0.0019)
|
1576 |
+
0.2216
|
1577 |
+
(0.0030)
|
1578 |
+
0.4779
|
1579 |
+
(0.0085)
|
1580 |
+
0.5121
|
1581 |
+
(0.0084)
|
1582 |
+
0.5246
|
1583 |
+
(0.0015)
|
1584 |
+
HL
|
1585 |
+
0.1528
|
1586 |
+
(0.0122)
|
1587 |
+
0.1533
|
1588 |
+
(0.0006)
|
1589 |
+
0.1543
|
1590 |
+
(0.0010)
|
1591 |
+
0.1521
|
1592 |
+
(0.0005)
|
1593 |
+
0.1588
|
1594 |
+
(0.0010)
|
1595 |
+
0.2122
|
1596 |
+
(0.0030)
|
1597 |
+
0.1548
|
1598 |
+
(0.0005)
|
1599 |
+
0.1523
|
1600 |
+
(0.0022)
|
1601 |
+
0.1521
|
1602 |
+
(0.0004)
|
1603 |
+
RL
|
1604 |
+
0.3218
|
1605 |
+
(0.0419)
|
1606 |
+
0.2050
|
1607 |
+
(0.0027)
|
1608 |
+
0.2616
|
1609 |
+
(0.0012)
|
1610 |
+
0.1946
|
1611 |
+
(0.0015)
|
1612 |
+
0.2444
|
1613 |
+
(0.0015)
|
1614 |
+
0.5694
|
1615 |
+
(0.0087)
|
1616 |
+
0.2146
|
1617 |
+
(0.0028)
|
1618 |
+
0.2106
|
1619 |
+
(0.0097)
|
1620 |
+
0.1929
|
1621 |
+
(0.0012)
|
1622 |
+
CV
|
1623 |
+
0.6120
|
1624 |
+
(0.0327)
|
1625 |
+
0.4395
|
1626 |
+
(0.0045)
|
1627 |
+
0.4703
|
1628 |
+
(0.0082)
|
1629 |
+
0.4190
|
1630 |
+
(0.0031)
|
1631 |
+
0.5314
|
1632 |
+
(0.0037)
|
1633 |
+
0.9937
|
1634 |
+
(0.0021)
|
1635 |
+
0.4495
|
1636 |
+
(0.0041)
|
1637 |
+
0.4434
|
1638 |
+
(0.0043)
|
1639 |
+
0.4182
|
1640 |
+
(0.0051)
|
1641 |
+
Recreation
|
1642 |
+
AP
|
1643 |
+
0.6363
|
1644 |
+
(0.0151)
|
1645 |
+
0.5333
|
1646 |
+
(0.0092)
|
1647 |
+
0.4817
|
1648 |
+
(0.0426)
|
1649 |
+
0.6362
|
1650 |
+
(0.0061)
|
1651 |
+
0.6286
|
1652 |
+
(0.0152)
|
1653 |
+
0.2922
|
1654 |
+
(0.0193)
|
1655 |
+
0.2104
|
1656 |
+
(0.0247)
|
1657 |
+
0.6062
|
1658 |
+
(0.0076)
|
1659 |
+
0.6366
|
1660 |
+
(0.0058)
|
1661 |
+
HL
|
1662 |
+
0.0905
|
1663 |
+
(0.0014)
|
1664 |
+
0.0647
|
1665 |
+
(0.0012)
|
1666 |
+
0.0637
|
1667 |
+
(0.0014)
|
1668 |
+
0.0592
|
1669 |
+
(0.0012)
|
1670 |
+
0.0998
|
1671 |
+
(0.0019)
|
1672 |
+
0.2923
|
1673 |
+
(0.0148)
|
1674 |
+
0.0583
|
1675 |
+
(0.0010)
|
1676 |
+
0.0563
|
1677 |
+
(0.0021)
|
1678 |
+
0.0553
|
1679 |
+
(0.0017)
|
1680 |
+
RL
|
1681 |
+
0.1391
|
1682 |
+
(0.0082)
|
1683 |
+
0.1640
|
1684 |
+
(0.0011)
|
1685 |
+
0.1408
|
1686 |
+
(0.0410)
|
1687 |
+
0.1297
|
1688 |
+
(0.0020)
|
1689 |
+
0.1400
|
1690 |
+
(0.0083)
|
1691 |
+
0.4073
|
1692 |
+
(0.0155)
|
1693 |
+
0.4839
|
1694 |
+
(0.0119)
|
1695 |
+
0.1989
|
1696 |
+
(0.0061)
|
1697 |
+
0.1246
|
1698 |
+
(0.0058)
|
1699 |
+
CV
|
1700 |
+
0.1877
|
1701 |
+
(0.0117)
|
1702 |
+
0.2035
|
1703 |
+
(0.0040)
|
1704 |
+
0.3076
|
1705 |
+
(0.0867)
|
1706 |
+
0.1697
|
1707 |
+
(0.0043)
|
1708 |
+
0.1906
|
1709 |
+
(0.0125)
|
1710 |
+
0.8912
|
1711 |
+
(0.0206)
|
1712 |
+
0.4554
|
1713 |
+
(0.0240)
|
1714 |
+
0.2545
|
1715 |
+
(0.0113)
|
1716 |
+
0.1675
|
1717 |
+
(0.0054)
|
1718 |
+
Science
|
1719 |
+
AP
|
1720 |
+
0.5983
|
1721 |
+
(0.0132)
|
1722 |
+
0.5134
|
1723 |
+
(0.0119)
|
1724 |
+
0.4461
|
1725 |
+
(0.0063)
|
1726 |
+
0.5978
|
1727 |
+
(0.0217)
|
1728 |
+
0.5861
|
1729 |
+
(0.0125)
|
1730 |
+
0.2333
|
1731 |
+
(0.0115)
|
1732 |
+
0.2492
|
1733 |
+
(0.0106)
|
1734 |
+
0.5737
|
1735 |
+
(0.0144)
|
1736 |
+
0.5984
|
1737 |
+
(0.0051)
|
1738 |
+
HL
|
1739 |
+
0.0526
|
1740 |
+
(0.0007)
|
1741 |
+
0.0363
|
1742 |
+
(0.0006)
|
1743 |
+
0.0343
|
1744 |
+
(0.0011)
|
1745 |
+
0.0329
|
1746 |
+
(0.0004)
|
1747 |
+
0.0603
|
1748 |
+
(0.0009)
|
1749 |
+
0.1288
|
1750 |
+
(0.0087)
|
1751 |
+
0.0370
|
1752 |
+
(0.0036)
|
1753 |
+
0.0333
|
1754 |
+
(0.0004)
|
1755 |
+
0.0324
|
1756 |
+
(0.0009)
|
1757 |
+
RL
|
1758 |
+
0.1140
|
1759 |
+
(0.0068)
|
1760 |
+
0.1211
|
1761 |
+
(0.0046)
|
1762 |
+
0.0990
|
1763 |
+
(0.0143)
|
1764 |
+
0.0996
|
1765 |
+
(0.0072)
|
1766 |
+
0.1128
|
1767 |
+
(0.0071)
|
1768 |
+
0.3794
|
1769 |
+
(0.0352)
|
1770 |
+
0.4989
|
1771 |
+
(0.1339)
|
1772 |
+
0.1867
|
1773 |
+
(0.0086)
|
1774 |
+
0.0976
|
1775 |
+
(0.0050)
|
1776 |
+
CV
|
1777 |
+
0.1596
|
1778 |
+
(0.0089)
|
1779 |
+
0.1574
|
1780 |
+
(0.0050)
|
1781 |
+
0.1823
|
1782 |
+
(0.0269)
|
1783 |
+
0.1357
|
1784 |
+
(0.0088)
|
1785 |
+
0.1620
|
1786 |
+
(0.0093)
|
1787 |
+
0.7443
|
1788 |
+
(0.0366)
|
1789 |
+
0.3614
|
1790 |
+
(0.0219)
|
1791 |
+
0.2434
|
1792 |
+
(0.0061)
|
1793 |
+
0.1321
|
1794 |
+
(0.0058)
|
1795 |
+
|
1796 |
+
|
1797 |
+
|
1798 |
+
|
1799 |
+
|
1800 |
+
|
1801 |
+
|
1802 |
+
[Fig. 2 comprises ten panels: (a) Arts, (b) Birds, (c) CAL500, (d) Corel5k, (e) Flags, (f) Genbase, (g) Medical, (h) Mirflickr, (i) Recreation, (j) Science.]
Fig. 2 Performance in terms of AP on datasets with label noise. (Noise ratio is defined as the proportion of samples that are randomly selected from the training set and whose related (unrelated) labels are changed to unrelated (related) ones. The larger the value of AP, the better the classification performance.)
TABLE IV
INFLUENCE WEIGHTS (S) OF ORIGINAL LABELS ON A SOFT LABEL IN INDEPENDENCE DATASET

                   | original label 1 (𝒴1) | original label 2 (𝒴2) | original label 3 (𝒴3) | original label 4 (𝒴4) | original label 5 (𝒴5)
soft label 1 (𝒴1′) | 0.2016 | 0.0510 | 0.0697 | 0.0462 | 0.0797
soft label 2 (𝒴2′) | 0.1409 | 0.3149 | 0.1921 | 0.1552 | 0.2182
soft label 3 (𝒴3′) | 0.2447 | 0.2523 | 0.4662 | 0.2628 | 0.3666
soft label 4 (𝒴4′) | 0.0031 | 0.0051 | 0.0053 | 0.1191 | 0.0061
soft label 5 (𝒴5′) | 0.1179 | 0.1046 | 0.1068 | 0.1281 | 0.2832
N.B. 𝒴1, 𝒴2, 𝒴3 and 𝒴4 are independent. 𝒴5 = (¬𝒴1) ∧ (¬𝒴2) ∧ (¬𝒴3) ∧ (¬𝒴4).
TABLE V
INFLUENCE WEIGHTS (S) OF ORIGINAL LABELS ON A SOFT LABEL IN EQUALITY DATASET

                   | original label 1 (𝒴1) | original label 2 (𝒴2) | original label 3 (𝒴3) | original label 4 (𝒴4) | original label 5 (𝒴5)
soft label 1 (𝒴1′) | 0.3645 | 0.3645 | 0.2252 | 0.2252 | 0.6172
soft label 2 (𝒴2′) | 0.3645 | 0.3645 | 0.2252 | 0.2252 | 0.6172
soft label 3 (𝒴3′) | 0.1900 | 0.1900 | 0.2456 | 0.2456 | 0.4350
soft label 4 (𝒴4′) | 0.1900 | 0.1900 | 0.2456 | 0.2456 | 0.4350
soft label 5 (𝒴5′) | 0.1252 | 0.1252 | 0.1260 | 0.1260 | 0.4480
N.B. 𝒴1 = 𝒴2 and 𝒴3 = 𝒴4. 𝒴5 = (¬𝒴1) ∧ (¬𝒴2) ∧ (¬𝒴3) ∧ (¬𝒴4).
TABLE VI
INFLUENCE WEIGHTS (S) OF ORIGINAL LABELS ON A SOFT LABEL IN UNION DATASET

                   | original label 1 (𝒴1) | original label 2 (𝒴2) | original label 3 (𝒴3) | original label 4 (𝒴4) | original label 5 (𝒴5)
soft label 1 (𝒴1′) | 0.2295 | 0.0798 | 0.0981 | 0.1206 | 0.2654
soft label 2 (𝒴2′) | 0.0791 | 0.1529 | 0.0363 | 0.0551 | 0.1327
soft label 3 (𝒴3′) | 0.1378 | 0.0520 | 0.1694 | 0.1017 | 0.2151
soft label 4 (𝒴4′) | 0.0077 | -0.0002 | 0.0005 | 0.0668 | 0.0106
soft label 5 (𝒴5′) | 0.0649 | -0.0107 | -0.0264 | 0.0351 | 0.1057
N.B. 𝒴1 = 𝒴2 ∨ 𝒴3 ∨ 𝒴4. 𝒴5 = (¬𝒴1) ∧ (¬𝒴2) ∧ (¬𝒴3) ∧ (¬𝒴4).
The fifth label is mutually exclusive with the first four labels (i.e., 𝒴5 = (¬𝒴1) ∧ (¬𝒴2) ∧ (¬𝒴3) ∧ (¬𝒴4)). Specifically, for a sample (𝒙𝑖, 𝒚𝑖) (1 ≤ 𝑖 ≤ 1000), if 𝑦𝑖1 = 0, 𝑦𝑖2 = 0, 𝑦𝑖3 = 0 and 𝑦𝑖4 = 0, then 𝑦𝑖5 = 1; otherwise, 𝑦𝑖5 = 0.
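As a concrete illustration, the label logic of the three synthetic datasets can be sketched as follows (a minimal sketch, not the authors' original generation code; the random drawing of the base labels is an assumption here):

    import numpy as np

    rng = np.random.default_rng(seed=0)
    n = 1000  # number of samples in each synthetic dataset

    def with_exclusive_fifth(y):
        # y5 = 1 iff y1 = y2 = y3 = y4 = 0 (mutually exclusive fifth label)
        y5 = (y.sum(axis=1) == 0).astype(int)
        return np.column_stack([y, y5])

    # Independence dataset: y1..y4 drawn independently
    y_independence = with_exclusive_fifth(rng.integers(0, 2, size=(n, 4)))

    # Equality dataset: y1 = y2 and y3 = y4
    a, b = rng.integers(0, 2, size=n), rng.integers(0, 2, size=n)
    y_equality = with_exclusive_fifth(np.column_stack([a, a, b, b]))

    # Union dataset: y1 = y2 OR y3 OR y4
    y234 = rng.integers(0, 2, size=(n, 3))
    y1 = (y234.max(axis=1) > 0).astype(int)
    y_union = with_exclusive_fifth(np.column_stack([y1, y234]))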
The learned influence weights 𝑺 for each of the three synthetic datasets are shown in Tables IV-VI, respectively. The following findings can be obtained from the tables:
(1) In Tables IV-VI, since the fifth label is mutually exclusive with the first four labels (i.e., 𝒴5 = (¬𝒴1) ∧ (¬𝒴2) ∧ (¬𝒴3) ∧ (¬𝒴4)), reconstruction cannot be achieved with the first four labels. From the results of influence weights in Tables IV-VI, we can find that the influence of 𝒴5 on the soft label 𝒴5′ is most significant, whereas the influence of 𝒴1 ∼ 𝒴4 on 𝒴5′ is relatively small.
(2) In Table IV, the first four labels 𝒴1, 𝒴2, 𝒴3 and 𝒴4 are independent of each other, and 𝒴5 = (¬𝒴1) ∧ (¬𝒴2) ∧ (¬𝒴3) ∧ (¬𝒴4). The results of influence weights in Table IV show that the effects of 𝒴1, 𝒴2, 𝒴3 and 𝒴4 on the soft labels 𝒴1′, 𝒴2′, 𝒴3′ and 𝒴4′, respectively, are significant. In addition, the contribution of 𝒴5 to 𝒴1′, 𝒴2′, 𝒴3′ and 𝒴4′ is also obvious.
(3) In Table V, 𝒴1 = 𝒴2, 𝒴3 = 𝒴4, and 𝒴5 = (¬𝒴1) ∧ (¬𝒴2) ∧ (¬𝒴3) ∧ (¬𝒴4). The results of influence weights in Table V reveal that 𝒴5 has a greater influence on the soft labels 𝒴1′, 𝒴2′, 𝒴3′ and 𝒴4′. Meanwhile, it is obvious that 𝒴1 and 𝒴2 have the same influence on 𝒴1′ (𝒴2′), and 𝒴3 and 𝒴4 have the same influence on 𝒴3′ (𝒴4′).
(4) In Table VI, 𝒴1 = 𝒴2 ∨ 𝒴3 ∨ 𝒴4 and 𝒴5 = (¬𝒴1) ∧ (¬𝒴2) ∧ (¬𝒴3) ∧ (¬𝒴4). From the results of influence weights in Table VI, we can see that it is 𝒴1 and 𝒴5 that affect the soft label 𝒴1′ significantly, and that the effects of 𝒴2 ∼ 𝒴4 on the soft label 𝒴1′ are similar.
The above findings are consistent with the logical relationships we designed for the labels, which validates that the soft label learning in R-MLTSK-FS is effective.
4) Effectiveness Analysis of Correlation Enhancement Learning
To verify the effectiveness of the correlation enhancement learning mechanism in guiding the consequent vector optimization, we conduct a correlation visualization experiment on the Science dataset, where the dimension of the label space is 40. Specifically, the Pearson correlation coefficient is used to measure the correlation between two vectors; the higher its value, the stronger the correlation between the two vectors. Experimental results are shown in Fig. 3, where Fig. 3(a) visualizes the correlation between any two original labels, and Fig. 3(b) visualizes the correlation between any two optimized consequent vectors associated with the corresponding labels. For an effective correlation enhancement learning mechanism, the correlation coefficient between two consequent vectors should be kept close to that between their corresponding labels.
[Fig. 3 comprises two panels, (a) and (b).]
Fig. 3 Visualization of label correlation learning on the Science dataset: (a) visualization of the correlation coefficient between any two original label vectors, and (b) visualization of the correlation coefficient between any two consequent vectors associated with the corresponding labels. The higher the value of the correlation coefficient, the stronger the correlation between two vectors.
It is clear that there is little difference between Fig. 3(a) and Fig. 3(b), indicating that the correlation between the labels can closely guide the learning of the corresponding consequent vectors, and demonstrating the effectiveness of the correlation enhancement learning mechanism.
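The comparison behind Fig. 3 can also be carried out numerically. A minimal sketch follows, assuming the original label matrix Y (n samples × 40 labels) and the optimized consequent vectors stacked as rows of P are available; both names are hypothetical stand-ins:

    import numpy as np

    def pairwise_pearson(vectors):
        # Pearson correlation coefficient between every pair of row vectors
        return np.corrcoef(vectors)

    # label_corr = pairwise_pearson(Y.T)   # 40 x 40 matrix, cf. Fig. 3(a)
    # conseq_corr = pairwise_pearson(P)    # 40 x 40 matrix, cf. Fig. 3(b)
    # gap = np.abs(label_corr - conseq_corr).mean()  # small gap = close guidance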
5) Parameter Analysis
In this section, we analyze the influence of the hyperparameters α, β, γ and K on the classification performance of R-MLTSK-FS in terms of AP. In the analysis, we study the sensitivity of the classification performance to a specific hyperparameter by keeping the other three fixed. For example, we fix the values of β, γ and K, and adjust the value of α to analyze the effect of α. The hyperparameters α, β and γ are varied within {10^-3, 10^-2, 10^-1, 10^0, 10^1, 10^2} and K is varied within {2, 3, 4, 5, 6, 7, 8, 9, 10}. The AP values of R-MLTSK-FS are obtained with the 5-fold cross-validation strategy.
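The search protocol described above amounts to an exhaustive grid search; a minimal sketch is given below, where evaluate_AP is a hypothetical stand-in for one 5-fold cross-validated run of R-MLTSK-FS:

    import itertools

    alpha_grid = beta_grid = gamma_grid = [10.0**e for e in range(-3, 3)]  # {1e-3, ..., 1e2}
    K_grid = range(2, 11)                                                  # {2, ..., 10}

    def grid_search(evaluate_AP):
        # Return the best AP and the hyperparameters that achieve it
        best = None
        for a, b, g, K in itertools.product(alpha_grid, beta_grid, gamma_grid, K_grid):
            ap = evaluate_AP(alpha=a, beta=b, gamma=g, K=K)
            if best is None or ap > best[0]:
                best = (ap, dict(alpha=a, beta=b, gamma=g, K=K))
        return best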
[Fig. 4 comprises four panels: (a) α, (b) β, (c) γ, (d) K.]
Fig. 4 The influence of the hyperparameters (a) α, (b) β, (c) γ, and (d) K on AP of the R-MLTSK-FS.
The experimental results are shown in Fig. 4, from which the following observations are obtained:
(1) When α is in the range of (10^-3, 10^0), the performance of R-MLTSK-FS in terms of AP is stable for most datasets. In addition, AP decreases with increasing α for most datasets when α is within (10^1, 10^2). For the CAL500 dataset, AP increases with α. In general, R-MLTSK-FS is stable and can achieve optimal performance when α is in the range of (10^-2, 10^0).
(2) In general, R-MLTSK-FS is sensitive to β when it is in the range of (10^-3, 10^0). It is stable and can reach an optimal AP value for the 10 datasets when β is within (10^1, 10^2).
(3) For the hyperparameter γ, AP fluctuates in a similar way for all the 10 datasets. In general, the performance of R-MLTSK-FS is stable when γ is within (10^-3, 10^-1). The AP value fluctuates significantly when γ is in the range of (10^-1, 10^2), while exhibiting a decreasing trend with increasing γ. In general, optimal AP can be achieved for all the 10 datasets when γ is in the range of (10^-3, 10^-1).
(4) The AP value for the 10 datasets fluctuates slightly with increasing K. Optimal values of AP can be obtained when K is within (4, 9).
According to the above analysis, it is necessary for R-MLTSK-FS to adopt the grid search strategy and the cross-validation strategy to get the optimal hyperparameters for different datasets.
6) Convergence Analysis
The Birds and Flags datasets are adopted in this part to investigate the convergence of the proposed method. The results are shown in Fig. 5, where the vertical axis represents the absolute value of the difference between the previous and the current value of the objective function (denoted by df), and the horizontal axis represents the number of iterations. It can be seen from Fig. 5 that for the Birds and Flags datasets, R-MLTSK-FS is convergent within 10 iterations.
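The stopping criterion implied by Fig. 5 can be written as follows (a sketch only; objective and update are hypothetical stand-ins for the R-MLTSK-FS loss evaluation and one alternating optimization step):

    def train_until_converged(objective, update, tol=1e-6, max_iter=100):
        # Iterate until df = |J_prev - J_curr| falls below tol
        prev = objective()
        for it in range(1, max_iter + 1):
            update()
            curr = objective()
            if abs(prev - curr) < tol:
                return it  # e.g., around 10 iterations on Birds and Flags
            prev = curr
        return max_iter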
[Fig. 5 comprises two panels, (a) and (b).]
Fig. 5 Convergence analysis for datasets (a) Birds and (b) Flags.
7) Statistical Analysis
We employ the Friedman test and the Bonferroni-Dunn test to evaluate the statistical significance of the difference observed between the proposed R-MLTSK-FS and the eight comparison methods [56]. The details are as follows.
(1) Friedman Test: Based on the experimental results in Table III, we perform the Friedman test on the four metrics, i.e., AP, HL, RL and CV. The null hypothesis is that there is no significant difference between all the methods in terms of the four metrics. For each metric, if the Friedman statistic FF is greater than a critical value (i.e., 2.0698), the null hypothesis for that metric is rejected, which means the difference is statistically significant. The results of the Friedman test, corresponding to the results in Table III, are shown in Table VII. It can be seen from Table VII that the null hypotheses on AP, HL, RL and CV are all rejected. This means that the differences in classification performance of the nine methods are significant in terms of the four metrics. Next, we conduct the post-hoc Bonferroni-Dunn test to evaluate whether the difference in performance between R-MLTSK-FS and the comparison methods is statistically significant.
TABLE VII
FRIEDMAN STATISTICS

Evaluation metric | FF      | Critical value (α = 0.05)
AP                | 28.6045 | 2.0698
HL                | 6.6863  | 2.0698
RL                | 20.3718 | 2.0698
CV                | 26.6201 | 2.0698
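For reference, the FF statistic in Table VII can be computed from the per-dataset ranks of the nine methods. A minimal sketch following the Iman-Davenport form of the Friedman test described in [56] (the rank matrix is assumed to be available) is:

    import numpy as np

    def friedman_FF(ranks):
        # ranks: M x n matrix of per-dataset ranks of n methods on M datasets
        M, n = ranks.shape
        R = ranks.mean(axis=0)  # average rank of each method
        chi2 = 12.0 * M / (n * (n + 1)) * (np.sum(R**2) - n * (n + 1)**2 / 4.0)
        return (M - 1) * chi2 / (M * (n - 1) - chi2)  # F-distributed with (n-1, (n-1)(M-1)) dof

Here n = 9 methods and M = 10 datasets, matching the critical value 2.0698 of F(8, 72) at α = 0.05.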
(2) Bonferroni-Dunn Test: According to the results of the Friedman test, we conduct the post-hoc test based on the results of AP, HL, RL and CV respectively, where R-MLTSK-FS is set as the control method. First, we calculate the average rank of the nine methods for each metric. We also calculate the critical difference (CD), a standard used for evaluating the difference in average rank between the methods, using the equation below:
[Fig. 6 comprises four panels: (a) AP, (b) HL, (c) RL, (d) CV.]
Fig. 6 Comparison of R-MLTSK-FS (as control) with the other methods using the Bonferroni-Dunn test. The letter A refers to R-MLTSK-FS, B to BR, C to MLkNN, D to MLSF, E to ML-TSK FS, F to CC, G to RAkEL, H to CorrLog, and I to HNOML, respectively.
CD = 𝑞𝛼 √(𝑛(𝑛 + 1) / (6𝑀))                (38)
where n and M are the number of methods (n = 9) and the number of datasets (M = 10), respectively. With confidence level α = 0.05 and 𝑞𝛼 = 2.724, we have CD = 3.3362.
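Equation (38) is straightforward to evaluate; a one-line sketch reproducing the value above:

    import math

    def critical_difference(q_alpha, n, M):
        # Bonferroni-Dunn critical difference, Eq. (38)
        return q_alpha * math.sqrt(n * (n + 1) / (6.0 * M))

    print(critical_difference(2.724, n=9, M=10))  # ~3.3362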
Fig. 6 gives the average rank of the nine methods, shown on a horizontal line with ticks marking 1 to 9. The smaller the average rank (i.e., the closer to the right), the better the method. As R-MLTSK-FS is at the rightmost position on the horizontal line for all the four metrics, it is the best among the nine methods. A red line of length one CD is drawn from R-MLTSK-FS to the left. For a method located within the span of the red line, the difference in average rank between the method and R-MLTSK-FS is less than one CD, indicating that the performance difference between them is small. Otherwise, the difference is significant. The following conclusions can be drawn from Fig. 6. Firstly, R-MLTSK-FS is superior to the other methods on the four metrics. Secondly, in general, the performance of ML-TSK FS is the second best. Thirdly, the performance of MLkNN, CC, RAkEL and CorrLog is significantly lower than that of R-MLTSK-FS in terms of the four metrics. Fourthly, the performance of BR, MLSF and HNOML is mediocre.
V. CONCLUSION
The robust multilabel learning method R-MLTSK-FS, with strong fuzzy inference ability, label correlation learning ability and robustness against noisy labels, is proposed in this paper. From the aspect of soft label learning, R-MLTSK-FS constructs the soft label space to reduce the influence of label noise. From the aspect of soft multilabel loss function construction, R-MLTSK-FS utilizes the fuzzy rule-based TSK FS as a transparent model to build the inference relationship between input features and soft labels, and the loss function is then constructed based on the TSK FS and soft labels to enhance model training. From the aspect of correlation enhancement learning, R-MLTSK-FS utilizes the correlation information between soft labels to constrain the learning of model parameters and enhance the learning ability. Experimental analyses on ten benchmark multilabel datasets and three synthetic multilabel datasets show the promising performance of R-MLTSK-FS.
Further research on R-MLTSK-FS will proceed along two directions. First, we will reduce the complexity of soft label learning. Since R-MLTSK-FS considers all the original labels for a soft label, which is computationally intensive, research will be conducted to model with random label subsets for a soft label to reduce the complexity. Second, we will simplify the rule base of the TSK FS. In R-MLTSK-FS, the fuzzy system transforms all the original features into the fuzzy feature space. If the dimension of the original feature space is large, the learning speed of R-MLTSK-FS will be slow. Hence, a screening mechanism will be developed to identify representative subsets of the original features to improve the learning efficiency.
REFERENCES
[1] W. W. Liu, H. B. Wang, X. B. Shen, and I. W. Tsang, "The emerging trends of multi-label learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 11, pp. 7955-7974, 2021.
[2] M. L. Zhang and Z. H. Zhou, "A review on multi-label learning algorithms," IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 8, pp. 1819-1837, 2014.
[3] M. Monfort, B. Pan, K. Ramakrishnan, A. Andonian, B. A. McNamara, A. Lascelles, Q. Fan, D. Gutfreund, R. Feris, and A. Oliva, "Multi-moments in time: learning and interpreting models for multi-action video understanding," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 12, pp. 9434-9445, 2022.
[4] J. Speth and E. M. Hand, "Automated label noise identification for facial attribute recognition," in Proc. the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 25-28.
[5] Q. W. Zhang, Y. Zhong, and M. L. Zhang, "Feature-induced labeling information enrichment for multi-label learning," in Proc. the 32nd AAAI Conference on Artificial Intelligence, 2018, pp. 4446-4453.
[6] S. J. Huang, G. X. Li, W. Y. Huang, and S. Y. Li, "Incremental multi-label learning with active queries," Journal of Computer Science and Technology, vol. 35, no. 2, pp. 234-246, 2020.
[7] Q. Tan, G. Yu, J. Wang, C. Domeniconi, and X. Zhang, "Individuality- and commonality-based multiview multilabel learning," IEEE Transactions on Cybernetics, vol. 51, no. 3, pp. 1716-1727, 2019.
[8] H. Liu, X. Li, and S. Zhang, "Learning instance correlation functions for multilabel classification," IEEE Transactions on Cybernetics, vol. 47, no. 2, pp. 499-510, 2017.
[9] J. Du and C. M. Vong, "Robust online multilabel learning under dynamic changes in data distribution with labels," IEEE Transactions on Cybernetics, vol. 50, no. 1, pp. 374-385, 2019.
[10] Y. Zhu, J. T. Kwok, and Z. H. Zhou, "Multi-label learning with global and local label correlation," IEEE Transactions on Knowledge and Data Engineering, vol. 30, no. 6, pp. 1081-1094, 2018.
[11] M. L. Zhang, Y. K. Li, H. Yang, and X. Y. Liu, "Towards class-imbalance aware multi-label learning," IEEE Transactions on Cybernetics, vol. 52, no. 6, pp. 4459-4471, 2020.
[12] J. Ma, H. Zhang, and T. W. Chow, "Multilabel classification with label-specific features and classifiers: a coarse- and fine-tuned framework," IEEE Transactions on Cybernetics, vol. 51, no. 2, pp. 1028-1042, 2019.
[13] E. Lughofer, "Evolving multi-label fuzzy classifier," Information Sciences, vol. 597, pp. 1-23, 2022.
[14] R. Cerri, M. P. Basgalupp, R. C. Barros, and A. C. P. L. F. Carvalho, "Inducing hierarchical multi-label classification rules with genetic algorithms," Applied Soft Computing Journal, vol. 77, pp. 584-604, 2019.
[15] H. Y. Jiang, J. Xu, R. J. Shi, K. Yang, D. D. Zhang, M. D. Gao, H. Ma, and W. Qian, "A multi-label deep learning model with interpretable Grad-CAM for diabetic retinopathy classification," in Proc. IEEE Engineering in Medicine and Biology Society, 2020, pp. 1560-1563.
[16] J. Wang, Y. J. Lin, L. Z. Li, Y. A. Wang, M. Y. Xu, and J. K. Chen, "Multi-label cause feature selection based on neighbourhood mutual information," International Journal of Machine Learning and Cybernetics, vol. 13, no. 11, pp. 3509-3522, 2022.
[17] Q. Lou, Z. Deng, Z. Xiao, K. S. Choi, and S. Wang, "Multi-label Takagi-Sugeno-Kang fuzzy system," IEEE Transactions on Fuzzy Systems, vol. 30, no. 9, pp. 3410-3425, 2021.
[18] E. Cole, O. M. Aodha, T. Lorieul, P. Perona, D. Morris, and N. Jojic, "Multi-label learning from single positive labels," in Proc. the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 933-942.
[19] M. Hu, H. Han, S. Shan, and X. Chen, "Weakly supervised image classification through noise regularization," in Proc. the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 11517-11525.
[20] M. K. Xie and S. J. Huang, "CCMN: a general framework for learning with class-conditional multi-label noise," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. Doi: 10.1109/TPAMI.2022.3141240
[21] A. K. Aksoy, M. Ravanbakhsh, and B. Demir, "Multi-label noise robust collaborative learning for remote sensing image classification," IEEE Transactions on Neural Networks and Learning Systems, 2022. Doi: 10.1109/TNNLS.2022.3209992
[22] M. K. Xie and S. J. Huang, "Partial multi-label learning with noisy label identification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 7, pp. 3676-3687, 2022.
[23] S. Rajeswar, P. Rodriguez, S. Singhal, D. Vazquez, and A. Courville, "Multi-label iterated learning for image classification with label ambiguity," in Proc. the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 4783-4793.
[24] G. Lyu, S. Feng, and Y. Li, "Noisy label tolerance: a new perspective of partial multi-label learning," Information Sciences, vol. 543, pp. 454-466, 2021.
[25] Z. Deng, P. Xu, L. Xie, K. S. Choi, and S. Wang, "Transductive joint-knowledge-transfer TSK FS for recognition of epileptic EEG signals," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 8, pp. 1481-1494, 2018.
[26] C. Yang, Z. Deng, K. S. Choi, and S. Wang, "Takagi-Sugeno-Kang transfer learning fuzzy logic system for the adaptive recognition of epileptic electroencephalogram signals," IEEE Transactions on Fuzzy Systems, vol. 24, no. 5, pp. 1079-1094, 2016.
[27] T. Zhang, Z. Deng, D. Wu, and S. Wang, "Multiview fuzzy logic system with the cooperation between visible and hidden views," IEEE Transactions on Fuzzy Systems, vol. 27, no. 6, pp. 1162-1173, 2019.
[28] Y. Jiang, Z. Deng, K. S. Choi, F. L. Chung, and S. Wang, "A novel multi-task TSK fuzzy classifier and its enhanced version for labeling-risk-aware multi-task classification," Information Sciences, vol. 357, no. C, pp. 39-60, 2016.
[29] Y. Jiang, Z. Deng, F. L. Chung, G. Wang, P. Qian, K. S. Choi, et al., "Recognition of epileptic EEG signals using a novel multiview TSK fuzzy system," IEEE Transactions on Fuzzy Systems, vol. 25, no. 1, pp. 3-20, 2017.
[30] L. Kong, W. He, W. Yang, Q. Li, and O. Kaynak, "Fuzzy approximation-based finite-time control for a robot with actuator saturation under time-varying constraints of work space," IEEE Transactions on Cybernetics, vol. 51, no. 10, pp. 4873-4884, 2020.
[31] Q. Liao and D. Sun, "Sparse and decoupling control strategies based on Takagi–Sugeno fuzzy models," IEEE Transactions on Cybernetics, vol. 51, no. 2, pp. 947-960, 2019.
[32] S. C. Tong, Y. M. Li, G. Feng, and T. S. Li, "Observer-based adaptive fuzzy backstepping dynamic surface control for a class of MIMO nonlinear systems," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 41, no. 4, pp. 1124-1135, 2011.
[33] P. Xu, Z. Deng, J. Wang, Q. Zhang, K. S. Choi, and S. Wang, "Transfer representation learning with TSK fuzzy system," IEEE Transactions on Fuzzy Systems, vol. 29, no. 3, pp. 649-663, 2019.
[34] Y. Guo and W. Xue, "Probabilistic multi-label classification with sparse feature learning," in Proc. the 23rd International Joint Conference on Artificial Intelligence, 2013, pp. 1373-1379.
[35] Y. Yang, H. T. Shen, Z. Ma, Z. Huang, and X. Zhou, "L2,1-norm regularized discriminative feature selection for unsupervised learning," in Proc. the 22nd International Joint Conference on Artificial Intelligence, 2011, pp. 1589-1594.
[36] F. Nie, H. Huang, X. Cai, and C. H. Q. Ding, "Efficient and robust feature selection via joint ℓ2,1-norms minimization," in Proc. the 23rd International Conference on Neural Information Processing Systems, 2010, pp. 1813-1821.
[37] M. L. Zhang and Z. H. Zhou, "Multilabel neural networks with applications to functional genomics and text categorization," IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 10, pp. 1338-1351, 2006.
[38] N. Li and Z. H. Zhou, "Selective ensemble of classifier chains," in International Workshop on Multiple Classifier Systems, 2013, pp. 146-156.
[39] J. Huang, G. Li, Q. Huang, and X. Wu, "Learning label specific features for multi-label classification," in Proc. IEEE International Conference on Data Mining, 2015, pp. 181-190.
[40] S. J. Huang, W. Gao, and Z. H. Zhou, "Fast multi-instance multi-label learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 11, pp. 2614-2627, 2018.
[41] S. Ji, L. Tang, S. Yu, and J. Ye, "Extracting shared subspace for multi-label classification," in Proc. the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2008, pp. 381-389.
[42] S. Wang, F. L. Chung, H. B. Shen, and D. Hu, "Cascaded centralized TSK fuzzy system: universal approximator and high interpretation," Applied Soft Computing, vol. 5, no. 2, pp. 131-145, 2005.
[43] D. Y. Hu and L. Reichel, "Krylov-subspace methods for the Sylvester equation," Linear Algebra and its Applications, vol. 172, no. 15, pp. 283-313, 1992.
[44] G. Chen, Y. Song, F. Wang, and C. Zhang, "Semi-supervised multi-label learning by solving a Sylvester equation," in Proc. the SIAM International Conference on Data Mining, 2008, pp. 410-419.
[45] D. C. Sorensen and A. C. Antoulas, "The Sylvester equation and approximate balanced reduction," in Proc. Linear Algebra and its Applications, 2002, pp. 351-352.
[46] S. Sun and D. Zong, "LCBM: a multi-view probabilistic model for multi-label classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 8, pp. 2682-2696, 2020.
[47] O. Luaces, J. Díez, J. Barranquero, J. J. Coz, and A. Bahamonde, "Binary relevance efficacy for multilabel classification," Progress in Artificial Intelligence, vol. 1, no. 4, pp. 303-313, 2012.
[48] M. L. Zhang and Z. H. Zhou, "ML-KNN: a lazy learning approach to multi-label learning," Pattern Recognition, vol. 40, no. 7, pp. 2038-2048, 2007.
[49] L. Sun, M. Kudo, and K. Kimura, "Multi-label classification with meta-label-specific features," in Proc. the 23rd International Conference on Pattern Recognition, 2016, pp. 1613-1618.
[50] J. Read, B. Pfahringer, G. Holmes, and E. Frank, "Classifier chains for multi-label classification," Machine Learning, vol. 85, no. 3, pp. 333-359, 2011.
[51] G. Tsoumakas, I. Katakis, and I. Vlahavas, "Random k-labelsets for multilabel classification," IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 7, pp. 1079-1089, 2011.
[52] W. Bian, B. Xie, and D. Tao, "CorrLog: correlated logistic models for joint prediction of multiple labels," in Proc. the 15th International Conference on Artificial Intelligence and Statistics, 2012, pp. 109-117.
[53] C. Zhang, Z. Yu, H. Fu, P. Zhu, L. Chen, and Q. Hu, "Hybrid noise-oriented multilabel learning," IEEE Transactions on Cybernetics, vol. 50, no. 6, pp. 2837-2850, 2019.
[54] J. Łęski, "Improving the generalization ability of neuro-fuzzy systems by ε-insensitive learning," International Journal of Applied Mathematics and Computer Science, vol. 12, no. 3, pp. 437-447, 2002.
[55] S. J. Huang, Y. Yu, and Z. H. Zhou, "Multi-label hypothesis reuse," in Proc. the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2012, pp. 525-533.
[56] J. Demšar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, no. 1, pp. 1-30, 2006.
1tE1T4oBgHgl3EQflQSM/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

39FQT4oBgHgl3EQfHTWW/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa4e8503d3e85b4b50b7f8952a19afa331aa89da598983d15467a8d2711ac495
+size 123336

49FIT4oBgHgl3EQf7St_/content/2301.11397v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17ee208b39c82caf31c6ec0f89f81088089808bab46f2d3796152cd1e77336f6
+size 2509023

4NFKT4oBgHgl3EQf9C5Z/content/tmp_files/2301.11952v1.pdf.txt ADDED
@@ -0,0 +1,859 @@
Optimization of External Stimuli for Populations of Theta Neurons
|
2 |
+
via Mean-Field Feedback Control*
|
3 |
+
Roman Chertovskih1, Nikolay Pogodaev2, Maxim Staritsyn1, Joaquim Da Silva Sewane3
|
4 |
+
and Ant´onio Pedro Aguiar1
|
5 |
+
Abstract— We study a problem of designing “robust” external
|
6 |
+
excitations for control and synchronization of an assembly
|
7 |
+
of homotypic harmonic oscillators representing so-called theta
|
8 |
+
neurons. The model of theta neurons (Theta model) captures,
|
9 |
+
in main, the bursting behavior of spiking cells in the brain of
|
10 |
+
biological beings, enduring periodic oscillations of the electric
|
11 |
+
potential in their membrane.
|
12 |
+
We study the following optimization problem: to design an
|
13 |
+
external stimulus (control), which steers all neurons of a given
|
14 |
+
population to their desired phases (i.e., excites/slows down its
|
15 |
+
spiking activity) with the highest probability.
|
16 |
+
This task is formulated as an optimal mean-field control
|
17 |
+
problem for the local continuity equation in the space of
|
18 |
+
probability measures. To solve this problem numerically, we
|
19 |
+
propose an indirect deterministic descent method based on an
|
20 |
+
exact representation of the increment (infinite-order variation)
|
21 |
+
of the objective functional. We discuss some aspects of practical
|
22 |
+
realization of the proposed method, and provide results of
|
23 |
+
numerical experiments.
|
24 |
+
I. INTRODUCTION
|
25 |
+
The phenomenon of synchronization of oscillatory pro-
|
26 |
+
cesses arise in many physical and natural systems involving
|
27 |
+
(relatively large) collections of structurally similar interacting
|
28 |
+
objects. This type of behavior — typically manifested in
|
29 |
+
practice by a formation of (desired or pathological) time-
|
30 |
+
periodic patterns — is demonstrated, e.g., by semiconductors
|
31 |
+
in laser physics [1], vibrating processes in mechanics [2],
|
32 |
+
biochemical reactions [3], [4], as well as in cardiac and
|
33 |
+
neural activity [5]–[7].
|
34 |
+
In connection with oscillatory processes, there naturally
|
35 |
+
arise problems of designing artificial signals that can drive
|
36 |
+
open systems towards (or away from) synchronous oscil-
|
37 |
+
lations and frequency entrainment; important examples are
|
38 |
+
clinical treatment of neurological and cardiac deceases (such
|
39 |
+
*The authors acknowledge the financial support of the Foundation for
|
40 |
+
Science and Technology (FCT, Portugal) in the framework of the Associated
|
41 |
+
Laboratory “Advanced Production and Intelligent Systems” (AL ARISE,
|
42 |
+
ref. LA/P/0112/2020), R&D Unit SYSTEC (base UIDB/00147/2020 and
|
43 |
+
programmatic UIDP/00147/2020 funds), and projects SNAP (ref. NORTE-
|
44 |
+
01-0145-FEDER-000085) and MLDLCOV (ref. DSAIPA/CS/0086/2020).
|
45 |
+
1Roman Chertovskih, Maxim Staritsyn and Ant´onio Pedro Aguiar are
|
46 |
+
with Research Center for Systems and Technologies (SYSTEC), Faculty
|
47 |
+
of Engineering, University of Porto, Rua Dr. Roberto Frias, s/n 4200-465,
|
48 |
+
Porto, Portugal [email protected], [email protected],
|
49 | |
50 |
+
2Nikolay Pogodaev is with Department of Mathematics “Tullio Levi-
|
51 |
+
Civita”, School of Sciences, University of Padova, Via Trieste, 63 - 35121
|
52 |
+
Padova, Italy [email protected]
|
53 |
+
3
|
54 |
+
Joaquim
|
55 |
+
Da
|
56 |
+
Silva
|
57 |
+
Sewane
|
58 |
+
is
|
59 |
+
with
|
60 |
+
Department
|
61 |
+
of
|
62 |
+
Mathe-
|
63 |
+
matics
|
64 |
+
and
|
65 |
+
Informatics,
|
66 |
+
Faculty
|
67 |
+
of
|
68 |
+
Sciences,
|
69 |
+
University
|
70 |
+
of
|
71 |
+
Ed-
|
72 |
+
uardo Mondlane, Av. Julius Nyerere, nr. 3453 Maputo, Mozambique
|
73 | |
74 |
+
as Parkinson’s disease, epilepsy, and cardiac arrhythmias),
|
75 |
+
control of circadian rhythms [8], organization/destruction of
|
76 |
+
patterns in complex dynamic structures [9], and in neuro-
|
77 |
+
computing [10], [11].
|
78 |
+
Starting from the pioneer works of Y. Kuramoto and
|
79 |
+
H. Araki, the mathematical imperative in the study of
|
80 |
+
oscillatory ensembles is the mean field dynamics, which
|
81 |
+
describes the behavior of an “averaged” representative of
|
82 |
+
the population instead of tracking all individuals in person.
|
83 |
+
This approach leads to a treatable (and elegant) mathematical
|
84 |
+
representation of the ensemble dynamics even in the case
|
85 |
+
when the cardinality of the population becomes very large,
|
86 |
+
and is naturally translated to the control-theoretical context:
|
87 |
+
in the most of applications, it is technically difficult (or even
|
88 |
+
impossible) to “isolate” the control influence for a particular
|
89 |
+
oscillatory unit; on the contrary, admissible signals usually
|
90 |
+
affect a significant part of the system, or the system as a
|
91 |
+
whole. The topic of control engineering which is focused on
|
92 |
+
designing “simultaneous” control signals for multi-agent sys-
|
93 |
+
tems is familiar under the name ensemble control. “Adaptive”
|
94 |
+
(distributed in the phase space) signals are called mean-field
|
95 |
+
type controls.
|
96 |
+
In this paper, we address a particular optimal control
|
97 |
+
problem of the type [12] based on a classical oscillatory
|
98 |
+
model [13] from the mathematical neuroscience. Namely, we
|
99 |
+
study the problem of in-phase synchronization of the mean
|
100 |
+
field of so-called theta neurons: to steer a given probability
|
101 |
+
distribution of harmonic phases towards a target one by a
|
102 |
+
simultaneous (ensemble) or individual (mean-field) control.
|
103 |
+
To solve our problem numerically, we propose a determin-
|
104 |
+
istic iterative method of sequential “control improvement”,
|
105 |
+
entailed by an an exact formula for the variation of the
|
106 |
+
objective functional. The proposed approach is based on the
|
107 |
+
optimal mean-field control theory (the dynamic optimization
|
108 |
+
in the space of probability measures) and is quite flexible:
|
109 |
+
it admits one to treat arbitrary statistical ensembles, and can
|
110 |
+
be applied to any problem of a “state-linear” structure, far
|
111 |
+
beyond the considered specific model.
|
112 |
+
II. PROBLEM STATEMENT.
|
113 |
+
MEAN-FIELD CONTROL SETUP
|
114 |
+
Consider a population of homotypic oscillatory systems
|
115 |
+
represented by the canonical Ermentrout-Kopell model [13],
|
116 |
+
[14]. This model describes the time-evolution of excitable
|
117 |
+
neurons (customary named “theta neurons”) which endure
|
118 |
+
periodic oscillations of their membrane potential. Each theta
|
119 |
+
arXiv:2301.11952v1 [math.OC] 27 Jan 2023
|
120 |
+
|
121 |
+
neuron in the population is characterized by its phase
|
122 |
+
θ(t) ∈ S1 .= R/2πZ
|
123 |
+
which satisfies the ODEs
|
124 |
+
d
|
125 |
+
dtθ .= ˙θ = vu(θ, η) .= (1 − cos θ) + (1 + cos θ) (u + η) .
|
126 |
+
Here, η is the baseline current in the neuron membrane,
|
127 |
+
which varies in a given interval I .= [a, b], and u is an external
|
128 |
+
stimulus.
|
129 |
+
Theta model provides a simple mathematical description
|
130 |
+
of the so-called spiking behavior. By convention, we say that
|
131 |
+
a neuron produces a spike at time t if θ(t) = π. If η > 0 (and
|
132 |
+
u ≡ 0) the neuron spikes periodically with the frequency
|
133 |
+
2√η. If η < 0, the neuron is excitable and can produce
|
134 |
+
spikes after a sufficiently intensive stimulus u.
|
135 |
+
In what follows, η is viewed as a parameter of the model fluctuation. In the simplest case, this parameter runs through a finite set {η_k, k = 1,…,N}, which corresponds to a finite ensemble {θ_k, k = 1,…,N} of theta neurons,

    θ̇_k = v_u(θ_k, η_k),   k = 1,…,N.   (1)

In a more general setup, to be discussed below, η can be drawn from a given probability distribution.

Remark that (1) falls into the well-recognized Watanabe-Strogatz class of phase oscillators driven by complex functions t ↦ H_k(t) ∈ C,

    θ̇_k = ω_k + Im( H_k(t) e^{−iθ_k} ),   k = 1,…,N,

where ω_k is the natural (intrinsic) frequency of the kth oscillator in the population, and H_k is the associated input, modulated by a sinusoidal function (sometimes this model is called “sinusoidally coupled”); in general, both the natural frequencies and the inputs can be affected by an external driving parameter; furthermore, H_k can model interactions between oscillators inside the population. Note that model (1) fits the general statement with

    ω_k = ω_k(u) := u + η_k + 1,   H_k = H_k(u) := i(u + η_k − 1),

which does not involve interaction terms (formally, equations (1) are coupled only by the common term u). In the context of applications, this non-interacting model can be viewed as a “first-order approximation” of a sufficiently sparsely connected neural network (such as real biological ones), especially if the neurons’ activity is studied over relatively short time periods. The case of interacting neurons will be briefly discussed in section V.
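As a quick numerical sanity check of this identity (mine, not the paper’s), one can compare the two right-hand sides at random points:

    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(5):
        theta, u, eta = rng.uniform(0, 2 * np.pi), rng.normal(), rng.uniform()
        v = (1 - np.cos(theta)) + (1 + np.cos(theta)) * (u + eta)
        omega = u + eta + 1                      # ω(u) = u + η + 1
        H = 1j * (u + eta - 1)                   # H(u) = i(u + η - 1)
        assert np.isclose(v, omega + (H * np.exp(-1j * theta)).imag)
    print("theta model matches the Watanabe-Strogatz form")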
A. Mean-Field Limit

We are interested in the behavior of system (1) for the case when N → ∞. Introduce extra, “fictitious” states t ↦ η_k(t) as solutions to

    η̇_k = 0,   (2)

accompanying (1), and consider the empirical probability measure

    µ^N_t = (1/N) Σ_{k=1}^N δ_{(θ_k(t), η_k(t))},   (3)

(δ_x stands for the Dirac probability measure concentrated at a point x).

The measure-valued function t ↦ µ^N_t designates the statistical behavior of the ensemble {(θ_k, η_k), k = 1,…,N}: for any Borel set A ⊂ S¹ × I, the value µ^N_t(A) gives the fraction of neurons whose state (θ_k, η_k) belongs to A.
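In code, µ^N_t(A) is just a normalized count over the particle ensemble; a small sketch (with a hypothetical choice of the set A):

    import numpy as np

    def empirical_measure(thetas, etas, indicator_A):
        # µ^N(A): fraction of oscillators whose state (θ_k, η_k) lies in A
        return indicator_A(np.mod(thetas, 2 * np.pi), etas).mean()

    rng = np.random.default_rng(1)
    thetas = rng.uniform(0, 2 * np.pi, 1000)
    etas = rng.uniform(0.0, 1.0, 1000)
    near_spike = lambda th, et: np.abs(th - np.pi) < 0.5   # hypothetical set A
    print("µ^N(A) =", empirical_measure(thetas, etas, near_spike))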
It is well-known that the curve t ↦ µ^N_t satisfies, in the weak sense, the local continuity equation [15]

    ∂_t µ_t(θ, η) + ∂_θ ( v_u(θ, η) µ_t(θ, η) ) = 0.   (4)

Recall that the map t ↦ µ_t is said to be a weak (distributional) solution of (4) iff

    0 = ∫_0^T dt ∫_{S¹×I} ( ∂_t ϕ + ∇_x ϕ · v_u ) dµ_t   ∀ ϕ ∈ C¹_c((0, T) × S¹ × I).

(C¹_c((0, T) × S¹ × I) denotes the space of continuously differentiable functions (0, T) × S¹ × I ↦ R with compact support in (0, T) × S¹ × I.) Under standard regularity assumptions, the weak solution exists, is unique, and is absolutely continuous as a function [0, T] ↦ P(S¹ × I); here P(S¹ × I) denotes the space of probability measures on S¹ × I endowed with any Wasserstein distance W_p, p ≥ 1 [15].
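The weak formulation can be probed numerically by transporting samples along the characteristics of (4) and checking that d/dt ∫ ϕ dµ_t = ∫ ∂_θ ϕ · v_u dµ_t for a smooth observable; the following particle sketch (my own discretization choices) illustrates this:

    import numpy as np

    rng = np.random.default_rng(0)
    N, dt, u = 20000, 1e-3, 0.3
    theta = rng.uniform(0, 2 * np.pi, N)      # samples of µ_0 in θ
    eta = rng.uniform(0.0, 1.0, N)            # samples of µ_0 in η

    v = lambda th, et: (1 - np.cos(th)) + (1 + np.cos(th)) * (u + et)
    phi = lambda th: np.cos(th)               # test function ϕ(θ)

    m0 = phi(theta).mean()                    # ∫ ϕ dµ_t before the step
    theta = theta + dt * v(theta, eta)        # one Euler step along characteristics
    m1 = phi(theta).mean()
    rhs = (-np.sin(theta) * v(theta, eta)).mean()
    print("d/dt ∫ϕ dµ ≈", (m1 - m0) / dt, "  ∫∂θϕ·v dµ ≈", rhs)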
Equation (4) provides the macroscopic description of the population of microscopic dynamical units (1), called the mean field. This representation remains valid in the limit N → ∞, when µ^N converges to some µ ∈ P(S¹ × I) in C([0, T]; P(S¹ × I)). Moreover, (4) makes sense if phases θ and currents η are drawn from an abstract probability distribution on the cylinder S¹ × I,

    µ_0 = ϑ ∈ P(S¹ × I).   (5)

Indeed, one can immerse the system of ODEs (1) in a deterministic (S¹ × I)-valued random process

    (t, ω) ↦ Θ_t(ω),

defined on a probability space (Ω, F, P) of an arbitrary nature (Ω is an abstract set, F is a sigma-algebra on Ω, and P is a probability measure F ↦ [0, 1]), satisfying the ODE

    d/dt Θ_t(ω) = ( v_u(Θ_t(ω)), 0 ).

It is a simple technical exercise to check that the function

    t ↦ µ_t := (Θ_t)♯P

solves the Cauchy problem (4), (5) with ϑ := (Θ_0)♯P, where the symbol ♯ denotes the operation of pushforward of a measure by a (Borel) function Ω ↦ S¹ × I. Note that empirical ensembles (3) fit this setup if Ω = {1, …, N} and P is the normalized counting measure.
Finally, observe that the variable η enters PDE (4) as a parameter rather than a state variable. This means that (4) can be regarded as an η-parametric family of continuity equations on the 1D space S¹ rather than a PDE on the 2D space S¹ × I. This observation is essential for the numerical treatment of problem (4) (see section IV).
B. Control Signals

Now, we fix the class of admissible control signals u. Consider two options:

• u = u(t), i.e., the control affects all neurons of the ensemble in the same way. We call this type of external influence the ensemble (simultaneous, common) control. Such a control is statistical in spirit, as it influences the whole ensemble “on average”. As a natural space of such controls we choose

    u ∈ U := L²([0, T]; R).   (6)

• u = w_t(θ, η), i.e., the stimulus is adapted to the neuron’s individual characteristics and is phase-dependent. The use of such a distributed, mean-field type control

    w ∈ W := L²([0, T]; C(S¹ × I; R)),   (7)

presupposes some technical means to vary control signals over the spatial domain.

It is natural to expect that the second type of control performs better. However, let us stress again that the practical implementation of “personalized” control signals is hardly realistic as soon as the number of driven objects is large enough (for experiments that pretend to mimic biological neural tissue, this number should be astronomic!). In reality, a meaningful class of control signals is U, or something “in the middle” between the two options mentioned.
C. Performance Criterion

We study a generalization of the optimization problem [12]: to steer the neural population to a target phase distribution at a prescribed (finite) time moment T > 0 with care about the total energy of the control action. Assuming that the target distribution is given by a (bounded continuous) function η ↦ ˇθ(η), our optimization problem reads:

    (P1)   min I[u] = ∫ F(θ, ˇθ(η)) dµ_T(θ, η) + (α/2) ∫_0^T u²(t) dt,   α > 0,
           subject to (4), (6),

where

    F(θ, ω) = (1/2)(sin θ − sin ω)² + (1/2)(cos θ − cos ω)² = 1 − cos(θ − ω),

and ∫ := ∫_{S¹×I}.
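For later experiments it is convenient to evaluate I[u] from a particle approximation of µ_T and a time grid for u; a sketch (the sample-average quadrature and rectangle rule are my assumptions):

    import numpy as np

    def cost_I(theta_T, eta_T, theta_target, u_grid, dt, alpha=1.0):
        # ∫ F(θ, ˇθ(η)) dµ_T with F(θ, ω) = 1 - cos(θ - ω), particles ≈ µ_T
        terminal = np.mean(1.0 - np.cos(theta_T - theta_target(eta_T)))
        # (α/2) ∫_0^T u²(t) dt by the rectangle rule
        energy = 0.5 * alpha * dt * np.sum(u_grid ** 2)
        return terminal + energy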
In this problem, the role of the state variable is played by the probability measure µ_t.

Note that the functional I and the dynamics (4) are linear in µ (despite the non-linearity of the map (θ, η) ↦ v_u(θ, η)). At the same time, (4) contains a product of µ and u, which means that (P1) is, in fact, a bi-linear (non-convex) problem. Standard arguments from the theory of transport equations in the Wasserstein space [15], together with the classical Weierstrass theorem, ensure that problem (P1) is well posed, i.e., it does have a minimizer within the admissible class U of control signals (refer, e.g., to [16]).
An alternative version of problem (P1) is formulated in terms of the mean-field type control:

    (P2)   min J[w] = ∫ F(θ, ˇθ(η)) dµ_T + (α/2) ∫_0^T dt ∫ w_t² dµ_t,
           subject to (4), (7).

In what follows, we shall focus on the “more realistic” statement (P1), though all the forthcoming results can be extended, at least formally, to problem (P2).
III. COST INCREMENT FORMULA. NUMERICAL ALGORITHM

As remarked above, problem (P1) is linear in the state measure. This fact allows us to represent the variation of the cost functional I with respect to any variation of the control u exactly (without any residual terms). The announced representation follows from the duality with the co-state from Pontryagin’s maximum principle [17], and generalizes the classical exact increment formula for conventional state-linear optimal control problems [18].
Consider two arbitrary controls ū, u ∈ U, u ≠ ū, and let

    t ↦ µ̄_t := µ_t[ū]   and   t ↦ µ_t := µ_t[u]

be the respective weak solutions to the continuity equation (4). Let also

    p̄ := p[ū] : (t, θ, η) ↦ p̄_t(θ, η)

be a classical solution to the following (non-conservative transport) equation:

    ∂_t p_t(θ, η) + ∂_θ p_t(θ, η) · v_{ū(t)}(θ, η) = 0.   (8)

PDE (8) is known to be dual to the (conservative) transport equation (4); the duality is formally established by the observation that the map

    t ↦ ∫ p̄_t dµ̄_t

is constant on [0, T]. One can check that, under the common regularity of the problem data, this map is an absolutely continuous function [0, T] ↦ R (refer to [15] for further details).
As soon as p̄ is chosen as a solution to (8) with the terminal condition

    p_T(θ, η) = −F(θ, ˇθ(η)),   (9)

the discussed duality makes it possible to represent the increment (variation)

    ∆I := I[u] − I[ū]

of the functional I as follows:

    −∆I = ∫_0^T [ H(µ_t, ∂_θ p̄_t, u(t)) − H(µ_t, ∂_θ p̄_t, ū(t)) ] dt,   (10)

where

    H(µ, ζ, u) := u ∫ ζ(θ, η)(1 + cos θ) dµ(θ, η) − (α/2) u².

The derivation of this formula is omitted, since it is completely similar to [18].
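Since H is a concave quadratic in u, its maximizer is available in closed form; a particle-based sketch (sample averaging over µ is my assumption) that also returns the maximizer used below:

    import numpy as np

    def hamiltonian_and_maximizer(theta, eta, zeta, alpha=1.0):
        # H(µ, ζ, u) = u ∫ ζ(θ,η)(1 + cos θ) dµ - (α/2) u² over a particle cloud;
        # the unique maximizer in u is coupling / α
        coupling = np.mean(zeta(theta, eta) * (1.0 + np.cos(theta)))
        H = lambda u: u * coupling - 0.5 * alpha * u ** 2
        return H, coupling / alpha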
Based on representation (10), we can treat problem (P1) in the following iterative way: given a reference control ū, one looks for a new “target” signal u that “improves” the functional value, i.e., such that ∆I < 0. The best choice of the target control is provided by the maximization of the integrand of (10) in the variable u:

    H(µ_t, ∂_θ p̄_t, u) → max,   u ∈ R.

The unique solution of the latter problem is obtained in analytic form as

    u_t[µ] = (1/α) ∫ ∂_θ p̄_t(θ, η)(1 + cos θ) dµ(θ, η).   (11)

Here, it is worthwhile to mention that the reference dual state p̄ enters formula (11) only in the form of the partial derivative

    ξ̄_t(θ, η) := ∂_θ p̄_t(θ, η).

Differentiating (8) and (9) in θ, one can easily check that ξ̄ solves the η-parametric family of the same continuity equations (4) backward in time, starting from the terminal condition

    ξ_T = −∂_θ F(θ, ˇθ(η)) = sin(ˇθ(η) − θ).   (12)

Now, (11) can be reformulated in terms of the variable ξ̄:

    u_t[µ] = (1/α) ∫ ξ̄_t(θ, η)(1 + cos θ) dµ(θ, η).   (13)
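Alternatively, since p̄ is constant along the characteristics of θ̇ = v_ū(θ, η), one has p̄_t(θ, η) = p_T(Φ_{t,T}(θ, η), η), so ξ̄_t = ∂_θ p̄_t can be approximated by central differences through the flow; a rough sketch (the Euler integrator and the finite-difference step are my own choices, not the authors’ scheme):

    import numpy as np

    def flow_to_T(theta, eta, u_of_t, t, T, dt=1e-3):
        # integrate θ' = v_{ū(s)}(θ, η) from time t to T
        s, th = t, np.asarray(theta, dtype=float).copy()
        while s < T - 1e-12:
            h = min(dt, T - s)
            th = th + h * ((1 - np.cos(th)) + (1 + np.cos(th)) * (u_of_t(s) + eta))
            s += h
        return th

    def xi_bar(theta, eta, u_of_t, t, T, theta_target, eps=1e-4):
        # ξ̄_t = ∂_θ p̄_t, with p̄_t(θ,η) = -F(Φ_{t,T}(θ,η), ˇθ(η)) = -(1 - cos(Φ - ˇθ(η)))
        pbar = lambda th: -(1.0 - np.cos(flow_to_T(th, eta, u_of_t, t, T) - theta_target(eta)))
        return (pbar(theta + eps) - pbar(theta - eps)) / (2 * eps)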
Note that the map (t, µ) ↦ u_t[µ] can be used as a feedback control

    [0, T] × P(S¹ × I) ↦ R

of system (4) in the space of probability measures. Injecting this control into (4), we obtain a nonlocal continuity equation

    ∂_t µ_t + ∂_θ ( v_{u[µ_t]} µ_t ) = 0,   µ_0 = ϑ,   (14)

which is well posed (thanks to the fact that the function (θ, η) ↦ v_u(θ, η) is smooth and bounded). Solving the last equation numerically, and substituting its solution t ↦ µ̂_t := µ̂_t[ū] into (11), we construct the “improved” signal:

    u(t) = u_t[µ̂_t].

This idea gives rise to the following Algorithm 1.

Algorithm 1: Numerical algorithm for optimal ensemble control
    Data: ū ∈ U (initial guess), ε > 0 (tolerance)
    Result: {u_k}_{k≥0} ⊂ U such that I[u_{k+1}] < I[u_k]
    k ← 0;
    u_0 ← ū;
    repeat
        µ_k ← µ̂[u_k];
        u_{k+1} ← u[µ_k];
        k ← k + 1;
    until I[u_{k−1}] − I[u_k] < ε;
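A high-level Python skeleton of Algorithm 1 (a sketch only: cost_I, solve_backward_xi and solve_forward_feedback are hypothetical names standing for the cost evaluation and the two PDE solves described above):

    def algorithm_1(u_bar, cost_I, solve_backward_xi, solve_forward_feedback,
                    eps=1e-2, max_iter=50):
        # each sweep: solve (4),(12) backward for ξ̄ under the reference control,
        # then integrate (14) forward with the feedback (13), recording u(t) = u_t[µ̂_t]
        u, I_prev = u_bar, cost_I(u_bar)
        for _ in range(max_iter):
            xi = solve_backward_xi(u)
            u_next = solve_forward_feedback(xi)
            I_next = cost_I(u_next)
            if I_prev - I_next < eps:          # stop once the decrease is below ε
                return u_next
            u, I_prev = u_next, I_next
        return u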
By construction, Algorithm 1 generates a sequence {u_k}_{k≥0} ⊂ U of controls with the property

    I_{k+1} := I[u_{k+1}] < I[u_k] := I_k.

Since the sequence of numbers (I_k)_{k≥0} is bounded from below by min (P1), it converges.
Finally, remark that the same line of argument can be formally applied to problem (P2). The respective mean-field type control takes the form

    w_t(θ, η) = (1/α) ξ̄_t(θ, η)(1 + cos θ).

This construction gives rise to an iterative method similar to Algorithm 1.
IV. NUMERICAL RESULTS

Let us discuss several aspects of the numerical implementation of Algorithm 1.

First, note that the method proposed here does not involve any intrinsic parametric optimization: most indirect algorithms for optimal control require the dynamic adjustment of some internal computational parameters; such are the standard methods based on Pontryagin’s maximum principle [19], [20], which involve internal procedures such as a line search for the specification of the “depth” of the needle-shaped (or weak) control variations.
Each iteration of Algorithm 1 requires the numerical solution of two problems: the linear problem (4), (12), integrated backward in time, and the nonlocal continuity equation (14), solved numerically forward in time. Since both (4) and (14) contain no terms involving partial derivatives in η, one can think of η as a parameter and solve the corresponding parametric families of one-dimensional continuity equations.
Consider problem (P1) with the initial distribution of neurons µ_0 given by the density function

    ρ_0(θ, η) = ( 2 + 3 cos(2θ) − 2 sin(2θ) ) η,

and with the constant target function ˇθ(η) ≡ π. In other words, our goal is to bring the neurons’ states as close as possible to the segment {π} × I by the time moment T with the aid of sufficiently small controls.

Parameters for the computation:

    T = 6,   I = [0.0, 1.0],   α = 1;

we used 512 Fourier harmonics in θ and grid steps

    ∆η = 0.002,   ∆t = 0.002.

Equations (4) and (14) are integrated by the standard spectral method [21] using the trigonometric Fourier expansion in θ for each η from the grid. Parameters of the algorithm: ū ≡ 0, ε = 0.01.
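For one η-slice, a single explicit time step of such a spectral scheme might look as follows (a minimal sketch with my own choices: FFT differentiation and explicit Euler; the actual computation would rather use a higher-order integrator and dealiasing):

    import numpy as np

    def continuity_step(rho, eta, u, dt):
        # one Euler step of ∂_t ρ = -∂_θ(v_u ρ) on a uniform θ-grid;
        # ∂_θ is computed spectrally via the FFT
        n = rho.size
        theta = 2 * np.pi * np.arange(n) / n
        k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
        v = (1 - np.cos(theta)) + (1 + np.cos(theta)) * (u + eta)
        d_flux = np.real(np.fft.ifft(1j * k * np.fft.fft(v * rho)))
        return rho - dt * d_flux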
[Figure: plot of the control input u(t) for t ∈ [0, 6]; values lie roughly in [−3, 1].]
Fig. 1. Control input computed by Algorithm 1.
V. CONCLUSION

The goal of this paper is to present an approach, based on the mean-field control paradigm, to problems of optimization and synchronization of oscillatory processes (here, the addressed Theta model is among the simplest but prominent examples). The proposed technique can be applied to any state-linear optimal control problem involving (finite or infinite) non-interacting statistical ensembles of an arbitrary nature. In particular, Algorithm 1 can be easily adapted to other neural models, such as the SNIPER and sinusoidal models [12].
We plan to continue this study towards a natural generalization of model (1) admitting interaction between the theta neurons,

    θ̇_k = v_u(θ_k, η_k) + (1/N) Σ_{j=1}^N K(θ_k, θ_j),   k = 1,…,N,

where K is a certain interaction potential formalizing the spatial connectivity of neurons in the tissue. This will result in control problems of the sort (P1), (P2) stated over the nonlocal continuity equation

    ∂_t µ_t + ∂_θ ( [v_u + K ⋆ µ_t] µ_t ) = 0

involving the term

    (K ⋆ µ)(θ) := ∫ K(θ, ζ) dµ(ζ).
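At the particle level, the nonlocal term is a plain average over the ensemble; a sketch with a hypothetical Kuramoto-type kernel:

    import numpy as np

    def nonlocal_drift(theta, K):
        # (K ⋆ µ^N)(θ_k) = (1/N) Σ_j K(θ_k, θ_j) for the empirical measure µ^N
        return K(theta[:, None], theta[None, :]).mean(axis=1)

    K = lambda th, ze: np.sin(ze - th)                 # hypothetical kernel
    theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    print(nonlocal_drift(theta, K))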
[Figure: three density snapshots on the (θ, η) cylinder, θ ∈ [0, 2π], η ∈ [0, 1].]
Fig. 2. Trajectory µ_t(θ, η) of (4) at time moments t = 0, 3 and 6 (from top to bottom), computed for the optimal control input shown in Fig. 1. The standard “rainbow” color table was used to code the isovalues: from black (minimal values), violet, …, to red (maximal values).
Such problems are not state-linear anymore, and the exact formula (10) becomes inapplicable. For this case, a promising alternative could be an approach based on Pontryagin’s maximum principle [16].
REFERENCES

[1] I. Fischer, Y. Liu, and P. Davis, “Synchronization of chaotic semiconductor laser dynamics on subnanosecond time scales and its potential for chaos communication,” Phys. Rev. A, vol. 62, p. 011801, Jun 2000. [Online]. Available: https://link.aps.org/doi/10.1103/PhysRevA.62.011801
[2] I. Blekhman, I. Blekhman, and E. Rivin, Synchronization in Science and Technology, ser. ASME Press Translations. ASME Press, 1988. [Online]. Available: https://books.google.ru/books?id=ao1QAAAAMAAJ
[3] T. Nishikawa, N. Gulbahce, and A. E. Motter, “Spontaneous reaction silencing in metabolic optimization,” PLOS Computational Biology, vol. 4, no. 12, pp. 1–12, 12 2008. [Online]. Available: https://doi.org/10.1371/journal.pcbi.1000236
[4] Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence, ser. Dover Books on Chemistry. Dover Publications, 2003. [Online]. Available: https://books.google.ru/books?id=4ADt7smO5Q8C
[5] P. J. Uhlhaas and W. Singer, “Neural synchrony in brain disorders: Relevance for cognitive dysfunctions and pathophysiology,” Neuron, vol. 52, no. 1, pp. 155–168, 2006. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0896627306007276
[6] S. J. Schiff, K. Jerger, D. H. Duong, T. Chang, M. L. Spano, and W. L. Ditto, “Controlling chaos in the brain,” Nature, vol. 370, no. 6491, pp. 615–620, Aug 1994. [Online]. Available: https://doi.org/10.1038/370615a0
[7] L. Glass, “Cardiac arrhythmias and circle maps - a classical problem,” Chaos: An Interdisciplinary Journal of Nonlinear Science, vol. 1, no. 1, pp. 13–19, 1991. [Online]. Available: https://doi.org/10.1063/1.165810
[8] A. Winfree, The Geometry of Biological Time, ser. Interdisciplinary Applied Mathematics. Springer New York, 2013. [Online]. Available: https://books.google.ru/books?id=7qjTBwAAQBAJ
[9] I. Z. Kiss, C. G. Rusin, H. Kori, and J. L. Hudson, “Engineering complex dynamical structures: Sequential patterns and desynchronization,” Science, vol. 316, no. 5833, pp. 1886–1889, 2007. [Online]. Available: https://www.science.org/doi/abs/10.1126/science.1140858
[10] F. C. Hoppensteadt and E. M. Izhikevich, “Synchronization of laser oscillators, associative memory, and optical neurocomputing,” Phys. Rev. E, vol. 62, pp. 4010–4013, Sep 2000. [Online]. Available: https://link.aps.org/doi/10.1103/PhysRevE.62.4010
[11] F. Hoppensteadt and E. Izhikevich, “Synchronization of MEMS resonators and mechanical neurocomputing,” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 48, no. 2, pp. 133–138, 2001.
[12] J.-S. Li, I. Dasanayake, and J. Ruths, “Control and synchronization of neuron ensembles,” IEEE Transactions on Automatic Control, vol. 58, no. 8, pp. 1919–1930, 2013.
[13] G. B. Ermentrout and N. Kopell, “Parabolic bursting in an excitable system coupled with a slow oscillation,” SIAM Journal on Applied Mathematics, vol. 46, no. 2, pp. 233–253, 1986. [Online]. Available: http://www.jstor.org/stable/2101582
[14] S. H. Strogatz, Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, 2nd ed. Boulder, CO: Westview Press, 2015. [Online]. Available: https://search.library.wisc.edu/catalog/9910223127702121
[15] L. Ambrosio and G. Savaré, “Gradient flows of probability measures,” in Handbook of Differential Equations: Evolutionary Equations, Vol. III, ser. Handb. Differ. Equ. Elsevier/North-Holland, Amsterdam, 2007, pp. 1–136.
[16] N. Pogodaev and M. Staritsyn, “Impulsive control of nonlocal transport equations,” Journal of Differential Equations, vol. 269, no. 4, pp. 3585–3623, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S002203962030108X
[17] N. Pogodaev, “Optimal control of continuity equations,” NoDEA Nonlinear Differential Equations Appl., vol. 23, no. 2, Art. 21, 24 pp., 2016.
[18] M. Staritsyn, N. Pogodaev, R. Chertovskih, and F. L. Pereira, “Feedback maximum principle for ensemble control of local continuity equations: An application to supervised machine learning,” IEEE Control Systems Letters, vol. 6, pp. 1046–1051, 2022.
[19] A. V. Arguchintsev, V. A. Dykhta, and V. A. Srochko, “Optimal control: Nonlocal conditions, computational methods, and the variational principle of maximum,” Russian Mathematics, vol. 53, no. 1, pp. 1–35, Jan 2009. [Online]. Available: https://doi.org/10.3103/S1066369X09010010
[20] P. Drag, K. Styczen, M. Kwiatkowska, and A. Szczurek, “A review on the direct and indirect methods for solving optimal control problems with differential-algebraic constraints,” Studies in Computational Intelligence, vol. 610, pp. 91–105, 07 2016.
[21] J. P. Boyd, Chebyshev and Fourier Spectral Methods, 2nd ed. Mineola, NY: Dover Publications, 2001.
4NFKT4oBgHgl3EQf9C5Z/content/tmp_files/load_file.txt
ADDED
@@ -0,0 +1,439 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf,len=438
|
2 |
+
page_content='Optimization of External Stimuli for Populations of Theta Neurons via Mean-Field Feedback Control* Roman Chertovskih1, Nikolay Pogodaev2, Maxim Staritsyn1, Joaquim Da Silva Sewane3 and Ant´onio Pedro Aguiar1 Abstract— We study a problem of designing “robust” external excitations for control and synchronization of an assembly of homotypic harmonic oscillators representing so-called theta neurons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
3 |
+
page_content=' The model of theta neurons (Theta model) captures, in main, the bursting behavior of spiking cells in the brain of biological beings, enduring periodic oscillations of the electric potential in their membrane.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
4 |
+
page_content=' We study the following optimization problem: to design an external stimulus (control), which steers all neurons of a given population to their desired phases (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
5 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
6 |
+
page_content=', excites/slows down its spiking activity) with the highest probability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
7 |
+
page_content=' This task is formulated as an optimal mean-field control problem for the local continuity equation in the space of probability measures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
8 |
+
page_content=' To solve this problem numerically, we propose an indirect deterministic descent method based on an exact representation of the increment (infinite-order variation) of the objective functional.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
9 |
+
page_content=' We discuss some aspects of practical realization of the proposed method, and provide results of numerical experiments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
10 |
+
page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
11 |
+
page_content=' INTRODUCTION The phenomenon of synchronization of oscillatory pro- cesses arise in many physical and natural systems involving (relatively large) collections of structurally similar interacting objects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
12 |
+
page_content=' This type of behavior — typically manifested in practice by a formation of (desired or pathological) time- periodic patterns — is demonstrated, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
13 |
+
page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
14 |
+
page_content=', by semiconductors in laser physics [1], vibrating processes in mechanics [2], biochemical reactions [3], [4], as well as in cardiac and neural activity [5]–[7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
15 |
+
page_content=' In connection with oscillatory processes, there naturally arise problems of designing artificial signals that can drive open systems towards (or away from) synchronous oscil- lations and frequency entrainment;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
16 |
+
page_content=' important examples are clinical treatment of neurological and cardiac deceases (such The authors acknowledge the financial support of the Foundation for Science and Technology (FCT, Portugal) in the framework of the Associated Laboratory “Advanced Production and Intelligent Systems” (AL ARISE, ref.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
17 |
+
page_content=' LA/P/0112/2020), R&D Unit SYSTEC (base UIDB/00147/2020 and programmatic UIDP/00147/2020 funds), and projects SNAP (ref.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
18 |
+
page_content=' NORTE- 01-0145-FEDER-000085) and MLDLCOV (ref.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
19 |
+
page_content=' DSAIPA/CS/0086/2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
20 |
+
page_content=' 1Roman Chertovskih, Maxim Staritsyn and Ant´onio Pedro Aguiar are with Research Center for Systems and Technologies (SYSTEC), Faculty of Engineering, University of Porto, Rua Dr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
21 |
+
page_content=' Roberto Frias, s/n 4200-465, Porto, Portugal roman@fe.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
22 |
+
page_content='up.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
23 |
+
page_content='pt, staritsyn@fe.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
24 |
+
page_content='up.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
25 |
+
page_content='pt, pedro.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
26 |
+
page_content='aguiar@fe.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
27 |
+
page_content='up.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
28 |
+
page_content='pt 2Nikolay Pogodaev is with Department of Mathematics “Tullio Levi- Civita”, School of Sciences, University of Padova, Via Trieste, 63 - 35121 Padova, Italy nickpogo@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
29 |
+
page_content='com 3 Joaquim Da Silva Sewane is with Department of Mathe- matics and Informatics, Faculty of Sciences, University of Ed- uardo Mondlane, Av.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
30 |
+
page_content=' Julius Nyerere, nr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
31 |
+
page_content=' 3453 Maputo, Mozambique joaquimdasilvasewane@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
32 |
+
page_content='com as Parkinson’s disease, epilepsy, and cardiac arrhythmias), control of circadian rhythms [8], organization/destruction of patterns in complex dynamic structures [9], and in neuro- computing [10], [11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
33 |
+
page_content=' Starting from the pioneer works of Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
34 |
+
page_content=' Kuramoto and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
35 |
+
page_content=' Araki, the mathematical imperative in the study of oscillatory ensembles is the mean field dynamics, which describes the behavior of an “averaged” representative of the population instead of tracking all individuals in person.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
36 |
+
page_content=' This approach leads to a treatable (and elegant) mathematical representation of the ensemble dynamics even in the case when the cardinality of the population becomes very large, and is naturally translated to the control-theoretical context: in the most of applications, it is technically difficult (or even impossible) to “isolate” the control influence for a particular oscillatory unit;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
37 |
+
page_content=' on the contrary, admissible signals usually affect a significant part of the system, or the system as a whole.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
38 |
+
page_content=' The topic of control engineering which is focused on designing “simultaneous” control signals for multi-agent sys- tems is familiar under the name ensemble control.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
39 |
+
page_content=' “Adaptive” (distributed in the phase space) signals are called mean-field type controls.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
40 |
+
page_content=' In this paper, we address a particular optimal control problem of the type [12] based on a classical oscillatory model [13] from the mathematical neuroscience.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
41 |
+
page_content=' Namely, we study the problem of in-phase synchronization of the mean field of so-called theta neurons: to steer a given probability distribution of harmonic phases towards a target one by a simultaneous (ensemble) or individual (mean-field) control.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
42 |
+
page_content=' To solve our problem numerically, we propose a determin- istic iterative method of sequential “control improvement”, entailed by an an exact formula for the variation of the objective functional.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
43 |
+
page_content=' The proposed approach is based on the optimal mean-field control theory (the dynamic optimization in the space of probability measures) and is quite flexible: it admits one to treat arbitrary statistical ensembles, and can be applied to any problem of a “state-linear” structure, far beyond the considered specific model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
44 |
+
page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
45 |
+
page_content=' PROBLEM STATEMENT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
46 |
+
page_content=' MEAN-FIELD CONTROL SETUP Consider a population of homotypic oscillatory systems represented by the canonical Ermentrout-Kopell model [13], [14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
47 |
+
page_content=' This model describes the time-evolution of excitable neurons (customary named “theta neurons”) which endure periodic oscillations of their membrane potential.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
48 |
+
page_content=' Each theta arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
49 |
+
page_content='11952v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
50 |
+
page_content='OC] 27 Jan 2023 neuron in the population is characterized by its phase θ(t) ∈ S1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
51 |
+
page_content='= R/2πZ which satisfies the ODEs d dtθ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
52 |
+
page_content='= ˙θ = vu(θ, η) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
53 |
+
page_content='= (1 − cos θ) + (1 + cos θ) (u + η) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
54 |
+
page_content=' Here, η is the baseline current in the neuron membrane, which varies in a given interval I .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
55 |
+
page_content='= [a, b], and u is an external stimulus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
56 |
+
page_content=' Theta model provides a simple mathematical description of the so-called spiking behavior.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
57 |
+
page_content=' By convention, we say that a neuron produces a spike at time t if θ(t) = π.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
58 |
+
page_content=' If η > 0 (and u ≡ 0) the neuron spikes periodically with the frequency 2√η.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
59 |
+
page_content=' If η < 0, the neuron is excitable and can produce spikes after a sufficiently intensive stimulus u.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
60 |
+
page_content=' In what follows, η is viewed as a parameter of the model fluctuation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
61 |
+
page_content=' In the simplest case, this parameter runs through a finite set {ηk, k = 1, N}, which corresponds to a finite ensemble {θk, k = 1, N} of theta neurons, ˙θk = vu(θk, ηk), k = 1, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
62 |
+
page_content=' (1) In a more general setup to be discussed below, η can be drawn from a given probability distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
63 |
+
page_content=' Remark that (1) falls into the well-recognized Watanabe- Strogatz class of phase oscillators driven by complex func- tions t �→ Hk(t) ∈ C, ˙θk = ωk + Im � Hk(t) e−i θk� , k = 1, N, where ωk is the natural (intrinsic) frequency of the kth oscillator in the population, and Hk is the associated input, modulated by a sinusoidal function (sometimes, this model is called “sinusoidally coupled”);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
64 |
+
page_content=' in general, both the natural frequencies and the inputs can be effected by an external driving parameter, furthermore, Hk can model interactions between oscillators inside the population.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
65 |
+
page_content=' Note that model (1) fits the general statement with ωk = ωk(u) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
66 |
+
page_content='= u + ηk + 1, Hk = Hk(u) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
67 |
+
page_content='= i(u + ηk − 1), which does not involve interaction terms (formally, equations (1) are paired only by the common term u).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
68 |
+
page_content=' In the context of applications, this non-interacting model can be viewed as a “first-order approximation” of a sufficiently sparsely connected neural network (such are real biological ones), especially, if the neurons’ activity is studied over relatively short time periods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
69 |
+
page_content=' The case of interacting neurons will be briefly discussed in section V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
70 |
+
page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
71 |
+
page_content=' Mean-Field Limit We are interested in the behavior of system (1) for the case when N → ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
72 |
+
page_content=' Introduce extra, “fictitious” states t �→ ηk(t) as solutions to ˙ηk = 0, (2) accompanying (1), and consider the empirical probability measure µN t = 1 N N � k=1 δ(θk(t),ηk(t)), (3) (δx stands for the Dirac probability measure concentrated at at a point x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
73 |
+
page_content=' The measure-valued function t �→ µN t designates the statistical behavior of the ensemble {(θk, ηk), k = 1, N}: for any Borel set A ⊂ S1 × I, the value µN t (A) shows the number of neurons whose phase belongs to A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
74 |
+
page_content=' It is well-known that the curve t �→ µN t satisfies, in the weak sense, the local continuity equation [15] ∂tµt(θ, η) + ∂θ � vu(θ, η) µt(θ, η) � = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
75 |
+
page_content=' (4) Recall that the map t �→ µt is said to be a weak (distribu- tional) solution of (4) iff 0 = � T 0 dt � S1×I � ∂tϕ + ∇xϕ · vu � dµt ∀ ϕ ∈ C1 c ((0, T) × S1 × I).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
76 |
+
page_content=' (C1 c ((0, T)×S1×I) denotes the space of continuously differ- entiable functions (0, T)×S1 ×I �→ R with compact support in (0, T) × S1 × I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
77 |
+
page_content=') Under standard regularity assumptions, the weak solution exists, it is unique, and it is absolutely continuous as a function [0, T] �→ P(S1 ×I);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
78 |
+
page_content=' here P(S1 ×I) denotes the space of probability measures on S1×I endowed with any Wasserstein distance Wp, p ≥ 1 [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
79 |
+
page_content=' Equation (4) provides the macroscopic description of the population of microscopic dynamical units (1) called the mean field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
80 |
+
page_content=' This representation remains valid in the limit N → ∞, when µN converges to some µ ∈ P(S1 × I) in C([0, T];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
81 |
+
page_content=' P(S1 × I)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
82 |
+
page_content=' Moreover, (4) makes sense if phases θ and currents η are drawn from an abstract probability distribution on the cylinder S1 × I, µ0 = ϑ ∈ P(S1 × I).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
83 |
+
page_content=' (5) Indeed, one can immerse the system of ODEs (1) in a deterministic (S1 × I)-valued random process (t, ω) �→ Θt(ω), defined on a probability space (Ω, F, P) of an arbitrary nature (Ω is an abstract set, F is a sigma-algebra on Ω, and P is a probability measure F �→ [0, 1]), and satisfying the ODE d dtΘt(ω) = � vu � Θt(ω) � 0 � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
84 |
+
page_content=' It is a simple technical exercise to check that the function t �→ µt .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
85 |
+
page_content='= (Θt)♯P solves the Cauchy problem (4), (5) with ϑ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
86 |
+
page_content='= (Θ0)♯P, where the symbol ♯ denotes the operation of pushforward of a measure by a (Borel) function Ω �→ S1 × I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
87 |
+
page_content=' Note that empirical ensembles (3) fit this setup if Ω = {1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
88 |
+
page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
89 |
+
page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
90 |
+
page_content=' , N} and P is the normalized counting measure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
91 |
+
page_content=' Finally, observe that the variable η enters PDE (4) as a parameter rather than state variable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
92 |
+
page_content=' This means that (4) can be regarded as an η-parametric family of continuity equations on the 1D space S1 rather than a PDE on the 2D space S1×I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
93 |
+
page_content=' This observation is essential for the numerical treatment of the problem (4) (see section IV).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
94 |
+
page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
95 |
+
page_content=' Control Signals Now, we shall fix the class of admissible control signal u.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
96 |
+
page_content=' Consider two options: u = u(t), i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
97 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
98 |
+
page_content=', the control effects all neurons of the ensemble in the same way.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
99 |
+
page_content=' We call this type of ex- ternal influences the ensemble (simultaneous, common) control.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
100 |
+
page_content=' Such a control is statistical in its spirit as it influences the whole ensemble “in average”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
101 |
+
page_content=' As a natural space of such controls we choose u ∈ U .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
102 |
+
page_content='= L2([0, T];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
103 |
+
page_content=' R).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
104 |
+
page_content=' (6) u = wt(θ, η), i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
105 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
106 |
+
page_content=', the stimulus is adopted to the neuron’s individual characteristics and phase-dependent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
107 |
+
page_content=' The use of such a distributed, mean-field type control w ∈ W .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
108 |
+
page_content='= L2([0, T];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
109 |
+
page_content=' C(S1 × I;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
110 |
+
page_content=' R)), (7) assumes some technical option to variate control signals over the spatial domain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
111 |
+
page_content=' It is natural to expect that the second-type control should perform better.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
112 |
+
page_content=' However, let us stress again that the practical implementation of “personalized” control signals is hardly realistic as soon as the number of driven objects is large enough (for experiments that pretend to mimic the biological neural tissue, this number should be astronomic!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
113 |
+
page_content=').' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
114 |
+
page_content=' In reality, a meaningful class of control signals is U, or something “in the middle” between the mentioned two options.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
115 |
+
page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
116 |
+
page_content=' Performance Criterion We study a generalization of the optimization problem [12]: to steer the neural population to a target phase dis- tribution at a prescribed (finite) time moment T > 0 with care about the total energy of the control action.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
117 |
+
page_content=' Assuming that the target distribution is given by a (bounded continuous) function η �→ ˇθ(η), our optimization problem reads: (P1) � � � � � � � � � � � � � � � min I[u] = � F � θ, ˇθ(η) � dµT (θ, η) +α 2 � T 0 u2(t) dt, α > 0, subject to (4), (6), where F(θ, ω) = 1 2(sin θ − sin ω)2 + 1 2(cos θ − cos ω)2 =1 − cos(θ − ω), and � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
118 |
+
page_content='= � S1×I .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf'}
|
119 |
+
In this problem, the role of the state variable is played by the probability measure μt. Note that the functional I and the dynamics (4) are linear in μ (despite the nonlinearity of the map (θ, η) ↦ v_u(θ, η)). At the same time, (4) contains a product of μ and u, which means that (P1) is, in fact, a bilinear (non-convex) problem. Standard arguments from the theory of transport equations in the Wasserstein space [15], together with the classical Weierstrass theorem, ensure that problem (P1) is well posed, i.e., it does have a minimizer within the admissible class U of control signals (refer, e.g., to [16]).

An alternative version of problem (P1) is formulated in terms of mean-field type control:

\[
\text{(P2)} \qquad
\begin{cases}
\min\; J[w] = \displaystyle\int F\big(\theta, \check\theta(\eta)\big)\, d\mu_T + \frac{\alpha}{2}\int_0^T dt \int w_t^2\, d\mu_t,\\[4pt]
\text{subject to (4), (7).}
\end{cases}
\]

In what follows, we shall focus on the "more realistic" statement (P1), though all the forthcoming results can be extended, at least formally, to problem (P2).
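Although (P1) is stated in terms of measures, its cost is straightforward to estimate once μ_T is approximated by samples. The following Python sketch is purely illustrative: the particle approximation, the rectangle rule for the time integral, and all names are our assumptions, not constructions from the paper.

```python
import numpy as np

def cost_I(theta_T, eta_T, u, dt, alpha, theta_check):
    """Monte-Carlo estimate of I[u] for problem (P1):
    terminal term int F(theta, theta_check(eta)) d(mu_T), with F = 1 - cos(theta - omega),
    plus the control energy (alpha/2) * int_0^T u(t)^2 dt (rectangle rule)."""
    terminal = np.mean(1.0 - np.cos(theta_T - theta_check(eta_T)))  # empirical mu_T
    energy = 0.5 * alpha * dt * np.sum(u ** 2)
    return terminal + energy

# hypothetical usage with the constant target theta_check(eta) = pi of Section IV:
# I_val = cost_I(theta_T, eta_T, u_grid, dt=0.002, alpha=1.0, theta_check=lambda e: np.pi)
```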
III. COST INCREMENT FORMULA. NUMERICAL ALGORITHM

As was remarked above, problem (P1) is linear in the state measure. This fact allows us to represent the variation of the cost functional I with respect to any variation of the control u exactly (without any residual terms). The announced representation follows from duality with the co-state from Pontryagin's maximum principle [17], and it generalizes the classical exact increment formula for conventional state-linear optimal control problems [18].
Consider two arbitrary controls ū, u ∈ U, u ≠ ū, and let t ↦ μ̄t := μt[ū] and t ↦ μt := μt[u] be the respective weak solutions to the continuity equation (4). Let also p̄ := p[ū] : (t, θ, η) ↦ p̄t(θ, η) be a classical solution to the following (non-conservative) transport equation:

\[
\partial_t p_t(\theta,\eta) + \partial_\theta p_t(\theta,\eta)\, v_{\bar u(t)}(\theta,\eta) = 0. \tag{8}
\]

PDE (8) is known to be dual to the conservative transport equation (4); the duality is formally established by the observation that the map t ↦ ∫ p̄t dμ̄t is constant on [0, T]. One can check that, under common regularity assumptions on the problem data, this map is an absolutely continuous function [0, T] → ℝ (refer to [15] for further details). As soon as p̄ is chosen as a solution to (8) with the terminal condition

\[
p_T(\theta,\eta) = -F\big(\theta, \check\theta(\eta)\big), \tag{9}
\]

the discussed duality makes it possible to represent the increment (variation) ΔI := I[u] − I[ū] of the functional I as follows:

\[
-\Delta I = \int_0^T \Big[ H\big(\mu_t, \partial_\theta \bar p_t, u(t)\big) - H\big(\mu_t, \partial_\theta \bar p_t, \bar u(t)\big) \Big]\, dt, \tag{10}
\]

where

\[
H(\mu, \zeta, u) \doteq u \int \zeta(\theta,\eta)\,(1 + \cos\theta)\, d\mu(\theta,\eta) - \frac{\alpha}{2}\, u^2.
\]
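Since the algorithm below is driven by maximizing H over u, a particle-based evaluation of H may help clarify its structure. This is a sketch under our own discretization assumptions (equally weighted particles θ_i, η_i, with ζ sampled at them as xi):

```python
import numpy as np

def hamiltonian(theta, xi, u, alpha):
    """H(mu, zeta, u) = u * int zeta * (1 + cos theta) d(mu) - (alpha/2) u^2,
    with the integral replaced by an average over equally weighted particles."""
    coupling = np.mean(xi * (1.0 + np.cos(theta)))
    return u * coupling - 0.5 * alpha * u ** 2
```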
The derivation of this formula is omitted, since it is completely analogous to [18]. Based on representation (10), we can treat problem (P1) in the following iterative way: given a reference control ū, one looks for a new "target" signal u that improves the functional value, i.e., such that ΔI < 0. The best choice of the target control is provided by maximizing the integrand of (10) in the variable u:

\[
H\big(\mu_t, \partial_\theta \bar p_t, u\big) \to \max, \qquad u \in \mathbb{R}.
\]
The unique solution of the latter problem is obtained in analytic form as

\[
u_t[\mu] = \frac{1}{\alpha} \int \partial_\theta \bar p_t(\theta,\eta)\,(1 + \cos\theta)\, d\mu(\theta,\eta). \tag{11}
\]

Here, it is worthwhile to mention that the reference dual state p̄ enters formula (11) only through its partial derivative ξ̄t(θ, η) := ∂θ p̄t(θ, η). Differentiating (8) and (9) in θ, one can easily check that ξ̄ solves the η-parametric family of the same continuity equations (4) backward in time, starting from the terminal condition

\[
\xi_T = -\partial_\theta F\big(\theta, \check\theta(\eta)\big) = \sin\big(\check\theta(\eta) - \theta\big). \tag{12}
\]

Now, (11) can be reformulated in terms of the variable ξ̄:

\[
u_t[\mu] = \frac{1}{\alpha} \int \bar\xi_t(\theta,\eta)\,(1 + \cos\theta)\, d\mu(\theta,\eta). \tag{13}
\]

Note that the map (t, μ) ↦ u_t[μ] can be used as a feedback control [0, T] × P(S¹ × I) → ℝ of system (4) in the space of probability measures. Injecting this control into (4), we obtain a nonlocal continuity equation

\[
\partial_t \mu_t + \partial_\theta \big( v_{u[\mu_t]}\, \mu_t \big) = 0, \qquad \mu_0 = \vartheta, \tag{14}
\]

which is well posed (thanks to the fact that the function (θ, η) ↦ v_u(θ, η) is smooth and bounded).
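Formula (13) is the computational core of the method: it is exactly the maximizer of H in u (set ∂H/∂u = 0). A minimal Python sketch of its evaluation follows, assuming μ is represented by equally weighted particles and ξ̄t is available as a callable; both the particle representation and the names are our assumptions.

```python
import numpy as np

def feedback_u(theta, eta, xi_t, alpha):
    """Formula (13): u_t[mu] = (1/alpha) * int xi_t(theta, eta) (1 + cos theta) d(mu),
    with mu approximated by the empirical measure of the particles (theta, eta)."""
    return np.mean(xi_t(theta, eta) * (1.0 + np.cos(theta))) / alpha
```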
Solving the last equation numerically and substituting its solution t ↦ μ̂t := μ̂t[ū] into (11), we construct the "improved" signal u(t) = u_t[μ̂t]. This idea gives rise to the following Algorithm 1.

Algorithm 1: Numerical algorithm for optimal ensemble control
  Data: ū ∈ U (initial guess), ε > 0 (tolerance)
  Result: {u_k}_{k≥0} ⊂ U such that I[u_{k+1}] < I[u_k]
  k ← 0; u_0 ← ū;
  repeat
    μ_k ← μ̂[u_k];
    u_{k+1} ← u[μ_k];
    k ← k + 1;
  until I[u_{k−1}] − I[u_k] < ε;
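In code, the loop of Algorithm 1 is a plain descent iteration. The sketch below is schematic: the two solver callables stand for the backward integration of (4), (12) and the forward integration of the closed-loop equation (14), and their signatures are our assumptions rather than details from the paper.

```python
def ensemble_control(u0, solve_backward, solve_forward, cost, eps):
    """Skeleton of Algorithm 1.
    solve_backward(u): integrate (4), (12) backward in time, return the dual state xi;
    solve_forward(xi): integrate (14) forward with feedback (13), return the improved control;
    cost(u): evaluate the functional I[u]."""
    u, I_prev = u0, cost(u0)
    while True:
        xi = solve_backward(u)       # dual state for the current reference control u_k
        u_next = solve_forward(xi)   # trajectory mu_k yields the improved u_{k+1} = u[mu_k]
        I_next = cost(u_next)
        if I_prev - I_next < eps:    # stopping rule of Algorithm 1
            return u_next
        u, I_prev = u_next, I_next
```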
By construction, Algorithm 1 generates a sequence {u_k}_{k≥0} ⊂ U of controls with the property I_{k+1} := I[u_{k+1}] < I[u_k] =: I_k. Since the sequence of numbers (I_k)_{k≥0} is bounded from below by the optimal value min(P1), it converges. Finally, remark that the same line of argument can be formally applied to problem (P2). The respective mean-field type control takes the form

\[
w_t(\theta,\eta) = \frac{1}{\alpha}\, \bar\xi_t(\theta,\eta)\,(1 + \cos\theta).
\]

This construction gives rise to an iterative method similar to Algorithm 1.
IV. NUMERICAL RESULTS

Let us discuss several aspects of the numerical implementation of Algorithm 1. First, note that the method proposed here does not involve any intrinsic parametric optimization. Most indirect algorithms for optimal control require the dynamic adjustment of some internal computational parameters; such are the standard methods based on Pontryagin's maximum principle [19], [20], which rely on internal procedures such as line search for specifying the "depth" of the needle-shaped (or weak) control variations.
Each iteration of Algorithm 1 requires the numerical solution of two problems: the linear problem (4), (12) (integrated backward in time), and the nonlocal continuity equation (14) (solved numerically forward in time). Since both (4) and (14) contain no terms with partial derivatives in η, one can treat η as a parameter and solve the corresponding parametric families of one-dimensional continuity equations.

Consider problem (P1) with the initial distribution of neurons μ0 given by the density function

\[
\rho_0(\theta,\eta) = \big( 2 + 3\cos(2\theta) - 2\sin(2\theta) \big)\, \eta,
\]

and with the constant target function ˇθ(η) ≡ π. In other words, our goal is to bring the neurons' states as close as possible to the segment {π} × I by the time moment T with the aid of sufficiently small controls.
Parameters for the computation: T = 6, I = [0.0, 1.0], α = 1; we used 512 Fourier harmonics in θ and grid steps Δη = 0.002, Δt = 0.002. Equations (4) and (14) are integrated by the standard spectral method [21], using the trigonometric Fourier expansion in θ for each η from the grid. Parameters of the algorithm: ū ≡ 0, ε = 0.01.
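To make the last remark concrete, one time step of such a solver for a single fixed η could look as follows. The paper states only that a trigonometric (Fourier) spectral method [21] is used; the explicit Euler stepping, the pseudo-spectral evaluation of the flux, and all names below are our assumptions.

```python
import numpy as np

N = 512                                   # Fourier harmonics in theta, as in the paper
theta = 2.0 * np.pi * np.arange(N) / N    # periodic grid on S^1
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers

def continuity_rhs(rho_hat, v_grid):
    """Spectral right-hand side of d(rho)/dt = -d_theta(v * rho) for one fixed eta:
    form the flux v * rho on the grid, then differentiate by multiplying with i*k."""
    rho = np.real(np.fft.ifft(rho_hat))
    return -1j * k * np.fft.fft(v_grid * rho)

def euler_step(rho_hat, v_grid, dt=0.002):
    # first-order step for illustration; the paper does not specify its time integrator
    return rho_hat + dt * continuity_rhs(rho_hat, v_grid)
```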
Fig. 1. Control input u(t), t ∈ [0, 6], computed by Algorithm 1 (values range roughly from −3 to 1).
V. CONCLUSION

The goal of this paper is to present an approach based on the mean-field control paradigm for solving problems of optimization and synchronization of oscillatory processes (here, the addressed Theta model is among the simplest but most prominent examples). The proposed technique can be applied to any state-linear optimal control problem involving (finite or infinite) non-interacting statistical ensembles of arbitrary nature. In particular, Algorithm 1 can easily be adapted to other neural models, such as the SNIPER model, the sinusoidal model, etc. [12].
We plan to continue this study with a natural generalization of model (1) that admits interaction between the theta neurons:

\[
\dot\theta_k = v_u(\theta_k, \eta_k) + \frac{1}{N} \sum_{j=1}^{N} K(\theta_k, \theta_j), \qquad k = 1, \dots, N,
\]

where K is a certain interaction potential formalizing the spatial connectivity of neurons in the tissue. This will result in control problems of the sort (P1), (P2) stated over the nonlocal continuity equation

\[
\partial_t \mu_t + \partial_\theta \big( [v_u + K \star \mu_t]\, \mu_t \big) = 0,
\]

involving the term

\[
(K \star \mu)(\theta) \doteq \int K(\theta, \zeta)\, d\mu(\zeta).
\]
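For a particle discretization, this nonlocal term is a pairwise average. The sketch below (a plain O(N²) evaluation, with a hypothetical Kuramoto-type kernel chosen purely for illustration) shows how (K ⋆ μ)(θ_k) would be computed:

```python
import numpy as np

def interaction_term(theta, K):
    """(K * mu)(theta_k) = int K(theta_k, zeta) d(mu)(zeta), approximated by the
    empirical measure of N particles: the mean of K(theta_k, theta_j) over j."""
    return K(theta[:, None], theta[None, :]).mean(axis=1)

# hypothetical example: N = 1000 particles, kernel K(x, y) = sin(y - x)
theta_particles = np.random.uniform(0.0, 2.0 * np.pi, size=1000)
coupling = interaction_term(theta_particles, lambda x, y: np.sin(y - x))
```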
Fig. 2. Trajectory μt(θ, η) of (4) at time moments t = 0, 3, and 6 (from top to bottom), computed for the optimal control input shown in Fig. 1; each panel plots θ ∈ [0, 2π] against η ∈ [0, 1]. The standard "rainbow" color table codes the isovalues: from black (minimal values) through violet, …, to red (maximal values).
Such problems are no longer state-linear, and the exact formula (10) becomes inapplicable. For this case, a promising alternative could be an approach based on Pontryagin's maximum principle [16].
REFERENCES

[1] I. Fischer, Y. Liu, and P. Davis, "Synchronization of chaotic semiconductor laser dynamics on subnanosecond time scales and its potential for chaos communication," Phys. Rev. A, vol. 62, p. 011801, Jun 2000. [Online]. Available: https://link.aps.org/doi/10.1103/PhysRevA.62.011801
[2] I. Blekhman, I. Blekhman, and E. Rivin, Synchronization in Science and Technology, ser. ASME Press Translations. ASME Press, 1988. [Online]. Available: https://books.google.ru/books?id=ao1QAAAAMAAJ
[3] T. Nishikawa, N. Gulbahce, and A. E. Motter, "Spontaneous reaction silencing in metabolic optimization," PLOS Computational Biology, vol. 4, no. 12, pp. 1–12, 2008. [Online]. Available: https://doi.org/10.1371/journal.pcbi.1000236
[4] Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence, ser. Dover Books on Chemistry. Dover Publications, 2003. [Online]. Available: https://books.google.ru/books?id=4ADt7smO5Q8C
[5] P. J. Uhlhaas and W. Singer, "Neural synchrony in brain disorders: Relevance for cognitive dysfunctions and pathophysiology," Neuron, vol. 52, no. 1, pp. 155–168, 2006. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0896627306007276
[6] S. J. Schiff, K. Jerger, D. H. Duong, T. Chang, M. L. Spano, and W. L. Ditto, "Controlling chaos in the brain," Nature, vol. 370, no. 6491, pp. 615–620, Aug 1994. [Online]. Available: https://doi.org/10.1038/370615a0
[7] L. Glass, "Cardiac arrhythmias and circle maps-a classical problem," Chaos: An Interdisciplinary Journal of Nonlinear Science, vol. 1, no. 1, pp. 13–19, 1991. [Online]. Available: https://doi.org/10.1063/1.165810
[8] A. Winfree, The Geometry of Biological Time, ser. Interdisciplinary Applied Mathematics. Springer New York, 2013. [Online]. Available: https://books.google.ru/books?id=7qjTBwAAQBAJ
[9] I. Z. Kiss, C. G. Rusin, H. Kori, and J. L. Hudson, "Engineering complex dynamical structures: Sequential patterns and desynchronization," Science, vol. 316, no. 5833, pp. 1886–1889, 2007. [Online]. Available: https://www.science.org/doi/abs/10.1126/science.1140858
[10] F. C. Hoppensteadt and E. M. Izhikevich, "Synchronization of laser oscillators, associative memory, and optical neurocomputing," Phys. Rev. E, vol. 62, pp. 4010–4013, Sep 2000. [Online]. Available: https://link.aps.org/doi/10.1103/PhysRevE.62.4010
[11] F. Hoppensteadt and E. Izhikevich, "Synchronization of MEMS resonators and mechanical neurocomputing," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 48, no. 2, pp. 133–138, 2001.
[12] J.-S. Li, I. Dasanayake, and J. Ruths, "Control and synchronization of neuron ensembles," IEEE Transactions on Automatic Control, vol. 58, no. 8, pp. 1919–1930, 2013.
[13] G. B. Ermentrout and N. Kopell, "Parabolic bursting in an excitable system coupled with a slow oscillation," SIAM Journal on Applied Mathematics, vol. 46, no. 2, pp. 233–253, 1986. [Online]. Available: http://www.jstor.org/stable/2101582
[14] S. H. Strogatz, Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, 2nd ed. Boulder, CO: Westview Press, 2015. [Online]. Available: https://search.library.wisc.edu/catalog/9910223127702121
[15] L. Ambrosio and G. Savaré, "Gradient flows of probability measures," in Handbook of Differential Equations: Evolutionary Equations. Vol. III, ser. Handb. Differ. Equ. Elsevier/North-Holland, Amsterdam, 2007, pp. 1–136.
[16] N. Pogodaev and M. Staritsyn, "Impulsive control of nonlocal transport equations," Journal of Differential Equations, vol. 269, no. 4, pp. 3585–3623, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S002203962030108X
[17] N. Pogodaev, "Optimal control of continuity equations," NoDEA Nonlinear Differential Equations Appl., vol. 23, no. 2, Art. 21, 2016.
[18] M. Staritsyn, N. Pogodaev, R. Chertovskih, and F. L. Pereira, "Feedback maximum principle for ensemble control of local continuity equations: An application to supervised machine learning," IEEE Control Systems Letters, vol. 6, pp. 1046–1051, 2022.
[19] A. V. Arguchintsev, V. A. Dykhta, and V. A. Srochko, "Optimal control: Nonlocal conditions, computational methods, and the variational principle of maximum," Russian Mathematics, vol. 53, no. 1, pp. 1–35, Jan 2009. [Online]. Available: https://doi.org/10.3103/S1066369X09010010
[20] P. Drag, K. Styczen, M. Kwiatkowska, and A. Szczurek, "A review on the direct and indirect methods for solving optimal control problems with differential-algebraic constraints," Studies in Computational Intelligence, vol. 610, pp. 91–105, 2016.
[21] J. P. Boyd, Chebyshev and Fourier Spectral Methods, 2nd ed. Mineola, NY: Dover Publications, 2001.
4dFQT4oBgHgl3EQf4Ta7/content/2301.13431v1.pdf
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:edc86a0dc45fa279c8c37480a4838925e0a80c8e0691d035ae2e121a7830243d
|
3 |
+
size 7678259
|
5tE1T4oBgHgl3EQfBAK1/content/tmp_files/2301.02847v1.pdf.txt
ADDED
@@ -0,0 +1,1976 @@
1 |
+
USTC-ICTS/PCFT-22-27
|
2 |
+
Irregular universe in the Nieh-Yan modified teleparallel gravity
|
3 |
+
Mingzhe Li
|
4 |
+
Interdisciplinary Center for Theoretical Study, University of Science and Technology of China, Hefei, Anhui 230026, China and
|
5 |
+
Peng Huanwu Center for Fundamental Theory, Hefei, Anhui 230026, China
|
6 |
+
Haomin Rao
|
7 |
+
School of Fundamental Physics and Mathematical Sciences,
|
8 |
+
Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China and
|
9 |
+
University of Chinese Academy of Sciences, 100190 Beijing, China
|
10 |
+
The Nieh-Yan modified teleparallel gravity is a model which modifies the general relativity equiv-
|
11 |
+
alent teleparallel gravity by a coupling between the Nieh-Yan density and an axion-like field. This
|
12 |
+
model predicts parity violations in the gravitational waves if the axion-like field has a non-trivial
|
13 |
+
background, and more importantly it is ghost free and avoids the pathologies presented in other
|
14 |
+
parity-violating gravity models.
|
15 |
+
The cosmological dynamics and perturbations of the Nieh-Yan
|
16 |
+
modified teleparallel gravity have been investigated in detail, but all these previous investigations
|
17 |
+
rely on the symmetry requirement that in the background universe both the metric and affine con-
|
18 |
+
nection are homogeneous and isotropic. In this paper we relax the symmetry constraint on the
|
19 |
+
connection and leave it arbitrary at the beginning, after all the cosmological principle only needs
|
20 |
+
the metric of the background spacetime to meet the symmetry requirement. We find a new flat
|
21 |
+
universe solution for the Nieh-Yan modified teleparallel gravity, for which the background dynamics
|
22 |
+
itself is unchanged but the perturbations around it present a new feature that the scalar and tensor
|
23 |
+
perturbations are coupled together at the linear level. The implications of this peculiar feature in
|
24 |
+
primordial perturbations from inflation are also discussed.
|
25 |
+
I.
|
26 |
+
INTRODUCTION
|
27 |
+
Stimulated by the experimental detections of gravitational waves (GWs) [1, 2] and the developments in the cosmic
|
28 |
+
microwave background radiation (CMB) experiments [3, 4], parity violating gravities attracted lots of interests in
|
29 |
+
recent years. A famous and frequently studied parity violating gravity model is the so-called Chern-Simons modified
|
30 |
+
gravity [5, 6] which within the framework of Riemannian geometry modifies general relativity (GR) by a gravitational
|
31 |
+
Chern-Simons term. The Chern-Simons modified gravity predicts the difference between the amplitudes of the left-
|
32 |
+
and right-handed polarized components of gravitational waves, i.e., the so-called amplitude birefringence phenomenon.
|
33 |
+
However, this model was found to suffer from the problem of vacuum instability because one of the circularly polarized
|
34 |
+
components of GWs becomes a ghost at high frequencies [7]. Further extensions [8–10] to this model did not circumvent
|
35 |
+
this difficulty because in these extended models the pathological behavior still appears at high energy scales, as shown
|
36 |
+
in Ref. [11]. It is very difficult to have a ghost-free parity violating gravity model within the framework of Riemannian
|
37 |
+
geometry.
|
38 |
+
Successful parity violating gravity models are available if we go beyond the Riemannian geometry. For example,
|
39 |
+
the Nieh-Yan modified teleparallel gravity (NYTG) [12, 13] is constructed within the framework of the teleparallel
|
40 |
+
gravity (TG) [14, 15], where gravity is identified with the spacetime torsion instead of the curvature. One may
|
41 |
+
have a GR equivalent TG model [16] (we may call it TGR). The NYTG model [12, 13] modifies TGR slightly by the
|
42 |
+
anomalous coupling $\theta T\tilde{T}$ between an axion-like field $\theta(x)$ and the Nieh-Yan density [17]: $T\tilde{T} \equiv \frac{1}{2}\varepsilon^{\mu\nu\rho\sigma} T^{\lambda}{}_{\mu\nu} T_{\lambda\rho\sigma}$, where $T^{\lambda}{}_{\mu\nu}$ is the torsion tensor and $\varepsilon^{\mu\nu\rho\sigma}$ is the Levi-Civita tensor, which relates the totally antisymmetric symbol $\epsilon^{\mu\nu\rho\sigma}$ and the determinant of the metric $g$ through $\varepsilon^{\mu\nu\rho\sigma} = \epsilon^{\mu\nu\rho\sigma}/\sqrt{-g}$. The Nieh-Yan density is parity-odd,
|
47 |
+
so at a background with $\partial_\mu\theta \neq 0$, the Nieh-Yan coupling term $\theta T\tilde{T}$ violates the parity symmetry spontaneously.
|
48 |
+
The NYTG model has been applied to cosmology in Refs. [12, 13], where it was found that this model predicts a
|
|
52 |
+
difference between the propagating velocities of the left- and right-handed polarized components of GWs, i.e., the
|
53 |
+
so-called velocity birefringence phenomenon. More importantly, through detailed investigations on the cosmological
|
54 |
+
perturbations, it was shown in Refs. [12, 13] that the NYTG model is ghost-free. Recently, this model was found
|
55 |
+
to be compatible with the results of most local tests in the Solar System at the post-Newtonian order [18, 19], the
|
56 |
+
upper limit on its model parameters by the GWs data of LIGO/Virgo Collaboration was obtained in Ref. [20], and the
|
57 |
+
enhancement of primordial GWs during inflation due to the velocity birefringence of NYTG model and its implications
|
58 |
+
in the air-based GWs experiments were studied in Ref. [21]. Other recent studies on parity violating gravities can be
|
59 |
+
found in Refs. [22–33].
|
60 |
+
In all the previous studies of the cosmological applications of the NYTG model, both the metric and the affine
|
61 |
+
connection of the background universe are required to be homogeneous and isotropic at the beginning. The spacetime
|
62 |
+
under this strong symmetry constraint is called the regular universe in this paper. The background solutions of the
|
63 |
+
regular universe have been well studied within the TG framework [34–36], and are universally applicable to almost
|
64 |
+
all TG models. In fact these solutions have been frequently adopted by different authors, e.g., [24, 37–39] 1.
|
65 |
+
However, the cosmological principle only needs the metric of the background universe to meet the high symmetry
|
66 |
+
requirement. In the Riemannian geometry, once we impose this symmetry requirement on the metric, the connection
|
67 |
+
(i.e., the Christoffel symbol) satisfies the same symmetry requirement automatically. In TG models, the symmetry
|
68 |
+
constraint on the affine connection is independent of the one on the metric.
|
69 |
+
If one drops this extra constraint
|
70 |
+
on the connection and leaves it arbitrary at the beginning, there will be final solutions for which the connection
|
71 |
+
is neither homogeneous nor isotropic. We call the universe which has a homogeneous and isotropic metric and a
|
72 |
+
non-homogeneous and non-isotropic affine connection the irregular universe. So far the irregular universe has rarely
|
73 |
+
aroused research interest, one example is the flat irregular universe solution found in Ref. [40] for the f(T) gravity.
|
74 |
+
The irregular universe does not violate the cosmological principle, but questions are in coming: What features and
|
75 |
+
new physical phenomena could exist in the irregular universe? Or might the irregular universe have properties that
|
76 |
+
are clearly contradictory to experiments so that only the regular universe is physically feasible? These questions
|
77 |
+
deserve detailed studies for any TG models.
|
78 |
+
In this paper, we will study the irregular universe in the NYTG model. Firstly, we will obtain a more general
|
79 |
+
flat universe solution than those in Refs. [12, 13] by solving the equations of motion of the NYTG model directly
|
80 |
+
under the condition that only the metric is required to be homogeneous and isotropic. By analyzing the symmetry
|
81 |
+
of the connection, we will show that the flat universe we obtain is generally an irregular flat universe, and in special
|
82 |
+
cases it reduces back to a regular universe. We will also show that even in the irregular flat universe, the background
|
83 |
+
equations in the NYTG model are exactly the same as those in GR. Secondly, we will study the linear cosmological
|
84 |
+
perturbations around the irregular flat universe. We will find that tensor perturbations and scalar perturbations are
|
85 |
+
coupled at the linear perturbation level. This is a peculiar feature that distinguishes the irregular universe from the
|
86 |
+
regular universe in the NYTG model. We speculate that this peculiar feature is caused by the fact that the interior
|
87 |
+
space does not satisfy the homogeneity and isotropy in the irregular universe. Finally, we will study the primordial
|
88 |
+
fluctuations generated by slow-roll inflation in the regular and irregular flat universes. We will show that the primordial
|
89 |
+
fluctuations of left- and right-handed GWs are different whether in the regular universe or in the irregular universe.
|
90 |
+
We will also show that there is a strong statistical correlation between primordial scalar fluctuations and primordial
|
91 |
+
tensor fluctuations generated by slow-roll inflation in the irregular universe.
|
92 |
+
This paper is organized as follows. In Sec. II, we briefly introduce the TG theory and the NYTG model. In Sec. III,
|
93 |
+
we study spatially flat cosmological background solutions that only requires the metric to be homogeneous and
|
94 |
+
isotropic in the NYTG model. In Sec. IV, through the quadratic actions for scalar, vector, and tensor perturbations,
|
95 |
+
we investigate linear perturbations around the regular and irregular flat universes. In Sec. V, we apply our result to
|
96 |
+
the early universe and discuss briefly the primordial perturbations generated by slow-roll inflation.
|
97 |
+
1 Actually, the cosmological background solution whose tetrad is eA
|
98 |
+
µ = diag(1, a, a, a) or eA
|
99 |
+
µ = diag(a, a, a, a) under the Weitzenb¨ock
|
100 |
+
gauge is the regular flat universe. However, most of the earlier literature did not clearly point out that the selection of such a tetrad
|
101 |
+
under the Weitzenb¨ock gauge actually requires the connection to satisfy the same symmetry of the metric.
|
102 |
+
|
103 |
+
3
|
104 |
+
In this paper, we adopt the unit 8πG = 1, and use the signature (+, −, −, −) for the metric. The tensor indices of
|
105 |
+
the interior space are denoted by A, B, C, ... = 0, 1, 2, 3 and by a, b, c, ... = 1, 2, 3 when limiting to spatial components.
|
106 |
+
They are lowered and raised by the Minkowski metric ηAB and its inverse ηAB. The spacetime tensor indices are
|
107 |
+
denoted by Greek µ, ν, ρ, ... = 0, 1, 2, 3 and by Latin i, j, k, ... = 1, 2, 3 when limiting to spatial components. They
|
108 |
+
are lowered and raised by the spacetime metric gµν and its inverse gµν. The antisymmetric symbol ϵµνρσ has the
|
109 |
+
properties: $\epsilon^{0ijk} = \epsilon_{ijk} \equiv \epsilon^{ijk}$, and $\epsilon^{123} = 1$. In addition, we distinguish the spacetime affine connection $\hat{\Gamma}^{\rho}{}_{\mu\nu}$ and its associated covariant derivative $\hat{\nabla}$ from the Levi-Civita connection $\Gamma^{\rho}{}_{\mu\nu}$ and its associated covariant derivative $\nabla$, respectively.
|
114 |
+
II.
|
115 |
+
TG THEORY AND THE NYTG MODEL
|
116 |
+
The TG theory can be considered as a constrained metric-affine theory. It is formulated in a spacetime endowed
|
117 |
+
with a metric $g_{\mu\nu}$ and an affine connection $\hat{\Gamma}^{\rho}{}_{\mu\nu}$, which is curvature free and metric compatible,
|
119 |
+
$$\hat{R}^{\sigma}{}_{\rho\mu\nu} = \partial_\mu \hat{\Gamma}^{\sigma}{}_{\nu\rho} - \partial_\nu \hat{\Gamma}^{\sigma}{}_{\mu\rho} + \hat{\Gamma}^{\sigma}{}_{\mu\lambda}\hat{\Gamma}^{\lambda}{}_{\nu\rho} - \hat{\Gamma}^{\sigma}{}_{\nu\lambda}\hat{\Gamma}^{\lambda}{}_{\mu\rho} = 0\,, \qquad \hat{\nabla}_\rho g_{\mu\nu} = \partial_\rho g_{\mu\nu} - \hat{\Gamma}^{\lambda}{}_{\rho\mu} g_{\lambda\nu} - \hat{\Gamma}^{\lambda}{}_{\rho\nu} g_{\mu\lambda} = 0\,. \qquad (1)$$
|
130 |
+
Without curvature and nonmetricity, in the TG theory gravity is identified with the spacetime torsion $T^{\rho}{}_{\mu\nu} = 2\hat{\Gamma}^{\rho}{}_{[\mu\nu]}$.
|
133 |
+
One can also describe the TG theory using the language of the tetrad $e^{A}{}_{\mu}$ and the spin connection $\omega^{A}{}_{B\mu}$. They relate the metric $g_{\mu\nu}$ and the affine connection $\hat{\Gamma}^{\rho}{}_{\mu\nu}$ through the following relations:
|
138 |
+
$$g_{\mu\nu} = \eta_{AB}\, e^{A}{}_{\mu}\, e^{B}{}_{\nu}\,, \qquad \hat{\Gamma}^{\rho}{}_{\mu\nu} = e_{A}{}^{\rho}\left( \partial_\mu e^{A}{}_{\nu} + \omega^{A}{}_{B\mu}\, e^{B}{}_{\nu} \right). \qquad (2)$$
|
148 |
+
The torsion tensor is written as
|
149 |
+
$$T^{\rho}{}_{\mu\nu} = 2\, e_{A}{}^{\rho}\left( \partial_{[\mu} e^{A}{}_{\nu]} + \omega^{A}{}_{B[\mu}\, e^{B}{}_{\nu]} \right). \qquad (3)$$
|
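As a concrete special case (our illustration, not a statement added by the authors): in a gauge with vanishing spin connection (the Weitzenböck choice discussed below), Eqs. (2) and (3) reduce to

```latex
% Special case \omega^A{}_{B\mu} = 0 (Weitzenbock gauge):
\hat{\Gamma}^{\rho}{}_{\mu\nu} = e_{A}{}^{\rho}\, \partial_\mu e^{A}{}_{\nu} ,
\qquad
T^{\rho}{}_{\mu\nu} = 2\, e_{A}{}^{\rho}\, \partial_{[\mu} e^{A}{}_{\nu]} ,
% so all of the torsion is carried by the tetrad alone.
```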
156 |
+
The teleparallel constraints (1) dictate that the spin connection can be in general expressed as
|
157 |
+
$$\omega^{A}{}_{B\mu} = (\Lambda^{-1})^{A}{}_{C}\, \partial_\mu \Lambda^{C}{}_{B}\,, \qquad (4)$$
|
162 |
+
where $\Lambda^{A}{}_{B}$ is an arbitrary element of the Lorentz transformation matrix, which is position dependent and satisfies the relation $\eta_{AB}\Lambda^{A}{}_{C}\Lambda^{B}{}_{D} = \eta_{CD}$ at any spacetime point. Therefore, the tetrad $e^{A}{}_{\mu}$ and the Lorentz matrix $\Lambda^{A}{}_{B}$ can be regarded as the basic variables of the TG theory. In this way, the teleparallel constraints (1) are automatically satisfied.
|
170 |
+
The TGR model, as the GR equivalent TG model, has the following action,
|
171 |
+
$$S_{TGR} = \frac{1}{2}\int d^4x\, |e|\, T \equiv \int d^4x\, |e| \left( -\frac{1}{2} T_\mu T^\mu + \frac{1}{8} T_{\alpha\beta\mu} T^{\alpha\beta\mu} + \frac{1}{4} T_{\alpha\beta\mu} T^{\beta\alpha\mu} \right), \qquad (5)$$
|
185 |
+
where $|e| = \sqrt{-g}$ is the determinant of the tetrad, $T$ is the torsion scalar, and $T_\mu = T^{\alpha}{}_{\mu\alpha}$ is the torsion vector.
|
187 |
+
Since we have the identity −R(e) = T + 2∇µT µ, the action (5) is identical to the Einstein-Hilbert action up to a
|
188 |
+
surface term, where the curvature scalar R(e) is defined by the Levi-Civita connection and considered as being fully
|
189 |
+
constructed from the metric, and in turn from the tetrad. Since the surface term in the action does not affect the
|
190 |
+
equations of motion, we say that the TGR is equivalent to GR at the level of the equations of motion.
|
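For readers who want the surface-term statement in one line, the identity quoted in the text can be restated as follows (a rewriting of what is already said above, nothing new):

```latex
% Using -R(e) = T + 2\nabla_\mu T^\mu in the TGR action (5):
S_{TGR} = \frac{1}{2}\int d^4x\, |e|\, T
        = -\frac{1}{2}\int d^4x\, |e|\, R(e) - \int d^4x\, |e|\, \nabla_\mu T^\mu ,
% the last integral is a total divergence, so the field equations
% coincide with those of the Einstein-Hilbert action.
```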
191 |
+
The NYTG model [12, 13] modifies the TGR model by introducing the coupling
|
192 |
+
$$S_{NY} = \frac{c}{4}\int d^4x\, |e|\, \theta\, T\tilde{T}\,, \qquad (6)$$
|
197 |
+
between an axion-like field $\theta$ and the Nieh-Yan density $T\tilde{T}$. The coupling constant $c$ is dimensionless. Generally we should also consider the dynamics of the axion-like field itself and take other matter into account, so the full action of
|
199 |
+
the NYTG model is
|
200 |
+
$$S_{NYTG} = \int d^4x\, |e| \left[ \frac{1}{2} T + \frac{c}{4}\,\theta\, T\tilde{T} + \frac{1}{2}\nabla_\mu\theta\nabla^\mu\theta - V(\theta) \right] + S_m\,. \qquad (7)$$
|
+
Other matter with the action Sm is assumed to be coupled to spacetime minimally through the tetrad.
|
213 |
+
At the
|
214 |
+
background in which the axion-like field has non-zero spacetime derivatives, the Nieh-Yan coupling term breaks
|
215 |
+
parity spontaneously. Because only the first-order derivatives of the basic variables appears in the action, the NYTG
|
216 |
+
model can avoid the Ostrogradski ghost mode, which is expected to be originated from higher-order derivatives in the
|
217 |
+
action [41].
|
218 |
+
As with most modified TG theories, the NYTG model apparently has two kinds of gauge symmetries: diffeomor-
|
219 |
+
phism invariance and local Lorentz invariance. The latter transformation makes the following change:
|
220 |
+
eA
|
221 |
+
µ → (L−1)A
|
222 |
+
BeB
|
223 |
+
µ , ΛA
|
224 |
+
B → ΛA
|
225 |
+
CLC
|
226 |
+
B ,
|
227 |
+
(8)
|
228 |
+
where LA
|
229 |
+
B(x) are the element of Lorentz matrix. We would like to use different notations to distinguish two kinds
|
230 |
+
of Lorentz matrices: ΛA
|
231 |
+
B(x) is used to express the spin connection as in Eq. (4), but LA
|
232 |
+
B(x) represents the local
|
233 |
+
transformation that makes a shift from one local frame to another. Transformation (8) can be expressed in terms of
|
234 |
+
tetrad and spin connections as
|
235 |
+
eA
|
236 |
+
µ → (L−1)A
|
237 |
+
BeB
|
238 |
+
µ , ωA
|
239 |
+
Bµ → (L−1)A
|
240 |
+
CωC
|
241 |
+
DµLD
|
242 |
+
B + (L−1)A
|
243 |
+
C∂µLC
|
244 |
+
B .
|
245 |
+
(9)
|
246 |
+
It is easy to prove that the metric gµν and torsion tensor T ρ
|
247 |
+
µν are invariant under the local Lorentz transformation
|
248 |
+
(8), as is the action (7). Due to the local Lorentz invariance, one can choose the gauge ΛA
|
249 |
+
B = δA
|
250 |
+
B, i.e., ωA
|
251 |
+
Bµ = 0.
|
252 |
+
This is the Weitzenb¨ock connection, which has been frequently adopted in the literature. In addition, there is another
|
253 |
+
symmetry hidden in the NYTG model. The Nieh-Yan term (6) can be integrated by parts as
|
254 |
+
SNY = − c
|
255 |
+
2
|
256 |
+
�
|
257 |
+
d4x ηABϵµνρσ(∂µθ)(ΛA
|
258 |
+
CeC
|
259 |
+
ν)∂ρ(ΛB
|
260 |
+
DeD
|
261 |
+
σ) .
|
262 |
+
(10)
|
263 |
+
It can be seen that the Nieh-Yan term (6) is invariant under the following transformation
|
264 |
+
(ΛA
|
265 |
+
CeC
|
266 |
+
µ) → LA
|
267 |
+
B(θ)(ΛB
|
268 |
+
CeC
|
269 |
+
µ) ,
|
270 |
+
(11)
|
271 |
+
where LA
|
272 |
+
B(θ) is Lorentz matrix that depends only on axion-like field θ. Note that ΛA
|
273 |
+
CeC
|
274 |
+
µ is invariant under trans-
|
275 |
+
formation (8). Due to the Lorentz symmetry (8), the transformation (11) can always be attributed to the fact that
|
276 |
+
the tetrad eA
|
277 |
+
µ remains unchanged while the Lorentz matrix ΛA
|
278 |
+
B undergoes a Lorentz transformation. Obviously the
|
279 |
+
metric and the action of TGR model are invariant under such a transformation. So the total action of the NYTG
|
280 |
+
model is invariant under the transformation (11).
|
281 |
+
The equations of motion follow from the variation of the action (7) with respect to eA
|
282 |
+
µ and ΛA
|
283 |
+
B separately
|
284 |
+
Gµν + N µν = T µν + T µν
|
285 |
+
θ
|
286 |
+
,
|
287 |
+
(12)
|
288 |
+
N [µν] = 0 ,
|
289 |
+
(13)
|
290 |
+
where N µν = (c/2)εµλρσ∂λθ T ν
|
291 |
+
ρσ, Gµν is the Einstein tensor, T µν = −(2/√−g)(δSm/δgµν) and T µν
|
292 |
+
θ
|
293 |
+
= [V (θ) −
|
294 |
+
∇αθ∇αθ/2]gµν + ∇µθ∇νθ are the energy-momentum tensors for the matter and the axion-like field θ respectively.
|
295 |
+
Similar to most modified TG models, the equation of motion (13) from the variation of ΛA
|
296 |
+
B is not independent of
|
297 |
+
Eq. (12), it is just the antisymmetric part of the latter. As explained in Ref. [13], this is due to the local Lorentz
|
298 |
+
invariance of the action, any change caused by δΛA
|
299 |
+
B can always be equivalent to the change caused by δeA
|
300 |
+
µ, so
|
301 |
+
requiring the action to take the extremum under δeA
|
302 |
+
µ already includes the case where the action takes the extremum
|
303 |
+
under δΛA
|
304 |
+
B. There is another equation following from the variation of the action (7) with respect to θ,
|
305 |
+
□θ + V (1) − c
|
306 |
+
4T �T = 0 ,
|
307 |
+
(14)
|
308 |
+
where □ = gµν∇µ∇ν and V (n) = dnV (θ)/dθn. All of these equations of motion are consistent with the Bianchi
|
309 |
+
identity ∇µGµν = 0 and the covariant conservation law ∇µT µν = 0.
|
310 |
+
Also in Refs. [12, 13], the cosmological perturbations of the NYTG model were analyzed in detail. It was found
|
311 |
+
that the NYTG model makes a difference between the propagating velocities of the left- and right-handed polarized
|
312 |
+
|
313 |
+
5
|
314 |
+
components of GWs, but makes no difference between their amplitudes. This phenomenon is called velocity birefrin-
|
315 |
+
gence, which is a clear physical signal of parity violation. More importantly, the NYTG model was confirmed to be
|
316 |
+
ghost free through the quadratic action of cosmological perturbations.
|
317 |
+
It is worth mentioning that the Nieh-Yan density $T\tilde{T}$ is not the only parity-odd term within the TG framework.
|
318 |
+
A more general model including all the parity-odd terms which are quadratic in the torsion tensor was considered in
|
319 |
+
Ref. [42]. But then it was found in Ref. [43] that this more general model suffers from the problem of ghost instability
|
320 |
+
again, unless it completely reduces to the NYTG model. Therefore, within the TG framework, for all parity-odd
|
321 |
+
terms which are quadratic in the torsion tensor, only the Nieh-Yan density $T\tilde{T}$ can avoid the ghost instability. This
|
322 |
+
means the NYTG model is robust to some extent.
|
323 |
+
III.
|
324 |
+
IRREGULAR FLAT UNIVERSE IN THE NYTG MODEL
|
325 |
+
So far all the studies on the cosmological applications of the NYTG model only considered the regular universe
|
326 |
+
as the background, that means both the metric and the affine connection are constrained to be homogeneous and
|
327 |
+
isotropic.
|
328 |
+
This constraint may be too strong, after all the cosmological principle which is supported by current
|
329 |
+
observations only needs the metric of the background spacetime to meet the high symmetry requirement. In this
|
330 |
+
paper, we will drop the symmetry requirement on the connection and leave it arbitrary at the beginning. After this
|
331 |
+
relaxation, it is expected that the NYTG model will have more interesting cosmological background solutions. We
|
332 |
+
are interested in the irregular universe solutions in which the metric is homogeneous and isotropic but the connection
|
333 |
+
is neither homogeneous nor isotropic. For simplicity, we will only consider the spatially flat universe.
|
334 |
+
In the flat universe, the metric can be expressed in rectangular coordinates as
|
335 |
+
$$ds^2 = g_{\mu\nu}\, dx^\mu dx^\nu = a^2\left( d\eta^2 - \delta_{ij}\, dx^i dx^j \right), \qquad (15)$$
|
339 |
+
where a = a(η) is the scale factor of the universe, η is the conformal time. This is the Friedmann-Robertson-Walker
|
340 |
+
(FRW) metric. There are 6 Killing vector fields $\{\xi^\mu_I,\ I = 1, 2, \ldots, 6\}$ in the flat universe, which can be expressed as
|
342 |
+
$$\xi^\mu_I = \delta^{\ \mu}_I\,, \qquad \xi^\mu_{I+3} = \epsilon_{Iij}\,\delta^\mu_i\, x^j\,, \qquad I = 1, 2, 3\,, \qquad (16)$$
|
349 |
+
where $\xi^\mu_1, \xi^\mu_2, \xi^\mu_3$ are Killing vector fields representing the symmetry of spatial translation, and $\xi^\mu_4, \xi^\mu_5, \xi^\mu_6$ are Killing vector fields representing the symmetry of spatial rotation. One can prove that the FRW metric satisfies the condition $\mathcal{L}_{\xi_I} g_{\mu\nu} = 0$, where $\mathcal{L}_{\xi_I}$ is the Lie derivative along the Killing vector field $\xi^\mu_I$. This reflects the fact that the metric is homogeneous and isotropic. One can also prove that $\mathcal{L}_{\xi_I}\Gamma^{\rho}{}_{\mu\nu} = 0$ for the Levi-Civita connection $\Gamma^{\rho}{}_{\mu\nu}$, which is automatically homogeneous and isotropic. This is why we do not need to pay extra attention to the symmetry of the connection within the framework of Riemannian geometry.
|
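These two Killing conditions are easy to check by machine. Below is a minimal sympy sketch (our own cross-check, not part of the paper): it evaluates $\mathcal{L}_{\xi} g_{\mu\nu} = \xi^\alpha\partial_\alpha g_{\mu\nu} + g_{\alpha\nu}\partial_\mu\xi^\alpha + g_{\mu\alpha}\partial_\nu\xi^\alpha$ for one translation and one rotation generator of Eq. (16); both give the zero matrix.

```python
# Cross-check (ours): L_xi g = 0 for the FRW metric (15) with Killing vectors of (16).
import sympy as sp

eta, x, y, z = sp.symbols('eta x y z')
coords = [eta, x, y, z]
a = sp.Function('a')(eta)

# FRW metric in conformal time: g_{mu nu} = a^2 diag(1, -1, -1, -1)
g = sp.diag(a**2, -a**2, -a**2, -a**2)

def lie_derivative_metric(xi, g, coords):
    """L_xi g_{mu nu} = xi^a d_a g_{mu nu} + g_{a nu} d_mu xi^a + g_{mu a} d_nu xi^a."""
    n = len(coords)
    L = sp.zeros(n, n)
    for mu in range(n):
        for nu in range(n):
            term = sum(xi[al] * sp.diff(g[mu, nu], coords[al]) for al in range(n))
            term += sum(g[al, nu] * sp.diff(xi[al], coords[mu]) for al in range(n))
            term += sum(g[mu, al] * sp.diff(xi[al], coords[nu]) for al in range(n))
            L[mu, nu] = sp.simplify(term)
    return L

xi_translation = [0, 1, 0, 0]    # xi^mu_1 of Eq. (16)
xi_rotation    = [0, 0, z, -y]   # xi^mu_4 = eps_{1ij} delta^mu_i x^j (rotation about x)

print(lie_derivative_metric(xi_translation, g, coords))  # zero matrix
print(lie_derivative_metric(xi_rotation,    g, coords))  # zero matrix
```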
364 |
+
A.
|
365 |
+
Regular flat universe
|
366 |
+
For TG models, even when the metric is determined, the affine connection is still arbitrary to some extent. Usually, as suggested in Refs. [34–36], a further constraint is imposed requiring that the connection is also homogeneous and isotropic, that is,
|
369 |
+
$$\mathcal{L}_{\xi_I} \hat{\Gamma}^{\rho}{}_{\mu\nu} = \hat{\nabla}_\mu \hat{\nabla}_\nu \xi^{\rho}_I - \hat{\nabla}_\mu \left( T^{\rho}{}_{\nu\sigma}\, \xi^{\sigma}_I \right) = 0\,. \qquad (17)$$
|
375 |
+
Although $\hat{\Gamma}^{\rho}{}_{\mu\nu}$ is coordinate dependent, the Lie derivative of $\hat{\Gamma}^{\rho}{}_{\mu\nu}$ does not depend on the coordinates. Hence the condition (17) is unambiguous. Combining Eqs. (15) and (17) selects the regular flat universe solution, in which the tetrad $e^{A}{}_{\mu}$ and the Lorentz matrix $\Lambda^{A}{}_{B}$ have the following forms:
|
382 |
+
$$e^{A}{}_{\mu} = a\,\delta^{A}_{\mu}\,, \qquad \Lambda^{A}{}_{B} = \mathring{\Lambda}^{A}{}_{B}\,, \qquad (18)$$
|
+
where ˚ΛA
|
391 |
+
B is a global Lorentz matrix, which does not depend on spacetime. All other solutions satisfying Eqs. (15)
|
392 |
+
and (17) differ from the solution (18) only by Lorentz transformation (8), so they are physically equivalent to the
|
393 |
+
solution (18). The above process does not depend on a specific TG theory, so the solution (18) is generally applicable
|
394 |
+
to most TG theories.
|
395 |
+
For the NYTG model, the solution (18) can automatically satisfy the constraint N [µν] = 0, so the solution (18)
|
396 |
+
is compatible with the NYTG model. Furthermore, solution (18) leads to N µν = 0 and T �T = 0, which means that
|
397 |
+
the Nieh-Yan term has no effect on the regular flat universe background. Therefore, the background equations of the
|
398 |
+
regular flat universe are exactly the same as those of GR [12, 13].
|
399 |
+
B.
|
400 |
+
Irregular flat universe
|
401 |
+
To look for the irregular universe solution, we should give up the constraint (17) on the connection. After this
|
402 |
+
relaxation, the connection is left to be determined by the equation of motion.
|
403 |
+
In a flat universe, we can always simply find the non-zero components of Gµν, T µν and T µν
|
404 |
+
θ
|
405 |
+
as
|
406 |
+
G00 = 3H2
|
407 |
+
a4
|
408 |
+
, T 00 = ρ
|
409 |
+
a2 , T 00
|
410 |
+
θ
|
411 |
+
= ρθ
|
412 |
+
a2 , Gij = −2H′ + H2
|
413 |
+
a4
|
414 |
+
δij , T ij = p
|
415 |
+
a2 δij , T ij
|
416 |
+
θ = pθ
|
417 |
+
a2 δij ,
|
418 |
+
(19)
|
419 |
+
where H = a′/a is the conformal Hubble rate, prime represents the derivative with respect to the conformal time η,
|
420 |
+
ρθ = θ′2/
|
421 |
+
�
|
422 |
+
2a2�
|
423 |
+
+V and pθ = θ′2/
|
424 |
+
�
|
425 |
+
2a2�
|
426 |
+
−V are the energy density and pressure of the θ field, and ρ and p denote the
|
427 |
+
energy density and pressure of other matter. Thanks to the Lorentz symmetry (8), we can always reduce the tetrad
|
428 |
+
to the simple form eA
|
429 |
+
µ = aδA
|
430 |
+
µ in flat universe. In order to facilitate further analysis, we decompose the independent
|
431 |
+
non-zero components of spin connections ωA
|
432 |
+
Bµ as follows
|
433 |
+
δa
|
434 |
+
iω0
|
435 |
+
a0 = Ui ,
|
436 |
+
δi
|
437 |
+
aδb
|
438 |
+
jωa
|
439 |
+
bk = Σϵijk + ϵijlΣkl + Σiδjk − Σjδik ,
|
440 |
+
δi
|
441 |
+
aδb
|
442 |
+
jωa
|
443 |
+
b0 = ϵijkVk ,
|
444 |
+
δa
|
445 |
+
iω0
|
446 |
+
aj = σδij + σij + ϵijkσk ,
|
447 |
+
(20)
|
448 |
+
where Σij and σij are symmetric and traceless spatial tensors. In the above decomposition we have exploited the
|
449 |
+
property ωABµ = −ωBAµ due to ˆ∇ρgµν = 0. Note that the variables σ, Σ, Ui, Vi, σi, Σi, σij, Σij are not completely
|
450 |
+
independent because we have not yet imposed ˆRσ
|
451 |
+
ρµν = 0 on the spin connection. Combining eA
|
452 |
+
µ = aδA
|
453 |
+
µ and Eq. (20),
|
454 |
+
N µν can be obtained as
|
455 |
+
N 00 = 0 ,
|
456 |
+
N 0i = 0 ,
|
457 |
+
N i0 = 2cθ′
|
458 |
+
a4 σi ,
|
459 |
+
N ij = cθ′
|
460 |
+
a4 (2Σδij − Σij + ϵijkΣk) .
|
461 |
+
(21)
|
462 |
+
In order for Eqs. (12) and (13) to hold, there must be
|
463 |
+
σi = 0 ,
|
464 |
+
Σi = 0 ,
|
465 |
+
Σij = 0 ,
|
466 |
+
Σ = Σ(η) .
|
467 |
+
(22)
|
468 |
+
Combining eA
|
469 |
+
µ = aδA
|
470 |
+
µ, Eqs. (20) and (22), Nieh-Yan density can be obtained as
|
471 |
+
T �T = 24Σ
|
472 |
+
a2 (H − σ) .
|
473 |
+
(23)
|
474 |
+
In order for Eq. (14) to hold, the Nieh-Yan density T �T can only be a function of time η, so σ = σ(η) when Σ ̸= 0.
|
475 |
+
Combining Eqs. (20) and (22), ˆRσ
|
476 |
+
ρµν = 0 gives
|
477 |
+
S′
|
478 |
+
ij − Ui,j + ϵijkΣ Uk + ϵiklSjkVl = 0 ,
|
479 |
+
(24)
|
480 |
+
Σ′δij − Vi,j + ϵijkΣ Uk − ϵiklSjkUl = 0 ,
|
481 |
+
(25)
|
482 |
+
ϵiklSlj,k + Σ(Sij − Skkδij) = 0 ,
|
483 |
+
(26)
|
484 |
+
ϵinmSjnSkm − Σ2ϵijk = 0 ,
|
485 |
+
(27)
|
486 |
+
where Sij = σδij + σij and the subscript “, i” represents a derivative with respect to xi. The trace of Eq. (26) gives
|
487 |
+
σΣ = 0 .
|
488 |
+
(28)
|
489 |
+
|
490 |
+
7
|
491 |
+
This means that at least one of σ and Σ is zero. If σ = 0, the equation after the Hodge duality of the ”j, k” index in
|
492 |
+
Eq. (27) can be decomposed as follows according to the trace part and the traceless part:
|
493 |
+
6 Σ2 + σijσij = 0 ,
|
494 |
+
σikσjk − 1
|
495 |
+
3(σklσkl)δij = 0 .
|
496 |
+
(29)
|
497 |
+
The solution of Eq. (29) is Σ = 0, σij = 0. This means that Eqs. (27) and (28) must give
|
498 |
+
Σ = 0 .
|
499 |
+
(30)
|
500 |
+
Combining Eqs. (22) and (30) gives N µν = 0 and T �T = 0, which means that the Nieh-Yan term has no effect
|
501 |
+
even on the irregular flat universe background. Therefore, the background equations of the irregular flat universe are
|
502 |
+
exactly the same as those of GR. This is a somewhat unexpected result. But the fact that Nieh-Yan term has no effect
|
503 |
+
on the background does not mean that it has no effect on the perturbations. In order to analyze the perturbations,
|
504 |
+
we need to first find the background solution of the irregular flat universe.
|
505 |
+
Substituting Eq. (30) into Eqs. (24), (25), (26) and (27), we get
|
506 |
+
S′
|
507 |
+
ij − Ui,j + ϵiklSjkVl = 0 ,
|
508 |
+
(31)
|
509 |
+
Vi,j + ϵiklSjkUl = 0 ,
|
510 |
+
(32)
|
511 |
+
ϵiklSlj,k = 0 ,
|
512 |
+
(33)
|
513 |
+
ϵinmSjnSkm = 0 ,
|
514 |
+
(34)
|
515 |
+
Although there are more equations than variables, this does not mean that Eqs. (31), (32), (33) and (34) have no
|
516 |
+
solution. It can be verified that the following are the solution of Eqs. (31), (32), (33) and (34)
|
517 |
+
Sij = vivjf(η)F (1)(⃗v · ⃗x) ,
|
518 |
+
Vi = ga(η)αa
|
519 |
+
i (η, ⃗x) − ha(η)βa
|
520 |
+
i (η, ⃗x) ,
|
521 |
+
Ui = ha(η)αa
|
522 |
+
i (η, ⃗x) + ga(η)βa
|
523 |
+
i (η, ⃗x) + vif (1)(η)F(⃗v · ⃗x) ,
|
524 |
+
(35)
|
525 |
+
where
|
526 |
+
αa
|
527 |
+
i (η, ⃗x) = cosh [vf(η)F(⃗v · ⃗x)] δai + vavi
|
528 |
+
v2
|
529 |
+
�
|
530 |
+
1 − cosh [vf(η)F(⃗v · ⃗x)]
|
531 |
+
�
|
532 |
+
,
|
533 |
+
βa
|
534 |
+
i (η, ⃗x) = ϵaij
|
535 |
+
vj
|
536 |
+
v sinh [vf(η)F(⃗v · ⃗x)] ,
|
537 |
+
where v1, v2, v3 are constant parameters, v =
|
538 |
+
�
|
539 |
+
δijvivj, ⃗v · ⃗x = vixi, f(η), ga(η), ha(η) are arbitrary smooth function
|
540 |
+
of conformal time η, F(⃗v · ⃗x) is arbitrary smooth function of ⃗v · ⃗x, f (n)(η) is the n derivative of f(η) with respect to
|
541 |
+
conformal time η, and F (n)(⃗v · ⃗x) is the n derivative of F(⃗v · ⃗x) with respect to ⃗v · ⃗x.
|
542 |
+
Putting solutions (22), (30) and (35) into the decomposition (20), the spin connection ωA
|
543 |
+
Bµ when the tetrad is
|
544 |
+
eA
|
545 |
+
µ = aδA
|
546 |
+
µ can be obtained as
|
547 |
+
ωa
|
548 |
+
00 = ω0
|
549 |
+
a0 = hc(η)αc
|
550 |
+
a(η, ⃗x) + gc(η)βc
|
551 |
+
a(η, ⃗x) + vaf (1)(η)F(⃗v · ⃗x) ,
|
552 |
+
ωa
|
553 |
+
b0 = ϵabi [gc(η)αc
|
554 |
+
i(η, ⃗x) − hc(η)βc
|
555 |
+
i (η, ⃗x)] ,
|
556 |
+
ω0
|
557 |
+
ai = ωa
|
558 |
+
0i = vavif(η)F (1)(⃗v · ⃗x) ,
|
559 |
+
ωa
|
560 |
+
bi = 0 .
|
561 |
+
(36)
|
562 |
+
It can be verified that the spin connection (36) does satisfy the teleparallel constraints (1). Due to the symmetry
|
563 |
+
(11), not every hI(η) and gI(η) represent a physically inequivalent solution. In order to see this better, we perform a
|
564 |
+
Lorentz transformation (9) on the above solution. The transformation matrix LA
|
565 |
+
B is
|
566 |
+
L0
|
567 |
+
0 = cosh [vf(η)F(⃗v · ⃗x)] , L0
|
568 |
+
a = La
|
569 |
+
0 = va
|
570 |
+
v sinh [vf(η)F(⃗v · ⃗x)] ,
|
571 |
+
La
|
572 |
+
b = δab + vavb
|
573 |
+
v2
|
574 |
+
�
|
575 |
+
cosh [vf(η)F(⃗v · ⃗x)] − 1
|
576 |
+
�
|
577 |
+
,
|
578 |
+
(37)
|
579 |
+
|
580 |
+
8
|
581 |
+
Then, the tetrad ˜eA
|
582 |
+
µ = LA
|
583 |
+
BeB
|
584 |
+
µ and the corresponding spin connection ˜ωA
|
585 |
+
Bµ are
|
586 |
+
˜e0
|
587 |
+
0 = a cosh [vf(η)F(⃗v · ⃗x)] , ˜ea
|
588 |
+
0 = δai˜e0
|
589 |
+
i = ava
|
590 |
+
v sinh [vf(η)F(⃗v · ⃗x)] ,
|
591 |
+
˜ea
|
592 |
+
i = a
|
593 |
+
�
|
594 |
+
δai + vavi
|
595 |
+
v2
|
596 |
+
�
|
597 |
+
cosh [vf(η)F(⃗v · ⃗x)] − 1
|
598 |
+
��
|
599 |
+
,
|
600 |
+
˜ωa
|
601 |
+
00 = ˜ω0
|
602 |
+
a0 = ha(η) , ˜ωa
|
603 |
+
b0 = ϵabcgb(η) , ˜ωA
|
604 |
+
Bi = 0 .
|
605 |
+
(38)
|
606 |
+
It can be verified that the metric gµν and connection ˆΓρ
|
607 |
+
µν given by solution (38) are the same as those given by
|
608 |
+
the tetrad eA
|
609 |
+
µ = aδA
|
610 |
+
µ and the spin connection (36). Since the solution (38) satisfies the teleparallel constraints (1),
|
611 |
+
the spin connection ˜ωA
|
612 |
+
Bµ in the solution (38) can be expressed by a Lorentz matrix ˜ΛA
|
613 |
+
B(η, ⃗x). And ˜ωA
|
614 |
+
Bi = 0 means
|
615 |
+
that ˜ΛA
|
616 |
+
B(η, ⃗x) = ˜ΛA
|
617 |
+
B(η). So taking different ha(η) and ga(η) is actually taking different ˜ΛA
|
618 |
+
B(η). Since θ = θ(η) in
|
619 |
+
the cosmological background, different ˜ΛA
|
620 |
+
B(η) can be converted to each other through the Lorentz transformation
|
621 |
+
˜ΛA
|
622 |
+
B(η) → LA
|
623 |
+
C(θ)˜ΛC
|
624 |
+
B(η). Therefore, the solutions with different ha(η) and ga(η) can be transformed into each other
|
625 |
+
by transformation (11), so they are physically equivalent. In this case, we only need to consider the simplest case
|
626 |
+
below, that is, the case where ha(η) = ga(η) = 0, so that the solution (36) can be simplified to
|
627 |
+
eA
|
628 |
+
µ = aδA
|
629 |
+
µ ,
|
630 |
+
ωa
|
631 |
+
00 = ω0
|
632 |
+
a0 = vaf (1)(η)F(⃗v · ⃗x) , ωa
|
633 |
+
b0 = 0 ,
|
634 |
+
ωa
|
635 |
+
0i = ω0
|
636 |
+
ai = vavif(η)F (1)(⃗v · ⃗x) , ωa
|
637 |
+
bi = 0 .
|
638 |
+
(39)
|
639 |
+
The solution (39) can be expressed by the tetrad eA
|
640 |
+
µ and the Lorentz matrix ΛA
|
641 |
+
B as
|
642 |
+
eA
|
643 |
+
µ = aδA
|
644 |
+
µ ,
|
645 |
+
Λ = ˚Λ · exp
|
646 |
+
�
|
647 |
+
f(η)F(⃗v · ⃗x) vaKa�
|
648 |
+
,
|
649 |
+
(40)
|
650 |
+
where ˚Λ is a spacetime independent Lorentz matrix, and K1, K2, K3 are the boost matrices whose expression are
|
651 |
+
K1 =
|
652 |
+
�
|
653 |
+
�
|
654 |
+
�
|
655 |
+
�
|
656 |
+
�
|
657 |
+
0 1 0 0
|
658 |
+
1 0 0 0
|
659 |
+
0 0 0 0
|
660 |
+
0 0 0 0
|
661 |
+
�
|
662 |
+
�
|
663 |
+
�
|
664 |
+
�
|
665 |
+
� ,
|
666 |
+
K2 =
|
667 |
+
�
|
668 |
+
�
|
669 |
+
�
|
670 |
+
�
|
671 |
+
�
|
672 |
+
0 0 1 0
|
673 |
+
0 0 0 0
|
674 |
+
1 0 0 0
|
675 |
+
0 0 0 0
|
676 |
+
�
|
677 |
+
�
|
678 |
+
�
|
679 |
+
�
|
680 |
+
� ,
|
681 |
+
K3 =
|
682 |
+
�
|
683 |
+
�
|
684 |
+
�
|
685 |
+
�
|
686 |
+
�
|
687 |
+
0 0 0 1
|
688 |
+
0 0 0 0
|
689 |
+
0 0 0 0
|
690 |
+
1 0 0 0
|
691 |
+
�
|
692 |
+
�
|
693 |
+
�
|
694 |
+
�
|
695 |
+
� .
|
696 |
+
Regardless of the functional form of f(η) and F(⃗v · ⃗x), it can be verified that the solution (40) always satisfies the
|
697 |
+
teleparallel constraints (1) and makes Eqs. (12) and (14) self-consistent. Putting solution (40) into Eqs. (12) and (14),
|
698 |
+
we can get
|
699 |
+
3H2 = a2 (ρθ + ρ) ,
|
700 |
+
2H′ + H2 = −a2 (pθ + p) ,
|
701 |
+
θ′′ + 2Hθ′ + a2V (1) = 0 .
|
702 |
+
(41)
|
703 |
+
The background equations are exactly the same as those of GR. This means that the Nieh-Yan term has no effect
|
704 |
+
even on the irregular flat universe background. This is consistent with our analysis above.
|
705 |
+
Finally, let’s focus on the symmetry of the connection given by the solution (40). The non-zero components of
|
706 |
+
LξI ˆΓρ
|
707 |
+
µν given by the solution (40) are
|
708 |
+
LξI ˆΓ0
|
709 |
+
0i = LξI ˆΓi
|
710 |
+
00 = vIvif (1)(η)F (1)(⃗v · ⃗x) ,
|
711 |
+
LξI ˆΓ0
|
712 |
+
ij = LξI ˆΓi
|
713 |
+
j0 = vIvivjf(η)F (2)(⃗v · ⃗x) ,
|
714 |
+
LξI+3 ˆΓ0
|
715 |
+
0i = LξI+3 ˆΓi
|
716 |
+
00 = −ϵIijvjf (1)(η)F(⃗v · ⃗x) + viϵIjkvjxkf (1)(η)F (1)(⃗v · ⃗x) ,
|
717 |
+
LξI+3 ˆΓ0
|
718 |
+
ij = LξI+3 ˆΓi
|
719 |
+
j0 = 2v(iϵj)Ikvkf(η)F (1)(⃗v · ⃗x) + vivjϵIklvkxlf(η)F (2)(⃗v · ⃗x) ,
|
720 |
+
(42)
|
721 |
+
where I = 1, 2, 3 in Eq. (42), and the subscript parentheses denotes the symmetrization. The fact that LξI ˆΓρ
|
722 |
+
µν ̸= 0
|
723 |
+
indicates that the spacetime connection given by the solution (40) is neither homogeneous nor isotropic.
|
724 |
+
So the
|
725 |
+
solution (40) does represent a irregular flat universe. When vi = 0 or f(η) = 0 or F(⃗v · ⃗x) = 0, there is LξI ˆΓρ
|
726 |
+
µν = 0,
|
727 |
+
and the solution (40) dose reduce to the regular flat universe solution (18).
|
728 |
+
|
729 |
+
9
|
730 |
+
IV.
|
731 |
+
PERTURBATIONS AROUND THE IRREGULAR FLAT UNIVERSE
|
732 |
+
In the previous section we studied the flat universe solution of the NYTG model that only requires the metric to
|
733 |
+
be homogeneous and isotropic. We found that the Nieh-Yan term has no effect even on the irregular flat universe
|
734 |
+
background. In order to explore the effect of the Nieh-Yan term on the irregular flat universe, we study the linear
|
735 |
+
cosmological perturbations around the irregular flat universe (40) in this section. For simplicity, we only consider the
|
736 |
+
case of F(⃗v · ⃗x) = ⃗v · ⃗x, which is equivalent to requiring that the coefficients of the equations of linear perturbations
|
737 |
+
do not depend on the spatial coordinates ⃗x (see below for details). And we also ignore other matter so that Sm = 0.
|
738 |
+
We use the following parametrization for perturbed tetrad [44]:
|
739 |
+
e0
|
740 |
+
$$e^{0}{}_{0} = a(1 + A)\,, \qquad e^{0}{}_{i} = a(\beta_{,i} + \beta^V_i)\,, \qquad e^{c}{}_{0} = a\,\delta_{ci}(\chi_{,i} + \chi^V_i)\,,$$
$$e^{c}{}_{i} = a\,\delta_{cj}\left[ (1 - \psi)\delta_{ij} + \alpha_{,ij} + \alpha^V_{j,i} - \epsilon_{ijk}(\lambda_{,k} + \lambda^V_k) + \frac{1}{2} h^T_{ij} \right]. \qquad (43)$$
|
753 |
+
So the perturbed metric components have the familiar forms:
|
754 |
+
$$g_{00} = a^2(1 + 2A)\,, \qquad g_{0i} = -a^2(B_{,i} + B^V_i)\,,$$
$$g_{ij} = -a^2\left[ (1 - 2\psi)\delta_{ij} + 2\alpha_{,ij} + \alpha^V_{i,j} + \alpha^V_{j,i} + h^T_{ij} \right], \qquad (44)$$
|
761 |
+
i
|
762 |
+
= χV
|
763 |
+
i − βV
|
764 |
+
i . Besides the familiar scalar perturbations (A, B, ψ, α), vector perturbations
|
765 |
+
(BV
|
766 |
+
i , αV
|
767 |
+
i ), and tensor perturbations hT
|
768 |
+
ij in the metric, the parametrization of tetrad brings six extra variables, which
|
769 |
+
are scalar perturbation λ, χ + β and vector perturbation λV
|
770 |
+
i , χV
|
771 |
+
i + βV
|
772 |
+
i . All the vector perturbations are transverse
|
773 |
+
and denoted by the superscript V , both the tensor perturbations are transverse and traceless and denoted by the
|
774 |
+
superscript T. In addition, the scalar field θ is decomposed as θ(η, ⃗x) = ¯θ(η) + δθ(η, ⃗x).
|
775 |
+
Although we can perform a similar decomposition on the Lorentz matrix ΛA
|
776 |
+
B following the parametrization in
|
777 |
+
Ref. [13], we do not need to do so in this paper. Because we can always transform the perturbed Lorentz matrix into
|
778 |
+
the background Lorentz matrix in Eq. (40) through the infinitesimal Lorentz transformation (8). In other words, we
|
779 |
+
can always absorb the perturbations of the Lorentz matrix ΛA
|
780 |
+
B into the perturbations of the tetrad eA
|
781 |
+
µ through the
|
782 |
+
infinitesimal Lorentz transformation (8), so that we only need to deal with the perturbations of the the tetrad.
|
783 |
+
Due to the diffeomorphism invariance, it is safe to take the unitary gauge δθ = 0, α = 0, αV
|
784 |
+
i = 0. This simplifies
|
785 |
+
the calculations, for example, the gauge invariant scalar perturbation ζ = −(ψ + Hδθ/θ′) representing the curvature
|
786 |
+
perturbation of the hypersurfaces of constant θ reduces to −ψ under the unitary gauge. Since both α and αV
|
787 |
+
i
|
788 |
+
are
|
789 |
+
perturbations which enter the metric, the perturbations α, αV
|
790 |
+
i and δθ are invariant under the infinitesimal Lorentz
|
791 |
+
transformation (8). Therefore, the unitary gauge is compatible with the operation of absorbing the perturbations of
|
792 |
+
the Lorentz matrix into the perturbations of the tetrad.
|
793 |
+
The non-isotropic nature of the background connection may lead to coupling of scalar, vector and tensor perturba-
|
794 |
+
tions. Therefore, when studying linear perturbations around the irregular flat universe (40), we should not deal with
|
795 |
+
scalar, vector, or tensor perturbations individually, but should deal with all perturbation variables simultaneously. In
|
796 |
+
the following we choose A, ζ, B, BV
|
797 |
+
i , βi = β,i + βV
|
798 |
+
i , λi = λ,i + λV
|
799 |
+
i and hT
|
800 |
+
ij as independent variables, and we study
|
801 |
+
the linear perturbations around the irregular flat universe by means of quadratic action.
|
802 |
+
For the NYTG model (7) with $S_m = 0$, one can directly obtain the quadratic action as
|
803 |
+
$$S^{(2)} = \int d^4x\; a^2 \Big\{ 6\mathcal{H}\zeta' A - 3\zeta'^2 - (2A + \zeta)\zeta_{,ii} - a^2 V A^2 + 2(\zeta' - \mathcal{H}A)B_{,ii} + \frac{1}{8}\left( h^{T\prime}_{ij} h^{T\prime}_{ij} - h^T_{ij,k} h^T_{ij,k} \right) - \frac{1}{4} B^V_i B^V_{i,jj}$$
$$+\; c\theta' \Big[ 2\lambda_i \zeta_{,i} + \frac{1}{2}\epsilon_{ijk}(\beta_i \beta_{j,k} - \lambda_i \lambda_{j,k}) + \hat{S}_{ij}\lambda_i \beta_j - \frac{1}{2}\epsilon_{ijk} S_{il} h^T_{jl}\beta_k - \frac{1}{8}\epsilon_{ijk} h^T_{il} h^T_{jl,k} \Big] \Big\}\,. \qquad (45)$$
|
829 |
+
where Sij = vivjf(η)F (1)(⃗v · ⃗x) and ˆSij = (vivj − v2δij)f(η)F (1)(⃗v · ⃗x). In general, the coefficients Sij and ˆSij are
|
830 |
+
dependent on the spatial coordinate ⃗x. The coefficients of the equations for the linear perturbations are thus also
|
831 |
+
dependent on the spatial coordinate ⃗x. It means that the evolution equations for the linear perturbations are not
|
832 |
+
|
833 |
+
10
|
834 |
+
homogeneous. For simplicity, in the following we only consider the case of F(⃗v · ⃗x) = ⃗v · ⃗x 2. In this way, Sij and ˆSij
|
835 |
+
are constant coefficients. So the evolution equations for the linear perturbations are homogeneous. But it should be
|
836 |
+
noted that even in this case, the action (45) appears to be only homogeneous rather than homogeneous and isotropic,
|
837 |
+
because the constant coefficients Sij and ˆSij are not spatial rotation invariants. In addition, the terms ˆSijλiβj and
|
838 |
+
ϵijkSilhT
|
839 |
+
jlβk in the action (45) show that there is a coupling of scalar, vector and tensor perturbations. But such
|
840 |
+
coupling may be eliminated by the constraints imposed by the action (45) itself. Therefore, only after the constraints
|
841 |
+
are lifted can we know whether there is really a coupling of scalar, vector and tensor perturbations.
|
842 |
+
To further simplify the quadratic action, we change to the momentum space in terms of Fourier transformations,
|
843 |
+
ζ(η, ⃗x) =
|
844 |
+
�
|
845 |
+
d3k
|
846 |
+
(2π)
|
847 |
+
3
|
848 |
+
2 ζ(η,⃗k) ei⃗k·⃗x ,
|
849 |
+
(46)
|
850 |
+
and we also expand the variables A, B, λi, βi and hT
|
851 |
+
ij in the same way. The tensor perturbation hT
|
852 |
+
ij can be further
|
853 |
+
expanded as
|
854 |
+
hT
|
855 |
+
ij(η,⃗k) =
|
856 |
+
�
|
857 |
+
A
|
858 |
+
hA(η,⃗k) ˆeA
|
859 |
+
ij(⃗k) ,
|
860 |
+
(47)
|
861 |
+
where {ˆeA
|
862 |
+
ij(⃗k), A = L, R} are circular polarization bases 3 satisfying ˆklϵlikˆeA
|
863 |
+
jk(⃗k) = ipAˆeA
|
864 |
+
ij(⃗k), where ˆk is the unit
|
865 |
+
vector of ⃗k, pL = −1 and pR = 1. Note that we use the normal letter A for the left- and right- hand indices to
|
866 |
+
distinguish it from the italic letter A used to represent the tetrad indices. The quadratic action in the momentum
|
867 |
+
space can be expressed as
|
868 |
+
S(2) =
|
869 |
+
�
|
870 |
+
dη
|
871 |
+
�
|
872 |
+
d3k a2
|
873 |
+
�
|
874 |
+
6Hζ′A∗ − 3ζ∗′ζ′ + k2(2A + ζ)ζ∗ + 2k2(HA − ζ′)B∗
|
875 |
+
−a2V A∗A + 1
|
876 |
+
4k2BV ∗
|
877 |
+
i
|
878 |
+
BV
|
879 |
+
i + 1
|
880 |
+
4
|
881 |
+
�
|
882 |
+
A
|
883 |
+
�
|
884 |
+
h∗′
|
885 |
+
Ah′
|
886 |
+
A − (k2 − cθ′pAk)h∗
|
887 |
+
AhA
|
888 |
+
�
|
889 |
+
+cθ′�
|
890 |
+
2ikiλ∗
|
891 |
+
i ζ + i
|
892 |
+
2ϵijkki(β∗
|
893 |
+
j βk − λ∗
|
894 |
+
jλk) + ˆSijλ∗
|
895 |
+
i βj − 1
|
896 |
+
2β∗
|
897 |
+
i
|
898 |
+
� �
|
899 |
+
A
|
900 |
+
SA
|
901 |
+
i hA
|
902 |
+
���
|
903 |
+
,
|
904 |
+
(48)
|
905 |
+
where SA
|
906 |
+
i (⃗k) = ϵijkSjlˆeA
|
907 |
+
kl(⃗k). It can be seen that A, B, BV
|
908 |
+
i , λi and βi are all non-dynamical fields and the variations
|
909 |
+
of the action (48) with them lead to the following constraints:
|
910 |
+
BV
|
911 |
+
i = 0 ,
|
912 |
+
(49)
|
913 |
+
HA − ζ′ = 0 ,
|
914 |
+
(50)
|
915 |
+
3Hζ′ + k2ζ − a2V A + Hk2B = 0 ,
|
916 |
+
(51)
|
917 |
+
ϵijkkjλk − i ˆSijβj + 2kiζ = 0 ,
|
918 |
+
(52)
|
919 |
+
− ˆSijλj + iϵijkkjβk + 1
|
920 |
+
2
|
921 |
+
�
|
922 |
+
A
|
923 |
+
SA
|
924 |
+
i hA = 0 .
|
925 |
+
(53)
|
926 |
+
For the regular flat universe case with vi = 0 or f(η) = 0, there are ˆSij = 0 and SA
|
927 |
+
i = 0, so the solution of Eqs.
|
928 |
+
(49), (50), (51), (52) and (53) is
|
929 |
+
ζ = 0 , A = 0 , B = 0 , BV
|
930 |
+
i = 0 , λi = ikiλ , βi = ikiβ ,
|
931 |
+
(54)
|
932 |
+
2 The expression of F(⃗v · ⃗x) can differ by a constant term, which does not change the coefficients Sij and ˆSij. And a constant factor of
|
933 |
+
the difference of F(⃗v · ⃗x) can be absorbed into f(η).
|
934 |
+
3 Note that the choice of circular polarization bases is not unique, ˆeA
|
935 |
+
ij(⃗k) can be rotated along the ⃗k-axis while maintaining all the properties
|
936 |
+
of the circular polarization bases. For the case where there is a constant vector ⃗v ̸= 0 on the background, we can always choose the
|
937 |
+
circular polarization bases to satisfy vivjˆeA
|
938 |
+
ij(⃗k) = (v2/
|
939 |
+
√
|
940 |
+
2) sin2 ϑ, where ϑ is the angle between ⃗k and ⃗v. This choice maximally simplifies
|
941 |
+
the quadratic action (57), so we adopt this choice in this paper.
|
942 |
+
|
943 |
+
11
|
944 |
+
where λ and β are arbitrary scalar perturbations. Substituting the Eq. (54) back into the action (48), the action (48)
|
945 |
+
can be simplified as
|
946 |
+
S(2) =
|
947 |
+
�
|
948 |
+
dη
|
949 |
+
�
|
950 |
+
d3k a2
|
951 |
+
4
|
952 |
+
�
|
953 |
+
A
|
954 |
+
�
|
955 |
+
|h′
|
956 |
+
A|2 − ω2
|
957 |
+
A|hA|2�
|
958 |
+
,
|
959 |
+
(55)
|
960 |
+
where ω2
|
961 |
+
A = k2 −cθ′pAk. It can be seen that there is no scalar dynamical degree of freedom at the linear perturbation
|
962 |
+
level. This is a bit strange because the action (7) clearly shows that there is a scalar dynamical degree of freedom.
|
963 |
+
Further research in Ref. [13] shows that the missing scalar dynamical degree of freedom reappears in the regular curved
|
964 |
+
universe. The phenomenon of degrees of freedom being hidden under special background also appears in f(T) gravity
|
965 |
+
[45] and massive gravity [46]. This implies that such a special background is likely to suffer from strong coupling
|
966 |
+
issue [47]. It can also be seen that the modified dispersion relation ω2
|
967 |
+
A is helicity dependent. This means that GWs
|
968 |
+
with different helicities will have different propagation velocities. This phenomenon is called velocity birefringence,
|
969 |
+
which is a direct reflection of the parity violation in the NYTG model. These results are consistent with the results
|
970 |
+
in Refs. [12, 13] 4.
|
971 |
+
For the irregular flat universe case with vi ̸= 0 and f(η) ̸= 0, the solution of Eqs. (49), (50), (51), (52) and (53) is
|
972 |
+
A = ζ′/H , B = −
|
973 |
+
�
|
974 |
+
θ′2ζ′ + 2k2Hζ
|
975 |
+
�
|
976 |
+
/2k2H2 , BV
|
977 |
+
i = 0 ,
|
978 |
+
λi =
|
979 |
+
� 2 cos ϑ
|
980 |
+
kv sin2 ϑϵijkkjvk
|
981 |
+
�
|
982 |
+
ζ −
|
983 |
+
i
|
984 |
+
2
|
985 |
+
√
|
986 |
+
2k ki� �
|
987 |
+
A
|
988 |
+
pAhA
|
989 |
+
�
|
990 |
+
,
|
991 |
+
βi =
|
992 |
+
�
|
993 |
+
2i
|
994 |
+
v2f(η) sin2 ϑki + 2ivf(η) cos ϑ
|
995 |
+
k sin2 ϑ
|
996 |
+
vi
|
997 |
+
�
|
998 |
+
ζ + ivf(η) cos ϑ
|
999 |
+
2
|
1000 |
+
√
|
1001 |
+
2k
|
1002 |
+
vi
|
1003 |
+
� �
|
1004 |
+
A
|
1005 |
+
hA
|
1006 |
+
�
|
1007 |
+
,
|
1008 |
+
(56)
|
1009 |
+
where ϑ is the angle between k⃗ and v⃗. Substituting the above results back into the action (48), the action (48) can be simplified as

S^{(2)} = \int d\eta \int d^3k \left[ \frac{z^2}{2}\left(|\zeta'|^2 - k^2|\zeta|^2\right) + \frac{a^2}{4}\sum_A \left(|h'_A|^2 - \omega_A^2 |h_A|^2\right) - \frac{c\,a^2\theta' k}{\sqrt{2}}\,\zeta^*\Big(\sum_A p_A h_A\Big) \right] ,   (57)
where z² = a²θ′²/𝓗². For the action (57), the following points need to be emphasized. Firstly, it can be seen that there is indeed a scalar dynamical degree of freedom, which again verifies that a scalar dynamical degree of freedom is hidden under the regular flat universe at the linear perturbation level. Secondly, there are two tensor dynamical degrees of freedom and the dispersion relation ω²_A is helicity dependent, as in the regular universe. This means that the velocity birefringence phenomenon of GWs also exists in the irregular universe. Thirdly, it is surprising that v_i and f(η) cancel completely in the step of lifting the constraints, so that the action (57) no longer depends on v_i and f(η). This makes the case of v_i = 0, f(η) = 0 not the limit of the case v_i → 0, f(η) → 0. This is somewhat analogous to the fact that a massless photon is not the limit of a photon whose mass tends to zero. Fourthly, it can be seen that the coefficients in the action (57) are homogeneous and isotropic. This means that the evolution equations of the scalar perturbation ζ and the tensor perturbations h_A are homogeneous and isotropic. Finally, it can be seen that even after the constraints are lifted, there is still a coupling between the scalar and tensor degrees of freedom. This is a feature present in neither the regular flat universe nor the regular curved universe. It means that scalar perturbations and tensor perturbations can influence each other at the linear perturbation level. This can be seen more clearly from the perspective of the equations of motion. From the action (57), the linear equations of ζ and h_A can be obtained as
\zeta'' + \frac{2z'}{z}\zeta' + k^2\zeta + \frac{c\,a^2\theta' k}{\sqrt{2}\,z^2}\Big(\sum_A p_A h_A\Big) = 0 \,,   (58)

h''_A + 2\mathcal{H} h'_A + \omega_A^2 h_A + \sqrt{2}\,c\,\theta' p_A k\,\zeta = 0 \,.   (59)

[4] The subtle difference in the dispersion relation ω²_A is due to the difference between expanding by e^{ik⃗·x⃗} and expanding by e^{−ik⃗·x⃗} in the Fourier transformation.
Eq. (58) shows that the tensor perturbations h_A can act as a source for the scalar perturbation ζ. The scalar perturbation ζ can be excited when the left- and right-handed GWs have different amplitudes or phases. And Eq. (59) shows that the scalar perturbation ζ can act as a source for the tensor perturbations h_A. It is worth noting that the source of the tensor perturbations h_A caused by ζ is helicity dependent, that is, the excitation effects caused by ζ on the left- and right-handed GWs are different.

V. PRIMORDIAL FLUCTUATIONS GENERATED BY INFLATION

In the previous section, we preliminarily studied the linear perturbations around the regular and irregular flat universes, and obtained the quadratic action after the constraints were lifted. In this section, we preliminarily study the primordial fluctuations generated by slow-roll inflation in the regular and irregular flat universes.

A. The case of the regular universe
For the case of the regular universe, the quadratic action (55) can be expressed as

S^{(2)} = \int d\eta \int d^3k \, \frac{a^2}{2} \sum_A \left[ \left|\tfrac{1}{\sqrt{2}} h'_A\right|^2 - \left(k^2 - c\theta' p_A k\right)\left|\tfrac{1}{\sqrt{2}} h_A\right|^2 \right] .   (60)
Note that since there are only tensor degrees of freedom in the regular flat universe at the linear perturbation level, a scalar field other than θ needs to be introduced to generate the primordial scalar perturbation [12, 21]. In this subsection we do not consider the case of introducing additional scalar fields, and we focus only on the tensor perturbations.

Next we consider the case of slow-roll inflation dominated by the axion-like field θ. Since the background equations of the regular flat universe are exactly the same as those in GR, the background evolution during inflation will be exactly the same as in slow-roll inflation in GR [48, 49]. So we do not need to repeat the analysis of the details of single-scalar-field inflation. We introduce two commonly used slow-roll parameters
\varepsilon \equiv -\frac{\dot H}{H^2} \,, \qquad \delta \equiv \frac{\ddot\theta}{H\dot\theta} \,,   (61)

where H = ȧ/a = 𝓗/a is the Hubble rate and an overdot denotes the derivative with respect to the physical time t. We assume ε ∼ |δ| ≪ 1, |ε̇/H| ≪ |ε| and |δ̇/H| ≪ |δ| during inflation. Under the slow-roll approximation,

\mathcal{H} \approx -\frac{1+\varepsilon}{\eta} \,, \qquad \theta' \approx \frac{\sqrt{2\varepsilon}}{\eta} \,.   (62)
Without loss of generality, in Eq. (62) we have assumed that the value of θ decreases during inflation.

Next, by combining Eqs. (60) and (62), the correlation function of h_A can be obtained through the procedure in Appendix C:

\langle h_A^\dagger h_A \rangle \approx H^2 e^{-p_A\sqrt{\varepsilon/2}\,c\pi}\, k^{-(3+2\varepsilon)} \,,   (63)

and ⟨h†_L h_R⟩ = 0. Through the correlation functions (63), the power spectra of the left- and right-handed GWs can be obtained as

P_A(k) = \frac{k^3}{\pi^2}\langle h_A^\dagger h_A \rangle \approx \frac{H^2}{\pi^2}\, e^{-p_A\sqrt{\varepsilon/2}\,c\pi}\, k^{-2\varepsilon} \,.   (64)
The power spectrum of the tensor perturbations can be obtained as

P_T(k) = P_L(k) + P_R(k) \approx \frac{H^2}{\pi^2}\left[ 1 + \cosh\!\left(\sqrt{\tfrac{\varepsilon}{2}}\,c\pi\right) \right] k^{-2\varepsilon} \,.   (65)

The relative difference between the power spectra of the left- and right-handed GWs can be obtained as

\Pi \equiv \frac{P_R - P_L}{P_R + P_L} \approx -\tanh\!\left(\sqrt{\tfrac{\varepsilon}{2}}\,c\pi\right) \approx -\sqrt{\tfrac{\varepsilon}{2}}\,c\pi \,.   (66)
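The relation between Eqs. (64) and (66) can be verified with a few lines of arithmetic. The following sketch uses placeholder values of ε and c (not fitted numbers) and checks that the ratio built from the helicity-dependent prefactors of Eq. (64) reproduces −tanh and its small-argument limit:

```python
import numpy as np

# Numerical sanity check of Eqs. (64)-(66); eps and c are placeholder values.
eps, c = 0.008, 1.0
x = np.sqrt(eps / 2.0) * c * np.pi            # argument appearing in (65)-(66)

P = {p: np.exp(-p * x) for p in (+1, -1)}     # k-independent prefactors of (64)
Pi = (P[+1] - P[-1]) / (P[+1] + P[-1])        # relative difference, Eq. (66)

print(Pi, -np.tanh(x), -x)                    # all three nearly equal for small x
```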
Π ≠ 0 means that the magnitudes of the primordial fluctuations of the left- and right-handed GWs are different. This is a clear physical signal of parity violation. But this seems to contradict the conclusion in Refs. [12, 13] that there is only velocity birefringence of GWs and no amplitude birefringence of GWs in the NYTG model. The reason for this contradiction is that θ′ is approximated as a constant in the analysis of the evolution of GWs in Refs. [12, 13]. Of course, this approximation is valid when studying the propagation of GWs in a slowly expanding universe. However, θ′ = aθ̇ ∝ 1/η cannot be approximated as a constant during slow-roll inflation dominated by θ. We know that for a harmonic oscillator (with equation of motion ẍ + ω²x = 0), the amplitude of the oscillator can change when the frequency ω is time dependent. And when the time dependence of θ′ is not negligible, the time dependences of ω_L and ω_R will be different, resulting in different effects on the amplitudes of the left- and right-handed GWs. This is why the magnitudes of the primordial fluctuations of the left- and right-handed GWs generated by slow-roll inflation in the regular flat universe are different. If ε → 0, it can be seen from Eq. (62) that θ′ ≈ 0 can be approximated as a constant, and from Eq. (66) that Π → 0 too, that is, the magnitudes of the primordial fluctuations of the left- and right-handed GWs become the same.
the left- and right-handed GWs are the same.
|
1189 |
+
Finally, let’s look at the case when the coupling constant c → 0, then
|
1190 |
+
PT (k) ≈ 2H2
|
1191 |
+
π2 k−2ε ,
|
1192 |
+
Π ≈ 0 .
|
1193 |
+
(67)
|
1194 |
+
This is exactly the result of the slow-roll inflation of single scalar field in GR.
|
1195 |
+
B.
|
1196 |
+
The case of the irregular universe
|
1197 |
+
For the case of the irregular universe, the coupling of ζ and h_A in the action (57) makes it difficult to analyze the quantum fluctuations, so we first diagonalize the variables ζ and h_A. Firstly, for convenience of analysis, we introduce the new variables ξ₁ = (z/a)ζ, ξ₂ = (1/√2)h_L and ξ₃ = (1/√2)h_R, so that the action (57) can be simplified as

S^{(2)} = \int d\eta \int d^3k \, \frac{a^2}{2}\left( \sum_{s=1}^{3} |\xi'_s|^2 - \sum_{s_1=1}^{3}\sum_{s_2=1}^{3} M_{s_1 s_2}\,\xi^*_{s_1}\xi_{s_2} \right) , \quad \text{with} \quad M = \begin{pmatrix} k^2-\Omega & -\kappa & \kappa \\ -\kappa & k^2-\sigma & 0 \\ \kappa & 0 & k^2+\sigma \end{pmatrix} ,   (68)
where Ω = z″/z − a″/a, σ = −cθ′k and κ = c𝓗k are background quantities. Secondly, we introduce an orthogonal matrix T that diagonalizes the matrix M; its expression is

T = \begin{pmatrix} t_1^T \\ t_2^T \\ t_3^T \end{pmatrix} , \quad \text{with} \quad t_s = \frac{-s^2+5s-5}{\sqrt{1+\frac{(\tau_s-\sigma)^2}{\kappa^2}+\left(1-\frac{(\tau_s-\sigma)(\tau_s+\Omega)}{\kappa^2}\right)^{2}}} \begin{pmatrix} (\tau_s-\sigma)/\kappa \\ 1-(\tau_s-\sigma)(\tau_s+\Omega)/\kappa^2 \\ 1 \end{pmatrix} ,   (69)
where the superscript T means transpose, and {τ_s, s = 1, 2, 3} are the solutions of the cubic equation

\tau^3 + \Omega\tau^2 - (2\kappa^2 + \sigma^2)\tau - \sigma^2\Omega = 0 \,.   (70)

The specific expressions of {τ_s, s = 1, 2, 3} are given in Appendix A. Finally, we introduce the new variables {q_s, s = 1, 2, 3}, defined as

\begin{pmatrix} q_1 \\ q_2 \\ q_3 \end{pmatrix} = T \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} .   (71)
Thus, the action (68) can be further simplified as

S^{(2)} = \sum_{s=1}^{3} \int d\eta \int d^3k \, \frac{a^2}{2}\left[ |q'_s|^2 - (k^2+\tau_s)|q_s|^2 \right] .   (72)

So far, we have reduced the action (57), in which the variables are coupled, to the action (72), in which the variables are decoupled. The latter form makes it easier to calculate the primordial fluctuations generated by inflation.
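The diagonalization of Eqs. (68)-(72) can be checked numerically: since the quadratic form in (68) is M = k²I plus a symmetric matrix, the eigenvalues of M are k² + τ_s with τ_s the roots of the cubic (70). The sketch below (with arbitrary placeholder values for Ω, σ, κ and k) verifies this:

```python
import numpy as np

# Illustrative check (placeholder background values): diagonalize the coupling
# matrix M of Eq. (68) and compare its eigenvalues k^2 + tau_s with the roots
# of the cubic equation (70).
Omega, sigma, kappa, k = 0.3, -0.2, 0.15, 1.0

M = np.array([[k**2 - Omega, -kappa,        kappa        ],
              [-kappa,        k**2 - sigma, 0.0          ],
              [ kappa,        0.0,          k**2 + sigma ]])

taus_from_M = np.sort(np.linalg.eigvalsh(M) - k**2)

# tau^3 + Omega tau^2 - (2 kappa^2 + sigma^2) tau - sigma^2 Omega = 0
coeffs = [1.0, Omega, -(2*kappa**2 + sigma**2), -sigma**2*Omega]
taus_from_cubic = np.sort(np.roots(coeffs).real)

print(taus_from_M, taus_from_cubic)   # the two sets of tau_s should agree
```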
Next we consider the case of slow-roll inflation dominated by the axion-like field θ. Since in Sec. III we proved that the background equations of the irregular flat universe are exactly the same as those in GR, the background evolution during inflation will be exactly the same as in slow-roll inflation in GR. Under the slow-roll approximation, the background quantities Ω, σ and κ can be approximately expressed as

\Omega \approx \frac{3(\varepsilon+\delta)}{2\eta^2} \,, \qquad \sigma \approx -\frac{\sqrt{2\varepsilon}\,ck}{\eta} \,, \qquad \kappa \approx -\frac{(1+\varepsilon)\,ck}{\eta} \,.   (73)
In this section, we also assume that the coupling constant c ∼ 1 (which can also be seen as a requirement of naturalness), so that c ≫ √ε. Ignoring higher-order small quantities such as ε², the {τ_s, s = 1, 2, 3} in Eq. (A3) can be approximated as

\tau_1 \approx \frac{(2+3\varepsilon)\,ck}{\sqrt{2}\,\eta} - \frac{3(\varepsilon+\delta)}{2\eta^2} \,, \qquad \tau_2 \approx 0 \,, \qquad \tau_3 \approx -\frac{(2+3\varepsilon)\,ck}{\sqrt{2}\,\eta} - \frac{3(\varepsilon+\delta)}{2\eta^2} \,.   (74)
If only terms up to order √ε are retained, the orthogonal matrix T can be approximated as

T \approx \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1+\sqrt{\varepsilon}}{2} & -\frac{1-\sqrt{\varepsilon}}{2} \\ -\sqrt{\varepsilon} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1-\sqrt{\varepsilon}}{2} & \frac{1+\sqrt{\varepsilon}}{2} \end{pmatrix} .   (75)
Regarding the approximate expression (75), two points need additional explanation. First, order √ε is the lowest-order approximation required to preserve the difference between the power spectra of left- and right-handed GWs. If we further ignore the contributions of order √ε in T, the difference between the power spectra of left- and right-handed GWs disappears. And if we keep higher-order terms, they bring only more complicated but less important corrections to the power spectrum. Second, it can be seen that the matrix T does not tend to the identity matrix as c → 0 in the approximate expression (75). This is confusing, because the three variables all decouple as c → 0 in the action (68). The reason for this confusing phenomenon is that we have used the approximation c ≫ √ε in Eqs. (74) and (75). If c is too small, neither Eq. (74) nor Eq. (75) holds. See Appendix B for the approximate behavior of the orthogonal matrix T when c → 0.
Next, by combining Eqs. (72) and (74), the correlation functions of the variables q_s can be obtained through the procedure in Appendix C:

\langle q_1^\dagger q_1 \rangle \approx \frac{H^2}{2}\, e^{\frac{c\pi}{\sqrt{2}}}\, k^{-(3+3\varepsilon+\delta)} \,, \quad \langle q_2^\dagger q_2 \rangle \approx \frac{H^2}{2}\, k^{-(3+2\varepsilon)} \,, \quad \langle q_3^\dagger q_3 \rangle \approx \frac{H^2}{2}\, e^{-\frac{c\pi}{\sqrt{2}}}\, k^{-(3+3\varepsilon+\delta)} \,,   (76)
and ⟨q†_{s₁} q_{s₂}⟩ = 0 when s₁ ≠ s₂. Then, using the approximation techniques in Appendix D and combining Eqs. (71), (75) and (76), the correlation functions of the variables ζ and h_A can be obtained as

\langle \zeta^\dagger \zeta \rangle \approx \frac{1}{2\varepsilon}\cosh\!\left(\frac{c\pi}{\sqrt{2}}\right) H^2 k^{n_S-4} \,,
\langle h_A^\dagger h_A \rangle \approx \left[ \frac{1}{2} + \frac{1}{2}\cosh\!\left(\frac{c\pi}{\sqrt{2}}\right) - p_A\sqrt{\varepsilon}\,\sinh\!\left(\frac{c\pi}{\sqrt{2}}\right) \right] H^2 k^{n_T-3} \,,
\langle \zeta^\dagger h_A \rangle \approx -\frac{p_A}{2\sqrt{2\varepsilon}}\,\sinh\!\left(\frac{c\pi}{\sqrt{2}}\right) H^2 k^{-(3+3\varepsilon+\delta)} \,,
\langle h_L^\dagger h_R \rangle \approx \frac{1}{2}\left[ 1 - \cosh\!\left(\frac{c\pi}{\sqrt{2}}\right) \right] H^2 k^{-(3+3\varepsilon+\delta) - \frac{1}{2}\mathrm{csch}^2\!\left(\frac{c\pi}{2\sqrt{2}}\right)(\varepsilon+\delta)} \,,   (77)
where

n_S \approx 1 - (\delta + 3\varepsilon) \,, \qquad n_T \approx -(3\varepsilon+\delta) + \frac{1}{2}\,\mathrm{sech}^2\!\left(\frac{c\pi}{2\sqrt{2}}\right)(\varepsilon+\delta) \,.   (78)

It should be noted that since Eqs. (74) and (75) hold only approximately when c ≫ √ε, Eqs. (77) and (78) are likewise approximately true only when c ≫ √ε.
Through the correlation functions (77), the power spectrum of the scalar perturbation ζ can be obtained as

P_S(k) = \frac{k^3}{2\pi^2}\langle \zeta^\dagger \zeta \rangle \approx \frac{H^2}{8\pi^2\varepsilon}\cosh\!\left(\frac{c\pi}{\sqrt{2}}\right) k^{n_S-1} \,.   (79)

The power spectra of the left- and right-handed GWs can be obtained as

P_A(k) = \frac{k^3}{\pi^2}\langle h_A^\dagger h_A \rangle \approx \frac{H^2}{2\pi^2}\left[ 1 + \cosh\!\left(\frac{c\pi}{\sqrt{2}}\right) - 2p_A\sqrt{\varepsilon}\,\sinh\!\left(\frac{c\pi}{\sqrt{2}}\right) \right] k^{n_T} \,.   (80)

The power spectrum of the tensor perturbations can be obtained as

P_T(k) = P_L(k) + P_R(k) \approx \frac{H^2}{\pi^2}\left[ 1 + \cosh\!\left(\frac{c\pi}{\sqrt{2}}\right) \right] k^{n_T} \,.   (81)

The tensor-to-scalar ratio r can be obtained as

r \equiv \frac{P_T}{P_S} = 8\left[ 1 + \mathrm{sech}\!\left(\frac{c\pi}{\sqrt{2}}\right) \right] \varepsilon \,.   (82)

The relative difference between the power spectra of the left- and right-handed GWs can be obtained as

\Pi \equiv \frac{P_R - P_L}{P_R + P_L} \approx -2\sqrt{\varepsilon}\,\tanh\!\left(\frac{c\pi}{\sqrt{2}}\right) \,.   (83)
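Eqs. (79)-(83) are easy to tabulate for given slow-roll parameters. The following sketch evaluates them for illustrative placeholder values of H, ε, δ and c, and checks the internal consistency P_T/P_S = r:

```python
import numpy as np

# Evaluate Eqs. (78)-(83) for illustrative placeholder parameter values.
H, eps, delta, c = 1e-5, 0.005, 0.019, 1.0
y = c * np.pi / np.sqrt(2.0)

nS = 1 - (delta + 3 * eps)                               # Eq. (78)
nT = -(3 * eps + delta) + 0.5 / np.cosh(y / 2)**2 * (eps + delta)

PS = H**2 / (8 * np.pi**2 * eps) * np.cosh(y)            # Eq. (79), at k = 1
PT = H**2 / np.pi**2 * (1 + np.cosh(y))                  # Eq. (81), at k = 1
r = 8 * (1 + 1 / np.cosh(y)) * eps                       # Eq. (82)
Pi = -2 * np.sqrt(eps) * np.tanh(y)                      # Eq. (83)

print(nS, nT, r, Pi, PT / PS)                            # PT/PS should equal r
```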
Strictly speaking, since Eqs. (77) and (78) are only approximately true when c ≫ √ε, Eqs. (79)-(83) are also approximately true only when c ≫ √ε. But if we ignore this fact and force c → 0, then

P_S \approx \frac{H^2}{8\pi^2\varepsilon}\, k^{n_S-1} \,, \quad P_T \approx \frac{2H^2}{\pi^2}\, k^{n_T} \,, \quad r \approx 16\varepsilon \,, \quad \Pi \approx 0 \,.   (84)

It can be seen that, except for the spectral indices n_S and n_T, Eq. (84) reproduces the result of slow-roll inflation in GR.
From Planck 2018 [50], we know that the scalar spectral index n_S ≈ 0.966 and the tensor-to-scalar ratio r < 0.101. This means that the allowed range of the slow-roll parameters ε and δ is

0 < \varepsilon < \frac{0.101}{8\left[ 1 + \mathrm{sech}\!\left(c\pi/\sqrt{2}\right) \right]} < 0.012625 \,, \qquad \delta \approx 0.034 - 3\varepsilon \,.   (85)

It can be seen that the maximum value of ε depends on the coupling constant c, but never exceeds 0.012625 (the upper limit of ε when c → ∞). The allowed value of δ is determined by ε. FIG. 1 shows the allowed range of the slow-roll parameters ε and δ when c = 1.
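The c-dependence of the bound in Eq. (85) can be tabulated directly; the grid of c values below is arbitrary:

```python
import numpy as np

# Upper bound on epsilon from Eq. (85) as a function of the coupling c.
for c in (0.5, 1.0, 2.0, 5.0):
    y = c * np.pi / np.sqrt(2.0)
    eps_max = 0.101 / (8 * (1 + 1 / np.cosh(y)))
    print(c, eps_max)        # approaches 0.101/8 = 0.012625 as c grows
```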
By comparing the results in subsections V A and V B, we can see that the power spectra of the left- and right-handed GWs in the irregular universe differ from those in the regular universe. But this is not the main difference between the irregular and regular universes for primordial fluctuations. For primordial fluctuations, the most important feature of the irregular universe compared to the regular universe is that the correlation function of the scalar and tensor perturbations satisfies ⟨ζ†h_A⟩ ≠ 0 at the linear perturbation level. This means that there is a strong statistical correlation between the primordial scalar fluctuations and the primordial tensor fluctuations generated by slow-roll inflation in the irregular universe. The apparent reason for this phenomenon is that the quadratic action contains a coupling of scalar and tensor perturbations in the irregular universe, as exhibited by the action (57). The deeper reason may be that the condition L_{ξ_I} Γ̂^ρ_{μν} ≠ 0 destroys the homogeneity and isotropy of the interior space, so that the scalar fluctuations and the tensor fluctuations can interact with each other in the irregular universe.
FIG. 1: In the ε-δ plane, the blue line is the allowable value range when c = 1.
VI. CONCLUSION

As a step towards exploring the irregular universe within the TG framework, in this paper we studied the irregular flat universe of the NYTG model. Firstly, we obtained the irregular flat universe solution of the NYTG model under the condition that only the symmetry of the metric is required. We found that the cosmological background equations of the NYTG model are exactly the same as those of GR in both the regular flat universe and the irregular flat universe. Secondly, we studied the linear cosmological perturbations around the irregular flat universe. We found a peculiar feature of the irregular flat universe: the tensor and scalar perturbations are coupled at the linear perturbation level. We speculate that this peculiar feature is caused by the fact that the interior space does not satisfy homogeneity and isotropy in the irregular universe. Finally, we applied the NYTG model to the early universe and studied the primordial perturbations generated by slow-roll inflation in the regular and irregular flat universes. We found that the left- and right-handed primordial GWs are different in both the regular flat universe and the irregular flat universe. We also found that there is a strong statistical correlation between the primordial scalar and tensor perturbations generated by slow-roll inflation in the irregular universe; this is a direct consequence of the direct coupling between the scalar and tensor perturbations at linear order.

Acknowledgement: This work is supported in part by the National Key R&D Program of China Grant No. 2021YFC2203102, and by NSFC under Grants No. 12075231 and No. 12047502.
Appendix A: Solutions of the cubic equation

Consider a cubic equation in the variable τ,

a\tau^3 + b\tau^2 + c\tau + d = 0 \,,   (A1)

where a, b, c and d are real coefficients. In order to express the solutions of Eq. (A1) conveniently, we introduce the following parameters:

A = b^2 - 3ac \,, \quad B = bc - 9ad \,, \quad C = c^2 - 3bd \,, \quad \Delta = B^2 - 4AC \,, \quad \Theta = \frac{1}{3}\arccos\!\left(\frac{2Ab - 3Ba}{2A^{3/2}}\right) .   (A2)
When Δ < 0, Eq. (A1) has three real solutions, which are

\tau_1 = -\frac{1}{3a}\left( b + 2\sqrt{A}\cos\Theta \right) ,
\tau_2 = \frac{1}{3a}\left[ -b + \sqrt{A}\left( \cos\Theta - \sqrt{3}\sin\Theta \right) \right] ,
\tau_3 = \frac{1}{3a}\left[ -b + \sqrt{A}\left( \cos\Theta + \sqrt{3}\sin\Theta \right) \right] .   (A3)

Eq. (70) in the main text is the result of taking a = 1, b = Ω, c = −(2κ² + σ²) and d = −σ²Ω in Eq. (A1). In this case we always have A ≥ 0 and Δ ≤ 0, with the equal signs holding if and only if κ = σ = Ω = 0. And when κ = σ = Ω = 0, the three solutions of Eq. (70) are obviously τ₁ = τ₂ = τ₃ = 0, and the orthogonal matrix T is the identity matrix.
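The trigonometric solution (A2)-(A3) is straightforward to implement and cross-check against a generic polynomial root finder; the background values below are placeholders:

```python
import numpy as np

# Sketch of the trigonometric solution (A2)-(A3) for a cubic with three real
# roots (Delta < 0), cross-checked against numpy's generic root finder.
def cubic_roots_trig(a, b, c, d):
    A = b**2 - 3*a*c
    B = b*c - 9*a*d
    Theta = np.arccos((2*A*b - 3*B*a) / (2*A**1.5)) / 3.0
    t1 = -(b + 2*np.sqrt(A)*np.cos(Theta)) / (3*a)
    t2 = (-b + np.sqrt(A)*(np.cos(Theta) - np.sqrt(3)*np.sin(Theta))) / (3*a)
    t3 = (-b + np.sqrt(A)*(np.cos(Theta) + np.sqrt(3)*np.sin(Theta))) / (3*a)
    return np.sort([t1, t2, t3])

# Eq. (70): a = 1, b = Omega, c = -(2 kappa^2 + sigma^2), d = -sigma^2 Omega.
Omega, sigma, kappa = 0.3, -0.2, 0.15        # placeholder background values
coeffs = (1.0, Omega, -(2*kappa**2 + sigma**2), -sigma**2*Omega)
print(cubic_roots_trig(*coeffs))
print(np.sort(np.roots(coeffs).real))        # should match
```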
Appendix B: The orthogonal matrix T when c → 0

In this appendix, we discuss the approximate behavior of the orthogonal matrix T in Eq. (69) as c → 0 on a more general background (not only during inflation). Since σ ∝ c and κ ∝ c, we have

\frac{\kappa}{\Omega} \propto c \,, \qquad \frac{\sigma}{\Omega} \propto c \,, \qquad \frac{\kappa^2}{2\sigma\Omega} \propto c \,.   (B1)

When c is much smaller than all other background quantities such as √ε, θ̇ and H⁻¹, ignoring quadratic and higher-order terms in c, the solutions of Eq. (70) can be approximately expressed as

\tau_1 \approx \Omega \,, \qquad \tau_2 \approx \sigma \,, \qquad \tau_3 \approx \sigma \,.   (B2)
So the orthogonal matrix T in Eq. (69) can be approximately expressed as

T = \begin{pmatrix} 1 & \frac{\kappa}{\Omega} & -\frac{\kappa}{\Omega} \\ -\frac{\kappa}{\Omega} & 1 & \frac{\kappa^2}{2\sigma\Omega} \\ \frac{\kappa}{\Omega} & -\frac{\kappa^2}{2\sigma\Omega} & 1 \end{pmatrix} \;\xrightarrow{\;c\to 0\;}\; \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} .   (B3)

It can easily be seen from Eqs. (B1) and (B3) that when c → 0 the orthogonal matrix T does tend to the identity matrix. This is consistent with the fact that all variables in the action (68) decouple when c → 0.
Appendix C: Correlation functions generated by inflation

The purpose of this appendix is to show how to calculate the correlation functions generated by inflation. Consider a univariate system whose effective action during inflation is

S = \frac{1}{2}\int d\eta\, d^3k\, a^2\left[ |q'_{\vec k}|^2 - \left( k^2 - \frac{2ak}{\eta} - \frac{3b}{\eta^2} \right) |q_{\vec k}|^2 \right] ,   (C1)
where a and b are real parameters, with b of the same order of magnitude as the slow-roll parameter ε. Here q(η, x⃗) is the variable, and we have changed to Fourier space, q_{k⃗}(η). After quantization, the variable q_{k⃗}(η) can be expanded as

q_{\vec k}(\eta) = \frac{1}{a(\eta)}\left[ v_k(\eta)\,\hat a_{\vec k} + v_k^*(\eta)\,\hat a^\dagger_{\vec k} \right] ,   (C2)

where â†_{k⃗} and â_{k⃗} are the creation and annihilation operators, which satisfy the commutation relations

[\hat a_{\vec k}, \hat a^\dagger_{\vec k'}] = \delta^{(3)}(\vec k - \vec k') \,, \qquad [\hat a_{\vec k}, \hat a_{\vec k'}] = [\hat a^\dagger_{\vec k}, \hat a^\dagger_{\vec k'}] = 0 \,,   (C3)
and v_k(η) satisfies the equation

v''_k + \left( k^2 - \frac{2ak}{\eta} - \frac{\mu^2 - 1/4}{\eta^2} \right) v_k = 0 \,,   (C4)

where μ ≈ 3/2 + ε + b. Note that in Eq. (C4) we used the approximation a″/a ≈ [(3/2+ε)² − 1/4]/η², and we ignored higher-order terms in ε and b. Next we choose the Bunch-Davies vacuum at η → −∞, that is,

\lim_{\eta\to-\infty} v_k = \frac{1}{\sqrt{2k}}\, e^{-ik\eta} \,.   (C5)
Under this condition, the solution of Eq. (C4) is (for more detail, see [51])

v_k(\eta) = e^{-ik\eta}\,(-2k\eta)^{\mu}\,(-\eta)^{\frac{1}{2}}\, e^{-i\pi\left(\frac{1}{4}+\frac{\mu}{2}\right)}\, U(1/2+\mu-ia,\, 1+2\mu;\, 2ik\eta)\, e^{-\frac{a\pi}{2}} \,,   (C6)

where U(c₁, c₂; z) is the confluent hypergeometric function. |v_k| has the following asymptotic form when kη → 0⁻ (super-horizon scales):

|v_k| \approx 2^{\mu-1}\pi^{-\frac{1}{2}}\,\Gamma(\mu)\, k^{-\mu}(-\eta)^{\frac{1}{2}-\mu}\, e^{-\frac{a\pi}{2}} \approx 2^{-\frac{1}{2}}\, e^{-\frac{a\pi}{2}}\, aH k^{-\mu} \,,   (C7)
where Γ(z) is the Gamma function. In the last approximate equality in Eq. (C7), we used the approximations μ ≈ 3/2 and (−η)⁻¹ ≈ aH. Combining Eqs. (C2), (C3) and (C7), we can obtain the correlation function on super-horizon scales as

\langle 0| q^\dagger_{\vec k} q_{\vec k'} |0\rangle \approx \frac{H^2}{2}\, e^{-a\pi}\, k^{-(3+2\varepsilon+2b)}\,\delta^{(3)}(\vec k + \vec k') \,,   (C8)

where |0⟩ is the vacuum state, which satisfies â_{k⃗}|0⟩ = 0. For convenience, we can omit the subscript k⃗ and drop the delta function δ^{(3)}(k⃗ + k⃗′), so that the correlation function (C8) can be abbreviated as

\langle q^\dagger q \rangle \approx \frac{H^2}{2}\, e^{-a\pi}\, k^{-(3+2\varepsilon+2b)} \,.   (C9)
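The mode function (C6) and its super-horizon estimate (C7) can be compared numerically with an arbitrary-precision library. The sketch below uses mpmath's confluent hypergeometric U; all parameter values are placeholders, and the two printed numbers should agree up to small a-dependent corrections:

```python
import mpmath as mp

# Illustrative evaluation of Eq. (C6) versus the estimate (C7); the values of
# a, b and epsilon below are placeholders, not taken from the paper.
a_par, b_par, eps = mp.mpf('0.1'), mp.mpf('0.01'), mp.mpf('0.01')
mu = mp.mpf('1.5') + eps + b_par
k = mp.mpf('1')

def v_k(eta):
    return (mp.exp(-1j * k * eta) * (-2 * k * eta)**mu * (-eta)**mp.mpf('0.5')
            * mp.exp(-1j * mp.pi * (mp.mpf('0.25') + mu / 2))
            * mp.hyperu(mp.mpf('0.5') + mu - 1j * a_par, 1 + 2 * mu, 2j * k * eta)
            * mp.exp(-a_par * mp.pi / 2))

eta = mp.mpf('-1e-3')                    # k*eta -> 0^- (super-horizon limit)
estimate = (2**(mu - 1) * mp.pi**mp.mpf('-0.5') * mp.gamma(mu)
            * k**(-mu) * (-eta)**(mp.mpf('0.5') - mu) * mp.exp(-a_par * mp.pi / 2))
print(abs(v_k(eta)), estimate)
```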
Appendix D: Summation of nearly scale-invariant functions

Consider N nearly scale-invariant functions {f_i(k) = C_i k^{n_i}, i = 1, 2, ..., N}, where |n_i| ≪ 1. The sum of these functions should also be a nearly scale-invariant function, so it can be approximated as

f(k) = \sum_{i=1}^{N} f_i(k) = \sum_{i=1}^{N} C_i k^{n_i} \approx C k^{n} \,, \quad \text{with } |n| \ll 1 \,.   (D1)

Next we need to find the coefficient C and the exponent n in Eq. (D1). Since n_i ≈ 0 and n ≈ 0, we can approximately set n_i = n = 0 in Eq. (D1), so that Eq. (D1) becomes

C \approx \sum_{i=1}^{N} C_i \,.   (D2)

Next, differentiating Eq. (D1) with respect to k and then setting n_i = n = 0 in the exponents of k, the approximate expression for n is obtained as

n \approx \frac{1}{C}\sum_{i=1}^{N} C_i n_i \,.   (D3)
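Eqs. (D1)-(D3) are easy to verify numerically: sum a few power laws with small exponents, fit the effective index of the sum, and compare it with the weighted average (D3). The coefficients and exponents below are arbitrary examples:

```python
import numpy as np

# Numerical check of Eqs. (D1)-(D3) with placeholder C_i and n_i.
C = np.array([1.0, 0.5, 0.2])            # coefficients C_i
n = np.array([-0.03, 0.01, -0.02])       # small exponents n_i

k = np.logspace(-2, 2, 200)
f = sum(Ci * k**ni for Ci, ni in zip(C, n))

slope = np.polyfit(np.log(k), np.log(f), 1)[0]   # effective index of f(k)
print(slope, (C * n).sum() / C.sum())            # Eq. (D3) estimate
```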
[1] B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. Lett. 116, no.6, 061102 (2016) doi:10.1103/PhysRevLett.116.061102 [arXiv:1602.03837 [gr-qc]].
[2] B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. Lett. 119, no.16, 161101 (2017) doi:10.1103/PhysRevLett.119.161101 [arXiv:1710.05832 [gr-qc]].
[3] H. Li, S. Y. Li, Y. Liu, Y. P. Li, Y. Cai, M. Li, G. B. Zhao, C. Z. Liu, Z. W. Li and H. Xu, et al., "Probing Primordial Gravitational Waves: Ali CMB Polarization Telescope," Natl. Sci. Rev. 6, no.1, 145-154 (2019) doi:10.1093/nsr/nwy019 [arXiv:1710.03047 [astro-ph.CO]].
[4] K. Abazajian et al. [CMB-S4], Astrophys. J. 926, no.1, 54 (2022) doi:10.3847/1538-4357/ac1596 [arXiv:2008.12619 [astro-ph.CO]].
[5] R. Jackiw and S. Y. Pi, Phys. Rev. D 68, 104012 (2003) doi:10.1103/PhysRevD.68.104012 [arXiv:gr-qc/0308071 [gr-qc]].
[6] S. Alexander and N. Yunes, Phys. Rept. 480, 1-55 (2009) doi:10.1016/j.physrep.2009.07.002 [arXiv:0907.2562 [hep-th]].
[7] S. Dyda, E. E. Flanagan and M. Kamionkowski, Phys. Rev. D 86, 124031 (2012) doi:10.1103/PhysRevD.86.124031 [arXiv:1208.4871 [gr-qc]].
[8] M. Crisostomi, K. Noui, C. Charmousis and D. Langlois, Phys. Rev. D 97, no.4, 044034 (2018) doi:10.1103/PhysRevD.97.044034 [arXiv:1710.04531 [hep-th]].
[9] X. Gao and X. Y. Hong, Phys. Rev. D 101, no.6, 064057 (2020) doi:10.1103/PhysRevD.101.064057 [arXiv:1906.07131 [gr-qc]].
[10] W. Zhao, T. Zhu, J. Qiao and A. Wang, "Waveform of gravitational waves in the general parity-violating gravities," Phys. Rev. D 101, no.2, 024002 (2020) doi:10.1103/PhysRevD.101.024002 [arXiv:1909.10887 [gr-qc]].
[11] N. Bartolo, L. Caloni, G. Orlando and A. Ricciardone, JCAP 03, 073 (2021) doi:10.1088/1475-7516/2021/03/073 [arXiv:2008.01715 [astro-ph.CO]].
[12] M. Li, H. Rao and D. Zhao, JCAP 11, 023 (2020) doi:10.1088/1475-7516/2020/11/023 [arXiv:2007.08038 [gr-qc]].
[13] M. Li, H. Rao and Y. Tong, Phys. Rev. D 104, no.8, 084077 (2021) doi:10.1103/PhysRevD.104.084077 [arXiv:2104.05917 [gr-qc]].
[14] R. Aldrovandi and J. G. Pereira, Teleparallel Gravity, Vol. 173, Springer, Dordrecht (2013).
[15] S. Bahamonde, K. F. Dialektopoulos, C. Escamilla-Rivera, G. Farrugia, V. Gakis, M. Hendry, M. Hohmann, J. L. Said, J. Mifsud and E. Di Valentino, [arXiv:2106.13793 [gr-qc]].
[16] J. W. Maluf, Annalen Phys. 525, 339-357 (2013) doi:10.1002/andp.201200272 [arXiv:1303.3897 [gr-qc]].
[17] H. T. Nieh and M. L. Yan, J. Math. Phys. 23, 373 (1982) doi:10.1063/1.525379.
[18] H. Rao, Phys. Rev. D 104, no.12, 124084 (2021) doi:10.1103/PhysRevD.104.124084 [arXiv:2107.08597 [gr-qc]].
[19] J. Qiao, T. Zhu, G. Li and W. Zhao, JCAP 04, no.04, 054 (2022) doi:10.1088/1475-7516/2022/04/054 [arXiv:2110.09033 [gr-qc]].
[20] Q. Wu, T. Zhu, R. Niu, W. Zhao and A. Wang, Phys. Rev. D 105, no.2, 024035 (2022) doi:10.1103/PhysRevD.105.024035 [arXiv:2110.13870 [gr-qc]].
[21] R. G. Cai, C. Fu and W. W. Yu, Phys. Rev. D 105, no.10, 103520 (2022) doi:10.1103/PhysRevD.105.103520 [arXiv:2112.04794 [astro-ph.CO]].
[22] M. Li and D. Zhao, Phys. Lett. B 827 (2022), 136968 doi:10.1016/j.physletb.2022.136968 [arXiv:2108.01337 [gr-qc]].
[23] C. Gong, T. Zhu, R. Niu, Q. Wu, J. L. Cui, X. Zhang, W. Zhao and A. Wang, Phys. Rev. D 105 (2022) no.4, 044034 doi:10.1103/PhysRevD.105.044034 [arXiv:2112.06446 [gr-qc]].
[24] M. Hohmann and C. Pfeifer, Phys. Lett. B 834 (2022), 137437 doi:10.1016/j.physletb.2022.137437 [arXiv:2203.01856 [gr-qc]].
[25] X. Tong and Z. Z. Xianyu, JHEP 10 (2022), 194 doi:10.1007/JHEP10(2022)194 [arXiv:2203.06349 [hep-ph]].
[26] M. Li, Y. Tong and D. Zhao, Phys. Rev. D 105 (2022) no.10, 104002 doi:10.1103/PhysRevD.105.104002 [arXiv:2203.06912 [gr-qc]].
[27] F. Zhang, J. X. Feng and X. Gao, JCAP 10 (2022), 054 doi:10.1088/1475-7516/2022/10/054 [arXiv:2205.12045 [gr-qc]].
[28] T. Zhu, W. Zhao and A. Wang, [arXiv:2210.05259 [gr-qc]].
[29] T. Zhu, W. Zhao and A. Wang, [arXiv:2211.04711 [gr-qc]].
[30] A. A. A. Filho, J. R. Nascimento, A. Y. Petrov and P. J. Porfírio, [arXiv:2211.11821 [gr-qc]].
[31] J. Qiao, Z. Li, T. Zhu, R. Ji, G. Li and W. Zhao, [arXiv:2211.16825 [gr-qc]].
[32] Y. Cai, [arXiv:2212.10893 [gr-qc]].
[33] Z. Chen, Y. Yu and X. Gao, [arXiv:2212.14362 [gr-qc]].
[34] M. Hohmann, L. Järv, M. Krššák and C. Pfeifer, Phys. Rev. D 100, no.8, 084002 (2019) doi:10.1103/PhysRevD.100.084002 [arXiv:1901.05472 [gr-qc]].
[35] M. Hohmann, Int. J. Geom. Meth. Mod. Phys. 18, no.supp01, 2140005 (2021) doi:10.1142/S0219887821400053 [arXiv:2008.12186 [gr-qc]].
[36] A. A. Coley, R. J. v. Hoogen and D. D. McNutt, [arXiv:2205.10719 [gr-qc]].
[37] R. Myrzakulov, Eur. Phys. J. C 71, 1752 (2011) doi:10.1140/epjc/s10052-011-1752-9 [arXiv:1006.1120 [gr-qc]].
[38] Y. F. Cai, S. H. Chen, J. B. Dent, S. Dutta and E. N. Saridakis, Class. Quant. Grav. 28, 215011 (2011) doi:10.1088/0264-9381/28/21/215011 [arXiv:1104.4349 [astro-ph.CO]].
[39] Y. F. Cai, S. Capozziello, M. De Laurentis and E. N. Saridakis, Rept. Prog. Phys. 79, no.10, 106901 (2016) doi:10.1088/0034-4885/79/10/106901 [arXiv:1511.07586 [gr-qc]].
[40] C. Bejarano, R. Ferraro and M. J. Guzmán, Eur. Phys. J. C 77, no.12, 825 (2017) doi:10.1140/epjc/s10052-017-5394-4 [arXiv:1707.06637 [gr-qc]].
[41] R. P. Woodard, Lect. Notes Phys. 720, 403-433 (2007) doi:10.1007/978-3-540-71013-4_14 [arXiv:astro-ph/0601672 [astro-ph]].
[42] M. Hohmann and C. Pfeifer, Eur. Phys. J. C 81, no.4, 376 (2021) doi:10.1140/epjc/s10052-021-09165-x [arXiv:2012.14423 [gr-qc]].
[43] M. Li, Z. Li and H. Rao, Phys. Lett. B 834, 137395 (2022) doi:10.1016/j.physletb.2022.137395 [arXiv:2201.02357 [gr-qc]].
[44] K. Izumi and Y. C. Ong, JCAP 06, 029 (2013) doi:10.1088/1475-7516/2013/06/029 [arXiv:1212.5774 [gr-qc]].
[45] Y. C. Ong, K. Izumi, J. M. Nester and P. Chen, Phys. Rev. D 88, 024019 (2013) doi:10.1103/PhysRevD.88.024019 [arXiv:1303.0993 [gr-qc]].
[46] A. De Felice, A. E. Gumrukcuoglu and S. Mukohyama, Phys. Rev. Lett. 109, 171101 (2012) doi:10.1103/PhysRevLett.109.171101 [arXiv:1206.2080 [hep-th]].
[47] A. Delhom, A. Jiménez-Cano and F. J. Maldonado Torralba, [arXiv:2207.13431 [gr-qc]].
[48] D. Baumann, doi:10.1142/9789814327183_0010 [arXiv:0907.5424 [hep-th]].
[49] Y. Wang, Commun. Theor. Phys. 62, 109-166 (2014) doi:10.1088/0253-6102/62/1/19 [arXiv:1303.1523 [hep-th]].
[50] N. Aghanim et al. [Planck], Astron. Astrophys. 641 (2020), A1 doi:10.1051/0004-6361/201833880 [arXiv:1807.06205 [astro-ph.CO]].
[51] M. Satoh, JCAP 11, 024 (2010) doi:10.1088/1475-7516/2010/11/024 [arXiv:1008.2724 [astro-ph.CO]].
5tE1T4oBgHgl3EQfBAK1/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
6tFAT4oBgHgl3EQfnx0W/content/tmp_files/2301.08630v1.pdf.txt ADDED
@@ -0,0 +1,1508 @@
Evaluating approaches for on-the-fly machine learning interatomic potential for activated mechanisms sampling with the activation-relaxation technique nouveau

Eugène Sanscartier,1 Félix Saint-Denis,1 Karl-Étienne Bolduc,1 and Normand Mousseau1
1Département de physique and Regroupement québécois sur les matériaux de pointe, Université de Montréal, Case Postale 6128, Succursale Centre-ville, Montréal, Québec H3C 3J7, Canada
(Dated: January 23, 2023)

In the last few years, much effort has gone into developing universal machine-learning potentials able to describe interactions for a wide range of structures and phases. Yet, as attention turns to more complex materials, including alloys and disordered and heterogeneous systems, the challenge of providing a reliable description for all possible environments becomes ever more costly. In this work, we evaluate the benefits of using specific versus general potentials for the study of activated mechanisms in solid-state materials. More specifically, we test three machine-learning fitting approaches using the moment-tensor potential to reproduce a reference potential when exploring the energy landscape around a vacancy in the Stillinger-Weber silicon crystal and the silicon-germanium zincblende structure using the activation-relaxation technique nouveau (ARTn). We find that a targeted on-the-fly approach, specific to and integrated with ARTn, generates the highest precision on the energetics and geometry of activated barriers, while remaining cost-effective. This approach expands the type of problems that can be addressed with high-accuracy ML potentials.

I. INTRODUCTION
As computational materials scientists turn their attention to ever more complex systems, they are faced with two major challenges: (i) how to describe their physics correctly and (ii) how to reach the appropriate size and time scale to capture the properties of interest. The first challenge is generally solved by turning to ab initio methods,1 which allow the solution of Heisenberg's equation with reasonably controlled approximations. These approaches, however, suffer from N^4 scaling, which limits their application to small system sizes and short time scales. The second challenge is met by a variety of methods that cover different scales. Molecular dynamics2, for example, which directly solves Newton's equations, accesses typical time scales between picoseconds and microseconds, at the very best. Other approaches, such as lattice3,4 and off-lattice kinetic Monte Carlo5,6, by focusing on physically relevant mechanisms, can extend this time scale to seconds and more, as long as the diffusion takes place through activated processes. Even though these methods are efficient, each trajectory can require hundreds of thousands to millions of force evaluations, which becomes too costly with ab initio approaches, forcing modellers to use empirical potentials in spite of their incapacity to describe complex environments correctly.

Building on ab initio energies and forces, machine-learned potentials7-10 open the door to lifting some of these difficulties, by offering much more reliable physics at a small fraction of the cost of ab initio evaluations. Since their introduction, ML potentials have been largely coupled with MD, focusing on the search for universal potentials able to describe a full range of structures and phases for a given material11-13. As we turn to more complex systems such as alloys and disordered and heterogeneous systems, it becomes more and more difficult to generate such universal potentials, since the number of possible environments grows rapidly with this complexity. In this context, the development of specific potentials, with on-the-fly learning that makes it possible to adapt to new environments, becomes a strategy worth exploring.

In this work, we focus on the construction of machine-learned potentials adapted to the sampling of energy landscapes dominated by activated mechanisms, i.e., solid-state systems with local activated diffusion and evolution. A correct computational sampling, using methods such as the activation-relaxation technique (ART)14 and its revised version (ART nouveau or ARTn)15,16, requires a precise description of local minima and of the landscape surrounding the first-order saddle points that characterize diffusion according to transition-state theory (TST)17. These barriers can be high, reaching many electron-volts, and involve strained configurations that are visited only very rarely with standard molecular dynamics.

More specifically, we compare three machine-learning procedures in which we change the context where learning on-the-fly occurs to train a Moment Tensor Potential (MTP)10,18 that describes the diffusion of a vacancy in Stillinger-Weber silicon19 and silicon-germanium20 as sampled with ARTn. The first one uses a pure MD learning procedure, fitted at various temperatures, in a procedure that echoes the work of Novoselov et al.21; a second one adds an on-the-fly adjustment during an ARTn run; and the third one focuses on purely OTF-ARTn potential adjustment.

Results underline the efficiency gain in developing targeted ML potentials for specific applications. Comparing the cost of fitting Si with SiGe, the work also shows the rapid increase in computational complexity associated with moving from elemental to alloy systems, which emphasizes the usefulness of a specific approach such as the one applied here to activated processes.
II. METHODOLOGY

A. ML Potential

The Moment Tensor Potential (MTP)10,18 is a linear model of functions Bα(ri) built from contractions of moment-tensor descriptors defined by the relative positions ri of the local neighborhood of atom i within a sphere of influence of radius rc, respecting a set of invariances. This model has been shown to be fast while giving an accuracy on the order of ∼meV/atom and requiring only a few hundred to a few thousand reference-potential calls22 on-the-fly. MTP has been used on a wide variety of problems, including on-the-fly MD simulation18,21,23, search and minimization of new alloys24,25 and diffusion processes21, in systems counting one or multiple species.

MTP approximates the energy of an atomic configuration as a sum of local contributions. A local contribution is obtained through a sum over the included basis {Bα(ri)}, as a linear combination of the Bα(ri) with coefficients ξα:

V(r_i) = \sum_{\alpha=1}^{m} \xi_\alpha B_\alpha(r_i) .   (1)
The “level” of a potential gives the number of different possible tensor descriptors Mµ,ν(ri). The {Bα(ri)} functions of Eq. 1 are constructed by tensorial contractions of different Mµ,ν(ri), and the number of distinct tensorial contractions sets m in Eq. 1. More information on MTP is available in Ref. 18.

The total energy of an N-atom configuration (R) is then given by the sum of N local contributions,

E(R) = \sum_{i=1}^{N} V(r_i) = \sum_{i=1}^{N}\sum_{\alpha=1}^{m} \xi_\alpha B_\alpha(r_i) ,   (2)

and the forces are obtained by taking the gradient of this quantity:
F(R) = -\nabla \sum_{i=1}^{N}\sum_{\alpha=1}^{m} \xi_\alpha B_\alpha(r_i) .   (3)

The parameters ξα are obtained by minimizing the loss function

\sum_{R\in A}\left[ w_e \left\| E(R) - \hat E(R) \right\|_2^2 + w_f \sum_{i}^{N} \left\| f_i(R) - \hat f_i(R) \right\|_2^2 \right] \to \min_{\xi} \,.   (4)
Here A is the training set, made of configurations with known energies and forces. The goal is to minimize the difference between the reference values E(R), fi(R) and the model predictions Ê(R), f̂i(R), respectively, for all elements of A. The weights on the energy and force contributions (we and wf) are set to one.
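Because the model of Eqs. (1)-(3) is linear in ξ and w_e = w_f = 1, the minimization of Eq. (4) reduces to an ordinary least-squares problem. The sketch below illustrates this with random placeholder values standing in for the moment-tensor basis (it is not the actual MLIP implementation):

```python
import numpy as np

# Minimal sketch of the linear fit behind Eq. (4). The basis values
# B_alpha(r_i) are random placeholders, not real moment-tensor descriptors.
rng = np.random.default_rng(0)
n_cfg, n_atoms, m = 50, 8, 12

B = rng.normal(size=(n_cfg, n_atoms, m))   # B_alpha(r_i) per configuration
xi_true = rng.normal(size=m)               # hidden "reference" coefficients

# Energies: E(R) = sum_i sum_alpha xi_alpha B_alpha(r_i) -> design matrix
design_E = B.sum(axis=1)                   # shape (n_cfg, m)
E_ref = design_E @ xi_true

xi_fit, *_ = np.linalg.lstsq(design_E, E_ref, rcond=None)
print(np.allclose(xi_fit, xi_true))        # True for this noiseless toy
```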
B. Learning On-The-Fly Tools
On-the-fly atomic machine-learning potentials (OTF) involve the repeated training of the model potential as new atomic environments are generated through various procedures.

Following the work of Shapeev and collaborators18, the reliability of the potential in describing a given configuration is evaluated using the D-optimality criterion, which grades the extent to which a configuration extrapolates. This grade is used along with a selection algorithm (MaxVol) to assess whether the new configuration should be added to the training set or replace a configuration already in it. While a detailed description can be found in Ref. 23, we provide here a brief summary of the retained approach.

The selection and extrapolation-grade algorithm can be applied using either a local-energy or a global-energy descriptor.
The local-energy descriptor is presented as a rectangular matrix G_{m×N} formed by the basis elements {Bα(ri)} associated with the neighborhoods ri of all N atoms:

G = \begin{pmatrix} B_1(r_1) & \cdots & B_m(r_1) \\ \vdots & \ddots & \vdots \\ B_1(r_N) & \cdots & B_m(r_N) \end{pmatrix}^{T} .

For a given configuration, the global-energy description reduces this information to a vector g,

g = \begin{pmatrix} b_1(R) & \cdots & b_m(R) \end{pmatrix} ,

where each term bα(R) is a sum over all neighborhoods of a specific basis element Bα(ri):

b_\alpha(R) = \sum_{i=1}^{N} B_\alpha(r_i) .

For the global-energy descriptor, evaluating the overlap of a new configuration with the training set A is done by solving for the c_j in

A \begin{pmatrix} c_1 & \cdots & c_m \end{pmatrix} = g \,,   (5)

The coefficients {c_j} can be understood as expressing g through A. The extrapolation grade, γ, is then defined as the largest component of {c_j},

\gamma(R) = \max |c_j| \,.   (6)

The same approach is used for the local-energy description, applying Eq. 5 to the rows of the matrix G rather than the vector g and solving for a matrix of coefficients c_{j,k}; Eq. 6 then becomes γ(R) = max |c_{j,k}|.
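The extrapolation grade of Eqs. (5)-(6) amounts to solving a small linear system and taking the largest coefficient. The sketch below uses random placeholder descriptors in place of actual moment-tensor basis values; a configuration well inside the span of the training set yields a small γ, a far-away one a large γ:

```python
import numpy as np

# Sketch of the extrapolation grade, Eqs. (5)-(6), with placeholder
# descriptors. A row of A is the global descriptor g of one training config.
rng = np.random.default_rng(1)
m = 6
A = rng.normal(size=(m, m))              # active set: m selected configurations

def grade(g):
    # Express g through the active set A (Eq. 5) and take max |c_j| (Eq. 6).
    c = np.linalg.solve(A.T, g)          # solves c @ A = g
    return np.max(np.abs(c))

g_interp = 0.5 * A[0] + 0.5 * A[1]       # inside the span of the training set
g_extrap = 10.0 * rng.normal(size=m)     # far from the training set
print(grade(g_interp), grade(g_extrap))  # the second is typically much larger
```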
For γ(R) below a certain threshold γ0, the new configuration is considered to overlap sufficiently with the training set to allow the model to interpolate with confidence. For γ0 < γ(R) < γmax, the model cannot be applied with confidence, but can be adapted by adding this configuration to the training set. When γ(R) > γmax, the configuration is too far from the training set and is rejected, as the model cannot be adapted with confidence. In this work, we set γ0 = 1.1 and γmax = 2.2, unless specified otherwise.
C.
|
260 |
+
On-The-Fly Learning Cycle Workflow
|
261 |
+
Our workflow is similar to that of Ref. 18, with the main differences discussed in Section II F. We follow the same general on-the-fly machine-learning workflow for all sampling approaches tested here.
We split each simulation into one or more sequences of atomic configurations generated using either MD or ARTn. Each run unrolls as follows (see Fig. 1):

1. Launch a sequence during which configurations are generated according to a sampling algorithm (MD or ARTn). At each iteration step, the extrapolation grade γ is evaluated:
(a) if 0 < γ < γmax, the energy and forces of the configuration are evaluated with MTP;
(b) if γ0 < γ < γmax, the configuration is also set aside for an update of the MTP parameters;
(c) else, if γ > γmax, the energy and forces of the configuration are not evaluated with MTP and the configuration is not kept for the update; the sequence is stopped and we go directly to the update step (step 3).
2. Move on to the next iteration in the sequence (step 1).
3. The model is updated if at least one configuration has been set aside for an update of the MTP, either (i) at the end of a sequence or (ii) at any moment during the sequence when γ > γmax.
4. If there is an update, restart a new sequence (go to step 1); otherwise stop, since no configuration with γ > γ0 has been set aside during the predefined maximum length of the sequence.
The moment tensor potential model update is defined as follows (see Fig. 1, right-hand side):

1. A selection is made from the set-aside configurations (with γ > γ0) using MaxVol23.
2. Each selected configuration is evaluated with the reference model.
3. The training set is updated with the newly evaluated configurations.
4. The moment tensor potential is fitted on the new training set according to Eq. 4.

More details on this procedure can be found in Ref. 23.
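The loop structure is summarized in the sketch below. This is a schematic rendering of the workflow above, not the actual implementation: sampler_step, maxvol_select, reference_eval and refit_mtp are hypothetical placeholders for the sampling engine (MD or ARTn), the MaxVol selection, the reference potential and the fit of Eq. 4.

def on_the_fly_run(config, mtp, sampler_step, maxvol_select,
                   reference_eval, refit_mtp, it_max,
                   gamma_0=1.1, gamma_max=2.2):
    while True:
        set_aside, hit_gamma_max = [], False
        for _ in range(it_max):
            gamma = mtp.extrapolation_grade(config)
            if gamma > gamma_max:
                hit_gamma_max = True      # step 1c: configuration discarded,
                break                     # go directly to the update step
            energy, forces = mtp.evaluate(config)           # step 1a
            if gamma > gamma_0:
                set_aside.append(config)  # step 1b: kept for the update
            config = sampler_step(config, energy, forces)   # MD or ARTn move
        if set_aside:                     # update (right-hand box of Fig. 1)
            labelled = [reference_eval(c) for c in maxvol_select(set_aside)]
            mtp = refit_mtp(mtp, labelled)
        elif not hit_gamma_max:
            return mtp                    # full sequence without extrapolation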
Figure 1. On-the-fly machine learning workflow used with MD and ARTn (on the left). A potential update can take place at two points: when the sequence ends or when γ > γmax. The updating procedures are given in the box on the right.
D. MD and ARTn

Two sampling approaches are used to generate a sequence of configurations: (1) molecular dynamics (MD), as implemented within LAMMPS26, and (2) the activation-relaxation technique nouveau (ARTn) algorithm developed by Mousseau and collaborators14,15,27. Since MD is well known, we only give below a brief summary of ARTn.
ARTn is designed to explore the potential energy landscape of atomic systems through the identification of local transition states connecting nearby local minima. Its workflow can be summarized in three main steps (for a recent in-depth discussion of the ARTn version used in this work, see Ref. 27):
1. Leaving the harmonic well: starting from an energy minimum, an atom and its neighbours are moved iteratively in a direction selected at random until a direction of negative curvature on the potential energy surface emerges, i.e., a direction d(λmin) along which λmin, the lowest eigenvalue of the Hessian matrix, is smaller than zero; this indicates the presence of a nearby first-order saddle point;
2. Converging to a first-order saddle point: the system is then pushed in the direction of negative curvature d(λmin) while the force is minimized in the perpendicular plane, until the total force F passes below a threshold F0, which indicates that the saddle point has been reached;
3. Relaxing into a new minimum: the system is then pushed over the saddle point and relaxed into a connected new minimum.
At each step, λmin and d(λmin) are found using an iterative Lanczos method16,28,29. Perpendicular relaxation during activation and global minimization are done using the Fast Inertial Relaxation Engine (FIRE) algorithm30. Generated events are accepted or rejected according to the Metropolis algorithm, where the acceptance probability p is given by

p = \min\left(1, e^{-\beta \Delta E}\right) \quad (7)

with ∆E = Esaddle − Eminimum, the energy difference between the saddle and the connected minimum, and β = 1/kBT, where kB is the Boltzmann constant and T is a fictitious temperature, since thermal deformations are not taken into account. Exploring the potential energy landscape then consists of generating a number of such events.
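As an illustration, the accept-reject step of Eq. 7 takes only a few lines; in the sketch below the fictitious temperature is passed directly as kBT in eV, as is done for SiGe in Section II F:

import math
import random

def accept_event(e_saddle, e_minimum, kT):
    # Metropolis acceptance for an ARTn event (Eq. 7).
    delta_e = e_saddle - e_minimum
    p = min(1.0, math.exp(-delta_e / kT))
    return random.random() < p

# Example: an event with a 0.6 eV barrier at the fictitious
# temperature of 0.5 eV used for SiGe is accepted with p ~ 0.30.
accepted = accept_event(e_saddle=1.6, e_minimum=1.0, kT=0.5)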
E. Systems studied

The fitting approaches are tested on two physical systems: (i) a Si diamond structure with Stillinger-Weber as a reference potential19; and (ii) a SiGe zincblende structure using the Stillinger-Weber potential with parameters from Ref. 20. Both models count 215 atoms and a vacancy.
The Si system is fitted with an ML potential set at level 16, with 92 moment tensor functions (B(R), Eq. 1). For SiGe, a potential at this level (16) generates errors on the barrier of the order of 0.5 eV, which indicates that a richer set of parameters is needed to describe the chemical diversity; a level 20 potential is therefore chosen for this system, with 288 moment tensor functions. The relation between the number of moment tensor functions for Si and the energy error is presented in Supplemental Fig. 1.
F. Fitting approaches

To evaluate the reliability of the various on-the-fly approaches in reproducing the reference potential on configurations of interest for complex materials, the training set is limited to structures visited during MD or ARTn simulations within the conditions described below. No additional information regarding alternative crystalline structures, defects, surfaces, pressure, etc. is provided. For each of these two systems, we compare the following approaches:
1. ML-MD: the MTP potential is trained OTF on MD simulations. The potential is then evaluated, without further update, in ARTn simulations.
2. OTF-MDART: starting from the ML-MD-generated potential, the MTP is re-trained following the OTF procedure during ARTn simulations.
3. OTF-ART: the potential is trained solely during ARTn runs with OTF.

The ML-MD approach is in line with Ref. 21, where a potential is trained OTF during MD. However, while the potential is trained with MD, its accuracy is evaluated here during the ARTn activated-process search.
1. ML-MD: simulation details

Nine sets of MTP ML-MD potentials are developed and trained independently during NVT MD simulations. Each set is trained at one specific simulation temperature, ranging from 300 K to 2700 K in steps of 300 K, starting from the same 215-atom crystalline structure with a vacancy. Each set consists of ten independently constructed MTP potentials for statistical purposes.
Training takes place over a series of sequences, each run for a maximum of 100 ps with 1 fs steps and an average of 75 ps per cycle. MTP potentials require about 34 ± 14 and 93 ± 43 learning cycles for Si and SiGe, respectively, to be converged: the MTP potential is considered to have learned the potential when no configuration generated during a 100 ps sequence is found in the extrapolating zone of the potential (with γ > γmax).
As long as this is not the case, the sequence is restarted from the same initial structure with different initial velocities. To facilitate convergence, ML-MD potentials are fitted over three sets of progressively more restrictive extrapolation reliability parameters γ0. Moreover, because MD leads to global deformation, the extrapolation grade is computed using global descriptors (see Tab. I). The final potential is then evaluated, in a fixed form, in ARTn simulations.
Table I. Extrapolation and selection hyper-parameter values used for the three on-the-fly approaches used in this work.

approach     γ0           γmax       grade-mode
ML-MD        5.5/3.3/1.1  60/10/2.2  global
OTF-MDART    1.1          2.2        local
OTF-ART      1.1          2.2        local
2. OTF ARTn simulation details

Each ARTn simulation is launched for 1500 events, with 24 parallel independent searches, for a total of 36 000 generated events. For ARTn, a sequence is either a search for a saddle point (successful or failed) or a minimization from the saddle to a minimum.
At each point, 24 sequences are generated in parallel, and the selection of configurations for an update of the potential is made on the combined set of configurations, to generate a single training set. Sequences are restarted from the last accepted position or, in the case of the vacancy in Si, the ground state. When an activation step generates a configuration with γ(R) > γmax, it is relaunched with the same initial deformation. As with MD, ten independent ARTn runs are launched for statistics.
In the bulk, diffusion of the vacancy in Si takes place through a symmetric mechanism bringing the vacancy from one state to an identical one, so all ARTn event searches are effectively started from the same state. Starting from a zincblende structure, SiGe evolves according to an accept-reject Metropolis with a fictitious temperature of 0.5 eV31. Since the configurations explored by ARTn are locally deformed, the extrapolation grades of the ARTn-generated configurations used in the OTF-MDART and OTF-ART approaches are evaluated with the local descriptors.
G. Analysis

Following the standard approach, the error is computed on the energy and force differences between the MLP and reference potentials evaluated on the same structures. Here, however, this error is only measured on configurations generated during the ARTn procedure. For the energy:
\Delta E = |E_{MLP}(X_{MLP}) - E_{ref}(X_{MLP})|, \quad (8)

For the forces:

\Delta F = \frac{1}{N} \sum_{i=1}^{N} \left\| f^{(i)}_{MLP}(X_{MLP}) - f^{(i)}_{ref}(X_{MLP}) \right\|_2, \quad (9)
where the positions XMLP are obtained from a simulation run with the machine-learned potential and the energy on this exact configuration is computed with both the reference and the machine-learned potentials. The same is done for the error on forces.
Since this work is focused on the correct description of first-order transition states, we also compute the minimum and saddle position and energy convergence errors (∆Xconv, ∆Econv) as

\Delta X_{conv} = \sqrt{\sum_{i=1}^{N} \left\| x^{(i)}_{MLP} - x^{(i)}_{ref} \right\|^2}, \quad (10)

\Delta E_{conv} = |E_{MLP}(X_{MLP}) - E_{ref}(X_{ref})|, \quad (11)
where XMLP and Xref are the positions corresponding to the minimum or saddle point as defined by the MLP and the reference potentials, respectively, with EMLP(XMLP) and Eref(Xref) the corresponding energies; by definition, forces are zero at these points defined by the respective potentials. While XMLP and EMLP(XMLP) are obtained on the ARTn trajectories, Xref and Eref(Xref) are obtained after reconverging the minima or the saddle point with the reference potential, starting from XMLP and following the ARTn procedure.
From an energy barrier δE(X), the energy barrier error ∆δEbarrier is given by

\Delta \delta E_{barrier} = |\delta E_{MLP}(X_{MLP}) - \delta E_{ref}(X_{ref})| \quad (12)

If no trend is observed between the different temperatures at which the potentials are trained, we calculate their average and deviation in order to effectively compare them with the other approaches.
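To make these definitions concrete, the per-configuration measures of Eqs. 8-10 can be written directly; the sketch below assumes plain NumPy arrays, with forces and positions of shape (N, 3) and energies as scalars:

import numpy as np

def energy_error(e_mlp, e_ref):
    # Eq. 8: both energies are evaluated at the same MLP positions.
    return abs(e_mlp - e_ref)

def force_error(f_mlp, f_ref):
    # Eq. 9: mean Euclidean norm of the per-atom force difference.
    return np.mean(np.linalg.norm(f_mlp - f_ref, axis=1))

def position_convergence_error(x_mlp, x_ref):
    # Eq. 10: global distance between the stationary points found
    # with the MLP and with the reference potential.
    return np.sqrt(np.sum((x_mlp - x_ref) ** 2))

Eqs. 11 and 12 are plain absolute differences of the corresponding energies and barriers.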
III. RESULTS
Figure 2. Number of calls to the reference potential for each of the machine-learned potentials developed for Si, as a function of the temperature used during MD training. Since configurations are relaxed to zero K in ARTn simulations, there is no associated temperature for this procedure.
In this section, we first examine results for a vacancy in c-Si to establish the methods, then consider the same approaches on the more complex SiGe alloy.

A. ML-MD
The ML-MD approach serves as a benchmark to assess the efficiency of the various approaches in sampling energy barriers and diffusion mechanisms. Here, ten independent ML potentials are generated through on-the-fly MD simulations at 9 different target temperatures, ranging from 300 to 2700 K in steps of 300 K, and require between 253 ± 60 evaluations of the reference potential, at 300 K, and 369 ± 85, at 2700 K, to complete the learning cycles (see Fig. 2).
For the purpose of this work, the quality of the ML-MD potential is evaluated on configurations generated with ARTn, as local activated events associated with a vacancy in a crystalline environment are generated. To avoid non-physical results, when an ARTn-generated configuration shows a γ > 200, the configuration is rejected, the event search is stopped and a new event search is launched from the same initial minimum.
Figure 3. Average energy (top) and mean absolute force (bottom) errors per atom for Si, measured over all configurations generated along pathways in ARTn for the three approaches. Temperature refers to the one used during MD training.
Fig. 3 shows the standard validation error on energy and forces calculated over all configurations generated along pathways for the 36 000 successful events and 10 080 failed saddle searches (a success rate of 78 %). The error on energy increases almost exponentially with the sampling temperature, ranging from 0.44 ± 0.36 meV/atom at 300 K to 5.1 ± 1.7 meV/atom at 2700 K. The error on forces is essentially constant at 0.0123 eV/Å, on average, between 300 and 1800 K, and increases rapidly at high temperature, to reach 0.0256 eV/Å at 2700 K.
Since the focus of this work is on transition states, Fig. 4 displays the error on the energy barriers as a function of the MD-fitting temperature, computed with Eq. 12 and averaged over all generated barriers. This error is relatively uncorrelated with the MD simulation temperature, with an average of 0.056 ± 0.022 eV, a minimum error of 0.024 ± 0.01 eV at 2400 K and a maximum of 0.08 ± 0.03 eV at 1200 K.
Figure 4. Average energy barrier error for Si, as defined by Eq. 12, for all events generated in ARTn for the three approaches. Temperature refers to the one used during MD training.
Figure 5. Mean position error on all saddle points for Si. Temperature refers to the one used during MD training.
This error is lower than that for a general point on the energy landscape (Fig. 3), in part because it is computed as a difference between the saddle and the initial minimum.
Errors on the position of the saddle point, associated with the capacity to reproduce its geometry correctly, are given in Fig. 5: the average distance between saddle points converged with the reference and the ML potentials decreases from 0.16 ± 0.05 Å at 300 K to a minimum of 0.09 ± 0.02 Å between 1500 and 2100 K, going up at the two highest temperatures (2400 and 2700 K).
Overall, this straightforward fitting approach based on constant-temperature MD runs provides accurate diffusion barriers, ranging from 0.51 to more than 4 eV, for a vacancy in crystalline silicon at a low computational cost (253 to 369 evaluations of the reference potential).

B. Revisiting the ML-MD potential in ARTn: the OTF-MDART adjusting approach
To evaluate the possibility of improving on ML-MD potentials for activated events, the potentials are re-trained on-the-fly during ARTn learning cycles (OTF-MDART). Fig. 2 gives the number of calls to the reference potential for this procedure during the ARTn runs (dashed orange line) as well as the total number of calls, including those made during ML-MD fitting (solid orange line). The number of calls during ARTn learning cycles ranges from 979 ± 153 at 300 K to 136 ± 38 at 2700 K, for a total of 1232 ± 177 to 505 ± 109, respectively, when including ML-MD calls.
The error on energy and forces remains correlated with the ML-MD temperature: it is higher when the error at the ML-MD training temperature is higher. This correlation is particularly strong when retraining MD potentials fitted between 1500 and 2700 K (Fig. 3, solid orange line). The error on energy for OTF-MDART is almost constant between 300 and 2400 K, at 0.22 meV/atom, rising to 1.9 meV/atom at 2700 K, lower by 50 to 63 % than ML-MD. A similar improvement is observed on the forces, which range from 0.0103 eV/Å, on average, between 300 and 1800 K, increasing to 0.0173 eV/Å at 2700 K, representing a 16 % to 32 % decrease in error.
Table II. Average energy barrier error and mean position error on all saddle points for Si. The average error for ML-MD and OTF-MDART training is taken over all temperature sets.

Errors        ML-MD        OTF-MDART    OTF-ART
Energy (eV)   0.056±0.022  0.039±0.008  0.040±0.012
Position (Å)  0.114±0.029  0.072±0.006  0.072±0.010
Between 300 and 1500 K, retrained potentials with OTF-MDART show more constant energy barrier errors than pure ML-MD models (Fig. 4), with an error of about 0.036 eV (OTF-MDART) vs. an average of 0.064 eV (ML-MD), a 44 % improvement. At the highest temperatures (1800 to 2700 K), however, as OTF-MDART calls for fewer learning cycles, errors and fluctuations are not reduced with respect to ML-MD. Interestingly, though, an improvement on the saddle position is observed at all temperatures for OTF-MDART (Fig. 5), with an average error of 0.072 ± 0.010 Å.
Overall, by retraining the ML-MD potential in ARTn, errors are reduced and results are more consistent, i.e., error distributions are narrower, irrespective of the temperature used in the initial MD training. This additional retraining leads to a 50 % to 96 % decrease in energy error (Fig. 3), a 29 % improvement in average energy barrier errors (Tab. II) and a 37 % reduction in mean saddle position errors, but with an additional number of calls to the reference potential, increasing by 37 to 490 %.
Figure 6. Fraction of original MD configurations (left scale) and total number of MD configurations (right scale) remaining in the final training set (TS) for Si. Temperature refers to the one used during MD training.
These results can be understood by looking at the fraction of MD-generated configurations that remain in the training set at the end of the simulation (Fig. 6): none of the ML-MD configurations remain in the final training set for training temperatures between 300 and 1200 K; this proportion goes from 1.3 to 38 % between 1500 and 2700 K (left-hand axis, blue line). At these temperatures, the system melts and generates a wider range of configurations. Since these configurations are far from ARTn-generated configurations, the selection algorithm keeps them in the set even though they do not help reduce errors for the configurational space of interest with ARTn.
C. The OTF-ART adjusting approach

Given the results for OTF-MDART, we now turn to an OTF approach entirely integrated in ARTn, in an attempt to increase accuracy and to reduce the cost and waste of evaluations of the reference potential.
Ten independent on-the-fly ML potentials are generated entirely in ARTn, for a total of 36 000 events starting from the same initial minimum. Each potential is initially trained from the same single configuration (the initial minimum) in the training set. Each parallel event search goes through a learning cycle if needed and, as the simulation progresses, learning cycles become rarer. The reported values are averaged over the ten simulations.
With an average total of 628 ± 283 reference potential evaluations, the cost of OTF-ART lies between that of ML-MD and OTF-MDART. Along pathways, the average energy error for these potentials is 0.22 ± 0.03 meV/atom, on par with the OTF-MDART potentials based on low-temperature ML-MD fitting, and 49 % lower than the 300 K ML-MD potential. Errors on forces, at 0.011 ± 0.001 eV/Å, fall between ML-MD (0.012 eV/Å) and OTF-MDART (0.010 eV/Å) at low training temperature. Compared with the 2700 K potential fitted in MD, the OTF-ART error is 57 % lower than ML-MD (0.026 eV/Å) and 36 % lower than OTF-MDART (0.017 eV/Å).
Focusing on the barrier energy, the average error is 0.039 ± 0.008 eV (see Fig. 4), about 2.5 % lower than OTF-MDART and 30.3 % better than ML-MD. The error of 0.072 ± 0.006 Å on the converged saddle position is similar to the 0.072 ± 0.010 Å obtained with OTF-MDART and 37 % lower than with ML-MD (0.114 Å).
D. Reproducing the dominant diffusion mechanism

Figure 7. ARTn-generated energy barrier distributions for vacancy-diffusion events in Si, including direct barriers (from the ground state) and inverse barriers (from excited states), as generated with the MTP model (orange) and re-converged using the reference model (blue) from the saddle and minima positions originally found with the MTP model.
The exploration of the energy landscape around the vacancy leads to the generation of a wide range of activated mechanisms and associated barriers (both forward, associated with the diffusion of the vacancy, and backward, from the final minima back to the saddle point). Fig. 7 presents the complete distribution of generated direct and inverse barriers connected to the ground state. The peak near 0 eV (around 10^-2 to 10^-1 eV) is associated with the inverse barriers of the direct saddles at 2.38, 2.70 eV and higher (up to 5.5 eV), except for the inverse 0.45 eV barrier, which is linked to the 2.87 eV direct barrier. Direct barriers at 0.51 eV represent symmetric first-neighbor vacancy diffusion, while barriers at 2.38 and 2.70 eV are associated with more complex vacancy diffusion mechanisms32; events with these barriers, for example, involve vacancy diffusion through complex bond exchanges. Spectator events33, where the diamond network around the vacancy is transformed by a bond switching, are also generated. This mechanism was proposed by Wooten, Winer, and Weaire (WWW) to describe the amorphization of silicon34. The main spectator event occurs as two neighbors of the vacancy are pushed together, allowing the creation of a bond associated with the 2.87 eV barrier. Other mechanisms involve strong lattice distortion and bond formation not involving direct neighbors of the vacancy, with very high energy barriers32 of between 3.2 and 4.0 eV.
Table III. Average energy barrier errors and mean saddle position error on the 0.51 eV vacancy diffusion for Si. The average error for ML-MD and OTF-MDART training is taken over all temperature sets.

Errors        ML-MD        OTF-MDART    OTF-ART
Energy (eV)   0.026±0.015  0.022±0.011  0.019±0.005
Position (Å)  0.088±0.036  0.040±0.017  0.047±0.018
Since vacancy diffusion in this system is dominated by a single-barrier 0.51 eV mechanism, with the next barrier at 2.35 eV, an accurate description of the dominant mechanism is essential to correctly capture defect kinetics in Si. Tab. III presents the error on this barrier for the three approaches described above. With an error of 0.019 ± 0.005 eV, a relative error of 3.7 %, OTF-ART offers the closest reproduction of the reference barrier, followed by OTF-MDART and ML-MD, with respective errors of 0.022 ± 0.011 eV (relative error of 4.3 %) and 0.026 ± 0.015 eV (5.1 %). Overall, the error on the energy barrier is lower than that on the total energy presented above (0.046 ± 0.006 eV for OTF-ART, for example), due to a partial error cancellation associated with the energy difference taken to measure the barrier.
The validity of the barrier is also measured by the precision on the saddle geometry. For the 0.51 eV barrier, ML-MD converges with an error on the position of 0.088 ± 0.036 Å, with OTF-MDART and OTF-ART giving errors almost 50 % lower, at 0.040 ± 0.017 Å and 0.047 ± 0.018 Å, respectively.
E. SiGe system

Having shown the interest of developing a specific potential by applying on-the-fly learning directly to activated events in a simple system such as c-Si with a vacancy, we test this approach on a more complex alloy with the same overall reference potential to facilitate comparison. Starting from an ordered zincblende structure, the diffusion of a vacancy creates chemical disorder that complexifies the visited landscape, as shown by the continuous distribution of activated barriers, both direct and inverse, found as the vacancy diffuses (Fig. 8); we note that the lowest barrier for a diffusing vacancy is around 0.6 eV, with lower barriers associated, as for Si, with reverse jumps from metastable states. The energy barrier distribution for a vacancy diffusing in SiGe (Fig. 8) is thus much more complex than for Si, due to the chemical disorder that builds as the vacancy diffuses.

Figure 8. SiGe barrier histogram, including direct barriers (from the ground state) and inverse barriers (from excited states), as found on-the-fly by the MTP model (orange) and re-converged with the reference model (blue) from the saddle and minima positions originally given by MTP.

Figure 9. Number of calls to the reference potential for each of the OTF machine-learned potentials developed for SiGe, as a function of the temperature used during MD training. Since configurations are relaxed to zero K in ARTn simulations, there is no associated temperature for this procedure.
As stated in the methodology, the additional complexity of the system imposes a richer machine-learned potential, with a larger set of parameters to encompass the greater diversity in the components and the configurations, due to chemical disorder.

Figure 10. Average energy (top) and mean absolute force (bottom) errors for SiGe, measured over all configurations generated along pathways in ARTn for the three approaches. Temperature refers to the one used during MD training.

Table IV. Average energy barrier errors and mean saddle position error on all barriers for SiGe. The average error for ML-MD and OTF-MDART training is taken over all temperature sets.

Errors        ML-MD        OTF-MDART    OTF-ART
Energy (eV)   0.082±0.024  0.072±0.014  0.066±0.015
Position (Å)  0.091±0.020  0.076±0.013  0.070±0.014
Combined, these two levels of complexity (set of parameters and configurational) result in an overall higher number of calls to the reference potential compared to Si, irrespective of the approach used (see Fig. 9 (SiGe) vs. Fig. 2 (Si)): while ML-MD requires between 380 evaluations of the reference potential at 300 K and 1549 at 2700 K, OTF-MDART needs a total of around 3465 calculations of the reference potential, irrespective of the temperature, as the original ML-MD configurations are progressively removed from the training set. This effort results in a number of calls to the reference potential for OTF-MDART 4 % higher than with OTF-ART (3329 on average). To reduce computational costs, we omit the 1500 K run, as the statistical behavior is smooth in this temperature region.
Figure 11. Percentage of search interruptions during ML-MD potential evaluation in ARTn (γ > 200) for Si and SiGe, as a function of the ML-MD training temperature.
To disentangle the two contributions, we compare with the cost of fitting a Si potential at the same level 20 as used for SiGe. Following the full OTF-ART procedure, creating such a Si MLP requires 2926 calls to the reference potential. The intrinsic complexity of the SiGe landscape therefore contributes about a 14 % increase over this Si baseline call count. In terms of accuracy, the Si MLP at level 20 leads to an average error on energy of 0.1 meV/atom, about 50 % lower than with the level 16 potential described above (0.22 meV/atom). For SiGe, this error is 0.42 meV/atom, two times higher than for the Si MLP at level 16 and four times that of the Si MLP at level 20.
This can be understood by the number of different configurations visited: as opposed to the Si system, where each initial minimum is identical (as the vacancy moves in an otherwise perfect elemental crystal), the binary system is transformed as the vacancy diffuses and the chemical order is slowly destroyed: each of the 24 ARTn parallel trajectories used to define the potential over 1500 events evolves independently according to a probability given by the Metropolis algorithm (Eq. 7) with a fictitious temperature of 0.5 eV (since the network itself is structurally at 0 K), providing a rich range of local environments.
Fitting a potential is clearly harder: with the parameters used (when a configuration graded at γ > 200 is encountered, the ARTn event search is stopped), not enough events could be generated using the ML-MD potential at 300 K and 600 K, which explains the absence of data for these temperatures in Fig. 10 and Tab. IV.
For SiGe, the error on energy (see Fig. 10) with the ML-MD potentials trained at 900 K and above ranges from 0.5 meV/atom to 1.4 meV/atom, as a function of temperature. On average, these errors are between 14 % and 69 % lower with OTF-MDART or OTF-ART, at around 0.43 meV/atom.
The OTF-ART approach gives an error on the energy barrier of 0.066 ± 0.015 eV, which represents a 19.5 % and 8.3 % lower error than ML-MD (0.082 ± 0.024 eV) and OTF-MDART (0.072 ± 0.014 eV), respectively (Tab. IV). The errors on the converged saddle position for OTF-ART and OTF-MDART are similar, at 0.070 ± 0.014 Å and 0.076 ± 0.013 Å, respectively, and represent a 23 % lower error than with ML-MD (0.092 Å). This accuracy is similar to that obtained with Si, in contrast to the total energy and energy barrier errors.
We note that the advantage of ML-MD for SiGe is overstated, as shown by the proportion of events generated with the ML-MD potential that are interrupted due to a too large extrapolation grade, γ > 200, for both SiGe and Si (Fig. 11): for SiGe, between 85 % and 30 % of events are aborted between 300 K and 1200 K, respectively. This proportion falls to zero at 1800 K.
IV. DISCUSSION

We compare three approaches aimed at the on-the-fly construction of machine-learning potentials for the exploration of activated mechanisms of the potential energy landscape. We evaluate them by computing their efficiency at reproducing the energy landscape around a vacancy in two systems: a relatively simple Si diamond system (Fig. 7) and a more complex SiGe zincblende system that disorders under vacancy diffusion (Fig. 8).
The first approach, which sets the comparison level, constructs a more general machine learning potential with molecular dynamics (ML-MD); the second adjusts this generated potential on-the-fly during the search for activated events using ARTn (OTF-MDART); the third constructs a specific potential trained on-the-fly during the search for activated events (OTF-ART). The efficiency of these three procedures is measured by the quality of the reproduction of the reference potential during the search for activated events.
The baseline, defined by ML-MD, is competitive with previously published work. Energy errors for the more standard ML-MD approach with a level 16 potential range from 0.44 ± 0.36 meV/atom at 300 K to 5.1 ± 1.7 meV/atom at 2700 K (Fig. 3), an order of magnitude lower than, or similar to, the 4 meV/atom obtained on an MTP potential of level 24 for Si by Zuo et al.22, with the difference explained by the fact that activated events involve local deformations from a zero-temperature crystal with a vacancy and that DFT potentials are more difficult to fit than empirical ones18.
Similarly, the relative energy error on the dominant 0.51 eV diffusion barrier for SW Si is 5.1 % (0.026 eV) with the ML-MD approach and 3.7 % (0.019 eV) with OTF-ART. Using the same MTP potential trained with OTF MD and an ab initio reference potential, Novoselov et al.21 find a 0.20 eV barrier for vacancy diffusion in Si, as compared with 0.18 eV for the reference potential, an error of 0.02 eV or a 10.0 % relative error.
Overall, the ML-MD approach, especially when run at temperatures between 900 and 1800 K, can generate a generic ML potential with reasonable precision for describing activated mechanisms in Si and SiGe. Developing a more specific OTF potential, generated directly with ARTn on activated trajectories, however, offers a more accurate description of both the energy and the geometry at the barriers.
It is possible to recover this precision by adjusting the original MD potential during ARTn runs; however, this increases the number of calls to the reference potential, raising the total cost beyond that of OTF-ART while largely erasing the work done during the ML-MD training phase: for Si, between 300 and 1200 K, none of the ML-MD configurations are retained, while around 1.3 to 12.5 % are retained for the potentials trained in the range of 1500 to 2700 K (Fig. 6, right-hand axis, orange line), but at the cost of lowering the precision on barriers.
Moving to a more complex system, such as a binary alloy, increases the overall cost of the procedure in terms of calls to the reference potential, as more parameters need to be fitted. Here also, the gain from using a specific potential constructed from ARTn trajectories is notable, both in the average errors and in their fluctuations. Indeed, the ML-MD potential presents considerable instabilities when generating activated trajectories, as can be seen from the number of configurations considered outside of the potential's scope (γ > 200); see Fig. 11.
CONCLUSION

We compare the advantages of using a more general vs. a specific machine-learned potential (MLP) to describe activated mechanisms in solids. To do so, we first generate an MLP constructed with the Moment Tensor Potential formalism10,18 to replicate the Stillinger-Weber potential for Si and SiGe crystals with a single vacancy, using a standard molecular dynamics procedure (ML-MD).
Comparing the quality of the reproduction of activated mechanisms with an ML potential further refined during an activation-relaxation technique nouveau sampling of the energy landscape and with a potential constructed uniquely on-the-fly within ARTn, we show that while a general potential can deliver high accuracy for both the barrier geometries and their related energies, errors and fluctuations around the average value are significantly lowered by constructing a specific potential, with a number of calls to the reference potential that is lower than for a combined approach (MD + ARTn) of similar precision.
The advantage of using a specific potential remains when looking at more complex materials, such as the SiGe alloys considered here, even though the advantage in terms of calls to the reference potential is strongly reduced.
Having demonstrated that a specific machine-learned potential developed with methods such as MTP and ARTn can reproduce with high precision the activated mechanisms at the origin of kinetics in complex materials, the next steps will involve applying this strategy to attack problems that have long been out of reach of computational materials science, allowing a much closer connection between modeling and experiment.
V. CODE AND DATA AVAILABILITY

The ARTn package as well as the data reported here are distributed freely. Please contact Normand Mousseau ([email protected]).
ACKNOWLEDGMENTS

This project is supported through a Discovery grant from the Natural Science and Engineering Research Council of Canada (NSERC). Karl-Étienne Bolduc is grateful to NSERC and IVADO for summer scholarships. We are grateful to Calcul Québec and Compute Canada for a generous allocation of computational resources.
1 W. Kohn and L. J. Sham. Self-consistent equations including exchange and correlation effects. Phys. Rev., 140:A1133–A1138, Nov 1965.
2 Erik R Lindahl. Molecular dynamics simulations. In Molecular modeling of proteins, pages 3–23. Springer, 2008.
3 Arthur F Voter and Jimmie D Doll. Transition state theory description of surface self-diffusion: Comparison with classical trajectory results. The Journal of Chemical Physics, 80(11):5832–5838, 1984.
4 Arthur F Voter. Introduction to the kinetic Monte Carlo method. In Radiation effects in solids, pages 1–23. Springer, 2007.
5 Graeme Henkelman and Hannes Jónsson. Long time scale kinetic Monte Carlo simulations without lattice approximation and predefined event table. The Journal of Chemical Physics, 115(21):9657–9666, 2001.
6 Fedwa El-Mellouhi, Normand Mousseau, and Laurent J. Lewis. Kinetic activation-relaxation technique: An off-lattice self-learning kinetic Monte Carlo algorithm. Physical Review B, 78(15):153202, October 2008.
7 Jörg Behler and Michele Parrinello. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett., 98(14):146401, April 2007.
8 Albert P Bartók, Risi Kondor, and Gábor Csányi. On representing chemical environments. Physical Review B, 87(18):184115, 2013.
9 Aidan P Thompson, Laura P Swiler, Christian R Trott, Stephen M Foiles, and Garritt J Tucker. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials. Journal of Computational Physics, 285:316–330, 2015.
10 Alexander V Shapeev. Moment tensor potentials: A class of systematically improvable interatomic potentials. Multiscale Modeling & Simulation, 14(3):1153–1173, 2016.
11 Ganesh Sivaraman, Jicheng Guo, Logan Ward, Nathaniel Hoyt, Mark Williamson, Ian Foster, Chris Benmore, and Nicholas Jackson. Automated development of molten salt machine learning potentials: application to LiCl. The Journal of Physical Chemistry Letters, 12(17):4278–4285, 2021.
12 Pei-Lin Kang, Cheng Shang, and Zhi-Pan Liu. Large-scale atomic simulation via machine learning potentials constructed by global potential energy surface exploration. Accounts of Chemical Research, 53(10):2119–2129, 2020.
13 Ganesh Sivaraman, Anand Narayanan Krishnamoorthy, Matthias Baur, Christian Holm, Marius Stan, Gábor Csányi, Chris Benmore, and Álvaro Vázquez-Mayagoitia. Machine-learned interatomic potentials by active learning: amorphous and liquid hafnium dioxide. npj Computational Materials, 6(1):1–8, 2020.
14 G. T. Barkema and Normand Mousseau. Event-based relaxation of continuous disordered systems. Physical Review Letters, 77(21):4358–4361, November 1996.
15 Rachid Malek and Normand Mousseau. Dynamics of Lennard-Jones clusters: A characterization of the activation-relaxation technique. Physical Review E, 62(6):7723–7728, December 2000.
16 Antoine Jay, Miha Gunde, Nicolas Salles, Matic Poberžnik, Layla Martin-Samos, Nicolas Richard, Stefano de Gironcoli, Normand Mousseau, and Anne Hémeryck. Activation–relaxation technique: An efficient way to find minima and saddle points of potential energy surfaces. Computational Materials Science, 209:111363, June 2022.
17 Donald G Truhlar, Bruce C Garrett, and Stephen J Klippenstein. Current status of transition-state theory. The Journal of Physical Chemistry, 100(31):12771–12800, 1996.
18 Ivan S Novikov, Konstantin Gubaev, Evgeny V Podryabinkin, and Alexander V Shapeev. The MLIP package: moment tensor potentials with MPI and active learning. Machine Learning: Science and Technology, 2(2):025002, 2020.
19 Frank H Stillinger and Thomas A Weber. Computer simulation of local order in condensed phases of silicon. Physical Review B, 31(8):5262, 1985.
20 Stéphane Ethier and Laurent J Lewis. Epitaxial growth of Si1−xGex on Si(100) 2×1: A molecular-dynamics study. Journal of Materials Research, 7(10):2817–2827, 1992.
21 I. I. Novoselov, A. V. Yanilkin, A. V. Shapeev, and E. V. Podryabinkin. Moment tensor potentials as a promising tool to study diffusion processes. Computational Materials Science, 164:46–56, 2019.
22 Yunxing Zuo, Chi Chen, Xiangguo Li, Zhi Deng, Yiming Chen, Jörg Behler, Gábor Csányi, Alexander V Shapeev, Aidan P Thompson, Mitchell A Wood, et al. Performance and cost assessment of machine learning interatomic potentials. The Journal of Physical Chemistry A, 124(4):731–745, 2020.
23 Evgeny V Podryabinkin and Alexander V Shapeev. Active learning of linearly parametrized interatomic potentials. Computational Materials Science, 140:171–180, 2017.
24 Evgeny V Podryabinkin, Evgeny V Tikhonov, Alexander V Shapeev, and Artem R Oganov. Accelerating crystal structure prediction by machine-learning interatomic potentials with active learning. Physical Review B, 99(6):064114, 2019.
25 Konstantin Gubaev, Evgeny V Podryabinkin, Gus L W Hart, and Alexander V Shapeev. Accelerating high-throughput searches for new alloys with active learning of interatomic potentials. Computational Materials Science, 156:148–156, 2019.
26 A. P. Thompson, H. M. Aktulga, R. Berger, D. S. Bolintineanu, W. M. Brown, P. S. Crozier, P. J. in 't Veld, A. Kohlmeyer, S. G. Moore, T. D. Nguyen, R. Shan, M. J. Stevens, J. Tranchida, C. Trott, and S. J. Plimpton. LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Comp. Phys. Comm., 271:108171, 2022.
27 Antoine Jay, Miha Gunde, Nicolas Salles, Matic Poberžnik, Layla Martin-Samos, Nicolas Richard, Stefano de Gironcoli, Normand Mousseau, and Anne Hémeryck. Activation–relaxation technique: An efficient way to find minima and saddle points of potential energy surfaces. Computational Materials Science, 209:111363, 2022.
28 Cornelius Lanczos. An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. Journal of Research of the National Bureau of Standards, 45:255–282, 1950.
29 Rachid Malek and Normand Mousseau. Dynamics of Lennard-Jones clusters: A characterization of the activation-relaxation technique. Physical Review E, 62(6):7723, 2000.
30 Erik Bitzek, Pekka Koskinen, Franz Gähler, Michael Moseler, and Peter Gumbsch. Structural relaxation made simple. Physical Review Letters, 97(17):170201, 2006.
31 Normand Mousseau and Gerard T. Barkema. Exploring high-dimensional energy landscapes. Computing in Science & Engineering, 1(2):74–82, March 1999.
32 Fedwa El-Mellouhi, Normand Mousseau, and Pablo Ordejón. Sampling the diffusion paths of a neutral vacancy in silicon with quantum mechanical calculations. Phys. Rev. B, 70:205202, Nov 2004.
33 Yuko Kumeda, David J Wales, and Lindsey J Munro. Transition states and rearrangement mechanisms from hybrid eigenvector-following and density functional theory: application to C10H10 and defect migration in crystalline silicon. Chemical Physics Letters, 341(1-2):185–194, 2001.
34 F. Wooten, K. Winer, and D. Weaire. Computer generation of structural models of amorphous Si and Ge. Phys. Rev. Lett., 54:1392–1395, Apr 1985.
6tFAT4oBgHgl3EQfnx0W/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
8NAzT4oBgHgl3EQfgfyv/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0590c851e63d05837b8558354c9ecaf46ba41cfc875d209bae9adf119dcf934a
+size 72483

8NFLT4oBgHgl3EQfsy_c/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff

9dFRT4oBgHgl3EQfqjdh/content/2301.13617v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a74517c25d0ac22b61a1de92622aae50fb772eea528d30d6e16ffcf73a1a8fb4
+size 2040715

9dFRT4oBgHgl3EQfqjdh/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed8de90c79b609b8fe991814990c5103d34d7e18e36b0eeb08e7ce83e5968c00
+size 608034

A9AzT4oBgHgl3EQfhv18/content/2301.01489v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:262d5ea3b5123cd1009dea4f0410bb6eb48b3154f8365f9cedb528c53481c7a9
+size 1968081

A9AzT4oBgHgl3EQfhv18/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f7f75bb86b285f6c6f7b3afd1c30a13c0721b657db7c2954547227a1b500c58
+size 168330

A9E1T4oBgHgl3EQf9QZb/content/2301.03554v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49e5afd853a41b01a4f0691572effed653e936b5cb664f5243b4775aead4e862
+size 833905

A9E1T4oBgHgl3EQf9QZb/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c22d0e3afd598e32505914cec4b3e1b3ed1634180ba86493b14ffb69b8ab7e97
+size 10551341
ANFQT4oBgHgl3EQf8jdP/content/tmp_files/2301.13447v1.pdf.txt
ADDED
@@ -0,0 +1,1150 @@
+A Data-Driven Modeling and Control Framework for Physics-Based Building Emulators
+
+Chihyeon Song ⋆, Aayushman Sharma ⋆, Raman Goyal, Alejandro Brito, Saman Mostafavi ⋆
+∗ Palo Alto Research Center, Inc. (PARC), Palo Alto, CA 94304 USA (e-mail: {csong, asharma, rgoyal, abrito, smostafa}@parc.com).
+
+Abstract: We present a data-driven modeling and control framework for physics-based building emulators. Our approach comprises: (a) offline training of differentiable surrogate models that speed up model evaluations, provide cheap gradients, and have good predictive accuracy for the receding horizon in Model Predictive Control (MPC), and (b) formulating and solving nonlinear building HVAC MPC problems. We extensively verify the modeling and control performance using multiple surrogate models and optimization frameworks for different available test cases in the Building Optimization Testing Framework (BOPTEST). The framework is compatible with other modeling techniques and customizable with different control formulations. The modularity makes the approach future-proof for test cases currently in development for physics-based building emulators and provides a path toward prototyping predictive controllers in large buildings.
+
+Keywords: Data-driven control, Nonlinear model predictive control, Building emulator, Surrogate modeling.
+1. INTRODUCTION
+
+According to recent estimates by the United States Energy Information Administration (2021), residential and commercial buildings account for nearly 40% of energy usage in the United States. A significant amount of this energy consumption can be eliminated by improving the building's HVAC control system, for example using predictive control methods, as has been shown in Drgoňa et al. (2020). Among these methods, model predictive control (MPC) is a particularly powerful approach for handling constraints on state and control inputs in nonlinear multivariable control systems. While the gains are evident, the challenge is to show that MPC can be implemented at scale in a cost-friendly manner (O'Dwyer et al., 2022). It is well understood that the main obstacle to this is the modeling cost; according to one study (Atam and Helsen, 2016), this can be as much as 70% of the total effort of setting up an MPC-based building controller, mainly due to the effort and expertise required to create realistically calibrated models. Recently, the Building Optimization Testing Framework (BOPTEST) (Blum et al., 2021) was developed to facilitate simulation-based benchmarking of building HVAC control algorithms. The emulator uses calibrated Modelica models to emulate building physical dynamics based on first principles. Models also output Key Performance Indices (KPI) that represent occupant satisfaction, energy cost and consumption, and carbon footprint. What makes this platform even more impressive is the fact that it is set up to replicate a real building control system with all its control limitations, e.g., there are realistic low-level feedback control laws, box constraints on control inputs, weather, occupancy profiles, economizer schedules, etc.
+
+MOTIVATION  While the value of BOPTEST, and other physics-based emulators, in creating a unified testing platform for control is unquestionable, there are several intrinsic obstacles to the implementation of predictive control and its adoption by a broader audience 1: (1) In BOPTEST, and most other physics-based emulators, the numerical solvers for scaled-up models will not be computationally efficient to run in iterative optimization loops. (2) Solving the optimization problem requires gradient calculations which, derived through perturbations, only compound the computational intensity. (3) Furthermore, some optimal control methods, such as the iterative Linear Quadratic Regulator (iLQR) method (Todorov and Li, 2005), derive optimal solutions by exploring trajectories that might be infeasible for the emulator to evaluate (due to control input and state constraints), which can lead to crashing the iterative algorithm prematurely.
+
+While acknowledging the significant progress in deep neural network-based reinforcement learning (RL) approaches for controlling unknown dynamical systems, with applications expanding from playing games (Silver et al., 2016) to locomotion (Lillicrap et al., 2015) and robotic hand manipulation (Levine et al., 2016), RL is still highly data intensive. The training time for such algorithms is typically very large, and high variance and reproducibility issues mar the performance (Henderson et al., 2018). At the moment, RL algorithms remain intractable for adjustable and reproducible implementations at scale. On the other hand, most of the building MPC work (Sturzenegger et al., 2015; Mostafavi et al., 2022; Oei et al., 2020; Walker et al., 2017) considers either simple low-fidelity RC-based models, bilinear models with low accuracy, machine learning (ML) approaches that cannot be directly used for fast MPC implementation, or directly uses Modelica-based models with hand-tuned cost functions for nonlinear optimization of energy consumption. Such modeling and control approaches require a lot of customization for high-fidelity models with complex, hybrid, and constrained systems that use external inputs and, therefore, are not suited to a robust control framework.
+
+1 It is worth mentioning that we consider these challenges to be almost identical for a real building HVAC control system and, therefore, addressing and solving them is a first step to deploying such control algorithms in the field.
+CONTRIBUTIONS  The main contribution of this paper is the development of a modeling and control framework for building HVAC control based on identifying differentiable models that are compatible with optimization-based nonlinear optimal control methods. We address these limitations by the following two-fold approach: first, in an off-line round, we identify a differentiable surrogate model for the following nonlinear mapping:
+
+x_{t+1} = f(x_t, u_t, d_t)    (1)
+
+where x represents the state of the model, u the control inputs, and d the external time-varying disturbances associated with the weather and occupancy conditions. Second, we use automatic differentiation (AD) (Paszke et al., 2017) to compute gradients for solving nonlinear model predictive control (NMPC) with box constraints on states and inputs. The details of the modeling and control approaches are discussed in Section 2 and Section 3. The individual contributions of the paper are as follows: we demonstrate how to identify a suitable Neural Network (NN) to capture the dynamics of the building envelope and HVAC control system. We investigate several choices of lags for states, controls, and disturbances and provide insight into best practices. We also present different MPC formulations, assisted by using AD, to maintain occupants' comfort constraints while minimizing KPIs for HVAC energy consumption. We show the customizability of the framework through the ease of using two different control approaches to solve the MPC problem. We show that the proposed approach can be used to warm-start the receding horizon replanning for the MPC problem. In the results section, we also provide a performance comparison between different approaches for modeling and solving NMPC when operating on computationally intensive hybrid system models. We also discuss potential best practices based on desired control criteria (speed, optimality, etc.). Finally, to the best of our knowledge, the NMPC control of the BOPTEST five-zone model is the first of its kind. We believe this framework is scalable for data-driven NMPC control of BOPTEST, and potentially other physics-based building emulators, that are being developed for prototyping controllers in large building HVAC systems.
+2. SURROGATE MODELING FOR BUILDING EMULATOR
+
+Our aim is to replace the computationally expensive nonlinear numerical simulations with alternative, fast representations for model-based control. In the context of using NNs for MPC, we believe that one should include the following criteria in the surrogate modeling process:
+• Computing cost: Small computing cost for fast iterative evaluations.
+• Predictive accuracy: Good prediction accuracy over MPC's horizon.
+• Differentiability: Fast and accurate gradient information for successive linearization, nonlinear solvers, etc., for different MPC formulations.
+We leverage the PyTorch (Paszke et al., 2019) modeling library to meet these goals. In this study, we consider the following cases: Linear, MLP, and Long Short-Term Memory (LSTM). The MLP has fast forward computation and good expressivity to approximate complex functions (Hornik et al., 1989). On the other hand, since BOPTEST is a Partially Observable MDP (POMDP), it requires lag information from states, actions, and time-varying disturbances for model fitting. This can be curtailed by using LSTM, which has proven to work well for nonlinear mappings with autoregressive features (Siami-Namini et al., 2018). While fairly simple, the linear model has the advantage of the fastest model evaluations and plug-and-play viability for fast QP solvers.
+2.1 Linear
+
+The surrogate model takes as input the states x, control inputs u, time-varying disturbances d, and their lags over past time-steps. The output of the surrogate model is the future state prediction x_{t+1}, i.e.:
+
+x_{t+1} = f(x_{t-Mx:t}, u_{t-Mu:t}, d_{t-Md:t})    (2)
+
+where Mx, Mu, Md are the state, input, and disturbance lags, respectively. Since the choices of lags are application dependent, we discuss this further in the results section. Here, f is linearized as follows:
+
+x_{t+1} = Σ_{k=0}^{Mx} A_k x_{t-k} + Σ_{k=0}^{Mu} B_k u_{t-k} + Σ_{k=0}^{Md} C_k d_{t-k}    (3)
+
+where A_k = ∇_x f ∈ R^{Nx×Nx}, B_k = ∇_u f ∈ R^{Nx×Nu} and C_k = ∇_d f ∈ R^{Nx×Nd} are learnable parameter matrices for state, control input, and disturbance, respectively.
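+To make Eq. (3) concrete, the following is a minimal PyTorch sketch of the linear lag model; the paper does not publish its code, so the class and variable names here are our own assumptions. Flattening each lag window and applying one bias-free nn.Linear per signal is mathematically equivalent to learning the per-lag matrices A_k, B_k, C_k.
+
+# Minimal sketch (an assumption, not the authors' published code) of Eq. (3).
+import torch
+import torch.nn as nn
+
+class LinearLagModel(nn.Module):
+    def __init__(self, nx, nu, nd, Mx, Mu, Md):
+        super().__init__()
+        # One bias-free linear map per signal over its flattened lag window;
+        # each weight matrix stacks the per-lag matrices A_k / B_k / C_k.
+        self.A = nn.Linear(nx * (Mx + 1), nx, bias=False)
+        self.B = nn.Linear(nu * (Mu + 1), nx, bias=False)
+        self.C = nn.Linear(nd * (Md + 1), nx, bias=False)
+
+    def forward(self, x_lags, u_lags, d_lags):
+        # x_lags: (batch, (Mx+1)*nx) holding [x_{t-Mx}, ..., x_t]; u_lags, d_lags analogous.
+        return self.A(x_lags) + self.B(u_lags) + self.C(d_lags)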
+2.2 MLP
+
+The lagged input structure given by Equation (2) also applies here. The forward computation of the MLP is written as follows:
+
+h_0 = [x_{t-Mx:t}, u_{t-Mu:t}, d_{t-Md:t}]
+h_{k+1} = tanh(W_k h_k + b_k),   k = 0, ..., K − 1
+x_{t+1} = o_{t+1} = W_K h_K + b_K    (4)
+
+where h_k ∈ R^l is the hidden unit of layer k, and W_k and b_k are the weight parameters of layer k.
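+A minimal PyTorch sketch of the MLP in Eq. (4), under the same caveat that function and variable names are our assumptions; the layer count and width follow the hyperparameters reported later in Section 4.
+
+# Sketch of the MLP surrogate of Eq. (4): tanh between hidden layers, affine output.
+import torch.nn as nn
+
+def make_mlp(in_dim, nx, hidden=256, n_layers=4):
+    layers, d = [], in_dim
+    for _ in range(n_layers):
+        layers += [nn.Linear(d, hidden), nn.Tanh()]
+        d = hidden
+    layers.append(nn.Linear(d, nx))  # affine output producing x_{t+1}
+    return nn.Sequential(*layers)
+
+# The input h_0 is the concatenation [x_{t-Mx:t}, u_{t-Mu:t}, d_{t-Md:t}].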
+2.3 LSTM
+
+The forward computation of the LSTM is written as follows:
+
+h_t, c_t = MLP_enc(x_{t-Mx:t}, u_{t-Mu:t-1}, d_{t-Md:t})
+i_t = σ(W_ii u_t + b_ii + W_hi h_{t-1} + b_hi)
+f_t = σ(W_if u_t + b_if + W_hf h_{t-1} + b_hf)
+g_t = tanh(W_ig u_t + b_ig + W_hg h_{t-1} + b_hg)
+o_t = σ(W_io u_t + b_io + W_ho h_{t-1} + b_ho)
+c_{t+1} = f_t ⊙ c_t + i_t ⊙ g_t
+h_{t+1} = o_t ⊙ tanh(c_{t+1})
+x_{t+1} = MLP_dec(h_{t+1})    (5)
+
+where h_t is the hidden state, c_t is the cell state, and i_t, f_t, g_t and o_t are the input, forget, cell, and output gates, respectively. σ(·) is the sigmoid function, ⊙ is the Hadamard product, and MLP_enc and MLP_dec are an MLP encoder and decoder, respectively.
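+A sketch of this encoder-LSTM-decoder structure, assuming (our choice, not confirmed by the paper) that the gate equations of Eq. (5) are realized with a standard nn.LSTMCell driven by the control input:
+
+# Sketch of the LSTM surrogate of Eq. (5); module names and shapes are assumptions.
+import torch.nn as nn
+
+class LSTMSurrogate(nn.Module):
+    def __init__(self, lag_dim, nu, nx, hidden=256):
+        super().__init__()
+        # MLP encoder mapping the lag window to initial hidden and cell states.
+        self.enc_h = nn.Sequential(nn.Linear(lag_dim, hidden), nn.Tanh(), nn.Linear(hidden, hidden))
+        self.enc_c = nn.Sequential(nn.Linear(lag_dim, hidden), nn.Tanh(), nn.Linear(hidden, hidden))
+        self.cell = nn.LSTMCell(nu, hidden)  # gates i, f, g, o of Eq. (5)
+        self.dec = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, nx))
+
+    def forward(self, lags, u_t):
+        # lags: flattened [x_{t-Mx:t}, u_{t-Mu:t-1}, d_{t-Md:t}]; u_t: current control.
+        h, c = self.enc_h(lags), self.enc_c(lags)
+        h, c = self.cell(u_t, (h, c))
+        return self.dec(h)  # prediction of x_{t+1}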
+3. CONTROL PROBLEM FORMULATION
+
+Consider the discrete-time nonlinear dynamical system:
+
+x_{t+1} = f(x_t, u_t, d_t),    (6)
+
+where x_t ∈ R^{nx} and u_t ∈ R^{nu} correspond to the state and control vectors at time t, and d_t ∈ R^{nd} is the set of contextual variables/external inputs. The optimal control problem is to find the optimal control policy that minimizes the cumulative cost:
+
+min_{u_t} Σ_{t=0}^{T} c_t(x_t, u_t, d_t)    (7)
+subject to: x_{t+1} = f(x_t, u_t, d_t),    (8)
+            u^l_t ≤ u_t ≤ u^u_t,    (9)
+
+for given x_0, and where c_t(·) is the instantaneous cost function given as:
+
+c_t(·) = P_c + P_h + L_k + γ P_x,    (10)
+
+where P_c and P_h are the total cooling and heating costs, L_k = ||ũ_{t+1} − ũ_t||^2_R is a regularizer term, which penalizes large changes in the control inputs to avoid undesirable oscillations, and P_x = max(x^l_t − x_t, 0) + max(x_t − x^u_t, 0) enforces the occupant comfort constraints, implemented with a ReLU function with a penalty coefficient γ. The problem also considers input box constraints with lower and upper bounds given as [u^l_t, u^u_t].
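+For illustration, the instantaneous cost of Eq. (10) can be written directly in PyTorch so that its gradient is available through AD; the tensor names and the exact provenance of the power terms below are our assumptions.
+
+# Sketch of the stage cost c_t of Eq. (10); P_cool and P_heat would come from the
+# surrogate's power outputs, and gamma weighs the ReLU comfort penalty P_x.
+import torch
+
+def stage_cost(x, u_prev, u, x_lo, x_hi, P_cool, P_heat, R=1.0, gamma=1e3):
+    L = R * torch.sum((u - u_prev) ** 2)               # control-rate regularizer L_k
+    P_x = torch.relu(x_lo - x) + torch.relu(x - x_hi)  # comfort violation, ReLU form
+    return P_cool + P_heat + L + gamma * torch.sum(P_x)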
+3.1 Gradient Descent Method
+
+The gradient descent method is one of the most widely used algorithms for optimizing a differentiable objective function. At each iteration, the gradient of the objective function is computed and the decision variables are updated in the direction of the computed gradient. Gradient descent algorithms have precedent across domains such as training neural networks (Schmidhuber, 2015) and solving optimal control problems (Lin et al., 2014). In this paper, we use Adam (Kingma and Ba, 2014), which has shown promising results in deep learning applications. For the input constraint (9), we use projected gradient descent, a common method for solving constrained optimization: after each gradient update, we project the control vector u_t into the feasible region [u^l_t, u^u_t]. Since the feasible region is a box constraint, the projected control vector is easily computed by using a clamp function after each update of the algorithm.
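+A minimal sketch of this projected gradient step: optimize the control trajectory with Adam and clamp it back into the box after every update. The 100-iteration stopping criterion mirrors the one reported in Section 4; everything else (names, learning rate) is an assumed skeleton.
+
+# Sketch of projected gradient descent (Adam + clamp) for the MPC subproblem.
+import torch
+
+def solve_gdm(total_cost, u_init, u_lo, u_hi, iters=100, lr=1e-2):
+    u = u_init.clone().requires_grad_(True)   # (horizon, nu) control trajectory
+    opt = torch.optim.Adam([u], lr=lr)
+    for _ in range(iters):
+        opt.zero_grad()
+        loss = total_cost(u)      # rolls the surrogate forward and sums Eq. (10)
+        loss.backward()           # gradients via automatic differentiation
+        opt.step()
+        with torch.no_grad():
+            u.clamp_(u_lo, u_hi)  # exact projection onto the box [u^l, u^u]
+    return u.detach()
+
+Because the feasible set is a box, the clamp is the exact Euclidean projection, which is what makes this simple scheme correct.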
+3.2 Sequential Quadratic Programming
+
+There have been numerous tools and methods developed to solve specific nonlinear optimization problems with particular structures of cost functions and equality and inequality constraint functions. However, Sequential Quadratic Programming (SQP) remains one of the most efficient approaches for solving a general constrained nonlinear optimization problem. For the SQP approach, we utilize the optimization subroutine originally proposed by Dieter Kraft (Kraft, 1988), as implemented in SciPy (Virtanen et al., 2020), to solve the control optimization problem described in Eqns. (7)-(9). The algorithm is a quasi-Newton method (using BFGS) applied to a Lagrange function consisting of a loss function and equality and inequality constraints. In our implementation, we provide the function evaluations, which are calculated using Equation 10, and its Jacobian using automatic differentiation. Instead of clamping, we pass bounds for control inputs directly to the solver.
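+The SciPy call can be sketched as follows; the wrapper flattens the control trajectory, evaluates Eq. (10) through the surrogate, and returns the AD gradient, while the box constraints of Eq. (9) are passed as bounds. Apart from scipy.optimize.minimize itself, the function names are our assumptions.
+
+# Sketch of the SLSQP call; u_lo/u_hi are numpy arrays matching u_init's shape.
+import numpy as np
+import torch
+from scipy.optimize import minimize
+
+def solve_slsqp(total_cost, u_init, u_lo, u_hi):
+    shape = tuple(u_init.shape)
+
+    def fun(z):
+        u = torch.tensor(z, dtype=torch.float32).reshape(shape).requires_grad_(True)
+        loss = total_cost(u)
+        loss.backward()  # Jacobian of the objective via automatic differentiation
+        return loss.item(), u.grad.numpy().ravel().astype(np.float64)
+
+    bounds = list(zip(u_lo.ravel(), u_hi.ravel()))  # Eq. (9), handed to the solver
+    res = minimize(fun, u_init.numpy().ravel(), jac=True,
+                   method="SLSQP", bounds=bounds)
+    return res.x.reshape(shape)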
+4. RESULTS
+
+We demonstrate the effectiveness of our control framework for controlling building models in BOPTEST (Blum et al., 2021), a software framework for simulation-based benchmarking of building HVAC control algorithms. The rest of this section details two test cases that demonstrate the results of deriving different surrogate models and discusses the subsequent control results for the control algorithms described in Section 3.
+
+4.1 Model Description
+
+BOPTEST emulators use Modelica (Wetter et al., 2014) to represent realistic physical dynamics. Embedded in these models are baseline control algorithms that can be overwritten using supervisory and local-loop control signals. BOPTEST uses a containerized run-time environment (RTE) which enables rapid, repeatable deployment of models. Using this feature, we stand up several instances of models on servers and query these models to speed up data generation at scale for surrogate modeling. We also test controls on the same containers, representing digital twins of real buildings. We consider the following case studies:
+
+BESTEST Case 900 model  This test case is a single room with floor dimensions of 6 m x 8 m and a floor-to-ceiling height of 2.7 m. The building is assumed to be occupied by two people from 8 am to 6 pm each day. Heating and cooling are provided to the office using an idealized four-pipe fan coil unit (FCU), presented in Figure 1. The FCU contains a fan, cooling coil, and heating coil. The fan draws room air into the HVAC unit and supplies the conditioned air back to the room. No outside air is mixed during this process. The fan has a variable speed drive serving the fan motor. The cooling coil is served by chilled water produced by a chiller and the heating coil is served by hot water produced by a gas boiler. Two different PI controllers for heating and cooling modulate the supply air temperature and fan speed to provide cooling and heating load to the room.
heating load to the room. The schematics and control
|
345 |
+
|
346 |
+
(a)
|
347 |
+
(b)
|
348 |
+
Fig.
|
349 |
+
1.
|
350 |
+
Control
|
351 |
+
schematics
|
352 |
+
of
|
353 |
+
the
|
354 |
+
BESTEST
|
355 |
+
Case
|
356 |
+
900
|
357 |
+
model.
|
358 |
+
Source:
|
359 |
+
https://ibpsa.github.io/
|
360 |
+
project1-boptest/
|
361 |
+
mapping are shown in Figure 1. For our supervisory MPC
|
362 |
+
controller, we manipulate supply air temperature and fan
|
363 |
+
speed as control inputs to minimize the combined cooling,
|
364 |
+
heating, and fan power consumption while maintaining the
|
365 |
+
occupant comfort bounds. Assuming the building to be in
|
366 |
+
a climate close to Denver, CO, USA, the state and input
|
367 |
+
box constraints are as follows:
|
368 |
+
21oC ≤ xTzone,occ ≤ 24oC
|
369 |
+
(11)
|
370 |
+
15oC ≤ xTzone,unocc ≤ 30oC
|
371 |
+
(12)
|
372 |
+
0.0 ≤ ufan ≤ 1.0
|
373 |
+
(13)
|
374 |
+
12oC ≤ uTsupp ≤ 40oC
|
375 |
+
(14)
|
376 |
+
Multi-zone office (ASHRAE 2006 VAVReaheat)
|
377 |
+
The
|
378 |
+
test case represents the middle floor of an office build-
|
379 |
+
ing located in Chicago, IL, as described in the set of
|
380 |
+
DOE Commercial Building Benchmarks for new construc-
|
381 |
+
tion (Deru et al., 2011) with weather data from TMY3
|
382 |
+
for Chicago O’Hare International Airport. The represented
|
383 |
+
floor has five zones, with four perimeter zones and one
|
384 |
+
core zone. The occupied time for the HVAC system is
|
385 |
+
between 6 AM and 7 PM each day. The HVAC system is a
|
386 |
+
multi-zone single-duct Variable Air Volume (VAV) system
|
387 |
+
with pressure-independent terminal boxes with reheat. A
|
388 |
+
schematic of the system is shown in Figure 2. The cooling
|
389 |
+
and heating coils are water-based, served by an air-cooled
|
390 |
+
chiller and air-to-water heat pump respectively. A number
|
391 |
+
of low-level, local-loop controllers are used to maintain the
|
392 |
+
desired setpoints using the available actuators. The pri-
|
393 |
+
mary local-loop controllers are specified on the diagrams
|
394 |
+
of Figure 3 as C1 to C3. C1 is responsible for maintaining
|
395 |
+
the zone temperature setpoints as determined by the oper-
|
396 |
+
ating mode of the system and implements dual-maximum
|
397 |
+
logic. C2 is responsible for maintaining the duct static
|
398 |
+
pressure setpoint and implements a duct static pressure
|
399 |
+
reset strategy. C3 is responsible for maintaining the supply
|
400 |
+
air temperature setpoint as well as the minimum outside
|
401 |
+
air flow rate as determined by the operating mode of
|
402 |
+
the system. In this case, we assume the fan speed to be
|
403 |
+
constant and our supervisory MPC controller manipulates
|
404 |
+
the damper position and reheat control signal to control
|
405 |
+
the airflow and zone supply air temperature respectively
|
406 |
+
(at each zone). In addition, the central HVAC cooling and
|
407 |
+
heating units are manipulated to control the central supply
|
408 |
+
air temperature. The optimization objective is to minimize
|
409 |
+
the overall cooling and heating loads while maintaining the
|
410 |
+
occupant comfort bounds and central supply air tempera-
|
411 |
+
ture. The state and input box constraints are as follows:
|
412 |
+
21oC ≤ xTzonei,occ ≤ 24oC
|
413 |
+
(15)
|
414 |
+
15oC ≤ xTzonei,unocc ≤ 30oC
|
415 |
+
(16)
|
416 |
+
0.0 ≤ udami ≤ 1.0
|
417 |
+
(17)
|
418 |
+
0.0 ≤ uyReaHeai ≤ 1.0
|
419 |
+
(18)
|
420 |
+
∀i ∈ {1, 2, 3, 4, 5}
|
421 |
+
5oC ≤ xTsupp ≤ 20oC
|
422 |
+
(19)
|
423 |
+
0.0 ≤ uyHea ≤ 1.0
|
424 |
+
(20)
|
425 |
+
0.0 ≤ uyCoo ≤ 1.0
|
426 |
+
(21)
|
427 |
+
+4.2 System Identification
+
+We consider the three choices of models described in Section 2 for the single-zone and multi-zone cases. We describe how we sufficiently excite the system to generate data and report the training and out-of-training performance of each model.
+
+Data generation  For each time-step t = 0, ..., T − 1, we sample a random control input u_t from a uniform distribution over the feasible input space and pass the sampled control input to the BOPTEST simulation to get the next observation and disturbance. We collect the data up to time-step T, and repeat this procedure K times using different initial conditions. In the BESTEST case, we choose K = 120, T = 500, and use 100 distinct trajectories as training data, 10 for validation, and 10 for test. In the multi-zone office case, we choose K = 600, T = 1000, and use 500 trajectories as the training dataset, keeping 50 for validation and 50 for test purposes. Note that the test data, on which all results are reported, was never seen during training. A sketch of this excitation loop is given after Table 1 below.
+
+Fig. 2. Envelope, floorplan, and control schematics of the multi-zone office air simple emulator model of BOPTEST. Source: https://ibpsa.github.io/project1-boptest/
+
+Table 1. MSE (×10−5) for different model choices in the BESTEST case
+
+Model    Train MSE   Val MSE   Test MSE
+Linear   699.5       566.8     780.3
+MLP      8.846       12.70     17.56
+LSTM     1.418       1.726     2.145
|
522 |
+
The MLP framework consists of 4
|
523 |
+
layers with 256 nodes in each layer, and tanh(·) activation
|
524 |
+
layers in-between the MLP layers. For the LSTM model,
|
525 |
+
we implement 2 layers with 256 nodes for MLPenc and
|
526 |
+
MLPdec and choose the dimension of hidden and cell state
|
527 |
+
as 256. Mean squared error (MSE) is used for computing
|
528 |
+
training loss. For all surrogate models, we choose Adam
|
529 |
+
to optimize the parameters with learning rate=0.001, and
|
530 |
+
epoch=1000.
|
531 |
+
Predictive performance
|
532 |
+
Table 1 and Table 2 show the
|
533 |
+
results of test performance for single-zone and five-zone
|
534 |
+
(a)
|
535 |
+
(b)
|
536 |
+
(c)
|
537 |
+
Fig.
|
538 |
+
3.
|
539 |
+
Lower-level
|
540 |
+
control
|
541 |
+
schematics
|
542 |
+
for
|
543 |
+
five-zone
|
544 |
+
model.
|
545 |
+
Source:
|
546 |
+
https://ibpsa.github.io/
|
547 |
+
project1-boptest/
|
548 |
+
Table 2.
|
549 |
+
MSE (×10−5)
|
550 |
+
for
|
551 |
+
dif-
|
552 |
+
fer-
|
553 |
+
ent
|
554 |
+
MLP
|
555 |
+
hy-
|
556 |
+
per-
|
557 |
+
pa-
|
558 |
+
ram-
|
559 |
+
e-
|
560 |
+
ter
|
561 |
+
choices
|
562 |
+
in
|
563 |
+
multi-
|
564 |
+
zone
|
565 |
+
of-
|
566 |
+
fice
|
567 |
+
case
|
568 |
+
(Mx, Mu, Md)
|
569 |
+
Train MSE
|
570 |
+
Val MSE
|
571 |
+
Test MSE
|
572 |
+
(1, 1, 1)
|
573 |
+
511.6
|
574 |
+
623.9
|
575 |
+
618.6
|
576 |
+
(1, 1, 5)
|
577 |
+
476.0
|
578 |
+
623.8
|
579 |
+
624.3
|
580 |
+
(1, 5, 1)
|
581 |
+
20.46
|
582 |
+
21.74
|
583 |
+
24.35
|
584 |
+
(5, 1, 1)
|
585 |
+
82.43
|
586 |
+
98.92
|
587 |
+
103.8
|
588 |
+
(1, 5, 5)
|
589 |
+
14.71
|
590 |
+
17.76
|
591 |
+
18.47
|
592 |
+
(5, 1, 5)
|
593 |
+
78.38
|
594 |
+
98.17
|
595 |
+
100.06
|
596 |
+
(5, 5, 1)
|
597 |
+
21.20
|
598 |
+
23.67
|
599 |
+
26.87
|
600 |
+
(5, 5, 5)
|
601 |
+
10.37
|
602 |
+
14.80
|
603 |
+
14.82
|
604 |
+
models respectively. Losses are calculated using average
|
605 |
+
prediction error for 40 steps.For multi-step ahead predic-
|
606 |
+
tion, a for-loop is implemented in the forward propaga-
|
607 |
+
tion of the ML models. The results for single-zone and
|
608 |
+
multi-zone models demonstrate the superiority of LSTM
|
609 |
+
in prediction accuracy, although, MLP performance is
|
610 |
+
comparable in the five-zone case as depicted in Figure 4.
|
611 |
+
In Table 2, we compare the performance of different MLP
|
612 |
+
model choices with different lag values of the state, input,
|
613 |
+
and time-varying disturbances. (5,5,5) is the best model
|
614 |
+
among all choices but (1,5,5) model comes very close
|
615 |
+
with fewer model inputs. This model depends on lags
|
616 |
+
of weather data and control inputs, which we speculate
|
617 |
+
is not unrelated to the lags associated with lower-level
|
618 |
+
controllers in this system. We chose (1,5,5) as a more
|
619 |
+
simple, equally accurate choice. Figure 5 is a visual de-
|
620 |
+
piction of the predictive accuracy of the chosen MLP for
|
621 |
+
surrogate modeling of the five-zone model during three
|
622 |
+
distinct weather events (January, May, and August) for
|
623 |
+
the core zone. Each orange trajectory is a 50-step ahead
|
624 |
+
prediction (12.5 hours) starting from the leftmost point of
|
625 |
+
the trajectory. These results appear to be conclusive for
|
626 |
+
deploying the model in MPC.
|
627 |
+
4.3 Control Results
|
628 |
+
For all control algorithms, we deploy a receding-horizon
|
629 |
+
controller, wherein a 10-step ”look-ahead” trajectory is
|
630 |
+
generated using the optimization algorithm, and only
|
631 |
+
the first step of the optimization solution is passed to
|
632 |
+
BOPTEST model to obtain new measurements. The new
|
633 |
+
data point is then used as the initial condition for the
|
634 |
+
next iteration of the control optimization. In addition, to
|
635 |
+
|
636 |
+
+
|
637 |
+
PI
|
638 |
+
Map
|
639 |
+
PIScale
|
640 |
+
PI
|
641 |
+
&
|
642 |
+
PI
|
643 |
+
Limit
|
644 |
+
max> yHea
|
645 |
+
TSupSet
|
646 |
+
Map
|
647 |
+
yOA1
|
648 |
+
> yCoo
|
649 |
+
TSup
|
650 |
+
Max
|
651 |
+
yOA
|
652 |
+
supFanSpe
|
653 |
+
Map
|
654 |
+
yOA2speed up convergence, the previously optimized control
|
655 |
+
trajectory is used as the initial trajectory for warm-
|
656 |
+
starting the receding horizon replanning for the MPC
|
657 |
+
problem.
|
658 |
+
The control results for single-zone and multi-zone cases are
|
659 |
+
reported in Table 3 and Table 4, respectively. In the single-
|
660 |
+
zone case, LSTM model performs best for control. This
|
661 |
+
is expected from the superior predictive accuracy of the
|
662 |
+
model. fIt also has the best average computation time. As
|
663 |
+
or the control algorithm, Gradient-based approach finds a
|
664 |
+
better local minima for the problem. In the multi-zone
|
665 |
+
case, LSTM performs poorly (unexpectedly) and MLP
|
666 |
+
outperforms all models. Here, in contrast to the previous
|
667 |
+
case, SLSQP finds a better local minima. Next, we discuss
|
668 |
+
the significance of these results.
|
669 |
+
4.4 Discussion
|
670 |
+
The modeling results indicate that it is possible to derive
|
671 |
+
accurate ML models from the building emulators. It is
|
672 |
+
worth mentioning that the bottleneck in this process is
|
673 |
+
data generation which is not always trivial for hybrid
|
674 |
+
systems with many if-else conditions, low-level control
|
675 |
+
loops and system constraints, and finely-tuned numerical
|
676 |
+
solvers.
|
677 |
+
On the control side, we have run extensive tests using
|
678 |
+
SLSQP and Gradient-based approaches from different ini-
|
679 |
+
tial conditions. In the one-zone case, the gradient-based
|
680 |
+
approach with the LSTM model shows the lowest power
|
681 |
+
consumption with an acceptable discomfort level. How-
|
682 |
+
ever, in the multi-zone case, SLSQP with MLP model
|
683 |
+
reaches the lowest power consumption, even though LSTM
|
684 |
+
model shows better predictive performance. This can hap-
|
685 |
+
pen when the optimization problem in the control for-
|
686 |
+
mulation is highly non-convex. The complexity of the
|
687 |
+
surrogate model likely creates many additional local min-
|
688 |
+
ima, which in turn, depreciates the control performance.
|
689 |
+
This, somewhat contradictory, implies that better predic-
|
690 |
+
tive performance does not always guarantee better control
|
691 |
+
performance. We believe that based on this experiment, a
|
692 |
+
middle-ground between model complexity and predictive
|
693 |
+
performance should be considered for these types of MPC
|
694 |
+
problems. Alternatively, better control formulations might
|
695 |
+
help to curb this issue. Since we have found little precedent
|
696 |
+
in the literature, we are running more tests to find better
|
697 |
+
+Fig. 4. Test MSE for different choices of surrogate models in the multi-zone test case. LSTM and MLP have comparable performance and outperform the Linear model.
+Table 3. Average total power (kWh/m2), thermal discomfort (kh/zone), and computation time (sec) on the BESTEST case
+
+Model    Solver   Power    Discomfort   Time
+Linear   GDM      0.0189   1556         1.607
+Linear   SLSQP    0.2551   1528         0.933
+MLP      GDM      4.804    2.935        1.694
+MLP      SLSQP    5.059    5.207        1.684
+LSTM     GDM      4.818    2.081        0.620
+LSTM     SLSQP    4.943    4.415        0.661
+It is also worth pointing out that the framework is working as designed, helping to frame new hypotheses based on experimentation.
+Computation Time  By comparing the average computation time between the methods, we make the following interesting observations. First, both the gradient-based approach and SLSQP show comparable computation time, though the computation time of both solvers depends on their stopping criteria. For example, after running extensive tests, we decided that 100 iterations was a good stopping criterion for the gradient-based approach. We expect this hyperparameter tuning to be problem specific. Second, for the surrogate model, it is obvious that the MLP should take longer than the linear model to run. Surprisingly, the LSTM model, which has the most complex structure among the three candidates, shows the fastest computation time. We think this gap most likely comes from a difference in the implementation language. Each surrogate model has a for-loop to predict multiple steps. Although all surrogate models are implemented in PyTorch, the linear and MLP models conduct their for-loops in Python, while the LSTM model uses C++.
+5. CONCLUSION AND FUTURE WORK
+
+We presented a modeling and control framework for controlling physics-based building emulators. We have shown that our approach is successful in reducing cooling and heating loads in the BOPTEST emulator while satisfying occupant comfort and adhering to control system constraints.
+Fig. 5. Out-of-training predictive performance for the five-zone model during three distinct weather events (January, May, and August) for the core zone (top). The ambient temperature trajectories are depicted in red (bottom). The orange lines represent the 50-step-ahead predictions (12.5 hours) starting from the leftmost point of each trajectory. The full MSEs are reported in Table 2.
+Fig. 6. Result comparison for different choices of models and control algorithms. The top row shows the room temperature, the bottom row the relevant weather data, and the middle rows the corresponding control inputs. The results are divided into a cold (Jan) and a hot (Aug) weather event. (a) Results for control of the core zone in the multi-zone test case using SLSQP with the Linear, MLP, and LSTM models; the MLP-based control outperforms the LSTM- and Linear-model-based implementations. (b) MLP-based control results with the SLSQP solver slightly outperform the gradient-based approach.
+The approach is modular, meaning that it is compatible with various other choices of models and control algorithms. For example, while we did not succeed in training a good LSTM model for the five-zone case, we anticipate that the right hyperparameter tuning should address this issue, and we are actively working on it. The same is true for control. For example, we tested the framework with an iLQR controller, which failed to satisfy constraints. While we did not manage to get the results we expected, we anticipate that significantly better control results are possible with iLQR, and we are currently fixing our implementation of the algorithm. This is especially important since iLQR has shown superior performance for nonlinear optimal control problems (Li and Todorov, 2007). We are also exploring other fast first-order solvers with alternative control formulations. For example, we are considering OSQP (Stellato et al., 2020), which would significantly speed up the optimization while producing high-quality solutions, or distributed ADMM (Boyd et al., 2011) for district-level problems. In addition, we are actively working with the developers of BOPTEST to control scaled-up models, including multiple coupled buildings, with the framework.
+
+Table 4. Average total power (kWh/m2), thermal discomfort (kh/zone), and computation time (sec) on the multi-zone office case
+
+Model    Solver   Power   Discomfort   Time
+Linear   GDM      2.807   10.44        1.504
+Linear   SLSQP    2.487   11.40        1.600
+MLP      GDM      3.458   4.054        1.782
+MLP      SLSQP    2.778   3.154        2.144
+LSTM     GDM      2.222   124.7        0.570
+LSTM     SLSQP    2.880   35.48        0.818
+
+The main bottleneck for scaling the current approach is the customized nature of the data generation process. In the current process, many trials and errors are required to find a feasible input space that does not break the emulator in forward simulations. Recent studies (Chakrabarty et al., 2022) provide some promising insight into more robust sampling procedures. We are currently working on incorporating similar approaches into our process.
+
+Last but not least, while in this paper we focused on control as an application, we firmly believe that system design, fault diagnosis, and reliability are other applications that will benefit from the proposed modeling approach, and we are actively investigating problems in these domains.
Atam, E. and Helsen, L. (2016). Control-oriented thermal
|
998 |
+
modeling of multizone buildings: Methods and issues:
|
999 |
+
Intelligent control of a building system. IEEE Control
|
1000 |
+
systems magazine, 36(3), 86–111.
|
1001 |
+
Blum, D., Arroyo, J., Huang, S., Drgoˇna, J., Jorissen,
|
1002 |
+
F., Walnum, H.T., Chen, Y., Benne, K., Vrabie, D.,
|
1003 |
+
Wetter, M., et al. (2021). Building optimization testing
|
1004 |
+
framework (boptest) for simulation-based benchmarking
|
1005 |
+
of control strategies in buildings. Journal of Building
|
1006 |
+
Performance Simulation, 14(5), 586–610.
|
1007 |
+
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.,
|
1008 |
+
et al. (2011). Distributed optimization and statistical
|
1009 |
+
learning via the alternating direction method of multi-
|
1010 |
+
pliers. Foundations and Trends® in Machine learning,
|
1011 |
+
3(1), 1–122.
|
1012 |
+
Chakrabarty, A., Bortoff, S.A., and Laughman, C.R.
|
1013 |
+
(2022). Simulation failure-robust bayesian optimization
|
1014 |
+
for data-driven parameter estimation. IEEE Transac-
|
1015 |
+
tions on Systems, Man, and Cybernetics: Systems.
|
1016 |
+
Deru, M., Field, K., Studer, D., Benne, K., Griffith, B.,
|
1017 |
+
Torcellini, P., Liu, B., Halverson, M., Winiarski, D.,
|
1018 |
+
Rosenberg, M., et al. (2011). Us department of energy
|
1019 |
+
commercial reference building models of the national
|
1020 |
+
building stock. National Renewable Energy Laboratory.
|
1021 |
+
Drgoˇna, J., Arroyo, J., Figueroa, I.C., Blum, D., Arendt,
|
1022 |
+
K., Kim, D., Oll´e, E.P., Oravec, J., Wetter, M., Vrabie,
|
1023 |
+
D.L., et al. (2020). All you need to know about model
|
1024 |
+
predictive control for buildings.
|
1025 |
+
Annual Reviews in
|
1026 |
+
Control.
|
1027 |
+
Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup,
|
1028 |
+
D., and Meger, D. (2018). Deep reinforcement learning
|
1029 |
+
that matters.
|
1030 |
+
In Thirty-Second AAAI Conference on
|
1031 |
+
Artificial Intelligence.
|
1032 |
+
Hornik, K., Stinchcombe, M., and White, H. (1989). Multi-
|
1033 |
+
layer feedforward networks are universal approximators.
|
1034 |
+
Neural networks, 2(5), 359–366.
|
1035 |
+
Kingma,
|
1036 |
+
D.P.
|
1037 |
+
and
|
1038 |
+
Ba,
|
1039 |
+
J.
|
1040 |
+
(2014).
|
1041 |
+
Adam:
|
1042 |
+
A
|
1043 |
+
method for stochastic optimization.
|
1044 |
+
arXiv preprint
|
1045 |
+
arXiv:1412.6980.
|
1046 |
+
Kraft, D. (1988).
|
1047 |
+
A software package for sequential
|
1048 |
+
quadratic programming.
|
1049 |
+
Forschungsbericht- Deutsche
|
1050 |
+
Forschungs- und Versuchsanstalt fur Luft- und Raum-
|
1051 |
+
fahrt.
|
1052 |
+
Levine, S., Finn, C., Darrell, T., and Abbeel, P. (2016).
|
1053 |
+
End-to-end training of deep visuomotor policies. The
|
1054 |
+
Journal of Machine Learning Research, 17(1), 1334–
|
1055 |
+
1373.
|
1056 |
+
Li, W. and Todorov, E. (2007).
|
1057 |
+
Iterative linearization
|
1058 |
+
methods for approximately optimal control and esti-
|
1059 |
+
mation of non-linear stochastic system.
|
1060 |
+
International
|
1061 |
+
Journal of Control, 80(9), 1439–1453.
|
1062 |
+
Lillicrap, T.P., Hunt, J.J., Pritzel, A., Heess, N., Erez,
|
1063 |
+
T., Tassa, Y., Silver, D., and Wierstra, D. (2015).
|
1064 |
+
Continuous control with deep reinforcement learning.
|
1065 |
+
arXiv preprint arXiv:1509.02971.
|
1066 |
+
Lin, Q., Loxton, R., and Teo, K.L. (2014). The control
|
1067 |
+
parameterization method for nonlinear optimal control:
|
1068 |
+
a survey. Journal of Industrial and management opti-
|
1069 |
+
mization, 10(1), 275–309.
|
1070 |
+
Mostafavi, S., Doddi, H., Kalyanam, K., and Schwartz,
|
1071 |
+
D. (2022).
|
1072 |
+
Nonlinear moving horizon estimation and
|
1073 |
+
model predictive control for buildings with unknown
|
1074 |
+
hvac dynamics. Ifac-Papersonline, Accepted.
|
1075 |
+
Oei, M., Guenther, J., B¨ohm, M., Park, S., and Sawodny,
|
1076 |
+
O. (2020).
|
1077 |
+
A bilinear approach to model predictive
|
1078 |
+
control for thermal conditioning of adaptive buildings.
|
1079 |
+
IFAC-PapersOnLine, 53(2), 8383–8388.
|
1080 |
+
O’Dwyer,
|
1081 |
+
E.,
|
1082 |
+
Atam,
|
1083 |
+
E.,
|
1084 |
+
Falugi,
|
1085 |
+
P.,
|
1086 |
+
Kerrigan,
|
1087 |
+
E.,
|
1088 |
+
Zagorowska, M., and Shah, N. (2022).
|
1089 |
+
A modelling
|
1090 |
+
workflow for predictive control in residential buildings.
|
1091 |
+
|
1092 |
+
In Active Building Energy Systems, 99–128. Springer.
|
1093 |
+
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang,
|
1094 |
+
E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and
|
1095 |
+
Lerer, A. (2017). Automatic differentiation in pytorch.
|
1096 |
+
31st Conference on Neural Information Processing Sys-
|
1097 |
+
tems (NIPS 2017), Long Beach, CA, USA.
|
1098 |
+
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J.,
|
1099 |
+
Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga,
|
1100 |
+
L., et al. (2019). Pytorch: An imperative style, high-
|
1101 |
+
performance deep learning library. Advances in neural
|
1102 |
+
information processing systems, 32.
|
1103 |
+
Schmidhuber, J. (2015). Deep learning in neural networks:
|
1104 |
+
An overview. Neural networks, 61, 85–117.
|
1105 |
+
Siami-Namini, S., Tavakoli, N., and Namin, A.S. (2018). A
|
1106 |
+
comparison of arima and lstm in forecasting time series.
|
1107 |
+
In 2018 17th IEEE international conference on machine
|
1108 |
+
learning and applications (ICMLA), 1394–1401. IEEE.
|
1109 |
+
Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L.,
|
1110 |
+
Van Den Driessche, G., Schrittwieser, J., Antonoglou,
|
1111 |
+
I., Panneershelvam, V., Lanctot, M., et al. (2016).
|
1112 |
+
Mastering the game of go with deep neural networks
|
1113 |
+
and tree search. nature, 529(7587), 484.
|
1114 |
+
Stellato, B., Banjac, G., Goulart, P., Bemporad, A., and
|
1115 |
+
Boyd, S. (2020).
|
1116 |
+
Osqp: An operator splitting solver
|
1117 |
+
for quadratic programs.
|
1118 |
+
Mathematical Programming
|
1119 |
+
Computation, 12(4), 637–672.
|
1120 |
+
Sturzenegger, D., Gyalistras, D., Morari, M., and Smith,
|
1121 |
+
R.S. (2015).
|
1122 |
+
Model predictive climate control of a
|
1123 |
+
swiss office building: Implementation, results, and cost–
|
1124 |
+
benefit analysis. IEEE Transactions on Control Systems
|
1125 |
+
Technology, 24(1), 1–12.
|
1126 |
+
Todorov, E. and Li, W. (2005).
|
1127 |
+
A generalized iterative
|
1128 |
+
LQG method for locally-optimal feedback control of
|
1129 |
+
constrained nonlinear stochastic systems. In Proceedings
|
1130 |
+
of American Control Conference, 300 – 306.
|
1131 |
+
United States Energy Information Administration (2021).
|
1132 |
+
Total energy monthly data.
|
1133 |
+
URL https://www.eia.
|
1134 |
+
gov/totalenergy/data/monthly/.
|
1135 |
+
Virtanen, P., Gommers, R., Oliphant, T.E., Haberland,
|
1136 |
+
M., Reddy, T., Cournapeau, D., Burovski, E., Peterson,
|
1137 |
+
P., Weckesser, W., Bright, J., et al. (2020).
|
1138 |
+
Scipy
|
1139 |
+
1.0: fundamental algorithms for scientific computing in
|
1140 |
+
python. Nature methods, 17(3), 261–272.
|
1141 |
+
Walker, S.S., Lombardi, W., Lesecq, S., and Roshany-
|
1142 |
+
Yamchi, S. (2017).
|
1143 |
+
Application of distributed model
|
1144 |
+
predictive approaches to temperature and co2 concen-
|
1145 |
+
tration control in buildings. IFAC-PapersOnLine, 50(1),
|
1146 |
+
2589–2594.
|
1147 |
+
Wetter, M., Zuo, W., Nouidui, T.S., and Pang, X. (2014).
|
1148 |
+
Modelica buildings library. Journal of Building Perfor-
|
1149 |
+
mance Simulation, 7(4), 253–270.
|
1150 |
+
|
ANFQT4oBgHgl3EQf8jdP/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
AdAzT4oBgHgl3EQfF_tg/content/2301.01020v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3fd13189bc5602af2662f53db8efafa8bfccb215962a33654c754674721c063e
+size 161342

AdAzT4oBgHgl3EQfF_tg/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4810fc435aae5e92939b2252b99134cd439f88790629be794d166646680d8d81
+size 1900589

AdAzT4oBgHgl3EQfF_tg/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc6a8a6ff0563992dfbff9560ee14bc6c53dc9131edb9e658bf7e00d18126502
+size 71888
AtE2T4oBgHgl3EQf8QmS/content/tmp_files/2301.04217v1.pdf.txt
ADDED
@@ -0,0 +1,418 @@
arXiv:2301.04217v1 [math.CO] 10 Jan 2023

Neighbourhood complexity of graphs of bounded twin-width∗

Édouard Bonnet†, Florent Foucaud‡ §, Tuomo Lehtilä¶ ‖, Aline Parreau∗∗

January 12, 2023

Abstract. We give essentially tight bounds for ν(d, k), the maximum number of distinct neighbourhoods on a set X of k vertices in a graph with twin-width at most d. Using the celebrated Marcus-Tardos theorem, two independent works [Bonnet et al., Algorithmica '22; Przybyszewski '22] have shown the upper bound ν(d, k) ⩽ exp(exp(O(d)))·k, with a double-exponential dependence in the twin-width. We give a short self-contained proof that for every d and k,

    ν(d, k) ⩽ (d + 2)·2^{d+1}·k = 2^{d+O(log d)}·k,

and build a bipartite graph implying ν(d, k) ⩾ 2^{d+log d+O(1)}·k, in the regime when k is large enough compared to d.

1 Introduction
The aim of this paper is to refine our understanding of how complex the neighbourhoods of graphs of bounded twin-width can be. We provide an improved bound on the neighbourhood complexity of such graphs, complemented by a construction showing that our bound is essentially tight. The improvements in the bounds for neighbourhood complexity translate directly to better structural bounds and algorithms, in some contexts which are explained below.

Twin-width. Twin-width is a recently introduced graph invariant [10]; see Section 2 for a definition. It can be naturally extended to matrices over finite alphabets and binary structures [10, 7, 12]. Although classes of bounded twin-width are broad and diverse, they allow (most of the time, provided a witness is given as an input) improved algorithms, compared to what is possible on general graphs or binary structures.

Most prominently, it was shown [10] that, on n-vertex graphs given with a d-sequence (a witness that their twin-width is at most d), deciding if a first-order sentence ϕ holds can be solved in time f(d, ϕ)·n, for some computable function f. In some special cases, such as for k-Independent Set or k-Dominating Set¹, single-exponential parameterised algorithms running in time 2^{O_d(k)}·n are possible [5]. In the same setting, the triangles of an n-vertex m-edge graph can be counted in time O(d²n + m) [19]. See [8, 18, 25] for more applications of twin-width with an algorithmic flavour.
Classes of binary structures with bounded twin-width include bounded treewidth, and more generally, bounded clique-width classes, proper minor-closed classes, posets of bounded width (that is, whose antichains are of bounded size), hereditary subclasses of permutations, as well as Ω(log n)-subdivisions of n-vertex graphs [10], and particular classes of (bounded-degree) expanders [6]. A rich range of geometric graph classes have bounded twin-width, such as map graphs, bounded-degree string graphs [10], classes with bounded queue number or bounded stack number [6], segment graphs with no K_{t,t} subgraph, and visibility graphs of simple polygons without large independent sets [4], to give a few examples.

∗ Florent Foucaud was financed by the French government IDEX-ISITE initiative 16-IDEX-0001 (CAP 20-25) and by the ANR project GRALMECO (ANR-21-CE48-0004). Tuomo Lehtilä's research was supported by the Finnish Cultural Foundation and by the Academy of Finland grant 338797.
† Univ Lyon, CNRS, ENS de Lyon, Université Claude Bernard Lyon 1, LIP UMR5668, France.
‡ Université Clermont-Auvergne, CNRS, Mines de Saint-Étienne, Clermont-Auvergne-INP, LIMOS, 63000 Clermont-Ferrand, France.
§ Univ. Orléans, INSA Centre Val de Loire, LIFO EA 4022, F-45067 Orléans Cedex 2, France.
¶ Univ Lyon, UCBL, CNRS, LIRIS - UMR 5205, F69622, France.
‖ University of Turku, Department of Mathematics and Statistics, Turku, Finland.
∗∗ Univ Lyon, CNRS, INSA Lyon, UCBL, Centrale Lyon, Univ Lyon 2, LIRIS, UMR5205, F-69622 Villeurbanne, France.
¹ That is, the problems of deciding whether in an input graph, there are k vertices that are pairwise non-adjacent or whose closed neighbourhood is the entire vertex set, respectively.
While efficiently approximating the twin-width is a challenging open question in general, this is known to be possible for the above-mentioned classes (albeit a representation may be needed for the geometric classes) and for ordered graphs [7]. By that, we mean that there are two computable functions f, g and an algorithm that, for an input n-vertex graph G from the class and an integer k, and in time g(k)·n^{O(1)}, either outputs an f(k)-sequence (again, witnessing that the twin-width is at most f(k)) or correctly reports that the twin-width of G is larger than k.

Structural properties of graph classes of bounded twin-width include χ-boundedness [5], even with a quasipolynomial binding function [24], smallness (i.e., containing up to isomorphism 2^{O(n)} n-vertex graphs) [6, 12], and Vapnik-Chervonenkis (VC) density at most 1 [9, 26]. The latter property is the topic of the current article.
VC density and neighbourhood complexity. VC density is related to the celebrated VC dimension [29]. Given a set-system (or hypergraph) S on a domain X, the shatter function π_S : ℕ → ℕ is defined as

    π_S(n) = max_{A ⊆ X, |A| = n} |{Y ⊆ A | ∃S ∈ S, Y = A ∩ S}|.

The Perles-Sauer-Shelah lemma states that π_S(n) = O(n^d) if the VC dimension of S (i.e., the supremum of {n | π_S(n) = 2^n}) is a finite integer d. Then the VC density of S is defined as inf{c ∈ ℝ | π_S(n) = O(n^c)}, and as +∞ if the VC dimension is unbounded.
We define the VC density of an infinite class C of finite graphs as the VC density of the infinite set-system formed by the neighbourhood hypergraph of the disjoint union of the graphs of C, that is, {N_G(v) | v ∈ V(⊎_{G∈C} G)}, where N_G(v) denotes the set of neighbours of v in G. The VC density is an important measure in finite model theory, often more tractable than the VC dimension (see for instance [1, 2]). Tight bounds have been obtained for the VC density of (logically) definable hypergraphs from graph classes of bounded clique-width [23] (with monadic second-order logic), and more recently, of bounded twin-width [18] (with first-order logic).

In structural graph theory and kernelisation [16] (a subarea of parameterised complexity [14]), the function π_{N(G)}, where N(G) is the neighbourhood hypergraph of G, is often² called the neighbourhood complexity. (See [3] for an algorithmic study of the computation of this notion.) In these contexts, obtaining the best possible upper bound for π_{N(G)} (and not just the exponent matching the VC density) translates to qualitatively better structural bounds and algorithms; see for instance [9, 11, 15, 28].

The r-neighbourhood complexity of G is the neighbourhood complexity of G^r, the graph with the same vertex set as G and an edge between two vertices at distance at most r in G. Reidl et al. [28] showed that among subgraph-closed classes, bounded expansion³ is equivalent to linear r-neighbourhood complexity. Indeed, the more general nowhere dense classes [21] (another invention of the Sparsity program [22]) have almost linear r-neighbourhood complexity [15]: there is a function f : ℕ × ℕ → ℕ such that for every ε > 0, π_{N(G^r)}(n) ⩽ f(r, ε)·n^{1+ε} for all n. On hereditary classes, i.e., classes closed under taking induced subgraphs, there is no known characterisation of linear neighbourhood complexity.

As we already mentioned in a different language, bounded twin-width classes have been proven to have linear neighbourhood complexity. See [9, Lemma 3] or [26, Section 3] for two independent proofs, both using the Marcus-Tardos theorem [20]. However, the dependence in the twin-width is doubly exponential in both papers. Setting ν(d, k) as the maximum number of distinct neighbourhoods on a set of size k within a graph of twin-width at most d, i.e., max{π_{N(G)}(k) | G has twin-width at most d}, they show that ν(d, k) ⩽ exp(exp(O(d)))·k.

Our results. In this note, we give in Section 3 a self-contained proof (not using the Marcus-Tardos theorem) that ν(d, k) ⩽ 2^{d+O(log d)}·k. In Section 4, we complement that proof with a construction of a bipartite graph witnessing that ν(d, k) ⩾ 2^{d+log d+O(1)}·k, which makes our single-exponential upper bound in twin-width essentially tight.

² Some authors define the neighbourhood complexity as n ↦ π_{N(G)}(n)/n.
³ A notion from the Sparsity theory of Nešetřil and Ossona de Mendez [22] extending bounded degree and proper minor-free classes.
2 Preliminaries

We use the standard graph-theoretic notations: V(G), E(G), G[S], G − S respectively denote the vertex set, edge set, subgraph of G induced by S, and subgraph of G induced by V(G) \ S. If v ∈ V(G), then N_G(v) (or N(v) if G is clear from the context) denotes the set of neighbours of v in G. If X ⊆ V(G), then an X-neighbourhood is a set N(v) ∩ X for some v ∈ V(G).

We now define the twin-width of a graph, following the definition of [10]. A trigraph is a triple G = (V(G), E(G), R(G)) where E(G) and R(G) are two disjoint sets of edges on V(G): the usual edges (also called black edges) and the red edges. Informally, a red edge between two vertices u and v means that some errors have been made between u and v. The red degree of a trigraph is the maximum degree of the graph (V(G), R(G)). Any graph G can be interpreted as a trigraph G = (V(G), E(G), ∅). Given a trigraph and two vertices u, v ∈ V(G) (not necessarily adjacent), the trigraph G/u,v = G′ is obtained by contracting u and v into a new vertex w such that:

• V(G′) = {w} ∪ V(G) \ {u, v};
• the edges between vertices of V(G) \ {u, v} are the same in G′;
• the following edges are incident to w:
  – wx ∈ E(G′) if xu ∈ E(G) and xv ∈ E(G);
  – wx ∉ E(G′) ∪ R(G′) if xu ∉ E(G) ∪ R(G) and xv ∉ E(G) ∪ R(G);
  – wx ∈ R(G′) otherwise.

In other words, the common black neighbours of u and v are black neighbours of w. All the other neighbours of u or v are red neighbours of w. Red edges stay red, black edges stay black, and a mix of red and black edges becomes red. We say that G/u,v is a contraction of G. A d-sequence of an n-vertex graph G is a sequence of n trigraphs G_n = G, G_{n−1}, ..., G_1 such that each trigraph G_i is obtained from G_{i+1} by a contraction and has red degree at most d. The twin-width of G, denoted by tww(G), is the minimum integer d such that G admits a d-sequence. Note that an induced subgraph of G has twin-width smaller than or equal to the twin-width of G [10].

If u ∈ G_i, then u(G) denotes the set of vertices of G eventually contracted to u in G_i. Instead of considering the trigraphs G_i, we may prefer to deal with the partitions induced by the sets u(G) for u in G_i: P_i = {u(G) | u ∈ V(G_i)}. We say that there is a red edge between two parts u(G) and v(G) of P_i if uv is red in G_i.
3 Upper bound on the number of distinct neighbourhoods

We state and prove our upper bound on the maximum number of distinct X-neighbourhoods in bounded twin-width graphs.

Theorem 1. Let G be an n-vertex graph of twin-width d, and X ⊆ V(G). Then the number of distinct X-neighbourhoods in G is at most (d + 2)·2^{d+1}·|X| = 2^{d+O(log d)}·|X|.

Proof. Fix X ⊆ V(G). First of all, for all vertices of V(G) \ X with the same X-neighbourhood, we keep only one representative. Note that the new graph G′′ is an induced subgraph of G, thus its twin-width is at most d. We further modify the graph G′′ by adding, for each v ∈ X, a new vertex u to G′′ with N(u) = N(v), if no such vertex exists in V(G′′) \ X. The new graph is called G′ and it has the same twin-width as G′′.

Let M = (d + 2)·2^{d+1} + 1. We prove by induction on n that an n-vertex graph of twin-width at most d with a set X of k vertices, where all vertices outside X have a distinct X-neighbourhood, satisfies n ⩽ kM. This will prove that G′ has at most kM vertices, and thus that in G, there are at most (M − 1)k distinct X-neighbourhoods.

The statement is trivially true for n ⩽ 5 since M ⩾ 5, for all d ⩾ 0.
Thus, assume n ⩾ 6. In particular, we have k > 1. Let x ∈ X. Let X′ = X \ {x} and let T_x be the set of pairs of vertices outside X that are twins with respect to X′, i.e.,

    T_x = { {u, v} ⊆ V(G′) \ X | N(u) ∩ X′ = N(v) ∩ X′ }.

Since every vertex of V(G′) \ X has a distinct neighbourhood in X, there are at most two vertices of V(G′) \ X with the same (possibly empty) neighbourhood N in X′; namely the vertices u, v ∈ V(G′) \ X with N(u) ∩ X = N and N(v) ∩ X = N ∪ {x} (if they exist). Hence, T_x consists of pairwise-disjoint pairs of vertices.

We prove the following claim.

Claim A. There exists a vertex x of X such that T_x comprises at most M − 1 pairs, in G′.

Proof of claim. By contradiction, assume this is not the case: for every x in X, T_x has size at least M. Consider a d-sequence of contractions G′_n, ..., G′_1 of G′. Consider the last step G′_i of the sequence where all the parts of P_i contain at most one vertex of X (that is, contrary to P_i, some part of P_{i−1} contains two vertices of X).

Let P be a part of P_i. Let x be the unique (if there exists one) element of P ∩ X. Then we claim that |P \ X| ⩽ 2^{d+1}. Indeed, any two vertices of P \ X have some vertex in the symmetric difference of their X-neighbourhoods: either it is x, or some vertex x′ of X outside P. If that distinguishing vertex is some x′ that is not in P, then there has to be a red edge between P and the part that contains x′. There are at most d red edges with P as an extremity. Since all the elements of X are in distinct parts in G′_i, it means that d + 1 vertices of X are enough to distinguish all the X-neighbourhoods of vertices of P \ X, and thus |P \ X| ⩽ 2^{d+1}.

We now consider the next contraction in the sequence, which leads to G′_{i−1}. By definition of G′_i, it must contract two vertices corresponding to two parts of P_i that both contain an element of X. Let x_1 and x_2 be these two elements of X. Let Q be the part of P_{i−1} that contains both x_1 and x_2. By our assumption, T_{x_1} has size at least M. Let {u, v} be a pair of T_{x_1}. Since u and v have the same neighbourhood in X \ {x_1}, they are either both adjacent or both non-adjacent to x_2, and exactly one of them is adjacent to x_1. Thus, necessarily, one vertex among the pair {u, v} is adjacent to exactly one vertex among {x_1, x_2}. In particular, if this vertex is not in Q, then there has to be a red edge between the part containing this vertex and the part Q in G′_{i−1}. Since T_{x_1} contains at least M pairs (which are disjoint) and Q has at most 2^{d+2} vertices not in X, there are at least M − 2^{d+2} vertices not in X whose part in G′_{i−1} has a red edge to Q. Since each other part has at most 2^{d+1} vertices not in X, this makes at least (M − 2^{d+2})/2^{d+1} red edges incident to Q. Thus, we must have (M − 2^{d+2})/2^{d+1} ⩽ d, leading to M ⩽ 2^{d+1}·(d + 2), a contradiction that proves the claim. (□)

By Claim A, there exists a vertex x ∈ X such that |T_x| ⩽ M − 1. Let Y be a set of |T_x| vertices that intersects each pair of T_x exactly once. Let G_Y = G′ − (Y ∪ {x}). Then, X′ = X \ {x} is a vertex set of size k − 1 such that all X′-neighbourhoods of vertices outside X′ are distinct. The graph G_Y has at least n − M vertices, and twin-width at most d. By induction, we have n − M ⩽ |V(G_Y)| ⩽ (k − 1)M and thus, n ⩽ kM. Hence, once we recall that no vertex in X has a unique X-neighbourhood, there are at most (M − 1)k distinct X-neighbourhoods, which completes the proof.
4 Lower bound on the number of distinct neighbourhoods

Notice that when |X| and tww(G) are roughly the same, the bound from Theorem 1 cannot be sharp, since G′ has at most 2^{|X|} + |X| vertices. However, when |X| is large enough compared to tww(G), we next show that the bound is sharp up to a constant factor.

Proposition 2. There is a positive constant c such that for any integer d, there is a bipartite graph G of twin-width at most d, and a large enough set X ⊆ V(G), with at least c·d·2^d·|X| distinct X-neighbourhoods in G.

Proof. Observe that the claim is clearly true for any small d. Thus, we do not need to consider separately graphs with small twin-width upper bounded by a constant. Hence, we assume from now on that d ⩾ d′ where d′ is some positive constant.
We construct the graph G as follows. Let A, B, C ∈ ℤ be three constants that will be given later (A and B will be roughly equal to √d and C will be roughly equal to d). Let X = {x_1, ..., x_k} be an independent set of k vertices. Our goal is that each vertex in V(G) \ X has a unique X-neighbourhood. For any integers i, j, t with 1 ⩽ i ⩽ j ⩽ i + A − 1, j + 2 ⩽ t ⩽ j + 1 + B and t ⩽ k − C, we create a set V_{i,j,t} of vertices as follows. Consider the set X_t = {x_{t+1}, ..., x_{t+C}}. For every subset Y of X_t, let Y′ = {x_i, ..., x_j, x_t} ∪ Y and add a vertex v_{Y′} to V_{i,j,t}, making it adjacent to the vertices of Y′. Each set V_{i,j,t} has size 2^C and there are Θ(kAB) (for fixed A and B and growing k) such sets. Thus there are Θ(kAB·2^C) vertices in the graph.

Any two vertices not in X have distinct X-neighbourhoods. Indeed, by considering the natural ordering of X induced by the indices, any vertex not in X is first adjacent to a consecutive interval of vertices from x_i to x_j, then is not adjacent to the vertices from x_{j+1} to x_{t−1} (a range which is not empty since t ⩾ j + 2), and then adjacent to x_t. Thus, if two vertices have the same X-neighbourhood, they must be in the same set V_{i,j,t}. But then, they have a distinct neighbourhood in {x_{t+1}, ..., x_{t+C}}.

We now prove that the twin-width of G is at most M = max{AB, C} + 2. For that, we give a sequence of contractions with red degree at most M.

The contraction sequence is split into k − C steps, one for each vertex of X. Let 0 ⩽ i ⩽ k − C − 1. Step 0 corresponds to the starting point, where each vertex is alone. Let i ⩾ 1. After Step i, there will be the following parts in the corresponding partition (vertices not in any part have not yet been contracted):

• For each j, t such that i ⩽ j ⩽ i + A − 1 and j + 2 ⩽ t ⩽ j + 1 + B, there is a part B_{j,t}. The parts B_{i,t} (parts with j = i) contain all the vertices of the sets V_{i′,j′,t} such that j′ ⩽ i. The parts B_{j,t} with j > i contain all the vertices of the sets V_{i′,j′,t} such that i′ ⩽ i and j′ = j. Note that there are AB non-empty B_{j,t} parts in total.
• There is a part X_0 that contains the vertices x_1 to x_i of X.
• There is a part T (for "trash") that contains all the vertices of the sets V_{i′,j,t} with t ⩽ i + 1.

All the other vertices are not yet contracted. This corresponds to the vertices x_{i+1} to x_k of X and to the vertices of the sets V_{i′,j,t} with i′ > i. Indeed, if i′ ⩽ i and t ⩽ i + 1, then the vertices of V_{i′,j,t} are in T. If t ⩾ i + 2 but j ⩽ i, then they are in the part B_{i,t}. If j > i, then they are in the part B_{j,t}.

We first prove that the red degree after Step i is at most M. Then, we explain how to get from Step i to Step i + 1 while keeping the red degree at most M.

Consider the part B_{j,t} at the end of Step i. A vertex in this part belongs to some set V_{i′,j′,t} with i′ ⩽ i, and j′ = j if j > i, or j′ ⩽ i otherwise. In particular, two vertices of B_{j,t} are adjacent to all the vertices between x_{i+1} and x_j, to no vertex between x_{j+1} and x_{t−1}, to x_t, and to no vertex after x_{t+C}. Thus, there is a red edge between the parts B_{j,t} and X_0, and C red edges between the part B_{j,t} and the vertices {x_{t+1}, ..., x_{t+C}}. Therefore, the number of red edges incident with B_{j,t} is at most C + 1.

Consider now the part T. Vertices in T are adjacent only to vertices of X up to x_{i+C+1}. Since the vertices x_1 to x_i are all in the part X_0, the red degree of T is at most C + 2.

Single vertices not in X have no incident red edges: indeed, they are all in some sets V_{i′,j,t} for i′ > i and thus are not adjacent to any vertex of X_0. For the same reason, there are red edges incident to X_0 only towards T and the parts B_{j,t}. Hence, the red degree of X_0 is at most AB + 1. Similarly, the red degree of x_{i′}, i′ > i + 1, is at most AB + 1. Moreover, the red degree of x_{i+1} is at most one; indeed, the only red edge is between x_{i+1} and T.

Finally, the red degree after Step i is at most max{AB + 1, C + 2} ⩽ M.

Let i ⩾ 0. We now explain how we perform the contractions to go from Step i to Step i + 1.

1. (only if i ⩾ 1) For any i + 3 ⩽ t ⩽ i + 2 + B, merge the part B_{i,t} with the part B_{i+1,t}. The only new red edge this merging may lead to, when B_{i,t} is non-empty, is between B_{i+1,t} and x_{i+1}. Thus, we add only one red edge between x_{i+1} and B_{i+1,t}. Hence, the red degree of B_{i+1,t} is at most C + 2 and the red degree of x_{i+1} is at most 2.

2. Add all the vertices of V_{i+1,j,t}, for each j, t, to the part (that might be empty at this point) B_{j,t}. The red degree of B_{j,t} is at most C + 2 since we might have a red edge between B_{j,t} and x_{i+1}. The number of nonempty parts B_{j,t} at this point is AB + 1 (there is still the part B_{i,i+2}). Adding T, this gives AB + 2 red edges incident to a vertex in X (or to the part X_0).
3. Add x_{i+1} to X_0. The part X_0 has red edges only to the parts B_{i+1,t}, to B_{i,i+2} and to T, but no edges to the single vertices. Thus, it has red degree at most AB + 2.

4. Put the part B_{i,i+2} into T. This part is only adjacent to vertices up to x_{i+2+C}, and thus has C + 2 red edges.

Thus, at each point, the red degree is always at most M = max{AB, C} + 2.

The process ends at Step i = k − C − 1. Then, all the vertices not in X are in some parts, and there are at most AB + 1 such parts. On the other side of the bipartition, we have the part X_0 and C + 1 single vertices. Thus, the graph is bipartite with both sides of size at most M. One can contract each part independently to finish the contraction sequence.

To conclude, taking C = d − 2 and A = B = ⌊√(d − 2)⌋, we have M ⩽ d and kAB·2^C = Θ(k·d·2^d). Notice that we may assume that A, B and C are positive, since d ⩾ d′ where d′ was some well-chosen positive constant. This concludes the proof.
5 Conclusion

We have given an improved and tight upper bound for the neighbourhood complexity of graphs of bounded twin-width. Unlike the previously known (weaker) bounds, our method is simple and avoids the use of the Marcus-Tardos theorem. We hope that it can inspire future works in the area.

It is known that the twin-width of G^r can be upper-bounded by a function of the twin-width of G and r [10]. Thus, graphs of twin-width at most d have linear r-neighbourhood complexity. We leave as an interesting open problem to obtain an essentially tight twin-width dependence for the r-neighbourhood complexity.

We remark that the neighbourhood complexity is also related to identification problems on graphs, such as identifying codes or locating-dominating sets, where one seeks a (small) set A of vertices of a graph such that all other vertices have a distinct neighbourhood in A [17]. Some works in this area about specific graph classes are equivalent to the study of the neighbourhood complexity of these graph classes: see for example [13, 17, 27]. Moreover, we note that for graph classes with VC density 1, since any solution has linear size, the natural minimisation versions of the above identification problems have a polynomial-time constant-factor approximation algorithm (trivially select the whole vertex set), while such an algorithm is unlikely to exist in the general case [13]. Thus, our work implies a better approximation ratio for these problems, when restricted to input graph classes of bounded twin-width.

References
[1] Matthias Aschenbrenner, Alf Dolich, Deirdre Haskell, Dugald Macpherson, and Sergei Starchenko. Vapnik-Chervonenkis density in some theories without the independence property, II. Notre Dame Journal of Formal Logic, 54(3-4):311–363, 2013.
[2] Matthias Aschenbrenner, Alf Dolich, Deirdre Haskell, Dugald Macpherson, and Sergei Starchenko. Vapnik-Chervonenkis density in some theories without the independence property, I. Transactions of the American Mathematical Society, 368(8):5889–5949, 2016.
[3] Cristina Bazgan, Florent Foucaud, and Florian Sikora. Parameterized and approximation complexity of partial VC dimension. Theor. Comput. Sci., 766:1–15, 2019.
[4] Édouard Bonnet, Dibyayan Chakraborty, Eun Jung Kim, Noleen Köhler, Raul Lopes, and Stéphan Thomassé. Twin-width VIII: delineation and win-wins. In Holger Dell and Jesper Nederlof, editors, 17th International Symposium on Parameterized and Exact Computation, IPEC 2022, September 7-9, 2022, Potsdam, Germany, volume 249 of LIPIcs, pages 9:1–9:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
[5] Édouard Bonnet, Colin Geniet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant. Twin-width III: max independent set, min dominating set, and coloring. In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, 48th International Colloquium on Automata, Languages, and Programming, ICALP 2021, July 12-16, 2021, Glasgow, Scotland (Virtual Conference), volume 198 of LIPIcs, pages 35:1–35:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.
[6] Édouard Bonnet, Colin Geniet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant. Twin-width II: small classes. Combinatorial Theory, 2(2), 2022.
[7] Édouard Bonnet, Ugo Giocanti, Patrice Ossona de Mendez, Pierre Simon, Stéphan Thomassé, and Szymon Toruńczyk. Twin-width IV: ordered graphs and matrices. In Stefano Leonardi and Anupam Gupta, editors, STOC '22: 54th Annual ACM SIGACT Symposium on Theory of Computing, Rome, Italy, June 20-24, 2022, pages 924–937. ACM, 2022.
[8] Édouard Bonnet, Eun Jung Kim, Amadeus Reinald, and Stéphan Thomassé. Twin-width VI: the lens of contraction sequences. In Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1036–1056. SIAM, 2022.
[9] Édouard Bonnet, Eun Jung Kim, Amadeus Reinald, Stéphan Thomassé, and Rémi Watrigant. Twin-width and polynomial kernels. Algorithmica, 84:3300–3337, 2022.
[10] Édouard Bonnet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant. Twin-width I: tractable FO model checking. J. ACM, 69(1):3:1–3:46, 2022.
[11] Édouard Bonnet, O-joung Kwon, and David R. Wood. Reduced bandwidth: a qualitative strengthening of twin-width in minor-closed classes (and beyond). CoRR, abs/2202.11858, 2022.
[12] Édouard Bonnet, Jaroslav Nešetřil, Patrice Ossona de Mendez, Sebastian Siebertz, and Stéphan Thomassé. Twin-width and permutations. CoRR, abs/2102.06880, 2021.
[13] Nicolas Bousquet, Aurélie Lagoutte, Zhentao Li, Aline Parreau, and Stéphan Thomassé. Identifying codes in hereditary classes of graphs and VC-dimension. SIAM J. Discret. Math., 29(4):2047–2064, 2015.
[14] Marek Cygan, Fedor V. Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized Algorithms, volume 4. Springer, 2015.
[15] Kord Eickmeyer, Archontia C. Giannopoulou, Stephan Kreutzer, O-joung Kwon, Michał Pilipczuk, Roman Rabinovich, and Sebastian Siebertz. Neighborhood complexity and kernelization for nowhere dense classes of graphs. In Ioannis Chatzigiannakis, Piotr Indyk, Fabian Kuhn, and Anca Muscholl, editors, 44th International Colloquium on Automata, Languages, and Programming, ICALP 2017, July 10-14, 2017, Warsaw, Poland, volume 80 of LIPIcs, pages 63:1–63:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017.
[16] Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi. Kernelization: Theory of Parameterized Preprocessing. Cambridge University Press, 2019.
[17] Florent Foucaud, George B. Mertzios, Reza Naserasr, Aline Parreau, and Petru Valicov. Identification, location-domination and metric dimension on interval and permutation graphs. I. Bounds. Theor. Comput. Sci., 668:43–58, 2017.
[18] Jakub Gajarský, Michał Pilipczuk, Wojciech Przybyszewski, and Szymon Toruńczyk. Twin-width and types. In Mikolaj Bojanczyk, Emanuela Merelli, and David P. Woodruff, editors, 49th International Colloquium on Automata, Languages, and Programming, ICALP 2022, July 4-8, 2022, Paris, France, volume 229 of LIPIcs, pages 123:1–123:21. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
[19] Stefan Kratsch, Florian Nelles, and Alexandre Simon. On triangle counting parameterized by twin-width. CoRR, abs/2202.06708, 2022.
[20] Adam Marcus and Gábor Tardos. Excluded permutation matrices and the Stanley-Wilf conjecture. J. Comb. Theory, Ser. A, 107(1):153–160, 2004.
[21] Jaroslav Nešetřil and Patrice Ossona de Mendez. On nowhere dense graphs. Eur. J. Comb., 32(4):600–617, 2011.
[22] Jaroslav Nešetřil and Patrice Ossona de Mendez. Sparsity - Graphs, Structures, and Algorithms, volume 28 of Algorithms and Combinatorics. Springer, 2012.
[23] Adam Paszke and Michał Pilipczuk. VC density of set systems definable in tree-like graphs. In Javier Esparza and Daniel Král', editors, 45th International Symposium on Mathematical Foundations of Computer Science, MFCS 2020, August 24-28, 2020, Prague, Czech Republic, volume 170 of LIPIcs, pages 78:1–78:13. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
[24] Michał Pilipczuk and Marek Sokołowski. Graphs of bounded twin-width are quasi-polynomially χ-bounded. CoRR, abs/2202.07608, 2022.
[25] Michał Pilipczuk, Marek Sokołowski, and Anna Zych-Pawlewicz. Compact representation for matrices of bounded twin-width. In Petra Berenbrink and Benjamin Monmege, editors, 39th International Symposium on Theoretical Aspects of Computer Science, STACS 2022, March 15-18, 2022, Marseille, France (Virtual Conference), volume 219 of LIPIcs, pages 52:1–52:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
[26] Wojciech Przybyszewski. VC-density and abstract cell decomposition for edge relation in graphs of bounded twin-width. CoRR, abs/2202.04006, 2022.
[27] Douglas Rall and Peter J. Slater. On location-domination numbers for certain classes of graphs. Congressus Numerantium, 45:97–106, 1984.
[28] Felix Reidl, Fernando Sánchez Villaamil, and Konstantinos S. Stavropoulos. Characterising bounded expansion by neighbourhood complexity. Eur. J. Comb., 75:152–168, 2019.
[29] Vladimir N. Vapnik and Alexey Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. In Measures of Complexity, pages 11–30. Springer, 2015.
AtE2T4oBgHgl3EQf8QmS/content/tmp_files/load_file.txt
ADDED
@@ -0,0 +1,385 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf,len=384
|
2 |
+
page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
3 |
+
page_content='04217v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
4 |
+
page_content='CO] 10 Jan 2023 Neighbourhood complexity of graphs of bounded twin-width∗ ´Edouard Bonnet† Florent Foucaud‡ § Tuomo Lehtil¨a¶ ‖ Aline Parreau∗∗ January 12, 2023 Abstract We give essentially tight bounds for, ν(d, k), the maximum number of distinct neighbourhoods on a set X of k vertices in a graph with twin-width at most d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
5 |
+
page_content=' Using the celebrated Marcus-Tardos theorem, two independent works [Bonnet et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
6 |
+
page_content=', Algorithmica ’22;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
7 |
+
page_content=' Przybyszewski ’22] have shown the upper bound ν(d, k) ⩽ exp(exp(O(d)))k, with a double-exponential dependence in the twin-width.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
8 |
+
page_content=' We give a short self-contained proof that for every d and k, ν(d, k) ⩽ (d + 2)2d+1k = 2d+O(log d)k, and build a bipartite graph implying ν(d, k) ⩾ 2d+log d+O(1)k, in the regime when k is large enough compared to d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
9 |
+
page_content=' 1 Introduction The aim of this paper is to refine our understanding of how complex the neighbourhoods of graphs of bounded twin-width can be.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
10 |
+
page_content=' We provide an improved bound on the neighbourhood complexity of such graphs, complemented by a construction showing that our bound is essentially tight.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
11 |
+
page_content=' The improvements in the bounds for neighbourhood complexities translate directly to better structural bounds and algorithms, in some contexts which are explained below.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
12 |
+
page_content=' Twin-width.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
13 |
+
page_content=' Twin-width is a recently introduced graph invariant [10];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
14 |
+
page_content=' see Section 2 for a definition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
15 |
+
page_content=' It can be naturally extended to matrices over finite alphabets and binary structures [10, 7, 12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
16 |
+
page_content=' Although classes of bounded twin-width are broad and diverse, they allow (most of the time, provided a witness is given as an input) improved algorithms, compared to what is possible on general graphs or binary structures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
17 |
+
page_content=' Most prominently, it was shown [10] that, on n-vertex graphs given with a d-sequence (a witness that their twin-width is at most d), deciding if a first-order sentence ϕ holds can be solved in time f(d, ϕ)n, for some computable function f.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
18 |
+
page_content=' In some special cases, such as for k-Independent Set or k-Dominating Set1, single-exponential parameterised algorithms running in time 2Od(k)n are possible [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
19 |
+
page_content=' In the same setting, the triangles of an n-vertex m-edge graph can be counted in time O(d2n+m) [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
20 |
+
page_content=' See [8, 18, 25] for more applications of twin-width with an algorithmic flavour.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
21 |
+
page_content=' Classes of binary structures with bounded twin-width include bounded treewidth, and more gener- ally, bounded clique-width classes, proper minor-closed classes, posets of bounded width (that is, whose antichains are of bounded size), hereditary subclasses of permutations, as well as Ω(log n)-subdivisions of ∗Florent Foucaud was financed by the French government IDEX-ISITE initiative 16-IDEX-0001 (CAP 20-25) and by the ANR project GRALMECO (ANR-21-CE48-0004).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
22 |
+
page_content=' Tuomo Lehtil¨a’s research was supported by the Finnish Cultural Foundation and by the Academy of Finland grant 338797.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
23 |
+
page_content=' †Univ Lyon, CNRS, ENS de Lyon, Universit´e Claude Bernard Lyon 1, LIP UMR5668, France.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
24 |
+
page_content=' ‡Universit´e Clermont-Auvergne, CNRS, Mines de Saint-´Etienne, Clermont-Auvergne-INP, LIMOS, 63000 Clermont- Ferrand, France.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
25 |
+
page_content=' §Univ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
26 |
+
page_content=' Orl´eans, INSA Centre Val de Loire, LIFO EA 4022, F-45067 Orl´eans Cedex 2, France.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
27 |
+
page_content=' ¶Univ Lyon, UCBL, CNRS, LIRIS - UMR 5205, F69622, France ‖University of Turku, Department of Mathematics and Statistics, Turku, Finland ∗∗Univ Lyon, CNRS, INSA Lyon, UCBL, Centrale Lyon, Univ Lyon 2, LIRIS, UMR5205, F-69622 Villeurbanne, France 1That is, the problems of deciding whether in an input graph, there are k vertices that are pairwise non-adjacent or whose closed neighbourhood is the entire vertex set, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
28 |
+
page_content=' 1 n-vertex graphs [10], and particular classes of (bounded-degree) expanders [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
29 |
+
page_content=' A rich range of geometric graph classes have bounded twin-width such as map graphs, bounded-degree string graphs [10], classes with bounded queue number or bounded stack number [6], segment graphs with no Kt,t subgraph, and visibility graphs of simple polygons without large independent sets [4], to give a few examples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
30 |
+
page_content=' If efficiently approximating the twin-width is a challenging open question in general, this is known to be possible for the above-mentioned classes (albeit a representation may be needed for the geometric classes) and for ordered graphs [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
31 |
+
page_content=' By that, we mean that there are two computable functions f, g and an algorithm that, for an input n-vertex graph G from the class and an integer k, and in time g(k)nO(1), either outputs an f(k)-sequence (again, witnessing that the twin-width is at most f(k)) or correctly reports that the twin-width of G is larger than k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
32 |
+
page_content=' Structural properties of graph classes of bounded twin-width include χ-boundedness [5], even with a quasipolynomial binding function [24], smallness (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
33 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
34 |
+
page_content=', containing up to isomorphism 2O(n) n-vertex graphs) [6, 12], and Vapnik-Chervonenkis (VC) density at most 1 [9, 26].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
35 |
+
page_content=' The latter property is the topic of the current article.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
36 |
+
page_content=' VC density and neighbourhood complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
37 |
+
page_content=' VC density is related to the celebrated VC dimen- sion [29].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
38 |
+
page_content=' Given a set-system (or hypergraph) S on a domain X, the shatter function πS : N → N is defined as πS(n) = max A∈(X n) |{Y ⊆ A | ∃S ∈ S, Y = A ∩ S}|.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
39 |
+
page_content=' The Perles-Sauer-Shelah lemma states that πS(n) = O(nd) if the VC dimension of S (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
40 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
41 |
+
page_content=', the supremum of {n | πS(n) = 2n}) is a finite integer d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
42 |
+
page_content=' Then the VC density of S is defined as inf{c ∈ R | πS(n) = O(nc)}, and as +∞ if the VC dimension is unbounded.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
43 |
+
page_content=' We define the VC density of an infinite class C of finite graphs as the VC density of the infinite set-system formed by the neighbourhood hypergraph of the disjoint union of the graphs of C, that is, {NG(v) | v ∈ V (⊎G∈CG)}, where NG(v) denotes the set of neighbours of v in G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
44 |
+
page_content=' The VC density is an important measure in finite model theory, often more tractable than the VC dimension (see for instance [1, 2]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
45 |
+
page_content=' Tight bounds have been obtained for the VC density of (logically) definable hypergraphs from graph classes of bounded clique-width [23] (with monadic second-order logic), and more recently, of bounded twin-width [18] (with first-order logic).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
46 |
+
page_content=' In structural graph theory and kernelisation [16] (a subarea of parameterised complexity [14]) the function πN(G), where N(G) is the neighbourhood hypergraph of G, is often1 called neighbourhood com- plexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
47 |
+
page_content=' (See [3] for an algorithmic study of the computation of this notion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
48 |
+
page_content=') In these contexts, obtaining the best possible upper bound for πN(G) (and not just the exponent matching the VC density) translates to qualitatively better structural bounds and algorithms;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
49 |
+
page_content=' see for instance [9, 11, 15, 28].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
50 |
+
page_content=' The r-neighbourhood complexity of G is the neighbourhood complexity of Gr, with same vertex set as G, and an edge between two vertices at distance at most r in G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
51 |
+
page_content=' Reidl et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
52 |
+
page_content=' [28] showed that among subgraph-closed classes, bounded expansion2 is equivalent to linear r-neighbourhood complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
53 |
+
page_content=' Indeed, the more general nowhere dense classes [21] (another invention of the Sparsity program [22]) have almost linear r-neighbourhood complexity [15]: there is a function f : N × N → N such that for every ε > 0, πN(Gr)(n) ⩽ f(r, ε)n1+ε for all n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
54 |
+
page_content=' On hereditary classes, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
55 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
56 |
+
page_content=', closed under taking induced subgraphs, there is no known characterisation of linear neighbourhood complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
57 |
+
page_content=' As we already mentioned in a different language, bounded twin-width classes have been proven to have linear neighbourhood complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
58 |
+
page_content=' See [9, Lemma 3] or [26, Section 3] for two independent proofs, both using the Marcus-Tardos theorem [20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
59 |
+
page_content=' However, the dependence in the twin-width is doubly exponential in both papers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
60 |
+
page_content=' Setting ν(d, k) as the maximum number of distinct neighbourhoods on a set of size k within a graph of twin-width at most d, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
61 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
62 |
+
page_content=', max{πN(G)(k) | G has twin-width at most d}, they show that ν(d, k) ⩽ exp(exp(O(d)))k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
63 |
+
page_content=' Our results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
64 |
+
page_content=' In this note, we give in Section 3 a self-contained proof (not using the Marcus-Tardos theorem) that ν(d, k) ⩽ 2d+O(log d)k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
65 |
+
page_content=' In Section 4, we complement that proof with a construction of a 1Some authors define the neighbourhood complexity as n �→ πN (G)(n) n .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
66 |
+
page_content=' 2A notion from the Sparsity theory of Neˇsetˇril and Ossona de Mendez [22] extending bounded degree and proper minor- free classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
67 |
+
page_content=' 2 bipartite graph witnessing that ν(d, k) ⩾ 2d+log d+O(1)k, which makes our single-exponential upper bound in twin-width essentially tight.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
68 |
+
2 Preliminaries

We use the standard graph-theoretic notations: V(G), E(G), G[S], G − S respectively denote the vertex set, edge set, subgraph of G induced by S, and subgraph of G induced by V(G) \ S. If v ∈ V(G), then N_G(v) (or N(v) if G is clear from the context) denotes the set of neighbours of v in G. If X ⊆ V(G), then an X-neighbourhood is a set N(v) ∩ X for some v ∈ V(G).

We now define the twin-width of a graph, following the definition of [10]. A trigraph is a triple G = (V(G), E(G), R(G)) where E(G) and R(G) are two disjoint sets of edges on V(G): the usual edges (also called black edges) and the red edges. Informally, a red edge between two vertices u and v means that some errors have been made between u and v. The red degree of a trigraph is the maximum degree of the graph (V(G), R(G)). Any graph G can be interpreted as a trigraph G = (V(G), E(G), ∅).

Given a trigraph and two vertices u, v ∈ V(G) (not necessarily adjacent), the trigraph G/u, v = G′ is obtained by contracting u and v into a new vertex w such that:
– V(G′) = {w} ∪ V(G) \ {u, v};
– the edges between vertices of V(G) \ {u, v} are the same in G′;
– the following edges are incident to w:
  – wx ∈ E(G′) if xu ∈ E(G) and xv ∈ E(G);
  – wx ∉ E(G′) ∪ R(G′) if xu ∉ E(G) ∪ R(G) and xv ∉ E(G) ∪ R(G);
  – wx ∈ R(G′) otherwise.
In other words, the common black neighbours of u and v are black neighbours of w, and all the other neighbours of u or v are red neighbours of w. Red edges stay red, black edges stay black, and a pair of one red and one black edge becomes red. We say that G/u, v is a contraction of G.
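The contraction rule above is mechanical enough to state in code. The following is a minimal Python sketch of a trigraph and the contraction G/u, v; the class name Trigraph and the set-based edge encoding are our own illustrative choices, not notation from the paper.

    # A trigraph with black edge set E and red edge set R; an illustrative sketch.
    class Trigraph:
        def __init__(self, vertices, black=(), red=()):
            self.V = set(vertices)
            self.E = {frozenset(e) for e in black}   # black (usual) edges
            self.R = {frozenset(e) for e in red}     # red (error) edges

        def contract(self, u, v, w):
            """Return G/u,v, where u and v are merged into the new vertex w."""
            g = Trigraph(self.V - {u, v} | {w})
            # edges between vertices of V(G) \ {u, v} are unchanged
            g.E = {e for e in self.E if u not in e and v not in e}
            g.R = {e for e in self.R if u not in e and v not in e}
            for x in self.V - {u, v}:
                xu_black = frozenset((x, u)) in self.E
                xv_black = frozenset((x, v)) in self.E
                xu_seen = xu_black or frozenset((x, u)) in self.R
                xv_seen = xv_black or frozenset((x, v)) in self.R
                if xu_black and xv_black:
                    g.E.add(frozenset((x, w)))   # common black neighbour: stays black
                elif xu_seen or xv_seen:
                    g.R.add(frozenset((x, w)))   # any other neighbour of u or v: red
            return g

        def red_degree(self):
            return max((sum(1 for e in self.R if x in e) for x in self.V), default=0)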
A d-sequence of an n-vertex graph G is a sequence of n trigraphs Gn = G, Gn−1, ..., G1 such that each trigraph Gi is obtained from Gi+1 by a contraction and has red degree at most d. The twin-width of G, denoted by tww(G), is the minimum integer d such that G admits a d-sequence. Note that an induced subgraph of G has twin-width at most the twin-width of G [10].
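Given this, replaying a sequence of contractions and checking the red degree after each one decides whether it witnesses twin-width at most d. A small sketch under the same assumptions as above (the helper name verify_d_sequence is ours):

    # Check that applying `contractions` (a list of (u, v, w) triples) to the
    # trigraph g keeps the red degree at most d throughout; illustrative only.
    def verify_d_sequence(g, contractions, d):
        if g.red_degree() > d:
            return False
        for u, v, w in contractions:
            g = g.contract(u, v, w)
            if g.red_degree() > d:
                return False
        return len(g.V) == 1   # a complete sequence ends on a single vertex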
If u ∈ Gi, then u(G) denotes the set of vertices of G eventually contracted to u in Gi. Instead of considering the trigraphs Gi, we might prefer to deal with the partitions induced by the sets u(G) for u in Gi: Pi = {u(G) | u ∈ V(Gi)}. We say that there is a red edge between two parts u(G) and v(G) of Pi if uv is red in Gi.
3 Upper bound on the number of distinct neighbourhoods

We state and prove our upper bound on the maximum number of distinct X-neighbourhoods in bounded twin-width graphs.

Theorem 1. Let G be an n-vertex graph of twin-width d, and X ⊆ V(G). Then the number of distinct X-neighbourhoods in G is at most (d + 2)2^{d+1}|X| = 2^{d+O(log d)}|X|.
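To get a feel for the growth of this factor, one can tabulate (d + 2)2^{d+1} for small d; a throwaway check (the function name is ours):

    # The multiplicative factor of |X| in Theorem 1, for small twin-width d.
    def theorem1_factor(d):
        return (d + 2) * 2 ** (d + 1)

    print([theorem1_factor(d) for d in range(5)])   # [4, 12, 32, 80, 192]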
Proof. Fix X ⊆ V(G). First of all, among all vertices of V(G) \ X with the same X-neighbourhood, we keep only one representative. Note that the new graph G′′ is an induced subgraph of G, thus its twin-width is at most d. We further modify the graph G′′ by adding, for each v ∈ X, a new vertex u with N(u) = N(v) whenever no such vertex already exists in V(G′′) \ X. The new graph is called G′ and it has the same twin-width as G′′.
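This first reduction is a plain deduplication: group the vertices of V(G) \ X by their X-neighbourhood and keep one vertex per group. A hedged sketch, where adj maps each vertex to its set of neighbours (our own encoding):

    # Keep one representative per distinct X-neighbourhood, as in the
    # construction of G'' above; `adj` maps a vertex to its neighbour set.
    def representatives(adj, X):
        seen = {}
        for v in adj:
            if v in X:
                continue
            trace = frozenset(adj[v] & X)   # the X-neighbourhood of v
            seen.setdefault(trace, v)       # first vertex with this trace wins
        return set(seen.values())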
Let M = (d + 2)2^{d+1} + 1. We prove by induction on n that an n-vertex graph of twin-width at most d with a set X of k vertices, where all vertices outside X have a distinct X-neighbourhood, satisfies n ⩽ kM. This will prove that G′ has at most kM vertices, and thus that in G, there are at most (M − 1)k distinct X-neighbourhoods. The statement is trivially true for n ⩽ 5 since M ⩾ 5 for all d ⩾ 0.
Thus, assume n ⩾ 6. In particular, we have k > 1. Let x ∈ X. Let X′ = X \ {x} and let Tx be the set of pairs of vertices outside X that are twins with respect to X′, that is,

Tx = { {u, v} ⊆ V(G′) \ X | u ≠ v and N(u) ∩ X′ = N(v) ∩ X′ }.

Since every vertex of V(G′) \ X has a distinct neighbourhood in X, there are at most two vertices of V(G′) \ X with the same (possibly empty) neighbourhood N in X′; namely the vertices u, v ∈ V(G′) \ X with N(u) ∩ X = N and N(v) ∩ X = N ∪ {x} (if they exist). Hence, Tx consists of pairwise-disjoint pairs of vertices. We prove the following claim.
Claim A. There exists a vertex x of X such that Tx comprises at most M − 1 pairs, in G′.
Proof of claim. By contradiction, assume this is not the case: for every x in X, Tx has size at least M. Consider a d-sequence of contractions G′n, ..., G′1 of G′. Consider the last step G′i of the sequence where all the parts of Pi contain at most one vertex of X (that is, contrary to Pi, some part of Pi−1 contains two vertices of X).

Let P be a part of Pi, and let x be the unique element of P ∩ X, if there is one. Then we claim that |P \ X| ⩽ 2^{d+1}. Indeed, any two vertices of P \ X have some vertex in the symmetric difference of their X-neighbourhoods: either it is x, or it is some vertex x′ of X outside P. If that distinguishing vertex is some x′ that is not in P, then there has to be a red edge between P and the part that contains x′. There are at most d red edges with P as an extremity. Since all the elements of X are in distinct parts in G′i, it means that d + 1 vertices of X are enough to distinguish all the X-neighbourhoods of vertices of P \ X, and thus |P \ X| ⩽ 2^{d+1}.
We now consider the next contraction in the sequence, which leads to G′i−1. By definition of G′i, it must contract two vertices corresponding to two parts of Pi that both contain an element of X. Let x1 and x2 be these two elements of X, and let Q be the part of Pi−1 that contains both x1 and x2. By our assumption, Tx1 has size at least M. Let {u, v} be a pair of Tx1. Since u and v have the same neighbourhood in X \ {x1}, they are either both adjacent or both non-adjacent to x2, and exactly one of them is adjacent to x1. Thus, necessarily, one vertex of the pair {u, v} is adjacent to exactly one vertex among {x1, x2}. In particular, if this vertex is not in Q, then there has to be a red edge between the part containing this vertex and the part Q in G′i−1. Since Tx1 contains at least M pairs (which are disjoint) and Q has at most 2^{d+2} vertices not in X, there are at least M − 2^{d+2} vertices not in X whose part in G′i−1 has a red edge to Q. Since each other part has at most 2^{d+1} vertices not in X, this gives at least (M − 2^{d+2})/2^{d+1} red edges incident to Q. Thus, we must have (M − 2^{d+2})/2^{d+1} ⩽ d, that is, M ⩽ d·2^{d+1} + 2^{d+2} = 2^{d+1}(d + 2), a contradiction that proves the claim. (□)
By Claim A, there exists a vertex x ∈ X such that |Tx| ⩽ M − 1. Let Y be a set of |Tx| vertices that intersects each pair of Tx exactly once. Let GY = G′ − (Y ∪ {x}). Then, X′ = X \ {x} is a vertex set of size k − 1 such that all X′-neighbourhoods of vertices outside X′ are distinct. The graph GY has at least n − M vertices, and twin-width at most d. By induction, we have n − M ⩽ |V(GY)| ⩽ (k − 1)M and thus n ⩽ kM. Hence, once we recall that no vertex in X has a unique X-neighbourhood, there are at most (M − 1)k distinct X-neighbourhoods, which completes the proof.
4 Lower bound on the number of distinct neighbourhoods

Notice that when |X| and tww(G) are roughly the same, the bound from Theorem 1 cannot be sharp, since G′ has at most 2^{|X|} + |X| vertices. However, when |X| is large enough compared to tww(G), we next show that the bound is sharp up to a constant factor.

Proposition 2. There is a positive constant c such that for any integer d, there is a bipartite graph G of twin-width at most d, and a large enough set X ⊆ V(G), with at least c·d·2^d·|X| distinct X-neighbourhoods in G.
Proof. Observe that the claim is clearly true for any small d; thus, we do not need to consider separately graphs whose twin-width is upper bounded by a small constant. Hence, we assume from now on that d ≥ d′, where d′ is some positive constant.
We construct the graph G as follows. Let A, B, C ∈ Z be three constants that will be fixed later (A and B will be roughly equal to √d, and C will be roughly equal to d). Let X = {x1, ..., xk} be an independent set of k vertices. Our goal is that each vertex in V(G) \ X has a unique X-neighbourhood. For any integers i, j, t with 1 ⩽ i ⩽ j ⩽ i + A − 1, j + 2 ⩽ t ⩽ j + 1 + B and t ⩽ k − C, we create a set Vi,j,t of vertices as follows. Consider the set Xt = {xt+1, ..., xt+C}. For every subset Y of Xt, let Y′ = {xi, ..., xj, xt} ∪ Y and add a vertex vY′ to Vi,j,t, making it adjacent to the vertices of Y′. Each set Vi,j,t has size 2^C and there are Θ(kAB) such sets (for fixed A and B and growing k). Thus there are Θ(kAB·2^C) vertices in the graph.
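To make the construction concrete, here is a hedged Python sketch that enumerates the vertices created above; since every vertex outside X is determined by its X-neighbourhood Y′, we simply collect those sets (the function name and the itertools-based enumeration are our own choices):

    from itertools import combinations

    # Enumerate the X-neighbourhoods Y' of the vertices of the sets V_{i,j,t};
    # X is identified with the indices 1..k. Illustrative sketch only.
    def build_traces(k, A, B, C):
        traces = set()
        for i in range(1, k + 1):
            for j in range(i, i + A):              # i <= j <= i + A - 1
                for t in range(j + 2, j + 2 + B):  # j + 2 <= t <= j + 1 + B
                    if t > k - C:
                        continue
                    X_t = range(t + 1, t + C + 1)
                    for r in range(C + 1):
                        for Y in combinations(X_t, r):
                            traces.add(frozenset(range(i, j + 1)) | {t} | set(Y))
        return traces   # pairwise distinct, so |traces| counts the vertices

For fixed A, B, C, the size of build_traces(k, A, B, C) grows linearly in k, matching the Θ(kAB·2^C) count.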
Any two vertices not in X have distinct X-neighbourhoods. Indeed, by considering the natural ordering of X induced by the indices, any vertex not in X is first adjacent to a consecutive interval of vertices from xi to xj, then adjacent to no vertex from xj+1 to xt−1 (a non-empty range since t ⩾ j + 2), and then adjacent to xt. Thus, if two vertices have the same X-neighbourhood, they must be in the same set Vi,j,t. But then, they have distinct neighbourhoods in {xt+1, ..., xt+C}.
We now prove that the twin-width of G is at most M = max{AB, C} + 2. For that, we give a sequence of contractions with red degree at most M. The contraction sequence is split into k − C steps, one per vertex of X; let 0 ≤ i ≤ k − C − 1. Step 0 corresponds to the starting point, where each vertex is alone. Let i ⩾ 1.
After Step i, there will be the following parts in the corresponding partition (vertices not in any part have not yet been contracted):
– For each j, t such that i ⩽ j ⩽ i + A − 1 and j + 2 ⩽ t ⩽ j + 1 + B, there is a part Bj,t. The parts Bi,t (the parts with j = i) contain all the vertices of the sets Vi′,j′,t such that j′ ⩽ i. The parts Bj,t with j > i contain all the vertices of the sets Vi′,j′,t such that i′ ⩽ i and j′ = j. Note that there are AB non-empty Bj,t parts in total.
– There is a part X0 that contains the vertices x1 to xi of X.
– There is a part T (for "trash") that contains all the vertices of the sets Vi′,j,t with t ⩽ i + 1.
All the other vertices have not yet been contracted. These are the vertices xi+1 to xk of X and the vertices of the sets Vi′,j,t with i′ > i. Indeed, if i′ ⩽ i and t ⩽ i + 1, then the vertices of Vi′,j,t are in T. If t ⩾ i + 2 but j ⩽ i, then they are in the part Bi,t. If j > i, then they are in the part Bj,t.
We first prove that the red degree after Step i is at most M; then, we explain how to get from Step i to Step i + 1 while keeping the red degree at most M.

Consider the part Bj,t at the end of Step i. A vertex in this part belongs to some set Vi′,j′,t with i′ ⩽ i, where j′ = j if j > i, and j′ ⩽ i otherwise. In particular, any two vertices of Bj,t are adjacent to all the vertices between xi+1 and xj, to no vertex between xj+1 and xt−1, to xt, and to no vertex after xt+C. Thus, there is a red edge between the part Bj,t and X0, and C red edges between the part Bj,t and the vertices {xt+1, ..., xt+C}. Therefore, the number of red edges incident with Bj,t is at most C + 1.

Consider now the part T. Vertices in T are adjacent only to vertices of X up to xi+C+1. Since the vertices x1 to xi are all in the part X0, the red degree of T is at most C + 2. Single vertices not in X have no incident red edges: indeed, they all lie in sets Vi′,j,t with i′ > i and thus are not adjacent to any vertex of X0. For the same reason, the red edges incident to X0 go only to T and to the parts Bj,t. Hence, the red degree of X0 is at most AB + 1. Similarly, the red degree of xi′, for i′ > i + 1, is at most AB + 1. Moreover, the red degree of xi+1 is at most one: indeed, its only red edge is to T. Finally, the red degree after Step i is at most max{AB + 1, C + 2} ⩽ M.
Let i ≥ 0. We now explain how we perform the contractions to go from Step i to Step i + 1.
1. (Only if i ≥ 1.) For any i + 3 ⩽ t ⩽ i + 2 + B, merge the part Bi,t with the part Bi+1,t. The only new red edge this merging may create, when Bi,t is non-empty, is between Bi+1,t and xi+1. Thus, we add only one red edge, between xi+1 and Bi+1,t. Hence, the red degree of Bi+1,t is at most C + 2 and the red degree of xi+1 is at most 2.
2. Add all the vertices of Vi+1,j,t, for each j, t, to the part (that might be empty at this point) Bj,t. The red degree of Bj,t is at most C + 2, since we might have a red edge between Bj,t and xi+1. The number of non-empty parts Bj,t at this point is AB + 1 (there is still the part Bi,i+2). Adding T, this gives at most AB + 2 red edges incident to a vertex in X (or to the part X0).
3. Add xi+1 to X0. The part X0 has red edges only to the parts Bi+1,t, to Bi,i+2 and to T, but no edges to the single vertices. Thus, it has red degree at most AB + 2.
4. Put the part Bi,i+2 into T. This part is only adjacent to vertices up to xi+2+C, and thus has C + 2 red edges.
Thus, at each point, the red degree is always at most M = max{AB, C} + 2. The process ends at Step i = k − C − 1. Then, all the vertices not in X are in some parts, and there are at most AB + 1 such parts. On the other side of the bipartition, we have the part X0 and C + 1 single vertices. Thus, the graph is bipartite with both sides of size at most M. One can contract each part independently to finish the contraction sequence.

To conclude, taking C = d − 2 and A = B = ⌊√(d − 2)⌋, we have M ⩽ d and kAB·2^C = Θ(kd·2^d). Notice that we may assume that A, B and C are positive, since d ≥ d′ where d′ was some well-chosen positive constant. This concludes the proof.
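A quick numeric sanity check of this final parameter choice (our own throwaway code): since A·B = ⌊√(d − 2)⌋² ⩽ d − 2 = C, we indeed get M = C + 2 = d.

    import math

    # Check M = max(A*B, C) + 2 <= d for C = d - 2, A = B = floor(sqrt(d - 2)).
    for d in range(2, 200):
        A = B = math.isqrt(d - 2)
        C = d - 2
        assert max(A * B, C) + 2 <= d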
5 Conclusion

We have given an improved and tight upper bound for the neighbourhood complexity of graphs of bounded twin-width. Unlike the previously known (weaker) bounds, our method is simple and avoids the use of the Marcus–Tardos theorem. We hope that it can inspire future works in the area.

It is known that the twin-width of G^r can be upper-bounded by a function of the twin-width of G and r [10]. Thus, graphs of twin-width at most d have linear r-neighbourhood complexity. We leave as an interesting open problem to obtain an essentially tight twin-width dependence for the r-neighbourhood complexity.

We remark that the neighbourhood complexity is also related to identification problems on graphs, such as identifying codes or locating-dominating sets, where one seeks a (small) set A of vertices of a graph such that all other vertices have a distinct neighbourhood in A [17]. Some works in this area about specific graph classes are equivalent to the study of the neighbourhood complexity of these graph classes: see for example [13, 17, 27]. Moreover, we note that for graph classes with VC density 1, since any solution has linear size, the natural minimisation versions of the above identification problems admit a polynomial-time constant-factor approximation algorithm (trivially select the whole vertex set), while such an algorithm is unlikely to exist in the general case [13]. Thus, our work implies a better approximation ratio for these problems, when restricted to input graph classes of bounded twin-width.
References

[1] Matthias Aschenbrenner, Alf Dolich, Deirdre Haskell, Dugald Macpherson, and Sergei Starchenko. Vapnik–Chervonenkis density in some theories without the independence property, II. Notre Dame Journal of Formal Logic, 54(3-4):311–363, 2013.
[2] Matthias Aschenbrenner, Alf Dolich, Deirdre Haskell, Dugald Macpherson, and Sergei Starchenko. Vapnik–Chervonenkis density in some theories without the independence property, I. Transactions of the American Mathematical Society, 368(8):5889–5949, 2016.
[3] Cristina Bazgan, Florent Foucaud, and Florian Sikora. Parameterized and approximation complexity of partial VC dimension. Theor. Comput. Sci., 766:1–15, 2019.
[4] Édouard Bonnet, Dibyayan Chakraborty, Eun Jung Kim, Noleen Köhler, Raul Lopes, and Stéphan Thomassé. Twin-width VIII: delineation and win-wins. In Holger Dell and Jesper Nederlof, editors, 17th International Symposium on Parameterized and Exact Computation, IPEC 2022, September 7-9, 2022, Potsdam, Germany, volume 249 of LIPIcs, pages 9:1–9:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
[5] Édouard Bonnet, Colin Geniet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant. Twin-width III: max independent set, min dominating set, and coloring. In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, 48th International Colloquium on Automata, Languages, and Programming, ICALP 2021, July 12-16, 2021, Glasgow, Scotland (Virtual Conference), volume 198 of LIPIcs, pages 35:1–35:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.
[6] Édouard Bonnet, Colin Geniet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant. Twin-width II: small classes. Combinatorial Theory, 2(2), 2022.
[7] Édouard Bonnet, Ugo Giocanti, Patrice Ossona de Mendez, Pierre Simon, Stéphan Thomassé, and Szymon Toruńczyk. Twin-width IV: ordered graphs and matrices. In Stefano Leonardi and Anupam Gupta, editors, STOC '22: 54th Annual ACM SIGACT Symposium on Theory of Computing, Rome, Italy, June 20-24, 2022, pages 924–937. ACM, 2022.
[8] Édouard Bonnet, Eun Jung Kim, Amadeus Reinald, and Stéphan Thomassé. Twin-width VI: the lens of contraction sequences. In Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1036–1056. SIAM, 2022.
[9] Édouard Bonnet, Eun Jung Kim, Amadeus Reinald, Stéphan Thomassé, and Rémi Watrigant. Twin-width and polynomial kernels. Algorithmica, 84:3300–3337, 2022.
[10] Édouard Bonnet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant. Twin-width I: tractable FO model checking. J. ACM, 69(1):3:1–3:46, 2022.
[11] Édouard Bonnet, O-joung Kwon, and David R. Wood. Reduced bandwidth: a qualitative strengthening of twin-width in minor-closed classes (and beyond). CoRR, abs/2202.11858, 2022.
[12] Édouard Bonnet, Jaroslav Nešetřil, Patrice Ossona de Mendez, Sebastian Siebertz, and Stéphan Thomassé. Twin-width and permutations. CoRR, abs/2102.06880, 2021.
[13] Nicolas Bousquet, Aurélie Lagoutte, Zhentao Li, Aline Parreau, and Stéphan Thomassé. Identifying codes in hereditary classes of graphs and VC-dimension. SIAM J. Discret. Math., 29(4):2047–2064, 2015.
[14] Marek Cygan, Fedor V. Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized Algorithms, volume 4. Springer, 2015.
[15] Kord Eickmeyer, Archontia C. Giannopoulou, Stephan Kreutzer, O-joung Kwon, Michał Pilipczuk, Roman Rabinovich, and Sebastian Siebertz. Neighborhood complexity and kernelization for nowhere dense classes of graphs. In Ioannis Chatzigiannakis, Piotr Indyk, Fabian Kuhn, and Anca Muscholl, editors, 44th International Colloquium on Automata, Languages, and Programming, ICALP 2017, July 10-14, 2017, Warsaw, Poland, volume 80 of LIPIcs, pages 63:1–63:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017.
[16] Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi. Kernelization: Theory of Parameterized Preprocessing. Cambridge University Press, 2019.
[17] Florent Foucaud, George B. Mertzios, Reza Naserasr, Aline Parreau, and Petru Valicov. Identification, location-domination and metric dimension on interval and permutation graphs. I. Bounds. Theor. Comput. Sci., 668:43–58, 2017.
[18] Jakub Gajarský, Michał Pilipczuk, Wojciech Przybyszewski, and Szymon Toruńczyk. Twin-width and types. In Mikolaj Bojanczyk, Emanuela Merelli, and David P. Woodruff, editors, 49th International Colloquium on Automata, Languages, and Programming, ICALP 2022, July 4-8, 2022, Paris, France, volume 229 of LIPIcs, pages 123:1–123:21. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
[19] Stefan Kratsch, Florian Nelles, and Alexandre Simon. On triangle counting parameterized by twin-width. CoRR, abs/2202.06708, 2022.
[20] Adam Marcus and Gábor Tardos. Excluded permutation matrices and the Stanley–Wilf conjecture. J. Comb. Theory, Ser. A, 107(1):153–160, 2004.
[21] Jaroslav Nešetřil and Patrice Ossona de Mendez. On nowhere dense graphs. Eur. J. Comb., 32(4):600–617, 2011.
[22] Jaroslav Nešetřil and Patrice Ossona de Mendez. Sparsity - Graphs, Structures, and Algorithms, volume 28 of Algorithms and Combinatorics. Springer, 2012.
[23] Adam Paszke and Michał Pilipczuk. VC density of set systems definable in tree-like graphs. In Javier Esparza and Daniel Král', editors, 45th International Symposium on Mathematical Foundations of Computer Science, MFCS 2020, August 24-28, 2020, Prague, Czech Republic, volume 170 of LIPIcs, pages 78:1–78:13. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
[24] Michał Pilipczuk and Marek Sokołowski. Graphs of bounded twin-width are quasi-polynomially χ-bounded. CoRR, abs/2202.07608, 2022.
[25] Michał Pilipczuk, Marek Sokołowski, and Anna Zych-Pawlewicz. Compact representation for matrices of bounded twin-width. In Petra Berenbrink and Benjamin Monmege, editors, 39th International Symposium on Theoretical Aspects of Computer Science, STACS 2022, March 15-18, 2022, Marseille, France (Virtual Conference), volume 219 of LIPIcs, pages 52:1–52:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
[26] Wojciech Przybyszewski. VC-density and abstract cell decomposition for edge relation in graphs of bounded twin-width. CoRR, abs/2202.04006, 2022.
[27] Douglas Rall and Peter J. Slater. On location-domination numbers for certain classes of graphs.
|
371 |
+
page_content=' Congressus Numerantium, 45:97–106, 1984.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
372 |
+
page_content=' [28] Felix Reidl, Fernando S´anchez Villaamil, and Konstantinos S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
373 |
+
page_content=' Stavropoulos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
374 |
+
page_content=' Characterising bounded expansion by neighbourhood complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
375 |
+
page_content=' Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
376 |
+
page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
377 |
+
page_content=' Comb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
378 |
+
page_content=', 75:152–168, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
379 |
+
page_content=' [29] Vladimir N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
380 |
+
page_content=' Vapnik and Alexey Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
381 |
+
page_content=' Chervonenkis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
382 |
+
page_content=' On the uniform convergence of relative frequencies of events to their probabilities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
383 |
+
page_content=' In Measures of complexity, pages 11–30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
384 |
+
page_content=' Springer, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
385 |
+
page_content=' 8' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf'}
|
BtE1T4oBgHgl3EQfpgUG/content/tmp_files/2301.03331v1.pdf.txt
ADDED
@@ -0,0 +1,890 @@
1 |
+
1
|
2 |
+
A Specific Task-oriented Semantic Image
|
3 |
+
Communication System for substation patrol
|
4 |
+
inspection
|
5 |
+
Senran Fan, Haotai Liang, Chen Dong*, Xiaodong Xu, Geng Liu
|
6 |
+
Abstract—Intelligent inspection robots are widely used in
|
7 |
+
substation patrol inspection, which can help check potential
|
8 |
+
safety hazards by patrolling the substation and sending back
|
9 |
+
scene images. However, when patrolling some marginal areas with
|
10 |
+
weak signals, the scene images cannot be successfully transmitted
|
11 |
+
to be used for hidden danger elimination, which greatly reduces
|
12 |
+
the quality of the robots’ daily work. To solve this problem,
|
13 |
+
a Specific Task-oriented Semantic Communication System for
|
14 |
+
Image (STSCI) is designed, which involves semantic feature
|
15 |
+
extraction, transmission, restoration and enhancement to get
|
16 |
+
clearer images sent by intelligent robots under weak signals.
|
17 |
+
Inspired by the fact that only some specific details of the image are
|
18 |
+
needed in such a substation patrol inspection task, we propose
|
19 |
+
a new paradigm of semantic enhancement in such specific
|
20 |
+
task to ensure the clarity of key semantic information when
|
21 |
+
facing a lower bit rate or a low signal-to-noise ratio situation.
|
22 |
+
Across reality-based simulations, experiments show that our STSCI
|
23 |
+
can generally surpass traditional image-compression-based and
|
24 |
+
channel-coding-based methods and other semantic communication systems
|
25 |
+
in the substation patrol inspection task with a lower bit rate even
|
26 |
+
under a low signal-to-noise ratio situation.
|
27 |
+
Index Terms—Semantic Communication, substation patrol
|
28 |
+
robot, STSCI
|
29 |
+
I. INTRODUCTION
|
30 |
+
With
|
31 |
+
the development of Internet Technology, especially
|
32 |
+
in intelligent applications like IoT fields, the fierce
|
33 |
+
demand for a tremendous amount of information transmission
|
34 |
+
is becoming inevitable, which urges people to continuously
|
35 |
+
improve the efficiency in communication process. However,
|
36 |
+
the transmission rate based on traditional communication
|
37 |
+
system in physical layer has already been approaching the
|
38 |
+
Shannon limit under most situations, so researchers are willing
|
39 |
+
to explore new theories and new forms of communication
|
40 |
+
systems.
|
41 |
+
Based on this, the concept of semantic communication has
|
42 |
+
attracted more and more attention. First mentioned in Shannon
|
43 |
+
Senran Fan, Haotai Liang are with the State Key Laboratory of Network-
|
44 |
+
ing and Switching Technology, Beijing University of Posts and Telecom-
|
45 |
+
munications, Beijing, 100876, China. (E-mail: [email protected]; lianghao-
|
46 | |
47 |
+
Xiaodong Xu is with the State Key Laboratory of Networking and
|
48 |
+
Switching Technology, Beijing University of Posts and Telecommunications,
|
49 |
+
Beijing, China, and also with the Department of Broad-band Communica-
|
50 |
+
tion, Peng Cheng Laboratory, Shenzhen, Guangdong, China. (E-mail: xuxi-
|
51 | |
52 |
+
Geng Liu is with the Beijing Smart-chip Microelectronics Technology
|
53 |
+
Co.,Ltd. (E-mail: [email protected])
|
54 |
+
*Chen Dong is the corresponding author and with the State Key
|
55 |
+
Laboratory of Networking and Switching Technology, Beijing Univer-
|
56 |
+
sity of Posts and Telecommunications, Beijing, 100876, China. (E-mail:
|
57 | |
58 |
+
and Weaver’s paper [1], the semantic-based communication
|
59 |
+
system is believed to be a new and bright direction for
|
60 |
+
communication fields. The explosion of data requires the
|
61 |
+
communication systems to greatly upgrade their ability in
|
62 |
+
data compression, while the semantic-based communication
|
63 |
+
system is suitable for it. Considering that when transmitting
|
64 |
+
the information, large amount of task-irrelevant information
|
65 |
+
is involved especially in some specific communication scenes,
|
66 |
+
which leads to massive waste in communication resources.
|
67 |
+
Especially in this task, intelligent substation patrol inspection,
|
68 |
+
what really cared about is only the key semantic contents
|
69 |
+
such as the areas with key units of the image. Introduced
|
70 |
+
in [2], [3], by transmitting the information through semantic
|
71 |
+
feature extracting, transmission and reconstruction, semantic
|
72 |
+
communication system only keeps the effective information
|
73 |
+
which achieves extremely compression and high efficiency
|
74 |
+
communication.
|
75 |
+
Deep learning can be an answer to precisely extract the se-
|
76 |
+
mantic features from the image. Indeed, using neural networks
|
77 |
+
to make semantic analysis from images is a large subject in
|
78 |
+
the computer vision fields. Semantic segmentation networks
|
79 |
+
[4]–[6] as well as target detection networks [7]–[10] show
|
80 |
+
great power in semantic features extracting and analysis. At the
|
81 |
+
same time, GAN-based [11] networks are capable of
|
82 |
+
handling semantic features. GAN-based networks can gen-
|
83 |
+
erate images from semantic vectors. Moreover, in InfoGAN
|
84 |
+
[12] and StyleGAN [13], semantic vector can be edited to
|
85 |
+
control the features of the generated images. And coming to
|
86 |
+
unsupervised fields, the auto-encoder [14] is an inspiring archi-
|
87 |
+
tecture for semantic feature extracting. Compressing the high-
|
88 |
+
dimensional data into a low-dimensional latent which is used
|
89 |
+
for data reconstruction, the auto-encoder forces the reconstruction
|
90 |
+
results to get close enough to the original ones. Combined with
|
91 |
+
semantic-related networks such as GANs and the structure
|
92 |
+
of auto-encoder, the system can realize the aim to decrease
|
93 |
+
distortion in semantic contents of images during the process
|
94 |
+
of extreme compression as well as transmission.
|
95 |
+
The traditional communication systems involve source cod-
|
96 |
+
ing and channel coding. We replace the former part with
|
97 |
+
neural networks, and to resist against noise in channels we
|
98 |
+
decide to use the Joint Source-Channel Coding(JSCC) [15],
|
99 |
+
[16]. Leading the channel simulation models into the deep
|
100 |
+
networks, the networks can perform well in real-world channel
|
101 |
+
conditions and even better than traditional channel coding
|
102 |
+
methods like LDPC especially in low bite rate or low signal-
|
103 |
+
to-noise ratio situation.
|
104 |
+
arXiv:2301.03331v1 [cs.CV] 9 Jan 2023
|
105 |
+
|
106 |
+
2
|
107 |
+
Figure 1. The framework of STSCI.
|
108 |
+
The above studies shows the possiblity in applying the
|
109 |
+
semantic communication system into the subtsation patrol
|
110 |
+
inspection task. At the same time, substations do have toubles
|
111 |
+
in intelligent patrol task. When the robots patrolling the
|
112 |
+
marginal areas of substation with weak signals, the images
|
113 |
+
sent back by robots can be too blurred to be used for
|
114 |
+
security check. Considering that semantic communication has
|
115 |
+
potential to be the answer for solving such problem and
|
116 |
+
no literature been published before is trying to apply the
|
117 |
+
semantic communication technology to the substation patrol
|
118 |
+
task, a specific task-oriented semantic communication system
|
119 |
+
STSCI is proposed for solving this specific task and the
|
120 |
+
similar communication tasks featured by the fixed image
|
121 |
+
source, fixed channel conditions and focusing only on some
|
122 |
+
specific task-oriented semantic contents of the image. The
|
123 |
+
system is mainly a GAN-based auto-encoder-structure network
|
124 |
+
for image’ compressing, transmission and reconstruction. In
|
125 |
+
addition, a yolo-net is involved to locate the images’ specific
|
126 |
+
semantic contents, which will then be embedded and sent to
|
127 |
+
the semantic enhancement models to improve the transmission
|
128 |
+
quality of the important semantic contents of the images to
|
129 |
+
make sure there is no errors or missing when making security
|
130 |
+
check with the transmitted images. The main contributions of
|
131 |
+
this paper are summarized as follows.
|
132 |
+
(1) A specific task-oriented semantic communication system
|
133 |
+
for image is proposed for the transmission of images
|
134 |
+
obtained by intelligent robots in the substation patrol
|
135 |
+
inspection task. A new paradigm of key semantic contents
|
136 |
+
extraction and preservation for such specific tasks is
|
137 |
+
proposed. A Yolo networks is involved to locate the
|
138 |
+
key semantic contents which is the task exactly cares
|
139 |
+
about, while the located part will be sent into a semantic
|
140 |
+
enhancement models to enhance the transmission quality
|
141 |
+
of the located areas.
|
142 |
+
(2) A GAN-based auto-encoder structure network is de-
|
143 |
+
signed. Combined with RRDB blocks, channel normal-
|
144 |
+
ization, idea of conditional gan and some other tricks,
|
145 |
+
the network can extremely compress the images into the
|
146 |
+
semantic feature latent and reconstruct them after the
|
147 |
+
transmission.
|
148 |
+
(3) Through simulations and experiments, this paper show
|
149 |
+
the application and performance of the semantic commu-
|
150 |
+
nication system in haddling the specific task. By present
|
151 |
+
the metrics, semantic communication system is proven
|
152 |
+
to be superior to the traditional communication systems
|
153 |
+
in such specific tasks with fixed image source and fixed
|
154 |
+
channel conditions. As a practice, the STSCI has better
|
155 |
+
transmission quality especially under low bit rate or low
|
156 |
+
signal-to-noise ratio channel conditions compared with
|
157 |
+
the traditional communication systems, which signifi-
|
158 |
+
cantly enlarge the areas covered by effective signal to
|
159 |
+
ensure the proper work of the intelligent robots when
|
160 |
+
patrolling the marginal areas of the substation with weak
|
161 |
+
signal.
|
162 |
+
This paper is arranged as follows. In section II, we review
|
163 |
+
the structure of the specific task-oriented semantic commu-
|
164 |
+
nication system for image STSCI, and show details in the
|
165 |
+
model architectures and training flow path of two parts of
|
166 |
+
STSCI. Then, in section III, a direct comparison between the
|
167 |
+
STSCI and other image communication systems is provided to
|
168 |
+
quantify the performance of STSCI with the proposed method.
|
169 |
+
Finally, conclusions of this paper are drawn in section IV.
|
170 |
+
II. SPECIFIC TASK-ORIENTED SEMANTIC COMMUNICATION
|
171 |
+
SYSTEM
|
172 |
+
Shown in Fig. 1, the specific task-oriented semantic com-
|
173 |
+
munication system for image (STSCI) is mainly composed of
|
174 |
+
two parallel parts: the base system and semantic enhance-
|
175 |
+
ment system. The base system is mainly a GAN-based auto-
|
176 |
+
encoder network to achieve images’ compression, transmission
|
177 |
+
and reconstruction through semantic features. Meanwhile, the
|
178 |
+
semantic enhancement system locates the areas with key
|
179 |
+
semantic contents of the image and improves these areas’
|
180 |
+
quality during transmission. Both of the two parts will be
|
181 |
+
introduced in detail in the following contents.
|
182 |
+
|
183 |
+
Semantic
|
184 |
+
Semantic
|
185 |
+
Channels
|
186 |
+
Encoder
|
187 |
+
Decoder
|
188 |
+
Base system
|
189 |
+
Semantic enhancement system
|
190 |
+
Enhancement
|
191 |
+
个
|
192 |
+
个
|
193 |
+
Yolo-Net
|
194 |
+
Model
|
195 |
+
Receiver3
|
196 |
+
Figure 2. The architecture of the base system.
|
197 |
+
A. Base System
|
198 |
+
Shown in Fig. 2, the base system is mainly a neural network
|
199 |
+
consists of three parts: an Encoder network, simulated channel
|
200 |
+
models and a GAN-based Decoder network. The images
|
201 |
+
gained by substation patrol inspection robots will be com-
|
202 |
+
pressed by Encoder network, sent to receiver through physical
|
203 |
+
channels simulated by channel models and reconstructed by
|
204 |
+
the Decoder network.
|
205 |
+
The most frequently proposed semantic-based communica-
|
206 |
+
tion systems used the structure of auto-encoder to achieve
|
207 |
+
image compression, however traditional CNN and loss in
|
208 |
+
auto-encoder have difficulties in acquiring high quality re-
|
209 |
+
constructed images. In pace with development in image en-
|
210 |
+
hancement task especially image de-noising and image super-
|
211 |
+
resolution, GANs have been proven to be possessed of strong
|
212 |
+
talents in high-quality image generation, which were pre-
|
213 |
+
viously employed to improve the visual quality of image
|
214 |
+
compression systems [17], [18]. Inspired by these previous
|
215 |
+
studies, we decide to use GAN to replace the traditional CNN
|
216 |
+
as decoder network in auto-encoder to significantly improve
|
217 |
+
the quality and similarity of images transmitted by semantic
|
218 |
+
communication system.
|
219 |
+
Meanwhile, considering that structure of auto-encoder in-
|
220 |
+
volved in semantic communication system is highly consistent
|
221 |
+
with the information communication process, Joint Source-
|
222 |
+
Channel Coding(JSCC) was proposed in [15]. No longer need
|
223 |
+
additional channel coding like LDPC to resist against noise
|
224 |
+
in channels, adding noise through simulation channel models
|
225 |
+
when training auto-encoder networks, an anti-noise communi-
|
226 |
+
cation system is formed, which can ensure high-quality image
|
227 |
+
transmission even under low signal-to-noise ratio situation.
|
228 |
+
Though JSCC methods has its limitation for being constrained
|
229 |
+
by specific source, specific scene and specific task, which
|
230 |
+
lead to deep-based semantic communication system’s lack of
|
231 |
+
generalization. However, in this task, the information source
|
232 |
+
and channels are fixed, such constrains can be ignored. In
|
233 |
+
addition, the data of channel conditions in the substation
|
234 |
+
can be collected continuously in the practical application to
|
235 |
+
fine-tune simulation channel model to improve the system’s
|
236 |
+
performance in this specific task.
|
237 |
+
In terms of the loss functions which plays a decisive role in
|
238 |
+
training the networks, MSE loss and LPIPS loss is chosen
|
239 |
+
to measure the distortion between the original images and
|
240 |
+
the generated ones. MSE loss measures the difference per
|
241 |
+
pixel and shows their distance in the high-dimension space,
|
242 |
+
which helps keep the similarity. At the same time, LPIPS
|
243 |
+
loss proposed in [19] is calculated through a VGG-net which
|
244 |
+
has been trained previously. Having special model structure
|
245 |
+
and trained with tricks, the pre-trained VGG-net gives more
|
246 |
+
attention to the structure and texture of the images and does
|
247 |
+
well in telling such kind of difference between images. It’s
|
248 |
+
the difference in structure and texture that is of importance
|
249 |
+
but hard to measure through tradition losses such as L1 loss
|
250 |
+
or MSE loss. LPIPS loss helps supply this gap, and makes the
|
251 |
+
generated images more close to the original ones in visual.
|
252 |
+
In fact, before the final training, the encoder and GAN-based
|
253 |
+
decoder is trained by only using the two mentioned losses
|
254 |
+
instead of involving the adversarial loss at the beginning. Such
|
255 |
+
tricks were also applied in [17], [20], [21]. Initializing the
|
256 |
+
generator net in this way helps the generator performs better
|
257 |
+
in the final training process so the discriminator can learn
|
258 |
+
more useful information and the adversarial loss can be more
|
259 |
+
rational. Otherwise if skip this process, the images generated
|
260 |
+
by the generator is far from the ground truth and easy for
|
261 |
+
discriminator to tell, which may lead to the vanishing gradient
|
262 |
+
of the generator.
|
263 |
+
Speaking of the adversarial loss, the structure of our dis-
|
264 |
+
criminator is abnormal. Inspired by [17], the discriminator
|
265 |
+
shares the structure of that in conditional GAN. Receiving
|
266 |
+
not only the generated images as well as the ground truth but
|
267 |
+
also the latent which puts into the generator, the discriminator
|
268 |
+
no longer only focus on the quality of generator images.
|
269 |
+
With limitation from the different latent involved in, the
|
270 |
+
discriminator is forced to take attention to the connections
|
271 |
+
between the latent and the image and the difference between
|
272 |
+
images with different latent, so the adversarial loss covers
|
273 |
+
more useful information to help the network performs better
|
274 |
+
in reconstruction process.
|
275 |
+
According to all above introductions and the structure
|
276 |
+
shown in Fig. 2, the complete process of the base system is
|
277 |
+
as follows.
|
278 |
+
The image 𝑋 to be transmitted is sent in to the Encoder
|
279 |
+
first to get the semantic features 𝑌,
|
280 |
+
𝑌 = 𝐸(𝑥).
|
281 |
+
(1)
|
282 |
+
|
283 |
+
[Figure 2 block diagram: Encoder (Conv Norm ReLU stacks), Channel Model, GAN-based Decoder with Residual-in-Residual Dense Blocks and Upsample Conv layers producing X'; a Vgg network compares X and X'; the Discriminator D receives the concatenated latent and image.]
|
301 |
+
The nearest neighbor quantization operation is then performed
|
302 |
+
on the extracted semantic features 𝑌,
|
303 |
+
𝑌𝑞(𝑖) = argmin_𝑗 ||𝑌(𝑖) − 𝐼𝑗||,    (2)
where the set 𝐼 of quantization centers is
𝐼 = {𝐼0, 𝐼1, ..., 𝐼𝑗, ..., 𝐼𝑙}.    (3)
|
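
A minimal sketch of this nearest-neighbor quantization, assuming an evenly spaced center set 𝐼 and an arbitrary latent shape; both are illustrative choices, since the paper does not specify them:

```python
# Map every element of the feature tensor to its nearest quantization center,
# as in Eqs. (2)-(3).
import torch

def quantize(y: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    flat = y.reshape(-1, 1)
    # Distances to every center: shape (num_elements, num_centers).
    dist = (flat - centers.reshape(1, -1)).abs()
    idx = dist.argmin(dim=1)                 # argmin_j ||Y(i) - I_j||
    return centers[idx].reshape(y.shape)     # Y_q keeps the shape of Y

centers = torch.linspace(-1.0, 1.0, steps=8)  # assumed center set I
y = torch.randn(2, 3, 16, 16)                 # assumed latent shape
y_q = quantize(y, centers)
```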
308 |
+
According to JSCC, then the quantized semantic feature 𝑌𝑞 is
|
309 |
+
sent to the simulated channel models. In this paper, AWGN
|
310 |
+
model is chosen as the simulated channel model,
|
311 |
+
𝑌𝑞′ = ℎ · 𝑌𝑞 + 𝑛.    (4)
|
314 |
+
In this formula, ℎ represents the channel gain, while 𝑛 rep-
|
315 |
+
resents the independent identically distributed Gaussian noise.
|
316 |
+
Such a model simulates the feature's distortion in the
|
317 |
+
real-world channel and gives the base model the ability to resist
|
318 |
+
the noise.
|
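
A minimal sketch of this AWGN layer, written so it can sit inside the training graph; deriving the noise power from a target SNR is an assumption made for illustration:

```python
# AWGN channel model of Eq. (4): apply channel gain h and add white
# Gaussian noise whose power follows the requested SNR.
import torch

def awgn_channel(y_q: torch.Tensor, snr_db: float, h: float = 1.0) -> torch.Tensor:
    signal_power = y_q.pow(2).mean()
    noise_power = signal_power / (10 ** (snr_db / 10))
    n = torch.randn_like(y_q) * noise_power.sqrt()
    return h * y_q + n   # Y_q' = h * Y_q + n
```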
319 |
+
The image 𝑋′ is generated by the Generator (the Decoder
network) from the processed latent 𝑌𝑞′ at the receiver,
𝑋′ = 𝐺(𝑌𝑞′).    (5)
|
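
Putting Eqs. (1)-(5) together, a hedged sketch of the whole base-system forward pass could look as follows; it reuses the quantize and awgn_channel helpers sketched above, and the straight-through gradient trick is an assumption, since the paper does not describe how gradients pass the hard quantizer:

```python
# Compose the base system: encode, quantize, simulated channel, decode.
import torch

def base_system_forward(x, encoder, generator, centers, snr_db=5.0):
    y = encoder(x)                          # Eq. (1): Y = E(X)
    y_q = quantize(y, centers)              # Eqs. (2)-(3)
    # Straight-through estimator (assumed) so gradients flow through
    # the non-differentiable nearest-neighbor step during training.
    y_q = y + (y_q - y).detach()
    y_q_noisy = awgn_channel(y_q, snr_db)   # Eq. (4)
    return generator(y_q_noisy)             # Eq. (5): X' = G(Yq')
```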
327 |
+
The Encoder maps the source image 𝑋 to a specific distribution
|
328 |
+
𝑃𝑋. The generator G tries to map samples 𝑌 from a fixed
|
329 |
+
known distribution 𝑃𝑌 to 𝑃𝑋, while the Discriminator D is
|
330 |
+
learned to tell the difference between such two distributions
|
331 |
+
using the sampled data 𝑋 and the generated 𝑋
|
332 |
+
′.A properly
|
333 |
+
trained Discriminator helps the Generator to find and simulate
|
334 |
+
the distribution 𝑃𝑋 more preciously. Involving the idea of
|
335 |
+
conditional GANs as mentioned before, the adversarial loss
|
336 |
+
is as follows.
|
337 |
+
𝐿𝐺 = −𝑙𝑜𝑔(𝐷(𝑋
|
338 |
+
′,𝑌𝑞
|
339 |
+
′)),
|
340 |
+
(6)
|
341 |
+
𝐿𝐷 = −𝑙𝑜𝑔(1 − 𝐷(𝑋
|
342 |
+
′,𝑌𝑞
|
343 |
+
′)) − 𝑙𝑜𝑔(𝐷(𝑋,𝑌𝑞
|
344 |
+
′)).
|
345 |
+
(7)
|
346 |
+
Besides, when optimizing the Encoder and the Generator, the
|
347 |
+
MSE loss and the LPIPS loss are also involved to measure the
|
348 |
+
texture and perception distance between the source image X
|
349 |
+
and the generator image 𝑋
|
350 |
+
′. Moverover, helping to initialize
|
351 |
+
these two networks, these two kinds of loss guide the Gener-
|
352 |
+
ator and Discriminator to be trained on the right direction. So
|
353 |
+
the final loss for the Encoder and the Generator are as follows.
|
354 |
+
In the initial training:
|
355 |
+
𝐿𝐸𝐺 = ||𝑋 − 𝑋
|
356 |
+
′|| + 𝛼𝐿𝑃𝐼𝑃𝑆(𝑋, 𝑋
|
357 |
+
′).
|
358 |
+
(8)
|
359 |
+
In the final training:
|
360 |
+
𝐿𝐸𝐺 = ||𝑋−𝑋
|
361 |
+
′||+𝛼𝐿𝑃𝐼𝑃𝑆(𝑋, 𝑋
|
362 |
+
′)+𝛽[−𝑙𝑜𝑔(𝐷(𝑋
|
363 |
+
′,𝑌𝑞
|
364 |
+
′)]. (9)
|
365 |
+
B. Semantic Enhancement System
|
366 |
+
The Semantic Enhancement System is designed to enhance the
|
367 |
+
transmission quality of the key semantic contents which are
|
368 |
+
cared about in the specific task such as the panels or electrical
|
369 |
+
insulators in the intelligent substation patrol inspection task.
|
370 |
+
Figure 3. The process of the semantic enhancement system.
|
371 |
+
The system consists of two parts: a Yolo net to locate the
|
372 |
+
area with key semantic contents which will be sent into the
|
373 |
+
base model, as well as an enhancement network which can produce
|
374 |
+
more precise and high-quality images at the receiver with the
|
375 |
+
input of the transmitted image and the areas with key semantic
|
376 |
+
contents.
|
377 |
+
In this paper, target detection network yolo-net is involved
|
378 |
+
to locate the key semantic contents instead of the semantic
|
379 |
+
segmentation network such as Unet or FCN in some other se-
|
380 |
+
mantic communication systems like [18], the principal reason
|
381 |
+
is as follows.
|
382 |
+
The pre-trained Yolo-net has the ability to find and locate
|
383 |
+
the objects which need to be shot during the patrol task,
|
384 |
+
which is not only used to locate and mark the area containing
|
385 |
+
key semantic information during the semantic communication
|
386 |
+
process, but can also help the intelligent robots to judge
|
387 |
+
whether there exists objects in the patrol list to shoot and
|
388 |
+
how to change the position, angle and focal length of camera
|
389 |
+
to get a sharper image. Under the constraint of storage space
|
390 |
+
in the patrol robot, Yolo-net which can do multiple jobs is a
|
391 |
+
rather cost-effective choice.
|
392 |
+
As shown in Fig. 3, the semantic enhancement system's process
|
393 |
+
is as follows.
|
394 |
+
The area 𝑋𝑠𝑢𝑏 with key semantic contents is located by the
|
395 |
+
yolo-net with input of source image 𝑋,
|
396 |
+
|
397 |
+
Yolo-net
|
398 |
+
不
|
399 |
+
BASE
|
400 |
+
SYSTEM
|
401 |
+
个
|
402 |
+
Semantic
|
403 |
+
Enhancement
|
404 |
+
Model
|
405 |
+
不
|
406 |
+
Xsub1
|
407 |
+
Xsub
|
408 |
+
X final5
|
409 |
+
𝑋𝑠𝑢𝑏 = 𝑌𝑜𝑙𝑜 𝑛𝑒𝑡(𝑋).
|
410 |
+
(10)
|
411 |
+
After sent into the base model, 𝑋𝑠𝑢𝑏 is encoded, transmitted
|
412 |
+
and finally reconstructed as the 𝑋𝑠𝑢𝑏
|
413 |
+
′ at the receiver,
|
414 |
+
𝑋𝑠𝑢𝑏
|
415 |
+
′ = 𝐵𝑎𝑠𝑒 𝑠𝑦𝑠𝑡𝑒𝑚(𝑋𝑠𝑢𝑏).
|
416 |
+
(11)
|
417 |
+
At the same time, the whole image 𝑋 is transmitted through
|
418 |
+
the base model to get another area 𝑋𝑠𝑢𝑏1
|
419 |
+
′ with key semantic
|
420 |
+
contents cut from the reconstructed image 𝑋
|
421 |
+
′. The difference
|
422 |
+
between these two sub-images is calculated as follows.
|
423 |
+
𝑋𝑑𝑖 𝑓 𝑓 = 𝑋𝑠𝑢𝑏1
|
424 |
+
′ − 𝑋
|
425 |
+
′
|
426 |
+
𝑠𝑢𝑏.
|
427 |
+
(12)
|
428 |
+
The DIFF image is sent to the semantic enhancement model
|
429 |
+
whose job is to balance the difference between two sub-
|
430 |
+
image to make full use of these extra information to let the
|
431 |
+
transmitted image as close as the original one in the area with
|
432 |
+
key semantic contents,
|
433 |
+
𝑋𝑑𝑖 𝑓 𝑓
|
434 |
+
′ = 𝐸𝑛ℎ𝑎𝑛𝑐𝑒𝑚𝑒𝑛𝑡 𝑀𝑜𝑑𝑒𝑙(𝑋𝑑𝑖 𝑓 𝑓 ).
|
435 |
+
(13)
|
436 |
+
The final image is formed as follows.
|
437 |
+
𝑋 𝑓 𝑖𝑛𝑎𝑙 = 𝑋
|
438 |
+
′ + 𝑋𝑑𝑖 𝑓 𝑓
|
439 |
+
′.
|
440 |
+
(14)
|
441 |
+
In this task, the similarity between the final image 𝑋 𝑓 𝑖𝑛𝑎𝑙
|
442 |
+
and the original image 𝑋 is focused on, which can help
|
443 |
+
decrease the possibility of errors or missing during analyzing
|
444 |
+
the images. So we choose the MSE loss and SSIM loss to
|
445 |
+
optimize the semantic enhancement models, and parameters
|
446 |
+
in the yolonet as well as the base model are fixed during the
|
447 |
+
optimization,
|
448 |
+
𝐿𝑒𝑛ℎ𝑎𝑛𝑐𝑒𝑚𝑒𝑛𝑡 = ||𝑋 − 𝑋 𝑓 𝑖𝑛𝑎𝑙|| + 𝛼𝑆𝑆𝐼𝑀(𝑋, 𝑋 𝑓 𝑖𝑛𝑎𝑙).
|
449 |
+
(15)
|
450 |
+
In the end of Section II, the details of networks involved in
|
451 |
+
the STSCI is shown in table I and table II.
|
452 |
+
III. EXPERIMENTAL RESULTS
|
453 |
+
This section is mainly introduced the relevant testing set-
|
454 |
+
tings, including the dataset for STSCI’s train and test, the
|
455 |
+
introduction of baseline as well as evalation metrics and the
|
456 |
+
performance for the STSCI in different metrics.
|
457 |
+
Discription and figures are given to show how the STSCI
|
458 |
+
surpass the traditional image communication system or other
|
459 |
+
semantic system under some specific situations.
|
460 |
+
A. Dataset for train and test
|
461 |
+
The training dataset is formed of 10000 images sampled
|
462 |
+
from the COCO2014 dataset while 200 images of substation
|
463 |
+
are used to fine-tune the base system to improve the STSCI’s
|
464 |
+
performance in the intelligent substation patrol inspection task.
|
465 |
+
During the testing process, the images from COCO2014
|
466 |
+
testset which are not involved in training process are sampled
|
467 |
+
to measure the metrics of the communication systems.
|
468 |
+
B. Baseline and Evaluation metrics
|
469 |
+
The widely used image compression technology JPEG and
|
470 |
+
JPEG2000 are used as baseline for the image compression.
|
471 |
+
Table I
|
472 |
+
BASE SYSTEM
|
473 |
+
Model
|
474 |
+
Layers
|
475 |
+
Encoder
|
476 |
+
Conv2d,kernel=(7,7),stride=(1,1),channels=64
|
477 |
+
Conv2d,kernel=(3,3),stride=(2,2),channels=128
|
478 |
+
Conv2d,kernel=(3,3),stride=(2,2),channels=256
|
479 |
+
Conv2d,kernel=(3,3),stride=(2,2),channels=512
|
480 |
+
Conv2d,kernel=(3,3),stride=(2,2),channels=1024
|
481 |
+
Conv2d,kernel=(3,3),stride=(1,1),channels=3
|
482 |
+
Decoder
|
483 |
+
Conv2d,kernel=(3,3),stride=(1,1),channels=1024
|
484 |
+
RRDB(1024, 1024) x 9
|
485 |
+
ConvT,kernel=(3,3),stride=(2,2),channels=1024
|
486 |
+
ConvT,kernel=(3,3),stride=(2,2),channels=512
|
487 |
+
ConvT,kernel=(3,3),stride=(2,2),channels=256
|
488 |
+
ConvT,kernel=(3,3),stride=(2,2),channels=128
|
489 |
+
Conv2d,kernel=(7,7),stride=(1,1),channels=3
|
490 |
+
Discriminator
|
491 |
+
For latent Y: nearest neighbor upsampling 16x
|
492 |
+
concat[upsampled latent Y, input image X or X’]
|
493 |
+
Conv2d,kernel=(3,3),stride=(2,2),channels=64
|
494 |
+
Conv2d,kernel=(3,3),stride=(2,2),channels=128
|
495 |
+
Conv2d,kernel=(3,3),stride=(2,2),channels=256
|
496 |
+
Conv2d,kernel=(3,3),stride=(2,2),channels=512
|
497 |
+
Conv2d,kernel=(1,1),stride=(1,1),channels=1
|
498 |
+
Table II
|
499 |
+
SEMANTIC ENHANCEMENT MODEL
|
500 |
+
Model
|
501 |
+
Layers
|
502 |
+
Enhancement
|
503 |
+
Conv2d,kernel=(7,7),stride=(1,1),channels=64
|
504 |
+
Conv2d,kernel=(3,3),stride=(1,1),channels=128
|
505 |
+
Conv2d,kernel=(3,3),stride=(1,1),channels=256
|
506 |
+
Conv2d,kernel=(3,3),stride=(1,1),channels=512
|
507 |
+
Conv2d,kernel=(3,3),stride=(1,1),channels=1024
|
508 |
+
Conv2d,kernel=(3,3),stride=(1,1),channels=512
|
509 |
+
Conv2d,kernel=(3,3),stride=(1,1),channels=256
|
510 |
+
Conv2d,kernel=(3,3),stride=(1,1),channels=128
|
511 |
+
Conv2d,kernel=(3,3),stride=(1,1),channels=64
|
512 |
+
Conv2d,kernel=(7,7),stride=(1,1),channels=3
|
513 |
+
Both of the compression methods are the target for the base
|
514 |
+
model in STSCI to substitute for in the patrol task. The
|
515 |
+
LSCI proposed in [18] is also involved in the comparison. We
|
516 |
+
draw lessons from some tricks proposed in that paper, so it’s
|
517 |
+
necessary to show how we surpass it especially in the specific
|
518 |
+
task.
|
519 |
+
|
520 |
+
6
|
521 |
+
Figure 4. The performance of the reconstructed image of JPEG, JPEG2000, LSCI and STSCI.
|
522 |
+
Figure 5. Visual example of images produced by LSCI along with the corresponding results for JPEG and JPEG2000.
|
523 |
+
Meanwhile, the LDPC channel coding is used to make
|
524 |
+
comparison with JSCC methods under simulated channel
|
525 |
+
conditions of the wireless transmission channels.
|
526 |
+
SSIM as well as PSNR is chosen as evaluation metrics to
|
527 |
+
measure both the quality of images at the recevier and the
|
528 |
+
similarity between the transmitted ones with the original ones,
|
529 |
+
which can help comprehensively describe the performance of
|
530 |
+
the communication systems.
|
531 |
+
C. Analysis for results in image compression
|
532 |
+
We visualize the outcome of the comparison between JPEG,
|
533 |
+
JPEG2000, LSCI and STSCI in image compression task in Fig.
|
534 |
+
4. The x coordinate represents the average bits per pixel (bpp)
|
535 |
+
on the images, while the y coordinate individually show the
|
536 |
+
value of metrics of SSIM and PSNR.
|
537 |
+
From the Fig. 4, it’s obvious that STSCI is always preferred
|
538 |
+
to other image compression methods at equal bitrates. In the
|
539 |
+
bitrate around 0.15, the STSCI is 0.75 higher than the LSCI
|
540 |
+
and JPEG2000 in value of SSIM and 0.75 is a enormous
|
541 |
+
number which means the reconstructed image gained by
|
542 |
+
STSCI is much more resemble to the original ones.
|
543 |
+
And that is extatly the truth, visual examples presented in
|
544 |
+
Fig. 5 shows how clear the imge compressed by the STSCI.
|
545 |
+
Even using only half bpp of JPEG2000 and one of three bpp
|
546 |
+
of JPEG, image handled by STSCI is 0.1 higher in SSIM
|
547 |
+
and around 8dB higher in PSNR metrics. It’s esay for us
|
548 |
+
to see noises and distortions in images compressed by JPEG
|
549 |
+
and JPEG2000, compared to which, the STSCI’s job is much
|
550 |
+
better. Such results in compressing and transmitting the image
|
551 |
+
shows that STSCI can be equal to the specific patrol task with
|
552 |
+
higher quality and less bpp.
|
553 |
+
Considering that the base system is fine-tuned with some
|
554 |
+
|
555 |
+
34
|
556 |
+
0.900
|
557 |
+
0.875
|
558 |
+
32
|
559 |
+
0.850
|
560 |
+
上
|
561 |
+
0.825
|
562 |
+
30
|
563 |
+
SSIM
|
564 |
+
PSNR
|
565 |
+
0.800
|
566 |
+
SSIMvsbpp
|
567 |
+
PSNRvsbpp
|
568 |
+
28
|
569 |
+
STSCI
|
570 |
+
STSCI
|
571 |
+
0.775
|
572 |
+
LSCI
|
573 |
+
LSCI
|
574 |
+
0.750
|
575 |
+
JPEG2000
|
576 |
+
26
|
577 |
+
JPEG2000
|
578 |
+
JPEG
|
579 |
+
JPEG
|
580 |
+
0.725
|
581 |
+
0.10
|
582 |
+
0.15
|
583 |
+
0.20
|
584 |
+
0.25
|
585 |
+
0.30
|
586 |
+
0.35
|
587 |
+
0.10
|
588 |
+
0.15
|
589 |
+
0.20
|
590 |
+
0.25
|
591 |
+
0.30
|
592 |
+
0.35
|
593 |
+
bpp
|
594 |
+
bpp80
|
595 |
+
60
|
596 |
+
10
|
597 |
+
Ob
|
598 |
+
40
|
599 |
+
OB
|
600 |
+
40
|
601 |
+
120
|
602 |
+
120
|
603 |
+
120
|
604 |
+
20
|
605 |
+
140
|
606 |
+
20
|
607 |
+
140
|
608 |
+
20
|
609 |
+
OC
|
610 |
+
140
|
611 |
+
0
|
612 |
+
160
|
613 |
+
U
|
614 |
+
160
|
615 |
+
160
|
616 |
+
SSCI:
|
617 |
+
JPEG:
|
618 |
+
JPEG2000:
|
619 |
+
bpp = 0.13
|
620 |
+
bpp = 0.35
|
621 |
+
bpp = 0.21
|
622 |
+
SSIM = 0.92
|
623 |
+
SSIM = 0.79
|
624 |
+
SSIM = 0.82
|
625 |
+
PSNR = 32.6
|
626 |
+
PSNR = 26.1
|
627 |
+
PSNR = 26.57
|
628 |
+
Figure 6. Training details and visual example of the yolonet.
|
629 |
+
substation and industrial images, and that is why in this visual
|
630 |
+
sample, the STSCI’s SSIM and PSNR metrics are higher than
|
631 |
+
the average values in 0.13bpp. Indeed, in the substation patrol
|
632 |
+
task, the images of substation can be collected continuously to
|
633 |
+
fine-tune or even retrain the networks of Base system, which
|
634 |
+
can lead to better performance in the specific task.
|
635 |
+
Figure 7. Visual example of the semantic enhancement model.
|
636 |
+
D. Analysis for semantic enhancement system
|
637 |
+
For example, taking the panel as the key semantic informa-
|
638 |
+
tion, a yolo-net is trained with 200 images of panels. Both the
|
639 |
+
details and an example of the trained yolo-net are shown in Fig. 6.
|
640 |
+
With pre-trained checkpoints involved, after training on 200 images,
|
641 |
+
the yolo-net is precise enough for the daily patrol
|
642 |
+
task, making errors or misses only at low frequency.
|
643 |
+
Meanwhile Fig. 7 shows the effect of the semantic enhance-
|
644 |
+
ment model. The enhanced area in Fig. 7 has the high SSIM at
|
645 |
+
0.946 and PSNR at 34.4dB. Through the enhancement model,
|
646 |
+
we can still see the direction of the hand on the panel, which
|
647 |
+
is of great meaningful information for the patrol task.
|
648 |
+
E. Simulated results for channel communication
|
649 |
+
In the experiments, we choose the AWGN model for
|
650 |
+
channel simulation. As shown in Fig. 8, when the SNR is
|
651 |
+
larger than 5 dB, the values of SSIM and PSNR gained by
|
652 |
+
STSCI+LDPC are a bit higher than those of STSCI+JSCC, but when
|
653 |
+
the channel conditions get bad and the SNR is close to or
|
654 |
+
even lower than 0 dB, the quality of images transmitted through
|
655 |
+
JSCC methods does not decrease very fast and becomes much
|
656 |
+
higher than that of LDPC methods. And that is what we want in
|
657 |
+
solving the specific task. One of the most important missions
|
658 |
+
for STSCI in this task is to ensure the quality of images
|
659 |
+
sent back by robots when patrolling some marginal areas
|
660 |
+
|
661 |
+
train/box_loss
|
662 |
+
train/obj_loss
|
663 |
+
train/cls_loss
|
664 |
+
metrics/precision
|
665 |
+
metrics/recall
|
666 |
+
0.12
|
667 |
+
0.035
|
668 |
+
results
|
669 |
+
1.0
|
670 |
+
1.0
|
671 |
+
0.030
|
672 |
+
0.04
|
673 |
+
0.10
|
674 |
+
0.8
|
675 |
+
0.8
|
676 |
+
0.025
|
677 |
+
0.02
|
678 |
+
0.08
|
679 |
+
0.6
|
680 |
+
0.6
|
681 |
+
0.020
|
682 |
+
0.00
|
683 |
+
0.06
|
684 |
+
0.4
|
685 |
+
0.4
|
686 |
+
0.015
|
687 |
+
0.02
|
688 |
+
0.04
|
689 |
+
0.2
|
690 |
+
0.010
|
691 |
+
0.2
|
692 |
+
0.04
|
693 |
+
0.02
|
694 |
+
0.005
|
695 |
+
0.0
|
696 |
+
0.0
|
697 |
+
0
|
698 |
+
200
|
699 |
+
0
|
700 |
+
200
|
701 |
+
200
|
702 |
+
0
|
703 |
+
200
|
704 |
+
0
|
705 |
+
200
|
706 |
+
val/box_loss
|
707 |
+
val/obj_loss
|
708 |
+
val/cls_loss
|
709 |
+
metrics/mAP_0.5
|
710 |
+
metrics/mAP_0.5:0.95
|
711 |
+
1.0
|
712 |
+
0.10
|
713 |
+
0.020
|
714 |
+
0.04
|
715 |
+
0.8
|
716 |
+
0.6
|
717 |
+
0.08
|
718 |
+
0.02
|
719 |
+
0.015
|
720 |
+
0.6
|
721 |
+
0.00
|
722 |
+
0.4
|
723 |
+
0.06
|
724 |
+
0.4
|
725 |
+
0.010
|
726 |
+
0.02
|
727 |
+
0.2
|
728 |
+
0.04
|
729 |
+
0.2
|
730 |
+
0.04
|
731 |
+
0.005
|
732 |
+
0.02
|
733 |
+
0.0
|
734 |
+
0.0
|
735 |
+
0
|
736 |
+
200
|
737 |
+
200
|
738 |
+
0
|
739 |
+
200
|
740 |
+
0
|
741 |
+
200
|
742 |
+
0
|
743 |
+
200
|
744 |
+
90%bpp:0.15
|
745 |
+
Enhance Part:
|
746 |
+
PSNR:32.4
|
747 |
+
SSIM:0.9468
|
748 |
+
Figure 8. Comparison between STSCI and LSCI with JSCC or channel slice models and traditional channel coding LDPC with SSIM and PSNR metrics.
|
749 |
+
with weak signal or under low signal-to-noise ratio channel
|
750 |
+
conditions. And unlike LSCI whose Encder and Decoder is
|
751 |
+
not optimized when involving the noise by using channel slice
|
752 |
+
models, STSCI’s performance in good channel conditions can
|
753 |
+
get closer and closer to the LDPC metheds.
|
754 |
+
IV. CONCLUSION
|
755 |
+
In this paper, a specific task-oriented semantic image com-
|
756 |
+
munication system STSCI is proposed for intelligent substa-
|
757 |
+
tion patorl inspection, which is mainly composed of a base
|
758 |
+
system and a semantic enhancemant system. To haddle the
|
759 |
+
task of ensuring the quality of images sent back by robots in
|
760 |
+
singal-weak areas of substation. We designed a GAN-based
|
761 |
+
networks in structure of auto-encoders to extremely compress
|
762 |
+
the images. And to preserve the key semantic contents during
|
763 |
+
transmission to decrease the posibility of errors or missing
|
764 |
+
of the inspection, a yolo-net is involved to locate the areas
|
765 |
+
with key semantic information, and a semantic enhancement
|
766 |
+
model is designed to make full use of these extra information
|
767 |
+
to make these areas clearer. Meanwhile, technology of JSCC
|
768 |
+
is involved to improve the performance of STSCI under low
|
769 |
+
signal-to-noise ratio channel conditions.
|
770 |
+
With all metheds taken, expriments show the specific task-
|
771 |
+
oriented semantic image communication system, the STSCI
|
772 |
+
has the ability in solving this inspection task.
|
773 |
+
V. ACKNOWLEDGEMENTS
|
774 |
+
This work is supported in part by the National Key R&D
|
775 |
+
Program of China under Grant 2022YFB2902102.The work
|
776 |
+
of Chen Dong is supported by The Academician expert Open
|
777 |
+
Fund of Beijing Smart-chip Microelectronics Technology Co.,
|
778 |
+
Ltd under project SGITZXDTKJJS2201045.
|
779 |
+
REFERENCES
|
780 |
+
[1] C. E. Shannon, “A mathematical theory of communication,” The Bell
|
781 |
+
System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948. I
|
782 |
+
[2] M. Kountouris and N. Pappas, “Semantics-empowered communication
|
783 |
+
for networked intelligent systems,” IEEE Communications Magazine,
|
784 |
+
vol. 59, no. 6, pp. 96–102, 2021. I
|
785 |
+
[3] P. Zhang, W. Xu, H. Gao, K. Niu, X. Xu, X. Qin, C. Yuan, Z. Qin,
|
786 |
+
H. Zhao, J. Wei, and F. Zhang, “Toward wisdom-evolutionary and
|
787 |
+
primitive-concise 6g: A new paradigm of semantic communication
|
788 |
+
networks,” Engineering, vol. 8, pp. 60–73, 2022. I
|
789 |
+
[4] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks
|
790 |
+
for semantic segmentation,” in Proceedings of the IEEE conference on
|
791 |
+
computer vision and pattern recognition, 2015, pp. 3431–3440. I
|
792 |
+
[5] V. Badrinarayanan, A. Handa, and R. Cipolla, “Segnet: A deep con-
|
793 |
+
volutional encoder-decoder architecture for robust semantic pixel-wise
|
794 |
+
labelling,” arXiv preprint arXiv:1505.07293, 2015. I
|
795 |
+
[6] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks
|
796 |
+
for biomedical image segmentation,” in International Conference on
|
797 |
+
Medical image computing and computer-assisted intervention. Springer,
|
798 |
+
2015, pp. 234–241. I
|
799 |
+
[7] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time
|
800 |
+
object detection with region proposal networks,” Advances in neural
|
801 |
+
information processing systems, vol. 28, 2015. I
|
802 |
+
[8] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C.
|
803 |
+
Berg, “Ssd: Single shot multibox detector,” in European conference on
|
804 |
+
computer vision.
|
805 |
+
Springer, 2016, pp. 21–37. I
|
806 |
+
[9] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look
|
807 |
+
once: Unified, real-time object detection,” in Proceedings of the IEEE
|
808 |
+
conference on computer vision and pattern recognition, 2016, pp. 779–
|
809 |
+
788. I
|
810 |
+
[10] J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” in
|
811 |
+
Proceedings of the IEEE conference on computer vision and pattern
|
812 |
+
recognition, 2017, pp. 7263–7271. I
|
813 |
+
[11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley,
|
814 |
+
S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,”
|
815 |
+
Communications of the ACM, vol. 63, no. 11, pp. 139–144, 2020. I
|
816 |
+
[12] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and
|
817 |
+
P. Abbeel, “Infogan: Interpretable representation learning by information
|
818 |
+
maximizing generative adversarial nets,” Advances in neural information
|
819 |
+
processing systems, vol. 29, 2016. I
|
820 |
+
[13] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture
|
821 |
+
for generative adversarial networks,” in Proceedings of the IEEE/CVF
|
822 |
+
conference on computer vision and pattern recognition, 2019, pp. 4401–
|
823 |
+
4410. I
|
824 |
+
[14] U. Michelucci, “An introduction to autoencoders,” arXiv preprint
|
825 |
+
arXiv:2201.03898, 2022. I
|
826 |
+
[15] E. Bourtsoulatze, D. B. Kurka, and D. G¨und¨uz, “Deep joint source-
|
827 |
+
channel coding for wireless image transmission,” IEEE Transactions on
|
828 |
+
Cognitive Communications and Networking, vol. 5, no. 3, pp. 567–579,
|
829 |
+
2019. I, II
|
830 |
+
[16] D. B. Kurka and D. G¨und¨uz, “Deepjscc-f: Deep joint source-channel
|
831 |
+
coding of images with feedback,” IEEE Journal on Selected Areas in
|
832 |
+
Information Theory, vol. 1, no. 1, pp. 178–193, 2020. I
|
833 |
+
[17] F. Mentzer, G. D. Toderici, M. Tschannen, and E. Agustsson, “High-
|
834 |
+
fidelity generative image compression,” Advances in Neural Information
|
835 |
+
Processing Systems, vol. 33, pp. 11 913–11 924, 2020. II, II
|
836 |
+
[18] C. Dong, H. Liang, X. Xu, S. Han, B. Wang, and P. Zhang, “Semantic
|
837 |
+
|
838 |
+
0.85
|
839 |
+
30
|
840 |
+
28
|
841 |
+
0.80
|
842 |
+
PSNR(dB)
|
843 |
+
SSIM
|
844 |
+
26
|
845 |
+
0.75
|
846 |
+
AWGN
|
847 |
+
AWGN
|
848 |
+
24
|
849 |
+
STSCI-JSCC
|
850 |
+
STSCI-JSCC
|
851 |
+
STSCI-LDPC
|
852 |
+
STSCI-LDPC
|
853 |
+
0.70
|
854 |
+
22
|
855 |
+
LSCI-ChannelModel
|
856 |
+
.
|
857 |
+
LSCI-ChannelModel
|
858 |
+
LSCI-LDPC
|
859 |
+
LSCI-LDPC
|
860 |
+
0.65
|
861 |
+
20
|
862 |
+
5
|
863 |
+
0
|
864 |
+
5
|
865 |
+
10
|
866 |
+
15
|
867 |
+
5
|
868 |
+
0
|
869 |
+
5
|
870 |
+
10
|
871 |
+
15
|
872 |
+
SNR(dB)
|
873 |
+
AWGN
|
874 |
+
SNR(dB)9
|
875 |
+
communication system based on semantic slice models propagation,”
|
876 |
+
IEEE Journal on Selected Areas in Communications, 2022. II, II, III
|
877 |
+
[19] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time
|
878 |
+
style transfer and super-resolution,” in European conference on computer
|
879 |
+
vision.
|
880 |
+
Springer, 2016, pp. 694–711. II
|
881 |
+
[20] C. Ledig, L. Theis, F. Husz´ar, J. Caballero, A. Cunningham, A. Acosta,
|
882 |
+
A. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-realistic single
|
883 |
+
image super-resolution using a generative adversarial network,” in
|
884 |
+
Proceedings of the IEEE conference on computer vision and pattern
|
885 |
+
recognition, 2017, pp. 4681–4690. II
|
886 |
+
[21] X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and
|
887 |
+
C. Change Loy, “Esrgan: Enhanced super-resolution generative adversar-
|
888 |
+
ial networks,” in Proceedings of the European conference on computer
|
889 |
+
vision (ECCV) workshops, 2018, pp. 0–0. II
|
890 |
+
|
BtE1T4oBgHgl3EQfpgUG/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
DNFQT4oBgHgl3EQf_zdP/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:d5e452bfcd2d565c8a484de104033dc2b52982c0b461830430b68ce018f165b8
|
3 |
+
size 3538989
|
DtE1T4oBgHgl3EQfEQNL/content/2301.02887v1.pdf
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:62db0244b9302e6dc30be681d8f0f26ab3ebfcc134e8fc825fac0501ef3645ed
|
3 |
+
size 123972
|
DtE1T4oBgHgl3EQfEQNL/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:c5ab80c06685aeda2aabdecbf57bc02a0e2c3696c955e7e960574c6bc90461d5
|
3 |
+
size 1179693
|
DtE1T4oBgHgl3EQfEQNL/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:68eb8a22fa58d093198b44b21003d9d6d12d87266bbf75b8b09810adb1602077
|
3 |
+
size 42205
|
E9E1T4oBgHgl3EQfEgPe/content/2301.02892v1.pdf
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:e2e2f5957864496dd9791104860425db30e1c9937f99a4240e8dcd3eacac996f
|
3 |
+
size 3082691
|
GNAzT4oBgHgl3EQfxP7V/content/2301.01736v1.pdf
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:a8fa9e46a216bbd57ab0074a6d61d10e42222463b4f1559d625960acc35475e0
|
3 |
+
size 207211
|
GNAzT4oBgHgl3EQfxP7V/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:3b83d7fe5655d5811ab25cad57013f4bf058ceadc5ddc6f9d051458b16d75215
|
3 |
+
size 84867
|
HNE1T4oBgHgl3EQfrQW4/content/tmp_files/2301.03353v1.pdf.txt
ADDED
@@ -0,0 +1,1029 @@
|
|
|
|
|
|
Learning Bidirectional Action-Language Translation with Limited Supervision and Incongruent Extra Input

Ozan Özdemir (a), Matthias Kerzel (a), Cornelius Weber (a), Jae Hee Lee (a), Muhammad Burhan Hafez (a), Patrick Bruns (b), Stefan Wermter (a)

(a) Knowledge Technology, Department of Informatics, University of Hamburg, Vogt-Koelln-Str. 30, 22527 Hamburg, Germany
(b) Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany

ARTICLE HISTORY: Compiled January 10, 2023

ABSTRACT
Human infant learning happens during exploration of the environment, by interaction with objects, and by listening to and repeating utterances casually, which is analogous to unsupervised learning. Only occasionally, a learning infant would receive a matching verbal description of an action it is committing, which is similar to supervised learning. Such a learning mechanism can be mimicked with deep learning. We model this weakly supervised learning paradigm using our Paired Gated Autoencoders (PGAE) model, which combines an action and a language autoencoder. After observing a performance drop when reducing the proportion of supervised training, we introduce the Paired Transformed Autoencoders (PTAE) model, using Transformer-based crossmodal attention. PTAE achieves significantly higher accuracy in language-to-action and action-to-language translations, particularly in realistic but difficult cases when only few supervised training samples are available. We also test whether the trained model behaves realistically with conflicting multimodal input. In accordance with the concept of incongruence in psychology, conflict deteriorates the model output. Conflicting action input has a more severe impact than conflicting language input, and more conflicting features lead to larger interference. PTAE can be trained on mostly unlabelled data where labeled data is scarce, and it behaves plausibly when tested with incongruent input.

KEYWORDS: Unsupervised learning; weak supervision; autoencoders; object manipulation; robot action; language grounding; Transformers; bidirectional translation
CONTACT Ozan Özdemir. Email: ozan.oezdemir@uni-hamburg.de
arXiv:2301.03353v1 [cs.CL] 9 Jan 2023

1. Introduction

Embodiment, i.e., action-taking in the environment, is considered essential for language learning (Bisk et al. 2020). Recently, language grounding with robotic object manipulation has received considerable attention from the research community. Most approaches proposed in this domain cover robotic action execution based on linguistic input (Hatori et al. 2018; Shridhar, Mittal, and Hsu 2020; Shao et al. 2020; Lynch and Sermanet 2021), i.e., language-to-action translation. Others cover language production based on the actions done on objects (Heinrich et al. 2020; Eisermann et al. 2021), i.e., action-to-language translation. However, only few approaches (Ogata et al. 2007; Yamada et al. 2018; Antunes et al. 2019; Abramson et al. 2020; Özdemir, Kerzel, and Wermter 2021) handle both directions, being able not just to execute actions according to given instructions but also to describe those actions, i.e., bidirectional translation.

Moreover, as infants learn, the actions that they are performing are not permanently being labeled by matching words from their caretakers; hence, supervised learning with labels must be considered rare. Instead, infants rather explore the objects around them and listen to utterances, which may not frequently relate to their actions; hence, unsupervised learning without matching labels is abundant. Nevertheless, most language grounding approaches do not make use of unsupervised learning, except those that use some unsupervised loss terms (Yamada et al. 2018; Abramson et al. 2020; Özdemir, Kerzel, and Wermter 2021), while the large language models (LLMs) (Devlin et al. 2019; Radford et al. 2019; Brown et al. 2020) introduced for various unimodal downstream language tasks rely on unsupervised learning for their pretraining objectives.

In order to reduce this dependence on labeled data during training, we introduce a new training procedure in which we limit the amount of training data used for supervised learning. More precisely, we only use a certain portion of the training samples for crossmodal action-to-language and language-to-action translations whilst training unimodally on the rest of the training samples. As crossmodal translation requires each sample modality to be labeled with the other modality (e.g., an action sequence must be paired with a corresponding language description), we artificially simulate the realistic conditions where there is a large amount of unlabelled (unimodal) data but a much smaller amount of labeled (crossmodal) data.

[Figure 1. Our table-top object manipulation scenario in the simulation environment: the NICO robot is moving the blue cube on the table. The performed action is labeled as "slide blue quickly". Our approach can translate from language to action and vice versa; i.e., we perform actions that are described in language and also describe the given actions using language.]

Another aspect of human language learning is that it takes place in an environment and while using different modalities such as vision and proprioception. Concepts such as weight, softness, and size cannot be grounded without being in the environment and interacting with objects. Language learning approaches that use multiple modalities and take action in an environment into account are preferable to those that use a unimodal approach to process large amounts of text. Hence, we strive to devise embodied multimodal models that tackle language grounding. To this end, our robotic object manipulation dataset is generated from a simulation setup, as seen in Figure 1. We use the humanoid child-size robot Neuro-Inspired COmpanion (NICO) (Kerzel et al. 2017; Kerzel et al. 2020) to perform various actions on cubes on a table and label those actions with language descriptions. We introduce further details of our setup in Section 4.

Different from other approaches, our previous Paired Gated Autoencoders (PGAE) model (Özdemir, Kerzel, Weber, Lee, and Wermter 2022) can bidirectionally translate between language and action, which enables an agent not only to execute actions according to given instructions but also to recognize and verbalize its own actions or actions executed by another agent. As the desired translation task is communicated to the network through an additional signal word in the language input, PGAE can flexibly translate between and within modalities during inference. However, when trained under limited supervision, PGAE performs poorly on the action-to-language translation task, under two conditions. Firstly, we experiment with reducing the number of supervised training iterations while using the whole dataset for supervised training. Secondly, we experiment with reducing the number of training samples used with the supervised signals. In both instances, though the first is more trivial than the second, the action-to-language performance of PGAE suffers as the proportion of supervision decreases.

To overcome this hurdle, we present a novel model, Paired Transformed Autoencoders (PTAE), in this follow-up paper. Inspired by the successful application of the Crossmodal Transformer to vision-language navigation in the Hierarchical Cross-Modal Agent (HCM) architecture (Irshad, Ma, and Kira 2021), PTAE replaces PGAE's gated multimodal fusion mechanism, and optionally the LSTM-based (long short-term memory) (Hochreiter and Schmidhuber 1997) encoders, with a Crossmodal Transformer. Thanks to its more efficient and sequence-retaining crossmodal attention mechanism, PTAE achieves superior performance even when an overwhelming majority of the training iterations (e.g., 98 or 99%) consist of unsupervised learning. When the majority of training samples are used for unsupervised learning, PTAE still maintains its perfect action-to-language performance with up to 80% of the training samples learned unimodally, and it performs relatively well in the 90% case (over 80% sentence accuracy). Even in the cases where only 1 or 2% of the training samples are used in a supervised fashion, which is analogous to few-shot learning, PTAE describes actions well over chance level, with up to a 50% success rate. Our results hint that PTAE precludes the need for large amounts of expensive labeled data, which supervised learning requires, as the new architecture with the Crossmodal Transformer as the multimodality fusion technique significantly outperforms PGAE (Özdemir et al. 2022) under the limited supervision training conditions.

Furthermore, inspired by the concept of incongruence in psychology, and to test the robustness of the trained model to noise, for each task we introduce an extra input that is contradictory to the expected output of the model. For example, for language-to-action translation, we introduce extra conflicting action input showing an action that is different from the action expected from the model. The intertwined processing of language and action input in the Crossmodal Transformer resembles the tight interconnection between language and sensorimotor processes that has been observed in the human brain (Hauk, Johnsrude, and Pulvermüller 2004; van Elk et al. 2010). Embodied accounts of human language comprehension assume that linguistic information induces mental simulations of relevant sensorimotor experiences. As a direct consequence of embodied language processing, conflicts between linguistic input and sensorimotor processes have been shown to result in bidirectional impairments of language comprehension on the one hand and of perceptual judgments and motor responses on the other hand (Aravena et al. 2010; Glenberg and Kaschak 2002; Kaschak et al. 2005; Meteyard, Bahrami, and Vigliocco 2007), although the strength of these behavioral effects has recently been debated (Winter et al. 2022). In our PTAE model, we found an asymmetry in the impact of the action and language modalities on the performance of the model: regardless of the output modality, introducing extra contradictory action input affects the model performance much more than introducing the contradiction in the language modality.

Our contributions in this work can be summarised as follows:

(1) We introduce PTAE, which handles realistic learning conditions that mainly include unsupervised/unpaired language and action experiences while requiring minimal use of labeled data, which is expensive to collect.
(2) We show plausible behavior of the model when testing it with psychology-inspired contradictory information.

The remainder of this paper is organised as follows: in Section 2, we summarise different approaches to language grounding with robotic object manipulation. In Section 3, we define our PTAE in detail. Section 4 introduces the experiments and their results. In Section 5, we discuss these results, while Section 6 concludes the paper.
2. Related Work

There are several approaches toward intelligent agents that combine language learning with interactions in a 3D environment. A comprehensive research program (Abramson et al. 2020) proposed combining supervised learning, reinforcement learning (RL), and imitation learning. In the environment, two agents communicate with each other as one agent (setter) asks questions to or instructs the other (solver), which answers questions and interacts with objects accordingly. However, the scenario is abstract, with unrealistic object interaction. Hence, proprioception is not used, as the actions are high level, and a transfer of the approach from simulation to the real world would be non-trivial.

Jang et al. (2021) proposed BC-Z, which leverages a large multi-task dataset (100 tasks) to train a single policy, supervised with behavior cloning to match the actions demonstrated by humans in the dataset. To generalize to new tasks, the policy is conditioned on a task description: a joint embedding of a video demonstration and a language instruction. This allows passing either the video command or the language command to the policy when being trained to match the actions in a demonstration. BC-Z generalizes to different tasks but requires a large collection of human demonstrations, which is expensive. It also relies on human intervention to avoid unsafe situations and to correct mistakes.

Inspired by Yamada et al. (2018), we introduced the bidirectional Paired Variational Autoencoders (PVAE) (Özdemir et al. 2021), capable of modeling both language-to-action and action-to-language translation in a simple table-top setting where a humanoid robot interacts with small cubes. The approach can pair each robotic action sample (a sequence of joint values and visual features) with multiple language descriptions involving alternative words replacing the original words. The two variational autoencoder networks of the model do not share any connections but are aligned with a binding loss term. Due to the lack of common multimodal representations, PVAE needs to be prepared for each translation task in advance. To overcome this issue, we proposed a bidirectional attention-based multimodal network, PGAE (Özdemir et al. 2022), which can flexibly translate between the two modalities with the help of a signal phrase.

Another approach, CLIPort (Shridhar, Manuelli, and Fox 2021), combines the CLIP model (Radford et al. 2021) for pretrained vision-language representations with the Transporter model (Zeng et al. 2020) for robotic manipulation tasks. Transporter takes an action-centric approach to perception by detecting actions, rather than objects, and then learns a policy, which allows CLIPort to exploit geometric symmetries for efficient representation learning. On multiple object manipulation tasks, CLIPort outperforms CLIP and Transporter alone. Further, CLIPort trained on multiple tasks performs better in most cases than CLIPort trained only on particular tasks. This supports the hypothesis that language-conditioned task-learning skills can be transferred from one task to another. However, the approach is only realized with a relatively simple gripper, as it does not output joint angle values but 2D pixel affordance predictions. The actual action execution relies on the calibration between the robotic arm base and the RGB-D camera.

More recently, the same authors introduced Perceiver-Actor (PERACT) (Shridhar, Manuelli, and Fox 2022), which is designed to efficiently learn multi-task robotic manipulation according to given language input by utilizing voxel grids extracted from RGB-D images. The backbone of the model is the Transformer-based Perceiver IO (Jaegle et al. 2021), which uses latent vectors to tackle the processing of very long sequences. After the processing of appended language and voxel encodings by Perceiver IO, the voxels are decoded again to generate discrete actions by using linear transformations. PERACT achieves promising results in multiple tasks such as opening a drawer, turning a tap, and sliding blocks. However, as it only produces discrete actions, it relies on a random motion planner to execute instructions.

SayCan (Ahn et al. 2022) utilizes LLMs to provide task-grounding capabilities to an agent that is capable of executing short-horizon commands. The use of LLMs helps to ground these capabilities in the real world using value functions of the agent in order to produce feasible and useful instructions. However, the approach is limited to the set of skills that the agent possesses in the environment. An LLM is utilized to assign affordance probabilities to these skills according to a given high-level user instruction. The way these skills are defined in language (the wording, the length, etc.) can affect the performance of the whole system; e.g., LLMs tend to favor shorter phrases over longer ones.

GATO (Reed et al. 2022) is a single multi-task, multi-embodiment model that is general and performs well on hundreds of tasks in various domains, such as playing Atari games, manipulating objects, and image captioning. Regardless of the modality (e.g., vision, proprioception, language), the input is flattened and embedded before it is provided to the model. The model is a large Transformer decoder that has the same weights and architecture for all tasks and is trained solely in a supervised manner. However, despite performing moderately in each task, the approach cannot compete with specialized approaches on various tasks.

The encoder-decoder-based VisuoMotor Attention model (VIMA) (Jiang et al. 2022) is another object manipulation approach. It deals with robot action generation from multimodal prompts by interleaving language and image or video frame tokens at the input level. VIMA uses an object detection module to extract objects and bounding boxes from visual input to use as object tokens. The object tokens are then interleaved with the language tokens and processed by the pretrained T5 model (Raffel et al. 2020), which is used as the encoder. On the decoder end, the approach uses a causal Transformer decoder, which consists of cross- and self-attention layers and autoregressively generates actions based on the history of previous actions and the multimodal prompt. VIMA is shown to outperform state-of-the-art approaches, including GATO, on a number of increasingly difficult object manipulation tasks involving zero-shot generalization with unseen objects and their combinations. An apparent weakness of VIMA is that it relies on the performance of off-the-shelf object detectors.

Different from most of the aforementioned approaches, our model is bidirectional: it can not only produce actions according to given language descriptions but also recognize actions and produce their descriptions. As our model is based on an autoencoder-like architecture, it can be trained in a mostly unsupervised way by asking the model to reproduce the given language or proprioception input. Moreover, our approach is flexible during inference, since it does not need to be reconfigured for the translation task: due to the inclusion of the task signal in the language input, our PTAE can reliably execute the desired task on the go, whether it is a translation from language to action or vice versa. This is an essential step towards an autonomous agent that can interact within the environment as well as communicate with humans.
3. Paired Transformed Autoencoder

Our model, named PTAE, is an encoder-decoder architecture that is capable of bidirectional translation between robot actions and language. It consists of a Crossmodal Transformer, which is the backbone and multimodality fusion mechanism of the architecture, and LSTM-based decoders that output language and joint values, respectively. As input, PTAE accepts language descriptions of actions, including the task signal that defines the translation direction, as well as a sequence of the concatenation of joint values and visual features. According to the task signal, PTAE outputs the joint values required for executing a particular action, or it outputs a language description of an action.

[Figure 2. The architecture of the PTAE model, shown translating the instruction "execute: pull red fast" into a joint-value sequence. The inputs are a language description (incl. a task signal) and a sequence of visual features (extracted using the channel-separated convolutional autoencoder) and joint values, while the outputs are a description and a sequence of joint values. The language encoder can be an LSTM, the BERT Base model (Devlin et al. 2019), or the descriptions can be directly passed to the transformer word by word. The action encoder can be an LSTM, or the action sequence can be passed directly to the transformer. Both decoders are LSTMs (shown unfolded). The bottleneck, where the two streams are connected, is based on the Crossmodal Transformer. h is the shared representation vector.]

As shown in Figure 2, PTAE is composed of a Crossmodal Transformer, which accepts multimodal input (i.e., language, proprioception, and vision), and language and action decoders that output language descriptions and joint values, respectively. The language and action input can optionally be preprocessed by LSTM-based encoders, as in the case of PGAE.[1] However, after some initial trials with both cases, in this paper we do not use any extra encoding layers before the Crossmodal Transformer, for the sake of simplicity and model size, as we do not see any significant change in performance.

[1] For exact definitions of the LSTM-based language and action encoders, readers may refer to the PGAE paper (Özdemir et al. 2022).
3.1. Crossmodal Transformer

The Crossmodal Transformer replaces the Gated Multimodal Unit (GMU) (Arevalo et al. 2020) of our previous PGAE model (Özdemir et al. 2022) and can essentially be employed as both the language and the action encoder. The simplified architecture of the Crossmodal Transformer can be seen in Figure 3. Its function is to extract the common latent representations of paired language and action sequences. Following the HCM architecture (Irshad et al. 2021), we use the language modality as queries (Q vectors) and the action modality (concatenated visual features and joint values) as keys (K vectors) and values (V vectors). The language descriptions are represented as one-hot encoded vectors, whilst the action input is composed of the joint values of NICO's left arm and the visual features from images recorded by the camera in NICO's eye. As in PGAE, we use a channel-separated convolutional autoencoder (CAE) to extract visual features from images.

[Figure 3. The architecture of the Crossmodal Transformer: language features (L_feats) are embedded and used as the query vector (Q), whereas the embedded action features (A_feats) are used as the key (K) and value (V) vectors. The positional embedding is applied only to the language features. The multi-head attention (MHA) involves the Q-, K-, and V-specific feedforward (FFW) layers and the scaled dot product attention layer, following the original Transformer architecture. The multiple heads are then concatenated and fed to the final FFW, which outputs the common hidden representation vector h.]

The Crossmodal Transformer encodes the common latent representations as follows:

\[ Q = \mathrm{ReLU}\left( W^{\mathrm{token}} \cdot x_t + b^{\mathrm{token}} \right) + \mathrm{PE}(x_t) \quad (1 \le t \le N+1), \]
\[ K, V = \mathrm{ReLU}\left( W^{\mathrm{act}} \cdot [v_t; j_t] + b^{\mathrm{act}} \right) \quad (1 \le t \le M), \]
\[ A_t = \mathrm{MHA}(Q, K, V) \quad (1 \le t \le N+1), \]
\[ h_t = \mathrm{PWFF}(A_t) \quad (1 \le t \le N+1), \]
\[ h = \mathrm{AvgPool}(h_t) \quad (1 \le t \le N+1), \]

where x, v, and j are the linguistic, visual, and proprioceptive inputs, respectively; note that when no language or action encoder is used, x corresponds to L_feats, while the concatenation of visual features and joint values [v_t; j_t] corresponds to A_feats in Figure 3. ReLU is the rectified linear unit activation function, while PE, MHA, and PWFF are the positional encodings, the multi-head attention layer, and the position-wise feedforward layer as used in the original Transformer paper (Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, and Polosukhin 2017). A_t is the crossmodal attention vector for time step t, whereas h_t is the hidden vector for time step t. AvgPool is average pooling applied along the time axis to the sequential hidden vectors to arrive at the common latent representation vector h. For our experiments, we employ a single-layer Crossmodal Transformer with 4 parallel attention heads.
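To make the fusion mechanism concrete, the following is a minimal PyTorch sketch of the encoder defined by the equations above. It is our own illustrative reconstruction, not the authors' code; in particular, the visual feature size (vis_dim) and the maximum number of language tokens are assumptions.

import torch
import torch.nn as nn

class CrossmodalTransformer(nn.Module):
    # Single-layer crossmodal fusion: language tokens (queries) attend
    # over the action sequence (keys/values), cf. the equations above.
    def __init__(self, lang_dim=28, vis_dim=192, joint_dim=5,
                 d_model=256, n_heads=4, max_words=6):
        super().__init__()
        self.token_proj = nn.Linear(lang_dim, d_model)           # W_token, b_token
        self.act_proj = nn.Linear(vis_dim + joint_dim, d_model)  # W_act, b_act
        self.pos_emb = nn.Embedding(max_words, d_model)          # PE, language only
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.pwff = nn.Sequential(nn.Linear(d_model, d_model),   # position-wise FFW
                                  nn.ReLU(),
                                  nn.Linear(d_model, d_model))

    def forward(self, lang, act):
        # lang: (B, N+1, lang_dim) one-hot words incl. the task signal
        # act:  (B, M, vis_dim + joint_dim) visual features and joint values
        pos = torch.arange(lang.size(1), device=lang.device)
        q = torch.relu(self.token_proj(lang)) + self.pos_emb(pos)
        kv = torch.relu(self.act_proj(act))
        a_t, _ = self.mha(q, kv, kv)        # crossmodal attention A_t
        h_t = self.pwff(a_t)                # per-step hidden vectors h_t
        return h_t.mean(dim=1)              # AvgPool over time -> shared code h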
3.2. Language Decoder

We use an LSTM as the language decoder in order to autoregressively generate the descriptions word by word by expanding the common latent representation vector h produced by the Crossmodal Transformer:

\[ h^{\mathrm{dec}}_0, c^{\mathrm{dec}}_0 = W^{\mathrm{dec}} \cdot h + b^{\mathrm{dec}}, \]
\[ h^{\mathrm{dec}}_t, c^{\mathrm{dec}}_t = \mathrm{LSTM}(y_{t-1}, h^{\mathrm{dec}}_{t-1}, c^{\mathrm{dec}}_{t-1}) \quad (1 \le t \le N-1), \]
\[ y_t = \mathrm{soft}\left( W^{\mathrm{out}} \cdot h^{\mathrm{dec}}_t + b^{\mathrm{out}} \right) \quad (1 \le t \le N-1), \]

where soft represents the softmax activation function and y_0 is the vector for the symbol indicating the beginning of the sentence, the <BOS> tag.
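A corresponding sketch of the language decoder follows, again our own reconstruction under the dimensions given in Section 3.6 (h of size 256, a 28-dimensional vocabulary); feeding the softmax output back directly stands in for whatever word-feedback scheme the original implementation uses.

import torch
import torch.nn as nn

class LanguageDecoder(nn.Module):
    # Unrolled LSTM that expands the shared code h into a word sequence.
    def __init__(self, d_model=256, hidden=256, vocab=28, bos_index=0):
        super().__init__()
        self.init_proj = nn.Linear(d_model, 2 * hidden)  # W_dec -> (h_0, c_0)
        self.cell = nn.LSTMCell(vocab, hidden)
        self.out = nn.Linear(hidden, vocab)              # W_out, b_out
        self.vocab, self.bos = vocab, bos_index

    def forward(self, h, n_words=4):
        hd, cd = self.init_proj(h).chunk(2, dim=-1)      # initial LSTM state from h
        y = torch.zeros(h.size(0), self.vocab, device=h.device)
        y[:, self.bos] = 1.0                             # y_0 = <BOS>
        words = []
        for _ in range(n_words):
            hd, cd = self.cell(y, (hd, cd))
            y = torch.softmax(self.out(hd), dim=-1)      # y_t = soft(W_out h_t + b_out)
            words.append(y)
        return torch.stack(words, dim=1)                 # (B, N-1, vocab)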
3.3. Action Decoder

Similarly, an LSTM is employed as the action decoder to output the joint angle values at each time step with the help of the common representation vector h:

\[ h^{\mathrm{dec}}_0, c^{\mathrm{dec}}_0 = W^{\mathrm{dec}} \cdot h + b^{\mathrm{dec}}, \]
\[ h^{\mathrm{dec}}_t, c^{\mathrm{dec}}_t = \mathrm{LSTM}(v_t, \hat{\jmath}_t, h^{\mathrm{dec}}_{t-1}, c^{\mathrm{dec}}_{t-1}) \quad (1 \le t \le M-1), \]
\[ \hat{\jmath}_{t+1} = \tanh\left( W^{\mathrm{out}} \cdot h^{\mathrm{dec}}_t + b^{\mathrm{out}} \right) \quad (1 \le t \le M-1), \]

where \hat{\jmath}_t denotes the predicted joint values for time step t and tanh is the hyperbolic tangent activation function. We take \hat{\jmath}_1 as j_1, i.e., the ground-truth joint angle values corresponding to the initial position of the arm. The visual features used as input v are extracted from the ground-truth images and used similarly to teacher forcing, whereas the joint angle values \hat{\jmath}_t are used autoregressively.
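The action decoder can be sketched analogously; note how the ground-truth visual features are fed at every step (akin to teacher forcing) while the predicted joint values are fed back autoregressively. Dimensions other than the 5 joint values are illustrative.

import torch
import torch.nn as nn

class ActionDecoder(nn.Module):
    # LSTM that predicts the next joint values from the shared code h.
    def __init__(self, d_model=256, hidden=256, vis_dim=192, joint_dim=5):
        super().__init__()
        self.init_proj = nn.Linear(d_model, 2 * hidden)  # W_dec -> (h_0, c_0)
        self.cell = nn.LSTMCell(vis_dim + joint_dim, hidden)
        self.out = nn.Linear(hidden, joint_dim)          # W_out, b_out

    def forward(self, h, vis_seq, j1):
        # vis_seq: (B, M, vis_dim) ground-truth visual features; j1: (B, joint_dim)
        hd, cd = self.init_proj(h).chunk(2, dim=-1)
        j = j1                                   # start from the initial arm pose
        joints = []
        for t in range(vis_seq.size(1) - 1):
            hd, cd = self.cell(torch.cat([vis_seq[:, t], j], dim=-1), (hd, cd))
            j = torch.tanh(self.out(hd))         # predicted joints for step t+1
            joints.append(j)
        return torch.stack(joints, dim=1)        # (B, M-1, joint_dim)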
3.4. Visual Feature Extraction

Following the PGAE pipeline (Özdemir et al. 2022), the channel-separated convolutional autoencoder (CAE) is used to extract visual features from first-person images from the eye cameras of NICO recorded in the simulation. We utilize channel separation when extracting visual features: an instance of the CAE is trained for each RGB color channel. In a previous paper (Özdemir et al. 2021), we show that channel separation distinguishes object colors more accurately than a regular CAE without channel separation.

We feed each instance of the channel-separated CAE with the corresponding channel of RGB images of size 120 × 160. The channel-separated CAE is made up of a convolutional encoder, a fully-connected bottleneck, and a deconvolutional decoder. Each RGB channel is trained separately, after which we extract the channel-specific visual features from the bottleneck and concatenate them to arrive at composite visual features. These visual features make up v, which is used as the vision input to PTAE. For further details on the visual feature extraction process, readers may refer to (Özdemir et al. 2021).
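A sketch of the channel separation idea is given below; the convolutional layer sizes and the per-channel feature dimension are our assumptions, as the paper defers those details to Özdemir et al. (2021).

import torch
import torch.nn as nn

class SingleChannelCAE(nn.Module):
    # One convolutional autoencoder per RGB channel; only the encoder path
    # to the bottleneck is sketched here. Layer sizes are illustrative.
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(feat_dim),              # fully-connected bottleneck
        )

    def forward(self, channel_img):               # (B, 1, 120, 160)
        return self.encoder(channel_img)

def extract_visual_features(rgb, caes):
    # rgb: (B, 3, 120, 160); caes: one trained CAE per colour channel
    feats = [cae(rgb[:, c:c + 1]) for c, cae in enumerate(caes)]
    return torch.cat(feats, dim=-1)               # composite visual features v_t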
3.5. Loss Function

We use two loss functions to calculate the deviation from the ground-truth language descriptions and joint values. The language loss, L_lang, is calculated as the cross entropy between input and output words, while the action loss, L_act, is the mean squared error (MSE) between original and predicted joint values:

\[ L_{\mathrm{lang}} = \frac{1}{N-1} \sum_{t=1}^{N-1} \left( - \sum_{i=0}^{V-1} x^{[i]}_{t+1} \log y^{[i]}_t \right), \]
\[ L_{\mathrm{act}} = \frac{1}{M-1} \sum_{t=1}^{M-1} \lVert j_{t+1} - \hat{\jmath}_{t+1} \rVert_2^2, \]

where V is the vocabulary size, N is the number of words per description, and M is the sequence length for action trajectories. The total loss is then the sum of the language and action losses:

\[ L_{\mathrm{all}} = \alpha L_{\mathrm{lang}} + \beta L_{\mathrm{act}}, \]

where α and β are weighting factors for the language and action terms in the loss function. In our experiments, we take both α and β as 1.0. We use loss functions identical to those of PGAE, except for the weight vector used in the language loss.
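In code, the combined objective reduces to a weighted sum of a cross-entropy and an MSE term. The sketch below uses PyTorch's built-in reductions, which normalise slightly differently from the equations above (over all elements rather than over time steps only), so it matches them only up to constant factors.

import torch
import torch.nn.functional as F

def ptae_loss(word_logits, target_words, pred_joints, target_joints,
              alpha=1.0, beta=1.0):
    # word_logits: (B, N-1, V) pre-softmax scores; target_words: (B, N-1) indices
    # pred_joints / target_joints: (B, M-1, joint_dim)
    l_lang = F.cross_entropy(word_logits.flatten(0, 1), target_words.flatten())
    l_act = F.mse_loss(pred_joints, target_joints)
    return alpha * l_lang + beta * l_act   # L_all = alpha*L_lang + beta*L_act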
3.6. Training Details

Visual features are extracted in advance by the channel-separated CAE before training PTAE and PGAE. Visual features are necessary to execute actions according to language instructions, since the cube arrangement is decisive in manipulating the left or right object, i.e., determining whether to manipulate the left or the right cube depends on the position of the target cube. After extracting visual features, both PGAE and PTAE are trained end-to-end with all three modalities. After initial experiments, PGAE is trained for 6,000 epochs, while PTAE is trained for 2,500 epochs using the gradient descent algorithm and the Adam optimizer (Kingma and Ba 2015). For PTAE, we decided that h has 256 dimensions, whereas the same vector has 50 dimensions in PGAE. x has 28 dimensions, j has 5 dimensions, N is equal to 5, while M is 50 for fast and 100 for slow actions. For both PGAE and PTAE, we take the learning rate as 10^-5 with a batch size of 6 samples, after determining these as the optimal hyperparameters. PTAE has approximately 1.5M parameters, compared to PGAE's little over 657K parameters.
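These optimizer settings are simple to pin down in code; a minimal sketch follows, where the PTAE module itself is assumed to be composed from the sketches in Section 3.

import torch

BATCH_SIZE = 6       # samples per batch, as stated above
EPOCHS_PTAE = 2500   # PTAE converges in far fewer epochs than PGAE
EPOCHS_PGAE = 6000

def make_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # Learning rate 10^-5 as stated in the text; the batch size is
    # handled by the data loader, not the optimizer.
    return torch.optim.Adam(model.parameters(), lr=1e-5)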
4. Experiments

We use the same dataset (Özdemir et al. 2021) as in the PGAE paper (Özdemir et al. 2022), except that in this paper we exclude the experiments with another agent on the opposite side of the table. The dataset encompasses 864 samples of sequences of images and joint values alongside their textual descriptions. It consists of robot actions performed on two cubes of different colors on a table by the NICO robot, generated using inverse kinematics and created in the simulation environment using the Blender software.[2] The NICO robot has a camera in each eye, which is used to record a sequence of egocentric images. According to the scenario, NICO manipulates one of the two cubes on the table with its left arm at a time. In total, the dataset includes 12 distinct actions, 6 cube colors, 288 descriptions, and 144 patterns (action-description-cube arrangement combinations). The 144 patterns are randomly varied six times in terms of action execution in simulation, so we arrive at a dataset of 864 samples in total. Out of the 864 samples, 216 samples that involve every unique description and action type are excluded and used as the test set. The remaining 648 samples make up the training set. The vocabulary consists of the following words, divided into 3 categories:

• 6 action words (3 original/3 alternative): "push/move-up", "pull/move-down", "slide/move-sideways"
• 12 colour words (6 original/6 alternative): "red/scarlet", "green/harlequin", "blue/azure", "yellow/blonde", "cyan/greenish-blue", "violet/purple"
• 4 speed words (2 original/2 alternative): "slowly/unhurriedly", "fast/quickly"

The sentences consist of one word from each category; therefore, our textual descriptions are 3-word sentences. For more details on the dataset, readers may consult our previous work (Özdemir et al. 2021). PGAE and PTAE are trained on this dataset, and their performances are tested in terms of action-to-language and language-to-action translations under different amounts of supervision.

[2] https://www.blender.org/

Task signals. We use four signals to train PTAE. According to the given signal, the input and output of the model change. The signals are:

• Describe: action-to-language translation
• Execute: language-to-action translation
• Repeat Action: action-to-action translation
• Repeat Language: language-to-language translation

With the latter two "repeat" signals, the network uses mainly unimodal information. The "describe" and "execute" signals, on the other hand, involve crossmodal translation from one modality to the other. The unimodal signals are used in the unsupervised learning of an autoencoder, whereas the crossmodal signals are used in supervised learning, where coordinated action values and language labels must be available. In the case of PGAE training, an additional "repeat both" signal is also used, which also requires coordinated labels and leads to slightly better performance (Özdemir et al. 2022). For PTAE, however, this was found unnecessary.
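As a quick sanity check of these counts, and of the chance level quoted in the results below, the vocabulary combinatorics can be verified in a few lines (variable names are ours):

# 3-word sentences: one action word, one colour word, one speed word.
action_words, colour_words, speed_words = 6, 12, 4   # incl. alternatives
descriptions = action_words * colour_words * speed_words
assert descriptions == 288

# Chance level for a generated description: the 3 action types,
# 6 colours, and 2 speeds must all be semantically correct.
chance = 1 / (3 * 6 * 2)
print(f"{chance:.2%}")   # -> 2.78%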
Reduction of supervised training. We restrict the amount of supervision by increasing the ratio of unsupervised learning iterations, i.e., training with the unimodal "repeat" signals, within the overall training iterations. Thereby, the ratio of supervised learning iterations, i.e., training with the crossmodal signals, decreases. The resulting training paradigm is analogous to developmental language learning, where an infant is exposed only to a limited amount of supervision. We train both PTAE and PGAE with varying ratios of unimodal to total training iterations. For another set of experiments, we restrict the amount of supervision by limiting the proportion of training samples used for the crossmodal translation tasks. We test the performance of both models with varying degrees of unsupervised training under the two schemes (limiting the percentage of iterations or of samples) on the crossmodal translation tasks.

In this work, we investigate action-to-language and language-to-action translations because they are the more important and difficult tasks. For the "repeat" tasks, the results match our previous work; therefore, readers can refer to our publication (Özdemir et al. 2022).

[Figure 4. Sentence accuracy for action-to-language translation on the test set wrt. supervised training iterations. Supervised training refers to the crossmodal translation cases "describe" and "execute". The two crossmodal signals receive the same number of iterations between them out of the supervised iterations. We report the results for 1%, 2%, 10%, 20%, 50%, and 66.6% (the regular training case) crossmodal (supervised) iterations. These percentages correspond to the fraction of supervised training iterations for PGAE and PTAE. Note that the 100% case is not shown here, since the models need unsupervised iterations (unimodal repeat signals) to be able to perform the "repeat language" and "repeat action" tasks. The plot shows sentence accuracy (%) against the ratio of crossmodal to total training iterations (%), with curves for PGAE, PTAE, and the chance level.]

Figure 4 shows the results of PGAE and PTAE on action-to-language translation with different percentages of training iterations used in a supervised fashion. Both PGAE and PTAE, under training regimes with different proportions of supervised training iterations, achieve accuracies higher than the chance level (2.78%), which we calculate based on our grammar (action, color, speed) as 1/(3 × 6 × 2). The action-to-language translation performance of PGAE falls when the ratio of crossmodal (viz. supervised) training iterations is low, particularly when 10% or a smaller proportion of the iterations are supervised. Even though the description accuracy slightly increases to over 95% when supervised training amounts to only 20% of all training iterations, it sharply drops to well below 50% when the rate is decreased to 2%. PGAE is able to describe 36% of the test samples when only 1% of the training iterations are used to learn crossmodal translations between action and language. In contrast, PTAE maintains its perfect description accuracy even when it has only been trained with 1% supervised training iterations.
While there is a detrimental impact of reduced supervision, i.e., of the limitation on the percentage of crossmodal training iterations, on the action-to-language translation performance of PGAE, the Transformer-based PTAE is not affected by the same phenomenon. For space reasons, we do not report language-to-action results wrt. different percentages of supervised iterations, but we observed a similar trend, comparable with Figure 4.

In order to further investigate the performance of PTAE with limited supervision, we introduce a more challenging training regime. We limit the number of training samples shown to the supervised signals, "describe" and "execute", and show the rest of the training samples only in the "repeat action" and "repeat language" modes. We train both PGAE and PTAE with varying percentages of supervised training samples.

[Figure 5. Sentence accuracy for action-to-language translation on the test set wrt. supervised training samples. Supervised training refers to the crossmodal translation cases "describe" and "execute". We limit the number of training samples for the supervised tasks. We report the results for the 1%, 2%, 5%, 10%, 20%, 50%, and 66.6% cases as well as the 100% regular training case. These percentages correspond to the fraction of training samples used exclusively for the supervised training of PGAE and PTAE, i.e., both the "execute" and "describe" signals are trained with only the limited number of samples corresponding to the percentages. The plot shows sentence accuracy (%) against the ratio of crossmodal to total training samples (%), with curves for PGAE, PTAE, and the chance level.]

The results can be seen in Figure 5. In all cases with different proportions of supervised training samples, both PGAE and PTAE outperform the chance level. While maintaining perfect sentence accuracy down to 20% supervised training, and keeping up its performance at 10% supervised training for the "describe" signal, PTAE's performance drops sharply when the ratio of training samples used for the crossmodal signals is 2% or below. Nevertheless, PTAE beats PGAE in every case when trained with different percentages of supervised training samples. PGAE's performance suffers even when 50% of the training samples are used for the supervised signals: it drops below 80%, whereas PTAE retains 100% in the same case. It takes more than 90% of the training samples to be used exclusively with the unsupervised signals for PTAE's performance to decrease meaningfully (from 100% to 81%), while this threshold is much lower for PGAE, as its performance already drops significantly at 50%. Even with 1% supervised training samples, which amounts to only 7 training samples, PTAE manages to translate one-third of the test samples from action to sentences.
[Figure 6. Joint value prediction error in language-to-action translation on the test set wrt. supervised training samples. Supervised training refers to the crossmodal translation cases "describe" and "execute". We limit the number of training samples for the supervised tasks. We report the results for the 1%, 2%, 5%, 10%, 20%, 50%, and 66.6% cases as well as the 100% regular training case. These percentages correspond to the fraction of training samples used exclusively for the supervised training of PGAE and PTAE. The "execute" and "describe" translations are shown the same limited number of samples. The plot shows the NRMSE (%) against the ratio of crossmodal to total training samples (%), with curves for PGAE and PTAE.]

Language-to-action translation results with respect to different percentages of supervised training samples for PGAE and PTAE are shown in Figure 6. We show the deviation of the produced joint values from the original ones in terms of the normalized root-mean-squared error (NRMSE), which we obtain by dividing the root-mean-squared error (RMSE) between the predicted and ground-truth values by the range of joint values. Lower percentages indicate better prediction (0% NRMSE means that the predicted values are identical to the ground-truth values), whereas higher percentages indicate worse prediction (100% NRMSE means that the RMSE between predicted and ground-truth values equals the range of possible values). We can see a similar trend as in action-to-language translation, apart from the regular case (100%), where PGAE has a lower error than PTAE; this is probably due to the fact that PGAE is trained for more than twice as many iterations as PTAE, since it takes longer for PGAE's training loss to reach a global minimum. In all other cases, limiting the ratio of training samples used in the supervised modes impacts PGAE's language-to-action performance heavily: the NRMSE rises from less than 0.5% to almost 8% when the percentage of supervised samples is reduced to two-thirds of the training samples. The error rate increases further as the number of training samples used in the crossmodal training modes decreases. The NRMSE for PTAE is also inversely related to the ratio of supervised training samples. However, the impact of limiting the number of training samples for the supervised modes is much lower on PTAE than on PGAE. When the percentage of supervised training samples is reduced to 1%, the deviation from the ground-truth joint values is only a little more than 4% for PTAE, whereas the same statistic for PGAE is almost 14%.
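The NRMSE as defined here is straightforward to compute; a small NumPy helper (our own, with the joint-value range passed in explicitly) is given below.

import numpy as np

def nrmse(pred: np.ndarray, target: np.ndarray, value_range: float) -> float:
    # Normalized root-mean-squared error: RMSE divided by the range of
    # possible joint values, reported as a percentage (0% = identical).
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    return 100.0 * rmse / value_range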
[Figure 7. Model performance on the test set wrt. the number of conflicts introduced in the extra input. For action-to-language and language-to-language (the top row), we show the predicted sentence accuracies; for language-to-action and action-to-action, we show the normalized root-mean-squared error (NRMSE) of the predicted joint values. The modality in which the conflicts are introduced is given on the x-axis. For each signal, we add extra conflicting inputs either in the action or in the language input. When the conflict is introduced in action, we also test having the conflict only in the vision or only in the proprioception submodality; in this case, the other submodality receives the matching input.]

Exposure to conflicting input modalities. We also investigate the impact of contradictory extra input on the performance of PTAE. For this, we use PTAE-regular, which is trained with 33% unsupervised training iterations and no contradictory input. We test the robustness of our approach to varying numbers of conflicts (up to 3) in the extra input. The definitions of the added conflict per task signal are:

• "describe": we add a conflicting description to the language input (conflict in language).
• "execute": we use a conflicting sequence of vision and proprioception input (conflict in action).
• "repeat action": we add a conflicting description to the language input (conflict in language).
• "repeat language": we use a conflicting sequence of vision and proprioception input (conflict in action).

The conflicts are introduced using the following scheme:

• for the conflict in the extra language input, one, two, or all of the action, color, and speed words that constitute a description do not match the action;
• for the conflict in the extra action input, one, two, or all of the action-type, position, and speed aspects, which distinguish actions, do not match the language description.

The results of this experiment are given in Figure 7. In the case of the "describe" and "repeat action" signals, the action supplies the relevant input, whereas the language is the conflicting distractor. Here, we observe only a slight decrease in performance.
In the case of action-to-language translation ("describe"), the sentence accuracy goes down from 100% to 95% when there are three conflicting input elements (action type, color, speed). Action-to-action ("repeat action") translation manages to retain its performance, as the error in joint values only slightly increases, from 1.03% to 1.09%, in the case with 3 conflicts.

In the case of the "execute" and "repeat language" signals, the language supplies the relevant input while the action is the conflicting distractor. Here, we observe a big performance drop. Language-to-action translation ("execute") suffers heavily, as the deviation of the predicted joint values from the ground-truth joint values increases from 0.99% to 4.95%. In the language-to-language translation case ("repeat language"), PTAE loses its ability to repeat the given language description when one or more conflicting elements (action type, position, speed) are introduced with the extra input: the sentence accuracy decreases from 100% to 0%.

Therefore, we can see the asymmetric impact of conflicts in the two modalities: when the language input is introduced as a contradictory element, the performance drops slightly, whereas when the contradictory input is introduced in the action stream, the model is affected heavily and performs poorly. The output modality has no significant impact on the result; for example, both "describe" and "repeat language" largely output language, yet they are affected very differently by the conflicting input. To test whether the bigger impact of conflicting action input is due to the involvement of two submodalities in action (vision and proprioception), we also tried introducing the conflict either only in vision or only in proprioception (the relatively brighter bars in the two charts on the right in Figure 7). In either case, the performance is still substantially negatively affected, although the drop in performance is naturally not as severe as when introducing the conflict in both submodalities.
5. Discussion
|
805 |
+
The experimental results on action-to-language and language-to-action translations
|
806 |
+
show the superior performance and efficiency of our novel PTAE model under lim-
|
807 |
+
ited supervision. Limiting the percentage of supervised crossmodal iterations during
|
808 |
+
training has no adverse effect on PTAE as it maintains its perfect sentence accuracy
|
809 |
+
when translating from action to language. In contrast, the previous PGAE model’s
|
810 |
+
action-to-language translation accuracy drops substantially to around 40% when only
|
811 |
+
1 or 2% of the training iterations are supervised.
|
812 |
+
When we challenge both models more by limiting the number of training samples for
|
813 |
+
the supervised crossmodal “execute” and “describe” signals, we see a similar pattern:
|
814 |
+
when 50% or less of the training samples are used for supervised signals, action-to-
|
815 |
+
language sentence accuracy for PGAE decreases directly proportional to the ratio of
|
816 |
+
supervised samples. PTAE, on the other hand, retains its action-to-language perfor-
|
817 |
+
mance up until the case where only 5% of the training samples are used in a supervised
|
818 |
+
fashion. Even after being trained with 2% supervised training, which amounts to only
|
819 |
+
13 samples out of 648, PTAE is able to describe more than half of the action sequences
|
820 |
+
correctly. All in all, PTAE shows superior action-to-language performance than PGAE
|
821 |
+
for varied levels of limited supervision.
|
822 |
+
The adverse effect of limiting the number of supervised training samples on the
|
823 |
+
language-to-action performance can already be seen for PGAE even when only one-
|
824 |
+
third of the samples are excluded (66% supervised case). The NRMSE between pre-
|
825 |
+
15
|
826 |
+
|
827 |
+
dicted and ground-truth joint values rises significantly from around 0.5% to around
|
828 |
+
8%. It continues to increase gradually after reducing the level of supervision to 20%.
|
829 |
+
On the contrary, PTAE is robust against the limited supervision with respect to the
|
830 |
+
ratio of crossmodal training samples until the supervised percentage is brought down
|
831 |
+
to 10%. After that, it can be seen that the error rate gradually increases, albeit only
|
832 |
+
just over 4% for PTAE when only 7 samples are used for the supervised signals. Over-
|
833 |
+
all, these results indicate the clear superiority of Transformer-based multimodal fusion
|
834 |
+
over a simpler attention mechanism by GMU in terms of performance and efficiency.
|
835 |
+
Although it is relatively larger than PGAE, PTAE is trained much faster and reaches
|
836 |
+
a global optimum in less than half of the training iterations of PGAE.
|
837 |
+
When introducing a conflicting modality input during testing, we observed an asymmetry: a conflicting action input leads to a larger disturbance than a conflicting language input. One possible reason is that the Crossmodal Transformer architecture is itself asymmetric: the action input enters as two input vectors (K and V: keys and values), whereas the language input enters as one input vector (Q: queries). This setting was chosen because the opposite setup (with action as queries) was found to perform worse. Our setup can be interpreted as language-conditioned action attention. A computationally more expensive architecture could combine both asymmetric setups, as has been done for learning vision-and-language representations (Lu et al. 2019).
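To make this architectural asymmetry concrete, here is a minimal PyTorch sketch of language-conditioned action attention; the hidden size, head count, and sequence lengths are illustrative assumptions rather than the paper's exact configuration:

import torch
import torch.nn as nn

d_model = 256  # hidden size; illustrative only
attention = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

language = torch.randn(1, 5, d_model)  # language tokens serve as queries (Q)
action = torch.randn(1, 50, d_model)   # vision+proprioception steps serve as keys/values (K, V)

# Each language token attends over the whole action sequence, so the
# fused output follows the language sequence length.
fused, _ = attention(query=language, key=action, value=action)
print(fused.shape)  # torch.Size([1, 5, 256])

Using action as queries instead would make the output follow the action sequence; running both directions in parallel, as in ViLBERT (Lu et al. 2019), roughly doubles the attention cost.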
Another possible reason for the larger impact of a conflicting action input could be that the action input combines two submodalities, vision and proprioception, and therefore carries more information than the language input. However, limiting the conflict to one of the submodalities did not completely remove the asymmetry: introducing the conflict in only one action submodality (vision or proprioception) still had a stronger effect on the model performance than a conflicting language input. Unlike language, vision contains the complete information needed to perform a task. Consider the example "pull red slowly" for language-to-action translation: the language does not contain any information about whether the object is on the left or the right side, so the agent can only execute this correctly when also taking the visual input into account during action execution. In the opposite direction (action-to-language translation) and in action repetition, by contrast, the visual input has the complete information.
6. Conclusion
In this paper, we introduced the paired Transformer-based autoencoder PTAE, which we trained largely by unsupervised learning with additional but reduced supervision. PTAE achieves significantly better action-to-language and language-to-action translation performance under limited-supervision conditions than the former GMU-based model, PGAE. Furthermore, we tested the robustness of our new approach against contradictory extra input. In line with the concept of incongruence in psychology, these experiments show that conflict deteriorates the output of our model and that more conflicting features lead to higher interference. We also found an asymmetry between the action and language modalities in terms of their conflicting impact: the action modality has significantly more influence on the performance of the model, regardless of the main output modality.
Our novel bidirectional embodied language learning model is flexible in performing multiple tasks, and it is efficient and robust against the scarcity of labeled data. Hence, it is a step towards an autonomous agent that can communicate with humans while performing various tasks in the real world. In the future, we will extend our approach with reinforcement learning to reduce the need for expert-defined action trajectories. Furthermore, a reinforcement learner may explore more dexterous object manipulation with diversified action trajectories. With more realistic action execution, we will attempt to tackle the problem of sim-to-real transfer. Lastly, diversifying our action repertoire will inevitably lead to more diverse natural language descriptions, which we can address by employing a pretrained Transformer-based large language model as the language encoder.
Disclosure statement

The authors report there are no competing interests to declare.
Funding

This work was supported by the German Research Foundation (DFG) under Project TRR 169 Crossmodal Learning (CML), LeCareBot, IDEAS, and MoReSpace.
References
Abramson, J., A. Ahuja, A. Brussee, F. Carnevale, M. Cassin, S. Clark, A. Dudzik, P. Georgiev, A. Guy, T. Harley, F. Hill, A. Hung, Z. Kenton, J. Landon, T. P. Lillicrap, K. W. Mathewson, A. Muldal, A. Santoro, N. Savinov, V. Varma, G. Wayne, N. Wong, C. Yan, and R. Zhu. 2020. Imitating interactive intelligence. arXiv preprint arXiv:2012.05672.

Ahn, M., A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakrishnan, K. Hausman, A. Herzog, D. Ho, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, E. Jang, R. J. Ruano, K. Jeffrey, S. Jesmonth, N. Joshi, R. Julian, D. Kalashnikov, Y. Kuang, K.-H. Lee, S. Levine, Y. Lu, L. Luu, C. Parada, P. Pastor, J. Quiambao, K. Rao, J. Rettinghouse, D. Reyes, P. Sermanet, N. Sievers, C. Tan, A. Toshev, V. Vanhoucke, F. Xia, T. Xiao, P. Xu, S. Xu, M. Yan, and A. Zeng. 2022. Do as I can and not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691.

Antunes, A., A. Laflaquiere, T. Ogata, and A. Cangelosi. 2019. A bi-directional multiple timescales LSTM model for grounding of actions and verbs. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2614–2621.

Aravena, P., E. Hurtado, R. Riveros, J. F. Cardona, F. Manes, and A. Ibáñez. 2010. Applauding with closed hands: Neural signature of action-sentence compatibility effects. PLoS ONE 5(7), e11751.

Arevalo, J., T. Solorio, M. Montes-y-Gómez, and F. A. González. 2020. Gated multimodal networks. Neural Computing and Applications 32(14), 10209–10228.

Bisk, Y., A. Holtzman, J. Thomason, J. Andreas, Y. Bengio, J. Chai, M. Lapata, A. Lazaridou, J. May, A. Nisnevich, N. Pinto, and J. Turian. 2020, November. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pp. 8718–8735. Association for Computational Linguistics.

Brown, T., B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33, 1877–1901.

Devlin, J., M.-W. Chang, K. Lee, and K. Toutanova. 2019, June. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171–4186. Association for Computational Linguistics.
Eisermann, A., J. H. Lee, C. Weber, and S. Wermter. 2021, July. Generalization in multimodal language learning from simulation. In Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021).

Glenberg, A. M. and M. P. Kaschak. 2002. Grounding language in action. Psychonomic Bulletin & Review 9(3), 558–565.

Hatori, J., Y. Kikuchi, S. Kobayashi, K. Takahashi, Y. Tsuboi, Y. Unno, W. Ko, and J. Tan. 2018. Interactively picking real-world objects with unconstrained spoken language instructions. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3774–3781. IEEE.

Hauk, O., I. Johnsrude, and F. Pulvermüller. 2004. Somatotopic representation of action words in human motor and premotor cortex. Neuron 41(2), 301–307.

Heinrich, S., Y. Yao, T. Hinz, Z. Liu, T. Hummel, M. Kerzel, C. Weber, and S. Wermter. 2020. Crossmodal language grounding in an embodied neurocognitive model. Frontiers in Neurorobotics 14, 52.

Hochreiter, S. and J. Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8), 1735–1780.

Irshad, M. Z., C.-Y. Ma, and Z. Kira. 2021. Hierarchical cross-modal agent for robotics vision-and-language navigation. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 13238–13246.

Jaegle, A., S. Borgeaud, J.-B. Alayrac, C. Doersch, C. Ionescu, D. Ding, S. Koppula, D. Zoran, A. Brock, E. Shelhamer, et al. 2021. Perceiver IO: A general architecture for structured inputs & outputs. In International Conference on Learning Representations.

Jang, E., A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. 2021. BC-Z: Zero-shot task generalization with robotic imitation learning. In 5th Annual Conference on Robot Learning.

Jiang, Y., A. Gupta, Z. Zhang, G. Wang, Y. Dou, Y. Chen, L. Fei-Fei, A. Anandkumar, Y. Zhu, and L. Fan. 2022. VIMA: General robot manipulation with multimodal prompts.

Kaschak, M. P., C. J. Madden, D. J. Therriault, R. H. Yaxley, M. Aveyard, A. A. Blanchard, and R. A. Zwaan. 2005. Perception of motion affects language processing. Cognition 94(3), B79–B89.

Kerzel, M., T. Pekarek-Rosin, E. Strahl, S. Heinrich, and S. Wermter. 2020. Teaching NICO how to grasp: An empirical study on crossmodal social interaction as a key factor for robots learning from humans. Frontiers in Neurorobotics 14, 28.

Kerzel, M., E. Strahl, S. Magg, N. Navarro-Guerrero, S. Heinrich, and S. Wermter. 2017. NICO—Neuro-Inspired COmpanion: A developmental humanoid robot platform for multimodal interaction. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 113–120. IEEE.

Kingma, D. P. and J. Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR, San Diego, CA, USA, May 7–9.

Lu, J., D. Batra, D. Parikh, and S. Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), Advances in Neural Information Processing Systems, Volume 32. Curran Associates, Inc.

Lynch, C. and P. Sermanet. 2021. Language conditioned imitation learning over unstructured data. In D. A. Shell, M. Toussaint, and M. A. Hsieh (Eds.), Robotics: Science and Systems XVII.

Meteyard, L., B. Bahrami, and G. Vigliocco. 2007. Motion detection and motion verbs: Language affects low-level visual perception. Psychological Science 18(11), 1007–1013. PMID: 17958716.

Ogata, T., M. Murase, J. Tani, K. Komatani, and H. G. Okuno. 2007. Two-way translation of compound sentences and arm motions by recurrent neural networks. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1858–1863.

Özdemir, O., M. Kerzel, C. Weber, J. H. Lee, and S. Wermter. 2022. Learning flexible translation between robot actions and language descriptions. In E. Pimenidis, P. Angelov, C. Jayne, A. Papaleonidas, and M. Aydin (Eds.), Artificial Neural Networks and Machine Learning – ICANN 2022, Cham, pp. 246–257. Springer Nature Switzerland.
Özdemir, O., M. Kerzel, and S. Wermter. 2021, August. Embodied language learning with paired variational autoencoders. In 2021 IEEE International Conference on Development and Learning (ICDL), pp. 1–6.

Radford, A., J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763. PMLR.

Radford, A., J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. 2019. Language models are unsupervised multitask learners.

Raffel, C., N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research 21(140), 1–67.

Reed, S., K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg, T. Eccles, J. Bruce, A. Razavi, A. Edwards, N. Heess, Y. Chen, R. Hadsell, O. Vinyals, M. Bordbar, and N. de Freitas. 2022. A generalist agent.

Shao, L., T. Migimatsu, Q. Zhang, K. Yang, and J. Bohg. 2020. Concept2Robot: Learning manipulation concepts from instructions and human demonstrations. In Proceedings of Robotics: Science and Systems (RSS).

Shridhar, M., L. Manuelli, and D. Fox. 2021. CLIPort: What and where pathways for robotic manipulation. In Proceedings of the 5th Conference on Robot Learning (CoRL).

Shridhar, M., L. Manuelli, and D. Fox. 2022. Perceiver-Actor: A multi-task transformer for robotic manipulation. In Proceedings of the 6th Conference on Robot Learning (CoRL).

Shridhar, M., D. Mittal, and D. Hsu. 2020. INGRESS: Interactive visual grounding of referring expressions. The International Journal of Robotics Research 39(2–3), 217–232.

van Elk, M., H. T. van Schie, R. A. Zwaan, and H. Bekkering. 2010. The functional role of motor activation in language processing: Motor cortical oscillations support lexical-semantic retrieval. NeuroImage 50(2), 665–677.

Vaswani, A., N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.

Winter, A., C. Dudschig, J. Miller, R. Ulrich, and B. Kaup. 2022. The action-sentence compatibility effect (ACE): Meta-analysis of a benchmark finding for embodiment. Acta Psychologica 230, 103712.

Yamada, T., H. Matsunaga, and T. Ogata. 2018. Paired recurrent autoencoders for bidirectional translation between robot actions and linguistic descriptions. IEEE Robotics and Automation Letters 3(4), 3441–3448.

Zeng, A., P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin, D. Duong, V. Sindhwani, and J. Lee. 2020. Transporter networks: Rearranging the visual world for robotic manipulation. In J. Kober, F. Ramos, and C. J. Tomlin (Eds.), 4th Conference on Robot Learning, CoRL 2020, 16–18 November 2020, Virtual Event / Cambridge, MA, USA, Volume 155 of Proceedings of Machine Learning Research, pp. 726–747. PMLR.