Add files using upload-large-folder tool
This view is limited to 50 files because the commit contains too many changes.
- .gitattributes +101 -0
- 1dA0T4oBgHgl3EQfMv9X/content/2301.02136v1.pdf +3 -0
- 1dAzT4oBgHgl3EQf8v7L/content/2301.01910v1.pdf +3 -0
- 1dAzT4oBgHgl3EQf8v7L/vector_store/index.faiss +3 -0
- 1dAzT4oBgHgl3EQf8v7L/vector_store/index.pkl +3 -0
- 2dE4T4oBgHgl3EQfagyV/content/tmp_files/2301.05065v1.pdf.txt +3254 -0
- 2dE4T4oBgHgl3EQfagyV/content/tmp_files/load_file.txt +0 -0
- 3tAzT4oBgHgl3EQfffwg/content/2301.01452v1.pdf +3 -0
- 3tAzT4oBgHgl3EQfffwg/vector_store/index.faiss +3 -0
- 3tAzT4oBgHgl3EQfffwg/vector_store/index.pkl +3 -0
- 49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf +0 -0
- 49AzT4oBgHgl3EQfEPqD/content/tmp_files/2301.00990v1.pdf.txt +841 -0
- 49AzT4oBgHgl3EQfEPqD/content/tmp_files/load_file.txt +340 -0
- 6tE2T4oBgHgl3EQf7Qi3/vector_store/index.faiss +3 -0
- 79FLT4oBgHgl3EQfAy4h/content/2301.11967v1.pdf +3 -0
- 79FLT4oBgHgl3EQfAy4h/vector_store/index.pkl +3 -0
- 89AzT4oBgHgl3EQf-_4J/content/2301.01940v1.pdf +3 -0
- 89E1T4oBgHgl3EQfUAPq/content/2301.03086v1.pdf +3 -0
- 89E1T4oBgHgl3EQfUAPq/vector_store/index.faiss +3 -0
- 8tE3T4oBgHgl3EQfSAk3/content/tmp_files/2301.04427v1.pdf.txt +1228 -0
- 8tE3T4oBgHgl3EQfSAk3/content/tmp_files/load_file.txt +0 -0
- 99A0T4oBgHgl3EQfO__U/content/2301.02170v1.pdf +3 -0
- 99A0T4oBgHgl3EQfO__U/vector_store/index.faiss +3 -0
- 9NE4T4oBgHgl3EQfdgy1/content/2301.05092v1.pdf +3 -0
- 9NE4T4oBgHgl3EQfdgy1/vector_store/index.pkl +3 -0
- 9dE1T4oBgHgl3EQf8AVQ/content/tmp_files/2301.03540v1.pdf.txt +1008 -0
- 9dE1T4oBgHgl3EQf8AVQ/content/tmp_files/load_file.txt +0 -0
- 9dE3T4oBgHgl3EQfSQko/content/2301.04430v1.pdf +3 -0
- 9dE3T4oBgHgl3EQfSQko/vector_store/index.faiss +3 -0
- 9dE3T4oBgHgl3EQfSQko/vector_store/index.pkl +3 -0
- AdFLT4oBgHgl3EQfEy_H/content/2301.11985v1.pdf +3 -0
- AdFLT4oBgHgl3EQfEy_H/vector_store/index.pkl +3 -0
- BNAzT4oBgHgl3EQf__-y/content/2301.01957v1.pdf +3 -0
- BNAzT4oBgHgl3EQf__-y/vector_store/index.faiss +3 -0
- CdFJT4oBgHgl3EQftC3b/content/2301.11616v1.pdf +3 -0
- CdFJT4oBgHgl3EQftC3b/vector_store/index.faiss +3 -0
- CdFJT4oBgHgl3EQftC3b/vector_store/index.pkl +3 -0
- EdAzT4oBgHgl3EQfwv7y/content/2301.01729v1.pdf +3 -0
- EdAzT4oBgHgl3EQfwv7y/vector_store/index.faiss +3 -0
- EdAzT4oBgHgl3EQfwv7y/vector_store/index.pkl +3 -0
- EtE2T4oBgHgl3EQf-Ak-/content/2301.04233v1.pdf +3 -0
- EtE2T4oBgHgl3EQf-Ak-/vector_store/index.faiss +3 -0
- EtE2T4oBgHgl3EQf-Ak-/vector_store/index.pkl +3 -0
- FNE0T4oBgHgl3EQfzAKR/content/tmp_files/2301.02667v1.pdf.txt +1363 -0
- FNE0T4oBgHgl3EQfzAKR/content/tmp_files/load_file.txt +0 -0
- FdFJT4oBgHgl3EQfDCyN/content/2301.11432v1.pdf +3 -0
- FdFJT4oBgHgl3EQfDCyN/vector_store/index.faiss +3 -0
- FdFJT4oBgHgl3EQfDCyN/vector_store/index.pkl +3 -0
- HNAzT4oBgHgl3EQfjP3H/content/2301.01514v1.pdf +3 -0
- HNAzT4oBgHgl3EQfjP3H/vector_store/index.faiss +3 -0
.gitattributes
CHANGED
@@ -1134,3 +1134,104 @@ T9FIT4oBgHgl3EQffis3/content/2301.11279v1.pdf filter=lfs diff=lfs merge=lfs -tex
 1dA0T4oBgHgl3EQfMv9X/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 6tE2T4oBgHgl3EQf7Qi3/content/2301.04208v1.pdf filter=lfs diff=lfs merge=lfs -text
 cdAyT4oBgHgl3EQf-foD/content/2301.00891v1.pdf filter=lfs diff=lfs merge=lfs -text
+BNAzT4oBgHgl3EQf__-y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+tNAyT4oBgHgl3EQf0PnV/content/2301.00716v1.pdf filter=lfs diff=lfs merge=lfs -text
+aNE1T4oBgHgl3EQfcwSy/content/2301.03188v1.pdf filter=lfs diff=lfs merge=lfs -text
+89E1T4oBgHgl3EQfUAPq/content/2301.03086v1.pdf filter=lfs diff=lfs merge=lfs -text
+99A0T4oBgHgl3EQfO__U/content/2301.02170v1.pdf filter=lfs diff=lfs merge=lfs -text
+aNE1T4oBgHgl3EQfcwSy/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+o9AzT4oBgHgl3EQfAfpB/content/2301.00926v1.pdf filter=lfs diff=lfs merge=lfs -text
+otE1T4oBgHgl3EQf1wXJ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+cdAyT4oBgHgl3EQf-foD/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+o9AzT4oBgHgl3EQfAfpB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+jdE4T4oBgHgl3EQfsg0n/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+QdA0T4oBgHgl3EQfDf80/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+b9E0T4oBgHgl3EQfWQBc/content/2301.02275v1.pdf filter=lfs diff=lfs merge=lfs -text
+FdFJT4oBgHgl3EQfDCyN/content/2301.11432v1.pdf filter=lfs diff=lfs merge=lfs -text
+BNAzT4oBgHgl3EQf__-y/content/2301.01957v1.pdf filter=lfs diff=lfs merge=lfs -text
+89E1T4oBgHgl3EQfUAPq/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+tNAyT4oBgHgl3EQf0PnV/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+bNAyT4oBgHgl3EQfwPk2/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+bNAyT4oBgHgl3EQfwPk2/content/2301.00644v1.pdf filter=lfs diff=lfs merge=lfs -text
+l9FPT4oBgHgl3EQf3jX-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+KdE2T4oBgHgl3EQfVAc5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+XtE1T4oBgHgl3EQfvwXS/content/2301.03404v1.pdf filter=lfs diff=lfs merge=lfs -text
+6tE2T4oBgHgl3EQf7Qi3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+utE1T4oBgHgl3EQfQwOJ/content/2301.03044v1.pdf filter=lfs diff=lfs merge=lfs -text
+cNAzT4oBgHgl3EQfLfum/content/2301.01116v1.pdf filter=lfs diff=lfs merge=lfs -text
+FdFJT4oBgHgl3EQfDCyN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+utE1T4oBgHgl3EQfQwOJ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+KdE2T4oBgHgl3EQfVAc5/content/2301.03818v1.pdf filter=lfs diff=lfs merge=lfs -text
+XtE1T4oBgHgl3EQfvwXS/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+h9AyT4oBgHgl3EQf-vq9/content/2301.00898v1.pdf filter=lfs diff=lfs merge=lfs -text
+sdE1T4oBgHgl3EQfQAOj/content/2301.03035v1.pdf filter=lfs diff=lfs merge=lfs -text
+etFKT4oBgHgl3EQfAi12/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+1dA0T4oBgHgl3EQfMv9X/content/2301.02136v1.pdf filter=lfs diff=lfs merge=lfs -text
+b9E0T4oBgHgl3EQfWQBc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+89AzT4oBgHgl3EQf-_4J/content/2301.01940v1.pdf filter=lfs diff=lfs merge=lfs -text
+sdE1T4oBgHgl3EQfQAOj/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+Z9FAT4oBgHgl3EQf4R6K/content/2301.08725v1.pdf filter=lfs diff=lfs merge=lfs -text
+9dE3T4oBgHgl3EQfSQko/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+9dE3T4oBgHgl3EQfSQko/content/2301.04430v1.pdf filter=lfs diff=lfs merge=lfs -text
+PNAyT4oBgHgl3EQf7fod/content/2301.00838v1.pdf filter=lfs diff=lfs merge=lfs -text
+cNAzT4oBgHgl3EQfLfum/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+HNAzT4oBgHgl3EQfjP3H/content/2301.01514v1.pdf filter=lfs diff=lfs merge=lfs -text
+3tAzT4oBgHgl3EQfffwg/content/2301.01452v1.pdf filter=lfs diff=lfs merge=lfs -text
+EdAzT4oBgHgl3EQfwv7y/content/2301.01729v1.pdf filter=lfs diff=lfs merge=lfs -text
+etFKT4oBgHgl3EQfAi12/content/2301.11699v1.pdf filter=lfs diff=lfs merge=lfs -text
+PNAyT4oBgHgl3EQf7fod/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+tdA0T4oBgHgl3EQfLf9-/content/2301.02119v1.pdf filter=lfs diff=lfs merge=lfs -text
+3tAzT4oBgHgl3EQfffwg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+I9E2T4oBgHgl3EQfUQfh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+EtE2T4oBgHgl3EQf-Ak-/content/2301.04233v1.pdf filter=lfs diff=lfs merge=lfs -text
+EtE2T4oBgHgl3EQf-Ak-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+gNE1T4oBgHgl3EQfMQM3/content/2301.02986v1.pdf filter=lfs diff=lfs merge=lfs -text
+x9FQT4oBgHgl3EQfAzVW/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+WdFRT4oBgHgl3EQfMjc9/content/2301.13506v1.pdf filter=lfs diff=lfs merge=lfs -text
+EdAzT4oBgHgl3EQfwv7y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+HNAzT4oBgHgl3EQfjP3H/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+gNE1T4oBgHgl3EQfMQM3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+x9FQT4oBgHgl3EQfAzVW/content/2301.13224v1.pdf filter=lfs diff=lfs merge=lfs -text
+zdFRT4oBgHgl3EQfjzd-/content/2301.13592v1.pdf filter=lfs diff=lfs merge=lfs -text
+v9AyT4oBgHgl3EQfnPij/content/2301.00486v1.pdf filter=lfs diff=lfs merge=lfs -text
+rNFKT4oBgHgl3EQfIi19/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+AdFLT4oBgHgl3EQfEy_H/content/2301.11985v1.pdf filter=lfs diff=lfs merge=lfs -text
+I9AyT4oBgHgl3EQfTfcL/content/2301.00104v1.pdf filter=lfs diff=lfs merge=lfs -text
+rNFKT4oBgHgl3EQfIi19/content/2301.11734v1.pdf filter=lfs diff=lfs merge=lfs -text
+I9AyT4oBgHgl3EQfTfcL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+p9E2T4oBgHgl3EQfKgZi/content/2301.03703v1.pdf filter=lfs diff=lfs merge=lfs -text
+v9AyT4oBgHgl3EQfnPij/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+dtE1T4oBgHgl3EQfyAVc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+dtE1T4oBgHgl3EQfyAVc/content/2301.03428v1.pdf filter=lfs diff=lfs merge=lfs -text
+1dAzT4oBgHgl3EQf8v7L/content/2301.01910v1.pdf filter=lfs diff=lfs merge=lfs -text
+p9E2T4oBgHgl3EQfKgZi/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+zdFRT4oBgHgl3EQfjzd-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+LdE1T4oBgHgl3EQfGwON/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+dtE0T4oBgHgl3EQfoQFr/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+9NE4T4oBgHgl3EQfdgy1/content/2301.05092v1.pdf filter=lfs diff=lfs merge=lfs -text
+dtE0T4oBgHgl3EQfoQFr/content/2301.02523v1.pdf filter=lfs diff=lfs merge=lfs -text
+iNE1T4oBgHgl3EQfMwPW/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+KdE4T4oBgHgl3EQfiA17/content/2301.05130v1.pdf filter=lfs diff=lfs merge=lfs -text
+1dAzT4oBgHgl3EQf8v7L/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+h9AyT4oBgHgl3EQf-vq9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+tdA0T4oBgHgl3EQfLf9-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+iNE0T4oBgHgl3EQfYAA8/content/2301.02300v1.pdf filter=lfs diff=lfs merge=lfs -text
+CdFJT4oBgHgl3EQftC3b/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+b9AyT4oBgHgl3EQfXPer/content/2301.00180v1.pdf filter=lfs diff=lfs merge=lfs -text
+Z9FAT4oBgHgl3EQf4R6K/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+oNAzT4oBgHgl3EQfqf3Z/content/2301.01631v1.pdf filter=lfs diff=lfs merge=lfs -text
+e9E2T4oBgHgl3EQfGgbf/content/2301.03659v1.pdf filter=lfs diff=lfs merge=lfs -text
+79FLT4oBgHgl3EQfAy4h/content/2301.11967v1.pdf filter=lfs diff=lfs merge=lfs -text
+CdFJT4oBgHgl3EQftC3b/content/2301.11616v1.pdf filter=lfs diff=lfs merge=lfs -text
+e9E2T4oBgHgl3EQfGgbf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+b9AyT4oBgHgl3EQfXPer/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+oNE5T4oBgHgl3EQfkA8i/content/2301.05659v1.pdf filter=lfs diff=lfs merge=lfs -text
+f9AzT4oBgHgl3EQfavwP/content/2301.01372v1.pdf filter=lfs diff=lfs merge=lfs -text
+I9E2T4oBgHgl3EQfUQfh/content/2301.03812v1.pdf filter=lfs diff=lfs merge=lfs -text
+wtE0T4oBgHgl3EQf-QJD/content/2301.02811v1.pdf filter=lfs diff=lfs merge=lfs -text
+kNFPT4oBgHgl3EQf1zVk/content/2301.13184v1.pdf filter=lfs diff=lfs merge=lfs -text
+oNFPT4oBgHgl3EQfKjR1/content/2301.13019v1.pdf filter=lfs diff=lfs merge=lfs -text
+LtAyT4oBgHgl3EQfgPig/content/2301.00356v1.pdf filter=lfs diff=lfs merge=lfs -text
+99A0T4oBgHgl3EQfO__U/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+edFST4oBgHgl3EQfFzg4/content/2301.13719v1.pdf filter=lfs diff=lfs merge=lfs -text
+edFST4oBgHgl3EQfFzg4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
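Every line added to `.gitattributes` above follows the same template: a path followed by the four attributes that route the file through Git LFS. A minimal sketch of generating such rules (the helper name is hypothetical, not part of this repo):

```python
def lfs_attribute_lines(paths):
    """Emit one .gitattributes rule per path, marking it as Git LFS-tracked."""
    return [f"{p} filter=lfs diff=lfs merge=lfs -text" for p in paths]

# Example: reproduce the first two added rules from the diff above.
for line in lfs_attribute_lines([
    "BNAzT4oBgHgl3EQf__-y/vector_store/index.faiss",
    "tNAyT4oBgHgl3EQf0PnV/content/2301.00716v1.pdf",
]):
    print(line)
```

In practice `git lfs track "*.faiss"` writes equivalent rules; the explicit per-file form shown in this diff is what bulk-upload tooling tends to produce.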
1dA0T4oBgHgl3EQfMv9X/content/2301.02136v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11b960abc5db177debf4f11e398aa702993a5e59f178302db36b709d8bfe43e4
+size 5144707
1dAzT4oBgHgl3EQf8v7L/content/2301.01910v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9802146da3b4a8c2129b1a8e6ccdfe901c5422478b4f1c624e0782ab8c0983cc
+size 221890
1dAzT4oBgHgl3EQf8v7L/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7affa7a2b7e1faeeb277d51131251afe74e0dac2e37d9db01e73527e63e87a2d
+size 2752557
1dAzT4oBgHgl3EQf8v7L/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17334ceee2d4432c9fe3dac0945b2238ed659b74223d81e3e5527c5e0c1d203a
+size 104048
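Each ADDED file above is not the binary itself but a three-line Git LFS pointer (`version`, `oid`, `size`). Parsing one is a few lines; this sketch assumes a well-formed pointer with space-separated key/value pairs:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:11b960abc5db177debf4f11e398aa702993a5e59f178302db36b709d8bfe43e4\n"
    "size 5144707\n"
)
info = parse_lfs_pointer(pointer)
print(info["size"])  # prints: 5144707
```

The `oid` is the SHA-256 of the real file content, which is what the LFS server uses to locate the blob; `size` is its byte length.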
2dE4T4oBgHgl3EQfagyV/content/tmp_files/2301.05065v1.pdf.txt
ADDED
@@ -0,0 +1,3254 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
Toward Building General Foundation Models for Language, Vision, and
|
2 |
+
Vision-Language Understanding Tasks
|
3 |
+
Xinsong Zhang 1 Yan Zeng 1 Jipeng Zhang 2 Hang Li 1
|
4 |
+
Abstract
|
5 |
+
Foundation models or pre-trained models have
|
6 |
+
substantially improved the performance of various
|
7 |
+
language, vision, and vision-language understand-
|
8 |
+
ing tasks. However, existing foundation models
|
9 |
+
can only perform the best in one type of tasks,
|
10 |
+
namely language, vision, or vision-language. It is
|
11 |
+
still an open question whether it is possible to con-
|
12 |
+
struct a foundation model performing the best for
|
13 |
+
all the understanding tasks, which we call a gen-
|
14 |
+
eral foundation model. In this paper, we propose
|
15 |
+
a new general foundation model, X-FM (the X-
|
16 |
+
Foundation Model). X-FM has one language en-
|
17 |
+
coder, one vision encoder, and one fusion encoder,
|
18 |
+
as well as a new training method. The training
|
19 |
+
method includes two new techniques for learning
|
20 |
+
X-FM from text, image, and image-text pair data.
|
21 |
+
One is to stop gradients from the vision-language
|
22 |
+
training when learning the language encoder. The
|
23 |
+
other is to leverage the vision-language training
|
24 |
+
to guide the learning of the vision encoder. Exten-
|
25 |
+
sive experiments on benchmark datasets show that
|
26 |
+
X-FM can significantly outperform existing gen-
|
27 |
+
eral foundation models and perform better than or
|
28 |
+
comparable to existing foundation models specif-
|
29 |
+
ically for language, vision, or vision-language
|
30 |
+
understanding.
|
31 |
+
1. Introduction
|
32 |
+
With the enormous power of foundation models, also known
|
33 |
+
as pre-trained models, remarkable performance gains have
|
34 |
+
recently been achieved in a variety of understanding tasks in
|
35 |
+
natural language processing (NLP), computer vision (CV),
|
36 |
+
and other fields (Devlin et al., 2019; Liu et al., 2019; Lewis
|
37 |
+
et al., 2020; Raffel et al., 2020; Brown et al., 2020; Doso-
|
38 |
+
vitskiy et al., 2021; He et al., 2022; Bao et al., 2021; Lu
|
39 |
+
1ByteDance AI Lab 2The Hong Kong University of Science
|
40 |
+
and Technology. Correspondence to: Xinsong Zhang <zhangxin-
|
41 | |
42 |
+
Copyright 2023 by the author(s). The code and pre-trained models
|
43 |
+
will be released upon publication.
|
44 |
+
et al., 2019; Tan & Bansal, 2019a; Chen et al., 2020; Li
|
45 |
+
et al., 2020; 2021a; Zeng et al., 2021; 2022) . Foundation
|
46 |
+
models are usually equipped with Transformer (Vaswani
|
47 |
+
et al., 2017) as the backbone, pre-trained with a tremendous
|
48 |
+
amount of unlabeled data, and then fine-tuned with small
|
49 |
+
amounts of labeled data in downstream tasks. The strong
|
50 |
+
representation ability of the model, the massive amount of
|
51 |
+
data, and the effective means of training make the founda-
|
52 |
+
tion models powerful for successfully solving the tasks of
|
53 |
+
vision, language, and vision-language (Li et al., 2021b;c;
|
54 |
+
Singh et al., 2021; Wang et al., 2021b; 2022b; Diao et al.,
|
55 |
+
2022; Wang et al., 2022a).
The state-of-the-art foundation models usually work best for only one type of task: language, vision, or vision-language. For example, RoBERTa (Liu et al., 2019), BEiTv2 (Peng et al., 2022), and X-VLM (Zeng et al., 2021; 2022) are language, vision, and vision-language foundation models, respectively, and can achieve state-of-the-art performance for their specific type of task. It is still very challenging, however, to build a general foundation model that can perform the best in all types of tasks. Existing models, such as FLAVA (Singh et al., 2021), OFA (Wang et al., 2022b), DaVinci (Diao et al., 2022) and Uni-Perceiver-MoE (Zhu et al., 2022), are trying to achieve the goal. Their performances are still not satisfactory, however, when compared with the best performing foundation models for the individual types of tasks, as shown in Table 1. Previous work (Bingel & Søgaard, 2017; Wang et al., 2020) also shows that it is difficult to train a general foundation model in a multi-task learning setting that can effectively learn and utilize representations for all types of tasks. The reason is that language, vision, and vision-language are very different in nature, and a simple way of jointly training a model from language, vision, and vision-language data can easily create a suboptimal solution.
To address the challenge, we propose a new general foundation model, X-FM (X-Foundation Model). X-FM consists of three modular encoders for language (text) encoding, vision (image) encoding, and fusion encoding, as shown in Fig 1. The language encoder, the vision encoder, and the entire model can be used in downstream tasks of language, vision, and vision-language understanding, respectively. All three encoders are stacked Transformer layers. The language encoder and the vision encoder follow the implementations of BERT (Devlin et al., 2019) and ViT (Dosovitskiy et al., 2021), respectively. The fusion encoder has the same architecture as BERT except that there is a cross-attention sub-layer after the self-attention sub-layer in each Transformer layer.

arXiv:2301.05065v1 [cs.CV] 12 Jan 2023

Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks

Methods | MNLI | RTE | ImageNet (FT/LE) | Zero-Shot TR | Zero-Shot IR | Fine-Tune TR | Fine-Tune IR | VQA
Foundation models specifically for language, vision, or vision-language understanding
RoBERTa (Liu et al., 2019) | 87.6 | 78.7 | – | – | – | – | – | –
BEiTv2 (Peng et al., 2022) | – | – | 85.5/80.1 | – | – | – | – | –
X-VLM (Zeng et al., 2021) | – | – | – | 70.8/92.1/96.5 | 55.6/82.7/90.0 | 80.4/95.5/98.2 | 63.1/85.7/91.6 | 78.1
X2-VLM (Zeng et al., 2022) | – | – | – | – | – | 80.5/95.5/97.8 | 62.7/84.7/90.7 | 79.2
General foundation models
UNIMO-2 (Li et al., 2021c) | 87.5 | – | 80.8/– | – | – | – | – | 76.3
SimVLM (Wang et al., 2021c) | 83.4 | 63.9 | –/80.6 | – | – | – | – | 77.9
FLAVA (Singh et al., 2021) | 80.3 | 57.8 | –/75.5 | 42.7/76.8/– | 38.4/67.5/– | 61.5/82.1/89.6 | 50.1/74.4/83.2 | 72.8
OFA (Wang et al., 2022b) | 84.3 | 70.8 | 82.2/– | – | – | – | – | 78.0
DaVinci (Diao et al., 2022) | 83.1 | 64.2 | 83.9/78.8 | – | – | – | – | 76.3
OmniVL (Wang et al., 2022a) | – | – | – | – | – | 76.8/93.6/97.3 | 58.5/82.6/89.5 | 78.3
Uni-Perceiver-MoE (Zhu et al., 2022) | 81.5 | 75.8 | 84.5/– | 64.6/–/– | 51.6/–/– | 70.5/–/– | 54.1/–/– | –
X-FMbase | 87.7 | 83.2 | 85.3/81.0 | 73.8/93.9/97.2 | 59.4/83.6/90.0 | 81.8/96.0/98.3 | 64.7/86.1/91.6 | 79.1

Table 1: Performance comparisons between foundation models. All results are from base-size models. MSCOCO is a cross-modal retrieval task, and IR and TR are image-retrieval and text-retrieval, respectively. MNLI results are average accuracies of MNLI-m and MNLI-mm. Accuracy is reported for RTE. For ImageNet1k classification, we report fine-tuning (FT) performance and linear evaluation (LE) performance, respectively. We report R@1/R@5/R@10 for all retrieval tasks in both zero-shot and fine-tune settings. We report the VQA test-dev result. Bold denotes the best number across general foundation models. Underline denotes the best across all models.
In learning of X-FM, the language encoder, vision encoder, and fusion encoder are jointly trained with text data, image data, and image-text pair data as input. Given the text data, we train the language encoder by masked language modeling (MLM). Given the image data, we train the vision encoder by masked image modeling (MIM). Given the image-text pair data, we train the fusion encoder by image-text matching (ITM), image-conditioned masked language modeling (IMLM), and bounding box prediction (BBP); train the vision encoder and the language encoder by image-text contrastive learning (ITC); and train the vision encoder by MIM (see Fig 1).
The essential idea of our learning method is that language is more abstract than vision, and there is an asymmetric relationship between language and vision. Therefore, we separate the learning of the three encoders. The language encoder is trained mainly from text data and is isolated from the training of the fusion encoder. The vision encoder is simultaneously trained from image data and image-text pair data, guided by the vision-language training. The fusion encoder is trained from image-text pair data.
Our learning method includes two new techniques. One technique is to stop gradients from the vision-language training when learning the language encoder. The gradient flow is stopped from the fusion encoder to the language encoder in training, while the activation flow from the language encoder to the fusion encoder is as usual. As a result, the language encoder is not affected by the training of the fusion encoder with image-text pair data. Moreover, the training of the fusion encoder concentrates on learning the alignments between language and vision features.
The other technique is to leverage the vision-language training to guide the learning of the vision encoder with masked image modeling (MIM). In MIM, the masked image is compared with the original image via the differences between the predicted representations and target representations at the masked and [CLS] positions. The vision encoder creates both the predicted and target representations; there is gradient flow through the predicted representations but no gradient flow through the target representations. The vision encoder can create the target representations because it is also trained in the vision-language training.
We conduct experiments on twenty-two tasks of language, vision, and vision-language understanding. X-FM can outperform other general foundation models by a large margin and can even achieve performance better than or comparable to SOTA foundation models specifically designed for language, vision, or vision-language understanding tasks, as shown in Table 1.
2. Related Work

Following the success of language model pre-training, vision pre-training and vision-language pre-training with Transformer as the backbone (Vaswani et al., 2017) have also made significant progress recently, pushing the state-of-the-art of various understanding tasks of language, vision, and vision-language.
In language understanding, BERT (Devlin et al., 2019) is the first model adopting masked language modeling (MLM) for pre-training, which achieves remarkable performance on a wide range of tasks. Several other models have since been developed to improve the training robustness (Liu et al., 2019), sample efficiency (Sun et al., 2019; Joshi et al., 2020; Clark et al., 2020), and prediction accuracy (Lan et al., 2020; Zhang et al., 2020; He et al., 2021) of BERT.
In vision understanding, ViT (Dosovitskiy et al., 2021; Touvron et al., 2021) is proposed, utilizing Transformer as the backbone. Inspired by MLM, subsequent work proposes using masked image modeling (MIM) with the objective of recovering masked images. The learning targets vary from pixels (He et al., 2022) to image tokens (Bao et al., 2021; Peng et al., 2022).
In vision-language understanding, there are generally two approaches. One is "dual encoders," in which image and text are encoded separately, followed by a shallow interaction layer. The other is "fusion encoder(s)," in which attention or self-attention is used to fuse information from the two modalities after encoding. The former approach includes CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) and performs well in vision tasks and cross-modal retrieval tasks. However, it cannot perform so well in multi-modal fusion tasks such as visual question answering (VQA (Goyal et al., 2017)) and visual reasoning (NLVR2 (Suhr et al., 2019b)). The latter approach varies depending on the way of using image features. Early work feeds pre-extracted object features along with texts into Transformer models and trains the models to perform multi-modal modeling and multi-modal alignment with suitable objectives (Lu et al., 2019; Tan & Bansal, 2019b; Li et al., 2020; Chen et al., 2020; Cho et al., 2021; Zhang et al., 2021). Later work uses patch embeddings directly with new architectures such as vision Transformer (Li et al., 2021a; 2022) or multiway Transformer (Wang et al., 2021a; Bao et al., 2022) and uses new objectives such as bounding box prediction (Zeng et al., 2021; 2022).
Recently, the fact that Transformer can model multi-modal data within a single architecture has inspired research to develop general foundation models that can solve language, vision, and vision-language tasks at the same time. UNIMO (Li et al., 2021b;c) jointly learns vision representations, language representations, and vision-language alignments in a shared space from image and text data. FLAVA (Singh et al., 2021), a general foundation model, performs pre-training with masked uni-modal and multi-modal modeling objectives. OFA (Wang et al., 2022c) formulates vision-language tasks as sequence-to-sequence (seq2seq) problems and pre-trains a seq2seq model in multi-task learning. SimVLM (Wang et al., 2021c) pre-trains a seq2seq model with a single objective of language generation (prefix language modeling). DaVinci (Diao et al., 2022) combines prefix language modeling and prefix image modeling to learn a general foundation model for a wide range of tasks. Uni-Perceiver (Zhu et al., 2021; 2022) builds a unified perception architecture that processes various modalities and tasks with a single Transformer network and shared parameters.
Previous studies on general foundation models have shown that different capabilities can be established with only one model. Still, few studies demonstrate that the best performance can be achieved in all tasks with one model. In this paper, we propose a new general foundation model and show that it can perform the best for all the understanding tasks of language, vision, and vision-language. We compare our model extensively with recent general foundation models on multiple dimensions, as shown in Appendix A.
Several super-large foundation models (over 1B parameters) have been proposed recently, most of which are trained on super-large in-house datasets (over 400M image-text pairs). The authors do not report results at the base (about 280M parameters) and large (about 800M parameters) scales on public datasets, which we consider in this paper. CoCa (Yu et al., 2022) pre-trains an image-text sequence-to-sequence model with contrastive loss and captioning loss. BEiT-3 (Wang et al., 2022d) uses a multi-way Transformer and a unified objective of masked "language" modeling for learning from image (Imglish1), text, and image-text pair data. Florence (Yuan et al., 2021) first scales the web-scale image-text pairs to 900M representations and then adapts to various computer vision tasks. Flamingo (Alayrac et al., 2022) makes use of a large language model in vision-language pre-training to solve the "in-context learning" problem for vision-language tasks. PaLI (Chen et al., 2022) jointly scales up the vision encoder and language encoder to cover a variety of language, vision, vision-language, and multilingual tasks.
3. Method

3.1. Model Architecture and Training Process

We propose a new general foundation model, X-FM, having a language encoder, a vision encoder, and a fusion encoder, as shown in Fig 1. The language encoder is a stack of Transformer layers like that of BERT (Devlin et al., 2019), while the vision encoder is a stack of Transformer layers like that of ViT (Dosovitskiy et al., 2021). The language encoder uses post-layer-norm, while the vision encoder uses pre-layer-norm. The fusion encoder is similar to that of ALBEF (Li et al., 2021a) and X-VLM (Zeng et al., 2021), in which each layer has a cross-attention sub-layer after a self-attention sub-layer. In the cross-attention sub-layers, the queries are from language and the keys & values are from vision.

1 They view the image as a foreign language.

Figure 1: The architecture and pre-training process of X-FM, a Transformer-based general foundation model. Given a text, we learn the language encoder by MLM. Given an image, we learn the vision encoder by MIM. Given an image-text pair, we learn the fusion encoder by BBP, ITM, IMLM and ITC, and further learn the vision encoder by MIM. The gradients of BBP, ITM, and IMLM are stopped from the fusion encoder to the language encoder. The vision encoder is trained by MIM with both the image-text pair data and the image data. M, N, and L denote the numbers of encoder layers.
We propose a new method for learning X-FM, also shown in Fig 1. Text, image, and image-text pair data are used as input to train X-FM. The language encoder is trained by masked language modeling (MLM) and image-text contrastive learning (ITC). The vision encoder is trained by masked image modeling (MIM) and ITC. The fusion encoder is trained by image-text matching (ITM), image-conditioned masked language modeling (IMLM), and bounding box prediction (BBP). There are two new techniques developed for the training.

Stop Gradient. We stop gradients from the vision-language training when learning the language encoder. Specifically, when the fusion encoder is trained with image-text pair data by ITM, IMLM, and BBP, there are forward flows (activations) from the language encoder to the fusion encoder, but there are no backward flows (gradients) from the fusion encoder to the language encoder. In this way, the language encoder is only trained with text data by MLM and with image-text pair data by ITC. The former helps the language encoder to learn text representations, and the latter helps the language encoder and the vision encoder to make alignments between their respective text representations and image representations. Meanwhile, the training of the fusion encoder is performed separately with a focus on learning from image-text pair data.
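The effect of the stop gradient can be illustrated with a toy differentiable model (the scalar "encoders" and function names below are hypothetical simplifications of ours, not the actual X-FM implementation): detaching the text features before they enter the fusion encoder zeroes the fusion-loss gradient with respect to the language encoder's parameters, while the gradient to the fusion encoder itself is unchanged.

```python
# Toy scalar "encoders": h_text = w_l * t (language encoder), and the
# fusion output f = w_f * (h_text + h_img), with fusion loss L = 0.5 * f**2.
# Analytic gradients: dL/dw_f = f * (h_text + h_img) always flows, while
# dL/dw_l = f * w_f * t is blocked when the stop-gradient is applied.
def fusion_grads(w_l, w_f, t, h_img, stop_grad=True):
    h_text = w_l * t                           # forward activations still flow
    f = w_f * (h_text + h_img)                 # fusion encoder forward
    g_wf = f * (h_text + h_img)                # gradient to the fusion encoder
    g_wl = 0.0 if stop_grad else f * w_f * t   # gradient to the language encoder
    return g_wl, g_wf
```

With `stop_grad=True`, the language encoder still feeds activations to the fusion encoder but receives no gradient from the fusion losses; in an autograd framework this corresponds to calling `detach()` on the text features before fusion.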
Masked Image Modeling. The training of the vision encoder by MIM is carried out as follows. The image is first masked and then predicted by the vision encoder. The differences between predicted representations and 'target' representations at the masked positions and the [CLS] position are then measured with an MSE (mean squared error) loss. The target representations are obtained from the same image (without masking) by the vision encoder. There are no gradients from the target representations in the learning of the vision encoder. The vision encoder can create target representations because it is also trained with image-text pair data. In this way, the vision encoder is trained by both the cross-modal objectives (ITC, ITM, BBP, IMLM) with image-text pair data and the uni-modal objective (MIM) with image data. The representations obtained from the vision-language training are highly semantic, which is necessary for MIM as demonstrated in previous work (Bao et al., 2021; Peng et al., 2022; Wei et al., 2022a;b).
There are three advantages to exploiting the new MIM technique. First, it becomes possible to leverage image data, which is relatively easy to obtain, for learning the vision encoder. Second, it is convenient to conduct MIM with the signals from the vision-language training. Note that most previous work on MIM makes use of an external image tokenizer such as VQ-VAE (Bao et al., 2021; Singh et al., 2021), CLIP (Wei et al., 2022b), or VQ-KL (Peng et al., 2022). Third, the learning of the vision encoder and that of the fusion encoder are mutually enhanced. Once the vision encoder is trained, it is also utilized to train the fusion encoder.
3.2. Pre-training Objectives

We explain the six objectives used in learning of X-FM. Here, $\mathcal{T}$ represents the distribution of text data, $\mathcal{I}$ represents the distribution of image data, and $\mathcal{D}$ represents the distribution of image-text pair data.
Masked Language Modeling (MLM) We perform MLM on text data to learn the language encoder of X-FM. Specifically, we recover the masked tokens in a text by minimizing the cross-entropy loss below.

$$\mathcal{L}_{\mathrm{mlm}} = \mathbb{E}_{T \sim \mathcal{T}}\, H(\vec{y}(\bar{T}), \hat{\vec{p}}(\bar{T})) \quad (1)$$

where $T$ denotes a text, $\bar{T}$ denotes the masked text of $T$, $\hat{\vec{p}}$ denotes the predicted probability vectors of the masked tokens of $\bar{T}$, $\vec{y}$ denotes the one-hot vectors representing the original tokens of $\bar{T}$, and $H$ denotes cross-entropy.
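In implementation, the MLM loss is token-level cross-entropy evaluated only at the masked positions. A minimal NumPy sketch (function and argument names are ours, not from the paper):

```python
import numpy as np

def mlm_loss(logits, labels, mask):
    """Cross-entropy H(y, p_hat) averaged over masked positions.

    logits: (seq_len, vocab_size) scores from the language encoder head
    labels: (seq_len,) original token ids of the unmasked text
    mask:   (seq_len,) bool, True where a token was masked out
    """
    # Numerically stable log-softmax over the vocabulary.
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_p = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    # Negative log-likelihood of the original tokens, masked positions only.
    nll = -log_p[np.arange(len(labels)), labels]
    return nll[mask].mean()
```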
Image-Text Contrastive Learning (ITC). We use an image-text contrastive loss as in CLIP (Radford et al., 2021) to learn the alignments between images and texts in ITC. Given a batch of images and texts, we calculate the cosine similarities between all image-text pairs. For each image, there is one matched text and the rest are unmatched. For each text, there is one matched image and the rest are unmatched. The contrastive loss is defined as follows.

$$\mathcal{L}_{\mathrm{itc}} = \frac{1}{2}\,\mathbb{E}_{(I,T) \sim \mathcal{D}}\big[ H(\vec{y}_{i2t}(I), \vec{p}_{i2t}(I)) + H(\vec{y}_{t2i}(T), \vec{p}_{t2i}(T)) \big] \quad (2)$$

where $(I, T)$ denotes an image-text pair, $\vec{p}_{i2t}(I)$ denotes the in-batch image-to-text similarities, $\vec{p}_{t2i}(T)$ denotes the in-batch text-to-image similarities, $\vec{y}_{i2t}(I)$ denotes the one-hot vectors representing the image-to-text matching relations, $\vec{y}_{t2i}(T)$ denotes the one-hot vectors representing the text-to-image matching relations, and $H$ denotes cross-entropy.
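Eq. (2) is the standard symmetric in-batch contrastive loss used by CLIP: matched pairs lie on the diagonal of the similarity matrix. A NumPy sketch (the temperature value is an assumption of ours, as are the names):

```python
import numpy as np

def itc_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss; row i of img_emb matches row i of txt_emb."""
    img = img_emb / np.linalg.norm(img_emb, axis=-1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=-1, keepdims=True)
    sim = img @ txt.T / temperature        # (B, B) scaled cosine similarities

    def ce_diag(logits):                   # cross-entropy with diagonal targets
        logits = logits - logits.max(axis=-1, keepdims=True)
        log_p = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
        return -np.diag(log_p).mean()

    # Average of the image-to-text and text-to-image directions, as in Eq. (2).
    return 0.5 * (ce_diag(sim) + ce_diag(sim.T))
```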
Image-Text Matching (ITM). We also learn the alignments between images and texts in ITM, using a loss indicating whether an image-text pair is matched. For each image in a batch there is a matched (positive) text, and we sample an unmatched (negative) text in the batch. For each text there is a matched (positive) image, and we sample an unmatched image in the batch. The loss is defined as follows.

$$\mathcal{L}_{\mathrm{itm}} = \mathbb{E}_{(I,T) \sim \mathcal{D}}\big[ H(p_{\mathrm{match}}(I, T)) + H(p_{\mathrm{match}}(\tilde{I}, T)) + H(p_{\mathrm{match}}(I, \tilde{T})) \big] \quad (3)$$

where $(I, T)$ denotes a positive image-text pair, $(\tilde{I}, T)$ and $(I, \tilde{T})$ denote negative image-text pairs, $p_{\mathrm{match}}(I, T)$ denotes a predicted matching probability of $(I, T)$, and $H$ denotes logistic loss.
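Eq. (3) is a sum of three logistic (binary cross-entropy) terms: the positive pair should score 1 and the two in-batch negatives should score 0. A sketch (names are ours):

```python
import numpy as np

def itm_loss(p_pos, p_neg_img, p_neg_txt, eps=1e-12):
    """Logistic loss over one matched pair (I, T) and two mismatched
    pairs (I_tilde, T) and (I, T_tilde), given predicted match probabilities."""
    return (-np.log(p_pos + eps)               # positive pair should match
            - np.log(1.0 - p_neg_img + eps)    # mismatched image should not
            - np.log(1.0 - p_neg_txt + eps))   # mismatched text should not
```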
Image-conditioned Masked Language Modeling (IMLM) We conduct IMLM on image-text pair data to learn the fusion encoder. Specifically, we recover the masked tokens of the text given in an image-text pair by minimizing the cross-entropy loss below.

$$\mathcal{L}_{\mathrm{imlm}} = \mathbb{E}_{(I,T) \sim \mathcal{D}}\, H(\vec{y}(\bar{T}), \hat{\vec{p}}(I, \bar{T})) \quad (4)$$

where $(I, T)$ denotes an image-text pair, $\bar{T}$ denotes the masked text of $T$, $\hat{\vec{p}}(I, \bar{T})$ denotes the predicted probability vectors of the masked tokens of $\bar{T}$ based on $I$, $\vec{y}$ denotes the one-hot vectors representing the original tokens of $\bar{T}$, and $H$ denotes cross-entropy.
Bounding Box Prediction (BBP) We adopt the BBP of X-VLM (Zeng et al., 2021; 2022), which locates the visual concept in the image by a bounding box given the text. With BBP, we learn the alignments between images and texts at multiple granularities. In BBP, two losses are simultaneously minimized to measure the differences between the predicted bounding box and the ground-truth bounding box. One is the generalized intersection over union, GIoU (Rezatofighi et al., 2019), and the other is the $\ell_1$ distance.

$$\mathcal{L}_{\mathrm{bbp}} = \mathbb{E}_{(I,T) \sim \mathcal{D}}\big\{ \mathrm{GIoU}(\vec{b}, \hat{\vec{b}}) + \|\vec{b} - \hat{\vec{b}}\|_1 \big\} \quad (5)$$

where $\vec{b} = (c_x, c_y, w, h)$ denotes the ground-truth bounding box and $\hat{\vec{b}} = (\hat{c}_x, \hat{c}_y, \hat{w}, \hat{h})$ denotes the predicted bounding box. A bounding box is represented by its center coordinates, width, and height.
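The two BBP terms can be computed as follows. GIoU needs corner coordinates, so the (cx, cy, w, h) boxes are converted first; note that as a training objective the GIoU term is conventionally taken as 1 − GIoU (so that it decreases as the boxes overlap more), following Rezatofighi et al. (2019) — Eq. (5) writes it without the "1 −". A NumPy sketch (names are ours):

```python
import numpy as np

def to_corners(b):
    """(cx, cy, w, h) -> (x_min, y_min, x_max, y_max)."""
    cx, cy, w, h = b
    return np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])

def giou(b1, b2):
    a1, a2 = to_corners(b1), to_corners(b2)
    # Intersection rectangle (clipped at zero when the boxes are disjoint).
    lt, rb = np.maximum(a1[:2], a2[:2]), np.minimum(a1[2:], a2[2:])
    inter = np.prod(np.clip(rb - lt, 0.0, None))
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    # Smallest enclosing (hull) box of the two.
    hull = np.prod(np.maximum(a1[2:], a2[2:]) - np.minimum(a1[:2], a2[:2]))
    return inter / union - (hull - union) / hull

def bbp_loss(b, b_hat):
    """GIoU term (taken as 1 - GIoU) plus the l1 distance, cf. Eq. (5)."""
    l1 = np.abs(np.asarray(b) - np.asarray(b_hat)).sum()
    return (1.0 - giou(b, b_hat)) + l1
```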
Masked Image Modeling (MIM) We perform MIM on image data and image-text pair data to learn the vision encoder. Specifically, we recover the masked image patches in an image by minimizing the loss below.

$$\mathcal{L}_{\mathrm{mim}} = \mathbb{E}_{(I,T) \sim \mathcal{D}}\,\|\vec{v}(\bar{I}) - \hat{\vec{v}}(\bar{I})\|_2 + \mathbb{E}_{I \sim \mathcal{I}}\,\|\vec{v}(\bar{I}) - \hat{\vec{v}}(\bar{I})\|_2 \quad (6)$$

where $(I, T)$ and $I$ denote an image-text pair and a single image, respectively, $\bar{I}$ denotes the masked image $I$, $\hat{\vec{v}}(\bar{I})$ denotes the predicted representations at the masked positions and [CLS] of $\bar{I}$, and $\vec{v}(\bar{I})$ denotes the target representations at the masked positions and [CLS] of $\bar{I}$. $\|\cdot\|_2$ is the MSE loss. We employ block masking following previous work (Bao et al., 2021; Peng et al., 2022). Note that $(I, T)$ and $I$ are independently sampled from $\mathcal{D}$ and $\mathcal{I}$, and the sample sizes are not necessarily equal.
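The MIM loss is then a plain MSE over the predicted and target representations at the masked positions and [CLS]. A NumPy sketch (in a real framework the target would be produced under `no_grad`/`detach`; names are ours):

```python
import numpy as np

def mim_loss(pred, target, mask):
    """MSE between predicted and target patch representations.

    pred, target: (num_patches + 1, dim) with row 0 the [CLS] token
    mask:         bool over rows; True at masked patches and at [CLS]
    """
    target = np.asarray(target)        # treated as constants: no gradient flows back
    diff = pred[mask] - target[mask]
    return (diff ** 2).mean()
```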
Finally, the pre-training objective of X-FM is defined as the sum of the losses described above.

$$\mathcal{L} = \mathcal{L}_{\mathrm{mlm}} + \mathcal{L}_{\mathrm{itc}} + \mathcal{L}_{\mathrm{itm}} + \mathcal{L}_{\mathrm{imlm}} + \mathcal{L}_{\mathrm{bbp}} + \mathcal{L}_{\mathrm{mim}} \quad (7)$$
Base-Size Models (columns 1–11): 1 RoBERTa, 2 BEiTv2, 3 X2-VLM, 4 UNIMO-2, 5 FLAVA, 6 SimVLM, 7 OFA, 8 DaVinci, 9 Uni-Per., 10 OmniVL, 11 X-FM
Large-Size Models (columns 12–18): 12 RoBERTa, 13 BEiTv2, 14 X2-VLM, 15 SimVLM, 16 OFA, 17 Uni-Per., 18 X-FM

Task | Eval. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18
MNLI | FT | 87.6 | – | – | 87.5 | 80.3 | 83.4 | 84.3 | 82.3 | 81.5 | – | 87.7 | 90.2 | – | – | – | 84.3 | 85.7 | 90.4
CoLA | FT | 63.6 | – | – | 62.1 | 50.7 | 46.7 | 52.3 | 52.1 | 52.2 | – | 65.3 | 68.0 | – | – | – | 52.3 | 57.4 | 69.9
MRPC | FT | 90.2 | – | – | – | 84.2 | 79.8 | 88.7 | 83.1 | – | – | 91.7 | 90.9 | – | – | – | 88.7 | – | 92.4
QQP | FT | 91.9 | – | – | – | 88.7 | 90.4 | 91.3 | 88.2 | – | – | 91.8 | 92.2 | – | – | – | 91.3 | – | 92.2
SST-2 | FT | 94.8 | – | – | 94.7 | 90.9 | 90.9 | 92.7 | 90.5 | 90.9 | – | 95.0 | 96.4 | – | – | – | 92.7 | 93.4 | 96.7
QNLI | FT | 92.8 | – | – | – | 87.3 | 88.6 | 91.1 | 87.2 | 88.2 | – | 92.9 | 94.7 | – | – | – | 91.1 | 91.9 | 94.8
RTE | FT | 78.7 | – | – | – | 57.8 | 63.9 | 70.8 | 60.7 | 75.8 | – | 83.8 | 86.6 | – | – | – | 70.8 | 78.4 | 87.4
STS-B | FT | 91.2 | – | – | 91.2 | 85.7 | 87.2 | – | 86.3 | – | – | 90.8 | 92.4 | – | – | – | – | – | 92.1
Language Avg. | | 86.4 | – | – | – | 78.2 | 78.9 | – | 78.8 | – | – | 87.4 | 88.9 | – | – | – | – | – | 89.5
ImageNet | FT | – | 85.5 | – | 80.8 | – | – | 82.2 | 83.9 | 84.5 | – | 85.3 | – | 87.3 | – | – | – | 86.4 | 86.3
ImageNet | LE | – | 80.1 | – | – | 75.5 | 80.6 | 71.4† | 75.9 | – | – | 81.0 | – | 66.8† | – | 82.3 | 74.7† | – | 81.0
Food101 | LE | – | 88.2† | – | – | 88.5 | – | 75.2† | 89.3 | – | 87.4 | 88.7 | – | 52.2† | – | – | 81.6† | – | 88.9
CIFAR10 | LE | – | 95.3† | – | – | 92.9 | – | 86.1† | 93.0 | – | 96.2 | 97.2 | – | 63.5† | – | – | 91.9† | – | 97.2
CIFAR100 | LE | – | 81.5† | – | – | 77.7 | – | 66.7† | 79.0 | – | 83.2 | 86.7 | – | 39.7† | – | – | 75.6† | – | 85.1
Pets | LE | – | 93.1† | – | – | 84.8 | – | 81.0† | 85.5 | – | 87.1 | 90.8 | – | 38.9† | – | – | 86.8† | – | 90.0
DTD | LE | – | 78.4† | – | – | 77.3 | – | 70.3† | 77.1 | – | 76.2 | 78.4 | – | 44.4† | – | – | 74.4† | – | 79.0
Flowers102 | LE | – | 95.7† | – | – | 96.4 | – | 86.3† | 96.1 | – | 89.8 | 97.1 | – | 66.6† | – | – | 92.6† | – | 95.8
Vision Avg. | | – | 88.7 | – | – | 86.3 | – | 79.2 | 86.7 | – | 86.7 | 89.8 | – | 50.9 | – | – | 83.8 | – | 89.3
VQAv2 | FT | – | – | 79.2 | 76.3 | 72.5 | 77.9 | 78.0 | 73.9 | – | 78.3 | 79.1 | – | – | 80.5 | 79.3 | 80.3 | – | 79.5
NLVR2 | FT | – | – | 86.1 | – | – | 81.8 | – | 77.9 | – | – | 86.7 | – | – | 87.6 | 84.8 | – | – | 87.8
Flickr30K TR R@1 | ZS | – | – | 85.1† | 88.5 | 67.7 | – | – | – | 82.1 | – | 90.1 | – | – | 86.8† | – | – | 83.6 | 89.7
Flickr30K IR R@1 | ZS | – | – | 77.3† | 72.7 | 65.2 | – | – | – | 72.4 | – | 79.1 | – | – | 80.5† | – | – | 75.9 | 79.1
Flickr30K TR R@1 | FT | – | – | 97.4 | 92.0 | – | – | – | – | 93.6 | 94.9 | 97.4 | – | – | 99.1 | – | – | 94.1 | 97.9
Flickr30K IR R@1 | FT | – | – | 90.0 | 80.1 | – | – | – | – | 79.8 | 83.4 | 88.6 | – | – | 91.1 | – | – | 83.7 | 89.4
COCO TR R@1 | ZS | – | – | 68.4† | – | 42.7 | – | – | – | 64.6 | – | 73.8 | – | – | 69.7† | – | – | 67.9 | 74.4
COCO IR R@1 | ZS | – | – | 55.2† | – | 38.4 | – | – | – | 51.6 | – | 59.4 | – | – | 58.3† | – | – | 55.3 | 59.4
COCO TR R@1 | FT | – | – | 80.5 | – | – | – | – | – | 70.5 | 76.8 | 81.8 | – | – | 82.3 | – | – | 74.7 | 82.1
COCO IR R@1 | FT | – | – | 62.7 | – | – | – | – | – | 52.6 | 58.5 | 64.7 | – | – | 65.2 | – | – | 57.1 | 65.4
Vision-Language Avg. | | – | –
+
78.2
|
1187 |
+
–
|
1188 |
+
–
|
1189 |
+
–
|
1190 |
+
–
|
1191 |
+
–
|
1192 |
+
–
|
1193 |
+
–
|
1194 |
+
80.1
|
1195 |
+
–
|
1196 |
+
–
|
1197 |
+
80.1
|
1198 |
+
–
|
1199 |
+
–
|
1200 |
+
–
|
1201 |
+
80.5
|
1202 |
+
Table 2: Experimental results on vision, language and vision-language tasks. MNLI results are the average of MNLI-m and MNLI-mm. MRPC results are the average of accuracies and F1 scores. Matthews correlation coefficient (MCC) is reported for CoLA, and Pearson correlation coefficient (PCC) is reported for STS-B. We report accuracies for all the vision and multi-modal tasks. FT is short for fine-tuning, LE for linear evaluation, ZS for zero-shot, TR for text retrieval, and IR for image retrieval. Results for RoBERTa are from its corresponding paper (Liu et al., 2019), and they use mid-training (Phang et al., 2018) on MNLI for RTE, MRPC, and STS-B, while other models (e.g., BERT, SimVLM, DaVinci, X-FM) do not use this trick. Language Avg. is the average score of all the language tasks, while Vision Avg. is the average score of the six linear evaluation tasks excluding ImageNet. Vision-Language Avg. is the average score of all vision-language tasks. † marks our reproduced results with the officially released models. Uni-Per. stands for Uni-Perceiver-MoE (Zhu et al., 2022).
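The LE (linear evaluation) protocol referenced in the caption trains only a linear classifier on top of frozen backbone features. A minimal NumPy sketch of such a probe follows; the toy features, hyper-parameters, and function name are ours for illustration, not the paper's actual setup:

```python
import numpy as np

def linear_probe(features, labels, num_classes, lr=0.5, steps=200):
    """Train only a linear classifier on frozen (fixed) features,
    as in the linear evaluation (LE) protocol."""
    n, d = features.shape
    W = np.zeros((d, num_classes))  # the only trainable parameters
    b = np.zeros(num_classes)
    onehot = np.eye(num_classes)[labels]
    losses = []
    for _ in range(steps):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        losses.append(-np.log(probs[np.arange(n), labels]).mean())
        grad = (probs - onehot) / n   # softmax cross-entropy gradient
        W -= lr * features.T @ grad   # the backbone receives no update
        b -= lr * grad.sum(axis=0)
    return W, b, losses

# Toy "frozen" features: two linearly separable clusters.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(-2, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
W, b, losses = linear_probe(feats, labels, num_classes=2)
acc = ((feats @ W + b).argmax(1) == labels).mean()
```

In contrast, the FT rows update the full backbone as well, which is why LE is the cheaper probe of representation quality.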
4. Experiments

4.1. Pre-training Datasets

We conduct our experiments on several widely used public datasets: two in-domain datasets, COCO (Lin et al., 2014) and Visual Genome (VG) (Krishna et al., 2017), and two out-of-domain datasets, SBU Captions (Ordonez et al., 2011) and Conceptual Captions (CC) (Sharma et al., 2018). Following X-VLM (Zeng et al., 2021; 2022), we also include annotations of objects and regions from RefCOCO (Yu et al., 2016), Objects365 (Shao et al., 2019) and OpenImages (Kuznetsova et al., 2018). Since we also assume access to uni-modal data, we include the RoBERTa corpus (Liu et al., 2019), the C4 dataset (Raffel et al., 2020) and ImageNet-21k (Ridnik et al., 2021). All pre-training datasets are listed in Table 3.
4.2. Implementation Details

Pre-training. Our model comes in base and large sizes, whose parameters are listed in Table 5. The vision encoder is initialized with BEiTv2 (Peng et al., 2022), the language encoder is initialized with RoBERTa (Liu et al., 2019), and the fusion encoder is trained from scratch. X-FM is pre-trained at an image resolution of 224 × 224 with a patch size of 16 × 16.
| Dataset | # Images | # Texts | # Objects | # Regions |
| COCO | 0.11M | 0.55M | 0.45M | – |
| VG | 0.10M | – | 2.0M | 3.7M |
| SBU | 0.86M | 0.86M | – | – |
| CC-3M | 2.9M | 2.9M | – | – |
| Objects365 | 0.58M | – | 2.0M | – |
| OpenImages | 1.7M | – | 4.2M | – |
| C4 | – | 800GB | – | – |
| RoBERTa Corpus | – | 160GB | – | – |
| ImageNet-21k | 14M | – | – | – |

Table 3: Statistics of the pre-training datasets.
We pre-train X-FMbase for 200K steps with a batch size of 3072 image-text pairs, 3072 images, and 8192 sentences on 32 A100 GPUs, and pre-train X-FMlarge with the same batch sizes for 160K steps on 64 A100 GPUs, which takes about six days. The learning rate for both models is warmed up to 1e-4 in the first 2500 steps and then decayed following a linear schedule. We set the maximum number of text tokens to 30 for image-text pairs, and to 128 for the pure text corpus. We apply mixed precision for pre-training.
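The warm-up-then-linear-decay schedule described above can be sketched as a plain function. The decay endpoint of 0 at the final step is an assumption on our part; the paper only states the peak rate, the warm-up length, and that the decay is linear:

```python
def lr_at(step, peak=1e-4, warmup=2500, total=200_000):
    """Linear warm-up to the peak rate, then linear decay (assumed to reach 0
    at the last step). Matches the stated 1e-4 peak and 2500-step warm-up."""
    if step < warmup:
        return peak * step / warmup
    return peak * (total - step) / (total - warmup)

rates = [lr_at(s) for s in (0, 1250, 2500, 101_250, 200_000)]
```

X-FMlarge would use the same shape with `total=160_000`.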
Fine-tuning. We choose widely used downstream tasks whose details are shown in Appendix B. We report overall
Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks
All retrieval cells report R@1/R@5/R@10. MSCOCO uses the 5K test set and Flickr30K the 1K test set.

| Model | # Params | MSCOCO TR-Fine-Tune | MSCOCO IR-Fine-Tune | Flickr30K TR-Fine-Tune | Flickr30K IR-Fine-Tune | MSCOCO TR-Zero-Shot | MSCOCO IR-Zero-Shot | Flickr30K TR-Zero-Shot | Flickr30K IR-Zero-Shot |
| ALBEF | 210M | 73.1/91.4/96.0 | 56.8/81.5/89.2 | 94.3/99.4/99.8 | 82.8/96.7/98.4 | – | – | 90.5/98.8/99.7 | 76.8/93.7/96.7 |
| VLMobase | 175M | 74.8/93.1/96.9 | 57.2/82.6/89.8 | 92.3/99.4/99.9 | 79.3/95.7/97.8 | – | – | – | – |
| VL-BEiT | 175M | 79.5/–/– | 61.5/–/– | 95.8/–/– | 83.9/–/– | – | – | – | – |
| OmniVL | 288M | 76.8/93.6/97.3 | 58.5/82.6/89.5 | 94.9/99.6/99.9 | 83.4/97.0/98.6 | – | – | – | – |
| X-VLM | 216M | 80.4/95.5/98.2 | 63.1/85.7/91.6 | 96.8/99.8/100 | 86.1/97.4/98.7 | 70.8/92.1/96.5 | 55.6/82.7/90.0 | 85.3/97.8/99.6 | 71.9/93.3/96.4 |
| X2-VLMbase | 255M | 80.5/95.5/97.8 | 62.7/84.7/90.7 | 97.4/99.9/100 | 90.0/98.6/99.3 | 68.4†/92.5†/96.8† | 55.2†/82.2†/89.3† | 85.1†/99.2†/100.0† | 77.3†/95.3†/97.6† |
| X-FMbase | 284M | 81.8/96.0/98.3 | 64.7/86.1/91.6 | 97.4/100/100 | 88.6/97.9/98.9 | 73.8/93.9/97.2 | 59.4/83.6/90.0 | 90.1/99.2/99.9 | 79.1/95.2/97.3 |
| VLMolarge | 562M | 78.2/94.4/97.4 | 60.6/84.4/91.0 | 95.3/99.9/100 | 84.5/97.3/98.6 | – | – | – | – |
| X2-VLMlarge | 593M | 82.3/96.2/98.3 | 65.2/86.4/91.9 | 99.1/100/100 | 91.1/98.6/99.4 | 69.7†/93.0†/97.2† | 58.3†/83.8†/90.5† | 86.8†/98.9†/99.9† | 80.5†/96.4†/98.3† |
| X-FMlarge | 807M | 82.1/96.2/98.2 | 65.4/86.6/91.9 | 97.9/100/100 | 89.4/98.2/99.1 | 74.4/94.1/97.3 | 59.4/84.4/90.7 | 89.7/99.1/100 | 79.1/95.4/97.9 |

Super-Large Models or Super-Large Datasets

| CLIP | 490M | – | – | 88.7/98.0/99.2 | 76.7/93.6/96.4 | 58.4/81.5/88.1 | 37.8/62.4/72.2 | 88.0/98.7/99.4 | 68.7/90.6/95.2 |
| ALIGN | 490M | 77.0/93.5/96.9 | 59.9/83.3/89.8 | 95.3/99.8/100 | 84.9/97.4/98.6 | 58.6/83.0/89.7 | 45.6/69.8/78.6 | 88.6/98.7/99.7 | 75.7/93.8/96.8 |
| Florence | 893M | 81.8/95.2/– | 63.2/85.7/– | 97.2/99.9/– | 87.9/98.1/– | 64.7/85.9/– | 47.2/71.4/– | 90.9/99.1/– | 76.7/93.6/– |
| CoCa | 2.1B | – | – | – | – | 66.3/86.2/91.8 | 51.2/74.2/82.0 | 92.5/99.5/99.9 | 80.4/95.7/97.7 |
| BEiT-3 | 1.9B | 84.8/96.5/98.3 | 67.2/87.7/92.8 | 98.0/100/100 | 90.3/98.7/99.5 | – | – | 94.9/99.9/100.0 | 81.5/95.6/97.8 |
| X2-VLMlarge | 593M | 84.4/96.5/98.5 | 67.7/87.5/92.5 | 98.8/100/100 | 91.8/98.6/99.5 | – | – | – | – |

Table 4: Results of text-retrieval (TR) and image-retrieval (IR) on COCO and Flickr30K. † denotes our reproduced results with the officially released models. Giant models with over 1B parameters (e.g., BEiT-3) and models pre-trained with over 400M data (e.g., CLIP and X2-VLMlarge) are shown in grey since they are not directly comparable with other models.
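The R@1/R@5/R@10 numbers in Table 4 are recall-at-K: the fraction of queries whose ground-truth item appears among the top-K retrieved results. A minimal sketch follows; the toy similarity matrix and the one-true-match-per-query convention are illustrative (COCO and Flickr30K actually have several captions per image):

```python
import numpy as np

def recall_at_k(sim, k):
    """sim[i, j]: similarity between query i and gallery item j.
    Assumes the ground-truth match of query i is gallery item i."""
    # Rank gallery items for each query, highest similarity first.
    order = np.argsort(-sim, axis=1)
    hits = (order[:, :k] == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return hits.mean()

sim = np.array([
    [0.9, 0.2, 0.1],   # query 0: true item ranked 1st
    [0.8, 0.3, 0.1],   # query 1: true item ranked 2nd
    [0.1, 0.2, 0.7],   # query 2: true item ranked 1st
])
r1, r2 = recall_at_k(sim, 1), recall_at_k(sim, 2)
```

For TR the queries are images and the gallery is text; for IR the roles are swapped.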
| Model | # Params (M) | Hidden | Vision Layers | Text Layers | Fusion Layers |
| X-FMbase | 284 | 768 | 12 | 12 | 12 |
| X-FMlarge | 807 | 1024 | 24 | 24 | 12 |

Table 5: Size variants of X-FM. All modules consist of transformer layers. Param indicates the number of parameters (in millions) of the transformer layers.
performance on eight language tasks from GLUE (Wang et al., 2019), eight vision tasks following OmniVL (Wang et al., 2022a), and four multi-modal tasks: text-image retrieval on MSCOCO and Flickr30K, visual question answering (VQA (Goyal et al., 2017)) and visual reasoning (NLVR2 (Suhr et al., 2019b)). For the image-text retrieval tasks, we report both zero-shot and fine-tuned results. For the ImageNet classification task, we report both linear evaluation and fine-tuning results. The other vision tasks are evaluated in the linear evaluation setting, and all the remaining tasks are evaluated in the fine-tuning setting. Because the image resolution differs between pre-training and fine-tuning, the position parameters are adapted using linear interpolation. For all downstream tasks, we apply random resized crop and horizontal flip augmentations to the images during training. More details of the network architectures and hyper-parameter setups are given in Appendix C.
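The linear interpolation of position parameters mentioned above can be sketched as a separable 1-D interpolation over the patch grid. This is an illustrative NumPy version, not the authors' exact implementation (which may use a framework's built-in 2-D resize); 14 × 14 = (224/16)² is the pre-training grid, and 24 × 24 would correspond to a hypothetical 384 × 384 fine-tuning resolution:

```python
import numpy as np

def resize_pos_embed(pos, old_hw, new_hw):
    """Linearly interpolate patch position embeddings (N, D) from an
    old_hw grid to a new_hw grid, one axis at a time."""
    (h0, w0), (h1, w1) = old_hw, new_hw
    grid = pos.reshape(h0, w0, -1)
    d = grid.shape[-1]
    y0, y1 = np.linspace(0, 1, h0), np.linspace(0, 1, h1)
    x0, x1 = np.linspace(0, 1, w0), np.linspace(0, 1, w1)
    tmp = np.empty((h1, w0, d))
    for j in range(w0):                      # interpolate along the height axis
        for c in range(d):
            tmp[:, j, c] = np.interp(y1, y0, grid[:, j, c])
    out = np.empty((h1, w1, d))
    for i in range(h1):                      # then along the width axis
        for c in range(d):
            out[i, :, c] = np.interp(x1, x0, tmp[i, :, c])
    return out.reshape(h1 * w1, d)

# Sanity check: a field that is linear in the grid coordinates is preserved.
yy, xx = np.meshgrid(np.linspace(0, 1, 14), np.linspace(0, 1, 14), indexing="ij")
pos = (2 * yy + 3 * xx).reshape(14 * 14, 1)
new = resize_pos_embed(pos, (14, 14), (24, 24))
yy2, xx2 = np.meshgrid(np.linspace(0, 1, 24), np.linspace(0, 1, 24), indexing="ij")
max_err = np.abs(new - (2 * yy2 + 3 * xx2).reshape(24 * 24, 1)).max()
```

Special tokens such as [CLS] keep their original position embedding and are excluded from the resize.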
4.3. Comparison with SOTA Foundation Models

We extensively compare the performance of X-FM with state-of-the-art foundation models on vision, language, and multi-modal tasks. We first compare our model with general foundation models, including UNIMO-v2 (Li et al., 2021c), FLAVA (Singh et al., 2021), SimVLM (Wang et al., 2021c), OFA (Wang et al., 2022b), DaVinci (Diao et al., 2022), OmniVL (Wang et al., 2022a), and Uni-Perceiver-MoE (Zhu et al., 2022). We also include comparisons with SOTA foundation models specifically designed for language, vision, or vision-language tasks: RoBERTa (Liu et al., 2019), BEiTv2 (Peng et al., 2022), and X2-VLM (Zeng et al., 2022).

Several observations can be made from Table 2. First, X-FMbase (column 11) outperforms all the previous general foundation models (columns 4-10) across almost all tasks by a large margin, becoming a new and stronger general foundation model. Compared to the previous general foundation models, X-FMbase improves by at least 3.2% and by up to 9.7% on the average of all the reported numbers. Second, we compare X-FM with state-of-the-art foundation models specifically designed for language, vision, and vision-language tasks: RoBERTa, BEiTv2 and X2-VLM. We observe that X-FM is also better than or comparable with these foundation models at both base and large scale (columns 1,2,3 vs. 11 and 12,13,14 vs. 18).
4.4. Comparison with SOTA Vision-Language Models

In addition to general foundation models, we also compare X-FM with state-of-the-art vision-language models. The results are shown in Table 4 and Table 7. X-FM demonstrates its superiority on MSCOCO retrieval and NLVR2, while achieving competitive performance on Flickr30K retrieval and VQA. Note that X-FMbase outperforms CLIP, ALIGN and Florence on image-text retrieval tasks with fewer parameters and much less training data. Compared to the recently released SOTA vision-language model, X2-VLM, X-FM is much better on image-text retrieval tasks in the zero-shot setting.
Columns: 1 RoBERTa†, 2 S-MLM, 3 S-ITM, 4 wostop, 5 BEiTv2†, 6 woMIM, 7 wBEiTv2 Tokenizer, 8 X2-VLM†, 9 Multi-task, 10 ALL; all columns except 1, 5 and 8 are X-FMbase variants.

| Task | Eval. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| MNLI | FT | 87.7 | 87.4 | 87.3 | 87.7 | – | – | – | – | 87.4 | 87.6 |
| CoLA | FT | 63.2 | 61.6 | 63.6 | 64.2 | – | – | – | – | 62.2 | 65.2 |
| MRPC | FT | 90.7 | 92.2 | 91.1 | 90.7 | – | – | – | – | 92.0 | 92.5 |
| QQP | FT | 91.5 | 91.6 | 91.6 | 91.6 | – | – | – | – | 91.6 | 91.6 |
| SST-2 | FT | 95.0 | 95.1 | 94.2 | 94.6 | – | – | – | – | 94.4 | 95.3 |
| QNLI | FT | 93.1 | 93.0 | 93.2 | 92.5 | – | – | – | – | 92.8 | 92.9 |
| RTE | FT | 80.9 | 79.1 | 81.6 | 81.2 | – | – | – | – | 79.8 | 81.9 |
| STS-B | FT | 90.9 | 90.7 | 90.7 | 90.4 | – | – | – | – | 90.1 | 90.8 |
| Language Avg. | | 86.6 | 86.4 | 86.7 | 86.6 | – | – | – | – | 86.3 | 87.2 |
| ImageNet | FT | – | – | – | – | 85.5 | 84.8 | 85.0 | – | 85.0 | 85.3 |
| ImageNet | LE | – | – | – | – | 80.5 | 79.1 | 79.4 | – | 79.3 | 81.1 |
| Food101 | LE | – | – | – | – | 88.2 | 86.9 | 87.2 | – | 86.9 | 88.7 |
| CIFAR10 | LE | – | – | – | – | 95.3 | 96.6 | 96.5 | – | 96.6 | 97.5 |
| CIFAR100 | LE | – | – | – | – | 81.5 | 83.3 | 83.9 | – | 84.1 | 86.9 |
| Pets | LE | – | – | – | – | 93.1 | 88.1 | 88.5 | – | 88.2 | 90.7 |
| DTD | LE | – | – | – | – | 78.4 | 77.7 | 76.9 | – | 78.0 | 78.7 |
| Flowers102 | LE | – | – | – | – | 95.7 | 94.1 | 94.5 | – | 94.2 | 97.1 |
| Vision Avg. | | – | – | – | – | 87.3 | 86.3 | 86.5 | – | 86.5 | 88.2 |
| VQAv2 | FT | – | 78.8 | 78.5 | 78.7 | – | 78.3 | 78.2 | 78.0 | 78.2 | 78.6 |
| NLVR2 | FT | – | 86.3 | 86.0 | 86.4 | – | 85.9 | 85.5 | 86.2 | 86.1 | 86.7 |
| Flickr30K TR R@1 | ZS | – | 88.3 | 87.2 | 87.1 | – | 87.1 | 87.2 | 87.7 | 85.0 | 89.3 |
| Flickr30K IR R@1 | ZS | – | 76.6 | 74.9 | 75.8 | – | 76.1 | 75.3 | 75.1 | 75.6 | 77.4 |
| Flickr30K TR R@1 | FT | – | 97.5 | 97.0 | 97.2 | – | 96.4 | 96.7 | 97.0 | 97.0 | 97.7 |
| Flickr30K IR R@1 | FT | – | 87.4 | 86.9 | 87.3 | – | 86.2 | 86.6 | 86.2 | 86.4 | 87.4 |
| COCO TR R@1 | ZS | – | 72.0 | 72.1 | 70.5 | – | 73.0 | 72.1 | 73.2 | 69.9 | 72.8 |
| COCO IR R@1 | ZS | – | 58.4 | 57.1 | 57.7 | – | 58.2 | 57.7 | 57.7 | 56.5 | 59.0 |
| COCO TR R@1 | FT | – | 81.2 | 80.2 | 80.9 | – | 80.6 | 80.1 | 80.3 | 80.0 | 81.2 |
| COCO IR R@1 | FT | – | 64.2 | 63.4 | 63.6 | – | 63.7 | 63.0 | 63.1 | 63.0 | 64.0 |
| Vision-Language Avg. | | – | 79.1 | 78.3 | 78.5 | – | 78.6 | 78.2 | 78.5 | 77.8 | 79.4 |

Table 6: Ablation studies on vision, language, and vision-language tasks. We use the same settings as Table 2. "ALL" for X-FMbase is trained with the same data under the same settings for pre-training and fine-tuning as all the variants. Language Avg. is the average of all language tasks, while Vision Avg. is the average of all vision tasks. Vision-Language Avg. is the average of all vision-language tasks. Note that the performance of "ALL" differs slightly from X-FMbase in Table 2, because we use fewer training steps (160K) for the ablation to save computational resources.
4.5. Ablation Study

To verify the contributions of different modules in our framework, we ablate them and evaluate the performance of X-FM on all downstream tasks. The results are shown in Table 6. We first explain several abbreviations in the table. S-MLM means that we only separate the language representations when learning the IMLM task, while S-ITM means that the language representations for computing ITM and BBP are separated. wostop indicates not stopping the gradients of any language representations. woMIM means that we do not learn by MIM, while wBEiTv2 tokenizer means that we learn by MIM with the image tokenizer used in BEiTv2. Multi-task is a variation that uses straightforward multi-task learning to optimize the three encoders in X-FM. To make a fair comparison, we also train RoBERTa, BEiTv2 and X2-VLM with the same data, denoted RoBERTa†, BEiTv2† and X2-VLM†. Note that we also increase the number of fusion layers in X2-VLM† to make its parameter size comparable to our models. RoBERTa†, BEiTv2† and X2-VLM† all have slightly better results on average than the official ones. From the results, we have the following observations.
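The stop-gradient device behind S-MLM/S-ITM versus wostop detaches the language representation before it enters the fusion encoder, so vision-language losses do not update the language encoder. The toy forward-mode autograd below only illustrates this semantics; in a real framework it is a single detach call, and every name here is ours, not the paper's code:

```python
class Dual:
    """Forward-mode (value, derivative) pair, just enough to show stop-gradient."""
    def __init__(self, v, g=0.0):
        self.v, self.g = v, g
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.g + o.g)
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v - o.v, self.g - o.g)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.g * o.v + self.v * o.g)

def stop_grad(x):
    return Dual(x.v, 0.0)  # same value, but no derivative flows through

def fusion_loss(rep):               # stand-in for an ITM/BBP-style loss
    return (rep - 1.0) * (rep - 1.0)

t = Dual(3.0, 1.0)                  # a language-encoder parameter (seed = 1)
rep = t * t                         # its language representation, d(rep)/dt = 6
plain = fusion_loss(rep)            # gradient reaches the language encoder
detached = fusion_loss(stop_grad(rep))  # gradient is blocked
```

Both losses have the same value, so the fusion encoder sees identical inputs; only the gradient path back into the language encoder differs, which is exactly the trade-off the S-MLM/S-ITM/wostop columns probe.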
First, both designs (stop gradient and masked image modeling) bring improvements, and their combination brings further improvements on all three kinds of downstream tasks (column 10 vs. others). Second, without separated language representations, models always perform worse on language understanding tasks (column 10 vs. 2,3,4). Besides, the separate language representations in the IMLM task on image-text data are helpful for multi-modal tasks (column 2 vs. 4). As we point out in Section 1, the fusion encoder can learn better cross-modal feature alignments through the IMLM task on image-text pairs instead of utilizing text tokens. Although S-ITM shows slight side effects (column 4 vs. 3), stopping the gradients of the language representations in the fusion encoder is necessary to simultaneously achieve strong language understanding and vision-language understanding capability. Third, the MIM task is useful for vision-language and vision learning (column 10 vs. 6). Meanwhile, the targets in our MIM task are better than the BEiTv2 tokenizer (column 10 vs. 7). Fourth, X-FM is much better than a naive
| Method | # Params | VQA test-dev | VQA test-std | NLVR2 dev | NLVR2 test-P |
| ALBEF | 210M | 74.5 | 74.7 | 80.2 | 80.5 |
| VLMobase | 175M | 76.6 | 76.9 | 82.8 | 83.3 |
| METER | 341M | 77.7 | 77.6 | 82.3 | 83.1 |
| VL-BEiT | 175M | 77.5 | 77.8 | 81.9 | 82.7 |
| BLIPbase | 240M | 78.2 | 78.2 | 82.5 | 83.1 |
| X-VLM | 216M | 78.1 | 78.1 | 84.2 | 84.2 |
| OFAbase | 182M | 78.0 | 78.1 | – | – |
| OmniVL | 288M | 78.3 | 78.4 | – | – |
| X2-VLMbase | 255M | 79.2 | 79.3 | 85.9 | 86.1 |
| X-FMbase | 284M | 79.1 | 79.2 | 86.3 | 86.5 |
| VLMolarge | 562M | 79.9 | 80.0 | 85.6 | 86.9 |
| OFAlarge | 472M | 80.3 | 80.5 | – | – |
| X2-VLMlarge | 593M | 80.5 | 80.5 | 87.2 | 87.6 |
| X-FMlarge | 807M | 79.5 | 79.6 | 86.2 | 87.8 |

Super-Large Models or Super-Large Datasets

| SimVLMbase | 273M | 77.9 | 78.1 | 81.7 | |
| X2-VLMbase | 255M | 80.4 | 80.2 | 86.2 | 87.0 |
| SimVLMlarge | 783M | 79.3 | 79.6 | 84.1 | 84.8 |
| X2-VLMlarge | 593M | 81.9 | 81.8 | 88.7 | 89.4 |
| Florence | 893M | 80.2 | 80.3 | – | – |
| CoCa | 2.1B | 82.3 | 82.3 | 86.1 | 87.0 |
| BEiT-3 | 1.9B | 84.2 | 84.0 | 91.5 | 92.6 |

Table 7: Results on VQA and visual reasoning. Giant models with over 1B parameters (e.g., CoCa and BEiT-3) or models pre-trained with over 400M data (e.g., SimVLM and X2-VLMlarge) are shown in grey because they are not directly comparable with other models.
multi-task learning strategy for a foundation model (column 10 vs. 9). Compared with the straightforward multi-task strategy, X-FMbase improves by an average of 0.9%, 1.7% and 1.6% on language, vision, and vision-language tasks, respectively. Fifth, X-FM is also slightly better than foundation models specifically designed for language, vision, and vision-language tasks with the same training corpus (column 10 vs. 1,5,8).
5. Conclusion and Limitation

5.1. Conclusion

In this work, we address the problem of how to build a general foundation model that performs the best on all the understanding tasks of language, vision, and vision-language. We propose X-FM, a general foundation model with two new and effective training techniques, to learn rich language, vision and vision-language representations at the same time. Experimental results show that X-FM outperforms other general foundation models by a large margin. Moreover, X-FM is better than or comparable to the SOTA foundation models specifically designed for language, vision, or vision-language understanding tasks.
5.2. Limitation

Like most existing work on foundation models, the entire project consumed over 5 A100 GPU years on a computing cluster with high electricity costs, although we only tested base and large models. There is still potential for efficiency improvement through sparse attention (Zaheer et al., 2020) or the lottery ticket hypothesis (Frankle & Carbin, 2018). We will explore techniques to improve the training efficiency and reduce the carbon footprint so that we can adhere to the proposals on "green" deep learning (Schwartz et al., 2020; Xu et al., 2021).

Due to considerations of fair comparison and computational resources, we did not try super-large models with 1.9B or more parameters, such as BEiT-3 (Wang et al., 2022d), CoCa (Yu et al., 2022) and PaLI (Chen et al., 2022). However, scalability is also an important factor for foundation models. We leave these investigations to future work.
2157 |
+
References
|
2158 |
+
Agirre, E., Màrquez, L., and Wicentowski, R. (eds.). Pro-
|
2159 |
+
ceedings of the Fourth International Workshop on Seman-
|
2160 |
+
tic Evaluations (SemEval-2007), Prague, Czech Republic,
|
2161 |
+
2007. Association for Computational Linguistics. URL
|
2162 |
+
https://aclanthology.org/S07-1000.
|
2163 |
+
Alayrac, J.-B., Donahue, J., Luc, P., Miech, A., Barr, I.,
|
2164 |
+
Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds,
|
2165 |
+
M., et al. Flamingo: a visual language model for few-shot
|
2166 |
+
learning. arXiv preprint arXiv:2204.14198, 2022.
|
2167 |
+
Bao, H., Dong, L., and Wei, F. BEiT: Bert pre-training of
|
2168 |
+
image transformers. arXiv preprint, 2021.
|
2169 |
+
Bao, H., Wang, W., Dong, L., and Wei, F.
|
2170 |
+
Vl-beit:
|
2171 |
+
Generative vision-language pretraining. arXiv preprint
|
2172 |
+
arXiv:2206.01127, 2022.
|
2173 |
+
Bentivogli, L., Clark, P., Dagan, I., and Giampiccolo, D.
|
2174 |
+
The fifth pascal recognizing textual entailment challenge.
|
2175 |
+
In TAC, 2009.
|
2176 |
+
Bingel, J. and Søgaard, A. Identifying beneficial task re-
|
2177 |
+
lations for multi-task learning in deep neural networks.
|
2178 |
+
arXiv preprint arXiv:1702.08303, 2017.
|
2179 |
+
Bossard, L., Guillaumin, M., and Gool, L. V. Food-101–
|
2180 |
+
mining discriminative components with random forests.
|
2181 |
+
In European conference on computer vision, pp. 446–461.
|
2182 |
+
Springer, 2014.
|
2183 |
+
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan,
|
2184 |
+
J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
|
2185 |
+
|
2186 |
+
Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks
|
2187 |
+
Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G.,
|
2188 |
+
Henighan, T., Child, R., Ramesh, A., Ziegler, D. M.,
|
2189 |
+
Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E.,
|
2190 |
+
Litwin, M., Gray, S., Chess, B., Clark, J., Berner,
|
2191 |
+
C., McCandlish, S., Radford, A., Sutskever, I., and
|
2192 |
+
Amodei, D.
|
2193 |
+
Language models are few-shot learners.
|
2194 |
+
In Larochelle, H., Ranzato, M., Hadsell, R., Balcan,
|
2195 |
+
M., and Lin, H. (eds.), Advances in Neural Information
|
2196 |
+
Processing Systems 33: Annual Conference on Neural
|
2197 |
+
Information Processing Systems 2020, NeurIPS 2020,
|
2198 |
+
December 6-12, 2020, virtual, 2020.
|
2199 |
+
URL https:
|
2200 |
+
//proceedings.neurips.cc/paper/2020/hash/
|
2201 |
+
1457c0d6bfcb4967418bfb8ac142f64a-Abstract.
|
2202 |
+
html.
Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., and Joulin, A. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9650–9660, 2021.

Chen, X., Wang, X., Changpinyo, S., Piergiovanni, A., Padlewski, P., Salz, D., Goodman, S., Grycner, A., Mustafa, B., Beyer, L., et al. PaLI: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022.

Chen, Y.-C., Li, L., Yu, L., El Kholy, A., Ahmed, F., Gan, Z., Cheng, Y., and Liu, J. UNITER: Universal image-text representation learning. In European Conference on Computer Vision (ECCV), 2020.

Cho, J., Lei, J., Tan, H., and Bansal, M. Unifying vision-and-language tasks via text generation. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 1931–1942. PMLR, 2021. URL http://proceedings.mlr.press/v139/cho21a.html.

Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., and Vedaldi, A. Describing textures in the wild. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pp. 3606–3613. IEEE Computer Society, 2014. doi: 10.1109/CVPR.2014.461. URL https://doi.org/10.1109/CVPR.2014.461.

Clark, K., Luong, M., Le, Q. V., and Manning, C. D. ELECTRA: Pre-training text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=r1xMH1BtvB.
Dagan, I., Glickman, O., and Magnini, B. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges Workshop, pp. 177–190. Springer, 2005.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.

Diao, S., Zhou, W., Zhang, X., and Wang, J. Prefix language models are unified modal learners. arXiv preprint arXiv:2206.07699, 2022.

Dolan, W. B. and Brockett, C. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005. URL https://aclanthology.org/I05-5002.

Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.

Frankle, J. and Carbin, M. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635, 2018.

Giampiccolo, D., Magnini, B., Dagan, I., and Dolan, B. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 1–9, Prague, 2007. Association for Computational Linguistics. URL https://aclanthology.org/W07-1401.

Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., and Parikh, D. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 6325–6334. IEEE Computer Society, 2017. doi: 10.1109/CVPR.2017.670. URL https://doi.org/10.1109/CVPR.2017.670.

Haim, R. B., Dagan, I., Dolan, B., Ferro, L., Giampiccolo, D., Magnini, B., and Szpektor, I. The second PASCAL recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, volume 7, 2006.
He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. B. Momentum contrast for unsupervised visual representation learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pp. 9726–9735. IEEE, 2020. doi: 10.1109/CVPR42600.2020.00975. URL https://doi.org/10.1109/CVPR42600.2020.00975.

He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000–16009, 2022.

He, P., Liu, X., Gao, J., and Chen, W. DeBERTa: Decoding-enhanced BERT with disentangled attention. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=XPZIaotutsD.

Iyer, S., Dandekar, N., Csernai, K., et al. First Quora dataset release: Question pairs. data.quora.com, 2017.

Jia, C., Yang, Y., Xia, Y., Chen, Y., Parekh, Z., Pham, H., Le, Q. V., Sung, Y., Li, Z., and Duerig, T. Scaling up visual and vision-language representation learning with noisy text supervision. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 4904–4916. PMLR, 2021. URL http://proceedings.mlr.press/v139/jia21b.html.

Joshi, M., Chen, D., Liu, Y., Weld, D. S., Zettlemoyer, L., and Levy, O. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77, 2020.

Karpathy, A. and Li, F. Deep visual-semantic alignments for generating image descriptions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pp. 3128–3137. IEEE Computer Society, 2015. doi: 10.1109/CVPR.2015.7298932. URL https://doi.org/10.1109/CVPR.2015.7298932.

Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.-J., Shamma, D. A., et al. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision (IJCV), 2017.

Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.

Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., Kamali, S., Popov, S., Malloci, M., Kolesnikov, A., et al. The Open Images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv preprint arXiv:1811.00982, 2018. URL https://arxiv.org/abs/1811.00982.

Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=H1eA7AEtvS.
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., and Zettlemoyer, L. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871–7880, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.703. URL https://aclanthology.org/2020.acl-main.703.

Li, J., Selvaraju, R. R., Gotmare, A. D., Joty, S., Xiong, C., and Hoi, S. Align before fuse: Vision and language representation learning with momentum distillation. In Conference on Neural Information Processing Systems (NeurIPS), 2021a.

Li, J., Li, D., Xiong, C., and Hoi, S. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. arXiv preprint arXiv:2201.12086, 2022.

Li, W., Gao, C., Niu, G., Xiao, X., Liu, H., Liu, J., Wu, H., and Wang, H. UNIMO: Towards unified-modal understanding and generation via cross-modal contrastive learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 2592–2607, Online, 2021b. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.202. URL https://aclanthology.org/2021.acl-long.202.

Li, W., Gao, C., Niu, G., Xiao, X., Liu, H., Liu, J., Wu, H., and Wang, H. UNIMO: Towards unified-modal understanding and generation via cross-modal contrastive learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 2592–2607, Online, 2021c. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.202. URL https://aclanthology.org/2021.acl-long.202.
Li, X., Yin, X., Li, C., Zhang, P., Hu, X., Zhang, L., Wang, L., Hu, H., Dong, L., Wei, F., et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision (ECCV), 2020.

Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV), 2014.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint, 2019.

Lu, J., Batra, D., Parikh, D., and Lee, S. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E. B., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 13–23, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/c74d97b01eae257e44aa9d5bade97baf-Abstract.html.

Nilsback, M.-E. and Zisserman, A. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722–729. IEEE, 2008.

Ordonez, V., Kulkarni, G., and Berg, T. L. Im2Text: Describing images using 1 million captioned photographs. In Shawe-Taylor, J., Zemel, R. S., Bartlett, P. L., Pereira, F. C. N., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011. Proceedings of a meeting held 12-14 December 2011, Granada, Spain, pp. 1143–1151, 2011. URL https://proceedings.neurips.cc/paper/2011/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html.

Parkhi, O. M., Vedaldi, A., Zisserman, A., and Jawahar, C. V. Cats and dogs. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, June 16-21, 2012, pp. 3498–3505. IEEE Computer Society, 2012. doi: 10.1109/CVPR.2012.6248092. URL https://doi.org/10.1109/CVPR.2012.6248092.

Peng, Z., Dong, L., Bao, H., Ye, Q., and Wei, F. BEiT v2: Masked image modeling with vector-quantized visual tokenizers. arXiv preprint arXiv:2208.06366, 2022.
Phang, J., Févry, T., and Bowman, S. R. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. ArXiv, abs/1811.01088, 2018.

Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., and Sutskever, I. Learning transferable visual models from natural language supervision. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 8748–8763. PMLR, 2021. URL http://proceedings.mlr.press/v139/radford21a.html.

Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research (JMLR), 2020.

Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383–2392, Austin, Texas, 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://aclanthology.org/D16-1264.

Rezatofighi, H., Tsoi, N., Gwak, J., Sadeghian, A., Reid, I. D., and Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 658–666. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00075.

Ridnik, T., Ben-Baruch, E., Noy, A., and Zelnik-Manor, L. ImageNet-21K pretraining for the masses, 2021.

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Schwartz, R., Dodge, J., Smith, N. A., and Etzioni, O. Green AI. Communications of the ACM, 63(12):54–63, 2020.

Shao, S., Li, Z., Zhang, T., Peng, C., Yu, G., Zhang, X., Li, J., and Sun, J. Objects365: A large-scale, high-quality dataset for object detection. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 8429–8438. IEEE, 2019. doi: 10.1109/ICCV.2019.00852. URL https://doi.org/10.1109/ICCV.2019.00852.
Sharma, P., Ding, N., Goodman, S., and Soricut, R. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2556–2565, Melbourne, Australia, 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1238. URL https://aclanthology.org/P18-1238.

Singh, A., Hu, R., Goswami, V., Couairon, G., Galuba, W., Rohrbach, M., and Kiela, D. FLAVA: A foundational language and vision alignment model. ArXiv preprint, abs/2112.04482, 2021. URL https://arxiv.org/abs/2112.04482.

Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A., and Potts, C. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631–1642, Seattle, Washington, USA, 2013. Association for Computational Linguistics. URL https://aclanthology.org/D13-1170.

Suhr, A., Zhou, S., Zhang, A., Zhang, I., Bai, H., and Artzi, Y. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6418–6428, Florence, Italy, 2019a. Association for Computational Linguistics. doi: 10.18653/v1/P19-1644. URL https://aclanthology.org/P19-1644.

Suhr, A., Zhou, S., Zhang, A., Zhang, I., Bai, H., and Artzi, Y. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6418–6428, Florence, Italy, 2019b. Association for Computational Linguistics. doi: 10.18653/v1/P19-1644. URL https://aclanthology.org/P19-1644.

Sun, Y., Wang, S., Li, Y., Feng, S., Chen, X., Zhang, H., Tian, X., Zhu, D., Tian, H., and Wu, H. ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223, 2019.

Tan, H. and Bansal, M. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5100–5111, Hong Kong, China, 2019a. Association for Computational Linguistics. doi: 10.18653/v1/D19-1514. URL https://aclanthology.org/D19-1514.

Tan, H. and Bansal, M. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5100–5111, Hong Kong, China, 2019b. Association for Computational Linguistics. doi: 10.18653/v1/D19-1514. URL https://aclanthology.org/D19-1514.
Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jégou, H. Training data-efficient image transformers & distillation through attention. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 10347–10357. PMLR, 2021. URL http://proceedings.mlr.press/v139/touvron21a.html.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5998–6008, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html.

Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=rJ4km2R5t7.

Wang, J., Chen, D., Wu, Z., Luo, C., Zhou, L., Zhao, Y., Xie, Y., Liu, C., Jiang, Y.-G., and Yuan, L. OmniVL: One foundation model for image-language and video-language tasks. arXiv preprint arXiv:2209.07526, 2022a.

Wang, P., Yang, A., Men, R., Lin, J., Bai, S., Li, Z., Ma, J., Zhou, C., Zhou, J., and Yang, H. OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In International Conference on Machine Learning, pp. 23318–23340. PMLR, 2022b.

Wang, P., Yang, A., Men, R., Lin, J., Bai, S., Li, Z., Ma, J., Zhou, C., Zhou, J., and Yang, H. Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. arXiv preprint arXiv:2202.03052, 2022c.
Wang, W., Tran, D., and Feiszli, M. What makes training multi-modal classification networks hard? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12695–12705, 2020.

Wang, W., Bao, H., Dong, L., and Wei, F. VLMo: Unified vision-language pre-training with mixture-of-modality-experts. ArXiv preprint, abs/2111.02358, 2021a. URL https://arxiv.org/abs/2111.02358.

Wang, W., Bao, H., Dong, L., Bjorck, J., Peng, Z., Liu, Q., Aggarwal, K., Mohammed, O. K., Singhal, S., Som, S., et al. Image as a foreign language: BEiT pretraining for all vision and vision-language tasks. arXiv preprint arXiv:2208.10442, 2022d.

Wang, Z., Yu, J., Yu, A. W., Dai, Z., Tsvetkov, Y., and Cao, Y. SimVLM: Simple visual language model pretraining with weak supervision. CoRR, abs/2108.10904, 2021b.

Wang, Z., Yu, J., Yu, A. W., Dai, Z., Tsvetkov, Y., and Cao, Y. SimVLM: Simple visual language model pretraining with weak supervision. arXiv preprint, 2021c.

Warstadt, A., Singh, A., and Bowman, S. R. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641, 2019. doi: 10.1162/tacl_a_00290. URL https://aclanthology.org/Q19-1040.

Wei, C., Fan, H., Xie, S., Wu, C.-Y., Yuille, A., and Feichtenhofer, C. Masked feature prediction for self-supervised visual pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14668–14678, 2022a.

Wei, L., Xie, L., Zhou, W., Li, H., and Tian, Q. MVP: Multimodality-guided visual pre-training. arXiv preprint arXiv:2203.05175, 2022b.

Williams, A., Nangia, N., and Bowman, S. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112–1122, New Orleans, Louisiana, 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1101. URL https://aclanthology.org/N18-1101.

Xu, J., Zhou, W., Fu, Z., Zhou, H., and Li, L. A survey on green deep learning. ArXiv preprint, abs/2111.05193, 2021. URL https://arxiv.org/abs/2111.05193.

Yu, J., Wang, Z., Vasudevan, V., Yeung, L., Seyedhosseini, M., and Wu, Y. CoCa: Contrastive captioners are image-text foundation models. arXiv preprint arXiv:2205.01917, 2022.

Yu, L., Poirson, P., Yang, S., Berg, A. C., and Berg, T. L. Modeling context in referring expressions. In European Conference on Computer Vision, pp. 69–85. Springer, 2016.

Yuan, L., Chen, D., Chen, Y.-L., Codella, N., Dai, X., Gao, J., Hu, H., Huang, X., Li, B., Li, C., Liu, C., Liu, M., Liu, Z., Lu, Y., Shi, Y., Wang, L., Wang, J., Xiao, B., Xiao, Z., Yang, J., Zeng, M., Zhou, L., and Zhang, P. Florence: A new foundation model for computer vision. arXiv preprint, 2021.

Zaheer, M., Guruganesh, G., Dubey, K. A., Ainslie, J., Alberti, C., Ontanon, S., Pham, P., Ravula, A., Wang, Q., Yang, L., et al. Big Bird: Transformers for longer sequences. Advances in Neural Information Processing Systems, 33:17283–17297, 2020.

Zeng, Y., Zhang, X., and Li, H. Multi-grained vision language pre-training: Aligning texts with visual concepts. ArXiv preprint, abs/2111.08276, 2021. URL https://arxiv.org/abs/2111.08276.

Zeng, Y., Zhang, X., Li, H., Wang, J., Zhang, J., and Zhou, W. X2-VLM: All-in-one pre-trained model for vision-language tasks. arXiv preprint arXiv:2211.12402, 2022.

Zhang, P., Li, X., Hu, X., Yang, J., Zhang, L., Wang, L., Choi, Y., and Gao, J. VinVL: Revisiting visual representations in vision-language models. In Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Zhang, X., Li, P., and Li, H. AMBERT: A pre-trained language model with multi-grained tokenization. arXiv preprint arXiv:2008.11869, 2020.

Zhu, J., Zhu, X., Wang, W., Wang, X., Li, H., Wang, X., and Dai, J. Uni-Perceiver-MoE: Learning sparse generalist models with conditional MoEs. arXiv preprint arXiv:2206.04674, 2022.

Zhu, X., Zhu, J., Li, H., Wu, X., Wang, X., Li, H., Wang, X., and Dai, J. Uni-Perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks. arXiv preprint arXiv:2112.01522, 2021.
A. Comparison of Recent Foundation Models

Table 8 shows an extensive comparison of recent foundation models and X-FM along multiple axes. Previous work either (i) performs best on uni-modal tasks (Liu et al., 2019; Peng et al., 2022) or on vision-language tasks (Zeng et al., 2021; 2022); (ii) targets a specific uni-modal domain along with part of the vision-and-language tasks (Wang et al., 2021a; Radford et al., 2021; Jia et al., 2021; Wang et al., 2021c; Yu et al., 2022; Wang et al., 2022b; Diao et al., 2022); or (iii) targets all domains but cannot perform best on all the tasks (Li et al., 2021c; Singh et al., 2021; Zhu et al., 2022). Our model, X-FM, is a general foundation model that can perform best on all the understanding tasks of language, vision, and vision-language.

B. Details of Downstream Tasks

Language Understanding.

We conduct experiments on the GLUE benchmark, including MNLI (Williams et al., 2018), CoLA (Warstadt et al., 2019), MRPC (Dolan & Brockett, 2005), QQP (Iyer et al., 2017), SST-2 (Socher et al., 2013), QNLI (Rajpurkar et al., 2016), RTE (Dagan et al., 2005; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), and STS-B (Agirre et al., 2007). We follow the practice of BERT (Devlin et al., 2019; Liu et al., 2019): the input is fed into the language encoder, and the hidden state of the [CLS] token is fed into a new multi-class linear classifier or regression head.
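As an illustration, such a task head is a single linear map over the [CLS] hidden state. The sketch below uses plain Python with hypothetical dimensions and weights; it is not X-FM's actual implementation:

```python
def glue_head(cls_hidden, weights, biases):
    """Task head on top of the [CLS] hidden state: one linear layer
    producing per-class logits for classification, or a single value
    for regression (STS-B), when there is only one output row."""
    return [sum(w * h for w, h in zip(row, cls_hidden)) + b
            for row, b in zip(weights, biases)]

# Hypothetical 2-dimensional [CLS] state fed to a 2-class head.
logits = glue_head([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.5])
```

During fine-tuning only this head is newly initialized; the encoder parameters are updated from their pre-trained values.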
Vision Understanding.

We conduct vision experiments with both fine-tuning and linear evaluation (linear eval). Linear evaluation follows a common practice (Caron et al., 2021; He et al., 2020; Singh et al., 2021) in self-supervised learning for evaluating representation quality: the pre-trained backbone model is frozen, and an MLP head is appended on top of it. We choose 7 popular datasets following OmniVL (Wang et al., 2022a): ImageNet (Russakovsky et al., 2015), Food101 (Bossard et al., 2014), CIFAR10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009), DTD (Cimpoi et al., 2014), Pets (Parkhi et al., 2012), and Flowers102 (Nilsback & Zisserman, 2008).

Vision-Language Understanding.

Image-Text Retrieval. We evaluate X-FM on both the MSCOCO and Flickr30K datasets, adopting the widely used Karpathy split (Karpathy & Li, 2015) for both. Following previous work (Li et al., 2021a; Zeng et al., 2021; 2022), we first encode images and texts separately and compute s(I, T) to obtain the top-k candidates, and then use the fusion encoder to re-rank the candidates.
|
2765 |
+
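The two-stage retrieval procedure can be sketched as follows. The embeddings and the fusion score below are random stand-ins; the point is the flow: cheap dual-encoder scores select the top-k candidates, and the expensive fusion encoder is only ever run on those k:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins: unit-normalised image and text embeddings from the
# separate encoders (100 images, 100 captions, dim 256).
img = rng.normal(size=(100, 256))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(100, 256))
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

k = 16
s = txt @ img.T                         # s(I, T): cheap dual-encoder similarity
topk = np.argsort(-s, axis=1)[:, :k]    # top-k candidate images per text

def fusion_score(text_id, image_ids):
    # Stand-in for the (expensive) fusion-encoder matching score,
    # called only on the k candidates.
    return s[text_id, image_ids] + 0.01 * rng.normal(size=len(image_ids))

ranked = [topk[t][np.argsort(-fusion_score(t, topk[t]))] for t in range(100)]
print(len(ranked), len(ranked[0]))  # 100 16
```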
Visual Question Answering. The task requires the model to predict an answer given an image and a question. We evaluate X-FM on the VQA v2.0 dataset (Goyal et al., 2017). Following previous work (Zeng et al., 2021), we use a Transformer decoder to generate answers based on the outputs of the fusion module; the decoder shares the same network architecture as the fusion encoder. Note that we use an image resolution of 768×768 for the final result of X-FMbase, and a resolution of 480×480 for X-FMlarge and for X-FMbase in the ablation studies, for efficient fine-tuning.
Visual Reasoning. We evaluate X-FM on the widely used NLVR2 benchmark (Suhr et al., 2019a). The task requires the model to determine whether a text describes the relation between two images. Following previous work (Wang et al., 2021a; Bao et al., 2022), we reformulate the triplet input as two image-text pairs, each containing the text description and one of the images. We then concatenate the final output [CLS] features of the fusion module for the two pairs to predict the label.
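The pair-then-concatenate scheme can be sketched as follows; the fusion outputs and head weights are random stand-ins with assumed shapes (batch 8, hidden size 768):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-ins for the fusion-module [CLS] outputs of the two image-text pairs.
cls_a = rng.normal(size=(8, 768))   # (text, image 1)
cls_b = rng.normal(size=(8, 768))   # (text, image 2)

# Binary true/false head over the concatenated pair features.
W = rng.normal(size=(2 * 768, 2)) * 0.02
logits = np.concatenate([cls_a, cls_b], axis=1) @ W
print(logits.shape)  # (8, 2)
```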
C. Details of Hyper-parameters

Pre-training. X-FMbase is implemented with a 12-layer language encoder, a 12-layer vision encoder, and a 12-layer fusion encoder, with 768-dimensional hidden states, an intermediate size of 3072, and a maximum input length of 128. X-FMlarge is implemented with a 24-layer language encoder, a 24-layer vision encoder, and a 12-layer fusion encoder, with 1024-dimensional hidden states, an intermediate size of 4096, and a maximum input length of 128. We initialize the language encoder with RoBERTa and the vision encoder with BEiTv2. The weight decay is set to 0.01 with β1 = 0.9, β2 = 0.98. The learning rate is 1e-4 with a warm-up over the first 2500 steps, after which it is linearly decayed to 0. Each batch contains 3072 image-text pairs, 3072 images, and 8192 text-only sentences. We use center-crop to resize each image to 224×224. The default settings are shown in Table 9.
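The warm-up-then-linear-decay schedule described above is simple to write down; a sketch using the stated pre-training values (peak 1e-4, 2500 warm-up steps, 200k total steps):

```python
def lr_at(step, base_lr=1e-4, warmup=2500, total=200_000):
    """Warm up linearly for `warmup` steps, then decay linearly to 0."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * (total - step) / (total - warmup)

print(lr_at(0))        # 0.0
print(lr_at(2500))     # 0.0001 (peak, right after warm-up)
print(lr_at(200_000))  # 0.0
```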
Fine-tuning. The learning rate is chosen from {1e-5, 2e-5, 5e-5}, and the model is optimized with AdamW. Because the image resolution differs between pre-training and fine-tuning, the position parameters are adapted using linear interpolation. For all downstream tasks, we apply random resized crop and horizontal flip augmentation during training. The default settings for text classification, image classification, and vision-language understanding are shown in Tables 10, 11, 12 and 13, respectively. Note that the resolution for VQA is different, as described in Section B.
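Adapting the position parameters can be sketched as bilinear resampling of the patch-position table, e.g. from the 14×14 grid of 224×224 pre-training to the 24×24 grid of 384×384 fine-tuning (patch size 16). This is a numpy sketch; the function name and the fact that any [CLS] position would be handled separately are assumptions, not the paper's code:

```python
import numpy as np

def resize_pos_embed(pos, old_hw, new_hw):
    """Bilinearly interpolate an (L, D) patch-position table laid out on an
    old_hw x old_hw grid onto a new_hw x new_hw grid."""
    d = pos.shape[1]
    grid = pos.reshape(old_hw, old_hw, d)
    ys = np.linspace(0, old_hw - 1, new_hw)
    xs = np.linspace(0, old_hw - 1, new_hw)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, old_hw - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, old_hw - 1)
    wy = (ys - y0)[:, None, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :, None]   # horizontal interpolation weights
    out = (grid[y0][:, x0] * (1 - wy) * (1 - wx)
           + grid[y0][:, x1] * (1 - wy) * wx
           + grid[y1][:, x0] * wy * (1 - wx)
           + grid[y1][:, x1] * wy * wx)
    return out.reshape(new_hw * new_hw, d)

pos = np.random.default_rng(3).normal(size=(14 * 14, 768))  # 224/16 = 14
new = resize_pos_embed(pos, 14, 24)                         # 384/16 = 24
print(new.shape)  # (576, 768)
```

The corner positions are reproduced exactly, and interior positions are blended from their four grid neighbours.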
Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks

Columns: Public / Dataset(s) / Size describe the multimodal data; Contr. / ITM / BBP / (M/P)LM / Unimodal are the pretraining objectives; ST / CT / MT give the fusion architecture; V / CV&L / MV&L / L give the target modalities.

| Methods | Public | Dataset(s) | Size | Contr. | ITM | BBP | (M/P)LM | Unimodal | ST | CT | MT | V | CV&L | MV&L | L |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RoBERTa (Liu et al., 2019) | – | – | – | – | – | – | – | MLM | – | – | – | – | – | – | ✓ |
| BEiTv2 (Peng et al., 2022) | – | – | – | – | – | – | – | MIM | – | – | – | ✓ | – | – | – |
| X-VLM (Zeng et al., 2021; 2022) | ✓ | Combination | 5M | ✓ | ✓ | ✓ | MLM | – | – | ✓ | – | – | ✓ | ✓ | – |
| VLMo (Wang et al., 2021a) | ✓ | Combination | 5M | ✓ | ✓ | – | MLM | MLM+MIM | – | – | ✓ | – | ✓ | ✓ | – |
| CLIP (Radford et al., 2021) | ✓ | WebImageText | 400M | ✓ | – | – | – | – | – | – | – | ✓ | ✓ | – | – |
| ALIGN (Jia et al., 2021) | ✓ | JFT | 1.8B | ✓ | – | – | – | – | – | – | – | ✓ | ✓ | – | – |
| SimVLM (Wang et al., 2021c) | ✓ | JFT | 1.8B | – | – | – | PrefixLM | PrefixLM | ✓ | – | – | ∗ | – | ✓ | ✓ |
| CoCa (Yu et al., 2022) | ✓ | JFT | 4.8B | ✓ | – | – | LM | – | ✓ | – | – | ✓ | ✓ | ✓ | – |
| UNIMO-2 (Li et al., 2021c) | ✓ | Combination | 5M | – | ✓ | – | MLM | VCL | ✓ | – | – | ✓ | ✓ | ✓ | ✓ |
| OFA (Wang et al., 2022b) | ✓ | Combination | 15M | – | – | – | LM | LM | ✓ | – | – | ∗ | – | ✓ | ✓ |
| DaVinci (Diao et al., 2022) | ✓ | Combination | 46M | – | – | – | PrefixLM + PrefixIM | PrefixLM | ✓ | – | – | ✓ | – | ✓ | ✓ |
| FLAVA (Singh et al., 2021) | ✓ | Combination | 70M | ✓ | ✓ | – | MLM | MLM+MIM | ✓ | – | – | ✓ | ✓ | ✓ | ✓ |
| Uni-Perceiver-MoE (Zhu et al., 2022) | ✓ | Combination | 116M | – | ✓ | – | LM+MLM | LM+MLM+Classify. | ✓ | – | – | ✓ | ✓ | ✓ | ✓ |
| X-FM | ✓ | Combination | 5M | ✓ | ✓ | ✓ | MLM+MIM | MLM+MIM | – | ✓ | – | ✓ | ✓ | ✓ | ✓ |
| *Super-Large Models* | | | | | | | | | | | | | | | |
| Flamingo (Alayrac et al., 2022) | ✓ | Combination | 2.2B | – | – | – | LM | – | ✓ | – | – | – | ✓ | ✓ | – |
| BEiT-v3 (Wang et al., 2022d) | ✓ | Combination | 21M | – | – | – | MLM | MLM+MIM | – | – | ✓ | ∗ | ✓ | ✓ | – |
| PaLI (Chen et al., 2022) | ✓ | WebImageText | 41B | – | – | – | LM | – | ✓ | – | – | ✓ | ✓ | ✓ | ✓ |

Table 8: Comparison of recent foundation models in different modalities. Contr. indicates contrastive learning. ITM is short for image-text matching. BBP represents bounding box prediction. (M/P)LM means image-conditioned (masked/prefix) language modeling. V, CV&L, MV&L and L stand for vision tasks, cross-modal retrieval tasks, multi-modal fusion tasks and language tasks, respectively. ST, CT and MT are abbreviations for single Transformer, cross-attention Transformer and multiway Transformer. VCL stands for visual contrastive learning. ∗ means the modality is partially targeted (SimVLM and OFA include ImageNet). Giant models with over 1B parameters (e.g. BEiT-3) are in grey since they are not directly comparable with other models.
| config | value |
|---|---|
| optimizer | AdamW |
| learning rate | 1e-4 |
| weight decay | 0.01 |
| optimizer momentum | β1, β2 = 0.9, 0.999 |
| language batch size | 8192 |
| vision batch size | 3072 |
| vision-language batch size | 3072 |
| learning rate schedule | linear decay |
| warmup steps | 2500 |
| training steps | 200k |
| augmentation | RandomResizedCrop |
| image res | 224×224 |
| patch size | 16 |
| text length for MLM | 128 |
| text length for IMLM | 30 |

Table 9: Pre-training setting.
| config | value |
|---|---|
| optimizer | AdamW |
| learning rate | {1e-5, 2e-5, 5e-5} |
| weight decay | 0.0 |
| optimizer momentum | β1, β2 = 0.9, 0.999 |
| batch size | {16, 32, 64} |
| learning rate schedule | linear decay |
| warmup ratio | 0.0 |
| training epochs | {5, 10, 20} |

Table 10: Text classification: GLUE setting.
| config | value |
|---|---|
| optimizer | AdamW |
| learning rate | [2e-5, 4e-5] |
| weight decay | 0.01 |
| optimizer momentum | β1, β2 = 0.9, 0.999 |
| batch size | [256, 2048] |
| learning rate schedule | linear decay |
| warmup rate | 0.1 |
| training epochs | 100 |
| augmentation | RandomResizedCrop |
| image res | 224×224 |
| patch size | 16 |

Table 11: Image classification: Linear probing setting.
| config | value |
|---|---|
| optimizer | AdamW |
| learning rate | 4e-5 |
| minimal learning rate | 1e-7 |
| weight decay | 0.01 |
| optimizer momentum | β1, β2 = 0.9, 0.999 |
| batch size | 1024 |
| learning rate schedule | linear decay |
| warmup rate | 0.1 |
| training epochs | 100 |
| augmentation | RandomResizedCrop |
| image res | 224×224 |
| patch size | 16 |
| label smoothing | 0.1 |
| mixup prob. | 1.0 |
| cutmix prob. | 1.0 |

Table 12: ImageNet classification: Fine-tuning setting.
| config | value |
|---|---|
| optimizer | AdamW |
| learning rate | {1e-5, 2e-5, 5e-5} |
| weight decay | 0.01 |
| optimizer momentum | β1, β2 = 0.9, 0.999 |
| batch size | {64, 192, 512} |
| learning rate schedule | linear decay |
| warmup rate | 0.1 |
| training epochs | {10, 15, 20} |
| augmentation | RandomResizedCrop |
| image res | 384×384 |
| patch size | 16 |

Table 13: Vision-Language understanding: fine-tuning setting.
2dE4T4oBgHgl3EQfagyV/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

3tAzT4oBgHgl3EQfffwg/content/2301.01452v1.pdf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:01557fe8633d95cec7d23f6dadf1a9c88a3b92a89fc15ffb53ddb394a02dab0e
size 1396191

3tAzT4oBgHgl3EQfffwg/vector_store/index.faiss ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e7898d24ed5bac251f3f3f87c3e699f9b6526d3ef5a5e8599fbb8a4e01d05623
size 3014701

3tAzT4oBgHgl3EQfffwg/vector_store/index.pkl ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7d649a8d69ca3317a0696c3c3603b53f416a1c7f6f0c5f90e887e73ad5019791
size 112914

49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf ADDED
Binary file (89.5 kB). View file

49AzT4oBgHgl3EQfEPqD/content/tmp_files/2301.00990v1.pdf.txt ADDED
arXiv:2301.00990v1 [math.NA] 3 Jan 2023

The energy method for high-order invariants in shallow water wave equations

Qifeng Zhang(a), Tong Yan(a), Guang-hua Gao(b)

(a) Department of Mathematics, Zhejiang Sci-Tech University, Hangzhou, 310018, China
(b) Department of Mathematics, Nanjing University of Posts and Telecommunications, Nanjing, 210096, China

Abstract

Third order dispersive evolution equations are widely adopted to model one-dimensional long waves and have extensive applications in fluid mechanics, plasma physics and nonlinear optics. Among them are the KdV equation, the Camassa–Holm equation and the Degasperis–Procesi equation. They share many common features such as complete integrability, Lax pairs and bi-Hamiltonian structure. In this paper we revisit high-order invariants for these three types of shallow water wave equations by the energy method in combination with the skew-adjoint operator $(1-\partial_{xx})^{-1}$. Several applications to seeking high-order invariants of the Benjamin–Bona–Mahony equation, the regularized long wave equation and the Rosenau equation are also presented.

Keywords: Energy method; High-order invariant; Shallow water wave equation
1. Introduction

A family of third order dispersive evolution equations of the form
$$u_t - \alpha^2 u_{xxt} + \gamma u_{xxx} + c_0 u_x = (c_1 u^2 + c_2 u_x^2 + c_3 u u_{xx})_x, \qquad x \in \mathbb{R},\ t > 0 \tag{1.1}$$
frequently appears in the simulation of shallow water waves, see e.g. [1], where $\alpha$, $\gamma$ and $c_i$ ($i = 0, 1, 2, 3$) are real constants, and $u$ denotes a horizontal velocity field with the independent spatial variable $x$ and temporal variable $t$. A typical such equation (1.1) with $\alpha^2 = c_0 = c_2 = c_3 = 0$, $c_1 = 2$, $\gamma = -2$ is the KdV equation
$$u_t - 4uu_x - 2u_{xxx} = 0, \qquad x \in \mathbb{R},\ t > 0, \tag{1.2}$$
which describes the unidirectional propagation of waves at the free surface of shallow water under the influence of gravity. The first four invariants of (1.2) are, respectively (see e.g. [2]; although there is a minor typo in the coefficient of the fourth invariant there, it does not affect the reading of this classic review),
$$M_1 = \int_{\mathbb{R}} u\,dx, \quad M_2 = \int_{\mathbb{R}} u^2\,dx, \quad M_3 = \int_{\mathbb{R}} \Big(u_x^2 - \frac{2}{3}u^3\Big)dx, \quad M_4 = \int_{\mathbb{R}} \Big(u_{xx}^2 - \frac{10}{3}uu_x^2 + \frac{5}{9}u^4\Big)dx.$$
Taking $\alpha^2 = c_3 = 1$, $\gamma = c_0 = 0$, $c_1 = -\frac{3}{2}$, $c_2 = \frac{1}{2}$, we have another example called the Camassa–Holm equation [3]
$$u_t - u_{xxt} + 3uu_x = 2u_xu_{xx} + uu_{xxx}, \qquad x \in \mathbb{R},\ t > 0, \tag{1.3}$$
which models the unidirectional propagation of shallow water waves over a flat bottom. The first three invariants are listed as follows:
$$E_1 = \int_{\mathbb{R}} (u - u_{xx})\,dx, \quad E_2 = \frac{1}{2}\int_{\mathbb{R}} (u^2 + u_x^2)\,dx, \quad E_3 = \frac{1}{2}\int_{\mathbb{R}} u(u^2 + u_x^2)\,dx.$$
The third example, obtained by assigning $\alpha^2 = c_2 = c_3 = 1$, $\gamma = c_0 = 0$, $c_1 = -2$, is called the Degasperis–Procesi equation
$$u_t - u_{xxt} + 4uu_x = 3u_xu_{xx} + uu_{xxx}, \qquad x \in \mathbb{R},\ t > 0, \tag{1.4}$$
which can be regarded as a model for nonlinear shallow water dynamics [4]. The frequently discussed invariants are
$$H_1 = \int_{\mathbb{R}} (u - u_{xx})\,dx, \quad H_2 = \int_{\mathbb{R}} (u - u_{xx})\,v\,dx, \quad H_3 = \int_{\mathbb{R}} u^3\,dx,$$
where $4v - v_{xx} = u$.

∗E-mail address: [email protected] (Q. Zhang), [email protected] (Tong Yan), [email protected] (G. Gao)
Preprint submitted to Elsevier, January 4, 2023
Up to now, there have been thousands of papers focusing on theoretical and numerical studies of these three equations. It is worth mentioning that the invariant-preserving property is a key index of the success of numerical methods. However, high-order invariants are usually difficult to preserve numerically; Liu et al. also pointed out that "it appears a rather difficult task to preserve all three conservation laws" in [5]. In this work, higher-order invariants of these equations will be re-derived in view of the energy method, which may provide some insights for invariant-preserving numerical methods. Actually, the energy method, which originated from conservation laws in physics, was first proposed in 1928 by Courant, Friedrichs and Lewy [6]. Since then, it has been widely applied to the mathematical and numerical analysis of nonlinear evolution equations. We refer the reader to [7] instead of giving a long list of references to relevant works.

The rest of the paper is arranged as follows. In Section 2, combining the energy method and the skew-adjoint operator, we show the high-order invariants for the KdV equation, the Camassa–Holm equation and the Degasperis–Procesi equation, respectively. We then list several applications for seeking high-order invariants of other types of shallow water wave equations in Section 3.

2. Main results

In what follows, based on the energy method, we directly show that $M_i$ ($i = 1, 2, 3, 4$), $E_i$ ($i = 1, 2, 3$) and $H_i$ ($i = 1, 2, 3$) are invariants of (1.2), (1.3) and (1.4), respectively, subject to periodic boundary conditions.
2.1. Invariants of the KdV equation

Proof: (I) Multiplying (1.2) by $1$, $u$ and $u^2 + u_{xx}$, respectively, and integrating, we obtain $M_i$ ($i = 1, 2, 3$). In what follows, we show the fourth invariant $M_4$ of the KdV equation by the energy method.

Multiplying both sides of (1.2) by $2u_{xxxx} + \frac{10}{3}u_x^2 + \frac{20}{3}uu_{xx} + \frac{20}{9}u^3$ and integrating the result, we have
$$
\begin{aligned}
0 &= \int_{\mathbb{R}} \Big(2u_{xxxx} + \frac{10}{3}u_x^2 + \frac{20}{3}uu_{xx} + \frac{20}{9}u^3\Big)\, u_t\,dx
   - \int_{\mathbb{R}} \Big(2u_{xxxx} + \frac{10}{3}u_x^2 + \frac{20}{3}uu_{xx} + \frac{20}{9}u^3\Big)(4uu_x + 2u_{xxx})\,dx \\
  &= \int_{\mathbb{R}} \Big(2u_{xx}u_{xxt} - \frac{10}{3}(u_t u_x^2 + 2uu_xu_{xt}) + \frac{20}{9}u^3 u_t\Big)dx
   - \int_{\mathbb{R}} \Big(2u_{xxxx} + \frac{10}{3}u_x^2 + \frac{20}{3}uu_{xx} + \frac{20}{9}u^3\Big)(4uu_x + 2u_{xxx})\,dx \\
  &= \frac{d}{dt}M_4 - 8\int_{\mathbb{R}} uu_xu_{xxxx}\,dx - \frac{40}{3}\int_{\mathbb{R}} uu_x^3\,dx - \frac{80}{3}\int_{\mathbb{R}} u^2u_xu_{xx}\,dx - \frac{80}{9}\int_{\mathbb{R}} u^4u_x\,dx \\
  &\quad - 4\int_{\mathbb{R}} u_{xxx}u_{xxxx}\,dx - \frac{20}{3}\int_{\mathbb{R}} u_{xxx}u_x^2\,dx - \frac{40}{3}\int_{\mathbb{R}} uu_{xx}u_{xxx}\,dx - \frac{40}{9}\int_{\mathbb{R}} u^3u_{xxx}\,dx.
\end{aligned} \tag{2.1}
$$
It remains to check that the sum of all the integral terms in the above equation is zero. Calculating each term in (2.1) using integration by parts, we have
$$-8\int_{\mathbb{R}} uu_xu_{xxxx}\,dx = -20\int_{\mathbb{R}} u_xu_{xx}^2\,dx, \tag{2.2}$$
$$-\frac{80}{3}\int_{\mathbb{R}} u^2u_xu_{xx}\,dx = \frac{80}{3}\int_{\mathbb{R}} uu_x^3\,dx, \tag{2.3}$$
$$-\frac{80}{9}\int_{\mathbb{R}} u^4u_x\,dx = 0, \tag{2.4}$$
$$-4\int_{\mathbb{R}} u_{xxx}u_{xxxx}\,dx = 0, \tag{2.5}$$
$$-\frac{20}{3}\int_{\mathbb{R}} u_{xxx}u_x^2\,dx = \frac{40}{3}\int_{\mathbb{R}} u_xu_{xx}^2\,dx, \tag{2.6}$$
$$-\frac{40}{3}\int_{\mathbb{R}} uu_{xx}u_{xxx}\,dx = \frac{20}{3}\int_{\mathbb{R}} u_xu_{xx}^2\,dx, \tag{2.7}$$
$$-\frac{40}{9}\int_{\mathbb{R}} u^3u_{xxx}\,dx = -\frac{40}{3}\int_{\mathbb{R}} uu_x^3\,dx. \tag{2.8}$$
Substituting (2.2)–(2.8) into (2.1), we have $\frac{d}{dt}M_4 = 0$, which completes the proof.

Remark 1. Suppose the general form of the KdV equation is
$$u_t - auu_x - bu_{xxx} = 0,$$
with the corresponding high-order invariant
$$M(t) = \int_{\mathbb{R}} (u_{xx}^2 - Auu_x^2 + Bu^4)\,dx.$$
Using the same method as above, we can derive
$$5a = 3Ab, \qquad 12Bb = Aa,$$
which can be rewritten as
$$\frac{a}{b} = \frac{3A}{5} = \frac{12B}{A}.$$
Therefore, it follows that $A^2 = 20B$. For instance, when $a = -6$, $b = -1$, we have $A = 10$, $B = 5$, which reduces to the KdV equation
$$u_t + 6uu_x + u_{xxx} = 0,$$
with the fourth-order invariant
$$M(t) = \int_{\mathbb{R}} (u_{xx}^2 - 10uu_x^2 + 5u^4)\,dx.$$
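Remark 1 can be sanity-checked symbolically: the $a = -6$, $b = -1$ equation admits the standard one-soliton solution, and the two compatibility conditions indeed force $A^2 = 20B$. A sympy sketch (the soliton formula is the classical one-soliton of $u_t + 6uu_x + u_{xxx} = 0$, not taken from this paper):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
k = sp.symbols('k', positive=True)

# Classical one-soliton of u_t + 6 u u_x + u_xxx = 0.
u = 2 * k**2 / sp.cosh(k * (x - 4 * k**2 * t))**2
residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual.rewrite(sp.exp)))  # 0

# The conditions 5a = 3Ab and 12Bb = Aa force B = A**2 / 20.
a, b, A, B = sp.symbols('a b A B', nonzero=True)
sol = sp.solve([sp.Eq(5 * a, 3 * A * b), sp.Eq(12 * B * b, A * a)], [a, B], dict=True)[0]
print(sp.simplify(sol[B] - A**2 / 20))  # 0
```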
2.2. Invariants of the Camassa–Holm equation

Proof: Multiplying both sides of (1.3) by $1$ and $u$, respectively, and integrating the results implies $E_1$ and $E_2$ through integration by parts. Below, we prove $E_3$ by the energy method. Firstly, notice that (1.3) can be written with the skew-adjoint operator $(1 - \partial_{xx})^{-1}$ as
$$u_t + uu_x + \partial_x(1 - \partial_{xx})^{-1}\Big(u^2 + \frac{1}{2}u_x^2\Big) = 0.$$
Let $g = (1 - \partial_{xx})^{-1}\big(u^2 + \frac{1}{2}u_x^2\big)$. Then we see from the above equation that (1.3) is equivalent to
$$u_t + uu_x + g_x = 0, \tag{2.9}$$
$$g - g_{xx} = u^2 + \frac{1}{2}u_x^2. \tag{2.10}$$
Multiplying (2.9) by $3u^2 + u_x^2 - 2(uu_x)_x$ and integrating the result on both sides, we have
$$0 = \int_{\mathbb{R}} (u_t + uu_x + g_x)\big(3u^2 + u_x^2 - 2(uu_x)_x\big)dx
  = \int_{\mathbb{R}} u_t\big(3u^2 + u_x^2 - 2(uu_x)_x\big)dx + \int_{\mathbb{R}} (uu_x + g_x)\big(3u^2 + u_x^2 - 2(uu_x)_x\big)dx
  \triangleq A + B. \tag{2.11}$$
Calculating each term derives that
$$
\begin{aligned}
A &= \int_{\mathbb{R}} u_t\big(3u^2 + u_x^2 - 2(uu_x)_x\big)dx
   = \int_{\mathbb{R}} u_t(3u^2 + u_x^2)\,dx + \int_{\mathbb{R}} 2uu_x u_{xt}\,dx \\
  &= \int_{\mathbb{R}} 3u^2 u_t\,dx + \int_{\mathbb{R}} u_x^2 u_t\,dx + \int_{\mathbb{R}} u\,(u_x^2)_t\,dx
   = \int_{\mathbb{R}} (u^3)_t\,dx + \int_{\mathbb{R}} (u u_x^2)_t\,dx
   = \frac{d}{dt}\int_{\mathbb{R}} (u^3 + uu_x^2)\,dx
\end{aligned} \tag{2.12}
$$
and
$$
\begin{aligned}
B &= \int_{\mathbb{R}} (uu_x + g_x)\big(3u^2 + u_x^2 - 2(uu_x)_x\big)dx \\
  &= \int_{\mathbb{R}} u u_x^3\,dx + \int_{\mathbb{R}} g_x(3u^2 + u_x^2)\,dx - \int_{\mathbb{R}} g_x \cdot 2(uu_x)_x\,dx \\
  &= \int_{\mathbb{R}} u u_x^3\,dx + \int_{\mathbb{R}} g_x(3u^2 + u_x^2)\,dx + 2\int_{\mathbb{R}} g_{xx}\, uu_x\,dx \\
  &= \int_{\mathbb{R}} u u_x^3\,dx + \int_{\mathbb{R}} g_x(3u^2 + u_x^2)\,dx + 2\int_{\mathbb{R}} \Big(g - u^2 - \frac{1}{2}u_x^2\Big) uu_x\,dx \\
  &= \int_{\mathbb{R}} g_x(3u^2 + u_x^2)\,dx + 2\int_{\mathbb{R}} g\, uu_x\,dx
   = \int_{\mathbb{R}} g_x(3u^2 + u_x^2)\,dx - \int_{\mathbb{R}} g_x u^2\,dx \\
  &= \int_{\mathbb{R}} g_x(2u^2 + u_x^2)\,dx
   = 2\int_{\mathbb{R}} g_x(g - g_{xx})\,dx = 0.
\end{aligned} \tag{2.13}
$$
Substituting (2.12) and (2.13) into (2.11), we have
$$\frac{d}{dt}\int_{\mathbb{R}} (u^3 + uu_x^2)\,dx = 0,$$
which implies $E_3$.
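The cancellation in (2.13) relies only on $\int g_x(g - g_{xx})\,dx = 0$. On a periodic domain the operator $(1 - \partial_{xx})^{-1}$ is diagonal in Fourier space, so this identity is easy to check numerically. A numpy sketch with an arbitrary smooth periodic $u$ (the grid size and test field are arbitrary choices):

```python
import numpy as np

n = 256
L = 2 * np.pi
x = np.arange(n) * L / n
xi = np.fft.fftfreq(n, d=L / n) * 2 * np.pi     # angular wavenumbers

u = np.sin(x) + 0.3 * np.cos(3 * x)             # smooth periodic test field
ux = np.fft.ifft(1j * xi * np.fft.fft(u)).real

f = u**2 + 0.5 * ux**2                          # right-hand side of (2.10)
g = np.fft.ifft(np.fft.fft(f) / (1 + xi**2)).real   # g = (1 - d_xx)^(-1) f
gx = np.fft.ifft(1j * xi * np.fft.fft(g)).real
gxx = np.fft.ifft(-(xi**2) * np.fft.fft(g)).real

# g_x (g - g_xx) is a perfect derivative, so its period integral vanishes.
integral = np.sum(gx * (g - gxx)) * (L / n)
print(abs(integral) < 1e-10)  # True
```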
2.3. Invariants of the Degasperis–Procesi equation
|
469 |
+
Proof: Integrating on both sides of (1.4), it easily obtains H1. Then we show invariants H2 and H3 of (1.4),
|
470 |
+
respectively. Firstly let g = (1 − ∂xx)−1� 3
|
471 |
+
2u2�
|
472 |
+
, then (1.4) is equivalent to
|
473 |
+
|
474 |
+
ut + uux + gx = 0,
|
475 |
+
(2.14)
|
476 |
+
g − gxx = 3
|
477 |
+
2u2.
|
478 |
+
(2.15)
|
479 |
+
Multiplying by 2u − 6v on both sides of (2.14) and then integrating the result, we have
|
480 |
+
0 =
|
481 |
+
�
|
482 |
+
R
|
483 |
+
(ut + uux + gx) · (2u − 6v)dx
|
484 |
+
=
|
485 |
+
�
|
486 |
+
R
|
487 |
+
ut · (2u − 6v)dx +
|
488 |
+
�
|
489 |
+
R
|
490 |
+
uux · (2u − 6v)dx +
|
491 |
+
�
|
492 |
+
R
|
493 |
+
gx · (2u − 6v)dx
|
494 |
+
≜ C + D.
|
495 |
+
(2.16)
|
496 |
+
4
|
497 |
+
|
498 |
+
The each term in the above identity is estimated as
|
499 |
+
C =
|
500 |
+
�
|
501 |
+
R
|
502 |
+
ut · (2u − 6v)dx = 2
|
503 |
+
�
|
504 |
+
R
|
505 |
+
ut · udx − 6
|
506 |
+
�
|
507 |
+
R
|
508 |
+
ut · vdx = 2
|
509 |
+
�
|
510 |
+
R
|
511 |
+
ut · udx − 6
|
512 |
+
�
|
513 |
+
R
|
514 |
+
(4vt − vxxt) · vdx
|
515 |
+
= 2
|
516 |
+
�
|
517 |
+
R
|
518 |
+
ut · udx − 24
|
519 |
+
�
|
520 |
+
R
|
521 |
+
vt · vdx − 6
|
522 |
+
�
|
523 |
+
R
|
524 |
+
vxt · vxdx = d
|
525 |
+
dt
|
526 |
+
�
|
527 |
+
R
|
528 |
+
(u2 − 12v2 − 3v2
|
529 |
+
x)dx
|
530 |
+
= d
|
531 |
+
dt
|
532 |
+
�
|
533 |
+
R
|
534 |
+
�
|
535 |
+
u2 − 3(4v − vxx) · v
|
536 |
+
�
|
537 |
+
dx = d
|
538 |
+
dt
|
539 |
+
�
|
540 |
+
R
|
541 |
+
(u2 − 3uv)dx = d
|
542 |
+
dt
|
543 |
+
�
|
544 |
+
R
|
545 |
+
u · (u − 3v)dx
|
546 |
+
= d
|
547 |
+
dt
|
548 |
+
�
|
549 |
+
R
|
550 |
+
u · (v − vxx)dx = d
|
551 |
+
dt
|
552 |
+
�
|
553 |
+
R
|
554 |
+
(u − uxx) · vdx
|
555 |
+
(2.17)
|
556 |
+
and
|
557 |
+
D = \int_{\mathbb{R}} u u_x \cdot (2u - 6v)\,dx + \int_{\mathbb{R}} g_x \cdot (2u - 6v)\,dx
  = -6\int_{\mathbb{R}} u u_x \cdot v\,dx + \int_{\mathbb{R}} g_x \cdot (2u - 6v)\,dx
  = 3\int_{\mathbb{R}} u^2 \cdot v_x\,dx + \int_{\mathbb{R}} g_x \cdot (2u - 6v)\,dx
  = 2\int_{\mathbb{R}} (g - g_{xx}) \cdot v_x\,dx + \int_{\mathbb{R}} g_x \cdot (2u - 6v)\,dx
  = 2\int_{\mathbb{R}} g \cdot v_x\,dx - 2\int_{\mathbb{R}} g_{xx} \cdot v_x\,dx + \int_{\mathbb{R}} g_x \cdot (2v - 2v_{xx})\,dx
  = 2\int_{\mathbb{R}} g \cdot v_x\,dx + 2\int_{\mathbb{R}} g_x \cdot v\,dx - 2\int_{\mathbb{R}} g_{xx} \cdot v_x\,dx - 2\int_{\mathbb{R}} g_x \cdot v_{xx}\,dx
  = 2\int_{\mathbb{R}} (gv)_x\,dx - 2\int_{\mathbb{R}} (g_x \cdot v_x)_x\,dx = 0. \qquad (2.18)
Substituting (2.17) and (2.18) into (2.16), we have
\frac{d}{dt}\int_{\mathbb{R}} (u - u_{xx}) \cdot v\,dx = 0,
which implies the invariance of H2.
Finally, we show H3. Multiplying both sides of (2.14) by u^2, integrating the result, and noting (2.15), we obtain
0 = \int_{\mathbb{R}} (u_t + u u_x + g_x) \cdot u^2\,dx
  = \int_{\mathbb{R}} u_t \cdot u^2\,dx + \int_{\mathbb{R}} u^3 \cdot u_x\,dx + \int_{\mathbb{R}} g_x \cdot u^2\,dx
  = \int_{\mathbb{R}} \Bigl( \frac{1}{3} u^3 \Bigr)_t\,dx + \frac{2}{3}\int_{\mathbb{R}} g_x \cdot (g - g_{xx})\,dx
  = \frac{1}{3} \frac{d}{dt}\int_{\mathbb{R}} u^3\,dx,
which implies the invariance of H3.
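The invariance of H3 can also be observed numerically. Below is a minimal sketch (not from the paper): a Fourier pseudospectral discretization of the reformulated Degasperis-Procesi equation u_t + u u_x + g_x = 0 with g - g_{xx} = (3/2) u^2 on a 2π-periodic domain, time-stepped with classical RK4. Grid size, step size and initial data are illustrative assumptions.

```python
import numpy as np

# Pseudospectral check that H3 = \int u^3 dx is (discretely) conserved by the
# Degasperis-Procesi equation u_t + u u_x + g_x = 0, g - g_xx = (3/2) u^2,
# on a 2*pi-periodic domain. All numerical parameters are illustrative.
N, L = 128, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
inv_helm = 1.0 / (1.0 + k**2)          # (1 - dxx)^{-1} in Fourier space

def rhs(u):
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * u_hat))
    g_hat = inv_helm * np.fft.fft(1.5 * u**2)   # g = (1 - dxx)^{-1}(3/2 u^2)
    gx = np.real(np.fft.ifft(1j * k * g_hat))
    return -u * ux - gx                # u_t = -u u_x - g_x

def H3(u):
    return np.mean(u**3) * L           # periodic trapezoid rule

u = 0.2 + 0.1 * np.cos(x)              # smooth, small-amplitude initial data
dt, steps = 1e-3, 500
h0 = H3(u)
for _ in range(steps):                 # classical RK4
    s1 = rhs(u); s2 = rhs(u + 0.5 * dt * s1)
    s3 = rhs(u + 0.5 * dt * s2); s4 = rhs(u + dt * s3)
    u += dt / 6.0 * (s1 + 2 * s2 + 2 * s3 + s4)
drift = abs(H3(u) - h0)                # stays at discretization-error level
```

For smooth, well-resolved data the drift in H3 comes only from the time discretization and aliasing, both far below the amplitude of the solution.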
3. Applications to other periodic nonlinear dispersive waves

3.1. Benjamin-Bona-Mahony equation

Consider the Benjamin-Bona-Mahony equation [8] of the form
u_t - u_{xxt} + u_x + \varepsilon u u_x = 0, \quad x \in \mathbb{R}. \qquad (3.1)
It can be written as
u_t + \partial_x (1 - \partial_{xx})^{-1} \Bigl( u + \frac{\varepsilon}{2} u^2 \Bigr) = 0, \quad x \in \mathbb{R}.
Let g = (1 - \partial_{xx})^{-1} \bigl( u + \frac{\varepsilon}{2} u^2 \bigr); then the equation (3.1) turns out to be
u_t + g_x = 0, \qquad (3.2)
g - g_{xx} = u + \frac{\varepsilon}{2} u^2. \qquad (3.3)
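The manipulations in this section repeatedly rely on the fact that \partial_x (1 - \partial_{xx})^{-1} is skew-adjoint with respect to the L^2 inner product on periodic functions: its Fourier symbol ik/(1 + k^2) is purely imaginary and odd. A minimal numerical illustration (not from the paper; the grid, test functions and tolerance are arbitrary choices):

```python
import numpy as np

# Check skew-adjointness of A = d_x (1 - dxx)^{-1} on 2*pi-periodic functions:
# <A u, w> = -<u, A w>, since the symbol i*k/(1 + k^2) is purely imaginary.
N, L = 256, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
sym = 1j * k / (1.0 + k**2)            # Fourier symbol of A

def A(u):
    return np.real(np.fft.ifft(sym * np.fft.fft(u)))

def inner(u, w):
    return np.mean(u * w) * L          # periodic trapezoid rule for \int u w dx

u = np.sin(2 * x) + 0.3 * np.cos(5 * x)    # arbitrary smooth test functions
w = np.cos(3 * x) - 0.2 * np.sin(7 * x)
residual = abs(inner(A(u), w) + inner(u, A(w)))   # vanishes up to rounding
```

This discrete skew-adjointness is exactly what makes terms like \int g_x (g - g_{xx}) dx drop out of the derivations below.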
Multiplying both sides of (3.2) by u^2, integrating the result, and then using (3.3), we have
0 = \int_{\mathbb{R}} (u_t + g_x) \cdot u^2\,dx = \int_{\mathbb{R}} u_t \cdot u^2\,dx + \int_{\mathbb{R}} g_x \cdot u^2\,dx
  = \int_{\mathbb{R}} u_t \cdot u^2\,dx + \frac{2}{\varepsilon}\int_{\mathbb{R}} g_x \cdot (g - g_{xx} - u)\,dx
  = \int_{\mathbb{R}} u_t \cdot u^2\,dx - \frac{2}{\varepsilon}\int_{\mathbb{R}} g_x \cdot u\,dx
  = \int_{\mathbb{R}} u_t \cdot u^2\,dx + \frac{2}{\varepsilon}\int_{\mathbb{R}} u_t \cdot u\,dx
  = \frac{d}{dt}\int_{\mathbb{R}} \Bigl( \frac{1}{3} u^3 + \frac{1}{\varepsilon} u^2 \Bigr)\,dx,
which indicates that
\int_{\mathbb{R}} \Bigl( \frac{1}{3} u^3 + \frac{1}{\varepsilon} u^2 \Bigr)\,dx
is a third-order invariant for (3.1).
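This invariant lends itself to a quick numerical sanity check. The sketch below (not part of the paper) evolves the periodic BBM equation in the reformulated form u_t + g_x = 0 with a Fourier pseudospectral method and classical RK4, and monitors the relative drift of the invariant; grid size, step size, ε and initial data are illustrative assumptions.

```python
import numpy as np

# Pseudospectral check that Q = \int ( u^3/3 + u^2/eps ) dx is (discretely)
# conserved by the periodic BBM equation u_t + g_x = 0, g - g_xx = u + eps/2 u^2.
N, L, eps = 256, 2 * np.pi, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
sym = 1j * k / (1.0 + k**2)            # d_x (1 - dxx)^{-1} in Fourier space

def rhs(u):
    # u_t = -d_x (1 - dxx)^{-1} ( u + eps/2 u^2 )
    return -np.real(np.fft.ifft(sym * np.fft.fft(u + 0.5 * eps * u**2)))

def Q(u):
    return np.mean(u**3 / 3.0 + u**2 / eps) * L   # periodic trapezoid rule

u = 0.5 + 0.3 * np.cos(x)              # smooth periodic initial data
dt, steps = 1e-3, 500
q0 = Q(u)
for _ in range(steps):                 # classical RK4
    s1 = rhs(u); s2 = rhs(u + 0.5 * dt * s1)
    s3 = rhs(u + 0.5 * dt * s2); s4 = rhs(u + dt * s3)
    u += dt / 6.0 * (s1 + 2 * s2 + 2 * s3 + s4)
rel_drift = abs(Q(u) - q0) / abs(q0)
```

The drift is limited only by the time integrator and aliasing, not by the spatial operator, mirroring the exact cancellation in the derivation above.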
3.2. Regularized long wave equation

Consider the regularized long wave equation [9] of the form
u_t - \mu u_{xxt} + u_x + u^p u_x = 0, \qquad (3.4)
where \mu > 0 is a constant. When p = 2, it is called the modified regularized long wave equation; when p \geqslant 3, it is called the generalized regularized long wave equation. Similar to the foregoing argument, (3.4) can be written in the equivalent form
u_t + g_x = 0, \qquad (3.5)
g - \mu g_{xx} = u + \frac{1}{p+1} u^{p+1}. \qquad (3.6)
Multiplying both sides of (3.5) by u^{p+1}, integrating the result, and then using (3.6), we have
0 = \int_{\mathbb{R}} (u_t + g_x) \cdot u^{p+1}\,dx = \int_{\mathbb{R}} u_t \cdot u^{p+1}\,dx + \int_{\mathbb{R}} g_x \cdot u^{p+1}\,dx
  = \int_{\mathbb{R}} u_t \cdot u^{p+1}\,dx + (p+1)\int_{\mathbb{R}} g_x \cdot (g - \mu g_{xx} - u)\,dx
  = \int_{\mathbb{R}} u_t \cdot u^{p+1}\,dx - (p+1)\int_{\mathbb{R}} g_x \cdot u\,dx
  = \int_{\mathbb{R}} u_t \cdot u^{p+1}\,dx + (p+1)\int_{\mathbb{R}} u_t \cdot u\,dx
  = \frac{d}{dt}\int_{\mathbb{R}} \Bigl( \frac{1}{p+2} u^{p+2} + \frac{p+1}{2} u^2 \Bigr)\,dx,
which indicates that
\int_{\mathbb{R}} \Bigl( \frac{1}{p+2} u^{p+2} + \frac{p+1}{2} u^2 \Bigr)\,dx
is a high-order invariant for (3.4). This corrects the invariant I3 in Example 4 of [10] (p. 492).
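The same pseudospectral check applies here. The sketch below (not from the paper) takes the illustrative choices p = 2 and µ = 1 and monitors the relative drift of the invariant just derived; all numerical parameters are assumptions for the demonstration.

```python
import numpy as np

# Pseudospectral check of the high-order invariant of the regularized long
# wave equation (3.4) with p = 2, mu = 1 (illustrative values):
# Q = \int ( u^{p+2}/(p+2) + (p+1)/2 * u^2 ) dx.
N, L, p, mu = 256, 2 * np.pi, 2, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
sym = 1j * k / (1.0 + mu * k**2)       # d_x (1 - mu*dxx)^{-1} in Fourier space

def rhs(u):
    # u_t = -d_x (1 - mu*dxx)^{-1} ( u + u^{p+1}/(p+1) )
    return -np.real(np.fft.ifft(sym * np.fft.fft(u + u**(p + 1) / (p + 1))))

def Q(u):
    return np.mean(u**(p + 2) / (p + 2) + 0.5 * (p + 1) * u**2) * L

u = 0.3 + 0.1 * np.cos(x)              # smooth periodic initial data
dt, steps = 1e-3, 500
q0 = Q(u)
for _ in range(steps):                 # classical RK4
    s1 = rhs(u); s2 = rhs(u + 0.5 * dt * s1)
    s3 = rhs(u + 0.5 * dt * s2); s4 = rhs(u + dt * s3)
    u += dt / 6.0 * (s1 + 2 * s2 + 2 * s3 + s4)
rel_drift = abs(Q(u) - q0) / abs(q0)
```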
3.3. Rosenau equation

Consider the Rosenau equation [11]
u_t + u_{xxxxt} + u_x + u u_x = 0, \qquad (3.7)
which is equivalent to
u_t + g_x = 0, \qquad (3.8)
g + g_{xxxx} = u + \frac{1}{2} u^2. \qquad (3.9)
Multiplying both sides of (3.8) by u^2 and noticing (3.9), similarly to the argument above, we obtain a third-order invariant for (3.7) of the form
\int_{\mathbb{R}} \Bigl( \frac{1}{3} u^3 + u^2 \Bigr)\,dx.
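The only change from the previous checks is the elliptic operator: (1 + \partial_{xxxx})^{-1} instead of a Helmholtz inverse, i.e. the Fourier symbol of \partial_x (1 + \partial_{xxxx})^{-1} is ik/(1 + k^4). A brief sketch (not from the paper; all numerical parameters are illustrative):

```python
import numpy as np

# Pseudospectral check that \int ( u^3/3 + u^2 ) dx is (discretely) conserved
# by the periodic Rosenau equation u_t + g_x = 0, g + g_xxxx = u + u^2/2.
N, L = 256, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
sym = 1j * k / (1.0 + k**4)            # d_x (1 + dxxxx)^{-1} in Fourier space

def rhs(u):
    # u_t = -d_x (1 + dxxxx)^{-1} ( u + u^2/2 )
    return -np.real(np.fft.ifft(sym * np.fft.fft(u + 0.5 * u**2)))

def Q(u):
    return np.mean(u**3 / 3.0 + u**2) * L   # periodic trapezoid rule

u = 0.4 + 0.2 * np.cos(x)              # smooth periodic initial data
dt, steps = 1e-3, 500
q0 = Q(u)
for _ in range(steps):                 # classical RK4
    s1 = rhs(u); s2 = rhs(u + 0.5 * dt * s1)
    s3 = rhs(u + 0.5 * dt * s2); s4 = rhs(u + dt * s3)
    u += dt / 6.0 * (s1 + 2 * s2 + 2 * s3 + s4)
rel_drift = abs(Q(u) - q0) / abs(q0)
```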
Acknowledgement

We thank Prof. Zhi-zhong Sun for many useful discussions. This work is dedicated to Prof. Zhi-zhong Sun on the occasion of his 60th birthday. The work is supported by the Natural Science Foundation of Zhejiang Province (Grant No. LZ23A010007).
References

[1] J. Escher, Y. Liu, Z. Yin, Global weak solutions and blow-up structure for the Degasperis-Procesi equation. J. Funct. Anal., 241 (2006) 457–485.
[2] T. Tao, Low-regularity global solutions to nonlinear dispersive equations. Surveys in analysis and operator theory (Canberra, 2001), 19–48, Proc. Centre Math. Appl. Austral. Nat. Univ., 40, Austral. Nat. Univ., Canberra, (2002).
[3] R. Camassa, D. D. Holm, An integrable shallow water equation with peaked solitons. Phys. Rev. Lett., 71 (1993) 1661–1664.
[4] A. Degasperis, M. Procesi, Asymptotic integrability, in: A. Degasperis, G. Gaeta (Eds.), Symmetry and Perturbation Theory, World Scientific, Singapore, (1999) 23–37.
[5] H. Liu, Y. Xing, An invariant preserving discontinuous Galerkin method for the Camassa-Holm equation. SIAM J. Sci. Comput., 38 (2016) A1919–A1934.
[6] R. Courant, K.O. Friedrichs, H. Lewy, Über die partiellen Differenzengleichungen der mathematischen Physik, Math. Ann., 100 (1928) 32–74.
[7] Z. Sun, Finite Difference Methods for Nonlinear Evolution Equations, Science Press, Beijing, (2018).
[8] L.A. Medeiros, G.P. Menzala, Existence and uniqueness for periodic solutions of the Benjamin-Bona-Mahony equation. SIAM J. Math. Anal., 8(5) (1977) 792–799.
[9] C.E. Seyler, D.L. Fenstermacher, A symmetric regularized-long-wave equation. The Physics of Fluids, 27(4) (1984) 4–7.
[10] A. Ghiloufi, K. Omrani, New conservative difference schemes with fourth-order accuracy for some model equation for nonlinear dispersive waves. Numer. Methods Partial Differential Equations, 34 (2018) 451–500.
[11] M.A. Park, On the Rosenau equation. Math. Appl. Comput., 9 (1990) 145–152.
49AzT4oBgHgl3EQfEPqD/content/tmp_files/load_file.txt
ADDED
@@ -0,0 +1,340 @@
arXiv:2301.00990v1 [math.NA] 3 Jan 2023

The energy method for high-order invariants in shallow water wave equations

Qifeng Zhang^a, Tong Yan^a, Guang-hua Gao^b

^a Department of Mathematics, Zhejiang Sci-Tech University, Hangzhou, 310018, China
^b Department of Mathematics, Nanjing University of Posts and Telecommunications, Nanjing, 210096, China

E-mail addresses: zhangqifeng0504@gmail.com (Q. Zhang), tyan0320@mails.zstu.edu.cn (T. Yan), gaogh@njupt.edu.cn (G. Gao). Preprint submitted to Elsevier, January 4, 2023.

Abstract. Third order dispersive evolution equations are widely adopted to model one-dimensional long waves and have extensive applications in fluid mechanics, plasma physics and nonlinear optics. Among them are the KdV equation, the Camassa–Holm equation and the Degasperis–Procesi equation. They share many common features such as complete integrability, Lax pairs and bi-Hamiltonian structure. In this paper we revisit high-order invariants for these three types of shallow water wave equations by the energy method in combination with a skew-adjoint operator (1 - \partial_{xx})^{-1}. Several applications to seek high-order invariants of the Benjamin-Bona-Mahony equation, the regularized long wave equation and the Rosenau equation are also presented.

Keywords: Energy method; High-order invariant; Shallow water wave equation

1. Introduction

A family of third order dispersive evolution equations of the form
u_t - \alpha^2 u_{xxt} + \gamma u_{xxx} + c_0 u_x = (c_1 u^2 + c_2 u_x^2 + c_3 u u_{xx})_x, \quad x \in \mathbb{R},\ t > 0, \qquad (1.1)
frequently appears in the simulation of shallow water waves, see e.g. [1], where \alpha, \gamma and c_i (i = 0, 1, 2, 3) are real constants and u denotes a horizontal velocity field with spatial variable x and temporal variable t. A typical such equation, (1.1) with \alpha^2 = c_0 = c_2 = c_3 = 0, c_1 = 2, \gamma = -2, is the KdV equation
u_t - 4u u_x - 2u_{xxx} = 0, \quad x \in \mathbb{R},\ t > 0, \qquad (1.2)
which describes the unidirectional propagation of waves at the free surface of shallow water under the influence of gravity. The first four invariants of (1.2) are respectively (see e.g. [2]; although there is a minor typo in the coefficient of the fourth invariant there, it does not affect the reading of this classic review)
M_1 = \int_{\mathbb{R}} u\,dx, \quad M_2 = \int_{\mathbb{R}} u^2\,dx, \quad M_3 = \int_{\mathbb{R}} \Bigl( u_x^2 - \frac{2}{3}u^3 \Bigr)\,dx, \quad M_4 = \int_{\mathbb{R}} \Bigl( u_{xx}^2 - \frac{10}{3}u u_x^2 + \frac{5}{9}u^4 \Bigr)\,dx.
Taking \alpha^2 = c_3 = 1, \gamma = c_0 = 0, c_1 = -\frac{3}{2}, c_2 = \frac{1}{2}, we have another example called the Camassa–Holm equation [3]
u_t - u_{xxt} + 3u u_x = 2u_x u_{xx} + u u_{xxx}, \quad x \in \mathbb{R},\ t > 0, \qquad (1.3)
which models the unidirectional propagation of shallow water waves over a flat bottom. Its first three invariants are
E_1 = \int_{\mathbb{R}} (u - u_{xx})\,dx, \quad E_2 = \frac{1}{2}\int_{\mathbb{R}} (u^2 + u_x^2)\,dx, \quad E_3 = \frac{1}{2}\int_{\mathbb{R}} u (u^2 + u_x^2)\,dx.
The third example, obtained by assigning \alpha^2 = c_2 = c_3 = 1, \gamma = c_0 = 0, c_1 = -2, is the Degasperis–Procesi equation
u_t - u_{xxt} + 4u u_x = 3u_x u_{xx} + u u_{xxx}, \quad x \in \mathbb{R},\ t > 0, \qquad (1.4)
which can be regarded as a model for nonlinear shallow water dynamics [4]. The frequently discussed invariants are
H_1 = \int_{\mathbb{R}} (u - u_{xx})\,dx, \quad H_2 = \int_{\mathbb{R}} (u - u_{xx}) v\,dx, \quad H_3 = \int_{\mathbb{R}} u^3\,dx, \quad \text{where } 4v - v_{xx} = u.

Up to now, there have been thousands of papers focusing on theoretical and numerical studies of these three equations. It is worth mentioning that the invariant-preserving property is a key index of success for numerical methods. However, high-order invariants are usually difficult to preserve numerically; Liu et al. also pointed out that "it appears a rather difficult task to preserve all three conservation laws" in [5]. In this work, higher-order invariants of these equations are re-derived by the energy method, which may provide some ideas for invariant-preserving numerical methods. Actually, the energy method, which originated from conservation laws in physics, was first proposed in 1928 by Courant, Friedrichs and Lewy [6]. Since then, it has been widely applied to the mathematical and numerical analysis of nonlinear evolution equations. We refer the reader to [7] instead of a long list of references to relevant works.

The rest of the paper is arranged as follows. In Section 2, combining the energy method and a skew-adjoint operator, we show the high-order invariants for the KdV equation, the Camassa–Holm equation and the Degasperis–Procesi equation, respectively. Then we list several applications for seeking high-order invariants of other types of shallow water wave equations in Section 3.

2. Main results

In what follows, based on the energy method we directly show that M_i (i = 1, 2, 3, 4), E_i (i = 1, 2, 3) and H_i (i = 1, 2, 3) are invariants of (1.2), (1.3) and (1.4), respectively, subject to periodic boundary conditions.

2.1. Invariants of the KdV equation

Proof: (I) Multiplying (1.2) by 1, u and (u^2 + u_{xx}), respectively, and integrating, we obtain M_i (i = 1, 2, 3). In what follows, we show the fourth invariant M_4 of the KdV equation by the energy method. Multiplying both sides of (1.2) by 2u_{xxxx} + \frac{10}{3}u_x^2 + \frac{20}{3}u u_{xx} + \frac{20}{9}u^3 and integrating the result, we have
0 = \int_{\mathbb{R}} \Bigl( 2u_{xxxx} + \frac{10}{3}u_x^2 + \frac{20}{3}u u_{xx} + \frac{20}{9}u^3 \Bigr) u_t\,dx - \int_{\mathbb{R}} \Bigl( 2u_{xxxx} + \frac{10}{3}u_x^2 + \frac{20}{3}u u_{xx} + \frac{20}{9}u^3 \Bigr)(4u u_x + 2u_{xxx})\,dx
= \int_{\mathbb{R}} \Bigl( 2u_{xx} u_{xxt} - \frac{10}{3}(u_t u_x^2 + 2u u_x u_{xt}) + \frac{20}{9}u^3 u_t \Bigr)\,dx - \int_{\mathbb{R}} \Bigl( 2u_{xxxx} + \frac{10}{3}u_x^2 + \frac{20}{3}u u_{xx} + \frac{20}{9}u^3 \Bigr)(4u u_x + 2u_{xxx})\,dx
= \frac{d}{dt}M_4 - 8\int_{\mathbb{R}} u u_x u_{xxxx}\,dx - \frac{40}{3}\int_{\mathbb{R}} u u_x^3\,dx - \frac{80}{3}\int_{\mathbb{R}} u^2 u_x u_{xx}\,dx - \frac{80}{9}\int_{\mathbb{R}} u^4 u_x\,dx - 4\int_{\mathbb{R}} u_{xxx} u_{xxxx}\,dx - \frac{20}{3}\int_{\mathbb{R}} u_{xxx} u_x^2\,dx - \frac{40}{3}\int_{\mathbb{R}} u u_{xx} u_{xxx}\,dx - \frac{40}{9}\int_{\mathbb{R}} u^3 u_{xxx}\,dx
|
147 |
+
page_content='R ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
148 |
+
page_content='u3uxxxdx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
149 |
+
page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
150 |
+
page_content='1) It remains to check that the sum of all the integral terms in the above equation is zero.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
151 |
+
page_content=' Calculating each term in (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
152 |
+
page_content='1) using the integration by parts, we have − 8 � R uuxuxxxxdx = −20 � R uxu2 xxdx, (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
153 |
+
page_content='2) − 80 3 � R u2uxuxxdx = 80 3 � R uu3 xdx, (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
154 |
+
page_content='3) − 80 9 � R u4uxdx = 0, (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
155 |
+
page_content='4) − 4 � R uxxxuxxxxdx = 0, (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
156 |
+
page_content='5) 2 − 20 3 � R uxxxu2 xdx = 40 3 � R uxu2 xxdx, (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
157 |
+
page_content='6) − 40 3 � R uuxxuxxxdx = 20 3 � R uxu2 xxdx, (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
158 |
+
page_content='7) − 40 9 � R u3uxxxdx = −40 3 � R uu3 xdx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
159 |
+
page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
160 |
+
page_content='8) Substituting (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
161 |
+
page_content='2)–(2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
162 |
+
page_content='8) into (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
163 |
+
page_content='1), we have d dt M4 = 0, which completes the proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
164 |
+
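The cancellations in (2.2)–(2.8) can be checked numerically. Below is a minimal sketch using spectral differentiation on a smooth periodic test function (the test function, grid size, and tolerance are our own choices, not from the paper); by periodicity all boundary terms in the integration by parts vanish, so each identity should hold to near machine precision.

```python
import numpy as np

# Numerical sanity check of the integration-by-parts identities (2.2)-(2.8).
# u(x) is an arbitrary smooth 2*pi-periodic test function (our choice).
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers on [0, 2*pi)

def d(u, order):
    """order-th spectral derivative of a periodic sample u."""
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(u)))

def integ(f):
    """Trapezoidal rule; exact here for smooth periodic integrands."""
    return f.sum() * (2.0 * np.pi / N)

u = np.sin(x) + 0.5 * np.cos(2.0 * x)
ux, uxx, uxxx, uxxxx = d(u, 1), d(u, 2), d(u, 3), d(u, 4)

identities = {
    "(2.2)": (-8 * u * ux * uxxxx,        -20 * ux * uxx**2),
    "(2.3)": (-80 / 3 * u**2 * ux * uxx,   80 / 3 * u * ux**3),
    "(2.4)": (-80 / 9 * u**4 * ux,         np.zeros_like(x)),
    "(2.5)": (-4 * uxxx * uxxxx,           np.zeros_like(x)),
    "(2.6)": (-20 / 3 * uxxx * ux**2,      40 / 3 * ux * uxx**2),
    "(2.7)": (-40 / 3 * u * uxx * uxxx,    20 / 3 * ux * uxx**2),
    "(2.8)": (-40 / 9 * u**3 * uxxx,      -40 / 3 * u * ux**3),
}
for tag, (lhs, rhs) in identities.items():
    assert abs(integ(lhs) - integ(rhs)) < 1e-8, tag
```

Since the test function is band-limited, the spectral derivatives and the trapezoidal quadrature are exact up to round-off, making this a stringent check of each identity.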
Remark 1. Suppose the general form of the KdV equation is \(u_t - a u u_x - b u_{xxx} = 0\), with the corresponding high-order invariant
\[
M(t) = \int_{\mathbb{R}} \big(u_{xx}^2 - A u u_x^2 + B u^4\big)\,dx.
\]
Using the same method as above, we can derive
\[
5a = 3Ab, \qquad 12Bb = Aa,
\]
which can be rewritten as
\[
\frac{a}{b} = \frac{3A}{5} = \frac{12B}{A}.
\]
It therefore follows that \(A^2 = 20B\). For instance, when \(a = -6\) and \(b = -1\), we have \(A = 10\), \(B = 5\), which reduces to the KdV equation \(u_t + 6 u u_x + u_{xxx} = 0\) with the fourth-order invariant
\[
M(t) = \int_{\mathbb{R}} \big(u_{xx}^2 - 10 u u_x^2 + 5 u^4\big)\,dx.
\]
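As a quick check of the compatibility conditions in Remark 1, the two linear equations can be solved for \(A\) and \(B\) at the sample coefficients \(a=-6\), \(b=-1\) given in the text:

```python
import sympy as sp

# Solve the compatibility conditions 5a = 3Ab and 12Bb = Aa from Remark 1
# for the instance a = -6, b = -1 given in the text.
a, b = -6, -1
A, B = sp.symbols('A B')
sol = sp.solve([sp.Eq(5 * a, 3 * A * b), sp.Eq(12 * B * b, A * a)],
               [A, B], dict=True)[0]
assert (sol[A], sol[B]) == (10, 5)
assert sol[A]**2 == 20 * sol[B]  # the derived relation A^2 = 20B
```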
2.2. Invariants of the Camassa–Holm equation

Proof: Multiplying both sides of (1.3) by 1 and by \(u\), respectively, and then integrating the results, we obtain E1 and E2 through integration by parts. Below, we prove E3 by the energy method. Firstly, notice that (1.3) can be written with the skew-adjoint operator \(\partial_x(1-\partial_{xx})^{-1}\) as
\[
u_t + u u_x + \partial_x (1-\partial_{xx})^{-1}\Big(u^2 + \frac{1}{2}u_x^2\Big) = 0.
\]
Let \(g = (1-\partial_{xx})^{-1}\big(u^2 + \frac{1}{2}u_x^2\big)\). Then we see from the above equation that (1.3) is equivalent to
\[
\left\{
\begin{aligned}
& u_t + u u_x + g_x = 0, && (2.9)\\
& g - g_{xx} = u^2 + \tfrac{1}{2}u_x^2. && (2.10)
\end{aligned}
\right.
\]
Multiplying (2.9) by \(3u^2 + u_x^2 - 2(u u_x)_x\) and integrating on both sides, we have
\[
0 = \int_{\mathbb{R}} (u_t + u u_x + g_x)\,\big(3u^2 + u_x^2 - 2(u u_x)_x\big)\,dx
= \int_{\mathbb{R}} u_t\,\big(3u^2 + u_x^2 - 2(u u_x)_x\big)\,dx
+ \int_{\mathbb{R}} (u u_x + g_x)\,\big(3u^2 + u_x^2 - 2(u u_x)_x\big)\,dx
\triangleq A + B. \quad (2.11)
\]
Calculating each term gives
\[
\begin{aligned}
A &= \int_{\mathbb{R}} u_t\,\big(3u^2 + u_x^2 - 2(u u_x)_x\big)\,dx
  = \int_{\mathbb{R}} u_t\,(3u^2 + u_x^2)\,dx + \int_{\mathbb{R}} 2 u u_x\, u_{xt}\,dx \\
  &= \int_{\mathbb{R}} 3u^2 u_t\,dx + \int_{\mathbb{R}} u_x^2\, u_t\,dx + \int_{\mathbb{R}} u\,(u_x^2)_t\,dx
  = \int_{\mathbb{R}} (u^3)_t\,dx + \int_{\mathbb{R}} (u\,u_x^2)_t\,dx
  = \frac{d}{dt}\int_{\mathbb{R}} (u^3 + u u_x^2)\,dx \quad (2.12)
\end{aligned}
\]
and
\[
\begin{aligned}
B &= \int_{\mathbb{R}} (u u_x + g_x)\,\big(3u^2 + u_x^2 - 2(u u_x)_x\big)\,dx \\
  &= \int_{\mathbb{R}} u\,u_x^3\,dx + \int_{\mathbb{R}} g_x\,(3u^2 + u_x^2)\,dx - 2\int_{\mathbb{R}} g_x\,(u u_x)_x\,dx \\
  &= \int_{\mathbb{R}} u\,u_x^3\,dx + \int_{\mathbb{R}} g_x\,(3u^2 + u_x^2)\,dx + 2\int_{\mathbb{R}} g_{xx}\,u u_x\,dx \\
  &= \int_{\mathbb{R}} u\,u_x^3\,dx + \int_{\mathbb{R}} g_x\,(3u^2 + u_x^2)\,dx + 2\int_{\mathbb{R}} \Big(g - u^2 - \frac12 u_x^2\Big)\,u u_x\,dx \\
  &= \int_{\mathbb{R}} g_x\,(3u^2 + u_x^2)\,dx + 2\int_{\mathbb{R}} g\,u u_x\,dx
   = \int_{\mathbb{R}} g_x\,(3u^2 + u_x^2)\,dx - \int_{\mathbb{R}} g_x\,u^2\,dx \\
  &= \int_{\mathbb{R}} g_x\,(2u^2 + u_x^2)\,dx
   = 2\int_{\mathbb{R}} g_x\,(g - g_{xx})\,dx = 0. \quad (2.13)
\end{aligned}
\]
Substituting (2.12) and (2.13) into (2.11), we have
\[
\frac{d}{dt}\int_{\mathbb{R}} (u^3 + u u_x^2)\,dx = 0,
\]
which implies E3.
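The key cancellation in (2.13) — that \(\int_{\mathbb{R}} g_x\,(2u^2+u_x^2)\,dx = 2\int_{\mathbb{R}} g_x(g-g_{xx})\,dx = 0\) — can be verified numerically on a periodic domain. The sketch below inverts \(1-\partial_{xx}\) in Fourier space; the test profile is our own choice.

```python
import numpy as np

# Check of the cancellation in (2.13): with g = (1 - d_xx)^{-1}(u^2 + u_x^2/2),
# the integral of g_x * (2u^2 + u_x^2) over a period vanishes, since
# 2u^2 + u_x^2 = 2(g - g_xx) and g_x*(g - g_xx) integrates to zero.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)

u = np.sin(x) + 0.3 * np.sin(3.0 * x)
ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
rhs = u**2 + 0.5 * ux**2
g_hat = np.fft.fft(rhs) / (1.0 + k**2)   # invert (1 - d_xx) in Fourier space
gx = np.real(np.fft.ifft(1j * k * g_hat))
val = (gx * (2.0 * u**2 + ux**2)).sum() * (2.0 * np.pi / N)
assert abs(val) < 1e-8
```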
2.3. Invariants of the Degasperis–Procesi equation

Proof: Integrating both sides of (1.4), one easily obtains H1. We then show the invariants H2 and H3 of (1.4), respectively. Firstly, let \(g = (1-\partial_{xx})^{-1}\big(\frac{3}{2}u^2\big)\); then (1.4) is equivalent to
\[
\left\{
\begin{aligned}
& u_t + u u_x + g_x = 0, && (2.14)\\
& g - g_{xx} = \tfrac{3}{2}u^2. && (2.15)
\end{aligned}
\right.
\]
Multiplying both sides of (2.14) by \(2u - 6v\) and then integrating the result, we have
\[
0 = \int_{\mathbb{R}} (u_t + u u_x + g_x)\,(2u - 6v)\,dx
= \int_{\mathbb{R}} u_t\,(2u - 6v)\,dx + \int_{\mathbb{R}} u u_x\,(2u - 6v)\,dx + \int_{\mathbb{R}} g_x\,(2u - 6v)\,dx
\triangleq C + D. \quad (2.16)
\]
Each term in the above identity is estimated as
\[
\begin{aligned}
C &= \int_{\mathbb{R}} u_t\,(2u - 6v)\,dx
  = 2\int_{\mathbb{R}} u_t\,u\,dx - 6\int_{\mathbb{R}} u_t\,v\,dx
  = 2\int_{\mathbb{R}} u_t\,u\,dx - 6\int_{\mathbb{R}} (4v_t - v_{xxt})\,v\,dx \\
  &= 2\int_{\mathbb{R}} u_t\,u\,dx - 24\int_{\mathbb{R}} v_t\,v\,dx - 6\int_{\mathbb{R}} v_{xt}\,v_x\,dx
  = \frac{d}{dt}\int_{\mathbb{R}} (u^2 - 12v^2 - 3v_x^2)\,dx \\
  &= \frac{d}{dt}\int_{\mathbb{R}} \big(u^2 - 3(4v - v_{xx})\,v\big)\,dx
  = \frac{d}{dt}\int_{\mathbb{R}} (u^2 - 3uv)\,dx
  = \frac{d}{dt}\int_{\mathbb{R}} u\,(u - 3v)\,dx \\
  &= \frac{d}{dt}\int_{\mathbb{R}} u\,(v - v_{xx})\,dx
  = \frac{d}{dt}\int_{\mathbb{R}} (u - u_{xx})\,v\,dx \quad (2.17)
\end{aligned}
\]
and
\[
\begin{aligned}
D &= \int_{\mathbb{R}} u u_x\,(2u - 6v)\,dx + \int_{\mathbb{R}} g_x\,(2u - 6v)\,dx
  = -6\int_{\mathbb{R}} u u_x\,v\,dx + \int_{\mathbb{R}} g_x\,(2u - 6v)\,dx \\
  &= 3\int_{\mathbb{R}} u^2\,v_x\,dx + \int_{\mathbb{R}} g_x\,(2u - 6v)\,dx
  = 2\int_{\mathbb{R}} (g - g_{xx})\,v_x\,dx + \int_{\mathbb{R}} g_x\,(2u - 6v)\,dx \\
  &= 2\int_{\mathbb{R}} g\,v_x\,dx - 2\int_{\mathbb{R}} g_{xx}\,v_x\,dx + \int_{\mathbb{R}} g_x\,(2v - 2v_{xx})\,dx \\
  &= 2\int_{\mathbb{R}} g\,v_x\,dx + 2\int_{\mathbb{R}} g_x\,v\,dx - 2\int_{\mathbb{R}} g_{xx}\,v_x\,dx - 2\int_{\mathbb{R}} g_x\,v_{xx}\,dx \\
  &= 2\int_{\mathbb{R}} (g v)_x\,dx - 2\int_{\mathbb{R}} (g_x\,v_x)_x\,dx = 0. \quad (2.18)
\end{aligned}
\]
Substituting (2.17) and (2.18) into (2.16), we have
\[
\frac{d}{dt}\int_{\mathbb{R}} (u - u_{xx})\,v\,dx = 0,
\]
which implies H2.

Finally, we show H3. Multiplying both sides of (2.14) by \(u^2\) and integrating the result, it yields, noting (2.15),
\[
0 = \int_{\mathbb{R}} (u_t + u u_x + g_x)\,u^2\,dx
= \int_{\mathbb{R}} u_t\,u^2\,dx + \int_{\mathbb{R}} u^3\,u_x\,dx + \int_{\mathbb{R}} g_x\,u^2\,dx
= \int_{\mathbb{R}} \Big(\frac{1}{3}u^3\Big)_t\,dx + \frac{2}{3}\int_{\mathbb{R}} g_x\,(g - g_{xx})\,dx
= \frac{1}{3}\frac{d}{dt}\int_{\mathbb{R}} u^3\,dx,
\]
which implies the invariant H3.
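The algebraic rewriting inside (2.17) — that \(\int_{\mathbb{R}}(u^2-3uv)\,dx = \int_{\mathbb{R}}(u-u_{xx})\,v\,dx\) whenever \(u = 4v - v_{xx}\) — is easy to check numerically on a periodic domain. The test profile below is our own choice.

```python
import numpy as np

# Check of the rewriting in (2.17): with u = 4v - v_xx periodic,
# int (u^2 - 3uv) dx equals int (u - u_xx) v dx (the H2 quantity).
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)

def dxx(f):
    """Second spectral derivative of a periodic sample f."""
    return np.real(np.fft.ifft((1j * k) ** 2 * np.fft.fft(f)))

v = np.cos(x) + 0.2 * np.sin(2.0 * x)
u = 4.0 * v - dxx(v)
dx = 2.0 * np.pi / N
lhs = ((u**2 - 3.0 * u * v) * dx).sum()
rhs = (((u - dxx(u)) * v) * dx).sum()
assert abs(lhs - rhs) < 1e-8
```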
3. Applications to other periodic nonlinear dispersive waves

3.1. Benjamin–Bona–Mahony equation

Consider the Benjamin–Bona–Mahony equation [8] of the form
\[
u_t - u_{xxt} + u_x + \varepsilon u u_x = 0, \quad x \in \mathbb{R}. \quad (3.1)
\]
It can be written as
\[
u_t + \partial_x(1-\partial_{xx})^{-1}\Big(u + \frac{\varepsilon}{2}u^2\Big) = 0, \quad x \in \mathbb{R}.
\]
Let \(g = (1-\partial_{xx})^{-1}\big(u + \frac{\varepsilon}{2}u^2\big)\); then equation (3.1) turns out to be
\[
\left\{
\begin{aligned}
& u_t + g_x = 0, && (3.2)\\
& g - g_{xx} = u + \tfrac{\varepsilon}{2}u^2. && (3.3)
\end{aligned}
\right.
\]
Multiplying both sides of (3.2) by \(u^2\) and integrating the result, and then using (3.3), we have
\[
\begin{aligned}
0 &= \int_{\mathbb{R}} (u_t + g_x)\,u^2\,dx
  = \int_{\mathbb{R}} u_t\,u^2\,dx + \int_{\mathbb{R}} g_x\,u^2\,dx
  = \int_{\mathbb{R}} u_t\,u^2\,dx + \frac{2}{\varepsilon}\int_{\mathbb{R}} g_x\,(g - g_{xx} - u)\,dx \\
  &= \int_{\mathbb{R}} u_t\,u^2\,dx - \frac{2}{\varepsilon}\int_{\mathbb{R}} g_x\,u\,dx
  = \int_{\mathbb{R}} u_t\,u^2\,dx + \frac{2}{\varepsilon}\int_{\mathbb{R}} u_t\,u\,dx
  = \frac{d}{dt}\int_{\mathbb{R}} \Big(\frac{1}{3}u^3 + \frac{1}{\varepsilon}u^2\Big)\,dx,
\end{aligned}
\]
which indicates that \(\int_{\mathbb{R}} \big(\frac{1}{3}u^3 + \frac{1}{\varepsilon}u^2\big)\,dx\) is a third-order invariant for (3.1).
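The pivotal step above is \(\int_{\mathbb{R}} g_x u^2\,dx = -\frac{2}{\varepsilon}\int_{\mathbb{R}} g_x u\,dx\), valid because \(u^2 = \frac{2}{\varepsilon}(g - g_{xx} - u)\) and \(\int g_x(g-g_{xx})\,dx = 0\). A numerical sketch on a periodic domain (the value of \(\varepsilon\) and the test profile are our own choices):

```python
import numpy as np

# Check of the key BBM step: with g = (1 - d_xx)^{-1}(u + (eps/2) u^2),
# int g_x u^2 dx = -(2/eps) int g_x u dx over a period.
N = 256
eps = 0.7
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)

u = np.sin(x) + 0.4 * np.cos(2.0 * x)
g_hat = np.fft.fft(u + 0.5 * eps * u**2) / (1.0 + k**2)
gx = np.real(np.fft.ifft(1j * k * g_hat))
dx = 2.0 * np.pi / N
assert abs((gx * u**2).sum() * dx + (2.0 / eps) * (gx * u).sum() * dx) < 1e-8
```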
3.2. Regularized long wave equation

Consider the regularized long wave equation [9] of the form
\[
u_t - \mu u_{xxt} + u_x + u^p u_x = 0, \quad (3.4)
\]
where \(\mu > 0\) is a positive constant. When \(p = 2\), it is called the modified regularized long wave equation; when \(p \geqslant 3\), it is called the generalized regularized long wave equation. Similar to the foregoing argument, (3.4) can be written in the equivalent form
\[
\left\{
\begin{aligned}
& u_t + g_x = 0, && (3.5)\\
& g - \mu g_{xx} = u + \tfrac{1}{p+1}u^{p+1}. && (3.6)
\end{aligned}
\right.
\]
Multiplying both sides of (3.5) by \(u^{p+1}\), integrating the result, and then using (3.6), we have
\[
\begin{aligned}
0 &= \int_{\mathbb{R}} (u_t + g_x)\,u^{p+1}\,dx
  = \int_{\mathbb{R}} u_t\,u^{p+1}\,dx + \int_{\mathbb{R}} g_x\,u^{p+1}\,dx
  = \int_{\mathbb{R}} u_t\,u^{p+1}\,dx + (p+1)\int_{\mathbb{R}} g_x\,(g - \mu g_{xx} - u)\,dx \\
  &= \int_{\mathbb{R}} u_t\,u^{p+1}\,dx - (p+1)\int_{\mathbb{R}} g_x\,u\,dx
  = \int_{\mathbb{R}} u_t\,u^{p+1}\,dx + (p+1)\int_{\mathbb{R}} u_t\,u\,dx
  = \frac{d}{dt}\int_{\mathbb{R}} \Big(\frac{1}{p+2}u^{p+2} + \frac{p+1}{2}u^2\Big)\,dx,
\end{aligned}
\]
which indicates that \(\int_{\mathbb{R}} \big(\frac{1}{p+2}u^{p+2} + \frac{p+1}{2}u^2\big)\,dx\) is a high-order invariant for (3.4). This corrects the invariant \(I_3\) in Example 4 of [10] (p. 492).
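The same verification works with the extra parameters \(\mu\) and \(p\): since \(u^{p+1} = (p+1)(g - \mu g_{xx} - u)\) and \(\int g_x(g - \mu g_{xx})\,dx = 0\), one has \(\int g_x u^{p+1}\,dx = -(p+1)\int g_x u\,dx\). A sketch (the values of \(\mu\), \(p\), and the test profile are our own choices):

```python
import numpy as np

# Check of the RLW step from (3.5)-(3.6): with
# g = (1 - mu*d_xx)^{-1}(u + u^{p+1}/(p+1)),
# int g_x u^{p+1} dx = -(p+1) int g_x u dx over a period.
N = 256
mu, p = 2.0, 3
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)

u = np.sin(x) + 0.3 * np.cos(2.0 * x)
g_hat = np.fft.fft(u + u**(p + 1) / (p + 1)) / (1.0 + mu * k**2)
gx = np.real(np.fft.ifft(1j * k * g_hat))
dx = 2.0 * np.pi / N
assert abs((gx * u**(p + 1)).sum() * dx + (p + 1) * (gx * u).sum() * dx) < 1e-8
```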
3.3. Rosenau equation

Consider the Rosenau equation [11]
\[
u_t + u_{xxxxt} + u_x + u u_x = 0, \quad (3.7)
\]
which is equivalent to
\[
\left\{
\begin{aligned}
& u_t + g_x = 0, && (3.8)\\
& g + g_{xxxx} = u + \tfrac{1}{2}u^2. && (3.9)
\end{aligned}
\right.
\]
Multiplying both sides of (3.8) by \(u^2\) and noting (3.9), similarly to the argument above, we obtain a third-order invariant for (3.7) of the form
\[
\int_{\mathbb{R}} \Big(\frac{1}{3}u^3 + u^2\Big)\,dx.
\]
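For the Rosenau reformulation, the analogous cancellation is \(\int_{\mathbb{R}} g_x\,(g + g_{xxxx})\,dx = 0\) on a periodic domain, i.e. \(\int g_x\,(u + \frac{1}{2}u^2)\,dx = 0\), which is what forces \(\frac{d}{dt}\int(\frac{1}{3}u^3 + u^2)\,dx = 0\). A numerical sketch (the test profile is our own choice):

```python
import numpy as np

# Check for the Rosenau reformulation (3.8)-(3.9): with
# g = (1 + d_xxxx)^{-1}(u + u^2/2), the integral of g_x*(u + u^2/2)
# over a period vanishes, since u + u^2/2 = g + g_xxxx.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)

u = np.sin(x) + 0.5 * np.sin(2.0 * x)
g_hat = np.fft.fft(u + 0.5 * u**2) / (1.0 + k**4)  # invert (1 + d_xxxx)
gx = np.real(np.fft.ifft(1j * k * g_hat))
val = (gx * (u + 0.5 * u**2)).sum() * (2.0 * np.pi / N)
assert abs(val) < 1e-8
```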
Acknowledgement

We appreciate Prof. Zhi-zhong Sun for many useful discussions. This work is dedicated to Prof. Zhi-zhong Sun on the occasion of his 60th birthday. The work is supported by the Natural Science Foundation of Zhejiang Province (Grant No. LZ23A010007).
References

[1] J. Escher, Y. Liu, Z. Yin, Global weak solutions and blow-up structure for the Degasperis–Procesi equation, J. Funct. Anal., 241 (2006) 457–485.
[2] T. Tao, Low-regularity global solutions to nonlinear dispersive equations, in: Surveys in Analysis and Operator Theory (Canberra, 2001), Proc. Centre Math. Appl. Austral. Nat. Univ., 40, Austral. Nat. Univ., Canberra, 2002, 19–48.
[3] R. Camassa, D. D. Holm, An integrable shallow water equation with peaked solitons, Phys. Rev. Lett., 71 (1993) 1661–1664.
[4] A. Degasperis, M. Procesi, Asymptotic integrability, in: A. Degasperis, G. Gaeta (Eds.), Symmetry and Perturbation Theory, World Scientific, Singapore, 1999, 23–37.
[5] H. Liu, Y. Xing, An invariant preserving discontinuous Galerkin method for the Camassa–Holm equation, SIAM J. Sci. Comput., 38 (2016) A1919–A1934.
[6] R. Courant, K. O. Friedrichs, H. Lewy, Über die partiellen Differenzengleichungen der mathematischen Physik, Math. Ann., 100 (1928) 32–74.
[7] Z. Sun, Finite Difference Methods for Nonlinear Evolution Equations, Science Press, Beijing, 2018.
[8] L. A. Medeiros, G.
|
316 |
+
page_content='P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
317 |
+
page_content=' Menzala, Existence and uniqueness for periodic solutions of the Benjamin-Bona-Mahony equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
318 |
+
page_content=' SIAM J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
319 |
+
page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
320 |
+
page_content=' Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
321 |
+
page_content=', 8(5) (1977) 792–799.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
322 |
+
page_content=' [9] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
323 |
+
page_content='E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
324 |
+
page_content=' Seyler, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
325 |
+
page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
326 |
+
page_content=' Fenstermacher, A symmetric regularized-long-wave equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
327 |
+
page_content=' The Physics of Fluids, 27(4) (1984) 4–7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
328 |
+
page_content=' [10] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
329 |
+
page_content=' Ghiloufi, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
330 |
+
page_content=' Omrani, New conservative difference schemes with fourth-order accuracy for some model equation for nonlinear dispersive waves.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
331 |
+
page_content=' Numer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
332 |
+
page_content=' Methods Partial Differential Equation, 34 (2018) 451–500.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
333 |
+
page_content=' [11] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
334 |
+
page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
335 |
+
page_content=' Park, On the Rosenau equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
336 |
+
page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
337 |
+
page_content=' Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
338 |
+
page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
339 |
+
page_content=', 9 (1990) 145–152.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
340 |
+
page_content=' 7' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49AzT4oBgHgl3EQfEPqD/content/2301.00990v1.pdf'}
|
6tE2T4oBgHgl3EQf7Qi3/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b24495c5010fa3f7b048648148112965ded73097e1856d3e97f54c6acdcfb697
+size 2621485

79FLT4oBgHgl3EQfAy4h/content/2301.11967v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f40400add0a99df05a448f65a729583ee54450a058a5a93f29ea055eb51a7305
+size 3000697

79FLT4oBgHgl3EQfAy4h/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9aaabffa6d8cd3c272a553552179a525d8d50408b82161083b6f13e0a5b03b4
+size 142250

89AzT4oBgHgl3EQf-_4J/content/2301.01940v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9123c1852f7c939b6dfbde1dea226903321fa44fa9c8772e947b3f9ab564b984
+size 25647033

89E1T4oBgHgl3EQfUAPq/content/2301.03086v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3f7e53a9baddb0b2f4df378ac41044dd3942190990959b45e55350abac4cc7b
+size 293704

89E1T4oBgHgl3EQfUAPq/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:824a7bb7a97473d08814defd6cdbd83d2d3d700468e9a85b19be9a68fe63b3b1
+size 2752557
8tE3T4oBgHgl3EQfSAk3/content/tmp_files/2301.04427v1.pdf.txt
ADDED
@@ -0,0 +1,1228 @@
Quantum sensing of electric field distributions of liquid electrolytes with NV-centers in nanodiamonds

M. Hollendonner,1,2 S. Sharma,2 D. B. R. Dasari,3 A. Finkler,4 S. V. Kusminskiy,2,5 and R. Nagy1,*

1 Friedrich-Alexander-University Erlangen-Nuremberg, 91058 Erlangen, Germany
2 Max Planck Institute for the Science of Light, 91058 Erlangen, Germany
3 3rd Institute of Physics, IQST, and Research Center SCoPE, University of Stuttgart, 70569 Stuttgart, Germany
4 Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot 7610001, Israel
5 Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany

(Dated: January 12, 2023)

To use batteries as large-scale energy storage systems it is necessary to measure and understand their degradation in-situ and in-operando. As a battery's degradation is often the result of molecular processes inside the electrolyte, a sensing platform which allows one to measure the ions with high spatial resolution is needed. Primary candidates for such a platform are NV-centers in diamond. We propose to use a single NV-center to deduce the electric field distribution generated by the ions inside the electrolyte through microwave pulse sequences. We show that the electric field can be reconstructed with great accuracy by using a protocol which combines different variations of the free induction decay, to obtain the mean electric field components, with a modified Hahn-echo pulse sequence, to measure the electric field's standard deviation $\sigma_E$. From a semi-analytical ansatz we find that for a lithium-ion battery there is a direct relationship between $\sigma_E$ and the ionic concentration. Our results show that it is therefore possible to use NV-centers as sensors to measure both the electric field distribution and the local ionic concentration inside electrolytes.
I. INTRODUCTION

Rechargeable batteries play an important role for our society and are a key ingredient for the transition towards renewable energy sources [1-3]. As the production of batteries is accompanied by a considerable use of resources, recyclable [4] batteries with a long lifetime are needed. The latter is limited by degradation mechanisms, such as the formation of solid-electrolyte interfaces [5] or lithium plating [6], which can reduce the battery's capacity with increasing cell age [7]. As these processes happen on a molecular level within nanometer scales [5], a sensor which is capable of monitoring the ionic concentration in-situ and in-operando with high spatial and temporal resolution is needed. Even though MRI allows one to reconstruct transport properties [8, 9] of a battery, tools which allow measurements inside the electrolyte are still absent [5].

It has been demonstrated that nitrogen-vacancy (NV) centers in diamond (see Fig. 1(b)) are high-resolution quantum sensors, which can detect oscillating or fluctuating [10-13] magnetic fields with nano- [14, 15] and even subpico-Tesla [16] sensitivities. Besides this, NV-centers have a great ability for the detection of electric fields. They can not only detect DC [17, 18] or AC [19] electric fields with remarkable precision, but are additionally capable of detecting single fundamental charges [20], even within the diamond lattice [21]. This electric field sensitivity was used by Ref. [22] to show, based on theoretical considerations, that bulk NV-centers can work as electrochemical sensors if they are in contact with an electrolyte solution.

Here we show that nanodiamonds equipped with single NV-centers can act as in-situ electric field sensors inside liquid electrolytes (Fig. 1(a)). By exploiting how transverse and axial electric fields act on the NV-center's ground-state spin states, we find variations of the free-induction decay (FID) pulse sequence which allow us to measure the mean electric field components. Further, we show that it is possible to use variants of the Hahn-echo pulse sequence to additionally obtain the electric field's standard deviation $\sigma_E$. From a semi-analytical ansatz we demonstrate, exemplarily for a lithium-ion battery (LIB), that there is a direct relationship between the electric field's standard deviation and the local ionic concentration. A nanodiamond with a single NV-center can therefore work as a sensor which allows one to simultaneously reconstruct the electric field distribution and measure the ionic concentration with nm spatial resolution.
II. ELECTRIC FIELD DISTRIBUTION IN LIQUID ELECTROLYTES

Before introducing measurements of the electric field distribution by the NV-center, we first develop an analytic expression for the electric field induced inside the nanodiamond by the positive and negative ions of the electrolyte.

arXiv:2301.04427v1 [quant-ph] 11 Jan 2023

FIG. 1. (a) Experimental setting. A nanodiamond which is dissolved in the liquid electrolyte of the battery is surrounded by positive (orange) and negative (blue) ions. Two perpendicularly aligned gold wires allow one to generate polarized microwave drives. (b) To work as a quantum sensor, the nanodiamond contains a vacancy (V) next to a nitrogen atom (red). (c) Standard deviation of $E_z$, calculated from 500 repeated sets of randomly placed ions of concentration $c$ around the nanodiamond ($r_{ND} = 100$ nm) and inside a sphere of radius $R$. The relative permittivities are $\epsilon_{ND} = 5.8$ [22] and $\epsilon_e = 17.5$ [23]. Solid lines are fits following Eq. (3) with $A$ as a fit parameter. (d) Fit parameters $A$ obtained from (c), compared to the theory value.

The potential $\Phi$ at position $\mathbf{r}$ inside the nanodiamond due to a single charge $q$ at position $\mathbf{b}$ is described by Poisson's equation

$$\nabla^2 \Phi(\mathbf{r}) = -\frac{\rho(\mathbf{r})}{\epsilon}. \qquad (1)$$

Here $\epsilon = \epsilon_0 \epsilon_i$ with $i = e, \mathrm{ND}$ are the permittivities of, respectively, the electrolyte and the nanodiamond in terms of the vacuum permittivity $\epsilon_0$, and $\rho$ is the charge density induced by $q$. The solution inside the nanodiamond, $\Phi_{ND}$ (see Methods for the detailed derivation), allows one to obtain the electric field at the center of the nanodiamond, which is

$$\mathbf{E}_{ND} = \frac{q}{4\pi\epsilon_0} \, \frac{3}{2\epsilon_e + \epsilon_{ND}} \, \frac{\mathbf{b}}{b^3}. \qquad (2)$$

By considering the positions of ions of a molar concentration $c$ to be randomly distributed within a sphere of radius $R$ around a nanodiamond (radius $r_{ND}$), the standard deviation of the electric field distribution at the center of the nanodiamond is

$$\sigma_{E_z} = A \sqrt{c \left( \frac{1}{r_{ND}} - \frac{1}{R} \right)}, \qquad A = \frac{|q|}{\epsilon_0 \left( 2\epsilon_e + \epsilon_{ND} \right)} \sqrt{\frac{3 N_A}{4\pi}}. \qquad (3)$$

To validate Eq. (3), we simulated the standard deviation of 500 sets of uniformly and randomly placed ions for different molar ionic concentrations (see Fig. 1(c)). As it is the most widely used electrolyte of LIBs [24], we chose LiPF6 with $\epsilon_e = 17.5$ [23]. The total electric field was calculated as the linear sum of Eq. (2) over all randomly placed ions around a 200 nm spherical nanodiamond [25]. As can be seen from Fig. 1(d), the expected value of $A$ is in fair agreement with the simulations. From Eq. (3) it can be calculated that for $R = 500$ nm the fluctuations increase only by 3% compared to $\sigma_E(R = 400\,\mathrm{nm})$. As $\sigma_E$ therefore saturates for $R \gtrsim 500$ nm, electric field fluctuations only affect the nanodiamond within a sub-micrometer range, and the system is limited by the confocal volume of the experimental setup, which is typically $\sim 1\,\mu\mathrm{m}^3$ [26, 27].
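The square-root scaling of Eq. (3) with the concentration can be checked with a small Monte-Carlo sketch (our own illustration, not the authors' code): ions of a single charge species are placed uniformly in the shell $r_{ND} < b < R$ and their fields, Eq. (2), are summed at the diamond center. The concentration is deliberately tiny here so that the number of ions per realization stays tractable.

```python
import numpy as np

# Constants follow Sec. II; the sampling setup and the small c are assumptions.
e = 1.602176634e-19        # elementary charge [C]
eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]
NA = 6.02214076e23         # Avogadro constant [1/mol]
eps_e, eps_nd = 17.5, 5.8  # relative permittivities (electrolyte, diamond)
r_nd, R = 100e-9, 500e-9   # nanodiamond radius and cutoff radius [m]
c = 1e-5                   # molar concentration [mol/L], single ion species

rng = np.random.default_rng(7)
n = c * 1000.0 * NA                                # number density [1/m^3]
N = int(n * 4.0 / 3.0 * np.pi * (R**3 - r_nd**3))  # ions per realization
k = 3.0 * e / (4.0 * np.pi * eps0 * (2.0 * eps_e + eps_nd))  # Eq. (2) prefactor

def sigma_ez_mc(trials=600):
    """Std of E_z at the diamond center over random ion configurations."""
    ez = np.empty(trials)
    for t in range(trials):
        # radii drawn so that positions are uniform in the shell r_nd < b < R
        b = (r_nd**3 + rng.random(N) * (R**3 - r_nd**3)) ** (1.0 / 3.0)
        cos_th = rng.uniform(-1.0, 1.0, N)   # isotropic polar angles
        ez[t] = np.sum(k * cos_th / b**2)    # z-components of Eq. (2), summed
    return ez.std()

sigma_mc = sigma_ez_mc()
A = e / (eps0 * (2.0 * eps_e + eps_nd)) * np.sqrt(3.0 * NA * 1000.0 / (4.0 * np.pi))
sigma_th = A * np.sqrt(c * (1.0 / r_nd - 1.0 / R))  # Eq. (3)
print(f"MC: {sigma_mc:.3e} V/m, Eq.(3): {sigma_th:.3e} V/m")
```

The two numbers agree to within the Monte-Carlo error, and rerunning with several values of `c` reproduces the square-root trend of Fig. 1(c).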
III. SENSING OF STATIC ELECTRIC FIELDS INSIDE ELECTROLYTES

An electric field $\mathbf{E}$ can in cylindrical coordinates be expressed by its axial component $E_z$, its transverse projection $E_\perp = \sqrt{E_x^2 + E_y^2}$ and an angle $\varphi_E$, which defines the projections onto the x and y axes as $E_x = E_\perp \cos\varphi_E$ and $E_y = E_\perp \sin\varphi_E$. The total Hamiltonian which describes the NV-center in the presence of electric and axial magnetic fields will in the following be denoted as $\hat H_0$. Taking into account that the NV-center can be driven by two perpendicular microwave wires (see Fig. 1(a)) with amplitude $\Omega$, frequency $\omega_d$ and a phase $\varphi$ between each other, the total ground-state Hamiltonian in a frame rotating with $\omega_d$ is $\hat H = \hat H_0 + \hat H_d$ (see Methods), where

$$\hat H_0 = \left(\Delta + \xi_z\right) \hat S_z^2 + \beta_z \hat S_z - \frac{\xi_\perp}{2} \left( \hat S_+^2 e^{i\varphi_E} + \mathrm{h.c.} \right),$$
$$\hat H_d = \frac{\Omega}{\sqrt{2}} \left( \epsilon_- \sigma_{0,-1} + \epsilon_+ \sigma_{0,+1}^\dagger + \mathrm{h.c.} \right). \qquad (4)$$

Here $\Delta = D - \omega_d$ is the detuning between the zero-field splitting, $D = 2.87$ GHz [28], and the microwave drive frequency. $\hat S_i$, $i = x, y, z$, are the spin-1 operators, which can be used to define ladder operators $\hat S_\pm = \hat S_x \pm i \hat S_y$. $\sigma_{0,\pm 1} = |0\rangle\langle\pm 1|$ are operators which describe transitions between $|0\rangle$ and, respectively, $|\pm 1\rangle$. Frequency contributions generated by electric and axial magnetic fields are considered through $\xi_z = d_\parallel E_z$ and $\xi_\perp = d_\perp E_\perp$ ($d_\parallel = 0.35$ Hz cm/V, $d_\perp = 17$ Hz cm/V [29]) and $\beta_z = \gamma_e B_z$ ($\gamma_e = 28$ GHz/T [30]). The phase factors $\epsilon_\pm = \left(1 - i e^{\mp i\varphi}\right)/2$ which enter into Eq. (4) allow one to describe the transitions which are caused by circularly ($\varphi = \pm\pi/2$) or linearly ($\varphi = 0$) polarized microwave drives [31]. The time-evolution operators of $\hat H_d$, $\hat R(t) = e^{-i \hat H_d t}$ (see Methods), show that one can induce Rabi oscillations between $|0\rangle$ and $|1\rangle$ for right-circularly polarized drives and $|0\rangle \leftrightarrow |-1\rangle$ for left-circular polarizations. Linearly polarized drives allow one to drive transitions between $|0\rangle$ and both $|\pm 1\rangle$.

FIG. 2. (a) FID variations to extract $\xi_\perp$, $\varphi_E$ and $\xi_z$ through subsequent pulse sequences. Here $T_\pi$ ($T_{\pi/2}$) is the duration of the microwave pulse such that a $\pi$-pulse ($\pi/2$-pulse) is performed. Subscripts $\pm$ denote circularly polarized drives which cause oscillations between $|0\rangle$ and either $|1\rangle$ or $|-1\rangle$. Subscript 0 denotes linear polarization of the drive, and the free evolution is described through $\hat F$. (b) $\mathrm{FID}_{\xi_\perp}$ for different magnetic fields up to $\beta_z = 2.8$ MHz, corresponding to $B_z = 1$ G. For $\beta_z = 0$ the signal has the highest contrast with the lowest frequency of oscillation. (c) Fourier transform of $\mathrm{FID}_{\xi_\perp,\xi_z}$ with $\Omega = 10$ MHz and $E_{x,y,z} = 10$ V/µm. Only for $T_2^* > 10$ µs can the peaks at $\xi_\perp \pm \xi_z = 2.4 \pm 0.04$ MHz and $2\xi_\perp$ be resolved.
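The polarization selectivity of the drive term $\hat H_d$ in Eq. (4) can be checked with a minimal numerical sketch (our own illustration; the basis ordering and the sign convention linking $\varphi = \mp\pi/2$ to the two circular polarizations are assumptions):

```python
import numpy as np

Omega = 2 * np.pi * 10e6                  # drive amplitude [rad/s]

def drive_hamiltonian(phi):
    """H_d of Eq. (4) in the (assumed) basis ordering (|+1>, |0>, |-1>)."""
    eps_p = (1 - 1j * np.exp(-1j * phi)) / 2   # epsilon_+
    eps_m = (1 - 1j * np.exp(+1j * phi)) / 2   # epsilon_-
    H = np.zeros((3, 3), complex)
    H[0, 1] = Omega / np.sqrt(2) * eps_p       # |+1><0| term
    H[1, 2] = Omega / np.sqrt(2) * eps_m       # |0><-1| term
    return H + H.conj().T                      # add the h.c. part

def propagate(H, t, psi0):
    """exp(-i H t) psi0 via eigendecomposition (H is Hermitian)."""
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ psi0))

psi0 = np.array([0, 1, 0], complex)       # NV initialized in |0>
T_pi = np.pi / (np.sqrt(2) * Omega)       # pi-pulse for the coupling Omega/sqrt(2)
pop = {}
for phi in (-np.pi / 2, np.pi / 2):
    psi = propagate(drive_hamiltonian(phi), T_pi, psi0)
    pop[phi] = np.abs(psi) ** 2
    print(f"phi = {phi:+.3f}: populations (|+1>,|0>,|-1>) = {np.round(pop[phi], 3)}")
```

For one circular polarization all population is transferred to $|+1\rangle$, for the opposite one to $|-1\rangle$, reproducing the selection rules stated above.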
In the absence of microwave drives, the $|\pm 1\rangle$ states are symmetrically mixed by $\xi_\perp$, and axial electric fields effectively shift $|0\rangle$ from $|\pm 1\rangle$, which can be seen from $\hat F(\tau) = e^{-i \hat H_0 \tau}$ (see Methods). As axial and transverse electric fields thus act differently on the $|m_s = 0, \pm 1\rangle$ states of the NV-center, one can derive variations of the free induction decay (FID) which allow one to extract these electric field components.

A. Measurement of electric field components

The FID consists of two microwave pulses separated by a free evolution period $\tau$. Electric field contributions $\xi_\perp$, $\varphi_E$ and $\xi_z$ can be sensed through FID variations, as shown in Fig. 2(a). The NV-center can be initialized into its $|0\rangle$ state via excitation with green laser light, followed by intersystem crossing [32]. This state can then be driven to $-i|1\rangle$ through a right-polarized $\pi$-pulse, denoted as $\hat R(T_\pi)_+$, and will be influenced by both axial magnetic as well as transverse electric fields. The latter induce mixing with $|-1\rangle$. By using a microwave $\pi$-pulse with the same polarization as the initial one, the transferred population from $|1\rangle$ to $|-1\rangle$ can be obtained from the FID signal

$$\mathrm{FID}_{\xi_\perp}(\tau) = \left| \langle 0| \hat R(T_\pi)_+ \hat F(\tau) \hat R(T_\pi)_+ |0\rangle \right|^2 = \cos^2\!\left(\tau\sqrt{\beta_z^2 + \xi_\perp^2}\right) + \frac{\beta_z^2}{\beta_z^2 + \xi_\perp^2} \sin^2\!\left(\tau\sqrt{\beta_z^2 + \xi_\perp^2}\right), \qquad (5)$$

which is a measure of the population that has been transferred from $|1\rangle$ to $|-1\rangle$. In Fig. 2(b) one can see this FID signal as a function of the free evolution time $\tau$ for $\beta_z$ values up to 2.8 MHz, which corresponds to $B_z = 1$ G. Besides having a decreased contrast for $\beta_z \neq 0$, the frequency $\sqrt{\beta_z^2 + \xi_\perp^2}$ of the FID oscillations depends on both axial magnetic and transverse electric fields. It is therefore strongly recommended to perform the measurements in a magnetically shielded environment, for example by a µ-metal as in Ref. [33]. In the following it will be assumed that all measurements are performed without any magnetic field being present.

The transverse electric field components are uniquely defined through $\varphi_E$, as $\xi_x = \xi_\perp \cos\varphi_E$ and $\xi_y = \xi_\perp \sin\varphi_E$. A superposition state $-e^{i\pi/4}\left(|1\rangle + |-1\rangle\right)/\sqrt{2}$ generated through a linearly polarized $\pi$-pulse (considered via $\hat R(T_\pi)_0$, see Methods) will, in addition to $\xi_\perp$, also be affected by $\varphi_E$, as this phase differs in its sign for $|1\rangle$ and $|-1\rangle$ (see Methods). If either $|1\rangle$ or $|-1\rangle$ is projected to $|0\rangle$ through the final microwave pulse, one obtains an FID signal which depends on both $\xi_\perp$ and $\varphi_E$,

$$\mathrm{FID}_{\varphi_E,\xi_\perp}(\tau) = \frac{1}{2}\left(1 - \sin\left(2\tau\xi_\perp\right)\sin\varphi_E\right). \qquad (6)$$

One can obtain $\varphi_E$ as the relative fraction between the value of the FID signal at its first maximum at $2\tau\xi_\perp = \pi/2$ and its value at $\tau = 0$,

$$\frac{\mathrm{FID}_{\varphi_E,\xi_\perp}\!\left(\tau = \frac{\pi}{2}\frac{1}{2\xi_\perp}\right)}{\mathrm{FID}_{\varphi_E,\xi_\perp}(\tau = 0)} = 1 - \sin\varphi_E. \qquad (7)$$

By using $\mathrm{FID}_{\xi_\perp}$ and $\mathrm{FID}_{\xi_\perp,\varphi_E}$ it is therefore possible not only to determine the electric field's transverse component, but also to obtain the projections onto the x and y axes, which are determined through $\varphi_E$.

Axial electric field contributions $\xi_z$ cause a Stark shift between $|0\rangle$ and $|\pm 1\rangle$. A superposition state $\left(|0\rangle - i|-1\rangle\right)/\sqrt{2}$ generated by a circularly polarized $\pi/2$-pulse (see Fig. 2(a)) will therefore be affected both by $\xi_z$ and $\xi_\perp$. If the final microwave $\pi/2$-pulse has the same polarization as the initial one, an FID signal is obtained which depends on both $\xi_\perp$ and $\xi_z$,

$$\mathrm{FID}_{\xi_z,\xi_\perp}(\tau) = \frac{1}{4}\left(1 - 2\cos\left(\tau\xi_\perp\right)\cos\left(\tau\xi_z\right) + \cos^2\left(\tau\xi_\perp\right)\right), \qquad (8)$$

if the NV-center was driven with $\omega_d = D$. The Fourier transform of Eq. (8) (see Methods),

$$\widetilde{\mathrm{FID}}(\omega > 0) = \frac{\pi}{4}\left[\frac{1}{2}\,\delta\left(2\xi_\perp - \omega\right) - \delta\left(\xi_\perp + \xi_z - \omega\right) - \delta\left(\xi_\perp - \xi_z - \omega\right)\right], \qquad (9)$$

shows that $\xi_z$ can be measured if it is possible to spectrally resolve $\xi_\perp \pm \xi_z$. To study this, we numerically [34, 35] simulated $\mathrm{FID}_{\xi_z,\xi_\perp}$ and included dephasing at rates $1/T_2^*$ through a Lindblad operator $\sqrt{1/T_2^*}\,\hat S_z$ for $T_2^*$ in the range up to 15 µs (see Fig. 2(c)). One can resolve $\xi_\perp \pm \xi_z$ for nanodiamonds with $T_2^* > 10$ µs, which is higher than the value of typical nanodiamonds [36]. For a nanodiamond with $T_2^* \approx 15$ µs it would be possible to distinguish between $\xi_\perp$ and $\xi_z$ and therefore to determine the projection of the electric field onto the symmetry axis of the NV-center.
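The spectral-resolution argument can be illustrated by sampling Eq. (8) directly and Fourier transforming it. This is a sketch under the assumption that $\xi_\perp$ and $\xi_z$ are read as ordinary (non-angular) frequencies, with the $2\pi$ written out explicitly; the field values are those of Fig. 2(c).

```python
import numpy as np

# Field values of Fig. 2(c): E_x = E_y = E_z = 10 V/um, expressed in V/cm
# because the couplings d_par, d_perp are quoted in Hz cm / V [29].
d_par, d_perp = 0.35, 17.0            # Hz cm / V
E = 10.0 * 1e4                        # 10 V/um in V/cm
xi_perp = d_perp * np.hypot(E, E)     # transverse shift, ~2.40 MHz
xi_z = d_par * E                      # axial Stark shift, ~0.035 MHz

dt, T = 0.05e-6, 200e-6               # sampling step and record length [s]
tau = np.arange(0.0, T, dt)
fid = 0.25 * (1.0
              - 2.0 * np.cos(2 * np.pi * xi_perp * tau) * np.cos(2 * np.pi * xi_z * tau)
              + np.cos(2 * np.pi * xi_perp * tau) ** 2)   # Eq. (8)

spec = np.abs(np.fft.rfft(fid - fid.mean()))   # remove DC, take spectrum
freqs = np.fft.rfftfreq(len(tau), dt)
f_main = freqs[spec.argmax()]                  # strongest spectral line
i_2xi = np.argmin(np.abs(freqs - 2 * xi_perp)) # bin nearest 2*xi_perp
print(f"xi_perp = {xi_perp/1e6:.3f} MHz, xi_z = {xi_z/1e6:.3f} MHz")
print(f"dominant FID line at {f_main/1e6:.3f} MHz")
```

The dominant lines sit at $\xi_\perp \pm \xi_z$, with a weaker line at $2\xi_\perp$, as in Eq. (9); resolving the $\pm\xi_z$ splitting in practice requires the record length (here 200 µs) not to be cut short by dephasing, which is the role of $T_2^*$ in Fig. 2(c).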
IV.
|
364 |
+
INFLUENCE OF FLUCTUATING
|
365 |
+
ELECTRIC FIELDS
|
366 |
+
It can be assumed that the ions surrounding the nan-
|
367 |
+
odiamond will not stay static for the timescales in which
|
368 |
+
measurements are performed but will be subject to, for
|
369 |
+
instance, drift and diffusion.
|
370 |
+
These fluctuations will
|
371 |
+
affect the electric field inside the nanodiamond.
|
372 |
+
Due
|
373 |
+
to the limited T ∗
|
374 |
+
2 of nanodiamonds, the FID pulse se-
|
375 |
+
quences as introduced before will be mainly suitable for
|
376 |
+
the measurement of the average electric fields (see Meth-
|
377 |
+
ods).
|
378 |
+
The coherence time of a nanodiamond can be
|
379 |
+
significantly prolonged if instead of an FID, a Hahn-
|
380 |
+
Echo pulse sequence is used [25].
|
381 |
+
As it is shown in
|
382 |
+
Fig. 3(a), we propose a modified version of the Hahn-
|
383 |
+
Echo, where after the first free evolution interval, a π-
|
384 |
+
pulse with right-circular polarization is performed, be-
|
385 |
+
fore the spin is allowed to precess freely during a sec-
|
386 |
+
ond free evolution interval τ. Before being read out, a
|
387 |
+
right-circularly polarized π-pulse is applied, which leads
|
388 |
+
to a signal Hahn (τ) = (1 − cos (2τξ⊥))2 /4. Simulations
|
389 |
+
of this Hahn-Echo variation show that the averages (see
|
390 |
+
Methods for an example) can be fit by
|
391 |
+
⟨Hahn (τ)⟩ = 1
|
392 |
+
4
|
393 |
+
�
|
394 |
+
1 − cos (2τξ⊥) e−τ/T2�2
|
395 |
+
.
|
396 |
+
(10)
|
397 |
+
Here T2 is the sum of the intrinsic spin coherence time
|
398 |
+
T2,int. = 100 µs [25] and a contribution due to the fluc-
|
399 |
+
tuating electric fields,
|
400 |
+
1
|
401 |
+
T2
|
402 |
+
=
|
403 |
+
1
|
404 |
+
T2,int.
|
405 |
+
+
|
406 |
+
1
|
407 |
+
T2,E
|
408 |
+
.
|
409 |
+
(11)
|
410 |
+
In Fig. 3(b), one can see T2 as a function of the electric
|
411 |
+
field’s standard deviation σE, where solid lines are T2,E =
|
412 |
+
αEm/σ2
|
413 |
+
E in terms of a fit parameters α. The total spin
|
414 |
+
coherence time is therefore strongly affected by σE and
|
415 |
+
the mean electric field value Em. If the mean transverse
|
416 |
+
electric field has been sensed by the FID sequence as
|
417 |
+
shown in Eq. (5), it is therefore possible to derive the
|
418 |
+
electric field’s standard deviation, which together with
|
419 |
+
ξ⊥, φE and ξz defines the electric field distribution. As
|
420 |
+
there is a direct relationship between σE and the local
|
421 |
+
ionic concentration (see Fig. 1(c)), the proposed Hahn-
|
422 |
+
echo pulse sequence additionally allows to use the NV-
|
423 |
+
center inside the nanodiamond as a local concentration
|
424 |
+
sensor.
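The inversion chain behind Eqs. (10) and (11) can be sketched in a few lines: fit the averaged echo to extract the total T2, subtract the intrinsic rate, and invert T2,E = αEm/σE² for σE. All numerical values below (the fitted T2, α, Em) are illustrative placeholders, not results from this work.

```python
import numpy as np

T2_INT = 100.0  # us, intrinsic coherence time from Ref. [25]

def hahn_signal(tau, xi_perp, T2):
    # Averaged Hahn-echo signal of Eq. (10)
    return 0.25 * (1.0 - np.cos(2.0 * tau * xi_perp) * np.exp(-tau / T2)) ** 2

def sigma_from_T2(T2_total, alpha, E_mean):
    # Eq. (11): isolate the electric-field contribution to the decay rate,
    # then invert T2_E = alpha * Em / sigma_E**2 for sigma_E
    rate_E = 1.0 / T2_total - 1.0 / T2_INT
    return np.sqrt(alpha * E_mean * rate_E)

# Example: a fitted total T2 of 40 us with hypothetical alpha = 35, Em = 1 V/um
sigma_E = sigma_from_T2(40.0, 35.0, 1.0)
```

A shorter measured T2 at fixed Em then directly translates into a larger inferred σE, which is the quantity tied to the local ionic concentration.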
FIG. 3. (a) Hahn-echo pulse sequence used to simulate Eq. (10). (b) Total T2 for numerically [34, 35] simulated Hahn-Echoes with T2,int = 100 µs, with the electric field components sampled from a normal distribution with mean Em and standard deviation σE. For the simulations a drive of Ω = 10 MHz was used. Solid lines are fits of αEm/σE². Every trajectory was obtained from 1000 individual simulations. Error bars of one standard deviation are smaller than the data points.

V. CONCLUSION AND OUTLOOK

In conclusion, we have shown a full reconstruction of the mean electric field generated in a liquid electrolyte, through the spin control of a quantum sensor immersed in the electrolyte. We have found exact expressions correlating the electric field components with the free-induction decay of the sensor spin, as well as the dependence of the field variance on the spin-echo measurements. Together, these allowed us to deduce the electric field distribution and also to measure the local ionic concentration, a key parameter in characterizing the performance of the liquid electrolyte for battery applications. We envisage that with improved modeling of the electric field distribution in liquid electrolytes and with better quantum control methods, for example correlation spectroscopy [37], the sensitivity of the sensor to the local electric-field environment could be enhanced, allowing for in-situ monitoring of the battery through its liquid electrolyte.

ACKNOWLEDGMENTS

R. N. would like to acknowledge financial support by the Federal Ministry of Education and Research (BMBF) project QMNDQCNet and DFG (Project No. 507241320 and 46256793). S. V. K. and D. D. would like to acknowledge the funding support from BMBF (Grant No. 16KIS1590K). A. F. is the incumbent of the Elaine Blond Career Development Chair and acknowledges support from the Israel Science Foundation (ISF grants 963/19 and 418/20) as well as the Abramson Family Center for Young Scientists and the Willner Family Leadership Institute for the Weizmann Institute of Science.
[1] B. Diouf and R. Pode, “Potential of lithium-ion batteries in renewable energy,” Renewable Energy, vol. 76, pp. 375–380, Apr. 2015.
[2] D. Di Lecce, R. Verrelli, and J. Hassoun, “Lithium-ion batteries for sustainable energy storage: recent advances towards new cell configurations,” Green Chemistry, vol. 19, no. 15, pp. 3442–3467, 2017.
[3] A. Jaiswal, “Lithium-ion battery based renewable energy solution for off-grid electricity: A techno-economic analysis,” Renewable and Sustainable Energy Reviews, vol. 72, pp. 922–934, May 2017.
[4] G. Harper, R. Sommerville, E. Kendrick, L. Driscoll, P. Slater, R. Stolkin, A. Walton, P. Christensen, O. Heidrich, S. Lambert, A. Abbott, K. Ryder, L. Gaines, and P. Anderson, “Recycling lithium-ion batteries from electric vehicles,” Nature, vol. 575, pp. 75–86, Nov. 2019.
[5] Y. S. Meng, V. Srinivasan, and K. Xu, “Designing better electrolytes,” Science, vol. 378, p. eabq3750, Dec. 2022.
[6] U. S. Meda, L. Lal, S. M, and P. Garg, “Solid Electrolyte Interphase (SEI), a boon or a bane for lithium batteries: A review on the recent advances,” Journal of Energy Storage, vol. 47, p. 103564, Mar. 2022.
[7] J. S. Edge, S. O’Kane, R. Prosser, N. D. Kirkaldy, A. N. Patel, A. Hales, A. Ghosh, W. Ai, J. Chen, J. Yang, S. Li, M.-C. Pang, L. Bravo Diaz, A. Tomaszewska, M. W. Marzook, K. N. Radhakrishnan, H. Wang, Y. Patel, B. Wu, and G. J. Offer, “Lithium ion battery degradation: what you need to know,” Physical Chemistry Chemical Physics, vol. 23, no. 14, pp. 8200–8221, 2021.
[8] M. Klett, M. Giesecke, A. Nyman, F. Hallberg, R. W. Lindström, G. Lindbergh, and I. Furó, “Quantifying Mass Transport during Polarization in a Li Ion Battery Electrolyte by in Situ 7Li NMR Imaging,” Journal of the American Chemical Society, vol. 134, pp. 14654–14657, Sept. 2012.
[9] S. A. Krachkovskiy, J. D. Bazak, P. Werhun, B. J. Balcom, I. C. Halalay, and G. R. Goward, “Visualization of Steady-State Ionic Concentration Profiles Formed in Electrolytes during Li-Ion Battery Operation and Determination of Mass-Transport Properties by in Situ Magnetic Resonance Imaging,” Journal of the American Chemical Society, vol. 138, pp. 7992–7999, June 2016.
[10] L. T. Hall, J. H. Cole, C. D. Hill, and L. C. L. Hollenberg, “Sensing of Fluctuating Nanoscale Magnetic Fields Using Nitrogen-Vacancy Centers in Diamond,” Physical Review Letters, vol. 103, p. 220802, Nov. 2009.
[11] S. Steinert, F. Ziem, L. T. Hall, A. Zappe, M. Schweikert, N. Götz, A. Aird, G. Balasubramanian, L. Hollenberg, and J. Wrachtrup, “Magnetic spin imaging under ambient conditions with sub-cellular resolution,” Nature Communications, vol. 4, p. 1607, June 2013.
[12] L. Luan, M. S. Grinolds, S. Hong, P. Maletinsky, R. L. Walsworth, and A. Yacoby, “Decoherence imaging of spin ensembles using a scanning single-electron spin in diamond,” Scientific Reports, vol. 5, p. 8119, July 2015.
[13] K. Agarwal, R. Schmidt, B. Halperin, V. Oganesyan, G. Zaránd, M. D. Lukin, and E. Demler, “Magnetic noise spectroscopy as a probe of local electronic correlations in two-dimensional systems,” Physical Review B, vol. 95, p. 155107, Apr. 2017.
[14] G. Balasubramanian, P. Neumann, D. Twitchen, M. Markham, R. Kolesov, N. Mizuochi, J. Isoya, J. Achard, J. Beck, J. Tissler, V. Jacques, P. R. Hemmer, F. Jelezko, and J. Wrachtrup, “Ultralong spin coherence time in isotopically engineered diamond,” Nature Materials, vol. 8, pp. 383–387, May 2009.
[15] J. L. Webb, J. D. Clement, L. Troise, S. Ahmadi, G. J. Johansen, A. Huck, and U. L. Andersen, “Nanotesla sensitivity magnetic field sensing using a compact diamond nitrogen-vacancy magnetometer,” Applied Physics Letters, vol. 114, p. 231103, June 2019.
[16] T. Wolf, P. Neumann, K. Nakamura, H. Sumiya, T. Ohshima, J. Isoya, and J. Wrachtrup, “Subpicotesla Diamond Magnetometry,” Physical Review X, vol. 5, p. 041001, Oct. 2015.
[17] F. Dolde, H. Fedder, M. W. Doherty, T. Nöbauer, F. Rempp, G. Balasubramanian, T. Wolf, F. Reinhard, L. C. L. Hollenberg, F. Jelezko, and J. Wrachtrup, “Electric-field sensing using single diamond spins,” Nature Physics, vol. 7, pp. 459–463, June 2011.
[18] K. Bian, W. Zheng, X. Zeng, X. Chen, R. Stöhr, A. Denisenko, S. Yang, J. Wrachtrup, and Y. Jiang, “Nanoscale electric-field imaging based on a quantum sensor and its charge-state control under ambient condition,” Nature Communications, vol. 12, p. 2457, Dec. 2021.
[19] J. Michl, J. Steiner, A. Denisenko, A. Bülau, A. Zimmermann, K. Nakamura, H. Sumiya, S. Onoda, P. Neumann, J. Isoya, and J. Wrachtrup, “Robust and Accurate Electric Field Sensing with Solid State Spin Ensembles,” Nano Letters, vol. 19, pp. 4904–4910, Aug. 2019.
[20] F. Dolde, M. W. Doherty, J. Michl, I. Jakobi, B. Naydenov, S. Pezzagna, J. Meijer, P. Neumann, F. Jelezko, N. B. Manson, and J. Wrachtrup, “Nanoscale Detection of a Single Fundamental Charge in Ambient Conditions Using the NV− Center in Diamond,” Physical Review Letters, vol. 112, p. 097603, Mar. 2014.
[21] T. Mittiga, S. Hsieh, C. Zu, B. Kobrin, F. Machado, P. Bhattacharyya, N. Rui, A. Jarmola, S. Choi, D. Budker, and N. Y. Yao, “Imaging the local charge environment of nitrogen-vacancy centers in diamond,” Physical Review Letters, vol. 121, p. 246402, Dec. 2018.
[22] H. T. Dinani, E. Muñoz, and J. R. Maze, “Sensing electrochemical signals using a nitrogen-vacancy center in diamond,” Nanomaterials, vol. 11, p. 358, Feb. 2021.
[23] M. Liu, P. J. Chimtali, X.-b. Huang, and R.-b. Zhang, “Structures and dynamic properties of the LiPF6 electrolytic solution under electric fields – a theoretical study,” Physical Chemistry Chemical Physics, vol. 21, no. 24, pp. 13186–13193, 2019.
[24] M. Marcinek, J. Syzdek, M. Marczewski, M. Piszcz, L. Niedzicki, M. Kalita, A. Plewa-Marczewska, A. Bitner, P. Wieczorek, T. Trzeciak, M. Kasprzyk, P. Łężak, Z. Zukowska, A. Zalewska, and W. Wieczorek, “Electrolytes for Li-ion transport – Review,” Solid State Ionics, vol. 276, pp. 107–126, Aug. 2015.
[25] B. D. Wood, G. A. Stimpson, J. E. March, Y. N. D. Lekhai, C. J. Stephen, B. L. Green, A. C. Frangeskou, L. Ginés, S. Mandal, O. A. Williams, and G. W. Morley, “Long spin coherence times of nitrogen vacancy centers in milled nanodiamonds,” Physical Review B, vol. 105, p. 205401, May 2022.
[26] D. Misonou, K. Sasaki, S. Ishizu, Y. Monnai, K. M. Itoh, and E. Abe, “Construction and operation of a tabletop system for nanoscale magnetometry with single nitrogen-vacancy centers in diamond,” AIP Advances, vol. 10, p. 025206, Feb. 2020.
[27] B. J. Maertz, A. P. Wijnheijmer, G. D. Fuchs, M. E. Nowakowski, and D. D. Awschalom, “Vector magnetic field microscopy using nitrogen vacancy centers in diamond,” Applied Physics Letters, vol. 96, p. 092504, Mar. 2010.
[28] J. H. N. Loubser and J. A. van Wyk, “Electron spin resonance in the study of diamond,” Reports on Progress in Physics, vol. 41, pp. 1201–1248, Aug. 1978.
[29] E. Van Oort and M. Glasbeek, “Electric-field-induced modulation of spin echoes of N-V centers in diamond,” Chemical Physics Letters, vol. 168, pp. 529–532, May 1990.
[30] E. Abe and K. Sasaki, “Tutorial: Magnetic resonance with nitrogen-vacancy centers in diamond—microwave engineering, materials science, and magnetometry,” Journal of Applied Physics, vol. 123, p. 161101, Apr. 2018.
[31] P. London, P. Balasubramanian, B. Naydenov, L. P. McGuinness, and F. Jelezko, “Strong driving of a single spin using arbitrarily polarized fields,” Physical Review A, vol. 90, p. 012302, July 2014.
[32] M. W. Doherty, N. B. Manson, P. Delaney, F. Jelezko, J. Wrachtrup, and L. C. L. Hollenberg, “The nitrogen-vacancy colour centre in diamond,” Physics Reports, vol. 528, pp. 1–45, July 2013.
[33] N. Zhao, J.-L. Hu, S.-W. Ho, J. T. K. Wan, and R. B. Liu, “Atomic-scale magnetometry of distant nuclear spin clusters via nitrogen-vacancy spin in diamond,” Nature Nanotechnology, vol. 6, pp. 242–246, Apr. 2011.
[34] J. Johansson, P. Nation, and F. Nori, “QuTiP: An open-source Python framework for the dynamics of open quantum systems,” Computer Physics Communications, vol. 183, pp. 1760–1772, Aug. 2012.
[35] J. Johansson, P. Nation, and F. Nori, “QuTiP 2: A Python framework for the dynamics of open quantum systems,” Computer Physics Communications, vol. 184, pp. 1234–1240, Apr. 2013.
[36] H. S. Knowles, D. M. Kara, and M. Atatüre, “Observing bulk diamond spin coherence in high-purity nanodiamonds,” Nature Materials, vol. 13, pp. 21–25, Jan. 2014.
[37] A. Laraoui, F. Dolde, C. Burk, F. Reinhard, J. Wrachtrup, and C. A. Meriles, “High-resolution correlation spectroscopy of 13C spins near a nitrogen-vacancy centre in diamond,” Nature Communications, vol. 4, p. 1651, June 2013.
Quantum sensing of electric field distributions of liquid electrolytes with NV-centers in nanodiamonds - Supplementary Information

I. ELECTRIC FIELD AT CENTER OF NANODIAMOND

In the following we deduce the electric field of a single point charge q at a distance b from the origin of the nanodiamond with radius rND, following Ref. [S1]. Poisson's equation describes the electrostatic potential Φ,

∇²Φ(r) = −ρ(r)/ϵ ,   (S1)

where ϵ = ϵ0 ϵi, i = e, ND, is the permittivity of, respectively, the electrolyte and the nanodiamond in terms of the vacuum permittivity ϵ0. By exploiting the azimuthal symmetry of the problem, the above expression reduces to Laplace's equation for r ≠ b, which in spherical coordinates with |r| = r and θ the angle spanned by r and b is

∇²Φ(r, θ) = (1/r²) ∂/∂r ( r² ∂Φ/∂r ) + (1/(r² sin θ)) ∂/∂θ ( sin θ ∂Φ/∂θ ) = 0 .   (S2)

The general solution of this partial differential equation can be expressed in terms of the Legendre polynomials Pl of order l and two sets of constants Al and Cl as [S1, S2]

Φ(r, θ) = Σ_{l=0}^{∞} ( Al r^l + Cl / r^(l+1) ) Pl(cos θ) .   (S3)

As the potential inside the nanodiamond must be finite at r = 0, Cl needs to vanish and one therefore has

ΦND(r, θ) = Σ_{l=0}^{∞} Al r^l Pl(cos θ) .   (S4)

By using that 1/|r − b| = Σ_{l=0}^{∞} ( r_<^l / r_>^(l+1) ) Pl(cos θ) [S1, S2], with r≷ being the greater (smaller) of |r| and |b|, one can derive the potential in the electrolyte without discontinuity, i.e. without nanodiamond, to be

Φ̃e(r, θ) = q/(4πϵ0ϵe) Σ_{l=0}^{∞} ( r_<^l / r_>^(l+1) ) Pl(cos θ) .   (S5)

The general solution is then given as a superposition of this expression with Eq. (S3), i.e. Φe = Φ̃e + Φ, which reads

Φe(r, θ) = Σ_{l=0}^{∞} [ Cl / r^(l+1) + q/(4πϵ0ϵe) r_<^l / r_>^(l+1) ] Pl(cos θ) ,   (S6)

where it was used that in this case Al = 0 to ensure a vanishing potential at infinite distance from the origin, i.e. Φe → 0 for r → ∞. The constants Al and Cl, which enter into, respectively, Eq. (S4) and Eq. (S6), can be determined by requiring continuity at the interface between electrolyte and nanodiamond,

( ϵe E^e − ϵND E^ND ) · nND = 0 ,   (S7)
( E^e − E^ND ) × nND = 0 ,   (S8)

where nND = r/r is the unit vector normal to the surface of the nanodiamond. These boundary conditions are satisfied if

Al = q/(4πϵ0ϵe) · (1/b^(l+1)) · ϵe (2l + 1) / [ ϵND l + ϵe (l + 1) ] ,   (S9)
Cl = q/(4πϵ0ϵe) · ( l rND^(2l+1) / b^(l+1) ) · ( ϵe − ϵND ) / [ ϵND l + ϵe (l + 1) ] .   (S10)

The electrostatic potential inside the nanodiamond therefore is

ΦND(r, θ) = q/(4πϵ0ϵe) Σ_{l=0}^{∞} (1/b^(l+1)) · ϵe (2l + 1) / [ ϵND l + ϵe (l + 1) ] · r^l Pl(cos θ)   (S11)

and the electric field at the center, i.e. for r = 0, can be calculated as

E(r = 0, θ) = q/(4πϵ0) · 3/(2ϵe + ϵND) · b/b³ ,   (S12)

if it is used that in Cartesian coordinates one has ez = cos θ er − sin θ eθ, with ez the azimuthally symmetric unit vector and er and eθ the radial and altitudinal unit vectors.
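Eq. (S12) is straightforward to evaluate numerically. The sketch below assumes illustrative permittivities of ϵe = 80 (water-like electrolyte) and ϵND = 5.7 (diamond); these numbers, like the charge-to-center distance, are placeholders and not taken from the text. For ϵe = ϵND = 1 the expression reduces to the bare Coulomb field, which serves as a quick sanity check.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity in F/m

def field_at_center(q, b, eps_e, eps_nd):
    # Magnitude of Eq. (S12): |E| at the nanodiamond center for a point
    # charge q (in C) at distance b (in m) from the center
    return abs(q) / (4.0 * math.pi * EPS0) * 3.0 / (2.0 * eps_e + eps_nd) / b**2

# Example: one elementary charge 10 nm away, with assumed placeholder
# permittivities eps_e = 80 and eps_ND = 5.7
E_center = field_at_center(1.602176634e-19, 10e-9, 80.0, 5.7)
```

The 1/b² falloff of the point-charge field survives the dielectric screening; only the prefactor 3/(2ϵe + ϵND) changes.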
A. Electric field variance

The probability of an ion to be located at b within a sphere of radius R around the nanodiamond is

p(b) = (3/4π) · 1/(R³ − rND³)  for rND ≤ b ≤ R ,  and 0 otherwise.   (S13)

It can be easily verified that this distribution is normalized, i.e. ∫ d³b p(b) = 1. Direct calculation reveals ⟨Ez⟩ = 0 and therefore

σ²Ez,ion = ⟨Ez²⟩ = 9q²/(4πϵ0)² · 1/(2ϵe + ϵND)² · 1/(R³ − rND³) · ( 1/rND − 1/R ) .   (S14)

Under the assumption that the electric fields generated by the single ions are uncorrelated, the total fluctuations are given by multiplying the above expression by the number of ions inside the sphere, σ²Ez = c NA V σ²Ez,ion, with NA Avogadro's number, c the molar ionic concentration and V the volume in which the ions reside. The standard deviation of the electric field components therefore is

σEz = |q| / [ ϵ0 (2ϵe + ϵND) ] · √[ (3NA/4π) c ( 1/rND − 1/R ) ] .   (S15)

From this it can be seen that the expected electric field fluctuations increase with the molar concentration, i.e. σEz ∝ √c.
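A numerical sketch of Eq. (S15); the geometry (rND, R) and the permittivities below are assumed placeholder values, not parameters quoted in the text. Quadrupling the concentration doubles the fluctuation, reflecting σEz ∝ √c.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
N_A = 6.02214076e23       # Avogadro's number, 1/mol
Q_E = 1.602176634e-19     # elementary charge, C

def sigma_Ez(c_molar, r_nd, R, eps_e, eps_nd):
    # Eq. (S15); the molar concentration is converted from mol/L to mol/m^3
    c = c_molar * 1e3
    return (Q_E / (EPS0 * (2.0 * eps_e + eps_nd))
            * math.sqrt(3.0 * N_A / (4.0 * math.pi) * c * (1.0 / r_nd - 1.0 / R)))

# Assumed placeholder geometry: r_ND = 15 nm, ion shell radius R = 100 nm
s1 = sigma_Ez(1.0, 15e-9, 100e-9, 80.0, 5.7)   # 1 mol/L
s4 = sigma_Ez(4.0, 15e-9, 100e-9, 80.0, 5.7)   # 4 mol/L -> twice the sigma
```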
II. HAMILTONIAN IN ROTATING FRAME

As derived by Doherty et al. in Ref. [S3], the Hamiltonian of the NV-center in the presence of an axial magnetic field Bz and electric field components Ei, i = x, y, z, with ℏ = 1 is

ĤNV = ( D + d∥Ez ) Ŝz² + γe Bz Ŝz + d⊥ [ Ex ( Ŝy² − Ŝx² ) + Ey ( Ŝx Ŝy + Ŝy Ŝx ) ] ,   (S16)

with γe = 2.8 MHz/G the NV's gyromagnetic ratio [S4] and d∥ = 0.35 Hz·cm/V and d⊥ = 17 Hz·cm/V the axial and transverse dipole moments [S5]. By rewriting this Hamiltonian in terms of its frequency contributions βz = γeBz, ξz = d∥Ez and ξ⊥ = d⊥ √(Ex² + Ey²), and by introducing the electric field polarization φE, which defines the transverse electric field projections via ξx = ξ⊥ cos φE and ξy = ξ⊥ sin φE, Eq. (S16) can be rewritten as

ĤNV = ( D + ξz ) Ŝz² + βz Ŝz − (ξ⊥/2) ( e^(iφE) Ŝ+² + h.c. ) ,   (S17)

where Ŝ± = Ŝx ± iŜy are spin-1 ladder operators and h.c. denotes the hermitian conjugate.

The NV-center can be driven by microwave magnetic fields of amplitude Ω = γeBd and frequency ωd, oriented perpendicular to the NV's symmetry axis. To exert polarized drives onto the NV-center, two wires which are perpendicular to each other (see Fig. 1(a) main text) are operated with a phase φ between each other. This drive can be described by the Hamiltonian [S6]

Ĥd(t) = Ω [ Ŝx cos(ωd t) + Ŝy cos(ωd t + φ) ] .   (S18)

Defining phase factors ϵ±(φ) = ( 1 − i e^(∓iφ) )/2, similarly to Ref. [S6], allows to compactly account for different polarizations, as ϵ+ = 1 only if φ = −π/2 (i.e. right-circular polarization) and ϵ− = 1 for left-circularly polarized microwave fields (φ = +π/2). By transforming ĤNV + Ĥd(t) into a frame oscillating with ωd through the unitary U = e^(iωd t Ŝz²), one can derive the Hamiltonian under the rotating-wave approximation, which is

Ĥ = Ĥ0 + Ĥd ,
Ĥ0 = ( ∆ + ξz ) Ŝz² + βz Ŝz − (ξ⊥/2) ( e^(iφE) Ŝ+² + h.c. ) ,
Ĥd = (Ω/√2) ( ϵ− |0⟩⟨−1| + ϵ+ |1⟩⟨0| + h.c. ) .   (S19)
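As a quick numerical sanity check of Eq. (S19), Ĥ0 can be written as an explicit 3×3 matrix in the {|1⟩, |0⟩, |−1⟩} basis; its eigenvalues should be 0 and ∆ + ξz ± √(βz² + ξ⊥²), the eigenenergies quoted in the following section. All parameter values below are arbitrary test numbers.

```python
import numpy as np

def H0(delta, beta_z, xi_z, xi_perp, phi_E):
    # Rotating-frame Hamiltonian H0 of Eq. (S19) in the {|1>, |0>, |-1>} basis
    Sz = np.diag([1.0, 0.0, -1.0])
    Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)   # spin-1 raising operator S+
    Sp2 = Sp @ Sp                                   # S+^2 couples |-1> to |1>
    H = (delta + xi_z) * (Sz @ Sz) + beta_z * Sz
    H = H - 0.5 * xi_perp * (np.exp(1j * phi_E) * Sp2
                             + np.exp(-1j * phi_E) * Sp2.conj().T)
    return H

# Arbitrary test parameters
H = H0(delta=1.0, beta_z=0.3, xi_z=0.2, xi_perp=0.5, phi_E=0.7)
evals = np.sort(np.linalg.eigvalsh(H))
```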
A. Derivation of time-evolution operators

To allow for the efficient calculation of pulse sequences, time-evolution operators of the free evolution F̂(τ) and of the drive R̂(T) are derived in the following.

1. Free Evolution

A possible set of eigenstates of Ĥ0 is {|0⟩, |+⟩, |−⟩} with

|+⟩ = cos(θ/2) e^(iφE/2) |1⟩ + sin(θ/2) e^(−iφE/2) |−1⟩ ,
|−⟩ = sin(θ/2) e^(iφE/2) |1⟩ − cos(θ/2) e^(−iφE/2) |−1⟩ ,   (S20)

where tan θ = −ξ⊥/βz, with corresponding eigenenergies ω0 = 0 and ω± = ∆ + ξz ± √(βz² + ξ⊥²). The time-evolution operator of Ĥ0 is F̂(τ) = Σ_{i∈{0,±}} e^(−iωiτ) |i⟩⟨i|, where the sum runs over all eigenstates of Ĥ0. In the basis {|0⟩, |±1⟩} this is

F̂(τ) = |0⟩⟨0| + e^(−iτ(∆+ξz)) [ (iξ⊥/x) sin(τx) ( e^(iφE) |1⟩⟨−1| + h.c. )
+ ( cos(τx) − i(βz/x) sin(τx) ) |1⟩⟨1|
+ ( cos(τx) + i(βz/x) sin(τx) ) |−1⟩⟨−1| ] .   (S21)

Here the frequency of oscillation has been defined as x = √(βz² + ξ⊥²).
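A minimal sketch implementing Eq. (S21) directly as a matrix; since F̂(τ) is a time-evolution operator it must be unitary and must reduce to the identity at τ = 0, both of which are easy to verify numerically. The parameter values are arbitrary.

```python
import numpy as np

def F(tau, delta, beta_z, xi_z, xi_perp, phi_E):
    # Free-evolution operator of Eq. (S21) in the {|1>, |0>, |-1>} basis
    x = np.hypot(beta_z, xi_perp)             # oscillation frequency
    c, s = np.cos(tau * x), np.sin(tau * x)
    ph = np.exp(-1j * tau * (delta + xi_z))   # common phase of the |+-1> block
    U = np.zeros((3, 3), dtype=complex)
    U[1, 1] = 1.0                                               # |0><0|
    U[0, 0] = ph * (c - 1j * beta_z / x * s)                    # |1><1|
    U[2, 2] = ph * (c + 1j * beta_z / x * s)                    # |-1><-1|
    U[0, 2] = ph * 1j * xi_perp / x * s * np.exp(1j * phi_E)    # |1><-1|
    U[2, 0] = ph * 1j * xi_perp / x * s * np.exp(-1j * phi_E)   # |-1><1|
    return U

U = F(0.8, 1.0, 0.3, 0.2, 0.5, 0.7)
```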
2. Microwave Drive

To derive operators which describe the action of the microwave pulses, it will be assumed that these pulses exceed all other frequency scales in magnitude, i.e. Ω ≫ ∆, βz, ξz, ξ⊥, such that Ĥ ≈ (Ω/√2) H̃d with H̃d = ϵ− |0⟩⟨−1| + ϵ+ |1⟩⟨0| + h.c. By noting that H̃d³ = H̃d, the time evolution

R̂(t) = e^(−itĤd) = Σ_{n=0}^{∞} [ (−itΩ/√2)^n / n! ] H̃d^n   (S22)

can be calculated as

R̂(t) = |1⟩⟨1| ( 1 − |ϵ+|² ) + |−1⟩⟨−1| ( 1 − |ϵ−|² ) − ϵ+ϵ− |1⟩⟨−1| − ϵ+* ϵ−* |−1⟩⟨1|
+ cos(tΩ/√2) [ |0⟩⟨0| + |ϵ+|² |1⟩⟨1| + |ϵ−|² |−1⟩⟨−1| + ϵ+ϵ− |1⟩⟨−1| + ϵ+* ϵ−* |−1⟩⟨1| ]
− i sin(tΩ/√2) ( ϵ− |0⟩⟨−1| + ϵ+ |1⟩⟨0| + h.c. ) .   (S23)

Depending on the polarization, one can induce Rabi oscillations between |0⟩ and either |−1⟩ for φ = π/2 (denoted as R̂+) or |+1⟩ (φ = −π/2, R̂−),

R̂(t)± = |∓1⟩⟨∓1| + cos(Ωt/√2) [ |0⟩⟨0| + |±1⟩⟨±1| ] − i sin(Ωt/√2) [ |0⟩⟨±1| + h.c. ] .   (S24)

The system can be driven to both |±1⟩ if a linearly polarized drive is used,

R̂(t)₀ = (1/2) ( |1⟩⟨1| + |−1⟩⟨−1| + i|1⟩⟨−1| − i|−1⟩⟨1| )
+ cos(tΩ/√2) [ |0⟩⟨0| + (1/2) ( |1⟩⟨1| + |−1⟩⟨−1| − i|1⟩⟨−1| + i|−1⟩⟨1| ) ]
− [ (1 + i)/2 ] sin(tΩ/√2) ( |0⟩⟨−1| + |1⟩⟨0| + h.c. ) .   (S25)

The last expression can be written more compactly by noting that (1 ± i)/2 = e^(±iπ/4)/√2. These operators can then be used to describe the action of (polarized) π- and π/2-pulses onto the |ms = 0, ±1⟩ states of the NV-center.
III. FOURIER TRANSFORMATION OF FID-SIGNAL

Signals f and f̃ in the time and frequency domain are connected to each other as

f̃(ω) = FT[f(τ)] = ∫_{−∞}^{+∞} dτ f(τ) e^(−iωτ) ,
FT⁻¹[f̃(ω)] = (1/2π) ∫_{−∞}^{+∞} dω f̃(ω) e^(iωτ) .   (S26)

To simplify the calculation of the Fourier-transformed FID-signal, one can rewrite FID_{ξ⊥,φE} (Eq. (6) main text) as

FID_{ξz,ξ⊥}(τ) = (1/4) [ 3/2 + (1/2) cos(2τξ⊥) − cos(τ[ξ⊥ + ξz]) − cos(τ[ξ⊥ − ξz]) ] .   (S27)

From Eq. (S26) one sees that FT[cos(τx)] = π [ δ(x − ω) + δ(x + ω) ] and therefore

FĨD(ω) = (π/4) [ (3/2) δ(ω) + (1/2) [ δ(2ξ⊥ − ω) + δ(2ξ⊥ + ω) ]
− [ δ(ξ⊥ + ξz − ω) + δ(ξ⊥ + ξz + ω) ]
− [ δ(ξ⊥ − ξz − ω) + δ(ξ⊥ − ξz + ω) ] ] .   (S28)
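The delta-peak structure of Eq. (S28) can be reproduced numerically by discretely Fourier transforming Eq. (S27): after removing the DC offset, the spectrum is concentrated near ξ⊥ − ξz, ξ⊥ + ξz and 2ξ⊥. The example frequencies below are arbitrary.

```python
import numpy as np

def fid(tau, xi_z, xi_perp):
    # Time-domain FID signal of Eq. (S27)
    return 0.25 * (1.5 + 0.5 * np.cos(2.0 * tau * xi_perp)
                   - np.cos(tau * (xi_perp + xi_z))
                   - np.cos(tau * (xi_perp - xi_z)))

xi_z, xi_perp = 0.3, 1.0                # arbitrary example frequencies
n, dt = 16384, 0.05
tau = np.arange(n) * dt
sig = fid(tau, xi_z, xi_perp)
spec = np.abs(np.fft.rfft(sig - sig.mean()))   # drop the DC delta of Eq. (S28)
omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)

def peak(w0, width=0.05):
    # Largest spectral amplitude within `width` of angular frequency w0
    return spec[np.abs(omega - w0) < width].max()
```

Comparing `peak` at the predicted frequencies against an off-peak control frequency makes the delta-comb of Eq. (S28) visible despite the finite observation window.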
IV. SIMULATED PULSE SEQUENCES FOR NORMALLY DISTRIBUTED ELECTRIC FIELDS

FIG. S1. Simulated expected FID-values of FIDξ⊥ (Eq. (5) main text), calculated from 500 individual FID-simulations with a drive amplitude of Ω = 10 MHz, intrinsic T∗2,int. and electric field components sampled from a normal distribution with mean Em and standard deviation σE. Dephasing is considered through a Lindblad operator √(1/T∗2,int.) Ŝz. For both mean electric field values of (a) 1.0 V/µm and (b) 4.0 V/µm, it is not possible to resolve ξ⊥.

To understand how fluctuating electric fields alter the FID-signal, we numerically [S7, S8] simulated FIDξ⊥ (Eq. (5) main text) for normally distributed electric fields. Hereby, at every timestep at which the time evolution is calculated, the electric field components are drawn from a beforehand sampled normal distribution with mean Em and standard deviation σE. It can be seen from Fig. S1 that the average FIDξ⊥ signal decays rapidly to its steady-state value of 1/2, which is due to the short T∗2 time of 1 µs. For this reason it is proposed to use the Hahn-Echo pulse sequence for measurements of strongly fluctuating electric fields.

FIG. S2. Example of the average Hahn-echo signal ⟨Hahn(τ)⟩ versus τ (simulation and fit), obtained numerically from 1000 individual simulations of the pulse sequence shown in Fig. 3(a) (main text) with a mean electric field value of Em = 1.0 V/µm, standard deviation σE = 0.75 V/µm, drive amplitude Ω = 10 MHz and intrinsic T2,int. = 100 µs, together with the fit following Eq. (10) (main text). The total T2 value obtained from this fit is T2 = (39.87 ± 0.86) µs.

As described in the main text, the numerically obtained Hahn-echo trajectories (see Fig. S2 for an example) are well fitted by ⟨Hahn(τ)⟩ = (1/4) [1 − cos(2τξ⊥) e^(−τ/T2)]². Here both the intrinsic T2,int. = 100 µs and T2,E due to fluctuating electric fields contribute to the total T2 via

1/T2 = 1/T2,int. + 1/T2,E .   (S29)

The latter can be fitted in terms of Em and σE via

T2,E = αEm/σE² .   (S30)

The values of the fit parameter α can be found in Fig. S3.

FIG. S3. Fit parameter α as a function of Em, obtained by numerically fitting Eq. (S29) and Eq. (S30) with T2,int. = 100 µs to the data from Fig. 3 (main text).
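Because Eqs. (S29) and (S30) combine into 1/T2 − 1/T2,int. = σE²/(αEm), the fit for α is linear in σE²/Em and can be done by plain least squares. The sketch below generates synthetic data with an assumed α = 35 (an illustrative number, not the paper's fitted value) and recovers it.

```python
import numpy as np

T2_INT = 100.0  # us, intrinsic coherence time

def total_T2(alpha, Em, sigma_E):
    # Eqs. (S29) and (S30) combined into the total coherence time
    return 1.0 / (1.0 / T2_INT + sigma_E**2 / (alpha * Em))

def fit_alpha(Em, sigma_E, T2):
    # 1/T2 - 1/T2_int = (1/alpha) * sigma_E^2 / Em is linear through the
    # origin, so alpha follows from a one-parameter least-squares slope
    x = sigma_E**2 / Em
    y = 1.0 / T2 - 1.0 / T2_INT
    return np.sum(x * x) / np.sum(x * y)

# Synthetic data generated with an assumed alpha = 35 (illustrative only)
rng = np.random.default_rng(0)
Em = rng.uniform(0.5, 2.5, 50)
sigma_E = rng.uniform(0.25, 1.5, 50)
alpha_hat = fit_alpha(Em, sigma_E, total_T2(35.0, Em, sigma_E))
```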
[S1] R. Messina, “Image charges in spherical geometry: Application to colloidal systems,” The Journal of Chemical Physics, vol. 117, no. 24, pp. 11062–11074, Dec. 2002.
[S2] J. D. Jackson, Klassische Elektrodynamik, De Gruyter, Dec. 2006.
[S3] M. W. Doherty, F. Dolde, H. Fedder, F. Jelezko, J. Wrachtrup, N. B. Manson, and L. C. L. Hollenberg, “Theory of the ground-state spin of the NV center in diamond,” Physical Review B, vol. 85, no. 20, p. 205203, May 2012.
[S4] E. Abe and K. Sasaki, “Tutorial: Magnetic resonance with nitrogen-vacancy centers in diamond—microwave engineering, materials science, and magnetometry,” Journal of Applied Physics, vol. 123, no. 16, p. 161101, Apr. 2018.
[S5] E. Van Oort and M. Glasbeek, “Electric-field-induced modulation of spin echoes of N-V centers in diamond,” Chemical Physics Letters, vol. 168, no. 6, pp. 529–532, May 1990.
[S6] P. London, P. Balasubramanian, B. Naydenov, L. P. McGuinness, and F. Jelezko, “Strong driving of a single spin using arbitrarily polarized fields,” Physical Review A, vol. 90, no. 1, p. 012302, July 2014.
[S7] J. Johansson, P. Nation, and F. Nori, “QuTiP: An open-source Python framework for the dynamics of open quantum systems,” Computer Physics Communications, vol. 183, no. 8, pp. 1760–1772, Aug. 2012.
[S8] J. Johansson, P. Nation, and F. Nori, “QuTiP 2: A Python framework for the dynamics of open quantum systems,” Computer Physics Communications, vol. 184, no. 4, pp. 1234–1240, Apr. 2013.
|
1228 |
+
|
8tE3T4oBgHgl3EQfSAk3/content/tmp_files/load_file.txt ADDED (diff too large to render; see raw diff)
99A0T4oBgHgl3EQfO__U/content/2301.02170v1.pdf ADDED (Git LFS pointer, size 502938)
99A0T4oBgHgl3EQfO__U/vector_store/index.faiss ADDED (Git LFS pointer, size 7798829)
9NE4T4oBgHgl3EQfdgy1/content/2301.05092v1.pdf ADDED (Git LFS pointer, size 299893)
9NE4T4oBgHgl3EQfdgy1/vector_store/index.pkl ADDED (Git LFS pointer, size 152679)
9dE1T4oBgHgl3EQf8AVQ/content/tmp_files/2301.03540v1.pdf.txt ADDED
3D ZEROS IN ELECTROMAGNETIC FIELDS

Alex J. Vernon1†, Mark R. Dennis2‡, and Francisco J. Rodríguez-Fortuño1∗
1Department of Physics and London Centre for Nanotechnology, King's College London, Strand, London WC2R 2LS, UK
2School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK
∗francisco.rodriguez [email protected]
Abstract. We present a study of 3D electromagnetic field zeros, uncover their remarkable characteristic features, and propose a classifying framework. These are a special case of general dark spots in optical fields, which sculpt light's spatial structure into matter-moving, information-rich vortices, escape the diffraction limit for single-molecule imaging, and can trap particles for nanoscale manipulation. Conventional dark spots are two-dimensional in two aspects: they are localised in a plane, and they have a non-zero out-of-plane field component. We focus on non-paraxial fields, where three-dimensional dark spots can exist non-stably at fully localised points, making distinct imprints in the flux of energy and momentum and in the light's polarisation texture. With this work, we hope to enhance current dark-spot applications, and to inspire new ones that are impossible with lower-dimensional zeros.
1. Introduction
An optical vortex is the name commonly given to a zero in a complex scalar field, such as a component of the electric E or magnetic H field. Vortices in these components occur naturally in general 3D monochromatic interference [1], where they are infinitely thin continuous strands, either extending infinitely through space or coiled into knotted, un-knotted or linked closed loops [2]–[5]. On a vortex strand, the phase of the complex scalar field (with zero real and imaginary parts) is undefined, creating circulation in the phase of the rest of the field. This phase increases in a clockwise or anti-clockwise sense by an integer multiple of 2π along any closed loop containing one vortex line. Vortex lines in optics have direct analogues in acoustics and water waves, and, as a type of topological defect, are related to vortices in (super)fluids [6] and in Bose-Einstein condensates [7], and even cosmic strings [8]. Strong research interest in optical vortices over the past 30 years, combined with the availability of instruments and the flexibility in generating [9]–[12] and structuring [13] vortex-carrying beams, has positioned optics to act as a sandbox for exploring topological phenomena that appear more broadly across physics.
arXiv:2301.03540v1 [physics.optics] 9 Jan 2023

When considering the full 3D vector characteristics of an optical field, vortex lines in individual field components like Ex, Ey, and Ez are basis-dependent and not so physically meaningful. By picturing these different scalar vortex threads permeating the vector field, we can appreciate how unlikely it is that the optical field is zero at a point (i.e. E = 0, all three components simultaneously zero) in typical 3D interference: the vortex lines in each of the three field components would have to meet at such a zero point, requiring the manipulation of three extra parameters beyond the spatial x, y, z. Despite the rarity of zeros in the wild, a lower-dimensional version can be readily manufactured in optical beams, and is remarkably well-studied. Paraxial doughnut beams have an axial zero in the transverse field surrounded by a bright ring, and are used in modern spectroscopy techniques [14], [15] because of the zero's immunity to the diffraction limit. The transverse field effectively consists of one or two scalar components with the vortex line along the beam axis, causing the real part of the local wavevector to curl around the axis and imbue the beam with intrinsic orbital angular momentum. The longitudinal field, meanwhile, is non-zero (albeit very small due to paraxiality) in the centre of the beam, which is therefore better imagined not as an exact axial zero, but as a dim line of linear polarisation (an L line) polarised parallel to the beam direction. This, and its confinement in only two dimensions, stretching along the third, is why we refer to the almost-dark centre of the doughnut beam as a two-dimensional zero. Its topological index is straightforward to define by counting how many multiples of 2π the phases of the transverse components climb through over an enclosing circuit. The intrinsic orbital angular momentum carried by doughnut beams is the key property of the spatial structure of light that can rotate matter [16], [17] and store information [18]–[20].
Surprisingly, the fully localised, three-dimensional optical field zero, E = 0, has been left largely unexplored. This is probably due to its unstable nature: a perturbation will destroy the zero point (i.e. cause the vortices in the three components no longer to coincide). Nevertheless, such a point is theoretically possible and can be artificially synthesised [21], but very little is understood about how it is imprinted into the surrounding field, and there is no classifying topological index like the topological charge of a 2D vortex. The 3D electromagnetic field zero is the focus of this work. A zero in the E-field alone has codimension 6, requiring that the six total degrees of freedom of two real, three-dimensional vectors (the real and imaginary parts of the three components of E) are suppressed simultaneously. This means 3D zeros exist stably only in a six-dimensional parameter space, and it is why optical field zeros are not natural in random interference patterns spanning only three spatial dimensions, being hidden by instability. Instead, 3D zeros must be revealed by tuning an additional three parameters (this is discussed in [22] for a zero in two electric field components). Some of these parameters could be the polarisation components of a plane wave, for example, and in fact, 3D zeros can be very easily manufactured and controlled in pure plane-wave interference or near fields with a simple technique [21], and their higher-dimensional confinement could provide a greater degree of precision in dark-spot spectroscopy. Due to their electric field dependence, the zero in E is coupled to a collection of singularities, each with its own topological signature, in various physical quantities associated with the light field, including the complex Poynting vector, canonical momentum, spin momentum and spin angular momentum. Learning how energy flow and momentum circulate around a 3D vortex could inspire applications which would be otherwise unfeasible using typical lower-dimensional zeros. Alternatively, the magnetic field H may vanish at a point, or, more extremely, both E and H might simultaneously vanish, giving a true electromagnetic null with codimension 12. Here, we report the key features of a 3D electric or magnetic field zero, including the way that polarisation singularities are forced to intersect and the flux of the complex Poynting vector and canonical and spin momentum. With these findings, for the first time, we propose a framework to classify the physically realisable varieties of 3D field zero.
2. Results

To contextualise our study, we begin with some brief intuition on the special features which we might expect to find near a 3D zero.

If either E or H is zero at a point r0, then, for that field, say E, the fluxes of energy, canonical momentum, spin angular momentum (and other quantities) are zero too. Since these fluxes are vector quantities, their direction is singular at r0 and an imprint is made in the surrounding space, where they are well-defined. In three spatial dimensions, even if these fluxes are divergence-less, there is more than one possible (topologically unique) imprint which can be left by, and characterise, the zero in E. The electric field spin is particularly interesting, because its zeros (in non-paraxial fields) are codimension-2 objects, meaning they are one-dimensional continuous lines, defining the threads of pure linear electric polarisation. This continuity should require at least one zero-spin line, an L line, to pass through r0. A similar argument can be made for lines of pure circular electric polarisation, except that C lines are defined by a complex quadratic equation, E · E = 0, equivalent to a real quartic equation, |E · E|² = 0, which has either zero, two or four real roots. It turns out, as we will show, that a given number of C lines and L lines must always intersect in a 3D electric field zero. Before reporting these and other findings in detail from mathematical argument and analytical simulations in section 2.3 and beyond, the next two subsections 2.1 and 2.2 provide an overview of polarisation singularities and set out our way of classifying 3D field zeros using dyadics associated with the field.
2.1. Overview of Polarisation Singularities in Paraxial and Non-Paraxial Fields. L lines and C lines are called polarisation singularities and are the vector version of scalar vortex lines in wave fields, existing in light [23]–[25] and in acoustic and water waves [26] (both acoustic and water waves have a vector nature [27], [28]), where some property of the general polarisation ellipse is not defined. In 3D fields, polarisation singularities are often described as the underlying skeleton which embeds highly complex topologies into the field's polarisation texture [29], [30]. Polarisation singularities have been studied in full 3D and in paraxial fields [31], where in paraxial fields (considering only the two transverse field components), polarisation is circular at points and linear along lines. Propagating the paraxial field (maintaining the transverse polarisation) draws out the C points and L lines in the transverse plane into C lines and L surfaces in three dimensions.

A polarisation ellipse has orthogonal semi-major and semi-minor axes, telling us which way the ellipse is oriented. But because a polarisation circle has no semi-major or semi-minor axes, at a C point the orientation of the circle is undefined, causing neighbouring polarisation ellipses (almost circular) to rotate when tracked along a C point-enclosing loop. The ellipse major axis is described throughout space with a line field, in that the axis is oriented some way in space but does not point one way or another: an ellipse looks identical to its 180-degree-rotated self. This means that along the enclosing circuit, the rotating ellipses turn continuously through an integer multiple of π radians, rather than 2π, which is why C points are assigned a half-integer index. When the field is fully three-dimensional and the polarisation ellipse is free to tilt in any Cartesian direction, circular polarisation still occurs along one-dimensional threads (C lines, which are no longer straight as in the paraxial case), but the surrounding polarisation ellipses also twist, so that their major axes sweep out Möbius strips [32]–[34]. Analogues of C lines exist in polychromatic fields, shaping the rest of the field into other remarkable topological structures [35].
L lines/L surfaces in paraxial fields (ignoring longitudinal fields) separate regions of left- and right-handed polarisation ellipses. In non-paraxial fields, L lines are strictly one-dimensional lines (not surfaces) and complement C lines in shaping the surrounding polarisation structure. This reduction of dimension of the L entity occurs because, to be linearly polarised, the real and imaginary parts of the field (say E = p + iq) need to be (anti)parallel (not necessarily equal). If E is paraxial and linearly polarised, then in the transverse plane the ratio of the x components of p and q must equal the ratio of their y components: a single condition, dissolving only one degree of freedom of one vector relative to the other. If E is non-paraxial, then an extra condition accounting for the ratio of the z components of p and q must be satisfied for linear polarisation [23]. Between paraxial and full 3D fields, the linear polarisation object's codimension, which is the dimension of the electric spin angular momentum field SE minus the dimension of the L line/L surface which lies in SE, increases from one to two. The spin angular momentum of the field is zero when it is linearly polarised, meaning the direction of the normal to the field oscillations cannot be defined. Drawing a circuit around an L line, the spin vector rotates through 2π radians in a clockwise or anti-clockwise sense, and this defines the L line's topological index.

The characteristics of scalar vortices, C lines and L lines are visualised in Fig. 1.
2.2. Indexing Point-like Singularities. Polarisation singularities occur equally often among the general polarisation ellipses in E and H fields, and need not coincide with each other. Phase singularities, C lines and L lines are all indexed by looking at the circulation or rotation of a scalar or vector quantity around a loop enclosing the singularity of interest [36]. All three of these singularities are threads in 3D fields, but the winding number concept can be generalised to higher-dimensional singularities and calculated for point-like, 3D vector singularities via the topological degree. Instead of integrating a quantity associated with a line singularity around a 1D closed circuit, for isolated singular points in 3D we should integrate an appropriate quantity over a closed surface enclosing the point singularity. For a vector V(rS) on a surface S (rS ∈ S) in 3D real space, for example, the topological degree of rS ↦ V (the mapping from the real-space surface rS to V) is a calculation of the integer number of times that every possible direction of V is realised (on a sphere) over all the points rS on the surface S. As with other kinds of topological singularities in physical fields, the most easily realised topological degrees (winding numbers) are ±1. Mathematically, a 0, ±1 topological degree is the integral of the determinant of the dyadic D(V) of V over S, divided by A, the area of S,
\[ \deg(\mathbf{V}) = \frac{1}{A}\int_S \det\bigl(D(\mathbf{V})\bigr)\,\mathrm{d}S. \tag{1} \]
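Equation (1) can be checked numerically for a linear map V(rS) = A rS, whose degree is sign(det A). Below is a minimal sketch (assuming NumPy; the equivalent solid-angle form of the degree integral is evaluated on a θ, φ grid, and the grid size is arbitrary):

```python
import numpy as np

def topological_degree(A, n=400):
    """Numerical topological degree of the map r_S -> V = A r_S over a sphere.

    Evaluates how many times the direction of V covers the unit sphere,
    via (1/4pi) * integral of V_hat . (dV_hat/dtheta x dV_hat/dphi).
    For an invertible linear A this equals sign(det A).
    """
    th = np.linspace(1e-3, np.pi - 1e-3, n)          # avoid the poles
    ph = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    r = np.stack([np.sin(TH)*np.cos(PH), np.sin(TH)*np.sin(PH), np.cos(TH)])
    V = np.einsum("ij,jkl->ikl", A, r)
    V /= np.linalg.norm(V, axis=0)                   # direction of V only
    dth = np.gradient(V, th, axis=1)
    dph = np.gradient(V, ph, axis=2)
    area = np.einsum("ikl,ikl->kl", V, np.cross(dth, dph, axis=0))
    return area.sum() * (th[1] - th[0]) * (ph[1] - ph[0]) / (4*np.pi)

print(round(topological_degree(np.diag([1.0, 1.0, -1.0]))))  # -1 (saddle)
print(round(topological_degree(-np.eye(3))))                 # -1 (sink)
```

Both example maps have degree -1 despite behaving very differently at the origin, which motivates the eigenvalue-sign classification discussed next.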
The dyadic D(V), also called the Jacobian matrix of V, contains the first-order spatial derivatives of each component of V. The sign of the determinant of D(V) equals the product of the signs of its eigenvalues. For 3D vectors, where D(V) is a 3 × 3 matrix, it is possible for drastically different behaviour of V to be hidden under the same topological degree. For example, if V(r = 0) = 0 (meaning the direction of V is singular at the origin) and we assume that a linear map from an origin-enclosing surface to V has a topological degree of −1, then D(V) at r = 0 could have signed eigenvalues (in any order) of + + − or − − −. Physically, the origin could be either a saddle point or a sink for V, with no distinction in topological degree, because both + + − and − − − eigenvalue sets multiply to a negative sign. Rather than calculating the topological degree, to try to classify the flux of energy and canonical momentum through a 3D optical field zero we use the signs of the eigenvalues of their first-order dyadics evaluated at the position of the field zero.

[Figure 1 appears here.]

Figure 1. Visualisation of scalar and polarisation singularities in a non-paraxial electromagnetic field. Scalar vortices (black line) exist in complex scalar fields, such as the components of E, where the scalar field is zero and its phase is undefined, forming 1D threads in the interference of three or more plane waves. Around a scalar vortex line, the phase of the field increases by an integer l multiple of 2π in a clockwise or anticlockwise sense. Singular lines exist in the complex vector characteristic of E and H fields, called polarisation singularities, which include C lines (lines of circular polarisation) and L lines (lines of linear polarisation). In a circuit around a point on a C line (blue line), in the plane of the polarisation circle at that point, nearby polarisation ellipses rotate through an integer multiple of π radians. Around an L line (green line), the normal to nearby polarisation ellipses rotates by an integer multiple of 2π radians.
We use the ideas discussed here to report our findings in the following subsections, beginning with the six possible ways that C lines and L lines can intersect in a 3D zero.
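The eigenvalue-sign bookkeeping described above can be sketched numerically. The example matrices below are illustrative assumptions, not field data:

```python
import numpy as np

def classify_zero(D):
    """Degree and eigenvalue-sign pattern of a real first-order dyadic D.

    The topological degree of the linearised field V = D v is sign(det D),
    but the eigenvalue signs distinguish behaviours that share one degree.
    """
    degree = int(np.sign(np.linalg.det(D)))
    signs = "".join("+" if e > 0 else "-" for e in np.linalg.eigvals(D).real)
    return degree, signs

# A saddle (+,+,-) and a sink (-,-,-) share topological degree -1:
print(classify_zero(np.diag([1.0, 1.0, -1.0]))[0])   # -1
print(classify_zero(-np.eye(3))[0])                  # -1
```

The sign pattern, not the degree alone, is what separates a saddle from a sink, which is exactly the distinction used for the Poynting-vector dyadics in section 2.4.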
2.3. Polarisation Singularities at a 3D Electric Field Zero. We will focus on a 3D electric field zero at a position r0, that is, E(r0) = 0, and study the nearby strands of circular and linear electric polarisation. Identical arguments to those given here could be made for magnetic field zeros (H(r0) = 0) and magnetic polarisation singularities, or for simultaneous electric and magnetic field zeros (E(r0) = H(r0) = 0) and polarisation singularities of either E or H. Any smooth function of r is nearly linear over small distances, which means all fundamental behaviour of the electric field in the immediate vicinity of the zero is captured by its Jacobian, JE = D(E), a complex 3 × 3 matrix containing all first-order spatial derivatives of Ex, Ey and Ez, evaluated at r0,
\[ J_E = D(\mathbf{E}) = \begin{pmatrix} \dfrac{\partial E_x}{\partial x} & \dfrac{\partial E_x}{\partial y} & \dfrac{\partial E_x}{\partial z} \\[1ex] \dfrac{\partial E_y}{\partial x} & \dfrac{\partial E_y}{\partial y} & \dfrac{\partial E_y}{\partial z} \\[1ex] \dfrac{\partial E_z}{\partial x} & \dfrac{\partial E_z}{\partial y} & \dfrac{\partial E_z}{\partial z} \end{pmatrix} = (\nabla \otimes \mathbf{E})^T. \tag{2} \]
The Jacobian of the magnetic field at r0, JH, can be defined similarly. In free space, JE and JH are always traceless because E and H are divergence-free. Maxwell's equations also require that if E(r0) = 0, then JH must be symmetric at r0, and vice versa for H(r0) = 0.
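The tracelessness of JE for a divergence-free field is easy to verify numerically. A small sketch, using an assumed toy field of five plane waves with transverse polarisations and central-difference derivatives:

```python
import numpy as np

# Toy divergence-free field: five plane waves with transverse polarisations
# (random, illustrative parameters; unit-magnitude wavevectors).
rng = np.random.default_rng(0)
k = rng.standard_normal((5, 3))
k /= np.linalg.norm(k, axis=1, keepdims=True)
pol = rng.standard_normal((5, 3)) + 1j*rng.standard_normal((5, 3))
pol -= k * np.einsum("ij,ij->i", pol, k)[:, None]     # enforce pol . k = 0

def E(r):
    """Complex field of the superposition at position r."""
    return pol.T @ np.exp(1j * (k @ r))

def jacobian(field, r0, h=1e-6):
    """Central-difference estimate of (J)_ij = dE_i/dx_j at r0."""
    J = np.zeros((3, 3), dtype=complex)
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        J[:, j] = (field(r0 + dr) - field(r0 - dr)) / (2*h)
    return J

J = jacobian(E, np.array([0.3, -0.2, 0.1]))
print(abs(np.trace(J)))    # ~0: J_E is traceless, as divergence-freeness demands
```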
We make a first-order approximation of the electric field vector near r0 with
\[ \tilde{\mathbf{E}} = J_E\,\mathbf{v}, \tag{3} \]
where v = r − r0.

Nearby C lines emerge in our approximated field wherever Ẽ · Ẽ = 0, which we may calculate using (3) and separate into real and imaginary parts,
\[ \tilde{\mathbf{E}} \cdot \tilde{\mathbf{E}} = (J_E\mathbf{v}) \cdot (J_E\mathbf{v}) = \mathbf{v}^T M \mathbf{v} + i\,\mathbf{v}^T N \mathbf{v}, \tag{4} \]
where $M = \mathrm{Re}\{J_E^T J_E\}$ and $N = \mathrm{Im}\{J_E^T J_E\}$. The two terms in equation (4) are quadric surfaces connecting constant-valued real and imaginary parts of Ẽ · Ẽ, and the real and imaginary surfaces described by setting (4) equal to zero cross in real space where Ẽ is circularly polarised. The real 3 × 3 matrices M and N are symmetric and always have real eigenvalues. Normally, these eigenvalues have signs + + − or − − + (in any order), so that the surfaces vᵀMv = 0 and vᵀNv = 0 are both double cones, vertices touching at v = 0, as shown in Fig. 2(a). The cones have an elliptical cross section whose ellipticity is constant with distance from v = 0 in the linear approximation. Because two ellipses can intersect at either zero, two or four points (as shown in the lower part of Fig. 2(a)), there must be either zero, two or four C lines passing through the electric field zero. If one matrix, say M, is positive or negative definite (all positive or all negative eigenvalues), Re{Ẽ · Ẽ} will solely increase or decrease in all outward directions from v = 0. Then the constant-valued surface vᵀMv = C becomes an ellipsoid, vᵀMv = 0 is satisfied only at v = 0, and no C lines pass through the 3D vortex.
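The cone construction above suggests a direct way to count C lines for a given Jacobian: parametrise one branch of the M-cone and count sign changes of vᵀNv along it. A sketch (assuming NumPy and a generic, non-degenerate J):

```python
import numpy as np

def count_c_lines(J, n=20000):
    """Count C lines through a 3D field zero with electric Jacobian J.

    Near the zero E ~ Jv, and C lines satisfy v^T M v = v^T N v = 0 with
    M = Re{J^T J}, N = Im{J^T J}.  If M is definite there is no real cone
    and no C lines; otherwise walk once around a branch of the cone
    v^T M v = 0 and count the sign changes of v^T N v.
    """
    A = J.T @ J
    M, N = A.real, A.imag
    lam, Q = np.linalg.eigh(M)
    if np.all(lam > 0) or np.all(lam < 0):
        return 0                              # M definite: no C lines
    if np.sum(lam > 0) == 1:                  # flip so signature is (+,+,-)
        lam = -lam                            # same zero set as M
    ip = np.flatnonzero(lam > 0)
    im = np.flatnonzero(lam < 0)[0]
    t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    w = np.zeros((3, n))
    w[ip[0]] = np.cos(t) / np.sqrt(lam[ip[0]])
    w[ip[1]] = np.sin(t) / np.sqrt(lam[ip[1]])
    w[im] = 1.0 / np.sqrt(-lam[im])
    v = Q @ w                                 # directions with v^T M v = 0
    f = np.einsum("kn,kl,ln->n", v, N, v)
    return int(np.count_nonzero(np.sign(f) != np.sign(np.roll(f, -1))))

rng = np.random.default_rng(7)
J = rng.standard_normal((3, 3)) + 1j*rng.standard_normal((3, 3))
print(count_c_lines(J))                       # 0, 2 or 4 for a generic J
```

Each sign change marks one intersection of the two cone cross sections, i.e. one C line, so the count is always zero, two or four, in line with the argument above.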
To reveal the number of L lines that extend through the 3D electric field zero, we must calculate the electric field spin, given by
\[ \mathbf{S}_E \propto \mathrm{Im}\{\mathbf{E}^* \times \mathbf{E}\} = 2\,\mathrm{Re}\{\mathbf{E}\} \times \mathrm{Im}\{\mathbf{E}\}. \tag{5} \]
When the electric field is linearly polarised (SE = 0), the real and imaginary parts of E must be (anti)parallel. Under the approximation (3), this means
\[ \mathrm{Re}\{J_E\}\mathbf{v} = \lambda\,\mathrm{Im}\{J_E\}\mathbf{v}, \tag{6} \]
where λ is a positive or negative scalar. The directions of the L lines crossing through v = 0 are given by the three eigenvectors of the matrix Im{JE}⁻¹Re{JE}. Since this matrix is real-valued, either all three of these eigenvectors are real, corresponding to three L lines, or only one of them is real and is accompanied by a conjugate pair of complex eigenvectors. In that case, just one L line passes through the 3D zero, because v cannot point in a complex direction.

[Figure 2 appears here.]

Figure 2. Electric polarisation singularities passing through a 3D electric field zero at a position r0. (a) Visualisation of why zero, two or four C lines must pass through r0. In a first-order approximation, the surfaces Re{E · E} = 0 (red) and Im{E · E} = 0 (blue) are double cones, and where they intersect, C lines exist. Two double cones intersect along two or four lines, or do not intersect at all, which is easy to see by considering the cones' cross sections on the unit sphere: ellipses which cross at zero, two or four points. (b) Six different examples of electric field zeros created at a position r0 (red circle), one per unique combination of C lines and L lines meeting there. The C lines are marked by blue regions where E · E ≈ 0 and the L lines by the green regions where Im{E∗ × E} ≈ 0. Each field zero is created in analytical simulations by designing the polarisation of ten plane waves with random wavevectors, wavelength 500 nm, to interfere destructively at r0. The plane waves have different polarisations and wavevectors for each example zero in (b).
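The L-line count described above reduces to counting the real eigenvalues of Im{JE}⁻¹Re{JE}. A minimal sketch (assuming NumPy and an invertible Im{J}):

```python
import numpy as np

def count_l_lines(J, tol=1e-9):
    """Count L lines through a 3D electric field zero with Jacobian J.

    L-line directions solve Re{J}v = lambda Im{J}v, i.e. they are the real
    eigenvectors of K = Im{J}^(-1) Re{J}; a real 3x3 matrix has either one
    or three real eigenvalues, hence one or three L lines.
    """
    K = np.linalg.solve(J.imag, J.real)       # Im{J}^(-1) Re{J}
    ev = np.linalg.eigvals(K)
    return int(np.sum(np.abs(ev.imag) <= tol * (1.0 + np.abs(ev))))

# A rotation-like real part gives one real eigenvalue, hence one L line:
R90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
print(count_l_lines(R90 + 1j*np.eye(3)))                     # 1
print(count_l_lines(np.diag([1.0, 2.0, 3.0]) + 1j*np.eye(3)))  # 3
```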
Summarising: either zero, two or four C lines, and either one or three L lines, always meet at r0 in a 3D electric field zero E(r0) = 0. An identical conclusion can be drawn for C lines and L lines of the magnetic field in the case of H(r0) = 0. In Fig. 2(b), an example of each of the six possible C line/L line combinations through a 3D zero is presented, the zeros created in the interference of ten plane waves. Each zero is enforced by a separate ensemble of ten plane waves with random wavevector directions that are deliberately polarised to interfere destructively at a single point.
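The plane-wave construction just described can be sketched numerically. The parameters below (seed, r0, number of waves) are illustrative assumptions; the amplitudes come from the null space of the 3 × 10 matrix of per-wave fields at r0, so the ten waves cancel there exactly:

```python
import numpy as np

rng = np.random.default_rng(42)
k0 = 2*np.pi / 500e-9                       # wavelength 500 nm
r0 = np.array([0.2e-6, -0.1e-6, 0.35e-6])   # where the zero is enforced

# ten plane waves with random propagation directions
khat = rng.standard_normal((10, 3))
khat /= np.linalg.norm(khat, axis=1, keepdims=True)

# random complex transverse polarisations (e . k = 0 for each wave)
pol = rng.standard_normal((10, 3)) + 1j*rng.standard_normal((10, 3))
pol -= khat * np.einsum("ij,ij->i", pol, khat)[:, None]

# columns of B: each wave's complex field at r0 per unit amplitude
B = (pol * np.exp(1j * k0 * (khat @ r0))[:, None]).T        # 3 x 10

# any null-space vector of B (there are seven) gives destructive
# interference of all three field components at r0 simultaneously
a = np.linalg.svd(B)[2][-1].conj()

def E(r):
    """Total complex electric field of the ensemble at position r."""
    return (pol.T * np.exp(1j * k0 * (khat @ r))) @ a

print(np.linalg.norm(E(r0)))            # ~1e-15: a machine-precision 3D zero
print(np.linalg.norm(E(r0 + 50e-9)))    # nonzero a short distance away
```

Because a 3 × 10 system always has a seven-dimensional null space, the remaining freedom can be used (as in [21]) to tune which combination of C lines and L lines threads the zero.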
2.4. Energy Flux Singularity. The flow of energy in a light field is described by the complex Poynting vector, $\tfrac{1}{2}\mathbf{E}^* \times \mathbf{H}$. The real part of this vector (often itself called the 'Poynting vector') corresponds to the time-averaged power transfer (sometimes known as active power) in the field, while reactive power (associated with oscillations in the transfer of power) is accounted for by the less-used imaginary part. We refer to these two real vectors as Pr and Pi,
\[ \mathbf{P}_r = \tfrac{1}{2}\,\mathrm{Re}\{\mathbf{E}^* \times \mathbf{H}\}, \tag{7} \]
\[ \mathbf{P}_i = \tfrac{1}{2}\,\mathrm{Im}\{\mathbf{E}^* \times \mathbf{H}\}. \tag{8} \]
When either E or H is zero at a point r0, the complex Poynting vector vanishes, and its real and imaginary parts circulate in the space around the zero according to their first-order derivatives at r0. The real part Pr is divergence-less in free space, where there is no absorption or energy generation, and must therefore be organised into a vector saddle point at r0. An example flow of active power around a 3D electric field zero created at r0 (E(r0) = 0, H(r0) ≠ 0) is given in the top row of panels in Fig. 3, where Pr is plotted on the xy, xz, and yz planes coinciding at r0. Although there is no net flow of active power into or out of the zero, the Pr streamlines can be arranged in two topologically different ways, depending on whether the signs of the eigenvalues of its first-order dyadic, $\mathrm{Im}\{(J_E^T - J_E)J_E^*\}$ (written electrically, without prefactors), are + + − or + − −, corresponding to the two possible topological orders of −1 or +1. One might notice that the imaginary Poynting vector Pi, which is plotted on the same planes for the same free-space electric field zero at r0 in the lower row of panels of Fig. 3, is not divergence-free; in fact, it is physically possible for a source, a sink or a saddle of Pi to exist there, depending on whether E or H is zero. To see why, we first note that, using Maxwell's equations in free space (see supplemental information), the imaginary Poynting vector can be decomposed into a sum of two terms, one polarisation-independent and one polarisation-dependent, each containing electric and magnetic contributions,
(9) P_i = −(c²/2ω) ϵ0 Re{(J_E^T − J_E)E∗}
        = (c²/2ω) µ0 Re{(J_H^T − J_H)H∗}
        = (c²/4ω) [−(1/2) ϵ0 ∇(E∗ · E) + (1/2) µ0 ∇(H∗ · H)] + (c²/4ω) Re{ϵ0 J_E E∗ − µ0 J_H H∗}.
The first term in Eq. (9) represents the difference in gradient of the electric and magnetic energy density of the light field, while the polarisation-dependent behaviour of P_i derives from the second term, since J_E E∗ and J_H H∗ contain inter-component multiplication. In certain cases, such as a uniformly polarised standing wave, the second term is zero and the gradient of the difference in electric and magnetic energy density determines the direction of reactive power flow. Because E∗ · E = |E|² is a positive real quantity, a 3D zero in E is a source for the vector ∇(E∗ · E) (and likewise for H).

Figure 3. Flow of the real (P_r, red) and imaginary (P_i, teal) parts of the Poynting vector, (1/2)E∗ × H, on the xy, xz and yz planes coinciding with an electric field zero at position r0 (blue circle). Scale bar: 0.08λ. The real Poynting vector is divergence-free, meaning a vector saddle point of P_r is set up at r0. The imaginary Poynting vector is not necessarily divergence-free and can be organised in a sink at r0 when E(r0) = 0 (a source is not possible unless the magnetic field is zero). Results are generated by designing the polarisation of ten plane waves with random propagation directions to interfere completely at r0.

Depending on how the polarisation-independent and polarisation-dependent terms combine in Eq. (9), the imaginary Poynting vector could
have non-zero divergence at r0. Note that there is a difference in sign between the electric and magnetic terms in Eq. (9), meaning P_i behaves differently for E(r0) = 0, H(r0) ≠ 0, for H(r0) = 0, E(r0) ≠ 0, and for E(r0) = H(r0) = 0 3D zeros. To understand the flow of P_i through an optical field zero, we assume a non-dual electric field zero (E(r0) = 0 and H(r0) ≠ 0) and make a first-order approximation of P_i, this time referring to the relevant linear transformation matrix as the first-order dyadic of the imaginary Poynting vector, D(P_i), which is defined identically to J_E in Eq. (2) with P_i and its components in place of E. Our approximate imaginary Poynting vector is,

(10) P̃_i = D(P_i)v,

where v = r − r0. The dyadic D(P_i) = (∇ ⊗ P_i)^T evaluated at r0 is, using the electric representation of P_i in Eq. (9) (top line),

(11) D(P_i) = −(c²/2ω) ϵ0 Re{(J_E^T − J_E)J_E^∗}.

There are no second-order derivatives of E in Eq. (11) because E(r0) = 0. Surprisingly, D(P_i) cannot have three positive eigenvalues, as justified in the supplemental information. The result is that at a 3D electric field zero, P_i is organised into one of two types of saddle with topological degree 1 or −1, or a sink with topological degree −1, never a source. When H(r0) = 0 and E(r0) ≠ 0, the opposite is true because of the dual asymmetry of the imaginary Poynting vector: P_i can form a saddle or source at r0 but not a sink.
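The impossibility of three positive eigenvalues can be illustrated with a short numerical sketch (hypothetical random Jacobians, arbitrary units): the matrix of Eq. (11) with its positive prefactor dropped never has positive trace, and since the trace is the sum of the eigenvalues, the sign pattern + + + can never occur.

```python
import numpy as np

rng = np.random.default_rng(0)
traces = []
for _ in range(200):
    # Hypothetical complex Jacobian of E at a 3D zero.
    JE = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    # Eq. (11) with the positive prefactor (c^2/2w)eps0 dropped.
    D = -((JE.T - JE) @ JE.conj()).real
    traces.append(np.trace(D))

# The trace is never positive, so a source of P_i (three positive
# eigenvalues) cannot sit at an electric field zero.
assert max(traces) <= 0.0
```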
2.5. Orbital Current Singularity. When divided by c², the real Poynting vector Eq. (7) turns into a momentum density, the kinetic momentum density, which, using Maxwell's equations for time-harmonic fields, can be split into a well-known sum of separate orbital and spin contributions [37], [38]. For instance, by substituting (with prefactors) the curl of E for H, the kinetic momentum density can be written as,

(12) Π = (1/2c²) Re{E∗ × H} = (1/2ω) ϵ0 Im{E∗ · (∇)E} + (1/2ω) ϵ0 ∇ × (1/2) Im{E∗ × E},
where A · (∇)B = A_x∇B_x + A_y∇B_y + A_z∇B_z = J_B^T A, with J_B being the Jacobian of B defined identically to Eq. (2) (the decomposition is explained in more detail in the supplemental information). The first decomposed term is p_E^o, the orbital contribution to the kinetic momentum density, called the canonical momentum density, imparted by the electric field only,

(13) p_E^o = (1/2ω) ϵ0 Im{E∗ · (∇)E} = (1/2ω) ϵ0 Im{J_E^T E∗}.

Eq. (12) can also be written purely in terms of H and, by averaging these equivalent representations of Π, the dual-symmetric canonical momentum density is obtained,

(14) p^o = (1/4ω) Im{ϵ0 E∗ · (∇)E + µ0 H∗ · (∇)H}.

This momentum density definition contains both the electric and magnetic field's influence, and produces the total orbital angular momentum of the field within a volume when r × p^o is integrated. Naturally, the electric and magnetic contributions to (14) become zero whenever E = 0 and H = 0, respectively. This means that, in a 3D electric field zero positioned at r0, the direction of the electric contribution p_E^o is undefined and should circulate around r0 in some fashion. Of course, while the total canonical momentum density at r0 is not zero when only E = 0, we could draw the same conclusions we make here for Eq. (14) rather than Eq. (13) near a dual 3D vortex (E(r0) = H(r0) = 0). Note that by normalising E, the argument to Im{} in Eq. (13) defines the local electric wavevector [25],
(15) k_loc^e = −i e∗ · (∇)e,

where e = E/√(E∗ · E). The real part of k_loc^e is the local phase gradient of the electric field, while Im{k_loc^e} points in the direction of decreasing electric field intensity.

Figure 4. Vortex pseudo-line (red) of the real electric local wavevector, Re{k_loc^e}, passing through an electric field zero at position r0 (blue circle), that is, E(r0) = 0 and H(r0) ≠ 0. The red line indicates regions of space where |Re{k_loc^e}| < 0.1k, where k = 2π/λ and λ = 500 nm. The line is roughly oriented along the x axis and the electric local wavevector is plotted on four different yz planes. The three planes which coincide with the line are −25 nm, 0 nm, +25 nm in the x direction away from r0, showing clear vortex-like circulation of momentum around the axis of the red line. On the fourth plane, −95 nm away from r0, the vortex-like circulation of Re{k_loc^e} has lost some definition, highlighting that Re{k_loc^e} is not exactly zero along a line, and only appears line-like near to the E field zero (the only location where Re{k_loc^e} is exactly zero is at r0, because Re{k_loc^e} vanishes at points, not along lines). Results are generated from interference of ten plane waves with random wavevectors, wavelength λ = 500 nm, deliberately polarised to create a 3D electric field zero at r0.

A three-dimensional,
real vector, Re{k_loc^e} (and therefore the canonical momentum density) can vanish at localised points in space with non-zero electric field, around which a saddle-like circulation of Re{k_loc^e} forms [39], similar to the top row of panels in Fig. 3. But when the electric field vanishes and the direction of Re{k_loc^e} is automatically undefined, a different behaviour emerges. To understand why, we once again make a first-order approximation, this time of p_E^o, capturing the electric canonical momentum very near to a 3D electric field zero at r0 in its dyadic D(p_E^o),
(16) p̃_E^o = D(p_E^o)v,

where v = r − r0. The dyadic D(p_E^o) = (∇ ⊗ p_E^o)^T at a general point in space is given by,
(17) D(p_E^o) = (1/4ω) ϵ0 Im{J_E^T J_E^∗} + (1/4ω) ϵ0 Im{E_x^∗ Hess(E_x) + E_y^∗ Hess(E_y) + E_z^∗ Hess(E_z)},

where Hess(A) is the Hessian matrix of the scalar field A,

(18) Hess(A) = [ ∂²A/∂x²   ∂²A/∂x∂y  ∂²A/∂x∂z
                 ∂²A/∂y∂x  ∂²A/∂y²   ∂²A/∂y∂z
                 ∂²A/∂z∂x  ∂²A/∂z∂y  ∂²A/∂z²  ].
As E approaches zero, the trace-less matrix D(p_E^o) is dominated by the first term in Eq. (17), and if evaluated at a location r0 where E(r0) = 0, the linear approximation of p_E^o responds only to the properties of the matrix in the first term of Eq. (17), Im{J_E^T J_E^∗}. This is an anti-symmetric matrix which always has one zero and two purely imaginary eigenvalues, meaning that in the direction of the one real eigenvector of D(p_E^o) at r0, the approximated electric canonical momentum does not increase at all, producing a zero-momentum line. The imaginary eigenvalues of D(p_E^o) twist p_E^o into a surrounding vortex-like structure. This special type of vector field singularity is called a circulation. Fundamentally, the canonical momentum should only be zero at confined points in general 3D fields, so this apparent vortex line is only preserved locally to the electric field zero at r0, dissolving with distance as higher-order derivatives of p_E^o become significant (it is, in fact, just a very elongated null point of p_E^o). The direction of the vortex pseudo-line in the vicinity of the electric field zero is also given by the curl of the orbital current,

(19) D = ∇ × p_E^o ∝ Re{∇E_x} × Im{∇E_x} + Re{∇E_y} × Im{∇E_y} + Re{∇E_z} × Im{∇E_z}.
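The antisymmetry of Im{J_E^T J_E^∗}, its null direction, and the agreement of that direction with Eq. (19) can be demonstrated in a minimal numerical sketch (a hypothetical, randomly drawn Jacobian; all prefactors dropped):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical complex Jacobian of E at a 3D zero.
JE = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

A = (JE.T @ JE.conj()).imag      # Im{J_E^T J_E^*}: first term of Eq. (17)
assert np.allclose(A, -A.T)      # anti-symmetric: eigenvalues 0 and +-i*lambda

# The null direction of a real anti-symmetric matrix (A v = w x v)
# is the axis of the canonical momentum vortex pseudo-line.
axis = np.array([A[2, 1], A[0, 2], A[1, 0]])
assert np.allclose(A @ axis, 0.0, atol=1e-9)

# Eq. (19): the same axis from the curl of the orbital current, with the
# rows of J_E playing the role of the gradients of Ex, Ey, Ez.
D = sum(np.cross(JE[c].real, JE[c].imag) for c in range(3))
assert np.allclose(axis, D)
```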
We visualise this feature in Fig. 4, where a 3D electric field zero is created at a point r0 by deliberately polarising ten plane waves, each with random wavevectors, to destructively interfere at r0. The real part of the electric local wavevector, Re{k_loc^e}, the real part of Eq. (15), is calculated, and the region of space where |Re{k_loc^e}| < 0.1k (k = 2π/λ) is revealed by a red line approximately 0.1λ in length. The electric local wavevector is proportional to p_E^o and shows the direction of canonical momentum carried by the electric field. This red line is not continuous; Re{k_loc^e} actually vanishes only at r0, but it increases in magnitude so slowly in a certain direction (the direction of the real eigenvector of Im{J_E^T J_E^∗}) that a line-like structure of |Re{k_loc^e}| ≈ 0 exists very near to r0, stirring the electric canonical momentum into a local vortex. This is shown by the four yz planes on which Re{k_loc^e} is plotted in Fig. 4. The real part of the electric local wavevector forms a swirl around the red line, a swirl losing definition if the plotting plane is too far from r0. This remarkable structure always appears when all three electric field components are zero together at a point.
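As a basic sanity check of Eq. (15) (a sketch with hypothetical values in dimensionless units, not the ten-wave simulation of Fig. 4): for a single plane wave E = a e^{ik·r}, the local wavevector reduces to k itself, with zero imaginary part because the intensity is uniform.

```python
import numpy as np

# Single plane wave in dimensionless units (lambda = 1).
k = np.array([0.0, 0.0, 2 * np.pi])
a = np.array([1.0, 1j, 0.0]) / np.sqrt(2)   # circularly polarised, transverse to k

r = np.array([0.1, 0.2, 0.3])
E = a * np.exp(1j * (k @ r))
e = E / np.sqrt(E.conj() @ E)               # normalised field of Eq. (15)
Je = 1j * np.outer(e, k)                    # Jacobian of e: Je[c, i] = d(e_c)/d(x_i)

k_loc = -1j * (Je.T @ e.conj())             # -i e*.(grad)e = -i Je^T e*
assert np.allclose(k_loc.real, k)           # local phase gradient equals k
assert np.allclose(k_loc.imag, 0.0)         # no intensity gradient
```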
2.6. Spin Current. In the decomposition of the kinetic momentum density, Eq. (12), the second term is called the spin momentum. It is proportional to (and should not be confused with) the curl of the spin angular momentum of the electric, magnetic or electromagnetic field, depending on the representation. As before, we will focus on the electric representation of the decomposed kinetic momentum density, referring to the electric spin momentum as p_E^s,

(20) p_E^s = (1/2ω) ϵ0 ∇ × (1/2) Im{E∗ × E} = −(1/2ω) ϵ0 Im{J_E E∗}.

The electric spin momentum is a divergence-free vector whose dyadic D(p_E^s) has three non-zero eigenvalues when evaluated in the position of an electric field zero, organising p_E^s into one of two types of 3D vector saddle point, just like the real Poynting vector in Fig. 3. Expressing, in Eq. (20), the electric spin momentum with the electric field Jacobian reveals that only a difference in sign and orientation of J_E separates p_E^s from the electric canonical momentum p_E^o, given by Eq. (13). This means that, in a dual electric-magnetic zero, E(r0) = H(r0) = 0, where J_E is symmetric from Maxwell's equations, the spin and canonical momentum dyadics are equal and opposite, D(p_E^s) = −D(p_E^o) (this also means that the dyadic of the real Poynting vector is zero). In a first-order approximation of both p_E^s and p_E^o near r0 in this case, a zero-line exists in exactly the same place for both vectors, and around it, p_E^s and p_E^o have vortex-like circulation with opposite handedness to each other.
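The two forms of p_E^s in Eq. (20) can be compared numerically. The sketch below (prefactors dropped, dimensionless units, hypothetical random transverse plane waves so that ∇ · E = 0) takes the curl of (1/2)Im{E∗ × E} by central differences and checks it against −Im{J_E E∗}:

```python
import numpy as np

rng = np.random.default_rng(11)
k0 = 2 * np.pi                          # dimensionless units, lambda = 1
n = 5
kdir = rng.normal(size=(n, 3))
kdir /= np.linalg.norm(kdir, axis=1, keepdims=True)
kv = k0 * kdir
amps = rng.normal(size=(n, 3)) + 1j * rng.normal(size=(n, 3))
amps -= kdir * np.sum(amps * kdir, axis=1, keepdims=True)   # transverse: div E = 0

def E(r):
    return (amps * np.exp(1j * (kv @ r))[:, None]).sum(axis=0)

def s(r):                               # (1/2) Im{E* x E}
    F = E(r)
    return 0.5 * np.imag(np.cross(F.conj(), F))

r0 = np.array([0.1, 0.2, -0.3])
phase = np.exp(1j * (kv @ r0))
JE = (1j * amps[:, :, None] * kv[:, None, :] * phase[:, None, None]).sum(axis=0)

# Curl of s by central differences.
h = 1e-5
curl = np.zeros(3)
for i in range(3):
    j, l = (i + 1) % 3, (i + 2) % 3
    dj = np.zeros(3); dj[j] = h
    dl = np.zeros(3); dl[l] = h
    curl[i] = ((s(r0 + dj)[l] - s(r0 - dj)[l])
               - (s(r0 + dl)[j] - s(r0 - dl)[j])) / (2 * h)

ps_curl = curl                          # grad x (1/2) Im{E* x E}
ps_jac = -np.imag(JE @ E(r0).conj())    # -Im{J_E E*}
assert np.allclose(ps_curl, ps_jac, atol=1e-4)
```

The identity requires the divergence-free condition, which is why the amplitudes are projected transverse to each wavevector.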
2.7. Spin Angular Momentum. The dual spin angular momentum, created by the rotation of the electric and magnetic field vectors, is given by [40],

(21) S = (1/4ω) Im{ϵ0 E∗ × E + µ0 H∗ × H}.

The electric and magnetic parts individually describe the ellipticity of the electric and magnetic polarisation ellipses, pointing in the direction perpendicular to the ellipse plane. Once more for simplicity, we will focus on the singularity in the electric field spin angular momentum, S_E = (1/4ω) Im{ϵ0 E∗ × E}, left in a 3D electric field zero positioned at r0. The total spin angular momentum, Eq. (21), is not zero if only E(r0) = 0, but we could draw similar conclusions for S as we do here for S_E when the electric and magnetic fields are simultaneously zero at r0.

Decomposing S_E using Maxwell's equations, we can write its first-order dyadic at r0 in terms of the light field Jacobian matrices (see supplemental material),

(22) D(S_E) = (1/4ω²) ϵ0 Re{(J_H^T − J_H)J_E^∗}.

We note that Eq. (22), describing the spatial derivatives of the electric field spin only, depends on the magnetic field Jacobian matrix J_H, which is automatically symmetric whenever E = 0 from Maxwell's equations. The consequence is that J_H^T − J_H = 0 and all elements of D(S_E) at r0 are zero when E(r0) = 0. Higher-order derivatives of S_E (Hessian matrices for each component) need to be calculated to fully understand the flux of the electric spin angular momentum in the neighbourhood of a 3D zero in E.
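That J_H is symmetric wherever E = 0 can be checked numerically. The sketch below (dimensionless units with c = µ0 = 1, and an assumed convention in which H = −(k × E)/(ωµ0) for each plane wave) enforces a zero of a four-plane-wave field at r0 and tests the symmetry of the magnetic Jacobian there:

```python
import numpy as np

rng = np.random.default_rng(2)
k0 = 2 * np.pi
omega = k0                              # c = mu0 = 1
r0 = np.array([0.3, -0.2, 0.1])

kdir = rng.normal(size=(4, 3))
kdir /= np.linalg.norm(kdir, axis=1, keepdims=True)
kv = k0 * kdir

def basis(khat):
    # Two orthonormal polarisation vectors transverse to khat.
    a = np.array([1.0, 0, 0]) if abs(khat[0]) < 0.9 else np.array([0, 1.0, 0])
    e1 = np.cross(khat, a); e1 /= np.linalg.norm(e1)
    return e1, np.cross(khat, e1)

pols, ks = [], []
for kvi, kd in zip(kv, kdir):
    for e in basis(kd):
        pols.append(e); ks.append(kvi)
pols, ks = np.array(pols), np.array(ks)   # 8 degrees of freedom

# Destructive interference at r0: amplitudes from the null space of a 3x8 system.
M = (np.exp(1j * (ks @ r0))[:, None] * pols).T
x = np.linalg.svd(M)[2][3:].conj().T @ (rng.normal(size=5) + 1j * rng.normal(size=5))

aE = x[:, None] * pols                    # electric amplitude of each DOF
aH = -np.cross(ks, aE) / omega            # magnetic amplitude (assumed convention)
ph = np.exp(1j * (ks @ r0))
E0 = (aE * ph[:, None]).sum(axis=0)
JH = (1j * aH[:, :, None] * ks[:, None, :] * ph[:, None, None]).sum(axis=0)

assert np.linalg.norm(E0) < 1e-9          # E(r0) = 0 by construction
assert np.allclose(JH, JH.T, atol=1e-9)   # J_H symmetric at the electric zero
```

The symmetry follows because the antisymmetric part of J_H encodes ∇ × H, which is proportional to E and therefore vanishes at the zero.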
Matrix at r0 : Characteristic

J_E : 3D complex Jacobian of the electric field at r0 (Eq. (2)).

1. Im{J_E}^{-1} Re{J_E} : The number of real eigenvalues is the number of L lines passing through r0.

2. Re{J_E^T J_E} : Eigenvectors are the principal axes of the double cone Re{E · E} = 0. The number of intersections of this double cone with that of matrix 3 gives the number of C lines.

3. Im{J_E^T J_E} : Eigenvectors are the principal axes of the double cone Im{E · E} = 0. The number of intersections of this double cone with that of matrix 2 gives the number of C lines.

4. Im{J_E^T J_E^∗} : The direction of the real eigenvector (there is only one) is the axis of the electric local wavevector vortex. Imaginary eigenvectors give the handedness of momentum circulation.

5. −Im{J_E J_E^∗} : Proportional to the first-order dyadic of the spin current (Eq. (20)). Eigenvalue signs give the type of minimum at r0.

6. Im{(J_E^T − J_E)J_E^∗} : Proportional to the first-order dyadic of the real Poynting vector (active power flow). Eigenvalue signs give the type of minimum at r0.

7. −Re{(J_E^T − J_E)J_E^∗} : Proportional to the first-order dyadic of the imaginary Poynting vector (reactive power flow). Eigenvalue signs give the type of minimum at r0.

Table 1. Summary of the seven dyadics (numbered) which classify the vector field singularities organised by a 3D electric field zero.
2.8. Summary Table. Here, in Table 1, we summarise the seven dyadics which classify the number of crossing C lines and L lines, the flux of the real and imaginary parts of the Poynting vector, the spin current, and the orientation of the canonical momentum vortex pseudo-line existing at a 3D electric field zero, E(r0) = 0 while H(r0) ≠ 0. To characterise a magnetic field zero, the matrices can be written magnetically by substituting J_H for J_E (and changing the '−' sign in front of matrix 7 to a '+'), in which case matrices 1, 2, and 3 characterise magnetic polarisation singularities, and matrices 4 and 5 the magnetic local wavevector and spin current, respectively. In the case of a dual 3D zero, E(r0) = H(r0) = 0, matrices 6 and 7 are zero because both J_E and J_H are symmetric.
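As an illustration of how Table 1 is used (a sketch with a hypothetical, randomly drawn Jacobian; prefactors dropped), matrices 1 and 6 can be evaluated directly:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical complex Jacobian of E at a 3D zero.
JE = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# Matrix 1: its real eigenvalues count the L lines through r0.
lam = np.linalg.eigvals(np.linalg.inv(JE.imag) @ JE.real)
n_L = int(np.sum(np.abs(lam.imag) < 1e-9))
assert n_L in (1, 3)    # a real 3x3 matrix has one or three real eigenvalues

# Matrix 6: dyadic of the real Poynting vector (prefactors dropped).
M6 = ((JE.T - JE) @ JE.conj()).imag
assert abs(np.trace(M6)) < 1e-9   # trace-less: active power flow is a saddle
```

The vanishing trace of matrix 6 reflects the divergence-free real Poynting vector of Section 2.4; the eigenvalue signs then decide between the + + − and + − − saddle types.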
3. Discussion

Three-dimensional optical field zeros are co-dimension 6 entities which, unlike axial zeros in beams, are completely localised, the optical field growing brighter in all outward directions. Although they rarely occur naturally in light (requiring three additional parameters beyond spatial x, y, z due to their codimension), 3D zeros can be deliberately created in plane wave interference or in the near fields of light-scattering matter [21] to reveal the unusual features they imprint in the light field's energy, wavevector and polarisation structures. Both with mathematical argument and by creating field zeros in plane wave interference, we showed that whenever the electric or magnetic field is zero at a point r0, then some combination of zero, two or four C lines (lines of pure circular polarisation) and one or three L lines (lines of pure linear polarisation) of the field in question intersect at r0. Likewise, an imprint is made at r0 in the surrounding flux of the parts of the complex Poynting vector (1/2)E∗ × H, the local wavevector, the spin momentum and the spin angular momentum, each organised in a vector source, sink or saddle point. The signs of the eigenvalues of the first-order dyadics of each quantity at r0 reveal this. Of particular interest is the canonical momentum: while typically vanishing at confined points in space, a zero in E or H at r0 twists the canonical momentum imparted by that null-containing field into a sub-wavelength, vortex-like structure around an axis with an easily calculated direction. We say it is a sub-wavelength object because, although it resembles the twisted vortex structures of well-known doughnut beams, it is not preserved with increasing distance from r0. In the combination of the way energy flows through r0 and the number of intersecting polarisation singularities, any 3D field zero inscribes one of a discrete number of topologically unique signatures in the electromagnetic field. We identify seven dyadics whose spectra could classify all physically possible imprints of 3D optical field zeros.

It is tempting to speculate that a surface enclosing an electric or magnetic field point zero might, in addition to the quantities already identified, possess a nonzero topological Chern number due to a nontrivial geometric phase 2-form (Berry curvature) resulting from the neighbouring polarisation pattern. The appropriate expression for the geometric phase 2-form is the curl of the local wavevector Eq. (15),

(23) V = ∇ × k_loc^e.

Near an electric field zero, V is anti-symmetric; integrating over a small sphere centred on the field zero gives zero. We showed that in its neighbourhood, a 3D zero in E constructs a local wavevector vortex with an identifiable axis along which |V| is very large. It is interesting that even when the complete vector characteristics of light are considered, a linear momentum vortex line still persists when all three field components are zero at a confined point. This vector field vortex is an analogue of a phase vortex in a complex scalar field, with a key difference being that the vector field vortex line is not continuous. Although the electromagnetic zero has some topological effects, as we described in this paper, it is not so strong as to endow a surface around it with a nonzero Chern number.

We have shown that, despite being unstable to perturbation, 3D zeros of the electric and electromagnetic field have topological properties generalising those of scalar vortices and polarisation singularities. Further studies might indicate how these properties behave under perturbation. We hope that by highlighting the unusual properties of 3D field zeros, we can inspire new applications that may be otherwise unachievable with traditionally used, lower-dimensional dark spots, such as those in beams or simple standing waves.
4. Methods

3D electric field zeros were created in analytical simulations of ten monochromatic interfering plane waves. In all simulations, ten random wavevectors (all of the same magnitude k = 2π/λ) were generated, and for each, two orthogonal polarisation basis vectors were defined, representing the two electric field degrees of freedom of a plane wave propagating in that direction. The ten plane waves were then polarised deliberately to destructively interfere and leave a 3D electric field zero at a single confined point, r0, following the procedure given in [21]. Let e^{ik_j·r} ê_{j,1} and e^{ik_j·r} ê_{j,2} be the two orthogonal polarisation states (degrees of freedom) of the electric field of the jth plane wave with unit amplitude at the origin (j ranges from 1 to 10, k_j is the jth plane wave's random wavevector with magnitude |k_j| = 2π/λ, and ê_{j,1} and ê_{j,2} are two orthogonal unit vectors satisfying ê_{j,1} · ê_{j,2} = 0, ê_{j,1} · k_j = 0, ê_{j,2} · k_j = 0). In total, we have twenty available polarisation degrees of freedom, and by propagating each plane wave, we can calculate the electric field that each individual degree of freedom develops in the position of a desired electric field zero, r0. Now, we multiply each degree of freedom by a complex scalar, so that the jth plane wave has components x_{j,1} e^{ik_j·r} ê_{j,1} and x_{j,2} e^{ik_j·r} ê_{j,2}. Adding together all scaled degrees of freedom, evaluated at r = r0, we have a linear system of three equations, one per component of the total field at r0, with complex variables x_{j,1} and x_{j,2} representing the amplitudes of the orthogonal components of the jth plane wave phasor. Setting to zero all three total electric field components at r0, we may solve the system of equations to find the polarisation components of each plane wave required for complete destructive interference at r0. Since only three scalar conditions are enforced (E_x = 0, E_y = 0 and E_z = 0 for the total field at r0) by twenty degrees of freedom, the system is under-determined and a seventeen-dimensional space of solutions exists for a 3D zero at r0. Any one of these solutions may be chosen to realise the zero, or, as we do, the solutions may be combined in a linear sum with random complex amplitudes. A 3D zero could be produced with as few as four plane waves (in fact, a zero could be enforced by only two plane waves, but it would not be three-dimensional), though the total field would appear less random.
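The procedure above can be sketched in a few lines (a minimal illustration in arbitrary units, not the authors' simulation code): build the 3 × 20 linear system, take its null space, and combine the null-space solutions with random complex amplitudes.

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 1.0                                   # wavelength (arbitrary units)
k0 = 2 * np.pi / lam
r0 = np.array([0.2, -0.1, 0.3])             # requested zero position

kdir = rng.normal(size=(10, 3))
kdir /= np.linalg.norm(kdir, axis=1, keepdims=True)
kv = k0 * kdir

def basis(khat):
    # Two orthonormal polarisation vectors transverse to khat.
    a = np.array([1.0, 0, 0]) if abs(khat[0]) < 0.9 else np.array([0, 1.0, 0])
    e1 = np.cross(khat, a); e1 /= np.linalg.norm(e1)
    return e1, np.cross(khat, e1)

pols, ks = [], []
for kvi, kd in zip(kv, kdir):
    for e in basis(kd):
        pols.append(e); ks.append(kvi)
pols, ks = np.array(pols), np.array(ks)     # 20 degrees of freedom

# 3 x 20 system: the field each degree of freedom develops at r0.
M = (np.exp(1j * (ks @ r0))[:, None] * pols).T

# Seventeen-dimensional null space of destructive-interference solutions.
null = np.linalg.svd(M)[2][3:].conj().T
assert null.shape == (20, 17)
x = null @ (rng.normal(size=17) + 1j * rng.normal(size=17))

def E(r):
    return ((x[:, None] * pols) * np.exp(1j * (ks @ r))[:, None]).sum(axis=0)

assert np.linalg.norm(E(r0)) < 1e-9                 # complete destructive interference
assert np.linalg.norm(E(r0 + 0.21 * lam)) > 1e-6    # the field is not dark elsewhere
```

Using the SVD null space rather than a single particular solution mirrors the paper's choice of combining the solutions in a random linear sum.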
References

1. O'Holleran, K., Dennis, M. R. & Padgett, M. J. Topology of Light's Darkness. Physical Review Letters 102 (2009).
2. Leach, J., Dennis, M. R., Courtial, J. & Padgett, M. J. Knotted threads of darkness. Nature 432 (2004).
3. O'Holleran, K., Padgett, M. J. & Dennis, M. R. Topology of optical vortex lines formed by the interference of three, four, and five plane waves. Optics Express 14, 3039 (2006).
4. Dennis, M. R., King, R. P., Jack, B., O'Holleran, K. & Padgett, M. J. Isolated optical vortex knots. Nature Physics 6 (2010).
5. Tempone-Wiltshire, S. J., Johnstone, S. P. & Helmerson, K. Optical vortex knots – one photon at a time. Scientific Reports 6 (2016).
6. Kleckner, D. & Irvine, W. T. M. Creation and dynamics of knotted vortices. Nature Physics 9 (2013).
7. Weiler, C. N., Neely, T. W., Scherer, D. R., et al. Spontaneous vortices in the formation of Bose–Einstein condensates. Nature 455 (2008).
8. Hindmarsh, M. B. & Kibble, T. W. B. Cosmic strings. Reports on Progress in Physics 58 (1995).
9. Guo, X., Zhong, J., Li, P., et al. Creation of topological vortices using Pancharatnam-Berry phase liquid crystal holographic plates. Chinese Physics B 29 (2020).
10. Wang, L., Zhang, W., Yin, H. & Zhang, X. Ultrasmall Optical Vortex Knots Generated by Spin-Selective Metasurface Holograms. Advanced Optical Materials 7 (2019).
11. Li, P., Guo, X., Zhong, J., et al. Optical vortex knots and links via holographic metasurfaces. Advances in Physics: X 6 (2021).
12. Zhang, W., Wei, K., Huang, L., et al. Optical vortex generation with wavelength tunability based on an acoustically-induced fiber grating. Optics Express 24 (2016).
13. Lim, S. W. D., Park, J.-S., Meretska, M. L., Dorrah, A. H. & Capasso, F. Engineering phase and polarization singularity sheets. Nature Communications 12, 4190 (2021).
14. Balzarotti, F., Eilers, Y., Gwosch, K. C., et al. Nanometer resolution imaging and tracking of fluorescent molecules with minimal photon fluxes. Science 355, 606–612 (2017).
15. Hell, S. W. & Wichmann, J. Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Optics Letters 19, 780 (1994).
16. He, H., Heckenberg, N. & Rubinsztein-Dunlop, H. Optical Particle Trapping with Higher-order Doughnut Beams Produced Using High Efficiency Computer Generated Holograms. Journal of Modern Optics 42 (1995).
17. He, H., Friese, M. E. J., Heckenberg, N. R. & Rubinsztein-Dunlop, H. Direct Observation of Transfer of Angular Momentum to Absorptive Particles from a Laser Beam with a Phase Singularity. Physical Review Letters 75 (1995).
18. Wang, J., Yang, J.-Y., Fazal, I. M., et al. Terabit free-space data transmission employing orbital angular momentum multiplexing. Nature Photonics 6 (2012).
19. Huang, H., Xie, G., Yan, Y., et al. 100 Tbit/s free-space data link enabled by three-dimensional multiplexing of orbital angular momentum, polarization, and wavelength. Optics Letters 39 (2014).
20. Willner, A. E., Pang, K., Song, H., Zou, K. & Zhou, H. Orbital angular momentum of light for communications. Applied Physics Reviews 8 (2021).
21. Vernon, A. J. & Rodríguez-Fortuño, F. J. Creating and moving nanoantenna cold spots anywhere. Light: Science & Applications 11 (2022).
22. Spaegele, C. M., Tamagnone, M., Lim, S. W. D., et al. Topologically protected four-dimensional optical singularities (2022).
23. Nye, J. F. & Hajnal, J. V. The wave structure of monochromatic electromagnetic radiation. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 409, 21–36 (1987).
24. Nye, J. Lines of circular polarization in electromagnetic wave fields. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 389 (1983).
25. Berry, M. & Dennis, M. Polarization singularities in isotropic random vector waves. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 457, 141–155 (2001).
26. Bliokh, K. Y., Alonso, M. A., Sugic, D., et al. Polarization singularities and Möbius strips in sound and water-surface waves. Physics of Fluids 33 (2021).
27. Bliokh, K. Y. & Nori, F. Spin and orbital angular momenta of acoustic beams. Physical Review B 99 (2019).
28. Bliokh, K. Y., Punzmann, H., Xia, H., Nori, F. & Shats, M. Field theory spin and momentum in water waves. Science Advances 8 (2022).
29. Sugic, D., Droop, R., Otte, E., et al. Particle-like topologies in light. Nature Communications 12 (2021).
30. Larocque, H., Sugic, D., Mortimer, D., et al. Reconstructing the topology of optical polarization knots. Nature Physics 14 (2018).
31. Dennis, M. Polarization singularities in paraxial vector fields: morphology and statistics. Optics Communications 213, 201–221 (2002).
32. Freund, I. Multitwist optical Möbius strips. Optics Letters 35, 148 (2010).
33. Dennis, M. R. Fermionic out-of-plane structure of polarization singularities. Optics Letters 36 (2011).
34. Bauer, T., Banzer, P., Karimi, E., et al. Observation of optical polarization Möbius strips. Science 347, 964–966 (2015).
35. Pisanty, E., Machado, G. J., Vicuña-Hernández, V., et al. Knotting fractional-order knots with the polarization state of light. Nature Photonics 13 (2019).
36. Berry, M. V. Index formulae for singular lines of polarization. Journal of Optics A: Pure and Applied Optics 6, 675–678 (2004).
|
987 |
+
37.
|
988 |
+
Berry, M. V. Optical currents. Journal of Optics A: Pure and Applied Optics 11 (2009).
|
989 |
+
38.
|
990 |
+
Bliokh, K. Y., Bekshaev, A. Y. & Nori, F. Optical momentum and angular momen-
|
991 |
+
tum in complex media: from the Abraham–Minkowski debate to unusual properties of
|
992 |
+
surface plasmon-polaritons. New Journal of Physics 19 (2017).
|
993 |
+
39.
|
994 |
+
Berry, M. V. & Shukla, P. Geometry of 3D monochromatic light: local wavevectors,
|
995 |
+
phases, curl forces, and superoscillations. Journal of Optics 21 (2019).
|
996 |
+
40.
|
997 |
+
Bekshaev, A. Y., Bliokh, K. Y. & Nori, F. Transverse Spin and Momentum in Two-
|
998 |
+
Wave Interference. Physical Review X 5 (2015).
|
999 |
+
5. Acknowledgements
We would like to thank Sinuhé Perea-Puente for a mathematical proof. This work was supported by European Research Council Starting Grant ERC2016-STG-714151-PSINFONI.

6. Author Contribution
A.J.V. conducted mathematical analyses and simulations; M.R.D. gave direction to and supervised the research; F.J.R-F. supervised the research. All authors wrote the manuscript; A.J.V. wrote the first draft.

7. Competing Interests
The Authors declare no competing interests.
9dE1T4oBgHgl3EQf8AVQ/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff.
9dE3T4oBgHgl3EQfSQko/content/2301.04430v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c7d3b2149d9385a9c4fc5c8772d20f658cc374a218625c76296386c92fa6662
size 3086526

9dE3T4oBgHgl3EQfSQko/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c23c06929f8e125b722e4f680f419a7dd43acd133e35d8ccd86994f88ec8d98
size 6881325

9dE3T4oBgHgl3EQfSQko/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a5bc215ac62de89348eb3ebff2168b293d342c06dad9ffacb59ecaf841b1653a
size 287335

AdFLT4oBgHgl3EQfEy_H/content/2301.11985v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:17ea3965965bd7cc11cd24368e173e8959a79ca82a0f18fcfd0f99c67aeea658
size 655893

AdFLT4oBgHgl3EQfEy_H/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c08fefd17fb896b91f9c937af3fb706726ff5b30e665a2e6cbdc82754a099bc7
size 373736

BNAzT4oBgHgl3EQf__-y/content/2301.01957v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a54e8cdd765c83be86d509a4332540543e1b4aa871cb78d5368a2f40031e5184
size 602080

BNAzT4oBgHgl3EQf__-y/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8409271edeae02b25fd6251c60f5e2d7814a1a3d03f20e2312b8ef7a04410438
size 2949165

CdFJT4oBgHgl3EQftC3b/content/2301.11616v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:428031ac2f6fbfc00616178dd9e25ee904eefe495c07af9afcf95fdcc5bc83cc
size 2882587

CdFJT4oBgHgl3EQftC3b/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f70c2cc5f7cec08faea24d9299b52fd9c44bdb2a9fbc050ff0b053deaac06249
size 3407917

CdFJT4oBgHgl3EQftC3b/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8672a5657d290bf9a26b0f89fbcc53f03354a231f698192768dfd93e1175ca96
size 137337

EdAzT4oBgHgl3EQfwv7y/content/2301.01729v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:925ef4555a30838df06af04de4718e654c6042c5de3e97dc0fcc9532b0943a62
size 688106

EdAzT4oBgHgl3EQfwv7y/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e709140c18a25e3192df22fc202df0c72fa86cea49464f02456c947a812c614b
size 3145773

EdAzT4oBgHgl3EQfwv7y/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:796955ba0da267ae55dc5660a29d27a90ca666c1a6779912d4e1734f96605dcb
size 114890

EtE2T4oBgHgl3EQf-Ak-/content/2301.04233v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:caebc8f93fb3d9988484e24f6181bd4615a82f817adb53eefe1911a6df45ff92
size 13465609

EtE2T4oBgHgl3EQf-Ak-/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9564be9d2f62ffc08e3e96c4b87a4e74d24c6f8ee385898c1f7dbdd029c61d56
size 4587565

EtE2T4oBgHgl3EQf-Ak-/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc0227cccf938264aa3a03bf1c701fd0c59843d03c9795c9707d8ab939f3bc0d
size 175698
FNE0T4oBgHgl3EQfzAKR/content/tmp_files/2301.02667v1.pdf.txt ADDED
@@ -0,0 +1,1363 @@
Locomotion-Action-Manipulation:
Synthesizing Human-Scene Interactions in Complex 3D Environments

Jiye Lee    Hanbyul Joo
Seoul National University
{kay2353,hbjoo}@snu.ac.kr

Figure 1. Our system, LAMA, produces high-quality and realistic 3D human motions that include locomotion, scene interactions, and manipulations given a 3D environment and designated interaction cues.
Abstract

Synthesizing interaction-involved human motions has been challenging due to the high complexity of 3D environments and the diversity of possible human behaviors within. We present LAMA, Locomotion-Action-MAnipulation, to synthesize natural and plausible long-term human movements in complex indoor environments. The key motivation of LAMA is to build a unified framework to encompass a series of motions commonly observable in our daily lives, including locomotion, interactions with 3D scenes, and manipulations of 3D objects. LAMA is based on a reinforcement learning framework coupled with a motion matching algorithm to synthesize locomotion and scene interaction seamlessly under common constraints and collision avoidance handling. LAMA also exploits a motion editing framework via manifold learning to cover possible variations in interaction and manipulation motions. We quantitatively and qualitatively demonstrate that LAMA outperforms existing approaches in various challenging scenarios. Project page: https://lama-www.github.io/.
1. Introduction

In our daily lives, we can easily observe that humans do not live in isolation or in voids, but continuously interact with a complex environment surrounded by many objects. Notably, humans perform such a diverse set of daily-life actions effortlessly. Imagine that we visit a new indoor environment (e.g., a hotel room) we have never been to before. It is expected that we can still easily figure out how to move from room to room, how to sit on a chair, how to open the doors of closets, and so on. However, endowing machines or virtual humans with such abilities is still a largely unexplored area, despite its importance.

Synthesizing scene interactions within real-life 3D environments has been a challenging research problem due to its complexity and diversity. Human movement in real life consists of various types of behaviors, including locomotion that avoids cluttered areas, diverse interactions with 3D scenes, and sophisticated object manipulations. In particular, the spatial constraints that arise from real-life 3D environments, where many objects are cluttered, make motion synthesis highly constrained and complex, and the various possible arrangements of 3D environments make generalization difficult. As human-scene interactions cover a wide range of technical challenges, previous approaches have focused on sub-problems, such as (1) modeling static poses [17,24,49,64,69,71,72] or (2) human-object interactions with a single target object or interaction type [10,47,53-55,66,67,70]. Recent methods [15,59,60] extend to synthesizing dynamic interaction motions in cluttered real-world 3D scenes. However, the performance of these methods is fundamentally limited due to the lack of 3D ground-truth data that contains both human motions and paired 3D environments.

arXiv:2301.02667v1 [cs.CV] 9 Jan 2023
In this paper, we present LAMA, Locomotion-Action-MAnipulation, to synthesize natural and plausible long-term human motions in complex indoor environments. The key motivation of LAMA is to build a unified framework that includes locomotion, interactions with 3D scenes, and manipulations of 3D objects, which are the series of motions commonly observable in our daily lives. LAMA is based on a reinforcement learning framework coupled with a motion matching algorithm to synthesize locomotion and scene interaction seamlessly while adapting to complicated 3D scenes with collision avoidance handling. The reinforcement learning framework interprets the 3D information of the given scene and optimally traverses the motion capture database via motion matching. As an advantage, our system does not require any "scene-paired" datasets, in which human movements are captured simultaneously with the surrounding 3D environment, which are rarely available. To further cover the numerous variations of interaction motions, we also exploit an autoencoder-based motion editing approach to learn the motion manifold space [20] in which the editing is performed. Through extensive quantitative and qualitative evaluations against existing approaches, we demonstrate that our method outperforms previous methods in various challenging scenarios.

Our contributions are summarized as follows: (1) we present the first method to generate realistic long-term motions combining locomotion, interaction with scenes, and manipulation in complicated cluttered scenes; (2) we propose a novel, unified framework that synthesizes locomotion and human-scene interactions in a seamless manner, by introducing scene interpretation terms into a reinforcement learning based approach to automatically generate optimal transitions; and (3) our outputs show state-of-the-art motion synthesis quality with longer duration (more than 10 sec) than previous methods.
2. Related Work

Generating Human-Scene Interactions. Generating natural human motion has been a widely researched topic in the computer vision community. Early methods focus on synthesizing or predicting human movements by exploiting neural networks [11,13,35,38,46,56,58]. However, these approaches primarily address the synthesis of human motion itself, without taking into account the surrounding 3D environment. Recent approaches begin to tackle modeling and synthesizing human interactions within 3D scenes, or with objects. Most of the research focuses on statically posing humans within a given 3D environment [16,24,69,71], by generating human-scene interaction poses from various types of input including object semantics [17], images [21,23,64,65,68], and text descriptions [49,72].

More recently, there have been approaches to synthesize dynamic human-object interactions (e.g., sitting on chairs,

Figure 2. Overview of LAMA. (Diagram labels: 3D Scene, Interaction Cue, Action Controller, Action, Motion Synthesizer, Posture, Motion Generation, Task-Adaptive Motion Editing, Encoder, Decoder, Optimization.)
carrying boxes). Starke et al. [53] introduce an autoregressive learning framework with object geometry-based environmental encodings to synthesize various human-object interactions. Later work [15,70] extends this by synthesizing motions conditioned on variations of objects and contact points. Other approaches [47,54,55,66,67] focus on generating natural hand movements for manipulation, which is extended by including full-body motions [54]. Physics-based character control to synthesize human-object interactions has also been explored in [8,10,39,47,66]. Although these approaches cover a wide range of human-object interactions, most of them solely focus on the relationship between the human and the target object, without long-term navigation in cluttered 3D scenes.

More recent approaches include generating natural human-scene interactions within a complex 3D scene cluttered with many objects [6,59-61], closely related to ours. These methods are trained using human motion datasets paired with 3D scenes, which require both ground-truth motions and simultaneously captured 3D scenes for supervision. Due to such difficulties, some methods exploit synthetic datasets [6,61] or data fitted from depth videos [60]. In previous approaches [15,59], navigation through cluttered environments is often performed by a separate module via a path-planning algorithm (e.g., the A* algorithm) that approximates the volume of a human as a cylinder. These path-planning-based methods approximate the spatial information of the scene and the human body and therefore have limitations under highly complex conditions.

Motion Synthesis and Editing. Synthesizing natural human motions by leveraging motion capture data has also been a long-researched topic in computer graphics. Some approaches [26,37] construct motion graphs, where plausible transitions are inserted as edges and motion synthesis is done by traversing the constructed graph. Similar approaches [31,51] connect motion patches to synthesize interactions in a virtual environment or multi-person interactions. Due to its versatility and simplicity, a number of variations have been made on the graph-based approach, such as motion grammars [22], which enforce traversal rules in the motion graph. Motion matching [5,9] can also be understood as a special case of motion graph traversal, where plausible transitions are not precomputed but searched at runtime. Recent advances in deep learning allow leveraging motion capture data for motion manifold
learning [19,20,52]. Autoregressive approaches based on variational autoencoders (VAEs) [36,46] and recurrent neural networks [14,29,41] are also used to forecast future motions based on past frames. These frameworks are generalized to synthesizing a diverse set of motions including locomotion on terrains [19] and in mazes [36], action-specified motions [46], and interaction-involved sports [29,41]. Neural network-based methods are also reported to be successful in various motion editing tasks such as skeleton retargeting [2], style transfer [3,20], and in-betweening [14].

Reinforcement learning (RL) has also been successful in combination with both data-driven and physics-based approaches for synthesizing human motions. Combined with data-driven approaches, these RL frameworks serve as a control module that generates motions corresponding to a given user input by traversing motion graphs [28], latent spaces [34,36,57], and precomputed transition tables [30]. Deep reinforcement learning (DRL) has also been widely used in physics simulation to synthesize physically plausible movements with a diverse set of motor skills [4,32,41,43-45,62].
3. Method

3.1. Overview

Our system, dubbed LAMA, outputs a sequence of human poses M = {m_t}_{t=1}^T by taking 3D surrounding cues W and desired interaction cues Φ as inputs:

    M = LAMA(W, Φ).    (1)

The output posture at time t, m_t = (p_0, r_1, ..., r_J) ∈ R^{3J+3}, is represented by a concatenated vector of the global root position p_0 ∈ R^3 and the local joint orientations of J joints, where each j-th joint rotation is in angle-axis representation, r_j ∈ so(3). Throughout our system, the skeleton tree structure and joint offsets are fixed, as shown in Fig. 3 (a). We represent the 3D environment W = {w_i} as a set of 3D object and environment meshes, including the background scene mesh and other object meshes targeted for manipulation. The interaction cues Φ = [φ_1, φ_2, ..., φ_n] are an ordered list of desired interaction inputs φ_i = {q_j}_{j∈J_i}, where q_j ∈ R^3 indicates the desired position of the j-th joint, and J_i is a set of joints specified for the interaction (in practice, a few joints such as the root¹ or end-effectors). Examples of the 3D environment W and interaction inputs φ_i are shown in Fig. 5 (a). Intuitively, φ_i specifies the expected positions of selected joints of the human character. Note that we do not specify the exact timing of the interaction, as the timing is automatically determined by our action controller. More details are addressed in Sec. 3.4.
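As a plain-data illustration of these inputs and outputs (the class and field names below are our own, not from the paper, and the joint count is an arbitrary example), the posture m_t and an interaction cue φ_i can be sketched as:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Posture:
    """One output frame m_t: global root position plus per-joint axis-angle rotations."""
    root_pos: np.ndarray    # p_0 in R^3
    joint_rots: np.ndarray  # (J, 3) axis-angle vectors r_1..r_J, each in so(3)

    def as_vector(self) -> np.ndarray:
        # Concatenated representation in R^{3J+3}.
        return np.concatenate([self.root_pos, self.joint_rots.ravel()])

@dataclass
class InteractionCue:
    """One cue phi_i: desired 3D positions for a few specified joints J_i."""
    targets: dict = field(default_factory=dict)  # joint name -> desired position q_j in R^3

J = 22  # the skeleton is fixed; 22 joints is an assumed count, not the paper's
m_t = Posture(root_pos=np.zeros(3), joint_rots=np.zeros((J, 3)))

# A sitting cue might constrain only the root (position, plus orientation per the footnote).
phi = InteractionCue(targets={"root": np.array([1.2, 0.45, -0.3])})
```

Leaving the interaction timing out of the cue, as the paper does, is what lets the controller decide when the transition to the action happens.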
To synthesize locomotion, interaction, and manipulation together, LAMA is designed as a three-level system composed of the action controller A and the motion synthesizer S, followed by a manifold-based motion editor E. Taking the 3D scene cues W and the desired interaction cues Φ as input, the action controller A makes use of a reinforcement learning (RL) framework by training a control policy π to sample an action at time t, π(a_t | s_t, W, Φ), where a_t contains plausible next-action cues including predicted action types and short-term future forecasting. s_t is the state cue representing the current status of the human character, including its body posture, surrounding scene occupancy, and current target interaction cue, and can be computed via a function ψ, s_t = ψ(m_{t-1}, m_t, W, Φ). Intuitively, the action controller A predicts the plausible next action cues a_t by considering the current character-scene state s_t. The generated action signal a_t from the action controller A is provided as the input to the motion synthesizer S, which then determines the posture at the next time step m_{t+1}, i.e., S(m_t, a_t) = m_{t+1}. Afterwards, the character's next state can be computed again via s_{t+1} = ψ(m_t, m_{t+1}, W, Φ), which is input to the action controller recursively.

Following the initial motion generation from A and S, our system furthermore applies a motion editor E(M) = M̃, where M̃ = {m̃_t}_{t=1}^T is the edited motion, to further express motions involving complex human-object interactions such as manipulation (e.g., moving objects, opening doors). Fig. 2 shows the overview of LAMA.

¹For the root, orientation in angle-axis representation is also included in φ.

Figure 3. (a) Skeleton with joints and box nodes. (b) Automatically detected collision points (colored red).
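The recursion above (controller → synthesizer → state update) amounts to a simple alternating loop. A minimal sketch, with π, S, and ψ passed in as placeholder callables standing for the learned policy, the motion-matching synthesizer, and the state function (the names and loop length are ours):

```python
def rollout(m_prev, m_curr, scene, cues, policy, synthesize, compute_state, T=300):
    """Alternate the action controller and motion synthesizer for T steps.

    policy        : pi(s_t) -> a_t        (action type probs, future cues, posture offset)
    synthesize    : S(m_t, a_t) -> m_{t+1} (motion matching + offset, Sec. 3.3)
    compute_state : psi(m_{t-1}, m_t, W, Phi) -> s_t
    """
    motion = [m_curr]
    for _ in range(T):
        s_t = compute_state(m_prev, m_curr, scene, cues)  # s_t = psi(m_{t-1}, m_t, W, Phi)
        a_t = policy(s_t)                                 # a_t ~ pi(a_t | s_t, W, Phi)
        m_next = synthesize(m_curr, a_t)                  # m_{t+1} = S(m_t, a_t)
        motion.append(m_next)
        m_prev, m_curr = m_curr, m_next
    return motion
```

With toy callables (poses as plain numbers), `rollout(0, 0, None, None, lambda s: 1, lambda m, a: m + a, lambda mp, mc, w, p: mc, T=5)` produces a 6-frame "trajectory", showing how each synthesized frame is fed back into the controller.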
3.2. Scene-Aware Action Controller

Based on reinforcement learning, our action controller A enables the character to perform locomotion and the desired actions while fulfilling the interaction cues Φ and avoiding collisions in the 3D environment W. A is a trained control policy π(a_t | s_t, W, Φ). Different from previous approaches, where navigation and scene-object interactions (e.g., sitting) are performed by separate modules [15,59], our RL-based framework performs both in a unified way with a common objective, by automatically determining the transition from navigation to specific actions. As a key advantage, LAMA can be robustly generalized to challenging unseen 3D clutter in long-term human motion synthesis, and it also outperforms previous methods by avoiding collisions throughout the whole process, including navigation and interaction.
State. The state s_t = ψ(m_{t-1}, m_t, W, Φ) at time t is a feature vector representing the current status of the human character. s_t = (s_t^body, s_t^scene, s_t^inter) is composed of the body configuration s^body, the 2D scene occupancy s^scene, and the current target interaction s^inter. The body configuration s^body = {r, ṙ, θ^up, h, p_e} includes r, ṙ ∈ R^{J'×6}, the joint rotations and velocities for the J' joints excluding the root, in 6D representation [73]; θ^up ∈ R, the up-vector of the root (represented by the angle w.r.t. the Y-axis); h ∈ R, the root height from the floor; and p_e ∈ R^{e×3}, the end-effector positions in person-centric coordinates (where e is the number of end-effectors). s^scene = {g^occ, g^root} includes scene occupancy information on the 2D floor plane, as shown in Fig. 4. g^occ ∈ R^{n²} represents the 2D occupancy grid on the floor plane of the neighboring n × n cells around the agent, and g^root ∈ R^2 denotes the current 2D global root position of the character in the discretized grid plane. s^inter is an element of Φ and represents the interaction cue the character is currently targeting, i.e., s^inter = φ_i.
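As an illustration of how such a g^occ could be rasterized (the paper does not give an implementation; this sketch assumes obstacles are given as axis-aligned 2D boxes on the floor plane, and the cell size is an assumed resolution):

```python
import numpy as np

def occupancy_grid(root_xy, obstacles, n=16, cell=0.25):
    """Binary n x n occupancy grid of the floor plane, centered on the character's root.

    root_xy   : (2,) character root projected onto the floor
    obstacles : list of axis-aligned 2D boxes (xmin, ymin, xmax, ymax)
    cell      : edge length of one grid cell in meters (assumed resolution)
    """
    grid = np.zeros((n, n), dtype=np.float32)
    half = n * cell / 2.0
    for ix in range(n):
        for iy in range(n):
            # World-space center of cell (ix, iy), relative to the root.
            cx = root_xy[0] - half + (ix + 0.5) * cell
            cy = root_xy[1] - half + (iy + 0.5) * cell
            for (x0, y0, x1, y1) in obstacles:
                if x0 <= cx <= x1 and y0 <= cy <= y1:
                    grid[ix, iy] = 1.0
                    break
    return grid.ravel()  # g_occ in R^{n^2}

# One box-shaped obstacle to the character's +x side.
g = occupancy_grid(np.array([0.0, 0.0]), [(0.5, -1.0, 3.0, 1.0)], n=8, cell=0.5)
```

Centering the grid on the root keeps the feature person-centric, so the same policy can be queried anywhere in an unseen scene.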
Action. Given the current status of the character s_t, the control policy π outputs a feasible action a_t = (a_t^type, a_t^future, a_t^offset). a_t^type provides the probabilities of the next action type among all possible actions, determining the transition timing between actions (e.g., from locomotion to sitting). a_t^future predicts future motion cues such as plausible root positions for the next 10, 20, and 30 frames. a_t^offset is intended to update the raw motion data searched from the motion database in the motion synthesizer module S. Intuitively, our learned control policy generates an optimal posture offset a_t^offset which is applied to the closest plausible raw posture in the database. This enables the character to perform more plausible scene-aware human poses, allowing our system to generalize to unseen 3D scenes given a limited amount of motion capture data. More details are addressed in Sec. 3.3.
3.3. Motion Synthesizer
Given the current motion output m_t and the action signals a_t from the action controller A as inputs, the motion synthesizer produces the next plausible character posture: S(m_t, a_t) = m_{t+1}. As the first step, the motion synthesizer searches a motion database for the motion whose feature best matches the query, then modifies the retrieved raw motion to better suit the scene environment. The motion synthesizer's output m_{t+1} is in turn fed back into the action controller recursively. We exploit a modified version of the motion matching algorithm [5, 9, 18] for the first step of motion synthesis. In motion matching, motion synthesis is performed periodically by searching for the most plausible next short motion segment in a motion DB and compositing the retrieved segments into a long connected sequence.
Figure 4. Visual representation of the 2D occupancy grid near the root. The grid on the right shows the top view. Blue indicates the root position, and gray indicates occupied space; occupied cells near the root are colored black.
Motion features. A motion feature represents the characteristics of each frame in a short motion segment and is computed as f(m) = {{p_j}, {ṗ_j}, θ_up, c, o_future}. From a posture m, the positions and velocities p_j, ṗ_j ∈ R³ are extracted for the selected joints j ∈ {Head, Hand, Foot}, defined in the person-centric coordinates of m. θ_up ∈ R³ is the up-vector of the root joint, and c ∈ {0, 0.5, 1} indicates the automatically computed foot contact cues of the left and right foot (0 for non-contact, 1 for contact, 0.5 for non-contact but within a threshold distance of the floor). o_future = {{p_0^dt}, {r_0^dt}} contains cues for short-term future postures, where p_0^dt and r_0^dt are the position and orientation of the root joint dt frames after the current target frame. o_future is computed on the 2D XZ plane in the person-centric coordinates of the current target motion m, and thus p_0^dt, r_0^dt ∈ R². The selected future frames are action-type specific; for locomotion we extract frames 10, 20, and 30 in the future (at 30 Hz), following [9]. Intuitively, the motion feature captures the target frame's posture and temporal cues by considering neighboring frames². We pre-compute motion features for every frame of the motion clips in the motion database. The motion feature of the current state of the character, or the query feature, is computed in the same way from the postures m_{t−1}, m_t and the a_t^future produced by the action controller, that is, x_t = f(m_{t−1}, m_t, a_t^type, a_t^future). The component a_t^future serves as o_future in the query feature, which can be understood as the action controller providing cues for the predicted future postures.
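A minimal sketch of this feature extraction, with illustrative contact thresholds (the paper's exact values are not specified here):

```python
import numpy as np

def foot_contact_label(foot_height, foot_speed, h_thresh=0.05, v_thresh=0.2):
    """Contact cue in {0, 0.5, 1}: 1 = in contact, 0.5 = close to the floor,
    0 = airborne. Thresholds are illustrative, not the paper's values."""
    if foot_height < h_thresh and foot_speed < v_thresh:
        return 1.0
    if foot_height < 2 * h_thresh:
        return 0.5
    return 0.0

def motion_feature(joint_pos, dt=1.0 / 30.0):
    """joint_pos: (T, J, 3) person-centric trajectories of the selected
    joints (e.g., head, hands, feet). Returns the positions and
    finite-difference velocities at the last frame, the {p_j} and
    {p_dot_j} part of f(m)."""
    p = joint_pos[-1]                          # (J, 3) current positions
    v = (joint_pos[-1] - joint_pos[-2]) / dt   # (J, 3) velocities
    return np.concatenate([p.ravel(), v.ravel()])
```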
Motion searching and updating. The query motion feature x_t of the current character is computed as described above; let y_k denote the motion feature of the k-th clip in the motion database. Motion searching finds the best match in the database by computing the weighted Euclidean distance between the query feature and each DB feature:

k* = argmin_k ||w_f^T (x_t − y_k)||²,   (2)

where w_f is a fixed weight vector that controls the importance of the feature elements. After finding the best match m̂_{k*} in the motion database, the motion synthesizer further updates it with the predicted motion offset a_t^offset from a_t, that is, τ(m̂_{k*+1}, a_t^offset) = m_{t+1}, where m̂_{k*+1} is the next plausible character posture and τ is an update function that modifies selected joints. In practice, motion searching is performed periodically (e.g., every N-th frame) to make the synthesized motion temporally more coherent.

²In practice, the input of the feature extractor function f should take into account the motions of neighboring timesteps.
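The search in Eq. 2 is a weighted nearest-neighbor query over pre-computed features. A minimal sketch, interpreting w_f as per-element weights:

```python
import numpy as np

def motion_search(x, Y, w):
    """Find the database feature closest to query x under a weighted
    Euclidean distance, as in k* = argmin_k ||w * (x - y_k)||^2.

    x : (D,)   query feature of the current character
    Y : (K, D) pre-computed features of all database frames
    w : (D,)   fixed per-element importance weights
    """
    d = ((w * (Y - x)) ** 2).sum(axis=1)  # squared weighted distances
    k = int(np.argmin(d))
    return k, float(d[k])

# Toy example: the second database entry matches the query exactly.
Y = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
k, dist = motion_search(np.array([0.5, 0.5]), Y, np.ones(2))
```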
3.4. Learning for Scene-Aware Action Controller
In the reinforcement learning framework, the objective is to learn the optimal policy that maximizes the discounted cumulative reward. In our method, we design rewards that guide the agent to perform locomotion towards the target object (e.g., a sofa) and to perform the desired interaction with the object (e.g., sitting). In particular, our RL framework performs both navigation and interaction under common constraints (e.g., smooth transitions, collision avoidance).

Our reward function consists of the following terms:

R_total = w_tr R_tr + w_act R_act + w_reg R_reg,   (3)

where w_tr, w_act, and w_reg are weights that balance the reward terms. The trajectory reward R_tr is obtained when the character moves towards the desired interaction input φ while meeting the spatial constraints of the surrounding 3D scene, as described below:
R_tr = r_coli · r_pos · r_vel, where   (4)

r_coli = exp( −(1/σ_coli²) Σ_{b∈B} w_b ρ(b, W) ),   (5)

r_pos = exp( −(1/σ_root²) Σ_{j∈J} ||p_0 − q_j||² ),   (6)

r_vel = { 1,               when ||ṗ_0|| ≥ σ_th,
        { σ_vel ||ṗ_0||²,  otherwise.   (7)
The collision-avoidance reward r_coli penalizes collisions with the 3D scene. As depicted in Fig. 3 (a), the body limbs of the skeletal structure are represented as a set of box-shaped nodes B with a fixed width, where each element b ∈ B is a 3D box representation of a leg or arm (we exclude the torso and head). The function ρ(b, W) detects collisions between the edges of a box-shaped node b and the 3D scene mesh W and returns the number of intersection points (Fig. 3 (b)). w_b is a weight that controls the importance of each limb b. The collision-avoidance reward is maximized when no penetration occurs, driving the control policy π to find an optimal trajectory and pose offset that avoids physically implausible collisions and penetrations. r_pos is obtained when the agent moves to reach the target interaction cue φ, encouraging the agent's root position p_0 to get closer to the target interaction cue {q_j}. r_vel encourages the character to keep moving by penalizing root velocities ṗ_0 below a threshold σ_th. σ_coli, σ_root, and σ_vel are weights that control the balance between terms.
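The multiplicative trajectory reward of Eqs. 4-7 can be sketched as follows; the σ values and the pre-counted intersection inputs are illustrative stand-ins for the paper's collision detector:

```python
import numpy as np

def trajectory_reward(n_intersections, limb_weights, root_pos, targets,
                      root_vel, sig_coli=1.0, sig_root=1.0,
                      sig_vel=0.1, sig_th=0.5):
    """R_tr = r_coli * r_pos * r_vel (Eqs. 4-7), with illustrative sigmas.

    n_intersections : per-limb intersection-point counts (rho(b, W))
    limb_weights    : per-limb importance weights w_b
    root_pos        : (3,) current root position p_0
    targets         : list of (3,) interaction cue positions q_j
    root_vel        : (3,) current root velocity
    """
    # r_coli: exponential penalty on weighted intersection counts per limb.
    r_coli = np.exp(-np.dot(limb_weights, n_intersections) / sig_coli**2)
    # r_pos: pull the root toward the target interaction cue positions q_j.
    r_pos = np.exp(-sum(np.sum((root_pos - q) ** 2) for q in targets) / sig_root**2)
    # r_vel: full reward while moving; scaled-down reward when nearly static.
    speed = np.linalg.norm(root_vel)
    r_vel = 1.0 if speed >= sig_th else sig_vel * speed**2
    return r_coli * r_pos * r_vel
```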
The action reward R_act enforces the synthesized motion to fulfill the given interaction cue φ = {q_j}:

R_act = r_inter · r_∆t · r_∆v, where

r_inter = exp( −(1/σ_inter²) Σ_{j∈J} ||p_j − q_j||² ),

r_∆t = exp( −σ_∆t² C_tr ),   r_∆v = exp( −σ_∆v² C_vel ),   (8)
where the interaction reward term r_inter is maximized when the performed action meets the positional constraints provided by the interaction cues. The smoothness reward terms r_∆t and r_∆v minimize the transition cost, based on subparts of the feature distance defined in Eq. 2: C_tr is the weighted feature distance of p_j, θ_up, and c, and C_vel is that of ṗ. These terms penalize abrupt changes in the character's motion.
The regularization reward R_reg penalizes a_t^offset for excessively modifying the original posture m̂_t brought from the motion synthesizer, and maintains temporal consistency across frames:

R_reg = exp( −(1/σ_reg²) ( ||m̂_t − m_t||² + ||m_t − m_{t−1}||² ) ).
It is reported [33, 41] that multiplying rewards with consistent goals is suitable for learning, as the reward is received only when the conditions are simultaneously met. Furthermore, to accelerate learning, we use early termination conditions [43] and limited action transitions. An episode is terminated when the character moves out of the scene bounding box, or when the collision reward r_coli falls below a certain threshold. In addition, the action controller first checks whether the action signal is valid before making a transition from locomotion to another action: when the nearest feature distance of Eq. 2 in the motion synthesizer exceeds a certain threshold, the action controller discards the transition and continues navigating. The control policy is learned with the Proximal Policy Optimization (PPO) algorithm [50].
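The transition-validity check described above (discarding a locomotion-to-action transition when the best Eq. 2 distance is too large) can be sketched as:

```python
import numpy as np

def is_transition_valid(query, action_features, w, dist_thresh):
    """Before switching action type, verify that the motion database
    contains a close-enough clip for the requested action, using the
    weighted feature distance of Eq. 2. Returns (valid, best_index)."""
    d = ((w * (action_features - query)) ** 2).sum(axis=1)
    k = int(np.argmin(d))
    return bool(d[k] <= dist_thresh), k

# Toy database of two "sit" features; only the first is close to the query.
feats = np.array([[0.0, 0.0], [2.0, 2.0]])
ok, k = is_transition_valid(np.array([0.1, 0.0]), feats, np.ones(2), 0.5)
```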
3.5. Task-Adaptive Motion Editing
Interaction includes a massively diverse pool of motions, and these variations cannot be fully covered by a limited motion database. To cover such diversity, we include a task-adaptive motion editing module in our motion synthesis framework. The goals of our editing module E are (1) to edit the motion M to fit diverse target object geometries (e.g., sitting on chairs of different heights), and (2) to generate additional hand movements for manipulation (e.g., grasping). In particular, in the case of manipulation, an additional interaction cue φ can be provided to enforce an end-effector (e.g., a hand) to follow the desired trajectory expressing the manipulation task on the target object, as shown in Fig. 8 (left). The edited motion M̃ = E(M) should not only fulfill the sparsely given positional constraints, but also preserve the temporal consistency between frames and the spatial correlations among joints in order to maintain naturalness.

Figure 5. Visual representation of the system inputs Φ, W and the output motion sequence. On the left, interaction cues are shown as cyan spheres and arrows (indicating orientation). On the right is the synthesized human motion M̃.

We adopt a motion manifold learning approach with convolutional autoencoders [20] to compress motion into a latent vector within a motion manifold space. Motion editing is then performed by searching for an optimal latent vector in the manifold. For training the autoencoder, the motion sequence X, converted from M, is represented as a time series of human postures by concatenating the joint rotations in the 6D representation [73], the root height, the root transform relative to the previous frame projected on the XZ plane, and foot contact labels. The encoder and decoder modules are trained with a reconstruction loss ||X − Ψ⁻¹(Ψ(X))||², where Ψ is the encoder and Ψ⁻¹ is the decoder.

The latent vector from the encoder, z = Ψ(X), represents the motion manifold space, preserving the spatio-temporal relationships among joints and frames within the motion sequence. As demonstrated in [20], editing motions in this manifold space ensures that the edited motion is realistic and temporally coherent. To this end, we find the optimal latent vector z* by minimizing a loss function L that constrains the output motion to follow the interaction constraint φ. We also include additional regularizers in L so that the output motion maintains the foot locations and root trajectories of the original motion. See supp. mat. for more details on L. Finally, the edited motion M̃ is computed as Ψ⁻¹(z*).
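The latent-space search for z* can be illustrated with a toy linear stand-in for the trained decoder and plain gradient descent. A real implementation would backpropagate through the convolutional decoder, and the quadratic constraint and regularizer terms below only approximate the full loss L:

```python
import numpy as np

def edit_in_latent(z0, D, C, target, lam=1e-3, lr=0.05, steps=500):
    """Search for z* = argmin_z ||C @ (D @ z) - target||^2 + lam*||z - z0||^2.

    z0     : (d,)   latent code of the original motion, z0 = Psi(X)
    D      : (n, d) toy linear stand-in for the decoder Psi^{-1}
    C      : (m, n) selector picking the constrained output coordinates
             (e.g., an end-effector position at a given frame)
    target : (m,)   interaction cue the edited motion must satisfy
    The regularizer keeps z* near z0, i.e., near the original motion.
    """
    z = z0.copy()
    for _ in range(steps):
        resid = C @ (D @ z) - target
        grad = 2 * D.T @ (C.T @ resid) + 2 * lam * (z - z0)
        z -= lr * grad
    return z, D @ z  # optimal latent and the decoded (edited) motion

D = np.vstack([np.eye(3), 0.5 * np.eye(3)])  # (6, 3) toy decoder
C = np.eye(2, 6)                             # constrain the first two outputs
z_star, x_edit = edit_in_latent(np.zeros(3), D, C, target=np.array([0.5, -0.2]))
```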
4. Experiments
We evaluate LAMA's ability to synthesize long-term motions involving various human-scene and human-object interactions. We exploit an extensive set of quantitative metrics and a perceptual study to evaluate the physical plausibility and naturalness of the synthesized motions.

Method            | Slip ↓ | Penetration ↓ | FD_total ↓ | FD_root ↓ | FD_joint ↓
Wang et al. [60]  | 5.13   | 3.88          | 1.38       | 0.45      | 0.93
Wang et al. [60]* | 24.8   | 4.58          | 1.44       | 0.44      | 1.00
SAMP [15]         | 10.5   | 12.49         | 1.25       | 0.30      | 0.95
LAMA (ours)       | 5.21   | 1.52          | 1.22       | 0.31      | 0.91

Table 1. Baseline comparison. Slip and Penetration measure physical plausibility; the FD columns measure naturalness. Foot slip loss (cm, ↓) is averaged over all frames. Penetration loss (percentage, ↓) is counted based on the intersection points of the 3D environment and the skeleton. The naturalness score is based on the Fréchet distance (FD, ↓). Wang et al. with an asterisk denotes the variant without post-processing optimization.
Dataset. To construct the database for the motion synthesizer, motion capture data are selectively collected and refined from Ubisoft La Forge [14], COUCH [70], and SAMP [15]. All data used in this system are motion capture data (in BVH format) with no scene- or object-related information, and are retargeted onto a unified skeletal structure with MotionBuilder. We use the PROX [16] and Matterport3D [7] datasets for 3D environments and SAPIEN [63] object meshes for manipulation. Our code and pre-processed data will be publicly released.
Implementation Details. The policy and value networks of the action controller module consist of 4 and 2 fully connected layers of 256 nodes, respectively. The encoder and decoder of the task-adaptive motion editing module consist of three convolutional layers. The Adam optimizer [25] is used for training and optimization. We use an Nvidia RTX 3090 for training the action controller and the motion editing module. It takes 10 to 80 minutes to learn a single control policy, where the training time mainly depends on how difficult the interaction cues are to achieve. Optimization in the motion editing module takes 3 to 4 minutes for 500 epochs. See supp. mat. for more details.
4.1. Experimental Setup
Evaluation metrics. Quantifying motion synthesis quality is challenging due to the lack of ground-truth data or standard evaluation metrics. We quantify quality in terms of physical plausibility and naturalness.
• Physical plausibility: We use contact and penetration metrics to evaluate the physical plausibility of the synthesized motions. The contact loss penalizes foot movement while the foot is in contact. Since foot contact is a critical element in dynamics, the contact-based metric is closely related to the physical plausibility of motions. The penetration loss ("Penetration" in Table 1) measures implausible cases in which the body penetrates objects in the scene. We compute the penetration metric by counting frames where the number of intersection points (Sec. 3.4) exceeds a certain threshold.³

Figure 6. Comparison of LAMA (left) and LAMA without the collision reward (right). As shown on the right, without the collision reward the character fails to avoid collisions with obstacles (marked in red).
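A sketch of the penetration metric as described, using the per-limb thresholds from the footnote; the [leg, arm] count layout is an assumption for illustration:

```python
import numpy as np

def penetration_ratio(intersections_per_frame, leg_thresh=10, arm_thresh=7):
    """Fraction of frames counted as penetrating: a frame is flagged when
    its leg or arm intersection-point count exceeds the per-limb threshold
    (10 for legs and 7 for arms, per the paper's footnote).

    intersections_per_frame : (T, 2) array of [leg, arm] intersection counts.
    """
    counts = np.asarray(intersections_per_frame)
    flagged = (counts[:, 0] > leg_thresh) | (counts[:, 1] > arm_thresh)
    return flagged.mean()
```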
• Naturalness: We measure the naturalness of the synthesized motions with the Fréchet distance, as reported in [15, 35, 40], between synthesized motions and motions from motion capture data. Features are extracted from the motion sequences, and the Fréchet distance is computed over the extracted features. We measure the naturalness of the character root movements, FD_root, including root orientation and velocity, and of the character joint rotations, FD_joint.
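The Fréchet distance between feature sets is commonly computed between Gaussians fit to each set: FD² = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}). The sketch below uses a diagonal-covariance simplification to avoid the matrix square root of the full formula:

```python
import numpy as np

def frechet_distance_diag(f1, f2):
    """Frechet distance between Gaussians fit to two feature sets,
    under a diagonal-covariance simplification: the trace term reduces
    to sum_d (sqrt(var1_d) - sqrt(var2_d))^2.

    f1, f2 : (N, D) motion features (e.g., root velocities, joint rotations).
    """
    mu1, mu2 = f1.mean(axis=0), f2.mean(axis=0)
    v1, v2 = f1.var(axis=0), f2.var(axis=0)
    fd2 = np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(v1) - np.sqrt(v2)) ** 2)
    return np.sqrt(fd2)
```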
Baselines. We compare LAMA with state-of-the-art approaches as well as variations of our method.
• Wang et al. [60] is the state-of-the-art long-term motion synthesis method for human-scene interactions within a given 3D scene. We use the authors' code for evaluation. As Wang et al. use optimization to post-process the synthesized motion to improve foot contact and reduce collisions, we compare against Wang et al. both with and without this optimization.
• SAMP [15] generates interactions that generalize not only to object variations but also to random starting points within a given 3D scene. SAMP explicitly exploits a path planning module to navigate through cluttered 3D environments.
• Ablative baselines. We perform ablation studies on the action controller and the task-adaptive motion editing module. We ablate the scene reward r_coli and the action offset a_t^offset to show the contribution of both terms to our system's ability to generate scene-aware motions. We also compare our method without the transition reward terms r_∆t and r_∆v (Sec. 3.4) in the action controller. Finally, we demonstrate the strength of our task-adaptive motion editing module in editing motions naturally (Sec. 3.5) by comparing with inverse kinematics (IK).
4.2. Comparisons with Previous Work
Quantitative Evaluation. We compare the methods on 6 different scenarios from various 3D scenes in the PROX dataset [16]. Foot contact is automatically labeled based on the positional velocity of the foot joint, and the foot slip metric is measured from the foot joint positions. To compute the penetration metric fairly, the SMPL-X outputs of Wang et al. and SAMP are converted to box-shaped skeletons as in ours, and the intersection points are counted. Table 1 shows the results.

As shown, LAMA outperforms Wang et al. both in naturalness and physical plausibility. Note that Wang et al. perform optimization as post-processing to explicitly minimize foot slip, and yet LAMA still shows on-par performance (and better results on all other metrics). Compared with SAMP, our method shows much better results on the plausibility metrics (both Slip and Penetration) and slightly better performance in naturalness. Unlike SAMP, which relies on a separate navigation module, our RL-based action controller handles collisions in the same way as scene interaction and performs much better in complex and cluttered 3D scenes.

³10 for legs and 7 for arms.

Figure 7. Comparison of LAMA (left) and LAMA without the action offset (right). The character in the original LAMA moves forward while tilting its arms to avoid collision with the walls, while LAMA without the action offset does not.
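The foot slip metric described above can be sketched as follows; the velocity-based contact labeling threshold is an illustrative assumption:

```python
import numpy as np

def foot_slip(foot_pos, contact_speed_thresh=0.2, dt=1.0 / 30.0):
    """Average foot slip (same units as foot_pos) over contact frames.

    foot_pos : (T, 3) foot joint positions over time.
    A frame is labeled "in contact" when the foot's positional speed is
    below a threshold; slip is the horizontal (XZ) displacement of the
    foot accumulated while it is labeled as in contact.
    """
    disp = np.diff(foot_pos, axis=0)            # (T-1, 3) per-frame motion
    speed = np.linalg.norm(disp, axis=1) / dt
    contact = speed < contact_speed_thresh
    if not contact.any():
        return 0.0
    xz = np.linalg.norm(disp[contact][:, [0, 2]], axis=1)
    return float(xz.mean())

# Toy trajectory: a planted foot that drifts 0.001 in X each frame.
traj = np.zeros((10, 3)); traj[:, 0] = np.arange(10) * 0.001
slip = foot_slip(traj)
```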
A Human Study. To further validate our results, we compare the quality of our output against the other baselines, Wang et al. and SAMP, through A/B testing with human observers. For the study, we choose 5 scenarios from different indoor scenes and render the results of each method using exactly the same view and 3D character, so that the methods cannot be distinguished by appearance. We build two separate sets, where in each set the result videos of our method are shown side by side with one competitor in random order. Human observers are asked to choose the motion clip that is more human-like and plausible in the given 3D scene. We run each set of tests with 15 non-overlapping participants; see our supp. mat. for more details on the study setup. As a result, the outputs of our method are preferred by the majority (more than 50% of votes) in all cases. Considering all votes independently, our method is preferred 80.0% of the time over SAMP and 97.3% over Wang et al. In particular, we found that our method greatly outperforms the competing methods in terms of the naturalness of foot stepping, the transitions between locomotion and action, and collision avoidance with the scene. See our supp. videos for more results.
Figure 8. (a) Comparison of LAMA (top) and LAMA with the manifold replaced by IK (bottom), for a character opening a toilet lid. (b) Comparison of LAMA (top) and LAMA without motion editing (bottom), for sitting.
4.3. Ablation Studies
Ablation Studies on Action Controller. We quantitatively compare the original LAMA with LAMA without the collision reward r_coli, in order to demonstrate the role of r_coli in enforcing the action controller to search for optimal actions that generate collision-free motions. The ablation studies are performed in 5 PROX scenes. With the original LAMA, penetrations occur in only 1.1% of the frames of the whole motion sequences, while the ratio is 15.7% for LAMA without the collision reward. This result supports that the collision reward r_coli enforces the action controller to compute optimal actions for synthesizing body movements that respect the spatial constraints of the given 3D scene. Example results are shown in Fig. 6.
We also examine the contribution of the other components of the action controller module to generating natural interactions. As seen in Fig. 7, without a_t^offset the character fails to avoid penetrating objects or walls, as the raw motion from the motion database carries no information about the scene. This demonstrates that the action offset also plays a role in generating detailed scene-aware poses even from raw motion capture data. Moreover, the results with the action controller trained without the smoothness rewards r_∆t and r_∆v are not smooth enough, showing unnatural movements such as jerking. These ablation studies justify the advantages of our reward terms.
Ablation Studies on Task-Adaptive Motion Editing. We ablate our motion editing module by replacing it with an alternative approach based on inverse kinematics (IK). An example result is shown in Fig. 8 (left). For manipulation, the IK results show jerky and awkward motions because the temporal and inter-joint correlations of natural human motion are not reflected in IK, while the original LAMA with the task-adaptive motion editing module produces much more natural motions. Our motion editing module can also be used to further adjust the character's movements to different object geometries, going beyond the limits of the motion database. As seen in Fig. 8 (right), the motion editing module enables the character to properly sit on chairs of various sizes.

Figure 9. Examples of synthesized manipulation motions. The target object for manipulation is colored orange. The top row is a motion sequence of walking and opening a toilet lid, and the bottom row is a sequence of walking and opening doors. The character is colored purple at the start and aqua at the end.
5. Discussion
In this paper, we present a method to synthesize locomotion, scene interaction, and manipulation in a unified system. Leveraging an RL framework with motion matching, our method produces natural and plausible human motions in complex and cluttered 3D environments using only a limited amount of motion-only data. Our method has been thoroughly evaluated in diverse scenarios, outperforming previous approaches [15, 60]. We also demonstrate the robustness and generalization ability of our system by covering a wide range of human interactions in many different 3D environments.

While our RL-based method can generalize to unseen 3D environments, a new control policy has to be trained for each motion sequence. Combining RL with a supervised learning framework for better efficiency is an interesting future research direction. Furthermore, although we assume fixed skeletal information throughout the system, interaction motions may change depending on the character's body shape and size. We leave synthesizing motions for varying body shapes as future work.

Acknowledgments: This work was supported by the SNU-Naver Hyperscale AI Center, the SNU Creative-Pioneering Researchers Program, and an NRF grant funded by the Korea government (MSIT) (No. 2022R1A2C209272411).
A. Supplementary Video
The supplementary video shows the results of our method, LAMA, in various scenarios. In the video, we show human motion synthesis results on PROX [16], Matterport3D [7], and our own home-brewed 3D scene produced with the Polycam app [1] on an iPad Pro. We use SAPIEN [63] object meshes for the manipulation examples. As shown, our method successfully produces plausible and natural human motions in many challenging scenarios. The supplementary video also contains several ablation studies of our method, showing the importance of the collision reward r_coli in Eq. (4), the transition rewards (r_∆t, r_∆v) in Eq. (8), the posture offset a_t^offset in the action controller (Sec. 3.2), and our motion editing module (Sec. 3.5) compared to traditional inverse kinematics (IK). We also show comparisons with previous state-of-the-art methods [15, 59, 60] and demonstrate that our results have better motion quality and better collision avoidance in complicated 3D scenes.
B. Additional Details on Implementations
B.1. Action Controller
Implementation Details. For the action controller A and the motion synthesizer module S, we use the animation library DART [27]. We also use a publicly available PPO implementation [32, 41], from which we remove the variable time-stepping functions of [32] to follow the original PPO algorithm. The training details of the policy and value networks of the action controller are listed in Table 2.
Early Termination Conditions. As stated in the main paper, an episode is terminated (1) when the character moves out of the scene bounding box; (2) when the collision reward r_coli falls below a certain threshold; or (3) when, during locomotion, the root of the human character is located in a blocked (occupied) region of the scene in the 2D grid space.
Name                               | Value
Learning rate of policy network    | 2e-4
Learning rate of value network     | 0.001
Discount factor (γ)                | 0.95
GAE and TD (λ)                     | 0.95
Clip parameter (ε)                 | 0.2
# of tuples per policy update      | 30000
Batch size for policy/value update | 512

Table 2. Details on the hyper-parameters for learning the control policy of the action controller A.
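The discount factor γ and GAE λ in Table 2 enter PPO through generalized advantage estimation; a minimal sketch of that computation with the table's values as defaults:

```python
import numpy as np

def gae(rewards, values, last_value, gamma=0.95, lam=0.95):
    """Generalized advantage estimation used when updating the policy.

    rewards    : (T,) rewards r_t collected in one episode
    values     : (T,) critic estimates V(s_t)
    last_value : bootstrap value V(s_T) after the final step
    """
    T = len(rewards)
    adv = np.zeros(T)
    next_v, running = last_value, 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * next_v - values[t]  # TD residual
        running = delta + gamma * lam * running
        adv[t] = running
        next_v = values[t]
    returns = adv + values  # regression targets for the value network
    return adv, returns

adv, ret = gae(np.ones(3), np.zeros(3), 0.0)
```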
B.2. Motion Synthesizer
|
928 |
+
Motion Database Information.
|
929 |
+
As described in our
|
930 |
+
main paper, we pre-process the motion segments by selec-
|
931 |
+
tively collecting and clipping from Ubisoft La Forge [14],
|
932 |
+
COUCH [70], and SAMP [15]. The length (in frames)
|
933 |
+
of motion segments (“Seg. Length” in tables), number of
|
934 |
+
motion segment (“Seg. Count” in tables), and the number
|
935 |
+
of total frames (“Total Frames” in tables) are summarized
|
936 |
+
in Table 3.
|
Action-Specific Feature Definition. The motion feature, as defined in Sec. 3.3 of our main paper, represents both the current state of the motion and short-term future movements: f(m) = {{p_j}, {ṗ_j}, θ_up, c, o_future}. In particular, the action-specific feature o_future = {{p_0^dt}, {r_0^dt}} contains future motions so that the motion search process can take future motion consistency into account, where p_0^dt, r_0^dt ∈ R^2 are the position and orientation of the root joint dt frames after the current target frame. For locomotion, we extract dt = 10, 20, and 30 frames in the future (at 30 Hz) following [9], as addressed in our main paper. For sitting, we specifically choose dt as the frame where the character completes the sit-down motion. The major motivation of this design choice is to encourage the motion synthesizer to search for motion clips with the desired target action.
Computation Cost for Searching. Searching the motion database takes between 1 and 2 milliseconds on CPU, tested on an AMD Ryzen 5950X. The number of searches varies and depends on the 3D scene and the desired motions. In one of our scenarios, a total of 17 searches were performed for locomotion (walk) and 14 for action (sit). For locomotion, the search time averages 1.743 milliseconds (standard deviation 0.46); for action (sit), 1.103 milliseconds (standard deviation 0.63).
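The per-query cost above corresponds to a nearest-neighbor lookup over flattened feature vectors. A minimal sketch of such a motion-matching search, with a brute-force linear scan and hypothetical names (the paper's actual data structures are not specified here), could look like:

```python
# Brute-force motion-matching search: return the database segment whose
# flattened feature vector ({p_j}, {p_j_dot}, theta_up, c, o_future) is
# closest in squared L2 distance to the query. Names are our own sketch.
def search_motion(query, database):
    """database: list of (segment_id, feature) pairs; returns best segment_id."""
    best_id, best_dist = None, float("inf")
    for seg_id, feat in database:
        dist = sum((q - f) ** 2 for q, f in zip(query, feat))
        if dist < best_dist:
            best_id, best_dist = seg_id, dist
    return best_id
```

For database sizes on the order of the segment counts in Table 3, a linear scan of this kind plausibly fits within the reported millisecond budget; accelerated structures (e.g., KD-trees) are a common alternative.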
B.3. Motion Editing via Motion Manifold

Implementation Details. For the convolutional autoencoder of task-adaptive motion editing, we use PyTorch [42], FairMotion [12], and PyTorch3D [48]. The autoencoder is trained with the Adam optimizer with a learning rate of 0.0001. We use 3 layers of 1D temporal convolutions with a kernel width of 25 and stride 2, and the channel dimension of each output feature is 256. The training datasets are summarized in Table 4. Note that we use different pre-processing steps between the motion editing module and the motion synthesizer.
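Under the stated layer configuration (3 temporal 1D convolutions, kernel width 25, stride 2, 256 channels each), the autoencoder can be sketched in PyTorch as below. The input channel count (132) and the padding choices are our assumptions for a self-contained example, not values from the paper.

```python
import torch
import torch.nn as nn

# Sketch of the convolutional autoencoder: 3 temporal 1D convolutions with
# kernel width 25, stride 2, and 256 output channels. IN_CH (per-frame
# feature dimension) and padding are our assumptions.
IN_CH, HID = 132, 256

encoder = nn.Sequential(
    nn.Conv1d(IN_CH, HID, kernel_size=25, stride=2, padding=12), nn.ReLU(),
    nn.Conv1d(HID, HID, kernel_size=25, stride=2, padding=12), nn.ReLU(),
    nn.Conv1d(HID, HID, kernel_size=25, stride=2, padding=12),
)
decoder = nn.Sequential(
    nn.ConvTranspose1d(HID, HID, kernel_size=25, stride=2,
                       padding=12, output_padding=1), nn.ReLU(),
    nn.ConvTranspose1d(HID, HID, kernel_size=25, stride=2,
                       padding=12, output_padding=1), nn.ReLU(),
    nn.ConvTranspose1d(HID, IN_CH, kernel_size=25, stride=2,
                       padding=12, output_padding=1),
)

x = torch.randn(1, IN_CH, 120)   # one 120-frame motion clip (Table 4 length)
z = encoder(x)                   # latent sequence on the motion manifold
x_hat = decoder(z)               # reconstruction with the original length
```

With these padding choices each stride-2 layer halves the 120-frame clip to 60, 30, then 15 latent steps, and the transposed convolutions invert the sizes exactly.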
Reconstruction Loss. The encoder Ψ and decoder Ψ⁻¹ are trained with a reconstruction loss between X and Ψ⁻¹(Ψ(X)), composed of weighted terms:

Lrecon = wc Lcontact + wr Lroot + wq Lquat + wp Lpos.  (9)

Lcontact, Lroot, and Lquat are the MSE losses of the foot contact labels, the root status (height and transform relative to the previous frame projected on the XZ plane), and the joint rotations in 6D representation [73]. To penalize errors accumulating along the kinematic chain, we perform forward kinematics (FK) and measure the global position distance of joints between the original and reconstructed motion. As the global positions of the joints are highly dependent on the root positions, for the early epochs the distance is measured in root-centric coordinates to ignore the global location of the root, which we found empirically more stable.
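The role of FK in the positional term can be illustrated with a toy 2D chain: each joint rotation is accumulated down the chain, so an error at a parent joint displaces every descendant position. This is our own minimal sketch with scalar angles instead of the 6D rotations used in the paper.

```python
import math

# Toy forward kinematics along a planar chain. Real motion data is 3D with
# 6D rotation representations; this only illustrates error accumulation.
def fk_chain(root, bone_lengths, angles):
    """Accumulate per-joint angles down the chain and return joint positions."""
    positions, (x, y), total = [], root, 0.0
    for length, ang in zip(bone_lengths, angles):
        total += ang  # parent rotations propagate to all descendants
        x, y = x + length * math.cos(total), y + length * math.sin(total)
        positions.append((x, y))
    return positions
```

Comparing `fk_chain` outputs of the original and reconstructed rotations gives a global position distance of the kind used for the FK loss.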
Motion Editing Loss. For motion editing, the positional loss and regularization losses are defined as follows:

L = wp Lpos + wf Lfoot + wr Lroot, where
Lpos = Σ_{j, qj∈φ} ∥pj − qj∥², if φ exists at t,
Lfoot = Σ_{foot} ∥p^e_foot − p^i_foot∥²,
Lroot = wr∥r^e_xz − r^i_xz∥² + wΔr∥ṙ^e_xz − ṙ^i_xz∥².  (10)

pj denotes the position of joint j, and r, ṙ denote root positions and velocities, respectively. Superscripts e and i indicate whether a quantity is from the edited or the initial motion, respectively. Subscript xz indicates the vector is projected onto the XZ plane. The loss term L enforces the edited motion to maintain the contacts and the root trajectory (in the XZ plane) of the initial motion, while generating natural movements of the other joints to meet the sparse positional constraints.
|
1026 |
+
Generating Interaction Cue for Manipulation
|
1027 |
+
To syn-
|
1028 |
+
thesize character’s arm motions naturally interacting with
|
1029 |
+
the movements of articulated target objects, we produce
|
1030 |
+
desired interaction cues by producing the 3D trajectories
|
1031 |
+
of a chosen 3D position of the object at which the hand
|
1032 |
+
part of the character are expected to touch. Specifically,
|
1033 |
+
we apply the expected articulated motion of the 3D object
|
1034 |
+
model to produce the 3D trajectory of a chosen object ver-
|
1035 |
+
tex, v(Rt, Tt, θt), where Rt, Tt, are the global orientation
|
1036 |
+
and translation of the object and θt is the parameters for the
|
1037 |
+
object articulation (e.g., the hinge angle of the cover of a
|
1038 |
+
laptop) at time t. v(·) represents the 3D location of the cho-
|
1039 |
+
sen vertex v. To this end, we input the produced trajectory
|
1040 |
+
as the desired 3D interaction cue for a character’s joint (e.g.,
|
1041 |
+
a hand joint) assuming the joint is touching this object tra-
|
1042 |
+
jectory for manipulation φ = [v(Rt, Tt, θt)]t. Note that, in
|
1043 |
+
our visualization, we apply the desired articulated motions
|
1044 |
+
for the 3D object at each time, synced to the produced in-
|
1045 |
+
teraction cues.
|
1046 |
+
Label
|
1047 |
+
Seg. Length
|
1048 |
+
Seg. Count
|
1049 |
+
Total Frames
|
1050 |
+
Locomotion
|
1051 |
+
10
|
1052 |
+
11063
|
1053 |
+
11498
|
1054 |
+
Sit
|
1055 |
+
50 – 85
|
1056 |
+
5842
|
1057 |
+
14942
|
1058 |
+
Table 3. Details on pre-processed motion datasets per each action
|
1059 |
+
category for training our motion synthesizer S.
|
Name                               Value
Motion sequence length             120
Number of sequences (training)     11397
Number of sequences (validation)   3135
Number of sequences (test)         2139

Table 4. Details on pre-processed motion datasets for training our motion editing module M.
C. More Details on Experiments

C.1. Frechet Distance Features

FDroot is computed from the root feature vector, which is a concatenation of the root orientation in angle-axis representation, the root up vector, and the root transform relative to the previous frame. We note that all of the motions for comparison share the same up axis (y) and floor plane (xz). FDjoint is computed from the joint feature vector, represented as joint orientations in angle-axis representation, excluding the root.
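For reference, the Frechet distance between Gaussians fitted to two feature sets reduces, in the one-dimensional case, to the closed form below; the full FDroot/FDjoint computation uses mean vectors and covariance matrices analogously (with a matrix square root in place of the scalar one). This is an illustrative sketch, not the evaluation code.

```python
import math

# 1D special case of the Frechet distance between Gaussians fitted to two
# scalar feature sets: (mu_x - mu_y)^2 + var_x + var_y - 2*sqrt(var_x*var_y).
def frechet_distance_1d(xs, ys):
    mu_x, mu_y = sum(xs) / len(xs), sum(ys) / len(ys)
    var_x = sum((v - mu_x) ** 2 for v in xs) / len(xs)
    var_y = sum((v - mu_y) ** 2 for v in ys) / len(ys)
    return (mu_x - mu_y) ** 2 + var_x + var_y - 2 * math.sqrt(var_x * var_y)
```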
References

[1] Polycam - lidar and 3d scanner for iphone & android. https://poly.cam/. 9
[2] Kfir Aberman, Peizhuo Li, Dani Lischinski, Olga Sorkine-Hornung, Daniel Cohen-Or, and Baoquan Chen. Skeleton-aware networks for deep motion retargeting. ACM Trans. Graph., 39(4), 2020. 3
[3] Kfir Aberman, Yijia Weng, Dani Lischinski, Daniel Cohen-Or, and Baoquan Chen. Unpaired motion style transfer from video to animation. ACM Trans. Graph., 39(4), 2020. 3
[4] Kevin Bergamin, Simon Clavet, Daniel Holden, and James Richard Forbes. Drecon: data-driven responsive control of physics-based characters. ACM Trans. Graph., 38(6), 2019. 3
[5] Michael Büttner and Simon Clavet. Motion matching - the road to next gen animation. In Proc. of Nucl.ai, 2015. 2, 4
[6] Zhe Cao, Hang Gao, Karttikeya Mangalam, Qi-Zhi Cai, Minh Vo, and Jitendra Malik. Long-term human motion prediction with scene context. In ECCV, 2020. 2
[7] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. In 3DV, 2017. 6, 9
[8] Yu-Wei Chao, Jimei Yang, Weifeng Chen, and Jia Deng. Learning to sit: Synthesizing human-chair interactions via hierarchical control. In AAAI, 2021. 2
[9] Simon Clavet. Motion matching and the road to next-gen animation. In Proc. of GDC, 2016. 2, 4, 9
[10] Haegwang Eom, Daseong Han, Joseph S Shin, and Junyong Noh. Model predictive control with a visuomotor system for physics-based character animation. ACM Trans. Graph., 39(1), 2019. 1, 2
[11] Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik. Recurrent network models for human dynamics. In ICCV, 2015. 2
[12] Deepak Gopinath and Jungdam Won. fairmotion - tools to load, process and visualize motion capture data. Github, 2020. 9
[13] Ikhsanul Habibie, Daniel Holden, Jonathan Schwarz, Joe Yearsley, and Taku Komura. A recurrent variational autoencoder for human motion synthesis. In BMVC, 2017. 2
[14] Félix G Harvey, Mike Yurick, Derek Nowrouzezahrai, and Christopher Pal. Robust motion in-betweening. ACM Trans. Graph., 39(4), 2020. 3, 6, 9
[15] Mohamed Hassan, Duygu Ceylan, Ruben Villegas, Jun Saito, Jimei Yang, Yi Zhou, and Michael Black. Stochastic scene-aware motion prediction. In ICCV, 2021. 1, 2, 3, 6, 7, 8, 9
[16] Mohamed Hassan, Vasileios Choutas, Dimitrios Tzionas, and Michael J. Black. Resolving 3D human pose ambiguities with 3D scene constraints. In ICCV, 2019. 2, 6, 7, 9
[17] Mohamed Hassan, Partha Ghosh, Joachim Tesch, Dimitrios Tzionas, and Michael J Black. Populating 3d scenes by learning human-scene interaction. In CVPR, 2021. 1, 2
[18] Daniel Holden, Oussama Kanoun, Maksym Perepichka, and Tiberiu Popa. Learned motion matching. ACM Trans. Graph., 39(4), 2020. 4
[19] Daniel Holden, Taku Komura, and Jun Saito. Phase-functioned neural networks for character control. ACM Trans. Graph., 36(4), 2017. 3
[20] Daniel Holden, Jun Saito, and Taku Komura. A deep learning framework for character motion synthesis and editing. ACM Trans. Graph., 35(4), 2016. 2, 3, 6
[21] Chun-Hao P Huang, Hongwei Yi, Markus Höschle, Matvey Safroshkin, Tsvetelina Alexiadis, Senya Polikovsky, Daniel Scharstein, and Michael J Black. Capturing and inferring dense full-body human-scene contact. In CVPR, 2022. 2
[22] Kyunglyul Hyun, Kyungho Lee, and Jehee Lee. Motion grammars for character animation. In Computer Graphics Forum, volume 35, 2016. 2
[23] Yuheng Jiang, Suyi Jiang, Guoxing Sun, Zhuo Su, Kaiwen Guo, Minye Wu, Jingyi Yu, and Lan Xu. Neuralhofusion: Neural volumetric rendering under human-object interactions. In CVPR, 2022. 2
[24] Vladimir G Kim, Siddhartha Chaudhuri, Leonidas Guibas, and Thomas Funkhouser. Shape2pose: Human-centric shape analysis. ACM Trans. Graph., 33(4), 2014. 1, 2
[25] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6
[26] Jehee Lee, Jinxiang Chai, Paul SA Reitsma, Jessica K Hodgins, and Nancy S Pollard. Interactive control of avatars animated with human motion data. In Proceedings of the 29th annual conference on Computer graphics and interactive techniques, 2002. 2
[27] Jeongseok Lee, Michael X Grey, Sehoon Ha, Tobias Kunz, Sumit Jain, Yuting Ye, Siddhartha S Srinivasa, Mike Stilman, and C Karen Liu. Dart: Dynamic animation and robotics toolkit. The Journal of Open Source Software, 3(22), 2018. 9
[28] Jehee Lee and Kang Hoon Lee. Precomputing avatar behavior from human motion data. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, 2004. 3
[29] Kyungho Lee, Seyoung Lee, and Jehee Lee. Interactive character animation by learning multi-objective control. ACM Trans. Graph., 37(6), 2018. 3
[30] Kyungho Lee, Sehee Min, Sunmin Lee, and Jehee Lee. Learning time-critical responses for interactive character control. ACM Trans. Graph., 40(4), 2021. 3
[31] Kang Hoon Lee, Myung Geol Choi, and Jehee Lee. Motion patches: building blocks for virtual environments annotated with motion data. In ACM SIGGRAPH 2006 Papers. 2006. 2
[32] Seyoung Lee, Sunmin Lee, Yongwoo Lee, and Jehee Lee. Learning a family of motor skills from a single motion clip. ACM Trans. Graph., 40(4), 2021. 3, 9
[33] Seunghwan Lee, Moonseok Park, Kyoungmin Lee, and Jehee Lee. Scalable muscle-actuated human simulation and control. ACM Trans. Graph., 38(4), 2019. 5
[34] Sergey Levine, Jack M Wang, Alexis Haraux, Zoran Popović, and Vladlen Koltun. Continuous character control with low-dimensional embeddings. ACM Trans. Graph., 31(4), 2012. 3
[35] Ruilong Li, Shan Yang, David A. Ross, and Angjoo Kanazawa. Ai choreographer: Music conditioned 3d dance generation with aist++. In ICCV, 2021. 2, 7
[36] Hung Yu Ling, Fabio Zinno, George Cheng, and Michiel Van De Panne. Character controllers using motion vaes. ACM Trans. Graph., 39(4), 2020. 3
[37] Kovar Lucas, Gleicher Michael, and Pighin Frédéric. Motion graphs. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, 2002. 2
[38] Julieta Martinez, Michael J Black, and Javier Romero. On human motion prediction using recurrent neural networks. In CVPR, 2017. 2
[39] Josh Merel, Saran Tunyasuvunakool, Arun Ahuja, Yuval Tassa, Leonard Hasenclever, Vu Pham, Tom Erez, Greg Wayne, and Nicolas Heess. Catch & carry: reusable neural controllers for vision-guided whole-body tasks. ACM Trans. Graph., 39(4), 2020. 2
[40] Evonne Ng, Hanbyul Joo, Liwen Hu, Hao Li, Trevor Darrell, Angjoo Kanazawa, and Shiry Ginosar. Learning to listen: Modeling non-deterministic dyadic facial motion. In CVPR, 2022. 7
[41] Soohwan Park, Hoseok Ryu, Seyoung Lee, Sunmin Lee, and Jehee Lee. Learning predict-and-simulate policies from unorganized human motion data. ACM Trans. Graph., 38(6), 2019. 3, 5, 9
[42] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019. 9
[43] Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel Van de Panne. Deepmimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Trans. Graph., 37(4), 2018. 3, 5
[44] Xue Bin Peng, Yunrong Guo, Lina Halper, Sergey Levine, and Sanja Fidler. Ase: Large-scale reusable adversarial skill embeddings for physically simulated characters. ACM Trans. Graph., 41(4), 2022. 3
[45] Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, and Angjoo Kanazawa. Amp: Adversarial motion priors for stylized physics-based character control. ACM Trans. Graph., 40(4), 2021. 3
[46] Mathis Petrovich, Michael J Black, and Gül Varol. Action-conditioned 3d human motion synthesis with transformer vae. In ICCV, 2021. 2, 3
[47] Yuzhe Qin, Yueh-Hua Wu, Shaowei Liu, Hanwen Jiang, Ruihan Yang, Yang Fu, and Xiaolong Wang. Dexmv: Imitation learning for dexterous manipulation from human videos. In ECCV, 2022. 1, 2
[48] Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. Accelerating 3d deep learning with pytorch3d. arXiv:2007.08501, 2020. 9
[49] Manolis Savva, Angel X Chang, Pat Hanrahan, Matthew Fisher, and Matthias Nießner. Pigraphs: learning interaction snapshots from observations. ACM Trans. Graph., 35(4), 2016. 1, 2
[50] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. 5
[51] Hubert PH Shum, Taku Komura, Masashi Shiraishi, and Shuntaro Yamazaki. Interaction patches for multi-character animation. ACM Trans. Graph., 27(5), 2008. 2
[52] Sebastian Starke, Ian Mason, and Taku Komura. Deepphase: periodic autoencoders for learning motion phase manifolds. ACM Trans. Graph., 41(4):1–13, 2022. 3
[53] Sebastian Starke, He Zhang, Taku Komura, and Jun Saito. Neural state machine for character-scene interactions. ACM Trans. Graph., 38(6), 2019. 1, 2
[54] Omid Taheri, Vasileios Choutas, Michael J Black, and Dimitrios Tzionas. Goal: Generating 4d whole-body motion for hand-object grasping. In CVPR, 2022. 1, 2
[55] Omid Taheri, Nima Ghorbani, Michael J Black, and Dimitrios Tzionas. Grab: A dataset of whole-body human grasping of objects. In ECCV, 2020. 1, 2
[56] Graham W Taylor and Geoffrey E Hinton. Factored conditional restricted boltzmann machines for modeling motion style. In ICML, 2009. 2
[57] Adrien Treuille, Yongjoon Lee, and Zoran Popović. Near-optimal character animation with continuous control. In ACM SIGGRAPH 2007 papers. 2007. 3
[58] Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, and Honglak Lee. Learning to generate long-term future via hierarchical prediction. In ICML, 2017. 2
[59] Jingbo Wang, Yu Rong, Jingyuan Liu, Sijie Yan, Dahua Lin, and Bo Dai. Towards diverse and natural scene-aware 3d human motion synthesis. In CVPR, 2022. 1, 2, 3, 9
[60] Jiashun Wang, Huazhe Xu, Jingwei Xu, Sifei Liu, and Xiaolong Wang. Synthesizing long-term 3d human motion and interaction in 3d scenes. In CVPR, 2021. 1, 2, 6, 7, 8, 9
[61] Jingbo Wang, Sijie Yan, Bo Dai, and Dahua Lin. Scene-aware generative network for human motion synthesis. In CVPR, 2021. 2
[62] Jungdam Won, Deepak Gopinath, and Jessica Hodgins. A scalable approach to control diverse behaviors for physically simulated characters. ACM Trans. Graph., 39(4), 2020. 3
[63] Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, et al. Sapien: A simulated part-based interactive environment. In CVPR, 2020. 6, 9
[64] Xianghui Xie, Bharat Lal Bhatnagar, and Gerard Pons-Moll. Chore: Contact, human and object reconstruction from a single rgb image. In ECCV, 2022. 1, 2
[65] Xiang Xu, Hanbyul Joo, Greg Mori, and Manolis Savva. D3d-hoi: Dynamic 3d human-object interactions from videos. arXiv preprint arXiv:2108.08420, 2021. 2
[66] Zeshi Yang, Kangkang Yin, and Libin Liu. Learning to use chopsticks in diverse gripping styles. ACM Trans. Graph., 41(4), 2022. 1, 2
[67] He Zhang, Yuting Ye, Takaaki Shiratori, and Taku Komura. Manipnet: Neural manipulation synthesis with a hand-object spatial representation. ACM Trans. Graph., 40(4), 2021. 1, 2
[68] Jason Y. Zhang, Sam Pepose, Hanbyul Joo, Deva Ramanan, Jitendra Malik, and Angjoo Kanazawa. Perceiving 3d human-object spatial arrangements from a single image in the wild. In ECCV, 2020. 2
[69] Siwei Zhang, Yan Zhang, Qianli Ma, Michael J Black, and Siyu Tang. Place: Proximity learning of articulation and contact in 3d environments. In 3DV, 2020. 1, 2
[70] Xiaohan Zhang, Bharat Lal Bhatnagar, Sebastian Starke, Vladimir Guzov, and Gerard Pons-Moll. Couch: Towards controllable human-chair interactions. In ECCV, 2022. 1, 2, 6, 9
[71] Yan Zhang, Mohamed Hassan, Heiko Neumann, Michael J Black, and Siyu Tang. Generating 3d people in scenes without people. In CVPR, 2020. 1, 2
[72] Kaifeng Zhao, Shaofei Wang, Yan Zhang, Thabo Beeler, and Siyu Tang. Compositional human-scene interaction synthesis with semantic control. In ECCV, 2022. 1, 2
[73] Yi Zhou, Connelly Barnes, Lu Jingwan, Yang Jimei, and Li Hao. On the continuity of rotation representations in neural networks. In CVPR, 2019. 4, 6, 10