Image Memorability Prediction with Vision Transformers

Thomas Hagen1,* and Thomas Espeseth1,2

1Department of Psychology, University of Oslo, Oslo, Norway
2Department of Psychology, Oslo New University College, Oslo, Norway

Behavioral studies have shown that the memorability of images is similar across groups of people, suggesting that memorability is a function of the intrinsic properties of images, and is unrelated to people’s individual experiences and traits. Deep learning networks can be trained on such properties and be used to predict memorability in new data sets. Convolutional neural networks (CNN) have pioneered image memorability prediction, but more recently developed vision transformer (ViT) models may have the potential to yield even better predictions. In this paper, we present ViTMem, a new memorability model based on ViT, and compare the memorability predictions it produces with those of state-of-the-art CNN-derived models. Results showed that ViTMem performed equal to or better than state-of-the-art models on all data sets. Additional semantic level analyses revealed that ViTMem is particularly sensitive to the semantic content that drives memorability in images. We conclude that ViTMem provides a new step forward, and propose that ViT-derived models can replace CNNs for computational prediction of image memorability. Researchers, educators, advertisers, visual designers and other interested parties can leverage the model to improve the memorability of their image material.

memorability | vision transformers | psychology | semantic information

Introduction

Everyone knows that our memories depend on the experiences we have had, facts we have encountered, and the abilities we have to remember them. Combinations of these factors differ between individuals and give rise to unique memories in each of us. However, a complementary perspective on memory focuses on the material that is (to be) remembered rather than the individual that does the remembering. In one central study, Isola et al. (1) presented more than 2000 scene images in a continuous repeat-detection task. The participants were asked to respond whenever they saw an identical repeat. The results revealed that the memorability score (percent correct detections) varied considerably between images. Most importantly, by running a consistency analysis in which Spearman’s rank correlation was calculated on the memorability scores from random splits of the participant group, Isola and colleagues (1) were able to show that the memorability score ranking was consistent across participants – some images were memorable and some were forgettable. These results indicate that the degree to which an image was correctly detected depended on properties intrinsic to the image itself, not the traits of the observers. This is important because it shows that one can use the memorability scores in a stimulus set to predict memory performance in a new group of participants.
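The split-half consistency analysis described above can be sketched as follows. This is a minimal plain-Python illustration, not the original analysis code; the per-participant hit matrix is hypothetical, and the number of random splits is an arbitrary choice.

```python
import random

def rankdata(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Group adjacent equal values (they are adjacent after sorting).
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def split_half_consistency(hits, n_splits=25, seed=0):
    """hits[p][i] = 1 if participant p detected the repeat of image i.
    Repeatedly split participants into random halves, score each image
    as percent correct detections per half, and average the Spearman
    correlation of the two resulting memorability rankings."""
    rng = random.Random(seed)
    participants = list(range(len(hits)))
    n_images = len(hits[0])
    rhos = []
    for _ in range(n_splits):
        rng.shuffle(participants)
        half = len(participants) // 2
        g1, g2 = participants[:half], participants[half:]
        s1 = [sum(hits[p][i] for p in g1) / len(g1) for i in range(n_images)]
        s2 = [sum(hits[p][i] for p in g2) / len(g2) for i in range(n_images)]
        rhos.append(spearman(s1, s2))
    return sum(rhos) / len(rhos)
```

With synthetic data in which each image has an intrinsic detection probability, the split-half correlation comes out high, mirroring the consistency result described above.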
These results have been replicated and extended in a number of studies, revealing that similar findings are obtained with different memory tasks (2), different retention times (1, 2), different contexts (3), and independent of whether encoding is intentional or incidental (4). However, although image memorability has proven to be a robust and reliable phenomenon, it has not been straightforward to pinpoint the image properties that drive it. What seems clear, though, is that memorability is multifaceted (5, 6). One way to characterize the underpinnings of memorability is to investigate the contribution from processes at different levels of the visual processing stream. For example, at the earliest stages of processing of a visual scene, visual attributes such as local contrast, orientation, and color are coded. At an intermediate level, contours are integrated, surfaces, shapes, and depth cues are segmented, and foreground and background are distinguished. At a higher level, object recognition is conducted through matching with templates stored in long term memory.

Positive correlations of memorability with brightness and high contrast of objects have been found (7), but in general, low-level visual factors such as color, contrast, and spatial frequency do not predict memorability well (5, 8, 9). This is consistent with results showing that perceptual features are typically not retained in long term visual memory (10). In contrast to the low-level features, the evidence for a relation between intermediate to high level semantic features and memorability is much stronger. For example, images that contain people, faces, body parts, animals, and food are often associated with high memorability, whereas the opposite is a typical finding for objects like buildings and furniture and images of landscapes and parks (3, 7, 11, 12). Other intermediate to high level features such as object interaction with the context or other objects, saliency factors, and image composition also contribute to memorability (5). Furthermore, although memorability is not reducible to high-level features such as aesthetics (1, 12), interestingness (1, 13), or popularity (12), emotions, particularly of negative valence, seem to predict higher memorability (9, 12). Finally, memorability seems to be relatively independent of cognitive control, attention, or priming (14).

Overall, the available evidence indicates that memorability seems to capture intermediate- to high-level properties of semantics, such as objects or actions, and image composition, such as layout and clutter, rather than low-level features (5, 15).

Hagen et al. | January 23, 2023 | 1–7
arXiv:2301.08647v1 [cs.CV] 20 Jan 2023

This fits well with the central role of semantic categories in organizing cognition and memory (16). Generally, the priority of semantic-level information enables us to quickly understand novel scenes and predict future events (17). For example, when inspecting a novel scene or an image, we do not primarily focus on low-level perceptual features or pixels, but prioritize more abstract visual schemas involving spatial regions, objects, and the relation between them (18). Also, when people are asked to indicate which regions of an image help them recognize an image, there is high consistency between people’s responses (18). Similarly, fixation map data from eye-tracking have shown that there is a positive correlation between fixation map consistency and scene memorability, and this relation is associated with the presence of meaningful objects (3, 7, 19). Bylinskii et al. (5) suggest that these properties most efficiently signal information of high utility to our species, for example, emotions, social aspects, animate objects (e.g., faces, gestures, interactions), unexpected events, and tangible objects.

Memorability prediction. The finding that the memorability of an image is governed by properties intrinsic to the image itself not only implies that one can predict memory performance in a new set of participants, as described above, but also that one can predict the memorability of a novel set of images (i.e., memorability is an “image computable” feature). Given the availability of computational algorithms and high-quality training sets of sufficient size, one can predict memorability in novel sets of images for future (or already conducted) behavioral or neuroimaging studies. Such memorability prediction could also be valuable in a number of applied settings (e.g., within education, marketing and human-computer interaction).
Memorability researchers have employed computer vision models such as convolutional neural networks (CNNs) from early on (12), and advancements in the field have allowed researchers to predict image memorability with increasing precision (20–22). The inductive bias (the assumptions of the learning algorithms used to generalize to unseen data) of CNNs is inspired by knowledge about the primate visual system, and activations in the networks’ layers have, with some success, been used to explain neural activations (23). However, some vulnerabilities of CNNs have been noted. For example, CNNs appear to depend more on image texture than biological vision systems do (24), and have problems recognizing images based on the shape of objects (e.g., when texture is suppressed or removed). However, this vulnerability is reduced when the model’s shape bias is increased through training on shape representations (25).

The LaMem train/test splits are a well-established benchmark for memorability prediction (12). The original MemNet (12), which is based on AlexNet (26), achieved a Spearman rank correlation of 0.64 on this benchmark. There have been several improvements on this benchmark; the leading approaches utilize image captioning to enhance memorability predictions. That is, a CNN produces a textual description of the image, which is used to provide more high-level semantic information; the description is embedded into a semantic vector space before being combined with CNN image features in a multi-layered perceptron network. Squalli-Houssaini et al. (21) used this approach to reach a Spearman correlation of 0.72, with a mean squared error (MSE) of approximately 0.0092 (22). Leonardi et al. (22) used the captioning approach with dual ResNet50s and a soft attention mechanism to reach a rank correlation of 0.687 with an MSE of 0.0079.

The ResMem model (20), which is a CNN-based residual neural network architecture (ResNet), uses LaMem, but also takes advantage of a more recently published dataset named MemCat (11). This is a data set containing 10,000 images based on the categories of animals, food, landscape, sports and vehicles. This data set also has a higher split-half correlation than LaMem. Needell and Bainbridge (20) argue that the LaMem dataset on its own is lacking in generalizability due to poor sampling of naturalistic images. That is, the images are more intended as artistic renderings designed to attract an online audience. Hence, combining MemCat with LaMem should potentially yield a more generalizable model. Moreover, the increased size of the combined dataset might help in driving the model performance further than previous models based on LaMem. The authors of ResMem also noted the importance of semantic information and structured their approach to utilize semantic representations from a ResNet model in order to improve predictions. An added benefit of ResMem is that it is shared on the Python Package Index, which makes it easily accessible to researchers in diverse fields.
Vision transformers. Vision transformers (ViT) have recently been shown to provide similar or better performance than CNNs in a variety of computer vision tasks (27). The transformer architecture was first introduced in the natural language processing field (28) for capturing long-range dependencies in text, and it offers a superior speed/performance balance relative to ResNet architectures (29). Moreover, ViTs have been shown to produce errors that are more similar to human errors (30), suggesting that they could take similar information into account (see also (31)). A reason for this may be that ViTs are likely to take more of the global context into account and to depend more on the shape of objects than on their texture (30). While it is not entirely clear why such properties should yield better predictions of image memorability, they could still help inform the discourse on which visual characteristics are relevant, as well as potentially yield a better model for predicting image memorability. Hence, we set out to investigate whether vision transformers can yield better predictions of memorability than the state-of-the-art in image memorability prediction. In particular, we aimed to (i) benchmark a model based on ViT against the well-established LaMem train/test splits (12), (ii) train a ViT on the combined LaMem and MemCat data sets (20) to benchmark against the ResMem model (20), (iii) train a final ViT model on a more diverse and deduplicated data set, (iv) validate the final ViT model against additional independent data sets, and (v) inspect semantic level distributions of memorability scores for behavioral and predicted data.
Methods

As our model is based on ViT to predict memorability, we named it ViTMem. Because it has been shown that low-level visual features are less important for image memorability prediction, it seemed appropriate to use image augmentations in training our ViTMem model to reduce overfitting. This approach has also been used by others (22), although not to the extent done here. The augmentations used consisted of horizontal flipping, sharpen, blur, motion blur, random contrast, hue saturation value, CLAHE, shift scale rotate, perspective, optical distortion and grid distortion (32). For training all models we used PyTorch, the ADAM optimizer and mean squared error (squared L2 norm) as the loss function. Images were input as batches of 32 in RGB and resized to 256x256 pixels before applying augmentations with a probability of 0.7 and center cropping to 224x224 pixels. For creating ViTMem we used transfer learning on a vision transformer (27) model pretrained on ImageNet-1k (vit_base_patch16_224_miil) (33). The final classification layer was reduced to a single output followed by a sigmoid activation function.
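The regression head described above maps a single raw network output through a sigmoid into the [0, 1] range of memorability scores, and training minimizes the mean squared error against behavioral scores. A minimal plain-Python sketch of just that mapping and loss (illustrative only; the logits and target scores below are made up, and the actual model is the PyTorch ViT described in the text):

```python
import math

def sigmoid(x):
    """Squash a raw network output (logit) into the [0, 1] score range."""
    return 1.0 / (1.0 + math.exp(-x))

def mse_loss(predictions, targets):
    """Mean squared error (squared L2 norm), the training objective."""
    n = len(predictions)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n

# Hypothetical batch: raw model outputs and behavioral memorability scores.
logits = [1.2, -0.3, 2.5]
targets = [0.81, 0.46, 0.95]
preds = [sigmoid(z) for z in logits]
loss = mse_loss(preds, targets)
```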
As we aim to provide an accessible model to the research community, it is also necessary to compare against the publicly available ResMem model. Unfortunately, the authors of ResMem did not publish their held-out test set, hence it is difficult to make a balanced comparison between the currently published ResMem model and any competing models. We propose to do 10 train/test splits that can be used by future researchers (available at https://github.com/brainpriority/vitmem_data). Moreover, ResMem was not benchmarked on LaMem, hence a fair comparison can only be made on the combined LaMem and MemCat data set.

For the semantic level analysis, we chose to use image captioning (34) as this provides an efficient method for deriving semantic properties from images at scale. Importantly, as the image captioning model was trained on human image descriptions, it is likely to extract content that humans find meaningful in images, and in particular objects and contexts that are relevant for conveying such meanings. Hence, nouns derived from such descriptions are likely to be representative portions of the content that would convey meaning to humans observing the images.
Data Sources. For the large-scale image memorability (LaMem) benchmark we used the LaMem dataset (12). The image set used by ResMem is a combination of the image sets LaMem (12) and MemCat (11), with LaMem containing 58,741 and MemCat 10,000 images, for a total of 68,741 images. ResMem is reported to have used a held-out test set with 5000 images, hence we randomly selected 5000 images as our test set for each of our 10 train/test splits for this combined data set. For our final model we aimed to clean up the data and combine more of the available data sets on image memorability. As the number of duplicated images within and between data sets is unknown and duplicated images may interfere with performance measures, we aimed to deduplicate the data for this model. Duplicated images were identified by deriving embeddings from an off-the-shelf CNN model, and then visually inspecting the most similar embeddings. Our analysis of the data sets LaMem and MemCat showed that LaMem has 229 duplicated images while MemCat has 4. Moreover, 295 of the images in LaMem are also in MemCat. We aimed to build a larger and more diverse data set by combining more sources, and for this we chose CVPR2011 (9) and FIGRIM (3). CVPR2011 had 6 internal duplicates, 651 duplicates against LaMem, 78 against MemCat and 9 against FIGRIM. FIGRIM had 20 duplicates against MemCat and 70 against LaMem. All identified duplicates were removed before merging the data sets. As the images from FIGRIM and CVPR2011 were cropped, we obtained the original images before including them in the data set. This resulted in a data set with 71,658 images. For this data set we performed a 10% split for the test set.
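The deduplication step described above can be sketched as: compute one embedding per image, then flag pairs whose embeddings are nearly identical for visual inspection. The sketch below is a minimal plain-Python illustration; the tiny embedding vectors and the similarity threshold are made up, and a real pipeline would use CNN embeddings and an approximate nearest-neighbor search rather than the brute-force pairwise loop shown here.

```python
def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def candidate_duplicates(embeddings, threshold=0.98):
    """Return image-id pairs whose embeddings are nearly identical.
    embeddings: dict mapping image id -> embedding vector.
    Pairs above the threshold would then be inspected visually."""
    ids = sorted(embeddings)
    pairs = []
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            sim = cosine_similarity(embeddings[ids[i]], embeddings[ids[j]])
            if sim >= threshold:
                pairs.append((ids[i], ids[j], sim))
    return pairs

# Toy embeddings: img_a and img_b stand in for a near-duplicate pair.
emb = {
    "img_a": [0.9, 0.1, 0.4],
    "img_b": [0.91, 0.1, 0.39],
    "img_c": [0.1, 0.9, 0.2],
}
dupes = candidate_duplicates(emb)
```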
Results

Results on LaMem data set. On the LaMem data set the ViTMem model reached an average Spearman rank correlation of 0.711 and an MSE of 0.0076 (see Table 1). Here we compare our performance to measures obtained by MemNet (12), Squalli-Houssaini et al. (21) and Leonardi et al. (22).

Table 1. Comparison of model performance on LaMem data set

Model                      MSE Loss ↓   Spearman ρ ↑
MemNet                     Unknown      0.640
Squalli-Houssaini et al.   0.0092       0.720
Leonardi et al.            0.0079       0.687
ViTMem                     0.0076       0.711
Results on the combined LaMem and MemCat data set. Training on 10 train/test splits of the combined data set, the results showed that ViTMem performed better than the ResMem model (see Table 2). The average across splits was a Spearman rank correlation of 0.77 and an MSE of 0.005.

Table 2. Model performance on the combined LaMem and MemCat data set

Model    MSE Loss ↓   Spearman ρ ↑
ResMem   0.009        0.67
ViTMem   0.005        0.77
Results on combined and cleaned data set. To assess model performance on the larger and cleaned data set, we made a train/test split and then performed repeated k-fold cross validation with 10 train/test splits on the training set. This resulted in a mean MSE loss of 0.006 and a mean Spearman rank correlation of 0.76 (see Table 3). In order to provide a model for the community, we trained the final model (ViTMem Final Model) on the full training set and tested it on the corresponding test set; this model is published on the Python Package Index as version 1.0.0. The results showed a Spearman rank correlation of 0.77 and an MSE of 0.006 (see Table 3). The train/test splits are available on GitHub.

Table 3. Model performance on combined and cleaned data set

Model                MSE Loss ↓   Spearman ρ ↑
ViTMem               0.006        0.76
ViTMem Final Model   0.006        0.77
Validation on independent data sets. To further validate our model, we used memorability scores from an independent data set by Dubey and colleagues named PASCAL-S (7, 35), consisting of 852 images and cropped objects from the same images. ViTMem achieved a Spearman correlation of 0.44 on the images and 0.21 on the objects. In comparison, ResMem achieved a correlation of 0.36 on the images and 0.14 on the objects. Validation against the THINGS data set (15), which consists of 26,106 images with memorability scores, yielded a Spearman rank correlation of 0.30 for ViTMem and 0.22 for ResMem.
381 |
+
Semantic level analysis. In order to better understand how
|
382 |
+
the model predictions relate to the semantic content of the
|
383 |
+
images, we performed image captioning (34) on the com-
|
384 |
+
bined LaMem and MemCat data set and the Places205 data
|
385 |
+
set (36). We extracted nouns from the resulting image de-
|
386 |
+
scriptions and averaged behavioral or predicted memorability
|
387 |
+
scores for each noun (37). That is, the memorability for each
|
388 |
+
image was assigned to each noun derived from the image cap-
|
389 |
+
tioning procedure. For the combined LaMem and MemCat
|
390 |
+
data set we averaged behavioral memorability scores over
|
391 |
+
nouns (see Figure 1), while for the Places205 data set we
|
392 |
+
averaged predicted memorability scores from the ViTMem
|
393 |
+
model (see Figure 2). A general interpretation of the visu-
|
394 |
+
alizations in Figure 1 and 2 is that they appear to reveal a
|
395 |
+
dimension from nouns usually observed outdoors to more in-
|
396 |
+
door related nouns and ending with nouns related to animals,
|
397 |
+
and in particular, humans. This would appear to reflect the
|
398 |
+
distributions observed in previous work (9, 15), and hence
|
399 |
+
help to validate the model in terms of the image content it
|
400 |
+
is sensitive to. To further investigate how well memorability
|
401 |
+
associated with nouns were similar across the models we se-
|
402 |
+
lected nouns occurring more than the 85th percentile in each
|
403 |
+
set (654 nouns for LaMem and MemCat, 2179 nouns for
|
404 |
+
Places205), this resulted in 633 matched nouns across sets.
|
405 |
+
Analysis of these showed a Spearman ranked correlation of
|
406 |
+
0.89 and a R2 of 0.79, p<0.001 (see Figure 3). This analysis
|
407 |
+
indicates that nouns from image captioning is a strong pre-
|
408 |
+
dictor of image memorability and that the ViTMem model is
|
409 |
+
able to generalize the importance of such aspects from the
|
410 |
+
training set to a new set of images.
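The per-noun averaging step is simple to implement. The sketch below assumes the noun lists have already been extracted from the captions (the paper used TextBlob (37) for that step); the nouns and scores here are toy placeholders, not data from the paper:

```python
from collections import defaultdict

def average_memorability_by_noun(image_nouns, image_scores):
    """Assign each image's memorability score to every noun extracted
    from its caption, then average the scores per noun."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for nouns, score in zip(image_nouns, image_scores):
        for noun in set(nouns):  # count each noun once per image
            totals[noun] += score
            counts[noun] += 1
    return {noun: totals[noun] / counts[noun] for noun in totals}

# Toy example (hypothetical captions and scores)
nouns_per_image = [["mountain", "sky"], ["face", "smile"], ["mountain", "lake"]]
scores = [0.58, 0.88, 0.60]
print(average_memorability_by_noun(nouns_per_image, scores))
```

Ranking the resulting per-noun averages then yields orderings like those shown in Figures 1 and 2.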
Discussion

Using vision transformers we have improved on the state-of-the-art in image memorability prediction. Results showed that ViTMem performed equal to or better than state-of-the-art models on LaMem, and better than ResMem on the LaMem and MemCat hybrid data set. In addition, we assembled a new deduplicated hybrid data set and benchmarked the ViTMem model against it before training a final model. The model was further validated on additional data sets, and performed better than ResMem on these as well. Finally, we ran a semantic level analysis by using image captioning on the hybrid data set. We ranked the behavioral memorability scores on the images, labeled with nouns extracted from the captioning procedure. The results revealed that images labeled by nouns related to landscapes, cities, buildings and the like were ranked lowest, whereas images labeled by nouns related to animate objects and food were ranked highest. This finding is consistent with known category effects on memorability (3, 7, 11, 12, 15) and suggests that the labels extracted from the captioning procedure are strongly related to the factors that drive memorability for those images. Subsequently, we predicted memorability scores for images from a novel data set (Places205), ran the image captioning procedure, and ranked the predicted memorability scores on the images, labeled with nouns extracted from the captioning procedure. Visual inspection of the results revealed that the ranks were similar across samples and methods. This impression was confirmed by a strong correlation between matching pairs of nouns and 79% explained variance, suggesting that ViTMem captures the semantic content that drives memorability in images.

The use of image augmentations in training the ViTMem model, in combination with its state-of-the-art performance, suggests that such augmentations do not disrupt the ability of the model to predict image memorability, and hence may further support the importance of semantic-level properties in image memorability. That is, the augmentations modify a range of low-level image properties but mostly leave the semantic content intact.
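The principle can be illustrated with a minimal sketch. This is not the paper's augmentation pipeline (which used the albumentations library (32)); it is only a toy photometric jitter showing how low-level statistics can change while the spatial arrangement, and hence the depicted content, stays the same:

```python
import random

def jitter_brightness(pixels, max_shift=30, seed=None):
    """Shift every pixel of a grayscale image (list of rows) by one
    random offset, clipping to [0, 255]. Intensities change, but the
    relative layout of the image content does not."""
    rng = random.Random(seed)
    shift = rng.randint(-max_shift, max_shift)
    return [[max(0, min(255, p + shift)) for p in row] for row in pixels]

# A tiny 2x3 grayscale "image"
image = [[10, 200, 120], [0, 255, 60]]
augmented = jitter_brightness(image, seed=0)
```

Away from the clipping limits, every pixel moves by the same offset, so orderings and edges (and thus the semantics) are preserved.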
In comparison with ResMem, which relies on a CNN-based residual neural network architecture, ViTMem is based on vision transformers, which integrate information in a more global manner (30). As images are compositions of several semantically identifiable objects or parts of objects, a more holistic approach may be better suited to delineating the relative relevance of objects given their context. That is, we speculate that a broader integration of image features allows for a more complete evaluation of an image's constituent features in relation to each other. Hence, if semantic content is important for predicting image memorability, the model may have weighed the importance of semantic components in relation to each other to a larger degree than models based on CNNs.
ViTMem code and train/test sets are shared on GitHub (https://github.com/brainpriority/), and a python package named vitmem is available on the Python Package Index (see Supplementary Note 1 for a tutorial). Researchers and interested parties can use the model to predict memorability
Hagen et al.
[Figure 1: a word plot of per-noun memorability (y-axis "Memorability", roughly 0.56 to 0.90); nouns range from low-memorability outdoor-scene terms (e.g., mountains, skyline, clouds) to high-memorability animate and human-related terms (e.g., face, teeth, bikini).]

Fig. 1. Average behavioral image memorability scores for nouns that were extracted from images in the LaMem and MemCat data sets. The nouns shown are those that occurred most frequently or that are more frequent in the English language (38).
[Figure 2: a word plot of per-noun memorability (y-axis "Memorability", roughly 0.52 to 0.90); nouns range from low-memorability scene terms (e.g., badlands, glacier, stormy) to high-memorability human- and food-related terms (e.g., mouth, dress, dancer).]

Fig. 2. Average ViTMem predicted image memorability scores for nouns that were extracted from images in the Places205 data set. The nouns shown are those that occurred most frequently or that are more frequent in the English language (38).
[Figure 3: a scatter plot with both axes spanning roughly 0.6 to 0.9; x-axis "Memorability for LaMem & MemCat Nouns (Behavioral)", y-axis "Memorability for Places205 Nouns (ViTMem)".]

Fig. 3. Average memorability scores for images with matching nouns in different data sets. The y-axis shows average predicted memorability scores from ViTMem on the Places205 data set. The x-axis shows average behavioral memorability scores on the combined LaMem and MemCat data set.
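For reference, Spearman's rank correlation (the statistic used for the matched-noun comparison) can be computed without external libraries. The values below are toy numbers, not the per-noun averages from the paper:

```python
def rankdata(xs):
    """Rank values (1 = smallest), averaging ranks across ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy per-noun averages (hypothetical values)
behavioral = [0.58, 0.72, 0.88, 0.61, 0.80]
predicted = [0.56, 0.70, 0.90, 0.63, 0.78]
print(round(spearman(behavioral, predicted), 3))
```

In practice a library routine such as scipy.stats.spearmanr would also supply the p-value reported above.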
in existing or novel stimuli and employ them in research or applied settings. The ViTMem model will allow researchers to more precisely predict image memorability. The release of ViTMem follows up ResMem in providing an accessible method for predicting image memorability. This is important for studies aiming to control for how easily an image can be remembered. It will, for example, allow experimental psychologists and neuroscientists to better control their research. Similarly, educators, advertisers and visual designers can leverage the model to improve the memorability of their content.

Despite state-of-the-art performance in memorability prediction, improvements may still be possible. Previous works have shown benefits of pretraining their networks on data sets of places and objects prior to fine-tuning for memorability prediction (39). Moreover, ViTMem does not take image captioning into account, something that has been done successfully with CNNs (21, 22). Hence there is potentially more to be gained from incorporating image semantics and/or pretraining on data sets of objects and places. In addition, ViTMem is only based on the "base" configuration of the available ViT models. Model performance may still increase by adopting the "large" or "huge" configurations of the model.

We conclude that ViTMem can be used to predict memorability for images at a level that is equal to or better than state-of-the-art models, and we propose that vision transformers provide a new step forward in the computational prediction of image memorability.
References

1. Phillip Isola, Jianxiong Xiao, Devi Parikh, Antonio Torralba, and Aude Oliva. What makes a photograph memorable? IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7):1469–1482, 2013.
2. Lore Goetschalckx, Pieter Moors, and Johan Wagemans. Image memorability across longer time intervals. Memory, 26(5):581–588, 2018.
3. Zoya Bylinskii, Phillip Isola, Constance Bainbridge, Antonio Torralba, and Aude Oliva. Intrinsic and extrinsic effects on image memorability. Vision Research, 116:165–178, 2015.
4. Lore Goetschalckx, Jade Moors, and Johan Wagemans. Incidental image memorability. Memory, 27(9):1273–1282, 2019.
5. Zoya Bylinskii, Lore Goetschalckx, Anelise Newman, and Aude Oliva. Memorability: An image-computable measure of information utility. In Human Perception of Visual Information, pages 207–239. Springer, 2022.
6. Nicole C Rust and Vahid Mehrpour. Understanding image memorability. Trends in Cognitive Sciences, 24(7):557–568, 2020.
7. Rachit Dubey, Joshua Peterson, Aditya Khosla, Ming-Hsuan Yang, and Bernard Ghanem. What makes an object memorable? In Proceedings of the IEEE International Conference on Computer Vision, pages 1089–1097, 2015.
8. Wilma A Bainbridge, Daniel D Dilks, and Aude Oliva. Memorability: A stimulus-driven perceptual neural signature distinctive from memory. NeuroImage, 149:141–152, 2017.
9. Phillip Isola, Devi Parikh, Antonio Torralba, and Aude Oliva. Understanding the intrinsic memorability of images. Advances in Neural Information Processing Systems, 24, 2011.
10. Timothy F Brady, Talia Konkle, and George A Alvarez. A review of visual memory capacity: Beyond individual items and toward structured representations. Journal of Vision, 11(5):4–4, 2011.
11. Lore Goetschalckx and Johan Wagemans. MemCat: a new category-based image set quantified on memorability. PeerJ, 7:e8169, 2019.
12. Aditya Khosla, Akhil S. Raju, Antonio Torralba, and Aude Oliva. Understanding and predicting image memorability at a large scale. In International Conference on Computer Vision (ICCV), 2015.
13. Michael Gygli, Helmut Grabner, Hayko Riemenschneider, Fabian Nater, and Luc Van Gool. The interestingness of images. In Proceedings of the IEEE International Conference on Computer Vision, pages 1633–1640, 2013.
14. Wilma A Bainbridge. The resiliency of image memorability: A predictor of memory separate from attention and priming. Neuropsychologia, 141:107408, 2020.
15. Max A. Kramer, Martin N. Hebart, Chris I. Baker, and Wilma A. Bainbridge. The features underlying the memorability of objects. bioRxiv, 2022. doi: 10.1101/2022.04.29.490104.
16. Eleanor Rosch, Carolyn B Mervis, Wayne D Gray, David M Johnson, and Penny Boyes-Braem. Basic objects in natural categories. Cognitive Psychology, 8(3):382–439, 1976.
17. Douglas L Medin and John D Coley. Concepts and categorization. In Perception and Cognition at Century's End: Handbook of Perception and Cognition, pages 403–439, 1998.
18. Erdem Akagunduz, Adrian G Bors, and Karla K Evans. Defining image memorability using the visual memory schema. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(9):2165–2178, 2019.
19. Muxuan Lyu, Kyoung Whan Choe, Omid Kardan, Hiroki P Kotabe, John M Henderson, and Marc G Berman. Overt attentional correlates of memorability of scene images and their relationships to scene semantics. Journal of Vision, 20(9):2–2, 2020.
20. Coen D Needell and Wilma A Bainbridge. Embracing new techniques in deep learning for estimating image memorability. Computational Brain & Behavior, pages 1–17, 2022.
21. Hammad Squalli-Houssaini, Ngoc QK Duong, Marquant Gwenaëlle, and Claire-Hélène Demarty. Deep learning for predicting image memorability. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2371–2375. IEEE, 2018.
22. Marco Leonardi, Luigi Celona, Paolo Napoletano, Simone Bianco, Raimondo Schettini, Franco Manessi, and Alessandro Rozza. Image memorability using diverse visual features and soft attention. In International Conference on Image Analysis and Processing, pages 171–180. Springer, 2019.
23. Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624, 2014.
24. Nicholas Baker, Hongjing Lu, Gennady Erlikhman, and Philip J Kellman. Local features and global shape information in object classification by deep convolutional neural networks. Vision Research, 172:46–61, 2020.
25. Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2018.
26. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
27. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
28. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
29. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
30. Shikhar Tuli, Ishita Dasgupta, Erin Grant, and Thomas L Griffiths. Are convolutional neural networks or transformers more like human vision? arXiv preprint arXiv:2105.07197, 2021.
31. Nicholas Baker and James H Elder. Deep learning models fail to capture the configural nature of human shape perception. iScience, 25(9):104913, 2022.
32. Alexander Buslaev, Vladimir I Iglovikov, Eugene Khvedchenya, Alex Parinov, Mikhail Druzhinin, and Alexandr A Kalinin. Albumentations: fast and flexible image augmentations. Information, 11(2):125, 2020.
33. Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. ImageNet-21K pretraining for the masses. arXiv preprint arXiv:2104.10972, 2021.
34. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. arXiv preprint arXiv:2202.03052, 2022.
35. Yin Li, Xiaodi Hou, Christof Koch, James M Rehg, and Alan L Yuille. The secrets of salient object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 280–287, 2014.
36. Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features for scene recognition using Places database. Advances in Neural Information Processing Systems, 27, 2014.
37. Steven Loria et al. TextBlob v0.17.1, October 2021.
38. Robyn Speer. rspeer/wordfreq: v3.0, September 2022.
39. Shay Perera, Ayellet Tal, and Lihi Zelnik-Manor. Is image memorability prediction solved? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
Supplementary Note 1: How to use the vitmem python package

Python needs to be installed on a computer before pip can be used to install the vitmem package. To install vitmem from a command prompt, run:

    pip install vitmem

To predict image memorability for an image named "image.jpg", run the following in a python interpreter:

    from vitmem import ViTMem
    model = ViTMem()
    memorability = model("image.jpg")
    print(f"Predicted memorability: {memorability}")
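Building on the interpreter example above, a folder of images can be scored with a small helper. The sketch assumes nothing about the package beyond the documented ViTMem() callable, and the directory name is a placeholder:

```python
from pathlib import Path

def score_images(model, image_dir, pattern="*.jpg"):
    """Apply a ViTMem-style callable (image path -> score) to every
    matching image in a directory and collect the results by filename."""
    return {p.name: model(str(p)) for p in sorted(Path(image_dir).glob(pattern))}

# With the real package installed:
# from vitmem import ViTMem
# scores = score_images(ViTMem(), "images/")
```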
|
1095 |
+
Hagen et al.
|
1096 |
+
|
|
1097 |
+
ViTMem
|
1098 |
+
|
|
1099 |
+
7
|
1100 |
+
|
-9FAT4oBgHgl3EQfqR39/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
.gitattributes
CHANGED
@@ -7568,3 +7568,65 @@ a9AzT4oBgHgl3EQfnP2n/content/2301.01578v1.pdf filter=lfs diff=lfs merge=lfs -tex
|
|
7568 |
F9E0T4oBgHgl3EQfzQJ7/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7569 |
yNE3T4oBgHgl3EQflwrt/content/2301.04611v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7570 |
ZtE4T4oBgHgl3EQfOgxT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
7568 |
F9E0T4oBgHgl3EQfzQJ7/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7569 |
yNE3T4oBgHgl3EQflwrt/content/2301.04611v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7570 |
ZtE4T4oBgHgl3EQfOgxT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7571 |
+
atE4T4oBgHgl3EQfoA2t/content/2301.05181v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7572 |
+
jdFAT4oBgHgl3EQfah0B/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7573 |
+
Z9E1T4oBgHgl3EQfwwUB/content/2301.03413v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7574 |
+
x9AyT4oBgHgl3EQfnvh8/content/2301.00494v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7575 |
+
99E2T4oBgHgl3EQfmAd3/content/2301.03994v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7576 |
+
0NAyT4oBgHgl3EQfbfc0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7577 |
+
V9FJT4oBgHgl3EQf4C1z/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7578 |
+
jdFST4oBgHgl3EQfHDjE/content/2301.13724v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7579 |
+
xNE2T4oBgHgl3EQf3Qj1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7580 |
+
kdAyT4oBgHgl3EQfYPeA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7581 |
+
yNE3T4oBgHgl3EQflwrt/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7582 |
+
wdAyT4oBgHgl3EQfOfZu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7583 |
+
ctE3T4oBgHgl3EQfGgkb/content/2301.04314v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7584 |
+
i9FLT4oBgHgl3EQfbS-d/content/2301.12078v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7585 |
+
i9FLT4oBgHgl3EQfbS-d/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7586 |
+
69AzT4oBgHgl3EQfgPxu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7587 |
+
jdFAT4oBgHgl3EQfah0B/content/2301.08551v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7588 |
+
JdAzT4oBgHgl3EQfVPzH/content/2301.01282v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7589 |
+
O9FJT4oBgHgl3EQfISxO/content/2301.11455v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7590 |
+
CtE1T4oBgHgl3EQf9wbT/content/2301.03561v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7591 |
+
htFPT4oBgHgl3EQfDzQQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7592 |
+
5dE2T4oBgHgl3EQfOgbi/content/2301.03750v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7593 |
+
AtE0T4oBgHgl3EQfxwIb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7594 |
+
WdFLT4oBgHgl3EQfTC8I/content/2301.12043v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7595 |
+
pdE3T4oBgHgl3EQf7wvD/content/2301.04802v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7596 |
+
2tAyT4oBgHgl3EQfb_cv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7597 |
+
B9E0T4oBgHgl3EQfyAKb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7598 |
+
PdFRT4oBgHgl3EQfIzcw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7599 |
+
mNE3T4oBgHgl3EQfiQps/content/2301.04578v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7600 |
+
BtE3T4oBgHgl3EQfUAqQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7601 |
+
sdE5T4oBgHgl3EQfmA-1/content/2301.05676v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7602 |
+
a9AzT4oBgHgl3EQfnP2n/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7603 |
+
UNAzT4oBgHgl3EQflv0l/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7604 |
+
1dE2T4oBgHgl3EQfigf6/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7605 |
+
BtE3T4oBgHgl3EQfUAqQ/content/2301.04447v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7606 |
+
tdAyT4oBgHgl3EQf0fkK/content/2301.00717v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7607 |
+
PtFKT4oBgHgl3EQfgy6A/content/2301.11835v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7608 |
+
99E2T4oBgHgl3EQfmAd3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7609 |
+
TNE4T4oBgHgl3EQfLgyY/content/2301.04939v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7610 |
+
VtAyT4oBgHgl3EQfhvgZ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7611 |
+
V9FIT4oBgHgl3EQfhStk/content/2301.11287v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7612 |
+
ftE4T4oBgHgl3EQfqg2l/content/2301.05201v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7613 |
+
sdE5T4oBgHgl3EQfmA-1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7614 |
+
Z9E1T4oBgHgl3EQfwwUB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7615 |
+
JdAzT4oBgHgl3EQfVPzH/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7616 |
+
XdE1T4oBgHgl3EQfvwV4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7617 |
+
gdE_T4oBgHgl3EQf2xzk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7618 |
+
iNE3T4oBgHgl3EQfIwkK/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7619 |
+
I9AyT4oBgHgl3EQf5_o0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7620 |
+
YtAyT4oBgHgl3EQfWvcM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7621 |
+
gdE_T4oBgHgl3EQf2xzk/content/2301.08343v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7622 |
+
T9E3T4oBgHgl3EQfzwt3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7623 |
+
htE1T4oBgHgl3EQffgQ-/content/2301.03218v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7624 |
+
W9FOT4oBgHgl3EQf8DTf/content/2301.12965v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7625 |
+
pdFAT4oBgHgl3EQfeR2v/content/2301.08575v1.pdf filter=lfs diff=lfs merge=lfs -text
|
7626 |
+
atE4T4oBgHgl3EQfoA2t/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7627 |
+
W9FOT4oBgHgl3EQf8DTf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7628 |
+
ctE3T4oBgHgl3EQfGgkb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7629 |
+
tdAyT4oBgHgl3EQf0fkK/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7630 |
+
pdFAT4oBgHgl3EQfeR2v/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7631 |
+
PtFKT4oBgHgl3EQfgy6A/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
|
7632 |
+
_dE1T4oBgHgl3EQfVAM6/content/2301.03096v1.pdf filter=lfs diff=lfs merge=lfs -text
|
0NAyT4oBgHgl3EQfbfc0/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:cc910adb4d29e196eb341c4caed7d742725717dca1facbdd69bcf8b03305d2bc
|
3 |
+
size 5373997
|
0NAyT4oBgHgl3EQfoPhb/content/tmp_files/2301.00503v1.pdf.txt
ADDED
@@ -0,0 +1,790 @@
A Concept Knowledge Graph for User Next Intent Prediction at Alipay

Yacheng He (Ant Group, Hangzhou, China)
Qianghuai Jia∗ (Ant Group, Hangzhou, China)
Lin Yuan (Ant Group, Hangzhou, China)
Ruopeng Li (Ant Group, Hangzhou, China)
Yixin Ou (Zhejiang University, Hangzhou, China)
Ningyu Zhang (Zhejiang University, Hangzhou, China)

ABSTRACT
This paper illustrates the technologies of user next intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay1, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent, which is an offline concept knowledge graph in the Life-Service domain modeling the historical behaviors of users, the rich content interacted with by users, and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of the downstream tasks while retaining explainability.

CCS CONCEPTS
• Information systems → Query representation; Information extraction.

KEYWORDS
Knowledge Graph; Intent Prediction; Graph Embedding; Multi-label Classification

ACM Reference Format:
Yacheng He, Qianghuai Jia, Lin Yuan, Ruopeng Li, Yixin Ou, and Ningyu Zhang. 2023. A Concept Knowledge Graph for User Next Intent Prediction at Alipay. In Proceedings of Make sure to enter the correct conference title from your rights confirmation email (Conference acronym 'XX). ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
1 INTRODUCTION
User next intent prediction – the ability to automatically infer the next decision of users based on historical behavior and background knowledge – holds an important place in in-device Apps [17]. For example, in digital life service platforms such as Alipay, users often purchase snacks at the cinema (corresponding intent "buy snacks") after buying movie tickets via TaoPiaoPiao2 (corresponding intent "buy movie tickets"), which implies that the intent "buy movie tickets" may lead to the following intent "buy snacks." As shown in Figure 1, the ability to infer the future intents of users has the potential to be advantageous for tasks such as recommendation, searching, transaction risk management and so on.

1 https://global.alipay.com/platform/site/ihome
2 https://dianying.taobao.com/

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
Conference acronym 'XX, June 03–05, 2018, Woodstock, NY
© 2023 Association for Computing Machinery.
ACM ISBN 978-1-4503-XXXX-X/18/06...$15.00
https://doi.org/10.1145/nnnnnnn.nnnnnnn

Figure 1: The user next intent prediction system at Alipay. Sub-figure (a) illustrates the core ontology and subgraph of AlipayKG. In sub-figure (b), an example of the user's historical interactions and intent sequence is shown in gray-grounded boxes, and the next intent is marked with a red "?" that has been inferred as "buy snacks" by the next intent prediction model, whose outputs will provide a clear signal to downstream applications as shown in sub-figure (c).

arXiv:2301.00503v1 [cs.CL] 2 Jan 2023

Intuitively, user intent can be characterized as clustering patterns of user behaviors, which are usually hidden in the content interacted with or generated by users in mobile applications. Specifically, the core of understanding user intent in Alipay lies in systematic and explicit knowledge modeling of the user's situation,
and the user's interacted item content, which consists of queries, applet services, bills, coupons, stores, reviews, etc. Concretely, we summarize the two non-trivial issues in user next intent prediction at Alipay as follows:
• How to characterize user intent. It is challenging to abstract and encode intent from user behaviors that are very diverse and cannot be directly observed. In particular, unlike e-commerce scenarios such as Amazon3, which mainly contain shopping intent, the behaviors at Alipay are various, including shopping, trip and payment, which further increases the difficulty of intent representation.
• How to predict the user's next intent in real-time. The user's next intent is not only based on the user's profile and preference but is also largely influenced by spatial and temporal factors. For example, the intent to "buy movie tickets" tends to occur at the weekend, while the intent of "registration" often occurs in the hospital.
To address the above-mentioned issues, we propose a user next intent prediction system based on the Knowledge Graph (KG) and apply it to downstream applications at Alipay. We summarize the contributions of this paper as follows:
• We propose AlipayKG, a concept knowledge graph that explicitly represents user behaviors by defining an intent architecture to achieve a unified representation of multi-source heterogeneous content. Meanwhile, we propose a systematic approach to obtain structured knowledge from multi-source content. With the proposed AlipayKG, we address the first issue.
• As for the second issue, we design a next intent prediction framework that integrates expert rules from AlipayKG, which improves the performance while increasing interpretability.
• We evaluate this system on downstream tasks. Experimental results demonstrate that the proposed system can enhance the performance of several real-world applications, which serve more than 100 million daily active users.
2 ALIPAYKG-BASED USER NEXT INTENT PREDICTION SYSTEM
An overview of our user intent system is presented in Figure 1, and it is composed of two parts: 1) AlipayKG to sufficiently characterize user intent, and 2) a Next Intent Prediction Framework to accurately predict the user's next intent in real-time. All collected data are anonymized and reviewed by the IRB committee to preserve privacy.

2.1 AlipayKG Construction
In general, user intent plays a crucial role in promoting the performance and interpretability of user modeling systems. However, uniformly capturing and expressing users' intents is arduous due to the various kinds of user behaviors in digital life service platforms. Therefore, to sufficiently characterize user intent, we propose a concept KG in the Life-Service domain called AlipayKG. The core ontology of AlipayKG is shown in Figure 1(a), which includes four node types and four relations. Specifically, "Intent" describes the decision drivers behind users' needs and mainly consists of "Function" and "Product," such as "take an internet taxi 打网约车" and "order coffee 点咖啡." Furthermore, "Product" and "Function" can be represented by more fine-grained HowNet sememes4 that are regarded as the basic units of semantics, such as "movie ticket|电影票 = {coupon|票证, look|看, shows|表演物}." Meanwhile, we also define two types of relation between "Intent" nodes: 1) the "isA" relation builds the semantic hyponymy of "Intent" nodes, such as "rent an iPhone13 -isA- rent a mobile phone"; 2) the "Consequent" relation establishes the order effect of "Intent" nodes, such as "buy a house -consequent- renovate a house." Figure 2 illustrates the framework of AlipayKG construction, which contains two parts: 1) KG Nodes Mining and 2) KG Relations Mining. It is worth noting that crowdsourcing is employed for data quality control throughout the whole process.

3 https://www.amazon.com/

Figure 2: The process of constructing AlipayKG consists of node mining and relations mining (by different colors).
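As a concrete illustration, the core ontology above can be pictured as a tiny in-memory triple store. This is a hypothetical sketch: the node and relation names follow the paper, while the concrete instances and the lookup helper are ours.

```python
# Minimal sketch of the AlipayKG core ontology as (head, relation, tail) triples.
# Relation names (isA, Consequent, Consist, Has) follow the paper; instances are illustrative.
triples = [
    ("buy movie tickets", "isA", "entertainment ticketing"),
    ("buy movie tickets", "Consequent", "buy snacks"),
    ("buy movie tickets", "Consist", "buy"),            # Intent -> Function
    ("buy movie tickets", "Consist", "movie tickets"),  # Intent -> Product
    ("movie tickets", "Has", "coupon"),                 # Product -> Sememe
    ("movie tickets", "Has", "look"),
    ("movie tickets", "Has", "shows"),
]

def neighbors(head, relation):
    """Return all tails reachable from `head` via `relation`."""
    return [t for h, r, t in triples if h == head and r == relation]

print(neighbors("buy movie tickets", "Consist"))  # ['buy', 'movie tickets']
print(neighbors("movie tickets", "Has"))          # ['coupon', 'look', 'shows']
```

At production scale the graph would of course live in a dedicated store rather than a Python list; the point here is only the shape of the ontology.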
2.1.1 KG Nodes Mining. To mine "Intent" nodes, we adopt the automated phrase mining approach [13] based on item content and extend it with a self-constructed ground dictionary for high-quality phrase classification, where item content is chosen as our data source since users often directly express their requirements by interacting with items. Although the items in Alipay are multi-source heterogeneous, the text of different items shares the same critical information and can be used as input data sources for knowledge mining. Then, we utilize lexical rule matching, part-of-speech tagging [21] and short text matching models [27] to structure the "Intent" nodes into two parts: "Function" and "Product." Moreover, HowNet [16] has a wealth of manually annotated corpus, through which we train a multi-label classification model [19] to automatically obtain the sememe information of "Function" and "Product." Due to aliasing and ambiguity issues with entity names, we further apply the Bert-Int [20] alignment model to "Intent" and "Product" nodes for semantic disambiguation, respectively.
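The Function/Product split of an "Intent" phrase can be sketched as follows. This is a toy stand-in: the paper uses part-of-speech tagging and short text matching models, whereas here a tiny hand-written verb lexicon plays the role of the tagger, and the vocabulary is invented.

```python
# Toy stand-in for splitting an "Intent" phrase into "Function" and "Product".
# A small verb lexicon substitutes for the POS tagger used in the paper.
FUNCTION_WORDS = {"buy", "rent", "order", "watch", "take"}

def split_intent(phrase: str):
    """Split an intent phrase into (function, product),
    e.g. 'buy movie tickets' -> ('buy', 'movie tickets')."""
    tokens = phrase.split()
    function = [w for w in tokens if w in FUNCTION_WORDS]
    product = [w for w in tokens if w not in FUNCTION_WORDS]
    return " ".join(function), " ".join(product)

print(split_intent("buy movie tickets"))  # ('buy', 'movie tickets')
print(split_intent("order coffee"))       # ('order', 'coffee')
```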
2.1.2 KG Relations Mining. In this part, the mining methods for the "isA" and "Consequent" relations between "Intent" nodes are elaborated. It is worth noting that the other two relations (i.e., "Consist" and "Has") have already been obtained in the "KG Nodes Mining" Section.

4 https://github.com/thunlp/OpenHowNet.git

Figure 3: User next intent prediction framework. Figure (a): a GCN is learned over the AlipayKG to obtain intent label representations, which are applied to predict output intents. Figure (b): intent label generation for each item via the multi-label classification model. Figure (c): 1) the encoder receives massive long sequence inputs (intent, location and global time); 2) the decoder receives long sequence inputs, pads the target intents with zeros, and instantly predicts output intents (marked orange) in a generative style.

"isA" Relation: Since "isA" is used to organize "Intent" nodes in a hierarchical tree structure, it is challenging to acquire the knowledge that belongs to common sense through data mining. For instance, it is easy for a human to know that "buy an iPhone13" is a kind of "buy a mobile phone," but difficult for a machine to understand. To this end, we propose two different methods described as follows:
1) Lexical Rule-based Method: This method utilizes the "isA" relation of the "Product" to build the "isA" relation between "Intent" nodes. For example, "buy an iPhone13" and "buy a mobile phone" have the same "Function," and it can be acquired from a general knowledge graph that "iPhone13" is a kind of "mobile phone"; then the relation "buy an iPhone13 -isA- buy a mobile phone" can be generated.
2) Embedding-based Method: This method employs textual semantics to avoid the drawbacks of the lexical rule-based method. Specifically, we first apply StructBERT [22] pre-trained on the Alipay corpus to represent the embedding of each "Product." Secondly, we calculate the cosine distance between "Product" nodes and recall the top-K candidates with the closest semantics. Finally, the positive "isA" relations between "Intent" nodes are chosen.
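The embedding-based recall step can be sketched as a cosine-similarity nearest-neighbour lookup. The vectors below are made up for illustration; in the paper they would come from a StructBERT model pre-trained on the Alipay corpus.

```python
import numpy as np

# Sketch of the embedding-based "isA" candidate recall: embed each "Product",
# then for a query product keep the top-K nearest neighbours by cosine similarity.
products = ["mobile phone", "iPhone13", "coffee", "movie tickets"]
emb = np.array([
    [0.9, 0.1, 0.0],    # mobile phone
    [0.85, 0.15, 0.0],  # iPhone13 (semantically close to "mobile phone")
    [0.0, 0.2, 0.9],    # coffee
    [0.1, 0.9, 0.1],    # movie tickets
])
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize rows

def top_k(query: str, k: int = 1):
    """Return the k products closest to `query` (excluding the query itself)."""
    q = emb[products.index(query)]
    sims = emb @ q  # cosine similarity, since rows are unit vectors
    order = [i for i in np.argsort(-sims) if products[i] != query]
    return [products[i] for i in order[:k]]

print(top_k("iPhone13"))  # ['mobile phone']
```

The recalled candidates would then be filtered (e.g., by crowdsourcing) before positive "isA" pairs are accepted.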
"Consequent" Relation: A Bayesian network [11] is leveraged to mine the "Consequent" relation in AlipayKG. Specifically, the "Intent" nodes of different time segments are first aggregated as the input of the Bayesian network. After learning the Bayesian network structure, relation inference [2] is performed to obtain numerous pairs of "Intent" nodes. In the end, we build the "Consequent" relation on pairs of highly correlated and order-sensitive "Intent" nodes.
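A much-simplified stand-in for this mining step is shown below: instead of a learned Bayesian network, it counts how often intent b directly follows intent a in user sequences and keeps pairs that are both frequent and strongly asymmetric (order-sensitive). The sequences and thresholds are invented.

```python
from collections import Counter

# Toy approximation of "Consequent" mining via ordered co-occurrence counts.
sequences = [
    ["take an internet taxi", "buy movie tickets", "buy snacks"],
    ["buy movie tickets", "buy snacks"],
    ["buy snacks"],
    ["buy movie tickets", "buy snacks"],
]

follows = Counter()
for seq in sequences:
    for a, b in zip(seq, seq[1:]):  # adjacent ordered pairs
        follows[(a, b)] += 1

def consequent_pairs(min_count=2, asymmetry=2.0):
    """Keep (a, b) when `b follows a` is frequent and clearly order-sensitive."""
    return [
        (a, b) for (a, b), n in follows.items()
        if n >= min_count and n >= asymmetry * follows[(b, a)]
    ]

print(consequent_pairs())  # [('buy movie tickets', 'buy snacks')]
```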
2.2 Next Intent Prediction Framework
Figure 3 illustrates the next intent prediction framework, which consists of two parts: 1) an Offline Item-Intent Understanding Model to label user-interacted items with "Intent," and 2) an Online User Next Intent Prediction Model to forecast the next intent of users with low latency and high prediction accuracy.
2.2.1 Offline Item-Intent Understanding Model. Since user intent is always hidden in the items that users interact with, it is important to establish the Item-Intent relationships, which can be regarded as a matching problem between "Item" and "Intent." For example, the "Starbucks applet" contains various "Intent" nodes such as "order coffee" and "buy coffee beans."
The overview of the item-intent understanding model is shown in Figure 3(b). Firstly, a multi-modal model [12] is adopted to unify the multi-source heterogeneous input data. Specifically, we adopt ResNet [10] to extract image features and combine them with text features. Then, the concatenated features are fed into a StructBERT [22] model to obtain the item representation. Besides, intent embeddings are generated via graph algorithms such as GCN, as shown in Figure 3(a). Finally, the predicted label scores can be obtained by matching the learned intent embeddings with the item representation.
next intent prediction model needs low latency while guaranteeing
|
418 |
+
high prediction accuracy. Hence, an efficient Transformer-based
|
419 |
+
model for long time-series forecasting named Informer [28] is
|
420 |
+
adopted in our work. In this model, the input consists of three
|
421 |
+
parts: the intent timestamp, the location timestamp and the global
|
422 |
+
timestamp (Minutes, Hours, Week, Month, Holiday etc.). Moreover,
|
423 |
+
AlipayKG is fused into the model to enhance the prediction accu-
|
424 |
+
racy, as shown in Figure 3(c). Additionally, the mined rules (such as
|
425 |
+
"take an internet taxi -consequent- buy movie tickets -consequent-
|
426 |
+
buy snacks") are applied to the post-processing stage of the model,
|
427 |
+
which further improves the interpretability of the predicted results.
|
428 |
+
2.3
|
429 |
+
Industrial Deployment of User Intent
|
430 |
+
System
|
431 |
+
In this Section, the deployment of the user intent system will be
|
432 |
+
described in the recommendation engine for Alipay. First of all, it
|
433 |
+
can be observed from Figure 4 that the recommendation engine is
|
434 |
+
composed of a recall stage and a ranking stage. In the recall stage,
|
435 |
+
a candidate item set (recall pool) is generated by merging results
|
436 |
+
from different recall methods. In the ranking stage, those candi-
|
437 |
+
dates are passed through ranking and re-ranking to output the final
|
438 |
+
recommendation list. Secondly, the proposed user intent system
|
439 |
+
will be applied to the recommendation engine in the recall and
|
440 |
+
ranking stages. As shown in Figure 4, according to history behav-
|
441 |
+
ior data and current spatial-temporal information, the next intent
|
442 |
+
prediction model can predict the user’s top-K intent candidates
|
443 |
+
with the highest probability, which helps bring the intent-based
|
444 |
+
recall method directly into the recall stage. Meanwhile, the gener-
|
445 |
+
ated Top-K intent candidates, intent embedding and item-intent
|
446 |
+
relations can contribute to better-personalized modeling of user
|
447 |
+
behaviors in the ranking stage. Finally, the whole system is in a
|
448 |
+
positive feedback loop, as shown in Figure 4. User Intent System can
|
449 |
+
predict user intent based on user-interacted data, which facilitates
|
450 |
+
better recommendations. In return, a better recommendation can
|
451 |
+
provide more real user behavior data to improve the performance
|
452 |
+
of intent understanding. In addition, the efficacy of the deployment
|
453 |
+
will be demonstrated in Section 3.3.
|
454 |
+
|
455 |
+
InformerEncoderInformerDecoderConference acronym ’XX, June 03–05, 2018, Woodstock, NY
|
456 |
+
Yacheng He, et al.
|
457 |
+
Item Pool
|
458 |
+
Recall Stage
|
459 |
+
Location-based
|
460 |
+
Method
|
461 |
+
Embedding-based
|
462 |
+
Method
|
463 |
+
Intent-based
|
464 |
+
Method
|
465 |
+
…
|
466 |
+
Recall
|
467 |
+
Pool
|
468 |
+
Ranking
|
469 |
+
Re-Ranking
|
470 |
+
Ranking Stage
|
471 |
+
User Historical
|
472 |
+
Interactions Data
|
473 |
+
User spatial-temporal
|
474 |
+
information
|
475 |
+
Online Next Intent
|
476 |
+
Prediction Model
|
477 |
+
Top-K Predicted
|
478 |
+
Intent Labels
|
479 |
+
AlipayKG
|
480 |
+
Intent Embeddings
|
481 |
+
& Item-Intent
|
482 |
+
Relations
|
483 |
+
User Features &
|
484 |
+
Item Features
|
485 |
+
User Interaction
|
486 |
+
Recommendation List
|
487 |
+
User Intent System
|
488 |
+
Alipay Homepage
|
489 |
+
Offline Item-Intent
|
490 |
+
Understanding Model
|
491 |
+
User Historical
|
492 |
+
Intent Sequence
|
493 |
+
Figure 4: Industrial deployment of User Next Intent Predic-
|
494 |
+
tion System in the Alipay recommendation engine. The rec-
|
495 |
+
ommendation engine contains two stages: the recall stage
|
496 |
+
and the ranking stage. The dataflows of recommended items
|
497 |
+
are guided by the grey arrows. Our user next intent pre-
|
498 |
+
diction system provides intent embeddings, item-intent re-
|
499 |
+
lations and top-K predicted intents based on historical in-
|
500 |
+
formation, thereby improving the performance of the re-
|
501 |
+
call and ranking stages and providing users with a more in-
|
502 |
+
demand recommendation list.
|
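The recall-stage merge described above (location-based, embedding-based, and the new intent-based method feeding one recall pool) can be sketched as a de-duplicating union. Method names and item IDs are invented.

```python
# Sketch of merging candidate sets from several recall methods into one pool.
def merge_recall(*candidate_lists):
    """Union of recall results, preserving first-seen order."""
    pool, seen = [], set()
    for candidates in candidate_lists:
        for item in candidates:
            if item not in seen:
                seen.add(item)
                pool.append(item)
    return pool

location_based = ["cinema_store", "coffee_shop"]
intent_based = ["snack_coupon", "coffee_shop"]  # driven by top-K predicted intents
print(merge_recall(location_based, intent_based))
# ['cinema_store', 'coffee_shop', 'snack_coupon']
```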
3 EVALUATION
3.1 Evaluation of AlipayKG
In AlipayKG, we have accumulated 104K+ "Intent," 31K+ "Function," 66K+ "Product," and 1.9K+ "Sememe" nodes. With the item-intent understanding model, we have collected relatively static data, such as 1,316K+ Service-Intent triples and 57,852K+ Store-Intent triples, and relatively dynamic data, such as 10K-level Coupon-Intent triples and billion-level Bill-Intent triples, etc.
|
513 |
+
3.2
|
514 |
+
Evaluation of Next Intent Prediction
|
515 |
+
Framework
|
516 |
+
In this Section, the proposed intent prediction framework will be
|
517 |
+
evaluated from the following two aspects.
|
518 |
+
1) Offline Item-Intent Understanding Model: We evaluate our
|
519 |
+
matching model on item-intent prediction with 3𝐾+ primitive in-
|
520 |
+
tent labels. The multi-modal model is increased by 1.10%, and the
|
521 |
+
label-level graph embedding is further increased by 3.08% to 90.64%
|
522 |
+
in micro-F1.
|
523 |
+
2) Online Next Intent Prediction Model: We evaluate our next-
|
524 |
+
intent prediction model on 30𝐾 sampled user historical behavior
|
525 |
+
data. To restore online scenarios, we only predict the user’s next in-
|
526 |
+
tent at a specific time and location. Experimental results show that
|
527 |
+
the intent prediction model introduced with AlipayKG achieves
|
528 |
+
53.3% and 85.3% in Recall@1 and Recall@10, achieving an improve-
|
529 |
+
ment of 3.1% and 2.2%, respectively.
|
530 |
+
3.3
|
531 |
+
Evaluation of Downstream Applications
|
532 |
+
In this Section, we further evaluate whether the user next intent
|
533 |
+
prediction system can improve the downstream tasks’ performance
|
534 |
+
at Alipay.
|
535 |
+
1) Home Recommendation: Home recommendation is one of
|
536 |
+
the most important business scenarios in which our system helps
|
537 |
+
to discover user interests in real-time, shown in Section 2.3. Online
|
538 |
+
experiments show that our system can bring a relative increase of
|
539 |
+
1.61% in CTR (Click-Through-Rate).
|
540 |
+
2) Transaction Risk Management: To create a secure payment
|
541 |
+
environment, the potential risks (e.g., embezzlement and money
|
542 |
+
laundering) of each transaction should be estimated to determine
|
543 |
+
whether it is invalid, which consumes a huge amount of computa-
|
544 |
+
tion. In order to reduce the cost, we treat users’ consumption intent
|
545 |
+
as an important transaction feature to discover low-risk transac-
|
546 |
+
tions. By leveraging the credible transaction identification based on
|
547 |
+
AlipayKG, the coverage rate of low-risk transactions is relatively
|
548 |
+
increased by 100%.
|
549 |
+
3) Alipay Search: In this scenario, the fine-grained user intent can
|
550 |
+
be captured in real-time by query understanding technology and
|
551 |
+
then used in various stages of search service (e.g., recall, relevance
|
552 |
+
and ranking). Online A/B tests demonstrate that our user intent
|
553 |
+
system can cover 90% of the user problems, and the CTR achieves
|
554 |
+
an increase of 5.8%.
|
555 |
+
4
|
556 |
+
RELATED WORK
|
557 |
+
Knowledge Graph Construction Many efforts have been made
|
558 |
+
to construct KGs, such as Freebase [4], DBpedia [1], AliCoCo [15],
|
559 |
+
AliCG [25], OpenBG [8, 18], and HowNet [16], which utilizes crowd-
|
560 |
+
sourcing and information extraction technologies [5–7, 23, 24, 26] to
|
561 |
+
describe and extract specific facts with well-defined labels. Unlike
|
562 |
+
those works, we focus on the conceptualization of intent archi-
|
563 |
+
tecture where the "Intent" nodes and relations among them are
|
564 |
+
obtained from unstructured text. Meanwhile, different from lin-
|
565 |
+
guistic KGs such as HowNet [16] that are handcrafted mainly by
|
566 |
+
humans, AlipayKG is built based on natural language processing
|
567 |
+
via human-in-the-loop. AliMe KG [13] is very similar to us, which
|
568 |
+
models user intents, item information, points of interest (POI), and
|
569 |
+
relations thereof to understand user needs. Different from their
|
570 |
+
work, AlipayKG is fit for all user-item interaction scenarios, while
|
571 |
+
AliMe KG is designed for pre-sales conversation, which is quite a
|
572 |
+
different scenario from ours. Moreover, we formally introduce a
|
573 |
+
new type of concept named "Intent" to explicitly represent various
|
574 |
+
user needs and further build a bridge between user requirements
|
575 |
+
and item supplies for semantic matching.
|
576 |
+
User Intent Prediction User intent prediction has commonly been
|
577 |
+
treated as a classification problem, for which various approaches
|
578 |
+
have been proposed, such as traditional machine learning methods
|
579 |
+
like SVM [3] and recent pre-trained language models like BERT [9].
|
580 |
+
Li et al. [14] are somewhat similar to us, which attempt to discover
|
581 |
+
intents from user consumption data in Meituan. Different from
|
582 |
+
those works, we aim to predict the next intent from the user behav-
|
583 |
+
ioral sequence in Alipay, which is more challenging and requires
|
584 |
+
to fully capture the user preferences under the current situation.
|
585 |
+
5
|
586 |
+
CONCLUSION AND FUTURE WORK
|
587 |
+
In this work, we present the user intent system and demonstrate
|
588 |
+
its effectiveness in downstream applications deployed in Alipay.
|
589 |
+
In the future, we will continually maintain the AlipayKG to cover
|
590 |
+
more business data and applications, and hopefully, it can benefit
|
591 |
+
more downstream tasks in digital life. Furthermore, we will make
|
592 |
+
efforts in the direction of interpretable reasoning for better user
|
593 |
+
intent prediction.
|
594 |
+
|
595 |
+
13:57 1
|
596 |
+
杭州
|
597 |
+
Q预约挂号
|
598 |
+
搜索
|
599 |
+
要日消费节|促消费助实体
|
600 |
+
支付红包天天摇
|
601 |
+
¥288
|
602 |
+
每日都有の
|
603 |
+
+
|
604 |
+
夏日消费节
|
605 |
+
赚现金奖励可提现
|
606 |
+
去赚红包
|
607 |
+
能量签收
|
608 |
+
绿色
|
609 |
+
能量
|
610 |
+
可提现
|
611 |
+
来收能量
|
612 |
+
立即去>
|
613 |
+
去收取>
|
614 |
+
为你精选
|
615 |
+
今日07月22.日公积金调基查询
|
616 |
+
一年一次,缴存基数调整
|
617 |
+
2022年度公积金基数调整开始啦!可能会影响你接下来
|
618 |
+
一年的每月收入哦,快来查查~
|
619 |
+
住房公积金账户信息
|
620 |
+
查余额、缴存基数、比例
|
621 |
+
有点东西
|
622 |
+
啡快
|
623 |
+
有用有趣好服务
|
624 |
+
星巴克中国
|
625 |
+
星冰乐特饮丨夏日限定
|
626 |
+
继续上滑为女足加油
|
627 |
+
@
|
628 |
+
¥
|
629 |
+
支
|
630 |
+
理财
|
631 |
+
生活
|
632 |
+
消息
|
633 |
+
我的8A Concept Knowledge Graph for User Next Intent Prediction at Alipay
|
634 |
+
Conference acronym ’XX, June 03–05, 2018, Woodstock, NY
|
REFERENCES
[1] Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. DBpedia: A Nucleus for a Web of Open Data. In The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007 (Lecture Notes in Computer Science, Vol. 4825), Karl Aberer, Key-Sun Choi, Natasha Fridman Noy, Dean Allemang, Kyung-Il Lee, Lyndon J. B. Nixon, Jennifer Golbeck, Peter Mika, Diana Maynard, Riichiro Mizoguchi, Guus Schreiber, and Philippe Cudré-Mauroux (Eds.). Springer, 722–735. https://doi.org/10.1007/978-3-540-76298-0_52
[2] Peter Battaglia, Jessica Blake Chandler Hamrick, Victor Bapst, Alvaro Sanchez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andy Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Jayne Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. 2018. Relational inductive biases, deep learning, and graph networks. arXiv (2018). https://arxiv.org/pdf/1806.01261.pdf
[3] Aditya Bhargava, Asli Celikyilmaz, Dilek Hakkani-Tür, and Ruhi Sarikaya. 2013. Easy contextual intent prediction and slot detection. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2013, Vancouver, BC, Canada, May 26-31, 2013. IEEE, 8337–8341. https://doi.org/10.1109/ICASSP.2013.6639291
[4] Kurt D. Bollacker, Colin Evans, Praveen K. Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2008, Vancouver, BC, Canada, June 10-12, 2008, Jason Tsong-Li Wang (Ed.). ACM, 1247–1250. https://doi.org/10.1145/1376616.1376746
[5] Xiang Chen, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, and Ningyu Zhang. 2022. LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting. In Proceedings of
|
668 |
+
the 29th International Conference on Computational Linguistics, COLING 2022,
|
669 |
+
Gyeongju, Republic of Korea, October 12-17, 2022, Nicoletta Calzolari, Chu-Ren
|
670 |
+
Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo
|
671 |
+
Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio,
|
672 |
+
Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee,
|
673 |
+
Enrico Santus, Francis Bond, and Seung-Hoon Na (Eds.). International Committee
|
674 |
+
on Computational Linguistics, 2374–2387. https://aclanthology.org/2022.coling-
|
675 |
+
1.209
|
676 |
+
[6] Xiang Chen, Lei Li, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi
|
677 |
+
Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. Decoupling Knowledge from
|
678 |
+
Memorization: Retrieval-augmented Prompt Learning. CoRR abs/2205.14704
|
679 |
+
(2022). https://doi.org/10.48550/arXiv.2205.14704 arXiv:2205.14704
|
680 |
+
[7] Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan,
|
681 |
+
Fei Huang, Luo Si, and Huajun Chen. 2022. KnowPrompt: Knowledge-aware
|
682 |
+
Prompt-tuning with Synergistic Optimization for Relation Extraction. In WWW
|
683 |
+
’22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 - 29, 2022,
|
684 |
+
Frédérique Laforest, Raphaël Troncy, Elena Simperl, Deepak Agarwal, Aristides
|
685 |
+
Gionis, Ivan Herman, and Lionel Médini (Eds.). ACM, 2778–2788. https://doi.
|
686 |
+
org/10.1145/3485447.3511998
|
687 |
+
[8] Shumin Deng, Chengming Wang, Zhoubo Li, Ningyu Zhang, Zelin Dai, Hehong
|
688 |
+
Chen, Feiyu Xiong, Ming Yan, Qiang Chen, Mosha Chen, Jiaoyan Chen, Jeff Z. Pan,
|
689 |
+
Bryan Hooi, and Huajun Chen. 2022. Construction and Applications of Billion-
|
690 |
+
Scale Pre-trained Multimodal Business Knowledge Graph. CoRR abs/2209.15214
|
691 |
+
(2022). https://doi.org/10.48550/arXiv.2209.15214 arXiv:2209.15214
|
692 |
+
[9] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT:
|
693 |
+
Pre-training of Deep Bidirectional Transformers for Language Understanding. In
|
694 |
+
Proceedings of the 2019 Conference of the North American Chapter of the Associa-
|
695 |
+
tion for Computational Linguistics: Human Language Technologies, NAACL-HLT
|
696 |
+
2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), Jill
|
697 |
+
Burstein, Christy Doran, and Thamar Solorio (Eds.). Association for Computa-
|
698 |
+
tional Linguistics, 4171–4186. https://doi.org/10.18653/v1/n19-1423
|
699 |
+
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual
|
700 |
+
Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision
|
701 |
+
and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. IEEE
|
702 |
+
Computer Society, 770–778. https://doi.org/10.1109/CVPR.2016.90
|
703 |
+
[11] Finn V Jensen and Thomas Dyhre Nielsen. 2007. Bayesian networks and decision
|
704 |
+
graphs. Vol. 2. Springer.
|
705 |
+
[12] Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, and Davide Testuggine. 2019. Su-
|
706 |
+
pervised Multimodal Bitransformers for Classifying Images and Text. In Visually
|
707 |
+
Grounded Interaction and Language (ViGIL), NeurIPS 2019 Workshop, Vancouver,
|
708 |
+
Canada, December 13, 2019. https://vigilworkshop.github.io/static/papers/40.pdf
|
709 |
+
[13] Feng-Lin Li, Hehong Chen, Guohai Xu, Tian Qiu, Feng Ji, Ji Zhang, and Haiqing
|
710 |
+
Chen. 2020. AliMe KG: Domain Knowledge Graph Construction and Application
|
711 |
+
in E-commerce. CoRR abs/2009.11684 (2020). arXiv:2009.11684 https://arxiv.org/
|
712 |
+
abs/2009.11684
|
713 |
+
[14] Yinfeng Li, Chen Gao, Xiaoyi Du, Huazhou Wei, Hengliang Luo, Depeng Jin, and
|
714 |
+
Yong Li. 2022. Automatically Discovering User Consumption Intents in Meituan.
|
715 |
+
In KDD ’22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data
|
716 |
+
Mining, Washington, DC, USA, August 14 - 18, 2022, Aidong Zhang and Huzefa
|
717 |
+
Rangwala (Eds.). ACM, 3259–3269. https://doi.org/10.1145/3534678.3539122
|
718 |
+
[15] Xusheng Luo, Luxin Liu, Yonghua Yang, Le Bo, Yuanpeng Cao, Jinghang Wu,
|
719 |
+
Qiang Li, Keping Yang, and Kenny Q. Zhu. 2020. AliCoCo: Alibaba E-commerce
|
720 |
+
Cognitive Concept Net. In Proceedings of the 2020 International Conference on
|
721 |
+
Management of Data, SIGMOD Conference 2020, online conference [Portland, OR,
|
722 |
+
USA], June 14-19, 2020, David Maier, Rachel Pottinger, AnHai Doan, Wang-Chiew
|
723 |
+
Tan, Abdussalam Alawini, and Hung Q. Ngo (Eds.). ACM, 313–327.
|
724 |
+
https:
|
725 |
+
//doi.org/10.1145/3318464.3386132
|
726 |
+
[16] Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Qiang Dong, Maosong Sun, and Zhen-
|
727 |
+
dong Dong. 2019. OpenHowNet: An Open Sememe-based Lexical Knowledge
|
728 |
+
Base. CoRR abs/1901.09957 (2019). arXiv:1901.09957 http://arxiv.org/abs/1901.
|
729 |
+
09957
|
730 |
+
[17] Chen Qu, Liu Yang, W Bruce Croft, Yongfeng Zhang, Johanne R Trippas, and
|
731 |
+
Minghui Qiu. 2019. User intent prediction in information-seeking conversations.
|
732 |
+
In Proceedings of the 2019 Conference on Human Information Interaction and
|
733 |
+
Retrieval. 25–33.
|
734 |
+
[18] Yincen Qu, Ningyu Zhang, Hui Chen, Zelin Dai, Zezhong Xu, Cheng-
|
735 |
+
ming Wang, Xiaoyu Wang, Qiang Chen, and Huajun Chen. 2022.
|
736 |
+
Com-
|
737 |
+
monsense Knowledge Salience Evaluation with a Benchmark Dataset in E-
|
738 |
+
commerce. CoRR abs/2205.10843 (2022).
|
739 |
+
https://doi.org/10.48550/arXiv.2205.
|
740 |
+
10843 arXiv:2205.10843
|
741 |
+
[19] Tal Ridnik, Emanuel Ben-Baruch, Nadav Zamir, Asaf Noy, Itamar Friedman,
|
742 |
+
Matan Protter, and Lihi Zelnik-Manor. 2021. Asymmetric Loss for Multi-Label
|
743 |
+
Classification. In Proceedings of the IEEE/CVF International Conference on Com-
|
744 |
+
puter Vision (ICCV). 82–91.
|
745 |
+
[20] Xiaobin Tang, Jing Zhang, Bo Chen, Yang Yang, Hong Chen, and Cuiping Li. 2020.
|
746 |
+
BERT-INT:A BERT-based Interaction Model For Knowledge Graph Alignment.
|
747 |
+
In Proceedings of the Twenty-Ninth International Joint Conference on Artificial
|
748 |
+
Intelligence, IJCAI-20, Christian Bessiere (Ed.). International Joint Conferences
|
749 |
+
on Artificial Intelligence Organization, 3174–3180. https://doi.org/10.24963/ijcai.
|
750 |
+
2020/439 Main track.
|
751 |
+
[21] Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiaojun Quan, Tong Zhang, and
|
752 |
+
Yonggang Wang. 2020. Joint Chinese Word Segmentation and Part-of-speech
|
753 |
+
Tagging via Two-way Attentions of Auto-analyzed Knowledge. In ACL. 8286–
|
754 |
+
8296. https://doi.org/10.18653/v1/2020.acl-main.735
|
755 |
+
[22] Wei Wang, Bin Bi, Ming Yan, Chen Wu, Jiangnan Xia, Zuyi Bao, Liwei Peng,
|
756 |
+
and Luo Si. 2020. StructBERT: Incorporating Language Structures into Pre-
|
757 |
+
training for Deep Language Understanding. In 8th International Conference on
|
758 |
+
Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
|
759 |
+
OpenReview.net. https://openreview.net/forum?id=BJgQ4lSFPH
|
760 |
+
[23] Hongbin Ye, Ningyu Zhang, Hui Chen, and Huajun Chen. 2022. Generative
|
761 |
+
Knowledge Graph Construction: A Review. CoRR abs/2210.12714 (2022). https:
|
762 |
+
//doi.org/10.48550/arXiv.2210.12714 arXiv:2210.12714
|
763 |
+
[24] Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen,
|
764 |
+
Fei Huang, Luo Si, and Huajun Chen. 2021. Document-level Relation Extraction
|
765 |
+
as Semantic Segmentation. In Proceedings of the Thirtieth International Joint
|
766 |
+
Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada,
|
767 |
+
19-27 August 2021, Zhi-Hua Zhou (Ed.). ijcai.org, 3999–4006. https://doi.org/10.
|
768 |
+
24963/ijcai.2021/551
|
769 |
+
[25] Ningyu Zhang, Qianghuai Jia, Shumin Deng, Xiang Chen, Hongbin Ye, Hui
|
770 |
+
Chen, Huaixiao Tou, Gang Huang, Zhao Wang, Nengwei Hua, and Huajun Chen.
|
771 |
+
2021. AliCG: Fine-grained and Evolvable Conceptual Graph Construction for
|
772 |
+
Semantic Search at Alibaba. In KDD ’21: The 27th ACM SIGKDD Conference on
|
773 |
+
Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18,
|
774 |
+
2021, Feida Zhu, Beng Chin Ooi, and Chunyan Miao (Eds.). ACM, 3895–3905.
|
775 |
+
https://doi.org/10.1145/3447548.3467057
|
776 |
+
[26] Ningyu Zhang, Xin Xu, Liankuan Tao, Haiyang Yu, Hongbin Ye, Shuofei Qiao,
|
777 |
+
Xin Xie, Xiang Chen, Zhoubo Li, Lei Li, et al. 2022. DeepKE: A Deep Learning
|
778 |
+
Based Knowledge Extraction Toolkit for Knowledge Base Population. arXiv
|
779 |
+
preprint arXiv:2201.03335 (2022).
|
780 |
+
[27] Yuhao Zhang, Hongji Zhu, Yongliang Wang, Nan Xu, Xiaobo Li, and Binqiang
|
781 |
+
Zhao. 2022. A Contrastive Framework for Learning Sentence Representations
|
782 |
+
from Pairwise and Triple-wise Perspective in Angular Space. In Proceedings of
|
783 |
+
the 60th Annual Meeting of the Association for Computational Linguistics (Volume
|
784 |
+
1: Long Papers). Association for Computational Linguistics, Dublin, Ireland, 4892–
|
785 |
+
4903. https://doi.org/10.18653/v1/2022.acl-long.336
|
786 |
+
[28] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong,
|
787 |
+
and Wancai Zhang. 2020. Informer: Beyond Efficient Transformer for Long
|
788 |
+
Sequence Time-Series Forecasting. CoRR abs/2012.07436 (2020). arXiv:2012.07436
|
789 |
+
https://arxiv.org/abs/2012.07436
|
790 |
+
|
0NAyT4oBgHgl3EQfoPhb/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

0NAzT4oBgHgl3EQfefyv/content/tmp_files/2301.01438v1.pdf.txt ADDED
@@ -0,0 +1,1782 @@
arXiv:2301.01438v1 [math.AP] 4 Jan 2023

NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY IN THE HEISENBERG–WEYL AND GELL-MANN BASES WITH APPLICATIONS TO FAST LEARNING

JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG

Abstract. Previous noncommutative Bohnenblust–Hille inequalities addressed operator decompositions in the tensor product space SU(2)⊗n [HCP22, VZ22]. Here we prove the inequalities for product spaces of arbitrary local dimension, e.g., SU(N)⊗n or n-fold tensor products of N × N Hermitian matrices. We treat operator decompositions in both the Gell-Mann and Heisenberg–Weyl bases by reducing to commutative cases. The latter basis is reduced to a scalar Bohnenblust–Hille inequality for cyclic groups which we also prove.

Applications to quantum junta theorems and learning qudit quantum observables in the Probably Approximately Correct framework are also listed.
Contents

Notations 2
1. Introduction 2
1.1. Gell-Mann matrix basis 4
1.2. Heisenberg–Weyl matrix basis 5
2. Applications 8
2.1. Quantum k-juntas for qudits 8
2.2. Learning quantum observables of low degrees 9
3. Main results for the Gell-Mann matrix basis 10
4. Main results for Heisenberg–Weyl matrix basis 13
5. Bohnenblust–Hille inequalities for cyclic groups: the difficulty 17
6. Bohnenblust–Hille inequalities for cyclic groups: a partial remedy 19
6.1. Constant cannot be 1 19
6.2. A partial solution 21
References 24

2010 Mathematics Subject Classification. 46B10, 46B09; 46B07; 60E15.
Key words and phrases. Bohnenblust–Hille inequality, Gell-Mann matrix basis, Heisenberg–Weyl basis, qubits, qudits, fast learning, k-juntas, PAC, probably approximately correct learning of big matrices.
J.S. is supported by Chris Umans’ Simons Investigator Grant. The research of A.V. is supported by NSF DMS-1900286, DMS-2154402 and by the Hausdorff Center for Mathematics. H.Z. is supported by the Lise Meitner fellowship, Austrian Science Fund (FWF) M3337. This work is partially supported by NSF DMS-1929284 while all three authors were in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Harmonic Analysis and Convexity program.
Notations

Let C and R be the complex numbers and real numbers, respectively. Let D = {z ∈ C : |z| < 1} be the open unit disc in the complex plane. Fix an integer N ≥ 2. Let ω := e^{2πi/N} denote a primitive root of unity of order N. Let Z_N := {0, 1, . . . , N − 1} be the additive cyclic group of order N and Ω_N := {1, ω, . . . , ω^{N−1}} the multiplicative cyclic group of order N. We also need Ω̃_N := conv(Ω_N), a regular polygon inscribed in the circle T. We use M_n(C) to denote the n-by-n complex matrix algebra and 1 the identity matrix. Denote by {e_j : 1 ≤ j ≤ N} the standard basis of C^N. We use ⟨·, ·⟩ to denote the inner product on C^n that is linear in the second argument. For two vectors ξ, η ∈ C^n, we use |ξ⟩⟨η| to denote the linear operator such that |ξ⟩⟨η| e_j = ⟨η, e_j⟩ · |ξ⟩.
1. Introduction

Let
$$f(z) = \sum_{\alpha} c_\alpha z^\alpha = \sum_{\alpha} c_\alpha z_1^{\alpha_1} \cdots z_n^{\alpha_n},$$
where α = (α_1, . . . , α_n) are vectors of non-negative integers and the total degree of the polynomial f is d = max_α (α_1 + · · · + α_n). Here z can range over all complex vectors in T^n or over all sequences of ±1 in the Boolean cube {−1, 1}^n. Bohnenblust–Hille type inequalities are the following:
$$\Big(\sum_{\alpha} |c_\alpha|^{\frac{2d}{d+1}}\Big)^{\frac{d+1}{2d}} \le C(d) \sup_z |f(z)|. \tag{1.1}$$
The supremum is taken either over the torus T^n or over the Boolean cube {−1, 1}^n. In both cases this inequality has been proven with a constant C(d) that is independent of the dimension n and sub-exponential in the degree d. More precisely, denote by $\mathrm{BH}^{\le d}_{\mathbb{T}}$ and $\mathrm{BH}^{\le d}_{\{\pm 1\}}$ the best constants in the Bohnenblust–Hille inequalities (1.1) for degree-d polynomials on T^n and {−1, 1}^n, respectively. Then both $\mathrm{BH}^{\le d}_{\mathbb{T}}$ and $\mathrm{BH}^{\le d}_{\{\pm 1\}}$ are bounded from above by $e^{c\sqrt{d \log d}}$ for some universal c > 0 [BPS, DMP].
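To make (1.1) concrete on the Boolean cube, here is a small sketch (assuming Python with numpy; the helper name and example polynomial are ours) that computes the Fourier coefficients of a degree-2 multilinear polynomial by brute force and compares the two sides of the inequality:

```python
import itertools
import numpy as np

def fourier_coefficients(f, n):
    """Coefficients c_S = 2^{-n} sum_x f(x) prod_{i in S} x_i, over all S."""
    cube = list(itertools.product([-1, 1], repeat=n))
    coeffs = {}
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            c = sum(f(x) * np.prod([x[i] for i in S]) for x in cube) / 2**n
            coeffs[S] = c
    return coeffs

# A degree-2 multilinear polynomial on {-1, 1}^3.
f = lambda x: 0.5 * x[0] * x[1] - 0.25 * x[1] * x[2] + 0.1 * x[2]
n, d = 3, 2

coeffs = fourier_coefficients(f, n)
p = 2 * d / (d + 1)                    # critical exponent 2d/(d+1)
lhs = sum(abs(c)**p for c in coeffs.values())**(1 / p)
sup = max(abs(f(x)) for x in itertools.product([-1, 1], repeat=n))
# BH asserts lhs <= C(d) * sup with C(d) independent of n.
```

For this instance one even has lhs < sup, but in general the inequality is only claimed up to the dimension-free constant C(d).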
One of the key features of the inequality (1.1) is the dimension-freeness of C(d). This, together with its sub-exponential growth in d, plays an important role in resolving some open problems in functional analysis and harmonic analysis [DGMS, BPS, DFOOS]. The optimal dependence of $\mathrm{BH}^{\le d}_{\mathbb{T}}$ and $\mathrm{BH}^{\le d}_{\{\pm 1\}}$ on the degree d remains open. Important questions in quantum computing would be resolved if one could improve the constant C(d) to d^C; see [AA].
The Bohnenblust–Hille inequalities for the Boolean cubes {−1, 1}^n have found great applications in learning bounded low-degree polynomials on Boolean cubes [EI22]. Motivated by learning quantum observables, a quantum counterpart of the Bohnenblust–Hille inequality for Boolean cubes was recently conjectured in [RWZ22]. In the quantum setting, functions on Boolean cubes {−1, 1}^n are replaced by 2^n-by-2^n matrices. More precisely, suppose σ_0 = 1 is the 2-by-2 identity matrix and σ_1, σ_2, σ_3 are the Pauli matrices:
$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
The degree-d polynomial Pauli observables are matrices A ∈ M_2(C)^{⊗n} of the form
$$A = \sum_{s \in \{0,1,2,3\}^n : |s| \le d} \widehat{A}_s\, \sigma_{s_1} \otimes \cdots \otimes \sigma_{s_n},$$
where $\widehat{A}_s \in \mathbb{C}$ is the Fourier coefficient, and for s = (s_1, . . . , s_n) ∈ {0, 1, 2, 3}^n, |s| is the number of nonzero s_j’s. Then the Bohnenblust–Hille inequality for Pauli observables reads: for all n ≥ 1 and A ∈ M_2(C)^{⊗n} of degree d,
$$\Big(\sum_{s : |s| \le d} |\widehat{A}_s|^{\frac{2d}{d+1}}\Big)^{\frac{d+1}{2d}} \le C(d)\, \|A\|. \tag{1.2}$$
Here and in what follows, ∥A∥ always denotes the operator norm of A. The inequality (1.2) was conjectured in [RWZ22] and was resolved in [HCP22] with C(d) = d^{Cd} for some universal C > 0. A different proof was given in [VZ22] with a constant of exponential growth, i.e. C(d) = C^d for some universal C > 0. Although it is still not clear if one may match the sub-exponential growth of the classical setting, the quantum Bohnenblust–Hille inequality (1.2) with dimension-free C(d) < ∞ already has a number of interesting applications. For example, it enables the learning of low-degree Pauli observables using a logarithmic number of random queries [VZ22], similar to the classical setting [EI22]. This in turn enables learning more general quantum dynamics [HCP22].
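As a concrete illustration of the Pauli expansion above, the following sketch (assuming Python with numpy; the helper name is ours) computes the coefficients Â_s = 2^{−n} tr(σ_s A) of a small observable on n = 2 qubits and checks that the expansion reconstructs A:

```python
import itertools
import numpy as np

# Pauli basis: sigma_0 = identity, sigma_1..3 as in the text.
PAULI = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def pauli_coeffs(A, n):
    """Fourier coefficients A_hat[s] = 2^{-n} tr(sigma_s^* A)."""
    coeffs = {}
    for s in itertools.product(range(4), repeat=n):
        sigma_s = PAULI[s[0]]
        for k in s[1:]:
            sigma_s = np.kron(sigma_s, PAULI[k])
        coeffs[s] = np.trace(sigma_s.conj().T @ A) / 2**n
    return coeffs

n = 2
rng = np.random.default_rng(0)
M = rng.standard_normal((2**n, 2**n)) + 1j * rng.standard_normal((2**n, 2**n))
A = (M + M.conj().T) / 2  # a Hermitian observable

coeffs = pauli_coeffs(A, n)
# Reconstruct A from its Fourier expansion and compare.
B = np.zeros_like(A)
for s, c in coeffs.items():
    sigma_s = PAULI[s[0]]
    for k in s[1:]:
        sigma_s = np.kron(sigma_s, PAULI[k])
    B += c * sigma_s
assert np.allclose(A, B)
```

Since the σ_s are Hermitian, the coefficients of a Hermitian observable come out real, which the test below also checks.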
However, in many contexts it is desirable to consider quantum observables decomposed in the product space M_N(C)^{⊗n} for N > 2, such as when studying observables of multilevel quantum systems (termed qudits—though given our use of N for local dimension, the term “quNit” might be more apt). For example, when learning an unknown qudit observable, it can be physically important that sample states be drawn from the native dimension of the system, rather than from some larger ambient Hilbert space. Having these inequalities in new bases also greatly expands the distributions under which a PAC-learning theorem is available for arbitrary quantum processes.

Of particular interest to us are the Gell-Mann (GM) observables and Heisenberg–Weyl (HW) observables, both of which (essentially) reduce to Pauli observables when N = 2. In this paper we prove noncommutative Bohnenblust–Hille inequalities in these two settings following the approach of [VZ22], where the proof of the quantum Bohnenblust–Hille inequality (1.2) is reduced to the classical Bohnenblust–Hille inequalities (1.1) for Boolean cubes. It turns out that the GM case can again be reduced to the case of classical Boolean cubes {−1, 1}^{c(N)n}, while the HW case (under certain conditions) can be reduced to the case of cyclic groups (Ω_N)^{d(N)n}, N ≥ 2. The Bohnenblust–Hille inequalities for cyclic groups (Ω_N)^n, N ≥ 3, were not known before, however, so we also initiate their study here. The constants c(N), d(N) are specified below.
1.1. Gell-Mann matrix basis. Let N ≥ 1 and put E_{jk} = |e_j⟩⟨e_k|, 1 ≤ j, k ≤ N. The generalized Gell-Mann matrices are a basis of M_N(C) and are comprised of the identity matrix 1 along with:

symmetric: $A_{jk} = \sqrt{\tfrac{N}{2}}\,\big(E_{jk} + E_{kj}\big)$ for 1 ≤ j < k ≤ N;
antisymmetric: $B_{jk} = \sqrt{\tfrac{N}{2}}\,\big({-i}E_{jk} + iE_{kj}\big)$ for 1 ≤ j < k ≤ N;
diagonal: $C_j = \Gamma_j\big(\sum_{k=1}^{j} E_{kk} - jE_{j+1,j+1}\big)$ for 1 ≤ j ≤ N − 1,

where $\Gamma_j := \sqrt{\tfrac{N}{j^2+j}}$. We denote
$$\mathrm{GM}(N) := \{1, A_{jk}, B_{jk}, C_m\}_{1 \le j < k \le N,\ 1 \le m \le N-1}.$$
These are self-adjoint matrices and are orthonormal with respect to the inner product induced by the normalized trace (1/N) tr. If N = 2, they are exactly the Pauli matrices. We refer the reader to [BK] for more details.

Here $\sqrt{N}$ is a normalization factor that guarantees that those matrices are orthonormal with respect to the inner product
$$\langle A, B \rangle := \mathrm{tr}_N(A^* B),$$
where tr_N := (1/N) tr is the normalized trace.
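The definitions above can be checked numerically. The following sketch (assuming Python with numpy; helper names are ours, and the diagonal matrices are written 0-indexed) builds the generalized Gell-Mann basis and verifies self-adjointness and orthonormality under tr_N:

```python
import itertools
import numpy as np

def gell_mann_basis(N):
    """Generalized Gell-Mann basis of M_N(C): identity, symmetric A_jk,
    antisymmetric B_jk, and diagonal C_j, as defined in the text."""
    E = lambda j, k: np.eye(N)[:, [j]] @ np.eye(N)[[k], :]  # E_{jk} = |e_j><e_k|
    basis = [np.eye(N, dtype=complex)]
    for j, k in itertools.combinations(range(N), 2):
        basis.append(np.sqrt(N / 2) * (E(j, k) + E(k, j)))             # A_jk
        basis.append(np.sqrt(N / 2) * (-1j * E(j, k) + 1j * E(k, j)))  # B_jk
    for j in range(1, N):
        gamma = np.sqrt(N / (j**2 + j))                                # Gamma_j
        D = sum(E(k, k) for k in range(j)) - j * E(j, j)               # C_j
        basis.append(gamma * D.astype(complex))
    return basis

N = 3
B = gell_mann_basis(N)
assert len(B) == N * N
for M in B:                      # self-adjointness
    assert np.allclose(M, M.conj().T)
# Orthonormality under <P, Q> = tr_N(P^* Q) = (1/N) tr(P^* Q):
G = np.array([[np.trace(P.conj().T @ Q) / N for Q in B] for P in B])
assert np.allclose(G, np.eye(N * N))
```

For N = 2 this reproduces (up to ordering) the Pauli basis of the previous section.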
Now a GM observable will be an expression of the type
$$A := \sum_{\alpha} \widehat{A}_\alpha\, M_{\alpha_1} \otimes \cdots \otimes M_{\alpha_n}, \qquad \alpha = (\alpha_1, \ldots, \alpha_n),$$
where each M_{α_i} is a matrix from GM(N). It is said to be of degree d if for each α at most d of the {M_{α_i}}_{1 ≤ i ≤ n} are not the identity matrix. Such an aggregate A we call a GM observable of degree d. For the Fourier coefficients $\widehat{A} = (\widehat{A}_\alpha)_\alpha$, we write
$$\|\widehat{A}\|_p := \Big(\sum_{\alpha} |\widehat{A}_\alpha|^p\Big)^{1/p}, \qquad 1 \le p < \infty.$$
In this setup, our main result is a family of Bohnenblust–Hille inequalities for GM observables of degree d:
Theorem 1.1. Fix any N ≥ 2 and d ≥ 1. There exists C(d, N) > 0 such that for all n ≥ 1 and every GM observable A ∈ M_N(C)^{⊗n} of degree d, we have
$$\|\widehat{A}\|_{\frac{2d}{d+1}} \le C(d, N)\, \|A\|. \tag{1.3}$$
Moreover, we have $C(d, N) \le \big(\tfrac{3}{2}(N^2 - N)\big)^d\, \mathrm{BH}^{\le d}_{\{\pm 1\}}$.

Notice that when N = 2 (the Pauli case of [VZ22]) this upper bound on C(d, N) becomes $3^d\, \mathrm{BH}^{\le d}_{\{\pm 1\}}$.
The proof of this theorem follows the approach of [VZ22]: we reduce the problem to the Bohnenblust–Hille inequalities (1.1) for the Boolean cubes {−1, 1}^{c(N)n}. See Section 3 for details.
1.2. Heisenberg–Weyl matrix basis. Fix N ≥ 2.
|
301 |
+
Recall that ω = e
|
302 |
+
2πi
|
303 |
+
N
|
304 |
+
and
|
305 |
+
{ej : j ∈ ZN} = {ej : 1 ≤ j ≤ N} is the standard basis of CN. The “shift” operator
|
306 |
+
X and “phase” operator Z are defined via
|
307 |
+
Xej = ej+1,
|
308 |
+
Zej = ωjej,
|
309 |
+
for all
|
310 |
+
j ∈ ZN.
|
311 |
+
Note that XN = ZN = 1. See more in [AEHK]. In the following, everything is
|
312 |
+
mod N.
|
313 |
+
Below we consider the Heisenberg–Weyl collection of N × N matrices:
HW(N) := {X^ℓ Z^m}_{ℓ,m ∈ Z_N}.
These are unitary matrices and form a basis of M_N(C) (see Lemma 4.1). Moreover, they are orthonormal with respect to the normalized trace tr_N.
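Orthonormality with respect to tr_N (i.e. ⟨A, B⟩ = tr(A*B)/N) can likewise be checked numerically; in this sketch with N = 3, the Gram matrix of the N² words X^ℓZ^m is the identity:

```python
import numpy as np
from itertools import product

N = 3
omega = np.exp(2j * np.pi / N)
X = np.roll(np.eye(N), 1, axis=0)   # shift
Z = np.diag(omega ** np.arange(N))  # phase

def hw(l, m):
    """The Heisenberg-Weyl word X^l Z^m."""
    return np.linalg.matrix_power(X, l) @ np.linalg.matrix_power(Z, m)

words = [hw(l, m) for l, m in product(range(N), repeat=2)]
# Gram matrix in the normalized-trace inner product
gram = np.array([[np.trace(A.conj().T @ B) / N for B in words] for A in words])
assert np.allclose(gram, np.eye(N * N))  # HW(N) is an orthonormal basis of M_N(C)
```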
JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG

Fix n ≥ 1. Any HW observable A ∈ M_N(C)^{⊗n} has a unique Fourier expansion with respect to HW(N):
A = Σ_{⃗ℓ,⃗m ∈ Z_N^n} Â(⃗ℓ, ⃗m) X^{ℓ1}Z^{m1} ⊗ · · · ⊗ X^{ℓn}Z^{mn},
where Â(⃗ℓ, ⃗m) ∈ C is the Fourier coefficient at (⃗ℓ, ⃗m). We say that A is of degree-d if Â(⃗ℓ, ⃗m) = 0 whenever
|(⃗ℓ, ⃗m)| := Σ_{j=1}^n (ℓ_j + m_j) > d.
Here, 0 ≤ ℓ_j, m_j ≤ N − 1 and we do not reduce mod N freely. We denote by Â the sequence of Fourier coefficients of A, and write
∥Â∥_p := (Σ_{⃗ℓ,⃗m ∈ Z_N^n} |Â(⃗ℓ, ⃗m)|^p)^{1/p},   1 ≤ p < ∞.
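Since HW(N) is orthonormal with respect to tr_N, the coefficients can be read off by tracing against the basis: Â(ℓ, m) = tr_N((X^ℓZ^m)* A). A quick numerical sketch for a single qudit (n = 1; the random matrix A is only an illustration):

```python
import numpy as np
from itertools import product

N = 3
omega = np.exp(2j * np.pi / N)
X = np.roll(np.eye(N), 1, axis=0)
Z = np.diag(omega ** np.arange(N))
hw = lambda l, m: np.linalg.matrix_power(X, l) @ np.linalg.matrix_power(Z, m)

rng = np.random.default_rng(0)
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# Fourier coefficients: A_hat(l, m) = tr((X^l Z^m)^* A) / N
coeffs = {lm: np.trace(hw(*lm).conj().T @ A) / N
          for lm in product(range(N), repeat=2)}
recon = sum(c * hw(*lm) for lm, c in coeffs.items())
assert np.allclose(recon, A)  # the expansion recovers A exactly
```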
Now we are ready to state the Bohnenblust–Hille inequalities for the Heisenberg–Weyl basis. However, due to some technical difficulties, we are not able to prove them in full generality. Moreover, unlike in the Gell-Mann basis setting, we shall see that the problem for the Heisenberg–Weyl basis reduces to the Bohnenblust–Hille inequalities for the cyclic groups (Ω_N)^n, instead of the Boolean cubes (Ω_2)^n = {−1, 1}^n. One may already see the connection to Ω_N (instead of Ω_2) by considering X^ℓ, ℓ ∈ Z_N, only. However, the Bohnenblust–Hille inequalities for the cyclic groups (Ω_N)^n were not known before. Recall that in the classical setting, Bohnenblust–Hille inequalities have been known for the groups (Ω_2)^n = {−1, 1}^n and (Ω_∞)^n = T^n, and their analogs for the cyclic groups (Ω_N)^n, N ≥ 3, can be understood as results in between.
Our main result in this part consists of a partial solution to the Bohnenblust–Hille inequalities for the cyclic groups (Ω_N)^n, and a family of quantum analogs for the Heisenberg–Weyl basis. For this, recall that any polynomial f : (Ω_N)^n → C has the Fourier expansion
f(z) = Σ_α f̂(α) z_1^{α1} · · · z_n^{αn},   z = (z_1, . . . , z_n) ∈ (Ω_N)^n,   (1.4)
where α = (α_1, . . . , α_n) ∈ Z_N^n. It is said to be of degree-d if f̂(α) = 0 whenever |α| := Σ_{j=1}^n α_j > d. As usual, we denote by ∥f̂∥_p the ℓ_p-norm of the Fourier coefficients f̂(α).
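For n = 1 these Fourier coefficients are recovered from the values of f on Ω_N by the inverse discrete Fourier transform, f̂(α) = (1/N) Σ_j f(ω^j) ω^{−jα}; a small sketch (the coefficient vector below is an arbitrary illustration):

```python
import numpy as np

N = 4
omega = np.exp(2j * np.pi / N)
points = omega ** np.arange(N)                  # the group Omega_N

coeffs = np.array([1.0, -0.5, 0.25, 0.0])       # f_hat(alpha), illustration only
values = np.array([sum(c * z**a for a, c in enumerate(coeffs)) for z in points])

# Inverse DFT: f_hat(alpha) = (1/N) sum_j f(omega^j) omega^{-j alpha}
recovered = np.array([sum(values[j] * omega ** (-j * a) for j in range(N)) / N
                      for a in range(N)])
assert np.allclose(recovered, coeffs)
```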
NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY

It turns out that the Bohnenblust–Hille inequalities for the cyclic groups (Ω_N)^n, N ≥ 3, are far from trivial. Mimicking the classical proof for {−1, 1}^n and T^n, one may arrive at the following:
Theorem 1.2. Fix N ≥ 2 and d ≥ 1. There exists C(d) > 0 such that for any polynomial f on (Ω_N)^n of degree-d, we have
∥f̂∥_{2d/(d+1)} ≤ C(d) sup_{z ∈ (Ω̂_N)^n} |f(z)|,   (1.5)
where f on the right-hand side is the extension of f to (Ω̂_N)^n via the same formula (1.4). Moreover, C(d) ≤ e^{c√(d log d)} for some universal c > 0.
The sketch of the proof of Theorem 1.2 will be presented in Section 5. The full proof will be the goal of our subsequent article.
Recall that Ω̂_N is the convex hull of Ω_N. On the right-hand side of (1.5), the sup over (Ω̂_N)^n can be replaced by (Ω_N)^n when N = 2 (since f in this case is always multi-affine, and therefore convex in each variable) or N = ∞, i.e. Ω_N = T (by the maximum modulus principle). For general N ≥ 3 this is not obvious. This brings forward an interesting complex-analysis question for the commutative inequality (1.1) on (Ω_N)^n. This is one new difficulty, which will be discussed in Section 6. We have a partial solution, the following theorem. We need to restrict to polynomials in which each variable has degree at most (N−1)/2. For notational convenience, we consider odd N only, say we replace N with 2N − 1.
Theorem 1.3. Let N ≥ 2. Suppose that
f(z) := Σ_α a_α z^α,   z = (z_1, . . . , z_n) ∈ C^n,
is any analytic polynomial of n complex variables of degree at most d and such that in each variable z_i its degree is at most N − 1. Then
(Σ_α |a_α|^{2d/(d+1)})^{(d+1)/(2d)} ≤ C′(d, N) sup_{z ∈ (Ω_{2N−1})^n} |f(z)|,
where C′(d, N) ≤ c_N^d C(d) with some constant c_N > 0 and C(d) given in (1.5).
Let us write the Fourier expansion of a matrix A:
A = Σ_{⃗ℓ,⃗m ∈ Z_N^n} Â(⃗ℓ, ⃗m) X^{ℓ1}Z^{m1} ⊗ · · · ⊗ X^{ℓn}Z^{mn}.   (1.6)
Our main result for the Heisenberg–Weyl basis is the following quantum analog of the Bohnenblust–Hille inequality:
Theorem 1.4. Fix a prime number N ≥ 2 and suppose d ≥ 1. If the Bohnenblust–Hille inequality holds for degree-d polynomials on the cyclic groups (Ω_N)^n, n ≥ 1, with the best constant BH^{≤d}_{Ω_N} < ∞ independent of n, then the Bohnenblust–Hille inequalities hold for the Heisenberg–Weyl basis: for any n ≥ 1 and any A ∈ M_N(C)^{⊗n} of degree-d, we have
∥Â∥_{2d/(d+1)} ≤ C(d, N) ∥A∥,
with C(d, N) ≤ (N + 1)^d BH^{≤d}_{Ω_N}.
In particular, if in the Fourier expansion (1.6) either all ℓ_i ≤ (N−1)/2 or all m_i ≤ (N−1)/2, then ∥Â∥_{2d/(d+1)} ≤ C(d, N) ∥A∥ with the constant C(d, N) ≤ (N + 1)^d BH^{≤d}_{Ω_N}.
As the statement suggests, we actually reduce the problem to the Bohnenblust–Hille inequality for the cyclic groups (Ω_N)^{(N+1)n}. In this reduction step, we need N to be prime. The proof is contained in Section 4. Combined with Theorems 1.3 and 1.4, we obtain a partial solution to the Bohnenblust–Hille inequality for the Heisenberg–Weyl basis. Notice that the restrictions on the powers ℓ_i or m_i represent a sort of generalization of multi-affinity in each variable, which was important for the N = 2 case. For N = 3 this is still a multi-affinity assumption, but for N = 5, 7, . . . it is an assumption that is considerably weaker than multi-affinity.
2. Applications
In this section, we present some applications of quantum Bohnenblust–Hille inequalities for GM observables. For A ∈ M_n(C) we use ∥A∥_2 to denote the Schatten-2 norm of A with respect to the normalized trace (1/n) tr.
2.1. Quantum k-juntas for qudits. Recall that a function f : {−1, 1}^n → C is called a k-junta if it depends on at most k coordinates. Similarly, a self-adjoint operator A ∈ M_N(C)^{⊗n} is a quantum k-junta if it acts non-trivially on at most k qudits. It is known [Bou02, DFKO07] that if a bounded function f over {−1, 1}^n is of low degree, then it is close to some junta. In the next corollary we derive such a result in a quantum setting. We refer to [RWZ22] for another quantum junta-type theorem, related to the influences instead of the degree.
Theorem 2.1. Fix N ≥ 2 and d ≥ 1. For any n ≥ 1, suppose that A ∈ M_N(C)^{⊗n} is a self-adjoint GM observable of degree-d and ∥A∥ ≤ 1. Then for any ε > 0 there exists a quantum k-junta B ∈ M_N(C)^{⊗n} such that
∥A − B∥_2 ≤ ε   with   k ≤ d (BH^{≤d}_{M_N(C)})^{2d} / ε^{2d},
where BH^{≤d}_{M_N(C)} denotes the best constant in the Bohnenblust–Hille inequalities (1.3) for GM observables. In particular, we may choose k ≤ d (C_N/ε)^{2d} for some C_N > 0 depending only on N.
Remark 2.2. The results in [Bou02, DFKO07] are in the commutative setting, and in that setting they are more general. However, in the case when the polynomials are of low degree, the proof that uses Bohnenblust–Hille inequalities is simpler. We are grateful to Alexandros Eskenazis for pointing this out to us.
2.2. Learning quantum observables of low degrees. Suppose we need to learn an observable A over n qudits, i.e. A ∈ M_N(C)^{⊗n}, and suppose we know a priori that it is a polynomial of degree-d in the Gell-Mann basis with
∥A∥ ≤ 1.   (2.1)
To learn it we can randomly choose states, sampling them by the same law. After that we wish to be able to build another (random) observable Ã such that
∥Ã − A∥_2^2 ≤ ε   (2.2)
with probability at least 1 − δ. The question is: how many random samples K = K(ε, δ, N, d, n) do we need to accomplish this?
In the scalar case this was solved in [EI22] with
K ≤ (C(d)/ε^{d+1}) log(n/δ),
where C(d) depends on the Bohnenblust–Hille constant BH^{≤d}_{{±1}} for degree-d polynomials on the Boolean cubes {−1, 1}^n.
In [VZ22] we explained one such algorithm for matrices in Pauli basis. The algo-
|
524 |
+
rithm for the Gell-Mann basis is almost the same and we will publish it separately.
|
525 |
+
The fact that A is of degree-d might be not so important as remarked in the discus-
|
526 |
+
sion before [CHP, Theorem 4]: with respect to certain measures, the contribution
|
527 |
+
of Gell-Mann monomials is exponentially decaying in the number of qudits that the
|
528 |
+
monomials act nontrivially on.
|
529 |
+
Theorem 2.3. Suppose that A ∈ MN(C)⊗n is of degree-d in the Gell-Mann basis
|
530 |
+
and satisfies (2.1). Fix δ, ǫ ∈ (0, 1) and
|
531 |
+
K ≥
|
532 |
+
Cd2 �
|
533 |
+
BH≤d
|
534 |
+
{±1}
|
535 |
+
�2d
|
536 |
+
ǫd+1
|
537 |
+
log
|
538 |
+
�n
|
539 |
+
δ
|
540 |
+
�
|
541 |
+
,
|
542 |
+
with C > 0 large enough. Then given any K i.i.d. random variables ⃗x(m) uniformly
|
543 |
+
distributed on {−1, 1}(N2−1)n, as well as the queries of pairs (⃗x(m), tr[Aρ(⃗x(m))]),
|
544 |
+
we can construct a random polynomial �
|
545 |
+
A ∈ MN(C)⊗n such that ∥A − �
|
546 |
+
A∥2
|
547 |
+
2 ≤ ǫ with
|
548 |
+
|
549 |
+
10
|
550 |
+
JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG
|
551 |
+
probability at least 1−δ. Here for each ⃗x ∈ {−1, 1}(N2−1)n, ρ(⃗x) is an explicit positive
|
552 |
+
semi-definite matrix with trace 1, independent of A.
|
553 |
+
Remark 2.4. The algorithm that builds �
|
554 |
+
A deserves the name PAC, probably approx-
|
555 |
+
imately correct construction.
|
556 |
+
3. Main results for the Gell-Mann matrix basis
|
557 |
+
In this section we prove Theorem 1.1. To reach this goal we consider the Boolean
|
558 |
+
cube
|
559 |
+
HN := {−1, 1}(N
|
560 |
+
2) × {−1, 1}(N
|
561 |
+
2) × {−1, 1}N−1,
|
562 |
+
for each N ≥ 2, and we will be reducing (1.3) to commutative Bohnenblust–Hille
|
563 |
+
inequality on Hn
|
564 |
+
N = {−1, 1}n(N2−1). Notice that in [VZ22] we already did this for
|
565 |
+
N = 2, and the reduction was to {−1, 1}3n.
|
566 |
+
For b ∈ {−1, 1} and 1 ≤ j < k ≤ N consider unit vectors,
|
567 |
+
α(b)
|
568 |
+
jk = (ej + bek)/
|
569 |
+
√
|
570 |
+
2,
|
571 |
+
β(b)
|
572 |
+
jk = (ej + biek)/
|
573 |
+
√
|
574 |
+
2.
|
575 |
+
These are b
|
576 |
+
�
|
577 |
+
N
|
578 |
+
2 -valued eigenvectors of Ajk, Bjk correspondingly.
|
579 |
+
Now consider density matrices, again for b ∈ {−1, 1} and 1 ≤ j < k ≤ N
|
580 |
+
A(b)
|
581 |
+
jk = |α(b)
|
582 |
+
jk ⟩⟨α(b)
|
583 |
+
jk | ,
|
584 |
+
B(b)
|
585 |
+
jk = |β(b)
|
586 |
+
jk ⟩⟨β(b)
|
587 |
+
jk | .
|
588 |
+
Fix any point
|
589 |
+
(x, y, z) ∈ HN = {−1, 1}(N
|
590 |
+
2) × {−1, 1}(N
|
591 |
+
2) × {−1, 1}N−1
|
592 |
+
with
|
593 |
+
x = (xjk)1≤j<k≤N ∈ {−1, 1}(N
|
594 |
+
2),
|
595 |
+
y = (yjk)1≤j<k≤N ∈ {−1, 1}(N
|
596 |
+
2),
|
597 |
+
and
|
598 |
+
z = (zm)1≤m≤N−1 ∈ {−1, 1}N−1,
|
599 |
+
we define
|
600 |
+
ρ(x, y, z) =
|
601 |
+
�
|
602 |
+
1≤j<k≤N
|
603 |
+
A
|
604 |
+
(xjk)
|
605 |
+
jk
|
606 |
+
+
|
607 |
+
�
|
608 |
+
1≤j<k≤N
|
609 |
+
B
|
610 |
+
(yjk)
|
611 |
+
jk
|
612 |
+
+
|
613 |
+
N−1
|
614 |
+
�
|
615 |
+
m=1
|
616 |
+
zm
|
617 |
+
1
|
618 |
+
√
|
619 |
+
2N Cm + N−1
|
620 |
+
2
|
621 |
+
· 1 .
|
622 |
+
Observe ρ is a positive semi-definite Hermitian matrix: each A
|
623 |
+
(xjk)
|
624 |
+
jk
|
625 |
+
, B
|
626 |
+
(yjk)
|
627 |
+
jk
|
628 |
+
are positive
|
629 |
+
semi-definite Hermitian and the remaining summands form a diagonal matrix with
|
630 |
+
positive entries. Also we have
|
631 |
+
tr ρ = N(N − 1)
|
632 |
+
2
|
633 |
+
+ N(N − 1)
|
634 |
+
2
|
635 |
+
+ 0 + N(N − 1)
|
636 |
+
2
|
637 |
+
= 3
|
638 |
+
�N
|
639 |
+
2
|
640 |
+
�
|
641 |
+
.
|
642 |
+
(3.1)
|
643 |
+
|
644 |
+
NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY
|
645 |
+
11
|
646 |
+
Lemma 3.1. For any (x, y, z) ∈ HN, 1 ≤ j < k ≤ N and 1 ≤ m ≤ N − 1 we have
|
647 |
+
tr(Ajkρ(x, y, z)) =
|
648 |
+
�
|
649 |
+
N
|
650 |
+
2 xjk,
|
651 |
+
(3.2)
|
652 |
+
tr(Bjkρ(x, y, z)) =
|
653 |
+
�
|
654 |
+
N
|
655 |
+
2 yjk,
|
656 |
+
(3.3)
|
657 |
+
tr(Cmρ(x, y, z)) =
|
658 |
+
�
|
659 |
+
N
|
660 |
+
2 zm.
|
661 |
+
(3.4)
|
662 |
+
Proof. Note for any 1 ≤ j < k ≤ N the anti-commutative relationship
|
663 |
+
AjkBjk + BjkAjk = 0 .
|
664 |
+
This implies that (see for example [VZ22, Lemma 2.1]) for any b ∈ {−1, 1},
|
665 |
+
⟨Ajkβ(b)
|
666 |
+
jk , β(b)
|
667 |
+
jk ⟩ = 0
|
668 |
+
and
|
669 |
+
⟨Bjkα(b)
|
670 |
+
jk , α(b)
|
671 |
+
jk ⟩ = 0.
|
672 |
+
This means
|
673 |
+
tr(AjkB(b)
|
674 |
+
jk ) = 0
|
675 |
+
and
|
676 |
+
tr(BjkA(b)
|
677 |
+
jk ) = 0 .
|
678 |
+
Next relationships are rather easy: when (j, k) ̸= (j′, k′) then the operators “miss”
|
679 |
+
each other and we get for all b ∈ {−1, 1}
|
680 |
+
tr(AjkB(b)
|
681 |
+
j′k′) = tr(BjkA(b)
|
682 |
+
j′k′) = tr(AjkA(b)
|
683 |
+
j′k′) = tr(BjkB(b)
|
684 |
+
j′k′) = 0.
|
685 |
+
By orthogonality the remaining summands in ρ contribute 0 to tr(Ajkρ), tr(Bjkρ).
|
686 |
+
We conclude (3.2) and (3.3) hold.
|
687 |
+
So far all follows more or less the path of [VZ22]. A bit more surprising are the
|
688 |
+
cancellations giving (3.4). For any x = (xjk)1≤j<k≤N ∈ {−1, 1}(n
|
689 |
+
2),
|
690 |
+
tr
|
691 |
+
�
|
692 |
+
Cm
|
693 |
+
�
|
694 |
+
�
|
695 |
+
1≤j<k≤N
|
696 |
+
A
|
697 |
+
(xjk)
|
698 |
+
jk
|
699 |
+
��
|
700 |
+
= 0 .
|
701 |
+
(3.5)
|
702 |
+
Similarly, for any y = (yjk)1≤j<k≤N ∈ {−1, 1}(n
|
703 |
+
2),
|
704 |
+
tr
|
705 |
+
�
|
706 |
+
Cm
|
707 |
+
�
|
708 |
+
�
|
709 |
+
1≤j<k≤N
|
710 |
+
B
|
711 |
+
(yjk)
|
712 |
+
jk
|
713 |
+
��
|
714 |
+
= 0 .
|
715 |
+
(3.6)
|
716 |
+
Let us prove (3.5) with Figure 3.1 for reference.
|
717 |
+
For a fixed k > m + 1 we can
|
718 |
+
immediately see that �m+1
|
719 |
+
j=1 tr(CmA
|
720 |
+
(xjk)
|
721 |
+
jk
|
722 |
+
) = 1
|
723 |
+
2Γm(1 + 1 + · · · + 1 − (m + 1)) = 0. We
|
724 |
+
are left to consider the j < k ≤ m summation and the j ≤ m, k = m+1 summation.
|
725 |
+
The first one gives
|
726 |
+
�m
|
727 |
+
2
|
728 |
+
�
|
729 |
+
Γm, while the second one gives 1
|
730 |
+
2mΓm − 1
|
731 |
+
2m2Γm. Altogether,
|
732 |
+
�m
|
733 |
+
2
|
734 |
+
�
|
735 |
+
Γm + 1
|
736 |
+
2mΓm − 1
|
737 |
+
2m2Γm = 0 .
|
738 |
+
|
739 |
+
12
|
740 |
+
JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG
|
741 |
+
1
|
742 |
+
1
|
743 |
+
· · ·
|
744 |
+
1
|
745 |
+
−m
|
746 |
+
0
|
747 |
+
0
|
748 |
+
· · ·
|
749 |
+
0
|
750 |
+
|
751 |
+
|
752 |
+
|
753 |
+
|
754 |
+
|
755 |
+
|
756 |
+
|
757 |
+
|
758 |
+
|
759 |
+
|
760 |
+
|
761 |
+
|
762 |
+
|
763 |
+
|
764 |
+
|
765 |
+
|
766 |
+
|
767 |
+
|
768 |
+
|
769 |
+
|
770 |
+
|
771 |
+
|
772 |
+
|
773 |
+
|
774 |
+
|
775 |
+
|
776 |
+
|
777 |
+
|
778 |
+
|
779 |
+
|
780 |
+
|
781 |
+
|
782 |
+
|
783 |
+
|
784 |
+
|
785 |
+
|
786 |
+
|
787 |
+
|
788 |
+
m-many
|
789 |
+
Γm
|
790 |
+
1
|
791 |
+
2Γm
|
792 |
+
−1
|
793 |
+
2Γm
|
794 |
+
1−m
|
795 |
+
2 Γm
|
796 |
+
Figure 3.1. Collating tr[CmA(b)
|
797 |
+
jk ]’s and tr[CmB(b)
|
798 |
+
jk ]’s. In the upper tri-
|
799 |
+
angle, a value v in coordinate (j, k) means tr[CmA(b)
|
800 |
+
jk ] = tr[CmB(b)
|
801 |
+
jk ] = v
|
802 |
+
for any b.
|
803 |
+
For reference, the (unnormalized) definition of Cm is
|
804 |
+
recorded on the diagonal.
|
805 |
+
Now, the rest of ρ(x, y, z) is �N−1
|
806 |
+
m=1 zm
|
807 |
+
1
|
808 |
+
√
|
809 |
+
2N Cm+ N−1
|
810 |
+
2 1, a sum of orthogonal matrices.
|
811 |
+
Hence (3.4) follows from (3.5), (3.6), and this orthogonality.
|
812 |
+
□
|
813 |
+
Now we are ready to prove Theorem 1.1.
|
814 |
+
Proof of Theorem 1.1. Let us normalize ρ as r(x, y, z) := 1
|
815 |
+
3
|
816 |
+
�N
|
817 |
+
2
|
818 |
+
�−1ρ(x, y, z), so
|
819 |
+
tr
|
820 |
+
�
|
821 |
+
r(x, y, z)
|
822 |
+
�
|
823 |
+
= 1 .
|
824 |
+
(3.7)
|
825 |
+
Now choosing any (⃗x, ⃗y, ⃗z) ∈ Hn
|
826 |
+
N with
|
827 |
+
⃗x =
|
828 |
+
�
|
829 |
+
x(1), . . . , x(n)�
|
830 |
+
,
|
831 |
+
⃗y =
|
832 |
+
�
|
833 |
+
y(1), . . . , y(n)�
|
834 |
+
,
|
835 |
+
⃗z =
|
836 |
+
�
|
837 |
+
z(1), . . . , z(n)�
|
838 |
+
,
|
839 |
+
and
|
840 |
+
�
|
841 |
+
x(j), y(j), z(j)�
|
842 |
+
∈ HN,
|
843 |
+
1 ≤ j ≤ n
|
844 |
+
we can consider
|
845 |
+
r(⃗x, ⃗y, ⃗z) = r
|
846 |
+
�
|
847 |
+
x(1), y(1), z(1)�
|
848 |
+
⊗ r
|
849 |
+
�
|
850 |
+
x(2), y(2), z(2)�
|
851 |
+
⊗ · · · ⊗ r
|
852 |
+
�
|
853 |
+
x(n), y(n), z(n)�
|
854 |
+
.
|
855 |
+
Recall that any GM observable A of degree at most d has the unique expansion
|
856 |
+
A =
|
857 |
+
�
|
858 |
+
α=(α1,...,αn)∈Λn
|
859 |
+
N
|
860 |
+
�
|
861 |
+
AαMα1 ⊗ · · · ⊗ Mαn
|
862 |
+
|
863 |
+
NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY
|
864 |
+
13
|
865 |
+
where {Mα}α∈ΛN = GM(N) and �
|
866 |
+
Aα = 0 if more than d matrices of Mαj, 1 ≤ j ≤ n
|
867 |
+
are not identity matrices.
|
868 |
+
By Lemma 3.1, for any α = (α1, . . . , αn) ∈ Λn
|
869 |
+
N with |{αj : Mαj ̸= 1}| := κ ≤ d,
|
870 |
+
(⃗x, ⃗y, ⃗z) �→ tr (Mα1 ⊗ · · · ⊗ Mαnr(⃗x, ⃗y, ⃗z))
|
871 |
+
is a multi-affine monomial of degree-κ on the Boolean cube Hn
|
872 |
+
N = {−1, 1}n(N2−1)
|
873 |
+
with coefficient
|
874 |
+
��
|
875 |
+
N/2
|
876 |
+
3
|
877 |
+
�N
|
878 |
+
2
|
879 |
+
�
|
880 |
+
�κ
|
881 |
+
.
|
882 |
+
Note also that for different α ̸= α′ ∈ Λn
|
883 |
+
N, the resulting monomials on Hn
|
884 |
+
N are different.
|
885 |
+
Since the coefficients of this scalar polynomial are of the form
|
886 |
+
��
|
887 |
+
N/2
|
888 |
+
3
|
889 |
+
�N
|
890 |
+
2
|
891 |
+
�
|
892 |
+
�κ
|
893 |
+
�
|
894 |
+
Aα,
|
895 |
+
0 ≤ κ ≤ d .
|
896 |
+
Therefore the absolute values of those coefficients are at least
|
897 |
+
1
|
898 |
+
� 3
|
899 |
+
2(N2 − N)
|
900 |
+
�d| �
|
901 |
+
Aα| ,
|
902 |
+
so that by commutative Bohnenblust–Hille inequality on Boolean cube as in [DMP]
|
903 |
+
� �
|
904 |
+
α
|
905 |
+
| �
|
906 |
+
Aα|
|
907 |
+
2d
|
908 |
+
d+1
|
909 |
+
� d+1
|
910 |
+
2d ≤
|
911 |
+
� 3
|
912 |
+
2(N2 − N)
|
913 |
+
�dBH≤d
|
914 |
+
{±1}
|
915 |
+
sup
|
916 |
+
(⃗x,⃗y,⃗z)∈Hn
|
917 |
+
N
|
918 |
+
|tr(A · r(⃗x, ⃗y, ⃗z)| ,
|
919 |
+
On the other hand, by (3.7)
|
920 |
+
|tr(A · r(⃗x, ⃗y, ⃗z)| ≤ ∥A∥ .
|
921 |
+
All combined, we get
|
922 |
+
� �
|
923 |
+
α
|
924 |
+
| �
|
925 |
+
Aα|
|
926 |
+
2d
|
927 |
+
d+1
|
928 |
+
� d+1
|
929 |
+
2d ≤
|
930 |
+
� 3
|
931 |
+
2(N2 − N)
|
932 |
+
�dC
|
933 |
+
√d log d∥A∥ .
|
934 |
+
□
|
935 |
+
4. Main results for Heisenberg–Weyl matrix basis
|
936 |
+
We collect first a few facts about X and Z.
|
937 |
+
Lemma 4.1. We have the following:
|
938 |
+
(1) {XℓZm : ℓ, m ∈ ZN} form a basis of MN(C).
|
939 |
+
(2) For all k, ℓ, m ∈ ZN:
|
940 |
+
(XℓZm)k = ω
|
941 |
+
1
|
942 |
+
2k(k−1)ℓmXkℓZkm
|
943 |
+
and for all ℓ1, ℓ2, m1, m2 ∈ ZN:
|
944 |
+
Xℓ1Zm1Xℓ2Zm2 = ωℓ2m1−ℓ1m2Xℓ2Zm2Xℓ1Zm1.
|
945 |
+
|
946 |
+
14
|
947 |
+
JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG
|
948 |
+
(3) If N is prime, then for any (0, 0) ̸= (ℓ, m) ∈ ZN × ZN, the eigenvalues of
|
949 |
+
XℓZm are {1, ω, . . . , ωN−1}. This is not the case if N is not prime.
|
950 |
+
Proof.
|
951 |
+
(1) Suppose that �
|
952 |
+
ℓ,m aℓ,mXℓZm = 0. For any j, k ∈ ZN, we have
|
953 |
+
�
|
954 |
+
ℓ,m
|
955 |
+
aℓ,m⟨XℓZmej, ej+k⟩ =
|
956 |
+
�
|
957 |
+
m
|
958 |
+
ak,mωjm = 0.
|
959 |
+
Since the Vandermonde matrix associated to (1, ω, . . . , ωN−1) is invertible, we
|
960 |
+
have ak,m = 0 for all k, m ∈ ZN.
|
961 |
+
(2) It follows immediately from the identity ZX = ωXZ which can be verified
|
962 |
+
directly: for all j ∈ ZN
|
963 |
+
ZXej = Zej+1 = ωj+1ej+1 = ωj+1Xej = ωXZej.
|
964 |
+
(3) Assume N to be prime and (ℓ, m) ̸= (0, 0). If ℓ = 0 and m ̸= 0, then the
|
965 |
+
eigenvalues of Zm are
|
966 |
+
{ωjm : j ∈ ZN} = {ωj : j ∈ ZN},
|
967 |
+
since N is prime. If ℓ ̸= 0, then we may relabel the standard basis {ej : j ∈
|
968 |
+
ZN} as {ejℓ : j ∈ ZN}. Consider the non-zero vectors
|
969 |
+
ζk :=
|
970 |
+
�
|
971 |
+
j∈ZN
|
972 |
+
ω
|
973 |
+
1
|
974 |
+
2 j(j−1)ℓm−jkejℓ,
|
975 |
+
k ∈ ZN.
|
976 |
+
A direct computation shows: for all k ∈ ZN
|
977 |
+
XℓZmζk =
|
978 |
+
�
|
979 |
+
j∈ZN
|
980 |
+
ω
|
981 |
+
1
|
982 |
+
2j(j−1)ℓm−jk · ωjℓmXℓejℓ
|
983 |
+
=
|
984 |
+
�
|
985 |
+
j∈ZN
|
986 |
+
ω
|
987 |
+
1
|
988 |
+
2j(j+1)ℓm−jke(j+1)ℓ
|
989 |
+
=
|
990 |
+
�
|
991 |
+
j∈ZN
|
992 |
+
ω
|
993 |
+
1
|
994 |
+
2j(j−1)ℓm−jk+kejℓ
|
995 |
+
= ωkζk.
|
996 |
+
If N is not prime, say N = N1N2 with N1, N2 > 1, then XN1 has 1 as
|
997 |
+
eigenvalue with multiplicity N1 > 1. So we do need N to be prime.
|
998 |
+
□
|
999 |
+
Let us record the following observation as a lemma.
|
1000 |
+
Lemma 4.2. Suppose that k ≥ 1, A, B are two unitary matrices such that Bk = 1,
|
1001 |
+
AB = λBA with λ ∈ C and λ ̸= 1. Suppose that ξ is a non-zero vector such that
|
1002 |
+
Bξ = µξ (µ ̸= 0 since µk = 1). Then
|
1003 |
+
⟨ξ, Aξ⟩ = 0.
|
1004 |
+
|
1005 |
+
NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY
|
1006 |
+
15
|
1007 |
+
Proof. By assumption
|
1008 |
+
µ⟨ξ, Aξ⟩ = ⟨ξ, ABξ⟩ = λ⟨ξ, BAξ⟩.
|
1009 |
+
Since B∗ = Bk−1, B∗ξ = Bk−1ξ = µk−1ξ = µξ. Thus
|
1010 |
+
µ⟨ξ, Aξ⟩ = λ⟨ξ, BAξ⟩ = λ⟨B∗ξ, Aξ⟩ = λµ⟨ξ, Aξ⟩.
|
1011 |
+
Hence, µ(λ − 1)⟨ξ, Aξ⟩ = 0. This gives ⟨ξ, Aξ⟩ = 0 as µ(λ − 1) ̸= 0.
|
1012 |
+
□
|
1013 |
+
Now we are ready to prove Theorem 1.4:
|
1014 |
+
Proof of Theorem 1.4. Fix a prime number N ≥ 2. Recall that ω = e
|
1015 |
+
2πi
|
1016 |
+
N . Consider
|
1017 |
+
the generator set of ZN × ZN
|
1018 |
+
ΣN := {(1, 0), (1, 1), . . ., (1, N − 1), (0, 1)}.
|
1019 |
+
For any z ∈ ΩN and (ℓ, m) ∈ ΣN, we denote by eℓ,m
|
1020 |
+
z
|
1021 |
+
the unit eigenvector of XℓZm
|
1022 |
+
corresponding to the eigenvalue z. For any vector ⃗ω ∈ (ΩN)(N+1)n of the form
|
1023 |
+
⃗ω = (⃗ωℓ,m)(ℓ,m)∈ΣN,
|
1024 |
+
⃗ωℓ,m = (ωℓ,m
|
1025 |
+
1
|
1026 |
+
, . . . , ωℓ,m
|
1027 |
+
n ) ∈ (ΩN)(N+1)n,
|
1028 |
+
(4.1)
|
1029 |
+
we consider the matrix
|
1030 |
+
ρ(⃗ω) := ρ1(⃗ω) ⊗ · · · ⊗ ρn(⃗ω)
|
1031 |
+
where
|
1032 |
+
ρk(⃗ω) :=
|
1033 |
+
1
|
1034 |
+
N + 1
|
1035 |
+
�
|
1036 |
+
(ℓ,m)∈ΣN
|
1037 |
+
|eℓ,m
|
1038 |
+
ωℓ,m
|
1039 |
+
k ⟩⟨eℓ,m
|
1040 |
+
ωℓ,m
|
1041 |
+
k | .
|
1042 |
+
Then each ρk(⃗ω) is a density matrix and so is ρ(⃗ω).
|
1043 |
+
Suppose that (ℓ, m) ∈ ΣN and (ℓ′, m′) /∈ {(kℓ, km) : (ℓ, m) ∈ ΣN}, then by Lemma
|
1044 |
+
4.1
|
1045 |
+
Xℓ′Zm′XℓZm = ωℓm′−ℓ′mXℓZmXℓ′Zm′.
|
1046 |
+
From our choice ωℓm′−ℓ′m ̸= 1. By Lemmas 4.1 and 4.2
|
1047 |
+
tr[Xℓ′Zm′ |eℓ,m
|
1048 |
+
z
|
1049 |
+
⟩⟨eℓ,m
|
1050 |
+
z
|
1051 |
+
|] = ⟨Xℓ′Zm′eℓ,m
|
1052 |
+
z
|
1053 |
+
, eℓ,m
|
1054 |
+
z
|
1055 |
+
⟩ = 0,
|
1056 |
+
z ∈ ΩN.
|
1057 |
+
Suppose that (ℓ, m) ∈ ΣN and 1 ≤ k ≤ N − 1. Then by Lemma 4.1
|
1058 |
+
tr[XkℓZkm |eℓ,m
|
1059 |
+
z
|
1060 |
+
⟩⟨eℓ,m
|
1061 |
+
z
|
1062 |
+
|] = ω− 1
|
1063 |
+
2 k(k−1)ℓm⟨(XℓZm)keℓ,m
|
1064 |
+
z
|
1065 |
+
, eℓ,m
|
1066 |
+
z
|
1067 |
+
⟩
|
1068 |
+
= ω− 1
|
1069 |
+
2 k(k−1)ℓmzk,
|
1070 |
+
z ∈ ΩN.
|
1071 |
+
|
1072 |
+
16
|
1073 |
+
JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG
|
1074 |
+
All combined, for all 1 ≤ k ≤ N − 1, (ℓ, m) ∈ ΣN and 1 ≤ i ≤ n we get
|
1075 |
+
tr[XkℓZkmρi(⃗ω)] =
|
1076 |
+
1
|
1077 |
+
N + 1
|
1078 |
+
�
|
1079 |
+
(ℓ′,m′)∈ΣN
|
1080 |
+
⟨eℓ′,m′
|
1081 |
+
ωℓ′,m′
|
1082 |
+
i
|
1083 |
+
, XkℓZkmeℓ′,m′
|
1084 |
+
ωℓ′,m′
|
1085 |
+
i
|
1086 |
+
⟩
|
1087 |
+
=
|
1088 |
+
1
|
1089 |
+
N + 1⟨eℓ,m
|
1090 |
+
ωℓ,m
|
1091 |
+
i
|
1092 |
+
, XkℓZkmeℓ,m
|
1093 |
+
ωℓ,m
|
1094 |
+
i
|
1095 |
+
⟩
|
1096 |
+
=
|
1097 |
+
1
|
1098 |
+
N + 1ω− 1
|
1099 |
+
2 k(k−1)ℓm(ωℓ,m
|
1100 |
+
i
|
1101 |
+
)k.
|
1102 |
+
Recall that any degree-d polynomial in MN(C)⊗n is a linear combination of mono-
|
1103 |
+
mials
|
1104 |
+
A(⃗k, ⃗ℓ, ⃗m;⃗i) := · · · ⊗ Xk1ℓ1Zk1m1 ⊗ · · · ⊗ XkκℓκZkκmκ ⊗ · · ·
|
1105 |
+
where
|
1106 |
+
• ⃗k = (k1, . . . , kκ) ∈ {1, . . . , N − 1}κ with 0 ≤ �κ
|
1107 |
+
j=1 kj ≤ d;
|
1108 |
+
• ⃗ℓ = (ℓ1, . . . , ℓκ), ⃗m = (m1, . . . , mκ) with each (ℓj, mj) ∈ ΣN;
|
1109 |
+
• ⃗i = (i1, . . . , iκ) with 1 ≤ i1 < · · · < iκ ≤ n;
|
1110 |
+
• XkjℓjZkjmj appears in the ij-th place, 1 ≤ j ≤ κ, and all the other n − κ
|
1111 |
+
elements in the tensor product are the identity matrices 1.
|
1112 |
+
So for any ⃗ω ∈ (ΩN)(N+1)n of the form (4.1) we have from the above discussion that
|
1113 |
+
tr[A(⃗k, ⃗ℓ, ⃗m;⃗i)ρ(⃗ω)] =
|
1114 |
+
κ
|
1115 |
+
�
|
1116 |
+
j=1
|
1117 |
+
tr[XkjℓjZkjmjρij(⃗ω)]
|
1118 |
+
= ω− 1
|
1119 |
+
2
|
1120 |
+
�κ
|
1121 |
+
j=1 kj(kj−1)ℓjmj
|
1122 |
+
(N + 1)κ
|
1123 |
+
(ωℓ1,m1
|
1124 |
+
i1
|
1125 |
+
)k1 · · · (ωℓκ,mκ
|
1126 |
+
iκ
|
1127 |
+
)kκ.
|
1128 |
+
So ⃗ω �→ tr[A(⃗k, ⃗ℓ, ⃗m;⃗i)ρ(⃗ω)] is a polynomial on (ΩN)(N+1)n of degree at most �κ
|
1129 |
+
j=1 kj ≤
|
1130 |
+
d.
|
1131 |
+
Now for general polynomial A ∈ MN(C)⊗n of degree-d:
|
1132 |
+
A =
|
1133 |
+
�
|
1134 |
+
⃗k,⃗ℓ,⃗m,⃗i
|
1135 |
+
c(⃗k, ⃗ℓ, ⃗m;⃗i)A(⃗k, ⃗ℓ, ⃗m;⃗i)
|
1136 |
+
where the sum runs over the above (⃗k, ⃗ℓ, ⃗m;⃗i). This is the Fourier expansion of A
|
1137 |
+
and each c(⃗k, ⃗ℓ, ⃗m;⃗i) ∈ C is the Fourier coefficient. So
|
1138 |
+
∥ �A∥p =
|
1139 |
+
|
1140 |
+
�
|
1141 |
+
⃗k,⃗ℓ,⃗m,⃗i
|
1142 |
+
|c(⃗k, ⃗ℓ, ⃗m;⃗i)|p
|
1143 |
+
|
1144 |
+
|
1145 |
+
1/p
|
1146 |
+
.
|
1147 |
+
|
1148 |
+
NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY
|
1149 |
+
17
|
1150 |
+
To each A we assign the function fA on (ΩN)(N+1)n given by
|
1151 |
+
fA(⃗ω) = tr[Aρ(⃗ω)]
|
1152 |
+
=
|
1153 |
+
�
|
1154 |
+
⃗k,⃗ℓ,⃗m,⃗i
|
1155 |
+
ω− 1
|
1156 |
+
2
|
1157 |
+
�κ
|
1158 |
+
j=1 kj(kj−1)ℓjmjc(⃗k, ⃗ℓ, ⃗m;⃗i)
|
1159 |
+
(N + 1)κ
|
1160 |
+
(ωℓ1,m1
|
1161 |
+
i1
|
1162 |
+
)k1 · · · (ωℓκ,mκ
|
1163 |
+
iκ
|
1164 |
+
)kκ.
|
1165 |
+
Note that this is the Fourier expansion of fA since the monomials (ωℓ1,m1
|
1166 |
+
i1
|
1167 |
+
)k1 · · · (ωℓκ,mκ
|
1168 |
+
iκ
|
1169 |
+
)kκ
|
1170 |
+
differ for different (⃗k, ⃗ℓ, ⃗m,⃗i). Therefore,
|
1171 |
+
∥�
|
1172 |
+
fA∥p =
|
1173 |
+
|
1174 |
+
�
|
1175 |
+
⃗k,⃗ℓ,⃗m,⃗i
|
1176 |
+
�����
|
1177 |
+
c(⃗k, ⃗ℓ, ⃗m;⃗i)
|
1178 |
+
(N + 1)κ
|
1179 |
+
�����
|
1180 |
+
p
|
1181 |
+
|
1182 |
+
1/p
|
1183 |
+
≥
|
1184 |
+
1
|
1185 |
+
(N + 1)d
|
1186 |
+
|
1187 |
+
�
|
1188 |
+
⃗k,⃗ℓ,⃗m,⃗i
|
1189 |
+
|c(⃗k, ⃗ℓ, ⃗m;⃗i)|p
|
1190 |
+
|
1191 |
+
|
1192 |
+
1/p
|
1193 |
+
=
|
1194 |
+
1
|
1195 |
+
(N + 1)d∥ �A∥p.
|
1196 |
+
So if the Bohnenblust–Hille inequalities hold for cyclic group ZN for N prime, then
|
1197 |
+
∥�
|
1198 |
+
fA∥ 2d
|
1199 |
+
d+1 ≤ C(d)∥fA∥L∞((ΩN )(N+1)n)
|
1200 |
+
for some C(d) > 0. All combined, we obtain
|
1201 |
+
∥ �A∥ 2d
|
1202 |
+
d+1 ≤ (N + 1)d∥�
|
1203 |
+
fA∥ 2d
|
1204 |
+
d+1 ≤ (N + 1)dC(d)∥fA∥L∞((ΩN )(N+1)n) ≤ (N + 1)dC(d)∥A∥.
|
1205 |
+
□
|
1206 |
+
5. Bohnenblust–Hille inequalities for cyclic groups: the difficulty
|
1207 |
+
Let us recall the reader that �ΩN denotes the convex hull of cyclic group ΩN =
|
1208 |
+
(1, ω, . . . ωN−1). In this section we sketch the proof Theorem 1.2.
|
1209 |
+
We wish to prove the following theorem:
|
1210 |
+
Theorem 5.1. Let f = �
|
1211 |
+
α bαzα be an analytic polynomial of n complex variables
|
1212 |
+
z = (z1, . . . , zn) of global degree at most d and such that in each variable zi its degree
|
1213 |
+
is at most N − 1. Then
|
1214 |
+
� �
|
1215 |
+
|cα|
|
1216 |
+
2d
|
1217 |
+
d+1
|
1218 |
+
� d+1
|
1219 |
+
2d ≤ C(d)
|
1220 |
+
sup
|
1221 |
+
z∈(�ΩN)n
|
1222 |
+
|f(z)| .
|
1223 |
+
|
1224 |
+
18
|
1225 |
+
JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG
|
1226 |
+
Here C(d) is as in [DGMS], in particular, it is sub-exponential.
|
1227 |
+
The proof of
|
1228 |
+
Theorem 5.1 follows closely the proof of [DMP], [BPS] and [DGMS] and will be
|
1229 |
+
recorded elsewhere.
|
1230 |
+
Now we give a sketch of this proof. We repeat Theorem 8.10 and Remark 8.16 of
|
1231 |
+
[DGMS]. As a result we get hypercontractive inequalities for polynomials of arbitrary
|
1232 |
+
number n of variables zi such that polynomials have degree at most N − 1 in each
|
1233 |
+
variable and such that in Remark 8.16 the integration in both parts is not over Tn
|
1234 |
+
but over (ΩN)n. The explanation is simple: for polynomials of degree N − 1 in each
|
1235 |
+
variable we can use integration over (ΩN)n to calculate its L2 norm. This allows us
|
1236 |
+
to have the hypercontractivity constant on page 265 of [DGMS] to be as in this page
|
1237 |
+
HC 2k
|
1238 |
+
k+1,2 ≤ 2
|
1239 |
+
4
|
1240 |
+
3k−1
|
1241 |
+
but the norms in hypercontractivity estimate use again integration over (ΩN)n rather
|
1242 |
+
than over Tn.
|
1243 |
+
The proof of Bohnenblust–Hille inequality uses several ingredients: a) algebraic
|
1244 |
+
calculations and Blei’s inequality, b) hypercontractivity or more precisely some mo-
|
1245 |
+
ment comparison estimates, c) polarization. Of course a) will be the same, b) is the
|
1246 |
+
same as we just observed.
|
1247 |
+
However, the polarization argument on pages 67–68 of [DGMS] one needs to be
|
1248 |
+
careful. One can repeat the proof with xi (or x, y) being vectors in (ΩN)n, complex
|
1249 |
+
variables (w1, w2) to be from (ΩN)2 instead of T2, but |ϕ(w1n1x+w2n2y, . . . , w1n1x+
|
1250 |
+
w2n2y)| now will have estimate maxu∈(�ΩN) |ϕ(u, . . . , u)|(n1 + n2)m (in our case we
|
1251 |
+
denote m by d).
|
1252 |
+
This is the sketch of the proof of Theorem 5.1.
|
1253 |
+
However, unlike the case when the maxDn |f(z)| by maxTn |f(z)| estimate is obvi-
|
1254 |
+
ous by maximum principle, we cannot replace max(�ΩN)n |f(z)| by max(ΩN)n |f(z)| by
|
1255 |
+
any obvious consideration.
|
1256 |
+
Remark 5.2. In application to matrix Bohnenblust–Hille inequality in Heisenberg–
|
1257 |
+
Weyl basis, which we considered above, we wanted to replace (�ΩN)n with (ΩN)n, but
|
1258 |
+
we cannot do that directly because (�ΩN)n is much bigger than (ΩN)n and we do not
|
1259 |
+
know the inequality
|
1260 |
+
sup
|
1261 |
+
⃗ζ∈(�ΩN)(n
|
1262 |
+
|fA(⃗ζ)| ≤ B(d)
|
1263 |
+
sup
|
1264 |
+
⃗γ∈(ΩN)n |fA(⃗γ)|
|
1265 |
+
for polynomials of degree at most d of z = (z1, . . . , zn) such that in each zi the degree
|
1266 |
+
is at most N − 1. One exception is N = 2, when polynomials are multi-affine and
|
1267 |
+
the previous inequality does hold just by convexity in each argument.
|
1268 |
+
But for N ≥ 3 this reasoning flops by the lack of convexity. This lack of convexity
|
1269 |
+
is our main difficulty, and for some time we will struggle with this difficulty.
|
1270 |
+
|
1271 |
+
NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY
|
1272 |
+
19
|
1273 |
+
Question. Is it true that the following inequality holds with constant independent
|
1274 |
+
of n
|
1275 |
+
sup
|
1276 |
+
z∈(�ΩN)n
|
1277 |
+
|f(z)| ≤ B(d)
|
1278 |
+
sup
|
1279 |
+
⃗w∈(ΩN)n |f(⃗ω)|,
|
1280 |
+
for polynomials f of n complex variables z = (z1, . . . , zn) of global degree at most d
|
1281 |
+
of such that in each zi the degree is at most N − 1?
|
1282 |
+
6. Bohnenblust–Hille inequalities for cyclic groups: a partial remedy

Let f(z) be an analytic polynomial of total degree d in the variables (z_1, ..., z_n) such that in each z_i its degree is at most N − 1. We should think of the regime

n ≫ d ≫ N.

We would like to compare ∥f∥_{L∞(T^n)}, ∥f∥_{L∞(Ω̃_N^n)}, and ∥f∥_{L∞(Ω_N^n)}, and we want estimates independent of n. Obviously

∥f∥_{L∞(Ω_N^n)} ≤ ∥f∥_{L∞(Ω̃_N^n)} ≤ ∥f∥_{L∞(D^n)} = ∥f∥_{L∞(T^n)}.

The converse estimate with constant 1 is impossible, as we now show.
6.1. Constant cannot be 1. Let N = 3.

Lemma 6.1. Let v_1, v_2, v_3 be linearly independent vectors in C^3, and let C be their absolutely convex hull. Then v ∈ C if and only if for every vector u we have |(u, v)| ≤ max_{i=1,2,3} |(u, v_i)|.

This is just the Hahn–Banach theorem.
Proposition 6.2. There exists a polynomial of one complex variable p(z) = a_0 + a_1 z + a_2 z^2, z ∈ D, such that

∥p∥_{L∞(Ω̃_3)} > ∥p∥_{L∞(Ω_3)}.

Proof. Consider three vectors in C^3: v_1 = (1, 1, 1), v_2 = (1, ω, ω^2), v_3 = (1, ω^2, ω^4) = (1, ω^2, ω), where ω = e^{2πi/3}.

Consider the vector v = (1, z, z^2) with some z ∈ Ω̃_3. If for every u = (a_0, a_1, a_2) we have |(u, v)| ≤ max_{i=1,2,3} |(u, v_i)|, then v is an absolutely convex combination of (v_1, v_2, v_3), and so there exist convex coefficients p_1, p_2, p_3 and α_1, α_2, α_3 in T such that

v = α_1 p_1 v_1 + α_2 p_2 v_2 + α_3 p_3 v_3.

In particular α_1 p_1 + α_2 p_2 + α_3 p_3 = 1, which means that α_i = 1. Hence

z = p_1 + p_2 ω + p_3 ω^2,  z^2 = p_1 + p_2 ω^2 + p_3 ω.

JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG

Then

p_1^2 + 2 p_2 p_3 + (p_2^2 + 2 p_1 p_3) ω^2 + (p_3^2 + 2 p_1 p_2) ω = p_1 + p_2 ω^2 + p_3 ω.

Two convex combinations (on the LHS we also have a convex combination) must have the same coefficients. We get

p_1^2 + 2 p_2 p_3 = p_1,  p_2^2 + 2 p_1 p_3 = p_2,  p_3^2 + 2 p_1 p_2 = p_3.

There can be only finitely many such (p_1, p_2, p_3). Thus, choosing z = p_1 + p_2 ω + p_3 ω^2 with a triple different from those finitely many ones, we get that v = (1, z, z^2) is not an absolutely convex combination of v_1, v_2, v_3. Then Lemma 6.1 shows that there exists u = (ã_0, ã_1, ã_2) with |(v, u)| > max_{i=1,2,3} |(v_i, u)|. This is the same as saying that |p(z)| > max_{k=0,1,2} |p(ω^k)|. □
Here is a concrete example showing that the constant cannot be 1. Let ω := e^{2πi/3}. Consider the polynomial

p(z) := p(1) (z − ω)(z − ω^2) / ((1 − ω)(1 − ω^2)) + p(ω) (z − 1)(z − ω^2) / ((ω − 1)(ω − ω^2)) + p(ω^2) (z − 1)(z − ω) / ((ω^2 − 1)(ω^2 − ω)),

with p(1), p(ω), p(ω^2) to be chosen later. Put z_0 := (1 + ω)/2 ∈ Ω̃_3. Then

|z_0 − 1| = |z_0 − ω| = √3/2,  |z_0 − ω^2| = 3/2.

Now we choose p(1), p(ω), p(ω^2) to be complex numbers of modulus 1 such that

p(1) (z_0 − ω)(z_0 − ω^2) / ((1 − ω)(1 − ω^2)) = |(z_0 − ω)(z_0 − ω^2) / ((1 − ω)(1 − ω^2))| = (3√3/4)/3 = √3/4,

p(ω) (z_0 − 1)(z_0 − ω^2) / ((ω − 1)(ω − ω^2)) = |(z_0 − 1)(z_0 − ω^2) / ((ω − 1)(ω − ω^2))| = (3√3/4)/3 = √3/4,

p(ω^2) (z_0 − 1)(z_0 − ω) / ((ω^2 − 1)(ω^2 − ω)) = |(z_0 − 1)(z_0 − ω) / ((ω^2 − 1)(ω^2 − ω))| = (3/4)/3 = 1/4.

Therefore, this choice of p satisfies

∥p∥_{L∞(Ω̃_3)} ≥ |p(z_0)| = √3/4 + √3/4 + 1/4 = (1 + 2√3)/4 > 1 = ∥p∥_{L∞(Ω_3)}.
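The arithmetic in this example can be double-checked numerically. The following sketch is a hypothetical aid, not part of the paper; it assumes NumPy is available. It builds the Lagrange interpolation above, chooses the unimodular values p(1), p(ω), p(ω^2) that align the phases at z_0, and recovers |p(z_0)| = (1 + 2√3)/4 while |p| = 1 on the cube roots of unity.

```python
import numpy as np

# Interpolation nodes: the cube roots of unity; z0 is the off-root point.
w = np.exp(2j * np.pi / 3)
nodes = np.array([1.0 + 0j, w, w**2])
z0 = (1 + w) / 2

def lagrange_basis(j, z):
    """Lagrange basis polynomial for node j, evaluated at z."""
    num = den = 1.0 + 0j
    for k in range(3):
        if k != j:
            num *= z - nodes[k]
            den *= nodes[j] - nodes[k]
    return num / den

L = np.array([lagrange_basis(j, z0) for j in range(3)])
# Modulus-1 values p(1), p(w), p(w^2) chosen so each term at z0 equals |L_j|:
p_vals = np.conj(L) / np.abs(L)
# p at z0 is then |L_0| + |L_1| + |L_2| = sqrt(3)/4 + sqrt(3)/4 + 1/4
p_z0 = np.sum(p_vals * L)
```

Since p(ω^k) = p_vals[k] at the interpolation nodes, the sup of |p| over Ω_3 is exactly 1, while |p(z_0)| ≈ 1.116 > 1.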
Question. Is there a constant, independent of n (but dependent on d), such that for analytic polynomials of global degree d and degree ≤ N in each variable z_i one has the estimate

∥f∥_{L∞(T^n)} ≤ C(d) ∥f∥_{L∞(Ω_N^n)} ?

We believe that there may be a counterexample.
6.2. A partial solution. In what follows we partially answer this latter question, but we will need to make some concessions to answer it affirmatively. The strategy will be to reverse the argument of Section 6.1. We start with the following key matrix lemma.

Lemma 6.3. Fix N ≥ 2 and put ξ_k = e^{2πik/(2N−1)}. There exists ε_0 = ε_0(N) ∈ (0, 1) such that, for all z ∈ C with |z| ≤ ε_0, one can find p_k = p_k(z) ≥ 0, 0 ≤ k ≤ 2N − 2, satisfying

z^m = Σ_{k=0}^{2N−2} p_k ξ_k^m,  0 ≤ m ≤ N − 1.   (6.1)

In particular, when m = 0, Σ_{k=0}^{2N−2} p_k = 1.
Proof. Put θ = 2π/(2N−1). Since the p_k are non-negative and thus real, the equations (6.1) are equivalent to

Σ_{k=0}^{2N−2} p_k = 1,
Σ_{k=0}^{2N−2} p_k cos(kmθ) = ℜ z^m,  1 ≤ m ≤ N − 1,
Σ_{k=0}^{2N−2} p_k sin(kmθ) = ℑ z^m,  1 ≤ m ≤ N − 1.   (6.2)
Equivalently, we want to find a solution to D_N p⃗ = v⃗_z with each entry of p⃗ non-negative. Here D_N is the (2N − 1) × (2N − 1) real matrix whose rows, in the order of (6.2), are

D_N =
⎡ 1   1             1              ···   1
  1   cos(θ)        cos(2θ)        ···   cos((2N−2)θ)
  ⋮   ⋮             ⋮                    ⋮
  1   cos((N−1)θ)   cos(2(N−1)θ)   ···   cos((2N−2)(N−1)θ)
  0   sin(θ)        sin(2θ)        ···   sin((2N−2)θ)
  ⋮   ⋮             ⋮                    ⋮
  0   sin((N−1)θ)   sin(2(N−1)θ)   ···   sin((2N−2)(N−1)θ) ⎦,

and v⃗_z = (1, ℜz, ..., ℜz^{N−1}, ℑz, ..., ℑz^{N−1})^T ∈ R^{2N−1}.
Note first that D_N is non-singular. In fact, assume that D_N x⃗ = 0⃗ with x⃗ = (x_0, x_1, ..., x_{2N−2})^T ∈ R^{2N−1}. Then

Σ_{k=0}^{2N−2} x_k ξ_k^m = 0,  0 ≤ m ≤ N − 1.

Since each x_k is real and ξ^{2N−1} = 1, we have, by taking conjugates, that

Σ_{k=0}^{2N−2} x_k ξ_k^m = 0,  N ≤ m ≤ 2N − 1.

Altogether we get

Σ_{k=0}^{2N−2} x_k ξ_k^m = 0,  0 ≤ m ≤ 2N − 2.

Since the Vandermonde matrix associated to (1, ξ, ..., ξ^{2N−2}) has determinant

Π_{0≤j<k≤2N−2} (ξ_j − ξ_k) ≠ 0,

we get x⃗ = 0⃗. So D_N is non-singular.
Therefore, for any z ∈ C, the solution to (6.2), and thus to (6.1), is given by

p⃗_z = (p_0(z), p_1(z), ..., p_{2N−2}(z)) = D_N^{−1} v⃗_z ∈ R^{2N−1}.

Notice one more thing about the rows of D_N. As

Σ_{k=0}^{2N−2} ξ_k^m = 0  for all m = 1, 2, ..., 2N − 2,

the vector v⃗_0 := (1/(2N−1), ..., 1/(2N−1)) of length 2N − 1 automatically satisfies

D_N v⃗_0 = (1, 0, 0, ..., 0)^T.

For any k-by-k matrix A denote

∥A∥_{∞→∞} := sup_{0⃗ ≠ v ∈ R^k} ∥Av∥_∞ / ∥v∥_∞.
|
1554 |
+
∥⃗pz − ⃗p0∥∞ ≤∥D−1
|
1555 |
+
N ∥∞→∞∥⃗vz − ⃗v0∥∞
|
1556 |
+
=∥D−1
|
1557 |
+
N ∥∞→∞ max
|
1558 |
+
�
|
1559 |
+
max
|
1560 |
+
1≤k≤N−1 |ℜzk|,
|
1561 |
+
max
|
1562 |
+
1≤k≤N−1 |ℑzk|
|
1563 |
+
�
|
1564 |
+
≤∥D−1
|
1565 |
+
N ∥∞→∞ max{|z|, |z|N−1}.
|
1566 |
+
That is,
|
1567 |
+
max
|
1568 |
+
0≤j≤2N−2
|
1569 |
+
����pj(z) −
|
1570 |
+
1
|
1571 |
+
2N − 1
|
1572 |
+
���� ≤ ∥D−1
|
1573 |
+
N ∥∞→∞ max{|z|, |z|N−1}.
|
1574 |
+
Since D−1
|
1575 |
+
N ⃗v0 = ⃗p0, we have ∥D−1
|
1576 |
+
N ∥∞→∞ ≥ 2N − 1. Put
|
1577 |
+
ε0 :=
|
1578 |
+
1
|
1579 |
+
(2N − 1)∥D−1
|
1580 |
+
N ∥∞→∞
|
1581 |
+
∈
|
1582 |
+
�
|
1583 |
+
0,
|
1584 |
+
1
|
1585 |
+
(2N − 1)2
|
1586 |
+
�
|
1587 |
+
.
|
1588 |
+
Thus whenever |z| < ε0 < 1, we have
|
1589 |
+
max
|
1590 |
+
0≤j≤2N−2
|
1591 |
+
����pj(z) −
|
1592 |
+
1
|
1593 |
+
2N − 1
|
1594 |
+
���� ≤ ε0∥D−1
|
1595 |
+
N ∥∞→∞ ≤
|
1596 |
+
1
|
1597 |
+
2N − 1.
|
1598 |
+
|
1599 |
+
NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY
|
1600 |
+
23
|
1601 |
+
Therefore, pj(z) ≥ 0 for all 0 ≤ j ≤ 2N − 2 and the proof is complete.
|
1602 |
+
□
|
1603 |
+
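The construction in Lemma 6.3 is easy to probe numerically. The sketch below is a hypothetical aid, not part of the paper; it assumes NumPy. It assembles D_N for N = 3, solves D_N p⃗ = v⃗_z for a small |z|, and checks that the resulting p_k form a probability vector reproducing z^m for m = 0, ..., N − 1.

```python
import numpy as np

def build_DN(N):
    # The (2N-1) x (2N-1) matrix of the lemma: a row of ones, then
    # cos(k*m*theta) rows for m = 1..N-1, then sin(k*m*theta) rows,
    # with theta = 2*pi/(2N-1) and column index k = 0..2N-2.
    theta = 2 * np.pi / (2 * N - 1)
    k = np.arange(2 * N - 1)
    rows = [np.ones(2 * N - 1)]
    rows += [np.cos(k * m * theta) for m in range(1, N)]
    rows += [np.sin(k * m * theta) for m in range(1, N)]
    return np.vstack(rows)

def coefficients(z, N):
    # Solve D_N p = v_z; for |z| small enough the entries are non-negative.
    m = np.arange(1, N)
    v_z = np.concatenate(([1.0], (z ** m).real, (z ** m).imag))
    return np.linalg.solve(build_DN(N), v_z)

N = 3
z = 0.02 + 0.01j
p = coefficients(z, N)
xi = np.exp(2j * np.pi * np.arange(2 * N - 1) / (2 * N - 1))
# p sums to 1, stays close to the uniform vector 1/(2N-1) (hence p_k >= 0),
# and reproduces z and z^2 as convex combinations of the (2N-1)-st roots of unity.
```

For |z| well below ε_0 the entries p_k sit near 1/(2N−1), which is exactly the mechanism behind the lemma's choice of ε_0.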
With Lemma 6.3 we may replace Ω̃_N in Theorem 1.2 with Ω_N, up to a constant.

Proof of Theorem 1.3. Denote ξ := e^{2πi/(2N−1)} and ξ_k := ξ^k. Note that by Lemma 6.3 there exists ε_0 = ε_0(N) ∈ (0, 1) such that for all z = (z_1, ..., z_n) with ∥z∥_∞ ≤ ε_0 we have

z_j^m = Σ_{k=0}^{2N−2} p_k^{(j)} ξ_k^m,  1 ≤ j ≤ n,  0 ≤ m ≤ N − 1,

where p_k^{(j)} = p_k^{(j)}(z_j) ≥ 0 satisfies Σ_{k=0}^{2N−2} p_k^{(j)} = 1 for every 1 ≤ j ≤ n.
Then we have

|f(z_1, ..., z_n)| = | Σ_{α_1,...,α_n=0}^{d} a_{α_1,...,α_n} z_1^{α_1} ··· z_n^{α_n} |

= | Σ_{α_1,...,α_n=0}^{d} Σ_{k_1,...,k_n=0}^{2N−2} a_{α_1,...,α_n} p_{k_1}^{(1)} ··· p_{k_n}^{(n)} ξ_{k_1}^{α_1} ··· ξ_{k_n}^{α_n} |

≤ Σ_{k_1,...,k_n=0}^{2N−2} p_{k_1}^{(1)} ··· p_{k_n}^{(n)} | Σ_{α_1,...,α_n=0}^{d} a_{α_1,...,α_n} ξ_{k_1}^{α_1} ··· ξ_{k_n}^{α_n} |

= Σ_{k_1,...,k_n=0}^{2N−2} p_{k_1}^{(1)} ··· p_{k_n}^{(n)} |f(ξ_{k_1}, ..., ξ_{k_n})|

≤ Σ_{k_1,...,k_n=0}^{2N−2} p_{k_1}^{(1)} ··· p_{k_n}^{(n)} sup_{z ∈ (Ω_{2N−1})^n} |f(z)|

= sup_{z ∈ (Ω_{2N−1})^n} |f(z)|.

So we have shown that

sup_{∥z∥_∞ ≤ ε_0} |f(z)| ≤ sup_{z ∈ (Ω_{2N−1})^n} |f(z)|.   (6.3)
Now consider

P(z) := f(ε_0 z_1, ..., ε_0 z_n) = Σ_α ε_0^{|α|} a_α z^α.

Then we have by Theorem 1.2 that

( Σ_α |a_α|^{2d/(d+1)} )^{(d+1)/(2d)} ≤ ε_0^{−d} ( Σ_α |ε_0^{|α|} a_α|^{2d/(d+1)} )^{(d+1)/(2d)} ≤ ε_0^{−d} C(d) sup_{z ∈ (Ω̃_{2N−1})^n} |P(z)|.
By (6.3), we get

sup_{z ∈ (Ω̃_{2N−1})^n} |P(z)| ≤ sup_{∥z∥_∞ ≤ 1} |P(z)| = sup_{∥z∥_∞ ≤ ε_0} |f(z)| ≤ sup_{z ∈ (Ω_{2N−1})^n} |f(z)|.

This completes the proof. □
(J.S.) Department of Computing & Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
Email address: [email protected]

(A.V.) Department of Mathematics, MSU, East Lansing, MI 48823, USA and Hausdorff Center of Mathematics
Email address: [email protected]

(H.Z.) Department of Mathematics, University of California, Irvine, CA 92617, USA
Email address: [email protected]
0NAzT4oBgHgl3EQfefyv/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
1dAyT4oBgHgl3EQfofiA/content/tmp_files/2301.00508v1.pdf.txt
ADDED
@@ -0,0 +1,792 @@
EMOGATOR: A NEW OPEN SOURCE VOCAL BURST DATASET WITH BASELINE MACHINE LEARNING CLASSIFICATION METHODOLOGIES

Fred W. Buhl
University of Florida

January 3, 2023

ABSTRACT

Vocal bursts – short, non-speech vocalizations that convey emotions, such as laughter, cries, sighs, moans, and groans – are an often-overlooked aspect of speech emotion recognition, but an important aspect of human vocal communication. One barrier to the study of these interesting vocalizations is a lack of large datasets. I am pleased to introduce the EmoGator dataset, which consists of 32,130 samples from 357 speakers, 16.97 hours of audio; each sample was classified into one of 30 distinct emotion categories by the speaker. Several different approaches to constructing classifiers that identify emotion categories will be discussed, and directions for future research will be suggested. The dataset is available for download from https://github.com/fredbuhl/EmoGator.

Keywords speech emotion recognition; vocal bursts; affect bursts; nonverbal vocalizations; affective computing; machine learning; dataset
1 Introduction

Emotions are central to human experience—they motivate & inform much of what we do. Recognizing emotions in others has been a longstanding area of interest. Perhaps the first scientific study of emotion recognition was the work of Duchenne [1] in 1862, who collected photographs of facial expressions elicited via electrically stimulating facial muscles.

The question of how many emotions there are remains open. Duchenne identified 13 primary emotions, and 60 combinations, from facial expression. A recent study by Cowen & Keltner found that humans were able to reliably identify 28 distinct emotions from facial expression [2]. Another recent study by the same team [3] indicated that humans self-report as many as 27 distinct emotions; these responses were collected from subjects reacting to short video clips. The emotion categories presented as gradients, which occasionally overlapped with other emotion categories; multiple emotions were elicited to varying degrees by a given stimulus.

Humans often express emotion vocally by varying speech prosody—the audio characteristics of speech. One study [4] found that 12 distinct emotions could be recognized from speech prosody, and this across two cultures; a previous study [5] had found cross-cultural emotion recognition with subjects across five nations, although an in-group advantage was noted.

Humans also express emotion via brief, non-speech sounds called vocal bursts, also referred to as "affect bursts", "emotional vocalizations", or "nonverbal vocalizations"—sounds like laughter, cries, sighs, moans, and groans—vocalizations that are not speech, and likely predate it, evolutionarily speaking. In [6] humans were found to be able to distinguish 14 emotional states from these vocal bursts. And a recent paper [7] by Cowen, Keltner, and others showed the ability to distinguish 24 emotional states from these brief vocalizations.

arXiv:2301.00508v1 [cs.SD] 2 Jan 2023

A PREPRINT - JANUARY 3, 2023

The ability to detect and express emotion via human vocalization appears early in human development [8, 9, 10, 11, 12]. It is important to language and social development; people who have difficulties in discerning emotions in others, due to brain injury or conditions like Autism Spectrum Disorder, experience difficulties communicating effectively. People with auditory affective agnosia [13] cannot discern emotional cues in speech, though they can still understand words, while people afflicted with dysprosody [14] speak in a monotone, without intonation or emotional affect; this can also appear in people with Parkinson’s disease [15]. Any impairment of these abilities has a severe effect on communication and socialization with others, underlining the importance of evoking and understanding emotional expression.
1.1 The Problem at Hand

Interactions with computers via speech recognition are now commonplace through “smart speakers” and their associated virtual assistants such as Siri, Alexa, and Google Assistant. Currently, none of these systems is capable of detecting emotion from the speech audio signal; the signal is converted to text (sometimes with comic results) via speech-to-text deep learning models, but any emotional content present in the speech’s prosody is ignored. For some applications, where how a word is said may be as important (or more important) than what word was said, this could be a severe limitation. And, given their non-speech nature, vocal bursts are completely ignored by these systems.

Computers capable of emotion recognition from speech have numerous applications; more life-like responses from non-player characters in video games, for example. In early childhood education, awareness of the young user’s emotional state would be helpful to gauge interest, frustration, or boredom; such systems could also be used to assess and improve the child’s emotional intelligence (or "EQ") [16]. The ability to detect emotion could reveal signs of loneliness, agitation, or depression [17], a special concern for isolated people, such as aging-in-place seniors. Social robots—robots designed to interact closely with humans—benefit from emotion recognition [18]; such systems can even be used to gauge the robot’s appeal to its human users [19]. The argument has been made that we will never claim human-level performance in speech recognition until we can achieve human-level speech emotion recognition, since humans are capable of both [20]. (It should be noted that this area is just one aspect of the larger field of Affective Computing pioneered by Rosalind Picard [21], which involves not only emotion recognition, but also emotional expression and emotionally-aware decision making.)

Despite the limitations of current commercial products, Speech Emotion Recognition (SER) is an area of longstanding interest in computer science [22]. In 1996, Cowie et al. [23] developed a technique for automatically detecting landmarks in a speech signal and collecting summary statistics, which were then used to quantify speech characteristics for four emotion categories. Various approaches have been used in speech emotion recognition over the years [24]—Mel-Frequency Cepstrum Coefficients (MFCC), Gaussian Mixture Models (GMM), Support Vector Machines (SVM), Hidden Markov Models (HMM), and neural network techniques such as LSTM [25]; more recently, deep learning neural networks have been used.

The research described here examines the largely-neglected area of vocal bursts, enabled by a newly-collected dataset. A number of machine learning techniques will be explored, with varying levels of performance, along with suggested directions for future research.

The primary inspiration for this work was [7]; the vocal burst dataset, which the authors graciously provide to other researchers, was the largest vocal burst dataset available when released. That dataset consisted of 2,032 vocal burst samples with 30 emotion categories; as mentioned, humans were able to reliably distinguish 24 categories. The fundamental question at the basis of this current work: if humans can distinguish 24 emotion categories from vocal bursts, can machines do so as well?

While the Cowen et al. dataset was the largest available at the time, it was still relatively small, and the categories were not evenly represented; most machine learning approaches benefit greatly from larger numbers of samples and balanced categories. This author determined that a larger dataset would need to be collected, and several different approaches evaluated, to find the best-performing emotion classifier.
2 The dataset, and a spectrum of deep learning and other methodologies for classification

2.1 The Dataset

The EmoGator dataset consists of 32,130 vocal bursts produced by 357 speakers, providing 16.9654 hours of audio; the average sample length is 1.901 seconds. Each speaker recorded three samples for each of 30 emotion categories, providing 90 samples per speaker—this provides an equal number of samples for each category and for each speaker, assuring equal representation in the dataset. The emotion categories were the same 30 categories used in [7]: Adoration, Amusement, Anger, Awe, Confusion, Contempt, Contentment, Desire, Disappointment, Disgust, Distress, Ecstasy, Elation, Embarrassment, Fear, Guilt, Interest, Neutral, Pain, Pride, Realization, Relief, Romantic Love, Sadness, Serenity, Shame, Surprise (Negative), Surprise (Positive), Sympathy, and Triumph. The speakers were provided text prompts with scenarios to help elicit the emotional response; the prompts used were a modified and expanded version of those used by [7], and are listed in the online supplemental materials¹.

Data was collected from unpaid volunteers and also from crowd-sourced workers via Mechanical Turk; a website was created where speakers could record and play back their samples using their own computer or mobile device.

The audio files were originally recorded at 44100 or 48000 Hz, depending on the participant’s hardware, and stored as mp3 files. Each individual recording file is named with a six-digit non-sequential user ID, a two-digit emotion ID (1-30), and a single-digit recording number (1, 2, 3). Since the files are labeled by user ID, researchers can split any train, test, or validation set by speaker, ensuring a given speaker’s submission appears in only one of the sets. (Efforts were taken to avoid a speaker providing more than one contribution, though this cannot be 100% guaranteed.) All participants provided informed consent, and all aspects of the study procedures and design were approved by the University of Florida’s Institutional Review Board (IRB).
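A speaker-disjoint split follows directly from this naming scheme. The sketch below is hypothetical and not from the paper: the underscore separator and `.mp3` suffix are assumptions, since the text specifies only the three fields (six-digit user ID, two-digit emotion ID, one-digit recording number).

```python
import re

# Assumed layout: <user id>_<emotion id>_<recording>.mp3, e.g. "123456_07_2.mp3".
SAMPLE_NAME = re.compile(r"^(\d{6})_(\d{2})_([123])\.mp3$")

def parse_sample_name(filename):
    """Split a recording filename into its three labeled fields."""
    m = SAMPLE_NAME.match(filename)
    if m is None:
        raise ValueError(f"unexpected filename: {filename}")
    user_id, emotion_id, recording = m.groups()
    return {
        "user_id": user_id,            # keep as string: IDs are non-sequential
        "emotion_id": int(emotion_id), # 1-30, per the category list above
        "recording": int(recording),   # 1, 2, or 3
    }
```

Grouping files by `user_id` before splitting guarantees that no speaker appears in more than one of the train/validation/test sets.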
Quality assurance was a major part of the data collection process; some entire submissions were silent recordings, or only contained random background noise. Some contributors apparently misunderstood the assignment, recording themselves reading the names of the categories, or phrases related to the categories. Many speakers provided a large number of high-quality samples but also submitted problematic ones, usually due to audio issues such as background noises (for example, phone chimes or background traffic sounds); another issue was excessive breath noise picked up by the microphone. In these instances, speakers were asked to re-record the problematic samples in order to maintain the same number of samples per speaker.

In addition, some speakers did not seem to be able to produce evocative speech from the prompts; their responses didn’t convey distinct emotions. This last group was omitted from the dataset. As a result of all these factors, this dataset will almost certainly have a bias toward the emotional expressions of North American English-speaking people, as the author, and sole evaluator, shares that personal history.

The dataset will be publicly available at the following URL: https://github.com/fredbuhl/EmoGator.

Several different steps were evaluated to preprocess the data. Normalizing the data so that each audio sample lies within the [-1, 1] range was universally used (for training, validation, and testing). Denoising audio files and trimming silence from the beginning and end of audio files were evaluated as well. Augmenting the data by creating pitch- and time-shifted variants of each sample was also explored.
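The two simplest of these steps can be sketched as follows. This is a hypothetical illustration assuming NumPy, not the paper's implementation; in particular, the silence threshold value is an illustrative assumption.

```python
import numpy as np

def peak_normalize(x):
    """Scale a waveform so its samples lie within [-1, 1]."""
    peak = np.max(np.abs(x))
    return x / peak if peak > 0 else x

def trim_silence(x, thresh=0.01):
    """Drop leading and trailing samples whose amplitude is below thresh."""
    loud = np.nonzero(np.abs(x) >= thresh)[0]
    if loud.size == 0:
        return x[:0]                  # whole clip is below threshold
    return x[loud[0]:loud[-1] + 1]    # keep everything between first/last loud sample

# Example: normalize, then trim the silent padding at both ends.
clip = np.array([0.0, 0.0, 0.5, -2.0, 0.25, 0.0, 0.0])
trimmed = trim_silence(peak_normalize(clip))
```

An energy-based trimmer over short frames (as audio libraries typically provide) would be more robust than this per-sample threshold, but the shape of the operation is the same.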
While this dataset was being collected, a company named Hume AI collected its own vocal burst dataset, a subset of which was made available for The ICML 2022 Expressive Vocalizations Workshop and Competition [26] as the Hume-VB dataset. This dataset consists of 59,201 vocalizations from 1,702 speakers, with 10 emotion categories (Amusement, Awe, Awkwardness, Distress, Excitement, Fear, Horror, Sadness, Surprise, and Triumph). Each sample has been rated by reviewers, with [0:100] intensity scores for every emotion category provided for each sample. The Hume-VB dataset was also used for the ACII 2022 Affective Vocal Bursts Workshop and Competition [27].
There are several differences between the EmoGator and Hume-VB datasets:
1. EmoGator has 30 distinct emotion categories, with each sample belonging to a single category determined by the speaker’s intent. Hume-VB has 0-100 ratings for all 10 of its categories provided by reviewers for each sample–the listener’s interpretation, which may in some cases be very different from the speaker’s intent.
2. EmoGator contributors were provided text prompts describing situations that would elicit a given category of vocal burst. Hume-VB contributors were provided ‘seed’ vocal burst audio samples to imitate–which could reduce the range of expression for a given category.
3. EmoGator only permitted one 90-sample submission per speaker; Hume-VB allowed multiple submissions per speaker.
4. EmoGator has balanced categories; each emotion category has exactly 1,071 samples. In Hume-VB, this varies; for example, “there are fewer samples that differentially convey Triumph” [26, p. 2].
5. While Hume-VB has nearly twice as many samples as EmoGator, the dataset is only provided for use in the two sponsored competitions and requires signing an End User License Agreement (EULA)2; EmoGator is freely available under an open-source license.
At the time of publication, EmoGator appears to be the largest vocal burst dataset publicly available.
1 https://supp.apa.org/psycarticles/supplemental/amp0000399/amp0000399_Supplemental-Materials.docx
2 https://www.competitions.hume.ai/exvo2022
A PREPRINT - JANUARY 3, 2023
2.2 Classification Methodologies
A number of different techniques from speech emotion recognition, sound classification, and related areas have been applied to audio classification problems of this sort.
2.3 Spectrogram approaches
Some approaches to audio classification involve creating a time-frequency spectrogram (or spectrogram-like) representation of the audio signals, which can be created in a number of ways. Typically, the Short-Time Fourier Transform (STFT) [28] is used, which provides the amplitude of different frequencies over time; a variant, the Mel spectrogram, maps the frequencies to the Mel scale [29], which closely matches human perception of differences in pitch. MFCCs provide a spectrum-like “cepstrum” [30], which, while using Mel frequencies, provides the log of the amplitude in decibels over the phase shift, instead of the time domain used for spectrograms. The resulting spectrograms or cepstrograms are used as features for other machine learning approaches.
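As a rough illustration of the STFT idea (the frame length, hop size, and `stft_magnitude` name are illustrative assumptions, not anything from the paper):

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Magnitude STFT: slide a Hann window along the signal and FFT each frame."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # rows are frequency bins, columns are time frames
    return np.abs(np.fft.rfft(frames, axis=1)).T

sr = 8000
t = np.arange(sr) / sr                      # one second of audio
spec = stft_magnitude(np.sin(2 * np.pi * 440 * t))
print(spec.shape)                           # (frame_len // 2 + 1, n_frames)
```

For a 440 Hz tone, the energy concentrates in the frequency bin nearest 440 Hz (bin width is sr / frame_len = 31.25 Hz here); a Mel spectrogram would additionally remap these linear bins onto the Mel scale.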
2.4 1D CNN training on raw waveforms
In [31], Dai et al. take a direct approach to sound classification: networks consisting of layers of one-dimensional convolutional neural networks (1D CNNs) [32] that work with the raw input waveforms, without using spectrograms or some other representation as an intermediate feature-detection step. [31] worked on the UrbanSound8K dataset [33], which, with its 10 categories and 8,732 samples, is a bit smaller than the EmoGator dataset. Testing various architectures, they reported up to 71.68% accuracy with an 18-layer model, which is competitive with CNNs using spectrograms of the same dataset. For the EmoGator dataset, we developed an 18-layer network as in [31], and added dropout layers after each 1D convolution to help prevent overfitting.
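The core operation of such networks is a one-dimensional convolution sliding a learned kernel over the raw waveform. A minimal sketch (kernel values and names are illustrative only, not the paper's trained filters):

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid 1-D convolution (cross-correlation), the core op of a 1D CNN layer."""
    k = len(kernel)
    out_len = (len(signal) - k) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + k], kernel)
                     for i in range(out_len)])

wave = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
diff = np.array([1.0, -1.0])   # a simple "falling edge" filter
print(conv1d(wave, diff))      # -> [-1. -1. -1. -1.]
```

A deep 1D CNN stacks many such layers (with learned kernels, nonlinearities, pooling, and here dropout), so the network learns its own feature detectors directly from samples rather than from a spectrogram.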
2.5 Random forests
Random forest classifiers [34] were also explored. A random forest is constructed by generating multiple random decision trees, each built from a random subset of the dataset using a random subset of each sample’s features. Once constructed, each tree in the forest casts a single vote for a class, and the class with the most votes is chosen as the winner. This approach can be used on raw data or with spectrogram-like representations.
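The voting step at the end of a random forest reduces to a plurality count over the trees' predictions; a stdlib-only sketch (the helper name is an assumption, not scikit-learn's API):

```python
from collections import Counter

def forest_predict(tree_votes):
    """Final step of a random forest: each tree votes, the plurality class wins."""
    return Counter(tree_votes).most_common(1)[0][0]

votes = ["Amusement", "Elation", "Amusement", "Triumph", "Amusement"]
print(forest_predict(votes))  # -> Amusement
```

In practice a library implementation (such as scikit-learn's) also handles the random sampling of data and features when building each tree.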
2.6 Large pre-trained speech models
Several teams in the 2022 ICML Expressive Vocalizations Workshop and Competition made use of large pre-trained speech models [35], [36], [37], [38], [39], [40]. Two models were used frequently: WavLM [41] and HuBERT [42]. Both are self-supervised speech representation models built on transformer architectures [43]; transformers have been applied successfully to a large number of domains. They are typically very large models, trained on large datasets for significant amounts of time. Having access to these pre-trained models can produce better results than can be achieved by training other (usually smaller) datasets in isolation.

WavLM is a large-scale self-supervised pre-trained speech model; the “Large” version of WavLM was trained on 94k hours of speech and has 316.62M parameters. HuBERT is a similar model; the “Large” version has 317M parameters and was trained on 60k hours of audio on 128 Graphics Processing Units (GPUs). Both WavLM and HuBERT are built upon wav2vec 2.0 [44], a “contrastive learning” self-supervised speech model, which itself is trained on 64 GPUs; the output of wav2vec is used as the input to HuBERT or WavLM, providing them higher-level features to build and train upon.
WavLM experiments were run by first passing the EmoGator training, validation, and test data through a pre-trained WavLM model and storing the last hidden layer as a new representation for each sample, using a 70% / 15% / 15% train/validation/test split. The hidden layers from the training data were then used as input to train a single fully-connected layer, using the validation data to find the appropriate stopping point; once the ideal models were determined, they were run on the test data. The HuBERT model was used in an identical fashion, using the last hidden layer of the HuBERT model instead of WavLM as the input to the fully-connected layer.
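The 70% / 15% / 15% split can be sketched as a generic shuffle-and-slice helper (the paper does not specify its splitting code; the function name and seed are assumptions):

```python
import random

def split_70_15_15(samples, seed=0):
    """Shuffle and divide samples into 70% train, 15% validation, 15% test."""
    rng = random.Random(seed)
    items = list(samples)
    rng.shuffle(items)
    n = len(items)
    a, b = int(0.70 * n), int(0.85 * n)
    return items[:a], items[a:b], items[b:]

train, val, test = split_70_15_15(range(1000))
print(len(train), len(val), len(test))  # 700 150 150
```

The validation slice is what selects the stopping point for the fully-connected layer; the test slice is touched only once the model is fixed.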
Incorporating WavLM and HuBERT in this work was greatly aided by the HuggingFace transformer libraries [45], which, while initially covering natural language processing, have now expanded into many other areas. The benefit of being able to incorporate a large pre-trained language model with a few lines of code cannot be overstated.
2.7 Ensemble Methods
Ensemble methods attempt to improve performance by combining the outputs of multiple models, with suitable training and weighting; the aggregate often outperforms the individual models. Two approaches were used for the EmoGator data: Ensemble A took the n-length outputs (where n is the number of emotion categories) produced by the WavLM and HuBERT single-layer models and averaged them together, using the resulting average to pick the most likely emotion category. Ensemble B concatenated the last hidden layers from WavLM and HuBERT, and then trained a single fully-connected layer on those inputs.
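The two combination rules can be sketched with toy arrays (all names and values below are hypothetical; the real vectors come from the trained heads and the models' hidden states):

```python
import numpy as np

# Hypothetical 3-category score vectors from the two classifier heads.
wavlm_scores = np.array([0.1, 0.7, 0.2])
hubert_scores = np.array([0.3, 0.4, 0.3])

# Ensemble A: average the per-category outputs, then take the argmax.
avg = (wavlm_scores + hubert_scores) / 2
print(int(np.argmax(avg)))          # category index 1 wins

# Ensemble B instead concatenates the models' hidden features and
# trains a fresh fully-connected layer on the combined vector.
wavlm_hidden = np.zeros(4)
hubert_hidden = np.ones(4)
combined = np.concatenate([wavlm_hidden, hubert_hidden])
print(combined.shape)               # (8,)
```

Averaging fuses the models late (at the decision level), while concatenation fuses them early (at the feature level), which is why Ensemble B requires training a new layer.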
2.8 Platform & Hardware Requirements
Most work on this project was performed on the University of Florida’s HiPerGator-AI cluster, which uses 80GB A100 GPUs; one A100 should be sufficient to run all the models included, but the code may not run directly on systems with lower-memory GPUs unless parameters such as batch size are adjusted.
3 Results
3.1 1D CNN training on raw waveforms
For one-dimensional convolutional neural networks, the best results against the full dataset were obtained with a 70% / 15% / 15% train/validation/test split, using an 18-layer 1D CNN based on [31], but with dropout layers after each convolution. A relatively low dropout rate of 0.07 was optimal. All experiments were run with a batch size of 128 and an Adam optimizer with a learning rate of 0.001. Several statistics were calculated; for the full 30-category dataset, the average F1 score was 0.270. F1 scores and other accuracy metrics, with breakdowns by category, are shown in Table 1; a confusion matrix is provided in Figure 1 based on the run with the highest F1 score.
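The F1 scores reported throughout are the harmonic mean of precision and recall; as a check, the Amusement row of Table 1 can be reproduced from its precision and recall:

```python
def f1(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Amusement row of Table 1: precision 0.561, recall 0.710
print(round(f1(0.561, 0.710), 3))  # -> 0.627, matching the table
```

The harmonic mean punishes imbalance: a category with high recall but low precision (or vice versa) still scores poorly.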
The experiments above were all run with normalized audio data, but without denoising the audio signal or trimming silence from the beginning and end; earlier experiments with a 70%/30% train/test split revealed that denoising or trimming the audio signal reduced performance.
Data augmentation was also explored; two-to-three-times-larger “stretched” versions of the 70% / 15% / 15% training set were produced by creating new samples through independent pitch and tempo shifts of the audio samples; however, the stretched training sets produced lower performance than the original training set, despite adjustments to the amount of pitch and tempo scaling.
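A crude way to see what a tempo shift does to a waveform is resampling by interpolation (a deliberately naive sketch; unlike the independent pitch and tempo shifts described above, this simple resample changes both at once, as the comment notes):

```python
import numpy as np

def stretch(signal, factor):
    """Naive time stretch by linear-interpolation resampling.
    Note: this also shifts pitch; proper augmentation shifts pitch
    and tempo independently (e.g. via phase-vocoder methods)."""
    n_out = int(len(signal) * factor)
    x_old = np.linspace(0.0, 1.0, len(signal))
    x_new = np.linspace(0.0, 1.0, n_out)
    return np.interp(x_new, x_old, signal)

wave = np.sin(np.linspace(0, 2 * np.pi, 100))
print(len(stretch(wave, 1.5)))  # 150 samples: 1.5x longer
```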
In reviewing these results, it is clear that some categories are much harder (or easier) to identify; for example, the F1 score (0.056) for Embarrassment, the worst-performing category, is much lower than that of the highest-performing category, Amusement (0.627). The confusion matrix illustrates the problem well; it shows that certain types of vocal bursts are simply difficult to place in the correct category. Per the confusion matrix, Embarrassment (with only 7 samples correctly identified) was more likely to be interpreted as Shame (16) or Guilt (10); all closely related concepts that can produce similar vocalizations. This is an inherently difficult problem, which helps explain why humans could only reliably distinguish 24 emotion categories in [7].
By selectively removing emotion categories that performed poorly, overall performance would be expected to improve. Using the F1 score as a metric, the lowest-scoring categories were removed, creating 24-, 16-, and 10-category subsets of the dataset. Interestingly, three of the six bottom-scoring categories removed to make the 24-category subset were also not identifiable by humans in [7]; two other categories unidentifiable by humans were removed in the 16-category subset–showing some commonality between the two datasets, and also illustrating the difficulties humans and algorithms have with certain emotion categories, even across studies.
The same 1D CNN model architecture, hyperparameters, and validation approaches were used. Results are in Table 2; we do see improvement as the more ambiguous categories are eliminated.
By creating binary 1D CNN classifiers, with one classifier for each possible pair of emotion categories, we can illustrate which pairs are the easiest to distinguish. Using the same model architecture and 70%/15%/15% split, and using the F1 score as a dissimilarity metric (on a [0,1] scale, where 1 is least similar), a similarity matrix was created from the 435 pairwise combinations of the 30 categories, and a dendrogram displaying relationships between the categories was generated from that matrix (Figure 2). The dendrogram illustrates the most easily confused or distinguished categories. For example, it shows how easily the Amusement category is distinguished from all other categories, and shows Realization and Contempt as the most similar–and therefore most confused–categories, despite being very different emotions.
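The figure of 435 binary classifiers follows from choosing unordered pairs out of 30 categories; a quick stdlib check (the placeholder category names are assumptions):

```python
from math import comb
from itertools import combinations

categories = [f"category_{i}" for i in range(30)]    # placeholder names
pairs = list(combinations(categories, 2))            # one binary classifier per pair
print(len(pairs), comb(30, 2))  # 435 435
```

Each pair's held-out F1 score then fills one cell of the 30x30 matrix fed to the hierarchical clustering that produces the dendrogram.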
Table 1: Precision, Recall, and F1 scores from a best run of the 18-layer 1D CNN, with dropout layers.

                     Precision  Recall  F1 score  Support
Adoration                0.407   0.488     0.444      162
Amusement                0.561   0.710     0.627      162
Anger                    0.405   0.327     0.362      162
Awe                      0.220   0.296     0.253      162
Confusion                0.354   0.574     0.438      162
Contempt                 0.236   0.296     0.263      162
Contentment              0.193   0.272     0.226      162
Desire                   0.253   0.309     0.278      162
Disappointment           0.144   0.093     0.113      162
Disgust                  0.376   0.580     0.456      162
Distress                 0.243   0.111     0.153      162
Ecstasy                  0.187   0.123     0.149      162
Elation                  0.190   0.074     0.107      162
Embarrassment            0.078   0.043     0.056      162
Fear                     0.341   0.179     0.235      162
Guilt                    0.175   0.105     0.131      162
Interest                 0.288   0.420     0.342      162
Neutral                  0.397   0.568     0.467      162
Pain                     0.276   0.438     0.339      162
Pride                    0.175   0.086     0.116      162
Realization              0.351   0.241     0.286      162
Relief                   0.294   0.432     0.350      162
Romantic Love            0.121   0.074     0.092      162
Sadness                  0.355   0.302     0.327      162
Serenity                 0.209   0.191     0.200      162
Shame                    0.197   0.154     0.173      162
Surprise (Negative)      0.296   0.364     0.327      162
Surprise (Positive)      0.248   0.198     0.220      162
Sympathy                 0.233   0.370     0.286      162
Triumph                  0.378   0.228     0.285      162
Accuracy                                   0.288     4860
Macro Average            0.273   0.288     0.270     4860
Weighted Average         0.273   0.288     0.270     4860
Table 2: 1D CNN runs with 24-, 16-, and 10-category subsets of the EmoGator dataset, compared to the 30-category full dataset.

1D CNN Dataset size     F1 score (avg.)
30-Count Full Dataset   0.267
24-Count Subset         0.344
16-Count Subset         0.459
10-Count Subset         0.597
Figure 1: The confusion matrix generated by the 18 layer 1D CNN with dropout layers.
3.2 Random Forests
As shown in [34], Random Forests have been used on a number of small, few-category datasets, which suggested the approach might be an apt choice for the EmoGator dataset. The classifier (as implemented in the scikit-learn library [46]) was trained on Mel-Frequency Cepstral Coefficients (MFCCs) of the audio data; runs were completed for the full 30-category dataset, along with 24-, 16-, and 10-category subsets. All results under-performed the 1D CNN results, however (see Table 3).
3.3 Large pre-trained speech models
Results were calculated using the last hidden layer of the WavLM and HuBERT models connected to a single fully-connected network layer. A variant of Ensemble B incorporated two fully-connected layers (labeled “2-layer FC”), which resulted in a moderate improvement. These results are presented, along with others, in Table 4.
Figure 2: The dendrogram generated from F1 scores (range [0,1]) between pairs of emotion categories.
Table 3: Random Forest runs with 24-, 16-, and 10-category subsets of the EmoGator dataset, compared to the 30-category full dataset, using MFCCs.

Random Forest Dataset size   F1 score (avg.)
30-Count Full Dataset        0.146
24-Count Subset              0.180
16-Count Subset              0.256
10-Count Subset              0.345
3.4 Ensemble Methods
Results were calculated using the averaged output from the trained fully-connected layers appended to the WavLM and HuBERT model runs (Ensemble A), and using the concatenated last-hidden-layer outputs from both models (Ensemble B), which were then used to train a single fully-connected layer. The WavLM and HuBERT single fully-connected layers that had the highest average F1 scores on the validation dataset were used, to keep the test data from tainting the ensemble model.
Results for the Ensemble methods are presented in Table 4, along with summary data from all the EmoGator experiments.
4 Discussion
Returning to our research question–whether, like humans, machines can reliably identify 24 emotion categories–it appears that the results achieved for the 24-category runs did not approach assumed human proficiency, with a top F1 score of only 0.344 via the 1D CNN method on a 24-category subset. Results for the 24-, 16-, and 10-category subsets were better than the full 30-category runs, with the 10-category runs performing the best, again using the 1D CNN approach, scoring 0.597. (To put these results into perspective, a random guess for a 24-category subset would be right only 4.2% of the time; a 10-category random guess would be right only 10% of the time–so these results are much better than pure chance.)
One potential use of this dataset would be to measure how accurate human performance is for vocal bursts–whether the category the speaker intended to convey is correctly identified by listeners. Other studies have used gradient rating scales for each category provided by the listener, without necessarily linking back to the ground truth of the
Table 4: All results from the various approaches and dataset subsets used.

Approach                  # Categories   F1 score
1D CNN                    30             0.267
1D CNN                    24             0.344
1D CNN                    16             0.459
1D CNN                    10             0.597
Random Forest             30             0.146
Random Forest             24             0.180
Random Forest             16             0.256
Random Forest             10             0.345
WavLM                     30             0.255
WavLM                     10             0.563
HuBERT                    10             0.531
Ensemble A                10             0.571
Ensemble B                10             0.591
Ensemble B (2-layer FC)   10             0.593
speaker intent. Another question is whether collecting vocal bursts inspired by text-based prompts is better or worse than trying to capture them “in the wild” from recorded conversations, or elicited by other sorts of prompts.
Collecting more data would no doubt improve these results; this vocal burst dataset, while (currently) the largest publicly available, is still small by machine learning standards. Evaluating subsets of the dataset makes the situation even worse; when looking at, say, 10-category subsets, only 1/3 of the dataset is used.
Using more complex ensemble methods seems a promising way forward; while the ensemble results here did not exceed the 1D CNN results, it’s possible that incorporating more individual models could increase accuracy beyond what we’ve been able to achieve.
One topic that was not explored here is generating vocal bursts; the author will next explore methods such as Generative Adversarial Networks (GANs) and Stable Diffusion models to generate vocal bursts; ideally, these could be tailored to an individual speaker by providing a few audio samples (the ICML competition had this as one of its challenges).
More data will help, but it may be that audio data alone will be insufficient to properly classify vocal bursts. Datasets and models incorporating video as well as audio data–not only to look at facial expressions, but also any visual cues that might evoke a vocal burst–could improve accuracy. The words spoken by the utterer, and others around them, before or after a vocal burst may also aid in identification. (It may be, however, that there are inherent limits far short of certainty for vocal burst classification, regardless of any additional information that can be gathered–often cries of sadness and amusement sound the same, and people sometimes say they are not sure “whether they should laugh or cry”.)
Another area to explore is the demographics of the speakers; their age, gender, place of origin, and cultural background could all come into play in classifying bursts. These demographic concerns also extend to the person evaluating the quality of the samples; ideally, the demographic aspects of the reviewer should match those of the submitter for best quality.
Beyond the demographic aspects, each individual’s unique character and personality certainly comes into play when they generate vocal bursts–so prior experience with the utterer could be key in improving accuracy, especially if the model’s weights can be fine-tuned based on these experiences.
It is hoped that the EmoGator dataset will introduce researchers to the fascinating area of vocal bursts, and that other researchers will incorporate this dataset into still-larger collections in the future, “paying it forward” by making those datasets publicly available.
Acknowledgement

My thanks to Anand Rangarajan for our helpful discussions about the project.
References

[1] G.B. Duchenne, G.B.D. de Boulogne, R.A. Cuthbertson, A.S.R. Manstead, and K. Oatley. The Mechanism of Human Facial Expression. Cambridge University Press, 1990.
[2] Alan S. Cowen and Dacher Keltner. What the face displays: Mapping 28 emotions conveyed by naturalistic expression. American Psychologist, 2019.
[3] Alan S. Cowen and Dacher Keltner. Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proceedings of the National Academy of Sciences, 114(38):E7900–E7909, September 2017.
[4] Alan S. Cowen, Petri Laukka, Hillary Anger Elfenbein, Runjing Liu, and Dacher Keltner. The primacy of categories in the recognition of 12 emotions in speech prosody across two cultures. Nature Human Behaviour, 3(4):369–382, April 2019.
[5] Petri Laukka, Hillary Anger Elfenbein, Nutankumar S. Thingujam, Thomas Rockstuhl, Frederick K. Iraki, Wanda Chui, and Jean Althoff. The expression and recognition of emotions in the voice across five nations: A lens model analysis based on acoustic features. Journal of Personality and Social Psychology, 111(5):686–705, November 2016.
[6] Emiliana R. Simon-Thomas, Dacher J. Keltner, Disa Sauter, Lara Sinicropi-Yao, and Anna Abramson. The voice conveys specific emotions: Evidence from vocal burst displays. Emotion, 9(6):838–846, 2009.
[7] Alan S. Cowen, Hillary Anger Elfenbein, Petri Laukka, and Dacher Keltner. Mapping 24 emotions conveyed by brief human vocalization. American Psychologist, 74(6):698, 2019.
[8] Elena Lyakso and Olga Frolova. Emotion State Manifestation in Voice Features: Chimpanzees, Human Infants, Children, Adults. In Andrey Ronzhin, Rodmonga Potapova, and Nikos Fakotakis, editors, Speech and Computer, Lecture Notes in Computer Science, pages 201–208, Cham, 2015. Springer International Publishing.
[9] Mariana Vaillant-Molina, Lorraine E. Bahrick, and Ross Flom. Young Infants Match Facial and Vocal Emotional Expressions of Other Infants. Infancy, 18(Suppl 1), August 2013.
[10] Amaya Palama, Jennifer Malsert, and Edouard Gentaz. Are 6-month-old human infants able to transfer emotional information (happy or angry) from voices to faces? An eye-tracking study. PLOS ONE, 13(4):e0194579, April 2018.
[11] Lois Bloom and Richard Beckwith. Talking with Feeling: Integrating Affective and Linguistic Expression in Early Language Development. Cognition and Emotion, 3(4):313–342, October 1989.
[12] Yang Wu, Paul Muentener, and Laura E. Schulz. One- to four-year-olds connect diverse positive emotional vocalizations to their probable causes. Proceedings of the National Academy of Sciences, 114(45):11896–11901, November 2017.
[13] K. M. Heilman, R. Scholes, and R. T. Watson. Auditory affective agnosia. Disturbed comprehension of affective speech. Journal of Neurology, Neurosurgery & Psychiatry, 38(1):69–72, January 1975.
[14] G. H. Monrad-Krohn. Dysprosody or altered "melody of language." Brain: A Journal of Neurology, 70:405–415, 1947.
[15] Sabine Skodda, Heiko Rinsche, and Uwe Schlegel. Progression of dysprosody in Parkinson’s disease over time—A longitudinal study. Movement Disorders, 24(5):716–722, 2009.
[16] Tsai-Hsuan Tsai, Hsien-Tsung Chang, Shin-Da Liao, Hui-Fang Chiu, Ko-Chun Hung, Chun-Yi Kuo, and Chih-Wei Yang. Employing a Voice-Based Emotion-Recognition Function in a Social Chatbot to Foster Social and Emotional Learning Among Preschoolers. In Constantine Stephanidis, editor, HCI International 2019 – Late Breaking Papers, Lecture Notes in Computer Science, pages 341–356, Cham, 2019. Springer International Publishing.
[17] Young-Shin Lee and Won-Hyung Park. Diagnosis of Depressive Disorder Model on Facial Expression Based on Fast R-CNN. Diagnostics, 12(2):317, January 2022.
[18] Cynthia Breazeal. Emotion and sociable humanoid robots. International Journal of Human-Computer Studies, 59(1):119–155, July 2003.
[19] Jekaterina Novikova, Christian Dondrup, Ioannis Papaioannou, and Oliver Lemon. Sympathy Begins with a Smile, Intelligence Begins with a Word: Use of Multimodal Features in Spoken Human-Robot Interaction. arXiv:1706.02757 [cs], June 2017.
[20] D. O’Shaughnessy. Speech Communications: Human and Machine. Wiley, 2000.
[21] Rosalind W. Picard. Affective Computing. The MIT Press, 2000.
[22] Shashidhar G. Koolagudi and K. Sreenivasa Rao. Emotion recognition from speech: a review. International Journal of Speech Technology, 15(2):99–117, June 2012.
[23] R. Cowie and E. Douglas-Cowie. Automatic statistical analysis of the signal and prosodic signs of emotion in speech. In Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP ’96), volume 3, pages 1989–1992, October 1996.
[24] Akanksha Gadikar, Omkar Gokhale, Subodh Wagh, Anjali Wankhede, and P. Joshi. A Survey on Speech Emotion Recognition by Using Neural Networks. International Journal of Research and Analytical Reviews, 7(3), September 2020.
[25] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997.
[26] Alice Baird, Panagiotis Tzirakis, Gauthier Gidel, Marco Jiralerspong, Eilif B. Muller, Kory Mathewson, Björn Schuller, Erik Cambria, Dacher Keltner, and Alan Cowen. The ICML 2022 Expressive Vocalizations Workshop and Competition: Recognizing, Generating, and Personalizing Vocal Bursts, July 2022. arXiv:2205.01780 [cs, eess].
[27] Alice Baird, Panagiotis Tzirakis, Jeffrey A. Brooks, Christopher B. Gregory, Björn Schuller, Anton Batliner, Dacher Keltner, and Alan Cowen. The ACII 2022 Affective Vocal Bursts Workshop & Competition: Understanding a critically understudied modality of emotional expression, July 2022. arXiv:2207.03572 [cs, eess].
[28] E. Jacobsen and R. Lyons. The sliding DFT. IEEE Signal Processing Magazine, 20(2):74–80, March 2003.
[29] S. S. Stevens, J. Volkmann, and E. B. Newman. A Scale for the Measurement of the Psychological Magnitude Pitch. The Journal of the Acoustical Society of America, 8(3):185–190, January 1937.
[30] B. Bogert. The quefrency analysis of time series for echoes: cepstrum, pseudo-autocovariance, cross-cepstrum and saphe cracking. In Proceedings of the Symposium on Time Series Analysis, pages 209–243, 1963.
[31] Wei Dai, Chia Dai, Shuhui Qu, Juncheng Li, and Samarjit Das. Very Deep Convolutional Neural Networks for Raw Waveforms. arXiv:1610.00087 [cs], October 2016.
[32] S. Kiranyaz, T. Ince, R. Hamila, and M. Gabbouj. Convolutional Neural Networks for patient-specific ECG classification. In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 2608–2611, August 2015.
[33] Justin Salamon, Christopher Jacoby, and Juan Pablo Bello. A Dataset and Taxonomy for Urban Sound Research. In Proceedings of the 22nd ACM International Conference on Multimedia (MM ’14), pages 1041–1044, Orlando, Florida, USA, November 2014.
[34] Leo Breiman. Random Forests. Machine Learning, 45(1):5–32, October 2001.
[35] Detai Xin, Shinnosuke Takamichi, and Hiroshi Saruwatari. Exploring the Effectiveness of Self-supervised Learning and Classifier Chains in Emotion Recognition of Nonverbal Vocalizations, June 2022. arXiv:2206.10695 [cs, eess].
[36] Chin-Cheng Hsu. Synthesizing Personalized Non-speech Vocalization from Discrete Speech Representations, June 2022. arXiv:2206.12662 [cs, eess].
[37] Josh Belanich, Krishna Somandepalli, Brian Eoff, and Brendan Jou. Multitask vocal burst modeling with ResNets and pre-trained paralinguistic Conformers, June 2022. arXiv:2206.12494 [cs, eess].
|
764 |
+
[38] Roshan Sharma, Tyler Vuong, Mark Lindsey, Hira Dhamyal, Rita Singh, and Bhiksha Raj. Self-supervision and
|
765 |
+
Learnable STRFs for Age, Emotion, and Country Prediction, June 2022. arXiv:2206.12568 [cs, eess].
|
766 |
+
[39] Tilak Purohit, Imen Ben Mahmoud, Bogdan Vlasenko, and Mathew Magimai Doss. Comparing supervised and
|
767 |
+
self-supervised embedding for ExVo Multi-Task learning track, June 2022. arXiv:2206.11968 [cs, eess].
|
768 |
+
[40] Atijit Anuchitanukul and Lucia Specia. Burst2Vec: An Adversarial Multi-Task Approach for Predicting Emotion,
|
769 |
+
Age, and Origin from Vocal Bursts, June 2022. arXiv:2206.12469 [cs, eess].
|
770 |
+
[41] Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda,
|
771 |
+
Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael
|
772 |
+
Zeng, Xiangzhan Yu, and Furu Wei. WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech
|
773 |
+
Processing, June 2022. arXiv:2110.13900 [cs, eess].
|
774 |
+
11
|
775 |
+
|
776 |
+
A PREPRINT - JANUARY 3, 2023
|
777 |
+
[42] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman
|
778 |
+
Mohamed. HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units,
|
779 |
+
June 2021. arXiv:2106.07447 [cs, eess].
|
780 |
+
[43] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and
|
781 |
+
Illia Polosukhin. Attention is All you Need. 31st NIPS Conference Proceedings, 2017.
|
782 |
+
[44] Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A Framework for Self-
|
783 |
+
Supervised Learning of Speech Representations. arXiv:2006.11477 [cs, eess], June 2020. arXiv: 2006.11477.
|
784 |
+
[45] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac,
|
785 |
+
Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. HuggingFace’s Transformers: State-of-the-art Natural
|
786 |
+
Language Processing. arXiv:1910.03771 [cs], October 2019. arXiv: 1910.03771.
|
787 |
+
[46] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel,
|
788 |
+
Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David
|
789 |
+
Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine Learning in
|
790 |
+
Python. J. Mach. Learn. Res., 12:2825–2830, November 2011.
|
791 |
+
12
|
792 |
+
|
1dE1T4oBgHgl3EQflQQz/content/tmp_files/2301.03282v1.pdf.txt
ADDED
@@ -0,0 +1,1676 @@
Universal Information Extraction as Unified Semantic Matching

Jie Lou1*, Yaojie Lu2*, Dai Dai1†, Wei Jia1, Hongyu Lin2, Xianpei Han2,3†, Le Sun2,3, Hua Wu1
1Baidu Inc., Beijing, China
2Chinese Information Processing Laboratory
3State Key Laboratory of Computer Science
Institute of Software, Chinese Academy of Sciences, Beijing, China
{loujie, daidai, jiawei07, wu hua}@baidu.com
{luyaojie, hongyu, xianpei, sunle}@iscas.ac.cn
Abstract

The challenge of information extraction (IE) lies in the diversity of label schemas and the heterogeneity of structures. Traditional methods require task-specific model design and rely heavily on expensive supervision, making them difficult to generalize to new schemas. In this paper, we decouple IE into two basic abilities, structuring and conceptualizing, which are shared by different tasks and schemas. Based on this paradigm, we propose to universally model various IE tasks with the Unified Semantic Matching (USM) framework, which introduces three unified token linking operations to model the abilities of structuring and conceptualizing. In this way, USM can jointly encode schema and input text, uniformly extract substructures in parallel, and controllably decode target structures on demand. Empirical evaluation on 4 IE tasks shows that the proposed method achieves state-of-the-art performance under the supervised experiments and shows strong generalization ability in zero/few-shot transfer settings.
Introduction

Information extraction aims to extract various information structures from texts (Andersen et al. 1992; Grishman 2019). For example, given the sentence "Monet was born in Paris, the capital of France", an IE system needs to extract various task structures such as entities, relations, events, or sentiments in the sentence. It is challenging because the target structures have diversified label schemas (person, work for, positive sentiment, etc.) and heterogeneous structures (span, triplet, etc.).

Traditional IE models leverage task- and schema-specialized architectures, which are commonly specific to different target structures and label schemas. The expensive annotation leads to limited predefined categories and small data sizes in general domains for information extraction tasks. From another perspective, task-specific model design makes it challenging to migrate learned knowledge between different tasks and extraction frameworks. The above problems lead to the poor performance of IE models in low-resource settings or when facing a new label schema, which greatly restricts the application of IE in real scenarios.
* Equal contribution.
† Corresponding authors.
Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: The USM framework for UIE. USM takes label schema and text as input and directly outputs the target structure through the Structuring and Conceptualizing operations.
Very recently, Lu et al. (2022) proposed the concept of universal information extraction (UIE), which aims to resolve multiple IE tasks using one universal model. To this end, they proposed a sequence-to-sequence generation model, which takes flattened schema and text as input, and directly generates diversified target information structures. Unfortunately, all associations between information pieces and schemas are implicitly formulated due to the black-box nature of sequence-to-sequence models (Alvarez-Melis and Jaakkola 2017). Consequently, it is difficult to identify what kind of abilities and knowledge are learned to transfer across different tasks and schemas. Therefore, we have no way of diagnosing under what circumstances such transfer learning across tasks or schemas would fail. For the above reasons, it is necessary to explicitly model and learn transferable knowledge to obtain effective, robust, and explainable transferability.
We find that, as shown in Figure 1, even with diversified tasks and extraction targets, all IE tasks can be fundamentally decoupled into the following two critical operations: 1) Structuring, which proposes label-agnostic basic substructures of the target structure from the text. For example, proposing the utterance structure "Monet" for entity mention and "born in" for event mention, the associated pair structure ("Monet", "Paris") for relation mention, and ("born in", "Paris") for event argument mention. 2) Conceptualizing, which generalizes utterance and paired substructures to corresponding target semantic concepts. More importantly, these two operations can be explicitly reformulated using a semantic matching paradigm when given a target extraction schema. Specifically, structuring operations can be viewed as building specific kinds of semantic associations between utterances in the input text, while conceptualizing operations can be regarded as matching between target semantic labels and the given utterances or substructures. Consequently, if we universally transform information extraction into combinations of a series of structuring and conceptualizing, reformulate all these operations with the semantic matching between structures and schemas, and jointly learn all IE tasks under the same paradigm, we can easily conduct various kinds of IE tasks with one universal architecture and share knowledge across different tasks and schemas.

arXiv:2301.03282v1 [cs.CL] 9 Jan 2023

Figure 2: The overall framework of Unified Semantic Matching.
Unfortunately, directly conducting semantic matching between structures and schemas is impractical for universal information extraction. First, sentences have many substructures, resulting in a large number of potential matching candidates and a large scale of matching, which makes the computational efficiency of the model unacceptable. Second, the schema of IE is structural and hard to match with the plain text. In this paper, we propose directed token linking for universal IE. The main idea is to transform the structuring and conceptualizing into a series of directed token linking operations, which can be reverted to semantic matching between utterances and schema.
Based on the above observation, we propose USM, a unified semantic matching framework for universal information extraction (UIE), which decomposes structures and verbalizes label types for sharing structuring and conceptualizing abilities. Specifically, we design a set of directed token linking operations (token-token linking, label-token linking, and token-label linking) to decouple task-specific IE tasks into two extraction abilities. To learn the common extraction abilities, we pre-train USM by leveraging heterogeneous supervision from linguistic resources. Compared to previous works, USM is a new transferable, controllable, efficient end-to-end framework for UIE, which jointly encodes extraction schema and input text, uniformly extracts substructures, and controllably decodes target structures on demand.

We conduct experiments on four main IE tasks under the supervised, multi-task, and zero/few-shot transfer settings. The proposed USM framework achieves state-of-the-art results in all settings and solves massive tasks using a single multi-task model. Under the zero/few-shot transfer settings, USM shows a strong cross-type transfer ability due to the shared structuring and conceptualizing obtained by pre-training.
In summary, the main contributions of this paper are:
1. We propose an end-to-end framework for universal information extraction – USM, which can jointly model schema and text, uniformly extract substructures, and controllably generate the target structure on demand.
2. We design three unified token linking operations to decouple various IE tasks, sharing extraction capabilities across different target structures and semantic schemas and achieving "one model for solving all tasks" by multi-task learning.
3. We pre-train a universal foundation model with large-scale heterogeneous supervision, which can benefit future research on IE.
Unified Semantic Matching via Directed Token Linking
Information extraction is structuring the text's information and elevating it into specific semantic categories. As shown in Figure 2, USM takes the arbitrary extraction label schema l and the raw text t as input and directly outputs the structure according to the given schema. For example, given the text "Monet was born in Paris, the capital of France", USM needs to extract ("France", capital, "Paris") for the relation type capital and (person, "Monet")/(country, "France") for the entity types person and country. The main challenges here are: 1) how to unifiedly extract heterogeneous structures using the shared structuring ability; 2) how to uniformly represent different extraction tasks under diversified label schemas to share the common conceptualizing ability.
In this section, we describe how to end-to-end extract the information structures from the text using USM. Specifically, as shown in Figure 3, USM first verbalizes all label schemas (Levy et al. 2017; Li et al. 2020; Lu et al. 2022) and learns the schema-text joint embedding to build a shared label-text semantic space. Then we describe three basic token linking operations and how to structure and conceptualize information from text using these three operations. Finally, we introduce how to decode the final results using schema-constraint decoding.

Figure 3: Illustrations of Directed Token Linking. Token-Token Linking structures utterance and association pair substructures from the text, Label-Token Linking conceptualizes the utterance, and Token-Label Linking conceptualizes the association pair. In practice, we employ different label symbols "[L]" for utterance conceptualizing: "[LM]" for the label of single mention, such as entity types and event trigger types; "[LP]" for the predicate of association pair, such as relation types and event argument types.
Schema-Text Joint Embedding

To capture the interaction between label schema and text, USM first learns the joint contextualized embeddings of schema labels and text tokens. Concretely, USM first verbalizes the extraction schema s as a token sequence l = {l_1, l_2, ..., l_{|l|}} following the structural schema instructor (Lu et al. 2022), then concatenates the schema sequence l and the text tokens t = {t_1, t_2, ..., t_{|t|}} as input, and finally computes the joint label-text embeddings H = [h_1, h_2, ..., h_{|l|+|t|}] as follows:

H = Encoder(l_1, l_2, ..., l_{|l|}, t_1, t_2, ..., t_{|t|}, M)    (1)

where Encoder(·) is a transformer encoder, and M ∈ R^{(|l|+|t|)×(|l|+|t|)} is the mask matrix that determines whether a pair of tokens can be attended to each other.
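The schema verbalization and joint input construction can be sketched in plain Python. This is a minimal illustration of the "[L] label ... [T] text" layout in Figure 3; the function name `build_joint_input` is ours, and the tokenizer and transformer encoder themselves are assumed, so only the input token sequence and the shape of the mask M are built here:

```python
def build_joint_input(labels, text_tokens):
    """Verbalize a label schema and concatenate it with the text,
    mirroring the '[L] label ... [T] text' layout of Figure 3.

    Returns the joint token sequence and a (|l|+|t|) x (|l|+|t|)
    attention mask M (Eq. 1) in which every token may attend to
    every other token; a real model can restrict M as needed."""
    schema_seq = []
    for label in labels:
        schema_seq.append("[L]")
        schema_seq.extend(label.split())  # multi-word labels stay as tokens
    joint = schema_seq + ["[T]"] + text_tokens
    n = len(joint)
    mask = [[1] * n for _ in range(n)]
    return joint, mask

tokens, M = build_joint_input(
    ["person", "birth place"],
    "Monet was born in Paris".split())
# tokens: ['[L]', 'person', '[L]', 'birth', 'place', '[T]',
#          'Monet', 'was', 'born', 'in', 'Paris']
```

The joint sequence is then fed to the encoder, so every label token and text token lives in the same contextualized space.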
Token-Token Linking for Structuring

After obtaining the joint label-text embeddings H = [h^l_1, ..., h^l_{|l|}, h^t_1, ..., h^t_{|t|}], USM structures all valid substructures using Token-Token Linking (TTL) operations:
1. Utterance: a continuous token sequence in the input text, e.g., entity mention "Monet" or event trigger "born in". We extract a single utterance with inner span head-to-tail (H2T) linking, as shown in Figure 3. For example, to extract the spans "Monet" and "born in" as valid substructures, USM utilizes H2T to link "Monet" to itself and to link "born" to "in".
2. Association pair: a basic related pair unit extracted from the text, e.g., relation subject-object pair ("Monet", "Paris") or event trigger-argument pair ("born in", "Paris"). We extract span pairs with head-to-head (H2H) and tail-to-tail (T2T) linking operations. For example, to extract the subject-object pair "Monet" and "Paris" as a valid substructure, USM links "Monet" and "Paris" using H2H as well as links "Monet" and "Paris" using T2T.

For the above three token-to-token linking (H2T, H2H, T2T) operations, USM respectively calculates the token-to-token linking score s_TTL(t_i, t_j) over all valid token pair candidates ⟨t_i, t_j⟩. For each token pair ⟨t_i, t_j⟩, the linking score s_TTL(t_i, t_j) is calculated as:

s_TTL(t_i, t_j) = FFNN^l_TTL(h^t_i)^T R_{j−i} FFNN^r_TTL(h^t_j)    (2)

where FFNN^{l/r} are feed-forward layers with output size d, and R_{j−i} ∈ R^{d×d} is the rotary position embedding (Su et al. 2021, 2022) that can effectively inject relative position information into the valid structures mentioned above.
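The rotary-position scoring shared by Eqs. (2)–(4) can be illustrated in isolation. The sketch below omits the FFNN projections and uses our own names (`rotary`, `link_score`); it only demonstrates the key property that R_{j−i} gives: the score of a token pair depends on their relative offset j − i, not on their absolute positions:

```python
import math

def rotary(vec, pos, base=10000.0):
    """Apply the rotary position embedding R_pos: rotate each 2-d pair
    of `vec` by an angle proportional to the position index `pos`."""
    out = []
    for k in range(0, len(vec), 2):
        theta = pos * base ** (-k / len(vec))
        x, y = vec[k], vec[k + 1]
        out.append(x * math.cos(theta) - y * math.sin(theta))
        out.append(x * math.sin(theta) + y * math.cos(theta))
    return out

def link_score(h_i, h_j, i, j):
    """(R_i h_i) . (R_j h_j) equals h_i^T R_{j-i} h_j, so the score is a
    function of the relative offset j - i only, as in Eq. (2)."""
    return sum(a * b for a, b in zip(rotary(h_i, i), rotary(h_j, j)))

u = [0.3, -1.2, 0.8, 0.5]
v = [1.1, 0.4, -0.7, 0.2]
# Shifting both positions by the same amount leaves the score unchanged.
assert abs(link_score(u, v, 2, 5) - link_score(u, v, 12, 15)) < 1e-9
```

In USM this score is computed in parallel for every candidate pair, yielding one linking matrix per operation (H2T, H2H, T2T, and likewise for the label-token and token-label operations below).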
Label-Token Linking for Utterance Conceptualizing

Given label token embeddings h^l_1, ..., h^l_{|l|} and text token embeddings h^t_1, ..., h^t_{|t|}, USM conceptualizes valid utterance structures with label-token linking (LTL) operations. The output of LTL is a pair of label name and text mention, e.g., (person, "Monet"), (country, "France"), and (born, "born in"). There are two types of utterance conceptualizing: the first one is the type of mention, which indicates assigning the label types to every single mention, such as entity type person for entity mention "Monet"; the second one is the predicate of object, which assigns the predicate type to each object candidate, such as relation type birth place for "Paris" and event argument type place for "Paris".

We conceptualize the type of mention and the predicate of object with the same label-to-token linking operation, thus enabling the two label semantics to reinforce each other. Following the head-tail span extraction style, we name each substructure with label-to-head (L2H) and label-to-tail (L2T) linking operations. For the pair of label name birth place and text span "Paris", USM links the head of the label, birth, with the head of the text span "Paris", and links the tail of the label, place, with the tail of the text span "Paris".

For the above two label-to-token linking (L2H, L2T) operations, USM respectively calculates the label-to-token linking score s_LTL(l_i, t_j) over all valid label and text token pair candidates ⟨l_i, t_j⟩:

s_LTL(l_i, t_j) = FFNN^label_LTL(h^l_i)^T R_{j−i} FFNN^text_LTL(h^t_j)    (3)
Token-Label Linking for Pairing Conceptualizing

To conceptualize the association pair, USM links the subject of the association pair to the label name using Token-Label Linking (TLL). Precisely, the TLL operation links the subject of the triplet and the predicate type with head-to-label (H2L) and tail-to-label (T2L) operations. For instance, TLL links the head of the text span "Monet" and the head of the label, birth, with H2L, and links the tail of the text span "Monet" and the tail of the label, place, with T2L, following the head-tail span extraction style. For the above two token-label linking (H2L, T2L) operations, the linking score s_TLL(t_i, l_j) is computed as:

s_TLL(t_i, l_j) = FFNN^text_TLL(h^t_i)^T R_{j−i} FFNN^label_TLL(h^l_j)    (4)
421 |
+
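Concretely, Eqs. (3) and (4) are biaffine-style scores between two projected hidden states, modulated by a relative-position matrix R_{j-i}. The sketch below is a minimal NumPy rendition under illustrative assumptions: the FFNN is a single tanh layer, and R_{j-i} is a learned matrix looked up by relative distance (the parameterization of R is not fixed by the equations themselves; rotary-style relative encodings are a common choice).

```python
import numpy as np

def ffnn(h, W, b):
    """Single-layer feed-forward projection standing in for FFNN in Eq. (3)/(4)."""
    return np.tanh(h @ W + b)

def linking_scores(h_query, h_key, W_q, b_q, W_k, b_k, rel_emb):
    """Score every (query token i, key token j) pair as FFNN(h_i)^T R_{j-i} FFNN(h_j).

    h_query: (Q, d) hidden states (label tokens for LTL, text tokens for TLL)
    h_key:   (K, d) hidden states on the other side of the link
    rel_emb: (2*max_len + 1, p, p) learned matrices indexed by distance j - i
    """
    q = ffnn(h_query, W_q, b_q)          # (Q, p)
    k = ffnn(h_key, W_k, b_k)            # (K, p)
    offset = rel_emb.shape[0] // 2
    scores = np.zeros((q.shape[0], k.shape[0]))
    for i in range(q.shape[0]):
        for j in range(k.shape[0]):
            R = rel_emb[j - i + offset]  # relative-position matrix R_{j-i}
            scores[i, j] = q[i] @ R @ k[j]
    return scores
```

The same routine serves both directions: for LTL (Eq. 3) the label states act as queries and text states as keys, and for TLL (Eq. 4) the argument order is swapped.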
Schema-constraint Decoding for Structure Composing

USM decodes the final structures using a schema-constraint decoding algorithm, given the substructures extracted by the unified token linking operations. During the decoding stage, we separate the types for different tasks according to the schema definition. For instance, in the joint entity and relation extraction task, we uniformly encode entity types and relation types as labels to utilize the common structuring and conceptualizing ability, but compose the final result by separating the entity and relation types from the input types.

As shown in Figure 3, USM 1) first decodes the mentions and subject-object units extracted by the token-token linking operation: {“Monet”, “Paris”, “France”, (“Monet”, “Paris”), (“France”, “Paris”)}; 2) then decodes the label-mention pairs by the label-token linking operation: {(person, “Monet”), (country, “France”), (birth place, “Paris”), (capital, “Paris”)}; 3) and finally decodes the label-association pairs using the token-label linking operation: (“Monet”, birth place), (“France”, capital). The above three token linking operations do not affect each other; hence the extraction operations are fully non-autoregressive and highly parallel.
Finally, we separate the entity types country and person and the relation types birth place and capital from the input types according to the schema definition. Based on the results from token-label linking, (“Monet”, birth place) and (“France”, capital), we can consistently obtain the full structures (“Monet”, birth place, “Paris”) and (“France”, capital, “Paris”).
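The composition step above can be sketched as plain set operations over the three linking outputs; all names below are illustrative, and the mention/pair sets stand in for the decoded substructures of the Figure 3 example.

```python
def compose_structures(mentions, pairs, label_mentions, label_assocs,
                       entity_types, relation_types):
    """Compose final structures from the three linking outputs.

    mentions:       spans decoded from token-token linking
    pairs:          (subject, object) units from token-token linking
    label_mentions: (label, mention) pairs from label-token linking
    label_assocs:   (subject, label) pairs from token-label linking
    entity_types / relation_types: the schema definition used to separate
    label roles at decoding time.
    """
    # 1) typed entities: label-mention pairs whose label is an entity type
    entities = {(l, m) for l, m in label_mentions
                if l in entity_types and m in mentions}
    # 2) relations: a (subject, object) unit whose predicate label is attached
    #    to the object (label-token) and to the subject (token-label)
    relations = set()
    for subj, obj in pairs:
        for l, m in label_mentions:
            if l in relation_types and m == obj and (subj, l) in label_assocs:
                relations.add((subj, l, obj))
    return entities, relations
```

On the Figure 3 example, this yields the entities (person, “Monet”) and (country, “France”) and the relations (“Monet”, birth place, “Paris”) and (“France”, capital, “Paris”).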
Learning from Heterogeneous Supervision

This section introduces how to leverage heterogeneous supervised resources to learn the common structuring and conceptualizing abilities for unified token linking. Specifically, with the help of verbalized label representations and unified token linking, we unify heterogeneous supervision signals into <text, token pairs> for pre-training. We first pre-train USM on the heterogeneous resources, which contain three different supervised signals, including task annotation signals (e.g., IE datasets), distant signals (e.g., distant supervision datasets), and indirect signals (e.g., question answering datasets), and then adapt the pre-trained USM model to specific downstream information extraction tasks.
Pre-training

USM uniformly encodes the label schema and text in a shared semantic representation and employs unified token linking to structure and conceptualize information from text. To help USM learn the common structuring and conceptualizing abilities, we collect three different supervised signals from existing linguistic resources for the pre-training of USM:

Dtask is the task annotation dataset, where each instance has a gold annotation for information extraction. We use Ontonotes (Pradhan et al. 2013), which is widely used in the field of information extraction as gold annotation and contains 18 entity types. Dtask is used as an in-task supervision signal to learn task-specific structuring and conceptualizing abilities.

Ddistant is the distant supervision dataset, where each instance is obtained by aligning text with a knowledge base. Distant supervision is a common practice for obtaining large-scale training data for information extraction (Mintz et al. 2009; Riedel et al. 2013). We employ NYT (Riedel et al. 2013) and Rebel (Huguet Cabot and Navigli 2021) as our distant supervision datasets, which are obtained by aligning text with Freebase and Wikidata, respectively. The Rebel dataset has a large label schema, and all verbalized schemas together are too long to be concatenated with the input text and fed to the pre-trained transformer encoder. We therefore sample negative label schemas to construct a meta schema (Lu et al. 2022) as the label schema for pre-training.
| Dataset | Metric | UIE | Task-specific SOTA Methods | USMRoberta | USM | USMUnify |
|---|---|---|---|---|---|---|
| ACE04 | Entity F1 | 86.89 | 87.90 (Lou, Yang, and Tu 2022) | 87.79 | 87.62 | 87.34 |
| ACE05-Ent | Entity F1 | 85.78 | 86.91 (Lou, Yang, and Tu 2022) | 86.98 | 87.14 | - |
| CoNLL03 | Entity F1 | 92.99 | 93.21 (Wang et al. 2021b) | 92.76 | 93.16 | 92.97 |
| ACE05-Rel | Relation Strict F1 | 66.06 | 66.80 (Yan et al. 2021) | 66.54 | 67.88 | - |
| CoNLL04 | Relation Strict F1 | 75.00 | 75.40 (Huguet Cabot and Navigli 2021) | 75.86 | 78.84 | 77.12 |
| NYT | Relation Boundary F1 | 93.54 | 93.40 (Huguet Cabot and Navigli 2021) | 93.96 | 94.07 | 94.01 |
| SciERC | Relation Strict F1 | 36.53 | 38.40 (Yan et al. 2021) | 37.05 | 37.36 | 37.42 |
| ACE05-Evt | Event Trigger F1 | 73.36 | 73.60 (Wang et al. 2022b) | 71.68 | 72.41 | 72.31 |
| ACE05-Evt | Event Argument F1 | 54.79 | 55.10 (Wang et al. 2022b) | 55.37 | 55.83 | 53.57 |
| CASIE | Event Trigger F1 | 69.33 | 68.98 (Lu et al. 2021) | 70.77 | 71.73 | 71.56 |
| CASIE | Event Argument F1 | 61.30 | 60.37 (Lu et al. 2021) | 63.05 | 63.26 | 63.00 |
| 14-res | Sentiment Triplet F1 | 74.52 | 74.52 (Lu et al. 2022) | 76.35 | 77.26 | 77.29 |
| 14-lap | Sentiment Triplet F1 | 63.88 | 63.88 (Lu et al. 2022) | 65.46 | 65.51 | 66.60 |
| 15-res | Sentiment Triplet F1 | 67.15 | 67.15 (Lu et al. 2022) | 68.80 | 69.86 | - |
| 16-res | Sentiment Triplet F1 | 75.07 | 75.07 (Lu et al. 2022) | 76.73 | 78.25 | - |
| AVE-unify | - | 71.10 | 71.34 | 71.83 | 72.46 | 72.11 |
| AVE-total | - | 71.75 | 72.05 | 72.61 | 73.35 | - |

Table 1: Overall results of USM on different datasets. AVE-unify indicates the average performance of non-overlapped datasets (except ACE05-Rel/Evt and 15/16-res), and AVE-total indicates the average performance of all datasets.

Dindirect is the indirect supervision dataset, where each instance is derived from other related NLP tasks (Wang, Ning, and Roth 2020; Chen et al. 2022b). We utilize reading comprehension datasets from MRQA (Fisch et al. 2019) as our indirect supervision datasets: HotpotQA (Yang et al. 2018), Natural Questions (Kwiatkowski et al. 2019), NewsQA (Trischler et al. 2017), SQuAD (Rajpurkar et al. 2016), and TriviaQA (Joshi et al. 2017). Compared with the limited entity types in Dtask and relation types in Ddistant, the diversified question expressions can provide richer label semantic information for learning conceptualization. For each (question, context, answer) instance in Dindirect, we take the question as the label schema, the context as the input text, and the answer as the mention. USM captures structuring and conceptualizing abilities in the pre-training stage by learning the token-token and label-token linking operations.
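As an illustration of this unification, a QA instance can be rewritten into the <text, token pairs> format roughly as follows; the whitespace tokenization, index convention, and field names here are illustrative assumptions, not the paper's exact preprocessing.

```python
def qa_to_linking_instance(question, context, answer_start, answer_end):
    """Convert a (question, context, answer) instance into <text, token pairs>.

    answer_start / answer_end are token indices of the answer span inside the
    context; returned indices refer to the concatenated [question; context]
    token sequence, with the question playing the role of verbalized schema.
    """
    q_tokens = question.split()
    c_tokens = context.split()
    head = len(q_tokens) + answer_start   # answer head in the joint sequence
    tail = len(q_tokens) + answer_end     # answer tail in the joint sequence
    return {
        "text": q_tokens + c_tokens,
        # token-token linking: head-to-tail inside the answer mention
        "TTL": [(head, tail)],
        # label-token linking: question head/tail -> answer head/tail (L2H, L2T)
        "LTL": [(0, head), (len(q_tokens) - 1, tail)],
    }
```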
Learning function

For pre-training, fine-tuning, and multi-task learning, we unify all datasets as {(xi, yi)}, where xi is the text and yi is the linking annotation of each token linking pair (TTL, LTL, TLL). We use the same learning function for all settings with the homogenized data format.

The main challenge of USM learning is the sparsity of linked token pairs: the linked pairs occupy less than 1% of all valid token pair candidates. To overcome the extreme sparsity of linking instances, we optimize a class imbalance loss (Su et al. 2022) for each instance as follows:

L = Σ_{m∈M} [ log(1 + Σ_{(i,j)∈m+} e^{-s_m(i,j)}) + log(1 + Σ_{(i,j)∈m-} e^{s_m(i,j)}) ]    (5)

where M denotes the linking types of USM, m+ indicates the linked pairs, m- indicates the non-linked pairs, and s_m(i, j) is the predicted linking score for the linking operation m.
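Eq. (5) can be written down directly; the sketch below is a NumPy version for a single linking type m, assuming scores and 0/1 link labels are flattened over all candidate pairs.

```python
import numpy as np

def class_imbalance_loss(scores, labels):
    """Eq. (5) for one linking type m.

    scores: (N,) predicted linking scores s_m(i, j) over all candidate pairs
    labels: (N,) 1 for linked pairs (m+), 0 for non-linked pairs (m-)
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # log(1 + sum_{m+} e^{-s}) pulls linked-pair scores up, while
    # log(1 + sum_{m-} e^{s}) pushes non-linked-pair scores down; neither
    # term requires the 99%+ negative pairs to dominate the gradient.
    return np.log1p(np.exp(-pos).sum()) + np.log1p(np.exp(neg).sum())
```

The full loss of Eq. (5) sums this quantity over the linking types m in M.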
Experiments

This section conducts extensive experiments under supervised settings and transfer settings to demonstrate the effectiveness of the proposed unified semantic matching framework.

Experiments on Supervised Settings

We conduct supervised experiments on extensive information extraction tasks, covering 4 tasks and 13 datasets (entity extraction, relation extraction, event extraction, sentiment extraction) and their combinations (e.g., joint entity-relation extraction). The used datasets include ACE04 (Mitchell et al. 2005), ACE05 (Walker et al. 2006), CoNLL03 (Tjong Kim Sang and De Meulder 2003), CoNLL04 (Roth and Yih 2004), SciERC (Luan et al. 2018), NYT (Riedel, Yao, and McCallum 2010), CASIE (Satyapanich, Ferraro, and Finin 2020), and SemEval-14/15/16 (Pontiki et al. 2014, 2015, 2016). We employ the same end-to-end settings and evaluation metrics as Lu et al. (2022).

We compare the proposed USM framework with the task-specific state-of-the-art methods and the unified structure generation method UIE (Lu et al. 2022). For our approach, we show three different settings:
• USM is the pre-trained model which learned the unified token linking ability from heterogeneous supervision;
• USMRoberta is the initial model of the pre-trained USM, which employs RoBERTa-Large (Liu et al. 2019) as the pre-trained transformer encoder;
• USMUnify is initialized by the pre-trained USM and conducts multi-task learning with all datasets but ignores the overlapped datasets: ACE05-Ent/Rel and 15/16-res.

For the USMRoberta and USM settings, we fine-tune them on each specific task separately. We run each experiment with three seeds and report the average performance.
Table 1 shows the overall performance of USM and the other baselines on the 13 datasets, where AVE-unify indicates the average performance of non-overlapped datasets and AVE-total indicates the average performance of all datasets. We can observe that: 1) By verbalizing labels and modeling all IE tasks as unified token linking, USM provides a novel and effective framework for IE. USM achieves state-of-the-art performance and outperforms the strong task-specific methods by 1.30 in AVE-total. Even without pre-training, USMRoberta also shows strong performance, which indicates the strong portability and generalization ability of unified token linking. 2) Heterogeneous supervision provides a better foundation for structuring and conceptualizing information extraction. Compared to the initial model USMRoberta, the heterogeneously pre-trained USM achieves an average improvement of 0.74 across all datasets. 3) By homogenizing diversified label schemas and heterogeneous target structures into the unified token sequence, USMUnify can solve massive IE tasks with a single multi-task model. USMUnify outperforms the task-specific state-of-the-art methods with different model architectures and encoder backbones on average, providing an efficient solution for application and deployment.

| | Movie | Restaurant | Social | AI | Literature | Music | Politics | Science | Ave |
|---|---|---|---|---|---|---|---|---|---|
| Performance on Unseen Label Subset of Dt and Di | | | | | | | | | |
| #Unseen/#All | 12/12 | 7/8 | 7/10 | 10/14 | 8/12 | 9/13 | 5/9 | 13/17 | - |
| Dtask | 25.07 | 2.50 | 22.54 | 10.82 | 50.74 | 44.11 | 9.75 | 13.98 | 22.44 |
| Dtask + Dindirect | 37.73 | 14.73 | 29.34 | 28.18 | 56.00 | 44.93 | 36.10 | 44.09 | 36.39 |
| Performance on Unseen Label Subset of Pre-training Dataset | | | | | | | | | |
| #Unseen/#All | 10/12 | 7/8 | 6/10 | 8/14 | 7/12 | 8/13 | 4/9 | 12/17 | - |
| Dtask | 32.1 | 2.50 | 1.64 | 10.68 | 52.42 | 45.93 | 11.16 | 14.12 | 21.32 |
| Dtask + Dindirect | 39.76 | 14.73 | 20.62 | 24.12 | 56.24 | 44.21 | 32.92 | 44.25 | 34.61 |
| Dtask + Ddistant | 35.35 | 21.10 | 40.64 | 27.57 | 56.97 | 49.29 | 43.72 | 44.05 | 39.84 |
| Dtask + Ddistant + Dindirect | 42.11 | 26.01 | 44.37 | 34.91 | 65.69 | 60.07 | 56.65 | 55.26 | 48.13 |
| ∆ | 10.01 | 23.51 | 42.73 | 24.23 | 13.27 | 14.14 | 45.49 | 41.14 | 26.82 |

Table 2: Performance of zero-shot transfer settings on the unseen entity label subset with different supervision signals. Unseen indicates label types that do not appear in the pre-training dataset. ∆ indicates the improvement of pre-training using extra supervision signals (Ddistant and Dindirect).

| Model | CoNLL04 | Model Size |
|---|---|---|
| GPT-3* | 18.10 | 137B |
| DEEPSTRUCT | 25.80 | 10B |
| USM | 25.95 | 356M |

Table 3: Performance of zero-shot transfer settings on relation extraction. * GPT-3 175B indicates formulating the extraction task as a question answering problem through prompting, and DEEPSTRUCT 10B is a pre-trained language model for structure prediction (Wang et al. 2022a).
Experiments on Zero-shot Transfer Settings

We conduct zero-shot cross-type transfer experiments on 9 datasets across various domains to verify the transferable conceptualization learned by USM. In this setting, we directly employ the pre-trained USM to conduct extraction on new datasets.
| Task (Dataset) | Model | 1-Shot | 5-Shot | 10-Shot | AVE-S |
|---|---|---|---|---|---|
| Entity (CoNLL03) | UIE-Large* | 57.53 | 75.32 | 79.12 | 70.66 |
| | USMRoberta | 9.69 | 40.66 | 62.87 | 37.74 |
| | USMSymbolic | 60.56 | 81.87 | 83.87 | 75.43 |
| | USM | 71.11 | 83.25 | 84.58 | 79.65 |
| Relation (CoNLL04) | UIE-Large* | 34.88 | 51.64 | 58.98 | 48.50 |
| | USMRoberta | 0.00 | 12.81 | 31.02 | 14.61 |
| | USMSymbolic | 13.45 | 48.31 | 58.91 | 40.22 |
| | USM | 36.17 | 53.20 | 60.99 | 50.12 |
| Event Trigger (ACE05-Evt) | UIE-Large* | 42.37 | 53.07 | 54.35 | 49.93 |
| | USMRoberta | 26.39 | 47.10 | 51.46 | 41.65 |
| | USMSymbolic | 1.97 | 30.77 | 52.30 | 28.35 |
| | USM | 40.86 | 55.61 | 58.79 | 51.75 |
| Event Argument (ACE05-Evt) | UIE-Large* | 14.56 | 31.20 | 35.19 | 26.98 |
| | USMRoberta | 6.47 | 27.00 | 34.20 | 22.56 |
| | USMSymbolic | 0.08 | 13.71 | 33.52 | 15.77 |
| | USM | 19.01 | 36.69 | 42.48 | 32.73 |
| Sentiment (16res) | UIE-Large* | 23.04 | 42.67 | 53.28 | 39.66 |
| | USMRoberta | 2.68 | 35.71 | 48.56 | 28.98 |
| | USMSymbolic | 20.08 | 41.25 | 50.90 | 37.41 |
| | USM | 30.81 | 52.06 | 58.29 | 47.05 |

Table 4: Few-shot results on end-to-end IE tasks. For a fair comparison, we conduct text-structure pre-training from T5-v1.1-large using the same pre-training corpus as USM, referred to as UIE-Large*.
For entity extraction, the cross-type extraction datasets include Movie (MIT-Movie), Restaurant (MIT-Restaurant) (Liu et al. 2013), Social (WNUT-16) (Strauss et al. 2016), and AI/Literature/Music/Politics/Science from CrossNER (Liu et al. 2021). We investigate the effect of different supervised signals in the zero-shot entity extraction setting. Dtask indicates that we first train USM on the common entity extraction dataset Ontonotes and then directly conduct extraction on the new types, which emulates the most common label transfer method used in real-world scenarios. To be consistent with the real scenario, we select the best checkpoint according to the F1 score on the dev set of Dtask.
+
For zero-shot relation extraction, we compare USM with
|
1000 |
+
|
1001 |
+
the following strong baselines:
|
1002 |
+
• GPT-3 175B (Brown et al. 2020) is a large-scale, gen-
|
1003 |
+
erative pre-trained model, which can extract entity and
|
1004 |
+
relation by formulating the task as a question answering
|
1005 |
+
problem through prompting (Wang et al. 2022a).
|
1006 |
+
• DEEPSTRUCT 10B is a structured prediction model pre-
|
1007 |
+
trained on six large-scale entity, relation, and triple
|
1008 |
+
datasets (Wang et al. 2022a).
|
1009 |
+
Table 2 shows the entity extraction performance on the unseen label subset, in which the types do not appear in the pre-training dataset, and Table 3 shows the performance of zero-shot relation extraction on CoNLL04. From Table 2 and Table 3, we can see that: 1) USM has strong zero-shot transferability across labels. USM shows good migration performance on the Movie, Literature, and Music domains even when learning from Dtask with limited entity types. For relation extraction, USM (356M) outperforms the strong zero-shot baselines GPT-3 (175B) and DEEPSTRUCT (10B) with a much smaller model size. 2) Heterogeneous supervision boosts USM with unified label semantics and outperforms the task annotation baseline by a large margin. Compared to the task annotation baseline (Dtask), USM significantly and consistently improves the performance on all datasets.
Experiments on Few-shot Transfer Settings

To further investigate the effects of verbalized label semantics, we conduct few-shot transfer experiments on four IE tasks and compare USM with the following baselines:
• UIE-Large* is the pre-trained sequence-to-structure model for effective low-resource IE tasks, which injects label semantics by generating labels and words in the structured extraction language synchronously and guiding the generation with a structural schema instructor.
• USMRoberta is the initial model of USM, which directly uses RoBERTa-Large as the pre-trained encoder;
• USMSymbolic replaces the names of labels with symbolic representations (meaningless labels, e.g., label1, label2, ...) during the fine-tuning stage of USM, which is used to verify the effect of verbalized label semantics.

For the few-shot transfer experiments, we follow the data splits and settings of previous work (Lu et al. 2022) and repeat each experiment 10 times to avoid the influence of random sampling (Huang et al. 2021). Table 4 shows the performance on the 4 IE tasks under the few-shot settings, where AVE-S is the average performance of the 1/5/10-shot experiments. We can see that: 1) By modeling IE tasks via unified semantic matching, USM exceeds the few-shot state-of-the-art UIE-Large by 5.11 on average. Although UIE also adopts verbalized label representations, its structure generation method needs to learn to generate the novel schema words in the target structure during transfer learning. In contrast, USM only needs to learn to match them, providing a better inductive bias and leading to a much smaller decoding search space. The pre-trained unified token linking ability boosts USM in all settings. 2) It is crucial to verbalize label schemas rather than use meaningless symbols, especially for complex extraction tasks. USMSymbolic, which uses symbolic labels instead of verbalized labels, drastically reduces performance on all tasks. For tasks with more semantic types, such as event extraction with 33 types, the performance drops significantly, even below that of USMRoberta initialized directly with RoBERTa-Large.
Related Work

In the past decade, due to their powerful representation ability, deep learning methods (Bengio et al. 2003; Collobert et al. 2011) have made remarkable achievements in information extraction tasks. Most of these methods decompose extraction into multiple sub-tasks and follow the classical neural classifier method (Krizhevsky, Sutskever, and Hinton 2012) to model each sub-task, such as entity extraction, relation classification, event trigger detection, event argument classification, etc. Several architectures have been proposed to model the extraction, such as sequence tagging (Lample et al. 2016; Zheng et al. 2017), span classification (Sohrab and Miwa 2018; Song et al. 2019; Wadden et al. 2019), table filling (Gupta, Schütze, and Andrassy 2016; Wang and Lu 2020), question answering (Levy et al. 2017; Li et al. 2020), and token pair linking (Wang et al. 2020; Yu et al. 2021).

Recently, to solve various IE tasks with a single architecture, UIE employs unified structure generation, models the various IE tasks with a structured extraction language, and pre-trains the ability of structure generation using distant text-structure supervision (Lu et al. 2022). Unlike this generation-based approach, we model universal information extraction as unified token linking, which reduces the search space during decoding and leads to better generalization performance. Beyond distant supervision, we further introduce indirect supervision from related NLP tasks to learn the unified token linking ability.

Similar to USM in this paper, matching-based IE approaches aim to verbalize the label schema and structure candidates to achieve better generalization (Liu et al. 2022). Such methods usually use pre-extracted syntactic structures (Wang et al. 2021a) and semantic structures (Huang et al. 2018) as candidate structures, then model the extraction as text entailment (Obamuyide and Vlachos 2018; Sainz et al. 2021; Lyu et al. 2021; Sainz et al. 2022) or semantic structure mapping (Chen and Li 2021; Dong, Pan, and Luo 2021). Different from this pre-extraction and matching style, this paper decouples various IE tasks into unified token linking operations and designs a one-pass, end-to-end information extraction framework for modeling all tasks.
Conclusion

In this paper, we propose a unified semantic matching framework, USM, which jointly encodes the extraction schema and input text, uniformly extracts substructures in parallel, and controllably decodes target structures on demand. Experimental results show that USM achieves state-of-the-art performance under the supervised experiments and shows strong generalization ability under zero/few-shot transfer settings, which verifies that USM is a novel, transferable, controllable, and efficient framework. For future work, we want to extend USM to NLU tasks, e.g., text classification, and investigate more indirect supervision signals for IE, e.g., text entailment.
Acknowledgments

We sincerely thank the reviewers for their insightful comments and valuable suggestions. This work is supported by the National Key Research and Development Program of China (No.2020AAA0109400) and the Natural Science Foundation of China (No.62122077, 61876223, and 62106251). Hongyu Lin is sponsored by the CCF-Baidu Open Fund.
Appendix: Experiment Details

This section describes the details of the experiments, including implementation details and extra experiment analysis.

Implementation Details

For all experiments, we optimize our model using AdamW (Loshchilov and Hutter 2019) with a constant learning rate. For single-task fine-tuning, we tune the learning rate from {1e-5, 2e-5, 3e-5} with three seeds and select the best hyper-parameter setting according to the performance on the development set. For the multi-task learning of USMUnify, we select the best checkpoint according to the average performance on all datasets. We conducted each experiment on NVIDIA A100 GPUs; detailed hyper-parameters are shown in Table 5.
| Setting | Learning Rate | Global Batch | Epoch |
|---|---|---|---|
| Pre-training | 2e-5 | 96 | 5 |
| Fine-tuning: Entity | 1e-5, 2e-5, 3e-5 | 64 | 100 |
| Fine-tuning: Relation | 1e-5, 2e-5, 3e-5 | 64 | 200 |
| Fine-tuning: Event | 1e-5, 2e-5, 3e-5 | 96 | 200 |
| Fine-tuning: Sentiment | 1e-5, 2e-5, 3e-5 | 32 | 100 |
| Low-resource | 2e-5 | 32 | 200 |
| Multi-task | 2e-5 | 96 | 200 |

Table 5: Hyper-parameters of USM experiments.
Pre-train Datasets

We collect three types of supervision signals for model pre-training: named entity annotations in Ontonotes for the task annotation Dtask; NYT (Riedel, Yao, and McCallum 2010) and Rebel (Huguet Cabot and Navigli 2021) for the distant supervision Ddistant; and machine reading comprehension from MRQA (Fisch et al. 2019) for the indirect supervision Dindirect. For the Rebel data, we only keep the 230 most frequently occurring relation types and randomly sample 300K instances for pre-training. For the reading comprehension data, we reserve a maximum of 5 questions for each instance and filter out instances where the total tokenized length of question and context exceeds 500. The final statistics are shown in Table 6.
Ablation Analysis of Label-Text Interaction

To investigate the effect of label-text interaction and accelerate the extraction process, we propose an approximate shallow label-text interaction model to reuse the computation of label embeddings during the inference stage. Motivated by Dong et al. (2019), we design attention mask strategies to control the interaction between label and text, as illustrated in Figure 4. In the full mask setting (Label ⇔ Text, Figure 4a), the label and text can attend to each other to obtain deep interaction; in the partial mask setting (Label × Text, Figure 4b), the label and text only attend to themselves. In the partial mask setting, USM can cache and reuse the computation of label embeddings in a dual-encoder way to reduce the computation cost during the inference stage.

| Dataset | #instance |
|---|---|
| Dtask: Ontonotes | 60K |
| Ddistant: NYT + Rebel | 356K |
| Dindirect: MRQA | 195K |

Table 6: Detailed statistics of pre-training datasets.
dual encoder way during the inference stage.
|
1210 |
+
[Label] [Label]
|
1211 |
+
[Text]
|
1212 |
+
(a) Label ⇔ Text: Label and text
|
1213 |
+
can attend to each other.
|
1214 |
+
[Label] [Label]
|
1215 |
+
[Text]
|
1216 |
+
(b) Label × Text: Label and text
|
1217 |
+
can not attend to each other.
|
1218 |
+
Figure 4: Different attention masks for text-schema joint em-
|
1219 |
+
bedding.
|
1220 |
+
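The two masking strategies amount to different block structures over the concatenated [label; text] sequence. A minimal sketch (boolean True = may attend; names are illustrative):

```python
import numpy as np

def build_attention_mask(n_label, n_text, full_interaction=True):
    """Attention mask over the concatenated [label; text] sequence.

    full_interaction=True  -> Label <-> Text (Figure 4a): every position
    attends to every position, giving deep label-text interaction.
    full_interaction=False -> Label x Text (Figure 4b): label tokens and
    text tokens only attend within their own segment, so the label
    embeddings can be pre-computed and cached as in a dual encoder.
    """
    n = n_label + n_text
    if full_interaction:
        return np.ones((n, n), dtype=bool)
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_label, :n_label] = True   # label -> label
    mask[n_label:, n_label:] = True   # text -> text
    return mask
```

With the block-diagonal mask, the label segment's hidden states are independent of the text, which is what makes the caching described above sound.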
Entity
|
1221 |
+
Relation
|
1222 |
+
Event
|
1223 |
+
Sentiment
|
1224 |
+
Full-shot
|
1225 |
+
Label ⇔ Text
|
1226 |
+
97.03
|
1227 |
+
81.91
|
1228 |
+
63.51
|
1229 |
+
81.22
|
1230 |
+
Label × Text
|
1231 |
+
96.99
|
1232 |
+
81.18
|
1233 |
+
62.03
|
1234 |
+
80.92
|
1235 |
+
Few-shot (AVE-S)
|
1236 |
+
Label ⇔ Text
|
1237 |
+
82.12
|
1238 |
+
52.23
|
1239 |
+
37.52
|
1240 |
+
51.51
|
1241 |
+
Label × Text
|
1242 |
+
82.37
|
1243 |
+
45.75
|
1244 |
+
24.70
|
1245 |
+
26.65
|
1246 |
+
Table 7: Experiment results on the development set of entity
|
1247 |
+
(CoNLL03), relation (CoNLL04), event (ACE05-Evt argu-
|
1248 |
+
ment) and sentiment (16res) of USM with different label-
|
1249 |
+
text interaction.
|
Table 7 shows the performance of two different label-text
interactions, and we can see that: 1) Deep interaction (⇔)
can effectively improve the ability of unified token linking,
especially in low-resource settings. 2) In resource-rich
scenarios, shallow interaction (×) can replace deep interaction
between label-text linking. This dynamic and variable
scalability enables USM to have better application scenarios in
practice: for common rich resource extraction tasks, USM
can pre-compute the representation of label and text separately
in a dual encoder fashion, speeding up the inference
process without the need for other deployments; for low-resource
extraction tasks, USM can use deep-level interactive
information to improve transfer ability and retain high
parallelism.
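The dual-encoder reuse described above can be sketched with a memoized label encoder. The encoders below are toy placeholders (the function names and scoring rule are illustrative assumptions, not USM's architecture); the point is only that label encoding is computed once and served from a cache afterwards:

```python
from functools import lru_cache

# Toy "encoder": a real system would run a transformer forward pass here.
def encode(tokens):
    return [len(t) / 10.0 for t in tokens]

@lru_cache(maxsize=None)
def encode_label(label: str):
    # Computed once per schema label; later calls hit the cache.
    return tuple(encode(label.split()))

def match_score(label: str, text: str) -> float:
    label_emb = encode_label(label)   # cached (dual-encoder style)
    text_emb = encode(text.split())   # recomputed for every input
    # Toy matching score for illustration only.
    return sum(label_emb) * sum(text_emb)

s = match_score("person name", "Monet was born in Paris")
_ = match_score("person name", "another sentence")
hits = encode_label.cache_info().hits
```

After the second call, `encode_label` is answered from the cache, which is exactly the inference-time saving the partial mask enables.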
Effects of Controllable Ability
To investigate the controllable ability of USM, we conduct
partial extraction experiments on the CoNLL04 (Joint Entity
and Relation Extraction), ACE05-Evt (Event Trigger and
Argument), and 14lap (Sentiment Extraction). We employ
two kinds of partial extraction settings: 1) partial task
extraction: we train an end-to-end joint entity and relation
extraction model using the full schema of CoNLL04 (entity
and relation) but feed the partial schema (entity) to USM.
2) partial label extraction: we train an extraction model on
the full label set (positive, neutral, negative of sentiment),
and only extract part of the label set (positive) from the text.
Table 8 shows the performance of three different partial
extraction experiments. We can see that USM achieves almost
the same performance in both settings and has highly
controllable extraction ability.
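The observable behavior of partial-schema extraction can be sketched as restricting a full-label output to the requested subset. This is a simplified illustration: in USM the requested schema is part of the model input, so the restriction happens inside the model rather than by post-hoc filtering, and `full_preds` below is a made-up stand-in for a trained extractor's output:

```python
def extract_with_schema(predictions: dict, schema: set) -> dict:
    """Restrict extraction results to the schema actually requested."""
    return {label: spans for label, spans in predictions.items() if label in schema}

full_preds = {
    "positive": ["great battery life"],
    "negative": ["dim screen"],
    "neutral": ["13-inch model"],
}
# Feed only part of the label set, e.g. extract positive sentiment only.
partial = extract_with_schema(full_preds, {"positive"})
```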
                      Full    Partial  Partial Details
CoNLL04 Entity        90.74   90.50    Only Entity
ACE05-Evt Trigger     70.40   70.99    Only 16 Types of 33 Types
ACE05-Evt Argument    60.87   60.24    Only 16 Types of 33 Types
14lap Sentiment       75.00   74.78    Only Positive of 3 Types

Table 8: Experiment results of partial extraction schema on
the development set of different datasets. Partial indicates
feeding part of the whole schema to USM, such as only
extracting positive sentiment rather than extracting all types
(positive, neutral, negative) from the text. All results are
evaluated on the partial extraction schema. For instance, the
performances of ACE05-Evt Trigger under the full and partial
settings result from 16 types in the partial extraction schema.
                            14res   14lap   15res   16res
System using BERT-base
(Xu et al. 2020)            62.40   51.04   57.53   63.83
(Xu, Chia, and Bing 2021)   71.85   59.38   63.27   70.26
(Yu Bai Jian et al. 2021)   69.61   59.50   62.72   68.41
(Chen et al. 2022a)         71.78   58.81   61.93   68.33
USM_BERT-base               71.87   58.63   63.41   72.68

Table 9: Experiment results of USM_BERT-base on aspect-based
sentiment triplet extraction tasks.
Comparison of BERT-base
This section compares USM with other BERT-base based
state-of-the-art systems. USM_BERT-base indicates USM uses
BERT-base (Devlin et al. 2019) as a pre-trained transformer
encoder. Table 9 shows the performance of USM and the
state-of-the-art systems on the four aspect-based sentiment
analysis datasets, and Table 10 shows the performance of
USM and the state-of-the-art joint entity relation extraction
systems on the NYT dataset. We can see that USM_BERT-base
achieves competitive performance on the above datasets, which
verifies the effectiveness of the proposed unified semantic
matching framework.

                       P      R      F
System using BERT-base
(Wang et al. 2020)     91.4   92.6   92.0
(Sui et al. 2020)      92.5   92.2   92.3
(Zheng et al. 2021)    93.5   91.9   92.7
USM_BERT-base          93.7   91.9   92.8

Table 10: Experiment results of USM_BERT-base on the NYT.
Effect of Token-Label Linking
This section investigates the effect of the token-label linking
operation. Table 11 shows results of different decoding
strategies with golden token links: 1) Full employs all three
types of token linking operations to decode the final
structures; 2) w/o TLL indicates decoding without the
token-label links for pairing conceptualizing.
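As a rough illustration of why the token-label linking (TLL) step matters for decoding, consider attaching relation labels to span pairs. This is a simplified sketch of the idea, not USM's decoding algorithm; the data values are invented:

```python
def decode_triples(span_pairs, head_labels):
    """Attach relation labels to (head, tail) span pairs.

    span_pairs:  (head, tail) pairs recovered by token-token linking.
    head_labels: head span -> set of relation labels, recovered by
                 token-label linking. Without this mapping (the
                 "w/o TLL" setting), the decoder cannot tell which
                 label belongs to which pair and must enumerate them.
    """
    triples = []
    for head, tail in span_pairs:
        for label in sorted(head_labels.get(head, [])):
            triples.append((head, label, tail))
    return triples

# Two pairs sharing a head with two candidate labels: without TLL the
# pairing is ambiguous, yielding all four combinations.
triples = decode_triples(
    [("Monet", "Paris"), ("Monet", "France")],
    {"Monet": {"born_in", "die_in"}},
)
```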
Dataset     Metric                 F1 with golden links
                                   w/o TLL   Full
ACE05-Rel   Relation Strict F1     98.54     99.96
CoNLL04     Relation Strict F1     100.00    100.00
NYT         Relation Boundary F1   72.74     100.00
SciERC      Relation Strict F1     92.06     99.74
ACE05-Evt   Event Argument F1      98.75     100.00
CASIE       Event Argument F1      99.98     99.99
14-res      Sentiment Triplet F1   99.10     100.00
14-lap      Sentiment Triplet F1   98.54     100.00

Table 11: Performance of different decoding strategies using
golden links.
References
Alvarez-Melis, D.; and Jaakkola, T. 2017. A causal framework for explaining the predictions of black-box sequence-to-sequence models. In Proc. of EMNLP.
Andersen, P. M.; Hayes, P. J.; Weinstein, S. P.; Huettner, A. K.; Schmandt, L. M.; and Nirenburg, I. B. 1992. Automatic Extraction of Facts from Press Releases to Generate News Stories. In Proc. of ANLP.
Bengio, Y.; Ducharme, R.; Vincent, P.; and Janvin, C. 2003. A Neural Probabilistic Language Model. J. Mach. Learn. Res.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. In Proc. of NeurIPS.
Chen, C.-Y.; and Li, C.-T. 2021. ZS-BERT: Towards Zero-Shot Relation Extraction with Attribute Representation Learning. In Proc. of NAACL.
Chen, H.; Zhai, Z.; Feng, F.; Li, R.; and Wang, X. 2022a. Enhanced Multi-Channel Graph Convolutional Network for Aspect Sentiment Triplet Extraction. In Proc. of ACL.
Chen, M.; Huang, L.; Li, M.; Zhou, B.; Ji, H.; and Roth, D. 2022b. New Frontiers of Information Extraction. In Proc. of NAACL.
Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; and Kuksa, P. 2011. Natural Language Processing (Almost) from Scratch. J. Mach. Learn. Res.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proc. of NAACL.
Dong, L.; Yang, N.; Wang, W.; Wei, F.; Liu, X.; Wang, Y.; Gao, J.; Zhou, M.; and Hon, H.-W. 2019. Unified Language Model Pre-training for Natural Language Understanding and Generation. In Proc. of NeurIPS.
Dong, M.; Pan, C.; and Luo, Z. 2021. MapRE: An Effective Semantic Mapping Approach for Low-resource Relation Extraction. In Proc. of EMNLP.
Fisch, A.; Talmor, A.; Jia, R.; Seo, M.; Choi, E.; and Chen, D. 2019. MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension. In Proc. of MRQA.
Grishman, R. 2019. Twenty-five years of information extraction. Natural Language Engineering.
Gupta, P.; Schütze, H.; and Andrassy, B. 2016. Table Filling Multi-Task Recurrent Neural Network for Joint Entity and Relation Extraction. In Proc. of COLING.
Huang, J.; Li, C.; Subudhi, K.; Jose, D.; Balakrishnan, S.; Chen, W.; Peng, B.; Gao, J.; and Han, J. 2021. Few-Shot Named Entity Recognition: An Empirical Baseline Study. In Proc. of EMNLP.
Huang, L.; Ji, H.; Cho, K.; Dagan, I.; Riedel, S.; and Voss, C. 2018. Zero-Shot Transfer Learning for Event Extraction. In Proc. of ACL.
Huguet Cabot, P.-L.; and Navigli, R. 2021. REBEL: Relation Extraction By End-to-end Language generation. In Proc. of EMNLP Findings.
Joshi, M.; Choi, E.; Weld, D.; and Zettlemoyer, L. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In Proc. of ACL.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Proc. of NeurIPS.
Kwiatkowski, T.; Palomaki, J.; Redfield, O.; Collins, M.; Parikh, A.; Alberti, C.; Epstein, D.; Polosukhin, I.; Devlin, J.; Lee, K.; Toutanova, K.; Jones, L.; Kelcey, M.; Chang, M.-W.; Dai, A. M.; Uszkoreit, J.; Le, Q.; and Petrov, S. 2019. Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association for Computational Linguistics.
Lample, G.; Ballesteros, M.; Subramanian, S.; Kawakami, K.; and Dyer, C. 2016. Neural Architectures for Named Entity Recognition. In Proc. of NAACL.
Levy, O.; Seo, M.; Choi, E.; and Zettlemoyer, L. 2017. Zero-Shot Relation Extraction via Reading Comprehension. In Proc. of CoNLL.
Li, X.; Feng, J.; Meng, Y.; Han, Q.; Wu, F.; and Li, J. 2020. A Unified MRC Framework for Named Entity Recognition. In Proc. of ACL.
Liu, F.; Lin, H.; Han, X.; Cao, B.; and Sun, L. 2022. Pre-training to Match for Unified Low-shot Relation Extraction. In Proc. of ACL.
Liu, J.; Pasupat, P.; Cyphers, S.; and Glass, J. 2013. Asgard: A portable architecture for multilingual dialogue systems. In Proc. of ICASSP.
Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR.
Liu, Z.; Xu, Y.; Yu, T.; Dai, W.; Ji, Z.; Cahyawijaya, S.; Madotto, A.; and Fung, P. 2021. CrossNER: Evaluating Cross-Domain Named Entity Recognition. Proc. of AAAI.
Loshchilov, I.; and Hutter, F. 2019. Decoupled Weight Decay Regularization. In Proc. of ICLR.
Lou, C.; Yang, S.; and Tu, K. 2022. Nested Named Entity Recognition as Latent Lexicalized Constituency Parsing. In Proc. of ACL.
Lu, Y.; Lin, H.; Xu, J.; Han, X.; Tang, J.; Li, A.; Sun, L.; Liao, M.; and Chen, S. 2021. Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction. In Proc. of ACL.
Lu, Y.; Liu, Q.; Dai, D.; Xiao, X.; Lin, H.; Han, X.; Sun, L.; and Wu, H. 2022. Unified Structure Generation for Universal Information Extraction. In Proc. of ACL.
Luan, Y.; He, L.; Ostendorf, M.; and Hajishirzi, H. 2018. Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction. In Proc. of EMNLP.
Lyu, Q.; Zhang, H.; Sulem, E.; and Roth, D. 2021. Zero-shot Event Extraction via Transfer Learning: Challenges and Insights. In Proc. of ACL.
Mintz, M.; Bills, S.; Snow, R.; and Jurafsky, D. 2009. Distant supervision for relation extraction without labeled data. In Proc. of ACL.
Mitchell, A.; Strassel, S.; Huang, S.; and Zakhary, R. 2005. ACE 2004 Multilingual Training Corpus.
Obamuyide, A.; and Vlachos, A. 2018. Zero-shot Relation Classification as Textual Entailment. In Proc. of FEVER.
Pontiki, M.; Galanis, D.; Papageorgiou, H.; Androutsopoulos, I.; Manandhar, S.; AL-Smadi, M.; Al-Ayyoub, M.; Zhao, Y.; Qin, B.; De Clercq, O.; Hoste, V.; Apidianaki, M.; Tannier, X.; Loukachevitch, N.; Kotelnikov, E.; Bel, N.; Jiménez-Zafra, S. M.; and Eryiğit, G. 2016. SemEval-2016 Task 5: Aspect Based Sentiment Analysis. In Proc. of SemEval.
Pontiki, M.; Galanis, D.; Papageorgiou, H.; Manandhar, S.; and Androutsopoulos, I. 2015. SemEval-2015 Task 12: Aspect Based Sentiment Analysis. In Proc. of SemEval.
Pontiki, M.; Galanis, D.; Pavlopoulos, J.; Papageorgiou, H.; Androutsopoulos, I.; and Manandhar, S. 2014. SemEval-2014 Task 4: Aspect Based Sentiment Analysis. In Proc. of SemEval.
Pradhan, S.; Moschitti, A.; Xue, N.; Ng, H. T.; Björkelund, A.; Uryupina, O.; Zhang, Y.; and Zhong, Z. 2013. Towards Robust Linguistic Analysis using OntoNotes. In Proc. of CoNLL.
Rajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proc. of EMNLP.
Riedel, S.; Yao, L.; and McCallum, A. 2010. Modeling Relations and Their Mentions without Labeled Text. In Machine Learning and Knowledge Discovery in Databases.
Riedel, S.; Yao, L.; McCallum, A.; and Marlin, B. M. 2013. Relation Extraction with Matrix Factorization and Universal Schemas. In Proc. of NAACL.
Roth, D.; and Yih, W.-t. 2004. A Linear Programming Formulation for Global Inference in Natural Language Tasks. In Proc. of CoNLL.
Sainz, O.; Gonzalez-Dios, I.; Lopez de Lacalle, O.; Min, B.; and Agirre, E. 2022. Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning. In Proc. of ACL Findings.
Sainz, O.; Lopez de Lacalle, O.; Labaka, G.; Barrena, A.; and Agirre, E. 2021. Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction. In Proc. of EMNLP.
Satyapanich, T.; Ferraro, F.; and Finin, T. 2020. CASIE: Extracting Cybersecurity Event Information from Text. In Proc. of AAAI.
Sohrab, M. G.; and Miwa, M. 2018. Deep Exhaustive Model for Nested Named Entity Recognition. In Proc. of EMNLP.
Song, L.; Zhang, Y.; Gildea, D.; Yu, M.; Wang, Z.; and Su, J. 2019. Leveraging Dependency Forest for Neural Medical Relation Extraction. In Proc. of EMNLP-IJCNLP.
Strauss, B.; Toma, B.; Ritter, A.; de Marneffe, M.-C.; and Xu, W. 2016. Results of the WNUT16 Named Entity Recognition Shared Task. In Proc. of WNUT.
Su, J.; Lu, Y.; Pan, S.; Murta, A.; Wen, B.; and Liu, Y. 2021. RoFormer: Enhanced Transformer with Rotary Position Embedding.
Su, J.; Murtadha, A.; Pan, S.; Hou, J.; Sun, J.; Huang, W.; Wen, B.; and Liu, Y. 2022. Global Pointer: Novel Efficient Span-based Approach for Named Entity Recognition.
Sui, D.; Chen, Y.; Liu, K.; Zhao, J.; Zeng, X.; and Liu, S. 2020. Joint Entity and Relation Extraction with Set Prediction Networks. CoRR.
Tjong Kim Sang, E. F.; and De Meulder, F. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition.
Trischler, A.; Wang, T.; Yuan, X.; Harris, J.; Sordoni, A.; Bachman, P.; and Suleman, K. 2017. NewsQA: A Machine Comprehension Dataset. In Proc. of RepL4NLP.
Wadden, D.; Wennberg, U.; Luan, Y.; and Hajishirzi, H. 2019. Entity, Relation, and Event Extraction with Contextualized Span Representations. In Proc. of EMNLP.
Walker, C.; Strassel, S.; Medero, J.; and Maeda, K. 2006. ACE 2005 Multilingual Training Corpus.
Wang, C.; Liu, X.; Chen, Z.; Hong, H.; Tang, J.; and Song, D. 2021a. Zero-Shot Information Extraction as a Unified Text-to-Triple Translation. In Proc. of EMNLP.
Wang, C.; Liu, X.; Chen, Z.; Hong, H.; Tang, J.; and Song, D. 2022a. DeepStruct: Pretraining of Language Models for Structure Prediction. In Proc. of ACL Findings.
Wang, J.; and Lu, W. 2020. Two are Better than One: Joint Entity and Relation Extraction with Table-Sequence Encoders. In Proc. of EMNLP.
Wang, K.; Ning, Q.; and Roth, D. 2020. Learnability with Indirect Supervision Signals. In Proc. of NeurIPS.
Wang, S.; Yu, M.; Chang, S.; Sun, L.; and Huang, L. 2022b. Query and Extract: Refining Event Extraction as Type-oriented Binary Decoding. In Proc. of ACL Findings.
Wang, X.; Jiang, Y.; Bach, N.; Wang, T.; Huang, Z.; Huang, F.; and Tu, K. 2021b. Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning. In Proc. of ACL.
Wang, Y.; Yu, B.; Zhang, Y.; Liu, T.; Zhu, H.; and Sun, L. 2020. TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking. In Proc. of COLING.
Xu, L.; Chia, Y. K.; and Bing, L. 2021. Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction. In Proc. of ACL.
Xu, L.; Li, H.; Lu, W.; and Bing, L. 2020. Position-Aware Tagging for Aspect Sentiment Triplet Extraction. In Proc. of EMNLP.
Yan, Z.; Zhang, C.; Fu, J.; Zhang, Q.; and Wei, Z. 2021. A Partition Filter Network for Joint Entity and Relation Extraction. In Proc. of EMNLP.
Yang, Z.; Qi, P.; Zhang, S.; Bengio, Y.; Cohen, W.; Salakhutdinov, R.; and Manning, C. D. 2018. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In Proc. of EMNLP.
Yu, B.; Wang, Y.; Liu, T.; Zhu, H.; Sun, L.; and Wang, B. 2021. Maximal Clique Based Non-Autoregressive Open Information Extraction. In Proc. of EMNLP.
Yu Bai Jian, S.; Nayak, T.; Majumder, N.; and Poria, S. 2021. Aspect Sentiment Triplet Extraction Using Reinforcement Learning. In Proc. of CIKM.
Zheng, H.; Wen, R.; Chen, X.; Yang, Y.; Zhang, Y.; Zhang, Z.; Zhang, N.; Qin, B.; Ming, X.; and Zheng, Y. 2021. PRGC: Potential Relation and Global Correspondence Based Joint Relational Triple Extraction. In Proc. of ACL.
Zheng, S.; Wang, F.; Bao, H.; Hao, Y.; Zhou, P.; and Xu, B. 2017. Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme. In Proc. of ACL.
1dE1T4oBgHgl3EQflQQz/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

1dE2T4oBgHgl3EQfigf6/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a602a498f5eff1837d2410dc41d96b9add395f3bd5b01880137b091d616daf2a
+size 7274541

1dE2T4oBgHgl3EQfigf6/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cdd4d2917896f2c9fe04c2ba0a166287a9564a225dc86ebe8b537a2b9d7af54
+size 250201

1tFAT4oBgHgl3EQfDBwd/content/tmp_files/2301.08413v1.pdf.txt ADDED
@@ -0,0 +1,1880 @@
When Source-Free Domain Adaptation Meets Label Propagation
Chunwei Wu1,2, Guitao Cao1,2, Yan Li3, Xidong Xi1,2, Wenming Cao4 and Hong Wang5
1Shanghai Key Laboratory of Trustworthy Computing, East China Normal University
2MOE Research Center for Software/Hardware Co-Design Engineering, East China Normal University
3Information, Mechanical and Electrical Engineering, Shanghai Normal University
4College of Information Engineering, Shenzhen University
5Shanghai Research Institute of Microwave Equipment
{52215902005, 52265902004}@stu.ecnu.edu.cn, [email protected], [email protected],

Abstract
Source-free domain adaptation, where only a pretrained
source model is used to adapt to the target
distribution, is a more general approach to achieving
domain adaptation. However, it can be challenging
to capture the inherent structure of the target
features accurately due to the lack of supervised
information on the target domain. To tackle
this problem, we propose a novel approach called
Adaptive Local Transfer (ALT) that tries to achieve
efficient feature clustering from the perspective of
label propagation. ALT divides the target data into
inner and outlier samples based on the adaptive
threshold of the learning state, and applies a
customized learning strategy that best fits the data
property. Specifically, inner samples are utilized for
learning intra-class structure thanks to their relatively
well-clustered properties. The low-density
outlier samples are regularized by input consistency
to achieve high accuracy with respect to the ground
truth labels. In this way, local clustering can be
prevented from forming spurious clusters while
effectively propagating label information among
subpopulations. Empirical evidence demonstrates that
ALT outperforms the state of the art on three
public benchmarks: Office-31, Office-Home, and
VisDA.
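The inner/outlier split described in the abstract can be sketched as thresholding the source model's prediction confidence on target samples. This is a minimal NumPy illustration; ALT derives the threshold adaptively from the model's learning state, whereas a fixed value is used here purely for the sketch:

```python
import numpy as np

def split_inner_outlier(probs, tau):
    """Split target samples by source-model confidence.

    probs: (N, C) softmax outputs on unlabeled target data.
    tau:   confidence threshold (fixed here; adaptive in ALT).
    Returns boolean masks for the inner and outlier subsets.
    """
    confidence = probs.max(axis=1)
    inner = confidence >= tau   # well-clustered, high-confidence samples
    outlier = ~inner            # low-density samples, regularized by input consistency
    return inner, outlier

probs = np.array([[0.90, 0.10],
                  [0.55, 0.45],
                  [0.20, 0.80]])
inner, outlier = split_inner_outlier(probs, tau=0.70)
```

The two subsets then receive different regularizers: local-consistency for the inner set and input-consistency for the outlier set.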
1
|
39 |
+
Introduction
|
40 |
+
The excellent performance of deep learning relies heavily on a large amount of high-quality labeled data. Obtaining large amounts of manually labeled data for specific learning tasks is often time-consuming and expensive, making these tasks challenging to implement in practical applications. To alleviate this dependency, Unsupervised Domain Adaptation (UDA) has been developed to improve performance on the unlabeled target domain by exploiting the labeled source domain. Two popular practices for modern UDA design are learning domain-invariant features [Ganin et al., 2016; Long et al., 2018; Kang et al., 2019; Tang et al., 2020] and generating dummy samples to match the target domain distribution [Wu et al., 2020; Li et al., 2020a; Zhong et al., 2021; Na et al., 2021].

Figure 1: A toy illustration of target feature distributions from the trained source model. The target samples can be divided into two subsets: the inner set and the outlier set. Different shapes represent different classes. ALT achieves efficient clustering through Adaptive Local-consistency Regularization (solid) and Adaptive Input-consistency Regularization (dashed).
However, due to data privacy and security issues, the source-domain training data required by most existing UDA methods is usually unavailable in real-world applications. In response, Source-Free Domain Adaptation (SFDA) has emerged, which attempts to adapt a trained source model to the target domain without using any source data.

Due to the lack of source data, it is impossible to estimate source-target domain differences. Existing theoretical work usually provides learning guarantees on the target domain by further assuming that the source domain covers the support of the target domain. In the seminal work of [Yang et al., 2021a], the authors point out that the target features from the source model have already formed some semantic structures. Inspired by this intuition, we can preserve the important clustering structure in the target domain by matching similar features in the high-dimensional space. However, the nearest-neighbor consistency of points in high-dimensional space may be wrong, for example when forcing the local consistency of points in low-density regions. As shown in Table 1, when the source and target domains differ significantly (i.e., Pr→Cl and Rw→Cl), numerous features gather in low-density regions, and only about one-third of the neighbors have the correct labels. Motivated by this problem, we propose Adaptive Local Transfer (ALT), shown in Figure 1. To achieve flexible adaptation to different data properties and exploit the target domain's structure information, our work introduces a novel data division strategy and then designs different regularization strategies to achieve label propagation.

arXiv:2301.08413v1 [cs.CV] 20 Jan 2023

K   Ar→Cl  Ar→Pr  Cl→Ar  Pr→Cl  Pr→Rw  Rw→Cl
1   42.0   66.2   47.3   33.6   70.0   41.2
2   36.8   62.7   40.7   28.6   66.1   36.9
3   33.8   59.6   37.4   24.7   63.0   33.1
4   30.4   57.1   34.3   22.0   60.4   30.7
5   28.5   55.1   31.2   20.0   58.2   28.0
6   26.8   53.0   29.1   18.1   56.4   26.3
7   25.2   51.6   27.6   16.7   54.9   24.3
Table 1: Ratio (%) of the K nearest neighbors that have the correct predicted label (on Office-Home).
Firstly, our approach treats mining the target domain's intrinsic structure information as a clustering problem. Although existing local-consistency-based methods aim to preserve the local structure, Table 1 illustrates why neighbors are unreliable: in distance-based neighbor discrimination, neighbors are similar points in a high-dimensional space, and since the points in low-density regions are scattered far apart, the label information among the K nearest neighbors is inconsistent there. In ALT, we utilize the model's learning state to dynamically divide the target data into inner and outlier sets. The underlying rationale is that a sample can be considered an inner sample if it obtains high predictive values from the classifier; otherwise, it is an outlier. We regularize the input consistency of outliers and encourage local consistency for the inner samples, which effectively improves the mining of intrinsic structural information.

Secondly, we assume a minimal overlap between the subpopulations of the inner and outlier sets, and extend each subset using the simple but realistic expansion assumption of [Wei et al., 2021]. For the inner set, the local-consistency regularizer connects similar points in the high-dimensional space, allowing SFDA training to proceed stably. Motivating experiments on Office-Home show that: (1) the pre-trained source model can extract rich semantic information from the target data; (2) what is lacking in domain adaptation is the filtering and permutation of high-dimensional semantic information. We propose to recognize the clustering weight of each sample and re-weight these samples, called Adaptive Local-consistency Regularization (ALR), to filter spurious clustering information. To advance further along this line, we propose Adaptive Input-consistency Regularization (AIR) for the outlier set. Intuitively, different classes should adjust their thresholds based on the model's learning state to encourage diverse predictions. Furthermore, as [Wei et al., 2021] discuss, a low-probability subset of the data can be expanded to a neighborhood with a large probability relative to that subset. As a result, by customizing the learning strategy for different data properties, ALT can propagate structural information from the inner set to the outlier set while also enhancing the clustering of the inner set.
The contributions of this paper are summarized as follows:
• We introduce ALT, an adaptive clustering strategy for SFDA. This strategy customizes the learning strategy for data subsets by using dynamic data splits, allowing label information to propagate among subpopulations.
• To combat spurious clustering, we propose a novel Adaptive Local-consistency Regularization (ALR) strategy that estimates ground-truth structural information by re-weighting the neighbors.
• To utilize unlabeled data more effectively, we propose Adaptive Input-consistency Regularization (AIR) from the perspective of label propagation. This regularization improves the clustering performance by propagating structural information from the inner set to the outlier set.
• Empirical evidence demonstrates that the proposed method outperforms the state of the art on three domain adaptation benchmark datasets.
2 Related Work
Source-free Domain Adaptation (SFDA). SFDA aims to adapt to an unlabeled target domain using only the pre-trained source model. Existing approaches try to refine the solution of SFDA by pseudo-labeling [Liang et al., 2020; Qiu et al., 2021; Huang et al., 2021; Ding et al., 2022; Qu et al., 2022; Lee et al., 2022], generating transition domains [Li et al., 2020b; Li et al., 2022; Kundu et al., 2021; Kundu et al., 2022], or local consistency [Yang et al., 2021b; Yang et al., 2021a; Yang et al., 2022]. However, due to domain differences, pseudo-labels may contain noise and cause confirmation bias. Additionally, task-discriminative information and domain-related information are highly nonlinearly entangled, so directly constructing an ideal generic domain from the source model may be difficult. Most closely related to our work is AaD [Yang et al., 2022], which introduced a simple and efficient optimization upper bound for feature clustering of unlabeled data, i.e., aggregating (scattering) similar (dissimilar) features in the feature space. However, AaD uses the K nearest neighbors directly, which may suffer from source bias due to domain shift. In contrast to the above methods, we explore the idea of label propagation to assign unlabeled data the regularization strategies that best suit their data properties, achieving source-free model adaptation.

Label Propagation. Label propagation has been widely used in semi-supervised learning. [Douze et al., 2018] show that label propagation on large image sets outperforms state-of-the-art few-shot learning when few labels are available. [Iscen et al., 2019] employ a transductive label propagation method based on the manifold assumption to predict labels for the entire dataset. [Wei et al., 2021] introduce the "expansion" assumption to analyze label propagation and show learning guarantees for unsupervised and semi-supervised learning. [Cai et al., 2021] extend the expansion assumption to domain adaptation and propose a provably effective framework for domain adaptation based on label propagation. Considering label propagation for SFDA and leveraging the advantages of the expansion assumption, we design a novel adaptive clustering strategy for SFDA that propagates structural information from high-density regions to low-density regions.
Layer                                Ar→Cl  Ar→Pr  Ar→Rw  Cl→Ar  Cl→Pr  Cl→Rw  Pr→Ar  Pr→Cl  Pr→Rw  Rw→Ar  Rw→Cl  Rw→Pr  Avg.
Layer 4 (source)     same class      0.355  0.464  0.378  0.305  0.386  0.341  0.322  0.314  0.363  0.301  0.305  0.422  0.355
                     across classes  0.189  0.135  0.106  0.152  0.125  0.122  0.159  0.201  0.126  0.126  0.163  0.115  0.143
Layer 4 (target)     same class      0.298  0.429  0.373  0.322  0.429  0.373  0.322  0.298  0.373  0.322  0.298  0.429  0.356
                     across classes  0.121  0.119  0.102  0.120  0.119  0.102  0.120  0.121  0.102  0.120  0.121  0.119  0.115
Bottleneck (source)  same class      0.278  0.461  0.407  0.257  0.362  0.323  0.266  0.218  0.357  0.354  0.327  0.510  0.343
                     across classes  0.054  0.026  0.002  0.029  0.014  0.015  0.022  0.070  0.003  0.022  0.102  0.028  0.032
Bottleneck (target)  same class      0.370  0.549  0.550  0.367  0.481  0.484  0.384  0.306  0.507  0.414  0.332  0.536  0.440
                     across classes  0.060  0.014  0.009  0.031  0.012  0.033  0.007  0.061  0.004  0.001  0.040  0.002  0.023
Table 2: Cosine similarity within the same class and across classes on Office-Home.
Methods                           Ar→Cl  Ar→Pr  Ar→Rw  Cl→Ar  Cl→Pr  Cl→Rw  Pr→Ar  Pr→Cl  Pr→Rw  Rw→Ar  Rw→Cl  Rw→Pr  Avg.
AaD (w/ Source Bottleneck Layer)  59.3   79.3   82.1   68.9   79.8   79.5   67.2   57.4   83.1   72.1   58.5   85.4   72.7
AaD (w/ Target Bottleneck Layer)  69.3   85.7   91.4   82.4   86.2   87.4   84.5   67.5   90.5   89.1   68.9   92.1   82.9
Table 3: Comparison with different bottleneck layers on Office-Home.
3 Method
In this section, we first introduce the problem definition, our experiential motivation, and a theoretical analysis. Then, we present ALT from the perspective of label propagation, enforcing local consistency on the inner samples and input consistency on the outlier samples.
3.1 Preliminaries and Analysis
Preliminary. For source-free domain adaptation (SFDA), consider an unlabeled target dataset DT = {xi : xi ∈ X}^{Nt}_{i=1} on the input space X. The task is to adapt a well-trained source model to the target domain without source data, where the target domain shares the same C classes as the source domain. Following [Yang et al., 2021a; Yang et al., 2022], we use a feature extractor h : X → Z and a classifier gc : Z → C. The output of the network is denoted as p(x) = δ(gc(h(x))) ∈ R^C, where δ is the softmax function. Specifically, we retrieve the nearest neighbors for each mini-batch of target features. Let F ∈ R^{Nt×d} denote a memory bank that stores all target features and P ∈ R^{Nt×C} denote the corresponding prediction scores in the memory bank, where d is the feature dimension of the last linear layer:

F = [z1, z2, . . . , zNt],   P = [p1, p2, . . . , pNt]   (1)

where zi is L2-normalized and pi denotes the output softmax probability for zi.
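As a concrete illustration, the memory bank of Eq. 1 and the cosine-similarity neighbor retrieval can be sketched in numpy (a hedged sketch; the function names and toy sizes are ours, not from the paper):

```python
import numpy as np

def build_memory_bank(features, logits):
    """Store L2-normalized features F and softmax scores P (Eq. 1)."""
    F = features / np.linalg.norm(features, axis=1, keepdims=True)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    P = e / e.sum(axis=1, keepdims=True)
    return F, P

def knn_indices(F, batch_idx, K):
    """Indices of the K nearest neighbors (cosine similarity) in the bank."""
    sim = F[batch_idx] @ F.T                              # rows of F are unit-norm
    sim[np.arange(len(batch_idx)), batch_idx] = -np.inf   # exclude the query itself
    return np.argsort(-sim, axis=1)[:, :K]
```

Since the rows of F are unit-normalized, the dot product F[batch_idx] @ F.T is exactly the cosine similarity used throughout the paper.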
Experiential motivation. Most clustering-based SFDA methods suffer from spurious clustering, and the problem worsens under extreme domain shift. To address this issue, we investigate the local consistency of feature representations of the source and target domain models. We carry out the experiments on Office-Home since it exhibits different degrees of domain shift, e.g., Rw vs. Pr and Pr vs. Cl. In this experiment, we use different network structures: (1) Layer 4: the last layer of the backbone network, with 2048 feature dimensions; (2) Bottleneck: only the bottleneck layer in the source model is replaced, with 256 feature dimensions. It is worth noting that most existing clustering-based methods are distance-based. The key idea is the smoothness assumption: the model should produce similar predictions for similar unlabeled data. Therefore, a good feature representation should have intra-class compactness and inter-class separability. Unexpectedly, the same-class and across-class similarities of the source and target models are close at Layer 4, while a huge difference appears at the Bottleneck (see Table 2). This means that adding a bottleneck layer to the model helps reduce redundant features, which improves discriminability and generalizability.

Table 3 shows the learning effect of AaD with only the bottleneck layer replaced. Note that the bottleneck layer of the target model is used only for the analysis in this experiment. We observe that replacing the target-domain bottleneck layer improves the AaD model dramatically, from 72.7% to 82.9%. This indicates that the high-dimensional features from Layer 4 of the source model already contain rich semantic information, whereas the generalization of the features is more reflected in the filtering and permutation of that semantic information. Additionally, on the results of AaD (w/ Source Bottleneck Layer), there is a very strong correlation between prediction accuracy and the ratio of same-class similarity to across-class similarity, with a Spearman rank correlation of 0.92. This observation hints that we can use the correlation between similarity and test accuracy to improve the clustering effect.
Theoretical analysis. Following the expansion assumption in [Wei et al., 2021; Cai et al., 2021], we first define the suitable set of input transformations B(·), which takes the general form B(x) ≜ {x′ : ∃A ∈ A such that ∥x′ − A(x)∥ ≤ r} for a small radius r > 0, where A can be understood as a distance-based neighborhood or a set of data augmentations. Then, we define the neighborhood function N as

N(x) = {x′ | B(x) ∩ B(x′) ≠ ∅},   (2)

and the neighborhood of a set S ⊂ DT as

N(S) ≜ ∪x∈S N(x).   (3)

The regularizer of gc is defined as:

RB(gc) = E_DT [ max_{x′∈N(x)} 1(gc(h(x)) ≠ gc(h(x′))) ]   (4)

The expansion property on the target domain is defined as follows:

Definition 1 (Constant Expansion [Wei et al., 2021]). We say that a distribution Q satisfies (q, ξ)-constant-expansion for some constants q, ξ ∈ (0, 1) if, for all S ⊂ Q satisfying PQ(S) ≥ q, we have PQ[N(S)\S] ≥ min{ξ, PQ[S]}.

Based on the model's learning state, our ALT method divides the target data into the inner set (DI) and the outlier set (DO). By Theorem 3.6 in [Wei et al., 2021], if Q satisfies (1/2, ξ)-constant-expansion, then the classifier gc satisfies

ϵT(gc) ≤ max{ ξ/(ξ − 1), 2 } · µ   s.t. RB(gc) ≤ µ.   (5)

The expansion property implicitly states that if there is minimal overlap between the neighborhoods of DI and DO, labels can be propagated from DI to DO by the regularizer RB(gc).
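As a toy illustration of Eq. 4, the empirical regularizer can be estimated by checking, for each point, whether any point in its neighborhood receives a different prediction (a sketch under our own naming; the per-point neighbor lists stand in for N(x)):

```python
import numpy as np

def consistency_regularizer(preds, neighbors):
    """Empirical R_B(g_c) of Eq. 4: fraction of points whose prediction
    disagrees with at least one point in their neighborhood N(x)."""
    violated = [any(preds[i] != preds[j] for j in nbrs)
                for i, nbrs in enumerate(neighbors)]
    return float(np.mean(violated))
```

Driving this quantity toward zero forces predictions to be constant over each neighborhood, which is what lets labels spread from DI to DO under the expansion property.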
3.2 Overall Scheme
Our ALT method divides the target data DT into an inner set DI and an outlier set DO by a dynamic threshold based on the model's learning state. As mentioned before, the proposed ALT consists of two learning strategies: Adaptive Local-consistency Regularization (ALR) for the inner set and Adaptive Input-consistency Regularization (AIR) for the outlier set.

In Adaptive Local-consistency Regularization, inspired by the fact that the target features from the source model have already formed some semantic structures, we treat mining the target domain's intrinsic structure information as a clustering problem. Since neighbors may provide wrong semantic information, we propose recognizing each sample's clustering weight. As observed in Table 2, the cosine similarity within the same class is generally higher than that across classes. Through this, we can measure neighbor affinity based on cosine similarity. By re-weighting with similarity-based adaptive weights, we are able to promote positive clustering and combat spurious clustering. Meanwhile, to improve separability between clusters, we employ the separation strategy proposed by [Yang et al., 2022] to disperse the predictions of potentially dissimilar features.

In Adaptive Input-consistency Regularization, we propagate the structural information from the inner set to the outlier set via the expansion assumption proposed by [Wei et al., 2021]. Since the outliers in low-density regions are far away from all other points, meaning there is no nearest-neighbor support, we instead seek support from the outliers themselves. Specifically, we perform label propagation by the input-consistency regularization Lair with adaptive thresholds. To encourage the model to produce diverse predictions, we employ the learning state of the model to generate the adaptive thresholds.

The overall optimization objective of ALT can be summarized as follows:

L = Lalr + Lair + λLsep   (6)

where λ is a trade-off parameter.
3.3 Adaptive Local Transfer
Dataset Division. In this work, we employ the model's learning state to adaptively divide the data in DT into the inner set DI and the outlier set DO. As argued in [Zhang et al., 2021], the learning effect of the model can be reflected by the class-level hit rate. Therefore, our principle is that the data division in ALT should be related to the prediction confidence of the unlabeled data on different classes, so as to reflect the class-level learning status. Namely, classes with fewer samples reaching a threshold of prediction confidence are considered to have difficulty in learning local structural information. Moreover, the threshold should increase steadily as the model improves during training. We set the confidence threshold as the exponential moving average (EMA) of the highest confidence level at each training time step:

τt = 1/C if t = 0,   τt = ατt−1 + (1 − α) max(p) otherwise   (7)

where α ∈ (0, 1) is the momentum decay of the EMA and t denotes the t-th iteration. Combining this flexible threshold, the learning effect of class c at time step t is defined as:

σt(c) = Σ_{n=1}^{Nt} 1(max(p) > τt) · 1(arg max(p) = c).   (8)

Then we formulate the adaptive data division weights:

Tt(c) = (1/C) (1 − βt(c)/log βt(c)),   where βt(c) = σt(c)/max_c σt(c).   (9)

Finally, the samples are dynamically grouped into the outlier set at the t-th iteration:

Dt_O = {xi | max(pi) ≥ Tt(arg max(pi)), xi ∈ DT},   (10)

and the inner samples are the remaining target data, i.e., DI = DT \ Dt_O. To this end, we customize learning strategies for different data properties and connect both sets by the expansion assumption.
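The division procedure of Eqs. 7-10 can be sketched as follows (a hedged numpy illustration; the function names and the clipping of β away from 0 and 1 are our numerical choices, not from the paper):

```python
import numpy as np

def update_threshold(tau_prev, probs, alpha, t, C):
    """EMA confidence threshold tau_t of Eq. 7 (probs: (N, C) softmax scores)."""
    if t == 0:
        return 1.0 / C
    return alpha * tau_prev + (1 - alpha) * probs.max()

def split_inner_outlier(probs, tau, C):
    """Eqs. 8-10: class-level learning effect, division weights, and split."""
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    # Eq. 8: count confident hits per class
    sigma = np.array([np.sum((conf > tau) & (pred == c)) for c in range(C)])
    # Eq. 9: normalized learning effect and division weights
    beta = np.clip(sigma / max(sigma.max(), 1), 1e-12, 1 - 1e-12)
    T = (1.0 / C) * (1 - beta / np.log(beta))
    # Eq. 10: samples whose confidence reaches T of their predicted class
    outlier = conf >= T[pred]
    return outlier, ~outlier
```

Note that β/log β diverges as β → 1, so the best-learned class gets a very large weight T; the clip keeps the computation finite.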
Adaptive Local-consistency Regularization. For the inner samples, since their features already carry some semantic information, we can capture the intra-class structure by local-consistency regularization. However, in the source-free domain adaptation problem, the features extracted by the pre-trained source model are typically influenced by the source bias. To promote positive clustering and combat spurious clustering, we need a technique that reveals the affinity of the samples and then re-weights them to approximate the ground-truth structural information. As mentioned earlier, not all neighbors have an equal affinity in clustering. Therefore, we use the distance information to estimate the weights and relax the ranking of samples in low-density regions. The Adaptive Local-consistency Regularization is as follows:

Lalr = − Σ_i^{N_DI} Σ_j^{N_Ci} wij p_i^T p_j   (11)

where Ci denotes the K-nearest-neighbor set of zi. The similarity weight wij in Eq. 11 is the cosine similarity of zi to its neighbor zj, calculated via the memory bank F. For clustering separability, we apply the separation strategy proposed in [Yang et al., 2022] to push zi away from the other features in the mini-batch:

Lsep = Σ_i^{N_DI} Σ_m^{N_Bi} p_i^T p_m   (12)

where Bi denotes the features other than zi in the mini-batch.
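A minimal numpy sketch of the two regularizers (names are ours; the separation term is written so that minimizing the total loss of Eq. 6 decreases p_i^T p_m for the other batch predictions, i.e., pushes them apart):

```python
import numpy as np

def alr_sep_losses(z_batch, p_batch, F, P, nbr_idx, lam):
    """Sketch of Eqs. 11-12: similarity-weighted attraction to the K nearest
    neighbors (ALR) plus separation from the other batch predictions."""
    # Eq. 11: w_ij is the cosine similarity to each neighbor (unit-norm rows)
    w = np.take_along_axis(z_batch @ F.T, nbr_idx, axis=1)   # (n, K)
    dots = np.einsum("ic,ikc->ik", p_batch, P[nbr_idx])      # p_i^T p_j
    L_alr = -(w * dots).sum()
    # Eq. 12: p_i^T p_m summed over m != i in the mini-batch
    G = p_batch @ p_batch.T
    L_sep = G.sum() - np.trace(G)
    return L_alr + lam * L_sep
```

Here nbr_idx holds the memory-bank indices of each sample's K nearest neighbors, so the attraction term only uses stored features and predictions, matching the memory-bank design of Eq. 1.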
Adaptive Input-consistency Regularization. For outliers, since they are under-learned or hard to learn, we use input-consistency regularization to ensure that the model is locally consistent. Specifically, we use a weakly augmented version of xi to generate the pseudo-label ˆpi = P(y | ω(xi)) and enforce consistency against its strongly augmented version Ω(xi). To encourage the model to make diverse predictions, we combine this regularization with the aforementioned class-level confidence thresholds. The Adaptive Input-consistency Regularization is as follows:

Lair = (1/N_DO) Σ_{i=1}^{N_DO} H(ˆpi, qi)   (13)

where qi = P(y | Ω(xi)) denotes the prediction for Ω(xi) and H(·, ·) denotes the cross-entropy.
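Eq. 13 reduces to a mean cross-entropy between the weak-view pseudo-label and the strong-view prediction; a minimal sketch (our naming, with a small eps added for numerical stability):

```python
import numpy as np

def air_loss(p_weak, p_strong, eps=1e-12):
    """Sketch of Eq. 13: mean cross-entropy H(p_hat, q) between the weak-view
    pseudo-label p_hat = P(y | w(x)) and the strong-view prediction q."""
    return -np.mean(np.sum(p_weak * np.log(p_strong + eps), axis=1))
```

When the weak-view pseudo-label is (close to) one-hot, this is the familiar FixMatch-style consistency term: the strong view is trained toward the class the weak view predicts.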
4 Experiments
In this section, we evaluate the proposed method for SFDA on three popular domain adaptation benchmarks and compare it with recent state-of-the-art SFDA methods.
4.1 Datasets
Office-31 [Saenko et al., 2010] is a commonly used dataset for domain adaptation that consists of three domains: Amazon (A), Webcam (W), and DSLR (D), each containing 31 categories of items in an office environment.

Office-Home [Venkateswara et al., 2017] is a standard domain adaptation dataset collected in office and home environments. It consists of four domains, Art (Ar), Clipart (Cl), Product (Pr), and RealWorld (Rw), each covering 65 object categories.

VisDA [Peng et al., 2017] is one of the large benchmark datasets for the domain adaptation task. It contains 12 categories of images from two subsets: a synthetic image domain and a real image domain.
Methods                          Source-free  A→D   A→W   D→W   W→D    D→A   W→A   Avg.
ResNet-50 [He et al., 2016]          ✗        68.9  68.4  96.7  99.3   62.5  60.7  76.1
CDAN [Long et al., 2018]             ✗        92.9  94.1  98.6  100.0  71.0  69.3  87.7
MDD [Zhang et al., 2019]             ✗        90.4  90.4  98.7  99.9   75.0  73.7  88.0
CAN [Kang et al., 2019]              ✗        95.0  94.5  99.1  99.6   70.3  66.4  90.6
SRDC [Tang et al., 2020]             ✗        95.8  95.7  99.2  100.0  76.7  77.1  90.8
FixBi [Na et al., 2021]              ✗        95.0  96.1  99.3  100.0  78.7  79.4  91.4
SHOT [Liang et al., 2020]            ✓        93.1  90.9  98.8  99.9   74.5  74.8  88.7
3C-GAN [Li et al., 2020b]            ✓        92.7  93.7  98.5  99.8   75.3  77.8  89.6
A2Net [Xia et al., 2021]             ✓        94.5  94.0  99.2  100.0  76.7  76.1  90.1
NRC [Yang et al., 2021a]             ✓        96.0  90.8  99.0  100.0  75.3  75.0  89.4
HCL [Huang et al., 2021]             ✓        94.7  92.5  98.2  100.0  75.9  77.7  89.8
CPGA [Qiu et al., 2021]              ✓        94.4  94.1  98.4  99.8   76.0  76.6  89.9
SFDA-DE [Ding et al., 2022]          ✓        96.0  94.2  98.5  99.8   76.6  75.5  90.1
AaD [Yang et al., 2022]              ✓        96.4  92.1  99.1  100.0  75.0  76.5  89.9
feat-mixup [Kundu et al., 2022]      ✓        94.6  93.2  98.9  100.0  78.3  78.9  90.7
ours                                 ✓        96.4  95.1  99.0  100.0  80.0  78.2  91.5
Table 4: Accuracy (%) on Office-31 (ResNet-50).
4.2 Setup
Implementation details. Following the standard protocol for SFDA, we use all labeled source data to obtain the pre-trained models. For Office-31 and Office-Home, the backbone network is ResNet-50 [He et al., 2016]; for VisDA, it is ResNet-101. For a fair comparison, we use the same network structure as SHOT [Liang et al., 2020], NRC [Yang et al., 2021a], and AaD [Yang et al., 2022]. All network parameters are updated by Stochastic Gradient Descent (SGD) with a momentum of 0.9, an initial learning rate of 0.001, and a weight decay of 0.005. The learning rate of the additional layers is 10 times smaller than that of the backbone. We follow G-SFDA [Yang et al., 2021b], NRC [Yang et al., 2021a], and AaD [Yang et al., 2022] for the number of nearest neighbors K: 3 for Office-31 and Office-Home, and 5 for VisDA. To ensure a fair comparison, we set the hyperparameter λ as in previous work [Yang et al., 2022], i.e., λ = (1 + 10 · iter/max_iter)^{−β}, with β set to 0 on Office-Home, 2 on Office-31, and 5 on VisDA. The strong augmentation function used in our experiments is RandAugment [Cubuk et al., 2020].
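The λ schedule above can be written directly (a one-line sketch; the argument names are ours):

```python
def lam_schedule(it, max_it, beta):
    """Trade-off weight from [Yang et al., 2022]: (1 + 10*it/max_it)^(-beta)."""
    return (1 + 10 * it / max_it) ** (-beta)
```

With β > 0 the weight decays from 1 toward (1/11)^β over training, so the separation term matters less as clusters stabilize; β = 0 keeps it constant at 1.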
Baselines.
|
861 |
+
To empirically validate the effectiveness of our
|
862 |
+
approach, we compared the ALT to the following base-
|
863 |
+
line: (1) source-present DA methods: CDAN [Long et al.,
|
864 |
+
2018], MDD [Zhang et al., 2019], CAN [Kang et al., 2019],
|
865 |
+
SAFN [Xu et al., 2019], MCC [Jin et al., 2020], SRDC [Tang
|
866 |
+
et al., 2020], FixBi [Na et al., 2021]; (2) source-free DA
|
867 |
+
methods: SHOT [Liang et al., 2020], 3C-GAN [Li et al.,
|
868 |
+
2020b], A2-Net [Xia et al., 2021], NRC [Yang et al., 2021a],
|
869 |
+
HCL [Huang et al., 2021], CPGA [Qiu et al., 2021], SFDA-
|
870 |
+
DE [Ding et al., 2022], AaD [Yang et al., 2022] and feat-
|
871 |
+
mixup [Kundu et al., 2022].
|
4.3 Results and Analysis

In this section, we present our results and compare them with other methods; they are summarized in Tables 4, 5, and 6, respectively. For a fair comparison, all baseline results were obtained from their original papers or follow-up work.

Comparison with state-of-the-art methods. For Office-31, as shown in Table 4, the proposed ALT yields state-of-the-art performance on 4 out of 6 tasks. Note that our ALT
Methods                          SF   Ar→Cl Ar→Pr Ar→Rw Cl→Ar Cl→Pr Cl→Rw Pr→Ar Pr→Cl Pr→Rw Rw→Ar Rw→Cl Rw→Pr Avg.
ResNet-50 [He et al., 2016]      ✗    34.9  50.0  58.0  37.4  41.9  46.2  38.5  31.2  60.4  53.9  41.2  59.9  46.1
CDAN [Long et al., 2018]         ✗    50.7  70.6  76.0  57.6  70.0  70.0  57.4  50.9  77.3  70.9  56.7  81.6  65.8
MDD [Zhang et al., 2019]         ✗    54.9  73.7  77.8  60.0  71.4  71.8  61.2  53.6  78.1  72.5  60.2  82.3  68.1
SRDC [Tang et al., 2020]         ✗    52.3  76.3  81.0  69.5  76.2  78.0  68.7  53.8  81.7  76.3  57.1  85.0  71.3
FixBi [Na et al., 2021]          ✗    58.1  77.3  80.4  67.7  79.5  78.1  65.8  57.9  81.7  76.4  62.9  86.7  72.7
SHOT [Liang et al., 2020]        ✓    56.9  78.1  81.0  67.9  78.4  78.1  67.0  54.6  81.8  73.4  58.1  84.5  71.6
A2Net [Xia et al., 2021]         ✓    58.4  79.0  82.4  67.5  79.3  78.9  68.0  56.2  82.9  74.1  60.5  85.0  72.8
NRC [Yang et al., 2021a]         ✓    57.7  80.3  82.0  68.1  79.8  78.6  65.3  56.4  83.0  71.0  58.6  85.6  72.2
CPGA [Qiu et al., 2021]          ✓    59.3  78.1  79.8  65.4  75.5  76.4  65.7  58.0  81.0  72.0  64.4  83.3  71.6
SFDA-DE [Ding et al., 2022]      ✓    59.7  79.5  82.4  69.7  78.6  79.2  66.1  57.2  82.6  73.9  60.8  85.5  72.9
feat-mixup [Kundu et al., 2022]  ✓    61.8  81.2  83.0  68.5  80.6  79.4  67.8  61.5  85.1  73.7  64.1  86.5  74.5
AaD [Yang et al., 2022]          ✓    59.3  79.3  82.1  68.9  79.8  79.5  67.2  57.4  83.1  72.1  58.5  85.4  72.7
DaC [Zhang et al., 2022]         ✓    59.1  79.5  81.2  69.3  78.9  79.2  67.4  56.4  82.4  74.0  61.4  84.4  72.8
ALT (ours)                       ✓    58.5  79.8  85.5  74.8  82.5  83.1  73.8  58.4  85.0  78.2  63.3  89.6  76.1

Table 5: Accuracy (%) on Office-Home (ResNet-50). "SF" indicates source-free methods.
Methods                       SF   plane bicycle bus   car   horse knife mcycl person plant sktbrd train truck Per-class
ResNet-101 [He et al., 2016]  ✗    55.1  53.3    61.9  59.1  80.6  17.9  79.7  31.2   81.0  26.5   73.5  8.5   52.4
CDAN [Long et al., 2018]      ✗    85.2  66.9    83.0  50.8  84.2  74.9  88.1  74.5   83.4  76.0   81.9  38.0  73.9
SAFN [Xu et al., 2019]        ✗    93.6  61.3    84.1  70.6  94.1  79.0  91.8  79.6   89.9  55.6   89.0  24.4  76.1
MCC [Jin et al., 2020]        ✗    88.7  80.3    80.5  71.5  90.1  93.2  85.0  71.6   89.4  73.8   85.0  36.9  78.8
FixBi [Na et al., 2021]       ✗    96.1  87.8    90.5  90.3  96.8  95.3  92.8  88.7   97.2  94.2   90.9  25.7  87.2
SHOT [Liang et al., 2020]     ✓    92.6  81.1    80.1  58.5  89.7  86.1  81.5  77.8   89.5  84.9   84.3  49.3  79.6
A2Net [Xia et al., 2021]      ✓    94.0  87.8    85.6  66.8  93.7  95.1  85.8  81.2   91.6  88.2   86.5  56.0  84.3
NRC [Yang et al., 2021a]      ✓    96.8  91.3    82.4  62.4  96.2  95.9  86.1  80.6   94.8  94.1   90.4  59.7  85.9
HCL [Huang et al., 2021]      ✓    93.3  85.4    80.7  68.5  91.0  88.1  86.0  78.6   86.6  88.8   80.0  74.7  83.5
CPGA [Qiu et al., 2021]       ✓    94.8  83.6    79.7  65.1  92.5  94.7  90.1  82.4   88.8  88.0   88.9  60.1  84.1
SFDA-DE [Ding et al., 2022]   ✓    95.3  91.2    77.5  72.1  95.7  97.8  85.5  86.1   95.5  93.0   86.3  61.6  86.5
AaD [Yang et al., 2022]       ✓    97.4  90.5    80.8  76.2  97.3  96.1  89.8  82.9   95.5  93.0   92.0  64.7  88.0
DaC [Zhang et al., 2022]      ✓    96.6  86.8    86.4  78.4  96.4  96.2  93.6  83.8   96.8  95.1   89.6  50.0  87.3
ALT (ours)                    ✓    98.2  91.0    86.4  78.0  97.6  98.8  91.8  84.8   96.6  94.7   93.7  53.3  88.7

Table 6: Accuracy (%) on VisDA (ResNet-101). "SF" indicates source-free methods.
produces competitive results even when compared to source-present methods such as FixBi (91.5% vs. 91.4%). For Office-Home, Table 5 shows that the proposed ALT method achieves the most advanced classification accuracy (76.1%) and the highest results on 7 out of 12 tasks. As is well known, in clustering-based methods the clustering error increases with the number of object classes; it is therefore difficult for local consistency-based SFDA methods to accurately capture the target structure information. Our ALT, however, employs input consistency regularization to efficiently utilize unlabeled data through label propagation, which is the primary reason for our success on Office-Home. Moreover, ALT beats several source-present DA methods, such as SRDC and FixBi, by a large margin, which means that even without access to the source data, our method can still exploit the target structure information to achieve better adaptation. Similar observations on VisDA can be found in Table 6. The reported results sufficiently demonstrate the superiority of our method.
Comparison with clustering-based methods. As discussed in related work, NRC uses reciprocal nearest neighbors to measure clustering affinity. The improvement of our method over NRC indicates that our adaptive local consistency regularization makes more effective use of intra-class structural information. Compared with AaD, our ALT improves the accuracy by 1.6% on Office-31 and by 3.1% on Office-Home, indicating that the co-training of the local consistency regularizer and the input consistency regularizer performs reliable label propagation through the subpopulations of unlabeled data.

AaD  L_ALR  L_AIR |  A→D   A→W   D→A   W→A   Avg.
 ✓                |  96.4  92.1  75.0  76.5  85.0
 ✓     ✓          |  95.4  93.3  77.9  77.6  86.1
 ✓            ✓   |  95.8  94.7  79.4  77.8  86.9
 ✓     ✓      ✓   |  96.4  95.1  80.0  78.2  87.4

Table 7: Ablation study on Office-31.
Visualization. To demonstrate the superiority of our method, we show the t-SNE feature visualization and the confusion matrix on Office-31 (see Figure 2). From Figures 2(a-d), we can observe that the clustering of the target features is more compact after adaptation by ALT. Figures 2(b) and (d) illustrate that ALT can achieve good model adaptation whether the model is pre-trained on a large-scale or a small-scale source domain. In particular, when significant domain differences exist (as shown in Figure 2(c)), abundant target features are jumbled together, so that the model has difficulty capturing the local structure. The flexible data division of
[Figure 2: (a) source-only (A→W); (b) ALT (A→W); (c) source-only (D→A); (d) ALT (D→A); (e) source-only (D→A); (f) ALT (D→A). The t-SNE and confusion matrix visualization. Figures (a-d): t-SNE visualization of the final prediction layer activation for the source model and ALT, where red and blue points denote the source and target domains, respectively. Note that the source samples are only used to plot the t-SNE. Figures (e) and (f): the confusion matrix visualization for the source model and ALT. Best viewed in color.]
our method thus customizes the learning strategy for different data properties, which benefits the estimation of ground-truth structural information. The comparison of Figures 2(e) and (f) further demonstrates that our method increases prediction diversity by adaptively adjusting the training on under-learned or hard-to-learn (i.e., outlier) samples.
Ablation Study. To evaluate the contributions of the different components of our work, we conduct ablation studies for ALT on Office-31. We investigate different combinations of the two parts: Adaptive Local-consistency Regularization (ALR) and Adaptive Input-consistency Regularization (AIR). Compared to our method, AaD can be regarded as the baseline. As shown in Table 7, each part of our method contributes to improving performance. It is not difficult to find that AIR contributes the most to the improvement of accuracy, with the performance increasing from 85.0% to 86.9%, which shows the effectiveness of label propagation. ALR also improves the average performance by 1.1% compared to the base model, confirming that the distance-based reweighting improves the quality of the neighbors. For easy transfer tasks, target features from pre-trained source models naturally have good clustering performance. In this case, ALR dominates the loss optimization, with AIR helping to improve model training for under-learned categories. When the target feature distribution is scattered, the model benefits from AIR to ensure smoothness, while the extended property amplifies it to global consistency within the same class, allowing the limited structural information captured by ALR to be propagated among subpopulations. Overall, ALT increases the baseline AaD by an average of 2.4%. This shows that ALR and AIR are complementary.
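As a quick sanity check, the average accuracies reported in Table 7 can be recomputed from the per-task numbers (plain arithmetic; the row labels are our reading of the ablation settings, matching the prose above):

```python
# Recompute the Office-31 ablation averages in Table 7 from the four
# per-task accuracies (A→D, A→W, D→A, W→A). Values are copied from the
# table; this is only arithmetic, not part of the method.
rows = {
    "AaD":         [96.4, 92.1, 75.0, 76.5],  # reported avg. 85.0
    "AaD+ALR":     [95.4, 93.3, 77.9, 77.6],  # reported avg. 86.1
    "AaD+AIR":     [95.8, 94.7, 79.4, 77.8],  # reported avg. 86.9
    "AaD+ALR+AIR": [96.4, 95.1, 80.0, 78.2],  # reported avg. 87.4
}
for name, accs in rows.items():
    avg = sum(accs) / len(accs)
    print(f"{name}: {avg:.1f}")
```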
5 Conclusions

In this paper, we propose a novel approach called Adaptive Local Transfer (ALT), which tries to achieve efficient feature clustering from the perspective of label propagation. ALT divides the target data into inner and outlier samples based on an adaptive threshold of the learning state, and applies a customized learning strategy to best fit the data properties. To mitigate the source bias, on the one hand, considering the clustering affinity, we propose Adaptive Local-consistency Regularization (ALR) to reduce spurious clustering by re-weighting neighbors. On the other hand, Adaptive Input-consistency Regularization (AIR) is used at outlier points to propagate structural information from high-density to low-density regions, thus achieving high accuracy with respect to the ground-truth labels. Moreover, this co-training process can encourage positive clustering and combat spurious clustering. Experimental results on three popular benchmarks verify that our proposed model outperforms the state of the art in various SFDA tasks. For future work, we plan to extend our ALT method to source-free open-set and partial-set domain adaptation.
Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants 61871186 and 61771322.
References

[Cai et al., 2021] Tianle Cai, Ruiqi Gao, Jason D. Lee, and Qi Lei. A theory of label propagation for subpopulation shift. In ICML, volume 139 of Proceedings of Machine Learning Research, pages 1170–1182. PMLR, 2021.

[Cubuk et al., 2020] Ekin Dogus Cubuk, Barret Zoph, Jonathon Shlens, and Quoc Le. RandAugment: Practical automated data augmentation with a reduced search space. In NeurIPS, 2020.
[Ding et al., 2022] Ning Ding, Yixing Xu, Yehui Tang, Chao Xu, Yunhe Wang, and Dacheng Tao. Source-free domain adaptation via distribution estimation. In CVPR, pages 7202–7212. IEEE, 2022.

[Douze et al., 2018] Matthijs Douze, Arthur Szlam, Bharath Hariharan, and Hervé Jégou. Low-shot learning with large-scale diffusion. In CVPR, pages 3349–3358. Computer Vision Foundation / IEEE Computer Society, 2018.

[Ganin et al., 2016] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor S. Lempitsky. Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17:59:1–59:35, 2016.

[He et al., 2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778. IEEE Computer Society, 2016.

[Huang et al., 2021] Jiaxing Huang, Dayan Guan, Aoran Xiao, and Shijian Lu. Model adaptation: Historical contrastive learning for unsupervised domain adaptation without source data. In NeurIPS, pages 3635–3649, 2021.

[Iscen et al., 2019] Ahmet Iscen, Giorgos Tolias, Yannis Avrithis, and Ondrej Chum. Label propagation for deep semi-supervised learning. In CVPR, pages 5070–5079. Computer Vision Foundation / IEEE, 2019.

[Jin et al., 2020] Ying Jin, Ximei Wang, Mingsheng Long, and Jianmin Wang. Minimum class confusion for versatile domain adaptation. In ECCV (21), volume 12366 of Lecture Notes in Computer Science, pages 464–480. Springer, 2020.

[Kang et al., 2019] Guoliang Kang, Lu Jiang, Yi Yang, and Alexander G. Hauptmann. Contrastive adaptation network for unsupervised domain adaptation. In CVPR, pages 4893–4902. Computer Vision Foundation / IEEE, 2019.

[Kundu et al., 2021] Jogendra Nath Kundu, Akshay R. Kulkarni, Amit Singh, Varun Jampani, and R. Venkatesh Babu. Generalize then adapt: Source-free domain adaptive semantic segmentation. In ICCV, pages 7026–7036. IEEE, 2021.

[Kundu et al., 2022] Jogendra Nath Kundu, Akshay R. Kulkarni, Suvaansh Bhambri, Deepesh Mehta, Shreyas Anand Kulkarni, Varun Jampani, and Venkatesh Babu Radhakrishnan. Balancing discriminability and transferability for source-free domain adaptation. In ICML, volume 162 of Proceedings of Machine Learning Research, pages 11710–11728. PMLR, 2022.

[Lee et al., 2022] Jonghyun Lee, Dahuin Jung, Junho Yim, and Sungroh Yoon. Confidence score for source-free unsupervised domain adaptation. In ICML, volume 162 of Proceedings of Machine Learning Research, pages 12365–12377. PMLR, 2022.

[Li et al., 2020a] Rui Li, Wenming Cao, Si Wu, and Hau-San Wong. Generating target image-label pairs for unsupervised domain adaptation. IEEE Trans. Image Process., 29:7997–8011, 2020.

[Li et al., 2020b] Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation: Unsupervised domain adaptation without source data. In CVPR, pages 9638–9647. Computer Vision Foundation / IEEE, 2020.

[Li et al., 2022] Jingjing Li, Zhekai Du, Lei Zhu, Zhengming Ding, Ke Lu, and Heng Tao Shen. Divergence-agnostic unsupervised domain adaptation by adversarial attacks. IEEE Trans. Pattern Anal. Mach. Intell., 44(11):8196–8211, 2022.

[Liang et al., 2020] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In ICML, volume 119 of Proceedings of Machine Learning Research, pages 6028–6039. PMLR, 2020.

[Long et al., 2018] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Conditional adversarial domain adaptation. In NeurIPS, pages 1647–1657, 2018.

[Na et al., 2021] Jaemin Na, Heechul Jung, Hyung Jin Chang, and Wonjun Hwang. FixBi: Bridging domain spaces for unsupervised domain adaptation. In CVPR, pages 1094–1103. Computer Vision Foundation / IEEE, 2021.

[Peng et al., 2017] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. VisDA: The visual domain adaptation challenge. CoRR, abs/1710.06924, 2017.

[Qiu et al., 2021] Zhen Qiu, Yifan Zhang, Hongbin Lin, Shuaicheng Niu, Yanxia Liu, Qing Du, and Mingkui Tan. Source-free domain adaptation via avatar prototype generation and adaptation. In IJCAI, pages 2921–2927. ijcai.org, 2021.

[Qu et al., 2022] Sanqing Qu, Guang Chen, Jing Zhang, Zhijun Li, Wei He, and Dacheng Tao. BMD: A general class-balanced multicentric dynamic prototype strategy for source-free domain adaptation. In ECCV (34), volume 13694 of Lecture Notes in Computer Science, pages 165–182. Springer, 2022.

[Saenko et al., 2010] Kate Saenko, Brian Kulis, et al. Adapting visual category models to new domains. In ECCV, 2010.

[Tang et al., 2020] Hui Tang, Ke Chen, and Kui Jia. Unsupervised domain adaptation via structurally regularized deep clustering. In CVPR, pages 8722–8732. Computer Vision Foundation / IEEE, 2020.

[Venkateswara et al., 2017] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In CVPR, pages 5385–5394. IEEE Computer Society, 2017.

[Wei et al., 2021] Colin Wei, Kendrick Shen, Yining Chen, and Tengyu Ma. Theoretical analysis of self-training with deep networks on unlabeled data. In ICLR. OpenReview.net, 2021.

[Wu et al., 2020] Yuan Wu, Diana Inkpen, and Ahmed El-Roby. Dual mixup regularized learning for adversarial domain adaptation. In ECCV (29), volume 12374 of Lecture Notes in Computer Science, pages 540–555. Springer, 2020.

[Xia et al., 2021] Haifeng Xia, Handong Zhao, and Zhengming Ding. Adaptive adversarial network for source-free domain adaptation. In ICCV, pages 8990–8999. IEEE, 2021.

[Xu et al., 2019] Ruijia Xu, Guanbin Li, Jihan Yang, and Liang Lin. Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In ICCV, pages 1426–1435. IEEE, 2019.

[Yang et al., 2021a] Shiqi Yang, Yaxing Wang, Joost van de Weijer, Luis Herranz, and Shangling Jui. Exploiting the intrinsic neighborhood structure for source-free domain adaptation. In NeurIPS, pages 29393–29405, 2021.

[Yang et al., 2021b] Shiqi Yang, Yaxing Wang, Joost van de Weijer, Luis Herranz, and Shangling Jui. Generalized source-free domain adaptation. In ICCV, pages 8958–8967. IEEE, 2021.

[Yang et al., 2022] Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, et al. Attracting and dispersing: A simple approach for source-free domain adaptation. In Advances in Neural Information Processing Systems, 2022.

[Zhang et al., 2019] Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael I. Jordan. Bridging theory and algorithm for domain adaptation. In ICML, volume 97 of Proceedings of Machine Learning Research, pages 7404–7413. PMLR, 2019.

[Zhang et al., 2021] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. FlexMatch: Boosting semi-supervised learning with curriculum pseudo labeling. In NeurIPS, pages 18408–18419, 2021.

[Zhang et al., 2022] Ziyi Zhang, Weikai Chen, Hui Cheng, Zhen Li, Siyuan Li, Liang Lin, and Guanbin Li. Divide and contrast: Source-free domain adaptation via adaptive contrastive learning. In Advances in Neural Information Processing Systems, 2022.

[Zhong et al., 2021] Li Zhong, Zhen Fang, Feng Liu, Jie Lu, Bo Yuan, and Guangquan Zhang. How does the combined risk affect the performance of unsupervised domain adaptation approaches? In AAAI, pages 11079–11087. AAAI Press, 2021.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS

Gene Teams are on the Field: Evaluation of Variants in Gene-Networks Using High Dimensional Modelling

Suha Tuna, Cagri Gulec, Emrah Yucesan, Ayse Cirakoglu, Yelda Tarkan Arguden*

Abstract—In medical genetics, each genetic variant is evaluated as an independent entity regarding its clinical importance. However, in most complex diseases, variant combinations in specific gene networks, rather than the presence of a particular single variant, predominate. In the case of complex diseases, disease status can be evaluated by considering the success level of a team of specific variants. We propose a high dimensional modelling based method to analyse all the variants in a gene network together. To evaluate our method, we selected two gene networks, mTOR and TGF-β. For each pathway, we generated 400 control and 400 patient group samples. The mTOR and TGF-β pathways contain 31 and 93 genes of varying sizes, respectively. We produced Chaos Game Representation images for each gene sequence to obtain 2-D binary patterns. These patterns were arranged in succession, and a 3-D tensor structure was obtained for each gene network. Features for each data sample were acquired by applying Enhanced Multivariance Products Representation to the 3-D data. The features were split into training and testing vectors. The training vectors were employed to train a Support Vector Machines classification model. We achieved classification accuracies above 96% and 99% for the mTOR and TGF-β networks, respectively, using a limited amount of training samples.

Index Terms—Gene network analysis, high dimensional modelling, chaos game representation, enhanced multivariance products representation, support vector machines

1 INTRODUCTION
Recently, in parallel with the development of new technologies in genetics, it has become possible to study the human genome holistically. Previously, genes were evaluated as single entities (we can call those times the "analysis era" of genetics); now the "synthesis era" is born, in which genes are examined as parts of a network made up of the whole genome [1], [2], [3]. Albert-Laszlo Barabasi accounted for this situation as follows: "disease phenotype is rarely a consequence of an abnormality in a single effector gene product, but reflects various pathobiological processes that interact in a complex network." [1]. In this remarkable concept, genes that encode proteins involved in a pathway or known to be associated with a particular disease are considered a "gene network". Therefore, gene network analysis is now more reasonable and comprehensible than examining only single genes or pathways. The importance of this approach is evident in understanding the biogenesis of polygenic-multifactorial diseases, which are commonly observed in the population and in which the cumulative effect of many mildly acting genes is determinative. Unlike single-gene disorders, in polygenic/multifactorial diseases there is no singular genetic change (mutation) in a single underlying gene. In addition to environmental factors, a combination of genetic changes called polymorphisms or variants plays a role in the emergence of such diseases [1], [2], [3], [4], [5], [6].

As an analogy, a gene network may be considered a "team". The success of the team relies on the efficiency of the metabolic pathway that contains the proteins encoded by the genes making up the gene network. "Team success" is directly related to all players, not just one; the performance of any team depends on the harmonious working of its individual players. The individual players of a "gene team" are the specific variants of each of the genes in the network that a person carries. Depending on the efficiency of the variant combination, that individual is either healthy or affected in terms of a specific trait. This combinatorial effect of the genes contributes to the mechanisms of penetrance and expressivity [7], [8]. If a person has a "marvelous" variant combination, like a "dream team" of genes, then that person will be superior in this trait. When there are compensative genes in the gene network for a disease-causing mutation, the mutant gene's deleterious effect can be suppressed, and the phenotype appears normal. On the contrary, when many "weak" variants come together in the network, the phenotype can be worse than expected from each of these variants alone. This is already known as one of the mechanisms behind the emergence of polygenic multifactorial traits [9], [10].

Therefore, when a gene network is determined, it is desirable to be able to identify the combination of variants in that network. If the differences between the gene network variant combinations of individuals could be determined, then it could be possible to foresee the susceptibility of an individual to the related diseases [7], [8]. The problem with this approach is the insufficiency of the current techniques for examining a gene network as a team.

• S. Tuna is with the Department of Computational Science and Engineering, Informatics Institute, Istanbul Technical University, 34469, Türkiye.
• C. Gulec is with the Department of Medical Genetics, Istanbul Faculty of Medicine, Istanbul University, 34093, Türkiye.
• E. Yucesan is with the Department of Neuroscience, Institute of Neurological Sciences, Istanbul University-Cerrahpasa, 34098, Türkiye.
• A. Cirakoglu and Y. Tarkan Arguden are with the Department of Medical Biology, Faculty of Medicine, Istanbul University-Cerrahpasa, 34098, Türkiye.
• *The corresponding author. E-mail: [email protected]

Manuscript received ..., ...; revised ..., ...

arXiv:2301.11763v1 [cs.LG] 27 Jan 2023
Currently, Genome-wide association study (GWAS) techniques are used to detect genomic variants that may be responsible for the predisposition to complex diseases. These studies enable the determination of the most significant variants, in terms of coexistence with the related trait/disease, among the variants commonly found in people with a particular trait or disease. Using GWAS and bioinformatics methods, defining the gene networks underlying certain traits/diseases is possible. In these early days of the "holistic genetics" era, a lot of research has focused on this task [1], [2], [3], [4], [5], [6], [11].

One of the many application areas of the results obtained from GWAS studies is the prediction of an individual's susceptibility to a certain physical or mental illness based on their genetic profile. The Polygenic Risk Score (PRS) is the standard method used for this purpose, and it relies on the SNPs (Single Nucleotide Polymorphisms) that were determined as risky for that particular illness/trait by GWAS studies. The weighted total score over all risk SNPs is calculated using the effect sizes determined in the GWAS study as the weights of the SNPs. Thus, a person-specific Polygenic Risk Score is determined. Although PRS is a method that can be used as a biomarker to assess individual susceptibility to diseases, some limitations currently make its clinical application difficult. One of these is the fact that GWAS studies are still limited to specific ethnic groups, and sometimes there are groups with different characteristics even within the same population. Another limitation is that many phenotypic traits are affected by too many genes (polygenicity). Besides, there is no consensus on which of the various methods used to calculate PRS is the most appropriate. In particular, the necessity of finding new strategies to overcome the polygenicity problem has been emphasized [6], [11], [12], [13], [14], [15], [16], [17], [18].

Methods such as GWAS are highly effective in identifying variants, in genes of a particular disease-associated pathway, that are common to most people with the disease. However, these methods are insufficient for determining patient-specific combinations of the other variants in pathway genes. Regardless of whether they carry risky variants, clinical differences between individuals with complex diseases are considered to be the result of patient-specific combinations of variants. Papadimitriu et al. report a machine learning approach to identify digenic or bilocus variant combinations [19]. Nevertheless, it is emphasized that "the large number of known variants gives rise to an immense number of combinations, presenting mathematical, statistical, and computational challenges" [20]. Therefore, with the current techniques, it is not possible to study the combinatorial effects of more than a few variants, let alone all of them. It is obvious that new approaches are required to overcome this problem.

Here, we propose a high dimensional modelling based method to analyse all the variants in a gene network together, applying Chaos Game Representation (CGR) [21], [22], [23], [24] as a pre-processing tool to the sequencing data of the genes in the network, together with a statistics-based high dimensional feature extraction technique named Enhanced Multivariance Products Representation (EMPR) [25], [26], [27], [28]. Then, Support Vector Machines (SVM), a flexible and efficient classification algorithm [29], was utilized to assign the gene network of an individual, based on their sequence variants, to the control or patient group. To test our approach, we created exemplary mTOR and TGF-β sub-networks consisting of 31 and 93 genes, respectively.
2 APPROACH

The biggest problem in processing variant combinations in gene networks is the amount of sequence data. Therefore, to facilitate analysis, we considered applying CGR, a technique that converts 1-D sequence data into a 2-D pattern form [21], [22], [23]. The rationale was that the variants in each sequence would result in slightly different CGR patterns, and computationally sorting out these pattern differences would be easier than comparing sequences. Afterwards, we had a 2-D pattern in hand for each gene in the network, and these patterns needed to be examined together as a team. To do that, we aligned the CGR patterns in succession to create a cube as a 3-D tensor, which represents an individual's gene network as a single entity. Then, we adopted EMPR to decompose this 3-D array and represent it in terms of lower-dimensional features, with the aim of distinguishing the control and patient groups according to their variant combinations [28].

To examine the efficacy and the distinguishing capability of our approach, we generated a data set for two gene networks, the mTOR and TGF-β pathways, each containing 800 individual 3-D tensors after applying CGR and aligning the images as a CGR cube. Half of these tensors stand for the control group, while the other half denotes the patient group. We split both groups into training and testing parts. Then, we fed the SVM binary classification algorithm with three EMPR vector components of the training data and generated the learning model. Finally, we calculated the overall accuracy by predicting the class (control/patient) of each testing feature according to the constructed SVM model [29].
3 METHODS

3.1 Data Source and Recruitment

The mTOR [30] and TGF-β [31] pathway genes were selected based on the KEGG database (https://www.genome.jp/kegg/) [32]. Genomic sequences of the pathway genes were fetched from the GRCh37 human genome database based on their genomic coordinates recorded in the NCBI database (https://www.ncbi.nlm.nih.gov/projects/genome/guide/human/index.shtml).

As represented in Fig. 1, reference sequences composed of each gene sequence were used as a template to generate 400 control and 400 patient sequences for each pathway. In the first step, we created two lists of integers for both groups that represent the positions of polymorphic and pathogenic variants (the 'polymorphic positions list' and the 'pathogenic positions list'). Each integer in these lists was randomly chosen to lie within certain consecutive intervals and to be exclusive to the other list. The interval length was set to 100 for polymorphic and 200 for pathogenic variants (any integer within the ranges 1-100, 100-200, 200-300, and so on, for the 'polymorphic positions list', and any integer within the ranges 1-200, 200-400, 400-600, and so on, for the 'pathogenic positions list'). In the second step, the reference base at each position in the 'polymorphic positions list' was replaced by the variant base in 40% of both control and patient sequences. The alterations at these positions were accepted as non-pathogenic and/or common variants with 0.40 minor allele frequency in both groups. In the next step, the reference base at each position in the 'pathogenic positions list' was replaced by the variant base in 25% of control sequences and 30% of patient sequences. The alterations at these positions were accepted as disease-associated/pathogenic variants with 0.25 allele frequency in the control group and 0.30 allele frequency in the patient group. In all these steps, we set the minor allele frequency (MAF) higher because, contrary to single-gene disorders, where rare variants (MAF < 0.01) are causative, complex disorders are the consequences of combinations of variants with higher allele frequencies (MAF > 0.01). All variant sequences were in the haploid state. The properties of the datasets are summarised in Supp. Table 1 and Supp. Table 2.
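The position-list and substitution scheme above can be sketched as follows. The window lengths (100 and 200) and the carrier frequencies (0.40, 0.25, 0.30) mirror the description; the `make_positions` and `mutate` helpers, the choice of substitute base, and the toy reference length are illustrative assumptions, not the authors' actual generator.

```python
import random

def make_positions(seq_len, step, forbidden):
    """Pick one random position per consecutive window of length `step`,
    avoiding positions already claimed by the other list."""
    positions = []
    for start in range(0, seq_len - step + 1, step):
        p = random.randrange(start, start + step)
        while p in forbidden:
            p = random.randrange(start, start + step)
        positions.append(p)
    return positions

def mutate(ref_seq, positions, carrier_prob):
    """Substitute the reference base at each listed position with
    probability `carrier_prob` (the allele frequency in that group)."""
    seq = list(ref_seq)
    for p in positions:
        if random.random() < carrier_prob:
            seq[p] = random.choice([b for b in "ACGT" if b != seq[p]])
    return "".join(seq)

random.seed(1)
ref = "".join(random.choice("ACGT") for _ in range(1000))
path_pos = make_positions(len(ref), 200, set())          # pathogenic positions
poly_pos = make_positions(len(ref), 100, set(path_pos))  # polymorphic positions
control = mutate(mutate(ref, poly_pos, 0.40), path_pos, 0.25)
patient = mutate(mutate(ref, poly_pos, 0.40), path_pos, 0.30)
print(len(control) == len(ref))  # True
```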
Fig. 1. Fetching and pre-processing the genomic sequence data
The known available datasets, e.g., 1000 Genomes, GENESIS, Solve-RD, Munich Exome (EVAdB), Baylor-Hopkins Center for Mendelian Genomics (BH-CMG), 100KGP, GeneDx, and the NHLBI-GO Exome Sequencing Project (ESP) databases, were deliberately not used, to avoid any bias (it is difficult to distinguish patient from control data in them). Therefore, we created datasets that we arranged according to the percentage of the allele frequency. Since no real human samples or data were used in the study, ethics committee approval was not considered necessary.

To evaluate the efficiency of the proposed method, both the control and patient groups of each pathway dataset were split into two independent, non-intersecting parts. The first part was considered the training data, while the latter was called the testing data. These separate subsets for each pathway dataset are symbolised as Dtrain and Dtest, respectively. Dtrain was collected by randomly selecting a certain number of networks among the 400 control and 400 patient networks. For the classification phase, the number of elements in Dtrain was assumed to be less than the number of networks in Dtest. Dtrain was utilised for training the classification algorithm, while Dtest was employed to verify the efficacy of the training model. To provide a convenient learning model and determine whether a given network in Dtest belongs to the control or the patient class, we applied a new feature extraction approach based on CGR and EMPR.
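The split-and-classify step can be sketched with scikit-learn, assuming EMPR-derived feature vectors are already in hand (random stand-ins are used below). The feature dimensionality, the class-dependent shift, and the 25% training fraction are illustrative assumptions; only the group sizes (400 + 400) and the Dtrain < Dtest constraint come from the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-ins for EMPR-derived features: 400 control + 400 patient networks,
# 32 features each, with a small class-dependent mean shift.
X = np.vstack([rng.normal(0.0, 1.0, (400, 32)),
               rng.normal(0.4, 1.0, (400, 32))])
y = np.array([0] * 400 + [1] * 400)   # 0 = control, 1 = patient

# Dtrain is deliberately smaller than Dtest, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.25, stratify=y, random_state=0)

model = SVC(kernel="rbf").fit(X_train, y_train)
print(round(model.score(X_test, y_test), 2))
```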
3.2 Chaos Game Representation

CGR is an efficient technique that converts long 1-D genomic sequences into 2-D images, i.e., patterns (see Fig. 2) [21], [22], [23]. In this manner, CGR makes it possible to pull significant parts of the data out of the corresponding gene sequence using a feature extraction method suitable for images.
Fig. 2. 700×700 CGR images corresponding to four genes in the mTOR and TGF-β pathways: RPTOR of mTOR (top-left), GSK3B of mTOR (top-right), SMAD6 of TGF-β (bottom-left), SMAD7 of TGF-β (bottom-right)
In the DNA sequence case, the CGR of a sequence is nothing but a square-shaped binary image whose bottom-left corner overlaps with the origin of the 2-D Cartesian space. If Adenine is assumed to be depicted by the origin, i.e., the point (0, 0), then Cytosine is placed at the point (0, 1), Guanine is located at (1, 0), and Thymine stands at the final corner, (1, 1). The pattern is initialized with a point at the centre of the image, (0.5, 0.5). The first point of the pattern is placed halfway between the centre and the corner corresponding to the first nucleotide of the sequence. In general, the i-th point of the image is placed in the middle of the (i − 1)-th point and the vertex corresponding to the i-th nucleotide.
Formally, if the horizontal and vertical coordinates of the i-th nucleotide of a given sequence are defined as $X_i$ and $Y_i$, respectively, these entities are determined using the following linear equations

$$X_i = \frac{1}{2}\left(X_{i-1} + C^{(x)}_i\right), \qquad Y_i = \frac{1}{2}\left(Y_{i-1} + C^{(y)}_i\right) \qquad (1)$$

where $X_0 = Y_0 = 0.5$. In (1), $C^{(x)}_i$ and $C^{(y)}_i$ stand for the coordinates of the pre-defined corner of the unit square, that is $[\,0, 1\,]^2$, related to the corresponding nucleotide mentioned above.
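The recurrence in (1) can be sketched directly; the corner assignment follows the text (A at the origin, C at (0, 1), G at (1, 0), T at (1, 1)):

```python
# Corner coordinates C_i per the text: A=(0,0), C=(0,1), G=(1,0), T=(1,1).
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 0.0), "T": (1.0, 1.0)}

def cgr_points(sequence):
    """Iterate Eq. (1): each point lies halfway between the previous
    point and the corner of the current nucleotide."""
    x, y = 0.5, 0.5                      # X0 = Y0 = 0.5
    points = []
    for base in sequence:
        cx, cy = CORNERS[base]
        x, y = 0.5 * (x + cx), 0.5 * (y + cy)
        points.append((x, y))
    return points

pts = cgr_points("ACGT")
print(pts[0])  # (0.25, 0.25): halfway from the centre to A's corner
```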
The resolution of the CGR image is adjustable and may affect the representation quality of the gene sequence under consideration. For instance, if the size of the CGR image is selected too small, some of the points can overlap, which prevents the overlapping points from contributing to the whole pattern. On the other hand, if the size of the image is selected too large, unnecessary gaps between the points may occur, and the representation eligibility of the CGR pattern is influenced negatively. Thus, fixing the optimal resolution of a CGR image is also crucial for improving the representation quality.
To process the pathways under consideration as a whole and extract meaningful features using Enhanced Multivariance Products Representation, all the CGR images of the genes in a pathway are aligned in succession. Thus, a 3-D representation of any individual mTOR or TGF-β gene network is constructed. The emerging 3-D data is called the CGR cube of a gene network and is suitable for processing by the proposed high-dimensional modelling method.
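A minimal sketch of the cube construction: each gene sequence is rendered as a CGR image and the images are stacked along a third axis, yielding the n1 × n2 × n3 tensor for one individual's network. The toy sequences and the small `size` are placeholders; the paper uses 700×700 patterns.

```python
import numpy as np

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 0.0), "T": (1.0, 1.0)}

def cgr_image(seq, size):
    """Binary CGR pattern of one gene sequence (Eq. (1))."""
    img = np.zeros((size, size), dtype=np.uint8)
    x, y = 0.5, 0.5
    for base in seq:
        cx, cy = CORNERS[base]
        x, y = 0.5 * (x + cx), 0.5 * (y + cy)
        img[min(int(y * size), size - 1), min(int(x * size), size - 1)] = 1
    return img

def cgr_cube(gene_sequences, size=700):
    """Align the per-gene CGR patterns in succession along a third axis."""
    return np.stack([cgr_image(s, size) for s in gene_sequences], axis=2)

cube = cgr_cube(["ACGTACGTGGTT", "TTGACCA", "GGGTACCA"], size=64)
print(cube.shape)  # (64, 64, 3)
```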
3.3 Enhanced Multivariance Products Representation

Enhanced Multivariance Products Representation (EMPR) is a high dimensional data decomposition method [25], [26], [27], [28]. It enables the representation of multidimensional data in terms of lower-dimensional entities. Accordingly, EMPR can be considered a finite series of lower-dimensional components. This aspect of EMPR reduces the dimensionality of multidimensional data and simplifies further analysis.

In scientific experiments and applications, one of the crucial challenges in analysing data is the "curse of dimensionality" [33]. Governing this issue by reducing the number of dimensions therefore becomes critical, and EMPR can be regarded as a suitable technique for addressing multidimensional problems.

EMPR is an extension of a well-known statistical method called High Dimensional Model Representation (HDMR) [34], [35]. HDMR was invented for decomposing and decorrelating the inputs in multidimensional input-output systems [34]. In a general multidimensional system, each input, i.e., dimension, contributes to the behaviour of the output individually or cooperatively with the other inputs [35], [36], [37]. Determining these contributions is significant for evaluating the corresponding model for meta-modelling [38], [39], sensitivity analysis [40], reduction [41], etc.

Like HDMR, EMPR is capable of dealing with N-D data, but in this study the 3-D case is considered without loss of generality; all formulations presented here can be generalised to the N-D case without difficulty. In the remainder of this section, EMPR for Gene Network Analysis (GNA) is introduced and discussed.
Let G denote the 3-D CGR cube and assume its size is $n_1 \times n_2 \times n_3$. This means the network G has $n_3$ gene sequences, each of which has a different length and is represented through an $n_1 \times n_2$ binary image, thanks to the CGR method. Then, the EMPR expansion of the CGR cube can be explicitly given as follows

$$
G = g^{(0)} \bigotimes_{r=1}^{3} s^{(r)}
+ \sum_{i=1}^{3} g^{(i)} \otimes \Biggl(\, \bigotimes_{\substack{r=1 \\ r \neq i}}^{3} s^{(r)} \Biggr)
+ \sum_{\substack{i,j=1 \\ i<j}}^{3} g^{(i,j)} \otimes \Biggl(\, \bigotimes_{\substack{r=1 \\ r \neq i,j}}^{3} s^{(r)} \Biggr)
+ g^{(1,2,3)}. \qquad (2)
$$
In formula (2), $g^{(0)}$, $g^{(i)}$, and $g^{(i,j)}$ denote the zero-way, the one-way, and the two-way EMPR components, respectively, and $\otimes$ stands for the outer product operation [42]. The 3-D EMPR expansion is a finite sum; thus, it involves exactly $2^3$ EMPR components [25], [26], [27], [28]. The graphical expression of the EMPR decomposition is given in Fig. 3.

Fig. 3. Graphical demonstration of the EMPR expansion for the 3-D case.
In (2), $g^{(0)}$ is a scalar, which can be considered a 0-D entity; $g^{(i)}$ stands for the 1-D structures, which are vectors; and $g^{(i,j)}$ denotes the 2-D entities, which can be acknowledged as matrices. Additionally, the other entities involved in (2), denoted by $s^{(r)}$, are 1-D elements called the support vectors [28]. In this sense, $s^{(r)}$ is the r-th support vector, which resides on the r-th axis of the 3-D CGR cube, where $r = 1, 2, 3$. Thus, one can easily verify that the r-th support vector is composed of $n_r$ elements. The support vectors are multiplied with the corresponding EMPR components in an outer product manner and enhance their dimensionality. Besides, they provide flexibility for the EMPR expansion and must be selected rationally. This choice is crucial since it affects the representation eligibility of the EMPR expansion.
Since EMPR has an additive nature, G should be expressed in terms of 3-D structures. As a consequence of the outer products between the EMPR components and the support vectors, new 3-D but less complicated entities are established. These new elements are called EMPR terms [25], [26], [27], [28]. Each EMPR term is named after its EMPR component. Thus, the term constructed with $g^{(0)}$ and all three supports is called the zeroth EMPR term. The term composed of $g^{(i)}$ and the remaining two support vectors (except the i-th one) is called the i-th EMPR term. Similarly, the term including $g^{(i,j)}$ and the corresponding support vector is called the (i, j)-th EMPR term. It is clear that all EMPR terms are of size $n_1 \times n_2 \times n_3$, just as the original data G.
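As a small sketch, a one-way EMPR term is the outer product of a 1-D component with the support vectors on the two remaining axes, which restores the full $n_1 \times n_2 \times n_3$ shape. The component and support values below are arbitrary placeholders.

```python
import numpy as np

n1, n2, n3 = 6, 5, 4
g1 = np.arange(n1, dtype=float)   # a hypothetical one-way component g^(1)
s2 = np.ones(n2)                  # support vector on the 2nd axis
s3 = np.ones(n3)                  # support vector on the 3rd axis

# The 1st EMPR term: g^(1) (x) s^(2) (x) s^(3), a rank-one 3-D array
# with term1[i, j, k] = g1[i] * s2[j] * s3[k].
term1 = np.einsum("i,j,k->ijk", g1, s2, s3)
print(term1.shape)  # (6, 5, 4)
```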
Additionally, during the EMPR process, three weight vectors can be exploited to adjust the contribution of each CGR pixel in G. The weight vectors consist of non-negative real values and must satisfy the following conditions

$$\left\|\omega^{(1)}\right\|_1 = 1, \qquad \left\|\omega^{(2)}\right\|_1 = 1, \qquad \left\|\omega^{(3)}\right\|_1 = 1. \qquad (3)$$

In (3), it is clear that the sum of all elements of each weight vector should be equal to 1. These equations hold due to statistical necessities, and they facilitate the computations in the evaluation of the EMPR components.
Moreover, the EMPR components must satisfy the following constraints

\[ \sum_{i_p=1}^{n_p} \omega^{(p)}_{i_p}\, s^{(p)}_{i_p}\, g^{(1,\dots,m)}_{i_1,\dots,i_m} = 0; \qquad 1 \le p \le m \in \{1, 2, 3\}, \tag{4} \]

where s^{(p)}_{i_p} and ω^{(p)}_{i_p} are the i_p-th elements of the p-th support vector and the p-th weight vector, respectively, while g^{(1,...,m)}_{i_1,...,i_m} stands for the (i_1, ..., i_m)-th entry of the corresponding EMPR component g(1,...,m). The equalities in (4) are called vanishing conditions. They lead to two essential properties of the EMPR components: uniqueness under a given set of support vectors, and mutual orthogonality.
By employing the vanishing conditions in (4) and adopting the weight vectors of (3) together with the pre-selected support vectors, the scalar EMPR component g(0) can be determined uniquely as follows:

\[ g^{(0)} = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \sum_{k=1}^{n_3} \omega^{(1)}_i \omega^{(2)}_j \omega^{(3)}_k\, s^{(1)}_i s^{(2)}_j s^{(3)}_k\, G_{ijk}. \tag{5} \]

Note that the right-hand side of (5) denotes a weighted sum of G, multiplied by the relevant support vector elements, over all axes. Thus, the zero-way EMPR component corresponds to a specific weighted average of the CGR cube, G.
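As a quick numerical illustration of (5): with the uniform weights of (9) and constant unit supports (an illustrative choice that satisfies (7), not the ADS used later in the study), g(0) reduces to the plain mean of the cube:

```python
import numpy as np

# Toy CGR cube G of size n1 x n2 x n3 (hypothetical data).
rng = np.random.default_rng(0)
G = rng.random((4, 4, 3))

n1, n2, n3 = G.shape
# Uniform weight vectors as in (9): omega^(p)_i = 1/n_p.
w1, w2, w3 = np.full(n1, 1 / n1), np.full(n2, 1 / n2), np.full(n3, 1 / n3)
# Constant unit supports, normalised per (7): sum_i w_i s_i^2 = 1.
s1, s2, s3 = np.ones(n1), np.ones(n2), np.ones(n3)

# Equation (5): g0 = sum_{ijk} w1_i w2_j w3_k s1_i s2_j s3_k G_ijk.
g0 = np.einsum('i,j,k,i,j,k,ijk->', w1, w2, w3, s1, s2, s3, G)

# With uniform weights and unit supports, g0 is exactly the mean of G.
assert np.isclose(g0, G.mean())
```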
If the conditions in (3) and the constraints in (4) are exploited again, the elements of the three one-way EMPR components are calculated uniquely as follows:

\[
\begin{aligned}
g^{(1)}_i &= \sum_{j=1}^{n_2} \sum_{k=1}^{n_3} \omega^{(2)}_j \omega^{(3)}_k\, s^{(2)}_j s^{(3)}_k\, G_{ijk} - g^{(0)} s^{(1)}_i, \\
g^{(2)}_j &= \sum_{i=1}^{n_1} \sum_{k=1}^{n_3} \omega^{(1)}_i \omega^{(3)}_k\, s^{(1)}_i s^{(3)}_k\, G_{ijk} - g^{(0)} s^{(2)}_j, \\
g^{(3)}_k &= \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \omega^{(1)}_i \omega^{(2)}_j\, s^{(1)}_i s^{(2)}_j\, G_{ijk} - g^{(0)} s^{(3)}_k,
\end{aligned} \tag{6}
\]

while the rest of the components can be computed in a similar manner.
As addressed, the components g(1), g(2), and g(3) are one-way entities. Therefore, each forms a vector lying along its corresponding axis. According to (5) and (6), g(1) is obtained by squeezing the CGR cube through its front and upper sides, g(2) by suppressing the cube through its front and right sides, and the last vector, g(3), by compressing the cube through its upper and right sides. After these suppression steps, the means associated with the corresponding dimensions are obtained; then the relevant support vector, weighted with g(0), is subtracted from the calculated mean. Thus, each one-way EMPR term captures the behaviour and individual contribution of the corresponding dimension (axis) to the whole network G. In this sense, the g(1) and g(2) terms specify the two dimensions of the surrogate CGR pattern that emerges from G; this CGR pattern is a weighted average of the CGR images of all genes in the corresponding network. The third one-way EMPR term, g(3), in turn, captures the interrelation among the CGR images of the genes of the network. Thus, each one-way EMPR term characterizes G in its own way and can be exploited as a low-dimensional feature set for the 3-D gene network data in focus.
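The one-way components of (6) can be sketched in a few lines of numpy. The toy cube, uniform weights, and unit supports below are illustrative assumptions; the final loop checks the vanishing conditions (4):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.random((4, 5, 3))          # toy CGR cube, hypothetical data
n1, n2, n3 = G.shape

# Uniform weights (9) and constant unit supports (normalised per (7));
# both are illustrative choices, not values from the paper.
w1, w2, w3 = np.full(n1, 1 / n1), np.full(n2, 1 / n2), np.full(n3, 1 / n3)
s1, s2, s3 = np.ones(n1), np.ones(n2), np.ones(n3)

# Zeroth component, equation (5).
g0 = np.einsum('i,j,k,i,j,k,ijk->', w1, w2, w3, s1, s2, s3, G)

# One-way components, equation (6): contract G over the other two
# axes, then subtract g0 times the own-axis support.
g1 = np.einsum('j,k,j,k,ijk->i', w2, w3, s2, s3, G) - g0 * s1
g2 = np.einsum('i,k,i,k,ijk->j', w1, w3, s1, s3, G) - g0 * s2
g3 = np.einsum('i,j,i,j,ijk->k', w1, w2, s1, s2, G) - g0 * s3

# Vanishing conditions (4): each component is orthogonal to its
# weighted support, so these weighted sums are (numerically) zero.
for w, s, g in ((w1, s1, g1), (w2, s2, g2), (w3, s3, g3)):
    assert abs(np.sum(w * s * g)) < 1e-12
```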
Finally, we provide details on the properties and the selection process of the EMPR support vectors. To begin with, the support vectors should satisfy the following normalization conditions under the given weight vectors:

\[ \sum_{i_p=1}^{n_p} \omega^{(p)}_{i_p} \left( s^{(p)}_{i_p} \right)^2 = 1; \qquad p = 1, 2, 3. \tag{7} \]

With the help of the conditions in (7), the support vectors can be selected independently of their magnitude. Thus, each support vector indicates a relevant direction, acting as a weight on the contributions that are stored as the elements of the EMPR components.
Any suitable set of vectors can be employed as the support vectors of EMPR, as long as they comply with the conditions in (4) and (7). For this reason, the vectors whose elements are given explicitly as

\[
\begin{aligned}
S^{(1)}_i &= \sum_{j=1}^{n_2} \sum_{k=1}^{n_3} \omega^{(2)}_j \omega^{(3)}_k\, G_{ijk}, \\
S^{(2)}_j &= \sum_{i=1}^{n_1} \sum_{k=1}^{n_3} \omega^{(1)}_i \omega^{(3)}_k\, G_{ijk}, \\
S^{(3)}_k &= \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \omega^{(1)}_i \omega^{(2)}_j\, G_{ijk},
\end{aligned} \tag{8}
\]

can be adopted as the support vectors of an EMPR expansion after performing normalisation according to (7). The support vectors in (8) can be calculated in a straightforward manner and exploited in the EMPR expansion as long as they do not vanish [25], [28]. From (8), it is obvious that each formula denotes a weighted average of the CGR cube G over all axes but one direction (axis); thereby, the equations in (8) indicate averaged directions for the CGR cube. To this end, the support vectors in (8) are called Averaged Directional Supports (ADS) [28] and can be encountered in several EMPR applications in the scientific literature [25], [26], [27], [28]. In this study, the
ADS are employed in order to extract features using EMPR. Meanwhile, the constant weight vectors whose elements are

\[ \omega^{(1)}_i = \frac{1}{n_1}, \qquad \omega^{(2)}_j = \frac{1}{n_2}, \qquad \omega^{(3)}_k = \frac{1}{n_3} \tag{9} \]

will be exploited as the weights in the EMPR processes.
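A minimal sketch of computing the ADS of (8) and normalising them per (7), again with a toy cube and the uniform weights of (9) (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.random((4, 5, 3))          # toy CGR cube, hypothetical data
n1, n2, n3 = G.shape
w1, w2, w3 = np.full(n1, 1 / n1), np.full(n2, 1 / n2), np.full(n3, 1 / n3)

# Averaged Directional Supports, equation (8): weighted average of G
# over every axis except the support's own direction.
S1 = np.einsum('j,k,ijk->i', w2, w3, G)
S2 = np.einsum('i,k,ijk->j', w1, w3, G)
S3 = np.einsum('i,j,ijk->k', w1, w2, G)

def normalise(S, w):
    # Rescale so that sum_i w_i s_i^2 = 1, as required by (7).
    return S / np.sqrt(np.sum(w * S**2))

s1, s2, s3 = normalise(S1, w1), normalise(S2, w2), normalise(S3, w3)
for w, s in ((w1, s1), (w2, s2), (w3, s3)):
    assert np.isclose(np.sum(w * s**2), 1.0)
```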
In summary, EMPR makes it possible to extract features from 3-D CGR cubes. These features are the vector EMPR components given in (6). The vectors are concatenated to form one long feature vector, and together they span all dimensions of the CGR cube under consideration. Therefore, the Support Vector Machines algorithm can be fed with the concatenated feature vectors, and an efficient learning model can be constructed.
3.4 Support Vector Machines
Determining whether a given gene network belongs to the patient or the control group is the main aim of the present work. Thus, extracting practical and meaningful features and selecting an appropriate classifier that is in harmony with these features are crucial. Since data classification is one of the major challenges in machine learning, many techniques have been proposed for both supervised and unsupervised cases. Support Vector Machines (SVM), a flexible supervised classification algorithm, is an effective technique for grouping pre-labeled data [29]. The aim of SVM is to construct a hyperplane whose margins to each point set (class) are as wide as possible. If the data points of the classes are sufficiently separated, it is possible to split them into homogeneous groups using a linear hyperplane (or linear kernel); otherwise, a non-linear kernel should be exploited to obtain satisfactory classification accuracy. This approach is called the kernel trick [43].
We formulate this problem as a binary classification task. To classify the data in Dtest, the SVM model should first be trained using Dtrain. The elements of Dtrain and Dtest are the 3-D CGR cubes defined in Subsection 3.2; hence, it is impractical to train the model by feeding SVM with the CGR cubes directly. To overcome this, the SVM algorithm is trained with the vector EMPR components of each CGR cube, whose explicit formulae are given in (6). A feature vector for each CGR cube is therefore constructed by concatenating the corresponding one-way EMPR components as follows:

\[ f = \left[\, g^{(1)T} \;\; g^{(2)T} \;\; g^{(3)T} \,\right]^T. \tag{10} \]

If the CGR cubes are generated with size n1 × n2 × n3, then the length of each feature vector f becomes n1 + n2 + n3. This means that the hypersurface created by the SVM algorithm lies in an (n1 + n2 + n3)-dimensional space. Though this number may seem quite large, features with satisfactory distinguishing capability may reduce the computational complexity of SVM significantly.
To train the SVM model, the f features of the CGR cubes in Dtrain are evaluated, and the SVM model is trained using these feature vectors. After the training phase, the f features of the CGR cubes in Dtest are given to the trained model, and the class of each feature vector in Dtest is predicted. Consequently, the statistics for the objective evaluation of the proposed estimator are calculated using the elements of the corresponding confusion matrix obtained in each independent run.
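The study trains LIBSVM with the concatenated features of (10); the sketch below reproduces the same flow with scikit-learn's SVC instead, on synthetic cubes whose shape, patient/control difference, and C value are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def empr_features(G):
    """Concatenated one-way EMPR components of a cube G, as in (10),
    using uniform weights (9) and constant unit supports."""
    g0 = G.mean()
    g1 = G.mean(axis=(1, 2)) - g0   # equation (6) with s = 1, w = 1/n
    g2 = G.mean(axis=(0, 2)) - g0
    g3 = G.mean(axis=(0, 1)) - g0
    return np.concatenate([g1, g2, g3])

rng = np.random.default_rng(3)
shape = (8, 8, 5)

# Hypothetical "control" and "patient" cubes: patient cubes carry a
# shifted pattern along the first axis, so their g(1) features differ.
def make_cube(patient):
    G = rng.random(shape)
    if patient:
        G[:4] += 1.0
    return G

X = np.array([empr_features(make_cube(p)) for p in [0] * 40 + [1] * 40])
y = np.array([0] * 40 + [1] * 40)

# Train on the first 25 samples of each class, test on the rest.
train = np.r_[0:25, 40:65]
test = np.r_[25:40, 65:80]
clf = SVC(kernel='rbf', C=10.0, gamma='scale').fit(X[train], y[train])
acc = 100.0 * (clf.predict(X[test]) == y[test]).mean()   # OA as in (11)
assert acc > 90.0
```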
4 RESULTS
In this section, we provide the results obtained by assembling CGR, EMPR, and SVM for the mTOR and TGF-β gene network datasets. To this end, we performed several computational experiments to demonstrate the efficiency of the proposed method. Since the aim of this study is to present an efficient classification method for gene pathways, the overall accuracy (OA) is considered the fundamental objective assessment metric. The OA value for each experiment is calculated as follows:

\[ \mathrm{OA} = \frac{\text{Number of correct predictions}}{\text{Number of testing samples}} \times 100. \tag{11} \]

However, since OA alone yields limited information about classifier performance, we also report the true negative rate, true positive rate (precision), recall (sensitivity), specificity, and Matthews Correlation Coefficient (MCC) metrics [44], [45]. The reported statistics are the averages of 100 independent SVM runs. Before the training stage, all features belonging to the training and testing sets were normalised. In the SVM phase, we adopted the Radial Basis Function (RBF) kernel. To determine the best classifier parameters c and γ, which control the behaviour of the RBF kernel, we performed 5-fold cross-validation and a grid search on the 9 × 9 grid [10^{-4}, 10^{-3}, ..., 1, ..., 10^{3}, 10^{4}] × [10^{-4}, 10^{-3}, ..., 1, ..., 10^{3}, 10^{4}]. Finally, the model was trained using the SVM algorithm implemented in the LIBSVM package [46].

Fig. 4. Average overall and cross-validation accuracies for varying training sample counts.

In Fig. 4, we provide the classification and cross-validation accuracies for both gene pathway datasets. We performed the trials for various training sample amounts for both the control and patient groups. These
amounts vary between 10 and 50 with an increment of 10. After fixing the number of training samples for each class, that number of samples was selected randomly, and the rest of the networks in each dataset were reserved for testing.
It is clear from Fig. 4 that the proposed method yields higher than 90% classification accuracy using only 20 training samples for both the mTOR and TGF-β datasets. Initially, the OA values for the mTOR and TGF-β networks are about 88% and 83%, respectively, for 10 training samples from each class. These values then rise rapidly to about 97% and 93%. The increments for both datasets are consistent as the number of training samples from the control and patient classes grows. Furthermore, the cross-validation (CV) accuracies for both datasets tend to rise as the number of training samples increases and are in harmony with the observed OA results. It is evident that the gap between the corresponding OA and CV accuracy decreases consistently for both mTOR and TGF-β as the training sample count grows, especially beyond 20 training samples.
TABLE 1
Classifier performance metrics for mTOR and TGF-β datasets.

Metric / S        10      20      30      40      50
mTOR
True Neg. Rate    0.9087  0.9265  0.9398  0.9463  0.9579
True Pos. Rate    0.8127  0.9315  0.9596  0.9738  0.9813
Recall            0.9011  0.9227  0.9375  0.9438  0.9565
Specificity       0.7613  0.9282  0.9598  0.9740  0.9816
MCC               0.6894  0.8544  0.8984  0.9189  0.9386
TGF-β
True Neg. Rate    0.9608  0.9854  0.9902  0.9949  0.9949
True Pos. Rate    0.8407  0.9506  0.9770  0.9846  0.9907
Recall            0.9589  0.9853  0.9902  0.9949  0.9949
Specificity       0.7945  0.9476  0.9763  0.9844  0.9906
MCC               0.7765  0.9344  0.9668  0.9794  0.9855
After discussing the classification accuracy of the suggested method, we also need to evaluate the performance and stability of the proposed estimator based on CGR, EMPR, and SVM. To this end, widely used machine learning metrics for estimator assessment, namely the true negative rate, true positive rate (precision), recall (sensitivity), specificity, and MCC, are provided in Table 1, tabulated for increasing numbers of training samples from the control and patient classes for both the mTOR and TGF-β datasets.
It is obvious from Table 1 that each metric approaches the value 1 consistently as the number of training samples grows. The true positive rate, specificity, and MCC values may be considered somewhat low at 10 training samples per class for both datasets. Nevertheless, these values increase rapidly for both mTOR and TGF-β once 20 or more training samples are employed. We can verify from Table 1 that all stability metrics are above 0.93 and 0.98 for the mTOR and TGF-β datasets, respectively, when 50 training samples from each of the control and patient groups are exploited. The reported values indicate that the proposed estimator achieves significant success in accurately classifying the networks belonging to control and patient samples for the considered mTOR and TGF-β datasets.
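All of the reported metrics derive from the binary confusion matrix; a sketch using the standard definitions (the counts below are made up purely for illustration) is:

```python
import math

# Metrics derived from a binary confusion matrix (tp, fp, tn, fn);
# the counts are hypothetical, not taken from the experiments.
tp, fp, tn, fn = 90, 5, 85, 10

oa = 100.0 * (tp + tn) / (tp + fp + tn + fn)      # equation (11)
precision = tp / (tp + fp)                         # true positive rate
recall = tp / (tp + fn)                            # sensitivity
specificity = tn / (tn + fp)                       # true negative rate
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

assert 0.0 <= oa <= 100.0 and -1.0 <= mcc <= 1.0
```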
Fig. 5. ROC curves and AUC values for the mTOR dataset with varying training sample counts.

Fig. 6. ROC curves and AUC values for the TGF-β dataset with varying training sample counts.

As a further assessment of the proposed CGR, EMPR, and SVM ensemble, receiver operating characteristic (ROC) curves for both datasets are presented in Figs. 5 and 6, where the corresponding area under curve (AUC) values are also provided. In Figs. 5 and 6, the dashed line denotes the random classifier, which can be regarded as the worst case. In Fig. 5, five ROC curves for 10, 20, 30, 40, and 50 mTOR training samples are presented. For the TGF-β dataset in Fig. 6, on the other hand, the ROC curves are plotted only for 10, 20, and 30 training samples, since the improvements for higher training sample counts are not significant. One can easily observe from Figs. 5 and 6 that the AUC values increase consistently as the number of training samples grows for both datasets.
In addition to the previous analyses, it is also crucial to investigate the performance of the proposed method on imbalanced datasets. To this end, two new datasets were created from the existing ones for the mTOR and TGF-β networks.
[Fig. 5 plot, mTOR ROC curves: AUC = 0.93387 (S = 10), 0.96882 (S = 20), 0.98576 (S = 30), 0.99306 (S = 40), 0.99739 (S = 50).]
[Fig. 6 plot, TGF-β ROC curves: AUC = 0.97545 (S = 10), 0.99510 (S = 20), 0.99866 (S = 30).]
TABLE 2
Classifier performance metrics for the imbalanced mTOR and TGF-β datasets. The numbers of training samples (patients/controls) are given in the header row.

Patients / Controls  10/40   20/80   30/120  40/160
mTOR
True Neg. Rate       0.8121  0.9003  0.9333  0.9540
True Pos. Rate       0.9990  0.9991  0.9993  0.9991
Recall               0.0736  0.5560  0.7131  0.8060
Specificity          0.9999  0.9999  0.9999  0.9998
MCC                  0.2284  0.7062  0.8147  0.8759
TGF-β
True Neg. Rate       0.8236  0.9302  0.9672  0.9813
True Pos. Rate       0.9992  0.9994  0.9993  0.9996
Recall               0.1396  0.6990  0.8640  0.9233
Specificity          0.9999  0.9999  0.9999  0.9999
MCC                  0.4414  0.8054  0.9136  0.9515
Fig. 7. Average overall and cross-validation accuracies for varying training percentages using imbalanced datasets.

These datasets contain 100 patient and 400 control samples. For each dataset, we selected random networks from both groups as training samples, following fixed training ratios of 10%, 20%, 30%, 40%, and 50%, respectively. Thus, the training sample counts for the patient and control groups are 10/40, 20/80, 30/120, 40/160, and 50/200, respectively. That is, in the imbalanced datasets, the number of training control networks is fixed at four times the number of training patient samples.
Fig. 7 shows that the OA values at the 10% training ratio for the imbalanced mTOR and TGF-β datasets are approximately 82% and 83%, respectively. These values are lower than the OAs presented in Fig. 4 for the same training percentage. Moreover, in Fig. 7, the gaps between the OAs and CV accuracies for both datasets at the 10% training percentage are close, in contrast with the findings in Fig. 4. On the other hand, these gaps shrink consistently beyond the 20% training ratio, similar to the observations in Fig. 4. The OA values and CV accuracies for both datasets increase steadily; the OA values at the 50% training ratio for the imbalanced mTOR and TGF-β datasets are calculated as 97% and 99%, respectively. These values are in harmony with the accuracies calculated for the balanced datasets and provided in Fig. 4.
To analyse the stability and performance of the proposed method on imbalanced datasets, the relevant machine learning metrics for both mTOR and TGF-β are calculated and presented in Table 2; the results for the 50% training ratio are omitted due to limited space. According to Table 2, the recall values at the 10% training ratio for both imbalanced datasets are quite low. That is, the proposed classification scheme struggles to predict the patient samples correctly when only 10 randomly selected patient features are employed. On the other hand, the recall values increase rapidly as the number of training patient samples grows, although they still underperform the recall values obtained for the balanced datasets; the same can be observed for the MCC metric. Nevertheless, all presented metrics approach their maximum, which is 1, as the number of training samples increases.
5 DISCUSSION
In this new age of "holistic genetics", most efforts so far have been devoted to identifying specific gene networks [2], [3], [4], [5], [47], [48]. Attempts to study the behaviour of the variants of these network genes in their context are still few and timid, because of the technical difficulty of handling the vast number of variants between individuals [19], [20].
GWAS studies are efficient in detecting the significant genomic variants for particular phenotypes. This knowledge has made it possible to identify the relevant variants for certain diseases and to determine which genomic variants are causal for predisposition to a disease, giving hope that individual variations in gene networks can be compared to predict personal predisposition to diseases. The technique in use to assess an individual's susceptibility to a particular physical or mental illness is the PRS, which relies on determining the set of SNPs known to be risky from other studies, including GWAS. However, polygenicity is a significant problem for this technique, as many phenotypic traits are affected by too many genes, making it hard to calculate the PRS [2], [3], [4], [5], [6], [11], [12], [13], [14], [15], [16], [17], [18]. Another difficulty of the PRS is the required knowledge of the weighted effect of each variant on the phenotype. Since, with the available techniques, it is impossible to study the combinatorial effects of more than a few variants, new approaches are required to assess the effect of all the variants at once.
To interpret the impact and importance of the millions of variants obtained in a single Next Generation Sequencing study, focusing on the data in terms of patterns and corresponding lower-dimensional entities is rational.
Here, we propose a high-dimensional modelling based method to analyse all the variants in a gene network together. In our approach, we apply CGR [21], [22], [23] as a pre-processing tool to convert the sequencing data of the genes in the network into 2-D binary image patterns. These patterns were then aligned in succession (as a three-dimensional tensor), creating a cube. Afterwards, these tensors were decomposed and represented in terms of their lower-dimensional features using EMPR. Finally, SVM, a supervised classification algorithm, was fed with the three EMPR vector features of each network.
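The CGR pre-processing step summarized above can be illustrated as follows; the corner layout and resolution are one common convention for Chaos Game Representation, not necessarily the exact settings of [21], [22], [23]:

```python
import numpy as np

def cgr_image(seq, k=5):
    """Binary 2^k x 2^k Chaos Game Representation of a DNA sequence.
    The (A, C, G, T) corner layout below is one common convention."""
    corners = {'A': (0.0, 0.0), 'C': (0.0, 1.0),
               'G': (1.0, 1.0), 'T': (1.0, 0.0)}
    img = np.zeros((2**k, 2**k), dtype=np.uint8)
    x, y = 0.5, 0.5
    for base in seq:
        cx, cy = corners[base]
        x, y = (x + cx) / 2, (y + cy) / 2   # jump halfway to the corner
        i = min(int(y * 2**k), 2**k - 1)
        j = min(int(x * 2**k), 2**k - 1)
        img[i, j] = 1                        # mark the visited cell
    return img

img = cgr_image("ACGTTGCAACGT" * 8)
assert img.shape == (32, 32) and img.max() == 1
```

Stacking one such image per gene along a third axis yields the CGR cube G that EMPR then decomposes.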
To effectively assess the discrimination ability of our approach, we chose to test it on synthetic datasets. We generated sample patient and control datasets prepared from the reference sequences of the mTOR and TGF-β networks, consisting of 31 and 93 genes of different sizes, respectively. Our findings revealed an accuracy higher than 96% employing only 50 training features out of 400 data samples from the control and patient groups. The AUC results indicate that the proposed classifier's performance in distinguishing between the two classes is admirable. Consequently, our results indicate that the proposed CGR, EMPR, and SVM ensemble provides efficient classification performance.
One of the strengths of our approach is its capability to handle data of various sizes: it is independent of the length of the sequences and of the number of genes in the networks. It can easily be applied to any gene network and is an easy-to-implement algorithm. Also, unlike the PRS, it does not need predetermined knowledge of which variants are relevant and how much impact they have. The only necessity is to know the relevant gene network. After that, it utilizes the raw sequence data from the case and normal subjects and determines the patient and normal CGR patterns according to the particular variant combinations.
Since humans have a diploid genome and each variant in the human genome has a zygosity state (homozygous, heterozygous, or hemizygous), this method could be considered challenging for human variant data. In addition, there are two positional possibilities (cis and trans) for any two variants in a heterozygous state. However, for a gene network, the positional state of the variants in the network genes is not important, because each gene works as a separate unit of the network. Therefore, the positional state of variants between different genes is not a limitation of our method. On the other hand, the positional state of variants within the same gene may become a limitation, because each of two heterozygous variants in a gene may be located on the same or different protein molecules. To overcome this limitation, our method may require some modifications to be applied in the diploid case. These modifications may include the representation of each base substitution with IUPAC codes (R for A/G, S for C/G, W for A/T, M for A/C, for instance) as additional features or properties in the CGR rules. Considering these additional features, the CGR process may be updated; a 4-D sample space may then arise while the orthogonality of the sample space is preserved. Finally, EMPR, which is suitable for N-D structures, may be implemented to extract the features of the network under consideration.
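The IUPAC-based modification suggested above could start from a genotype-to-code mapping; the following is a purely speculative sketch of that first step, not an implemented part of the method:

```python
# Speculative sketch: collapse a diploid genotype (two alleles) into a
# single IUPAC ambiguity code, which could then receive its own
# coordinate in an extended CGR sample space.
IUPAC = {frozenset('AG'): 'R', frozenset('CG'): 'S',
         frozenset('AT'): 'W', frozenset('AC'): 'M',
         frozenset('CT'): 'Y', frozenset('GT'): 'K'}

def genotype_code(allele1, allele2):
    """Homozygous sites keep their base; heterozygous sites map to the
    corresponding IUPAC ambiguity code."""
    if allele1 == allele2:
        return allele1
    return IUPAC[frozenset((allele1, allele2))]

assert genotype_code('A', 'A') == 'A'
assert genotype_code('A', 'G') == 'R'
assert genotype_code('G', 'A') == 'R'
```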
To the best of our knowledge, our proposed approach to deciphering the outcomes of gene networks based on specific combinations of all the variants in the module is original and unique. The observations and findings in this study encourage us to believe that our approach has the potential to become a diagnostic tool, as well as to determine individual predisposition to polygenic multifactorial conditions.
In addition, comparative studies may be conducted from an evolutionary perspective. This study may also be adapted to different scientific fields, e.g. population genetics, phylogenetics, advanced genomics studies, etc. Furthermore, provided that the necessary fieldwork is done, this method could also be used in talent determination, thus providing the opportunity to receive appropriate training from an early age.
1347 |
+
6 CONCLUSION

According to our results and observations, using high-dimensional computational modelling for gene network and network-specific gene variant analyses in a holistic manner seems rational and reliable. Our promising results encourage us to apply the proposed approach to diploid sequence data in more comprehensive future studies.

ACKNOWLEDGMENTS

The authors would like to thank Osman Özkan for language editing.
IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS
Suha Tuna received a Ph.D. degree in computational science and engineering from Istanbul Technical University (ITU), Istanbul, Turkey, in 2017. He is an assistant professor with the Department of Computational Science and Engineering at the Informatics Institute, ITU. His research interests cover high dimensional modeling, high performance computing, hyperspectral imagery, bioinformatics, and machine learning.

Cagri Gulec received his BSc in Biomedical Sciences from Istanbul University, Cerrahpaşa Medical Faculty, and his MS and Ph.D. degrees in Genetics from Istanbul University, Institute of Health Sciences, Istanbul, Turkey. He is currently working at Istanbul University, Istanbul Medical Faculty, Department of Medical Genetics. His research interests include the molecular basis of genetic diseases and bioinformatics.

Emrah Yucesan received his BSc in Biomedical Sciences from Istanbul University, Cerrahpaşa Medical Faculty, and his MS and Ph.D. degrees in Genetics from Istanbul University, Institute of Health Sciences, Istanbul, Turkey. He received his associate professor title in Medical Genetics in 2021. He is currently working at Istanbul University-Cerrahpaşa, Institute of Neurological Sciences, Department of Neuroscience. His research interests include neurogenetics and rare diseases; he is also interested in bioinformatics and conducts several studies in the field.

Ayse Cirakoglu received her BSc in Biomedical Sciences from Istanbul University, Cerrahpaşa Medical Faculty, and her MS and Ph.D. degrees in Genetics from Istanbul University, Institute of Health Sciences, Istanbul, Turkey. She is currently working as an Associate Professor at the Department of Medical Biology, Cerrahpaşa Medical Faculty. Her research interests include cytogenetics, molecular cytogenetics, cancer genetics, epigenetics, and gene network analysis.

Yelda Tarkan Arguden graduated from the Department of Biomedical Sciences, Cerrahpaşa Faculty of Medicine, Istanbul University, in 1988. She received her MSc in Medical Genetics in 1991 and a Ph.D. in Genetics in 1999 from the Institute of Health Sciences, Istanbul University. She is currently working as an Associate Professor at the Medical Biology Department of Cerrahpaşa Faculty of Medicine, Istanbul University-Cerrahpaşa. Her research interests include cytogenetics, cancer cytogenetics, epigenetics, and gene network analysis.
2NFKT4oBgHgl3EQfPi1h/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
2tAyT4oBgHgl3EQfb_cv/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43f84ce1fad0e65fd2cd12faf089379d3f894b1e8edf2f10bd7a14245f37f3c3
size 2031661
3NE2T4oBgHgl3EQfNwaq/content/tmp_files/2301.03741v1.pdf.txt
ADDED
@@ -0,0 +1,783 @@
arXiv:2301.03741v1 [cond-mat.stat-mech] 10 Jan 2023

Geometric Study on Canonical Nonlinearity for FCC-based Binary Alloys

Koretaka Yuge1 and Ikumi Nishihara1
1 Department of Materials Science and Engineering, Kyoto University, Sakyo, Kyoto 606-8501, Japan

For classical discrete systems under constant composition (typically referred to as substitutional alloys), the canonical average φ typically provides a complicated nonlinear map from the set of potential energy surfaces to the set of macroscopic structures in thermodynamic equilibrium, the so-called "canonical nonlinearity" (CN). Our recent studies reveal that the CN can be reasonably addressed for an individual microscopic configuration in two different ways: as a special vector field on configuration space, the "anharmonicity in the structural degrees of freedom (ASDF)",2,3 and as the Kullback-Leibler (KL) divergence DKL,4 which is the conceptual extension of the ASDF to a statistical manifold that includes further non-local information about the CN. However, their direct correlation on real lattices is still totally unclear. We here tackle this problem for fcc-based equiatomic binary alloys, which have been the most studied in the CN-based context. We confirm that while one contribution to the CN for each configuration, D_KL^dG, due to the deviation of the CDOS from a Gaussian, exhibits a significant positive correlation with the ASDF, another contribution, D_KL^ns, due to non-separability in the structural degrees of freedom (SDFs), exhibits no effective correlation with the ASDF; this can be naturally accepted since the former contribution depends on the ASDF itself, while the latter is independent. We find that the average of D_KL^ns over all configurations for sets of SDFs can be well characterized by the asymmetric Hausdorff distance between the configurational polyhedra (CP) of the practical and the ideally separable system, and by the CP hypervolumes. This fact certainly indicates that non-local information about the CN has a profound connection to the geometric configuration of the ground-state structures of alloys on configuration space.
I. INTRODUCTION

When we consider substitutional alloys as classical discrete systems under constant composition, the microscopic configuration along a chosen coordination Q_p in thermodynamic equilibrium is typically given by the canonical average

\left\langle Q_p \right\rangle_Z = Z^{-1} \sum_i Q_p^{(i)} \exp\left( -\beta U^{(i)} \right),   (1)
where Z denotes the partition function, β the inverse temperature, and U the potential energy, and the summation is taken over all possible configurations. For alloys, U can be exactly expanded in an appropriate complete orthonormal basis such as that of the generalized Ising model (GIM),1 namely

U^{(k)} = \sum_j \left\langle U \middle| Q_j \right\rangle Q_j^{(k)},   (2)
where ⟨·|·⟩ denotes the inner product, i.e., a trace over possible configurations. Eq. (2) naturally provides the concept of the canonical average φ as a map from the set of potential energies U to the equilibrium configuration Q_Z:

\phi(\beta) : U \mapsto Q_Z,   (3)

which generally exhibits complicated nonlinearity (hereinafter called the "canonical nonlinearity" (CN)).
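As a minimal numerical illustration of the map in Eqs. (1) and (3), the canonical average can be sketched for a toy system of a few configurations; the values of Q and U below are hypothetical, chosen only to show how the Boltzmann weighting moves the equilibrium value as β changes.

```python
import math

# Toy canonical average of Eq. (1): each configuration i carries a
# correlation value Q[i] and a potential energy U[i] (both hypothetical).
def canonical_average(Q, U, beta):
    weights = [math.exp(-beta * u) for u in U]  # Boltzmann factors
    Z = sum(weights)                            # partition function
    return sum(q * w for q, w in zip(Q, weights)) / Z

Q = [1.0, 0.0, -1.0]   # hypothetical correlation values
U = [0.0, 0.5, 1.0]    # hypothetical potential energies
# At infinite temperature (beta = 0) all configurations weigh equally;
# at finite beta the low-energy configuration increasingly dominates.
high_T = canonical_average(Q, U, beta=0.0)   # -> 0.0
low_T = canonical_average(Q, U, beta=5.0)    # close to Q of the lowest-U state
```

The nonlinearity of φ is already visible here: scaling all energies U does not scale ⟨Q⟩_Z proportionally, which is exactly what the CN concept quantifies.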
To multilaterally address the CN, we have introduced the two concepts of the "anharmonicity in the structural degrees of freedom (ASDF)", a special vector field on configuration space, and the Kullback-Leibler divergence DKL on a statistical manifold, which is the extension of the ASDF that includes further non-local CN information. We have also confirmed that the latter can be further decomposed into three contributions in terms of the SDFs, i.e., the deviation of the CDOS from a Gaussian D_KL^dG, the nonseparability (NS) in the SDFs D_KL^ns, and the nonadditivity in the NS, where the last contribution is specific to multicomponent (R ≥ 3) alloys under pair correlations. While we have recently bridged these two concepts of the CN, living in the different worlds of configuration space and the statistical manifold, through stochastic thermodynamics, their direct correlation on real lattices is still totally unclear. We here tackle this problem, addressing how the CN as a vector field on configuration space and as a divergence on the statistical manifold correlate, and how their correlation is dominated, for fcc-based equiatomic binary alloys, which have been the most amply studied in the context of the CN. We confirm that while D_KL^dG exhibits a significant positive correlation with the ASDF, this does not hold for D_KL^ns, which can be naturally accepted since the former contribution explicitly depends on the ASDF while the latter is independent. We find that the average of D_KL^ns over the possible configurations can be well characterized by the asymmetric Hausdorff distance between the configurational polyhedra of the practical and the ideally separable system. The details are shown below.
II. CONCEPTS AND DISCUSSIONS

A. Brief Concepts for Canonical Nonlinearity

Before we provide the basic concepts for the CN, we first briefly explain the GIM, which is employed throughout the paper. We here focus on an A-B binary system, where the occupation of lattice site i by A (B) is given by the spin variable σ_i = +1 (−1). Then information about any given microscopic configuration k along a chosen coordination j is given by

Q_j^{(k)} = \left\langle \prod_{i \in S_j} \sigma_i \right\rangle_k,   (4)

where the product is performed over the lattice points in figure j, and ⟨·⟩_k denotes taking the linear average over figures symmetry-equivalent to j in configuration k. The functions of the form of Eq. (4) constitute a complete orthonormal basis, providing the exact expansion of the potential energy given in Eq. (2).
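As an illustration of Eq. (4), the pair correlation can be sketched on a 1-D ring of Ising spins; the actual study works on fcc lattices with symmetry-equivalent figures, so the 1-D periodic lattice here is a simplifying assumption for illustration only.

```python
# Sketch of the GIM pair correlation of Eq. (4) on a 1-D periodic ring of
# spins sigma_i = +1/-1 (a simplification: the paper works on fcc lattices).
def pair_correlation(spins, distance):
    """Average of sigma_i * sigma_{i+distance} over all sites (periodic)."""
    n = len(spins)
    return sum(spins[i] * spins[(i + distance) % n] for i in range(n)) / n

ferro = [1, 1, 1, 1, 1, 1]      # fully ordered: Q = +1 at any distance
anti = [1, -1, 1, -1, 1, -1]    # alternating: Q = -1 at odd distances
```

The fully ordered configuration gives a nearest-neighbor correlation of +1, while the alternating one gives -1 at odd and +1 at even distances, matching the ±1 bounds of the correlation coordinates on configuration space discussed below.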
Using the GIM basis, we can introduce a measure of the CN in terms of the following vector field, the ASDF, on configuration space:

A(Q) = \left\{ \phi(\beta) \circ (-\beta \Gamma)^{-1} \right\} \cdot Q - Q,   (5)

where Γ denotes the covariance matrix of the configurational density of states (CDOS) before many-body interactions are applied to the system. The ASDF has the significant features that (i) it is independent of energy and temperature, and (ii) it is the zero vector when φ is a globally (or locally) linear map. Therefore, the ASDF is a natural measure of the CN that depends only on geometric information derived from the underlying lattice.
Next, we introduce another measure of the CN, on a statistical manifold, which is the natural, conceptual extension of the ASDF including further non-local information. We have shown that the following KL divergence corresponds to this extension of the CN:

D_{KL}\left( g_C^Q : g_L^Q \right) = D_{KL}\left( g_C^Q : g_{C0}^Q \right) + D_{KL}\left( g_{C0}^Q : g_L^Q \right) + \Delta D_{KL}^{NAD}(Q),   (6)
where the first, second, and third terms on the r.h.s. respectively correspond to the contributions from the nonseparability (NS) in the SDFs, the deviation of the separable system from a Gaussian (DG), and the nonadditivity in the NS (NAD). Here g_C^Q, g_L^Q and g_{C0}^Q respectively denote the canonical distribution for the practical system derived from configuration Q, i.e., {φ(β)∘(−βΓ)^{-1}}·Q; that for a linear system whose CDOS is a Gaussian with the same Γ as the practical system; and the product of the marginal distributions of g_C^Q. We emphasize that DG explicitly depends on the ASDF while NS and NAD are independent of the ASDF, i.e., DG corresponds to local nonlinear information, while the latter two, NS and NAD, correspond to more non-local nonlinear information around the given configuration.
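The NS contribution, the first term of Eq. (6), is the KL divergence of the canonical distribution from the product of its marginals (i.e., the mutual information between the SDFs). A minimal sketch on a made-up 2 x 2 discrete distribution:

```python
import math

# Sketch of the NS term of Eq. (6): KL divergence of a joint distribution
# (standing in for g_C) from the product of its marginals (g_C0). It
# vanishes exactly when the two structural degrees of freedom are
# statistically separable. The 2x2 joint distributions are made up.
def kl_from_marginals(joint):
    px = [sum(row) for row in joint]          # marginal of the first SDF
    py = [sum(col) for col in zip(*joint)]    # marginal of the second SDF
    return sum(
        joint[i][j] * math.log(joint[i][j] / (px[i] * py[j]))
        for i in range(len(joint))
        for j in range(len(joint[0]))
        if joint[i][j] > 0
    )

separable = [[0.25, 0.25], [0.25, 0.25]]  # independent SDFs: NS term = 0
coupled = [[0.4, 0.1], [0.1, 0.4]]        # correlated SDFs: NS term > 0
```

This makes concrete why the NS term carries information independent of the ASDF: it measures statistical coupling between SDFs rather than the local curvature of the map φ.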
Here we focus on the correlation between the ASDF and the CN as a KL divergence for fcc-based equiatomic binary alloys with pair correlations, which have been the most amply studied in the context of the CN; under this condition, we have shown that NAD is zero for any configuration in the thermodynamic limit, and we consider such a case here. For the calculations, we prepare an 864-atom fcc-based supercell (i.e., a 6 × 6 × 6 expansion of the conventional 4-atom cell), which is applied to MC simulation to obtain the canonical distribution for each individual configuration Q, based on Eq. (5), in order to estimate the ASDF and the KL divergences.
B. Results and Discussions

1. Overall behavior of ASDF and KL divergence

We first show in Fig. 1 the behavior of the ASDF for five sets of SDFs. Near the origin, the absolute value of the ASDF is smaller than in the outer region, naturally reflecting that φ locally acts as a linear map around the disordered state. From the figure, we can
FIG. 1: ASDF vector field on configuration space for pair correlations on fcc binary alloys.
see several absorption points, typically corresponding to the vertices of the configurational polyhedra (i.e., the convex polyhedra derived from the end points of the ASDF within the prepared area), as shown in our previous studies. Meanwhile, away from the origin, the ASDF basically tends to end up at one of the absorption points corresponding to the candidate ground states: this can be naturally accepted since there exist multiple sets of many-body interactions, generated through the ASDF-Γ operator, for the individual ground states.

We then show the results for the CN as a KL divergence in Figs. 2 and 3, respectively corresponding to DG and NS. For DG, near the origin the values are extremely small compared with the outer region. Around the absorption points of the ASDF, D_KL(dG) tends to exhibit local minima, and at intermediate configurations between the origin and the ground states it takes larger values than elsewhere. These tendencies appear significantly similar to those of the ASDF, which can be naturally accepted since, again, D_KL(dG) itself is a straightforward extension of the ASDF concept to the statistical manifold.

For NS, the behavior totally differs from D_KL(dG): it exhibits sharp local maxima for specific configurations, whose locations strongly depend on the set of SDFs. Other than at these specific maxima, D_KL(NS) takes extremely small values, which would be partly attributed to the low-rank character of the canonical distribution around the ground-state configurations. Such behavior of D_KL(NS) does not show a direct correlation with the ASDF, which is further discussed later. Note: a comparison of the magnitude of D_KL(dG) and D_KL(NS) for the overall re-
|
294 |
+
|
295 |
+
3
|
296 |
+
!"#$
|
297 |
+
%&'(($)*+,-.//(0(1234'
|
298 |
+
-1
|
299 |
+
-0.5
|
300 |
+
0
|
301 |
+
0.5
|
302 |
+
1-1
|
303 |
+
-0.5
|
304 |
+
0
|
305 |
+
0.5
|
306 |
+
1
|
307 |
+
-4
|
308 |
+
-2
|
309 |
+
0
|
310 |
+
2
|
311 |
+
4
|
312 |
+
6
|
313 |
+
8
|
314 |
+
-1
|
315 |
+
-0.5
|
316 |
+
0
|
317 |
+
0.5
|
318 |
+
1-1
|
319 |
+
-0.5
|
320 |
+
0
|
321 |
+
0.5
|
322 |
+
1
|
323 |
+
-4
|
324 |
+
-2
|
325 |
+
0
|
326 |
+
2
|
327 |
+
4
|
328 |
+
6
|
329 |
+
8
|
330 |
+
10
|
331 |
+
-1
|
332 |
+
-0.5
|
333 |
+
0
|
334 |
+
0.5
|
335 |
+
1-1
|
336 |
+
-0.5
|
337 |
+
0
|
338 |
+
0.5
|
339 |
+
1
|
340 |
+
-4
|
341 |
+
-2
|
342 |
+
0
|
343 |
+
2
|
344 |
+
4
|
345 |
+
6
|
346 |
+
8
|
347 |
+
10
|
348 |
+
-1
|
349 |
+
-0.5
|
350 |
+
0
|
351 |
+
0.5
|
352 |
+
1-1
|
353 |
+
-0.5
|
354 |
+
0
|
355 |
+
0.5
|
356 |
+
1
|
357 |
+
-6
|
358 |
+
-4
|
359 |
+
-2
|
360 |
+
0
|
361 |
+
2
|
362 |
+
4
|
363 |
+
6
|
364 |
+
8
|
365 |
+
-1
|
366 |
+
-0.5
|
367 |
+
0
|
368 |
+
0.5
|
369 |
+
1-1
|
370 |
+
-0.5
|
371 |
+
0
|
372 |
+
0.5
|
373 |
+
1
|
374 |
+
-2
|
375 |
+
0
|
376 |
+
2
|
377 |
+
4
|
378 |
+
6
|
379 |
+
8
|
380 |
+
10
|
381 |
+
"5
|
382 |
+
"6
|
383 |
+
!"# $%&
|
384 |
+
'(
|
385 |
+
"5
|
386 |
+
"7
|
387 |
+
!"#$%&
|
388 |
+
'(
|
389 |
+
"5
|
390 |
+
"8
|
391 |
+
!"#$%&
|
392 |
+
'(
|
393 |
+
"6
|
394 |
+
"9
|
395 |
+
!"# $%&
|
396 |
+
'(
|
397 |
+
"7
|
398 |
+
":
|
399 |
+
!"# $%&
|
400 |
+
'(
|
401 |
+
FIG. 2:
|
402 |
+
Log plot of contribution to CN from deviation in CDOS
|
403 |
+
from Gaussian, DdG
|
404 |
+
KL.
|
405 |
+
!" #$%$ '()*+*,-(./*,%-(.0
|
406 |
+
-1
|
407 |
+
-0.5
|
408 |
+
0
|
409 |
+
0.5
|
410 |
+
1-1
|
411 |
+
-0.5
|
412 |
+
0
|
413 |
+
0.5
|
414 |
+
1
|
415 |
+
0
|
416 |
+
0.2
|
417 |
+
0.4
|
418 |
+
0.6
|
419 |
+
0.8
|
420 |
+
1
|
421 |
+
1.2
|
422 |
+
-1
|
423 |
+
-0.5
|
424 |
+
0
|
425 |
+
0.5
|
426 |
+
1-1
|
427 |
+
-0.5
|
428 |
+
0
|
429 |
+
0.5
|
430 |
+
1
|
431 |
+
0
|
432 |
+
0.1
|
433 |
+
0.2
|
434 |
+
0.3
|
435 |
+
0.4
|
436 |
+
0.5
|
437 |
+
0.6
|
438 |
+
0.7
|
439 |
+
-1
|
440 |
+
-0.5
|
441 |
+
0
|
442 |
+
0.5
|
443 |
+
1-1
|
444 |
+
-0.5
|
445 |
+
0
|
446 |
+
0.5
|
447 |
+
1
|
448 |
+
0
|
449 |
+
0.1
|
450 |
+
0.2
|
451 |
+
0.3
|
452 |
+
0.4
|
453 |
+
0.5
|
454 |
+
-1
|
455 |
+
-0.5
|
456 |
+
0
|
457 |
+
0.5
|
458 |
+
1-1
|
459 |
+
-0.5
|
460 |
+
0
|
461 |
+
0.5
|
462 |
+
1
|
463 |
+
0
|
464 |
+
0.2
|
465 |
+
0.4
|
466 |
+
0.6
|
467 |
+
0.8
|
468 |
+
1
|
469 |
+
1.2
|
470 |
+
1.4
|
471 |
+
1.6
|
472 |
+
-1
|
473 |
+
-0.5
|
474 |
+
0
|
475 |
+
0.5
|
476 |
+
1-1
|
477 |
+
-0.5
|
478 |
+
0
|
479 |
+
0.5
|
480 |
+
1
|
481 |
+
0
|
482 |
+
0.2
|
483 |
+
0.4
|
484 |
+
0.6
|
485 |
+
0.8
|
486 |
+
"1
|
487 |
+
"2
|
488 |
+
!"#
|
489 |
+
$%
|
490 |
+
"1
|
491 |
+
"3
|
492 |
+
"1
|
493 |
+
"4
|
494 |
+
"2
|
495 |
+
"5
|
496 |
+
"3
|
497 |
+
"6
|
498 |
+
!"#
|
499 |
+
$%
|
500 |
+
!"#
|
501 |
+
$%
|
502 |
+
!"#
|
503 |
+
$%
|
504 |
+
!"#
|
505 |
+
$%
|
506 |
+
FIG. 3: Contribution to CN from nonseparability in SDF, Dns
|
507 |
+
KL.
|
508 |
+
gion and near origin is discussed based on Figs. 4 and 5 since
|
509 |
+
in Fig. 2 and 3, their scale is different (log and normal plot).
|
510 |
+
2. Correlation between ASDF and KL divergence
We show in Fig. 4 the correlation between DG and ASDF for each configuration, for the overall region and near the origin. These figures indicate that the contribution from DG to CN clearly exhibits a strong correlation with the ASDF, which is naturally accepted since DG by definition explicitly depends on the ASDF, i.e., it reflects local nonlinear information around the given configuration. Meanwhile, the correlation exhibits a clear dependence on the set of SDFs, which appears to be well characterized by the set of coordination numbers.

FIG. 4: Square root of $D^{\mathrm{dG}}_{\mathrm{KL}}$ as a function of the absolute value of ASDF for each configuration, for the overall range (left) and near the disordered state (right).

FIG. 5: Square root of $D^{\mathrm{ns}}_{\mathrm{KL}}$ as a function of the absolute value of ASDF for each configuration.

The correlation near the origin (i.e., the disordered state) can also be well characterized by the coordination number, whilst its dependence is opposite to the overall one. To further address how the different correlations between $D^{\mathrm{dG}}_{\mathrm{KL}}$ and the ASDF are dominated, we here provide a simple model where the canonical distributions for the practical and linear systems are both approximated by normal distributions whose variances are simply proportional to those of the corresponding CDOS. Since we measure the divergence on the e-flat manifold, their canonical distributions are also separable. Therefore, we can straightforwardly rewrite $D^{\mathrm{dG}}_{\mathrm{KL}}$ as
$$
D^{\mathrm{dG}}_{\mathrm{KL}} \simeq \underbrace{\frac{A_x^{2}}{2\sigma_{LX}^{2}} + \frac{A_y^{2}}{2\sigma_{LY}^{2}}}_{f(\vec{\sigma})} + \underbrace{\ln\!\left(\frac{\sigma_{LX}\sigma_{LY}}{\sigma_{X}\sigma_{Y}}\right) + \frac{\sigma_{X}^{2}\sigma_{LY}^{2} + \sigma_{Y}^{2}\sigma_{LX}^{2}}{2\sigma_{LX}^{2}\sigma_{LY}^{2}} - 1}_{g(\vec{\sigma})}, \qquad (7)
$$
where $A_x$ and $A_y$ denote the elements of the ASDF for the individual SDFs $X$ and $Y$, $\sigma_X$ and $\sigma_Y$ denote the standard deviations (SDs) of the canonical distribution for the practical system, and $\sigma_{LX}$ and $\sigma_{LY}$ those for the linear system.
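As a consistency check on Eq. (7), which under the stated approximation is the KL divergence between two separable normal distributions whose mean shift is the ASDF, the following sketch compares the closed form against a direct per-dimension Gaussian KL computation (all numerical values and function names are illustrative):

```python
import math

def kl_gauss_1d(mean, s, s_lin):
    # KL( N(mean, s^2) || N(0, s_lin^2) ) for one SDF; the ASDF component
    # enters as the mean shift between practical and linear systems.
    return math.log(s_lin / s) + (s**2 + mean**2) / (2 * s_lin**2) - 0.5

def d_dg_eq7(Ax, Ay, sX, sY, sLX, sLY):
    # Eq. (7): local part f(sigma) plus Gaussian-deviation part g(sigma).
    f = Ax**2 / (2 * sLX**2) + Ay**2 / (2 * sLY**2)
    g = (math.log(sLX * sLY / (sX * sY))
         + (sX**2 * sLY**2 + sY**2 * sLX**2) / (2 * sLX**2 * sLY**2)
         - 1.0)
    return f + g

# Separability means the total divergence is the sum over the two SDFs.
Ax, Ay, sX, sY, sLX, sLY = 0.3, -0.2, 0.8, 1.1, 1.0, 0.9
direct = kl_gauss_1d(Ax, sX, sLX) + kl_gauss_1d(Ay, sY, sLY)
assert abs(direct - d_dg_eq7(Ax, Ay, sX, sY, sLX, sLY)) < 1e-12
```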
When the contribution from the absolute value of the ASDF is dominant (for the overall region), $D^{\mathrm{dG}}_{\mathrm{KL}} \simeq f(\vec{\sigma})$. At equicomposition, since the SD of the CDOS is proportional to $J^{-1/2}$, where $J$ denotes the coordination number, we get

$$
D^{\mathrm{dG}}_{\mathrm{KL}} \propto \frac{J}{2}\,|A|^{2} \qquad (8)
$$

for the case where the constituent SDFs have the same coordination number $J$, which can reasonably capture the characteristic correlation in the l.h.s. of Fig. 4.
Meanwhile, around the origin, where the absolute value of the ASDF can be neglected, we approximate $D^{\mathrm{dG}}_{\mathrm{KL}} \simeq g(\vec{\sigma})$ and consider the condition where the SDFs have the same coordination number. In this case, we can rewrite

$$
\tilde{D}^{\mathrm{dG}}_{\mathrm{KL}} \simeq \ln X + \frac{1}{X} - 1, \qquad (9)
$$

where $X = \sigma_{LX}^{2}/\sigma_{X}^{2}$ and $\tilde{\cdot}$ denotes the divergence around the origin. Since (i) the variance of the canonical distribution for the practical system is bounded by the CP while that for the linear system is not, (ii) Eq. (9) exhibits a monotonic increase for $X > 1$, and (iii) the bound due to the CP is expected to be much more enhanced for systems with lower coordination number (because of the larger variance of the CDOS), we can deduce that around the disordered state the following relationship is satisfied:

$$
\frac{d\tilde{D}^{\mathrm{dG}}_{\mathrm{KL}}}{dJ} < 0, \qquad (10)
$$

which can capture the opposite correlation between the ASDF and $D_{\mathrm{KL}}$ in the r.h.s. of Fig. 4. These considerations indicate that DG contains a comparable amount of nonlinear information to the ASDF.
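Point (ii), the monotonic increase of Eq. (9) for X > 1, can be verified directly; a minimal numeric sketch (function name illustrative):

```python
from math import log

def g_tilde(X):
    # Eq. (9): reduced divergence near the origin, X = sigma_LX^2 / sigma_X^2.
    return log(X) + 1.0 / X - 1.0

# d g_tilde / dX = 1/X - 1/X^2 > 0 for X > 1: monotonic increase, with the
# minimum value 0 attained at X = 1 (practical and linear variances coincide).
xs = [1.0 + 0.5 * k for k in range(10)]
vals = [g_tilde(x) for x in xs]
assert vals[0] == 0.0
assert all(b > a for a, b in zip(vals, vals[1:]))
```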
Next, we discuss the correlation between the ASDF and NS, as shown in Fig. 5. As shown in the figure, NS for the individual configurations of each system does not appear to exhibit any effective correlation with the ASDF, which is naturally accepted since, again, information about NS is non-local nonlinear information and is not explicitly included in the vector field. Therefore, we propose an alternative strategy to address how the non-local, beyond-ASDF nonlinearity is characterized by geometric information: since the average of $D^{\mathrm{ns}}_{\mathrm{KL}}$ over the possible configurations within the defined configuration space reflects the magnitude of the mutual constraints among the SDFs, we here focus on the geometric information about the CPs for the practical system and an artificially constructed separable system.

FIG. 6: Average of $\sqrt{D^{\mathrm{ns}}_{\mathrm{KL}}}$ over all configurations in terms of the information about the Hausdorff distance between CPs and their hypervolumes.

The constraint magnitude on configuration space can be attributed to the difference between the CP of the practical system (CP) and that of the separable system (CP$_0$), where we measure the difference by the following asymmetric Hausdorff distance:

$$
R_{\mathrm{H}} := \sup_{a \in \mathrm{CP}_0} \left( \inf_{b \in \mathrm{CP}} d(a,b) \right). \qquad (11)
$$
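Eq. (11) is the directed (one-sided) Hausdorff distance from CP$_0$ to CP; over finite vertex samples of the two polyhedra it can be sketched as follows (the toy point sets are illustrative, not actual CP vertices):

```python
import math

def directed_hausdorff(src_pts, dst_pts):
    # Eq. (11): sup over a in CP_0 of the distance from a to its nearest
    # point b in CP, evaluated here over finite point samples.
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return max(min(dist(a, b) for b in dst_pts) for a in src_pts)

# Toy 2D example: vertex samples of a square CP_0 circumscribing a
# triangular CP; the two directed distances need not coincide, which is
# why the standard of measurement is fixed to the separable CP_0.
cp0 = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
cp = [(-1.0, -1.0), (1.0, -1.0), (0.0, 1.0)]
r_h = directed_hausdorff(cp0, cp)
assert r_h == 1.0  # the top square corners are distance 1 from (0, 1)
```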
The reason why we particularly employ the asymmetric Hausdorff distance is that the CP for the separable system always takes a hyperrectangular shape touching the practical CP from outside: i.e., we always fix the reference of the Hausdorff distance to the separable system. In order to compare the NS character among different sets of SDFs, we further require additional information for normalizing the Hausdorff distance, namely (i) the contribution from the difference in hypervolumes of the separable system, which corresponds to the difference in the constraints on the individual (separable) SDFs, and (ii) the difference in the hyperregion that can take non-zero probability-distribution values. The former can be regarded as the inverse of $\left(V_{\mathrm{CP}_0}\right)^{1/f}$, which removes the dependence of the normalization on the dimension $f$ of configuration space, and the latter as the hypervolume of the practical CP itself. Therefore, we expect that the average of NS over all configurations can be a function $M$ of the following:

$$
\left\langle \sqrt{D^{\mathrm{ns}}_{\mathrm{KL}}} \right\rangle \simeq M\!\left( \frac{R_{\mathrm{H}} V_{\mathrm{CP}}}{\left(V_{\mathrm{CP}_0}\right)^{1/f}} \right). \qquad (12)
$$
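The normalization argument can be collected into the single descriptor appearing as the argument of M in Eq. (12); a minimal sketch (all numerical values illustrative):

```python
def ns_descriptor(r_h, v_cp, v_cp0, f):
    # Argument of M in Eq. (12): Hausdorff distance R_H weighted by the
    # practical-CP hypervolume and normalized, per dimension f, by the
    # separable-CP hypervolume.
    return r_h * v_cp / (v_cp0 ** (1.0 / f))

# Illustrative values for a 2-dimensional configuration space.
x = ns_descriptor(r_h=0.4, v_cp=1.2, v_cp0=4.0, f=2)
assert abs(x - 0.24) < 1e-12
```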
Figure 6 shows the relationship between NS and the Hausdorff distance between CPs based on Eq. (12) for the sets of SDFs, which exhibits clear correlations. This fact certainly indicates that the non-local information about the nonlinearity, NS in the SDFs, has a profound connection to the geometric configuration of the ground-state structures in configuration space.
III. CONCLUSIONS
We investigate the nonlinear character of the canonical ensemble, the canonical nonlinearity (CN), i.e., of the correspondence between a set of potential energy surfaces and the microscopic configuration in thermodynamic equilibrium, based on the correlation between the special vector field ASDF on configuration space and KL divergences on the statistical manifold, which can be decomposed into the local CN information DG and the non-local information NS. We find that DG contains a comparable amount of CN information to the ASDF, where their correlation for different sets of SDFs can be well interpreted in terms of the difference in pair coordination number. Meanwhile, the non-local CN information NS does not exhibit a clear correlation with the ASDF. The average of NS over all configurations can be well characterized by a properly normalized Hausdorff distance between the configurational polyhedra of the practical and an artificially separable system, which indicates that the average of the non-local CN information has a profound connection to the geometric configuration of the ground-state structures in configuration space.
IV. ACKNOWLEDGEMENT
This work was supported by Grant-in-Aids for Scientific Research on Innovative Areas on High Entropy Alloys through grant number JP18H05453 from the MEXT of Japan, and by a Research Grant from the Hitachi Metals·Materials Science Foundation.
1 J.M. Sanchez, F. Ducastelle, and D. Gratias, Physica A 128, 334 (1984).
2 K. Yuge, J. Phys. Soc. Jpn. 86, 104802 (2018).
3 K. Yuge and S. Ohta, J. Phys. Soc. Jpn. 88, 104803 (2019).
4 K. Yuge, J. Phys. Soc. Jpn. 91, 014802 (2022).
3NE2T4oBgHgl3EQfNwaq/content/tmp_files/load_file.txt
ADDED
@@ -0,0 +1,338 @@
1 |
+
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf,len=337
|
2 |
+
page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
3 |
+
page_content='03741v1 [cond-mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
4 |
+
page_content='stat-mech] 10 Jan 2023 Geometric Study on Canonical Nonlinearity for FCC-based Binary Alloys Koretaka Yuge1 and Ikumi Nishihara1 1 Department of Materials Science and Engineering,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
5 |
+
page_content=' Kyoto University,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
6 |
+
page_content=' Sakyo,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
7 |
+
page_content=' Kyoto 606-8501,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
8 |
+
page_content=' Japan For classical discrete systems under constant composition (typically reffered to as substitutional alloys),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
9 |
+
page_content=' canonical average φ typically provides a complicated nonlinear map from a set of potential energy surface to that of macroscropic structure in thermodynamic equilibrium,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
10 |
+
page_content=' the so-called “canonical nonlinearity: CN”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
11 |
+
page_content=' Although our recent study reveals that the CN can be reasonablly addressed for individual microscopic config- uration by two different ways of special vector field on configuration space, “anharmonicity in the structural degree of freedoms (ASDF)”,2,3 and Kullback-Leibler (KL) divergence DKL,4 that is the conceptual extention of ASDF to statistical manifold to include further non-local information about CN, their direct correlation on real lattices, is still totally unclear.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
12 |
+
page_content=' We here tuckle this problem for fcc-based equiatomic binary alloys that have been most studied in the CN-based context.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
13 |
+
page_content=' We confirm that while one of the contribution to CN of DdG KL for each configuration, due to difference in CDOS from Gaussian, exhibits significant positive correlation with ASDF, another contribution of Dns KL due to non-separability in structural degee of freedoms (SDFs) exhibit no effective correlation with ASDF, which can be naturally accepted since the former contribution depends on ASDF itself, while the latter is independent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
14 |
+
page_content=' We find that average of Dns KL over all configurations for sets of SDFs can be well-characterized by information about asymmetric Hausdorff distance between configurational polyhedra (CP) for practical and ideally separable system, and CP hypervolumes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
15 |
+
page_content=' This fact certainly indicates that non-local information about CN has profound connection to the geometric configuration for ground-state structures of alloys on configuration space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
16 |
+
page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
17 |
+
page_content=' INTRODUCTION When we consider substitutional alloys as classical discrete systems under constant composition, microscopic configura- tion along chosen coordination Qp in thermodynamic equilib- rium can be typically given by the canonical average: � Qp � Z = Z−1∑ i Q(i) p exp � −βU(i)� , (1) where Z denotes partition function, β inverse temperature, U potential energy and summation is taken over all possible con- figurations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
18 |
+
page_content=' For alloys, U can be exactly expressed as the appropriate complete orthonormal basis such as generalized Ising model (GIM),1 namely, U(k) = ∑ j � U ��Qj � Q(k) j , (2) where ⟨·|·⟩ denotes inner product, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
19 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
20 |
+
page_content=', trace over possible configurations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
21 |
+
page_content=' Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
22 |
+
page_content=' (2) naturally provides the concept that canonical average φ as a map from a set of potential energy U to equilibrium configuration QZ: φ (β) : U �→ QZ, (3) which generally exhibits complicated nonlinearity (here- inafter we call “canonical nonlinearity (CN)”).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
23 |
+
page_content=' To multilateraly address the CN, we have introduced two concepts of “anharmonicity in structural degree of freedoms (ASDF)” that is a special vector field on configuration space, and Kullback-Leibler divergence DKL on statistical manifold, which is the extention of ASDF to include further non-local CN information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
24 |
+
page_content=' We also confirm that the latter one can be further decomposed into three contributions in terms of SDF, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
25 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
26 |
+
page_content=', deviation in CDOS from Gaussian DdG KL, nonseparability (NS) in SDF Dns KL and nonadditivity in NS, where the last con- tribution is specific to multicomponent (R ≥ 3) alloys under pair correlations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
27 |
+
page_content=' While we recently bridge the above two concepts of CN on different wolrds of configuration space and statistical manifold through stochastic thermodynamics, their direct correlation on real lattices is still totally unclear.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
28 |
+
page_content=' We here tuckle this problem, to address how CN as vector field on configuration space and as divergence on statistical mani- fold correlates, and how their correlations are dominated, on fcc-based equiatomic binary alloys that have been most amply studied in the context of CN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
29 |
+
page_content=' We confirm that while DdG KL ex- hibits significant positive correlation with ASDF, it does not totally hold for Dns KL, which can be naturally accepted since the former contribution explicitly depends on ASDF while the lat- ter is independent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
30 |
+
page_content=' We find that average of Dns KL over possible configurations can be well characterized by information about asymmetric Hausdorff distance in configurational polyhedra between practical and ideally separable system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
31 |
+
page_content=' The details are shown below.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
32 |
+
page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
33 |
+
page_content=' CONCEPTS AND DISCUSSIONS A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
34 |
+
page_content=' Brief Concepts for Canonical Nonliearity Before we provide basic concepts for the CN, we first briefly explain the GIM that is employed throughout the paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
35 |
+
page_content=' We here focus on a A-B binary system, where the occupation of lattice site i by A (B) is given by the spin variable σ = +1 (−1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
36 |
+
page_content=' Then information about any given microscopic config- uration k along chosen coordination j can be given by Q(k) j = � ∏ i∈Sj σi � k , (4) where the product is performed over lattice points in fig- ure j, and ⟨·⟩k denotes taking linear average over symmetry- equivalent figures to j in configuraion k: Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
37 |
+
page_content=' (4) form com- 2 plete orthonormal basis functions, providing exact expantion of potential energy as given in Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
38 |
+
page_content=' (2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
39 |
+
page_content=' Using the GIM basis, we can introduce the measure of CN in terms of the following vector field, ASDF, on configuration space: A(Q) = � φ (β)◦ (−βΓ)−1� Q− Q, (5) where Γ denotes covariance matrix for configurational den- sity of states (CDOS) before applying many-body interaction to the system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
40 |
+
page_content=' The ASDF has significant features of (i) it is independent of energy and temperature, and (ii) it exhibit zero vector when φ is globally (or locally) linear map.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
41 |
+
page_content=' Therefore, ASDF is a natural measure of the CN depending only on geo- metric information derived from the underlying lattice.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
42 |
+
page_content=' Next, we introduce another measure of the CN on statistical manifold , which is the natural, conceptual extention of ASDF including futher non-local information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
43 |
+
page_content=' We have shown that the following KL divergence corresponds to the extention for CN: DKL � gQ C : gQ L � = DKL � gQ C : gQ C0 � + DKL � gQ C0 : gQ L � + ∆DNAD KL (Q), (6) where the first, second and third term of the r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
44 |
+
page_content='h.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
45 |
+
page_content='s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
46 |
+
page_content=' respec- tively corresponds to contribution from nonseparability (NS) in SDF, deviation in separable system from Gaussian (DG), and nonadditivity in the NS (NAD).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
47 |
+
page_content=' gQ C , gQ L and gQ C0 respec- tively denotes canonical distribution for practical system de- rived from configuration Q, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
48 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
49 |
+
page_content=', � φ (β)◦ (−βΓ)−1� Q, that for linear system whose CDOS takes Gaussian with Γ same as the practical system, and the product of marginal distributions for gQ C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
50 |
+
page_content=' We emphasize that DG explicitly depends on ASDF while NS and NAD are independentof ASDF, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
51 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
52 |
+
page_content=', the DG cor- responds to local nonlinear information while the latter two of NS and NAD to more non-local nonlinear information around the given configuration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
53 |
+
page_content=' Here we focus on the correlation between ASDF and CN as KL divergence for fcc-based equiatomic binary alloys with pair correlations that have been most amply studied in the con- text of CN, where under this condition, we have shown that NAD takes zero for any configuration in thermodynamic limit and we now consider such a case.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
54 |
+
page_content=' For calculations, we pre- pare 864-atom fcc-based supercell (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
55 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
56 |
+
page_content=', 6 × 6 × 6 expansion of conventional 4-atom cell), that is applied to MC simulation to obtain canonical distribution for individual configuration Q based on Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
57 |
+
page_content=' (5) to estimate ASDF and KL divergences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
58 |
+
page_content=' B. Results and Discussions. 1. Overall behavior of ASDF and KL divergence. We first show in Fig. 1 the behavior of ASDF for five sets of SDFs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' Near the origin, the absolute value of ASDF is smaller than in the outer region, naturally reflecting that φ locally acts as a linear map around the disordered state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' FIG. 1: ASDF vector field on configuration space for pair correlations on fcc binary alloys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' From the figure, we can see several absorption points, typically corresponding to the vertices of configurational polyhedra (i.e., the convex polyhedron derived from the end points of ASDF within the prepared area), as shown in our previous studies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' Meanwhile, away from the origin, ASDF basically tends to end up at one of the absorption points corresponding to the candidate ground states. This can be naturally accepted since there exist multiple sets of many-body interactions, generated through the ASDF Γ operator, for the individual ground states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' We then show the results of CN as KL divergence in Figs. 2 and 3, which respectively correspond to DG and NS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' For DG, the value near the origin is much smaller than in the outer region.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' Around the absorption points of ASDF, DKL(dG) tends to exhibit local minima, while at intermediate configurations between the origin and the ground states it takes larger values than elsewhere.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' These appear significantly similar in tendency to ASDF, which can be naturally accepted since, again, DKL(dG) itself is a straightforward extension of the ASDF concept to the statistical manifold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' For NS, the behavior totally differs from DKL(dG): i.e., it exhibits sharp local maxima for specific configurations, whose locations strongly depend on the set of SDFs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' Other than these specific maxima, DKL(NS) exhibits extremely small values, which would be partly attributed to the low-rank character of the canonical distribution around the ground-state configurations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' Such behavior of DKL(NS) does not exhibit a direct correlation with ASDF, which is further discussed later.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' Note: Comparison of the magnitudes of DKL(dG) and DKL(NS) over the overall region and near the origin is discussed based on Figs. 4 and 5, since in Figs. 2 and 3 the scales differ (log and linear plots).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' FIG. 2: Log plot of the contribution to CN from the deviation of the CDOS from a Gaussian, DdG KL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' FIG. 3: Contribution to CN from nonseparability in the SDF, Dns KL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' 2. Correlation between ASDF and KL divergence. We show in Fig. 4 the correlation between DG and ASDF on each configuration, for the overall region and near the origin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' These figures indicate that the contribution from DG to CN clearly exhibits a strong correlation with ASDF, which is naturally accepted since DG by definition explicitly depends on ASDF, i.e., it reflects local nonlinear information around the given configuration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' Meanwhile, the correlation exhibits a clear dependence on the set of SDFs, which appears to be well characterized by the set of coordination numbers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' FIG. 4: Square root of DdG KL as a function of the absolute value of ASDF on each configuration, for the overall range (left) and near the disordered state (right).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' FIG. 5: Square root of Dns KL as a function of the absolute value of ASDF on each configuration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' The correlation near the origin (i.e., the disordered state) can also be well characterized by the coordination number, whilst its dependence is opposite to the overall one.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' To further address what dominates the different correlations between DKL(dG) and ASDF, we here provide a simple model where the canonical distributions for the practical and linear systems are both approximated by normal distributions, whose variances are simply proportional to those of the corresponding CDOS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' Since we measure the divergence on the e-flat manifold, their canonical distributions are also separable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' Therefore, we can straightforwardly rewrite DKL(dG) as 4 DdG KL ≃ A2 x 2σ2 LX + A2 y 2σ2 LY � �� � f(⃗σ) +ln �σLXσLY σXσY � + σ2 Xσ2 LY + σ2 Yσ2 LX 2σ2 LXσ2 LY − 1 � �� � g(⃗σ) , (7) where Ax and Ay denotes element of ASDF for individual SDF of X and Y, σX and σY , denotes standard deviation (SD) for canonical distribution of practical system, and σLX and σLY that of linear system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
247 |
+
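Eq. (7) has the form of the closed-form KL divergence between two separable bivariate normal distributions. A quick numerical check (all values below are illustrative, not from the paper) confirms that the f + g decomposition equals the sum of the standard per-dimension Gaussian KL terms:

```python
import numpy as np

def kl_gauss_1d(mu1, s1, mu2, s2):
    # closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) )
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# illustrative values: ASDF components and standard deviations
Ax, Ay = 0.3, -0.2
sX, sY = 0.8, 1.1          # practical-system SDs
sLX, sLY = 1.0, 0.9        # linear-system SDs

f = Ax**2 / (2 * sLX**2) + Ay**2 / (2 * sLY**2)
g = (np.log(sLX * sLY / (sX * sY))
     + (sX**2 * sLY**2 + sY**2 * sLX**2) / (2 * sLX**2 * sLY**2) - 1)

# direct sum of per-dimension Gaussian KL divergences with mean offsets (Ax, Ay)
direct = kl_gauss_1d(0.0, sX, Ax, sLX) + kl_gauss_1d(0.0, sY, Ay, sLY)
assert abs((f + g) - direct) < 1e-12
```

The mean-offset terms collect into f and the variance-mismatch terms into g, which is what lets the two contributions be analyzed separately below.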
page_content=' When the contribution from the absolute value of ASDF is dominant (i.e., over the overall region), D^{\mathrm{dG}}_{\mathrm{KL}} \simeq f(\vec{\sigma}).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
page_content=' At equicomposition, since SD of CDOS is proportional to J−1/2 where J denotes coordina- tion number, we get DdG KL ∝ J 2 |A|2 (8) for the case where constituent SDFs has has the same coordi- nation number J, which can reasonably capture the character- istic correlation in the l.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
249 |
+
page_content='h.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
250 |
+
page_content='s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
251 |
+
page_content=' of FIg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
|
252 |
+
page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3NE2T4oBgHgl3EQfNwaq/content/2301.03741v1.pdf'}
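The scaling in Eq. (8) follows directly from substituting the CDOS scaling $\sigma_L \propto J^{-1/2}$ into the first term of $f(\vec{\sigma})$. The following minimal numerical check is an illustration of that substitution, not code from the original work:

```python
def f_term(A, sigma_L):
    # One component of f in Eq. (7): A^2 / (2 sigma_L^2)
    return A**2 / (2.0 * sigma_L**2)

A = 0.5
for J in (4, 8, 12):
    sigma_L = J ** -0.5  # SD of the CDOS taken proportional to J^(-1/2)
    # f reduces to (J/2) * |A|^2, i.e., the scaling of Eq. (8)
    assert abs(f_term(A, sigma_L) - J * A**2 / 2.0) < 1e-12
print("f scales as (J/2)|A|^2")
```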
Meanwhile, around the origin, where the absolute value of the ASDF can be neglected, we approximate $D^{\mathrm{dG}}_{\mathrm{KL}} \simeq g(\vec{\sigma})$ and consider the condition where the SDFs have the same coordination number. In this case, we can rewrite

$$
\tilde{D}^{\mathrm{dG}}_{\mathrm{KL}} \simeq \ln X + \frac{1}{X} - 1,
\tag{9}
$$

where $X = \sigma_{LX}^{2}/\sigma_{X}^{2}$ and $\tilde{\cdot}$ denotes the divergence around the origin. Since (i) the variance of the canonical distribution of the practical system is bounded by the CP while that of the linear system is not, (ii) Eq. (9) increases monotonically for $X > 1$, and (iii) the bound due to the CP is expected to be much more pronounced for systems with lower coordination number (because of the larger variance of the CDOS), we can deduce that, around the disordered state, the following relationship is satisfied:

$$
\frac{d\tilde{D}^{\mathrm{dG}}_{\mathrm{KL}}}{dJ} < 0,
\tag{10}
$$

which captures the opposite correlation between the ASDF and $D_{\mathrm{KL}}$ in the r.h.s. of Fig. 4. These considerations indicate that the DG contains a comparable amount of NOL information to the ASDF.
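The decomposition in Eq. (7) can be checked numerically. The sketch below is illustrative only: it assumes the practical-system distribution is a zero-mean normal with SDs $\sigma_X, \sigma_Y$ and the linear-system distribution is a normal offset by the ASDF components $(A_x, A_y)$ with SDs $\sigma_{LX}, \sigma_{LY}$; the closed-form Gaussian KL divergence then reproduces $f(\vec{\sigma}) + g(\vec{\sigma})$ exactly.

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    # Closed form for KL( N(mu1, s1^2) || N(mu2, s2^2) )
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2.0 * s2**2) - 0.5

def kl_eq7(Ax, Ay, sX, sY, sLX, sLY):
    # f + g decomposition of Eq. (7)
    f = Ax**2 / (2.0 * sLX**2) + Ay**2 / (2.0 * sLY**2)
    g = (math.log(sLX * sLY / (sX * sY))
         + (sX**2 * sLY**2 + sY**2 * sLX**2) / (2.0 * sLX**2 * sLY**2) - 1.0)
    return f + g

# Hypothetical parameter values for the check
Ax, Ay, sX, sY, sLX, sLY = 0.3, -0.2, 0.8, 1.1, 1.0, 1.4
direct = kl_normal(0.0, sX, Ax, sLX) + kl_normal(0.0, sY, Ay, sLY)
print(abs(direct - kl_eq7(Ax, Ay, sX, sY, sLX, sLY)) < 1e-12)  # → True
```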
Next, we discuss the correlation between the ASDF and the NS, shown in Fig. ??. As shown in the figure, the NS of individual configurations in each system does not exhibit an effective correlation with the ASDF, which is naturally expected since, again, the information about the NS is non-local NOL information and is not explicitly included in the vector field. Therefore, we propose an alternative strategy to address how the non-local, beyond-ASDF NOL is characterized by geometric information. Since the average of $D_{\mathrm{KL}}(\mathrm{NS})$ over the possible configurations of a given configuration space reflects the magnitude of the inter-constraints among the SDFs, we here focus on the geometric information about the CPs of the practical system and of an artificially constructed separable system: the constraint magnitude on the configuration space can be attributed to the difference between the CP of the practical system (CP) and that of the separable system (CP0), where we measure the difference by the following asymmetric Hausdorff distance:

$$
R_{\mathrm{H}} := \sup_{a \in \mathrm{CP}_0} \left( \inf_{b \in \mathrm{CP}} d(a, b) \right).
\tag{11}
$$

[FIG. 6: Average of $\tilde{D}^{\mathrm{NS}}_{\mathrm{KL}}$ over all configurations in terms of the Hausdorff distance between the CPs and their hypervolumes.]
The reason why we particularly employ the asymmetric Hausdorff distance is that the CP of the separable system is always a hyperrectangle that touches the practical CP from the outside; i.e., we always fix the reference of the Hausdorff distance to the separable system. In order to compare the NS character among different sets of SDFs, we further require additional information for normalizing the Hausdorff distance, i.e., (i) the contribution from the difference in the hypervolumes of the separable systems, which corresponds to the difference in the constraints on the individual (separable) SDFs, and (ii) the difference in the hyperregion that can take non-zero values of the probability distribution. The former can be accounted for by the factor $(V_{\mathrm{CP}_0})^{-1/f}$, which removes the dependence of the normalization on the dimension $f$ of the configuration space, and the latter by the hypervolume of the practical CP itself, $V_{\mathrm{CP}}$. Therefore, we expect that the average of the NS over all configurations can be a function $M$ of the following:

$$
\left\langle \tilde{D}^{\mathrm{NS}}_{\mathrm{KL}} \right\rangle \simeq M\!\left( \frac{R_{\mathrm{H}}\, V_{\mathrm{CP}}}{\left(V_{\mathrm{CP}_0}\right)^{1/f}} \right).
\tag{12}
$$
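The asymmetric Hausdorff distance of Eq. (11) can be sketched for finite point samples of the two polyhedra. The example below is illustrative only (the point sets are hypothetical), and because the sup and inf run over sampled points rather than the continuous sets, it only approximates the exact definition:

```python
import math

def asymmetric_hausdorff(cp0_points, cp_points):
    """R_H = sup_{a in CP0} ( inf_{b in CP} d(a, b) ), Eq. (11),
    evaluated on finite point samples of the two polyhedra."""
    return max(
        min(math.dist(a, b) for b in cp_points)
        for a in cp0_points
    )

# Toy example: the practical CP is a triangle sampled by its vertices,
# CP0 its outer-tangent bounding rectangle sampled by its corners.
cp = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
cp0 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(asymmetric_hausdorff(cp0, cp))  # → 1.0 (set by the corner (1, 1))
```

The asymmetry matters: swapping the arguments gives zero here, since every sampled vertex of the triangle also lies on the rectangle.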
Figure 6 shows the relationship between the NS and the Hausdorff distance between the CPs based on Eq. (12) for the sets of SDFs, which exhibits a clear correlation. This fact indicates that the non-local information about the nonlinearity, the NS in the SDFs, has a profound connection to the geometric configuration of the ground-state structures in configuration space.

III. CONCLUSIONS

We investigated the nonlinear character of the canonical ensemble, the canonical nonlinearity (CN), i.e., the correspondence between a set of potential energy surfaces and the microscopic configurations in thermodynamic equilibrium, based on the correlation between the ASDF, a special vector field on configuration space, and KL divergences on the statistical manifold, which can be decomposed into the local CN information DG and the non-local information NS. We find that the DG contains a comparable amount of CN information to the ASDF, and their correlation for different sets of SDFs can be well interpreted in terms of the difference in pair coordination number. Meanwhile, the non-local CN information NS does not exhibit a clear correlation with the ASDF. The average of the NS over all configurations can, however, be well characterized by a properly normalized Hausdorff distance between the configurational polyhedra of the practical and artificially separable systems, which indicates that the averaged non-local CN information has a profound connection to the geometric configuration of ground-state structures in configuration space.

IV. ACKNOWLEDGEMENT

This work was supported by a Grant-in-Aid for Scientific Research on Innovative Areas on High-Entropy Alloys through grant number JP18H05453 from the MEXT of Japan and by a Research Grant from the Hitachi Metals·Materials Science Foundation.
1 J.M. Sanchez, F. Ducastelle, and D. Gratias, Physica A 128, 334 (1984).
2 K. Yuge, J. Phys. Soc. Jpn. 86, 104802 (2018).
3 K. Yuge and S. Ohta, J. Phys. Soc. Jpn. 88, 104803 (2019).
4 K. Yuge, J. Phys. Soc. Jpn. 91, 014802 (2022).
3dFAT4oBgHgl3EQflR0d/content/tmp_files/2301.08616v1.pdf.txt
ADDED
@@ -0,0 +1,1769 @@
MNRAS 000, 1–13 (2022)
Preprint 23 January 2023
Compiled using MNRAS LATEX style file v3.0

Oxygen depletion in giant planets with different formation histories

Fonte S.1★, Turrini D.1,2, Pacetti E.1,3, Schisano E.1, Molinari S.1, Polychroni D.4, Politi R.1, Changeat Q.5,6
1 INAF-Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, I-00133, Rome, Italy
2 INAF-Osservatorio Astrofisico di Torino, Via Osservatorio 20, I-10025, Pino Torinese (TO), Italy
3 Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 2, I-00185, Rome, Italy
4 INAF-Osservatorio Astronomico di Trieste, Via Giambattista Tiepolo 11, I-34131, Trieste (TS), Italy
5 European Space Agency (ESA), ESA Office, Space Telescope Science Institute (STScI), 3700 San Martin Drive, Baltimore, MD 21218, USA
6 Department of Physics and Astronomy, University College London, Gower St., London WC1E 6BT, UK

Accepted 2023 January 19. Received 2022 December 23; in original form 2022 October 19

ABSTRACT
The atmospheric C/O ratio of exoplanets is widely used to constrain their formation. To guarantee that the C/O ratio provides robust information, we need to accurately quantify the amount of C and O in exoplanetary atmospheres. In the case of O, water and carbon monoxide are generally studied as the two key carriers. However, oxygen is a very reactive element and does not bind only with carbon; depending on the temperature, it also binds to refractory elements. Estimating the amount of oxygen bound to refractory elements is therefore critical for unbiased estimates of the C/O ratio. In this work, we investigate the oxygen deficit due to refractory elements and its effects on the atmospheric C/O ratio of giant exoplanets as a function of their metallicity and equilibrium temperature. We model the composition of planetary atmospheres assuming chemical equilibrium and using as input physically justified elemental mixtures arising from detailed planet formation simulations. Our results show how the interplay between the atmospheric temperature and non-solar abundances of oxygen and refractory elements can sequester large fractions of oxygen, introducing significant biases in evaluating the C/O ratio when this effect is not accounted for. We apply our results to the case of Jupiter in the Solar System and show how the currently estimated water abundance points to a true oxygen abundance that is four times the solar one.

Key words: planets and satellites: atmospheres – planets and satellites: composition – planets and satellites: formation – astrochemistry – Sun: abundances
1 INTRODUCTION

Oxygen and carbon are two of the most cosmically abundant elements and, together, account for about 80% of the budget of planet-building material within the circumstellar discs surrounding young stars (e.g. Asplund et al. 2009; Lodders 2010; Palme et al. 2014; Öberg & Bergin 2021). Oxygen and carbon are partitioned between the gas and dust of circumstellar discs depending on their local thermodynamic conditions (e.g. Fegley & Schaefer 2010; Palme et al. 2014; Eistrup et al. 2016; Öberg & Bergin 2021). Hotter regions are characterised by a larger presence of carbon and oxygen within the gas, although varying fractions of these elements are linked to refractory materials already in the innermost 1-2 au of circumstellar discs (Lodders 2003, 2010; Fegley & Schaefer 2010; Jura & Young 2014; Palme et al. 2014; Bergin et al. 2015; Doyle et al. 2019). Conversely, colder regions see increasing abundances of the two elements trapped in solids as ice.

Due to their different volatility, carbon and oxygen are not sequestered into solids from the disc gas at the same rate. Thus, the C/O abundance ratios of both gas and solids in circumstellar discs vary with the distance from the host star (e.g. Öberg et al. 2011; Eistrup et al. 2016; Öberg & Bergin 2021). As the disc composition is imprinted into planets during their formation process, the atmospheric C/O ratio of giant planets provides constraints on where they formed within their native discs (Öberg et al. 2011, see also Madhusudhan et al. 2016; Öberg & Bergin 2021; Turrini et al. 2022 and references therein for recent discussions). To probe the formation process of giant planets, therefore, it is critically important to quantify as accurately as possible the amounts of carbon and oxygen in their atmospheres.

★ E-mail: [email protected]
Observations by the Hubble Space Telescope and the Spitzer Space Telescope are mainly sensitive to H2O and CO. It is, therefore, a common practice in exoplanetary studies to estimate oxygen from the measured abundances of those two molecules (Lee et al. 2013; Kreidberg et al. 2014; Line et al. 2014; MacDonald & Madhusudhan 2019; Changeat et al. 2020; Line et al. 2021; Spake et al. 2021; Kawashima & Min 2021; Mikal-Evans et al. 2022; Changeat et al. 2022; Edwards et al. 2022; The JWST Transiting Exoplanet Community Early Release Science Team et al. 2022). This is typically done either via chemical equilibrium assumptions or via direct measurements of the abundances of those two tracers.

Oxygen, however, is a very reactive element and, as in the case of circumstellar discs, it does not bind only with carbon but also to refractory elements (Burrows & Sharp 1999; Fegley & Schaefer 2010). For example, in planetary atmospheres characterised by solar composition, about 22% of oxygen is expected to be bound to the rock-forming refractory elements Si, Fe, and Mg at temperatures lower than 1200 K (Burrows & Sharp 1999; Fegley & Schaefer 2010). Studies recovering the oxygen abundance using H2O and CO only, or those assuming chemical equilibrium models that do not include refractory elements, are therefore subject to biases.

© 2022 The Authors
arXiv:2301.08616v1 [astro-ph.EP] 20 Jan 2023
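The size of that bias can be made concrete with a back-of-the-envelope sketch. The example below is illustrative only: the solar C/O value is approximate, and the deficit fraction used is the ~22% quoted above for solar-composition atmospheres below 1200 K. If a fraction of the total oxygen is locked in refractory condensates and therefore missing from the observable carriers H2O and CO, the C/O ratio inferred from those carriers alone overestimates the true elemental ratio:

```python
SOLAR_C_TO_O = 0.55  # approximate solar C/O ratio (Asplund et al. 2009)

def apparent_c_to_o(true_c_to_o, oxygen_deficit):
    """C/O inferred when a fraction `oxygen_deficit` of the total O
    is sequestered in refractories and unseen in H2O and CO."""
    return true_c_to_o / (1.0 - oxygen_deficit)

# With ~22% of oxygen bound to Si, Fe, and Mg below ~1200 K:
print(round(apparent_c_to_o(SOLAR_C_TO_O, 0.22), 3))  # → 0.705
```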
|
83 |
+
As a case in point, the role of refractory elements in sequestering oxygen has been recently invoked by Cridland et al. (2019) to partially explain the unexpectedly high value of the C/O ratio inferred for two hot-Neptunes (GJ 436 b and HAT-P-26 b), when compared to predictions for a synthetic population of planets. However, we still lack an in-depth understanding of the connection between this process and the planet formation process, as well as its implications for giant planets and their atmospheres.
To estimate the potential bias in C/O estimates arising from neglecting molecules other than H2O and CO, we investigate the fraction of oxygen sequestered in refractory elements (the oxygen deficit in the following). Our study uses realistic models of giant planet atmospheres governed by equilibrium chemistry and explores multiple equilibrium temperatures and initial elemental abundances resulting from different formation and migration histories. Specifically, we consider hot and warm Jupiters that start their formation between 5 and 130 au from the host star and accrete both gas and planetesimals while migrating to their final orbits, based on the planet formation simulations and compositional modelling from Turrini et al. (2021) and Pacetti et al. (2022) (see Appendix A for details).

These planet formation scenarios result in planetary compositions enriched in refractory elements with respect to giant planets whose growth tracks are dominated by the accretion of nebular gas (Schneider & Bitsch 2021; Turrini et al. 2021; Pacetti et al. 2022). We focus on the atmospheric layers with optical depths that are optimally accessible by the spectrometers on board the NASA/ESA/CSA James Webb Space Telescope (JWST, Greene et al. 2016) and the ESA mission Ariel (Tinetti et al. 2018, 2021), but similar predictions can be easily produced for other observing conditions.
The rest of the paper is organised as follows. In section 2 we illustrate the thermo-physical and chemical modelling as well as the initial elemental compositions of the planetary atmospheres we investigate. In section 3 we show the partition of oxygen among its main carriers as a function of the planetary metallicity and equilibrium temperature. In section 4 we discuss the impact of the oxygen deficit for warm-to-hot Jupiters and for Jupiter in our own Solar system. We summarise our conclusions in section 5.
2 MODEL

In this study, we model the composition of the exoplanetary atmospheres of warm and hot Jupiters under the assumption of chemical equilibrium. The atmospheres we model are characterised by different temperatures, metallicity values and elemental compositions. The input metallicity values and elemental compositions of the giant planets are obtained from the planet formation simulations of Turrini et al. (2021), hereafter Paper I. In the following we provide the key aspects of our model and its input data.
2.1 Atmospheric modelling

We use FastChem (Stock et al. 2018) to solve the system of coupled nonlinear algebraic equations describing the atmospheric chemical equilibrium. The chemical network that FastChem implements is appropriate for temperatures in excess of 100 K, so it provides a reliable treatment in the range of atmospheric temperatures of warm and hot Jupiters (700–1500 K) we model in this work. As our focus is on quantifying the amount of oxygen sequestered by refractories, in this work we do not model the physical state of the resulting oxides, i.e. whether they remain in the gas phase or condense and trigger cloud formation. We defer the exploration of these aspects to future works.

Table 1. HAT-P-5b's characteristics (Thorngren & Fortney 2019)

Parameter   Value   Unit
𝑀𝑏          0.98    𝑀𝐽
𝑅𝑏          1.21    𝑅𝐽
𝑇★          5960    K
𝑀★          1.163   𝑀⊙
𝑅★          1.137   𝑅⊙
As discussed by Stock et al. (2018), the temperature dependence of the dimensionless mass action constant for each chemical reaction 𝑖 in FastChem's chemical network is approximated as:

ln 𝐾_𝑖(𝑇) = 𝑎_{0,𝑖}/𝑇 + 𝑎_{1,𝑖} ln 𝑇 + 𝑏_{0,𝑖} + 𝑏_{1,𝑖} 𝑇 + 𝑏_{2,𝑖} 𝑇^2    (1)

with the coefficients 𝑎_{𝑘,𝑖} and 𝑏_{𝑘,𝑖} provided by FastChem in tabular form.
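The fit of Eq. (1) can be evaluated directly. A minimal sketch in Python, where the coefficient values are purely hypothetical placeholders (the actual 𝑎_{𝑘,𝑖} and 𝑏_{𝑘,𝑖} are tabulated with FastChem for each reaction in its network):

```python
import math

def ln_mass_action_constant(T, a0, a1, b0, b1, b2):
    """Evaluate the FastChem fit (Eq. 1) for the dimensionless
    mass action constant of one reaction at temperature T [K]."""
    return a0 / T + a1 * math.log(T) + b0 + b1 * T + b2 * T**2

# Hypothetical coefficients for illustration only.
coeffs = dict(a0=5.0e4, a1=-1.2, b0=3.0, b1=-2.0e-4, b2=1.0e-8)

for T in (700.0, 1100.0, 1500.0):
    print(f"T = {T:6.0f} K  ->  ln K = {ln_mass_action_constant(T, **coeffs):8.3f}")
```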
The thermo-physical state of the atmosphere is defined through its pressure-temperature relationship following the approach in Guillot (2010). The temperature T is expressed as a function of the atmosphere's optical depth 𝜏 via

𝑇^4 = (3/4) 𝑇_int^4 (2/3 + 𝜏) + (3/4) 𝑇_eq^4 (2/3 + 𝑎1 + 𝑎2)    (2)
where 𝑇_int is the temperature at the base of the atmosphere, which is assumed constant at 100 K (Guillot 2010), and 𝑇_eq is the equilibrium temperature of the planet, which depends on the planet-star distance 𝐷 and the star's temperature 𝑇★ as discussed below. The quantities 𝑎1 and 𝑎2 incorporate the complex relationship between the atmosphere's optical depth and its opacity at thermal and visible wavelengths (Guillot 2010).
Following Guillot (2010), we assume plane-parallel geometry and hydrostatic equilibrium to describe the atmosphere. Under these conditions, the local pressure 𝑃 and optical depth 𝜏 are linked by the following relation:

𝑃 = 𝑔𝜏/𝑘_th    (3)

where 𝑔 is the gravity acceleration and 𝑘_th is the opacity at thermal wavelengths, for which we adopt a fiducial value of 0.01 cm^2/g, again following Guillot (2010).
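Combining Eqs. (2) and (3) maps optical depth to a temperature-pressure profile. A minimal sketch, assuming 𝑎1 = 𝑎2 = 0 for simplicity and a Jupiter-like gravity (neither choice is taken from the paper; 𝑇_int = 100 K follows the text):

```python
T_INT = 100.0  # temperature at the base of the atmosphere [K] (see text)
K_TH  = 0.01   # fiducial opacity [cm^2/g] (Guillot 2010)

def guillot_T(tau, T_eq, a1=0.0, a2=0.0):
    """Eq. (2): temperature at optical depth tau. a1 and a2 bundle the
    opacity-dependent terms of Guillot (2010); zero here for illustration."""
    T4 = 0.75 * T_INT**4 * (2.0/3.0 + tau) + 0.75 * T_eq**4 * (2.0/3.0 + a1 + a2)
    return T4**0.25

def pressure(tau, g):
    """Eq. (3): hydrostatic pressure at optical depth tau.
    g in cm/s^2; result converted from dyn/cm^2 to bar (1 bar = 1e6 dyn/cm^2)."""
    return g * tau / K_TH / 1.0e6

g = 2479.0  # cm/s^2, roughly Jupiter's surface gravity (assumed value)
for tau in (1e-4, 1e-2, 1.0):
    print(f"tau={tau:8.1e}  P={pressure(tau, g):9.5f} bar  T={guillot_T(tau, 1000.0):7.1f} K")
```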
We use HAT-P-5b and its host star as templates on which to set the planetary and stellar parameters. HAT-P-5b's characteristics match well those expected for an older version of the newly formed, hot and expanded giant planet simulated in Paper I (1 MJ and 1.6 RJ) after it undergoes secular cooling and shrinking. The main input parameters of HAT-P-5b and its star are derived from Thorngren & Fortney (2019) and summarised in Tab. 1.

Since we are interested in exploring a range of equilibrium temperatures, we vary the orbital distance 𝐷 of the giant planet from the host star between 0.2 and 0.04 au.

MNRAS 000, 1–13 (2022)
This results in increasing planetary equilibrium temperatures 𝑇_eq spanning from 700 to 1500 K as derived from

𝑇_eq = 𝑇★ √(𝑅★/(2𝐷)).    (4)
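As a quick sanity check, Eq. (4) with the HAT-P-5 stellar parameters of Tab. 1 and the quoted 0.04–0.2 au distance range indeed spans roughly 700–1500 K:

```python
import math

R_SUN = 6.957e8         # m
AU    = 1.495978707e11  # m

T_STAR = 5960.0         # K, HAT-P-5 (Tab. 1)
R_STAR = 1.137 * R_SUN  # m, HAT-P-5 (Tab. 1)

def t_eq(D_au):
    """Eq. (4): planetary equilibrium temperature at orbital distance D [au]."""
    D = D_au * AU
    return T_STAR * math.sqrt(R_STAR / (2.0 * D))

for D_au in (0.2, 0.1, 0.04):
    print(f"D = {D_au:5.2f} au  ->  T_eq = {t_eq(D_au):6.0f} K")
```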
With these assumptions, we generate the set of eight different pressure-temperature profiles reported in Fig. 1 that we feed to FastChem for each of the six sets of elemental abundances in Tab. 2. We focus our analysis on the atmospheric layer encompassing the pressure range between 0.01 and 1 bar (see Fig. 1), as it is the layer where both JWST (Greene et al. 2016) and the ESA mission Ariel (Tinetti et al. 2018, 2021) have the optimal sensitivity.
2.2 Elemental composition of the giant planets

To model the chemical initial conditions in the atmosphere, we consider six planetary mixtures resulting from the concurrent accretion of gas and planetesimals by a growing and migrating giant planet in the midplane of a protoplanetary disc. The disc compositional model assumes solar abundances (Asplund et al. 2009; Scott et al. 2015a,b) and accounts for the presence of gas, ices, organics and refractories in the disc's midplane. The input elemental abundances in the planetary atmosphere are listed in Tab. 2 and are the outcomes of the six planet formation simulations from Paper I, coupled with the updated disc compositional model by Pacetti et al. (2022), hereafter Paper II. We refer interested readers to Appendix A for more details.
The six formation scenarios of Paper I simulate the growth and migration process of giant planets starting at different distances from the host star, namely between 5 and 130 au, and ending their formation at 0.4 au (see Tab. 2 and Appendix A). The bulk metallicity of the giant planets increases with their initial planet formation distance (see Tab. 3), as the giant planets migrate across larger fractions of the circumstellar disc and encounter more solid material to accrete. We assume that the accreted gas and solids are split into their composing elements due to the high temperature of the young giant planet (Lissauer et al. 2009; D'Angelo et al. 2021) and recombine into molecules in its atmosphere. In the following, we will identify the six chemical mixtures based on their total bulk metallicity. The larger the migration, the more snowlines the giant planet crosses. As a result, the giant planets possess different abundances of C, O and refractory elements in the six formation scenarios.
The disc compositional model of Papers I and II focuses on the four cosmically abundant elements nitrogen (N), carbon (C), oxygen (O), and sulphur (S), here reported in order of decreasing volatility. In these works, N, C, and O are partitioned in the disc midplane between refractory solids (rocks and metals), organics, ices, and gas. Their radial abundance profiles are based on the outcome of astrochemical models and on observational constraints provided by meteorites, comets, polluted white dwarfs, and the interstellar medium (see Appendix A, Öberg & Bergin 2021 and Turrini et al. 2022 for discussion). While N, C, and O are partitioned between the gas and solid phase across the disc, the available observational evidence suggests that the bulk of S is sequestered into refractory solids close to the host star (see Papers I and II and Fegley & Schaefer 2010, Kama et al. 2019 and Turrini et al. 2022 for discussion). Following Papers I and II, we adopt S as a proxy for all refractory elements and derive the planetary abundance of each refractory element X by multiplying the S abundance in the simulated giant planet by the stellar X/S abundance ratio. This approach allows us to account for the 25 most abundant heavy elements (see Tab. 2).
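Because the abundances are expressed in dex (log10), the S-proxy scaling above reduces to a simple shift. A minimal sketch, where the stellar dex values are illustrative placeholders rather than those actually adopted in Papers I and II:

```python
# Assumed stellar abundances in dex, for illustration only.
STELLAR_DEX = {"S": 7.12, "Fe": 7.46, "Mg": 7.55, "Si": 7.51}

def refractory_dex(element, planet_S_dex):
    """Planetary abundance of refractory X from the S proxy:
    dex(X)_planet = dex(S)_planet + [dex(X)_star - dex(S)_star],
    i.e. multiplying the planetary S abundance by the stellar X/S ratio."""
    return planet_S_dex + (STELLAR_DEX[element] - STELLAR_DEX["S"])

planet_S = 7.5  # hypothetical planetary S abundance in dex
for el in ("Fe", "Mg", "Si"):
    print(f"{el}: {refractory_dex(el, planet_S):.2f} dex")
```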
Table 2. Elemental abundances of 25 elements in the atmosphere of the giant planet, resulting from the planet formation simulations from Turrini et al. 2021, using the updated compositional model of the protoplanetary disc by Pacetti et al. 2022. The elemental abundances are sorted by planetary migration scenario and expressed in dex (logarithmic abundance of atoms of a given element for every 10^12 hydrogen atoms, see Asplund et al. 2009 and Lodders 2010).

Element   Initial distance of the planet (AU)
          5      12     19     50     100    130
Al        6.38   6.57   7.15   7.06   7.21   7.42
Ar        6.40   6.40   6.40   6.40   6.40   6.40
C         8.44   9.00   9.12   9.39   9.14   9.35
Ca        6.27   6.46   7.04   7.35   7.10   7.31
Cl        6.15   6.34   6.52   7.23   7.38   7.19
Co        5.25   5.03   5.21   5.52   6.08   6.28
Cr        5.54   6.12   6.30   6.21   6.37   6.57
Cu        4.11   4.29   4.47   5.18   5.34   5.14
F         4.35   4.54   5.12   5.03   5.19   5.39
Fe        7.43   8.01   8.19   8.10   8.25   8.46
Ge        3.57   4.15   4.33   4.24   4.40   5.00
K         5.36   5.14   5.32   6.03   6.19   6.39
Mg        7.54   8.13   8.31   8.22   8.37   8.58
Mn        5.34   5.52   6.10   6.01   6.17   6.37
N         8.26   8.30   8.39   8.15   8.26   8.43
Na        6.16   6.35   6.53   7.24   7.39   7.20
Ni        6.12   6.30   6.48   7.19   7.35   7.15
O         9.15   9.24   9.00   9.29   9.45   10.05
P         5.36   5.55   6.13   6.04   6.19   6.40
S         7.07   7.26   7.44   8.15   8.30   8.11
Sc        3.08   3.26   3.44   4.15   4.31   4.11
Si        7.46   8.05   8.23   8.14   8.29   8.50
Ti        5.28   5.07   5.25   5.56   6.11   6.32
V         4.21   4.39   4.17   4.48   5.04   5.24
Zn        4.48   5.06   5.24   5.15   5.31   5.51
Table 3. Total metallicity (Z) and enrichments in C, O, and refractory elements (among which Fe, Mg, and Si are the most abundant) of the atmospheres of the giant planets in the six formation scenarios simulated by Turrini et al. 2021. The metallicity and the enrichments are expressed in units of the respective solar values. In this scale, a value of 1 indicates a perfect match with the corresponding solar quantity.

Initial Distance (AU)   Z     C     O     Refractories (Fe/Mg/Si)
5                       1.0   0.9   1.1   0.8
12                      1.3   1.3   1.3   1.2
19                      1.8   1.8   1.9   1.8
50                      3.4   3.2   3.7   3.7
100                     4.8   4.7   5.2   5.3
130                     7.6   7.3   8.3   8.7
In Tab. 3 we report, for each of the six initial planet formation distances reported in Tab. 2, the total metallicity Z and the abundances of C, O and refractory elements. In terms of refractory elements, we focus in particular on Fe, Mg and Si, as after C, O and N they provide the largest mass contribution to heavy elements (Lodders 2010). Both the metallicity and the elemental abundances in Tab. 3 are normalised to the relevant solar values. As can be immediately seen, refractory elements increase faster than O and C with increasing migration. The elemental compositions of the giant planets, therefore, significantly deviate from the solar composition in terms of elemental abundance ratios (i.e. different elements show different enrichments), as discussed in Papers I and II.
[Figure 1: Guillot P-T profiles of exoplanets at different 𝑇𝑒𝑞 (legend: 800, 1000, 1200, 1400 K); x-axis: Temperature [K], y-axis: Pressure [bar].]

Figure 1. P-T profiles of the simulated giant planets for the eight orbital distances 𝐷 and equilibrium temperatures 𝑇𝑒𝑞 we considered in this study. The grey region indicates the pressure range our atmospheric modelling focuses on, which has been chosen based on the atmospheric layer of highest sensitivity of the ESA mission Ariel (Tinetti et al. 2018, 2021) and the NASA/ESA/CSA JWST mission (Greene et al. 2016).
3 RESULTS

In this section, we discuss the abundances of all O-bearing molecules resulting from our atmospheric modelling with FastChem. The atmospheric models are computed considering eight planetary equilibrium temperatures spanning the range between 700 and 1500 K, and six formation and migration scenarios of the giant planet spanning initial formation distances between 5 and 130 au. The resulting 48 atmospheric models are shown in Figs. 3, 4 and 5.

Each panel in these figures reports the molecular abundances resulting from FastChem for the specific equilibrium temperature as coloured bar charts. The different bar charts in each panel illustrate the distribution of O in the various planet formation scenarios. The planet formation scenarios are identified by their normalised metallicity value Z = Z𝑝/Z∗, where Z𝑝 and Z∗ are the planetary and stellar metallicity (see Thorngren et al. 2016 and Paper I), and Z goes from 1 to 7.6. Individual molecules are explicitly reported only if their contribution in sequestering O exceeds 1% of total O; species not fulfilling this condition are grouped and their total contribution is reported under the label “Other”.
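The 1% grouping rule just described can be sketched as a small post-processing step (the input shares are illustrative, not actual FastChem output):

```python
def group_minor_carriers(shares, threshold=1.0):
    """shares: dict molecule -> percentage of total O.
    Returns a new dict where every entry below `threshold`
    is folded into a single 'Other' entry."""
    grouped = {}
    other = 0.0
    for mol, pct in shares.items():
        if pct >= threshold:
            grouped[mol] = pct
        else:
            other += pct
    if other > 0.0:
        grouped["Other"] = round(other, 2)
    return grouped

# Illustrative oxygen budget: two species fall below the 1% threshold.
example = {"H2O": 72.7, "Mg(OH)2": 12.3, "Fe(OH)2": 9.6,
           "SiO": 4.4, "CO": 0.6, "AlOH": 0.4}
print(group_minor_carriers(example))
```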
In the following subsections, we will separately discuss the results for three classes of planetary equilibrium temperatures: warm (700 K ≤ 𝑇𝑒𝑞 ≤ 800 K), transitional hot (900 K ≤ 𝑇𝑒𝑞 ≤ 1100 K) and hot (1200 K ≤ 𝑇𝑒𝑞 ≤ 1500 K) planets, as the three categories show different chemical behaviour. As illustrated by Fig. 2, the “transitional hot” label refers to the temperature range separating the two regimes in which oxygen has different main carriers. Specifically, oxygen is locked mostly in water and refractories for “warm” planets, while it is carried preferentially by CO, water and SiO in the atmospheres of “hot” planets.
In the pressure region considered in the present study (0.01–1 bar, see Sect. 2), the balance between the major C-bearing molecules CO and CH4 is set by the reaction CO + 3H2 ⇌ CH4 + H2O and favours CH4 at the lowest temperatures we model (≤800 K). With increasing planetary temperature, the balance of the reaction gradually shifts in favour of CO production. Around 900 K the two molecules CO and CH4 contribute equally as C carriers in a gas with solar composition. At higher temperatures (≥1000 K) CO accounts for about 90% of the C-bearing blend. We refer readers to Lodders & Fegley (2002), Fegley & Schaefer (2010) and Madhusudhan et al. (2016) for more detailed discussions.
3.1 Warm planets: 700 K ≤ 𝑇𝑒𝑞 ≤ 800 K

In the temperature regime of warm giant planets the main carriers of O are water and refractories (see Fig. 3). For increasing planetary metallicity values, the fraction of O incorporated into H2O drops from the initial value of about 3/4 (73%, see the cases Z=1 in Fig. 3) to less than 2/3 (62-63%, see the cases Z=7.6 in Fig. 3) of total O. Most of this decrease occurs as soon as the metallicity Z shifts from stellar to super-stellar (i.e. between Z=1 and Z=1.3, see Fig. 3).

This decrease in the role of water as a carrier of O is due to the faster increase of refractory elements with respect to O shown in Tab. 3: refractory elements (Fe-Mg-Si) increase by 50% when going from Z=1 to 1.3 while O grows only by 18%, due to the different efficiencies with which gas and solids are accreted by the giant planet (see Papers I and II for detailed discussions). Among refractories, Fe sequesters between 10-13% of total O, Mg between 12-17%, while Si's contribution is mostly constant at 5-6% of total O.
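The growth figures quoted above follow directly from the solar-normalised enrichments of Tab. 3 for the Z=1 and Z=1.3 scenarios:

```python
# Enrichments from Tab. 3 (units of the respective solar values).
refractories = {"Z=1.0": 0.8, "Z=1.3": 1.2}  # Fe/Mg/Si
oxygen       = {"Z=1.0": 1.1, "Z=1.3": 1.3}  # O

def pct_growth(d):
    """Percentage increase going from the Z=1.0 to the Z=1.3 scenario."""
    return 100.0 * (d["Z=1.3"] / d["Z=1.0"] - 1.0)

print(f"refractories: +{pct_growth(refractories):.0f}%")  # +50%
print(f"oxygen:       +{pct_growth(oxygen):.0f}%")        # +18%
```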
When considering the physically justified planetary compositions from Paper I, we find that 33-38% of O is bound to refractory elements as soon as the planetary metallicity is super-stellar (𝑍 > 1). This value is significantly higher than the expected 22% arising when solar abundance ratios are assumed between oxygen and refractories (Burrows & Sharp 1999; Lodders 2003; Fegley & Schaefer 2010). In the case of Z=1, moreover, we find that refractories account for 27% of the planetary oxygen, as the different elements are not in solar proportions (see Tab. 3). This highlights how the assumption of solar composition introduces biases in the interpretation of giant planet atmospheres.
3.2 Transitional hot planets: 900 K ≤ 𝑇𝑒𝑞 ≤ 1100 K

In this temperature range, planetary atmospheres exhibit a more complex behaviour than their colder counterparts discussed above. As shown in Fig. 4, the amount of oxygen sequestered by refractories is a function of both 𝑇𝑒𝑞 and the metallicity Z (hence, the formation distance and migration of the growing giant planets).

Moving toward hotter temperatures, transitional hot giant planets experience the expected shift from H2O to CO as the dominant carrier of O (see Fig. 4). In parallel, the fraction of O that is trapped by refractories undergoes a more radical change. At a fixed temperature 𝑇𝑒𝑞, the amount of O linked to refractories increases with the planetary metallicity Z (see Fig. 4a, b, and c). For each metallicity Z, however, the role of refractories as carriers of O drastically decreases with increasing temperatures.

For 𝑇𝑒𝑞=900 K, refractories sequester between 18% and 36% of total O when going from Z=1 to Z=7.6, largely due to the contributions of Fe and Mg (see Fig. 4a). Moving to 𝑇𝑒𝑞=1000 K, the amount of oxygen trapped by refractories drops by a factor of between 3 for the lowest values of Z and 1.5-2 for the highest ones. This decrease is due to the shrinking role of Fe and Mg (see Fig. 4b), while SiO accounts for an almost constant fraction of 5-6% of total O, as in the case of warm giant planets. By 𝑇𝑒𝑞=1100 K, refractories account for only 5-10% of total O, with Si becoming the main refractory O carrier (see Fig. 4c).
[Figure 2: schematic of the major oxygen carriers versus 𝑇𝑒𝑞 (600–1500 K), marking the warm region (H2O, Fe(OH)2, Mg(OH)2, SiO), the transition hot region, and the hot region (H2O, CO, SiO).]

Figure 2. Illustrative example of the evolution of the main oxygen carriers going from warm to transition hot and hot giant planets. The transition hot region marks the shift between the warm temperature regime where water and refractories are the main oxygen carriers and the hot temperature regime where O is mainly in the form of CO and H2O.
[Figure 3: stacked bar charts of the percentage of total O carried by H2O, Fe(OH)2, Mg(OH)2, SiO, CO and Other for Z = 1.0, 1.3, 1.8, 3.4, 4.8 and 7.6; panel (a) 𝑇𝑒𝑞 = 700 K, panel (b) 𝑇𝑒𝑞 = 800 K.]

Figure 3. Distribution of oxygen-bearing molecules in the pressure range where JWST and Ariel have the highest sensitivity ([0.01, 1] bar) for 𝑇𝑒𝑞=[700, 800] K in panels 𝑎 and 𝑏, respectively. We explicitly report only the molecules carrying a fraction of O greater than 1%.
[Figure 4: stacked bar charts of the percentage of total O carried by H2O, Fe(OH)2, Mg(OH)2, SiO, CO and Other for Z = 1.0, 1.3, 1.8, 3.4, 4.8 and 7.6; panel (a) 𝑇𝑒𝑞 = 900 K, panel (b) 𝑇𝑒𝑞 = 1000 K, panel (c) 𝑇𝑒𝑞 = 1100 K.]

Figure 4. Distribution of oxygen-bearing molecules in the pressure range where JWST and Ariel have the highest sensitivity ([0.01, 1] bar) for 𝑇𝑒𝑞=[900, 1000, 1100] K in panels 𝑎, 𝑏 and 𝑐, respectively. We explicitly report only the molecules carrying a fraction of O greater than 1%.
3.3 Hot planets: 1200 K ≤ 𝑇𝑒𝑞 ≤ 1500 K

Hot giant planets show simpler behaviour than transitional hot ones. The volatile molecules CO and H2O play the leading role as O-bearing species, with CO marginally dominant over H2O. The role of refractories is limited to 5-8% and is largely dominated by the constant 5-6% contribution of SiO, with only 0.5-2% being cumulatively contributed by all remaining refractory elements.

In the case of hot giant planets, estimating the atmospheric C/O ratio by measuring the abundance of O through CO and H2O proves more reliable. The measurements are affected by a limited systematic error
[Figure 5: stacked bar charts of the percentage of total O carried by H2O, Fe(OH)2, Mg(OH)2, SiO, CO and Other for Z = 1.0, 1.3, 1.8, 3.4, 4.8 and 7.6; panel (a) 𝑇𝑒𝑞 = 1200 K, panel (b) 𝑇𝑒𝑞 = 1300 K, panel (c) 𝑇𝑒𝑞 = 1500 K.]

Figure 5. Distribution of oxygen-bearing molecules in the pressure range where JWST and Ariel have the highest sensitivity ([0.01, 1] bar) for 𝑇𝑒𝑞=[1200, 1300, 1500] K in panels 𝑎, 𝑏 and 𝑐, respectively. We explicitly report only the molecules carrying a fraction of O greater than 1%.
of the order of 6% due to the neglected contribution of refractory
|
1089 |
+
elements. However, the interplay betweenthe non-stellarcomposition
|
1090 |
+
of giant planets and the sequestration of O by refractories means that
|
1091 |
+
the partition of O between CO and H2O deviates from the expected
|
1092 |
+
picture even at such high temperatures, as discussed below.
|
1093 |
+
3.4 Transition from H2O-dominated to CO-dominated
|
1094 |
+
atmospheres
|
1095 |
+
In Fig. 6 we show the evolution of the distribution of oxygen between
|
1096 |
+
the different O-bearing molecules as a function of the equilibrium
|
1097 |
+
temperature in the six planet formation scenarios from Paper I. As
|
1098 |
+
MNRAS 000, 1–13 (2022)
|
1099 |
+
|
1100 |
+
8
|
1101 |
+
Fonte S. et al.
|
1102 |
+
discussed previously, the fraction of O in the form of SiO is virtually
|
1103 |
+
constant at about 6% for all equilibrium temperatures and planetary
|
1104 |
+
metallicity values.
|
1105 |
+
O-bearing molecules with Mg and Fe carry a significant fraction
|
1106 |
+
of oxygen in the temperature range of warm giant planets but their
|
1107 |
+
role sharply decreases when 𝑇𝑒𝑞 exceeds 800 K. At about 900 K,
|
1108 |
+
the contribution of CO becomes comparable to the individual ones
|
1109 |
+
of Fe, Mg, and Si (see Fig. 6). Above this temperature, CO rapidly
|
1110 |
+
increases while H2O, Fe- and Mg- oxides decrease. Due to the non-
|
1111 |
+
stellar composition of the giant planets reported in Tab. 3, the crossing
|
1112 |
+
point between CO and H2O changes with the planetary metallicity.
|
1113 |
+
Specifically, the crossing point shifts by about 200 K, going from
|
1114 |
+
1200 K for solar-metallicity planets (𝑍 = 1) to less than 1000 K for
|
1115 |
+
the highest metallicity 𝑍 = 7.6 (see Fig. 6).
|
1116 |
+
Furthermore, the relative importance of CO and H2O (i.e. how
|
1117 |
+
much the two curves diverge at the highest temperatures) shows a
|
1118 |
+
non-monotonic evolution for increasing planetary metallicity (see
|
1119 |
+
Fig. 6). The difference between the percentages of O as CO and H2O
|
1120 |
+
is almost zero for 𝑍 = 1. This means that the O not sequestered by
|
1121 |
+
refractories equally distributes between water and carbon monoxide.
|
1122 |
+
Said difference sharply increases moving to 𝑍 = 1.3, where the water
|
1123 |
+
accounts for less than 40% of O and carbon monoxide about 60%
|
1124 |
+
(see Fig. 6).
|
1125 |
+
Moving toward increasing values of the planetary metallicity, the
|
1126 |
+
imbalance in the distribution of O among CO and H2O decreases
|
1127 |
+
until Z=4.8 and increases again at Z=7.6 where water accounts for
|
1128 |
+
about 40% of O while carbon monoxide accounts for about 50%.
|
1129 |
+
Such non-monotonic behaviour, as well as the non-monotonic trend
|
1130 |
+
of water between 900 and 1000 K in Fig. 6, arise from the different
|
1131 |
+
growth of the abundance of C, O, and refractories with planetary
|
1132 |
+
metallicity shown in Tab. 3.
|
1133 |
+
4 DISCUSSION

The results described in Sect. 3 highlight how the presence of refractory elements among the carriers of O causes the atmospheres of warm and transitional hot giant planets to be less rich in water than expected. For these planets, therefore, the role of refractories needs to be accounted for to produce accurate estimates of the atmospheric O/H abundance and C/O ratio.

We illustrate the effect of neglecting refractory oxides on the C/O ratio in Fig. 7, showing the trend of the oxygen deficit as a function of the metallicity and the equilibrium temperature of the giant planet. We quantify the oxygen deficit using the following formula:

𝑑𝑂 = 1 − 𝑟/𝑟𝑠 (5)

where 𝑟 is the C/O ratio calculated considering all O-bearing species present in exoplanetary atmospheres. The parameter 𝑟𝑠 is the C/O ratio estimated when the only carriers for oxygen taken into account are H2O and CO, so that

𝑟𝑠 = (CO + CH4) / (H2O + CO) (6)
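Eqs. 5 and 6 can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual chemical-equilibrium pipeline: the molecular abundances are hypothetical number densities, and `n_other_o` stands for the O atoms locked in refractory carriers such as SiO, Fe(OH)2 and Mg(OH)2.

```python
def oxygen_deficit(n_co, n_ch4, n_h2o, n_other_o):
    """Eqs. 5-6: oxygen deficit d_O = 1 - r / r_s.

    r counts every O carrier in its denominator, while r_s is the
    volatile-only C/O estimate built from H2O and CO alone.
    """
    r = (n_co + n_ch4) / (n_h2o + n_co + n_other_o)  # full C/O ratio
    r_s = (n_co + n_ch4) / (n_h2o + n_co)            # CO+H2O-only C/O ratio
    return 1.0 - r / r_s


def apparent_co(true_co, d_o):
    """Bias on the retrieved C/O: inverting Eq. 5 gives r_s = r / (1 - d_O)."""
    return true_co / (1.0 - d_o)
```

With no refractory oxygen the deficit vanishes, while a deficit of 0.3 inflates a true C/O of 0.55 to about 0.79, consistent with the illustrative numbers used in this section.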
As summarised in Fig. 7, at fixed planetary metallicity Z the oxygen deficit 𝑑𝑂 is inversely correlated with the equilibrium temperature 𝑇𝑒𝑞. In parallel, at fixed 𝑇𝑒𝑞 the oxygen deficit is directly correlated with the planetary metallicity. At temperatures higher than 1200 K, the oxygen deficit can be approximated as constant at about 6%. Between 1000 and 1100 K, 𝑑𝑂 grows almost linearly by a factor of 3, going from about 7% to 21% when moving from Z=1 to Z=7.6. For equilibrium temperatures decreasing below 1000 K, the oxygen deficit can easily span between about 20% and 40% due to the large contribution of refractory species discussed in Sect. 3.

The importance of properly accounting for the oxygen deficit is immediately highlighted by the following example. Giant planets whose metallicity is shaped by the accretion of planetesimals in the simulations of Papers I and II have Z>1 and C/O≈0.55. An oxygen deficit d𝑂=0.3 (30%, well within the range of values shown in Fig. 7) would cause the same giant planets to appear as possessing C/O=0.8. This C/O value, however, is compatible with giant planets whose metallicity originates from the accretion of gas instead of planetesimals, meaning that not accounting for the oxygen deficit leads to incorrect constraints on the formation history.

The correlation between oxygen deficit and planetary metallicity, however, is not constant across temperatures. Colder, lower metallicity planets can be characterised by the same oxygen deficit as warmer but higher metallicity planets. As a result, an uncertainty of 100 K in the planetary temperature can easily translate into an inaccuracy ≥ 50% in the oxygen deficit for giant planets with equilibrium temperatures around 1000 K. This, in turn, can critically impact the evaluation of the C/O ratio and the assessment of the giant planet formation history.
4.1 Implications for Jupiter in the Solar System

The results discussed in this work also impact the study of Jupiter in the Solar System, whose atmosphere has been compositionally characterised by the NASA missions Galileo and Juno (Atreya 2018; Li et al. 2020; Grassi et al. 2020). Specifically, the in-situ measurements by the mass spectrometer onboard Galileo's atmospheric probe show that Jupiter's C and S are 4 and 3 times more enriched than in the Sun, with 1-𝜎 uncertainties of about 20% (Atreya 2018).

Jupiter's atmospheric water abundance has recently been estimated by the microwave radiometer onboard the Juno mission (Li et al. 2020), revealing that the O abundance associated with H2O is 2.7 times the solar one. While the measurement of the O abundance is still affected by large uncertainties (the 1-𝜎 uncertainty is at least 60%, Li et al. 2020), this estimate suggests an atmospheric C/O ratio of 0.8. This, in turn, would point to Jupiter's heavy elements having been accreted through the disc gas (Bosman et al. 2019; Schneider & Bitsch 2021).

The observed enrichment in S, however, points to a large abundance of refractory elements and a significant role of oxygen sequestration by refractories. Since S can be used as a proxy for the enrichment of refractory species, Jupiter's atmospheric composition is similar to the scenario with Z=3.4 from Paper I¹. To more accurately assess the oxygen deficit that should be expected for Jupiter, we reprocessed all scenarios with FastChem using the same pressure-temperature profile as Grassi et al. (2020), based on the Galileo Entry Probe measurements (Seiff et al. 1998).

This pressure-temperature profile is shown in Fig. 8 and is associated with 𝑇𝑒𝑞 ∼ 122 K. The temperature of the atmospheric layer probed by Juno's instruments is about 260 K (Seiff et al. 1998), as highlighted in the left-hand panel of Fig. 8. Reprocessing the six planetary compositions from Sect. 2 with Jupiter's pressure-temperature profile results in the oxygen deficits reported in the right-hand panel of Fig. 8, where we highlight the scenario with the metallicity most closely matching Jupiter's atmospheric value (Atreya 2018).

¹ This scenario is characterised by a lower N abundance than Jupiter's nominal values from Atreya (2018) and Li et al. (2020), but this has no impact on the O deficit.
Figure 6. Evolution of the relative contributions of the five major O-bearing molecules (H2O, Fe(OH)2, Mg(OH)2, SiO, CO) as a function of the equilibrium temperature in the six formation scenarios (Z = 1.0, 1.3, 1.8, 3.4, 4.8, 7.6). The highlighted regions mark the crossing of the CO and H2O curves, i.e. the temperature interval where CO becomes the major O carrier. This crossing point shifts towards lower equilibrium temperatures as the metallicity increases.
Figure 7. Oxygen deficit as a function of the planetary metallicity 𝑍 (bottom axis; the top axis reports the corresponding initial planet formation distance in au) and the equilibrium temperature 𝑇𝑒𝑞 of the exoplanet. The oxygen deficit is defined by Eq. 5 and quantifies the systematic error introduced by accounting only for CO and H2O as O carriers in the planetary atmosphere.
The atmospheric mixture with Z=3.4 is associated with an oxygen deficit of 32%, meaning that water only accounts for 68% of the total oxygen. Once we correct for the oxygen deficit, Jupiter's oxygen abundance with respect to H becomes 4 times that of the Sun. This, in turn, means that Jupiter's C/O ratio becomes equal to the solar value of 0.55. This value points to Jupiter's heavy elements mainly originating from the accretion of planetesimals (Turrini et al. 2021; Pacetti et al. 2022) and thus argues for a radically different formation history.
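The correction above can be verified with a back-of-the-envelope calculation; this is only a sketch of the arithmetic implied by Eq. 5, not the chemical-equilibrium computation itself.

```python
def corrected_o_enrichment(o_from_h2o, d_o):
    """Scale the H2O-based O/H enrichment by the O hidden in refractories (Eq. 5)."""
    return o_from_h2o / (1.0 - d_o)


# Juno's H2O-based O/H estimate (2.7x solar) corrected for a 32% oxygen deficit
o_total = corrected_o_enrichment(2.7, 0.32)  # ~3.97, i.e. ~4x solar

# With C measured at ~4x solar, the C/O ratio returns close to the solar ~0.55
co_jupiter = 0.55 * 4.0 / o_total
```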
5 CONCLUSIONS

In this work, we explore the role of refractory elements in sequestering oxygen in the atmospheres of giant planets and its impact on estimating the atmospheric C/O ratio. We model the atmospheric chemistry assuming chemical equilibrium and using realistic elemental mixtures produced from planet formation simulations as input. These elemental mixtures are the result of the interplay between the concurrent accretion of planetesimals and disc gas by the growing giant planets and are characterised by non-solar abundance ratios between C, O, and refractory elements.

We find that the oxygen deficit depends on both the atmospheric metallicity and equilibrium temperature and, in general, does not match the classical value of 22% estimated assuming solar elemental ratios (Burrows & Sharp 1999; Fegley & Schaefer 2010). At equilibrium temperatures lower than 1000 K, the oxygen deficit can reach values of 30-40% in the case of giant planets with high metallicity. At higher temperatures, the oxygen deficit is limited to 5-10%, mainly due to the contribution of silicon oxides.
Figure 8. Constraints on the oxygen deficit of Jupiter. Left (panel a, Jupiter pressure-temperature profile): P-T profile of the Jovian atmosphere adopted from Grassi et al. 2020; the highlighted red region marks the atmospheric layer probed by Juno's instruments (Li et al. 2020; Grassi et al. 2020). Right (panel b, deficit trend with the metallicity): oxygen deficit of the six formation scenarios we consider in this work for the Jovian P-T profile; the highlighted blue region marks the scenario with the metallicity value closest to Jupiter's (Atreya 2018).
We also find that the interplay between atmospheric metallicity and equilibrium temperature introduces degeneracies in the oxygen deficit at temperatures close to 1000 K. Specifically, colder and lower metallicity giant planets can be characterised by the same oxygen deficit as hotter but higher metallicity planets. As shown by Fig. 7, a 10% uncertainty on the atmospheric temperature (i.e. about 100 K at 1000 K) introduces uncertainties of more than a factor of three in the oxygen deficit. This issue can be mitigated by observationally constraining the atmospheric abundances of oxygen and refractories or the refractory-to-oxygen ratio. Future studies will need to assess the impact of the condensation of refractory materials and cloud formation on constraining the oxygen deficit, particularly for warm and transitional hot giant planets.

Our results highlight how not accounting for the oxygen deficit introduces systematic biases in quantifying the atmospheric C/O ratio of giant planets rich in refractory elements. These biases could be less marked for giant planets that accrete disc gas highly enriched in C and O by pebble sublimation (Bosman et al. 2019; Schneider & Bitsch 2021), or could be larger than estimated in this work if the giant planets accrete large amounts of oxygen-depleted planetesimals (e.g. closer to the host star). Similarly, different astrochemical environments of circumstellar discs (Eistrup et al. 2016; Pacetti et al. 2022) could affect the magnitude of the oxygen deficit. Future studies will need to explore the role of the oxygen deficit across a larger parameter space to shed light on these effects.

Independently of these uncertainties, the results of this work highlight how ignoring the effects of the oxygen deficit can lead to misinterpreting the formation history of the observed giant planets. As an illustrative example, an oxygen deficit of 30% makes a giant planet with C/O=0.55 appear like it possesses C/O=0.8. These two values point to radically different accretion histories and sources of metallicity: the accretion of planetesimals for the first, the accretion of disc gas for the second (see Papers I and II and Schneider & Bitsch 2021 for discussion). Adopting the second, incorrect, value as the true one therefore provides wrong constraints on the formation history and the native environment of the giant planet.

Finally, we apply the same methodology used for giant exoplanets to the case of Jupiter in the Solar System, taking advantage of the constraints on its abundances of oxygen and refractories and its pressure-temperature profile provided by the NASA missions Galileo and Juno. The measured atmospheric enrichment of H2O suggests that Jupiter's oxygen abundance is 2.7 times the solar one. However, the observed abundance of sulphur, which we use as a proxy for the refractory elements, points to oxygen deficit values of the order of 30%. After correcting for this deficit, Jupiter's oxygen abundance increases to 4 times the solar one, i.e. the same enrichment observed for carbon. This brings Jupiter's C/O ratio to match the solar value and points to the accretion of planetesimals as the source of Jupiter's heavy elements (Turrini et al. 2021; Pacetti et al. 2022).
ACKNOWLEDGEMENTS

The authors acknowledge the support of the European Research Council via the Horizon 2020 Framework Programme ERC Synergy "ECOGAL" Project GA-855130, of the Italian National Institute of Astrophysics (INAF) through the INAF Main Stream project "Ariel and the astrochemical link between circumstellar discs and planets" (CUP: C54I19000700005), and of the Italian Space Agency (ASI) through the ASI-INAF contracts No. 2016-23-H.0 and 2021-5-HH.0. This project also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 758892, ExoAI), from the Science and Technology Facilities Council (STFC) grants ST/S002634/1 and ST/T001836/1, and from the UK Space Agency grant ST/W00254X/1. Danae Polychroni is supported by INAF through the project PRIN INAF 2019 "Planetary systems at young ages (PLATEA)" and by the Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS) and CINECA through the programme "HPC-TRES (High Performance Computing Training and Research for Earth Sciences)", award number 2022-05. Quentin Changeat is funded by the European Space Agency under the 2022 ESA Research Fellowship Program. Eugenio Schisano acknowledges the contribution from PRIN INAF 2019 through the project "HOT-ATMOS". The authors wish to thank Aldo Bonomo and Matteo Brogi for their discussion and feedback on exoplanetary atmospheric observations. The computational resources for this work were supplied by the Genesis cluster at INAF-IAPS, and the technical support of Scigé John Liu is gratefully acknowledged.
DATA AVAILABILITY

All data necessary to reproduce the atmospheric models are available in the article. The FastChem code used in the analysis is publicly available at https://github.com/exoclime/FastChem. The outputs of the planet formation simulations from Turrini et al. (2021) and of the disc chemical models from Pacetti et al. (2022) are available on reasonable request to the relevant corresponding authors. All information needed to reproduce the planet formation simulations is described in Turrini et al. (2021).
REFERENCES

Asplund M., Grevesse N., Sauval A. J., Scott P., 2009, ARA&A, 47, 481
Atreya S. K., 2018, in Ko C. M., Yu P. C., Chang C. K., eds, Astronomical Society of the Pacific Conference Series Vol. 513, Serendipities in the Solar System and Beyond. p. 149
Bergin E. A., Blake G. A., Ciesla F., Hirschmann M. M., Li J., 2015, Proceedings of the National Academy of Sciences, 112, 8965
Bitsch B., Lambrechts M., Johansen A., 2015, A&A, 582, A112
Bosman A. D., Cridland A. J., Miguel Y., 2019, A&A, 632, L11
Burrows A., Sharp C. M., 1999, ApJ, 512, 843
Changeat Q., Edwards B., Al-Refaie A. F., Morvan M., Tsiaras A., Waldmann I. P., Tinetti G., 2020, AJ, 160, 260
Changeat Q., et al., 2022, ApJS, 260, 3
Cridland A. J., van Dishoeck E. F., Alessi M., Pudritz R. E., 2019, A&A, 632, A63
D'Angelo G., Weidenschilling S. J., Lissauer J. J., Bodenheimer P., 2021, Icarus, 355, 114087
Doyle A. E., Young E. D., Klein B., Zuckerman B., Schlichting H. E., 2019, Science, 366, 356
Edwards B., et al., 2022, ApJS
Eistrup C., Walsh C., van Dishoeck E. F., 2016, A&A, 595, A83
Fegley B., Schaefer L., 2010, in Principles and Perspectives in Cosmochemistry. p. 347, doi:10.1007/978-3-642-10352-0_7
Grassi D., et al., 2020, Journal of Geophysical Research (Planets), 125, e06206
Greene T. P., Line M. R., Montero C., Fortney J. J., Lustig-Yaeger J., Luther K., 2016, ApJ, 817, 17
Guillot T., 2010, A&A, 520, A27
Hayashi C., 1981, Progress of Theoretical Physics Supplement, 70, 35
Jura M., Young E. D., 2014, Annual Review of Earth and Planetary Sciences, 42, 45
Kama M., Shorttle O., Jermyn A. S., Folsom C. P., Furuya K., Bergin E. A., Walsh C., Keller L., 2019, ApJ, 885, 114
Kawashima Y., Min M., 2021, A&A, 656, A90
Kreidberg L., et al., 2014, ApJ, 793, L27
Lee J.-M., Heng K., Irwin P. G. J., 2013, ApJ, 778, 97
Li C., et al., 2020, Nature Astronomy, 4, 609
Line M. R., Knutson H., Wolf A. S., Yung Y. L., 2014, ApJ, 783, 70
Line M. R., et al., 2021, Nature, 598, 580
Lissauer J. J., Hubickyj O., D'Angelo G., Bodenheimer P., 2009, Icarus, 199, 338
Lodders K., 2003, ApJ, 591, 1220
Lodders K., 2010, Astrophysics and Space Science Proceedings, 16, 379
Lodders K., Fegley B., 2002, Icarus, 155, 393
MacDonald R. J., Madhusudhan N., 2019, MNRAS, 486, 1292
Madhusudhan N., Agúndez M., Moses J. I., Hu Y., 2016, Space Sci. Rev., 205, 285
Mikal-Evans T., et al., 2022, Nature Astronomy, 6, 471
Mordasini C., Mollière P., Dittkrist K. M., Jin S., Alibert Y., 2015, International Journal of Astrobiology, 14, 201
Öberg K. I., Bergin E. A., 2021, Phys. Rep., 893, 1
Öberg K. I., Murray-Clay R., Bergin E. A., 2011, ApJ, 743, L16
Pacetti E., et al., 2022, ApJ, 937, 36
Palme H., Lodders K., Jones A., 2014, Solar System Abundances of the Elements. pp 15–36
Schneider A. D., Bitsch B., 2021, A&A, 654, A72
Scott P., et al., 2015a, A&A, 573, A25
Scott P., Asplund M., Grevesse N., Bergemann M., Sauval A. J., 2015b, A&A, 573, A26
Seiff A., et al., 1998, J. Geophys. Res., 103, 22857
Spake J. J., et al., 2021, MNRAS, 500, 4042
Stock J. W., Kitzmann D., Patzer A. B. C., Sedlmayr E., 2018, MNRAS
The JWST Transiting Exoplanet Community Early Release Science Team et al., 2022, arXiv e-prints, p. arXiv:2208.11692
Thorngren D., Fortney J. J., 2019, ApJ, 874, L31
Thorngren D. P., Fortney J. J., Murray-Clay R. A., Lopez E. D., 2016, ApJ, 831, 64
Tinetti G., et al., 2018, Experimental Astronomy, 46, 135
Tinetti G., et al., 2021, arXiv e-prints, p. arXiv:2104.04824
Turrini D., Marzari F., Polychroni D., Testi L., 2019, ApJ, 877, 50
Turrini D., et al., 2021, ApJ, 909, 40
Turrini D., et al., 2022, Experimental Astronomy, 53, 225
APPENDIX A: GIANT PLANET FORMATION

The giant planets simulated in Paper I begin their formation as planetary embryos of 0.1 M⊕ at different positions within their natal protoplanetary disc and end their growth and migration as 1 Jovian-mass planets orbiting at 0.4 au from the host star. This close to the host star, further inward migration does not contribute to the compositional evolution of the giant planets in any significant way. The final orbital distance in the simulations was therefore chosen for reasons of computational efficiency (see Paper I for details).

The simulations from Paper I consider six growth and migration scenarios, with the initial seed of the giant planet starting its formation track at 5, 12, 19, 50, 100 and 130 au from the host star. These starting positions imply that the six simulated giant planets cross different compositional regions of the protoplanetary disc and encounter different masses of planetesimals during their migration (see Fig. A1). The simulations were performed with the parallel N-body code Mercury-Ar𝜒es (Turrini et al. 2019, 2021), which allows for accurate simulations of the aerodynamical and gravitational effects of the disc gas on the dynamical evolution of the planetesimals as well as of the formation process of the giant planets.

Mercury-Ar𝜒es models the growth and the migration of the forming giant planets through a two-phase approach (see Fig. A1), based on the growth and migration tracks from Bitsch et al. (2015), D'Angelo et al. (2021) and Mordasini et al. (2015). The simulations also account for the temporal evolution of the radius of the giant planet based on the treatment and results of Lissauer et al. (2009). This means that the radius of the giant planet is set by its expanded atmosphere during the growth of the planetary core and undergoes a rapid contraction after the runaway gas accretion phase begins (see Fig. A1). The physical radius of the giant planet is used by Mercury-Ar𝜒es to produce realistic impact fluxes of planetesimals.

Figure A1. Formation and migration tracks of the giant planet starting its growth at 19 au in the simulations from Paper I. The first two rows show the dynamical evolution of the planetesimals in response to the growth and migration of a giant planet (large red circle) at 0.5, 1.8, 2.1, and 2.5 Myr. The different colours mark planetesimals formed in different compositional regions of the disc (the legend reports the most volatile condensate of each region). The bottom left panel shows the relative contribution of the different compositional regions in the disc to the planetesimals accreted by the giant planet. The bottom right plot shows the temporal evolution of the mass of the giant planet (orange curve), its accretion of planetesimals (blue curve), and its semimajor axis (green curve). Mass and planetesimal flux are normalised to their final values, the semimajor axis to the initial one. Figure from Pacetti et al. 2022, who also supply an animated version of the figure.

The giant planets form and migrate within a protoplanetary disc whose gas surface density profile is modelled after that of HD 163296's circumstellar disc, one of the best characterised circumstellar discs to date. The host star and the circumstellar disc in the simulations have masses of 1 and 0.053 M⊙, respectively, and both are characterised by solar composition. The solar composition is modelled based on the data from Asplund et al. (2009) and Scott et al. (2015a,b). The disc temperature profile, which sets the position of the different snowlines, is modelled after that of the solar nebula (Hayashi 1981), i.e. 𝑇 = 𝑇0 (𝑟/1 au)^{−𝛽} with 𝛽 = 0.5 and 𝑇0 = 280 K.
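The power-law profile above is straightforward to evaluate; the sketch below illustrates how it locates a snowline (the 170 K water condensation temperature used in the example is an assumed illustrative value, not one taken from Paper I).

```python
def disc_temperature(r_au, t0=280.0, beta=0.5):
    """Hayashi (1981) midplane profile: T = T0 * (r / 1 au)^-beta."""
    return t0 * r_au ** (-beta)


def snowline_au(t_cond, t0=280.0, beta=0.5):
    """Invert the profile: radius where T drops to a given condensation temperature."""
    return (t0 / t_cond) ** (1.0 / beta)


# Assuming water condenses near 170 K, the snowline would sit at ~2.7 au
r_h2o = snowline_au(170.0)
```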
The chemical composition of the disc midplane is taken from Pacetti et al. (2022). The volatile fractions of N, C, and O are radially distributed across the disc between gas and ices based on the astrochemical simulations by Eistrup et al. (2016). In this work, we focus on the scenario of full chemical inheritance of the disc molecular composition from the pre-stellar phase and limited ionisation of the disc by the decay of short-lived radionuclides ("inheritance - low" scenario from Eistrup et al. 2016). The compositional model implemented by Pacetti et al. (2022) further incorporates the contribution of rocks and refractory organics as carriers of O, C and N.

The contribution of rocks is modelled assuming that rock-forming elements condense in the disc midplane in chondritic proportions (Lodders 2010; Palme et al. 2014): the resulting mixture is identified as "rocks + metals" in Fig. A1. The term "rock-forming elements" encompasses all refractory elements and the fractions of O, C and N that participate in the formation of chondritic rocks. Specifically, the
MNRAS 000, 1–13 (2022)
|
1616 |
+
|
1617 |
+
[Figure A1, panel residue: four scatter plots of planetesimal eccentricity versus semimajor axis (au), with points coloured by source region (legend: Rocks + Metals, H2O ice, Refr. Org. C, NH3 ice, CO2 ice); a plot of the fraction of accreted material versus semimajor axis (au); and a plot of normalised planetary mass, planetesimal flux, and semimajor axis versus time (Myr). Figure A2, panel residue: abundance with respect to H (10^-7 to 10^-2) versus radial distance (au) for H2O, Cref, CO2, CH4, CO, N2, and NH3 in the "inheritance (SLRs)" scenario; solid lines: solids, dotted lines: gas.]
Figure A2. Disc midplane chemical structure for the volatile fractions of oxygen and carbon (left) and of nitrogen (right) as derived in Pacetti et al. 2022 based on the astrochemical simulations of Eistrup et al. 2016. The left-hand plot also shows the condensation profile of refractory organic carbon according to the prescription by Cridland et al. 2019. Based on the comparison of solar and meteoritic abundances (Lodders 2010; Palme et al. 2014), 48% of total oxygen, 9% of carbon, 3% of nitrogen, and the totality of S are sequestered by refractory solids ("rocks + metals" in Fig. A1, see Turrini et al. 2021 for further discussion).
comparison between solar abundances and CI carbonaceous chondrites reveals that chondritic rocks carry 48% of O, 9% of C, and 3% of N. Chondritic rocks also carry the totality of S, which we use as a proxy for all refractory elements. The major role played by refractory O revealed by meteorites is supported by the measurements of the oxygen fugacity of refractory exoplanetary material contaminating the atmospheres of polluted white dwarfs (Doyle et al. 2019).
The refractory organic carbon is introduced to account for the carbon deficit observed in the Earth and solar system meteorites compared to the interstellar medium and comets (e.g. Bergin et al. 2015, and references therein). Its treatment is implemented according to the prescription used in Cridland et al. (2019), introducing a 50% condensation front at 3 au (see Pacetti et al. 2022 for further details). The distribution of the volatile and refractory organic carbon across the disc, as implemented by Pacetti et al. (2022) and used in this work, is shown in Fig. A2.
We refer interested readers to Turrini et al. (2021), Pacetti et al. (2022), and references therein for further details on the planet formation and disc composition modelling. The distribution of elements between the different phases in the midplane sets the composition of the gas and the planetesimals accreted by the giant planets during their growth and migration. The accreted materials are reverted to their composing elements by the high temperatures of the newly formed planets (Lissauer et al. 2009; D'Angelo et al. 2021) and recombine into molecules in their atmospheres.
This paper has been typeset from a TEX/LATEX file prepared by the author.
3dFAT4oBgHgl3EQflR0d/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
4dA0T4oBgHgl3EQfNf-w/vector_store/index.pkl ADDED @@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:56016d4a97ffed95b6fd5395a6f9726d3aa61ddf14099f927774cf401043fef0
size 265429
5NAzT4oBgHgl3EQfu_1f/vector_store/index.pkl ADDED @@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc37e6cfbfc04e4495ec83726e7d9b3fab054a38cbecca4ab6a5b7e99f558e6d
size 149962
5dE2T4oBgHgl3EQfOgbi/content/2301.03750v1.pdf ADDED @@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:af331b0611071802a7fae11a106ed5960176879764048ea03343bd321dd6f698
size 2102372
69AzT4oBgHgl3EQfgPxu/vector_store/index.faiss ADDED @@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:81c7e74cffafe20d1037595fc842fed4825955a28fb598d0acb901eca70db7bb
size 4587565
6dE5T4oBgHgl3EQfPg7O/content/tmp_files/2301.05506v1.pdf.txt ADDED @@ -0,0 +1,480 @@
On the feasibility of attacking Thai LPR systems with adversarial examples

Chissanupong Jiamsuchon
College of Computing, Prince of Songkla University, Phuket, Thailand

Jakapan Suaboot
College of Computing, Prince of Songkla University, Phuket, Thailand

Norrathep Rattanavipanon
College of Computing, Prince of Songkla University, Phuket, Thailand

Abstract—Recent advances in deep neural networks (DNNs) have significantly enhanced the capabilities of optical character recognition (OCR) technology, enabling its adoption in a wide range of real-world applications. Despite this success, DNN-based OCR is shown to be vulnerable to adversarial attacks, in which the adversary can influence the DNN model's prediction by carefully manipulating input to the model. Prior work has demonstrated the security impacts of adversarial attacks on various OCR languages. However, to date, no studies have been conducted and evaluated on an OCR system tailored specifically for the Thai language. To bridge this gap, this work presents a feasibility study of performing adversarial attacks on a specific Thai OCR application – Thai License Plate Recognition (LPR). Moreover, we propose a new type of adversarial attack based on the semi-targeted scenario and show that this scenario is highly realistic in LPR applications. Our experimental results show the feasibility of our attacks as they can be performed on a commodity desktop computer with over 90% attack success rate.

Index Terms—adversarial attacks, Thai OCR systems, Thai LPR systems, machine learning security

I. INTRODUCTION
Optical character recognition (OCR) is a technology to recognize characters from printed or handwritten images. In the last few decades, OCR has been adopted in many real-world applications, mainly due to the rise of deep neural network (DNN) development. With DNNs, OCR can now perform the character recognition task at high speed, enabling its use in many mission-critical and time-sensitive applications. For instance, an OCR system can be deployed in an airport to recognize passport information automatically [1]; or modern license plate recognition systems employed by law enforcement rely heavily on OCR in their core engine [9].
Besides the timing performance, the security of OCR is also paramount to the underlying application. Unfortunately, OCR inherits the same security weakness as DNNs since it is also vulnerable to an attack based on adversarial examples [8]. The aim of this attack is to confuse the DNN model, causing it to misclassify a specific input image. It is typically carried out by introducing subtle but deliberate changes to the input. These changes can be in the form of noise perturbation or small pixel images that are carefully crafted in such a way that they do not look suspicious to the human eye. As OCR has become widely adopted, it presents more incentives for an adversary to use this type of attack for his/her own benefit. This attack, for instance, can cause the OCR model to misinterpret passport data, license plate numbers, or financial documents, resulting in financial damages or crime detection avoidance.
A number of prior works explore different techniques to generate adversarial examples in black-box [10] and white-box [7] environments, in targeted [15] and untargeted [11] scenarios, and with different OCR languages, e.g., English [14], Chinese [5], and Arabic [2]. Despite this rich literature, to the best of our knowledge, there has been no prior work to demonstrate the attack success on an OCR system based on the Thai language. Due to the idiosyncratic features of the Thai alphabet (e.g., some letters contain an upper/lower symbol – ฮ/ญ), it remains unclear whether these existing attack techniques are still effective for Thai OCR systems.
To this end, we set out to answer this question by demonstrating whether it is feasible to generate adversarial examples that can be used to fool the state-of-the-art Thai OCR system. To achieve this goal, we turn our attack focus to a specific but widely used OCR application – the License Plate Recognition (LPR) system. In particular, our attack targets an LPR system based on Google Tesseract [13] with Thai language support. Contrary to the previous works in [15] or [11], we consider our LPR attack scenario semi-targeted, in which a successful adversarial example can mislead the LPR model to output any element in the set of adversary-chosen incorrect classes (e.g., a set of valid license numbers other than the true number). This is distinct from the targeted scenario, which aims to misguide the model to return a particular adversary-chosen incorrect class (e.g., a specific fake license number), or the untargeted scenario, which tricks the model into predicting any of the incorrect classes (e.g., any sequence of Thai characters/digits other than the true license number). We also propose a transformation that converts the existing targeted attack into the semi-targeted attack considered in this work.
arXiv:2301.05506v1 [cs.CR] 13 Jan 2023
Finally, we perform implementation experiments to evaluate our proposed LPR attack. The results indicate the realism of our attack as it obtains a high attack success rate and requires only a reasonable amount of resources (i.e., runtime and RAM usage) that can feasibly be acquired from a regular desktop computer. Overall, we believe this work represents the first step towards raising awareness of the threats posed by Thai OCR systems and eventually towards securing these systems against adversarial examples.
The contribution of our work can be summarized as follows:
(i) We present a systematic approach to demonstrate the feasibility of constructing adversarial examples to fool the state-of-the-art Thai OCR-based LPR system.
(ii) We explore an alternative attack scenario, called semi-targeted, and show it is highly realistic for attacking LPR applications.
(iii) Our evaluation results show the feasibility of our attack; it can achieve up to a 91% attack success rate and can be carried out realistically using only a commodity computer.
II. BACKGROUND AND RELATED WORK

A. License Plate Recognition (LPR)

LPR is the process that automatically reads and extracts vehicle license plate information from an image. It typically consists of three steps: localization, segmentation, and identification. In the first step, an LPR system scans through the entire image to detect and locate a license plate. Then, the segmentation step extracts the regions from the detected license plate where each region contains exactly a single character. Finally, LPR leverages OCR technology to classify and recognize each character and outputs the digitized license information in the identification step.
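The three-step pipeline above can be sketched as follows; this is an illustrative toy (a text "image" and a trivial join standing in for the OCR model), not the paper's implementation:

```python
# Toy sketch of the LPR pipeline: localization -> segmentation -> identification.
# The "image" is a list of text rows; the plate is the row wrapped in brackets.

def localize(image_rows):
    """Step 1: find the row that contains the license plate."""
    for row in image_rows:
        if row.startswith("[") and row.endswith("]"):
            return row[1:-1]          # strip the plate border
    raise ValueError("no plate found")

def segment(plate):
    """Step 2: split the plate into single-character regions."""
    return [ch for ch in plate if not ch.isspace()]

def identify(regions):
    """Step 3: 'recognize' each region and join the digitized result."""
    return "".join(regions)

image = ["scenery", "[กข 4523]", "bumper"]
print(identify(segment(localize(image))))   # -> กข4523
```

In a real system, step 3 is where a DNN-based engine such as Tesseract replaces the trivial join, which is exactly the stage the attack in this paper targets.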
While numerous OCR techniques have been proposed for LPR systems, the most common one used by modern LPR systems is based on DNNs. For example, Tesseract [13] is the state-of-the-art DNN-based OCR engine developed by Google and has been used in many LPR systems [12]. The current version of Tesseract uses LSTM DNNs and supports more than 50 languages, including Thai. Besides LPR, Tesseract has been adopted to recognize Thai characters in other settings, e.g., Thai document digitization [6].

B. Adversarial Attacks

An adversarial attack was first introduced and investigated by Szegedy et al. in 2013 [15]. They show that by optimizing the DNN's prediction error, an adversary can generate a small perturbation that can be applied to an input image in such a way that the resulting image (called an adversarial example) is misclassified by the DNN model. The work in [15] has inspired many subsequent studies to improve upon, and/or propose different settings for, adversarial attacks. Techniques in adversarial attacks can often be categorized using two orthogonal dimensions – adversarial knowledge and goal:
1) Adversarial knowledge can be further divided into white-box and black-box environments. White-box attacks assume a powerful adversary that has complete knowledge of the DNN model's architecture, including parameters, weight values, and/or its training dataset. Black-box attacks, on the other hand, consider a weaker adversary which can only query the DNN model but has no access to the model's internal information.
2) Adversarial goal is often classified as either targeted or untargeted scenarios. Targeted attacks aim to deceive the model into classifying an adversarial example as a targeted adversarial class, whereas an untargeted attack misleads the classification to an arbitrary class other than the correct one.
Prior works have explored various techniques for adversarial example generation targeting OCR systems with: (i) black-box [3] and white-box [14] environments, (ii) targeted [5] and untargeted [16] scenarios, and (iii) English [14], Chinese [5], and Arabic [2] languages. In this work, we aim to assess the feasibility of performing an adversarial attack on Thai LPR systems with a realistic black-box and semi-targeted adversarial setting.
III. ADVERSARY'S GOAL & THREAT MODEL

We consider a realistic adversary which aims to trick an automatic LPR system into misclassifying a specific, potentially illegal license plate into a different but still valid (i.e., well-formed) license number. The adversary is assumed to have oracle access to the black-box LPR model, i.e., he/she can query for the model's prediction output on any given image input. However, as the model is usually proprietary and confidential, he/she has no access to the model's internal parameters.
Figure 1 shows a scenario for performing an adversarial attack on a Thai LPR system. The attack is carried out by generating an adversarial example from an illegal license plate. Then, it is considered a successful attack if the following requirements hold:
[Figure 1 residue: illegal license plate → adversarial attack → adversarial example; plates shown: กข 4523, จช 1645; labels: Adversary, Unsuspicious Crime Record, Detected, Clean, "LPR system compromised!"]
Figure 1. Adversarial attacks on Thai LPR systems

[R1] The generated adversarial example looks similar to the illegal license plate input to human eyes. This is to ensure that only a small change needs to be applied to the physical license plate and, as a result, the modified license plate can still fool the LPR system without being noticed by humans.
[R2] The adversarial example's prediction class is different from its true class but still considered a valid license number. The rationale behind this requirement is that, to better evade detection, the adversary wants to avoid the DNN model returning an invalid and thus suspicious class, e.g., a malformed/unassigned license number, since it can easily be detected in software or by police officers.
Without loss of generality, we simplify [R2] by considering a license number valid if it consists of two Thai consonants followed by a four-digit number. For example, มค3456 is valid but มกุ1234 or มค123 are not. In practice, [R2] can be satisfied by using a database of legal license plate numbers.
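The simplified validity rule in [R2] can be checked with a short regular expression; the character class ก-ฮ (the Thai consonant block) and the helper name are our own illustration, not code from the paper:

```python
import re

# [R2] simplification: two Thai consonants (ก..ฮ) followed by exactly four digits.
VALID_PLATE = re.compile(r"[ก-ฮ]{2}[0-9]{4}")

def is_valid_license(number: str) -> bool:
    """Return True if `number` is a well-formed Thai license number."""
    return VALID_PLATE.fullmatch(number) is not None

print(is_valid_license("มค3456"))   # True  (two consonants + four digits)
print(is_valid_license("มกุ1234"))  # False (contains the vowel mark  ุ)
print(is_valid_license("มค123"))    # False (only three digits)
```

A real deployment would replace this pattern with a lookup in the database of legal plate numbers mentioned above.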
Due to [R2], it becomes clear that the traditional targeted and untargeted scenarios are not directly suitable in this attack setting. Specifically, the untargeted scenario could return an invalid number (e.g., มค123), violating [R2]; whereas the targeted scenario can be too restrictive. Hence, in this work, we introduce a relaxed concept of the targeted scenario, called semi-targeted, which accepts an adversarial example if its prediction class falls into a specific adversary-chosen set (as opposed to a specific class in the targeted scenario), e.g., a set of valid license numbers in the LPR application.
IV. METHODOLOGY

A. Overview

Our methodology for attacking Thai OCR systems consists of two phases, as shown in Figure 2. The first phase performs the black-box semi-targeted adversarial attack on an input license plate image and outputs an adversarial example.

Figure 2. Methodology for attacking Thai OCR systems

The second phase takes the adversarial example as input and evaluates whether this adversarial example constitutes a successful attack or not. We now discuss each phase in detail.
B. Phase-1: Black-box Semi-targeted Adversarial Attack

As illustrated in Figure 3, our black-box semi-targeted attack requires three input parameters: (1) an original image – img; (2) a set of valid classes – s; and (3) the number of candidates to be considered in this attack – n. In the context of LPR, img represents a license plate image; s corresponds to a set of valid license numbers, where, in this work, s is set to common license patterns in Thailand with two Thai consonants followed by a four-digit number.
The attack starts in step ①. It generates n classes from the given input with a constraint that all of these n classes must: (1) be non-repetitive and (2) contain at least one Thai consonant different from the img class. Then, we can apply the state-of-the-art black-box targeted attack for each individual class, resulting in n candidates for adversarial examples in step ②. Finally, in step ③, we display these n candidates to the user, ask the user to select the one that is closely similar to img, and output it as the adversarial example.
Note that this phase will always yield an adversarial example satisfying [R2]. This is because the targeted attack in step ② guarantees to produce an adversarial example that will be classified as the targeted class class_i, which, by construction in step ①, is valid (i.e., class_i ∈ s) and different from the img class.
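Step ① (candidate-class generation) can be sketched as follows; the random sampling strategy, the choice to keep the original digits, and the helper names are our own illustration of the stated constraints, not the authors' code:

```python
import random

# The Thai consonant block ก..ฮ (U+0E01..U+0E2E).
THAI_CONSONANTS = [chr(c) for c in range(0x0E01, 0x0E2F)]

def generate_candidate_classes(img_class: str, n: int, seed: int = 0) -> list:
    """Draw n distinct valid license numbers that differ from img_class
    in at least one leading consonant (the digits are kept unchanged)."""
    rng = random.Random(seed)
    digits = img_class[2:]            # e.g. "4364" from "มค4364"
    candidates = set()
    while len(candidates) < n:
        consonants = "".join(rng.choice(THAI_CONSONANTS) for _ in range(2))
        # constraint (2): at least one consonant differs from the img class
        if consonants != img_class[:2]:
            candidates.add(consonants + digits)   # set membership gives (1)
    return sorted(candidates)

print(generate_candidate_classes("มค4364", n=5))  # five distinct valid targets
```

Each returned class would then be fed to the black-box targeted attack of step ② as its target label.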
C. Phase-2: Adversarial Example Assessment

To assess the generated adversarial example, we recruit participants from our university, present them with the adversarial example image, and interview them with two questions:
Q1: Are all characters legible in the presented image?
Q2: What license number can you read from the image?
[Figure 2 residue: license plate image → ① black-box semi-targeted attack → ② adversarial example evaluation → attack success / attack failure; plate 1645 shown.]
Figure 3. Black-box semi-targeted attacks
The attack is considered successful if the participant responds "yes" to the first question and the answer from the second question matches the license number in img. If any of these conditions is not fulfilled, we return "Attack failure". As a result of these two carefully crafted questions, the adversarial example can only pass this phase when still resembling img, thus satisfying [R1].
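The Phase-2 decision rule amounts to a one-line check; the function and argument names below are our own illustration:

```python
def attack_succeeds(legible: bool, read_number: str, true_number: str) -> bool:
    """Phase-2 rule: the participant must find every character legible (Q1)
    and must read the original license number off the adversarial image (Q2)."""
    return legible and read_number == true_number

# A legible adversarial example from which the participant reads the true number:
print(attack_succeeds(True, "มค4364", "มค4364"))   # True  -> "Attack success"
print(attack_succeeds(True, "มศ4364", "มค4364"))   # False -> "Attack failure"
```

Note the asymmetry with Phase-1: the OCR model must output a *different* valid number ([R2]), while the human must still read the *original* one ([R1]).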
V. FEASIBILITY RESULTS

A. Experimental Setup

All of our experiments were conducted on an Ubuntu 20.04 machine with an Intel i7-11700K CPU @ 3.60 GHz. To measure the attack success rate, we performed our attack on 100 unique software-generated Thai license plate images. The OCR system used in our attack was based on Tesseract v5.2.0 and ran with the following parameters: psm=10, oem=1. Lastly, we used HopSkipJumpAttack [4] as the underlying black-box targeted attack algorithm; for each sample, we ran this attack until it reached 300 iterations.
Ethics. Our experiments were conducted using synthetic, instead of real, license plates for ethical reasons. This work was conducted solely for academic purposes and we do not condone using it for real-world attacks. Further, we did not gather any personally identifiable information during our interviews with participants.

B. Experimental Results

Attack Success Rate (ASR). Figure 4 shows the ASR of our attack while varying n. ASR improved drastically as we moved from the targeted attack (n = 1) to the semi-targeted attack (n > 1), with ASR = 91% for n = 10, compared to ASR = 70% for n = 1. This highlights the effectiveness of the semi-targeted scenario for attacking Thai OCR systems. We present a selection of generated adversarial examples for various n values in Table I, where Suc. refers to "Attack success".
Figure 4. Attack success rate and execution time

Attack Resource Consumption. In terms of resource consumption, generating adversarial examples requires a moderate amount of RAM (∼1.8–2 GB) on our machine, independent of the n value. On the other hand, the runtime for adversarial example generation depends linearly on n, as shown in Figure 4. For n = 10, the attack takes less than 2 hours to complete, which we consider to be reasonable because it only needs to be done once for any given license plate.

VI. CONCLUSION

This paper presents the first feasibility study of performing adversarial attacks on Thai OCR-based LPR systems. In addition, it proposes a new type of attack scenario, called semi-targeted, and argues that this scenario is more practical for attacking LPR systems than the traditional targeted and untargeted scenarios. Our experiments demonstrate the feasibility of our attack as it achieves a high success rate and can be carried out using only a commodity computer.
[Figure 3 residue: the original image (img) and the set of valid classes (s) feed ① class generation, producing class1…classn; each class is passed to ② a black-box targeted attack, yielding candidates cand1…candn; ③ assessment outputs the adversarial example; n = # of candidates. Figure 4 residue: attack success rate (%) and runtime (min.) plotted against the number of candidates n = 1…10.]
Table I
SAMPLES OF ADVERSARIAL EXAMPLES

[The Input Image and Adv. Ex. cells, and the Suc. marks, were images and are not recoverable from the text extraction; the OCR Out. cells read, per sample, for n = 1, 5, 10:]
Sample 1: มค4364, มค4364, มศ4364
Sample 2: ลศ1805, ลห1805, ลม1805
Sample 3: จส1645, จซ1645, จซ1645
Sample 4: ซฝ9597, ซฝ9597, ซฝ9597
REFERENCES
[1] Airport Supplier. Passport & ID VIZ OCR and authentication software. https://www.airport-suppliers.com/product/passport-id-viz-ocr-and-authentication-software/, 2022.
[2] Basemah Alshemali and Jugal Kalita. Adversarial examples in Arabic. In CSCI, pages 371–376, Las Vegas, NV, USA, 2019.
[3] Samet Bayram and Kenneth Barner. A black-box attack on optical character recognition systems. arXiv:2208.14302, 2022.
[4] Jianbo Chen, Michael I. Jordan, and Martin J. Wainwright. HopSkipJumpAttack: A query-efficient decision-based attack. In 2020 IEEE Symposium on Security and Privacy (SP), pages 1277–1294. IEEE, 2020.
[5] Lu Chen and Wei Xu. Attacking optical character recognition (OCR) systems with adversarial watermarks. arXiv:2002.03095, 2020.
[6] Todsanai Chumwatana and Waramporn Rattana-umnuaychai. Using OCR framework and information extraction for Thai documents digitization. In iEECON2021, pages 440–443, Pattaya, Thailand, 2021.
[7] Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. HotFlip: White-box adversarial examples for text classification. arXiv:1712.06751, 2017.
[8] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv:1412.6572, 2014.
[9] IACP. Automated license plate recognition. https://www.theiacp.org/projects/automated-license-plate-recognition, 2022.
[10] Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In ICML, pages 2137–2146, 2018.
[11] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In CVPR, pages 2574–2582, Las Vegas, NV, USA, 2016.
[12] Rahul R. Palekar, Sushant U. Parab, Dhrumil P. Parikh, and Vijaya N. Kamble. Real time license plate detection using OpenCV and Tesseract. In ICCSP, pages 2111–2115, Chennai, India, 2017.
[13] Ray Smith. An overview of the Tesseract OCR engine. In ICDAR, pages 629–633, Curitiba, Brazil, 2007.
[14] Congzheng Song and Vitaly Shmatikov. Fooling OCR systems with adversarial text images. arXiv:1802.05385, 2018.
[15] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv:1312.6199, 2013.
[16] Mingming Zha, Guozhu Meng, Chaoyang Lin, Zhe Zhou, and Kai Chen. RoLMA: A practical adversarial attack against deep learning-based LPR systems. In Inscrypt, pages 101–117, Guangzhou, China, 2020.
|
480 |
+
aw 1805aw 1805aW 1805% 16451645① 1645 95978 9597JM 4364Jm4364Jm4364J4364aW 1805
|
6dE5T4oBgHgl3EQfPg7O/content/tmp_files/load_file.txt
ADDED
@@ -0,0 +1,255 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6dE5T4oBgHgl3EQfPg7O/content/2301.05506v1.pdf,len=254

On the feasibility of attacking Thai LPR systems with adversarial examples

Chissanupong Jiamsuchon, College of Computing, Prince of Songkla University, Phuket, Thailand, s6230613001@phuket.psu.ac.th
Jakapan Suaboot, College of Computing, Prince of Songkla University, Phuket, Thailand, jakapan.su@phuket.psu.ac.th
Norrathep Rattanavipanon, College of Computing, Prince of Songkla University, Phuket, Thailand, norrathep.r@phuket.psu.ac.th

Abstract—Recent advances in deep neural networks (DNNs) have significantly enhanced the capabilities of optical character recognition (OCR) technology, enabling its adoption in a wide range of real-world applications. Despite this success, DNN-based OCR is shown to be vulnerable to adversarial attacks, in which the adversary can influence the DNN model's prediction by carefully manipulating input to the model. Prior work has demonstrated the security impacts of adversarial attacks on various OCR languages. However, to date, no studies have been conducted and evaluated on an OCR system tailored specifically for the Thai language. To bridge this gap, this work presents a feasibility study of performing adversarial attacks on a specific Thai OCR application – Thai License Plate Recognition (LPR). Moreover, we propose a new type of adversarial attack based on the semi-targeted scenario and show that this scenario is highly realistic in LPR applications. Our experimental results show the feasibility of our attacks as they can be performed on a commodity desktop computer with over 90% attack success rate.

Index Terms—adversarial attacks, Thai OCR systems, Thai LPR systems, machine learning security
I. INTRODUCTION

Optical character recognition (OCR) is a technology to recognize characters from printed or handwritten images. In the last few decades, OCR has been adopted in many real-world applications, mainly due to the rise of deep neural network (DNN) development. With DNNs, OCR can now perform the character recognition task at high speed, enabling its use in many mission-critical and time-sensitive applications. For instance, an OCR system can be deployed in an airport to recognize passport information automatically [1], and modern license plate recognition systems employed by law enforcement rely heavily on OCR in their core engine [9].

Besides timing performance, the security of OCR is also paramount to the underlying application. Unfortunately, OCR inherits the same security weakness as DNNs since it is also vulnerable to attacks based on adversarial examples [8]. The aim of this attack is to confuse the DNN model, causing it to misclassify a specific input image. It is typically carried out by introducing subtle but deliberate changes to the input. These changes can be in the form of noise perturbation or small pixel images that are carefully crafted in such a way that they do not look suspicious to the human eye. As OCR has become widely adopted, there are more incentives for an adversary to use this type of attack for his/her own benefit. This attack, for instance, can cause the OCR model to misinterpret passport data, license plate numbers, or financial documents, resulting in financial damages or crime detection avoidance.

A number of prior works explore different techniques to generate adversarial examples in black-box [10] and white-box [7] environments, in targeted [15] and untargeted [11] scenarios, and with different OCR languages, e.g., English [14], Chinese [5], and Arabic [2]. Despite this rich literature, to the best of our knowledge, there has been no prior work demonstrating a successful attack on an OCR system based on the Thai language. Due to the idiosyncratic features of the Thai alphabet (e.g., some letters contain an upper/lower symbol – ฮ/ญ), it remains unclear whether these existing attack techniques are still effective for Thai OCR systems.

To this end, we set out to answer this question by demonstrating whether it is feasible to generate adversarial examples that can be used to fool the state-of-the-art Thai OCR system. To achieve this goal, we turn our attack focus to a specific but widely used OCR application – the License Plate Recognition (LPR) system. In particular, our attack targets an LPR system based on Google Tesseract [13] with Thai language support. Contrary to the previous works in [15] or [11], we consider our LPR attack scenario semi-targeted, in which a successful adversarial example can mislead the LPR model to output any element in a set of adversary-chosen incorrect classes (e.g., a set of valid license numbers other than the true number). This is distinct from the targeted scenario, which aims to misguide the model to return a particular adversary-chosen incorrect class (e.g., a specific fake license number), and from the untargeted scenario, which tricks the model into predicting any of the incorrect classes (e.g., any sequence of Thai characters/digits other than the true license number). We also propose a transformation that converts an existing targeted attack into the semi-targeted attack considered in this work. Finally, we perform implementation experiments to evaluate our proposed LPR attack. The results indicate the realism of our attack as it obtains a high attack success rate and requires only a reasonable amount of resources (i.e., runtime and RAM usage) that can feasibly be acquired from a regular desktop computer. Overall, we believe this work represents the first step towards raising awareness of the threats posed to Thai OCR systems and eventually towards securing these systems against adversarial examples.

The contribution of our work can be summarized as follows: (i) We present a systematic approach to demonstrate the feasibility of constructing adversarial examples to fool the state-of-the-art Thai OCR-based LPR system. (ii) We explore an alternative attack scenario, called semi-targeted, and show it is highly realistic for attacking LPR applications. (iii) Our evaluation results show the feasibility of our attack; it can achieve up to a 91% attack success rate and can be carried out realistically using only a commodity computer.

II. BACKGROUND AND RELATED WORK
A. License Plate Recognition (LPR)

LPR is the process that automatically reads and extracts vehicle license plate information from an image. It typically consists of three steps: localization, segmentation, and identification. In the first step, an LPR system scans through the entire image to detect and locate a license plate. Then, the segmentation step extracts regions from the detected license plate, where each region contains exactly a single character. Finally, LPR leverages OCR technology to classify and recognize each character and outputs the digitized license information in the identification step. While numerous OCR techniques have been proposed for LPR systems, the most common one used by modern LPR systems is based on DNNs. For example, Tesseract [13] is the state-of-the-art DNN-based OCR engine developed by Google and has been used in many LPR systems [12]. The current version of Tesseract uses LSTM DNNs and supports more than 50 languages, including Thai. Besides LPR, Tesseract has been adopted to recognize Thai characters in other settings, e.g., Thai document digitization [6].
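The three-step pipeline above can be sketched in a few lines. This is only an illustrative skeleton under simplifying assumptions: the "image" is a list of text rows, localization and segmentation are toy stand-ins, and `recognize_char` is a stub where a real LPR system would invoke a DNN-based OCR engine such as Tesseract.

```python
from typing import List

# Toy "image": a list of text rows; the plate region is marked by brackets.
Image = List[str]

def localize(img: Image) -> str:
    """Step 1 (stub): find the plate region, here the row containing [...]."""
    for row in img:
        if "[" in row and "]" in row:
            return row[row.index("[") + 1 : row.index("]")]
    raise ValueError("no plate found")

def segment(plate_region: str) -> List[str]:
    """Step 2 (stub): split the plate into single-character regions."""
    return [c for c in plate_region if not c.isspace()]

def recognize_char(region: str) -> str:
    """Step 3 (stub): per-character OCR; a real LPR calls a DNN model here."""
    return region  # identity stand-in for the OCR model

def lpr(img: Image) -> str:
    """Full pipeline: localization -> segmentation -> identification."""
    return "".join(recognize_char(r) for r in segment(localize(img)))

print(lpr(["......", ".[กข 4523].", "......"]))  # -> กข4523
```

Swapping the identification stub for an actual OCR call is what makes the pipeline attackable: the adversarial perturbations discussed next target exactly that DNN step.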
B. Adversarial Attacks

Adversarial attacks were first introduced and investigated by Szegedy et al. in 2013 [15]. They show that, by optimizing the DNN's prediction error, an adversary can generate a small perturbation that can be applied to an input image in such a way that the resulting image (called an adversarial example) is misclassified by the DNN model. The work in [15] has inspired many subsequent studies to improve upon, and/or propose different settings for, adversarial attacks. Techniques in adversarial attacks can often be categorized along two orthogonal dimensions – adversarial knowledge and adversarial goal:

1) Adversarial knowledge can be further divided into white-box and black-box environments. White-box attacks assume a powerful adversary that has complete knowledge of the DNN model's architecture, including parameters, weight values, and/or its training dataset. Black-box attacks, on the other hand, consider a weaker adversary which can only query the DNN model but has no access to the model's internal information.

2) Adversarial goal is often classified as either a targeted or untargeted scenario. Targeted attacks aim to deceive the model into classifying an adversarial example as a targeted adversarial class, whereas an untargeted attack misleads the classification to an arbitrary class other than the correct one.

Prior works have explored various techniques for adversarial example generation targeting OCR systems with: (i) black-box [3] and white-box [14] environments, (ii) targeted [5] and untargeted [16] scenarios, and (iii) English [14], Chinese [5], and Arabic [2] languages. In this work, we aim to assess the feasibility of performing an adversarial attack on Thai LPR systems with a realistic black-box and semi-targeted adversarial setting.
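To make the perturbation idea concrete, the sketch below implements the classic Fast Gradient Sign Method of [8] on a toy logistic model. Note this is a white-box illustration of the general principle only, not the black-box method this paper uses; the weight vector, input, and epsilon are made-up values.

```python
import numpy as np

def fgsm(x: np.ndarray, grad_wrt_x: np.ndarray, eps: float) -> np.ndarray:
    """Fast Gradient Sign Method [8]: move each pixel by +/-eps along the
    sign of the loss gradient, then clip back into the valid pixel range."""
    return np.clip(x + eps * np.sign(grad_wrt_x), 0.0, 1.0)

# Toy white-box model: logistic regression on a 4-pixel "image".
w = np.array([1.5, -2.0, 0.5, 1.0])   # model weights (known to the attacker)
x = np.array([0.2, 0.8, 0.5, 0.4])    # input image
y = 1.0                                # true label

p = 1.0 / (1.0 + np.exp(-w @ x))      # confidence in the true class
grad = (p - y) * w                    # d(cross-entropy loss)/dx for this model
x_adv = fgsm(x, grad, eps=0.1)

p_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))
print(p, p_adv)  # the perturbed input lowers confidence in the true class
```

The perturbation is bounded by eps per pixel, which is what keeps the adversarial example visually close to the original, a property the LPR attack below also relies on ([R1] in Section III).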
III. ADVERSARY'S GOAL & THREAT MODEL

We consider a realistic adversary which aims to trick an automatic LPR system into misclassifying a specific, potentially illegal, license plate as a different but still valid (i.e., well-formed) license number. The adversary is assumed to have oracle access to the black-box LPR model, i.e., he/she can query for the model's prediction output on any given image input. However, as the model is usually proprietary and confidential, he/she has no access to the model's internal parameters.

Figure 1 shows a scenario for performing an adversarial attack on a Thai LPR system. The attack is carried out by generating an adversarial example from an illegal license plate.

Figure 1. Adversarial attacks on Thai LPR systems

Then, it is considered a successful attack if the following requirements hold:

[R1] The generated adversarial example looks similar to the illegal license plate input to human eyes. This is to ensure that only a small change needs to be applied to the physical license plate, and as a result, the modified license plate can still fool the LPR system without being noticed by humans.

[R2] The adversarial example's prediction class is different from its true class but still considered a valid license number. The rationale behind this requirement is that, to better evade detection, the adversary wants to avoid the DNN model returning an invalid and thus suspicious class, e.g., a malformed/unassigned license number, since it can easily be detected in software or by police officers.

Without loss of generality, we simplify [R2] by considering a license number valid if it consists of two Thai consonants followed by a four-digit number. For example, มค3456 is valid but มกุ1234 or มค123 are not. In practice, [R2] can be satisfied by using a database of legal license plate numbers.
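The simplified validity rule in [R2] (two Thai consonants followed by four digits) amounts to a short pattern check. As a caveat, the character class below uses the contiguous Thai consonant block ก–ฮ as an approximation of "consonant", which is an assumption of this sketch rather than something the paper specifies.

```python
import re

# Valid (simplified [R2]): exactly two Thai consonants (ก-ฮ), then four digits.
VALID_PLATE = re.compile(r"[ก-ฮ]{2}[0-9]{4}")

def is_valid_plate(number: str) -> bool:
    """Return True iff the string matches the simplified validity rule."""
    return VALID_PLATE.fullmatch(number) is not None

print(is_valid_plate("มค3456"))   # True: two consonants + four digits
print(is_valid_plate("มค123"))    # False: only three digits
print(is_valid_plate("มกุ1234"))   # False: contains a vowel mark, not a consonant
```

A production system would replace this pattern with a lookup against the database of legal plate numbers mentioned above.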
Due to [R2], it becomes clear that the traditional targeted and untargeted scenarios are not directly suitable in this attack setting. Specifically, the untargeted scenario could return an invalid number (e.g., มค123), violating [R2], whereas the targeted scenario can be too restrictive. Hence, in this work, we introduce a relaxed concept of the targeted scenario, called semi-targeted, which accepts an adversarial example if its prediction class falls into a specific adversary-chosen set (as opposed to a specific class in the targeted scenario), e.g., a set of valid license numbers in the LPR application.

IV. METHODOLOGY
page_content=' Overview Our methodology for attacking Thai OCR systems consists of two phases, as shown in Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6dE5T4oBgHgl3EQfPg7O/content/2301.05506v1.pdf'}
|
120 |
+
page_content=' The first phase performs the black-box semi-targeted adversarial attack on an input license plate image and outputs an adversarial example.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6dE5T4oBgHgl3EQfPg7O/content/2301.05506v1.pdf'}
|
121 |
+
page_content=' Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6dE5T4oBgHgl3EQfPg7O/content/2301.05506v1.pdf'}
|
122 |
+
page_content=' Methodology for attacking Thai OCR systems The second phase takes as input, the adversarial example, and evaluates whether this adversarial example constitutes a successful attack or not.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6dE5T4oBgHgl3EQfPg7O/content/2301.05506v1.pdf'}
|
123 |
+
page_content=' We now discuss each phase in detail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6dE5T4oBgHgl3EQfPg7O/content/2301.05506v1.pdf'}
|
124 |
+
B. Phase-1: Black-box Semi-targeted Adversarial Attack

As illustrated in Figure 3, our black-box semi-targeted attack requires three input parameters: (1) an original image, img; (2) a set of valid classes, s; and (3) the number of candidates to be considered in this attack, n. In the context of LPR, img represents a license plate image, and s corresponds to a set of valid license numbers; in this work, s is set to common license patterns in Thailand: two Thai consonants followed by a four-digit number. The attack starts in ①, which generates n classes from the given input under the constraint that all of these n classes must: (1) be non-repetitive and (2) contain at least one Thai consonant different from the img class.
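The class-generation step described above can be sketched in a few lines of stdlib Python. This is a minimal illustration, not the paper's code: the consonant alphabet, the choice to keep the original four-digit number, and the function name are all our assumptions.

```python
import random

# A representative subset of Thai consonants used on license plates
# (an assumption; the paper does not enumerate the exact alphabet it samples).
THAI_CONSONANTS = list("กขคงจฉชซฌญฐณดตถทธนบปผฝพฟภมยรลวศษสหฬอฮ")

def generate_target_classes(img_class: str, n: int, seed: int = 0) -> list:
    """Generate n non-repetitive candidate classes of the form
    <consonant><consonant><4 digits>, each differing from img_class in at
    least one Thai consonant (the constraint of step 1 of Phase-1)."""
    rng = random.Random(seed)
    img_letters, img_digits = img_class[:2], img_class[2:]
    targets = set()
    while len(targets) < n:
        letters = rng.choice(THAI_CONSONANTS) + rng.choice(THAI_CONSONANTS)
        if letters == img_letters:
            continue  # must differ from img in at least one consonant
        # Keeping the original 4-digit number maximizes visual similarity
        # (a design assumption, not stated in the paper).
        targets.add(letters + img_digits)
    return sorted(targets)
```

Each returned class is a valid member of s (two Thai consonants plus four digits), so any adversarial example classified as one of them already satisfies the validity requirement.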
Then, we apply the state-of-the-art black-box targeted attack to each individual class, resulting in n candidate adversarial examples in ②. Finally, in ③, we display these n candidates to the user, ask the user to select the one that most closely resembles img, and output it as the adversarial example. Note that this phase always yields an adversarial example satisfying [R2], because the targeted attack in ② is guaranteed to produce an adversarial example classified as the targeted class class_i, which, by construction in ①, is valid (i.e., class_i ∈ s) and different from the img class.
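In the paper, the final selection step is performed by a human who picks the candidate most closely resembling img. As an illustrative automated stand-in (our assumption, not the paper's procedure), one could rank candidates by pixel-wise distance to the original image:

```python
def select_closest_candidate(img, candidates):
    """Pick the candidate adversarial example with the smallest mean
    squared pixel distance to img. Images are flat pixel sequences here
    for simplicity; this automated proxy replaces the human choice made
    in the paper and is purely illustrative."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return min(candidates, key=lambda c: mse(img, c))
```

In practice any perceptual similarity metric could stand in for the human judgment; MSE is just the simplest choice.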
C. Phase-2: Adversarial Example Assessment

To assess the generated adversarial example, we recruit participants from our university, present them with the adversarial example image, and interview them with two questions:

Q1: Are all characters legible in the presented image?
Q2: What license number can you read from the image?
Figure 3. Black-box semi-targeted attacks

The attack is considered successful if the participant responds “yes” to the first question and the answer to the second question matches the license number in img. If either condition is not fulfilled, we return “Attack failure”. As a result of these two carefully crafted questions, the adversarial example can only pass this phase when it still resembles img, thus satisfying [R1].
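The Phase-2 decision rule above can be written as a small helper; the function name and signature are ours, not the paper's:

```python
def assess_adversarial_example(legible: bool, read_number: str,
                               true_number: str) -> str:
    """Phase-2 decision rule: the attack succeeds only if the participant
    found every character legible (Q1) and read exactly the license
    number of the original image img (Q2)."""
    if legible and read_number == true_number:
        return "Attack success"
    return "Attack failure"
```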
V. FEASIBILITY RESULTS

A. Experimental Setup

All of our experiments were conducted on an Ubuntu 20.04 machine with an Intel i7-11700K CPU @ 3.60 GHz. To measure the attack success rate, we performed our attack on 100 unique software-generated Thai license plate images. The OCR system used in our attack was based on Tesseract v5.2.0 and ran with the following parameters: psm=10, oem=1. Lastly, we used HopSkipJumpAttack [4] as the underlying black-box targeted attack algorithm; for each sample, we ran this attack until it reached 300 iterations.
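Assuming the standard Tesseract CLI, the settings above correspond to an invocation like the one built below. The helper and the `tha` language pack are our assumptions; `--psm 10` treats the image as a single character and `--oem 1` selects the LSTM engine only.

```python
def tesseract_command(image_path: str, lang: str = "tha") -> list:
    """Build the Tesseract CLI invocation matching the paper's stated
    parameters (psm=10, oem=1). The Thai language pack ("tha") is an
    assumption; output goes to stdout for easy capture."""
    return ["tesseract", image_path, "stdout",
            "-l", lang, "--psm", "10", "--oem", "1"]
```

The command list can be passed directly to `subprocess.run(...)` to recognize one character image at a time.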
Ethics. Our experiments were conducted using synthetic, rather than real, license plates for ethical reasons. This work was conducted solely for academic purposes, and we do not condone using it for real-world attacks. Further, we did not gather any personally identifiable information during our interviews with participants.
B. Experimental Results

Attack Success Rate (ASR). Figure 4 shows the ASR of our attack while varying n. ASR improved drastically as we moved from the targeted attack (n = 1) to the semi-targeted attack (n > 1), with ASR = 91% for n = 10, compared to ASR = 70% for n = 1. This highlights the effectiveness of the semi-targeted scenario for attacking Thai OCR systems. We present a selection of generated adversarial examples for various n values in Table I, where Suc. refers to “Attack success”.
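ASR here is simply the percentage of the 100 test plates whose adversarial example passed the Phase-2 assessment; as a trivial helper (ours, not the paper's):

```python
def attack_success_rate(outcomes) -> float:
    """ASR in percent: the fraction of license plate images whose
    adversarial example was judged an attack success in Phase-2."""
    return 100.0 * sum(outcomes) / len(outcomes)
```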
Figure 4. Attack success rate and execution time

Attack Resource Consumption. In terms of resource consumption, generating adversarial examples requires a moderate amount of RAM (∼1.8–2 GB) on our machine, independent of the n value. On the other hand, the runtime for adversarial example generation depends linearly on n, as shown in Figure 4. For n = 10, the attack takes less than 2 hours to complete, which we consider reasonable because it only needs to be done once for any given license plate.
VI. CONCLUSION

This paper presents the first feasibility study of performing adversarial attacks on Thai OCR-based LPR systems. In addition, it proposes a new type of attack scenario, called semi-targeted, and argues that this scenario is more practical for attacking LPR systems than the traditional targeted and untargeted scenarios. Our experiments demonstrate the feasibility of our attack: it achieves a high success rate and can be carried out using only a commodity computer.
[Figure 3 (Phase-1 attack diagram) and Figure 4 (plot of Attack Success Rate (%) and Runtime (min.) versus Number of candidates (n)) are not recoverable from the text extraction.]

Table I
SAMPLES OF ADVERSARIAL EXAMPLES
(input and adversarial example images are not recoverable from the extraction; ✓ = attack success, ✗ = attack failure)

Sample | n=1 OCR Out. | Suc. | n=5 OCR Out. | Suc. | n=10 OCR Out. | Suc.
1      | มค4364       | ✗    | มค4364       | ✗    | มศ4364        | ✗
2      | ลศ1805       | ✗    | ลห1805       | ✗    | ลม1805        | ✓
3      | จส1645       | ✗    | จซ1645       | ✓    | จซ1645        | ✓
4      | ซฝ9597       | ✓    | ซฝ9597       | ✓    | ซฝ9597        | ✓

REFERENCES

[1] Airport Supplier. Passport & ID VIZ OCR and authentication software. https://www.airport-suppliers.com/product/passport-id-viz-ocr-and-authentication-software/, 2022.
[2] Basemah Alshemali and Jugal Kalita. Adversarial examples in Arabic. In CSCI, pages 371–376, Las Vegas, NV, USA, 2019.
[3] Samet Bayram and Kenneth Barner. A black-box attack on optical character recognition systems. arXiv:2208.14302, 2022.
[4] Jianbo Chen, Michael I Jordan, and Martin J Wainwright. HopSkipJumpAttack: A query-efficient decision-based attack. In 2020 IEEE Symposium on Security and Privacy (SP), pages 1277–1294. IEEE, 2020.
[5] Lu Chen and Wei Xu. Attacking optical character recognition (OCR) systems with adversarial watermarks. arXiv:2002.03095, 2020.
[6] Todsanai Chumwatana and Waramporn Rattana-umnuaychai. Using OCR framework and information extraction for Thai documents digitization. In iEECON2021, pages 440–443, Pattaya, Thailand, 2021.
[7] Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. HotFlip: White-box adversarial examples for text classification. arXiv:1712.06751, 2017.
[8] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv:1412.6572, 2014.
[9] IACP. Automated license plate recognition. https://www.theiacp.org/projects/automated-license-plate-recognition, 2022.
[10] Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In ICML, pages 2137–2146, 2018.
[11] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In CVPR, pages 2574–2582, Las Vegas, NV, USA, 2016.
[12] Rahul R Palekar, Sushant U Parab, Dhrumil P Parikh, and Vijaya N Kamble. Real time license plate detection using OpenCV and Tesseract. In ICCSP, pages 2111–2115, Chennai, India, 2017.
[13] Ray Smith. An overview of the Tesseract OCR engine. In ICDAR, pages 629–633, Curitiba, Brazil, 2007.
[14] Congzheng Song and Vitaly Shmatikov. Fooling OCR systems with adversarial text images. arXiv:1802.05385, 2018.
[15] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv:1312.6199, 2013.
[16] Mingming Zha, Guozhu Meng, Chaoyang Lin, Zhe Zhou, and Kai Chen. RoLMA: A practical adversarial attack against deep learning-based LPR systems. In Inscrypt, pages 101–117, Guangzhou, China, 2020.
99E2T4oBgHgl3EQfmAd3/content/2301.03994v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96ae9d58b518d89744bcfda6969725b6a875579084af7d4f85e96484cc8cb722
+size 1493461
99E2T4oBgHgl3EQfmAd3/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03f2cbf3796ad087b5a7a97315e20ff2b0a08d6076d418dc879764e2cb378506
+size 5046317
99E2T4oBgHgl3EQfmAd3/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fdcf2e6552c37426db4f8bce5b3f6808567fec3f213448a34944dc68301db870
+size 202722
AtE0T4oBgHgl3EQfxwIb/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:daeb8a1079ad94fc81cf65ae414b414f8a70f18dd2cf139121ab1c56dda24621
+size 5373997
B9E0T4oBgHgl3EQfyAKb/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19b166b8f1f0c233a2662beb39a1fd465b14d340ee65b114b7a3a9aa868b1fd7
+size 7667757
BNE0T4oBgHgl3EQfPwB8/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec114ea2a482808c7524b30320f51605fc7446a04352934993300000bf732c26
+size 221742
BtE3T4oBgHgl3EQfUAqQ/content/2301.04447v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed585a73d4f98ac6d4a963e4f2774d78983514f2cc550b97cd707975db890ad5
+size 3448704
BtE3T4oBgHgl3EQfUAqQ/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0705ad4785f893f70e904ed0e7771e79bd2dab6ec88a9f32af7c0907d5f7b9c
+size 2031661
BtE3T4oBgHgl3EQfUAqQ/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76b96ffbb858f1020aa70120ab691fb6e050f218af015c6828422b6d08ba53cf
+size 71380
CtE1T4oBgHgl3EQf9wbT/content/2301.03561v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97cd384a98de90bd549a4b06e6b0eb1aad5d75e7d7e4fa7f06e5b903d6d16c92
+size 15053043
CtE5T4oBgHgl3EQfTw_D/content/tmp_files/2301.05539v1.pdf.txt
ADDED
@@ -0,0 +1,3443 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
arXiv:2301.05539v1 [math.PR] 13 Jan 2023

Nonasymptotic error rates of the sample average approximation method to solve risk averse stochastic programs

Volker Krätschmer*

Abstract

We study statistical properties of the optimal value of the Sample Average Approximation. The focus is on rates, dependent on the sample sizes, for the tail function of the absolute error induced by the Sample Average Approximation. They allow us to conclude immediately convergence rates for the optimal value of the Sample Average Approximation. As a crucial point, the investigations are based on a new type of conditions from the theory of empirical processes which do not rely on pathwise analytical properties of the goal functions. In particular, continuity in the parameter is not imposed in advance, as is usual in the literature on the Sample Average Approximation method. It is also shown that the new condition is satisfied if the paths of the goal functions are Hölder continuous, so that the main results carry over in this case. Moreover, the main results are applied to goal functions whose paths are piecewise linear, as e.g. in two stage mixed-integer programs. The main results are shown for classical risk neutral stochastic programs, but we also demonstrate how to apply them to the sample average approximation of risk averse stochastic programs. In this respect we consider stochastic programs expressed in terms of mean upper semideviations and divergence risk measures.

keywords: Risk averse stochastic program, Sample Average Approximation, mean upper semideviations, divergence risk measures, Talagrand's inequality, covering numbers, VC-subgraph classes.
1 Introduction

Consider a classical risk neutral stochastic program

    inf_{θ∈Θ} E[G(θ, Z)],    (1.1)

where Θ denotes a compact subset of R^m, whereas Z stands for a d-dimensional random vector with distribution P_Z. In general the parameterized distribution of the goal function G is unknown, but some information is available through i.i.d. samples. Using this information, a general device to solve problem (1.1) approximately is provided by the so-called Sample Average Approximation (SAA) (see [23]). For explanation, let us consider a sequence (Z_j)_{j∈N} of independent d-dimensional random vectors on some fixed atomless complete probability space (Ω, F, P) which are identically distributed as the d-dimensional random vector Z. Let us set

    F̂_{n,θ}(t) := (1/n) Σ_{j=1}^{n} 1_{]-∞,t]}(G(θ, Z_j))

to define the empirical distribution function F̂_{n,θ} of G(θ, Z) based on the i.i.d. sample (Z_1, . . . , Z_n). Then the SAA method approximates the genuine optimization problem (1.1) by the following one:

    inf_{θ∈Θ} ∫_R t dF̂_{n,θ}(t) = inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j)    (n ∈ N).    (1.2)

* Faculty of Mathematics, University of Duisburg–Essen, [email protected]
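The SAA recipe in (1.2) can be sketched in a few lines. The following is an illustrative toy implementation (function and problem names are ours, not from the paper), which replaces the infimum over the compact set Θ by a minimum over a finite grid:

```python
import random

def saa_optimal_value(G, thetas, samples):
    """Optimal value of the SAA problem (1.2): minimize the empirical mean
    of G(theta, Z) over a finite grid approximating the compact set Theta."""
    n = len(samples)
    return min(sum(G(theta, z) for z in samples) / n for theta in thetas)

# Toy problem: G(theta, z) = (theta - z)^2 with Z ~ Uniform(0, 1).
# The true objective E[(theta - Z)^2] is minimized at theta = 1/2 with
# optimal value Var(Z) = 1/12, so the SAA value should be close to 1/12.
random.seed(0)
samples = [random.random() for _ in range(20000)]
thetas = [k / 100 for k in range(101)]
v_hat = saa_optimal_value(lambda t, z: (t - z) ** 2, thetas, samples)
```

Reducing the infimum over Θ to a countable set is done here by brute force; in the paper this reduction is the role of the measurability assumption (A 3) in Section 2.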
The optimal values depend on the sample size and the realization of the samples of Z. Their asymptotic behaviour with increasing sample size, also known as the first order asymptotics of (1.1), is well known. More precisely, the sequence of optimal values of the approximated optimization problem converges P-a.s. to the optimal value of the genuine stochastic program. Moreover, if G is Lipschitz continuous in θ, then the stochastic sequence

    ( √n ( inf_{θ∈Θ} ∫_R t dF̂_{n,θ}(t) − inf_{θ∈Θ} E[G(θ, Z)] ) )_{n∈N}

is asymptotically normally distributed. For these results, and more on asymptotics of the SAA method, the reader may consult the monograph [23].

In several fields like finance, insurance or microeconomics, the assumption of risk neutral decision makers is considered to be too idealistic. Instead it is preferred there to study the behaviour of actors with a more cautious attitude, known as risk aversion. In this view the optimization problem (1.1) should be replaced with a risk averse stochastic program, i.e. an optimization problem

    inf_{θ∈Θ} ρ(G(θ, Z)),    (1.3)

where ρ stands for a functional which is nondecreasing w.r.t. the increasing convex order. A general class of functionals fulfilling this requirement is built by the so-called law-invariant convex risk measures (see e.g. [11], [23]). They play an important role as building blocks in quantitative risk management (see [18], [20], [21]), and they have been suggested as a systematic approach for calculations of insurance premia (cf. [15]).

Law-invariance, sometimes also called distribution-invariance, denotes the property that a functional ρ has the same outcome for random variables with identical distribution. Hence, a law-invariant convex risk measure ρ may be associated with a functional R_ρ on sets of distribution functions. In this case (1.3) reads as follows:

    inf_{θ∈Θ} R_ρ(F_θ),

where F_θ is the distribution function of G(θ, Z). Then we may modify the SAA method by

    inf_{θ∈Θ} R_ρ(F̂_{n,θ})    (n ∈ N).    (1.4)
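As a concrete instance of (1.4), the plug-in value R_ρ(F̂_{n,θ}) is simply the risk functional evaluated on the empirical distribution of the sample. The sketch below uses ρ(X) = E[X] + a·E[(X − E[X])_+], a mean upper semideviation with weight a ∈ [0, 1] (this choice anticipates Section 3; the function name and the numeric check are ours):

```python
def mean_upper_semideviation(xs, a=0.5):
    """Plug-in value R_rho(F_hat) for the mean upper semideviation
    rho(X) = E[X] + a * E[(X - E[X])_+] with weight a in [0, 1],
    evaluated on the empirical distribution of the sample xs."""
    m = sum(xs) / len(xs)
    return m + a * sum(max(x - m, 0.0) for x in xs) / len(xs)

# Deterministic check: for the sample (0, 0, 2, 2) and a = 1 the sample mean
# is 1 and the mean upper semideviation is 0.5, so the value equals 1.5.
value = mean_upper_semideviation([0.0, 0.0, 2.0, 2.0], a=1.0)

# The risk averse SAA value (1.4) then minimizes this plug-in value over a
# theta-grid: min over theta of
#   mean_upper_semideviation([G(theta, z) for z in samples]).
```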
It is already known that under rather general conditions on the mapping G we have

    inf_{θ∈Θ} R_ρ(F̂_{n,θ}) → inf_{θ∈Θ} R_ρ(F_θ)    P-a.s.

(see [22]). The subject of this paper is to look at deviation probabilities

    P( | inf_{θ∈Θ} R_ρ(F̂_{n,θ}) − inf_{θ∈Θ} R_ρ(F_θ) | ≥ ε )    (n ∈ N, ε > 0)    (1.5)

dependent on the sample size n. Such error rates might be interesting to identify possible convergence rates for the optimal values of the SAA method. Also from a practical viewpoint they might give some hints for which sample sizes the SAA method provides sufficiently satisfying approximations. Very recently, the issue of deviation probabilities has been addressed in [1], where G(·, z) is assumed to be linear for z ∈ R^d. Our contribution is to investigate error rates for more general goal functions, with more explicit bounds than the ones from [1].

The paper is organized as follows. We shall start with a general exponential bound for the deviation probabilities

    P( | inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j) − inf_{θ∈Θ} E[G(θ, Z_1)] | ≥ ε )    (n ∈ N, ε > 0)

in the case of classical risk neutral stochastic programs. The point is that we may extend this result to deviation probabilities when the SAA method is applied to risk averse stochastic programs. In Section 3 this will be demonstrated in the case that stochastic programs are expressed in terms of mean upper semideviations, whereas in Section 4 the application to stochastic programs under divergence risk measures is considered. We always find exponential bounds for the deviation probabilities which, as an immediate by-product, give convergence rates for the SAA method in the different contexts. In particular, √n-consistency will turn out to be an easy consequence. Finally, Section 5 gathers the proofs of results from the previous sections.

The essential new ingredient of our results is to replace analytic conditions on the paths G(·, z) with requirements which intuitively make the family {G(θ, Z) | θ ∈ Θ} of random variables small in some sense. Fortunately, the respective invoked conditions are satisfied if the paths G(·, z) are Hölder continuous. We shall also show that we may utilize our results to study the SAA method for stochastic programs where the paths G(·, z) are piecewise linear but not necessarily continuous. Value functions of two stage mixed-integer programs are typical examples for goal functions of such a kind.
2 Error rates in the risk neutral case

In this section we study the SAA (1.2) associated with the risk neutral stochastic program (1.1). We shall restrict ourselves to mappings G which satisfy the following properties.

(A 1) G(θ, ·) is Borel measurable for every θ ∈ Θ.

(A 2) There is some strictly positive P_Z-integrable mapping ξ : R^d → R such that

    sup_{θ∈Θ} |G(θ, z)| ≤ ξ(z)    for z ∈ R^d.

Note that under these assumptions the optimization problems (1.1) and (1.2) are well defined with finite optimal values.

The subject of this section is to investigate

    E[ | inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j) − inf_{θ∈Θ} E[G(θ, Z_1)] | ],    (2.1)

and the probabilities

    P( | inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j) − inf_{θ∈Θ} E[G(θ, Z_1)] | ≥ ε )    (n ∈ N, ε > 0).    (2.2)

The aim is to find explicit bounds in terms of the sample sizes n. In order to avoid subtleties of measurability we additionally assume

(A 3) There exist some at most countable subset Θ̄ ⊆ Θ and (P_Z)^n-null sets N_n (n ∈ N) such that

    inf_{ϑ∈Θ̄} | E[G(ϑ, Z_1)] − E[G(θ, Z_1)] | = inf_{ϑ∈Θ̄} max_{j∈{1,...,n}} | G(θ, z_j) − G(ϑ, z_j) | = 0

for n ∈ N, θ ∈ Θ and (z_1, . . . , z_n) ∈ R^{dn} \ N_n.

By assumption (A 3) with at most countable subset Θ̄ ⊆ Θ we have

    inf_{θ∈Θ̄} (1/n) Σ_{j=1}^{n} G(θ, Z_j) = inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j)    P-a.s.,
    inf_{θ∈Θ̄} E[G(θ, Z_1)] = inf_{θ∈Θ} E[G(θ, Z_1)].

Hence the optimal value of the SAA (1.2) is a random variable on (Ω, F, P) due to the assumed completeness of this probability space. Moreover, the desired upper estimations of (2.1) and (2.2) may be derived from upper estimations of

    E[ sup_{θ∈Θ̄} | (1/n) Σ_{j=1}^{n} G(θ, Z_j) − E[G(θ, Z_1)] | ],    (2.3)

and

    P( sup_{θ∈Θ̄} | (1/n) Σ_{j=1}^{n} G(θ, Z_j) − E[G(θ, Z_1)] | ≥ ε )    (n ∈ N, ε > 0),    (2.4)

which are interesting in their own right. Note that (A 2) rules out trivial cases.
Convenient ways to find upper bounds for the expectations in (2.3) are provided by general devices from empirical process theory which are based on covering numbers for classes of Borel measurable mappings from R^d into R w.r.t. L^p-norms. To recall these concepts, adapted to our situation, let us fix any nonvoid set F of Borel measurable mappings from R^d into R and any probability measure Q on B(R^d), with metric d_{Q,p} induced by the L^p-norm ∥·∥_{Q,p} for p ∈ [1, ∞[.

• Covering numbers for F: We use N(η, F, L^p(Q)) to denote the minimal number of closed d_{Q,p}-balls of radius η > 0 with centers in F needed to cover F. We define N(η, F, L^p(Q)) := ∞ if no finite cover is available.

• An envelope of F is defined to mean some Borel measurable mapping C_F : R^d → R satisfying sup_{h∈F} |h| ≤ C_F. If an envelope C_F has strictly positive outcomes, we shall speak of a positive envelope.

• M_fin denotes the set of all probability measures on B(R^d) with finite support.

For abbreviation let us introduce, for a class F of Borel measurable functions from R^d into R with arbitrary positive envelope C_F of F, the following notation

    J(F, C_F, δ) := ∫_0^δ sup_{Q∈M_fin} sqrt( ln( 2 N( ε ∥C_F∥_{Q,2}, F, L^2(Q) ) ) ) dε.    (2.5)

If the positive envelope C_F is P_Z-square integrable, then it is known that for every at most countable subset F̄ ⊆ F the following inequality holds

    E[ sup_{h∈F̄} | (1/n) Σ_{j=1}^{n} h(Z_j) − E[h(Z_1)] | ] ≤ (8√2 ∥C_F∥_{P_Z,2} / √n) J(F, C_F, 1) ≤ (16√2 ∥C_F∥_{P_Z,2} / √n) J(F, C_F, 1/2)    (2.6)

(see [12, Remark 3.5.5]).
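The entropy integral (2.5) is a one-dimensional integral and easy to evaluate numerically once an upper bound on the covering numbers is available. A hedged sketch, assuming a polynomial bound N(ε∥C_F∥_{Q,2}, F, L²(Q)) ≤ (K/ε)^v (the constants K and v below are purely illustrative):

```python
import math

def entropy_integral(delta, K, v, steps=100000):
    """Midpoint-rule evaluation of the integral (2.5) when the covering
    numbers satisfy N(eps*||C_F||_{Q,2}, F, L2(Q)) <= (K/eps)^v, so that
    J(F, C_F, delta) <= int_0^delta sqrt(ln 2 + v*ln(K/eps)) d eps."""
    h = delta / steps
    return sum(math.sqrt(math.log(2.0) + v * math.log(K / ((i + 0.5) * h))) * h
               for i in range(steps))

# With K = e and v = 1 the quantity J(F, C_F, 1/2) entering (2.6) is finite:
J_half = entropy_integral(0.5, K=math.e, v=1.0)
```

Plugged into (2.6), such a finite value yields an explicit O(1/√n) bound on the expected supremum.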
For our purposes the class F_Θ := {G(θ, ·) | θ ∈ Θ} is the relevant one. Then property (A 2) means nothing else but requiring a P_Z-integrable positive envelope of F_Θ. By (2.6) we may conclude immediately the following upper bounds for the expectations in (2.1) and (2.3).

Theorem 2.1 Let (A 1) - (A 3) be fulfilled, and let the envelope ξ from (A 2) be square P_Z-integrable. Then with Θ̄ ⊆ Θ from (A 3)

    E[ | inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j) − inf_{θ∈Θ} E[G(θ, Z_1)] | ]
    ≤ E[ sup_{θ∈Θ̄} | (1/n) Σ_{j=1}^{n} G(θ, Z_j) − E[G(θ, Z_1)] | ]
    ≤ (16√2 ∥ξ∥_{P_Z,2} / √n) J(F_Θ, ξ, 1/2)    for n ∈ N.
Let us turn to bounds for (2.2) and (2.4). Since Talagrand introduced his now famous concentration inequality for empirical processes for the first time in [24], it is well understood how to derive exponential estimates for the probabilities (2.4). They are essentially based on the expectations in (2.3). We obtain the following result, using the notation

    B^ξ_n := { (1/n) Σ_{j=1}^{n} ξ(Z_j)² ≤ 2 E[ξ(Z_1)²] }    (n ∈ N)    (2.7)

for any square P_Z-integrable strictly positive mapping ξ : R^d → R.

Theorem 2.2 Let (A 1) - (A 3) be satisfied, where the envelope ξ from (A 2) is assumed to be square P_Z-integrable. Furthermore, let ε > 0 be fixed. Using notation (2.5), if J(F_Θ, ξ, 1/2) is finite, then with Θ̄ ⊆ Θ from (A 3)

    P( | inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j) − inf_{θ∈Θ} E[G(θ, Z_1)] | ≥ ε )
    ≤ P( sup_{θ∈Θ̄} | (1/n) Σ_{j=1}^{n} G(θ, Z_j) − E[G(θ, Z_1)] | ≥ ε )
    ≤ exp( −t² √n ε / ( 8 (t + 1)(t + 28) ∥ξ∥_{P_Z,2} ) ) + P(Ω \ B^ξ_n)

holds for t > 0 and arbitrary n ∈ N with ε > η_{t,n} as well as n ≥ ∥ξ∥²_{P_Z,2}/2, where

    η_{t,n} := ∥ξ∥_{P_Z,2}/√n + 32√2 (1 + t) ∥ξ∥_{P_Z,2} J(F_Θ, ξ, 1/4)/√n.

The proof of Theorem 2.2 is an application of Talagrand's concentration inequality along with the estimation (2.6). The details are worked out in Subsection 5.1.

Remark 2.3 Let us point out some simplifications of Theorem 2.2.
1) If the function G is uniformly bounded by some positive constant L, then we may choose ξ ≡ L. Then η_{t,n} = L[1 + 32√2 (1 + t) J(F_Θ, ξ, 1/4)]/√n and Ω \ B^ξ_n = ∅ for t > 0 and every n ∈ N.

2) If ξ(Z_1) is integrable of order 4, we may apply Chebyshev's inequality to conclude

    P(Ω \ B^ξ_n) ≤ Var[ξ(Z_1)²] / ( n E[ξ(Z_1)²]² )    for n ∈ N.

3) The upper estimate of the probability P(Ω \ B^ξ_n) in Theorem 2.2 may be further improved if the random variable exp(λ · ξ²) is P_Z-integrable for some λ > 0. In this case there exists some ε > 0 such that the exponential bound

    P(Ω \ B^ξ_n) ≤ exp( −n ε E[ξ(Z_1)²]/2 )

holds for every n ∈ N ([19, Theorem 2.6 along with Lemma 2.2]).
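Item 2) of Remark 2.3 is easy to check by simulation. A small Monte Carlo sketch, with our toy choice ξ(z) = |z| and Z standard normal, so that E[ξ(Z)²] = 1 and Var[ξ(Z)²] = E[Z⁴] − 1 = 2:

```python
import random

def empirical_exceedance(n=20, trials=4000, seed=42):
    """Monte Carlo frequency of the event complementary to B^xi_n in (2.7),
    i.e. (1/n) * sum_j xi(Z_j)^2 > 2 * E[xi(Z_1)^2], for xi(z) = |z| and
    Z ~ N(0, 1); Remark 2.3(2) bounds this probability by 2/n."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        mean_sq = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)) / n
        if mean_sq > 2.0:
            hits += 1
    return hits / trials

freq = empirical_exceedance()
chebyshev_bound = 2.0 / 20  # Var[xi(Z_1)^2] / (n * E[xi(Z_1)^2]^2) for n = 20
```

The observed frequency is typically far below the Chebyshev bound, consistent with the sharper exponential estimate in item 3).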
As an easy consequence of Theorem 2.2 we may provide the following simple criterion to ensure uniform tightness of the sequence

    ( √n ( inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j) − inf_{θ∈Θ} E[G(θ, Z_1)] ) )_{n∈N}.

The new point is that we do not require the paths G(·, z) to satisfy Lipschitz continuity properties in advance, as is usual in the literature on the SAA method (e.g. in [23]).

Theorem 2.4 Let (A 1) - (A 3) be fulfilled with ξ from (A 2) being square P_Z-integrable. Using notation (2.5), if J(F_Θ, ξ, 1/2) is finite, then the sequence

    ( √n ( inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j) − inf_{θ∈Θ} E[G(θ, Z_1)] ) )_{n∈N}

is uniformly tight.

Proof Fix any n ∈ N with n ≥ ∥ξ∥²_{P_Z,2}/2. Then with B^ξ_n as defined in (2.7) the application of Theorem 2.2 yields

    P( √n | inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j) − inf_{θ∈Θ} E[G(θ, Z_1)] | ≥ ε )
    ≤ exp( −t² ε / ( 8 (t + 1)(t + 28) ∥ξ∥_{P_Z,2} ) ) + P(Ω \ B^ξ_n)

for t > 0 and every ε > ∥ξ∥_{P_Z,2} + 32√2 (1 + t) ∥ξ∥_{P_Z,2} J(F_Θ, ξ, 1/4). Furthermore we have convergence P(Ω \ B^ξ_n) → 0 by the law of large numbers. Thus

    lim_{ε→∞} limsup_{n→∞} P( √n | inf_{θ∈Θ} (1/n) Σ_{j=1}^{n} G(θ, Z_j) − inf_{θ∈Θ} E[G(θ, Z_1)] | ≥ ε ) = 0,

which completes the proof. ✷
All the results within this section crucially require J(F_Θ, ξ, 1/2) to be finite. This property is always satisfied if the involved covering numbers have polynomial rates. Indeed, this relies on the observation that, by using the change of variable formula several times along with integration by parts, we obtain

    ∫_0^1 sqrt( v ln(K/ε) ) dε ≤ 2 sqrt( v ln(K) )    for v ≥ 1, K ≥ e.    (2.8)

Inequality (2.8) may be applied if there exist K ≥ e, v ≥ 1 such that the following condition is satisfied:

    N( ε ∥C_{F_Θ}∥_{Q,2}, F_Θ, L^2(Q) ) ≤ (K/ε)^v    for Q ∈ M_fin and ε ∈ ]0, 1[.
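Inequality (2.8) can be spot-checked numerically. The sketch below approximates the left-hand side with a midpoint rule and compares it with the right-hand side over a small grid of (v, K):

```python
import math

def lhs_28(v, K, steps=200000):
    """Midpoint-rule approximation of the left-hand side of (2.8):
    int_0^1 sqrt(v * ln(K/eps)) d eps."""
    h = 1.0 / steps
    return sum(math.sqrt(v * math.log(K / ((i + 0.5) * h))) * h
               for i in range(steps))

# Spot-check (2.8) for a few admissible parameter values v >= 1, K >= e.
for v in (1.0, 2.0, 5.0):
    for K in (math.e, 10.0, 100.0):
        assert lhs_28(v, K) <= 2.0 * math.sqrt(v * math.log(K))
```

For v = 1 and K = e the left-hand side is roughly 1.38, comfortably below the bound 2, so (2.8) is not tight at the boundary of the parameter range.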
In the rest of this section we shall utilize (2.8) to give explicit upper estimates of the terms J(F_Θ, C_{F_Θ}, δ) if the objective G satisfies specific analytical properties.

Denoting the Euclidean metric on R^m by d_{m,2}, we start with the following condition.

(H) There exist some β ∈ ]0, 1] and a square P_Z-integrable strictly positive mapping C : R^d → ]0, ∞[ such that

    | G(θ, z) − G(ϑ, z) | ≤ C(z) d_{m,2}(θ, ϑ)^β    for z ∈ R^d, θ, ϑ ∈ Θ.

Under (H), explicit upper estimates for the terms J(F_Θ, ξ, δ) are provided by the following result.

Proposition 2.5 Let condition (H) be fulfilled with β ∈ ]0, 1] and square P_Z-integrable strictly positive mapping C. Furthermore, let G(θ, ·) be Borel measurable for every θ ∈ Θ. In addition let ∆(Θ) stand for the diameter of Θ w.r.t. the Euclidean metric d_{m,2}. Then requirement (A 3) is met. Moreover, if G(θ, ·) is square P_Z-integrable for some θ ∈ Θ, the mapping ξ := C ∆(Θ)^β + |G(θ, ·)| is square P_Z-integrable, satisfying property (A 2), and

    J(F_Θ, ξ, δ) ≤ 2δ sqrt( (3m + 1) ln(2) + (m/β) ln(2/δ) )    for δ ∈ ]0, 1/2].

For the proof see Subsection 5.2.
672 |
+
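Since the right-hand side in Proposition 2.5 is fully explicit, it can be evaluated directly. A minimal sketch (the parameter values are purely illustrative):

```python
import math

def j_bound(m, beta, delta):
    # Explicit upper bound on J(F_Theta, xi, delta) from Proposition 2.5,
    # valid for delta in ]0, 1/2] under the Hoelder-type condition (H).
    assert m >= 1 and 0.0 < beta <= 1.0 and 0.0 < delta <= 0.5
    return 2.0 * delta * math.sqrt((3 * m + 1) * math.log(2.0)
                                   + (m / beta) * math.log(2.0 / delta))

# The bound degrades as the Hoelder exponent beta shrinks, grows with the
# parameter dimension m, and vanishes as delta tends to 0.
assert j_bound(2, 0.5, 0.25) > j_bound(2, 1.0, 0.25)
assert j_bound(4, 1.0, 0.25) > j_bound(2, 1.0, 0.25)
assert j_bound(2, 1.0, 1e-6) < 1e-3
```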
Remark 2.6 Proposition 2.5 tells us that under (H) Theorems 2.2 and 2.4 carry over immediately, using the estimates from Proposition 2.5.
Next, let us consider an objective G having the following kind of piecewise linear structure:

(PL) \( G(\theta,z)=\sum_{i=1}^{r}\Bigl[\min_{l=1,\dots,s_i}\mathbf{1}_{I_{il}}\bigl(L^{i}_{l}(T(\theta)+z)+a^{i}_{l}\bigr)\Bigr]\cdot\bigl(\Lambda_{i}(T(\theta)+z)+b_{i}\bigr) \), where

• r, s_1, …, s_r ∈ N,
• b_i, a^i_l ∈ R for i ∈ {1,…,r}, l ∈ {1,…,s_i},
• Λ_i, L^i_l : R^d → R linear for i ∈ {1,…,r}, l ∈ {1,…,s_i},
• T : R^m → R^d linear,
• ]0,∞[ ⊆ I_{il} ⊆ [0,∞[ for i ∈ {1,…,r} and l ∈ {1,…,s_i},
• \(\min_{l=1,\dots,s_i}\mathbf{1}_{I_{il}}\bigl(L^{i}_{l}(T(\theta)+z)+a^{i}_{l}\bigr)\cdot\min_{l=1,\dots,s_j}\mathbf{1}_{I_{jl}}\bigl(L^{j}_{l}(T(\theta)+z)+a^{j}_{l}\bigr)=0\) for i ≠ j,
• \(\sum_{i=1}^{r}\min_{l=1,\dots,s_i}\mathbf{1}_{I_{il}}\bigl(L^{i}_{l}(T(\theta)+z)+a^{i}_{l}\bigr)=1\).

In two stage mixed-integer programs the goal functions typically may be represented in this way if the random variable Z has compact support (see [6]). Note that a G satisfying condition (PL) need not be continuous in θ.
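A concrete toy instance may clarify the structure (PL). In the sketch below all concrete choices are ours, not from the paper: r = 2, s_1 = s_2 = 1, m = d = 1, T = id. The two indicator pieces tile the real line, so the last two bullet conditions hold, while G is still discontinuous in θ:

```python
# Piece 1 is active on {T(theta) + z >= 0}: I_11 = [0, inf[, L^1_1 = id, a^1_1 = 0.
def f1(theta, z):
    return 1.0 if theta + z >= 0.0 else 0.0

# Piece 2 is active on {T(theta) + z < 0}: I_21 = ]0, inf[, L^2_1(y) = -y, a^2_1 = 0.
def f2(theta, z):
    return 1.0 if -(theta + z) > 0.0 else 0.0

# Affine parts: Lambda_1(y) = 2y with b_1 = 1, and Lambda_2(y) = -y with b_2 = 0.
def G(theta, z):
    y = theta + z
    return f1(theta, z) * (2.0 * y + 1.0) + f2(theta, z) * (-y)

# Orthogonality and partition-of-unity conditions from (PL):
for theta in (-1.0, -0.3, 0.0, 0.7):
    for z in (-0.5, 0.0, 0.4):
        assert f1(theta, z) * f2(theta, z) == 0.0
        assert f1(theta, z) + f2(theta, z) == 1.0

# G is piecewise linear but jumps in theta: at z = 0 the left limit at
# theta = 0 is 0, while G(0, 0) = 1.
assert G(0.0, 0.0) == 1.0
assert abs(G(-1e-9, 0.0)) < 1e-8
```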
In preparation for finding explicit upper estimates of the terms J(F_Θ, ξ, δ), we may observe, by compactness of Θ along with the continuity of the mappings Λ_i, that
\[
\eta^{G}_{i}\;:=\;\sup_{\theta\in\Theta}\,\lvert\Lambda_{i}\circ T(\theta)+b_{i}\rvert\;+\;\mathbf{1}_{\{0\}}\Bigl(\sup_{\theta\in\Theta}\,\lvert\Lambda_{i}\circ T(\theta)+b_{i}\rvert\Bigr)\;<\;\infty
\qquad\text{for } i\in\{1,\dots,r\}.
\]
Furthermore, for abbreviation we set
\[
f^{i}(\theta,z)\;:=\;\min_{l=1,\dots,s_i}\mathbf{1}_{I_{il}}\bigl(L^{i}_{l}(T(\theta)+z)+a^{i}_{l}\bigr)
\quad\text{and}\quad
G^{i}(\theta,z)\;:=\;\Lambda_{i}\bigl(T(\theta)+z\bigr)+b_{i}
\]
for i ∈ {1,…,r}, and we introduce the associated function classes
\[
\mathcal{F}^{i}_{\mathrm{PL}}\;:=\;\bigl\{f^{i}(\theta,\cdot)\;\vert\;\theta\in\Theta\bigr\}
\quad\text{and}\quad
\overline{\mathcal{F}}^{i}_{\mathrm{PL}}\;:=\;\bigl\{G^{i}(\theta,\cdot)\;\vert\;\theta\in\Theta\bigr\},
\qquad i\in\{1,\dots,r\}.
\]
Note that the classes \(\mathcal{F}^{i}_{\mathrm{PL}}\) are uniformly bounded by 1.
Proposition 2.7 f^i(θ, ·) and G^i(θ, ·) are Borel measurable for θ ∈ Θ and i ∈ {1,…,r}. In particular assumption (A 1) holds. Moreover, if Λ_1,…,Λ_r are square P_Z-integrable, and if ξ_1,…,ξ_r denote bounded positive envelopes of \(\mathcal{F}^{1}_{\mathrm{PL}},\dots,\mathcal{F}^{r}_{\mathrm{PL}}\) respectively, then the mapping \(\xi:=\sum_{i=1}^{r}\xi_{i}\cdot\bigl(\lvert\Lambda_{i}\rvert+\eta^{G}_{i}\bigr)\) is square P_Z-integrable satisfying (A 2) and
\[
J(F_\Theta,\xi,\delta)\;\leq\;2\delta\,\sqrt{\sum_{i=1}^{r}\ln(s_{i}+1)\;+\;\Bigl(8\sum_{i=1}^{r}s_{i}+30r+1\Bigr)\ln(2)\;+\;2\Bigl(\sum_{i=1}^{r}s_{i}+3r\Bigr)\bigl[1/2+\ln(r/\delta)\bigr]}
\qquad\text{for }\delta\in\,]0,1].
\]
The involved proof is delegated to Subsection 5.3.
Remark 2.8 In view of Proposition 2.7 the only critical condition left is (A 3) in order to apply our main results. If Λ_1,…,Λ_r are P_Z-integrable, it is a routine exercise to show that (A 3) may be guaranteed for any at most countable dense subset Θ̃ ⊆ Θ by the following property:

(*) \(\bigl\{z\in\mathbb{R}^{d}\;\vert\;L^{i}_{l}(z)\in\{-L^{i}_{l}\bigl(T(\theta)\bigr)-a^{i}_{l}\;\vert\;\theta\in\widetilde{\Theta}\}\bigr\}\) is a P_Z-null set for i = 1,…,r and l ∈ {1,…,s_i} with I_{il} = [0,∞[.

For the application of the main results we may invoke the estimates from Proposition 2.7.
3 Error rates under mean upper semideviations

Let L^p(Ω, F, P) denote the usual L^p-space on (Ω, F, P) (p ∈ [0,∞[), where we tacitly identify random variables which differ on P-null sets only. The space L^p(Ω, F, P) is endowed with the usual L^p-norm ∥·∥_p.

We want to study the risk averse stochastic program (1.3), where in the objective the functional ρ is a mean upper semideviation. This means that for p ∈ [1,∞[ and a ∈ ]0,1] the functional ρ = ρ_{p,a} is defined as follows:
\[
\rho_{p,a}:L^{p}(\Omega,\mathcal{F},P)\to\mathbb{R},\quad X\mapsto \mathbb{E}[X]+a\,\bigl\lVert\bigl(X-\mathbb{E}[X]\bigr)^{+}\bigr\rVert_{p}.
\]
It is well known that mean upper semideviations are increasing w.r.t. the increasing convex order (cf. e.g. [23, Theorem 6.51 along with Example 6.23 and Proposition 6.8]). They are also law-invariant, so that we may define the associated functional \(R_{\rho_{p,a}}\) on the set of distribution functions of random variables with absolute moments of order p. So the subject of this section is the optimization problem
\[
\inf_{\theta\in\Theta}\,R_{\rho_{p,a}}\bigl(F_{\theta}\bigr),
\]
where F_θ stands for the distribution function of G(θ, Z) for θ ∈ Θ. Introducing the notation
\[
G_{p}:\Theta\times\mathbb{R}^{d}\to\mathbb{R},\quad (\theta,z)\mapsto\Bigl(\bigl(G(\theta,z)-\mathbb{E}[G(\theta,Z_1)]\bigr)^{+}\Bigr)^{p}
\qquad(p\in[1,\infty[),
\tag{3.1}
\]
we may describe this optimization also in the following way:
\[
\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(F_{\theta}\bigr)\;=\;\inf_{\theta\in\Theta}\Bigl(\mathbb{E}[G(\theta,Z_1)]+a\,\bigl(\mathbb{E}[G_{p}(\theta,Z_1)]\bigr)^{1/p}\Bigr).
\tag{3.2}
\]
Then the stochastic objective of the approximative problem according to the SAA method has the following representation:
\[
R_{\rho_{p,a}}\bigl(\hat{F}_{n,\theta}\bigr)\;=\;\frac{1}{n}\sum_{j=1}^{n}G(\theta,Z_j)\;+\;a\,\Biggl(\frac{1}{n}\sum_{j=1}^{n}\Bigl(\Bigl(G(\theta,Z_j)-\frac{1}{n}\sum_{i=1}^{n}G(\theta,Z_i)\Bigr)^{+}\Bigr)^{p}\Biggr)^{1/p}.
\tag{3.3}
\]
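The plug-in estimator (3.3) is straightforward to implement. A minimal sketch (function names and the test objective are illustrative only):

```python
import random

def saa_mean_upper_semideviation(G, theta, sample, p, a):
    # Empirical counterpart (3.3) of rho_{p,a}: sample mean plus a times the
    # empirical L^p-norm of the deviations above the sample mean.
    vals = [G(theta, z) for z in sample]
    n = len(vals)
    mean = sum(vals) / n
    upper_p = sum(max(v - mean, 0.0) ** p for v in vals) / n
    return mean + a * upper_p ** (1.0 / p)

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(5000)]
G = lambda theta, z: (theta - z) ** 2        # toy objective, not from the paper

risk = saa_mean_upper_semideviation(G, 0.0, sample, p=2, a=0.5)
mean = sum(G(0.0, z) for z in sample) / len(sample)
# The semideviation term penalizes upside dispersion, so the risk-averse
# criterion dominates the plain SAA mean.
assert risk > mean
```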
We shall look at bounds for the deviation probabilities (1.5) w.r.t. \(R_{\rho_{p,a}}\). It is intended to utilize the results for the risk neutral case presented in Section 2. The key is the following observation based on the notation (3.1).
Lemma 3.1 Let (A 1) be fulfilled, and let ξ be an envelope of F_Θ which is P_Z-integrable of order p ∈ [1,∞[. Then the optimal values of the problems (3.2) and (3.3) are finite. Moreover, for any nonvoid subset Θ̃ ⊆ Θ and arbitrary n ∈ N, ε > 0 as well as a ∈ ]0,1] the inclusion
\[
\Bigl\{\Bigl\lvert\,\inf_{\theta\in\widetilde{\Theta}}R_{\rho_{p,a}}\bigl(\hat{F}_{n,\theta}\bigr)-\inf_{\theta\in\widetilde{\Theta}}R_{\rho_{p,a}}\bigl(F_{\theta}\bigr)\Bigr\rvert\geq\varepsilon\Bigr\}\;\subseteq\;D^{\widetilde{\Theta}}_{n,\varepsilon,a}\cup\overline{D}^{\widetilde{\Theta}}_{n,\varepsilon,p,a}
\]
holds, where
\[
D^{\widetilde{\Theta}}_{n,\varepsilon,a}\;:=\;\Bigl\{\sup_{\theta\in\widetilde{\Theta}}\Bigl\lvert\frac{1}{n}\sum_{j=1}^{n}G(\theta,Z_j)-\mathbb{E}[G(\theta,Z_1)]\Bigr\rvert\geq\varepsilon/(2+2a)\Bigr\},
\]
\[
\overline{D}^{\widetilde{\Theta}}_{n,\varepsilon,p,a}\;:=\;\Bigl\{\sup_{\theta\in\widetilde{\Theta}}\Bigl\lvert\frac{1}{n}\sum_{j=1}^{n}G_{p}(\theta,Z_j)-\mathbb{E}[G_{p}(\theta,Z_1)]\Bigr\rvert\geq\bigl(\varepsilon/[2a]\bigr)^{p}\Bigr\}.
\]
The proof may be found in Subsection 5.4.
In the next step we want to reduce the optimization problems (3.2) and (3.3) simultaneously to at most countable parameter subsets of Θ. This will be achieved by the following assumption, which strengthens (A 3).

(A 3') There exist some at most countable subset Θ̃ ⊆ Θ and (P_Z)^n-null sets N_n (n ∈ N) such that for (z_1,…,z_n) ∈ R^{dn} \ N_n and θ ∈ Θ
\[
\inf_{\vartheta\in\widetilde{\Theta}}\Bigl(\mathbb{E}\bigl[\lvert G(\vartheta,Z_1)-G(\theta,Z_1)\rvert\bigr]+\max_{j\in\{1,\dots,n\}}\bigl\lvert G(\theta,z_j)-G(\vartheta,z_j)\bigr\rvert\Bigr)\;=\;0.
\]
Remark 3.2 Under (A 2), property (A 3') may be checked easily if condition (H) is satisfied. If G has representation (PL), and if the involved linear mappings Λ_1,…,Λ_r are P_Z-integrable, then (A 3') holds under (*) from Remark 2.8.
Lemma 3.3 Let (A 1) and (A 3') be satisfied, and let ξ be some positive envelope of F_Θ which is P_Z-integrable of order p ∈ [1,∞[. Then with the at most countable subset Θ̃ ⊆ Θ and the (P_Z)^n-null sets N_n from (A 3') the following statements hold.

1) \(\inf_{\theta\in\widetilde{\Theta}}R_{\rho_{p,a}}\bigl(F_{\theta}\bigr)=\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(F_{\theta}\bigr)\) for a ∈ ]0,1].

2) For n ∈ N, θ ∈ Θ and (z_1,…,z_n) ∈ R^{dn} \ N_n
\[
\inf_{\vartheta\in\widetilde{\Theta}}\bigl\lvert\mathbb{E}[G(\vartheta,Z_1)]-\mathbb{E}[G(\theta,Z_1)]\bigr\rvert\;=\;\inf_{\vartheta\in\widetilde{\Theta}}\max_{j=1,\dots,n}\bigl\lvert G(\vartheta,z_j)-G(\theta,z_j)\bigr\rvert\;=\;0,
\]
\[
\inf_{\vartheta\in\widetilde{\Theta}}\bigl\lvert\mathbb{E}[G_{p}(\vartheta,Z_1)]-\mathbb{E}[G_{p}(\theta,Z_1)]\bigr\rvert\;=\;\inf_{\vartheta\in\widetilde{\Theta}}\max_{j=1,\dots,n}\bigl\lvert G_{p}(\vartheta,z_j)-G_{p}(\theta,z_j)\bigr\rvert\;=\;0.
\]

3) If n ∈ N, and if a ∈ ]0,1], then \(\inf_{\theta\in\widetilde{\Theta}}R_{\rho_{p,a}}\bigl(\hat{F}_{n,\theta}\bigr)=\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(\hat{F}_{n,\theta}\bigr)\) P-a.s.

The proof is provided in Subsection 5.4.
Lemma 3.1 suggests applying the results from Section 2 simultaneously to the function classes F_Θ and F_{Θ,p} := {G_p(θ, ·) | θ ∈ Θ} (p ∈ [1,∞[). However, we want to describe the involved terms J(F_{Θ,p}, C_{F_{Θ,p}}, δ) by means of the terms J(F_Θ, C_{F_Θ}, δ) associated with the genuine objective G. This will be done in the following auxiliary result.
Lemma 3.4 Let (A 1) be fulfilled, and let ξ be a positive envelope of F_Θ which is P_Z-integrable of order 2(p+1) for some p ∈ [1,∞[. Then \(\xi_{p}:=\bigl(\xi+\bigl(\mathbb{E}[\xi(Z_1)]\vee 1\bigr)\bigr)^{p+1}\) is a square P_Z-integrable positive envelope of F_{Θ,p} satisfying
\[
J(F_{\Theta,p},\xi_{p},\delta)\;\leq\;\sqrt{2}\,2^{p+2}\,J\bigl(F_\Theta,\xi,\delta/2^{p+2}\bigr)\;+\;\sqrt{2}\,\delta\,\Bigl[\sqrt{\ln(2)}+2\sqrt{\ln\bigl(2^{p+4}/\delta\bigr)}\Bigr]
\qquad\text{for }\delta\in\,]0,1[.
\]
The proof is delegated to Subsection 5.4.

Now we are prepared to formulate and prove the main result on error rates under upper semideviations.

Theorem 3.5 Let (A 1), (A 2), (A 3') be fulfilled, where the Borel measurable mapping ξ from (A 2) is integrable of order 2(p+1) for some p ∈ [1,∞[. Setting \(\xi_{p}:=\bigl(\xi+\bigl(\mathbb{E}[\xi(Z_1)]\vee 1\bigr)\bigr)^{p+1}\), and assuming J(F_Θ, ξ, 1/4) < ∞, the following statements are valid.

1) For ε, t > 0, n ∈ N with \(n\geq\max\bigl\{\lVert\xi_{p}\rVert^{2}_{P_Z,2}/2,\;[1+32\sqrt{2}\,J(F_\Theta,\xi,1/4)]^{2}\bigr\}\), and a ∈ ]0,1] the inequality
\[
P\Bigl(\Bigl\lvert\,\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(\hat{F}_{n,\theta}\bigr)-\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(F_{\theta}\bigr)\Bigr\rvert\geq\varepsilon\Bigr)
\]
\[
\leq\;\exp\Bigl(\frac{-t^{2}\sqrt{n}\,\varepsilon}{16(t+1)(t+28)\lVert\xi\rVert_{P_Z,2}}\Bigr)
+\exp\Bigl(\frac{-t^{2}\sqrt{n}\,\varepsilon^{p}}{2^{p+3}a^{p}(t+1)(t+28)\lVert\xi_{p}\rVert_{P_Z,2}}\Bigr)
+P\bigl(\Omega\setminus B^{\xi}_{n}\bigr)+P\bigl(\Omega\setminus B^{\xi_{p}}_{n}\bigr)
\]
holds if
\[
\varepsilon\;>\;\frac{2(1+a)\,32^{1/p}(t+1)^{1/p}\lVert\xi_{p}\rVert^{1/p}_{P_Z,2}}{n^{1/(2p)}}\Bigl(1+\sqrt{p+6+2^{p+3}J\bigl(F_\Theta,\xi,1/2^{p+4}\bigr)}\Bigr)^{1/p}.
\]
Here B^ξ_n and B^{ξ_p}_n are defined according to (2.7).

2) The sequence \(\bigl(\sqrt{n}\,\bigl[\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(\hat{F}_{n,\theta}\bigr)-\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(F_{\theta}\bigr)\bigr]\bigr)_{n\in\mathbb{N}}\) is uniformly tight for a ∈ ]0,1].
Proof The mapping \(\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(\hat{F}_{n,\theta}\bigr)-\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(F_{\theta}\bigr)\) is a well-defined random variable for a ∈ ]0,1] due to Lemma 3.1 along with Lemma 3.3 and completeness of (Ω, F, P). Statement 2) may be concluded from statement 1) in the same way as Theorem 2.4 was derived from Theorem 2.2. Hence statement 1) is left to show.

Let Θ̃ ⊆ Θ be from (A 3'). By Lemma 3.3 together with Lemma 3.1 we have
\[
P\Bigl(\Bigl\lvert\,\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(\hat{F}_{n,\theta}\bigr)-\inf_{\theta\in\Theta}R_{\rho_{p,a}}\bigl(F_{\theta}\bigr)\Bigr\rvert\geq\varepsilon\Bigr)\;\leq\;P\bigl(D^{\widetilde{\Theta}}_{n,\varepsilon,a}\bigr)+P\bigl(\overline{D}^{\widetilde{\Theta}}_{n,\varepsilon,p,a}\bigr)
\tag{3.4}
\]
for n ∈ N, ε > 0, a ∈ ]0,1], where the sets \(D^{\widetilde{\Theta}}_{n,\varepsilon,a}\) and \(\overline{D}^{\widetilde{\Theta}}_{n,\varepsilon,p,a}\) are defined as in Lemma 3.1. The inequality \(2^{p+2}J(F_\Theta,\xi,1/2^{p+4})\geq J(F_\Theta,\xi,1/4)\) holds (see [12, Lemma 3.5.3]). Moreover, \(\lVert\xi_{p}\rVert_{P_Z,2}\geq\lVert\xi_{p}\rVert^{1/p}_{P_Z,2}\geq\lVert\xi\rVert_{P_Z,2}\) due to Jensen's inequality. Then in view of Lemma 3.4 it is easy to check that the requirements of Theorem 2.2 are met for both classes F_Θ and F_{Θ,p}. Then statement 1) follows immediately from (3.4) after applying Theorem 2.2 separately to F_Θ and F_{Θ,p}. ✷
Remark 3.6 Let us discuss upper estimates of the probabilities of the sets Ω \ B^ξ_n and Ω \ B^{ξ_p}_n.

1) If the function G is uniformly bounded by some positive constant L, then we may choose ξ ≡ L. Then Ω \ B^ξ_n = Ω \ B^{ξ_p}_n = ∅ for every n ∈ N.

2) If ξ is P_Z-integrable of order 4(p+1), then we have P(Ω \ B^ξ_n) = O(1/n) and P(Ω \ B^{ξ_p}_n) = O(1/n) due to Chebychev's inequality. In the same way as in Remark 2.3, 3), we may even obtain exponential bounds for these probabilities if \(\mathbb{E}\bigl[\exp\bigl(\lambda\,\xi_{p}(Z_1)^{2}\bigr)\bigr]<\infty\) for some λ > 0. Note that this property is satisfied iff \(\mathbb{E}\bigl[\exp\bigl(\lambda\,\xi(Z_1)^{2(p+1)}\bigr)\bigr]<\infty\) for some λ > 0.
Remark 3.7 Theorem 3.5 may be simplified if the objective G satisfies condition (H), or if it has representation (PL). This may be seen immediately in view of Proposition 2.5, or Proposition 2.7 along with Remark 3.2. In addition we may invoke the more explicit upper estimates for the term J(F_Θ, ξ, 1/4) provided by Proposition 2.5 and Proposition 2.7.

According to Remark 3.6 the error rates in Theorem 3.5 may be further improved if the mapping G is bounded. In this situation a version has been shown in [1] for bounded G having the form G(θ, z) := W_0(z) + ⟨θ, W(z)⟩, where W_0 and W are fixed Borel measurable mappings, and ⟨·,·⟩ stands for the standard scalar product on R^m. However, the bounds for deviation probabilities derived in [1] are described in terms of unknown universal constants. In contrast, combining Theorem 3.5 with Proposition 2.5, we may provide more explicit bounds.

The statement on uniform tightness in Theorem 3.5 has already been shown in [8] under (H) with β = 1.
4 Error rates under divergence risk measures

We want to study the risk averse stochastic program (1.3), where we shall focus on ρ being a divergence risk measure. For the introduction, let us consider a lower semicontinuous convex mapping Φ : [0,∞[ → [0,∞] satisfying Φ(0) < ∞, Φ(x_0) < ∞ for some x_0 > 1, inf_{x≥0} Φ(x) = 0, and the growth condition lim_{x→∞} Φ(x)/x = ∞. Its Fenchel–Legendre transform
\[
\Phi^{*}:\mathbb{R}\to\mathbb{R}\cup\{\infty\},\quad y\mapsto\sup_{x\geq 0}\bigl(xy-\Phi(x)\bigr)
\]
is a finite nondecreasing convex function whose restriction \(\Phi^{*}\vert_{[0,\infty[}\) to [0,∞[ is a finite Young function, i.e. a continuous nondecreasing and unbounded real-valued mapping with Φ*(0) = 0 (cf. [3, Lemma A.1]). Note also that the right-sided derivative \(\Phi^{*\prime}_{+}\) of Φ* is nonnegative and nondecreasing. We shall use \(H^{\Phi^{*}}\) to denote the Orlicz heart w.r.t. \(\Phi^{*}\vert_{[0,\infty[}\), defined to mean the set of all random variables X on (Ω, F, P) satisfying \(\mathbb{E}[\Phi^{*}(c\lvert X\rvert)]<\infty\) for all c > 0. As in the previous Section 3 we identify random variables which differ on P-null sets only.

The Orlicz heart is known to be a vector space enclosing all P-essentially bounded random variables. Moreover, by Jensen's inequality all members of \(H^{\Phi^{*}}\) are P-integrable. For more on Orlicz hearts w.r.t. Young functions the reader may consult [10].

We can define the following mapping
\[
\rho_{\Phi}(X)\;=\;\sup_{\bar{P}\in\mathcal{P}_{\Phi}}\Bigl(\mathbb{E}_{\bar{P}}[X]-\mathbb{E}\Bigl[\Phi\Bigl(\frac{d\bar{P}}{dP}\Bigr)\Bigr]\Bigr)
\]
for all \(X\in H^{\Phi^{*}}\), where \(\mathcal{P}_{\Phi}\) denotes the set of all probability measures P̄ which are absolutely continuous w.r.t. P such that \(\Phi\bigl(\tfrac{d\bar{P}}{dP}\bigr)\) is P-integrable. Note that \(\tfrac{d\bar{P}}{dP}X\) is P-integrable for every \(\bar{P}\in\mathcal{P}_{\Phi}\) and any \(X\in H^{\Phi^{*}}\) due to Young's inequality. We shall call ρ_Φ the divergence risk measure w.r.t. Φ.
Ben-Tal and Teboulle ([4], [5]) discovered another, more convenient representation. It reads as follows (see [3]).

Theorem 4.1 The divergence risk measure ρ_Φ w.r.t. Φ satisfies the following representation:
\[
\rho_{\Phi}(X)\;=\;\inf_{x\in\mathbb{R}}\mathbb{E}\bigl[\Phi^{*}(X+x)-x\bigr]
\qquad\text{for all }X\in H^{\Phi^{*}}.
\]

The representation in Theorem 4.1 is also known as the optimized certainty equivalent w.r.t. Φ*. As an optimized certainty equivalent, the divergence risk measure ρ_Φ may be seen directly to be nondecreasing w.r.t. the increasing convex order. Theorem 4.1 also shows that ρ_Φ is law-invariant. In particular, we may define the functional \(R_{\rho_{\Phi}}\) associated with ρ_Φ on the set of all distribution functions of the random variables from \(H^{\Phi^{*}}\). Throughout this section we focus on the following specialization of optimization problem (1.3):
\[
\inf_{\theta\in\Theta}\,R_{\rho_{\Phi}}\bigl(F_{\theta}\bigr),
\tag{4.1}
\]
where F_θ stands for the distribution function of G(θ, Z) for θ ∈ Θ.
The SAA (1.4) of (4.1) reads as follows:
\[
\inf_{\theta\in\Theta}R_{\rho_{\Phi}}\bigl(\hat{F}_{n,\theta}\bigr)\;=\;\inf_{\theta\in\Theta}\inf_{x\in\mathbb{R}}\Bigl(\frac{1}{n}\sum_{i=1}^{n}\Phi^{*}\bigl(G(\theta,Z_i)+x\bigr)-x\Bigr)
\qquad(n\in\mathbb{N}).
\tag{4.2}
\]
We shall strengthen condition (A 2) to the following property.

(A 2') There exists some positive envelope ξ of F_Θ satisfying \(\xi(Z_1)\in H^{\Phi^{*}}\).

Note that (A 2') together with (A 1) implies that G(θ, Z_1) belongs to \(H^{\Phi^{*}}\) for every θ ∈ Θ, so that the genuine optimization problem (4.1) is well-defined.
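For an empirical sample, the representation in Theorem 4.1 makes the SAA (4.2) computable by a one-dimensional convex minimization in x for each fixed θ. A sketch using golden-section search (the bracket, the sample, and the particular Φ* are illustrative choices, not prescribed by the paper):

```python
import random

def empirical_oce(phi_star, xs, lo=-50.0, hi=50.0, iters=200):
    # inf_x (1/n) * sum phi_star(X_i + x) - x for the empirical sample xs;
    # the objective is convex in x, so golden-section search is adequate.
    def obj(x):
        return sum(phi_star(v + x) for v in xs) / len(xs) - x
    inv_phi = (5.0 ** 0.5 - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    for _ in range(iters):
        if obj(c) < obj(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return obj(0.5 * (a + b))

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(2000)]
alpha = 0.9
# Phi* of the Average Value at Risk from Example 4.9: y |-> y^+ / (1 - alpha).
avar = empirical_oce(lambda y: max(y, 0.0) / (1.0 - alpha), xs)

# Since this Phi* satisfies Phi*(y) >= y, the divergence risk measure
# dominates the (empirical) mean.
assert avar >= sum(xs) / len(xs)
```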
We are mainly interested in the deviation probabilities (1.5) w.r.t. \(R_{\rho_{\Phi}}\). Representation (4.2) along with Theorem 4.1 suggests applying Theorem 2.2 to the SAA of
\[
\inf_{(\theta,x)\in\Theta\times\mathbb{R}}\mathbb{E}\bigl[G_{\Phi}\bigl((\theta,x),Z_1\bigr)\bigr],
\]
where
\[
G_{\Phi}:(\Theta\times\mathbb{R})\times\mathbb{R}^{d}\to\mathbb{R},\quad\bigl((\theta,x),z\bigr)\mapsto\Phi^{*}\bigl(G(\theta,z)+x\bigr)-x.
\tag{4.3}
\]
Unfortunately, the application is not immediate because the parameter space is not totally bounded w.r.t. the Euclidean metric. So a kind of compactification is needed, provided by the following result. For preparation let us consider any mapping ξ as in (A 2') and let x_0 > 1 be from the effective domain of Φ. Then we introduce for δ > 0 the following real numbers:
\[
x_{l}(x_0,\xi,\delta)\;:=\;-\Phi(0)-\delta-\mathbb{E}\bigl[\Phi^{*}\bigl(\xi(Z_1)\bigr)\bigr],
\tag{4.4}
\]
\[
x_{u}(x_0,\xi,\delta)\;:=\;\frac{\Phi(x_0)+(1+x_0)\delta+\mathbb{E}\bigl[\Phi^{*}\bigl(\xi(Z_1)\bigr)\bigr]+x_0\,\mathbb{E}[\xi(Z_1)]}{x_0-1}\;+\;\Phi(0).
\tag{4.5}
\]
Note that by (A 2') along with Jensen's inequality the mapping ξ is P_Z-integrable. For abbreviation we set, using notations (4.4) as well as (4.5),
\[
I_{x_0,\xi,\delta}\;:=\;\bigl[x_{l}(x_0,\xi,\delta),\,x_{u}(x_0,\xi,\delta)\bigr].
\tag{4.6}
\]

Proposition 4.2 Let (A 1), (A 2') be fulfilled. Furthermore, for δ > 0 and n ∈ N the set \(A^{\xi}_{n,\delta}\in\mathcal{F}\) is defined to consist of all ω ∈ Ω satisfying
\[
\frac{1}{n}\sum_{j=1}^{n}\xi\bigl(Z_j(\omega)\bigr)\;\leq\;\mathbb{E}\bigl[\xi(Z_1)\bigr]+\delta,
\qquad
\frac{1}{n}\sum_{j=1}^{n}\Phi^{*}\bigl(\xi\bigl(Z_j(\omega)\bigr)\bigr)\;\leq\;\mathbb{E}\bigl[\Phi^{*}\bigl(\xi(Z_1)\bigr)\bigr]+\delta.
\]
If G(·, z) is lower semicontinuous for z ∈ R^d, then the optimal values of (4.2) and (4.1) are always finite, and, using notations (4.3), (4.6), if \(\omega\in A^{\xi}_{n,\delta}\), then
\[
\inf_{\theta\in\Theta}R_{\rho_{\Phi}}\bigl(\hat{F}_{n,\theta}\bigr)-\inf_{\theta\in\Theta}R_{\rho_{\Phi}}(F_{\theta})
\;=\;\inf_{(\theta,x)\in\Theta\times I_{x_0,\xi,\delta}}\frac{1}{n}\sum_{j=1}^{n}G_{\Phi}\bigl((\theta,x),Z_j\bigr)
\;-\;\inf_{(\theta,x)\in\Theta\times I_{x_0,\xi,\delta}}\mathbb{E}\bigl[G_{\Phi}\bigl((\theta,x),Z_1\bigr)\bigr].
\]
The proof of Proposition 4.2 may be found in Subsection 5.5.
Now, in view of Proposition 4.2, we may derive the desired deviation probabilities by applying Theorem 2.2 to function classes of the following type:
\[
F^{\Theta}_{\Phi,I}\;:=\;\bigl\{G_{\Phi}\bigl((\theta,x),\cdot\bigr)\;\vert\;(\theta,x)\in\Theta\times I\bigr\}
\qquad(I\subseteq\mathbb{R}\text{ compact interval}).
\tag{4.7}
\]
However, we want to formulate the requirement by means of the terms J(F_Θ, C_{F_Θ}, δ) associated with the genuine objective G instead of the terms \(J\bigl(F^{\Theta}_{\Phi,I},C_{F^{\Theta}_{\Phi,I}},\delta\bigr)\). The relationship between these terms is the subject of the following auxiliary result.
Lemma 4.3 Let I ⊆ R be a nondegenerate compact interval fulfilling the property sup I = |inf I| ∨ |sup I| > 0, and let \(\Phi^{*\prime}_{+}\) denote the right-sided derivative of Φ*. If ξ is a square P_Z-integrable positive envelope of F_Θ, then
\[
C_{F^{\Theta}_{\Phi,I}}\;:=\;2\bigl[\Phi^{*\prime}_{+}\bigl(\xi+\sup I\bigr)+1\bigr]\sqrt{\xi^{2}+(\sup I)^{2}}
\]
is a positive envelope of \(F^{\Theta}_{\Phi,I}\) satisfying
\[
J\bigl(F^{\Theta}_{\Phi,I},C_{F^{\Theta}_{\Phi,I}},\delta\bigr)\;\leq\;\sqrt{2}\,J(F_\Theta,\xi,\delta)+4\delta\sqrt{\ln(1/\delta)}+\sqrt{2\ln(2)}\,\delta
\qquad\text{for }\delta\in\,]0,\exp(-1)].
\]
The proof may be found in Subsection 5.5.

Next, we want to find an analogue of (A 3) for the auxiliary goal function G_Φ, but in terms of the genuine one G. It is the following one.

(A 3'') There exist some at most countable subset Θ̃ ⊆ Θ and (P_Z)^n-null sets N_n (n ∈ N) such that
\[
\inf_{\vartheta\in\widetilde{\Theta}}\mathbb{E}\bigl[\lvert G(\vartheta,Z_1)-G(\theta,Z_1)\rvert\bigr]\;=\;\inf_{\vartheta\in\widetilde{\Theta}}\max_{j\in\{1,\dots,n\}}\bigl\lvert G(\theta,z_j)-G(\vartheta,z_j)\bigr\rvert\;=\;0
\]
for n ∈ N, θ ∈ Θ and (z_1,…,z_n) ∈ R^{dn} \ N_n.

Remark 4.4 Criteria for (A 3'') in the cases that G satisfies (H) or has representation (PL) carry over directly from Remark 3.2. This is because (A 3'') is implied by (A 3').

Lemma 4.5 Let (A 1), (A 2') and (A 3'') be fulfilled, and let I ⊆ R denote a nondegenerate interval. Then with the at most countable subset Θ̃ ⊆ Θ and the (P_Z)^n-null sets N_n (n ∈ N) from (A 3'') it holds that
\[
\inf_{(\vartheta,y)\in\widetilde{\Theta}\times(I\cap\mathbb{Q})}\bigl\lvert\mathbb{E}\bigl[G_{\Phi}\bigl((\vartheta,y),Z_1\bigr)\bigr]-\mathbb{E}\bigl[G_{\Phi}\bigl((\theta,x),Z_1\bigr)\bigr]\bigr\rvert
\;=\;\inf_{(\vartheta,y)\in\widetilde{\Theta}\times(I\cap\mathbb{Q})}\max_{j\in\{1,\dots,n\}}\bigl\lvert G_{\Phi}\bigl((\theta,x),z_j\bigr)-G_{\Phi}\bigl((\vartheta,y),z_j\bigr)\bigr\rvert\;=\;0
\]
for n ∈ N, θ ∈ Θ, x ∈ I and (z_1,…,z_n) ∈ R^{dn} \ N_n.
The proof is postponed to Subsection 5.5.
Putting together Proposition 4.2 and Lemmata 4.3, 4.5, we end up with the following result on the deviation probabilities. Recall notation (4.5), and that \(\Phi^{*\prime}_{+}\) stands for the right-sided derivative of Φ*.

Theorem 4.6 Let (A 1), (A 2'), (A 3'') be fulfilled. Using notation (4.5), the Borel measurable mapping ξ from (A 2') is assumed to satisfy the property that the mapping
\(\xi_{x_0,\xi,\delta}:=\bigl[\Phi^{*\prime}_{+}\bigl(\xi+x_{u}(x_0,\xi,\delta)\bigr)+1\bigr]\sqrt{\xi^{2}+x_{u}(x_0,\xi,\delta)^{2}}\)
is square P_Z-integrable for some x_0 ∈ ]1,2[ from the effective domain of Φ and δ > 0. If G(·, z) is lower semicontinuous for z ∈ R^d, and if J(F_Θ, ξ, 1/2) is finite, then the following statements are true.

1) For ε, t > 0 and n ∈ N with \(n\geq 2\lVert\xi_{x_0,\xi,\delta}\rVert^{2}_{P_Z,2}\) the inequality
\[
P\Bigl(\Bigl\lvert\,\inf_{\theta\in\Theta}R_{\rho_{\Phi}}\bigl(\hat{F}_{n,\theta}\bigr)-\inf_{\theta\in\Theta}R_{\rho_{\Phi}}\bigl(F_{\theta}\bigr)\Bigr\rvert\geq\varepsilon\Bigr)
\;\leq\;\exp\Bigl(\frac{-t^{2}\sqrt{n}\,\varepsilon}{16(t+1)(t+28)\lVert\xi_{x_0,\xi,\delta}\rVert_{P_Z,2}}\Bigr)
+P\bigl(\Omega\setminus A^{\xi}_{n,\delta}\bigr)+P\bigl(\Omega\setminus B^{2\xi_{x_0,\xi,\delta}}_{n}\bigr)
\]
holds if \(\varepsilon>\frac{\lVert\xi_{x_0,\xi,\delta}\rVert_{P_Z,2}}{\sqrt{n}}\Bigl(2+32(t+1)\bigl[4J(F_\Theta,\xi,1/4)+5\sqrt{\ln(2)}\bigr]\Bigr)\). Here \(A^{\xi}_{n,\delta}\) is as in the display of Proposition 4.2, and \(B^{2\xi_{x_0,\xi,\delta}}_{n}\) is defined according to (2.7).

2) The sequence \(\bigl(\sqrt{n}\,\bigl[\inf_{\theta\in\Theta}R_{\rho_{\Phi}}(\hat{F}_{n,\theta})-\inf_{\theta\in\Theta}R_{\rho_{\Phi}}(F_{\theta})\bigr]\bigr)_{n\in\mathbb{N}}\) is a uniformly tight sequence of random variables.
Proof Let Θ̃ ⊆ Θ be from (A 3''). Combining Theorem 4.1 and (4.2) with Lemma 4.5, we may observe
\[
\inf_{\theta\in\Theta}R_{\rho_{\Phi}}\bigl(F_{\theta}\bigr)\;=\;\inf_{(\theta,x)\in\widetilde{\Theta}\times\mathbb{Q}}\mathbb{E}\bigl[G_{\Phi}\bigl((\theta,x),Z_1\bigr)\bigr],
\]
\[
\inf_{\theta\in\Theta}R_{\rho_{\Phi}}\bigl(\hat{F}_{n,\theta}\bigr)\;=\;\inf_{(\theta,x)\in\widetilde{\Theta}\times\mathbb{Q}}\frac{1}{n}\sum_{j=1}^{n}G_{\Phi}\bigl((\theta,x),Z_j\bigr)
\quad P\text{-a.s. for } n\in\mathbb{N}.
\]
In particular, taking Proposition 4.2 and completeness of (Ω, F, P) into account, \(\inf_{\theta\in\Theta}R_{\rho_{\Phi}}\bigl(\hat{F}_{n,\theta}\bigr)-\inf_{\theta\in\Theta}R_{\rho_{\Phi}}\bigl(F_{\theta}\bigr)\) is a random variable for n ∈ N.

Let \(I_{x_0,\xi,\delta}\) denote the interval defined in (4.6). By Proposition 4.2 along with Lemma 4.5 we have
\[
\inf_{\theta\in\Theta}R_{\rho_{\Phi}}\bigl(F_{\theta}\bigr)\;=\;\inf_{(\theta,x)\in\widetilde{\Theta}\times(I_{x_0,\xi,\delta}\cap\mathbb{Q})}\mathbb{E}\bigl[G_{\Phi}\bigl((\theta,x),Z_1\bigr)\bigr],
\]
\[
\inf_{\theta\in\Theta}R_{\rho_{\Phi}}\bigl(\hat{F}_{n,\theta}\bigr)(\omega)\;=\;\inf_{(\theta,x)\in\widetilde{\Theta}\times(I_{x_0,\xi,\delta}\cap\mathbb{Q})}\frac{1}{n}\sum_{j=1}^{n}G_{\Phi}\bigl((\theta,x),Z_j(\omega)\bigr)
\qquad\text{for } n\in\mathbb{N},\ \omega\in A^{\xi}_{n,\delta}.
\]
Finally, note that \(\sup I_{x_0,\xi,\delta}=\lvert\sup I_{x_0,\xi,\delta}\rvert\vee\lvert\inf I_{x_0,\xi,\delta}\rvert>0\) holds. Now, we may apply Theorems 2.2, 2.4 to the function class \(F^{\Theta}_{\Phi,I_{x_0,\xi,\delta}}\), as defined in (4.7). Then in view of Lemma 4.3 we may easily derive the statements of Theorem 4.6. ✷
Remark 4.7 Let us point out some simplifications of Theorem 4.6.

1) If the function G is uniformly bounded by some positive constant L, then we may choose ξ ≡ L. Then \(\Omega\setminus A^{\xi}_{n,\delta}=\Omega\setminus B^{2\xi_{x_0,\xi,\delta}}_{n}=\emptyset\) for every n ∈ N.

2) By Chebychev's inequality we have \(P\bigl(\Omega\setminus A^{\xi}_{n,\delta}\bigr)+P\bigl(\Omega\setminus B^{2\xi_{x_0,\xi,\delta}}_{n}\bigr)=O(1/n)\) if \(\xi_{x_0,\xi,\delta}\) is integrable of order 4. Analogously to Remark 2.3, 3), we may even obtain exponential bounds for these probabilities if \(\mathbb{E}\bigl[\exp\bigl(\lambda\,\xi_{x_0,\xi,\delta}(Z_1)^{2}\bigr)\bigr]\) is finite for some λ > 0.
+
Remark 4.8 Drawing on Proposition 2.5, or Proposition 2.7 along with Remark 4.4,
|
1707 |
+
we may simplify directly Theorem 4.6 in the cases that G fulfills property (H), or has
|
1708 |
+
representation (PL). Moreover, Theorem 4.6 may be improved in the way that the results
|
1709 |
+
provide explicit upper bounds for the involved term J(FΘ, ξ, 1/4).
|
1710 |
+
In the simplified situation of bounded G error rates have been developped in [1] for
|
1711 |
+
linear G as already described in Remark 3.7. As in this remark we want to emphasize
|
1712 |
+
again that universal unknown constants are involved in the bounds from [1]. This short-
|
1713 |
+
coming may be avoided by Theorem 4.6 for this special type of objective G, just by using
|
1714 |
+
Proposition 2.5.
|
1715 |
+
Let us look at the specialization of Theorem 4.6 in the important case that ρΦ is the
|
1716 |
+
Average Value at Risk, also known as the Expected Shortfall.
|
Example 4.9 Let Φα be defined by Φα(x) := 0 for x ≤ 1/(1 − α), for some α ∈ ]0, 1[, and Φα(x) := ∞ for x > 1/(1 − α). Then Φ∗α(y) = y⁺/(1 − α) for y ∈ R. In particular H_{Φ∗} coincides with L¹, and we may recognize R_{ρΦ} as the so-called Average Value at Risk w.r.t. α (e.g. [11], [23]), i.e.

R_{ρΦ}(F) = (1/(1 − α)) ∫_α^1 1_{]0,1[}(u) F^←(u) du = inf_{x∈R} { ∫_0^1 1_{]0,1[}(u) (F^←(u) + x)⁺/(1 − α) du − x }

(see e.g. [16]), where F^← denotes the left-continuous quantile function of F. In this situation we have the following specifications of some particular assumptions in Theorem 4.6.
• (A 2) and (A 2'') are equivalent.
• If ξ : R^d → R is any strictly positive, square PZ-integrable mapping, then

[Φ∗′₊(ξ + a) + 1] √(ξ² + a²) = ((2 − α)/(1 − α)) √(ξ² + a²)

is already square PZ-integrable for every a > 0.
• The sets A^ξ_{n,δ} and B^{2ξ_{x0,ξ,δ}}_n from Theorem 4.6 may be simplified as follows:

A^ξ_{n,δ} = { (1/n) ∑_{j=1}^n ξ(Z_j) ≤ E[ξ(Z1)] + (1 − α)δ },
B^{2ξ_{x0,ξ,δ}}_n = { (1/n) ∑_{j=1}^n ξ(Z_j)² ≤ 2E[ξ(Z1)²] + x_u(x0, ξ, δ)² },

where x_u(x0, ξ, δ) is as in (4.5).
• Under condition (H) with β = 1 the uniform tightness result in Theorem 4.6 is already known from [13].
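To make the two representations of the Average Value at Risk in Example 4.9 concrete, here is a small self-contained Python sketch (our illustration, not part of the paper; the function names are ours) that evaluates both the quantile-integral form and the infimum form on the empirical distribution of a sample, in the paper's sign convention:

```python
def avar_quantile(sample, alpha):
    # (1/(1-alpha)) * integral over ]alpha, 1] of the empirical quantile function
    z, n = sorted(sample), len(sample)
    total = 0.0
    for k, zk in enumerate(z):                # F^{<-} equals zk on ]k/n, (k+1)/n]
        overlap = max(0.0, (k + 1) / n - max(k / n, alpha))
        total += overlap * zk
    return total / (1.0 - alpha)

def avar_dual(sample, alpha):
    # inf_x { mean((z + x)^+)/(1-alpha) - x }; the infimum is attained at
    # x = -F^{<-}(alpha), so searching over the candidates x = -z_i suffices
    n = len(sample)
    return min(sum(max(z + x, 0.0) for z in sample) / (n * (1.0 - alpha)) - x
               for x in (-z for z in sample))

sample = [1.0, 2.0, 3.0, 4.0, 5.0]
print(avar_quantile(sample, 0.6), avar_dual(sample, 0.6))  # both ≈ 4.5
```

Both computations agree, reflecting that the infimum representation reproduces the tail average of the quantile function.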
5 Proofs

5.1 Proof of Theorem 2.2

The main tool for the proof of Theorem 2.2 is Bousquet's version of Talagrand's concentration inequality. We first restate it, tailored to our situation, for the convenience of the reader (see Theorem 3.3.9 in [12]).

Theorem 5.1 Let F be some at most countable set of centered, PZ-integrable functions which is uniformly bounded by some positive constant u. Assume that σ ∈ ]0, u] is an upper bound for the set {Var(h) | h ∈ F}. Then for every n ∈ N and any ε > 0

P({S_n ≥ E[S_n] + ε}) ≤ exp( −ε² / (2[2u E[S_n] + nσ² + uε/3]) ),

where S_n := sup_{h∈F} |∑_{j=1}^n h(Z_j)|.
Now, we are prepared to show Theorem 2.2.

Proof of Theorem 2.2:
As already discussed after introducing condition (A 3), we may replace the parameter space in the optimization problems (1.1), (1.2) with the at most countable subset of Θ from (A 3). Hence, for ε > 0,

P({ |inf_{θ∈Θ} (1/n) ∑_{j=1}^n G(θ, Z_j) − inf_{θ∈Θ} E[G(θ, Z1)]| ≥ ε } ∩ B^ξ_n)
≤ P({ sup_{θ∈Θ} |(1/n) ∑_{j=1}^n G(θ, Z_j) − E[G(θ, Z1)]| ≥ ε } ∩ B^ξ_n).   (5.1)
By definition of B^ξ_n we may observe for ω ∈ B^ξ_n and j ∈ {1, . . . , n}

|G(θ, Z_j(ω))| ≤ |ξ(Z_j(ω))| ≤ w_n := √(2n) ∥ξ∥_{PZ,2}.   (5.2)
Then, setting φ_n(t) := (t ∧ w_n) ∨ (−w_n) for t ∈ R, we obtain

|(1/n) ∑_{j=1}^n G(θ, Z_j(ω)) − E[G(θ, Z1)]|
≤ |(1/n) ∑_{j=1}^n φ_n(G(θ, Z_j(ω))) − E[φ_n(G(θ, Z1))]| + |E[φ_n(G(θ, Z1))] − E[G(θ, Z1)]|

for θ ∈ Θ and ω ∈ B^ξ_n. The function φ_n satisfies the following properties:

|φ_n(t) − φ_n(s)| ≤ |t − s| for t, s ∈ R,   (5.3)

and for any integrable random variable W

|E[φ_n(W)] − E[W]| ≤ |E[(−w_n − W) 1_{]−∞,−w_n]}(W)]| + |E[(W − w_n) 1_{[w_n,∞[}(W)]| = E[(−w_n − W)⁺] + E[(W − w_n)⁺].   (5.4)
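The two properties (5.3) and (5.4) are elementary but worth seeing in action; the following Python sketch (ours, not part of the proof) verifies the 1-Lipschitz property of the clipping map and the truncation-bias bound on the empirical distribution of a sample:

```python
import itertools, random

def clip(t, w):
    # phi_n(t) = (t ∧ w) ∨ (−w): truncation of t to the interval [−w, w]
    return max(min(t, w), -w)

random.seed(0)
w = 2.0
pts = [random.uniform(-10.0, 10.0) for _ in range(100)]

# (5.3): clipping is 1-Lipschitz
assert all(abs(clip(t, w) - clip(s, w)) <= abs(t - s) + 1e-12
           for t, s in itertools.combinations(pts, 2))

# (5.4) for the empirical distribution of pts:
# |E[phi(W)] − E[W]| ≤ E[(−w − W)^+] + E[(W − w)^+]
bias = abs(sum(clip(t, w) - t for t in pts)) / len(pts)
tails = sum(max(-w - t, 0.0) + max(t - w, 0.0) for t in pts) / len(pts)
assert bias <= tails + 1e-12
print("ok")
```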
Invoking (A 2), we may conclude from (5.4)

sup_{θ∈Θ} |E[φ_n(G(θ, Z1))] − E[G(θ, Z1)]| ≤ 2 E[(ξ(Z1) − w_n)⁺] =: δ_n.

Furthermore, by square integrability of ξ(Z1),

n δ_n = 2n ∫_{w_n}^∞ P({ξ(Z1) > t}) dt = (√(2n)/∥ξ∥_{PZ,2}) ∫_{√(2n)∥ξ∥_{PZ,2}}^∞ √(2n) ∥ξ∥_{PZ,2} P({ξ(Z1) > t}) dt
≤ (√(2n)/∥ξ∥_{PZ,2}) ∫_0^∞ t P({ξ(Z1) > t}) dt ≤ (√n/√2) ∥ξ∥_{PZ,2}.
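The last estimate rests on the elementary identity ∫_0^∞ t P({ξ(Z1) > t}) dt = E[ξ(Z1)²]/2. A quick exact sanity check for a toy discrete distribution (our illustration, not part of the proof):

```python
from fractions import Fraction

# xi uniform on {1, 2, 3}: P(xi > t) is constant on each interval ]a, b],
# so the tail integral can be summed piecewise exactly
vals = [1, 2, 3]
lhs = Fraction(0)
for a, b in zip([0] + vals, vals):                             # pieces ]a, b]
    tail = Fraction(sum(1 for v in vals if v > a), len(vals))  # P(xi > t) there
    lhs += tail * Fraction(b * b - a * a, 2)                   # ∫_a^b t dt = (b²−a²)/2
rhs = Fraction(sum(v * v for v in vals), 2 * len(vals))        # E[xi²]/2
print(lhs, rhs)  # 7/3 7/3
```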
Therefore

sup_{θ∈Θ} |E[φ_n(G(θ, Z1))] − E[G(θ, Z1)]| ≤ ∥ξ∥_{PZ,2}/√(2n) for n ∈ N,

and thus for arbitrary n ∈ N

P({ sup_{θ∈Θ} |(1/n) ∑_{j=1}^n G(θ, Z_j) − E[G(θ, Z1)]| ≥ ε } ∩ B^ξ_n)
≤ P({ sup_{θ∈Θ} |∑_{j=1}^n φ_n(G(θ, Z_j)) − n E[φ_n(G(θ, Z1))]| ≥ nε − √n ∥ξ∥_{PZ,2}/√2 }).
We want to apply Theorem 5.1 to the function class F_n consisting of all mappings φ_n(G(θ, ·)) − E[φ_n(G(θ, Z1))] with θ ∈ Θ, and we set

S_n := sup_{θ∈Θ} |∑_{j=1}^n φ_n(G(θ, Z_j)) − n E[φ_n(G(θ, Z1))]|.
Combining (A 2) with (5.3) and the property φ_n(0) = 0, we have

∥φ_n(G(θ, ·)) − φ_n(G(ϑ, ·))∥_{Q,2} ≤ ∥G(θ, ·) − G(ϑ, ·)∥_{Q,2} for θ, ϑ ∈ Θ, Q ∈ M_fin,
|φ_n(G(θ, z))| ≤ ξ(z) for θ ∈ Θ, z ∈ R^d.

In particular, ξ is a positive envelope not only of FΘ = {G(θ, ·) | θ ∈ Θ} but also of the function class {φ_n(G(θ, ·)) | θ ∈ Θ}, and

N(η∥ξ∥_{Q,2}, {φ_n(G(θ, ·)) | θ ∈ Θ}, L²(Q)) ≤ N(η∥ξ∥_{Q,2}, FΘ, L²(Q)) ≤ N(η∥ξ∥_{Q,2}/2, FΘ, L²(Q))

holds for η > 0 and Q ∈ M_fin. So in view of (2.6) we obtain

E[S_n] ≤ √n · 32√2 ∥ξ∥_{PZ,2} J(FΘ, ξ, 1/4).   (5.5)
Since ξ is an envelope of {φ_n(G(θ, ·)) | θ ∈ Θ}, we also have

sup_{θ∈Θ} |φ_n(G(θ, z)) − E[φ_n(G(θ, Z1))]| ≤ u_n := (√(2n) + 1) ∥ξ∥_{PZ,2}   (5.6)

for n ∈ N, z ∈ R^d. Finally, setting σ² := E[ξ(Z1)²],

E[ |φ_n(G(θ, Z1)) − E[φ_n(G(θ, Z1))]|² ] ≤ E[ φ_n(G(θ, Z1))² ] ≤ σ²   (5.7)

for θ ∈ Θ and n ∈ N.
Fix any t > 0, and let n ∈ N with ε > η_{t,n} as well as n ≥ ∥ξ∥²_{PZ,2}/2, where η_{t,n} is as in the display of Theorem 2.2. Then σ² ≤ u_n, and with the help of (5.5)

nε − √n ∥ξ∥_{PZ,2}/√2 = (t/(t + 1)) (nε − √n ∥ξ∥_{PZ,2}/√2) + (nε − √n ∥ξ∥_{PZ,2}/√2)/(t + 1) ≥ tnε/(4(t + 1)) + E[S_n].
This implies

P({ sup_{θ∈Θ} |(1/n) ∑_{j=1}^n G(θ, Z_j) − E[G(θ, Z1)]| ≥ ε } ∩ B^ξ_n)
≤ P({ sup_{θ∈Θ} |∑_{j=1}^n φ_n(G(θ, Z_j)) − n E[φ_n(G(θ, Z1))]| ≥ tnε/(4(t + 1)) + E[S_n] }).   (5.8)
Now, we are in the position to apply Theorem 5.1 to F_n due to (5.5)-(5.7), concluding

P({ sup_{θ∈Θ} |∑_{j=1}^n φ_n(G(θ, Z_j)) − n E[φ_n(G(θ, Z1))]| ≥ tnε/(4(t + 1)) + E[S_n] })
≤ exp( −3t²n²ε² / (8(t + 1)²[24 u_n E[S_n] + 12 nσ² + t u_n nε/(t + 1)]) ).

Furthermore σ² = ∥ξ∥²_{PZ,2} < √n ε ∥ξ∥_{PZ,2}, and E[S_n] < nε/(t + 1) by (5.5). Then the statement of Theorem 2.2 may be derived easily from (5.1) along with (5.8).
✷
5.2 Proof of Proposition 2.5

Condition (H) allows us to verify (A 3) for any at most countable dense subset of the compact set Θ.
Let θ̄ ∈ Θ be such that G(θ̄, ·) is square PZ-integrable. Then for any θ ∈ Θ assumption (H) implies

|G(θ, z)| ≤ |G(θ̄, z)| + C(z) d_{m,2}(θ, θ̄)^β   (z ∈ R^d).

In particular, ξ := C ∆(Θ)^β + |G(θ̄, ·)| is square PZ-integrable and satisfies (A 2). Hence it remains to show the inequalities for the terms J(FΘ, ξ, δ).
For a totally bounded metric d on Θ we shall use the symbol N(η, Θ, d) to denote the minimal number of closed d-balls with radius η > 0 and centers in Θ needed to cover Θ.
It may be verified easily that the restriction of d^β_{m,2} to Θ defines a totally bounded and complete metric on Θ. By (H) we may observe

∥G(θ, ·) − G(ϑ, ·)∥_{Q,2} ≤ ∥C∥_{Q,2} d_{m,2}(θ, ϑ)^β for Q ∈ M_fin and θ, ϑ ∈ Θ.

Hence we obtain

N(∥ξ∥_{Q,2} η, FΘ, L²(Q)) ≤ N(∆(Θ)^β η, Θ, d^β_{m,2}) for all Q ∈ M_fin, η > 0.

Moreover, we have Θ ⊆ {γ ∈ R^m | d_{m,2}(γ, θ̄) ≤ ∆(Θ)}. Then we obtain from Lemma 2.5 in [25] that for every η > 0

N(∆(Θ)^β η, Θ, d^β_{m,2}) ≤ N(∆(Θ) η^{1/β}, Θ, d_{m,2}) ≤ (8 + η^{1/β})^m / η^{m/β}.
This implies for any δ ∈ ]0, 1/2], using the change of variable formula,

J(FΘ, ξ, δ) ≤ ∫_0^δ √( (m/β) ln( 2^{β/m} [8 + δ^{1/β}]^β / η ) ) dη ≤ δ ∫_0^1 √( (m/β) ln( 2^{[(3m+1)β+m]/m} / (δη) ) ) dη.

Now, we may invoke (2.8) with v := m/β and K := 2^{[(3m+1)β+m]/m}/δ to derive the remaining part of Proposition 2.5.
✷
5.3 Proof of Proposition 2.7

We start the proof of Proposition 2.7 with the following observation, induced by representation (PL):

G(θ, z) = ∑_{i=1}^r f^i(θ, z) G^i(θ, z) for θ ∈ Θ.   (5.9)

The mappings f^i(θ, ·) and G^i(θ, ·) are Borel measurable due to the continuity of the involved linear mappings along with the measurability of the indicator mappings 1_{I_{il}}.
Hence by (5.9) the assumption (A 1) is fulfilled. Moreover, let the mappings ξ1, . . . , ξr, ξ be defined as in the display of Proposition 2.7, and let Λ1, . . . , Λr be square PZ-integrable. Then by construction the mapping ξ is also square PZ-integrable, because the mappings ξ1, . . . , ξr are assumed to be bounded. In particular it satisfies (A 2), by (5.9) again. Therefore it remains to verify the claimed upper estimates of the terms J(FΘ, ξ, δ).
We need some further preparation from empirical process theory. To recall, define for a collection B of subsets of R^d and z1, . . . , zn ∈ R^d

∆_n(B, z1, . . . , zn) := cardinality of {B ∩ {z1, . . . , zn} | B ∈ B}.

Then

V(B) := inf{ n ∈ N | max_{z1,...,zn∈R^d} ∆_n(B, z1, . . . , zn) < 2^n }   (inf ∅ := ∞)

is known as the index of B (see [26], p. 135). In case of finite index, B is called a VC-class (see [26], p. 135). The concept of VC-classes may be carried over from sets to functions in the following way. A set F of Borel measurable real-valued functions on R^d is defined to be a VC-subgraph class, or a VC-class, if the corresponding collection {{(z, t) ∈ R^d × R | h(z) > t} | h ∈ F} of subgraphs is a VC-class ([26], p. 141). Its VC-index V(F) coincides with the index of the collection of subgraphs. The significance of VC-subgraph classes stems from the fact that there exists some universal constant K_VC ≥ 1 such that for every VC-subgraph class F and any PZ-integrable positive envelope C_F of F

sup_{Q∈M_fin} N(ε∥C_F∥_{Q,2}, F, L²(Q)) ≤ K_VC V(F) (16e)^{V(F)} (1/ε)^{2[V(F)−1]} for ε ∈ ]0, 1[

(see [17, Theorem 9.3] or [26, Theorem 2.6.7]).
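As a concrete illustration of the definitions of ∆_n and V(B) (our sketch, not taken from the references): for the collection B of half-lines ]−∞, x] on the real line, one point can always be shattered but two points never, so the index is 2. A brute-force Python check:

```python
from itertools import combinations

def traces(points, thresholds):
    # realizations of B ∩ {z_1, ..., z_n} for B = ]-inf, x] over a threshold grid
    return {frozenset(z for z in points if z <= x) for x in thresholds}

def vc_index(universe, max_n):
    # V(B) = inf{ n : max over n-point subsets of Δ_n(B, z_1..z_n) < 2^n }
    thresholds = [min(universe) - 1.0] + sorted(universe)
    for n in range(1, max_n + 1):
        if all(len(traces(pts, thresholds)) < 2 ** n
               for pts in combinations(universe, n)):
            return n
    return None

print(vc_index([0.0, 1.0, 2.0, 3.0], 4))  # 2
```

On any two points a < b the half-lines realize only the traces ∅, {a}, {a, b}, i.e. 3 < 2², which is exactly why the index is 2.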
For our purposes we are interested in more explicit upper estimates of the covering numbers. This may be achieved upon Corollary 3 in [14], which we recall now for the convenience of the reader.

Proposition 5.2 Let F = {1_B | B ∈ B}, where B denotes some VC-class. Then

sup_{Q∈M_fin} N(ε, F, L¹(Q)) ≤ e V(F) (2e/ε)^{V(F)−1} for ε ∈ ]0, 1[.

Once we have upper estimates for covering numbers of VC-classes w.r.t. L¹-norms, it is well known from empirical process theory how to derive upper estimates for covering numbers of VC-subgraph classes w.r.t. the L²-norm. We obtain the following result.
Corollary 5.3 Let F be any VC-subgraph class with some arbitrary positive envelope C_F. Then the inequality

sup_{Q∈M_fin} N(ε∥C_F∥_{Q,2}, F, L²(Q)) ≤ e V(F) (4e^{1/2}/ε)^{2[V(F)−1]} for ε ∈ ]0, 1[

holds.

Proof The proof mimics the proof of Theorem 9.3 in [17] or the proof of Theorem 2.6.7 in [26].
Let F_B := {1_B | B ∈ B}, where B denotes the collection of subgraphs corresponding to F. In the first step one may obtain

N(ε∥C_F∥_{Q,1}, F, L¹(Q)) ≤ N(ε/2, F_B, L¹(Q)) for Q ∈ M_fin, ε ∈ ]0, 1[.   (5.10)

In the second step any Q ∈ M_fin is associated with the probability measure Q_{C_F} ∈ M_fin, defined by Q_{C_F}(B) := E_Q[1_B C_F]/E_Q[C_F]. Then it can be shown that

N(ε∥C_F∥_{Q,2}, F, L²(Q)) ≤ N(ε²∥C_F∥_{Q_{C_F},1}/4, F, L¹(Q_{C_F}))   (5.11)

holds for ε ∈ ]0, 1[. Then, combining (5.10) and (5.11) with Haussler's result Proposition 5.2, we may complete the proof.
✷
In view of (5.9), the following two auxiliary results reveal that the class FΘ is built upon specific VC-subgraph classes. This will be crucial for deriving the result of Proposition 2.7.
Lemma 5.4 For every i ∈ {1, . . . , r} and any nonvoid Θ̄ ⊆ Θ, the set F^i_{Θ̄} consisting of all f^i(θ, ·) with θ ∈ Θ̄ is a VC-subgraph class with index V(F^i_{Θ̄}) ≤ s_i + 1.

Proof Let Θ̄ ⊆ Θ be nonvoid, and let i ∈ {1, . . . , r}. Define the collection

B_{s_i} := { ⋂_{l=1}^{s_i} J_l | J_1, . . . , J_{s_i} ∈ J }, where J := {]−∞, x], ]−∞, x[ | x ∈ R}.

It is a VC-class with V(B_{s_i}) = s_i + 1 (see [9, Corollary 4.5.11]). Then

B̄_i := { B_i(θ) | θ ∈ Θ̄ } ⊆ { (−L^i_1, . . . , −L^i_{s_i})^{−1}(B) | B ∈ B_{s_i} },

where B_i(θ) := { z ∈ R^d | L^i_l(z + T(θ)) + a^i_l ∈ I_{il}; l = 1, . . . , s_i }. Hence B̄_i is a VC-class with V(B̄_i) ≤ V(B_{s_i}) (see [17, Lemma 9.7, (vi)]), and thus V(B̄_i) ≤ s_i + 1. Since F^i_{Θ̄} consists just of all the indicator mappings associated with the sets from B̄_i, we may directly derive the statement of Lemma 5.4 (see [17, Lemma 9.8]).
✷
Lemma 5.5 For nonvoid Θ̄ ⊆ Θ and i ∈ {1, . . . , r} the set F̄^i_{Θ̄} of all G^i(θ, ·) with θ ∈ Θ̄ is a VC-subgraph class with index V(F̄^i_{Θ̄}) ≤ 4.

Proof Let us fix nonvoid Θ̄ ⊆ Θ and i ∈ {1, . . . , r}. The linear hull of F̄^i_{Θ̄} is generated by {Λ_i, 1}, so that it has finite dimension. Thus F̄^i_{Θ̄} is a VC-subgraph class with index not greater than 4 (see [26, Lemma 2.6.15]). The proof is complete.
✷

Now, we are ready to finish the proof of Proposition 2.7.
We consider the function class F^i consisting of all mappings f^i(θ, ·) · G^i(θ, ·) with θ ∈ Θ, for i ∈ {1, . . . , r}. The significance of these function classes for our purposes stems from representation (5.9). Note that ξ̂_i := ξ_i · (|Λ_i| + η^G_i) defines a positive envelope of F^i for i ∈ {1, . . . , r}. Our aim is to find explicit upper estimates of the covering numbers N(ε∥ξ̂_i∥_{Q,2}, F^i, L²(Q)) with Q ∈ M_fin.
Fix i ∈ {1, . . . , r}. First of all, F^i_{PL} is a VC-subgraph class with index V(F^i_{PL}) ≤ s_i + 1 by Lemma 5.4, and F̄^i_{PL} is a VC-subgraph class with index V(F̄^i_{PL}) ≤ 4 due to Lemma 5.5. Furthermore, ξ_i := |Λ_i| + η^G_i is a positive envelope of F̄^i_{PL}. Then we may conclude from Corollary 5.3

sup_{Q∈M_fin} N(ε∥ξ_i∥_{Q,2}, F^i_{PL}, L²(Q)) ≤ e(s_i + 1)(4e^{1/2}/ε)^{2s_i} for ε ∈ ]0, 1[,
sup_{Q∈M_fin} N(ε∥ξ_i∥_{Q,2}, F̄^i_{PL}, L²(Q)) ≤ 4e(4e^{1/2}/ε)^6 for ε ∈ ]0, 1[.
Moreover, we have

sup_{Q∈M_fin} N(ε∥ξ̂_i∥_{Q,2}, F^i, L²(Q)) ≤ sup_{Q∈M_fin} N(ε∥ξ_i∥_{Q,2}/4, F^i_{PL}, L²(Q)) · sup_{Q∈M_fin} N(ε∥ξ_i∥_{Q,2}/4, F̄^i_{PL}, L²(Q))

for ε ∈ ]0, 1[ (see Corollary A.1 in the supplement to [7] or the proof of Theorem 9.15 in [17]). Hence we end up with

sup_{Q∈M_fin} N(ε∥ξ̂_i∥_{Q,2}, F^i, L²(Q)) ≤ 4e²(s_i + 1)(16e^{1/2}/ε)^{2[s_i+3]}   (5.12)

for i ∈ {1, . . . , r}, ε ∈ ]0, 1[.
Next, fix Q ∈ M_fin and ε > 0. Let h_i, h̄_i ∈ F^i be such that the inequality ∥h_i − h̄_i∥_{Q,2} ≤ ε∥ξ̂_i∥_{Q,2}/r holds for i = 1, . . . , r. Then, by the inequality √(∑_{i=1}^r t_i) ≥ ∑_{i=1}^r √(t_i)/r for t_1, . . . , t_r ≥ 0,

∥∑_{i=1}^r h_i − ∑_{i=1}^r h̄_i∥_{Q,2} ≤ ∑_{i=1}^r ∥h_i − h̄_i∥_{Q,2} ≤ (ε/r) ∑_{i=1}^r ∥ξ̂_i∥_{Q,2} ≤ ε ∥∑_{i=1}^r ξ̂_i∥_{Q,2}.
Thus, by construction of ξ along with (5.9),

N(ε∥ξ∥_{Q,2}, FΘ, L²(Q)) ≤ ∏_{i=1}^r N(ε∥ξ̂_i∥_{Q,2}/r, F^i, L²(Q))   (5.13)

for Q ∈ M_fin and ε > 0.
Combining (5.12) and (5.13), we obtain for δ ∈ ]0, 1], by the change of variable formula,

J(FΘ, ξ, δ) = δ ∫_0^1 sup_{Q∈M_fin} √( ln( 2 N(δε∥ξ∥_{Q,2}, FΘ, L²(Q)) ) ) dε ≤ δ ∫_0^1 √( v ln(K_δ/ε) ) dε,

where

v := 2 ∑_{i=1}^r s_i + 6r and K_δ := 16 r e^{1/2} ( 2^{2r+1} e^{2r} ∏_{i=1}^r (s_i + 1) )^{1/v} / δ.

Now, we may finish the proof of Proposition 2.7 via (2.8) by routine calculations.
✷
5.4 Proofs of the results from Section 3

As a first result we shall show Lemma 3.1.

Proof of Lemma 3.1:
Let n ∈ N and a ∈ ]0, 1]. By choice of the random variable ξ we may observe

inf_{θ∈Θ} R_{ρ_{p,a}}(F_θ) ≥ −E[ξ] > −∞ and inf_{θ∈Θ} R_{ρ_{p,a}}(F̂_{n,θ}) ≥ −(1/n) ∑_{j=1}^n ξ(Z_j) > −∞.

Moreover, using Minkowski's inequality, by representations (3.2) and (3.3) we have for nonvoid Θ̄ ⊆ Θ

| inf_{θ∈Θ̄} R_{ρ_{p,a}}(F̂_{n,θ}) − inf_{θ∈Θ̄} R_{ρ_{p,a}}(F_θ) |
≤ (1 + a) sup_{θ∈Θ̄} |(1/n) ∑_{j=1}^n G(θ, Z_j) − E[G(θ, Z1)]| + a sup_{θ∈Θ̄} | ((1/n) ∑_{j=1}^n G_p(θ, Z_j))^{1/p} − (E[G_p(θ, Z1)])^{1/p} |.

Since |t^{1/p} − s^{1/p}| ≤ |t − s|^{1/p} holds for t, s ≥ 0, we end up with

| inf_{θ∈Θ̄} R_{ρ_{p,a}}(F̂_{n,θ}) − inf_{θ∈Θ̄} R_{ρ_{p,a}}(F_θ) |
≤ (1 + a) sup_{θ∈Θ̄} |(1/n) ∑_{j=1}^n G(θ, Z_j) − E[G(θ, Z1)]| + a ( sup_{θ∈Θ̄} |(1/n) ∑_{j=1}^n G_p(θ, Z_j) − E[G_p(θ, Z1)]| )^{1/p}.

Now, the proof may be finished easily.
✷
Proof of Lemma 3.3:
Let Θ̄ ⊆ Θ be the subset from (A 3'). For θ ∈ Θ we may select by (A 3') a sequence (ϑ_k)_{k∈N} in Θ̄ such that E[|G(ϑ_k, Z1) − G(θ, Z1)|] → 0, and thus G(ϑ_k, Z1) → G(θ, Z1) in probability by application of Markov's inequality. This implies G_p(ϑ_k, Z1) → G_p(θ, Z1) in probability. Furthermore, we have the upper estimate |G_p(ϑ_k, Z1)| ≤ (ξ(Z1) + E[ξ(Z1)])^p for k ∈ N, and ξ is integrable of order p by assumption. Thus the application of Vitali's theorem (see [2, Proposition 21.4]) yields E[G_p(ϑ_k, Z1)] → E[G_p(θ, Z1)]. Thus we have shown for any θ ∈ Θ

inf_{ϑ∈Θ̄} { |E[G(ϑ, Z1)] − E[G(θ, Z1)]| + |E[G_p(ϑ, Z1)] − E[G_p(θ, Z1)]| } = 0.   (5.14)

In view of representation (3.2), statement 1) follows immediately from (5.14).
Next, fix n ∈ N, choose the (P_Z)^{⊗n}-null set N_n according to (A 3'), and consider any vector (z1, . . . , zn) ∈ R^{dn} \ N_n. For θ ∈ Θ we may find via (A 3') some sequence (ϑ_k)_{k∈N} in Θ̄ such that E[G(ϑ_k, Z1)] → E[G(θ, Z1)] and G(ϑ_k, z_j) → G(θ, z_j) for j ∈ {1, . . . , n}. Then G_p(ϑ_k, z_j) → G_p(θ, z_j) for every j ∈ {1, . . . , n}. In particular, statement 2) may be concluded from (5.14) along with (A 3').
Let us define the set A_n := {(Z1, . . . , Zn) ∈ R^{dn} \ N_n} ∈ F. Note P(A_n) = 1. Fix ω ∈ A_n. By (A 3') there exists for any θ ∈ Θ some sequence (ϑ_k)_{k∈N} in Θ̄ satisfying G(ϑ_k, Z_j(ω)) → G(θ, Z_j(ω)) for j ∈ {1, . . . , n}. Then, drawing on representation (3.3), the convergence R_{ρ_{p,a}}(F̂_{n,ϑ_k})(ω) → R_{ρ_{p,a}}(F̂_{n,θ})(ω) may be verified easily for every a ∈ ]0, 1]. This shows statement 3), recalling P(A_n) = 1. The proof is complete.
✷
Proof of Lemma 3.4:
First of all,

G(θ, z) − E[G(θ, Z1)] ≤ ξ(z) + (E[ξ(Z1)] ∨ 1)

holds for θ ∈ Θ and z ∈ R^d, so that ξ_p is a positive envelope of FΘ,p.
Next, let s, t, u ∈ [0, ∞[ with u ≥ t ∨ s. The mapping f : ]1, ∞[ → R, defined by f(q) := |s^q − t^q|, is nondecreasing. Hence |s^p − t^p| ≤ |s^{⌈p⌉} − t^{⌈p⌉}|, using the notation ⌈p⌉ := min([p, ∞[ ∩ N).
Moreover, |s^{k+1} − t^{k+1}| = (s ∨ t)|s^k − t^k| + |s − t|(s ∧ t)^k holds for k ∈ N. Then it may be shown by induction that |s^k − t^k| ≤ |s − t|(2u)^{k−1} is valid for every k ∈ N. In particular, we end up with the inequality |s^p − t^p| ≤ |s − t|(2u)^{⌈p⌉−1}. As a further consequence we may observe for θ, ϑ ∈ Θ and z ∈ R^d

|G_p(θ, z) − G_p(ϑ, z)|² ≤ | (G(θ, z) − E[G(θ, Z1)])⁺ − (G(ϑ, z) − E[G(ϑ, Z1)])⁺ |² (2ξ(z) + 2E[ξ(Z1)])^{2⌈p⌉−2}
≤ 2^{2(p+1)} ξ_p(z)^{2p/(p+1)} ( |G(θ, z) − G(ϑ, z)|² + |E[G(θ, Z1)] − E[G(ϑ, Z1)]|² ).
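The elementary power inequalities above can be checked numerically; the following sketch (ours, not part of the proof) samples the regime s ∨ t ≥ 1, in which the monotonicity of q ↦ |s^q − t^q| used above is immediate:

```python
import math, random

def check_power_bounds(p, trials=1000):
    k = math.ceil(p)                              # ⌈p⌉
    rng = random.Random(7)
    for _ in range(trials):
        s = rng.uniform(1.0, 4.0)                 # keep s ∨ t ≥ 1
        t = rng.uniform(0.0, s)
        u = s + rng.uniform(0.0, 2.0)             # u ≥ t ∨ s
        lhs = abs(s ** p - t ** p)
        mid = abs(s ** k - t ** k)                # |s^p − t^p| ≤ |s^⌈p⌉ − t^⌈p⌉|
        rhs = abs(s - t) * (2.0 * u) ** (k - 1)   # ≤ |s − t| (2u)^{⌈p⌉−1}
        assert lhs <= mid + 1e-9 <= rhs + 1e-9
    return True

for p in (1.5, 2.0, 3.7):
    assert check_power_bounds(p)
print("ok")
```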
The positive envelope ξ_p of FΘ,p is square PZ-integrable by assumption, and the constant E[ξ(Z1)] may be viewed as a positive envelope of the class I which gathers all constant functions E[G(θ, Z1)] (θ ∈ Θ). We may apply Theorem 2.10.20 from [26], which leads to

∫_0^δ sup_{Q∈M_fin} √( ln N( ε 2^{p+1} ∥ξ_p^{p/(p+1)} √(ξ² + E[ξ(Z1)]²)∥_{Q,2}, FΘ,p, L²(Q) ) ) dε
≤ ∫_0^δ sup_{Q∈M_fin} √( ln N( ε ∥ξ∥_{Q,2}/2, FΘ, L²(Q) ) ) dε + ∫_0^δ √( ln N( ε E[ξ(Z1)]/2, I ) ) dε
≤ 2 J(FΘ, ξ, δ/2) + ∫_0^δ √( ln N( ε E[ξ(Z1)]/4, [−E[ξ(Z1)], E[ξ(Z1)]] ) ) dε

for δ > 0, where for J ∈ { I, [−E[ξ(Z1)], E[ξ(Z1)]] } and η > 0 we denote by the symbol N(η E[ξ(Z1)], J) the minimal number needed to cover J by closed intervals of the form [x_i − η E[ξ(Z1)], x_i + η E[ξ(Z1)]] with x_i ∈ J. It is easy to check that the inequality N(ε E[ξ(Z1)]/4, [−E[ξ(Z1)], E[ξ(Z1)]]) ≤ 8/ε holds for ε > 0. Hence we may invoke the change of variable formula along with (2.8), which yields

∫_0^δ sup_{Q∈M_fin} √( ln N( ε 2^{p+1} ∥ξ_p^{p/(p+1)} √(ξ² + E[ξ(Z1)]²)∥_{Q,2}, FΘ,p, L²(Q) ) ) dε
≤ 2 J(FΘ, ξ, δ/2) + ∫_0^δ √(ln(8/ε)) dε = 2 J(FΘ, ξ, δ/2) + δ ∫_0^1 √( ln((8/δ)/ε) ) dε
≤ 2 J(FΘ, ξ, δ/2) + 2δ √(ln(8/δ)) for δ ∈ ]0, 1[.

Since ∥ξ_p^{p/(p+1)} √(ξ² + E[ξ(Z1)]²)∥_{Q,2} ≤ ∥ξ_p∥_{Q,2} is valid for any Q ∈ M_fin, we may further conclude, using the change of variable formula again,

∫_0^δ sup_{Q∈M_fin} √( ln N( ε ∥ξ_p∥_{Q,2}, FΘ,p, L²(Q) ) ) dε
≤ ∫_0^δ sup_{Q∈M_fin} √( ln N( ε ∥ξ_p^{p/(p+1)} √(ξ² + E[ξ(Z1)]²)∥_{Q,2}, FΘ,p, L²(Q) ) ) dε
≤ 2^{p+1} ∫_0^{δ/2^{p+1}} sup_{Q∈M_fin} √( ln N( η 2^{p+1} ∥ξ_p^{p/(p+1)} √(ξ² + E[ξ(Z1)]²)∥_{Q,2}, FΘ,p, L²(Q) ) ) dη
≤ 2^{p+2} J(FΘ, ξ, δ/2^{p+2}) + 2δ √( ln(2^{p+4}/δ) ) for δ ∈ ]0, 1[.

Now, the statement of Lemma 3.4 follows easily from the observation

J(FΘ,p, ξ_p, δ) ≤ √(2 ln 2) δ + √2 ∫_0^δ sup_{Q∈M_fin} √( ln N( ε ∥ξ_p∥_{Q,2}, FΘ,p, L²(Q) ) ) dε for δ > 0.
✷
2966 |
+
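The elementary covering bound N(ε E[ξ(Z1)]/4, [−E[ξ(Z1)], E[ξ(Z1)]]) ≤ 8/ε used in the proof above can be checked numerically. The sketch below is illustrative only and not part of the argument; it computes the minimal number of closed intervals of a given half-length, with centers inside the target interval, needed to cover it.

```python
import math

def covering_number(length, radius):
    # Minimal number of closed intervals [c - radius, c + radius], with centers
    # inside the target interval, needed to cover an interval of the given length.
    return max(1, math.ceil(length / (2.0 * radius)))

# Target interval [-E, E] with E = E[xi(Z1)] = 1 and radius eps * E / 4:
E, eps = 1.0, 0.5
n_cover = covering_number(2.0 * E, eps * E / 4.0)
assert n_cover <= 8.0 / eps  # the bound 8/eps from the proof
```

Here `covering_number` returns ceil(length/(2·radius)), which for the interval [−E, E] and radius εE/4 gives ceil(4/ε) ≤ 8/ε for ε ∈ ]0, 1].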
5.5 Proof of results from Section 4

Let us introduce the sequence (Xn)n∈N of random processes

Xn : Ω × Θ × R → R,  Xn(ω, θ, x) := (1/n) Σ_{j=1}^n [ Φ∗( G(θ, Zj(ω)) + x ) − x ]   (n ∈ N),

and, under (A 2'), the mapping

ψΦ : Θ × R → R,  (θ, x) ↦ E[ Φ∗( G(θ, Z1) + x ) − x ].
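To make the definition of Xn concrete, the following sketch evaluates it for one illustrative choice of Φ. The choice Φ(x) = x ln x − x + 1, for which Φ∗(y) = e^y − 1, is an assumption made here for illustration only, not a choice made in the paper.

```python
import math

def phi_star(y):
    # Convex conjugate of the illustrative choice Phi(x) = x*ln(x) - x + 1,
    # for which Phi*(y) = e^y - 1 (an assumption, not the paper's Phi).
    return math.exp(y) - 1.0

def X_n(g_values, x):
    # X_n(omega, theta, x) = (1/n) * sum_j [Phi*(G(theta, Z_j(omega)) + x) - x],
    # where g_values holds the realized values G(theta, Z_j(omega)).
    n = len(g_values)
    return sum(phi_star(g + x) - x for g in g_values) / n
```

Minimizing X_n jointly over (θ, x) then gives the sample average approximation (SAA) counterpart of minimizing ψΦ.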
The key for proving Proposition 4.2 is the following observation.

Lemma 5.6 Let (A 1) and (A 2') be fulfilled. Furthermore let x0 > 1 be from the effective domain of Φ. Then with ξ from (A 2') the following inequalities hold for θ ∈ Θ, x ∈ R and n ∈ N:

Xn(·, θ, x) ≥ max{ −Φ(0) − x, −(x0/n) Σ_{j=1}^n ξ(Zj) − Φ(x0) + x(x0 − 1) },
ψΦ(θ, x) ≥ max{ −Φ(0) − x, −x0 E[ξ(Z1)] − Φ(x0) + x(x0 − 1) }.

In particular, ψΦ is bounded from below, and so is the path Xn(ω, ·, ·) for every n ∈ N and any ω ∈ Ω.

Proof The inequalities Φ∗(y) ≥ −Φ(0) and Φ∗(y) ≥ y x0 − Φ(x0) hold for y ∈ R by definition of Φ∗. Then the inequalities in the statement follow easily. Next, notice that ϕ(x) := max{−Φ(0) − x, −x0 E[ξ(Z1)] − Φ(x0) + x(x0 − 1)} defines a continuous mapping ϕ : R → R which tends to ∞ for x → −∞ and x → ∞. Hence ϕ is bounded from below, and thus so is ψΦ. In the same way it may be shown that Xn(ω, ·, ·) is bounded from below for n ∈ N and ω ∈ Ω. This completes the proof.

✷
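The two conjugate inequalities Φ∗(y) ≥ −Φ(0) and Φ∗(y) ≥ y x0 − Φ(x0) driving Lemma 5.6 follow directly from Φ∗(y) = sup_x (xy − Φ(x)). A numerical sanity check, again under the assumed illustrative choice Φ(x) = x ln x − x + 1 (not the paper's Φ):

```python
import math

def phi(x):
    # Illustrative assumption: Phi(x) = x*ln(x) - x + 1 on [0, inf), Phi(0) = 1.
    return 1.0 if x == 0.0 else x * math.log(x) - x + 1.0

def phi_star_numeric(y, x_max=10.0, steps=20000):
    # Phi*(y) = sup_{x >= 0} (x*y - Phi(x)), approximated on a uniform grid.
    return max(i * x_max / steps * y - phi(i * x_max / steps)
               for i in range(steps + 1))

x0 = 2.0
for y in (-1.0, 0.0, 1.0):
    assert phi_star_numeric(y) >= -phi(0.0) - 1e-9          # Phi*(y) >= -Phi(0)
    assert phi_star_numeric(y) >= y * x0 - phi(x0) - 1e-9   # Phi*(y) >= y*x0 - Phi(x0)
```

Both lower bounds hold exactly on the grid because the grid contains x = 0 and x = x0, so the supremum is at least the value of xy − Φ(x) at those points.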
In the next step we want to show that with high probability we may restrict simultaneously the minimizations of ψΦ and of the processes Xn to a compact subset of Θ × R. More precisely, let us introduce the sets

S(ψΦ) := { (θ, x) ∈ Θ × R | ψΦ(θ, x) = inf_{θ∈Θ, x∈R} ψΦ(θ, x) },
Sn(ω) := { (θ, x) ∈ Θ × R | Xn(ω, θ, x) = inf_{θ∈Θ, x∈R} Xn(ω, θ, x) }   (n ∈ N, ω ∈ Ω).
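On a finite grid the minimizer set Sn(ω) can be computed directly. The sketch below is illustrative only, reusing the assumed conjugate Φ∗(y) = e^y − 1 from before; it mirrors the definition of Sn(ω) above.

```python
import math

def phi_star(y):
    # Assumed illustrative conjugate Phi*(y) = e^y - 1 (not the paper's Phi).
    return math.exp(y) - 1.0

def S_n(thetas, xs, G, z_sample, tol=1e-12):
    # Minimizer set of the empirical objective X_n over a finite grid of (theta, x).
    def obj(theta, x):
        return sum(phi_star(G(theta, z) + x) - x for z in z_sample) / len(z_sample)
    values = {(t, x): obj(t, x) for t in thetas for x in xs}
    best = min(values.values())
    return {pair for pair, v in values.items() if v <= best + tol}

# With G identically 0 the objective reduces to e^x - 1 - x, minimized at x = 0:
assert S_n([0.0], [-1.0, 0.0, 1.0], lambda t, z: 0.0, [0.0]) == {(0.0, 0.0)}
```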
Theorem 5.7 Let (A 1), (A 2') be fulfilled. If G(·, z) is lower semicontinuous for z ∈ Rd, then Sn(ω) is nonvoid for n ∈ N and ω ∈ Ω. Moreover,

Sn(ω) ⊆ Θ × [xl(x0, ξ, δ), xu(x0, ξ, δ)]   for n ∈ N, δ > 0, ω ∈ Aξ_{n,δ},

where xl(x0, ξ, δ), xu(x0, ξ, δ) are defined by (4.4) and (4.5) respectively, and Aξ_{n,δ} ∈ F is as in the display of Proposition 4.2.

Proof Since Φ∗ is nondecreasing, we may observe by (A 2')

sup_{θ∈Θ} inf_{x∈R} Xn(·, θ, x) ≤ sup_{θ∈Θ} (1/n) Σ_{j=1}^n Φ∗(G(θ, Zj)) ≤ (1/n) Σ_{j=1}^n Φ∗(ξ(Zj)).

Then, in view of Lemma 5.6, we obtain for every ω ∈ Ω

inf_{θ∈Θ} Xn(ω, θ, x) > sup_{θ∈Θ} inf_{x∈R} Xn(·, θ, x)   for x ∈ R \ [an(ω), bn(ω)],

where

an(ω) := −Φ(0) − (1/n) Σ_{j=1}^n Φ∗(ξ(Zj(ω))),
bn(ω) := [ Φ(x0) + (1/n) Σ_{j=1}^n Φ∗(ξ(Zj(ω))) + (x0/n) Σ_{j=1}^n ξ(Zj(ω)) ] / (x0 − 1).

This means

inf_{θ∈Θ, x∈R} Xn(ω, θ, x) = inf_{θ∈Θ, x∈[an(ω), bn(ω)]} Xn(ω, θ, x)  and  Sn(ω) ⊆ Θ × [an(ω), bn(ω)]   (5.15)

for n ∈ N, ω ∈ Ω. Since G(·, z) is lower semicontinuous for z ∈ Rd, and since Φ∗ is nondecreasing as well as continuous, the mapping Xn(ω, ·, ·) is lower semicontinuous on the compact set Θ × [an(ω), bn(ω)] for ω ∈ Ω. As a consequence, Sn(ω) is nonvoid for n ∈ N and ω ∈ Ω. We may also conclude from (5.15) that Sn(ω) is contained in the set Θ × [xl(x0, ξ, δ), xu(x0, ξ, δ)] if ω ∈ Aξ_{n,δ}. The proof is complete.

✷
We may also derive compactness of the set S(ψΦ) of minimizers of ψΦ.

Lemma 5.8 Let (A 1), (A 2') be fulfilled, and let G(·, z) be lower semicontinuous for z ∈ Rd. Then the mapping ψΦ is lower semicontinuous, and the set S(ψΦ) is nonvoid and compact, satisfying

S(ψΦ) ⊆ Θ × [xl(x0, ξ, δ), xu(x0, ξ, δ)]   for δ > 0.

Proof First of all, by (A 2') along with monotonicity of Φ∗ we may observe

sup_{θ∈Θ} inf_{x∈R} ψΦ(θ, x) ≤ sup_{θ∈Θ} E[Φ∗(G(θ, Z1))] ≤ E[Φ∗(ξ(Z1))].

Then, in view of Lemma 5.6, we may conclude that ψΦ(θ, x) > inf ψΦ if

x < −Φ(0) − E[Φ∗(ξ(Z1))]   or   x > ( E[Φ∗(ξ(Z1))] + x0 E[ξ(Z1)] + Φ(x0) ) / (x0 − 1).

Hence ψΦ and its restriction to Θ × [xl(x0, ξ, δ), xu(x0, ξ, δ)] have the same infimal value, and

S(ψΦ) ⊆ Θ × [xl(x0, ξ, δ), xu(x0, ξ, δ)]   for δ > 0.

Lower semicontinuity of G in θ implies that (θ, x) ↦ Φ∗(G(θ, Z1(ω)) + x) + x is a lower semicontinuous mapping on Θ × R for any ω ∈ Ω because Φ∗ is nondecreasing and continuous. In addition, by definition of Φ∗ we obtain for any η > 0 and ω ∈ Ω

inf_{θ∈Θ, |x|≤η} [ Φ∗(G(θ, Z1(ω)) + x) − x ] ≥ inf_{|x|≤η} [ −Φ(0) − x ] ≥ −Φ(0) − η.

Then an easy exercise with Fatou's Lemma shows that ψΦ is lower semicontinuous. Hence, by compactness of Θ × [xl(x0, ξ, 1), xu(x0, ξ, 1)], the set S(ψΦ) is a nonvoid compact subset of Rm+1. This completes the proof.

✷
Now we are ready to show Proposition 4.2.

Proof of Proposition 4.2:
Recall the representation of the genuine optimization problem (4.1) via Theorem 4.1, and the representation of the problem associated with the SAA by (4.2). Then the entire statement of Proposition 4.2 may be derived easily from Theorem 5.7 along with Lemma 5.8.

✷
Let us turn over to the proof of Lemma 4.3.

Proof of Lemma 4.3:
Since Φ∗ is convex, its right-sided derivative Φ∗′₊ is nondecreasing. Then the inequality |Φ∗(x) − Φ∗(y)| ≤ Φ∗′₊(x ∨ y)|x − y| holds for x, y ∈ R. In particular this yields |Φ∗(x)| ≤ Φ∗′₊(x⁺)|x| for x ∈ R because Φ∗(0) = 0. Hence we may observe

|GΦ((θ, x), z)| ≤ [Φ∗′₊(ξ(z) + sup I) + 1](ξ(z) + sup I) ≤ C^{FΘ}_{Φ,I}(z)   for (θ, x) ∈ Θ × I, z ∈ Rd,

and

|GΦ((θ, x), z) − GΦ((ϑ, y), z)|² ≤ 4[Φ∗′₊(ξ(z) + sup I) + 1]² |G(θ, z) − G(ϑ, z)|² + 4[Φ∗′₊(ξ(z) + sup I) + 1]² |x − y|²

for (θ, x), (ϑ, y) ∈ Θ × I and z ∈ Rd. So firstly, C^{FΘ}_{Φ,I} is a positive envelope of F^Θ_{Φ,I}. Secondly, we may invoke Theorem 2.10.20 from [26] to conclude

J(F^Θ_{Φ,I}, C^{FΘ}_{Φ,I}, δ)
  ≤ √(2 ln 2) δ + √2 ∫₀^δ sup_{Q∈Mfin} √(ln N(ε ∥C^{FΘ}_{Φ,I}∥_{Q,2}, F^Θ_{Φ,I}, L2(Q))) dε
  ≤ √(2 ln 2) δ + √2 J(FΘ, ξ, δ) + √2 ∫₀^δ √(ln N(η sup I, I, | · |)) dη   for δ > 0,

where N(η · sup I, I, | · |) denotes the minimal number needed to cover I by intervals of the form [xi − η · sup I, xi + η · sup I] with xi ∈ I. Since N(η · sup I, I, | · |) ≤ (sup I − inf I)/(η · sup I) holds for η > 0, and since sup I − inf I ≤ 2 sup I, we obtain via the change of variable formula

∫₀^δ √(ln N(η · sup I, I, | · |)) dη ≤ δ ∫₀^1 √(ln((2/δ)/ε)) dε   for δ > 0.

Now, we may finish the proof by applying (2.8) for every δ ∈ ]0, exp(−1)].

✷
Proof of Lemma 4.5:
For n ∈ N, θ ∈ Θ, x ∈ I and (z1, . . . , zn) ∈ Rdn \ Nn we may conclude immediately from (A 3") along with continuity of Φ∗

inf_{(ϑ,y)∈Θ×I∩Q} max_{j∈{1,...,n}} |GΦ((θ, x), zj) − GΦ((ϑ, y), zj)| = 0.

Next, by (A 3") we may find for a fixed (θ, x) ∈ Θ × I some sequence (θn, xn)n∈N in Θ × I ∩ Q such that E[|G(θn, Z1) − G(θ, Z1)|] → 0 and xn → x. In particular, GΦ((θn, xn), Z1) → GΦ((θ, x), Z1) in probability because Φ∗ is continuous. Since Φ∗ is convex and nondecreasing with Φ∗(0) = 0, we may observe |Φ∗(y)| ≤ Φ∗(|y|) for y ∈ R. Hence by (A 2') along with monotonicity of Φ∗ we have for ξ from (A 2')

sup_{n∈N} |GΦ((θn, xn), Z1)| ≤ Φ∗( ξ(Z1) + sup_{n∈N} |xn| ) + sup_{n∈N} |xn|.

Hence by (A 2') again the random variables GΦ((θn, xn), Z1) are dominated by some integrable random variable. Then an application of Vitali's theorem (see e.g. [2, Theorem 21.4]) yields E[|GΦ((θn, xn), Z1) − GΦ((θ, x), Z1)|] → 0. This completes the proof.

✷
References

[1] Bartl, D. and Tangpi, L. (2020). Non-asymptotic rates for the estimation of risk measures. arXiv:2003.10479.
[2] Bauer, H. (2001). Measure and integration theory. de Gruyter, Berlin.
[3] Belomestny, D. and Krätschmer, V. (2016). Optimal stopping under model uncertainty: A randomized stopping times approach. Annals of Applied Probability 26, 1260–1295.
[4] Ben-Tal, A. and Teboulle, M. (1987). Penalty functions and duality in stochastic programming via φ-divergence functionals. Math. Oper. Research 12, 224–240.
[5] Ben-Tal, A. and Teboulle, M. (2007). An old-new concept of convex risk measures: the optimized certainty equivalent. Math. Finance 17, 449–476.
[6] Blair, C. E. and Jeroslow, R. G. (1977). The value function of a mixed integer program: I. Discrete Mathematics 19, 121–138.
[7] Chernozhukov, V., Chetverikov, D. and Kato, K. (2014). Gaussian approximation of suprema of empirical processes. Annals of Statistics 42, 1564–1597.
[8] Dentcheva, D., Penev, S. and Ruszczynski, A. (2017). Statistical estimation of composite risk functionals and risk optimization problems. Annals of the Institute of Statistical Mathematics 69, 737–760.
[9] Dudley, R. M. (1999). Uniform central limit theorems. Cambridge University Press, Cambridge.
[10] Edgar, G. A. and Sucheston, L. (1992). Stopping times and directed processes. Cambridge University Press, Cambridge.
[11] Föllmer, H. and Schied, A. (2011). Stochastic Finance. de Gruyter, Berlin, New York (3rd ed.).
[12] Giné, E. and Nickl, R. (2016). Mathematical Foundations of Infinite-Dimensional Statistical Models. Cambridge University Press, Cambridge.
[13] Guigues, V., Krätschmer, V. and Shapiro, A. (2018). A central limit theorem and hypotheses testing for risk-averse stochastic programs. SIAM J. Optim. 28, 1337–1366.
[14] Haussler, D. (1995). Sphere packing numbers for subsets of the Boolean n-cube with bounded Vapnik-Chervonenkis dimension. Journal of Combinatorial Theory, Series A 69, 217–232.
[15] Kaas, R., Goovaerts, M., Dhaene, J. and Denuit, M. (2008). Modern Actuarial Risk Theory. Springer, Berlin and Heidelberg (2nd ed.).
[16] Kaina, M. and Rüschendorf, L. (2009). On convex risk measures on Lp-spaces. Math. Methods Oper. Res. 69, 475–495.
[17] Kosorok, M. R. (2008). Introduction to empirical processes and semiparametric inference. Springer, New York.
[18] McNeil, A., Frey, R. and Embrechts, P. (2005). Quantitative Risk Management. Princeton University Press, Princeton.
[19] Petrov, V. V. (1995). Limit theorems of probability theory. Oxford University Press, Oxford.
[20] Pflug, G. Ch. and Römisch, W. (2007). Modeling, Measuring and Managing Risk. World Scientific, Singapore.
[21] Rüschendorf, L. (2013). Mathematical risk analysis. Springer, Berlin, Heidelberg.
[22] Shapiro, A. (2013). Consistency of sample estimates of risk averse stochastic programs. Journal of Applied Probability 50, 533–541.
[23] Shapiro, A., Dentcheva, D. and Ruszczynski, A. (2014). Lectures on stochastic programming. MOS-SIAM Ser. Optim., Philadelphia (2nd ed.).
[24] Talagrand, M. (1994). Sharper bounds for Gaussian and empirical processes. Annals of Statistics 22, 28–76.
[25] van de Geer, S. (2000). Empirical processes in M-estimation. Cambridge University Press, Cambridge.
[26] van der Vaart, A. W. and Wellner, J. A. (1996). Weak convergence and empirical processes. Springer, New York.

33
CtE5T4oBgHgl3EQfTw_D/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

E9E1T4oBgHgl3EQfqgWQ/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c41123f799ecd25e80641aa1f0e1d279c9fb06ed815e76a6084cc87417261cd
+ size 260818

EtAyT4oBgHgl3EQf4_op/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26914cbdbd5c0af2d8293f4bc8cece516b953a45d03ff52e88c72bc6e6721eef
+ size 293614

FNFRT4oBgHgl3EQfCDcd/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d7c2116ae843b7153e252c6fba2dadaa4c2c3f1c5a5fe5b8f17d700bf7e7c28
+ size 73513

GdFLT4oBgHgl3EQfGy_B/content/tmp_files/2301.11994v1.pdf.txt ADDED
@@ -0,0 +1,868 @@
Gender and Prestige Bias in Coronavirus News Reporting

Rebecca Dorn
USC Information Science Institute
Marina del Rey, CA, USA

Yiwen Ma
USC Information Science Institute
Marina del Rey, CA, USA

Fred Morstatter
USC Information Science Institute
Marina del Rey, CA, USA

Kristina Lerman
USC Information Science Institute
Marina del Rey, CA, USA
ABSTRACT
Journalists play a vital role in surfacing issues of societal importance, but their choices of what to highlight and who to interview are influenced by societal biases. In this work, we use natural language processing tools to measure these biases in a large corpus of news articles about the Covid-19 pandemic. Specifically, we identify when experts are quoted in news and extract their names and institutional affiliations. We enrich the data by classifying each expert's gender, the type of organization they belong to, and for academic institutions, their ranking. Our analysis reveals disparities in the representation of experts in news. We find a substantial gender gap, where men are quoted three times more than women. The gender gap varies by partisanship of the news source, with conservative media exhibiting greater gender bias. We also identify academic prestige bias, where journalists turn to experts from highly-ranked academic institutions more than experts from less prestigious institutions, even if the latter group has more public health expertise. Liberal news sources exhibit slightly more prestige bias than conservative sources. Equality of representation is essential to enable voices from all groups to be heard. By auditing bias, our methods help identify blind spots in news coverage.

CCS CONCEPTS
Information systems → Data mining, Social networks; Computing methodologies → Natural language processing

KEYWORDS
gender bias; prestige bias; ideological bias; news reporting; expert sources; named entity recognition; dependency parsing
1 INTRODUCTION
In times of crisis people turn to news media for information and to make sense of the world; journalists, in turn, seek out experts and opinion leaders to interview and then help communicate their knowledge to the public. Mass media does not simply convey information to the public but also shapes what is seen and what is deemed important [12]. The interplay between mass media and the public creates a cycle that amplifies attention to concerns and influences public policy. Given the media's role in identifying issues of societal importance, it is therefore critical that it equitably reflects the interests of all stakeholders.

Representation of groups and individual social identity in the media is one of the fundamental questions of equity. Does the media adequately represent issues that are important to women, ethnic minorities, the elderly, and the disadvantaged? Does it capture the lived experience of these groups, the challenges they face? Or does it focus on the concerns of the privileged few? One mechanism for improving equity is to ensure that the pool of journalists and reporters reflects society's diversity. However, journalists are predominantly men and often choose to interview subjects whose gender identity matches their own [11].

Another mechanism to improve equity is to diversify the pool of subjects that journalists pay attention to. For example, by talking to women, journalists will surface their views and concerns. This is important, because women typically bear a larger share of care responsibilities, and their concerns may bring up issues with childcare, for instance, that may not be visible to men. Moreover, if journalists solely focus on sources from the same few prestigious academic institutions, they lose the geographic and socio-economic diversity that comes from interviewing experts from a range of institutions. This may introduce additional blind spots in news coverage.

Auditing gender representation in the news, or the representation of other identities, has proven difficult due to the challenges of extracting representations from the text of the news stories. Previous studies have identified gender bias in news reporting [23]; however, they have generally relied on manually curated data or were limited to certain media types, and thus do not scale to the size of the media ecosystem. Addressing the question of bias in the news media at scale calls for automated methods. In this study we use natural language processing (NLP) methods to automate media analysis, which enables us to scale our bias audit of news across longer time periods and across more media sources. We focus on gender and academic prestige bias in the coverage of the Covid-19 pandemic. When the novel coronavirus emerged, little was known about the severity of the disease it caused, what mitigations were effective, and their benefits and costs. As researchers learned more about the disease, public officials used these findings as a basis for policy recommendations. Journalists sought out experts from the research community and government agencies to communicate the research findings, policy recommendations, and their trade-offs to the public. We analyze thousands of news stories from six popular media sources along the breadth of the US political spectrum to identify the experts the journalists turned to. We analyze three left-leaning news sources and three right-leaning sources to enable analysis by partisan bias and accommodate a variety of linguistic styles.

Our analysis reveals a gender gap in news coverage where women appear much less frequently among the experts quoted by journalists than men.

arXiv:2301.11994v1 [cs.SI] 27 Jan 2023

The gender gap varies by political ideology of the news source, with liberal media coming closer to gender parity than conservative media. In addition to gender, we look at the institutional affiliations of the experts and classify their academic prestige. We identify prestige bias, in which experts from the higher-ranked academic institutions are quoted more frequently than experts with less prestigious affiliations. We find that prestige bias varies slightly by ideology of the reporting source.

One possible explanation for the observed bias is that women are a minority in science and medicine. However, women make up the majority of doctoral students and junior faculty in public health and biomedical sciences [19], both of which are fields relevant to the Covid-19 pandemic. Graduate-level public health degrees have been awarded to more women than men since 1979, with 73% of such degrees awarded to women in 2017 [9]. Therefore, the gender disparity we observe is likely not due to a shortage of experts but due to individual biases of reporters and media sources.

Our analysis of the gender and prestige of experts quoted in the news during the Covid-19 pandemic answers the following research questions:

• Gender Bias: Are women underrepresented among experts whom journalists turn to for information about the pandemic?
• Ideological Gender Bias: Does the gender gap vary by ideological leaning of news source?
• Prestige Bias: Is there media preference for experts from highly ranked institutions?
• Ideological Prestige Bias: Does the prestige gap change with political leaning of news outlet?
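Answering these questions requires first locating quoted experts in raw article text. The following is a deliberately minimal heuristic sketch of quote attribution, not the pipeline used in this paper; the pattern and the example sentence are invented for illustration.

```python
import re

# Toy pattern: a direct quote followed by 'said' and a capitalized name.
QUOTE_RE = re.compile(r'"[^"]+"\s+said\s+((?:[A-Z][a-z]+\s?)+)')

def quoted_speakers(text):
    # Return the names this heuristic attributes direct quotes to.
    return [name.strip() for name in QUOTE_RE.findall(text)]

example = '"Vaccines remain effective," said Jane Smith of State University.'
assert quoted_speakers(example) == ["Jane Smith"]
```

Real systems such as The Gender Gap Tracker [2] combine syntactic, heuristic, and floating-quote approaches and are far more robust than this single regular expression.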
2 RELATED WORK
There has been work analyzing the gender composition of experts in television news. Scott et al. discovered that from September 25 to October 6, 2006 and May 14 to May 25, 2007, 14.7% of people featured in PBS NewsHour were women [20]. The authors also found that 13.7% of experts had academic affiliations, 4.3% came from think tanks, and 42.9% had governmental affiliations.

The role of gender in international news media use of non-coronavirus-specific experts has been documented. Niemi et al. found that less than 30% of experts interviewed in Finnish news journalism are women [15]. Lidia Mañoso Pacheco found a high correlation between journalist and subject gender in 68 British and English newspaper articles [11]. Kitzinger et al. analyzed 51 in-depth profiles of men and women scientists and found that 5 men are used for every 1 woman scientist [8].

Only manual analyses of American coronavirus news experts exist. Fletcher et al. [3] reviewed a total of 4,463 articles from 9 U.S. news sources dating April 1, 2020 to April 15, 2020 and found 35.9% of the 2,297 experts were women. In a special report from Luba Kassova that looked at the frequency of men and women in 2,100 quotes between March 1, 2020 and April 15, 2020, men were quoted three times as much as women [6]. Kassova additionally found that women are less likely to be protagonists in news stories and more likely to provide subjective views over expertise.

Large-scale analyses of North American news experts exist, though not specific to coronavirus. Asr et al. introduced a tool for large-scale gender analysis of news quotes in The Gender Gap Tracker [2], which takes a sentence and returns the people quoted and mentioned along with their inferred gender identities. Methods of extraction include syntactic, heuristic and floating-quote approaches. The software is illustrated on seven Canadian news outlets, where the authors found that men are represented three times as much as women from October 2018 to September 2020.

Large-scale tools have been used to analyze the difference in how men and women are featured in the news. LDA topic modelling is performed on two years' worth of American and Canadian news articles by Rao et al. [17]. Persons quoted and their genders are gathered using The Gender Gap Tracker. Contrary to our results, the authors found that women are more represented in articles related to healthcare. An analysis of gender, fame and sentiment is done by Shor et al. [22]. The dataset used combines 14 million persons mentioned throughout 1,323 news outlets with a manual analysis of select Wikipedia pages. The authors looked at sentiment scores for adjectives used with each person, and found that as women become more famous the media attention received becomes increasingly negative. Separately, Shor et al. analyzed gender and public interest while controlling for occupation and age [21]. The authors looked at over 20,000 persons from over 2,000 news sources. They found that when men and women have similar occupations and ages, women obtain higher public interest but less media coverage.

One of the most frequently observed forms of social learning is where people observe and mimic seemingly competent and therefore admirable individuals [5]. Jimenez et al. explained how first-order cues of prestige (initially observable traits) are used to assume prestige when quality information is lacking, though these cues may be wrong and deceptive [5]. Additionally, upward mobility in academia is limited. In a survey of n = 348 universities, 20% of faculty positions are held by graduates of just 8 universities [25]. The same survey found that only 5% to 23% of faculty members from United States universities hold doctorates from less prestigious institutions, and that 64% of universities have no departments listed as top 10 [25].
3 METHODS

3.1 Data

The AYLIEN Coronavirus Dataset consists of 1,673,353 news articles related to the Coronavirus pandemic collected from over 440 international news sources. The data is aggregated, analyzed, and enriched by AYLIEN using AYLIEN's News Intelligence Platform1. We use the article attributes raw article text, article title, news source name, and publication date and time. We analyze AYLIEN Coronavirus-related news articles from six US-based news sources: Huffington Post (HUFF), Cable News Network (CNN), The New York Times (NYT), The New York Post (NYP), Fox News (FOX), and Breitbart News Network (BREIT) between January 6, 2020, and July 31, 2020. These six news outlets are chosen because they collectively exemplify an ideological spectrum in news reporting while all having some partisan bias. This allows us to separate news outlets into two distinct groups. Additionally, having six news outlets ensures we cover a variety of linguistic styles. This subset totals 66,368 articles: 9,897 articles from the New York Times, 17,765 from CNN, 19,911 from Fox News, 7,609 from Breitbart, 13,391 from the New York Post, and 6,625 from the Huffington Post.

1 https://aylien.com/resources/datasets/coronavirus-dataset
|
221 |
+
3.2 Expert Quote Extraction

Fig. 1 shows an example of how journalists quote experts using three different sentence structures. The components of interest are reported speech, reporting verb, person, and organization. Reported speech (RSPEECH) directly quotes or indirectly reconstructs the words of the speaker. A reporting verb (RVERB) introduces or concludes reported speech (e.g., "report", "acclaim", "told"). The person is the speaker being quoted. An organization is the institution associated with the speaker. We consider expert quotes to be any permutation of these components. We find sentences quoting experts by taking the union of two approaches:

3.2.1 Named Entity Recognition (NER). The three most common reporting verbs are "said", "say" and "says". The most common pattern quoting experts is:

"[RSPEECH]," ("said"|"say"|"says") [PERSON]

where | denotes logical or and [PERSON] denotes the speaker. This pattern is captured using the following regular expression:

"\s([a-zA-Z0-9?',.\s()])*\s,"(said|say|says)([a-zA-Z0-9?',\s()])*

The NLP library SpaCy offers an NER model pretrained on web text with entity labels including person, organization, date, and location [4]. We use SpaCy's NER on sentences following this pattern and look for PERSON entities listed outside of quotation marks.
246 |
+
3.2.2
|
247 |
+
The Gender Gap Tracker. The second method we use to
|
248 |
+
find speakers is that of The Gender Gap Tracker Project [2]. The
|
249 |
+
syntactic method from The Gender Gap Tracker identifies quotes
|
250 |
+
following a clausal complement structure, where a dependent verb
|
251 |
+
is featured with an internal subject. Sentences following this struc-
|
252 |
+
ture are only kept if they feature one of 262 reporting verbs. The
|
253 |
+
second Gender Gap Tracker method we utilize identifies reported
|
254 |
+
speech introduced directly before or after the reporting verb “ac-
|
255 |
+
cording to.” Due to the difficulty in finding affiliated organizations,
|
256 |
+
we choose to omit the floating quote method which finds sentences
|
257 |
+
where reported speech takes a full sentence and the speaker is
|
258 |
+
introduced elsewhere.
|
259 |
+
When an expert is quoted in a news article, the journalist typi-
|
260 |
+
cally introduces the expert, specifying their position and affiliation.
|
261 |
+
To help focus our data collection only on expert speakers, we require
|
262 |
+
speakers to be present alongside an organizational affiliation. On
|
263 |
+
all sentences collected, we run NER and retain only those sentences
|
264 |
+
where NER identifies an organization (ORG entity).
3.3 Classifying Gender

The Python library gender-guesser implements a gender prediction program built on a database of 45,376 names with each name's most likely gender identity [1]. The possible gender predictions for a single person are "male", "female", "andy" (androgynous), and "unknown". For each person quoted, we run gender-guesser on the first string before a space (i.e., the first name) to obtain that name's most common gender association [18].
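This lookup can be sketched as follows; the three-entry dictionary here is a stand-in for gender-guesser's 45,376-name database, so the entries and example names are illustrative only:

```python
# Tiny stand-in for the gender-guesser name database (illustrative only;
# the real library ships 45,376 names with the same label set).
NAME_GENDERS = {
    "anthony": "male",
    "deborah": "female",
    "jamie": "andy",  # androgynous
}

def guess_gender(full_name: str) -> str:
    """Mimic the procedure in the text: take the first whitespace-separated
    token of the speaker's name and look up its most common gender
    association, defaulting to "unknown"."""
    first = full_name.strip().split()[0].lower()
    return NAME_GENDERS.get(first, "unknown")
```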
The gender labels include "male" and "female", though they would be more accurately described as man/masculine and woman/feminine. We acknowledge that gender is non-binary and not captured by a person's first name. Classifying names by their common gender affiliation captures reader perception of gender, not the expert speakers' actual gender identification. The discussion section further elaborates on the inability of a single androgynous category to adequately capture non-binary and non-cisgender gender identities.
3.4 Classifying Organization Prestige

During the Covid-19 pandemic, scientists, epidemiologists, and public health experts from a variety of organizations worked to define our understanding of the disease and to shape public policy. These experts came from academic institutions (e.g., Brown University), federal bodies (e.g., the Centers for Disease Control and Prevention), and a variety of think tanks (e.g., the Hoover Institution). Journalists turned to these experts for information and guidance to share with the public.

We use fuzzy string matching, a mechanism that generates similarity scores between two strings, to determine whether organization affiliations reference academic institutions, federal bodies, or think tanks. For example, fuzzy string matching finds that "The University of Maryland - College Park" matches "The University of Maryland" with a score of 90. Journalists typically introduce organizations with their full names, thus we do not accommodate organization abbreviations.
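A minimal stand-in for this kind of 0-100 similarity scoring, built only on the stdlib's `difflib` (the text does not name the exact scorer used, so this is an approximation in the partial-match style the example implies), might look like:

```python
from difflib import SequenceMatcher

def partial_similarity(a: str, b: str) -> int:
    """Score 0-100: best match of the shorter string against any
    equal-length window of the longer one. A rough stand-in for
    fuzzy partial-ratio style scoring, so that a name contained
    verbatim inside a longer name scores near 100."""
    short, long_ = sorted((a.lower(), b.lower()), key=len)
    window = len(short)
    best = max(
        SequenceMatcher(None, short, long_[i:i + window]).ratio()
        for i in range(len(long_) - window + 1)
    )
    return round(best * 100)
```

With this scorer, the College Park example above clears the paper's match threshold of 90 while unrelated organization names fall well below it.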
3.4.1 Academic Institutions. We use Times Higher Education's 2015 World University Rankings2. This list gives 400 university names as well as their rankings. Rankings are determined by factors including teaching, research, citations, industry income, and international outlook.

3.4.2 Federal Bodies. We compile a list of federal bodies by web scraping the U.S. Government Services and Information's index of Federal Departments and Agencies3. This list includes only federal agencies, and therefore nothing at the state level.

3.4.3 Think Tanks. One of the most popular think tank definitions is by McGann and Weaver: "non-governmental, not-for-profit research organisations with substantial organisational autonomy from government and from societal interests such as firms, interest groups, and political parties" [16, 26]. Think tanks frequently focus on public policy. We use the open-source database On Think Tanks4, which includes over 3,200 global think tanks and provides fields including region, topic, website, and office address.

For each sentence, we measure the similarity between the NER-identified organization and the organization names listed in these databases. We manually review a sample of NER-extracted organizations, the most closely matching organization name, and the distance metric calculated for the two strings. For all three databases, we consider a match if the similarity score is greater than or equal to 90. To minimize noise, organizations whose names consist of two or fewer characters are ignored. We sample 25 random organizations of two or fewer characters to ensure minimal impact; we find that the most common two-character string is "'s", followed closely by the strings "'m" and "AP".

2 https://www.timeshighereducation.com/world-university-rankings/2016/world-ranking/methodology
3 https://www.usa.gov/federal-agencies
4 https://onthinktanks.org/open-think-tank-directory/

Figure 1: Examples of Expert Quotes. Examples capture three varieties of quote structure. RSPEECH (Reported Speech) is the portion of the quote containing an exact quote or a reconstruction of what the speaker previously said. RVERB (Reporting Verb) refers to the verb introducing or concluding reported speech ("say", "said", "explains", etc.). PERSON refers to the speaker of the (reported) quote. ORG refers to the organization affiliated with the speaker. Quotes are considered expert quotes if they contain RSPEECH, RVERB, PERSON, and ORG. We consider a sentence as containing both RSPEECH and RVERB if it contains one of 262 Reporting Verbs, as a Reporting Verb implies the presence of Reported Speech. We use Named Entity Recognition (NER) to determine whether a sentence features a PERSON and ORG.

4 RESULTS
We extract 89,130 expert sources (pairs of speakers and their affiliated organizations): 19,137 pairs from HUFF, 17,156 from CNN, 18,828 from NYT, 4,129 from NYP, 22,226 from FOX, and 7,654 from BREIT. The Gender Gap Tracker accounts for 26.7% of these extractions, and the Named Entity Recognition-based method for the rest. Our methods improve the number of extractions by 65,263 pairs. The scale increase from adding our method helps promote accuracy and efficiency in studies of inequality.

For precision evaluation, we run our method on 100 randomly sampled articles and manually annotate each extraction. Extractions are labeled correct if they contain RSPEECH from a PERSON with an ORG affiliation. The precision on this sample is 64.7%. The method most commonly fails on instances where the ORG is the news outlet rather than a professional affiliation. For example, in '"The government took a very important step, but they waited too long for this decision," Dr. Jose Luis Vargas Segura, a pulmonologist, told Fox News.' the method finds Fox News as the affiliated ORG. We also sample 100 academic extractions, labeling whether the instance contains RSPEECH, a PERSON, and their affiliated university. The accuracy here is much higher, at 87%.

4.1 Gender Bias
36.8% of extracted speakers have no identifiable gender in gender-guesser. To reduce unknown genders, we take the union of each news outlet's 25 most frequently mentioned people with unknown gender and manually label the gender where the person is recognizable. Most of the names are easily identifiable public figures (e.g., "Trump", "Biden", and "Cuomo"). After this procedure, 26.4% of extracted sentences have no persons with an identifiable gender.

The majority of androgynous names are Asian names popular both as first and last names. We look at the 25 most frequent names with androgynous labels and manually label their gender, if known. We find that the androgynous category captures a subset of non-gender-identifying names more than truly androgynous ones, so we merge the androgynous and unknown gender categories.
Figure 2: Gender bias in news. Percentage of men and women in all identified expert quotes. We show the composition in total mentions (speakers counted each time they are referenced) and unique mentions (speakers counted once over all mentions). Unique mentions are determined by checking whether each expert's name has a string similarity (via fuzzy string matching) score of 90 or higher to previously mentioned experts. Men are overrepresented in both total and unique mentions. The stronger affinity towards men in total mentions demonstrates that journalists quote the same men repeatedly.

Figure 2 breaks down experts quoted in the news by gender. The 26.4% of instances with unknown gender are omitted to better grasp the immediate disparity between men and women. The left plot represents the total mentions of all individuals by gender: women represent 24% of all mentions of experts in the news. To identify unique experts, we iterate through all experts while maintaining a list of previously quoted people. For each name, we check whether the person quoted fuzzy string matches to anyone previously quoted with a score of 90 or more. The right pie chart in Fig. 2 shows the gender breakdown of unique experts, where experts are counted once over all mentions. Women's representation improves with unique mentions at 31%. However, this still shows that women
are under-represented in the news, considering that the fields of epidemiology, bio-medicine, and public health, all relevant to the pandemic, have achieved gender parity (or better) [9, 19]. Instead, the news media turns to the same group of male experts. The over-representation of men reinforces the idea that science requires traditionally masculine traits and denies fair coverage (and therefore career advancement opportunities) to women.

Sentences quoting men have on average 240 characters, and those quoting women an average of 236 characters. This difference is significant under a two-sided t-test (p < 0.01). We also observe that 4.6% of sentences featuring an expert woman also feature an expert man, while only 1.3% of sentences featuring an expert man also feature an expert woman.
Figure 3: Gender Composition by Organization. Gender distribution separated by type of organization. Quotes are matched to organization types by fuzzy string matching to databases of organization names (Times Higher Education's 2015 World University Rankings, the index of Federal Departments and Agencies, and On Think Tanks). Error bars are determined by bootstrapping 1,000 times. All organization types exhibit gender bias, with federal bodies containing the lowest proportion of women.

4.2 Ideological Bias

Out of all our extractions, 27.6% have an organization matching our academic, federal, and think tank databases. Analysis of the organizational breakdown reveals journalists are most likely to reach out to experts affiliated with federal agencies (60.5%), then academic institutions (21.6%), and think tanks (17.9%). One possible explanation is that federal agencies make recommendations for pandemic safety procedures, which are then communicated to the public by reporters.

Fig. 3 shows gender composition by organization type. The bars show average gender representation over 1,000 bootstrapped samples of the data set. The category of unknown gender is included. Experts associated with federal bodies (e.g., CDC, FDA) exhibit the strongest gender disparity, with the lowest percentage of women. Experts from academic institutions manifest less gender disparity, with the highest percentage of women. The lowest percentage of men occurs for experts affiliated with think tanks, which could be due to the high number of persons with "unknown" gender.

Fig. 4 shows how each news outlet distributes attention over experts from academic institutions, federal bodies, and think tanks.

Figure 4: Preferred Organization Type for Expertise. Distribution of organization types affiliated with news sources in expert quotes. Sources are listed from top to bottom by political leaning as reported in Media Bias Fact Check. Across the board, federal bodies are the most common type of expertise, though The New York Times has the lowest proportion. Breitbart News is the only news outlet with higher use of think tanks than academic institutions.
Quotes with unknown organization types are not included. We observe that federal bodies are always the most common sources of expertise. NYT quotes federal experts 40.6% of the time, and all other outlets utilize federally affiliated experts at least 60.8% of the time. Additionally, we observe that right-leaning outlets typically turn to experts from federal agencies more than left-leaning outlets do. Academic institutions are the second most common organization type for experts after federal bodies, except for BREIT and FOX, which utilize academic experts 9.9% and 14% of the time, respectively.
Figure 5: Ideology and Gender Bias. Ratio of women to men experts quoted by a news source. Smaller ratios signal under-representation of women. Error bars are from bootstrapping 1,000 times. Outlets are ordered left to right by political ideology. Left-leaning outlets have the greatest ratio of women cited. The difference in median ratio across news outlets is found significant by the Kruskal-Wallis test (p < 0.01).

Fig. 5 shows gender bias across the ideological spectrum of news outlets, where HUFF, CNN, and NYT are classified as liberal (left-leaning) sources, and NYP, FOX, and BREIT as conservative (right-leaning), as reported in Media Bias Fact Check5.

5 https://www.mediabiasfactcheck.com

The effect of news
outlet ideology on gender representation is measured by the ratio
of the number of women quoted to the number of men. A ratio of 1.0 signifies equal representation of men and women; smaller ratios signal over-representation of men.
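The error bars reported throughout come from bootstrapping; one way that procedure can be sketched for the women-to-men ratio (the function and parameter names here are our own, not the paper's) is:

```python
import random

def bootstrap_ratio(genders, n_boot=1000, seed=0):
    """Bootstrap the women-to-men ratio from per-mention gender labels
    ("female"/"male"): resample the mentions with replacement n_boot
    times and return the mean ratio with a 95% percentile interval."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        sample = [rng.choice(genders) for _ in genders]
        men = sample.count("male")
        if men == 0:
            continue  # skip degenerate resamples with no men
        ratios.append(sample.count("female") / men)
    ratios.sort()
    mean = sum(ratios) / len(ratios)
    lo = ratios[int(0.025 * len(ratios))]
    hi = ratios[int(0.975 * len(ratios)) - 1]
    return mean, (lo, hi)
```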
All news sources exhibit over-representation of men, with ratios of at most 0.387. BREIT has the largest gender disparity with a ratio of 0.264, and NYT has the least gender disparity with the share of women experts at 0.387. We use the Kruskal-Wallis H-test to compare the medians of the share of women experts for left-leaning and right-leaning outlets (pictured in blue and red, respectively, in Fig. 5). The Kruskal-Wallis test reports a statistic of 8.547 (p < 0.01), signifying a statistically significant moderate effect. We conclude that left-leaning news outlets exhibit less gender disparity than right-leaning outlets.
4.3 Prestige Bias

Figure 6: Prestige Bias. Number of mentions of an academic institution in the news as a function of its ranking (for institutions ranked by the Times Higher Education's World Rankings) shows journalists pay more attention to higher-ranking institutions. Lower rankings signal higher prestige.

We now take a closer look at experts from academic institutions. Fig. 6 shows the number of times an academic institution is mentioned in the news as a function of its placement in the Times Higher Education's World Rankings. Spearman correlation measures monotonicity between two variables and scores between -1 and 1 (0 means no correlation). The scatter plot shows a downward trend, with a Spearman coefficient of -0.379 (p < 0.01), indicating that more prestigious (higher-ranked) institutions generally receive more mentions in the news than less prestigious (lower-ranked) institutions.
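For reference, Spearman correlation is the Pearson correlation of the ranks; a minimal sketch (ignoring tie handling, which a production implementation would average over) is:

```python
def spearman(x, y):
    """Spearman rank correlation: rank both variables, then compute
    the Pearson correlation of the ranks. No tie correction."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A negative coefficient here, as in Fig. 6, means mentions fall as the ranking number rises (i.e., as prestige drops).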
We measure prestige bias using the Gini coefficient, a popular statistical measure of inequality, here applied to attention to academic institutions. A small Gini coefficient means attention (the number of mentions of an institution) is equally distributed across universities of any rank, while a Gini coefficient close to one means one university gets all the attention while the rest receive no mentions. The Gini coefficient of mentions of institutions in our data is 0.568, suggesting the existence of prestige bias: journalists prefer to turn to experts from the same high-ranking institutions again and again.
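A minimal sketch of the Gini computation over per-institution mention counts (using the standard rank-weighted formula on sorted values):

```python
def gini(counts):
    """Gini coefficient of a list of non-negative counts:
    0 = perfectly even attention, values near 1 = one item gets
    nearly all of it."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Rank-weighted sum over ascending values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n
```

Note that for n items the maximum attainable value is (n - 1) / n, which approaches 1 as the number of institutions grows.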
Figure 7: Public Health Ranking and Prestige. Number of academic institution mentions by public health ranking. Among the top 48 public health institutions, only a handful with high prestige are heavily utilized by journalists.

But what if news outlets are turning to prestige within a domain relevant to the pandemic, like public health? For this case, we rank institutions by prestige in the field of public health using the US News ranking of US schools of public health6 in Figure 7. If journalists were seeking out public health experts, we would expect them to pay more attention to experts from these 48 institutions with higher-ranked schools of public health, resulting in a much lower Gini coefficient. However, the Gini coefficient drops only to 0.537, suggesting that prestige bias is driven by extraneous factors such as the institution's "brand name" rather than expertise in the relevant field of public health.
Figure 8: Ideology and Prestige Bias. The boxplot bins the mentions of academic institutions by their rankings, and shows the distributions of the share of mentions of those institutions made by left- and right-leaning news sources. Yellow dots represent group means. Left-leaning news outlets display a stronger preference for experts from prestigious institutions (top-50 ranked universities).

6 https://www.usnews.com/best-graduate-schools/top-health-schools/public-health-rankings
4.3.1
Ideology and Prestige Bias. We analyze the overlap between news outlet ideological leaning and the tendency to mention higher-ranked universities. The boxplot in Fig. 8 shows the distribution of academic expert mentions made by the left-leaning and right-leaning news outlets. The universities with which experts are affiliated are binned by school rank. The boxplot shows the distribution over the share of institution mentions within each bin made by the news sources, with the interquartile range, outliers, and median for each bin's total mentions. The means within each bin are displayed as yellow points. Prestige bias exists at both ends of the ideological spectrum, though left-leaning news outlets display more prestige bias, i.e., a stronger preference for experts from the top-50 academic institutions.

We control for the political orientation of the news outlet when comparing academic institution mentions and rankings. Left-leaning news sources have a Gini coefficient of 0.573 and a Spearman coefficient of -0.439 (p < 0.01). Right-leaning news sources have a Gini coefficient of 0.562 and a Spearman coefficient of -0.317 (p < 0.01). This suggests that journalists from conservative sources divide their attention more evenly across institutions than liberal journalists, though the difference is small.
Figure 9: Gender and Prestige Bias. Cumulative distribution of mentions for the top 100 institutions, broken down by gender. It shows minimal difference in prestige bias between men and women in academia. Roughly one third of quotations come from the top 20 institutions, regardless of gender. Men are overrepresented among the quotations from top-10 institutions.

4.3.2 Gender and Prestige Bias. Next we examine whether prestige bias varies with expert gender. Fig. 9 shows the cumulative distribution of the share of mentions of experts of either gender affiliated with the top-n academic institutions, for n = 5, 10, 15, etc. We observe almost no difference in how men's and women's coverage varies with prestige. For each gender, the top-50 highest-ranked universities account for half of the academic expert mentions (49.6% for women and 50.1% for men). For women, the Gini coefficient of university mentions is 0.56 and the Spearman correlation coefficient between the number of mentions and ranking is -0.409 (p < 0.01). For men, the Gini coefficient is 0.572 and the Spearman coefficient is -0.397 (p < 0.01). This disparity shows that prestige inequality is slightly higher for men than for women.
We expected that women would need to be affiliated with more prestigious institutions to be considered qualified experts. However, we see in Fig. 9 that there is no significant difference in the prestige distribution for men and women. This lack of difference reveals that gender bias is not substantially amplified among expert mentions from highly ranked universities.
|
690 |
+
5
|
691 |
+
DISCUSSION AND CONCLUSION
|
692 |
+
Involving a diverse set of perspectives in the research process en-
|
693 |
+
hances quality of research. However, women make up the minority
|
694 |
+
of faculty in most science departments, especially in the more senior
|
695 |
+
and leadership positions [19]. Additionally, the reward structure of
|
696 |
+
science itself creates disparities through the “Matthew effect” [10],
|
697 |
+
in which highly regarded scientists obtain disproportionate re-
|
698 |
+
sources and become more likely to produce more successful work.
|
699 |
+
We see this in an example where reviewers in a single-blind peer
|
700 |
+
review process are more likely to accept for publication papers from
|
701 |
+
authors from more prestigious universities [24]. The researchers
|
702 |
+
from a few prestigious institutions hold a greater influence in shap-
|
703 |
+
ing scientific research than authors from the less prestigious schools
|
704 |
+
with more diverse populations [14].
|
705 |
+
Our analysis of a large pandemic-related news corpus shows that women are heard from less frequently than men. Women compose 24% of expert mentions, though the representation rises to 31% for unique experts. This suggests that a few men, possibly public figures such as Donald Trump or Andrew Cuomo, are disproportionately represented. Rendering women less visible than men paves the way for women’s concerns, such as reopening childcare centers and schools, to receive less attention from policy makers.
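The gap between the mention-level share (24%) and the unique-expert share (31%) is a simple counting effect; a toy illustration (hypothetical experts and counts, not the paper's data) of how a few repeatedly quoted men depress the mention-level share of women:

```python
# hypothetical (expert, gender) mention log: three men quoted repeatedly,
# three women quoted once or twice
mentions = (
    [("Expert A", "man")] * 6 + [("Expert B", "man")] * 4 +
    [("Expert C", "man")] * 2 +
    [("Expert D", "woman")] * 2 +
    [("Expert E", "woman")] + [("Expert F", "woman")]
)

total = len(mentions)                                    # 16 mentions in all
women_mentions = sum(1 for _, g in mentions if g == "woman")
mention_share = women_mentions / total                   # 4/16 = 25%

unique = set(mentions)                                   # collapse repeat quotes
women_unique = sum(1 for _, g in unique if g == "woman")
unique_share = women_unique / len(unique)                # 3/6 = 50%
```

Collapsing repeats raises the women's share, exactly the direction of the 24% vs. 31% gap reported above.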
We observe two different types of ideological bias. The representation of women, measured by the ratio of women included to men, is always higher in left-leaning sources than in right-leaning ones. Additionally, left-leaning news sources display higher prestige bias than right-leaning ones. All news sources could improve in representation.
We showed that journalists reporting on Covid-19 paid much more attention to experts with more prestigious affiliations. The gender representation we found is starkly different from that of public health, the field one would hope Covid-19 reporting relies upon. When ranking experts by the prestige of their institution in the field of public health, ideally the distribution would be somewhat even. However, we observe only a marginally smaller ranking coefficient. This suggests that journalists are either seeking out irrelevant expertise or wildly misrepresenting the public health field. Journalists have a unique ability to hand-pick their subjects, thereby shaping public perception of who constitutes scientific expertise. By focusing their attention, and the public’s, on the same small group of high-ranked universities, they risk perpetuating the cycle of advantage for the privileged minority. To our knowledge, this is the first large-scale study of prestige bias in news reporting.
Our study has a number of limitations. Gender classification is a major one. It has been shown that Named Entity Recognition has worse performance identifying women’s names as PERSON entities compared to men’s names [13]. As a result, it is likely that our extractions obtained through NER under-represent the number of women in the data set. Another gender-based limitation is that the gender predictor used has a misleading androgynous
category. Rather than capturing names with equitable gender balance or high association with non-binary people, the androgynous category captures popular Asian last names. The gender classifier is based on a dataset built around cisgender people with historically Western names, meaning our study inherently focuses on cisgender people from Western countries. Such exclusion of non-cisgender people in research continues a long legacy of transgender erasure [7].

[Figure: Cumulative share of mentions for women and men across the 100 highest-ranked universities. x-axis: cumulative rank (0 to 100); y-axis: cumulative share of mentions (0% to 70%).]
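A stripped-down mock of name-based gender inference (a hypothetical lookup table, standing in for the gender-guesser package [1] that was actually used) makes the failure mode concrete: names the tool cannot split by gender get lumped into an "andy" (androgynous) bucket, which in practice absorbs many common Asian names:

```python
# Hypothetical lookup table mimicking a name-based gender classifier.
# Real tools such as gender-guesser return one of:
# male, female, mostly_male, mostly_female, andy, unknown.
LOOKUP = {
    "john": "male",
    "mary": "female",
    "kim": "andy",   # common across genders; also a frequent Asian surname
    "jae": "andy",
    "lee": "andy",
}

def classify(first_name):
    # anything absent from the table is simply "unknown"
    return LOOKUP.get(first_name.lower(), "unknown")

sample = ["John", "Mary", "Kim", "Jae", "Lee", "Xochitl"]
labels = [classify(n) for n in sample]
andy_share = labels.count("andy") / len(sample)   # 3/6 here
```

The point of the sketch: an "andy" label does not identify non-binary people; it mostly flags names the training data could not resolve, which biases downstream gender counts.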
Our work can be expanded by auditing the gender and institutional prestige of Coronavirus experts who are active on Twitter. We hope to compare network structure by gender category and see how engagement-increasing behaviors differ by gender. We are also interested in hate-speech analysis of how users interact with scientists of different genders on Twitter. Twitter also gives users opportunities to provide their pronouns, allowing us to look at underrepresentation of the genderqueer community in scientific research and expert positions.
This large-scale analysis of Covid-19 expertise helps us better understand information ecosystems in times of crisis. We observe that men are the dominant sources of expertise, and that a positive feedback loop may occur in news media where men with research success are featured more and therefore are better positioned for further success (and further features in the news media). By automating this analysis, we demonstrate the utility of NLP tools. We hope these findings will help news media more faithfully represent society’s diversity.
ETHICS STATEMENT

This work uses publicly available published news articles from well-known news outlets. Thus, the data set raises few ethical issues around privacy. Ethical concerns around gender inference mechanisms are discussed further in the Discussion and Conclusion section. The code for this paper will be made available on GitHub.
ACKNOWLEDGEMENTS

This work was supported, in part, by the Defense Advanced Research Projects Agency under contract W911NF192027.
REFERENCES

[1] David Arcos, Ferhat Elmas, and Israel Perez. 2016. https://github.com/lead-ratings/gender-guesser. (2016).
[2] Fatemeh Torabi Asr, Mohammad Mazraeh, Alexandre Lopes, Vasundhara Gautam, Junette Gonzales, Prashanth Rao, and Maite Taboada. 2021. The gender gap tracker: Using natural language processing to measure gender bias in media. PloS one 16, 1 (2021), e0245533.
[3] Sarah Fletcher, Moss Bruton Joe, Santanna Hernandez, Inka Toman, Tyrone G Harrison, and Shannon M Ruzycki. 2021. The gender of COVID-19 experts in newspaper articles: a descriptive cross-sectional study. Journal of general internal medicine 36, 4 (2021), 1011–1016.
[4] Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. (2017). To appear.
[5] Ángel V Jiménez and Alex Mesoudi. 2019. Prestige-biased social learning: Current evidence and outstanding questions. Palgrave Communications 5, 1 (2019), 1–12.
[6] Luba Kassova. 2020. The missing perspectives of women in COVID-19 news. A Special Report on Women’s Under-Representation in News Media. New York: Bill and Melinda Gates Foundation (2020).
[7] Os Keyes. 2018. The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on human-computer interaction 2, CSCW (2018), 1–22.
[8] Jenny Kitzinger, Mwenya Diana Chimba, Andy Williams, Joan Haran, and Tammy Boyce. 2008. Gender, stereotypes and expertise in the press: how newspapers represent female and male scientists. (2008).
[9] Jonathon P Leider, Christine M Plepys, Brian C Castrucci, Emily M Burke, and Craig H Blakely. 2018. Trends in the conferral of graduate public health degrees: a triangulated approach. Public Health Reports 133, 6 (2018), 729–737.
[10] Chien Hsiang Liao. 2021. The Matthew effect and the halo effect in research funding. Journal of Informetrics 15, 1 (2021), 101108.
[11] Lidia Mañoso Pacheco. 2019. Gender asymmetries in news reports. Ene 11 (2019), 27.
[12] Maxwell E McCombs and Donald L Shaw. 1972. The agenda-setting function of mass media. Public opinion quarterly 36, 2 (1972), 176–187.
[13] Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, and Aram Galstyan. 2020. Man is to person as woman is to location: Measuring gender bias in named entity recognition. In Proceedings of the 31st ACM Conference on Hypertext and Social Media. 231–232.
[14] Allison C Morgan, Dimitrios J Economou, Samuel F Way, and Aaron Clauset. 2018. Prestige drives epistemic inequality in the diffusion of scientific ideas. EPJ Data Science 7, 1 (2018), 40.
[15] Mari K Niemi and Ville Pitkänen. 2017. Gendered use of experts in the media: Analysis of the gender gap in Finnish news journalism. Public understanding of science 26, 3 (2017), 355–368.
[16] Hartwig Pautz. 2011. Revisiting the think-tank phenomenon. Public policy and administration 26, 4 (2011), 419–435.
[17] Prashanth Rao and Maite Taboada. 2021. Gender bias in the news: A scalable topic modelling and visualization framework. Frontiers in Artificial Intelligence 4 (2021).
[18] Lucía Santamaría and Helena Mihaljević. 2018. Comparison and benchmark of name-to-gender inference services. PeerJ Computer Science 4 (2018), e156.
[19] Enrique F Schisterman, Chandra W Swanson, Ya-Ling Lu, and Sunni L Mumford. 2017. The changing face of epidemiology: gender disparities in citations? Epidemiology (Cambridge, Mass.) 28, 2 (2017), 159.
[20] David K Scott, Mike Chanslor, and Jennifer Dixon. 2010. FAIR and the PBS NewsHour: Assessing diversity and elitism in news sourcing. Communication Quarterly 58, 3 (2010), 319–340.
[21] Eran Shor, Arnout Van De Rijt, and Babak Fotouhi. 2019. A large-scale test of gender bias in the media. Sociological science 6 (2019), 526–550.
[22] Eran Shor, Arnout van de Rijt, and Vivek Kulkarni. 2022. Women Who Break the Glass Ceiling Get a “Paper Cut”: Gender, Fame, and Media Sentiment. Social Problems (2022).
[23] Eran Shor, Arnout Van De Rijt, Alex Miltsov, Vivek Kulkarni, and Steven Skiena. 2015. A paper ceiling: Explaining the persistent underrepresentation of women in printed news. American Sociological Review 80, 5 (2015), 960–984.
[24] I Sverdlichenko, S Xie, and E Margolin. 2022. Impact of institutional affiliation bias on editorial publication decisions: A bibliometric analysis of three ophthalmology journals. Ethics, Medicine and Public Health 21 (2022), 100758.
[25] K Hunter Wapman, Sam Zhang, Aaron Clauset, and Daniel B Larremore. 2022. Quantifying hierarchy and dynamics in US faculty hiring and retention. Nature (2022), 1–8.
[26] R Weaver and James McGann. 2017. Think tanks and civil societies: Catalysts for ideas and action. Routledge.
arXiv:2301.02207v1 [hep-th] 5 Jan 2023

Spinors, Proper Time and Higher-Spin Fields

N.G. Misuna

Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Am Mühlenberg 1, 14476, Potsdam, Germany
Tamm Department of Theoretical Physics, Lebedev Physical Institute, Leninsky prospekt 53, 119991, Moscow, Russia

Abstract

We present a Lagrangian formulation for 4d integer-spin relativistic fields in the 5d space spanned by two conjugate Weyl spinors and a Lorentz-invariant proper-time coordinate. We construct a manifestly Poincaré-invariant free classical action, find a general solution to the equations of motion and a corresponding positive-definite inner product. Our formulation displays a separation of variables: the equations of motion are ODEs in the proper time only, while the spinor coordinates parameterize the Cauchy hypersurface. We also find momentum-eigenstate solutions for massless arbitrary integer-spin fields and for a massive scalar field.

Contents

1 Introduction
2 4d Poincaré algebra and relativistic fields
3 Spin-s representation
4 Free action, e.o.m. and inner product
5 Momentum eigenstates
  5.1 Scalar field
  5.2 Massless fields
6 Conclusion
1 Introduction

Higher-spin (HS) theories represent an important class of models of fundamental interactions. Covariant Lagrangian formulations for free higher-spin fields have been constructed in the massive case by Singh and Hagen [1, 2], and in the massless case by Fronsdal and Fang, both in Minkowski [3, 4] and (A)dS [5, 6] spaces. But it turned out that constructing consistent interactions for massless HS fields, which is the problem of most interest, gets very involved in the covariant setup. Therefore the main progress beyond the free level is due to other approaches.

In particular, cubic HS interactions have been found and studied in detail within the light-cone framework (see e.g. [7–11]). However, already beyond the cubic level the analysis becomes too complicated.

Self-dual HS models are conveniently formulated and analyzed by means of the methods of twistor theory [12–15].

The full all-order system of classical e.o.m. of interacting HS gauge fields has been constructed by Vasiliev [16, 17] in terms of generating equations, written in the so-called unfolded form [18–20] (for a review of Vasiliev theory see [21, 22]). But extracting HS vertices from the Vasiliev equations represents a very nontrivial task, because one must somehow restrict the degree of non-locality while solving for auxiliary generating variables, a problem currently under active study (see [23] and references therein).

More references and a partial review of the recent HS literature can be found in [24].

Thus, the availability of different implementations of HS fields significantly enriches our possibilities for constructing and studying HS theories. In this paper we propose a new realization for the integer-spin representations of the 4d Poincaré group. Instead of dealing with 4d Minkowski space, we consider a 5d space spanned by a pair of conjugate spinors and one Lorentz scalar. This set of coordinates appeared previously in the unfolded formulation of 4d off-shell fields [25–28], where they played the role of auxiliary fiber coordinates encoding unfolded descendants of the space-time fields under consideration. In this paper we use these coordinates to build a self-contained Lagrangian formulation for 4d integer-spin fields without any reference to a space-time.

To give a preliminary intuitive idea of how such a 5d space can encode 4d fields, let us consider a simple example. An asymptotic one-particle state of a scalar field is determined by
its 4-momentum $p^a = (E, \vec{p})$, which is forced to lie on the mass shell $p^a p_a = m^2$. Hence the state is fixed by three independent parameters: four variables with one constraint. Alternatively, the same information can be encoded in a Lorentz scalar $\pi = \sqrt{E^2 - \vec{p}^{\,2}}$ and a null vector $n^a = (|\vec{p}|, \vec{p})$, with the constraint being $\pi = m$. In its turn, a real null 4d vector can be represented in terms of spinors as $n^a = (\bar\sigma^a)^{\dot\alpha\beta}\bar\xi_{\dot\alpha}\xi_\beta$. Thus, a set of 5 variables $\{\pi, \xi_\alpha, \bar\xi_{\dot\alpha}\}$ (effectively, 4 of them, as the global phase of $\xi$ does not contribute) determines the 4-momentum, while the mass-shell equation becomes simply $\pi = m$, putting no restrictions on $\xi$.
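The claim that a spinor bilinear of this form is automatically null holds for any 2-spinor and is easy to check numerically. A minimal sketch (not from the paper) using the Pauli-matrix realization $n^a = (\xi^\dagger\xi,\ \xi^\dagger\sigma_i\xi)$:

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def null_vector(xi):
    """Build the real vector n^a = (xi^dag xi, xi^dag sigma_i xi) from a 2-spinor."""
    xi = np.asarray(xi, dtype=complex)
    n0 = (xi.conj() @ xi).real
    ni = [(xi.conj() @ s @ xi).real for s in sigma]
    return np.array([n0, *ni])

xi = np.array([1 + 2j, 0.5 - 1j])     # arbitrary sample spinor
n = null_vector(xi)
# Minkowski square with signature (+,-,-,-): vanishes for ANY xi
minkowski_norm = n[0]**2 - n[1]**2 - n[2]**2 - n[3]**2
```

This is exactly why the mass-shell constraint collapses onto $\pi$ alone: the spinor part can never spoil nullness.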
In our consideration, however, we make use of a similar 5d space as a substitute not for the momentum $p^a$ but rather for the coordinate $x^a$, so that classical e.o.m. become ODEs in a scalar coordinate. We find expressions for the Poincaré generators and identify appropriate modules supplied with a positive-definite inner product. We also construct simple Poincaré-invariant actions which lead to the appropriate e.o.m. and find their general solutions. In addition, we find momentum-eigenstate solutions for the cases of an arbitrary-mass scalar field and of massless arbitrary-spin fields.

The paper is organized as follows. In Section 2 we introduce our conventions for the Poincaré generators and give a brief reminder of how covariant quantum fields are constructed in the standard approach, to be later compared with our construction. In Section 3 we build a 4d integer-spin representation on a certain 5d space. In Section 4 we present a Poincaré-invariant action for a free field, give a general solution to the e.o.m. and propose an inner product for solutions. In Section 5 we find solutions of the e.o.m. corresponding to momentum eigenstates for a scalar field and massless fields. In Section 6 we sum up our results.
2 4d Poincaré algebra and relativistic fields

Elementary particles are associated with unitary irreducible representations (UIRs) of the Poincaré group (or, more generally, an isometry group of the spacetime in question) [29]. In the paper we consider the 4d Poincaré algebra with generators $P_{\alpha\dot\alpha}$, $M_{\alpha\beta} = M_{\beta\alpha}$ and $\bar M_{\dot\alpha\dot\beta} = \bar M_{\dot\beta\dot\alpha}$, which correspond to translations, anti-selfdual and selfdual rotations of Minkowski space, respectively. Here indices belong to two conjugate spinor representations of the Lorentz algebra $sl(2,\mathbb{C})$. The commutation relations are

$[M_{\alpha\beta}, M_{\gamma\delta}] = \epsilon_{\alpha\gamma}M_{\beta\delta} + \epsilon_{\alpha\delta}M_{\beta\gamma} + \epsilon_{\beta\gamma}M_{\alpha\delta} + \epsilon_{\beta\delta}M_{\alpha\gamma}$,  (2.1)

$[\bar M_{\dot\alpha\dot\beta}, \bar M_{\dot\gamma\dot\delta}] = \epsilon_{\dot\alpha\dot\gamma}\bar M_{\dot\beta\dot\delta} + \epsilon_{\dot\alpha\dot\delta}\bar M_{\dot\beta\dot\gamma} + \epsilon_{\dot\beta\dot\gamma}\bar M_{\dot\alpha\dot\delta} + \epsilon_{\dot\beta\dot\delta}\bar M_{\dot\alpha\dot\gamma}$,  (2.2)

$[M_{\alpha\beta}, \bar M_{\dot\gamma\dot\delta}] = 0$,  (2.3)

$[M_{\alpha\beta}, P_{\gamma\dot\gamma}] = \epsilon_{\alpha\gamma}P_{\beta\dot\gamma} + \epsilon_{\beta\gamma}P_{\alpha\dot\gamma}$,  (2.4)

$[\bar M_{\dot\alpha\dot\beta}, P_{\gamma\dot\gamma}] = \epsilon_{\dot\alpha\dot\gamma}P_{\gamma\dot\beta} + \epsilon_{\dot\beta\dot\gamma}P_{\gamma\dot\alpha}$,  (2.5)

$[P_{\alpha\dot\alpha}, P_{\beta\dot\beta}] = 0$,  (2.6)

where $\epsilon_{\alpha\beta}$ and $\epsilon_{\dot\alpha\dot\beta}$ are the Lorentz-invariant spinor metrics

$\epsilon_{\alpha\beta} = \epsilon^{\alpha\beta} = \epsilon_{\dot\alpha\dot\beta} = \epsilon^{\dot\alpha\dot\beta} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$,  (2.7)

which raise and lower spinor indices according to

$v^\alpha = \epsilon^{\beta\alpha}v_\beta, \quad v_\alpha = \epsilon_{\alpha\beta}v^\beta, \quad \bar v^{\dot\alpha} = \epsilon^{\dot\beta\dot\alpha}\bar v_{\dot\beta}, \quad \bar v_{\dot\alpha} = \epsilon_{\dot\alpha\dot\beta}\bar v^{\dot\beta}$.  (2.8)
UIRs are determined by the values of two Casimir operators: the square of the momentum, associated with the mass,

$P^2 = m^2$,  (2.9)

and, introducing the Pauli–Lubanski pseudovector as

$W_{\alpha\dot\alpha} = \tfrac{1}{2}M_{\alpha\beta}P^{\beta}{}_{\dot\alpha} - \tfrac{1}{2}\bar M_{\dot\alpha\dot\beta}P_{\alpha}{}^{\dot\beta}$,  (2.10)

either its square, associated with the spin $s$ when $m^2 > 0$,

$W^2 = -m^2 s(s+1)$,  (2.11)

or the helicity $\lambda$ when $m = 0$,

$W_{\alpha\dot\beta} = \lambda P_{\alpha\dot\beta}$.  (2.12)

In (2.9), (2.11) and throughout the paper the square $v^2$ of a vector $v_{\alpha\dot\beta}$ is defined as

$v^2 = \tfrac{1}{2}v_{\alpha\dot\beta}v^{\alpha\dot\beta}$.  (2.13)

The standard covariant QFT approach is to implement the momentum generators as coordinate derivatives

$P_a = -i\dfrac{\partial}{\partial x^a}$  (2.14)

on the Minkowski space with coordinates $x^a$. Then quantum fields look like $\phi_I(x)$, where the index $I$ belongs to some finite-dimensional representation of the Lorentz group (spin), so that rotations are realized as

$M_{a,b} = i\left(x_a\dfrac{\partial}{\partial x^b} - x_b\dfrac{\partial}{\partial x^a}\right) + (S_{a,b})_I{}^J$  (2.15)

with $S$ being $x$-independent spin generators. In general, however, the resulting representation of the Poincaré algebra is neither irreducible nor unitary, and one has to remove undesirable subrepresentations by imposing additional constraints besides the Klein–Gordon equation (2.9). In order to represent all of them as following from some Lagrangian equations of motion, one has to introduce auxiliary fields (for massive fields with $s > 1$) and/or to provide certain gauge symmetry (for massless fields with $s \geq 1$). The corresponding Lagrangian formulations for arbitrary-spin fields have been constructed by Singh and Hagen for massive fields [1, 2] and by Fronsdal and Fang for massless fields [3–6].
3 Spin-s representation

In the paper we construct a realization of bosonic UIRs on a 5d linear space spanned by a pair of conjugate commuting $sl(2,\mathbb{C})$ spinors $Y^A = (y_\alpha, \bar y_{\dot\alpha})$ and a Lorentz-invariant 'proper time' $\tau$. This set of variables $(Y, \tau)$ was previously used in formulating off-shell unfolded equations for various 4d field systems [25–28]. The spinors $Y$ were initially used in the unfolded Vasiliev equations [16, 17], where they play the crucial role of generators of an associative HS gauge algebra. Here we propose to use the $(Y, \tau)$-space instead of a space-time and to build a corresponding Lagrangian formulation for bosonic fields. All fields are 'scalar' (i.e. without non-contracted Lorentz indices) functions $F(Y, \tau)$ on this space.
For the rotation generators we take

$M_{\alpha\beta} = y_\alpha\partial_\beta + y_\beta\partial_\alpha$,  (3.1)

$\bar M_{\dot\alpha\dot\beta} = \bar y_{\dot\alpha}\bar\partial_{\dot\beta} + \bar y_{\dot\beta}\bar\partial_{\dot\alpha}$,  (3.2)

where the $Y$-derivatives are defined as

$\partial_\alpha y^\beta = \delta_\alpha^\beta, \quad \bar\partial_{\dot\alpha}\bar y^{\dot\beta} = \delta_{\dot\alpha}^{\dot\beta}$.  (3.3)

It is easy to check that (3.1)-(3.2) satisfy (2.1)-(2.3). From here it also directly follows that the proper-time coordinate $\tau$ is Lorentz-invariant (but not translation-invariant, as we will see). The expressions (3.1)-(3.2) for the rotation operators are universal: we demand that they look the same for all fields of arbitrary masses and spins, as is the case for the translation operator in the standard construction (2.14). The price to pay for this is that the translation operator now depends on the spin, as we will see.

As the $Y$ commute with themselves, they have zero norm

$y^\alpha y_\alpha = 0, \quad \bar y^{\dot\alpha}\bar y_{\dot\alpha} = 0$,  (3.4)

and the only independent Lorentz-invariant $Y$-combinations one can form are the Euler operators

$N = y^\alpha\partial_\alpha, \quad \bar N = \bar y^{\dot\alpha}\bar\partial_{\dot\alpha}$.  (3.5)
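The Euler operators simply count homogeneity degrees: on a monomial with $m$ factors of $y$ and $n$ factors of $\bar y$, $N$ gives $m$ and $\bar N$ gives $n$. A small numerical sketch (not from the paper) using the identity $Nf(y) = \frac{d}{dt}f(e^t y)\big|_{t=0}$ for a degree operator:

```python
# Hypothetical illustration: Euler operators as degree counters.
# N scales the y arguments, Nbar scales the ybar arguments.
def euler(f, y, ybar, scale_y=True, h=1e-6):
    up = (1.0 + h, 1.0) if scale_y else (1.0, 1.0 + h)
    dn = (1.0 - h, 1.0) if scale_y else (1.0, 1.0 - h)
    # central difference of f(lambda*y, mu*ybar) in the scale parameter
    return (f([up[0] * c for c in y], [up[1] * c for c in ybar])
            - f([dn[0] * c for c in y], [dn[1] * c for c in ybar])) / (2 * h)

# a sample monomial of y-degree 3 and ybar-degree 1
f = lambda y, yb: y[0] ** 2 * y[1] * yb[0]
y, yb = [1.3, 0.7], [0.9, 2.1]

N_f    = euler(f, y, yb, scale_y=True)   # equals 3 * f(y, yb)
Nbar_f = euler(f, y, yb, scale_y=False)  # equals 1 * f(y, yb)
varsigma = (3 + 1) / 2                   # eigenvalue of (N + Nbar)/2
chi      = (3 - 1) / 2                   # eigenvalue of (N - Nbar)/2
```

The eigenvalues of the combinations $\varsigma$ and $\chi$ defined in (3.16) below are then just the half-sum and half-difference of the two degrees.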
An appropriate module of a spin-s representation has to contain states with helicities from $-s$ to $+s$. This can be achieved by considering a set of functions

$\Phi^s(Y,\tau) = \{\Phi_{\alpha(m),\dot\alpha(n)}(\tau)(y^\alpha)^m(\bar y^{\dot\alpha})^n, \quad (m+n) \geq 2s, \quad |m-n| \leq 2s\}$,  (3.6)

where we make use of condensed notations for symmetrized indices

$v_{\alpha(m)} = v_{(\alpha_1\alpha_2...\alpha_m)}, \quad (y^\alpha)^m = y^{\alpha_1}y^{\alpha_2}...y^{\alpha_m}$.  (3.7)

The module (3.6) can also be represented as

$\Phi^s(Y,\tau) = \Phi_{A(2s)}(y\bar y,\tau)(Y^A)^{2s}$,  (3.8)

where $A$ is a Majorana index taking four values $\{1, 2, \dot 1, \dot 2\}$. This form is visually more similar to the standard Minkowski approach, where an integer spin-s module is a rank-s tensor field $\phi_{a(s)}(x)$. It should be stressed, however, that in (3.8) the 'external' $Y$-s and the 'internal' $y$-s and $\bar y$-s are on a completely equal footing, as seen from (3.6). The $2s$ explicit spinors and indices in (3.8) are highlighted only in order to show the restrictions on the number of $y$ and $\bar y$, and play no special role otherwise.

Now one has to find an expression for the momentum operator $P_{\alpha\dot\beta}$. The most general Ansatz is

$P_{\alpha\dot\beta} = a_{N,\bar N}\,\partial_\alpha\bar\partial_{\dot\beta} + b_{N,\bar N}\,y_\alpha\bar y_{\dot\beta} + c_{N,\bar N}\,y_\alpha\bar\partial_{\dot\beta} + \bar c_{N,\bar N}\,\partial_\alpha\bar y_{\dot\beta}$,  (3.9)

where the Lorentz-invariant coefficients $a, b, c, \bar c$ are built out of the Euler operators (3.5), as well as of $\tau$ and $\tau$-derivatives. (3.9) automatically satisfies (2.4) and (2.5), so the only equation to be solved is (2.6). It can be equivalently reformulated in terms of two conjugate equations

$P_{\alpha\dot\beta}P_{\alpha\dot\gamma}\epsilon^{\dot\beta\dot\gamma} = 0$,  (3.10)
$P_{\beta\dot\alpha}P_{\gamma\dot\alpha}\epsilon^{\beta\gamma} = 0$.  (3.11)

Substituting (3.9), they lead to the following constraints

$(\bar N + 2)a_{N,\bar N}\bar c_{N+1,\bar N+1} - \bar N a_{N+1,\bar N-1}\bar c_{N,\bar N} = 0$,  (3.12)

$(\bar N + 2)b_{N-1,\bar N+1}c_{N,\bar N} - \bar N b_{N,\bar N}\bar c_{N-1,\bar N-1} = 0$,  (3.13)

$(\bar N + 2)a_{N,\bar N}b_{N+1,\bar N+1} - \bar N a_{N-1,\bar N-1}b_{N,\bar N} + (\bar N + 2)c_{N,\bar N}\bar c_{N-1,\bar N+1} - \bar N\bar c_{N,\bar N}c_{N+1,\bar N-1} = 0$,  (3.14)

plus three conjugate equations with $N \leftrightarrow \bar N$, $c \leftrightarrow \bar c$ interchanged. In addition, one has to ensure that the action of (3.9) does not lead outside the module (3.6). This means that only those solutions are suitable that satisfy

$a_{N,\bar N}|_{\varsigma=s-1} = 0, \quad c_{N,\bar N}|_{\chi=s+1} = 0, \quad \bar c_{N,\bar N}|_{\chi=-s-1} = 0$,  (3.15)

where $\varsigma$ and $\chi$ are important linear combinations of the Euler operators (3.5), which we actively use below,

$\varsigma = \dfrac{N + \bar N}{2}, \quad \chi = \dfrac{N - \bar N}{2}$.  (3.16)

Any solution of (3.12)-(3.14) respecting the boundary conditions (3.15) defines some representation of the Poincaré algebra. But many of these representations are equivalent, and this allows one to put some further constraints.

First, we restrict the $\tau$-dependence and provide a 'separation of variables' $Y$ and $\tau$. Specifically, we require the operator $P^2$ to be $Y$-independent, so that the mass-shell equation (2.9) becomes an ODE in $\tau$. In addition, we demand $\tau$ to enter (3.9) only through this $P^2$-combination. Second, we require $P_{\alpha\dot\beta}$ to allow for the usual integration-by-parts rule

$\int d\tau\int d^4Y\, f(Y,\tau)\,P_{\alpha\dot\beta}\,g(Y,\tau) = -\int d\tau\int d^4Y\, g(Y,\tau)\,P_{\alpha\dot\beta}\,f(Y,\tau)$.  (3.17)

To this end one notes that (assuming that one can neglect boundary terms)

$\int d^4Y\,(y^\alpha\partial_\alpha f(Y))\,g(Y) = \int d^4Y\,((\partial_\alpha y^\alpha - 2)f(Y))\,g(Y) = -\int d^4Y\, f(Y)\,(y^\alpha\partial_\alpha + 2)\,g(Y)$,  (3.18)

which allows one to formulate the general rules

$\int Nf\cdot g = -\int f\cdot(N+2)g, \quad \int \bar Nf\cdot g = -\int f\cdot(\bar N+2)g, \quad \int \varsigma f\cdot g = -\int f\cdot(\varsigma+2)g, \quad \int \chi f\cdot g = -\int f\cdot\chi g$.  (3.19)
These constraints significantly restrict the space of solutions to (3.10)-(3.11), though they still do not fix it unambiguously. We pick the following particular solution

$-iP_{\alpha\dot\beta} = \dfrac{(\varsigma-s+1)(\varsigma+s+2)(\varsigma+3/2)}{(N+1)(N+2)(\bar N+1)(\bar N+2)}\,\partial_\alpha\bar\partial_{\dot\beta} - \dfrac{P^2}{(\varsigma+1/2)}\,y_\alpha\bar y_{\dot\beta} + \dfrac{1}{(\bar N+1)(\bar N+2)}\big[(\chi+s)(\chi-s-1)\Pi_+ - P^2\Pi_{-0}\big]\,y_\alpha\bar\partial_{\dot\beta} + \dfrac{1}{(N+1)(N+2)}\big[(\chi-s)(\chi+s+1)\Pi_- - P^2\Pi_{+0}\big]\,\partial_\alpha\bar y_{\dot\beta}$,  (3.20)
where projectors Π on different χ-components are introduced as
|
375 |
+
Π+Fχ(Y ) =
|
376 |
+
�
|
377 |
+
Fχ(Y ),
|
378 |
+
χ > 0
|
379 |
+
0,
|
380 |
+
χ ≤ 0 ;
|
381 |
+
Π−Fχ(Y ) =
|
382 |
+
�
|
383 |
+
Fχ(Y ),
|
384 |
+
χ < 0
|
385 |
+
0,
|
386 |
+
χ ≥ 0 ;
|
387 |
+
(3.21)
|
388 |
+
Π+0Fχ(Y ) =
|
389 |
+
�
|
390 |
+
Fχ(Y ),
|
391 |
+
χ ≥ 0
|
392 |
+
0,
|
393 |
+
χ < 0 ;
|
394 |
+
Π−0Fχ(Y ) =
|
395 |
+
�
|
396 |
+
Fχ(Y ),
|
397 |
+
χ ≤ 0
|
398 |
+
0,
|
399 |
+
χ > 0 .
|
400 |
+
(3.22)
|
401 |
+
Expression (3.20) for P contains manifestly and self-consistently its own square P 2, which is
|
402 |
+
Y -independent by construction. P 2 is also required to be even under integration by parts in
|
403 |
+
order to provide (3.17).
|
404 |
+
Now for the Pauli–Lubanski pseudovector (2.10) one has
$$-iW_{\alpha\dot\beta} = -\chi\,\frac{(\varsigma-s+1)(\varsigma+s+2)(\varsigma+3/2)}{(N+1)(N+2)(\bar{N}+1)(\bar{N}+2)}\,\partial_\alpha\bar\partial_{\dot\beta} - \chi\,\frac{P^2}{(\varsigma+1/2)}\,y_\alpha\bar{y}_{\dot\beta}\;+$$
$$+\;\frac{(\varsigma+1)}{(\bar{N}+1)(\bar{N}+2)}\big[(\chi+s)(\chi-s-1)\Pi_+ - P^2\Pi_{-0}\big]\,y_\alpha\bar\partial_{\dot\beta}\;-$$
$$-\;\frac{(\varsigma+1)}{(N+1)(N+2)}\big[(\chi-s)(\chi+s+1)\Pi_- - P^2\Pi_{+0}\big]\,\partial_\alpha\bar{y}_{\dot\beta}, \qquad (3.23)$$
with its square being
$$W^2 = -P^2 s(s+1). \qquad (3.24)$$
In the case $P^2 = 0$ one finds that $P_{\alpha\dot\beta}$ and $W_{\alpha\dot\beta}$ are proportional to each other whenever the module contains components with $|\chi| = s$ only, in which case
$$-iP^{m=0}_{\alpha\dot\beta} = \frac{(\varsigma-s+1)(\varsigma+s+2)(\varsigma+3/2)}{(N+1)(N+2)(\bar{N}+1)(\bar{N}+2)}\,\partial_\alpha\bar\partial_{\dot\beta}, \qquad W^{m=0}_{\alpha\dot\beta} = -\chi P^{m=0}_{\alpha\dot\beta}, \qquad (3.25)$$
that corresponds to the two $\pm s$ helicities (2.12) of the massless field.
Thus, operators (3.1), (3.2) and (3.20) indeed correctly determine a spin-$s$ representation on the module (3.6) after fixing the value of $P^2$. In the massless case $P^2 = 0$ one also has to reduce the module, leaving only $\pm s$ helicities, which corresponds to setting $|m - n| = 2s$ instead of $|m - n| \le 2s$ in (3.6), or having, instead of (3.8),
$$\Phi^{s}_{m=0}(Y,\tau) = \Phi_{\alpha(2s)}(y\bar{y},\tau)(y^\alpha)^{2s} \oplus \bar\Phi_{\dot\alpha(2s)}(y\bar{y},\tau)(\bar{y}^{\dot\alpha})^{2s}. \qquad (3.26)$$
Now, in order to formulate an action principle, one has to realize $P^2$ as a differential operator. As mentioned previously, it must be $\tau$-dependent only and even under integration by parts, but completely unrestricted otherwise. This means that in our construction the Klein–Gordon equation (2.9) can be implemented in many different ways. In the next Section we consider one of the simplest possibilities.
4 Free action, e.o.m. and inner product
First we consider the massive case. We take
$$P^2 = -\frac{\partial^2}{\partial\tau^2}. \qquad (4.1)$$
Then a Poincaré-invariant action for a spin-$s$ mass-$m$ field is simply
$$S = \frac{1}{2}\sum_{\chi=-s}^{s}\int d^4Y \int d\tau\,\big(\dot\Phi_\chi^2 - m^2\Phi_\chi^2\big), \qquad (4.2)$$
where the dot means a $\tau$-derivative and $\Phi_\chi$ denotes the subspace of the spin-$s$ module (3.6) of definite helicity $\chi$
$$\Phi_\chi(Y,\tau) = \Phi_{\alpha(s+\chi),\dot\beta(s-\chi)}(y\bar{y},\tau)(y^\alpha)^{s+\chi}(\bar{y}^{\dot\beta})^{s-\chi}. \qquad (4.3)$$
Poincaré-invariance of the action (4.2) is guaranteed by the integration-by-parts property (3.17), which is obvious for $M$ and $\bar{M}$ (3.1)-(3.2) as well.
The action (4.2) leads to the e.o.m.
$$\ddot\Phi_\chi + m^2\Phi_\chi = 0. \qquad (4.4)$$
Its general solution is
$$\Phi_\chi(Y,\tau) = e^{-im\tau}f_\chi(Y) + e^{im\tau}g_\chi(Y), \qquad (4.5)$$
where the only requirement on the $Y$-functions $f$ and $g$ is that they belong to the helicity-$\chi$ subspace. Thus, from the point of view of (4.4), $Y$ are coordinates on the subspace of Cauchy data, while the e.o.m. determines the evolution in the $\tau$-direction.
A Poincaré-invariant inner product for the on-shell states is
$$(\Phi_\chi,\Psi_{\chi'}) = i\int d^4Y\,\big(\bar\Phi\dot\Psi - \Psi\dot{\bar\Phi}\big)\,\delta_{\chi,\chi'}. \qquad (4.6)$$
It is $\tau$-independent due to (4.4) and positive-definite for the 'positive-mass' subspace of (4.5) with $g = 0$. The states with the same $Y$-dependence but with different mass signs are orthogonal. The split of the on-shell space into two subspaces, corresponding to the 'positive-mass' $f$ and 'negative-mass' $g$ contributions in (4.6), is reminiscent of the split into positive-energy and negative-energy branches in standard QFT. However, establishing a rigorous relation between these two phenomena requires a separate thorough analysis, which we leave for future study. Let us note, however, that in our case the split, being determined by the $\tau$-dependence, is manifestly Lorentz-invariant.
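As a quick sanity check (a symbolic sketch with sympy, not part of the original text), one can verify that for a 'positive-mass' solution of (4.5) with $g = 0$ the integrand of the inner product (4.6) is $\tau$-independent and positive. Here the symbol `f` stands in for the $Y$-dependent amplitude $f_\chi(Y)$, since the $d^4Y$ integral factors out:

```python
import sympy as sp

tau, m = sp.symbols('tau m', positive=True)
f = sp.Symbol('f')  # placeholder for the Y-dependent amplitude f_chi(Y)

# 'positive-mass' branch of the general solution (4.5): g = 0
Phi = sp.exp(-sp.I * m * tau) * f

# integrand of the inner product (4.6) with Psi = Phi
integrand = sp.I * (sp.conjugate(Phi) * sp.diff(Phi, tau)
                    - Phi * sp.diff(sp.conjugate(Phi), tau))
integrand = sp.simplify(integrand)

# tau-independent and manifestly positive: 2*m*|f|^2
assert sp.diff(integrand, tau) == 0
assert sp.simplify(integrand - 2 * m * f * sp.conjugate(f)) == 0
```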
Now we move to the massless case. Here using (4.1) potentially leads to problems: the general solution to (4.4) with $m = 0$ is an arbitrary linear function of $\tau$, so all on-shell states either have zero norm with respect to (4.6) or are unbounded in $\tau$, which may be unpleasant.
This can be easily fixed by introducing a mass-dimension parameter $\mu$ and deforming (4.1) to
$$P^2 = -\frac{\partial^2}{\partial\tau^2} - \mu^2. \qquad (4.7)$$
Then the zero-mass action becomes
$$S = \frac{1}{2}\sum_{\chi=-s}^{s}\int d^4Y \int d\tau\,\big(\dot\Phi_\chi^2 - \mu^2\Phi_\chi^2\big), \qquad (4.8)$$
and the e.o.m. now is
$$\ddot\Phi_\chi + \mu^2\Phi_\chi = 0, \qquad (4.9)$$
so the general solution is
$$\Phi_\chi(Y,\tau) = e^{-i\mu\tau}f_\chi(Y) + e^{i\mu\tau}g_\chi(Y), \qquad (4.10)$$
and one again has $\tau$-bounded functions and the split into two branches.
As said before, in the massless case one also has to reduce the module, leaving only the $|\chi| = s$ components, (3.26). The intermediate components $|\chi| < s$ are necessary to provide off-shell Poincaré invariance of the action (4.8), but on shell the $|\chi| = s$ components decouple into closed subspaces.
It should be stressed that the equation (4.9) describes a massless field, $m = 0$. The parameter $\mu$ does not shift the value of the mass; it only deforms the functional dependence of $P^2$ on $\tau$. In particular, $\mu$ enters the expression for the off-shell momentum generator (3.20) directly through (4.7). In principle, it can be introduced for the massive fields as well. Practically, the parameter $\mu$ plays the role of a manifestly Poincaré-invariant IR-regulator. The possibility of such a deformation relies on the large freedom in choosing the differential realization of $P^2$ and is specific to the presented construction. In particular, it is unclear how to locally deform the momentum operator (2.14) of a covariant QFT to have $P^2 = -\Box + \mu^2$.
Let us also give a brief comment on the issue of locality of the constructed representations. As seen from (3.20), the translations, as opposed to the rotations (3.1)-(3.2), are realized non-locally: the $Y$-differential operators $N$ and $\bar{N}$ enter (3.20) in a non-polynomial way. But a crucial feature is that the translations are local in $\tau$, so one cannot, e.g., shift the pole of the propagator by means of Poincaré transformations. So the evolution in $\tau$ is completely local, while transformations on the Cauchy hypersurface with coordinates $Y$ are non-local.
5 Momentum eigenstates
Having formulated the classical action and e.o.m., the next natural step is to look for various particular solutions to them. Of special importance are solutions that correspond to momentum eigenstates. We restrict ourselves here to the simplest cases of a scalar field and massless arbitrary-spin fields, for which the momentum operator takes a particularly simple form.
5.1 Scalar field
Let us construct momentum eigenstates for the scalar field $s = 0$. In this case the module (3.6) is
$$\Phi^{s=0}(Y,\tau) = \Phi(y\bar{y},\tau), \qquad (5.1)$$
and the momentum operator (3.20) reduces to
$$P^{s=0}_{\alpha\dot\alpha} = \frac{i(\varsigma+3/2)}{(\varsigma+1)(\varsigma+2)}\,\partial_\alpha\bar\partial_{\dot\alpha} + \frac{i}{(\varsigma+1/2)}\,y_\alpha\bar{y}_{\dot\alpha}\,\frac{\partial^2}{\partial\tau^2}. \qquad (5.2)$$
We have to solve the equation
$$P_{\alpha\dot\beta}\Phi_p(Y,\tau) = p_{\alpha\dot\beta}\Phi_p(Y,\tau) \qquad (5.3)$$
with some momentum $p_{\alpha\dot\beta}$, $p^2 = m^2$.
A natural Ansatz is
$$\Phi_p(Y,\tau) = \Phi_p(-ip_{\alpha\dot\alpha}y^\alpha\bar{y}^{\dot\alpha})\,e^{\pm im\tau}, \qquad (5.4)$$
where the $\tau$-dependence gets fixed by the general solution (4.5) and $p_{\alpha\dot\alpha}y^\alpha\bar{y}^{\dot\alpha}$ is the only available Lorentz-invariant combination involving $Y$.
Using that
$$\partial_\alpha\bar\partial_{\dot\alpha}f(z_{\beta\dot\beta}y^\beta\bar{y}^{\dot\beta}) = z_{\alpha\dot\alpha}(\varsigma+1)f' - z^2 y_\alpha\bar{y}_{\dot\alpha}f'', \qquad (5.5)$$
where the prime means the derivative with respect to the entire argument of $f$, one can rewrite (5.3) as an ODE with respect to the variable $u = -ip_{\alpha\dot\alpha}y^\alpha\bar{y}^{\dot\alpha}$
$$u\Phi''(u) + \Big(\frac{3}{2} - u\Big)\Phi'(u) - 2\Phi(u) = 0. \qquad (5.6)$$
This arises from the terms in (5.3) proportional to $p_{\alpha\dot\beta}$. Strictly speaking, there is one more ODE coming from (5.3), generated by the terms proportional to $y_\alpha\bar{y}_{\dot\alpha}$, but it represents a differential consequence of (5.6).
(5.6) is Kummer's equation. Its solution regular at $u = 0$ is the confluent hypergeometric function
$$\Phi(u) = {}_1F_1\Big(2;\frac{3}{2};u\Big). \qquad (5.7)$$
Thus, the momentum-$p_{\alpha\dot\beta}$ eigenstate of the scalar field is
$$\Phi_p(Y,\tau) = {}_1F_1\Big(2;\frac{3}{2};-ipy\bar{y}\Big)\,e^{\pm im\tau}. \qquad (5.8)$$
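A quick numerical cross-check (a sketch using mpmath, not from the paper) confirms that the confluent hypergeometric function (5.7) indeed solves Kummer's equation (5.6):

```python
from mpmath import mp, mpf, hyp1f1, diff

mp.dps = 30  # high working precision

# candidate solution (5.7): Phi(u) = 1F1(2; 3/2; u)
Phi = lambda u: hyp1f1(2, mpf(3) / 2, u)

def residual(u):
    # left-hand side of (5.6): u*Phi'' + (3/2 - u)*Phi' - 2*Phi
    return (u * diff(Phi, u, 2)
            + (mpf(3) / 2 - u) * diff(Phi, u)
            - 2 * Phi(u))

# the residual vanishes (to numerical precision) at generic points
for u in [mpf('0.3'), mpf('1.7'), mpf('-2.5')]:
    assert abs(residual(u)) < mpf('1e-15')
```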
5.2 Massless fields
For a massless spin-$s$ field the module is (3.26). It contains the two $\pm s$ helicities, and for both of them the momentum operator reduces to
$$P^{m=0}_{\alpha\dot\beta} = \frac{i(\varsigma+3/2)}{(\varsigma+s+1)(\varsigma-s+2)}\,\partial_\alpha\bar\partial_{\dot\beta}. \qquad (5.9)$$
Introducing a polarization vector $\varepsilon_{\alpha\dot\beta}$, orthogonal to the null momentum $p_{\alpha\dot\beta}$, $p^2 = 0$,
$$\varepsilon^{\alpha\dot\beta}p_{\alpha\dot\beta} = 0, \qquad (5.10)$$
we choose the following Ansätze for negative and positive helicities
$$\Phi^-_{p,\varepsilon}(Y,\tau) = \big(i\varepsilon_{\alpha}{}^{\dot\beta}p_{\alpha\dot\beta}\,y^\alpha y^\alpha\big)^s\,\Psi(-ipy\bar{y})\,e^{\pm i\mu\tau}, \qquad (5.11)$$
$$\Phi^+_{p,\varepsilon}(Y,\tau) = \big(i\varepsilon^{\beta}{}_{\dot\alpha}p_{\beta\dot\alpha}\,\bar{y}^{\dot\alpha}\bar{y}^{\dot\alpha}\big)^s\,\Psi(-ipy\bar{y})\,e^{\pm i\mu\tau}. \qquad (5.12)$$
Here we made use of the $\mu$-deformed realization (4.7) of $P^2$. Then for $\Psi$ one gets, analogously to the scalar field case, the following Kummer equation
$$u\Psi''(u) + \Big(\frac{3}{2} + s - u\Big)\Psi'(u) - 2\Psi(u) = 0 \qquad (5.13)$$
whose solution regular at $u = 0$ is
$$\Psi(u) = {}_1F_1\Big(2;\frac{3}{2}+s;u\Big). \qquad (5.14)$$
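Analogously to the scalar case, one can numerically confirm (again a sketch using mpmath) that (5.14) solves the $s$-dependent Kummer equation (5.13) for several spins:

```python
from mpmath import mp, mpf, hyp1f1, diff

mp.dps = 30  # high working precision

def residual(s, u):
    # left-hand side of (5.13) for Psi(u) = 1F1(2; 3/2 + s; u), as in (5.14)
    Psi = lambda v: hyp1f1(2, mpf(3) / 2 + s, v)
    return (u * diff(Psi, u, 2)
            + (mpf(3) / 2 + s - u) * diff(Psi, u)
            - 2 * Psi(u))

for s in [0, 1, 2, 3]:          # s = 0 reproduces the scalar equation (5.6)
    for u in [mpf('0.4'), mpf('-1.2')]:
        assert abs(residual(s, u)) < mpf('1e-15')
```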
6 Conclusion
In this paper we proposed a new way of implementing bosonic UIRs of the 4d Poincaré group. We presented them as bunches of scalar fields on a 5d space with coordinates $\{Y^A, \tau\}$ and found appropriate realizations for the Poincaré generators. These realizations possess some distinguishing features: the mass operator $P^2$ is independent of the spinor coordinates $Y$, so that the equations of motion become ODEs in a Lorentz-invariant proper time $\tau$ and follow from a simple, manifestly Poincaré-invariant action. Thus, our construction demonstrates a separation of variables: the e.o.m. governs the evolution in $\tau$, while $Y$ parameterize the space of Cauchy data. The translation generators are local differential operators in $\tau$ but non-local in $Y$, hence the $\tau$-evolution is local, while translations act non-locally on the Cauchy hypersurface spanned by $Y$.
The simple form of the e.o.m. allowed us to write down their general solutions. Those contain two branches, corresponding to different sign-dependence on $\tau$, similarly to the positive- and negative-energy branches in the standard QFT approach. We found a Poincaré-invariant inner product, which is positive-definite for one of the branches.
For massless fields we modified the mass operator by introducing an IR-regulator. This allowed us to have solutions bounded in $\tau$ and the split into two branches. This modification is manifestly Poincaré-invariant and is possible due to the large ambiguity in the form of the mass operator, caused by the separation of variables. Our construction is non-gauge, as we work directly with helicity-expanded fields: the bunch of scalar fields mentioned before represents a bunch of helicities of a spin-$s$ representation, connected by Poincaré transformations. On the zero-mass shell the $\pm s$-helicity components form closed subrepresentations, so 'gauge-fixing' reduces to directly setting all intermediate-helicity components to zero.
We also found the momentum-eigenstate solutions for the simplest cases of a scalar field and massless fields. They have the form of confluent hypergeometric functions.
The construction proposed in the paper poses many problems for further research. One of the most urgent is to develop appropriate canonical structures and to define an analogue of the canonical quantization procedure, given that some necessary elements are already present (a classical action; a coordinate $\tau$, distinguished in a Lorentz-invariant way, that governs the evolution; two branches of classical solutions, etc.). Other interesting directions include considering fermionic and infinite-spin representations as well as supersymmetric extensions, generalizations to (A)dS backgrounds and, most importantly, introducing interactions. The problem of interactions, in its turn, immediately raises many questions: can one formulate a systematic procedure for seeking Poincaré-invariant vertices? What happens to the separation of the $\tau$ and $Y$ variables at the nonlinear level? How does the $Y$-non-locality of Poincaré transformations affect the perturbative analysis? One may hope that answering these questions will provide us with a new powerful formalism for studying higher-spin theories.
Acknowledgments
The research was supported by the Alexander von Humboldt Foundation.