diff --git a/-9FAT4oBgHgl3EQfqR39/content/tmp_files/2301.08647v1.pdf.txt b/-9FAT4oBgHgl3EQfqR39/content/tmp_files/2301.08647v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..505b88035c7e4330d6410f864360b203c62a0703 --- /dev/null +++ b/-9FAT4oBgHgl3EQfqR39/content/tmp_files/2301.08647v1.pdf.txt @@ -0,0 +1,1100 @@ +Image Memorability Prediction with Vision +Transformers +Thomas Hagen1,� and Thomas Espeseth1,2 +1Department of Psychology, University of Oslo, Oslo, Norway +2Department of Psychology, Oslo New University College, Oslo, Norway +Behavioral studies have shown that the memorability of images +is similar across groups of people, suggesting that memorability +is a function of the intrinsic properties of images, and is unre- +lated to people’s individual experiences and traits. Deep learn- +ing networks can be trained on such properties and be used +to predict memorability in new data sets. Convolutional neu- +ral networks (CNN) have pioneered image memorability predic- +tion, but more recently developed vision transformer (ViT) mod- +els may have the potential to yield even better predictions. In +this paper, we present the ViTMem, a new memorability model +based on ViT, and evaluate memorability predictions obtained +by it with state-of-the-art CNN-derived models. Results showed +that ViTMem performed equal to or better than state-of-the- +art models on all data sets. Additional semantic level analyses +revealed that ViTMem is particularly sensitive to the seman- +tic content that drives memorability in images. We conclude +that ViTMem provides a new step forward, and propose that +ViT-derived models can replace CNNs for computational pre- +diction of image memorability. Researchers, educators, adver- +tisers, visual designers and other interested parties can leverage +the model to improve the memorability of their image material. +memorability | vision transformers | psychology | semantic information +Introduction +Everyone knows that our memories depend on the experi- +ences we have had, facts we have encountered, and the abil- +ities we have to remember them. Combinations of these fac- +tors differ between individuals and give rise to unique memo- +ries in each of us. However, a complementary perspective on +memory focuses on the material that is (to be) remembered +rather than the individual that does the remembering. In one +central study, Isola et al. (1) presented more than 2000 scene +images in a continuous repeat-detection task. The partici- +pants were asked to respond whenever they saw an identical +repeat. The results revealed that the memorability score (per- +cent correct detections) varied considerably between images. +Most importantly, by running a consistency analysis in which +Spearman’s rank correlation was calculated on the memo- +rability scores from random splits of the participant group, +Isola and colleagues (1) were able to show that the memora- +bility score ranking was consistent across participants – some +images were memorable and some were forgettable. These +results indicate that the degree to which an image was cor- +rectly detected depended on properties intrinsic to the image +itself, not the traits of the observers. This is important be- +cause it shows that one can use the memorability scores in a +stimulus set to predict memory performance in a new group +of participants. 
These results have been replicated and extended in a number of studies, revealing that similar findings are obtained with different memory tasks (2), different retention times (1, 2), different contexts (3), and independent of whether encoding is intentional or incidental (4). However, although image memorability has proven to be a robust and reliable phenomenon, it has not been straightforward to pinpoint the image properties that drive it. What seems clear, though, is that memorability is multifaceted (5, 6). One way to characterize the underpinnings of memorability is to investigate the contribution from processes at different levels of the visual processing stream. For example, at the earliest stages of processing of a visual scene, visual attributes such as local contrast, orientation, and color are coded. At an intermediate level, contours are integrated, surfaces, shapes, and depth cues are segmented, and foreground and background are distinguished. At a higher level, object recognition is conducted through matching with templates stored in long term memory.

Positive correlations between memorability and the brightness and high contrast of objects have been found (7), but in general, low-level visual factors such as color, contrast, and spatial frequency do not predict memorability well (5, 8, 9). This is consistent with results showing that perceptual features are typically not retained in long term visual memory (10). In contrast to the low-level features, the evidence for a relation between intermediate to high level semantic features and memorability is much stronger. For example, images that contain people, faces, body parts, animals, and food are often associated with high memorability, whereas the opposite is a typical finding for objects like buildings and furniture and for images of landscapes and parks (3, 7, 11, 12). Other intermediate to high level features such as object interaction with the context or other objects, saliency factors, and image composition also contribute to memorability (5). Furthermore, although memorability is not reducible to high-level features such as aesthetics (1, 12), interestingness (1, 13), or popularity (12), emotions, particularly of negative valence, seem to predict higher memorability (9, 12). Finally, memorability seems to be relatively independent of cognitive control, attention, or priming (14).

Overall, the available evidence indicates that memorability captures intermediate- to high-level properties of semantics, such as objects or actions, and image composition, such as layout and clutter, rather than low-level features (5, 15). This fits well with the central role of semantic categories in organizing cognition and memory (16). Generally, the priority of semantic-level information enables us to quickly understand novel scenes and predict future events (17). For example, when inspecting a novel scene or an image, we do not primarily focus on low-level perceptual features or pixels, but prioritize more abstract visual schemas involving spatial regions, objects, and the relations between them (18). Also, when people are asked to indicate which regions of an image help them recognize it, there is high consistency between people's responses (18).
Similarly, fixation map data from eye-tracking have shown that there is a positive correlation between fixation map consistency and scene memorability, and this relation is associated with the presence of meaningful objects (3, 7, 19). Bylinskii et al. (5) suggest that these properties most efficiently signal information of high utility to our species, for example, emotions, social aspects, animate objects (e.g., faces, gestures, interactions), unexpected events, and tangible objects.

Memorability prediction. The finding that the memorability of an image is governed by properties intrinsic to the image itself not only implies that one can predict memory performance in a new set of participants, as described above, but also that one can predict the memorability of a novel set of images (i.e., memorability is an "image computable" feature). Given the availability of computational algorithms and high-quality training sets of sufficient size, one can predict memorability in novel sets of images for future (or already conducted) behavioral or neuroimaging studies. Such memorability prediction could also be valuable in a number of applied settings (e.g., within education, marketing and human-computer interaction).

Memorability researchers have employed computer vision models such as convolutional neural networks (CNNs) from early on (12), and advancements in the field have allowed researchers to predict image memorability with increasing precision (20-22). The inductive bias (the assumptions of the learning algorithms used to generalize to unseen data) of CNNs is inspired by knowledge about the primate visual system, and activations in the networks' layers have, with some success, been used to explain neural activations (23). However, some vulnerabilities of CNNs have been noted. For example, CNNs appear to depend more on image texture than biological vision systems do (24), and have problems with recognizing images based on the shape of objects (e.g., when texture is suppressed or removed). However, this vulnerability is reduced when the model's shape bias is increased through training on shape representations (25).

The LaMem train/test splits are a well-established benchmark for memorability prediction (12). The original MemNet (12), which is based on AlexNet (26), achieved a Spearman rank correlation of 0.64 on this benchmark. There have been several improvements on this benchmark; the leading approaches utilize image captioning to enhance memorability predictions. That is, a CNN produces a textual description of the image, which is then used to provide more high-level semantic information that is embedded into a semantic vector space before being combined with CNN image features in a multi-layered perceptron network. Squalli-Houssaini et al. (21) used this approach to reach a Spearman correlation of 0.72, with a mean squared error (MSE) of approximately 0.0092 (22). Leonardi et al. (22) used the captioning approach with dual ResNet50s and a soft attention mechanism to reach a rank correlation of 0.687 with an MSE of 0.0079.

The ResMem model (20), which is a CNN-based residual neural network architecture (ResNet), uses LaMem, but also takes advantage of a more recently published dataset named MemCat (11). This is a data set containing 10,000 images based on categories of animals, food, landscape, sports and vehicles. This data set also has a higher split-half correlation than LaMem.
Needell and Bainbridge (20) argue that the LaMem dataset on its own is lacking in generalizability due to poor sampling of naturalistic images. That is, the images are more intended as artistic renderings designed to attract an online audience. Hence, combining MemCat with LaMem should potentially yield a more generalizable model. Moreover, the increased size of the combined dataset might help in driving model performance further than previous models based on LaMem. The authors of ResMem also noted the importance of semantic information and structured their approach to utilize semantic representations from a ResNet model in order to improve predictions. An added benefit of ResMem is that it is shared on the python package index, which makes it easily accessible to researchers in diverse fields.

Vision transformers. Vision transformers (ViT) have recently been shown to provide similar or better performance than CNNs in a variety of computer vision tasks (27). This architecture was first introduced in the natural language processing field (28) for capturing long-range dependencies in text, and it leads to a superior speed/performance balance relative to ResNet architectures (29). Moreover, ViTs have been shown to produce errors that are more similar to human errors (30), suggesting that they could take similar information into account (see also (31)). A reason for this may be that ViTs are likely to take more of the global context into account and to be more dependent on the shape of objects rather than their texture (30). While it is not entirely clear why such properties should yield better predictions of image memorability, they could still help inform the discourse on which visual characteristics are relevant, as well as potentially yield a better model for predicting image memorability.

Hence, we set out to investigate whether vision transformers can yield better predictions of memorability than the state-of-the-art in image memorability prediction. In particular, we aimed to (i) benchmark a model based on ViT against the well-established LaMem train/test splits (12), (ii) train a ViT on the combined LaMem and MemCat data sets (20) to benchmark against the ResMem model (20), (iii) train a final ViT model on a more diverse and deduplicated data set, (iv) validate the final ViT model against additional independent data sets, and (v) inspect semantic level distributions of memorability scores for behavioral and predicted data.

Methods

As our model is based on ViT to predict memorability, we named it ViTMem. Because it has been shown that low-level visual features are less important for image memorability prediction, it would seem appropriate to use image augmentations in training our ViTMem model to reduce overfitting. This approach has also been used by others (22), although not to the extent done here. The augmentations used consisted of horizontal flipping, sharpen, blur, motion blur, random contrast, hue saturation value, CLAHE, shift scale rotate, perspective, optical distortion and grid distortion (32). For training all models we used PyTorch, the ADAM optimizer and mean squared error (squared L2 norm) as the loss function. Images were input as batches of 32 in RGB and resized to 256x256 pixels before applying augmentations with a probability of 0.7 and center cropping to 224x224 pixels. For creating ViTMem we used transfer learning on a vision transformer (27) model pretrained on ImageNet-1k (vit_base_patch16_224_miil) (33). The final classification layer was reduced to a single output followed by a sigmoid activation function.
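The training setup described above can be summarized in the following sketch. It is not the authors' exact implementation: it assumes the albumentations and timm packages, and any hyperparameters not stated in the text (such as the learning rate) are illustrative.

import albumentations as A
from albumentations.pytorch import ToTensorV2
import timm
import torch
from torch import nn

# Preprocessing and augmentation as described above: resize to 256x256,
# apply the augmentation block with probability 0.7, then center crop to 224x224.
train_transform = A.Compose([
    A.Resize(256, 256),
    A.Compose([
        A.HorizontalFlip(),
        A.Sharpen(),
        A.Blur(),
        A.MotionBlur(),
        A.RandomBrightnessContrast(),  # stands in for the "random contrast" augmentation
        A.HueSaturationValue(),
        A.CLAHE(),
        A.ShiftScaleRotate(),
        A.Perspective(),
        A.OpticalDistortion(),
        A.GridDistortion(),
    ], p=0.7),
    A.CenterCrop(224, 224),
    A.Normalize(),
    ToTensorV2(),
])

# ViT backbone pretrained on ImageNet-1k (miil weights), with the classification
# head reduced to a single output.
model = timm.create_model("vit_base_patch16_224_miil", pretrained=True, num_classes=1)

criterion = nn.MSELoss()  # squared L2 norm
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is illustrative

def training_step(images, scores):
    """One optimization step on a batch of RGB images and their memorability scores."""
    optimizer.zero_grad()
    preds = torch.sigmoid(model(images)).squeeze(1)  # sigmoid maps outputs to [0, 1]
    loss = criterion(preds, scores)
    loss.backward()
    optimizer.step()
    return loss.item()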
As we aim to provide an accessible model to the research community, it is also necessary to compare against the publicly available ResMem model. Unfortunately, the authors of ResMem did not publish their held-out test set, hence it is difficult to make a balanced comparison between the currently published ResMem model and any competing models. We propose 10 train/test splits that can be used by future researchers (available at https://github.com/brainpriority/vitmem_data). Moreover, ResMem was not benchmarked on LaMem, hence a fair comparison can only be made on the combined LaMem and MemCat data set.

For the semantic level analysis, we chose to use image captioning (34), as this provides an efficient method for deriving semantic properties from images at scale. Importantly, as the image captioning model was trained on human image descriptions, it is likely to extract content that humans find meaningful in images, and in particular objects and contexts that are relevant for conveying such meanings. Hence, nouns derived from such descriptions are likely to represent the portions of the content that would convey meaning to humans observing the images.

Data Sources. For the large-scale image memorability (LaMem) benchmark we used the LaMem dataset (12). The image set used by ResMem is a combination of the image sets LaMem (12) and MemCat (11), with LaMem containing 58,741 and MemCat 10,000 images, for a total of 68,741 images. ResMem is reported to have used a held-out test set with 5000 images, hence we randomly selected 5000 images as our test set for our 10 train/test splits for this combined data set. For our final model we aimed to clean up the data and combine more of the available data sets on image memorability. As the number of duplicated images within and between data sets is unknown, and duplicated images may interfere with performance measures, we aimed to deduplicate the data for this model. Duplicated images were identified by deriving embeddings from an off-the-shelf CNN model and then visually inspecting the most similar embeddings. Our analysis of the data sets LaMem and MemCat showed that LaMem has 229 duplicated images while MemCat has 4. Moreover, 295 of the images in LaMem are also in MemCat. We aimed to build a larger and more diverse data set by combining more sources, and for this we chose CVPR2011 (9) and FIGRIM (3). CVPR2011 had 6 internal duplicates, 651 duplicates against LaMem, 78 against MemCat and 9 against FIGRIM. FIGRIM had 20 duplicates against MemCat and 70 against LaMem. All identified duplicates were removed before merging the data sets. As the images from FIGRIM and CVPR2011 were cropped, we obtained the original images before including them in the data set. This resulted in a data set with 71,658 images. For this data set we performed a 10% split for the test set.
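A possible implementation of the embedding-based duplicate screening described above is sketched below. The choice of backbone, the cosine-similarity measure and the threshold are assumptions rather than the authors' stated choices, and flagged pairs would still be confirmed by visual inspection as described in the text.

import torch
import torch.nn.functional as F
import timm

# Off-the-shelf CNN used only as a feature extractor (backbone choice is illustrative).
encoder = timm.create_model("resnet50", pretrained=True, num_classes=0)  # num_classes=0 -> pooled features
encoder.eval()

@torch.no_grad()
def embed(batch):
    """Return L2-normalized embeddings for a batch of preprocessed image tensors."""
    feats = encoder(batch)
    return F.normalize(feats, dim=1)

def candidate_duplicates(embeddings, threshold=0.95):
    """Flag image pairs whose cosine similarity exceeds a threshold for visual inspection."""
    sims = embeddings @ embeddings.T  # cosine similarity, since embeddings are normalized
    sims.fill_diagonal_(-1.0)         # ignore self-similarity
    idx = torch.nonzero(sims > threshold)
    return [(int(i), int(j)) for i, j in idx if i < j]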
Results

Results on LaMem data set. On the LaMem data set the ViTMem model reached an average Spearman rank correlation of 0.711 and an MSE of 0.0076 (see Table 1). Here we compare our performance to measures obtained by MemNet (12), Squalli-Houssaini et al. (21) and Leonardi et al. (22).

Table 1. Comparison of model performance on the LaMem data set

Model                    | MSE Loss ↓ | Spearman ρ ↑
MemNet                   | Unknown    | 0.640
Squalli-Houssaini et al. | 0.0092     | 0.720
Leonardi et al.          | 0.0079     | 0.687
ViTMem                   | 0.0076     | 0.711

Results on the combined LaMem and MemCat data set. Training on 10 train/test splits of the combined data set, the results showed that ViTMem performed better than the ResMem model (see Table 2). The average across splits showed a Spearman rank correlation of 0.77 and an MSE of 0.005.

Table 2. Model performance on the combined LaMem and MemCat data set

Model  | MSE Loss ↓ | Spearman ρ ↑
ResMem | 0.009      | 0.67
ViTMem | 0.005      | 0.77

Results on the combined and cleaned data set. To assess model performance on the larger and cleaned data set, we made a train/test split and then performed repeated k-fold cross validation with 10 train/test splits on the training set. This resulted in a mean MSE loss of 0.006 and a mean Spearman rank correlation of 0.76 (see Table 3). In order to provide a model for the community we used the full data set to train the final model (ViTMem Final Model), which is published on the python package index as version 1.0.0. This was trained on the full training set and tested on its corresponding test set. The results showed a Spearman rank correlation of 0.77 and an MSE of 0.006 (see Table 3). The train/test splits are available on github.

Table 3. Model performance on the combined and cleaned data set

Model              | MSE Loss ↓ | Spearman ρ ↑
ViTMem             | 0.006      | 0.76
ViTMem Final Model | 0.006      | 0.77

Validation on independent data sets. To further validate our model, we used memorability scores from an independent data set by Dubey and colleagues named PASCAL-S (7, 35), consisting of 852 images and cropped objects from the same images. ViTMem achieved a Spearman correlation of 0.44 on the images and 0.21 on the objects. In comparison, ResMem achieved a correlation of 0.36 on the images and 0.14 on the objects. Validating against the THINGS data set (15), which consists of 26,106 images with memorability scores, yielded a Spearman rank correlation of 0.30 for ViTMem and 0.22 for ResMem.

Semantic level analysis. In order to better understand how the model predictions relate to the semantic content of the images, we performed image captioning (34) on the combined LaMem and MemCat data set and the Places205 data set (36). We extracted nouns from the resulting image descriptions and averaged behavioral or predicted memorability scores for each noun (37). That is, the memorability for each image was assigned to each noun derived from the image captioning procedure. For the combined LaMem and MemCat data set we averaged behavioral memorability scores over nouns (see Figure 1), while for the Places205 data set we averaged predicted memorability scores from the ViTMem model (see Figure 2). A general interpretation of the visualizations in Figures 1 and 2 is that they appear to reveal a dimension running from nouns usually observed outdoors to more indoor-related nouns, ending with nouns related to animals and, in particular, humans. This appears to reflect the distributions observed in previous work (9, 15), and hence helps to validate the model in terms of the image content it is sensitive to.
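The noun-level aggregation described above can be sketched as follows, assuming captions have already been generated for each image. TextBlob (37) is used for part-of-speech tagging; variable names are illustrative.

from collections import defaultdict
from textblob import TextBlob

def nouns_in(caption):
    """Extract singular and plural nouns from a generated image caption."""
    return [word.lower() for word, tag in TextBlob(caption).tags if tag in ("NN", "NNS")]

def memorability_by_noun(captions, scores):
    """Assign each image's (behavioral or predicted) memorability score to every
    noun in its caption, then average the scores per noun."""
    per_noun = defaultdict(list)
    for caption, score in zip(captions, scores):
        for noun in nouns_in(caption):
            per_noun[noun].append(score)
    return {noun: sum(vals) / len(vals) for noun, vals in per_noun.items()}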
To further investigate how similar the memorability associated with nouns was across data sets and models, we selected nouns whose frequency of occurrence exceeded the 85th percentile in each set (654 nouns for LaMem and MemCat, 2179 nouns for Places205), which resulted in 633 matched nouns across sets. Analysis of these showed a Spearman rank correlation of 0.89 and an R2 of 0.79, p<0.001 (see Figure 3). This analysis indicates that nouns from image captioning are a strong predictor of image memorability and that the ViTMem model is able to generalize the importance of such aspects from the training set to a new set of images.
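The matched-noun comparison described above reduces to a rank correlation between two per-noun score vectors. A minimal sketch follows, assuming the per-noun averages have already been computed and filtered to nouns above the 85th frequency percentile in each set; whether the reported R2 is a regression statistic or a squared correlation is not stated, so the squared Pearson correlation is used here as one reading.

import numpy as np
from scipy.stats import spearmanr, pearsonr

def compare_matched_nouns(behavioral_by_noun, predicted_by_noun):
    """Correlate mean memorability for nouns present in both data sets."""
    matched = sorted(set(behavioral_by_noun) & set(predicted_by_noun))
    x = np.array([behavioral_by_noun[n] for n in matched])
    y = np.array([predicted_by_noun[n] for n in matched])
    rho, p = spearmanr(x, y)
    r, _ = pearsonr(x, y)  # R^2 taken as the squared Pearson correlation (assumption)
    return len(matched), rho, p, r ** 2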
Discussion

Using vision transformers, we have improved on the state-of-the-art in image memorability prediction. Results showed that ViTMem performed equal to or better than state-of-the-art models on LaMem, and better than ResMem on the LaMem and MemCat hybrid data set. In addition, we assembled a new deduplicated hybrid data set and benchmarked the ViTMem model against this before training a final model. The model was further validated on additional data sets, and performed better than ResMem on these as well. Finally, we ran a semantic level analysis by using image captioning on the hybrid data set. We ranked the behavioral memorability scores of the images, labeled with nouns extracted from the captioning procedure. The results revealed that images labeled by nouns related to landscapes, cities, buildings and the like were ranked lowest, whereas images labeled by nouns related to animate objects and food were ranked highest. This finding is consistent with known category effects on memorability (3, 7, 11, 12, 15) and suggests that the labels extracted from the captioning procedure are strongly related to the factors that drive memorability for those images. Subsequently, we predicted memorability scores for images from a novel data set (Places205), ran the image captioning procedure, and ranked the predicted memorability scores of the images, labeled with nouns extracted from the captioning procedure. Visual inspection of the results revealed that the ranks were similar across samples and methods. This impression was confirmed by a strong correlation between matching pairs of nouns and 79% explained variance, suggesting that ViTMem captures the semantic content that drives memorability in images.

The use of image augmentations in training the ViTMem model, in combination with state-of-the-art performance, suggests that such augmentations do not disrupt the ability of the model to predict image memorability, and hence may further support the importance of semantic level properties in image memorability. That is, the augmentations modify a range of low-level image properties but mostly leave the semantic content intact.

In comparison with ResMem, which relies on a CNN-based residual neural network architecture, ViTMem is based on vision transformers, which integrate information in a more global manner (30). As images are compositions of several semantically identifiable objects or parts of objects, a more holistic approach may be more apt at delineating the relative relevance of objects given their context. That is, we speculate that a broader integration of image features allows for a more complete evaluation of an image's constituent features in relation to each other. Hence, if semantic content is important for predicting image memorability, the model may have weighed the importance of semantic components in relation to each other to a larger degree than models based on CNNs.

ViTMem code and train/test sets are shared on github (https://github.com/brainpriority/), and a python package named vitmem is available on the python package index (see supplementary Sup. Note 1 for a tutorial). Researchers and interested parties can use the model to predict memorability in existing or novel stimuli and employ it in research or applied settings. The ViTMem model will allow researchers to more precisely predict image memorability. The release of ViTMem follows ResMem in providing an accessible method for predicting image memorability. This is important for studies aiming to control for how easily an image can be remembered, and will, for example, allow experimental psychologists and neuroscientists to better control their research. Similarly, educators, advertisers and visual designers can leverage the model to improve the memorability of their content.

Fig. 1. Average behavioral image memorability scores for nouns that were extracted from images in the LaMem and MemCat data sets. The nouns shown are those that occurred most frequently or that are more frequent in the English language (38).

Fig. 2. Average ViTMem predicted image memorability scores for nouns that were extracted from images in the Places205 data set. The nouns shown are those that occurred most frequently or that are more frequent in the English language (38).

Fig. 3. Average memorability scores for images with matching nouns in different data sets. The y-axis shows average predicted memorability scores from ViTMem on the Places205 data set. The x-axis shows average behavioral memorability scores on the combined LaMem and MemCat data set.

Despite state-of-the-art performance in memorability prediction, improvements may still be possible to achieve. Previous works have shown benefits of pretraining networks on data sets of places and objects prior to fine-tuning for memorability prediction (39). Moreover, ViTMem does not take image captioning into account, which has been done successfully with CNNs (21, 22). Hence, there is potentially more to be gained from incorporating image semantics and/or pretraining on data sets of objects and places. In addition, ViTMem is only based on the "base" configuration of the available ViT models.
Model performance may still increase by +adopting the “large” or “huge” configurations of the model. +We conclude that ViTMem can be used to predict memora- +bility for images at a level that is equal to or better than state- +of-the-art models, and we propose that vision transformers +provide a new step forward in the computational prediction +of image memorability. +Hagen et al. +| +ViTMem +| +5 + +References +1. +Phillip Isola, Jianxiong Xiao, Devi Parikh, Antonio Torralba, and Aude Oliva. What makes a +photograph memorable? IEEE transactions on pattern analysis and machine intelligence, +36(7):1469–1482, 2013. +2. +Lore Goetschalckx, Pieter Moors, and Johan Wagemans. Image memorability across longer +time intervals. Memory, 26(5):581–588, 2018. +3. +Zoya Bylinskii, Phillip Isola, Constance Bainbridge, Antonio Torralba, and Aude Oliva. In- +trinsic and extrinsic effects on image memorability. Vision research, 116:165–178, 2015. +4. +Lore Goetschalckx, Jade Moors, and Johan Wagemans. Incidental image memorability. +Memory, 27(9):1273–1282, 2019. +5. +Zoya Bylinskii, Lore Goetschalckx, Anelise Newman, and Aude Oliva. Memorability: An +image-computable measure of information utility. In Human Perception of Visual Informa- +tion, pages 207–239. Springer, 2022. +6. +Nicole C Rust and Vahid Mehrpour. Understanding image memorability. Trends in cognitive +sciences, 24(7):557–568, 2020. +7. +Rachit Dubey, Joshua Peterson, Aditya Khosla, Ming-Hsuan Yang, and Bernard Ghanem. +What makes an object memorable? In Proceedings of the ieee international conference on +computer vision, pages 1089–1097, 2015. +8. +Wilma A Bainbridge, Daniel D Dilks, and Aude Oliva. Memorability: A stimulus-driven per- +ceptual neural signature distinctive from memory. NeuroImage, 149:141–152, 2017. +9. +Phillip Isola, Devi Parikh, Antonio Torralba, and Aude Oliva. Understanding the intrinsic +memorability of images. Advances in neural information processing systems, 24, 2011. +10. +Timothy F Brady, Talia Konkle, and George A Alvarez. A review of visual memory capacity: +Beyond individual items and toward structured representations. Journal of vision, 11(5): +4–4, 2011. +11. +Lore Goetschalckx and Johan Wagemans. Memcat: a new category-based image set quan- +tified on memorability. PeerJ, 7:e8169, 2019. +12. +Aditya Khosla, Akhil S. Raju, Antonio Torralba, and Aude Oliva. Understanding and predict- +ing image memorability at a large scale. In International Conference on Computer Vision +(ICCV), 2015. +13. +Michael Gygli, Helmut Grabner, Hayko Riemenschneider, Fabian Nater, and Luc Van Gool. +The interestingness of images. In Proceedings of the IEEE international conference on +computer vision, pages 1633–1640, 2013. +14. +Wilma A Bainbridge. The resiliency of image memorability: A predictor of memory separate +from attention and priming. Neuropsychologia, 141:107408, 2020. +15. +Max A. Kramer, Martin N. Hebart, Chris I. Baker, and Wilma A. Bainbridge. The features +underlying the memorability of objects. bioRxiv, 2022. doi: 10.1101/2022.04.29.490104. +16. +Eleanor Rosch, Carolyn B Mervis, Wayne D Gray, David M Johnson, and Penny Boyes- +Braem. Basic objects in natural categories. Cognitive psychology, 8(3):382–439, 1976. +17. +Douglas L Medin and John D Coley. Concepts and categorization. Perception and cognition +at century’s end: Handbook of perception and cognition, pages 403–439, 1998. +18. +Erdem Akagunduz, Adrian G Bors, and Karla K Evans. Defining image memorability using +the visual memory schema. 
IEEE transactions on pattern analysis and machine intelligence, +42(9):2165–2178, 2019. +19. +Muxuan Lyu, Kyoung Whan Choe, Omid Kardan, Hiroki P Kotabe, John M Henderson, and +Marc G Berman. Overt attentional correlates of memorability of scene images and their +relationships to scene semantics. Journal of Vision, 20(9):2–2, 2020. +20. +Coen D Needell and Wilma A Bainbridge. Embracing new techniques in deep learning for +estimating image memorability. Computational Brain & Behavior, pages 1–17, 2022. +21. +Hammad Squalli-Houssaini, Ngoc QK Duong, Marquant Gwenaëlle, and Claire-Hélène De- +marty. Deep learning for predicting image memorability. In 2018 IEEE international con- +ference on acoustics, speech and signal processing (ICASSP), pages 2371–2375. IEEE, +2018. +22. +Marco Leonardi, Luigi Celona, Paolo Napoletano, Simone Bianco, Raimondo Schettini, +Franco Manessi, and Alessandro Rozza. Image memorability using diverse visual features +and soft attention. In International Conference on Image Analysis and Processing, pages +171–180. Springer, 2019. +23. +Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and +James J DiCarlo. Performance-optimized hierarchical models predict neural responses in +higher visual cortex. Proceedings of the national academy of sciences, 111(23):8619–8624, +2014. +24. +Nicholas Baker, Hongjing Lu, Gennady Erlikhman, and Philip J Kellman. Local features +and global shape information in object classification by deep convolutional neural networks. +Vision research, 172:46–61, 2020. +25. +Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, +and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape +bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2018. +26. +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep +convolutional neural networks. +Advances in neural information processing systems, 25, +2012. +27. +Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, +Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, +et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv +preprint arXiv:2010.11929, 2020. +28. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N +Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural +information processing systems, 30, 2017. +29. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. +Identity mappings in deep +residual networks. In European conference on computer vision, pages 630–645. Springer, +2016. +30. +Shikhar Tuli, Ishita Dasgupta, Erin Grant, and Thomas L Griffiths. Are convolutional neural +networks or transformers more like human vision? arXiv preprint arXiv:2105.07197, 2021. +31. +Nicholas Baker and James H Elder. Deep learning models fail to capture the configural +nature of human shape perception. Iscience, 25(9):104913, 2022. +32. +Alexander Buslaev, Vladimir I Iglovikov, Eugene Khvedchenya, Alex Parinov, Mikhail +Druzhinin, and Alexandr A Kalinin. Albumentations: fast and flexible image augmentations. +Information, 11(2):125, 2020. +33. +Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. Imagenet-21k pretrain- +ing for the masses. arXiv preprint arXiv:2104.10972, 2021. +34. 
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. arXiv preprint arXiv:2202.03052, 2022.
35. Yin Li, Xiaodi Hou, Christof Koch, James M Rehg, and Alan L Yuille. The secrets of salient object segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 280–287, 2014.
36. Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features for scene recognition using places database. Advances in neural information processing systems, 27, 2014.
37. Steven Loria et al. textblob v0.17.1, October 2021.
38. Robyn Speer. rspeer/wordfreq: v3.0, September 2022.
39. Shay Perera, Ayellet Tal, and Lihi Zelnik-Manor. Is image memorability prediction solved? In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 0–0, 2019.

Supplementary Note 1: How to use the vitmem python package

Python needs to be installed on a computer before pip can be used to install the vitmem package. To install vitmem from a command prompt, run:

pip install vitmem

To predict image memorability for an image named "image.jpg", run the following in a python interpreter:

from vitmem import ViTMem
model = ViTMem()
memorability = model("image.jpg")
print(f"Predicted memorability: {memorability}")
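To score a whole folder of images rather than a single file, the documented single-image call can simply be applied in a loop. The folder path and file pattern below are illustrative:

from glob import glob
from vitmem import ViTMem

model = ViTMem()

# Predict memorability for every JPEG in a folder and collect the scores.
scores = {}
for path in sorted(glob("stimuli/*.jpg")):
    scores[path] = model(path)

for path, score in scores.items():
    print(f"{path}: {score:.3f}")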
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Results showed that ViTMem performed equal to or better than state-of-the- art models on all data sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Additional semantic level analyses revealed that ViTMem is particularly sensitive to the seman- tic content that drives memorability in images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' We conclude that ViTMem provides a new step forward, and propose that ViT-derived models can replace CNNs for computational pre- diction of image memorability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Researchers, educators, adver- tisers, visual designers and other interested parties can leverage the model to improve the memorability of their image material.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' memorability | vision transformers | psychology | semantic information Introduction Everyone knows that our memories depend on the experi- ences we have had, facts we have encountered, and the abil- ities we have to remember them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Combinations of these fac- tors differ between individuals and give rise to unique memo- ries in each of us.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' However, a complementary perspective on memory focuses on the material that is (to be) remembered rather than the individual that does the remembering.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' In one central study, Isola et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' (1) presented more than 2000 scene images in a continuous repeat-detection task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' The partici- pants were asked to respond whenever they saw an identical repeat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' The results revealed that the memorability score (per- cent correct detections) varied considerably between images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Most importantly, by running a consistency analysis in which Spearman’s rank correlation was calculated on the memo- rability scores from random splits of the participant group, Isola and colleagues (1) were able to show that the memora- bility score ranking was consistent across participants – some images were memorable and some were forgettable.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' These results indicate that the degree to which an image was cor- rectly detected depended on properties intrinsic to the image itself, not the traits of the observers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' This is important be- cause it shows that one can use the memorability scores in a stimulus set to predict memory performance in a new group of participants.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' These results have been replicated and extended in a num- ber of studies, revealing that similar findings are obtained with different memory tasks (2), different retention times (1, 2), different contexts (3), and independent of whether en- coding is intentional or incidental (4).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' However, although image memorability has proven to be a robust and reliable phenomenon, it has not been straightforward to pinpoint the image properties that drive it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' What seems clear though, is that memorability is multifaceted (5, 6).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' One way to char- acterize the underpinnings of memorability is to investigate the contribution from processes at different levels of the vi- sual processing stream.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' For example, at the earliest stages of processing of a visual scene, visual attributes such as local contrast, orientation, and color are coded.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' At an intermedi- ate level, contours are integrated, surfaces, shapes, and depth cues are segmented, and foreground and background are dis- tinguished.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' At a higher level, object recognition is conducted through matching with templates stored in long term mem- ory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Positive correlations between brightness and high contrast of objects with memorability has been found (7), but in general, low-level visual factors such as color, contrast, and spatial frequency do not predict memorability well (5, 8, 9).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' This is consistent with results showing that perceptual features are typically not retained in long term visual memory (10).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' In contrast to the low-level features, the evidence for a re- lation between intermediate to high level semantic features and memorability is much stronger.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' For example, images that contain people, faces, body parts, animals, and food are often associated with high memorability, whereas the opposite is a typical finding for objects like buildings and furniture and images of landscapes and parks (3, 7, 11, 12).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Other inter- mediate to high level features such as object interaction with the context or other objects, saliency factors, and image com- position also contribute to memorability (5).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Furthermore, although memorability is not reducible to high-level features such as aesthetics (1, 12), interestingness (1, 13), or popu- larity (12), emotions, particularly of negative valence, seem to predict higher memorability (9, 12).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Finally, memorabil- ity seems to be relatively independent of cognitive control, attention, or priming (14).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Overall, the available evidence indicates that memorability seems to capture intermediate- to high-level properties of semantics, such as objects or actions, and image composi- tion, such as layout and clutter, rather than low-level fea- Hagen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' | January 23, 2023 | 1–7 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='08647v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='CV] 20 Jan 2023 tures (5, 15).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' This fits well with the central role of semantic categories in organizing cognition and memory (16).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Gen- erally, the priority of semantic-level information enables us to quickly understand novel scenes and predict future events (17).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' For example, when inspecting a novel scene or an im- age, we do not primarily focus on low-level perceptual fea- tures or pixels, but prioritize more abstract visual schemas involving spatial regions, objects, and the relation between them (18).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Also, when people are asked to indicate which regions of an image helps them recognize an image, there is high consistency between people’s responses (18).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Similarly, fixation map data from eye-tracking have shown that there is a positive correlation between fixation map consistency and scene memorability, and this relation is associated with the presence of meaningful objects (3, 7, 19).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Bylinskii et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' (5) suggest that these properties most efficiently signal infor- mation of high utility to our species, for example, emotions, social aspects, animate objects (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=', faces, gestures, interac- tions), unexpected events, and tangible objects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Memorability prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' The finding that the memorabil- ity of an image is governed by properties intrinsic to the im- age itself, not only implies that one can predict memory per- formance in a new set of participants, as described above, but also that one can predict the memorability of a novel set of images (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=', memorability is an “image computable” fea- ture).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Given the availability of computational algorithms and high-quality training sets of sufficient size, one can predict memorability in novel sets of images for future (or already conducted) behavioral or neuroimaging studies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Such mem- orability prediction could also be valuable in a number of ap- plied settings (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=', within education, marketing and human- computer interaction).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Memorability researchers have employed computer vision models such as convolutional neural networks (CNNs) from early on (12), and advancements in the field have allowed researchers to predict image memorability with increasing precision (20–22).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' The inductive bias (the assumptions of the learning algorithms used to generalize to unseen data) of CNNs is inspired by knowledge about the primate visual sys- tem, and activations in the networks layers have, with some success, been used to explain neural activations (23).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' How- ever, some vulnerabilities of CNNs have been noted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' For ex- ample, CNNs appear to depend more on image texture than biological vision systems do (24), and have problems with recognizing images based on the shape of objects (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=', when texture is suppressed or removed).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' However, this vulnera- bility is reduced when the model’s shape bias is increased through training on shape representations (25).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' The LaMem train/test splits is a well-established benchmark for memorability prediction (12).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' The original MemNet (12), which is based on AlexNet (26), achieved a Spearman rank correlation of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='64 on this benchmark.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' There have been several improvements on this benchmark, the leading ap- proaches utilize image captioning to enhance memorability predictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' That is, a CNN produces a textual description of the image, which is then used to provide more high-level se- mantic information which is embedded into a semantic vec- tor space before being combined with CNN image features in a multi-layered perceptron network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Squalli-Houssaini et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' (21) used this approach to reach a Spearman correlation of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='72, with a mean squared error (MSE) of approximately 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='0092 (22).' 
Leonardi et al. (22) used the captioning approach with dual ResNet50s and a soft attention mechanism to reach a rank correlation of 0.687 with an MSE of 0.0079. The ResMem model (20), which is a CNN-based residual neural network architecture (ResNet), uses LaMem, but also takes advantage of a more recently published dataset named MemCat (11). This is a data set containing 10,000 images based on the categories animals, food, landscape, sports and vehicles. This data set also has a higher split-half correlation than LaMem. Needell and Bainbridge (20) argue that the LaMem dataset on its own is lacking in generalizability due to poor sampling of naturalistic images; that is, the images are more intended as artistic renderings designed to attract an online audience. Hence, combining MemCat with LaMem should potentially yield a more generalizable model. Moreover, the increased size of the combined dataset might help drive model performance further than previous models based on LaMem alone. The authors of ResMem also noted the importance of semantic information and structured their approach to utilize semantic representations from a ResNet model in order to improve predictions. An added benefit of ResMem is that it is shared on the Python Package Index, which makes it easily accessible to researchers in diverse fields.

Vision transformers. Vision transformers (ViT) have recently been shown to provide similar or better performance than CNNs in a variety of computer vision tasks (27).
This architecture was first introduced in the natural language processing field (28) for capturing long-range dependencies in text, and it offers a superior speed/performance balance relative to ResNet architectures (29). Moreover, ViTs have been shown to produce errors that are more similar to human errors (30), suggesting that they could take similar information into account (see also (31)). A reason for this may be that ViTs are likely to take more of the global context into account and to be more dependent on the shape of objects than on their texture (30). While it is not entirely clear why such properties might yield better predictions of image memorability, they could still help inform the discourse on which visual characteristics are relevant, as well as potentially yield a better model for predicting image memorability. Hence, we set out to investigate whether vision transformers can yield better predictions of memorability than the state-of-the-art in image memorability prediction. In particular, we aimed to (i) benchmark a model based on ViT against the well-established LaMem train/test splits (12), (ii) train a ViT against the combined LaMem and MemCat data sets (20) to benchmark against the ResMem model (20), (iii) train a final ViT model against a more diverse and deduplicated data set, (iv) validate the final ViT model against additional independent data sets, and (v) inspect semantic level distributions of memorability scores for behavioral and predicted data.

Methods
As our model is based on ViT to predict memorability, we named it ViTMem.
Because it has been shown that low-level visual features are less important for image memorability prediction, it seemed appropriate to use image augmentations in training our ViTMem model to reduce overfitting. This approach has also been used by others (22), although not to the extent done here. The augmentations used consisted of horizontal flipping, sharpen, blur, motion blur, random contrast, hue saturation value, CLAHE, shift scale rotate, perspective, optical distortion and grid distortion (32). For training all models we used PyTorch, the Adam optimizer and mean squared error (squared L2 norm) as the loss function. Images were input as batches of 32 in RGB and resized to 256×256 pixels before applying augmentations with a probability of 0.7 and center cropping to 224×224 pixels. For creating ViTMem we used transfer learning on a vision transformer (27) model pretrained on ImageNet-1k (vit_base_patch16_224_miil) (33). The final classification layer was reduced to a single output with a sigmoid activation function (a sketch of this training setup is given below). As we aim to provide an accessible model to the research community, it is also necessary to compare against the publicly available ResMem model. Unfortunately, the authors of ResMem did not publish their held-out test set, hence it is difficult to make a balanced comparison between the currently published ResMem model and any competing models. We propose 10 train/test splits that can be used by future researchers (available at https://github.com/brainpriority/vitmem_data). Moreover, ResMem was not benchmarked on LaMem, hence a fair comparison can only be made on the combined LaMem and MemCat data set.
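A minimal sketch of this training setup is given below, assuming the albumentations library for the augmentations and timm for the pretrained ViT backbone. The grouping of the augmentations into a single OneOf block, the learning rate, and the bare-bones training step are illustrative simplifications rather than the exact published configuration.

```python
# Minimal sketch of the ViTMem training setup. Augmentation parameters, the
# learning rate and the training loop are illustrative assumptions.
import timm
import torch
import torch.nn as nn
import albumentations as A
from albumentations.pytorch import ToTensorV2

# Resize to 256x256, apply one augmentation with overall probability 0.7,
# then center crop to 224x224 and normalize.
# Usage on a PIL image: tensor = train_transform(image=np.asarray(img))["image"]
train_transform = A.Compose([
    A.Resize(256, 256),
    A.OneOf([
        A.HorizontalFlip(),
        A.Sharpen(),
        A.Blur(),
        A.MotionBlur(),
        A.RandomBrightnessContrast(),  # stands in for "random contrast"
        A.HueSaturationValue(),
        A.CLAHE(),
        A.ShiftScaleRotate(),
        A.Perspective(),
        A.OpticalDistortion(),
        A.GridDistortion(),
    ], p=0.7),
    A.CenterCrop(224, 224),
    A.Normalize(),
    ToTensorV2(),
])

# Pretrained ViT backbone; the classification head is replaced by a single
# output unit, and a sigmoid maps predictions to the [0, 1] memorability range.
model = timm.create_model("vit_base_patch16_224_miil", pretrained=True, num_classes=1)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is an assumption

def train_step(images, scores):
    """One gradient step on a batch of augmented images and target scores."""
    optimizer.zero_grad()
    preds = torch.sigmoid(model(images)).squeeze(1)
    loss = criterion(preds, scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```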
For the semantic level analysis, we chose to use image captioning (34), as this provides an efficient method for deriving semantic properties from images at scale. Importantly, as the image captioning model was trained on human image descriptions, it is likely to extract content that humans find meaningful in images, and in particular objects and contexts that are relevant for conveying such meanings. Hence, nouns derived from such descriptions are likely to be representative of the content that conveys meaning to humans observing the images.

Data Sources. For the large-scale image memorability (LaMem) benchmark we used the LaMem dataset (12). The image set used by ResMem is a combination of the image sets LaMem (12) and MemCat (11); LaMem contains 58,741 images and MemCat 10,000, for a total of 68,741 images. ResMem is reported to have used a held-out test set of 5000 images, hence we randomly selected 5000 images as the test set for each of our 10 train/test splits on this combined data set. For our final model we aimed to clean up the data and combine more of the available data sets on image memorability. As the number of duplicated images within and between data sets is unknown, and duplicated images may interfere with performance measures, we aimed to deduplicate the data for this model. Duplicated images were identified by deriving embeddings from an off-the-shelf CNN model and then visually inspecting the most similar embeddings (see the sketch below). Our analysis of the LaMem and MemCat data sets showed that LaMem has 229 duplicated images while MemCat has 4. Moreover, 295 of the images in LaMem are also in MemCat.
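The sketch below illustrates this kind of embedding-based duplicate screening, assuming a torchvision ResNet50 as the off-the-shelf CNN; the similarity threshold is an arbitrary illustrative value, and flagged pairs would still be confirmed by visual inspection as described above.

```python
# Sketch of embedding-based duplicate screening. The choice of ResNet50 and the
# similarity threshold are assumptions; candidate pairs are only flagged here
# and would be verified visually. For large sets, an approximate nearest
# neighbour index would replace the O(n^2) pairwise comparison.
import torch
import torch.nn.functional as F
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()  # keep the 2048-d pooled features
backbone.eval()
preprocess = weights.transforms()

@torch.no_grad()
def embed(paths):
    """Return L2-normalized embeddings for a list of image file paths."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    return F.normalize(backbone(batch), dim=1)

def candidate_duplicates(paths, threshold=0.97):
    """Yield image pairs whose cosine similarity exceeds the threshold."""
    emb = embed(paths)
    sims = emb @ emb.T
    for i in range(len(paths)):
        for j in range(i + 1, len(paths)):
            if sims[i, j] > threshold:
                yield paths[i], paths[j], sims[i, j].item()
```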
We aimed to build a larger and more diverse data set by combining more sources, and for this we chose CVPR2011 (9) and FIGRIM (3). CVPR2011 had 6 internal duplicates, 651 duplicates against LaMem, 78 against MemCat and 9 against FIGRIM. FIGRIM had 20 duplicates against MemCat and 70 against LaMem. All identified duplicates were removed before merging the data sets. As the images from FIGRIM and CVPR2011 were cropped, we obtained the original images before including them in the data set. This resulted in a data set with 71,658 images. For this data set we performed a 10% split for the test set.

Results
Results on LaMem data set. On the LaMem data set the ViTMem model reached an average Spearman rank correlation of 0.711 and an MSE of 0.0076 (see Table 1). Here we compare our performance to measures obtained by MemNet (12), Squalli-Houssaini et al. (21) and Leonardi et al. (22).

Table 1. Comparison of model performance on the LaMem data set

Model                      MSE Loss ↓   Spearman ρ ↑
MemNet                     Unknown      0.640
Squalli-Houssaini et al.   0.0092       0.720
Leonardi et al.            0.0079       0.687
ViTMem                     0.0076       0.711
Results on the combined LaMem and MemCat data set. Training on 10 train/test splits on the combined data set, the results showed that ViTMem performed better than the ResMem model (see Table 2). The average across splits showed a Spearman rank correlation of 0.77 and an MSE of 0.005.

Table 2. Model performance on the combined LaMem and MemCat data set

Model     MSE Loss ↓   Spearman ρ ↑
ResMem    0.009        0.67
ViTMem    0.005        0.77

Results on combined and cleaned data set. To assess model performance on the larger and cleaned data set, we made a train/test split and then performed repeated k-fold cross validation with 10 train/test splits on the training set. This resulted in a mean MSE loss of 0.006 and a mean Spearman rank correlation of 0.76 (see Table 3).
In order to provide a model for the community, we used the full data set to train the final model (ViTMem Final Model), which is published on the Python Package Index as version 1.0.0. This model was trained on the full training set and tested on its corresponding test set. The results showed a Spearman rank correlation of 0.77 and an MSE of 0.006 (see Table 3). The train/test splits are available on GitHub.

Table 3. Model performance on the combined and cleaned data set

Model                 MSE Loss ↓   Spearman ρ ↑
ViTMem                0.006        0.76
ViTMem Final Model    0.006        0.77

Validation on independent data sets. To further validate our model, we used memorability scores from an independent data set by Dubey and colleagues named PASCAL-S (7, 35), consisting of 852 images and cropped objects from the same images. ViTMem achieved a Spearman correlation of 0.44 on the images and 0.21 on the objects.
In comparison, ResMem achieved a correlation of 0.36 on the images and 0.14 on the objects. Validating against the THINGS data set (15), which consists of 26,106 images with memorability scores, ViTMem achieved a Spearman rank correlation of 0.30 and ResMem 0.22.

Semantic level analysis. In order to better understand how the model predictions relate to the semantic content of the images, we performed image captioning (34) on the combined LaMem and MemCat data set and on the Places205 data set (36). We extracted nouns from the resulting image descriptions and averaged behavioral or predicted memorability scores for each noun (37); that is, the memorability score for each image was assigned to each noun derived from the image captioning procedure (see the sketch below). For the combined LaMem and MemCat data set we averaged behavioral memorability scores over nouns (see Figure 1), while for the Places205 data set we averaged predicted memorability scores from the ViTMem model (see Figure 2). A general interpretation of the visualizations in Figures 1 and 2 is that they appear to reveal a dimension running from nouns usually observed outdoors, to more indoor-related nouns, and ending with nouns related to animals and, in particular, humans. This would appear to reflect the distributions observed in previous work (9, 15), and hence helps to validate the model in terms of the image content it is sensitive to.
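The sketch below illustrates this procedure, assuming spaCy for part-of-speech tagging and pandas for the per-noun averaging; the captions and memorability scores are taken as given inputs, and the variable names in the commented usage example are placeholders.

```python
# Sketch of the semantic level analysis: extract nouns from image captions and
# average memorability scores per noun. spaCy and pandas are assumed tools; the
# captions themselves come from an image captioning model as described above.
import spacy
import pandas as pd
from scipy.stats import spearmanr

nlp = spacy.load("en_core_web_sm")  # requires the small English model

def nouns_in(caption):
    """Return the lemmatized nouns found in a caption string."""
    return [tok.lemma_.lower() for tok in nlp(caption) if tok.pos_ == "NOUN"]

def memorability_per_noun(captions, scores):
    """Assign each image's memorability score to every noun in its caption,
    then average per noun."""
    rows = [(noun, score) for caption, score in zip(captions, scores)
            for noun in nouns_in(caption)]
    df = pd.DataFrame(rows, columns=["noun", "memorability"])
    return df.groupby("noun")["memorability"].mean()

# Comparing behavioral and predicted per-noun averages on a shared set of nouns
# (as in the matched-noun analysis) could then look like (placeholder names):
# behavioral = memorability_per_noun(lamem_captions, lamem_scores)
# predicted = memorability_per_noun(places_captions, places_predictions)
# common = behavioral.index.intersection(predicted.index)
# rho, p = spearmanr(behavioral[common], predicted[common])
```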
To further investigate how similar the memorability associated with nouns was across the two sets, we selected nouns occurring more frequently than the 85th percentile in each set (654 nouns for LaMem and MemCat, 2,179 nouns for Places205); this resulted in 633 matched nouns across sets. Analysis of these showed a Spearman rank correlation of 0.89 and an R² of 0.79, p < 0.001 (see Figure 3). This analysis indicates that nouns from image captioning are a strong predictor of image memorability and that the ViTMem model is able to generalize the importance of such aspects from the training set to a new set of images.

Discussion
Using vision transformers we have improved on the state-of-the-art in image memorability prediction. Results showed that ViTMem performed equal to or better than state-of-the-art models on LaMem, and better than ResMem on the LaMem and MemCat hybrid data set. In addition, we assembled a new deduplicated hybrid data set and benchmarked the ViTMem model against this before training a final model. The model was further validated on additional data sets, and performed better than ResMem on these as well. Finally, we ran a semantic level analysis by using image captioning on the hybrid data set. We ranked the behavioral memorability scores of the images, labeled with nouns extracted from the captioning procedure. The results revealed that images labeled by nouns related to landscapes, cities, buildings and the like were ranked lowest, whereas images labeled by nouns related to animate objects and food were ranked highest.
This finding is consistent with known category effects on memorability (3, 7, 11, 12, 15) and suggests that the labels extracted from the captioning procedure are strongly related to factors that drive memorability for those images. Subsequently, we predicted memorability scores on images from a novel data set (Places205), ran the image captioning procedure, and ranked the predicted memorability scores of the images, labeled with nouns extracted from the captioning procedure. Visual inspection of the results revealed that the ranks were similar across samples and methods. This impression was confirmed by a strong correlation between matching pairs of nouns and 79% explained variance, suggesting that ViTMem captures the semantic content that drives memorability in images. The use of image augmentations in training the ViTMem model, in combination with state-of-the-art performance, suggests that such augmentations do not disrupt the ability of the model to predict image memorability, and hence may further support the importance of semantic level properties in image memorability. That is, the augmentations modify a range of low-level image properties but mostly leave the semantic content intact. In comparison with ResMem, which relies on a CNN-based residual neural network architecture, ViTMem is based on vision transformers, which integrate information in a more global manner (30). As images are compositions of several semantically identifiable objects or parts of objects, a more holistic approach may be more apt at delineating the relative relevance of objects given their context. That is, we speculate that a broader integration of image features allows for a more complete evaluation of an image's constituent features in relation to each other. Hence, if semantic content is important for predicting image memorability, the model may have weighed the importance of semantic components in relation to each other to a larger degree than models based on CNNs. ViTMem code and train/test sets are shared on GitHub (https://github.com/brainpriority/),
and a Python package named vitmem is available on the Python Package Index (see supplementary Sup. Note 1 for a tutorial). Researchers and interested parties can use the model to predict memorability.
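As an illustration of the intended workflow, a hypothetical usage sketch is shown below; the class and method names are placeholders and not necessarily the documented vitmem interface, which is described in the package tutorial (Sup. Note 1).

```python
# Hypothetical usage sketch; see the vitmem tutorial (Sup. Note 1) for the
# actual interface. Assumed installation: pip install vitmem
from PIL import Image
from vitmem import ViTMem  # placeholder class name for illustration

model = ViTMem()                      # load the published final model
image = Image.open("example.jpg")     # any RGB image file
score = model.predict(image)          # predicted memorability in [0, 1]
print(f"Predicted memorability: {score:.3f}")
```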
[Figure 1: word plot of nouns ordered by average behavioral memorability on a "Memorability" axis ranging from about 0.56 to 0.90, with outdoor scene nouns (e.g., mountains, skyline, clouds, sunset) at the low end and person- and body-related nouns (e.g., face, makeup, tattoo) at the high end.]

Fig. 1. Average behavioral image memorability scores for nouns that were extracted from images in the LaMem and MemCat data sets. The nouns shown are those that occurred most frequently or that are more frequent in the English language (38).

[Figure 2: corresponding word plot of ViTMem-predicted memorability scores per noun for the Places205 data set, on a "Memorability" axis beginning at about 0.52.]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='90 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='badlands ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='rim ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='stormy ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='glacier ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='mountain ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='sun ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='town ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='hill ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='fireplace ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='houses ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='sunset ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='snow ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='slope ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='desert ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='dusk ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='rocks ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='couches ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='city ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='cabinets ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='steeple ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='university ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='street ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='place ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='hotel ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='highway ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='cathedral ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='formation ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='center ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='building ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='beach ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='tree ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='home ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='way ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='stone ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='lot ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='fire ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='christmas ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='lighthouse ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='space ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='monument ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='desk ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='people ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='crowd ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='boat ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='wall ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='inside ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='airport ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='music ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='van ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='museum ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='model ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='round ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='stage ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='statue ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='baseball ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='auditorium ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='party ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='classroom ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='tent ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='stand ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='row ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='court ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='store ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='picture ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='pink ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='bus ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='shelf ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='bowling ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='sale ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='men ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='bars ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='family ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='gym ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='fish ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='boy ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='motel ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='woman ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='ring ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='rack ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='soldier ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='girl ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='ties ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='dresses ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='words ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='name ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='dancing ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='suit ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='wrestlers ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='arms ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='mannequin ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='cookies ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='cream ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='shirt ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='wife ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='cupcakes ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='chocolate ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='bikini ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='hillside ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='clouds ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='mountains ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='valley ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='cloud ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='farm ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='village ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='snowy ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='square ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='waves ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='vineyard ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='island ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='view ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='mansion ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='smoke ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='castle ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='living ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='coast ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='lawn ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='area ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='church ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='house ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='tower ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='clock ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='field ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='road ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='rain ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='wave ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='sink ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='top ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='state ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='water ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='chairs ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='bed ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='room ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='side ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='birds ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='dock ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='leaves ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='park ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='supplies ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='force ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='station ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='table ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='play ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='post ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='cross ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='market ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='desks ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='photos ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='group ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='image ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='library ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='game ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='line ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='school ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='video ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='dog ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='food ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='star ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='crib ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='show ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='clothes ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='book ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='floor ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='children ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='man ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='heart ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='baby ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='display ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='sign ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='roller ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='women ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='class ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='football ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='girls ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='case ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='hands ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='team ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='desserts ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='face ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='shirts ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='suits ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='logo ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='hair ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='plate ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='pastries ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='head ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='grave ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='meat ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='tie ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='bread ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='donuts ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='mouth ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='dance ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='dress ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='dancer ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Average ViTMem predicted image memorability scores for nouns that were extracted from images in the Places205 data set.' 
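The word-level summaries in Figs. 1 and 2 come down to averaging memorability scores over all images in which a given noun occurs. As a minimal illustration, and not the exact analysis pipeline used here, the sketch below computes such per-noun averages from a hypothetical CSV with one row per image, a memorability column, and a column of space-separated extracted nouns; the file name and column names are placeholders.

# Minimal sketch: average memorability per noun, in the spirit of Figs. 1-2.
# Assumes a CSV with hypothetical columns "memorability" (float) and
# "nouns" (space-separated nouns extracted from each image).
from collections import defaultdict
import csv

def noun_level_memorability(csv_path, min_count=10):
    """Return {noun: mean memorability} for nouns seen at least min_count times."""
    scores = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            m = float(row["memorability"])
            for noun in row["nouns"].split():
                scores[noun].append(m)
    return {n: sum(v) / len(v) for n, v in scores.items() if len(v) >= min_count}

if __name__ == "__main__":
    per_noun = noun_level_memorability("lamem_memcat_nouns.csv")  # hypothetical file
    for noun, mem in sorted(per_noun.items(), key=lambda kv: kv[1], reverse=True)[:20]:
        print(f"{noun:15s} {mem:.3f}")

The same function applies unchanged whether the scores in the table are behavioral (as in Fig. 1) or ViTMem predictions (as in Fig. 2).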
Fig. 3. Average memorability scores for images with matching nouns in different data sets. The y-axis shows average predicted memorability scores from ViTMem on the Places205 data set. The x-axis shows average behavioral memorability scores on the combined LaMem and MemCat data set.

in existing or novel stimuli and employ them in research or applied settings. The ViTMem model will allow researchers to more precisely predict image memorability. The release of ViTMem follows up ResMem in providing an accessible method for predicting image memorability. This is important for studies aiming to control for how easily an image can be remembered. For example, it will allow experimental psychologists and neuroscientists to better control their research materials.
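As an illustration of how a memorability model can be used to control stimulus material, the sketch below scores a folder of candidate images with a ViT-based regressor and keeps only those falling inside a target memorability band. This is a hedged example rather than the released ViTMem interface: the backbone and single-output head follow the general recipe described in this paper, but the weight file, preprocessing statistics, output squashing, and selection thresholds are assumptions.

# Sketch: screen candidate stimuli by predicted memorability (assumed setup,
# not the released ViTMem API). Requires torch, timm, torchvision, and Pillow.
from pathlib import Path
import torch
import timm
from PIL import Image
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# ViT "base" backbone with a single regression output for memorability.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=1)
# model.load_state_dict(torch.load("vitmem_weights.pth"))  # hypothetical fine-tuned weights
model.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),  # assumed statistics
])

@torch.no_grad()
def predict_memorability(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    return torch.sigmoid(model(img)).item()  # squash to [0, 1]; head choice is assumed

# Keep only stimuli inside a medium-memorability band (thresholds are arbitrary).
candidates = sorted(Path("stimuli/").glob("*.jpg"))
selected = [p for p in candidates if 0.65 <= predict_memorability(p) <= 0.80]
print(f"kept {len(selected)} of {len(candidates)} images")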
Similarly, educators, advertisers, and visual designers can leverage the model to improve the memorability of their content. Despite its state-of-the-art performance in memorability prediction, further improvements may still be achievable. Previous work has shown benefits of pretraining networks on data sets of places and objects prior to fine-tuning for memorability prediction (39). Moreover, ViTMem does not take image captioning into account, which has been done successfully with CNNs (21, 22). Hence, there is potentially more to be gained from incorporating image semantics and/or pretraining on data sets of objects and places. In addition, ViTMem is based only on the “base” configuration of the available ViT models. Performance may therefore increase further by adopting the “large” or “huge” configurations. We conclude that ViTMem can be used to predict image memorability at a level that is equal to or better than state-of-the-art models, and we propose that vision transformers provide a new step forward in the computational prediction of image memorability.
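Of the possible extensions above, swapping in a larger ViT configuration is the most mechanical change: with the timm library it amounts to requesting a different backbone before fine-tuning. The snippet below sketches that substitution under the same assumed regression setup; the training loop, data pipeline, and hyperparameters are omitted or arbitrary.

# Sketch: the memorability-regression setup with a larger ViT backbone (assumed recipe).
import timm
import torch

# "vit_base_patch16_224" -> "vit_large_patch16_224"; a "huge" variant could be
# substituted the same way if available in the installed timm version.
backbone = timm.create_model("vit_large_patch16_224", pretrained=True, num_classes=1)
optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-5, weight_decay=0.01)
loss_fn = torch.nn.MSELoss()  # regress onto behavioral memorability scores in [0, 1]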
References
1. Phillip Isola, Jianxiong Xiao, Devi Parikh, Antonio Torralba, and Aude Oliva. What makes a photograph memorable? IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7):1469–1482, 2013.
2. Lore Goetschalckx, Pieter Moors, and Johan Wagemans. Image memorability across longer time intervals. Memory, 26(5):581–588, 2018.
3. Zoya Bylinskii, Phillip Isola, Constance Bainbridge, Antonio Torralba, and Aude Oliva. Intrinsic and extrinsic effects on image memorability. Vision Research, 116:165–178, 2015.
4. Lore Goetschalckx, Jade Moors, and Johan Wagemans. Incidental image memorability. Memory, 27(9):1273–1282, 2019.
5. Zoya Bylinskii, Lore Goetschalckx, Anelise Newman, and Aude Oliva. Memorability: An image-computable measure of information utility. In Human Perception of Visual Information, pages 207–239. Springer, 2022.
6. Nicole C Rust and Vahid Mehrpour. Understanding image memorability. Trends in Cognitive Sciences, 24(7):557–568, 2020.
7. Rachit Dubey, Joshua Peterson, Aditya Khosla, Ming-Hsuan Yang, and Bernard Ghanem. What makes an object memorable? In Proceedings of the IEEE International Conference on Computer Vision, pages 1089–1097, 2015.
8. Wilma A Bainbridge, Daniel D Dilks, and Aude Oliva. Memorability: A stimulus-driven perceptual neural signature distinctive from memory. NeuroImage, 149:141–152, 2017.
9. Phillip Isola, Devi Parikh, Antonio Torralba, and Aude Oliva. Understanding the intrinsic memorability of images. Advances in Neural Information Processing Systems, 24, 2011.
10. Timothy F Brady, Talia Konkle, and George A Alvarez. A review of visual memory capacity: Beyond individual items and toward structured representations. Journal of Vision, 11(5):4–4, 2011.
11. Lore Goetschalckx and Johan Wagemans. MemCat: A new category-based image set quantified on memorability. PeerJ, 7:e8169, 2019.
12. Aditya Khosla, Akhil S. Raju, Antonio Torralba, and Aude Oliva. Understanding and predicting image memorability at a large scale. In International Conference on Computer Vision (ICCV), 2015.
13. Michael Gygli, Helmut Grabner, Hayko Riemenschneider, Fabian Nater, and Luc Van Gool. The interestingness of images. In Proceedings of the IEEE International Conference on Computer Vision, pages 1633–1640, 2013.
14. Wilma A Bainbridge. The resiliency of image memorability: A predictor of memory separate from attention and priming. Neuropsychologia, 141:107408, 2020.
15. Max A. Kramer, Martin N. Hebart, Chris I. Baker, and Wilma A. Bainbridge. The features underlying the memorability of objects. bioRxiv, 2022. doi: 10.1101/2022.04.29.490104.
16. Eleanor Rosch, Carolyn B Mervis, Wayne D Gray, David M Johnson, and Penny Boyes-Braem. Basic objects in natural categories. Cognitive Psychology, 8(3):382–439, 1976.
17. Douglas L Medin and John D Coley. Concepts and categorization. Perception and Cognition at Century's End: Handbook of Perception and Cognition, pages 403–439, 1998.
18. Erdem Akagunduz, Adrian G Bors, and Karla K Evans. Defining image memorability using the visual memory schema. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(9):2165–2178, 2019.
19. Muxuan Lyu, Kyoung Whan Choe, Omid Kardan, Hiroki P Kotabe, John M Henderson, and Marc G Berman. Overt attentional correlates of memorability of scene images and their relationships to scene semantics. Journal of Vision, 20(9):2–2, 2020.
20. Coen D Needell and Wilma A Bainbridge. Embracing new techniques in deep learning for estimating image memorability. Computational Brain & Behavior, pages 1–17, 2022.
21. Hammad Squalli-Houssaini, Ngoc QK Duong, Marquant Gwenaëlle, and Claire-Hélène Demarty. Deep learning for predicting image memorability. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2371–2375. IEEE, 2018.
22. Marco Leonardi, Luigi Celona, Paolo Napoletano, Simone Bianco, Raimondo Schettini, Franco Manessi, and Alessandro Rozza. Image memorability using diverse visual features and soft attention. In International Conference on Image Analysis and Processing, pages 171–180. Springer, 2019.
23. Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624, 2014.
24. Nicholas Baker, Hongjing Lu, Gennady Erlikhman, and Philip J Kellman. Local features and global shape information in object classification by deep convolutional neural networks. Vision Research, 172:46–61, 2020.
25. Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2018.
26. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
27. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
28. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
29. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
30. Shikhar Tuli, Ishita Dasgupta, Erin Grant, and Thomas L Griffiths. Are convolutional neural networks or transformers more like human vision? arXiv preprint arXiv:2105.07197, 2021.
31. Nicholas Baker and James H Elder. Deep learning models fail to capture the configural nature of human shape perception. iScience, 25(9):104913, 2022.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Alexander Buslaev, Vladimir I Iglovikov, Eugene Khvedchenya, Alex Parinov, Mikhail Druzhinin, and Alexandr A Kalinin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Albumentations: fast and flexible image augmentations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Information, 11(2):125, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Imagenet-21k pretrain- ing for the masses.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' arXiv preprint arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='10972, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' arXiv preprint arXiv:2202.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content='03052, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' 35.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' Yin Li, Xiaodi Hou, Christof Koch, James M Rehg, and Alan L Yuille.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' The secrets of salient object segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 280–287, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FAT4oBgHgl3EQfqR39/content/2301.08647v1.pdf'} +page_content=' 36.' 
Supplementary Note 1: How to use the vitmem python package

Python needs to be installed on a computer before pip can be used to install the vitmem package. To install vitmem, run the following from a command prompt:

pip install vitmem

To predict image memorability for an image named "image.jpg", run the following in a Python interpreter:
from vitmem import ViTMem

model = ViTMem()
memorability = model("image.jpg")
print(f"Predicted memorability: {memorability}")
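If many images need to be scored, the same call can simply be repeated per file. The sketch below assumes only what the example above shows (a ViTMem instance called with one image path at a time); the "images" folder name is a hypothetical placeholder.

from pathlib import Path

from vitmem import ViTMem

model = ViTMem()

# Score every JPEG in a hypothetical "images" folder and print them
# from most to least memorable.
scores = {path.name: model(str(path)) for path in Path("images").glob("*.jpg")}
for name, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {score}")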
diff --git a/0NAyT4oBgHgl3EQfoPhb/content/tmp_files/2301.00503v1.pdf.txt b/0NAyT4oBgHgl3EQfoPhb/content/tmp_files/2301.00503v1.pdf.txt
new file mode 100644
--- /dev/null
+++ b/0NAyT4oBgHgl3EQfoPhb/content/tmp_files/2301.00503v1.pdf.txt
A Concept Knowledge Graph for User Next Intent Prediction at Alipay

Yacheng He (Ant Group, Hangzhou, China, heyachen.hyc@antgroup.com), Qianghuai Jia* (Ant Group, Hangzhou, China, qianghuai.jqh@antgroup.com), Lin Yuan (Ant Group, Hangzhou, China, huiwai.yl@antgroup.com), Ruopeng Li (Ant Group, Hangzhou, China, ruopeng.lrp@antgroup.com), Yixin Ou (Zhejiang University, Hangzhou, China, ouyixin@zju.edu.cn), Ningyu Zhang (Zhejiang University, Hangzhou, China, zhangningyu@zju.edu.cn)

ABSTRACT
This paper illustrates the technologies of user next intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay1, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent; it is an offline concept knowledge graph in the Life-Service domain that models the historical behaviors of users, the rich content interacted with by users, and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of the downstream tasks while retaining explainability.

1 https://global.alipay.com/platform/site/ihome

CCS CONCEPTS
• Information systems → Query representation; Information extraction.

KEYWORDS
Knowledge Graph; Intent Prediction; Graph Embedding; Multi-label Classification

1 INTRODUCTION
User next intent prediction – the ability to automatically infer the next decision of users based on historical behavior and background
knowledge – holds an important place in in-device Apps [17]. For example, in digital life service platforms such as Alipay, users often purchase snacks at the cinema (corresponding intent "buy snacks") after buying movie tickets via TaoPiaoPiao2 (corresponding intent "buy movie tickets"), which implies that the intent "buy movie tickets" may lead to the following intent "buy snacks." As shown in Figure 1, the ability to infer the future intents of users has the potential to be advantageous for tasks such as recommendation, searching, transaction risk management and so on.

2 https://dianying.taobao.com/

Figure 1: The user next intent prediction system at Alipay. Sub-figure (a) illustrates the core ontology and a subgraph of AlipayKG. In sub-figure (b), an example of the user's historical interactions and intent sequence is shown in gray-grounded boxes, and the next intent is marked with a red "?" that has been inferred as "buy snacks" by the next intent prediction model, whose outputs provide a clear signal to downstream applications, as shown in sub-figure (c).

Intuitively, user intent can be characterized as clustering patterns of user behaviors, which are usually hidden in the content interacted with or generated by users in mobile applications. Specifically, the core of understanding user intent at Alipay lies in systematic and explicit knowledge modeling of the user's situation and the user's interacted item content, which consists of queries, applet services, bills, coupons, stores, reviews, etc. Concretely, we summarize the two non-trivial issues in user next intent prediction at Alipay as follows:

• How to characterize user intent. It is challenging to abstract and encode intent from user behaviors that are very diverse and cannot be directly observed. In particular, unlike e-commerce scenarios such as Amazon3, which mainly involve shopping intent, the behaviors at Alipay are varied, including shopping, trips and payment, which further increases the difficulty of intent representation.

• How to predict the user's next intent in real-time. The user's next intent is not only based on the user's profile and preference but is also largely influenced by spatial and temporal factors. For example, the intent "buy movie tickets" tends to occur at the weekend, while the intent "registration" often occurs in the hospital.

3 https://www.amazon.com/

To address the above-mentioned issues, we propose a user next intent prediction system based on a Knowledge Graph (KG) and apply it to downstream applications at Alipay.
We summarize the contributions of this paper as follows:

• We propose AlipayKG, a concept knowledge graph that explicitly represents user behaviors by defining an intent architecture to achieve a unified representation of multi-source heterogeneous content. Meanwhile, we propose a systematic approach to obtain structured knowledge from multi-source content. With the proposed AlipayKG, we address the first issue.

• As for the second issue, we design a next intent prediction framework that integrates expert rules from AlipayKG, which improves performance while increasing interpretability.

• We evaluate this system on downstream tasks. Experimental results demonstrate that the proposed system can enhance the performance of several real-world applications, which serve more than 100 million daily active users.

2 ALIPAYKG-BASED USER NEXT INTENT PREDICTION SYSTEM

An overview of our user intent system is presented in Figure 1. It is composed of two parts: 1) AlipayKG, to sufficiently characterize user intent, and 2) the Next Intent Prediction Framework, to accurately predict the user's next intent in real-time. All collected data are anonymized and reviewed by the IRB committee to preserve privacy.

2.1 AlipayKG Construction

In general, user intent plays a crucial role in promoting the performance and interpretability of user modeling systems. However, uniformly capturing users' intents and expressing them is arduous due to the various kinds of user behaviors on digital life service platforms. Therefore, to sufficiently characterize user intent, we propose a concept KG in the Life-Service domain called AlipayKG. The core ontology of AlipayKG is shown in Figure 1(a); it includes four node types and four relation types. Specifically, "Intent" describes the decision drivers behind users' needs and mainly consists of "Function" and "Product," such as "take an internet taxi 打网约车" and "order coffee 点咖啡." Furthermore, "Product" and "Function" can be represented by more fine-grained HowNet sememes4, which are regarded as the basic units of semantics, such as "movie ticket|电影票 = {coupon|票证, look|看, shows|表演物}." Meanwhile, we also define two types of relations between "Intent" nodes: 1) the "isA" relation builds the semantic hyponymy of "Intent" nodes, such as "rent an iPhone13 -isA- rent a mobile phone"; 2) the "Consequent" relation establishes the order effect between "Intent" nodes, such as "buy a house -consequent- renovate a house."

4 https://github.com/thunlp/OpenHowNet.git

Figure 2: The process of constructing AlipayKG consists of KG nodes mining and KG relations mining (distinguished by different colors).

Figure 2 illustrates the framework of AlipayKG construction, which contains two parts: 1) KG Nodes Mining and 2) KG Relations Mining. It is worth noting that crowdsourcing is employed for data quality control throughout the whole process.
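To make the ontology described above concrete, the sketch below shows one way the node and relation types could be represented in code. The class and field names are illustrative assumptions for exposition only, not the actual AlipayKG schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Intent:
    # An "Intent" node consists of a "Function" and a "Product",
    # each of which can carry fine-grained sememes.
    name: str                                               # e.g. "buy movie tickets"
    function: str                                           # e.g. "buy"
    product: str                                            # e.g. "movie tickets"
    sememes: List[str] = field(default_factory=list)        # e.g. ["coupon", "look", "shows"]
    is_a: List[str] = field(default_factory=list)           # more general intents
    consequent: List[str] = field(default_factory=list)     # likely follow-up intents

# Toy instance mirroring the running example in the paper.
buy_movie_tickets = Intent(
    name="buy movie tickets",
    function="buy",
    product="movie tickets",
    sememes=["coupon", "look", "shows"],
    consequent=["buy snacks"],
)
print(buy_movie_tickets)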
2.1.1 KG Nodes Mining.
To mine "Intent" nodes, we adopt an automated phrase mining approach [13] based on item content and extend it with a self-constructed ground dictionary for high-quality phrase classification; item content is chosen as our data source since users often directly express their requirements by interacting with items. Although the items in Alipay are multi-source and heterogeneous, the text of different items shares the same critical information and can be used as the input data source for knowledge mining. We then utilize lexical rule matching, part-of-speech tagging [21] and short text matching models [27] to structure the "Intent" nodes into two parts: "Function" and "Product." Moreover, HowNet [16] provides a wealth of manually annotated corpus, on which we train a multi-label classification model [19] to automatically obtain the sememe information of "Function" and "Product." Due to aliasing and ambiguity issues with entity names, we further apply the BERT-INT alignment model [20] to "Intent" and "Product" nodes, respectively, for semantic disambiguation.

2.1.2 KG Relations Mining.
In this part, the mining methods for the "isA" and "Consequent" relations between "Intent" nodes are elaborated. It is worth noting that the other two relations (i.e., "Consist" and "Has") have already been obtained in the "KG Nodes Mining" step.

"isA" Relation: Since "isA" is used to organize "Intent" nodes in a hierarchical tree structure, it is challenging to acquire this kind of commonsense knowledge through data mining. For instance, it is easy for a person to see that "buy an iPhone13" is a kind of "buy a mobile phone," but difficult for a machine to infer. To this end, we propose two different methods:

1) Lexical Rule-based Method: This method utilizes the "isA" relation between "Product" nodes to build the "isA" relation between "Intent" nodes. For example, "buy an iPhone13" and "buy a mobile phone" have the same "Function," and it can be acquired from a general knowledge graph that "iPhone13" is a kind of "mobile phone"; the relation "buy an iPhone13 -isA- buy a mobile phone" can then be generated.

2) Embedding-based Method: This method exploits text semantics to avoid the drawbacks of the lexical rule-based method. Specifically, we first apply StructBERT [22], pre-trained on the Alipay corpus, to embed each "Product." Secondly, we compute the cosine distance between "Product" nodes and recall the top-K candidates with the closest semantics. Finally, the positive "isA" pairs between "Intent" nodes are selected.
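As a rough illustration of the embedding-based step, the following sketch ranks candidate "Product" phrases by cosine similarity and keeps the top-K. The random vectors merely stand in for embeddings from a domain-adapted encoder such as StructBERT, and the function name is ours, not part of the described system.

import numpy as np

def top_k_isa_candidates(query_vec, candidate_vecs, candidate_names, k=3):
    # Normalize so that dot products equal cosine similarities.
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    sims = c @ q
    order = np.argsort(-sims)[:k]
    return [(candidate_names[i], float(sims[i])) for i in order]

# Toy usage with random vectors standing in for real phrase embeddings.
rng = np.random.default_rng(0)
names = ["mobile phone", "coffee beans", "movie tickets"]
vecs = rng.normal(size=(3, 128))
print(top_k_isa_candidates(rng.normal(size=128), vecs, names, k=2))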
"Consequent" Relation: A Bayesian network [11] is leveraged to mine the "Consequent" relation in AlipayKG. Specifically, the "Intent" nodes observed in different time segments are first aggregated as the input of the Bayesian network. After learning the Bayesian network structure, relation inference [2] is performed to obtain numerous pairs of "Intent" nodes. In the end, we build the "Consequent" relation on pairs of highly correlated and order-sensitive "Intent" nodes.

2.2 Next Intent Prediction Framework

Figure 3 illustrates the next intent prediction framework, which consists of two parts: 1) an Offline Item-Intent Understanding Model to label the items users interact with using "Intent," and 2) an Online User Next Intent Prediction Model to forecast the next intent of users with low latency and high prediction accuracy.

Figure 3: User next intent prediction framework. Figure (a): a GCN is learned over AlipayKG to obtain the intent label representation, which is applied to predict output intents. Figure (b): intent labels are generated for each item via the multi-label classification model. Figure (c): 1) the encoder receives massive long sequence inputs (intent, location and global time); 2) the decoder receives long sequence inputs, pads the target intents with zeros, and instantly predicts output intents (marked orange) in a generative style.

2.2.1 Offline Item-Intent Understanding Model.
Since user intent is always hidden in the items that users interact with, it is important to establish Item-Intent relationships, which can be regarded as a matching problem between "Item" and "Intent." For example, the "Starbucks applet" contains various "Intent" nodes such as "order coffee" and "buy coffee beans."

The overview of the item-intent understanding model is shown in Figure 3(b). Firstly, a multi-modal model [12] is adopted to unify the multi-source heterogeneous input data. Specifically, we adopt ResNet [10] to extract image features and combine them with text features. Then, the concatenated features are fed into a StructBERT [22] model to obtain the item representation. Besides, the intent embedding is generated via graph algorithms such as GCN, as shown in Figure 3(a). Finally, the predicted label scores are obtained by matching the learned intent embedding with the item representation.
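The matching step at the end of this subsection can be pictured as a dot product between the item representation and the intent label embeddings, turned into a score per label. The snippet below is a minimal numerical sketch under that reading; the dimensions, the sigmoid, and the toy vectors are illustrative assumptions rather than the production model.

import numpy as np

def item_intent_scores(item_vec: np.ndarray, label_embs: np.ndarray) -> np.ndarray:
    # item_vec:   (d,) multi-modal item representation (text + image features).
    # label_embs: (num_intents, d) intent label embeddings, e.g. from a GCN over AlipayKG.
    # Returns per-intent probabilities from a sigmoid over dot products (multi-label setting).
    logits = label_embs @ item_vec
    return 1.0 / (1.0 + np.exp(-logits))

# Toy example: one item scored against three candidate intents.
rng = np.random.default_rng(0)
item = rng.normal(size=64)
labels = rng.normal(size=(3, 64))
probs = item_intent_scores(item, labels)
for name, p in zip(["order coffee", "buy coffee beans", "buy movie tickets"], probs):
    print(f"{name}: {p:.2f}")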
2.2.2 Online User Next Intent Prediction Model.
The online real-time next intent prediction model needs low latency while guaranteeing high prediction accuracy. Hence, an efficient Transformer-based model for long time-series forecasting, Informer [28], is adopted in our work. In this model, the input consists of three parts: the intent timestamp, the location timestamp and the global timestamp (minutes, hours, week, month, holiday, etc.). Moreover, AlipayKG is fused into the model to enhance prediction accuracy, as shown in Figure 3(c). Additionally, the mined rules (such as "take an internet taxi -consequent- buy movie tickets -consequent- buy snacks") are applied in the post-processing stage of the model, which further improves the interpretability of the predicted results.

2.3 Industrial Deployment of the User Intent System

In this Section, the deployment of the user intent system in the recommendation engine of Alipay is described. First of all, as shown in Figure 4, the recommendation engine is composed of a recall stage and a ranking stage. In the recall stage, a candidate item set (recall pool) is generated by merging results from different recall methods. In the ranking stage, those candidates are passed through ranking and re-ranking to output the final recommendation list. Secondly, the proposed user intent system is applied to the recommendation engine in both the recall and ranking stages. As shown in Figure 4, according to historical behavior data and the current spatial-temporal information, the next intent prediction model can predict the user's top-K intent candidates with the highest probability, which helps bring an intent-based recall method directly into the recall stage. Meanwhile, the generated top-K intent candidates, intent embeddings and item-intent relations contribute to better personalized modeling of user behaviors in the ranking stage. Finally, the whole system forms a positive feedback loop, as shown in Figure 4: the user intent system predicts user intent based on user-interacted data, which facilitates better recommendations; in return, better recommendations provide more real user behavior data to improve the performance of intent understanding. The efficacy of the deployment is demonstrated in Section 3.3.

Figure 4: Industrial deployment of the User Next Intent Prediction System in the Alipay recommendation engine. The recommendation engine contains two stages: the recall stage and the ranking stage. The dataflows of recommended items are guided by the grey arrows. Our user next intent prediction system provides intent embeddings, item-intent relations and top-K predicted intents based on historical information, thereby improving the performance of the recall and ranking stages and providing users with a more in-demand recommendation list.

3 EVALUATION

3.1 Evaluation of AlipayKG

In AlipayKG, we have accumulated 104K+ "Intent," 31K+ "Function," 66K+ "Product," and 1.9K+ "Sememe" nodes. With the item-intent understanding model, we have collected relatively static data, such as 1,316K+ Service-Intent triples and 57,852K+ Store-Intent triples, and relatively dynamic data, such as 10K-level Coupon-Intent triples and billion-level Bill-Intent triples.

3.2 Evaluation of the Next Intent Prediction Framework

In this Section, the proposed intent prediction framework is evaluated from the following two aspects.

1) Offline Item-Intent Understanding Model: We evaluate our matching model on item-intent prediction with 3K+ primitive intent labels. The multi-modal model improves micro-F1 by 1.10%, and the label-level graph embedding further improves it by 3.08%, to 90.64%.

2) Online Next Intent Prediction Model: We evaluate our next-intent prediction model on 30K sampled user historical behavior sequences. To mimic online scenarios, we only predict the user's next intent at a specific time and location. Experimental results show that the intent prediction model with AlipayKG achieves 53.3% and 85.3% in Recall@1 and Recall@10, an improvement of 3.1% and 2.2%, respectively.
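Recall@K in the experiment above can be read as the fraction of test cases whose true next intent appears among the model's top-K predictions. The helper below is a small illustrative implementation of that metric, not the evaluation code used to produce these numbers.

def recall_at_k(ranked_predictions, ground_truths, k):
    # ranked_predictions: one ranked list of candidate intents per test case.
    # ground_truths: the true next intent for each case.
    hits = sum(
        1 for preds, truth in zip(ranked_predictions, ground_truths) if truth in preds[:k]
    )
    return hits / len(ground_truths)

preds = [
    ["buy snacks", "order coffee", "buy movie tickets"],
    ["take an internet taxi", "buy snacks", "order coffee"],
]
truths = ["order coffee", "buy snacks"]
print(recall_at_k(preds, truths, 1), recall_at_k(preds, truths, 2))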
3.3 Evaluation of Downstream Applications

In this Section, we further evaluate whether the user next intent prediction system can improve the performance of downstream tasks at Alipay.

1) Home Recommendation: Home recommendation is one of the most important business scenarios, in which our system helps to discover user interests in real-time, as described in Section 2.3. Online experiments show that our system brings a relative increase of 1.61% in CTR (Click-Through Rate).

2) Transaction Risk Management: To create a secure payment environment, the potential risks (e.g., embezzlement and money laundering) of each transaction should be estimated to determine whether it is invalid, which consumes a huge amount of computation. To reduce this cost, we treat users' consumption intent as an important transaction feature to discover low-risk transactions. By leveraging credible transaction identification based on AlipayKG, the coverage rate of low-risk transactions is relatively increased by 100%.

3) Alipay Search: In this scenario, the fine-grained user intent can be captured in real-time by query understanding technology and then used in various stages of the search service (e.g., recall, relevance and ranking). Online A/B tests demonstrate that our user intent system can cover 90% of user problems, and CTR achieves an increase of 5.8%.

4 RELATED WORK

Knowledge Graph Construction. Many efforts have been made to construct KGs, such as Freebase [4], DBpedia [1], AliCoCo [15], AliCG [25], OpenBG [8, 18], and HowNet [16], which utilize crowdsourcing and information extraction technologies [5–7, 23, 24, 26] to describe and extract specific facts with well-defined labels. Unlike those works, we focus on the conceptualization of an intent architecture where the "Intent" nodes and the relations among them are obtained from unstructured text. Meanwhile, different from linguistic KGs such as HowNet [16] that are handcrafted mainly by humans, AlipayKG is built with natural language processing via human-in-the-loop. AliMe KG [13] is very similar to ours; it models user intents, item information, points of interest (POI), and the relations thereof to understand user needs. Different from their work, AlipayKG fits all user-item interaction scenarios, while AliMe KG is designed for pre-sales conversation, which is quite a different scenario from ours. Moreover, we formally introduce a new type of concept named "Intent" to explicitly represent various user needs and further build a bridge between user requirements and item supplies for semantic matching.

User Intent Prediction. User intent prediction has commonly been treated as a classification problem, for which various approaches have been proposed, from traditional machine learning methods such as SVM [3] to recent pre-trained language models such as BERT [9]. Li et al. [14] are somewhat similar to us in that they attempt to discover intents from user consumption data in Meituan. Different from those works, we aim to predict the next intent from the user behavioral sequence in Alipay, which is more challenging and requires fully capturing the user preferences under the current situation.
5 CONCLUSION AND FUTURE WORK

In this work, we present the user intent system and demonstrate its effectiveness in downstream applications deployed at Alipay. In the future, we will continually maintain AlipayKG to cover more business data and applications, and hopefully it can benefit more downstream tasks in digital life. Furthermore, we will make efforts in the direction of interpretable reasoning for better user intent prediction.

REFERENCES
[1] Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. DBpedia: A Nucleus for a Web of Open Data. In The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007 (Lecture Notes in Computer Science, Vol. 4825), Karl Aberer, Key-Sun Choi, Natasha Fridman Noy, Dean Allemang, Kyung-Il Lee, Lyndon J. B. Nixon, Jennifer Golbeck, Peter Mika, Diana Maynard, Riichiro Mizoguchi, Guus Schreiber, and Philippe Cudré-Mauroux (Eds.). Springer, 722–735. https://doi.org/10.1007/978-3-540-76298-0_52
[2] Peter Battaglia, Jessica Blake Chandler Hamrick, Victor Bapst, Alvaro Sanchez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andy Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Jayne Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. 2018. Relational inductive biases, deep learning, and graph networks. arXiv (2018). https://arxiv.org/pdf/1806.01261.pdf
[3] Aditya Bhargava, Asli Celikyilmaz, Dilek Hakkani-Tür, and Ruhi Sarikaya. 2013. Easy contextual intent prediction and slot detection. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2013, Vancouver, BC, Canada, May 26-31, 2013. IEEE, 8337–8341. https://doi.org/10.1109/ICASSP.2013.6639291
[4] Kurt D. Bollacker, Colin Evans, Praveen K. Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2008, Vancouver, BC, Canada, June 10-12, 2008, Jason Tsong-Li Wang (Ed.). ACM, 1247–1250. https://doi.org/10.1145/1376616.1376746
[5] Xiang Chen, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, and Ningyu Zhang. 2022. LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, and Seung-Hoon Na (Eds.).
International Committee +on Computational Linguistics, 2374–2387. https://aclanthology.org/2022.coling- +1.209 +[6] Xiang Chen, Lei Li, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi +Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. Decoupling Knowledge from +Memorization: Retrieval-augmented Prompt Learning. CoRR abs/2205.14704 +(2022). https://doi.org/10.48550/arXiv.2205.14704 arXiv:2205.14704 +[7] Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, +Fei Huang, Luo Si, and Huajun Chen. 2022. KnowPrompt: Knowledge-aware +Prompt-tuning with Synergistic Optimization for Relation Extraction. In WWW +’22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 - 29, 2022, +Frédérique Laforest, Raphaël Troncy, Elena Simperl, Deepak Agarwal, Aristides +Gionis, Ivan Herman, and Lionel Médini (Eds.). ACM, 2778–2788. https://doi. +org/10.1145/3485447.3511998 +[8] Shumin Deng, Chengming Wang, Zhoubo Li, Ningyu Zhang, Zelin Dai, Hehong +Chen, Feiyu Xiong, Ming Yan, Qiang Chen, Mosha Chen, Jiaoyan Chen, Jeff Z. Pan, +Bryan Hooi, and Huajun Chen. 2022. Construction and Applications of Billion- +Scale Pre-trained Multimodal Business Knowledge Graph. CoRR abs/2209.15214 +(2022). https://doi.org/10.48550/arXiv.2209.15214 arXiv:2209.15214 +[9] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: +Pre-training of Deep Bidirectional Transformers for Language Understanding. In +Proceedings of the 2019 Conference of the North American Chapter of the Associa- +tion for Computational Linguistics: Human Language Technologies, NAACL-HLT +2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), Jill +Burstein, Christy Doran, and Thamar Solorio (Eds.). Association for Computa- +tional Linguistics, 4171–4186. https://doi.org/10.18653/v1/n19-1423 +[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual +Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision +and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. IEEE +Computer Society, 770–778. https://doi.org/10.1109/CVPR.2016.90 +[11] Finn V Jensen and Thomas Dyhre Nielsen. 2007. Bayesian networks and decision +graphs. Vol. 2. Springer. +[12] Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, and Davide Testuggine. 2019. Su- +pervised Multimodal Bitransformers for Classifying Images and Text. In Visually +Grounded Interaction and Language (ViGIL), NeurIPS 2019 Workshop, Vancouver, +Canada, December 13, 2019. https://vigilworkshop.github.io/static/papers/40.pdf +[13] Feng-Lin Li, Hehong Chen, Guohai Xu, Tian Qiu, Feng Ji, Ji Zhang, and Haiqing +Chen. 2020. AliMe KG: Domain Knowledge Graph Construction and Application +in E-commerce. CoRR abs/2009.11684 (2020). arXiv:2009.11684 https://arxiv.org/ +abs/2009.11684 +[14] Yinfeng Li, Chen Gao, Xiaoyi Du, Huazhou Wei, Hengliang Luo, Depeng Jin, and +Yong Li. 2022. Automatically Discovering User Consumption Intents in Meituan. +In KDD ’22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data +Mining, Washington, DC, USA, August 14 - 18, 2022, Aidong Zhang and Huzefa +Rangwala (Eds.). ACM, 3259–3269. https://doi.org/10.1145/3534678.3539122 +[15] Xusheng Luo, Luxin Liu, Yonghua Yang, Le Bo, Yuanpeng Cao, Jinghang Wu, +Qiang Li, Keping Yang, and Kenny Q. Zhu. 2020. AliCoCo: Alibaba E-commerce +Cognitive Concept Net. 
In Proceedings of the 2020 International Conference on +Management of Data, SIGMOD Conference 2020, online conference [Portland, OR, +USA], June 14-19, 2020, David Maier, Rachel Pottinger, AnHai Doan, Wang-Chiew +Tan, Abdussalam Alawini, and Hung Q. Ngo (Eds.). ACM, 313–327. +https: +//doi.org/10.1145/3318464.3386132 +[16] Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Qiang Dong, Maosong Sun, and Zhen- +dong Dong. 2019. OpenHowNet: An Open Sememe-based Lexical Knowledge +Base. CoRR abs/1901.09957 (2019). arXiv:1901.09957 http://arxiv.org/abs/1901. +09957 +[17] Chen Qu, Liu Yang, W Bruce Croft, Yongfeng Zhang, Johanne R Trippas, and +Minghui Qiu. 2019. User intent prediction in information-seeking conversations. +In Proceedings of the 2019 Conference on Human Information Interaction and +Retrieval. 25–33. +[18] Yincen Qu, Ningyu Zhang, Hui Chen, Zelin Dai, Zezhong Xu, Cheng- +ming Wang, Xiaoyu Wang, Qiang Chen, and Huajun Chen. 2022. +Com- +monsense Knowledge Salience Evaluation with a Benchmark Dataset in E- +commerce. CoRR abs/2205.10843 (2022). +https://doi.org/10.48550/arXiv.2205. +10843 arXiv:2205.10843 +[19] Tal Ridnik, Emanuel Ben-Baruch, Nadav Zamir, Asaf Noy, Itamar Friedman, +Matan Protter, and Lihi Zelnik-Manor. 2021. Asymmetric Loss for Multi-Label +Classification. In Proceedings of the IEEE/CVF International Conference on Com- +puter Vision (ICCV). 82–91. +[20] Xiaobin Tang, Jing Zhang, Bo Chen, Yang Yang, Hong Chen, and Cuiping Li. 2020. +BERT-INT:A BERT-based Interaction Model For Knowledge Graph Alignment. +In Proceedings of the Twenty-Ninth International Joint Conference on Artificial +Intelligence, IJCAI-20, Christian Bessiere (Ed.). International Joint Conferences +on Artificial Intelligence Organization, 3174–3180. https://doi.org/10.24963/ijcai. +2020/439 Main track. +[21] Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiaojun Quan, Tong Zhang, and +Yonggang Wang. 2020. Joint Chinese Word Segmentation and Part-of-speech +Tagging via Two-way Attentions of Auto-analyzed Knowledge. In ACL. 8286– +8296. https://doi.org/10.18653/v1/2020.acl-main.735 +[22] Wei Wang, Bin Bi, Ming Yan, Chen Wu, Jiangnan Xia, Zuyi Bao, Liwei Peng, +and Luo Si. 2020. StructBERT: Incorporating Language Structures into Pre- +training for Deep Language Understanding. In 8th International Conference on +Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. +OpenReview.net. https://openreview.net/forum?id=BJgQ4lSFPH +[23] Hongbin Ye, Ningyu Zhang, Hui Chen, and Huajun Chen. 2022. Generative +Knowledge Graph Construction: A Review. CoRR abs/2210.12714 (2022). https: +//doi.org/10.48550/arXiv.2210.12714 arXiv:2210.12714 +[24] Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, +Fei Huang, Luo Si, and Huajun Chen. 2021. Document-level Relation Extraction +as Semantic Segmentation. In Proceedings of the Thirtieth International Joint +Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, +19-27 August 2021, Zhi-Hua Zhou (Ed.). ijcai.org, 3999–4006. https://doi.org/10. +24963/ijcai.2021/551 +[25] Ningyu Zhang, Qianghuai Jia, Shumin Deng, Xiang Chen, Hongbin Ye, Hui +Chen, Huaixiao Tou, Gang Huang, Zhao Wang, Nengwei Hua, and Huajun Chen. +2021. AliCG: Fine-grained and Evolvable Conceptual Graph Construction for +Semantic Search at Alibaba. In KDD ’21: The 27th ACM SIGKDD Conference on +Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, +2021, Feida Zhu, Beng Chin Ooi, and Chunyan Miao (Eds.). ACM, 3895–3905. 
https://doi.org/10.1145/3447548.3467057
[26] Ningyu Zhang, Xin Xu, Liankuan Tao, Haiyang Yu, Hongbin Ye, Shuofei Qiao, Xin Xie, Xiang Chen, Zhoubo Li, Lei Li, et al. 2022. DeepKE: A Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population. arXiv preprint arXiv:2201.03335 (2022).
[27] Yuhao Zhang, Hongji Zhu, Yongliang Wang, Nan Xu, Xiaobo Li, and Binqiang Zhao. 2022. A Contrastive Framework for Learning Sentence Representations from Pairwise and Triple-wise Perspective in Angular Space. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Dublin, Ireland, 4892–4903. https://doi.org/10.18653/v1/2022.acl-long.336
[28] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. 2020. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. CoRR abs/2012.07436 (2020). arXiv:2012.07436 https://arxiv.org/abs/2012.07436
ABSTRACT
This paper illustrates the technologies of user next intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay (https://global.alipay.com/platform/site/ihome), serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent, which is an offline concept knowledge graph in the Life-Service domain modeling the historical behaviors of users, the rich content interacted by users and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of the downstream tasks while retaining explainability.

CCS CONCEPTS
Information systems → Query representation; Information extraction.

KEYWORDS
Knowledge Graph; Intent Prediction; Graph Embedding; Multi-label Classification

ACM Reference Format:
Yacheng He, Qianghuai Jia, Lin Yuan, Ruopeng Li, Yixin Ou, and Ningyu Zhang. 2023. A Concept Knowledge Graph for User Next Intent Prediction at Alipay. In Proceedings of Conference acronym 'XX. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
1 INTRODUCTION
User next intent prediction – the ability to automatically infer the next decision of users based on historical behavior and background knowledge – holds an important place in in-device Apps [17]. For example, in digital life service platforms such as Alipay, users often purchase snacks at the cinema (corresponding intent "buy snacks") after buying movie tickets via TaoPiaoPiao (https://dianying.taobao.com/) (corresponding intent "buy movie tickets"), which implies the intent of "buy movie tickets" may lead to the following intent of "buy snacks." As shown in Figure 1, the ability to infer the future intents of users has the potential to be advantageous for tasks such as recommendation, searching, transaction risk management and so on.

Figure 1: The user next intent prediction system at Alipay. Sub-figure (a) illustrates the core ontology and a subgraph of AlipayKG. In sub-figure (b), an example of the user's historical interactions and intent sequence is shown in gray-grounded boxes, and the next intent is marked with a red "?" that has been inferred as "buy snacks" by the next intent prediction model, whose outputs will provide a clear signal to downstream applications as shown in sub-figure (c).

Intuitively, user intent can be characterized as clustering patterns of user behaviors, which are usually hidden in the content interacted with or generated by users in mobile applications. Specifically, the core of understanding user intent in Alipay lies in systematic and explicit knowledge modeling of the user's situation and of the user's interacted item content, which consists of queries, applet services, bills, coupons, stores, reviews, etc.
Concretely, we summarize the two non-trivial issues in user next intent prediction at Alipay as follows:

How to characterize user intent. It is challenging to abstract and encode intent from user behaviors that are very diverse and cannot be directly observed. In particular, unlike e-commerce scenarios such as Amazon (https://www.amazon.com/), which mainly contain shopping intent, the behaviors at Alipay are various, including shopping, trips and payment, which further increases the difficulty of intent representation.

How to predict the user's next intent in real-time. The user's next intent is not only based on the user's profile and preference but also largely influenced by spatial and temporal factors. For example, the intent "buy movie tickets" tends to occur at the weekend, while the intent "registration" often occurs in the hospital.

To address the above-mentioned issues, we propose a user next intent prediction system based on the Knowledge Graph (KG) and apply it to downstream applications at Alipay. We summarize the contributions of this paper as follows:

We propose AlipayKG, a concept knowledge graph that explicitly represents user behaviors by defining an intent architecture to achieve a unified representation of multi-source heterogeneous content. Meanwhile, we propose a systematic approach to obtain structured knowledge from multi-source content. With the proposed AlipayKG, we address the first issue.

As for the second issue, we design a next intent prediction framework that integrates expert rules from AlipayKG, which improves the performance while increasing interpretability.

We evaluate this system on downstream tasks.
Experimental results demonstrate that the proposed system can enhance the performance of several real-world applications, which serve more than 100 million daily active users.

2 ALIPAYKG-BASED USER NEXT INTENT PREDICTION SYSTEM
An overview of our user intent system is presented in Figure 1, and it is composed of two parts: 1) AlipayKG to sufficiently characterize user intent, and 2) the Next Intent Prediction Framework to accurately predict the user's next intent in real-time. All collected data are anonymized and reviewed by the IRB committee to preserve privacy.

2.1 AlipayKG Construction
In general, user intent plays a crucial role in promoting the performance and interpretability of user modeling systems. However, uniformly capturing the users' intents and expressing them is arduous due to the various kinds of user behaviors in digital life service platforms. Therefore, to sufficiently characterize user intent, we propose a concept KG in the Life-Service domain called AlipayKG. The core ontology of AlipayKG is shown in Figure 1(a), which includes four node types and four relation types. Specifically, "Intent" describes the decision drivers behind users' needs and mainly consists of
"Function" and "Product," such as "take an internet taxi 打网约车" and "order coffee 点咖啡." Furthermore, "Product" and "Function" can be represented by more fine-grained HowNet sememes (https://github.com/thunlp/OpenHowNet) that are regarded as the basic units of semantics, such as "movie ticket|电影票 = {coupon|票证, look|看, shows|表演物}." Meanwhile, we also define two types of relation between "Intent" nodes: 1) the "isA" relation builds the semantic hyponymy of "Intent" nodes, such as "rent an iPhone13 -isA- rent a mobile phone"; 2) the "Consequent" relation establishes the order effect between "Intent" nodes, such as "buy a house -consequent- renovate a house."
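To make the ontology concrete, the following minimal sketch (ours, not part of the paper's implementation; the tiny triple set and helper functions are illustrative) shows how a fragment of AlipayKG could be held as typed triples and queried for the Function, Product and sememes behind an intent:

from collections import defaultdict

# A tiny in-memory fragment of the ontology as (head, relation, tail) triples.
# Relation names (isA, Consequent, Consist, Has) follow the core ontology above.
triples = [
    ("buy movie tickets", "isA", "entertainment ticketing"),
    ("buy movie tickets", "Consequent", "buy snacks"),
    ("buy movie tickets", "Consist", "buy"),            # Function
    ("buy movie tickets", "Consist", "movie tickets"),  # Product
    ("movie tickets", "Has", "coupon"),                 # sememes of the Product
    ("movie tickets", "Has", "look"),
    ("movie tickets", "Has", "shows"),
]

index = defaultdict(list)
for head, rel, tail in triples:
    index[(head, rel)].append(tail)

def neighbors(node, rel):
    """Return all tails connected to `node` by relation `rel`."""
    return index[(node, rel)]

print(neighbors("buy movie tickets", "Consist"))  # ['buy', 'movie tickets']
print(neighbors("movie tickets", "Has"))          # ['coupon', 'look', 'shows']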
Figure 2: The process of constructing AlipayKG consists of node mining and relations mining (by different colors).

Figure 2 illustrates the framework of AlipayKG construction, which contains two parts: 1) KG Nodes Mining and 2) KG Relations Mining. It is worth noting that crowdsourcing is employed for data quality control throughout the whole process.
2.1.1 KG Nodes Mining. To mine "Intent" nodes, we adopt the automated phrase mining approach [13] based on item content and extend it with a self-constructed ground dictionary for high-quality phrase classification, where item content is chosen as our data source since users often directly express their requirements by interacting with items. Although the items in Alipay are multi-source and heterogeneous, the text of different items shares the same critical information and can be used as the input data source for knowledge mining. Then, we utilize lexical rule matching, part-of-speech tagging [21] and short text matching models [27] to structure the "Intent" nodes into two parts: "Function" and "Product." Moreover, HowNet [16] provides a wealth of manually annotated corpora, on which we train a multi-label classification model [19] to automatically obtain the sememe information of "Function" and "Product." Due to aliasing and ambiguity issues with entity names, we further apply the BERT-INT [20] alignment model to "Intent" and "Product" nodes, respectively, for semantic disambiguation.
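As a rough illustration of the lexical rule-matching step only, the toy function below splits a mined intent phrase into a "Function" and a "Product" with a hand-written verb lexicon; the real pipeline relies on the cited phrase-mining, part-of-speech tagging and short-text-matching models, so the lexicon and phrases here are assumptions for demonstration:

# Toy stand-in for lexical rule matching: split an intent phrase into a
# "Function" (action) and a "Product" (object). The verb lexicon and phrases
# are illustrative only; production uses the cited tagging/matching models.
FUNCTION_LEXICON = {"buy", "order", "rent", "watch", "take"}

def split_intent(phrase: str):
    tokens = phrase.lower().split()
    if tokens and tokens[0] in FUNCTION_LEXICON:
        return {"Function": tokens[0], "Product": " ".join(tokens[1:])}
    # Fall back to treating the whole phrase as a Product with unknown Function.
    return {"Function": None, "Product": phrase.lower()}

for p in ["Buy movie tickets", "Order coffee", "Take an internet taxi"]:
    print(p, "->", split_intent(p))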
2.1.2 KG Relations Mining. In this part, we elaborate the mining methods for the "isA" and "Consequent" relations between "Intent" nodes. It is worth noting that the other two relations (i.e., "Consist" and "Has") have already been obtained in the KG Nodes Mining step.

"isA" Relation: Since "isA" is used to organize "Intent" nodes in a hierarchical tree structure, it is challenging to acquire this kind of commonsense knowledge through data mining. For instance, it is easy for a person to know that "buy an iPhone13" is a kind of "buy a mobile phone," but difficult for a machine to understand. To this end, we propose two different methods: 1) Lexical Rule-based Method: This method utilizes the "isA" relation of the "Product" to build the "isA" relation between "Intent" nodes. For example, "buy an iPhone13" and "buy a mobile phone" have the same "Function," and it can be acquired from a general knowledge graph that "iPhone13" is a kind of "mobile phone," so the relation "buy an iPhone13 -isA- buy a mobile phone" can be generated. 2) Embedding-based Method: This method employs the text semantics to avoid the drawbacks of the lexical rule-based method. Specifically, we first apply StructBERT [22], pre-trained on the Alipay corpus, to represent the embedding of each "Product." Secondly, we calculate the cosine distance between "Product" nodes and recall the top-K candidates with the closest semantics. Finally, the positive "isA" pairs between "Intent" nodes are chosen.
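A minimal sketch of the embedding-based candidate recall is given below; random vectors stand in for the StructBERT "Product" embeddings, and the product list and K are made up, so only the cosine top-K recall logic itself reflects the description above:

import numpy as np

# Sketch of embedding-based candidate recall for "isA" mining. Product
# embeddings would come from a StructBERT encoder pre-trained on the Alipay
# corpus; here they are random vectors purely to show the top-K cosine recall.
rng = np.random.default_rng(0)
products = ["iPhone13", "mobile phone", "coffee", "movie ticket"]
emb = rng.normal(size=(len(products), 768))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # unit-normalise rows

def topk_similar(query_idx: int, k: int = 2):
    scores = emb @ emb[query_idx]                    # cosine similarity
    order = np.argsort(-scores)
    return [(products[i], float(scores[i])) for i in order if i != query_idx][:k]

# Candidates for "iPhone13"; a positive "isA" pair would then be confirmed
# before adding "buy an iPhone13 -isA- buy a mobile phone" to the graph.
print(topk_similar(products.index("iPhone13")))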
"Consequent" Relation: A Bayesian network [11] is leveraged to mine the "Consequent" relation in AlipayKG. Specifically, the "Intent" nodes of different time segments are first aggregated as the input of the Bayesian network. After learning the Bayesian network structure, we perform relation inference [2] to obtain numerous pairs of "Intent" nodes. In the end, we build the "Consequent" relation on pairs of highly correlated and order-sensitive "Intent" nodes.
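The sketch below is a deliberately simplified stand-in for this step: it scores order-sensitive intent pairs with plain co-occurrence counts rather than learning a Bayesian network, and the sessions and thresholds are invented; it only illustrates the idea of keeping highly correlated, order-sensitive pairs as "Consequent" candidates:

from collections import Counter
from itertools import combinations

# Simplified stand-in for "Consequent" mining: count how often intent A
# precedes intent B within a time-ordered session and keep strongly
# asymmetric pairs. The deployed system instead learns a Bayesian network
# over aggregated intents and runs relation inference on it.
sessions = [
    ["take an internet taxi", "buy movie tickets", "buy snacks"],
    ["buy movie tickets", "buy snacks"],
    ["order coffee", "buy movie tickets", "buy snacks"],
]

pair_counts = Counter()
for seq in sessions:
    for a, b in combinations(seq, 2):   # (a, b) with a occurring before b
        pair_counts[(a, b)] += 1

def consequent_pairs(min_support=2, min_ratio=2.0):
    pairs = []
    for (a, b), n_ab in pair_counts.items():
        n_ba = pair_counts[(b, a)]
        if n_ab >= min_support and n_ab >= min_ratio * max(n_ba, 1):
            pairs.append((a, "Consequent", b, n_ab))
    return pairs

print(consequent_pairs())  # e.g. ('buy movie tickets', 'Consequent', 'buy snacks', 3)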
Figure 3: User next intent prediction framework. Figure (a): a GCN is learned over AlipayKG to obtain intent label representations, which are applied to predict output intents. Figure (b): intent label generation for each item via the multi-label classification model. Figure (c): 1) the encoder receives massive long-sequence inputs (intent, location and global time); 2) the decoder receives long-sequence inputs, pads the target intents with zeros, and instantly predicts output intents (marked orange) in a generative style.

2.2 Next Intent Prediction Framework
Figure 3 illustrates the next intent prediction framework, which consists of two parts: 1) an Offline Item-Intent Understanding Model to label the user-interacted items with "Intent," and 2) an Online User Next Intent Prediction Model to forecast the next intent of users with low latency and high prediction accuracy.

2.2.1 Offline Item-Intent Understanding Model. Since user intent is always hidden in the items that users interact with, it is important to establish Item-Intent relationships, which can be regarded as a matching problem between "Item" and "Intent." For example, the "Starbucks applet" covers various "Intent" such as "order coffee" and "buy coffee beans." The overview of the item-intent understanding model is shown in Figure 3(b). Firstly, a multi-modal model [12] is adopted to unify the multi-source heterogeneous input data. Specifically, we adopt ResNet [10] to extract image features and combine them with text features. Then, the concatenated features are fed into a StructBERT [22] model to obtain the item representation. Besides, intent embeddings are generated via graph algorithms such as GCN, as shown in Figure 3(a). Finally, the predicted label scores can be obtained by matching the learned intent embeddings with the item representation.
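A minimal sketch of this final matching step is shown below; random vectors stand in for the fused ResNet+StructBERT item representation and for the GCN intent label embeddings, so only the dot-product scoring with multi-label outputs reflects the model described above:

import numpy as np

# Minimal sketch of item-intent matching: an item representation is scored
# against every intent label embedding with a dot product and a sigmoid,
# giving multi-label scores. Random vectors stand in for the real encoders.
rng = np.random.default_rng(1)
intent_labels = ["order coffee", "buy coffee beans", "buy movie tickets"]
label_emb = rng.normal(size=(len(intent_labels), 128))   # GCN output in practice
item_emb = rng.normal(size=(128,))                       # fused image+text features

scores = 1.0 / (1.0 + np.exp(-(label_emb @ item_emb) / np.sqrt(128)))
ranked = sorted(zip(intent_labels, scores.round(3)), key=lambda kv: -kv[1])
print(ranked)  # multi-label scores; a threshold or top-K picks the item's intents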
2.2.2 Online User Next Intent Prediction Model. The online real-time next intent prediction model needs low latency while guaranteeing high prediction accuracy. Hence, an efficient Transformer-based model for long time-series forecasting, Informer [28], is adopted in our work. In this model, the input consists of three parts: the intent timestamp, the location timestamp and the global timestamp (minutes, hours, week, month, holiday, etc.). Moreover, AlipayKG is fused into the model to enhance the prediction accuracy, as shown in Figure 3(c). Additionally, the mined rules (such as "take an internet taxi -consequent- buy movie tickets -consequent- buy snacks") are applied in the post-processing stage of the model, which further improves the interpretability of the predicted results.
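As an illustration of the rule-based post-processing only, the sketch below boosts candidate next intents that a mined "Consequent" rule connects to the user's latest intent; the rule set, scores and boost factor are invented for the example and are not the production configuration:

# Illustrative post-processing: model scores for candidate next intents are
# boosted when a mined "Consequent" rule links them to the most recent intent.
# The online model itself is the Informer-based predictor described above.
consequent_rules = {
    ("buy movie tickets", "buy snacks"),
    ("take an internet taxi", "buy movie tickets"),
}

def rerank(last_intent, candidate_scores, boost=1.2):
    adjusted = {
        intent: score * (boost if (last_intent, intent) in consequent_rules else 1.0)
        for intent, score in candidate_scores.items()
    }
    return sorted(adjusted.items(), key=lambda kv: -kv[1])

scores = {"buy snacks": 0.41, "order coffee": 0.45, "pay utility bill": 0.14}
print(rerank("buy movie tickets", scores))  # "buy snacks" overtakes "order coffee"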
2.3 Industrial Deployment of User Intent System
In this Section, we describe the deployment of the user intent system in the recommendation engine of Alipay. First of all, as shown in Figure 4, the recommendation engine is composed of a recall stage and a ranking stage. In the recall stage, a candidate item set (recall pool) is generated by merging results from different recall methods. In the ranking stage, those candidates are passed through ranking and re-ranking to output the final recommendation list. Secondly, the proposed user intent system is applied to the recommendation engine in both the recall and ranking stages. As shown in Figure 4, according to historical behavior data and the current spatial-temporal information, the next intent prediction model can predict the user's top-K intent candidates with the highest probability, which brings an intent-based recall method directly into the recall stage. Meanwhile, the generated top-K intent candidates, intent embeddings and item-intent relations contribute to better personalized modeling of user behaviors in the ranking stage. Finally, the whole system forms a positive feedback loop, as shown in Figure 4: the user intent system predicts user intent based on user-interacted data, which facilitates better recommendations; in return, better recommendations provide more real user behavior data to improve the performance of intent understanding. In addition, the efficacy of the deployment is demonstrated in Section 3.3.
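The sketch below illustrates, under invented item-intent mappings and candidate lists, how the predicted top-K intents could drive an intent-based recall source that is merged with other recall methods into the recall pool handed to the ranking stage:

# Sketch of feeding top-K predicted intents into an intent-based recall source
# merged with other recall methods. The item-intent mapping, candidate lists
# and K are illustrative, not the production configuration.
item_intent_index = {
    "buy snacks": ["popcorn coupon", "cinema snack combo"],
    "order coffee": ["Starbucks applet"],
}

def intent_based_recall(topk_intents):
    items = []
    for intent in topk_intents:
        items.extend(item_intent_index.get(intent, []))
    return items

def merge_recall(*sources):
    seen, pool = set(), []
    for source in sources:
        for item in source:
            if item not in seen:      # de-duplicate while keeping order
                seen.add(item)
                pool.append(item)
    return pool

location_recall = ["cinema snack combo", "nearby restaurant voucher"]
pool = merge_recall(intent_based_recall(["buy snacks", "order coffee"]), location_recall)
print(pool)  # candidate set handed to the ranking stage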
Figure 4: Industrial deployment of the User Next Intent Prediction System in the Alipay recommendation engine. The recommendation engine contains two stages: the recall stage and the ranking stage. The dataflows of recommended items are guided by the grey arrows. Our user next intent prediction system provides intent embeddings, item-intent relations and top-K predicted intents based on historical information, thereby improving the performance of the recall and ranking stages and providing users with a more in-demand recommendation list.

3 EVALUATION
+3.1 Evaluation of AlipayKG
+In AlipayKG, we have accumulated 104K+ "Intent," 31K+ "Function," 66K+ "Product," and 1.9K+ "Sememe." With the item-intent understanding model, we have collected relatively static data, such as 1,316K+ Service-Intent triples and 57,852K+ Store-Intent triples, and relatively dynamic data, such as 10K-level Coupon-Intent triples and billion-level Bill-Intent triples, etc.
+3.2 Evaluation of Next Intent Prediction Framework
+In this Section, the proposed intent prediction framework is evaluated from the following two aspects.
+1) Offline Item-Intent Understanding Model: We evaluate our matching model on item-intent prediction with 3K+ primitive intent labels. The multi-modal model improves micro-F1 by 1.10%, and the label-level graph embedding further improves it by 3.08%, to 90.64%.
+2) Online Next Intent Prediction Model: We evaluate our next-intent prediction model on 30K sampled user historical behavior records. To restore online scenarios, we only predict the user's next intent at a specific time and location. Experimental results show that the intent prediction model introduced with AlipayKG achieves 53.3% and 85.3% in Recall@1 and Recall@10, an improvement of 3.1% and 2.2%, respectively.
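+A minimal sketch of how Recall@K is typically computed in an offline evaluation of this kind is given below; the variable names and toy data are illustrative assumptions, not the paper's evaluation code or labels.
+# Hedged sketch: offline Recall@K for next-intent prediction.
+# `ranked` maps each evaluation user to the model's ranked list of predicted intent labels;
+# `truth` maps the same users to the single ground-truth next intent.
+def recall_at_k(ranked, truth, k):
+    hits = sum(1 for user, gold in truth.items() if gold in ranked[user][:k])
+    return hits / len(truth)
+
+ranked = {"u1": ["pay_utility_bill", "order_coffee"], "u2": ["book_hotel", "hail_taxi"]}
+truth = {"u1": "order_coffee", "u2": "hail_taxi"}
+print(recall_at_k(ranked, truth, 1))  # 0.0: neither gold intent is ranked first
+print(recall_at_k(ranked, truth, 2))  # 1.0: both gold intents appear in the top 2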
+3.3 Evaluation of Downstream Applications
+In this Section, we further evaluate whether the user next intent prediction system can improve the downstream tasks' performance at Alipay.
+1) Home Recommendation: Home recommendation is one of the most important business scenarios in which our system helps to discover user interests in real-time, shown in Section 2.3. Online experiments show that our system can bring a relative increase of 1.61% in CTR (Click-Through-Rate).
+2) Transaction Risk Management: To create a secure payment environment, the potential risks (e.g., embezzlement and money laundering) of each transaction should be estimated to determine whether it is invalid, which consumes a huge amount of computation. In order to reduce the cost, we treat users' consumption intent as an important transaction feature to discover low-risk transactions. By leveraging the credible transaction identification based on AlipayKG, the coverage rate of low-risk transactions is relatively increased by 100%.
+3) Alipay Search: In this scenario, the fine-grained user intent can be captured in real-time by query understanding technology and then used in various stages of search service (e.g., recall, relevance and ranking). Online A/B tests demonstrate that our user intent system can cover 90% of the user problems, and the CTR achieves an increase of 5.8%.
+4 RELATED WORK
+Knowledge Graph Construction. Many efforts have been made to construct KGs, such as Freebase [4], DBpedia [1], AliCoCo [15], AliCG [25], OpenBG [8, 18], and HowNet [16], which utilize crowdsourcing and information extraction technologies [5-7, 23, 24, 26] to describe and extract specific facts with well-defined labels. Unlike those works, we focus on the conceptualization of an intent architecture in which the "Intent" nodes and the relations among them are obtained from unstructured text. Meanwhile, different from linguistic KGs such as HowNet [16] that are handcrafted mainly by humans, AlipayKG is built by natural language processing via human-in-the-loop. AliMe KG [13] is the work most similar to ours: it models user intents, item information, points of interest (POI), and relations thereof to understand user needs. Different from that work, AlipayKG fits all user-item interaction scenarios, whereas AliMe KG is designed for pre-sales conversation, which is quite a different scenario from ours. Moreover, we formally introduce a new type of concept named "Intent" to explicitly represent various user needs and further build a bridge between user requirements and item supplies for semantic matching.
+User Intent Prediction. User intent prediction has commonly been treated as a classification problem, for which various approaches have been proposed, from traditional machine learning methods like SVM [3] to recent pre-trained language models like BERT [9]. Li et al. [14] is somewhat similar to our work: they attempt to discover intents from user consumption data in Meituan. Different from those works, we aim to predict the next intent from the user behavioral sequence in Alipay, which is more challenging and requires fully capturing the user's preferences under the current situation.
+5 CONCLUSION AND FUTURE WORK
+In this work, we present the user intent system and demonstrate its effectiveness in downstream applications deployed in Alipay.
+In the future, we will continually maintain the AlipayKG to cover more business data and applications, and hopefully, it can benefit more downstream tasks in digital life. Furthermore, we will make efforts in the direction of interpretable reasoning for better user intent prediction.
+[Screenshot of the Alipay homepage user interface omitted in text extraction.]
+REFERENCES
+[1] Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. DBpedia: A Nucleus for a Web of Open Data. In The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007 (Lecture Notes in Computer Science, Vol. 4825), Karl Aberer, Key-Sun Choi, Natasha Fridman Noy, Dean Allemang, Kyung-Il Lee, Lyndon J. B. Nixon, Jennifer Golbeck, Peter Mika, Diana Maynard, Riichiro Mizoguchi, Guus Schreiber, and Philippe Cudré-Mauroux (Eds.). Springer, 722-735. https://doi.org/10.1007/978-3-540-76298-0_52
+[2] Peter Battaglia, Jessica Blake Chandler Hamrick, Victor Bapst, Alvaro Sanchez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andy Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Jayne Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. 2018. Relational inductive biases, deep learning, and graph networks. arXiv (2018). https://arxiv.org/pdf/1806.01261.pdf
+[3] Aditya Bhargava, Asli Celikyilmaz, Dilek Hakkani-Tür, and Ruhi Sarikaya. 2013. Easy contextual intent prediction and slot detection. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2013, Vancouver, BC, Canada, May 26-31, 2013. IEEE, 8337-8341. https://doi.org/10.1109/ICASSP.2013.6639291
+[4] Kurt D. Bollacker, Colin Evans, Praveen K. Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2008, Vancouver, BC, Canada, June 10-12, 2008, Jason Tsong-Li Wang (Ed.). ACM, 1247-1250. https://doi.org/10.1145/1376616.1376746
+[5] Xiang Chen, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, and Ningyu Zhang. 2022. LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, and Seung-Hoon Na (Eds.). International Committee on Computational Linguistics, 2374-2387. https://aclanthology.org/2022.coling-1.209
+[6] Xiang Chen, Lei Li, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning. CoRR abs/2205.14704 (2022). https://doi.org/10.48550/arXiv.2205.14704 arXiv:2205.14704
+[7] Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction. In WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25-29, 2022, Frédérique Laforest, Raphaël Troncy, Elena Simperl, Deepak Agarwal, Aristides Gionis, Ivan Herman, and Lionel Médini (Eds.). ACM, 2778-2788. https://doi.org/10.1145/3485447.3511998
+[8] Shumin Deng, Chengming Wang, Zhoubo Li, Ningyu Zhang, Zelin Dai, Hehong Chen, Feiyu Xiong, Ming Yan, Qiang Chen, Mosha Chen, Jiaoyan Chen, Jeff Z. Pan, Bryan Hooi, and Huajun Chen. 2022. Construction and Applications of Billion-Scale Pre-trained Multimodal Business Knowledge Graph. CoRR abs/2209.15214 (2022). https://doi.org/10.48550/arXiv.2209.15214 arXiv:2209.15214
+[9] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), Jill Burstein, Christy Doran, and Thamar Solorio (Eds.). Association for Computational Linguistics, 4171-4186. https://doi.org/10.18653/v1/n19-1423
+[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. IEEE Computer Society, 770-778. https://doi.org/10.1109/CVPR.2016.90
+[11] Finn V Jensen and Thomas Dyhre Nielsen. 2007. Bayesian networks and decision graphs. Vol. 2. Springer.
+[12] Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, and Davide Testuggine. 2019. Supervised Multimodal Bitransformers for Classifying Images and Text. In Visually Grounded Interaction and Language (ViGIL), NeurIPS 2019 Workshop, Vancouver, Canada, December 13, 2019. https://vigilworkshop.github.io/static/papers/40.pdf
+[13] Feng-Lin Li, Hehong Chen, Guohai Xu, Tian Qiu, Feng Ji, Ji Zhang, and Haiqing Chen. 2020. AliMe KG: Domain Knowledge Graph Construction and Application in E-commerce. CoRR abs/2009.11684 (2020). arXiv:2009.11684 https://arxiv.org/abs/2009.11684
+[14] Yinfeng Li, Chen Gao, Xiaoyi Du, Huazhou Wei, Hengliang Luo, Depeng Jin, and Yong Li. 2022. Automatically Discovering User Consumption Intents in Meituan. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14-18, 2022, Aidong Zhang and Huzefa Rangwala (Eds.). ACM, 3259-3269. https://doi.org/10.1145/3534678.3539122
+[15] Xusheng Luo, Luxin Liu, Yonghua Yang, Le Bo, Yuanpeng Cao, Jinghang Wu, Qiang Li, Keping Yang, and Kenny Q. Zhu. 2020. AliCoCo: Alibaba E-commerce Cognitive Concept Net. In Proceedings of the 2020 International Conference on Management of Data, SIGMOD Conference 2020, online conference [Portland, OR, USA], June 14-19, 2020, David Maier, Rachel Pottinger, AnHai Doan, Wang-Chiew Tan, Abdussalam Alawini, and Hung Q. Ngo (Eds.). ACM, 313-327. https://doi.org/10.1145/3318464.3386132
+[16] Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Qiang Dong, Maosong Sun, and Zhendong Dong. 2019. OpenHowNet: An Open Sememe-based Lexical Knowledge Base. CoRR abs/1901.09957 (2019). arXiv:1901.09957 http://arxiv.org/abs/1901.09957
+[17] Chen Qu, Liu Yang, W Bruce Croft, Yongfeng Zhang, Johanne R Trippas, and Minghui Qiu. 2019. User intent prediction in information-seeking conversations. In Proceedings of the 2019 Conference on Human Information Interaction and Retrieval. 25-33.
+[18] Yincen Qu, Ningyu Zhang, Hui Chen, Zelin Dai, Zezhong Xu, Chengming Wang, Xiaoyu Wang, Qiang Chen, and Huajun Chen. 2022. Commonsense Knowledge Salience Evaluation with a Benchmark Dataset in E-commerce. CoRR abs/2205.10843 (2022). https://doi.org/10.48550/arXiv.2205.10843 arXiv:2205.10843
+[19] Tal Ridnik, Emanuel Ben-Baruch, Nadav Zamir, Asaf Noy, Itamar Friedman, Matan Protter, and Lihi Zelnik-Manor. 2021. Asymmetric Loss for Multi-Label Classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 82-91.
+[20] Xiaobin Tang, Jing Zhang, Bo Chen, Yang Yang, Hong Chen, and Cuiping Li. 2020. BERT-INT: A BERT-based Interaction Model For Knowledge Graph Alignment. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, Christian Bessiere (Ed.). International Joint Conferences on Artificial Intelligence Organization, 3174-3180. https://doi.org/10.24963/ijcai.2020/439 Main track.
+[21] Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiaojun Quan, Tong Zhang, and Yonggang Wang. 2020. Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-way Attentions of Auto-analyzed Knowledge. In ACL. 8286-8296. https://doi.org/10.18653/v1/2020.acl-main.735
+[22] Wei Wang, Bin Bi, Ming Yan, Chen Wu, Jiangnan Xia, Zuyi Bao, Liwei Peng, and Luo Si. 2020. StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. https://openreview.net/forum?id=BJgQ4lSFPH
+[23] Hongbin Ye, Ningyu Zhang, Hui Chen, and Huajun Chen. 2022. Generative Knowledge Graph Construction: A Review. CoRR abs/2210.12714 (2022). https://doi.org/10.48550/arXiv.2210.12714 arXiv:2210.12714
+[24] Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, and Huajun Chen. 2021. Document-level Relation Extraction as Semantic Segmentation. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, Zhi-Hua Zhou (Ed.). ijcai.org, 3999-4006. https://doi.org/10.24963/ijcai.2021/551
+[25] Ningyu Zhang, Qianghuai Jia, Shumin Deng, Xiang Chen, Hongbin Ye, Hui Chen, Huaixiao Tou, Gang Huang, Zhao Wang, Nengwei Hua, and Huajun Chen. 2021. AliCG: Fine-grained and Evolvable Conceptual Graph Construction for Semantic Search at Alibaba. In KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, 2021, Feida Zhu, Beng Chin Ooi, and Chunyan Miao (Eds.). ACM, 3895-3905. https://doi.org/10.1145/3447548.3467057
+[26] Ningyu Zhang, Xin Xu, Liankuan Tao, Haiyang Yu, Hongbin Ye, Shuofei Qiao, Xin Xie, Xiang Chen, Zhoubo Li, Lei Li, et al. 2022. DeepKE: A Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population. arXiv preprint arXiv:2201.03335 (2022).
+[27] Yuhao Zhang, Hongji Zhu, Yongliang Wang, Nan Xu, Xiaobo Li, and Binqiang Zhao. 2022. A Contrastive Framework for Learning Sentence Representations from Pairwise and Triple-wise Perspective in Angular Space. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Dublin, Ireland, 4892-4903. https://doi.org/10.18653/v1/2022.acl-long.336
+[28] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. 2020. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. CoRR abs/2012.07436 (2020). arXiv:2012.07436 https://arxiv.org/abs/2012.07436
diff --git a/0NAzT4oBgHgl3EQfefyv/content/tmp_files/2301.01438v1.pdf.txt b/0NAzT4oBgHgl3EQfefyv/content/tmp_files/2301.01438v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..3e2c7579eaa52bafeb5a074dbd41a731ed0f2d87
--- /dev/null
+++ b/0NAzT4oBgHgl3EQfefyv/content/tmp_files/2301.01438v1.pdf.txt
@@ -0,0 +1,1782 @@
+arXiv:2301.01438v1 [math.AP] 4 Jan 2023
+NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY IN
+THE HEISENBERG–WEYL AND GELL-MANN BASES WITH
+APPLICATIONS TO FAST LEARNING
+JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG
+Abstract.
+Previous noncommutative Bohnenblust–Hille inequalities addressed operator decompositions in the tensor product space SU(2)^{⊗n} [HCP22, VZ22]. Here we prove the inequalities for product spaces of arbitrary local dimension, e.g., SU(N)^{⊗n} or n-fold tensor products of N × N Hermitian matrices. We treat operator decompositions in both the Gell-Mann and Heisenberg–Weyl bases by reducing to commutative cases. The latter basis is reduced to a scalar Bohnenblust–Hille inequality for cyclic groups which we also prove. Applications to quantum junta theorems and learning qudit quantum observables in the Probably Approximately Correct framework are also listed.
+Contents
+Notations 2
+1. Introduction 2
+1.1. Gell-Mann matrix basis 4
+1.2. Heisenberg–Weyl matrix basis 5
+2. Applications 8
+2.1. Quantum k-juntas for qudits 8
+2.2. Learning quantum observables of low degrees 9
+3. Main results for the Gell-Mann matrix basis 10
+4. Main results for Heisenberg–Weyl matrix basis 13
+5. Bohnenblust–Hille inequalities for cyclic groups: the difficulty 17
+6. Bohnenblust–Hille inequalities for cyclic groups: a partial remedy 19
+6.1. Constant cannot be 1 19
+6.2. A partial solution 21
+References 24
+2010 Mathematics Subject Classification. 46B10, 46B09; 46B07; 60E15.
+Key words and phrases. Bohnenblust–Hille inequality, Gell-Mann matrix basis, Heisenberg-Weyl basis, qubits, qudits, fast learning, k-juntas, PAC, probably approximately correct learning of big matrices.
+J.S. is supported by Chris Umans' Simons Investigator Grant. The research of A.V. is supported by NSF DMS-1900286, DMS-2154402 and by Hausdorff Center for Mathematics. H.Z. is supported by the Lise Meitner fellowship, Austrian Science Fund (FWF) M3337. This work is partially supported by NSF DMS-1929284 while all three authors were in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Harmonic Analysis and Convexity program.
+Notations
+Let C and R be the complex numbers and real numbers, respectively. Let D = {z ∈ C : |z| < 1} be the open unit disc in the complex plane. Fix an integer N ≥ 2. Let ω := e^{2πi/N} denote a primitive root of unity of order N. Let Z_N := {0, 1, . . . , N − 1} be the additive cyclic group of order N and Ω_N := {1, ω, . . . , ω^{N−1}} the multiplicative cyclic group of order N. We also need
+\widetilde{Ω}_N := conv(Ω_N),
+a regular polygon inscribed in the circle T. We use M_n(C) to denote the n-by-n complex matrix algebra and 1 the identity matrix. Denote by {e_j : 1 ≤ j ≤ N} the standard basis of C^N. We use ⟨·, ·⟩ to denote the inner product on C^n that is linear in the second argument. For two vectors ξ, η ∈ C^n, we use |ξ⟩⟨η| to denote the linear operator such that |ξ⟩⟨η| e_j = ⟨η, e_j⟩ · |ξ⟩.
+1. Introduction
+Let
+f(z) = \sum_{α} c_α z^α = \sum_{α} c_α z_1^{α_1} · · · z_n^{α_n},
+where α = (α_1, . . . , α_n) are vectors of non-negative integers and the total degree of polynomial f is d = max_α(α_1 + · · · + α_n). Here z can be all complex vectors in T^n or all sequences of ±1 in Boolean cube {−1, 1}^n. Bohnenblust–Hille type of inequalities are the following
+\Big( \sum_{α} |c_α|^{\frac{2d}{d+1}} \Big)^{\frac{d+1}{2d}} ≤ C(d) \sup_{z} |f(z)|.   (1.1)
+The supremum is taken either over torus T^n or Boolean cube {−1, 1}^n. In both cases this inequality is proven with constant C(d) that is independent of the dimension n and sub-exponential in the degree d.
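+As a small numerical illustration of the quantity controlled by (1.1) on the Boolean cube (our own sketch with toy data; the polynomial, random seed, and variable names are assumptions, not taken from the paper):
+# Hedged sketch: compare the l_{2d/(d+1)} norm of the coefficients of a random
+# degree-d multilinear polynomial on {-1,1}^n with its sup norm over the cube.
+import itertools
+import numpy as np
+
+n, d = 4, 2
+rng = np.random.default_rng(0)
+# one coefficient c_S for each monomial z_S = prod_{i in S} z_i with |S| <= d
+monomials = [S for r in range(d + 1) for S in itertools.combinations(range(n), r)]
+coeffs = {S: rng.normal() for S in monomials}
+
+def f(z):
+    return sum(c * np.prod([z[i] for i in S]) for S, c in coeffs.items())
+
+sup_norm = max(abs(f(z)) for z in itertools.product([-1, 1], repeat=n))
+p = 2 * d / (d + 1)
+bh_norm = sum(abs(c) ** p for c in coeffs.values()) ** (1 / p)
+print(bh_norm, sup_norm, bh_norm / sup_norm)
+Inequality (1.1) asserts that the printed ratio is bounded by a constant C(d) that does not depend on n; how large that constant must be is quantified next.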
More precisely, denote by BH≤d +T and BH≤d +{±1} the +best constants in the Bohnenblust–Hille inequalities (1.1) for degree-d polynomials +on Tn and {−1, 1}n, respectively. Then both BH≤d +T and BH≤d +{±1} are bounded from +above by ec√d log d for some universal c > 0 [BPS, DMP]. +One of the key features of this inequality (1.1) is the dimension-freeness of C(d). +This, together with its sub-exponential growth phenomenon in d, plays an important + +NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY +3 +role in resolving some open problems in functional analysis and harmonic analysis +[DGMS, BPS, DFOOS]. The optimal dependence of BH≤d +T and BH≤d +{±1} on the degree +d remains open. Important questions in quantum computing would be resolved if +one would improve the constant C(d) to dC, see [AA]. +The Bohnenblust–Hille inequalities for the Boolean cubes {−1, 1}n have found +great applications in learning bounded low degree polynomials on Boolean cubes +[EI22]. Motivated by learning quantum observables, a quantum counterpart of the +Bohnenblust–Hille inequality for Boolean cubes was recently conjectured in [RWZ22]. +In the quantum setting, functions on Boolean cubes {−1, 1}n are replaced by 2n-by-2n +matrices. More precisely, suppose σ0 = 1 is the 2-by-2 identity matrix and σ1, σ2, σ3 +are Pauli matrices: +σ1 = +� +0 +1 +1 +0 +� +, +σ2 = +� +0 +−i +i +0 +� +, +σ3 = +� +1 +0 +0 +−1 +� +. +The degree-d polynomial Pauli observables are matrices A ∈ M2(C)⊗n of the form +A = +� +s∈{0,1,2,3}n:|s|≤d +�Asσs1 ⊗ · · · ⊗ σsn, +where �As ∈ C is the Fourier coefficient, and for s = (s1, . . . , sn) ∈ {0, 1, 2, 3}n, +|s| is the number of nonzero sj’s. Then the Bohnenblust–Hille inequality for Pauli +observables reads: for all n ≥ 1 and A ∈ M2(C)⊗n of degree-d + + � +s:|s|≤d +| �As| +2d +d+1 + + +d+1 +2d +≤ C(d)∥A∥. +(1.2) +Here and in what follows, ∥A∥ always denotes the operator norm of A. The inequality +(1.2) was conjectured in [RWZ22] and was resolved in [HCP22] with C(d) = dCd for +some universal C > 0. A different proof was given in [VZ22] with constant that is +of exponential growth i.e. C(d) = Cd for some universal C > 0. Although it is still +not clear if one may match the sub-exponential growth in the classical setting, the +quantum Bohnenblust–Hille inequality (1.2) with dimension-free C(d) < ∞ already +has a number of interesting applications. For example, it enables the learning of +low degree Pauli observables using a logarithmic number of random queries [VZ22] +similar to the classical setting [EI22]. This in turn enables learning more general +quantum dynamics [HCP22]. +However, in many contexts it is desirable to consider quantum observables decom- +posed in the product space MN(C)⊗n for N > 2, such as when studying observables +of multilevel quantum systems (termed qudits—though given our use of N for local + +4 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +dimension, the term “quNit” might be more apt). For example, when learning an un- +known qudit observable, it can be physically important that sample states be drawn +from the native dimension of the system, rather than some larger ambient Hilbert +space. Having these inequalities in new bases also greatly expands the distributions +under which a PAC-learning theorem is available for arbitrary quantum processes. +Of particular interest to us are the Gell-Mann (GM) observables and Heisenberg– +Weyl (HW) observables, both of which (essentially) reduce to Pauli observables when +N = 2. 
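As a quick numerical sanity check of this N = 2 reduction (a minimal sketch assuming Python with NumPy, built directly from the definitions of the two bases given in Sections 1.1 and 1.2 below), one can verify that the Heisenberg–Weyl generators X, Z and the Gell-Mann matrices A_{12}, B_{12}, C_1 reproduce σ1, σ2, σ3 up to a global phase.

import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

N = 2
omega = np.exp(2j * np.pi / N)                     # = -1 for N = 2
X = np.roll(np.eye(N, dtype=complex), 1, axis=0)   # shift: X e_j = e_{j+1 mod N}
Z = np.diag(omega ** np.arange(N))                 # phase: Z e_j = omega^j e_j
assert np.allclose(X, sigma1) and np.allclose(Z, sigma3)
assert np.allclose(1j * X @ Z, sigma2)             # X Z equals sigma2 up to the phase i

E = lambda j, k: np.outer(np.eye(N)[j], np.eye(N)[k])    # E_{jk} = |e_j><e_k| (0-indexed)
A12 = np.sqrt(N / 2) * (E(0, 1) + E(1, 0))
B12 = np.sqrt(N / 2) * (-1j * E(0, 1) + 1j * E(1, 0))
C1 = np.sqrt(N / (1 ** 2 + 1)) * (E(0, 0) - 1 * E(1, 1))  # Gamma_1 = 1 when N = 2
assert np.allclose(A12, sigma1) and np.allclose(B12, sigma2) and np.allclose(C1, sigma3)
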
In this paper we prove noncommutative Bohnenblust–Hille inequalities in +these two settings following the approach in [VZ22], where the proof of quantum +Bohnenblust–Hille inequalities (1.2) is reduced to the classical Bohnenblust–Hille +inequalities (1.1) for Boolean cubes. It turns out that the GM case can again be +reduced to the case for classical Boolean cubes (Ω2)n = {−1, 1}c(N)n, while the +HW case (under certain conditions) can be reduced to the case for cyclic groups +(ΩN)d(N)n, N ≥ 2. The Bohnenblust–Hille inequalities for cyclic groups (ΩN)n, N ≥ 3 +was not known before, however, so we also initiate its study here. The constants +c(N), d(N) are specified below. +1.1. Gell-Mann matrix basis. Let N ≥ 1 and put Ejk = |ej⟩⟨ek| , 1 ≤ j, k ≤ N. +The generalized Gell-Mann Matrices are a basis of MN(C) and are comprised of the +identity matrix 1 along with: +symmetric: +Ajk = +� +N +2 +� +Ejk + Ekj +� +for 1 ≤ j < k ≤ N +antisymmetric: +Bjk = +� +N +2 +� +− iEjk + iEkj +� +for 1 ≤ j < k ≤ N +diagonal: +Cj = Γj +��j +k=1 Ekk − jEj+1,j+1 +� +for 1 ≤ j ≤ N − 1, +where Γj := +� +N +j2+j. We denote +GM(N) := {1, Ajk, Bjk, Cm}1≤j 0 such that for +all n ≥ 1 and GM observable A ∈ MN(C)⊗n of degree-d, we have +∥ � +A∥ 2d +d+1 ≤ C(d, N)∥A∥. +(1.3) +Moreover, we have C(d, N) ≤ +�3 +2(N2 − N) +�dBH≤d +{±1}. +Notice that when N = 2 (the Pauli case of [VZ22]) this upper bound of C(d, N) +becomes 3dBH≤d +{±1}. +The proof of this theorem follows similarly the approach in [VZ22] and we can +reduce the problem to the Bohnenblust–Hille inequalities (1.1) for Boolean cubes +{−1, 1}c(N)n. See Section 3 for details. +1.2. Heisenberg–Weyl matrix basis. Fix N ≥ 2. +Recall that ω = e +2πi +N +and +{ej : j ∈ ZN} = {ej : 1 ≤ j ≤ N} is the standard basis of CN. The “shift” operator +X and “phase” operator Z are defined via +Xej = ej+1, +Zej = ωjej, +for all +j ∈ ZN. +Note that XN = ZN = 1. See more in [AEHK]. In the following, everything is +mod N. +Below we consider Heisenberg–Weyl collection of matrices of size N × N: +HW(N) := {XℓZm}ℓ,m∈ZN . +These are unitary matrices and form a basis of MN(C) (see Lemma 4.1). Moreover, +they are orthonormal with respect to the normalized trace trN. + +6 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +Fix n ≥ 1. Any HW observable A ∈ MN(C)⊗n has a unique Fourier expansion +with respect to HW(N): +A = +� +⃗ℓ,⃗m∈Zn +N +�A(⃗ℓ, ⃗m)Xℓ1Zm1 ⊗ · · · ⊗ XℓnZmn, +where �A(⃗ℓ, ⃗m) ∈ C is the Fourier coefficient at (⃗ℓ, ⃗m). We say that A is of degree-d +if �A(⃗ℓ, ⃗m) = 0 whenever +|(⃗ℓ, ⃗m)| := +n +� +j=1 +(ℓj + mj) > d. +Here, 0 ≤ ℓj, mj ≤ N − 1 and we do not mod N freely. +We denote by �A the sequence of Fourier coefficients of A, and write +∥ �A∥p := + + � +⃗ℓ,⃗m∈Zn +N +| �A(⃗ℓ, ⃗m)|p + + +1/p +, +1 ≤ p < ∞. +Now we are ready to state the Bohnenblust–Hille inequalities for the Heisenberg– +Weyl basis. However, due to some technical difficulties, we are not able to prove it +in full generality. Moreover, different from the Gell-Mann basis setting, we shall see +that the problem for the Heisenberg–Weyl basis will be reduced to the Bohnenblust– +Hille inequalities for the cyclic groups (ΩN)n, instead of the Boolean cubes (Ω2)n = +{−1, 1}n. One may already see the connection to ΩN (instead of Ω2) by considering +Xℓ, ℓ ∈ ZN only. However, the Bohnenblust–Hille inequalities for the cyclic groups +(ΩN)n were not known before. 
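Before turning to those scalar inequalities, here is a small numerical sketch (assuming Python with NumPy; N = 3 is an arbitrary illustrative choice) of the facts about X and Z stated above and established in Lemma 4.1 below: unitarity, X^N = Z^N = 1, the commutation relation ZX = ωXZ, and orthonormality of HW(N) with respect to the normalized trace tr_N.

import numpy as np

N = 3                                        # illustrative local dimension
omega = np.exp(2j * np.pi / N)
I = np.eye(N, dtype=complex)
X = np.roll(I, 1, axis=0)                    # shift operator
Z = np.diag(omega ** np.arange(N))           # phase operator
mpow = np.linalg.matrix_power
tr_N = lambda M: np.trace(M) / N             # normalized trace

assert np.allclose(X @ X.conj().T, I) and np.allclose(Z @ Z.conj().T, I)   # unitary
assert np.allclose(mpow(X, N), I) and np.allclose(mpow(Z, N), I)           # order N
assert np.allclose(Z @ X, omega * (X @ Z))                                 # ZX = omega XZ

# HW(N) = {X^l Z^m} is orthonormal with respect to tr_N, hence a basis of M_N(C).
HW = {(l, m): mpow(X, l) @ mpow(Z, m) for l in range(N) for m in range(N)}
for (l, m), U in HW.items():
    for (lp, mp), V in HW.items():
        expected = 1.0 if (l, m) == (lp, mp) else 0.0
        assert np.isclose(tr_N(U.conj().T @ V), expected)
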
Recall that in the classical setting, Bohnenblust–Hille +inequalities have been known for groups (Ω2)n = {−1, 1}n and (Ω∞)n = Tn, and +their analogs for cyclic groups (ΩN)n, N ≥ 3 can be understood as the results in +between. +Our main result in this part consists of a partial solution to the Bohnenblust–Hille +inequalities for the cyclic groups (ΩN)n, and a family of quantum analogs for the +Heisenberg–Weyl basis. For this, recall that any polynomial f : (ΩN)n → C has the +Fourier expansion: +f(z) = +� +α +�f(α)zα1 +1 · · · zαn +n , +z = (z1, . . . , zn) ∈ (ΩN)n, +(1.4) +where α = (α1, . . . , αn) ∈ Zn +N. It is said to be of degree-d if �f(α) = 0 whenever +|α| := �n +j=1 αj > d. +As usual, we denote by ∥ �f∥p the ℓp-norm of the Fourier +coefficients �f(α). + +NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY +7 +It turns out that the Bohnenblust–Hille inequalities for the cyclic groups (ΩN)n, N ≥ +3 are far from being trivial. Mimicking the classical proof for {−1, 1}n and Tn, one +may arrive the following: +Theorem 1.2. Fix N ≥ 2 and d ≥ 1. There exists C(d) > 0 such that for any +polynomial f on (ΩN)n of degree-d, we have +∥ �f∥ 2d +d+1 ≤ C(d) +sup +z∈(�ΩN)n +|f(z)|, +(1.5) +where f on the right hand side is the extension of f on (�ΩN)n via the same formula +(1.4). Moreover, C(d) ≤ ec√d log d for some universal c > 0. +The sketch proof of Theorem 1.2 will be presented in Section 5. The full proof will +be the goal of our subsequent article. +Recall that �ΩN is the convex hull of ΩN. On the right hand side of (1.5), the sup +over (�ΩN)n can be replaced by (ΩN)n when N = 2 (since f in this case is always +multi-affine, and therefore convex in each variable) or N = ∞ i.e. ΩN = T (by the +maximum modulus principle). For general N ≥ 3, it is not obvious. This brings +forward an interesting complex analysis question for commutative (1.1) on (ΩN)n. +This is one new difficulty which will be discussed in Section 6. We have a partial +solution that is the following theorem. We need to restrict to the polynomials for +which each variable has degree at most N−1 +2 . For notational convenience, we consider +odd N only, say replace N with 2N − 1. +Theorem 1.3. Let N ≥ 2. Suppose that +f(z) := +� +α +aαzα, +z = (z1, . . . , zn) ∈ Cn +is any analytic polynomial of n complex variables of degree at most d and such that +in each variable zi its degree is at most N − 1. Then +� � +α +|aα| +2d +d+1 +� d+1 +2d ≤ C′(d, N) +sup +z∈(Ω2N−1)n |f(z)| , +where C′(d) ≤ cd +NC(d) with some constant cN > 0 and C(d) given in (1.5). +Let us have Fourier expansion of a matrix A +A = +� +⃗ℓ,⃗m∈Zn +N +�A(⃗ℓ, ⃗m)Xℓ1Zm1 ⊗ · · · ⊗ XℓnZmn . +(1.6) +Our main result for the Heisenberg–Weyl basis is the following quantum analog of +Bohnenblust–Hille inequality: + +8 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +Theorem 1.4. Fix a prime number N ≥ 2 and suppose d ≥ 1. If the Bohnenblust– +Hille inequality holds for degree-d polynomials on cyclic groups (ΩN)n, n ≥ 1 with the +best constant BH≤d +ΩN < ∞ independent of n, then the Bohnenblust–Hille inequalities +hold for the Heisenberg–Weyl basis: for any n ≥ 1 and any A ∈ MN(C)⊗n of degree- +d, we have +∥ �A∥ 2d +d+1 ≤ C(d, N)∥A∥, +with C(d, N) ≤ (N + 1)dBH≤d +ΩN. +In particular, if in the Fourier expansion (1.6) either all ℓi ≤ N−1 +2 +or all mi ≤ N−1 +2 , +then ∥ �A∥ 2d +d+1 ≤ C(d, N)∥A∥ with the constant C(d, N) ≤ (N + 1)dBH≤d +ΩN. +As the statement suggests, we actually reduce the problem to the Bohnenblust– +Hille inequality for cyclic groups (ΩN)d(N)n. 
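To make the two quantities compared in Theorem 1.4 concrete, the following brute-force sketch (an illustration assuming Python with NumPy; N = 3, n = 2, d = 2 and the random coefficients are arbitrary choices, not part of the argument) builds a low-degree Heisenberg–Weyl observable, recovers its Fourier coefficients via the normalized trace, and evaluates ∥Â∥_{2d/(d+1)} against the operator norm ∥A∥.

import itertools
import numpy as np

N, n, d = 3, 2, 2                          # illustrative qudit dimension, sites, degree
omega = np.exp(2j * np.pi / N)
I = np.eye(N, dtype=complex)
X = np.roll(I, 1, axis=0)
Z = np.diag(omega ** np.arange(N))
W = lambda l, m: np.linalg.matrix_power(X, l) @ np.linalg.matrix_power(Z, m)

def word(ls, ms):                          # X^{l_1}Z^{m_1} (x) ... (x) X^{l_n}Z^{m_n}
    out = np.ones((1, 1), dtype=complex)
    for l, m in zip(ls, ms):
        out = np.kron(out, W(l, m))
    return out

# A random degree-d HW observable: only indices with sum(l_j) + sum(m_j) <= d appear.
rng = np.random.default_rng(0)
index_set = [(ls, ms) for ls in itertools.product(range(N), repeat=n)
                      for ms in itertools.product(range(N), repeat=n)
                      if sum(ls) + sum(ms) <= d]
coeffs = {lm: rng.standard_normal() + 1j * rng.standard_normal() for lm in index_set}
A = sum(c * word(*lm) for lm, c in coeffs.items())

# Recover each Fourier coefficient from A via the normalized trace on M_{N^n}(C).
dim = N ** n
recovered = {lm: np.trace(word(*lm).conj().T @ A) / dim for lm in index_set}
assert all(np.isclose(recovered[lm], coeffs[lm]) for lm in index_set)

p = 2 * d / (d + 1)
lhs = sum(abs(c) ** p for c in coeffs.values()) ** (1 / p)   # ||A_hat||_{2d/(d+1)}
rhs = np.linalg.norm(A, 2)                                   # operator norm ||A||
print(lhs, rhs)   # Theorem 1.4 bounds lhs by C(d, N) * rhs under its hypotheses

In Section 4 these matrix coefficients are read off, up to explicit constants, from the scalar polynomial ⃗ω ↦ tr[Aρ(⃗ω)], which is what connects the matrix inequality to its cyclic-group counterpart.
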
In this reduction step, we need N to be +prime. The proof is contained in Section 4. Combined with Theorems 1.3 and 1.4, we +obtain partial solution to the Bohnenblust–Hille inequality for the Heisenberg–Weyl +basis. Notice that the restrictions on powers ℓi or mi represent a sort of generalization +of multi-affinity in each variable, which was important for N = 2 case. For N = 3 +this is still a multi-affinity assumption, but for N = 5, 7, . . . it is an assumption that +is considerably weaker than multi-affinity. +2. Applications +In this section, we present some applications of quantum Bohnenblust–Hille in- +equalities for GM observables. For A ∈ Mn(C) we use ∥A∥2 to denote the Schatten-2 +norm of A with respect to the normalized trace 1 +ntr. +2.1. Quantum k-juntas for qudits. Recall that a function f : {−1, 1}n → C is +called a k-junta if it depends on at most k coordinates. +Similarly, a self-adjoint +operator A ∈ MN(C)⊗n is a quantum k-junta if it acts non-trivially on at most k +qudits. It is known that [Bou02, DFKO07] if a bounded function f over {−1, 1}n is +of low degree, then it is close to some juntas. In the next corollary we derive such +a result in a quantum setting. We refer to [RWZ22] to another quantum junta type +theorem related to the influences instead of the degree. +Theorem 2.1. Fix N ≥ 2 and d ≥ 1. For any n ≥ 1, suppose that A ∈ MN(C)⊗n +is a self-adjoint GM observable of degree-d and ∥A∥ ≤ 1. Then for any ǫ > 0, there +exists a quantum k-junta B ∈ MN(C)⊗n such that +∥A − B∥2 ≤ ǫ +with +k ≤ +d +� +BH≤d +MN(C) +�2d +ǫ2d +, + +NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY +9 +where BH≤d +MN(C) denotes the best constant in Bohnenblust–Hille inequalities for GM +observables (1.3). +In particular, we may choose k ≤ d( CN +ǫ )2d for some CN > 0 +depending only on N. +Remark 2.2. The results in [Bou02, DFKO07] are in commutative setting, in this +setting they are more general. However, in the case when polynomials are of low +degree, the proof that uses Bohnenblust–Hille inequalities is simpler. We are grateful +to Alexandros Eskenazis for pointing this out to us. +2.2. Learning quantum observables of low degrees. Suppose we need to learn +an observable A over n qudits, i.e. A ∈ MN(C)⊗n, and suppose we a priori know +that it is a polynomial of degree-d in the Gell-Mann basis with +∥A∥ ≤ 1. +(2.1) +To learn it we can randomly choose a state (somehow), sampling it by the same law. +After that we wish to be able to build another (random) observable � +A such that +∥ � +A − A∥2 +2 ≤ ε +(2.2) +with probability at least 1 − δ. The question is how many random samples K = +K(ε, δ, N, d, n) we need to accomplish this? +In the scalar case this was solved in [EI22] with +K ≤ C(d) +εd+1 log +�n +δ +� +, +where C(d) depends on the Bohnenblust–Hille constant BH≤d +{±1} for degree-d polyno- +mials on Boolean cubes {−1, 1}n. +In [VZ22] we explained one such algorithm for matrices in Pauli basis. The algo- +rithm for the Gell-Mann basis is almost the same and we will publish it separately. +The fact that A is of degree-d might be not so important as remarked in the discus- +sion before [CHP, Theorem 4]: with respect to certain measures, the contribution +of Gell-Mann monomials is exponentially decaying in the number of qudits that the +monomials act nontrivially on. +Theorem 2.3. Suppose that A ∈ MN(C)⊗n is of degree-d in the Gell-Mann basis +and satisfies (2.1). Fix δ, ǫ ∈ (0, 1) and +K ≥ +Cd2 � +BH≤d +{±1} +�2d +ǫd+1 +log +�n +δ +� +, +with C > 0 large enough. Then given any K i.i.d. 
random variables ⃗x(m) uniformly +distributed on {−1, 1}(N2−1)n, as well as the queries of pairs (⃗x(m), tr[Aρ(⃗x(m))]), +we can construct a random polynomial � +A ∈ MN(C)⊗n such that ∥A − � +A∥2 +2 ≤ ǫ with + +10 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +probability at least 1−δ. Here for each ⃗x ∈ {−1, 1}(N2−1)n, ρ(⃗x) is an explicit positive +semi-definite matrix with trace 1, independent of A. +Remark 2.4. The algorithm that builds � +A deserves the name PAC, probably approx- +imately correct construction. +3. Main results for the Gell-Mann matrix basis +In this section we prove Theorem 1.1. To reach this goal we consider the Boolean +cube +HN := {−1, 1}(N +2) × {−1, 1}(N +2) × {−1, 1}N−1, +for each N ≥ 2, and we will be reducing (1.3) to commutative Bohnenblust–Hille +inequality on Hn +N = {−1, 1}n(N2−1). Notice that in [VZ22] we already did this for +N = 2, and the reduction was to {−1, 1}3n. +For b ∈ {−1, 1} and 1 ≤ j < k ≤ N consider unit vectors, +α(b) +jk = (ej + bek)/ +√ +2, +β(b) +jk = (ej + biek)/ +√ +2. +These are b +� +N +2 -valued eigenvectors of Ajk, Bjk correspondingly. +Now consider density matrices, again for b ∈ {−1, 1} and 1 ≤ j < k ≤ N +A(b) +jk = |α(b) +jk ⟩⟨α(b) +jk | , +B(b) +jk = |β(b) +jk ⟩⟨β(b) +jk | . +Fix any point +(x, y, z) ∈ HN = {−1, 1}(N +2) × {−1, 1}(N +2) × {−1, 1}N−1 +with +x = (xjk)1≤j m + 1 we can +immediately see that �m+1 +j=1 tr(CmA +(xjk) +jk +) = 1 +2Γm(1 + 1 + · · · + 1 − (m + 1)) = 0. We +are left to consider the j < k ≤ m summation and the j ≤ m, k = m+1 summation. +The first one gives +�m +2 +� +Γm, while the second one gives 1 +2mΓm − 1 +2m2Γm. Altogether, +�m +2 +� +Γm + 1 +2mΓm − 1 +2m2Γm = 0 . + +12 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +1 +1 +· · · +1 +−m +0 +0 +· · · +0 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +m-many +Γm +1 +2Γm +−1 +2Γm +1−m +2 Γm +Figure 3.1. Collating tr[CmA(b) +jk ]’s and tr[CmB(b) +jk ]’s. In the upper tri- +angle, a value v in coordinate (j, k) means tr[CmA(b) +jk ] = tr[CmB(b) +jk ] = v +for any b. +For reference, the (unnormalized) definition of Cm is +recorded on the diagonal. +Now, the rest of ρ(x, y, z) is �N−1 +m=1 zm +1 +√ +2N Cm+ N−1 +2 1, a sum of orthogonal matrices. +Hence (3.4) follows from (3.5), (3.6), and this orthogonality. +□ +Now we are ready to prove Theorem 1.1. +Proof of Theorem 1.1. Let us normalize ρ as r(x, y, z) := 1 +3 +�N +2 +�−1ρ(x, y, z), so +tr +� +r(x, y, z) +� += 1 . +(3.7) +Now choosing any (⃗x, ⃗y, ⃗z) ∈ Hn +N with +⃗x = +� +x(1), . . . , x(n)� +, +⃗y = +� +y(1), . . . , y(n)� +, +⃗z = +� +z(1), . . . , z(n)� +, +and +� +x(j), y(j), z(j)� +∈ HN, +1 ≤ j ≤ n +we can consider +r(⃗x, ⃗y, ⃗z) = r +� +x(1), y(1), z(1)� +⊗ r +� +x(2), y(2), z(2)� +⊗ · · · ⊗ r +� +x(n), y(n), z(n)� +. +Recall that any GM observable A of degree at most d has the unique expansion +A = +� +α=(α1,...,αn)∈Λn +N +� +AαMα1 ⊗ · · · ⊗ Mαn + +NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY +13 +where {Mα}α∈ΛN = GM(N) and � +Aα = 0 if more than d matrices of Mαj, 1 ≤ j ≤ n +are not identity matrices. +By Lemma 3.1, for any α = (α1, . . . , αn) ∈ Λn +N with |{αj : Mαj ̸= 1}| := κ ≤ d, +(⃗x, ⃗y, ⃗z) �→ tr (Mα1 ⊗ · · · ⊗ Mαnr(⃗x, ⃗y, ⃗z)) +is a multi-affine monomial of degree-κ on the Boolean cube Hn +N = {−1, 1}n(N2−1) +with coefficient +�� +N/2 +3 +�N +2 +� +�κ +. +Note also that for different α ̸= α′ ∈ Λn +N, the resulting monomials on Hn +N are different. 
+Since the coefficients of this scalar polynomial are of the form +�� +N/2 +3 +�N +2 +� +�κ +� +Aα, +0 ≤ κ ≤ d . +Therefore the absolute values of those coefficients are at least +1 +� 3 +2(N2 − N) +�d| � +Aα| , +so that by commutative Bohnenblust–Hille inequality on Boolean cube as in [DMP] +� � +α +| � +Aα| +2d +d+1 +� d+1 +2d ≤ +� 3 +2(N2 − N) +�dBH≤d +{±1} +sup +(⃗x,⃗y,⃗z)∈Hn +N +|tr(A · r(⃗x, ⃗y, ⃗z)| , +On the other hand, by (3.7) +|tr(A · r(⃗x, ⃗y, ⃗z)| ≤ ∥A∥ . +All combined, we get +� � +α +| � +Aα| +2d +d+1 +� d+1 +2d ≤ +� 3 +2(N2 − N) +�dC +√d log d∥A∥ . +□ +4. Main results for Heisenberg–Weyl matrix basis +We collect first a few facts about X and Z. +Lemma 4.1. We have the following: +(1) {XℓZm : ℓ, m ∈ ZN} form a basis of MN(C). +(2) For all k, ℓ, m ∈ ZN: +(XℓZm)k = ω +1 +2k(k−1)ℓmXkℓZkm +and for all ℓ1, ℓ2, m1, m2 ∈ ZN: +Xℓ1Zm1Xℓ2Zm2 = ωℓ2m1−ℓ1m2Xℓ2Zm2Xℓ1Zm1. + +14 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +(3) If N is prime, then for any (0, 0) ̸= (ℓ, m) ∈ ZN × ZN, the eigenvalues of +XℓZm are {1, ω, . . . , ωN−1}. This is not the case if N is not prime. +Proof. +(1) Suppose that � +ℓ,m aℓ,mXℓZm = 0. For any j, k ∈ ZN, we have +� +ℓ,m +aℓ,m⟨XℓZmej, ej+k⟩ = +� +m +ak,mωjm = 0. +Since the Vandermonde matrix associated to (1, ω, . . . , ωN−1) is invertible, we +have ak,m = 0 for all k, m ∈ ZN. +(2) It follows immediately from the identity ZX = ωXZ which can be verified +directly: for all j ∈ ZN +ZXej = Zej+1 = ωj+1ej+1 = ωj+1Xej = ωXZej. +(3) Assume N to be prime and (ℓ, m) ̸= (0, 0). If ℓ = 0 and m ̸= 0, then the +eigenvalues of Zm are +{ωjm : j ∈ ZN} = {ωj : j ∈ ZN}, +since N is prime. If ℓ ̸= 0, then we may relabel the standard basis {ej : j ∈ +ZN} as {ejℓ : j ∈ ZN}. Consider the non-zero vectors +ζk := +� +j∈ZN +ω +1 +2 j(j−1)ℓm−jkejℓ, +k ∈ ZN. +A direct computation shows: for all k ∈ ZN +XℓZmζk = +� +j∈ZN +ω +1 +2j(j−1)ℓm−jk · ωjℓmXℓejℓ += +� +j∈ZN +ω +1 +2j(j+1)ℓm−jke(j+1)ℓ += +� +j∈ZN +ω +1 +2j(j−1)ℓm−jk+kejℓ += ωkζk. +If N is not prime, say N = N1N2 with N1, N2 > 1, then XN1 has 1 as +eigenvalue with multiplicity N1 > 1. So we do need N to be prime. +□ +Let us record the following observation as a lemma. +Lemma 4.2. Suppose that k ≥ 1, A, B are two unitary matrices such that Bk = 1, +AB = λBA with λ ∈ C and λ ̸= 1. Suppose that ξ is a non-zero vector such that +Bξ = µξ (µ ̸= 0 since µk = 1). Then +⟨ξ, Aξ⟩ = 0. + +NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY +15 +Proof. By assumption +µ⟨ξ, Aξ⟩ = ⟨ξ, ABξ⟩ = λ⟨ξ, BAξ⟩. +Since B∗ = Bk−1, B∗ξ = Bk−1ξ = µk−1ξ = µξ. Thus +µ⟨ξ, Aξ⟩ = λ⟨ξ, BAξ⟩ = λ⟨B∗ξ, Aξ⟩ = λµ⟨ξ, Aξ⟩. +Hence, µ(λ − 1)⟨ξ, Aξ⟩ = 0. This gives ⟨ξ, Aξ⟩ = 0 as µ(λ − 1) ̸= 0. +□ +Now we are ready to prove Theorem 1.4: +Proof of Theorem 1.4. Fix a prime number N ≥ 2. Recall that ω = e +2πi +N . Consider +the generator set of ZN × ZN +ΣN := {(1, 0), (1, 1), . . ., (1, N − 1), (0, 1)}. +For any z ∈ ΩN and (ℓ, m) ∈ ΣN, we denote by eℓ,m +z +the unit eigenvector of XℓZm +corresponding to the eigenvalue z. For any vector ⃗ω ∈ (ΩN)(N+1)n of the form +⃗ω = (⃗ωℓ,m)(ℓ,m)∈ΣN, +⃗ωℓ,m = (ωℓ,m +1 +, . . . , ωℓ,m +n ) ∈ (ΩN)(N+1)n, +(4.1) +we consider the matrix +ρ(⃗ω) := ρ1(⃗ω) ⊗ · · · ⊗ ρn(⃗ω) +where +ρk(⃗ω) := +1 +N + 1 +� +(ℓ,m)∈ΣN +|eℓ,m +ωℓ,m +k ⟩⟨eℓ,m +ωℓ,m +k | . +Then each ρk(⃗ω) is a density matrix and so is ρ(⃗ω). +Suppose that (ℓ, m) ∈ ΣN and (ℓ′, m′) /∈ {(kℓ, km) : (ℓ, m) ∈ ΣN}, then by Lemma +4.1 +Xℓ′Zm′XℓZm = ωℓm′−ℓ′mXℓZmXℓ′Zm′. +From our choice ωℓm′−ℓ′m ̸= 1. 
By Lemmas 4.1 and 4.2 +tr[Xℓ′Zm′ |eℓ,m +z +⟩⟨eℓ,m +z +|] = ⟨Xℓ′Zm′eℓ,m +z +, eℓ,m +z +⟩ = 0, +z ∈ ΩN. +Suppose that (ℓ, m) ∈ ΣN and 1 ≤ k ≤ N − 1. Then by Lemma 4.1 +tr[XkℓZkm |eℓ,m +z +⟩⟨eℓ,m +z +|] = ω− 1 +2 k(k−1)ℓm⟨(XℓZm)keℓ,m +z +, eℓ,m +z +⟩ += ω− 1 +2 k(k−1)ℓmzk, +z ∈ ΩN. + +16 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +All combined, for all 1 ≤ k ≤ N − 1, (ℓ, m) ∈ ΣN and 1 ≤ i ≤ n we get +tr[XkℓZkmρi(⃗ω)] = +1 +N + 1 +� +(ℓ′,m′)∈ΣN +⟨eℓ′,m′ +ωℓ′,m′ +i +, XkℓZkmeℓ′,m′ +ωℓ′,m′ +i +⟩ += +1 +N + 1⟨eℓ,m +ωℓ,m +i +, XkℓZkmeℓ,m +ωℓ,m +i +⟩ += +1 +N + 1ω− 1 +2 k(k−1)ℓm(ωℓ,m +i +)k. +Recall that any degree-d polynomial in MN(C)⊗n is a linear combination of mono- +mials +A(⃗k, ⃗ℓ, ⃗m;⃗i) := · · · ⊗ Xk1ℓ1Zk1m1 ⊗ · · · ⊗ XkκℓκZkκmκ ⊗ · · · +where +• ⃗k = (k1, . . . , kκ) ∈ {1, . . . , N − 1}κ with 0 ≤ �κ +j=1 kj ≤ d; +• ⃗ℓ = (ℓ1, . . . , ℓκ), ⃗m = (m1, . . . , mκ) with each (ℓj, mj) ∈ ΣN; +• ⃗i = (i1, . . . , iκ) with 1 ≤ i1 < · · · < iκ ≤ n; +• XkjℓjZkjmj appears in the ij-th place, 1 ≤ j ≤ κ, and all the other n − κ +elements in the tensor product are the identity matrices 1. +So for any ⃗ω ∈ (ΩN)(N+1)n of the form (4.1) we have from the above discussion that +tr[A(⃗k, ⃗ℓ, ⃗m;⃗i)ρ(⃗ω)] = +κ +� +j=1 +tr[XkjℓjZkjmjρij(⃗ω)] += ω− 1 +2 +�κ +j=1 kj(kj−1)ℓjmj +(N + 1)κ +(ωℓ1,m1 +i1 +)k1 · · · (ωℓκ,mκ +iκ +)kκ. +So ⃗ω �→ tr[A(⃗k, ⃗ℓ, ⃗m;⃗i)ρ(⃗ω)] is a polynomial on (ΩN)(N+1)n of degree at most �κ +j=1 kj ≤ +d. +Now for general polynomial A ∈ MN(C)⊗n of degree-d: +A = +� +⃗k,⃗ℓ,⃗m,⃗i +c(⃗k, ⃗ℓ, ⃗m;⃗i)A(⃗k, ⃗ℓ, ⃗m;⃗i) +where the sum runs over the above (⃗k, ⃗ℓ, ⃗m;⃗i). This is the Fourier expansion of A +and each c(⃗k, ⃗ℓ, ⃗m;⃗i) ∈ C is the Fourier coefficient. So +∥ �A∥p = + + � +⃗k,⃗ℓ,⃗m,⃗i +|c(⃗k, ⃗ℓ, ⃗m;⃗i)|p + + +1/p +. + +NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY +17 +To each A we assign the function fA on (ΩN)(N+1)n given by +fA(⃗ω) = tr[Aρ(⃗ω)] += +� +⃗k,⃗ℓ,⃗m,⃗i +ω− 1 +2 +�κ +j=1 kj(kj−1)ℓjmjc(⃗k, ⃗ℓ, ⃗m;⃗i) +(N + 1)κ +(ωℓ1,m1 +i1 +)k1 · · · (ωℓκ,mκ +iκ +)kκ. +Note that this is the Fourier expansion of fA since the monomials (ωℓ1,m1 +i1 +)k1 · · · (ωℓκ,mκ +iκ +)kκ +differ for different (⃗k, ⃗ℓ, ⃗m,⃗i). Therefore, +∥� +fA∥p = + + � +⃗k,⃗ℓ,⃗m,⃗i +����� +c(⃗k, ⃗ℓ, ⃗m;⃗i) +(N + 1)κ +����� +p + +1/p +≥ +1 +(N + 1)d + + � +⃗k,⃗ℓ,⃗m,⃗i +|c(⃗k, ⃗ℓ, ⃗m;⃗i)|p + + +1/p += +1 +(N + 1)d∥ �A∥p. +So if the Bohnenblust–Hille inequalities hold for cyclic group ZN for N prime, then +∥� +fA∥ 2d +d+1 ≤ C(d)∥fA∥L∞((ΩN )(N+1)n) +for some C(d) > 0. All combined, we obtain +∥ �A∥ 2d +d+1 ≤ (N + 1)d∥� +fA∥ 2d +d+1 ≤ (N + 1)dC(d)∥fA∥L∞((ΩN )(N+1)n) ≤ (N + 1)dC(d)∥A∥. +□ +5. Bohnenblust–Hille inequalities for cyclic groups: the difficulty +Let us recall the reader that �ΩN denotes the convex hull of cyclic group ΩN = +(1, ω, . . . ωN−1). In this section we sketch the proof Theorem 1.2. +We wish to prove the following theorem: +Theorem 5.1. Let f = � +α bαzα be an analytic polynomial of n complex variables +z = (z1, . . . , zn) of global degree at most d and such that in each variable zi its degree +is at most N − 1. Then +� � +|cα| +2d +d+1 +� d+1 +2d ≤ C(d) +sup +z∈(�ΩN)n +|f(z)| . + +18 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +Here C(d) is as in [DGMS], in particular, it is sub-exponential. +The proof of +Theorem 5.1 follows closely the proof of [DMP], [BPS] and [DGMS] and will be +recorded elsewhere. +Now we give a sketch of this proof. We repeat Theorem 8.10 and Remark 8.16 of +[DGMS]. 
As a result we get hypercontractive inequalities for polynomials of arbitrary +number n of variables zi such that polynomials have degree at most N − 1 in each +variable and such that in Remark 8.16 the integration in both parts is not over Tn +but over (ΩN)n. The explanation is simple: for polynomials of degree N − 1 in each +variable we can use integration over (ΩN)n to calculate its L2 norm. This allows us +to have the hypercontractivity constant on page 265 of [DGMS] to be as in this page +HC 2k +k+1,2 ≤ 2 +4 +3k−1 +but the norms in hypercontractivity estimate use again integration over (ΩN)n rather +than over Tn. +The proof of Bohnenblust–Hille inequality uses several ingredients: a) algebraic +calculations and Blei’s inequality, b) hypercontractivity or more precisely some mo- +ment comparison estimates, c) polarization. Of course a) will be the same, b) is the +same as we just observed. +However, the polarization argument on pages 67–68 of [DGMS] one needs to be +careful. One can repeat the proof with xi (or x, y) being vectors in (ΩN)n, complex +variables (w1, w2) to be from (ΩN)2 instead of T2, but |ϕ(w1n1x+w2n2y, . . . , w1n1x+ +w2n2y)| now will have estimate maxu∈(�ΩN) |ϕ(u, . . . , u)|(n1 + n2)m (in our case we +denote m by d). +This is the sketch of the proof of Theorem 5.1. +However, unlike the case when the maxDn |f(z)| by maxTn |f(z)| estimate is obvi- +ous by maximum principle, we cannot replace max(�ΩN)n |f(z)| by max(ΩN)n |f(z)| by +any obvious consideration. +Remark 5.2. In application to matrix Bohnenblust–Hille inequality in Heisenberg– +Weyl basis, which we considered above, we wanted to replace (�ΩN)n with (ΩN)n, but +we cannot do that directly because (�ΩN)n is much bigger than (ΩN)n and we do not +know the inequality +sup +⃗ζ∈(�ΩN)(n +|fA(⃗ζ)| ≤ B(d) +sup +⃗γ∈(ΩN)n |fA(⃗γ)| +for polynomials of degree at most d of z = (z1, . . . , zn) such that in each zi the degree +is at most N − 1. One exception is N = 2, when polynomials are multi-affine and +the previous inequality does hold just by convexity in each argument. +But for N ≥ 3 this reasoning flops by the lack of convexity. This lack of convexity +is our main difficulty, and for some time we will struggle with this difficulty. + +NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY +19 +Question. Is it true that the following inequality holds with constant independent +of n +sup +z∈(�ΩN)n +|f(z)| ≤ B(d) +sup +⃗w∈(ΩN)n |f(⃗ω)|, +for polynomials f of n complex variables z = (z1, . . . , zn) of global degree at most d +of such that in each zi the degree is at most N − 1? +6. Bohnenblust–Hille inequalities for cyclic groups: a partial +remedy +Let f(z) be an analytic polynomial of total degree d of variables (z1, . . . , zn) such +that in each zi its degree is at most N − 1. We should think that +n >> d >> N . +We would like to compare ∥f∥L∞(Tn), ∥f∥L∞(�Ωn +N), and ∥f∥L∞(Ωn +N). We wish for the +estimates independent of n. Obviously +∥f∥L∞(Ωn +N) ≤ ∥f∥L∞(�Ωn +N) ≤ ∥f∥L∞(Dn) = ∥f∥L∞(Tn) . +The converse estimate with constant 1 is impossible, we show this now. +6.1. Constant cannot be 1. Let N = 3. +Lemma 6.1. Let v1, v2, v3 be linear independent vectors in C3. Let C be their ab- +solute convex hull. Then v ∈ C if and only if for every vector u we have |(u, v)| ≤ +maxi=1,2,3 |(u, vi)|. +This is just the Hahn–Banach theorem. +Proposition 6.2. There exists a polynomial of one complex variable p(z) = a0 + +a1z + a2z2, z ∈ D, such that +∥p∥L∞(�Ω3) > ∥p∥L∞(Ω3) . +Proof. 
Consider three vectors in C3: v1 = (1, 1, 1), v2 = (1, ω, ω2), v3 = (1, ω2, ω4) = +(1, ω2, ω), where ω = e +2πi +3 . +Consider vector v = (1, z, z2) with some z ∈ �Ω3. If for every u = (a0, a1, a2), +we have |(u, v)| ≤ maxi=1,2,3 |(u, vi)| then v is in absolute convex combination of +(v1, v2, v3) and so there exist convex coefficients p1, p2, p3 and α1, α2, α3 in T such +that +v = α1p1v1 + α2p2v2 + α3p3v3. +In particular α1p1 + α2p2 + α3p3 = 1, which means that αi = 1. Hence, +z = p1 + p2ω + p3ω2, z2 = p1 + p2ω2 + p3ω . + +20 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +Then +p2 +1 + 2p2p3 + (p2 +2 + 2p1p3)ω2 + (p2 +3 + 2p1p2)ω = p1 + p2ω2 + p3ω . +Two convex combinations (in the LHS we also have a convex combination) should +have the same coefficients. We get +p2 +1 + 2p2p3 = p1, p2 +2 + 2p1p3 = p2, p2 +3 + 2p1p2 = p3 . +There can be only finitely many such (p1, p2, p3). Thus, choosing z = p1 +p2ω +p3ω2 +with a triple different from those finitely many ones, we get that v = (1, z, z2) is not +in an absolute convex combination of v1, v2, v3. Then Lemma 6.1 shows that there +exists u = (�a0, �a1, �a2) with |(v, u)| > maxi=1,2,3 |(vi, u). +This is the same as to say that |p(z)| > maxk=0,1,2 |p(ωk)|. +□ +Here is a concrete example showing that the constant cannot be 1. Let ω := e +2πi +3 . +Consider the polynomial +p(z) := p(1)(z − ω)(z − ω2) +(1 − ω)(1 − ω2) + p(ω) (z − 1)(z − ω2) +(ω − 1)(ω − ω2) + p(ω2) (z − 1)(z − ω) +(ω2 − 1)(ω2 − ω) +with p(1), p(ω), p(ω2) to be chosen later. Put z0 := 1+ω +2 +∈ �Ω3. Then +|z0 − 1| = |z0 − ω| = +√ +3 +2 , +|z0 − ω2| = 3 +2. +Now we choose p(1), p(ω), p(ω2) to be complex numbers of modules 1 such that +p(1)(z0 − ω)(z0 − ω2) +(1 − ω)(1 − ω2) = +���� +(z0 − ω)(z0 − ω2) +(1 − ω)(1 − ω2) +���� = +3 +√ +3 +4 +3 += +√ +3 +4 , +p(ω)(z0 − 1)(z0 − ω2) +(ω − 1)(ω − ω2) = +���� +(z0 − 1)(z0 − ω2) +(ω − 1)(ω − ω2) +���� = +3 +√ +3 +4 +3 += +√ +3 +4 , +p(ω2) (z0 − 1)(z0 − ω) +(ω2 − 1)(ω2 − ω) = +���� +(z0 − 1)(z0 − ω) +(ω2 − 1)(ω2 − ω) +���� = +3 +4 +3 = 1 +4. +Therefore, this choice of p satisfies +∥p∥L∞(�Ω3) ≥ |p(z0)| = +√ +3 +4 + +√ +3 +4 + 1 +4 = 1 + 2 +√ +3 +4 +> 1 = ∥p∥L∞(Ω3). +Question. Is there a constant independent of n (but dependent on d) such that for +analytic polynomials of global degree d and degree ≤ N in each variable zi has the +estimate +∥f∥L∞(Tn) ≤ C(d)∥f∥L∞(Ωn +N ) ? +We believe that there can be a counterexample. + +NONCOMMUTATIVE BOHNENBLUST–HILLE INEQUALITY +21 +6.2. A partial solution. In this sequel, we partially answer this latter question. +But we will need to make some concessions to answer affirmatively. The strategy +will be to reverse the argument in Section 6.1. We start with the following key matrix +lemma. +Lemma 6.3. Fix N ≥ 2, put ξk = e +2πik +2N−1. There exists ε0 = ε0(N) ∈ (0, 1) such that, +for all z ∈ C with |z| ≤ ε0, one can find pk = pk(z) ≥ 0, 0 ≤ k ≤ 2N − 2 satisfying +zm = +2N−2 +� +k=0 +pkξm +k , +0 ≤ m ≤ N − 1. +(6.1) +In particular, when m = 0, �2N−2 +k=0 +pk = 1. +Proof. Put θ = +2π +2N−1. The equations (6.1) are equivalent to (pk’s are non-negative +and thus real) + + + + + + + +�2N−2 +k=0 +pk = 1 +�2N−2 +k=0 +pk cos(kmθ) = ℜzm +1 ≤ m ≤ N − 1 +�2N−2 +k=0 +pk sin(kmθ) = ℑzm +1 ≤ m ≤ N − 1 +. +(6.2) +Or equivalently, we want to find a solution to DN⃗p = ⃗vz with each entry of ⃗p being +non-negative. Here DN is a (2N − 1) × (2N − 1) real matrix given by +DN = + + +1 +1 +1 +· · · +1 +1 +cos(θ) +cos(2θ) +· · · +cos((2N − 2)θ) +... +... +... +... 
+1 +cos((N − 1)θ) +cos(2(N − 1)θ) +· · · +cos((2N − 2)(N − 1)θ) +1 +sin(θ) +sin(2θ) +· · · +sin((2N − 2)θ) +... +... +... +... +1 +sin((N − 1)θ) +sin(2(N − 1)θ) +· · · +sin((2N − 2)(N − 1)θ) + + +, +and ⃗vz = (1, ℜz, . . . , ℜzN−1, ℑz, . . . , ℑzN−1)T ∈ R2N−1. +Note first that DN is non-singular. +In fact, assume that DN⃗x = ⃗0 with ⃗x = +(x0, x1, . . . , x2N−2)T ∈ R2N−1. Then +2N−2 +� +k=0 +xkξm +k = 0, +0 ≤ m ≤ N − 1. +Since each xk is real and ξ2N−1 = 1, we have by taking conjugation that +2N−2 +� +k=0 +xkξm +k = 0, +N ≤ m ≤ 2N − 1. + +22 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +Altogether we get +2N−2 +� +k=0 +xkξm +k = 0, +0 ≤ m ≤ 2N − 2. +Since the Vandermonde matrix associated to (1, ξ, . . . , ξ2N−2) has determinant +� +0≤j 0 satisfies �2N−2 +k=0 +p(j) +k += 1 for any 1 ≤ j ≤ n. Then we have +|f(z1, . . . , zn)| = +����� +d +� +α1,...,αn=0 +aα1,...,αnzα1 +1 · · · zαn +n +����� += +����� +d +� +α1,...,αn=0 +2N−2 +� +k1,...,kn=0 +aα1,...,αnp(1) +k1 · · · p(n) +kn ξα1 +k1 · · · ξαn +kn +����� +≤ +2N−2 +� +k1,...,kn=0 +p(1) +k1 · · · p(n) +kn +����� +d +� +α1,...,αn=0 +aα1,...,αnξα1 +k1 · · ·ξαn +kn +����� += +2N−2 +� +k1,...,kn=0 +p(1) +k1 · · · p(n) +kn |P(ξk1, . . . , ξkn)| +≤ +2N−2 +� +k1,...,kn=0 +p(1) +k1 · · · p(n) +kn +sup +z∈(Ω2N−1)n |f(z)| += +sup +z∈(Ω2N−1)n |f(z)| . +So we have shown that +sup +∥z∥∞≤ε0 +|f(z)| ≤ +sup +z∈(Ω2N−1)n |f(z)| . +(6.3) +Now consider +P(z) := f(ε0z1, . . . , ε0zn) = +� +α +ε|α| +0 aαzα. +Then we have by Theorem 1.2 that +� � +α +|aα| +2d +d+1 +� d+1 +2d ≤ ε−d +0 +� � +α +|ε|α| +0 aα| +2d +d+1 +� d+1 +2d ≤ ε−d +0 C(d) +sup +z∈(�Ω2N−1)n +|P(z)|. + +24 +JOSEPH SLOTE, ALEXANDER VOLBERG, AND HAONAN ZHANG +By (6.3), we get +sup +z∈(�Ω2N−1)n +|P(z)| ≤ +sup +∥z∥∞≤1 +|P(z)| = +sup +∥z∥∞≤ε0 +|f(z)| ≤ +sup +z∈(Ω2N−1)n |f(z)| . +This completes the proof. +□ +References +[AA] S. Aaranson, A. Ambainis, The need for structure in quantum speed-up, Theory of computing, +Volume 10 (6), 2014, pp. 133–166 +[AEHK] Ali Asadian, Paul Erker, Marcus Huber, and Claude Kl¨ockl, Heisenberg-Weyl Observables: +Bloch vectors in phase space.” Physical Review A 94, no. 1 (2016): 010301. +[BPS] F. Bayart, D. Pellegrino, J. B. Seoane-Sep´ulveda. The Bohr radius of the n-dimensional +polydisk is equivalent to +� +(log n)/n, Advances in Mathematics 264 (2014) 726–746. +[BK] Reinhold A. Bertlmann, Philipp Krammer, Bloch vectors for qudits, arXiv: 0806.1174v1. +[BH] H. F. Bohnenblust and E. Hille, On the absolute convergence of Dirichlet series, Ann. of Math. +32 (1931), no. 3, 600–622. +[Bou02] Jean Bourgain. On the distribution of the Fourier spectrum of boolean functions. Israel +Journal of Mathematics, 131(1):269–276, 2002. +[CHP] S. Chen, H-Y. Huang, J. Preskill, Learning to efficiently predict arbitrary quantum evolu- +tions, Preprint, September 15, 2022, pp. 1–47. +[DFOOS] Defant Andreas, Frerick Leonhard, Ortega-Cerda Joaquim, Ounaıes Myriam, and Seip +Kristian. 2011. The Bohnenblust–Hille inequality for homogeneous polynomials is hypercon- +tractive. Ann. of Math., 174(1), 485–497. +[DMP] A. Defant, M. Mastylo, A. P´erez, On the Fourier spectrum of functions on boolean cubes. +Math. Ann. 374 (2019), no. 1-2, 653–680. +[DFKO07] Irit Dinur, Ehud Friedgut, Guy Kindler, and Ryan O’Donnell. On the Fourier tails of +bounded functions over the discrete cube. Israel Journal of Mathematics, 160(1):389–412, 2007. +[DGMS] A. Defant, D. Garcia, M. Maestre, P. Sevilla-Peris, Dirichlet Series and Holomorphic +functions in High Dimensions. New mathematical monographs, v. 
37.
+[RWZ22] Cambyse Rouzé, Melchior Wirth, and Haonan Zhang. Quantum Talagrand, KKL and Friedgut's theorems and the learnability of quantum Boolean functions. arXiv preprint, arXiv:2209.07279, 2022.
+[HCP22] Hsin-Yuan Huang, Sitan Chen, and John Preskill. Learning to predict arbitrary quantum processes. arXiv preprint, arXiv:2210.14894, 2022.
+[EI22] Alexandros Eskenazis and Paata Ivanisvili. Learning low-degree functions from a logarithmic number of random queries. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pages 203–207, 2022.
+[VZ22] A. Volberg, H. Zhang, Noncommutative Bohnenblust–Hille inequality. arXiv:2210.14468, pp. 1–18.
+(J.S.) Department of Computing & Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
+Email address: jslote@caltech.edu
+(A.V.) Department of Mathematics, MSU, East Lansing, MI 48823, USA and Hausdorff Center of Mathematics
+Email address: volberg@math.msu.edu
+(H.Z.) Department of Mathematics, University of California, Irvine, CA 92617, USA
+Email address: haonanzhangmath@gmail.com