text | label | id_
---|---|---|
which is an expression similar to that for the electrostatic energy, and {{formula:b6ca7f18-2239-4bb2-b9f1-497796192f46}} is now a length instead of a mass. This expression is still logarithmically divergent as {{formula:31d0366f-df6b-4e53-88c4-c9da62c9142d}} or {{formula:80148cb1-3396-4c5e-901a-12b307c9a07b}} . If we now set {{formula:fa017be2-e3ea-40bf-9262-9511c6838ca2}} , we get zero for the electromagnetic mass. Thus the expression is quite meaningless. In fact, for {{formula:5a8306ae-c004-498f-b88c-d043e47eb47b}} in Eq. (REF ) we also get zero, which is possible since the mass is virtual. That is the main reason why our attempts to use the “standard" result failed. As shown by Milonni in {{cite:fbd829718ae8a4e437e0c0c9d0c17e84353d7606}}, the observed mass is in fact
{{formula:f0442a1b-7507-48df-ae28-b2f428ea7d1f}}
|
d
|
5b6688608c3f8ee645d00172002658da
|
To employ the internal motion relations, recent works {{cite:06eba37f0267426db74e5d406251bd9f25f2c6ae}}, {{cite:940bf02d315ce8e6444f10f3953341de047b0ac8}}, {{cite:e468027e07b67b7152bb75d096feec726e1e57d4}}, {{cite:19a2e37618c473d696b5b2456d0fdfa533e8756f}} built spatial graphs over body joints in each frame; however, such single-scale modeling cannot easily capture functional groups of joints or high-order relations. For example, while walking, multiple joints on the arms and legs collaborate. The modeling therefore requires representations at multiple body scales that are also adaptive to the input data.
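To make the multi-scale idea concrete, here is a minimal numpy sketch (our illustration, with a hypothetical five-joint skeleton and hand-picked functional groups, not the cited works' code) of pooling a joint-level graph into a coarser part-level graph:

```python
import numpy as np

J = 5  # joints: 0 hip, 1 knee, 2 ankle, 3 shoulder, 4 elbow (illustrative)
bones = [(0, 1), (1, 2), (0, 3), (3, 4)]

A = np.eye(J)                      # joint-scale adjacency with self-loops
for i, j in bones:
    A[i, j] = A[j, i] = 1.0

# Part-level grouping: pooling matrix P maps joints to 2 functional groups.
groups = {0: [0, 1, 2],            # "leg" group (hypothetical)
          1: [0, 3, 4]}            # "arm/torso" group (hypothetical)
P = np.zeros((2, J))
for g, joints in groups.items():
    P[g, joints] = 1.0 / len(joints)

A_part = P @ A @ P.T               # coarse-scale adjacency between parts
print(A_part)
```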
|
i
|
e36f3c4e401f7bb1dd063ab296a0e996
|
Settings. We test our methods on five benchmark datasets for graph classification, including two real-world social network datasets, IMDB and COLLAB {{cite:aadaf1a1e35b6e63ba916cea26bd8967ad916baa}}, and three other standard graph classification benchmark datasets, MNIST, CIFAR10 and PROTEINS {{cite:219a294aaaf6a3cfd0c2b2bc9f3b9ce0a690e45f}}, {{cite:81800e65508e3d9593c10a0ef69bf8ffc4020322}}, {{cite:22ad636ffee2d3d2272db1cef2ad903bebf9d459}}, {{cite:bfd40ceab57f79afb142924f667f16ed71020db6}}, {{cite:1207ef759619953627af1ecd238335c8c666e28c}}. As we focus on the limited-training-data regime, we use 10 random splits for all experiments with the training/validation/testing ratio {{formula:4c8ca9dd-10c5-4620-be1e-542ac3590a65}} . Following {{cite:d15c81abf8d20e1d1a2daa3090b8efc045162052}}, we use LBFGS as the optimizer for all non-GNN methods due to its high efficiency on strongly convex problems. We adopt the Adam optimizer {{cite:51d5d53a93942e2a62592c159ebd28db387236c9}} for GNNs, following the implementation of the PyTorch Geometric library benchmarking examples {{cite:6abc1a86575370f29c1288727d18b66ec5651cd7}}. We compare our unlearning approach (Figure REF (a)) with a naive application of {{cite:d15c81abf8d20e1d1a2daa3090b8efc045162052}} (Figure REF (b)) as well as complete retraining. The tested backbone graph learning models include GST, GFT, linear-GST (i.e., GST without nonlinear activations) and GIN {{cite:d86fe0e127fbedd1ec69f9d7b5d5b619c634218d}}. For approximate unlearning methods we use {{formula:782d040b-ce22-47d6-b1e5-8cd714b74888}} and privacy noise {{formula:1b808979-69fb-4b1d-aa97-b7ec565a476f}} as the default parameters unless specified otherwise. Additional details can be found in Appendix .
|
r
|
db598fa4db81ee4b539ac93e00c62f76
|
Context features are aggregated from neighbour pixels and disparities with 3D convolution networks to predict a disparity probability volume. Following GC-Net, Chang {{cite:fa841770149391f55810fa43c3a6791c50fb9a8a}} proposed the pyramid stereo matching network (PSMNet) with a spatial pyramid pooling module and stacked 3D hourglass networks for cost volume refinement. Yu {{cite:f6c0170d931e6f82029e8d11fb3021ea66b29191}} proposed to generate and select multiple cost aggregation
proposals. Zhong {{cite:7d23b315f8756056f7970f8c1740b277fe11671b}} proposed a self-adaptive recurrent stereo model to tackle open-world video data.
|
m
|
c821bf9959aa0ad49e281d89453a2d8b
|
In this section, we report results on the dataset described in Section REF . In all experiments, we use batch normalization and weight decay as regularization. The evaluation metric we report is word error rate (WER). To assess the effectiveness of our proposed adversarial STT training method, we compare it to the standard DS2-based STT system {{cite:be93006a08f0507d8a34a03a4cfe0c79dbc783bf}}. Both models are trained on the Common Voice dataset as described in Section REF .
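For reference, a minimal sketch (ours, not the authors' evaluation code) of WER computed via word-level Levenshtein edit distance:

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / len(ref)."""
    r, h = ref.split(), hyp.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the cat sat", "the cat sat down"))  # 1 insertion / 3 words ~ 0.33
```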
|
r
|
3f0e905ffcdfeb123d990600647cefd5
|
Fully-commuting (FC) grouping: This approach partitions {{formula:81a973b0-496a-46ac-945d-6b58b490beb6}} into {{formula:0e8ebe31-03a9-4035-a350-62c3c1eeb161}}
fragments containing commuting Pauli products: if {{formula:54212ad4-73fd-48c8-a4ec-55964321df49}}
then {{formula:428a0f0f-d127-40b3-92e5-88e51a9d9f25}} . This commutativity condition guarantees that
{{formula:0040efdd-0618-438b-b26e-7c8f27a655be}} can be rotated into a linear combination of {{formula:799c240b-c2b6-41ee-8ad1-655e9a79ed20}} operator products
by a sequence of Clifford group transformations. {{cite:7a90b6c72db4ddddf6e1f7dd68f6670faef9af2f}}, {{cite:41d18c032e4b03a77d490dc210ed7b51e08644d9}}
We will refer to it as the fully-commuting qubit partitioning scheme, mainly because it was developed
for the VQE measurement problem, where “fully" was added to its name to distinguish it from the more restrictive
qubit-wise commuting scheme.{{cite:736a3ba5004bf8e3c22eb86919204f7d52a4c16b}}
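For illustration, a minimal sketch (ours) of the pairwise test and a greedy fully-commuting grouping; two Pauli strings commute iff they disagree on an even number of qubits where both act non-trivially:

```python
def commute(p: str, q: str) -> bool:
    """Pauli strings commute iff they anticommute on an even number of sites."""
    anti = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return anti % 2 == 0

def fc_groups(paulis):
    """Greedy partition into fully-commuting fragments."""
    groups = []
    for p in paulis:
        for g in groups:
            if all(commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

H = ["ZZII", "XXII", "IIZZ", "ZIZI", "XIXI"]
print(fc_groups(H))  # [['ZZII', 'XXII', 'IIZZ'], ['ZIZI', 'XIXI']]
```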
|
m
|
a109c7d21fb336bb428d900778ffb41c
|
For 2D databases, we compared the proposed ADL to the improved BM3D {{cite:7949fb83ead2df325f0005a016359973ee98beb8}}https://pypi.org/project/bm3d/ as the conventional model-based method, and to three deep learning-based methods: the Dynamic Residual Attention Network (DRAN) {{cite:0f2590818917e0bbe84b9399dcbaf3d6bc24ac4d}}, DnCNN-S {{cite:d600f55b6e9fe9ed224834cb9eb10749f0eeb6fa}}, and the recently developed SwinIR {{cite:41fca42e0b07aeafa203312e5db3b74920eb23ae}}. We compared the 3D version of our model to BM4D {{cite:625d2eeaa14d5d82ab506cc27960ecdd23ac919b}}, which is widely used for denoising 3D biomedical image data.
|
r
|
1c950e24bab460a4874a92fb3eede78e
|
By the Weierstrass extreme value theorem {{cite:048acb639fcfa906d4530afa508e196738f848b6}}, {{formula:d2403a0a-e65c-4a5e-b575-d5fea965d150}} is well-defined and strictly positive, which immediately gives Eq. (REF ).
|
d
|
aafb3f1e3964fa73bfa9c1646cb773d8
|
{{cite:0d0c664fec4d984d16616c517e1fcac3e194fcef}} and {{cite:f313a06f8192dfd6a3629c3cabeefa4f3e5df6fd}} used dilated convolutions to increase the receptive field while maintaining the number of parameters. SegNet {{cite:3921e459c55dab0b79771451b0da36a8f86a3bc9}} utilizes a
small network structure and skip connections to achieve improved FPS. {{cite:33c5ebcdff27ccb0754bbd441929fc9bd6634b55}}, {{cite:d304910c960caef06a8b7176a3cf04477bea9ae3}} and {{cite:07b093e561077613789e502fff790f12328905c9}} proposed unique approaches to tackle the real-time semantic segmentation problem.
|
i
|
7b437e15c4e525e418f97bd3071882b8
|
We introduced a unifying model of communication as reward design {{cite:bdc825d7cea08c886fbe613bb3bb47754fddf547}} to explain humans' use of instructions and descriptions, allowing pragmatic inference of their reward functions—a critical capability for value alignment {{cite:0daea296330cc508203abfe18256393683da168d}}, {{cite:cddd7849424ea6dbf98be82db2a499b8b1bd93d7}}, {{cite:00ee1721ae12b3c15ce3dfde6526a487b0ced25f}}. Analyses show that instructions are optimal at short horizons, but descriptive language affords much stronger generalization. This allows our pragmatic listener to perform inverse reward design {{cite:d8ef1f4c64cd241bc99f491c36696d899c1bd06e}} to jointly infer the speaker's horizon and reward function, reducing the risk of model mis-specification in pragmatic inference {{cite:d70ae3222d28627a65b9bf92900bf5e149336315}}. Finally, our behavioral experiment shows that humans follow our theoretical predictions, demonstrating the benefits of pragmatic inference for value alignment and for accelerating traditional reinforcement learning.
|
d
|
7506a8977b21f5539b4ef77deb11d58e
|
The standard deterministic SIS model {{cite:90ec8dfb73e41614a6f5a70723e11ff536485cbb}}, {{cite:1c44154a79452d4adb1218e782833c44ee1d03dc}}, {{cite:9eff1de9a8890e9101794480d06e13907490b224}}, {{cite:adc4436868e25c51c9b8d321c5110eefcc599d16}} exhibits an epidemic threshold below which the pathogen will go extinct and above which the pathogen will reach an endemic steady-state solution {{cite:90ec8dfb73e41614a6f5a70723e11ff536485cbb}}, {{cite:1c44154a79452d4adb1218e782833c44ee1d03dc}}. More complicated `deterministic' models have been developed, such as pair-approximations models {{cite:305990344d13731af869be390823998a178c4798}}, {{cite:b70093202522efe1a5bb8f6659ebe71006323c6c}}, {{cite:5d41784e8058fea49d550e75b3a545d57276bc62}}, {{cite:68f1b00a3f91bc22e1a9b3376806bfcfc717abd8}}, {{cite:a80b4d4a03662673ea601f45a2448ea61ad53d05}}, {{cite:d5b3bf41c9319ec6134cd78a26e452908a7ce0ce}}, {{cite:fe997f5a19a1fb568348f9ad259b12d504a58a40}}, in which this threshold behaviour is also observed {{cite:5d41784e8058fea49d550e75b3a545d57276bc62}}, {{cite:d5b3bf41c9319ec6134cd78a26e452908a7ce0ce}}. However, no steady-state solution exists in the stochastic SIS model, making it hard to relate the deterministic and stochastic models in finite populations.
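As a concrete reference for this threshold behaviour, a minimal sketch (ours) of the standard deterministic SIS dynamics dI/dt = beta*I*(1-I) - gamma*I, whose endemic state is non-zero only when R0 = beta/gamma > 1:

```python
def sis(beta, gamma, i0=0.01, dt=0.01, steps=20000):
    """Euler integration of dI/dt = beta*I*(1-I) - gamma*I."""
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1.0 - i) - gamma * i)
    return i

# Below threshold (R0 = 0.5): extinction; above (R0 = 2): endemic state 1 - 1/R0.
print(sis(beta=0.5, gamma=1.0))  # -> ~0
print(sis(beta=2.0, gamma=1.0))  # -> ~0.5
```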
|
d
|
cfa16b701c9fb7a6c901b34439156b82
|
Remark.
Our standard experimental setting follows that used in {{cite:edd8b70ed3e0489577c1311537be0f7e108d7ced}}, {{cite:d3baa85b9679d9bf8dec30b53487739f55ca5ab4}}. In this setting, for questions tagged with multiple concepts (in the ASSISTments2009 dataset), a single learner response is repeated multiple times, once for each concept. Other works used different experimental settings for these questions: in {{cite:76c649725fde81c5010dcfb6efc8543449d207e7}}, the authors removed such questions and as a result, DKT's performance dropped to 0.71. In {{cite:619f553df1f9d7969bdd7c71e8352115f0018dcb}}, the authors built new concepts for each combination of co-occurring single concepts and as a result, DKT's performance dropped to 0.73. Therefore, we also use an alternative experimental setting on the ASSISTments2009 dataset.
For a question tagged with multiple concepts, we average the corresponding concept embeddings and use the result both as the input embedding and for response prediction. Table REF lists the performance of all KT methods on the ASSISTments2009 dataset under this setting.
DKT's performance dropped to 0.76 using average embeddings, faring better than under the settings of {{cite:619f553df1f9d7969bdd7c71e8352115f0018dcb}}, {{cite:76c649725fde81c5010dcfb6efc8543449d207e7}}. We observe similar performance drops relative to our standard experimental setting for all KT methods, while AKT-R still comfortably outperforms all baselines.
{{table:73b5aca5-b5ff-437a-b904-c5f8debe08d9}}
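A minimal sketch (ours; dimensions and names are hypothetical) of the averaging rule for multi-concept questions described above:

```python
import numpy as np

def question_embedding(concept_ids, concept_table):
    """Average the embeddings of all concepts tagged on one question."""
    return np.mean([concept_table[c] for c in concept_ids], axis=0)

rng = np.random.default_rng(0)
concept_table = rng.normal(size=(100, 16))   # 100 concepts, 16-dim embeddings
e = question_embedding([3, 17, 42], concept_table)
print(e.shape)  # (16,) -- used both as input and for response prediction
```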
|
r
|
fa21529cbf39f63a2d7411504a8b46eb
|
For the perceptual SR task, a preliminary attempt was made by Ledig et al. {{cite:95b09f7454622922f64eb2c7240cf5be08af3ea5}}, who proposed the SRGAN method to produce perceptually more pleasant results. To further enhance the performance of SRGAN, Wang et al. {{cite:95b09f7454622922f64eb2c7240cf5be08af3ea5}} proposed the ESRGAN model to achieve state-of-the-art perceptual performance. Despite their success, the previously mentioned methods are trained with HR/LR image pairs under bicubic down-sampling and thus have limited performance in real-world settings. More recently, Lugmayr et al. {{cite:6cbfe2a88d9cd9959aae7ab8b73f6ac1e793eb40}} proposed a benchmark protocol for real-world image corruptions and introduced the real-world challenge series {{cite:100b3832a5bf195c552df28ad9bdc4f7f0c8d72f}}, which described the effects of bicubic downsampling and separate degradation learning for super-resolution. Later on, Fritsche et al. {{cite:bcd1570361cc1fd1fd46ea21551ea97221884fc2}} proposed DSGAN to learn the degradation by training the network in an unsupervised way, and also modified ESRGAN into ESRGAN-FS to further enhance its performance in real-world settings. However, the above methods still suffer from unpleasant artifacts (see Figures REF , REF and REF , and Table REF ). Our approach takes real-world settings into account, which greatly increases its applicability.
|
m
|
00d100359db47bd3a9a85fdf51401ae8
|
While the individual and multi-qubit control results shown here define benchmarks for quantum dot qubit systems, we envision that several strategies can be followed to further improve the fidelity.
Precise control over the exchange interactions between adjacent qubits is extremely important for high-fidelity quantum operations. While the overlapping gate structure and tight quantum dot definition {{cite:427ea0e79eba66a212f318f998a2350bbdc1042b}} used here have proven essential in silicon and in particular SiMOS devices {{cite:a0407eb9b1e12cc09cc6b41cc574879e8d832930}}, the small effective mass of holes in germanium {{cite:2e3d144a2747eaed9c37ddc17fa4bcefd6e8d6cc}} increases the exchange interaction significantly. While gate voltage pulsing may be used to turn off the exchange, here we operate in a dynamical mode where we program single- and multi-qubit gates {{cite:d98da8443000fd1129a981631d8b6d52a1e587d1}} as required for universal quantum computation, and device stability limits the maximal pulsing amplitude. The low disorder in strained germanium, however, makes it possible to relax the gate pitch, and the gate structure may thus be optimized to obtain larger on/off ratios for the exchange interaction.
|
d
|
36b222a638efcec5993252e7c05337eb
|
In Subtask-A of the shared task on Multilingual Offensive Language Identification (OffensEval2020), we focus on detecting offensive language on social media platforms, more specifically, on Twitter. The organizers provided data from five different languages, of which we worked on three, namely Arabic {{cite:285bbd935821a0c3175aa405d6c04782d443ce11}}, Greek {{cite:bced592ede0f4848810bd26b2141124249f0009b}}, and Turkish {{cite:ade67b57801fb706b869eb0c7a56c136654f09d8}}. More details about the annotation process are described in the task description paper {{cite:17c72f8c461b405e0b6aa882ed5090291444648b}}.
|
i
|
d4a38e5a79d66fadafd21fff25342192
|
Over the past few decades, several methods have been published to solve the LiDAR odometry problem. Based on the type of correspondence used during the point cloud registration step, the works mainly follow three branches: (1) point correspondence based methods ({{cite:71a7c71361094bccb344bb556248b6d57af9c0e5}} {{cite:58329e21d8c55445876537a8ed88a23b4319d9c2}} {{cite:e877f4237e31ef4f52dac7e3c0b3b5e0505bdcfd}} {{cite:6589ed8d1fa2f1a1b17431d7c84d62f43f1862ed}}), (2) distribution correspondence based methods ({{cite:493096097149afdbf80c6e70fad2c11eeed93327}} {{cite:3f25400912ac3da6d5bec5044172518afe9d005b}} {{cite:7a23babe11d41583b92c332f68eee7e5b124b702}}), and (3) network correspondence based methods ({{cite:ab030c38bd04b7f6a59e5398afb5f9b585d002ae}} {{cite:8eeebaf98b67215a51202263708a52b9d871f18d}} {{cite:c45395e91292ebf6c87cc55566f640ebf8c26cca}} {{cite:c23a87247f0c488dd1fff75dc1b79b111df8f022}}). In Section , we introduced the pipeline of a general LiDAR odometry solution, in which the major branches are compared at each step; this is summarized in Figure REF . In this section, the methods in each branch are analyzed and compared at each step.
|
m
|
65f5bc963294125c638b4da10827c625
|
Relation to Prior-based Methods.
Existing methods {{cite:45c4284cf3a3da7bdaeda421c13ba0e7317ad69a}}, {{cite:8f8e6933e230258f5a3d8dbbe917ce810241e499}}, {{cite:8c36f8eb97cbe7d2c300289c833ffc1058152f68}}, {{cite:68ba865caa092923e467087d6ebf122174f28a5a}} focus on the application of dataset priors.
They typically inject label frequencies into the classification logits, and hence fail to accommodate the re-sampled training label distribution within detectors.
In contrast, our LogN normalizes logits with post-hoc calculated statistics and can therefore adapt dynamically to this altered distribution.
Moreover, the BN-like formulation makes LogN a non-parametric self-calibration approach, which is more flexible than parametric prior-based methods such as LogA {{cite:8c36f8eb97cbe7d2c300289c833ffc1058152f68}}.
Additionally, LogN also calibrates the background class, which is overlooked by previous prior-based methods.
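To illustrate the BN-like idea (a minimal sketch of our own reconstruction from the description above, not the paper's code): class logits are standardized with statistics calculated post hoc, with no learned parameters:

```python
import torch

def logit_normalize(logits, running_mean, running_std, eps=1e-6):
    """BN-like, non-parametric calibration: standardize each class logit
    with statistics accumulated over training predictions (post hoc)."""
    return (logits - running_mean) / (running_std + eps)

C = 4                                   # number of classes (toy)
logits = torch.randn(8, C) + torch.tensor([3.0, 1.0, 0.0, -2.0])  # skewed
running_mean = logits.mean(dim=0)       # stand-in for accumulated statistics
running_std = logits.std(dim=0)
print(logit_normalize(logits, running_mean, running_std).mean(dim=0))
```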
|
d
|
c35d476272d5d69604eaafd366db19b0
|
In our implementation, each segment of the scene is reconstructed with the standard incremental pipeline of COLMAP {{cite:d03548c5a91614b204d8da1d50c5f057683438be}} using the default configuration. Since feature extraction and matching are common steps for SfM, the time consumption of these two steps is not included when we report the runtimes in Tables REF and REF . The experiments were conducted on a PC equipped with an Intel Core i9-9900K CPU (3.60GHz), 128GB of RAM and an NVIDIA RTX 2080Ti GPU. Moreover, the configuration of the parameters and the limitations of our method are discussed.
|
d
|
9eab4c2f75adf8cf15fcfad6de19021b
|
Definition 1.5 ({{cite:08e01f9ace08b4d49b34c3c1791d59cbb60eed59}}, Definition 4.1)
A nonnegative tensor {{formula:5cb04ad5-2346-4c40-9555-044783482dc6}} is called essentially positive, if it satisfies one of the three conditions in Proposition REF .
|
i
|
be306c277c5933f91336d4db4693d97b
|
Clearly, the goal in teaching a calculus for propositional or first-order logic is not just the simple manipulation of strings. Instead, students
need to learn to fluently understand the properties being expressed by logical formulas, to visualise the classes of structures that are
represented by them, etc. In other words, students also need to understand the semantics of logical languages and calculi. The Sequent Calculus Trainer
is not meant to address possible deficits in understanding semantics, nor does it do so automatically, as the considerations at the end of
Section show. We do believe, though, that a similar improvement in learning success could be achieved for such semantic aspects by
complementing the Sequent Calculus Trainer with a tool that trains the understanding of semantics, for instance using model checking games for
first-order logic {{cite:09358de2cba6a6d2909de6ae257250fb9452e86a}}.
|
d
|
86cc725d25288b0ca708e5767b4cf3ca
|
As a physical quantum resource, QC for multipartite systems is interchangeable with other resources such as entanglement or discord {{cite:83199f9b2083eb19400095f65c41ce1e41207ac9}}. It would be interesting to extend our analysis to multi-UDW-detector systems and compare the QC dynamics with other quantum correlations. Indeed, some prior works have shown {{cite:5751424d641c991a0b79aed747f9286d7c0568d1}} that the bipartite QC for two UDW detectors within a scalar background may also exhibit a QC revival that benefits some metrological tasks. By extending these arguments to the general conformal background presented in this paper, new light may be shed on our interpretation of the quantum side of the Unruh effect. The related work will be reported elsewhere.
|
d
|
2e414d22092c859dd3f3f57f0ffd41d1
|
Here we focus on STORM (STochastic Optimization with Random Models), studied in {{cite:a39ad3f6bf8993dc9a2b469ba35337543f613a2e}}, {{cite:5d90b3c23b436a32dc310e63d06e9ca73ae9026e}}, {{cite:e6e18a04ac5b1f9637025a009d2d1a86cdaf4fe7}}, since our method eventually falls within this framework. The computation and
acceptance of the iterates parallel the standard trust-region mechanism, and the success of the procedure relies
on function values and models being sufficiently accurate with a fixed and large enough probability.
|
m
|
d28ec98a444f4560c1b5ab1ac1a955aa
|
Based on the above discussion, VAPC addresses unsupervised vehicle Re-ID through a viewpoint-aware progressive clustering framework.
We alleviate the impact of the vehicle similarity dilemma on clustering by transforming global comparisons into progressive clustering based on viewpoint.
To improve the clustering quality within each same-viewpoint cluster, we introduce {{formula:ec73fead-a20f-477e-be21-3bfc038d02dc}} -reciprocal encoding {{cite:95c1888594c3ea4f5a33183232ce66da889e2456}}, {{cite:0a8ca73c4fd2c6a423de1a180b562d0c01621aa2}}, {{cite:362bd40b765a33e58526e12b96f5224398e290c0}} as the distance metric for DBSCAN {{cite:1e07523034aee7937bd5264b3967bba7e029d943}} clustering.
To deal with outlier noise samples, we propose a noise selection method that further improves the generalization ability of the model.
The major contributions of this work are summarized as follows.
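As an illustration of the clustering step (a sketch under our assumptions: a plain Euclidean matrix stands in for the k-reciprocal distance), DBSCAN can consume a precomputed metric directly:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, (20, 8)),     # toy "viewpoint" features
                   rng.normal(1, 0.1, (20, 8))])

# Stand-in for the k-reciprocal distance: plain pairwise Euclidean here.
d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)

labels = DBSCAN(eps=0.5, min_samples=4, metric="precomputed").fit_predict(d)
print(labels)  # -1 marks outlier/noise samples left for the selection step
```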
|
i
|
a57397245bef32a12073d640aa53fa70
|
There exist a number of proposals for contribution measurement, i.e., algorithms that determine the quality of the service provided by the clients, for Federated Learning {{cite:51cc95b59f478607891dab7bfab4da5ec83f124e}}, {{cite:d09bb93887f642666579ac99a2d4094014294f85}}, {{cite:8d6c679fb8a1e356ee3b0d21480e0f29d488a4a6}}, {{cite:c9ee70552d1f18cc23d3db0750aabfad57742a77}}, {{cite:da91c1194d1132a8e5354668a1f5e9f1e1378121}}, {{cite:90688a320d202a2ebe48a16862ffe10f2eff9b8f}}, {{cite:377b3ded3a6f0f497bcfd305ecdd692d459f0300}}, {{cite:6aa38edfb219556dc239a5deadbb764d401e7eff}}, {{cite:37536389e563823a5dbc77dc1a5a3aaceb86deac}}.
In particular, previous work established that the Shapley value, which measures the marginal loss caused by a client's absence from the training, offers accurate contribution measurements.
Yet, the prior art on contribution measurement focused on independent and identically distributed (i.i.d.) data {{cite:c9ee70552d1f18cc23d3db0750aabfad57742a77}}. Here, i.i.d. refers to i.i.d. data distributions, whereas the data quantity can be heterogeneous. The assumption of i.i.d. data is quite limiting in practice.
For example, in the widely used image classification benchmark Cifar-10 {{cite:03dad1150847ea7bbfea495ba1cce5873e21f0f9}}, most people can contribute images of cats and dogs.
However, deer images are bound to be comparably rare and owned by few clients. Another relevant example arises from learning predictive medicine from clinics that specialize in different kinds of patients, e.g., AIDS and Amyotrophic Lateral Sclerosis, and own data for mutually exclusive disease types.
{{figure:ef921a17-0f84-4404-8ca0-c1011742aea4}}
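For reference, a minimal sketch (ours) of the exact Shapley value over clients, with a toy utility function standing in for the model's performance on a validation set:

```python
from itertools import combinations
from math import factorial

def shapley(clients, utility):
    """Exact Shapley value: weighted average marginal contribution of each
    client over all coalitions of the remaining clients."""
    n = len(clients)
    values = {}
    for i in clients:
        rest = [c for c in clients if c != i]
        total = 0.0
        for k in range(n):
            for s in combinations(rest, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (utility(set(s) | {i}) - utility(set(s)))
        values[i] = total
    return values

# Toy utility: diminishing returns in the number of participating clients.
u = lambda s: len(s) ** 0.5
print(shapley([0, 1, 2], u))  # symmetric clients -> equal values
```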
|
i
|
18e0df0937e346b6e5ffe4d6775921ca
|
i) Minimum Average Displacement Error Given K Predictions (minADE_K): similar to the metric described in {{cite:585c1bf5b55db5bda860eb9c14c6092df10ec2a0}}, {{cite:1633141db169234c0258c2fb1f2a47cc7777acba}}, {{cite:7a1e85fa131e105784b615582aec71679015bd74}}, {{cite:d9474197916441736c467cd9bce3963599af46fe}}, for each true trajectory {{formula:aa727950-93a0-41c3-858f-376ef3575bc3}} in test sample {{formula:d4263a62-8ef0-4419-8b5a-0d4f85a6e01b}} ,
we select the closest overall prediction (from the {{formula:4f207952-e5fe-4450-a886-97aa9ddd05aa}} model predictions)
and then measure its average error:
{{formula:11a39b16-01d8-4818-b5db-cc029e5a58bb}}
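A numpy sketch (ours) of this metric: for each ground-truth trajectory, take the average displacement error of the best of the K predicted trajectories:

```python
import numpy as np

def min_ade_k(preds, gt):
    """preds: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth.
    Returns the average displacement error of the closest candidate."""
    errs = np.linalg.norm(preds - gt[None], axis=-1).mean(axis=1)  # (K,)
    return errs.min()

K, T = 6, 12
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(size=(T, 2)), axis=0)
preds = gt[None] + rng.normal(scale=0.5, size=(K, T, 2))
print(min_ade_k(preds, gt))
```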
|
r
|
57b42026eaf0d7d8c1b1790ebe008360
|
Deep reinforcement learning has been demonstrated to be highly effective in a diverse array of sequential decision-making tasks {{cite:b9cc5562402bb71641ab35175c5ce5537faf04a3}}, {{cite:83e244c74b37385f2769a243c7df8a6ca48093a0}}. However, deep reinforcement learning remains challenging to implement in the real world, in part because of the massive amount of data required for training. This challenge is acute in robotics {{cite:80f8c47a36da8cb85a62a8cb2a054580715bce9a}}, {{cite:df0c55dbd90211d3e25ecdcf43787e0e3de6995d}}, in tasks such as manipulation {{cite:f16dbb5e0d21b526ef02f3d61fb66016bf1d2b1a}}, and in self-driving cars {{cite:0cc300c79f7aa2d13347b93a0bc09c7c1ada29f5}}.
|
i
|
38337118c546c1fdf01de13a012b8bf3
|
BayLIME is a Bayesian modification of LIME that provides a principled mechanism to combine useful knowledge (e.g., from other diverse XAI methods, from human knowledge embedded in the training of the AI/ML model under explanation, or simply from previous explanations of similar instances), which is a clear trend in AI {{cite:dffd577effa848914ce5f5d78eca3e279e658215}}, {{cite:3de12ac4216bec541897034ae1ebfe2bcefea7c2}}, {{cite:8faf546bb2ad547329fe41ac3d78ed364533d776}}. Such a combination improves the consistency of repeated explanations of a single prediction and the robustness to kernel settings, and may also improve efficiency by requiring fewer queries to the AI/ML model. That said, we discuss the following questions to highlight the practical usefulness of BayLIME.
|
d
|
b18bb683af74525db7e710b82faeaf18
|
(iii) In the considered model of non-renewable resources, the stock
region is depleted upon each encounter with each diffusing species.
This assumption can be relaxed in different ways. For instance, one
can consider a continuous-time supply of resources, for which the
problem is equivalent to finding the first-crossing time of a
deterministic time-dependent threshold {{formula:fe1381c5-3bba-4b8c-8d59-5eb49d8f333c}} . Alternatively,
replenishment of resources can be realized at random times, as a sort
of stochastic resetting. If the resetting times are independent of the
diffusion of the species, one may apply renewal theory, which has been
successful in describing diffusion with resetting
{{cite:307698b4d46b70a6c1289cdfb95f5485dceb878e}}, {{cite:ca8a0037cc26c81d0c9a56707666b063486ace32}}, {{cite:c65791e6b3654f34e35edfbef77b9c377a50debf}}. Yet another option consists of
implementing a dynamic regeneration of consumed resources on the stock
region (like the natural regeneration of forests). Finally, one can
also include more sophisticated consumption mechanisms in which resources
are distributed to each species depending on the number of its
previous encounters with the stock region (e.g., a species receives
fewer resources at its next return to the stock region). This
mechanism and its theoretical implementation resemble the concept of
encounter-dependent reactivity in diffusion-controlled reactions
{{cite:41e2808f288355b2380d5408005ea2f3aea4e5c8}}.
|
d
|
295c4365d779c373535c13ce18989467
|
The first problem is important both in theory and in application. An optimization algorithm with superlinear or quadratic convergence is appealing in most cases. The second problem is of great significance for analyzing the convergence properties of sub-sampled Newton methods. Besides, a unifying framework can provide potential inspiration for developing more efficient sub-sampled Newton methods. The third question is also of great importance in both theory and application. Without the constraint of the Lipschitz continuity condition, sub-sampled Newton methods can be widely used in optimization problems. In fact, {{cite:8071583f70a92ae9740d8fd3698a008fb1974568}} found that NewSamp can be used to train SVMs, which do not meet the Lipschitz continuity condition. They concluded, empirically but without any theoretical analysis, that NewSamp can be used in optimization problems where the Lipschitz continuity condition is not satisfied.
|
i
|
0c63051deb2ec57a89905e7064e4b943
|
For training, relation statements are collected into groups of positive examples (statements that align with the same relation type according to our denoising techniques) and negative examples (statements that align with a different relation type) for each selected entity pair. Following Soares:19, negative examples include both “easy" and “hard" mentions. “Easy" mentions include no part of the entity pair and “hard" mentions include exactly one of the entities in the entity pair, suggesting that these “hard" examples describe similar but different relations that require disambiguation. With probability {{formula:8af9b63d-1675-4d16-b445-4a86e6eb82df}} , each entity in a relation statement is replaced with a {{formula:23603d75-67e2-4e23-82cb-ba5e44f0be2f}} token and with probability {{formula:9810b059-4bca-4e0b-84fe-1f6fdb5eadd8}} each token in a relation statement is replaced with a {{formula:0afe15be-2d34-4895-a042-f42e2350b5fb}} token. We train a parameterization of {{formula:c5c56471-cf16-4f4d-928c-e98db1781e99}} that minimizes both a masked language modeling loss {{cite:40c1b4332a2fe838505df3478a9d866d9ba2b0d8}}, {{cite:56ba9e90eb913fb7046b717fc08748fcc31f7b8c}} and a simple binary cross-entropy loss that encourages the representations of positive examples for a given entity pair to be closer to one another than to the negative examples.
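A sketch of the corruption step as we read it (our illustration; the [BLANK]/[MASK] token names and probabilities are stand-ins for the placeholders above):

```python
import random

def corrupt(tokens, entity_spans, p_blank=0.3, p_mask=0.15):
    """Replace each entity span with [BLANK] w.p. p_blank, then each
    remaining token with [MASK] w.p. p_mask (illustrative values)."""
    out = list(tokens)
    for start, end in entity_spans:
        if random.random() < p_blank:
            out[start:end] = ["[BLANK]"] * (end - start)
    return [t if t.startswith("[") or random.random() >= p_mask else "[MASK]"
            for t in out]

random.seed(0)
toks = "Marie Curie was born in Warsaw".split()
print(corrupt(toks, entity_spans=[(0, 2), (5, 6)]))
```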
|
m
|
f41a2b1fc78169de0a24be6e73eac879
|
In order to compare the power of different valid independence tests we must create alternative hypotheses, stratified by both strength and form of dependence, that reflect the types of alternatives we are most interested in. Studying power is thus necessarily more subjective than studying validity, but once we have formulated our assumptions and priorities it is possible to search for tests with optimal power. The most common approaches to nonparametric statistics impose smoothness conditions, for example Sobolev smoothness, and prioritize alternative distributions that are both smooth and exhibit relatively strong dependence {{cite:8b06f090acc7c924eb1a3aba8daa0d739fc15923}}, {{cite:f87092c4dbf5b69d1ac684da0656943a647bb7d4}}. However, there are other choices: {{cite:df1b14f39c2142f8a9338ca3af92c2b2464bf475}} imposes conditions to assume that alternatives exhibit dependence early on in the binary expansions of the data, and the current work assumes, for example in Theorem 3, that alternatives exhibit dependence at relatively coarse resolutions. These three types of assumptions are somewhat related, especially for the alternatives {{formula:3f44bde7-6db9-4736-8bfa-fdcc58c7f25b}} above, but there is no guarantee that optimal tests with respect to one set of assumptions are optimal with respect to another.
|
r
|
13b519beb0a8a8689a192c92e9890407
|
Triplet Loss. From the results in tab:comp-fine-tune, our methods consistently improve over the original model no matter which variant is chosen.
As shown in fig:learning-curve, our methods boost generalization after training Stage I converges, where the triplet loss reaches a saddle point and there is still room to further intensify the cluster compactness (fig:train-tsne, left).
After Stage II training, the intra-class distance is further reduced (fig:train-tsne, right), boosting generalization in terms of both rank@1 accuracy and mAP (tab:comp-fine-tune).
As illustrated in fig:view-optimization and the experiments in sec:ablation, our methods perform more effectively after training Stage I has converged, which provides a stable feature distribution for aggregation.
On the other hand, triplet loss may provide beneficial effects during the initial stochastic training process.
Triplet loss can induce a more compact feature embedding for each class in Euclidean space than cross-entropy loss, as discussed in BNNeck {{cite:04654bcab0a117bba0c0e1857ada9f3feb1902f0}}.
Consequently, the improvement brought by the proposed anchor loss is more significant for the model trained with {{formula:d2f6e6d2-53d9-4e58-9c6f-a544865f00f4}} than for the one trained without {{formula:56a09f6b-4f63-438f-83f9-cf46d22a43ef}} (cf. tab:comp-fine-tune & fig:learning-curve).
In summary, anchor loss stimulates stable and effective optimization toward a better local optimum when triplet loss suffers from a stochastic saddle point (cf. fig:train-tsne).
{{table:ba56c2df-94da-401b-946a-99bb44f1fab8}}
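For reference, a minimal PyTorch sketch (ours) of the standard triplet margin loss discussed here:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """max(d(a,p) - d(a,n) + margin, 0), averaged over the batch."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()

a, p, n = (torch.randn(16, 128) for _ in range(3))
print(triplet_loss(a, p, n))
```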
|
d
|
04a89baa2bb93712f5fae90803fefc5d
|
The optomechanical system consists of a high-finesse single-mode optical
cavity of frequency {{formula:1321f1b1-017a-41e0-8391-9ab449da583f}} with a fixed mirror and a movable mirror,
which is coupled to a mechanical oscillator {{cite:e144581745178d9d44b71e688e217eef0f09245e}}. We assume that {{formula:0593ebca-eb41-4021-bb30-8a4478b17263}}
two-level {{formula:1e23976b-b681-46b7-ab9f-f1df0cf803e8}} Rb atoms with transition frequency {{formula:542d5125-d97f-411e-ad13-42d9f665874c}} are
trapped in the quantized cavity, shown schematically in Fig. 1. Although the
QPT has been realized experimentally with an external pump laser {{cite:c69c1f7ede8d0228be24db0011c6a916b9a901b7}}, {{cite:c97c21e09a91193c859184c605d67735871cbbc7}}, {{cite:3f4903deb44ee4625cb5a193793920385bb0c8de}}, in this paper we consider only the simple Dicke-model cavity in order to demonstrate the effect of the oscillator in a clear
manner. The optomechanical cavity with {{formula:166a7cbb-4f4e-428c-854a-396a62b1d5d2}} atoms can be described by the
following Hamiltonian {{cite:c69c1f7ede8d0228be24db0011c6a916b9a901b7}}, {{cite:c97c21e09a91193c859184c605d67735871cbbc7}}, {{cite:e51f1f82e22aa4f10ce932b3e9b51ae0fe28b53f}} (with the convention {{formula:907cc8c1-315f-45b5-9065-dcdd86ce33e0}}
{{formula:83bb8bc0-aee5-4ed6-a0e3-e295e9a79aaa}} 1):
{{formula:42d60438-ce7c-4537-8fec-bca444f045eb}}
|
m
|
ee7975824c8123ab67017f5fa6349187
|
The Kuramoto model with inertia (KM{{formula:a37b7513-0d7a-4ea4-b89b-aa454dfdcebf}} ) has been the subject of tremendous research efforts within the last decade {{cite:ebce64c6e371f3617aff581ef878e55b50d35259}}. Certainly, one of the main reasons is that it has become common practice to use networks of nonlinear oscillators – such as the KM{{formula:a9bb3b5e-2f38-4139-817d-ea4289d96cc4}} – as coarse-scale representations of real power grids {{cite:709660bd21e9080398cbb300ed47b957f656677c}}, {{cite:bb9cba0f4f42499be3c00b2fd654885f44ccc0a8}}, {{cite:3fe0ba53d7d1fd42ce246033260d879486283674}}. In addition, the interest in conceptual models of power grids has been fueled by the necessary transformation of today's electrical distribution grids. It is in this context that the KM{{formula:cb875b84-0031-4171-9eec-cbf9f4bfe148}} has been applied to tackle some of the major challenges which accompany the decarbonization of power supply, like increasing frequency fluctuations {{cite:9f358a1d6e09477fbf051fb8585b23da5b3a82f5}}, {{cite:a423ca11ccc05ce81b39d2f8c2042929984ab494}}, {{cite:b74bc010b9dbc348ba01069483e817c89d74259d}}, the loss of inertia within the grid {{cite:b030332b90ddc4166532b60ef836186510e3616a}}, {{cite:3cf89a66873d222d77725067a0ce5148b92eb830}} or the progressive decentralization of power generation {{cite:e72099610659f93850e29367bcfecc9d5d728e7c}}, {{cite:aa2c94e1f569d11850d2ae5643c3c2e4953caf18}}.
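For concreteness, a minimal sketch (ours) of the second-order Kuramoto dynamics commonly used in this context, where each oscillator obeys m*theta_i'' + gamma*theta_i' = P_i + K*sum_j sin(theta_j - theta_i):

```python
import numpy as np

def km2_step(theta, omega, P, m=1.0, gamma=0.5, K=2.0, dt=0.01):
    """One Euler step of the Kuramoto model with inertia (swing equation)."""
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    domega = (P - gamma * omega + K * coupling) / m
    return theta + dt * omega, omega + dt * domega

rng = np.random.default_rng(0)
N = 10
theta, omega = rng.uniform(0, 2 * np.pi, N), np.zeros(N)
P = np.where(np.arange(N) < N // 2, 1.0, -1.0)   # generators vs consumers
for _ in range(5000):
    theta, omega = km2_step(theta, omega, P)
print(np.std(omega))  # small spread indicates frequency synchronization
```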
|
i
|
034cdd8b3231c73bca48645aa0dcaf9a
|
Finally, we explore whether unsupervised domain-adapted networks can provide better pre-trained features for night images, since the weights themselves should contain adaptation ability. Specifically, we adopt DANNet {{cite:b1719c237e99cae866c28ba87ffa9a8fdbf3d720}} and AdaptSeg {{cite:7e3c9ce38cdf8faf789a43476d429242096f465d}} to first train an adapted segmentation network with unlabelled day/night images, and then fine-tune it with our fully labelled day/night dataset. Unfortunately, this also does not help the accuracy, as shown in the “DANNet” and “AdaptSeg” rows. It seems the adapted features could be biased, yielding slightly worse optimized weights than vanilla training in our experiments.
More details can be found in supplementary materials.
|
r
|
911cc692e142a8075c9e0a6f9991ab4a
|
The next result extends Lemma to a family of unbounded delays, which makes it possible to deal with totally asynchronous iterations {{cite:f4a55211166fd397c4eb8bab3cc107308dbf6426}}, and shows that the sequence {{formula:c0ec0c8b-92b3-474a-8a87-a15bbac78f43}} can still be guaranteed to converge.
Let {{formula:5f70c103-408f-4d72-b608-3483adac8213}} be a non-negative sequence such that
{{formula:519a6c7a-a375-45ff-83fb-09d1821978fc}}
|
r
|
d7541104a1510737fbffb663036ae354
|
One can observe that the Rank-1 and mAP achieved by ISM are superior to those of these methods. In particular, the proposed model achieves 48.4%/21.9% for Duke {{cite:db30b4835b8b916bc19223dc2b15e6f55f005648}}{{formula:e7ce84cb-1034-48d8-8727-e3c387119756}} Hazy-Market and 37.7%/20.8% for Market {{cite:e544b0328ab605c44060aae47b43ee0af04df75d}}{{formula:8560ae55-c076-4e72-9233-b7907215a51e}} Hazy-Duke in Rank-1/mAP. The state-of-the-art domain adaptation method ECN {{cite:4ac1f8ece778fb876d1ad1aca9e9704d8e85f8ab}} is newly released, and the proposed ISM surpasses it by a large margin in Rank-1 and mAP on all tasks. Compared with the best clustering-based method MMT-500 {{cite:9678dd1077b9a024adce9addccbf814734809613}}, which achieves 85.3%/68.5% (72.9%/57.6%) for the task of Duke{{formula:2c340891-777f-443f-ba64-39b3c4ef0b94}} Market (Market{{formula:da468637-040c-4bd3-bf38-30372bdb5ff2}} Duke) in Rank-1/mAP, the proposed model outperforms it by 18.5%/9.5% (10.8%/9.1%) when Hazy-Market (Hazy-Duke) is used as the test dataset.
{{figure:92af81f5-0f79-400b-bf65-94464f41ead4}}
|
m
|
07741f0f1b13a4a292fc51f6c4167cfe
|
Our goal is to train a stereo matching network on synthetic data that generalizes to realistic scenes without the need for fine-tuning. To achieve this, we propose an information-theoretic approach to automatically restrict shortcut-related information from being encoded from the input into the feature representations. The approach is based on the well-known information bottleneck (IB) principle, which proposes to optimize the following objective {{cite:aef9117c888e3b3567c9e92ebfb33736728eae45}}, {{cite:c85487df91538c0764308e1a6210512766ae138b}}:
{{formula:1ad3b723-cb47-4f94-8869-56b0d9080d2e}}
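For orientation, the IB objective is commonly written in the following standard form (the placeholder above presumably instantiates a variant of it), where Z is the learned representation of the input X, Y is the prediction target, and the multiplier trades off compression of X against preserving information about Y:

```latex
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)
```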
|
i
|
527d8a6a86d5e6208cbdb2b1ff486d1b
|
We choose several of the most recent state-of-the-art methods (and label their time of publication) for comparison. Specifically, we compare with methods that represent different existing temporal modeling strategies in VQA, including VSFA {{cite:f0373f6b1f0b743638f6f147f35dd9608274a786}}, which applied a ResNet-50 2D-CNN backbone and an RNN for temporal modeling (and GST-VQA {{cite:b6d69b5cb1601c0c04d2d7b2c34a85deb308e3fa}}, which is based on VSFA and improves the training strategy for VQA); we also compare with CNN-TLVQM {{cite:06f10031d2a282f0d4423f1831d4a1845b5204eb}}, which carefully designed handcrafted features for temporal modeling. We also note a newly proposed method, MLSP-FF {{cite:7a16c7aab552432f21d13ed12fdde7d0ff1bc745}}, with a heavy CNN backbone and only naive average pooling for temporal modeling. A very recent approach, R+S, applies SlowFast {{cite:40c69ef715139e87fb8dbdcb03ba2197dd8dc927}}, a two-branch 3D-CNN network, and a GRU {{cite:422e7a1415f57cfb0f22560604767590c7086874}} temporal regression module for temporal modeling, and ensembles it with another spatial branch for VQA.
|
m
|
95b20ac07522d92e90d633d3c9eb8ba1
|
In this section, we recall some basic concepts from convex analysis and theory of monotone operators which will be used in this paper.
These concepts and properties can be found, e.g., in three monographs {{cite:8b4f51ef2aac1f1f107821da57441d0d83dfbcd8}}, {{cite:74fb320c532bd110b1fc550a60246623630188bb}}, {{cite:0ef9fad856fae08ec57574065c7d5e95646da553}}.
|
r
|
7875e29934614efcecb1a0fed2f1757a
|
The outputs of 240 Skyrme interaction parameter sets, in eleven domains where experimentally or empirically derived constraints exist, have been examined by Dutra et al. {{cite:d761aff47d648b1b52f19b518f7b9e5a6f93643e}}. These domains consist of a detailed systematic analysis of symmetric nuclear matter (SNM) properties (4 areas named SM1, SM2, SM3, SM4 in Dutra et al.), pure neutron matter (PNM) properties (2 areas named PNM1, PNM2 in Dutra et al.) and combined SNM and PNM properties (5 areas named MIX1, MIX2, MIX3, MIX4 and MIX5 in Dutra et al.). These domains are covered by a selection of macroscopic constraints. It was observed that only six of the 240 Skyrme models satisfy all the constraints, whereas 66 satisfy all the properties except one. For 10 of the 66, even the single failing quantity has its magnitude off by {{formula:4d6b726d-1b12-4165-a635-94c7a525ed10}} 5{{formula:bc08296a-cc09-4410-b7da-c8ae6f679886}} . The final list includes 16 consistent models (the CSkP set): GSkI, GSkII, KDE0v1, LNS, MSL0, NRAPR, Ska25s20, Ska35s20, SKRA, SkT1, SkT2, SkT3, Skxs20, SQMC650, SQMC700 and SV-sym32. These models satisfy a host of criteria extracted from the macroscopic properties of nuclear matter in the neighborhood of the nuclear saturation density at zero temperature, and from their density dependence obtained from the liquid drop model, experiments with giant resonances and heavy-ion collisions. A further curtailment of this number to 5 ensues with the application of constraints like the density dependence of the proton and neutron effective masses, Landau parameters of SNM and PNM, {{formula:7ace9698-bb55-42c1-8da0-92108e069ab6}} -equilibrated matter and observational data on low- and high-mass cold NSs. These five Skyrme models have been termed collectively the CSkP{{formula:535e2b82-3c19-4b1c-8fe7-7afcec966d61}} set, consisting of KDE0v1, LNS, NRAPR, SKRA, and SQMC700. As extrapolation to densities above the valid range is required for describing NS structure, constraints like the maximum mass and the corresponding central density of high-mass NSs put further restrictions on the Skyrme models. Radio pulsars, which are NSs with masses {{formula:66a38bcc-4466-4c98-8915-f46d46bb334a}} 1.8 {{formula:473563d5-d1fc-42f8-8c34-cc5259ac5914}} , are critical probes of nuclear astrophysics in extreme conditions. These massive NSs have extremely high internal gravitational fields, leading to substantially higher gravitational binding energies than inside the commonly found 1.4 {{formula:6569f5a5-cd1e-482a-8743-058be35e7a8a}} NSs. It was proposed that the Tolman VII EoS-independent analytic solution of Einstein's equations marks an upper limit on the ultimate density of observable cold matter. If this argument is correct, it follows that the mass measurement sets an upper limit on this maximum density of 10 times the saturation density {{cite:9667e9bcfcb2200b9434dae0c2e1f0f3aa8e439d}}. The maximum-mass NS with central density in line with observation is not reproduced by any of the CSkP{{formula:018aa6a0-e5de-4b29-ad88-c0e6a0307eea}} models except the NRAPR and KDE0v1 parameter sets. For the present work, the EoS for {{formula:ac447aa6-6b75-404d-821b-b7e8da6d05a1}} -equilibrated NS matter has been derived by using the Skyrme interaction with the NRAPR parameter set {{cite:8582cd1a7eae4e2bf432385ef92e6fed73e1a845}} provided in Table I.
|
r
|
81d74bdddd7b7d1a6391b442331bd7ef
|
Using the free energy surface one can employ a probabilistic (PROB) method to calculate the p{{formula:5d1a68cb-7a32-4123-85d5-7764c0389c61}} of water and p{{formula:7625b7a0-c791-44c1-9ed8-c6e775324316}} of an acid, as suggested by Davies et al.,{{cite:8dda7ca78cc49a2eb34f6186a9374173c80f36d8}}
based on the work of Chandler.{{cite:38487279a36e361b29f9b571eb7b753bd0589aa2}}
This method relies on the relative probability of finding the system in a bound state.
For this purpose we define a cutoff bond distance, {{formula:40cfbdd1-a68c-4d2c-babe-37a09e89d42f}} ,
at which the O-H bond breaks and the OH{{formula:5807e243-4ead-4c45-aae3-7249c4c246e0}} and H{{formula:ccdfb398-8d77-41c6-8d08-1a76d2a93943}} O{{formula:5182d172-b0b4-40f9-ab2b-ed8befc03a9c}} ions are formed.
The probability ratio between the bound and dissociated states is given by
{{formula:1d0776b6-4255-4ef6-9e90-4c7f6cb17208}}
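As a toy illustration of this probabilistic idea (ours; the free-energy profile, temperature and cutoff below are made-up stand-ins for the quantities in the text), the bound/dissociated probability ratio follows from integrating the Boltzmann weight of the free energy surface on either side of the cutoff:

```python
import numpy as np

kT = 0.593                               # kcal/mol at ~298 K
r = np.linspace(0.8, 3.0, 500)           # O-H distance in Angstrom (toy grid)
F = 20.0 * (1.0 - np.exp(-2.0 * (r - 1.0))) ** 2   # toy Morse-like profile
r_c = 1.4                                # hypothetical cutoff bond distance

w = np.exp(-F / kT)                      # Boltzmann weight of the free energy
p_bound = np.trapz(w[r < r_c], r[r < r_c])
p_diss = np.trapz(w[r >= r_c], r[r >= r_c])
print(p_diss / p_bound)                  # probability ratio entering the pKa
```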
|
m
|
6270b26bffaadba3ca16a6fdbbb64cef
|
Dynamic convolution was first proposed by Jia et al. to improve the performance of ImageNet classification by introducing attention mechanisms {{cite:ae43e1d84b2c75dad31544152403161772fac10b}} without increasing the depth and width of the network. In Simu-Net, however, dynamic convolution is used to transform pulse sequence parameters into learnable convolution kernels and enable simultaneous encoding of 2D (parametric templates) and 1D (imaging parameters) information into high-dimensional feature maps. This strategy improves the flexibility of physics-informed network training compared with simple end-to-end mapping. On the other hand, Simu-Net is trained with a few randomly sampled synthetic data, similar to classical PINNs. Although these data are discrete points in a high-dimensional solution space, Simu-Net achieves continuous data-driven solutions to the Bloch equations. From the results of Figure 4, we see that the TE-dependent signal attenuation and the {{formula:8fb9f9cf-cf86-43ff-82c6-913996c36ecb}} -introduced geometric distortion of GRE-EPI can be successfully simulated, closely matching the Bloch equations, which shows the generalizability of Simu-Net to unseen imaging parameters that follow the physical laws.
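A minimal PyTorch sketch (our reconstruction of the idea, not the Simu-Net code; layer sizes and parameter names are hypothetical) of dynamic convolution, where a small MLP maps scalar sequence parameters to the weights of a convolution applied to the 2D templates:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv(nn.Module):
    """Generate conv kernels from 1D imaging parameters (e.g., TE)."""
    def __init__(self, n_params=2, c_in=1, c_out=8, k=3):
        super().__init__()
        self.c_in, self.c_out, self.k = c_in, c_out, k
        self.gen = nn.Sequential(nn.Linear(n_params, 64), nn.ReLU(),
                                 nn.Linear(64, c_out * c_in * k * k))

    def forward(self, x, params):             # x: (1, c_in, H, W)
        w = self.gen(params).view(self.c_out, self.c_in, self.k, self.k)
        return F.conv2d(x, w, padding=self.k // 2)

templates = torch.randn(1, 1, 64, 64)         # toy parametric maps
te_b0 = torch.tensor([30.0, 0.5])             # hypothetical TE / B0 scalars
print(DynamicConv()(templates, te_b0).shape)  # torch.Size([1, 8, 64, 64])
```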
|
d
|
81038abd10b8fd1cf8bacf76b5c34ec5
|
We compare the performance of popular real-time warping methods and the proposed method. Bilinear warping probably has the best trade-off between performance and speed in the literature, as pointed out by Zitová and Flusser {{cite:07a685b1c1b6be0dcec7e1a6d01cf42c31022ffd}}. Hence it is recommended for video and animation applications, such as frame registration for motion estimation in OpenCV and texture mapping in OpenGL. Bicubic warping is another frequently adopted option, especially for still image processing. For example, the image editing software Photoshop and GIMP employ bicubic interpolation for image resizing and perspective view rectification.
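For reference, a minimal numpy sketch (ours) of the bilinear interpolation underlying such warping: each output sample mixes the four surrounding pixels by their fractional offsets:

```python
import numpy as np

def bilinear(img, x, y):
    """Sample img at real-valued coordinates (x, y) with bilinear weights."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x1]
            + (1 - dx) * dy * img[y1, x0] + dx * dy * img[y1, x1])

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear(img, 1.5, 2.5))  # average of pixels (1,2),(2,2),(1,3),(2,3)
```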
|
m
|
6070d2ce7b4a2b85b0cf9853a2a9c516
|
Emulating synchrony.
Alternative abstractions avoid dependency on the specifics of a failure model
by simulating synchrony {{cite:efec7132abe32b640d1a1c1a191e6460ad3e164d}}, {{cite:e19a7a63f01d96f1d3508cfcf2e8ca9f86af1056}}, {{cite:e23f6f8f4126b5ba7051c220cb3b5fce9125ae7c}}, {{cite:fc1dc4a018005083afadc8bd01797c03d7d7ae52}}.
The first such abstraction is due to Awerbuch {{cite:67739ce5af548758d3b28bca794cfeee42d51106}}, who proposed a family
of synchronizer algorithms emulating a round-based synchronous system on top of an
asynchronous network with reliable communication and processes.
The first such emulation in a failure-prone partially synchronous
system was introduced in the DLS paper {{cite:be1e41c8ea7facfff24bc098dd6481583b8d1e7d}}.
It relied on an expensive
clock synchronization protocol, which interleaved its messages with
every step of a high-level consensus algorithm implemented on top of it.
Later work has proposed more practical solutions, which reduce the
synchronization frequency by relying on either timers {{cite:2fe070c5f180f826335ac6babb10067e8665021b}} or
synchronized hardware clocks {{cite:f9485dd28b431eba8f8cb44e7d38ed4ea9d7c8b6}}, {{cite:e5fabcc72de4c7a76b15b384a2e4d47fb5580485}}, {{cite:749882a5b923bed82a0f177b1494c0a1b060a794}} (the latter can be obtained using
one of the existing fault-tolerant clock synchronization
algorithms {{cite:fd4e2b47794968a6c66cafa18dd926cf8d248c16}}, {{cite:dcab649e1e645a8343f5b76f468fb670d80c5de3}}).
However, the DLS model emulates communication-closed rounds, i.e., eventually, a
process in a round {{formula:f9350f6d-5d1e-4ebb-8570-71d980583bae}} receives all messages sent by correct processes in
{{formula:64ab35a8-4ade-41bd-8721-f248ce35a5ee}} . This property is needlessly strong for Byzantine consensus or SMR, since
protocols such as PBFT can make progress in a given view if they receive
messages from any quorum.
|
d
|
982227a8dcf46fde9b8e5815abe6f590
|
However, the derivation of the retarded potentials using the Euclidean geometry implies some conceptual and mathematical inconsistencies.
An extended physical charge cannot be reduced to a Euclidean point. Conversely, a Euclidean point cannot be expanded to a finite dimension, unless invoking more advanced geometries.
As a matter of fact, a physical charge occupies a finite volume and, strictly speaking, cannot be assimilated to a Euclidean point.
In this frame, the demonstrations given in most electrodynamics textbooks are acceptable but not reducible to the limit of a point charge.
Accordingly, the argument that these potentials do not depend on the charge volume, and are therefore extensible to the infinitesimal point-charge limit, is arbitrary. {{cite:f360397152799f0e0efdce79d8762ad2566eedae}}
Furthermore, these demonstrations do not provide a clear interpretation of the physics behind the mathematical formalism. {{cite:4933df61385248f6cf790ed7da5e88890ebfe6b1}}, {{cite:c9a67eedb269ea3a5f04a3540780897ab6705d39}}, {{cite:3125a08e5a4c2e1b34c2f0956bd301cf878ab34d}}, {{cite:09f8bae61f2c17be660478e838a2f8a766cc7a24}}
|
i
|
134e0751482788a0f356709bde1c8fde
|
As mentioned earlier, an agent can have multiple equally valid future trajectories, e.g., turning or going straight at an intersection, and it is important to design a predictive model capable of capturing such diversity. However, this goal is not achievable using unimodal approaches {{cite:7b02453c36aba0290bb588278fa5db3e3799fa31}}, {{cite:f7734e674d9a729540baa8f92d808695665e2a40}} or by simply constraining the output to a probability distribution {{cite:9c1954418e1e26fb427fe365940f33fc02228681}}, {{cite:4d30bd5711e98c1fe3fd61984525fa1baeda0909}}. To remedy this issue, some algorithms use proposal-based approaches {{cite:b30867fcd31690d5ddd4d70d48474dc96a9eca8e}}, {{cite:ee5677d7d2abf551b42cd9947bd26f60620b796c}}, {{cite:105267b7519c6f312bef2d29ad512e5ab6da07d9}}, {{cite:7951dca6749f0142344982ae7a48c00e2fa3bd5d}}, {{cite:927e6b7b9521b8d0e062feefbbf6c855bdc5c705}} in which predefined trajectory anchors are generated according to the observed dynamics of the agents or map constraints. Although these methods encourage trajectory diversity, they largely rely on heuristics, and their performance depends on the quality of the predefined anchors.
|
m
|
0dc94911902e63b81bbbc315b74fec4b
|
We use the approach Multiconfigurational Time-Dependent Hartree Method for Indistinguishable Particles (MCTDH-X) to simulate the steady state of the system and extract the observables of interest {{cite:c6fb07c1fe7167034410b0c01bc7074ecc6c0a9d}}, {{cite:4f12a45cd7f7967b307d11f142c7055680691d2a}}, {{cite:7b55c5efd4b184da4203e409880ab5d01652b4ed}}, {{cite:f0478760c8f721f3dd67cf6bdf8ab14ec93ff969}}, {{cite:6bbeb47b9146ba825f30d5d2ead65429c8f82892}}, {{cite:cd7d4f3d8bf54ca459d2a3ad3dd9dc18b3f8ae91}}, {{cite:5c11a4295e7494aaa7f6f91b65853e67947d3898}}, like the momentum space density distribution and the cavity field expectation value.
MCTDH-X is able to solve problems beyond the Gross-Pitaevskii mean-field limit, and capture the correlations between atoms as well as quantum fluctuations in the many-body states. The method relies on a variational ansatz for the many-body state, which is a symmetrized product of multiple optimized functions, or orbitals. The number of orbitals {{formula:5922f1a5-e5a5-46a4-b881-619417ad1a4b}} controls the simulation accuracy. Ideally, the exact solution of the numerical problem is found when an infinite number of orbitals is used {{cite:4f12a45cd7f7967b307d11f142c7055680691d2a}}, {{cite:7b55c5efd4b184da4203e409880ab5d01652b4ed}}, {{cite:9b5dd0dcdad645453d9c5fe83464044d70231647}}. MCTDH-X has been successfully applied for investigating the static and dynamic behaviors of Bose-Hubbard systems {{cite:4aadf3ffcbe92febf2fd52235753da8c60e14810}}, {{cite:40c14448187b7a5a0e52a83547b71841a7d55d1b}}, {{cite:49bdc4fa9b6c4d6d67173c349745b2ac8d6d16d7}}, {{cite:cce4796ad195c2e5322d692ce552846c131e9f8e}}.
A more detailed description of the method can be found in Appendix .
|
m
|
32e582f3ed3e30a510708f23058aedd8
|
The rest of this note is organized as follows.
In Section 2, we give a brief explanation of Knuth's algorithm
{{cite:75d4ca441ac6ca4b4620a819230f29923e6a42e7}}. Then, in Section 3,
we describe our computational experiments for determining {{formula:e38b62bd-642a-4903-ab26-2823b2d80d9a}} and {{formula:e4279826-07ed-4890-ab59-2cec5fa2d433}} .
The code used in our experiments can be viewed on GitLab
at https://gitlab.com/kkimura/tswops.
|
i
|
b7d24f6b5b465e75f5834527a0f1f7f0
|
While fusion encoders can be time-consuming, we combine the strengths of both strategies by performing re-ranking as in {{cite:d7c11dca98ec6502552627823cea6b883653acd1}}, {{cite:52a76ad16e7d7a76c91701c2757e1fba78ce8ffb}}. Specifically, we first retrieve the top-{{formula:af761ecf-c621-4c81-ae33-494703e32423}} most similar instances using the dual encoder, then add the fusion-encoder similarity scores between the given instance and the top-{{formula:db745a6b-bcd5-4ce1-b80f-3aec20e9ad3d}} candidates to the original scores to perform retrieval. From Table REF , we can see that this strategy balances efficiency and performance well, and that re-ranking just the top-10 instances achieves performance comparable to ensembling.
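A sketch of this two-stage retrieval as we read it (our illustration; the scoring functions are hypothetical stand-ins for the dual and fusion encoders):

```python
import numpy as np

def rerank(query, corpus, dual_score, fusion_score, k=10):
    """Stage 1: cheap dual-encoder scores over the whole corpus.
    Stage 2: add expensive fusion-encoder scores for the top-k only."""
    s = np.array([dual_score(query, d) for d in corpus])
    top = np.argsort(-s)[:k]
    for i in top:
        s[i] += fusion_score(query, corpus[i])   # re-rank the shortlist
    return np.argsort(-s)

rng = np.random.default_rng(0)
corpus = list(range(100))
dual = lambda q, d: -abs(q - d) + rng.normal(scale=0.1)   # noisy cheap score
fusion = lambda q, d: -abs(q - d) * 0.5                   # accurate slow score
print(rerank(42, corpus, dual, fusion, k=10)[:5])
```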
|
r
|
cb44893590bd81327c23d64906e566d9
|
Third, data augmentation techniques are more effective when the training sets are small. For example, all data augmentation methods achieve significant improvements when the training set contains only 50 instances. In contrast, when the complete training sets are used, only three augmentation methods achieve significant improvements and some even decrease the performance. This has also been observed in previous work on machine translation tasks {{cite:bd18777f89d8b93a41af7ff0517dd640b16e5447}}.
|
r
|
d0542aaf782f0a13fc70a717a2d498aa
|
In previous research, summarized in {{cite:fbd44f6131922a62d0db7eb4814140b60363f76c}}, most methods for solving Learning to Rank problems follow pointwise, pairwise or listwise approaches. Pointwise approaches use a single document as input during learning and define the loss function on the relevance of each document. In contrast, pairwise approaches take document pairs as instances, while listwise approaches consider the whole list of documents {{cite:4f73b57d28a50bc18a1830b6944c915fc46488a2}}{{cite:fe8277e9dc84a1f140472a8c3ebcea92df3be0ca}}{{cite:63a4eb676aecaaf0dc1c609477d6c45a40474456}}{{cite:b50fd402528bd97549155cf184fba12772ecd224}}{{cite:e3d1ddcbc7ca0deb272030ce8c749c4260744518}}{{cite:2a5cf5b6ac1d7e4c5881282b9e0f853d18183d14}}.
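To make the distinction concrete, a toy sketch (ours) contrasting a pointwise loss with a pairwise hinge loss over scored documents:

```python
import numpy as np

scores = np.array([2.0, 0.5, 1.0])     # model scores for 3 documents
rels = np.array([2.0, 0.0, 1.0])       # graded relevance labels

# Pointwise: regress each document's score onto its own relevance.
pointwise = np.mean((scores - rels) ** 2)

# Pairwise: hinge loss on every pair where doc i is more relevant than doc j.
pairs = [(i, j) for i in range(3) for j in range(3) if rels[i] > rels[j]]
pairwise = np.mean([max(0.0, 1.0 - (scores[i] - scores[j])) for i, j in pairs])

print(pointwise, pairwise)
```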
|
m
|
8859a7ca041ed9ac7aff3a3f0d5b9f54
|
Besides the video-level evaluation, we also compare our model with state-of-the-art methods in terms of frame-level AUC on CD2 {{cite:15a7b367faf1909c4d9f6e2c2b31df1cf44c5fa0}}, where our model is trained under the cross-dataset setting (see Sec. 4.2 for more details). As shown in Table REF , our model outperforms the state-of-the-art method {{cite:dcfcd7525e4f2aef4ae517e01fba95452d0499bf}} by over {{formula:eea232bf-6c03-425c-9a12-215092a71035}} .
{{table:745336e3-5491-4739-94b6-51e1cb791427}}
|
r
|
0703616e54c351e5d936af6202d8c989
|
Another possible explanation for the difficulty in directly learning a small number of shared Hebbian learning rules is the lottery ticket hypothesis {{cite:c6c877ed4b2ce2c7b3cef3183637d89693429f79}}. The lottery ticket hypothesis is based on the observation that neural networks can often be pruned aggressively without losing performance, and that it is in fact a small sub-network of the bigger network which solves the task; however, it is difficult to learn the small sub-networks directly.
The hypothesis is that overparameterized neural networks are more likely to contain sub-networks that are initialized in such a way that they can be effectively optimized to solve the task. If finding a well-initialized sub-network corresponds to winning the lottery, then bigger networks simply have more tickets and are thus more likely to win.
|
d
|
5883558d80f8a6b4c9ade5680364dde1
|
[leftmargin=6mm]
We develop a fully state-dependent performative prediction framework which extends the analysis in {{cite:4cd8b60a83680802b3b499c241cf5b32eec8e8c4}}, {{cite:99e80b9fbd45a670b3c265984af57ac9b01d7c43}}. The proposed extension relies on a state-dependent stochastic approximation (SA) algorithm with noise originating from a controlled Markov chain [cf. Algorithm 1].
Our main result consists of a finite-time convergence analysis of the state-dependent SA algorithm in a setting which does not assume the iterates to be bounded a priori. Previous works either assumed the latter condition a priori (e.g., {{cite:ade0f0cd93b382713213311b7a217b9aaee28790}}) or required a compact constraint set (e.g., {{cite:1d019d6b2dfdf01cc4dc476ef8b0e47112547f52}}). Using a novel analysis, we show that the mean squared error between the SA iterates and the unique performative stable solution [cf. ()] converges at a rate of {{formula:3b0111c6-f763-4f97-9f01-4493af7ade2b}} in expectation. Additionally, we discuss the convergence to an approximate stationary point of () when the loss function {{formula:d90eb6f1-002c-4345-8fbc-e7795dec56c5}} is not strongly convex (possibly non-convex).
We demonstrate the efficacy of the SA algorithm with several experiments. We show that it has performance comparable to {{cite:4cd8b60a83680802b3b499c241cf5b32eec8e8c4}}, which assumes an ideal setting with i.i.d. samples taken from the shifted distribution.
|
i
|
1b247980a16400bed0fc328d0f5345ae
|
In this work, we have ascribed all the electron-phonon coupling to
a single longitudinal optical mode with frequency {{formula:95a1b843-c806-426d-8b79-60d87584ddb5}}
which is mostly coupled to charge carriers. This mode has quite a
high frequency ({{formula:f6dab3c1-cb9d-41de-aeb3-f48ccf3c1e9d}} meV), therefore we
have largely analyzed the antiadiabatic regime relative to this
mode (Fermi energy {{formula:6172f399-9364-4dc5-9aa8-473164461465}} such that {{formula:869f754e-88f9-4b14-97a1-997b62574bbe}} ).
Additional low-frequency optical modes are present; however, they
are much more weakly coupled to the electrons
{{cite:ac8fbc667449030013f7bc62c9778695572b081a}}, {{cite:bcb78a8e7c818e0c3b00a35b106941257e7b15f2}}. Tiny spectral features due to these modes
can be recognized in tunneling experiments {{cite:bcb78a8e7c818e0c3b00a35b106941257e7b15f2}}, but are
not visible in photoemission data {{cite:b49b9dde1720e93a1caa09c645cafc908c9435e8}}. Indeed, due to the
instrumental resolution of photoemission experiments, the main
peak shown in the experimental data at the Fermi energy is quite
large. We have estimated a width of the order of {{formula:32175f6e-4ca6-4ff1-b16f-f8bfc941e6ac}} . Therefore, in photoemission experiments, the main peak
at the Fermi energy could include not only the coherent peak but
also the first satellites due to low frequency optical modes. This
is the reason why, in the literature, the modeling of spectral
properties has been done in the perturbative electron-phonon
coupling regime, considering only the most strongly coupled high-frequency
mode {{cite:b49b9dde1720e93a1caa09c645cafc908c9435e8}}. We point out that additional phonon modes
can be included into the theoretical model since this involves a
simple generalization of our approach.
|
d
|
6a2bd3655cdb97f6c4d32d25b14baef3
|
There is a large body of work suggesting that the local environment around the chromophore may affect its fluorescence lifetime. Variations in pH {{cite:65639ad3697eacd1ab2b41f7d204ceff1fec27a1}}, viscosity {{cite:dadda7a6364d8c9a683b06f7f71bd1c86e50c416}}, temperature {{cite:6efa58c0409194ca3bd7af8bf7779eeb961c4e39}} and pressure {{cite:2cdd71729657d3bd07212d5684439116e948f454}} have been shown to affect the fluorescence lifetime of different fluorophores. Although the viscosity of the solvent does seem to affect the fluorescence lifetime, there is no correlation between the viscosity and the fluorescence lifetime of GFP {{cite:dadda7a6364d8c9a683b06f7f71bd1c86e50c416}}. The dependence of the fluorescence lifetime on the refractive index of the solvent has been investigated by Suhling et al. {{cite:73677e1822d24689624350d746f26c40613fbe0d}}. These authors find that the inverse of the fluorescence lifetime scales linearly with the square of the refractive index for GFP {{cite:73677e1822d24689624350d746f26c40613fbe0d}}, {{cite:089fe0e0dbabdd7bd78ce76cdf765ba12f3131dd}}, {{cite:2baebbe24637e65036dd99406743f5573a73bc3f}} and for Enhanced Cyan and Yellow Fluorescent Protein {{cite:b551695fb0fbd62028f6a3007c89d3717cb46f95}}. At a given temperature, the refractive index of water increases with increasing pressure {{cite:eb5a22847c5e49e0f36c7e75e61987b462a06153}}. However, these changes in refractive index would only lead to a decrease in fluorescence lifetime.
|
d
|
b05657666ef586dbbcb3b5ea6dda5a6b
|
In practice, these considerations mean that most, if not all, the model dielectric functions can be viewed as particular cases of the memory function approach. Another advantage of this method is that it is highly flexible and customizable. We have shown here only the basic features, limiting ourselves to a simple memory function and to a
scalar generalized Langevin equation, using the constant
density {{formula:85d51cbe-f424-475b-ad19-c33ba08bc4c0}} as the only variable of interest. But, as mentioned in the introduction, by using
a vector composed of 1) the number density {{formula:528f1498-f440-42ef-b8c0-c9b670e15f5d}} , 2) the longitudinal current density {{formula:809473c5-945e-4c4e-8075-12ebeb6ebe16}} and 3) the energy density {{formula:2d77d922-94fd-423f-8d36-af2fb637418b}} , we can build a matrix GLE containing a {{formula:479e4557-0a09-4d14-be17-932db7d9a55d}} memory matrix {{cite:dd2bc2a8c46990ce29e383896aa6cd5f66e604b7}}, {{cite:1ce510bab6873b2270ff7947ebdc0916295f7ba4}}. Following the same scheme as presented here, we can obtain the expression of a dielectric function that has embedded: (i) the conservation of the number of particles, (ii) the conservation of momentum, (iii) the conservation of energy, and (iv) pair correlations. Moreover, we believe that the Atwal-Ashcroft approach {{cite:cb0a73881a4afbb58a35546cd887058396e6bc60}} will turn out to be a particular case of the {{formula:d707c295-c158-40ca-898f-58962cffe2aa}} memory matrix method.
|
d
|
3fc0fe811a354eeaa1638cf03c6f3634
|
Rather than learning a policy defined over the full {{formula:0136d551-e6e9-4834-a38d-e60a48cadede}} -dimensional space, we restrict the search space to the typical set {{cite:178cf114aecd37fc60fed6de2fd9b0375d5b9ebf}} of a d-dimensional standard normal distribution (the base distribution used to train the ProgressiveGAN {{cite:90f92ecb930ba1c0352e8170411d616491787eb2}}). We use the locally linear approximation of manifolds to learn a linear affine subspace that generates a new state under the Markovian assumption. The agent receives a positive reward if it samples from a new age bin (to improve image generation with age diversity) while respecting the defined identity bound (to preserve identity) and the typical set of the distribution (to avoid mode collapse). In the following sub-sections, we discuss each of these components individually.
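As a concrete illustration of this reward logic, here is a minimal sketch; the typical-set tolerance, the identity bound, and the penalty value are our illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def in_typical_set(z, eps=1.0):
    # Samples from a d-dimensional standard normal concentrate on the
    # shell of radius ~sqrt(d); accept states within a tolerance band.
    return abs(np.linalg.norm(z) - np.sqrt(z.shape[0])) <= eps

def reward(z_new, age_bin, visited_bins, id_dist, id_bound=0.5):
    # Positive reward only when the sample reaches an unvisited age bin
    # (age diversity) while preserving identity and staying typical.
    if (age_bin not in visited_bins
            and id_dist <= id_bound
            and in_typical_set(z_new)):
        return 1.0
    return -0.1  # small penalty otherwise (illustrative choice)
```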
|
m
|
6221144078e086b26638d4aa91c0ff8f
|
If we check exhaustive textbooks in biostatistics, such as
{{cite:e3dbb29e22db664358e02ead19f6545d7dacdc81}}, {{cite:82ffd86ed9695a2d7a0f2bf2cc36893482744a99}}, {{cite:716be15015007884253d731778f60a3e898de9e0}}, or more wide-ranging ones, such as
{{cite:4c7ff907564c885e60047870a50569276adc78b7}}, {{cite:2f8c3d1ea245e1363a8fa43371db0fb9fd22541b}}, {{cite:52cb4f9c1e34e6868439905c92fb0f25af53db2f}}, {{cite:23897fb436df9cbd8106a5a7cdd88bd9093e5587}}, {{cite:d69e247515dcbb5e767d8d8e6c26e6edc678fc6e}}, we do not
find any account of a correcting method similar to the one we propose
here. Some of the texts come close at times, but none hits the target.
|
d
|
ae98f2039150b63c5463331691bb4120
|
In {{cite:b4b9b8344a23f2d0511f2af74e5415d61d9cb314}}, Xie and Peng (team “Pengy”) used a well-tuned patch-based 3D nnU-Net {{cite:244b43bbeca8f5f55600648d3f4484905486c0e8}} with a standard pre-processing and training scheme, in which the learning rate is adjusted dynamically using polyLR {{cite:4748d7828dd4928f5e427c4fb3a7f775ab4eaf38}}.
The approach is straightforward yet efficient, as they ranked first for Task 1.
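For reference, a minimal sketch of such a polynomial ("polyLR") schedule; the exponent 0.9 and the defaults in the example are the values commonly used with nnU-Net, stated here as assumptions rather than the team's exact configuration.

```python
def poly_lr(initial_lr, epoch, max_epochs, exponent=0.9):
    # Polynomial decay: the rate shrinks smoothly to zero at max_epochs.
    return initial_lr * (1 - epoch / max_epochs) ** exponent

# With nnU-Net-style defaults (initial_lr=0.01 over 1000 epochs):
print(poly_lr(0.01, 500, 1000))  # ~0.0054 at the halfway point
```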
|
m
|
ec597a26929839f85e2831aba95071b6
|
It is relevant to recognize that ass:lyap,
ass:lyapmeanfield and ass:lyapminorization boil down
to standard stability and recurrence conditions; see
e.g. {{cite:bc6b539ee6ac770d1238ae5714cb4e0201378ad3}}, {{cite:d1ea8984eb2d233a93e41eaf226ca8f8145c84b1}}. In the
Euclidean case when we assume the uniqueness of a solution {{formula:41210332-aa7b-41cc-940d-2ae9ff3fa84d}} ,
a common choice for {{formula:abbdc3ad-97f1-4a24-a377-22ec6dbaf837}} is {{formula:58712d6b-6097-4455-9b67-279e23804f03}} . However,
the square distance is no longer a suitable candidate in non-compact Riemannian
settings, and therefore selecting a
Lyapunov function adapted to the manifold {{formula:cc7595ee-1e0b-420a-bd1d-449e949bb638}} and the geometry of
the mean field {{formula:a9849061-db36-4684-bc28-fb2aff8a7f97}} is all the more important. Note that
ass:lyap-REF is automatically satisfied
if {{formula:145eb3d9-f978-4046-957c-c6979c252bf0}} is compact.
In addition, in most cases {{formula:93729bbb-36b5-4f7b-a67f-a319b4df2ce8}} and {{formula:d70fe382-f0de-4f73-a53b-3ce2dba572b0}} are chosen such that
{{formula:5633ffc3-8948-48ce-8ae6-4bb49af94ad4}} or {{formula:a8d9ce72-5b4d-41dd-ad8a-ac32407722d9}} for some {{formula:0015f526-7ced-4cf8-9a2c-0896923759f5}} , {{formula:8511e157-1550-4a5b-9fc0-2425c6f7051b}} for some {{formula:283df3c4-03a8-4737-9fdc-2bf2df32c22f}} and any {{formula:4f05a057-e2f2-466e-8ce4-233e4df614b8}} , and therefore ass:lyapmeanfield is satisfied with {{formula:2b450b84-f093-4f24-9d74-134f826c6fda}} .
|
r
|
da016c21280f3e775542fa415f8519c8
|
In this section, we provide more benchmark results.
We evaluate IQA methods using Spearman rank order correlation coefficients (SRCC) {{cite:1d7fda0cd0da76591a0211ccd5f538803150211c}} and Kendall rank order correlation coefficients (KRCC) {{cite:c4894f0a61d9b1775292e634b019b9e3a9833ceb}}.
These two indexes evaluate the monotonicity of the methods: whether the scores of high-quality images are consistently higher (or lower) than those of low-quality images.
We also provide the Pearson linear correlation coefficient (PLCC) results.
The PLCC index evaluates the accuracy of the methods.
Before calculating the PLCC index, we perform a nonlinear regression to fit the subjective scores and the objective scores using third-order polynomials fitting.
The KRCC results are shown in REF and the PLCC results are shown in REF .
In the paper, we prefer to analyze SRCC and KRCC because the PLCC index may overestimate the performance when the IQA method cannot effectively indicate image similarity.
As shown in REF , some IQA methods, such as SR-SIM, IFC and MAD, obtain high PLCC performance, which is inconsistent with the conclusion reached by observing SRCC performance.
We argue that this is because these methods fail to predict image quality when the Elo score is high, so the samples with low Elo scores dominate the PLCC performance.
We show the scatter plots of SR-SIM, IFC and MAD in REF .
As one can see, the fitted curves of these IQA methods all tend to become horizontal in the region of better image quality, which means that they fail to predict image quality there.
After the non-linear fitting, the abscissa values of these samples are concentrated in a small interval and thus cannot sufficiently influence the calculation of the PLCC index.
We verify this phenomenon with an experiment:
in REF , we show that removing these samples does not change the PLCC performance significantly.
This confirms that it is inappropriate to use PLCC for evaluation here.
At last, we show more results for the SR benchmark in REF , including more algorithms and IQA methods.
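For reproducibility, the three indexes described above, including the third-order polynomial fit preceding PLCC, can be computed along the following lines (a SciPy/NumPy sketch with our own variable names):

```python
import numpy as np
from scipy import stats

def iqa_correlations(objective, subjective):
    srcc, _ = stats.spearmanr(objective, subjective)
    krcc, _ = stats.kendalltau(objective, subjective)
    # Nonlinear regression with a third-order polynomial before PLCC.
    coeffs = np.polyfit(objective, subjective, deg=3)
    plcc, _ = stats.pearsonr(np.polyval(coeffs, objective), subjective)
    return srcc, krcc, plcc

rng = np.random.default_rng(0)
subj = rng.uniform(0, 10, 100)                # e.g., Elo scores
obj = subj ** 0.5 + rng.normal(0, 0.2, 100)   # monotone but nonlinear predictor
print(iqa_correlations(obj, subj))
```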
|
r
|
7aceb3842b73d81bf040b56dceece0e2
|
In this work, we use three different deep learning methods as fast SWE solvers. These methods are referred to as PCA-DNN (principal components analysis-deep neural network), SE (supervised encoder), and SVE (supervised variational encoder). Schematics of these methods are shown in DNNssketch. The PCA-DNN method consists of, first, a low-rank approximation of the data via PCA-based linear projection and, then, the application of a DNN to the reduced-dimension data {{cite:f73df42e5f836a70e129cef88a04aceee1c79f07}}. SE is similar to an autoencoder (AE) {{cite:46cb0cfb3e863e65053f42e0601bdeb047077c37}}, except that it is used for supervised learning. In SE architectures, a high-dimensional input (bathymetry) is fed to the network, where its dimension is reduced via a convolutional neural network (CNN); it is then combined with the BCs (with two elements: discharge and free-surface elevation), passes through a fully connected network, and is finally expanded to the high-dimensional output (the velocity) via another CNN. SVE is likewise similar to a variational autoencoder (VAE) {{cite:40d10fcc54fa8f4cebc7bfed52767a9fac205133}}, but it is used for supervised learning. The SVE has a structure similar to the SE, except for the middle layer, which defines a random variable based on a multivariate normal distribution.
{{figure:f9c8f7b6-8792-4e51-8a9a-5cbe3ed06c61}}
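A compact PyTorch sketch of the SE idea described above; all channel counts, kernel sizes, and the layout of the two-element BC vector are illustrative assumptions, not the architecture used in the experiments.

```python
import torch
import torch.nn as nn

class SupervisedEncoder(nn.Module):
    # CNN-encode bathymetry, append the two boundary-condition scalars,
    # and CNN-decode to a two-channel velocity field (u, v).
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(latent_dim))
        self.fc = nn.Sequential(nn.Linear(latent_dim + 2, 16 * 16 * 16), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1))

    def forward(self, bathymetry, bcs):
        z = self.encoder(bathymetry)        # reduce dimension via CNN
        z = torch.cat([z, bcs], dim=1)      # append discharge + elevation
        return self.decoder(self.fc(z).view(-1, 16, 16, 16))

model = SupervisedEncoder()
out = model(torch.randn(4, 1, 64, 64), torch.randn(4, 2))
print(out.shape)  # torch.Size([4, 2, 64, 64])
```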
|
m
|
d8d29f97bcc7cac3bfc48e17aae49b56
|
Linear attention may be a promising direction for solving this issue: it decomposes the self-attention to reduce the computational complexity to linear. Numerous methods have been proposed for attention decomposition, such as approximating the softmax {{cite:c0a9f29b346bd31a840247b345162f274126a9ea}}, {{cite:52c11a6ff64d425d074db7c3e7e0a8ed0b87b79f}}, {{cite:c0d26b8a2bfb7cbfaa81ffc5d6ee12bf332a501e}} or finding a new similarity metric {{cite:f65d263c956beb88867f401117d8167930382c94}}, {{cite:c7545cd1511b3649b586edfd3083584bb3b016ff}}. However, most of these methods have only been verified on NLP tasks and suffer a crucial performance drop on computer vision tasks compared with conventional softmax attention.
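As one concrete instance of such a decomposition, the kernel feature-map approach replaces softmax(QK^T)V with phi(Q)(phi(K)^T V), which can be evaluated in O(N) rather than O(N^2) in the sequence length. A generic sketch (not the implementation of any particular cited method; the elu+1 feature map is one common choice):

```python
import torch

def linear_attention(q, k, v, eps=1e-6):
    # phi(Q) (phi(K)^T V): compute the d x e summary once, then reuse it
    # for every query position, giving linear complexity in length N.
    phi = lambda x: torch.nn.functional.elu(x) + 1   # positive feature map
    q, k = phi(q), phi(k)
    kv = torch.einsum('bnd,bne->bde', k, v)          # O(N) summary
    z = 1.0 / (torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + eps)
    return torch.einsum('bnd,bde,bn->bne', q, kv, z)

q = k = v = torch.randn(2, 1024, 64)
print(linear_attention(q, k, v).shape)  # torch.Size([2, 1024, 64])
```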
|
i
|
c49196bcfc37b99e5352e0c682644aa9
|
In future work, we will explore more machine learning techniques for multi-modal sequence data integration and metacell identification. For example, scATAC-seq and scRNA-seq are naturally causally related, and it would be interesting to utilize this unique relation to learn more robust, causally sufficient, and efficient representations {{cite:8c7f92575e785348f3b378568badf1a109c3879f}}, {{cite:1e035e70ab60daabe808a834b41f49117f4d3ada}}, {{cite:41c1ac71c67c2698b4518b0806f769dd7a2d3d62}}, {{cite:ee1a7aa6b5537ff8f24d366758015ddd744cda04}}, {{cite:a04582f6e23a6c9865461083958e7e8626458397}}. Besides, the results are promising for using an optimal transport approach to model cellular stage changes across different modalities at the metacell level {{cite:32f1203a7bcd5afdd88bbfd5bb2f61ae06ecaa91}}.
Appendix
In this section, we show the results of additional experiments on datasets including Snare Cellline, 10x_PBMC, and 10X Multiome CD34+ bone marrow.
{{table:0beea887-f972-45bb-b927-ad9fbb258afa}}
{{table:aa7ffe9a-6c21-4a72-8457-1a64b9e4e473}}
{{table:4286a777-e29f-4956-b8ed-cc5de8ebc31a}}
|
d
|
4aad8c2838baad337917cd278d7ade39
|
[leftmargin=*,nosep]
Seq2Seq {{cite:9d775bee1ded90a5988702cce0d8258af7f95895}} uses an attention-based encoder-decoder architecture to generate hashtags from the processed post.
Seq2Seq+Copy {{cite:ea3e48fdcd09a228af692e3c7bb95cfa1b64eefd}} uses the Seq2Seq architecture with a copy mechanism.
LSTM-TOP {{cite:38fe3e0e48e7924ba2bd9a3114a3aca741d1a819}} is a topic-modeling-based approach that uses an attention-based LSTM model for learning representations.
|
m
|
c32e66a9bb48bff63330d16fe13d64d0
|
We take a different approach, based on offline off-policy learning {{cite:852e079979813aa715cfeba91485d14d194d5ea1}}, {{cite:53f746d7ec6e0f8e0e0f7943e2783afbf69d8205}}, {{cite:3e84773095ddbe15f45a5de104560ee46eaa4952}}. Specifically, we aim to learn an optimal policy from offline data collected by another policy, referred to as the logging policy. While operating the network, vast amounts of data are collected and stored by telecommunication operators at little or no cost. These offline datasets represent a significant advantage for learning policies compared to online approaches, where the agent must learn in a trial-and-error fashion that inevitably degrades the network's performance during exploration phases. However, learning a new policy completely offline gives rise to new challenges that do not arise in the online setting. In particular, the dataset collected under the logging policy may have a strong bias towards actions that are frequent under this policy. This issue is exacerbated by the inherently partial (often referred to as bandit) feedback available in the dataset: only feedback from actions executed by the logging policy is observed.
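A standard remedy for this logging bias is inverse propensity scoring (IPS), which reweights each logged reward by the ratio of the target policy's to the logging policy's probability of the executed action. The following sketch is our illustration of the idea, not necessarily the estimator adopted in this work:

```python
import numpy as np

def ips_value(rewards, logging_probs, target_probs):
    # Unbiased off-policy value estimate from bandit feedback, provided
    # the logging policy gives every action nonzero probability.
    return np.mean((target_probs / logging_probs) * rewards)

rng = np.random.default_rng(0)
n, n_actions = 10_000, 5
actions = rng.integers(0, n_actions, n)            # uniform logging policy
rewards = (actions == 3).astype(float)             # only action 3 pays off
logging_probs = np.full(n, 1 / n_actions)
target_probs = np.where(actions == 3, 0.9, 0.025)  # target favors action 3
print(ips_value(rewards, logging_probs, target_probs))  # close to 0.9
```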
|
i
|
5c8352ca86b8c43b34fea28ba9108c25
|
In addition, the power of gravitational radiation decays faster than that of the magnetic dipole radiation (see equations (REF ) and (REF )).
When the spindown is initially dominated by gravitational radiation,
there will be a moment {{formula:8636be3b-745e-46f8-a3e9-ecd68af2928e}} at which the gravitational-radiation-dominated spindown
transitions to magnetic-dipole-radiation domination {{cite:8b1a4c0b49779dec2d9eab4f74167c7fd3556057}}, {{cite:18b577aeeae1e7d3625e92cc75cb263e6b329c06}}, i.e.,
{{formula:7851d714-89bf-468b-be12-7b44bebb54ca}}
|
m
|
08cf9cb1b545fbbe7cd9407a4ec0e190
|
To evaluate the proposed DRr-Net and LadRa-Net models, we conduct an empirical evaluation on two well-known tasks (i.e., NLI and PI). For each task, we select three benchmark datasets for evaluation.
Specifically, for NLI task, we select SNLI {{cite:63971a883f61e4a6c73917d46d893d1fa9986330}}, SICK {{cite:1f102839366bc7ba7d112916537c7ce93ecae40e}}, and SciTail {{cite:8aed57ddb2b883e372d2e95041315c19e252d5bc}} datasets to evaluate the model performance. For PI task, we choose Quora {{cite:437fd5325cd3f99f1d769b03e237d7710fa2a427}}, MSRP {{cite:38c971d9da3fda924a05e62ad0a27101efca0815}}, and Twitter-URL {{cite:b1f0c33d172068fdfae799a878bac33c430f3b02}}.
These two tasks cover asymmetric and symmetric sentence relations, respectively, and exhibit different characteristics.
|
m
|
657fc0c896973d64b31c0b1ccabab45e
|
One widely used distance metric in patient similarity studies is the Mahalanobis distance {{cite:29570a98e346ab02bef30b912346e1cd8024fe1f}}, {{cite:ebb589578b112798400dc058f63d73bbe1f1cab5}} or variations of it {{cite:29d5a04b9d9488086bce555cac890542c23e49b3}}. The Mahalanobis distance is defined as follows:
{{formula:5988ab22-9f6c-45bd-9dfe-f787a9935a98}}
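A minimal NumPy sketch of this distance between two patients' feature vectors, with the covariance matrix estimated from the cohort (variable names are ours):

```python
import numpy as np

def mahalanobis(x, y, cov):
    # Distance that whitens the feature space by the cohort covariance,
    # so correlated or high-variance features are not over-counted.
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(0)
cohort = rng.normal(size=(200, 4))    # 200 patients, 4 features
cov = np.cov(cohort, rowvar=False)
print(mahalanobis(cohort[0], cohort[1], cov))
```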
|
m
|
7f1e6ddc53ae27ff9263c3cd800a9802
|
All detection results are measured using the official WOD evaluation detection metrics: BEV and 3D average precision (AP) and heading-error-weighted BEV and 3D average precision (APH), at the L1 (easy) and L2 (hard) difficulty levels {{cite:c2e71d6dab7b35908a079ccf3e7a86c60d92b5b0}}. The IoU threshold is set to 0.7 for vehicles and 0.5 for pedestrians. We show results on the validation set for all our models in Table REF and Table REF , and results on the official test set in Table REF . The latency numbers are obtained on Tesla V100 GPUs in float32 without TensorRT, except for PVRCNN, whose numbers were obtained on a Titan RTX by the PVRCNN authors. To better isolate the latency improvement of our RSN model, NMS timing is excluded for all baselines, since our efficient detection head can be adapted to most of the other baselines. We do not show timing for our single-frame models, as their latency is bounded by that of their multi-frame counterparts.
|
r
|
063f66156e489a890d4e3216b387a641
|
The proposed network, inspired by {{cite:e030f733861895c090f9611c866ea481734e0a7a}}, is designed to model the private and shared representations of the different domains explicitly. The private representations are specific to each domain, while the shared representations are common between domains. To model this property, we use three separate sets of encoders. Two private encoders are trained to capture the domain-specific features. The shared encoder is trained to capture features that are common across domains and is trained on both the labeled source and unlabeled target samples. A variety of loss functions are utilized in the model to capture different features relevant to the task at hand. Furthermore, to ensure that the content of the private representations is still useful and to generalize even better, we apply image reconstructions over the shared and private representations using source and target decoders. A classifier is trained on the shared representations to improve the generalization across domains and avoid being influenced by factors specific to each domain. The loss functions are defined as follows:
{{formula:a575b2ad-c95f-4b64-bcf0-4dbc11c7d5f7}}
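A compact sketch of how such a loss combination can be assembled, following the Domain Separation Networks recipe that this design is inspired by; the particular difference and similarity terms and the weights are illustrative assumptions, not the paper's exact definitions.

```python
import torch
import torch.nn.functional as F

def total_loss(shared_s, private_s, shared_t, private_t,
               recon_s, x_s, recon_t, x_t, logits_s, y_s):
    # Task loss: classify labeled source samples from shared features only.
    task = F.cross_entropy(logits_s, y_s)
    # Reconstruction: shared + private must reproduce each domain's input.
    recon = F.mse_loss(recon_s, x_s) + F.mse_loss(recon_t, x_t)
    # Difference: encourage shared and private subspaces to be orthogonal.
    diff = (shared_s.T @ private_s).pow(2).sum() \
         + (shared_t.T @ private_t).pow(2).sum()
    # Similarity: align shared features across domains (a simple mean
    # discrepancy here; MMD or an adversarial term are common choices).
    sim = (shared_s.mean(0) - shared_t.mean(0)).pow(2).sum()
    return task + 0.1 * recon + 0.05 * diff + 0.25 * sim  # illustrative weights
```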
|
m
|
9d20c1639e0dd6694b86a49100abe645
|
To obtain the prior map in a manner similar to PFENet {{cite:57ff5d5fa6d32c72a892958e7e640626927a141b}}, the high-level query and support features are first reshaped from R{{formula:3697cd40-e9b0-4881-99f0-5c47d01bc8ea}} to R{{formula:2ecf9117-4894-49f4-885f-3f95734a2049}} . After that, the row-wise norms of the high-level query and support pixel features are computed as in Eq. REF and Eq. REF , respectively, where {{formula:cb2685ec-4f44-4926-a623-d11754cc395f}} denotes the Hadamard (element-wise) square root and {{formula:0fa3f032-ca2c-4c0b-a72e-e145edf05668}} returns the diagonal elements of a matrix as a column vector.
{{formula:1dbf7d1a-3741-469a-99f7-48aef119da55}}
{{formula:83df74d6-3804-40f8-a525-35b878bdd68a}}
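Putting these steps together, the prior map can be computed along the following lines (a PyTorch sketch with our own variable names; the PFENet reference implementation may differ in details):

```python
import torch

def prior_map(query_feat, support_feat, support_mask):
    # Cosine similarity between every query pixel and every masked support
    # pixel; the maximum over support pixels becomes the prior value.
    b, c, h, w = query_feat.shape
    q = query_feat.view(b, c, h * w)
    s = (support_feat * support_mask).view(b, c, h * w)  # zero background
    q = q / (q.norm(dim=1, keepdim=True) + 1e-6)         # row-wise norms
    s = s / (s.norm(dim=1, keepdim=True) + 1e-6)
    sim = torch.bmm(q.transpose(1, 2), s)                # b x hw x hw
    prior = sim.max(dim=2).values                        # best support match
    lo = prior.min(dim=1, keepdim=True).values
    hi = prior.max(dim=1, keepdim=True).values
    return ((prior - lo) / (hi - lo + 1e-6)).view(b, 1, h, w)

q = torch.randn(2, 256, 30, 30)
s = torch.randn(2, 256, 30, 30)
m = torch.ones(2, 1, 30, 30)
print(prior_map(q, s, m).shape)  # torch.Size([2, 1, 30, 30])
```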
|
m
|
4895a9334857a535f392e52c54cec383
|
The paper is organized as follows.
In Section , we introduce the variational mathematical framework
of our models. Starting with the local equations as a guide for the modelling exercise, we focus on the non-local diffusive terms, expressed in the form of the {{formula:83cf2886-9386-47ad-be83-b08e07e2dec7}} -Laplacian for {{formula:a422cec1-5bc9-4f2d-8f82-02b87954ad90}} , and extend them to the range {{formula:68c0f818-0719-44f9-8488-5caafd4c3e5c}} through a differentiable family of fluxes to cover the resulting non-local, non-convex hyper-Laplacian operators.
We then introduce a multi-valued concave saliency-detection term, which defines an obstacle problem for the non-local diffusion models. In Section , we deduce the corresponding Euler-Lagrange equations. A gradient descent approximation is finally used to solve the elliptic non-local problems until stabilization of the associated evolution problems.
In Section , we give the discretization schemes used for the actual computation of the solutions of the non-local diffusion problems. In Section , a simplified computational approach is described in order to reduce the execution time with a view to a massive implementation on the proposed dataset. Section contains the numerical experiments on the proposed model and presents the simulations performed on FLAIR sequences of MR images obtained from the BRATS2015 dataset {{cite:1c8dd07f77c2abe3baae7ec08886825e43a64147}}.
Finally, in Section , we give our conclusions.
|
i
|
84587761ce86151ce0acc838edc57805
|
The FDM scenario, in which this kinematically forbidden process dominates freeze-out, was first discussed in {{cite:b35597fa363f5efbde5c697be1e90715c07fa7fb}} and is further explored at the weak scale in {{cite:c270bf326d7bbeb0cbb450a4ca5539336371984d}}. It is discussed in the context of sub-GeV vector portal/kinetic mixing DM in, e.g., {{cite:fadd84aa7ff0f093743a3cbf893ec5461202b797}}, {{cite:811d6290e3f572710a69617099bcec9358d5ebd0}}, {{cite:6d03d2124b92efc7a43e6931fe99676dcd3ab497}}. The effect of the dark Higgs in this FDM regime, however, has been left largely unexplored. (See, however, {{cite:a22e75ef2114a5912854ce5ae1f6933c655fe255}}, which explores forbidden sub-GeV DM with a scalar portal, albeit outside of the kinetic mixing paradigm.) Given that a broken dark {{formula:5f4a384e-53a2-4138-8d13-a4166eb956b4}} strongly motivates the existence of such a dark Higgs, it is not unreasonable to ask whether there exist constructions in which the dark Higgs plays a significant role in the model phenomenology, and further to ask how finely tuned these constructions are. To that end, in this paper we present the simplest construction of a sub-GeV vector portal/kinetic mixing model in which the dark Higgs directly couples to the DM: the SM augmented by a dark {{formula:89ea799e-201c-4242-a8bb-6c74c3df1f35}} group, a complex scalar DM candidate, and a second complex scalar that acquires a vev in order to break the dark {{formula:cb1d012e-7010-4ae4-9ce7-8501a6c4aa19}} (containing the dark Higgs). (There do exist more complicated constructions in this framework such that the DM has a significant coupling to the dark Higgs. For example, with fermionic DM, one can realize significant Yukawa-like dark Higgs-DM couplings by selecting the dark {{formula:a13f36bb-b00f-46d7-ab1a-98e8fe88a261}} charges of the new particles appropriately. However, this selection requires either two chiral DM fermions of different dark {{formula:093e6ad9-634d-4be0-90d0-f07af59bb016}} charge or that the dark Higgs vev imparts a Majorana mass term to the Dirac fermion, i.e., the pseudo-Dirac setup. The former case requires multiple additional chiral fields to avoid gauge anomalies, while the latter splits the Dirac DM fermion into two Weyl fermions with non-degenerate masses. In either case, the constructions are substantially more complicated than the complex scalar DM scenario discussed here.) Even in this simple construction, the dark Higgs adds rich phenomenology to the FDM paradigm.
We shall find that for a significant range of dark Higgs masses, the dark Higgs effects are potentially enormous, even altering the predicted relic density for these constructions by as much as three orders of magnitude. Additionally, we find that these effects are remarkably resilient against changes in the coupling between the dark Higgs and the DM: Even very small couplings of the dark Higgs to the DM can result in potentially very large effects on the DM relic abundance.
|
i
|
562a70764fa415934bc98ffe72353c34
|
We used the narrowest and the broadest profiles, the one observed on
September 23, 2008 (QS), and the analytic profile deduced by {{cite:d9f3e262d2efda880152b95270b9614fc326cd77}},
to derive the 2D maps of HI outflow velocity. Taking into account the
velocity difference maps shown in Fig. REF , we found that the RMS
values of the differences in the solar wind HI speed are equal to
{{formula:9c8bbf32-1786-4ae6-8a9b-7843de6d51d4}} (narrowest - broadest), {{formula:1cd44373-885f-43b9-bc60-1eeb21eb2b38}}
(narrowest - Auchère), {{formula:7f249273-8e0b-4e72-8542-aca199d21216}} (broadest - Auchère),
and {{formula:be26f91e-b6e3-4518-a9d3-277c8ff65bc4}} (narrowest - QS). These values can be
considered as possible uncertainties in the estimate of the outflow velocity
with regard to the dependence on the chromospheric profile shape.
However, they are significantly smaller than those found by {{cite:f848d373bacc336eab2561d8ab6509e0712ddead}},
which are related to other parameters. In fact, assuming a maximum uncertainty on the other
physical quantities of {{formula:91c6a462-437d-4d37-b8a7-d6339d8151fe}} %, they estimated the resulting uncertainties
on the derived velocity to be {{formula:fcc6664c-3421-4369-af65-37eadd94f8e9}} ,
{{formula:7b5fd0b9-d035-49e4-96c8-6f1169c207eb}} , {{formula:ef268cb3-e6aa-4e41-902a-9a4c616c6c67}} ,
and {{formula:7611c6cf-9006-4b90-9660-7be42d6287f2}} for the impact of electron density,
total chromospheric intensity, electron temperature, and HI temperature, respectively.
|
d
|
0d7e908a4a5145a2bf5d2047a28c3e8c
|
We first discuss the origin of {{formula:b9427dab-3252-448c-a803-72dc8b7603d4}} in this compound. It is by now well established that an AHE emerges from two mechanisms: an intrinsic one caused by the Berry curvature {{cite:2f0f189f3230518f692812f7619e3bae745d0464}}, {{cite:2993c0156f2f09e52efc40e2a2e2af736931ca38}} and an extrinsic one caused by either skew or side-jump scattering {{cite:ba7fdf811730d8dddbaf9583a755817f434a7b0a}}, {{cite:856eb549a19fd8c183afcb1cb0a164dd34c88639}}. The former is observed in moderately dirty metals, in which the anomalous Hall conductivity {{formula:f59d2618-2d2f-400a-9e27-84d739ae80bd}} becomes independent of {{formula:902fb9c2-58cd-4a13-ae10-3db5add0d933}} , whereas the latter appears in clean metals, in which {{formula:bfdf0bd6-0187-450a-9e7f-7127d240649a}} {{cite:5b0c5e37a7caf18ef27594fb305d1e203c5c3737}}.
In addition, {{formula:13944541-e852-4439-b2f5-b5d59ef55012}} of the intrinsic AHE is known to scale linearly with {{formula:67e42d6f-40d8-48b6-aaff-db08a393cfad}} {{cite:2f0f189f3230518f692812f7619e3bae745d0464}}, {{cite:5b0c5e37a7caf18ef27594fb305d1e203c5c3737}}.
|
r
|
67a43e48f32904f97a6eece840674b20
|
In this section, we describe our hands-on experience of deploying STARec in the display advertising system of Company X for top-K recommendation and learning-to-rank tasks.
Because industrial recommender and ranking systems must process massive traffic requests per second, it is hard to serve a long-term sequential user-interest model in a real-time industrial system.
As discussed in {{cite:83e1fe5ae7dbbe70cbed4b00af570e010feb45de}}, {{cite:8a5f39a58a8a48ecd99c8ec51bf2b8519d60fe96}}, the storage and latency constraints can be the main bottlenecks.
|
d
|
869bf63d2d1bdfccfc6fe12db0072835
|
For the {{formula:926a5956-1d14-4386-a7ae-1d9e94beb4d6}} case, the process is quite different. As {{formula:906df71c-0330-4154-a271-85f757965d5d}} is a spacelike hypersurface, one cannot obtain the symmetry group the way the {{formula:85602af3-fd1b-4e06-a77b-dd11b2308893}} group has been obtained. It should be added that, by considering a {{formula:245f3579-d2e8-4fe8-a741-ec572252d5fa}} coordinate transformation, a locally de Sitter Bondi-Sachs metric can be used, and a symmetry group named the {{formula:8dc55791-2a6f-40dd-96d1-516be9d9a083}} group has been obtained.
To study such a structure, we first need to consider the basic definitions {{cite:7a0ca8e5e5dce21b4bd214897fd68351b21319eb}}, {{cite:cbfc8fc4fa1289187d549f7a61753252fb0ef01c}}, {{cite:ef5397949db688a16dc9085ce6c0d710523b7bf6}}.
|
d
|
556f32e7cbc3ed4198b94575072a6b53
|
where the regularization parameter {{formula:617c099f-7965-4b44-a43b-94cb2fee0560}} calibrates the trade-off between data fitting and sparsity of the solutions. Under technical assumptions on the data generation process, the estimator enjoys strong theoretical guarantees on its performance {{cite:d1ad0a0807919ad6136d577bb5c3cfdda3be0d8c}}. A particularly interesting property of the Lasso estimator is that some of its components are exactly equal to zero, depending on the value of {{formula:53131589-654f-4033-83b7-4c7f014acd08}} . The coordinates equal to zero correspond to the columns of the matrix {{formula:c0be67d0-2a36-464a-8bac-fa738be1ac49}} that can be ignored when predicting {{formula:04e972ca-b39a-4fae-bc0f-9fe6f155e084}} . A natural idea is therefore to detect these variables and eliminate them as early as possible in the resolution of problem (REF ).
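The exact-zero pattern is easy to observe in practice; a minimal scikit-learn sketch on synthetic data (the penalty level is an illustrative choice):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
beta = np.zeros(50)
beta[:5] = 1.0                                # only 5 informative columns
y = X @ beta + 0.1 * rng.normal(size=100)

model = Lasso(alpha=0.1).fit(X, y)
# Coefficients set exactly to zero: the corresponding columns of X play
# no role in prediction and could have been screened out early.
print(np.flatnonzero(model.coef_))            # indices of active variables
print(np.sum(model.coef_ == 0.0), "coefficients are exactly zero")
```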
|
i
|
8cc5d42968c59519d4c105bd2d5c18bf
|
Quantitatively (tab:compare:events),
EDS outperforms all other monocular baseline methods, even without using inertial measurements
(which are known to improve robustness and increase accuracy in VO {{cite:aecc56ab97803a0bbf86717a60fd3d41a9524103}}).
Our approach also outperforms the state-of-the-art event-only stereo method
ESVO {{cite:01a7e73278b19051a26d625482ab9b869dc92c86}}, despite the fact that our method is monocular
and hence does not exploit the spatial parallax of stereo setups.
The tight fusion in the front-end (between frames and events) and the PBA in the back-end compensate for the lack of stereo baseline in the event data.
|
r
|
fe42d7d3082f867e7e18b0ec9d4a8517
|
A marginal structural model (MSM)
is a semiparametric model assuming {{formula:82f983d5-a573-442b-a785-23c43b875227}}
{{cite:a5a69c1f820a7b24fa71e0902684abe5edd9eb3a}}, {{cite:3f4183fb559ac2a4e50849dbfb2effc6b5fb88cd}}, {{cite:049ec8bdb227fea28f7857a4a7ae5d060bc6dea5}}. The MSM provides an interpretable model for the treatment effect
and {{formula:e3ff50ef-0a83-4e34-b680-d4f0416be9c8}} can be estimated
using simple estimating equations.
The model is semiparametric in the sense that it leaves
the data generating distribution unspecified except
for the restriction that
{{formula:d7bac817-2419-4330-a309-45da11b80b11}} .
If {{formula:6d3dac14-123c-4d37-adb4-5f56760218fe}} is mis-specified, one can regard {{formula:c28324a5-1948-4bde-bf73-a819afc621b0}} as an approximation to {{formula:c2091dc6-7b53-40b9-b5d6-814eb380ed7e}} ,
in which case one estimates
the value {{formula:6e4b3312-d5ac-4320-b9be-9fb6894696a6}} that minimizes
{{formula:1c2ed122-ef88-42db-ade7-0f7dc11234a8}} ,
where {{formula:02ed2b90-dda4-465c-92c9-94fa68d47e3b}} is a user-provided weight
function {{cite:609c0a058683b7259e9ac601802320bf9686490a}}.
|
i
|
406a6a6e32bf2271f19682cf57f2c454
|
Let us recall a special case of the quadrilateral comparison from {{cite:73e2444c9dcae3fd5d62bd365d12b4919fca9c31}} (see also the previous {{cite:54a27543ccdb54baf2a27ee3023a173b09bb8d63}}).
|
r
|
2a89f8c907cdc780a4b343f699b24f61
|
Density functional theory (DFT) calculations are performed using the Vienna ab initio simulation package (vasp) with the projector-augmented-wave potential method {{cite:1837028550266f3b37ca009ee964744d237b921f}}, {{cite:b41e9b73a3da80289c7e7ae33241bd455712e12d}}. We use the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof parameterization revised for solids (PBEsol) as the exchange-correlation functional {{cite:24162f91dee4a022ab7293a0f914a7c57c550cbf}}. A plane-wave basis set is employed with a kinetic energy cutoff of 800 eV and a {{formula:d09bb4dc-305f-43a2-abc1-af8840d6fa5b}} k-mesh during structural relaxation, which is stopped when forces are below 10{{formula:d5c84e7d-b561-42d5-b774-66377573e9e8}} eV/Å. The band structure of BaTiO{{formula:6c53f1b0-7e3c-41ce-85dc-303aa77ee3a9}} is calculated using the HSE06 hybrid functional in the presence of spin-orbit coupling {{cite:2a44212f4e1ad2730031b1f2dde3e868bd61565e}}. Photoexcited carriers are simulated by promoting electrons from high-energy valence band states to low-energy conduction band states. This {{formula:4637fa7e-af95-4f8b-93c6-07744d8d9175}} self-consistent field ({{formula:8b8b513e-75e7-4ab7-bd31-2fd6defd9dc3}} SCF) method introduces non-interacting electron-hole pairs by changing the occupation numbers of the Kohn-Sham orbitals {{cite:740a7b5f5227d96a02eb2523bb225f7e5d42e1c2}}, {{cite:45ddd88003d797f3b41134ba116377fc83a20fe3}}, {{cite:e15ba210b5d3a1d4998931c99c8af9124bc538bd}}, {{cite:cbc97580f1df2762f25721b3beb8b4df15fadfdc}}, and is computationally less demanding compared to other approaches like constrained density functional theory {{cite:3ced7b4dbdc622af1e2a9ee9b7764c97766b9ffc}} and excited-state force calculations {{cite:c9e8f0d089d63b96c06a8ccd60a2b334302ea4ad}}. Nevertheless, it gives consistent phonon spectra compared with those obtained with constrained DFT {{cite:3b341f911c18dccee74a86665e6b0d1cf7b00135}}, as shown in the Supplementary Materials. The occupancies are fixed with a smearing of {{formula:50471979-6f1f-4d39-a7a3-931fdecb6ef3}} eV.
|
m
|
f3f78b88b483e5e51dc5f790b904b4f7
|
New advances in quantum technology provide new perspectives for exploiting the rotational degrees of freedom of quantum systems. In this regard, many experimental works have focused on the study and control of the rotational degrees of freedom of molecules {{cite:0e96a1d5fa3ca56a1e0c0c1c58af57b34931cc15}}, {{cite:10d3a4fa0f8f57ea233cd97f4d46981b8a2b7236}} and nanoparticles {{cite:dee835282a6cc3198cd94ff0c8323112250a329c}}, {{cite:3c20350623c7e3edbb71bc13449666e700a4238c}}. Moreover, possible applications have been investigated in ultracold chemistry {{cite:cc20dee9fd87093db4412ce3a90a5ac92da3d7d0}}, highly sensitive sensors {{cite:7988e4122d8707213feb01ecf1a5932d82e551e9}}, quantum heat engines {{cite:608a85a00d619cdf124c9dcb3532a168a8955551}} and nanomagnets {{cite:2a71b88b85edf19fbd6cc4d43095d88295ab0f70}}. Understanding these experiments and applications requires studying the theoretical aspects of the effects of the environment on the rotational degrees of freedom of quantum systems. However, in the past decades most studies have focused on the effect of the environment on the translational degrees of freedom, see e.g. {{cite:ca89d209b4c7783eb33a53913a36183fa3e9d4f1}}, {{cite:aee1fdde99c040b549bf8201f80dad5728f8aca2}}, {{cite:922d41f37cd2e6c969b3f75957a584d810525e14}}, {{cite:6fce0601e07a06741438b6cfc9c5bf59b493ed0a}}, {{cite:9d867cffe2b0a764d66d5f9d8789a9e51652c71a}}, and only a few have addressed rotations.
|
i
|
21117ecbafb144302ef0b82b7aedb37f
|
A major challenge in realizing a real-world autonomous system capable of continuously learning and adapting over time is preventing catastrophic forgetting. The learning model needs to maintain a balance between plasticity (the ability to adapt to new knowledge) and stability (the ability to retain prior knowledge) {{cite:c60e77cca6878255f28d6dbf3f63226b96d3ecde}}, {{cite:9e3a5a015bd1f5a1b8d9772f496c986114ba5d77}}. Excessive plasticity can cause the model to forget previously learned information while learning a new task, whereas excessive stability can make sequential task learning more difficult. This phenomenon is known as the stability-plasticity dilemma {{cite:fb596a591b1613d5917d775f6d57800bdfbba6ba}}, {{cite:cc9dc86f99066484e682b9fa2bf6988622526abe}}.
|
i
|
a29f350fd4c65b890482ceadef171382
|
As described in detail in {{cite:8de624d4438f97d8c38d9e9b9f78a02bda2a54dd}}, the toy torus
{{formula:11d0fd45-b1b6-4d5b-85ad-3fc3468f578a}} is distorted
into the “target torus” that approximates the orbit by the generating
function
{{formula:edf05c8f-6d9b-43fb-9ac0-aa4470682215}}
|
m
|
a76f303c8d8075c2d9a899bf4495390b
|
We noted that,
compared to many visual benchmark data sets,
disease-specific factors in medical images
may be buried by other, more significant factors of variation
in terms of contribution to pixel reconstruction
or image distribution
(e.g., heart-chest ratio vs. torso shape).
For instance,
we attempted to remove the inactive dimensions (defined as {{formula:bf84ecfa-8e43-4a0c-b6af-179b38dbeab6}} where {{formula:7ddaaada-1dff-4b88-ae47-6a842a6b4fd5}} = {{formula:dc4b0d3f-a1a3-4085-b8ac-2431cbdb6b5c}} for each dimension {{formula:f8664688-be0a-4c9d-ab27-58925fab3a14}} in {{formula:dc411e5c-0ab7-49f7-b5a6-49a95172ead5}} ) from the VAE embedding,
a strategy shown to improve the performance of the M1+M2 model on visual benchmarks {{cite:af93cf1cef3305cb20327bd671ff1e960d2e1b9f}}.
The mean AUROC of the presented model, however, decreased by around 3% to 0.658 (for {{formula:f79073e2-a3dd-402f-b011-0c9c441a70ea}} =500).
This, we believe, may explain the relatively limited progress of unsupervised representation learning in medical images despite its recent traction in other visual domains, and it is a pressing challenge to be resolved
in order to leverage unlabeled data in a field where image labeling is especially costly and difficult. For future work, we plan to improve the two-stage training strategy and the disentangling with hierarchical generative models.
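For concreteness, one common operationalization of the pruning step mentioned above scores each latent dimension by the variance of its posterior mean across the dataset and drops low-activity dimensions; the statistic and the threshold below are our assumptions about the cited strategy, not its verbatim definition.

```python
import numpy as np

def active_dimensions(mu, threshold=1e-2):
    # mu: (n_samples, latent_dim) posterior means from the VAE encoder.
    # A dimension whose mean barely varies across inputs carries little
    # information and is treated as inactive.
    return np.flatnonzero(mu.var(axis=0) >= threshold)

mu = np.random.default_rng(0).normal(size=(1000, 16))
mu[:, 8:] *= 0.01                  # simulate eight collapsed dimensions
print(active_dimensions(mu))       # -> [0 1 2 3 4 5 6 7]
```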
|
d
|
727e743676e912e4f78ccf731143e312
|
The solution to the Einstein field equations for a static, spherically symmetric object in vacuum is well-known in the literature as the Schwarzschild metric {{cite:bc0c5288a81ad5d1987763fa38c073d2be5c3d83}}. This solution describes new effects that could not be explained within the classical Newtonian theory of gravity {{cite:63d552502b0ad076d4a5a9a54e74bfdf90e0a350}}.
|
i
|
ae204dcec437f131f75ef0e316f4bdd4
|
We also require positive modularity in the core node sub-clusters for each cluster.
This is a relatively mild requirement that avoids
cases where the core node sub-cluster may be k-valid and connected but might not reflect a preference for itself over the outside.
Consider the case where a 10-clique, a complete graph on 10 nodes, is contained in a clique with 20 nodes.
This 10-clique would satisfy k-validity for {{formula:fd480c54-8b8e-4257-94f9-e9faae89e30f}} and would be connected, but would not have positive modularity.
By enforcing positive modularity, we would avoid returning such clusters. This example illustrates the advantage of enforcing
positive modularity even though the probability of it occurring in a real world network is likely to be small.
We also note that enforcing positive modularity in the core node sub-cluster (or even in the final cluster that contains both core and non-core nodes) is
not the same as trying to maximize the sum of the modularity scores of the individual clusters (total modularity score).
In other words, enforcing positive modularity does not have the same vulnerability to the resolution limit that was established for the modularity criterion, which
seeks to maximize the total modularity score {{cite:961e8a10b3aefd25b40b59472c5c576dfefccb5a}}.
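The 10-clique example can be checked directly. A short NetworkX sketch computing the single-community modularity contribution shows that the sub-cluster is dense and connected, yet its contribution is negative:

```python
import networkx as nx

G = nx.complete_graph(20)          # a 20-clique
core = set(range(10))              # candidate core sub-cluster: a 10-clique

m = G.number_of_edges()                         # 190
e_in = G.subgraph(core).number_of_edges()       # 45
deg_sum = sum(d for _, d in G.degree(core))     # 10 * 19 = 190

# Single-community contribution: e_in / m - (deg_sum / (2 m))^2.
q_core = e_in / m - (deg_sum / (2 * m)) ** 2
print(round(q_core, 4))  # -0.0132: k-valid and connected, yet negative
```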
|
m
|
8aead712caafb73963cb8bf75174986c
|
State-of-the-art performance on face parsing is mostly achieved by deep learning methods.
Liu {{cite:b1859d37fdf1d93f50ee012d57fea09e390d4a51}} incorporated CNNs into CRFs and proposed a multi-objective learning method to model pixel-wise likelihoods and label dependencies jointly.
An interlinked CNN was presented in {{cite:2a2f9c56a593f57af287f4cb9fbac5833495d55b}} to detect different facial parts, although this architecture cannot generate semantic labels for large-scale components such as facial skin.
Luo {{cite:a82372bd329106d75082d5e90d1a7ece21ef3ecc}} applied multiple Deep Belief Networks to detect facial parts and accordingly built a hierarchical face parsing framework. Jackson {{cite:44d9f8aeb33dec27de51d0c4e0063c8374f06dfc}} employed facial landmarks as a shape constraint to guide Fully Convolution Networks (FCNs) for face parsing.
Multiple deep methods including CRFs, Recurrent Neural Networks (RNNs) and Generative Adversarial Networks (GAN) were integrated by authors of {{cite:90c3626747108adcdd1707ab9916cd2d31346898}} to formulate an end-to-end trainable face parsing model, while the facial landmarks also served as the shape constraints for segmentation predictions. The idea of leveraging shape priors to regularise segmentation masks can also be found in the Shape Constrained Network (SCN) {{cite:7c8e8fb868936852919348ba4d7ca1184586aa64}} for eye segmentation.
In {{cite:bc5284e62284134cc3ea0dd3224a4c7f2eb40d7e}}, a spatial Recurrent Neural Network was used to model the spatial relations within face segmentation masks.
A spatial consensus learning technique was explored in {{cite:cdb27572fa2cea57b98efcf12c101a340ff0939b}} to model the relations between output pixels, while graph models were adopted in {{cite:9b32423eefb910254d557e83da5811415663c87c}} to learn the implicit relationships between facial components.
To better utilise the temporal information of sequential data, authors of {{cite:a53c721516d5f81d88f0586f6d8c7eb50a29a24d}} integrated ConvLSTM {{cite:9fd4bfc1b50cb9bd6adeb7cb799184f6b43c09cf}} with the FCN model {{cite:a64e154a9e6770cdc7c3b82e7dff90d8ea07e44f}} to simultaneously learn the spatial-temporal information in face videos and to obtain temporally-smoothed face masks.
In {{cite:ea3dcdc9aa73b57ce923bbd7bcb3e75a204613c2}}, a Reinforcement-Learning-based key scheduler was introduced to select online key frames for video face segmentation such that the overall efficiency can be globally optimised.
|
m
|
25957d06edddd3552e425018640fb818
|
In contrast to {{cite:2e9381d5bd594bf3fb3b0eed2625cb6a98c5bcf8}}, whose work we build upon here, we use the value-based agents DQN/DQfD instead of the policy-gradient-based agent A3C.
This shows that learning reward functions is feasible across two very different RL algorithms, with comparable success. sec:comparea3c compares the scores of the two agents.
|
d
|
e5fb65c5c60848bf16b1211019cc5332
|
The areas of stopping theory and dynamic programming are vast. For an overview of classical stopping theory, we refer the reader to Ferguson {{cite:256e43dea5f3a02dd429f7c7536b30157766fe04}}.
Various robust approaches towards stopping problems have been introduced as well.
Closest to our work is the robust stopping framework of Riedel {{cite:0562cd398fed39818b31ec2f2ab43e4776572ba5}}, which also generalizes the classical Bayesian setting. He provides a result similar to Theorem REF , but in much greater generality, based on martingale theory. However, his framework relies on assumptions that do not seem to be satisfied in our setting. We can circumvent these because we consider a simple, concrete stopping problem with independent distributions. This allows us to give a more elementary proof of Theorem REF that avoids advanced martingale theory, which might be of independent interest. In the context of Markov decision processes, Iyengar {{cite:83cf4ed1a45f2a569f0264dd84fa643dd60986f9}}, and independently Ghaoui and Nilim {{cite:da79cc0bafc15fcd8528da486ad588f9a04f68a2}}, provide a robust dynamic programming approach leading to a robust Bellman equation, which is similar in spirit to Theorem REF . In the Markov decision process literature, the robust perspective concerns uncertainty in the transition matrix that determines the probabilities with which the process moves to a new state.
|
d
|
6b5a5597e7a5f7aaffbba4e1c0c49a8c
|
A system of reduced order with respect to (REF ) is {{cite:32d45c53e93d81f67a1047a0716a553f1028fa6a}}, {{cite:23a0f874cad134a52b14e2047de7b445d3e67736}}, {{cite:bf7609c6517c3d571686d39543b91a4a5f51c7e5}} the system governed by the equations
{{formula:8b6c95fd-1ed8-4987-b41b-ea8c94879f33}}
|
m
|
e5f81ef539ce4634effa2edfb8859e2c
|
This section describes the TAR method and its application to the automatic translation of the Stanford Question Answering Dataset (SQuAD) v1.1 {{cite:36b8a37ed9dde3b7dc5d01819a947538004632e5}} into Spanish. SQuAD v1.1 is a large-scale machine reading comprehension dataset containing more than 100,000 questions crowd-sourced on Wikipedia articles. It represents a high-quality dataset for extractive question answering, in which the answer to each question is a span annotated in the context paragraph. It is partitioned into training and development sets containing 80% and 10% of the total examples, respectively. Unlike the training set, the development set contains at least two answers for each posed question, which is intended for robust evaluation. Each example in SQuAD v1.1 is a {{formula:2d5f8ac0-cae7-4f40-a860-fa176eea7df2}} tuple made of a context paragraph, a question, and the related answers along with their start positions in the context {{formula:74dba74b-5ef6-4897-b815-8c7b9f12e6e1}} .
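For concreteness, one example in the SQuAD v1.1 layout, written as a Python dict; the text is our own illustrative sample, not an actual dataset entry.

```python
example = {
    "context": "The Amazon is the largest river by discharge in the world.",
    "question": "What is the largest river by discharge?",
    # Each answer is a span of the context plus its start offset.
    "answers": [{"text": "The Amazon", "answer_start": 0}],
}
span = example["answers"][0]
start = span["answer_start"]
assert example["context"][start:start + len(span["text"])] == span["text"]
```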
|
m
|
3a8364d6abc84c3ce67d3336524ed43e
|
We experimented with alternative graph construction methods, such as applying a score threshold for edges (more akin to {{cite:01f2c039bfcfef8d1b9fc15f41549f9cf0cbe20d}}) or using BERT {{cite:6cd0bf9ff022d37c0f2f1afccd23fc42b206e6b8}} to encode diagnosis texts prior to similarity computation. Empirically, through manual inspection of the resulting graph and preliminary experimentation, we found the presented method to work best.
|
m
|
2c049a0689eed714b501d38ac04ea4ad
|
The Unified Transform Method (UTM) or Method of Fokas provides a powerful approach to solve evolution IBVPs, including all those with linear, constant-coefficient PDEs and some integrable nonlinear PDEs. The UTM was introduced by A. S. Fokas in 1997 for the purpose of generalizing the method of inverse scattering to IBVPs on the half-line and on a finite interval {{cite:acc531345e33fbfc9b7dd381d8daaa2ff49f94ae}}, {{cite:3211b854bf092a40ac5002d5850c65ec7011e4b9}}, {{cite:88fa16a79434868b037031ebaa9ca12b92df7895}}.
|
m
|
02fff29e6776c7c648c725d6076dd6f4
|
We evaluate three state-of-the-art X-ray CAD methods based on image classification: CheXNet {{cite:e1a0e6aa65ff17cd038120b5262ae9ec232e4e38}}, WSPL (Weakly-Supervised Pathology Localization) {{cite:ac8ba8f5abd87b9e047df2c5bf2a30480d3ebf0f}} and UFDet (Universal Fracture Detection) {{cite:dbc1361940e4fcd11cd5ef98c6d2d8e999a02777}}. CheXNet trains a classification network with a GAP layer on the last feature map, followed by a fully connected layer. WSPL replaces the GAP with LSE pooling. UFDet estimates a dense probability map and uses LSE pooling to produce the classification score. We evaluate stage 1 of UFDet for a fair comparison with CheXNet, WSPL and our method, which are all single-stage methods. The anomaly localization map is generated using CAM {{cite:77b80b062ade0a1082943f6d27f1d9ae421bd50e}} for CheXNet and WSPL, and from the produced probability map for UFDet. The localization map is converted to bounding boxes for FROC evaluation using the same steps described above. ResNet-50 is used as the backbone network for all image classification methods.
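Since LSE pooling is the ingredient shared by WSPL and UFDet, a minimal PyTorch sketch of it may be helpful; the pooling sharpness r is a hyperparameter and the value below is illustrative.

```python
import math
import torch

def lse_pool(feat_map, r=10.0):
    # Log-sum-exp pooling over spatial locations: a smooth interpolation
    # between average pooling (r -> 0) and max pooling (r -> inf).
    b, c, h, w = feat_map.shape
    x = feat_map.view(b, c, h * w)
    return (torch.logsumexp(r * x, dim=2) - math.log(h * w)) / r

scores = lse_pool(torch.randn(2, 14, 7, 7))  # per-class scores
print(scores.shape)                           # torch.Size([2, 14])
```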
|
m
|
3e0dbd136ab74601050b2654a608913f
|
In Fig. 6, we compare the convergence rate, with respect to wall-clock time in seconds, of the proposed DPD-AirComp scheme and the error-free baseline. For the DPD-AirComp scheme, the duration of one communication round between the BS and all users is given as {{formula:31ef1708-dd72-4569-9c66-fb93b8c54cc3}} , where {{formula:c7e08148-9c6f-4e24-8e78-4f75cd8c554c}} is the total number of symbols to be transmitted. For use case B, {{formula:4edafcda-169a-4cec-8434-e68374089a5c}} . The duration of one iteration of the error-free scheme is calculated as {{formula:03de3fb3-48cc-4f31-9cf5-03ffe00dc10d}}
where {{formula:13f2604f-7f06-4c67-9920-3799c6c69237}} , representing the quantization level {{cite:17d213e1f6357f4dccae6ecb81bd8360aed63c06}}. As can be seen, the DPD-AirComp scheme significantly outperforms the error-free scheme in terms of convergence rate: DPD-AirComp converges within the first 0.05 seconds, whereas the error-free baseline converges only after 1 second. The DPD-AirComp scheme is therefore more than an order of magnitude faster than the error-free scheme, while achieving near-optimal performance.
{{figure:c2a295ad-2db5-4cbd-9cf1-0bde3f6ab405}}
|
d
|
a0c3c7a03bdea4e70802c7ce884ca471
|