Columns: text (string, lengths 54 to 548k), label (string, 4 classes), id_ (string, length 32)
Here we present the behavior of the canonical potential, or free energy, from Eq. (REF ). By analyzing Fig. REF , one can see that the canonical potential {{formula:0e46e3ef-cba9-4704-b705-935dcd892e60}} has a minimum for each value of the Horndeski parameter {{formula:15f2e171-ae60-4129-8e2b-99a58c90d9d7}}, which ensures a global condition of thermodynamic stability {{cite:13e4086c20b117c036fdad3f22a388cb7fe3ef28}}. The figure also shows that there are critical temperatures where {{formula:9aac4c3a-e842-4e32-8911-7bc3c80dccfe}}, depending on {{formula:121a8057-cfd5-4447-8b6b-06b54712b641}}. For {{formula:cca83ea1-d64d-4800-8632-cb04f13a8365}} these solutions become unstable. Increasing the absolute value of {{formula:1d8a4cc3-2fb1-4ab3-ba90-09cd6dce7716}} lowers these critical temperatures. {{figure:cb320c66-7be4-48a3-bd2e-ac2438da8dfd}}
r
0c714b082fc5ee1c9af7a91fb6accb6b
The methodology in this work follows a path similar to the one proposed by Barmparis and Tsironis for discovering nonlinear resonances through ML {{cite:4c58e1c794e48bf5d29ce588eef52a33a91a7f50}}. There is currently substantial interest in ML approaches that directly utilize equations of physics or mathematics {{cite:207f8244acca20ff08f418d18c0fbbf5c470249a}}. In the present work we numerically integrate Eq. (REF ) using a fourth-order Runge-Kutta method with an integration step of 0.005 and introduce a new data-free, physics-informed loss function designed to capture the desired properties of the self-trapping transition, defined as: {{formula:e7b2c56f-8b72-49bd-a327-3373fecb3157}}
m
f07ff0ddb740af6aefbdf1cf9669f659
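As a concrete illustration of the fourth-order Runge-Kutta integration described above, a minimal Python sketch with the stated step size of 0.005 follows; the right-hand side f stands in for the model equations and is our placeholder, not the authors' code.

import numpy as np

def rk4_step(f, t, y, dt=0.005):
    # Classical 4th-order Runge-Kutta update for dy/dt = f(t, y).
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(f, y0, t0, t1, dt=0.005):
    # March from t0 to t1, recording the trajectory at every step.
    ts = np.arange(t0, t1, dt)
    ys = [np.asarray(y0, dtype=float)]
    for t in ts[:-1]:
        ys.append(rk4_step(f, t, ys[-1], dt))
    return ts, np.array(ys)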
In the review {{cite:f03ef3e6fb681c8907384c186b3ec9b56d15080f}} it was emphasized that the use of the QH language helped to elucidate several deep and long-standing unresolved conceptual as well as technical puzzles, say, in the field of relativistic quantum mechanics or in quantum cosmology. In several other physical contexts, unfortunately, the abstract version of the QH theory has been found "very difficult to implement", for reasons explained on p. 1216 of Ref. {{cite:f03ef3e6fb681c8907384c186b3ec9b56d15080f}}. In this context, our present letter should be read as a description of one such innovative simplification strategy.
d
3013177fb21fe4108800c73e87fe5e18
SOD. In the past few years, U-Net and feature pyramid networks (FPN) have been the most commonly used basic architectures for SOTA models (e.g., {{cite:59c7ec55429673f2de8fc1a12d530a130b366b1d}}, {{cite:1f3d2c2fd4b5ae0b38112f4a345be3595bc2de4c}}, {{cite:7613f27f7efcaaf1863aa22dae2c2cf12307c706}}, {{cite:050933f19fdc1821578b023cd73d33f8b8768225}}, {{cite:b8ad1d6add55d11a7763d3cfdf577c3053e8c24e}}, {{cite:4c069918a73e60ba096254a0200e1e6571168481}}, {{cite:7e2a8045b0acc3a29a3a2e166cf0fc83ee5c442d}}, {{cite:fa131bb6f05d5be8511d42698e0affa579a639ce}}, {{cite:6019589782f9add3a7bdab24cbfe1419b0282f2b}}, {{cite:4d501413f7fe243ec61d1706726d089938b132d8}}, {{cite:12acf5f5e1cf94c5e84082b87cc97bccd6ede90f}}, {{cite:f373e17d318e379c032b7b8c1f1f218ebd2c8044}}), which were trained on large-scale benchmark datasets (e.g., DUTS {{cite:453964ddd4a9e7c819bf786cb7ae0f254860b7cf}}) in a fully supervised manner. Specifically, methods such as BASNet {{cite:59c7ec55429673f2de8fc1a12d530a130b366b1d}}, PoolNet {{cite:7613f27f7efcaaf1863aa22dae2c2cf12307c706}}, EGNet {{cite:050933f19fdc1821578b023cd73d33f8b8768225}}, SCRN {{cite:b8ad1d6add55d11a7763d3cfdf577c3053e8c24e}} and LDF {{cite:6019589782f9add3a7bdab24cbfe1419b0282f2b}} pay much attention to object boundary detection. GateNet {{cite:f373e17d318e379c032b7b8c1f1f218ebd2c8044}} embeds a gated module for more efficient information exchange between the encoder and the decoder. With comparable detection accuracy, methods such as CPD {{cite:1f3d2c2fd4b5ae0b38112f4a345be3595bc2de4c}}, ITSD {{cite:fa131bb6f05d5be8511d42698e0affa579a639ce}} and CSNet {{cite:12acf5f5e1cf94c5e84082b87cc97bccd6ede90f}} focus on designing lightweight models with significantly improved inference speed. Due to limited space, we do not include all SOD methods in this section (please refer to a recent survey {{cite:fc7bf06deec00dc6838ebe36e24a1655ab3aa553}} for more information).
m
37c4601d59689c5ee13ccecbf14ed27b
News recommendation is important for improving users' online news reading experience {{cite:2602f8e86f116b1cdf06172bb29ae5fe6c6f2639}}. Many existing news recommendation methods model the news recommendation task as a sequential recommendation problem {{cite:0835519c8a6d1f0bd9726953183b6980bbe919b6}}, {{cite:d7d5afda4b77c1ad67c325fcbdac8d1185dd6c2e}}, {{cite:7b13912cb2d4bafb8a4df9f027107e4988eb143a}}, {{cite:3de1620e34be96d3fe18e22d1f5c92638aed7a6b}}. For example, {{cite:adc9a674d3048f0c9855a9fd8e2dfd0a7cad950a}} use a GRU network to model user interest from the sequence of clicked news, and rank candidate news based on their relevance to user interest. {{cite:ac774146ecf6a5c86e7f9bd26cab10003e0983ec}} use a combination of an LSTM network and a directional self-attention network to learn user interest representations from the clicked news sequence, and further match them with candidate news. {{cite:c76c65949ac0f3b1458d2002f6621515a837affa}} use a Transformer to model the clicked news sequence and learn a user interest representation for interest matching. A core assumption of these methods is that there are rich short-term dependencies over historical behaviors, and that future behaviors are likely to be relevant to recent past behaviors {{cite:315fb32dbd05a8987fd1289c5687afa5dc043f11}}. Although this assumption is widely used in many sequential recommendation scenarios like e-commerce recommendation {{cite:0e54723909a75456944b0440df51ef51102d1da4}} and movie recommendation {{cite:e7854320be8de6a4d22aafd42b27684792be7784}}, we find it may not be valid in the news recommendation scenario due to users' preference for the temporal diversity (i.e., novelty) of news information {{cite:e583d4cf90e3ad1409de8d3449f0764889c93569}}. For example, in the MIND {{cite:2602f8e86f116b1cdf06172bb29ae5fe6c6f2639}} news recommendation dataset, only 7.2% of adjacently clicked news are in the same topic category (the ratio is 7.9% for random clicks). In addition, only 0.04% of adjacently clicked news mention the same entities, while 0.11% of random pairs of clicked news share at least one entity. We observe similar phenomena in our production news recommendation dataset. These results show that adjacently clicked news tend to be diverse rather than similar, which contradicts the basic assumption of sequential recommendation.
i
b709de5e30a8c2c2a919af98757b6748
see, e.g., {{cite:d9e7cb67e9ad3a6fb02d6c4e1468acf600b5e919}}. Setting {{formula:0cbeb324-58d5-45e0-b48e-2d932c10d340}}, we get, for the {{formula:31ba1872-d826-473c-ba65-58a041a0f286}} corresponding to the above separated {{formula:18285a33-8a02-4cf1-87ef-cc42d20b4d23}}, that {{formula:8eb52dcb-0805-4abb-bad6-978ef8a96f39}}
r
f27363d2430449efae3fcb139b0b8ad3
The other important controversy concerns the determination of the sign of low-field currents, which remains an unresolved issue between theoretical and experimental results. In an experiment on persistent currents, Levy et al. {{cite:ffbc178a9edeaf3c9d3f0a89d1987c5647b0cbde}} observed a diamagnetic nature for the measured currents in the low-field limit, while in another experiment Chandrasekhar et al. {{cite:82ae2e6cfb36eae8dfd1454d4975c9b4d296cb4f}} obtained a paramagnetic response near the zero-field limit. Jariwala et al. {{cite:3a2c833e7a5cfae01eebb9b863b0da5b2fa29a2f}} reported diamagnetic persistent currents in their experiment, and a similar diamagnetic response in the vicinity of the zero-field limit was also found in an experiment by Deblock et al. {{cite:cb01f16f7b03147b2c2c7c5920f3463ff0be10ea}} on Ag rings. Yu and Fowler {{cite:a188441c90e6e2675fcd77039e3fffa23c666450}} have shown both diamagnetic and paramagnetic responses in mesoscopic Hubbard rings. In a theoretical work, however, Cheung et al. {{cite:6ff789b75f4a1f7ff7e9fbffbaf8b92606ae4239}} predicted that the direction of the current is random, depending on the total number of electrons in the system and the specific realization of the random potentials. Hence, predicting the sign of low-field currents is still an open challenge, and further studies of persistent currents in mesoscopic systems are needed to remove the existing controversies.
i
ea42e167a6776b76dffadde43d0bb994
HDE would then have an energy density given by {{formula:c0e15663-1f08-4001-b388-663e9b583174}}, where {{formula:87a34c55-a936-4570-9ad8-24d07c1bfe00}} is a constant, {{formula:4993f1b3-f208-4a60-9912-6b703caf3d3d}} is the reduced Planck mass and {{formula:ceb5d40d-9567-4074-82aa-e39a95994f0d}} is the infrared (IR) cutoff {{cite:f1287a1a10fcf1f99bf9b6a5901676870b5dc801}}, {{cite:682969d8273ae77843e012f7627e831225b1155b}}. The first natural choice for the IR cutoff is the Hubble radius; however, it led to an equation of state that described pressureless matter {{cite:f1287a1a10fcf1f99bf9b6a5901676870b5dc801}}. This problem was circumvented by choosing the future event horizon as the cutoff {{cite:682969d8273ae77843e012f7627e831225b1155b}}. Other choices for {{formula:7c40eb7d-3090-4028-bb3c-105508ca53f0}} include the inverse of the Ricci scalar curvature {{cite:8f305d4c8456e202397c0df9490576a1d38405b8}} and the age of the Universe {{cite:1dae4fe757180ac304255758f290c4ec93220ac7}}, among others {{cite:56612842446f8e75f54ca96d4b54564f1b1e2041}}. Inspired by the holographic principle and the AdS/CFT correspondence {{cite:4703dcac8395d47e14ab3783cf7f4b9448ac2a15}}, HDE has been embedded in minimal supergravity {{cite:098ca5b50f4f24139c93f72321e4a9186bf90127}}, while in {{cite:bb2cb18698fadce6e5de41fa6f3d8e2cd7efc7e1}} it was shown that HDE arises from a generic quantum gravity theory, assuming only the existence of a minimum length.
i
db5915870a13b304c6cca9497b09efb2
Considering each branch, the first part is represented by an FCGRU cell that takes as input one sequence of the time series at each time stamp. The FCGRU cell is a modified version of the standard GRU unit {{cite:e7cce1e40f1362dcfbd084dac244c89b00744325}}, a kind of RNN that has demonstrated its effectiveness in the field of remote sensing {{cite:06fb0a720cfd90758fa975927f53a02be9b8e3d7}}, {{cite:f9e95c8ee5d26ed2738f9e34df7a4b0566253050}}. The FCGRU cell extends the GRU unit by including two fully connected layers that process the input information, at a particular time stamp, before the standard GRU unit is applied. These layers allow the architecture to extract a useful input combination for the classification task, enriching the original data representation. A hyperbolic tangent ({{formula:3ac3951c-82b9-45c2-89ff-ced5fb1f0a71}}) nonlinearity is associated with each of these layers for the sake of consistency, since the GRU unit is mainly based on Sigmoid and {{formula:8c62d83f-84ed-4089-b744-710c87b79420}} activations.
m
98b4a4a32194773818b1ac319f859203
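The FCGRU cell described above admits a compact sketch; assuming PyTorch and hypothetical layer sizes (the exact dimensions are not given here), the two tanh fully connected layers enrich the per-timestamp input before a standard GRU update:

import torch
import torch.nn as nn

class FCGRUCell(nn.Module):
    # Two tanh FC layers enrich the per-timestamp input, then a
    # standard GRU cell updates the hidden state.
    def __init__(self, in_dim, fc_dim, hidden_dim):
        super().__init__()
        self.enrich = nn.Sequential(
            nn.Linear(in_dim, fc_dim), nn.Tanh(),
            nn.Linear(fc_dim, fc_dim), nn.Tanh(),
        )
        self.gru = nn.GRUCell(fc_dim, hidden_dim)

    def forward(self, x_t, h_prev):
        return self.gru(self.enrich(x_t), h_prev)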
Initial work on error correction focused heavily on developing quantum codes {{cite:f55e6c4c6e7faf594e4711643ac427d44d38d17f}}, {{cite:683b95d0ca55ac80c97a94cd0b358046f1a3faf4}}, {{cite:3795e97e86318732e996e7f8753bc1b9ed5170fa}}, {{cite:417391f6a2b057acda7a65c826eb0712842d3f77}}, introducing a more rigorous theoretical framework for the structure and properties of Quantum Error Correction (QEC) {{cite:2805a2f1d7551512a110691d135b9c7bb7f84549}}, {{cite:10f9295849ce2da029c5225c6c5935ca488a1a6a}}, {{cite:cca388aeae5b8c22f739b8721a5d7d93d939e686}}, {{cite:dc5a30d25da34d33da5d472512ae4274e3c0a816}}, {{cite:175301390899d39d7489cde5fe76fcd34333478c}}, and introducing concepts such as fault-tolerant quantum computation {{cite:91825bd24e3d5a8245c9fb164aa87c0aa13b56bb}}, {{cite:a637213bcb32c5cfe1267fe68add050cc6fede5b}}, {{cite:fc68705250e038cbb3e7e328815ba16568725ceb}}, which led directly to the threshold theorem for concatenated QEC {{cite:8fcbad243f7ff8679ee19d241a9f3bafbe981135}}, {{cite:3bf1297f2e98a13f6c0d5712f78d6fccfa3eb9cf}}. In more recent years, QEC protocols have been developed for various systems, such as continuous variables {{cite:fce0394802b56945890eda95f0551e59f8d1e110}}, {{cite:1bf4746aacd77e6393594019f9b7360c4d1aa69f}}, {{cite:89ad9ccf40a0d3e022885fa81fc55ddf7c200e3c}}, {{cite:936061430615013a9a274acdbf2380a8b50072d6}}, ion traps and other systems containing motional degrees of freedom {{cite:6de55c14f4d08f60dee7d86631f404cd64258974}}, {{cite:65e5bd0fb7c0b766c0660bded168e325afcc70df}}, adiabatic computation {{cite:ba4c3bb92b0719189a557544a5b482326d478273}} and globally controlled quantum computers {{cite:59d36698d8a0e29e7e12950e683f574518c0fd82}}. Additionally, work continues not only on developing more complicated (and in some ways more technologically useful) protocols such as subsystem codes {{cite:6e223b5359b8f8880cb93b38c73f041e53597dd9}} and topological codes {{cite:abaf98dd0c20de6d83b96d0e228bfc1d043f85b5}}, {{cite:8a942476ed17c009605fffa046662c2105c34f62}}, {{cite:4d83f7def4ecfaaf1a90828256226fb01998f379}}, {{cite:b7fe3d3fb7635ebf3105119e1c18a913570abf18}}, but also on advanced techniques to implement error correction in a fault-tolerant manner {{cite:7041f972577614f5fe159b3590d2c7cf71efe903}}, {{cite:25de34ad641a1e97795e0e3d19930759ea7a759e}}, {{cite:68499f8722e1a26df7d543e65fbd9730a436d7c1}}.
i
261459c9870ae8cc4c4f030265e23fe6
Phylogenetic trees and networks are important discrete structures from biology, where they are used to model evolution; see {{cite:2c49320fd87e1e9e533069a4d665f1abe93a9893}}, {{cite:93f6f4766b6bc405c18000138e43183d78f85f72}}, {{cite:48b2c3d5bee0ac3a398a0211f1ea5eca7e15719b}}. Of these two types of structures, phylogenetic trees are simpler and more classical, but they cannot be used to model evolutionary scenarios that involve reticulation events. Thus, in many recent studies, they have been replaced by (the more general) phylogenetic networks. However, the majority of the classes of phylogenetic networks are not recursive and are thus a poor model for processes that evolve over time. In order to model such processes, Bienvenu et al. recently proposed the class of ranked tree-child networks; see {{cite:36e9972e2260af9d172c1636f9f9e23ef6f9b7d8}}.
i
46f6743ab591b47d6e65393fa8162c4f
To address these weaknesses, we propose Fast Interpretable Greedy-Tree Sums (FIGS), a novel yet natural algorithm that grows a flexible number of trees simultaneously. The procedure is based on a simple modification to Classification and Regression Trees (CART) {{cite:56ab112db47fceb0e9440138cce9036db3c7a1ab}} that allows it to adapt to additive structure, if present, by starting new trees, while still maintaining the ability of CART to adapt to higher-order interaction terms. Meanwhile, the running time of FIGS remains largely similar to that of CART due to the similarity of the two algorithms. FIGS also remains interpretable by keeping the total number of splits in the model limited, allowing the model to be easily visualized and simulated by hand.
i
dfcfe54ce1aab3d503afe339620ca0a2
In this paper we assume that the Lévy measure {{formula:114f8203-789d-4e00-8d72-bc6d5793864e}} of the basis {{formula:07af2f05-fbe6-424f-bd96-9795e03860ed}} has a regularly varying right tail; see for instance {{cite:718fd7f11224e9de285a717f6096481d5d391928}}. Regularly varying distributions are, in particular, subexponential, and the class covers many interesting heavy-tailed distributions such as the Pareto, Cauchy, Loggamma, stable (of index {{formula:89a12e32-521d-4154-a664-fc76a606c190}}) and, in particular, Fréchet distributions. Moreover (and essential to this paper), the set of regularly varying distributions coincides with the maximum domain of attraction of the Fréchet distribution; see {{cite:718fd7f11224e9de285a717f6096481d5d391928}}.
i
e88aaeb70660c0c5f95a6782e24e5acc
Computational complexity. The most computationally expensive component of STED is computing optical flow. Our implementation uses FlowNet2, which requires 123 ms per computation on an Nvidia GTX 1080 GPU {{cite:72383482120b8b6b3ead9e9d78b0b812acf3cfde}}. This model may be replaced by more efficient methods, although we found the quality of the optical flow to impact overall performance. Additional components, such as the CNN architecture or the number of hidden units in the GRUs, may be modified if real-time performance is required, at some cost in forecasting accuracy. {{table:ff30c0ca-7694-48b5-814a-a69b8ffb5735}}
r
b2c58f9695a7c86cd2d21ea5e87252b8
In this paper we compare exact quantizations of a dust shell in two different times, viz., the coordinate time interior to the shell and the shell's proper time. In section II, we briefly summarize (for completeness) the IDL formalism for the dust shell and obtain the first integral of the shell's motion. If the exterior geometry is taken to be a vacuum spacetime, the first integral of the motion involves two constants, which are interpreted as the rest mass, {{formula:9e99abc3-0e5b-4da4-b812-c7ec80b1792b}}, of the shell and the total (ADM) mass, {{formula:012ffd3c-86ed-4afa-abfa-a5ad6bbabe63}}, defining the exterior. Of these, {{formula:db3c3c77-d059-4e99-ac89-dca308f720b1}} is a constant over the entire phase space, whereas {{formula:9ec235cc-0030-4cbe-8222-16f8c4dc516e}} is a dynamical variable which represents the total energy, {{formula:25ef4d02-7711-4fe5-bb27-a53207d933ec}}, of the system. Following {{cite:b9a9ec1547e840f5db24ce98b9e03d6ea030bd8c}}, we take the ADM mass to generate the evolution in the time coordinate of the interior of the shell. We take this to be a canonical choice defining the system, not just a convenient trick. This then allows for the construction of an effective Lagrangian for the system. Once it is known for one particular time variable, the effective Lagrangian may be re-expressed in terms of either of the other two time variables (Schwarzschild time in the exterior and proper time) and, from the effective Lagrangians, Hamiltonians for the evolution in all three times may be obtained. The proper-time Hamiltonian obtained in this way is structurally identical to the Hamiltonian obtained in {{cite:b80e66342d77e198c2e70c165402fe9117ab94ae}} for a dust ball in the Lemaître-Tolman-Bondi (LTB) {{cite:c48051a025889bf1fad5f70f8165b882caa3e04b}} collapse models by an application of a canonical chart analogous to that employed by Kuchař {{cite:d2201470ac1caa48a3f5bfe24504a8c683fa5b8e}}, {{cite:6f4c9a15fb8077a6dbc3fb84fdbb6cd1a7947e4d}} to describe the Schwarzschild black hole. The Hamiltonians obtained in this approach differ from those that would have been obtained had one not made the canonical choice of {{cite:b9a9ec1547e840f5db24ce98b9e03d6ea030bd8c}} at the start.
i
aa3670359e4895d827fb7681fc11b8dd
In this section, we discuss the deep learning (DL) models that have been used in research on text style transfer. The vast majority of the models are built upon the encoder-decoder architecture {{cite:c4ebe7a14bc41694586ed7ccda2e41efed7532b7}}, {{cite:88d3684435d66eec80cdc115174b4f0f765460ae}}, {{cite:4e673c2490ee3cded3b84a51652f4f6d8d344e99}}, {{cite:892cb85441a264b5401daaea0fa330bdf469a6c6}}, {{cite:ef05a4cdb7a8d0910ec5470715798a6fc49d968a}}, {{cite:d62ab89eaec0a8cf2acdd6a69a648d4c45acd617}}, {{cite:1ee6dff1b81a43f6ee25f31654b30b19a5c145a2}}, {{cite:661cda5e790da43f7811627ed206340430667e21}}, {{cite:03b5745871c94bd797f4730b19602bae2629d584}}, {{cite:d9f1e6531b73758c4516542b89dd755b8bfaf03c}}, {{cite:33954672aa687e7b25bbd4f9044e80cd3837b9ac}}, {{cite:0c3ab1e15f4980a7ccedfc9ede38bd005e683917}}, {{cite:16a402d1c6d78dd8aa153d3d8d602653d8fb1dc0}}, {{cite:efbed9e27d7500e07b8c32e536a44832266d0cc1}}, {{cite:86c7978f320ae2be0a81bed8e018e26fb18350a4}}, {{cite:daf40bc5e26c2c2ebe9099e2c51f18943f506d10}}, {{cite:f36a5cd66ed1dfdd7d9843f5704a0e28f397cc5c}}, {{cite:3e7629172abcc0cc668590e8bfec5abb7f5d5ffe}}, {{cite:16c28ede6627ef6f4460239b13bc84599c93898f}}, {{cite:1fe238c6dcc99338caf7428088f6c42303016a0a}}, while another line of research has put forward adversarial learning with GANs {{cite:8475f3a3a54195b8689e6577eb29df9203133a30}}, {{cite:1717408e2b5ac04a93283e71f32b201029d07111}}, {{cite:7d2fb2b8323d64120584095edc0480cadb959239}}, {{cite:f693a258237c4e88ca69c5072b5f5e4388cc0236}}, {{cite:0fed1aadea061af61257408b5186ef2533f740b1}}, {{cite:210d924671e4f1124e0a7e0bf67f80f780dab5ef}}, {{cite:ea6671eeaea79506f6a790c1e36ce8252307bd11}}, {{cite:5fb551d28bbdc074cc71e174784fcaf61c2f873d}}.
m
1601f25d678bad4a2bc5026035064d52
The results of {{cite:2f34206c99eadfd039106a36b595141ffa4a2c41}} and {{cite:5db1f8cd89984421b74b52595293381758261af6}} have been generalized to homogeneous nonlinear predictors by {{cite:11878774f15228d467bb57757f4c67f4138c87d8}} and {{cite:50aa43ea6ffb9852319be61c292a2138cc32c285}}. They demonstrate that gradient descent converges to a fixed margin classifier that minimizes the {{formula:1a3ffe03-7227-4207-b7e1-3d4fdb0b311f}} norm over all weights parameterizing the network. It remains unclear, however, what effect this penalty on the weights has on the inductive bias of the network they parameterize.
i
88ff0406f729cc1f832066185241d63c
Perturbation via Unknown Data-Distribution: Prior work on audio conversion via exemplar autoencoders {{cite:1fe607326933c29076a8b159d2a288aab0544220}} showed that one can input an unknown voice sample from a different person (other than the training sample) and still obtain a consistent output. We study whether this property holds for video-specific autoencoders. This property could potentially allow us to establish correspondences across the frames of two videos and perform video retargeting {{cite:1e4673c4b735743f406a61f6be90fe590ccbcd5f}}. We study this behaviour via three controlled experiments: (1) multi-view videos: training a video-specific autoencoder on one stationary camera from a multi-view sequence {{cite:b61b17d5831086cb0af7b8e90e9a0b4bd6468f9c}} and testing it on the other cameras. Multi-view sequences from stationary cameras allow us to study the role of slight perturbation. Figure REF shows the analysis of perturbing the data distribution via multiple views. We observe that the points move farther from the original points (the sequence used for training) as we move away from them. This means we cannot naively use a video-specific autoencoder for inputs that vary largely from the original points; (2) semantically similar videos: training a video-specific autoencoder on one baseball game and testing it on other baseball games {{cite:8f55e8c8d3e2181d1929f53f0c92955e80e4473a}}. The game videos allow us to study the role of semantic perturbation. Figure REF shows the reprojection of various semantically similar events. We observe that the points move farther as the input becomes less similar. We also observe that we can iteratively bring the points closer to the original points by iterative reprojection. After a few iterations, two semantically similar events align with each other, as shown in Figure REF . We also obtain temporally coherent outputs showing alignment between the two videos. This property allows us to establish correspondences between two semantically similar videos and perform video retargeting; and (3) completely different videos from the DAVIS dataset {{cite:6b63c7e1bb48423ec33b37801f8a5d34e126c651}} (e.g., a model trained on a bear sequence and tested on a surfing event). Completely different videos allow us to see whether the autoencoder learns a reasonable pattern between the two videos (e.g., movement of objects in a similar direction) or produces indecipherable random projections. We show two examples in Figure REF . In the first example, we train a video-specific autoencoder on a cow sequence and input the frames of a surfing event to the trained model. The first iteration yields noisy outputs that are far away from the original points on the video-specific manifold. We reproject the input iteratively multiple times, thereby bringing it closer to the original points, and show the results of the 51st iteration. We also show a few examples of the mapping from the frames of the surfing event to the corresponding reconstructed frames. The outputs are noisy and lack temporal coherence. We observe similar behavior in the other example.
d
382f918932c72adfbe8960f80ad5e9c6
In this paper, several new gate protocols have been proposed for realizing high-fidelity, spatially addressable quantum operations that are insensitive to qubit position fluctuations and motion, which is one of the major challenges affecting scalable quantum processors based on atomic qubits. This includes a new robust gate for addressing individual qubits that decouples the control and addressing lasers, requires modest light shifts, and perfectly corrects errors on non-target qubits. Compared to travelling waves, standing-wave drives can typically provide an order of magnitude lower infidelities, approaching {{formula:5b6105a8-e844-425f-96b0-9fdf09985246}} for conditions achievable in current setups. The basic idea can also be applied to more complex multiqubit gates, including Rydberg-mediated gates {{cite:d1e7f553f77a4370dc6e9e0dcdd8328dae594d49}}, {{cite:caffb810da391e6ac0f61335d4571c554429055a}}, which, combined with phase-controlled single-qubit gates, would form a universal gate set. It can also be applied to different geometries, including standing waves produced by optical cavities, holographic techniques or integrated photonic devices, which may bring additional advantages including higher stability and faster quantum operations.
d
1cf11db5d96bf9a1de811251761e4268
Traditional works analyzed EEG data without taking into consideration the topological relationships of the EEG electrodes {{cite:a745d0e17e502abf156049230b0ba73b220f0639}}, {{cite:71dcf26c8c9b12b0071a2938855f38c6b9240487}}, {{cite:a3dde3dfe880202ddd3a5c9a8e62bd4b22707ed2}}. Recent neuroscience research, however, has suggested brain dynamic functional connectivity {{cite:4d95e6675ddf7f19db1ae03fba7097677cdbbbc0}}, {{cite:a8d7143dfea88fcb839516e21aef388270f066fa}}, {{cite:80648db3528b4b45863b4d68fd3fd3cfd3616b4e}}. Thus, the interactions of EEG channels might not be well reflected by Euclidean distance. The Graph Convolutional Neural Network (GCN) has generated widespread research interest, as it has proven superior for processing and analyzing graph-structured data. Spectral GCNs were studied first, since they define a localized operator for convolutions on graphs and can process graph signals {{cite:97fd9dcbae734f2c6cbbc71ea9a8ad29ff453f9c}}. Reference {{cite:198a9a228e5f767831572bb241ad8b5e7c2e7126}} proposed an effective and efficient GCN approach by constructing fast localized graph filters. A few works have applied this base model to EEG classification tasks, primarily for emotion recognition {{cite:af1eb5e2c3becf4465747b5cc2a71769b70c6865}}, {{cite:cc4440b4441e49d3937884ce24f4b138072a4ac8}}, {{cite:d126c3d901f5894341039a626397d360cd72f6b9}}. However, the GCN approach has yet to be employed in the area of EEG MI. Therefore, this paper presents a novel GCN structure, an attention-based graph ResNet, to achieve precise detection of human motor intentions. The main contributions are summarized as follows: (1) To date, this is the first work to detect human motor intents from raw EEG signals via a GCN approach. (2) The intrinsic topological relationship of the EEG electrodes is built as a graph, which proves superior for classifying and analyzing EEG signals. (3) This paper presents a novel approach intended to address universal EEG-based problems in neuroscience and pave the way toward practical clinical applications.
i
d54eacb26deb61eb91ceec4133c7eed1
Qualitative Results. Fig. REF shows qualitative comparisons on SYNTHIA {{formula:eedd4a51-daef-424a-9d6e-9cf91ed7b2ec}} Cityscapes. It can be observed that the qualitative segmentation is well aligned with the quantitative results. Specifically, the baseline DETR {{cite:e7d0951e733aea835b65bea871404df5d2569984}} produces the worst segmentation, and the state of the art improves on it but tends to miss small things and produce false predictions. UniDAPS improves further, yielding better segmentation with more true positives and fewer false predictions.
d
65ae027585355e458f431e7d78bc0d1b
Qualitative comparison. In fig:teaserexample and fig:qualitative-comparison, we further show visual comparisons of the previous work and our method at different SR scales. For areas containing scale-variant stripes, the previous method {{cite:1924daeed416ac3f86ccb7b025d1f3934a43ca59}} produces checkerboard artifacts and cannot faithfully recover the HR contents, while IPE-LIIF can distinguish texture details by encoding the spatial information of the query pixel. In these cases, IPE is able to better identify the position and size of complex textures so that they can be clearly distinguished. {{figure:405a560d-4d88-441f-b360-f96fe3630c67}}
r
5665bb694686fa212e5739ba20c27b1c
Qualitative comparisons are shown in Fig. REF . ABCNet {{cite:91c92fc56bc16f0774ce9df6d61d15fbc910725a}} is easily confused by adjacent instances since it predicts two separate curves rather than the whole shape together. TextRay {{cite:dbdcd44e918c4deab4961037396c77969bd36c4c}} fails in highly curved or large-aspect-ratio cases, and FCENet {{cite:2d8d7e64091573d0bc7c3de888a736255c199c11}} tends to miss the corners of long text, which is not conducive to subsequent recognition. By comparison, our proposed TPSNet obtains the most compact and complete detections.
m
ed1b341e65f2ca4d52304246036a24c1
If we saturate the decay width of the {{formula:c91bfdba-1e72-4738-906a-24ab4e3c2bb3}} with the two-body strong decays to the {{formula:561763a1-408e-4c32-98b7-cddd7a4fd123}}, {{formula:7c29476b-fe1d-4ad1-85e0-beef8166dcdc}} and {{formula:930adb22-5e66-4e89-a860-c557e5018da2}}, we obtain the total width {{formula:0bcc8b02-191c-4b53-a3e1-0c42952b42cf}}, which is compatible with the experimental value {{formula:282dbf19-4b9e-4466-a145-e97b3c04f763}} from the LHCb collaboration {{cite:a218dd14046e16546a34f1ce55d047c4588dd0fe}}. The present calculations also support assigning the {{formula:a0ce8464-a221-40de-8dc2-947fdb61243f}} as the diquark-diquark-antiquark type hidden-charm pentaquark state with spin-parity {{formula:0cd64419-3ed7-4ca9-8445-552bcaeea723}}. The {{formula:9d65078b-7315-48f3-94c1-1bf261f983ba}} may have a diquark-diquark-antiquark type pentaquark core with the typical size of the {{formula:88f209dc-6ef2-4e2a-8e86-e326e249b867}}-type baryon states; the strong couplings to the meson-baryon pairs {{formula:2776111f-4384-4703-abb3-a9ad60ecf8cb}} and {{formula:5cb162e7-44a2-4694-8f8e-090461aeed78}} lead to some pentaquark molecule components, according to the large hadronic coupling constants {{formula:fd089faf-45d8-466c-a880-7d0e2013f545}}, and the {{formula:43be16ad-47c6-460c-9c4c-e8f08e07bf8e}} may spend a rather large time as the {{formula:02ec65ce-d499-44a5-bb2f-c2af9aff5bfa}} and {{formula:c01bf0c7-3423-4ebc-b417-22d4579f26ee}} molecular states, just as in the case of the {{formula:07416462-b7ad-4979-828c-d73af36423c9}}, {{formula:e4e89ca0-198b-4791-9b93-788c8f7ef178}} and {{formula:0dd3b8fd-3d00-4d72-917d-f9d36feee02e}}. In Ref. {{cite:6eabf9b7ef15512ef890afe896e5c798b6526df3}}, we tentatively assign the {{formula:e268a4f9-e4b5-46d4-9fe6-f4ff254b4c4d}} as the {{formula:8043c862-9644-451f-b472-50887ec1dafb}} pentaquark molecular state with spin-parity {{formula:5e11d1f1-1ce4-42f0-942b-56eb3140e39c}}, explore its two-body strong decays with the QCD sum rules, and obtain the partial decay widths {{formula:4900ffb9-55e8-4f77-800e-639c7cd18fac}} and {{formula:e1e45b07-0b3d-4b1d-8ef4-236b5ab215ae}}. The {{formula:cc12e92e-d78d-4456-b610-4f168eca2c0b}} has quite different branching fractions in the scenarios of the pentaquark state and the pentaquark molecular state. We can search for the {{formula:c6e54c7e-e035-4846-b8c7-fed1276e5bb5}} in the {{formula:f4bf42c2-1c97-4eb5-8fc6-978116da21cd}}, {{formula:21221142-b459-4709-8965-db1d6247b9f5}} and {{formula:1e67fa9f-d799-4700-adf3-83b25d7050aa}} invariant mass spectra and measure the branching fractions {{formula:8ab53159-6065-4ad0-b05f-3c5c0e497f3a}} precisely, which may shed light on the nature of the {{formula:3a1ac1e6-5c87-4602-a50b-0c7df5b9b8df}} unambiguously, test the predictions of the QCD sum rules, and examine the hadronic dressing mechanism. If the hadronic dressing mechanism works, the {{formula:9e31d3ea-51bd-41c2-952d-a6755366d803}} has both diquark-diquark-antiquark type and meson-baryon type Fock components; we should then introduce mixing effects in the interpolating current and fix the mixing angle by precise experimental data in the future.
r
ed4ac5a45d176c311d48561bfabbad83
The fast-rising high-tech industry has led researchers to seek better alternatives to linear controllers {{cite:f9c445880b8631a7612af081e837b1f8fa012e2a}}. One appropriate alternative is the reset element, which has gained a lot of attention due to its simple configuration {{cite:ae68f18dfbc617a1ad5216ba553ec299a6dcd7c2}}, {{cite:9bbc2537a774f27a9c4575db2b0d9f306190d51d}}, {{cite:6ee95e64874219dad20dc5f7da733eb9d2e3c11a}}, {{cite:073d4f667494e3c3579c0901f6358ac89db38061}}, {{cite:f533b52dd68c71004ad0c61f307031ab2dccfe45}}, {{cite:749514637aba73fcb25b1d7235995f5756cf733b}}. In 1958, the first reset element was introduced by Clegg {{cite:6ee95e64874219dad20dc5f7da733eb9d2e3c11a}}. The Clegg Integrator (CI) is an integrator that resets its state to zero when its input crosses zero. Later, the First Order Reset Element (FORE) {{cite:ae68f18dfbc617a1ad5216ba553ec299a6dcd7c2}}, {{cite:e15ffbc00205a08e931e8909d79e4340cdac44b8}} and the Second Order Reset Element (SORE) {{cite:e15ffbc00205a08e931e8909d79e4340cdac44b8}}, {{cite:f533b52dd68c71004ad0c61f307031ab2dccfe45}} were developed to provide more design freedom and applicability. Other reset conditions, such as reset bands {{cite:2a1e6d3efdacbdb2893706874b169ae11f95388d}}, {{cite:ecb453325f3e74b673ead9df3feeabab6b9a4452}} and fixed reset instants {{cite:8df8b94406a699be47017c470cf93814bf6702a5}}, have also been studied. In order to soften the non-linearities of reset elements, several techniques such as partial reset and PI+CI approaches have been proposed {{cite:4ae903203e7ba9335843c151d9a687425827a0f2}}.
i
b81d52538d6423febcb340e94d63f81b
To check the validity of the theory explained in the next section, we perform molecular dynamics simulations. The detailed setup is as follows: we solve the equation of motion for each particle using the SLLOD equations {{cite:230926f2d5e4d562b8e41016cccaa787d0e2bb03}}, {{cite:a42a02d65f2909c9fdcbf002b1bd848b4a6ddf5e}}: {{formula:d517ea0f-5ee5-4bef-8802-511534d42fe5}}
m
fbb68968b1796f0bf8ef9c86b529dcdb
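For reference, a commonly quoted form of the SLLOD equations under planar shear with strain rate $\dot{\gamma}$ is (our transcription; the exact convention behind the placeholder formula above may differ):

$$\dot{\mathbf{r}}_i = \frac{\mathbf{p}_i}{m_i} + \dot{\gamma}\, y_i\, \hat{\mathbf{x}}, \qquad \dot{\mathbf{p}}_i = \mathbf{F}_i - \dot{\gamma}\, p_{y,i}\, \hat{\mathbf{x}},$$

where $\mathbf{p}_i$ is the peculiar momentum of particle $i$ and $\mathbf{F}_i$ the total inter-particle force acting on it.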
To quantify structure, we characterize the inter-particle forces and particle configurations using the radial distribution function, {{formula:6793655d-c7b1-47a6-841c-c4e9f499e50b}}. Since the material is jammed, the motion of each particle is arrested by its neighbors {{cite:41129ec3cf475faf425a45a51905369285752385}}, {{cite:e4985cf46b054a656101beba125e29d3d7e525b2}}, {{cite:c55f84fcbf75b77f4cce55f2ef38df24ded2afcc}}, {{cite:fe08ae45f71b00426043b11bcec52687c5b01738}}, {{cite:ae4e008dfc987e19ac80f00de0013746fa5ef7f5}}, {{cite:2c7f81a87405e7bf3f299aac6683ed3ce76dee76}}. This caging, and escape therefrom, provides another lens on the non-affine motions mentioned above; when enough particles pass each other via small changes in the structure of their surrounding cages, the material yields {{cite:92cf0d4117dded298a8a6af84b2962c2b8eb6745}}. For quantitative analysis, we compute {{formula:5802fe5f-1e18-43ef-b672-d1436cdda5b7}}, the sum of the magnitudes of the inter-particle forces acting on the average particle. Specifically: {{formula:37c3d40d-5aee-464c-baa3-a194e34f2395}}; here {{formula:d2a766c8-1c9a-40fa-a3c2-0b0eb6329315}} is the number density of particles, {{formula:ef6b7ec3-633f-41ad-bc82-5d9f811b3713}} is an upper cutoff distance below which nearest-neighbor particles are found, {{formula:e398a7ec-c01e-423b-be5e-13c0c318d1c9}} is the pair potential function between any two particles, {{formula:4ebff140-12f7-45a5-8871-bdf7bc826057}} is the force acting between any two particles, and {{formula:fd354541-756a-4614-8d3b-d143833b6519}} is the sample radial distribution function as a function of separation {{formula:fd3fa3ba-be90-4ff3-af1d-6bf860c61aea}} (Fig. REF c; Methods). To determine {{formula:ed8ed18c-acf9-4b29-89a8-60ffa721e930}}, we use the coordination number as a function of radial distance, {{formula:7a58dbb4-c013-4546-8cf1-c8da315b4bc7}} (Fig. REF c). {{formula:c1a62d6c-7d7a-4ebb-a050-5f6d3d0fee5c}} is derived from {{formula:18a12374-cd65-45ad-b30a-1473c6d53e8b}} and has previously been studied {{cite:41129ec3cf475faf425a45a51905369285752385}} and recently used {{cite:086ae9b2b25fa73ae1cbbfa13a76e34e317e0ac8}} to characterize particle interactions and their effect on bulk materials. In our systems, neighbor shells are well defined by broad peaks in {{formula:d9efcd2a-4572-4a8b-b4f0-73f73c9f91a4}} separated by troughs (Fig. REF c, inset). The extent of the nearest-neighbor shell is defined as the radius at which {{formula:be3940d9-318d-4428-a2af-46a593782428}} begins to increase rapidly for a second time (Fig. REF c, main panel).
r
833314e0eaef187f1c19726bb5c5788e
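The quantity described above, the summed magnitude of inter-particle forces on an average particle, can be sketched numerically; assuming a three-dimensional system (so the shell volume element is 4*pi*r^2) and a tabulated g(r), one possible rendering is:

import numpy as np

def mean_neighbor_force(r, g, pair_force, rho, r_cut):
    # Integrate |f(r)| g(r) over the nearest-neighbor shell r <= r_cut,
    # weighting by the 3D shell element 4*pi*r^2 and number density rho.
    mask = (r > 0) & (r <= r_cut)
    integrand = np.abs(pair_force(r[mask])) * g[mask] * 4 * np.pi * r[mask] ** 2
    return rho * np.trapz(integrand, r[mask])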
ML can be used for tasks such as classification, object recognition, and segmentation. Classification is the task of assigning input data to one or more categories. In fashion, models have been proposed to classify photographic images into fashion style categories (Fig. REF ) {{cite:40b81fc9c19cd3e2391da0e5c24ea1a7ca2fec49}}, {{cite:83eb379a3d8e08c8a6db2cb1adb01dcf3c50aa1f}}. Object recognition and segmentation are tasks that detect objects in the input data; they can be used to detect parts of clothes or the poses of subjects (Figs. REF and REF ) {{cite:cafe1012c8a86c133754e83984487ea29342d8f5}}, {{cite:9032fd611bafda036b67c4850e2e97b31b387c8a}}, {{cite:89ec7cc23f95782bab4efcc41077e708afc383c9}}, {{cite:ffa42f9c2487b139597cbbcd2120dffa28a107c2}}, {{cite:f3c25d0d711effe3211134c476b0dccf50646d83}}, {{cite:e22d0f8543a859c3e8b21a32f094a4433832ac0f}}, {{cite:90f099c16e6b3f80fbf963ec985b481d64c5cb17}}, {{cite:d25968411ae50ae54182f72cae6ff6975cc5de19}}. By combining these tasks, ML can select appropriate images, classify them into categories, and measure features on behalf of a human, improving the efficiency of analyzing digital archives. {{table:912b73f0-8f19-4015-8e04-99ee9f702677}}{{figure:58d761c2-b655-4beb-a193-79d779bd9ab4}}{{figure:9521eaba-b92d-4ed2-9109-8b970f60d695}}{{figure:ae05c01a-330e-40c0-a48c-d709a88d1145}}{{figure:08ff36af-6902-471d-afb8-a4b474be1251}}
m
8a380b542d5ea10dd813049fa2085c2e
The Elbow method is a heuristic technique for estimating the number of clusters {{cite:9d08fdb6d64d698a2a4f70321c5fd91c43d313a7}}, {{cite:648ec396c998a7337c9d4ed5c316f780e4d8604b}}. The overall goal of the method is to maximize the inter-class variability and minimize the intra-class variability. In this study, the data samples are denoted as {{formula:48ea2183-e607-45ca-af45-620cb7022174}}. The number of clusters is {{formula:30c8fd88-a9c2-4eb0-83b2-56992569a4dd}} and their centroids are given by {{formula:7bb0d475-22c1-4d1b-b0e1-6e0f5fa7872d}}. The distortion {{formula:5884e981-3616-4a08-8213-56bec0dbfa88}} is used to measure the effectiveness of the method: {{formula:3cd99ea5-3bc8-4a7d-92c1-d8f25af5c397}}
m
01523031fab635c733e5ad6be874abf8
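A minimal sketch of the Elbow procedure just described, using scikit-learn's k-means inertia as the distortion (the clustering backend is our assumption):

import numpy as np
from sklearn.cluster import KMeans

def elbow_distortions(X, k_max=10):
    # Within-cluster sum of squared distances for k = 1..k_max; the
    # "elbow" where the curve flattens suggests the cluster count.
    return [KMeans(n_clusters=k, n_init=10).fit(X).inertia_
            for k in range(1, k_max + 1)]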
Following the standard process of membership inference attacks against ML models {{cite:62d010df48c764d3c2bc633749168e2193d6a21b}}, our attack can be divided into three stages, i.e., shadow model training, attack model training, and membership inference. figure:attackpipeline provides a schematic overview of the attack process.
m
1825042244f62f8560fa8a57b2391d79
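The three stages named in the preceding passage can be sketched as follows; the scikit-learn-style models, the sorted-posterior attack features, and the random-forest attack classifier are illustrative assumptions, not the paper's exact setup.

from sklearn.ensemble import RandomForestClassifier

def train_attack_model(shadow_models, member_sets, nonmember_sets):
    # Stage 1 is assumed done: each shadow model was trained on its member set.
    # Stage 2: learn to separate member vs. non-member posteriors.
    X_att, y_att = [], []
    for sm, mem, non in zip(shadow_models, member_sets, nonmember_sets):
        for data, label in ((mem, 1), (non, 0)):
            for p in sm.predict_proba(data):
                X_att.append(sorted(p, reverse=True)); y_att.append(label)
    return RandomForestClassifier().fit(X_att, y_att)

def infer_membership(attack_model, target_model, X_query):
    # Stage 3: apply the attack model to the target model's outputs.
    feats = [sorted(p, reverse=True) for p in target_model.predict_proba(X_query)]
    return attack_model.predict(feats)   # 1 = predicted member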
The second major outcome of the present analysis relates to the potential use of the helicity eruptivity index, {{formula:c14bc823-0a4c-47b0-b264-8fcbc876e372}}, in eruption prediction. The numerical experiments of {{cite:b91ba174e47923012037ae2c40718b734c345a12}} showed clearly that the onset of eruptive behavior was associated with a threshold in {{formula:1c673a78-587b-4a5c-a0b2-7c136e786eff}}. Unlike in {{cite:b91ba174e47923012037ae2c40718b734c345a12}}, where the point of no return was precisely determined, the present parametric simulations do not allow us to completely link the moment at which the system becomes unstable with the helicities. Although the behaviour during the post-driving phase heavily involves breakout reconnection above the flux rope structure {{cite:c0adb6ea9ed4e33fb43e06b1d73e2f380b85c369}}, {{cite:6783ee81734914183c13be9d3092580bdf7d88ef}}, the present simulations do not permit a precise determination of which instability triggers this eruptive behaviour, i.e. whether it is a resistive instability, as argued by the "breakout" scenario {{cite:f517cc46a1c3b813be8ef42367a6287e100eda5d}}, or an ideal MHD instability such as the Torus instability {{cite:641038874bf2a59d770ae7af46e186c919cc2a6d}}, {{cite:c65dbd1b3a1819936de2b5c4218457f81bf37f80}}, that acts to kick off or later supplement the eruptive evolution. Precisely determining this would require further parametric MHD simulations, perhaps alongside the use of an ideal code {{cite:93545e210cecd43de7d535acdb4a19cf7d8614cb}}, which is beyond the scope of this investigation. What can strictly be said is that a point of no return is crossed during the extra driving time of the jet-producing simulation, between {{formula:431db15c-eda6-4e35-9df8-484e920430bb}} and {{formula:9dca911f-bd5f-4213-a690-391252915f47}}, which eventually leads to the eruptive behavior.
d
3265d5cf8d7ee2d7bc6a276348f4ceaf
A fundamental property of diffusion is the concept of particle lifetime, which is a particular application of the more general concept of the first passage time {{cite:e4d3b1848348960ae78b7f20c59560eaa3ad6362}}, {{cite:297a158bed223d49a60e30a8cfe9b4bfbaa82199}}, {{cite:bb6966808863e5216a27ae45199e5f4638c11c72}}. Estimates of particle lifetime provide insight into the timescale required for a diffusing particle to reach a certain target, such as an absorbing boundary. Many results are known about the particle lifetime for diffusion in simple geometries, such as lines and discs {{cite:e4d3b1848348960ae78b7f20c59560eaa3ad6362}}, {{cite:297a158bed223d49a60e30a8cfe9b4bfbaa82199}}. Generalising these results to deal with other geometrical features, such as wedges {{cite:999a123a2f86e4f9bfa31ddb049951b31c4bba36}}, {{cite:7bbe6a5de0d004d9ff7c4084a1b9464addd1fff7}}, symmetric domains {{cite:ee0d5484dad87e3627b8e38d919927070a36c09b}}, {{cite:1d80e62b398427e43e15656ecf7479807af88ca9}}, {{cite:85969610a0209d81cab1c71f0f08732cbf2107da}}, growing domains {{cite:8cef5f3063cc3a98b6cfcadfb8c01168cda3212a}}, {{cite:ec94ef2bfaf8ad96ffc63aaa1246e8cfae2e32de}}, slender domains {{cite:2d299839e40ff723e6f0b473ee5aae85e52461ca}}, {{cite:f354afece8bf63ba4ba6e6b668bf28d910545772}}, {{cite:6b21a75cbbdea270a8cb26cc9bb612a65ee4ddbb}}, small targets {{cite:c27721ae2cc2cb9dd707f1bd3153b2f9a7a5ab73}}, {{cite:0d8b7d15a1007db31d7616609767b2d4a27105cb}} or arbitrary initial conditions {{cite:1ae0e3225d59a4215cbe3ee89edb4fdb908eee7f}} is an active area of research.
i
31611af188e478112bf6cf45a4a76da6
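As a small worked example of the particle-lifetime concept discussed above: for Brownian motion on the interval (-L, L) with absorbing ends, the mean first passage time from x0 is (L^2 - x0^2) / (2D), which a Monte Carlo sketch can verify (parameters are illustrative):

import numpy as np

def mfpt_interval(x0, L=1.0, D=1.0, dt=1e-4, n_walkers=2000, seed=0):
    # Simulate Brownian walkers until absorption at |x| = L and
    # average the exit times; compare with (L**2 - x0**2) / (2 * D).
    rng = np.random.default_rng(seed)
    x = np.full(n_walkers, float(x0))
    t = np.zeros(n_walkers)
    alive = np.ones(n_walkers, dtype=bool)
    while alive.any():
        x[alive] += np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
        t[alive] += dt
        alive &= np.abs(x) < L
    return t.mean()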
2017: YOLO9000 {{cite:779426169c30a3e47d3b1eb7bd967b0b132f0c52}}, improving accuracy
m
7172845032af7ec309008afe01154139
To do so, we first transform each input image into a color LDI {{cite:4fc40d55bc7746afd5bb3efd148ea4e83a9ac546}} with inpainted color and depth in occluded regions. We then extract deep feature maps from each color layer of these LDIs to obtain a pair of feature LDIs ({{formula:b52c871f-a9ed-4328-adda-5cc3185d5a2a}}). To model scene dynamics, the scene flow of each pixel in the LDIs is estimated based on the predicted depth and the optical flows between the two inputs. Finally, to render a novel view at an intermediate time {{formula:fbb8601c-3406-4027-be9e-c172420b1dc3}}, we lift the feature LDIs into a pair of point clouds {{formula:53727fb6-c97b-4c2a-93bf-63cb726ee45b}} and propose a scene-flow-based bidirectional splatting and rendering module that combines the features from the two directions and synthesizes the final image. We now describe our method in more detail.
m
754c9887b17efcec946630e8f0429978
The measurement of all ground-state charm hadron species over a broad {{formula:b5aedad1-e3ef-49f4-b63f-fd87dca78091}} range allows the total charm production cross section at mid-rapidity to be derived with minimal dependence on models, as shown in Fig. REF (left) as a function of collision energy. The total {{formula:11f3f4bc-8fe2-4301-b4ca-56b8fa633c01}} cross section was computed for the first time in p–Pb collisions at {{formula:11ec2c8d-bbc9-4a6a-a8a1-9528bf6a808f}} and is shown as the open blue circle, along with previous ALICE measurements {{cite:b20a9b0664b17dda20a5cf8d7b83c04066c64e15}} in pp collisions at {{formula:35c50086-6083-48e1-aa55-90a486dc44e5}} and {{formula:f843f69b-d9b8-42c3-bd92-fd1e1af5f1fb}} shown as solid points. The results are compared with pQCD calculations in the FONLL and NNLO schemes, and the ALICE measurements consistently lie at the upper edge of the theory uncertainty bands. The cross section can also be split into the individual contributions from the hadron species to give the relative hadronisation fractions, {{formula:bd37b5e5-7f75-43e2-bebd-62d4653061c1}}, including the first measurement of the {{formula:17320428-5b46-47a9-a8b6-47f14942f917}}. The measurements in pp collisions {{cite:b20a9b0664b17dda20a5cf8d7b83c04066c64e15}} and preliminary measurements in p–Pb collisions are shown in the right panel of Fig. REF , in comparison with measurements in {{formula:6177ca0c-dd20-446d-a8cb-08490d81e4b8}} and {{formula:313853d6-1872-4522-b546-fd57a4d29471}} collisions. The two hadronic collision systems are consistent with one another, but a significant enhancement of {{formula:92ac4ece-d94b-4fed-9f5e-b1efe10c5e0a}} and depletion of {{formula:13bff09a-979f-46b5-bbde-cb2e1e2b2572}} production are seen with respect to leptonic collisions. {{figure:5114c5b3-d9e4-4b2f-aeb9-8d3bd72f4219}}
r
52f16cd380dc7577f4ed3e9aa5c323d5
Relation to temperature scaling. As introduced in Section , logit normalization can be viewed as an input-dependent temperature on the logits. Related to our work, ODIN {{cite:0d095b6f622035d9e1395acea50cec5bb53aabbc}} proposes a variant of the softmax score that employs temperature scaling with a constant {{formula:4685ffa4-de60-4805-9c46-747cd981c77a}} in the testing phase: {{formula:3a0c00cd-62f5-4285-8ffe-0a720734e58d}}
d
192adf55cfb185672cbd0d263c409ee6
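An illustrative rendering of the temperature-scaled softmax score just described (our numpy sketch; T = 1000 is a commonly quoted ODIN setting, used here only as a default):

import numpy as np

def softmax_score(logits, T=1000.0):
    # Maximum softmax probability after dividing the logits by a
    # constant temperature T in the testing phase.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)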
For scalar problems, standard flux limiter functions {{formula:6248a0f2-82c5-4e41-9a90-ee3cb215df51}}, such as minmod, superbee, and van Leer {{cite:6c4b49c34dadfc128797bd343e4f7b0f5c4ca3dc}}, {{cite:7184bac279d84e02b33378ad485e964f2bb0720e}}, may be used: {{formula:e48ddc8d-07c8-4cc5-b8c1-029251cd9804}}
m
85b8afdb29355664288f61a5b6bc0a8b
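The three limiters named above have standard closed forms; as an illustration (independent of the placeholder formula), with r the ratio of consecutive solution gradients:

import numpy as np

def minmod(r):
    return np.maximum(0.0, np.minimum(1.0, r))

def superbee(r):
    return np.maximum.reduce([np.zeros_like(r),
                              np.minimum(2.0 * r, np.ones_like(r)),
                              np.minimum(r, 2.0)])

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))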
A disadvantage of the data augmentation methods CutMix {{cite:c76ed27625f0e46acc2a80360c88fda67c855aa7}}, CutOut {{cite:319a6266adb88938db6a866f03c5b80dd5238385}}, and ClassMix {{cite:78a31bbccf7e6baffa51cec755d7f217c172d5b0}} is that the images of Mars are relatively simple, so the mixed data are not far from the other data in the training set. As a consequence, the purpose of expanding the data distribution cannot be achieved. Meanwhile, the gap between the training and testing sets cannot be overcome by simply mixing images; it requires the network to learn better feature representations. In contrast, our method obtains a better feature space by making full use of the pixels of the training set. Moreover, contrastive learning makes the features of different categories more discriminative and better generalized to the testing set.
r
48367d9371e4fee5e32c6f9dd7a2ea09
Recent work has shown that scaling language models with considerably more data and parameters, such as GPT3-175B {{cite:96d9bc9d9f9d55f4ff00f083c6ceff5d261d5ae5}}, can drive significant advances in commonsense reasoning tasks. Nevertheless, such models make predictions by only "looking up" information stored in their parameters, making it difficult to determine what knowledge is stored, or has already been forgotten, by the neural network {{cite:8f3f37678e992fd04d869222ac0927f32f88135d}}. Besides, storage space is limited by the size of the neural network: to memorize more world knowledge, one must train ever-larger networks, which can be prohibitively expensive and slow. {{table:2b55e21b-d16a-402e-ad97-0562b1c2b524}}
i
a71b079bfb4561f8f04f27c690cec4f1
QCD at strong coupling has been studied via various models constructed from gauge/gravity duality (from both bottom-up and top-down approaches), and at weak coupling via perturbation theory. A popular top-down holographic type IIA dual, though catering only to the IR, is the Sakai-Sugimoto model {{cite:a33410809973016894052a44bf883c12e019eb43}}. The only UV-complete (type IIB) top-down holographic dual of QCD-like theories at strong coupling that we are aware of is {{cite:d12cb96921ff374fbbe52e8e09b55dbc47057bc9}}, together with its {{formula:3eda2ec9-f24d-477d-a07f-f76a074fbf43}}-theory uplift {{cite:06cde145d665a3b1af3e8bde1a76823702e0c030}}. The authors of {{cite:160309f60ff4ad1c8ac15d56a88b6c01bdd4de6e}} studied QCD thermodynamic functions at intermediate coupling based on hard-thermal-loop perturbation theory (HTLpt). One can also study the intermediate-coupling regime of QCD from gauge/gravity duality. In this direction, two of the authors (VY and AM) worked out the {{formula:be1319e1-de60-4a10-a585-1000bcccbfb7}} corrections to the {{formula:d7d9554a-9c84-4469-81cc-6666393e80f7}}-theory metric in {{cite:76db700d17b76fb7c92f613a4bd47cd27c460746}}. Starting with the {{formula:8795ba06-1a1a-4ddc-b202-b82ddce62398}}-theory dual of thermal QCD-like theories in the 'MQGP' limit as constructed in {{cite:06cde145d665a3b1af3e8bde1a76823702e0c030}} and incorporating higher-derivative terms in the eleven-dimensional supergravity action that are quartic in the Riemann curvature tensor, i.e. {{formula:ed7ebf14-ae4a-4213-aa7e-b1030c4613a3}} ({{formula:5453c4bb-4b18-4a80-a0c2-18e2e588a57a}}), the corrections to the supergravity background were worked out in {{cite:76db700d17b76fb7c92f613a4bd47cd27c460746}} and, in fact, successfully used in {{cite:6986922469ae1b044e0fadd6bf0622fcef1d3a57}} to obtain phenomenologically compatible values of the NLO LECs in the {{formula:7bdc9c15-b475-4adc-98d2-b8d0f6798e0f}}PT Lagrangian of {{cite:76f6d52d34afa936fce016140a71e48f43c3ee82}}.
i
a0b7519dba8183957e46249f9e537fea
Kronecker graphs operate by learning an initiator matrix and then performing a recursive multiplication of that initiator matrix to create the adjacency matrix of the approximate graph. In our case, we use KronFit {{cite:47dc00400fb4ab6b644bec1f760c5ab23d05b005}} with default parameters to learn a {{formula:1ede0fb9-13db-4e9d-8a7d-e289f07233a6}} initiator matrix and then use the recursive Kronecker product to generate the graph. Unfortunately, the Kronecker product only creates graphs whose number of nodes is a power of 2, i.e., {{formula:e9d0268a-19b4-44a2-af4a-5cc347cb1566}}; we chose {{formula:bcbf878b-f02c-4118-8add-c068fb633abd}}, {{formula:5b35ffdc-b5e2-4c8c-bf4d-31de93f337ed}}, {{formula:809d753c-0ac5-4e4d-8dda-1faff8d3d0f4}}, and {{formula:a5abde69-78d5-476b-bd82-ebec1e237c12}} for the Enron, ArXiv, Routers and DBLP graphs, respectively, to match the number of nodes as closely as possible.
m
8b3e4a4d115ec0d320410dc07f83f005
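The recursive construction described above can be sketched in a few lines; the 2x2 initiator values below are placeholders (in practice KronFit supplies them), and each edge is sampled independently from the resulting probability matrix:

import numpy as np

def kronecker_graph(initiator, k, seed=0):
    # The k-fold Kronecker power of the initiator gives edge probabilities
    # for a graph on 2**k nodes; sample the adjacency matrix from it.
    rng = np.random.default_rng(seed)
    P = np.array(initiator, dtype=float)
    for _ in range(k - 1):
        P = np.kron(P, initiator)
    return (rng.random(P.shape) < P).astype(int)

A = kronecker_graph([[0.9, 0.5], [0.5, 0.1]], k=10)   # 1024-node adjacency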
In a multi-core or many-core shared-memory environment, the speed at which the parameters can be updated by different threads may be a limiting factor for stochastic gradient descent (SGD) performance. This is because threads must lock the parameter vector while updating it, preventing other threads from reading or updating the parameters and forcing them to wait. Asynchronous versions of SGD may relax the data dependencies and the order of operations compared to the classical SGD algorithm, allowing the algorithm to update parameters more rapidly, but also making the asynchronous algorithm mathematically different and non-deterministic in its execution {{cite:e1991f3304385666f61f26c7f50f8321a3c1d484}}.
m
0d0455314ec5a3a9cd1a826f5d6ae54a
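The contrast drawn above can be made concrete with a small threading sketch (illustrative only; real asynchronous SGD schemes such as Hogwild! additionally exploit sparse updates): with use_lock=True threads serialize on the parameter vector, while with use_lock=False they update it racily and non-deterministically.

import threading
import numpy as np

w = np.zeros(10)                        # shared parameter vector
lock = threading.Lock()

def sgd_worker(grad_fn, steps, lr=0.01, use_lock=True):
    for _ in range(steps):
        g = grad_fn(w)                  # may read a mid-update ("torn") w
        if use_lock:
            with lock:
                w[:] -= lr * g          # serialized update
        else:
            w[:] -= lr * g              # racy, lock-free update

threads = [threading.Thread(target=sgd_worker,
                            args=(lambda v: v - 1.0, 1000, 0.01, False))
           for _ in range(4)]
for th in threads: th.start()
for th in threads: th.join()            # w ends up near the optimum (all ones)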
A minimal matching book thickness layout and coloring of a computation {{formula:3672ccae-6ed6-4ad4-8d3f-dbc561571c1b}} needs {{formula:bdb1749a-df59-46e1-bdb4-77e2d16cfe1b}} distinct time-periods (phases), where during each phase all edges of the corresponding color can be routed simultaneously {{cite:5890cd5bb1f8db009a16a1a3616367ce23e2f2d5}}.
d
ee817ceb4b122a4d9c803b7f5ad6db3f
For DL models, several interpretation methods are based on example-level information, for example the influence function {{cite:aff7aa63320572c3d8fabb8cc20557339ef16af0}}, example-level feature selection {{cite:b9f53dc52c741b3545c74cf4eeb8cb275b6a946c}}, contextual decomposition (CD) {{cite:9a22ccba65675f2aed1615cbd80797b5c7ec8eff}}, and the combination of prototypes and criticism samples, i.e., data points that cannot be represented by prototypes {{cite:49969e13773d4f7dbdb013bc79d0f19209767587}}. Other popular interpretation methods, such as LIME {{cite:19e1be50b5a1e9af916c1241c7c966edd36f75b1}} (Section REF ) and SHAP {{cite:a749d62cac10028906b5f8e2c8cc4c4a7535dd96}} (Section REF ), also provide example-level model interpretability.
m
e2dfd137035561c15eac4922193e8023
{{cite:09713f46bfdc8da67226658474d840992accd7ad}} impose assumptions that imply the confounding bridge is unique and point identified. In our model it may be neither unique nor point identified. In fact, under Assumptions 1.1-1.4 the confounding bridge is generally not unique unless {{formula:2088a152-d6a6-487a-8480-28332366fc7e}}; otherwise it may depend on the matrix {{formula:d03d8b0b-b2b4-46b1-9b7c-187d00a89702}}. Under Assumptions 1.1-1.4, {{formula:48f5ba97-09d8-4e9f-90b0-6e55547f5834}} is non-singular. If {{formula:630b80a4-b6fb-4791-9ddb-ee6d7594e353}} is non-singular, then {{formula:baabb0e9-9492-41bd-b7f4-6a53a1b81805}} for all non-singular {{formula:29cd49be-5998-4fa4-9123-56be2b6e6aec}} if and only if {{formula:ef1bfedf-1a2e-4597-ad96-46df484c2725}} is a square matrix, i.e., {{formula:45687ac2-7b80-41b8-a37f-4e6900ace3a0}}. Strictly speaking, even if {{formula:ada5e6e6-ab8a-491a-ac0e-d678fdef2827}}, the confounding bridge may be unique for certain values of {{formula:bf4be035-6a39-4c0a-83ec-c6270ac5069d}} (for example, if {{formula:73bc079a-bdc9-4c69-8145-758b29937d05}} is a matrix of zeros). Even if the confounding bridge is unique, in order to identify it, {{formula:3baa42f5-ce20-4cf7-8e6d-1e084857e14b}} must be relevant instruments for {{formula:4d9f04b9-903e-4828-b3a8-0bf918555ac3}} after controlling for {{formula:c0b5e983-2de1-43af-8b5f-9fe89c7bc21c}} (see Assumption 5 in {{cite:09713f46bfdc8da67226658474d840992accd7ad}}). Again, under our assumptions this is only possible when {{formula:c7c5124e-2591-4512-88c2-86e8462a15a6}}.
d
39dab452e3e4df5bfd9c5b812126e56b
To assess the standard scale variation method of assigning a theory uncertainty to predictions of inclusive cross sections at the LHC, we analyze the comprehensive set of LO and NLO calculations presented in Ref. {{cite:dd9da7878c5e3bb3dae75d6fdaff2883a40ef6e8}}. For each case, we investigate to what degree the uncertainty derived from scale variation of the LO prediction is a reasonable estimate of the difference between it and the known NLO calculation. While in many cases these NLO predictions are not the state of the art for the particular process at hand, and it would be very interesting to perform the same study using higher-order calculations, the number of NNLO calculations is much smaller, which would limit the scope of the available processes. Furthermore, many searches at the LHC still use LO calculations for generating signal samples.
r
68d0ccc906102bf5fcf40398fbcc4a54
Following previous work {{cite:a3186ce195a26036ecae807693641485d63d2166}}, {{cite:98423fcb7921050b00d426e518a11410b969a4d4}}, I took the covariance matrices underlying the probabilistic model to be identity matrices, {{formula:51da7715-92fc-419f-ba01-f436ad34c1bb}}, when deriving the predictive coding model. Even with this assumption, predictive coding achieves learning performance similar to backpropagation, which suggests that relaxing the assumption could lead to improvements over backpropagation. Future work will focus on these possibilities. Previous work has derived local learning rules that account for non-diagonal covariance matrices {{cite:0a9aab3a5d68528b635edcfeb6b9c9d9d5d79810}}.
d
7bf69c6dac77c3f0c05b60cc2a7db250
In the light of the AdS/CFT correspondence {{cite:af92c27b82a55d87277db2ea24228ec0fd3517c0}}, {{cite:c0bad2122a478643ce81b4d4431643b8ac6d073d}}, {{cite:b674a491ede49b336d03c8285afb4a5c3a959585}}, the quantum computational complexity has been thought of as one of the most useful probes to study the physics of the black hole interior. This is a curious quantity which, for a finite-entropy, fast-scrambling system, keeps growing long after the system attains thermal equilibrium {{cite:9212f2437c3771b0ac576ac117a604d48b03675f}}, {{cite:fca7f168637951b78451f2ddccc10c52e397c11b}}. Therefore, in a holographic setting, it is naturally identified with the growth of the black hole interior. Accordingly, the most natural holographic proposal to compute complexity in AdS/CFT turns out to be the celebrated “Complexity = Volume" (CV) conjecture, which measures complexity as the volume of the maximal slice in the interior of the black hole {{cite:9212f2437c3771b0ac576ac117a604d48b03675f}}, {{cite:7105dbc365f57d012e19a82906982a21a39fa19d}}.
i
0e339852a09f97ffea26f2b5a5f21b1e
To solve this MDP, the authors in {{cite:0196267da52ad84e6304b57a1f0479af07f62fad}} leveraged the Bellman equation and studied the characteristics of the value function involved. Based on the particular structure of this value function, the following result was found:
r
1e4d4a23d2c826ee2b99b1b6a02f5493
For Gaussian noise we compare our method against six other blind zero-shot denoisers: Noise2Self (N2S){{cite:3e735e0a3600aa8d1f05072899533d5c22c041fd}}, Noise2Void (N2V){{cite:f2b0fc1baa5ee5d71d0d01f3b248779dfa097dab}}, Self2Self {{cite:60200c297dc9d94eaf4bedba6355fd8ed2ce8c72}}, Deep Image Prior (DIP){{cite:9929e953283ea908ab4fba61187f555ca885253a}}, Noise2Fast (N2F) {{cite:c21ba18cea6fbe51aec8019031386c808a71c2bc}} and SS-GMM {{cite:b7fa7351dca6ec991d762ceed99a433a2967712f}}. For Poisson noise, we replace SS-GMM (which is designed to work specifically on Gaussian noise) with Poisson2Sparse (P2S){{cite:3acb2d76c8a4bdb942d703f86165c448d01f5201}} (which is designed to work specifically on Poisson noise). We also add our own seventh method to each comparison, N2F+Domino (N2F+DOM), which is N2F where we replaced checkerboard downsampling with our domino tiling based approach. This serves to illustrate the broader applicability of domino tiling to generate noisy image pairs.
m
7540d18d21bb4cd7d720357f657e12a6
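As a reading aid for the domino-tiling idea in the excerpt above, here is a hedged sketch of splitting one noisy image into a noisy pair along horizontal 1x2 dominoes; the exact tiling and routing used by the paper may differ.

```python
# Sketch: route the two pixels of each horizontal domino to two sub-images,
# producing a noisy pair from a single noisy image (illustrative only).
import numpy as np

def domino_pair(img, rng=np.random.default_rng(0)):
    h, w = img.shape
    w -= w % 2                       # keep an even number of columns
    left, right = img[:, 0:w:2], img[:, 1:w:2]
    swap = rng.random(left.shape) < 0.5
    a = np.where(swap, left, right)  # one pixel of each domino
    b = np.where(swap, right, left)  # the other pixel
    return a, b
```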
Moreover, the data-driven parameter discovery of the defocusing NLS equation with a time-dependent potential, in the sense of the rogue wave solution, is also studied. It should be pointed out that the considered periodic boundary conditions are non-zero, which differs from other works learning solitons with zero boundary conditions {{cite:3240d639e33bccf0a89e62bbb9d8d2d184ec4603}}, and they appear more difficult to learn. In brief, these results show that the PINNs can be used to learn the rogue waves of the defocusing NLS equation with a spatio-temporal potential even though only a small number of sampled points is used. However, many issues remain open: i) there is no theoretical analysis of PINNs with different activations, weights, and bias functions for indicating latent solutions; ii) which error loss is more suitable for different physical models? iii) can additional physical laws make deep PINNs learn the corresponding nonlinear physical models better? These problems will be considered in future work.
d
4091337a9ebdf4d8f94a5063d11299ef
Effectiveness evaluation: Our method was compared with well-known anomaly detection methods including the one-class SVM (OC-SVM) {{cite:28ba4bf705cb4df262933f41eb1845129b551efc}}, statistical kernel density estimation (KDE), and the autoencoder (AE) {{cite:e270098ad31ea192a88f3de5d7b4fd7bb5ad4a59}}; the recently proposed Multi-Scale Convolutional Recurrent Encoder-Decoder (MSCRED) {{cite:434c1e35c644b4a95ec6030ad631eb7a8a360b92}} and Unsupervised Anomaly Detection (USAD) {{cite:c8a41c17836be0ff97c1cd93ac75ad5bcce01c64}} for multivariate time series; and the recently proposed SSL methods for anomaly detection, including ScaleNet {{cite:48d6c303ae892ae9b5ab395bb0df9d8981a28cbc}} and CutPaste {{cite:430ee60351c371c68f51fc2dbd751e12d306a16d}}. Note that ScaleNet {{cite:48d6c303ae892ae9b5ab395bb0df9d8981a28cbc}} uses frequencies of normal EEGs at multiple scales to help detect abnormal EEGs, without considering any characteristics of abnormal EEGs. Similar effort was spent tuning the relevant hyper-parameters for each method. In particular, to obtain the feature vector input for OC-SVM and KDE, every row in each EEG recording was reduced to a 64-dimensional vector by principal component analysis (PCA) fitted on all the row vectors of all normal EEGs in each dataset, and then all the dimension-reduced rows were concatenated as the feature representation of the EEG recording. For AE, the encoder consists of three convolutional layers and one fully connected (FC) layer, and symmetrically the decoder consists of one FC and three deconvolutional layers. For CutPaste, each normal EEG in each training set is treated as a gray image of size {{formula:d58a2649-d86f-4eda-bda9-df1c1c09e95d}} pixels, and the hyper-parameters suggested in the original study {{cite:430ee60351c371c68f51fc2dbd751e12d306a16d}} were adopted for model training. For ScaleNet, the method was re-implemented with the suggested hyper-parameters {{cite:48d6c303ae892ae9b5ab395bb0df9d8981a28cbc}}. As Table REF shows, on all three datasets, our method (last row) outperforms all the baselines by a large margin. Consistently, as Figure REF (Left) demonstrates, our method also performed best at the subject level (i.e., Setting II), although the performance decreases a bit due to the more challenging setting. All these results clearly confirm the effectiveness of our method for anomaly detection in EEGs. {{table:238c2f80-c851-4a30-a809-0ca500edec04}}{{figure:b2cd1172-a640-4c2d-a4f5-9add2349dfad}}
r
a352faed5a54a734c974f09169afb127
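The PCA-based feature construction for the OC-SVM and KDE baselines described above can be sketched as follows; array shapes and the OneClassSVM hyperparameter are our assumptions.

```python
# Sketch of the baseline features: fit PCA on all rows of all normal EEGs,
# reduce each row to 64 dimensions, and concatenate the reduced rows.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

def build_features(eegs, n_components=64):
    rows = np.vstack(eegs)                  # all rows of all recordings
    pca = PCA(n_components=n_components).fit(rows)
    # assumes every recording has the same number of rows
    feats = [pca.transform(e).ravel() for e in eegs]
    return np.stack(feats), pca

# feats, pca = build_features(normal_eegs)
# detector = OneClassSVM(nu=0.1).fit(feats)   # nu is an assumed value
```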
Since catastrophic forgetting is an inherent feature of current online neural networks, test-time training could benefit from the field of continual learning {{cite:35c88fdd3c051809a002aa7c2d9c66d036adbce0}}, {{cite:c680b3b137a79ef0414da357cfca082cb89fc07c}}, {{cite:f4eca553a3cec9d66dcb543a4dd7ed66bd1571d6}}, which aims to prevent forgetting in general, not necessarily under an adversarial setting. Understanding forgetting is a major milestone in neural networks research, as it is one of the most significant dissimilarities between artificial and biological neural networks. Last, the concept of lethean attacks, a.k.a. memory erasure in human minds, has been the subject of countless Hollywood movies (a few personal recommendations: Eternal Sunshine of the Spotless Mind, Inception, Men in Black) and in reality could potentially restore the lives of millions of PTSD patients. A review of the controversial field of memory reconsolidation {{cite:e398a3f9a78dadaac20d31498de7d67029882648}} and its applications is beyond the scope of this paper, interesting as they are.
d
b56dbdedc3c53e615e2db9a31737b052
An inherent drawback of kernel methods for large datasets is that they require the calculation of the so-called kernel matrix {{formula:81074a07-e498-4fb1-a32e-79f886358d2b}}, having elements {{formula:a34c8562-dd04-441d-b51f-a4f3fe0a308b}}, and the exploitation of {{formula:3054aabe-53cd-4a9c-aea2-98f92a065a57}} in order to learn the model parameters. For example, KPCA and KDA solve a (generalized) eigendecomposition problem defined on {{formula:48165485-d259-4ff0-b2df-80fd6d9d67e5}} and, thus, their time complexity is a function of {{formula:df1f708d-e637-413c-98db-077626112d2c}}. For large-scale problems, such a time complexity is prohibitive and, thus, approximate kernel approaches have been proposed to overcome this issue {{cite:98d47d92d53ab8d81db8170632ed0677ab2d3c4b}}, {{cite:2f07b0a1dcc1c754e84ca0cc79d95f1b1ff422e1}}, {{cite:862b9b3fb73ea70d82eff1fba905937b6f1d2455}}. They aim at constructing a low-rank approximation of the kernel matrix {{formula:d2fbd3a1-036e-4260-b704-516fe4368a0e}} to reduce the computational complexity of applying the main algorithm.
m
e3f3f74e3c81d4c86859ea1e454fde1d
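One common low-rank construction of the kind mentioned above is the Nyström approximation; the following is an illustrative sketch (not necessarily the method of the cited works), using an RBF kernel and random landmarks.

```python
# Nystrom sketch: K ~ C W^{-1} C^T built from m landmark points, avoiding
# the full n x n kernel matrix.
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, m=100, gamma=1.0, rng=np.random.default_rng(0)):
    idx = rng.choice(len(X), size=m, replace=False)   # landmark subset
    L = X[idx]
    W = rbf(L, L, gamma)                              # m x m block
    C = rbf(X, L, gamma)                              # n x m block
    U, s, _ = np.linalg.svd(W)
    W_inv_sqrt = U @ np.diag(1.0 / np.sqrt(s + 1e-12)) @ U.T
    return C @ W_inv_sqrt          # features Phi with Phi @ Phi.T ~ K
```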
When the post-change distribution has parametric uncertainties, Lai {{cite:9d24dc3c4abd7cf963c39f03e17ce5428bbcb413}} generalized this performance guarantee with the following assumptions. Suppose that {{formula:9236d538-bd17-4805-bdcb-05152d004b94}} and {{formula:ea3e0c2f-caec-40b1-ae4d-5e7d68f7644d}} satisfy {{formula:7b53e483-fe8a-427e-8514-11051919a3b2}}
r
e2ec3c06692b910d2b605d8a30f28415
The ZX-calculus has several applications in quantum information processing {{cite:3956b44647f99085b55a745313abc9828a07235f}} (e.g. measurement-based quantum computing {{cite:8eb9e7987126d846052d491d601767760071ae56}}, {{cite:9835495675d1af243cb8099c369d56c223947dc3}}, {{cite:642d32c796cff9b81b20ec9991bd870d62eadb49}}, quantum codes {{cite:69794867b28b4df86a51684391a697af01b433a6}}, {{cite:d2228ec20c7918df34c288122d12f840d694df91}}, {{cite:2cbe7efc413f4d1217f80b962a568e33f3edbbde}}, {{cite:81884c41301a08444ffb0608d20284bbd5e413ce}}, foundations {{cite:e0986546ad614dac3d8eeb2d0baf3ad60e931169}}, {{cite:bdeafd6c856c02eb38bdef2cd055a43124e27d84}}), and can be used through the interactive theorem prover Quantomatic {{cite:5b41f0907b675bca2653239fd161f69b67d6c6a6}}, {{cite:081b7481ddf1a619a4633edfe7718f00867ae1d3}}. However, the main obstacle to wider use of the ZX-calculus was the absence of a completeness result for a universal fragment of quantum mechanics – a restriction of the language that can be used to represent any quantum operator. Completeness would guarantee that, in this fragment, any true property is provable using the ZX-calculus. More precisely, the language would be complete if, given any two diagrams representing the same matrix, one could transform one diagram into the other using the axioms of the language. Completeness is crucial: it means, in particular, that all the fundamental properties of quantum mechanics are captured by the graphical rules.
i
cb6274f53c51a7ed8e7369ac0dc87c18
For the contrastive loss, different from {{cite:2dca87b6ef09e6ba0e6fbe48cab9037b51a9d069}}, we adjust the contrastive loss term to allow multiple positive samples during each forward propagation, so that similar images belonging to the corresponding class prototype are pulled together for a more discriminative representation. We defer the definition of the contrastive loss term to the following section. The overall losses can be formulated as: {{formula:144f771b-d876-4e70-9725-eeec5faea8c4}}
m
71546854cb190c06e0fd55cb19b43993
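Since the excerpt defers the loss definition, here is a hedged PyTorch sketch of a contrastive term with multiple positives per anchor, in the spirit of supervised contrastive learning; the paper's exact normalization may differ.

```python
# Multi-positive contrastive loss sketch: positives are samples sharing the
# anchor's label; the temperature tau is an assumed value.
import torch
import torch.nn.functional as F

def multi_positive_contrastive(z, labels, tau=0.1):
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    logits = sim.masked_fill(self_mask, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-probability over all positives of each anchor
    loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()
```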
{{formula:d3cd1021-574c-438d-9802-c16b11dc10c6}} -gram token distribution.  We compare the unigram distribution of a code-switched corpus (SEAME) with that of a standard monolingual corpus (PTB {{cite:85ea7f049cde2107b1ccadba6c87631cf59922f9}}). A glaring difference is observed between their distributions (Figure REF -a), with a significantly higher occurrence of low-frequency unigrams in the code-switched corpus, which makes them rather difficult to capture using standard {{formula:024668c3-3a22-4de8-97e6-6a32a2f45b37}} -gram models (which often fall back to a unigram model). The DLM partially compensates for this by emulating a “class-based language model,” using the only class information readily available in the data (namely, the language of each word). {{figure:f3d7320d-b79e-4f31-a3d7-6cf6c85e54ae}}
d
116b514fd0e969dc206173fdb2120947
We follow the definitions in {{cite:3ab314d74bcd4e2689e95feafe79a6f0a54c03ad}}, {{cite:90c2879affe2bfdd932789e4e2ec301e01dae769}} to formalize the HFR task.
m
dffb48815c18b31a287ed521734eb67a
Federated learning (FL) enables various data owners to collaboratively train a model without sharing their own data {{cite:3e6ed5f03443bc642155ca52433f73c67f6c9176}}. In a FL system, there is a server that broadcasts a global model to clients and then aggregates the local models from them to update the global model. Such a distributed optimization may cause prohibitive communication costs due to the unavailability of clients {{cite:8b2c978a2fb5d3ba7a74be75a36478a3a829541f}}.
i
3f2c7b84b86733664ceefef2af784ce0
The edges and fine structural details present in the LR image are extracted by the HLIE module using the low-level information {{formula:f78e5b08-daaa-4d5d-bb32-b35cfe36cd8e}}. The HLIE module comprises 32 Residual-In-Residual (RIR) blocks, one {{formula:b177bcdd-edad-46db-ad30-3ea084157977}} convolutional layer, and one long skip connection. The long skip connection here stabilizes the network training {{cite:661b8c19e2a25d28c57dbc2bb67404fc9a8c7043}}, {{cite:004f36129199cbfc4ca978a104e6cce4dd01d2ac}}, {{cite:94e2badf0b06dc0493c3dfe9d159c59244333ec2}}, {{cite:e9a870ef0962cfda3f07a05f6daa59a7068b3e47}}. Each RIR block is created using three residual blocks and a skip connection with a {{formula:0961f48f-ce44-47da-8de3-f5eb3af2df86}} convolutional layer. Each residual block comprises four {{formula:c9b748d6-d4d7-4cae-b8a2-e53afa8da5f1}} convolutional layers with a serially attached Channel Attention (CA) module. Using the statistical average of each channel, each channel is independently re-scaled via the CA module {{cite:fb108b062dcb89afb00528c34d3eabfd728cb03d}}. As depicted in Fig. REF , skip connections are also used in the residual blocks, which aids in stabilizing the training of deeper networks and resolving the vanishing gradient problem. The output from the HLIE module can be expressed as, {{formula:9b0ad78c-2e9d-4e23-b76b-291a1c9905d8}}
m
cbd58e6beb77afc8759ebcd49fde3132
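A hedged PyTorch sketch of the channel-attention residual block described above follows; the channel count and reduction ratio are our assumptions, not values from the paper.

```python
# Residual block with four 3x3 convs and a serially attached channel
# attention (CA) module that re-scales channels by their global average.
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, r=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                 # per-channel average
            nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.gate(x)                      # independent re-scaling

class ResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        layers = []
        for i in range(4):                           # four 3x3 conv layers
            layers.append(nn.Conv2d(ch, ch, 3, padding=1))
            if i < 3:
                layers.append(nn.ReLU(inplace=True))
        layers.append(ChannelAttention(ch))          # CA attached serially
        self.body = nn.Sequential(*layers)
    def forward(self, x):
        return x + self.body(x)                      # local skip connection
```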
Rotational mixing can inhibit the settling of helium more efficiently than that of heavy-element abundances because the mixing depends on the gradient of elements {{cite:4862d49a0444c69341c18527ca44dcd05e4389f0}}. It can reduce the amount of surface helium settling by about {{formula:fe22fdb3-07a2-438b-b54d-380e402f00b8}} in our models, which is consistent with the result of {{cite:cf2ece8289616ca65755d7fb12ac9b6facd6a902}}, who found that macroscopic turbulent mixing can reduce the amount of surface helium settling by around {{formula:46afa3f3-56e2-42a4-a6dd-62fe472d3b1a}} . As a consequence, the surface helium abundances of rotating models are clearly higher than those of non-rotating models. In the enhanced diffusion models, the velocity of diffusion and settling was increased by {{formula:183bf5a9-17c3-4116-a1b2-61c2cb3cc963}} . However, we have no obvious physical justification for this multiplier. In non-rotating models, the enhanced diffusion leaves the surface helium abundance too low. However, rotational mixing completely counteracts the effect of the enhanced diffusion on the surface helium abundance in rotating models. Thus the surface helium abundances of the rotating models with enhanced diffusion are higher than those of non-rotating models. The effects of rotation and enhanced diffusion bring low-Z models into agreement with helioseismic results {{cite:1d4d2e9f5b4abf852bb566e5f3e521c2d4ebed07}}, {{cite:fcd93df33e710dc45cb52e58ace1a5cd3978ff67}} and updated neutrino fluxes {{cite:ef17ca3b87627212c18ee7308d04f6e2a39dbda0}}, {{cite:b7433a2b744a4aed226867657e2798ce1d12b02b}}. However, the calculations show that the same effects cannot bring high-Z models into agreement with the helioseismic results and the updated neutrino fluxes at the same time.
d
bdca6d971ad02275e8b236034a18cb01
We compare our method to 13 state-of-the-art methods, including 7 traditional methods {{cite:7dfffa9d813e95c563d39f5324c3c18f0e9e4427}}, {{cite:58b420567a5fe0512f393c28acb814ee83125ff2}}, {{cite:90f492ffdd99716451998c2986c9ddfbde0588cf}}, {{cite:662e451febab6117dc0e14193d23f35f36fe698c}}, {{cite:c96fbe093e56ab6dfad599a66f0aff839d035d22}}, {{cite:b61c6c5c32ffe36ede7db9b8335eefd810b7a5dd}}, {{cite:32836da342999ef6232edfe5a2b998d11804936f}} and 6 deep learning-based methods {{cite:4681a28611d1aff787343d31dff8908dd3fb33b1}}, {{cite:725adac0e3293a56c96cf7e95bc5b61843120824}}, {{cite:d8c6ef0b5c2edb720d5c5c085bca9d8b1dde1ca2}}, {{cite:411f80631cf291a002023e81a1b66f2e2df5a704}}, {{cite:c3b6f52b68f2e6604ddcfa5b5cd5c6f7f9d4b761}}, {{cite:36cf0fb7e0d16c0b893f6eb844a08ffdf215c808}}.
m
68a9e79870645b2fe7624f91b417c92d
Let us estimate the probability {{formula:5432d73c-5729-41c3-b760-0bef0437698d}} using a probabilistic classifier—a function {{formula:1a652cf8-7e9b-49a4-a97c-9110d1e91eb0}} parameterized by {{formula:9e33d5d3-5832-44e8-9bb8-202012a82f1c}} . To recover the true class-posterior probability, we minimize a proper scoring rule {{cite:1eaa922883dce377e161202bce06906d192d10ee}}, such as the log loss {{formula:5a817c00-662d-4909-ada2-3f707d87960b}}
m
d287fa591299995e97fc6948fe038930
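For concreteness, the binary-case log loss mentioned above can be written as the following short sketch; `q` is any parameterized classifier returning P(y=1|x) and is an assumed interface.

```python
# Log-loss (negative log-likelihood) for a probabilistic binary classifier.
import torch

def log_loss(q, x, y):
    p = q(x).clamp(1e-7, 1 - 1e-7)      # avoid log(0)
    return -(y * p.log() + (1 - y) * (1 - p).log()).mean()
```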
Trading a minimax problem for an estimation problem does not come for free. First, there are some computational considerations. The HSIC is computed in quadratic time, but linear-time estimators of dependence {{cite:e5ca52e7379e3cbffbfa24fcccbe599d288e04f0}} or random-features approximations {{cite:4cce9f82816fd3a040936b61575aec782b50bbbb}} should be used for non-standard batch sizes. For example, to train on the entire extended Yale B dataset, VAE takes two minutes, VFAE takes ten minutes (VFAE is slower because of the discrete operation it has to perform to form the samples for estimating the MMD), and HCV takes three minutes. Second, the problem of choosing the best kernel is known to be difficult {{cite:3457e30678c14a16940b324fbc786cb875ec20ab}}. In the experiments, we rely on standard and efficient choices: a Gaussian kernel with the median heuristic for the bandwidth. The bandwidth can be chosen analytically in the case of a Gaussian latent variable and offline in the case of an observed nuisance variable. Third, the general formulation of HCV with the {{formula:2970024f-9037-4e4d-b3fa-606f8e755a66}} HSIC penalization, as in Equation REF , should be nuanced since the V-statistic relies on a U-statistic of order {{formula:946d9d05-f4d1-4f4b-b8ab-d339ca92bcaa}}. Standard non-asymptotic bounds as in {{cite:65a8e3acb33cfcbc2cb6c0f7ed401702ff28f9a7}} would exhibit a concentration rate of {{formula:851ce9a7-66ff-467e-aa66-c88a47b0f9e9}} and therefore not scale well for a large number of variables.
d
1b963eee46a10990e4a2ff25321f016c
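The quadratic-time HSIC with a Gaussian kernel and the median heuristic discussed above can be sketched as follows (a biased V-statistic estimator; illustrative, not the authors' exact code).

```python
# Biased HSIC estimator: HSIC = tr(KHLH) / (n-1)^2 with centering matrix H.
import numpy as np

def gauss_gram(X):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(d2[d2 > 0])       # median heuristic bandwidth
    return np.exp(-d2 / (2 * sigma2))

def hsic(X, Y):
    n = len(X)
    K, L = gauss_gram(X), gauss_gram(Y)
    H = np.eye(n) - np.ones((n, n)) / n  # centering
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```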
We first consider an example of the difference between superimposing and aligning manifolds on image domains. Earlier work {{cite:4b6669344c56df4ffac9480298e8e02341f1810e}} has demonstrated that an image of an object in the first domain can be mapped to an image of an object in the second domain while preserving the orientation with respect to the picture frame. However, in those cases, the orientation axis can be completely reversed. An image in the first domain facing 30° maps to an image in the second domain facing 150°, and vice versa. The mappings successfully fool the discriminator in each domain at the level of an entire batch (the manifolds are superimposed), but there are other mappings that also fool the discriminator while preserving the individual pointwise structure of the original domain. Namely, the optimal alignment would map first-domain images at 30° to second-domain images also at 30°. Without aligning the manifolds, only random initialization will determine which superimposition is learned on any particular attempt.
i
95dbf6b4fee4beb58e3f72bd05834ad6
We use the same prompt at all perspectives. This can lead to repeated patterns on multiple sides of an object. The target caption could be varied across different camera poses. Many of the prompts we tested involve multiple subjects, but we do not target complex scene generation {{cite:a17b98fa5a5ccde9bcb3f3118591d4f844c6234d}}, {{cite:e54c9dea0c964842d1a1a3343fb71194de249e17}}, {{cite:eddef5390348d3f40e638f5d34177d1e0d0c9fa1}}, {{cite:787c8c6e3cc027bb8c7360dc783146e2353806bf}} partly because CLIP poorly encodes spatial relations {{cite:e1fcc5d7a5b04b06b1c2a717bd35c7774c03ccf0}}, {{cite:c8dd1b0b8a662f4c0c5b5b8de8336df9910396b9}}. Scene layout could be handled in a post-processing step.
d
33e495a4fc2a521194028bde1dafabae
Using OGLE IV photometry {{cite:488cc69c5bc2bf200f737ce9f84d6bb1dc305b6f}}, a systematic difference between the results for the Oo I and Oo II type samples in the Galactic bulge was detected by {{cite:1d0e390bcbc51a2d68e98b6ec6fad6580e3bc8fa}}. The median [Fe/H] value for the Oo II stars proved to be 0.1 dex more metal-poor than for the Oo I stars. However, {{cite:a7f602ab7283e3df82b61c9636428cdded87e715}} noticed a {{formula:2cb1e887-bed8-44f7-a5ed-36578f97fe2a}}  dex amplitude-dependent residual in the [Fe/H] values derived for Oo I variables with eq. 3 of {{cite:3dd9dbf84e0d324daf7406803ce24525009bf59b}} in the Galactic bulge RR Lyrae sample. Correcting for this effect results in somewhat lower metallicities for Oo I RR Lyrae. Nevertheless, since these types of corrections have not been widely adopted for the derivation of photometric metallicity estimates, we focus on verifying the behaviour of the original eq. 3 formula of {{cite:3dd9dbf84e0d324daf7406803ce24525009bf59b}}.
i
f6d229fa2867dd3d293bf7e3f4d028bd
where I put {{formula:3a26dac9-1540-49c2-9538-1464f64906fb}} {{cite:70d563998fe0b9f270e0a957f120cf7b4db41edb}} in order that {{formula:b16698f9-908f-4f2e-a1a1-1493ce6bb7ac}} has the physical dimension of time. The expression for the “4-velocity” {{formula:65446de8-b132-408c-9af9-18d52a5705d2}} is as follows {{formula:0c29fbd6-3d7b-4c10-a334-88f83226c808}}
r
52cac9b8b92aaabddc17f13b1769c8f2
For non-sparse graphs ({{formula:6e813087-c858-4754-8e22-0ec680b69561}} for some large constant {{formula:eb2a5549-9d8c-4159-beb8-a62f56533cb7}} ), the work of our algorithm matches the best {{formula:20e9bb1c-842c-4a5d-84ee-10e4d5b63922}} runtime for sequential algorithms {{cite:57c0f12d54d7e1b3d1e1d3ab859cc008d7dcbc53}}, {{cite:a41577dd952b606a65ce37ee4b99365205570b05}}. (This is because {{formula:a346a443-926a-456f-88d7-6837a10514da}} when {{formula:b638e8d7-1961-49af-bbc5-e77495bf50dd}} . Otherwise, {{formula:74cbacb6-2597-45d3-bd00-071cdc1efb01}} implies that {{formula:b3e13855-bf3c-4b94-a916-7a6ca00de05b}} . But then {{formula:1dda767e-89e8-4dbb-a5d2-cba50a82f927}} for a large enough constant {{formula:11f14493-80e3-4ee6-bfcb-2a190a696298}} (note that {{formula:d4bc2f25-a89b-46b2-b8c3-9eb69895f8ba}} since the claim is obvious otherwise). We thank Paweł Gawrychowski for the clarification regarding the bound in {{cite:c01f43cd3c6b3233ec7f916893a2ed3cf21b277b}}.) When {{formula:6dde739c-0d82-400e-b3f1-594ead41bbbb}} , the work of our algorithm can be simplified to {{formula:59ea18df-be7b-43cc-9876-c8a8b74e64d5}} and improves the previous best work by Geissmann and Gianinazzi {{cite:84d0614f0f89f24a340c6af66ba3784af5910bb7}} by an {{formula:673c0eff-2960-4e07-902c-ca3e4ff82ac9}} factor while matching the depth of their algorithm.
r
adfeac5dfa4fe56d808a625fb6b3a6e6
Subalgebra-subregion duality also leads to new insights into and perspectives on entanglement wedge reconstruction and its interpretation in terms of quantum error correction {{cite:3ea4c4dfa984dabcbe79fb734616a255c3ae4f0a}}, {{cite:9b1ba27b160beb2cd201468fb92406ec81f2a251}}. It can also be used to obtain entanglement wedges directly from equivalence of algebras, rather than using entropies as in {{cite:766d7eb367fdccc2d61398af8b844b214f30f855}}, {{cite:95326ce71c853e7f2f32f85b56217714184610ab}}, {{cite:e0d70447fccf319e4bd03b6b92d2aa755e50bd63}}. It should be possible to use the duality to give a first principle derivation of the “island” prescription used in the calculation of the Page curve for an evaporating black hole {{cite:cf3e9b43dae3231dbf32ab2b88e33e4c8f0d4902}}, {{cite:60e3eb684260aaa03019a36a3e8a0dd8c133880f}}, {{cite:5942ba7abd9c9f0bfd63e8c45a20313aa7ec26c7}}.
d
f5680c89b49f149753b67a6b2b20f528
With such a diversity of works around the idea of interpretation, we still believe that the current discussion lacks generality and does not touch the core idea behind interpretation. By rethinking the fundamental property of interpretation and starting from intuitive assumptions, we build a general framework to synthesize interpretation from an information-theoretical perspective. The proposed framework is transparent in style, capable of both global and local explanation, and produces interpretations in the form of meta-information. The framework is then implemented to solve image recognition tasks on a simplified CLEVR dataset in an interpretable manner {{cite:d72c57ad2b0695e6c7e22b34a7c2572288466edf}}.
i
a955560f345e4a0f3c67a009209cfaf8
Network properties have to be tuned to task requirements for optimal performance. It is widely assumed that criticality optimizes task performance. However, we found that one has to phrase this statement more carefully. While certain abstract computational properties, like the susceptibility, sensitivity or memory time span are indeed maximal or even approaching infinity at a critical state, this is not necessary for task performance in general {{cite:2a49f80e0f1f9dffee3503a37292a58f09405692}}, {{cite:9f464c6be53388282111625b7c4e400086645535}}, {{cite:089ebcd9e6a1634f9dc7dc3e280dd9d895c4c430}}, {{cite:6dfc423e9683858e6af3eef09a507970d4c20068}}. We find that it can even be detrimental. For every single task complexity, a different distance to criticality is optimal, as outlined in the following.
r
d445bf445b68207fb50fe610132a064b
In this paper we constructed a family of density-scaled filtered complexes, with a focus on the density-scaled Čech complex and the density-scaled Vietoris–Rips complex. The density-scaled filtered complexes improve our abilities to analyze point clouds with locally-varying density and to compare point clouds with differing densities. Given a Riemannian manifold {{formula:541b8948-e03e-445f-b846-92c11a26934b}} and a smooth probability density function {{formula:1efb4199-ab64-4cd3-8fee-19f33bef4e6f}} from which a point cloud {{formula:21856bb1-769e-42e7-a929-c4e4fb8f1f1d}} is sampled, we defined a conformally-equivalent density-scaled Riemannian manifold {{formula:550b94b6-43a6-4656-a0a0-90337a7f44f9}} such that {{formula:b4c83b13-46f4-4251-87a1-f38fdf081934}} is uniformly distributed in {{formula:3836e7ee-576a-4cd7-b070-8454e3473e75}} . This allowed us to define a density-scaled Čech complex {{formula:f69a58c2-3eb9-40cf-9744-0fd6d5e6da66}} and more generally to define a density-scaled version of any distance-based filtered complex, including a density-scaled Vietoris–Rips complex {{formula:b2ab5b5a-7c95-4c1e-98a9-2108f4a568b9}} . We proved that the density-scaled Čech complex has better guarantees on topological consistency than the standard Čech complex, and we showed that the density-scaled complexes are invariant under conformal transformations (e.g., scaling), which brings topological data analysis closer to being a topological tool. By using kernel-density estimation and Riemannian-distance estimation techniques from {{cite:69929f56fb7ac7190a251c0501bb4c1f0560658a}}, {{cite:b3225b3d54e764e1669e8d398ec4f9920316cb50}}, we implemented a filtered complex {{formula:f2a1db2c-96ea-4205-bf45-18abdecccec3}} that approximates {{formula:4680d158-c6f2-4263-b5b4-39d96e5769e1}} . We compared {{formula:ddcc380b-ea76-4af5-9626-1a78d91ee26f}} to the usual Vietoris–Rips complex and found in our experiments that {{formula:2a85bef9-e3ac-40e2-9f84-ad8e97b16be8}} was better than Vietoris–Rips at providing information about the underlying homology of {{formula:b08c83cf-e929-4c77-bf95-2e78deb0084f}} .
d
e24012c5d80e0de2b53ac2a4836f218d
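A rough sketch of the density-scaled construction above: estimate the sampling density with a KDE and conformally rescale pairwise distances by the density to the power 1/n at the endpoints before building a Vietoris–Rips filtration. This is a crude stand-in for the estimators used in the paper, not their implementation.

```python
# Rescale pairwise distances by rho^(1/dim) at the endpoints; the scaled
# matrix can be fed to any Vietoris-Rips implementation.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.spatial.distance import pdist, squareform

def density_scaled_distances(X, dim):
    rho = gaussian_kde(X.T)(X.T)             # density at each sample
    s = rho ** (1.0 / dim)                   # conformal factor
    D = squareform(pdist(X))
    scale = 0.5 * (s[:, None] + s[None, :])  # average endpoint factor
    return D * scale
```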
In the following we present a sample program using MARTY-1.4 {{cite:61d4e348403b449d83085be6e50f151fb021d4a5}} to calculate, in general BSM scenarios, one-loop Wilson coefficients for flavor physics such as {{formula:8c1e7890-e05d-4e6f-8a27-45780fdc54ef}} and {{formula:28151782-b8d4-4cf0-8370-d6dd8a4007f4}}. These coefficients, which are relevant to describe the {{formula:ac7ef511-922c-46ae-a26e-885c93c3b9e2}} transitions, are defined from the corresponding effective Hamiltonian {{cite:0e00264cd882ecb56423da98339e50d0e6f59b07}}, namely {{formula:be44e945-c20d-4d0c-ac9e-5a58d11fb4f9}}
i
d7fa37218c94dd88190edeb2b301b35b
Additionally, the area denoted by a blue circle in fig. REF , where bright surfaces have been identified by {{cite:7d51975b3a38dae492f493e314347218db1b718c}}, was considered to host material enriched by up to 6% in water ice according to infrared spectrum modeling {{cite:f62b073d442531e5a048765e5bce054c5aa90bec}}. For some of the surface elements of the same area, {{cite:9d16c9c1e44941e53f0133e6808bfefca3dfbab6}} obtained corresponding values ranging from 6% to 25% based on the spectrophotometric properties of these spectral regions and thermal modeling. Similarly, the analyses conducted in the Anhur and Bes regions on some of the singular, very large compositional heterogeneities have indicated local {{formula:7ac82791-7d45-4353-addd-49776f365a9c}} 20% and {{formula:befaf5e4-ff5a-4086-96c2-ea04c111f5fc}} 30% enrichments in water ice ({{cite:ed20fe7db8048f2f04199c0453f96881fa972687}} and {{cite:f554b2b215624fe804d9229fa88f79187e3ec15c}}).
d
da7f84246980e15398ff30830a8beda1
In the experiments on the false news detection task, we found that detecting right-leaning articles was, in general, difficult for all the classifiers. The reason for this difficulty could be a potential data bias. Social media and crowd-sourced platforms tend to lend high visibility to viral content, which includes right-leaning and left-leaning news publishers that are by definition extreme in their worldviews, coupled with a sensationalist tone. This issue is also reflected in the existing dataset NELA and our newly collected dataset CIND. In our study, we observed that right-leaning articles are mostly misclassified as left-leaning articles. The difficulty of detecting right-leaning articles was also observed by {{cite:ec0aa1f6212962c246dc5e34bbfa8520479e139c}} for the hyperpartisan news task. Additionally, {{cite:6d3e6fefb0a2d9754c1158ac331e3d89bb269731}}  found that fake news is mostly misclassified as right-leaning news and observed that right-leaning news publishers sometimes campaigned with false information. The significant difference of our study from the prior studies {{cite:ec0aa1f6212962c246dc5e34bbfa8520479e139c}}, {{cite:6d3e6fefb0a2d9754c1158ac331e3d89bb269731}} is that we tackle the problem as multi-class news type classification because different fake news types may have different implications. Biased sources can also misinform readers {{cite:090e1bdac5ac9706f94cff2aa9429fe8fc86d158}}, {{cite:ec99c198b48357f07dba4d851bbda78a6dbebacb}}, and fine-grained detection is vital for prioritizing what should be fact-checked.
d
35f7eb59629fb98a4625cb2669ea39c0
Many of the existing experimental constraints on dark matter (DM) crucially rely on the DM interactions with nucleons, and therefore, can be largely weakened if the DM predominantly interacts with the Standard Model (SM) leptons, but not quarks at tree-level. Such leptophilic DM (LDM) could arise naturally in many beyond the Standard Model (BSM) scenarios {{cite:19c0697d52cb6ef2834f77109ddbfc9b71b0dfda}}, {{cite:7186465e11435b54ac594b9db8ead079e2f486fe}}, {{cite:ce49ab4f72259aa29f246619c3131b1b2d6f66fe}}, {{cite:44fd0960f3c19b3a78b9140a59b45f691599f84e}}, {{cite:a4b1060b8a64f3ea62c7bb334a03cb0f2997b3ce}}, {{cite:6855a077a984ef3066f8afebf832016b2bac2735}}, {{cite:86eb5ad3ab794448b13319a3835030c68ebc58c1}}, {{cite:1df2eb02d20c17e15230363e2ec0a6e063410f61}}, {{cite:e0812011fe0fd51bd41c60aecca754d00beadf67}}, {{cite:f23f7c53505c7804c5c8877731e140027af559d2}}, {{cite:6df7abd357036265ce1fba64b667fc5e16122b03}}, {{cite:0266da5ec4a303ba6fbfa6d5009f75c9a6c3dc0f}}, {{cite:2e6a5e31a3832b74478565fa3ea7085a1eacf372}}, {{cite:9deee368b23b52c120f46983cc41d12369b7ac31}}, {{cite:cb5695efd94c2176a69b83c2d063b4c0dfb1918d}}, {{cite:581a119e9ef8b6dab4cac30e7807ae58a06f3746}}, {{cite:19e08016f18ee18f3dc24c5db57c179fe59df3d0}}, {{cite:8948a970360557b4e9562d2284ef6ff7f2a20eeb}}, {{cite:55e5bdae354cadf3e26cd5e9b76023d42faf42bf}}, {{cite:1c129b0cfcf8c3a63ca1beb88edc98ffb48f95e1}}, {{cite:bd3cc28f3638ffbaeface92a97295ccde3ec2598}}, {{cite:73263d1fd813e2c6728cc254d5e73723363dbc2c}}, some of which could even explain various experimental anomalies, such as the muon anomalous magnetic moment  {{cite:f95c4a6e95c68c6d599ea2ba9ac56eec11f8eb05}}, DAMA/LIBRA annual modulation {{cite:7453b4669f256eddd616ceec3522c0c15052f66c}}, anomalous cosmic ray positron excess {{cite:68987b38328cbcaabb9cb760b11b9d2f28e7ff65}}, {{cite:2c97657f3fd4d5b74e905a36459a8f1e17b09e28}}, {{cite:c790c7f3eb7ce3fccac913e646f1fb943f774e38}}, {{cite:baf1b76fa670127e02cb92697f7ab6c677792a9f}}, the galactic center gamma-ray excess {{cite:13371eebd4b1dd6403213fc2399f414258cd28ac}}, and XENON1T electron excess {{cite:2cbea0e2217f8916117da1e45933a4eb091fb2d5}}. Dedicated searches for LDM in direct detection {{cite:6e629c3d2c3ca7aaad1180fc52097499140cb77e}}, {{cite:a8f2dba4ff59e2d44442ebeb8a22b3c12422a2f6}}, {{cite:fd658033da008f2d9c7c9048b8fa1e4edb9e33c6}} and beam dump {{cite:5d7b051db778c69a39ed7da52042a8565de4b55a}}, {{cite:61a47cc46fd40b52ae661a2de840cb5003f1177f}} experiments have also been discussed.
i
5dedb89069f792640f4cb391e1aaf370
The accuracy of DenseNet on the Flower dataset {{cite:28864dfe1bdc5e51e47cae1837019cc3a1669bd4}} is 98.1%, which is around 5.5% higher than that of the second-best performing state-of-the-art method (PC-ResCNN {{cite:bee031dd4d02c84bdb82fa46d0d0d457d9ecbd94}}) in Table  REF . Similarly, the other traditional classifiers also outperform the fine-grained ones by a significant margin. It should also be noted that the performance on this dataset is approaching saturation.
r
5a12ba2ad194510c8ab3c1309a4e79c5
In this paper, we propose a theoretically motivated and provably optimal solution to enforce the affine constraint of eq:constraint3 into DNNs. In particular, by focusing on convex polytopal regions {{formula:286ad19d-f342-43dd-ac43-214d0743c78c}}, we are able to exactly enforce the model to be affine on {{formula:30cfd5fb-5cc5-4bf8-916d-3fa4360482d1}}; a visual depiction of the method at work is given in fig:classification, and the pseudo-code is given in algo:police. Our method has linear asymptotic complexity with respect to the number of vertices {{formula:fcf6f99a-28ea-4e34-8bf5-09cbb4cd5b6e}} defining (or approximating) the region {{formula:8360abdd-11ec-4493-8d8b-1df6d6dcd900}}, i.e., it has time-complexity {{formula:8c9c5201-df3f-42a3-83db-2d7548633137}}, where {{formula:33e5dede-94af-48ed-b63f-8f15fd5a994f}} is the complexity of the model's forward pass. Hence the proposed strategy will often be unimpaired by the curse of dimensionality since, for example, a simplicial region {{formula:b9970a1a-ca13-4482-9beb-8c117968f3b1}} has only {{formula:37fc789c-b5d2-48c3-a67f-e94c6f610474}} vertices {{cite:a3dc1ac5661b325cdbabb4155ca168988c95c8eb}}, i.e., our method in this case has complexity growing linearly with the input-space dimension. Our method is exact for any DNN architecture in which the nonlinearities are piecewise-linear, such as (leaky-)ReLU, absolute value, and max-pooling; the extension to smooth nonlinearities is discussed in sec:conclusion. We coin our method POLICE, standing for Provably Optimal LInear Constraint Enforcement, and denote a constrained DNN as being POLICEd. Two crucial benefits of POLICE are that the constraint enforcement is exact without requiring any sampling, and that standard gradient-based training can be used on the POLICEd DNN parameters without any changes.
i
88b55cbb56fdb8c4b8536a54238c11fa
The typical connection density in the Industry 4.0 era is about {{formula:a7419987-d70c-48e3-ad2a-185d1153e25f}} /km{{formula:ba727c63-049a-4038-81f1-3b7f6a950e26}}; however, wireless communications have a limited spectrum bandwidth for transmission. Therefore, transmission scheduling among the ubiquitous sensors over the shared, limited number of frequency channels is a critical issue. Most existing wireless scheduling works aim at achieving certain communication performance, including throughput, latency, and reliability, and are agnostic to upper-layer applications, such as estimation and control {{cite:cedfa199cd414490e47159c30088b4e1c16bd0d2}}. However, for a multi-sensor-multi-channel remote estimation system, where each sensor measures an unstable dynamic plant, the scheduler must guarantee the stability of the remote estimation of all plant states. Otherwise, some of the plants cannot be stabilized, leading to a catastrophic impact on real-world systems. The design of a stability-guaranteeing multi-sensor-multi-channel remote estimator is a challenging problem and has drawn much attention in the past few years.
i
5e923800af3208e4a0cce1071724cbdf
Early regression-based methods such as TextBoxes++ {{cite:c08cd29aa753095e0c6d5c969a9c62d1ebb48e97}} and EAST {{cite:7cdf304ac7f51b70908cb20cc502adb8934e006c}} used SSD's {{cite:e633b27055dc6ca03866e2985adbc9e8e0ca4add}} architecture to detect text regions with rotated-rectangle or quadrilateral descriptions. More recently, {{cite:6ff77f1d59fd6e6d9aff0e4e4927caf128c2c511}} extended DETR's {{cite:b316ba02db88ac87735e7b0c602e845245ba31ff}} architecture to output rotated rectangular boxes directly and achieved SOTA performance on multi-oriented benchmark datasets. However, these representations ignore the geometric traits of arbitrarily shaped curved texts and end up producing considerable background noise.
m
aeb4631994be745fc971353375f7abdd
The proposed framework assumes the existence of latent components that connect the exposures, mediators, and outcome. The identified components are linear combinations of the exposure/mediator features making the interpretation less feasible. Though an ad hoc approach is suggested to sparsify the loading profiles, integrated approaches can be a future direction. This type of approach may also apply to the scenario of high-dimensional exposures and/or high-dimensional mediators. In the current study, asymptotic properties are derived under the low-dimensional scenario. Asymptotic theories under the high-dimensional setting are challenging without any constraint or regularization and thus are left to future research. The proposed estimators are likelihood-based estimators. In practice, when properly scaling the data and imposing unit variances ({{formula:3b37823e-b9e6-465b-9cdd-ebe1d29213c0}} ), it is equivalent to the least squares estimation. The consistency still holds but the estimators are more robust to non-Gaussian continuous distributions {{cite:db3dfbbfa6b03426c2f2e66a4fe1bae4b4f410f4}}, {{cite:a986251f0c00824e57f84e21c6381b78e7577ddf}}. For other types of data outcome, such as a binary outcome, extensions to the generalized SEMs are feasible but require further investigation. The proposed framework assumes no interaction between the exposures and mediators. With multiple exposures and mediators, an extension of including interactions is not straightforward and is considered a future direction.
d
0bc8b31502c2ebeba1f39b349772beec
General transformations between these two sets of constants can be found in {{cite:fc70c66e2ab3de7173a8f874669e470367174a44}} (or Appendix A of {{cite:7e31e3dffd460c1448ead67e7cfd558deaf70f8c}}). The nonlinear information coding of Lanczos coefficients plays an indispensable role in demonstrating universality of operator growth {{cite:7e31e3dffd460c1448ead67e7cfd558deaf70f8c}}.
m
55d4e8a24dd885ec0e13fc039e3665ea
The consideration of small black holes presented here can be generalized to the Reissner-Nordström black holes {{cite:82ee1a42dd8deb7c42e1eba050cba27217601837}}, {{cite:882fd565b8f7bf2ce84dda6dd41f380c040b0ee2}}, charged dilaton black holes in non-asymptotically flat cases {{cite:e63753fc8b5797e8e6aa6d984162689d8607a1bf}}, {{cite:ba498a1b086a2e93c7d286e373b36ff3a80aecf8}}, as well as to black holes in (A)dS {{cite:5dd280c0cd71dadcd88111115e43c1aea52781e9}}. The latter are of particular interest in the context of possible phenomenological applications in condensed matter physics {{cite:24d33a970f4dd1ad55d9045717fa2a4ef2753edd}}, {{cite:02ac7c3d2383b42dc08a2d29fb6e30ac3158256c}} and quark-gluon plasma {{cite:b3e15f1dcb1a4012146c7947f15c14a079a6cf5e}}. In particular, the entropy for the dilaton charged black hole contains the {{formula:efc90fa4-2131-4b2e-8709-87dc895d814e}} term (we thank Hyun-Sik Jeong for drawing our attention to this fact).
d
2d2437571dba3aa12fc373a5b3157b09
For more details we refer to {{cite:24223e35e21ae65634858cec06b75991ca8b181e}}.
i
60c087fa174380c82bd179b1ee77311f
We emphasize that there are conceptual differences between the flow method and the standard bulk reconstruction. (See {{cite:574f2a95d561ba1269ab6978c26dd68516275692}} for a different point of view on the standard bulk reconstruction.) Although both methods formally employ the same smearing function to construct the bulk field from the boundary field, they are in fact complementary to each other. (i) The smearing in the flow method is applied to the elementary non-singlet field {{formula:093511f8-361e-4d0c-874d-db01d25d081a}} in the Euclidean path integral, while the bulk reconstruction gives a relation between boundary and bulk free local singlet quantum operators with Lorentzian signature. (ii) In the bulk reconstruction, the boundary CFT primary field must have a conformal dimension larger than {{formula:0c62e6e7-f3d6-4fb8-8568-d42206e6b4be}}. (This constraint can be loosened but then the formula needs to be modified {{cite:8f77362ba076e1e59f29efe1d9501224821da93b}}, {{cite:9f5ba873936509b1743a90581ca89f87abf44a63}}.) On the other hand, {{formula:7c511b55-573d-46e2-a9e2-a5c1e70bd9de}} in the flow equation must have {{formula:434c2c7c-2f17-4a48-8d35-6fd4cd6803f4}}. (iii) The VEV of the metric field realizes an emergent bulk AdS geometry from the CFT in the flow method. On the other hand, the existence of a background AdS spacetime is assumed from the beginning in the bulk reconstruction.
d
099cce5e4012aa85cc05d469ca65dde1
Theoretical guarantees for treatment effect estimates generated by (REF ) often rely on statistical models of the outcomes {{formula:d013bc24-ca91-4959-9f13-3f89d0babe33}}. While synthetic control has good performance under a range of outcome models, there may still be doubt whether these models are plausible in the settings that synthetic control typically studies, in the spirit of comments by {{cite:a57ba5e028b747a5d644cb7d13ac036526be624c}}. In contrast with the usual outcome-modeling approach, we instead consider a worst-case setting where the outcomes are generated by an adversary. The adversarial framework, popular in online learning, dates back to the works of {{cite:e65ed58294154904fbc848e5a04335d55e82b1c9}} and {{cite:eaa99ddbb030e7bf37e9c6c080efa01744754f1b}}. It has the appeal of giving decision-theoretic justification for methods while being entirely agnostic towards the data-generating process. Since a dizzying range of reasonable data-generating models and identifying assumptions are possible in panel data settings—yet perhaps none are unquestionably realistic—this worst-case view is valuable, and worst-case guarantees can be comforting.
r
dcd4379a951aa63a1840ee7be58acbe7
where {{formula:aec9a90f-d2a4-4fed-8947-a4fe1efb749d}} is a two-layer fully connected neural network. The final network output is given by applying a softmax function to the vector of logits. This approach to fusing the embeddings of the other devices to make a permutation-equivariant scoring function is maximally generic (see Theorem 2 in {{cite:ba432c96d5454ccbb2e66c050542855b11e6f207}}).
m
9947919d5ca4afdedd1ab7832443252c
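A hedged sketch of one such permutation-equivariant scorer: each device's embedding is concatenated with a symmetric (mean) summary of the other devices' embeddings and scored by a shared two-layer MLP, followed by a softmax. The mean fusion is our illustrative choice; the cited construction is more general.

```python
# Permutation-equivariant scoring sketch; assumes at least two devices.
import torch
import torch.nn as nn

class EquivariantScorer(nn.Module):
    def __init__(self, d, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, e):                    # e: (n_devices, d)
        n = e.size(0)
        others = (e.sum(0, keepdim=True) - e) / (n - 1)  # mean of others
        logits = self.mlp(torch.cat([e, others], dim=1)).squeeze(-1)
        return torch.softmax(logits, dim=0)  # normalized scores
```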
then one can derive from Table REF and Fig. REF that the change in {{formula:0c9e3034-9f81-47fc-974e-739249d4b4e6}} and so in {{formula:a614bf05-deeb-44ce-87cc-531f7477702d}} from our fit is {{formula:dfd0a369-a35d-4646-9b48-f60be3ce2ab3}} in the redshift range {{formula:f83e6191-526c-4195-9d82-b07f33b0cc4f}} ({{formula:fb1ef45e-c3f7-4708-8cc1-ab5281e105d2}} ), while other observational bounds in the same range (see table II of Ref. {{cite:8aaa84d0c17c82488f5d7ae134c57a92bca2363a}}, which is based on {{cite:b0bc8590ac580068f04d7e0dc4e46fcb9fd144e8}}, {{cite:e261b69251ba9798f6dcb8343f433473d12ba2f5}}, {{cite:c8d441fd2e3651432a3b61fd67b23839da81df58}}, {{cite:0be67b8c4df82a69580a5664a1b2015ad839d87d}}) give {{formula:ae37393d-4eb0-48e9-9044-8c57f8178c42}}. Still, our estimate is compatible with other cosmological constraints, such as the ones derived from the CMB Planck first release, see {{cite:285a93d004a534dff2f73e5687d0524d0626d252}}. Moreover, recent observations show that both positive and negative values of {{formula:27c86fc7-69e0-4cec-92fc-f956cca3eab6}} are possible (the so-called {{formula:202b4529-4105-437d-bad2-5494ec5a061c}} dipole {{cite:8caaa9a0c1ae7a896a987f1d9e8830fe372079e9}}).
r
4699a7ec2563ef19ff3e139d87d4cc34
One interesting proposal to solve these issues was presented in {{cite:51325f57f7a85e0c6e90ac14a9a5619d97214433}}, where a generic EoS depending solely on the form of a dark energy-like term and its derivative was found. This approach can be adapted to any polynomial to test the viability of cosmography, since this result can be tested with a supernovae sample trained via a RNN+BNN network that solves problems with over-fitting at lower redshifts and increases the density of data in the {{formula:c9d49431-be9e-47cc-beb4-c4bd79ee5ef2}} -region of interest. Furthermore, as an extension, in this work we design a new neural network to add a new calibrated GRB dataset, which can therefore help to compare the cosmographic {{formula:3e79e106-f9d3-4687-9d5d-a4402060dde7}} relation inferred from supernovae observations (assuming flatness) with the cosmographic {{formula:90443d53-a766-4530-b9dd-4359eb55fed9}} relation inferred from supernovae and GRBs, extending to higher forecast redshifts. In the near future, high-redshift spectroscopic surveys {{cite:5f64b4e51ad1791a21953257171abe07e812bfd9}}, {{cite:8afdab652d9f9da8a397a446fc02452de4ea7c7a}} and/or gravitational-wave missions and projects {{cite:e1157e2af3c9cd2dd4689e3d98c2db644da49707}}, {{cite:067b39615fcd78112b94779c3881d5fbb8f94bba}} will provide accurate data in the redshift range of {{formula:7cd420bf-8c9e-41e9-acd4-7466572bb492}}. Meanwhile, our trained RNN+BNN method will allow us to acquire a higher density of precise distance-modulus data, for both SNeIa and GRBs, over the wide redshift range of {{formula:d0cdc733-3670-4804-b298-eb77bfbd3db5}}.
i
956bfaf6d09fea9c73d3e513fe9e89cf
These results raise several possibilities for interesting future work. For example, the mixing layer is an “amplifier” flow characterized by convective instability and transient energy growth. It is therefore highly sensitive to the nature of the upstream flow; models developed for a specific inlet forcing may not generalize. Further consideration of this issue could be the foundation of a multiscale approach to input/output models in the resolvent framework {{cite:edc65beef7e12571a4e04b6d5288194e3c8f704c}}, {{cite:0c21676e99df6b3db0bf4604cb7cc6b2e33329ea}}, {{cite:8a85ed952197c0126ba08a609cc8e4f58c07d137}}, {{cite:608e82db63c273f2d9e3d5ae8457bfd340526183}}, {{cite:95a04df154ea08af49d794fb32bad2ea6f9acf9c}}. In this work we neglect the time-varying boundary condition in the region upstream of the modeling domain, but an input/output model could more properly treat this explicitly as forcing, as is done for instance in control-oriented reduced-order models {{cite:1c572c12ce19748f938cc6bcf314b57e938574c3}}, {{cite:781f5038f8b4bb788f93a6f5e2c42f9be235799a}}. Recent work on trajectory-based optimal oblique projections has also shown promise for capturing dynamically important, but typically low-energy, modes in an input-output framework for model reduction of open shear flows {{cite:11f88e239e67a5af23b3970e54e3ae349f595ebe}}.
d
0d47619e28401dae933311e861250612
In open-ended text generation tasks, e.g., story generation and text generation after prompts, stochastic sampling methods are used instead of beam search to increase the diversity of the output text {{cite:fc7dacdcc59562b49ed0ac1968b7e2a7a26c1dc1}}, {{cite:f8f76e7faf00da5c1f1728be458a2d3c18df9126}}, {{cite:c94586b3c217c515f1670a8811b92d6af633f91f}}. Although image captioning does not fall into the category of open-ended text generation, as input images tightly scope the correctness of captions, we additionally test whether the randomness in stochastic sampling can increase the output vocabulary. We used Nucleus sampling {{cite:fc7dacdcc59562b49ed0ac1968b7e2a7a26c1dc1}} with a hyperparameter {{formula:f85263d1-3a02-4663-8a59-a892af8b47b3}}, which is the best reported hyperparameter {{cite:fc7dacdcc59562b49ed0ac1968b7e2a7a26c1dc1}}, {{cite:c94586b3c217c515f1670a8811b92d6af633f91f}}.
m
79e8d71915b404e0f664439babe03887
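For reference, nucleus (top-p) sampling as used above can be sketched in a few lines; `probs` is a next-token distribution and `p` is the truncation mass (the value 0.9 is an assumed default).

```python
# Nucleus sampling sketch: keep the smallest prefix of sorted tokens whose
# cumulative mass reaches p, renormalize, and sample from it.
import torch

def nucleus_sample(probs, p=0.9):
    sorted_p, idx = torch.sort(probs, descending=True)
    cum = torch.cumsum(sorted_p, dim=0)
    keep = (cum - sorted_p) < p          # mass before each token < p
    kept = sorted_p * keep
    kept = kept / kept.sum()             # renormalize the nucleus
    return idx[torch.multinomial(kept, 1)]
```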
Apart from utilizing quantum-specific properties to enhance Shuffle-QUDIO, we can also leverage experience from classical distributed optimization, such as Elastic Averaging SGD {{cite:60c92cbee54707273344ef53e3fdeea2af5824b1}} and decentralized SGD {{cite:a14fe75fff81013ae870f142ed5caf14e598abdc}}. It is worth noting that the flexibility of Shuffle-QUDIO makes it easy to replace some components with advanced classical techniques, as discussed in Sec. REF . Taken together, we expect Shuffle-QUDIO and its variants to speed up the computation of variational quantum circuits and to help tackle real-world problems with NISQ devices.
d
4b112751157e5dfedec71ab1ad6ccb55
Consider the situation where a computationally bounded client wants to delegate her computation to a server with access to a full quantum computer. The protocol for blind computation provided in {{cite:ee5ced8143d1ce5da82e0b6db1473bf1aa0c30bd}} ensures that the client can have a server carry out a quantum computation for her such that the client's inputs, outputs, and computation remain perfectly private. Hence, a malicious server cannot learn any of the client's information, and all the client needs to be able to do is prepare single-qubit states and send them to the server. (It should be pointed out that this problem becomes non-trivial only when the communication between verifier and prover involves communicating quantum states, as in the protocol of Ref. {{cite:ee5ced8143d1ce5da82e0b6db1473bf1aa0c30bd}} discussed here. When the communication between a single prover and verifier is classical, all standard results in interactive proof systems hold, since they apply to arbitrary provers.) Moreover, the security of this protocol has been shown to follow from the no-signalling principle {{cite:9852a2940ce694cf4903ad543edb28b1e060d937}}. However, while the server cannot learn the client's computation, they can still tamper with it by deviating from the client's instructions. (Indeed, the server could simply refuse to perform the client's computation. Such a refusal is immediately obvious to the client. As nothing can be done to force the server to perform the computation, we only consider situations in which the server does carry out a computation. It is then up to the client to verify whether the performed computation is the one specified.) Using a scheme introduced by Ref. {{cite:ee5ced8143d1ce5da82e0b6db1473bf1aa0c30bd}}, the client can detect any deviations made by the server with probability exponentially close to one.
d
e9da16e47df9404ad8e6bdf4bad6e169
Comparison with other ML baselines: We did not find good benchmarks to compare our unsupervised, iterative inferencing algorithm against; however, for the sake of completeness, we have provided comparisons of the experiments from our paper and appendix with ML baselines such as UNet {{cite:d5bbf2fb8dc661e72e457fb648365f1ce328c32c}} and FNO {{cite:e8e3d68680f75419683c2982c7808de32975407b}}, wherever available. The results are shown in the table below. All the errors are computed with respect to results from a traditional PDE solver. The comparisons are carried out over 50 unseen test cases. In the chip cooling experiment REF , the {{formula:db709f0b-4e0e-4ca2-b5cd-f3ad108c0db7}} error in temperature is compared, since that is the most relevant metric for this experiment. For the vortex decay experiment REF , the mean absolute error is averaged over 50 timesteps. In both cases, CoAE-MLSim performs better than the baselines and has a small trainable-parameter space. In the appendix we present other experiments with comparisons to Ansys Fluent. Since the CoAE-MLSim approach is unsupervised, local, and lower-dimensional, it requires a smaller amount of data and fewer trainable parameters. For consistency, we have used the same amount of training data to train the ML baselines. {{table:25e057bd-5329-4c7f-812d-b10455aa5406}}
d
39b0f9efdc740ca1e839aa0feeb365b0
Density functional theory (DFT) calculations have been performed using a plane-wave basis with a kinetic energy cutoff of 500 eV and the projector augmented-wave (PAW) method{{cite:d1d7b771d99b097fe99db84120890a6a9bb7d916}} as implemented in the Vienna Ab initio Simulation Package (VASP). {{cite:1efa48579bfa9e24faad7866b7fd25a7146800e2}}, {{cite:5d9de48deaf0d8d5bbd4daf8c9da5477466c2dd2}} The Perdew-Burke-Ernzerhof (PBE){{cite:19f5b10b03f42fcd17dfbd4a48f62a5ec42add30}} version of the generalized gradient approximation (GGA) has been used as the exchange-correlation functional. Both in-plane lattice constants and atomic positions are relaxed using spin-polarized GGA+{{formula:01b86453-b5f9-41c4-b56c-48a633fc1797}} until the total energy is converged to 10{{formula:5010a8ac-871a-42bc-b3a8-56a4c722163c}} eV, and the forces on each atom are converged to 0.001 eV/Å. The reciprocal space integration has been carried out using a 15{{formula:303b7049-952a-4790-a85c-fcc06a7aae6a}} 15{{formula:ab3d77ee-3c77-49e2-a247-343388701e07}} 1 Monkhorst-Pack k-mesh grid. The phonon dispersions are calculated using density functional perturbation theory (DFPT) {{cite:7c8016888f72fd27e921c093e1bd057e26280ba3}} as implemented in the PHONOPY code {{cite:137e9305b5fa6fad07b7e27c1ad33db0a0048b0d}} with a 4{{formula:f6796b65-cf8a-443a-9463-b501e104d90f}} 4{{formula:a6e21125-489a-472a-a52d-baa17a1ff1e3}} 1 supercell and a 3{{formula:35404144-ccee-4b48-b92a-79b81907cb0a}} 3{{formula:738730e5-ac47-4a70-9949-7cc5b13b6a87}} 1 k-mesh. Ab initio molecular dynamics simulations are conducted on a 4{{formula:0e8eb6ed-2abc-4d71-af8d-fcba0cff3ebd}} 4{{formula:6e9eca95-a71c-4300-956f-d971f24adaff}} 1 supercell by employing a canonical ensemble and the Nosé-Hoover method. {{cite:2e60e3e87168566b272529fca129c7c926fda613}}
m
aee782766fd5ef01bd361c78a07caca7
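For readers who script such runs, the relaxation settings above map roughly onto ASE's VASP interface as follows; this is a hedged sketch (the GGA+U tags and the exact energy tolerance are not fully specified in the excerpt and are omitted).

```python
# Sketch of the stated settings via ASE's Vasp calculator; only parameters
# given in the text are set, everything else is left at its default.
from ase.calculators.vasp import Vasp

calc = Vasp(xc='PBE',          # PBE-GGA exchange-correlation
            encut=500,         # 500 eV plane-wave cutoff (PAW)
            ispin=2,           # spin-polarized relaxation
            kpts=(15, 15, 1),  # Monkhorst-Pack k-mesh
            ediffg=-0.001)     # force convergence: 0.001 eV/Angstrom
```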
On the analytic side of the argument we will rely on the mean value theorem for Dirichlet polynomials {{cite:a7024517b9fcd24bb3a2cd1cf3e1a43cf73f84dc}}.
r
b03df3ef16030875ab3bb85daaabd5ce
First, while this paper answers the question of whether observational studies are worth conducting, the symmetrical problem can also be answered. If experimental data are unavailable, the bounds of PNS are {{formula:98578398-0bb9-4276-bf24-5248670cda05}}, which are less informative, as discussed in {{cite:885d2b930f4b853e2105633d75835d38803ecb9d}}. In such cases, experimental data should be obtained either by a randomized controlled trial or by the adjustment formula {{cite:d5c0ed4f278395da597bf042ddf34c6eb62e6003}}.
d
d725d42a963193029cab32d2ff7c2b83
In optical fluids, it has been shown that a positive (self-defocusing) third-order nonlinearity can be induced by propagating a laser field within a hot atomic rubidium vapor {{cite:faa4c26c5d06f61484637737e82f06e30b5f2a97}}. The strength of the nonlinearity was reported to be on the order of {{formula:4ce08c8b-f05f-4690-8746-76bd0f31899e}}, yet it still yields nonlinear effects such that the system displays a Bogoliubov dispersion relation in which low-wavenumber features propagate like phonons, and high-wavenumber features propagate like interacting particles. Higher values of {{formula:8b7f79d1-3977-4594-86dc-c6a1f5df36ae}} have been shown {{cite:78245af00a6a678e59f3ace58a2b06849bc83401}} in indium tin oxide using a modestly-powered pulsed laser ({{formula:9a5e6fd9-5a49-4c5d-98e0-4beee5e726ac}}, {{formula:7d5a6ada-1cb1-49b3-83bd-74f817f5e3bf}}) and producing a nonlinearity on the order of {{formula:774736bd-cfe4-4925-95bf-73640ef620d7}}. It is conceivable that even higher nonlinearities can be achieved in other materials {{cite:f2b824bb31890cd49bb4c11b01270ac4770858a2}} and using higher-intensity beams. It is important to reiterate that vortex biome effects should be present for any value of {{formula:3b22ec6e-fe1a-46aa-b0e0-acec759d862a}} greater than zero.
d
5c8deb9031b468cb3fb36076eaaff7c8