text: string (54 to 548k chars) · label: string, 4 classes · id_: string (32 chars)
Finally, we conducted a post-hoc Bonferroni test {{cite:edc23f09f2945f8ed9a52b51ddc47ed29e2bad87}} to rank the proposed regularization method against training with the hinge loss only, and to evaluate the statistical significance of the obtained results. The performance of two methods is significantly different if their average ranks over the datasets differ by at least the critical difference: {{formula:694d6d27-43b5-467b-8050-227e4f6f0d87}}
r
75a3d6635e9e456ef45f9aee4bbfaea9
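The critical-difference rule above can be sketched numerically. The formula below follows the standard Demšar-style post-hoc setup; the critical value q_alpha depends on the chosen test and the number of compared methods, so the concrete values used here are assumptions for illustration:

```python
import math

def critical_difference(q_alpha: float, k: int, n_datasets: int) -> float:
    # CD = q_alpha * sqrt(k(k+1) / (6N)): two of k methods ranked over
    # N datasets differ significantly if their average ranks differ by >= CD.
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n_datasets))
```

For example, k = 2 methods compared over N = 6 datasets with q_alpha = 2.0 gives CD ≈ 0.82.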
Evaluation Metrics: Two widely used clustering evaluation metrics, i.e., Accuracy (ACC) and Normalized Mutual Information (NMI), are employed to assess the clustering performance. For both metrics, a higher value indicates better performance. More details on these metrics can be found in {{cite:cd15900a785c46b87ad63861573dbb89fe587cae}}. {{figure:0d8d05bc-450f-43b0-b239-b880111e1dab}}{{figure:1f8eeae5-dd87-4c06-8826-8e866f44b703}}
m
cb08369e6d33ba590e318a70ea972728
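As a reference for the ACC metric mentioned above, a minimal sketch that matches predicted cluster labels to ground-truth classes by the best permutation (practical implementations use the Hungarian algorithm, and NMI is typically taken from a library such as scikit-learn; this toy version assumes equal numbers of clusters and classes):

```python
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    # Brute-force the best one-to-one relabeling of predicted clusters
    # (feasible for a handful of clusters only).
    cluster_ids = sorted(set(y_pred))
    best = 0.0
    for perm in permutations(sorted(set(y_true))):
        mapping = dict(zip(cluster_ids, perm))
        acc = sum(mapping[p] == t for p, t in zip(y_pred, y_true)) / len(y_true)
        best = max(best, acc)
    return best
```

Because cluster indices are arbitrary, a prediction that is a pure relabeling of the ground truth scores a perfect 1.0.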
the last integral is finite for all {{formula:6ab90ef9-68a2-4879-bf2b-07f4965d4a23}} and all {{formula:722e0a74-2286-4b02-b955-27bf28aec5da}}. It is clear that {{formula:efbcae26-d3a0-4f5c-ab09-7fd6332461b4}}. Now we estimate {{formula:479bb968-e134-4394-9f80-32c6b5b359b8}}. Taking into account Proposition 2, p. 332, in {{cite:b0b4963eae38d09a24714e9115a6aed311874af8}}, we note that {{formula:3c4df4a6-ef7f-4112-93fe-03d35d3dd97c}}
r
b208348faa36d5e904ac778eb797c754
where {{formula:9961341d-b49c-4172-a426-354dd21aba00}} is used to balance between knowledge distillation with the EMD pseudo label and entropy minimization. We note that a trivial solution to entropy minimization is for all unlabeled target data to have the same one-hot encoding {{cite:bb3bbcf2cdd346722036c7e6ec5a2a4945a6e154}}. Thus, to stabilize the training, we linearly decrease the hyper-parameter {{formula:2793495f-06d9-4276-9767-d50cbdae1283}} from 5 to 0.
m
e09c321a57219aaeb714b87fcf641a8c
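The linear annealing of the balancing weight from 5 to 0 can be sketched as follows (the step granularity and total step count are assumptions; the paper only specifies the endpoints):

```python
def linear_schedule(step: int, total_steps: int, start: float = 5.0, end: float = 0.0) -> float:
    # Linearly interpolate the balancing hyper-parameter from `start`
    # down to `end` over the course of training.
    frac = min(max(step / total_steps, 0.0), 1.0)
    return start + frac * (end - start)
```

At the midpoint of training the weight is exactly halfway between the two endpoints.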
Under our stack-and-finetune strategy, the model training process is divided into two phases, described in detail below. In the first phase, the parameters of the pre-trained model are fixed, and only the upper-level models added for the specific task are learned. In the second phase, we fine-tune the upper-level models together with the pre-trained language model. We choose this strategy for the following reasons. Pre-trained models obtain effective word representations through training on large corpora. In the paradigm proposed in the original work by Devlin et al. {{cite:bfd32f9e68dad1fffadc89c0b5102687d4d9d07a}}, the authors trained BERT directly along with a light-weight task-specific head. In our case, though, we top BERT with a more complex network structure, using Kaiming initialization {{cite:87c881bb3c2a6c9d7330b9fa4bde40f6c0f26d41}}. If one were to fine-tune the top models directly along with the weights in BERT, one would face the following dilemma: on the one hand, if the learning rate is too large, it is likely to disturb the structure innate to the pre-trained language model; on the other hand, if the learning rate is too small, the convergence of the relatively complex top models might be impeded. Therefore, in the first phase we fix the weights of the pre-trained language model and only train the model on top of it.
m
56d647dcda97c1ebcbfce5e9cd07ccfd
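The two-phase procedure amounts to toggling the trainability of two parameter groups. A framework-agnostic sketch, mirroring PyTorch's `requires_grad` convention (the class names and group sizes are illustrative, not the paper's code):

```python
class Param:
    def __init__(self):
        self.requires_grad = True  # trainable by default

class StackedModel:
    def __init__(self):
        self.encoder = [Param() for _ in range(4)]  # pre-trained LM (e.g. BERT)
        self.head = [Param() for _ in range(2)]     # task-specific top model

def set_trainable(params, flag):
    for p in params:
        p.requires_grad = flag

model = StackedModel()
# Phase 1: freeze the pre-trained encoder; train only the top model.
set_trainable(model.encoder, False)
# ... optimize model.head here ...
# Phase 2: unfreeze and fine-tune everything jointly (with a small LR).
set_trainable(model.encoder, True)
```

In a real PyTorch run, the same effect is obtained by setting `requires_grad` on the encoder's parameters and rebuilding the optimizer for each phase.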
We assume that the secondary masses {{formula:6de6c3e5-cded-4cd6-94f4-9ac48357036e}} obey a flat distribution between 0 and {{formula:034e60f9-aa83-4169-989d-50c6fc4f169d}} {{cite:6c8e516882fa20a227b9a673c367057585f9c3f8}}, thus giving {{formula:54b84fde-030b-4f83-a6ee-e938b7eb3448}}
m
54c30e4eb4e39f6d15f4711d29d96f02
Furthermore, our point of view has been entirely that of a Rindler observer. It would be interesting to work out the implications of our results for an observer that only has access to {{formula:3e941e12-1cf4-4e9f-bfa1-de0388afc625}}. One can think, for example, of an observer whose past lightcone contains a period of inflation; this scenario therefore connects more naturally to inflationary physics. Previous work suggests the presence of timelike separated islands {{cite:847339791fd54931908f9dc8decf775994200849}} in such a context, whose significance is not entirely clear, although there is recent work in this direction {{cite:b8e58289e0221c2c8aedab16924f5c51ec0d9fbc}}.
d
f2d796958c4a18fd464b9a8227950b70
The conservation of the energy-momentum tensor (EMT) has usually been assumed in most investigations, based on certain physical facts. However, it has turned out that the conservation of the EMT may be violated in quantum systems {{cite:f3834056e747dc2e945c7914e45055f398dcdd1a}}, {{cite:d06590331a3ff3440f2e72c0ba94e644c28c1669}}, {{cite:c54a11a0760bdc219a6dbfeb4d66b0337dbba3c7}}. Besides, Rastall emphasized in 1972 that there is no observational or theoretical evidence prohibiting the non-conservation of the EMT in gravitational physics {{cite:0e28e1efba2b65f783d6f884d82af25cc57bab79}}. He justified his proposal by the fact that the conservation of the EMT (mathematically stated as {{formula:6b12eb69-e061-4294-b202-23cc581a3b7d}}) has only been examined in flat spacetime or in the presence of weak gravitational fields. Motivated by this idea, Rastall assumed that the covariant divergence of the EMT is no longer zero but proportional to the covariant derivative of the Ricci curvature scalar, i.e., {{formula:1ce9dc33-5206-47ec-97b1-468441316d02}}. The constant {{formula:cf52d740-a9c6-4907-a6e7-58f0a7aec1a3}} is considered a measure of the non-minimal coupling between matter and geometry. We note that in the weak-field limit, the usual conservation law is recovered. Recently, a generalization of Rastall gravity has been proposed in which the constant Rastall parameter is replaced by a function of the spacetime coordinates, so that the modified EMT conservation reads {{formula:b74f5a5c-b3e3-4baf-8d5a-ad3163136ed5}}. Interestingly, such a modification can explain the present accelerated expansion of the universe {{cite:bbf2b0e6a24f9b769ac9061c1d14e65c4279a819}}. It has also been shown that a coupling between the energy-momentum source and geometry can produce the primary inflationary expansion even in the absence of a matter source. We refer to this extension of Rastall gravity as generalized Rastall gravity (GRG).
i
a5054b1ae939274b69c07f1bde202d3c
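In symbols, the two conservation laws described above can be written as follows (our notation, reconstructed from the verbal description; the coupling symbol is an assumption):

```latex
% Rastall (1972): the EMT divergence is proportional to the gradient
% of the Ricci scalar, with a constant coupling \lambda:
\nabla_\mu T^{\mu\nu} = \lambda \, \nabla^\nu R .
% Generalized Rastall gravity promotes \lambda to a function of the
% spacetime coordinates, so the modified conservation law reads
\nabla_\mu T^{\mu\nu} = \nabla^\nu \!\left( \lambda(x)\, R \right) .
```

In the weak-field limit both right-hand sides vanish and the usual conservation law is recovered.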
Dark energy with a varying equation of state may be realised theoretically in terms of the dynamics of a scalar field ({{formula:5d8fe9a0-27ad-43c9-9d8e-6a2c3a2bc847}}). One class of such scalar field models, called `Quintessence', is described in terms of a standard canonical Lagrangian of the form {{formula:2205a1d4-1cf1-41a8-b2a7-779089b80768}}, where {{formula:f557a451-22e6-451e-96ec-689a305ed1a7}} is the kinetic term. There also exists an alternative class of models involving Lagrangians with non-canonical kinetic terms, {{formula:d447d6d4-66f0-4144-b6c4-6dd7293a4b9d}}, where {{formula:4d31ab6b-ef1c-451a-85b7-3295e284daf0}} is a function of {{formula:b6c26023-e56e-426c-aade-1c368d4e4c51}}. Such models, called {{formula:8f94b7c6-83c7-46c4-9d80-4c9040542196}}-essence models, have interesting phenomenological consequences different from those of quintessence models. Another motivation for considering {{formula:b8ec53ad-3e3e-4b0e-a848-11bcb486c2c1}}-essence scalar fields is that they appear naturally in low-energy effective string theory. Theories with non-canonical kinetic terms were first proposed by Born and Infeld to get rid of the infinite self-energy of the electron {{cite:162e4abc06d8f0dd6d43b8b4d9677eff5596c9ec}} and were also investigated by Heisenberg in the context of cosmic-ray physics {{cite:27480c552e7a4aaea540f36017a07b782caf9380}} and meson production {{cite:9c1eaff2a59775232430c22176e6937ed4d6d8b3}}. In this work, we consider dark energy represented by a homogeneous scalar field {{formula:e83d18dd-2be2-4fc2-b3d1-5ee7de5f40ef}} whose dynamics is driven by a {{formula:c648bd0a-b4f0-40ad-8343-0e73c7a83b60}}-essence Lagrangian {{formula:7af6b8bc-0ffa-4837-a496-94358fae53fc}} with a constant potential {{formula:f93c1780-33ca-4068-abcc-212aa9ae819e}}.
The constancy of the potential ensures the existence of a scaling relation, {{formula:57dbc9b9-18c3-4487-839d-5bfd50371f9e}} ({{formula:9d4e4fa3-8f5a-4f19-b352-ffabc58ccb80}} constant), in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background spacetime with scale factor {{formula:b07ab6cb-efe2-4fa1-aa79-291d96823809}}. We exploit the scaling relation and observational constraints on the parameters {{formula:61209dc7-d9fc-46c4-b2ad-a0c79a0ac14a}} to reconstruct the forms of the function {{formula:0ee2e0af-e843-4a6b-b318-fb99409957be}} for the different varying dark energy models.
i
e2277534eb1e3cd6087883df81fe4ad5
Attack variants. We have presented the DPC attack we found to be the most effective when the adversary splits its hash power in two constant parts {{formula:ee3cba53-e5e9-4209-a52b-676caba447eb}} and {{formula:c3a855d7-197a-49fd-b674-8ac2f3eeffa5}} . We foresee that one could devise variants of the DPC attack, e.g., using techniques that have been applied to selfish mining {{cite:6dc0dd69b5d599c516e33090c75aa869debf4625}}, {{cite:61febd928862155b84abd8197ca521091c4b600f}}, {{cite:de049515f9cf17217da644d8b4e230616dc2b41a}}, {{cite:c961a71e28e43f5977b34ce347138976c3eac392}}. In these variants the adversary would mine on different blocks depending on the system's state, or dedicate a different fraction of its hash power to extend each of its two private chains. We leave the study of these variants to future work.
d
572bd2fc9b49d65afb81ef822073c399
Our models cover only part of the parameter space. We assumed a fixed dust-to-gas ratio of 0.01 even in the circumplanetary disk {{cite:4c23aadff4b2cd51acb123467d91a956420e1004}}; however, real disks can have values smaller or larger than this {{cite:573625f0deef42b32d80d5d98dfb2e0bf33c89f7}}, {{cite:b8b4f72cab9555cd65b487b68f9e90c70fdf3dc0}}, {{cite:bbcbc3160c345299c27926edfad29e6e3cccbf6d}}, {{cite:870dfb549f1fcdf7643ad3462a6530fb9722b6d3}}, {{cite:cc8f9856e01814e001c407a05cff980396107a61}}, {{cite:cf486054a5ebc9644560dd8b973de397eea77d6d}}, which might affect the results. In this work we have considered the planets to be 50 AU from the star, but the circumplanetary disk-circumstellar disk contrast can be very different if the circumplanetary disk is at another distance. Circumplanetary disks closer to the star tend to be more optically thick, and hotter, than more distant ones.
d
9c03b2f8ca8ecc47fdc816ea06043ad9
and the structural similarity index measure (SSIM) {{cite:9146a8141d01e256b6e6e14e6cb2875cdf0da0c5}}. {{table:d0bd2a1c-820a-4f4b-b48c-35ad91740053}}{{figure:8fec7544-b2e1-4648-a107-5e7bd359fb91}}{{figure:d557dc1d-57ec-490d-b104-1facebf7e991}}{{figure:13741af2-a86a-400a-af86-c068ddba85e5}}
r
13924e30e61581d345e0d40aa7cf1f4c
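For reference, the SSIM mentioned above can be sketched in its single-window form (the standard SSIM averages this quantity over local Gaussian windows; the stabilizing constants c1 and c2 below are assumptions for images scaled to [0, 1]):

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    # Single-window SSIM: compares luminance (means), contrast
    # (variances) and structure (covariance) of the two signals.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical inputs score exactly 1; anti-correlated inputs score well below 1.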
Another important direction is to consider more complicated background geometries. Although only a limited class of Argyres-Douglas theories is realized by D3-7-brane systems, more general Argyres-Douglas theories can be realized {{cite:f8baf04f8dffa0c867242a3cdaef77a1566ea140}} as class S theories {{cite:27b0a86471c61bc0f9941f3eb83e4facff3b6141}}, {{cite:1e82dfb082fb477e294975863a631873cef430cf}}, and some supergravity solutions have been proposed {{cite:29b8a59195aec4a56f1465a595659fdd52c7af4d}}, {{cite:0781d2eb06ef22c2e6492ee429b2c4cfa8c4d1f2}}, {{cite:89355c9ec76ee30b108ca6ed44a534fddce2b3b9}}. It would be interesting to study to what extent the method can be applied in such more complicated backgrounds.
d
6236ece5d5e6121d41ae77813ef24110
To find the transmission probability, a finite-size DSHG is clamped between two electronic reservoirs, usually called the source (S) and the drain (D). Following the well-known Green's function prescription, the transmission probability is obtained from the relation {{cite:34f5b46a7bd5ae9bbd4ba2620f244a1b6489462d}}, {{cite:958552e06ffc1b64b163f4b8193ade074e68e59f}}, {{cite:a5d88c2225558b108ccf7b44359220ca289b763f}} {{formula:3b684819-5be9-4cfa-af30-b5aad5a3820b}}
r
1c2a6026609efbec8f53d274477a1b7b
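The Green's-function transmission relation referenced above is commonly written T(E) = Tr[Γ_S G Γ_D G†]. A toy numerical sketch (the single-level Hamiltonian and energy-independent self-energies below are illustrative placeholders, not the paper's model):

```python
import numpy as np

def transmission(E, H, sigma_S, sigma_D, eta=1e-12):
    # Retarded Green's function of the device coupled to the leads:
    # G = [(E + i*eta) I - H - Sigma_S - Sigma_D]^(-1)
    n = H.shape[0]
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - sigma_S - sigma_D)
    gamma = lambda sigma: 1j * (sigma - sigma.conj().T)  # broadening matrices
    # Landauer transmission: T(E) = Tr[Gamma_S G Gamma_D G^dagger]
    return np.trace(gamma(sigma_S) @ G @ gamma(sigma_D) @ G.conj().T).real

# Single-level toy device with symmetric coupling (Gamma_S = Gamma_D = 1):
H = np.array([[0.0]])
sigma = np.array([[-0.5j]])
T_res = transmission(0.0, H, sigma, sigma)  # evaluated on resonance
```

On resonance with symmetric coupling, the toy level transmits perfectly (T = 1), the expected Breit-Wigner behavior.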
Most of the existing XAI methods attribute the output of a pre-trained neural network to a part of its input. For a multi-class classification problem with {{formula:f95898b6-92b8-4b50-bbbd-c63b6be56770}} classes, let {{formula:9a329cfc-48af-4dbb-945f-9875c47eb648}} be a pre-trained neural network which takes an input feature vector {{formula:b5d2b0df-6e2c-43eb-9a48-ecb3b8b135d5}} and outputs a vector representing the degree of classification into each class. A typical XAI method provides a saliency map {{formula:549802a2-10e7-4494-9a0d-1e3c624214b8}} that maps an input feature vector to a vector whose {{formula:dfbbb68d-ba3a-4bc2-ad22-69a2dce343c9}}-th element indicates the importance of the {{formula:454fe701-64b0-4e3f-bc02-1ed51e7efe46}}-th feature of input {{formula:72fe0396-b46b-4c60-9b4b-cda181ea8809}} {{cite:fb576fe159981aad03d8852a9bd54703cf54303e}}.
m
e0d46eb1fa81dee2474a502eeba12aac
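A minimal saliency-map sketch in the notation above: the importance of feature i is the gradient magnitude of the top-class score, here approximated by central finite differences (real XAI methods use backprop gradients or more refined attributions; this is only a conceptual stand-in):

```python
import numpy as np

def saliency_map(f, x, eps=1e-5):
    # f: R^d -> R^K returns class scores; the i-th output element
    # approximates |d f_c / d x_i| for the predicted class c.
    c = int(np.argmax(f(x)))
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = (f(xp)[c] - f(xm)[c]) / (2 * eps)
    return np.abs(g)
```

For a linear scorer the map recovers the weight row of the predicted class exactly.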
This paper compares two methods: OpenMax {{cite:029adcafac3b2738655829e1d97b9683e8e205b4}} and a baseline algorithm {{cite:352fd56ab737ee0eb1ef0c8b7089bfa48dcb22c0}}. The selected supervisors all consist of a manipulation followed by a discrimination criterion, {{formula:1a1e2c91-172d-4625-a7d1-02c457b439b8}}, which determines whether the supervisor accepts or rejects the sample. The methods are described in detail here:
m
2a199323e926b54bea0d7a3c43622f04
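A supervisor of this manipulate-then-discriminate kind can be sketched as max-softmax thresholding, a common baseline form (the threshold value is an assumption; OpenMax instead recalibrates the logits with per-class Weibull models before thresholding):

```python
import math

def softmax(logits):
    # Numerically stable softmax (the "manipulation" step).
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def baseline_supervisor(logits, threshold=0.5):
    # Discrimination criterion: accept the sample iff the maximum
    # softmax probability exceeds the threshold.
    return max(softmax(logits)) >= threshold
```

A confident prediction is accepted; a flat, uncertain one is rejected.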
There are multiple approaches to the task of nested named entity recognition. One class of methods addresses the problem by enumerating all possible spans of the input sequence and classifying each one with its label, including a "no entity" class. {{cite:09f44cf610bf7fa093c6e76c18543f64753fee20}} classify each span independently by pooling its token representations. {{cite:e1727135a33a3d414f45a33fec5c2b6917c9eb41}} consider the left and right context when classifying the spans. {{cite:b78a613d12b4a5c6f9658328d0ff0e17fe1adfc4}} uses an LSTM cell {{cite:7d75413374376a09a51f180345cf2699782d020d}} to model dependencies between spans that differ by one token. {{cite:f1d7f8aa359573d55d0e6751c4f7ebf753972507}} first filters candidate mentions by predicting all possible start and end tokens and then predicts a label for every mention that starts or ends at one of the boundaries.
m
85b8a84908935fd8204a7f77fb3b55fb
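The span-enumeration step shared by this class of methods can be sketched as follows (the `max_len` cutoff is a common practical assumption to keep the candidate set quadratic-but-small, not a detail of the cited works):

```python
def enumerate_spans(tokens, max_len=None):
    # All contiguous (start, end) spans of the input sequence, end
    # exclusive; each span would then be classified with its entity
    # label, including a "no entity" class.
    n = len(tokens)
    max_len = max_len or n
    return [(i, j)
            for i in range(n)
            for j in range(i + 1, min(i + max_len, n) + 1)]
```

A sentence of n tokens yields n(n+1)/2 candidate spans when no length cutoff is applied.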
Plasmonic devices hold promise for a host of metamaterials applications, due to their ability to control light propagation on a subwavelength scale. Noble metal nanomaterials are in some ways ideal components in such devices – they possess tunable, large amplitude plasmon resonances, which can be excited at optical wavelengths. {{cite:b32ed14316edf5a9bf0334e2339118df9b5cd951}} Nevertheless, metallic plasmonic devices at infrared and visible wavelengths present significant challenges due to bulk plasmon losses. These large losses severely limit the practicality of these materials for a wide variety of applications, particularly in telecommunications and photovoltaics. {{cite:c4ba891610ad7ff194c662ce85e098fdb17fa427}}
i
f6e934ed02be601b0a812825c4601bc9
To obtain the values of the NP WCs, we minimize the {{formula:259ac874-41f5-482d-b9ee-5adbb809df91}} function by taking a non-zero value of one NP WC at a time. While doing so, we set the other coefficients to zero. This minimization is performed with the CERN {{formula:deff04f3-b4dd-4aff-a55b-cec2a397242c}} library {{cite:c3f495782870d231d1297702ca3647dabf37dc6d}}, {{cite:4c68ce178eb167d78225168805b1ad9750e2a323}}. We find that the values of {{formula:11f34ddc-2864-45c3-bdd9-d82fb8fca8f9}} fall into two disjoint ranges, {{formula:fea10406-ff8b-4bd7-bdcd-bd8b2b9218dd}} and {{formula:d4e6793f-94ea-4fdb-b99e-c6e1cc05bdf1}}. We keep only those NP WCs which satisfy {{formula:1e67233d-20eb-4e98-b26e-6acc268ce3ed}}. The central values of the allowed WCs of the NP solutions are listed in Table REF . We do not provide the errors on individual best-fit values because of the correlation between the real and imaginary parts. Instead, we show the {{formula:9daa80ac-1ae7-4d74-981a-10ad1d95b858}} allowed regions for these NP solutions in Fig. REF . {{table:67187b7a-5089-49bf-86f5-4fca09d05c31}}{{figure:eb4a6914-8e48-4849-80e9-dbc0cba7ba77}}
m
af53c6a0ca5b6bc42f3561e2845b26dd
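The one-coefficient-at-a-time fit can be sketched schematically with a simple one-dimensional minimizer (a ternary search on a toy unimodal chi-square, standing in for the MINUIT minimization; the toy function and search bounds are assumptions):

```python
def minimize_1d(chi2, lo, hi, iters=200):
    # Ternary search: repeatedly shrink the bracket [lo, hi] toward
    # the minimum of a unimodal function chi2.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if chi2(m1) < chi2(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

# Toy chi^2 with best-fit WC value 1.5 and chi^2_min = 2:
best_wc = minimize_1d(lambda wc: (wc - 1.5) ** 2 + 2.0, -10.0, 10.0)
```

Repeating this per coefficient, with the others held at zero, mirrors the fit strategy described above.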
The above experiments used {{formula:8e80816f-9cef-496d-92cc-6f680193127b}}-values from null hypothesis tests as criteria for matching. However, since our stated goal is to create groups which are equivalent, not to accept or reject the null hypothesis, future work could use Bayesian statistical methods which allow one to directly estimate the probability that experimental groups are practically equivalent with respect to their covariates {{cite:081f7e8035e5e994524169a414206f8c31956682}}.
d
b803a8df8bc0f953474b873cb0bc23a7
Perturbation and counterfactual explanation methods {{cite:ea0e36660ab76d9e98d6755819670be6dfd68d9f}}, {{cite:0676c260cb7d74cec01db05c8b6414d8e802bda5}} can attribute a static prediction to individual edges or nodes by searching the optimal change to the input graph, while we are handling arbitrary given graph evolution from {{formula:2852efdb-f9b1-4bca-996a-264ac4293a18}} to {{formula:a5b7d255-54fb-42d7-a326-32cea144c277}} and discovering the cause of the change. We focus on attributing prediction changes to GNN propagation paths where information flows (see Figure REF , top). The axiomatic attribution method GNN-LRP {{cite:04de3b8bb142479eb64fff06ae4f7adafa65a5a8}} handles each class independently given a static graph and cannot explain arbitrary change in the distribution of multiple classes. DeepLIFT {{cite:53a88994cd8d932023537f0222b43fdd20e02388}} can explain the log-odd between two classes only (the original and the new predicted classes), which does not fully capture the change in a multi-class distribution. For example, let classes 1 and 2 be the classes with the highest probabilities for the same node {{formula:c7154ca8-e4d0-4fa0-bdce-520fdeeb201a}} appearing in the graphs {{formula:c54912e3-98b2-49d5-bdcd-8a68047a963b}} and {{formula:cf0a4f1f-ff5a-4188-bc74-a464089751f0}} , respectively. One can construct an example multi-class distribution so that {{formula:96798d62-ebd0-45aa-bb54-36a899a31512}} and {{formula:17205991-bc44-4403-83df-580151787989}} , where the log-odd is positive but the positive contributions of the salient factors to the log-odd cannot explain the reduction in the predicted class probability on {{formula:ec1333d0-c75f-43b7-bc4b-ca9c9f664bc6}} .
i
813758b7268a482f828c0ac47c23b634
RL applications {{cite:d605be8e9524ce769475ef45df500f0c35261fd1}} are based on the idea that an optimal solution can be obtained by learning from continuous interactions of an agent with its environment. The agent interacts with the environment by sampling its states {{formula:dde10d97-54f2-45bb-9650-259b5bb7ad67}}, performing actions {{formula:2e24c201-7d61-44b3-a761-2f94d58c1235}} and collecting rewards {{formula:422768af-771f-44df-abcc-b6158fce2d13}}. In our case the vessel acts as the agent and the two-dimensional flow as the environment. In the approach used here, actions are chosen randomly with a probability given by the policy {{formula:5f74d34b-f515-4dde-8400-30e2326cfa36}}, given the current flow-state {{formula:2ef0e0ac-8351-40be-817e-1616eeea0685}}. The goal is to find the optimal policy {{formula:dddbbe4c-fa6e-4df1-bc5c-1012a02c7933}} that maximizes the total reward {{formula:59ab296e-2da2-405d-888d-407d5a8f1499}} accumulated along one episode. To find the fastest trajectory, we used a {{formula:b485ccb2-d277-4826-96e8-c91ecba7cda6}} composed of three different terms: {{formula:7cf20110-382a-4aa0-b109-0c0cd366b8fd}}
m
26d6d5770cae619f7e1eae75f4aa14a8
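The stochastic action selection described above, drawing an action with the probability assigned by the policy in the current state, can be sketched by inverse-CDF sampling (state discretization and the reward terms are outside this sketch):

```python
import random

def sample_action(policy_probs, rng=None):
    # Draw action index a with probability policy_probs[a]
    # (the probabilities are assumed to sum to 1).
    rng = rng or random
    r = rng.random()
    acc = 0.0
    for a, p in enumerate(policy_probs):
        acc += p
        if r < acc:
            return a
    return len(policy_probs) - 1  # guard against floating-point rounding
```

A degenerate policy always returns its single supported action; a uniform policy returns each action with equal frequency.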
A fundamental assumption in meta-learning is that tasks in the meta-training and meta-testing phases are sampled from the same distribution, i.e., tasks are i.i.d. However, in many real-world applications, collecting tasks from the same distribution is infeasible. Instead, there are datasets available from the same modality but different domains. In transfer learning, this is referred to as heterogeneous transfer learning, where the feature/label spaces of the source and target domains are nonequivalent and generally non-overlapping {{cite:a5b510bafcdcc8b7b468e479d62529e9a02b3a98}}. It has been observed that when there is a large shift between source and target domains, meta-learning algorithms are outperformed by pre-training/fine-tuning methods {{cite:6aa5d6f24ea40cad7bdd3537dffb45e0285e4f7f}}.
i
a30c8dcc3302003f9ae684151bdcc927
We compare with three pre-trained Transformer-based models: BART {{cite:62d9e9fe6205c68605fc17fafdf0111377e0db9a}}, T5 {{cite:909ccb2d7de184e080b9b4358876fd541050fb2a}} and BERTGen {{cite:06ec87847cf950a45ba4e3a8604b7d6186c6a51f}}. These models have demonstrated state-of-the-art performance in various tasks. We also compare with GPT-2 {{cite:564a651dad571f04ff3ead838484b31afe0eed9a}} and two recent non-autoregressive generation models: BLM {{cite:9f3138a59ae87f2526345b965731c8e235b76d75}} and POINTER {{cite:f4eb14694602d208db562dc9a4db435a430b54ce}}.
m
f930f3ff8f70d009ab23d24dc8d85ed4
Balancing and control are central to the stable operation of power systems. Secondary control is one of three measures typically installed to enforce the balance between power supply and demand {{cite:9185e169aa47fb7632689fddd8d5f610e90f96e3}}. While primary control acts within a few seconds after a disturbance and stabilises the frequency, secondary control activates fully after a few minutes and restores the frequency to its reference value. Secondary control, also known as automatic Frequency Restoration Reserve (aFRR) in Continental Europe, activates automatically according to the local power mismatch of the control area. A lack of control reserves, meanwhile, requires costly emergency measures such as load shedding. For appropriate reserve sizing and optimal control design we thus need precise modelling and a good understanding of the required aFRR volumes. Furthermore, predicting future aFRR volumes can be helpful for trading and bidding strategies.
i
db3d20881ae721a81a5d1d2739431b4d
Although there is still a long way to go and the challenges are diverse, huge advances have recently been made. Several experimental demonstrations of quantum computational advantage have been performed since 2019, when Google Research claimed to have achieved it on a 53-qubit programmable superconducting processor (Sycamore) {{cite:bf35d3ea085b0a82c5b9f6abe9efe6ee47875fec}}, {{cite:79b2c1beed3fceed22a1ad525986f3506636c0b8}}, {{cite:7005f07697a513805df46a621c416f602d37bcbd}}. Furthermore, in terms of processor scalability, qubit counts (the number of qubits on a chip) are rapidly increasing, especially in quantum technologies based on superconductors. IBM just released a 127-qubit quantum processor named Eagle {{cite:3e836dfd72698c1eef7740464feca789fad27752}}, and expects to present a 1000-qubit chip by 2023 {{cite:879dcc35e8adedf237db865f8a05709e372be722}}. Note that adding more qubits exponentially increases the number of states the quantum computer can calculate with, and thus its computational power.
i
3d81ac5a0766df96e9ca7d1204dc7429
Table REF compares our defense with the benchmarks from the literature (all columns except the last report classification accuracies). In all whitebox attacks, we use the following attack settings: number of steps {{formula:0d07ddec-b42b-4b2c-875c-0caa88c5f37d}}, number of random restarts {{formula:d82d4977-3254-4b13-8f27-85a66cf6d85d}} and step size {{formula:49ea45c7-ffc1-4308-bd6e-a4c42065b9aa}}. For the attacks against our defense, we also use {{formula:fa02644d-1154-4dc2-b028-c1962b87c3c9}} and the backward-pass smoothing mentioned in Section  to make them stronger. See Appendix REF for a discussion of these choices and accuracies for other attack variations. Column 1 reports clean accuracy. In columns 2 and 3, we report results for the {{formula:94887b0b-7520-4cf4-ab7c-9f93c23ea2ec}} bounded whitebox attack with cross-entropy loss and Carlini-Wagner loss, respectively. In columns 4 and 5, we report results for {{formula:0aeaa126-cc35-4c9a-98e5-f91399b003ce}} and {{formula:34cbf8f6-24e9-40a1-9bb1-460b52447226}} bounded attacks, respectively. For the last (sixth) column, we use a decision-based blackbox attack {{cite:d3baae539c7beec486c104f13fe8bebe5ee54c73}} and report the average {{formula:f4141531-fc9a-48a8-b30a-dd0920b78341}} norm of the perturbations that brings the classification accuracy down to {{formula:d9b04cc2-a3b5-4661-bb0f-9f5dd1a51cee}}. This attack does not rely on gradient computation. For a given datapoint, the starting point for the perturbation search in this attack is chosen as the closest (in the {{formula:1df3f105-f613-4098-8102-cb7c60804c16}} sense) datapoint with a different label. The other parameters for this attack are the defaults in the foolbox {{cite:70fb4cafdf24dc080e3dae690dc6f0fdbb0f96a2}} Python package.
r
98bed2501ea73b789d158a651b59b679
All main experimental results concerning the 10-class ASC task are shown in Table REF . In the training set, device A data accounts for around 75%, and devices B, C, and s1-s3 account for around 5% each. Thus, we can regard device A as the source device and the remaining ones as target devices. Based on the device information, we divide the test set into four subsets, representing real source data (device A), real target data (devices B & C), target seen-simulated data (devices s1-s3), and target unseen-simulated data (devices s4-s6). The first row in Table REF gives the ASC accuracy of the official baseline system {{cite:ab74e3d810cad58cd1455bfa31b2f35e69455675}}, which uses a two-fully-connected-layer neural classifier and OpenL3 {{cite:d217a51ae77529d86e8fa107b281f85ac99dad60}} to extract input audio embeddings. From the baseline results, we can argue that good ASC accuracy can be attained on real source data (device A), but a severe performance drop is observed on the other devices. Specifically, a severe ASC accuracy drop is reported on unseen devices (s4-s6), where the ASC accuracy is as low as 44.3%. Rows 6 through 9 in Table REF display ASC results attained with a 10-class classifier. Resnet attains a significant improvement on all test devices over the baseline system. FCNN and fsFCNN outperform Resnet in all testing scenarios, and FCNN achieves slightly better results than fsFCNN. Finally, the ensemble of these three models attains a 79.4% overall ASC accuracy, a meaningful improvement over any of the single-stage models reported in Table REF .
r
d65b9c0cb5e4d3607df3db10859b830c
This research made use of Python (http://www.python.org) and IPython {{cite:7ba3743ac315054fa624059b92e61aa64a31744e}}; APLpy {{cite:a1ae5872719317ea2a9197e2dba994abfc2812a1}}; Numpy {{cite:1b5c19394b55fef042d89f8ca9a52ec93d06d3c3}}; Pandas {{cite:ddb370c6bea537353df420ba072d3d50ea1e1ada}}; of Matplotlib {{cite:9c92e0f0bec681fddcffab2d50f135787b82d4ed}}, a suite of open-source Python modules that provides a framework for creating scientific plots. This research made use of Astropy, a community-developed core Python package for Astronomy {{cite:a6ddc2b24827e3522e6b21342f37cf064376aa76}}. The Astropy web site is http://www.astropy.org. This research made use of astrodendro, a Python package to compute dendrograms of astronomical data (http://www.dendrograms.org/)
d
c577329a2565531a5f64ff05fe583f10
Before describing the analytical tools employed in our experiments, we introduce the necessary notation. We partition the feature extractor {{formula:ee3cf023-5c33-4cee-b1dc-a7d0c8b87389}} of a generic CNN into blocks along the depth dimension and denote by {{formula:bdb49903-9164-409d-b9cf-1acc0b19e737}} the activations (corresponding to input image {{formula:9ee5d718-c282-45f0-aa84-e14a9b7a0134}}) at the output of the {{formula:39849ecb-a10b-47db-aa02-280a2c1e4155}}-th block. The definition of a block depends on the architecture. By way of example, for models in the ResNet family {{cite:bd58138e3a27b1e4b49b583d048bb0178062e314}}, {{cite:2046216cc4b5bbaf77c4eee92ec65686944492d7}} we collect activations at the output of each ResNet block, each of which is composed of multiple Conv-BN-ReLU sequences. In general, {{formula:912c433d-fb6c-4d8f-a3f6-be9104771612}} is a 3-D tensor of dimension {{formula:ae6decfb-171d-41d6-8dd6-d610c301733d}}, where {{formula:21f5af07-9c0d-45dc-a6f0-07be905b2d04}}, {{formula:b4f696dc-6a5f-4231-b32f-1a28e6341ec1}} and {{formula:266addcd-3ea7-485a-a42a-d3dc07a286ff}} are the number of channels, the feature-map height and the feature-map width at the output of the {{formula:2509bc56-82fe-43f1-a073-514ecd85c747}}-th block. For our purposes, we consider its 2-D version {{formula:46eaf1b7-111a-41f7-930f-5297edd0775a}} with vectorized feature maps. Let {{formula:d11adfbb-9666-4f77-9382-9462890305b1}} (or equivalently, {{formula:2ebf0104-52cf-4e72-906e-560f022715cd}}) represent the {{formula:33cf1fad-b5c8-40ce-850f-db0aeec19584}}-th feature map at the output of the {{formula:989fe15e-78b3-4386-981c-c542440661b5}}-th block, with {{formula:478ba6a2-3cf1-423a-b15b-5bd2086962de}}.
m
01cd175a4e6b84d47b009ec2e5cea31a
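The 2-D "vectorized feature maps" view described above is a simple reshape of the block's activation tensor; a sketch with numpy standing in for the framework tensor:

```python
import numpy as np

def vectorize_maps(acts):
    # acts: 3-D activations of shape (C, H, W) at a block's output.
    # Returns the 2-D (C, H*W) matrix whose k-th row is the k-th
    # feature map, flattened.
    c, h, w = acts.shape
    return acts.reshape(c, h * w)
```

Each row of the result is one channel's spatial map, laid out in row-major order.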
The SFR from the 1.4 GHz emission is lower than the SFR from the H30{{formula:50584080-2690-4f7c-b033-2b57ae15f0a9}} data even though the 1.4 GHz emission is measured over a much larger area. The 1.4 GHz band has been calibrated as a star formation tracer using data for solar-metallicity galaxies and relies upon a priori assumptions about the relative contributions of free-free and synchrotron emission to the SED. The ratio of these forms of emission can vary between spiral and low-mass galaxies, but the relation between infrared and radio emission has been found to stay linear in low-metallicity galaxies in general {{cite:e94f5d4363d341340dec650a385169e477459053}}, {{cite:ef5deb231cb13f7ed992a85478c353ce614bf3e8}}. In NGC 5253 specifically, the relative contribution of synchrotron emission to the 1.4 GHz band is very low {{cite:a3e54d9f99233fdf28b220cbfd6b5ca461b31b1c}}, which is probably sufficient to cause the SFR from the globally-integrated 1.4 GHz measurement to fall below the SFR from the nuclear H30{{formula:57c3eab3-7ad7-49e4-8cc9-541a580fdf55}} measurement as well as the SFR from the total infrared flux (see Section REF ).
r
cdf4297a2abe77bb4fff4e2970782128
L. Funar also proved that torus bundles are not distinguished by the profinite completions of their fundamental groups (see {{cite:b633f9a67df6ac73e30e9561cdd410b2ae2c8569}}). It follows from Theorem REF that the torus bundles {{formula:a4dc82db-c203-4d13-be3c-a56c0bb7dfea}} that are determined by the profinite completions of their fundamental groups are exactly those for which {{formula:87449de9-53a4-4751-b1b9-942015b2fca1}}. Note that in 1801 Gauss conjectured that there are infinitely many real quadratic fields with class number one; this conjecture is still open (see {{cite:43fafca98ebae17ee33bab2eaa3c4b80a48419bf}}).
i
484db57092e1b34e42769bbbc7a5bc3c
In Section , we show that our Rawlsian fair adaptation formulation above is readily applicable to any black-box model that computes a score function or a feature representation. For example, we train a model to maximize classification accuracy on a standard dataset used in text classification for toxicity. Our Rawlsian adaptation using its label-likelihood scores and feature representation does not require retraining a different fair model, and shows a significant improvement in the error rates on the worst-off sensitive sub-populations. We also show a similar improvement over real-world and synthetic data sets, when compared against best known post-processing fairness methods {{cite:9477ad6965949c7a775bdbbe68e50a8ff03cd736}} and group-fair classifiers {{cite:7e773efece957185a9a96b94c5c3e1e6955e8592}} as our baselines.
r
00e16a0c6901169df1390ee3a6875387
In {{cite:7648c1b598a2f666c82b00ae6971538b3dca3bd7}}, Zakharov proved that the water waves system (REF ) enjoys a Hamiltonian formulation. Let {{formula:84a43692-f4fc-4ac3-9435-9fb99bcf2475}} be defined by {{formula:345824f0-1f31-47f5-b308-cf6028beed52}}
r
6ff3f3928797814999b4455f670295df
A well known property of shear Alfvén waves in the expanding solar wind is that, for frequencies higher than the expansion rate, they evolve according to the conservation law of wave action {{cite:f308ce329056a7fcb526ee45ea3e51ffdda5d73e}}, {{cite:457b755370e72d6dc0fe64519c9e769d838a6daa}}. In the radially expanding solar wind this leads to fluctuation amplitudes scaling as {{formula:79d2116e-2737-467a-bada-e4b7f0daf838}} . Such a scaling holds true for any Alfvénic structure at arbitrary amplitude, involving also radial fluctuations, provided the magnitude of the total magnetic field remains constant, so that nonlinearities and coupling with compressible modes are quenched. On the other hand, this condition implies that the tip of the magnetic field is bound to rotate on the surface of a sphere of radius {{formula:51212cc8-c33c-441e-8fba-0eee3aa5872d}} , where {{formula:30e5ab44-0bf0-4d3e-bdec-3d62b4726ff4}} is the magnitude of the total magnetic field, thus imposing a constraint on the maximum possible excursion of the magnetic field {{cite:65e4aef1ce3cdf1d0670ddba75f47bdfa5a9a3eb}}.
d
ff50c4fbe77ade113c592d86b77a5b8f
In summary, most recent unsupervised and weakly supervised methods involving adversarial structures only consider the consistency constraints between 2D poses and 2D reprojections or among several lifted 3D poses{{cite:14c9332f8137b56ff1a2894b1053ac6a5c1beb48}}. In this paper, we propose a weakly supervised method that considers 3D estimations along with 2D reprojections simultaneously and train a reprojection network with a GAN synchronously. We also propose a weighted KCS matrix and use it as one part of the discriminator's input to improve the 3D pose estimation accuracy. The experimental results show that our model outperforms state-of-the-art methods. {{figure:06f894c4-55d4-4745-ba42-7d27dbdd767e}}
m
fc7e704af31ff456b8be2e4b6a99587c
Random forests of decision trees are among the most popular models for regression and classification problems in machine learning {{cite:73e97bbac7fdf2e57b96fa2db1be0ead8e14cfaf}}. Their combination with gradient boosting, a specific training method, overcomes many technical difficulties. For example, such models can easily handle data of various sample sizes and quality, automatically process missing data, learn quickly with a large number of features {{cite:83846cb63b9652474f132420ba81a8973c8f5198}}, {{cite:0ed9b58bfbf21f08683c1bb7f5de38c7f9d3b218}}, and are suitable in different settings {{cite:09d433ecfd71f402895b7982be820524fa6dfde2}}, {{cite:2aad4e9f0795551a2580b0d066ee052d4d505184}}. We use the gradient-boosting implementation XGBoost with default hyperparameters {{cite:81b18a9d82e8c5e5e31213d7741a3eac87754845}}.
m
ba4f561ded942f0008679cbe55aba2c3
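The residual-fitting idea behind gradient boosting described above can be sketched in a few lines of plain Python. This is a toy illustration with depth-1 stumps and squared error, not the XGBoost implementation used in the paper; all function names are our own.

```python
# Toy gradient boosting for squared-error regression: each round fits a
# depth-1 "stump" to the current residuals (the negative gradient of the
# squared loss) and adds a shrunken copy of it to the ensemble prediction.

def fit_stump(x, r):
    """Best single-split stump minimizing squared error on residuals r."""
    best = (float("inf"), None, 0.0, 0.0)
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - lm) ** 2 for ri in left) + sum((ri - rm) ** 2 for ri in right)
        if sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]  # (threshold, left_value, right_value)

def boost(x, y, n_rounds=300, lr=0.3):
    pred = [sum(y) / len(y)] * len(y)  # start from the mean
    for _ in range(n_rounds):
        r = [yi - pi for yi, pi in zip(y, pred)]  # residuals to fit next
        t, lv, rv = fit_stump(x, r)
        pred = [pi + lr * (lv if xi <= t else rv) for xi, pi in zip(x, pred)]
    return pred

x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [1.0, 1.1, 0.9, 1.0, 5.0, 5.2, 4.9, 5.1]  # noisy step function
pred = boost(x, y)
mse = sum((p - yi) ** 2 for p, yi in zip(pred, y)) / len(y)
```

Each round, the stump corrects whatever the current ensemble still gets wrong, which is exactly the "continuously reduces the residuals" behavior mentioned above.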
Group discrimination is not a newly proposed concept. Previous clustering-based methods DeepClustering {{cite:e35f8a398e61574e766251cb07eae5ac25e841e9}} and SwAV {{cite:607e49433c81ee2c4580c31ed1d70c83bfdc8133}} also employ it, and in terms of the loss function the three algorithms have a similar form. The main difference lies in how the groups are generated and how they are utilized. Our SMoG aims to contrast the groups, as Eq. REF shows. This is impossible for previous clustering-based methods, since their group features have to be detached and cannot back-propagate gradients to the parameters. Thus, they do not contrast the groups but classify the instances into groups. To achieve group contrasting, we propose the momentum grouping scheme, which allows us to contrast the groups directly and propagate gradients. Although we modify the final loss to Eq. REF for better performance, our SMoG is still a group-level contrastive algorithm, since the group features directly guide the optimization direction. This is the reason we call our method "grouping contrasting" instead of "clustering", to distinguish the two: SMoG contrasts among groups instead of classifying instances based on clusters.
m
e02c1ed67d5a47cad3236623c0c1e77d
In Table REF , we also include results on detecting pedestrians using Faster R-CNN {{cite:992eb5c0564652926ffad2f4d1497c78725b9346}}. The comparison shows that YOLO outperforms Faster R-CNN in both accuracy and detection. Moreover, when Faster R-CNN is chosen as the detector, the AABBFI method does not improve accuracy compared with the average and median methods. Furthermore, Table REF shows that mAP improves with fusion, and it improves the most with the proposed method. In Table REF , the best results are shown in bold.
d
1bca389323670fe2bfdca4a1e7e16f3a
Nowadays, search engines play an ever more crucial role in meeting users' information needs by locating relevant web pages from a prohibitively large corpus. Query-document relevance estimation, the core task in search result ranking, has been the most critical problem since the birth of web search. Numerous works have been proposed to extract the relevance signals given the query and a large corpus of documents {{cite:885960f763ea90099e49795d1a83ff564007bd34}}, including direct text matching {{cite:d9fe444dc0b34ec212d04d3e86d145bf9d0ac2d1}}, {{cite:340db441beba7509d30f50db214bcdc81bf42761}} and link analysis {{cite:fffe6caa00382623dcab2ed9881f082c5c259c98}}.
i
76d22b5c451dd38367f982a500071c37
BasicVSR. BasicVSR outperforms existing state-of-the-art methods on various datasets, including REDS4, UDM10, and Vid4. BasicVSR also demonstrates high efficiency in addition to improvements in restoration quality. As shown in Fig. REF , BasicVSR surpasses RSDN {{cite:0a6f829fa74f994ef5d0a8d0ac7f847e57360ee8}} by 0.61 dB on UDM10 while having a similar number of parameters. When compared with EDVR {{cite:2822786bb74133febf217a3498f0e41c8a92dad1}}, which has a significantly larger complexity, BasicVSR obtains a marked improvement of 0.33 dB on REDS4 and competitive performance on Vimeo-90K-T and Vid4. We note that the performance of BasicVSR on Vimeo-90K-T is slightly lower than that achieved by sliding-window methods such as EDVR {{cite:2822786bb74133febf217a3498f0e41c8a92dad1}} and TGA {{cite:0c7b26fc376432f29419fec0d95b3a605e540b02}}. This is expected since Vimeo-90K-T contains sequences with only seven frames, while the success of BasicVSR partially comes from the aggregation of long-term information (which is a realistic assumption).
m
217a26f15c0a485216de8adbf27bf4df
In {{cite:0a81cea81a9d87a95d60b43e8be3855c996a93da}} it was suggested that the origin of such singular behavior lies in the use of the Kruskal coordinates. The Kruskal coordinates are suitable for the analytical continuation of the Schwarzschild metric, but they are singular in the limit of vanishing black hole mass {{formula:90fa6b8d-388a-48ce-a005-d4ce207a7967}} . Thus, the Kruskal coordinates are not appropriate for describing small black holes. Depending on the initial-state parameters, the unboundedness of the entanglement entropy can occur inside or outside the Planck length. Similar remarks about islands and Page/anti-Page curves apply to other static black holes, in particular to Reissner-Nordström black holes {{cite:82ee1a42dd8deb7c42e1eba050cba27217601837}}, {{cite:882fd565b8f7bf2ce84dda6dd41f380c040b0ee2}}, black holes in modified gravity {{cite:02013648f9b1f9052769c07b2bbc51aad4303851}}, as well as black holes in dS space-time {{cite:5dd280c0cd71dadcd88111115e43c1aea52781e9}}.
d
ffb9f0d0952c9ba99504aff7ab4a33a2
On the theoretical side, the inevitable ambiguities in penguin topologies make it very difficult to evaluate the {{formula:62e8a937-0d22-4b49-bc57-ab44ba3352d6}} asymmetries in the singly Cabibbo-suppressed {{formula:a552df41-7ce1-40d8-9454-513219faffd0}} meson decays. The Quantum Chromodynamics (QCD) inspired approaches do not work well at the charm scale. Owing to the almost exact cancellation between the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements {{formula:0d6b6631-33e8-4bbf-8417-4a0fdd3a1f50}} and {{formula:f6ed7c29-4c50-4a25-b088-c32d7e38ea95}} , the penguin topologies cannot be extracted from branching fractions. In the literature, there are two controversial viewpoints on the observed {{formula:57ad02e1-4129-4149-bf09-ed112d9e1f2b}} asymmetry in charm: regarding it as a signal of new physics {{cite:c42b1a55279d35112589ae49cc05f266131244c2}}, {{cite:d2a48911e72912868978959c73195a479a54b1c3}}, {{cite:904f712ea3c771e8c725df9c6d87c7c266388835}}, {{cite:a471277dc78b5978441502e1214b577c98709e8a}}, or as a non-perturbative QCD enhancement of the penguin {{cite:d3a936af0941734aad2883a4214f6357ed7967b0}}, {{cite:3685ed0a3339f142ca0860bc0f6f7f79582577f2}}, {{cite:0546699974bcbc30a884031a27ddd6a7f2b6f5d0}}, {{cite:d62cfc20e25ab07b3b575f9956bf3a66c5ac7ec7}}, {{cite:045fac531383b905cbaf0110120ccbda4826cf27}}.
i
7d5624dcc53337be77fabee9bc701f37
By the time WMAP and its contemporaries were observing, the field had matured to the point that common tools were used between experiments. HEALPix (http://healpix.jpl.nasa.gov) {{cite:2dbd54e4955794cf6d160e3817430e579aa82bbf}} became a de facto standard for pixelizing the sky, and many experiments began to use Conjugate Gradient (CG) mapmakers on a regular basis. These ideas were refined during the analysis of Planck {{cite:eeeed13d34d3a99095cc0848a75e15e2ed67a64a}}, {{cite:ae7f04d9acc98c909964be4278262725149d9dc7}}, {{cite:75461341f1b6a61b4a90ddcb9aa8929e18146559}}, and since then those efforts have dominated the field. Many mapmaking tools that were developed for Planck have a strong influence on BeyondPlanck, including the MADAM destriper {{cite:b17d0a4b6df09fca357469ee31cf3db98f3ae495}}, the LevelS simulation codebase {{cite:050f820eaefb5df346dc631840960774a7de4cae}}, the libsharp spherical harmonic transform library {{cite:7978e50149a5de81216daec237bce85face7344c}}, and the Planck DR4 (often called NPIPE) analysis pipeline {{cite:77e3900229cdc50005c17ec3f7bcd6d97893e6c0}}.
i
8402363b5a626259edb78525f3d4644d
The adversarial-validation-based method proposed in this paper can not only judge whether the training and testing distributions are consistent, but also further balance the training and testing sets. Specifically, the gradient boosting decision tree (GBDT) {{cite:756424b59f60c7dfde265b0993c0c4296c53480e}} is used as the classifier in both the adversarial validation and credit scoring phases; it is a boosting model that continuously reduces the residuals during training. It was chosen because GBDT-based methods have proven very effective in recent credit scoring research {{cite:59b7d343a2632fa240fb9ae5d4c78e56c697fce6}}, {{cite:5b65dd22207550652a6f9f59bcf1426a0bd861d4}}. Apart from that, GBDT has a very efficient modern library, LightGBM (https://github.com/microsoft/LightGBM) {{cite:5cf460a38c72f24a203de0cef02786d68e4c25ec}}, which has won many data competitions and is used in this paper to build the model.
m
7383da84c120d76fbfcbbde945706e8e
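The adversarial-validation check can be sketched as follows. This is a minimal, hypothetical illustration that uses a single feature's raw value directly as the classifier score together with the rank-sum AUC; the actual pipeline described above fits a GBDT (LightGBM) on all features instead.

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def adversarial_auc(train_feature, test_feature):
    """Label the origin of each row (train=0, test=1) and score how well
    the two sets can be separated. AUC near 0.5 means the distributions
    look consistent; AUC near 1.0 signals a train/test shift."""
    scores = train_feature + test_feature
    labels = [0] * len(train_feature) + [1] * len(test_feature)
    a = auc(scores, labels)
    return max(a, 1 - a)

random.seed(0)
same = adversarial_auc([random.gauss(0, 1) for _ in range(500)],
                       [random.gauss(0, 1) for _ in range(500)])
shifted = adversarial_auc([random.gauss(0, 1) for _ in range(500)],
                          [random.gauss(2, 1) for _ in range(500)])
```

When the feature distributions coincide, the origin labels are unpredictable (AUC ≈ 0.5); a mean shift makes them easily separable, flagging the inconsistency the method is designed to detect.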
The Reissner-Nordström spacetime for {{formula:364cca04-548e-484d-87db-b94d7a306bd7}} , where {{formula:2914158c-9b0f-4ec1-951f-2deb81cf2dfa}} and {{formula:7a6ff0c6-1357-4cc3-aa16-579f3200af94}} are its electrical charge and its mass, respectively, does not have an event horizon but it has an antiphoton sphere and a photon sphere. The photon sphere, the shadow, and the magnifications of lensed images have been studied in Refs. {{cite:dfc45c3bad1e3dd4b75a13209aaacc2e4949bd12}}, {{cite:749e9c7da6f3f07e34ccc950864269d46a5510de}}. For {{formula:f3e6d732-c39d-47a6-8d8f-b90092983e64}} , the antiphoton sphere and the photon sphere degenerate to be a marginally unstable photon sphere and its gravitational lensing has been considered in Refs. {{cite:a69210eed50813f0860508006c112495e1ef7af3}}, {{cite:8f902ea7bea2ce0d5f807ade4fc801ef2b68dc51}}.
i
5d6fe426d10c98ed645b9db331dbad71
We set {{formula:3efed6ae-7545-44df-8e79-c633bac979b6}} and {{formula:d4263c5e-3d04-46e6-b2cc-ec240cacbb02}} . As with the previous experiment, we show representative examples of distinctive dynamic modes and their temporal profiles in fig:eeg. While such information alone is not necessarily sufficient for analyzing EEG signals, it may be informative when combined with existing domain knowledge. For example, we may find some similarity between the patterns in fig:eeg and the ones obtained in the previous work {{cite:41a890a1cda11178a93c1fd95954146bf4f849c4}} using the method of common spatial pattern.
r
6d9797506e580e68143c84113d48a484
by, e.g., {{cite:ef99f027ef5fd9afbc537574dcd91dd985862e84}}. Thus {{formula:de7e5a60-aa8f-4e32-9727-69afe4964f72}} for {{formula:21dec757-b8c1-4c41-b137-ab6de1f192ba}} and (REF ) is satisfied.
r
5bf46bf5d23fb26f493279323907be59
The idea behind the tracklet booster is simple yet effective. We take the tracklets from any tracklet generator {{cite:e2b0550b582f2e1629d8af11169cc18a3db00f71}}, {{cite:56d4c686d16e01a154840654bf79936447f2fde0}}, {{cite:7ccba6239e875241f04475d40b47ff05959d6102}}, or preliminary tracking results, as the input. Due to matching errors, a tracklet may contain multiple object IDs. To clean the IDs within a tracklet, the Splitter is proposed to split tracklets into small pieces at potential ID-switch positions, so that the split tracklets have IDs that are as pure as possible. Next, the split tracklets are fed into the Connector to learn representative embeddings. Finally, tracklets with similar embeddings are grouped to form clusters, i.e., to generate the final entire track. The framework of the tracklet booster is shown in Figure REF . The details of the Splitter and Connector are described in the following subsections.
m
ab038a97e225df2f50cd01a8fcc3c165
The second challenge is the communication between the computing nodes. Although GPU RDMA is discussed in {{cite:787076a8cfd728d4946d3d6dae64cdb9fce54d5e}}, inter-node communication through the network is slow, with bandwidth far below that of intra-node PCI-E data transfers, so frequent inter-node synchronizations hurt the training performance. We introduce a {{formula:f300f486-d45d-434f-9031-9ddc597675d7}} -step model merging algorithm with Adam {{cite:738913e95409cf6c3b6559db448ef210578b9e18}} as the local optimizer, which differs from the common local SGD ({{formula:dd87a615-d09b-4563-925b-1130bb989edd}} -step SGD) in that Adam employs adaptive coordinate-wise learning rates during the training process.
i
ba2862b1a1fbee7a8286f318a5365657
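The merging pattern above can be sketched minimally: each worker takes several local steps on its own shard, then all workers average their parameters, so synchronization happens only once per round. Plain gradient descent stands in for the Adam local optimizer, and the toy per-worker objectives are ours, not the paper's.

```python
# K-step model merging on a 1-D quadratic per worker, f_i(w) = (w - c_i)^2.
# The global optimum of the averaged objective is mean(c_i).

def local_steps(w, c, k, lr):
    """K local gradient-descent steps on one worker's objective."""
    for _ in range(k):
        w -= lr * 2 * (w - c)  # gradient of (w - c)^2
    return w

def train(shards, rounds=50, k=5, lr=0.05):
    w = 0.0  # shared model after each merge
    for _ in range(rounds):
        # workers run in parallel in a real system; sequential here
        local_params = [local_steps(w, c, k, lr) for c in shards]
        w = sum(local_params) / len(local_params)  # infrequent merge/sync
    return w

shards = [1.0, 3.0, 5.0, 7.0]  # per-worker data "centers"
w_final = train(shards)
```

For quadratics the merged iterate converges to the global optimum (here 4.0); in general, local updates drift between merges, which is part of what adaptive local optimizers such as Adam must cope with.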
About 15 years ago there was the claim that a meteor originating from comet 114P/Wiseman-Skiff was photographed in the Martian sky by the Mars exploration rover Spirit {{cite:f94de72aa7aa84f2b9c5797bfb5e922e8d43dc22}}. However, later work quantified the effects of cosmic ray hits on the Spirit Pancam as part of a dedicated meteor search and placed this detection in doubt {{cite:1f2384515d9d38a543c3656f5611d06c9911fcd8}}. We also simulated the trail traverse of 114P/Wiseman-Skiff in 2004 with IMEX and we did not find a significantly enhanced dust flux in that period, consistent with the interpretation that the feature seen in the image taken by the Mars rover was a likely false detection.
d
38ba5e61b79afede0e11b2dc8164777b
The contour {{formula:dbc1d2b9-120e-4022-9f30-07a8dc3822bc}} is the one appropriate for an inverse Laplace transform, running along the imaginary direction with a real part such that all singularities are to the left. This solution matches the one found in {{cite:253664401f3550b34a8093c507150befc0000948}}, {{cite:165df3d79b75082b25d6b8d3c5525501fcbd7a2f}} when {{formula:e4e92ffe-2b12-4ef6-8379-e936de6218b4}} involves only defects in the range {{formula:fdd47cbf-891e-4477-a184-828434f60099}} . Since the connection between the minimal string deformations, defects, and the Belavin-Zamolodchikov solution is valid for any value of {{formula:2f39b795-541f-4042-b67d-8e11f54fd9f1}} , we claim that this solution is valid for JT gravity with a gas of general defects with {{formula:614ea2eb-54c3-4ff3-99e9-10829a6a9f1d}} . The solution we find for {{formula:7a9cdb19-d411-47f3-8e1d-e5a8816ee5a4}} is very different from the {{formula:3aaed61e-cb5f-48cb-a7d5-7320bd7b2c6b}} solution analytically continued in {{formula:9258e92a-364f-4df6-8bcb-11d721bf9dbd}} . This feature is most transparent in (REF ). The new terms we find have a nice geometrical interpretation, as we explain later on. (When {{formula:0ad082a8-a1aa-41c7-880b-c5959e92de64}} there is the possibility of defects merging, and this produces new contributions to the density of states. This is reminiscent of the situation with conical defects in 3D gravity and 2D CFT associated to operators with {{formula:871917ef-f277-4627-b6c8-38c0657b0158}} {{cite:9f1ae7e7923f40fb0f95e39837990fbc9443bd73}}. We thank S. Collier for discussions on this.) It is an open problem to derive this result using the JT gravity path integral representation of the theory, since the methods of {{cite:253664401f3550b34a8093c507150befc0000948}}, {{cite:165df3d79b75082b25d6b8d3c5525501fcbd7a2f}} cannot be directly applied for reasons we review in the next section. {{figure:886aa3bf-c1bc-46a1-bde3-6d6e99d9deaf}}
i
f874f25630e99e8628d92dfabcaf166a
Training DNNs on perturbed examples is the primary approach to improving model robustness {{cite:1fc061ec5921eaaffdd0a9271b5bbb36bf1b7d8c}}, {{cite:9432a04f8dbe076ff482ec2d33a4bc9da31f9227}}. Representative methods include noise injection {{cite:32801b6bdd7443069ddb4d6e1e46af4efc11f8ec}} and PGD-based robust training {{cite:036905e81b418597cdbad3ccdf1e91a1f9a4fc67}}. However, most existing methods assign a fixed level of perturbation (e.g., a fixed radius in {{formula:85e73f42-27ce-4e36-87f5-9d32bb2ea9b4}} norm-bounded perturbations or a fixed bandwidth in Gaussian noise) to all examples, ignoring the fact that each example has its own intrinsic tolerance to noise. In fact, excessive perturbations would destroy the class-distinguishing features of an example, while deficient perturbations may fail to provide helpful information for improving robustness. Intuitively, some examples are closer to the decision boundary, where tiny perturbations could change their labels, while others are far away from the decision boundary and may tolerate higher levels of perturbation. As shown in Figure REF , under the same perturbation, the discriminability of the corrupted images differs significantly if the original images have different intrinsic robustness. A higher-quality image is likely to be able to tolerate heavier perturbations.
i
6f23879c252bc0baa91e5abf03af9274
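The notion of an example-dependent perturbation budget can be illustrated for a linear classifier, where the distance to the decision boundary has a closed form. This is a hypothetical heuristic sketch (names and scaling rule are ours), not the method proposed in the paper:

```python
import math

# For a linear classifier sign(w.x + b), the distance of x to the decision
# boundary is |w.x + b| / ||w||. Scaling each example's perturbation budget
# by this margin gives points near the boundary smaller budgets.

def margin(x, w, b):
    dot = sum(wi * xi for wi, xi in zip(w, x)) + b
    return abs(dot) / math.sqrt(sum(wi * wi for wi in w))

def adaptive_eps(xs, w, b, eps_max=0.3):
    """Per-example budgets in (0, eps_max], proportional to the margin."""
    margins = [margin(x, w, b) for x in xs]
    m_max = max(margins)
    return [eps_max * m / m_max for m in margins]

w, b = [1.0, 1.0], 0.0
xs = [[0.1, 0.0],    # near the boundary -> small budget
      [2.0, 2.0],    # far from the boundary -> full budget
      [0.5, -0.4]]   # near the boundary again
eps = adaptive_eps(xs, w, b)
```

For deep networks the margin has no closed form, so practical schemes must estimate the intrinsic tolerance differently; the sketch only conveys the budget-allocation idea.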
Blind-spot, self-supervised denoising techniques, such as N2V {{cite:bf898ab7942ecdad80653acfbf11052c57a46cba}} and StructN2V {{cite:1fddf124d633933f79eab69b929824e3801b6547}}, remove the common requirement of clean-noisy pairs of data for training a denoising neural network. However, the main drawback of N2V is the random-noise assumption, which rarely holds for noise in seismic data. {{cite:1a8f28bd902f36021245e21bc2696ac7c5a72178}} illustrated how training on data with even minor correlation within the noise field results in the network learning to reproduce both the desired signal and the noise. Whilst N2V's successor, StructN2V, can suppress coherent noise, it requires a consistent noise pattern for which a specific noise mask is built, e.g., masking the noise along a specific direction. This has shown great promise for trace-wise noise suppression, such as dead sensors {{cite:9cb9d61d7dde17dfcb04c6b911094f421264f078}} or blended data {{cite:3b63dbddbbb964fa0ac6e823c1c3867915f06568}}; however, it is not practical for suppressing the general seismic noise field, which is continuously evolving. In this work, we have proposed to initially train a network on simplistic synthetic datasets and then fine-tune the model in a self-supervised manner on the noisy field data. This base-trained model has learnt to replicate only the signal component of a central pixel from noisy neighbouring pixels. Therefore, a very small number of epochs (e.g., two) is needed for the network to adapt to the field seismic signature, significantly reducing the amount of time the network is exposed to the field noise. As such, the final network is capable of predicting the field seismic signature without substantial recreation of the field noise.
d
1311787258566a36099fcdb415c55b05
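The blind-spot masking at the heart of N2V-style training can be sketched as follows: selected pixels are replaced by a randomly chosen neighbour, and the network is then trained to predict the original values at exactly those positions, so it can never trivially copy its input. A simplified pure-Python sketch (function names are ours):

```python
import random

def blind_spot_mask(img, n_mask, rng, radius=2):
    """N2V-style masking: overwrite n_mask random pixels with the value of a
    random neighbour, returning (masked image, [(i, j, original_value)])
    so a network can be trained to predict the originals at those spots."""
    h, w = len(img), len(img[0])
    masked = [row[:] for row in img]
    targets = []
    for _ in range(n_mask):
        i, j = rng.randrange(h), rng.randrange(w)
        while True:  # pick an in-bounds neighbour that is not the pixel itself
            di = rng.randint(-radius, radius)
            dj = rng.randint(-radius, radius)
            ni, nj = i + di, j + dj
            if (di, dj) != (0, 0) and 0 <= ni < h and 0 <= nj < w:
                break
        targets.append((i, j, img[i][j]))
        masked[i][j] = img[ni][nj]
    return masked, targets

rng = random.Random(0)
img = [[float(10 * i + j) for j in range(8)] for i in range(8)]  # toy image
masked, targets = blind_spot_mask(img, n_mask=5, rng=rng)
```

A training loop would then minimize the loss between the network's output at the masked positions and the stored original values; StructN2V differs only in masking a structured neighbourhood (e.g., a line along the noise direction) instead of single pixels.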
The simulation results for optimizing objective (REF ) are presented in Figure REF , where we compare the vanilla decentralized (no compression) algorithm with our proposed compressed optimization procedure using {{formula:1b360bb4-3246-47a0-977e-acf6ffc525aa}} {{cite:da3d5c54eeb5513e41a6e6fef0c72c093aff3cf5}}, {{formula:f3ba97bf-225d-4bd2-aec5-7f7d68aabcde}} {{cite:a7fb82ba9052d5de07bf70efdbc723c72189328b}} and composed {{formula:459da755-3e25-4e6c-9030-f1595a381eea}} {{cite:e89d1e40e08feb7930ad8e69bc94c1537f00049f}} compression operators. Schemes with `Bandit' in parentheses indicate those implemented via Algorithm REF for the case of gradient estimation in bandit feedback, and via Algorithm REF with sample feedback otherwise. Figure REF shows the relative cost gap for the objective, given by {{formula:fa37aa49-8f53-4b57-a2bd-2a19e7a6d580}} , and Figure REF shows the difference of the parameter from the optimal value normalized to the latter, given by {{formula:dda2aaa7-a183-4dd1-a4cf-33501ad09d18}} for iteration {{formula:a3477728-1527-494a-84d8-8bc3180ff002}} . From these figures, we see that schemes with compression, including the ones implemented via bandit feedback, perform essentially the same as uncompressed vanilla SGD in minimizing the objective. The benefit of our proposed scheme can be seen in Figure REF , where we show the relative cost gap as a function of the number of bits communicated among the nodes, assuming 32-bit float precision.
To achieve a target relative cost gap of around {{formula:95440a52-566c-4964-9b19-96d5f1a6c0c7}} , compressed schemes use significantly fewer bits than vanilla decentralized training, saving a factor of about {{formula:fae5f1e1-f383-43be-bd09-d0528d08385d}} with {{formula:1f457890-e4f7-4f05-a8b7-4576c93fefad}} compression, factor of {{formula:e7e9ce24-6369-429c-9276-bb4aee2dec32}} when using {{formula:fdf9f068-35e5-4954-8ef6-bd77f641e700}} compression operation, and a factor of around {{formula:4b811017-ab71-4c7a-80d5-183cd7702485}} for the composed {{formula:c744b88d-f2e7-4346-b66d-818807cd936a}} compression operator. Further, in Figure REF we plot the value of the constraint {{formula:35da4370-7176-4a29-8bb7-50b280e2d1ff}} for a randomly chosen {{formula:827aea3b-fd93-403c-94dd-a51760a8f45f}} and {{formula:09984c21-adf4-4978-a5af-3fb44e44b7b3}} . The constraint value settles to a negative value, which implies that each scheme arrives at an objective value lying in the feasible space of the problem (REF ).
r
7739238d8cd5020fbcd97c2e6529adf3
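The bandwidth savings above come from sparsifying what each node transmits. As a concrete illustration, a top-k compressor keeps only the k largest-magnitude coordinates and sends (index, value) pairs instead of the full vector. This is our own minimal sketch; the cited operators additionally involve quantization and error feedback, omitted here.

```python
# Top-k gradient compression: keep the k largest-magnitude entries,
# transmit (index, value) pairs, and treat all other coordinates as zero.
# Sending k pairs instead of a d-dimensional vector saves bandwidth.

def top_k_compress(g, k):
    idx = sorted(range(len(g)), key=lambda i: abs(g[i]), reverse=True)[:k]
    return [(i, g[i]) for i in idx]  # what actually goes over the network

def decompress(pairs, d):
    out = [0.0] * d
    for i, v in pairs:
        out[i] = v
    return out

g = [0.1, -4.0, 0.05, 2.5, -0.2, 0.0]
pairs = top_k_compress(g, k=2)
g_hat = decompress(pairs, len(g))
```

Top-k is a contraction: the reconstruction error is at most a (1 - k/d) fraction of the vector's energy, which is the property convergence analyses of compressed decentralized methods typically rely on.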
However, in quantum signal processing the classical preprocessing required to find interspersing single-qubit rotations for a given transformation function {{formula:e1ebe994-13d9-4b48-89c0-71ffbee45d3f}} has been so numerically unstable that it has been unclear whether it can be performed efficiently. In fact, Ref. {{cite:a9ec07ce311e6e3d5979350f1c140fb296164510}} reports that the computation time is “prohibitive” for obtaining sequences of length {{formula:44540c99-5754-4aaf-aa89-be2c5f61a77f}} of interspersing unitaries for the Jacobi-Anger expansions that we explain in sec:JacobiAnger. The true usefulness of quantum signal processing hinges upon the ability to compute long sequences of interspersing single-qubit rotations.
i
1a3aa8b8153af31857cba9bae747c86d
In the rest of the present Section, we are going to establish a connection between the universal scaling functions {{formula:864048c5-4ae6-4dc7-be1f-905c4ba7aef7}} mentioned in Eqs. (REF )-(REF ) and extreme value theory. This connection was first introduced in {{cite:8581cf8b0e9ea06a2be40e8940db24abce7a11a3}} for a wide class of percolation problems in different dimensions, where {{formula:874d7f6f-60dd-4e3c-a89f-2ce9b4ab26d8}} for {{formula:af700459-3c5f-421b-9c31-db10fe7de50d}} and {{formula:d485cbbb-96b0-43ca-ace9-3c8a231f1aaa}} were shown to be consistent with the Gumbel distribution {{cite:bc59f22ad3092f7361c37332d9df38facba29632}}. Here by {{formula:fb070af4-2d3f-4ac9-8e0d-dd44d0532126}} we do not mean the fluctuation of the corresponding random variables about their average; we simply mean the variables themselves, i.e., {{formula:3105756b-17ac-4d13-8fa4-f453b0cea2d3}} denotes either {{formula:09d91142-eae4-4637-a385-2d9809c93b9a}} , {{formula:6b01cfe8-d0c8-421e-9d32-df59c5e96974}} or {{formula:4960bfd6-0656-4bd7-bfc9-a2e3592c48ab}} directly. This introduces computational simplicity, in particular for problems in which the average of the variables is not known a priori.
r
2d4a98ad51b8a1afad24ccb12fce19da
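For reference, the Gumbel law invoked above is the standard type-I extreme-value distribution; with location \(\mu\) and scale \(\beta\) its CDF and density take the textbook double-exponential form:

```latex
G(x) = \exp\!\big[-e^{-(x-\mu)/\beta}\big], \qquad
g(x) = G'(x) = \frac{1}{\beta}\, e^{-(x-\mu)/\beta}\,
\exp\!\big[-e^{-(x-\mu)/\beta}\big].
```

This is the limiting distribution of suitably rescaled maxima of i.i.d. variables with exponentially decaying tails, which is the sense in which the scaling functions above are compared against it.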
In the following subsections, we present the dataset-aware loss that can be utilized in the multi-dataset training without any label cleaning effort. The dataset-aware loss can be easily combined with existing state-of-the-art softmax based losses, like SphereFace {{cite:97975305de43f5086e2b8826887869f791d77d27}}, {{cite:775e096860acb96d9bbdf7c279079b1a6b2861b0}}, CosFace {{cite:def7134c278479bb72fa3120f77a28c03296602e}}, {{cite:926addf8513904ddb1d2251ba594eb4d2cfceee4}} and ArcFace {{cite:e478d69154ed9e9238081734f73079d25ff9f035}}. Meanwhile, we also employ the domain adaptation approach with gradient reversal layers (GRL) to ensure that the learned embeddings are dataset invariant. The overall multi-dataset training approach is illustrated in Fig. REF .
m
00d2f2c4fcfd889f8dddbb2485ba88de
In order to push forward the state-of-the-art (SOTA) in fake news detection we present an end-to-end deep learning approach based on the Transformer architecture {{cite:5f3ab67abe2af54eefc536dca86650e7cf7c32d3}} at the core of which we incorporate different methods to transform the original text into some condensed form. Due to architectural restrictions, transformer-based models like BERT are limited to specific input sequence lengths {{cite:0af6bc5983625a4e2fe9ea33e1a2a04c3c125e64}}, which are shorter than many news articles {{cite:e8cbaac13f0d9e65d32bcbb06f32d00fd385b188}}. To better capture the missing information, we therefore propose CMTR-BERT (Contextual Multi-Text Representations for fake news detection with BERT) which is an ensemble of BERT models. CMTR-BERT is particularly aimed at longer sequences and additional contextual information. The proposed model incorporates three different ways to deal with long sequences, namely a simplified hierarchical transformer representation adopted from pappagarihierarchical2019, extractive as well as abstractive text summarization. Also, the model enables contextual data to be incorporated for fake news detection via additional BERT embeddings. Furthermore, the high-level architecture is language-agnostic, thereby offering plenty of future directions to reproduce our experiments in other languages.
i
0a5410fce4738e22de3e42dd97b01e6a
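The input-length restriction discussed above is typically handled by splitting a long token sequence into (possibly overlapping) windows that each fit the encoder, encoding every window, and aggregating the per-chunk representations. A minimal sketch of the chunking step (illustrative only; CMTR-BERT additionally uses summarization and an ensemble of BERT models, and the function name is ours):

```python
def chunk_tokens(tokens, max_len=512, stride=256):
    """Split a long token sequence into overlapping windows of at most
    max_len tokens; a hierarchical model encodes each window and then
    aggregates the chunk embeddings (e.g., by averaging)."""
    if len(tokens) <= max_len:
        return [tokens]
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # the final window reaches the end of the sequence
        start += stride
    return chunks

tokens = list(range(1200))  # stand-in for the token ids of a long article
chunks = chunk_tokens(tokens)
```

The overlap (stride < max_len) gives each token some bidirectional context in at least one window, at the cost of encoding some tokens twice.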
Intuition tells us that if our features are invariant to the domain, then the main task should not be affected by the domain of the input. In fact, a recent theoretical argument {{cite:05824a323af640fb2b6d36ed1cd20f4f4726c76c}} formally suggests such domain invariance in the feature space as a solution for DG. This motivates our proposed approach. We first employ the common domain adversarial training algorithm DANN, which learns domain-invariant features that fool a domain discriminator. We further show that the data-augmentation algorithm mixup {{cite:a03f59c69e5c5a46d6ae218030cd137c9d6a3078}} may also be viewed as promoting domain invariance. We then propose to use both DANN and mixup after identifying their theoretical connection in DG.
m
c446aa0a91a8d0b36f42f334e636f264
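For completeness, mixup itself is essentially a two-line augmentation: a Beta-distributed convex combination of two inputs and their (one-hot) labels. A minimal sketch following the standard formulation:

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2, rng=random):
    """mixup: x = lam*x1 + (1-lam)*x2, y = lam*y1 + (1-lam)*y2,
    with lam drawn from Beta(alpha, alpha)."""
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

rng = random.Random(0)
x, y, lam = mixup([0.0, 1.0], [1.0, 0.0],   # example 1 and its one-hot label
                  [2.0, 3.0], [0.0, 1.0],   # example 2 and its one-hot label
                  rng=rng)
```

When the two examples come from different domains, the interpolated inputs lie between the domains, which is one way to see mixup as promoting the domain invariance discussed above.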
It is difficult to compare the physical scales at which the effective dimensions appear for different {{formula:4b7f6a19-2593-49eb-a8e7-f2713476a4f9}} . One reason for this is that the discrete Laplace-Beltrami operator, constructed from the unweighted incidence matrices of the {{formula:140f48a7-936e-4a25-9922-dcd701a9fdbd}} -simplices, is not defined in a manner consistent between different values of {{formula:64bc67aa-45bc-4218-b2d7-f68001d75de7}} . The fraction of the geometry that is probed with a discrete diffusion step from some starting point is therefore determined by the connectivity matrix for the {{formula:348e1f97-d88d-415e-9fe4-7a89b657a544}} -simplex. It is non-trivial how to relate the connectivity matrices for different {{formula:35005f1e-801b-48d2-be1e-2728a2a0d642}} , so it is also difficult to relate the diffusion time for different {{formula:1282e225-14a8-4925-8e8d-bc913299ce53}} . It will be interesting to explore to which extent an alternative choice of weights for the incidence matrix makes it possible to compare scales at which effective dimensions appear for different values of {{formula:f7976649-0aee-4669-822c-132015a5a2a4}} . A potential candidate for such a choice of weights is given by the framework of Discrete Exterior Calculus (DEC) {{cite:9b9c4ddc12c292e622b3c403b2a2fb9b1eb651e3}}. Using DEC, we can choose weights for the incidence matrices such that a diffusion step is weighted by the associated dual volume of the {{formula:f17e0ceb-78b4-47c1-a642-5c01e55dae8c}} -simplices.
d
db6f09c3e54c10d5f86705d4ceefb4fb
- For the first time in the literature, we propose a novel data-free approach, MINIMAL (MINIng Models for AdversariaL triggers), to craft universal adversarial triggers for natural language processing models and achieve state-of-the-art success (adversarial) rates (§). We show the efficacy of the triggers generated using our method on three well-known datasets and tasks, viz., sentiment analysis (§REF ) on Stanford Sentiment Treebank (SST) {{cite:b1559c6f54d7741d3ce06ea17248182ad1c6cfd4}}, natural language inference (§REF ) on the SNLI dataset {{cite:921705c4a800c5398becb6f89136768043e5ad29}}, and paraphrase detection (§REF ) on the MRPC dataset {{cite:a81c7618040e146f24d1172270c38ea362b075a9}}.
i
d470769438be3d68b7f6ef6cd4bdb11e
Input: labeled data {{formula:b2e5abdc-a0dd-497d-9c0e-f3f5af8b34b0}} and unlabeled data {{formula:fb9c831b-9c99-4937-a747-1a1e8313d9f3}}. Init: set training step {{formula:51fc9552-0c92-4590-bcc0-ebc415f7b072}}, total training steps {{formula:cb94651f-9f9b-4813-bc67-b3c3fde91b6c}}, and generation step K; randomly initialize M models with random PseudoAugment hyperparameters {{formula:551a44d7-0616-4f1b-b16d-1d2550229011}}. {{formula:73a4bc71-3d29-4ce5-aa24-f4ec7b135143}} {{formula:5038d267-4748-4aa2-ab66-0bef1c0aa0f0}} # Population-based distillation: select the top N models of the previous generation to pseudo-label the unlabeled data {{formula:4ac932d7-8a8f-487b-bb86-203e4faa2617}} and store the results in a pseudo database containing the unlabeled data and pseudo labels {{formula:8ca418e3-b4b2-4694-a7c4-e36fdf347b65}}. # Standard progressive PBT exploitation and exploration: update hyperparameters {{formula:011625da-509b-4ed3-bdc3-3327e928a0b8}} and model parameters based on PBT {{cite:c0da38e096acfb9d885c1de054d12fd67b391afa}}, {{cite:9683734248e34486d61a74eed71bcfc93c2f10f0}}. # Models trained with PseudoAugment policies: independently train the M models in parallel, using the pseudo database {{formula:0f72bee5-dac8-40a9-b525-d490a88943f3}} to augment the training data {{formula:0930f349-9131-4ac1-8b19-3c1a5b8a2a6b}} through PseudoAugment policies.
m
5d4b0e852d4c1a0cb05e51e4dc4c5d93
GAN inversion is the process of finding the latent space representation of a given image. There are two approaches to GAN inversion in the literature, optimization-based and learning-based, as described in the review paper by Xia et al. {{cite:b7c7cc4f4830f88807e1be5fbc39d02e5952616b}}. Optimization-based GAN inversion {{cite:485ccf3b25801af2ff818b9d19231fb3881e74cf}}, {{cite:be66c0a2de385d9db70a4198d738ce8740ebf15f}}, {{cite:bd93a72d3c1d85176d9d921320afde47c99f19fc}}, {{cite:72a64ee4d3cbf87ba5c110f1944a0f0551b13db9}} searches for target latent vectors by fixing the parameters of the generator and treating latent vectors as the optimization variables. Learning-based inversion {{cite:879b2c531d72fe274c425126a9f2544e39be03f0}}, {{cite:b9a737be21a0e3aed7a792e5f11746ed9c1eca81}} adds an encoder network to the training process and speeds up inference at the expense of potential accuracy degradation. Hybrids of the two approaches have also demonstrated good results {{cite:485ccf3b25801af2ff818b9d19231fb3881e74cf}}, {{cite:7f383ee81c993cf55e628fbfb990947624bf2608}}, {{cite:ad4e467306b5d636bc5f49ae6ef50b3aeca443b7}}. The investigation of proper inversion loss functions for training the encoder has also been an active area. Commonly adopted losses include pixel-wise loss, content loss {{cite:9eca37de8346e07e2b00a190174dfdc8be73dc5a}}, style loss {{cite:076da2632131ff79584f4ba9747529625a813f4e}}, and learned perceptual image patch similarity (LPIPS) {{cite:24bf7c5ad0d88d345426c98f61ee5b75fca8b481}}. Additionally, a network trained for human face recognition {{cite:0acf497ddb9cd1879f4315490bf951c3b8d6623d}} can be utilized as a loss ensuring that the reconstructed human identity is preserved {{cite:99d4d40b9642fd3f22b7e7d959d510db14c35195}}. To the best of our knowledge, GAN inversion has not yet been applied to texture analysis for synthesis.
m
2909d463ececceeee95040afb01df32b
A second implication is that ID and OOD performance are not necessarily coupled. Without further assumptions, in-domain validation is not a reliable model selection strategy for OOD performance despite contradictory suggestions made in the literature {{cite:f06e651e7f959d312f6396c2d33619909e6adba0}}, {{cite:596a7077700dcac398f5c8d4e6566f943e5317c1}}. This strategy might still be useful as a heuristic owing to some inherent structure in real-world data, but its limits of applicability are yet to be understood.
d
cc42d78ea335a0b72000b509a117050a
As mentioned previously, the performance of our method is not well captured by comparisons to the original image (using, for example, PSNR or SSIM). Performance should ultimately be tested in experiments with human observers, which might be approximated using a no-reference perceptual quality metric (e.g., {{cite:fd4f8fdfd06c22f287a1b21405ad1c47ae17cbb5}}). Handling of nonlinear inverse problems with convex measurements (e.g., recovery from a quantized representation, such as JPEG) is a natural extension of the method, in which the algorithm would need to be modified to incorporate projection onto convex sets. Finally, our method for image generation offers a means of visualizing the implicit prior of a denoiser, which arises from the combination of architecture, optimization, regularization, and training set. As such, it might offer a means of experimentally isolating and elucidating the effects of these components.
d
90561b0deb1e883193e4145be58da7e3
Recently, the layered label propagation algorithm (LayeredLPA) has been introduced in {{cite:5e1b5bac8c829c5494eee8b4925f680cd49c7a4f}}. This propagation method is based on the Potts model {{cite:8fdf094f01923e2135731354581b4de112cb2a08}}. This algorithm is significantly more successful than natural, random, lexicographic, Gray and (double) shingle orderings. The success of the label propagation methods has a similar nature to that of the algebraic distance coupling (see Section REF and {{cite:7891df83cba7627f89ae18bf190420cbd6e5545d}}), in which the propagation and averaging of random values over the node neighborhoods is employed. However, our multiscale method shows better results (see Figure REF ). Note that the LayeredLPA is introduced for undirected graphs only. We believe that introducing the AMG-based framework into the label propagation model can significantly improve its quality. {{figure:b05cc3ef-834e-4888-9709-18454624092c}}{{figure:be6fdcc3-24f8-48de-a28d-db3d8460cdb8}}{{figure:296d126f-8fec-4780-95d2-d62a3552c859}}{{figure:abc1282f-0641-4ca3-8ce6-968b806f43ca}}
r
c40e11fa4daf00d7ba2c03f2d4a05c6d
Besides rank aggregation methods we also experimented with a supervised learning-to-rank approach, specifically the LambdaRank {{cite:baa4ff46e0102032488981a2a5efa19daf15289e}} implementation from the XGBoost package (https://github.com/dmlc/xgboost). In this case, for each training query, we collected the relevant passage and two other passages in the top 1000 list, ranked according to DeepCT. The LambdaRank model was trained on this data, using as features the DeepCT scores plus those from the FGE snapshots, together with their average and standard deviation, and attempting to optimize the MAP metric.
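The per-passage feature vector described above can be assembled as in the following sketch (illustrative only; the function name and argument layout are our own, and the resulting vectors would be fed to an XGBoost ranker):

```python
from statistics import mean, stdev

def ranking_features(deepct_score, snapshot_scores):
    """Feature vector for one (query, passage) pair: the DeepCT score,
    the individual FGE snapshot scores, and their average and standard
    deviation, as described in the text."""
    return [deepct_score, *snapshot_scores,
            mean(snapshot_scores), stdev(snapshot_scores)]
```

One such vector per candidate passage, grouped by query, is the standard input shape for LambdaRank-style training.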
m
1cacaf3079ad4a6007a722647251ea75
We further quantify this through the “residual,” computed as the difference between the BayesWave and Bilby reconstructions. Specifically, we subtract the maximum likelihood Bilby waveform from the upper and lower 90% credible intervals of the BayesWave reconstruction as a function of time to obtain a region for the residual. This illustrates the extent to which the Bilby best-fit waveform is contained in the BayesWave interval at each specific time. In the GR case this residual is consistent with zero where the signal is strong. As {{formula:06ec976b-d24d-456a-8936-65603697efae}} increases the residual becomes inconsistent with zero during the merger phase, with smaller uncertainty as the SNR increases. This would also manifest as residual power left after subtracting the best-fit Bilby reconstruction from the data, as would be measured by the residuals test formulated in {{cite:ea5c91b5f109f3127eea52d877d035a12f82fa2a}}, {{cite:6303839392fd19a875bc60e0348f5a78b6e85a07}}. {{figure:09a45286-f2e9-4d4d-862e-ebfbc2002493}}{{figure:88bcaa20-a5da-42f5-889c-611cd73dee48}}
r
7b1d068bc1f6d088333627598f2c9b2b
The third interference term in eq. (REF ) essentially allows one to distinguish the initial flavour of the neutral kaon. The parameters {{formula:51eb6524-361b-421e-9f78-2c1dadc24d1c}} and {{formula:a847548c-cd8e-4903-b9fd-e1def5771bea}} have been measured with great precision, and the current world averages (assuming CPT invariance) are {{cite:de42c790e2c1029795b21d3bfac2d0578c654699}}: {{formula:1a7f8fc6-a680-4a37-8846-b92e0bf18a0d}} , {{formula:f33c9b24-f3b5-448f-9ed9-03e26c244465}} . In the following calculations we neglect the direct {{formula:13a43d26-592c-40e0-8d98-ec209c5d97ed}} in kaons and assume {{formula:d036940c-d3a5-4433-8423-87d831f39824}} .
m
e78d0b2559061a46e95b593c01d21a65
For quantitative evaluation, we compared the Dice similarity scores in Tables REF and REF , which used 50 or 60 subjects for training, respectively. Note that a larger Dice similarity score indicates better segmentation performance. The best results are bolded. With 50 subjects for training, our SSL framework outperformed U-Net {{cite:6cd214da943edf2bffa4ebb1c1afad37b4fd3a72}} and attention-based U-Net {{cite:9b176132a93c746eabe910f237cea19529b65ece}} in all three classes. We can observe that with relatively limited training datasets, the performance of the CNN models is inferior to that of our framework. In addition, the statistics of the network parameters are provided and compared in Table REF . We can see that the number of parameters of our SSL framework was about 200 times smaller than that of the popular U-Net structures {{cite:6cd214da943edf2bffa4ebb1c1afad37b4fd3a72}}, {{cite:9b176132a93c746eabe910f237cea19529b65ece}}. The far smaller parameter count largely alleviates the difficulty of training with a small number of datasets. In the case of using 60 subjects for training, our SSL framework achieved better performance than the U-Net based methods in the average Dice similarity score.
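For reference, the Dice similarity score used in these tables can be computed for a pair of binary masks as follows (a minimal sketch; the actual evaluation operates on full label volumes per class):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks given as
    flat sequences of 0/1 values: 2*|A intersect B| / (|A| + |B|).
    Both masks empty is treated as a perfect match."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```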
r
f421596ace32ad14e25206aebb98046b
This result shows that our algorithm generalizes the regret and long-term constraint bounds of {{cite:4f20d2fa5df1c6303cd4efc2344780468c959220}}. Please refer to the Appendix for this section's proofs.
r
24a0f710e4cfcc9ddebf1e85ce239d05
Measurements of the accretion shock feature, on the other hand, are currently still dominated by the instrument noise. We have partially worked around this issue by stacking {{formula:340e8d9a-4b5f-4b35-890e-dc669aa94d36}} clusters, which significantly lowers the noise amplitude, but this still results in only a weak measurement of the feature. Ongoing and future SZ surveys — like SPT-3G {{cite:900d554fa956f3f84e698b225aedfa68726239f0}}, Advanced ACT {{cite:14d0c73e4a8699203335a49832a3e7faf6bc6649}}, Simons Observatory {{cite:2ba092de23a8079f3eaa83e6635df3f16ac08b3f}}, and CMB-S4 {{cite:10d00695ae5741e6fbb34e32b3d45126cf4c5c48}} — will both have a higher sensitivity and a larger sample of clusters. Both factors will greatly improve the ability to make precise measurements of the stacked mean pressure profile, leading to more precise measurements of the accretion shock. It will also make it possible to better study the dependence of the accretion shock on redshift, mass, and orientation. Studying the dependence on orientation will also provide insight into how shocks respond to the dynamics of mass accretion, which differs significantly between directions toward filaments and toward voids.
d
1a8be8fb0c823f78e6206a190f1ee052
Using simulated data sets, we first examine the performance of the maximum likelihood estimation method used to estimate the parameters of the linear motion model of a moving molecule in terms of the bias of the method. The bias is assessed by the average of the deviations of the estimates from the true value. For this purpose, we simulated 100 data sets, each containing a trajectory of an out-of-focus molecule, with the out-of-focus level {{formula:d9264a09-d844-406e-896a-255a43f0720f}} {{formula:2628700f-56a8-4df4-88ce-6fb710c4d81f}} m, simulated using Eqs. () and (REF ), with the Born and Wolf profile (Eq. (REF )) and the parameters given in Section REF , with a mean photon count of 500 photons in the time interval {{formula:a0728866-ffb5-491c-b7ab-0bb05471a337}} ms, where the first order drift coefficient {{formula:fcffe458-0564-45cb-8dd0-5e954a69ec30}} /s and the diffusion coefficient {{formula:7f8dbe5d-be82-42b4-9acf-a2f4675cec58}} {{formula:cfbbe2eb-1c2a-4855-a526-b2281e121b1a}} /s. We assume the zero order drift is equal to 0. In Figs. REF (a) and REF (b), an example of a molecule trajectory in the object space and its image in the image space are shown. For these data sets, we calculated the maximum likelihood estimates of the diffusion and drift coefficients, separately. For this purpose, we needed to obtain the distributions of the prediction in the likelihood function expressions (Eqs. (REF ) and (REF )) through Eqs. (REF ) and (REF ), which in general is a computationally expensive problem. We approximated the distributions of the prediction using a sequential Monte Carlo algorithm proposed in {{cite:8efa618d44e04b96b102e9a8e0c509b2a8ce09af}}. The overall approach is explained in supplementary Section in detail. In Figs. REF (c) and REF (d), the differences between the maximum likelihood estimates of the diffusion and the first order drift coefficients and the true values are plotted. 
We also estimated the {{formula:74d41ede-f5af-44d8-81da-207223ce118f}} -location of the molecule, i.e., the out-of-focus level, and show the errors of estimation in Fig. REF . As can be seen, the deviations of the estimates from the ground truth are, overall, centered around 0 nm, which suggests that there is no systematic bias associated with our proposed method (the average of the diffusion coefficient deviations and the first order drift coefficient deviations are -0.0319 {{formula:dbaf04bd-1119-4913-8998-8f62e3ce63e7}} /s and 0.0307/s, respectively). {{figure:b2a07bc7-eb6c-4bc0-ae53-639f3616cae7}}{{figure:59c602af-ad6c-4c88-98e3-320806f9388a}}
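The bias assessment used above reduces to averaging the deviations of the estimates from the ground truth, as in this minimal sketch (the function name is ours):

```python
from statistics import mean

def estimation_bias(estimates, true_value):
    """Bias of an estimator assessed as the average deviation of the
    estimates from the true parameter value, as done above for the
    diffusion and drift coefficients."""
    return mean(e - true_value for e in estimates)
```

A value close to zero, as reported for both coefficients, indicates no systematic bias.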
r
2f03e5eab46d49d2529a18e262130238
when {{formula:c8408769-06a2-4576-9cbd-d45d08d0e562}} is non-singular. In (REF ), the inequality constraints become equality constraints, since equality must hold at the optimum. The optimal value of (REF ), named the Sato upper bound {{cite:1e581a6a270c789398596c2ed157d2bcef9bc08c}}, {{cite:42179725800a1ea3d252c14491f7d4934eabfb47}}, {{cite:93923fcb9396c22029fc7b871096ad350f26c533}}, gives the upper bound for the sum rate capacity of the GBC. Note that the maximized sum rate via () is equal to the optimal value of (REF ), and thus equals the Sato upper bound. Therefore, the sum rate capacity is achieved by DPC, which is consistent with the conclusion in {{cite:de7ebda5ae6794fa0cc072da5fd630e6fae871ae}} that DPC achieves the capacity region of the GBC with the transmit covariance constraint.
d
7b83e18e0e2e5e5f41755fc7053573a9
To tackle the first problem, as illustrated in Fig. REF , we argue that if the network could be supervised by more fine-grained labels, more object regions will be activated to provide sufficient information for differentiating different classes. Therefore, in this work, we propose the Visual Words Learning (VWL) module for WSSS task with image-level labels. The VWL module generates the visual word labels by using a codebook to encode the feature maps extracted by the CNN backbone. In the training process of the classification network, the network will be forced to jointly learn the image-level labels and visual word labels so that more object regions could be activated. To learn an effective codebook, based on the definition and solution of Bag of Visual Word models (BoVW) {{cite:b5bd751110993a74830e7b7c5d85fe9dc614f994}}, {{cite:8f33b917ed241776f3e8b7c69c4c1904db2c6e45}}, we devised two strategies for updating the codebook, i.e. learning-based strategy and memory-bank strategy. For the learning-based strategy, the codebook is set as a learnable parameter. By enforcing the encoded visual word features to learn image-level labels, the codebook could learn the latent visual word representations. In practice, we also notice that the learned representations in the codebook are often redundant, which affects the network training and the quality of CAMs. We tackle this problem by regularizing the codebook with DeCov loss {{cite:f12abeebb4d342474934a0d94a479384792207de}}, which reduces the redundancy of a matrix by minimizing its off-diagonal co-variance values. For the memory-bank strategy, we follow the classic BoVW models {{cite:295e7f70fdd4ee358dc7d3d10cde8a0fc7682a15}}, {{cite:6575af538195681688403df11141e43f911165e6}}, which take the clustering centroids of features as the codebook. 
Specifically, we decompose the clustering on the whole training set to each mini-batch iteration and leverage memory-bank strategy {{cite:14ddbba8db6e72e25b48a8d8e256176a4ef71479}}, {{cite:127725321f72031647182fdf82c8c630fbe412ce}} to gradually update the codebook. Our experimental results show that, after sufficient updates, the learning-based and memory-bank strategy could both learn codebooks with effective representations of visual words and achieve analogous performance.
i
38b106c49814270c425ee1ab7434d026
Cosmology and notation. We assume a universe with dimensionless energy densities at the current time in total matter (baryons plus dark matter) {{formula:90425ee1-1a8f-4c98-9b2c-da567667bff0}} and vacuum energy {{formula:9e3573e0-70d5-4607-af05-027aeb705484}} , with Hubble constant {{formula:080223a4-158d-4c12-bb88-e30ff2be11bb}} . The Hubble expansion rate is normalized via {{formula:dc286863-b752-49c1-925f-efa4e93af77f}} . For the halo population, we employ a mass scale convention, {{formula:d2979d6d-2bf2-4ea2-88dd-62a5b224aaaf}} , defined as the mass within a sphere, of radius {{formula:995d7134-7823-4715-8647-627fc25b6083}} , within which the mean enclosed density is {{formula:8198e02c-ab19-4f52-8a9b-72eef2cb5a35}} , where {{formula:c5bbbf76-8be6-427e-8591-f46ee20deed3}} , is the critical density of the universe{{cite:a120292fc90104e106d66725c214b632f546e42e}}. Unless stated otherwise, the weak-lensing determined radius, {{formula:aaf6d721-bbba-4cc2-a3e0-d2a391acc9e9}} , defines the aperture within which integrated observable properties are derived.
m
5d7071a882849ebcdfc99153731e853b
The lack of a probabilistic interpretation in a convolutional neural network under this sparse coding interpretation may be problematic because the prior expectation in the model via the prior distribution (exponential distribution) is lost. This issue may contribute to fooling convolutional neural networks {{cite:4365abbb84562e7aec5c63bb3b7df52247639f64}}, {{cite:af740885525aa9bb4b33fda227a464a8acaf1603}} because the expectation of image structure may be poor (some negative {{formula:c2fe282e-7a45-4ccd-856a-f2cbefe665df}} ) or completely absent (all negative {{formula:b1b767bf-1854-49ff-87e0-cd7082a16489}} ). Given that the parameters {{formula:37941d34-7994-4c9f-ac09-48c064eb6762}} were chosen to maximize classification accuracy, and not preserve the probabilistic interpretation, this may be related to overfitting. Maximizing classification accuracy given a sparse prior may be thought of as a way of traversing the landscape of the loss function along representations that go together according to some logic (signals can usually be represented by a few underlying causes). Though one may argue that maximizing classification accuracy without a sparse prior inherently preserves a prior expectation, the prior may be a different one not characterized by this sparse coding interpretation. However, an algorithm like gradient descent need not find a representation with a prior expectation if there exists a solution with low error highly specific to the training set, but without the need for a set of rules for how representations should model images. One such model may be thought of as a model with many high-level features that are useful for signaling a certain label, but a disregard for how the features should be integrated together. Issues arising from an incorrect prior may work their way into convolutional neural networks given that the hierarchical non-negative orthogonal sparse coding model discussed here has the same loss function, discussed next.
d
2d4a5f8a32eceff2b5812ed0067686a8
Before concluding this section, we elaborate on the choice of the parameters {{formula:bcc1020b-77b4-47f1-810e-bc90ab7f1b97}} . It can be seen (see {{cite:d093db3b93462167ac56bf35bcf3bbfae394c6ba}}) that the speed of convergence of Algorithm REF to the global minimizer of (REF ) is a direct function of the conditioning of the Hessian of the cost function, that is, the ratio between {{formula:82f5116e-8453-457b-95c8-fcdcd95ad357}} and {{formula:e3f07b1a-13b8-4017-8fff-ecb38ff0a355}} . {{formula:568fd893-8824-4245-bd2f-6e59db9b1f5a}}
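The dependence of the convergence speed on the conditioning of the Hessian can be illustrated on a toy quadratic (a sketch with our own naming, not Algorithm REF itself): for gradient descent on f(x) = 0.5 x^T H x with the classical step size 2/(L + mu), the error contracts by roughly (kappa - 1)/(kappa + 1) per step, where kappa = L/mu is the condition number of H.

```python
import numpy as np

def gd_error_after(H, steps):
    """Distance to the minimizer (x* = 0) after running gradient descent
    on f(x) = 0.5 x^T H x from x0 = (1, ..., 1) with step size
    2 / (L + mu), where mu and L are the extreme eigenvalues of H."""
    eigvals = np.linalg.eigvalsh(H)        # ascending eigenvalues
    mu, L = eigvals[0], eigvals[-1]
    lr = 2.0 / (L + mu)
    x = np.ones(H.shape[0])
    for _ in range(steps):
        x = x - lr * (H @ x)               # gradient of 0.5 x^T H x is H x
    return np.linalg.norm(x)
```

A well-conditioned Hessian (kappa near 1) converges in very few steps, while a poorly conditioned one stalls, which is exactly why the choice of these parameters matters.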
m
b1e352ebf1a67d7e502ab17d09d66747
In this study, we argue that the literature has proposed a large array of dimensions of homophily without accounting for the model uncertainty problem inherent in the identification of the determinants of friendship formation. This research aims to initiate such a line of inquiry. Focusing on 20 particular individual characteristics that cover a large share of the determinants of segmentation of heterogeneous populations analyzed so far, we find that the robust determinants of friendship formation are gender, smoking habits, and geographical/spatial proximity. Hence, our data suggest that, in our context of an emerging social structure, all robust predictors of friendship are directly visible to the naked eye. Due to the homogeneity of our sample, we do not consider other observable attributes such as race, ethnicity, and religion, but the literature shows that these characteristics are key predictors of links in many contexts {{cite:4f0f6b3969d001252367c13234fe25346de7d7ab}}, {{cite:15527a03ff5b2cbf08b6c050fd9610504bc85545}}, {{cite:94bd960b922024364df08f1f792a8d06efa0f801}}, {{cite:c2d520e932aa3165a47408c7e725eba4a090416b}}, {{cite:e0d2d15bbb8499af0be9e433b9f0273f4dcfb488}}, {{cite:0d48fdb8dfba73d8132b923d79bbb1021957ea25}}. We thus position them among the robust predictors of friendship in general. In contrast, we find little evidence that unobservable features predict who befriends whom in an emerging social network among university freshmen. Neither political orientation, family background, nor cognitive and behavioral traits survive our robustness tests.
d
2e4cd10a86470dc88f415b851c7dd82d
The Soft Gamma-ray Repeater (SGR) J1935+2154 was initially detected by the BAT (Burst Alert Telescope) instrument aboard the Swift satellite as an X-ray burst. Subsequent observations allowed this source to be classified as a magnetar and showed that it became active again in April 2020, exhibiting multiple episodes of severe X-ray burst activity (see {{cite:ceeeacd9c1a20e3d28025b6ac7b079ce9c2baf1a}}, {{cite:4ca6412e81f5814cc6745775c046a2fb1db370fc}}, {{cite:7d225fb961bcf1f46bb7ec92570057eebc210400}} and references therein). SGRs are a diverse set of sources with huge magnetic fields of the order of {{formula:1f9e616c-a3b3-490e-b192-7ee0b17b643c}} G, rotational periods {{formula:b7e9740c-7a07-4ff3-8f3c-8419ea7f19b8}} s, spin-down rates {{formula:0821807d-55e6-475d-a0ad-0568ec25232f}} s/s, persistent X-ray luminosities as large as {{formula:66efc068-5421-48e8-bcb6-fbdc98ad11d2}} erg/s, transient activity in the form of outbursts of energy around {{formula:964fa49a-cdf1-4c71-83fe-ccecc88dbd64}} erg and, for some sources, the presence of giant flares, whose typical luminosities are {{formula:ef213bc3-577a-4e97-9b06-4a808a623b02}} erg (see, e.g., {{cite:7907ca6a5c94df654d98a747747fc2a33cd92e1c}} and references therein). The emission nature of SGRs remains a matter of debate, and several scenarios have been proposed to explain their observed spectra and properties {{cite:da75fa818b0266656f95bdd7464e0131136dc05b}}, {{cite:5f26ddf8eec10a0de0c4cc17954e9a60282f0307}}, {{cite:c0c798a110bdd5cd511e1a216ce0ef021f224352}}.
Examples thereof are magnetars {{cite:a71755cf58e6ad2b7a04ce9223d003a3e6c120e0}}, {{cite:91b5d6158d4cd8ec11abfeb82a2807ca09a465b4}}, {{cite:60f32f259213be7ce63bcc74b9490dd151ad9772}}, {{cite:8c6243175cc9b2deb51c43911b40cdc6fec9f060}}, {{cite:487c8fef53272b3488ba9cf04bc38898a7fffc13}}, {{cite:ccc511ed0deeb1c5e61a80a229e64e2a0b0b9e97}}, accreting neutron stars {{cite:77303b0e00a1d5e4d838871d08f3c5f9bdbc0953}}, {{cite:f15d81ebf425c61755f82761e3d4e7c292978b7b}}, rotation-powered pulsars {{cite:2617fef6ee381335535d5c724002df254323124f}}, quark stars {{cite:bc8739feff837e02da72ce67c1599dac36205fd0}}, or massive, fast-spinning and highly magnetized white dwarfs  {{cite:57b08714d481622da1161e453dddc78901b5ec06}}, {{cite:4484b2ce48d8417f6b2e6a9780b65d6d7d988177}}, {{cite:dde2b2f73715d5d38ece67a747e66487e4a46715}}, {{cite:6e56bb3eb82d699d416469f8b0232785aa6984cf}}, {{cite:0d2b176a0f99334a2931818ee796fe5003d78355}}, {{cite:cdbc7c38a73b85420a9befa84a21d42b1e77c227}}.
i
2d8a85dc995c588ff576bff9044c45a1
See Table REF . We achieve comparable accuracy with a much smaller number of parameters and GFLOPs on Moments in Time {{cite:82e72cc0662ceb2e002fc139d98064628b2baf82}}. {{table:2fe6314a-871b-446f-bb1d-fa7610327b1d}}
r
1c40aecbae987991e65435c84c2f7251
As our theory suggests, one should add the temperature to the last-layer linear classifier; this is in agreement with the results of {{cite:93757f341ae34082562dce9727b9984229334e25}}. However, we find that the class feature means do not converge to an equiangular tight frame as {{cite:37010c682220beb6c92aed3dc6d8d5f28e54af5e}} suggested. The effect of the imbalance ratio on the angles between majority/minority classes is shown in Figure REF , where the constants {{formula:640e0d48-4bfc-496a-9466-a871dcf630cf}} (gray line) and {{formula:f59cea27-c5af-4c27-9f3b-63b054769583}} (green line) are marked for comparison. Our experimental results match what our theory (Theorem REF ) predicts: adding IT to the last-layer classifier leads to the largest possible angle in the extremely imbalanced limit.
r
d326031ed7ea339882f470e095fa38ec
The information about the position, direction and energy of the muons hitting the first mu-PSD was analyzed using the ROOT framework {{cite:0e29e0af7326dc18497ac1472ccd926ca451ddf2}}. We simulated three different conditions:
r
d32561ebf156cb1dc3a2a4c68e7346fc
where {{formula:c8408769-06a2-4576-9cbd-d45d08d0e562}} when {{formula:3576416c-a164-4f2d-9d88-9199963dd441}} , and {{formula:b3e4fad6-8ca5-4e75-ad06-6b765e669301}} can be made as small as desired by choosing {{formula:528df51f-c802-4bdf-8794-cd8a78b023dc}} small enough. We note that the proof of {{cite:d9e7cb67e9ad3a6fb02d6c4e1468acf600b5e919}} actually states a slightly stronger result, showing the above lower bound for a certain testing problem, as stated in (6.105). This, combined with the fact that {{formula:124fe663-d893-40d8-836f-c814786bd03b}} can be lower bounded by that testing problem, see (6.101) and (6.99) from {{cite:d9e7cb67e9ad3a6fb02d6c4e1468acf600b5e919}}, concludes the proof of Theorem REF .
r
08538fb811a495f1815c8eb7fe3ee599
Dark matter halos are present in the vast majority of galaxies, and thus one has considered the formation of traversable wormholes inside them (see, e.g., {{cite:4cde94e531b6b558e6f2806a5325959c583ea2a3}}, {{cite:9b25df5425828e9f8c16b94e1e9e1c66f685c26d}}, {{cite:e6ff5a4e54f077a4c8a01887d6e177711715a08f}}). These hypothetical objects are predicted by general relativity (GR) and represent a kind of tunnel in the spacetime that connects two distant regions of the same universe or two different universes (see {{cite:aff0ce6145e7fd82d72e6fe860fbc30dbfa67f10}}, {{cite:ecaac142f8ccff42da6c5f8f9dcb101cd7ef0efe}}, and references therein). This subject has been recently investigated in a more fundamental level from the J. Maldacena works {{cite:89b91d919cde26f0160568011d96938e3a67e7d6}}, {{cite:e15616a9d8a89786991e3c37326ff3de47c12fbe}}, {{cite:204cec0e04df15ac1f0a6e6bda8b7784824d5072}} and also applied in the context of condensed matter systems {{cite:747231e8bbf825adacb1fdfd3bb2c631ef903d28}}, {{cite:6f3040277c127147a6b77c70d014c33f8bc9ba68}}. Usually, it is required some type of exotic matter sourcing traversable wormholes. However, in scenarios of modified theories of gravity, such a feature can change with non-exotic matter working as a source for the wormhole geometry {{cite:f3d9ba2deb1d22a7b2de02011f4a3bd3b13f53ae}}, {{cite:f0f690dd929a54cc0436f890366b25329a870515}}, {{cite:661ab26d054874ee6bf7064bd96de3e887783d06}}, {{cite:45e0618729cbc991a5a71fb606f0349143f8cf46}}, {{cite:e3172e9c496aafdab28073134867d95dc48f1d46}}, {{cite:df6bf405b75bcca6bbd10180d3fbbc0e72063f5d}}.
i
c724dcd8848dc8bf16ef900c450d80e2
Due to the presence of non-Gaussian terms, it becomes inconvenient to cast the effective action into a stochastic Langevin-type equation {{cite:b97564e831e1de0dcaaf0251123ff880a8d32efd}}. Nevertheless, we successfully convert the non-Gaussian effective action into a deterministic Fokker-Planck-type equation, which corresponds to the Kramers-Moyal master equation truncated at quartic order in derivatives. The Fokker-Planck-type equation is more efficient for computing observables, such as the moments of the position/velocity of the Brownian particle. It will be interesting to carry out a numerical study based on our non-Gaussian theory and clarify the phenomenological consequences of non-Gaussian interactions {{cite:ef64380c3ea94805ef9aed83a3037a768451ba2d}}.
d
fa3e13092dc21dc57fc8ba3da17731ac
A common feature of these analyses is that they study (deterministic) gradient descent (GD), while it has been shown empirically that stochasticity may be of primary importance to match the best generalisation guarantees {{cite:01016cb82c880ab028ef50cf013386c2ce80fe69}}. Hence, it is natural to try to understand the role of the stochasticity induced by the mini-batch training procedure of stochastic gradient descent (SGD). It is often shown that SGD tends to move towards flat regions of the training loss {{cite:9cd9d8ab43d0d3ea2508aa0b2f9b5feb413531ab}}, {{cite:a677ae087c5e5e506c61f20799809cacefbb01c1}}. However, the flat minima selection phenomenon does not appear very clearly, and the noise models used to prove it rigorously are disputable. In this perspective, specific noise models for understanding the role of stochasticity are essential: one example is the fact that the minibatch stochasticity of SGD is state dependent and cancels itself at global optima {{cite:931f08d04c85142f9fac094327c6f04be0656f04}}, {{cite:8e12668cdcf43138e1b28415322462c10170c448}}, {{cite:8206a73ac9d1edaa4aaafcff1951f194384a7b58}}. Another important feature is that the noise has a specific geometry, e.g. in the least-squares model it belongs to the span of the data inputs {{cite:da35d7b65d473f41166fb716fb976512c6a393af}}.
i
6efd72c47dad667d60edc4cd67f83ab7
The overall picture we seem to find – that large models can learn a wide variety of skills, including alignment, in a mutually compatible way – does not seem very surprising. Behaving in an aligned fashion is just another capability, and many works have shown that larger models are more capable {{cite:362e8c9d39128e68851233c057b71f97b3cbb194}}, {{cite:cb19ec6162dd4d96933b4bbd78f827bf997d15bf}}, {{cite:e9a74b2d237fbf2eb20c592fe78cb8f41633e431}}, finetune with greater sample efficiency {{cite:d09fa1541c480cec7fe75e44082cca8220d1c015}}, {{cite:e24a3c15eb0c563430ac41461a26db377724138a}}, and do not suffer significantly from forgetting {{cite:29b56067fb42f52a5f429f5086a217fc70c228e9}}. Although we did not demonstrate it directly, we also expect that RLHF alignment training can be mixed with or precede training for other objectives; this might be relevant in the future in order to avoid the production of intermediate, unaligned AI systems.
d
f5c7ebbd9b51d76feda5bdfd2c2f8a1c
As shown in Table REF , our proposed method outperforms existing unsupervised methods by a considerable margin. Compared to weakly-supervised methods with image-label supervision, our method achieves better performance on all benchmarks. This shows that the subitizing supervision helps boost the saliency detection task. In addition, our method compares favorably against some fully-supervised counterparts. Note that on the DUT-OMRON dataset, our method obtains more precise results than the fully-supervised methods. Since the masks of the DUT-OMRON dataset are complex in appearance, sometimes with holes, this reveals that our method is capable of handling difficult situations. Compared to SOSD {{cite:5a31f631b95e3f7c04e30db2d8b68bbe82bb5c92}}, which utilized additional subitizing information, our method extracts more valid information from the subitizing supervision. Moreover, our method achieves results comparable to MSWS {{cite:4ae42a88d85ec3997d27f9ecd38c5d06d2f2c35d}}, which applied multi-source weak supervision, including subitizing, image labels and captioning. {{table:bbbfe5e5-cd9b-47b4-9d09-c97f909bfdbf}}
m
cc981deeb99ebf3f4373bea860c132bb
The perspective of cerebellum foliation as the action of a nonlinear oscillator can be a useful one given the extensive theoretical studies of such oscillators {{cite:cc0eb260d412bbfe20c4432f9a8a21a049dc166b}}, {{cite:150608136096f672aca02898ecff45c751457b3e}}. For BWBM of the cerebellum, the linear model with constant {{formula:7e75cda0-c8d1-4b82-9868-270afadb5652}} maps to a forced harmonic oscillator and, for small eccentricities of {{formula:84717f43-079f-49df-a28a-94903684a121}} , maps to an unconventional Duffing oscillator. For nonlinear {{formula:a0473024-3486-4f2d-b458-1b3006efdee6}} , we attempt to understand the corresponding nonlinearity in the context of the assisting-dampening oscillator. We hope the study of cerebellar foliation as a nonlinear oscillator problem continues to be fruitful. In a related work, the existence of a new morphological instability in confined nonlinear elastic sheets was found in the context of a period-doubling bifurcation, exhibiting an analogy with parametric resonance in another nonlinear oscillator {{cite:598763e5af781e4ce2eafb3bdbf01fa9c6e2141d}}.
d
f99114e003d96bba2a6bd84f550e5976
We assume that the gas has a uniform solar metallicity and adopt the standard ratio of helium to hydrogen, and abundances of carbon and oxygen taken from {{cite:2d04fa5cc41a577079e02469fc338e3ab5ad6f4d}}, i.e. {{formula:e84b81df-1009-4f2d-859c-8ddd395378d9}} and {{formula:c86d2789-998e-4c39-bb31-e3e35ce8fde8}} , where {{formula:af3f4518-44f1-4861-8d44-6244d9bd1b68}} and {{formula:57232b27-f7e2-43cc-ba6f-b308a6445f68}} are the fractional abundances by number of carbon and oxygen relative to hydrogen. However, we have to keep in mind that the CMZ has actually a super-solar metallicity. Nevertheless, we use a uniform solar value in order to be conservative regarding the cooling and star formation rates in our runs. At the start of the simulations, hydrogen, helium and oxygen are in atomic form, while carbon is assumed to be in singly ionized form, as C{{formula:e1e94271-6394-43ba-8f92-841276640fac}} . We also adopt the standard local value for the dust-to-gas ratio of 1:100 {{cite:9fc80d5067e7a3a41dcb09442fc3ce9fe1a6270b}}, and assume that the dust properties do not vary with the gas density. The cosmic ray ionization rate of atomic hydrogen is set to {{formula:9e9f323d-f78d-46ca-acd1-c8cf3ea693ca}} {{cite:d759ab041c1f5c67f678f4f4d660a7dcee6c85fc}}, which is a factor of {{formula:c2935c30-e946-4e7f-81a9-d859d69f21f5}} higher than the value in the solar neighbourhood {{cite:64eca9ec141e9b2dd8d540a8ec0852d50071234b}}. For the incident ultraviolet radiation field, we adopt the same spectral slope as given in {{cite:08b0c73c7b2523062cb1637eb5da9ddeb00be465}}. We denote the strength of the Draine ISRF as {{formula:c6bc2271-51f1-41c7-b0bc-858ddd453799}} and perform simulations with a field strength {{formula:5576feaf-5dfc-4095-b0d8-c841cf3dd2b5}} {{cite:d759ab041c1f5c67f678f4f4d660a7dcee6c85fc}}. 
The Draine field has a strength {{formula:5f848c44-ba62-4df8-b121-990e6d8963d9}} in {{cite:540c98a503a12f69cb075e06fe62f0e85763ab04}} units, corresponding to an integrated flux of {{formula:19aee1c7-fd02-419b-b6c3-c7b529800560}} erg cm{{formula:c64bbd0a-0f0f-47b1-bdb9-7fe57c3eb75f}} s{{formula:b59a635b-67a7-441b-af37-7b487db9eaaa}} . Furthermore, as a reference, we also run simulations with local solar neighbourhood values of the ISRF and the CRF in order to explore the effect of a different radiation field on the SFE. In this case, we set the field strength of the ISRF to {{formula:6d25eebd-efe2-4f80-85a3-3f47ae96bce5}} and the cosmic ray ionization rate of atomic hydrogen to {{formula:82531e73-f125-455c-af12-e7cddffaf8ac}} . Our simulations use a Jeans refinement criterion, which is active over the whole simulation period in order to accurately refine dense and collapsed gas regions in the box. We use a constant number of 8 cells per Jeans length, which is sufficient to avoid artificial fragmentation {{cite:7257567b091178f883b92548949949094fcb6a04}}, {{cite:820fc072c01108810ee253b1be05dacfc674f47c}}.
m
d0963d7e9cda3d0d50cc42fd4e0dc14f
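The Jeans refinement criterion described above (a constant 8 cells per Jeans length, following the Truelove-type condition) can be sketched as follows. The CGS constants, the assumed mean molecular weight `mu`, and the `needs_refinement` interface are illustrative stand-ins, not the actual simulation code.

```python
import numpy as np

G = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
K_B = 1.381e-16  # Boltzmann constant [erg K^-1]
M_H = 1.673e-24  # hydrogen mass [g]

def jeans_length(rho, temp, mu=2.33):
    """Jeans length lambda_J = sqrt(pi * c_s^2 / (G * rho)) for gas of
    density rho [g cm^-3], temperature temp [K], mean molecular weight mu."""
    cs2 = K_B * temp / (mu * M_H)  # isothermal sound speed squared
    return np.sqrt(np.pi * cs2 / (G * rho))

def needs_refinement(rho, temp, cell_size, cells_per_jeans=8):
    """Flag a cell for refinement when its local Jeans length is resolved
    by fewer than `cells_per_jeans` cells."""
    return jeans_length(rho, temp) < cells_per_jeans * cell_size
```

For example, cold dense gas (10 K, 1e-18 g cm^-3) has a Jeans length of only a few hundredths of a parsec, so a 0.1 pc cell would be flagged for refinement, whereas warm diffuse gas at the same resolution would not.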
This research has made use of data obtained with the Global Millimeter VLBI Array (GMVA), which consists of telescopes operated by the MPIfR, IRAM, Onsala, Metsahovi, Yebes, the Korean VLBI Network, the Greenland Telescope, the Green Bank Observatory and the Very Long Baseline Array (VLBA). The VLBA and the GBT are facilities of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The data were correlated at the correlator of the MPIfR in Bonn, Germany. RL, JLG, GYZ, AF, TT, IC, IA and AA acknowledge financial support from the State Agency for Research of the Spanish MCIU through the “Center of Excellence Severo Ochoa” award for the Instituto de Astrofísica de Andalucía (SEV-2017-0709), from the Spanish Ministerio de Economía y Competitividad, and Ministerio de Ciencia e Innovación (grants AYA2016-80889-P, PID2019-108995GB-C21, PID2019-107847RB-C44), the Consejería de Economía, Conocimiento, Empresas y Universidad of the Junta de Andalucía (grant P18-FR-1769), the Consejo Superior de Investigaciones Científicas (grant 2019AEP112). This study makes use of 43 GHz VLBA data from the VLBA-BU Blazar Monitoring Program (BEAM-ME and VLBA-BU-BLAZAR; http://www.bu.edu/blazars/VLBAproject.html), funded by NASA through the Fermi Guest Investigator Program. The research at Boston University was supported in part by NASA Fermi Guest Investigator program grant 80NSSC20K1567. The VLBA is an instrument of the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated by Associated Universities, Inc. The POLAMI observations were carried out at the IRAM 30m Telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). This research has made use of data from the MOJAVE database that is maintained by the MOJAVE team {{cite:5d6de551b0e878c946d1880d6b69a296cec43815}}. 
This research has made use of data from the OVRO 40-m monitoring program {{cite:66bdbb5f3f74c95f79a1d0cafa08c7d2a5a0d4e6}}, supported by private funding from the California Institute of Technology and the Max Planck Institute for Radio Astronomy, and by NASA grants NNX08AW31G, NNX11A043G, and NNX14AQ89G and NSF grants AST-0808050 and AST-1109911.
d
f4a54c8ab9d305f8dc1a58869a33bffc
In recent years, owing to the wide deployment of smart devices, interest in speech-based applications has been growing rapidly. One major sub-field that is gaining popularity is speech-based identity recognition, which includes speaker verification, language identification, and voice spoof detection. Since the input speech signals are likely to have different durations, an utterance-level fixed-dimensional vector (i.e., embedding vector) is usually extracted and fed into a scoring or classification algorithm. To this end, various methods utilizing deep learning architectures for extracting embedding vectors have been proposed, showing state-of-the-art performance when a large amount of training data is available {{cite:bb127664f3505db4ba3bff91331b0dd104deb972}}, {{cite:7fe2a92daa7074d23f32897d2ef9b0150f18455d}}, {{cite:9509ae9a018479e0da0c9651447d3050280ad68e}}, {{cite:7c364eb224eee9ab003bd3a1fdb2130dd213150c}}, {{cite:794167bb998869005b99a214b57109e8b43a8039}}, {{cite:7d4f9d7f5ef5e08295f62c8e954dcdb39fc18c04}}, {{cite:3045b3c8fb56af8f2e04be1d1ea5a192f3892af7}}. However, despite their success in well-matched conditions, deep learning-based embedding methods are vulnerable to performance degradation caused by mismatched conditions {{cite:6bde5d2a10f5a6f1a98da6496f0ab23c8b21d1c8}}.
i
2e7cf3e860e5ca3d3954856ff00caf96
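The step of mapping a variable-length utterance to a fixed-dimensional embedding can be illustrated with simple statistics pooling over frame-level features, a common ingredient of such systems. The sketch below is a generic illustration, not the architecture of any of the cited methods; the frame features would normally come from a learned encoder.

```python
import numpy as np

def statistics_pooling(frame_features):
    """Map a (num_frames, feat_dim) sequence of frame-level features to a
    fixed 2*feat_dim utterance-level vector by concatenating the
    per-dimension mean and standard deviation over time."""
    mean = frame_features.mean(axis=0)
    std = frame_features.std(axis=0)
    return np.concatenate([mean, std])

# Utterances of different lengths map to vectors of the same size,
# so a single scoring or classification back-end can consume them.
short = np.random.randn(120, 64)   # ~1.2 s of frames, 64-dim features
long_ = np.random.randn(900, 64)   # ~9 s of frames
assert statistics_pooling(short).shape == statistics_pooling(long_).shape == (128,)
```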
Finally, we comment on the properties of the finite temperature holographic two-component superfluid that differ from those of the weakly coupled zero-temperature two-component BECs studied in {{cite:42dc4550a4a36d555d768e9dca6267ca39e690e3}}, {{cite:597e5c9fc2853dbbdfa3763b5a849d5d5a2c6011}}. Firstly, in the finite temperature, strongly coupled holographic superfluid, the increased correlation length delays the appearance of the phase-separated phase to a larger repulsive coupling {{formula:102dd348-9f9b-4132-91b0-f41861ed8a63}} (see Fig.REF (a)). Secondly, the triangle and square lattices (see Fig.REF (a-d)) are less perfect in the holographic model than the perfect lattices found in very low temperature rotating spinor BECs {{cite:922fc354c6c79fd1cc08a8a5ca59458b5fc18491}}; the distortion of the regular lattices found here is very likely due to the relatively high temperature close to {{formula:8ae444c5-001e-4d39-8148-1a6e231ab45e}}, since we have confirmed that at the higher temperature {{formula:2940bd50-6ce2-4cf4-8e67-15310bb083ff}} the vortex lattice becomes more disordered, with lower translational symmetry; this is also what we observed in a single-component superfluid at different temperatures {{cite:4c982ce15a7cf8828b04467430561ea8f16694c2}}. This is probably because the larger vortices at higher temperature are closer to each other, so the interaction between vortices is stronger, which prevents the lattice from organizing into a perfect one. Employing a zero-temperature gravity dual defined in the AdS soliton {{cite:b5dc44b1a6e98d4f658b73b7fd60ff2e6e597774}}, vortex lattices with perfect hexagonal and square symmetry might be expected in single- and two-component superfluids, respectively. 
Thirdly, even at finite temperature, perfect sheet solutions with accurately equidistant layers were obtained for the first time from holography (see Fig.REF ) in the deeply phase-separated regime; this is probably due to the strong-coupling nature of the holographic model. In contrast, in the weakly coupled zero-temperature BECs {{cite:597e5c9fc2853dbbdfa3763b5a849d5d5a2c6011}}, the distances {{formula:4aaaaa08-c707-4f2f-bd9c-8be1a54a9e28}} between sheets are not perfectly equal, which makes them harder to calculate when comparing the numerical results to the Landau-Lifshitz formula.
d
f85d15d9053de210046de9ca1f932dbf
The simulation of quantum many-body systems on quantum computers has natural advantages, since it avoids the exponential scaling of computing costs on classical computers {{cite:0abb47209cf2f340d8a4e4becd3a43a4a6a48c49}}. Atomic nuclei are strongly correlated finite quantum many-body systems, for which the accurate treatment of many-body correlations is essential. There are already several applications of quantum computing in nuclear physics, such as the implementation of the coupled cluster method for light nuclei {{cite:8fb8e954efe9481400391d56e46abed4701ef95d}}, the Lipkin model {{cite:4063124254c764660d286b7370e5aa3a9bcb778a}}, {{cite:9608b1ed7cc421e196c46cc50fb705942c0fc508}}, neutrino-nucleus scattering {{cite:8af1f3359d578c5118f6d28f9da3042b4001e867}}, nuclear dynamics {{cite:7db6d022a583f47e34cfe0af5dfcb96cfde771c5}}, {{cite:afe3494441e58c6e4d15cb72c9b053578d564d9e}}, and symmetry restoration {{cite:a470d59ce0fa30b61fa85253c5e3312bdb9f93b3}} on quantum computers. These applications in simplified many-body models have paved a route toward practical quantum computing of small quantum systems, such as light nuclei, in the near future.
i
7d9a44121baea41eb34201a440d39c4b
Estimating the click-through rate (CTR) and conversion rate (CVR) accurately plays a vital role in E-commerce search and recommendation systems. It helps discover valuable products and better understand users' purchasing intention. Due to the huge commercial value, much effort has been devoted to designing intelligent CTR and CVR algorithms. For CTR estimation, many sophisticated models such as DeepFM, xDeepFM, DIN, and DIEN {{cite:6fe1c0592aff0456405a0592b81cf5ecaaf27128}}, {{cite:6101a65def6a77a37d15e958bf02bcf0c5fb4993}}, {{cite:00b27bbd19a28e1e4c83c7300a90d51c7d6717a3}}, {{cite:4fb092875cba2b2a41a0deebf3c2386d0fb5ee91}} have been proposed in the last few years. By taking advantage of the strong nonlinear fitting capability of deep neural networks and the huge amount of click and impression data, CTR models have achieved great performance. However, due to label collection and dataset size problems, CVR modeling is quite different and more challenging.
i
f2fb8342f5b73f0bf40882c7ba54fbf6
We shall now reproduce the experiments from Sections REF  & REF with different feature importance methods. To approximate Shapley values, we use the Monte-Carlo sampling from {{cite:65af1e13ffdff5152df5b5debd5aa67b908f5f9a}}. We found this approach more computationally efficient than KernelShap.
m
01917f680ed01917f2da1c4bfdc601c1
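Monte-Carlo approximation of Shapley values works by averaging marginal contributions over random feature permutations. A minimal sketch follows; the `model` callable and the fixed `baseline` reference input are simplifying assumptions for illustration, not the exact sampling scheme of the cited work.

```python
import numpy as np

def shapley_mc(model, x, baseline, n_samples=200, rng=None):
    """Monte-Carlo Shapley values for a single input x.

    For each random permutation of the features, a feature's marginal
    contribution is the change in model output when that feature is
    switched from its baseline value to its actual value, with the
    features preceding it in the permutation already set to x."""
    rng = np.random.default_rng(rng)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        z = baseline.copy()        # start from the reference input
        prev = model(z)
        for j in perm:
            z[j] = x[j]            # reveal feature j
            curr = model(z)
            phi[j] += curr - prev  # accumulate its marginal contribution
            prev = curr
    return phi / n_samples
```

For a linear model with a zero baseline, each permutation yields the same marginal contributions, so the estimate is exact; for nonlinear models, the variance decreases with `n_samples`.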
{{formula:cb64aa11-6b30-435d-b5f2-0f9640bc217f}} Taking into account Sections 2 and 3 of {{cite:b15230f2ab2b7264bc276af0df71a27789ece23b}}, the following theorem can be obtained (see {{cite:4d3c4d5785192da58ffe9877028bb56e00816ab6}}, Chapter 3, for more details).
d
bb835a66b1e4352d768c80fe1525747c
Similar to the Algebraic Triangulation proposed by {{cite:73ed7b6f8cd32ed7e86d9f24663527e69d7d931c}}, our method consists of a 2D backbone network applied to each view, followed by a differentiable triangulation step. Unlike {{cite:73ed7b6f8cd32ed7e86d9f24663527e69d7d931c}}, our method directly models keypoint uncertainty, outputting a 3D distribution (as opposed to a point-wise estimate) while requiring only 2D labels, which may contain noise; see Figure REF for a schematic of our architecture.
m
22ecb5c806b8aa87a4e8f3673465b7e9
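A point-wise triangulation step of the kind this excerpt builds on can be illustrated with confidence-weighted linear triangulation (DLT) solved via SVD. This sketch assumes known camera projection matrices and optional per-view confidence weights; it is not the authors' full uncertainty-modeling formulation.

```python
import numpy as np

def triangulate_dlt(proj_mats, points_2d, weights=None):
    """Weighted DLT triangulation of one keypoint from multiple views.

    proj_mats: (V, 3, 4) camera projection matrices.
    points_2d: (V, 2) detected keypoint location per view.
    weights:   (V,) optional per-view confidences.
    Returns the 3D point minimizing the weighted algebraic error."""
    V = len(proj_mats)
    if weights is None:
        weights = np.ones(V)
    rows = []
    for P, (u, v), w in zip(proj_mats, points_2d, weights):
        # Each view contributes two homogeneous linear constraints on X.
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    A = np.stack(rows)                    # (2V, 4)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                            # right singular vector of smallest sigma
    return X[:3] / X[3]                   # dehomogenize
```

Because the solution is a (sub)differentiable function of the 2D inputs and weights via the SVD, this kind of triangulation can be placed after a 2D backbone and trained end-to-end, which is the design choice the excerpt describes.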