Multidimensional scaling (MDS) is a technique designed to solve exactly this type of problem: it attempts to minimize the disparity between the ideal distances and the distances realized in the low-dimensional layout.
This is done by defining an equation to quantify the error in a layout, and then minimizing it. While this equation comes in many forms {{cite:dbb36e8353c7d1132f6da14f291805808e401f29}},
distance scaling is most commonly used for graphs {{cite:c6a1c1e649895bad85ebefd5796445c4f7580887}}, where the error is defined as
{{formula:3481cebf-ef65-46ce-a907-9df279cb6d17}}
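To make this concrete, the following minimal NumPy sketch evaluates a distance-scaling stress of this form for a toy graph; the weighting w_ij = 1/d_ij^2 is a common convention assumed here for illustration, not something fixed by the text.

```python
import numpy as np

def stress(X, D):
    """Distance-scaling stress: sum over pairs i<j of w_ij (||X_i - X_j|| - d_ij)^2,
    with the common weighting w_ij = 1 / d_ij^2 (an illustrative assumption)."""
    n = X.shape[0]
    W = np.zeros_like(D)
    np.divide(1.0, D * D, out=W, where=D > 0)          # w_ij = d_ij**-2
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)                       # count each pair once
    return float(np.sum(W[iu] * (dist[iu] - D[iu]) ** 2))

# toy example: 4 nodes on a path graph; ideal distances are hop counts
D = np.array([[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]], float)
X = np.random.default_rng(0).normal(size=(4, 2))       # random initial layout
print(stress(X, D))  # MDS seeks the layout X that minimizes this value
```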
---
FixMatch.
FixMatch {{cite:689a0a37412d533489550693e3900590d188fa54}} combines consistency regularization and pseudo-labeling while vastly simplifying the overall method.
The key innovation is the combination of these two ingredients, together with the use of separate weak and strong augmentations in the consistency-regularization branch. Given an instance, the predicted pseudo-label is treated as an artificial ground truth only when the model assigns it high confidence. As shown in Fig. REF (5), given an instance {{formula:644d6871-1b62-4721-ae67-88674177f704}} , FixMatch first generates a pseudo-label {{formula:b9752089-0721-4198-9891-e3e4467deb01}} from the weakly-augmented unlabeled instance {{formula:0426f6f3-be1b-4729-89b2-b0c9df713846}} . This pseudo-label then serves as the training target for the model's prediction on the strongly-augmented version of {{formula:a1341133-ba6a-43a1-a7ee-862cf2ee947b}} .
In FixMatch, weak augmentation is a standard flip-and-shift strategy that randomly flips images horizontally with a fixed probability. For strong augmentation, two approaches based on {{cite:3bf3076e0a861aabc26d99f7d3fdebb4d2004ec0}} are used, i.e., RandAugment {{cite:3ac83a59c2655a8d326e7806a706036f402d2c10}} and CTAugment {{cite:9beac6090968f87f3e62ea1688af68f4b73e3e8d}}; Cutout {{cite:cefdfd63de99e8319b6375e6d124cf28019f75f2}} is applied after either of these strategies.
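A minimal NumPy sketch of the unlabeled-data objective just described is given below; the confidence threshold of 0.95 and the function names are illustrative assumptions, not values fixed by the text.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fixmatch_unlabeled_loss(logits_weak, logits_strong, tau=0.95):
    """Masked cross-entropy on strong views against weak-view pseudo-labels.

    logits_weak, logits_strong : (batch, classes) model outputs for the weakly
    and strongly augmented versions of the same unlabeled images; tau is the
    confidence threshold (0.95 is an illustrative choice)."""
    p_weak = softmax(logits_weak)              # predicted class distribution
    conf = p_weak.max(axis=1)                  # confidence of the prediction
    pseudo = p_weak.argmax(axis=1)             # hard pseudo-labels
    mask = conf >= tau                         # keep only confident samples
    log_p_strong = np.log(softmax(logits_strong) + 1e-12)
    ce = -log_p_strong[np.arange(len(pseudo)), pseudo]
    return float((ce * mask).mean())           # averaged over the full batch
```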
---
The possibility of analytic treatment comes with limitations. We use the rotating wave approximation (RWA) throughout the paper, which means that all frequency scales such as Rabi frequencies and detunings are much smaller than the cavity mode frequency and the optical transition frequencies in the qubits. Beyond the RWA, general analytic solutions become impossible in the fully quantized and nonlinear (nonperturbative) regime of light-matter interaction between fermionic and bosonic fields. Note that some recent experiments with low-frequency transitions (e.g. microwave or terahertz) in strongly coupled multi-electron systems went beyond the RWA into the ultrastrong-coupling regime; e.g., {{cite:416e412c6d670a8d8389d80b992baaebb995c1b3}}, {{cite:f232274e9e9cd639df174682443098d1736aace2}}, {{cite:ecbbd28b25124b311564f645f00a03677f64f741}}.
---
We finally recall the privileged position of the two-party {{formula:84c77c42-d47e-4c13-a802-0b54f15b692c}} -mode GSs,
for which the above necessary conditions regarding both kinds of quantum correlations
are equally sufficient. We start with Simon's seminal work {{cite:f78e17161e3869721f230a7c78d00e1788fc4ca3}}, which essentially
proved that the first inequality (REF ), {{formula:d0d5bdfd-68c9-4207-8929-0c89f2bc8de8}} ,
is a criterion of separability for two-mode GSs. The result of Werner and Wolf {{cite:db8adc90e5fe96b12ba980101cde657ec1659a23}}
that the PPT property of any {{formula:82ffe994-b1ec-470c-ac9c-6da4c8c29359}} -mode GS implies its separability
is the maximal generalization of Simon's theorem for {{formula:373be469-a8ec-4b00-b9e3-a91a90391b33}} {{cite:f78e17161e3869721f230a7c78d00e1788fc4ca3}}.
However, by contrast to the latter, it provides no analytic separability indicator for {{formula:bcf68e09-59a4-49d5-a5f4-8b5ad4761733}} .
---
Secondly, we directly compare the results of the QRCM to its classical counterpart for the same flow. We identify hyperparameters in both approaches that can be related to each other. Note that classical and quantum reservoir computing models differ essentially, which is primarily a consequence of the linearity and unitarity of the quantum dynamics {{cite:cbdf5cc0abee6fb7b52c11a9a62fbe3d23102ccd}}. The present work also extends previous QRCM investigations, such as in the form of spin ensembles {{cite:30eb1064e56fc6307e091a6cfe0e4ea8de3dd15c}}, {{cite:b6e97dbcdf4880db1ca5b52f3d9ce67e4e4c7543}}, {{cite:f424cd4a89a186976698d5ae40c88c4e52b288b4}}, single nonlinear oscillators {{cite:6a6d65301a41caf556586a2a3eb7573c5fd51e1d}}, and smaller universal quantum circuits {{cite:48fab8280dd5cdbdbe7bced8a758f6df1785c8c6}}, {{cite:e60db7ab23848feea3cffeec59f588f1f343dc7b}} that have been applied for the one-step prediction of one-dimensional time series or solutions of the Mackey-Glass time-delay differential equation, see also ref. {{cite:0119532f57ef0b7ad320a7a2d1ed0062ccd2931d}} for a compact review. This is similar to the open–loop scenario with {{formula:d17901d2-f26b-4456-9332-ff7076872516}} .
---
Although {{formula:3c01b1fa-30fa-4347-927b-924f0e15024a}} has been calculated with high precision {{cite:2e80857e1fd3d4cafff01f3ee4a5284ba842fa4d}}, {{cite:5ab27c58bd7960327e10875521f1c0668aa80fde}}, {{cite:e2d74999524305ceb457168e71f301d7755b9114}}, {{cite:13a2411682e8dde40043d1ac7f00502b260a53de}}, {{cite:c20e561858b9e94136aaea79f3cd6e630f48b365}}, {{cite:e1e8124877577c12d94a8b8edc43df09098d0c37}}, {{cite:cb4f876f10c223736e97eb89bb526b1a69aba7dc}}, {{cite:3f2131b37e3300ffbed8b46fba4f69d3c8565559}}, {{cite:4141f748dbeabb5fbf6d1e906de181eabcc2bf1e}}, we will not repeat those demanding calculations here. We only keep the LO results because a large part of the QCD corrections cancels in the ratio {{cite:ffaa6af1d30e3cb1501199fea05eb54b64e12737}}, {{cite:5db4cb7156fb6883fc007f1632febff137d28db7}}, {{cite:7d3aeead72edb6c8a071859590e65f982b765901}}, {{cite:a6725cf73b8addde6115e6e42dabc304cebdd36d}}. To get the numerical results of cross sections, we write a model file through FeynRules {{cite:f335ba6ddf90c1b7c3fd767d10a0f430123c70aa}}, {{cite:8f237d1af3f40b50a2698696ac3450499d615b64}}, FeynArts {{cite:fe6cc9683d8dde2fbd7e9d9a3c4bd15abd861c8a}} and NLOCT {{cite:babeadda886e22621e0b5c2481a71b61f5aaae50}}. The model is then linked to MadGraph {{cite:240f4a479c5398c2d02004dce80a34d1e20db1af}}. Before the numerical calculations, we take the following default settings:
---
In this sub-section, we present a direct comparison between our method and other RGBT trackers on several benchmarks, including VOT-RGBT2019 {{cite:cd398d934569fbbd3f30f5435fa60076ac386c16}}, GTOT {{cite:9fc09016217e9db0c6146a241dc2e9b7f40b3494}} and RGBT210 {{cite:e3d17a9b314b0eab66477f0d871a5c01c9477d95}}.
{{table:731c38af-0d8e-4c91-8a23-33001c137d84}}
---
Generative modeling has seen drastic improvements in recent years. Especially for images, state-of-the-art generative adversarial networks (GANs) produce impressive results {{cite:fc23415728481acdc28e4834c4c27e244ad5aa9a}}, {{cite:b5b4ccfd249d1b44faa769621747cdd5a9638f25}}, {{cite:975f7434e1ba85f5fa28ab01647c449c7f95c752}}. Even though generative models such as GANs or variational autoencoders (VAEs) {{cite:6e220eead93cca01095c1c63d7b1082c771edac4}} are considered to be unsupervised methods, a large amount of data is still required.
While multiple attempts have been made at reproducing the success of image-based generative models in other domains, such as scenes, meshes, and point clouds, the results are far behind.
A major obstacle is the lack of data, as there are no high-quality data sets of scenes that contain many models. Many data sets are small, e.g. {{cite:ad232c99ff0a0015814f5bedb0c20cbd5c96e53c}}, or contain too many low-quality models. For example, the SUNCG data set {{cite:4de2fb815add64e20ceb06fbaa1a3e822ec55848}} contains many low-quality models created by amateur modelers that violate commonly accepted hard constraints (in addition, the data set is currently unavailable due to a legal dispute).
---
where {{formula:970108bb-fe52-4df8-97fb-fc21fc6161b0}} denotes the Robin transmission operator, and
{{formula:578b5915-95dc-42fc-bfe8-ed160e44716f}} are nonnegative acceleration
constants satisfying {{formula:784e605a-193f-4d40-9006-559faea90a37}} (see, for instance {{cite:0a1b1861b366379e852178806e0d739384344a77}} and {{cite:76cf351ab7b161fd43992a6b6c3d12155adbc1e5}}, {{cite:a55dcf8bdcc1dd9747f3c878eebacee09afdd537}}). On the other hand, the adhesion
interface condition featured by the elasticity problem is incorporated through the following relation
{{formula:3057eaf3-6fde-4cbb-849a-08ebbe15f934}}
---
In the redshift
interval ({{formula:e3263a08-bfbe-4698-9d5a-24932dc2109a}} ), that overlaps both of the redshift ranges that
we have studied, {{cite:ee584d0d35f7eb5d17fff98f3fbf3004a386ac50}} used the Hubble Space Telescope
to measure the UV luminosity function to fainter magnitudes than
{{cite:6e631beb328acfb7846c13fa98b38194280f160d}}, {{cite:1a81cbbbc087976edb7fdc7c03e7df237c744e66}}, or us. {{cite:ee584d0d35f7eb5d17fff98f3fbf3004a386ac50}} obtained
{{formula:99849ec7-98d1-4835-a5c6-127917854bcb}} , which is consistent at 2{{formula:43341a33-7d5c-4048-982b-1273a4ec09c6}} with our
measurements in both redshift ranges, and those of {{cite:6e631beb328acfb7846c13fa98b38194280f160d}}.
---
In Sec REF , we showed examples of different masking methods: TM, FM, SpecAugment, and PM.
SpecAugment {{cite:679087cb507d44ce73dbf3b27d75a8d8f661d180}} is a simple but effective masking method, which has been applied to many environmental sound recognition tasks.
It randomly masks the frequency bins and time frames of the acoustic features.
TM and FM cover only time frames or only frequency bins, respectively,
whereas PM randomly covers {{formula:ae967814-8ce6-4e95-9065-311ce23bd9b7}} square areas of the acoustic features.
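For illustration, a minimal NumPy sketch of the TM/FM masking operations is shown below; the mask counts and maximum widths are illustrative assumptions (PM would instead zero square patches spanning both axes at once).

```python
import numpy as np

def mask_spectrogram(spec, n_freq=1, n_time=1, max_f=8, max_t=20, seed=None):
    """Zero out random frequency bins (FM) and time frames (TM), in the spirit
    of SpecAugment; spec is a (freq_bins, time_frames) feature matrix and the
    mask counts/widths are illustrative defaults."""
    rng = np.random.default_rng(seed)
    out = spec.copy()
    F, T = out.shape
    for _ in range(n_freq):                    # frequency masking (FM)
        f = int(rng.integers(0, max_f + 1))
        f0 = int(rng.integers(0, max(1, F - f)))
        out[f0:f0 + f, :] = 0.0
    for _ in range(n_time):                    # time masking (TM)
        t = int(rng.integers(0, max_t + 1))
        t0 = int(rng.integers(0, max(1, T - t)))
        out[:, t0:t0 + t] = 0.0
    return out
```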
---
The differences between OPAL and OP opacities are small but can noticeably affect the properties of
solar models. The {{formula:983de5f4-efb9-4d4a-955f-4040b36cab3b}} B neutrino fluxes computed from models constructed with OP opacity tables
are lower than those calculated from models constructed with OPAL opacity tables, which derives
from the fact that OP underestimates opacities for the regions of the Sun with {{formula:876a2be9-05db-4a47-8939-c3cfbae3a129}}
K by about {{formula:8cd14866-43f8-4d65-be64-b7fd0b922004}} compared to OPAL, especially in the core. As a consequence, the models constructed
with OP opacity tables disagree with the {{formula:63e40fdc-1450-40b4-af90-fa420539c5ae}} B neutrino flux detected by {{cite:ef17ca3b87627212c18ee7308d04f6e2a39dbda0}} but can agree
with that determined by {{cite:f37a14c3f9718a7c5a98ee9be5e1af7941e9cff3}}. Thus precisely determining the {{formula:780967a9-69fe-4eb7-ae00-9f9fe924f2dc}} B neutrino flux aids in
diagnosing the opacities in the solar core.
---
We show the segmentation results on BB in Fig. REF . Our approach (column (1)) achieves the most stable predictions among the baselines. For example, on the left side of row 2, the input BB patch has a large portion of artifacts and tears. Our approach is resistant to those regions while still generating a meaningful semantic segmentation. On the right side of rows 1 and 2, our approach generates consistent segmentation in regions with elongated structures (long stripes), whereas {{cite:dfd7cf890e77004245848b9e335079511acaea73}}, {{cite:7e344f58ed42784f95f01de31b133ba74db41e98}} generate discontinuous predictions. Compared with {{cite:ba3fee7a549a5b67bc1a56c3593dee566151b83a}}, our approach does not require multiple forward passes to quantify uncertainty, reducing GPU memory usage by over 60% and runtime by over 50%, as shown in Table REF .
{{table:e9bd578e-1eb8-42d9-bba3-146999f9abe6}}
---
In this work, we exploited the step-wise target controllability framework {{cite:5f1a23be93b42328aa2756867b38e926f70b812a}}, which relies on linear time-invariant (LTI) dynamics and is based on the Kalman rank condition.
While it is known that the brain presents a nonlinear dynamics, the study of linear models has proved to be beneficial in improving our understanding {{cite:f8324572c86e25a53d95c60d7aa3b7ff17a2528c}}, {{cite:90f1d91c5d99885ec9579d9dc5a0d006351e0733}}, {{cite:2c6029ac35b3a14d29d5e11c8cf3265522602497}}.
Specifically, the controllability of a linearized model can inform on the controllability of the nonlinear model {{cite:2085ca9eba2fc46dede8a492f6eed4cd1399360c}}.
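As a sketch of the underlying test, the following NumPy snippet checks the Kalman rank condition for an LTI system; the toy chain network is an illustrative assumption.

```python
import numpy as np

def is_controllable(A, B):
    """Kalman rank condition for LTI dynamics x' = A x + B u:
    the pair (A, B) is controllable iff rank([B, AB, ..., A^(n-1)B]) = n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# toy example: a 3-node chain driven from its first node
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
B = np.array([[1.], [0.], [0.]])
print(is_controllable(A, B))   # True: the chain is controllable from one end
```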
---
Neutrino oscillation has substantiated the evidence for non-zero neutrino mass. Massive neutrinos can also have a non-zero magnetic moment. Although neutrinos cannot couple directly to photons, a magnetic moment can be generated via quantum loops. The minimally extended SM (MESM) value of the neutrino magnetic moment is {{formula:7043a748-ec9a-4d5e-81de-eefd059dc46f}} {{cite:428aedbdfa63e17e6ad6905766a4979357fb92a6}}. Various beyond-SM scenarios can enhance the value of {{formula:ef4e271d-8ff0-45d5-b88e-c0460e862093}} {{cite:98c6ba6dec3f470c0790a1a79a46ab9a4a5b2671}}. The current experimental upper limit on {{formula:c4d85286-82c1-48fd-bc83-03e62efaddf0}} is {{formula:09771c0a-5757-4c0c-8690-7e8d139b3670}} {{cite:2edba62c2b0ba6ff9166b96a976f82be3103b9ba}}, {{cite:4a3686f57488bfc5d2a764cebf8621482d407ef0}}. Due to this non-zero magnetic moment, neutrinos can be affected while passing through a magnetic field. In particular, a plausible possibility is that the spin of a neutrino is flipped while it moves through the magnetic field. This spin flip is, of course, caused by the external field, whereas neutrino oscillation is an intrinsic property of ultra-relativistic neutrinos, so flavor oscillations persist during the passage through an external magnetic field. As a result, both spin and flavor oscillations can occur simultaneously during propagation. This is known as spin-flavor oscillation of neutrinos.
---
The violation of the inverse-square law encountered here is not incompatible with the requirements of the conservation of energy because the radiation process by which the superluminally moving current sheet in the magnetosphere of a neutron star generates the observed gamma-ray pulses is intrinsically transient. Temporal rate of change of the energy density of the radiation generated by this process has a time-averaged value that is negative (instead of being zero as in a steady-state radiation) at points where the envelopes of the wave fronts emanating from the constituent volume elements of the current sheet are cusped {{cite:4becc877abdc24e2d486c00415af3169bd476d41}}. The difference in the fluxes of power across any two spheres centred on the star is thus balanced by the change with time of the energy contained inside the shell bounded by those spheres {{cite:5e543f23f05ebb9597063a63e9800cf6ed319c99}}.
---
To compare torsional responses from the two-phase model with those from the WLC model, we tested at a constant rate of supercoil accumulation, {{formula:4937a6e9-0310-495f-b394-72ef2777dd41}} per second, comparable to the RNA polymerization rate during transcription elongation (i.e., {{formula:8287f878-5dcc-48c7-aa1b-851680e64e6f}} 100 bp per sec), in which {{formula:abe38abb-b40e-4a3e-bb17-9e31937318ab}} is injected into DNA as the RNA polymerase enzyme unwinds to synthesize one nucleotide {{cite:fef5844e6e02716c2100d1283aebbe9207f00a02}}, {{cite:e891e1a3d4eb9a8a186cca034a377525fa5a80c2}}. The discontinuities in the torque and extension curves are reproduced by the two-phase dynamic model; they reflect the buckling transition during plectoneme nucleation {{cite:4adc779ea968fce17993ffc9434f6dfe04ca0d66}}, {{cite:273806d6ce9ba1cba2ca7272fb5ee5df2b446109}}, {{cite:36f380efe9ab374804ddc307600bae4e7eb592ff}}, {{cite:81adba2881fb284ac49a9eb777c2fc1ea0194cb7}}, {{cite:c78b656b66e927659c1a7a3589f57c9afe7a640c}}, {{cite:dedcdc177fffceddbbdb7cfcebd99d1e053a56a5}}. The torque and extension versus linking number curves suggest that the non-equilibrium effect of supercoiling accumulation (from the growth of the plectonemic phase) induced by RNAP is constantly relaxed during each polymerization step or elongation cycle. Hence, such an RNA elongation process can be regarded as quasi-steady state. An obvious advantage of the two-phase dynamic model of DNA supercoiling developed here is its trivial computational cost, as we only deal with the slow degrees of freedom of the system. In addition, we performed simulations of plectoneme growth under a constant torque, since some studies regard RNAP as a torsional motor that imposes a constant torque {{cite:9d1339efe93399836498101f0b64b48990538032}}. We accordingly determined the frictional coefficient due to slithering in the plectoneme by calibrating against the WLC model.
---
Ideally, we would like to compute the size or complexity of an operator in a CFT. There are quite a few available approaches and some of them are applicable to our current scheme. For example, it is straightforward to compute the complexity of a bulk operator using the techniques in {{cite:f4dfbb59624f198698dd1a665eedd47072407654}}. Roughly speaking, it is the cost to move a bulk operator radially to a given point from the boundary. Unfortunately, we found that the complexity grows linearly with the proper distance (instead of exponentially). We suspect the mismatch is due to the fact that the bulk operator (in the standard HKLL form, or for computation purpose the form in {{cite:dab6ccdeae3d368eb119212e210b6fb92802ba1b}}) is essentially free (not seeing even the gravity). Another option is to compute the Krylov complexity (see e.g. {{cite:24f0a2b1a096e84549de65a8aca55f4a4ac3b996}}, {{cite:d6859954ccfa419d67282fcbdf4265513db813f0}}, {{cite:26bd4aeb04ac2de14547ffe48166e7d67668c6a7}}).
---
While much research has explored how echo chambers on social media platforms fuel polarization {{cite:bab329483f93827c40a49d716c6dee10829134c5}}, {{cite:f8e7de6590e9ccd15ae24f26f6b0976a2c0b9ad9}} or provided theoretical explorations around how connections may be formed between individuals with opposing views in order to reduce controversy {{cite:fd4c62bae31c2754c3e9e5f8dd30468c8f4eb12e}}, there have been few experimentally-driven efforts to empirically understand how we might mitigate it. The few randomized experiments that have been carried out have largely explored whether or not exposure to diverse content—as opposed to exposure to people with diverse political views—reduces political animosity, even though prior research has highlighted the importance of contact between people in opposing groups to reduce inter-group conflict {{cite:2d2d8258d156ab94a48ef9e4e7b2ee17e264d59c}}. For example, several digital tools emerged following the 2016 U.S. Presidential Election to enable individuals to explore their own social media echo chambers {{cite:5ba328cac168ff4b7eb74497ad66fc8f1286dd9b}}, read news from politically diverse sources {{cite:3cf66d653ffa40e804c8ec0fcb53e22562763113}}, and even connect face-to-face with members of opposing political groups {{cite:68e4ce8064c5245ed1da4a4e6c3f80cc5adab0a9}}. However, to our knowledge, none of these tools evaluated the causal effects of their interventions on reducing political animosity between disparate groups.
---
The DBM{{cite:bc9d02a9a82c449ff6c98d041504be0161d1c4a3}}, {{cite:d59d3a546994408ecf7f0149bcf85cd55b7fcc6d}}, {{cite:a01b39f4863eef5f2228f4a739fc8598260fb0bf}}, {{cite:8d7f8b469e3b15e12a90cd799d275b69ea651d64}}, {{cite:50ca09e754eb8d4fab9e2b50bbf7142096ef0ab1}}, {{cite:375dfb9ed67b77b8f164ae7ca8361f372b1e6e8e}}, {{cite:7783803239dd9114c1384a074bcab07cd7626f3c}} is developed from a hybrid of Lattice Boltzmann method (LBM) {{cite:60745bd4dcdf1ef8b629cecf632237b48d6f2b1a}}, {{cite:1bf7e2161fc167d2c5436c5a3834b579c65dfbf7}}, {{cite:3e74725229b38ae2338ff8768c39f41d579d9e60}}, {{cite:f0d666964070c20d4eb978fb2a6a5e540419b65d}}, {{cite:04512478c705772010c7e2d54e33bde9ce940445}}, {{cite:72c7a6c414c84759b412a96896a8a06e51047630}}, {{cite:058db7f9c64d62a2407200a86d3b51dfbc0efc46}}, {{cite:50a3d7af377906d432b95b9dca01f263848276bb}}, {{cite:da71a293cb1e615c63ba51a10015072cf5865f58}}, {{cite:796fa442968907e12842c24c658ddaf70f7f7df6}}, {{cite:d99265c0c49d0ee8365f21874cc7396ff4cd5d55}}, {{cite:7fee391c4c4de9590a8d42745d4a26d4fc3c190a}}, {{cite:085e9cf8ac88e09b6f292d0470d04cf30a490a8d}}, {{cite:a912070816714e9f7e61653787e4dda89dfb3fef}}, {{cite:5515dc7bd704f196f592e9961181f049e6b216ba}}, {{cite:ef72b2a1ebc400fdf4a538b34224079fd8eddbae}}, {{cite:d9859ba4982b74c42723eed56a229367d14c068e}}, {{cite:d347b1019a7d4d1c8bf7e2c0c86d72eb55e66138}}, {{cite:b652a8c88ffa9c6a58655f09ea3b87fe6f707ed6}} and the phase space description method of statistical physics.
It is a concrete application of the coarse-grained modeling methods of statistical physics to fluid mechanics, and further develops the statistical-physics phase-space description in the form of the discrete Boltzmann equation.
Its idea originated from a research review published by Xu et al. in 2012 {{cite:bc9d02a9a82c449ff6c98d041504be0161d1c4a3}}. During its development, it was inspired by the morphological phase-space description method {{cite:375dfb9ed67b77b8f164ae7ca8361f372b1e6e8e}}, {{cite:7783803239dd9114c1384a074bcab07cd7626f3c}}, {{cite:97e042228818b7d07ddd408077a0ad402ae1ff6c}}, {{cite:26ada5479a5d3cca2ce9987be927b6376c209c2e}}, {{cite:7538d812774a8016b77607bec7cd0e557f45c835}}.
The methodology of DBM is as follows: according to the research requirements, a perspective is selected from which to study a set of kinetic properties of the system, and the kinetic moments describing these properties are required to keep their values during model simplification. Based on the independent components of the non-conserved kinetic moments of {{formula:24c7f49f-5e8d-435a-b622-feb8c56c8feb}} , a phase space is constructed, and this space and its subspaces are used to describe the non-equilibrium behaviors of the system. The research perspective and modeling accuracy are adjusted as the research progresses. Here {{formula:1b4db831-aa85-499b-89d8-c92f274f1a70}} is the equilibrium distribution function corresponding to {{formula:897d9971-edea-4bf9-903a-f4d061e75bf0}} .
---
The fate of the metastable state beyond the probe approximation involves higher levels of complexity. Considerable effort has been devoted to understand the properties of backreaction in the supergravity regime where one needs to construct backreacted anti-D3 brane solutions with KS asymptotics. Many works, starting with {{cite:4416b15bf1d245aee1d947e80732438757fbd04f}}, revealed solutions of the supergravity equations that involved unphysical singularities in the 3-form fluxes. Related earlier work includes {{cite:cdc44c17b872734e437f031a5e4f151d0c4199e8}}, {{cite:9db06a3b1284d92830b98eabe1ad04127127abaa}}. Subsequent developments after {{cite:4416b15bf1d245aee1d947e80732438757fbd04f}} include {{cite:d377c3377003f51ed0281ed425417d9ba0652b87}}, {{cite:7195145ee6a5a439ebb939e1a347346bfc9d1614}}, {{cite:5b4de253e2237ca84a6b33ed5d6e1ed007cd697d}}, {{cite:099c52e7c461f87529160806becaacd2fed0aa8a}}, {{cite:7b98446fa62793153a87a4e2ac3b51b0145a2857}}. The presence of these singularities was viewed by some authors as evidence that backreaction can change dramatically the conclusions of the probe approximation, casting doubt on the very existence of the metastable state originally discovered by KPV (and its subsequent applications to string phenomenology, e.g. {{cite:2ba577b7e8f5ccaf1f469e2e112c3d44d750a924}}). This conclusion was challenged, however, by the authors of {{cite:85dc5cbeefffd46b72e741e5977e3e54fa0de527}} who argued that the inclusion of backreaction effects in the effective field theory of a single anti-D3 brane are mild and under control, as one would naively expect.
---
Let {{formula:30b7ec01-40e7-4c49-91c5-d6213354449c}} be a (always Hausdorff) topological space. Recall that {{formula:18cf3093-cf67-4ee4-94e2-dad70f9513aa}} is totally disconnected if all of its components are singleton sets, and {{formula:194e3cbb-becd-44c5-be45-5f146ed7490e}} is zero-dimensional if {{formula:574f2d23-7394-485b-a667-b5843246dd63}} , the collection of closed-and-open — hereafter called clopen — subsets of {{formula:39e3b7cb-58c8-4d39-9db5-8e9c94011607}} , is an open base for {{formula:8e959ee5-17e2-41ea-b07c-0a4a26c3e09c}} . When {{formula:1bfa9ab8-431c-4b2f-8953-9372375956bd}} is locally compact, it is totally disconnected if and only if it is zero-dimensional {{cite:92309ea84f136b82498655386065f3097b2c070f}}. We will sometimes use the shorthand “TD" in place of “totally disconnected". By a compactification of {{formula:9c59dc90-8b68-440a-adab-a51df3e45846}} , we will mean a pair {{formula:a8ad8898-a25d-427a-b0ca-0624f593c3c4}} where {{formula:96a9a8a0-9df5-49e7-8cf9-cf9a08c40ba7}} is a compact (Hausdorff) space and {{formula:80d056e3-42d7-4ea3-8268-8cc68fe5a6f8}} is a dense-range continuous map; we will call {{formula:55065cfa-7a40-4399-9bfc-e63edac30954}} a topologist's compactification when {{formula:d4ebe482-d928-409f-93eb-1f5947e7411c}} is a homeomorphism of {{formula:a82ba7ba-d978-43c6-a514-4e62802477ad}} onto {{formula:8c11c27a-6d2c-4828-9a38-fcf65b9fe9e6}} . When {{formula:a20a32e1-5329-4c49-9cfb-8c9316419c29}} and {{formula:ed57fd46-cfa2-4735-9b29-1d199c78b525}} are two compactifications of {{formula:6c2e3443-2b1e-4297-8cf8-ce793640cb2e}} , we write {{formula:81037d9c-1165-4a48-9ebf-4f54e79646a3}} and say that {{formula:972de3c9-fd34-430a-9f5f-0350289e30d9}} is a factor of {{formula:d58c7537-bfa7-4a71-be63-2054adb96659}} if there is a continuous (necessarily surjective) map {{formula:f4cfe9b6-c505-4f4a-8dd4-a2f70a183998}} such that {{formula:7bbb74c1-2c93-4332-8d23-e1584572cfd0}} ; when {{formula:577096fd-f1fe-4d09-aeca-ce4e71fa7da2}} is a homeomorphism, we call {{formula:1d6247f2-36b5-422d-953a-d5097e3a26c5}} a compactification isomorphism, write {{formula:79f63952-c466-4c5b-b3a8-792cad8f108b}} , and say that the two compactifications are equivalent. The notation {{formula:30692a8c-74b4-4916-bbd2-51c8feb32d7a}} is used to indicate that {{formula:ffe55114-3de6-41e2-99d4-0c5157ed4593}} is a non-equivalent factor of {{formula:edae9751-a4d6-4f6c-9df9-e45f39938f1c}} . We let {{formula:7b219630-d0ea-4f09-b577-716fe0374ed8}} denote the set (up to equivalence) of all compactifications of {{formula:21af110c-2896-4ac7-8b7f-cb0c68012751}} , and denote the subset of totally disconnected (equivalently, zero-dimensional) compactifications by {{formula:b9db1314-f2e8-4da9-bf32-8e9ab2b0ff88}} .
---
Often labels are expensive to generate, especially in fields that rely on highly trained medical professionals, such as radiologists, pathologists,
or psychologists {{cite:3fd1110255be49ba983d9338e797f55abfee271c}}, {{cite:9a1e363b63967ec80746ce3ccef752f6eb6f7337}}, {{cite:9f6c5f62b1a85fc453fc560d0dfdd868ccd72d87}}, {{cite:007063b5a8a2d70aec7dd5608e58cdf90c0f42fb}}.
Examples include the labelling of histopathological
images, necessary for training a deep learning model for use
in clinical procedures {{cite:3fd1110255be49ba983d9338e797f55abfee271c}}. Therefore, stimulated by the success of deep-learning-based models {{cite:167fe97f6b9d3e39972dff2257f9d8ee6fc837c3}}, there is increasing interest in methods that can cope with scarce labelled data when training deep learning architectures.
---
Recent methods including SLAMP {{cite:15217ecd11a2391b101b9c8b55b11c53c06c15c1}}, Improved-VRNN {{cite:44da6b052b58e4760cd5d6e3e9363225e1704215}}, and our models clearly outperform SVG {{cite:6331e3a6b786e30edc3160498753cace5a385d92}} and SRVP {{cite:8d183dddd91448cbda3d38090a311ac079209e24}} in terms of all three metrics. Improved-VRNN {{cite:44da6b052b58e4760cd5d6e3e9363225e1704215}} achieves this by using six times more parameters than our models (57M vs. 308M). (The results of Improved-VRNN on Cityscapes differ from those in the original paper because we retrained their model at a resolution similar to ours with the same number of conditioning frames as ours.) A recent study shows the importance of attacking the under-fitting issue in video prediction {{cite:34b2bae589b0579f8d15868332969912ff6312f9}}. The results can be improved by over-parameterizing the model and using data augmentation to prevent over-fitting. This finding
is complementary to other approaches including ours; however, it comes at a cost in terms of run-time. The time required to generate 10 samples is 40 seconds for Improved-VRNN compared to 1 second, or significantly less in the case of SRVP, for other methods. See the Appendix for a comparison of all the methods in terms of run-time and number of parameters. The performance of Improved-VRNN is impressive, especially in terms of LPIPS, but it is costly in terms of both memory and run-time, and therefore not applicable to autonomous driving.
---
Images from modalities which capture internal body components, such as X-ray, computed tomography (CT), or magnetic resonance imaging (MRI), are often viewed and used by model developers qualitatively, much like natural images. However, many medical image types should be used as quantitative maps rather than just qualitative anatomical images {{cite:04b60614ae9f80e5cd75ee384e1e1b8ec69d5547}}. Internal imaging modalities rely on specific biophysical interactions to generate images in which each pixel corresponds to a specific biological, physical, and physiological property {{cite:a794a76c5c015c203039008ce00ebe295a2ca78d}}. This is especially relevant to image reconstruction learning tasks, which include generative self-supervised learning and cycle generative adversarial networks (GANs) {{cite:3a0bddaa54df8ea3fe15d4f32f68cca3d21c542f}}. Often, image quality metrics such as peak signal-to-noise ratio (PSNR) or the structural similarity index (SSIM) {{cite:7fcfa15315555940745c6b1b1232fefbb2f5ade5}} are reported for reconstructions of medical images, yet their accuracy with respect to quantitative imaging measurements or the underlying biology is underexplored. In fact, this lack of quantitative accuracy has limited the adoption of deep-learning-generated images into clinical practice, since such images often fail to validate quantitatively on commercial clinical systems and software.
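For reference, a minimal NumPy implementation of PSNR is sketched below; the data-range convention is one common choice and an assumption here, which underlines the point that such metrics capture pixel agreement rather than biophysical accuracy.

```python
import numpy as np

def psnr(reference, reconstruction, data_range=None):
    """Peak signal-to-noise ratio in dB. Higher is better, but note that it
    measures pixel-level agreement, not quantitative (biophysical) accuracy."""
    ref = reference.astype(float)
    mse = np.mean((ref - reconstruction.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                      # identical images
    if data_range is None:
        data_range = ref.max() - ref.min()       # one common convention
    return float(10.0 * np.log10(data_range ** 2 / mse))
```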
---
The yield ratios for multi-strange baryons, {{formula:41a83f24-bb02-449b-b5b9-2641e3a23713}} and {{formula:13cd4784-a4b2-4b16-836c-4a366eec7ccd}} , are displayed in Fig. REF for both scenarios A and B. The solid points in both panels represent the data from {{cite:91ba5a4a775a439f9f4b2a66b93de42d5d6f98f2}}, {{cite:c80749c0db520eb38c92f5a17782afc29c5832d8}}, while the solid curves are our results with the same initial conditions considered to explain {{formula:a079c2c5-ccba-46ec-b004-c97d4410d3cd}} and {{formula:38dbad4d-ccfc-470a-92d6-c9e44a1261ba}} in Fig. REF . Scenario A with {{formula:205675e9-ee1d-432c-87be-2925445d9433}} explains the data points nicely except for the most peripheral one (centrality 60-80%). In the case of {{formula:de46189a-f003-4c0f-b5ab-82475496cd00}} the datum is not available for 60-80% centrality, but scenario A would be expected to over-predict there as well. However, a lower initial number density ({{formula:529521a5-f76c-42b9-aa22-076187fe33c4}} ) is observed to explain the {{formula:70b936e2-3530-4b75-add5-72bd61b3f189}} yield ratio at the most peripheral collision, as is also expected from scenario B. Here too, we observe a steep rise in the yield ratio when moving from (60-80)% centrality to (20-40)% centrality. For both scenarios A and B, the {{formula:1271e488-edd6-4ff8-8525-982a9f746bcc}} are kept the same as mentioned in Table REF . One can explore other scenarios with different initial conditions. We carried out a similar analysis when explaining the LHC data {{cite:1726e92aeb0c15a6741d95f88c7a9bdf97496e64}} at a different collision energy. From the analyses at both colliding energies we find the same trend, i.e., lower initial densities better explain the most peripheral collisions. This is expected, as a system with smaller energy density might be produced in the most peripheral collisions, which in turn produces lower particle densities.
{{table:f9aef5d1-957e-4f3b-9e2c-978bc70aff6d}}{{table:475eaa44-fe17-4e2f-a6b7-19d9ca0d06d3}}
---
BP-based attribution approaches are built on the straightforward view that gradients (of the output with respect to the input) can highlight key components in the input, since they characterize how much variation a tiny change in the input would trigger in the output. Baehrens et al. {{cite:90cb5209a426b3f0f2d6fd6f9dc02465f2a7c2be}} and Simonyan et al. {{cite:f5cc602d339148d7ad6b9126daffdb775f67274b}} have shown the correlation between pixels' importance and their gradients for a target label. However, the attribution maps generated by raw gradients are typically visually noisy. The ways to overcome this problem can be partitioned into three branches. DeConvNets {{cite:cdb8dbc3e8ab3cdce5c9ebf2d06e245b52904056}} and Guided BP {{cite:673255f1278232e21b61d511cef88689898ae35b}} modify the gradient of the ReLU function by discarding negative values during the back-propagation calculation. Integrated Gradient {{cite:7f8c21d54197da6ee701fe3fdc1f9ac51ac1feb7}} and SmoothGrad {{cite:97ab10c31b6ffdd8178b2d881144bc6ddb32e7b7}} resist noise by accumulating gradients. LRP {{cite:437d91cb4a99c2a1e23afabf31708ab8e0d74e4c}}, DeepLift {{cite:a123adc72b03adc1857b6e88ed86d7a20e97fc98}} and Excitation BP {{cite:247124bf19339384bb641d59cdf12da2b287f323}} employ modified backpropagation rules by leveraging a local approximation or a probabilistic Winner-Take-All process. SmoothGrad-Squared {{cite:1c2403e311863e28a3f3530b5792f58c57d44ccd}} improves on SmoothGrad by adding a square operation. XRAI {{cite:c54c81a18dae7bfe4d29c31c5e3f23a95ecbc6ac}} and Blur Integrated Gradient {{cite:e2c196a22308c362a4381fe94157263946475f9d}} improve on Integrated Gradient by incorporating region-based attribution and a blurred input baseline, respectively. BP-based methods are often computationally efficient because they need only one forward and one backward pass to obtain attribution maps for the inputs. However, compared with other types of attribution methods, the attribution maps generated by BP-based methods tend to be noisier and sparser, which prevents the contributive regions from being clearly highlighted.
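To illustrate the gradient-accumulation idea behind SmoothGrad, here is a minimal framework-agnostic NumPy sketch; the `grad_fn` callable, sample count, and noise scale are illustrative assumptions, not the cited implementations.

```python
import numpy as np

def smoothgrad_saliency(grad_fn, x, n=25, sigma=0.1, seed=0):
    """SmoothGrad-style attribution: average input gradients over noisy copies.

    grad_fn : callable returning d(target score)/d(input), e.g. supplied by
    any autodiff framework; n and sigma are illustrative defaults."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x, dtype=float)
    for _ in range(n):
        g += grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
    return np.abs(g / n)          # |averaged gradient| as the saliency map

# toy check with an analytically known gradient: score(x) = sum(x**2)
x = np.linspace(-1.0, 1.0, 5)
print(smoothgrad_saliency(lambda v: 2.0 * v, x))
```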
---
In this paper, we study an important classical quasi-Newton method, SR1 (symmetric rank-one), for smooth unconstrained optimization.
Similar to other quasi-Newton methods (e.g., DFP and BFGS), SR1 replaces the exact Hessian in Newton's method with an approximation whose update involves only the gradients of the objective function.
Because they use only gradients, quasi-Newton methods typically achieve much lower computational cost than the exact Newton method.
A detailed introduction to quasi-Newton methods such as SR1, DFP, and BFGS can be found in Chapter 6 of {{cite:b57bc40e4c2e7e07d621c624bdbf58a79f19ffb8}}.
Randomized quasi-Newton methods can be found in {{cite:24b45f8f2dce5c5f15fdb21e0c2cb38e42747967}}, {{cite:61be624051a892068e5baa3ace0818b0625fc00d}}, {{cite:ef2fdf8d94151540197f18ed79ce130a1db04d3b}}, {{cite:bef3ccfce5773959cd145c929bbe99baf032072a}}, {{cite:5742bfdb47e457ff4fd82f0bfa370cd4b8541c4a}}.
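For concreteness, a minimal NumPy sketch of the SR1 Hessian-approximation update follows; the skipping safeguard threshold is a standard choice assumed here for illustration.

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """Symmetric rank-1 update of a Hessian approximation B:
    B_next = B + r r^T / (r^T s) with r = y - B s, where s = x_{k+1} - x_k and
    y = grad f(x_{k+1}) - grad f(x_k). The update is skipped when the
    denominator is tiny (the standard safeguard against instability)."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B                  # skip the numerically unstable update
    return B + np.outer(r, r) / denom
```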
---
We applied inference triage to the data sets mentioned in section REF . For the classification tasks we used XGBoost {{cite:5d3aaf559ce677f56aa6571ff3e5bf969f51524d}} as babybear and Universal Sentence Encoder {{cite:4c64db8c512e62e569ebd7b2d66766ecb0e368f2}} as its embedding method. The code was run on a GPU instance of g4dn.2xlarge on Amazon Web Services (AWS).
---
To answer these questions, we relied on two models: an E/I spiking neuronal network in an all-to-all graph; and a probabilistic excitable cellular automaton in a random graph.
Despite the simplicity and limitations of these models (which we discuss below), they have a fundamental strength that led us to choose them: they are very well understood analytically.
In both cases, mean-field calculations agree extremely well with simulations, so that we are safe in locating the critical points of these models {{cite:d05fcc382025695f0fcb623be4915a2f9e584141}}, {{cite:d899ed32542f256d927c28f4f2f6e869d6b723ab}}.
This is very important for our purposes, because it allows us to test whether the models can reproduce the data and, if so, how close to the critical point they have to be.
Besides, their universality class is also well determined:
the exponents shown in Figures REF B, REF C and REF D are those of mean-field directed percolation (MF-DP).
---
Practical implementations of {{formula:f1ecaeb0-5b31-4ef0-9a8b-60e45f725cad}} objectives.
After showing the power of {{formula:916096bf-3d6c-4658-9824-e5971ff89f8f}} ,
we introduce the practical implementations of {{formula:d915b5ef-c712-4e7e-bc0d-dc5dfd5fa0e8}} v1 and {{formula:0d8bccd4-8087-4d24-a787-5a62828a75df}} v2 objectives.
The approximation of the first term {{formula:1dfabf52-2f42-4801-8a50-a492e5bdddcf}} can be achieved via the variational bound, as in Eq. REF .
While the exact estimation of the second term {{formula:f2bd8016-4eab-4566-a415-4b5400781a2a}} can be expensive,
contrastive learning provides a practical solution for its approximation {{cite:13a039643e908d7ca97f88e4eb257a96a2e3e741}}, {{cite:9a8c0d484d792053e7db40da928da99b9d84e960}}, {{cite:48f5995dc637c1bdd4001b9bb434952d937162a5}}, {{cite:90063bc96a9b7eaa05561a27e08ba36ab32f7843}}:
{{formula:2474969e-60bc-4d98-a7d5-5b68e3790682}}
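As an illustration of such a contrastive approximation, here is a minimal NumPy sketch of an InfoNCE-style loss; the cosine-similarity scoring and temperature value are illustrative assumptions, not the specific critic used in the cited works.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss: row i of z1 and z2 form a positive
    pair, all other rows act as negatives. Cosine similarity and the
    temperature value are illustrative choices."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature             # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))   # minimizing tightens the bound
```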
---
In this work, we introduce the Stein variational policy gradient (SVPG) method, a new policy optimization method that leverages a recent Stein variational gradient descent method {{cite:d3a7374997df721d5bda7a9cb6666361c6d1c82d}} to allow simultaneous exploitation and exploration of multiple policies. Unlike traditional policy optimization which attempts to learn a single policy, we model a distribution of policy parameters, where samples from this distribution will represent strong policies. We first introduce a framework that optimizes this distribution of policy parameters with (relative) entropy regularization. The (relative) entropy term explicitly encourages exploration in the parameter space while also optimizing the expected utility of polices drawn from this distribution. We show that this framework can be reduced to a Bayesian inference problem in which we generate samples of policy parameters from a posterior. We then use Stein variational gradient descent (SVGD) to optimize this distribution. SVGD leverages efficient deterministic dynamics to transport a set of particles to approximate given target posterior distributions. It combines the advantages of MCMC, which does not confine the approximation within a parametric family, and variational inference, which converges fast due to deterministic updates that utilize the gradient. Specifically, in SVPG a policy gradient estimator is applied to improve policy parameter particles while a repulsive functional is used to diversify these particles to enable parameter exploration.
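To make the SVGD transport step concrete, the following minimal NumPy sketch implements one update with an RBF kernel and the median bandwidth heuristic; the `grad_log_p` callable (in SVPG, the policy-gradient term plus any prior), the step size, and all names are illustrative assumptions.

```python
import numpy as np

def svgd_step(particles, grad_log_p, stepsize=0.1):
    """One SVGD update with an RBF kernel and the median bandwidth heuristic.

    particles : (m, d) policy-parameter particles; grad_log_p : callable
    returning the (m, d) gradients of the log-posterior at the particles."""
    m = particles.shape[0]
    diff = particles[:, None, :] - particles[None, :, :]   # x_i - x_j
    sq = (diff ** 2).sum(axis=-1)                           # squared distances
    h = np.median(sq) / np.log(m + 1) + 1e-8                # median heuristic
    K = np.exp(-sq / h)                                     # kernel matrix
    drive = K @ grad_log_p(particles)                       # smoothed gradient
    repulse = (2.0 / h) * (K[:, :, None] * diff).sum(axis=1)  # diversity term
    return particles + stepsize * (drive + repulse) / m
```

The `repulse` term is what diversifies the particles for parameter exploration, while `drive` pushes each particle toward high-posterior regions.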
---
We apply the fast gradient sign method {{cite:8895b1a86cfd66ca8311f5a7a817e6553ce1be52}} to generate the noise. The results are given in Table REF , where {{formula:ea8961fe-d2a4-4108-b5d6-9fc56bbdc338}} denotes the noise level. We observe that our MTNPs show better stability than NPs at different noise levels.
{{figure:d5d4dbda-fa42-4745-87fb-46cfa8f6e89e}}{{table:83d9f4b6-2a01-4181-90f4-6dc681904500}}
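For reference, the FGSM perturbation itself is a one-liner; the following NumPy sketch assumes the loss gradient with respect to the input is supplied by an autodiff framework, and the [0, 1] clipping range is an illustrative assumption.

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps=0.03):
    """Fast gradient sign method: x_adv = clip(x + eps * sign(dL/dx)).

    grad_x is the loss gradient w.r.t. the input (from an autodiff framework);
    eps is the noise level, and the [0, 1] clipping range is an assumption."""
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```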
---
In this paper, we have studied the impact of MGB gravity on the dynamics of an evolving cavity in a cluster of stars. Evolving star clusters are significant for the study of voids as well as galactic filaments. Voids are regions of low density in the large-scale matter distribution of the universe {{cite:b9d68629b9435d9fe11db9cf5b074008c4e88b53}}- {{cite:ed2784af404dc5f3536090e9f28973c89b8ec16b}}. Moreover, voids of different scales have been found in our universe {{cite:926046b53dd2ab00d3239430b91159bce9aa3418}}, {{cite:24522563e062c0136c49e42298fc350a50089ceb}}. It should also be kept in mind that voids are neither spherical nor empty, either in deep redshift surveys or in simulations. Nevertheless, for the sake of simplicity they are usually described as vacuum spherical cavities surrounded by a fluid. As a matter of fact, voids are the successors of the cavity model described by the purely areal evolution condition. We have studied the outcomes arising from this condition, and it has been shown that this type of condition is particularly appropriate for the study of cavity evolution in a cluster of stars.
---
Our calculations are based on density functional theory (DFT) within the generalized gradient approximation (GGA), in the form of the Perdew-Burke-Ernzerhof exchange-correlation functional {{cite:392cab9c966528bd50273361c6f5b9358a05c772}}. All the calculations are performed using the Vienna Ab-initio Simulation Package (VASP) {{cite:2b07b4c2ccfba5be9bc903a00eed96547363ca40}}. Periodic boundary conditions are employed and vacuum slabs of 10 Å are used to isolate the replicas of OPG layers. Geometrical optimizations are performed until the Hellmann-Feynman forces on the ions are less than {{formula:a61eef81-0079-4532-b1a7-bd3c38b07077}} eV/Å. The plane-wave basis is used, with a cut-off of 700 eV that converges the total energy to 1 meV/atom. The Brillouin zone is sampled using a {{formula:03029c01-7990-49ce-8809-a28a34611cbe}} Monkhorst-Pack k-point scheme {{cite:e3dc678021c3607b74650a6cef1336a251c17258}}. The phonon spectra are calculated using the finite-displacement method in a {{formula:726b4e46-de1b-4082-af22-1de9b72272ff}} supercell {{cite:a1effbc49224b10acbf3f6c47b7fd76acc7ac4e8}}, {{cite:f9c954c1967fe7ffab71853821e1ed66a4baf222}}.
{{figure:f7c076a0-a32f-4237-af2e-1d90e2c0fda7}}
---
In GR, according to the no-hair theorem, an isolated and stationary BH is completely characterized by only three quantities, mass, angular momentum and electric charge. Astrophysically, we expect BHs to be neutral, so they are uniquely described by the Kerr solution. Then, the QNM frequencies and damping times will depend only on the mass and angular momentum of the finally formed BH. Clearly, to extract physics from the ringdown phase, at least two QNMs
are needed. This will require the signal-to-noise ratio (SNR) to be of the order 100 {{cite:cddcb6432663842f491ff42ce2beed79794d0fdd}}. Although such high SNRs are not achievable right now, it has been shown that they may be achievable once the advanced LIGO, Virgo and KAGRA reach their fully designed sensitivities. In any case, it is certain that they will be detected by the ground-based third-generation detectors, such as Cosmic Explorer {{cite:5ff36a7d07242e761ae2d42eba9e3123427336b0}} and the Einstein Telescope {{cite:84301303dd9097041ec45aebff5cbae6dac355b6}}, as well as the space-based detectors, including LISA {{cite:e866253fd249a66e417612aeb90d6635465f562f}}, TianQin {{cite:d217ca977ea9a3c9892f2b964ca19189aacb87fa}}, Taiji {{cite:8b6f3c302e271b4dd0287bb478577a40c967b307}}, and DECIGO {{cite:bc8f3361de1666cb8a46ba3b736b140ffcf29ed2}}.
---
Another recent approach to uncertainty estimation relies on replacing the conventional cross-entropy function with focal loss {{cite:4334bcd64acd46d624ace41ad238af92c4c8258e}}. The idea behind focal loss is to direct the network's attention to samples for which it is currently predicting a low probability for the correct class. Seo et al. {{cite:eece17ebf96ff00b54f26d06ed50661c5af6e0e5}} augment the typical cross-entropy loss between predictions and ground truth with the cross-entropy between the predictions and the uniform distribution, forcing the network to produce as uniform a distribution as possible. Kumar et al. {{cite:a0bdc761e767c63ac78cb4e21459cf3f09b11b53}} design a kernel-based function which can be optimized during training and serves as a surrogate for the calibration error.
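A minimal NumPy sketch of the focal loss is given below; the focusing parameter gamma = 2 is a common choice assumed here for illustration.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, eps=1e-12):
    """Focal loss FL(p_t) = -(1 - p_t)^gamma * log(p_t), averaged over a batch.

    probs : (batch, classes) predicted probabilities; targets : (batch,) int
    labels. The modulating factor (1 - p_t)^gamma down-weights samples the
    network already classifies confidently; gamma = 2 is a common choice."""
    p_t = probs[np.arange(len(targets)), targets]
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))
```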
---
In this section, we discuss the methodology to evaluate the error detection performance (Definition REF ) of the proposed self-explainable deep learning (SE-DL) system (Subsection REF ) and compare it with the existing ensemble technique {{cite:20226cee0c0e3c2cafb99dc581f87fad047caa98}}, {{cite:af889a27f18faa9e7d3d2b4a575dd64281a76363}}, {{cite:5fdd3350dbe79f30dae11c5dce4443053bd8a7cc}}. Additionally, we discuss the methodology to evaluate the error detection performance of the proposed SE-DL system when it is integrated into an ensemble error detection scheme.
---
These reactions have not been studied, but one could reasonably expect them to be fast at low temperatures. {{cite:9086c45cd3d7a318adbaa9151ca0a2bd9d2461e0}} assume that reaction (REF ) is rapid and find that it is the main formation route to HCOCCH in cold interstellar clouds. A second formation route can be provided by
{{formula:ac7c5c5c-49eb-4f72-9135-47f71f975d09}}
---
For AU intensity estimation on DISFA, results are reported in Table REF with 3-fold cross-validation.
AU detection.
Observing Table REF , MAE-Face achieves an average F1 score of 67.4% on BP4D, outperforming all previous works. In Table REF and Table REF , MAE-Face, with average F1 scores of 64.8% on BP4D+ and 70.8% on DISFA, also clearly outperforms all previous works. It is worth noting that MAE-Face exceeds the previous best F1 scores on BP4D, BP4D+, and DISFA by 2.5%, 3.3%, and 2.6%, respectively.
These results strongly demonstrate the effectiveness and generality of the proposed method.
AU intensity estimation (Table REF ).
The results are evaluated with the three metrics discussed in Section REF : ICC, mean squared error (MSE), and mean absolute error (MAE). Note that not all previous works report results on both BP4D and DISFA with all three metrics.
In terms of mean squared error and mean absolute error, MAE-Face outperforms all the previous works, proving its superiority.
If measured with ICC, our proposed MAE-Face achieves 0.740 on BP4D, matching the previous best result of APs {{cite:d55a4a36a4fbd26129c2cbcc9a8e228bef6f175d}}. MAE-Face also achieves a state-of-the-art 0.674 on DISFA, which exceeds the previous best result (0.598) by quite a large margin.
These results suggest that a more difficult dataset, such as DISFA, benefits more from the MAE-Face pre-training, which implies the potential of the proposed model for more challenging tasks.
Furthermore, the ICC of MAE-Face on each AU is rather close to the corresponding previous best from various methods. This indicates that the large improvement in average ICC comes from MAE-Face having almost no weak spot on any AU and no bias toward particular AUs.
It also shows that MAE-Face is insensitive to the imbalanced label distribution in the training set and can overcome the overfitting caused by data scarcity.
{{figure:7309cbaf-a84b-4ba6-8204-8264a43449ae}}
Fig. REF compares the predicted curve of MAE-Face with the ground truth for AU12 intensity. This example comes from a video of subject F002 in the BP4D test partition. We observe that the predicted curve sticks closely to the ground-truth values, which directly illustrates the high accuracy of our method.
Fine-tuning with Partial Dataset
{{figure:b3d627a9-cb97-4d53-aa6e-4bad24731c57}}
To further probe the generalization performance of MAE-Face, we fine-tune it on subsets of the training data to observe whether it still performs well with a small amount of data, investigating its potential for few-shot learning.
We take a subset of the training set for training, keeping the test set unchanged for evaluation. The subset is built by taking one frame out of every N frames. We fine-tune MAE-Face using 10%, 1%, 0.5%, 0.2% and 0.1% of the training set, for 200, 2000, 4000, 10000 and 20000 epochs, respectively.
The results are shown in Fig. REF , which also includes the results reported by KS {{cite:15e5474493dde3b8cceb9c8bd856e35ca9a57f37}} and FRL {{cite:25ded2d985a5a1a63ec30b18875dd3cc7ceff821}} for comparison.
Interestingly, the performance at 10% is almost the same as at 100% for most of the works. This is due to the temporal redundancy in video-based AU datasets: one frame out of every 10 still retains most of the information.
Comparing 1% to 10% in terms of performance degradation on DISFA, AU detection degrades by 8.6% for KS versus 4.3% for MAE-Face, and AU intensity estimation by 0.021 for FRL versus 0.008 for MAE-Face.
Moreover, it is notable that the performance of MAE-Face at 0.5% is still better than that of KS and FRL at 1%.
We only observe significant performance degradation on MAE-Face when less than 0.5% of the training set is used.
These results strongly prove the robustness of our proposed MAE-Face on small and sparse datasets, showing its potential to be a few-shot learner for action units and to solve real problems with very limited datasets.
Ablation Studies
Pre-training model
{{table:2e60f3ef-aeb8-4370-8220-7e014f5abdfb}}
MAE-Face benefits greatly from the proposed facial representation pre-training method. We conduct further experiments to confirm
whether MAE-Face benefits from the two-stage pre-training framework, and
whether MAE-Face benefits from pre-training on face images instead of general images.
Table REF shows the results of the one-stage training (train from scratch), the pre-training using general images (MAE-IN1k), and the pre-training using face images (the proposed MAE-Face).
train from scratch refers to training from randomly initialized weights directly on the AU datasets, where training takes 200 epochs with 20 warmup epochs.
MAE-IN1k refers to fine-tuning from the MAE {{cite:3c75602780cd16cf53681f37bb10d88ed340fd75}} pre-trained on ImageNet-1k {{cite:6553911524eae728a2d57c12777d3305099d797c}}, using the same fine-tuning settings as MAE-Face.
MAE-Face refers to fine-tuning from the model pre-trained on the proposed pre-training dataset.
All the above models use the same ViT-Base backbone and the same regularization tricks.
The major difference is that they are initialized with different weights for the training of AU-related tasks.
As the results show in Table REF , train from scratch has the worst performance when compared to the others.
By investigating its training progress, we find that its convergence is slow, yet it still overfits the training set easily.
MAE-IN1k improves the performance significantly in comparison to train from scratch.
MAE-Face further improves the results by a large margin compared to MAE-IN1k and train from scratch.
This result shows that a good initialization is key for an AU analysis model to converge and generalize, and our facial representation model MAE-Face serves as a strong one.
These results confirm that the gains come from the MAE-Face pre-training method itself, rather than from its backbone or the regularization tricks.
Pre-training loss function
{{figure:b08b42bd-35dd-4899-8c8d-f498f57158cc}}{{table:18474053-1239-4427-b08d-476959ad1d47}}
Our proposed MAE-Face uses L1 loss instead of L2 loss to regress the reconstructed pixels.
In Table REF , we test the impact of different loss functions for pre-training.
We also test the impact of patch-wise normalization (w/ norm), which has also been studied in {{cite:3c75602780cd16cf53681f37bb10d88ed340fd75}}.
The experimental results confirm the effectiveness of both L1 loss and patch-wise normalization.
Specifically, we present some reconstructed samples of L1 loss or L2 loss in Fig. REF .
We observe that the facial expression reconstructed by L1 loss is more accurate than L2 loss, especially in the mouth part.
It proves that the MAE-Face trained with L1 loss learns a better facial expression representation.
As illustrated by {{cite:ed6e40f475c387c6938b918c4483b196d0805a9b}}, L2 loss tends to over-penalize large errors while under-penalizing small errors, usually performing worse than L1 loss in image restoration. Masked autoencoding is another image restoration problem, which is also expected to benefit from L1 loss. Our experiments confirm this expectation.
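For illustration, the following minimal NumPy sketch shows an L1 reconstruction loss computed on patch-wise normalized targets (the w/ norm setting); it is a simplified stand-in for the training code, and the array shapes and eps constant are assumptions.

```python
import numpy as np

def patchwise_l1_loss(pred, target, eps=1e-6):
    """L1 reconstruction loss on patch-wise normalized targets (w/ norm).

    pred, target : (n_patches, patch_dim) pixel arrays for the masked patches.
    Each target patch is normalized by its own mean and std before the loss;
    replacing abs() with a squared error would give the L2 variant."""
    mu = target.mean(axis=1, keepdims=True)
    sd = target.std(axis=1, keepdims=True)
    return float(np.mean(np.abs(pred - (target - mu) / (sd + eps))))
```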
Discussion and Conclusion
This work describes a self-supervised pre-training framework specifically for face images, achieving strong results for AU detection and AU intensity estimation on the available datasets.
The pre-trained model, named MAE-Face, benefits from the masked autoencoding paradigm to learn a robust facial representation.
Compared with previous works, MAE-Face reaches new state-of-the-art results on nearly all evaluated benchmarks.
Furthermore, when fine-tuned on a subset of the training set, MAE-Face still exhibits very good results.
The performance degradation is subtle even when fine-tuned on 1% of the training set, which shows its robustness in generalizing from limited and biased data.
Since the AU annotation requires domain expertise, MAE-Face may greatly ease the efforts required for building an AU dataset in the future.
Besides, we carry out ablations to survey the performance of MAE-Face on different setups.
We verify that MAE-Face benefits from both masked autoencoding and the specific pre-training for facial representations.
We also verify that MAE-Face pre-training benefits from both L1 loss function and patch-wise normalization to boost its performance for downstream tasks.
We have also found some limitations of MAE-Face:
It still relies on face alignment for optimal performance, which may add computational burden at deployment.
It benefits from a carefully tuned learning rate for optimal performance on each dataset.
Nevertheless, even without face alignment and learning-rate search, MAE-Face already surpasses the previous bests on most of the tasks.
To this end, we conclude that our proposed MAE-Face is a strong learner for AU detection and AU intensity estimation.
However, since MAE-Face is pre-trained on face images without any design specific to action units, its potential likely extends well beyond them.
For future work, it is therefore interesting to investigate
whether it can adapt to other kinds of face-related tasks, and
whether it can work well on backbones much smaller than ViT-Base (e.g., through distillation).
If so, it has the potential to become a universal facial representation model for a wide range of face-related tasks, as well as a lightweight model for easy deployment.
Acknowledgments
This research is supported by the Key Research and Development Program of Zhejiang Province (No. 2022C01011) and the 2022 Key Artificial Intelligence Science and Technology Innovation Project of Hangzhou Science and Technology Office.
---
These equations appear in the book by Nelson {{cite:447cb928a26f83604295272ba4ec7cad7195f06b}} although with defect densities written in terms of Dirac measures (which are used to represent isolated singularities), with {{formula:4a197b20-759d-4e96-855d-0d414ac7ba5b}} denoting a density of point defects. The other scenario, where results are previously known, is when only metric anomalies are considered (no dislocations and disclinations). This case was discussed recently by the authors {{cite:a287ca93a40f3f1268087b55804fdc637f14308f}} who connected the resulting formalism with the existing work on morphology of growing thin shells {{cite:7a737cb93644fb09cda680b27d92eaee5df3e9c1}} and thermal deformations for Föppl-von Kármán shells. Here, we have {{formula:d231c5fd-60fc-4f75-999a-f82e401fbb7a}} and
{{formula:ac120369-2034-4977-9f94-1d3906ff1f80}} , where, in the context of growth, {{formula:2d2b9121-a407-445f-bb61-2a3717970bf5}} and {{formula:bc418006-dc69-485a-9f2d-c94d2c0d7004}} are to be interpreted as extensional and bending growth strains {{cite:a287ca93a40f3f1268087b55804fdc637f14308f}}.
|
r
|
1fbe80521593c51ba43816db01ebf6af
|
Table REF shows the performance of MetricGAN+/- relative to the MetricGAN+ baseline and the unprocessed noisy audio on the VoiceBank-DEMAND testset. We also compare performance with a second baseline system, SEGAN {{cite:21ce9398a8a096d7e19ca6dd6102fbc4f6922674}}, a state-of-the-art speech enhancement system. For more baseline comparisons the interested reader is referred to Table 3 in {{cite:55b64844f3c8dd3826d8078eb7dc35758aff5a17}}, which shows that MetricGAN+ with a PESQ objective outperforms all listed systems in terms of PESQ score. We assess this performance using PESQ and STOI and also using the Composite {{cite:ec48d6a8f481b3eb3c1edc9cf5c76e0a7a86e6b4}} Measure, where Csig, Cbak and Covl are intrusive measures of speech signal quality, background noise reduction quality, and overall quality respectively.
{{table:cc389f4c-2ffb-44a9-a8d4-7948df97e892}}
|
r
|
7639a95c06b42252448c04260dd75c81
|
Proposition 2 (Extension and restriction operators on Besov spaces {{cite:7ce66d4d6ce7653f197d0070709a065b6c51fba0}})
Let {{formula:930e0987-dfa5-4a45-8753-dfcd4d066b41}} be an open set with Lipschitz boundary; then the Besov space {{formula:b1134a53-a5d2-40fd-8643-255167fd4b92}} is defined as
{{formula:c4983b03-52d8-4880-99bd-88172d9d8ce2}}
|
r
|
474578cc6a7ab222fec86146d4611e89
|
The generation-based methods aim to reconstruct the input data, using the input data itself as the supervision signal. The origin of this category of methods can be traced back to the Autoencoder {{cite:4029f08a3f70ac44febc4d6ce82c2b4d1f68a60c}}, which learns to compress data vectors into low-dimensional representations with an encoder network and then to rebuild the input vectors with a decoder network. Different from generic input data represented in vector form, graph data are interconnected. As a result, generation-based graph SSL approaches often take the full graph or a subgraph as the model input and reconstruct one of its components, i.e., the features or the structure, individually.
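As a concrete illustration of the feature-generation idea, below is a minimal sketch (an assumed PyTorch implementation; the module, the masking scheme, and `adj_norm`, a normalized adjacency matrix, are illustrative rather than taken from any cited method):

```python
import torch
import torch.nn as nn

class FeatureGenerationSSL(nn.Module):
    """Mask node features and reconstruct them from neighborhood information."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x, adj_norm, mask):
        # x: (N, in_dim) node features; mask: (N,), 1 = node whose features are hidden.
        x_masked = x * (1 - mask.unsqueeze(-1))            # drop masked features
        h = torch.relu(adj_norm @ self.encoder(x_masked))  # one-hop propagation
        x_rec = self.decoder(h)
        # Feature-generation objective: reconstruct only the masked nodes.
        return ((x_rec - x) ** 2).mean(dim=-1).mul(mask).sum() / mask.sum()
```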
According to the objects of reconstruction, we divide these works into two sub-categories: (1) feature generation that learns to reconstruct the feature information of graphs, and (2) structure generation that learns to reconstruct the topological structure information of graphs. The pipelines of two example methods are given in Fig. REF , and a summary of the generation-based works is illustrated in Table REF .
{{table:82f76343-99ad-47a9-a9df-ad6e906c2271}}
|
m
|
a34ce7077ecbbfe1fe9dfd06283a2185
|
Many of the ingredients in the approach we have taken in this article also appear in a series of articles by Haggard, Han, Kamiński and Riello on {{formula:5e358ece-ac47-43a7-874f-db296613b502}} Chern-Simons theory and four-dimensional quantum gravity with a cosmological constant {{cite:bc37bf20743a8065dd5397eae975e793cb628738}}, {{cite:38817d4396daf23cedd3269a23f277a35f74130e}}, {{cite:737aacec2dba583cc2a08d1784450e27288234f5}}, and it would be interesting to develop those connections further. Finally, any observer in a universe with a positive cosmological constant is limited by their cosmological horizon. It would be interesting to consider how the inner product we have introduced here might affect the analysis of local sub-systems as in {{cite:c9490be374b13a6a2f8dc38f1f3e113f87740bbe}} and entanglement entropy {{cite:27a1d6e7ea26f7eadc84b4ce4d31b8a8f082412e}} for the Kodama state.
|
d
|
0136db41d83d31080e2b9a83a6d252b8
|
However, both of these methods have their weaknesses. MC-Dropout performs significantly worse than Deep Ensemble on some uncertainty estimation tasks {{cite:0cd04a7ad59bb4c3f77a50176442baf00de17608}}, {{cite:d99241a6464045b2ec94007ccd69647f34ece41b}}, {{cite:3cda526a22b211db7f464720976efda8042d10db}}. We argue that the main reason for MC-Dropout's poor performance is the high correlation between the ensemble elements, which makes the overall predictions insufficiently diverse. Moreover, dropping the weights randomly results in similar weight configurations across the sampled models and, consequently, less diverse predictions {{cite:411db789ce58b68bf33738657da0c6e3eab5e7fb}}. Deep Ensemble does not have this problem because its members are trained independently, which avoids similar weight configurations between ensemble elements. Despite its success, Deep Ensemble is limited in practice due to its expensive computational and memory costs, which increase linearly with the ensemble size in both the training and testing phases. In terms of computation, each ensemble member requires a separate neural network to forward pass its inputs. From the memory perspective, each ensemble member requires a separate copy of the neural network weights, each of which can contain up to millions (sometimes billions) of parameters {{cite:aec3e2b325242143c0eacce030beca988c338e30}}.
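A minimal sketch (assumed PyTorch code, with illustrative sample counts) contrasting the two approaches may help: MC-Dropout draws stochastic forward passes from one network with dropout kept active, while Deep Ensemble averages several independently trained networks:

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    model.train()  # keep dropout layers active at test time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)  # predictive mean and dispersion

def deep_ensemble_predict(models, x):
    with torch.no_grad():
        probs = torch.stack([m.eval()(x).softmax(-1) for m in models])
    return probs.mean(0), probs.var(0)  # M forward passes and M weight copies
```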
|
i
|
b2f6a6b775fd1b350a7219ca1fbe783f
|
These low probabilities suggest that the subhalo populations
in the central regions of the Aquarius haloes are not sufficient to
explain the observed frequency of violations of the cusp-caustic
relations. It is important to ask whether our results will change
significantly if even lower-mass subhaloes are resolved. We argue
that this is unlikely to be the case. The total subhalo lensing
cross-section is an integral of the cross-section of subhaloes of
each mass weighted by their abundance. As shown in
§, most of the perturbing subhaloes have
relatively low mass ({{formula:99618185-ed89-4444-b035-796caea49e23}} to {{formula:944f98f9-81e5-4070-84ac-4ddf9d6dc3d7}} ).
Their abundance scales as {{formula:a3a18268-90f5-4eff-9433-7ce48af526c2}} . For a galaxy (subhalo) approximated by a SIS, the
lensing cross-section roughly scales as {{formula:01d39cc5-9405-4f7f-8995-551c26b1a35f}} (e.g.
{{cite:fb18a65eccab82dc4291cd5c02f64c48701e61fe}}) where {{formula:0f6bd47f-0d8a-487b-bac7-2c3e3fbc19d8}} is the one-dimensional velocity
dispersion. For Aquarius subhaloes, {{formula:f4070acb-c999-45be-9cb6-7003c025d273}}
({{cite:9941cf070dfc2aa6d349056b5524de97b765e61f}}), where {{formula:f977475f-b035-49f8-93ca-f8635bd5b0f6}} is the maximum circular
velocity. If {{formula:00b2d531-6e29-4973-bee3-a4a563fe7151}} , then the integrated lensing cross-section will be {{formula:f23fcbb3-1511-4009-aece-ed2c7074bdc6}} . On the other hand, for a point lens or an elliptical galaxy,
the lensing cross-section is proportional to the lens mass, and the
integrated lensing cross-section would be {{formula:f3a4b85c-0545-4ed6-b7e2-5dfb770f8139}} . In all
these cases, the subhalo lensing cross-sections are biased towards
relatively massive subhaloes in the projected central region, and the
incorporation of even lower mass subhaloes should not change our results
significantly.
|
d
|
f77a1fd06b95f24698273797421a8744
|
For each convolutional filter in a layer of the network, we can synthesize its corresponding “signature” image based on Activation Maximization. The goal is then to perform K-Means clustering of those images, effectively grouping similar images together while keeping unique ones in their own dedicated groups. To reduce dimensionality and facilitate the K-Means clustering, we first feed our images to the convolutional part of an AlexNet model {{cite:b5c5af9d9ebec412e6e2aec8e92aed17ebc011a5}} pretrained on ImageNet {{cite:8cd9cde308dff3adb8f46f485351cf69dbf308b3}}, encoding each into a feature vector that serves as the input to the clustering algorithm.
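A minimal sketch of this pipeline (assumed code; `signature_images` is a hypothetical tensor of synthesized filter visualizations, and the number of clusters is illustrative) could look as follows:

```python
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

alexnet = models.alexnet(pretrained=True).eval()  # newer torchvision uses weights=...

def encode(images):
    """images: (N, 3, 224, 224), ImageNet-normalized."""
    with torch.no_grad():
        feats = alexnet.features(images)               # convolutional part only
        feats = torch.flatten(alexnet.avgpool(feats), 1)
    return feats.numpy()

features = encode(signature_images)   # hypothetical synthesized "signature" images
labels = KMeans(n_clusters=16, n_init=10).fit_predict(features)
```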
{{figure:85a43a28-8618-4b59-99ab-b39e4f553af4}}
|
m
|
7bd65b881f18c3f1f0aefdea3b69a950
|
We first evaluate our method on CIFAR-10 and CIFAR-100, each of which contains {{formula:99d6536a-da9b-434d-b291-ba3dfa9a4143}} training images and {{formula:dd1c515d-56d6-4042-aed6-f449f26b926e}} test images of size {{formula:f63b8587-5d21-4bea-8361-82c34b24c9ab}} . CIFAR-10 and CIFAR-100 have 10 and 100 classes, respectively. We use the same testing protocol as {{cite:b7a546d5348988300fcdbd40990f4b2aba4b509b}}, {{cite:5ee0d0ba3cfc09a90febe09846a4cb1304caf00e}}, {{cite:d29e97e614d3ea77fc604cf62d0856da16e4463f}}, evaluating our method on symmetric and asymmetric label noise. For both CIFAR-10 and CIFAR-100, we use symmetric noise ratios of {{formula:93f23df5-e96e-43f2-b901-bae2832a5115}} , {{formula:114d843b-bc46-436d-8240-2bbfef78d484}} , {{formula:30d3dbea-26aa-45a9-931d-66b75f239306}} , and an asymmetric noise ratio of {{formula:afbe24ce-e55a-4ce8-8883-3203ede5021a}} .
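For clarity, a minimal sketch (assumed NumPy code) of the two noise protocols is shown below; symmetric noise flips a label uniformly at random, while asymmetric noise maps each class to a fixed similar class through a mapping dictionary (the exact CIFAR mapping follows the cited protocols and is not reproduced here):

```python
import numpy as np

def add_symmetric_noise(labels, ratio, num_classes, seed=0):
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    flip = rng.random(len(labels)) < ratio
    # Uniform over all classes (a flipped label may occasionally stay unchanged).
    labels[flip] = rng.integers(0, num_classes, size=int(flip.sum()))
    return labels

def add_asymmetric_noise(labels, ratio, mapping, seed=0):
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    flip = rng.random(len(labels)) < ratio
    labels[flip] = [mapping[int(l)] for l in labels[flip]]  # class -> similar class
    return labels
```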
|
r
|
e00c93f2a1462e652eadb7730fd68dcc
|
To the best of our knowledge, none of the CNN-based mass segmentation methods have been applied to 3-D ABUS images. In this paper, a new segmentation method based on U-net {{cite:488b99ea703e52494d5fda979894ec47be66fb24}} is introduced for the segmentation of 3-D ABUS images.
|
i
|
583bf6dfeab39ab3887a59ced58320a9
|
Tol1326-379 is detected at high significance by the WISE satellite in
all four bands and its colors are W1-W2=0.38 {{formula:c8d97b1a-f189-451d-9bdf-ec55ec418773}} 0.03 and
W2-W3=2.58{{formula:453b091e-52e3-49d1-a305-5c22393e4822}} 0.03, respectively. These are significantly different from
those typical of elliptical galaxies {{cite:d1d5f29a8b41b864cc222e466c6dfb5b2bdf2a4b}}. This indicates that the
emission seen, at least, in the W3 band is dominated by the AGN
component. Indeed, Tol1326-379 lies in a region populated by BL Lac objects
{{cite:0662a8aeef1c45c1a2f2faaa3266abfd6cfff75a}}, although offset from the main blazar strip.
|
d
|
fcf18ff77f4bbec1d85d55b5827f317a
|
See {{cite:8cde128128fa50ee8dabb1728e6adb73d244110b}}.
|
r
|
7db43032977ff7d56aca94a702756cc8
|
While most baggage screening frameworks involved supervised learning, researchers have also explored adversarial learning to screen contraband data as anomalies. Akçay et al. {{cite:fc00d49c18e00735061d5fcd40ae98a5d784b289}}, among others, laid the foundation of unsupervised baggage threat detection by proposing GANomaly {{cite:fc00d49c18e00735061d5fcd40ae98a5d784b289}}, an encoder-decoder-encoder network trained in an adversarial manner to recognize prohibited items within baggage X-ray scans. In another work, they proposed Skip-GANomaly {{cite:29320969c1c867492cbc440818d22504d37efe7b}} which employs skip-connections in an encoder-decoder topology that not only gives better latent representations for detecting baggage threats but also reduces the overall computational complexity of GANomaly {{cite:fc00d49c18e00735061d5fcd40ae98a5d784b289}}.
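The core scoring idea behind these encoder-decoder-encoder models can be summarized with a minimal sketch (assumed PyTorch code; the sub-networks are placeholders, not the published architectures):

```python
import torch.nn as nn

class EncDecEnc(nn.Module):
    """GANomaly-style anomaly scoring: compare latent codes before/after reconstruction."""
    def __init__(self, enc1, dec, enc2):
        super().__init__()
        self.enc1, self.dec, self.enc2 = enc1, dec, enc2

    def anomaly_score(self, x):
        z = self.enc1(x)          # latent code of the input scan
        x_hat = self.dec(z)       # reconstruction of the scan
        z_hat = self.enc2(x_hat)  # latent code of the reconstruction
        # Unseen (contraband) content reconstructs poorly, so the codes diverge.
        return (z - z_hat).flatten(1).norm(dim=1)
```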
{{figure:197187e4-f761-4fb4-9e0e-d7884bb39be4}}
|
m
|
717a184e6d49703967699f8f4b31d02e
|
where {{formula:168d45a5-185f-45d2-ba35-411754bba546}} is the antineutrino energy, {{formula:e6340a0d-ef20-4e3e-b7b1-0d20436d2378}} is the antineutrino flux and {{formula:f721afe6-eb87-49c2-b8db-721b6a906055}} is the IBD cross section {{cite:529c1089ad27923269b65b61f9bb166e405e7191}}, {{cite:325b8f0d044fa55cbff7332bc3e8c86b8f999158}}. The theoretical isotopic IBD yield for a certain isotope {{formula:0f2bccf0-4aa9-4bf5-8fa4-537cc2407752}} from different flux models {{cite:4332cbb401a069d12acd86731ffd19a97e5f47ba}}, {{cite:89fdb6103c49b95f8b3d16e39b02bd5f3169daa8}}, {{cite:41040a32f53fdcb6e986baa84a37919b3042e5ff}}, {{cite:e02053746d8466bd24bb29f3c87ca482d583e759}}, {{cite:9618352f1ecaccce5cb4264918cdb348092949f0}}, {{cite:dc8e1409a9ad4c45ad0a3219e0eaab1fa007be8b}} has been revisited in {{cite:23fb7b1861e9fa5e51a5a1531469955815f53129}}, and will be used in the current analysis.
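Numerically, the isotopic IBD yield is a quadrature of flux times cross section over antineutrino energy; a minimal sketch (assumed code; `flux` and `xsec` are placeholder callables, not the cited models) is:

```python
import numpy as np

def ibd_yield(flux, xsec, e_min=1.8, e_max=10.0, n=2000):
    """Yield per fission: integral of flux(E) * sigma(E) dE, E in MeV (1.8 MeV = IBD threshold)."""
    E = np.linspace(e_min, e_max, n)
    f = flux(E) * xsec(E)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E)))  # trapezoidal rule
```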
|
m
|
c9a1fe561d5e00eba09be105fefe880d
|
Complexity-free manufacturing techniques such as additive manufacturing (AM) have enabled the fabrication of intricate geometric features. This permits the design of complex structures that fulfill specific functional criteria while possessing lower weight. Access to this new design space makes complex structural designs (e.g., cellular structures) coveted in various engineering applications {{cite:4c031461b4f2c6726480be650a596a3977903813}}, {{cite:bcd49df05acd3b2b6f65b3eeed324cb799c8dd37}}, {{cite:2dc9d44d6af9f84ac51312e872ad57fa6d994d8a}}, {{cite:ad26e56c1b4349ab3bed9e11eca60f9aded7540a}}, {{cite:e314032da459c77a56fef30ec844ffbdbe21b42d}}, {{cite:ec7999d3d34592f8c5dde26a7c13c6b276d66690}}. In this context, topology optimization (TO) plays a major role in designing light-weight structures that satisfy functional goals {{cite:20a0086665954bdfb38630b949e24222af8bf231}}. However, the most widely used TO algorithms (e.g., solid isotropic material with penalization or SIMP) {{cite:ca63c08f8f4fe615446421e8d132a1d8296e7cd7}} deliver discrete density maps as the outcome, leading to poor manufacturability due to 1) connectivity issues within variable-density bulk materials and 2) the limited resolution of the density map.
|
i
|
91caf86763f8c3772f1ce0cdd7b7a155
|
A second feature of our upper bounds is that they are expressed in terms of
generalised colouring numbers, and hence are increasing with the distance.
However, for none of the classes of graphs do we have examples that show
that {{formula:2254fb5d-0ce6-4330-a5d5-4fb75f8b139e}} grows when the odd number {{formula:5f421465-89a3-4d47-9ab2-5b30776de597}} increases. This
feature was noticed earlier for planar graphs. The following problem, which
is attributed to Van den Heuvel and Naserasr, appears in {{cite:73d098bc58df8f85ba0a56189fb7e7af839bb5db}} (see
also {{cite:e2a9d306eaa0822a4389247ad1899b5228b344a4}}).
|
d
|
2a5a2b32a8ce86162b9e54d969f2bfce
|
The architecture of the proposed GAN's generator is depicted in Figure REF . It consists of 4 convolutional upsampling blocks followed by a squeeze-and-excitation block with a residual connection, a technique originally proposed by Hu et al. {{cite:52c64a579b9a1c045e8bf7a4238c744f3084299e}}. The squeeze-and-excitation block consists of a global average pooling layer, which squeezes global information into channel descriptors, and a re-calibration part, which acts as a channel-wise attention mechanism and captures channel-wise relationships in a non-mutually-exclusive way. The last operation scales the input's channels by multiplying them with the obtained coefficients. The squeeze-and-excitation mechanism adds two fully connected layers with a ReLU activation function in between and a sigmoid function applied at the end, as shown in Equation REF .
{{formula:b0c7f5c1-ed1b-457d-9cad-0335d9ab3731}}
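A minimal sketch (assumed PyTorch code; the reduction ratio is illustrative) of such a squeeze-and-excitation block is:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                    # x: (B, C, H, W)
        s = x.mean(dim=(2, 3))               # squeeze: global average pooling
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))  # excitation gate
        return x * s[:, :, None, None]       # channel-wise rescaling of the input
```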
|
m
|
0d437b2b9c0bf62f733185682472f424
|
It is worth mentioning that the dominant quasinormal modes extracted from the time-domain profiles by the Prony method are in very good agreement with those obtained via the 7th-order WKB method with further use of the Padé approximants, as prescribed in {{cite:178b49918bc449cc5009149e6ada2c1c1cbbbbea}}. This can be seen in the data presented in Tables I and II. The Padé approximants were chosen such that the known accurate quasinormal modes of the Schwarzschild black hole are reproduced with the best accuracy, which corresponds to {{formula:99ad9731-e770-4e49-993c-9bd4f9855d1f}} , where {{formula:e1742b21-8982-4674-a6e0-b64706ba0c71}} is defined in {{cite:baeab6072cfd4fbc893b65bb05b81f926d62f90e}}.
|
m
|
efe93799b8d219437df4f1838927595b
|
A method which has shown considerable improvement isolates the desired source using a time-frequency mask and then uses the statistics of that masked source to steer a beamformer {{cite:aea3d7e2c4781698d5f93e3e715839c53ccc06a4}}, {{cite:3b683db2b6f8aafec99010f3055313fa1a54c1b3}}. While effective in non-speech noise, it encounters challenges when deciding between one or more voices. The entire utterance is usually used to steer the beamformer, so the streaming, low-latency requirements of the smart speaker are not met. When operating under such constraints, these techniques have had difficulties {{cite:fd64e31e9d4268f92d1dc8b0ee5b913a0d59d400}}, {{cite:d7fe41462fa4b49194852e3460923138184dd6f4}}.
|
i
|
e38a9ab4e30abaf0dea29538fbb3517f
|
Finally, in {{cite:c5c7118a054b4081f99b6c498b5e2de8d9688526}}, the author proves the universality of QAOA using a line graph quantum architecture, which is not easily comparable to the method of showing ma-QAOA universality in this paper. The author mentions that this architecture is limited but says that the techniques used to prove QAOA universality can be expanded to higher dimensions. It would be of interest to determine if there are cases where the QAOA universality methods in {{cite:c5c7118a054b4081f99b6c498b5e2de8d9688526}} require fewer operations to implement an arbitrary circuit than the method in this paper, and vice-versa. Additionally, fully-connected architecture and gates that act on {{formula:7dc0e33d-dfa7-4e8f-859e-106908ac0c0d}} qubits for all {{formula:d2504033-c711-4e0d-b701-697e5de5006d}} are required in this implementation, whereas the line architecture in {{cite:c5c7118a054b4081f99b6c498b5e2de8d9688526}} is much more sparse. It would be of interest to determine if there is a more natural method of using QAOA or ma-QAOA for computation on lattices such as the square grid or hexagonal lattice, which more closely model current quantum architecture {{cite:cc78794926612f594aaced4cc1fce56636290bee}}, {{cite:b66cadecc051fc8b22e9e5a57efd20103e08e2b8}}.
|
d
|
4a34bac49bd50432355904288732ec5a
|
Table REF shows the details of the original content of all data sources considered. Similarly to {{cite:0a26a63bd2425f1c70b35de14550f20c466c173a}}, our training examples are derived only from Wikipedia (Wiki-Train) and the remaining data sources are used as test cases. Wikipedia provides a large collection for training, while other data sources are important to verify a model's generalization. All documents are split at a paragraph level, as it reflects a more realistic scenario in the plagiarism domain compared to sentences {{cite:50ea7ae6ed684645370146b7e01fb1ba58ee076e}}, {{cite:cb0ee6a4e16be4f54852f7dbaade40a029876f34}}.
We use BERT {{cite:4e4f19c77f283416128dcf433864f20da415c452}}, RoBERTa {{cite:1a50fa72d171f86e68922bb308d8965b34ffefae}}, and Longformer {{cite:bc0a5905e42dc5bd3aaffdfaf331dfc95a2f2ef4}} to generate the neural paraphrased content for the entire dataset.
BERT offers a strong baseline for classification, RoBERTa and Longformer improve BERT's architecture through more training volume and an efficient attention mechanism, respectively.
|
m
|
9c5c1fc6984a157f4119e0cdded1de9e
|
We propose a novel approach to 3D shape reconstruction, called Multiple View Performer (MVP), that can be used to complete objects from two views up to an arbitrary number of views, leveraging the recently introduced class of scalable linear-attention Transformers {{cite:5d20b4fd34303bc9f6ff3886b5bac6211ea901bc}}, called Performers {{cite:c5e68786e19143eab16cc40a906788378b9c4337}}, {{cite:08d827ad46e21102b5b178d8f9d65d630e6b2a13}}.
At MVP runtime, 2.5D views of the object or simple scene are captured in a panning motion to create a sweeping snapshot of the object's geometry (see fig:multiviewcoverfigure), or from a still camera in the case of moving objects. For each of these views, the causal Performer block updates its corresponding compact associative memory (approximating modern continuous Hopfield memory {{cite:a691a52106202e21cf4d6d09a4c57d7d20b1f307}}), effectively improving MVP's understanding of the object and, consequently, the overall shape estimate. Crucially, the size of the aforementioned compact associative memory is independent of the number of views it has consumed (see Section REF for more details).
When a completion is requested, the current observation implicitly interacts with all the previous observations through that compact memory for more accurate infilling.
Due to the Performer block's ability to memorize multiple views, it can also remember objects that are no longer visible or utilize newly revealed views of objects that were previously hidden. Through our results, we show that the proposed MVP system generalizes better for both single-view and multiple-view reconstruction without requiring the registration of multiple views of the object. We show that this shape completion system performs better than or on par with an LSTM-based system and an attention-based system. This system can be used for many different robotics tasks, such as grasping, stacking, and collision avoidance. We show a real-world demonstration of how this could be used with a camera fixed to a robot in fig:multiviewdemo. We also demonstrate, using a simulated BarrettHand, that this shape completion system can be used for grasp planning.
|
i
|
5eb3265ccbb9b0e576965d59036cab19
|
This work answers a question raised in {{cite:0f1a1ea26547ab84e95782572755c90ad430cc26}}, namely, identifying the basic ingredients necessary for developing a variational principle all, or some, of whose Euler-Lagrange (E-L) equations are a given system of pde; the functional to be developed is required to have space-time derivatives of its fundamental fields in `more-than-linear' combinations. Such a question arose from the purely practical issue of developing a basis for application of Effective Field Theory techniques in Physics (cf. {{cite:658d712b36b4f42369b4c81a5f723b7ebfc2a793}}, {{cite:cc39abe1c6c6a4d93ad01243a65cccc07c035fe0}}, {{cite:85a1abde0f329f556074d0c314a132969063e2b8}}) to the system of nonlinear dislocation dynamics {{cite:0f1a1ea26547ab84e95782572755c90ad430cc26}} in continuum mechanics and materials science. Despite the success in formulating an appropriate action functional, that effort also exposed a certain flexibility in the adopted scheme whose details remained to be understood. Here, we are able to understand those details and abstract out the essence of the technique. The idea is then demonstrated on a wider setting of important classes of physical systems of nonlinear pde. We note here that the question of finding a variational principle(s) (some of) whose E-L equations are a given system of pde is different from the one adopted in the `Least-Squares Method,' (cf. {{cite:ffb88cdd63424757cf778555fa7eac96f3a72c8b}}), as explained in {{cite:d11614793fdca0cbb7e02aad60b1bfbe0f94d271}} and {{cite:9a14cc062d136d2b498d95c82f47e008063911a6}}; the E-L equations of the Least Squares functional are not necessarily the pde system from which the Least-Squares functional is developed. With the null minimizer requirement, minimizers of the Least Squares functional are solutions of the pde system involved. In the approach adopted herein, a family of functionals is developed which satisfy the stated requirement. Mathematically rigorous considerations of the Least-Squares approach rely strongly on convex duality - the present approach relies on elements of convex duality even at a formal level.
|
i
|
90278d2eeae5286c239281c3b1192463
|
In addition to further applications to QAOA, our framework can give insights into recently proposed variants such as adaptive QAOA {{cite:01bbe775e6fc85d890388657aadfe330c6142b0f}}
and recursive QAOA (RQAOA) {{cite:35e240c8a23772e88e3b055e3470cbec28d8cc9f}}.
The framework and obtained results could also provide insight into quantum annealing (QA) and adiabatic quantum computing (AQC) given the close ties between parameters for QAOA and QA or AQC schedules {{cite:7f37181bb9334214a8397229e6f540f160209bf4}}, {{cite:67e4780f1f432737003d145bf3c8d077b440cf7d}}, including to cases with advanced drivers {{cite:5b7dc0346ad15db7a58c9dcc93f89d72661f2c41}}, {{cite:eea9f2806bf7615b425d931a6ad35e09ee2d03ab}}, {{cite:adae6015484046555628ece8b98ac62572201f66}}.
One such application of our formalism is designing
more effective mixing operators and ansätze, and facilitating quantitative
means for comparing them. Our framework
suggests direct approaches to incorporating cost function information into
mixing operators, such as, for example, using {{formula:36300edc-e90e-4053-bdce-e1a7b46e05cf}} as a mixing
Hamiltonian. Indeed, the ansatz
{{formula:afcc540b-a19f-45c8-a60f-8db4f8702d35}} has the same
leading-order contribution to {{formula:9f99efa0-49a0-4737-a5fa-1cb0f4b50598}} as QAOA{{formula:f8ef8db4-6c67-418b-8239-b3a03ad81a8b}} for
{{formula:07331d69-42f4-4845-9a41-b3bc9b344c66}} when using the transverse-field mixer. Another
possibility is to incorporate measurements or expectation
values of cost gradient observables, such as {{formula:bb87d028-1bb9-42f6-8b1a-4503a81fd044}} or {{formula:dcf76cf9-1d88-41b8-85b9-9b8d27b41335}} , directly into the algorithm or parameter search procedure.
Thm. REF
shows that the cost expectation of a QAOA{{formula:cbe5033d-ec3a-4ba5-966f-26cba6868b5d}} circuit can be computed or
approximated in terms of expectation values taken for a corresponding circuit
with {{formula:ffd6a9c9-b686-43ab-8b94-a53bd27d0e41}} levels of the cost Hamiltonian and its
cost gradient operators, which could be estimated classically or via a
quantum computer.
In this vein, a recent paper {{cite:7605ff491845e0d901e3d9d281ce891438b1591c}} proposes an adaptive
parameter setting strategy that involves repeatedly obtaining estimates of
{{formula:0c9ba8d6-2c25-4ec2-9922-59a9740614e9}} .
Questions related to algorithm performance and parameter setting, both in ideal
and realistic (noisy) settings, appear
amenable to the series approaches of our framework.
For example, in cases when noise largely flattens the cost expectation
landscape {{cite:e88acc292967a34d9c79b6812fe51426307fdb2d}}, {{cite:7ead0060633b6f884ebf555ce4e72d01ee5fd270}},
our series expressions with only a few terms may be sufficient to
capture key aspects of the behavior.
|
d
|
f4a731eb7f87455de03db8f36399c638
|
The method we propose has two components, Mask R-CNN {{cite:0ac9089ce17fe0a101ce43ac33629a1830eca49a}} and Neural Style Transfer. "Virtual Fitting Room" first uses Mask R-CNN to find the regions of different fashion items, and then uses Neural Style Transfer to change the style of the selected fashion items (Figure 1). Both components will be explained in detail in this section.
{{figure:c58d851a-5f81-4833-884c-438269187b98}}
|
m
|
c6cbac239c794870e0b9ad110a3b24ff
|
In the student-teacher framework, as illustrated in Figure REF (a)(b), one image is augmented into two different views for a student and a teacher. The student is trained to predict the representation of the teacher, and the teacher is updated with a “momentum update” (exponential moving average) of the student. The success of MoCo and BYOL has proven the effectiveness of the student-teacher framework. With the momentum update, the teacher obtains more stable parameters in a temporal ensembling manner {{cite:66f89dd6f3c5f49f0d306185a6f17e6d28a25ce2}}.
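The momentum update itself is a one-liner per parameter; a minimal sketch (assumed PyTorch code, with an illustrative momentum coefficient) is:

```python
import torch

@torch.no_grad()
def momentum_update(student, teacher, m=0.999):
    # theta_teacher <- m * theta_teacher + (1 - m) * theta_student
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(m).add_(p_s, alpha=1 - m)
```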
|
i
|
b614086fea26ba7d0c902899497fb343
|
A complementary model inspired by the gauge/gravity duality is to use a holographic model of nuclear matter and perform weakly coupled calculations in the gravity picture to obtain the physics of the strongly interacting gauge theory. The Sakai-Sugimoto (SS) model {{cite:bc8d75909d1a40b60d29101cffbca6bc01fa41b7}}, {{cite:e0340771a9925cad5270bbca4609e144cf9268f9}} is a holographic model which shares a number of common features with QCD. Variations of the SS model allow a chiral-symmetry-broken deconfined phase {{cite:c372566d76f93cc9432f0643d85b8eb6865a0173}}, {{cite:c450cc7541955f99ef9ba2aea57d321eccfbf725}} with the possibility of a multiquark phase {{cite:ecd667b61260f455e176f4887985de062e4f4387}}, {{cite:6f95486cd8c1a7d2a8b3758f879911b4f4d9cfbc}}.
|
i
|
fc1a33b314885cca1b2e9af8f73c9bb7
|
The one-photon spontaneous emission (OPSE) rate can be obtained by using Fermi's golden rule and first-order perturbation theory. In the initial state of our system, denoted by {{formula:3d56a37a-4e51-4978-9cbb-0433c6badae2}} , the atom is in an excited state and there are no photons in the field; in the final state, denoted by {{formula:2815c561-b6f7-4b1b-b68c-32c6038732fd}} , the atom is in a state of lower energy (not necessarily its ground state) and there is one photon in the field in the mode {{formula:ab898895-ba87-4a22-b55b-656bca74a12a}} . The OPSE rate can be written in terms of the field modes as {{cite:576b93e2e69cbeb4c0a1eb8e3c0ae857ac224f91}}
{{formula:22955a66-1682-455b-9fb2-277bd3c9239b}}
|
m
|
6cc8f12a2ae45b67eae75876b7eb0887
|
For this photophilic scenario we also superimpose limits from X-ray searches which severely narrow the viable parameter space in the {{formula:0c9054fb-634a-47d7-b482-ba8eca683a60}} region where structure formation constraints fade away. We have calculated X-ray limits on ALP DM by utilizing existing ones on keV-scale sterile neutrino DM {{cite:4bb6f6e51f6d2a165f36fb0070ad9ee097f617d6}} as well as the expression for photon fluxes in both models.
|
r
|
6e8c096728052d3df6043bb0adf3cbce
|
Exemplar-based clustering methods. The selected competing methods are kmedoids {{cite:c3557585cf3c35fcb3655869477ad4822ade9740}} and AP {{cite:d9283bc823e740a3a80ae9c04c0b0db4930eb705}}. We denote the convergence parameters of AP and geometric-AP by {{formula:08f64896-be6e-4c36-9e51-0b36ca3776ca}} , {{formula:ab008ae2-217f-4c88-a9e8-c05de4cd5ae4}} , and {{formula:a070a263-8499-4c4a-bc2c-316fc232e92e}} to designate the maximum number of iterations, the number of iterations for convergence, and the message damping factor, respectively. We choose values reported to guarantee high convergence rates {{cite:43ce8c3bc458f26474346041020f01aae5d7439b}}. {{formula:c003d5ef-59b8-466e-8948-059b39c3cd86}} , {{formula:181d566d-b85b-49f3-8243-9270b8b2d865}} , and {{formula:1a36c702-e683-48b3-8466-8a87b66479d2}} . kmedoids relates to the kmeans {{cite:182eaf03de43181ff23dd3e5a8f7f300082c8cf3}} algorithm but it identifies the medoid of each cluster by minimizing the sum of distances between the medoid and data points instead of sum-of-squares. Unlike centroids, medoids are selected from the existent data points.
Centroid-based clustering. The most popular method, kmeans {{cite:182eaf03de43181ff23dd3e5a8f7f300082c8cf3}}, is also evaluated. kmeans is run 1000 times with random centroid seeds and the best performance is reported.
Structural clustering. The clustering method spectral-g {{cite:ac1c576a757ff1a0eb7bf122015a88c462e81d46}}, which takes the network adjacency matrix as the similarity matrix, is selected and compared. In spectral-g, the eigenvectors of the graph Laplacian are computed and the kmeans algorithm is used to determine the clusters. The assignment process is repeated 1000 times with random initialization and the best result is recorded.
Hierarchical clustering. To mimic the operating mode of AP, where initially all data points can be exemplars, we select a bottom-up hierarchical clustering method, namely hierarchical agglomerative clustering (HAC) {{cite:45ef3ddabd04a79157e4c611d98d54c29e3dcb31}}. To prioritize compact clusters with small diameters, we further use the complete-linkage criterion to merge similar clusters.
Model-based clustering. We also test a Gaussian mixture model (GMM) {{cite:b43d6b6e5771839fdd7233ebc941440cba8e578d}} method that utilizes the Expectation-Maximization (EM) algorithm to fit a multi-variate Gaussian distribution per cluster. Initially, the probability distributions are centered using kmeans and then EM is used to find local optimal model parameters using full covariances. The mixture model is employed afterwards to assign data points to the class that maximizes the posterior density. 100 random restarts are performed and the best performance is reported.
Variational inference clustering. We perform Bayesian variational inference clustering by fitting a Gaussian mixture model with an additional regularization from a prior Dirichlet process distribution (DPGMM) {{cite:f68d294a5a6fe90938680b1251c0c4d042d3aaef}}. As with GMM, 100 random restarts with full covariances are performed and the best result is reported. A minimal sketch of several of these baselines is shown below.
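This sketch (assumed scikit-learn code) uses illustrative hyperparameters rather than the exact experimental values; k-medoids is omitted since it ships separately (e.g., in scikit-learn-extra), and the paper's spectral-g uses the network adjacency as a precomputed affinity, whereas the default RBF affinity appears here:

```python
from sklearn.cluster import (KMeans, AffinityPropagation,
                             SpectralClustering, AgglomerativeClustering)
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

def run_baselines(X, k):
    return {
        "kmeans": KMeans(n_clusters=k, n_init=1000).fit_predict(X),
        "AP": AffinityPropagation(max_iter=1000, convergence_iter=100,
                                  damping=0.9).fit_predict(X),
        "spectral": SpectralClustering(n_clusters=k).fit_predict(X),
        "HAC": AgglomerativeClustering(n_clusters=k,
                                       linkage="complete").fit_predict(X),
        "GMM": GaussianMixture(n_components=k, covariance_type="full",
                               n_init=100).fit(X).predict(X),
        "DPGMM": BayesianGaussianMixture(
            n_components=k,
            weight_concentration_prior_type="dirichlet_process").fit(X).predict(X),
    }
```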
|
m
|
d7c5b53832bb80c6c51ae41d6934ba51
|
For the power peak, the probability {{formula:93a7c656-904a-454b-9434-b749e5d52fc2}} to obtain a power equal to or higher
than the threshold from a chance fluctuation is {{formula:fd526002-72e2-4bdd-8e9a-a525eb27c728}} ,
and the false-alarm probability (FAP {{formula:160da7e4-3c2b-4b3f-b9aa-e62ac120381c}} ) is
{{formula:08592991-9e07-41df-acde-1404b3c0881b}} {{cite:5c5d7a1c4d206d1d4552c41b0f69eb7f8f101b81}}, {{cite:6fc816cbae958444c5f3022c07bd35da0be0cfad}}, where {{formula:5b4ac31c-400e-4688-bdb0-944b0e9ff15d}} is the number
of independent frequencies sampled (i.e., the trial factor).
To estimate the underlying power spectral density (PSD), we used a smoothly bending power law plus a constant {{cite:829a752409ba1c9644ac1a166044f3cf36c2934d}} to model
the PSD calculated from the light curve. A maximum likelihood method was
employed {{cite:c21802cb45e43752482e1c301e94f460447e26f7}}.
The PSD function has a form of {{formula:3ad788a9-12bb-4f87-b3fb-5f7888dddee7}} , where {{formula:4967f5ba-2623-444a-8003-4ac2c9206f0e}} , {{formula:3df5f44c-a4d1-4dda-b2e0-4808532509b9}} , {{formula:eda28b02-4bdd-444f-9839-a538064cb50d}} , {{formula:a84e3296-b480-4661-b8ac-5a0452396083}} , and {{formula:2707ccf0-923a-43a2-8932-f3b2d970f498}} are the normalization, low frequency slope,
high frequency slope, bend frequency, and Poisson noise, respectively, and
the obtained values are {{formula:c64762a8-a068-4367-aa89-228921da2585}} , {{formula:436d5a1a-4176-4fbb-bf00-e6fde19dd56d}} , {{formula:afe0d777-5ec2-4e80-8f21-da8378d5234d}} , {{formula:1da44134-f0dd-495d-a86e-9b7bbf4e0dc5}} , and {{formula:1f68d2dc-3c32-49d9-9f58-ac6df2c04379}} , respectively.
To evaluate a confidence level for the periodic signal, we generated {{formula:97ad3325-f224-4929-bd30-1ae59e4ad5b1}}
artificial light curves based on the best-fit underlying PSD and probability
density function of the flux density variations by employing the simulation
program provided in {{cite:efb75caefeae04c0741bc43afa57fb66a04d35c5}}.
The confidence curve evaluated from the artificial light curves is shown as
a green line in the right panel of Figure REF . The confidence level
for the QPO signal was found to be {{formula:a0f22ec2-033f-4abc-8991-4d3e222d8b62}} .
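Two of the ingredients above are simple to state in code; the following is a minimal sketch (assumed implementation, with illustrative parameter names) of the trial-factor correction and the smoothly bending power-law PSD model:

```python
import numpy as np

def false_alarm_probability(p_single, n_freq):
    """FAP = 1 - (1 - p)^N for N independent sampled frequencies."""
    return 1.0 - (1.0 - p_single) ** n_freq

def bending_powerlaw_psd(f, norm, a_low, a_high, f_bend, c_noise):
    """P(f) = norm * f^-a_low / (1 + (f/f_bend)^(a_high - a_low)) + c_noise."""
    return norm * f**(-a_low) / (1.0 + (f / f_bend)**(a_high - a_low)) + c_noise
```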
|
r
|
6eed792320fea5d30ea9a0b2df2eeec5
|
To evaluate the transferability of our models, we compare our PSViT-2D model with ResNet on the downstream tasks of object detection and instance segmentation. We adopt Mask-RCNN-FPN {{cite:67e03ebb16f18ac4366e58ee17a5a612498cbaf7}} as our framework. We conduct experiments on the MSCOCO 2017 dataset, which consists of 118K training images and 5K validation images. We report the performance on the validation set.
|
r
|
99e6d2a85283ce4f849c46b7ed80e069
|
We should point out that our discussion has focused on the properties of the SC at {{formula:b7dd4534-11e2-479c-a183-a4055a328e83}} , and we have deliberately avoided directing much attention to the properties of the correlated insulator (CI) at {{formula:e3aedd8b-7680-4406-8e56-19b9fb4add01}} . Assuming that the flavor polarization leading to anti-parallel SVL is present in the insulator does not uniquely fix the nature of the insulating gap, and there are various candidate orders whose competition is decided by comparatively delicate effects {{cite:3ffd45133fb2be4e16044dd641dc49c1accb2651}}, {{cite:9691d6a03bf2460bc57e61215e9b69dd730bf634}} (it is even possible in some samples to obtain an insulating state with a {{formula:910f0597-0195-4079-b25b-dd31e4c0899a}} quantized anomalous Hall effect at {{formula:aeb8f5f0-4f82-4765-ab33-a7257a83713e}} {{cite:8469c75f92f3068107ff089dee193cae2230af8d}}). While it is natural to assume that the SC is obtained by doping holes into the CI (with this scenario apparently being borne out in the STM study of {{cite:a63b9378a5212c012d25ae93d5bebd2187cd93ad}}), this does not necessarily always need to be the case, and it is possible that different types of CIs (likely all with the same type of flavor polarization) are present in different samples that superconduct at {{formula:815ba0fa-9994-4f65-944a-4e6f6b159702}} . This possibility is suggested by the fact that in {{cite:4c7e534249fb40b166deb2df65c03afba30c0565}} the gap was seen to possibly close between the SC and insulator, and by the Josephson study of {{cite:0a33f98bcb05721a5c028508383b9b5512ea00a5}}, which found indications of a correlated insulator that strongly breaks time reversal being present in a device which also superconducts. All of this discussion goes to show that flavor polarization, rather than the existence of a correlated insulator, is the more fundamental phenomenon to take into account when trying to address the nature of the SC, and indeed only flavor polarization has played a role in shaping some aspects of our phenomenological analysis. This means that our conclusions about the order parameter should apply equally well in samples that exhibit superconductivity and flavor polarization, but possess no CI (of course, whether or not it is possible to create such a sample is a separate question).
|
d
|
ac37174018c874f0c2c96139ad61e80d
|
It is fascinating that the Kerr solution corresponds to a particularly simple three-point amplitude {{cite:941c1e8cbc2dfec1d5ef3b83fd99fd38d9966bc4}}.
Clearly, this fact is related to the Newman-Janis shift, which is an all-orders property of Kerr {{cite:9449bb80b52056eed838fe4852e449c0fec71a6a}}.
The “single” copy of the Kerr solution, {{formula:f79ffefb-9e42-48b6-ae76-cea8a8844abf}} , is a solution of the Maxwell equations which is also endowed with a simple three-point
amplitude. Turning on a magnetic charge in addition to the spin leads to a spinning dyonic solution which is, to date, the most
general known three-point amplitude in pure gauge theory.
The double copy of this amplitude in pure gravity is the Kerr-Taub-NUT solution {{cite:314dbfe96213fc947e5d270494a1eec8ed99ee1d}}.
However, it is also possible to perform the double copy of these amplitudes in NS-NS gravity where, as we have seen,
the resulting class of solutions is of the type Kerr-Taub-NUT-dilaton-axion.
This generalises the previous discussion of the double copy from Coulomb to the JNW solution to the more general three-point amplitudes.
|
d
|
0742d9d7eecbe7892296d1da42b14821
|
What does it take to reach human-level performance with a machine-learning algorithm? In the case of supervised learning, the problem is two-fold. First, the algorithm must be suitable for the task, such as pattern classification in the case of object recognition {{cite:71295f6e862f785909e896000af93da46779d502}}, {{cite:0814ae654a2abbe76ad9ee31a75ed33c20abfb9e}}, pattern localization for object detection {{cite:da11bcb5aa4b95596b67287a514161d13e10417b}} or the necessity of temporal connections between different memory units for natural language processing {{cite:434693c2dbbfdeb4f22a061db18b4cf59efd6f9a}}, {{cite:c9fd2855bc560a84c0fc40938273a6b16b36ac20}}. Second, it must have access to a training dataset of appropriate coverage (quasi-exhaustive representation of classes and variety of exemplars) and density (enough samples to cover the diversity of each class). The optimal space for these datasets is often task-dependent, but the rise of multi-million-item sets has enabled unprecedented performance in many domains of artificial intelligence.
|
i
|
a10a2d17707e3a4d31de6630981c883d
|
where {{formula:19eedcb4-037d-4780-bc08-88be31d21d7d}} is a positive real number and {{formula:c1e833c0-6e69-4401-8679-dd4eaa733558}} denotes the forward difference of {{formula:3d47d5d6-b1f2-4152-b20c-66dd895200ec}} . It should be noted that there is a key difference between the forward-difference approximation and the finite-difference approximation with regard to computational expense. The forward-difference approximation of the products of the Jacobians and vectors can be calculated with only one additional evaluation of the function, which requires notably less computational effort than approximating the Jacobians themselves. Since Eq. (REF ) is a linear equation with respect to {{formula:5867545f-fca1-49fd-8819-f5ae3b43d4df}} , we applied the forward-difference GMRES method to solve it {{cite:15384f089246134be61e3695ac9708df944727ce}}. The details of this method are described in Algorithm .
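A minimal sketch (assumed SciPy code) of a forward-difference GMRES step makes the cost advantage explicit: each Jacobian-vector product costs one extra residual evaluation, and the Jacobian is never formed:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_step(F, u, eps=1e-7):
    """Solve J(u) s = -F(u) matrix-free, approximating J(u) v by forward differences."""
    Fu = F(u)

    def jv(v):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        h = eps / nv
        return (F(u + h * v) - Fu) / h  # one extra evaluation of F per product

    J = LinearOperator((u.size, u.size), matvec=jv)
    s, info = gmres(J, -Fu)
    return s, info
```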
|
m
|
13cdc6db9d52dd5ae638f58cc8dd7cba
|
Another attention-based breakthrough was made by Vaswani et al. {{cite:11598baf3a3935833a30ebcdef4fb6b37a85e5b2}}, who built an entire architecture on the self-attention mechanism. The items in the input sequence are first encoded in parallel into multiple representations called queries, keys, and values. This architecture, coined the Transformer, captures the importance of each item relative to the others in the input sequence more effectively. Recently, many researchers have extended the basic Transformer architecture for specific applications.
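The mechanism reduces to a few matrix products; a minimal sketch (assumed NumPy code, single-head and without masking) of scaled dot-product self-attention is:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (n, d) input sequence; Wq/Wk/Wv: (d, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise relevance
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)         # row-wise softmax
    return weights @ V    # each item aggregates values from all other items
```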
|
i
|
962ec77b5c0d5eb168962ad27ac8cc9b
|
We performed two groups of experiments.
One included three scribble-guided models, i.e., partial cross entropy (PCE) {{cite:e4225a481a3009856ce0ad8325ed1ceb20c88109}}, weighted partial cross entropy (WPCE) {{cite:e5165bc12599d9517adc19f9ebb74df58377f506}}, conditional random fields post-processing (CRF) {{cite:77d251cb552463a7fc05ff8988e530c3e84957f4}}.
The other consisted of three GAN-based models trained with additional unpaired full annotations to provide shape priors, i.e., post-processing with denoising auto-encoders (PostDAE) {{cite:85fc557f5f1d754959a1011387002b0d8679f370}}, adversarial constrained CNN (ACCL) {{cite:dfc8c3c0d62af2c8885f7deecd0815e1aec43a62}}, multi-scale attention gates (MAAG) {{cite:e5165bc12599d9517adc19f9ebb74df58377f506}}.
Finally, the results from the fully supervised UNet {{cite:4bb99890a8e13b4bdcdfc032785adf5d5171c424}} (UNet{{formula:fbb1e55f-0286-4f33-9152-58337716a359}} ) were provided for reference.
|
m
|
471c6360c0f716d86bf4752edfdb35f2
|
Efficient link sampling. A naive implementation of the link sampling in step 5 of Alg.REF is to compute and normalize the sampling probabilities of all links in {{formula:13ef2266-e6ed-40d3-822a-b1a5370d577d}} , which requires memorizing all historical links and costs substantial time and memory. To solve this problem, we propose a sampling strategy (Appendix ) with expected time and memory complexity {{formula:90811b86-a0b6-490b-9a1e-6bd2c3c4d48a}} if links in {{formula:4509ef9f-e6fc-4901-9ae1-f2a245be2372}} arrive by following a Poisson process with intensity {{formula:54278bc6-6805-4aa7-bb3e-f42d932e3adf}} {{cite:d928d4cf5d803fc217d4b4346c428f9fb4311993}}. This means that our sampling strategy with a positive {{formula:6000065d-d42b-45e7-8bd8-5bfd4ab56790}} allows the model to record only {{formula:249b477a-3aa7-4fc3-958c-d84317d240ef}} recent links for each node instead of the entire history. Our experiments in Sec.REF show that the {{formula:c6cbc122-d241-4387-a95a-083927615995}} that achieves the best prediction performance makes {{formula:cfdbf85c-cabd-4718-9fc4-13f6dcfa06e7}} 5 in different datasets. Since the time and memory complexity do not increase with the number of links, our model can be used for online training and inference. Note that the {{formula:c8112952-43df-4b56-869d-41584705fc87}} case reduces to uniform sampling, mostly adopted by previous methods {{cite:130614e3fd36187579b0e7c8ca81f47ebbd2c8b7}}, which requires recording the entire history and is thus not scalable.
{{table:fe116c52-0292-452d-81f0-62a49519b345}}
|
d
|
4a9e7a435e6f971c86c345cd79d28d2c
|
Different from previous Bell inequalities for verifying special {{formula:666958f7-1136-49a3-926b-a374631f32dc}} -shaped quantum networks {{cite:72ebccf962f1ee6f11552bde4a2a8c6ffe9b3a0a}}, {{cite:043a38f7efd35cfdc95a68e1d9af0fcbfe62e718}}, {{cite:d1a68e3511f02e08523927ca37f1360bb3c374d9}}, {{cite:d56ae285f53c2437a9bc67eae43bf5333d80c69d}}, Theorem A shows that a generalized Bell-type inequality exists for a given {{formula:74d24b1f-8189-4b75-864f-56c2cab23284}} -shaped quantum network consisting of any bipartite entangled states. Generally, this Bell-type inequality is state-dependent. Furthermore, one measurement by Bob can create an entanglement between Alice and Charlie, who share no prior entanglement. The interesting feature of bipartite entangled states is that they go beyond classical resources {{cite:520326c5b914e37555015c32823b6c86076be0d0}} and are key to building large-scale quantum networks {{cite:acdddee326bd6020e6d90a2f21ffba3ac9cb3085}}, {{cite:3dbe940ba57ab18a8f3090c11483ec8b195ff708}}, {{cite:e293cb33c7327755b270d3eff360e6c9c83d661c}}. The proof of the tripartite nonlocality stated in Theorem A is a straightforward application of the semiquantum nonlocal game for each entanglement {{cite:f7be4de7006ce26d580680b50de2e1c147a42d02}}. For the bipartite activated nonlocality, our proof will be completed for a reduced quantum network consisting of qubit-based entangled states. The main idea is that Alice and Bob are allowed to first perform local projections on high-dimensional systems and local distillation of entangled mixed states {{cite:6dfa195bc8cde3bf622d1492ce7cdacba009743d}}, {{cite:940362b0b465a57ec5edb30c33f9d7a9540e5f6a}}. These assumptions are reasonable because local operations and classical communication (LOCC) or local operations and shared randomness (LOSR) cannot create entanglement between two observers initially sharing no entanglement {{cite:6dfa195bc8cde3bf622d1492ce7cdacba009743d}}, {{cite:f7be4de7006ce26d580680b50de2e1c147a42d02}}. Additionally, the proof of Theorem 1 provides an interesting by-product: a universal Bell inequality exists for detecting a single entanglement by using local projections and entanglement distillation {{cite:f7be4de7006ce26d580680b50de2e1c147a42d02}}.
{{figure:87868ae6-c3fd-4ee0-b3fe-b96ce037d2f5}}
|
r
|
1f09dd4006933b76d76a238a984aa86c
|
It is well known that the pairing correlations play a significant role in the
description of the ground state properties of open-shell nuclei. However, the
properties of rotating nuclei and fission barriers are especially sensitive to fine
details of pairing interaction. For example, the experimental moments of inertia
of low and medium spin rotational bands and their evolution with spin cannot be
described without inclusion of pairing interaction {{cite:5b6b0134ec396be5858c768aa152fe390134ac8b}}, {{cite:4f9332c1738c2c842df8eb80aa7a26a76edc5d43}}. In addition,
the accuracy of their description sensitively depends both on the details of
pairing interaction [such as its form [for example, quadrupole pairing {{cite:7967481af879a200442785b91ba1f0a955a9bf9e}}]
and strength (see Refs. {{cite:c645646b9d8e532591ab53da9ed801d85d8f4395}}, {{cite:d7f1a52e18d905ca0e40f7e16ab71b1e7bf59582}}, {{cite:b99144081bcb9c6a2696821c0b3790197e304634}})] and on (at least,
approximate) particle number projection (such as Lipkin-Nogami method)
(see Refs. {{cite:c645646b9d8e532591ab53da9ed801d85d8f4395}}, {{cite:7967481af879a200442785b91ba1f0a955a9bf9e}}, {{cite:4432e1b546eab671da79ec28a0b42353ff899157}}, {{cite:d1fbae7607c50167313cbcc6d3f831495fb52821}}, {{cite:d7f1a52e18d905ca0e40f7e16ab71b1e7bf59582}}). The same
situation exists also in the description of fission barriers. It was found in Ref. {{cite:8a27f0d852c0f4750c2241ed8bab396718a87465}} that the pairing gap changes considerably with deformation and
that relativistic mean field (RMF)+BCS calculations with constant gap do not
provide an adequate description of the barriers. Relativistic Hartree-Bogoliubov
(RHB) calculations show that there is a substantial difference in the predicted
barrier heights between zero-range and finite range pairing forces even in the case
when the pairing strengths of these two forces are adjusted to the same value of
the pairing gap at the ground state (see Ref. {{cite:8a27f0d852c0f4750c2241ed8bab396718a87465}}). For zero range forces
the barrier heights depend on the renormalization procedure. Note also that the
details of pairing are important for the description of transitional nuclei since the
modification of the strength of pairing could drive the system from transitional
to spherical and vice versa (see the discussion of octupole-deformed
nuclei in Sec. V of Ref. {{cite:46aa9531977522c747f28cee2de0f28b94d04a38}}).
|
i
|
28a2eb467567dd77863df71e6ad8a2f6
|
In the present study, several ML classifiers have been examined using different parameters for the purpose of categorising middle ear function as normal or OME on the basis of WAI data. As shown in Table REF , results from most of the tested classifiers are promising, with accuracies of around 80%. Indeed, the results from the ML classifiers in this study exceed the diagnostic performance in identifying normal and OME ear conditions in primary care settings by General Practitioners (GPs) or other healthcare professionals using traditional middle ear diagnostic tools {{cite:36833a66f4caac29f2052a0526d489f48a19bb8c}}. A study by Lee et al. {{cite:ac71a8f478c27138ea6f79de9ca3fbd64c90e2d7}} investigated the accuracy of traditional diagnostic tools for OME, such as pneumatic otoscopy, otomicroscopy, and tympanometry. Their results showed low specificity in diagnosing childhood OME, although pneumatic otoscopy is recommended as the gold standard for OME diagnosis. In addition, there were high percentages of false positive and false negative cases when the results obtained from traditional tympanometry were examined. Future research will focus on improving the performance of the CNNs to achieve more accurate and reliable classification results. Performance could be improved by using either larger datasets or advanced deep learning techniques such as transfer learning, data augmentation, and few-shot learning {{cite:6ac62190e78e22ebd93e71424757b32217a3887d}} to train the CNNs more efficiently with small datasets. The work by Feyjie et al. {{cite:1d324d227635015ab032b7eee00769effba53aee}} demonstrates the efficiency of using few-shot learning in the task of skin lesion segmentation. A review of state-of-the-art data augmentation methods applied in the context of segmenting brain tumours from MRI indicates that data augmentation has become a main part of almost all deep learning methods for segmenting brain lesions {{cite:1fea4438db053a3dc58ae59e733c2253d44d5b63}}. Moreover, transfer learning has become a useful approach for analysing medical imaging using DL {{cite:f7eb8800dd2c703b5e38b0d44e999fa02affdc28}}.
|
d
|
a16a1aef8594d50ce9e6414bd237380c
|
Results on Adience and MegaAge.
In this experiment we compared our method with state-of-the-art methods {{cite:42095a19e6971310fff28dcd68b614daeeb3dd32}}, {{cite:f6085a42c879faad6847b007d2027262c9909f4b}}, {{cite:7fef08d144bb5c39335984dc411eb0d30b62606e}} on both the Adience and MegaAge datasets. For the Cumulative Attribute method {{cite:42095a19e6971310fff28dcd68b614daeeb3dd32}}, we retrained the method using VGG face verification features trained on MS-Celeb-1M. For Deep CNN {{cite:7fef08d144bb5c39335984dc411eb0d30b62606e}}, we reported the results given in the original paper.
In addition to the aforementioned comparisons, we also evaluated the importance of different losses used in the training of our network:
|
m
|
ac24a0c86298ce36421ba207cdec42f1
|
Previously proposed ASR-based seq2seq pretraining techniques {{cite:35d433549aada85e7ff904dae7a5282e20deb147}}, {{cite:f30148664ea5ea42907e75c1374f3fed3b081280}} can also be seen as capturing some token-level information but still fall short compared to the proposed method. We hypothesize that because our method performs token-wise alignment directly in BERT's embedding space, the knowledge in our pretrained model is already semantically richer. An ASR-based encoder can map speech to tokens, but it is highly unlikely that the resulting embeddings would lie in BERT's space.
|
r
|
84f9233f9663c4078089e88ca8b61a65
|
We focused on visualizing the impact of different OCC methods and NSA variations; to visualize this best, we used shape datasets. Points inside a shape belong to the true class, and points outside it belong to the false class. The training set contains 2000 data points inside the shape, and a separate test set contains 2000 data points both inside and outside the shapes. We experimented with two different shapes: Ring and Pentagon. For the ring, the inside is the negative region and the outside is the positive region; for the pentagon, outside detection is not linearly separable. Figure REF illustrates the inlier and outlier positions for the Ring and Pentagon train and test data. The nature of these shapes makes the experiments easy to visualize. For the experiments, we used the Python pyod library {{cite:b13713b52c617f5d2cbcfc30ded63cf24989057e}}, and for the NSA variations, we used the code developed by Zhou and Ji {{cite:231cb3abd7eb6dd57e99f69f345924421d1b4e04}}. We used the default configuration of the pyod library, and for the NSA methods we considered the Minkowski distance. Our experimental platform was Google Colab with GPU and high-RAM usage enabled. We also used an open-source repository of data and source code for reproducibility. For CBLOF, the Ring dataset was not considered. We used pyod's built-in visualization utility and the Orange tool {{cite:50db258f22535b4bfa67541138ff5b99d76db77d}} for generating the visual images.
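A minimal sketch (assumed code; the model choice and the `X_train`/`X_test` arrays are illustrative) of the pyod workflow for these shape experiments is:

```python
from pyod.models.knn import KNN

clf = KNN()                              # default configuration, as in the experiments
clf.fit(X_train)                         # X_train: points inside the shape (inliers)
y_pred = clf.predict(X_test)             # 0 = inlier, 1 = outlier
scores = clf.decision_function(X_test)   # raw outlier scores for visualization
```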
|
m
|
c36d8a2aadc075a71e224b16978ed55c
|
Specifically, we include the following representative methods in this section: CORAL {{cite:fde84e3f7dee25c92dde624b669e163e1a2077d2}}, MMD {{cite:affefb70e3a9aab15c93df436087d2ba728031ba}}, DANN {{cite:f7581dd8914633b221de87ca5979e4c118be792e}}, C-DANN {{cite:57be0fd536966d7fd3e087f83027e4bfdc5d43cc}}, IRM {{cite:2d6e73f28fe6c239bbbf1f94a72e185b36730b56}}, VREx {{cite:40d0516f21bdd850cc652da859d17e4d28a0153d}}.
|
m
|
958a19ffb29dcc1e0cb43d530cf9b0e6
|
Problem statement and experimental setup:
We explore anatomical segmentation of lung, heart and clavicles in a multi-center database of chest X-ray images, with heterogeneous labels. The database is composed of 4 publicly available datasets (JSRT {{cite:ff5d1138fc612d95f8ece6822d4d4581ecbab822}}, Padchest {{cite:95e28cbfc88acf5bcffc43fa0affb155e1fb0b29}}, Montgomery {{cite:845650ab4cd4dd735af0734f0844d4aeabe0fa1b}} and Shenzhen {{cite:89323d8a00c67ea77c72b0ca152652f4786ea30d}}), which originally provide pixel level annotations. Since the proposed method employs landmark-based annotations, we adopted the publicly available Chest X-ray landmark dataset, which provides landmarks for 3 different organs from the aforementioned databases (github.com/ngaggion/Chest-xray-landmark-dataset). Images from JSRT (246 subjects) include annotations for lungs, heart and clavicle (LHC); Padchest (137 subjects) include lungs and heart (LH); while Montgomery (138 subjects) and Shenzhen (390 subjects) include only lung (L). To evaluate each domain separately, we divide the datasets into 80% train/val and 20% test partitions.
|
m
|
1c770cec1e7be323b7876e663fbb3010
|
We use the official nuScenes 3D MOT evaluation {{cite:67d0aa78ca9cccdea178dd9c8328752efe9bdc26}} for comparison, reporting the AMOTA and AMOTP metrics. Table REF shows the results of InterTrack on the nuScenes test set {{cite:67d0aa78ca9cccdea178dd9c8328752efe9bdc26}}, compared to 3D MOT methods using CenterPoint {{cite:c67c6fa6b006b18941731b42db59c007a6b58816}} detections for fair comparison. We observe that InterTrack outperforms previous methods by a large margin on overall AMOTA (+1.00%) with a similar AMOTP (+0.01m).
|
r
|
c24702895f1af08a54f0a48497ca35ad
|
Furthermore, we introduce the cohomology of a compatible Lie algebra associated to an arbitrary representation. We would also like to point out that there might be more than one cohomology theory for compatible Lie algebras. Such a phenomenon has already appeared in the study of cohomologies of algebraic systems with more than one product, such as Poisson algebras ({{cite:f6594830ff66f8caaab329f8f1de231aaad4f98f}}, {{cite:c97ed922ec1e23a8486b048d66c784d6b639958a}}). In particular, the coproduct for a compatible Lie bialgebra given in {{cite:a6b10bbd233d195f31bfcd27f3ee413f441284c7}} is not a 1-cocycle in the sense of the cohomology theory in this paper; that is, there might be another cohomology theory for compatible Lie algebras in which it is a 1-cocycle. We then study abelian extensions of a compatible Lie algebra and show that they are classified by the second cohomology group. We also introduce the reduced cohomology of a compatible Lie algebra and establish the relation between the reduced cohomology of a compatible Lie algebra and the cohomology of the corresponding compatible linear Poisson structures introduced by Dubrovin and Zhang in {{cite:93c08823c8bd06f58bfc1e1aa081863f5a0ddbb1}} in the study of bi-Hamiltonian structures. Finally, we study nonabelian extensions of compatible Lie algebras by a method similar to the one introduced in . We construct a bidifferential graded Lie algebra associated to compatible Lie algebras {{formula:ad4bcfa9-7f93-429c-a09f-cc8d5f05637b}} and {{formula:ecb5223b-425e-4e8b-b0a4-2d428615aedc}} and characterize nonabelian extensions of {{formula:336dd731-bd68-4dc0-a1fd-2d36022fe1a8}} by its Maurer-Cartan elements.
|
i
|
45801faf0dd63de04e48df23c705d23b
|
If the UV flux is mainly from optically thick synchrotron emission of the compact jet, the jet should be very energetic. We estimated that the radiative flux of the jet at the UV peak was {{formula:3680bc96-5745-45c6-b96f-83724acc07db}} by extrapolating the radio spectrum from 5.5 GHz to the UVW2 band. The corresponding 0.1–100 keV X-ray flux, extrapolated from the 0.4–10 keV spectrum, was about {{formula:4ed1af61-8cff-4f7b-bde2-67fc426b7ac3}}, corresponding to {{formula:dc8015f8-0377-417a-ab9c-4ba3b5d6ffc8}} if adopting a distance of 8 kpc and a mass of {{formula:2cc0e82b-c0ed-44cb-93fd-d63f11da9878}} {{cite:a37806b0e59bd0c0a3ae3400b0fb545c25908f15}}. Taking a reasonable radiative efficiency of the jet of about 0.05 and a Doppler correction factor of about 1 {{cite:9d7302fbf5baf84374221623b08b491a4ee66c4a}}, {{cite:e50b77744f7a4bc596a7b6cf60a104e2a17a320e}}, the jet power {{formula:bda5be59-1543-4686-9fd5-7fc1e5b93dc0}} is about 2.5 times the X-ray luminosity {{formula:42cdb8a9-bbc1-44ba-bd5c-57597db3e9d7}}. This is larger than previous estimates of {{formula:56d1cb48-b393-402b-9735-1b62d4696b82}} in the hard state of GX 339{{formula:17fbcf48-eb67-47d9-90a2-075c1e041e4f}} 4, which were all less than 1 {{cite:e50b77744f7a4bc596a7b6cf60a104e2a17a320e}}, {{cite:063f8aa02c05c68c67c4c3088b23121857f3719a}}. Moreover, the jet power at the UV flux peak exceeded the X-ray luminosity during a bright hard state above 0.1 {{formula:3aa7d146-a09f-4295-9c78-1bfa0a89c872}}, much higher than the critical value of {{formula:5f8aa2e9-bd39-490f-8ed8-91bb657ef2f0}} determined in {{cite:17b306fbcd6c1ab9820d9f0dda9d7b78a09090c0}}. It is worth noting that a much lower jet power ({{formula:b62d6e6e-c3d9-41aa-a5a4-4dce91882d29}}) was seen in GX 339{{formula:cbbca429-fa3c-4435-8af0-fc67f3ba32e9}} 4 at a similar X-ray luminosity {{formula:c0107c64-5fb6-4cc3-8a28-87c881d3454a}} {{cite:e50b77744f7a4bc596a7b6cf60a104e2a17a320e}}. Therefore, similar X-ray luminosities correspond to a large range of jet powers in the same source. This phenomenon is similar to the hysteresis effect of spectral state transitions, in which the mass accretion rate is not the dominant parameter {{cite:4fbd8be7acbea290c955eb498bbab1edf7f60285}}. This suggests that the jet power might be determined by non-stationary accretion, which leads to only a weak dependence on the mass accretion rate or X-ray luminosity.
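As an order-of-magnitude check, the arithmetic behind the jet-power estimate can be reproduced as follows; the flux values below are placeholders chosen only to reproduce the quoted ratio of about 2.5, not the measured numbers, which enter through the formulas above.

```python
import numpy as np

KPC_CM = 3.086e21                 # centimeters per kiloparsec
d = 8.0 * KPC_CM                  # adopted distance of 8 kpc

# Placeholder fluxes (erg s^-1 cm^-2): stand-ins for the extrapolated UV-peak
# jet flux and the 0.1-100 keV X-ray flux, chosen to give the quoted ratio.
f_jet_rad = 2.5e-9
f_x = 2.0e-8

area = 4.0 * np.pi * d**2
L_jet_rad = f_jet_rad * area      # radiative luminosity of the jet
L_x = f_x * area                  # X-ray luminosity

eta = 0.05                        # assumed radiative efficiency of the jet
P_jet = L_jet_rad / eta           # total jet power, Doppler factor taken as ~1

print(f"P_jet / L_X = {P_jet / L_x:.1f}")   # -> 2.5
```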
|
d
|
b0ef6f9f1f8d69f11dccab5bb6e7cb9c
|
In the FOCP-1, we use the bar notation to denote the predicted variables and to differentiate them from the actual variables, which do not have a bar. Specifically, {{formula:fc377e35-af37-402f-a341-ca3848444cbe}} is the predicted trajectory of the state {{formula:31aa0468-026d-463a-90fc-a7d3cd7a28bc}}, computed using the dynamic model (REF ) and the initial condition at the time {{formula:74885e2b-0a50-4070-bd6c-b634bad3aff8}}, driven by the input {{formula:e340d3e4-53c4-41f7-b725-ef47d3dd2065}} over the prediction horizon {{formula:b14bbd45-bb7d-4f0d-a433-6375e49805b8}}. The first cost term {{formula:0bd77539-ab6e-47f1-beeb-dbb228100c83}} is associated with the geometrical task, whereas the choice of {{formula:6b36fb96-c24c-4521-9f50-84868382abdf}} in the argument of the cost function to be minimized is motivated by the fact that once {{formula:4e98f23e-03db-4317-a496-1fd25c93ee24}} and {{formula:1c7bd4e7-d77c-4254-8b8a-51e87566e0c8}} {{formula:52dbc2cd-af70-4bf2-891e-6e4be3e8dd55}}, then {{formula:bf4683c5-7684-422d-ae70-49dbcac9ca3b}} as well. {{formula:46834626-4424-44a1-b593-7353fcf17aa9}} and {{formula:a6dd4858-2d97-4d16-b3a5-c4faca71d057}} represent the terminal constraints (the terminal set and the terminal cost, respectively), which should be designed appropriately to guarantee “recursive feasibility” (an MPC scheme is called recursively feasible if its associated finite optimal control problem is feasible for all {{formula:348c02a7-b559-4b9d-b364-fb315afb3bcd}} {{cite:756bbd2d70755df7e1c0573f88180c42e19100b8}}) and “stability” of the MPC scheme {{cite:acd73c980b236ccf51630cd27745a553deacccf3}}.
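To make the receding-horizon structure concrete, a minimal linear MPC sketch in cvxpy is given below; the model, weights, horizon, and the simple terminal cost and terminal set are illustrative assumptions and do not reproduce the paper's specific FOCP-1.

```python
import cvxpy as cp
import numpy as np

# Hypothetical double-integrator model; A, B, costs, and bounds are assumptions.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
n, m = 2, 1
N = 20                      # prediction horizon
Q = np.eye(n)               # stage state cost
R = 0.1 * np.eye(m)         # stage input cost
P = 10 * np.eye(n)          # terminal cost, stands in for the terminal ingredients

x0 = np.array([1.0, 0.0])

# Predicted ("bar") variables over the horizon
x = cp.Variable((n, N + 1))
u = cp.Variable((m, N))

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.norm(u[:, k], "inf") <= 1.0]
cost += cp.quad_form(x[:, N], P)                  # terminal cost
constraints += [cp.norm(x[:, N], "inf") <= 0.1]   # crude terminal set

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
u_applied = u.value[:, 0]   # receding horizon: apply only the first input
```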
|
m
|
dd629e07e7b189e483f206ebc1b939cc
|
Two questions arise. First, is it worth attempting to calibrate the missing
cells in the {{formula:af5fa576-bf93-490f-bb57-1432ed73c17c}} and {{formula:41183c0e-51e4-4e10-a5a9-3cd5673f3bfd}} bands with LUCI at the LBT? Probably not, since
their density is too low for the field of view available. Moreover,
G2020 found that most of them have low star formation
rates, making the detection of emission lines very difficult, and the
success rate (around 30%) of the present campaign is not particularly
encouraging. Second, is it worth attempting the calibration of the cells missing in the
redshift range 1.7 to 2 with ground-based facilities? Detecting the H{{formula:d4a34d18-e10d-468c-b4bd-2f86ac6d28dc}} , H{{formula:afb09aa9-ef2f-46de-ba34-64a2c2df38ad}} , and [Oiii] lines in the
{{formula:73a5c4a9-e6ae-4b50-a0c0-584508ebd3b8}} band up to redshift 2, 1.9, and 1.8, respectively, could be possible with an
increasing probability of success. Whether the LBT with the LUCI
spectrographs should be used for such a campaign depends on the
density on the sky of the objects needing spectra. However, the
difficult remaining sources at these redshifts are primarily passive
galaxies, and so spectroscopic searches for Balmer and [Oiii]
emission lines are unlikely to be efficient or successful. A costly
solution could be ground-based (optical) absorption-line
spectroscopy. A release of optical spectra gathered at the VLT and the
GRANTECAN telescopes is in preparation. In the end the (possibly
partial) natural solution will come from Euclid itself: its
spectroscopic program will deliver near-infrared spectra in the
relevant redshift range free of telluric absorption. The survey is
designed to detect emission lines down to {{formula:07700229-ae4c-4150-a7de-ccdc9f061869}} at {{formula:eaa294eb-144e-4997-984d-ea6b0b3fe419}} in the wide configuration, and
reach up to 2 mag fainter levels in the deep fields
{{cite:e73b5d519bf6658815192abf8605c5c35c8e2bfc}}, or 3 to {{formula:1e341e64-8876-4af7-a636-e524fc667748}} . Using the H{{formula:b9d6d766-bfbd-4dc2-b3c0-b99b1df40f95}} fluxes measured by G2020, we
estimate that 60 to 90% of the sources will be detected within
these limits in the deep fields. Therefore, the mission itself
will provide enough spectra to finish the photometric redshift
calibration in the near-infrared range to the required
precision. In particular, a question that needs to be clarified is
whether in a given SOM cell the galaxies for which we are able or
unable to measure a spectroscopic redshift might have different
redshift distributions.
|
d
|
c4b996053f0c8f62bee15f97408cf7bd
|
can solve convex optimization problems {{cite:d77f19f43a084941449ce20aac092ae652784e0a}} (see also {{cite:961d8094111646ac05c22fb0bb2b7419d34b3887}} for the convergence analyses of AdaBelief using (REF )). Motivated by the results in {{cite:d77f19f43a084941449ce20aac092ae652784e0a}}, {{cite:961d8094111646ac05c22fb0bb2b7419d34b3887}}, we decided to study Adam under Condition (REF ). Our results are summarized in Table REF (see Theorems REF , REF , REF , and REF for the details). While the previous results shown in Table REF used {{formula:30437bb3-65bd-46b4-8637-33f5831863b6}}, our results use {{formula:658d5eff-fa1e-458d-a071-1faa457a6ff8}} to make the upper bound of (REF ) small (see Theorems REF , REF , REF , and REF for the upper bounds of (REF )). Therefore, the results we present in this paper provide theoretical confirmation of the numerical evaluations {{cite:60bb69197bd987af721eaf1cef0ed59ccc28fe18}}, {{cite:d77f19f43a084941449ce20aac092ae652784e0a}}, {{cite:3dac1e0621c1c20ce765dadc624655ffe27e8b22}}, {{cite:e327fcd3bc0a8bf1aa18695dd9b228ac42943efb}}, {{cite:42181b50fc20f32ea0b5812a02caa68d9d914c22}}, {{cite:961d8094111646ac05c22fb0bb2b7419d34b3887}}, {{cite:b4a800d58de6e0eab93a9a85c350e76f05c1d711}} showing that Adam and its variants using {{formula:7b0bc754-a2df-443a-bea3-4c1ab1533d6b}} and {{formula:9beb9414-66ba-450c-9f12-b8cb394cff85}} close to 1 perform well.
{{table:c1b8644b-0c95-408d-b103-be620b8d4350}}
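For reference, the plain Adam update that these hyperparameter choices concern can be sketched as follows; the toy quadratic objective and the specific values beta1 = 0.9, beta2 = 0.999 are assumptions chosen only to illustrate the "beta2 close to 1" regime.

```python
import numpy as np

def adam(grad, x0, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Plain Adam; beta2 close to 1 matches the regime studied in the text."""
    x = x0.astype(float)
    m = np.zeros_like(x)   # first-moment estimate
    v = np.zeros_like(x)   # second-moment estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)     # bias correction
        v_hat = v / (1 - beta2**t)
        x -= alpha * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy strongly convex objective f(x) = ||x||^2 / 2, so grad(x) = x
x_star = adam(lambda x: x, np.array([5.0, -3.0]))
```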
|
r
|
21378b540cebe33dce293d49341be062
|
Spacecraft measurements have revealed that the solar-wind plasma is usually in a state of fully developed turbulence {{cite:c0f215655db107d81a0cabb06a36ed92d43204db}}, {{cite:e5539ea563b9ea39b277ab4a29e7f9e53068aef6}}, {{cite:efcfe3abe975eb2b3f2f242506e2cf3460269ea8}}. As the turbulent fluctuations expand away from the Sun, they decay and transfer energy to smaller scales. At scales comparable with the ion gyro-radius or the ion inertial length, the MHD approximation fails and kinetic processes involving field-particle interactions take place, along with important cross-scale coupling {{cite:1178c25eaf0aa0cde244ec502c10524dd9bab15e}}. Furthermore, since the plasma collisionality is quite low, the ion and electron velocity distribution functions can exhibit significant non-Maxwellian signatures.
Kinetic processes manifest through the excitation of plasma waves, temperature anisotropy, plasma heating, particle energization, entropy cascade, and so on {{cite:410ed43bb6902144fbb07c3c103a5b0b94a9fd2c}}. Recently, a phenomenological description in terms of a cascade in velocity space, due to highly structured particle distributions, has been proposed {{cite:6a46120da52aaaeab208687fc084c6c167bdb28b}}, {{cite:142def03bec51cd826e9d7bc71eca147ab3c11f9}}. Observations based on the high-resolution ion velocity distribution measurements by the Magnetospheric Multiscale (MMS) mission in the Earth's turbulent magnetosheath {{cite:142def03bec51cd826e9d7bc71eca147ab3c11f9}} have recently confirmed this description, opening new pathways to the understanding of plasma turbulence and dissipation in collisionless plasmas. This cascade may efficiently transfer energy towards smaller velocity scales, thus producing finer structures in the particle velocity distribution function. The presence of such structures may ultimately enable and enhance the effect of collisions in dissipating the energy {{cite:9f5df78e4eea95559ceac76ccf6f7ace5a65ebcb}}, {{cite:5ec9ea6848c5efce1a7fd533455fcb896174bf84}}, {{cite:67d5acbd59f0345e6958115b6ec595543a4aff12}}, {{cite:7f919952c20ba89628b2cb0d7f25e6e91ddc822d}}, {{cite:c6c0e4c76ff8ea75e18b3830682e7795ce574836}}, {{cite:bc01eb748c94365f3395bfd3333208ca351f3422}}. Numerical simulations represent an indispensable tool for understanding plasma nonlinear dynamics. Kinetic numerical simulations are crucial, indeed, when kinetic processes occur in turbulent collisionless plasmas {{cite:47dce1460d36af85d6c7831b3959b72261168ca2}}, {{cite:e182781034d0aca97f860d4f4ac9ea2a5cb7a9ca}}, {{cite:da741bd59db275b2641f6fa35ff4d96b372c3ebd}}, {{cite:9ab449856707c68fa8f161ff159af488377e3d60}}, {{cite:16cbe527c1732baead90fb74c3b53afa011764b4}}, {{cite:e478b4679ff9a58db6ca0f2e4fe594e79f76aacd}}, {{cite:b819ca2899311a42464bfac873e179d20e0b0b59}}, {{cite:9f90cd0ebac4e69433e9bf8b096ff1fc916724ed}}. The Eulerian hybrid Vlasov-Maxwell HVM code {{cite:4362e48edb77d50e7b3bfbbd072df2f9fdacbad2}}, {{cite:7daf1b2519fa2477b94c9de3a773c41dbd11145f}} can be used to study the dynamics of a turbulent plasma with typical solar-wind conditions, in order to describe the cross-scale coupling between the MHD turbulence and the small-scale kinetic processes {{cite:1394123c6f4265ae44862d95ebcbd5a2bbf0f457}}, {{cite:7df6dbf92a00f63101d457260e7a8d9b2e03c4f7}}, {{cite:617432aab9c29159a383c19f10f63165e926b4fc}}, {{cite:69ec7a308050450650c98fd11a821c18096752a5}}, {{cite:0376d1725bf6e5a4b1c9d321182e0ecd480920c4}}, {{cite:7daf1b2519fa2477b94c9de3a773c41dbd11145f}}, {{cite:ae0aef8380f038d6e293df037fabec5cb908f202}}, {{cite:601952a74ebfc495280d7eefde5d41811cab3334}}, {{cite:37d9879aa34dbf04885b83c1b5a3052d83c5921e}}, {{cite:2ebfae15bc88ac019682b0ad28c0775564f7c1c1}}, {{cite:d2c6db67aff224bca21ff72a1dbe29546055a9c5}}, {{cite:c4046278d08b07f85ac174b2ba044fce92ddad2a}}, {{cite:b5570bc940c271e552a993a7527c1458d9911d56}}, {{cite:da741bd59db275b2641f6fa35ff4d96b372c3ebd}}, {{cite:9bf2d9f0d39539bce7e8da2135e93cac8fc547cd}}, {{cite:cf2111451b0b59a2feb42c07c332f7aa28c13254}}, {{cite:993dc08183bd09f2c7236d263fdda610fb5529cb}}.
|
i
|
218f79d353ba5353bcb5e4bd245a1c9c
|
In today's world, repeated measurements taken over time or space are collected across a variety of areas of industry, government, and academia: for instance, health monitoring of patients, daily weather records, and stock prices, to name only a few. In such cases, the data of interest change continuously over time. One approach to modeling such data is functional data analysis {{cite:ffb43774a262734548a82491be79736fc6e9b651}}, {{cite:5b188ce63db30ec805ade93e1b7350c96d732ff3}}.
|
i
|
267455c41fde0b4165c6225cb4578439
|
First, we take a DUN and a non-DUN with similar performance and equip each with our sampling subnet (SampNet), so as to compare the performance gains that SampNet brings to each.
Concretely, we select U-Net {{cite:738e5a31aa92114319206669f25b577dc640cecf}} as an instance of non-DUNs, and choose our RecNet with 4 stages, named RecNet{{formula:2c74270c-8142-43f7-92fe-e32d0c13f5f3}}, to represent DUNs.
Table REF provides the PSNR results of U-Net with and without SampNet, as well as RecNet{{formula:5f0ca32d-bf68-4484-9670-b5f503f82a57}} with and without SampNet, at sampling ratios of 5% and 10% under 1D and 2D settings on the Brain dataset.
It can be observed that, without SampNet, U-Net and RecNet{{formula:0509dce2-d4bf-4ed4-9ed9-835d302a33cb}} achieve similar PSNR results on average. However, when equipped with SampNet, RecNet{{formula:d7dfce0a-c094-42c2-a37b-97c41aca9bc7}} achieves a remarkably higher PSNR gain than U-Net (1.46 dB vs. 1.03 dB on average).
This result confirms the superiority of DUNs over non-DUNs in fully exploiting the knowledge of the learned mask.
{{table:d3530819-4a74-4024-97e1-90ac3e145938}}
|
d
|
1584ad7a0adde1c63a5f2366a2fe8350
|
Apart from theoretical derivations, {{cite:6d4ddcb6394dafd8dad6edd80d98426f657c7f80}}, {{cite:0dc1a8e54c3d3c2fec344559fca2ca84f3d25973}} have discussed the "double descent" phenomenon, which arises as a function of both data complexity and neural network capacity. The classical U-shaped risk curve appears only in the under-parameterized regime, and the test risk improves again after crossing the interpolation threshold. Combining this with the monotonically decreasing training risk, we find that the generalization error peaks at the interpolation threshold and decreases as models move away from it.
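To make the phenomenon tangible, a toy reproduction is sketched below, assuming random ReLU features and a minimum-norm least-squares fit; the dimensions, noise level, and feature counts are arbitrary choices, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 5

X = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n_train)
y_test = X_test @ w_true

for p in [10, 50, 90, 100, 110, 200, 1000]:   # number of random features
    W = rng.normal(size=(d, p)) / np.sqrt(d)
    Phi = np.maximum(X @ W, 0)                # random ReLU features
    Phi_test = np.maximum(X_test @ W, 0)
    beta = np.linalg.pinv(Phi) @ y            # minimum-norm least-squares fit
    test_mse = np.mean((Phi_test @ beta - y_test) ** 2)
    print(p, round(test_mse, 3))              # error typically peaks near p = n_train
```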
|
i
|
c13f668fd6ed656b8429dbfa9bf4d628
|
In this regard a possible way out is to lower the seesaw scale (i.e., the values of {{formula:5d2e3e60-4118-4bb0-89fa-d8d042224284}}) by introducing more extra degrees of freedom; typical examples of this kind include the inverse (or double) seesaw scenario {{cite:7f8a0761835d52de96fa4aa1a34e4a9d1e3ac8fa}}, {{cite:f647c8f024176cb66857247e38366f30895e7a2b}} and the linear seesaw scenario {{cite:f647c8f024176cb66857247e38366f30895e7a2b}}, {{cite:56f8663b39de476af04caeceef4fa170408e418d}}, {{cite:627fcd4f09f8c128cc785c1b694456cd043b40f2}}, {{cite:7a87ab4790ae72798b701e791865046bc03d3bb8}} (a natural extension of the inverse seesaw scenario to the multiple seesaw scenario has been proposed in Ref. {{cite:2c674cd585d722074133eca16aa48eccf5bfb6c3}}). Besides the left-handed neutrino fields {{formula:41d2512e-915e-40ff-81ce-4a5cc4ab20c2}} and the right-handed neutrino fields {{formula:e2b1f97a-5202-4b97-886c-a87a15c11bb9}} (for {{formula:086b7cd2-5da9-4093-86ee-6e11b1db52b8}}), the inverse or linear seesaw mechanism requires the addition of three neutral SM gauge-singlet fermions {{formula:7d16b899-929e-4992-a079-a30300d3990a}} (for {{formula:641a9517-975d-4301-ac55-c8862e8cb521}}) and one scalar singlet {{formula:2db68386-96b7-411c-9454-38d645b32a97}}. With such new degrees of freedom, a generic gauge-invariant neutrino mass term can be written as follows:
{{formula:83e528d4-2b65-4c57-835e-a0ecdbdefeae}}
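For orientation, the schematic mass matrix generated by such a term in the inverse seesaw is recalled below, in the basis (nu_L, N_R^c, S); this is the textbook structure under the usual hierarchy mu << m_D << M_R, not the specific matrices of this work.

```latex
% Schematic inverse-seesaw mass matrix in the basis (\nu_L, N^c_R, S):
M_\nu \;=\;
\begin{pmatrix}
 0      & m_D^{} & 0      \\
 m_D^T  & 0      & M_R^{} \\
 0      & M_R^T  & \mu
\end{pmatrix},
\qquad
m_\nu \;\simeq\; m_D^{}\,\big(M_R^T\big)^{-1}\,\mu\,M_R^{-1}\,m_D^T ,
% so the light masses are suppressed both by (m_D/M_R)^2 and by the small scale \mu.
```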
|
i
|
ca40d8d3b033d255d0e9a4b4f4df37cb
|
To regularize the algorithm, one option is to use the 'shrinkage' estimator of the covariance matrix {{cite:678fef17e9c9d817346c634adc3c2fd965c20589}}. The coefficient {{formula:e2651068-065e-4b71-b2be-0f4ae1e83c59}} determines the degree to which we interpolate between the observed covariance matrix and a covariance matrix in which the interactions are independent:
{{formula:f3d4d6b5-0d01-48f4-8188-156881c6e00a}}
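As a sketch, the interpolation in the expression above can be implemented as follows; the coefficient name alpha and the diagonal independence target are stand-ins for the symbols in the formula.

```python
import numpy as np

def shrink_covariance(X, alpha):
    """Interpolate between the sample covariance and its diagonal
    (independent-interactions) target; alpha in [0, 1]."""
    S = np.cov(X, rowvar=False)    # observed covariance matrix
    F = np.diag(np.diag(S))        # target: interactions independent
    return (1.0 - alpha) * S + alpha * F

X = np.random.default_rng(0).normal(size=(50, 10))
S_shrunk = shrink_covariance(X, alpha=0.3)   # modest regularization
```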
|
m
|
40cdcdb165e9b6fa5c2aa95b1a3df645
|
Degradation modeling is also vitally important for deep learning-based SR approaches. Deep convolutional neural network (CNN)-based SISR approaches usually achieve state-of-the-art (SOTA) results on standard benchmarks. Nevertheless, their performance is limited when applied to real-world images. The main reason is that the kernel (e.g., the “bicubic” kernel) used to generate training data is significantly different from the blur in a real scenario. To address this problem, some recently presented deep learning-based RSISR methods {{cite:7f0084741ad87bd291b271d355be8d46f6f66484}}, {{cite:230293d86235cbe9b7288b6eb8f2145e096e1e75}}, {{cite:b27f26e6bb64458c1e3f864aa9321e346db68d18}}, {{cite:60422994fa963913e89dbe950a1c7a71fc60a500}}, {{cite:7da9e534d129f16945e6ebe167c06c8370f58331}} adopt pre-estimated degradation parameters to generate samples for model training. For example, inspired by {{cite:01e2c5698e0b1fcc33404d1c11d800dc1391ca72}}, Bell-Kligler et al. {{cite:7f0084741ad87bd291b271d355be8d46f6f66484}} develop an image-specific internal GAN (i.e., KernelGAN) to learn the internal distribution of patches (the source code is available at https://github.com/sefibk/KernelGAN). KernelGAN {{cite:7f0084741ad87bd291b271d355be8d46f6f66484}} is trained solely on the LR test image, making its discriminator unable to differentiate the patch distribution of the original LR input from that of the degraded version of the LR image produced by the generator. After joint training with the discriminator, the generator can characterize the degradation process with an image-specific kernel. The LR test image and its degraded version generated by the generator then form paired data for SR model training. Bulat et al. {{cite:230293d86235cbe9b7288b6eb8f2145e096e1e75}} train a generative adversarial network (GAN)-based degradation model from unpaired HR and LR face images, and then use the learned network to generate image pairs for SR network training (the source code of FISR {{cite:230293d86235cbe9b7288b6eb8f2145e096e1e75}} is available at https://github.com/jingyang2017/Face-and-Image-super-resolution). Zhou et al. {{cite:b27f26e6bb64458c1e3f864aa9321e346db68d18}} propose to obtain a group of realistic blur kernels from real-world photographs, on which a GAN is trained to augment the pool of realistic blur kernels (the source code of KMSR {{cite:b27f26e6bb64458c1e3f864aa9321e346db68d18}} is available at https://github.com/IVRL/Kernel-Modeling-Super-Resolution). With the augmented kernel pool, more realistic and diverse LR-HR image pairs can be constructed to train the SR model. Analogously, Xiao et al. {{cite:60422994fa963913e89dbe950a1c7a71fc60a500}} model spatially variant degradation by learning a set of basis blur kernels and corresponding pixel-wise weights from real-world image pairs. The learned realistic degradation model is then used to generate pseudo-realistic LR-HR image pairs. More recently, Ji et al. {{cite:7da9e534d129f16945e6ebe167c06c8370f58331}} take this idea one step further and develop an effective degradation framework using various realistic blur kernels and noise distributions, winning the NTIRE 2020 Challenge on Real-World Image Super-Resolution {{cite:8736c6dabf8d25b9c0c65442a33372f4114ea4a3}} (the source code of RSRKN {{cite:7da9e534d129f16945e6ebe167c06c8370f58331}} is available at https://github.com/jixiaozhong/RealSR). The outstanding performance of degradation modeling-based RSISR methods demonstrates that degradation modeling is meaningful and that this kind of approach is a feasible solution to the SR of real-world images.
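The generic pipeline these methods build on, blurring with an estimated kernel, downsampling, and adding noise, can be sketched as follows; the isotropic Gaussian kernel, scale factor, and noise level are placeholders for the learned, image-specific quantities.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=13, sigma=2.0):
    """Isotropic Gaussian; stands in for a learned, image-specific kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(hr, kernel, scale=4, noise_sigma=0.01, rng=None):
    """LR = downsample(blur(HR)) + noise -- the generic degradation pipeline."""
    rng = rng or np.random.default_rng(0)
    blurred = convolve(hr, kernel, mode="reflect")
    lr = blurred[::scale, ::scale]                 # simple direct downsampling
    return lr + noise_sigma * rng.normal(size=lr.shape)

hr = np.random.default_rng(1).random((128, 128))   # placeholder HR image
lr = degrade(hr, gaussian_kernel())
```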
{{figure:64863b3c-3393-48a9-a723-395e47bcdd92}}
|
m
|
a0a1e38a8061d8b635ae49f563d1d290
|