Let us begin our discussion with the variation of the conductance {{formula:cac2a103-47a1-4351-b6f1-4428526afe90}} as a function of the injecting electron energy {{formula:b87a734b-e219-4fa2-826f-6f6e60b9b265}}. As representative examples, in Fig. REF , we plot the {{formula:5a2f066e-0fc4-4849-b232-dae0407b73f9}}-{{formula:30c74433-f110-4f1c-97c0-e8eaa39a55fd}} characteristics for the molecular wires in which the molecules are attached to the electrodes in the trans configuration. Figures REF (a), (b), (c) and (d) correspond to the results for the wires with benzene, naphthalene, anthracene and tetracene molecules, respectively. The solid and dotted curves represent the results in the weak and strong molecular coupling limits, respectively. It is observed that, in the limit of weak molecular coupling, the conductance shows very sharp resonance peaks at some particular energy values, while for almost all other energies it ({{formula:0350de40-adc4-4d7a-8cc1-275429355cc0}}) drops to zero. At these resonances, the conductance approaches the value 2, and therefore the transmission probability {{formula:e8e00434-b81a-4091-9aba-edb0be62b709}} goes to unity, since we have the relation {{formula:1de8f5e1-1eab-473b-aedd-8b8cc0c1a35b}} from the Landauer conductance formula (see Eq. (REF ) with {{formula:b75c48b1-0047-4e3d-a789-e65ebe6a1e44}} in the present description). These resonance peaks are associated with the energy eigenvalues of the individual hydrocarbon molecules, and therefore we can say that the conductance spectrum reflects the electronic structure of the molecules. In the strong molecule-to-electrode coupling limit, all the resonances acquire substantial widths, which indicates that electron conduction takes place over almost the entire energy range. This enhancement of the resonance widths is due to the broadening of the molecular energy levels in the limit of strong molecular coupling, where the contribution comes from the imaginary parts of the self-energies {{formula:aae661a9-1c27-4a52-a401-4052b9fccd8f}} and {{formula:6051c0f6-ffb2-4693-88c7-6fe32e3e7243}} {{cite:afc71d70bce0e121858f619ceec8b38c27c80f1f}}, as mentioned in the previous section.
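To make the role of the coupling-induced broadening concrete, the following is a minimal numerical sketch (ours, not the authors' code) of the Landauer transmission for a toy tight-binding chain with wide-band self-energies on the terminal sites; the chain length, hopping t, and coupling strengths gamma are hypothetical parameters. Varying gamma from weak to strong reproduces the sharpening/broadening of the resonances described above.

```python
import numpy as np

# Toy tight-binding "molecule": a 4-site chain with on-site energy 0 and hopping t.
t = 1.0
H = -t * (np.eye(4, k=1) + np.eye(4, k=-1))

def transmission(E, gamma):
    """Landauer transmission T(E) = Tr[Gamma_S G Gamma_D G^dagger],
    with wide-band self-energies -i*gamma/2 on the terminal sites."""
    sigma_S = np.zeros((4, 4), dtype=complex); sigma_S[0, 0] = -0.5j * gamma
    sigma_D = np.zeros((4, 4), dtype=complex); sigma_D[-1, -1] = -0.5j * gamma
    G = np.linalg.inv(E * np.eye(4) - H - sigma_S - sigma_D)   # retarded Green's function
    Gamma_S = 1j * (sigma_S - sigma_S.conj().T)
    Gamma_D = 1j * (sigma_D - sigma_D.conj().T)
    return np.trace(Gamma_S @ G @ Gamma_D @ G.conj().T).real

for gamma in (0.05, 1.0):   # weak vs. strong molecule-electrode coupling
    T = [transmission(E, gamma) for E in np.linspace(-3, 3, 7)]
    print(f"gamma={gamma}: T(E) samples:", np.round(T, 3))
```

With small gamma, T(E) is essentially zero except at the chain eigenvalues; with large gamma, the peaks broaden and overlap, mirroring the conductance spectra described in the text (g = 2T in the units used there).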
For convenience, we calculate {{formula:79ffbe23-a489-4816-bedf-33eb888a6d90}} in step 1 with the Vienna ab initio simulation package (VASP) {{cite:583fe7f4905e3a296583bb6f652a3a55bb943921}}, a plane-wave code, using PBE-based projector-augmented waves (PAWs) to treat core electrons {{cite:e79e2d905e49ac08866882cafaca64abf4af5af1}}. We use an in-house modified version of the Quantum Espresso {{cite:46b04b5861e9063bfde70ecf7e14360509155050}} plane-wave code to carry out steps 2-4. We use optimized norm-conserving (NC) Vanderbilt pseudopotentials {{cite:78cd64bfa58a04217f3aa05da7b143263749d44c}} obtained from the online repository pseudo-dojo {{cite:2c1d5e72549bce3558dff985a88b3204f693c806}} (see SI for complete computational details {{cite:28d09c8a24be1edf5490729f8e1efa32fc651f1b}}). Methods that include exact exchange are known to be sensitive to the number of semicore states {{cite:a6fcd1596de7d7db2aff6b300842c34880862654}}. We find that for Ge, Ga, In, As and Sb it is important to include one complete shell of semicore states as valence electrons. Maximally localized Wannier functions are generated using the Wannier90 software package {{cite:e4d8de53433947dc19c92cd04bc0d9148f00b0ab}}.
Certainly, the nature of attention as a distribution over tokens lends itself to a straightforward interpretation of a model's inner workings. {{cite:c2fedd6455189b72da3ee157236e57a80fadb8af}} illustrate this nicely in the context of seq2seq machine translation, showing that the attention learned by their models reflects expected cross-lingual idiosyncrasies between English and French, e.g., concerning word order. With self-attentive Transformers, interpretation becomes slightly more difficult, as attention is distributed across words within the input itself. This is further compounded by the use of multiple layers and heads, each combination of which yields its own alignment, representing a different (possibly redundant) view of the data. Given the similarity of such attention matrices to the score matrices employed in arc-factored dependency parsing {{cite:8cd4e9832b8cb9ccf5ddd053ca10d215b1db4f50}}, {{cite:e617818d2bd8823b2ebee30da238cc5bb7cb7ab1}}, a salient question concerning interpretability becomes: Can we expect some combination of these parameters to capture linguistic structure in the form of a dependency tree, especially if the model performs well on NLP tasks? If not, can we relax the expectation and examine the extent to which subcomponents of the linguistic structure, such as subject-verb relations, are represented? This prospect was first posed by {{cite:d097561625ad9671927de64c0b3e90aeff6ab811}} for MT encoders, and later explored by {{cite:603d2216b9fba1aacfa8021c25543a4d749ec2d3}} for BERT. Ultimately, the consensus of these and other studies {{cite:4a58b25899002ee200f84039524ae3d871065cc6}}, {{cite:7c1f15219029afbc7d293bb1835d72307dae37db}}, {{cite:b2815e437a7e63647f6bb1bea5d9d5f08ac5b1c2}} was that, while there appears to exist no “generalist” head responsible for extracting full dependency structures, standalone heads often specialize in capturing individual grammatical relations.
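As a concrete illustration of this line of probing, here is a hedged sketch using the HuggingFace Transformers API; the layer/head indices are arbitrary placeholders, and the comparison of the predicted "heads" against gold dependency arcs is left out:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "The keys to the cabinet are on the table"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, each (batch, heads, seq, seq).
layer, head = 7, 10                                # hypothetical head to inspect
attn = outputs.attentions[layer][0, head]          # (seq, seq) distribution over tokens
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
pred = attn.argmax(dim=-1).tolist()                # most-attended token per position
for i, tok in enumerate(tokens):
    print(f"{tok:>10} -> {tokens[pred[i]]}")
```

Treating each token's argmax attention target as its syntactic head and scoring against a treebank is the usual way the "generalist head" question has been operationalized.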
In this work, we present CAT, contrastive adversarial training for text classification. We build upon {{cite:867ea7f2cd0a03d7073349336b55ab6f67091a3d}} to regularize the fine-tuning of Transformer-based {{cite:3ce9323f73c13dfb273dd477ffde2b0ccd8d33ca}} encoders on text classification tasks. Additionally, we encourage the model to learn noise-invariant representations by introducing a contrastive objective {{cite:c663428b6dd59486e31145d5a05a4302e09f1dbd}} that pushes clean examples and their corresponding perturbed examples close to each other in the representation space, while pushing apart examples not from the same pair. We evaluate our method on a range of natural language understanding tasks including the standard GLUE {{cite:36292c44da8b331816d51b592197cd7d0fcc2bed}} benchmark as well as three intent classification tasks for dialog systems. On GLUE tasks, we compare our fine-tuning method against strong baselines of fine-tuning BERT{{formula:1a40cada-1d0f-4034-9870-9c1e3f39b02a}} {{cite:2543ee3c2ddbf911ff7a637b421ead474183f36a}} and RoBERTa{{formula:138dabf8-5121-43eb-81a2-0e2c9432100c}} {{cite:f977be0e72d6a0994898d780c982cb6d18ccd9d6}} on clean examples with the cross-entropy loss. Our method outperforms BERT{{formula:c739eea4-54e5-4ef3-88a5-9a587fc3baac}} by {{formula:e3a0bbf0-ceaf-47b9-b31d-7bf223e7f65d}} on average and RoBERTa{{formula:f0dcc7d5-e6b5-4bef-974d-e774667dc0fb}} by {{formula:3a650832-8323-46cc-8c67-b1d0a4b04a3b}}. On intent classification tasks, our fine-tuned RoBERTa{{formula:870d2426-f398-497a-9fd3-8301d2b9ed2e}} outperforms the RoBERTa{{formula:a2abba68-8533-4001-b6d8-f175a7fd0ca2}} baseline by {{formula:5c945a2d-6baa-4369-aed8-abadbb982c8e}} on the full test sets and {{formula:c61ae333-19af-47b6-916f-2af15c363197}} on the difficult test sets. We further perform sample efficiency tests, where we use only half of the training data (per intent) and achieve near-identical accuracy compared to the baseline trained using the full training data.
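A sketch of the kind of pairwise contrastive objective described, in NT-Xent style; the temperature and the dimensions below are placeholder values, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(clean_repr, adv_repr, temperature=0.1):
    """Pull each clean example toward its perturbed counterpart and push
    apart all other pairs in the batch (NT-Xent-style objective)."""
    z = torch.cat([clean_repr, adv_repr], dim=0)      # (2N, d)
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature                     # (2N, 2N) cosine similarities
    n = clean_repr.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))             # exclude self-similarity
    # Positives: i-th clean example <-> i-th perturbed example.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```

In practice this term would be added to the clean and adversarial cross-entropy losses during fine-tuning.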
Finally, it can be shown that typed processes are strongly normalising, which is not so surprising, since we closely followed the logical principles of Affine Logic. This can be shown by a small adaptation of the standard method {{cite:0d843084cc5be0de6e972237b3010ef32134fad6}}: first by giving an interpretation of types based on biorthogonals, then by strengthening the induction hypothesis using a notion of reducibility (a contextual test for normalisation), and finally by making use of Theorem  to obtain strong normalisation from weak normalisation.
The divergence of the mean and variance of {{formula:8d9b283c-2795-4506-a76c-2e9ff283ac88}} in certain parameter regimes makes the power-law resetting protocol drastically different than the constant rate resetting, which corresponds to an exponential waiting-time distribution, with a finite mean and variance. Even a single diffusing particle under power-law resetting shows a spectrum of rich long-time behaviour including a non-diffusive spreading for {{formula:9433ca06-bbf1-43de-a9e1-7a779a195c5c}} and a nontrivial stationary state for {{formula:e0baa689-5c8b-4407-b2ee-3cf9ae63a33d}}  {{cite:f4164c6896d082e8c4ee619aa51456139a76ffa6}}.
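A minimal simulation sketch of this setup (ours; it assumes a Pareto-type waiting-time density ψ(t) ∝ α t^{-(1+α)} on t ≥ 1, so the mean of the reset intervals diverges for α ≤ 1 and the variance for α ≤ 2):

```python
import numpy as np

rng = np.random.default_rng(0)

def diffuse_with_powerlaw_resetting(alpha, t_max=50.0, dt=0.01, D=0.5):
    """Brownian particle reset to the origin at power-law distributed epochs.
    Waiting times are drawn from a Pareto density psi(t) ~ alpha/t^(1+alpha), t >= 1."""
    x, t = 0.0, 0.0
    next_reset = rng.pareto(alpha) + 1.0
    while t < t_max:
        if t >= next_reset:
            x = 0.0                                    # resetting event
            next_reset = t + rng.pareto(alpha) + 1.0   # draw next waiting time
        x += np.sqrt(2 * D * dt) * rng.normal()
        t += dt
    return x

# Small alpha: heavy-tailed waiting times, non-diffusive spreading;
# larger alpha: frequent resets and a much narrower distribution.
for alpha in (0.5, 1.5, 2.5):
    samples = [diffuse_with_powerlaw_resetting(alpha) for _ in range(100)]
    print(f"alpha={alpha}: std of x(t_max) ~ {np.std(samples):.2f}")
```

Histogramming the samples at several times would expose the regimes discussed above (anomalous spreading versus a nontrivial stationary state).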
In this paper, we have studied how optical isolation can be accomplished in three different models of topological photonics. A few basic ingredients are common to all three models. Firstly, the model must be nonlinear, so as to break optical reciprocity {{cite:4a128f0fdb9ed4bc19635a9436f9b8d6312caec2}}. Secondly, the structure must contain an asymmetry (e.g., asymmetric input/output couplings) that distinguishes between “forward” and “backward” directions. The third ingredient is the use of a topological phase transition to associate forward transmission and backward transmission with different topological phases, whose physical properties are qualitatively different from each other. This last design principle is reminiscent of recently-proposed optical isolation schemes based on parity/time-reversal (PT) symmetric structures, which rely on a non-Hermitian transition (between “PT symmetric” and “PT broken” phases) rather than a topological phase transition {{cite:2ccb54e5add28817183eea6961a2eb10ba9a857e}}, {{cite:17d0e8b3781c44aeb1549b9f547a8b569b0132a9}}, {{cite:613a33cf3f059245a53c50599afc29ab41c31816}}, {{cite:d4744eb6091a4014a22c89edf3182d6026ad7c5b}}.
Beyond the case of a complex inflaton and Q-balls, an additional motivation for this study of Q-ball solutions is the possible insights it may give into the case of a real inflaton and oscillons. A non-minimally coupled Palatini model of this type could also inflate successfully for the case of a real inflaton, and neutral oscillons could be formed from the fragmentation of such a condensate. There have been numerous works which explore the formation and existence of oscillons and related objects from condensate fragmentation {{cite:c6292c922645ae46c9f12784e7a878ce65f191eb}}, {{cite:2a06e81c166abfe201d2aaa8ecd65a2a49aafa5e}} - {{cite:2d7163b3431305dd85d777889d252e8f4229e2bc}}; the evolution of a subsequent period of oscillon domination {{cite:fb3445a55faaeabd06cf2f53b52e6fa8e89680d9}}; and the observational signatures these objects could leave {{cite:b6239b405090378eff34939a9d782762b33f2cba}}, {{cite:d5916f2ebec84e97f65ae3d5078eaafcbe30f574}}. As such, this is a many-faceted and active area of research in cosmology. However, unlike the case of Q-balls, there are no analytical solutions for oscillons. Given the similarity of the underlying physics of real and complex scalars, it is possible that the Q-ball properties will be similar to those of the corresponding oscillons and that a window for oscillon formation will exist, analogous to the Q-ball window.
The Berkeley Deep Drive 100K (BDD) {{cite:e5c5d2290c9d10aaeb8592d43d6c7446e3ced0bb}} road scene dataset, with {{formula:eaf81687-2d00-42ca-af42-1e555bbd4a36}} frames used according to the official {{formula:c943aa2d-ae31-45a4-8043-eb9fdee04d85}} training/validation split. Models trained on BDD are also tested on {{formula:62633397-270a-493b-a800-cfc5fd6457fa}} frames of KITTI {{cite:baefdada6d756ff69cf63fc051ad1cc855f31499}}. Both datasets contain 7 common road scene object categories. The MS COCO {{cite:28d972664ce22a5b61f946c24de6164401c120ee}} dataset, with {{formula:681266da-ddad-4c9f-9692-83d4b303c7d0}} frames that contain instances from 81 different object categories, and an official {{formula:329de4a1-e390-4fcc-a79e-d91f35b01de1}} training/testing split. Models trained on COCO are also tested on {{formula:57ba5978-abd3-4838-8077-1f3b27f37bac}} frames from Pascal VOC {{cite:b003c0afdb5a5b6a79843320750539ebd93219b7}}, which shares 20 object categories with COCO.
The basic building blocks of SF-QED are the Compton effect (photon emission by an electron) and the Breit-Wheeler effect (photon decay into an electron-positron pair) in strong EM fields {{cite:409bdc20ee5ba5fbdbac3d9aa76fabd7d8b60902}}. It is most convenient to characterize these interactions in terms of Lorentz invariant parameters: {{formula:fb94971b-b79e-4a44-bd64-3e6b08d70168}}
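For reference, two commonly used invariants are the classical and quantum nonlinearity parameters; the normalizations below follow the standard textbook conventions (our paraphrase, not necessarily the exact parameters intended above):

```latex
a_0 = \frac{e E_0}{m_e c \,\omega},
\qquad
\chi_e = \frac{e \hbar}{m_e^3 c^4}\,
         \sqrt{\left| F_{\mu\nu}\, p^{\nu} \right|^2},
```

where a_0 is the normalized field amplitude and χ_e controls the strength of quantum effects in photon emission; the analogous χ_γ governing Breit-Wheeler pair creation is obtained by replacing p^ν with the photon four-momentum ħk^ν.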
Park et al. 2020 {{cite:9a26aecc23751f9f770733d66831881bb332aa96}} Conv TTFS 68.79 680 0.084 0.04 29.94 0.99 0.79
Note that the state-of-the-art results on Kinetics-600 are about {{formula:c28f7de1-827a-4565-93c9-8b07f61a3b9b}} {{cite:904fe48aac302bc8979fcf240e130cda35b146f8}}, {{cite:da69ef741f8402dc0e77327f4cc8dd05d754b0e4}}. As mentioned above, these results are generally obtained using various mechanisms that are orthogonal to our work. Here, we use standard visual backbones and focus on an alternative to BP training that satisfies real-time requirements and is more scalable.
Our analysis is based on fifteen epidemic datasets (influenza, dengue, malaria, and hepatitis B) collected from publicly available sources. The dengue datasets have been used multiple times in various studies aimed at formulating better epicasting techniques {{cite:ed1a2ff65bc877edb1de854c6b61996e63041baa}}, {{cite:82829da95602e21ce5883acd0a886692d2e49643}}, {{cite:82a01a67d495f2ed3ecd2ec347927c9d939cddfe}}, {{cite:846aab8ce142238ff4a2db9cc583458f461146fe}}. Our chosen datasets are diverse in nature, representing several diseases from distinct locations, with varied length, frequency, and statistical characteristics, which helps our findings generalize. However, further investigation on other infectious disease datasets is essential in future work. We did not consider Covid-19 datasets in our study due to their dubious nature: forecasting Covid-19 largely failed owing to lack of transparency, errors, and lack of determinacy {{cite:367e0ce52ca865e1e2c03804b4b4849d13d5c9d4}}. In our study, RMSE, MASE, MAE, and sMAPE are considered as the key performance indicators {{cite:208bb7b3494c042b68d67ef976115317dfcff22f}}, {{cite:4d8318b8b17d1a9a592276e940be082e9ff2fd93}}, {{cite:0d590e86f9b9fba6870721c8d73d0d872db778c1}}. Different accuracy measures are available in the time series forecasting literature, and the choice of metric may influence the assessment of the forecasters' performance. Although we considered absolute, percentage, and scaled error measures for computing the epicasters' performance, several other measures could be considered for studying the effectiveness of different models. The proposed EWNet performed best on average compared with 16 statistical, machine learning, and deep learning models. However, epidemic outbreaks sometimes vary with climatic, social, environmental, biological, and human factors. In this study, we have only studied the past observations of the epidemic datasets and extrapolated forecasts based on this past dependency to provide valuable insights into the disease dynamics.
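For concreteness, minimal NumPy implementations of the four reported measures (standard definitions; the seasonal lag m used to scale MASE is an assumption that depends on the series frequency):

```python
import numpy as np

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def smape(y, yhat):
    # Symmetric mean absolute percentage error, in percent.
    return 100 * np.mean(2 * np.abs(y - yhat) / (np.abs(y) + np.abs(yhat)))

def mase(y, yhat, y_train, m=1):
    # Scale by the in-sample MAE of the seasonal naive forecast (season length m).
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y - yhat)) / scale

y_train = np.array([10.0, 12, 11, 13, 12, 14])
y, yhat = np.array([13.0, 15, 14]), np.array([12.0, 14, 15])
print(rmse(y, yhat), mae(y, yhat), smape(y, yhat), mase(y, yhat, y_train))
```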
Table REF shows a quantitative comparison of these methods on the same testing set. It can be observed that 3D U-Net {{cite:599167759b7b67abe8e96c9b3236e3c9d31ee961}} had the worst performance on average, and the results of U-Net cSE, U-Net sSE and U-Net scSE demonstrate that using attention networks helped to improve the segmentation accuracy. FocusNet {{cite:a22c78bf72a1305b5da6a151f691dcbd2e99c459}} and 3D SepNet {{cite:ddb96b45e379fad077578d971c9c92ba50285760}} performed better than the other existing methods, but they are inferior to our method. It should be noted that “Ours ({{formula:004cefd3-f684-4053-ae89-73fcda05a672}} )” already outperformed existing methods when trained with the same loss function, and using our {{formula:4367805a-4d41-4774-86e7-cf5f6d4343be}} further improved the performance, achieving an average Dice and ASSD of 86.7{{formula:d5e6795e-a38d-4fee-800e-749ced404657}} and 0.476 mm, respectively. We also performed t-tests between our proposed method and the state-of-the-art methods. For the average Dice, the corresponding p-values were 0.0007 (3D U-Net {{cite:599167759b7b67abe8e96c9b3236e3c9d31ee961}}), 0.0003 (3D Res U-Net {{cite:1d14128e41de91639ab6e2d2e0a8cccd866b9faf}}), 0.0106 (U-Net cSE {{cite:ca6a91fa238186441881597b18314bca36a822f1}}), 0.0285 (U-Net sSE {{cite:d6fa85aa65623f6addf860ff294c356d20fb0a25}}), 0.0144 (U-Net scSE {{cite:814f011f8ce63c0741b6a70c9f7b33adb032c51f}}), 0.0092 (nnU-Net {{cite:870d7ecdbb9229499415f4065e51c771d9a7aeb1}}), 0.0003 (FocusNet {{cite:a22c78bf72a1305b5da6a151f691dcbd2e99c459}}) and 0.0021 (3D SepNet {{cite:ddb96b45e379fad077578d971c9c92ba50285760}}), respectively. All the p-values were less than 0.05, indicating that our proposed method achieves a significant improvement, as shown in Table REF . Fig. REF shows a visual comparison of these methods. It can be observed that our method deals with thin structures like the falx cerebri and the brain sinuses better than the others, as highlighted by yellow and red arrows.
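The significance test described can be reproduced with SciPy's paired t-test; the per-case Dice scores below are purely illustrative:

```python
import numpy as np
from scipy import stats

# Hypothetical per-case Dice scores on the same test set (paired samples).
dice_ours     = np.array([0.88, 0.86, 0.87, 0.85, 0.89, 0.86])
dice_baseline = np.array([0.85, 0.84, 0.86, 0.82, 0.87, 0.84])

t_stat, p_value = stats.ttest_rel(dice_ours, dice_baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p < 0.05 -> significant improvement
```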
It is helpful to point out that, in this paper, we think about JT gravity and its {{formula:b7df5088-c447-48ff-a9df-9edf317267a9}} deformation from the point of view of the matrix model dual rather than the boundary dual theory. This is because, as shown in {{cite:2bf07c50bdecc6b0fe5cc67b3dbd0d4b9c28bc27}}, higher topology contributions in JT gravity are captured entirely by a double-scaled matrix model. The partition function of JT gravity at any genus and with any number of boundaries can be computed from the correlators of its dual matrix model. From the point of view of the boundary dual theory, we only know that JT gravity on a disk is dual to Schwarzian theory on the boundary. Moreover, the result of {{cite:2bf07c50bdecc6b0fe5cc67b3dbd0d4b9c28bc27}} showed that the connected {{formula:6b6350e1-cb5f-4a85-b4ee-d9d6c9998229}}-point function in JT gravity – namely the partition function on a connected surface with {{formula:60f0befd-d3f3-47a1-84ef-56210a0db50a}} boundaries – does not factorize. Therefore, in the case of multiple boundaries, it remains an open question how one should interpret the boundary dual. Since the replica trick is essential for computing the quenched free energy and requires knowledge of {{formula:601124e0-1974-4431-b166-d033ef2ff234}}-point correlators, we will work mostly from the point of view of the matrix model and its correlators rather than directly from the boundary dual theory.
A drawback of our study resides in the small architecture used on medical imaging scans. Extending our “measure preserving DistGP” module to larger architectures, such as U-Net for segmentation or modern CNNs for whole-image prediction tasks, remains a prospective research avenue fuelled by advances in the scalability of SGP. Moreover, our experiments involving more complicated architectures, such as ResNet or DenseNet for standard multi-class classification, have not managed to surpass in accuracy a far less complex model with only 3 hidden layers. A plausible reason behind this under-fitting resides in the factorized approximate posterior formulation, which was shown to negatively affect predictive performance compared to MCMC inference schemes {{cite:f29d44200848f9ad54cceccd9e6b25e591322615}}. We posit that using alternative inference frameworks {{cite:e2f0357f85abb1ced8fb3e4f231e2a81d9d3026f}} whereby we impose correlations between layers might alleviate this issue. Moreover, the lack of added representational capacity upon adding new layers raises further questions regarding the optimal architectures for hierarchical GPs, the inductive biases they need, and how to properly initialize them to facilitate adequate training. Additionally, our comparison with reconstruction-based approaches to OOD detection was not complete, as it did not include a comprehensive list of recent models {{cite:4914806969f51551a887f822aefc660a8f17fc42}}, {{cite:965286867e0f11a02e6342dcb1354f15d44d1561}}, {{cite:9c591aad94df7639de3a4d66c2d5c06372a574b5}}, {{cite:67d937d9403996e4df7ada6ecc5895613071a6bb}}. However, comparing our proposed model with reconstruction-based approaches was not our intended goal for this paper, the main aim being to compare with models which can provide accurate predictive results alongside OOD detection capabilities at the same time. Another limitation of our work is the training speed of our proposed module, with matrix inversion operations and log-determinants being required at each layer. Future work should consider matrix-inversion-free inference techniques for GPs {{cite:e7c25ae300f17146846f7b8913970caaca8b5957}}.
Truncating the summation in (REF ) based on this approximation reduces the maximal time evolution parameter (i.e., the maximal value of the parameter {{formula:18950f26-5464-41f5-a682-971b4b0b3424}} in the {{formula:ec88f232-29d2-41d6-9f34-b94ad178037a}} terms) quadratically. To make this approximation precise, we use Chernoff's inequality {{cite:05b015acfb2bcd194318d8cc312d7af935da9ad6}} for the binomial distribution, or more precisely its corollary for sums of binomial coefficients, stating {{formula:3e021bbe-58dc-46aa-ac24-7401f5aaf0d2}}
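Concretely, one standard form of this corollary (our phrasing, obtained from Hoeffding's inequality, for any δ ∈ (0, 1/2)) is

```latex
\sum_{k=0}^{\lfloor (1/2-\delta) n \rfloor} \binom{n}{k}
\;\le\; 2^{n}\, e^{-2\delta^{2} n},
```

which is the 2^n-scaled tail bound P(Bin(n, 1/2) ≤ (1/2 - δ)n) ≤ e^{-2δ²n}; truncating the sum at (1/2 - δ)n therefore incurs only an exponentially small error.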
It is straightforward to apply the FDM to a geophysical flow model on rectangular grids. These were the first methods used {{cite:fb3f6152939941157626453ecdfa5f8433faae5f}}, {{cite:097d364647355f4138e51d94b4922773d57acce2}} to simulate geophysical flows. In particular, the Arakawa grids were introduced by Arakawa and Lamb {{cite:844c45f7cac217c0d67d7aea8e8897c31d4ac088}} to conserve energy and enstrophy at the grid level by effectively locating state variables across the mesh (i.e., a staggered-grid representation instead of nodal or cell-centered). See, e.g., {{cite:8bf346040ec74e4fece9866038377823ef563f2c}}, {{cite:7429932dde10008b217810bdda14da2cf1e1bad7}} for detailed discussions. Among this class of grids, the C-grid places scalar quantities at the cell centers, while specifying the normal velocity components at the cell edges (which is essentially the classic MAC scheme {{cite:fe9b5645fab18c3c6fa5717bcfd9b587c9ab21f6}}). Because of its excellent representation of inertial-gravity waves, it has been widely used in geophysical flow simulations, for instance for solving the QGE in {{cite:d750882c471c521d86d0192b3e278897d5710d1f}}, and is the standard solver in the Modular Ocean Model version 6 {{cite:0d995a620ba5ce8f1f90ab1462a0207f965cb6cf}}. Staggered-grid approximations like the C-grid can be thought of as either finite difference or finite volume schemes, since the various velocity fluxes are explicitly solved for at cell faces rather than being reconstructed first from cell-centered values; this is the fundamental property that gives, e.g., the MAC scheme exactly zero divergence at cell centers (when calculated with standard second-order difference operators). Such schemes can also be extended to work with various turbulence modelling strategies {{cite:d750882c471c521d86d0192b3e278897d5710d1f}}, {{cite:33eb5be7be83dafd684707d9a24dc07c8225385d}}, {{cite:b0bc0bcc561117e9c8d51f4fd9ce5f6e2ee6c163}}, {{cite:2bef7d3ba71d9ccbc62bd76275cfb81c95501ea2}}.
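The "exactly zero divergence at cell centers" property is easy to verify numerically; below is a small NumPy sketch (ours, not from the cited codes) of the C-grid/MAC layout, with u on vertical faces, v on horizontal faces, and a discretely divergence-free field built from a streamfunction on cell corners:

```python
import numpy as np

def cgrid_divergence(u, v, dx, dy):
    """Divergence at cell centers on a C-grid (MAC layout).
    u: (ny, nx+1) normal velocities on x-faces; v: (ny+1, nx) on y-faces."""
    return (u[:, 1:] - u[:, :-1]) / dx + (v[1:, :] - v[:-1, :]) / dy

ny, nx, dx, dy = 16, 16, 1.0, 1.0
# Discretely divergence-free field from a streamfunction psi on cell corners:
psi = np.random.rand(ny + 1, nx + 1)
u = (psi[1:, :] - psi[:-1, :]) / dy      # u =  d(psi)/dy on x-faces
v = -(psi[:, 1:] - psi[:, :-1]) / dx     # v = -d(psi)/dx on y-faces
print(np.abs(cgrid_divergence(u, v, dx, dy)).max())   # ~1e-16: zero at cell centers
```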
The most important corollary and main motivation for proving such theorems in the context of extended fermionic lattice systems is the rigorous justification of linear response theory {{cite:78abafb8a2ccf8775983db6c714ed5477022cb17}}, {{cite:3c68ecc7dd6b4a65e340a4bb07dd18e7439e24ca}} and the Kubo formula {{cite:2c36a60e29900b8456e0e12a46bcac0b8a15e9e8}} for (topological) insulators {{cite:dda72d4ad77d676f9cb72d57d930ec8553f5772f}}, such as quantum Hall systems {{cite:e06497840f3e3fd5052948398096a5d9b25e029a}}, where the prototypical relevant perturbation is a linear external potential modeling a constant electric field closing the gap of {{formula:b762859b-84ad-42ae-a94d-8fb2af287511}} for every {{formula:7777bd70-e7c6-4af1-9dd4-394cf6af0c8a}} (see Figure REF on page REF ).
In Figure REF we compare the texture similarity of images generated by our proposed method with real samples based on SIFID {{cite:78f25c7d2640ca260d12823e5d1493d126b2a901}}. {{figure:7ec689b9-2360-462f-9ad4-419a9698d992}}
We propose a U-net-based {{cite:c9627c0c3fcd0cde8fccc70669026d657c396e0b}} multitask structure to incorporate instrument recognition with music source separation, which we refer to as the Instrument Aware Source Separation (IASS) system. An overview of the model is shown in Fig. REF . Although the multitask approach shows similarities with previous approaches (compare {{cite:ccab1dd12303bfd9e1a77bed82e103c898c5b170}}), we design our model with a different goal: instead of just learning a joint representation using the multitask structure, our model uses estimated labels from multitask learning during inference to improve source separation estimation.
For further assessment of the proposed network, the gradient-based class activation mapping (Grad-CAM) {{cite:ddb39a6ff099e3975402d6cd0cf93e7d4d1befcb}} was employed to depict the decision region on a heatmap. Fig. REF illustrates the class activation heatmaps and superimposed images for three sample images. As can be observed, the proposed method extracted appropriate features, and the model mainly focused on the lung area. Radiologists can use superimposed images to examine the chest area more precisely. {{figure:4ea4b6c8-f430-4d5d-b1f3-99e6c36577b4}}{{table:1bc01f0f-23ae-44b3-af6f-098ae028d652}}{{table:4d29cf4a-b337-4eb4-b96b-f964556527fa}}{{table:16464859-5609-4ee3-9125-ae20e7e21d5f}}{{figure:5e0f59e1-0028-44d5-9bb4-c1b1bd265cf8}}{{table:4cfbb0bc-c691-425d-a922-4f90588934b8}}{{figure:ad4f0c62-635e-4c93-91b2-ab52ab4bf608}}
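For reference, a compact PyTorch sketch of the Grad-CAM computation itself (the standard algorithm; the ResNet backbone and the choice of layer4 are placeholders, not the network used in this work):

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # placeholder backbone
feats = {}

def hook(module, inp, out):
    feats["a"] = out                     # activations of the chosen conv layer
    out.register_hook(lambda g: feats.__setitem__("g", g))  # their gradients

model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)          # stand-in for a chest X-ray tensor
score = model(x)[0].max()                 # logit of the predicted class
score.backward()

w = feats["g"].mean(dim=(2, 3), keepdim=True)      # global-average-pooled gradients
cam = F.relu((w * feats["a"]).sum(dim=1))          # weighted sum of channels
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0, 0]
print(cam.shape)                          # (224, 224) heatmap to superimpose
```

The resulting heatmap is min-max normalized and overlaid on the input image to produce the superimposed visualizations described above.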
In this work we stray away from an all-encompassing definition of an outlier and instead focus on a definition for the one-dimensional case in terms of order statistics. In a sense, our definition approaches the problem from the point of view of investigating data points "that arouse suspicions that it was generated by a different mechanism" {{cite:528b02e1d7ed72b58c8daef36d580831ee9274df}}.
Defenses against physical surveillance. We adopt a systems-view of the problem and reason about how various stages of the CSIS pipeline can work together to make physical surveillance attacks harder. First, one could leverage the recent progress in defenses against adversarial examples to make computing poison delivery images harder. For example, techniques like adversarial training {{cite:0bcd7a94b9835e784a22c7727eb647cdbfae25f1}}, diffusion-based adversarial purification {{cite:cd015bdb0a80c7f20afd0436432db63ac14ca8ff}} or certified robustness {{cite:f6a4745818fd58cf35a172635991e93ee73031d4}}, {{cite:346d6be0ddfa9f1e63c80a19201976097e572da5}} can increase the distortion required on the adversarial example to the point that either the human curator rejects the sample as being too noisy or the resultant hash of the poison delivery image is too far from the desired hash. The challenge is that such techniques would work for deep learning-based perceptual hashes like NeuralHash but not for algorithms like PDQ.
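As an illustration of the first defense, here is a sketch of one PGD-based adversarial-training step (the standard recipe, not specific to perceptual-hash models; ε, α, and the step count are hypothetical values):

```python
import torch
import torch.nn.functional as F

def pgd_adversarial_step(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """One adversarial-training step: craft an L_inf-bounded PGD example,
    then return its loss, to be backpropagated to update the model.
    Training on such examples raises the distortion an attacker needs."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return F.cross_entropy(model(x + delta), y)
```

As noted in the text, this kind of gradient-based hardening applies to differentiable hashes like NeuralHash but not to non-differentiable algorithms like PDQ.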
There has been extensive research in the field of gravitational collapse. Since the pioneering work on the gravitational collapse of homogeneous dust {{cite:9079a25840ad91c9372346aab5ff3ade93726b73}}, {{cite:a32eb87280b85f55cdccfba4da423e9843187640}}, it is now accepted that for the gravitational collapse of homogeneous pressureless matter, the central singularity remains hidden behind the horizon, implying that the end state of the continual gravitational collapse of a homogeneous dust cloud must be a black hole {{cite:5673a5eb042d20aff9c61b36a077f51fc6a27d2b}}. Further studies have examined various aspects of gravitationally collapsing stellar systems for different kinds of matter distributions, and details may be found in {{cite:923c3d21331341315ee615d0c503cd50b28a0c34}}, {{cite:794c28e13a366742ad2b7a11fb08fd0dcf9d5959}}, {{cite:06db3b61b6698f306c2d096cadb2ab1847d799de}}-{{cite:3f01a3235f2db9f959a4fb7dabd3df5c3a9ddcb6}}. These studies have thrown light on many interesting facts which must hold for the collapse processes to be physically realistic. For example, for the continuous and smooth matching of the interior collapsed spacetime to the exterior Vaidya spacetime over the timelike hypersurface {{formula:e0b72b0b-7c08-4861-8923-f67e24c99130}}, the radial pressure must not vanish at the boundary of the collapsing radiant star, but instead be proportional to the heat flux {{cite:4e690b678a02534497591cae4cba14cdb8fae75c}}, {{cite:4bd5b78e0d7094d8adc98fc384b35a1a364932af}}.
A NAS method has three major components {{cite:1da012f39d7dac90885b1468dbd6a79e9462f847}}: a search space {{formula:4ca208e1-02f7-440c-8e85-3deb083a2458}}, a search strategy {{formula:5db6965f-ff2e-4654-8367-3f9129040f21}} and a performance estimation strategy {{formula:3417cac5-a542-4a8e-98a4-721f982f265f}}. The predefined {{formula:2b5e7f21-be0f-419f-bcb1-4362ffe08bcf}} confines the total number of possible architectures, provided it is not unbounded; it therefore affects both the search efficiency and the optimal architecture found. {{formula:9683c7d3-8873-4ca3-b8dd-1fcbb2b7c1fe}} determines the search efficiency and should avoid the potential pitfall of local minima. {{formula:f4da38c8-55a2-4717-8d2c-301954825867}} provides a way to evaluate an architecture candidate and give feedback. The simplest choice for {{formula:bdcd1fe4-bfce-464e-8ddb-57096d2784e0}} is to perform standard training and validation for each candidate architecture.
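A toy sketch of these three components (ours), with a bounded search space, plain random search as the search strategy, and a hypothetical `train_and_validate` routine standing in for the performance estimation strategy:

```python
import random

# Search space: a tiny grid of architectural choices (bounded, hence finite).
SEARCH_SPACE = {
    "depth":  [2, 4, 8],
    "width":  [64, 128, 256],
    "kernel": [3, 5, 7],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def train_and_validate(arch):
    """Performance estimation strategy (hypothetical stand-in): in practice,
    standard training followed by validation of the candidate architecture."""
    return random.random()   # placeholder for validation accuracy

# Search strategy: plain random search over the space.
best_arch, best_score = None, -1.0
for _ in range(20):
    arch = sample_architecture()
    score = train_and_validate(arch)
    if score > best_score:
        best_arch, best_score = arch, score
print(best_arch, best_score)
```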
We are in the process of conducting comparison experiments between our white-box attacker and that of {{cite:cb061565ef402b0964e3d0dc99cbf2183f5a6426}}. However, their method does not easily lend itself to use with the LTU framework, because it requires training a neural network for each LTU round (i.e., on each {{formula:adb492e8-4aa5-47bf-a866-e8b611cb77b2}}). We are considering using only one data split to evaluate privacy, with {{formula:23efb737-5388-4684-93f8-567f79e1669a}} and the rest of the data used for privacy evaluation. However, we can still use the pairwise testing of the LTU methodology, i.e., the evaluator queries the attacker with pairs of samples, one from the Defender data and the other from the Reserved data. In Appendix C, we show on an example that this results in an increased accuracy of the attacker.
Remark 1 It is established that problem (REF ) can recover the positive mean value {{formula:f24fe174-74c0-4b44-b257-735d80700eab}} without requiring any separation between the delays {{cite:4a94ef9af89454a4c6727866bf37455dcf3066ff}}, as long as {{formula:1e5e3acd-6cb8-473f-b8a2-4fda5a23bd1c}} , and the estimated parameters {{formula:d24a6b62-4a9b-404b-9840-7965b350c556}} satisfy {{formula:2d0fe55d-c04f-44e8-80bb-0ee9ee5abbaa}}
Example 2.20 Let {{formula:b981f273-ae24-46de-8c39-1404d301ad63}}, a subspace of {{formula:509d9108-3566-4a7f-9266-63e395dd3bf0}} with the usual topology. If {{formula:ad686fd5-39e6-4002-9a8c-b910a955efa7}}, {{formula:b23cca70-f4ad-4d89-94b3-0f69693a5d3e}}, then the natural density of {{formula:e95e4288-4c50-4a34-8c16-846428b4a6fd}} is defined by {{formula:b6a14ad0-2737-44c3-a076-a421a9a5ec57}}, if the limit exists ({{cite:20ead0cdde84996e4e9fd406cbfb7cacf1791b6f}}, {{cite:0fa057a8a81d5cc10e04a66edd83a24f29440f22}}). Let {{formula:9869a266-6aad-4cc7-8fea-b25e26219a9a}} ({{cite:3c9f0247ca9414bb5b1b4ed543d93ba77c566e58}}). Let {{formula:50247783-9c9d-4079-b8ad-da46f4b9e7cb}} be a sequence {{formula:004b323f-15ab-47f2-abd0-46cc0894e995}} uniformly distributed in {{formula:b5f5ec2b-3be9-4c4c-a5ae-a88746b50076}} ({{cite:ad76101109128cb36b6f9aa173ff9348950dc95d}}). Then the density of the index of {{formula:70a554ab-5e7f-42f4-9c91-a6eb547cc19a}} in any subinterval of length {{formula:ff851994-4e67-4e59-89d6-f9797d0e24ed}} is {{formula:f31afaef-50b7-4804-a03a-8c8036686ed6}} itself. If {{formula:96288fbc-23c6-49a0-89c1-d212462781eb}} is a subsequence of {{formula:651c85e5-289e-49f6-9b2c-2f24bff2bf67}} that {{formula:21829d10-a5ca-47dd-a49d-abe9fc6f7c39}}-converges to {{formula:049d8e9c-7555-4fef-aa76-6aba3bef7692}}, then {{formula:a9338660-69fe-46a2-a5df-20a0248ff209}}. For, as in {{cite:c7c83cf11ba09a01bcc74e223382beff27f58f41}}, let {{formula:447039d0-48ac-491d-bfc7-9be84c1470a9}} be given and, for each n, {{formula:e0a363d5-06c3-489b-858b-0546470e3acf}}; then {{formula:5c212f32-20aa-4108-9b36-3c5ac2d092b8}} {{formula:9d0604f3-3181-494a-a71e-0b9d44876396}}
Wangerin {{cite:af661047203644ea02a64965b073ed2b3b4e78e3}} showed that Lamé's equation appears when Laplace's equation is separated in confocal cyclidic coordinates of revolution. Such coordinate systems can be found in Moon and Spencer {{cite:9d54cad75bc3e682681490f9ad610981c1fc816b}} and in Miller {{cite:443793b7c20d6688ae18048aba3ff350a857a8a9}}. They include flat-ring, flat-disk, bi-cyclide and cap-cyclide coordinates. An outline of Wangerin's results is given in {{cite:8fe34f03c0707c1dead5a9e7cf60cbfeeb40f513}}. In order to obtain harmonic functions relevant for applications, special solutions of the Lamé equation called Lamé–Wangerin functions were introduced; see Erdélyi {{cite:87ba7696c581db02b442f78d5978f6d59a831f5a}}, {{cite:8fe34f03c0707c1dead5a9e7cf60cbfeeb40f513}}. The Lamé–Wangerin eigenvalue problem is obtained when we require that {{formula:206e853c-6aea-4b22-bed0-48504e270ffd}} stay bounded at the singularities {{formula:4f4beb44-05ca-45c9-9d36-5afa3b2161f5}} and {{formula:b6428047-2131-401f-979e-954a2584bbf5}}; see Erdélyi {{cite:87ba7696c581db02b442f78d5978f6d59a831f5a}} and Erdélyi, Magnus and Oberhettinger {{cite:8fe34f03c0707c1dead5a9e7cf60cbfeeb40f513}}. These eigenfunctions are defined on the segment {{formula:13a8cb5c-2da3-44eb-bee9-9c2533938a94}} but can then be continued analytically.
The spectra observed far from the electron at various angles with respect to the {{formula:98583da1-18c4-42a4-addd-aa3e039e3c3d}} direction are shown in Fig. 3. The higher frequencies ({{formula:054b6ae3-f9bd-4df1-afef-f79ae66a0719}}) are strongly damped with increasing angle as {{formula:c4dec9a0-f48a-4a9a-8502-5c274acf26f0}}, see {{cite:9550fedbaa875686f20a0e11f55c972da116a592}}. Since the critical frequency {{formula:6009600a-52da-44a9-9b08-4410adc93493}}, where {{formula:1ac7a12f-8376-4007-9043-aeabe51fd2cf}}, for the electron with {{formula:a1475aed-5312-4701-89d9-7840d770037d}} is larger than that for {{formula:129a330a-adc2-426e-b0c6-e62052cb2de9}}, the radiation from the electron {{formula:bca72bcb-8cf6-4e05-8ee2-9d0e66c49793}} is dominant {{cite:6c2a0e89bd840a6be1f72ba59bb95f635f3ffd62}}. The electron with {{formula:925cee2a-9c52-4eaa-85fc-4243148dbddd}} gyrates about three times in this period, and the ripples in the spectrum reflect the electron cyclotron frequency; however, a much longer time would be required to resolve it {{cite:3ddaa299924de9f406c8e39abcec6a30e0f6ca17}}. The spectrum obtained from the simulation agrees very well with the theoretical synchrotron spectrum (red curve) from Eq. 3 (Eq. 7.10 of {{cite:3ddaa299924de9f406c8e39abcec6a30e0f6ca17}}).
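For reference, the universal synchrotron spectral shape used for such theoretical curves, F(x) = x ∫_x^∞ K_{5/3}(ξ) dξ with x = ω/ω_c, can be evaluated in a few lines of SciPy (a sketch, ours):

```python
import numpy as np
from scipy.special import kv          # modified Bessel function of the second kind
from scipy.integrate import quad

def synchrotron_F(x):
    """Universal synchrotron spectrum F(x) = x * int_x^inf K_{5/3}(xi) d(xi)."""
    integral, _ = quad(lambda xi: kv(5.0 / 3.0, xi), x, np.inf)
    return x * integral

for x in (0.01, 0.1, 0.29, 1.0, 3.0):
    print(f"F({x}) = {synchrotron_F(x):.4f}")   # peaks near x ~ 0.29
```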
The concept of FSL, however, is not new and has been introduced by several previous works {{cite:445ceacd06f96584bd0320656635cee2dede171c}}, {{cite:a6e99a9aecd578665e57a62bf385698260a52a15}}, {{cite:0a93c5328edc79a35db8fa5a03341913a00937cb}}, {{cite:9e788d2617ed00ac886ceb71d156fdf16a041415}}, {{cite:7fc30fbf13c88dce8b481b65998b81135e41ca49}}. By involving the online updating in the offline training stage as an inner loop, these methods turn the manual design of the online update strategy {{cite:54df44b7b2f4c66cdd87e373ba8ae933aaca88fa}}, {{cite:fcdb11855a296a39197b7d81cfbc003babbbee04}} into a data-driven module. Compared to siamese networks {{cite:3e7fe353d52cf1614fba9d55e43b42434951c803}}, {{cite:d77e6a255e4a0ec4db8bdd88afc0dd62057ad7d1}}, which can be regarded as metric-based few-shot learners via pure matching, the existing applications of FSL algorithms in tracking mainly focus on optimization-based few-shot learners, with their powerful adaptation capability in distinguishing novel classes. However, previous methods hinge greatly on specific FSL algorithms, the majority of which resort to a design similar to MAML {{cite:48d8aae9eb69073f8432deef14a81ba5d61a4a94}} by learning a gradient-descent strategy with trainable parameters. Specifically, most of these approaches limit themselves to optimizing specific convolution kernels over the whole image instead of more customized weight learning (e.g., a matrix multiplication factor) from sparse samples as in FSL's task setting. This prohibits a direct introduction of various new FSL algorithms, since directly applying an FSL algorithm by taking all locations of the whole image as input samples would sacrifice tracking speed.
It is a popular strategy in comparative effectiveness research to embed observational data into a randomized controlled experiment using statistical matching and analyze the matched data as if they were a randomized experiment. As Collin Mallows famously pointed out (see, e.g., {{cite:17878ee6c1a8a49a7825743119ccf4e4a7c4d57c}}), the most robust statistical technique is to look at the data; a matched observational study is therefore robust in the sense that it forces researchers to examine the covariate balance (or overlap) after statistical matching, focus on the covariate space that is well-overlapped, and avoid unfounded extrapolation ({{cite:be44c8282ad1bb768cb277fddd3ebc50f57c9646}}, {{cite:f8eed65aae5dc2b6bfdbbbd21837388d923ce742}}, {{cite:f7f122440eddaba498b12c8d2a6383f1cb373981}}, {{cite:13eb979f5568f57d511d39a52e9b1ae1f827b794}}, {{cite:cc9c60e6b30dde630258946fd4a548627a4e6d78}}). Although this is, in our opinion, a preferred strategy for drawing causal conclusions, there is a gap between an approximate experiment (i.e., data after statistical matching) and a genuine experiment, and this gap is often circumvented by making the randomization assumption, justified only by informal balance diagnostics.
Before we describe our approach for the proofs of these results, we recall the situation for the classical Euler equations (REF ). DiPerna {{cite:8a66ef2169b5d03e0aa228beb2dacfec4ea5f3ef}} first showed the existence of entropy solutions of (REF ) for the case of a gamma-law gas with {{formula:fbebf964-9483-44d7-adba-66538667035a}} , {{formula:40d7eae6-c77f-47d1-9b2b-815d3c5f6ab2}} odd and {{formula:5afa3f20-7785-4b27-bcff-7febfbdd08a2}} , by developing the method of compensated compactness of Murat-Tartar {{cite:43d30f6273d3570d7d0e2f7c0ba7285f4f030213}}, {{cite:dfeabdb591ada1fd1f1c2cc247b5f2ed987fffa0}}. The general case {{formula:99f34a13-43c6-411b-8886-4af07a566079}} for polytropic gases was first solved in Chen {{cite:95abd62ac16759e37efc1fde8fa0d196e02939d9}} and Ding-Chen-Luo {{cite:1cc3ab396c6e93f05f0ad4ba5d5877ecc1673fb6}} by developing new techniques for entropy analysis which involve fractional derivatives and the Hilbert transform, combined with the compensated compactness argument. The case {{formula:e6f7be10-0a54-4eb0-8546-b49efd417b1a}} was subsequently solved by Lions-Perthame-Tadmor {{cite:a67f20e55ad8106a63d571fa6bbec3733505cceb}} through the introduction of the kinetic formulation, before Lions-Perthame-Souganidis {{cite:4b281a53bdc889dee5b471ce593ee4dc8393c027}} solved the problem for the remaining interval {{formula:11ab4ab0-f615-4aed-95c9-0637b1755e05}} , simplifying the proof for all {{formula:e1c36603-1046-443d-94bc-80b4bf8e73e6}} . Chen-LeFloch {{cite:ba28e3128f909aeb33388613087889c7b47c7b70}}, {{cite:93a2f0ae2be4cbc1e022307fa85984b07d80df92}} considered the case of a more general pressure law, under the assumptions of strict hyperbolicity and genuine nonlinearity away from the vacuum and an approximate gamma-law form close to the vacuum; see {{cite:ba28e3128f909aeb33388613087889c7b47c7b70}}, {{cite:93a2f0ae2be4cbc1e022307fa85984b07d80df92}}, as well as (REF ), for the precise assumptions on the pressure law.
Two BS candidates in {{formula:436bf0fb-c910-435b-accc-8ca12e57e37d}}-{{formula:c4dd15e2-b940-41a7-b58e-751d092de2ae}} (Fig. 12) lie on {{formula:a172a2e0-d360-4037-a012-49bef32d0352}} ({{formula:9cf938b5-4048-4f83-9240-e0991777e2a6}}  mag) and {{formula:1812792e-4869-4063-83bf-79bab1f69231}} {{formula:5ef4dd75-c40d-40a6-99ba-49bc75ca6f7c}}. These limits for {{formula:23cc5ec2-88a3-4b5e-9729-4c10cf57dee3}} and {{formula:9691bed5-5553-4a5a-bcc3-f6c26377cf71}}–{{formula:c5a1015c-1428-44fa-9c2a-7ac7954a83dc}} which BSs occupy in CMDs are similar to those given by {{cite:ef087a903bc6887200c123a756a872c770f5fd85}} and {{cite:214020638daf13f95ec41d69775b718faaeedd78}}. The two BSs lie between the ZAMS and the 85 Myr isochrone. The age of the BSs is definitely younger than the cluster; therefore, the 45 Myr isochrone is drawn up to the MS turn-off. As discussed by {{cite:eba2d7fa61f82ae371d96314a55cf42e474e372c}}, BSs are commonly defined as stars brighter and bluer than the main-sequence (MS) turnoff in open/globular clusters. Therefore, their origin cannot be explained by normal single-star evolution. Two main formation mechanisms have been proposed: (1) mass transfer in binary systems {{cite:1074bee2443c7554d8b35b1bc02a8574c2431853}}, possibly up to the complete coalescence of the two stars, and (2) stellar collisions {{cite:a3615100ea22fa457e569c6c77afcd8a19a9d246}}. Both these processes can potentially bring new hydrogen into the core and therefore 'rejuvenate' a star to its MS stage {{cite:716f43db7f91c6ddb51a01b1be1cbf7bc2e42272}}, {{cite:d59d7bcca423e6bdb3aea2473d33a1264d6bd5d5}}. According to {{cite:7cd2aa0d81ad6c2dfc63d5a7a63f087fa7a3f942}} and {{cite:8a163052e2aca042d8ff6a5401f73c140a1a374f}}, the increase in mass makes a star look younger than it is.
The performance metric for a MAB algorithm is the regret, which is proportional to the number of selections of sub-optimal arms and should be as low as possible {{cite:ff511008dc1576de5b7613cecfa314c8c9f027da}}, {{cite:3ff121381c7ab80b913c3c58dd3bd7304a463f51}}, {{cite:a45768c795309d04cbb80e500bb8d6ba670c9721}}, {{cite:aeb8d667393c184607c8e74995ee72402a23ee67}}, {{cite:ec53eafdc0ee7d80f069f4df93f025a79f3d9efa}}. An optimal MAB algorithm guarantees logarithmic regret, which is the best one can achieve. The upper confidence bound (UCB) algorithm {{cite:aeb8d667393c184607c8e74995ee72402a23ee67}}, Kullback-Leibler UCB (KL-UCB) {{cite:3ff121381c7ab80b913c3c58dd3bd7304a463f51}}, and Thompson Sampling (TS) {{cite:a45768c795309d04cbb80e500bb8d6ba670c9721}} are popular optimal MAB algorithms. The KL-UCB algorithm is computationally complex due to its underlying optimization routine; hence, the TS and UCB algorithms are preferred in {{cite:8d429737a80a1b99f84661747fe04385335e0469}}.
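For concreteness, a minimal sketch of UCB on Bernoulli arms (the standard UCB1 index; the arm means and horizon below are illustrative):

```python
import math, random

def ucb1(means, horizon=10000):
    """UCB1: play each arm once, then always pull the arm with the highest
    empirical mean + sqrt(2 ln t / n_i) confidence bonus."""
    n_arms = len(means)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                       # initialization: each arm once
        else:
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    # Regret = sum over arms of (optimality gap) x (number of pulls).
    return max(means) * horizon - sum(m * c for m, c in zip(means, counts))

print(ucb1([0.3, 0.5, 0.6]))   # regret grows only logarithmically in the horizon
```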
A version of the spherical top model constitutes the free part of an effective field theory (EFT) approach {{cite:2c71095c51b1adacacf92b46e83f860b9aeab74a}} to the post-Newtonian (PN) gravity of spinning objects {{cite:4a416b5902a4329c881df3874747fec30332030b}}, {{cite:2914615b2ca82a03441f929f85610dbb3c44e4d3}}. In intermediate steps of the EFT computation, some higher time-derivative terms appear in the Lagrangian, but they can be eliminated systematically in the final Hamiltonian of the interacting binary system. It would be interesting to experiment with different gauge choices and see if one can reduce the appearance of higher derivative terms.
The sub-Gaussianity condition above is equivalent to the following tail bound {{cite:304f5e1bb9db525b6d75eb4015389f94bdc4a2ef}}: {{formula:72fde53a-86e8-4ae6-9f21-bb2019c4b78c}}
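Concretely, one standard statement of this equivalence (our phrasing; constants vary by convention) is: a zero-mean random variable X with E e^{λX} ≤ e^{λ²σ²/2} for all λ satisfies

```latex
\mathbb{P}\big( |X| \ge t \big) \;\le\; 2\exp\!\left( -\frac{t^{2}}{2\sigma^{2}} \right)
\qquad \text{for all } t \ge 0,
```

and conversely such a tail bound implies the moment-generating-function bound with σ inflated by an absolute constant.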
Our work is based on extending the Zhang and Bu (ZB) {{cite:2b1e32bfd24c867692325c6c8ef2c0be52b29ef0}} method for community detection, automating its operation and quantifying its accuracy in correctly detecting communities in benchmark networks. In order to successively partition the network into smaller modules, the ZB method follows the Kannan-Vempala-Vetta (KVV) bi-sectioning algorithm {{cite:96db7c0fbfd715ce5cdd37fa8981fd0123483857}} (steps 2-6 in Sect. REF ), but it uses the resistance distance of the network (step 2 in Sect. REF ) instead of its adjacency matrix. We add modularity optimisation to the process (steps 7-10), which makes the resultant algorithm a hybrid method involving resistance distance, spectral partitioning, and modularity optimisation. Consequently, our adaptation allows the algorithm to be iterated without needing to specify the number of communities in the network or control its outcomes, making it an unsupervised algorithm.
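The resistance distance used in step 2 can be computed from the Moore-Penrose pseudoinverse of the graph Laplacian; a small sketch (ours, with networkx supplying only a toy graph):

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                         # toy benchmark network
L = nx.laplacian_matrix(G).toarray().astype(float)
Lp = np.linalg.pinv(L)                             # Laplacian pseudoinverse

# Resistance distance: R_ij = L+_ii + L+_jj - 2 L+_ij
d = np.diag(Lp)
R = d[:, None] + d[None, :] - 2 * Lp
print(R[0, 33])   # effective resistance between the two "hub" nodes
```

The matrix R then replaces the adjacency matrix as the input to the KVV-style spectral bisection.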
Research on SSOD boils down to answering the following two questions: how to generate pseudo bounding-box labels from unlabeled images, and how to exploit such auto-labeled data together with the previously labeled data? While these questions have been answered with some success, e.g., by Instant Teaching (InsT) {{cite:e272abefc767dd8a09fb048d72a492c364348409}}, Unbiased Teacher (UnT) {{cite:03965edee523480b1b0f11292fe639c0c3c07e40}}, and Soft Teacher (SoftT) {{cite:a01c8ff8d4156ce0320788ebcea3564e53ef3006}}, these efforts are mostly targeted at object detection in natural scenes as provided by the PASCAL VOC and MS-COCO benchmarks. For the natural-scene domain, strong data augmentation operations such as large-scale Cutout or geometric transformation are often performed for better performance {{cite:a01c8ff8d4156ce0320788ebcea3564e53ef3006}}. However, as lesions in an OCT image are closely related to their localities in the image, the original anatomical position information is crucial for lesion localization, and such information can easily be disrupted by the aforementioned operations. To what extent the conclusions and good practices of SSOD learned from the natural-image domain generalize to the OCT-image domain remains largely unexplored.
Electricity generated from renewable sources is a continually growing component of global energy production and a key driver for a sustainable energy future {{cite:a7c094fa691dc1265ca60db4a698be7ab89ee1eb}}, {{cite:22936c2ad82b300cc5702fb4586827d213ee068f}}, {{cite:3823d0d382875750591520b39755e92a1b4e2894}}. Further expansion requires efficient and cost-effective integration into existing power distribution systems but intermittency and curtailment remain a challenge {{cite:97adc5474fcf8fa65952c3c94d0eb3f8c53cd4f2}}, {{cite:82131a88dcf597e38d45eb185e01ecc6ce46074b}}, {{cite:f803673cd6c99f1c855c28cc548b745cbce09159}}. A number of strategies have emerged to address these issues, including electrochemical energy storage {{cite:d8e4f3371bbcddfad9891225ac8e3e4c5f8f30f8}} and repurposing otherwise wasted electricity to electrify chemical manufacturing {{cite:0e770b4ed5130b73ea1c968a2134436c98c9e4cc}}, {{cite:467f7b9ea4f5d0d4a1acc9b391fbfc8c6e6f7c29}}, {{cite:e3d1e580d42063c396f077c8fcf44e12183ee4da}}, {{cite:f9ca52666b08878b9a4f998e71a3bbfd7aa83ca9}}. The direct electrochemical conversion of CO{{formula:78e83aa0-a217-44a4-900a-2aceaa8439f3}} is an especially powerful avenue as it simultaneously combines storage, chemical synthesis, and carbon-removal {{cite:a599ae83a598acba13b3f77caa5df539f9929d1d}}, {{cite:417dc41659b074356a7d9a9c1320f24faa3a7d01}}, {{cite:a1e83be3396f338565c58784eb2e61ce9a0bffcb}}. As these advances are translated from the laboratory to industrial scale, energy efficient operation will become increasingly important to ensure economic viability {{cite:22936c2ad82b300cc5702fb4586827d213ee068f}}, {{cite:d8e4f3371bbcddfad9891225ac8e3e4c5f8f30f8}}, {{cite:f78fc08214a141d7d53876ca999aef7384c04308}}, {{cite:f803673cd6c99f1c855c28cc548b745cbce09159}}.
We conduct comprehensive experiments by comparing BasicVSR and IconVSR with 14 models: VESPCN {{cite:2106d45a958829148fcff89b8d8d489512f782fd}}, SPMC {{cite:f1f601beb3ca2676f926eff4822062bc8224b578}}, TOFlow {{cite:bd89f1906c1ade7b07cf8df7c197fe2d767995cc}}, FRVSR {{cite:5ba77d31b63b21de623271a8c4b7db95129a4c79}}, DUF {{cite:11604ff334c0551a4fcb441a8aa3f9f1196fe6cc}}, RBPN {{cite:ca65c3afc4d0446c8c4115a5f0a482f96c571d59}}, EDVR-M {{cite:2822786bb74133febf217a3498f0e41c8a92dad1}}, EDVR {{cite:2822786bb74133febf217a3498f0e41c8a92dad1}}, MuCAN {{cite:9b31e64d8b038a7a122635464212fbdfc2ffa14d}}, PFNL {{cite:b175ffc428743bb254774593f13935b07d97f6bf}}, RLSP {{cite:7baa88c536ae6818b2dca86d0eac1954563d5acd}}, TGA {{cite:0c7b26fc376432f29419fec0d95b3a605e540b02}}, RSDN {{cite:0a6f829fa74f994ef5d0a8d0ac7f847e57360ee8}}, and RRN {{cite:5cced21a54f95be9b804ba1853edfceb37543ced}}. The quantitative results are summarized in Table REF and the speed and performance comparison is provided in Fig. REF . Note that the parameter counts of BasicVSR and IconVSR include that of the optical flow network, SPyNet, so the comparison is fair.
Despite the potential of panorama images, it is challenging to perform localization amidst drastic scene changes while simultaneously attaining efficiency and accuracy. On the 3D map side, it is costly to keep collecting an up-to-date 3D map that reflects the frequent changes within the scenes. On the algorithmic side, existing localization methods have bottlenecks in either computational efficiency or accuracy. While recent panorama-based localization methods {{cite:ce91f56e2c4981ccc82f14585fb929755c2559db}}, {{cite:9053c9fe6c06ef847f43a9ce2c64b0fbaa17034f}}, {{cite:2732fa3f9450b38ccc2da46288aca5fbe906abce}}, {{cite:f75908f6d689e41b7b936bf16238a03d67f514ce}} perform accurate localization by leveraging the holistic context in panoramas, they are vulnerable to scene changes without dedicated treatment to account for changes. For perspective cameras, such scene changes are often handled by a two-step approach, using learning-based robust image retrieval {{cite:a3a01ae979195f96da43fc1a7c0fc309a0bcb178}}, {{cite:d8e709db507b1cd36967caf629f37f1c573b0a81}} followed by feature matching {{cite:bac110ae57031e1e942914cc90294f80af4f3841}}. However, the image retrieval step involves global feature extraction, which is often costly to compute and memory-intensive. {{figure:b8243e77-b9a9-4c54-8776-8caa54264228}}
There are a handful of demonstrated methods that modify the training procedure to achieve better tradeoffs between final model accuracy and the time to train the model. Some of these methods include Blurpool {{cite:37b7558b395b1b0ce2871f4f727cc44d32606563}}, Channels Last, Label Smoothing {{cite:274d6078a8d8d6cbc2c067678ea35873d5a14311}}, {{cite:3e60a92c2e0d44eacdf1f7b34e8cb5f5c0bc52e6}}, and MixUp {{cite:86122bb9d1cb079dfbc5d178781cee4e3759736e}}. In our benchmarking work, we wanted to find methods in the literature that led to clear Pareto improvements. We therefore asked whether multiplicative cyclic learning rate schedules could be used to construct competitive accuracy-time tradeoff curves for separate methods such as Blurpool (BP), Channels Last (CL), Label Smoothing (LS), and MixUp (MX). We found that using multiplicative cyclic learning rate schedules allowed us to generate tradeoff curves of comparable quality to standard tradeoff curves, but at substantial time savings ({{formula:4d01fff1-7ca7-42e4-92bc-a8b5c447d8cc}}{{formula:19b3c70f-e268-41bf-abd6-6a412d134812}}, see equation REF ).
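One way to realize such a multiplicative cyclic schedule is PyTorch's built-in CyclicLR in 'exp_range' mode, where the cycle amplitude decays by a factor gamma per iteration (a sketch; all hyperparameters below are placeholders, not our benchmarked settings):

```python
import torch

model = torch.nn.Linear(10, 2)                     # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# 'exp_range' shrinks the cycle amplitude by gamma**iteration each step,
# giving a multiplicatively decaying cyclic learning rate schedule.
sched = torch.optim.lr_scheduler.CyclicLR(
    opt, base_lr=1e-4, max_lr=0.1,
    step_size_up=500, mode="exp_range", gamma=0.999)

lrs = []
for _ in range(2000):
    opt.step()          # (the actual training step would go here)
    sched.step()
    lrs.append(sched.get_last_lr()[0])
print(min(lrs), max(lrs))
```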
A major advantage of the proposed method is the possibility to distinguish between epistemic and aleatoric uncertainty in the predictions. Capturing epistemic uncertainty may be especially important in the area of atomic structure analysis. In some other areas, where a lot of data is available, it may be sufficient to model aleatoric uncertainty, which cannot be reduced with more data, and reduce epistemic uncertainty with large amounts of training data. Chemical space, however, is so vast that it is not feasible to gather enough training data to cover the entire domain {{cite:3e6282e4b824df2be6f55b0dcb089f30e52cc5ee}}, {{cite:29783371dc4caa0c63ffcc6cba24f05ca862f818}}. Thus, identifying cases beyond the training data distribution where the model is not expected to perform well is more critical. In particular, distinguishing between epistemic and aleatoric uncertainty can be utilised in a screening system for atomic structures in the following way: If the epistemic uncertainty of a prediction is low, the aleatoric uncertainty indicates the expected error. If, on the other hand, the epistemic uncertainty is high, there is a high level of disagreement in the ensemble and therefore low confidence in the prediction, and the system can automatically fall back to a more accurate method such as DFT {{cite:704a23defe1b81f78ad76381312d0de074adb010}}. The specific thresholds for decision making can be tuned depending on the data, application and computational resources available.
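A sketch of the ensemble-based decomposition assumed here: each of the M ensemble members predicts a mean and an aleatoric variance (e.g., from a heteroscedastic output head), and epistemic uncertainty is the spread of the means. The numbers and the threshold below are illustrative, not tuned values.

```python
import numpy as np

# Hypothetical ensemble output for one input: per-member predictive mean
# and per-member (aleatoric) variance.
means = np.array([1.02, 0.98, 1.05, 0.97, 1.01])
alea_vars = np.array([0.04, 0.05, 0.04, 0.06, 0.05])

aleatoric = alea_vars.mean()      # irreducible noise estimate
epistemic = means.var()           # disagreement between ensemble members
total = aleatoric + epistemic     # law of total variance

# Screening rule from the text: trust the aleatoric estimate only when
# epistemic uncertainty is low; otherwise fall back to e.g. DFT.
EPISTEMIC_THRESHOLD = 0.01        # tunable threshold (assumption)
fallback_to_dft = epistemic > EPISTEMIC_THRESHOLD
print(aleatoric, epistemic, fallback_to_dft)
```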
The numerical results presented in this paper for defect detection have shown that the FCN-based ultrasonic inversion method can be a useful tool for achieving an accurate quantitative reconstruction of multi-layered bonded composites. Compared with other conventional inversion methods, the reconstruction costs of this method are negligible once the trained network is obtained in the training process. Additionally, this method neither requires an initial velocity model, nor does it exhibit any cycle-skipping problems {{cite:41be6f4f33ecb3ea080198dd47685a92221a0039}}. However, several factors can affect the performance of the proposed method, such as the selection of the training datasets, the parameters used in the training process (learning rate, batch size, number of epochs, etc.), and the architecture of the network. For example, the proposed method is based on a supervised-learning network, and the capability of this network relies on the training dataset. In addition, the velocity models that can be accurately predicted in the testing dataset should exhibit similarly distributed structures to the velocity models used in the training dataset. Generally, a larger training dataset results in a more accurate trained network {{cite:f174ed0845d47efaeeb8120815cb7caeb2986881}}, which can help to achieve a more accurate velocity reconstruction; however, this in turn also increases the computational effort to train the network. The influence of the training dataset on the proposed method will be further investigated, along with the use of physical experimental data in the training process.
By noting the design choices and their rationales, the desired ML model is described in human-readable terms, and ML developers thereby produce a pre hoc explanation of the expected behavior and justification of the model, similar to the preregistration of a scientific experimental setup {{cite:e7df49e0d581b2ad40370e3dd78c81babd248a6a}}. This description can help make expectations concrete and tangible and can anchor later post hoc explanations {{cite:9ea06bfc3555517caec384d580c69ecc616db533}} of model behavior. Describing the desired model by iteratively refining answers to the eight design questions generates a stated justification which can act as an anchor for discussions and maintenance. It thereby gives a lens through which to explain, debug, or contest the model, opening up a discussion not just of the effectiveness of these choices but also of their appropriateness.
d
5f08c5d98ddb898ddf8e1136529f43b9
The proof of Theorem REF follows this roadmap: (1) we first show that the model learned with UMix falls into a specific hypothesis set {{formula:4289c348-f292-404b-8d60-a43fef485a02}} ; (2) we analyze the Rademacher complexity of this hypothesis set and obtain an upper bound on its complexity (Lemma REF ); (3) finally, we characterize the generalization bound using complexity-based learning theory {{cite:6f6accc6c3ee652409e945086a2a0e01c22e3cf4}} (Theorem 8). More details of the proof can be found in the Appendix.
r
af1520066bdd355431441676302c9ad9
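For context, a bound of the kind invoked in step (3) has the following generic form. This is the textbook Rademacher-complexity bound, not the paper's Theorem 8; the constants and the UMix-specific hypothesis set there may differ.

```latex
% Generic Rademacher-complexity generalization bound: for a hypothesis
% set H with loss bounded in [0, 1], with probability at least 1 - delta
% over an i.i.d. sample of size n,
\[
  R(h) \;\le\; \widehat{R}_n(h) \;+\; 2\,\mathfrak{R}_n(\mathcal{H})
        \;+\; \sqrt{\frac{\log(1/\delta)}{2n}}
  \qquad \text{for all } h \in \mathcal{H},
\]
% where R is the population risk, \widehat{R}_n the empirical risk, and
% \mathfrak{R}_n(\mathcal{H}) the Rademacher complexity bounded in step (2).
```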
In this section, we first present an overview of BinsFormer. Then, we introduce our instantiation of the adaptive bins generation strategy with the help of a Transformer decoder {{cite:4a42b5d150bb858db34a3b33a58929c1c24cd253}}. Finally, we present the auxiliary scene understanding task and the multi-scale prediction refinement strategies, which can be neatly integrated into the framework and improve depth estimation performance.
m
7677cb16c07ce235da9871247a05b0ab
Higher dimensional black holes and their properties have attracted considerable attention {{cite:a34ef5c3d9d9de54cb66fc6b4fdf7a567d9232a9}}, {{cite:9125c3787eb115bc7f2f99ddbe3937bd1c9ec238}}, {{cite:de681cb75081496e41d8a8c7fff41df1e55cc012}}, {{cite:1bfd3546678ac3e6bc6ebc17c9dd2cbb4cd1cc94}}, {{cite:c1263d9a9b1d3becf795dc8ef97ac872dbb92bf1}}, in particular with the development of the conformal field theory (CFT) correspondence, which allows statements about four-dimensional quantum field theories to be made using solutions of the Einstein equations in five dimensions, and with the advent of brane-world theories {{cite:99571d7762b1b23f439e4de6d3afaf38ef73e9f6}}, {{cite:287e85ef51de80e4344f12f606a0355d8facb0e9}}, which raise the possibility of directly observing Hawking radiation and of using black holes as probes of large spatial extra dimensions in future high-energy colliders {{cite:9e61a523006b868760051e40962cf8b3a8587528}}, {{cite:739ab7b627c729c7c746909435d5dcbf1a7ee503}}. On the other hand, a realistic black hole must be localized inside a cosmological background, and a natural background to consider is our universe. The Gödel universe is an exact solution of Einstein's equation in the presence of a cosmological constant and homogeneous pressureless matter. In recent years, there has been great interest in studying Gödel-type solutions to five-dimensional supergravity {{cite:0fdcb742fd0130e5162b1e920cd4e130b0ee78e8}}, {{cite:24ada564bb373703e11240afe72671b36bd0e91e}}, {{cite:0a6b89e9132445b28697d184e1d6647e9d4369ad}}, {{cite:af64cf56c53e4753541729ce5ef140af69b9ecb1}}, {{cite:74fd75c6dd2f23724f65ac64c0aae79221b926d0}}, {{cite:7f946a76dee65cc3aeeee44e0495151b8470ab0e}}. These developments motivate us to explore quantum corrections to fermion tunneling from the five-dimensional Gödel black hole via a modified Dirac equation based on the GUP.
i
aeb9ecf28cc6a560939707f43e47967d
On the theoretical front, it is of interest to extend the work of {{cite:453f78ef1a320492cf7c8b9b81954ce718a86048}} to binary tensors and investigate conditions on the signal-to-noise ratio to recover true factor matrices from an observed binary tensor with high probability. Moreover, the optimality of model selection approaches in binary tensor decomposition is still unknown, and it is worth investigating the consistency of AIC, BIC or other information criteria.
d
ee3c2be5c18a4d99f06a850ffdac1534
Implications and limitations. That methodological tools can affect scientific progress is well documented {{cite:9a7a43bf853b197241897a2a7fa6a8aff9fa10ad}}, {{cite:3bacc12146712b6b865f9df40f579f7b9628ccc4}} and is being studied extensively by statisticians and meta-scientists alike. Model comparison methods such as AIC and SC, like all other statistical inference methods, work best when their assumptions are met and might lead to invalid inferences under assumption violations. An unsurprising implication of our findings is that statistical theory should inform statistical practice even in the absence of well-known procedural violations such as p-hacking.
m
4289cd1fd14322b6ccf04ce61b97cad5
Deep learning has elevated the performance of speech enhancement in the past decade {{cite:200b4c28a40e8ad1366b9589ae1c7531f6682c26}}. Since the very first success of deep learning in offline enhancement {{cite:c531512161d629a35da608d084d7769b231fbcb7}}, there has been growing interest in using DNNs for low-latency speech enhancement, as many application scenarios require online real-time processing. For example, the recent deep noise suppression challenges {{cite:532ef60aa3df8bf57a2cc5814c564a8f37e71588}} target speech enhancement in a monaural teleconferencing setup, requiring a processing latency of less than 40 ms on a specified Intel i5 processor. Similar latency requirements exist in other related challenges {{cite:19c87e81d3c17738f2cbef739f49cdc444bff99f}}, {{cite:89f30d011cfacd6afa4c18a32792df004847450d}}. The recent Clarity challenge {{cite:75251e96804d4884ec834e2ca4610928f474a15a}} aims at multi-microphone speech enhancement in a hearing aid setup, requiring an algorithmic latency of at most 5 ms. {{figure:2b2c5b3c-1d5f-41da-b100-39cf7bc268c9}}
i
a4b77c1bd51ceb1a514e243a762b568e
The zx-calculus is a high-level and intuitive graphical language for pure qubit quantum mechanics (QM), based on category theory {{cite:4fedb160b3a4c22fd85e5fd68d2ff5f17b358e06}}. It comes with a set of rewrite rules that potentially allow this graphical calculus to be used to replace matrix-based formalisms entirely for certain classes of problems. However, this replacement is only possible without losing deductive power if the zx-calculus is complete for this class of problems, i.e. if any equality that is derivable using matrices can also be derived graphically.
i
f43bbc44ee43c800d75ff60f944594ca
In this section, we will evaluate the BER performance of the proposed algorithm by assuming perfect CSI. We consider the average BER performance with a sufficient number of realizations of the channel. Specifically, the corresponding channel matrix is generated according to (REF ). The channel coefficients are randomly generated based on a uniform power delay profile, and the delay and Doppler indices are randomly generated within the range of {{formula:8c77355b-b746-475c-84c4-a77e22421f2c}} and {{formula:8b95bacf-630d-46ba-80d0-51eb4eed42b8}} , where {{formula:2e8ca364-e4cd-4f13-bb41-431807747fc2}} and {{formula:de5aaa57-b5dc-4cb0-8540-3879b453bb53}} , unless otherwise specified. We note that, as mentioned before, the delay index can only be an integer number, while the Doppler index can be a fractional number {{cite:d80115fa511524c6ae6ebe5a2974691006685d74}}. Without loss of generality, we consider the QPSK modulated OTFS system with different numbers of paths, where {{formula:b37fe6ad-9d02-443c-800b-24ad531dccea}} and {{formula:005f0883-a52e-4257-b3a7-5c28b67047c7}} , respectively, unless otherwise specified. Specifically, we set the subcarrier spacing as 15 kHz, and the total bandwidth of the transmission is 960 kHz, unless otherwise specified. We also provide other detection methods for comparison that include the MMSE detection based on the DD domain effective channel {{formula:8c49c9d2-c506-4317-b6a1-915f9b4ee3cf}} , DD domain detection based on the SPA {{cite:8240d1f555563e5396776d4eb41477111d59ffd4}}, and the DD domain message passing algorithm in {{cite:d80115fa511524c6ae6ebe5a2974691006685d74}}. The considered SPA detection is derived based on the graphical model corresponding to the DD domain effective channel, whose computational complexity can be {{formula:6ec99de6-54e1-4ba6-86a5-072997a35890}} in the case of complex fractional Doppler shifts {{cite:8240d1f555563e5396776d4eb41477111d59ffd4}}. In particular, the considered SPA detection can theoretically approach the error performance of the optimal MLSE detection and achieve the same performance when the graphical model does not contain any cycle {{cite:8240d1f555563e5396776d4eb41477111d59ffd4}}. However, since the DD domain SPA detection requires a very high detection complexity in the fractional Doppler case, we only consider the integer Doppler case for simplicity. {{figure:4555377a-1543-4d28-8c09-d144feb0063f}}
r
f2461ca209068ced3f1cb0025524d272
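A minimal sketch of the channel-parameter generation described above: integer delay indices, possibly fractional Doppler indices, and a uniform power delay profile. The grid sizes, path count, and index ranges below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed OTFS grid and channel settings (illustrative only):
M, N, P = 64, 16, 4           # delay bins, Doppler bins, number of paths
l_max, k_max = 10, 3          # maximum delay / Doppler indices

# Uniform power delay profile: equal average power per path.
gains = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2 * P)
delays = rng.integers(0, l_max + 1, size=P)    # delay indices are integers
dopplers = rng.uniform(-k_max, k_max, size=P)  # Doppler may be fractional

for p in range(P):
    print(f"path {p}: h={gains[p]:.3f}, l={delays[p]}, k={dopplers[p]:+.2f}")
```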
In this subsection, we study the shadow of a particular rotating solution, in order to show the applicability of the results presented in Sec.  and . Our seed metric is the spherically symmetric family of generic magnetically charged regular black hole spacetimes, proposed in Ref. {{cite:da04ac8aab51677304e4717825f4899375b52f64}} by Fan and Wang. The line element is given by: {{formula:b2d8999a-4a17-477a-82b5-3011a4851d58}}
r
2cafdb19fdf296c695c89fa4a7d23f33
While approaches that average the word embeddings for a sentence are comparable to state-of-the-art results {{cite:b9371268935973cbbe9b424810d27817c211714d}}, Ave and Retrofit do not perform particularly well. This is likely due to the fact that logistic regression lacks the non-linearities which Iyyer2015 found helped, especially at deeper layers. Averaging all of the embeddings for longer phrases also seems to lead to representations that do not contain enough information for the classifier.
d
0167711fb372ff71fa12ded253523917
In Figure REF , we plot both the 90% EB CS on {{formula:3645790a-63f9-4661-9048-73812c26658c}} (top) as well as the corresponding sub-exponential e-values for the weak one-sided null {{formula:acb3c7b8-a6ae-4759-bf08-210f8f75b464}} (bottom), between HCLR and IDR, IDR and HCLR_, and HCLR and HCLR_ on 1-day PoP forecasts, using the Brier score. Note that these are the same three pairs compared in Figure 3 of {{cite:4088f598544cc168dc9d63f29f85b420d7d35742}}, which would correspond to e-values for the strong one-sided null {{formula:1fc56c54-f810-4aa2-a716-2a7cc0b21401}} . The EB CS is computed using Theorem REF and the gamma-exponential CM boundary from Lemma REF ; the e-values are then computed using Theorem REF and the exponential supermartingale that constructs the gamma-exponential CM boundary for the CS, as described in Section REF . We use the same set of hyperparameters ({{formula:450ef70f-7796-4e6b-84fa-41390ba840d7}} , {{formula:907a6a2e-2cff-4f7e-b8e9-bf9d61a0d4ad}} , and {{formula:dea410cd-ac46-42e3-a8b3-0ba64822ef0a}} with {{formula:f37cb0bb-1583-46b2-95a1-5e6ab9e08a0e}} , following Proposition 3(a) in {{cite:044d941968910e5c87266b50b0aca34299cf54b5}}) for both the CS and the e-values, as they rely on the same underlying supermartingale. Here we choose the significance level of {{formula:967fd550-c07c-4b54-a8db-74014e1a1237}} , because it roughly corresponds to the threshold of 10 for e-values to be considered a “strong” evidence against the null {{cite:4c43935d0c3381b61680539fdd452654029ebba2}}.
m
4373a9085d6bc646f197ac7c36c8f975
Furthermore, the Transformer is a self-attention-based deep neural network that is widely used in natural language processing (NLP). Transformers have recently been adapted to computer vision because of their strong representation ability, and they have shown strong competitiveness compared with other networks such as CNNs {{cite:c4af7cb75334221d6fca3345e135d93364e6813a}}. A Transformer has a more robust ability to represent global information than CNNs, making it possible to describe the microorganism structure from a complete image. In particular, the Vision Transformer (ViT) is one of the most remarkable visual Transformer methods to date; it is the first to directly use sequences of image patches (with position information) as input. ViT projects the patches into the original Transformer encoder and classifies the images with a multi-head attention mechanism, as in NLP tasks. The framework of ViT is shown in Fig. REF . {{figure:96b415a3-bf91-4d7b-b69a-e0afc637ee6b}}
m
ea6102bd3d3b1b8dd0fd41de1f70c352
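To make the patch-sequence input concrete, here is a minimal sketch of the ViT front end (patchify, linear projection, class token, position embeddings). The sizes (224x224 input, 16x16 patches, 768-dim tokens) are common defaults assumed for illustration, not settings taken from the text.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into patches and linearly project them (ViT-style)."""

    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is the standard trick for patchify + project.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size,
                              stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, dim))

    def forward(self, x):
        B = x.shape[0]
        x = self.proj(x).flatten(2).transpose(1, 2)  # (B, num_patches, dim)
        cls = self.cls_token.expand(B, -1, -1)
        x = torch.cat([cls, x], dim=1)               # prepend class token
        return x + self.pos_embed                    # add position information

tokens = PatchEmbed()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 197, 768]): 196 patches + 1 class token
```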
Perturbation-based attribution methods start from the intuitive assumption that the contributions of certain input elements can be reflected by the changes in the outputs when these elements are removed from, or preserved alone in, the input. However, to find the optimal results, it is theoretically necessary to traverse the elements and their possible combinations in the input and observe their impact on the output. Due to the computational cost of this traversal, how to obtain an approximately optimal solution faster is the focus of research on this problem. Occlusion {{cite:cdb8dbc3e8ab3cdce5c9ebf2d06e245b52904056}} and RISE {{cite:1244d3176f74d3895fbc0c731be7e8505a11e8ba}} perturb an image by sliding a grey patch or randomly combining occlusion patches, respectively, and then use changes in the output as weights to sum the different patch patterns. LIME {{cite:94c2643888814fabaf35b07aa671502db6c9df81}} approximates networks with linear models and uses a superpixel-based occlusion strategy. Meaningful perturbation {{cite:221c4e89144c6ae837c90b758bfdaa02824b6355}} converts the problem into an optimization task of finding a preservation mask that maximizes the output probability under constraints on the preservation ratio and shape smoothness. Real-time saliency {{cite:02e9fceb580d5376926e03a71ba28b7c21213a29}} learns to predict a perturbation mask with an auxiliary neural network. I-GOS {{cite:6780c99f27dadafa826a33cd9b4c3c3b27daae1d}} introduces integrated gradients instead of normal gradients to improve the convergence of the optimization process, and FG-Vis {{cite:db01dd0313b8018c63903ad7798e5ca905ccfcd4}} incorporates certain restrictions into the optimization process to avoid adversarial results. Extremal Perturbation {{cite:8f7727038464421f6c1390050122e975b207d39e}} factorizes the optimization procedure into two steps to solve the problem of the imbalance between several constraint terms. Most perturbation-based methods are model-agnostic, since they only access the input and output of a network and require no knowledge or modification of the network's internal structure (except for I-GOS and FG-Vis, which need to change the BP rule). However, perturbation-based methods are usually time-consuming because they generate the final results by iteratively adjusting inputs and observing outputs.
m
29670438385b6183bbf9bdc507961f42
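The sliding-patch idea behind Occlusion is simple to sketch. The toy model, patch size, and stride below are illustrative assumptions; real use would wrap an actual classifier's class score.

```python
import numpy as np

def occlusion_saliency(model, image, patch=16, stride=8, baseline=0.0):
    """Slide an occluding patch over the image; the attribution of a region
    is the drop in the model's score when that region is masked out."""
    H, W = image.shape[:2]
    base_score = model(image)
    saliency = np.zeros((H, W))
    counts = np.zeros((H, W))
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            saliency[y:y + patch, x:x + patch] += base_score - model(occluded)
            counts[y:y + patch, x:x + patch] += 1
    return saliency / np.maximum(counts, 1)

# Toy stand-in for a classifier's class score: mean intensity of a centre crop.
toy_model = lambda img: img[24:40, 24:40].mean()
heatmap = occlusion_saliency(toy_model, np.random.rand(64, 64))
print(heatmap.shape, heatmap.max())  # attribution peaks at the centre region
```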
Encoder Module.   Our method adopts a CNN-Transformer hybrid design that uses 40 convolutional layers to generate multi-scale feature maps, instead of using a pure Transformer. Such a convolutional stem provides two advantages: (1) a convolutional stem helps Transformers perform better in downstream vision tasks {{cite:3e984b2025015299ec183c291e84cc4e2806fdb3}}, {{cite:b7ee7039fdaa4d5702f1009d7bb5819443e683c2}}; (2) it provides high-resolution feature maps alongside parallel medium- and low-resolution feature maps, which helps produce better representations. In this way, we can construct a feature pyramid for the Transformers and utilize the multi-scale feature maps for the downstream medical segmentation task. With the aid of feature maps of different resolutions, our model is capable of modeling multi-resolution, spatially local contexts.
m
6e7f7b4320d09f16091f7ea7221f926f
Contrastive learning has been successfully applied in the area of self-supervised learning ({{cite:ea955306170fac06a51b840f01df0425353cfa4d}}, {{cite:705b758293166193954cf34ce17a2813afa3982b}}), especially on computer vision tasks ({{cite:ce5d0fb3c3465a403671f24ef244f541a2f8a3b5}}). Many extensions and improvements have been made, including w.r.t. the learning loss function ({{cite:56a9269108e3b4a845f2869addbe3735767f936d}}), data augmentation techniques ({{cite:e2606c6eb9b0fa1f99123dd80d4a4e71c193e432}}), network architectures ({{cite:705b758293166193954cf34ce17a2813afa3982b}}) and computational efficiency ({{cite:330e3fa36c442a200e4814293e86036a200407c7}}). Our proposed method is partially based on this technique, with two major differences. First, our technique addresses the CDE problem, not density estimation or self-supervised learning. Second, the distribution we choose is unknown, potentially intractable, and/or has a large set of highly dependent components, which violates the usual restrictions for performing noise contrastive density estimation but allows us to tailor the noise distribution precisely to the estimated distribution. To the best of our knowledge, the technique closest to our method is designed to evaluate the deviation from the independence setting, {{formula:180418d6-7eaa-4aa9-b898-725579ae5904}} when all random features {{formula:e3a5f4f6-b6fe-4676-b80f-b2d5f4288711}} are independent, in an unsupervised framework. It has been briefly described in the second edition of {{cite:bf497ccfb199246c22ccaa1d4932dc1185a2b33d}} (pages 495-497), based on the noise contrastive reformulation given above with {{formula:ff8807c6-eb7a-4ced-8c47-e4bffa48f4e7}} . Note that {{formula:e8ed9613-a61e-45e9-8a7b-f0ea08a5d074}} is unknown and corresponds to what we will later call the noise distribution. Nevertheless, noise samples can be generated by applying a random permutation to the feature columns of the dataset, which is sufficient to discover association rules between the {{formula:8fa8d6b8-8e56-43ce-8ddb-a8a003159f88}} features but not to estimate the density function {{formula:bee5843e-0dce-4ae2-8450-1d770c9287c6}} . On the other hand, although each component of {{formula:288e5b52-5c4f-4c54-a05e-62de8a972655}} is one-dimensional by definition, the errors made when estimating the marginal densities {{formula:2dc7d5b7-a5a2-4990-bbd1-3a810e6e11d8}} will compound when estimating {{formula:9b0b74f3-7e61-4a5d-87cf-7ffbe7409e94}} , which means that the dimensionality of {{formula:71713211-9367-4816-8b4d-b13623dcc79e}} is again a limiting factor.
m
e534cd9cee8360af0856c7f14b09c278
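The column-permutation trick for generating noise samples mentioned above can be sketched as follows: shuffling each feature column independently preserves the marginals while destroying all dependence, which is exactly the product-of-marginals noise distribution. The toy two-column dataset is an illustrative assumption.

```python
import numpy as np

def permute_columns(X, rng):
    """Independently shuffle each feature column: the marginals of the noise
    match the data, but all dependence between features is destroyed,
    i.e. the noise follows the product of marginals p(x_1)...p(x_d)."""
    Xn = X.copy()
    for j in range(X.shape[1]):
        rng.shuffle(Xn[:, j])
    return Xn

rng = np.random.default_rng(0)
# Toy data with strongly dependent columns.
z = rng.standard_normal((1000, 1))
X = np.hstack([z, 2 * z + 0.1 * rng.standard_normal((1000, 1))])
noise = permute_columns(X, rng)

# A classifier trained to separate X (label 1) from noise (label 0)
# estimates the density ratio p(x) / prod_j p(x_j).
print(np.corrcoef(X.T)[0, 1], np.corrcoef(noise.T)[0, 1])  # ~1.0 vs ~0.0
```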
Drowsy driving continues to cause accidents, so accurate drowsiness state classification is necessary to prevent road traffic accidents. Physiological signals are frequently used to estimate mental states. In particular, EEG can measure and monitor brain activity directly {{cite:4dcd390955420be62a7141d33c8f9c2e6bfc07b3}}, {{cite:c575d29ff1ac5078f04a5f1739160fb591e3114f}}, {{cite:77af059bebc6d4ce8dc78040e028a6af0c4e6c86}}. Recently, detecting affective states such as emotions and mental states from EEG signals, i.e., affective brain-computer interfaces (BCIs), has consistently gained interest {{cite:3f03b35c04654aaa32e97f29bb21732c737ce39d}}, {{cite:e9e56b35c253987ec0de5e03c4e8133814d817a3}}, {{cite:571ee1ea05d4e44ddf9ce52e0ff0c5a8da715a13}}. For instance, Xu et al. {{cite:641b3291e2982dd8f0540f8cfb299b306cb449a9}} proposed a unified convolutional attention neural network that concurrently identifies personal information and detects the driver's drowsiness. Paulo et al. {{cite:1e1676398293c02c7063573ce2743debfbef034c}} proposed a drowsiness detection model with spatiotemporal image encodings such as recurrence plots and Gramian angular fields.
i
3faa41fe13b8e4b1843e3985096e202b
Previous work in ABC reduces the data dimension by seeking low-dimensional summary statistics designed to retain information about the parameters {{cite:e41098743856b4feb86ef53bad392d0489685d1d}}. On the other hand, for conjugate linear–Gaussian models, {{cite:966df2ac20e459fdc30257baec568b670cfafc0d}} find maximally informative subspaces of the data, of any given dimension, by solving an eigenvalue problem depending on the likelihood and on the prior covariance. {{cite:6bccd5f0b1e1edc8defe56a8369013912deab9fc}} seek low-dimensional projections of the data for generalized linear models, and these projections are endowed with error guarantees under certain conditions (e.g., strongly log-concave posteriors). Optimal experimental design can also be seen as a way of reducing the data dimension, by sub-selecting the most important components of the random vector {{formula:5e66cdbc-e9ec-4d8b-a408-4aeafd501ded}} {{cite:47bda535255fcab8141d7c6a4d486385e179a203}}, {{cite:9aa1294f565b1ac9a014172012a48375b9456d17}}, {{cite:84bdc259df3c2f2cab45aefdac706462ad01160a}}. It is important to note that all of these dimension reduction methods are applied before the data are realized, and hence do not depend on the observed value of {{formula:2e21ffb3-1979-423c-aba3-150acd164503}} . These data summaries or subspaces can thus be re-used for multiple instances of {{formula:6437aaf2-4777-47e5-99d9-60af41af731a}} . Such approaches differ fundamentally from, e.g., Bayesian coresets {{cite:6604b2eb32aba8677352aee3e0c9fdbbc0d4ea2d}}, which summarize a given realization of {{formula:25ef649d-e925-4d19-aac2-32d3da74fcc0}} via a smaller weighted subset of the data (assuming, moreover, that elements of {{formula:7afa1605-bc68-4719-bbf7-ed21af782f20}} are conditionally independent given {{formula:da0bb24e-ed20-4b03-9d98-fb333a6c19f8}} ).
i
c2da45657de72ae7cb859f79d64a0c63
From a neuroscientific perspective, it is interesting to ask whether a similar power-law code with a similar exponent is a hallmark of canonical cortical computation, or whether it reflects the unique specialization of lower visual areas. Existing results in theoretical neuroscience point to the fact that neural code dimensionality in visual processing is likely either to increase transiently and then decrease as stimuli are propagated to downstream neurons, or to decrease monotonically {{cite:391c5194754d875943ee32d14a511ae136be48cd}}, {{cite:d5d64760fe488a14b4e34fd64674f91c186a309c}}, {{cite:58ac6eee4e8e1859faa640e1eba04ae87081529a}}. Resolving the question of the ubiquity of power-law-like codes with particular exponents can therefore be addressed simultaneously in vivo and in silico, with synthetic experiments probing different exponents at higher layers in an artificial neural network. Moreover, this curiously relates to the observed, but commonplace, spectrum flattening for random deep neural networks {{cite:ba491756996bb0eab4a9c5fd617864a64baa988e}}, and to questions about its effect on information propagation. We leave these questions for future study.
d
ff0ddf412b8cf81da4b44c4c5b9ec053
A set of user requests (or topics) {{formula:62b95eaf-2b6b-4c83-a9b9-58a722c1be7e}} in conversational form (e.g. “What is Fickle Creek Farm?”), each with a label, ranging from 1 to 4, that reflects whether clarification is needed; a question bank containing possible clarifying questions collected for the user queries via crowdsourcing {{cite:8ad8638ce63a73670649ba79549e3a7fe451d6a4}} (e.g. “Do you want to know the location of fickle creek farm?”); and a set of user answers, one for each question (e.g. “No, I want to find out where can I purchase fickle creek farm products.”), generated via crowdsourcing. The answer {{formula:fbaba460-d04d-48ec-b689-7b8bec9429b2}} to a clarification question {{formula:b6895b7b-cf11-4250-b774-1120049aa394}} can be used to measure how much asking {{formula:686ced95-866e-4541-87a4-c3946832b76e}} can help improve the search result, thus providing supervision for training question selection models.
m
c3ff3deeb70a04435fa613a831d5fda3
To overcome these problems caused by traditional ULAs, nonuniform linear arrays (NLAs), also referred to as sparse arrays, were introduced {{cite:dff3699df9484aa9075385aba3d69cb2cd110a9b}}. Consider an {{formula:84be8187-2491-430f-83bc-99f36dc9e1b5}} -sensor NLA with sensors located at {{formula:1e3a2bc9-c7d8-406d-85d5-431a4c75de8c}} , where {{formula:87ebac64-f587-4016-8285-36152d8698a1}} belongs to an integer set {{formula:5be86a47-84d6-4548-9c4e-19284bac5a42}}
i
b2e291d4fe61c6f0b098b158d486db35
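One way sparse arrays gain over ULAs is through their difference coarray, whose distinct lags act as virtual sensors. A small sketch follows; the six-element nested geometry is an illustrative assumption, not the array studied in the text.

```python
import numpy as np

def difference_coarray(positions):
    """All pairwise differences n_i - n_j of the sensor positions; the set
    of distinct lags determines how many sources the NLA can resolve."""
    p = np.asarray(positions)
    return np.unique((p[:, None] - p[None, :]).ravel())

# A two-level nested array with N1 = N2 = 3 (illustrative, 6 sensors):
nested = [1, 2, 3, 4, 8, 12]
lags = difference_coarray(nested)
print(len(lags), lags)  # 23 distinct lags, contiguous from -11 to 11
```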
In this section, we extensively compare our frameworks with previous methods across various datasets and settings. The other compared state-of-the-art results are taken from {{cite:fab2bc87d55b9ad5072a9d8531d041f5a881360c}} and are marked with *.
m
b7c2061d76e62a92d9910a9ba80b97b2
There are many interesting extensions of the results in this paper to be explored in the future. First, as we have mentioned in Remark REF , our results on the connection of Riemannian and Euclidean Hessians are established at FOSPs. It is interesting to explore whether it is possible to connect the geometry of the manifold and the factorization formulations of low-rank matrix optimization at non-stationary points. By achieving this we could (1) connect approximate SOSPs between the two formulations, which is useful in practice as standard optimization methods such as stochastic or perturbed gradient descent can only find approximate SOSPs {{cite:ee4a93272939fed0c2d76916a8f2fc8fb2613ced}}, {{cite:d025f82ae27bef7a08ebf1d4b20c722a1b56ca98}}, {{cite:f50a52cc112567fb2d619c1ba5e178cb816b2789}}, {{cite:ad2f08b0ff4929d1bee4210676787247b7c7c75c}}; and (2) transfer global geometry properties (the landscape of the objective in the whole space rather than at stationary points) between the two formulations {{cite:a9c16b408f0ca9f8f57820b50af84d15186173b4}}, {{cite:bbc15477dcb9e2840dafc2a1e4b7b80aa2862207}}. Second, in this work we consider the natural embedded geometry of low-rank matrices in the manifold formulation. Another choice for handling low-rank matrices is the quotient manifold {{cite:8598512aebcd15da45df983f393de811345c1023}}. It is interesting to investigate the landscape under the quotient geometry and its connection to the embedded geometry. Finally, the manifold approach is a general way to deal with geometric constraints in optimization problems, and here we show a strong geometric connection between it and the factorization approach in dealing with the rank constraint in matrix optimization. From an algorithmic perspective, connections of manifold methods with the sequential quadratic programming (SQP) method for solving equality constrained optimization problems and with common nonlinear programming methods for handling orthogonal constraints were revealed in {{cite:bf75a094e2552884efb69b39bc4e6f998abefcf8}}, {{cite:fd34e5df335dee45b504d89fe2994545c4e15eab}} and {{cite:bf75a094e2552884efb69b39bc4e6f998abefcf8}}, respectively. Finding more instances in which the manifold approach is geometrically or algorithmically connected with other well-known approaches in general nonlinear optimization is an interesting direction for future work.
d
1c18d5d5569cb38cf724568f8304bfd5
We evaluate four recently introduced personalization methods, PersFL {{cite:a999cbe8aea3ec1a6c2e39bda69a5567cbf67ecc}}, FedPer {{cite:354bf13aa9f49a5c98e981f84e8f0c9a6c8f6b9a}}, pFedMe {{cite:64b193734d297cec327119b27dde42c06b35bbdd}}, and Per-FedAvg {{cite:79c266dbad517388b4c9c8b7b9c4c4ae47d66eaa}}, assessing their performance and fairness through the introduced metrics.
r
fe8352ef298567ab7c50a20d0b848da9
The universal approximation property of neural networks (see {{cite:e132b7ed841023184b16f4da1c2f5f33366460a7}} and {{cite:d02f908c4b4f1004595583cb1ac4b7c76a2d0e32}}) might lead us to assume that GANs can simulate any distribution from a Gaussian prior. However, neural networks are by design almost-everywhere differentiable functions with bounded derivatives, so as to limit exploding-gradient phenomena (see {{cite:42101b3e04ea7c4f9b737624043ad96cfbcb6f34}}). By the Rademacher theorem (see {{cite:c4e2701ae5b6ab1be152d494afd482b0da3ac335}} for a proof) and the mean value theorem, this is nearly equivalent to saying that neural network functions are Lipschitz continuous. This fact fundamentally limits the ability of GANs to express arbitrary probability distributions given a Gaussian prior. There are numerous definitions of the concept of "fat", "long" or "heavy" tailed distributions. They are usually not equivalent, but all convey a sense of having a larger probability of being "big" compared to a Gaussian or exponential distribution. Here we focus on two possible ways to define the concept. One, similar to {{cite:75548fa3ae8dbb15fa8438aafeb004395b92c883}}, focuses on finite samples and relies on classical concentration inequalities. The other is asymptotic and uses Extreme Value Theory to prove a new theorem in the continuity of the theoretical work of Huster et al. in {{cite:450b19a22d58946195eef59cdc18cd6418fbd30b}} and the experimental approach of {{cite:ca2c550463f43ea88319228911ff3d1d20bcdd63}}.
i
edb69e1caf066caa3f9fe1dd1a71ee98
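The finite-sample side of the argument is easy to see numerically: a Lipschitz image of a Gaussian prior concentrates like a Gaussian, while a genuinely heavy-tailed law does not. A small simulation sketch, with an arbitrary 5-Lipschitz linear map and a Pareto target as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A Gaussian prior pushed through a Lipschitz map stays light-tailed.
z = rng.standard_normal((n, 8))
W = rng.standard_normal((8, 1))
W /= np.linalg.norm(W)             # unit spectral norm: a 1-Lipschitz layer
light = 5.0 * (z @ W).ravel()      # overall Lipschitz constant 5

# A genuinely heavy-tailed law for contrast (Pareto, infinite variance).
heavy = rng.pareto(1.5, size=n)

for name, s in [("Lipschitz-of-Gaussian", light), ("Pareto(1.5)", heavy)]:
    q = np.quantile(np.abs(s), 0.999)
    print(f"{name:>22}: 99.9% quantile {q:8.1f}, max {np.abs(s).max():10.1f}")
```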
In this section we numerically compare the classical deep ensembles {{cite:2ac8e9051b816479a32179c7f2b11d58ce287536}} (DE) to the introduced extended approach (DE extended). The implemented code, including additional examples, is provided in BayesianDeepEnsembles. Following Algorithm , the network training is exactly the same for the classical and the extended deep ensembles. The inferred regression function is identical in both cases, namely the average over the estimates of all ensemble members {{formula:4ef04c4e-cbe9-4ca4-85c3-17918232d9a3}} at the trained MAPs {{formula:42af50a6-a43c-47dd-bddb-bb060a28bbbc}} . The difference lies only in the additional term of the extended covariance matrix in (), which is related to the epistemic part of the uncertainty.
r
14b08dd8c171aabe65288c27ef17b924
Speech emotion recognition (SER) requires a dataset that includes rich emotional utterances and labels. There are a handful of SER datasets, for example, IEMOCAP {{cite:c15f45789e919b1ec673d8ab517da4d3cb5a58e6}}, EmoDB {{cite:c56b77ad8956dee04c83c63e82e088a72f768021}}, RAVDESS {{cite:7da715c1071a529acd947580352f7d8770781fa5}}, and TESS {{cite:8161e8ad6868f637b37fc7c06e2da014655cafa5}}. These SER datasets were designed for text-independent emotion recognition. In other words, a system based on these datasets should recognize the speaker's emotion regardless of the lexical information. On the other hand, an SER dataset for WUW is confined to a signature keyword such as Ok Google or Hey Siri and, as a result, the utterances are very short. RAVDESS and TESS have lexically matched characteristics. In particular, TESS has the shortest average utterance duration, about 2.06 seconds, as shown in Table REF . However, WUW utterances are generally even shorter, a second or less. OK Aura, a recently released WUW dataset, contains 1247 utterances from 80 speakers with rich metadata annotations such as gender, room size, accent, and emotion {{cite:7d7803a5f4486380c754433bddb35e62219bc041}}. The dataset distinguishes utterances with three emotions, annoyed, friendly, or neutral; however, only 218 of the 1247 are labeled.
i
7586dc67662ddabac082e7199723ba0f
In Table REF , we compare the zero-shot classification performance with existing approaches. Some models require extra pre-training on 3D point cloud datasets. CLIP2Point trains a depth map encoder on ShapeNet dataset {{cite:184df7fdd50a2e78eb66dd4afbdfd80cd46add7f}}, and then uses it for a 3D zero-shot classification task. Cheraghian {{cite:0043b6a1d0220dfa2c81742a74e9cd2bd8df0d2f}} directly extracts point cloud features with a 3D encoder. They sample `seen' categories in the dataset to pre-train the model and validate on the `unseen' categories. In contrast, PointCLIP and our V2 discard any 3D training and can directly test on 3D datasets. For all three benchmarks, our approach outperforms existing works by significant margins. PointCLIP V2 achieves {{formula:9b451216-7ae9-487a-b345-5e1d7b659cfa}} and {{formula:6d9fffe8-7974-479a-85b0-36a0e85d4f39}} accuracy on ModelNet10 and ModelNet40, respectively, surpassing PointCLIP by {{formula:6ee81220-b541-4056-b140-dd642db5a372}} and {{formula:5f96e650-e720-4b86-90a5-7d602fa4793e}} . V2 also achieves {{formula:6c87dd23-6cd0-4a96-973e-27dd471ee50a}} on PB_T50_RS split of the ScanObjectNN dataset, demonstrating our effectiveness under noisy real-world scenes.
r
7ca21a2241ebf62149de76fa0f87904f
Rough set theory {{cite:eb4a563cdadff9bdac2e2ba13481e9bebb4326e6}} is an effective mathematical tool for dealing with inaccurate, fuzzy and uncertain data, and it has been successfully applied in many fields, such as machine learning, pattern recognition, financial analysis and decision analysis {{cite:05d87a71e3d33bc9c6d3ff9ca92ea31e6891f0c5}}, {{cite:9fffcb1b345bf5a78ae61c3d2b935a8c9d120ee7}}, {{cite:89ea7e44379cee3955e53ffa9c3682780a47786f}}, {{cite:7d5377f10d6d39fce2b326d6e4b233338e4700f4}}, {{cite:797f2c31bb5d942b3be9dbd18469a22a6087e156}}, {{cite:df139596b7cb3611700aeeff4f915ca2b2e27556}}, {{cite:72f81bd89e86ed83234a6946ec951750f0b3a26b}}, {{cite:d9c530453feb0b2c59007190bbe1513d8078f1af}}.
i
1326f15fa6b941666b00e36b44d4ff06
When training AVOD, we follow the methodology of Chen et al. {{cite:cc596b66ef934261f2f48c831c36f4782091198d}} on the KITTI dataset {{cite:6b620c3169b524903716f91a052ef295c0a01a62}}: we split the trainval set into a training set with 3712 samples and a validation set with 3769 samples. We train all models to closely match the results stated in the original paper.
r
90ec82522ee87a54b9f803717d4a7950
Our survey area covers the entire extent of the deepest portion of the M101 H1 imaging survey by {{cite:29d51e6f39c83cab47dd06780c0cf6d8954d54c5}}. That survey had a limiting H1 column density of {{formula:d9f98039-2f46-4ac9-a26c-fe8b01872735}} and an H1 mass detection limit of 2 × 10^6 solar masses; for comparison, that survey would have detected even the lowest H1 mass objects in the SINGG survey if they were in the M101 Group. The {{cite:29d51e6f39c83cab47dd06780c0cf6d8954d54c5}} survey did detect a number of discrete H1 clouds in the M101 Group, along with a diffuse loop of H1 extending 85 to the southwest of M101. This loop and the associated H1 clouds likely arise from tidal interactions between M101 and its companions, yet our deep narrowband imaging presented here shows no evidence of ongoing star formation in this gas, either in discrete sources or in diffuse emission. Nor did the deep broadband imaging of the M101 system by {{cite:2a48cdab15f19be1459a9bebc2b9d7b119f7fda5}} show evidence for diffuse light in this gas. If extended star formation was triggered in this gas by the past interactions in the M101 Group, it must have been very weak and died out quickly.
d
ea80affd51583da10c723b7d4e25ceab
Unfortunately, UCBVI {{cite:87da2d75c069efbd4b8fcf8e998a96c55836f51a}} does not fit into our framework. However, even if it did, we argue that it would not lead to improved results. Indeed, the algorithm makes use of Bernstein's inequality to handle the estimation error, resulting in the need to bound an additional term of the form: {{formula:25edac8a-ed4b-46b9-be83-f699e60e0580}} . Lemma REF shows that this term alone introduces the following dependency on the delay: {{formula:28a211e4-89d1-4931-98d2-18c8057efe9d}} , which is worse than any of the other algorithms we consider in this paper. UBEV {{cite:b9500d7e8e7e1e86c87d4e50ecc1c3ce5bcf5e73}}, for example, has a delay dependency of: {{formula:8001a660-41a0-44c0-ad91-b16060d224ad}} ; an improvement of {{formula:669d8c2a-6db1-4a1e-9877-b3f230dd22cd}} . Moreover, due to bounding the empirical variance, UCBVI and UBEV will have the same leading order term in their regret bound. Therefore, we do not believe that using UCBVI in the delayed setting will lead to an improvement over UBEV, an algorithm which does fit into our framework.
d
8b98d4fd439aba4aabbdd91ddc1e31e8
TTFS {{cite:6ea459296297ab06b09eba244057967b104b4dad}} 99.31 99.20 98.76 60.58 83.45
r
a94451be91e9572d01021c259f6a1c90
In Section , we listed the different challenges inherent to a sub-percent calculation of the leading order HVP. Interestingly, they do not affect the same time ranges: while discretization effects are most important at short distances, FSEs and the specific treatment of the tail of the integrand become more relevant at large Euclidean times. Thus, the window method is a useful tool for comparing different lattice calculations. The choice made in {{cite:e8f019e75657e80df24f3e575accb011a1cd8c0a}} has several advantages. First, by removing the short distance contribution, discretization effects are suppressed and the continuum extrapolation may be smoother. Second, the suppression of the tail not only significantly reduces the noise at large Euclidean times but also flattens the chiral behavior. Finally, finite-size effects are much smaller for this quantity. However, some difficulties remain: the uncertainty associated with the scale setting, discussed in Section REF , is still present. The situation is even a bit worse, since the definition of the window itself depends on the scale setting determination.
m
1223222d4dfe7951654bec11849fe0ff
Sampling numbers may also be defined without imposing the linearity of {{formula:f53b9752-e05f-4863-a48e-ec916d997bfd}} , leading to smaller quantities. In what follows, we shall establish upper bounds on the linear sampling numbers, which in turn are upper bounds for the nonlinear ones. We refer to {{cite:43566cac5ac4913fec8784fa01a0969ffcbf0b1c}} for an introduction and study of sampling numbers in the context of general linear measurements, and to {{cite:1bf268e1189f5ac4a8a3de60308e09a490d172c8}}, which focuses on point evaluation, also termed standard information.
r
a6e4aab1659a49d21abbf4fa95939326
Note that for {{formula:2376b065-f39f-445c-880d-4e9cb539846e}} the space {{formula:ed4dbcfa-2dc9-4c5d-b8d0-f49e8acbebd5}} coincides with the Lorentz space {{formula:60cae310-6031-41b4-9f63-ccae84d500df}} , which consists of all functions {{formula:29262d53-49e2-4d04-b003-b0a802526f36}} such that (see {{cite:ad8c114f12440cfe825da4cc2de43325c7ebec6c}}) {{formula:7627b447-9b8a-40d0-95f6-1b40717ce0ff}}
i
45ebc48853d0ff1f130651b79940a8f2
We next consider three parameter estimation problems and show that MMD has a benign landscape for them: the MMD objective has no spurious local minima and all its saddle points are strict. This implies that gradient-based methods can find a global optimum {{cite:e662e7c5dc02798fddffcd75963f6367f39e76cf}}. We focus on the cases of a Gaussian with unknown mean (Sec. REF ), a Gaussian with unknown low-rank covariance (Sec. REF ), and a mixture of Gaussians with unknown means (Sec. REF ).
r
651e54f3eed054e761503f985fcebec6
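For the unknown-mean Gaussian case, the benign landscape can be checked directly with a few lines of gradient descent on a sample-based MMD objective. This is an illustrative sketch only: the RBF kernel bandwidth, sample sizes, and step size are assumptions, and the paper's analysis concerns the population objective.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 5.0   # RBF bandwidth, wide enough to carry gradient signal far away

x = rng.normal(3.0, 1.0, size=500)   # data from N(mu*, 1) with mu* = 3
eps = rng.normal(size=500)           # fixed reparameterization noise

theta, lr = -4.0, 10.0               # deliberately poor initialization
for _ in range(300):
    y = theta + eps                  # model samples from N(theta, 1)
    K = np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * sigma**2))
    # Only the cross term -2 E[k(x, y)] of MMD^2 depends on theta here,
    # since y_i - y_j = eps_i - eps_j is theta-free.
    grad = -2 * (K * (x[:, None] - y[None, :])).mean() / sigma**2
    theta -= lr * grad

print(round(theta, 2))  # converges near 3: descent reaches the global optimum
```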
However, most of the state-of-the-art DA methods for medical image segmentation require that the source and target domains have the same set of anatomical structures, even if they come from different imaging modalities {{cite:eab230d89fbdd82bead10e7e9efdfaa81bf7b064}}. Such a requirement prevents these methods from leveraging images of other anatomical structures for training, and relaxing it would enlarge the scope of candidate source domains, which helps to improve segmentation in the target domain when a source domain with exactly the same anatomical structures is not available. For instance, for segmentation of coronary arteries from 2D X-ray Angiograms (XA) with limited annotations, it is hard to find an annotated dataset with the same structures as the target domain. However, there are many public fundus images with annotated retinal vessels (e.g., DRIVE {{cite:30ac1fccaeed69ec5fbfce35c778348778531c28}} and STARE {{cite:c62ba9d696b4d8585e969a9ff5d8dcbfeb25c8d9}}), where the retinal vessels share similar tubular structures with the coronary arteries, as shown in the first row of Fig. REF . Another example is the similar circular structures between the Left Ventricle blood cavity (LV) and the Myocardium (Myo) in Cardiac MR (CMR) images and the optic cup and disc in public retinal images (Retinal), as shown in the second row of Fig. REF . Hence, it is promising to transfer knowledge from these retinal vessel datasets to the coronary artery segmentation task {{cite:9669d1c1018e495155d113dd9636c42acd026a0e}}, as they can be regarded as cost-free source domain images. However, the different morphologies and contexts of these two kinds of vessels make it hard for existing UDA methods, which are designed to deal with the same anatomical structures, to achieve accurate results.
i
c66612b1bb0616acf996c7f39c2580e3
Model-agnostic methods, on the other hand, make no assumptions about the internal structure of the model and depend on the relationship between changes in the model inputs and model outputs. This is achieved by training a global mimic model to approximate the original model and then locally explaining the mimic model {{cite:86ef954cba38cd4048457593d1323e10d10f501a}}, {{cite:ff3ab700acb7386ea8d9607225230072652cae31}}. Alternatively, a mimic model can be fit locally to the original model for each prediction. In the LIME method {{cite:746e050a5c88d54465444fd00d05395468c4b5d1}}, the coefficients of a local linear mimic model are used as the explanation. In Anchors {{cite:1590de86a3db754ac67632d8fd0a669463559e89}}, the rules of a local decision-rule mimic model are used as the explanation.
m
8b3b1cc3701c107da2d49b2b1d3be453
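The local-mimic-model idea can be sketched for tabular inputs as a proximity-weighted linear fit. This is a simplified LIME-style illustration, not the library's implementation: the Gaussian perturbations, the proximity weighting, and the toy black box are assumptions.

```python
import numpy as np

def local_linear_explanation(model, x, n=2000, scale=0.3, rng=None):
    """Fit a weighted linear mimic model around x; its coefficients act as
    a local, model-agnostic explanation (LIME-style, tabular variant)."""
    rng = rng or np.random.default_rng(0)
    Z = x + scale * rng.standard_normal((n, x.size))    # local perturbations
    y = np.array([model(z) for z in Z])
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale**2))  # proximity
    A = np.hstack([Z, np.ones((n, 1))])                 # intercept column
    # Weighted least squares: solve (A^T W A) beta = A^T W y.
    AW = A * w[:, None]
    beta = np.linalg.solve(AW.T @ A, AW.T @ y)
    return beta[:-1]                                    # per-feature weights

black_box = lambda z: np.sin(z[0]) + 0.1 * z[1] ** 2    # toy nonlinear model
print(local_linear_explanation(black_box, np.array([0.0, 1.0])))
# ~[1.0, 0.2]: the local gradient, recovered without opening the model
```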
Even in the transductive setting, where the baselines claim their primary contribution, our approaches still significantly outperform them on five out of six datasets. Note that our models achieve almost perfect scores on Reddit and Wikipedia, while the baselines are far from perfect. Meanwhile, the strongest baseline on these two attributed datasets, TGAT {{cite:130614e3fd36187579b0e7c8ca81f47ebbd2c8b7}}, degrades considerably on all the other datasets, where informative node/link attributes become unavailable.
r
9ad9728169a0417c0261aa1f0d0563e7
The addition of the MMD cost function term significantly improves the results of regression in the low data regime. Furthermore, to the best of the authors' knowledge, this method achieves state-of-the-art results on the embeddings of {{cite:0cac7223ab159ae2a919b0d47dfcd3f0bbe3ea59}}. The authors also experimented with deeper nets but did not observe significant performance improvements, an observation consistent with {{cite:51ca7e72398d7c144551ae4519a019d6fde7bc2e}}.
d
874ee158a482935bf181279a73205de4
In this section, we present the experimental results of the proposed metric for various standard GANs and datasets. We use {{formula:1bb9a1ef-75a7-45b2-a585-05a51752c12a}} for the manifold approximation, which was used as a robust choice for the neighborhood size in {{cite:acb2873cdd6a3beb4ad527f57c2480d02a6f38aa}}. We use 30k real images to approximate the real manifold and compute the rarity of 10k fake images. We use VGG16 as the feature extractor for all experiments except those in Section 4.4.
r
9a9bd2cdefd188b4c0921400b9b67a1c
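A simplified sketch of a k-NN-ball rarity score of this flavour: the real features define a manifold approximation via their k-NN radii, and a fake sample inside some ball is scored by the smallest such radius (larger radius means a sparser region, hence a rarer sample). The feature dimension and sample counts are toy assumptions, and details of the actual metric may differ.

```python
import numpy as np

def knn_radii(real, k=3):
    """Radius of each real feature's k-NN ball (manifold approximation)."""
    d = np.linalg.norm(real[:, None, :] - real[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]   # column 0 is the point itself

def rarity(fake, real, radii):
    """Rarity = smallest radius of a real k-NN ball containing the fake
    sample; NaN if the sample falls outside the approximated manifold."""
    scores = []
    for f in fake:
        d = np.linalg.norm(real - f, axis=1)
        inside = d <= radii
        scores.append(radii[inside].min() if inside.any() else np.nan)
    return np.array(scores)

rng = np.random.default_rng(0)
real = rng.standard_normal((300, 8))   # stand-ins for extracted features
fake = rng.standard_normal((50, 8))
print(rarity(fake, real, knn_radii(real, k=3))[:5])
```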
where {{formula:7d350e4f-8e9f-4866-ad14-24529a5a1667}} is chosen according to some predefined principles, {{formula:19f225eb-db84-4c54-bf43-16c07b414218}} and {{formula:59dfb98b-1b22-4ac4-90ad-854210b4c4c5}} is the relaxation parameter. In the original work of Kaczmarz {{cite:497ba125553df319e066d1938a1b968e7ea69c16}}, {{formula:4290e368-3d65-4b4b-adb6-0dc7110100bf}} and {{formula:7eb3d612-6306-4154-9775-f26787b4f971}} . This is a deterministic procedure whose convergence conditions are readily established, while useful theoretical estimates of the rate of convergence are difficult to obtain. In 2009, Strohmer and Vershynin {{cite:16fbda6e17cfb199c5d1508fba0ff3c4c9af2ce5}} introduced a randomized version of the Kaczmarz method, in which {{formula:86c87dea-3b76-4354-bd01-81c93b563eed}} and {{formula:1dc81821-3a98-496e-9c14-00899602e45a}} is chosen with probability {{formula:d3225883-f970-4b65-9cf1-d2f449b5b36a}} . It was proved that this randomized Kaczmarz algorithm converges in expectation to {{formula:4169306f-7d1a-4c3c-b28a-e14b4f948cc2}} within {{formula:4f789d58-6aae-4ee4-9138-04f8ef474983}} iterations. Since then, many improvements and generalizations have appeared (e.g., see {{cite:b0679a8d4319ce594763040f039baaa01d554d3d}}, {{cite:57022b8bf11b124019d217d35d065121bb2f6f20}}, {{cite:4868e4386036336b99e51c35577cd8d1c90c694f}}, {{cite:d26b23ea7c974b29fa65921e6ccc0e20d13211ca}}).
i
44cd445cd114a36680c4be15696c91b2
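The Strohmer-Vershynin sampling rule described above takes only a few lines to implement. A minimal sketch with unit relaxation parameter and a random test system (sizes are illustrative assumptions):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, rng=None):
    """Pick row i with probability ||a_i||^2 / ||A||_F^2 and project the
    iterate onto the hyperplane a_i . x = b_i (relaxation parameter 1)."""
    rng = rng or np.random.default_rng(0)
    row_norms2 = np.einsum("ij,ij->i", A, A)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
x = randomized_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x - x_true))  # tiny residual: converges in expectation
```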
Quantitative results and comparison to the SotA approaches: Overall, our model outperforms the SotA approaches by a clear margin on all datasets except Stanford Dogs {{cite:e755041e90ad76b7a19b70b2aa8ae1d4900a52ef}} and Oxford Flowers {{cite:40abd36b4831d6b69638e2fd3dbdcb1ee01a280b}} (Table REF ). In Table REF , we compare our performance with the two previous best results (last two columns). One uses only the target (primary) dataset for training and evaluation (past best), as is the case for our model. The other (last column) uses the primary dataset plus additional secondary datasets (e.g. ImageNet, COCO, iNat, etc.) for joint/transfer learning of objects/patches/regions during training. It is worth mentioning that we use only the primary datasets, and our performance on most datasets is significantly better than that of methods using additional datasets. This demonstrates the benefit of the proposed approach for discriminating the fine-grained changes that distinguish subordinate categories. Moreover, we use only one network for end-to-end training, and our novel CAP and classification layers are added on top of a base CNN. Therefore, the major computations are associated with the base CNN.
d
fc714f6010d57a06184c88ef3f0d5e94
Table REF shows the results of our evaluation of MobileStyleGAN on the FFHQ dataset. We compare the number of parameters, the computational cost, and the Fréchet inception distance (FID) {{cite:7c21ce2c50b2dfa856f7acfbe49d03a8da599cee}} of MobileStyleGAN and the teacher network (StyleGAN2). {{table:5caddc8f-036a-4cb3-8e39-20eb9f4a5b60}}
r
08a01212a49535d80fc15f67091145fb
Acknowledgements. This work is supported by NTU NAP, MOE AcRF Tier 1 (2021-T1-001-088), and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s). Appendices. Details of the Swapping-Driven ID Inversion Strategy. Revisit. Here we present more details of the Swapping-Driven ID Inversion strategy. For clearer presentation, we re-illustrate the pipeline in Fig. REF . Inspired by the process of StyleGAN inversion, this strategy optimizes the features {{formula:60d1daf1-bac0-46ab-b576-52fc902d3816}} over a total of {{formula:c582dd25-7820-4f93-a7ae-ccfd2e321fd0}} iterations. At the {{formula:351e3c3e-6486-4239-b7f5-4aa4e120e690}} th iteration of the optimization, we denote the source identity-related {{formula:77e5c4b4-bbf6-4dd5-8e99-7e515c2db7c6}} space feature as {{formula:06ad5841-bfaa-4775-a5b5-c83e91a722ce}} and the desired output as {{formula:9a127674-b5a1-4b0f-86d1-0cbd76f43b23}} . {{formula:40c3062f-f018-4d3a-8d96-dc6f64923254}} is initialized as {{formula:1620ab92-d3e5-4364-aaa8-26d97fefad3c}} . We randomly sample an image {{formula:887d4c99-2844-4b5c-be6a-330bd6029726}} and first generate an intermediate frame {{formula:0ac362bc-d011-4342-b755-fe910e40e232}} . It is then taken as the target frame to generate the cycled-back image {{formula:a64d56ad-a054-4872-85cf-e18e3b225831}} . The optimization is conducted using the identity loss {{formula:285a0169-a87d-4ef9-b3f4-b4b8688684db}} and the reconstruction loss {{formula:2ec63bb0-b918-4906-9994-ca711fa15f63}} . We recap the two losses here. Given any source image {{formula:3feda2ef-dfb6-4b78-aad9-43ce1b764847}} and our generated image {{formula:7c891818-be3c-4df5-bb03-0d5ae67c391c}} , the identity loss between the two images is: {{formula:4b8d5439-ee10-4453-a01d-e6a79da160ee}} where {{formula:e4e6766d-3409-49bd-98db-6807c223b839}} denotes the cosine distance and {{formula:822c3c8c-bf2d-4d95-b17c-231acad6066c}} for any {{formula:4a0df344-e7de-4669-8bd6-8abbf0765f24}} . The reconstruction loss is: {{formula:d59f7b6d-4d78-49f7-8598-ea47a1f07e20}} Optimization Algorithm. The final optimized {{formula:f3ae8d00-80ef-4c19-9b01-28590fa67dbe}} can be chosen in two ways, namely one-to-one optimization and one-to-many optimization. The one-to-one optimization aims at finding the {{formula:83ac530a-34c6-4aa8-bbf5-e670c6300d01}} for swapping a specific source-target image pair ({{formula:55478b75-d108-422e-b713-a3e554e7047e}} ), while the one-to-many optimization aims to find a general {{formula:82faad92-3010-4066-9d0f-0743b05cda56}} that is suitable for swapping the identity of {{formula:e3958541-d5f2-4064-98c7-edb11fbaed79}} onto any face. We start by introducing the one-to-one optimization. Within the total optimization iterations {{formula:92090d24-7783-424d-8e0b-856346236e05}} , we select {{formula:0a5a71c5-14d6-445f-a407-97faae446b9d}} as the feature that achieves the lowest {{formula:dfbc2335-c6f0-4857-9303-6bb55838c5ef}} .
The whole optimization algorithm is depicted as follows. The algorithm of Swapping-Guided ID Inversion. Input: a set of images with random identities {{formula:10cbf5bf-df4f-4cb5-baa4-dee4cefa91bb}} ; the source image {{formula:8d05f59e-39d3-444d-833f-19ad1422f791}} ; the trained encoders {{formula:a5c5c99a-9e45-40ac-b0c2-6d18328210a7}} , {{formula:92af30d6-fe6c-4c50-8718-cab41cafb7d1}} and G; the gradient-based optimizer {{formula:dbc60bfd-4bf5-4f94-99c6-a42fd7dadb5e}} ; the target image {{formula:93a73f96-ed84-4dad-91dc-cce7578694cd}} . Output: the {{formula:cb8a0b5c-efc0-496f-9701-7a682884b9f7}} space feature {{formula:433e3153-9153-4c3f-89b5-bbbb98c70186}} . Initialize {{formula:0efe99a9-981f-40da-bcaf-7b05ff1b954e}} , {{formula:fa0165a5-a30c-45c0-aeff-51a1f93c5ffb}} and {{formula:a8386166-427e-49d7-99a1-7875e8c51d66}} . {{formula:52f57bcc-c87f-4974-864e-6b701016789d}} {{formula:a2877cd7-f3c0-40e5-842f-ca23c58bdcad}} {{formula:e5ddfe3e-1fc4-480a-9b20-ba02f4e13e71}} {{formula:28eacf60-0caa-46e9-871c-5a99c59ddf34}} {{formula:1ba077b3-62c1-474e-b577-05abc68bf11e}} {{formula:93c6ac53-27a4-4422-9780-8daf192e4dbb}} {{formula:54c8e8fa-0dd2-4cfb-9c16-aa7d83939268}} {{formula:973f476a-b788-485d-9527-3bb552a0b3b7}} {{formula:81300f3b-01f6-4b4b-8264-ad6c06ab411e}} {{formula:64dd69fb-e334-42f3-b6bf-1e2ff3f29845}} ; {{table:6ec6acda-af65-4079-b4b6-fa96cac63e42}} {{formula:81862167-b94f-425e-9e35-5e41fad8a612}} is empirically selected as 200. After the optimization, the optimized {{formula:2063fb0f-4bff-44d2-b973-d0e47dd62474}} can be fed into the generator for swapping any target face {{formula:1a57a80f-fb37-48e0-ab9f-9993500d604e}} given the source image {{formula:1525d44d-7a22-4647-8a21-d4efe1fe6917}} . For the one-to-many optimization, all parts related to {{formula:7afd9a7b-77ee-4fa0-82de-ea554e22eca9}} are not required; thus {{formula:06486e4e-2472-474c-9eb6-f6157f116655}} is set as {{formula:32e6852a-88a2-408d-bece-00f3d7c75b2d}} . According to empirical studies carried out in the one-to-one setting, the inversion procedure normally optimizes the identity similarity within roughly the first 50 iterations. Thus we set {{formula:cd629ae3-38eb-4887-8160-62fc83cf4a1e}} in the one-to-many setting, and this is the standard setting in our experiments. Experiments on Face Forgery Detection. We conduct face forgery detection experiments with the backbone Xception {{cite:7f1c7736e6765bbc5e14398bb4cdd84ad7f2d25a}}, which has been widely used as a baseline in previous face forgery detection methods {{cite:af20b95e66847a0d13e75244670c62cb645546fb}}, {{cite:1fb0193c10e60c9833d5274cd2283238f274efb1}}, {{cite:ccf92bfd4ace8011a357bd8aa797639e3ef1c973}}. The experiments are carried out on the following datasets. 1) FaceForensics++ (FF++) {{cite:af20b95e66847a0d13e75244670c62cb645546fb}}, which was introduced in the main paper; it is the most popular dataset used in face forgery detection. 2) WildDeepfake (Deepwild) {{cite:ffd0be9df5d4876c9b5e67691a654e597c33a2cb}}, which contains 3805 real clips and 3509 fake clips, all manually collected from the Internet. 3) Celeb-DF (CDF) {{cite:af84c65b7318276a9a8609ba6ac7f55a15be696c}}, which contains high-quality face-swapped videos. 4) DeepFake Detection Challenge (DFDC) {{cite:f891cbc2ade93c00fdbac2fbfb24029d7f95fc0e}}, which is one of the most challenging datasets. As the evaluation metric, we use the Area Under the Receiver Operating Characteristic Curve (AUC). The final confidence score of one video is the average over the first 80 frames.
The baseline model is trained with 0/1 label supervision (0 for real; 1 for fake and p-fake) using the binary cross-entropy loss. Specifically, our baseline model is trained on FF++ {{cite:af20b95e66847a0d13e75244670c62cb645546fb}} without involving FaceShifter {{cite:b346d68103512149e5ceb3ac3664e462cedaa408}} data. We then additionally include 50,000 fake images generated by our method and 50,000 fake images produced by FaceShifter, enlarging the training set to FF++ w/ Ours and FF++ w/ FaceShifter, respectively. The results are shown in the table. It can be seen that the model trained with the assistance of our method outperforms the model trained with the assistance of FaceShifter, which suggests that our method can be more useful to the deepfake detection community. We suppose this is because our model creates fewer artifacts and appears more realistic; thus the forgery detection model trained on our data generalizes better.
d
50154904bdf9ea1618f88507cfd85653
There are two critical metrics to measure a CAD solution's effectiveness in clinical settings: generality (handling of unseen patients) and precision (generating good accuracy robustly for the given task). Naturally, incorporating as many patients as possible for training is desirable, but patient data often cannot be fully annotated at scale, precluding approaches seen in natural images {{cite:1b937c38920231f012bd72fbef6f71d729ef4d87}}. In medical imaging, the labor costs are always paramount. Even more intractably, for many applications, e.g., PXR fracture detection, there are inherent perceptual uncertainties and challenges in annotating precise bounding boxes of pathological findings. Alternative strategies are needed. A promising inspiration can be found in extreme points for object annotation {{cite:67be1e642bc49b0bad8d2365f519e6f5b0b281bd}}, which are found to be a quicker and easier visual perception protocol by human annotators. Similarly, in this work, we use only point-based annotations to localize bone fractures in PXR (i.e., where to look?) so we can execute the annotation process at scale to cover as many as thousands of patients to achieve generality. To achieve precision, we propose a Window Loss that can robustly cope with the aforementioned ambiguities in defining the extents and scales of fracture sites (i.e., how to look?). Fig. REF illustrates an overview of our framework.
m
797341529ab1d380c6da4c078a2d6326
With the recent rise of neuromorphic {{cite:cea393abe136e06cd0f4fa838a99ac2cab6f3b3f}}, {{cite:1e800da67ee77f56b7c5b13e664cabc1cd205f11}}, {{cite:ebdf69292c857fdb9db17246f1a0c4dfa3f59c16}}, {{cite:7363c73541736d24bf74f21f6b3497d6192097eb}} and edge computing {{cite:fd62615d8c8d9f7074bcbb02189cb6655be13053}}, {{cite:d8e45f158b9a4c8034d67b01c0fc24528792791a}}, the liquid state machine (LSM) learning framework {{cite:6de54d0aa6bc13c4507836bad47ed88a115b8960}} has become an attractive alternative {{cite:25ee8b8de3224e17d871a8beea805db77cdb2046}}, {{cite:efb3fbb6ccf4b1fe690c0bce78a5fd95d9762f6e}}, {{cite:cde55cc90644ca06bd66e04beb87800a1d1bc52e}}, {{cite:4da8ae4def6647c4800e3df8c1c31fa7bc41d286}} to deep neural networks owing to its compatibility with energy-efficient neuromorphic hardware {{cite:dd217929ce8a796314b5e82c6ed577ede369ac6a}}, {{cite:d8a42ae88b5a9df8e92e1c00431a3ec57b05d6e2}}, {{cite:cef2b3dea029ee72f703197f205cec4c55ffe160}} and inherently low training complexity. Originally proposed as a biologically plausible model of learning, LSMs avoid training via backpropagation by using a sparse, recurrent, spiking neural network (liquid) with fixed synaptic connection weights to project inputs into a high dimensional space from which a single neural layer can learn the correct outputs. Yet, these advantages over deep networks come at the expense of 1) sub-par accuracy and 2) extensive data-specific hand-tuning of liquid weights. Interestingly, these two limitations have been targeted by several studies that tackle one {{cite:2ba4f2e84d8ec1bd8aa9a2d14245c5a1fc36e94e}}, {{cite:4085b39df486c6620680800127e63eedb2af21eb}} or the other {{cite:e6aec381773355c9c6baa0f79e83782199acabc0}}, {{cite:711142b5dfdefc917ae52340e3ee087abd9a6140}}, but not both. This has limited the widespread use of LSMs in real-world applications {{cite:25ee8b8de3224e17d871a8beea805db77cdb2046}}. In that sense, there is an unmet need for a unified, brain-inspired approach that is directly applicable to the emerging neuromorphic and edge computing technologies, facilitating them to go mainstream.
i
212d16ab9cf78a551484a155a8d0a33b
To evaluate the quality of generated images, we adopt the reconstruction metrics Peak SNR (PSNR) and Structural Similarity (SSIM) {{cite:e924a910786176dd786dd469f51989970afaf6b9}}. For the lip-sync performance, we verify the recognition accuracy and F1 score of the five selected speech-related AUs in generated frames. Specifically, we use the OpenFace toolkit {{cite:bc0cd767101b2fb5468f3f150f871ab3cafbe038}}, {{cite:14f7f2f11ffefc672dc5fcaf016beb9cdcac9e53}} to detect the state of the five selected AUs (activated or not) in each generated frame, then compare them with ground truth labels. {{table:b60ee92d-6349-4ee5-ad83-386779390798}}
r
feed1eda0b2d644085f2f4eadaaafa72
Assuming the foreground and the background are well separated, the intrinsic clustering term does not contribute to the correlation of Eqn.(REF ), thereby eliminating the impact of galaxy bias. For {{formula:3202c732-e06a-4353-be23-fe1d3b5ae662}}, it is given by {{cite:6e2639a785439f270eee6926bea9aa67209f5040}}, {{cite:92c0cca2ee7db96bab332bc44fcd3d505b119c32}} {{formula:3c56e16b-4769-4d9d-9842-f7296e518e61}}
m
dc7d0e28db5ad724570861a52415d4da
In examining the evolution of densities, three major types of behaviour may occur: ergodicity, mixing, and exactness {{cite:376d58be9c7e52f61a116e12648bd4eba556027b}}. In addition, there is a less well known fourth type of behaviour, called asymptotic periodicity (or statistical periodicity), which was first introduced and studied by Keller {{cite:d4508724207c17fb5f9f7f82ede3d5d4f8e29261}}. We will say more about these four types of behaviour in Section .
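To ground these notions, the following self-contained sketch evolves an ensemble of points (a Monte Carlo proxy for a density) under the tent map, which is exact: an initially non-uniform density visibly flattens toward the uniform invariant density. The choice of map and all parameters are purely illustrative.

```python
import numpy as np

def tent(x):
    """The tent map on [0, 1]; its density evolution is exact, so any
    initial density converges to the uniform invariant density."""
    return np.where(x < 0.5, 2 * x, 2 - 2 * x)

rng = np.random.default_rng(1)
x = rng.beta(5, 2, 100_000)      # a deliberately non-uniform initial density
for _ in range(10):              # evolve the ensemble (hence the density)
    x = tent(x)

hist, _ = np.histogram(x, bins=20, range=(0, 1), density=True)
print(hist)                      # close to 1 in every bin => near-uniform
```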
i
d2fac5a8d9fb4df251fb4cf914bc03b4
A common extension of this question is the single dot product problem: “How often can a specified dot product occur among {{formula:56abae10-28bf-427d-8449-f6f44ead3f8e}} points in some ambient space?”. This has been studied in {{cite:e8176786f5f0ecf277532df5eb7f7e883edde80a}}, {{cite:79c1124bb5ce95d7eb58641dc3d9ae76a2b8a12b}}, {{cite:fb93cec9d79fb2c2b2b32db936b820fe0d5168a7}}, and elsewhere, with various bounds and configurations obtained.
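For concreteness, here is a tiny brute-force counter for the quantity the single dot product problem asks to bound: the number of pairs among given points realizing a fixed dot product. The point set and the target value are arbitrary illustrations.

```python
import numpy as np
from itertools import combinations

def count_dot_product(points, alpha):
    """Brute-force count of unordered pairs whose dot product is alpha --
    the quantity the single dot product problem asks to bound."""
    return sum(1 for p, q in combinations(points, 2)
               if np.dot(p, q) == alpha)

# Example: integer points in the plane, target dot product alpha = 5.
pts = [np.array(v) for v in [(1, 2), (3, 1), (1, 1), (2, 3), (5, 0)]]
print(count_dot_product(pts, 5))   # prints 4, e.g. (1,2)·(3,1) = 3 + 2 = 5
```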
i
386b8b990b1097e9c16b01afafcb8a0e
However, the performance of RGB SOD models tends to decrease drastically when faced with certain complex scenarios (e.g., cluttered backgrounds, multiple objects, varying illumination, transparent objects, etc.) {{cite:67dfecc9df1d87f2c7348f375af3847391027b0d}}. One of the most important reasons behind these failure cases may be the lack of depth information, which is critical for saliency prediction. For example, an object with less texture but closer to the camera will be more salient than an object with more texture but farther away. Depth maps contain abundant spatial structure and layout information {{cite:a2f5cc9cbbc3c7d6a6388e75ec0e847041946d9d}}, providing geometrical cues that improve the performance of SOD. Moreover, depth information can easily be obtained using popular devices, e.g., stereo cameras, Kinect and smartphones, which are increasingly ubiquitous. Therefore, various algorithms (e.g., {{cite:5dd331b9ac20bd333bae4381bbe1ff9f296625f9}}, {{cite:7ed99dceed65c7ea73dcbcb15814bbf05e3c9e15}}) have been proposed to solve the SOD problem by combining RGB and depth information (i.e., RGB-D SOD).
i
876f8a6cabc8a1ad47516bec800baba0
Translators have been widely studied in the literature ({{cite:aa48c01818a22271b6ba78cee21b14c913f73072}}, {{cite:c667ff7356030fac1936830ef8798433264446e2}}, {{cite:29fa4285f39a79633e63cdcec6369811d02d7afd}}, {{cite:892bb4f2b0529cf3cef0e3a4dd1d635ae5791586}},..., and references therein). For instance, they naturally arise in the study of solutions of the mean curvature flow with a certain type of singularity (see, for example, {{cite:6f9d5ae8d1870054fe25723c4c85e7189b82a070}}) and are equivalent to minimal surfaces for a conformally modified metric {{cite:8b0032911d0778348b886dac947102854171cdfe}}. Translators have also been studied in other ambient spaces, such as {{formula:cd5c6d09-c1f0-4011-9854-5e47ae846da5}} {{cite:7bb3a6bf54b26fe5cceae12535280cd758691311}}, {{formula:00309527-6ec1-4ccf-8627-0db5affa8f06}} {{cite:c77b47914758af56ca43a417789a042a8e399093}}, a solvable group {{cite:a27501d4413e401b91f89230036e0f46084265a7}}, the Heisenberg 3-group {{cite:6a8638d8e999e94d3eec992e52404f8227e4cc3e}}, etc.
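For the reader's convenience, the defining equation alluded to above can be recorded explicitly; these are standard facts, with the conformal factor written in one common normalization (conventions for the sign of the normal and the exponent vary across the cited works).

```latex
% A hypersurface is a translator of the mean curvature flow with
% (constant, unit) velocity v iff its mean curvature H satisfies
\[
  H \;=\; \langle v, \nu \rangle ,
\]
% where \nu is the unit normal.  Equivalently (Ilmanen's observation),
% with n the dimension of the hypersurface:
\[
  \Sigma \subset \mathbb{R}^{n+1} \ \text{is a translator}
  \iff
  \Sigma \ \text{is minimal w.r.t.}\;\;
  \bar g \;=\; e^{\frac{2}{n}\langle v,\, x\rangle}\,
  \langle \cdot \,, \cdot \rangle .
\]
```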
i
587a711086c64f8094190cb1f4bb8955