text (stringlengths 54–548k) | label (stringclasses 4 values) | id_ (stringlengths 32)
---|---|---|
Cavity optomechanics, where the electromagnetic mode of the cavity is coupled to the mechanical motion via the radiation pressure force, has attracted a great deal of renewed interest in recent years {{cite:b07224db793441a8fe8ae65b90365ced7d81d897}}. Such coupling of macroscopic objects with the cavity field can be used to directly investigate the limitations of quantum-based measurements and quantum information protocols {{cite:f89c466b799e01da6b7b0fff93db5d5f7827998c}}, {{cite:ad710dd72a47164b784bd5c32ed1b56ac078034b}}, {{cite:b92a9023c35f88caf2ee96be15cbc6a4d8bacd00}}. Furthermore, optomechanical coupling is a promising approach to create and manipulate quantum states of macroscopic systems. Many quantum and nonlinear effects have been theoretically investigated. Examples include squeezing of the transmitted field {{cite:e04170bc0fbed69353995cbaa604d085e1444146}}, {{cite:632842d93e499954b4654d1346c7a6225214731a}}, {{cite:b99ebd2649b916506578f4efa455a85e66e4bd13}}, entanglement between the cavity mode and the mechanical oscillator {{cite:57c399d341588f7f0af3cd88c4fa74f30aacffc2}}, {{cite:adb37c43f493257d2aed2abb0235cc37c5d0a7ce}}, {{cite:7c7b7fe8fcf57c64c54fe5801d5885e06fdebea2}}, optical bistability {{cite:0a865d89d16e396e64ec9aadd4ce27b0a3a450dd}}, {{cite:cf095eafda16de345602b7495492df389ccba99f}}, {{cite:cd714ca33711b84a5ff34111eb1e19a6b9cd9adf}}, {{cite:56039db1c2da0109f3d2ec9da45f10b9f3ac3d9f}}, {{cite:b99ebd2649b916506578f4efa455a85e66e4bd13}}, and sideband ground-state cooling {{cite:5f5b629b37777bd4ea1529dfe64c0866d10791cb}}, {{cite:c53ed634ac7a757120ed3ca47c1dc2521e0d42e5}}, among others. In particular, the squeezing of the transmitted field and the optomechanical entanglement rely strongly on the nonlinearity induced by the optomechanical interaction, which couples the position of the oscillator to the intensity of the cavity mode. Recently, relatively strong optomechanical squeezing has been realized experimentally by exploiting the quantum nature of the mechanical interaction between the cavity mode and a membrane mechanical oscillator embedded in an optical cavity {{cite:e852b918340d20d5b5d01e690245390205196762}}.
| i | 364bae42a374f910040e77dc259b5e1e |
Recently, the most massive neutron star (NS) known, PSR J0952-0607, was discovered with a mass 2.35 times that of the Sun {{cite:09e7762cf7223cb51cbe04290beea17f9d2785d8}}. Earlier, neutron stars such as PSR J0740+6620 with masses above twice that of the Sun had also been discovered {{cite:dfbc633262659b1193f841246e4089821de58a89}}. Therefore, theoretical modeling of neutron stars that can predict a mass above two ({{formula:c1ec5f8a-12f1-46c4-aa27-44d82a6835bc}} ) for the star, while taking into account the real structure of neutron stars, is important.
| i | a4f5743059d0d649bb45ee5a6c7d0c8f |
The new algorithm avoids an exponential blowup by exploiting the fact that {{formula:bf8fc4b2-4dd7-4864-849e-4b75bb1e571e}} and addition are simple operations from the point of view of Boolean circuit complexity. Namely, these operations are both in {{formula:500f5e8d-71f8-47c9-8c9c-77bf4e6b4af9}} , i.e., they have circuits of constant depth and polynomial size over AND/OR/NOT gates of unbounded fan-in. It follows that min-plus inner products can be computed in {{formula:c72e2a30-081c-4386-8632-4f7e718d3670}} , and the new algorithm manipulates such circuits at the bit level. (This also means the approach is highly non-black-box and not subject to lower bounds based on additions and comparisons alone; this is necessary, due to Kerr's {{formula:99c2dcb9-36ff-485e-8df8-6f1266792534}} lower bound.) {{formula:149aedd4-a248-4ff1-900f-f7405fae9721}} operations are very structured and have many known limitations (starting with {{cite:51ba9c22ff00f0412412a5650d8ae9c4a9138b8c}}, {{cite:5732e29f7fba003da105f7d186bc11c3b671f495}}). Circuit lower bound techniques often translate algorithmically into nice methods for manipulating circuits.
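For concreteness, the min-plus (distance-product) inner product that the algorithm builds bit-level circuits for can be sketched naively as follows. This reference version is cubic-time and only pins down the operation itself, not the circuit-based speedup:

```python
def min_plus_inner_product(u, v):
    """Min-plus inner product: min_k (u[k] + v[k]).

    In the (min, +) semiring this plays the role that the ordinary dot
    product plays in (+, *); the distance product of matrices repeats it
    for every row/column pair.
    """
    return min(a + b for a, b in zip(u, v))


def min_plus_matmul(A, B):
    """Naive cubic distance product of two n x n matrices (reference only)."""
    n = len(A)
    return [[min_plus_inner_product(A[i], [B[k][j] for k in range(n)])
             for j in range(n)] for i in range(n)]


# Tiny example: the distance product of two 2x2 matrices.
A = [[0, 3], [7, 0]]
B = [[0, 2], [4, 0]]
print(min_plus_matmul(A, B))  # [[0, 2], [4, 0]]
```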
| i | 63d42b22dddf612f3829622a9c25b791 |
In the MI-IRL setting, methods can be partitioned into parametric (known number of clusters {{formula:6e33246c-b1bd-4329-960d-c91e19835ab2}} ) and non-parametric approaches ({{formula:dc25e48f-3ece-42f3-b071-2b1a95609cd4}} unknown).
While initialisation strategies such as ours may be helpful in Bayesian non-parametric models {{cite:7d082dc4b1fd1d25879f7eaeca035118248b6efe}} or hierarchical non-parametric clustering schemes {{cite:83d633230bd0c15948a0da96509455fca7ad11e4}}, we leave exploration of this avenue to future work, instead focusing on parametric methods modelled on the Expectation Maximization framework {{cite:8e483ff8947b765e72b0860237991df94d355ce6}}.
Specifically, our work is based on the EM-based MI-IRL approach of {{cite:042e4da79e6def6626ca1f22bd68cad2b285c94b}}, which has also been extended to a full Bayesian treatment {{cite:7aec06ddf3e43cefc3747ff1d41c11b989aafa81}}, and to use model-free gradient based IRL methods {{cite:b2e35e6203d582eae220e204d9f9d65fb0858886}}.
While our theoretical analysis uses the MaxEnt IRL behaviour model
{{cite:8501e6491024a6379b723f9ca879189825db680e}}, {{cite:063b5ba6c707f4cb99676aeaa0917b2248b52fa1}}, our empirical experiments with
the ML-IRL model from {{cite:042e4da79e6def6626ca1f22bd68cad2b285c94b}} and
the {{formula:f482b015-7169-4633-b9a8-323402704cce}} -GIRL model of {{cite:b2e35e6203d582eae220e204d9f9d65fb0858886}} suggest that our initialisation
method is effective with a range of behaviour models.
| m | b235591a4eeedcfd2821a17324501fd6 |
where {{formula:4a66e3ed-2dbd-4af2-8de7-56d93ac85240}} is the Newton constant and {{formula:06e17459-977a-4de8-9bd4-f5f2d14ba347}} is the AdS radius.
The Complexity=Action (CA) conjecture {{cite:c97513c22a60e71a5bd69609768ad0edfbba099b}}, {{cite:a467d1e6ad4204a04c49c51c33fdff2bd32d3962}} proposes instead that complexity is proportional
to the classical action {{formula:fdda6a67-27a4-493f-a96b-075ee3c45af7}} evaluated on the Wheeler-DeWitt patch, which is defined as
the bulk domain of dependence of the maximal slice
attached to the boundary time, i.e.
{{formula:989fbd31-c52c-4dee-814f-d00caf55fc69}}
| i | 55e6a32edb746c729bafa582fcf0f37e |
In our current implementation we restrict ourselves to orbits in axisymmetric
potentials. Conservation of angular momentum, {{formula:8ceae3a9-0587-44f0-9f8b-401e5cc041e9}} , about the system's
symmetry axis then reduces the problem to that of motion in the {{formula:06146004-b100-46f0-9f0e-282d95ba4a03}}
meridional plane in the effective potential {{formula:1e8dba93-09f4-4e48-a6fb-e708bc67da86}} {{cite:e73ed7da7fa87916a26e39fd3eb0a9610db29c2a}}. {{formula:5ee2c47e-cc8a-4aac-b97d-e15d433930ad}}
is the third action.
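A minimal sketch of this reduction (with a toy logarithmic potential standing in for the actual potential, and illustrative parameter values):

```python
import numpy as np

def make_effective_potential(Phi, L_z):
    """Reduce axisymmetric 3D motion to the (R, z) meridional plane.

    Phi : callable Phi(R, z), the assumed axisymmetric potential.
    L_z : conserved angular momentum about the symmetry axis.
    Returns Phi_eff(R, z) = Phi(R, z) + L_z**2 / (2 R**2).
    """
    def Phi_eff(R, z):
        return Phi(R, z) + L_z**2 / (2.0 * R**2)
    return Phi_eff

# Toy logarithmic potential, purely for illustration.
def Phi_log(R, z, v0=1.0, q=0.9, Rc=0.1):
    return 0.5 * v0**2 * np.log(Rc**2 + R**2 + (z / q)**2)

Phi_eff = make_effective_potential(Phi_log, L_z=0.2)
print(Phi_eff(1.0, 0.0))
```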
| m | 0ce5addd57a989a52e677c9ff3c1aee1 |
Augmentation strategy.
A well-defined augmentation strategy is critical in contrastive learning {{cite:30a659500a551bdccac653a614b85fe045987370}}. As pointed out by {{cite:30a659500a551bdccac653a614b85fe045987370}}, {{cite:76f6e0dadd7844daeb854edf5ba4a5c31e66dc65}}, the essential parts of constructing this strategy are the random cropping and random color distortion transformations. The latter targets histogram and color-channel correlation-based overfitting of the network. Since diffraction data is monochrome, we replace the channel-independent RGB distortion with a single-channel jitter distortion. Furthermore, as in {{cite:30a659500a551bdccac653a614b85fe045987370}}, {{cite:76f6e0dadd7844daeb854edf5ba4a5c31e66dc65}}, we use a probabilistic augmentation strategy that includes Flip, Rotation, Crop & Resize, Jitter, Fill, and Translation transformations on all input patches. However, unlike other contrastive learning augmentation pipelines, our Crop & Resize routine does not change the aspect ratio; changing the aspect ratio would break the correlation between the Polar and Cartesian projections. Every transformation has a fixed probability of 50% of being applied at every invocation. We implemented the entire pipeline using TensorFlow augmentation layers placed on the GPU itself. Code is available in the official repository {{cite:322b0820affe11060c550a898e5ae6fc439c37af}}.
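A minimal TensorFlow sketch of such a probabilistic pipeline (the parameter values and the subset of transforms shown are illustrative assumptions; the actual implementation lives in the cited repository):

```python
import tensorflow as tf

P = 0.5  # each transform fires with a fixed probability of 50%

def maybe(transform, image, p=P):
    """Apply `transform` to `image` with probability p."""
    return tf.cond(tf.random.uniform(()) < p,
                   lambda: transform(image),
                   lambda: image)

def augment(patch):
    """Probabilistic augmentation for a monochrome diffraction patch.

    Crop & Resize preserves the aspect ratio so the Polar and Cartesian
    projections stay correlated; Jitter is a single-channel intensity
    distortion instead of RGB color jitter. Fill and Translation are
    omitted here for brevity.
    """
    h, w = patch.shape[0], patch.shape[1]
    patch = maybe(tf.image.flip_left_right, patch)                      # Flip
    patch = maybe(lambda x: tf.image.rot90(
        x, k=tf.random.uniform((), 1, 4, tf.int32)), patch)             # Rotation
    patch = maybe(lambda x: tf.image.resize(
        tf.image.central_crop(x, central_fraction=0.8), (h, w)), patch) # Crop & Resize
    patch = maybe(lambda x: x + tf.random.uniform((), -0.1, 0.1), patch)  # Jitter
    return patch
```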
| m | 86484eb397e86e2a336e8e1e43586ec9 |
We define a measure to evaluate beneficial/harmful
explanatory effects of a sequential teaching curriculum on human comprehension.
We hypothesise, based on the Blumer bound {{cite:ee8ac26c12e5e0b988516d644b924f196cb09a20}}, that sequential teaching curricula improve human comprehension.
We demonstrate based on an analysis of empirical results that a sequential teaching curriculum with increasing concept complexity has a beneficial effect on human comprehension.
We show results that indicate the re-discovery of divide-and-conquer algorithms by novices after learning from concepts of increasing complexity.
We provide evidence of the optimisation of problem-solving strategy as a result of studying explanations generated from machine-learned logic rules.
| i | 42b0a35b6175b4cdb1ce777b2345856c |
Quantitative evaluation.
tab:rescv gives quantitative comparisons for IG, GI-GIP, and LTI against various defense mechanisms on CIFAR10. When no defense is applied, GI-GIP achieves the best performance according to all three metrics, whereas LTI performs almost equally well in terms of MSE and close to that of IG in terms of PSNR and LPIPS. However, when the gradient is augmented with a defense mechanism, both IG and GI-GIP have considerably worse performance with MSE close to {{formula:845ce7c4-5912-4301-9f4c-27fd80e3d92b}} . By comparison, LTI outperforms both baselines significantly and consistently across all three defense mechanisms.
For example, under gradient perturbation with {{formula:87bb68aa-4c35-4e0c-a633-1881d369adfd}} , which prior work believed to be sufficient for preventing gradient inversion attacks {{cite:ab5637a980ce08ad5e0e7040081390a25bd86869}}, {{cite:3eb2f1f7e74a48315bae1127031bc2ef4a610112}}, MSE can be as low as {{formula:4700feed-5213-4c04-9274-d1da500a1a5e}} for LTI. Our result therefore provides considerable additional insight into the level of empirical privacy achieved by DP-SGD {{cite:e3fcf4ec1765aa27d75a00f1e5be2e3b133b326c}}, and suggests that the theoretical privacy leakage as predicted by DP {{formula:ee0930c2-b15d-4178-9eaf-b4473f8a8f3e}} may be tighter than previously thought.
| r | bebf7a37f520f23c8844bff6f67d24b8 |
We also try our method on the MobileNet class of models and demonstrate that the observations for our method generalize from ResNet models to the MobileNet class of models as well. We perform experiments with the MobileNet {{cite:7d9eb454f7a0ff08ce498acf4454e9d81046fae7}} architecture where the teacher is a ResNet-152. The results are listed in Table REF .
{{table:98bac7b1-bc16-4161-a51d-4b9eafd51bc7}} | r | 613fb079b6995a6fad22460380eb8f45 |
We are interested in learning rich representations from unlabeled data. We have a teacher network and a student network. We initialize both models from scratch and update the teacher to be a slower version of the student: we use the momentum idea from MoCo in updating the teacher so that it is a running average of the student. The method is described in Figure REF . Following the notation in {{cite:8523218e760766ab1acb6606e8dfb982f6c4d12e}}, at each iteration, we pick a random query image and a set of random other images that we call anchor points. We augment those images and feed them to the teacher model to get their embeddings. Then, we augment the query again, independently of the earlier augmentation, and feed it to the student model only. We calculate the similarity of the query point to the anchor points in the teacher's embedding space and then optimize the student to mimic the same similarities to the anchor points in the student's embedding space. Finally, we update the teacher with a momentum to be the running average of the student, similar to MoCo and BYOL. Note that our method is closely related to the ComPress method {{cite:8523218e760766ab1acb6606e8dfb982f6c4d12e}}, which uses similarity distillation for compressing a frozen larger model into a smaller one.
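A minimal PyTorch-style sketch of the two core steps, the momentum (running-average) teacher update and the similarity-matching loss (the names, temperature, and loss choice are illustrative assumptions, not the authors' released code):

```python
import torch
import torch.nn.functional as F

def momentum_update(teacher, student, m=0.999):
    """Keep the teacher an exponential moving average of the student."""
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(m).add_(ps, alpha=1.0 - m)

def similarity_loss(student, teacher, query_a, query_b, anchors, tau=0.07):
    """Match the student's query-to-anchor similarities to the teacher's.

    query_a, query_b: two independent augmentations of the same query image.
    anchors: a batch of random anchor images.
    """
    with torch.no_grad():
        z_anchor = F.normalize(teacher(anchors), dim=1)
        z_query_t = F.normalize(teacher(query_a), dim=1)
        p_teacher = F.softmax(z_query_t @ z_anchor.t() / tau, dim=1)
    z_query_s = F.normalize(student(query_b), dim=1)
    log_p_student = F.log_softmax(z_query_s @ z_anchor.t() / tau, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")
```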
| m | 22fdbc9a39ecc336710bce9d5dbc29e7 |
and by the Lippmann-Schwinger equation (see (8.13) of {{cite:5ab8226101aecccdc528a3c228b78002fabcafc2}}) we have that
{{formula:919b0d5a-4385-41fa-b47a-7df7ff11d72a}}
| m | fe874de9208a8ec35d8bc8232b3bd42a |
It follows that the pullback {{formula:2a10511d-dc3c-4e66-9e6d-b0c8ac8c333f}} is an injection. (Such pullbacks are examples of edge maps, described for example in {{cite:96a2a1aa905639157505c8c520082bcead7dd5aa}}.)
| m | e4fa3a27f9e838ae7ace4845175ac91a |
To assess our approach, we evaluate the geometry against state-of-the-art methods from 3 categories: classic MVS, deep MVS and differentiable rendering based methods. First, COLMAP {{cite:3231adad50b0d8cf09a4213b42d13cadd61ae8d9}} and ACMMP {{cite:c4486004176ec64af4e32b260eaecf2fd331d4ba}} are classic MVS methods that have been widely used and demonstrate strong performance for MVS reconstruction. Among all the deep MVS methods, we consider two of the most efficient, PatchmatchNet {{cite:7b8193ac003cfa27065f2581aa0eb2af1f63fa0a}} and CasMVSNet {{cite:e68fdc6aa3818eea7e92fbf09202cd4021ad9f97}}, for which the code is available and easy to use. Finally, for the differentiable rendering based methods we consider IDR {{cite:721bd3edcba94df21e70c67c0877fc65c7c906a2}}, which was one of the first works to combine a differentiable surface renderer with a neural implicit representation. It requires accurate masks but handles specular surfaces and has shown impressive reconstruction results. We also compare with two more recent works that use volumetric rendering and provide impressive reconstruction results: NeuS {{cite:8bbd06e68e85348e25ba66fbb816e93aba56d983}} and NeuralWarp {{cite:282594585f1b11a9d5fc18ed4a1a52cd76956441}}.
| m | 6aafd36d054aeb0b1e159baa1e0f7312 |
1. Choice of Segmentation Model:
In this study, we compared the performance of several state-of-the-art semantic segmentation models, including
PSPNet {{cite:d28c66d5b395315ed3072491bdba1c5c5a05635c}}, SegNet {{cite:e7596ce740e60e23ae7edc15d91d6724b6f2e982}}, U-Net {{cite:992013571ce15c72c86e6c40418f83fa5765a62f}}, and FCN-8 and FCN-32 {{cite:20dafa83e2bb50affa3f981b2b6f85f203ba9fc4}}, with our proposed CIE-Net model.
| r | 2559c95cf1d4c6e63aecd9a506e062cf |
There were noteworthy additions to the literature that we did not consider, for various reasons. The Longformer of {{cite:0130e26f49a3470b9d92950a11b48924e4f845b5}} uses a sliding attention window so that the resulting self-attention mechanism scales linearly with length; however, the number of parameters of the pretrained Longformer models often coincided with or exceeded that of the BERT model. Like the Reformer model of {{cite:9858ba54ba0718e89520f35a55ca28f85f2e574b}}, the Linformer of {{cite:b98e7dca14a1c9753a46b9eb555754e158da2677}}, the Sparse Transformer of {{cite:b6ae369a5cb9357460295d951a57ae08c3df9012}}, and the Performer model of {{cite:df95028fbe5577d1fb108a60303a594ad2ecc41e}} exploit the observation that the attention matrix (given by the softmax) can be approximated by a collection of low-rank matrices. Their different mechanisms for doing so mean their complexity scales differently. The Projection Attention Networks for Document Classification On-Device (PRADO) model given by {{cite:bf40b7992a57112314a31c0433feeb102e2f9a9e}} seems promising; however, we did not have access to a version of it we could use. The SHARNN of {{cite:8564af56689aa974415d07aed5b12e40562a64ea}} also looks interesting; however, we found that the architecture was difficult to use for transfer learning.
| d | 75f847ce745fd79531c8992c26b5a703 |
[Proof of Theorem REF ]
It follows from Lemma REF (b), Theorem REF (b), and the maximum principle {{cite:a9d2ef99e83a873a16bec0a2766d37b2e55b43dd}} (the implication {{formula:f7b2da30-0e7a-4dcb-9867-f7cba9c64945}} is proved as in the proof of Lemma REF (b)).
| r | 355a60f940dc9117698b6655ffbe5b7f |
Salient Object Detection (SOD) is widely explored in color images {{cite:5f81351d353b8e22b669fd520d03bfa873d6df98}}, {{cite:9eed3aa304a63c751e96d066f476a001aaeb99e3}}, {{cite:5cdf5b2927e30f8ebe16bc2c6ecf15b24954a344}}, {{cite:d74aec5607130c4a17c5afc01edd2d6c98fe481f}}, {{cite:517e9574d57205d12f50c9c7eda0efd71dd34c21}}, RGB-D images {{cite:3d6a4b99c2cc75fc46175e5d962ab1882368055c}}, {{cite:3339ed0c1b83c138021da2a735be42e01680bea4}} and videos {{cite:2607d770d3ca96b48d39e02e1e778d5933518162}}, {{cite:7764766a0566c63ab8faf008e456faad7253ee63}}, {{cite:a9c2db301d7cdfc04873c9032e70aca06cc84a0a}}, and it is closely related to our fixation-based object segmentation task.
In this section, we discuss the connections between fixation-based object segmentation and SOD.
| d | dd7ca846733f199fdafbc81d32d1ed54 |
Constructing truncation ansatze for embedding the domain wall solutions found here in string/M-theory using {{formula:ee315b43-48e6-4a17-8223-57b67b981e16}} exceptional field theory given in {{cite:d3f85585a46c01508b2bd82aa7c58b176d3acb7b}} is of particular interest. This would provide a complete framework for a holographic study of five-dimensional maximal SYM. It is also interesting to perform a similar analysis for gaugings under {{formula:5bda3aea-f641-4787-a130-90c53110b41a}} and construct the corresponding embedding tensors together with possible supersymmetric domain walls. These gaugings can be truncated to gaugings of half-maximal {{formula:191792ef-7848-4e5b-994f-35d7598a301d}} supergravity coupled to four vector multiplets in which supersymmetric {{formula:87d8140b-7f0e-4782-8ed2-c9ae95d7e5f1}} vacua are known to exist {{cite:f711ec2ca28698550883c5fa52492b845f1feaad}}, {{cite:3e5bda50d4f031708294c2a2e18375b10c6815d3}}, {{cite:e776541d1cf3c7cb2191762b2f52bff00d2cebd1}}. In this case, the results could be useful in the study of both DW{{formula:5d033728-b7c4-44fa-97c9-f9bad0150030}} /QFT{{formula:7bc0e3a8-5931-4c39-8902-e135e114e696}} duality and AdS{{formula:0ab023ef-be8c-4703-afee-e9db241a37e1}} /CFT{{formula:2902ac85-c178-43a8-bbcc-0be780a33cd6}} correspondence as well.
| d | 1500042985adeb7371eba2ffa47bfabe |
The IceCube detector at the South Pole has been able to detect high-energy astrophysical neutrinos and identify their sources {{cite:42ea1d00de5d9a2ccb88a50ed6025838a29ed6a3}}, {{cite:fda08ad25f2ba57ee02ce4646a9477591c90d987}}, {{cite:83633a054be4ffc95b5e48b16ca8d8434e1b0ce4}}.
| d | 769a1f9f32ac6dee7e4d3a9d811bfe3f |
Eq. ( REF ) expresses the effective NN-potential in terms of the well-known inbuilt RMF theory parameters of the {{formula:58965a4c-9ef0-4058-b0c1-e612a6659744}} , {{formula:80954418-9380-42fb-8d4d-71126082cfaf}} and {{formula:4cd9af6a-b56e-47b4-9185-2c44a0265003}} meson fields. Here, we have used the RMF (NL3) force parameters along with a varying {{formula:b8d58e85-7b31-4683-bc97-e8b2f0f03d1b}} for {{formula:52b5303b-3cba-414c-8588-1efb20bb9f5d}} -self-interactions to determine the nuclear properties. The values of the parameters for the NL3 force are listed in Table REF . Although the {{formula:eaea7473-c7c5-4631-992b-fd643cadd908}} term is already present in the FSU-Gold parameter set {{cite:0d3ad2adec87410873064b5bd667b9665d6a1fe2}}, {{cite:1fb2924ca82addd52cdc860375245abdb27a886c}}, here we are interested in the effect of the non-linear self-coupling of the {{formula:faf931af-2c6e-4fc1-bb3a-aff09b1959bc}} meson. Thus, we have added the self-interaction of {{formula:4a8d62c3-438b-4cbb-91f6-553797a1b781}} with coupling constant {{formula:1a02b450-0e9e-4ce7-9bc8-514f48d03fb4}} on top of the NL3 set and observed the possible effects.
{{figure:e6e19403-bf18-4201-a84f-f5c721c9997c}} | r | ed4a8c3a125e6c401870ed477e808a0b |
Our results confirm that the presence of a vibrating mirror significantly changes both the photon and the phonon quantum dynamics of an off-resonantly pumped cavity optomechanical system. Moreover, the parameter range required to observe this behavior is within experimental reach {{cite:98a321cbc74442b05ea0c41ac80b6b97d576e6a6}}, {{cite:9c0f3faa67f45fb78a5045d266d4960dd0be8d9c}}, {{cite:7d5bfa9f42979060a14a03bc02afd40b871649a7}} and is close to that of the photon blockade effect predicted in optomechanical systems {{cite:86f98f7992fe8dfe3470f04f704865c1f77eea97}}, {{cite:d1f06eaac2d399c32d1c5dc8b23a5be8c8e1049d}}. Finally, the analytical approaches proper to cavity optomechanics with a moving mirror apply equally to other related systems, e.g., hybrid metal-dielectric cavities {{cite:aa6d5c2262f7af08c838536defcf1923613e2318}}, plasmon-excitonic polaritons {{cite:c9985e2a5e89d48ad4b8bb6595538b687e992041}}, superconducting qubits and quantum circuits {{cite:a37c8bbb200dafd1775f8fb761f8201816f7d378}}, or other types of nanomechanical resonators {{cite:cc20edf9693bd66c52ef155b6127f3b4b37a77c9}}, rendering our developed analytical approach relevant for these systems as well.
| r | c415f1efbf99ad861fd6506d7bf21df3 |
One additional constraint on the composition of the atmosphere is the cumulative impact of historic X-ray ({{formula:5b2866ea-b434-4bc8-ba36-989723dce9d4}} ) and extreme ultraviolet ({{formula:41f1fc7c-fccb-46c4-bf93-f2a81afd6775}} ) radiation-driven mass loss (the sum is represented as {{formula:1ba2d167-200d-4660-9238-cd3013ff1705}} ). To model this, we have developed a simple model of atmospheric loss (code in the GitHub repository for Snowball), separate from the other two models. The escape model interpolates the BaSTI luminosity evolution grid (http://basti-iac.oa-abruzzo.inaf.it/) of {{cite:78bbf7b82ea43cc9ef555e415d6cc43d70fd6de2}} to the observed mass and luminosity of the host star (see [fig:luminosity-uv-flux]Fig. REF , top panel), similar to {{cite:7e763317862497a0af8dc8cdabec905b6619de2f}}. The stellar luminosity evolution of {{cite:78bbf7b82ea43cc9ef555e415d6cc43d70fd6de2}} agrees with other stellar evolution models {{cite:4aeec1e6bd90089575106cf49c033ed934646974}} from {{formula:b8e317e2-d6bc-4476-9f32-39f39b504bf1}} 0.01-10 Gyr (the interval for the {{cite:4aeec1e6bd90089575106cf49c033ed934646974}} grid), and includes time points back to 0.01 Myr and out to the end of the main sequence, even if this is beyond the age of the universe. Given the large uncertainty in the age of TOI-1266, this larger stellar age range allows for a more complete uncertainty analysis. We then use the X-ray and EUV scaling relationships from {{cite:e3bd07ba21999c5c33902a22c7c31ef8d1109299}} (see Fig. REF , bottom panel), rather than the empirical scaling with respect to the bolometric luminosity (L{{formula:7b144176-0982-4b38-9ae9-24ab49c6ae63}} ) from {{cite:4b06c8b2e2b3be1f320871ba193956fd2d08d702}}, for two reasons. First, the EUV luminosity (L{{formula:c5abe97e-4aa1-46ee-8f97-d4f04fc593de}} ) from {{cite:4b06c8b2e2b3be1f320871ba193956fd2d08d702}} is above 1% of L{{formula:099709e7-ec60-4b96-bb69-6f7d04e6942a}} for {{formula:5864a04e-492e-4e09-a780-754d590d4305}} 0.2 Gyr, and above 0.1% for over 1 Gyr, due to the lack of an EUV saturation threshold for younger stars. The second issue with using the parameterizations of {{cite:4b06c8b2e2b3be1f320871ba193956fd2d08d702}} is that for a dimmer star like TOI-1266, there is a discontinuity in the calculated X-ray luminosity (L{{formula:74bad668-05fc-4950-89c2-86917d21dc59}} ) saturation timescale ({{formula:8f89c367-6733-46d4-800e-b7a0058c0538}} 0.33 Gyr) using their Equation 5, such that the post-saturation L{{formula:f91fa532-2693-47a7-9c87-3b135110e119}} is briefly higher than it is in the saturated regime. During our initial tests, we chose to empirically set the saturation timescale to when the time-dependent X-ray luminosity falls below the saturated value, resulting in {{formula:ab7385fd-9ee9-4fd2-b71d-99cca3d5d6e8}} 0.47 Gyr, which produced a negligible change in the total mass lost. We instead chose to use the X-ray and EUV scaling relationships from {{cite:e3bd07ba21999c5c33902a22c7c31ef8d1109299}} in order to address these two discrepancies.
{{cite:e3bd07ba21999c5c33902a22c7c31ef8d1109299}} note a saturation of {{formula:1ab1fd9a-0402-4759-b44b-304dd89d2188}} 10{{formula:0f2476fa-4c40-46c5-b643-27ae1f0510d1}} F{{formula:32d195aa-6a14-492a-9eb3-235221592ca7}} /F{{formula:5d7ff3b9-da8d-435f-86e4-695f10f66d50}} in their simulations of {{formula:5aaa14ea-bdc0-4308-887d-94bb2e00f63a}} 0.4 M{{formula:1313188e-479e-4964-9e3b-5f785793cfc9}} stars, slightly higher in magnitude but qualitatively consistent with work for larger stars {{cite:e96be46de560527f3f5ba264d793d0b59d999b00}}. Additionally, the log-linear dependence on age after the saturated period is comparable with other early M dwarf studies {{cite:dc2e3e679ee2230da83ba7d086ea980e0e9f8682}}. These specific values from {{cite:e3bd07ba21999c5c33902a22c7c31ef8d1109299}} are converted from flux ratios to luminosity ratios using the luminosity, distance, and 2MASS J-band magnitude listed in {{cite:e44da7a267db36142eaa418a6ad91846e4c1a5aa}}, assuming that the J-band fluxes are proportional to the bolometric luminosity (changes in stellar effective temperature are {{formula:75182dec-48c6-4f35-8303-d147c896e497}} 10% for 0.4-0.5 M{{formula:7ec581d9-3059-4da6-b2db-9bb1662bd095}} stars over their lifetimes {{cite:4aeec1e6bd90089575106cf49c033ed934646974}}, {{cite:78bbf7b82ea43cc9ef555e415d6cc43d70fd6de2}}, which would shift the wavelength peak by <50 nm).
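A schematic version of this bookkeeping (all functional forms, constants, and values below are illustrative placeholders, not the fitted relations of the cited works): saturate L_X/L_bol at early ages, decay as a power law afterwards, and integrate an energy-limited mass-loss rate over a tabulated luminosity history.

```python
import numpy as np

def lx_over_lbol(age_gyr, sat_ratio=1e-3, t_sat=0.5, slope=-1.5):
    """Toy X-ray activity law: saturated at early ages, power-law decay after."""
    age = np.asarray(age_gyr, dtype=float)
    return np.where(age < t_sat, sat_ratio,
                    sat_ratio * (age / t_sat) ** slope)

def integrated_mass_loss(ages_gyr, lbol, r_p, m_p, a, eps=0.1):
    """Energy-limited escape, dM/dt = eps * pi * F_xuv * R_p^3 / (G * M_p),
    integrated over a tabulated luminosity history (trapezoid rule)."""
    G = 6.674e-11
    f_xuv = lx_over_lbol(ages_gyr) * lbol / (4 * np.pi * a**2)  # W m^-2 at planet
    mdot = eps * np.pi * f_xuv * r_p**3 / (G * m_p)             # kg s^-1
    t_s = np.asarray(ages_gyr) * 3.156e16                       # Gyr -> s
    return np.sum(0.5 * (mdot[1:] + mdot[:-1]) * np.diff(t_s))

ages = np.linspace(0.01, 10.0, 500)                 # Gyr
lbol = np.interp(ages, [0.01, 10.0], [8e24, 6e24])  # stand-in for the BaSTI grid, W
print(integrated_mass_loss(ages, lbol, r_p=1.5e7, m_p=2e25, a=1.5e10), "kg lost")
```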
{{figure:c4fdc398-f8a8-452b-b7d6-881995169707}} | m | f47ef889e34b912b89a161b82e593247 |
We first compare the following schemes: (1) Single GPU + SB: the existing PyTorch implementation of Transformer fine-tuning from HuggingFace (HF), using small batch (SB) sizes (e.g., 32); (2) Multi-GPU + SB: a multi-GPU PyTorch implementation using DistributedDataParallel {{cite:1c6563f7eb187a10c4166a1d5520ba5839b57655}}; and (3) Multi-GPU + LB + ScaLA: our approach as described in Algorithm REF , using large minibatches (LB), e.g., 1K, for adaptation.
Table REF shows results on MNLI, QNLI, QQP, and SST2, which are larger datasets and less sensitive to random seeds. {{formula:4249a15c-f89a-4a3c-80eb-469d0e1d5e5f}} refers to {{formula:499fe998-2752-470c-96aa-af3b8bb1fda0}} nodes each with {{formula:8d7613c4-d1ea-4304-97b8-146c0cc1f315}} GPUs for a total of {{formula:5a9e53d3-6ed2-4753-919e-bb453d68711a}} homogeneous workers (e.g., 32 GPUs on 2 NVIDIA DGX-2 nodes). For a fair comparison, we reproduce the BERT and RoBERTa baselines. Our reproduced baselines achieve the same or slightly higher accuracy than the originally reported results in {{cite:141bec3571b25a4261083f0da54b6a420bbadc8a}} and {{cite:833b9c90de5a16beaa5bee7d7c3862dc0a156183}}. We now discuss our results and observations.
{{table:ff70e07b-6d18-4a2f-a57b-0f3a785342f0}} | r | ce55f9656f6c8779732b964b1518b4d6 |
The use of stochastic methods as a way of circumventing this computational scaling was pioneered by the quantum state diffusion methods of Gisin and Percival {{cite:ee64980af3296cbf1c9dfc38c221cbb3a2acdab0}}. They showed that the solution to a large class of master equations could be modelled by the evolution of a stochastic Schrödinger equation with appropriately chosen stochastic forcing terms. Such calculations would result in a wavefunction that evolved in a non-deterministic manner, in response to the constant and random influence of the system's environment. The total history of a wavefunction calculated in such a way is known as a stochastic unravelling. In essence, these amount to samples in a Monte-Carlo numerical integration scheme of the path integral formulation, with each path corresponding to an unravelling, and can offer significant computational advantages over master equation methods {{cite:693147d867f94480497944625c42e38cebee25ec}}. Additionally, conceptual insight into the interpretation of quantum mechanics with open systems offered by stochastic methods has been leveraged in proposals of modified theories of quantum mechanics such as the spontaneous collapse model of Ghirardi, Rimini and Weber {{cite:d1559049e73fa899d29e146a0c8bb40ee2b7a9d9}} and the subsequent continuous spontaneous localisation model of Ghirardi, Pearle, and Rimini {{cite:2eb93bd630d3f11dbdaa535a727e9c1145881dfd}}. Extensions to relate these models to gravitation have also been proposed by Diosi {{cite:62d232056e869319641d164fd20375cc72e4b58b}} and Penrose {{cite:1b541cec2b35010fa6ef87a6497528de01dfdabe}}.
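To make the idea concrete, here is a minimal numerical sketch (our own illustrative construction, not taken from the cited works) of a quantum-state-diffusion unravelling for a single decaying two-level system, integrated with a simple Euler-Maruyama step:

```python
import numpy as np

# Two-level system in the basis (|e>, |g>): H = 0.5*omega*sigma_z,
# one Lindblad channel L = sqrt(gamma)*sigma_minus (decay |e> -> |g>).
omega, gamma, dt, n_steps = 1.0, 0.2, 1e-3, 5000
H = 0.5 * omega * np.array([[1, 0], [0, -1]], dtype=complex)
L = np.sqrt(gamma) * np.array([[0, 0], [1, 0]], dtype=complex)

psi = np.array([1.0, 0.0], dtype=complex)  # start in the excited state
rng = np.random.default_rng(0)

for _ in range(n_steps):
    exp_L = np.vdot(psi, L @ psi)                      # <L> in the current state
    drift = (-1j * (H @ psi)
             + np.conj(exp_L) * (L @ psi)
             - 0.5 * (L.conj().T @ L) @ psi
             - 0.5 * abs(exp_L) ** 2 * psi) * dt
    # Complex Wiener increment with E[dxi dxi*] = dt, E[dxi^2] = 0.
    dxi = np.sqrt(dt / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
    psi = psi + drift + (L @ psi - exp_L * psi) * dxi
    psi /= np.linalg.norm(psi)                         # keep the state normalized

print("final excited-state population:", abs(psi[0]) ** 2)
```

Averaging such unravellings over many noise realizations recovers the master-equation (here: exponential decay) dynamics, which is the Monte-Carlo interpretation described above.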
| i | 3350e4a2f8d66bee5981ae67049fcc0b |
With regard to increasing dimension: although the two best combinations in analogy (w8s0h0 & w4s0h0) for SW, as shown in fig. REF , decreased only slightly compared to the others, the increased training time and much larger serialized model size render any possible minimal score advantage at higher dimensions undesirable.
As can be observed in fig. REF , from 100 dimensions, scores improve but start to drop after over 300 dimensions for SW and after over 400 dimensions for BW, confirming the observation by {{cite:4d4c502c17f716bb3e63432daae563a72b58fdef}}.
This trend is true for all combinations for all tests.
Polynomial interpolation may be used to determine the optimal dimension in both corpora.
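For instance (a sketch with placeholder score values, not the measured ones), a low-order polynomial fit to the score-versus-dimension curve yields an estimated optimum:

```python
import numpy as np

dims = np.array([100, 200, 300, 400, 500])
scores = np.array([0.51, 0.55, 0.58, 0.57, 0.54])  # placeholder analogy scores

coef = np.polyfit(dims, scores, deg=2)        # quadratic fit of score vs. dimension
dense = np.arange(dims.min(), dims.max() + 1)
best_dim = dense[np.argmax(np.polyval(coef, dense))]
print("estimated optimal dimension:", best_dim)
```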
{{figure:1fd4701d-3db3-4ab8-847f-be0da2a8b859}}{{figure:8ed641b7-2f0c-42fd-9482-691dc50d4eee}}{{table:be097ee7-83dc-4290-9427-9aeaba3b6891}}{{table:3bfc930d-05b2-46ba-bb11-cd99941d103b}} | r | 93951bf977575d9a9c7ade50ea7c04c4 |
The value function of an optimal control problem is known to be usually only Lipschitz continuous even when the data is regular. The characterization of the value function is obtained in terms of a first-order nonlinear Hamilton-Jacobi-Bellman (HJB) partial differential equation.
A bottleneck in the computation of the value function comes from the need to approximate a nonlinear partial differential equation in dimension {{formula:dad00364-5976-4988-ba7b-cbf7c59a9a0a}} , which is a challenging problem in high dimensions. Several approximation schemes have been proposed in the literature, ranging from finite differences to semi-Lagrangian and finite volume methods, see e.g. {{cite:4e073465742724bb341b8e3a94705b200afd312b}}, {{cite:2f46e238903139d0f214b9d1858bea65347ce20d}}, {{cite:ba692b9920ce58b3d13011811e2fbadcafd1a892}}, {{cite:7c2e7a2ca76bb0c8f739c9d10755f527d4c0a71d}}. Some of these algorithms converge to the value function, but their convergence is slow. The curse of dimensionality is mitigated in {{cite:cc65f24fd0082ea58d922843964fc69122c97564}}, {{cite:4a6fcb0fcdb62c7c50e7128b41c5b919553c4b77}} by means of a reduced-order model based on proper orthogonal decomposition. A new accelerated algorithm which can produce an accurate approximation of the value function in a reduced amount of time in comparison to other available methods is introduced in {{cite:5a2031806b8fe9b5ae5269f146cf537030971aef}}.
| i | 4fd007970e980de99180e5da57962db6 |
All graphs in the experiments were generated using the Python NetworkX package {{cite:f862b2d22157dc852590d57052bb392fdfe6708c}}. Parameters used for the experiments concerning the expected number of mutants were m = 1 for the BA, p = 0.5 for the ER, and {{formula:6fb86496-585f-4565-a691-df93f504f87d}} and {{formula:ea8e9aa0-5add-42f1-965e-5b93cd164a9d}} for the NWS graph generator functions.
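In NetworkX these generators are invoked along the following lines (n, and the NWS parameters k and p, are placeholders; the two formula-valued NWS parameters above are not recoverable from the text):

```python
import networkx as nx

n = 100  # number of vertices (placeholder)

ba = nx.barabasi_albert_graph(n, m=1)                 # BA with m = 1
er = nx.erdos_renyi_graph(n, p=0.5)                   # ER with p = 0.5
nws = nx.newman_watts_strogatz_graph(n, k=4, p=0.1)   # NWS (k, p illustrative)

for name, g in [("BA", ba), ("ER", er), ("NWS", nws)]:
    print(name, g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```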
| m | 9ff65fe988d69a893306f206cc49020e |
Fitting the model to the data using the benchmark scenario provided in Section , with free spectral indices for both nucleons and nuclei ({{formula:c9704358-65c1-4786-8f75-fa92b64b5ad8}} ), provides the parameters and deviance shown in Table 1 of {{cite:2735eb079f908e12c91651c3350c1cdc68980d15}} using EPOS-LHC to interpret {{formula:1a0dfa6e-a928-4d8a-9ba6-3e92396ffe5f}} data. Table 1 of {{cite:2735eb079f908e12c91651c3350c1cdc68980d15}} includes the results predicted under the assumptions of a proton component dominated by neutron escape (proton maximum energy of {{formula:436ef305-9621-49ae-9fc5-ed52773cc330}} ), no local overdensity (widely used uniform distribution), and a shared spectral index across the five species ({{formula:3ff88bae-730c-44d7-a8bd-329baa12bf1e}} ).
The value of {{formula:eb9def67-dd68-4727-bcab-6843ef3cda0f}} , derived from the drop in the nuclear components at {{formula:1f935042-7de4-46fb-bd2c-ea55ea6e701e}} , indicates that the suppression of the spectrum is due to a combination of the cut-off energy at the sources for the heavier nuclei and energy losses en route, as found in {{cite:5bc4133488073c3b81328fea9b5f9afe5700f9d7}}. The nuclei's spectral index, {{formula:24a57a7a-985c-40b7-9799-43373594346e}} , is set by the increase in average mass with energy, which is virtually monoelemental, in order to mimic the {{formula:40b6fcd2-3c3e-4267-95f6-42fa7c1d9fa7}} distributions as accurately as possible. The fit's solution thus consists in setting a hard index for nuclei so that the contributions of each element mix as little as possible: high-energy suppression imposed by the cut-off beyond {{formula:0d22e33b-d2de-4cc8-8b0a-e44b14000ec5}} and low-energy suppression via the hard index {{formula:4275a6cd-2d55-44d9-a7bc-08d1f189e407}} . This effect, however, does not apply to protons, which persist in an energy range where a mixing of elements is required. The best-fit value of {{formula:d88771e8-dbda-43c8-9dda-381db2bb3bc1}} is significantly softer than that of {{formula:83f258dd-02f6-42cc-bbf0-f21fe9c412e6}} . The addition of {{formula:fa88a6a7-3c7c-4e52-b1c8-f3405e427917}} improves the fit of the data down to 0.63 EeV, with a total deviance {{formula:f34c0e20-f0e5-423e-9b7d-e63cd891945a}} compared to {{formula:533c05b9-3dad-40d9-8b47-e7301309aa63}} in the case of {{formula:02a7a8a0-1fc9-44ae-81a8-a263d4af99c4}} ; therefore, the addition of this extra free parameter is sufficiently justified. On the other hand, using the local overdensity to trace the source distribution improves the deviance significantly while having a minor impact on the best-fit parameters.
| r | 664221404925a8727da969da0404cc23 |
For time integration we use the explicit third-order TVD Runge-Kutta method {{cite:0d67695c84cce9d91201b8e9fff7985f1a95c83e}}:
{{formula:76a16a92-c2b6-4a37-b96f-20ed233a2af3}}
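This is the standard Shu-Osher strong-stability-preserving construction; a minimal sketch for a generic semi-discrete right-hand side L(u) is:

```python
import numpy as np

def tvd_rk3_step(u, rhs, dt):
    """One step of the third-order TVD (SSP) Runge-Kutta scheme of Shu & Osher.

    u   : current solution vector
    rhs : callable computing the semi-discrete right-hand side L(u)
    dt  : time step
    """
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

# Example: u' = -u, exact solution exp(-t).
u, t, dt = np.array([1.0]), 0.0, 0.01
while t < 1.0:
    u = tvd_rk3_step(u, lambda v: -v, dt)
    t += dt
print(u[0], "vs exact", np.exp(-1.0))
```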
| m | 419c83e2fcd50c111db32edf07046864 |
Our main results are shown in Table REF and Table REF .
In supervised experiments, there is almost no cost of introducing local greedy losses, and our local forward gradient method can match the test error of backprop on MNIST and CIFAR. Note that LG-FG-A fails to overfit the training set to 0% error when trained without data augmentation. This suggests that variance could still be an issue. For CIFAR-10 contrastive learning, our method obtains an error rate approaching that obtained by backprop (26.81% vs. 17.53%), and most of the gap is due to greedy learning vs. gradient estimation (6.09% vs. 3.19%). On ImageNet, we achieve reasonable performance compared to backprop (58.37% vs. 36.82% for supervised and 73.24% vs. 55.66% for contrastive). However, we find that the error due to greediness grows as the problem gets more complex and requires more layers to cooperate. We significantly outperform the FA family on ImageNet (by 25% for supervised and 10% for contrastive). Interestingly, local greedy FA also performs better than global feedback alignment, which suggests that the benefit of local learning transfers to other types of gradient approximation. TP-based methods were evaluated in {{cite:df37941e13d0f6dec77dc1ad3461b3f00882df40}} and were found to be worse than FA on ImageNet. In sum, although there is still some noticeable gap between our method and backprop, we have made a large stride forward compared to backprop-free algorithms. More results are included in the Appendix .
{{figure:2d77c677-6646-4aa4-854f-a7058b93ca74}}{{figure:d99d7b4f-7729-4ee3-b931-ff9a435ee3f3}} | r | 75c8945bc9fae2d0074f773ae37e9a16 |
The first blocks of [fig:dataflow]Fig. REF involve the transformation of a general-purpose dataset into an analysis-specific one through the implementation of a data model.
The newly obtained dataset is then used for network analysis.
The graph is constructed by identifying the nodes and the properties that define whether two nodes are linked (in our case: being partners within the same project).
Global and local properties are extracted, in order to produce a qualitative and quantitative description of the structure of the network of relationships generated by the program under examination.
As represented in [fig:dataflow]Fig. REF , the overall process of our studies ends when a report summarising the analysis and its outcomes is produced.
Descriptions of the numerical outcomes in terms of social/economic effects are given, in order to provide the evaluator with a useful tool for her/his purposes.
We report on the SQL-based {{cite:25036b05d7d16e110e2691f75a20820080c997ce}} modelling approach, which allows us to translate a given dataset in Open Data format into the reference set of analysis models (relational tables), and on the selected metrics relevant for executing an effective network analysis.
It is worth underlining that we have adopted well-known relational tables to store data in order to ensure integrity of our knowledge base and provide significant results.
Our current implementation of data models exploits the open source object-oriented PostgreSQL 9.3 DBMS.
For network analysis we have used the Wolfram Mathematica software and the R programming language.
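The node/link definition above amounts to projecting a (project, partner) relation onto a partner-partner graph. A sketch with a hypothetical toy relation (the real table and column names live in the PostgreSQL data model):

```python
import itertools
import networkx as nx

# Rows of a hypothetical (project_id, partner_id) relation, e.g. fetched
# from the relational analysis tables described above.
rows = [("P1", "A"), ("P1", "B"), ("P1", "C"), ("P2", "B"), ("P2", "D")]

projects = {}
for project, partner in rows:
    projects.setdefault(project, set()).add(partner)

G = nx.Graph()
for partners in projects.values():
    # Two nodes are linked iff they are partners within the same project.
    G.add_edges_from(itertools.combinations(sorted(partners), 2))

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
print(nx.degree_centrality(G))  # one example of a local property
```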
| m | b8d6854d084c96249d81c78b9274dde7 |
Recently, the K2K collaboration has reported experimental results on inclusive single-pion production induced by charged-current neutrino interactions in the energy region of 0.4-3 GeV {{cite:da0072ed07275ac205f68c5682b0b4ab832dcda0}}. Similar results were earlier reported by the MiniBooNE collaboration in the energy region of 1 GeV {{cite:50377f7c69463b4e94abed703162a8cb614f72e2}}. The study of the energy dependence of the pion production cross sections in neutrino reactions on nuclei in this energy region is important for modeling the neutrino-nucleus cross sections in the various Monte Carlo neutrino event generators used in analyzing the present neutrino oscillation experiments at MiniBooNE and K2K and future experiments to be done by the T2K {{cite:95dc161803bca5c4b8a4efc627b99252e6232657}} and NO{{formula:a4d66294-f40b-4230-a1ab-3dac51e8a346}} A {{cite:4680dfcc682e389403154a626c9de5c591655598}} collaborations.
| i | 8666604cbc532e390ee925e20c31f964 |
A recent trend developing in quantum compiler optimizations {{cite:b6a5356b73d09e237fe30bb7d1271f2ca74b0444}}, {{cite:89f8f3f4a14f8cdf448ab2b72ed128a362a68882}}, {{cite:61d72440726d96bbdb34afc6de4547474d27a37d}}, {{cite:1174a453b3ac81113c9725a03b9e6d8cef9302d4}}, {{cite:e9efbb3936f497b7949c87bfb2c715d429e9b0ee}}, {{cite:f7c4357bf38f5a70b2cc41af1cb9742f984f98f5}}, {{cite:6abfee4d6ecc3c55acbb5e8c642860b3cc300ed5}}, {{cite:7a4055807b815320f06193c940f7c9efea1ba26e}} is to
exploit more potential from the hardware with more detailed device information.
Different from the innovations in this direction that are mostly driven by the underlying technologies, Paulihedral takes another approach which is to enable deeper compiler optimizations by leveraging the algorithmic properties of the high-level quantum programs.
Relatively little attention has been paid to this direction because
1) it is exceedingly difficult to extract useful high-level semantics from gate sequences, which is the level that most compiler infrastructures today operate at,
and
2) scalable yet effective static analysis of quantum programs is also very hard as the size of the operation matrices grows exponentially with the number of qubits.
We believe that these are two critical yet difficult open problems in the future development of quantum compiler/software infrastructure since they prevent the compiler from automatically detecting high-level and large-scale optimization opportunities.
| d | 7a4ea9047cc98a955ddb3c525dd2efe7 |
where {{formula:db4ce0be-604b-4848-af74-c99e891af3a1}} , {{formula:fb8d4ccd-91c2-40f1-a9be-3774a58a3059}} , {{formula:3f311830-3527-484a-b060-e7806f22b882}} , {{formula:15b8fe83-5b13-42b8-a216-516fe7426f6a}} , {{formula:34545b95-7e34-4c1d-900e-ccc06476f69f}} are proper and convex, and {{formula:4b151712-0789-430a-9dfb-840f9b5c319b}} is additionally closed.
The alternating direction method of multipliers (ADMM) {{cite:676b986caad6297058963ab059ff0ebe3db2d7c5}}, {{cite:f370cf10c23f03efaa6a083f686e55f706bd689f}} is popular for solving (REF ).
The iterative formula reads as
{{formula:f427f22d-0f48-4541-8477-113c1bac4b65}}
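As a concrete instance of the scheme (a sketch for the familiar lasso special case, minimize (1/2)||Ax - b||^2 + lam*||z||_1 subject to x = z, rather than the general problem (REF )):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for (1/2)||Ax-b||^2 + lam*||z||_1 s.t. x = z (scaled dual form)."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))   # cached solve for the x-update
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))                                   # x-min
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)   # soft-threshold
        u = u + x - z                                                   # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=0.5), 2))
```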
| i | 3abf93681df64d5b674eff4176f601be |
As stated in the previous part, the main difficulty of performing the random layout sampling is the description of the two-dimensional feasible layout region of components under the non-overlapping constraint so that we cannot directly draw samples from the complicated layout space.
Hence, we propose to use a Markov chain Monte Carlo (MCMC) sampling technique to solve this constrained sampling problem.
To be specific, we follow the idea of the Gibbs sampler {{cite:bdb8e66aa4165090fea458d144627d6b54858209}} and convert the high-dimensional constrained layout sampling into a sequence of one-dimensional constrained conditional sampling processes.
For this reason, we refer to the proposed method as Gibbs layout sampling (GibLS).
More precisely, when performing conditional sampling, only one coordinate variable of one component is considered as varying while the other components and the other coordinate of this component are kept fixed.
Therefore, the feasible layout region for sampling consists of finite pieces of one-dimensional segments under the non-overlapping constraint, which can be readily described and sampled in a continuous way.
Apart from this, it should also be noticed that any layout scheme can be reached by conditional sampling starting from another layout, as long as the two layouts can be converted into each other by translating components within the layout domain without leaving the plane.
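A minimal sketch of one such conditional update for axis-aligned rectangular components (a simplified stand-in for the actual GibLS implementation, with hypothetical geometry): collect the forbidden x-intervals induced by the other components, subtract them from the domain, and draw uniformly from the remaining segments.

```python
import numpy as np

def resample_x(i, centers, half_w, half_h, domain_w, rng):
    """One Gibbs update: redraw the x-coordinate of component i, keeping all
    other coordinates fixed, under the non-overlapping constraint."""
    lo, hi = half_w[i], domain_w - half_w[i]      # keep component inside domain
    forbidden = []
    for j in range(len(centers)):
        if j == i:
            continue
        # Component j constrains x_i only if the two overlap in y.
        if abs(centers[i][1] - centers[j][1]) < half_h[i] + half_h[j]:
            d = half_w[i] + half_w[j]
            forbidden.append((centers[j][0] - d, centers[j][0] + d))
    # Subtract forbidden intervals from [lo, hi] -> feasible 1D segments.
    segments, cur = [], lo
    for a, b in sorted(forbidden):
        if a > cur:
            segments.append((cur, min(a, hi)))
        cur = max(cur, b)
    if cur < hi:
        segments.append((cur, hi))
    segments = [(a, b) for a, b in segments if b > a]
    # The current position is always feasible, so `segments` is non-empty.
    lengths = np.array([b - a for a, b in segments])
    a, b = segments[rng.choice(len(segments), p=lengths / lengths.sum())]
    centers[i][0] = rng.uniform(a, b)             # uniform draw on that segment

rng = np.random.default_rng(0)
centers = [[2.0, 2.0], [6.0, 2.0], [4.0, 6.0]]    # component centres (x, y)
half_w, half_h = [1.0, 1.0, 1.5], [1.0, 1.0, 1.0]
for _ in range(10):                                # sweeps (y-updates symmetric)
    for i in range(len(centers)):
        resample_x(i, centers, half_w, half_h, domain_w=10.0, rng=rng)
print(centers)
```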
{{figure:b28ae7bf-2f0d-4508-a6d7-c01fc59f904d}} | m | c0ac1b2ba348c3d3661aa590adbb62e6 |
The electron parameters were obtained from the Parker Solar Probe Solar
Wind Electrons Alphas and Protons Investigation (SWEAP) {{cite:c96dc287a9e6ac84726cb8468ff52b837f84af7f}}
Solar Probe Analyzers (SPAN-A-E and SPAN-B-E) {{cite:f86b0167646942022b5e3abc1922ee39fe2e3eff}}.
We utilize electron temperature, temperature anisotropy, heat
flux and density moments and the pitch angle distributions for energies
from 2 to 2000 eV, covering core, halo and strahl {{cite:229bc97d01b77b37c03f95acf4abadc35f213d68}}, {{cite:9cd3b489cda0495c6cd28552df3fb11be2ae2066}}.
The solar wind velocity was obtained from the Level 2
Solar Probe Cup (SPC) moments {{cite:2d920c453fe998d2ca0ef35bfb971b8c474aeb05}}. The solar wind
density, and core and suprathermal electron temperatures were obtained
from the Fields Quasi-thermal Noise (QTN) data {{cite:1f7dfae5cccf4ca982950de9457080292ccc0172}}.
| m | 3b4ffdbd96762595bec00218f25055fb |
Comparing the calculation results of {{formula:52367c76-dbfe-47d6-a4e0-5556857b0ac8}} with those of the {{formula:609fdbd4-ee0a-4fda-bc7b-b3d53e09ba3a}} decay modes, it can be found that when the mixing angle is {{formula:58f2f77f-d550-466b-8434-0684c77e6824}} or {{formula:29f296ff-734c-4701-9af6-24211137f62f}} , the branching ratio associated with {{formula:1b0afde6-9004-4267-9357-808c97f6fda8}} is nearly 4 times or 2 times that of {{formula:eba3f860-3e33-442f-91fc-076a200054d4}} , respectively. This can be attributed to the fact that the decay constant of {{formula:781d3592-3693-40e9-84c5-912f2544f6fe}} is small, which is consistent with the analysis in Refs. {{cite:8514971ae0b7f0f8677e08f9e2f623b6102be18a}}, {{cite:2a6b64b309e22f6b815404dc83bb5bfd45612fb6}}, {{cite:603032b34605254bd855c1640f609348b0c777d5}}, {{cite:53d9cd4f639742879c2c919451cc0cddbb38a8a3}}, {{cite:dae3491da1c02f115591f12119c25f35982fea79}}. Additionally, when the 2S-1D mixing mechanism is considered for the decays {{formula:b77d2d6e-b037-4d86-a2d4-42370c951fe5}} , the calculated numerical results change only slightly in comparison with those of the decays {{formula:b8a3c948-22bf-4878-8aed-94cb1ddc8d68}} . Hence, the {{formula:b7813539-d419-4038-86a0-f8d8f2d45cd7}} state might be regarded as the {{formula:e97724b5-4cce-40bb-9f5c-aa9cea964fa4}} state. In addition, considering the specific expressions of Eq. (REF ) and Eq. (REF ), the reason why {{formula:691fe12d-92a3-4015-b5f1-8c0c4d1541fb}} and {{formula:13f65ef0-ce16-45ee-82f0-31d398f9b304}} have very different sensitivities to the change of mixing angle under the 2S-1D mixing mechanism can be seen. Numerically, {{formula:cac4170c-ec53-421f-8323-f18ed3fa898c}} is much larger than {{formula:0f4c92fb-4e7a-4ba8-9274-8fdda492caff}} because of the difference in their decay constants, so {{formula:680459d3-e515-4623-b03c-62f0cb6587da}} dominates the decay amplitudes of {{formula:c2cc2d8c-4b16-4944-9558-0628d88f19a9}} . It can be found from Eq. (REF ) that when the mixing angle is set to {{formula:f284cc06-c257-4eca-8b3b-a6ea7276e3a3}} and {{formula:a7a3896a-f86d-48a0-9f95-f6e9ea9d0bb0}} respectively, the values of {{formula:68618661-ceca-43c4-919e-82c7cd1541fb}} differ greatly. Therefore, a change of mixing angle will cause a significant change in the decay amplitude of {{formula:e79f29f0-dfca-4ae9-a771-89994bd02085}} . However, for the decay amplitude of {{formula:bb80f942-3b4c-438f-b156-e03410b66522}} , based on Eq. (REF ), when the mixing angle is chosen as {{formula:5a3626bf-bdab-4c88-9627-436d24d74da2}} or {{formula:dfcf6200-4abb-45cc-a283-5fb94b7e356b}} , the two values of {{formula:a21fb364-5986-45df-95b2-2b4f99809e5b}} are very close, which makes the decay amplitudes of {{formula:ce8895b8-f274-4adf-a73f-c2d964e34988}} only weakly sensitive to the mixing angle. The running LHCb experiment is an excellent place to detect decay channels {{formula:e05523f0-b96b-4745-8da2-e9dd73b3c258}} with branching ratios on the order of {{formula:07b34acc-2516-4cb8-8988-cb0897044cb0}} , which will help us gain a better understanding of the mixing mechanism of charmonium mesons.
| r | 9626f1f1f6cc07fa1bd0ae251f278c4d |
Related to this is the question of completeness and contamination in
analyses of stellar populations near extragalactic supernovae and
remnants.
We can use our nearly complete knowledge of the environment around
Vela to evaluate the 50 pc projected search region used by {{cite:95ad90fa9e5a4f6d18fc77f929b42774902fc729}}
and subsequent papers. We again restrict ourselves to the more massive
stars using the magnitude and color cuts described in §2.
We transform the stellar positions to axes
aligned with Galactic coordinates, select stars in a sphere around
Vela, and then count stars using their positions {{formula:fcce0a23-a911-4c6d-85f8-abb8c028c661}} years ago
in circles centered on their median position
as if we were looking down on the plane of the Galaxy
(i.e. we ignore the distance of the star from the Galactic plane).
As before, there are 19 stars in a 50 pc sphere around Vela, while
a 50 pc circle centered on their median position {{formula:be2f0524-f27c-4773-9643-b2b22d529b2d}} years ago
contains only 9 of them along with 22 other stars, so the completeness
is 47% and the contamination is 79%. A 100 pc circle contains 16
of the stars along with 140 other stars leading to a completeness of
84% and a contamination rate of 90%. If instead we consider a 100 pc
sphere, which contains 152 stars, a 100 pc circle contains 73 of them
with 73 additional stars for a completeness of 58% and a contamination
rate of 50%. For a 150 pc circle, the completeness and contamination
increase to 76% and 63%, respectively. This age corresponds to
the lifetime of a {{formula:929f5e9b-7e8d-45f1-8cf6-15dd74889648}} star – a {{formula:f9b4a6f8-de6e-4d22-ad6e-cdb617adac85}}
star lives three times longer, so the completeness will be far lower
and the contamination will be far higher. For the 50 pc sphere, the
completeness/contamination for the 50 pc and 100 pc circles are
11%/89% and 26%/90%, respectively. For the 100 pc sphere,
the completeness/contamination for the 100 pc and 150 pc circles
are 14%/56% and 28%/64%, respectively.
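The sphere-versus-projected-circle bookkeeping above is mechanical to reproduce; a sketch under assumed inputs (hypothetical arrays of present-day 3D positions and traced-back in-plane positions, in pc):

```python
import numpy as np

def completeness_contamination(pos_now, pos_past, center_now, center_past,
                               r_sphere, r_circle):
    """pos_now: (N, 3) present-day positions in Galactic-plane-aligned axes;
    pos_past: (N, 2) in-plane (x, y) positions traced back in time.
    Truth = inside a sphere around center_now; the search region is a
    projected circle of radius r_circle around center_past."""
    in_sphere = np.linalg.norm(pos_now - center_now, axis=1) < r_sphere
    in_circle = np.linalg.norm(pos_past - center_past, axis=1) < r_circle
    recovered = np.sum(in_sphere & in_circle)
    completeness = recovered / np.sum(in_sphere)
    contamination = 1.0 - recovered / np.sum(in_circle)
    return completeness, contamination

# Toy example with random stars (illustrative only).
rng = np.random.default_rng(1)
pos_now = rng.uniform(-200, 200, size=(5000, 3))
pos_past = pos_now[:, :2] + rng.normal(0, 30, size=(5000, 2))  # crude drift
print(completeness_contamination(pos_now, pos_past,
                                 np.zeros(3), np.zeros(2), 50.0, 50.0))
```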
| d | 504db64e5a6652cf1991ff2512491288 |
Access to intermediate DNN layers:
In this paper, we assume that only black-box access to ML models is available.
The performance of the detector can be improved by enabling access to outputs of intermediate layers of DNNs {{cite:40f66eed0e856863fdf9553b271355438e726513}}.
One strategy is to compare outputs of intermediate layers of a Trojan model to outputs of a clean model, assuming that the output from the last layer of the Trojan model is similar to that of a clean model for the same input distribution.
Quantifying impacts of access to intermediate layers on Acc-C and Acc-T when the adversary uses the MM Trojan Algorithm REF to train undetectable Trojan models is a promising research direction.
| d | 67b72931a0c4e0c55fd02362aff21a59 |
M33 X-7's secure dynamical data and distance, the X-ray source's clean
thermal-state spectrum and moderate luminosity, and an abundance of Chandra and XMM data have provided arguably the most secure
estimate of black hole spin that has been achieved to date: {{formula:6faa6eed-a20f-4c73-a49a-8d3db6d258dd}} , where the error estimate includes all sources of
observational error. Since an astrophysical black hole can be described by
just the two parameters that specify its mass and spin {{cite:cab4092926fde24e5d592b1c41dc666f8ef677e2}}, we now
have a complete description of an asteroid-size object that is situated
at a distance of about one Mpc.
| d | 80ccfd31d209f4eb1f1c7fa645a8e9be |
The Bell instability {{cite:7ca6cc9bd16ab2bd4494ec6ab9cc2d18a9631a7e}}, {{cite:1ead4aacdeb5db030d8995b75e242b0bb6c525f6}}, which amplifies magnetic field in the
shock precursor region, generates linearly polarized structures
in a near perpendicular shock geometry.
Considering the effect of these short wavelength fluctuations on the cosmic ray current,
{{cite:4fb01c7dfb6e10b88706e4e442aa03725d1803fe}} and {{cite:ace4863fc5a7422dcf71f7f8acb5f3486a74ea7e}} have shown that long wavelength upstream structures can result, with spatial variations of the magnetic field strength and hence synchrotron emissivity. {{cite:381c634c23cfed7165cf096b39a1ca4599e60b1a}} argue that these long wavelength structures are responsible for the “stripes”. A number of conditions must be met. Most importantly, the
shock region where the stripes appear must be “nearly perpendicular” {{cite:381c634c23cfed7165cf096b39a1ca4599e60b1a}}, and that in this nearly perpendicular
region, shock acceleration must still be efficient. But as discussed elsewhere
{{cite:d2934b43b5b1c6be61245c07280e9f01fcbfacb0}}, {{cite:2a3ef26251ce7915698b6a256f7867b4ddbb1cb3}}, the efficiency of shock
acceleration at quasi-perpendicular shocks is open to question.
| i | fe0d6868a3d34a617000fcb4891a151f |
Table REF compares Impression Network with Flow-Guided Feature Aggregation and its faster variant; both are described in {{cite:90b6f75112e41c9c08d85d40ab5247e1582a4c0c}}. FGFA is the standard version with a fusion radius of 10, and FGFA-fast is the accelerated version: it only calculates flow fields for adjacent frames and composites them for non-adjacent pairs. This comparison shows that the accuracy of Impression Network is on par with the best aggregation-based method, while being much more efficient.
| m | f69a4e080692e5e04b3c3f1436899566 |
Table REF evaluates the performance of four previous approaches on Dataset-1 and on Dataset-2. We see that three of the four past works give F1-scores that are close to each other on Dataset-2. On Dataset-1 and Dataset-2, the best F1-scores of 0.72 and 0.25, respectively, are obtained by using features from {{cite:ac53697f8aafc08db847482406cfbdd28d1f929a}}; there is a degradation of 47% on Dataset-2, which contains only numerical tweets. This degradation clearly shows that these past approaches are not able to capture the sarcasm that arises due to numbers in the text. All four past approaches are built to detect normal sarcasm, in which the incongruity arises from text. When the incongruity arises from numbers, these approaches degrade in performance, as shown in Table REF . This clearly shows that there is a need to develop a system that is able to capture numerical sarcasm.
{{table:1b431dda-a41e-4672-90f6-b79566107f8a}} | r | 50a3b1e228b47e335baf6276f57deadc |
PASCAL VOC 2012.
To further prove the generalization ability of
our method, we also conduct experiments on the PASCAL VOC 2012 val dataset. As we can see from Table REF , our method consistently beats the supervised baseline by a large margin; the improvements are {{formula:1e0d8d25-d4bf-46ef-83ff-1ed6beb2eda8}} , {{formula:acf35a48-95a4-4ceb-bcc6-b0812b8b9b0e}} , {{formula:a1c8130d-cc0b-4a63-8b49-c44d89add785}} , and {{formula:a71d612c-9241-45ae-b00d-fc20f4c2e9f9}}
with ResNet-50 under the {{formula:47ac9fbe-976e-4cf3-b464-7841b21459a7}} , {{formula:d38c34ba-ec28-464d-8719-27ab863325f9}} , {{formula:47d37d4a-2b54-4c26-aef4-171227bd66d4}} , and {{formula:9c4299f0-d258-437e-9b1b-4384478e2823}} partition protocols, respectively, and {{formula:8f620d11-a139-4baa-b056-e50947ecad06}} , {{formula:02a012a6-f2a5-4d17-b76a-452521042497}} , {{formula:40881e8f-c088-4768-af74-4d3acd2211ea}} , and {{formula:d9639a6c-bc90-4d78-94cb-2b6d7df2a5b3}} with ResNet-101 under the {{formula:48462fa1-6dfc-4401-b62c-8dad1ad260df}} , {{formula:dc389cbd-9926-46e7-8554-2b8a37de414b}} , {{formula:005500df-9daa-46ee-a319-71512be3d6fe}} , and {{formula:25f72e2d-b4a6-4034-a7fc-c15edb52cf8d}} partition protocols, respectively. Additionally, our method is superior to all the other state-of-the-art methods across different settings. To be specific, it outperforms the previous state of the art {{cite:fab2bc87d55b9ad5072a9d8531d041f5a881360c}} by {{formula:fee9f058-e041-41d9-adb9-df08c0e9c457}} and {{formula:f8e1949a-f6c5-4257-8ee1-b1c8f1111d1f}} under the 1/16 and 1/4 partitions.
| m | e7723a0fd51a5141bf38c7cf56a03080 |
Here, we show how to restore gauge invariance of GW theory beyond the linear approximation and beyond the assumption of the Minkowski background. We start with a variational formulation of GWs, which is based on the well-known variational formulation of wave physics {{cite:38cd200b1dec8b8a25d4282e8f76d80c1ac2a96f}}, {{cite:b914ac829ea30d9f1911e6d0a50d6a4eedf49b68}}, {{cite:13f136ac3f52d50b439c3433007cbf4de2d7caf1}} and has also been fruitful in GW theory {{cite:f0cabb4991d58393bef766685b57181d18687845}}, {{cite:c466d9b0b2c309aee438512f9e487b0284669c4a}}, {{cite:c51f459f9d985c011056d33b3d905b7e40887f45}}, {{cite:04766c0f09a35a078bce61a60bbe970e8cf39456}}, {{cite:d6bc973b002999a11ecdff15030839e5caa8eafd}}. We assume the same general methodology that is commonly used in quasilinear (QL) theory of plasma waves {{cite:7c0398a5c19ce8910d9357f57b2cf1c0b1f75510}}, {{cite:7911942bfb8d3693d180a7c62f5427dda61befba}}, {{cite:11fa8272bf37cc36cdee1513ced4f5779b23c92e}}, {{cite:2e9ace0a06f9d54d5a08f329b71fc4509174848d}}, {{cite:bf1d1d9fdd66e6900fe691101c57ff95cebcb65b}}, {{cite:5a9fb54a021d5ed5530127c12f015814c8d01d47}}, {{cite:076c5f66e716057ae72ac4591abf106d90d59c11}}. This methodology is then applied to GWs propagating in an arbitrary smooth background and possibly experiencing adiabatic (see below) coupling with matter. Using our recent results from tex:mydecomp, we formulate the wave Lagrangian density in a gauge-invariant form. This leads to a QL theory of arbitrarily dispersive GWs, where the waves per se satisfy a linear equation (QL approximation), but the background metric evolves in response to the average energy–momentum of these GWs. Vacuum QL GWs are discussed as an example. As another corollary, we report a gauge-invariant geometrical optics of linear dispersive GWs in a general background. We also show how gauge invariance can be maintained within a given accuracy if nonlinearities are included up to an arbitrary order in the GW amplitude, and we comment on the related model from ref:isaacson68a.
| i | f5baa8a10abb506b4d8b8c0d62a0a268 |
Finally, we note that UHE CRs could be accelerated by external shocks during the early afterglow phase {{cite:9b2c8fd1ad39f56e3f2d7e40cb6a1d01a11e05ac}}, {{cite:e34205b7a99e36bc3b95786471a3d3b9d576a288}}, {{cite:5b67049240ce8c7fba45a20e19bf7695b40dc23e}}, in which PeV-EeV neutrinos are expected and the predicted fluxes have not been reached by the current IceCube. Future UHE neutrino detectors {{cite:a1ce4c8b8b67e831cd2cd862b1682661ec88a55d}} such as IceCube-Gen2, Trinity, and GRAND will be required to test those afterglow models.
| d | 66e223d85cb45f2d9581ddaa3263c99d |
Regardless of the particular method, however, all subspace mapping approaches tend to rely on the assumption that aligning source and target covariate distributions is equivalent to aligning the source and target joint (covariate-response) distributions, which is only true if one believes that {{formula:d100dce2-ce65-4244-81ac-b17b15904974}} . As such, one can think of subspace mapping methods as “fancier” methods for performing covariate shift adaptation when the target data's support is not contained in the source data's support. Another important theme that must be wrestled with when developing subspace mapping methods (and, indeed, feature-based methods in general) is that the transformation must be chosen so that, when the transformed source data is used to build a classifier, that classifier still has good performance on said transformed source data. To see why this might be important, consider the extreme example whereby the transformation chosen is just a constant function. In this case, the transformed source and transformed target covariate distributions are perfectly aligned, yet the transformed source covariates are now useless for discriminating between the classes. Thus, in addition to choosing the transformation function that aligns the source and target covariate distributions, one must also ensure that the transformation is such that the information contained in the source covariates about the response is not lost. Overall, these themes were well captured by {{cite:9ec697c5b52aa460f2ac79849a833e2df47909ca}}, who developed generalization bounds showing that minimizing the target risk under the covariate shift assumption required choosing a transformation function that would balance a tradeoff between (a) aligning the source and target covariate distributions and (b) preserving the information contained in the source covariates about the response.
| m | a294f104cb2b55f08e14b53245f9f299 |
One appealing way is to seek a common origin for these three issues, such as the {{formula:2e42f748-00e6-4a5f-ad3b-7e3efc9f901d}} MSM {{cite:b68047be419ae86ddb9ff992740921abb9c69290}}, {{cite:5d929b231c0d566f0180e18484730f28b0f9a35e}}, {{cite:df50dc9cfa3bf409deefc6b54e6b7d101e02fde6}}, {{cite:7467acb98a6b0762f56d6e828ac65877d4796d5d}}, the scotogenic model {{cite:15bbe497463cd496e6277c078541b98348e753df}}, {{cite:2a869a065f4d9cae5586dcc841efb9ba16bebc83}}, {{cite:03fe7289f3042996644364297abb040a0705949f}}, {{cite:ada32768ad8f478ee1d091ec7fd7fef470ba4f62}}, {{cite:db16be24540a7968efc9281c97a883b5c8852771}}, {{cite:126064ee9f21ed7c04a6f2c6c6757f244d6cc60d}}, {{cite:4f557da280a1f50f0a05f00a73c3f932fb8cf65d}}, {{cite:03a4fcca9abadc110eeb354d1bf302cfa32d99d5}}, and the sterile neutrino portal model {{cite:016d4275ceab0d8279ca0f8f19248508b7fc64bd}}, {{cite:c4843f91fdc14ca199eb1401926e1d458282e541}}, {{cite:bc79666b123d43b48a7a28437a841e2b6ce2b4de}}, {{cite:bc7c75a40820f6115265a1cc6c3795bcb5320ac4}}, {{cite:a579c0f07b867f12e1d3e32f3ad5030fb80cddb2}}, {{cite:fd59fdb64932c32a057e2b4dc8c28717fc317141}}. In this paper, we consider the third scenario. This model employs sterile Majorana neutrinos {{formula:5971c741-d6fb-4c79-816b-a8615a18c0e3}} to generate tiny neutrino masses via the type-I seesaw mechanism {{cite:7cb17c045471f9f45b46365b76902e9d00491ce6}}, {{cite:184b24bf39553894392e7176e443bb2d8c26c674}}. The baryon asymmetry is generated via the thermal leptogenesis mechanism through out-of-equilibrium CP-violating decays of {{formula:68fbee59-c291-473e-ba73-15eb7ac84ccf}} {{cite:db4955a61d57c49270ea78c6f4eb60164b6aec4b}}. A dark sector with one scalar singlet {{formula:58f752fd-d32d-41b4-ad40-7b3a3a7eb893}} and one Dirac fermion singlet {{formula:8d4d90c1-fa7d-4952-992a-3d9046f742bb}} is also introduced, which interacts with SM particles via the heavy Majorana neutrinos {{formula:691445b7-2401-4183-91a8-ecb2b2e55aad}} . Here, we take {{formula:833dde60-3934-400d-a51c-6472b368f672}} to be the FIMP DM candidate. For the WIMP and asymmetric DM scenarios, see earlier studies in Refs. {{cite:c4843f91fdc14ca199eb1401926e1d458282e541}}, {{cite:016d4275ceab0d8279ca0f8f19248508b7fc64bd}}, {{cite:12084642671fc399ad13259ea282bc6bc1b47c99}}. To ensure the stability of {{formula:c28fa88d-ef47-40b8-bbf2-b7e55232e05f}} , a {{formula:8b165d55-47e8-4ed5-828b-225cdcf14d03}} symmetry is imposed, which also helps to avoid the X-ray constraints {{cite:7dfa366159ecb537d06f08df10009bcd22e0c441}}. The relic density of {{formula:b7aa5cbc-1e9a-4c53-a788-ea34667345af}} is then obtained from {{formula:ba322b6b-a758-449c-8086-bfed1e0f1ee3}} decays via the freeze-in mechanism {{cite:bc79666b123d43b48a7a28437a841e2b6ce2b4de}}, {{cite:a48c1c6b1e43eaa731f4cec35f02e4d0b5a8e9d9}}. The relevant Yukawa interactions and mass terms are
{{formula:45ea1894-c25f-4700-9c3b-8f70f2d1be40}}
| i | fce917ba2ba606481d526b69243dfd2e |
Another reason is that the BIC-formation mechanism in such a simple, basic photonic structure as the planar lamellar-grating waveguide turns out to be so special that it simultaneously combines almost all other known mechanisms of BIC formation (see reviews {{cite:9af377d6945aacfe6c60e81b4cfcd5f05df4005e}}, {{cite:9b9ddcf35d4bd0150baa13df271ed70882f34e0c}}, {{cite:047140da2d14a353269474948fa6d75b351b6233}}, {{cite:64b326bcbd1431195c0a184573eb30f147e3be32}}, {{cite:b37862051270ed4bfcf072f11f0617c5b499abf1}}), including the symmetry-protected BIC, the accidental BIC, the single-resonance parametric BIC, the coupled-multiple-resonances (Friedrich–Wintgen, but not Fabry–Pérot) BIC, the interference-based BIC through parameter tuning, and the topologically protected BIC. A more detailed discussion of this fact requires the results presented in sections II–V and is postponed to sect. VI. It is remarkable how merely splitting the central dielectric layer of the planar waveguide into two alternating sections of lengths {{formula:0a898eac-b480-4e2e-84e9-cd27e11f1eea}} (with fill factor {{formula:1811c2b4-a072-42c8-98da-10f9a3a23b9c}} ) and different permittivities {{formula:7f6197cb-1621-48aa-ad50-f75abf397a33}} converts an elementary, trivially soluble problem of the planar dielectric waveguide {{cite:5e74ff88cdd8447bf9292a6c15efc3607b6bd453}}, {{cite:938aca085cd03ec85dd4fc24a6b85f38b64e13e3}}, {{cite:e738a5f02a730e7f4987d7abc09d277fb653a1da}} into a rich, complex problem demonstrating many generic features of optical crystals.
| i | 6f1210c3c3f329d811f6b3bd0edc7a91 |
It is important to note that the large number of formulations not only enriches the linear complementarity problem but also generates different matrix classes along with their computational methods.
For details see {{cite:039b2655737ea363f4408ecc80eeb88eaf2c7a8b}}, {{cite:eb0419a2d45be2bcdb622725e3d7868b3dec1d8e}}, {{cite:4b7d68f39c93f3d786129761b77417d750667623}}, {{cite:35f4602d991b839e69a14ead508e2800b0b3ba52}}, {{cite:a25f54c03a07242a80b7278b9323b02f57ec11a7}}, {{cite:4161a2ed4516acbc437fd84050b4edca5317292e}}, {{cite:955918d0834338a4e9ba12162df36b39c79023b1}}, {{cite:e7eb7f3a383845c2cbae3f2ddbb818425a7212fc}}, {{cite:4dd75aa22069e75ce88bbf694080a09e880f288a}}, {{cite:ed65394f85e91bf7a77ecc07c6105c10a77b01df}}, {{cite:9bf57f2810db78d179b897be656c040c31cadf20}}, {{cite:118c476b5c99e83664d15725e73c6979eaa7527f}}, {{cite:157384e6dbf62f813644f9ef60eaac2096dc1e99}}. For details of game theory see {{cite:fe8e2e0ee3ef8592d8b00402876fbedf80b684c6}}, {{cite:c5c3a9eaadfb66b7ce6c1ef0bc864cabeab5be9c}}, {{cite:6cedc8b07af9fc042fb16f5891831abbe9de1df0}}, {{cite:5b04fb74819b0613e73222b11445542f307c21e2}}, {{cite:75575b55787e20271f20c4e0da40e2a79c9355dc}}, {{cite:a79cd93c2a23a5dc2c07da40a0db37e423f9338e}}, {{cite:ad63efcc9dec25b4054848dd94b321145a1cd669}}, and for details of QMOP see {{cite:ad2c71769885862e9a725c2cbe44f84beba0f76e}}. Further matrix classes arise in the study of Lemke's algorithm as well as of the principal pivot transform. For details see {{cite:039c71a5dbc1e6a6d7873ac15e9fbecae4151755}}, {{cite:b8b8e0ad4f96920c61b9e64b9d7344ae73e77099}}, {{cite:73cadbeb5ab64844d419b60ea3e41238bd48269a}}, {{cite:331cfea0dd7e7c0f1f3a242881ea5407277c4f8e}}, {{cite:8586fec2c4f7a8e8dcf3241ee8c004b0f7b1f577}}, {{cite:4dd75aa22069e75ce88bbf694080a09e880f288a}}, {{cite:6df0220fe34bd246f523ed15d15819d658189b8d}}, {{cite:ed65394f85e91bf7a77ecc07c6105c10a77b01df}}, {{cite:79248903ff786d024006a7a5883bc96478b8b6ce}}.
Now we consider the case of {{formula:ddc23574-3ff5-4e38-9405-8615cbdf3b0c}} with {{formula:ec9b7527-2b58-4bf5-988a-fb387ef97833}} and {{formula:53ed5a64-ad6c-4019-a135-7f2061c75e16}} ; then problem (REF ) becomes
{{formula:71b5d1e4-9f83-45ba-8642-f8502f74bd9c}}
| i | 022b3e8e7ae94eefddfcd7e53617b136 |
The AdS/CFT holographic correspondence {{cite:bcbf646cef8eb9a8312dd792772aba09a80d573d}} sparked a revolution in high-energy physics, as it gave access to the non-perturbative regime of gauge theories and gravity. Holography opened up the possibility of addressing otherwise inaccessible problems in strongly coupled quantum field theories, in relativistic hydrodynamics, in black hole thermodynamics, in high-energy scattering amplitudes, in quantum cosmology, and in many other topics in high-energy physics as well as in other areas of physics. The proven capability of holographic techniques to work out the details of strongly coupled systems led to the exploration of similar realizations in the context of condensed matter and statistical physics {{cite:671f4b8f093dd86f6543478ee6824a7fca34432a}}, {{cite:0271313ff7d88cd08105ac292564223881cdc671}}. This motivated the search for non-relativistic strongly correlated systems that could in principle allow for a holographic realization. This is how gravity duals for models with anisotropic scale invariance, both with {{cite:947c1a2a35f3c2bb86160bd489a2ed71a204bffb}}, {{cite:d682d89b2a0e923a07740bba5b558651ab8366c6}} and without {{cite:ea8e9334b571629e3f2a87c0d67905231b23daba}} Galilean symmetry, were rapidly proposed, given by the so-called Schrödinger and Lifshitz spacetimes. From a broader perspective, the search for holographic realizations beyond AdS spaces has been one of the main lines of research in theoretical high-energy physics for at least twenty years: the dS/CFT correspondence {{cite:16da32d0e728379190bbe9cfa57280282a55ae95}}, the Kerr/CFT correspondence {{cite:96b7591e1ab54c6728a47f3dd1634832d104d462}}, celestial holography {{cite:22dba56fff6e9d501bf21c530232b54a50e5360e}}, and other realizations of flat space holography {{cite:96fb853d276b8f845c36417454d8e2ecf1c03e11}} are some examples of this. The question then arises as to what extent the holographic paradigm can work in non-AdS scenarios and what we can learn from such adaptations.
| i | 622ebbcff95363dc7b5700cc3b8f30a2 |
where {{formula:6d82b1d9-2bd6-4a4c-8598-5503504b1eb1}} is a subgraph of {{formula:6100e48e-7761-4f9e-8da8-9db3199211a4}} with connected components isomorphic to either
{{formula:009f1fbf-d831-4b82-8612-47cc9df1ebe3}} (the connected graph with two vertices) or a cycle, {{formula:88eda766-db57-484b-b69d-ea2b5194f015}} is the rank of {{formula:2878d251-9de1-47f8-9a4f-1556526255ff}} , i.e., {{formula:8f319e78-3939-4885-b254-307618eebb9c}} with {{formula:a70a631c-3531-4bc5-8d80-c675528b07df}} the number of connected components of {{formula:2c20984e-8901-4009-98cd-fa5ffca8d4d1}} and {{formula:285fa73e-4f7f-47f2-a75f-f4d8c543a21d}} its corank, i.e., the number of connected components isomorphic to a cycle.
Clearly, if {{formula:a2dbb00f-f4cf-4ea4-869a-f28eecfd2634}} has no loops, then {{formula:345f7078-d103-405c-b1e1-fdad71a43cf2}} . Besides, {{formula:afa67034-2417-4af9-8a68-86d09eaea383}} counts the number of edges of {{formula:9fb4b7b5-f6d6-4d4d-b896-f47ba05f8148}} . It was also proved by Sachs {{cite:2cc21a2eb25d8c244c1d40ae75c7a79c8820dcea}} that {{formula:aecdde8d-0f9a-4ff8-a20c-ab9673ac104a}} is bipartite iff {{formula:58f140d9-90ce-4bad-9d1b-10f641f8982e}} whenever {{formula:75d23faa-e521-4be5-b28f-b26579a2985b}} is odd.
Moreover, if {{formula:421e47b3-6f8e-4663-a4e7-69fb0b026a31}} is a tree, the formula becomes
{{formula:7ab9e197-42c1-4d3b-b727-36875938589c}}
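As a small numerical illustration of the tree formula above, the following Python sketch (an arbitrary small tree; the brute-force matching enumeration is for clarity, not efficiency) checks that the characteristic polynomial of a tree's adjacency matrix has coefficients determined by its matching numbers, with odd coefficients vanishing:

```python
# Numerical check: for a tree T with n vertices and m_k k-edge matchings,
#   phi(T, x) = sum_k (-1)^k m_k x^(n - 2k).
import itertools
import networkx as nx
import numpy as np

T = nx.balanced_tree(2, 2)                 # a small tree with 7 vertices
n, edges = T.number_of_nodes(), list(T.edges())

def num_matchings(k):
    # count k-subsets of edges that are pairwise non-adjacent (k-matchings)
    count = 0
    for subset in itertools.combinations(edges, k):
        verts = [v for e in subset for v in e]
        count += len(verts) == len(set(verts))
    return count

# coefficient of x^(n-2k) is (-1)^k m_k; odd-index coefficients stay zero
coeffs = np.zeros(n + 1)
for k in range(n // 2 + 1):
    coeffs[2 * k] = (-1) ** k * num_matchings(k)

A = nx.to_numpy_array(T)
print(np.allclose(np.poly(A), coeffs))     # np.poly gives the monic char poly
```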
| r | b30964c186ed86e92f70561f1bdd1c4f |
According to the experimental analysis discussed in Section , different machine learning-based security models perform differently when detecting cyber-anomalies or multi-attacks from the training security data. The significance of the security features greatly impacts both the binary classification model, which detects anomalies for unknown attacks, and the multi-class classification model, which detects the several known classes mentioned above. For instance, according to the experimental results shown in Table REF , the feature {{formula:fabd4f9f-d9ec-464a-8da2-d79edce08daa}} has the highest correlation score of {{formula:d4a747bd-d981-41df-8e63-dc9339c2c59d}} and is thus selected as a highly significant feature, whereas another feature {{formula:e27f10ad-1d47-47bf-b1c0-55448e5f35cd}} has a lower score of {{formula:c99a449d-9384-46c4-afac-3ca1cbe59864}} , closer to 0, for the dataset UNSW-NB15 {{cite:b4d4e05aa20f3aaf36b083eeb351175a55308db2}}, and thus can be considered a less significant feature for modeling. Selecting a set of highly significant security features while discarding insignificant or irrelevant ones can make the security model lightweight and more applicable. For instance, the NB security model gives higher accuracy (85%) when the top 24 features are selected for detecting cyber-anomalies rather than considering all 42 features, as shown in Table REF . Overall, the security models for detecting anomalies and attacks based on various learning algorithms are also affected by variations in the significance of the security features, as discussed briefly in Section .
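The following Python sketch illustrates this feature-selection step in hedged form: the file name and column names are placeholders rather than the actual UNSW-NB15 schema, and the label is assumed to be numeric (e.g., 0/1), so the snippet shows the ranking-by-correlation idea rather than the paper's exact pipeline:

```python
# Rank features by absolute correlation with the label, keep the top k,
# and train a Naive Bayes detector; the CSV and column names are hypothetical.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("network_flows.csv")          # hypothetical data file
X, y = df.drop(columns=["label"]), df["label"]  # label assumed 0/1

scores = X.corrwith(y).abs().sort_values(ascending=False)
top_k = scores.head(24).index                  # e.g. top 24 of 42 features

X_tr, X_te, y_tr, y_te = train_test_split(X[top_k], y, test_size=0.3,
                                          random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```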
| d | 18281f701f11f46a24b864e1ec078b7b |
Additionally, CMA-ES {{cite:1ea0ac88cd6dcef28c1ef72cbcee7c92ad77c092}}, a state-of-the-art, derivative-free, evolutionary black-box optimization method, has been employed to optimize hyperparameters. Therefore, we use it as our evolutionary search strategy for HPO. This allows us to focus on the impact of the two types of hyperparameters under the same search strategy. We note, however, that other evolutionary HPO approaches could be used in the future to test the impact of hyperparameters; this is beyond the scope of this research.
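As a usage illustration, the following Python sketch runs hyperparameter search with the `cma` package's ask/tell interface; the two-dimensional search space and the toy objective stand in for an actual train-and-validate routine:

```python
# Minimal CMA-ES hyperparameter search; `train_and_validate` is a placeholder
# for the real model evaluation, and the (log learning rate, dropout) search
# space is illustrative.
import cma
import numpy as np

def train_and_validate(x):
    lr, dropout = 10 ** x[0], float(np.clip(x[1], 0.0, 0.9))
    # ... train with these hyperparameters and return the validation error;
    # here a toy surrogate objective is used instead.
    return (x[0] + 3.0) ** 2 + (dropout - 0.3) ** 2

es = cma.CMAEvolutionStrategy(x0=[-2.0, 0.5], sigma0=0.5)
while not es.stop():
    candidates = es.ask()                          # sample a population
    es.tell(candidates, [train_and_validate(c) for c in candidates])
print("best hyperparameters:", es.result.xbest)
```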
| i | 052cc66588978c97450dadd015ed934d |
Finally, we examine the impact of the weighting factor {{formula:500e2148-6cf7-48f9-b83b-3f1367989314}} in Equation REF to further elucidate the network training process and the relative roles of {{formula:1c51023b-5527-4290-9b1c-0c32b4b036e9}} and {{formula:00e1ccc0-841d-4edb-ab0f-fcd8d75840a7}} . On one hand, hybrid networks trained with a strong {{formula:08ca2d48-cd7d-4a05-a0e5-eda030597871}} weighting can be treated as physics-informed neural networks that predominantly train by solving differential equations but use data to help with network convergence. On the other hand, hybrid networks that use a strong {{formula:4ea66dd7-1a49-4b43-a814-6c63a1d501e1}} weighting can be treated more as conventional data-based networks that use Maxwell regularization to push the outputted data to be more wavelike. We train a series of WaveY-Nets in which {{formula:08fc3019-3e7e-44c4-b64b-17ae62324370}} is normalized each iteration to have the same magnitude as {{formula:85731a51-4d37-4d18-8d42-1494d7b0e73d}} and {{formula:b13ffd82-3da3-4d93-af6c-d7fcfcbbf51a}} is fixed to a chosen number. The plot of the resulting full field MAE values for {{formula:e0f90803-f3a5-417c-bbda-796ce98a3eac}} ranging from 0 to 1 is shown in Fig. REF d and indicates that the best-performing networks use an {{formula:264fc0f0-4b7f-4599-a4a7-1284194ff9ca}} between 0.2 and 0.6. As such, WaveY-Net most effectively operates as a data-based network that uses physics to regularize the quality of outputted fields. This biasing towards data-based loss is reflected in our observation that while it is straightforward to effectively train a network only with {{formula:ed655f16-515d-4187-b51f-d2d11f2c9cbb}} , the network does not properly converge when trained only with {{formula:72bd3567-ba0f-4f60-9080-87d1bbb9e314}} (see Supplementary Section 3). Training methods that use stronger {{formula:fc000014-2326-4eea-9453-55375ecf90ca}} weighting are of interest because their proper implementation may reduce the reliance on large training datasets. Concepts such as the incorporation of an active weighting scheme for boundary condition contributions may improve the performance of those networks {{cite:be5f7bf26122b882bb56450434b46cb25331d959}}, and they will be a topic of future study.
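The weighting scheme can be summarized by the following PyTorch sketch; the convex-combination form of the total loss is our assumption for illustration, while the per-iteration renormalization of the Maxwell term to the magnitude of the data term follows the description above:

```python
import torch

def hybrid_loss(fields_pred, fields_true, maxwell_residual, alpha=0.4):
    """Mix of data loss and Maxwell-residual loss; the exact combination
    form is an assumption, and alpha in ~[0.2, 0.6] corresponds to the
    best-performing networks reported above. `maxwell_residual` is a
    placeholder for the physics-residual computation."""
    L_data = torch.nn.functional.l1_loss(fields_pred, fields_true)
    L_maxwell = maxwell_residual(fields_pred)
    # rescale the physics term each call to the magnitude of the data term
    L_maxwell = L_maxwell * (L_data.detach() / (L_maxwell.detach() + 1e-12))
    return (1 - alpha) * L_data + alpha * L_maxwell
```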
| d | 7b6542eeae022cd071b9f86670f8c185 |
We compare the quality scores of the deblurred images {{formula:f457ed81-50ce-452d-b3b0-c194b2c7a721}} with the results of two other spatially varying defocus blur removal methods: a combination of the deconvolution methods proposed in {{cite:72fcfac2181cb43d20ac6530699e4a6326867e18}} and {{cite:fd34e6ed3a167cb7afcab04603c1bc5a5ab1da04}}, which is also used in {{cite:d0ffbf8f90045cb9c372288f0a40cc4588d144d5}}, {{cite:106682ddcc74ce581982d6452b5242339c19d37f}}, {{cite:988d1b5dd954d0427ff96ed65aba041467cbbefc}}, {{cite:56729e19d7007d0f6017fac2065aa005ed4749a1}}, and the method proposed in {{cite:27c78dd06a3f8685df24c186c16a2fdb37902e00}}. Additionally, we compare the proposed method with a very recent blind approach {{cite:298c44fd22a5f89c63ac3b22db493b0c18cf17ea}} that estimates both the blur map and the deblurred image.
{{table:8771e06a-9dea-40f9-abd5-c4306beefb98}} | r | f834dcaddd745676ce15492deaf49c2b |
In eukaryotic cells, membrane shapes under mechanical stress are mostly controlled by the mechanics of the cortical actin cytoskeleton underlying the cell membrane, which produces contractile stresses. It has been observed that membrane shapes mainly depend on the actin thickness, with thin shells showing a cup-shaped deformation and thick shells producing membrane wrinkles {{cite:ff2792e5e86bffdfa88870c1b9be47085e08cde5}}. Our work provides a possible link between contractile activity and the emergence of cell-surface ruffles, circular dorsal ruffles (CDRs) and caveolae {{cite:d9248083a15a8b25a772eebc357a996ae0888879}}, {{cite:8341b0d78fe2fb28fc2015c53b3b9aa379aeba23}}, {{cite:b544708c12687a6ba3bd89edef633f2a1e9db218}}.
Surface invagination which plays a role in active transport through cell membranes might also be related to local contractile forces. During macropinocytosis (fluid endocytosis), extracellular fluid is brought into the cell through an invagination of the cell membrane forming a small vesicle inside the cell. Vesicles form from cell-surface ruffles that close first into open cups (ruffle closure) and then into intracellular vesicles (cup closure) {{cite:8b81b431bdc2a03f2266ebd7817b9eee01e093f3}}, {{cite:421b1d9763cd4a6001798ac0bbb59b1703f07acc}}, which is reminiscent of the dimples that are driven into contractile drops. Some morphologies observed in contractile drops, such as partial invagination and run-and-tumble motion are known to also occur in active polar droplets {{cite:7aeb1bfca733a8573b4a9d4e03ccedb45b65ea3b}}, {{cite:be328bca62d3201147222279c21e9b0e64a63cec}}. In contrast to nematic systems, which are characterised by a headless director field {{formula:f719845e-0f0a-45cf-bc02-8e6c581710d1}} , polar systems are described by a polarisation vector {{formula:9a08650b-6540-4e80-840c-a2f67634d4e5}} . As a result, polar models do not exhibit {{formula:cb5b4170-eace-4250-b7ef-f051b7100216}} topological defects, which have been shown to contribute to the dynamics and flows in cell cultures and bacterial biofilms {{cite:20fd51e20f27b5437f79e11c93d0d1b22ef7ca22}}, {{cite:9a0bab1fa4976cf5b910c30c13982642da8ce79c}}, {{cite:0de47a7b64706d6801d741582766219a972d4742}}, {{cite:639a46c101a9ef85290cd48dc0f5c21d2b301c3d}}, {{cite:21ac11ac8287867c2fda34d2445d5db6fb32458f}}.
| d | 74cd63775c41e7f774aeb2087bc99b63 |
Finding an observed FE for the robustness test is data-dependent. Birth registries are commonplace in sibling FE analyses{{cite:caa62c19c83c0a50a11feeb060e5bfcec4d26b3a}}, {{cite:8d651052a51b09f67ec6b472dd4b5fbce13b6b9d}}, {{cite:894948bda6b6bf554ae4a95fd25014212fe180de}}, {{cite:6ccd4f4d361e51ffd6592de8aa7da23e06efdb21}}, {{cite:c354b81d880bfc338e35114d5ce994febab2d9af}}, {{cite:d5ebe97193f9ad5b063946df2084b96aeee73375}} and may provide several candidate variables. For instance, United States birth records record maternal demographic information that is fixed and related to family health history, such as nativity and ethnicity.{{cite:c986fa8e1fc190599e31279656c1bcedd4cdaa50}}, {{cite:d7b8497a725c9af21daf0bee8c3fe7c5d77b0666}}, {{cite:09e3d97ec138f10ffa48f60aa938f7ddcee20003}} Other sources may not have a similarly wide breadth of sibling-shared characteristics. Additionally, these shared characteristics may have complex links to siblings’ characteristics, and researchers must interrogate how candidate observed FEs relate to the treatments and outcomes. For this reason, we considered only pre-treatment variables as candidate observed FEs for the robustness test. Recent methodological guidance discourages conditioning on post-treatment variables to minimize bias in treatment estimates.{{cite:3ffa9cb8dc2382a83ab4b19a8cec03ce0bbbe738}}, {{cite:639c16ad5e40ac19b45d152f6e0fef5beb19558a}} In our sibling FE model, conditioning on an observed FE that is causally affected by both siblings’ treatments would induce such bias in the treatment estimate, so it is also possible that it would invalidate the observed FE’s performance in the robustness test. Using an observed FE that is measured prior to siblings’ treatments alleviates this concern. More broadly, we recommend graphical modeling to determine a variable’s usefulness for the robustness test{{cite:5a1f4cc0ce8a9ecbd8aeb13135df51ad9913a3d1}}, {{cite:ff08f8b7d66f22dfe811154c4001281742d49514}} and, when possible, running multiple robustness tests with valid observed FEs to thoroughly screen possible outcome-to-outcome interference.
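As a hedged illustration, the following Python sketch fits a standard sibling FE model and a second model in which the family FE is replaced by an observed, pre-treatment shared characteristic; the data file and column names are hypothetical, and comparing the two treatment estimates is one plausible implementation of the robustness test, not the exact published procedure:

```python
# Sketch: sibling FE model vs. observed-FE model (hypothetical columns:
# outcome, treatment, family_id, maternal_nativity).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("siblings.csv")    # hypothetical long-format sibling data

# standard sibling fixed-effects model: within-family comparison
fe_fit = smf.ols("outcome ~ treatment + C(family_id)", data=df).fit()

# robustness test: swap the family FE for an observed, pre-treatment FE
obs_fit = smf.ols("outcome ~ treatment + C(maternal_nativity)", data=df).fit()

print("sibling FE estimate: ", fe_fit.params["treatment"])
print("observed FE estimate:", obs_fit.params["treatment"])
```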
| d | da5a2e694e8b10fb4e0db271d6cf329d |
We show that the revenue-maximizing strategy profile is an equilibrium for a large enough number of bidders, regardless of the information released between the stages.
We compare the number of bidders required for this strategy profile to be an equilibrium across different information structures.
We find that the less information is given to the bidders, the fewer bidders are required to maintain this strategy profile as an equilibrium. (Information structures are pairwise compared using an adaptation of {{cite:57fd647f3edcb175012e8cdcab22eacf05305af0}} to our model; see Definition REF .)
As a result, when the number of bidders is unknown, there are some advantages to conducting an auction without revealing information before the BAFO stage.
| i | a0824e4d3f99cb2412c3ca0a58e22d1a |
Table REF shows the experimental results.
Even though the original GPT-Neo models (125M, {{formula:74d386a6-645d-4bee-9e9e-11470e8a57cd}} B and {{formula:20b87eb5-60cd-472f-a011-09fb0dd6a2f5}} B) and GPT-J 6B {{cite:f6b4f3a18cf1ccdac092c3a01a77524e840afe81}} are trained on a dataset containing GitHub files, they do not achieve satisfactory performance on HumanEval compared to Codex. As mentioned before, the open-source CodeParrot models (110M and {{formula:42ffd43a-0f8a-44d6-b898-b1362a5827f5}} B) outperform the corresponding GPT-Neo models but are still not competitive with Codex. AlphaCode {{cite:564d46cc4caec5ffa780e12c3e3f317bcfcfa904}} also evaluates its decoder-only model on HumanEval, and its performance is slightly worse than Codex. PolyCoder {{cite:e6eba7e19b314ef77368457fcf02a7bb3b0da566}} is pre-trained on several programming languages, and it shows even worse performance than CodeParrot. However, our pre-trained 110M model obtains performance comparable to Codex 85M and outperforms CodeParrot 110M by {{formula:8f324806-8305-4733-8e56-41bd6598d6a1}} % on pass{{formula:5061e6b7-7516-4da6-ba57-8142cd0bc9dc}} , {{formula:266aac34-ad67-441f-8c9f-4774d7e26714}} % on pass{{formula:397bf330-871f-4708-b3da-ec4838f475a5}} and {{formula:4b27e6e2-954c-4b64-b00a-a36444c5ba86}} % on pass{{formula:e91b66ff-117b-4340-b368-228e6b3fb93a}} . While this paper was under review, CodeGen {{cite:472ee2e69395c49f383dae6cba0dcf256ae0b4c1}} also released code generation models of various sizes from 350M to {{formula:2f611755-1f82-4855-b663-567412afe137}} B and obtained good performance. We omit CodeT5, CodeGPT, and CodeClippy, since they all get {{formula:01d74ff5-333d-4315-b7c6-4ad3a6a58a99}} on pass{{formula:b9db2462-6f20-475b-a5b8-b17ff94b6cb0}} , pass{{formula:4885c4b1-eaeb-484b-92a9-01509dc3cb1c}} , and pass{{formula:4d9c22a9-36bb-421b-bce1-672723ec9a27}} .
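For reference, pass@k numbers of this kind are typically computed with the unbiased estimator introduced with Codex: given n samples per problem of which c pass the unit tests, pass@k = 1 - C(n-c, k)/C(n, k), evaluated in the numerically stable product form:

```python
# Unbiased per-problem pass@k estimator (averaged over problems to get the
# benchmark score); n = samples drawn, c = samples passing the unit tests.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

print(pass_at_k(n=200, c=13, k=100))   # e.g. 200 samples, 13 correct
```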
{{table:c247c4e4-9800-43dc-b85d-8275d8acfff1}} | r | c929a49687dc7a36a935e32745b3f7f5 |
Fig. REF shows the luminosity {{formula:c5c73c8a-99d2-49ac-8d99-ee229d161e56}} as a function of R for a variable efficiency {{formula:82a849ee-60c8-4804-9317-29648a24dcc5}} . Notice that L is a decreasing function of R, and for relatively small values of R (the UR case) it is almost of the same order in our model and those of refs. {{cite:4b4d8bb9f1ece3aa950f79ef6a915b284d1cdb3b}}, {{cite:196e511cae4f4e2bf582025c736f727a93f6902c}}, with a small shift (ours is larger). For example, for {{formula:d636f50e-dc35-4487-9043-917dc7ddebb0}} , {{formula:fcf06983-f313-427f-b935-8dd476a61d57}} and {{formula:582ae447-6c4c-43e6-89a7-611090573272}} . Now, if R increases, {{formula:8c88eaff-c00e-4e88-81de-62cfd7a8b59e}} decreases faster than {{formula:21e5a1ec-5b06-411c-bed6-c5e5eb149248}} and {{formula:c1e7254d-3502-4692-ad41-08e74d75d83c}} ; for example, for {{formula:4083dbf6-3c77-4ad9-9123-fff94403f332}} , {{formula:c8e2a0e0-209d-46e9-a98f-031e0dbf81a8}} and {{formula:73919b17-177e-43a4-b3a8-027566a12905}} and {{formula:8ea296b7-82c8-454b-b2b7-d9613c296dab}} . Therefore, for larger values of R (the NR case), the discrepancy between the results of our model and those of refs. {{cite:4b4d8bb9f1ece3aa950f79ef6a915b284d1cdb3b}}, {{cite:196e511cae4f4e2bf582025c736f727a93f6902c}} increases. This is due mainly to the fact that L contains two competing terms: {{formula:5eb4d828-5559-4840-9949-a97343d88ba4}} , which is positive, and {{formula:fb34804f-e83b-4c23-a541-73feae69717b}} (present only in our model), which is negative. As R increases, {{formula:e6dc9354-aed4-43d3-8d68-e57fdc69dd0b}} increases and compensates {{formula:03cda7bb-de28-4bac-99ed-7c3b04230f59}} , such that {{formula:18f298bd-b4ed-40f4-8ca2-c2269824097d}} becomes smaller in comparison with the other models, where the term {{formula:57a6af4a-be06-4383-ab8c-5e86fe0f89cd}} is absent. Table REF summarizes some numerical values illustrating this fact.
{{table:b4f7a4e1-8990-415e-abf9-60ad9681336a}} | r | bfd9f930d1c26b2ddde5dffb7be1ea7d |
We showed that the contact terms allowed by the Lorentzian inversion formula for {{formula:41eda2c6-dbd1-4620-bd7e-6be4c177f200}} vanished by combining localization with the {{formula:b1e0b266-e9b6-4cc2-8a5b-2b27f6d392f1}} four-point function computed using the weakly broken Ward identity. There is in fact a possible alternative argument that only uses {{formula:c758fe4d-0797-4dd3-8fec-a474e4ac58cf}} superconformal symmetry, and so would apply to any {{formula:d1504c40-fdfa-4652-b755-61228f976781}} higher spin theory. Note that {{formula:3f5b561e-1dc1-427d-aa48-49b2d43af2b3}} superconformal symmetry only allows a single contact term with four or less derivatives, which thus contributes to spin two or less as allowed by the large {{formula:968e36ff-c5fc-4275-972e-c290191ddea6}} Lorentzian inversion formula {{cite:fcadd6d5808546060ec3f9c8a91e32834976b97f}}. In {{cite:de56f0d5c7ed77510f25ac70562436251295618a}}, we used flat space amplitude arguments to show that this four derivative contact term for {{formula:a19c4611-33d1-46d8-800e-7b0549b42f99}} actually becomes a six derivative contact term in other stress tensor multiplet correlators like {{formula:818fb1ca-548c-44c6-9110-0e96deee6bef}} , where {{formula:c67191a5-75d7-4ade-a3ad-9a0f5ca888ec}} is the R-symmetry current, that are related to {{formula:a814521e-918d-45a9-8fb0-b29ae9ccdc8a}} by supersymmetry. Since six derivative contact terms generically contribute to spin three CFT data in correlators of non-identical operators {{cite:59d596db42be7638706c13b347a52161ab8fc4e2}}, they would be disallowed by the Lorentzian inversion formula for correlators with spin {{cite:349bc08cfbd43741372c45ab0a0f83087c42ba56}}, which would then disallow the putative four derivative {{formula:f30bf512-cd4a-4fb0-ad43-c976c3daa523}} contact term. In fact, the {{formula:e632e051-8b40-4f17-a9da-cf5dd3dcc91b}} contact term contributes to a scalar long multiplet that contains a spin three descendant. This happens to not contribute to the {{formula:e78010be-326a-4089-aefb-14b2de4a7154}} superblock {{cite:0036919458a87f44d0c956a33e0689c1d653715c}}, but could well appear in the {{formula:34fe4068-c9ad-46a1-8501-eab2b96fca21}} superblock. It would be interesting to derive the superconformal Ward identity that explicitly relates {{formula:8f7a51af-f849-4dd5-aaaa-d7b99f8ecaa7}} to {{formula:54e8536a-48cc-4dce-9529-a788db10202f}} , so that we could verify this alternative argument for the vanishing of the contact term. Our tree level result would then just be fixed in terms of a single free parameter, as in the non-supersymmetric case of {{cite:b2d0327e23ab0a191b55f1218fde6e595ad4a421}}, {{cite:a7eab8573824a5e7cb51a7bf76bc60bea86359c7}}.
| d | 8b1faf55af6e624e35fdbe15e833e95d |
In recent years, there has been growing interest in understanding the performance of statistical procedures when the models they have been designed for are misspecified; see for example {{cite:e763a247d4de527904e3f920cb7e383fdccf9956}}, {{cite:bd4464b6e8313cb0b82dbb4aaa7e605ba6ed6540}}. In this work, we consider regression models with response {{formula:e120e52f-9484-4654-9052-3834d6492cd8}} , a single predictor of interest {{formula:c3f754dd-ef2a-46dc-bb20-1d03831be052}} , and additional covariates {{formula:723bf2d1-7426-4322-b136-a03fce65d31b}} . Our goal is to assess the significance of {{formula:da4e27e9-ba77-4f84-968d-d0c173d0e4f6}} after controlling for {{formula:2c7b1f9c-9f64-4d1b-845f-5039b1088bd3}} , a problem which may be equivalently framed as testing the null hypothesis {{formula:256f0e72-329e-4167-aaf1-f163ec7905bf}} of conditional independence {{formula:7169c90f-b0ed-44f4-bb6e-831b6cf14bdc}} . If either the {{formula:2d7d8914-452f-4c4f-8798-074ceb72d52f}} - or the {{formula:f0d6a6ec-2112-4d24-88d0-ba12c8ef4df8}} -model is linear or generalised linear, the situation is favourable for DEF inference.
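As a simple illustration of one way to test the null hypothesis of conditional independence in this setting, the following Python sketch implements a residual-based check in the spirit of generalized-covariance-measure tests (regress the response and the predictor of interest on the covariates, then test whether the residual products have mean zero); it is an illustration, not the specific procedure developed here:

```python
# GCM-style conditional independence check on synthetic data where H0 holds.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
Z = rng.normal(size=(n, 3))                               # covariates
X = Z @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=n)   # predictor of interest
Y = Z @ np.array([0.3, 0.7, -1.0]) + rng.normal(size=n)   # response

rY = Y - LinearRegression().fit(Z, Y).predict(Z)   # residual of Y given Z
rX = X - LinearRegression().fit(Z, X).predict(Z)   # residual of X given Z
R = rY * rX
T = np.sqrt(n) * R.mean() / R.std()                # approx. N(0,1) under H0
print("p-value:", 2 * stats.norm.sf(abs(T)))
```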
| d | d7a42c2200d81e65466228b2e7df4d7a |
We evaluated the proposed model on the MSCOCO image captioning dataset. The results are reported in Table REF . To make a fair comparison, we extract an image feature vector as initialization of the hidden state using the same Inception_v3 model {{cite:5f490fd486398fd7f253f1273e2d6347698d64de}}, and freeze its parameters (no fine-tuning) in all test models. We compared three test models: LSTM denotes the im2txt model using regular LSTM cells implemented by {{cite:c9e1a4fec27efa9f41ade6f0c66d44c9b8136589}}; RHN denotes image caption generation performed by original RHNs {{cite:968155cbace1ed9c04e36ac8433d42ab79bac0cf}}; and BN_RHN is the proposed method with batch normalization instead of the {{formula:4d2259cc-8ba2-4d13-8204-ca3c290dd1cf}} constraint in the RHN cell. The results show that BN_RHN is the best-performing model. METEOR and CIDEr are generally considered the most robust scores for captioning. The higher BLEU-4 and METEOR scores reflect more fluent language in the image captions, which can be attributed to the RHN depth: greater depth increases the model capacity that helps in learning grammatical rules and language semantics. The LSTM employs a mechanism with input, output, and forget gates to generate complex captions. Our model shows better performance than LSTM, which may indicate that simplifying the gate mechanism while increasing depth does not hurt performance for image captioning. The test model with RHN cells benefits from having fewer parameters during training and good gradient control in a simple way. Our BN_RHN achieves a better result than the original RHN because the gate biases are more flexible, and batch normalization guarantees steady gradient flow in back propagation.
{{table:981c207c-aac3-4ce6-baec-66fd00d27731}} | r | 167b164e7fc6bd032d3c4f7c17b1d7e6 |
To prove cor:all, the requirement {{formula:4a96560b-c02c-4c4d-9ad0-995e7b1f910c}} is crucial in the application of thm:CVRP.
Intuitively, the threshold {{formula:78abeb60-ac55-4949-aa31-fff622e6625b}} comes from the additive term {{formula:acf8265c-7850-45f0-8d70-2066d12bffb4}} in the Structure Theorem.
For the Christofides algorithm {{cite:d96f5f5e09df88909f2fbe283bbf2c94b7ad3d71}}, the requirement {{formula:15700b61-ab33-4fed-96d0-4381bd3c46db}} follows immediately since in a graph with {{formula:3f4431dc-b506-4942-88ae-885b9a5488b6}} vertices the algorithm combines a spanning tree of cost exactly {{formula:5662c21d-6840-40ad-8e44-0e4c1d0b7c5e}} with a matching of cost at most {{formula:6d3a2427-865f-4f4d-95cb-920e8b4dff2d}} .
For the Mömke-Svensson algorithm {{cite:791ccd2d512c42e8794bc63bb98ce0520651ae3c}} we may assume {{formula:c7be24fb-397b-4572-9bb8-20366e746e8a}} , see lem:MS16;
and for the Sebő-Vygen algorithm {{cite:c2ebaab7f6a5b1b272953e9bab7f6d75e0ff5f64}} we may assume {{formula:6d2023c7-b024-47c7-ade0-5c3fdf263ebe}} , see lem:Sebo-Vygen.
Comparing their solutions with the solution of the Christofides algorithm and returning the better of the solutions leads to {{formula:ca85b3ba-75d4-4072-910c-a7126f6c0bdb}} .
The proof of cor:all is in sec:proof-cor.
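For concreteness, the Christofides algorithm referenced above can be run on a small metric instance with networkx's built-in implementation (assuming networkx >= 2.6; the random Euclidean instance is illustrative):

```python
# Christofides on a random Euclidean (hence metric) complete graph:
# MST + min-weight matching on odd-degree vertices + shortcutting.
import networkx as nx
import numpy as np
from networkx.algorithms.approximation import christofides

rng = np.random.default_rng(0)
pts = rng.random((8, 2))                       # random points in the unit square
G = nx.complete_graph(len(pts))
for i, j in G.edges():
    G[i][j]["weight"] = float(np.linalg.norm(pts[i] - pts[j]))

tour = christofides(G)                         # closed tour (first == last node)
cost = sum(G[u][v]["weight"] for u, v in zip(tour, tour[1:]))
print("Christofides tour cost:", cost)         # guaranteed <= 1.5 * OPT
```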
| r | be2f6349c83cf43e8e8a991b4c700e0b |
(iv) Consistency with the standard neutrinos and mass number: From Table REF , we notice that the constraints on {{formula:b4b27585-3e18-4f62-aa34-f4382e475246}} and {{formula:a16cc89c-e129-47e6-8354-64b26734af0c}} in the interaction models are consistent with the standard model predictions. This observation is interesting in the sense that higher values of {{formula:0625053c-f65a-48c1-953e-2ecb751c8038}} usually correspond to higher values of {{formula:e7ab5908-28df-4f4e-9f44-05c487461236}} because of the positive correlation between these two parameters {{cite:ba002af1da528d1e59a79cfcd54fd8f45fbd4676}}, {{cite:11dfd4b988247461fb32cb4e2bbd3bc3d5fc41d7}}, leading to larger values of {{formula:4905cba1-8ede-4dc8-894b-b98a32fa5deb}} . But in the {{formula:cbde4b34-d09f-4416-be4f-8026851d64af}} I{{formula:e1857105-067d-44ed-9e7b-31ddbc26b5d4}} CDM model, we find no correlation between {{formula:a10453fa-97cc-4dbc-b0a4-1499a18c0631}} and {{formula:9db33f79-bcfe-460b-b4d3-e077e3f86c4c}} . Thus, the {{formula:415f4deb-5630-4d8d-bddc-f5c148584665}} I{{formula:69608bd9-0121-4671-86ec-7ece4c1879ad}} CDM model yields higher values of {{formula:23661391-9e9e-4c8a-8bc5-12db3023b273}} and smaller values of {{formula:a4a2720d-59ee-43bb-9596-64ddc2aacc19}} while being consistent with the standard model predictions of the neutrinos and mass number. We notice that the {{formula:81df1fe9-c6d9-496f-9acc-5156dfa43eb4}} CDM model is also consistent with the standard model predictions of the neutrinos and mass number, but this model does not relax the {{formula:6aebbb6d-6886-4105-9ce6-54f5eb1a4067}} tension.
| d | ab612ee53a1e34c2df81c47d39c2f9ad |
In our future investigation, we'll apply the proposed adaptive sample weighting strategies to more robust learning tasks to further validate their generality. Owing to their relatively concise modeling manner, it is also promising to develop a deeper and more comprehensive statistical-learning understanding that reveals their intrinsic generalization capability across different tasks. Besides, we'll try to build a wider range of connections between our method and previous techniques for exploring data insights, like importance weighting {{cite:5fdaeeebc7f743efc11f0fa56ad06a45413ad089}}. More comprehensive meta-class representations will also be investigated in our future research.
| d | b270c3ea3c3edf05b58b0ae0891dcde4 |
We submitted the predictions with different settings to the evaluation server of Cityscapes (https://www.cityscapes-dataset.com/method-details/?submissionID=10089). The overall comparisons of our model with some recently proposed methods are summarized in Table REF . Note that our method mainly aims to optimize the mIoU measure, named Class IoU in Cityscapes. From the table, we can see that, trained with margin calibration, a single deep segmentation model achieves very promising results. Without pre-training on the 20,000 coarsely labelled images, simply replacing the cross-entropy with the proposed margin calibration improves the mIoU by 1%. If the model is pre-trained with the coarsely labelled data and then fine-tuned, the final mIoU can be further boosted by 0.6%. Compared to the original implementation of DeepLab v3+ in {{cite:7af49622f6db6aa27d85205c5d08f36f38f4a3b7}}, our segmentation model is built on SEResNeXt-50, a shallower network pre-trained on ImageNet-1K {{cite:28fa30a60ff2ffcc16e9430c6aaacfacfe1af8a9}} rather than the much larger JFT-300M {{cite:647483c4080ebac20f904a1ea6f2dd25670e9a29}}. Even so, with margin calibration as a better learning objective, the final performance of our implementation is slightly better than that of DeepLab v3+ built on Aligned Xception. Some exemplar segmentation results for scene parsing are illustrated in Fig. REF . We can see that fine-tuning with margin calibration generally reduces false positives and leads to finer details.
{{table:91c55d74-a8bf-48c5-a7ac-75c62ade98d6}} | r | c43d9642f070c7a96cbd3144c13ad290 |
Considering that graph wavelets have a very strong ability to express information, this paper proposes a new deep graph wavelet convolutional network (DeepGWC) to improve deep graph convolution models {{cite:7686a91a5d177dcd1dc0aedee8b1da25cfeaf708}}. Our DeepGWC model not only achieves new state-of-the-art performance but can also be reduced to vanilla GCN, GWNN, and other shallow models by simply adjusting hyperparameters for different application scenarios.
| m | c4e3592a9a188f7b66d2e0fe958e2c2b |
One of the earliest CFD studies on DPIs was conducted by Coates et al. {{cite:de258fe267d9095466bd2c9f6442b3a8571bffcf}} in which they studied the flow-field and particle trajectories in the Aerolizer® DPI for different design parameters of the inhaler mouthpiece and grid. The flow-field was simulated using the RANS approach with the {{formula:af59e411-0627-4afc-9781-b32847fb3dc0}} -{{formula:b7499488-598a-494a-afea-5ac1c22c0c62}} Shear Stress Transport (SST) turbulence model {{cite:e4d4544df40b2d62e9dfd62a2c47ef87ddeff28c}} and with particles tracked using a Lagrangian approach. Flow field validation was carried out by comparing the simulation results with laser doppler velocimetry (LDV) data at the exit of the device. An increase in the size of the grid openings reduced the flow straightening effect, and also the turbulence intensity, just downstream of the grid. Consequently, particle collisions with the grid also decreased, but led to an increase in particle-wall collisions in the mouthpiece. This balancing effect, of lower turbulence intensity and particle-grid collisions with higher particle-wall collisions in the mouthpiece, was found to result in similar values of fine particle fraction (FPF) for these design changes.
| i | c0ac15c12aac15f1cc828866e5248285 |
(4) Gegenbauer moments at the scale of {{formula:c30aa4b3-1b4c-481a-8f66-dcf5d5ca80ef}} {{formula:30562e45-879c-4a6b-93ba-5c1a3b734c6f}} 1 GeV: {{formula:429e9f9b-37e7-4c84-9765-41f8ef67076d}} {{formula:1804cad3-dee6-4fb9-a337-c11c4eb0da03}} {{formula:e3a739a1-3ac9-41de-8a89-7d572863078e}} and {{formula:ac698ee0-9eb1-4307-a538-592d0e06f06f}} {{formula:de9db1d3-ad19-4b76-8254-c7306d6d3954}} {{formula:2aa390e5-a747-4155-a493-b9a32349ed7e}} {{cite:01a2be2f25d0e8e38b49c2c6b127bfb16ec531a3}} for twist-2 pion distribution amplitudes, and {{formula:719ae990-f89a-4358-a45b-b663a24f4a5b}} {{formula:99bd89c0-517b-4f95-aabe-5719f6a3b20f}} {{formula:e12c6ce9-0176-440b-a5e0-a668f215f6e7}} and {{formula:88ee5fac-1b84-4563-9466-f70026d54eb2}} {{formula:e9464243-0a38-44e5-9638-161301daabe1}} {{formula:5963b601-c3c2-47f0-a611-1ec7cc3c854d}} {{cite:5dc34adeebfd8b9918f61d4ede77e3e29a794675}} for twist-2 kaon distribution amplitudes.
| r | 46bf1233b227d792d564a4b812cda131 |
OCL, as one of the cornerstones of GR {{cite:8599d711da690fd167b2e2633232f8508e4af711}}, is not respected in all modified gravity theories; for example, it is broken in the non-minimal curvature-matter coupling theories {{cite:3e1addaf0bfc5c88bba4623987e1ea01124828ff}}, {{cite:74385e4c74b40bed888e0ad75579a7d9358fc540}}, {{cite:8db07857914b016c096397e7cbf198bd6030398c}}, {{cite:af431d2c9d55161ff7e1ec6f7edbbd833e8f72fa}}, {{cite:75b2f80073983f3c6f43a866e094bfe657fc442b}}, {{cite:ced3d119783f22f0e9b863bf69aff0283c81268a}}, {{cite:7125db7bf7425dbe392995942370ab123adaf2b3}}, {{cite:41d9c8edf5e70037a7c4618c4ea23755ef72bc1b}}, {{cite:623c3dfd814dbe02f62dfa1127f8e5abc5126890}}, {{cite:3271da684a94f47b4c52d3591f39cc2f172f1217}}. Rastall gravity is a pioneering theory in this area {{cite:41d9c8edf5e70037a7c4618c4ea23755ef72bc1b}}, in accordance with various observations {{cite:50ca6a3ca48f7872fe612486936c6b4850f28502}}, {{cite:dc09273938ef175ff756212028f2454dc4db583a}}, {{cite:26a358c8b9248a1722e6176cbf9d7f93fc6d6143}}, {{cite:03e91996112cef1f8dc51f88cae03ae6aca8f0a6}}, {{cite:e0127c1e99d7d2eb1295bdf22ccc7b3dd4889cb2}}, {{cite:b3af212b44c3a80cae970d6097ed56f4b6e0248a}}, {{cite:36da3c6523e50e7d1a641f106290f5bcc72d00a0}}, {{cite:9e582a0ab3a7a15504e652839afa3cb207b6211c}}, {{cite:7a0e349a2d2bc5482eae652edc86d93b06e51c06}}, {{cite:1bef21163e39d2b2c59869af9695129cc985b51b}}, {{cite:d89839eae116aaa501b07dfd9c133ee68496c75e}}, {{cite:681b813f713ab5151bb8b2421ef0f1bd2a4d2ee5}}, {{cite:c088403000e225ae964f1df3df61d29c38bd3841}}, and its corresponding cosmology avoids the age and entropy problems that arise in the framework of standard cosmology {{cite:108a0f37824769cd4c9fc07a32f22106df94bdf4}}. In fact, this theory can even provide a better platform for describing the matter-dominated era than the Einstein theory {{cite:26a358c8b9248a1722e6176cbf9d7f93fc6d6143}}. A generalized form of this theory allows us to relate the current and primordial accelerated universe to the ability of spacetime to couple with the energy-momentum sources filling the background and, in fact, introduces this coupling as a candidate for dark energy and the inflaton field {{cite:623c3dfd814dbe02f62dfa1127f8e5abc5126890}}.
| i | afe1a01d86613f33bbbdec2b29f41252 |
An alternative to the microscopic, molecular perspective is to attempt to develop a macroscopic, phenomenological understanding. Such an approach has been productive in diverse areas from statistical physics {{cite:df32d961e4a8fcbfc9cf7113f9f53b24f521d66a}} to economics {{cite:6bbc8e3528e198d4ada1683b7952c23ba4643509}} to some fields of biology {{cite:19b14222818876b00da25e0b062f7ec595aa16bd}}, {{cite:43e2b5e47f3cd325b4c35d0f54391d731e399a30}}, including protein evolution {{cite:99603184252c29d6d16b2d3437a8cdf3118a6ddf}} and cell-size control in bacteria {{cite:330057f675babd3282b333209f27cc078bf969c3}}, {{cite:5868280f3c395c894b9a9061f785ac83f3e8463f}}, {{cite:c28e773af0e00bfb7684429716553699629edcbf}}. One significant concern is that the great complexity of oogenesis and embryogenesis might make simple, phenomenological descriptions inapplicable. Furthermore, the validity of the phenomenological approach can only be determined by developing models and rigorously testing them. This requires a large amount of quantitative data, which is difficult to obtain from oocytes and embryos in model organisms.
| i | c11e87a57b0307d79a1d947d5597a477 |
Light is known to possess polarization and spatial degrees of freedom, manifested by its linear momentum as well as its spin and orbital angular momenta {{cite:4b3bec116f154d48afa569b8d58c51fd8f0a1333}}. Remarkably, the spin angular momentum (SAM) of light can be transferred to electrons in matter, a phenomenon referred to as the inverse Faraday effect (IFE) {{cite:111f1c6eff00d9102fdc18f0c0dc61ecb30c3c99}}, {{cite:87d39dd88c2b6ac9588bedbbb938753ec158ff6c}}, {{cite:fc6fb2a053c5f77477f64ce3ef0b81561cc6b5d7}}. The IFE has attracted much attention for its ability to generate light-induced magnetization, thereby opening the prospect of ultrafast magnetic data storage {{cite:cd6f0b8e1cafe8646084113e4c1341ba2edd91c3}}, {{cite:f6aa4d5c7adf2aa4cd1f9a8c487660b2a91e2baf}}, {{cite:8555473c7e77aab3ee6738c778ee1d25a83f9447}} and non-contact excitation of spin waves {{cite:30597f822e415f86ee65f6fca0fd9d6b2a8fe5b0}}, {{cite:31e975335076552f83641321886b7e8fb58e5e0b}}, {{cite:eeef3784fb2e711c395d01deeb81d2ce0adc5f25}}, {{cite:8bd035ea5e22095504c436de57c2e3bf430833f1}}, {{cite:607871b65436217392289be52f2d9189fd91dbdd}}. Plasmonic nanostructures have recently been investigated to locally enhance and control the IFE in non-magnetic metals {{cite:78177903876e7fd9814c391ebbd09e88a83f9c2d}}, {{cite:0583f573577adc3344ff69b8ba41b290215d1bf6}}, {{cite:18c7ed968dd9c357c1b4925b4cc648dc1accd54d}}, {{cite:c862f542d21edf189d420bdadcbee3de96984453}}, {{cite:ae3557a27f833d631611cc9ceb265c28c40e9ef7}}, {{cite:971a42b9c465b34d219ffe2f5d87d1c39af97440}}, {{cite:9a203d2980aee82baa2bf96cc1fbdea8da185010}}, {{cite:fa3f7cc1fa984ff9e67d7ac5d3968978a0affff9}}, {{cite:28352d21d7e3fd484786366127e3551c26b187aa}}, {{cite:0f86262c0429d0ee6ce324145190211e38d0b9df}}, {{cite:7ab31c9c8f5c2de68fa8c7d7fec7af7dccc02ebd}} and in hybrid structures including magnetic materials {{cite:cdd7efd1623754809395c78e989718e6aab75b59}}, {{cite:926c1792df93074c5d3b996c59131fdd9c42d681}}, {{cite:2f8bac9b4a04173c6b3d12b55d8faa5034ffb7c8}}, {{cite:e2e1355148a51ecdf509d2344e2c5b0da0217c5a}}, {{cite:175ad51e3cd5afb19219e931b47a41524f1424cf}}, {{cite:415f5a6e2fffc610be854e99e18e8b6da3eb4f00}}.
| i | 0277b2c816347f65fe1254320b921a44 |
with {{formula:f0fc7e57-1d7c-4261-aadc-927377a8f5fc}} , are solutions of () and (REF ), respectively {{cite:da6dfc7bd1003e7ee56cef8e030a33a2f0b186e7}}, {{cite:a7729996758635c8acdec95391a7b7defa3a33fb}}, {{cite:dfc5c6fdcdf2c1b24790d04e029419c4fb787d50}}, {{cite:a0fd99ff0e14ba49cd7b94dd5c9927e774262f09}}.
| i | f3d2411abf63aa16baaa001ed8358fa5 |
Another line of traditional methods {{cite:3ea79f47b7baf90093572bdf17fc0285af13ddbb}}, {{cite:45f802d667940dd8994a905a5233ae021e043fbd}} tried to embed the source and target data into a Grassmann manifold and learn a geodesic path that bridges the two domains, but such methods are not easily applicable to deep models. Following them, {{cite:15d9eb1145823bb25b0eb9642b2535226d1a9175}} proposed a person re-identification method based on the shortest-geodesic-path definition, forcing the intermediate domain to lie on an appropriate path between the source and target domains in the manifold; however, their method is designed for supervised domain adaptation, since their bridging loss involves labels from the target-train set. {{cite:fdf7937a4b1ef752c8dbd7aa78bb34b049296265}} proposed generating source-domain-like target images with a GAN, but this is computationally expensive and GAN-based methods may suffer from mode collapse. {{cite:65f9190ed2698799a269adc2be253d74d6bd9167}} proposed a transferable semantic augmentation approach to enhance classifier adaptation, but overlooked the feature-admixture association between different categories.
| m | 9a8f9f4968d032f0c086422bd4074db9 |
At the level of the KPZ fixed point, ergodicity and behaviors around the maximum are better understood. In the zero-temperature setting, numerous results and techniques address the ergodicity question for the KPZ fixed point. For instance, due to the {{formula:673a255c-0fd5-4f4f-bc15-bb5c2dc75442}} scaling invariance, ergodicity of the fixed point is equivalent to local Brownian behavior ({{cite:e947918ddfebdf1d2d48d6bcfd31ed8761647e5a}}), or can be deduced as in {{cite:082635bffd4572c22d5a6f20dc1ce79ce743b0b2}} using coupling techniques applicable only in zero-temperature settings.
| r | 903eda2d8d16b897c8e9f73533cc76fb |
One approach to dealing with this problem may be to create models that account for differences between datasets. More generally, this approach is known as domain adaptation. One specific way to implement this, initially proposed by Alvi et al. {{cite:4cbec48c79882db97815958dab4de1fa9425586b}}, is so-called `Joint Learning and Unlearning'. This uses an adversarial multi-task approach to simultaneously minimise domain (i.e. dataset) prediction accuracy and maximise task accuracy. This approach has been successfully used for MRI segmentation problems, and we have begun investigating how it may be used for ECG data {{cite:ab187d8064b84eb8b9dd1a7d7a6c50f3c6ca9e98}}.
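A minimal sketch of the adversarial idea is given below in the closely related gradient-reversal (DANN-style) formulation; note that Alvi et al.'s Joint Learning and Unlearning uses an alternating confusion-loss scheme with the same goal, so this illustrates the principle rather than their exact training procedure. Architecture sizes are illustrative:

```python
# Gradient reversal: the task head maximises task accuracy while the
# encoder 'unlearns' dataset (domain) identity through reversed gradients.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad                     # reverse gradients for the encoder

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
task_head = nn.Linear(32, 2)             # e.g. ECG abnormality classes
domain_head = nn.Linear(32, 3)           # e.g. three source datasets

def losses(x, y_task, y_domain):
    z = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(z), y_task)
    # the domain head sees reversed gradients, so minimizing its loss pushes
    # the encoder toward features uninformative about the dataset
    domain_loss = nn.functional.cross_entropy(
        domain_head(GradReverse.apply(z)), y_domain)
    return task_loss + domain_loss
```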
| d | 179738e3f5dd0756c3ad8976d05be157 |
If we modify the RTN ansatz by taking the same random state {{formula:6702ebe8-227e-4951-9c0b-5abee1511de3}} on different vertices, then after ensemble averaging there will be nontrivial permutations within a single copy of the system.
Such permutations change the geometry of the averaged state and correspond to more general types of spacetime wormholes.
One type of such wormhole brings a particle from inside the horizon to the outside at late times {{cite:be267f4b179ba7b6dd1d6e85c5e9a074d8919ed4}}. Such wormholes are direct consequences of the random matrix behavior of the Hamiltonian, and they describe the system's memory of the initial condition. Even after ensemble averaging, such wormholes have observable consequences in a single-copy system, such as the late-time behavior of two-point functions.
| d | 6aa14a1a2307b2944febe246d1db17fb |
For image-to-image translation, we review some general methods from supervised to unsupervised settings, such as the pixel-wise loss {{cite:1000b4cc069a9024aa8feec1693aa25a8ed102ef}}, cyclic loss {{cite:f5e9701c90338865858c120b0e0d234b879029da}} and self-distance loss {{cite:d0cc597c29dbc90cdc9f8b59a35ad2d98344a40e}}. Besides, we also introduce some task-specific image-to-image translation models for face editing, video prediction and image super-resolution. Image-to-image translation is certainly an interesting application of GANs, with great potential to be incorporated into other software products, especially mobile apps. Although research on unsupervised methods seems more popular, supervised methods may be more practical since they still produce better synthetic images than unsupervised methods.
| d | d843a506c1b3c57af13d1d0b4a5aa89b |
Blazars can be classified into BL Lacs and flat spectrum radio quasars (FSRQs) based on optical spectroscopic identification. BL Lacs are characterized by a general lack of emission or absorption lines with equivalent widths {{formula:c3189c6a-eef4-4b17-946e-4be49cdb9589}} 5 Å, while FSRQs have emission lines with significantly higher equivalent widths and a flat radio spectral index ({{formula:cf8802f7-7e02-4c8a-9403-afecea92c152}} , {{formula:90371d96-b5f0-43a7-ac17-2a5b973ee95c}} ) and are believed to be the counterparts of the strongly jetted (highly collimated to large scales) Fanaroff-Riley Type II radio galaxies {{cite:591c8e70edb12edaf559cec7d774a0a75564238f}} according to the radio-loud AGN unification scheme {{cite:fc22590322dceb299ab6b13186334bc3cdd10b1a}}. The detection of a population of radio-loud {{formula:a5629c86-7967-4569-9517-b76032cdc6c4}} -ray emitting narrow-line Seyfert 1 galaxies {{cite:c899e93c9dd504fc0c7e22249b19da914663e36d}} provides evidence for jet activity in terms of luminosities and radio-component kinematics, as well as in terms of their luminosity function, which matches that of FSRQs, and their host-galaxy properties {{cite:31baf5d9e8dfc94368ecc9a3b0246f0af56d784e}}, {{cite:b12b0adf0c88a99de096dc9fb98851e0bfa57019}}, {{cite:889a3c59c751ee7b1dbe105da2d05adf30b35f19}}. The traditional classification thus needs to be updated with a physically motivated scheme that may include jet power, accretion rate and black hole mass as defining parameters {{cite:b12b0adf0c88a99de096dc9fb98851e0bfa57019}}, such as an evolutionary blazar sequence {{cite:591c8e70edb12edaf559cec7d774a0a75564238f}}, {{cite:81b861bcc89ab87a8855f6e0b128f3770ef39ecf}}, {{cite:33febeb48c38b2504aaeaa4dbcfdb4725d5616da}}.
| i | 9b47075c3f754b3199c2cb3bc57b6650 |
Quantitative and Qualitative Evaluations for MP.
We use the following state-of-the-art methods for comparison:
(i) U-Net as used in {{cite:fb6d24b03a05eb2dbc578453964c5b514a9f33aa}}, {{cite:7fff93949bbc3ef0f402a98efd260620a52c60b0}}, adapted to work with unimodal input (T1w) and predict the unimodal output (T2w).
(ii) CycleGAN from {{cite:2804b9f501474637f009a0ab0d4ebe0aec3a0db2}}. Here we used a U-Net for the generator (as it led to better performance).
(iii) We improve the robustness of U-Net (from {{cite:fb6d24b03a05eb2dbc578453964c5b514a9f33aa}}, {{cite:7fff93949bbc3ef0f402a98efd260620a52c60b0}}) to OOD noisy input data by employing dropouts (DO) during both training and inference, called U-Net+DO.
(iv) Similarly, we employ dropouts for CycleGAN (from {{cite:2804b9f501474637f009a0ab0d4ebe0aec3a0db2}}), giving us CycleGAN+DO.
We perform multiple forward passes and take the mean to get the final outputs of models with dropouts activated at inference.
Both U-Net and U-Net+DO are trained using the pixel-wise {{formula:ddb3c894-72a7-4dc0-8b12-f2cdd1543453}} loss; CycleGAN and CycleGAN+DO use
an additional adversarial loss with cycle consistency as in {{cite:2804b9f501474637f009a0ab0d4ebe0aec3a0db2}}.
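The dropout-averaging inference used for U-Net+DO and CycleGAN+DO can be sketched as follows (PyTorch; `model` stands for any network containing dropout layers):

```python
# Keep dropout active at test time and average several stochastic passes.
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_passes=20):
    model.eval()
    for m in model.modules():            # re-enable only the dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()
    preds = torch.stack([model(x) for _ in range(n_passes)])
    return preds.mean(dim=0)             # final output: mean over passes
```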
{{figure:957e0f79-103b-402a-9015-36d3287a3390}}{{figure:5165cacd-2b2d-4159-9c90-b9aa01288094}} | r | c94a727d7ee8a9cb43e1d1789ce2d07a |
Following the same methodology as described in {{cite:26b6fcf736646f91b65d7db6eb705dd8a7445673}}, we compute the predicted number ({{formula:b0e680cd-9c23-4ff2-bfaa-ada921f40bfb}} ) of N-rich field stars observed in APOGEE-2 toward M 54/Sgr using the smooth halo density relations presented in {{cite:3f81d1fafb1ef2da1c5ec17b4ee3fb1d9dfdb18b}}, and by adopting the same Monte Carlo implementation of the Von Neumann rejection technique {{cite:d95c37f766f0f843a1dcee88c22d33b59807fc2e}} as in Eq. 7 of {{cite:ee5106c34bf4a82e08960652483e6178edf73536}}. We find the expected number of observed N-rich halo stars beyond {{formula:8b91b55c-4260-45b6-80a2-4860d331b56f}} kpc over a sky area of 1.5 degree radius centred on M 54, with both astrometric and kinematic properties like those of M 54, to be {{formula:e1631c89-58a0-48c2-ade4-26279b665487}} (from 1000 Monte Carlo realisations). This yields a very low probability that the newly identified extra-tidal N-rich star associated with M 54 is due to random fluctuations in the field. Furthermore, we also use the Besançon Galactic model {{cite:813fa795160ddf017dacf73ccddb0d5b8bee3f99}} and the GravPot16 model {{cite:b17dc803d1ee30d6aec9515ba6bbb65ca7cba677}} to explore the expectations for a "default" Milky Way along the RVs toward the Sgr+M4 surrounding field beyond {{formula:9edb7a6c-aeea-488d-b320-252a66fcf3e6}} 15 kpc. The "all" sample is dominated by halo kinematics, with a negligible contribution from the thin and thick disk beyond {{formula:4deefd14-c1d4-4736-a791-bfc3628e8669}} km s{{formula:52945bbd-8435-4a7e-ab04-a4c32b9b013c}} . Thus, our simulated Milky Way sample acts to guide us in {{formula:ef4552c2-df3d-498c-aa21-9664864df3c8}} space, confirming that the kinematics of the newly identified extra-tidal N-rich star differ from the disk population, with a practically negligible contribution from the expected halo.
| r | 5d50eb698eaab33404d0f8256a418a8a |
Sentiment analysis or opinion mining is the computational study of people's opinions, appraisals, attitudes, and emotions toward entities, individuals, issues, events, topics and their attributes {{cite:3d3a6a8405ef1b89ac513209cef7f4e0a405c435}}. The task of sentiment analysis is technically challenging and practically very useful. For example, businesses always want to find public or consumer opinions about their products and services. Consumers also need a sounding board rather than thinking alone while making decisions. With the development of the Internet, opinionated texts from social media (e.g., reviews, blogs and micro-blogs) are used frequently for decision making, which makes automated sentiment analysis techniques more and more important. Among the tasks of sentiment analysis, the key one is to classify the polarity of given texts. Much work has been done in recent years to improve English sentiment polarity classification, falling into two categories. One is called "machine learning", first proposed to determine whether a review is positive or negative using three machine learning methods: NB, ME and SVM {{cite:dc27a79744080165d1f899069e307d15adfcda2b}}. The other category, called "semantic orientation", classifies words by giving each word a score that evaluates the strength of its sentiment; an overall score is then calculated to assign the review to a specific class {{cite:9855c2b076f0d534bfb3d5b1344e790803ad6832}}.
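A minimal example of the "machine learning" category is sketched below with bag-of-words features and a Naive Bayes polarity classifier; the toy reviews are illustrative, and a real study would use a labelled review corpus:

```python
# Bag-of-words + Naive Bayes polarity classification on toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["great product, works perfectly", "terrible, waste of money",
           "absolutely love it", "worst purchase ever"]
labels = ["pos", "neg", "pos", "neg"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(reviews, labels)
print(clf.predict(["love this, great value", "awful product"]))
```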
| i | c24dc8c534b0241f22d1a99a9537855c |
LIME. Ribeiro et al. {{cite:37447985b29b72001c0b9b1909993c373bb6cc9b}} propose Local Interpretable Model-agnostic Explanations (LIME), which identify interpretable data representations faithful to a given black-box classifier. The explanation is defined by the following optimisation problem:
{{formula:ee1921b4-bee3-4e59-93ba-12c8bd6ea30b}}
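A usage sketch of LIME's reference implementation for tabular data is given below; the wrapped classifier, dataset, and feature names are illustrative stand-ins:

```python
# Explain a single prediction of a black-box model with the `lime` package.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=["sl", "sw", "pl", "pw"],
    class_names=["setosa", "versicolor", "virginica"])
exp = explainer.explain_instance(X[0], black_box.predict_proba, num_features=2)
print(exp.as_list())     # local feature attributions for this prediction
```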
| m | b7d6514ab410a812663af6faeb8ece02 |
Detections Feature Representation.
In all other experiments, we have used RoI pooling to obtain feature representations of our detections. As presented in Sec , we also explore using an MLP projection.
Inspired by popular MLPs {{cite:3c83bc2a26c04f5756fe620b8147b1f2535fa9d3}}, we compare the performance of MLPs and ConvNets in extracting the visual features of our detections. Tab. REF shows that the MLP has better modelling capability than ConvNets only for verb recognition. However, ConvNets are more advantageous for noun recognition, where their accuracy exceeds that of MLPs by about 2.7%. Hence, ConvNets are more accurate for action recognition.
All previous and subsequent results thus use 3D ConvNet RoI pooling to represent the visual features of hand and object detections.
{{table:78220171-9b97-4e11-b49c-8923e84d65b6}} | r | aa748b19cacebaebd257549687ffbcb0 |
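For reference, a minimal sketch of RoI pooling of detection features using torchvision's roi_align on a 2D feature map; the backbone here is random and two-dimensional for brevity, whereas the paper pools from a 3D ConvNet, and all shapes are illustrative.

```python
import torch
from torchvision.ops import roi_align

feats = torch.randn(1, 256, 28, 28)  # stand-in backbone feature map (N, C, H, W)

# Two detection boxes in image coordinates (x1, y1, x2, y2) of a 224x224 frame.
boxes = [torch.tensor([[ 30.,  40., 120., 160.],
                       [100.,  90., 200., 210.]])]

# spatial_scale maps image coordinates onto the feature grid (28 / 224).
pooled = roi_align(feats, boxes, output_size=(7, 7), spatial_scale=28 / 224)
print(pooled.shape)                  # torch.Size([2, 256, 7, 7])
det_feats = pooled.mean(dim=(2, 3))  # one 256-d vector per detection
```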
Acharya et al. 2018 {{cite:1803b396c677e8e37f16949a3cd3ef1c41eaa49b}} CNN Epilepsy Normal, Preictal, and Seizure From Andrzejak et al. {{cite:679219f58a4afe9b829b1e90079bfe77e3a88236}}, 10 Participants (5 Healthy and 5 Epileptic Patients)
| d | daa096ded9084ba743a2556715a62bcf |
Like {{formula:4ce54dea-a117-46fe-977b-1c0e273321de}} , {{formula:3477aaf4-2fd3-4b93-8961-14e0d03f73a7}} , {{formula:ee462805-2178-427d-bf09-ed15c548172e}} , {{formula:ea579133-2ad4-4975-0d0-5551db440f23}} , our algorithm relies on an inlier threshold {{formula:c0982ef6-284d-442f-99ae-79ae4ded4ee4}} . While a suitable setting of this hyper-parameter is known for Gaussian noise with given variance, in practice the distance threshold is usually chosen empirically, as Hartley & Zisserman note {{cite:419c1bc61688d7f237e674d5a32b2ff756bd3c26}}. While mis-specification of {{formula:8e60428b-f80c-4fb3-a46c-ddfbd81d9cac}} could cause the registration to fail, heuristics have been developed to alleviate the sensitivity to such mis-specification; see {{cite:f91293abec9cf18c82cae1bcc3b423f8b34b6268}}, {{cite:7d64f86876ee317a95e31c56b3b3a5fbd2a79cbc}}, {{cite:8bfc91c72cb783e20f46a568b4b56699770e4423}}, {{cite:a042ea8649cb8e5229794c941350882bff600d3b}}. Finally, in our experience it works well to set {{formula:525f4759-8d83-4a80-aa2c-1c8cfdc6e150}} based on the scale of the point clouds.
| d | 3b211442f6c25ef31ec0853b13d17ae7 |
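A sketch of the scale-based heuristic, assuming the threshold is set to a small fraction of the point cloud's bounding-box diagonal; the 1% fraction is an illustrative default, not a value prescribed by the cited works.

```python
import numpy as np

def inlier_threshold(points: np.ndarray, fraction: float = 0.01) -> float:
    # Heuristic: tie the inlier threshold to the cloud's overall extent,
    # so the same fraction transfers across differently scaled datasets.
    diag = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
    return fraction * diag

cloud = np.random.rand(1000, 3) * 50.0  # toy cloud spanning ~50 units
print(inlier_threshold(cloud))          # threshold scales with the data
```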
A major interest in neuroscience is understanding how macroscopic brain functions, such as cognition and memory, are encoded at the microscale of neurons and their topological connectivities. One of the significant developments in this direction was the Wilson-Cowan (WC) model, describing the averaged behavior of large populations of simple excitatory and inhibitory neurons in terms of a set of coupled, mesoscale differential equations {{cite:dbc1af98a3282bc253c786794dba60b2b9ee3ad7}}, {{cite:7e191c49638c2b6708fb36ca07c70bc3643a1466}}, {{cite:50aed227df9a22f8abe151aeab12dc059fe7a01c}}. With only a few physical parameters, WC provided one of the first mechanisms for simple (single-frequency) oscillations across the brain, such as the hyper-synchronized dynamics observed during epileptic seizures {{cite:50aed227df9a22f8abe151aeab12dc059fe7a01c}}, {{cite:16e619aab85d2d57951d56f4db55ca676bd23fd6}}. More recently, generalized WC-like models have been used to describe heterogeneous populations of neurons ranging in scale from single regions to networks of activities across the whole brain {{cite:41fe77db79ad5c62674ca7c896418a2cff3e8cff}}, {{cite:50aed227df9a22f8abe151aeab12dc059fe7a01c}}, {{cite:65f543611be45f8ca220bfbeef4a8a6134459419}}, {{cite:1abe27894deee8fa73e7a6204d2e8ac45501d150}}, {{cite:0af6c1585121c9a95adcbf9420af5b8c968cbe53}}, {{cite:ddc4175c02f65e86398f1480f17a61371bc898dc}}, {{cite:67f9bd02ddb9f03f38c9d9e688c7d870e356d034}}.
| i | edaefa0968c2afd3d27c3b022151ef91 |
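As a concrete reference point, a minimal integration of the two-population Wilson-Cowan equations with refractory factors; the coupling constants and sigmoid parameters below are commonly quoted limit-cycle example values and are used here purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def S(x, a, theta):
    # Sigmoid response, shifted so that S(0) = 0 as in the original model.
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))

def wilson_cowan(t, y, c1=16, c2=12, c3=15, c4=3, P=1.25, Q=0.0):
    # Mean activities of an excitatory (E) and an inhibitory (I) population,
    # with (1 - E) and (1 - I) refractory factors.
    E, I = y
    dE = -E + (1 - E) * S(c1 * E - c2 * I + P, a=1.3, theta=4.0)
    dI = -I + (1 - I) * S(c3 * E - c4 * I + Q, a=2.0, theta=3.7)
    return [dE, dI]

sol = solve_ivp(wilson_cowan, (0, 100), [0.25, 0.1], max_step=0.1)
E, I = sol.y  # for these parameters the populations settle onto a limit cycle
```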
To show the effectiveness of the proposed method, we compare our results with the most advanced and closely related methodologies for fault diagnosis of rotating machines. As discussed in the literature review, the selected algorithms are the deep neural network (DNN) {{cite:2eb20fba6b73d5e2860409509af2527a50e9ade9}}, the domain adversarial neural network (DANN) {{cite:e2eb31493b1157e9f471ee8acf1e725a747a7ada}}, deep transfer learning (DTL) with classification loss and MMD term minimization {{cite:1bf819966d72b5eb7494f31d5e4a2231c7e3dcbd}}, and Deep Model Based Domain Adaptation {{cite:5b0f4ac8bae2d3577d63f862031a3f7af9c84787}}. All these models are trained on the same dataset. The architecture of the DNN and the DTL is kept the same as that of the new model (student net): {{formula:409f2b29-f320-424c-8dc3-602c3b2ababf}} .
| r | 268aa326d73b6b5079695a568d46cb9d |
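For context, a minimal sketch of the MMD term that such DTL methods minimize alongside the classification loss, assuming an RBF kernel; the feature dimensionality and bandwidth are illustrative.

```python
import numpy as np

def gaussian_mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel:
    #   MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    def k(A, B):
        d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

src = np.random.randn(128, 64)        # source-domain features
tgt = np.random.randn(128, 64) + 0.5  # shifted target-domain features
print(gaussian_mmd2(src, tgt))        # larger value -> larger domain gap
```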
Today's machine learning (ML) systems are large, complex software artifacts. Due to the ever-increasing system scale and training cost, it becomes not only tempting but also necessary to re-use pre-trained models in building ML systems. It was estimated that as of 2016, over 13.7% of ML-related repositories on GitHub use at least one pre-trained model{{cite:1ec4f52f1ffdbf324f41f958be520f3af984442c}}. On the upside, this “plug-and-play” paradigm significantly simplifies the development cycles of ML systems{{cite:deaa9a2203c215655b7e09a5726524e147512726}}. On the downside, as most pre-trained models are contributed by untrusted third parties (e.g., ModelZoo{{cite:34badf7f3190443b10fa097be4365c9d163899d0}}), their lack of standardization or regulation entails profound security implications.
| i | 01aa5e9e9a48a70e8e0830d6d8793c3c |
In this work, we have developed a multiscale model reduction (MMR) approach to improve linear-quadratic dynamical systems, derived from POD-Galerkin projection of the Navier-Stokes equations, with the addition of systematically computed cubic closure terms.
These cubic closure terms are derived through an adaptation and application of a multiscale stochastic averaging method that originated in singular perturbation theories of Markov processes {{cite:dd5db95590f7cc336e6fefa24e1dc53e885803af}}, {{cite:60802442cd2414755b55c44a125242a6d3e4e76d}}, {{cite:f4e919e70c16facfe94b289cb0ef323c695b2b7a}}, {{cite:d7f1c0ca9be6c2ca4c2223889912a6ec41b0a3ab}}, {{cite:f5d58ae873653eb13bab8ec69811f73fc8a9d715}}.
Whereas the standard truncation of the Galerkin system disregards the influence of unresolved variables, the proposed multiscale model reduction method accounts for their effect in an average sense, via averaging with the stochastic Koopman operator.
In particular, this approach is able to model nonlinear interactions between resolved and unresolved variables, capturing key mechanisms such as mean flow deformation and the energy cascade.
The closed model includes cubic terms, taking the form of generalized Stuart-Landau equations that often act as coupled nonlinear oscillators.
| d | 7ee35d8d97ee0968ff6b6a0194f5ae3d |
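To illustrate the kind of dynamics the cubic closure produces, a minimal integration of a single Stuart-Landau oscillator; the actual reduced-order models couple many such oscillators, and all parameter values here are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def stuart_landau(t, y, mu=0.1, omega=1.0, gamma=0.5):
    # dz/dt = (mu + i*omega) z - (1 + i*gamma) |z|^2 z : the cubic term
    # saturates the linear instability, much as the averaged unresolved
    # scales do in the closed Galerkin model.
    z = y[0] + 1j * y[1]
    dz = (mu + 1j * omega) * z - (1 + 1j * gamma) * abs(z) ** 2 * z
    return [dz.real, dz.imag]

sol = solve_ivp(stuart_landau, (0, 200), [0.01, 0.0], max_step=0.05)
amplitude = np.hypot(*sol.y)  # grows, then saturates near sqrt(mu) ~ 0.316
```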
where {{formula:fde0e01d-db35-41b5-af31-28ccc24154c3}} .
As a result, the box-counting dimension of the graph of {{formula:509c4850-7507-43ec-84b3-33a40d4fd886}} follows from Example 11.4 in {{cite:6325584df621950edbad3a43b7c037d29d958b7d}}. It is 1 if {{formula:795263a6-1b53-4929-9c77-0c95e80d82ee}} , and {{formula:17e3cedc-6462-4396-96a5-63540f40f713}} if {{formula:84a7a64d-81af-4a76-8aca-637616f02b31}} .
| i | b86d83fe1d2f2e9175c439f805b9c133 |
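A crude numerical counterpart to this statement: the box-counting estimator below, applied to a Weierstrass-type function, recovers a graph dimension of roughly 2 - alpha; the sampling resolution, scale range, and series truncation are illustrative choices, and the estimate is only approximate.

```python
import numpy as np

def box_dimension(f, n=2**15, scales=(2**-4, 2**-6, 2**-8, 2**-9)):
    # Count occupied eps-boxes covering the graph of f on [0, 1] and fit
    # the scaling law N(eps) ~ eps**(-d).
    x = np.linspace(0, 1, n)
    y = f(x)
    counts = [len(set(zip((x // eps).astype(int), (y // eps).astype(int))))
              for eps in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope

# Weierstrass-type sum with alpha = 0.5: expected graph dimension 2 - 0.5.
alpha, b = 0.5, 3.0
W = lambda x: sum(b**(-alpha * k) * np.cos(b**k * np.pi * x) for k in range(20))
print(box_dimension(W))  # roughly 1.5
```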
Figure REF displays examples of activation maps {{cite:d92f6c1f19aeb1431537ea662885489da9a34155}}, {{cite:820b53e5ddef305be70698188b79e58724df9bd0}} for SPCL and our proposed method, obtained from three bounding box images featuring different individuals in the target domain (ILIDS-VID dataset). The activation maps indicate the regions of interest of the backbone CNN when extracting feature representations. The figure shows that our method provides better localization of the person. Compared to the baseline SPCL, less background information is captured, allowing the model to focus on strong identity-based features.
{{figure:62c4324a-04b6-4431-a6e2-6b02591e6bc5}} | r | 43a20b0a36167f31a8cff96c11cf73b6 |
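For reference, one simple way to produce such activation maps from backbone features (the channel-wise L2 norm, upsampled to the input size); the cited works may compute their maps differently, and the feature shapes here are illustrative.

```python
import torch
import torch.nn.functional as F

def activation_map(feats: torch.Tensor, out_size=(256, 128)) -> torch.Tensor:
    # Collapse channels with an L2 norm, normalise to [0, 1], and upsample
    # to the bounding-box image size for overlay.
    amap = feats.norm(p=2, dim=1, keepdim=True)            # (N, 1, H, W)
    amap = (amap - amap.amin()) / (amap.amax() - amap.amin() + 1e-8)
    return F.interpolate(amap, size=out_size, mode="bilinear",
                         align_corners=False)

feats = torch.randn(1, 2048, 16, 8)  # e.g. final conv features of a ResNet-50
heatmap = activation_map(feats)      # overlay on the input to inspect focus
```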
In the present study we have concentrated on the local aspects of Einstein dark energy theory. From a physical point of view, our main assumption is that the vector field {{formula:f61dc8b6-5b16-40df-b8fb-c5491e62f5b0}} , which in Einstein's
theory can be interpreted cosmologically as vector-type dark energy, also plays an important role at the stellar level.
To obtain the field equations of the model, we have adopted a {{formula:dc1645da-162c-4660-9e3b-c9ee4050afeb}} type Lagrangian {{cite:53a9cd589b876decbd671095b79270a61588200e}}, which contains a linear combination of the Ricci
scalar and the trace of the energy-momentum tensor. Moreover, we construct
the self-interacting dark energy tensor field {{formula:200ab4d5-dfc4-49d0-be7d-b3ced9c597b7}} in terms of the massive vector potential {{formula:c72cfa28-9e6a-435e-b705-35960bcd521d}} . A coupling between the matter current and the vector potential could also be assumed, but in the present approach we have neglected this term.
| d | afb9b16246e11574e461e5b99cea211c |
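For concreteness, a minimal choice widely used in the f(R,T)-type literature is sketched below in LaTeX; this linear form is an assumption for illustration and is not necessarily the exact Lagrangian adopted in the cited work.

```latex
% Action with a Lagrangian linear in the Ricci scalar R and in the trace T
% of the energy-momentum tensor; \lambda is a coupling constant (G = c = 1).
S = \frac{1}{16\pi}\int f(R,T)\,\sqrt{-g}\,\mathrm{d}^4x
    + \int L_m\,\sqrt{-g}\,\mathrm{d}^4x,
\qquad f(R,T) = R + 2\lambda T .
```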
Reinforcement learning (RL) is another approach for solving the credit assignment problem. Node perturbation algorithms correlate noise injections with scalar reward signals in order to update network weights {{cite:1fd34f5677d0ab1c56d70274fdffda959cf8f5a7}}, {{cite:f8012e1540d3908c66a97a087f3536efdc05bf10}}, {{cite:56be665d1ccd6da39834ad7bcbf35944cdddf620}}. Such algorithms avoid the weight transport problem, as the update rule does not explicitly require a credit assignment matrix mapping the vector error back to the recurrent weights. While the weight updates for algorithms in this family tend to be noisy, the policy-gradient theorem guarantees that they follow the true gradient of the objective function in expectation (Appendix , {{cite:1fd34f5677d0ab1c56d70274fdffda959cf8f5a7}}, {{cite:77e54001ca9a7e7b3358d4cdc75f821663e28640}}).
| i | c6f2aa64d9c4c58df4a03c7a59bb706a |
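A minimal sketch of a node-perturbation update on a single linear layer, showing how correlating injected noise with a scalar reward yields a gradient-following weight change without any credit-assignment matrix; the layer sizes, learning rate, and noise scale are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def node_perturbation_step(W, x, target, lr=0.01, sigma=0.1):
    # Reward is the negative squared error of the layer output y = W @ x.
    y = W @ x
    reward = -np.sum((y - target) ** 2)
    xi = sigma * rng.standard_normal(y.shape)       # noise injected at nodes
    reward_noisy = -np.sum((y + xi - target) ** 2)
    # Correlate the reward change with the noise and presynaptic activity;
    # in expectation this update follows the true reward gradient.
    W += lr * (reward_noisy - reward) / sigma**2 * np.outer(xi, x)
    return W

W = 0.1 * rng.standard_normal((3, 5))
x, target = rng.standard_normal(5), np.array([1.0, -1.0, 0.5])
for _ in range(1000):
    W = node_perturbation_step(W, x, target)
print(W @ x)  # approaches the target on average
```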
This work proposes a fully learned glioma growth model, first introduced in {{cite:815e7e49be61c1ee22c368328da0806f25ef8234}} as an alternative to commonly used biological diffusion models {{cite:10e48d82890905b7b3f1055cf2cbf95dfc47c57e}}, {{cite:88fcac354c1ec77545e843d735ccf51ba61aa9fe}}. While successful, the approach from {{cite:815e7e49be61c1ee22c368328da0806f25ef8234}} has a number of practical limitations: it requires a fixed number of context observations, it requires a fixed time interval between consecutive observations, and it can only make a prediction one time-interval step into the future. Our proposed model overcomes all of those limitations and can be conditioned on arbitrarily many context observations on a continuous time axis. From these context observations, our model predicts a distribution over growth trajectories, where each sample is temporally consistent and can be evaluated at any desired continuous-valued time. Our model also significantly outperforms the one from {{cite:815e7e49be61c1ee22c368328da0806f25ef8234}} in several metrics, which we demonstrate on a dataset ten times larger than the one used in {{cite:815e7e49be61c1ee22c368328da0806f25ef8234}}. Our model's main limitation is the high GPU memory requirement of the spatio-temporal attention mechanism we introduce. This is a problem many attention and transformer architectures suffer from, and ways to make them less resource intensive are actively being researched {{cite:dbe1671aa6a21353b1e34041ac3b885953a92460}}, {{cite:a50f4eea02fe890d67111167eb8ae0733c8bc3d0}}. It is also the reason why we performed our experiments on two-dimensional slices rather than full 3D volumes. As a result, one should be careful not to draw any premature conclusions with respect to a possible clinical application. While our results look promising, the value of our model for clinical purposes, for example in radiation therapy, must be validated extensively. We leave such efforts for future work. A comparison with diffusion-based models, particularly in a radiation therapy context {{cite:b823fb16c2f7b46ee7793981a7c4021608d9d877}}, {{cite:43d6e2f7d48b3cbdde8fb1fa1e0304bcfb79bd35}}, is another interesting opportunity for future work. Building on the Neural Process framework {{cite:536a08f2470224fe66c4ec1a6bc58e6665077b81}}, {{cite:27383991678a9dc0fae6b67991d3be38edbe22eb}}, our proposed approach constitutes an efficient Neural Process variant for image time series and is, to the best of our knowledge, only the second time Neural Processes have been demonstrated on real data in the image domain {{cite:01b9ca9e55ca91b8d96769afadf43a200259d41a}}. We believe it can prove useful both for other types of tumor growth and for any other kind of stochastic time series with image data.
| d | 862a240bb3ece4e968c95de53c7aa5ee |
The object bounding box annotations are taken from {{cite:d17acc15b745f4c4cc3c3e9cb33ff4c25197a452}}, and we use the weighted average precision as the metric for success, as is common when reporting results for this dataset. In Tab REF the average precision across the something-something subset dataset for the specific actions is shown, together with the retrained baseline CNN and Temporal Relational Networks (TRM); note, however, that TRM was trained on the complete dataset. We also show the performance of the fusion of our work with the baseline CNN approach of {{cite:07ae01524cce46846388400f5d26db80bae09247}}, Fused:Proposed+ {{cite:07ae01524cce46846388400f5d26db80bae09247}}, and fusion with the work of CNN+bbox {{cite:d17acc15b745f4c4cc3c3e9cb33ff4c25197a452}}, Fused:Proposed+ {{cite:d17acc15b745f4c4cc3c3e9cb33ff4c25197a452}}.
{{table:18445875-b980-441c-a318-1b2135515b2a}} | r | baac6736c1c19ee74c728739261763bf |
We shall also need the class of functions of vanishing mean oscillation, as described, for example, in {{cite:4409c9a8675a897fb91af8b9067dd93fd6bd13d7}}.
| r | d7cdaec644ec70c63939ac29dc3747e7 |